CentOS 6.4 + Hadoop 2.2.0: Spark Pseudo-Distributed Installation (2)

Use jps to check whether the daemons started successfully:
[hadoop@localhost sbin]$ jps
4706 Jps
3692 DataNode
3876 SecondaryNameNode
4637 Worker
4137 NodeManager
4517 Master
4026 ResourceManager
3587 NameNode

Both a Master and a Worker process are present, which means Spark started successfully.
You can check the cluster status through the Spark master web UI on port 8080 (by default http://localhost:8080/).
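The jps check above can also be scripted. A minimal sketch in plain sh follows; the daemon list and the hard-coded sample output are illustrative assumptions taken from the transcript above, and on a live node you would replace the sample with `jps_out="$(jps)"`:

```shell
#!/bin/sh
# Sketch: confirm every expected daemon name shows up in `jps` output.
# The sample output from the transcript is hard-coded here for illustration;
# on a real node use: jps_out="$(jps)"
jps_out="4706 Jps
3692 DataNode
3876 SecondaryNameNode
4637 Worker
4137 NodeManager
4517 Master
4026 ResourceManager
3587 NameNode"

missing=0
for p in NameNode DataNode SecondaryNameNode ResourceManager NodeManager Master Worker; do
  # -w matches whole words, so "NameNode" does not match inside "SecondaryNameNode"
  echo "$jps_out" | grep -qw "$p" || { echo "MISSING: $p"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all expected daemons are running"
```

If any daemon is missing, the script names it, which narrows down which start script (Hadoop's or Spark's) needs to be re-run.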

4 Running the examples bundled with Spark
First, change into the bin directory under the Spark installation:
[hadoop@localhost sbin]$ ll ../bin/
total 56
-rw-rw-r--. 1 hadoop hadoop 2601 Mar 27 13:44 compute-classpath.cmd
-rwxrwxr-x. 1 hadoop hadoop 3330 Mar 27 13:44 compute-classpath.sh
-rwxrwxr-x. 1 hadoop hadoop 2070 Mar 27 13:44 pyspark
-rw-rw-r--. 1 hadoop hadoop 1827 Mar 27 13:44 pyspark2.cmd
-rw-rw-r--. 1 hadoop hadoop 1000 Mar 27 13:44 pyspark.cmd
-rwxrwxr-x. 1 hadoop hadoop 3055 Mar 27 13:44 run-example
-rw-rw-r--. 1 hadoop hadoop 2046 Mar 27 13:44 run-example2.cmd
-rw-rw-r--. 1 hadoop hadoop 1012 Mar 27 13:44 run-example.cmd
-rwxrwxr-x. 1 hadoop hadoop 5151 Mar 27 13:44 spark-class
-rwxrwxr-x. 1 hadoop hadoop 3212 Mar 27 13:44 spark-class2.cmd
-rw-rw-r--. 1 hadoop hadoop 1010 Mar 27 13:44 spark-class.cmd
-rwxrwxr-x. 1 hadoop hadoop 3184 Mar 27 13:44 spark-shell
-rwxrwxr-x. 1 hadoop hadoop  941 Mar 27 13:44 spark-shell.cmd

From the bin directory, run the bundled logistic-regression and Pi examples against the standalone master:

run-example org.apache.spark.examples.SparkLR spark://localhost:7077

run-example org.apache.spark.examples.SparkPi spark://localhost:7077
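SparkPi estimates π by sampling random points in the unit square and counting how many fall inside the unit circle; the cluster only parallelizes the sampling. The same math, stripped of Spark entirely, can be sketched with plain awk (the sample count `n` and the seed are arbitrary choices for illustration):

```shell
# Monte Carlo estimate of pi -- the same computation SparkPi distributes.
# Sample n random points in [-1,1]^2 and count those inside the unit circle.
pi=$(awk 'BEGIN {
  srand(42)                       # fixed seed so the run is repeatable
  n = 200000; inside = 0
  for (i = 0; i < n; i++) {
    x = 2 * rand() - 1
    y = 2 * rand() - 1
    if (x * x + y * y <= 1) inside++
  }
  printf "%.4f", 4 * inside / n   # (points in circle / total) * 4 -> pi
}')
echo "Pi is roughly $pi"
```

SparkPi does exactly this per partition and sums the counts across workers, which is why its accuracy improves with the slice count you pass it.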
