Article source
Article series
Apache Hadoop Cluster Offline Installation and Deployment (1) — Installing Hadoop (HDFS, YARN, MR): http://www.cnblogs.com/pojishou/p/6366542.html
Apache Hadoop Cluster Offline Installation and Deployment (2) — Installing Spark-2.1.0 on YARN: http://www.cnblogs.com/pojishou/p/6366570.html
Apache Hadoop Cluster Offline Installation and Deployment (3) — Installing HBase: http://www.cnblogs.com/pojishou/p/6366806.html
Part 0: Preparing the installation files
Scala:http://downloads.lightbend.com/scala/2.11.8/scala-2.11.8.tgz
Spark:http://www.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.7.tgz
Part 1: Installing Scala
1. Unpack
tar -zxvf scala-2.11.8.tgz -C /opt/program/
ln -s /opt/program/scala-2.11.8 /opt/scala
2. Set the environment variables
vi /etc/profile
export SCALA_HOME=/opt/scala
export PATH=$SCALA_HOME/bin:$JAVA_HOME/bin:$PATH
3. Apply the changes
source /etc/profile
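Before moving on, it is worth confirming that the new PATH entry actually resolves. A minimal sanity check, assuming the steps above completed without errors:

```shell
# Both commands below assume /opt/scala/bin is now on PATH.
which scala      # should resolve to /opt/scala/bin/scala
scala -version   # should report Scala 2.11.8
```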
4. Copy to the other nodes with scp
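The post does not give the scp commands; a sketch of this step, using hypothetical worker hostnames (node02, node03) that you should replace with your own, might look like:

```shell
# Push the Scala install, recreate the symlink, and sync /etc/profile
# on each worker node (assumes passwordless SSH as root is set up).
for host in node02 node03; do
  scp -r /opt/program/scala-2.11.8 ${host}:/opt/program/
  ssh ${host} "ln -s /opt/program/scala-2.11.8 /opt/scala"
  scp /etc/profile ${host}:/etc/profile
done
```

Remember to run `source /etc/profile` on each node afterwards.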
Part 2: Installing Spark
1. Unpack
tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz -C /opt/program/
ln -s /opt/program/spark-2.1.0-bin-hadoop2.7 /opt/spark
2. Edit the configuration file
vi /opt/spark/conf/spark-env.sh
export JAVA_HOME=/opt/java
export SCALA_HOME=/opt/scala
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
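Note that a fresh Spark distribution ships only a template for this file, so if conf/spark-env.sh does not exist yet, create it from the template first:

```shell
cd /opt/spark/conf
cp spark-env.sh.template spark-env.sh
```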
3. Copy to the other nodes with scp
4. Test
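As with Scala, the exact commands are not given; a sketch with the same hypothetical hostnames (node02, node03):

```shell
# Push the Spark install and recreate the symlink on each worker node.
for host in node02 node03; do
  scp -r /opt/program/spark-2.1.0-bin-hadoop2.7 ${host}:/opt/program/
  ssh ${host} "ln -s /opt/program/spark-2.1.0-bin-hadoop2.7 /opt/spark"
done
```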
/opt/hadoop/sbin/start-all.sh
/opt/spark/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  --driver-memory 1g \
  --executor-memory 1g \
  --executor-cores 2 \
  /opt/spark/examples/jars/spark-examples*.jar \
  10
If the job prints an estimate of Pi, everything is working.
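In client mode the result appears in the driver's stdout, in a line of the form "Pi is roughly 3.14...". You can also confirm on the YARN side that the application ran to completion:

```shell
# List applications that have finished on YARN; the SparkPi run
# should appear here with a SUCCEEDED final status.
yarn application -list -appStates FINISHED
```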
With Java 1.8, Spark fails at runtime under the default configuration. The workaround is to switch to Java 1.7, or see: http://www.cnblogs.com/pojishou/p/6358588.html
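One commonly reported cause of this failure is YARN killing containers for exceeding virtual memory limits, which Java 8's larger virtual address space can trip. A frequently used workaround (an assumption here; the linked post may describe a different fix) is to disable the check in yarn-site.xml on every node and restart YARN:

```xml
<!-- yarn-site.xml: hypothetical workaround, disables YARN's
     virtual-memory check so Java 8 containers are not killed. -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```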