Hadoop + Spark + MPP OLAP stacks are showing up in more and more environments, and the first step in learning Spark is standing up a test cluster.
I. Prerequisites
The practical minimum for a Spark (or Hadoop) cluster is three machines:
192.168.206.27 master
192.168.206.33 slave1
192.168.203.19 slave2
1. Set the hostnames
vi /etc/sysconfig/network
vi /etc/hosts
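For reference, here is what those two files can look like, using the addresses above (HOSTNAME differs per node; /etc/hosts is identical on all three):
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
# /etc/hosts
192.168.206.27 master
192.168.206.33 slave1
192.168.203.19 slave2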
2. Make sure the locale/encoding is consistent
/etc/sysconfig/i18n
A quick aside: I usually set this to zh_CN.UTF-8 because Chinese messages are easier to scan, but many people prefer en_US.UTF-8. If you need Chinese language support, install:
yum groupinstall chinese-support
yum install fonts-chinese.noarch
yum install m17n-db-common-cjk
yum install m17n-db-chinese
vi /etc/sysconfig/i18n
LANG="zh_CN.UTF-8"
SYSFONT="latarcyrheb-sun16"
LC_ALL="zh_CN.UTF-8"
vi /etc/profile
export LC_ALL="zh_CN.UTF-8"
#If the change doesn't take effect, try a reboot; if it still doesn't, run: localedef -v -c -i zh_CN -f UTF-8 zh_CN.UTF-8
#I once hit an odd case that was only fixed by: yum -y install fontforge
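To confirm the locale actually took effect, check that every LC_* entry reports the value you set:
locale   # all entries should show zh_CN.UTF-8 (or en_US.UTF-8 if you chose that)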
3. Keep clocks in sync with NTP (crontab -e)
I'm in the habit of using ntpdate:
0 */2 * * * /usr/sbin/ntpdate asia.pool.ntp.org && /sbin/hwclock --systohc   # sync every two hours
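One more prerequisite: the scp commands and the start-dfs.sh/start-yarn.sh scripts used below assume master can ssh to the slaves without a password. A minimal sketch of setting that up (assuming ssh-copy-id is available; otherwise append id_rsa.pub to each slave's ~/.ssh/authorized_keys by hand):
ssh-keygen -t rsa              # accept the defaults, empty passphrase
ssh-copy-id root@slave1
ssh-copy-id root@slave2
ssh root@slave1 hostname       # should print slave1 with no password prompt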
II. Installing the Hadoop Cluster
1. Install the JDK, on all three machines
#Upload the JDK rpm to /usr/local
cd /usr/local
rpm -ivh jdk-7u80-linux-x64.rpm
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_80
export PATH=$PATH:$JAVA_HOME/bin:$HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source /etc/profile
java -version
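The first line of the output should match the rpm version installed above:
# java version "1.7.0_80"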
#ssh root@slave1 (and slave2) and repeat the same steps there
2. Install Scala
For Spark 1.6, install the 2.11 release.
#http://www.scala-lang.org/download/2.11.8.html
mkdir -p /home/scala
tar -xzvf scala-2.11.8.tgz -C /home/scala/
vi /etc/profile
export SCALA_HOME=/home/scala/scala-2.11.8
PATH=$PATH:$HOME/bin:$SCALA_HOME/bin
export PATH
source /etc/profile
scala -version
scp -r /home/scala root@slave1:/home
scp -r /home/scala root@slave2:/home
#then ssh to each slave and set /etc/profile as above
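A quick way to verify both slaves without an interactive login (assumes the profile edits on the slaves are already in place):
ssh root@slave1 'source /etc/profile; scala -version'
ssh root@slave2 'source /etc/profile; scala -version'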
3. Install Hadoop
mkdir -p /home/hadoop
tar xzvf hadoop-2.7.2.tar.gz -C /home/hadoop/
cd /home/hadoop/hadoop-2.7.2/etc/hadoop
#3.1 Set JAVA_HOME in hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_80
#3.2 Set JAVA_HOME in yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_80
#3.3 List the slave nodes (IP or hostname) in slaves
slave1
slave2
#3.4 Edit core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000/</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-2.7.2/tmp</value>
  </property>
</configuration>
#3.5 Edit hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-2.7.2/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-2.7.2/dfs/data</value>
  </property>
  <property>
    <!-- only two DataNodes (slave1, slave2); a factor of 3 would leave every block under-replicated -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
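The tmp, name, and data directories referenced in these configs do not exist yet. HDFS creates most of them on format/startup, but making them up front avoids permission surprises (paths taken from the settings above; create the data directory on the slaves too after distributing in step 4):
mkdir -p /home/hadoop/hadoop-2.7.2/tmp /home/hadoop/hadoop-2.7.2/dfs/name /home/hadoop/hadoop-2.7.2/dfs/data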
#3.6 Edit mapred-site.xml
cp mapred-site.xml.template mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
#3.7 Edit yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
4. Distribute and verify
#4.1 Distribute the install to the slaves
scp -r hadoop/ root@slave1:/home
scp -r hadoop/ root@slave2:/home
#4.2 Format the NameNode
#add the environment variables first
vi /etc/profile
export HADOOP_HOME=/home/hadoop/hadoop-2.7.2
PATH=$PATH:$HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin
export PATH
# do the same on slave1 and slave2
cd $HADOOP_HOME
bin/hdfs namenode -format
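On success the output should contain a line like the following (the directory comes from dfs.namenode.name.dir above):
# INFO common.Storage: Storage directory /home/hadoop/hadoop-2.7.2/dfs/name has been successfully formatted.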
#== If you hit the warning "Unable to load native-hadoop library":
cd lib/native
file libhadoop.so.1.0.0 # reports 64-bit, so this is not a bitness problem
#Download a rebuilt native bundle: wget http://dl.bintray.com/sequenceiq/sequenceiq-bin/hadoop-native-64-2.7.0.tar
tar xvf hadoop-native-64-2.7.0.tar -C $HADOOP_HOME/lib/native/
#Replacing Hadoop's bundled native libraries with this build clears the warning
#4.3 Start the cluster
sbin/start-dfs.sh
sbin/start-yarn.sh
jps
#on master you should see the following processes:
3407 SecondaryNameNode
3218 NameNode
3552 ResourceManager
3910 Jps
# on the slaves you should see:
2072 NodeManager
2213 Jps
1962 DataNode
#In a browser: http://master:8088 is the YARN web UI
#              http://master:50070 is the HDFS web UI
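Before installing Spark, a quick end-to-end smoke test of HDFS is worthwhile (the file and directory names here are arbitrary examples):
hdfs dfsadmin -report        # both DataNodes should be listed as live
hdfs dfs -mkdir -p /tmp/test
hdfs dfs -put /etc/hosts /tmp/test/
hdfs dfs -ls /tmp/test       # the uploaded file should appear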
5. Install Spark
#Note: pick the package pre-built for Hadoop 2.6 and later
mkdir -p /home/spark
tar xzvf spark-1.6.1-bin-hadoop2.6.tgz -C /home/spark/
cd /home/spark/
mv spark-1.6.1-bin-hadoop2.6/ spark-1.6.1/
cd /home/spark/spark-1.6.1/conf
#5.1 Configuration
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
export SCALA_HOME=/home/scala/scala-2.11.8
export JAVA_HOME=/usr/java/jdk1.7.0_80
export HADOOP_HOME=/home/hadoop/hadoop-2.7.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
SPARK_MASTER_IP=master                     # host the standalone master binds to
SPARK_LOCAL_DIRS=/home/spark/spark-1.6.1   # scratch space for shuffle/spill files
SPARK_DRIVER_MEMORY=1G
SPARK_WORKER_INSTANCES=1                   # one worker per slave
SPARK_WORKER_MEMORY=1024m                  # memory each worker can hand out to executors
cp slaves.template slaves
vi slaves
slave1
slave2
#5.2 Distribute
scp -r /home/spark/ root@slave1:/home
scp -r /home/spark/ root@slave2:/home
#5.3 Start Spark (run from /home/spark/spark-1.6.1, since Hadoop ships a start-all.sh of the same name)
cd /home/spark/spark-1.6.1
sbin/start-all.sh
#jps check: master gains a Master process
7949 Jps
7328 SecondaryNameNode
7805 Master
7137 NameNode
7475 ResourceManager
#each slave gains a Worker process
3132 DataNode
3759 Worker
3858 Jps
3231 NodeManager
#Spark web UI: http://master:8080
#5.4 For convenience, add Spark to /etc/profile as well
#the complete set of entries:
export JAVA_HOME=/usr/java/jdk1.7.0_80
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export SCALA_HOME=/home/scala/scala-2.11.8
export HADOOP_HOME=/home/hadoop/hadoop-2.7.2
export SPARK_HOME=/home/spark/spark-1.6.1
PATH=$PATH:$HOME/bin:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin
export PATH
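After sourcing the profile, every tool should resolve from any directory:
source /etc/profile
hadoop version
spark-submit --version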
#5.5 Run the examples
#local mode with two threads (run-example picks up the master from the MASTER environment variable)
MASTER=local[2] ./bin/run-example SparkPi 10
#run on the Spark standalone cluster
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://master:7077 \
lib/spark-examples*.jar \
100
#run on YARN in yarn-cluster mode (--master can also be yarn-client)
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-cluster \
lib/spark-examples*.jar \
10
#Note: Spark on YARN supports two deploy modes, yarn-cluster and yarn-client; the differences are worth reading up on separately. Broadly, yarn-cluster suits production,
#while yarn-client suits interactive work and debugging, i.e. when you want to see the application's output quickly.
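In yarn-cluster mode the driver runs inside YARN, so SparkPi's result goes to the container logs rather than your terminal. Assuming YARN log aggregation is enabled (yarn.log-aggregation-enable; otherwise browse the container logs from http://master:8088), it can be retrieved afterwards; substitute the real application ID that spark-submit prints:
yarn logs -applicationId <applicationId> | grep 'Pi is roughly'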
spark-shell --version # finally, confirm the Spark version
Those are my installation notes; posts on actually using the stack are still to come…