http://www.cnblogs.com/onetwo/p/5424377.html
1. Plan
1.1 Hardware
- 华白 (physical host 1): master + slave3
  - master: 192.168.1.101
  - slave3: 192.168.1.203
- 华台 (physical host 2): slave1 + slave2
  - slave1: 192.168.1.201
  - slave2: 192.168.1.202
1.2 Software versions
- VMware-Fusion-8.0.0-2985594.dmg
- ubuntu-14.04.4-desktop-amd64.iso
- JDK 1.7 or later
- hadoop-2.6.4
- scala-2.11.8.tgz
- spark-1.6.1-bin-hadoop2.6.tgz
2. Installing Ubuntu
- VM: 80 GB virtual disk; bridged networking
- /boot: 200 MB: Ext4: primary (primary partition; the remaining partitions are logical)
- /: 5 GB (5120 MB): Ext4
- /tmp: 2 GB
- /var: 2 GB
- /usr: 15 GB (15360 MB)
- swap: 2 GB: swap
- /home: remaining space
3. Ubuntu setup
3.1 Headless (no-GUI) login
https://my.oschina.net/wake123/blog/208698
3.2 IP configuration
http://www.cnblogs.com/vincedotnet/p/4013099.html
auto eth0
iface eth0 inet static
address 192.168.1.201
gateway 192.168.1.1 # make sure this is actually your gateway address
netmask 255.255.255.0
sudo /etc/init.d/networking restart
* Internet access problems after setting a static IP
http://www.linuxdiyf.com/linux/14180.html
Problem:
After configuring a static IP on Ubuntu, the machine can no longer reach the Internet.
Fix:
1. In a terminal run
sudo gedit /etc/network/interfaces
and add the following content. The gateway line is required; the original failure to get online was caused by a missing gateway entry.
auto eth0
iface eth0 inet static
address 192.168.1.151
netmask 255.255.255.0
gateway 192.168.1.1
2. Run
gedit /etc/NetworkManager/NetworkManager.conf
and change managed=false to managed=true.
3. Run
gedit /etc/resolvconf/resolv.conf.d/base
and add a nameserver line for your DNS server, for example:
nameserver 192.168.1.1
nameserver 114.114.114.114
4. Reboot the machine.
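- Steps 1 and 3 can also be combined in a single /etc/network/interfaces stanza; a minimal sketch (addresses taken from the example above, adjust to your own network; the dns-nameservers option is handled by the resolvconf package):
auto eth0
iface eth0 inet static
    address 192.168.1.151
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1 114.114.114.114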
3.3 Automatic login
- With a GUI: System Settings -> User Accounts
- For the tty1 console:
vim /etc/init/tty1.conf
Change: exec /sbin/getty -8 38400 tty1
to:     exec /sbin/getty -a username -8 38400 tty1
3.4 Disabling the firewall
- Remove the firewall package
sudo apt-get remove iptables
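- Alternative (assuming the default ufw frontend is installed): simply disable ufw instead of removing iptables
sudo ufw disable
sudo ufw status    # should report: Status: inactive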
3.5 Configure /etc/hosts (requires root)
/etc/hosts:
- 127.0.0.1 localhost
- 192.168.1.101 0master
- 192.168.1.201 0slave1
- 192.168.1.202 0slave2
- 192.168.1.203 0slave3
/etc/hostname (one file per machine; each node's file contains only its own name)
- 0master
- 0slave1
- 0slave2
- 0slave3
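- Quick check after editing both files (run on every node; hostnames as configured above):
hostname              # should print this node's own name, e.g. 0slave1
ping -c 1 0master     # every node should be able to resolve and reach every other node
ping -c 1 0slave1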
3.6 Installing SSH
- Check whether the SSH server is installed and running: service ssh status
- Install: sudo apt-get install openssh-server
3.7 Passwordless SSH between nodes
- Generate a key pair (on every node):
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
- Append the public key to authorized_keys (on master):
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
- Send the public keys (id_dsa.pub) of 0slave1, 0slave2 and 0slave3 to master:
scp ~/.ssh/id_dsa.pub hadoopmi@0master:/home/hadoopmi/.ssh/id_dsa.pub.0slave1
scp ~/.ssh/id_dsa.pub hadoopmi@0master:/home/hadoopmi/.ssh/id_dsa.pub.0slave2
scp ~/.ssh/id_dsa.pub hadoopmi@0master:/home/hadoopmi/.ssh/id_dsa.pub.0slave3
- On master, append the public keys of 0slave1, 0slave2 and 0slave3 to authorized_keys:
cat ~/.ssh/id_dsa.pub.0slave1 >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub.0slave2 >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub.0slave3 >> ~/.ssh/authorized_keys
- Copy master's authorized_keys back into the .ssh directory of 0slave1, 0slave2 and 0slave3:
scp ~/.ssh/authorized_keys hadoopmi@0slave1:/home/hadoopmi/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoopmi@0slave2:/home/hadoopmi/.ssh/authorized_keys
scp ~/.ssh/authorized_keys hadoopmi@0slave3:/home/hadoopmi/.ssh/authorized_keys
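- If passwordless login still prompts for a password, sshd is usually rejecting the key because of directory permissions; a quick check and test (user and hostnames as above):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh hadoopmi@0slave1 hostname    # should print 0slave1 without asking for a password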
3.8 JDK
- Prepare: on 0master: /home/0master/jdk-7u80-linux-x64.tar.gz
- Copy: sudo cp /home/0master/jdk-7u80-linux-x64.tar.gz /usr/java
- Permissions: sudo chmod 777 jdk-7u80-linux-x64.tar.gz
- Unpack: sudo tar -zxvf jdk-7u80-linux-x64.tar.gz
* Set the environment variables (JAVA_HOME must point at the directory the tarball actually unpacks to, e.g. jdk1.7.0_80 for the 7u80 package)
* sudo vim /etc/profile
* #set jdk environment
* export JAVA_HOME=/usr/java/jdk1.7.0_79
* export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
* export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH
- source /etc/profile (as the normal, non-root user)
- java -version
* Java environment on the slave machines
* Copy the JDK tarball to each slave (0slave2 and 0slave3 are analogous):
scp /home/0master/jdk-7u79-linux-x64.tar.gz hadoopmi@0slave1:/home/hadoopmi
sudo cp /home/hadoopmi/jdk-7u79-linux-x64.tar.gz /usr/java
- Permissions: sudo chmod 777 jdk-7u79-linux-x64.tar.gz
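- The remaining steps on each slave mirror the master setup; a minimal sketch, assuming the same paths as above:
cd /usr/java
sudo tar -zxvf jdk-7u79-linux-x64.tar.gz
# append the same JAVA_HOME / PATH / CLASSPATH exports to /etc/profile, then:
source /etc/profile
java -version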
4. Installing and configuring Hadoop
4.1 Preparation (on master)
- Copy to /usr/hadoop
sudo cp /home/0master/hadoop-2.6.4.tar.gz /usr/hadoop/
- Change permissions of hadoop-2.6.4.tar.gz
- Unpack
* sudo chmod 777 hadoop-2.6.4.tar.gz
* sudo tar -zxvf hadoop-2.6.4.tar.gz
* Configure the Ubuntu environment
- Change ownership
* sudo chown -R hadoopmi:hadoopmi hadoop-2.6.4
* sudo vim /etc/profile
#hadoop env
* export HADOOP_HOME=/usr/hadoop/hadoop-2.6.4
* export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
- source /etc/profile
- hadoop version
4.2 Configuring the Hadoop cluster
* Edit the configuration files on 0master
- hadoop-env.sh
Add the following two lines (the configuration files are under /usr/hadoop/hadoop-2.6.4/etc/hadoop):
export JAVA_HOME=/usr/java/jdk1.7.0_79
export HADOOP_PREFIX=/usr/hadoop/hadoop-2.6.4
- core-site.xml
Create the tmp directory referenced below in advance (see the command after the configuration block):
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://0master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop/hadoop-2.6.4/tmp</value>
</property>
</configuration>
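- Create the hadoop.tmp.dir path above before formatting HDFS (path as configured in core-site.xml):
mkdir -p /usr/hadoop/hadoop-2.6.4/tmp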
- hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
- mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
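- Note: the stock 2.6.4 distribution usually ships only mapred-site.xml.template; create mapred-site.xml from it before editing:
cp $HADOOP_HOME/etc/hadoop/mapred-site.xml.template $HADOOP_HOME/etc/hadoop/mapred-site.xml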
- yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_79
- yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>0master</value>
</property>
</configuration>
- slaves file (one hostname per line; listing 0master here makes the master run a DataNode/NodeManager as well)
0master
0slave1
0slave2
0slave3
* Copy to the slaves
- Copy the unpacked hadoop directory to each slave (repeat for 0slave1 and 0slave2; see the loop sketch after this list):
* scp -r /usr/hadoop/hadoop-2.6.4 hadoopmi@0slave3:/usr/hadoop
- Change ownership (on each slave):
sudo chown -R hadoopmi:hadoopmi hadoop-2.6.4
- Edit /etc/profile (on each slave)
#hadoop env
* export HADOOP_HOME=/usr/hadoop/hadoop-2.6.4
* export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
- source /etc/profile
- hadoop version
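- A loop sketch for copying to all three slaves at once (hostnames and user as above; /usr/hadoop must already exist and be writable by hadoopmi on each slave):
for h in 0slave1 0slave2 0slave3; do
    scp -r /usr/hadoop/hadoop-2.6.4 hadoopmi@$h:/usr/hadoop
done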
5. Starting Hadoop
- Format HDFS (on master)
$HADOOP_HOME/bin/hdfs namenode -format
- Start the NameNode and DataNodes
- Run start-dfs.sh on the master, as follows:
* $HADOOP_HOME/sbin/start-dfs.sh
- Use the jps command to check the Java processes on master (typical process lists are sketched at the end of this section):
- Use jps on 0slave1, 0slave2 and 0slave3 to check their Java processes:
* Viewing NameNode and DataNode information
- Open http://0master:50070/ in a browser to view the NameNode information
* Start the ResourceManager and NodeManagers
start-yarn.sh
- Use jps to check the Java processes on master
- Use jps to check the Java processes on each slave
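- What jps typically shows once both start-dfs.sh and start-yarn.sh have run (0master is also listed in slaves, so it runs the worker daemons too; PIDs omitted):
0master: NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager, Jps
slaves:  DataNode, NodeManager, Jps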
Bugs
- "the command could not be located because '/bin' is not included in the PATH environment variable"
Fix:
http://www.powerxing.com/linux-environment-variable/
vim ~/.profile
PATH="$PATH:/usr/bin:/bin"
* no datanode to stop
http://blog.sina.com.cn/s/blog_6d932f2a0101fsxn.html
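- A common cause is a DataNode whose data directory still holds the cluster ID from a previous format; a typical recovery, assuming hadoop.tmp.dir as configured in core-site.xml (note: this wipes all HDFS data):
stop-dfs.sh
rm -rf /usr/hadoop/hadoop-2.6.4/tmp/*    # on every node
hdfs namenode -format                    # on master only
start-dfs.sh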