Pre-installation preparation:
* Disable the firewall: service iptables stop (stops it immediately) or chkconfig iptables off (keeps it from starting at boot)
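Note: the interface name eno16777736 used below suggests CentOS 7, where firewalld has replaced the iptables service. If that is your system, the equivalent commands are:
systemctl stop firewalld      # stop the firewall now
systemctl disable firewalld   # keep it from starting at boot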
* Configure a static IP:
cd /etc/sysconfig/network-scripts
vi ifcfg-eno16777736
HWADDR=00:0C:29:EF:24:D8
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736 (interface name)
UUID=284152da-6579-4137-be67-3906c974cc00
ONBOOT=yes (bring the interface up at boot; with no, the NIC stays down after a reboot)
IPADDR=192.168.31.238 (IP address)
PREFIX=24
GATEWAY=192.168.31.255 (gateway; note .255 is normally the broadcast address in a /24, so verify this against your router)
DNS1=192.168.31.255 (DNS server)
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
Restart the network service to apply the changes: service network restart
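To confirm the new address is active (assuming the interface name above):
ip addr show eno16777736   # should list 192.168.31.238/24
ping -c 3 192.168.31.238   # the host should answer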
* Set up hostname resolution: vi /etc/hosts (add an entry for every node in the cluster; the hostname itself lives in /etc/hostname or /etc/sysconfig/network)
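For example, with hypothetical hostnames master and slave1 for the two nodes used below:
192.168.31.238 master
192.168.0.183  slave1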
* Set up passwordless SSH login: see my other blog post for details
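For reference, a minimal sketch, assuming you run Hadoop as root (adjust the user otherwise) and that the slave address matches the slaves file in step 5:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa   # generate a key pair with no passphrase
ssh-copy-id root@192.168.0.183             # install the public key on each slave
ssh root@192.168.0.183                     # should now log in without a password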
* Install the JDK: see my other blog post for details
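A minimal check, assuming the JDK is unpacked to /home/jdk/jdk1.7.0_76 (the path used in step 4 below):
export JAVA_HOME=/home/jdk/jdk1.7.0_76
export PATH=$JAVA_HOME/bin:$PATH
java -version   # should report 1.7.0_76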
* Install Hadoop: covered in this post
1. Download
http://apache.fayea.com/hadoop/common/hadoop-2.6.4/hadoop-2.6.4.tar.gz
2. Install (the JDK must already be installed on the Linux host)
cd /usr/local/
mkdir hadoop
cd hadoop
wget http://apache.fayea.com/hadoop/common/hadoop-2.6.4/hadoop-2.6.4.tar.gz
tar -zxvf hadoop-2.6.4.tar.gz
mkdir -p dfs/name dfs/data tmp   # created under /usr/local/hadoop, matching the paths in the configs below
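The resulting layout, which the configuration files below refer to:
/usr/local/hadoop/hadoop-2.6.4/   # the unpacked distribution
/usr/local/hadoop/dfs/            # NameNode/DataNode storage (hdfs-site.xml)
/usr/local/hadoop/tmp/            # hadoop.tmp.dir (core-site.xml)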
3. Configuration
Edit core-site.xml under /usr/local/hadoop/hadoop-2.6.4/etc/hadoop:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.31.238:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
Edit hdfs-site.xml under /usr/local/hadoop/hadoop-2.6.4/etc/hadoop:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.31.238:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
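Once the daemons are running (step 7), you can confirm a value was picked up with hdfs getconf, e.g. from the Hadoop home directory:
bin/hdfs getconf -confKey dfs.replication   # should print 2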
Edit mapred-site.xml under /usr/local/hadoop/hadoop-2.6.4/etc/hadoop (if the file does not exist yet, copy it from mapred-site.xml.template):
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.31.238:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.31.238:19888</value>
    </property>
</configuration>
The yarn.* properties belong in yarn-site.xml in the same directory:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>
</configuration>
4. Configure the JDK
In hadoop-env.sh and yarn-env.sh under /usr/local/hadoop/hadoop-2.6.4/etc/hadoop, set JAVA_HOME explicitly; if it is not set, the daemons will not start. The line ships commented out or pointing at ${JAVA_HOME}, so uncomment it and point it at your JDK:
export JAVA_HOME=/home/jdk/jdk1.7.0_76
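Both files can be patched in one step with a convenience one-liner (assuming the JDK path above; verify the result by eye afterwards):
cd /usr/local/hadoop/hadoop-2.6.4/etc/hadoop
sed -i 's|^.*export JAVA_HOME=.*|export JAVA_HOME=/home/jdk/jdk1.7.0_76|' hadoop-env.sh yarn-env.sh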
5. Configure slaves
Edit the slaves file under /usr/local/hadoop/hadoop-2.6.4/etc/hadoop: delete the default localhost entry and add the two slave nodes, one IP per line:
192.168.0.183
-------
6. Copy the configured Hadoop tree to the corresponding location on every node with scp:
scp -r /usr/local/hadoop/hadoop-2.6.4 192.168.0.183:/usr/local/hadoop/ (repeat for each node listed in the slaves file)
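With several slaves, a small loop saves repetition (the IP list is whatever your slaves file contains):
for ip in $(cat /usr/local/hadoop/hadoop-2.6.4/etc/hadoop/slaves); do
    scp -r /usr/local/hadoop/hadoop-2.6.4 $ip:/usr/local/hadoop/
done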
7. Start
cd /usr/local/hadoop/hadoop-2.6.4
(1) Initialize HDFS by formatting the NameNode: bin/hdfs namenode -format
(2) Start everything with sbin/start-all.sh, or start the layers separately with sbin/start-dfs.sh and sbin/start-yarn.sh
(3) To stop, run sbin/stop-all.sh
(4) Run jps and you should see the Hadoop daemons listed:
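On the master the output should look roughly like this (PIDs will differ):
2791 NameNode
2993 SecondaryNameNode
3148 ResourceManager
3452 Jps
On each slave you should see DataNode and NodeManager instead.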
8. Access
Open the YARN ResourceManager web UI at http://192.168.31.238:8088/ to see the cluster overview.
9. Running the NameNode in the foreground
cd /usr/local/hadoop/hadoop-2.6.4/bin
./hdfs namenode
Watch the log output it prints and check it for errors.
Then open http://192.168.31.238:50070/ (the HDFS NameNode web UI).
It is worth clicking through the menu items there to inspect the cluster state.
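A quick smoke test once HDFS is up (run from /usr/local/hadoop/hadoop-2.6.4):
bin/hdfs dfs -mkdir /test        # create a directory in HDFS
bin/hdfs dfs -ls /               # should list /test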
10. Starting the JournalNode
cd /usr/local/hadoop/hadoop-2.6.4/bin
./hdfs journalnode
Web UI: http://192.168.31.238:8480/
Note: JournalNodes are only required for NameNode HA with the quorum journal manager; in the simple setup above this step is optional.