Environment: two CentOS 7 VMs (192.168.31.224, 192.168.31.225), Hadoop 2.8.0, JDK 1.8
Setting up passwordless SSH login between the two VMs
1. Change the hostnames: 192.168.31.224 becomes hserver1 and 192.168.31.225 becomes hserver2.
On 192.168.31.224, run
>hostname hserver1
On 192.168.31.225, run
>hostname hserver2
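Note that hostname only changes the name for the current session; on CentOS 7, hostnamectl writes it to /etc/hostname so it persists across reboots:
>hostnamectl set-hostname hserver1   # on 192.168.31.224
>hostnamectl set-hostname hserver2   # on 192.168.31.225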
2. Edit /etc/hosts on both 224 and 225, appending the following at the end of the file:
192.168.31.224 hserver1
192.168.31.225 hserver2
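One way to append both entries in a single step (run the same command on each host):
cat >> /etc/hosts <<'EOF'
192.168.31.224 hserver1
192.168.31.225 hserver2
EOF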
3. Test: on each host, run ping hserver1 and ping hserver2; the names should resolve and the hosts should be reachable.
4. Generate an SSH key pair on each of the two machines:
ssh-keygen -t rsa -P ''
Check /root/.ssh; it should now contain the two key files, id_rsa and id_rsa.pub:
ls /root/.ssh
5. In /root/.ssh on each host, create an authorized_keys file whose content is the concatenation of the id_rsa.pub from host 224 and the id_rsa.pub from host 225. With more hosts, it is simpler to build the file on one machine and then distribute it to /root/.ssh on the others, as in the sketch below.
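A minimal sketch of the merge, run as root on hserver1 (the remote commands will still prompt for hserver2's password at this stage):
# Start authorized_keys with the local public key
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# Append hserver2's public key
ssh root@hserver2 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
# Distribute the merged file back to hserver2
scp /root/.ssh/authorized_keys root@hserver2:/root/.ssh/
# sshd ignores authorized_keys if its permissions are too open; repeat on hserver2
chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys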
6. Test whether passwordless SSH works: on hserver1 run ssh hserver2, and on hserver2 run ssh hserver1. If you land in a shell without being asked for a password, the configuration succeeded. Use exit to log out of the remote session.
Problem:
1. After setup, an ssh connection test fails with: sign_and_send_pubkey: signing failed: agent refused operation
Fix: run
eval "$(ssh-agent -s)"
ssh-add
Installing and configuring Hadoop
1. Download Hadoop 2.8.0.
2. Put the tarball under /opt and unpack it.
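A minimal sketch, assuming the archive is named hadoop-2.8.0.tar.gz and already sits in /opt:
cd /opt
tar -zxvf hadoop-2.8.0.tar.gz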
3. Create the working directories:
mkdir /root/hadoop
mkdir /root/hadoop/tmp
mkdir /root/hadoop/var
mkdir /root/hadoop/dfs
mkdir /root/hadoop/dfs/name
mkdir /root/hadoop/dfs/data
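Equivalently, a single mkdir -p call creates the whole tree:
mkdir -p /root/hadoop/{tmp,var,dfs/name,dfs/data}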
4. Edit the configuration files under /opt/hadoop-2.8.0/etc/hadoop on hserver2.
a. Edit core-site.xml, then replace hserver1's core-site.xml with the finished copy. fs.default.name tells clients where to find the filesystem's NameNode (in Hadoop 2.x this key is deprecated in favor of fs.defaultFS, but it still works).
Add the following inside <configuration></configuration>:
----------
<property>
<name>hadoop.tmp.dir</name>
<value>/root/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://hserver2:9000</value>
</property>
b. Edit hdfs-site.xml; dfs.replication sets the HDFS replication factor, i.e. how many copies of each block are kept.
Add the following inside <configuration></configuration>:
----------
<property>
<name>dfs.name.dir</name>
<value>/root/hadoop/dfs/name</value>
<description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
</property>
<property>
<name>dfs.data.dir</name>
<value>/root/hadoop/dfs/data</value>
<description>Comma separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
<description>Disable HDFS permission checking.</description>
</property>
c. Create mapred-site.xml from the template and edit it; the mapred.job.tracker parameter locates the master node that runs the JobTracker.
cp ./mapred-site.xml.template ./mapred-site.xml
Add the following inside <configuration></configuration>:
----------
<property>
<name>mapred.job.tracker</name>
<value>hserver2:49001</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/root/hadoop/var</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
d. Edit slaves: delete localhost and add
hserver1
hserver2
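The file can also be written in one step (path assumed from the install location above):
cat > /opt/hadoop-2.8.0/etc/hadoop/slaves <<'EOF'
hserver1
hserver2
EOF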
e. Edit yarn-site.xml.
Add the following inside <configuration></configuration>:
----------
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hserver2</value>
</property>
<property>
<description>The address of the applications manager interface in the RM.</description>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
<description>The address of the scheduler interface.</description>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<description>The http address of the RM web application.</description>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<description>The https address of the RM web application.</description>
<name>yarn.resourcemanager.webapp.https.address</name>
<value>${yarn.resourcemanager.hostname}:8090</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
<description>The address of the RM admin interface.</description>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>2048</value>
<description>Maximum memory, in MB, that can be allocated to a single container request; the default is 8192 MB.</description>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
f. Edit /opt/hadoop-2.8.0/etc/hadoop/hadoop-env.sh. (JAVA_HOME is already set in the system environment, but if this line is left unchanged, startup fails with: Error: JAVA_HOME is not set and could not be found.)
Change export JAVA_HOME=${JAVA_HOME} to export JAVA_HOME=/opt/jdk1.8.0_161.
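The same edit as a one-liner, assuming the JDK path above:
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/opt/jdk1.8.0_161|' /opt/hadoop-2.8.0/etc/hadoop/hadoop-env.sh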
Starting Hadoop
1. On hserver2, go to /opt/hadoop-2.8.0/bin and format HDFS:
./hadoop namenode -format
If no errors are reported, the format succeeded. (hadoop namenode is the deprecated form; ./hdfs namenode -format is the current equivalent.)
2. Check that a current directory has been generated under /root/hadoop/dfs/name/, with files created inside it.
3. Start Hadoop on the NameNode; 192.168.31.225 (hserver2) is the NameNode. Go to /opt/hadoop-2.8.0/sbin and run
./start-all.sh
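To confirm the daemons came up, jps (shipped with the JDK) lists the running Java processes on each node. With the layout above, one would expect NameNode, SecondaryNameNode, ResourceManager, DataNode, and NodeManager on hserver2, and DataNode and NodeManager on hserver1:
jps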
4. Test: open http://192.168.31.225:50070/ (the NameNode web UI)
and http://192.168.31.225:8088 (the YARN ResourceManager UI).
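The cluster can also be checked from the command line; hdfs dfsadmin -report lists the live DataNodes and their capacity:
/opt/hadoop-2.8.0/bin/hdfs dfsadmin -report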
5. To stop the cluster:
./stop-all.sh