Hadoop cluster setup

On master:
1. Edit the /etc/hosts file so each hostname maps to that node's own IP:
vi /etc/hosts
10.117.0.180 master
<slave1-IP> slave1
<slave2-IP> slave2
scp -r /etc/hosts slave1:/etc/
scp -r /etc/hosts slave2:/etc/
Test that you can reach the slaves over SSH:
ssh slave1
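For `ssh slave1` (and the `scp` commands above) to work without a password, master's public key must be in each slave's ~/.ssh/authorized_keys. A minimal sketch, assuming `ssh-keygen` and `ssh-copy-id` are available; a scratch directory is used here so the sketch is safe to run anywhere, while on the real cluster you would use the default path ~/.ssh/id_rsa:

```shell
# Generate an RSA key pair with no passphrase (scratch path for the sketch;
# on master, use the default ~/.ssh/id_rsa instead)
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N '' -f "$KEYDIR/id_rsa" -q
# The public key is what must end up in each slave's ~/.ssh/authorized_keys:
cat "$KEYDIR/id_rsa.pub"
# On the real cluster, push the key with (asks for the slave password once):
#   ssh-copy-id root@slave1
#   ssh-copy-id root@slave2
```

After that, `ssh slave1` should log in without prompting for a password.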
Extract Hadoop and edit /etc/profile
mkdir -p /usr/hadoop
tar -zxvf /opt/soft/hadoop-2.7.3.tar.gz -C /usr/hadoop/
vim /etc/profile
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.3
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib
export PATH=$PATH:$HADOOP_HOME/bin
source /etc/profile
Distribute /etc/profile to the slaves
scp -r /etc/profile slave1:/etc/
scp -r /etc/profile slave2:/etc/
On slave1 and slave2:
source /etc/profile
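A quick way to confirm the profile took effect on a node (a self-contained sketch: the two exports below repeat what /etc/profile sets, so on a configured node `source /etc/profile` has already done this):

```shell
# Re-create the exports from /etc/profile so the check is self-contained
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin
echo "HADOOP_HOME=$HADOOP_HOME"
# Verify that $HADOOP_HOME/bin is actually on the PATH
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "PATH OK" ;;
  *)                      echo "PATH is missing $HADOOP_HOME/bin" ;;
esac
```

On a node with Hadoop unpacked under that path, `hadoop version` should now print 2.7.3.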
On master:
2. Configure Hadoop. All Hadoop configuration files live in /usr/hadoop/hadoop-2.7.3/etc/hadoop
- # Configure hadoop-env.sh
cd /usr/hadoop/hadoop-2.7.3/etc/hadoop
vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_171
- # Configure core-site.xml
vim core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop/hadoop-2.7.3/hdfs/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>fs.checkpoint.period</name>
<value>60</value>
</property>
<property>
<name>fs.checkpoint.size</name>
<value>67108864</value>
</property>
</configuration>
- # Edit yarn-site.xml with vim
vim yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:18040</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:18030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:18088</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:18025</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:18141</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- Site specific YARN configuration properties -->
</configuration>
- # Edit the slaves file (one worker hostname per line)
vim slaves
slave1
slave2
- # Edit the master file (lists the master hostname)
vim master
master
- # Edit hdfs-site.xml
vim hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/hadoop/hadoop-2.7.3/hdfs/name</value>
<final>true</final>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/hadoop/hadoop-2.7.3/hdfs/data</value>
<final>true</final>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
</configuration>
- # Edit mapred-site.xml with vim (create it from the template first)
cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
Distribute Hadoop and its configuration to the slaves
scp -r /usr/hadoop root@slave1:/usr/
scp -r /usr/hadoop root@slave2:/usr/
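Before formatting, it is worth making sure the local directories referenced in core-site.xml and hdfs-site.xml exist on every node. Hadoop creates most of them itself, but creating them up front surfaces permission problems early. A sketch using the paths configured above (hadoop.tmp.dir, dfs.namenode.name.dir, dfs.datanode.data.dir):

```shell
# Paths taken from the hadoop.tmp.dir / dfs.*.dir values configured above
HDFS_DIR=/usr/hadoop/hadoop-2.7.3/hdfs
# Create the tmp, name and data directories (idempotent)
mkdir -p "$HDFS_DIR/tmp" "$HDFS_DIR/name" "$HDFS_DIR/data"
ls "$HDFS_DIR"
```

Run the same commands on slave1 and slave2 (or rely on the `scp -r` above having copied the directory tree).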
Format the NameNode (on master only, and only before the first start)
hdfs namenode -format
Start the cluster
cd /usr/hadoop/hadoop-2.7.3
sbin/start-all.sh
On master, slave1 and slave2, check the running Java processes:
jps
With this configuration, master should show NameNode, SecondaryNameNode and ResourceManager; slave1 and slave2 should show DataNode and NodeManager.
Open the HDFS web UI in a browser at http://<master-IP>:50070
Then verify HDFS from the shell:
hadoop fs -ls /
hadoop fs -mkdir /data
hadoop fs -ls /