Hadoop Distributed Configuration:
1. Extract the installation package:
[root@localhost software]# tar -zxvf hadoop-2.6.0-cdh5.14.2.tar.gz
2. Rename the extracted directory:
[root@localhost software]# mv hadoop-2.6.0-cdh5.14.2 hadoop
3. Enter the etc/hadoop directory under the hadoop folder (note: Linux paths are case-sensitive):
[root@localhost software]# cd hadoop
[root@localhost hadoop]# cd etc/hadoop
4. Edit the hadoop-env.sh file:
[root@hadoop5 hadoop]# vi hadoop-env.sh
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/root/software/jdk1.8.0_221
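Before saving, it is worth confirming that the JAVA_HOME path actually exists (the path below is the one used in this guide; adjust it to your own JDK location):

```shell
# Sanity-check the JDK location written into hadoop-env.sh
ls /root/software/jdk1.8.0_221/bin/java
/root/software/jdk1.8.0_221/bin/java -version
```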
5. Edit the core-site.xml file:
[root@hadoop5 hadoop]# vi core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop5:9000</value> <!-- replace hadoop5 with your own hostname -->
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/root/software/hadoop/tmp</value>
</property>
<property>
<name>hadoop.proxyuser.root.hosts</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.root.groups</name>
<value>*</value>
</property>
</configuration>
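The fs.defaultFS value must use a hostname that this machine can resolve to itself. A quick check (hadoop5 is the hostname used in this guide; yours will differ):

```shell
hostname                        # should print your hostname (hadoop5 in this guide)
grep "$(hostname)" /etc/hosts   # should show a matching /etc/hosts entry
```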
6. Edit the hdfs-site.xml file:
[root@localhost hadoop]# vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
7. Edit the mapred-site.xml.template file:
[root@localhost hadoop]# vi mapred-site.xml.template
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
After configuring, rename the file:
[root@localhost hadoop]# mv mapred-site.xml.template mapred-site.xml
8. Edit the yarn-site.xml file:
[root@localhost hadoop]# vi yarn-site.xml
<configuration> <!-- replace every hostname below with your own hostname -->
<!-- how reducers fetch data -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- the address of YARN's ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>HostName</value> <!-- replace with your hostname -->
</property>
<!-- enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- retain aggregated logs for 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
</configuration>
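The 604800-second retention value above is simply 7 days expressed in seconds, which you can verify with shell arithmetic:

```shell
# yarn.log-aggregation.retain-seconds = 7 days * 24 hours * 3600 seconds
echo $(( 7 * 24 * 3600 ))   # prints 604800
```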
9. Edit the /etc/profile file:
[root@localhost hadoop]# vi /etc/profile
export JAVA_HOME=/root/software/jdk1.8.0_221
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin/:/root/bin
export HADOOP_HOME=/root/software/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
10. Apply the profile changes:
[root@localhost hadoop]# source /etc/profile
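A quick way to confirm the new environment took effect (the expected paths assume the locations used in this guide):

```shell
echo $HADOOP_HOME   # should print /root/software/hadoop
which hadoop        # should resolve inside $HADOOP_HOME/bin
hadoop version      # should print the hadoop-2.6.0-cdh5.14.2 build info
```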
11. Format HDFS:
[root@localhost hadoop]# hdfs namenode -format
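Formatting only needs to be done once. If you ever need to re-format, stop Hadoop and clear the hadoop.tmp.dir configured earlier first, otherwise the NameNode and DataNode cluster IDs can diverge and the DataNode will fail to start. A sketch of that recovery, using the paths from this guide (this wipes all HDFS data):

```shell
# Only if a re-format is truly required -- destroys existing HDFS data
stop-all.sh
rm -rf /root/software/hadoop/tmp/*
hdfs namenode -format
```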
12. Go back up to the hadoop directory under software:
[root@localhost hadoop]# cd ../..
13. List the directory contents with ll:
[root@localhost hadoop]# ll
It should show something like:
drwxr-xr-x. 2 1106 4001 137 Mar 28 2018 bin
drwxr-xr-x. 2 1106 4001 166 Mar 28 2018 bin-mapreduce1
drwxr-xr-x. 3 1106 4001 4096 Mar 28 2018 cloudera
drwxr-xr-x. 6 1106 4001 109 Mar 28 2018 etc
drwxr-xr-x. 5 1106 4001 43 Mar 28 2018 examples
drwxr-xr-x. 3 1106 4001 28 Mar 28 2018 examples-mapreduce1
drwxr-xr-x. 2 1106 4001 106 Mar 28 2018 include
drwxr-xr-x. 3 1106 4001 20 Mar 28 2018 lib
drwxr-xr-x. 3 1106 4001 261 Mar 28 2018 libexec
-rw-r--r--. 1 1106 4001 85063 Mar 28 2018 LICENSE.txt
drwxr-xr-x. 3 root root 4096 Mar 14 15:46 logs
-rw-r--r--. 1 1106 4001 14978 Mar 28 2018 NOTICE.txt
-rw-r--r--. 1 1106 4001 1366 Mar 28 2018 README.txt
drwxr-xr-x. 3 1106 4001 4096 Mar 28 2018 sbin
drwxr-xr-x. 4 1106 4001 31 Mar 28 2018 share
drwxr-xr-x. 18 1106 4001 4096 Mar 28 2018 src
drwxr-xr-x. 4 root root 37 Mar 14 15:46 tmp
14. Start Hadoop and check the running daemons:
[root@localhost hadoop]# start-all.sh
[root@localhost hadoop]# jps
It should show:
29942 ResourceManager
30232 Jps
30029 NodeManager
29551 NameNode
29663 DataNode
29807 SecondaryNameNode
15. The configuration is now complete.
16. If something went wrong and a process is missing from the jps output, inspect the corresponding log to find the error:
[root@localhost hadoop]# tail -50f logs/hadoop-root-datanode-hadoop2.log
# replace the file name with the log of the missing daemon
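To scan every daemon log for problems at once instead of tailing a single file, something like the following works (log file names vary with your hostname):

```shell
# Show the 20 most recent error/exception lines across all daemon logs
grep -iE "error|exception" logs/*.log | tail -n 20
```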