Ubuntu download:
Link: https://pan.baidu.com/s/10icvUIwOizRUwXac0u6ZwQ?pwd=0000
Hadoop 3.2.1 and Java JDK 8u:
Link: https://pan.baidu.com/s/1aQ4S9Sm-N9f-cNZONZzmpw?pwd=0000
Ubuntu edition:
^1^
Prerequisite software:
sudo apt-get update
sudo apt-get install openssh-server -y
sudo apt-get install vim -y
Upload the downloaded JDK/Hadoop archives to: /opt/
^2^
vim /etc/hostname
old_hostname
delete it and replace with:
new_hostname
Save with :wq
reboot to apply the new hostname (note: this clears the bash history)
^3^
vim /etc/hosts
Add these lines (one per cluster node):
hadoop01_ip name
hadoop02_ip name
hadoop03_ip name
Save with :wq
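As a concrete sketch, with three hypothetical machines on a 192.168.1.x network, the added lines might read (addresses and names are illustrative, not from the original notes):

```shell
# Example /etc/hosts entries mapping each node's IP to its hostname
# 192.168.1.101 hadoop01
# 192.168.1.102 hadoop02
# 192.168.1.103 hadoop03
echo "192.168.1.101 hadoop01"   # shown here via echo so the line is visible as-is
```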
^4^
Extract under /opt:
tar -zxvf jdk-8u321-linux-x64.tar.gz
^5^
Add the Java environment variables:
sudo vim /etc/profile
add lines:
#JDK
export JAVA_HOME=/opt/jdk1.8.0_321
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Save with :wq
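The PATH line is what lets java be invoked by bare name from any directory. A tiny self-contained sketch of the mechanism, using a throwaway script rather than the real JDK:

```shell
# Demonstrate PATH lookup with a disposable directory and a fake binary
tmp=$(mktemp -d)
printf '#!/bin/sh\necho 1.8.0_321\n' > "$tmp/fake-java"
chmod +x "$tmp/fake-java"
export PATH="$PATH:$tmp"     # same append pattern as the export above
fake-java                    # now resolved via PATH; prints 1.8.0_321
rm -rf "$tmp"
```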
^6^
Reload the environment variables:
source /etc/profile
^7^
Verify Java:
java -version
^8^
Create a user (whose purpose is to run Hadoop):
sudo adduser hdp
su - hdp
(carry out the remaining Hadoop steps as this user)
^9^
Passwordless SSH:
mkdir -p ~/.ssh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
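The same three commands can be rehearsed against a throwaway directory to see what they produce (the real run above targets ~/.ssh; assumes ssh-keygen is installed):

```shell
# Generate a key pair, authorize it, and lock down permissions, in a temp dir
tmp=$(mktemp -d)
ssh-keygen -t rsa -P '' -f "$tmp/id_rsa" -q
cat "$tmp/id_rsa.pub" >> "$tmp/authorized_keys"
chmod 0600 "$tmp/authorized_keys"
stat -c '%a' "$tmp/authorized_keys"   # prints 600
rm -rf "$tmp"
```

The 0600 mode matters: sshd refuses to honor an authorized_keys file that other users can write to.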
^10^
# Extract the Hadoop tarball located in /home/hdp
tar -zxvf hadoop-3.2.1.tar.gz
^11^
vim /home/hdp/.bashrc
# HADOOP_HOME below is the path Hadoop was extracted to
export HADOOP_HOME=/home/hdp/hadoop-3.2.1
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
Save with :wq
^12^
(reload the environment variables)
source /home/hdp/.bashrc
^13^
vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh
add these lines:
export JAVA_HOME=/opt/jdk1.8.0_321
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_COMMON_LIB_NATIVE_DIR"
Save with :wq
^14^
# Check the machine's IP with: ifconfig (a fixed/static IP can also be configured)
^15^
# Edit core-site.xml
vim $HADOOP_HOME/etc/hadoop/core-site.xml
<configuration>
<!-- Directory where Hadoop stores its working files -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hdp/tmpdata</value>
</property>
<!-- NameNode address (fs.defaultFS replaces the deprecated fs.default.name) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.000.000:9000</value>
</property>
</configuration>
Save with :wq
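It can help to create the directory named in hadoop.tmp.dir up front, so it exists and is owned by the hdp user (for the hdp user, $HOME is /home/hdp, matching the value above):

```shell
# Pre-create the directory used by hadoop.tmp.dir
mkdir -p "$HOME/tmpdata"
```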
^16^
# Edit hdfs-site.xml
vim $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<configuration>
<!-- Where the NameNode stores its metadata -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hdp/dfsdata/namenode</value>
</property>
<!-- Where the DataNode stores its data blocks -->
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hdp/dfsdata/datanode</value>
</property>
<!-- Number of replicas HDFS keeps; production clusters typically use 3 -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Save with :wq
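Likewise, the NameNode/DataNode storage paths from hdfs-site.xml can be created ahead of time so they exist with the right owner ($HOME is /home/hdp for the hdp user):

```shell
# Pre-create the HDFS storage directories referenced above
mkdir -p "$HOME/dfsdata/namenode" "$HOME/dfsdata/datanode"
```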
^17^
vim $HADOOP_HOME/etc/hadoop/mapred-site.xml
<configuration>
<!-- Tell Hadoop to run MapReduce (MR) jobs on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
</configuration>
Save with :wq
^18^
vim $HADOOP_HOME/etc/hadoop/yarn-site.xml
<configuration>
<!-- Address of the YARN master node (ResourceManager) -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>192.168.000.000</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
Save with :wq
^19^
Note: in Hadoop 2.x this file is named slaves;
in Hadoop 3.x it is named workers.
Edit /home/hdp/hadoop-3.2.1/etc/hadoop/workers
to list the cluster's hostnames (for a single-node setup, just add this machine's hostname, i.e. the name set in the hostname file in step 2):
hadoop01_hostname
hadoop02_hostname
hadoop03_hostname
Save with :wq
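For a single-node setup, the workers file ends up containing just this machine's hostname. A sketch using a temp stand-in (the real file is $HADOOP_HOME/etc/hadoop/workers):

```shell
# Write this host's name into a stand-in workers file and show it
workers=$(mktemp)
hostname > "$workers"
cat "$workers"     # the single worker: this machine
rm -f "$workers"
```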
^20^
Format the NameNode (run on the master node):
hdfs namenode -format
^21^
Start:
Option 1:
start-dfs.sh
start-yarn.sh
Option 2:
start-all.sh
^22^
Check the running daemons: jps
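If everything started cleanly on a single node, jps should list roughly the following daemons (each line is prefixed by its own PID, which differs per run):

```
NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager
Jps
```

A missing NameNode or DataNode usually points back to the format step or to the directory settings in hdfs-site.xml.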