Preparation
- Set the IP address and hostname (/etc/sysconfig/network-scripts/ifcfg-ens33, /etc/hostname)
- Disable the firewall (service iptables stop)
- Configure host mappings (/etc/hosts)
- Synchronize the system clock (ntpdate)
- A JDK must be installed.
- ssh must be installed and sshd must be running in order to use the Hadoop scripts that manage remote Hadoop daemons.
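The hosts-mapping step above can be sketched as an idempotent snippet. The node name and IP are the ones used throughout this walkthrough; the helper name `add_host` and the scratch file are illustrative only (point the path at the real /etc/hosts on an actual node):

```shell
#!/bin/sh
# Idempotent hosts-mapping helper, demonstrated against a scratch file
# so it does not touch the real /etc/hosts.
HOSTS_FILE="/tmp/hosts.demo"
: > "$HOSTS_FILE"          # start from an empty scratch file

add_host() {               # add_host <ip> <name>
    # grep -w matches the whole hostname, so node01 does not match node011
    grep -qw "$2" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}

add_host 192.168.23.101 node01
add_host 192.168.23.101 node01   # second call is a no-op
cat "$HOSTS_FILE"
```

Re-running the helper never duplicates an entry, which matters because duplicate or conflicting hosts lines are a common source of confusing Hadoop startup errors.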
Hadoop Setup
1. Upload the Hadoop tarball and the JDK rpm
hadoop-2.6.5.tar.gz jdk-8u231-linux-x64.rpm
2. Install the JDK and configure environment variables
[root@node01 ~]# rpm -ivh jdk-8u231-linux-x64.rpm
[root@node01 ~]# vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_231-amd64
export PATH=$JAVA_HOME/bin:$PATH
[root@node01 ~]# source /etc/profile
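A quick way to confirm the profile edit took effect, sketched against a scratch profile file so it does not modify the real /etc/profile (the JAVA_HOME path is the one the rpm above installs to):

```shell
#!/bin/sh
# Simulate the /etc/profile edit against a scratch file, then source it
# and confirm the variable is visible in the current shell.
PROFILE="/tmp/profile.demo"
cat > "$PROFILE" <<'EOF'
export JAVA_HOME=/usr/java/jdk1.8.0_231-amd64
export PATH=$JAVA_HOME/bin:$PATH
EOF

. "$PROFILE"               # same effect as: source /etc/profile
echo "$JAVA_HOME"
# On a real node, `java -version` should now resolve via $JAVA_HOME/bin.
```

Note that the exports only affect shells that have sourced the file; existing shells keep their old environment until you run `source /etc/profile` in them.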
3. Extract hadoop-2.6.5.tar.gz to /opt/bdp and set up passwordless SSH login
[root@node01 ~]# tar -zxf hadoop-2.6.5.tar.gz -C /opt/bdp
# Set up passwordless SSH login
[root@node01 ~]# rm -rf ~/.ssh
# Generate a DSA key pair: the private key goes to ~/.ssh/id_dsa, the public key to ~/.ssh/id_dsa.pub
[root@node01 ~]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
# Append the public key to authorized_keys
[root@node01 ~]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
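sshd refuses key-based login when ~/.ssh or authorized_keys is too permissive, which is a common reason the steps above still prompt for a password. The expected permissions, shown against a scratch directory (apply the same chmods to the real ~/.ssh; also note that recent OpenSSH releases have dropped DSA support, so on newer systems the key would be generated with `ssh-keygen -t rsa` instead):

```shell
#!/bin/sh
# Permissions sshd expects for key-based login, shown on a scratch dir.
SSH_DIR="/tmp/ssh.demo"
rm -rf "$SSH_DIR"
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"

chmod 700 "$SSH_DIR"                    # drwx------ for ~/.ssh
chmod 600 "$SSH_DIR/authorized_keys"    # -rw------- for authorized_keys

stat -c '%a %n' "$SSH_DIR" "$SSH_DIR/authorized_keys"
```

After fixing permissions, `ssh node01 date` should run without a password prompt.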
4. Add HADOOP_HOME, $HADOOP_HOME/bin and $HADOOP_HOME/sbin to the environment variables
[root@node01 ~]# vi /etc/profile
export HADOOP_HOME=/opt/bdp/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
[root@node01 ~]# source /etc/profile
5. Configure hadoop-env.sh
Processes started remotely over SSH do not load /etc/profile by default, so the JAVA_HOME variable is not picked up and must be set explicitly.
[root@node01 ~]# cd $HADOOP_HOME/etc/hadoop
[root@node01 hadoop]# vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_231-amd64
6. Configure $HADOOP_HOME/etc/hadoop/core-site.xml (the dfs.* properties below are HDFS settings that conventionally belong in hdfs-site.xml, but they take effect here too because every daemon also loads core-site.xml)
<configuration>
<!-- Default path prefix for HDFS access: a bare / resolves to hdfs://node01:9000/ -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://node01:9000</value>
</property>
<!-- Base directory for temporary storage -->
<property>
<name>hadoop.tmp.dir</name>
<value>/var/bdp/hadoop/local</value>
</property>
<!-- NameNode storage directory -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/var/bdp/hadoop/local/dfs/name</value>
</property>
<!-- DataNode storage directory -->
<property>
<name>dfs.datanode.data.dir</name>
<value>/var/bdp/hadoop/local/dfs/data</value>
</property>
<!-- SecondaryNameNode checkpoint directory -->
<property>
<name>dfs.namenode.checkpoint.dir</name>
<value>/var/bdp/hadoop/local/dfs/secondary</value>
</property>
</configuration>
7. Configure $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<configuration>
<!-- Number of block replicas -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<!-- Location of the SecondaryNameNode -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node01:50090</value>
</property>
</configuration>
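Hadoop does not validate property names in these XML files: a misspelled name (e.g. dfs.datanode.date.dir instead of dfs.datanode.data.dir) is silently ignored and the default value is used instead. A small grep/sed-based sanity check against a whitelist of the property names used in this walkthrough can catch such typos; the scratch config file below deliberately contains the typo to show the check firing:

```shell
#!/bin/sh
# Check the <name> entries in a Hadoop *-site.xml against a known-good list.
# KNOWN covers only the property names used in this walkthrough.
KNOWN="fs.defaultFS hadoop.tmp.dir dfs.namenode.name.dir dfs.datanode.data.dir dfs.namenode.checkpoint.dir dfs.replication dfs.namenode.secondary.http-address"

# Scratch config containing a deliberate typo (date.dir instead of data.dir).
CONF="/tmp/site.demo.xml"
cat > "$CONF" <<'EOF'
<configuration>
<property><name>dfs.datanode.date.dir</name><value>/var/bdp/dfs/data</value></property>
<property><name>dfs.replication</name><value>1</value></property>
</configuration>
EOF

# Extract each <name>...</name> and flag any not on the whitelist.
sed -n 's|.*<name>\(.*\)</name>.*|\1|p' "$CONF" | while read -r n; do
    case " $KNOWN " in
        *" $n "*) ;;                       # known property name
        *) echo "unknown property: $n" ;;  # likely a typo
    esac
done > /tmp/sitecheck.out
cat /tmp/sitecheck.out
```

On a real node you would point CONF at $HADOOP_HOME/etc/hadoop/core-site.xml and hdfs-site.xml in turn.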
8. List the DataNode hosts in $HADOOP_HOME/etc/hadoop/slaves
[root@node01 hadoop]# vi slaves
node01
9. Format the NameNode
[root@node01 hadoop]# hdfs namenode -format
22/07/06 18:37:28 INFO common.Storage: Storage directory /var/bdp/hadoop/local/dfs/name has been successfully formatted.
22/07/06 18:37:28 INFO namenode.FSImageFormatProtobuf: Saving image file /var/bdp/hadoop/local/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
22/07/06 18:37:28 INFO namenode.FSImageFormatProtobuf: Image file /var/bdp/hadoop/local/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
The "successfully formatted" message confirms the format succeeded, and the Storage directory line shows that the configured NameNode storage directory was created.
10. Start HDFS (the start scripts live in $HADOOP_HOME/sbin)
[root@node01 hadoop]# start-dfs.sh
[root@node01 sbin]# cd /var/bdp/hadoop/local/dfs/
[root@node01 dfs]# ll
total 0
drwx------ 3 root root 40 Jul 6 19:00 data
drwxr-xr-x 3 root root 40 Jul 6 19:00 name
drwxr-xr-x 2 root root 25 Jul 6 19:01 namesecondary
On the first startup, the DataNode and SecondaryNameNode roles initialize and create their data directories.
The HDFS web UI can be viewed at http://192.168.23.101:50070
Data Verification
# Create an HDFS home directory for the root user (note: HDFS home directories conventionally live under /user, e.g. /user/root; this walkthrough uses /usr/root throughout)
[root@node01 ~]# hdfs dfs -mkdir -p /usr/root
# Upload hadoop-2.6.5.tar.gz to HDFS
[root@node01 ~]# hdfs dfs -put hadoop-2.6.5.tar.gz /usr/root
Since the file is larger than one block (the default block size in Hadoop 2.x is 128 MB), opening hadoop-2.6.5.tar.gz in the web UI shows that it is stored as 2 blocks.
HDFS also lets you choose the block size per file. Here the block size is lowered to 10 MB (1024*1024*10 = 10485760 bytes):
[root@node01 ~]# for i in `seq 1000000`;do echo "hello hadoop $i" >> data.txt ;done
[root@node01 ~]# ll -l -h | grep data.txt
-rw-r--r-- 1 root root 19M Jul 6 20:21 data.txt
[root@node01 ~]# hdfs dfs -D dfs.blocksize=10485760 -put data.txt /usr/root
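The resulting block count follows from ceiling division of the file size by the block size. A quick check with the sizes from this session (the exact byte count of data.txt is the sum of the two block files listed below):

```shell
#!/bin/sh
# Blocks needed = ceil(filesize / blocksize); the last block holds the rest.
FILESIZE=19888896        # bytes in data.txt (10485760 + 9403136)
BLOCKSIZE=10485760       # 10 MB, as passed via -D dfs.blocksize

BLOCKS=$(( (FILESIZE + BLOCKSIZE - 1) / BLOCKSIZE ))
LAST=$(( FILESIZE - (BLOCKS - 1) * BLOCKSIZE ))
echo "blocks=$BLOCKS last_block=$LAST"
# -> blocks=2 last_block=9403136
```

So the 19 MB file is split into one full 10 MB block plus a 9403136-byte tail block, matching the DataNode directory listing below.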
The blocks can be seen in the DataNode storage directory; the *.meta files hold the block checksums.
[root@node01 ~]# cd /var/bdp/hadoop/local/dfs/data/current/BP-132806812-192.168.23.101-1657103848515/current/finalized/subdir0/subdir0/
[root@node01 subdir0]# ll
total 216068
-rw-r--r-- 1 root root 10485760 Jul 6 20:58 blk_1073741827
-rw-r--r-- 1 root root 81927 Jul 6 20:58 blk_1073741827_1003.meta
-rw-r--r-- 1 root root 9403136 Jul 6 20:58 blk_1073741828
-rw-r--r-- 1 root root 73471 Jul 6 20:58 blk_1073741828_1004.meta
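The *.meta sizes are explained by the checksum layout in Hadoop 2.x: a 7-byte header plus one 4-byte CRC32 checksum per 512-byte chunk of block data (rounded up). The helper name `meta_size` is illustrative; the arithmetic reproduces the sizes listed above exactly:

```shell
#!/bin/sh
# meta size = 7-byte header + 4-byte CRC32 per 512-byte chunk (rounded up).
meta_size() {  # meta_size <block_bytes>
    echo $(( 7 + 4 * ( ($1 + 511) / 512 ) ))
}

meta_size 10485760   # blk_1073741827 -> 81927
meta_size 9403136    # blk_1073741828 -> 73471
```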
---------------------------------------------------------------------------------------------------------------------------------
ERROR1
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform...
Some posts claim this happens because the prebuilt Hadoop native library is 32-bit and fails on a 64-bit platform; inspection shows that is not the case here.
Verification:
[root@node0 hadoop]# cd $HADOOP_HOME/lib/native/
# Inspect the library's dependencies with ldd
[root@node0 native]# ldd libhadoop.so.1.0.0
./libhadoop.so.1.0.0: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by ./libhadoop.so.1.0.0)
linux-vdso.so.1 => (0x00007fff1f3ff000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f5fb4451000)
libc.so.6 => /lib64/libc.so.6 (0x00007f5fb40bc000)
/lib64/ld-linux-x86-64.so.2 (0x00007f5fb487b000)
All the resolved dependencies come from /lib64/, so this is not a 32-bit vs 64-bit issue. The error message itself says GLIBC_2.14 was not found, so check the system's glibc version with ldd --version.
[root@node0 native]# ldd --version
ldd (GNU libc) 2.12
Copyright (C) 2010 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Written by Roland McGrath and Ulrich Drepper.
The system ships glibc 2.12 while this Hadoop build expects 2.14, hence the warning. The warning can be suppressed in the log4j configuration:
[root@node0 native]# vi $HADOOP_HOME/etc/hadoop/log4j.properties
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
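Whether an installed glibc satisfies a requirement like GLIBC_2.14 can be decided with a version-aware comparison (`sort -V`). A small sketch with the helper name `glibc_ok` and the versions from the transcript hard-coded for illustration:

```shell
#!/bin/sh
# Succeed if installed glibc version >= required version.
glibc_ok() {  # glibc_ok <installed> <required>
    # sort -V orders version strings numerically, so the installed
    # version must sort last (or equal) for the requirement to hold.
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | tail -n1)" = "$1" ]
}

glibc_ok 2.12 2.14 && echo "2.12 satisfies 2.14" || echo "2.12 is too old for 2.14"
glibc_ok 2.17 2.14 && echo "2.17 satisfies 2.14"
# On a real node, the installed version comes from: ldd --version | head -n1
```

This explains why the warning appears on this CentOS 6 era system (glibc 2.12) and would not on a newer distribution.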
ERROR2
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on [2021-06-19 21:24:45,053 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable]
Error: Cannot find configuration directory: /etc/hadoop
Error: Cannot find configuration directory: /etc/hadoop
Starting secondary namenodes [2021-06-19 21:24:45,824 WARN [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
0.0.0.0]
Fix: set the Hadoop configuration directory explicitly in hadoop-env.sh
[root@node0 hadoop]# vi hadoop-env.sh
export HADOOP_CONF_DIR=/opt/hadoop-2.6.5/etc/hadoop/
[root@node0 hadoop]# source hadoop-env.sh
ERROR3
[root@node0 hadoop]# start-dfs.sh
Starting namenodes on [node0]
node0: namenode running as process 1783. Stop it first.
node0: datanode running as process 1424. Stop it first.
Starting secondary namenodes [node0]
node0: secondarynamenode running as process 1962. Stop it first.
Explanation: the NameNode, DataNode, and SecondaryNameNode daemons are already running and must be stopped first.
Fix:
[root@node0 hadoop]# stop-all.sh
[root@node0 hadoop]# start-dfs.sh
[root@node0 hadoop]# jps
3847 Jps
3483 NameNode
3595 DataNode
3743 SecondaryNameNode