Installing Hadoop
Cluster planning:
Server IP | 192.168.252.150 | 192.168.252.151 | 192.168.252.152 |
---|---|---|---|
Hostname | node01 | node02 | node03 |
NameNode | yes | no | no |
SecondaryNameNode | yes | no | no |
DataNode | yes | yes | yes |
ResourceManager | yes | no | no |
NodeManager | yes | yes | yes |
Installation steps:
- Upload and extract the package
- Edit the configuration files
- Distribute the installation package
- Format HDFS
- Start the cluster
1. Upload and extract
- Upload the tarball to the /export/software directory
- cd /export/software
- tar xzvf hadoop-3.1.1.tar.gz -C ../servers
2. Edit the configuration files
First, configure the IP-to-hostname mapping:
vim /etc/hosts
Because remote connections are needed, the IP here must be the machine's real IP, not 127.0.0.1:
192.168.252.150 hadoop
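For the three-node plan in the table above, the mapping would normally cover every node; a minimal sketch, assuming the hostnames node01/node02/node03 from the planning table:
192.168.252.150 node01
192.168.252.151 node02
192.168.252.152 node03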
(1) Edit etc/hadoop/hadoop-env.sh under the Hadoop installation directory
Specify the Java home directory:
export JAVA_HOME=/usr/local/jdk1.8.0_131
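A quick sanity check that this path points at a working JDK (a sketch, assuming the JAVA_HOME above):
/usr/local/jdk1.8.0_131/bin/java -version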
(2) Edit etc/hadoop/core-site.xml:
First create the tmp directory that will be used for data persistence.
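For example (a sketch, assuming the installation path used in the configuration below):
mkdir -p /usr/local/hadoop/hadoop-3.1.1/datas/tmp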
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<!-- Communication address of the HDFS master (NameNode) -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<!-- Storage directory for temporary files -->
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/hadoop-3.1.1/datas/tmp</value>
</property>
</configuration>
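Note that the hosts-file step above warns against 127.0.0.1 for remote access; when other machines need to reach HDFS, fs.defaultFS is usually set to the mapped hostname rather than localhost. A sketch, assuming the hadoop entry added to /etc/hosts above:
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop:9000</value>
</property>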
(3) Edit etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop/hadoop-3.1.1/datas/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop/hadoop-3.1.1/datas/datanode</value>
</property>
<property>
<!-- With only one machine, set the HDFS replication factor to 1 -->
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:50075</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
</configuration>
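The metadata and block directories referenced above can be created up front as well (HDFS will also create them on first format/start); a sketch using the same installation path:
mkdir -p /usr/local/hadoop/hadoop-3.1.1/datas/namenode
mkdir -p /usr/local/hadoop/hadoop-3.1.1/datas/datanode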
(4) Edit etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>
<property>
<name>mapred.child.tmp</name>
<value>/usr/local/hadoop/hadoop-3.1.1/datas/tmp</value>
</property>
</configuration>
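mapred.job.tracker dates from the Hadoop 1.x JobTracker and has no effect on a YARN-based Hadoop 3.x cluster; a more typical Hadoop 3 mapred-site.xml declares YARN as the MapReduce framework, for example:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>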
(5) etc/hadoop/hadoop-env.sh: when the daemons are started as root, the run-as user exports at the end are also required.
# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
export JAVA_HOME=/usr/local/software/jdk1.8.0_151
# Some parts of the shell code may do special things dependent upon
# the operating system. We have to set this here. See the next
# section as to why....
export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
# Under certain conditions, Java on OS X will throw SCDynamicStore errors
# in the system logs.
# See HADOOP-8719 for more information. If one needs Kerberos
# support on OS X, one will want to change/remove this extra bit.
case ${HADOOP_OS_TYPE} in
Darwin*)
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.realm= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.kdc= "
export HADOOP_OPTS="${HADOOP_OPTS} -Djava.security.krb5.conf= "
;;
esac
export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"
(6) Edit etc/hadoop/yarn-site.xml (YARN memory settings):
<?xml version="1.0"?>
<configuration>
<!-- Disable the virtual-memory check; otherwise tasks fail when memory runs short -->
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<!-- A job on YARN requests at least 1.5 GB of memory by default; lower this value if the VM does not have that much, or the job will fail -->
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>128</value>
</property>
</configuration>
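On a very small VM the NodeManager and scheduler limits are often lowered as well; a sketch of related properties (the values here are illustrative, not from the original setup):
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>128</value>
</property>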
4. Set up and start the Hadoop distributed file system
(1) Format the file system:
bin/hdfs namenode -format
(2) Start the NameNode and DataNode processes (start-all.sh also brings up the YARN daemons):
sbin/start-all.sh
(3) Check whether the Hadoop processes started correctly:
ps -ef | grep hadoop
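jps, which ships with the JDK, is another quick check; if everything started, it should list processes such as NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager:
jps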
5. Open the NameNode web UI in a browser. For Hadoop 3.x the default address is http://localhost:9870/; port 50075 is the DataNode HTTP address configured in hdfs-site.xml above.
6. Format HDFS
- Why format HDFS?
- HDFS needs a formatting step to create the directories that hold its metadata (fsimage, edit log):
bin/hdfs namenode -format
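Formatting is a one-time operation: reformatting an existing cluster assigns a new clusterID, and DataNodes that still hold old data directories will then refuse to register because of the clusterID mismatch. If a reformat really is intended, the configured data directories are usually cleared first; a destructive sketch using the paths from hdfs-site.xml above:
rm -rf /usr/local/hadoop/hadoop-3.1.1/datas/namenode/* /usr/local/hadoop/hadoop-3.1.1/datas/datanode/*
bin/hdfs namenode -format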