Setting Up a New Hadoop Cluster on CentOS


I: Environment Preparation

| Hostname | IP | Role |
| --- | --- | --- |
| dy-master01 | 192.168.22.134 | master node, NameNode |
| dy-slaver01 | 192.168.22.135 | DataNode, SecondaryNameNode |
| dy-slaver02 | 192.168.22.133 | DataNode |

II: Prerequisites

1. JDK environment variables (e.g. in /etc/profile)
export JAVA_HOME=/usr/local/java
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}
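These exports assume the JDK is unpacked at /usr/local/java; adjust the path to your install. A quick sanity check after reloading the profile, for example:
source /etc/profile
java -version        # should print the installed JDK version
echo $JAVA_HOME      # should print /usr/local/java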
2. Configure passwordless SSH
ssh-keygen -t rsa
ssh-copy-id dy-master01
ssh-copy-id dy-slaver01
ssh-copy-id dy-slaver02
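As a quick check that the keys were copied correctly, each login below should print a hostname without asking for a password (a minimal sketch, run from dy-master01):
for h in dy-master01 dy-slaver01 dy-slaver02; do
    ssh "$h" hostname    # should not prompt for a password
done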
3. Hostnames
vi /etc/hosts
192.168.22.134 dy-master01
192.168.22.135 dy-slaver01
192.168.22.133 dy-slaver02
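The same three entries need to exist in /etc/hosts on every node, and each machine's hostname should match its entry. One way to keep them consistent (a sketch, assuming root SSH access as used later in this guide):
hostnamectl set-hostname dy-master01          # run the matching command on each node
scp /etc/hosts root@dy-slaver01:/etc/hosts    # push the same hosts file to the slaves
scp /etc/hosts root@dy-slaver02:/etc/hosts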
4. NTP time server
yum -y install ntp
vi /etc/ntp.conf
On the master node:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.api.gz iburst
    
On the other nodes:
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.22.134 iburst

On all nodes:
systemctl start ntpd
systemctl enable ntpd
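To confirm that the slave nodes are actually syncing against the master (192.168.22.134), a quick check on each node, for example:
ntpq -p     # on the slaves, 192.168.22.134 should be listed as a peer
ntpstat     # reports whether the local clock is synchronised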
5. Disable SELinux (all nodes; required by the official documentation)
vim /etc/sysconfig/selinux
SELINUX=disabled
6. Disable firewalld
systemctl stop firewalld
systemctl disable firewalld
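A short verification that the firewall is really off (an optional check, not part of the original steps):
systemctl is-active firewalld     # should print "inactive" after the stop
systemctl is-enabled firewalld    # should print "disabled" after the disable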

Alternatively, instead of editing the file manually in step 5, SELinux can be disabled with sed:
sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config

Reboot (reboot) for the SELinux change to take effect, then verify:
# getenforce
Disabled
# sestatus -v
SELinux status:                 disabled

III: Download and Configure Hadoop

Perform the following on the master node (dy-master01).

1. Download the installation package hadoop-2.9.1.tar.gz

https://hadoop.apache.org/releases.html

tar -xzvf hadoop-2.9.1.tar.gz
mv hadoop-2.9.1/ /home/
2. Edit the configuration files: cd /home/hadoop-2.9.1/etc/hadoop
- vi hadoop-env.sh
export JAVA_HOME=/usr/local/java
- vi hdfs-site.xml

Set the Secondary NameNode:

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>dy-slaver01:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>dy-slaver01:50091</value>
    </property>
</configuration>
- vi core-site.xml
<configuration>
    <property>
        <!-- Determines the default filesystem URI: host, port, etc. -->
        <name>fs.defaultFS</name>
        <value>hdfs://dy-master01:9000</value>
    </property>
    <property>
        <!-- Location for Hadoop temporary files; the default (under /tmp) is cleared on reboot, so setting this is important -->
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.9.1</value>
    </property>
</configuration>
- vi slaves (configure the DataNodes)
dy-slaver01
dy-slaver02
- vi masters (configure the Secondary NameNode)
dy-slaver01
- vi /etc/profile (environment variables)
export HADOOP_HOME=/home/hadoop-2.9.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
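After editing /etc/profile, reload it so the hadoop commands are on the PATH, and optionally confirm the install and the core-site setting (a minimal check, assuming the paths above):
source /etc/profile
hadoop version                        # should report Hadoop 2.9.1
hdfs getconf -confKey fs.defaultFS    # should print hdfs://dy-master01:9000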

3. Copy the installation directory and configuration to the other nodes
scp -r /home/hadoop-2.9.1/ root@dy-slaver01:/home/
scp -r /home/hadoop-2.9.1/ root@dy-slaver02:/home/

scp /etc/profile root@dy-slaver01:/etc/profile
scp /etc/profile root@dy-slaver02:/etc/profile
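The copied profile also needs to be loaded on the slave nodes before hadoop commands work there; for example, a quick remote check from the master (a sketch, assuming the scp above succeeded):
ssh dy-slaver01 "source /etc/profile && hadoop version"
ssh dy-slaver02 "source /etc/profile && hadoop version"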

4. Format the NameNode

Run on the NameNode (dy-master01):

hdfs namenode -format

This initializes the fsimage.
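With hadoop.tmp.dir set to /opt/hadoop-2.9.1 and the default dfs.namenode.name.dir, the freshly written fsimage should show up under that directory (an optional check, assuming Hadoop defaults):
ls /opt/hadoop-2.9.1/dfs/name/current/    # should contain fsimage_* and VERSION files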
5. Start HDFS
start-dfs.sh

dy-master01: starting namenode, logging to /home/hadoop-2.9.1/logs/hadoop-root-namenode-dy-master01.out
dy-slaver02: starting datanode, logging to /home/hadoop-2.9.1/logs/hadoop-root-datanode-dy-slaver02.out
dy-slaver01: starting datanode, logging to /home/hadoop-2.9.1/logs/hadoop-root-datanode-dy-slaver01.out
Starting secondary namenodes [dy-slaver01]
dy-slaver01: starting secondarynamenode, logging to /home/hadoop-2.9.1/logs/hadoop-root-secondarynamenode-dy-slaver01.out

Web monitoring UI: http://dy-master01:50070
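To double-check that all daemons came up, jps on each node and an HDFS report from the master are enough (an optional verification using standard Hadoop/JDK tools):
jps                    # master: NameNode; dy-slaver01: DataNode, SecondaryNameNode; dy-slaver02: DataNode
hdfs dfsadmin -report  # should list both DataNodes as live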
