Hadoop Installation and Deployment

Fully Distributed Hadoop Setup (Part 1)

1) Prepare 3 machines (disable the firewall, set a static IP and hostname, configure SSH keys and /etc/hosts)
2) Install the JDK
3) Configure environment variables
4) Install Hadoop
5) Configure environment variables
6) Configure the cluster
7) Start the whole cluster and test it

!!! Prepare three Linux virtual machines yourself !!!

#Disable the firewall
[root@c7hadp1 ~] service iptables stop  #CentOS 6: stop the firewall
[root@c7hadp1 ~] systemctl stop firewalld.service  #CentOS 7: stop the firewall
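Note that stopping the service only lasts until the next reboot. To keep the firewall off across reboots (standard commands on each release):

[root@c7hadp1 ~] systemctl disable firewalld.service  #CentOS 7: don't start at boot
[root@c7hadp1 ~] chkconfig iptables off  #CentOS 6: don't start at boot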
#Change the hostname
[root@c7hadp1 ~] hostnamectl set-hostname <hostname>  #CentOS 7: set the hostname
[root@c7hadp1 ~] bash  #open a new shell so the prompt picks up the change

[root@c7hadp1 ~] vim /etc/sysconfig/network  #CentOS 6: set the hostname here
# Created by anaconda
NETWORKING=yes
HOSTNAME=c7hadp1  #the hostname you want
[root@c7hadp1 ~] reboot  #reboot for the change to take effect
#Set up SSH keys
[node@c7hadp1 ~] ssh-keygen -t rsa  #just press Enter at all three prompts
#Generate a key pair on every node first, then distribute the public keys
#Demonstrated here with a non-root user
Generating public/private rsa key pair.
Enter file in which to save the key (/home/node/.ssh/id_rsa): #press Enter
Created directory '/home/node/.ssh'.
Enter passphrase (empty for no passphrase):  #press Enter
Enter same passphrase again:  #press Enter
Your identification has been saved in /home/node/.ssh/id_rsa.
Your public key has been saved in /home/node/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:X3YqY4mlua2zLuU8L+wxbhw5VodzdIuUSO/wRGiNUBM node@c7hadp1
The key's randomart image is:
+---[RSA 2048]----+
|        .+E*..   |
|          =+= .  |
|         ..+oo . |
|          +=+ .  |
|        So.+= .  |
|        ** + o   |
|       B*o* .    |
|      ..X* o     |
|       =BBo      |
+----[SHA256]-----+

[node@c7hadp1 ~]$ ls -a  #in the current user's home directory
.   .bash_history  .bash_profile  .cache   .mozilla           .ssh
..  .bash_logout   .bashrc        .config  .oracle_jre_usage

[node@c7hadp1 ~]$ ls .ssh/
id_rsa  id_rsa.pub  #private key and public key

[node@c7hadp1 ~]$ ssh-copy-id <hostname>  #each of the 3 nodes copies its key to all 3 hosts (itself included), 9 runs in total

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/node/.ssh/id_rsa.pub"
The authenticity of host 'c7hadp2 (192.168.174.11)' can't be established.
ECDSA key fingerprint is SHA256:PsPKEOV8F+c1qycqyHfqWNHeyZXHi6gjvFSaumciX3c.
ECDSA key fingerprint is MD5:42:1a:65:c5:1d:6a:9c:76:c3:7b:10:49:04:76:6c:75.
Are you sure you want to continue connecting (yes/no)? #type yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
node@c7hadp2's password:       #enter the password
Permission denied, please try again.
node@c7hadp2's password: 
Permission denied, please try again.
node@c7hadp2's password:   #couldn't remember the password here, try it on your own machines
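Once ssh-copy-id has succeeded for every pair, a quick loop from each node confirms passwordless login works (hostnames as set up above):

[node@c7hadp1 ~]$ for h in c7hadp1 c7hadp2 c7hadp3; do ssh $h hostname; done  #should print all three names without any password prompt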
#On CentOS 7 the interface file is ifcfg-ens33
#On CentOS 6 it is ifcfg-eth0
[root@c7hadp1 ~] vim /etc/sysconfig/network-scripts/ifcfg-ens33  #configure the IP

BOOTPROTO=static  #use a static IP
ONBOOT=yes  #bring the interface up at boot
IPADDR=192.168.174.10  #static IP; check the subnet under VMware's Edit -> Virtual Network Editor
NETMASK=255.255.255.0  #subnet mask
GATEWAY=192.168.174.2  #gateway
DNS1=114.114.114.114  #DNS server

[root@c7hadp1 ~] service network restart  #restart the network service
[root@c7hadp1 ~] ping www.baidu.com  #test connectivity
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=128 time=34.0 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=128 time=32.4 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=3 ttl=128 time=32.0 ms 
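Step 1 also calls for hosts configuration: the ssh-copy-id step and the Hadoop daemons both need the node names to resolve. Append the mappings to /etc/hosts on all three machines (the first two addresses appear above; 192.168.174.12 for c7hadp3 is an assumption, use whatever you assigned):

[root@c7hadp1 ~] vim /etc/hosts
192.168.174.10 c7hadp1
192.168.174.11 c7hadp2
192.168.174.12 c7hadp3  #assumed address, adjust to your setup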

!! Now for the main part !!

#Unpack the JDK and Hadoop
[root@c7hadp1 gz] tar -zxvf ./jdk-8u144-linux-x64.tar.gz -C /opt  #-C sets the extraction directory
[root@c7hadp1 gz] tar -zxvf hadoop-2.7.2.tar.gz -C /opt
#Edit the environment variables
[root@c7hadp1 gz] vim /etc/profile
#java hadoop path
export JAVA_HOME=/opt/jdk1.8.0_144
export PATH=$JAVA_HOME/bin:$PATH

export HADOOP_HOME=/opt/hadoop-2.7.2
export PATH=$HADOOP_HOME/bin:$PATH
export PATH=$HADOOP_HOME/sbin:$PATH  #lowercase sbin; "Sbin" would silently fail
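/etc/profile is only read at login, so reload it and confirm both tools are on the PATH:

[root@c7hadp1 gz] source /etc/profile
[root@c7hadp1 gz] java -version  #should report 1.8.0_144
[root@c7hadp1 gz] hadoop version  #should report Hadoop 2.7.2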
#Before configuring Hadoop, you can delete the Windows-only .cmd files under hadoop/bin, sbin, and etc/hadoop: rm -rf *.cmd

#Configure the Hadoop slaves file ($HADOOP_HOME/etc/hadoop/slaves), one hostname per line:
c7hadp1
c7hadp2
c7hadp3

Set JAVA_HOME in hadoop-env.sh, yarn-env.sh, and mapred-env.sh:
export JAVA_HOME=/opt/jdk1.8.0_144
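If editing three files by hand is tedious, appending the line also works, since the last definition wins when the scripts are sourced (a convenience sketch, run from $HADOOP_HOME/etc/hadoop):

[root@c7hadp1 hadoop] for f in hadoop-env.sh yarn-env.sh mapred-env.sh; do echo 'export JAVA_HOME=/opt/jdk1.8.0_144' >> "$f"; done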

core-site.xml

<configuration>
<property>
        <name>fs.defaultFS</name>  <!-- fs.default.name is the deprecated 1.x key -->
        <value>hdfs://namenode-host:9000</value>  <!-- replace namenode-host with your NameNode's hostname -->
</property>

<property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadpdata/data/tmp</value>
</property>
</configuration>
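With hadoop already on the PATH, you can check that the key is being read before anything is running:

[root@c7hadp1 ~] hdfs getconf -confKey fs.defaultFS  #prints the configured NameNode URI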

hdfs-site.xml

<configuration>
<property>
        <name>dfs.replication</name>
        <value>3</value>  
</property>

<property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/hapd/name</value>
</property>

<property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hapd/data</value>
</property>

<property>
        <name>dfs.namenode.secondary.http-address</name>  <!-- the correct 2.x key; dfs.secondary.http-address is not recognized -->
        <value>c7hadp1:50090</value>
</property>
</configuration>
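The name, data, and tmp directories above don't strictly have to exist beforehand (the format step and the daemons create them), but creating them up front on every node makes permission problems easier to spot:

[root@c7hadp1 ~] mkdir -p /opt/hapd/name /opt/hapd/data /opt/hadpdata/data/tmp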

mapred-site.xml (Hadoop 2.7.2 ships only a template; create the file first with cp mapred-site.xml.template mapred-site.xml)

<configuration>
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>
</configuration>

yarn-site.xml

<configuration>
<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>c7hadp1</value>
</property>

<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
</configuration>
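All three nodes need identical copies of the JDK, Hadoop (including the edited configs), and /etc/profile. A minimal way to push everything out from c7hadp1 (assuming root can SSH between the nodes, or repeat the ssh-copy-id step for root first):

[root@c7hadp1 ~] scp -r /opt/jdk1.8.0_144 /opt/hadoop-2.7.2 root@c7hadp2:/opt/
[root@c7hadp1 ~] scp -r /opt/jdk1.8.0_144 /opt/hadoop-2.7.2 root@c7hadp3:/opt/
[root@c7hadp1 ~] scp /etc/profile root@c7hadp2:/etc/ && scp /etc/profile root@c7hadp3:/etc/
#then run source /etc/profile on c7hadp2 and c7hadp3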
#Format the NameNode (run once, on the NameNode host only; reformatting an existing cluster breaks the DataNodes' stored clusterID)
[root@c7hadp1 bin] ./hdfs namenode -format

Format succeeded (screenshot omitted; a successful run prints "has been successfully formatted" in the output).
Starting daemons individually

#from the hadoop/sbin directory
start-dfs.sh   #start all of HDFS (NameNode, DataNodes, SecondaryNameNode)
start-yarn.sh  #start all of YARN (ResourceManager, NodeManagers)
hadoop-daemon.sh start <daemon>   #start one daemon on the local node (namenode, datanode, secondarynamenode)
hadoop-daemons.sh start <daemon>  #start that daemon on every node listed in slaves
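For example, if a single DataNode process dies, restart just that daemon on the affected node rather than bouncing the whole cluster:

[root@c7hadp2 sbin] ./hadoop-daemon.sh start datanode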

Start the whole cluster

start-all.sh  #deprecated in Hadoop 2.x but still works; it simply runs start-dfs.sh then start-yarn.sh
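After startup, jps on each node should list the expected daemons, and the web UIs answer on the Hadoop 2.x default ports (this assumes the NameNode sits on c7hadp1, alongside the SecondaryNameNode and ResourceManager configured above):

[root@c7hadp1 ~] jps  #expect NameNode, SecondaryNameNode, ResourceManager, DataNode, NodeManager
[root@c7hadp2 ~] jps  #worker nodes show only DataNode and NodeManager
#HDFS web UI: http://c7hadp1:50070
#YARN web UI: http://c7hadp1:8088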