Hadoop Cluster Setup

Create three virtual machines.

On all nodes, configure the hosts file:

#vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.30.21 ceph01

192.168.30.22 ceph02

192.168.30.23 ceph03
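As a quick optional check (not part of the original steps), each hostname can be pinged from any node to confirm that the entries resolve:

#ping -c 1 ceph02

#ping -c 1 ceph03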

On all nodes, generate an SSH key pair:

#ssh-keygen -t rsa    # press Enter at every prompt

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa):

/root/.ssh/id_rsa already exists.

Overwrite (y/n)? y

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:68+bD/U2lCCZOE+Z1MbfQ4HOKun2c8ACk+hGMPTdNFs root@ceph01

The key's randomart image is:

+---[RSA 2048]----+

|  ..     o.E  ...|

|   o. . oo+=+. . |

|    o...+oB.+... |

|     o + + . +.o.|

|    o   S + o o .|

|     o   = = o   |

|    .   o + . +  |

|       . + + o . |

|        o.*++    |

+----[SHA256]-----+

Run the following on all nodes to distribute the public key:

#ssh-copy-id ceph01

#ssh-copy-id ceph02

#ssh-copy-id ceph03
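To confirm that passwordless login works (a quick check, not shown in the original output), connect from ceph01 to the other nodes; the remote hostname should print without a password prompt:

[root@ceph01 ~]# ssh ceph02 hostname

[root@ceph01 ~]# ssh ceph03 hostname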

Create the working directories on all nodes:

[root@ceph01 ~]# mkdir -p /export/data

[root@ceph01 ~]# mkdir -p /export/servers

[root@ceph01 ~]# mkdir -p /export/sofrware

Upload the JDK and Hadoop installation packages into /export/sofrware and extract the JDK package.
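The extraction command itself is not shown above; assuming the packages were uploaded to /export/sofrware, a typical invocation would be:

[root@ceph01 ~]# cd /export/sofrware

[root@ceph01 sofrware]# tar -zxvf jdk-8u341-linux-x64.tar.gz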

Rename the extracted JDK directory:

[root@ceph01 sofrware]# ls

hadoop-2.7.4.tar.gz  jdk1.8.0_341  jdk-8u341-linux-x64.tar.gz

[root@ceph01 sofrware]# mv jdk1.8.0_341 jdk

[root@ceph01 sofrware]# ls

hadoop-2.7.4.tar.gz  jdk  jdk-8u341-linux-x64.tar.gz

Add the following to the environment variables:

[root@ceph01 sofrware]#vim /etc/profile

#Jdk

export JAVA_HOME=/export/sofrware/jdk

export PATH=$PATH:$JAVA_HOME/bin

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Reload the environment variables and check the JDK version:

[root@ceph01 ~]# source /etc/profile

[root@ceph01 ~]# java -version

java version "1.8.0_341"

Java(TM) SE Runtime Environment (build 1.8.0_341-b10)

Java HotSpot(TM) 64-Bit Server VM (build 25.341-b10, mixed mode)

This output indicates that the JDK has been installed and configured successfully.

Extract the Hadoop installation package:

[root@ceph01 sofrware]# tar -zxvf hadoop-2.7.4.tar.gz      

Configure the environment variables:

[root@ceph01 sofrware]# vim /etc/profile

#hadoop

export HADOOP_HOME=/export/sofrware/hadoop-2.7.4

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

[root@ceph01 sofrware]# source /etc/profile

[root@ceph01 sofrware]# hadoop version

Hadoop 2.7.4

Subversion https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r cd915e1e8d9d0131462a0b7301586c175728a282

Compiled by kshvachk on 2017-08-01T00:29Z

Compiled with protoc 2.5.0

From source with checksum 50b0468318b4ce9bd24dc467b7ce1148

This command was run using /export/sofrware/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar

This output indicates that Hadoop has been installed successfully.

On the master node

Enter the Hadoop configuration directory:

[root@ceph01 ~]# cd /export/sofrware/hadoop-2.7.4/etc/hadoop/

[root@ceph01 hadoop]# vim hadoop-env.sh

export JAVA_HOME=/export/sofrware/jdk

[root@ceph01 hadoop]# vim core-site.xml

<configuration>

<!-- Sets the Hadoop file system, specified by a URI -->

    <property>

        <name>fs.defaultFS</name>

        <!-- Specifies that the NameNode runs on the ceph01 machine -->

        <value>hdfs://ceph01:9000</value>

    </property>

    <!-- Configures Hadoop's temporary directory; the default is /tmp/hadoop-${user.name} -->

    <property>

        <name>hadoop.tmp.dir</name>

        <value>/export/sofrware/hadoop-2.7.4/tmp</value>

    </property>

</configuration>

[root@ceph01 hadoop]# vim hdfs-site.xml

<configuration>

    <!-- Sets the HDFS replication factor -->

    <property>

        <name>dfs.replication</name>

        <value>3</value>

    </property>

    <!-- Host and port of the Secondary NameNode -->

    <property>

        <name>dfs.namenode.secondary.http-address</name>

        <value>ceph02:50090</value>

    </property>

</configuration>
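Note: a stock Hadoop 2.7.4 distribution ships only mapred-site.xml.template under etc/hadoop, so the file edited next may first need to be created from that template:

[root@ceph01 hadoop]# cp mapred-site.xml.template mapred-site.xml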

[root@ceph01 hadoop]# vim mapred-site.xml

<configuration>

    <!-- Specifies the MapReduce runtime framework; here it runs on YARN (the default is local) -->

    <property>

        <name>mapreduce.framework.name</name>

        <value>yarn</value>

    </property>

</configuration>


[root@ceph01 hadoop]# vim yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->

 <!-- Specifies the hostname of the YARN ResourceManager -->

    <property>

        <name>yarn.resourcemanager.hostname</name>

        <value>ceph01</value>

    </property>

    <property>

        <name>yarn.nodemanager.aux-services</name>

        <value>mapreduce_shuffle</value>

    </property>

</configuration>

[root@ceph01 hadoop]# vim slaves

ceph01

ceph02

ceph03

After the master node is configured, copy the files to the other nodes:

[root@ceph01 hadoop]# scp -r /export/ ceph02:/

[root@ceph01 hadoop]# scp -r /export/ ceph03:/
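The /etc/profile changes above were made only on ceph01; if the other nodes have not been configured yet, the environment variables can be copied over in the same way (a suggested extra step; afterwards run source /etc/profile on each node):

[root@ceph01 hadoop]# scp /etc/profile ceph02:/etc/profile

[root@ceph01 hadoop]# scp /etc/profile ceph03:/etc/profile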

Format the NameNode on the master node; if the "successfully formatted" message shown below appears, the format succeeded:

[root@ceph01 ~]# hdfs namenode -format

22/07/21 14:38:45 INFO common.Storage: Storage directory /export/sofrware/hadoop-2.7.4/tmp/dfs/name has been successfully formatted.
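As an optional check, the newly created NameNode metadata directory can be listed; it should contain a current/ subdirectory with a VERSION file:

[root@ceph01 ~]# ls /export/sofrware/hadoop-2.7.4/tmp/dfs/name/current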

Start the Hadoop services. Running start-all.sh on the master node starts the daemons on all nodes over SSH:

[root@ceph01 ~]# start-all.sh

Check that the services have started:

[root@ceph01 ~]# jps

2225 DataNode

3586 Jps

2132 NameNode

2678 NodeManager

2573 ResourceManager
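The worker nodes can be checked the same way; given the configuration above, ceph02 is expected to run DataNode, NodeManager and SecondaryNameNode, while ceph03 runs DataNode and NodeManager:

[root@ceph02 ~]# jps

[root@ceph03 ~]# jps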

On Windows, edit the hosts file under C:\Windows\System32\drivers\etc and append the nodes' IP addresses and hostnames at the end of the file:

192.168.30.21 ceph01

192.168.30.22 ceph02

192.168.30.23 ceph03
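To confirm the Windows entries work, the hostnames can be pinged from a Command Prompt, for example:

ping ceph01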

To view the HDFS cluster status, open a browser and go to ceph01:50070.

To view the YARN cluster status, open a browser and go to ceph01:8088.
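The same status can also be checked from the command line; both commands below are standard Hadoop 2.x tools and should report all three nodes:

[root@ceph01 ~]# hdfs dfsadmin -report

[root@ceph01 ~]# yarn node -list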

At this point, the Hadoop cluster setup is complete.
