
Installing Hadoop 2.7.1 on CentOS 7

Two CentOS 7 machines (hostnames master-CentOS7 and slave-CentOS7), 2 GB of RAM each (my laptop can barely keep the VMs running ╮(╯-╰)╭).
CentOS 7 differs from CentOS 6 in a few ways.

Network configuration

master-CentOS7

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=b30f5765-ecd7-4dba-a0ed-ebac92c836bd
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.1.182
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=114.114.114.114
DNS2=8.8.4.4

Adjust these settings to match your own network environment.
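As a sketch (not part of the original steps), a quick check that an ifcfg file defines all of the static-IP keys set above can catch a typo before you restart the network. The key list is an assumption based on this article's configuration:

```shell
#!/bin/sh
# Sketch: verify an ifcfg-style file contains each required KEY=... line.
# The key list reflects the static configuration used in this article.
check_ifcfg() {
    cfg="$1"
    for key in BOOTPROTO IPADDR NETMASK GATEWAY DNS1; do
        grep -q "^${key}=" "$cfg" || { echo "missing: $key"; return 1; }
    done
}

# Example:
# check_ifcfg /etc/sysconfig/network-scripts/ifcfg-eno16777736 && echo "looks complete"
```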

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ifconfig

slave-CentOS7

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=b30f5765-ecd7-4dba-a0ed-ebac92c836bd
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.1.183
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=114.114.114.114
DNS2=8.8.4.4

Adjust these settings to match your own network environment.

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ifconfig

Set hosts and hostname

master-CentOS7

[root@localhost ~]# vi /etc/hosts

Add:

192.168.1.182 master
192.168.1.183 slave
[root@localhost ~]# vi /etc/hostname
localhost.localdomain

Change the contents to:

master

slave-CentOS7

[root@localhost ~]# vi /etc/hosts

Add:

192.168.1.182 master
192.168.1.183 slave
[root@localhost ~]# vi /etc/hostname
localhost.localdomain

Change the contents to:

slave
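The hosts entries added above can be sanity-checked with a small helper (a sketch, not part of the original steps; the function and its name are assumptions, and it works on any hosts-format file):

```shell
#!/bin/sh
# Sketch: verify that a hosts file maps an expected hostname to the
# expected IP. The name/IP pairs below come from the entries added above.
check_hosts_entry() {
    hosts_file="$1"; name="$2"; ip="$3"
    # awk: find lines whose first field is the IP and that also list the name
    awk -v ip="$ip" -v name="$name" '
        $1 == ip { for (i = 2; i <= NF; i++) if ($i == name) found = 1 }
        END { exit found ? 0 : 1 }' "$hosts_file"
}

# Example usage against the real file:
# check_hosts_entry /etc/hosts master 192.168.1.182 && echo "master OK"
# check_hosts_entry /etc/hosts slave  192.168.1.183 && echo "slave OK"
```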

Disable SELinux

master-CentOS7

[root@master ~]# getenforce
Enforcing
[root@master ~]# vi /etc/selinux/config
SELINUX=enforcing

Change to:

SELINUX=disabled

Save and reboot. (Running setenforce 0 instead switches SELinux to permissive mode immediately without a reboot, but getenforce will only report Disabled after rebooting with the config change above.)

[root@master ~]# getenforce
Disabled

slave-CentOS7

[root@slave ~]# getenforce
Enforcing
[root@slave ~]# vi /etc/selinux/config
SELINUX=enforcing

Change to:

SELINUX=disabled

Save and reboot.

[root@slave ~]# getenforce
Disabled

Disable firewalld

master-CentOS7

[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@master ~]# systemctl stop firewalld
[root@master ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@master ~]# yum install -y iptables-services
[root@master ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@master ~]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.

slave-CentOS7

[root@slave ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@slave ~]# systemctl stop firewalld
[root@slave ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination        

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination        

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination        
[root@slave ~]# yum install -y iptables-services
[root@slave ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@slave ~]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.

Passwordless SSH login (key-based)

master-CentOS7

[root@master ~]# ssh-keygen

Press Enter at every prompt.

[root@master ~]# cat .ssh/id_rsa.pub

Copy the contents of ~/.ssh/id_rsa.pub.

slave-CentOS7

[root@slave ~]# vi .ssh/authorized_keys
  • Pasting master's ~/.ssh/id_rsa.pub into slave's ~/.ssh/authorized_keys failed with
    ".ssh/authorized_keys" E212: Can't open file for writing
  • Fix: the .ssh directory did not exist yet
[root@slave ~]# ls -ld .ssh
ls: cannot access .ssh: No such file or directory
[root@slave ~]# mkdir .ssh; chmod 700 .ssh
[root@slave ~]# ls -ld .ssh
drwx------ 2 root root 6 Aug 28 15:59 .ssh
[root@slave ~]# vi .ssh/authorized_keys

Paste master's ~/.ssh/id_rsa.pub into slave's ~/.ssh/authorized_keys.

[root@slave ~]# ls -l !$
ls -l .ssh/authorized_keys
-rw-r--r-- 1 root root 418 Aug 28 16:02 .ssh/authorized_keys
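The manual steps above (create ~/.ssh with mode 700, then paste the key) are essentially what ssh-copy-id automates when it is available. As a sketch under that assumption, an idempotent helper that mirrors them and avoids duplicate keys on repeated runs:

```shell
#!/bin/sh
# Sketch: append a public key to authorized_keys only if it is not
# already present, creating the directory with the permissions sshd
# requires. The function name and paths are illustrative assumptions.
install_pubkey() {
    pubkey="$1"                 # one line, the contents of id_rsa.pub
    auth_file="$2"              # e.g. /root/.ssh/authorized_keys
    ssh_dir=$(dirname "$auth_file")
    mkdir -p "$ssh_dir" && chmod 700 "$ssh_dir"
    touch "$auth_file" && chmod 600 "$auth_file"
    # -F matches the key literally; -x requires a whole-line match
    grep -qxF "$pubkey" "$auth_file" || printf '%s\n' "$pubkey" >> "$auth_file"
}
```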

Test

master-CentOS7

[root@master ~]# ssh master
[root@master ~]# exit
[root@master ~]# ssh slave
[root@slave ~]# exit

Install the JDK

Hadoop 2.7 needs JDK 1.7 (or later). Download it from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

First uninstall the OpenJDK packages that ship with CentOS 7.
Using slave-CentOS7 as the example (the bundled JDK must be removed on both master-CentOS7 and slave-CentOS7):

[root@slave ~]# java -version
openjdk version "1.8.0_101"
OpenJDK Runtime Environment (build 1.8.0_101-b13)
OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode)
[root@slave ~]# rpm -qa |grep jdk
java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64
java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64
[root@slave ~]# yum -y remove java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64
[root@slave ~]# yum -y remove java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
[root@slave ~]# yum -y remove java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
[root@slave ~]# java -version
-bash: /usr/bin/java: No such file or directory

master-CentOS7

[root@master ~]# wget http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz
[root@master ~]# tar zxvf jdk-7u79-linux-x64.tar.gz
[root@master ~]# mv jdk1.7.0_79 /usr/local/
[root@master ~]# vi /etc/profile.d/java.sh

Add:

export JAVA_HOME=/usr/local/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
[root@master ~]# source !$
source /etc/profile.d/java.sh
[root@master ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[root@master ~]# scp jdk-7u79-linux-x64.tar.gz slave:/root/
[root@master ~]# scp /etc/profile.d/java.sh slave:/etc/profile.d/

slave-CentOS7

[root@slave ~]# tar zxvf jdk-7u79-linux-x64.tar.gz
[root@slave ~]# mv jdk1.7.0_79 /usr/local/
[root@slave ~]# source /etc/profile.d/java.sh
[root@slave ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

安装Hadoop

master-CentOS7

[root@master ~]# wget 'http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz'
[root@master ~]# tar zxvf hadoop-2.7.1.tar.gz
[root@master ~]# mv hadoop-2.7.1 /usr/local/hadoop
[root@master ~]# ls !$
ls /usr/local/hadoop
bin  include  libexec      NOTICE.txt  sbin
etc  lib      LICENSE.txt  README.txt  share
[root@master ~]# mkdir /usr/local/hadoop/tmp  /usr/local/hadoop/dfs  /usr/local/hadoop/dfs/data  /usr/local/hadoop/dfs/name
[root@master ~]# ls /usr/local/hadoop
bin  dfs  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share  tmp
[root@master ~]# rsync -av /usr/local/hadoop  slave:/usr/local

slave-CentOS7

[root@slave ~]# ls /usr/local/hadoop
bin  etc      lib      LICENSE.txt  README.txt  share
dfs  include  libexec  NOTICE.txt   sbin        tmp

Configure Hadoop

master-CentOS7

[root@master ~]# vi /usr/local/hadoop/etc/hadoop/core-site.xml

Add:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
[root@master ~]# vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

Add:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

dfs.replication defaults to 3; if you leave it unchanged with fewer than 3 DataNodes, HDFS will report errors. My test environment has only one slave, so the value is 1.

[root@master ~]# mv /usr/local/hadoop/etc/hadoop/mapred-site.xml.template  /usr/local/hadoop/etc/hadoop/mapred-site.xml
[root@master ~]# vi /usr/local/hadoop/etc/hadoop/mapred-site.xml

Add:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
[root@master ~]# vi /usr/local/hadoop/etc/hadoop/yarn-site.xml

Add:

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
[root@master ~]# cd /usr/local/hadoop/etc/hadoop
[root@master hadoop]# vi hadoop-env.sh

Change:

export JAVA_HOME=/usr/local/jdk1.7.0_79
[root@master hadoop]# vi yarn-env.sh

Change:

export JAVA_HOME=/usr/local/jdk1.7.0_79
[root@master hadoop]# vi slaves

Change to:

slave

Note slave-CentOS7's IP: the slaves file lists hostnames, which are resolved through the /etc/hosts entries added earlier.

[root@master hadoop]# rsync -av /usr/local/hadoop/etc/ slave:/usr/local/hadoop/etc/

slave-CentOS7

[root@slave ~]# cd /usr/local/hadoop/etc/hadoop/
[root@slave hadoop]# cat slaves
slave

The slaves file on the slave checks out.

Start Hadoop

master-CentOS7

[root@master hadoop]# /usr/local/hadoop/bin/hdfs namenode -format
[root@master hadoop]# echo $?
0
[root@master hadoop]# /usr/local/hadoop/sbin/start-all.sh
[root@master hadoop]# jps
19907 ResourceManager
19604 SecondaryNameNode
19268 NameNode
20323 Jps

When running the format command, avoid letting the NameNode's namespace ID diverge from the DataNodes' namespace ID. Every format generates fresh name/data/tmp metadata, so formatting repeatedly can leave mismatched IDs that stop Hadoop from running properly. Before each re-format, delete the old data and tmp files on both the NameNode and the DataNodes.
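The cleanup described above can be sketched as a small helper (the function name is an assumption; the paths follow the directory layout created earlier). Run it on every node before re-formatting, and only when you intend to destroy the existing HDFS data:

```shell
#!/bin/sh
# Sketch: wipe the NameNode/DataNode metadata and tmp dirs so that a
# fresh `hdfs namenode -format` produces consistent namespace IDs.
# The default path is the install location used in this article.
clean_hdfs_state() {
    hadoop_dir="${1:-/usr/local/hadoop}"
    # -f keeps this quiet if the dirs are already empty or missing
    rm -rf "$hadoop_dir/dfs/name/"* "$hadoop_dir/dfs/data/"* "$hadoop_dir/tmp/"*
}

# Example (destructive!):
# clean_hdfs_state /usr/local/hadoop
```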

slave-CentOS7

[root@slave hadoop]# jps
18113 NodeManager
18509 Jps
17849 DataNode

Open in a browser:
http://master:8088/
http://master:50070/

Test Hadoop

master-CentOS7

[root@master hadoop]# cd /usr/local/hadoop/
[root@master hadoop]# bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 10 10

Stop the services

master-CentOS7

[root@master hadoop]# cd /usr/local/hadoop
[root@master hadoop]# sbin/stop-all.sh

  • If you see copyFromLocal: Cannot create directory /123/. Name node is in safe mode.
    the NameNode is in safe mode.
    Fix:
    cd /usr/local/hadoop
    bin/hdfs dfsadmin -safemode leave