Installing Hadoop 2.7.1 on CentOS 7

Original post: 2016-08-29 21:39:11

Two CentOS 7 machines (named master-CentOS7 and slave-CentOS7), each with 2 GB of RAM (my laptop can barely keep both VMs running ╮(╯-╰)╭).
CentOS 7 differs from CentOS 6 in a few ways.

Network configuration

master-CentOS7

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=b30f5765-ecd7-4dba-a0ed-ebac92c836bd
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.1.182
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=114.114.114.114
DNS2=8.8.4.4

Adjust the network settings above to match your own environment.

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ifconfig

slave-CentOS7

[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eno16777736
UUID=b30f5765-ecd7-4dba-a0ed-ebac92c836bd
DEVICE=eno16777736
ONBOOT=yes
IPADDR=192.168.1.183
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=114.114.114.114
DNS2=8.8.4.4

Adjust the network settings above to match your own environment.

[root@localhost ~]# systemctl restart network
[root@localhost ~]# ifconfig
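
With both interfaces up, it is worth confirming the two machines can reach each other before going further. A quick check, assuming the IPs configured above:

[root@localhost ~]# ping -c 3 192.168.1.183    # on master-CentOS7, reach the slave
[root@localhost ~]# ping -c 3 192.168.1.182    # on slave-CentOS7, reach the master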

Set hosts and hostname

master-CentOS7

[root@localhost ~]# vi /etc/hosts

Add:

192.168.1.182 master
192.168.1.183 slave
[root@localhost ~]# vi /etc/hostname
localhost.localdomain

Change the content to:

master

slave-CentOS7

[root@localhost ~]# vi /etc/hosts

Add:

192.168.1.182 master
192.168.1.183 slave
[root@localhost ~]# vi /etc/hostname
localhost.localdomain

Change the content to:

slave
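
On CentOS 7 the hostname can also be changed with hostnamectl, which rewrites /etc/hostname and takes effect immediately, with no reboot needed. The same result as the edits above:

[root@localhost ~]# hostnamectl set-hostname master    # run on master-CentOS7
[root@localhost ~]# hostnamectl set-hostname slave     # run on slave-CentOS7
[root@localhost ~]# hostnamectl status                 # verify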

Disable SELinux

master-CentOS7

[root@master ~]# getenforce
Enforcing
[root@master ~]# vi /etc/selinux/config
SELINUX=enforcing

Change to:

SELINUX=disabled

Save and reboot.

[root@master ~]# getenforce
Disabled

slave-CentOS7

[root@slave ~]# getenforce
Enforcing
[root@slave ~]# vi /etc/selinux/config
SELINUX=enforcing

Change to:

SELINUX=disabled

Save and reboot.

[root@slave ~]# getenforce
Disabled
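
If you want SELinux out of the way without rebooting, setenforce drops it to permissive mode for the current session; the config-file edit above still controls what happens after the next boot:

[root@slave ~]# setenforce 0
[root@slave ~]# getenforce
Permissive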

Disable firewalld

master-CentOS7

[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@master ~]# systemctl stop firewalld
[root@master ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination
[root@master ~]# yum install -y iptables-services
[root@master ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@master ~]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.

slave-CentOS7

[root@slave ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@slave ~]# systemctl stop firewalld
[root@slave ~]# iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination        

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination        

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination        
[root@slave ~]# yum install -y iptables-services
[root@slave ~]# service iptables save
iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]
[root@slave ~]# systemctl enable iptables
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.
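
Disabling the firewall entirely is fine for a throwaway lab. On a shared network, a more cautious sketch is to keep iptables running and open only the ports this post's configuration uses; note that a real cluster needs more rules than these three examples (DataNode ports, YARN internal ports, and so on):

[root@master ~]# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 9000 -j ACCEPT     # HDFS NameNode RPC
[root@master ~]# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 50070 -j ACCEPT    # NameNode web UI
[root@master ~]# iptables -I INPUT -s 192.168.1.0/24 -p tcp --dport 8088 -j ACCEPT     # ResourceManager web UI
[root@master ~]# service iptables save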

SSH key-based login

master-CentOS7

[root@master ~]# ssh-keygen

Press Enter at every prompt.

[root@master ~]# cat .ssh/id_rsa.pub

Copy the contents of ~/.ssh/id_rsa.pub.

slave-CentOS7

[root@slave ~]# vi .ssh/authorized_keys
  • Pasting master's ~/.ssh/id_rsa.pub into slave's ~/.ssh/authorized_keys fails with:
    ".ssh/authorized_keys" E212: Can't open file for writing
  • Solution:
[root@slave ~]# ls -ld .ssh
ls: cannot access .ssh: No such file or directory
[root@slave ~]# mkdir .ssh; chmod 700 .ssh
[root@slave ~]# ls -ld .ssh
drwx------ 2 root root 6 Aug 28 15:59 .ssh
[root@slave ~]# vi .ssh/authorized_keys

Copy the contents of master's ~/.ssh/id_rsa.pub into slave's ~/.ssh/authorized_keys.

[root@slave ~]# ls -l !$
ls -l .ssh/authorized_keys
-rw-r--r-- 1 root root 418 Aug 28 16:02 .ssh/authorized_keys
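
If password authentication is still enabled on the slave, ssh-copy-id does the whole copy in one step, creating .ssh and authorized_keys with the right permissions and avoiding the E212 error above:

[root@master ~]# ssh-copy-id root@slave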

Test

master-CentOS7

[root@master ~]# ssh master
[root@master ~]# exit
[root@master ~]# ssh slave
[root@slave ~]# exit

Install the JDK

Hadoop 2.7 requires JDK 1.7 or later; download it from http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

First remove the OpenJDK that ships with CentOS 7.
Using slave-CentOS7 as the example (the bundled OpenJDK must be removed on both master-CentOS7 and slave-CentOS7):

[root@slave ~]# java -version
openjdk version "1.8.0_101"
OpenJDK Runtime Environment (build 1.8.0_101-b13)
OpenJDK 64-Bit Server VM (build 25.101-b13, mixed mode)
[root@slave ~]# rpm -qa |grep jdk
java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64
java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64
[root@slave ~]# yum -y remove java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64
[root@slave ~]# yum -y remove java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
[root@slave ~]# yum -y remove java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
[root@slave ~]# java -version
-bash: /usr/bin/java: No such file or directory
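
Instead of removing the packages one by one, a wildcard removes them in one go; a sketch assuming only the stock OpenJDK packages are installed:

[root@slave ~]# yum -y remove java-1.7.0-openjdk\* java-1.8.0-openjdk\*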

master-CentOS7

[root@master ~]# wget http://download.oracle.com/otn-pub/java/jdk/7u79-b15/jdk-7u79-linux-x64.tar.gz
[root@master ~]# tar zxvf jdk-7u79-linux-x64.tar.gz
[root@master ~]# mv jdk1.7.0_79 /usr/local/
[root@master ~]# vi /etc/profile.d/java.sh

Add:

export JAVA_HOME=/usr/local/jdk1.7.0_79
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
[root@master ~]# source !$
source /etc/profile.d/java.sh
[root@master ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[root@master ~]# scp jdk-7u79-linux-x64.tar.gz slave:/root/
[root@master ~]# scp /etc/profile.d/java.sh slave:/etc/profile.d/

slave-CentOS7

[root@slave ~]# tar zxvf jdk-7u79-linux-x64.tar.gz
[root@slave ~]# mv jdk1.7.0_79 /usr/local/
[root@slave ~]# source /etc/profile.d/java.sh
[root@slave ~]# java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)

安装Hadoop

master-CentOS7

[root@master ~]# wget 'http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.1/hadoop-2.7.1.tar.gz'
[root@master ~]# tar zxvf hadoop-2.7.1.tar.gz
[root@master ~]# mv hadoop-2.7.1 /usr/local/hadoop
[root@master ~]# ls !$
ls /usr/local/hadoop
bin  include  libexec      NOTICE.txt  sbin
etc  lib      LICENSE.txt  README.txt  share
[root@master ~]# mkdir /usr/local/hadoop/tmp  /usr/local/hadoop/dfs  /usr/local/hadoop/dfs/data  /usr/local/hadoop/dfs/name
[root@master ~]# ls /usr/local/hadoop
bin  dfs  etc  include  lib  libexec  LICENSE.txt  NOTICE.txt  README.txt  sbin  share  tmp
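
The same layout can be created in a single command with mkdir -p and brace expansion:

[root@master ~]# mkdir -p /usr/local/hadoop/{tmp,dfs/{name,data}}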
[root@master ~]# rsync -av /usr/local/hadoop  slave:/usr/local

slave-CentOS7

[root@slave ~]# ls /usr/local/hadoop
bin  etc      lib      LICENSE.txt  README.txt  share
dfs  include  libexec  NOTICE.txt   sbin        tmp

Configure Hadoop

master-CentOS7

[root@master ~]# vi /usr/local/hadoop/etc/hadoop/core-site.xml

Add:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
[root@master ~]# vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

Add:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

dfs.replication defaults to 3; left unchanged with fewer than 3 DataNodes available, blocks cannot be fully replicated and HDFS reports errors. My test environment has only one slave (a single DataNode), so the value is set to 1.
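Once the cluster is running (see "Start Hadoop" below), you can confirm how many DataNodes actually registered, and whether blocks are fully replicated, with:

[root@master ~]# /usr/local/hadoop/bin/hdfs dfsadmin -report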

[root@master ~]# mv /usr/local/hadoop/etc/hadoop/mapred-site.xml.template  /usr/local/hadoop/etc/hadoop/mapred-site.xml
[root@master ~]# vi /usr/local/hadoop/etc/hadoop/mapred-site.xml

Add:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
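
Note that start-all.sh (used later in this post) does not start the JobHistory Server that the two jobhistory addresses above point at; if you want job history in the web UI, start it separately. In Hadoop 2.7 that is:

[root@master ~]# /usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver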
[root@master ~]# vi /usr/local/hadoop/etc/hadoop/yarn-site.xml

Add:

<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
[root@master ~]# cd /usr/local/hadoop/etc/hadoop
[root@master hadoop]# vi hadoop-env.sh

Change the JAVA_HOME line to:

export JAVA_HOME=/usr/local/jdk1.7.0_79
[root@master hadoop]# vi yarn-env.sh

Change the JAVA_HOME line to:

export JAVA_HOME=/usr/local/jdk1.7.0_79
[root@master hadoop]# vi slaves

Change the content to:

slave

Note that "slave" must resolve to slave-CentOS7's IP, via the /etc/hosts entries added earlier.

[root@master hadoop]# rsync -av /usr/local/hadoop/etc/ slave:/usr/local/hadoop/etc/

slave-CentOS7

[root@slave ~]# cd /usr/local/hadoop/etc/hadoop/
[root@slave hadoop]# cat slaves
slave

The slaves file on the slave checks out.

Start Hadoop

master-CentOS7

[root@master hadoop]# /usr/local/hadoop/bin/hdfs namenode -format
[root@master hadoop]# echo $?
0
[root@master hadoop]# /usr/local/hadoop/sbin/start-all.sh
[root@master hadoop]# jps
19907 ResourceManager
19604 SecondaryNameNode
19268 NameNode
20323 Jps

When running the -format command, take care that the NameNode's namespaceID does not end up different from the DataNodes'. Every format writes fresh metadata into the name, data, and tmp directories; formatting repeatedly leaves multiple generations of these files behind, the IDs diverge, and Hadoop can no longer run properly. So before every re-format, delete the old data and tmp files on both the NameNode and the DataNodes.
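In practice, "delete the old files" means clearing the directories configured earlier, on both machines, before re-running the format. A minimal sketch assuming the paths used in this post (this wipes all HDFS data, so only do it on a disposable cluster):

[root@master ~]# rm -rf /usr/local/hadoop/dfs/name/* /usr/local/hadoop/tmp/*
[root@slave ~]# rm -rf /usr/local/hadoop/dfs/data/* /usr/local/hadoop/tmp/*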

slave-CentOS7

[root@slave hadoop]# jps
18113 NodeManager
18509 Jps
17849 DataNode

Open in a browser:
http://master:8088/
http://master:50070/

Test Hadoop

master-CentOS7

[root@master hadoop]# cd /usr/local/hadoop/
[root@master hadoop]# bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 10 10
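
If the job ends by printing an "Estimated value of Pi is ..." line, MapReduce works end to end. Another quick smoke test is the wordcount example from the same jar; a sketch, with /input and /output as arbitrary HDFS paths:

[root@master hadoop]# bin/hdfs dfs -mkdir -p /input
[root@master hadoop]# bin/hdfs dfs -put etc/hadoop/core-site.xml /input
[root@master hadoop]# bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input /output
[root@master hadoop]# bin/hdfs dfs -cat /output/part-r-00000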

Stop the services

master-CentOS7

[root@master hadoop]# cd /usr/local/hadoop
[root@master hadoop]# sbin/stop-all.sh
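
start-all.sh and stop-all.sh are marked deprecated in Hadoop 2.x; the explicit equivalents start and stop HDFS and YARN separately:

[root@master hadoop]# sbin/start-dfs.sh && sbin/start-yarn.sh    # start
[root@master hadoop]# sbin/stop-yarn.sh && sbin/stop-dfs.sh      # stop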

  • If you get copyFromLocal: Cannot create directory /123/. Name node is in safe mode.
    the NameNode is in safe mode.
    Fix:
    cd /usr/local/hadoop
    bin/hdfs dfsadmin -safemode leave
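
You can first check whether safe mode is actually on; the NameNode normally leaves it on its own once enough blocks have been reported:

[root@master hadoop]# bin/hdfs dfsadmin -safemode get
Safe mode is OFF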
Copyright © Genesis2011
