Hadoop Fully Distributed Installation

Install VMware
Install Linux

Configure Linux
Configure the IP addresses
192.168.1.100 master
192.168.1.102 slave2
192.168.1.103 slave1
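The plan above assumes static IPs; the transcript does not show how they were set. On CentOS 5/6 a static address is typically configured in the interface file (the device name eth0 and the gateway below are assumptions):
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.100
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
[root@localhost ~]# service network restart
Use 192.168.1.103 on slave1 and 192.168.1.102 on slave2.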
Configure the hostnames
[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=master
[root@localhost ~]# hostname master
Configure the hostname on all three machines:
[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=slave1
[root@localhost ~]# hostname slave1
[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=slave2
[root@localhost ~]# hostname slave2
A reboot is required for this to take effect:
[root@localhost ~]# reboot
Then connect with a remote terminal tool; Xshell is used here.
Disable the firewall and SELinux
[root@master ~]# vim /etc/sysconfig/selinux


# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
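Editing /etc/sysconfig/selinux only takes effect after a reboot. To stop SELinux enforcing immediately in the running system as well, the standard command is:
[root@master ~]# setenforce 0
Repeat on slave1 and slave2.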
[root@master ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num target prot opt source destination
Chain FORWARD (policy ACCEPT)
num target prot opt source destination
Chain OUTPUT (policy ACCEPT)
num target prot opt source destination
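The status output above shows the firewall service is still loaded; the transcript omits the commands that actually disable it. A typical sequence on CentOS 5/6 is:
[root@master ~]# service iptables stop
[root@master ~]# chkconfig iptables off
Run this on all three machines, otherwise the Hadoop ports configured below (9000, 9001, and the default web/datanode ports) will be blocked between nodes.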
Hostname resolution
This lets master, slave1, and slave2 reach each other by name. Do this on all three machines.
[root@master ~]# vim /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.1.100 master
192.168.1.102 slave2
192.168.1.103 slave1
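A quick way to verify the mapping is a ping from each node, for example:
[root@master ~]# ping -c 1 slave1
[root@master ~]# ping -c 1 slave2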
Configure Java
Start FTP
[root@master ~]# /etc/init.d/vsftpd restart


Shutting down vsftpd: [FAILED]
Starting vsftpd for vsftpd: [ OK ]
By default, root is not allowed to use FTP.
[root@master vsftpd]# pwd
/etc/vsftpd
[root@master vsftpd]# ls
ftpusers user_list
Comment out the root entry in both of these files.
Then restart FTP.
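Equivalently, the two edits and the restart can be done non-interactively (a sketch assuming the stock files shown above, where root appears at the start of a line):
[root@master vsftpd]# sed -i 's/^root/#root/' ftpusers user_list
[root@master vsftpd]# service vsftpd restart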
Create a directory on all three machines:
[root@master ~]# mkdir installer
[root@master ~]#
Upload the JDK
Upload E:\开发工具\jdk-6u27-linux-i586-rpm.bin to the installer directory.
[root@master installer]# ll
total 78876
-rw-r--r-- 1 root root 80680219 12-01 12:50 jdk-6u27-linux-i586-rpm.bin
[root@master installer]# chmod a+x jdk-6u27-linux-i586-rpm.bin
[root@master installer]#
[root@master installer]# ./jdk-6u27-linux-i586-rpm.bin
Unpacking...
Checksumming...
Extracting...
UnZipSFX 5.50 of 17 February 2002, by Info-ZIP (Zip-Bugs@lists.wku.edu).
inflating: jdk-6u27-linux-i586.rpm
At this point you can open another session and copy the installer to the other nodes:
[root@master installer]# scp jdk-6u27-linux-i586-rpm.bin slave1:/root/installer
root@slave1's password:
jdk-6u27-linux-i586-rpm.bin
Copy it to slave2 in the same way; then install the JDK on all three machines.
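The install command itself is not shown in the transcript. The -rpm.bin self-extractor normally installs the RPM it unpacks; if it does not, the extracted package can be installed manually on each machine:
[root@master installer]# rpm -ivh jdk-6u27-linux-i586.rpm
Then confirm with java -version, as below.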


[root@master installer]# java -version
java version "1.6.0_27"
Add a user
Do this on all three machines.
[root@master ~]# useradd hadoop
[root@master ~]# passwd hadoop
Changing password for user hadoop.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@master ~]#
Also create the installer directory as the hadoop user on slave1 and slave2:
[root@slave1 ~]# su - hadoop
[hadoop@slave1 ~]$ mkdir installer
Configure SSH passwordless login (SSH equivalence)
[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
5b:d6:30:95:6f:37:b1:d8:e0:ec:b4:cb:94:cc:3f:cc  root@master
[root@master ~]#
Press Enter at every prompt to accept the defaults (empty passphrase).
Run ssh-keygen on all three machines. Note that the sample transcript above happens to show root, but every step that follows operates on ~hadoop/.ssh, so the key pair must be generated as the hadoop user on each node.
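As a non-interactive alternative, the same key pair can be generated in one step (standard OpenSSH options: -N "" sets an empty passphrase, -f picks the output file):
[hadoop@master ~]$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa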
[hadoop@master ~]$ cd .ssh/
[hadoop@master .ssh]$ ls
id_rsa id_rsa.pub
[hadoop@master .ssh]$ cat id_rsa.pub > authorized_keys
[hadoop@master .ssh]$ ls
authorized_keys id_rsa id_rsa.pub
[hadoop@master .ssh]$
Copy the generated authorized_keys file to slave1 and slave2.
[hadoop@master .ssh]$ scp authorized_keys slave1:~/.ssh/
The authenticity of host 'slave1 (192.168.1.103)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,192.168.1.103' (RSA) to the list of known hosts.
hadoop@slave1's password:
authorized_keys 100%  395  0.4KB/s  00:00
[hadoop@master .ssh]$
Then on slave1, append slave1's own public key:
[hadoop@slave1 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@slave1 .ssh]$ cat authorized_keys
ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAQEAw8taarZ+/ndWV04MqGsnT5cKcYs5LqMmtocWSsIxfUttYpMj
wcgktjEPSByb/SFPE3alx0/Te7bjG8nFu2HHV4v++2jNfraqoBjIrO3/ITzHOSGduYmM4xbvBcXCAX5BS
awwbpKn8RifPM5M1ZbExFhdZ0njsYSBlq6ZAMV+2F77enfwCI6jB/WhtfClj4QpWuMTQ8O/gqaMb
M0OMrIuY84ssoYfDSpl2uUtGBBGY3cyyTDEbQukRH5doapSNPwZQs6lJSVIO7JWLGMfOQbvsqlS0r
1nly57I1b7hAMZcGdVWZy2CGclQX3s8a7vjpJ8+iTFtwiAdydFsP+aQ9ldUw==  hadoop@master
ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAQEAqhiMNhNlBZ1+aC+tU9O8HKTd7lSMmqhi7FcBKue/q/H37hy
Mp+PqS/BVYStvEhtHzcy+1/SJWKqSV0ut1Qh8zUo42w81KW/g1xCt5fAJLe61/XtC2WyTrwfVQbFVX
CPTpAarYJTlgy+ZgarD8Qg4hS642dmXKbSUQf/Mjbxd7PpcAZx1GCVOX3wck+7LIQJuLInlAFIXhyP0rq
+I80CX9u40utkgJQd6ZVvsqJdnB+eeXr08w16GEOSY8ER2Vksbw69PGJjjKz1eMFpCUNatlf3bgmLp+J
BOnlbgEizc21ogwcnyTXKCP9j3ZHTO2pDxAaHJ2hYJnOjr2+GSALzeOw==  hadoop@slave1
[hadoop@slave1 .ssh]$
Then transfer it from slave1 to slave2:
[hadoop@slave1 .ssh]$ scp authorized_keys slave2:~/.ssh
The authenticity of host 'slave2 (192.168.1.102)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,192.168.1.102' (RSA) to the list of known hosts.
hadoop@slave2's password:
authorized_keys 100%  790  0.8KB/s  00:00
[hadoop@slave1 .ssh]$
On slave2, append slave2's own public key:
[hadoop@slave2 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@slave2 .ssh]$ cat authorized_keys
ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAQEAw8taarZ+/ndWV04MqGsnT5cKcYs5LqMmtocWSsIxfUttYpMj
wcgktjEPSByb/SFPE3alx0/Te7bjG8nFu2HHV4v++2jNfraqoBjIrO3/ITzHOSGduYmM4xbvBcXCAX5BS
awwbpKn8RifPM5M1ZbExFhdZ0njsYSBlq6ZAMV+2F77enfwCI6jB/WhtfClj4QpWuMTQ8O/gqaMb
M0OMrIuY84ssoYfDSpl2uUtGBBGY3cyyTDEbQukRH5doapSNPwZQs6lJSVIO7JWLGMfOQbvsqlS0r
1nly57I1b7hAMZcGdVWZy2CGclQX3s8a7vjpJ8+iTFtwiAdydFsP+aQ9ldUw==  hadoop@master
ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAQEAqhiMNhNlBZ1+aC+tU9O8HKTd7lSMmqhi7FcBKue/q/H37hy
Mp+PqS/BVYStvEhtHzcy+1/SJWKqSV0ut1Qh8zUo42w81KW/g1xCt5fAJLe61/XtC2WyTrwfVQbFVX
CPTpAarYJTlgy+ZgarD8Qg4hS642dmXKbSUQf/Mjbxd7PpcAZx1GCVOX3wck+7LIQJuLInlAFIXhyP0rq
+I80CX9u40utkgJQd6ZVvsqJdnB+eeXr08w16GEOSY8ER2Vksbw69PGJjjKz1eMFpCUNatlf3bgmLp+J
BOnlbgEizc21ogwcnyTXKCP9j3ZHTO2pDxAaHJ2hYJnOjr2+GSALzeOw==  hadoop@slave1
ssh-rsa
AAAAB3NzaC1yc2EAAAABIwAAAQEAzyFZKYRXh1HIm+p//kh/P268u6CHQJ88M+vEcb0fEjpXhNoD
aVDceuYhQZxc0E/3dJRd86jaRNWnV+G+IPN00ykV2+UJhE2yjsdMa+Yqwy6XU14H25lMaImJGtxpoX
O+3kWKJZ1uGB0E2TU2nS+Epb8EI+6ezZ0ilQhgwpc0kQR/jN6d6hUKKK5yTxKZg4agn4QsOZhyBNQZ
X7tLofHELR970T5n7to19UejB1j09AVdME+TYf7q3reLYHtVA1NsD7+wQcPB3WOKCRhHU5Uas+Rd3
ukIP2/H8h13mJ5NHhq5FzxdVa62OPw9BKZVVO2vXp7SvxJG0MW0Aw8fO+AuRQ==
[hadoop@slave2 .ssh]$
Then copy this file back to master and slave1:
[hadoop@slave2 .ssh]$ scp authorized_keys master:~/.ssh/
The authenticity of host 'master (192.168.1.100)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.1.100' (RSA) to the list of known hosts.
hadoop@master's password:
authorized_keys 100%  1185  1.2KB/s  00:00
[hadoop@slave2 .ssh]$
[hadoop@slave2 .ssh]$ scp authorized_keys slave1:~/.ssh/
The authenticity of host 'slave1 (192.168.1.103)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,192.168.1.103' (RSA) to the list of known hosts.
hadoop@slave1's password:
authorized_keys 100%  1185  1.2KB/s  00:00
[hadoop@slave2 .ssh]$
Adjust the permissions on all three machines:
[hadoop@master .ssh]$ chmod 600 authorized_keys
The setup is now complete: ssh slave1 (for example) connects directly, with no password prompt.
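A quick end-to-end check (run on each node in turn): the loop below should print the three hostnames without a single password prompt.
[hadoop@master ~]$ for h in master slave1 slave2; do ssh $h hostname; done
If any hop still prompts, re-check the authorized_keys content and its 600 permissions on that host.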
Configure Hadoop
Upload Hadoop (as the hadoop user)
Extract it:
[hadoop@master installer]$ tar xzf hadoop-1.2.1.tar.gz
[hadoop@master installer]$ ll
total 62428
drwxr-xr-x 15 hadoop hadoop 4096 2013-07-23 hadoop-1.2.1
-rw-r--r-- 1 hadoop hadoop 63851630 12-01 13:20 hadoop-1.2.1.tar.gz
[hadoop@master installer]$
Create a symbolic link:
[hadoop@master installer]$ mv hadoop-1.2.1 ..
[hadoop@master installer]$ cd ..
[hadoop@master ~]$ ln -s hadoop-1.2.1/ hadoop
[hadoop@master ~]$ ll
total 8
lrwxrwxrwx 1 hadoop hadoop 13 12-01 13:22 hadoop -> hadoop-1.2.1/
drwxr-xr-x 15 hadoop hadoop 4096 2013-07-23 hadoop-1.2.1
drwxrwxr-x 2 hadoop hadoop 4096 12-01 13:22 installer
[hadoop@master ~]$
Configure the environment variables:
[hadoop@master ~]$ vim .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific aliases and functions
#Hadoop1.0
export JAVA_HOME=/usr/java/jdk1.6.0_27
export HADOOP1_HOME=/home/hadoop/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP1_HOME/bin
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib
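The new variables only apply to new shells; load them into the current session and check them (hadoop version should report Hadoop 1.2.1, since the hadoop symlink was created above):
[hadoop@master ~]$ source ~/.bashrc
[hadoop@master ~]$ hadoop version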
Copy it to slave1 and slave2:
[hadoop@master ~]$ scp .bashrc slave1:~
.bashrc 100%  308  0.3KB/s  00:00
[hadoop@master ~]$ scp .bashrc slave2:~
The authenticity of host 'slave2 (192.168.1.102)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,192.168.1.102' (RSA) to the list of known hosts.
.bashrc 100%  308  0.3KB/s  00:00
[hadoop@master ~]$
Configure the Hadoop files
[hadoop@master ~]$ cd hadoop
[hadoop@master hadoop]$ cd conf
[hadoop@master conf]$ vim hadoop-env.sh
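The transcript does not show the edit made inside hadoop-env.sh. For Hadoop 1.x the one required change there is to uncomment and set JAVA_HOME, matching the path used in .bashrc above:
# conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_27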
[hadoop@master conf]$ vim core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>
[hadoop@master conf]$ vim hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
[hadoop@master conf]$ vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
[hadoop@master conf]$ vim masters
master
[hadoop@master conf]$ vim slaves
slave1
slave2
Create data directories on the slave nodes
This must be done as root.
[hadoop@slave1 ~]$ su - root
Password:
[root@slave1 ~]# mkdir -p /data/hadoop
[root@slave1 ~]# chown hadoop.hadoop /data/hadoop/
[root@slave2 ~]# mkdir -p /data/hadoop
[root@slave2 ~]# chown hadoop.hadoop /data/hadoop/
[root@slave2 ~]#
Copy Hadoop to the slaves
[hadoop@master ~]$ scp -r hadoop-1.2.1/ slave1:~
Create the symlink on slave1:
[hadoop@slave1 ~]$ ln -s hadoop-1.2.1/ hadoop
[hadoop@master ~]$ scp -r hadoop-1.2.1/ slave2:~
[hadoop@slave2 ~]$ ln -s hadoop-1.2.1/ hadoop
[hadoop@slave2 ~]$ ll
total 8
lrwxrwxrwx 1 hadoop hadoop 13 12-01 13:51 hadoop -> hadoop-1.2.1/
drwxr-xr-x 11 hadoop hadoop 4096 12-01 13:51 hadoop-1.2.1
drwxrwxr-x 2 hadoop hadoop 4096 12-01 13:06 installer
Format the NameNode
[hadoop@master ~]$ hadoop namenode -format
14/12/01 13:49:36 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master/192.168.1.100
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.6.0_27
************************************************************/
14/12/01 13:49:37 INFO util.GSet: Computing capacity for map BlocksMap
14/12/01 13:49:37 INFO util.GSet: VM type = 32-bit

14/12/01 13:49:37 INFO util.GSet: 2.0% max memory = 101384192
14/12/01 13:49:37 INFO util.GSet: capacity = 2^19 = 524288 entries
14/12/01 13:49:37 INFO util.GSet: recommended=524288, actual=524288
14/12/01 13:49:37 INFO namenode.FSNamesystem: fsOwner=hadoop
14/12/01 13:49:37 INFO namenode.FSNamesystem: supergroup=supergroup
14/12/01 13:49:37 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/12/01 13:49:37 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/12/01 13:49:37 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/12/01 13:49:37 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/12/01 13:49:37 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/12/01 13:49:37 INFO common.Storage: Image file /home/hadoop/tmp/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
14/12/01 13:49:38 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/hadoop/tmp/dfs/name/current/edits
14/12/01 13:49:38 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/hadoop/tmp/dfs/name/current/edits
14/12/01 13:49:38 INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
14/12/01 13:49:38 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.1.100
************************************************************/
Start the cluster
[hadoop@master ~]$ start-all.sh
starting namenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave1.out
The authenticity of host 'master (192.168.1.100)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
master: Warning: Permanently added 'master,192.168.1.100' (RSA) to the list of known hosts.
master: starting secondarynamenode, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-master.out
slave1: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave1.out
slave2: starting tasktracker, logging to /home/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave2.out
[hadoop@master ~]$ jps
15276 NameNode
15630 Jps
15447 SecondaryNameNode
15519 JobTracker
[hadoop@slave1 ~]$ jps
15216 DataNode
15390 Jps
15312 TaskTracker
[hadoop@slave1 ~]$
[hadoop@slave2 ~]$ jps
15244 TaskTracker
15322 Jps
15149 DataNode
[hadoop@slave2 ~]$
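Beyond jps, the cluster can be checked with the standard Hadoop 1.x tools: hadoop dfsadmin -report should list two live datanodes (consistent with dfs.replication=2 above), and the built-in web UIs listen on their default ports:
[hadoop@master ~]$ hadoop dfsadmin -report
NameNode UI: http://master:50070
JobTracker UI: http://master:50030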