Deploying Hadoop in a Production Environment

The main differences between running Hadoop in production and in a test environment are:
1) Hostnames are resolved with DNS instead of hosts files
2) SSH keys are shared over NFS instead of being copied to each node by hand
3) Hadoop is distributed with a batch copy script instead of being copied host by host

I. Address planning and installation environment
1. IP address planning
IP address      Hostname             Hadoop role                Other roles
192.168.18.60   node0.myhadoop.com   namenode and jobtracker    DNS server, NFS server
192.168.18.61   node1.myhadoop.com   datanode and tasktracker
192.168.18.62   node2.myhadoop.com   datanode and tasktracker
2. Installation environment
Host machine         i3 CPU, 6 GB RAM, 1 TB HDD, Windows 7 x64
Virtualization       VMware Player 5.0.2
Guest OS             CentOS 6.3 x64
Java                 JDK 1.7.0_40 x64
Hadoop               1.2.1
BIND                 9.8.2

II. Installing the operating system
1. Install CentOS 6.3
After installation, disable the firewall:
service iptables stop
chkconfig iptables off
Disable SELinux:
vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=disabled
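A minimal non-interactive equivalent, assuming the stock /etc/selinux/config layout: setenforce 0 turns enforcement off for the running system, and the sed edit makes the change permanent at the next boot.
# disable SELinux for the current session
setenforce 0
# make it permanent (takes effect on next boot)
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config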
2. Install the JDK
[root@node0 grid]# tar -zxvf jdk-7u40-linux-x64.tar.gz
[root@node0 grid]# mv jdk1.7.0_40/ /usr/
3. Environment variables
[root@node0 etc]# vi /etc/profile
export JAVA_HOME=/usr/jdk1.7.0_40
export HADOOP_PREFIX=/home/grid/hadoop-1.2.1
export HADOOP_COMMON_HOME=${HADOOP_PREFIX}
export HADOOP_CONF_DIR=${HADOOP_PREFIX}/conf
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:${HADOOP_PREFIX}/bin:${HADOOP_PREFIX}/sbin:$PATH
[root@node0 etc]# source /etc/profile
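A quick sanity check after sourcing the profile (the exact version output will vary):
[root@node0 etc]# java -version
[root@node0 etc]# echo $HADOOP_PREFIX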

4. Set up a local yum repository
In this environment yum could not install packages from an online repository, so a local yum repository built from the CentOS installation media is used instead.
Reference:
Installing BIND from a local repository on CentOS 6.x
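The referenced article is not reproduced here; a minimal sketch of a media-based repository, assuming the CentOS 6.3 DVD or ISO is available as /dev/cdrom and using the repo id "Media" that appears in the yum output below:
[root@node0 ~]# mkdir -p /media/cdrom
[root@node0 ~]# mount /dev/cdrom /media/cdrom
[root@node0 ~]# cat > /etc/yum.repos.d/media.repo <<EOF
[Media]
name=CentOS 6.3 Media
baseurl=file:///media/cdrom
enabled=1
gpgcheck=0
EOF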

III. Installing DNS
1. Install from the local repository
[root@node0 ~]# yum install bind_libs bind bind-utils
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
Setting up Install Process
No package bind_libs available.
Package 32:bind-utils-9.8.2-0.10.rc1.el6.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package bind.x86_64 32:9.8.2-0.10.rc1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
bind x86_64 32:9.8.2-0.10.rc1.el6 Media 4.0 M
Transaction Summary
================================================================================
Install 1 Package(s)
Total download size: 4.0 M
Installed size: 7.2 M
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Warning: RPMDB altered outside of yum.
Installing : 32:bind-9.8.2-0.10.rc1.el6.x86_64 1/1
/sbin/ldconfig: /usr/lib64/libhdfs.so.0 is not a symbolic link
/sbin/ldconfig: /usr/lib64/libhadoop.so.1 is not a symbolic link
Verifying : 32:bind-9.8.2-0.10.rc1.el6.x86_64 1/1
Installed:
bind.x86_64 32:9.8.2-0.10.rc1.el6

Complete!


[root@node0 ~]# yum install bind_libs bind bind-utils bind-chroot
Loaded plugins: fastestmirror, refresh-packagekit, security
Loading mirror speeds from cached hostfile
Setting up Install Process
No package bind_libs available.
Package 32:bind-9.8.2-0.10.rc1.el6.x86_64 already installed and latest version
Package 32:bind-utils-9.8.2-0.10.rc1.el6.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package bind-chroot.x86_64 32:9.8.2-0.10.rc1.el6 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
bind-chroot x86_64 32:9.8.2-0.10.rc1.el6 Media 70 k
Transaction Summary
================================================================================
Install 1 Package(s)
Total download size: 70 k
Installed size: 0
Is this ok [y/N]: y
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : 32:bind-chroot-9.8.2-0.10.rc1.el6.x86_64 1/1
Verifying : 32:bind-chroot-9.8.2-0.10.rc1.el6.x86_64 1/1
Installed:
bind-chroot.x86_64 32:9.8.2-0.10.rc1.el6
Complete!
bind-chroot confines BIND's root directory to a dedicated directory tree, which improves system security. Note that the package name used in the commands above should be bind-libs (hyphen rather than underscore); that is why yum reports "No package bind_libs available", and bind-libs is in fact already installed as a dependency of the other BIND packages.
Check the installation:
[root@node0 named]# rpm -qa bind
bind-9.8.2-0.10.rc1.el6.x86_64
[root@node0 named]# rpm -qa bind-chroot
bind-chroot-9.8.2-0.10.rc1.el6.x86_64
[root@node0 named]# rpm -qa bind-utils
bind-utils-9.8.2-0.10.rc1.el6.x86_64
将"$AddUnixListenSocket /var/named/chroot/dev/log"加入/etc/rsyslog.conf文件中,不然rsyslog守护程序将无法记载bind日志。
[root@node0 etc]# vi rsyslog.conf
$AddUnixListenSocket /var/named/chroot/dev/log
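Restart rsyslog so the new listen socket takes effect:
[root@node0 etc]# service rsyslog restart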


2. Configure /etc/named.conf
[root@node0 etc]# vi named.conf
options {
        listen-on port 53 { any; };        // changed from localhost to any
        listen-on-v6 port 53 { ::1; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query { any; };              // changed from localhost to any
        recursion yes;
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";
};
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
//zone "." IN {
//      type hint;
//      file "named.ca";
//};
include "/etc/named.rfc1912.zones";
//include "/etc/named.root.key";           // commented out

3. Configure named.rfc1912.zones
[root@node0 etc]# vi named.rfc1912.zones
Change the contents to the following:
zone "myhadoop.com" IN {
        type master;
        file "myhadoop.com.zone";
        allow-update { none; };
};

zone "18.168.192.in-addr.arpa" IN {
        type master;
        file "18.168.192.zone";
        allow-update { none; };
};


4. Create the zone files
Create the forward zone file myhadoop.com.zone and the reverse zone file 18.168.192.zone. With bind-chroot installed, both files live under /var/named/chroot/var/named.
[root@node0 named]# vi myhadoop.com.zone
$TTL 1D
@ IN SOA node0.myhadoop.com. root.myhadoop.com. (
20140221 ; serial
1D ; refresh
1H ; retry
1W ; expire
3H ) ; minimum
@ IN NS node0.myhadoop.com.
node0.myhadoop.com. IN A 192.168.18.60
node1.myhadoop.com. IN A 192.168.18.61
node2.myhadoop.com. IN A 192.168.18.62
node3.myhadoop.com. IN A 192.168.18.63

[root@node0 named]# vi 18.168.192.zone
$TTL 86400
@ IN SOA node0.myhadoop.com. root.myhadoop.com. (
1997022700 ; Serial
28800 ; Refresh
14400 ; Retry
3600000 ; Expire
86400 ) ; Minimum
@ IN NS node0.myhadoop.com.
60 IN PTR node0.myhadoop.com.
61 IN PTR node1.myhadoop.com.
62 IN PTR node2.myhadoop.com.
63 IN PTR node3.myhadoop.com.

[root@node0 named]# chgrp named *
[root@node0 named]# ls -l
total 8
-rwxr-xr-x 1 root named 532 Feb 21 12:23 18.168.192.zone
-rwxr-xr-x 1 root named 336 Feb 21 11:38 myhadoop.com.zone

5. Edit /etc/resolv.conf on every node
[root@node0 named]# vi /etc/resolv.conf
nameserver 192.168.18.60

6. Verify the zone files
[root@node0 named]# named-checkzone node.myhadoop.com /var/named/chroot/var/named/myhadoop.com.zone
/var/named/chroot/var/named/myhadoop.com.zone:9: ignoring out-of-zone data (node0.myhadoop.com)
/var/named/chroot/var/named/myhadoop.com.zone:10: ignoring out-of-zone data (node1.myhadoop.com)
/var/named/chroot/var/named/myhadoop.com.zone:11: ignoring out-of-zone data (node2.myhadoop.com)
/var/named/chroot/var/named/myhadoop.com.zone:12: ignoring out-of-zone data (node3.myhadoop.com)
zone node.myhadoop.com/IN: loaded serial 20140221
OK
[root@node0 named]# named-checkzone 192.168.18.60 /var/named/chroot/var/named/18.168.192.zone
zone 192.168.18.60/IN: loaded serial 1997022700
OK
Note: named-checkzone expects the zone name as its first argument. The checks above were run with node.myhadoop.com and 192.168.18.60; the correct zone names are myhadoop.com and 18.168.192.in-addr.arpa, which is also why the forward check reports the A records as out-of-zone data. Rerun with the correct zone names for a clean result.

7. Start the service
[root@node0 named]# service named restart

Check:
[root@node0 named]# nslookup node0.myhadoop.com
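Forward and reverse lookups against the new server can also be spot-checked with dig (installed as part of bind-utils):
[root@node0 named]# dig @192.168.18.60 node1.myhadoop.com +short
[root@node0 named]# dig @192.168.18.60 -x 192.168.18.61 +short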

8. Enable the service at boot
[root@node0 named]# chkconfig named on


IV. Configuring/installing NFS
1. Check
[root@node0 named]# rpm -qa |grep nfs
nfs-utils-1.2.3-26.el6.x86_64
nfs4-acl-tools-0.3.3-6.el6.x86_64
nfs-utils-lib-1.1.5-4.el6.x86_64
This shows NFS is already installed; if not, it can be installed with:
yum install nfs-utils

2. Start the services
[root@node0 named]# service rpcbind restart
[root@node0 named]# service nfs restart
[root@node0 named]# service nfslock restart

3. Enable the services at boot
[root@node0 named]# chkconfig rpcbind on
[root@node0 named]# chkconfig nfs on
[root@node0 named]# chkconfig nfslock on

4. Configure the shared directory
[root@node0 ~]# vi /etc/exports
Add the line (note that the exported path must be written without a space):
/home/grid/share *(insecure,rw,async,no_root_squash)
Explanation:
/home/grid/share is the directory exported over NFS
* allows clients from any IP address
rw means read-write; ro would be read-only
sync writes changes to disk immediately; async writes to cache first
no_root_squash gives the remote root user full root access to the export (without it, remote root access is effectively read-only)
Restart the NFS service:
[root@node0 ~]# service nfs restart
Shutting down NFS daemon: [ OK ]
Shutting down NFS mountd: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS mountd: [ OK ]
Stopping RPC idmapd: [ OK ]
Starting RPC idmapd: [ OK ]
Starting NFS daemon: [ OK ]
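As an alternative to a full restart after editing /etc/exports, the export table can be reloaded and inspected with exportfs:
[root@node0 ~]# exportfs -ra
[root@node0 ~]# exportfs -v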

5. Configure the mount points
Show the local export list, i.e. node0's shared directory:
[root@node0 etc]# showmount -e localhost
Export list for localhost:
/home/grid/share *
[root@node0 etc]# mkdir /nfs_share
[root@node0 etc]# mount -t nfs 192.168.18.60:/home/grid/share /nfs_share/
[root@node0 etc]# cd /nfs_share/
-bash: cd: /nfs_share/: Permission denied
[root@node0 etc]# su - grid
[grid@node0 ~]$ cd /nfs_share/
Do the same on the other clients (node1, node2):
node1:
[root@node1 ~]# mkdir /nfs_share
[root@node1 ~]# mount -t nfs 192.168.18.60:/home/grid/share /nfs_share/
node2:
[root@node2 ~]# mkdir /nfs_share
[root@node2 ~]# mount -t nfs 192.168.18.60:/home/grid/share /nfs_share/

6. Mount the NFS share automatically at boot
[root@node0 ~]# vi /etc/fstab
Add the line:
192.168.18.60:/home/grid/share /nfs_share nfs defaults 0 0

Do the same on node1 and node2.
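The fstab entry can be verified without a reboot; assuming the share was already mounted manually above, unmount it first:
[root@node0 ~]# umount /nfs_share
[root@node0 ~]# mount -a
[root@node0 ~]# df -h /nfs_share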



V. Passwordless login via NFS
The public key from each node's RSA key pair is consolidated into a single authorized_keys file in the shared directory. The benefit is that when a new node joins the cluster, its public key no longer has to be added to every other node individually: appending it to the shared authorized_keys file is enough, and all nodes immediately see the updated key file.
1. Generate the key pairs
[grid@node0 ~]$ ssh-keygen -t rsa

Generate a key pair on each of the other nodes in the same way.
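If an empty passphrase is acceptable, key generation can be done non-interactively on every node with the same command (a sketch; adjust to local security policy):
[grid@node0 ~]$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa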

2. Consolidate authorized_keys
[grid@node0 .ssh]$ cp ~/.ssh/id_rsa.pub authorized_keys
[grid@node0 .ssh]$ ssh node1.myhadoop.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[grid@node0 .ssh]$ ssh node2.myhadoop.com cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
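The symlinks created in the next step point at /nfs_share/.ssh/authorized_keys, which is /home/grid/share/.ssh/authorized_keys on node0, so the consolidated file still has to be placed there. The original post does not show this step; an assumed completion, consistent with the permissions discussed in the Problems section:
[grid@node0 .ssh]$ mkdir -p ~/share/.ssh
[grid@node0 .ssh]$ chmod 700 ~/share ~/share/.ssh
[grid@node0 .ssh]$ cp ~/.ssh/authorized_keys ~/share/.ssh/authorized_keys
[grid@node0 .ssh]$ chmod 644 ~/share/.ssh/authorized_keys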

3. Create symlinks to the shared authorized_keys file
On node1:
[grid@node1 .ssh]$ ln -s /nfs_share/.ssh/authorized_keys ~/.ssh/authorized_keys
On node2:
[grid@node2 .ssh]$ ln -s /nfs_share/.ssh/authorized_keys ~/.ssh/authorized_keys

4. Verify
[grid@node0 .ssh]$ ssh node1.myhadoop.com
Last login: Fri Feb 21 15:40:14 2014 from 192.168.18.60
[grid@node1 ~]$

VI. Installing Hadoop
Install the JDK on every node first.
1. Unpack Hadoop
[grid@node0 ~]$ tar -zxvf hadoop-1.2.1.tar.gz

2. Edit the configuration files
In /home/grid/hadoop-1.2.1/conf:
[grid@node0 conf]$ vi hadoop-env.sh
Set the JDK path:
export JAVA_HOME=/usr/jdk1.7.0_40

[grid@node0 conf]$ vi core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://node0.myhadoop.com:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/grid/hadoop-1.2.1/tmp</value>
  </property>
</configuration>

[grid@node0 conf]$ vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

[grid@node0 conf]$ vi mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>node0.myhadoop.com:9001</value>
  </property>
</configuration>

[grid@node0 conf]$ vi masters
node0.myhadoop.com

[grid@node0 conf]$ vi slaves
node1.myhadoop.com
node2.myhadoop.com

3. Generate the distribution script with awk
[grid@node0 conf]$ cat slaves | awk '{print "scp -rp hadoop-1.2.1 grid@"$1":/home/grid"}' > cphadoop.sh
[grid@node0 conf]$ cat cphadoop.sh
scp -rp hadoop-1.2.1 grid@node1.myhadoop.com:/home/grid
scp -rp hadoop-1.2.1 grid@node2.myhadoop.com:/home/grid
[grid@node0 conf]$ chmod 755 cphadoop.sh

4. Distribute
[grid@node0 ~]$ cd
[grid@node0 ~]$ hadoop-1.2.1/conf/cphadoop.sh
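The same distribution can be done without a generated script; an equivalent one-liner that loops over the slaves file directly (run from /home/grid):
[grid@node0 ~]$ for host in $(cat hadoop-1.2.1/conf/slaves); do scp -rp hadoop-1.2.1 grid@$host:/home/grid; done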

5. Format the namenode
[grid@node0 hadoop-1.2.1]$ bin/hadoop namenode -format

6. Start the Hadoop cluster
[grid@node0 hadoop-1.2.1]$ bin/start-all.sh
[grid@node0 hadoop-1.2.1]$ hadoop dfsadmin -report
Configured Capacity: 79413821440 (73.96 GB)
Present Capacity: 60468486174 (56.32 GB)
DFS Remaining: 60468428800 (56.32 GB)
DFS Used: 57374 (56.03 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)
Name: 192.168.18.61:50010
Decommission Status : Normal
Configured Capacity: 39706910720 (36.98 GB)
DFS Used: 28687 (28.01 KB)
Non DFS Used: 9218297841 (8.59 GB)
DFS Remaining: 30488584192(28.39 GB)
DFS Used%: 0%
DFS Remaining%: 76.78%
Last contact: Fri Feb 21 17:06:28 CST 2014

Name: 192.168.18.62:50010
Decommission Status : Normal
Configured Capacity: 39706910720 (36.98 GB)
DFS Used: 28687 (28.01 KB)
Non DFS Used: 9727037425 (9.06 GB)
DFS Remaining: 29979844608(27.92 GB)
DFS Used%: 0%
DFS Remaining%: 75.5%
Last contact: Fri Feb 21 17:06:27 CST 2014

7. Verify
[grid@node0 hadoop-1.2.1]$ jps
3560 SecondaryNameNode
3634 JobTracker
3788 Jps
3392 NameNode
[grid@node0 hadoop-1.2.1]$ hadoop fs -mkdir test
[grid@node0 hadoop-1.2.1]$ hadoop fs -ls
Found 1 items
drwxr-xr-x - grid supergroup 0 2014-02-21 17:08 /user/grid/test
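A simple end-to-end smoke test can be run with the bundled example jobs; the jar name hadoop-examples-1.2.1.jar is assumed to match the standard 1.2.1 tarball:
[grid@node0 hadoop-1.2.1]$ hadoop fs -put conf/core-site.xml test/
[grid@node0 hadoop-1.2.1]$ hadoop jar hadoop-examples-1.2.1.jar wordcount test wc_out
[grid@node0 hadoop-1.2.1]$ hadoop fs -cat 'wc_out/part-*' | head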

VII. Problems
1. Mount problem
Mounting the shared directory failed with an access-denied error:
[root@node0 nfs_share]# mount -t nfs 192.168.18.60:/home/grid/share /nfs_share/
mount.nfs: access denied by server while mounting 192.168.18.60:/home/grid/share
Fix:
Correct the export entry: the exported path is /home/grid/share, written without a space.
[root@node0 nfs_share]# vi /etc/exports
/home/grid/share *(insecure,rw,async,no_root_squash)
2. Access problem
After passwordless login was configured, ssh to a remote machine still prompted for a password every time. The cause was directory permissions:
the /home/grid/share and /home/grid/share/.ssh directories must be mode 700,
and the authorized_keys file must be mode 644.
[root@node0 grid]# chmod 700 share
[root@node0 grid]# cd share
[root@node0 share]# chmod 700 .ssh
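The 644 permission on authorized_keys mentioned above is not shown as a command in the original; the missing step would be:
[root@node0 share]# chmod 644 .ssh/authorized_keys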

3. Namenode format problem
[grid@node0 bin]$ ./hadoop namenode -format
14/02/21 16:44:54 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = java.net.UnknownHostException: node0: node0: Name or service not known
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf ... branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.7.0_40
Fix:
The hostname was node0, which DNS cannot resolve; the hostname should be node0.myhadoop.com.
Change it for the running system:
hostname node0.myhadoop.com
Make it permanent:
vi /etc/sysconfig/network
HOSTNAME=node0.myhadoop.com

Do the same on the other nodes.






Reposted from:

Deploying Hadoop in a Production Environment
http://f.dataguru.cn/hadoop-244415-1-1.html
(Source: Dataguru / 炼数成金)
