Online datanode scale-out script (using Fabric)

Environment:

Before the scale-out, my cluster consisted of three machines and was already up and running.

Hadoop version: hadoop-2.7.3

192.168.40.140    hd1         NameNode

192.168.40.144    hd4         the datanode machine about to be added

Step 1: exchange the root users' SSH keys between the NameNode and the datanode, so the script does not prompt for passwords while it runs.

Generate the public/private key pair:

ssh-keygen -t rsa

Copy the public key to the target host:

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.40.140

Run the commands above on both machines (changing the target IP accordingly).
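Before moving on, it is worth verifying that passwordless login actually works in each direction (a quick sanity check; swap in the other machine's IP on each side):

ssh root@192.168.40.144 hostname

If the key exchange succeeded, this prints the remote hostname without asking for a password.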

Step 2: the script, which I named amplify_datanode.py

from fabric.api import run, local, env, roles, execute

# Hosts are supplied through roles, so env.hosts is not needed.
env.roledefs = {'master': ['192.168.40.140'], 'datanode': ['192.168.40.144']}


@roles('master')
def master(local_ip, hostname):
    # Runs on the NameNode: register the new node, then push the JDK and Hadoop to it.
    run('echo "%s %s" >> /etc/hosts' % (local_ip, hostname))
    run('echo "%s" >> /usr/local/hadoop/hadoop-2.7.3/etc/hadoop/slaves' % hostname)
    run('scp -r /usr/jdk1.8.0_131 %s:/usr' % hostname)
    run('scp -r /usr/local/hadoop %s:/usr/local' % hostname)


def datanode(local_ip, hostname):
    # local() runs where fab is invoked, so run this fabfile on the new datanode itself.
    local('useradd hadoop')
    local('echo "hadoop" | passwd --stdin hadoop')
    local('mkdir -p /home/hadoop/.ssh')
    local('cp ~/.ssh/authorized_keys /home/hadoop/.ssh')
    # chown after the copy so .ssh and its contents end up owned by hadoop.
    local('chown -R hadoop:hadoop /home/hadoop')
    local('chown -R hadoop:hadoop /usr/local/hadoop')
    # Java environment; the doubled backslash writes a literal $PATH into the
    # file so it expands at login time, not at write time.
    local('echo "export JAVA_HOME=/usr/jdk1.8.0_131" >> /etc/profile')
    local('echo "export JAVA_BIN=/usr/jdk1.8.0_131/bin" >> /etc/profile')
    local('echo "export PATH=\\$PATH:/usr/jdk1.8.0_131/bin" >> /etc/profile')
    local('echo "export CLASSPATH=.:/usr/jdk1.8.0_131/lib/dt.jar:/usr/jdk1.8.0_131/lib/tools.jar" >> /etc/profile')
    local('echo "export JAVA_HOME JAVA_BIN PATH CLASSPATH" >> /etc/profile')
    # Hostname; /etc/sysconfig/network takes no spaces around "=".
    local('echo "NETWORKING=yes" > /etc/sysconfig/network')
    local('echo "HOSTNAME=%s" >> /etc/sysconfig/network' % hostname)
    # Make every cluster member resolvable from the new node.
    local('echo "192.168.40.140 hd1" >> /etc/hosts')
    local('echo "192.168.40.141 hd2" >> /etc/hosts')
    local('echo "192.168.40.142 hd3" >> /etc/hosts')
    local('echo "%s %s" >> /etc/hosts' % (local_ip, hostname))
    # Hadoop environment for the hadoop user; sourcing the file here would only
    # affect the subshell that local() spawns, so it is simply written.
    local('echo "export JAVA_HOME=/usr/jdk1.8.0_131" >> /home/hadoop/.bash_profile')
    local('echo "export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.3" >> /home/hadoop/.bash_profile')
    local('echo "export PATH=\\$PATH:/usr/jdk1.8.0_131/bin:/usr/local/hadoop/hadoop-2.7.3/bin" >> /home/hadoop/.bash_profile')
    local('echo "export JAVA_HOME HADOOP_HOME PATH" >> /home/hadoop/.bash_profile')
    # Start the datanode as the hadoop user so logs and pid files get the right owner.
    local('su - hadoop -c "/usr/local/hadoop/hadoop-2.7.3/sbin/hadoop-daemon.sh start datanode"')


def do_work(local_ip, hostname):
    # execute() honors the @roles decorator; calling master() directly would not.
    execute(master, local_ip, hostname)
    execute(datanode, local_ip, hostname)
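For reference, once both tasks have run, the slaves file on hd1 should end with the new hostname (hd2 and hd3 are assumed here to be the existing slave entries; yours may differ):

cat /usr/local/hadoop/hadoop-2.7.3/etc/hadoop/slaves
hd2
hd3
hd4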
Step 3: run the script

fab -f amplify_datanode.py do_work:local_ip=192.168.40.144,hostname=hd4

Wait for the script to finish, and the new node is basically ready.
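If fab complains that it cannot find the task, list the tasks Fabric actually sees in the file (standard Fabric 1.x usage):

fab -f amplify_datanode.py -l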

Step 4: refresh the node list on the NameNode

...bin/hdfs dfsadmin -refreshNodes

...sbin/start-balancer.sh
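To confirm that the new datanode actually joined, you can pull the cluster report on the NameNode with the standard HDFS admin command:

bin/hdfs dfsadmin -report

The report should now list hd4 among the live datanodes.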
