Quick Hadoop Deployment - PPVKE

1. Environment Overview
1.1 Installation nodes
172.16.17.31
172.16.17.32
172.16.17.33

1.2 Working directories
mkdir -p /data/workDir
mkdir -p /data/workDir/softBefore
mkdir -p /data/workDir/softAfter
mkdir -p /data/workDir/dataPath


2. Environment Preparation
Disable the firewall on all nodes (and keep it disabled across reboots):
service iptables status
service iptables stop
chkconfig iptables off

2.1 Set the hostnames
vi /etc/sysconfig/network
Set HOSTNAME to bigdata1, bigdata2, and bigdata3 respectively, then apply it immediately on the matching node:
hostname bigdata1
hostname bigdata2
hostname bigdata3


2.2 Edit the hosts mapping file on all nodes: vi /etc/hosts
172.16.17.31 bigdata1
172.16.17.32 bigdata2
172.16.17.33 bigdata3

2.3 Configure passwordless SSH
2.3.1 On all machines:
rm -rf /root/.ssh
ssh-keygen -t rsa
2.3.2 On bigdata1:
cd /root/.ssh
cp id_rsa.pub authorized_keys
2.3.3 On every node except bigdata1 (send their public keys to bigdata1):
ssh-copy-id -i bigdata1
2.3.4 On bigdata1, distribute the merged authorized_keys to the other nodes:
scp /root/.ssh/authorized_keys bigdata2:/root/.ssh/
scp /root/.ssh/authorized_keys bigdata3:/root/.ssh/
Verify passwordless login from any node, e.g.: ssh bigdata2 date

2.4 Install the JDK
2.4.1 Extract
cd /data/workDir/softBefore
tar -zxvf jdk1.7.0_60.tar.gz
2.4.2 Configure the Java environment variables: vi /etc/profile
export JAVA_HOME=/data/workDir/softBefore/jdk1.7.0_60
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=.:$PATH:$JAVA_HOME/bin
2.4.3 Verify
source /etc/profile
echo $JAVA_HOME
java -version
2.4.4 Configure the other machines (copy the tarball, then repeat 2.4.1-2.4.3 on bigdata2 and bigdata3):
cd /data/workDir/softBefore
scp jdk1.7.0_60.tar.gz bigdata2:/data/workDir/softBefore/
scp jdk1.7.0_60.tar.gz bigdata3:/data/workDir/softBefore/


2.5 Install ZooKeeper
2.5.1 Extract
tar -zxvf zookeeper-3.4.5.tar.gz
2.5.2 Configure environment variables: vi /etc/profile
#Zookeeper
export ZOOKEEPER_HOME=/data/workDir/softBefore/zookeeper-3.4.5
export PATH=$PATH:$ZOOKEEPER_HOME/bin

source /etc/profile
echo $ZOOKEEPER_HOME
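The guide never shows zoo.cfg, which must exist before the ZooKeeper directory is copied to the other nodes and started. The following is a minimal sketch: the timing values are common defaults I am assuming, and only dataDir and the server.N entries follow from the rest of this document (each server.N id must match that node's myid value):

```shell
# Minimal assumed zoo.cfg; dataDir matches the myid path used below.
ZK_HOME=/data/workDir/softBefore/zookeeper-3.4.5
mkdir -p "$ZK_HOME/conf" "$ZK_HOME/data"
cat > "$ZK_HOME/conf/zoo.cfg" <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/workDir/softBefore/zookeeper-3.4.5/data
clientPort=2181
server.1=bigdata1:2888:3888
server.2=bigdata2:2888:3888
server.3=bigdata3:2888:3888
EOF
```

Writing zoo.cfg before the scp in 2.5.3 means all three nodes receive the same configuration.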
2.5.3 Configure the other machines
cd /data/workDir/softBefore
scp -rq zookeeper-3.4.5 bigdata2:/data/workDir/softBefore/
scp -rq zookeeper-3.4.5 bigdata3:/data/workDir/softBefore/

Set each node's id: vi /data/workDir/softBefore/zookeeper-3.4.5/data/myid
bigdata1	1
bigdata2	2
bigdata3	3
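The same myid files can be written non-interactively; this is just a convenience equivalent of the table above (run only the matching line on each host):

```shell
# Write the per-node ZooKeeper id; values must match the server.N
# entries in zoo.cfg. Run only the line that matches the current host.
ZK_DATA=/data/workDir/softBefore/zookeeper-3.4.5/data
mkdir -p "$ZK_DATA"
echo 1 > "$ZK_DATA/myid"   # on bigdata1
echo 2 > "$ZK_DATA/myid"   # on bigdata2
echo 3 > "$ZK_DATA/myid"   # on bigdata3
```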

2.5.4 Start and verify (run on all three nodes; status reports leader/follower once a quorum is up)
zkServer.sh start
zkServer.sh status

3. Install Hadoop
cd /data/workDir/softBefore
tar -zxvf hadoop-2.6.0.tar.gz 
ln -s hadoop-2.6.0 hadoop

Configure the Hadoop environment variables: vi /etc/profile
export HADOOP_HOME=/data/workDir/softBefore/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

mkdir -p /data/workDir/dataPath/hadoop/hadoopPidDir
mkdir -p /data/workDir/dataPath/hadoop/hadoopTmpDir
mkdir -p /data/workDir/dataPath/hadoop/dfsDataDir
mkdir -p /data/workDir/dataPath/hadoop/dfsNamenodeNameDir
mkdir -p /data/workDir/dataPath/hadoop/DfsJournalnodeDir
mkdir -p /data/workDir/dataPath/hadoop/mapredLocalDir

Modify the Hadoop configuration files under $HADOOP_HOME/etc/hadoop (core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, slaves).
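The source does not include the actual configuration. As an illustration only, a bare non-HA core-site.xml and hdfs-site.xml could be generated as below, assuming bigdata1 as the NameNode on port 9000 and the data directories created above (the DfsJournalnodeDir above hints that the original cluster used QJM HA, whose extra settings are omitted here):

```shell
# Assumed minimal non-HA config; not the original document's settings.
HADOOP_CONF=/data/workDir/softBefore/hadoop/etc/hadoop
mkdir -p "$HADOOP_CONF"
cat > "$HADOOP_CONF/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://bigdata1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/workDir/dataPath/hadoop/hadoopTmpDir</value>
  </property>
</configuration>
EOF
cat > "$HADOOP_CONF/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/workDir/dataPath/hadoop/dfsNamenodeNameDir</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/workDir/dataPath/hadoop/dfsDataDir</value>
  </property>
</configuration>
EOF
```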

Copy to the other nodes (recreate the hadoop symlink on each target):
scp -r hadoop-2.6.0 bigdata2:/data/workDir/softBefore/
scp -r hadoop-2.6.0 bigdata3:/data/workDir/softBefore/

4. Initialize and start the cluster
 
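This section is empty in the source. For a plain (non-HA) HDFS + YARN setup, the first initialization and start usually look like the sketch below; an HA/QJM cluster would additionally require starting the journalnodes and formatting ZKFC first. Treat this as a generic Hadoop 2.x sequence, not the original author's exact procedure:

```shell
# Hedged first-start sketch for a simple non-HA cluster.
cd /data/workDir/softBefore/hadoop
bin/hdfs namenode -format      # run once, on the NameNode host (bigdata1) only
sbin/start-dfs.sh              # NameNode + DataNodes (+ SecondaryNameNode)
sbin/start-yarn.sh             # ResourceManager + NodeManagers
bin/hdfs dfsadmin -report      # confirm all DataNodes have registered
jps                            # per-node daemon check
```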
5. Run test examples
Example 1: WordCount
bin/hadoop fs -mkdir -p hdfs://bigdata1:9000/firstTest/input

vi testData1.txt
PPVKE Zeus
Flume
Kafka Kafka
Storm Storm Storm

bin/hadoop fs -put testData1.txt hdfs://bigdata1:9000/firstTest/input
 
bin/hadoop jar /data/workDir/softBefore/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar wordcount hdfs://bigdata1:9000/firstTest/input hdfs://bigdata1:9000/firstTest/output

Write throughput test (TestDFSIO lives in the jobclient tests jar)
bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 5MB -resFile /var/lib/hadoop-hdfs/TestDFSIO_results_write.log

Read throughput test
bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 5MB -resFile /var/lib/hadoop-hdfs/TestDFSIO_results_read.log

Sort test (TeraGen / TeraSort / TeraValidate)
bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar teragen 100 /user/hive/warehouse/test/teragen-input

bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar terasort /user/hive/warehouse/test/teragen-input /user/hive/warehouse/test/teragen-output

bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar teravalidate /user/hive/warehouse/test/teragen-output /user/hive/warehouse/test/teragen-validate 
If the sort is correct, the reduce stage emits no output records:
14/04/14 14:57:00 INFO mapred.JobClient:     Reduce output records=0

Work completed:
1. Built a 3-node Hadoop cluster;
2. Benchmarked cluster performance;
3. Produced the related documentation;