Hadoop Configuration

I. Basic Configuration
(1) hostnamectl set-hostname master / slave1 / slave2 (run on each node with its own name)
(2) vi /etc/hosts (map all three hostnames to their IPs, on all three nodes), for example:
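The IPs here match the NTP section below; adjust to your own network:
192.168.196.101 master
192.168.196.102 slave1
192.168.196.103 slave2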
(3) Change the time zone
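On CentOS 7 this is a single command; Asia/Shanghai is an assumed zone, substitute your own:
timedatectl set-timezone Asia/Shanghai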
(4) vi /etc/ntp.conf
master (192.168.196.101; gateway 192.168.196.2):
server 127.127.1.0
fudge 127.127.1.0 stratum 10
slaves (192.168.196.102/103; gateway 192.168.196.2):
server 192.168.196.101
(fudge applies only to the local reference-clock driver 127.127.1.0, so the slaves need only the server line)
(5) master: service ntpd start;chkconfig ntpd on
slave: ntpdate -u 192.168.196.101
(6) Scheduled task: keep the slaves synced via cron
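A minimal crontab entry for each slave; the 30-minute interval is an assumption, the command mirrors step (5):
crontab -e
*/30 * * * * /usr/sbin/ntpdate -u 192.168.196.101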
(7) ssh-keygen -t rsa -P ''
(8) ssh-copy-id master/slave1/slave2; then ssh localhost twice (the first accepts the host key, the second confirms passwordless login); see the sketch below
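On each node the keys can be distributed in one pass (a sketch; assumes the hostnames from /etc/hosts and the root account used elsewhere in these notes):
for h in master slave1 slave2; do ssh-copy-id root@$h; done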
(9) mkdir /usr/java/ (on all three nodes)
(10) tar -zxvf /usr/package277/jdk1.8.0_221.tar.gz -C /usr/java (the archive name under /usr/package277 may differ; the extracted directory must match JAVA_HOME below)
(11) vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_221
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
(12) scp -r /usr/java/jdk1.8.0_221/ root@slave1:/usr/java/
       scp -r /usr/java/jdk1.8.0_221/ root@slave2:/usr/java/
(13) scp /etc/profile root@slave1:/etc/
       scp /etc/profile root@slave2:/etc/
       slave: source /etc/profile
(14) echo $JAVA_HOME
       java -version


II. ZooKeeper Deployment
(1) mkdir /usr/zookeeper/ (on all three nodes)
 tar -zxvf /usr/package277/zookeeper-3.4.14.tar.gz -C /usr/zookeeper/
(2) vi /etc/profile
export ZOOKEEPER_HOME=/usr/zookeeper/zookeeper-3.4.14
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source /etc/profile
(3) cd /usr/zookeeper/zookeeper-3.4.14
mkdir zkdata zkdatalog   (zkdatalog matches the dataLogDir set below)
echo 1 > /usr/zookeeper/zookeeper-3.4.14/zkdata/myid
(4) cd /usr/zookeeper/zookeeper-3.4.14/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
dataDir=/usr/zookeeper/zookeeper-3.4.14/zkdata
dataLogDir=/usr/zookeeper/zookeeper-3.4.14/zkdatalog
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888
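For reference, the assembled zoo.cfg; tickTime, initLimit, syncLimit, and clientPort are the zoo_sample.cfg defaults:
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/usr/zookeeper/zookeeper-3.4.14/zkdata
dataLogDir=/usr/zookeeper/zookeeper-3.4.14/zkdatalog
server.1=master:2888:3888
server.2=slave1:2888:3888
server.3=slave2:2888:3888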
(5) scp -r /usr/zookeeper/zookeeper-3.4.14 root@slave1:/usr/zookeeper
     scp -r /usr/zookeeper/zookeeper-3.4.14 root@slave2:/usr/zookeeper
     scp /etc/profile root@slave1:/etc
     scp /etc/profile root@slave2:/etc
(6) slave: source /etc/profile
(7) slave1: echo 2 > /usr/zookeeper/zookeeper-3.4.14/zkdata/myid
(8) slave2: echo 3 >/usr/zookeeper/zookeeper-3.4.14/zkdata/myid
(9) zkServer.sh start; zkServer.sh status (run on all three nodes)
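A healthy ensemble shows one leader and two followers in the status output:
Mode: leader      # on one node
Mode: follower    # on the other two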


III. Hadoop Deployment
(1) mkdir /usr/hadoop
tar -zxvf /usr/package277/hadoop-2.7.7.tar.gz -C /usr/hadoop/
vi /etc/profile
export HADOOP_HOME=/usr/hadoop/hadoop-2.7.7
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile
(2) cd /usr/hadoop/hadoop-2.7.7/etc/hadoop
vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_221
(3) vi core-site.xml
<property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/root/hadoopData/tmp</value>
</property>
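Note that fs.default.name is deprecated in Hadoop 2.x in favor of fs.defaultFS; both keys work in 2.7.7:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>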
(4) vi hdfs-site.xml
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/root/hadoopData/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/root/hadoopData/data</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>true</value>
</property>
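The NameNode format and the DataNodes will create these directories themselves, but pre-creating them on every node avoids permission surprises (paths taken from the configs above):
mkdir -p /root/hadoopData/tmp /root/hadoopData/name /root/hadoopData/data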
(5) vi yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_221
(6) vi yarn-site.xml
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:18141</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
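Port 18141 for the admin address is a nonstandard choice (the Hadoop 2.7.7 default is master:8033); it works as long as the port is free, since yarn rmadmin reads the same setting from the configuration.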
(7) cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

(8) scp -r /usr/hadoop/hadoop-2.7.7/etc/hadoop root@slave1:/usr/hadoop/hadoop-2.7.7/etc
scp -r /usr/hadoop/hadoop-2.7.7/etc/hadoop root@slave2:/usr/hadoop/hadoop-2.7.7/etc
(9) scp /etc/profile root@slave1:/etc
    scp /etc/profile root@slave2:/etc
    slave: source /etc/profile
(10) master: vi master (content: master); vi slaves (content: slave1 and slave2, one per line); see the sketch below
    slave1 and slave2: vi slaves (content: slave1 and slave2 as well)
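Written out as shell, with the file names exactly as in the step above:
cd /usr/hadoop/hadoop-2.7.7/etc/hadoop
echo master > master
printf 'slave1\nslave2\n' > slaves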
(11) cd /usr/hadoop/hadoop-2.7.7
bin/hdfs namenode -format
hadoop-daemon.sh start namenode
slave: hadoop-daemon.sh start datanode
master: start-all.sh
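To verify, run jps on each node; with ZooKeeper still running, expect roughly:
master: NameNode, SecondaryNameNode, ResourceManager, QuorumPeerMain
slaves: DataNode, NodeManager, QuorumPeerMain
The HDFS web UI is at http://master:50070 and the YARN UI at http://master:8088 (the 2.7.7 defaults).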
