Deploying Hadoop 2.6.0

  1. Download Hadoop 2.6.0
  2. Extract it: tar -xvf hadoop-2.6.0.tar.gz
  3. Copy it to /usr/local: sudo cp -r hadoop-2.6.0 /usr/local/hadoop
  4. Change ownership to the hadoop user: sudo chown -R hadoop:users /usr/local/hadoop/
  5. Edit the Hadoop configuration files:
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
slaves
  6. Configure core-site.xml
<configuration>
	<property>
		<!-- fs.default.name is deprecated in Hadoop 2.x; fs.defaultFS is the current name -->
		<name>fs.defaultFS</name>
		<value>hdfs://master:9000</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>file:/home/hadoop/tmp</value>
	</property>
	<property>
		<name>fs.trash.interval</name>
		<value>1440</value>
	</property>
</configuration>
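A quick sanity check on fs.trash.interval, which is specified in minutes: the value 1440 keeps deleted files in the HDFS trash for one full day before they are purged.

```shell
# fs.trash.interval is in minutes; verify that 1440 means a 24-hour retention window.
TRASH_MIN=1440
echo "Trash retention: $(( TRASH_MIN / 60 )) hours"
```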
  7. Configure hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
</configuration>
  8. Configure mapred-site.xml
<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>master:10020</value>
		<description>MapReduce JobHistory Server IPC host:port</description>
	</property>
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>master:19888</value>
		<description>MapReduce JobHistory Server Web UI host:port</description>
	</property>
	<property>
		<name>mapreduce.jobhistory.done-dir</name>
		<value>/history/done</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.intermediate-done-dir</name>
		<value>/history/done_intermediate</value>
	</property>
	<property>
		<name>mapreduce.map.memory.mb</name>
		<value>2048</value>
	</property>
	<property>
		<name>mapreduce.reduce.memory.mb</name>
		<value>2048</value>
	</property>
	<!-- The JVM heap must fit inside the 2048 MB container with headroom for
	     non-heap memory, so set -Xmx to roughly 80% of mapreduce.*.memory.mb;
	     -Xmx2304m would exceed the container and get tasks killed by YARN. -->
	<property>
		<name>mapreduce.map.java.opts</name>
		<value>-Xmx1638m</value>
	</property>
	<property>
		<name>mapreduce.reduce.java.opts</name>
		<value>-Xmx1638m</value>
	</property>
	<!-- Note: the yarn.scheduler.* allocation limits below are read by the
	     ResourceManager from yarn-site.xml; they have no effect in
	     mapred-site.xml and should be moved there. -->
	<property>
		<name>yarn.scheduler.minimum-allocation-mb</name>
		<value>1024</value>
	</property>
	<property>
		<name>yarn.scheduler.maximum-allocation-mb</name>
		<value>3072</value>
	</property>
</configuration>
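A common rule of thumb is that the JVM heap (-Xmx in mapreduce.*.java.opts) should be about 80% of the container size (mapreduce.*.memory.mb), leaving headroom for non-heap memory; otherwise YARN kills tasks for exceeding their container. A quick sketch of that calculation for the 2048 MB containers configured above:

```shell
# Derive a safe -Xmx for a 2048 MB MapReduce container (~80% of the container size).
CONTAINER_MB=2048
HEAP_MB=$(( CONTAINER_MB * 80 / 100 ))
echo "-Xmx${HEAP_MB}m"
```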
  9. Configure yarn-site.xml
<configuration>
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>master</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.log-aggregation-enable</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.log-aggregation.retain-seconds</name>
		<value>604800</value>
	</property>
</configuration>
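yarn.log-aggregation.retain-seconds is specified in seconds; 604800 corresponds to keeping aggregated container logs for one week:

```shell
# Confirm the log-aggregation retention period: 604800 s / 86400 s per day = 7 days.
RETAIN_S=604800
echo "Log retention: $(( RETAIN_S / 86400 )) days"
```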
  10. Configure the slaves file (one worker hostname per line)
slave1
slave2
  11. Set the environment variables in ~/.bashrc
export HADOOP_HOME=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
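The two lines above can be appended to ~/.bashrc as follows (written to a local scratch file here so the snippet is self-contained; on a real node point BASHRC at "$HOME/.bashrc" and run source ~/.bashrc afterwards):

```shell
# Append the Hadoop environment variables; BASHRC is a scratch file for illustration.
BASHRC=./bashrc.example          # substitute "$HOME/.bashrc" on a real node
cat >> "$BASHRC" <<'EOF'
export HADOOP_HOME=/usr/local/hadoop
export PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
grep HADOOP_HOME "$BASHRC"       # verify both lines landed
```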
  12. Sync the hadoop directory and .bashrc to the slave nodes
hadoop@master:~> scp -r /usr/local/hadoop/ root@slave1:/usr/local
hadoop@master:~> scp -r /usr/local/hadoop/ root@slave2:/usr/local
hadoop@master:~> scp ~/.bashrc hadoop@slave1:/home/hadoop
hadoop@master:~> scp ~/.bashrc hadoop@slave2:/home/hadoop
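With more worker nodes, the per-node scp commands above are easier to maintain as a loop. A dry-run sketch (echoing the commands instead of executing them, since the slave1/slave2 hostnames only exist on the cluster; drop the echo on a real deployment):

```shell
# Dry run: print the sync commands that would be executed for each worker node.
for node in slave1 slave2; do
  echo "scp -r /usr/local/hadoop/ root@$node:/usr/local"
  echo "scp ~/.bashrc hadoop@$node:/home/hadoop"
done
```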
  13. Fix ownership of the hadoop directory on the slave nodes
hadoop@slave1:~> sudo chown -R hadoop:users /usr/local/hadoop/
hadoop@slave2:~> sudo chown -R hadoop:users /usr/local/hadoop/
  14. Format HDFS
hadoop@master:~> hadoop namenode -format
20/07/11 10:16:33 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.88.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
...
20/07/11 10:16:34 INFO namenode.NNConf: ACLs enabled? false
20/07/11 10:16:34 INFO namenode.NNConf: XAttrs enabled? true
20/07/11 10:16:34 INFO namenode.NNConf: Maximum size of an xattr: 16384
20/07/11 10:16:34 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1430712971-192.168.88.100-1594433794505
20/07/11 10:16:34 INFO common.Storage: Storage directory /home/hadoop/tmp/dfs/name has been successfully formatted.
20/07/11 10:16:34 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/07/11 10:16:34 INFO util.ExitUtil: Exiting with status 0
20/07/11 10:16:34 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.88.100
************************************************************/
  15. Start HDFS
hadoop@master:/usr/local/hadoop/sbin> ./start-dfs.sh
20/07/11 10:23:45 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-slave2.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
20/07/11 10:23:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
  16. Start YARN
hadoop@master:/usr/local/hadoop/sbin> ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-resourcemanager-master.out
slave2: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave2.out
slave1: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hadoop-nodemanager-slave1.out
  17. Commands to stop Hadoop (stop YARN first, then HDFS)
/usr/local/hadoop/sbin/stop-yarn.sh
/usr/local/hadoop/sbin/stop-dfs.sh