CentOS 6 Hadoop Installation

First, install the JDK (see the earlier section if unsure).

Next, install ZooKeeper (see the earlier section if unsure).

1. Install the JDK

2. Download hadoop-3.0.2 and extract it

3. Configure the environment variables (append the following to /etc/profile):

export HADOOP_HOME=/data/hadoop-3.0.2
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

4. Reload the environment variables

source /etc/profile

5. Verify that the configuration works

[root@liu hadoop-3.0.2]# hadoop
Usage: hadoop [OPTIONS] SUBCOMMAND [SUBCOMMAND OPTIONS]
 or    hadoop [OPTIONS] CLASSNAME [CLASSNAME OPTIONS]
  where CLASSNAME is a user-provided Java class

  OPTIONS is none or any of:

buildpaths                       attempt to add class files from build tree
--config dir                     Hadoop config directory
--debug                          turn on shell script debug mode
--help                           usage information
hostnames list[,of,host,names]   hosts to use in slave mode
hosts filename                   list of hosts to use in slave mode
loglevel level                   set the log4j level for this command
workers                          turn on worker mode

6. Configure the hdfs-site.xml file (the dfs.* properties below belong in hdfs-site.xml)

[root@liu hadoop]# vi /data/hadoop-3.0.2/etc/hadoop/hdfs-site.xml
# Add the following configuration
<configuration>
<property>
 <name>dfs.replication</name>
 <value>1</value>
</property>

<property>
  <name>dfs.name.dir</name>
  <value>file:///opt/hadoop/hadoopdata/namenode</value>
</property>

<property>
  <name>dfs.data.dir</name>
  <value>file:///opt/hadoop/hadoopdata/datanode</value>
</property>

<property>
  <name>dfs.http.address</name>
  <value>0.0.0.0:50070</value>
</property>
</configuration>

7. Create the namenode and datanode directories under /opt/hadoop/hadoopdata/
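The two directories match the dfs.name.dir and dfs.data.dir values configured above and can be created in one command (a sketch; adjust the base path if your configuration differs):

```shell
# Create the NameNode and DataNode storage directories in one step.
# Base path assumed from the dfs.name.dir / dfs.data.dir values above.
BASE=/opt/hadoop/hadoopdata
mkdir -p "$BASE/namenode" "$BASE/datanode"
```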

8. Configure the core-site.xml file (the filesystem URI belongs in core-site.xml)

[root@liu hadoop]# vi /data/hadoop-3.0.2/etc/hadoop/core-site.xml
# Modify the configuration (fs.defaultFS is the current name of the deprecated fs.default.name)
<configuration>
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://192.168.252.16:9000</value>
</property>
</configuration>

9. Edit the yarn-site.xml file

[root@liu hadoop]# vi /data/hadoop-3.0.2/etc/hadoop/yarn-site.xml

<configuration>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
</configuration>
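If MapReduce jobs will be submitted to this YARN instance, mapred-site.xml (in the same etc/hadoop directory) usually also needs the framework name set. This fragment is a common companion to the yarn-site.xml change above, not part of the original steps:

```xml
<configuration>
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
</configuration>
```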

10. Set up passwordless SSH login

[root@liu hadoop]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
c8:4e:b5:0b:72:e1:dd:4e:15:64:0d:10:aa:d7:c7:d7 root@liu
The key's randomart image is:
+--[ DSA 1024]----+
|          o+=o   |
|         . . ..  |
|      . o   .    |
|     o * + o   . |
|    . O S + o . E|
|     = o + . .   |
|      . . .      |
|                 |
|                 |
+-----------------+
[root@liu hadoop]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@liu hadoop]# chmod 0600 ~/.ssh/authorized_keys
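Note that OpenSSH 7.0 and later disables DSA keys by default, so on newer systems the same setup is typically done with an RSA (or ed25519) key. A sketch of the RSA variant, written against a scratch directory so it cannot clobber an existing ~/.ssh/id_rsa (in practice you would target ~/.ssh directly):

```shell
# Generate an RSA keypair with an empty passphrase and authorize it,
# mirroring the DSA steps above. KEYDIR is a hypothetical scratch path.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$KEYDIR/id_rsa" -q
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 0600 "$KEYDIR/authorized_keys"
```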

11. Map the hostname to the machine's IP address in the hosts file (see the earlier section)

12. Format the NameNode (note: re-formatting an existing NameNode erases all HDFS metadata)

[root@test-hbase bin]# cd /data/hadoop-3.0.2/bin/
[root@test-hbase bin]# hdfs namenode -format

13. Start HDFS and YARN

[root@test-hbase sbin]# sh /data/hadoop-3.0.2/sbin/start-dfs.sh
[root@test-hbase sbin]# sh /data/hadoop-3.0.2/sbin/start-yarn.sh 

14. If start-dfs.sh reports an error like the following, make the change below

ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.

[root@test-hbase sbin]# vi /data/hadoop-3.0.2/sbin/start-dfs.sh
# Add these lines at the top of the script, just below the #!/usr/bin/env bash line
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

15. If start-yarn.sh reports an error like the following, make the change below

ERROR: Attempting to operate on yarn nodemanager as root
ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.

[root@test-hbase sbin]# vi start-yarn.sh
# Add these lines at the top of the script, just below the #!/usr/bin/env bash line
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

16. If the error below appears even though the JDK environment variables are already configured, do the following

Starting namenodes on [test-hbase]
test-hbase: Warning: Permanently added 'test-hbase,192.168.252.16' (RSA) to the list of known hosts.
test-hbase: ERROR: JAVA_HOME is not set and could not be found.

Edit /data/hadoop-3.0.2/etc/hadoop/hadoop-env.sh and set JAVA_HOME explicitly, replacing the commented-out default:

# export JAVA_HOME=
export JAVA_HOME=/usr/local/java/jdk1.8.0_60

17. Start the services again and check the running processes

[root@test-hbase sbin]# jps
5840 NodeManager
5137 NameNode
5474 SecondaryNameNode
5267 DataNode
6020 Jps
4150 ResourceManager

18. Open http://192.168.252.16:8088/cluster to see the status of all cluster nodes.

Open http://192.168.252.16:9870 for the HDFS file management UI (9870 is the default NameNode web port in Hadoop 3; if dfs.http.address is set to 0.0.0.0:50070 as in step 6, use port 50070 instead).
