Hadoop Installation on CentOS (Pseudo-Distributed, Single Node)

Environment: Hadoop-2.6.0-cdh5.7.0, CentOS 6.5, JDK 1.7

We will follow the official documentation to install Hadoop. Documentation address:

http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0/hadoop-project-dist/hadoop-common/SingleCluster.html

1. Configure passwordless SSH

ssh-keygen -t rsa
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
(.ssh is a hidden directory; use ls -la to see hidden files, then cd .ssh/ to inspect the keys)
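As a hedged illustration, the append-and-permission pattern behind authorized_keys can be sketched in a throwaway directory, so the real ~/.ssh is untouched (DEMO_SSH and the key contents are stand-ins, not a real key):

```shell
# Sketch of authorizing a public key, using a temp dir instead of the real ~/.ssh.
DEMO_SSH=$(mktemp -d)
echo 'ssh-rsa AAAA...placeholder... demo@host' > "$DEMO_SSH/id_rsa.pub"
cat "$DEMO_SSH/id_rsa.pub" >> "$DEMO_SSH/authorized_keys"
chmod 700 "$DEMO_SSH"
chmod 600 "$DEMO_SSH/authorized_keys"   # sshd refuses keys with loose permissions
stat -c '%a' "$DEMO_SSH/authorized_keys"
```

If authorized_keys ends up group- or world-writable, sshd will silently fall back to password login, so the chmod steps matter as much as the copy.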

2. Configure the JDK

vi ~/.bash_profile
export JAVA_HOME=/usr/java/jdk1.7.0_80
export PATH=$JAVA_HOME/bin:$PATH
Reload ~/.bash_profile so the changes take effect:
source ~/.bash_profile
# Assuming your Hadoop installation directory is /home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
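A common pitfall here is a stray space after `$` (e.g. `$ PATH`), which breaks variable expansion. A quick sanity-check sketch, using the paths assumed in this post:

```shell
# Re-create the exports from ~/.bash_profile and verify they expand correctly.
# Paths are the ones assumed in this post; adjust to your machine.
export JAVA_HOME=/usr/java/jdk1.7.0_80
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
echo "$JAVA_HOME"
case ":$PATH:" in *":$JAVA_HOME/bin:"*) echo 'PATH ok' ;; esac
```

After sourcing the profile, `java -version` and `hadoop version` should both resolve without typing full paths.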

3. Extract Hadoop into the app directory

(tar -zxvf extracts the archive; -C sets the target directory)
tar -zxvf /home/hadoop/software/hadoop-2.6.0-cdh5.7.0.tar.gz -C /home/hadoop/app/
1) Edit the file etc/hadoop/hadoop-env.sh and set JAVA_HOME:
export JAVA_HOME=/usr/java/jdk1.7.0_80
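A hedged sketch of making that edit non-interactively with sed; the temp file below simulates hadoop-env.sh (on a real install, point sed at etc/hadoop/hadoop-env.sh instead):

```shell
# Simulate hadoop-env.sh with its stock JAVA_HOME line, then rewrite it in place.
DEMO_ENV=$(mktemp)
echo 'export JAVA_HOME=${JAVA_HOME}' > "$DEMO_ENV"
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.7.0_80|' "$DEMO_ENV"
grep '^export JAVA_HOME=' "$DEMO_ENV"
```

Hard-coding JAVA_HOME here matters because the daemons started by start-dfs.sh do not always inherit your login shell's environment.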
2) Configuration
Use the following:
etc/hadoop/core-site.xml:

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://localhost:8020</value>
                <!-- To use a specific hostname such as hadoop000 instead of localhost,
                     map that hostname to its IP address in /etc/hosts. -->
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/hadoop/app/tmp</value>
        </property>
</configuration>
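If you prefer scripting the setup, the same file can be generated with a heredoc. This is a sketch: the .demo filename is a stand-in for etc/hadoop/core-site.xml, and the values are the ones used above:

```shell
# Generate core-site.xml non-interactively (demo filename; values from this post).
cat > core-site.xml.demo <<'EOF'
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/tmp</value>
    </property>
</configuration>
EOF
grep -c '<property>' core-site.xml.demo   # expect 2
```

Setting hadoop.tmp.dir explicitly is worth doing: the default lives under /tmp, which may be wiped on reboot and would lose the NameNode metadata.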

etc/hadoop/hdfs-site.xml: set the replication factor to 1

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Configure your DataNode hostname in the etc/hadoop/slaves file. In pseudo-distributed mode there is only one DataNode, so a single line is enough.
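For a single-node setup the slaves file reduces to a one-line list; a sketch (slaves.demo is a stand-in for etc/hadoop/slaves):

```shell
# Single-node setup: exactly one DataNode host, localhost (or your mapped hostname).
echo 'localhost' > slaves.demo
wc -l < slaves.demo   # expect 1
```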

4. Execution

1) The first time you run Hadoop, format the filesystem:
$ bin/hdfs namenode -format
2) Start the NameNode and DataNode daemons:
$ sbin/start-dfs.sh
(or run it from inside the sbin directory:)
cd sbin
./start-dfs.sh

[root@hadoop000 sbin]# ./start-dfs.sh
19/01/10 11:07:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop000]
hadoop000: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-namenode-hadoop000.out
hadoop000: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-datanode-hadoop000.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-secondarynamenode-hadoop000.out
19/01/10 11:07:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Run jps (a command that lists the PIDs of all Java processes) to check.
If the following processes appear, the startup succeeded:
7820 NameNode
7914 DataNode
8613 Jps
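The check can also be scripted. A sketch that parses jps-style output; the output is simulated here with the lines above, and on a live node you would replace the variable with `$(jps)`:

```shell
# Count the HDFS daemons in jps output; both NameNode and DataNode must be present.
# The anchored pattern avoids matching SecondaryNameNode as NameNode.
JPS_OUTPUT='7820 NameNode
7914 DataNode
8613 Jps'
COUNT=$(printf '%s\n' "$JPS_OUTPUT" | grep -c -E ' (NameNode|DataNode)$')
echo "$COUNT"   # expect 2
```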
3) To stop HDFS:
$ sbin/stop-dfs.sh

Starting YARN on a Single Node

YARN acts as an operating-system-level resource scheduling framework: it lets computing frameworks such as Spark (in-memory computing) and Storm (stream computing) run on the same Hadoop 2.x cluster and share the same HDFS data.
Configuration:
vi etc/hadoop/mapred-site.xml:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

vi etc/hadoop/yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
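The two YARN-related files can be generated and verified the same scripted way as core-site.xml; a sketch for mapred-site.xml (the .demo filename stands in for the real etc/hadoop path):

```shell
# Generate mapred-site.xml and confirm MapReduce is set to run on YARN.
cat > mapred-site.xml.demo <<'EOF'
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
EOF
grep -o '<value>yarn</value>' mapred-site.xml.demo
```

Note that the distribution ships only mapred-site.xml.template; copy it to mapred-site.xml before editing.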

Start YARN:
$ sbin/start-yarn.sh
(or: cd sbin && ./start-yarn.sh)

Stop:
$ sbin/stop-yarn.sh
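After start-yarn.sh, jps should additionally show the ResourceManager and NodeManager daemons. A sketch of the check, with jps output simulated (on a live node, substitute `$(jps)`; the PIDs here are illustrative):

```shell
# Verify the two YARN daemons are present alongside the HDFS ones.
JPS_AFTER_YARN='7820 NameNode
7914 DataNode
9101 ResourceManager
9203 NodeManager
9310 Jps'
YARN_COUNT=$(printf '%s\n' "$JPS_AFTER_YARN" | grep -c -E ' (ResourceManager|NodeManager)$')
echo "$YARN_COUNT"   # expect 2
```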

Once everything is running, open http://localhost:50070/ in a browser to view the HDFS NameNode web UI (after starting YARN, the ResourceManager UI is at http://localhost:8088/).

[Screenshot omitted: the NameNode web UI, with the node status shown as "active".]

Reference: http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0/hadoop-project-dist/hadoop-common/SingleCluster.html
