Hadoop Pseudo-Distributed Setup

Hadoop

  • Environment

    • CentOS Linux release 7.4.1708 (Core)
    • Hadoop 3.2.2
  • Note:

    • Hadoop and HBase have mutual version-compatibility requirements; check the official site for the compatibility matrix.

Hadoop's built-in web UI (available once HDFS is started): http://192.168.75.139:9870/

I. Pseudo-Distributed Setup

-1. Configure passwordless SSH login

Configure passwordless login for both the local machine and every server you need to connect to.

* Generate the private/public key pair (can be run from any directory)
# ssh-keygen

# When the prompts below appear, just press Enter three times
----------------Output-------------------
[root@hbase-139 .ssh]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:HgHxi2cQ2vdxwN74vPuLMttGsDAK8RVb3KkOavT6XWs root@hbase-139
The key's randomart image is:
+---[RSA 2048]----+
|      +..+o. .   |
|    .o +.oo.o    |
|    .oo.=..+.    |
|    . o+++=o.    |
|     o.+S=.=     |
|      ++..o +    |
|     . ..  ...   |
|      .  .ooEo   |
|       .. o*=oo. |
+----[SHA256]-----+

------------------After running the command above, the public and private keys are generated under /root/.ssh----------------------

# cd /root/.ssh

* Enter the IP of the server to set up passwordless login to
# ssh-copy-id 192.168.75.139

# Type yes, then enter the password for the target server
-----------------Output-----------------
[root@hbase-139 .ssh]# ssh-copy-id 192.168.75.139
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.75.139 (192.168.75.139)' can't be established.
ECDSA key fingerprint is SHA256:bdCj1PkE+gCBkoRq1pqA3IwNK0LqIY+eRyJmt1aooAk.
ECDSA key fingerprint is MD5:c8:4e:68:04:60:bd:50:ef:91:a2:5c:35:79:37:16:f9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.75.139's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.75.139'"
and check to make sure that only the key(s) you wanted were added.

------------------After running the command above, an authorized_keys file is created under /root/.ssh on the target server-------------------------
------------------authorized_keys contains the same content as the public key-----

# At this point, you can log in directly to any server set up for passwordless login, without entering a password
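To confirm it worked, an SSH login to the target IP (as the ssh-copy-id output suggests) should now drop straight into a shell with no password prompt:

# ssh 192.168.75.139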
0. Configure environment variables

Hadoop 3.x refuses to start the HDFS/YARN daemons as root unless the corresponding *_USER variables are defined, so set them system-wide:
# vi /etc/profile

-------------Add the following configuration----------------
# hadoop
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root


# source /etc/profile
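A quick sanity check that the variables are loaded into the current shell (checking any one of the five is enough; each should print root):

# echo $HDFS_NAMENODE_USER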
1. Install Hadoop

Assuming the hadoop-3.2.2.tar.gz tarball has already been downloaded to /opt/hadoop:
# cd /opt/hadoop

# tar -zxvf hadoop-3.2.2.tar.gz

# cd hadoop-3.2.2
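To confirm the archive extracted correctly, the bundled hadoop command should report version 3.2.2:

# ./bin/hadoop version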
2. hadoop-env.sh
# vi etc/hadoop/hadoop-env.sh

# Java environment variable
export JAVA_HOME=/opt/jdk/jdk1.8.0_112
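The JDK path above is specific to this machine; adjust it to your install, and confirm it is valid before moving on:

# /opt/jdk/jdk1.8.0_112/bin/java -version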
3. hdfs-site.xml
# vi etc/hadoop/hdfs-site.xml

-------------Add the following configuration----------------
<configuration>
    <!-- Replication factor: 1, since a pseudo-distributed cluster has only one DataNode -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
4. core-site.xml
# vi etc/hadoop/core-site.xml

-------------Add the following configuration----------------
<configuration>
    <!-- Location of the default file system -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-1:9000</value>
    </property>
</configuration>
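Optionally, and related to the data-loss caveat in step 7: HDFS stores its data under hadoop.tmp.dir, which defaults to /tmp/hadoop-${user.name}. To keep data across reboots, add a property like the one below inside the same <configuration> block (the path /opt/hadoop/data is only an example; create the directory first):

    <!-- Optional: keep HDFS data out of /tmp so it survives a reboot -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/data</value>
    </property>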
5. IP mapping

fs.defaultFS above refers to the hostname hadoop-1, so map it in /etc/hosts:
# vi /etc/hosts

192.168.75.138  hbase-1
192.168.75.139  hadoop-1
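A quick check that the hostname now resolves:

# ping -c 1 hadoop-1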
6. Format the file system

Run this only once, before the first start; reformatting later generates a new cluster ID and orphans any existing DataNode data.

# ./bin/hdfs namenode -format
7. Start HDFS
# ./sbin/start-dfs.sh

Once HDFS is up, other services can write data to it simply by pointing at hdfs://hadoop-1:9000. Note that with the configuration above, HDFS keeps its data under /tmp, so the data is lost whenever the server restarts; set hadoop.tmp.dir as shown in step 4 to persist it.
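A minimal smoke test that the daemons are up and HDFS accepts writes (jps ships in the JDK's bin directory; the directory name /smoke-test is arbitrary):

# jps
# should list NameNode, DataNode, and SecondaryNameNode

# ./bin/hdfs dfs -mkdir /smoke-test
# ./bin/hdfs dfs -ls /

The web UI at http://192.168.75.139:9870/ should also be reachable now.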
