Hadoop Installation

1. Download Hadoop and the JDK

Hadoop download:
https://archive.apache.org/dist/hadoop/common/
JDK download:
https://repo.huaweicloud.com/java/jdk/8u192-b12/jdk-8u192-linux-x64.tar.gz
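If the VM has internet access, you can also fetch both archives directly on it with wget (skipping the transfer in step 2); a minimal sketch, assuming wget is installed:

# download the Hadoop 2.7.3 and JDK 8u192 archives used in this guide
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
wget https://repo.huaweicloud.com/java/jdk/8u192-b12/jdk-8u192-linux-x64.tar.gz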

2. Transfer the files to the Linux system

Use Xftp.
Note: upload into the system's existing /opt directory; otherwise you may end up with OpenJDK instead of the Oracle JDK. Do not create an opt directory of your own.
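If you do not have Xftp, scp from your local machine works just as well; a sketch, assuming sshd is running on the VM and its IP is 192.168.27.128 as used later in this guide:

# copy both archives into /opt on the server
scp hadoop-2.7.3.tar.gz jdk-8u192-linux-x64.tar.gz root@192.168.27.128:/opt/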

3. Extract the archives

tar -zxvf jdk-8u192-linux-x64.tar.gz

Check the extracted files:

cd /opt
ls

Configure the JDK environment:

vim /etc/profile
# in the vim editor, append at the very end of the file
export JAVA_HOME=/opt/jdk1.8.0_192
export PATH=$PATH:$JAVA_HOME/bin
# reload the configuration file
source /etc/profile
# check whether the JDK is configured correctly
java -version
java version "1.8.0_192"
Java(TM) SE Runtime Environment (build 1.8.0_192-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.192-b12, mixed mode)
Extract the Hadoop archive in the same way:

tar -zxvf hadoop-2.7.3.tar.gz
# check the extracted files
ls
hadoop-2.7.3  hadoop-2.7.3.tar.gz  jdk1.8.0_192  jdk-8u192-linux-x64.tar.gz

Passwordless SSH login

A detailed walkthrough is in this article:
http://t.csdn.cn/t2WEb
Whenever you are prompted, type the requested value or simply press Enter.

[root@sjm opt]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:/v4YYBTG1jTAT58U0p7x5KR/mKMpEU/XNu0UmB6Qb/c root@sjm
The key's randomart image is:
+---[RSA 2048]----+
|       o+++o+.o  |
|       .+.o+++o. |
|       ..o +oX..o|
|       .  o B++o=|
|        S  +.o.*o|
|       o .. . = E|
|        . .. o o |
|         ..oo    |
|         .+o.    |
+----[SHA256]-----+

When prompted for a password (as in the ssh-copy-id step below), enter the login password and wait.

This step clears the old known_hosts entry for the host:
[root@sjm ~]# ssh-keygen -R 192.168.27.128
# Host 192.168.27.128 found: line 1
/root/.ssh/known_hosts updated.
Original contents retained as /root/.ssh/known_hosts.old


A successful setup looks like this:
[root@sjm ~]#  ssh-copy-id 192.168.27.128
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.27.128 (192.168.27.128)' can't be established.
ECDSA key fingerprint is SHA256:pG2znqiaa/8QzMZHOnQx4wYTbsLAWzyeQIrTVRiMb9A.
ECDSA key fingerprint is MD5:6d:d3:af:50:44:ee:98:2b:f8:03:c8:f0:c7:37:56:b1.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.27.128's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.27.128'"
and check to make sure that only the key(s) you wanted were added.

Run ssh '192.168.27.128'; if no password is required, the setup succeeded.

ssh '192.168.27.128'
Last login: Fri May  6 22:57:06 2022 from sjm
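The interactive steps above can also be scripted; a minimal sketch, assuming the same host IP and that no key pair exists yet:

# generate an RSA key with an empty passphrase, install it on the target, then test
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@192.168.27.128    # prompts for the login password once
ssh root@192.168.27.128 'hostname' # should print sjm with no password prompt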

Hadoop pseudo-distributed installation

Go to the configuration file directory:

[root@sjm opt]# cd /opt/hadoop-2.7.3/etc/hadoop
[root@sjm hadoop]# ls
capacity-scheduler.xml      httpfs-env.sh            mapred-env.sh
configuration.xsl           httpfs-log4j.properties  mapred-queues.xml.template
container-executor.cfg      httpfs-signature.secret  mapred-site.xml.template
core-site.xml               httpfs-site.xml          slaves
hadoop-env.cmd              kms-acls.xml             ssl-client.xml.example
hadoop-env.sh               kms-env.sh               ssl-server.xml.example
hadoop-metrics2.properties  kms-log4j.properties     yarn-env.cmd
hadoop-metrics.properties   kms-site.xml             yarn-env.sh
hadoop-policy.xml           log4j.properties         yarn-site.xml
hdfs-site.xml               mapred-env.cmd

Configure the core-site.xml file

vim core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://192.168.27.128:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>file:/opt/hadoop-2.7.3/tmp</value>
        </property>
</configuration>


Configure the hadoop-env.sh file

vim hadoop-env.sh

# The java implementation to use.
export JAVA_HOME=/opt/jdk1.8.0_192
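If you would rather not open the file in vim, a sed one-liner can set this; a sketch, assuming the file still contains the stock "export JAVA_HOME=${JAVA_HOME}" line:

# replace the JAVA_HOME line in hadoop-env.sh in place
sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/opt/jdk1.8.0_192|' hadoop-env.sh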

Configure the hdfs-site.xml file (the properties below go inside its <configuration> element)

vim hdfs-site.xml

<property>
        <name>dfs.replication</name>
        <value>1</value>
</property>
<property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop-2.7.3/tmp/dfs/name</value>
</property>
<property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop-2.7.3/tmp/dfs/data</value>
</property>
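The name and data directories are created automatically when the NameNode is formatted and the DataNode first starts, but they can also be created up front; a sketch:

# pre-create the NameNode and DataNode storage directories
mkdir -p /opt/hadoop-2.7.3/tmp/dfs/name /opt/hadoop-2.7.3/tmp/dfs/data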

Edit the mapred-site.xml file to configure the MapReduce parameters

① The /opt/hadoop-2.7.3/etc/hadoop directory only ships with a mapred-site.xml.template file,
② so copy the template to create mapred-site.xml:

cp mapred-site.xml.template mapred-site.xml

Then edit it:

vim mapred-site.xml
<property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
</property>

Configure the yarn-site.xml file, which sets the cluster resource-management parameters

vim yarn-site.xml

<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.27.128</value>
</property>
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
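A single malformed tag in any of these files will keep the daemons from starting, so validating the XML is a cheap safeguard; a sketch, assuming xmllint (from libxml2) is installed:

# xmllint prints nothing when the files are well-formed
xmllint --noout core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml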

Configure the Hadoop environment variables

Append the following to /etc/profile ($HADOOP_HOME must be exported for the PATH entries to resolve):

export HADOOP_HOME=/opt/hadoop-2.7.3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

Reload the configuration:

source /etc/profile
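As a quick sanity check that the new PATH and the configuration are being picked up:

hadoop version                      # should report Hadoop 2.7.3
hdfs getconf -confKey fs.defaultFS  # should print hdfs://192.168.27.128:9000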

After the configuration is complete, format the NameNode (the output should report that the storage directory has been successfully formatted):


cd /opt/hadoop-2.7.3/sbin
hdfs namenode -format

Start all services

./start-all.sh
[root@sjm sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [sjm]
The authenticity of host 'sjm (fe80::20c:29ff:fedd:2928%ens33)' can't be established.
ECDSA key fingerprint is SHA256:pG2znqiaa/8QzMZHOnQx4wYTbsLAWzyeQIrTVRiMb9A.
ECDSA key fingerprint is MD5:6d:d3:af:50:44:ee:98:2b:f8:03:c8:f0:c7:37:56:b1.
Are you sure you want to continue connecting (yes/no)? yes
sjm: Warning: Permanently added 'sjm,fe80::20c:29ff:fedd:2928%ens33' (ECDSA) to the list of known hosts.
sjm: starting namenode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-namenode-sjm.out
localhost: starting datanode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-datanode-sjm.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:pG2znqiaa/8QzMZHOnQx4wYTbsLAWzyeQIrTVRiMb9A.
ECDSA key fingerprint is MD5:6d:d3:af:50:44:ee:98:2b:f8:03:c8:f0:c7:37:56:b1.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-sjm.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-resourcemanager-sjm.out
localhost: starting nodemanager, logging to /opt/hadoop-2.7.3/logs/yarn-root-nodemanager-sjm.out

# check whether everything started successfully
[root@sjm sbin]# jps
11120 Jps
10614 NameNode
10710 DataNode
10983 ResourceManager
10830 SecondaryNameNode
11086 NodeManager
[root@sjm sbin]# 
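With all six daemons running, a short HDFS smoke test confirms the filesystem is usable; a sketch:

# create a home directory for root in HDFS and list the root of the filesystem
hdfs dfs -mkdir -p /user/root
hdfs dfs -ls /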

Start the NameNode and DataNode

./start-dfs.sh

HDFS also provides a web UI, served over HTTP on port 50070 by default:

http://192.168.27.128:50070/

The Hadoop cluster (YARN) information also has a web UI, on port 8088 by default:

http://192.168.27.128:8088/
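Both pages can also be checked from the shell; a sketch, assuming curl is installed:

# a 200 or 302 status line means the web UI is responding
curl -sI http://192.168.27.128:50070/ | head -n 1
curl -sI http://192.168.27.128:8088/  | head -n 1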
