1. Installing, Configuring, and Deploying Hadoop

1. Lab Environment

OS: CentOS 7

Software: this guide uses VMware Workstation

If you don't know how to install CentOS, see the following link:

https://blog.csdn.net/Xuanyi_54/article/details/133680066?spm=1001.2014.3001.5502

Lab software:

Link: https://pan.baidu.com/s/11IZ8VB12oZcCuvNWyz5sQw?pwd=sj7v
Extraction code: sj7v

Figure 1-1: Upload the downloaded files to the /opt directory and extract them all with the tar command
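Since the original screenshot is not reproduced here, a minimal sketch of this step (the archive filenames are assumptions; substitute the actual names of the downloaded files):

cd /opt
tar -zxvf jdk-8u181-linux-x64.tar.gz   # assumed filename; extracts to jdk1.8.0_181/
tar -zxvf hadoop-3.1.3.tar.gz          # assumed filename; extracts to hadoop-3.1.3/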

2. Installing Java

2-1 Installing the JDK
[root@172-2-25-16 opt]# useradd hadoop

Figure 2-1: Create the user hadoop

[root@172-2-25-16 opt]# mkdir /usr/java/
[root@172-2-25-16 opt]# mv jdk1.8.0_181/ /usr/java/
[root@172-2-25-16 opt]# ls /usr/java/
jdk1.8.0_181

Figure 2-2: Move the extracted JDK directory to the target location

export JAVA_HOME=/usr/java/jdk1.8.0_181
export PATH=${JAVA_HOME}/bin:${PATH}

Figure 2-3: Add the two lines above to ~/.bashrc
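If you prefer to append the lines from the shell rather than an editor, a sketch:

echo 'export JAVA_HOME=/usr/java/jdk1.8.0_181' >> ~/.bashrc
echo 'export PATH=${JAVA_HOME}/bin:${PATH}' >> ~/.bashrc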

[root@172-2-25-16 opt]# source ~/.bashrc
[root@172-2-25-16 opt]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
[root@172-2-25-16 opt]# 

Figure 2-4: Source the file modified above and check the Java version; seeing the version number confirms the installation succeeded

3. Setting Up Passwordless SSH Login

[root@172-2-25-16 opt]# passwd hadoop

Figure 3-1: Set a password for the hadoop user

[root@172-2-25-16 opt]# su hadoop
[hadoop@172-2-25-16 opt]$ 

Figure 3-2: Switch to the hadoop user created above

First, check whether you can ssh into localhost without entering a password:

[hadoop@172-2-25-16 opt]$ ssh localhost

Figure 3-3: We were prompted for a password, which means passwordless login is not yet set up

[hadoop@172-2-25-16 ~]$ exit

Figure 3-4: Exit the ssh session

Next, set up passwordless login:

[hadoop@172-2-25-16 opt]$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa 
[hadoop@172-2-25-16 opt]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Figure 3-5: Generate a key pair and authorize it for passwordless login
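This guide uses a DSA key; on newer systems where OpenSSH disables DSA, an RSA key works the same way (a drop-in alternative, not what the original screenshots show):

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys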

[hadoop@172-2-25-16 opt]$ cd ~
[hadoop@172-2-25-16 ~]$ ll .ssh/
[hadoop@172-2-25-16 ~]$ vim ~/.ssh/config
[hadoop@172-2-25-16 ~]$ cat ~/.ssh/config
UserKnownHostsFile ~/.ssh/known_hosts

Figure 3-6: Create the new config file and enter the content shown above

Note:

~/.ssh must have permission 700, and authorized_keys and config must be 644; sshd refuses to use key files that are more permissive than this.

[hadoop@172-2-25-16 ~]$ chmod 700 ~/.ssh
[hadoop@172-2-25-16 ~]$ chmod 644 ~/.ssh/authorized_keys
[hadoop@172-2-25-16 ~]$ chmod 644 ~/.ssh/config
[hadoop@172-2-25-16 ~]$ ls -al ~/

Figure 3-7: Fix the file permissions

[hadoop@172-2-25-16 ~]$ ssh localhost

Figure 3-8: This time the login succeeds without a password

4. Installing Hadoop 3.1.3

The Hadoop archive was already extracted into /opt with tar (see Figure 1-1).

First, switch back to the root user:

[hadoop@172-2-25-16 root]$ su root
Password:
[root@172-2-25-16 ~]# mkdir /usr/local/hadoop
[root@172-2-25-16 ~]# cp -r /opt/hadoop-3.1.3/* /usr/local/hadoop/
[root@172-2-25-16 ~]# ls /usr/local/hadoop/

Figure 4-1: Copy the files to the target directory

[root@172-2-25-16 ~]# cd /usr/local/
[root@172-2-25-16 local]# sudo chown -R hadoop:hadoop ./hadoop 
[root@172-2-25-16 local]# ll

Figure 4-2: Set both the owner and the group of the directory to hadoop

[root@172-2-25-16 local]# cd /usr/local/hadoop
[root@172-2-25-16 hadoop]# ./bin/hadoop version

Figure 4-3: Verify the Hadoop installation; the first line of output should read "Hadoop 3.1.3"

[root@172-2-25-16 hadoop]# vim ~/.bashrc
[root@172-2-25-16 hadoop]# cat ~/.bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
export JAVA_HOME=/usr/java/jdk1.8.0_181
export PATH=${JAVA_HOME}/bin:${PATH}

export HADOOP_HOME=/usr/local/hadoop
export PATH=${HADOOP_HOME}/bin:${PATH}
[root@172-2-25-16 hadoop]# source ~/.bashrc

Figure 4-4: Add the two HADOOP_HOME lines at the end of the file, then source it
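To confirm the variables took effect, a quick check:

echo $HADOOP_HOME    # should print /usr/local/hadoop
which hadoop         # should print /usr/local/hadoop/bin/hadoop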

[root@172-2-25-16 hadoop]# hadoop version

Figure 4-5: Seeing the version number means Hadoop is installed correctly

5. Configuring Hadoop in Pseudo-Distributed Mode

[root@172-2-25-16 hadoop]# cat ~/.bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
export JAVA_HOME=/usr/java/jdk1.8.0_181
export PATH=${JAVA_HOME}/bin:${PATH}

export HADOOP_HOME=/usr/local/hadoop
export PATH=${HADOOP_HOME}/bin:${PATH}
# Hadoop Environment Variables
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
[root@172-2-25-16 hadoop]# source ~/.bashrc

Figure 5-1: Append the Hadoop environment variables above to ~/.bashrc and source it

[root@172-2-25-16 hadoop]# su hadoop
[hadoop@172-2-25-16 hadoop]$ 

Figure 5-2: Switch to the hadoop user

[hadoop@172-2-25-16 hadoop]$ cd /usr/local/hadoop/etc/hadoop/
[hadoop@172-2-25-16 hadoop]$ vim core-site.xml 
[hadoop@172-2-25-16 hadoop]$ vim hdfs-site.xml

Figure 5-3: Edit the two files below; their full contents follow and can be copied and pasted directly. core-site.xml sets the default filesystem URI and the temp directory; hdfs-site.xml sets the replication factor to 1 (there is only one DataNode) and pins the NameNode web UI to port 50070.

[hadoop@172-2-25-16 hadoop]$ cat core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
       Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
[hadoop@172-2-25-16 hadoop]$ cat hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
       Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>0.0.0.0:50070</value>
    </property>
    <property>
        <name>dfs.http.address</name>
        <value>0.0.0.0:50070</value>
    </property>

</configuration>
[hadoop@172-2-25-16 hadoop]$ cd /usr/local/hadoop/
[hadoop@172-2-25-16 hadoop]$ pwd
/usr/local/hadoop
[hadoop@172-2-25-16 hadoop]$ ./bin/hdfs namenode -format

Figure 5-4: Change into the Hadoop directory and format the NameNode. Format it only once; it is best to take a VM snapshot first

Figure 5-5: A "successfully formatted" message near the bottom of the output means the format succeeded. Again, format only once, and take a snapshot first
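The line to look for resembles the following (the storage path comes from hdfs-site.xml above; exact wording may vary slightly between versions):

INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.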

[hadoop@172-2-25-16 hadoop]$ ./sbin/start-dfs.sh

Figure 5-6: If start-dfs.sh fails like this, the environment variables added earlier may be missing for the current user and need to be re-added
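A related pitfall worth checking (an assumption based on common Hadoop 3 behavior, not something shown in the original screenshots): the start scripts read JAVA_HOME from Hadoop's own env file rather than from the shell, so if the error mentions JAVA_HOME, set it there explicitly:

echo 'export JAVA_HOME=/usr/java/jdk1.8.0_181' >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh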

[root@172-2-25-16 hadoop]# cat ~/.bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi
export JAVA_HOME=/usr/java/jdk1.8.0_181
export PATH=${JAVA_HOME}/bin:${PATH}

export HADOOP_HOME=/usr/local/hadoop
export PATH=${HADOOP_HOME}/bin:${PATH}
# Hadoop Environment Variables
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
[root@172-2-25-16 hadoop]# source ~/.bashrc

Figure 5-7: Re-add the environment variables and source the file

[hadoop@172-2-25-16 hadoop]$ ./sbin/start-dfs.sh
[hadoop@172-2-25-16 hadoop]$ jps

Figure 5-8: If jps lists NameNode, DataNode, and SecondaryNameNode, the startup succeeded

http://localhost:50070/

Figure 5-9: Open the URL above in a browser inside the Linux VM to see the HDFS web UI (stock Hadoop 3 defaults to port 9870; this guide pinned port 50070 in hdfs-site.xml)

[root@172-2-25-16 hadoop]# systemctl stop firewalld.service
[root@172-2-25-16 hadoop]# setenforce 0

Figure 5-10: As root, stop the firewall and put SELinux into permissive mode so the web UI is reachable from outside the VM
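These commands only last until the next reboot; making the change persistent is optional and is standard CentOS 7 administration rather than part of the original screenshots:

systemctl disable firewalld.service                                      # don't start firewalld at boot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    # keep SELinux permissive after reboot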

172.2.25.16:50070
IP-address:50070

Figure 5-11: Enter the Linux machine's IP address followed by :50070 in a browser on the host to bring up the HDFS web UI
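If you are unsure of the VM's address, either of these standard commands prints it:

ip addr show    # full interface listing
hostname -I     # just the IP address(es)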

[hadoop@172-2-25-16 hadoop]$ ./sbin/stop-dfs.sh
[hadoop@172-2-25-16 hadoop]$ jps

Figure 5-12: Stop Hadoop; jps should no longer list the HDFS daemons

6. Preparing to Configure YARN
