Installing the Big-Data Trio (Spark, Hadoop, Hive): Hadoop in Pseudo-Distributed Mode

This is the first of three posts covering the installation of Spark, Hadoop, and Hive, a stack used for processing large offline datasets.

Related files

Link: https://pan.baidu.com/s/17240ITPR14vcRku6_P0kug?pwd=me15
Extraction code: me15

1. Component versions used in this installation: JDK 1.8.0_162 and Hadoop 3.1.3

2. First create a jvm directory under /usr/lib for installing Java:

        sudo mkdir /usr/lib/jvm

3. Then extract your JDK archive into that directory:

        sudo tar -zxvf jdk-8u162-linux-x64.tar.gz -C /usr/lib/jvm/

4. Configure the environment variables

        4.1 Open the shell configuration file (command: vim ~/.bashrc)

        4.2 Add the following lines; placing them at the top of the file is fine:

export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_162
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

        4.3 Make the new variables take effect immediately (command: source ~/.bashrc)

5. Verify that Java was configured successfully
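A quick sketch of the check, assuming the paths from step 4:

```shell
# Show the JDK version; a correct setup prints a line containing "1.8.0_162"
java -version

# Confirm JAVA_HOME points at the directory configured in ~/.bashrc
echo "$JAVA_HOME"    # expected: /usr/lib/jvm/jdk1.8.0_162
```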

6. Extract Hadoop into /usr/local with the following commands

        sudo tar -zxf ~/下载/hadoop-3.1.3.tar.gz -C /usr/local

        cd /usr/local
        sudo mv ./hadoop-3.1.3/ ./hadoop    # rename the directory to hadoop

        sudo chown -R hadoop ./hadoop       # make the hadoop user the owner

7. Check the version information

cd /usr/local/hadoop
./bin/hadoop version

8. Pseudo-distributed mode requires configuring two files, both located under /usr/local/hadoop/etc/hadoop:

core-site.xml configuration (the inline comments explain what each property does):

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <!-- Allow the hadoop proxy user to connect from any host -->
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <!-- Allow the hadoop proxy user to impersonate any group -->
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
    <property>
        <!-- Base directory for Hadoop's temporary files -->
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <!-- Default filesystem URI: the HDFS address and port -->
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

hdfs-site.xml configuration:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <!-- Replication factor; 1 is enough for a single-node pseudo cluster -->
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <!-- Where the NameNode stores its metadata -->
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name</value>
    </property>
    <property>
        <!-- Where the DataNode stores its blocks -->
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data</value>
    </property>
</configuration>

9. Format the NameNode (run once, before the first start):

cd /usr/local/hadoop
./bin/hdfs namenode -format

10. Start the HDFS daemons

cd /usr/local/hadoop
./sbin/start-dfs.sh
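Two common stumbling blocks on a fresh VM, if start-dfs.sh fails: the script connects to localhost over SSH, so passwordless SSH must be set up; and Hadoop 3 reads JAVA_HOME from etc/hadoop/hadoop-env.sh rather than from your shell profile. A sketch of the fixes (paths assume steps 2-6 above):

```shell
# Passwordless SSH to localhost, required by start-dfs.sh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# If start-dfs.sh reports that JAVA_HOME is not set, declare it in hadoop-env.sh
echo 'export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_162' >> /usr/local/hadoop/etc/hadoop/hadoop-env.sh
```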

11. If you can see these two processes, the startup succeeded.
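The running processes can be listed with jps, which ships with the JDK; in pseudo-distributed mode you should see at least NameNode and DataNode (a SecondaryNameNode usually appears as well):

```shell
# jps lists running Java processes; after start-dfs.sh a healthy setup shows
# NameNode and DataNode (plus SecondaryNameNode)
jps
```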

If you run into problems, leave a comment and I will reply when I have time.

Next up: Hive.
