Setting up and testing Hadoop: single-node, pseudo-distributed, and fully distributed

Lab environment: Red Hat Enterprise Linux 7.3

Hadoop single-node test

  • Running this lab as root is not recommended; create a hadoop user before starting
[root@hadoop1 ~]# useradd hadoop
[root@hadoop1 ~]# id hadoop
uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
[root@hadoop1 ~]# passwd hadoop
  • Install the software (a Java environment is required)
[root@hadoop1 ~]# ls
hadoop-3.0.3.tar.gz  jdk-8u181-linux-x64.tar.gz
[root@hadoop1 ~]# mv * /home/hadoop/
[root@hadoop1 ~]# su - hadoop
[hadoop@hadoop1 ~]$ ls
hadoop-3.0.3.tar.gz  jdk-8u181-linux-x64.tar.gz
[hadoop@hadoop1 ~]$ tar zxf hadoop-3.0.3.tar.gz 
[hadoop@hadoop1 ~]$ ln -s hadoop-3.0.3  hadoop
[hadoop@hadoop1 ~]$ tar zxf jdk-8u181-linux-x64.tar.gz 
[hadoop@hadoop1 ~]$ ln -s jdk1.8.0_181/ java
  • Configure the environment variables
[hadoop@hadoop1 ~]$ cd hadoop/etc/hadoop/
[hadoop@hadoop1 hadoop]$ vim hadoop-env.sh 
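In hadoop-env.sh the essential change is pointing JAVA_HOME at the JDK. A minimal sketch, assuming the java symlink created above:

export JAVA_HOME=/home/hadoop/java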

[hadoop@hadoop1 ~]$ vim .bash_profile 
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/java/bin
[hadoop@hadoop1 ~]$ source .bash_profile 
[hadoop@hadoop1 ~]$ jps    # if the configuration succeeded, the command can be invoked
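The JDK itself can be checked the same way; the version should match the unpacked archive:

[hadoop@hadoop1 ~]$ java -version    # should report 1.8.0_181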
  • Test
[hadoop@hadoop1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@hadoop1 hadoop]$ mkdir input
[hadoop@hadoop1 hadoop]$ cp etc/hadoop/*.xml input
[hadoop@hadoop1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
[hadoop@hadoop1 hadoop]$ cat output/*
1	dfsadmin
[hadoop@hadoop1 hadoop]$ rm -fr input/ output/

Pseudo-distributed mode

  • Edit the configuration files (fs.defaultFS makes HDFS the default filesystem; dfs.replication is 1 because there is only a single DataNode)
[hadoop@hadoop1 hadoop]$ vim etc/hadoop/core-site.xml 
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

[hadoop@hadoop1 hadoop]$ vim etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
  • Generate a key pair and set up passwordless SSH
[hadoop@hadoop1 hadoop]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
ca:f2:05:21:bc:50:1d:a7:f0:d1:10:11:42:75:7d:7c hadoop@hadoop1
The key's randomart image is:
+--[ RSA 2048]----+
|   .=oOBo. .     |
|   o +.=. . o E  |
|  . o +    . .   |
|   . o .         |
|    . . S        |
|     . o         |
|    . o .        |
|     o .         |
|      .          |
+-----------------+
[hadoop@hadoop1 hadoop]$ ssh-copy-id localhost
  • Verify that the passwordless login takes effect
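A quick check (sketch): running a command over SSH should now succeed without a password prompt.

[hadoop@hadoop1 hadoop]$ ssh localhost date    # prints the date, no password asked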
  • Format the filesystem
[hadoop@hadoop1 hadoop]$  bin/hdfs namenode -format
  • Start the NameNode and DataNode daemons:
[hadoop@hadoop1 hadoop]$  sbin/start-dfs.sh
[hadoop@hadoop1 hadoop]$ jps
10481 DataNode
10370 NameNode
10694 SecondaryNameNode
11371 Jps
  • Check the web UI (port 9870)
    [screenshot: NameNode web UI]
  • Create the HDFS directories required to execute MapReduce jobs (this becomes the hadoop user's HDFS home, so relative paths such as input below resolve under /user/hadoop)
[hadoop@hadoop1 hadoop]$  bin/hdfs dfs -mkdir -p /user/hadoop
  • Copy the input files into the distributed filesystem:
[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -put etc/hadoop input
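The upload can be confirmed with a listing (sketch):

[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -ls input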
  • Run the example:
[hadoop@hadoop1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'

Check the web UI again:
[screenshot: web UI after running the job]

Files uploaded to the distributed filesystem can be viewed in the web UI, and can also be fetched back to the local filesystem:

[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -get output output
[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -cat output/*
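One caveat: the example job aborts if the output directory already exists in HDFS, so remove it before re-running:

[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -rm -r output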

Fully distributed mode

  • Stop the services first and clear out the old data (HDFS keeps its data under /tmp by default, so wiping it removes the state of the pseudo-distributed run)
[hadoop@hadoop1 hadoop]$  sbin/stop-dfs.sh
Stopping namenodes on [localhost]
Stopping datanodes
Stopping secondary namenodes [hadoop1]

[hadoop@hadoop1 tmp]$ pwd
/tmp
[hadoop@hadoop1 tmp]$ rm -fr *
  • Bring up two new virtual machines to act as DataNodes
[root@hadoop2 ~]# useradd hadoop
[root@hadoop2 ~]# id hadoop
uid=1000(hadoop) gid=1000(hadoop) groups=1000(hadoop)
[root@hadoop3 ~]# useradd hadoop

## Install nfs-utils on the new nodes
[root@hadoop2 ~]# yum install -y nfs-utils
[root@hadoop3 ~]# yum install -y nfs-utils

[root@hadoop1 ~]# systemctl start rpcbind
[root@hadoop2 ~]# systemctl start rpcbind
[root@hadoop3 ~]# systemctl start rpcbind
  • Start the NFS server on hadoop1 and configure the export (sharing /home/hadoop means the Hadoop tree, configuration, and SSH keys are identical on every node)
[root@hadoop1 ~]# systemctl start nfs-server
[root@hadoop1 ~]# vim /etc/exports
/home/hadoop   *(rw,anonuid=1000,anongid=1000)

[root@hadoop1 ~]# exportfs -rv
exporting *:/home/hadoop
[root@hadoop1 ~]# showmount -e
Export list for hadoop1:
/home/hadoop *
  • Mount the share on hadoop2 and hadoop3
[root@hadoop2 ~]# mount 172.25.254.1:/home/hadoop /home/hadoop
[root@hadoop3 ~]# mount 172.25.254.1:/home/hadoop /home/hadoop
[root@hadoop3 ~]# df
Filesystem                1K-blocks    Used Available Use% Mounted on
/dev/sda3                  20243456 1088260  19155196   6% /
devtmpfs                     498472       0    498472   0% /dev
tmpfs                        508264       0    508264   0% /dev/shm
tmpfs                        508264    6716    501548   2% /run
tmpfs                        508264       0    508264   0% /sys/fs/cgroup
/dev/sda1                    508580  110624    397956  22% /boot
tmpfs                        101656       0    101656   0% /run/user/0
172.25.254.1:/home/hadoop  20243456 2786176  17457280  14% /home/hadoop
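Because /home/hadoop (including .ssh) is now shared over NFS, the key pair generated earlier also grants passwordless login to the new nodes; a quick check, assuming the addresses above:

[hadoop@hadoop1 ~]$ ssh 172.25.254.2 hostname    # no password prompt (after the first-time host-key confirmation)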
  • Re-edit the configuration files: fs.defaultFS now points at the NameNode's real IP, and dfs.replication is raised to 2 to match the two DataNodes
[hadoop@hadoop1 ~]$ vim hadoop/etc/hadoop/core-site.xml 
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.25.254.1:9000</value>
    </property>
</configuration>

[hadoop@hadoop1 ~]$ vim hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

[hadoop@hadoop1 hadoop]$ vim workers		# synced to the other nodes automatically via NFS
[hadoop@hadoop1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop


[hadoop@hadoop2 hadoop]$ cat workers 
172.25.254.2
172.25.254.3
  • Format the filesystem and start the services
[hadoop@hadoop1 hadoop]$ bin/hdfs namenode -format
[hadoop@hadoop1 hadoop]$ sbin/start-dfs.sh
[hadoop@hadoop1 hadoop]$ jps
15013 SecondaryNameNode
15127 Jps
14687 NameNode

## The worker nodes now show the DataNode process
[hadoop@hadoop2 hadoop]$ jps
1693 Jps
1631 DataNode

[hadoop@hadoop3 ~]$ jps
1563 DataNode
1647 Jps
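The NameNode's view of the cluster can also be checked from the command line (sketch):

[hadoop@hadoop1 hadoop]$ bin/hdfs dfsadmin -report    # should list two live DataNodes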

[screenshot: NameNode web UI showing the two live DataNodes]

  • Test
[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -mkdir input
[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -put etc/hadoop/*.xml input
[hadoop@hadoop1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output 'dfs[a-z.]+'
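As before, the results can be read directly from HDFS (sketch):

[hadoop@hadoop1 hadoop]$ bin/hdfs dfs -cat output/*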

[screenshot: web UI after running the distributed job]

Simulating a client on hadoop4

[root@hadoop4 ~]# useradd hadoop
[root@hadoop4 ~]# yum install nfs-utils
[root@hadoop4 ~]# systemctl start rpcbind
[root@hadoop4 ~]# mount 172.25.254.1:/home/hadoop /home/hadoop

[hadoop@hadoop4 hadoop]$ vim workers 
172.25.254.2
172.25.254.3
172.25.254.4

[hadoop@hadoop4 hadoop]$ sbin/hadoop-daemon.sh start datanode
[hadoop@hadoop4 hadoop]$ jps
1856 DataNode
1917 Jps
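Note that hadoop-daemon.sh is deprecated in Hadoop 3.x; the equivalent modern invocation is:

[hadoop@hadoop4 hadoop]$ bin/hdfs --daemon start datanode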

[screenshot: web UI showing the newly added DataNode]

[hadoop@hadoop4 hadoop]$  dd if=/dev/zero of=bigfile bs=1M count=500
500+0 records in
500+0 records out
524288000 bytes (524 MB) copied, 22.3239 s, 23.5 MB/s
[hadoop@hadoop4 hadoop]$  bin/hdfs dfs -put bigfile
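With the default 128 MB block size the 500 MB file is split into four blocks, each replicated twice (dfs.replication=2). fsck can show the blocks and where the replicas landed (sketch):

[hadoop@hadoop4 hadoop]$ bin/hdfs fsck /user/hadoop/bigfile -files -blocks -locations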

[screenshot: web UI showing bigfile stored in HDFS]
