Hadoop: standalone, pseudo-distributed, and fully distributed modes

Hadoop standalone mode

Create a user and set its password

[root@server1 ~]# useradd -u 1000 hadoop
[root@server1 ~]# passwd hadoop
Changing password for user hadoop.
New password: 
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: 
passwd: all authentication tokens updated successfully.
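
If several nodes need the same account, the password can also be set non-interactively; a minimal sketch (the password string here is only a placeholder, replace it with your own):

[root@server1 ~]# echo 'hadoop:REPLACE_WITH_PASSWORD' | chpasswd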

Switch to the hadoop user and unpack the installation archives

[hadoop@server1 ~]$ tar zxf hadoop-3.0.3.tar.gz 
[hadoop@server1 ~]$ tar zxf jdk-8u181-linux-x64.tar.gz 
[hadoop@server1 ~]$ ln -s jdk1.8.0_181/ java
[hadoop@server1 ~]$ ln -s hadoop-3.0.3 hadoop
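
The symlinks keep the paths version-independent, so a later upgrade only needs the links repointed. A quick check that they resolve as expected:

[hadoop@server1 ~]$ readlink java hadoop    # should print jdk1.8.0_181/ and hadoop-3.0.3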

Set the environment variables

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/home/hadoop/java    # line 54 of hadoop-env.sh
[hadoop@server1 ~]$ vim .bash_profile 
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$HOME/java/bin
[hadoop@server1 ~]$ source .bash_profile
[hadoop@server1 ~]$ jps
1053 Jps
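
An optional sanity check that the JDK on the new PATH is the one being used:

[hadoop@server1 ~]$ which java    # should resolve to /home/hadoop/java/bin/java
[hadoop@server1 ~]$ java -version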

Test

[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ ls
bin  include  libexec      NOTICE.txt  sbin
etc  lib      LICENSE.txt  README.txt  share
[hadoop@server1 hadoop]$ mkdir input
[hadoop@server1 hadoop]$ cp etc/hadoop/*.xml input/
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output '[a-z.]+'
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
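
The matches land in part-r-00000; to inspect them:

[hadoop@server1 output]$ cat part-r-00000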

Pseudo-distributed mode

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop/etc/hadoop
[hadoop@server1 hadoop]$ vim core-site.xml 
<configuration>
   <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.25.76.1:9000</value>
   </property>
</configuration>


[hadoop@server1 hadoop]$ vim hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
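
dfs.replication is set to 1 because this pseudo-distributed setup has only a single DataNode. As an optional check, the effective value can be read back with getconf (run from /home/hadoop/hadoop):

[hadoop@server1 hadoop]$ bin/hdfs getconf -confKey dfs.replication    # should print 1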

Generate and distribute SSH keys

[hadoop@server1 hadoop]$ ssh-keygen
[hadoop@server1 hadoop]$ ssh-copy-id localhost
[hadoop@server1 hadoop]$ ssh-copy-id server1
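
Passwordless login is worth verifying before continuing, because start-dfs.sh will otherwise prompt for a password for every daemon it launches:

[hadoop@server1 hadoop]$ ssh localhost hostname
[hadoop@server1 hadoop]$ ssh server1 hostname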

Format the NameNode and start the service

[hadoop@server1 hadoop]$ pwd
/home/hadoop/hadoop
[hadoop@server1 hadoop]$ bin/hdfs namenode -format

[hadoop@server1 hadoop]$ cd sbin/
[hadoop@server1 sbin]$ ./start-dfs.sh 
Starting namenodes on [server1]
Starting datanodes
Starting secondary namenodes [server1]
[hadoop@server1 sbin]$ jps
14084 SecondaryNameNode
13893 DataNode
14184 Jps
13787 NameNode
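
Besides jps, the NameNode itself can report the cluster state; an optional check:

[hadoop@server1 sbin]$ cd ..
[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report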

Test:

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
[hadoop@server1 hadoop]$ bin/hdfs dfs -put input/
[hadoop@server1 hadoop]$ bin/hdfs dfs -ls
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2019-05-21 17:42 input

[hadoop@server1 hadoop]$ rm -rf input/ output/    # remove the local copies; the job below reads input from HDFS
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output '[a-z.]+'
[hadoop@server1 hadoop]$ ls
bin  include  libexec      logs        README.txt  share
etc  lib      LICENSE.txt  NOTICE.txt  sbin
[hadoop@server1 hadoop]$ bin/hdfs dfs -get output
[hadoop@server1 hadoop]$ ls
bin  include  libexec      logs        output      sbin
etc  lib      LICENSE.txt  NOTICE.txt  README.txt  share
[hadoop@server1 hadoop]$ cd output/
[hadoop@server1 output]$ ls
part-r-00000  _SUCCESS
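
The output can also be read straight out of HDFS, without the -get step:

[hadoop@server1 hadoop]$ bin/hdfs dfs -cat output/*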

View the result in the NameNode web UI at http://172.25.76.1:9870.

Fully distributed mode

Clean up the previous environment

[hadoop@server1 hadoop]$ sbin/stop-dfs.sh 
Stopping namenodes on [server1]
Stopping datanodes
Stopping secondary namenodes [server1]
[hadoop@server1 hadoop]$ cd /tmp/
[hadoop@server1 tmp]$ ls
hadoop  hadoop-hadoop  hsperfdata_hadoop
[hadoop@server1 tmp]$ rm -rf *

Create the user and install NFS on all nodes

[root@server2 ~]# useradd -u 1000 hadoop
[root@server2 ~]# yum install -y nfs-utils

[root@server3 ~]# useradd -u 1000 hadoop
[root@server3 ~]# yum install -y nfs-utils

[root@server1 ~]# yum install -y nfs-utils

[root@server1 ~]# systemctl start rpcbind
[root@server2 ~]# systemctl start rpcbind
[root@server3 ~]# systemctl start rpcbind

Start and configure the NFS service

[root@server1 tmp]# systemctl start nfs-server
[root@server1 tmp]# vim /etc/exports
/home/hadoop    *(rw,anonuid=1000,anongid=1000)
[root@server1 tmp]# exportfs -rv
exporting *:/home/hadoop

[root@server1 tmp]# showmount -e
Export list for server1:
/home/hadoop *
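
The export should also be visible from the clients before mounting; a quick check from server2 (assuming rpcbind is reachable through the firewall):

[root@server2 ~]# showmount -e 172.25.76.1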

Mount the share on server2 and server3

[root@server2 ~]# mount 172.25.76.1:/home/hadoop/ /home/hadoop/
[root@server2 ~]# df
Filesystem               1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-root     17811456 1099612  16711844   7% /
devtmpfs                    497292       0    497292   0% /dev
tmpfs                       508264       0    508264   0% /dev/shm
tmpfs                       508264    6716    501548   2% /run
tmpfs                       508264       0    508264   0% /sys/fs/cgroup
/dev/sda1                  1038336  123376    914960  12% /boot
tmpfs                       101656       0    101656   0% /run/user/0
172.25.76.1:/home/hadoop  17811456 2923264  14888192  17% /home/hadoop


[root@server3 ~]# mount 172.25.76.1:/home/hadoop/ /home/hadoop/
[root@server3 ~]# df
Filesystem               1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-root     17811456 1099720  16711736   7% /
devtmpfs                    497292       0    497292   0% /dev
tmpfs                       508264       0    508264   0% /dev/shm
tmpfs                       508264    6716    501548   2% /run
tmpfs                       508264       0    508264   0% /sys/fs/cgroup
/dev/sda1                  1038336  123376    914960  12% /boot
tmpfs                       101656       0    101656   0% /run/user/0
172.25.76.1:/home/hadoop  17811456 2923264  14888192  17% /home/hadoop
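
NFS maps file ownership by numeric uid, which is why every node created hadoop with -u 1000; it is worth confirming the ids really match:

[root@server1 ~]# id hadoop    # uid should be 1000 on all three nodes
[root@server2 ~]# id hadoop
[root@server3 ~]# id hadoop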

Configure HDFS

[root@server1 hadoop]# su - hadoop
[hadoop@server1 ~]$ cd hadoop
[hadoop@server1 hadoop]$ cd etc/hadoop/


[hadoop@server1 hadoop]$ vim hdfs-site.xml 
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

[hadoop@server1 hadoop]$ vim workers 
172.25.76.2
172.25.76.3

Because /home/hadoop is shared over NFS, the same workers file (and all other configuration) is already visible on the worker nodes:

[root@server2 ~]# su - hadoop
[hadoop@server2 ~]$ cd hadoop/etc/hadoop/
[hadoop@server2 hadoop]$ cat workers
172.25.76.2
172.25.76.3


The same applies on server3:
[root@server3 ~]# su - hadoop
[hadoop@server3 ~]$ cd hadoop/etc/hadoop/
[hadoop@server3 hadoop]$ cat workers
172.25.76.2
172.25.76.3

Format the NameNode and start DFS

[hadoop@server1 hadoop]$ bin/hdfs namenode -format


[hadoop@server1 hadoop]$ sbin/start-dfs.sh 
Starting namenodes on [server1]
Starting datanodes
172.25.76.2: Warning: Permanently added '172.25.76.2' (ECDSA) to the list of known hosts.
172.25.76.3: Warning: Permanently added '172.25.76.3' (ECDSA) to the list of known hosts.
Starting secondary namenodes [server1]
[hadoop@server1 hadoop]$ jps
16544 Jps
16411 SecondaryNameNode
16221 NameNode

[hadoop@server2 hadoop]$ jps
10724 Jps
10662 DataNode
[hadoop@server3 hadoop]$ jps
10717 Jps
10654 DataNode
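
With both DataNodes up, the NameNode report should list two live nodes; an optional check:

[hadoop@server1 hadoop]$ bin/hdfs dfsadmin -report    # the report should list two live datanodes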

Upload files

[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir -p /user/hadoop
[hadoop@server1 hadoop]$ bin/hdfs dfs -mkdir input
[hadoop@server1 hadoop]$ bin/hdfs dfs -put etc/hadoop/*.xml input
[hadoop@server1 hadoop]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar grep input output '[a-z.]+'
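
To confirm the uploaded blocks really carry two replicas across the DataNodes, fsck can report file and block placement; an optional check:

[hadoop@server1 hadoop]$ bin/hdfs fsck /user/hadoop/input -files -blocks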

The result can be viewed in the NameNode web UI at http://172.25.76.1:9870.
