Hadoop HDFS Shell Operations

This article is a summary based on the 尚硅谷 Hadoop 3.x video series and is intended for learning and exchange only. Video: 44_尚硅谷_Hadoop_HDFS_Shell命令上传_哔哩哔哩_bilibili

1. Start the HDFS service from the hadoop-3.3.4 directory

[atguigu@hadoop102 hadoop-3.3.4]$ sbin/start-dfs.sh 
Starting namenodes on [hadoop102]
Starting datanodes
Starting secondary namenodes [hadoop104]
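
To confirm the daemons actually came up, you can check with jps on each node (a quick sanity check, not from the video; the hostnames follow this tutorial's cluster layout):

[atguigu@hadoop102 hadoop-3.3.4]$ jps
# expect NameNode and DataNode here on hadoop102,
# a DataNode on hadoop103, and SecondaryNameNode plus a DataNode on hadoop104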

2. View the usage of a command

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -help rm
-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ... :
  Delete all files that match the specified file pattern. Equivalent to the Unix
  command "rm <src>"
                                                                                 
  -f          If the file does not exist, do not display a diagnostic message or 
              modify the exit status to reflect an error.                        
  -[rR]       Recursively deletes directories.                                   
  -skipTrash  option bypasses trash, if enabled, and immediately deletes <src>.  
  -safely     option requires safety confirmation, if enabled, requires          
              confirmation before deleting large directory with more than        
              <hadoop.shell.delete.limit.num.files> files. Delay is expected when
              walking over large directory recursively to count the number of    
              files to be deleted before the confirmation.                       
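
For a shorter summary, -usage prints only the option syntax without the long description (a standard hadoop fs subcommand, not shown in the video):

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -usage rm
# prints a one-line usage summary for -rm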

3. Create the /sanguo directory

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -mkdir /sanguo
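
Like Linux mkdir, -mkdir fails if intermediate directories are missing unless you pass -p, which creates parents as needed (illustration only with a made-up path; not run in this walkthrough):

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -mkdir -p /sanguo/weiguo/xuchang
# -p creates /sanguo/weiguo first if it does not exist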

Part 1. Upload

1. -moveFromLocal: cut and paste from local to HDFS

Create the shuguo.txt file

[atguigu@hadoop102 hadoop-3.3.4]$ vim shuguo.txt

shuguo 

Move the local shuguo.txt into the /sanguo directory on HDFS

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -moveFromLocal shuguo.txt /sanguo
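
Because this is a move rather than a copy, the local file is gone afterwards; a quick way to verify (my own check, not from the video):

[atguigu@hadoop102 hadoop-3.3.4]$ ls shuguo.txt
ls: cannot access 'shuguo.txt': No such file or directory
[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -ls /sanguo
# shuguo.txt is now listed under /sanguo instead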

2. -copyFromLocal: copy a file from the local file system to an HDFS path

Create the weiguo.txt file

[atguigu@hadoop102 hadoop-3.3.4]$ vim weiguo.txt

weiguo

Copy the local weiguo.txt into the /sanguo directory on HDFS

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -copyFromLocal weiguo.txt /sanguo
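
If the destination file already exists, the copy fails; -copyFromLocal (and -put) accept -f to overwrite the existing target (sketch only, not run here):

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -copyFromLocal -f weiguo.txt /sanguo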

3. -put: equivalent to copyFromLocal; put is the more common choice in production

Create the wuguo.txt file

[atguigu@hadoop102 hadoop-3.3.4]$ vim wuguo.txt 

wuguo

Copy the local wuguo.txt into the /sanguo directory on HDFS

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -put wuguo.txt /sanguo 
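
-put also accepts several local sources in one command when the destination is a directory (sketch only, not run in this walkthrough):

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -put weiguo.txt wuguo.txt /sanguo
# uploads both files at once; fails on existing targets unless -f is given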

4. -appendToFile: append a file to the end of an existing file

Create liubei.txt

[atguigu@hadoop102 hadoop-3.3.4]$ vim liubei.txt

liubei 

Append the contents of the local liubei.txt to /sanguo/shuguo.txt on HDFS

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -appendToFile liubei.txt /sanguo/shuguo.txt
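
-appendToFile can also read from stdin when the source is given as -, so you can append without creating a local file first (standard behavior of the command; "guanyu" is made-up content, not run in this walkthrough):

[atguigu@hadoop102 hadoop-3.3.4]$ echo "guanyu" | hadoop fs -appendToFile - /sanguo/shuguo.txt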

Part 2. Download

1. -copyToLocal: copy from HDFS to the local file system

Copy shuguo.txt from HDFS to the local current directory

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -copyToLocal /sanguo/shuguo.txt ./

2. -get: equivalent to copyToLocal; get is the more common choice in production. Note that this example also renames the file to shuguo2.txt as it is downloaded.

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -get /sanguo/shuguo.txt ./shuguo2.txt
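
-get also works on a directory and copies it recursively to the local file system (sketch with a hypothetical local target, not run here):

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -get /sanguo ./sanguo_local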

Part 3. Direct HDFS operations

1. -ls: list directory contents

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -ls /
Found 6 items
-rw-r--r--   3 atguigu supergroup  148162542 2022-10-30 09:17 /jdk-8u341-linux-x64.tar.gz
drwxr-xr-x   - atguigu supergroup          0 2022-10-30 11:42 /output2
drwxr-xr-x   - atguigu supergroup          0 2022-10-31 15:39 /sanguo
drwx------   - atguigu supergroup          0 2022-10-30 11:42 /tmp
drwxr-xr-x   - atguigu supergroup          0 2022-10-30 09:14 /wcinput
drwxr-xr-x   - atguigu supergroup          0 2022-10-30 09:25 /wcoutput
[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -ls /sanguo
Found 3 items
-rw-r--r--   3 atguigu supergroup         14 2022-10-31 15:47 /sanguo/shuguo.txt
-rw-r--r--   3 atguigu supergroup          7 2022-10-31 15:35 /sanguo/weiguo.txt
-rw-r--r--   3 atguigu supergroup          6 2022-10-31 15:39 /sanguo/wuguo.txt

2. -cat: display file contents

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -cat /sanguo/shuguo.txt
shuguo
liubei 

3. -chgrp, -chmod, -chown: same usage as in the Linux file system; change a file's group, permissions, or owner

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -chown atguigu:atguigu /sanguo/shuguo.txt
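
The video only demonstrates -chown, but -chmod and -chgrp follow the same Linux conventions (sketches, not from the video):

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -chmod 666 /sanguo/shuguo.txt
# octal modes as in Linux chmod (rw-rw-rw-)
[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -chgrp atguigu /sanguo/shuguo.txt
# change only the group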

4. -mkdir: create a path

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -mkdir /jinguo

5. -cp: copy from one HDFS path to another

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -cp /sanguo/shuguo.txt /jinguo

6. -mv: move files within HDFS

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -mv /sanguo/wuguo.txt /jinguo
[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -mv /sanguo/weiguo.txt /jinguo
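
Like Linux mv, -mv also renames when source and destination sit in the same directory (illustration only, not run in this walkthrough):

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -mv /jinguo/wuguo.txt /jinguo/wuguo2.txt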

7. -tail: display the last 1 KB of a file

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -tail /jinguo/shuguo.txt
shuguo
liubei 

8. -rm: delete a file or directory

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -rm /sanguo/shuguo.txt
Deleted /sanguo/shuguo.txt

9. -rm -r: recursively delete a directory and its contents

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -rm -r /sanguo
Deleted /sanguo 

10. -du: report file and directory sizes. In the output below, the first column is the file size and the second is the total space consumed across all replicas (size × replication factor, 3 here), so 27 bytes of data occupy 81 bytes of cluster storage.

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -du -s -h /jinguo
27  81  /jinguo
[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -du -h /jinguo
14  42  /jinguo/shuguo.txt
7   21  /jinguo/weiguo.txt
6   18  /jinguo/wuguo.txt

11. -setrep: set the replication factor of a file in HDFS

[atguigu@hadoop102 hadoop-3.3.4]$ hadoop fs -setrep 10 /jinguo/shuguo.txt
Replication 10 set: /jinguo/shuguo.txt

The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. Since there are currently only 3 machines, there can be at most 3 replicas; only when the cluster grows to 10 nodes can the replication actually reach 10.
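
You can see the gap between the requested and the actual replication with hdfs fsck, which reports under-replicated blocks (my own check, not from the video):

[atguigu@hadoop102 hadoop-3.3.4]$ hdfs fsck /jinguo/shuguo.txt -files -blocks
# on a 3-DataNode cluster this reports the block as under-replicated:
# target replication is 10 but only 3 live replicas exist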
