HDFS Shell Commands
1. Show detailed information for the specified path (-ls)
hadoop fs -ls /sanguo/shuguo/
2. Create directories (-mkdir)
hadoop fs -mkdir -p /sanguo/shuguo (-p creates missing parent directories along the path)
3. Display file contents (-cat)
hadoop fs -cat /sanguo/shuguo/1.txt
4. Upload a local file (-put)
hadoop fs -put 2.txt /sanguo/shuguo/1.txt (uploads local 2.txt and stores it in HDFS as 1.txt)
5. Change the group, owner, or permissions of a file (-chgrp, -chown, -chmod)
hadoop fs -chgrp root /sanguo/shuguo/1.txt
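The HDFS -chmod/-chgrp/-chown commands take the same arguments as their Linux namesakes. A minimal local sketch of the chmod semantics (the temp file is just an illustration, not an HDFS path):

```shell
# Local analogue: hadoop fs -chmod uses the same octal mode syntax as Linux chmod.
tmp=$(mktemp -d)
touch "$tmp/1.txt"
chmod 640 "$tmp/1.txt"            # rw-r----- : owner read/write, group read, others none
perms=$(stat -c %a "$tmp/1.txt")  # read back the octal permission bits
echo "$perms"
rm -r "$tmp"
```

Running it prints 640, confirming the mode was applied.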
6. Copy a file from the local file system to HDFS (-copyFromLocal, equivalent to -put)
hadoop fs -copyFromLocal ./zhangfei.txt /sanguo/shuguo/1.txt
7. Copy a file from HDFS to the local file system (-copyToLocal, equivalent to -get)
hadoop fs -copyToLocal /sanguo/shuguo/1.txt ./zhangfei.txt
8. Copy from one HDFS path to another (-cp)
hadoop fs -cp /sanguo/shuguo/1.txt /sanguo
9. Move files within HDFS (-mv)
hadoop fs -mv /sanguo/shuguo/1.txt /
10. Merge and download multiple files; e.g. the HDFS directory /sanguo/shuguo/ contains 1.txt and 2.txt (-getmerge)
hadoop fs -getmerge /sanguo/shuguo/* ./merged.txt (the destination must be a single local file)
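-getmerge concatenates the source files in order into one local file, so the effect is the same as downloading each file and piping them through cat. A local sketch (file names and contents are illustrative):

```shell
tmp=$(mktemp -d)
printf 'wei\n' > "$tmp/1.txt"
printf 'shu\n' > "$tmp/2.txt"
# Same effect as: hadoop fs -getmerge <dir>/* merged.txt
cat "$tmp/1.txt" "$tmp/2.txt" > "$tmp/merged.txt"
merged=$(cat "$tmp/merged.txt")
echo "$merged"
rm -r "$tmp"
```

The merged file contains the two inputs back to back, in argument order.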
11. Show the end of a file (useful for checking newly appended content) (-tail)
hadoop fs -tail /sanguo/shuguo/1.txt
12. Delete files or directories (-rm)
hadoop fs -rm /sanguo/shuguo/1.txt
hadoop fs -rm -r /sanguo (deleting a directory requires -r)
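As with Linux rm, the -r flag makes the delete recursive: the directory and everything under it are removed in one command. A local sketch of that behavior (the directory tree is illustrative):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/sanguo/shuguo"
touch "$tmp/sanguo/shuguo/1.txt"
rm -r "$tmp/sanguo"               # recursive delete, like: hadoop fs -rm -r /sanguo
if [ -d "$tmp/sanguo" ]; then exists=yes; else exists=no; fi
echo "$exists"
rm -r "$tmp"
```

After the recursive delete the directory no longer exists, so the check prints no.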
13. Delete an empty directory (-rmdir)
hadoop fs -rmdir /sanguo/shuguo (only succeeds if the directory is empty)
14. Report the size of a directory (-du)
hadoop fs -du -h / (-h prints human-readable sizes, e.g. 2M instead of raw bytes)
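The -h flag works like the one on Linux du: sizes are shown as 2K / 3M / 1G rather than raw byte counts. A local sketch of the difference (file name and size are illustrative):

```shell
tmp=$(mktemp -d)
head -c 2048 /dev/zero > "$tmp/data.bin"   # a 2 KiB file
bytes=$(wc -c < "$tmp/data.bin")           # raw byte count, like plain -du
human=$(du -h "$tmp/data.bin" | cut -f1)   # human-readable, like -du -h
echo "$bytes $human"
rm -r "$tmp"
```

The raw count is exact (2048); the human-readable form is rounded to a unit suffix.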
15. Set the replication factor of an HDFS file (for redundancy) (-setrep)
hadoop fs -setrep 5 /sanguo/shuguo/1.txt (the actual number of replicas is capped by the number of DataNodes; a higher target only takes full effect if more nodes are added later)