1. Startup
1) Start the Hadoop cluster
sbin/start-dfs.sh
sbin/start-yarn.sh
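To confirm the daemons came up, jps should list the HDFS and YARN processes (which processes appear on which node depends on your cluster layout; the comment below assumes a single-node setup):
$ jps    # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager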
2) -help: print the usage and options of a command
hadoop fs -help rm
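The fs shell also offers -usage for a one-line syntax summary, versus the full description printed by -help:
$ hadoop fs -usage rm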
3) Create the /sanguo directory
hadoop fs -mkdir /sanguo
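Verify that the directory exists:
$ hadoop fs -ls /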
2. Upload
1) -moveFromLocal: cut and paste a file from the local file system into HDFS
$ vim shuguo.txt
Enter:
shuguo
$ hadoop fs -moveFromLocal ./shuguo.txt /sanguo
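Because -moveFromLocal cuts rather than copies, the local shuguo.txt is deleted once the upload succeeds; only the HDFS copy remains:
$ ls shuguo.txt                        # fails: the local file is gone
$ hadoop fs -cat /sanguo/shuguo.txt    # prints: shuguo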
2) -copyFromLocal: copy a file from the local file system to an HDFS path
$ vim weiguo.txt
Enter:
weiguo
$ hadoop fs -copyFromLocal weiguo.txt /sanguo
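Unlike -moveFromLocal, -copyFromLocal leaves the local file untouched:
$ ls weiguo.txt                        # still present locally
$ hadoop fs -cat /sanguo/weiguo.txt    # prints: weiguo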
3) -put: equivalent to copyFromLocal; put is the usual choice in production
$ vim wuguo.txt
Enter:
wuguo
$ hadoop fs -put ./wuguo.txt /sanguo
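At this point /sanguo should hold all three uploaded files:
$ hadoop fs -ls /sanguo                # shuguo.txt, weiguo.txt, wuguo.txt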
4) -appendToFile: append a local file to the end of an existing HDFS file
$ vim liubei.txt
Enter:
liubei
$ hadoop fs -appendToFile liubei.txt /sanguo/shuguo.txt
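After the append, /sanguo/shuguo.txt contains both lines:
$ hadoop fs -cat /sanguo/shuguo.txt
shuguo
liubei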
------------------------ Extract an archive into a directory ------------------------
tar -zxvf kafka_2.11-2.4.1.tgz -C /opt/module/
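The flags: -z filter through gzip, -x extract, -v verbose, -f the archive file, and -C extract into the given directory instead of the current one. To preview the archive's contents without extracting it:
$ tar -tzf kafka_2.11-2.4.1.tgz | head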
----------- Upload all files under jars to the /spark-jars directory on HDFS -----------
Run this from inside the jars directory (the jars]$ prompt shows the working directory):
$ hadoop fs -put ./* /spark-jars
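When put is given several sources, the destination must be an existing directory, so create it first if needed and check the result afterwards; a minimal sketch:
$ hadoop fs -mkdir -p /spark-jars      # no-op if it already exists
$ hadoop fs -count /spark-jars         # directory/file counts and total size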