Generate a 10 GB file
[hadoop@masternode1 ~]$ dd if=/dev/zero of=/home/hadoop/a.txt bs=100M count=100
100+0 records in
100+0 records out
10485760000 bytes (10 GB) copied, 64.0325 s, 164 MB/s
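As a sanity check, the byte count dd reports is simply bs × count; a minimal sketch of the arithmetic:

```shell
# dd wrote 100 blocks of 100 MiB each:
# 100 * 1024 * 1024 bytes per block, times 100 blocks
echo $((100 * 1024 * 1024 * 100))   # prints 10485760000, matching dd's report
```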
Upload the 10 GB file to HDFS and verify it
[hadoop@masternode1 ~]$ hdfs dfs -ls /backup/a
Found 1 items
-rw-r--r-- 3 hadoop supergroup 10485760000 2016-11-10 14:37 /backup/a/a.txt
[hadoop@masternode1 ~]$ hdfs dfs -df /backup/a
Filesystem Size Used Available Use%
hdfs://cluster-ha 3134851063808 100535264891 2860436976612 3%
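The Use% column is just Used divided by Size; with the figures reported above, the integer percentage works out as:

```shell
# Used / Size as a percentage, using the values from hdfs dfs -df
echo $((100535264891 * 100 / 3134851063808))   # prints 3, i.e. the 3% shown
```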
Enable snapshots on the directory (the output below is produced by `hdfs dfsadmin -allowSnapshot`)
[hadoop@masternode1 ~]$ hdfs dfsadmin -allowSnapshot /backup/a
Allowing snaphot on /backup/a succeeded
Create a snapshot backup of the a directory
[hadoop@masternode1 ~]$ hdfs dfs -cre