Preparing the dependency jars:
~/hadoop-2.6.5/share/hadoop/common/lib
~/hadoop-2.6.5/share/hadoop/common/
~/hadoop-2.6.5/share/hadoop/hdfs/lib
~/hadoop-2.6.5/share/hadoop/hdfs/
demo01:
Add the jars above to the build path of an Eclipse project:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class demo01 {
    public static void main(String[] args) throws Exception {
        // Point the client at the NameNode (address elided in the original)
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://192.168.8.xxx:9000");
        FileSystem fs = FileSystem.get(conf);
        System.out.println(fs);
        // Create a directory under the HDFS root
        fs.mkdirs(new Path("/demo0001"));
        fs.close();
    }
}
Errors encountered:
1. Error: Cannot create directory /demo0001. Name node is in safe mode.
   Fix: hadoop dfsadmin -safemode leave
2. Error: Permission denied: user=xxx, access=WRITE, inode="/":root:supergroup:drwxr-xr-x
   Fix: add the following to hdfs-site.xml (this disables HDFS permission checking entirely, so it is only suitable for test environments):
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
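Instead of switching permission checking off for the whole cluster, the client can identify itself as the owner of "/" via the standard HADOOP_USER_NAME environment variable (the user name root here matches the inode owner in the error message above; substitute your own):

```shell
# Make the HDFS client act as the user "root" (the owner of "/")
# instead of the local OS user. Set this before starting the Java client.
export HADOOP_USER_NAME=root
echo "$HADOOP_USER_NAME"
```

From Java code, the equivalent is calling System.setProperty("HADOOP_USER_NAME", "root") before the first FileSystem.get() call.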
Result:
The directory was created under the root successfully:
drwxr-xr-x - xxx supergroup 0 2018-12-12 23:09 /demo0001
2) Maven artifact:
https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs
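Instead of collecting jars by hand from the directories above, the same dependencies can be pulled in through Maven. A minimal pom.xml fragment, assuming version 2.6.5 to match the cluster used here:

```xml
<!-- pom.xml: HDFS client dependencies for Hadoop 2.6.5 -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.6.5</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.6.5</version>
</dependency>
```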
3) Kerberos authentication (reference):
https://www.cnblogs.com/wukenaihe/p/3732141.html
# Snapshots
Allow snapshots on a directory:
hdfs dfsadmin -allowSnapshot <dir>
# Trash
When enabled, deleted files are moved to the trash directory instead of being removed immediately; the retention time is given in minutes.
Configuration key:
fs.trash.
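The retention period is controlled by the standard fs.trash.interval setting in core-site.xml (a value of 0, the default, leaves trash disabled). A sketch assuming a one-day retention:

```xml
<!-- core-site.xml: keep deleted files in .Trash for 1440 minutes (24 h) -->
<property>
    <name>fs.trash.interval</name>
    <value>1440</value>
</property>
```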
# Quotas
1) Name quota
Limits the number of files/directories under an HDFS directory; a quota of N actually allows only N-1 entries, because the directory itself counts against the quota.
Set a quota:  hdfs dfsadmin -setQuota 3 <dir>
Clear it:     hdfs dfsadmin -clrQuota <dir>
2) Space quota
Set:   hdfs dfsadmin -setSpaceQuota <size> <dir>
Clear: hdfs dfsadmin -clrSpaceQuota <dir>
# API operations:
IOUtils
Copy data directly with IOUtils:
@Test
public void IOUtilsTest() throws IllegalArgumentException, IOException {
    // fs and conf are fields assumed to be initialized elsewhere in the
    // test class, e.g. conf = new Configuration(); fs = FileSystem.get(conf);
    // local input file
    FileInputStream fis = new FileInputStream(new File("F:/demo/text.txt"));
    // output stream to a new file on HDFS
    FSDataOutputStream fos = fs.create(new Path("/demotext0511.txt"));
    IOUtils.copyBytes(fis, fos, conf);
    IOUtils.closeStream(fos);
    IOUtils.closeStream(fis);
    fs.close();
}
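Under the hood, IOUtils.copyBytes is a buffered read/write loop (Hadoop takes the buffer size from io.file.buffer.size, which defaults to 4096 bytes). A plain-JDK sketch of the same idea, runnable without a cluster:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyBytesSketch {
    // Rough equivalent of IOUtils.copyBytes(in, out, conf):
    // copy the stream in buffer-sized chunks until EOF.
    static void copyBytes(InputStream in, OutputStream out, int bufSize) throws IOException {
        byte[] buf = new byte[bufSize];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("hello hdfs".getBytes());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copyBytes(in, out, 4096);
        System.out.println(out.toString());
    }
}
```

Unlike this sketch, the three-argument Hadoop version leaves the streams open, which is why the test above closes them explicitly with IOUtils.closeStream.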