Video: [美妙人生] Hadoop Course Series on HDFS — a hands-on guide to mastering HDFS
[Video Notes]
Writing HDFS data through the java.net.URL class
----------------------------------------------------------------
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;
import org.junit.Test;

/**
 * Write data to HDFS through the java.net.URL class.
 * Conclusion: writing to HDFS via a URL is not possible; the call throws
 * java.net.UnknownServiceException: protocol doesn't support output
 * (Note: resolving an hdfs:// URL at all assumes FsUrlStreamHandlerFactory
 * has already been registered via URL.setURLStreamHandlerFactory; otherwise
 * new URL(...) fails with a MalformedURLException for the hdfs scheme.)
 * @throws Exception
 */
@Test
public void writeByURL() throws Exception {
    URL url = new URL("hdfs://master1:9000/spaceQuota/hello.txt");
    URLConnection conn = url.openConnection();
    OutputStream out = conn.getOutputStream(); // throws UnknownServiceException here
    out.write("hello world".getBytes());
    out.close();
}
Writing through the FileSystem API
----------------------------------------------------------------------------
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

/**
 * Write to HDFS through the FileSystem API.
 */
@Test
public void writeByAPI() throws IOException {
    Configuration conf = new Configuration(); // picks up *-site.xml files from the classpath
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/spaceQuota/hello.txt");
    FSDataOutputStream out = fs.create(file);
    out.write("hello world".getBytes());
    out.close();
}
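Since writeByAPI never sets fs.defaultFS in code, FileSystem.get(conf) only reaches the cluster if that property is defined in a *-site.xml on the classpath. A minimal sketch of such a core-site.xml, with the hostname and port assumed from the other examples in these notes:

```xml
<!-- core-site.xml on the client classpath (hdfs://master1:9000 assumed from this course's cluster) -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master1:9000</value>
  </property>
</configuration>
```

Without this entry, Configuration falls back to the default file:/// scheme and the test writes to the local filesystem instead of HDFS.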
Writing through the FileSystem API with parameters set dynamically: replication = 2 and blocksize = 10 bytes
-----------------------------------------------------------------------------------------------
/**
 * Write to HDFS through the FileSystem API, setting replication and blocksize
 * dynamically per call.
 * Some parameters only take effect on the cluster side, e.g.
 * dfs.namenode.fs-limits.min-block-size=10 (the NameNode rejects any
 * requested blocksize smaller than this limit).
 * Note: precedence when loading config values:
 * code level --> {classPath/***-site.xml} --> {HADOOP_HOME/etc/hadoop/***-site.xml}
 */
@Test
public void writeByAPIForBlocksize() throws IOException {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://master1:9000");
    // the blocksize must be a multiple of the checksum chunk size
    conf.set("dfs.bytes-per-checksum", "10");
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/spaceQuota/hello6666.txt");
    // create(path, overwrite, bufferSize, replication, blockSize)
    FSDataOutputStream out = fs.create(file, true, 4096, (short) 2, 10);
    out.write("hello world".getBytes());
    out.close();
}
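The 10-byte blocksize above only succeeds if the NameNode's minimum-block-size limit has been lowered on the cluster side, as the comment notes. A sketch of the corresponding hdfs-site.xml entry on the NameNode (value taken from the note above; the NameNode must be restarted for it to take effect):

```xml
<!-- hdfs-site.xml on the NameNode: allow blocks as small as 10 bytes -->
<configuration>
  <property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <value>10</value>
  </property>
</configuration>
```

With the default limit (1 MB), the create call above would be rejected by the NameNode with an IOException complaining that the specified block size is below the minimum.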