1. Accessing block location (BlockLocation) information through the FileSystem API
Given a file name, look up the BlockLocation info for the blocks that store it:
// imports needed:
// import org.apache.hadoop.conf.Configuration;
// import org.apache.hadoop.fs.BlockLocation;
// import org.apache.hadoop.fs.FileStatus;
// import org.apache.hadoop.fs.FileSystem;
// import org.apache.hadoop.fs.Path;
@Test
public void BlockByFs() throws IOException {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // path on the cluster; this file is 11 bytes and is stored as two blocks
    Path file = new Path("/spaceQuota/hello222.txt");
    FileStatus fst = fs.getFileStatus(file);
    // return the blocks covering bytes 0 through 20 of hello222.txt
    BlockLocation[] blocklocations = fs.getFileBlockLocations(fst, 0, 20);
    for (BlockLocation blocklocation : blocklocations) {
        String[] hosts = blocklocation.getHosts();
        for (String host : hosts) {
            System.out.println(host); // datanode hostname
        }
        String[] names = blocklocation.getNames();
        for (String name : names) {
            System.out.println(name); // datanode ip:port
        }
        String[] topo = blocklocation.getTopologyPaths();
        for (String tp : topo) {
            System.out.println(tp); // network topology path
        }
    }
}
Output:
slave1
192.168.207.51:50010
/default-rack/192.168.207.51:50010
slave1
192.168.207.51:50010
/default-rack/192.168.207.51:50010
BlockLocation[] blocklocations = fs.getFileBlockLocations(fst, 0, 9); // with the length changed to 9
Output:
slave1
192.168.207.51:50010
/default-rack/192.168.207.51:50010
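Why does a length of 20 return two BlockLocations while 9 returns only one? getFileBlockLocations returns every block that overlaps the byte range [offset, offset + length). A minimal sketch of that arithmetic, assuming a hypothetical tiny block size of 10 bytes to match the 11-byte, two-block file above (the method names and numbers here are illustrative, not Hadoop internals):

```java
public class BlockRangeDemo {
    // Indices of the first and last block overlapping bytes [offset, offset + length)
    static long[] blocksFor(long offset, long length, long blockSize) {
        long first = offset / blockSize;
        long last = (offset + length - 1) / blockSize;
        return new long[]{first, last};
    }

    public static void main(String[] args) {
        long blockSize = 10; // hypothetical block size for illustration
        long[] r20 = blocksFor(0, 20, blockSize);
        System.out.println("bytes [0,20) -> blocks " + r20[0] + ".." + r20[1]); // blocks 0..1, two BlockLocations
        long[] r9 = blocksFor(0, 9, blockSize);
        System.out.println("bytes [0,9)  -> blocks " + r9[0] + ".." + r9[1]); // blocks 0..0, one BlockLocation
    }
}
```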
2. Accessing datanode information through the FileSystem API
// additional imports:
// import org.apache.hadoop.hdfs.DistributedFileSystem;
// import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
@Test
public void datanodeByFS() throws IOException {
    Configuration conf = new Configuration();
    // set the HDFS user before FileSystem.get(), otherwise the local OS account is used
    System.setProperty("HADOOP_USER_NAME", "hyxy");
    FileSystem fs = FileSystem.get(conf);
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    DatanodeInfo[] dni = dfs.getDataNodeStats();
    for (DatanodeInfo datanodeInfo2 : dni) {
        System.out.println("Capacity:" + datanodeInfo2.getCapacity());
        System.out.println("Remaining:" + datanodeInfo2.getRemaining());
        System.out.println("DatanodeReport:" + datanodeInfo2.getDatanodeReport());
        System.out.println("------------------------");
    }
}
Problem:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Access denied for user Mclusilu. Superuser privilege is required
That is, access is denied for the user Mclusilu (the local computer's account name); superuser privilege is required.
Cause: no HDFS user name was specified, so Hadoop falls back to the local OS account.
Fix:
1) System.setProperty("HADOOP_USER_NAME", "hyxy");
2) If that doesn't help and the same error still occurs:
   right-click the class → Run Configurations,
   then add under Arguments: -DHADOOP_USER_NAME=hyxy
   (hyxy is my Hadoop user name)
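getCapacity() and getRemaining() above return raw byte counts; the human-readable percentages printed by getDatanodeReport() are derived from them. A small sketch of that derivation (the numbers below are made up for illustration, not taken from this cluster):

```java
public class DatanodeUsageDemo {
    // Percentage of DFS capacity still free, as shown in a datanode report
    static double remainingPercent(long capacity, long remaining) {
        return capacity == 0 ? 0.0 : 100.0 * remaining / capacity;
    }

    public static void main(String[] args) {
        long capacity = 18_238_930_944L;   // hypothetical ~17 GB datanode disk
        long remaining = 13_152_641_024L;  // hypothetical free space
        System.out.printf("DFS Remaining%%: %.2f%%%n",
                remainingPercent(capacity, remaining));
    }
}
```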