Hadoop(2)

Use the Java interface to operate on HDFS: upload, download, delete, and so on.

Copy a local file to the Hadoop filesystem and display progress.
 

Goal: become familiar with basic HDFS operations.

 

Approach
Open a stream with a java.net.URL object and read the data from it (Hadoop's FsUrlStreamHandlerFactory must be registered so the URL class understands hdfs:// URLs).
Read data with the FileSystem API: a file in a Hadoop filesystem is represented by a Hadoop Path object, and a Path can be thought of as a Hadoop filesystem URI.
Write data with FileSystem: pass a Path object for the file to be created, and an output stream to write to is returned.
Delete files or directories permanently with FileSystem's delete() method (a usage sketch follows this list):
public boolean delete(Path f, boolean recursive) throws IOException
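A minimal sketch of delete() in use, assuming the target URI is passed as the first command-line argument (the path in the comment is a placeholder):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteDemo {
    public static void main(String[] args) throws Exception {
        String uri = args[0]; // e.g. hdfs://localhost:9000/tmp/demo (placeholder)
        FileSystem fs = FileSystem.get(URI.create(uri), new Configuration());
        // recursive = true: a directory is removed together with its contents;
        // for a plain file the flag makes no difference
        boolean deleted = fs.delete(new Path(uri), true);
        System.out.println("deleted: " + deleted);
    }
}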

     


Displaying a file from a Hadoop filesystem on standard output using a URLStreamHandler

 

Code:

import java.io.InputStream;
import java.net.URL;
import org.apache.hadoop.fs.FsUrlStreamHandlerFactory;
import org.apache.hadoop.io.IOUtils;

public class URLCat {
    static {
        // Register Hadoop's stream handler so java.net.URL understands
        // hdfs:// URLs. This factory can be set only once per JVM.
        URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
    }

    public static void main(String[] args) throws Exception {
        InputStream in = null;
        try {
            in = new URL(args[0]).openStream();
            // copy the stream to stdout in 4 KB chunks; don't close stdout
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
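Assuming the class is on the Hadoop classpath, it can be run as, for example, hadoop URLCat hdfs://localhost/user/tom/quangle.txt (the path is illustrative). The main limitation of this approach is that URL.setURLStreamHandlerFactory() may be called at most once per JVM, so it cannot be used when another component has already set a factory; the FileSystem API shown next avoids this problem.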
 


 

Displaying files from a Hadoop filesystem using the FileSystem API

 

Code:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class DoubleCat {
    public static void main(String[] args) throws Exception {
        String uri = args[0];
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        FSDataInputStream in = null;
        try {
            in = fs.open(new Path(uri));
            IOUtils.copyBytes(in, System.out, 4096, false);
            in.seek(3); // seek back to byte offset 3 of the file
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
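Unlike a plain java.io.InputStream, the FSDataInputStream returned by FileSystem.open() implements Seekable, which is what makes in.seek(3) legal: the file is streamed to stdout once, the stream is repositioned to byte offset 3, and everything from there on is streamed a second time. seek() is a relatively expensive operation on HDFS, so it should be used for occasional repositioning rather than frequent random access.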


 

 

Copying a local file to the Hadoop filesystem and showing progress

 

Code:
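A minimal sketch of the usual pattern for this task, assuming the local source path and the HDFS destination URI are the two command-line arguments (host and paths are placeholders): FileSystem.create() accepts a Progressable whose progress() method Hadoop calls back periodically as data is written, which is used here to print a dot as a crude progress indicator.

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.Progressable;

public class FileCopyWithProgress {
    public static void main(String[] args) throws Exception {
        String localSrc = args[0]; // local source file
        String dst = args[1];      // e.g. hdfs://localhost:9000/user/tom/out.txt (placeholder)

        InputStream in = new BufferedInputStream(new FileInputStream(localSrc));
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(dst), conf);

        // create() returns an output stream for the new file; the Progressable
        // is invoked periodically while data is being written
        OutputStream out = fs.create(new Path(dst), new Progressable() {
            public void progress() {
                System.out.print(".");
            }
        });

        // copy in 4 KB chunks and close both streams when finished
        IOUtils.copyBytes(in, out, 4096, true);
    }
}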



RPC client code:


package cn.edu.hut.hadoop.hdfs;

import java.io.IOException;
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;

public class RPCClient {

    public static void main(String[] args) throws IOException {
        // Obtain a dynamic proxy for the Bizable protocol. The client
        // version (10010) must match Bizable.versionID on the server side.
        Bizable proxy = RPC.getProxy(Bizable.class,
                10010,
                new InetSocketAddress("192.168.88.1", 9527),
                new Configuration());
        String result = proxy.sayHi("tomcat");
        System.out.println(result);
        RPC.stopProxy(proxy); // release the underlying connection
    }
}
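RPC.getProxy() returns a dynamic proxy implementing Bizable, so the call proxy.sayHi("tomcat") is shipped over Hadoop's IPC layer to whatever server is listening at 192.168.88.1:9527; the address and port therefore have to match the server's bind address and port below.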

RPC server code:

 

package cn.edu.hut.hadoop.hdfs;

import java.io.IOException;
import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.Server;

public class RPCServer implements Bizable {

    public static void main(String[] args) throws HadoopIllegalArgumentException, IOException {
        Configuration conf = new Configuration();
        // Build an RPC server that exposes the Bizable protocol,
        // backed by an instance of this class.
        Server server = new RPC.Builder(conf)
                .setProtocol(Bizable.class)
                .setInstance(new RPCServer())
                .setBindAddress("192.168.88.1")
                .setPort(9527)
                .build();
        server.start();
    }

    public String sayHi(String name) {
        return "hi " + name;
    }
}
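To try this out, start RPCServer first (server.start() leaves the listener running on background threads), then run RPCClient in a separate JVM; assuming 192.168.88.1 is reachable from the client, it should print hi tomcat.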



 

Interface definition:

package cn.edu.hut.hadoop.hdfs;

public interface Bizable {
    public String sayHi(String name);

    // RPC protocol version; must equal the clientVersion passed to RPC.getProxy()
    public static final long versionID = 10010;
}
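The static versionID field is Hadoop's RPC versioning convention: the framework reads it from the protocol interface and compares it with the clientVersion argument given to RPC.getProxy(), rejecting the call on a mismatch. The value 10010 itself is arbitrary; it just has to be the same on both sides.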


 
