Preparation:
1. Install Hadoop.
2. Create a HelloWorld.jar package; this article builds the jar from the Linux shell.
Write a HelloWorld.java file:
public class HelloWorld
{
    public static void main(String[] args) throws Exception
    {
        System.out.println("Hello World");
    }
}
Compile it with javac HelloWorld.java to get HelloWorld.class.
Create a MANIFEST.MF file in the same directory (make sure the file ends with a newline, otherwise the last attribute may be ignored):
Manifest-Version: 1.0
Created-By: jdk1.6.0_45 (Sun Microsystems Inc.)
Main-Class: HelloWorld
Run: jar cvfm HelloWorld.jar MANIFEST.MF HelloWorld.class
Then run java -jar HelloWorld.jar to see the output.
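The Main-Class attribute in MANIFEST.MF is what tells java -jar which class to launch. As a small illustrative sketch (not part of the build steps), the standard java.util.jar.Manifest class can parse the same text; note that manifest lines must end with a newline, which is also why the MANIFEST.MF file itself must:

```java
import java.io.ByteArrayInputStream;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

public class ManifestDemo {
    public static void main(String[] args) throws Exception {
        // Same content as the MANIFEST.MF above; each line must end with '\n'.
        String mf = "Manifest-Version: 1.0\n"
                  + "Main-Class: HelloWorld\n\n";
        Manifest manifest = new Manifest(new ByteArrayInputStream(mf.getBytes("UTF-8")));
        // This is the attribute java -jar consults to pick the entry class.
        String mainClass = manifest.getMainAttributes().getValue(Attributes.Name.MAIN_CLASS);
        System.out.println(mainClass); // prints "HelloWorld"
    }
}
```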
Execution:
1. Run HelloWorld.jar with Hadoop: cd to the directory containing HelloWorld.jar and run hadoop jar HelloWorld.jar. The output is the same as with java -jar HelloWorld.jar.
Following the same process as HelloWorld, test reading a file from HDFS with the code below:
import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class FileSystemCat {
    public static void main(String[] args) throws Exception {
        String uri = args[0];
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        InputStream in = null;
        try {
            in = fs.open(new Path(uri));
            IOUtils.copyBytes(in, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(in);
        }
    }
}
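In this code, IOUtils.copyBytes(in, System.out, 4096, false) streams the file to stdout in 4096-byte chunks without closing the streams (the false flag); the finally block then closes the input explicitly. A rough plain-Java equivalent of that copy loop, shown here only to illustrate the behavior:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyBytesDemo {
    // Copies in to out in bufferSize-byte chunks, like IOUtils.copyBytes
    // with closeStreams=false: the caller stays responsible for closing.
    static void copyBytes(InputStream in, OutputStream out, int bufferSize) throws Exception {
        byte[] buf = new byte[bufferSize];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the HDFS stream opened by fs.open(...)
        InputStream in = new ByteArrayInputStream("Hello from HDFS".getBytes("UTF-8"));
        copyBytes(in, System.out, 4096);
        in.close();
    }
}
```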
1. Before compiling this FileSystemCat.java on Linux, set the classpath:
1) Run the hadoop classpath command to print the classpath Hadoop needs; on my machine the result is:
/opt/hadoop-2.2.0/etc/hadoop:/opt/hadoop-2.2.0/share/hadoop/common/lib/*:/opt/hadoop-2.2.0/share/hadoop/common/*:/opt/hadoop-2.2.0/share/hadoop/hdfs:/opt/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/opt/hadoop-2.2.0/share/hadoop/hdfs/*:/opt/hadoop-2.2.0/share/hadoop/yarn/lib/*:/opt/hadoop-2.2.0/share/hadoop/yarn/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/*:/opt/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
2) Add the result above to CLASSPATH in /etc/profile:
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:/opt/hadoop-2.2.0/etc/hadoop:/opt/hadoop-2.2.0/share/hadoop/common/lib/*:/opt/hadoop-2.2.0/share/hadoop/common/*:/opt/hadoop-2.2.0/share/hadoop/hdfs:/opt/hadoop-2.2.0/share/hadoop/hdfs/lib/*:/opt/hadoop-2.2.0/share/hadoop/hdfs/*:/opt/hadoop-2.2.0/share/hadoop/yarn/lib/*:/opt/hadoop-2.2.0/share/hadoop/yarn/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.2.0/share/hadoop/mapreduce/*:/opt/hadoop-2.2.0/contrib/capacity-scheduler/*.jar:$CLASSPATH
3) Run source /etc/profile in the shell.
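To verify what the JVM actually sees after sourcing /etc/profile, one quick debugging aid (an extra check, not one of the original steps) is to print the java.class.path system property, which reflects the CLASSPATH environment variable when no -cp flag is given:

```java
public class ClasspathCheck {
    public static void main(String[] args) {
        // With no -cp/-classpath flag, this reflects the CLASSPATH env var
        // (or "." if CLASSPATH is unset); the Hadoop jar paths should appear here.
        System.out.println(System.getProperty("java.class.path"));
    }
}
```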
2. Compile with javac FileSystemCat.java.
3. Create the jar with jar cvfm FileSystemCat.jar MANIFEST.MF FileSystemCat.class (first change Main-Class in MANIFEST.MF to FileSystemCat).
4. Create an /input directory in HDFS and copy HelloWorld.java into it (note the leading / so the directory matches the absolute path used in the copy and in the next step):
hadoop fs -mkdir /input
hadoop fs -copyFromLocal ./HelloWorld.java /input
5. Run hadoop jar FileSystemCat.jar /input/HelloWorld.java to see the contents of HelloWorld.java. Note that /input/HelloWorld.java here is a path in HDFS; its full default form is the fs.defaultFS value from core-site.xml followed by /input/HelloWorld.java. With a configuration such as:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://cloud001:9000</value>
</property>
we can equivalently run hadoop jar FileSystemCat.jar hdfs://cloud001:9000/input/HelloWorld.java.
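The way the short path and the full hdfs:// URI end up equivalent can be sketched with plain java.net.URI resolution, where the authority cloud001:9000 comes from the fs.defaultFS value above:

```java
import java.net.URI;

public class DefaultFsDemo {
    public static void main(String[] args) {
        // fs.defaultFS from core-site.xml
        URI defaultFs = URI.create("hdfs://cloud001:9000");
        // A bare absolute path is resolved against the default filesystem URI.
        URI full = defaultFs.resolve("/input/HelloWorld.java");
        System.out.println(full); // hdfs://cloud001:9000/input/HelloWorld.java
    }
}
```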