**URL url = new URL("hdfs://master:9000/file");
Exception: unknown protocol: hdfs
Solution:
1. Add the hadoop-hdfs dependency
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.6</version>
</dependency>
2. Register the HDFS URL stream handler factory
URL.setURLStreamHandlerFactory(new FsUrlStreamHandlerFactory());
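With the factory registered, the URL can be opened like any other stream. A minimal sketch of reading the file (the path /file is just the example above; IOUtils is org.apache.hadoop.io.IOUtils):
InputStream in = null;
try {
    in = new URL("hdfs://master:9000/file").openStream();
    IOUtils.copyBytes(in, System.out, 4096, false); // print the file contents to stdout
} finally {
    IOUtils.closeStream(in);
}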
**Obtain a FileSystem
Configuration conf = new Configuration();//conf.set(key, value);
conf.set("fs.DefaulfFS", "hdfs://master:9000");
FileSystem fs = FileSystem.get(conf);
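A minimal sketch of using the FileSystem handle obtained above (the file names are hypothetical examples):
fs.copyFromLocalFile(new Path("hello.txt"), new Path("/hello.txt")); // upload a local file
for (FileStatus status : fs.listStatus(new Path("/"))) {             // list the HDFS root
    System.out.println(status.getPath());
}
fs.close();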
**Hadoop stores data in replicated blocks and can scale out horizontally
Three main logical components
yarn : resource scheduling
namenode -- resourcemanager (usually on separate nodes, both are memory-heavy)
datanode -- nodemanager (usually co-located, so computation runs where the data is)
Configure yarn-site.xml (see the sketch after this list)
start-yarn.sh
hdfs : distributed storage
mapreduce : data computation
Each input line becomes a key (the line's byte offset) and a value (the line's content)
map turns each line into the desired key/value pairs
the map output is then reduced
and the result is written out
The three components are loosely coupled and each can run on its own
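A minimal yarn-site.xml sketch for the layout above (assumes the host named master runs the ResourceManager):
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>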
**Code example: WordCount
Preparation
1. Maven dependencies
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.6</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.7.6</version>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.7.6</version>
</dependency>
2. Exception: Retrying connect to server: 0.0.0.0/0.0.0.0:10020
Configure mapred-site.xml on the server
cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
</property>
# start the history server
mr-jobhistory-daemon.sh start historyserver
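To check that it came up, jps lists the Java processes on the node:
jps    # JobHistoryServer should now appear in the list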
3. Set up the Configuration
Configuration conf = new Configuration();
//namenode address and port
conf.set("fs.defaultFS", "hdfs://master:9000");
//ship the job jar to the cluster
conf.set("mapreduce.job.jar", "learnhadoop.jar");
conf.set("mapreduce.framework.name", "yarn");
conf.set("yarn.resourcemanager.hostname", "master");
conf.set("mapreduce.app-submission.cross-platform", "true");
//job history server address
conf.set("mapreduce.jobhistory.address", "master:10020");
Code
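The map, reduce and job snippets below use the standard Hadoop MapReduce API; the imports they rely on are:
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;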
1.map
public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    //emit each word with a count of 1
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}
2.reduce
public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();
    //sum all the counts emitted for a key
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
3.job
String input = "/hello.txt";//relative to the HDFS root
String output = "/output/";//relative to the HDFS root, created automatically by the job
// String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
// if (otherArgs.length < 2) {
// System.err.println("Usage: wordcount <in> [<in>...] <out>");
// System.exit(2);
// }
Job job = Job.getInstance(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
//output key/value types
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
// for (int i = 0; i < otherArgs.length - 1; ++i) {
// FileInputFormat.addInputPath(job, new Path(otherArgs[i]));
// }
FileInputFormat.setInputPaths(job, new Path(input));
FileOutputFormat.setOutputPath(job, new Path(output));
// FileOutputFormat.setOutputPath(job,
// new Path(otherArgs[otherArgs.length - 1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
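A sketch of one way to run it (assumes the driver class is named WordCount and is packaged in the learnhadoop.jar referenced above; the output directory must not already exist when the job starts):
hdfs dfs -put hello.txt /hello.txt      # upload the input file
hadoop jar learnhadoop.jar WordCount    # or run the main class directly from the IDE
hdfs dfs -cat /output/part-r-00000      # one "word<TAB>count" line per word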