}
}
//emit the final result
context.write(new Text("PI"), new Text(totle + "--" + sum));
}
}
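Only the tail of the map method survives above. The Monte Carlo core it presumably implements — `totle` being the number of trials and `sum` the number of points landing inside the quarter circle — can be sketched and tested in plain Java without Hadoop (a hypothetical helper, not the author's exact code):

```java
import java.util.Random;

// Hypothetical sketch of the Monte Carlo core behind the map method.
// A point (x, y) is drawn uniformly from the unit square; it lies inside
// the quarter circle exactly when x*x + y*y <= 1.
public class MonteCarloPi {
    /** Returns how many of n random points land inside the quarter circle. */
    public static long countHits(long n, long seed) {
        Random rnd = new Random(seed);
        long hits = 0;
        for (long i = 0; i < n; i++) {
            double x = rnd.nextDouble();
            double y = rnd.nextDouble();
            if (x * x + y * y <= 1.0) {
                hits++;
            }
        }
        return hits;
    }

    /** Estimate of pi: 4 * hits / n, matching the reducer's formula. */
    public static double estimate(long n, long seed) {
        return countHits(n, seed) * 4.0 / n;
    }
}
```

With 100,000 trials the estimate typically lands within a few hundredths of π.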
2. The reduce method
public class SolvingPiReducer extends Reducer<Text, Text, Text, DoubleWritable> {
/**
 * name    the input key, always "PI"
 * message the input values, each of the form "totle--sum" (trials--hits)
 * context the output <k, v> context
 * Every input whose key is "PI" is processed by this method.
 */
@Override
protected void reduce(Text name, Iterable<Text> message, Context context)
        throws IOException, InterruptedException {
//total number of trials across all map tasks
long sumTotle = 0;
//total number of points that landed inside the quarter circle
long sumOrder = 0;
//parse each incoming message and extract the two counts
for (Text text : message) {
String[] nums = text.toString().split("--");
sumTotle += Long.parseLong(nums[0]);
sumOrder += Long.parseLong(nums[1]);
}
//emit the final result: pi ≈ 4 * hits / trials
context.write(name,new DoubleWritable(sumOrder*4.0/sumTotle));
}
}
This parses the per-mapper counts emitted by the map method, aggregates them, and writes the final result.
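The aggregation the reducer performs can be exercised in isolation, without Hadoop types (a plain-Java restatement of the same logic, for illustration):

```java
import java.util.Arrays;

// Plain-Java version of the reduce aggregation, testable without Hadoop.
// Each message has the form "total--hits", as produced by the mapper.
public class PiAggregator {
    public static double aggregate(Iterable<String> messages) {
        long sumTotal = 0; // total number of trials
        long sumHits = 0;  // total number of points inside the quarter circle
        for (String msg : messages) {
            String[] nums = msg.split("--");
            sumTotal += Long.parseLong(nums[0]);
            sumHits += Long.parseLong(nums[1]);
        }
        return sumHits * 4.0 / sumTotal;
    }

    public static void main(String[] args) {
        // two mappers, 100 trials each, 78 and 80 hits
        System.out.println(aggregate(Arrays.asList("100--78", "100--80")));
    }
}
```

For the two messages above, the result is 158 × 4.0 / 200 = 3.16.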
3. Define a driver class that describes and submits the job
public class SolvingPiRunner {
//describe the business logic (which class is the mapper, which is the reducer, where the input data lives, where the output goes, ...) as a job object
//then submit that job to the cluster to run
public static void main(String[] args) throws Exception {
//read user input
System.out.println("How many splits do you want?");
Scanner sc = new Scanner(System.in);
int pieces = Integer.parseInt(sc.nextLine());
System.out.println("How many trials should each split run?");
String line = sc.nextLine();
//generate one input file per split (in a real environment these would be created in HDFS)
for (int i = 0; i < pieces; i++) {
BufferedWriter bw = new BufferedWriter(new FileWriter(new File("D:\\hadoop\\input\\" + (i + 1) + ".txt")));
bw.write(line);
bw.close();
}
Configuration conf = new Configuration();
Job job = Job.getInstance(conf);
//tell the framework which jar this job lives in
job.setJarByClass(SolvingPiRunner.class);
job.setMapperClass(SolvingPiMapper.class);
job.setReducerClass(SolvingPiReducer.class);
//set the output key and value types of our Mapper class
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(Text.class);
//set the output key and value types of our Reducer class
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(DoubleWritable.class);
//location of the input data to process
FileInputFormat.setInputPaths(job, "D:\\hadoop\\input\\*.txt");
//location where the results are saved after the job completes
FileOutputFormat.setOutputPath(job, new Path("D:\\hadoop\\output\\result"));
//submit this job to the YARN cluster
boolean res = job.waitForCompletion(true);
System.exit(res?0:1);
}
}
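The file-per-split generation done in `main` can be exercised on its own, using a temporary directory instead of the hard-coded `D:\hadoop\input` path (an illustrative sketch, not the author's code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.charset.StandardCharsets;

// Isolated sketch of the per-split input-file generation: one file per split,
// each containing the number of trials that split should run. Because
// FileInputFormat creates at least one split per file, this yields one map
// task per generated file.
public class SplitFileGenerator {
    /** Writes `pieces` files named 1.txt..N.txt, each containing `line`. */
    public static Path generate(int pieces, String line) throws IOException {
        Path dir = Files.createTempDirectory("pi-input");
        for (int i = 0; i < pieces; i++) {
            Files.write(dir.resolve((i + 1) + ".txt"),
                    line.getBytes(StandardCharsets.UTF_8));
        }
        return dir;
    }
}
```

Entering, say, 4 splits of 100000 trials each produces four one-line files and therefore four map tasks.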
Run the job locally on Windows, simulating the cluster environment, to test it.
#### Problems encountered
* Startup error
Exception in thread "main" java.io.IOException: (null) entry in command string: null chmod 0700 D:\tmp\hadoop-lxc\mapred\staging\lxc1332581434.staging
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:869)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:852)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:733)
at org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:491)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:532)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:509)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:312)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:144)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at test.demo.SolvingPiRunner.main(SolvingPiRunner.java:54)
**Solution**
* Download winutils.exe and libwinutils.lib from <https://github.com/SweetInk/hadoop-common-2.7.1-bin> and copy them into the %HADOOP\_HOME%\bin directory.
* Download hadoop.dll from <https://github.com/SweetInk/hadoop-common-2.7.1-bin> and copy it into the c:\windows\system32 directory.
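A commonly used complementary workaround is to point `hadoop.home.dir` at the directory containing `bin\winutils.exe` programmatically, before the job is created, instead of relying only on the HADOOP\_HOME environment variable (the path below is a placeholder, not the author's actual install location):

```java
// Set hadoop.home.dir early in main(), before any Hadoop class is used.
// "D:\\hadoop" is a hypothetical install path; substitute your own.
public class HadoopHomeSetup {
    public static void configure(String hadoopHome) {
        System.setProperty("hadoop.home.dir", hadoopHome);
    }
}
```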
---
#### Error 2
Exception in thread "main" java.lang.RuntimeException: Error while running command to get file permissions : java.io.IOException: (null) entry in command string: null ls -F D:\hadoop\input\1.txt
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:869)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:852)
at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1097)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:659)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:634)
at org.apache.hadoop.fs.LocatedFileStatus.&lt;init&gt;(LocatedFileStatus.java:49)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1733)
at org.apache.hadoop.fs.FileSystem$4.next(FileSystem.java:1713)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:305)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:318)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:196)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at test.demo.SolvingPiRunner.main(SolvingPiRunner.java:54)
**Cause and solution**
* On Windows, the input path **must not point only at the directory** containing the files; it has to resolve down to the file(s) themselves. To read several files, use the wildcard \* to match them all (as in `D:\\hadoop\\input\\*.txt` above);
![img](https://img-blog.csdnimg.cn/img_convert/91fa8a8599727dc58f2d1a72864faa05.png)
![img](https://img-blog.csdnimg.cn/img_convert/8bb3a77bc9ac8683e87d4df82b718f76.png)