1. Solution
Running the hadoop classpath command as the hadoop user prints the full classpath needed to run Hadoop programs.
Then open .bashrc with vi (on Debian; on Red Hat the file is .bash_profile) and add:
export CLASSPATH=.:/home/hadoop/hadoop-2.6.0-cdh5.5.2/etc/hadoop:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/common/lib/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/common/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/hdfs:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/hdfs/lib/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/hdfs/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/yarn/lib/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/yarn/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/mapreduce/lib/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/share/hadoop/mapreduce/*:/home/hadoop/hadoop-2.6.0-cdh5.5.2/contrib/capacity-scheduler/*.jar
To make the configuration take effect: source .bashrc
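Alternatively, instead of hard-coding each version-specific path, the variable can be built from the command's own output; a minimal sketch, assuming hadoop is already on the PATH:
export CLASSPATH=.:$(hadoop classpath)
Because the paths are resolved when the shell starts, this form survives a Hadoop upgrade without editing .bashrc again.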
2. Verification
Run a simple MapReduce job written with the old Hadoop 1 (mapred) API:
[hadoop@h40 ~]$ vi WordCount.java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
public class WordCount {

    // Mapper: split each line into tokens and emit (word, 1) for every token.
    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer (also used as the combiner): sum the counts for each word.
    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    // Old-API job driver.
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));  // args[0]: HDFS input path
        FileOutputFormat.setOutputPath(conf, new Path(args[1])); // args[1]: HDFS output path (must not exist)
        JobClient.runJob(conf);
    }
}
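For reference, the same job written against the Hadoop 2 org.apache.hadoop.mapreduce API looks roughly as follows. This is only a sketch (the class name WordCount2 is mine); the old mapred API above still runs unchanged on Hadoop 2, and the rest of this walkthrough keeps using it.
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount2 {

    // Mapper: emit (word, 1) for every token in the line.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer (also usable as the combiner): sum the counts per word.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "wordcount");
        job.setJarByClass(WordCount2.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}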
Compile and package:
[hadoop@h40 ~]$ /usr/jdk1.7.0_25/bin/javac WordCount.java
[hadoop@h40 ~]$ /usr/jdk1.7.0_25/bin/jar cvf xx.jar WordCount*class
added manifest
adding: WordCount.class(in = 1516) (out= 744)(deflated 50%)
adding: WordCount$Map.class(in = 1918) (out= 794)(deflated 58%)
adding: WordCount$Reduce.class(in = 1591) (out= 642)(deflated 59%)
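If you would rather not type the class name on every run, the entry point can be recorded in the jar's manifest instead; a sketch of an alternative packaging step (same class files as above):
[hadoop@h40 ~]$ /usr/jdk1.7.0_25/bin/jar cvfe xx.jar WordCount WordCount*class
hadoop jar xx.jar /input/he.txt /output would then pick the main class up from the manifest.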
[hadoop@h40 ~]$ vi he.txt
hello world
hello hadoop
hello hive
[hadoop@h40 ~]$ hadoop fs -mkdir /input
[hadoop@h40 ~]$ hadoop fs -put he.txt /input
[hadoop@h40 ~]$ hadoop jar xx.jar WordCount /input/he.txt /output (the /output directory must not already exist)
[hadoop@h40 ~]$ hadoop fs -cat /output/part-00000
hadoop 1
hello 3
hive 1
world 1
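part-00000 is the output of the job's single reduce task. To rerun the job, the output directory has to be removed first, since the output format refuses to overwrite an existing one; a sketch:
[hadoop@h40 ~]$ hadoop fs -rm -r /output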
To view a job's details in a browser, Hadoop 2 uses the MapReduce JobHistory Server at http://jhs_host:port/. The port defaults to 19888, and the address is set by the mapreduce.jobhistory.webapp.address parameter.
[hadoop@h40 ~]$ mapred historyserver
(Started from the terminal (SecureCRT) this way, the process stays in the foreground and does not exit on its own; if you stop it with Ctrl+C, the web page can no longer be loaded.)
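To keep the JobHistory Server serving pages after the terminal session ends, it can be started as a background daemon instead; a sketch, assuming the stock sbin scripts shipped with this Hadoop version and HADOOP_HOME pointing at the install directory:
[hadoop@h40 ~]$ $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver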
3. The hadoop put command not working
After installing hadoop-2.6.0-cdh5.5.2, the put command stopped working, and loading local data after installing Hive failed as well. put kept reporting this error:
hadoop@debian:~$ hadoop fs -put he.tt /aaa
17/01/02 23:37:52 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /aaa/he.tt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3286)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:676)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
at org.apache.hadoop.ipc.Client.call(Client.java:1466)
at org.apache.hadoop.ipc.Client.call(Client.java:1403)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1674)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1471)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:668)
put: File /aaa/he.tt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
This error tormented me for several days. It finally turned out that the hostnames must be set properly when installing Hadoop; I had assumed it didn't matter and, to save effort, used IP addresses directly.
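The key phrase in the message is "There are 0 datanode(s) running": the NameNode sees no live DataNodes. You can confirm this before changing anything; a sketch:
[hadoop@h40 ~]$ jps (check whether a DataNode process is running on each node)
[hadoop@h40 ~]$ hdfs dfsadmin -report (the NameNode's view: live DataNode count and capacity)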
Solution:
vi /etc/hosts (edit on all 3 nodes; you can delete the original contents and add the following)
192.168.8.11 h11
192.168.8.12 h12
192.168.8.13 h13
vi /etc/hostname (on the master node; the other nodes are analogous)
h11
reboot (reboot all 3 nodes)
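After the reboot, confirm that the DataNodes have registered with the NameNode before retrying; a sketch:
[hadoop@h40 ~]$ hdfs dfsadmin -report | grep -i "live datanodes"
Once the report shows live DataNodes, the hadoop fs -put command above should succeed.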