The fixes turned up by a Baidu search are below, but none of them solved it....
Fix 1: grant write permission on /home/hadoop to other users as well:
sudo chmod -R a+w /home/hadoop/tmp
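The effect of `a+w` can be checked quickly. A minimal sketch using a throwaway directory (any directory shows the same mode change as /home/hadoop/tmp would):

```shell
#!/bin/sh
# Stand-in for /home/hadoop/tmp so nothing real is touched
dir=$(mktemp -d)
chmod 755 "$dir"        # typical default: only the owner can write
chmod -R a+w "$dir"     # grant write to user, group, and others
stat -c '%a' "$dir"     # prints 777: everyone can now write (GNU stat, Linux)
```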
Fix 2: change hadoop.tmp.dir to a path the current user can write to, by editing core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/tmp/hadoop-${user.name}</value>
</property>
Hadoop does this deliberately, to prevent an existing cluster from being formatted by mistake.
Third-party jars:
The files/archives can be distributed by setting the property mapreduce.job.cache.{files|archives}. If more than one file/archive has to be distributed, they can be added as comma-separated paths. The properties can also be set by the APIs Job.addCacheFile(URI)/Job.addCacheArchive(URI) and Job.setCacheFiles(URI[])/Job.setCacheArchives(URI[]), where URI is of the form hdfs://host:port/absolute-path#link-name. In Streaming, the files can be distributed through the command line options -cacheFile/-cacheArchive.
The DistributedCache can also be used as a rudimentary software distribution mechanism for use in the map and/or reduce tasks. It can be used to distribute both jars and native libraries. The Job.addArchiveToClassPath(Path) or Job.addFileToClassPath(Path) API can be used to cache files/jars and also add them to the classpath of the child JVM. The same can be done by setting the configuration properties mapreduce.job.classpath.{files|archives}. Similarly, the cached files that are symlinked into the working directory of the task can be used to distribute native libraries and load them.
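From the command line, the usual counterpart of Job.addFileToClassPath(Path) is the -libjars generic option, which ships the jars to every task and adds them to the child JVM classpath (it only works if the driver parses generic options via ToolRunner/GenericOptionsParser). A sketch with hypothetical jar paths and class names, echoed rather than executed so it can be checked without a cluster:

```shell
#!/bin/sh
# Sketch: distributing third-party jars with -libjars.
# The jar paths, driver class, and HDFS paths below are made-up placeholders.
JARS="/opt/libs/json-simple.jar,/opt/libs/guava.jar"   # comma-separated, per the docs
CMD="hadoop jar myjob.jar com.example.MyDriver -libjars $JARS /input /output"
echo "$CMD"   # shown instead of executed; run it on a node with Hadoop installed
```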
<property>
<name>mapreduce.map.java.opts</name>
<value>
-Xmx512M -Djava.library.path=/home/mycompany/lib -verbose:gc -Xloggc:/tmp/@taskid@.gc
-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
</value>
</property>
<property>
<name>mapreduce.reduce.java.opts</name>
<value>
-Xmx1024M -Djava.library.path=/home/mycompany/lib -verbose:gc -Xloggc:/tmp/@taskid@.gc
-Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
</value>
</property>
Fixing "Unable to load native-hadoop library":
hadoop checknative -a
If the file is too short (libhadoop.so is not the real library):
mv libhadoop.so libhadoop.so.bak
ln -s libhadoop.so.1.*** libhadoop.so
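The two commands above can be rehearsed in a scratch directory. In $HADOOP_HOME/lib/native the real library is libhadoop.so.1.x.y and libhadoop.so should be a symlink to it; the version number 1.0.0 below is made up for the sketch:

```shell
#!/bin/sh
cd "$(mktemp -d)"
echo 'fake native library' > libhadoop.so.1.0.0   # stand-in for the real versioned .so
echo '' > libhadoop.so                            # the "too short" broken file
mv libhadoop.so libhadoop.so.bak                  # back up the broken file
ln -s libhadoop.so.1.0.0 libhadoop.so             # recreate the symlink
readlink libhadoop.so                             # prints libhadoop.so.1.0.0
```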
If a glibc version problem is reported, download the matching version; Hadoop 2.7.1 requires glibc 2.14.1:
yum -y install gcc
export CFLAGS="-g -O2"
yum install gettext texinfo autoconf   # gettext provides msgfmt, texinfo provides makeinfo
mkdir glibc-build && cd glibc-build    # glibc must be configured outside its source tree
../glibc-2.14.1/configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin
make
make install
However, installing glibc 2.14.1 on CentOS 6 ran into trouble, so in the end I simply copied over the native directory from the 2.6.0 install, which worked!
cp -R /usr/hadoop/hadoop-2.6.0/lib/native/ /usr/hadoop/hadoop-2.7.1/lib/
cp -R /usr/hadoop/hadoop-2.6.0-old/lib/native/ /usr/hadoop/hadoop-2.7.1/lib/
chmod -R 755 /usr/spark/spark-1.4.1/sbin /usr/spark/spark-1.4.1/bin
mv /usr/spark/spark-1.4.0 /usr/spark/spark-1.4.0-old
mv /usr/hadoop/hadoop-2.7.1 /usr/hadoop/hadoop-2.7.1-old
------------------------
: Name or service not knownstname slave006
: Name or service not knownstname slave002
: Name or service not knownstname slave011
--
I deleted the slaves file and recreated a fresh one from the command line, which fixed it; the original slaves file was probably polluted with bad formatting (the mangled error lines above are the classic symptom of stray carriage returns appended to the hostnames).
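One way to confirm and clean such pollution without recreating the file: Windows-style line endings leave a \r on each hostname, which is exactly what garbles the "Name or service not known" output. A sketch on a simulated slaves file:

```shell
#!/bin/sh
cd "$(mktemp -d)"
printf 'slave002\r\nslave006\r\n' > slaves        # simulated polluted file
cat -A slaves | head -n 1                         # a trailing ^M$ reveals the \r (GNU cat)
tr -d '\r' < slaves > slaves.clean && mv slaves.clean slaves
cat -A slaves | head -n 1                         # now ends with a plain $
```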