vim commands: http://blog.csdn.net/yu870646595/article/details/52045150
CentOS 7: renaming network interfaces: http://blog.csdn.net/u010579068/article/details/54016509
CentOS 7: renaming network interfaces: http://blog.csdn.net/dylloveyou/article/details/78697896
CentOS VM NAT network configuration: http://blog.csdn.net/cmqwan/article/details/61932792
Uninstalling the bundled Java and installing Oracle's JDK: http://blog.csdn.net/hui_2016/article/details/69941850
Uninstalling the bundled Java and installing Oracle's JDK: https://www.cnblogs.com/CuteNet/p/3947193.html
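The gist of those two guides can be sketched as a few commands. This is a hedged outline for CentOS 7: the exact OpenJDK package names vary by release, and the /opt/soft/jdk1.8.0 install path is an assumption for illustration, not taken from the links.

```shell
# List the Java packages that ship with the system (names vary by release)
rpm -qa | grep -i java
# Remove the bundled OpenJDK packages (requires root; globs are examples)
yum -y remove java-1.8.0-openjdk\* java-1.7.0-openjdk\*
# After unpacking an Oracle JDK tarball to /opt/soft/jdk1.8.0 (assumed path),
# point the environment at it, e.g. by appending to /etc/profile:
#   export JAVA_HOME=/opt/soft/jdk1.8.0
#   export PATH=$JAVA_HOME/bin:$PATH
# Then verify with: java -version
```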
Linux: changing a file's owner and group: http://blog.csdn.net/zhiaicq_r/article/details/79228981
Linux: changing the group a user belongs to: https://www.imooc.com/article/17776?block_id=tuijian_wz
Linux: changing the group a user belongs to: http://blog.csdn.net/looksun/article/details/50668722
Linux file permissions: https://www.linuxidc.com/Linux/2016-08/134047.htm
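The ownership, group, and permission links above come down to three commands: chown, usermod, and chmod. A small sketch (the hadoop user/group and the someuser account are placeholders, not from the links):

```shell
# Change a file's owner and group (requires root; names are examples):
#   chown hadoop:hadoop /opt/soft/hadoopdata
#   chown -R hadoop:hadoop /opt/soft/hadoopdata    # -R recurses into the tree
# Add a user to a supplementary group (-a appends instead of replacing):
#   usermod -aG hadoop someuser
#   id someuser                                    # verify the membership

# Permission bits can be demonstrated without root on a scratch file:
f=$(mktemp)
chmod 640 "$f"            # owner rw-, group r--, others ---
stat -c '%a' "$f"         # prints the octal mode: 640
rm -f "$f"
```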
Detailed guide to the Java jps command on Linux: http://www.jb51.net/article/108138.htm
Hadoop installation: https://mp.weixin.qq.com/s/NLmCUyovXiomGcKc8oMT1A
Steps after cloning the Hadoop master node into a slave node: http://www.gjnote.com/archives/724.html
CentOS 7: disabling the firewall and enabling the default iptables firewall: https://blog.csdn.net/suzhi921/article/details/52273564
CentOS 7: disabling the firewall and SELinux: https://blog.csdn.net/u010793761/article/details/54136339
CentOS 7.2: disabling the firewall: https://jingyan.baidu.com/article/359911f5bffb5257fe030630.html
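Per those guides, disabling the firewall and SELinux on CentOS 7 typically looks like the commands below (run as root). Note this is only advisable on a trusted internal network such as a private cluster LAN:

```shell
# CentOS 7 uses firewalld by default:
systemctl stop firewalld        # stop it for the current boot
systemctl disable firewalld     # keep it from starting on boot
systemctl status firewalld      # verify it is inactive
# SELinux: temporary vs. permanent
setenforce 0                    # permissive until the next reboot
# Permanent: set SELINUX=disabled in /etc/selinux/config, e.g.:
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```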
jps shows the NameNode process is missing: https://www.zhihu.com/question/31239901
Hadoop startup script (note: it clears the data directories and reformats the NameNode, so it wipes any existing HDFS data on every run):
#!/bin/sh
echo "------------------------"
echo "Starting the Hadoop HDFS and YARN processes"
echo "------------------------"
# Clear the tmp directory
echo "--- Clearing the tmp directory ---"
rm -fr /opt/soft/hadoopdata/tmp/*
# Clear the user-created name and data directories under dfs
echo "--- Clearing the name and data directories under dfs ---"
rm -fr /opt/soft/hadoopdata/dfs/name/*
rm -fr /opt/soft/hadoopdata/dfs/data/*
# Format the NameNode
echo "--- Formatting the NameNode ---"
/opt/soft/hadoop-2.6.5/bin/hadoop namenode -format
# Start DFS
echo "--- Starting DFS ---"
/opt/soft/hadoop-2.6.5/sbin/start-dfs.sh
# Start YARN
echo "--- Starting YARN ---"
/opt/soft/hadoop-2.6.5/sbin/start-yarn.sh
# List the running Java processes
echo "--- Listing Java processes ---"
jps
Spark on YARN cluster installation and deployment: https://www.linuxidc.com/Linux/2016-01/127003.htm
Detailed walkthrough of building a Spark on YARN cluster: https://www.jianshu.com/p/aa6f3a366727
Spark startup script:
#!/bin/sh
# Start Spark (HDFS and YARN first, then Spark itself)
echo "--- Starting the Spark cluster ---"
# Start DFS
echo "--- Starting DFS ---"
/opt/soft/hadoop-2.6.5/sbin/start-dfs.sh
# Start YARN
echo "--- Starting YARN ---"
/opt/soft/hadoop-2.6.5/sbin/start-yarn.sh
# Start Spark
echo "--- Starting Spark ---"
/opt/soft/spark-1.6.2-bin-hadoop2.6/sbin/start-all.sh
# List the running Java processes
echo "--- Listing Java processes ---"
jps
Script to run the Spark example job:
#!/bin/sh
cd /opt/soft/spark-1.6.2-bin-hadoop2.6/bin
./spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
--driver-memory 1G \
--executor-memory 1G \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.6.2-hadoop2.6.0.jar 40
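One thing worth knowing about `--deploy-mode cluster`: the driver runs inside a YARN container, so SparkPi's "Pi is roughly ..." line does not appear in your terminal. It ends up in the application's aggregated logs, roughly like this (the application ID below is a placeholder; substitute the real one):

```shell
# Find the finished application's ID
yarn application -list -appStates FINISHED
# Fetch its aggregated logs and look for the result line
# (application_XXXX_YYYY is a placeholder for the real ID)
yarn logs -applicationId application_XXXX_YYYY | grep "Pi is roughly"
```

With `--deploy-mode client` instead, the driver runs locally and the result prints straight to the terminal.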
Problems that may appear after the cluster is installed:
Spark starts only the Master and the Workers do not start: https://blog.csdn.net/eggsdevil/article/details/54575181
When starting all machines in the cluster, Workers fail to start or to register with the cluster manager:
http://www.xmanblog.net/2017/04/12/spark-worker-connot-connect/
Hadoop DataNodes start normally but nodes are missing from Live Nodes:
https://blog.csdn.net/wk51920/article/details/51729460
Some DataNodes in the Hadoop cluster have a process but are not active:
https://blog.csdn.net/drhhyh/article/details/44308731
Spark distributed setup (2): modifying hostname and hosts on Ubuntu 14.04:
https://blog.csdn.net/xummgg/article/details/50634327
No matter how the cluster is started, the Hadoop web UI shows only one DataNode:
https://blog.csdn.net/baidu_19473529/article/details/52996380
Hadoop 3-node cluster: fix for Live Nodes showing 0:
https://blog.csdn.net/u010801439/article/details/76944008
Hadoop 2.2.0 cluster deployment: wrong number of live nodes:
https://blog.csdn.net/u013281331/article/details/17963363
Every DataNode in the cluster reports a successful start, yet the web UI's Live Nodes count does not match the actual number:
https://bbs.csdn.net/topics/390171780
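A recurring cause behind several of the Live Nodes issues linked above is a clusterID mismatch after reformatting the NameNode or cloning nodes: the DataNode's stored clusterID no longer matches the NameNode's, so it is rejected. A hedged way to check, assuming the dfs name/data directories used in the scripts earlier in this post:

```shell
# On the NameNode: note the clusterID line
cat /opt/soft/hadoopdata/dfs/name/current/VERSION
# On each DataNode: compare its clusterID against the NameNode's
cat /opt/soft/hadoopdata/dfs/data/current/VERSION
# If they differ, stop HDFS, clear the DataNode's data directory
# (this discards that node's blocks), and restart so it re-registers.
```

Cloned VMs with identical hostnames or hosts entries cause a similar symptom, which is why the hostname/hosts link above is in this list.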
Installing and cracking IntelliJ IDEA 2017 on Linux: http://www.pc0359.cn/article/linux/79157.html
Installing and cracking IntelliJ IDEA 2017 on Linux: http://www.jb51.net/softjc/588626.html