Note: this article is aimed at developers and operators who already have some Hadoop experience. If you are new to Hadoop clusters, see the Ambari installation tutorial for a more detailed walkthrough: http://blog.csdn.net/balabalayi/article/details/64920537
Deploying Cloudera Manager is very similar to installing Ambari; the differences amount to little more than different packages and directory layouts.
The process is only outlined below. For details, follow the official documentation (the recommended route): https://www.cloudera.com/documentation/enterprise/latest/topics/cm_ig_install_path_b.html
I. Install the base dependencies:
yum install -y net-tools ntp psmisc perl libxml2 libxslt lrzsz httpd telnet wget bind-utils
II. Prepare the environment:
Java, SSH, NTP, and /etc/hosts.
III. Download and deploy Cloudera Manager:
Install it with yum, either online or offline (download the rpms in advance and point a yum .repo file at the local copies).
IV. Deploy and install CDH:
Download the CDH parcel in advance and place it in the parcel-repo directory; Cloudera Manager can then unpack, distribute, and activate it directly.
Troubleshooting:
I. Common host-inspector warnings:
1. "Cloudera recommends setting /proc/sys/vm/swappiness to a maximum of 10; it is currently set to 30. Use the sysctl command to change the setting at runtime, and edit /etc/sysctl.conf so it persists across reboots. You can continue the installation, but Cloudera Manager may report the host as unhealthy due to swapping."
Fix:
Run
sysctl vm.swappiness=10
then edit /etc/sysctl.conf (vi /etc/sysctl.conf) and add:
vm.swappiness=10
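The sysctl.conf edit can also be scripted so it is safe to re-run on every host; a minimal sketch (the ensure_swappiness helper is illustrative, not part of any tool):

```shell
# ensure_swappiness CONF: append vm.swappiness=10 to CONF unless a
# vm.swappiness line is already there, so re-running is harmless.
ensure_swappiness() {
    conf=$1
    grep -q '^vm\.swappiness' "$conf" || echo 'vm.swappiness=10' >> "$conf"
}

# As root: ensure_swappiness /etc/sysctl.conf && sysctl -p
```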
2. "Transparent huge page compaction is enabled and can cause significant performance problems. Run 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' and 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' to disable it, then add the same commands to an init script such as /etc/rc.local so they are reapplied on reboot."
Fix:
Run
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
then edit /etc/rc.local (vi /etc/rc.local) and add:
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
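To confirm the change took effect, the kernel marks the active THP mode with brackets in those sysfs files; a small parsing helper (the thp_state name is illustrative):

```shell
# thp_state: print the bracketed (active) value from a THP sysfs line,
# e.g. "always madvise [never]" -> "never".
thp_state() { sed -n 's/.*\[\(.*\)\].*/\1/p'; }

# On a host where THP was just disabled this should print "never":
f=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$f" ]; then thp_state < "$f"; fi
```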
II. Permission problems (directories created under /var/lib during parcel activation have wrong permissions):
For example:
ls -l /var/lib/ |grep hadoop
d---------. 2 root root 4096 Jul 17 16:19 hadoop-hdfs
d---------. 2 root root 4096 Jul 17 16:19 hadoop-httpfs
d---------. 2 root root 4096 Jul 17 16:19 hadoop-kms
d---------. 2 root root 4096 Jul 17 16:19 hadoop-mapreduce
d---------. 3 root root 4096 Jul 17 17:54 hadoop-yarn
Fix (adjust to the services actually installed):
chown -R flume:flume /var/lib/flume-ng
chown -R hdfs:hdfs /var/lib/hadoop-hdfs
chown -R httpfs:httpfs /var/lib/hadoop-httpfs
chown -R kms:kms /var/lib/hadoop-kms
chown -R mapred:mapred /var/lib/hadoop-mapreduce
chown -R yarn:yarn /var/lib/hadoop-yarn
chown -R hbase:hbase /var/lib/hbase
chown -R hive:hive /var/lib/hive
chown -R impala:impala /var/lib/impala
chown -R llama:llama /var/lib/llama
chown -R oozie:oozie /var/lib/oozie
chown -R sentry:sentry /var/lib/sentry
chown -R solr:solr /var/lib/solr
chown -R spark:spark /var/lib/spark
chown -R sqoop:sqoop /var/lib/sqoop
chown -R sqoop2:sqoop2 /var/lib/sqoop2
chown -R zookeeper:zookeeper /var/lib/zookeeper
chmod -R 755 /var/lib/flume-ng
chmod -R 755 /var/lib/hadoop-hdfs
chmod -R 755 /var/lib/hadoop-httpfs
chmod -R 755 /var/lib/hadoop-kms
chmod -R 755 /var/lib/hadoop-mapreduce
chmod -R 755 /var/lib/hadoop-yarn
chmod -R 755 /var/lib/hbase
chmod -R 755 /var/lib/hive
chmod -R 755 /var/lib/impala
chmod -R 755 /var/lib/llama
chmod -R 755 /var/lib/oozie
chmod -R 755 /var/lib/sentry
chmod -R 755 /var/lib/solr
chmod -R 755 /var/lib/spark
chmod -R 755 /var/lib/sqoop
chmod -R 755 /var/lib/sqoop2
chmod -R 755 /var/lib/zookeeper
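The chown/chmod pairs above can also be driven from a single dir:owner list; a sketch (the fix_var_lib_perms name is illustrative, and the list assumes the default CDH service accounts, so trim it to what is actually installed). It prints the commands so they can be reviewed before being piped to sh:

```shell
# fix_var_lib_perms: print one chown+chmod command pair per service
# directory under /var/lib; pipe the output to sh (as root) to apply.
fix_var_lib_perms() {
    for pair in flume-ng:flume hadoop-hdfs:hdfs hadoop-httpfs:httpfs \
                hadoop-kms:kms hadoop-mapreduce:mapred hadoop-yarn:yarn \
                hbase:hbase hive:hive impala:impala llama:llama \
                oozie:oozie sentry:sentry solr:solr spark:spark \
                sqoop:sqoop sqoop2:sqoop2 zookeeper:zookeeper; do
        dir=/var/lib/${pair%%:*}     # part before the colon
        owner=${pair##*:}            # part after the colon
        echo "chown -R $owner:$owner $dir && chmod -R 755 $dir"
    done
}

fix_var_lib_perms            # review, then: fix_var_lib_perms | sh
```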
III. Parcel deployment/activation hangs at "Acquiring installation lock":
Fix:
On the affected node, run:
rm -rf /tmp/scm_prepare_node.*
rm -rf /tmp/.scm_prepare_node.lock
then retry.
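A quick way to see whether such leftovers exist before deleting them; the stale_scm_locks helper is illustrative and takes the directory as a parameter (on a real node it is /tmp):

```shell
# stale_scm_locks DIR: list leftover scm_prepare_node scratch dirs and
# lock files in DIR; anything printed should be removed before retrying.
stale_scm_locks() {
    ls -d "$1"/scm_prepare_node.* "$1"/.scm_prepare_node.lock 2>/dev/null
}

# On a problem node: stale_scm_locks /tmp
```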
IV. Parcel deployment/activation fails with "ProtocolError: <ProtocolError for 127.0.0.1/RPC2: 401 Unauthorized>":
Fix:
On the affected node, find and kill the leftover supervisord process:
ps -ef | grep supervisord
kill -9 <processID>
then retry.
V. After HDFS is deployed and started, the health check reports "Canary test failed to create parent directory for /tmp/.cloudera_health_monitoring_canary_files":
Cause: this commonly appears right after cluster startup (typically because the NameNode is still in safe mode) or when the cluster is unhealthy; in the startup case it is harmless.
Fix:
On the NameNode, run:
sudo -u hdfs hdfs dfsadmin -safemode leave
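Before forcing safe mode off it is worth confirming the NameNode really is in safe mode; a sketch (the in_safemode helper is illustrative, and it just matches the "Safe mode is ON" text that the dfsadmin client prints):

```shell
# in_safemode: succeed if the dfsadmin output on stdin says safe mode is on.
in_safemode() { grep -q 'Safe mode is ON'; }

# On the NameNode:
#   sudo -u hdfs hdfs dfsadmin -safemode get | in_safemode \
#     && sudo -u hdfs hdfs dfsadmin -safemode leave
```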
As with Ambari, there is no clean way to uninstall a CDH cluster; Cloudera documents only how to remove the installed packages:
https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_cdh_comp_uninstall.html
As a supplement, the directories and users can be removed as follows (make sure the Cloudera Manager server and agent processes are stopped on every host before deleting anything):
rm -rf /var/run/hadoop-*/ /var/run/hdfs-*/
rm -rf /var/lib/hadoop-* /var/lib/impala /var/lib/llama /var/lib/solr /var/lib/zookeeper /var/lib/hbase /var/lib/hue /var/lib/oozie /var/lib/pgsql /var/lib/sqoop* /var/lib/sentry /var/lib/spark*
rm -rf /var/log/hadoop*
rm -rf /usr/bin/hadoop* /usr/bin/zookeeper* /usr/bin/hbase* /usr/bin/hive* /usr/bin/hdfs /usr/bin/mapred /usr/bin/yarn /usr/bin/spark* /usr/bin/sqoop* /usr/bin/oozie
rm -rf /etc/hadoop* /etc/zookeeper* /etc/hive* /etc/hue /etc/impala /etc/sqoop* /etc/oozie /etc/hbase* /etc/hcatalog
rm -rf /dfs /hbase /yarn
userdel -rf oozie
userdel -rf hive
userdel -rf flume
userdel -rf hdfs
userdel -rf knox
userdel -rf storm
userdel -rf mapred
userdel -rf hbase
userdel -rf solr
userdel -rf impala
userdel -rf hue
userdel -rf tez
userdel -rf zookeeper
userdel -rf kafka
userdel -rf falcon
userdel -rf sqoop
userdel -rf yarn
userdel -rf hcat
userdel -rf atlas
userdel -rf spark
userdel -rf spark2
userdel -rf ams
userdel -rf llama
userdel -rf httpfs
userdel -rf sentry
userdel -rf sqoop2
userdel -rf cloudera-scm
groupdel cloudera-scm
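The userdel run above can be collapsed into one reviewable loop; a sketch (cdh_user_cleanup is an illustrative name; the list mirrors the accounts above, several of which, e.g. knox, storm, tez, exist only if such services were ever installed, and userdel simply complains and moves on for accounts that do not exist):

```shell
# cdh_user_cleanup: print the userdel/groupdel commands for the service
# accounts; review the output, then pipe it to sh (as root) only after
# the Cloudera Manager server and agents are stopped.
cdh_user_cleanup() {
    for u in oozie hive flume hdfs knox storm mapred hbase solr impala \
             hue tez zookeeper kafka falcon sqoop yarn hcat atlas spark \
             spark2 ams llama httpfs sentry sqoop2 cloudera-scm; do
        echo "userdel -rf $u"
    done
    echo "groupdel cloudera-scm"
}

cdh_user_cleanup             # review, then: cdh_user_cleanup | sh
```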