CDH Cluster Planning

Cluster1 Planning

Versions
OS: Red Hat 7.2
JDK: 1.8
MySQL: 5.7.20
CM: 6.3.1
CDH: 6.2.1
IP                       hostname
10.162.5.181 (SERVER)    dsj-4t-323
10.162.5.182             dsj-4t-324
10.162.5.183             dsj-4t-325
10.162.5.184             dsj-4t-326
10.162.5.185             dsj-4t-327
10.162.5.186             dsj-4t-328
10.162.5.187             dsj-4t-329
10.162.5.188             dsj-4t-330
10.162.5.189             dsj-4t-331

Cluster2 Planning

Versions
JDK: 1.8
MySQL: 5.7.29
CM: 6.3.1
CDH: 6.2.1
IP                       hostname
10.162.5.190 (SERVER)    dsj-4t-332
10.162.5.191             dsj-4t-333
10.162.5.192             dsj-4t-334
10.162.5.193             dsj-4t-335
10.162.5.194             dsj-4t-336

Problems Encountered While Installing CM

No signal is detected from a host while installing the agent:

Make sure ports 9000 and 9001 are not occupied:

netstat -tunlp | grep -E ':(9000|9001)'    # find any process listening on port 9000 or 9001
sudo kill -9 <PID>                         # free the port

Note: do not kill the port-9001 processes on all hosts first and then install the agents in one batch; some of these processes restart on their own a while after being killed. Install the agent host by host instead, freeing the port immediately before each install (see the sketch below).
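A minimal sketch of that host-by-host order, assuming passwordless SSH to each host and that the cloudera-manager-agent packages are available from a configured yum repo; the hostnames are taken from the Cluster1 table (extend the list as needed):

for host in dsj-4t-324 dsj-4t-325 dsj-4t-326; do
    ssh "$host" 'sudo fuser -k 9001/tcp || true'                                  # free port 9001 right before installing
    ssh "$host" 'sudo yum install -y cloudera-manager-daemons cloudera-manager-agent'
    ssh "$host" 'sudo service cloudera-scm-agent start'
done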

If a ProtocolError appears, kill the leftover supervisord processes and restart the agent:
ps -ef | grep supervisord                  # find the supervisord started by the agent
sudo kill -9 <PID>
ps aux | grep super                        # double-check that nothing related is still running
sudo kill -9 <PID>
sudo service cloudera-scm-agent restart

Problems Encountered While Installing CDH

1. Install location of all CDH components
cd /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib
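Each component lives in its own subdirectory under lib; a quick listing gives an overview (the subdirectory names shown are illustrative of a CDH 6.2.1 parcel, not an exact listing):

ls /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib
# hadoop  hadoop-hdfs  hbase  hive  kafka  spark  zookeeper  ...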
2. Inspect ZooKeeper data
cd /opt/cloudera/parcels/CDH-6.2.1-1.cdh6.2.1.p0.1425774/lib/zookeeper/bin/
sudo bash zkCli.sh -server 127.0.0.1:2181
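Once connected, the standard zkCli commands can be used to browse the tree, for example:

ls /                 # list the top-level znodes
stat /zookeeper      # show metadata for a single znode
quit                 # leave the shell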
3. After changing a Kafka broker's ID, the id in meta.properties must be changed as well
cd /var/local/kafka/data
sudo vim meta.properties
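The file is small; the broker.id entry must match the ID configured in CM (the values below are illustrative):

# meta.properties
version=0
broker.id=101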
4. NFS Gateway fails to start after installing HDFS

On the host where the NFS Gateway is installed, start the rpcbind service first:

sudo service rpcbind status    # check whether rpcbind is already running
sudo service rpcbind start     # start it, then restart the NFS Gateway role in CM
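Once the gateway is running, HDFS can also be mounted locally over NFSv3 as a quick check; a sketch with placeholder names (<gateway-host> is the NFS Gateway host, /hdfs_nfs an empty local directory):

sudo mkdir -p /hdfs_nfs
sudo mount -t nfs -o vers=3,proto=tcp,nolock,sync <gateway-host>:/ /hdfs_nfs
ls /hdfs_nfs       # the HDFS root directory should now be visible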
5. Error when installing HDFS:
Failed to add storage directory [DISK]file:/mnt/sd01/dfs/dn
java.io.IOException: Incompatible clusterIDs in /mnt/sd01/dfs/dn: namenode clusterID = cluster22; datanode clusterID = cluster13

In this case, none of the DataNodes can start. Delete the current directory on every host where a DataNode is installed (the individual commands below, or the loop that follows them), then restart the DataNode roles:

rm -rf /mnt/sd01/dfs/dn/current/
rm -rf /mnt/sd02/dfs/dn/current/
rm -rf /mnt/sd03/dfs/dn/current/
rm -rf /mnt/sd04/dfs/dn/current/
rm -rf /mnt/sd05/dfs/dn/current/
rm -rf /mnt/sd06/dfs/dn/current/
rm -rf /mnt/sd07/dfs/dn/current/
rm -rf /mnt/sd08/dfs/dn/current/
rm -rf /mnt/sd09/dfs/dn/current/
rm -rf /mnt/sd10/dfs/dn/current/
rm -rf /mnt/sd11/dfs/dn/current/
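Equivalently, assuming the data disks are exactly sd01 through sd11, a single loop does the same:

for i in $(seq -w 1 11); do sudo rm -rf "/mnt/sd$i/dfs/dn/current/"; done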
6. When testing the database connection during the Hive installation, an error is reported: Logon denied for user/password. Able to find the database server and database, but logon request was rejected

The root user cannot connect to the newly created hive database; grant it privileges first:

mysql -uroot -p
grant all privileges on *.* to 'root'@'10.162.5.190' identified by 'HXa#@2018QdoP' with grant option;
flush privileges;
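To verify before re-running the CM connection test, log in from the granted host (run this on 10.162.5.190; <mysql-server-ip> is a placeholder for the host where MySQL runs):

mysql -h <mysql-server-ip> -uroot -p    # a successful login means the grant took effect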
7. Modify the configuration file so that files can be uploaded to and downloaded from HDFS via the web UI:
8. Insufficient permissions when uploading a local file to HDFS; run the following on Linux:
sudo groupadd supergroup                      # create the group HDFS treats as the superuser group
sudo usermod -a -G supergroup root            # add root to it
hdfs dfsadmin -refreshUserToGroupsMappings    # tell HDFS to re-read the user-to-group mapping
grep 'supergroup:' /etc/group                 # confirm the group exists and lists root
hadoop fs -moveFromLocal ./1.txt /            # retry the upload
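As a sanity check, HDFS should now report root as a member of supergroup:

hdfs groups root    # expected output: root : root supergroup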