CDH 6.2.0 / 6.3.0 Hands-on Installation, with Links to the Official Documentation

CDH 5.x official documentation  |   CDH 5.x configuration documentation
CDH 6.x official documentation  |   Cloudera Manager API
Download CDH 6.2.0  |  Detailed CDH 6.2.x Installation Guide  |   Cloudera Manager 6.2.0  |   CDH 6.2.x Download
Download CDH 6.3.0  |  Detailed CDH 6.3.x Installation Guide  |   Cloudera Manager 6.3.0  |   CDH 6.3.x Download




Common JDKs include Oracle JDK and OpenJDK; the OpenJDK builds you will typically run into are the Linux yum OpenJDK packages, Zulu JDK, GraalVM CE JDK, and so on. When installing the JDK for a CDH environment, it is still recommended to start with oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm from the official download list. If your company requires OpenJDK, install the Oracle JDK from the download package first, finish installing CDH and configuring the component services you need, and only then switch to the OpenJDK your company requires. This order is strongly recommended.
Supported JDKs
For upgrading the JDK on CDH, you can refer to my other post on upgrading the CDH 5.16 JDK to OpenJDK 1.8.
(screenshot: cdh-java-home)

For the versions of the components bundled in the parcel, see CDH 6.2.0 Packaging, or browse the mirror repository (noarch | x86_64). Note: starting with CDH 6.0, Hadoop is upgraded to 3.0.



1. Environment Preparation

CDH's environment requirements are listed here: Cloudera Enterprise 6 Requirements and Supported Versions

1.1 Cleaning Up or Reinstalling an Existing Environment

If the old environment already has CDH, HDP, or something similar installed, it needs to be removed first. Cleanup is somewhat tedious and deletions should be done carefully, but overall it can be done quickly as follows (a consolidated sketch is given after the list):

  • 1 Switch to the root user
  • 2 Remove the services installed via rpm:
    rpm -qa 
    # or narrow it down to specific packages
    rpm -qa 'cloudera-manager-*'
    
    # remove the packages
    rpm -e <package name found above> --nodeps
    
    # clear the yum cache
    sudo yum clean all
    
  • 3 Check the running processes: ps -aux > ps.txt. Use the first column (USER) and the last column (COMMAND) to decide whether a process belongs to the stack being removed; if so, kill it by the PID in the second column: kill -9 <pid>
  • 4 Review the system users: cat /etc/passwd
  • 5 Delete the leftover users: userdel <username>
  • 6 Find the files belonging to such a user and delete them:
     find / -name '<username>*'
     # delete the files found
     rm -rf <file>
    
  • 7 Sometimes a file is still in use when you try to delete it; use the lsof command to find the process holding it, stop that process, then delete the file.
  • 8 Even if no process shows up as holding the file, it may be mounted; unmount it before deleting: umount cm-5.16.1/run/cloudera-scm-agent/process
    then run rm -rf cm-5.16.1/. If the file still cannot be deleted after unmounting, run the umount command a few more times and try the deletion again.
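
The consolidated sketch mentioned above (the package filter and the user name are examples; adjust them to whatever rpm -qa and /etc/passwd actually show on your hosts):

# 1-2: remove Cloudera packages installed via rpm, then clear the yum cache
rpm -qa | grep -i 'cloudera-manager' | xargs -r rpm -e --nodeps
sudo yum clean all

# 3: find and kill leftover processes of the removed stack (example user: cloudera-scm)
ps -aux | grep -i cloudera-scm
kill -9 <pid>

# 4-6: remove leftover users and clean up their files
userdel cloudera-scm
find / -name 'cloudera*' 2>/dev/null
rm -rf <paths found above>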

1.2 Installing the Apache HTTP Service

Because some servers have strict restrictions on outbound internet access, you can set up an HTTP service and upload the downloaded resources to it, which makes the later installation steps easier.

1.2.1 Without User Authentication

Step 1: Check the status of the Apache HTTP service

If the status can be queried, you only need to modify the configuration file (see Step 3) and restart the service. If the status query fails, install the Apache HTTP service first (continue with Step 2).

sudo systemctl status httpd
Step 2: Install the Apache HTTP service
 yum -y install httpd
Step 3: Modify the Apache HTTP configuration

Configure the following, then save and exit. You can see that the configured document root is /var/www/html. The other settings can be left at their defaults or adjusted as needed.

vim /etc/httpd/conf/httpd.conf
 
 # around line 119
DocumentRoot "/var/www/html"

# around line 131, inside the <Directory "/var/www/html"> </Directory> block: directory index display settings
# http://httpd.apache.org/docs/2.4/en/mod/mod_autoindex.html#indexoptions
# show file names up to 100 characters, UTF-8 charset, fancy directory listing, folders sorted first
IndexOptions NameWidth=100 Charset=UTF-8 FancyIndexing FoldersFirst

After the configuration is done, remember to restart the service.
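
For example (assuming httpd is managed by systemd, as on CentOS 7):

sudo systemctl restart httpd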

Step 4: Create the resource directory
sudo mkdir -p /var/www/html/cloudera-repos

1.2.2 With User Authentication

If the cluster environment has security requirements for the resource mirror, you can add user authentication to httpd:

Step 1: Modify the httpd.conf configuration file

vi /etc/httpd/conf/httpd.conf

##========== add the following (the Directory is the resource path that requires user authentication)
<Directory "/var/www/html/dist">
	Options Indexes FollowSymLinks
	IndexOptions NameWidth=100 Charset=UTF-8 FancyIndexing FoldersFirst
	AllowOverride authconfig
	Order allow,deny
	Allow from all
</Directory>

Step 2: Add an .htaccess file under the resource path

vi /var/www/html/dist/.htaccess 

##========== add the following to the file
AuthName "Please enter the password; contact the administrator to obtain it"
AuthType basic
AuthUserFile  /var/www/html/members.txt
require valid-user

The directives are explained below:

  • AuthName: defines the prompt text, which is shown in the authentication dialog when a user accesses the resource
  • AuthType: defines the authentication type. HTTP/1.0 supports only one type, basic; HTTP/1.1 adds several more, e.g. MD5 digest
  • AuthUserFile: the text file containing usernames and passwords, one pair per line
  • AuthGroupFile: the text file containing groups and their members; members are separated by spaces, e.g. group1: user1 user2
  • The require directive: defines which users or groups are authorized to access. For example:
    • require user user1 user2 (only users user1 and user2 may access)
    • require group group1 (only members of group1 may access)
    • require valid-user (every user listed in the AuthUserFile may access)

Step 3: Usernames and passwords
Generate the user authentication file directly with the htpasswd command:

# the first time, when the file does not exist yet, create it with the -c option
htpasswd -bc /var/www/html/members.txt admin 123456

# to add more users later
htpasswd -b /var/www/html/members.txt test 123456

Usage
Option 1: open http://{host_ip}/dist in a browser and enter the username and password when prompted (admin / 123456)
Option 2: embed the credentials directly in the URL: http://admin:123456@{host_ip}/dist
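
The same basic-auth credentials also work for command-line tools; a quick check with curl (if curl is installed on the node):

curl -u admin:123456 http://{host_ip}/dist/
# or, equivalently
curl http://admin:123456@{host_ip}/dist/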

1.3 Host Configuration

Configure each cluster host's IP and domain name in /etc/hosts on every machine.
Note: the hostname must be an FQDN (fully qualified domain name), e.g. myhost-1.example.com; otherwise a validation step will fail later when the web wizard starts the Agents.

# cdh1
sudo hostnamectl set-hostname cdh1.example.com
# cdh2
sudo hostnamectl set-hostname cdh2.example.com
# cdh3
sudo hostnamectl set-hostname cdh3.example.com

# configure /etc/hosts
192.168.33.3 cdh1.example.com cdh1
192.168.33.6 cdh2.example.com cdh2
192.168.33.9 cdh3.example.com cdh3


# configure /etc/sysconfig/network 
# cdh1
HOSTNAME=cdh1.example.com
# cdh2
HOSTNAME=cdh2.example.com
# cdh3
HOSTNAME=cdh3.example.com

1.4 NTP

NTP is a critical service in the cluster: it keeps the clock of every node in step. If the internal network already has a time-sync service, you only need to configure an NTP client on each node to synchronize with that service; if there is no time-sync service, you have to set up NTP yourself.

The plan is as follows. When an external time service is reachable, you can synchronize directly with, for example, the Asia NTP pool. When it is not reachable, configure cdh1.example.com as the NTP server and have the other cluster nodes synchronize with it.

Host                 Purpose
asia.pool.ntp.org    Asia NTP pool (external time service)
cdh1.example.com     ntpd server, falls back to its local clock
cdh2.example.com     ntpd client, syncs time with the ntpd server
cdh3.example.com     ntpd client, syncs time with the ntpd server
Step 1: The ntpd service
# check the NTP service; if it is not installed, install it first
systemctl status ntpd.service
Step 2: Sync the hardware clock together with the system time

Very important: keep the hardware clock in sync with the system time. Edit the configuration file with vim /etc/sysconfig/ntpd and append SYNC_HWCLOCK=yes at the end.

# Command line options for ntpd
#OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
OPTIONS="-g"
SYNC_HWCLOCK=yes
Step 3: Add the NTP server list

Edit it with vim /etc/ntp/step-tickers:

# List of NTP servers used by the ntpdate service.

#0.centos.pool.ntp.org
cdh1.example.com
Step 4: ntp.conf on the NTP server

Edit the NTP configuration file: vim /etc/ntp.conf

driftfile /var/lib/ntp/drift
logfile /var/log/ntp.log
pidfile   /var/run/ntpd.pid
leapfile  /etc/ntp.leapseconds
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
# allow clients from any IP to synchronize time, but not to modify the NTP server settings; default behaves like 0.0.0.0
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
#restrict 10.135.3.58 nomodify notrap nopeer noquery
# allow full access via the local loopback interface
restrict 127.0.0.1
restrict  -6 ::1
# allow other machines on the internal network to synchronize time (network address and netmask).
# Some clusters use unusual gateways; check with /etc/sysconfig/network-scripts/ifcfg-<interface>, route -n, or ip route show
restrict 192.168.33.2 mask 255.255.255.0 nomodify notrap
# allow the upstream time server to adjust the local clock
#server asia.pool.ntp.org minpoll 4 maxpoll 4 prefer
# when no external time server is reachable, serve the local clock instead
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10
Step 5: ntp.conf on the NTP clients
driftfile /var/lib/ntp/drift
logfile /var/log/ntp.log
pidfile   /var/run/ntpd.pid
leapfile  /etc/ntp.leapseconds
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server 192.168.33.3 iburst
Step 6: Restart NTP and synchronize
#restart the service
systemctl restart ntpd.service
#enable it at boot
chkconfig ntpd on

ntpq -p
#ntpd -q -g 
#ss -tunlp | grep -w :123
#manually trigger a sync
#ntpdate -uv cdh1.example.com
ntpdate -u  cdh1.example.com

# check the sync status; after a while the status will change to synchronised
ntpstat
timedatectl
ntptime
Step 7: Check the NTP service status

If the output looks like the following, synchronization is healthy (the status shows PLL,NANO):

[root@cdh2 ~]# ntptime
ntp_gettime() returns code 0 (OK)
  time e0b2b842.b180f51c  Fri, Apr 19 2019  11:09:20.333, (.693374110),
  maximum error 27426 us, estimated error 0 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset 0.000 us, frequency 3.932 ppm, interval 1 s,
  maximum error 27426 us, estimated error 0 us,
  status 0x2001 (PLL,NANO),
  time constant 6, precision 0.001 us, tolerance 500 ppm,

Or check with the timedatectl command (NTP synchronized: yes means synchronization succeeded):

[root@cdh2 ~]#  timedatectl
      Local time: Fri 2019-04-19 11:09:20 CST
  Universal time: Fri 2019-04-19 11:09:20 UTC
        RTC time: Fri 2019-04-19 11:09:20
       Time zone: Asia/Shanghai (CST, +0800)
     NTP enabled: no
NTP synchronized: yes
 RTC in local TZ: no
      DST active: n/a

1.5 MySQL

Download MySQL

Step 1: Configure environment variables
# add MySQL to the PATH
export PATH=$PATH:/usr/local/mysql/bin
Step 2: Create the user and group
#1. create a mysql group 
groupadd mysql
#2. create the mysql user and add it to the mysql group 
useradd -r -g mysql mysql
#3. optionally give the mysql user a password (mysql); enter it at the prompt
passwd mysql 
#4. change the owner and group of /usr/local/mysql
chown -R mysql:mysql /usr/local/mysql/
Step 3: Set up the MySQL configuration file

Edit /etc/my.cnf (vim /etc/my.cnf) and set it to the following:

[mysqld]
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
port = 3306
socket=/var/lib/mysql/mysql.sock
character-set-server=utf8
 
transaction-isolation = READ-COMMITTED
# Disabling symbolic-links is recommended to prevent assorted security risks;
# to do so, uncomment this line:
symbolic-links = 0

server_id=1
max-binlog-size = 500M
log_bin=/var/lib/mysql/mysql_binary_log
#binlog_format = mixed
binlog_format = Row
expire-logs-days = 14

max_connections = 550
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M

# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit  = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M
  
[client]
default-character-set=utf8
socket=/var/lib/mysql/mysql.sock
  
[mysql]
default-character-set=utf8
socket=/var/lib/mysql/mysql.sock
 
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
 
sql_mode=STRICT_ALL_TABLES
Step 4: Extract and configure
# extract into /usr/local/
tar -zxf mysql-5.7.27-el7-x86_64.tar.gz -C /usr/local/
# rename
mv /usr/local/mysql-5.7.27-el7-x86_64/ /usr/local/mysql
 
# register mysqld as a service so it starts automatically at boot
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql
vim /etc/init.d/mysql
# add the following settings
basedir=/usr/local/mysql
datadir=/usr/local/mysql/data
 
#create the directory that holds the socket file
mkdir -p /var/lib/mysql
chown mysql:mysql /var/lib/mysql
#register the mysql service
chkconfig --add mysql 
# set the mysql service to start automatically
chkconfig mysql on 
Step 5: Install
#initialize MySQL; make a note of the temporary password, e.g.: ?w=HuL-yV05q
/usr/local/mysql/bin/mysqld --initialize --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data
#generate the SSL/RSA files for the server
/usr/local/mysql/bin/mysql_ssl_rsa_setup --datadir=/usr/local/mysql/data
 
# start the MySQL service; after a while, when the output stops scrolling, press Ctrl + C (the server keeps running in the background)
/usr/local/mysql/bin/mysqld_safe --user=mysql & 
# restart the MySQL service
/etc/init.d/mysql restart 
#check the mysql processes 
ps -ef|grep mysql 
Step 6: Log in to MySQL and finish the setup
#log in to MySQL for the first time, using the temporary password noted above
/usr/local/mysql/bin/mysql -uroot -p

Enter the temporary password generated in the previous step to get into the MySQL command line. For the new password, you can use a random-password generator site to produce a reasonably strong one; production environments usually have password-strength requirements.
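
If you prefer to generate a password locally rather than through a website, one common option (assuming openssl is available) is:

openssl rand -base64 16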

--the password must be changed first
mysql> set password=password('V&0XkVpHZwkCEdY$');

--add a user in mysql that can connect remotely 
mysql> use mysql; 
mysql> select host,user from user; 
-- add a remote-access user scm and set its password (the value used in the GRANT statement below)
mysql> grant all privileges on *.* to 'scm'@'%' identified by '*YPGT$%GqA' with grant option; 
--reload the privileges
mysql> flush privileges;

1.6 Remaining Preparation

These installation steps should be familiar, so finish them on your own first. The rest of this post installs CDH 6.2.0 on CentOS 7.4 as the example.

Note: configure MySQL's /etc/my.cnf by referring to Configuring and Starting the MySQL Server.

1.7 Other

For more details, read the official CDH documentation:


2. Downloading the Resources

If the servers cannot download directly, pick one of the approaches in 2.1 or 2.2, download the following resources locally, and upload them to the Apache HTTP server directory /var/www/html/cloudera-repos.

Two approaches are shared here: a basic-package approach, which downloads only the essential packages locally, and a full-mirror approach, which effectively pulls the official mirror repository to your own server. Choose either 2.1 or 2.2. The first is recommended: download just the basic packages for a quick deployment, and later, when upgrading parcels or CDH components, download the corresponding packages and do the upgrade then.

2.1 Basic-Package Download

Upload the downloaded resources to the Apache HTTP service node set up earlier; if a directory does not exist, create it manually. Make sure the paths have sufficient permissions:

# mind the file permissions
chmod 555 -R /var/www/html/cloudera-repos

2.1.1 Download the parcel files

 wget -b https://archive.cloudera.com/cdh6/6.2.0/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel			
 wget https://archive.cloudera.com/cdh6/6.2.0/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel.sha1		
 wget https://archive.cloudera.com/cdh6/6.2.0/parcels/manifest.json

Upload the downloaded files to /var/www/html/cloudera-repos/cdh6/6.2.0/parcels

For the CDH 6.3.0 parcels, see: https://archive.cloudera.com/cdh6/6.3.0/parcels/

2.1.2 Download the required rpm packages

 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/cloudera-manager-agent-6.2.0-968826.el7.x86_64.rpm
 wget -b https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/cloudera-manager-daemons-6.2.0-968826.el7.x86_64.rpm
 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-6.2.0-968826.el7.x86_64.rpm
 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-db-2-6.2.0-968826.el7.x86_64.rpm
 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/enterprise-debuginfo-6.2.0-968826.el7.x86_64.rpm
 wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPMS/x86_64/oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm

Upload the downloaded packages to /var/www/html/cloudera-repos/cm6/6.2.0/redhat7/yum/RPMS/x86_64

Other versions are available on the Cloudera Manager pages, e.g. for CDH 6.3.0: https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/

2.1.3 Fetch the other cloudera-manager resources

2.1.3.1 Fetch cloudera-manager.repo

Upload the files downloaded below to /var/www/html/cloudera-repos/cm6/6.2.0/redhat7/yum

wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/RPM-GPG-KEY-cloudera
wget https://archive.cloudera.com/cm6/6.2.0/redhat7/yum/cloudera-manager.repo
2.1.3.2 Fetch allkeys.asc

Upload the file downloaded below to /var/www/html/cloudera-repos/cm6/6.2.0

wget https://archive.cloudera.com/cm6/6.2.0/allkeys.asc
mv allkeys.asc /var/www/html/cloudera-repos/cm6/6.2.0
2.1.3.3 Initialize the repodata

On the Apache HTTP server, go to /var/www/html/cloudera-repos/cm6/6.2.0/redhat7/yum/ and run:

#yum repolist
# if createrepo is not installed, install it via yum
yum -y install createrepo
cd /var/www/html/cloudera-repos/cm6/6.2.0/redhat7/yum/
# create the repodata
createrepo .

2.1.4 Download the database driver

MySQL is used here as the metadata database, so the MySQL JDBC driver is required. If you choose a different database, read Install and Configure Databases carefully.

Extract the downloaded driver archive to obtain mysql-connector-java-5.1.46-bin.jar, and be sure to rename it to mysql-connector-java.jar.

wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.46.tar.gz
# extract
tar zxvf mysql-connector-java-5.1.46.tar.gz
# rename mysql-connector-java-5.1.46-bin.jar to mysql-connector-java.jar and place it under /usr/share/java/
mv mysql-connector-java-5.1.46-bin.jar /usr/share/java/mysql-connector-java.jar
# also copy it to the other nodes
scp /usr/share/java/mysql-connector-java.jar root@cdh2:/usr/share/java/
scp /usr/share/java/mysql-connector-java.jar root@cdh3:/usr/share/java/

*2.2 Full-Mirror Download

2.2.1 Download the parcel files

 cd /var/www/html/cloudera-repos
 sudo wget --recursive --no-parent --no-host-directories https://archive.cloudera.com/cdh6/6.2.0/parcels/ -P /var/www/html/cloudera-repos
 sudo wget --recursive --no-parent --no-host-directories https://archive.cloudera.com/gplextras6/6.2.0/parcels/ -P /var/www/html/cloudera-repos
 sudo chmod -R ugo+rX /var/www/html/cloudera-repos/cdh6
 sudo chmod -R ugo+rX /var/www/html/cloudera-repos/gplextras6

2.2.2 Download Cloudera Manager

sudo wget --recursive --no-parent --no-host-directories https://archive.cloudera.com/cm6/6.2.0/redhat7/ -P /var/www/html/cloudera-repos
sudo wget https://archive.cloudera.com/cm6/6.2.0/allkeys.asc -P /var/www/html/cloudera-repos/cm6/6.2.0/
sudo chmod -R ugo+rX /var/www/html/cloudera-repos/cm6

2.2.3 Download the database driver

Same as 2.1.4 Download the database driver.

2.3 Configure the cloudera-manager yum Repository on the Installation Nodes

Assume that, following the steps above, the required resources have been downloaded and uploaded to an HTTP service the servers can reach.

2.3.1 Download

In the links below, replace ${cloudera-repos.http.host} with the IP of your own Apache HTTP service.

wget http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0/redhat7/yum/cloudera-manager.repo -P /etc/yum.repos.d/
# import the repository signing GPG key:
sudo rpm --import http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0/redhat7/yum/RPM-GPG-KEY-cloudera

2.3.2 Modify

Modify cloudera-manager.repo: run vim /etc/yum.repos.d/cloudera-manager.repo and change it to the following (note: the original https must be changed to http):

[cloudera-manager]
name=Cloudera Manager 6.2.0
baseurl=http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0/redhat7/yum/
gpgkey=http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0/redhat7/yum/RPM-GPG-KEY-cloudera
gpgcheck=1
enabled=1
autorefresh=0
type=rpm-md

2.3.3 Update yum

#clear the yum cache
sudo yum clean all
#update yum
sudo yum update

3. Installation

With the preparation done, we now move on to the actual installation.

3.1 Install Cloudera Manager

  • On the Server node, run:

    sudo yum install -y cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server
    
  • On the Agent nodes, run:

    sudo yum install -y cloudera-manager-agent cloudera-manager-daemons
    
  • After the installation completes, the following files and directories are created automatically on the server node:

     /etc/cloudera-scm-agent/config.ini
     /etc/cloudera-scm-server/
     /opt/cloudera
    ……
    
  • To make the later installation faster, put the downloaded CDH parcel files here (run on the Server node only):

    cd /opt/cloudera/parcel-repo/
    wget http://${cloudera-repos.http.host}/cloudera-repos/cdh6/6.2.0/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel
    wget http://${cloudera-repos.http.host}/cloudera-repos/cdh6/6.2.0/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel.sha1
    wget http://${cloudera-repos.http.host}/cloudera-repos/cdh6/6.2.0/parcels/manifest.json
    # find the hash for this parcel version in manifest.json (around line 755) and copy it into the *.sha file
    # normally the content of CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel.sha1 is identical to the parcel hash, so simply renaming that file also works
    echo "e9c8328d8c370517c958111a3db1a085ebace237"  > CDH-6.2.0-1.cdh6.2.0.p0.967373-el7.parcel.sha
    #echo "d6e1483e47e3f2b1717db8357409865875dc307e"  > CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha
    #change the owner and group
    chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo/*
    
    
  • Modify the agent configuration file so that the Cloudera Manager Agent points to the Cloudera Manager Server.
    This mainly means editing config.ini on the Agent nodes.

    vim /etc/cloudera-scm-agent/config.ini
    #configure the following items
    # Hostname of the CM server (the host running Cloudera Manager Server)
    server_host=cdh1.example.com
    # Port that the CM server is listening on
    server_port=7182
    #1 enables TLS encryption for the agent; if TLS has not been set up earlier, do NOT enable it
    #use_tls=1
    

3.2 Set Up the Cloudera Manager Database

Cloudera Manager Server ships with a database-preparation script. This script is used to initialize the database-related configuration; it does not create the tables inside the metadata databases.

3.2.1 Create the databases for the Cloudera software

This step creates the databases the Cloudera software needs; otherwise the script in the next step fails with an error like:

[                          main] DbCommandExecutor              INFO  Able to connect to db server on host 'localhost' but not able to find or connect to database 'scm'.
[                          main] DbCommandExecutor              ERROR Error when connecting to database.
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'scm'
……

The databases required by the Cloudera software are listed in Databases for Cloudera Software.
If you are only installing Cloudera Manager Server for now, you only need to create the scm database, as shown in that list; if you plan to install other services, create their databases as well while you are at it.

# run the following after logging in to MySQL
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

# create the other databases too, while you are at it
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
# metastore databases for Hive, Impala, etc.
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

3.2.2 Initialize the database

The database is initialized with the scm_prepare_database.sh script. Its syntax is:

# the available options can be listed with scm_prepare_database.sh --help
sudo /opt/cloudera/cm/schema/scm_prepare_database.sh [options] <databaseType> <databaseName> <databaseUser> <password>

Initialize the scm database configuration. This step updates /etc/cloudera-scm-server/db.properties (if the driver cannot be found, check that /usr/share/java contains mysql-connector-java.jar).

[root@cdh1 ~]#  sudo /opt/cloudera/cm/schema/scm_prepare_database.sh -h localhost  mysql scm scm scm
JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/local/zulu8/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/opt/cloudera/cm/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
[                          main] DbCommandExecutor              INFO  Successfully connected to database.
All done, your SCM database is configured correctly!

Parameter description:

  • options: extra flags for the script; if the database is not local, specify the MySQL host with -h or --host (default: localhost)
  • databaseType: mysql here; other database types such as oracle are also supported
  • databaseName: the database to initialize, the scm database here
  • databaseUser: the MySQL username, scm here
  • password: the password of that MySQL user, scm here

If you use a JDK you set up yourself, this step may fail with an error like:

[root@cdh1 java]# sudo /opt/cloudera/cm/schema/scm_prepare_database.sh -h cdh1 mysql scm scm scm
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/opt/cloudera/cm/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
[                          main] DbCommandExecutor              ERROR Error when connecting to database.
java.sql.SQLException: java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre/lib/tzdb.dat (No such file or directory)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:964)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:897)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:886)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:860)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:877)
        at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:873)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:443)
        at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:389)
        at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:330)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:247)
        at com.cloudera.enterprise.dbutil.DbCommandExecutor.testDbConnection(DbCommandExecutor.java:263)
        at com.cloudera.enterprise.dbutil.DbCommandExecutor.main(DbCommandExecutor.java:139)
Caused by: java.lang.Error: java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre/lib/tzdb.dat (No such file or directory)
        at sun.util.calendar.ZoneInfoFile$1.run(ZoneInfoFile.java:261)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.util.calendar.ZoneInfoFile.<clinit>(ZoneInfoFile.java:251)
        at sun.util.calendar.ZoneInfo.getTimeZone(ZoneInfo.java:589)
        at java.util.TimeZone.getTimeZone(TimeZone.java:560)
        at java.util.TimeZone.setDefaultZone(TimeZone.java:666)
        at java.util.TimeZone.getDefaultRef(TimeZone.java:636)
        at java.util.GregorianCalendar.<init>(GregorianCalendar.java:591)
        at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:706)
        at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
        ... 6 more
Caused by: java.io.FileNotFoundException: /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/jre/lib/tzdb.dat (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at sun.util.calendar.ZoneInfoFile$1.run(ZoneInfoFile.java:255)
        ... 20 more
[                          main] DbCommandExecutor              ERROR Exiting with exit code 4
--> Error 4, giving up (use --force if you wish to ignore the error)

Workaround: open the script /opt/cloudera/cm/schema/scm_prepare_database.sh and, around line 108, add your own JAVA_HOME to the local JAVA8_HOME_CANDIDATES=() list:

  local JAVA8_HOME_CANDIDATES=(
  	'/usr/java/jdk1.8.0_181-cloudera'
    '/usr/java/jdk1.8'
    '/usr/java/jre1.8'
    '/usr/lib/jvm/j2sdk1.8-oracle'
    '/usr/lib/jvm/j2sdk1.8-oracle/jre'
    '/usr/lib/jvm/java-8-oracle'
  )

3.3 Install CDH and Other Software

You only need to start the server on the Cloudera Manager Server node; the Agents are started for us automatically in the later steps of the web wizard.

3.3.1 Start Cloudera Manager Server

sudo systemctl start cloudera-scm-server

Check the startup result:

sudo systemctl status cloudera-scm-server

To watch the startup process, run the following on the Cloudera Manager Server host:

sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log
# when you see the following log entry, the Cloudera Manager Admin Console is ready:
# INFO WebServerImpl:com.cloudera.server.cmf.WebServerImpl: Started Jetty server.

If the log shows problems, resolve them based on the messages. For example:

2019-06-13 16:33:19,148 ERROR WebServerImpl:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: No read permission to the server storage directory [/var/lib/cloudera-scm-server/search]
2019-06-13 16:33:19,148 ERROR WebServerImpl:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: No write permission to the server storage directory [/var/lib/cloudera-scm-server/search]
……
2019-06-13 16:33:19,637 ERROR WebServerImpl:org.springframework.web.servlet.DispatcherServlet: Context initialization failed
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'reportsController': Unsatisfied dependency expressed through field 'viewFactory'; nested exception is org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'viewFactory': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
……
Caused by: org.springframework.beans.factory.BeanCreationNotAllowedException: Error creating bean with name 'viewFactory': Singleton bean creation not allowed while singletons of this factory are in destruction (Do not request a bean from a BeanFactory in a destroy method implementation!)
……
================================================================================
Starting SCM Server. JVM Args: [-Dlog4j.configuration=file:/etc/cloudera-scm-server/log4j.properties, -Dfile.encoding=UTF-8, -Duser.timezone=Asia/Shanghai, -Dcmf.root.logger=INFO,LOGFILE, -Dcmf.log.dir=/var/log/cloudera-scm-server, -Dcmf.log.file=cloudera-scm-server.log, -Dcmf.jetty.threshhold=WARN, -Dcmf.schema.dir=/opt/cloudera/cm/schema, -Djava.awt.headless=true, -Djava.net.preferIPv4Stack=true, -Dpython.home=/opt/cloudera/cm/python, -XX:+UseConcMarkSweepGC, -XX:+UseParNewGC, -XX:+HeapDumpOnOutOfMemoryError, -Xmx2G, -XX:MaxPermSize=256m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/tmp, -XX:OnOutOfMemoryError=kill -9 %p], Args: [], Version: 6.2.0 (#968826 built by jenkins on 20190314-1704 git: 16bbe6211555460a860cf22d811680b35755ea81)
Server failed.
java.lang.NoClassDefFoundError: Could not initialize class sun.util.calendar.ZoneInfoFile
	at sun.util.calendar.ZoneInfo.getTimeZone(ZoneInfo.java:589)
	at java.util.TimeZone.getTimeZone(TimeZone.java:560)
	at java.util.TimeZone.setDefaultZone(TimeZone.java:666)
	at java.util.TimeZone.getDefaultRef(TimeZone.java:636)
	at java.util.Date.normalize(Date.java:1197)
	at java.util.Date.toString(Date.java:1030)
	at java.lang.String.valueOf(String.java:2994)
	at java.lang.StringBuilder.append(StringBuilder.java:131)
	at org.springframework.context.support.AbstractApplicationContext.toString(AbstractApplicationContext.java:1367)
	at java.lang.String.valueOf(String.java:2994)
	at java.lang.StringBuilder.append(StringBuilder.java:131)
	at org.springframework.context.support.AbstractApplicationContext.prepareRefresh(AbstractApplicationContext.java:583)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:512)
	at org.springframework.context.access.ContextSingletonBeanFactoryLocator.initializeDefinition(ContextSingletonBeanFactoryLocator.java:143)
	at org.springframework.beans.factory.access.SingletonBeanFactoryLocator.useBeanFactory(SingletonBeanFactoryLocator.java:383)
	at com.cloudera.server.cmf.Main.findBeanFactory(Main.java:481)
	at com.cloudera.server.cmf.Main.findBootstrapApplicationContext(Main.java:472)
	at com.cloudera.server.cmf.Main.bootstrapSpringContext(Main.java:375)
	at com.cloudera.server.cmf.Main.<init>(Main.java:260)
	at com.cloudera.server.cmf.Main.main(Main.java:233)
================================================================================

Fix the file permission problems and the timezone. If the problem still cannot be resolved, switch back to the Oracle JDK.
For the timezone, add CMF_OPTS="$CMF_OPTS -Duser.timezone=Asia/Shanghai" around line 40 of /opt/cloudera/cm/bin/cm-server, as sketched below.
(screenshot: cloudera-scm-server-timezone)
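
A minimal sketch of that edit (the line number is approximate; append it next to the existing CMF_OPTS assignments):

# /opt/cloudera/cm/bin/cm-server, around line 40
CMF_OPTS="$CMF_OPTS -Duser.timezone=Asia/Shanghai"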

If the following error is reported, delete the guid file /var/lib/cloudera-scm-agent/cm_guid.

[15/Jun/2019 13:54:55 +0000] 24821 MainThread agent        ERROR    Error, CM server guid updated, expected 198b7045-53ce-458a-9c0a-052d0aba8a22, received ea04f769-95c8-471f-8860-3943bfc8ea7b
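
A hedged sketch of the fix on the affected Agent node (assuming the agent is managed by systemd):

rm -f /var/lib/cloudera-scm-agent/cm_guid
systemctl restart cloudera-scm-agent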

*(Optional, if needed) To give cloudera-scm-server a new instance uuid, generate one as follows; a restart is required afterwards.

uuidgen > /etc/cloudera-scm-server/uuid

3.3.2 Go to the Web Browser

In a web browser, open http://<server_host>:7180, where <server_host> is the FQDN or IP address of the host running Cloudera Manager Server.

Log in to the Cloudera Manager Admin Console; the default credentials are:

  • Username: admin
  • Password: admin

After logging in, the following pages appear; just follow the prompts to install: Welcome -> Accept License -> Select Edition.
(screenshots: cdh-web-01, cdh-web-02)

This step selects the edition to install; the main features supported by each edition are listed. The first column is Cloudera Express, the free quick-start edition; the second is the Cloudera Enterprise trial (free for 60 days); the third is the full-featured Cloudera Enterprise, which requires a license and is paid. See the complete list of features available in Cloudera Express and Cloudera Enterprise.
(screenshot: cdh-web-03)
If you choose the first column, the Express edition is completely free, and its features and services are usually enough when the requirements are not especially complex. If the features later fall short and you want Cloudera Enterprise, there is no need to worry: in the Cloudera Manager page, click the Administration menu in the header, choose License from the drop-down, and on that page select either Try Cloudera Enterprise for 60 days or Upgrade to Cloudera Enterprise. For more detail see Upgrading from Cloudera Express to Cloudera Enterprise ➹

If you choose the second column, you can try all Cloudera Enterprise features free for 60 days; the trial can only be used once. For notes on license expiration and trial licenses, see Managing Licenses ➹

If you choose the third column, Cloudera Enterprise requires a license. To obtain one, fill in this form or call 866-843-7207. For details see Managing Licenses; features and pricing are described on the Features and Pricing page.



Cluster Installation

(screenshot: cdh-web-04)

  • Welcome
  • Cluster Basics: give the cluster a name.
  • Specify Hosts: enter the cluster hostnames, one per line. Note that these must be valid FQDNs (fully qualified domain names); otherwise the validation during the Agents installation will fail. For example:
    cdh1.example.com
    cdh2.example.com
    cdh3.example.com
    
    What if the root passwords differ between nodes? Besides asking the administrator to set them all to the same value, you can work around it like this: install with a single node first, and when the web wizard reaches the cluster setup step where component services are selected, open another page, go to Cloudera Manager, choose Hosts -> All Hosts -> Add Hosts, and follow the prompts to add the remaining nodes to this cluster name one by one, entering each machine's root password for verification.

(screenshot: host validation)

  • Select Repository: here you can configure a custom repository (i.e. the one set up earlier, http://${cloudera-repos.http.host}/cloudera-repos/cm6/6.2.0), and so on.
  • Install JDK: if the environment already has a JDK installed, leave this unchecked and continue.
  • Provide SSH login credentials: enter the account for the Cloudera Manager hosts; use root and enter its password.
  • Install Agents: this step starts the Agent service on the cluster's Agent nodes. (screenshot: cdh-web-05)
  • Install Parcels (screenshot: cdh-web-06)
  • Inspect Cluster: run the Network and Hosts inspections, then continue. (screenshot: cdh-web-07)
    If this step reports warnings like the following (the text here is quoted from the CDH 6.3.0 pages):
⚠️ Handling warning 1
Cloudera recommends setting /proc/sys/vm/swappiness to a maximum of 10. The current setting is 30. Use the sysctl command to change the setting at runtime and edit /etc/sysctl.conf so the setting survives a reboot.
You may continue with the installation, but Cloudera Manager may report that your hosts are unhealthy because they are swapping. The following hosts will be affected:

Fix:

sysctl vm.swappiness=10
# the change above takes effect immediately, but after a reboot the old value would come back, so also persist it
echo 'vm.swappiness=10'>> /etc/sysctl.conf
⚠️ Handling warning 2
Transparent huge page compaction is enabled and may cause significant performance problems. Run "echo never > /sys/kernel/mm/transparent_hugepage/defrag" and
"echo never > /sys/kernel/mm/transparent_hugepage/enabled" to disable it,
then add the same commands to an init script such as /etc/rc.local so they are applied again after a reboot. The following hosts will be affected:

Fix:

echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# then add the commands to an init script
vi /etc/rc.local
# append the following
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
Set Up the Cluster Using the Wizard
  • Select Services: you can start with the Essentials set of services and add more later. For a data-warehouse style cluster, the third option is a good fit. (screenshot: cdh-web-08)

  • Assign Roles: assign the roles of the selected components to hosts.
    (screenshot: role assignment)

  • Setup Database: be sure to grant the appropriate privileges to the users the component services use; otherwise services on other nodes will fail to connect to the metadata databases.

Service            Host             Database    Username   Password
Hive               cdh1.yore.com    metastore   scm        YP***GqA
Activity Monitor   cdh1.yore.com    amon        scm        YP***GqA
Oozie Server       cdh1.yore.com    oozie       scm        YP***GqA
Hue                cdh1.yore.com    hue         scm        YPG***qA

What if you forget the database password? You can often guess your way there, but you can also simply look in /etc/cloudera-scm-server/db.properties.
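
A quick way to look it up; the property names below are what current Cloudera Manager versions write into that file, so treat them as an assumption:

# the configured DB user and password live in this file
grep -E 'com.cloudera.cmf.db.(user|password)' /etc/cloudera-scm-server/db.properties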
(screenshot: cdh-web-10)

  • Review Changes (screenshot: cdh-web-11)
  • Command Details: if a failure here is caused by a database problem, drop the corresponding database and recreate it. (screenshot: cdh-web-12)
  • Summary (screenshot: cdh-web-13)

4. Other Issues

4.1 Error starting NodeManager

The following exception occurs:

2019-06-16 12:19:25,932 WARN org.apache.hadoop.service.AbstractService: When stopping the service NodeManager : java.lang.NullPointerException
java.lang.NullPointerException
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceStop(NodeManager.java:483)
	at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:222)
	at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:54)
	at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:104)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:869)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:942)
2019-06-16 12:19:25,932 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/LOCK: Permission denied
	at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:173)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:281)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:354)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:869)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:942)
Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: IO error: /var/lib/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/LOCK: Permission denied
	at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
	at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
	at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
	at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.openDatabase(NMLeveldbStateStoreService.java:1517)
	at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:1504)
	at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:342)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
	... 5 more

Check /var/lib on each NodeManager node:

cd /var/lib
ls -l | grep -i hadoop

One node turns out to look like this:

[root@cdh1 lib]#  ls -l | grep -i hadoop
drwxr-xr-x   3          996          992 4096 Apr 25 14:39 hadoop-hdfs
drwxr-xr-x   2 cloudera-scm cloudera-scm 4096 Apr 25 13:50 hadoop-httpfs
drwxr-xr-x   2 sentry       sentry       4096 Apr 25 13:50 hadoop-kms
drwxr-xr-x   2 flume        flume        4096 Apr 25 13:50 hadoop-mapreduce
drwxr-xr-x   4 solr         solr         4096 Apr 25 14:40 hadoop-yarn

while the other nodes look like this:

[root@cdh2 lib]#  ls -l | grep -i hadoop
drwxr-xr-x  3 hdfs         hdfs         4096 Jun 16 06:04 hadoop-hdfs
drwxr-xr-x  3 httpfs       httpfs       4096 Jun 16 06:04 hadoop-httpfs
drwxr-xr-x  2 mapred       mapred       4096 Jun 16 05:06 hadoop-mapreduce
drwxr-xr-x  4 yarn         yarn         4096 Jun 16 06:07 hadoop-yarn

So run the following on the problematic node to fix the ownership and permissions of those directories, then restart its NodeManager:

chown  -R hdfs:hdfs /var/lib/hadoop-hdfs
chown  -R httpfs.httpfs /var/lib/hadoop-httpfs
chown  -R kms.kms /var/lib/hadoop-kms
chown  -R mapred:mapred /var/lib/hadoop-mapreduce
chown  -R yarn:yarn /var/lib/hadoop-yarn
chmod -R 755 /var/lib/hadoop-*

4.2 Could not open file in log_dir /var/log/catalogd: Permission denied

The log shows the following error:

+ exec /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/impala/../../bin/catalogd --flagfile=/var/run/cloudera-scm-agent/process/173-impala-CATALOGSERVER/impala-conf/catalogserver_flags
Could not open file in log_dir /var/log/catalogd: Permission denied

……

+ exec /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/impala/../../bin/statestored --flagfile=/var/run/cloudera-scm-agent/process/175-impala-STATESTORE/impala-conf/state_store_flags
Could not open file in log_dir /var/log/statestore: Permission denied

Run the following to fix the ownership and permissions of the directories on the problematic node, then restart the corresponding services on that node:

cd /var/log
ls -l /var/log | grep -i catalogd
# run on the Impala Catalog Server node
chown  -R impala:impala /var/log/catalogd
# run on the Impala StateStore node
chown  -R impala:impala /var/log/statestore

4.3 Cannot connect to port 2049

CONF_DIR=/var/run/cloudera-scm-agent/process/137-hdfs-NFSGATEWAY
CMF_CONF_DIR=
unlimited
Cannot connect to port 2049.
using /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/bigtop-utils as JSVC_HOME

Start rpcbind on the NFS Gateway node:

# check the NFS service status on each node
 systemctl status nfs-server.service
# install it if missing
 yum -y install nfs-utils 
 
# check the rpcbind service status
 systemctl status rpcbind.service
# if it is not running, start rpcbind
 systemctl start rpcbind.service

4.4 Kafka Cannot Create a Topic

After the Kafka component is installed successfully, creating a topic fails:

[root@cdh2 lib]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic canal
Error while executing topic command : Replication factor: 1 larger than available brokers: 0.
19/06/16 23:27:30 ERROR admin.TopicCommand$: org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.

Logging in with zkCli.sh to inspect Kafka's znodes shows that everything looks fine: the broker ids are there and the topic names created by background programs are there, yet the command line still cannot see them.

Restart both ZooKeeper and Kafka first and try again. If that still does not help, set Kafka's znode directory in ZooKeeper to the root node, restart, and create and list the topic again; Kafka then works normally.
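
The znode setting is Kafka's ZooKeeper Root (zookeeper.chroot) in Cloudera Manager (Kafka -> Configuration); the exact name may vary between versions, so treat it as an assumption. As an alternative to changing it, you can point the CLI at the chroot the brokers actually registered under; the sketch below assumes the chroot is /kafka, so check the real value in CM first:

# confirm where the brokers registered (assumes the chroot is /kafka)
zookeeper-client -server localhost:2181 ls /kafka/brokers/ids

# create the topic against that chroot
kafka-topics.sh --create --zookeeper localhost:2181/kafka --replication-factor 1 --partitions 1 --topic canal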

4.5 Driver Not Found When Installing or Starting the Hive Component

Sometimes, even though the driver jar has already been placed under /usr/share/java/, the following error still appears:

+ [[ -z /opt/cloudera/cm ]]
+ JDBC_JARS_CLASSPATH='/opt/cloudera/cm/lib/*:/usr/share/java/mysql-connector-java.jar:/opt/cloudera/cm/lib/postgresql-42.1.4.jre7.jar:/usr/share/java/oracle-connector-java.jar'
++ /usr/java/jdk1.8.0_181-cloudera/bin/java -Djava.net.preferIPv4Stack=true -cp '/opt/cloudera/cm/lib/*:/usr/share/java/mysql-connector-java.jar:/opt/cloudera/cm/lib/postgresql-42.1.4.jre7.jar:/usr/share/java/oracle-connector-java.jar' com.cloudera.cmf.service.hive.HiveMetastoreDbUtil /var/run/cloudera-scm-agent/process/32-hive-metastore-create-tables/metastore_db_py.properties unused --printTableCount
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
	at com.cloudera.cmf.service.hive.HiveMetastoreDbUtil.countTables(HiveMetastoreDbUtil.java:203)
	at com.cloudera.cmf.service.hive.HiveMetastoreDbUtil.printTableCount(HiveMetastoreDbUtil.java:284)
	at com.cloudera.cmf.service.hive.HiveMetastoreDbUtil.main(HiveMetastoreDbUtil.java:334)
Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:264)
	at com.cloudera.enterprise.dbutil.SqlRunner.open(SqlRunner.java:180)
	at com.cloudera.enterprise.dbutil.SqlRunner.getDatabaseName(SqlRunner.java:264)
	at com.cloudera.cmf.service.hive.HiveMetastoreDbUtil.countTables(HiveMetastoreDbUtil.java:197)
	... 2 more
+ NUM_TABLES='[                          main] SqlRunner                      ERROR Unable to find the MySQL JDBC driver. Please make sure that you have installed it as per instruction in the installation guide.'
+ [[ 1 -ne 0 ]]
+ echo 'Failed to count existing tables.'
+ exit 1

Put a copy of the driver under Hive's lib directory:

# whatever version the driver is, its file name must be mysql-connector-java.jar
#make sure the driver jar has sufficient permissions
chmod 755 /usr/share/java/mysql-connector-java.jar
#ln -s /usr/share/java/mysql-connector-java.jar /opt/cloudera/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813/lib/hive/lib/mysql-connector-java.jar
ln -s /usr/share/java/mysql-connector-java.jar /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hive/lib/mysql-connector-java.jar

4.6 Hive Fails to Fetch VERSION at Startup

If the log says the metastore failed to fetch VERSION, check whether the metadata tables exist in Hive's metastore database (the hive database). If not, initialize the tables into MySQL's hive database manually:

# locate the Hive metastore initialization SQL scripts; scripts for several versions will turn up
find / -name hive-schema*mysql.sql
# for example: /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-2.1.1.mysql.sql

# log in to MySQL
mysql -u root -p
> use hive;
> source /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-2.1.1.mysql.sql

This initializes the Hive metadata tables; then restart the Hive instance's services.

4.7 Impala Timezone Setting

Without this setting, date/time values returned by Impala are off by eight hours, so it is best to set it.

Cloudera Manager web UI  >  Impala  >  Configuration  >  search for: Impala Daemon Command Line Argument Advanced Configuration Snippet (Safety Valve)  >  add -use_local_tz_for_unix_timestamp_conversions=true

Save the configuration and restart Impala.

4.8 Cannot Switch to the hdfs User

When HDFS permissions are enabled, you sometimes need to switch to the hdfs user to operate on the data, but you may get:

[root@cdh1 ~]# su hdfs
This account is currently not available.

Check the system user entry and change the hdfs user's shell from /sbin/nologin to /bin/bash, save, and you can then switch to hdfs again.

[root@cdh1 ~]# cat /etc/passwd | grep hdfs
hdfs:x:954:961:Hadoop HDFS:/var/lib/hadoop-hdfs:/sbin/nologin

#change the entry above to the following
 hdfs:x:954:961:Hadoop HDFS:/var/lib/hadoop-hdfs:/bin/bash
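
Equivalently, instead of editing /etc/passwd by hand, the standard usermod command changes the login shell:

usermod -s /bin/bash hdfs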

4.9 NTP Problems

The role's detailed log shows:

Check failed: _s.ok() Bad status: Runtime error: Cannot initialize clock: failed to wait for clock sync using command '/usr/bin/chronyc waitsync 60 0 0 1': /usr/bin/chronyc: process exited with non-zero status 1

Running the ntptime command on the server gives the following output, which indicates an NTP problem:

[root@cdh3 ~]# ntptime
ntp_gettime() returns code 5 (ERROR)
  time e0b2b833.5be28000  Tue, Jun 18 2019  9:09:07.358, (.358925),
  maximum error 16000000 us, estimated error 16000000 us, TAI offset 0
ntp_adjtime() returns code 5 (ERROR)
  modes 0x0 (),
  offset 0.000 us, frequency 9.655 ppm, interval 1 s,
  maximum error 16000000 us, estimated error 16000000 us,
  status 0x40 (UNSYNC),
  time constant 10, precision 1.000 us, tolerance 500 ppm,

Pay particular attention to the important parts of the output (us = microseconds):

  • maximum error 16000000 us: the time error is 16 s, which already exceeds the maximum error Kudu allows
  • status 0x40 (UNSYNC): the sync status, meaning the clock is no longer synchronized; status 0x2001 (PLL,NANO) would be the healthy state

Healthy output looks like this:

[root@cdh1 ~]# ntptime
ntp_gettime() returns code 0 (OK)
  time e0b2b842.b180f51c  Tue, Jun 18 2019  9:09:22.693, (.693374110),
  maximum error 27426 us, estimated error 0 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset 0.000 us, frequency 3.932 ppm, interval 1 s,
  maximum error 27426 us, estimated error 0 us,
  status 0x2001 (PLL,NANO),
  time constant 6, precision 0.001 us, tolerance 500 ppm,

If the status is UNSYNC, check the server's NTP service with systemctl status ntpd.service. If NTP is not set up, install and configure it as described in section 1.4 NTP. For more on this topic see the NTP Clock Synchronization documentation, or other references.

4.10 Insufficient HDFS Permissions for the root User

# 1. create the supergroup group on Linux
groupadd supergroup

# 2. add the root user to supergroup
usermod -a -G supergroup root

# 3. sync the OS group mapping to the HDFS filesystem
sudo -u hdfs hdfs dfsadmin -refreshUserToGroupsMappings

# 4. list the users that belong to the supergroup group
grep 'supergroup:' /etc/group

4.11 Other Exceptions When Installing Components

If everything above is fine, the most common failures when installing components are file ownership and permission problems; troubleshoot and fix them as in 4.2. Check the relevant logs often and resolve issues based on the log messages.



Finally, here is a screenshot of the CDH web page after the installation is complete.

The Cloudera Manager Admin page looks like this:

5 Checking Service Status and Restarting Services via the API

The official API documentation is linked at the top of this post (Cloudera Manager API).

When the Cloudera Manager Admin console is not convenient to reach, you can check services, or restart them when they are down, through the API. In what follows, assume the admin account of the console is admin with password admin, the cloudera-scm-server service runs on cdh1, and Kudu is used as the example service; other services work the same way.

5.1 List All Hosts in the Cluster

# -u specifies the username and password
curl -u admin:admin 'http://cdh1:7180/api/v1/hosts' 
# a JSON document like the following is returned

You can see that items lists every host's IP, hostname, hostId, and so on; the hostId value is used later to identify which node a given role instance runs on.

{
  "items" : [ {
    "hostId" : "ecf4247c-xxxx-438e-b026-d77becff1fbe",
    "ipAddress" : "192.168.xxx.xx",
    "hostname" : "cdh1.yore.com",
    "rackId" : "/default",
    "hostUrl" : "http://cdh1.yore.com:7180/cmf/hostRedirect/ecf4247c-xxxx-438e-b026-d77becff1fbe"
  }, {
    "hostId" : "6ce8ae83-xxxx-46e1-a47a-96201681a019",
    "ipAddress" : "192.168.xxx.xx",
    "hostname" : "cdh2.yore.com",
    "rackId" : "/default",
    "hostUrl" : "http://cdh1.yore.com:7180/cmf/hostRedirect/6ce8ae83-xxxx-46e1-a47a-96201681a019"
  }, {
    "hostId" : "9e512856-xxxx-4608-8891-0573cdc68bee",
    "ipAddress" : "192.168.xxx.xx",
    "hostname" : "cdh3.yore.com",
    "rackId" : "/default",
    "hostUrl" : "http://cdh1.yore.com:7180/cmf/hostRedirect/9e512856-xxxx-4608-8891-0573cdc68bee"
  } ]
}

5.2 Get the Cluster Name

curl -u admin:admin 'http://cdh1:7180/api/v1/clusters'

name is the cluster name; the API requests below use this value.

{
  "items" : [ {
    "name" : "yore-cdh-test",
    "version" : "CDH6"
  } ]
}

5.3 List the Services in the Cluster

curl -u admin:admin 'http://cdh1:7180/api/v1/clusters/yore-cdh-test/services' 

Other services are omitted; the focus here is the Kudu service. In general, name is the service identifier of the component, e.g. Apache Kudu's name is kudu.

{
  "items": [
    {
      "healthChecks": [
        {
          "name": "HIVE_HIVEMETASTORES_HEALTHY",
          "summary": "GOOD"
        },
        {
          "name": "HIVE_HIVESERVER2S_HEALTHY",
          "summary": "GOOD"
        },
        {
          "name": "HIVE_WEBHCATS_HEALTHY",
          "summary": "GOOD"
        }
      ],
      "name": "hive",
      "type": "HIVE",
      "clusterRef": {
        "clusterName": "yore-cdh-test"
      },
      "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/hive",
      "serviceState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [],
      "name": "kudu",
      "type": "KUDU",
      "clusterRef": {
        "clusterName": "yore-cdh-test"
      },
      "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/kudu",
      "serviceState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "IMPALA_CATALOGSERVER_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "IMPALA_IMPALADS_HEALTHY",
          "summary": "GOOD"
        },
        {
          "name": "IMPALA_STATESTORE_HEALTH",
          "summary": "GOOD"
        }
      ],
      "name": "impala",
      "type": "IMPALA",
      "clusterRef": {
        "clusterName": "yore-cdh-test"
      },
      "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/impala",
      "serviceState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "HUE_HUE_SERVERS_HEALTHY",
          "summary": "GOOD"
        },
        {
          "name": "HUE_LOAD_BALANCER_HEALTHY",
          "summary": "GOOD"
        }
      ],
      "name": "hue",
      "type": "HUE",
      "clusterRef": {
        "clusterName": "yore-cdh-test"
      },
      "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/hue",
      "serviceState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    }
  ]
}

5.4 Get the Status of a Specific Service

curl -u admin:admin 'http://cdh1:7180/api/v1/clusters/yore-cdh-test/services/kudu'

Here you can see the kudu service's state (STARTED) and that its health summary is good (GOOD).

{
  "healthChecks": [],
  "name": "kudu",
  "type": "KUDU",
  "clusterRef": {
    "clusterName": "yore-cdh-test"
  },
  "serviceUrl": "http://cdh1.yore.com:7180/cmf/serviceRedirect/kudu",
  "serviceState": "STARTED",
  "healthSummary": "GOOD",
  "configStale": false
}

5.5 List the Role Instances of a Specific Service

curl -u admin:admin 'http://cdh1:7180/api/v1/clusters/yore-cdh-test/services/kudu/roles'

From the returned JSON you can see the runtime information of each Kudu role instance. Combined with the hostId values from 5.1 List All Hosts in the Cluster, you can tell which node each role runs on. For example, to find the Kudu Tablet Server on hostname cdh1.yore.com: from 5.1 we know cdh1.yore.com has hostId ecf4247c-xxxx-438e-b026-d77becff1fbe, and from the JSON below the Tablet Server with that hostId has name kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3.

{
  "items": [
    {
      "healthChecks": [
        {
          "name": "KUDU_KUDU_TSERVER_FILE_DESCRIPTOR",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_HOST_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_LOG_DIRECTORY_FREE_SPACE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SCM_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SWAP_MEMORY_USAGE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_UNEXPECTED_EXITS",
          "summary": "GOOD"
        }
      ],
      "name": "kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3",
      "type": "KUDU_TSERVER",
      "serviceRef": {
        "clusterName": "yore-cdh-test",
        "serviceName": "kudu"
      },
      "hostRef": {
        "hostId": "ecf4247c-xxxx-438e-b026-d77becff1fbe"
      },
      "roleUrl": "http://cdh1.yore.com:7180/cmf/roleRedirect/kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3",
      "roleState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "KUDU_KUDU_MASTER_FILE_DESCRIPTOR",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_HOST_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_LOG_DIRECTORY_FREE_SPACE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_SCM_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_SWAP_MEMORY_USAGE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_MASTER_UNEXPECTED_EXITS",
          "summary": "GOOD"
        }
      ],
      "name": "kudu-KUDU_MASTER-ec14a1fa91e54c0ec078bbc575a3db83",
      "type": "KUDU_MASTER",
      "serviceRef": {
        "clusterName": "yore-cdh-test",
        "serviceName": "kudu"
      },
      "hostRef": {
        "hostId": "9e512856-xxxx-4608-8891-0573cdc68bee"
      },
      "roleUrl": "http://cdh1.yore.com:7180/cmf/roleRedirect/kudu-KUDU_MASTER-ec14a1fa91e54c0ec078bbc575a3db83",
      "roleState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "KUDU_KUDU_TSERVER_FILE_DESCRIPTOR",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_HOST_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_LOG_DIRECTORY_FREE_SPACE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SCM_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SWAP_MEMORY_USAGE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_UNEXPECTED_EXITS",
          "summary": "GOOD"
        }
      ],
      "name": "kudu-KUDU_TSERVER-ec14a1fa91e54c0ec078bbc575a3db83",
      "type": "KUDU_TSERVER",
      "serviceRef": {
        "clusterName": "yore-cdh-test",
        "serviceName": "kudu"
      },
      "hostRef": {
        "hostId": "9e512856-xxxx-4608-8891-0573cdc68bee"
      },
      "roleUrl": "http://cdh1.yore.com:7180/cmf/roleRedirect/kudu-KUDU_TSERVER-ec14a1fa91e54c0ec078bbc575a3db83",
      "roleState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    },
    {
      "healthChecks": [
        {
          "name": "KUDU_KUDU_TSERVER_FILE_DESCRIPTOR",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_HOST_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_LOG_DIRECTORY_FREE_SPACE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SCM_HEALTH",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_SWAP_MEMORY_USAGE",
          "summary": "GOOD"
        },
        {
          "name": "KUDU_KUDU_TSERVER_UNEXPECTED_EXITS",
          "summary": "GOOD"
        }
      ],
      "name": "kudu-KUDU_TSERVER-a4748f9a954f807e8e341f3f802b972f",
      "type": "KUDU_TSERVER",
      "serviceRef": {
        "clusterName": "yore-cdh-test",
        "serviceName": "kudu"
      },
      "hostRef": {
        "hostId": "6ce8ae83-xxxx-46e1-a47a-96201681a019"
      },
      "roleUrl": "http://cdh1.yore.com:7180/cmf/roleRedirect/kudu-KUDU_TSERVER-a4748f9a954f807e8e341f3f802b972f",
      "roleState": "STARTED",
      "healthSummary": "GOOD",
      "configStale": false
    }
  ]
}

5.6 Restart a Specific Role Instance on a Specific Node

From the analysis above we know that the Kudu Tablet Server role instance on hostname cdh1.yore.com is named kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3. The example below restarts it; you can also restart several role instances in one request by putting all their role-instance names in the items JSON array.

curl -X POST -H "Content-Type:application/json" -u admin:admin \
-d '{ "items": ["kudu-KUDU_TSERVER-90ffd2c4da706e992590ec4ad20ec5a3"] }' \
'http://cdh1:7180/api/v1/clusters/yore-cdh-test/services/kudu/roleCommands/restart'

If there is no error and a command id is returned, the restart command has been submitted successfully.

{
  "errors" : [ ],
  "items" : [ {
    "id" : 5050,
    "name" : "Restart",
    "startTime" : "2019-06-14T02:10:59.726Z",
    "active" : true,
    "serviceRef" : {
      "clusterName" : "yore-cdh-test",
      "serviceName" : "kudu"
    },
    "roleRef" : {
      "clusterName" : "yore-cdh-test",
      "serviceName" : "kudu",
      "roleName" : "kudu-KUDU_TSERVER-a4748f9a954f807e8e341f3f802b972f"
    }
  } ]
}
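
You can follow up on the returned command using the commands endpoint of the API with the id from the response above; the path below is the standard Cloudera Manager API commands resource, so treat it as an assumption if your API version differs:

curl -u admin:admin 'http://cdh1:7180/api/v1/commands/5050'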

6 HBase Data Migration, Backup, and Recovery Issues

6.1 Migration and Backup

For migration and backup you can use HBase's built-in Export and Import tools to move table data, or tools such as Flume, Sqoop, or DataX. In some cases you can simply hdfs dfs -get the table's data directory from HDFS (the path can be derived from the hbase.rootdir setting, e.g. /hbase/data/<namespace>/<table> on HDFS) to the local filesystem. When needed, put it back to the HBase table namespace path on HDFS to restore, or put it to the HBase namespace path on another HDFS to migrate, and finally repair the metadata.

The rest of this section walks through the last approach and the problems encountered along the way. First decide which HBase table to back up, then hdfs dfs -get /hbase/data/default/<table> to the local filesystem (it is best to check the table's size on HDFS first with hdfs dfs -du -h /hbase/data/default/<table>). Next, transfer or copy the data to the other HBase cluster environment and upload it to HDFS with hdfs dfs -put <table data directory> /hbase/data/default/. Remember to change the owner of the uploaded files to match HBase's other tables to avoid permission problems, e.g. hadoop fs -chown -R hbase:supergroup /hbase/data/default/<table>. The important remaining step is to restore or repair the HBase metadata in the new environment.

With this approach you do not have to create the HBase table in the new environment first. If you know the table schema you can create it manually, so that the HBase data directory on the new HDFS has complete information; then you only need to upload the regions obtained earlier into the corresponding region directories. The advantage is that no metadata repair is needed and a restart of HBase is enough to query the data. But if there are many tables, creating them by hand is tedious, so the focus here is on restoring tables by repairing the metadata (meaning the HBase metadata table hbase:meta; the data in ZooKeeper can be ignored) rather than creating tables manually.
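
A minimal sketch of the get/put flow described above (the table name and the local path are examples):

# on the source cluster: check the size, then pull the table directory to the local filesystem
hdfs dfs -du -h /hbase/data/default/A_title_info
hdfs dfs -get /hbase/data/default/A_title_info /tmp/hbase-backup/

# transfer /tmp/hbase-backup/A_title_info to the target cluster, then on the target cluster:
hdfs dfs -put /tmp/hbase-backup/A_title_info /hbase/data/default/
hadoop fs -chown -R hbase:supergroup /hbase/data/default/A_title_info

# finally repair the metadata as described in 6.3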

6.2 hbck and hbck2

This part only discusses hbck and hbck2; you can skip straight to 6.3 Metadata Repair to do the actual repair.

In HBase 2.x, look at the help output of HBase's built-in repair tool and you will see the note "NOTE: Following options are NOT supported as of HBase version 2.0+." So if your cluster runs HBase 2.x, the repair steps described in some articles online may simply not take effect.

 hbase hbck -help
 # part of the output looks like this
-----------------------------------------------------------------------
Datafile Repair options: (expert features, use with caution!)
   -checkCorruptHFiles     Check all Hfiles by opening them to make sure they are valid
   -sidelineCorruptHFiles  Quarantine corrupted HFiles.  implies -checkCorruptHFiles
 Replication options
   -fixReplication   Deletes replication queues for removed peers
  Metadata Repair options supported as of version 2.0: (expert features, use with caution!)
   -fixVersionFile   Try to fix missing hbase.version file in hdfs.
   -fixReferenceFiles  Try to offline lingering reference store files
   -fixHFileLinks  Try to offline lingering HFileLinks
   -noHdfsChecking   Don't load/check region info from HDFS. Assumes hbase:meta region info is good. Won't check/fix any HDFS issue, e.g. hole, orphan, or overlap
   -ignorePreCheckPermission  ignore filesystem permission pre-check
   
NOTE: Following options are NOT supported as of HBase version 2.0+.
  UNSUPPORTED Metadata Repair options: (expert features, use with caution!)
   -fix              Try to fix region assignments.  This is for backwards compatiblity
   -fixAssignments   Try to fix region assignments.  Replaces the old -fix
   -fixMeta          Try to fix meta problems.  This assumes HDFS region info is good.
   -fixHdfsHoles     Try to fix region holes in hdfs.
   -fixHdfsOrphans   Try to fix region dirs with no .regioninfo file in hdfs
   -fixTableOrphans  Try to fix table dirs with no .tableinfo file in hdfs (online mode only)
   -fixHdfsOverlaps  Try to fix region overlaps in hdfs.
   -maxMerge <n>     When fixing region overlaps, allow at most <n> regions to merge. (n=5 by default)
   -sidelineBigOverlaps  When fixing region overlaps, allow to sideline big overlaps
   -maxOverlapsToSideline <n>  When fixing region overlaps, allow at most <n> regions to sideline per group. (n=2 by default)
   -fixSplitParents  Try to force offline split parents to be online.
   -removeParents    Try to offline and sideline lingering parents and keep daughter regions.
   -fixEmptyMetaCells  Try to fix hbase:meta entries not referencing any region (empty REGIONINFO_QUALIFIER rows)
  UNSUPPORTED Metadata Repair shortcuts
   -repair           Shortcut for -fixAssignments -fixMeta -fixHdfsHoles -fixHdfsOrphans -fixHdfsOverlaps -fixVersionFile -sidelineBigOverlaps -fixReferenceFiles-fixHFileLinks
   -repairHoles      Shortcut for -fixAssignments -fixMeta -fixHdfsHoles
 Replication options
   -fixReplication   Deletes replication queues for removed peers
   -cleanReplicationBrarier [tableName] clean the replication barriers of a specified table, tableName is required

Is there a better way? See the CDH documentation Using the HBCK2 Tool to Remediate HBase Clusters; hbck2 can be obtained as follows:

# get the source code
git clone https://github.com/apache/hbase-operator-tools.git
cd hbase-operator-tools

# Maven must be available in the environment; build with Maven
mvn clean package -Dmaven.test.skip=true
ls hbase-hbck2/target/

# after a successful build the jar is at hbase-hbck2/target/hbase-hbck2-1.1.0-SNAPSHOT.jar;
# here it is copied to the hbase user's home directory
cp hbase-hbck2/target/hbase-hbck2-1.1.0-SNAPSHOT.jar /var/lib/hbase/

# the command format is:
hbase hbck -j <path to the jar> <command>

Next, use this tool to repair the HBase metadata. Besides the CDH documentation above, hbase-hbck2/README.md describes the tool in more detail.

# show the help
hbase hbck -j hbase-hbck2-1.1.0-SNAPSHOT.jar -help

# for example, to repair table A_title_info in the default namespace:
hbase hbck -j hbase-hbck2-1.1.0-SNAPSHOT.jar addFsRegionsMissingInMeta default:A_title_info
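
The HBCK2 README also documents an assigns command, which it suggests as the follow-up to addFsRegionsMissingInMeta for bringing the re-added regions online. A sketch reusing the encoded region name from this example; whether this step alone is sufficient depends on the cluster state, and in this walkthrough the metadata is instead completed in section 6.3:

# schedule an assign for a region by its encoded name (not the full meta row key)
hbase hbck -j hbase-hbck2-1.1.0-SNAPSHOT.jar assigns 04b5133af66afdb34d548182a363b608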

Enter hbase shell and look at the metadata entries for this table:

-- good news: the metadata entries are now there
hbase(main):003:0> scan 'hbase:meta' , {LIMIT=>10,FILTER=>"PrefixFilter('A_title_info')"}
ROW                                                          COLUMN+CELL
 A_title_info                                                column=table:state, timestamp=1603245643483, value=\x08\x00
 A_title_info,,1601168827694.04b5133af66afdb34d548182a363b60 column=info:regioninfo, timestamp=1603246870750, value={ENCODED => 04b5133af66afdb34d548182a363b608, NAME => 'A_title_info,,1601168827694.04b5133af66afdb34d548182a363b608.', ST
 8.                                                          ARTKEY => '', ENDKEY => ''}
 A_title_info,,1601168827694.04b5133af66afdb34d548182a363b60 column=info:state, timestamp=1603246870750, value=CLOSED
 8.
2 row(s)
Took 0.9919 seconds

-- but querying the data still fails with the following error
hbase(main):002:0> count 'A_title_info'
ERROR: No server address listed in hbase:meta for region A_title_info,,1601168827694.04b5133af66afdb34d548182a363b608. containing row
For usage try 'help "count"'
Took 8.5935 seconds

The table structure looks fine, yet every data query fails with the error above. The error tells us that hbase:meta does not record a server address for the table's region, i.e. the metadata was only partially restored. After a proper restore, the meta table contains entries such as column=info:server, timestamp=1603250438191, value=cdh2.yore.com:16020, which is how HBase knows on which node each region is served.
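
A quick way to check whether that server column exists for a given region, without scanning all of hbase:meta, is to read the single cell (row key taken from the scan above):

# read only the info:server cell of the region's row in hbase:meta
echo "get 'hbase:meta', 'A_title_info,,1601168827694.04b5133af66afdb34d548182a363b608.', 'info:server'" | hbase shell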

6.3 Metadata Repair

The attempt in section 6.2 did recover some metadata, but it is incomplete, so we need another way to finish the repair.

First delete the incomplete entries for the table from the HBase meta table (skip this step if hbase:meta currently holds no metadata for the table to be repaired):

hbase(main):008:0> delete 'hbase:meta','A_title_info,,1601168827694.04b5133af66afdb34d548182a363b608.','info:regioninfo'
Took 0.3305 seconds
hbase(main):009:0> delete 'hbase:meta','A_title_info,,1601168827694.04b5133af66afdb34d548182a363b608.','info:state'
Took 0.1940 seconds

Use the repair tool hbase-meta-repair, available on GitHub, to do the repair:

# download the source code
git clone https://github.com/DarkPhoenixs/hbase-meta-repair.git
cd hbase-meta-repair/

# edit application.properties
vim src/main/resources/application.properties

The important settings are the ZooKeeper quorum used by HBase, the znode parent path, and HBase's root path on HDFS (preferably written with the IP address). The table name to repair is optional here, since it can also be passed on the command line when the program is run.

spring.application.name=hbase-meta-repair
zookeeper.address=cdh3:2181,cdh2:2181,cdh1:2181
zookeeper.nodeParent=/hbase
# set to HBase's hbase.rootdir path
hdfs.root.dir=hdfs://192.168.xx.xx:8020/hbase
repair.tableName=A_title_info
logging.level.root=warn

Copy the core-site.xml and hdfs-site.xml of your own environment into the project's resources directory (replacing the ones that ship with the project):

# for example, on a CDH environment:
cp /etc/hbase/conf/core-site.xml src/main/resources/
cp /etc/hbase/conf/hdfs-site.xml src/main/resources/

Finally, build the package and run the repair:

# build the package; it is best to skip the tests
mvn install -Dmaven.test.skip=true

# run the program to repair the HBase table; -Drepair.tableName specifies the table to repair
[root@cdh2 hbase-meta-repair]# java -jar -Drepair.tableName=A_title_info target/hbase-meta-repair-0.0.1.jar
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.2.0.RELEASE)
 ……

When it finishes successfully, it prints something like the following at the end and the program exits automatically:

WARN 27097 --- [ main] o.d.hbase.repair.HbaseRepairRunner : [{"hostAndPort":"cdh2.yore.com:16020","hostname":"cdh2.yore.com","hostnameLowerCase":"cdh2.yore.com","port":16020,"serverName":"cdh2.yore.com,16020,1603247045392","startcode":1603247045392,"versionedBytes":"AABjZGgyLnlnYnguY29tLDE2MDIwLDE2MDMyNDcwNDUzOTI="},{"hostAndPort":"cdh3.yore.com:16020","hostname":"cdh3.yore.com","hostnameLowerCase":"cdh3.yore.com","port":16020,"serverName":"cdh3.yore.com,16020,1603247044440","startcode":1603247044440,"versionedBytes":"AABjZGgzLnlnYnguY29tLDE2MDIwLDE2MDMyNDcwNDQ0NDA="}]
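
The restart mentioned next is normally done from the Cloudera Manager UI; as a sketch, the same thing can be triggered via the CM REST API, where the host, credentials, API version, cluster name and service name below are all assumptions for your environment:

# ask Cloudera Manager to restart the HBase service
curl -u admin:admin -X POST \
    "http://cdh1.yore.com:7180/api/v33/clusters/Cluster%201/services/hbase/commands/restart"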

Restart the HBase service and check the repaired table's metadata again from hbase shell. This time the entries are complete and include column=info:server and the related fields:

hbase(main):001:0> scan 'hbase:meta' , {LIMIT=>10,FILTER=>"PrefixFilter('A_title_info')"}
ROW                                                          COLUMN+CELL
 A_title_info                                                column=table:state, timestamp=1603245643483, value=\x08\x00
 A_title_info,,1601168827694.04b5133af66afdb34d548182a363b60 column=info:regioninfo, timestamp=1603250719476, value={ENCODED => 04b5133af66afdb34d548182a363b608, NAME => 'A_title_info,,1601168827694.04b5133af66afdb34d548182a363b608.', ST
 8.                                                          ARTKEY => '', ENDKEY => ''}
 A_title_info,,1601168827694.04b5133af66afdb34d548182a363b60 column=info:seqnumDuringOpen, timestamp=1603250719476, value=\x00\x00\x00\x00\x00\x00\x13+
 8.
 A_title_info,,1601168827694.04b5133af66afdb34d548182a363b60 column=info:server, timestamp=1603250719476, value=cdh2.yore.com:16020
 8.
 A_title_info,,1601168827694.04b5133af66afdb34d548182a363b60 column=info:serverstartcode, timestamp=1603250719476, value=1603250703456
 8.
 A_title_info,,1601168827694.04b5133af66afdb34d548182a363b60 column=info:sn, timestamp=1603250717575, value=cdh2.yore.com,16020,1603250703456
 8.
 A_title_info,,1601168827694.04b5133af66afdb34d548182a363b60 column=info:state, timestamp=1603250719476, value=OPEN
 8.
2 row(s)
Took 0.9666 seconds

Now query the data:

hbase(main):014:0> count 'A_title_info'
57 row(s)
Took 0.0467 seconds
=> 57

A scan of the table also succeeds.


7 Hive Integration with HBase

When using HBase we often want to query the data with SQL. A common choice is to integrate Apache Phoenix, but another option is to create an external mapping table for HBase in Hive, which Hive supports out of the box; see the Apache Hive documentation:
HBaseIntegration

7.1 Example: Creating an HBase External Mapping Table

-- note: names are case-sensitive
CREATE EXTERNAL TABLE impala_hive.hbase_app_test(
  `row_key` string COMMENT '',
  `age` int COMMENT '',
  `c_eng_nme` string COMMENT '',
  `c_latest_mrk` decimal(10,0) COMMENT '',
  `t_birthday` timestamp COMMENT ''
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
    "hbase.columns.mapping"=":key,info:age,info:c_eng_nme,info:c_latest_mrk,info:t_birthday'
)
TBLPROPERTIES ("hbase.table.name"="ods_xxx_app_test");

The key part is the "hbase.columns.mapping" entry in SERDEPROPERTIES, which maps the HBase columns to the Hive columns; :key marks the row key, and if it is omitted the first column is used as the row key by default. Finally, "hbase.table.name" in TBLPROPERTIES names the HBase table being mapped. The only thing to watch out for is case sensitivity.
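
If you want to try this example from scratch, the HBase side needs a table named ods_xxx_app_test with an info column family; a minimal sketch, where the row key and values are made up for illustration:

# create the HBase table referenced by hbase.table.name and add one sample row
echo "create 'ods_xxx_app_test', 'info'
put 'ods_xxx_app_test', 'rk001', 'info:age', '30'
put 'ods_xxx_app_test', 'rk001', 'info:c_eng_nme', 'Tom'" | hbase shell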

7.2 Importing HBase Data into a Hive Table

We can now query HBase data from Hive with familiar SQL. If we want to load the data into a native Hive table, it can be done as follows:

-- first create an ORC table in Hive
CREATE EXTERNAL TABLE `hbase_app_test_orc`(
`row_key` string,
`age` int,    
`c_eng_nme` string COMMENT '',
`c_latest_mrk` decimal(10,0) COMMENT '',
`t_birthday` timestamp COMMENT ''
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' 
STORED AS ORC;

-- insert the HBase data into the Hive table
jdbc:hive2://bdm1:10000/default> insert overwrite table hbase_app_test_orc
. . . . . . . . . . . . . . . . . > select * from impala_hive.hbase_app_test;
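
As a quick sanity check (reusing the JDBC URL from the prompt above; authentication options are omitted), the row counts of the mapping table and the ORC copy should match:

# compare row counts between the HBase mapping table and the ORC copy
beeline -u jdbc:hive2://bdm1:10000/default \
    -e "select count(*) from impala_hive.hbase_app_test;" \
    -e "select count(*) from hbase_app_test_orc;"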

7.3 org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the location for replica 0

This error came up after integrating Hive with HBase, when running insert overwrite on the HBase mapping table from Hive. The full error message is:

Error: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1617775791467_1624_1_00, diagnostics=[Vertex vertex_1617775791467_1624_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t_m_cus_bas_test initializer failed, vertex=vertex_1617775791467_1624_1_00 [Map 1], org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't getthe location for replica 0
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:332)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
        at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
        at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:268)
        at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:436)
        at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:311)
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:596)
        at org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:754)
        at org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:670)
        at org.apache.hadoop.hbase.MetaTableAccessor.scanMetaForTableRegions(MetaTableAccessor.java:665)
        at org.apache.hadoop.hbase.client.HRegionLocator.listRegionLocations(HRegionLocator.java:152)
        at org.apache.hadoop.hbase.client.HRegionLocator.getAllRegionLocations(HRegionLocator.java:88)
        at org.apache.hadoop.hbase.mapreduce.RegionSizeCalculator.getRegionServersOfTable(RegionSizeCalculator.java:103)
        at org.apache.hadoop.hbase.mapreduce.RegionSizeCalculator.init(RegionSizeCalculator.java:79)
        at org.apache.hadoop.hbase.mapreduce.RegionSizeCalculator.<init>(RegionSizeCalculator.java:61)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.oneInputSplitPerRegion(TableInputFormatBase.java:294)
        at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:257)
        at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplitsInternal(HiveHBaseTableInputFormat.java:349)
        at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.access$200(HiveHBaseTableInputFormat.java:68)
        at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat$2.run(HiveHBaseTableInputFormat.java:271)
        at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat$2.run(HiveHBaseTableInputFormat.java:269)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:269)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:524)
        at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:779)
        at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:243)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269)
        at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253)
        at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
        at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
        at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
        at org.apache.hadoop.hbase.client.ConnectionImplementation.get(ConnectionImplementation.java:2009)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.locateMeta(ConnectionImplementation.java:785)
        at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:752)
        at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:325)
        ... 41 more
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/meta-region-server
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:189)
        at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:323)
        ... 1 more
]Vertex killed, vertexName=Reducer 3, vertexId=vertex_1617775791467_1624_1_02, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1617775791467_1624_1_02 [Reducer 3] killed/failed due to:OTHER_VERTEX_FAILURE]Vertex killed, vertexName=Reducer 2, vertexId=vertex_1617775791467_1624_1_01, diagnostics=[Vertex received Kill in INITED state., Vertex vertex_1617775791467_1624_1_01 [Reducer 2] killed/failed due to:OTHER_VERTEX_FAILURE]DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:2 (state=08S01,code=2)

Two things stand out in the error above. First, the HBase client fails inside RpcRetryingCallerWithReadReplicas with Can't get the location for replica 0. Second, when it reads the meta-region-server information from ZooKeeper it uses the path /hbase/meta-region-server, which is the default value; this default can be seen in the source of the hbase-client-*.jar under Hive's lib directory:
hbase-client-2.0.2.3.1.0.0-78.jar/org.apache.hadoop.hbase.zookeeper.ZNodePaths
So the rough cause is clear: when the SQL is executed through Hive, part of the HBase configuration is not picked up, so the client asks ZooKeeper for the region server meta information with the default settings and cannot find it. Passing the values via hive cli or beeline --hivevar does not help either; even when set, the HBase client still does not load the ZooKeeper-related configuration. The fix is to edit hive-site.xml directly and add the following properties (all three must match the HBase configuration used by the cluster):

    <property>
      <name>zookeeper.znode.parent</name>
      <value>/hbase-unsecure</value>
    </property>

    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>cdh1,cdh2,cdh3</value>
    </property>

    <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>2181</value>
    </property>
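
Before hard-coding zookeeper.znode.parent, it is worth confirming which parent znode the cluster actually uses (for example /hbase vs /hbase-unsecure); a quick check, assuming the zookeeper-client wrapper and the HBase client configuration are available on the node:

# list the root znodes to see which HBase parent exists
echo "ls /" | zookeeper-client -server cdh1:2181

# or read the value HBase itself is configured with
grep -A 1 "zookeeper.znode.parent" /etc/hbase/conf/hbase-site.xml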

After restarting the Hive service, the SQL above runs successfully.


Final Notes

The installation process described here applies equally to other CDH 6.x versions. Due to time constraints the article may inevitably contain mistakes; if you spot any, or run into other problems during installation, feel free to leave a comment.


Note: the entire installation in this document was performed on x86 systems, which is the architecture best supported by the big-data ecosystem. If your environment is aarch64 (an execution state of the ARMv8 architecture), the official packages do not support it directly and the sources have to be recompiled. Huawei provides partial support for this; see the porting guide in the official Huawei Cloud Kunpeng documentation, 移植指南(CDH).


For JDK or CDH upgrades, you can also see my other blog post CDH之JDK 版本升级(Open JDK1.8)和cdh升级.


