CDH 5.15 + MariaDB 10.1 Installation (Offline)

Environment

centos7.5
cdh5.15.1
mariadb10.1

Download CDH and CM

CDH
Download URL: http://archive.cloudera.com/cdh5/parcels/5.15.1/

Make sure you grab the files that match your OS version:

CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel	2018-08-17 10:15	2.0 GB
CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel.sha1	2018-08-17 10:15	41.0 B
manifest.json	2018-08-23 10:12	72.0 KB

CM
Download URL: http://archive.cloudera.com/cm5/cm/5/

cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz

Basic preparation

Set the hostnames

# hostnamectl set-hostname node101.yyd.cn

vi /etc/hostname 
node101.yyd.cn


vi /etc/hosts
192.168.26.151  node101.yyd.cn  cdh101
192.168.26.150  node102.yyd.cn  cdh102
192.168.26.149  node103.yyd.cn  cdh103


vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=node101.yyd.cn

[root@localhost ~]# service network restart

Check the OS environment and host mappings

[root@localhost ~]# cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core) 
[root@localhost ~]#
[root@localhost ~]#
[root@localhost ~]# uname -r
3.10.0-862.el7.x86_64
[root@localhost ~]# 
[root@localhost ~]# uname -m
x86_64
[root@localhost ~]# 
[root@localhost ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:         128659        1825      124924          18        1909      126155
Swap:          7991           0        7991
[root@localhost ~]# 

[root@localhost ~]# hostname -i
172.30.1.71
[root@node101 ~]# 
[root@node101 ~]# cat /etc/hosts
192.168.26.151  node101.yyd.cn  cdh101
192.168.26.150  node102.yyd.cn  cdh102
192.168.26.149  node103.yyd.cn  cdh103
[root@node101 ~]# 
[root@node101 ~]#

Set up passwordless SSH login (all nodes)

#On the master node
[root@cdh101 ~]$ ll -a   #check for the .ssh directory
[root@cdh101 ~]$cd .ssh
# Generate a key pair on every server
[root@cdh101 ~]$ssh-keygen -t rsa    #press Enter through all the prompts
[root@cdh101 ~]$ll    #ll now shows two new files
id_rsa       --> private key
id_rsa.pub     --> public key

#Append the public key to authorized_keys and fix the file's permissions (important, do not skip)
[root@cdh101 ~]$cat id_rsa.pub >> authorized_keys
[root@cdh101 ~]$chmod 600 authorized_keys #restrict access to authorized_keys
[root@cdh101 ~]$ssh localhost  #no password prompt now, so starting the cluster won't keep asking for passwords
#Repeat on every server; with three machines, authorized_keys ends up holding three keys
[root@cdh101 ~]$scp ~/.ssh/authorized_keys root@cdh102:~/.ssh/ #scp the file to every worker node
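The end state is worth sanity-checking. A minimal sketch in a scratch directory, using placeholder key strings (the `AAAA...` lines and host names are made up, not real keys): it shows the shape authorized_keys should end up in on a three-node cluster, plus the 600 permission sshd requires before it will trust the file.

```shell
# run in a throwaway directory; the key lines below are placeholders, not real keys
dir=$(mktemp -d)
{
  echo "ssh-rsa AAAA...key1 root@cdh101"
  echo "ssh-rsa AAAA...key2 root@cdh102"
  echo "ssh-rsa AAAA...key3 root@cdh103"
} > "$dir/authorized_keys"
chmod 600 "$dir/authorized_keys"     # sshd (with StrictModes) rejects looser permissions
wc -l < "$dir/authorized_keys"       # one key per node -> 3
stat -c %a "$dir/authorized_keys"    # -> 600
```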

Disable the firewall (all nodes)

systemctl stop firewalld
systemctl disable firewalld

Run this on every node. The cluster uses far too many ports to whitelist individually, so temporarily disabling the firewall makes installation much easier; once installation is done you can define firewall policies as needed to keep the cluster secure.

Disable SELinux (all nodes)

vi  /etc/sysconfig/selinux
SELINUX=permissive or SELINUX=disabled

This takes effect after a reboot; run setenforce 0 to disable enforcement for the current session.

Python 2.7

CentOS 7 ships Python 2.7 by default, so no change is needed.

Disable swappiness

With no swap in use and plenty of RAM, simply turn it off:

vi /etc/sysctl.conf
Add (run sysctl -p afterwards to apply):
vm.swappiness = 0

Raise the maximum number of open files

Temporarily:

ulimit -n 10240

Permanently:

vi /etc/security/limits.conf

* soft nofile 10240
* hard nofile 10240
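After logging back in you can confirm the limits took effect; a minimal check (the printed values depend on your system, and the soft limit is what processes get by default while the hard limit is the ceiling set above):

```shell
# query the soft and hard open-file limits for the current shell
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```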

Disable Transparent Huge Pages (THP)

THP can cause latency problems under heavy memory churn; Hadoop, MongoDB and similar workloads recommend disabling it.

    vi /etc/default/grub

    #append [space + transparent_hugepage=never] just before the closing double quote of the line below
    GRUB_CMDLINE_LINUX="crashkernel=auto  net.ifnames=0 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet transparent_hugepage=never"

    #regenerate the grub config
    grub2-mkconfig -o /etc/grub2.cfg
    reboot

Cluster management scripts

A script to run a command on all hosts

The xcall.sh shell script runs the same command on every host at once:

[root@node1 ~]# vi /usr/bin/xcall.sh
[root@node1 ~]# 
[root@node1 ~]# more `which xcall.sh`                                 #show the script contents
#!/bin/bash
#@author :yyd

#make sure the user passed a command
if [ $# -lt 1 ];then
        echo "Please pass a command"
        exit
fi

#grab the command the user typed
cmd=$@

for (( i=101;i<=103;i++ ))
do
        #switch the terminal to green
        tput setaf 2
        echo ============= node${i}.yyd.cn : $cmd ============
        #switch the terminal back to the default grey-white
        tput setaf 7
        #run the command remotely
        ssh node${i}.yyd.cn  $cmd
        #report whether the command succeeded
        if [ $? -eq 0 ];then
                echo "Command executed successfully"
        fi
done

[root@node1 ~]# chmod +x /usr/bin/xcall.sh                 #don't forget the execute bit!
[root@node1 ~]# 
[root@node1 ~]# xcall.sh ls -d /home/                                    #take the script for a test run
============= node101.yyd.cn : ls -d /home/ ============
/home/
Command executed successfully
============= node102.yyd.cn : ls -d /home/ ============
/home/
Command executed successfully
============= node103.yyd.cn : ls -d /home/ ============
/home/
Command executed successfully
[root@node1 ~]# 


Install rsync on all nodes with the script above

[root@node1 ~]# xcall.sh "yum -y install rsync"                            #rsync is already installed on my cluster, so yum reports nothing to do
============= node101.yyd.cn : yum -y install rsync ============
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.cn99.com
Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
Nothing to do
Command executed successfully
============= node102.yyd.cn : yum -y install rsync ============
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.huaweicloud.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.163.com
Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
Nothing to do
Command executed successfully
============= node103.yyd.cn : yum -y install rsync ============
Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirrors.huaweicloud.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.aliyun.com
Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
Nothing to do
Command executed successfully
[root@node1 ~]# 


A script to sync files to all hosts

rsync is a data mirroring and backup tool for Linux:

[root@node1 ~]# vi /usr/bin/xrsync.sh
[root@node1 ~]# 
[root@node1 ~]# chmod +x /usr/bin/xrsync.sh 
[root@node1 ~]# 
[root@node1 ~]# more `which xrsync.sh`
#!/bin/bash
#@author :yyd


#make sure the user passed a path
if [ $# -lt 1 ];then
    echo "Please pass a file path"
    exit
fi


#grab the file path
file=$@

#file name component
filename=`basename $file`

#parent directory component
dirpath=`dirname $file`

#resolve the absolute path
cd $dirpath
fullpath=`pwd -P`

#sync the file to the other nodes
for (( i=102;i<=103;i++ ))
do
    #switch the terminal to green
    tput setaf 2
    echo =========== node${i}.yyd.cn : $file ===========
    #switch the terminal back to the default grey-white
    tput setaf 7
    #run rsync against the remote host
    rsync -lr $filename `whoami`@node${i}.yyd.cn:$fullpath
    #report whether the command succeeded
    if [ $? -eq 0 ];then
        echo "Command executed successfully"
    fi
done

[root@node1 ~]# 
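The basename/dirname/pwd -P combination in xrsync.sh turns whatever path the user typed into an absolute destination path for the remote side. A standalone sketch of that logic, run against a throwaway file (hosts.txt and the directory names are made up):

```shell
cd "$(mktemp -d)"                       # scratch directory
mkdir -p some/dir && touch some/dir/hosts.txt
file=./some/dir/hosts.txt
filename=$(basename "$file")            # file name component: hosts.txt
dirpath=$(dirname "$file")              # relative parent: ./some/dir
fullpath=$(cd "$dirpath" && pwd -P)     # absolute parent, symlinks resolved
echo "$filename"
echo "$fullpath"
```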


Test xrsync.sh by syncing data

[root@cdh101 .ssh]# xcall.sh cat /etc/hosts                #every node now has the same /etc/hosts
============= node101.yyd.cn : cat /etc/hosts ============
192.168.26.151  node101.yyd.cn  cdh101
192.168.26.150  node102.yyd.cn  cdh102
192.168.26.149  node103.yyd.cn  cdh103
============= node102.yyd.cn : cat /etc/hosts ============
192.168.26.151  node101.yyd.cn  cdh101
192.168.26.150  node102.yyd.cn  cdh102
192.168.26.149  node103.yyd.cn  cdh103
============= node103.yyd.cn : cat /etc/hosts ============
192.168.26.151  node101.yyd.cn  cdh101
192.168.26.150  node102.yyd.cn  cdh102
192.168.26.149  node103.yyd.cn  cdh103
[root@cdh101 .ssh]#


Sync data with xrsync.sh ([root@node101 ~]# xrsync.sh /etc/hosts)

Install third-party dependency packages (note: required on every machine)

[root@cdh101 ~]# xcall.sh "yum -y install chkconfig python bind-utils psmisc libxslt zlib sqlite cyrus-sasl-plain cyrus-sasl-gssapi fuse fuse-libs redhat-lsb"

[root@cdh101 ~]# 

Hue error: the Load Balancer role's process fails to start

httpd and mod_ssl must be installed beforehand:

yum install httpd

yum install mod_ssl

Hue web UI cannot connect to the database

Log in to the node where Hue is installed and run the following; once the packages finish installing, go back to the UI, click Test Connection again, and it will succeed:

yum install python-psycopg2
yum install libxml2-python
yum install mysql*

This resolves the "Unexpected error. Unable to verify database connection" error when connecting Hue to its database in CDH.

Install wget, vim and lrzsz on all nodes with the script above

[root@node1 ~]# xcall.sh "yum -y install wget vim lrzsz "

Install the JDK (all nodes)

Download page

https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Upload jdk-8u191-linux-x64.tar.gz to the ~/soft directory

[root@cdh101 ~]# mkdir /usr/java  && cd /root/soft
[root@cdh101 ~]# tar -zxf jdk-8u191-linux-x64.tar.gz
[root@cdh101 ~]# mv jdk1.8.0_191 jdk1.8
[root@cdh101 ~]# mv jdk1.8 /usr/java

Append to ~/.bash_profile:

JAVA_HOME=/usr/java/jdk1.8
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH

source ~/.bash_profile

Note: CDH 5 defaults to JDK 7; here we use JDK 8.

Check that java runs:

[root@cdh101 soft]# java -version
openjdk version "1.8.0_161"
OpenJDK Runtime Environment (build 1.8.0_161-b14)
OpenJDK 64-Bit Server VM (build 25.161-b14, mixed mode)
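The transcript above reports OpenJDK rather than the Oracle JDK that was just unpacked: if $JAVA_HOME/bin is appended to the *end* of PATH, the system JDK in /usr/bin wins the lookup. A small sketch of why ordering matters (the JAVA_HOME path is the one used in this install; no java binary is actually invoked here):

```shell
JAVA_HOME=/usr/java/jdk1.8
PATH=$JAVA_HOME/bin:$PATH     # prepend, so this JDK shadows any system OpenJDK
export JAVA_HOME PATH
echo "$PATH" | cut -d: -f1    # the first PATH entry is searched first
```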

Install the NTP service

First check whether the ntp package is installed: rpm -q ntp

Install it online: yum -y install ntp

After a successful install, rpm -q ntp shows the package.

The ntp service does not start right after installation; it would only start at the next reboot, so start it manually now.

Once started, netstat -an | grep 123 shows the ntp service listening on port 123.

Check the ntpd process with: ps -ef | grep ntpd

By default ntp syncs time from public internet servers. Within a cluster, all that matters is that every server agrees on the time, so configure one server as the time server and make the rest its clients; they fetch time from it instead of the internet, which removes the external dependency and is more reliable.

Time server (master node) configuration:

Time server IP: 192.168.26.151

vi /etc/ntp.conf

Uncomment the following two lines:
restrict 127.0.0.1
restrict ::1
Then add one line below them:
restrict 192.168.24.0 mask 255.255.252.0 nomodify notrap


Comment out:
#server 0.centos.pool.ntp.org
#server 1.centos.pool.ntp.org
#server 2.centos.pool.ntp.org

Append at the end of the file:
server 127.127.1.0
fudge 127.127.1.0 stratum 10

Adjust the restrict line to your own gateway and subnet; any network/mask pair that covers the LAN hosts works, it just changes the range allowed to communicate. The restrict line authorizes the range of LAN hosts allowed to sync time from this machine, and server 127.127.1.0 points the daemon at its own local clock.

Save and exit, then restart the ntp service:

systemctl restart ntpd.service 

Client configuration on the other servers:

vi /etc/ntp.conf

Comment out:
#server 0.centos.pool.ntp.org
#server 1.centos.pool.ntp.org
#server 2.centos.pool.ntp.org

Append the time server's IP at the end of the file:
server 192.168.26.151

Sync the client nodes against the server once:

ntpdate 192.168.26.151

Start ntpd on the client nodes:

systemctl start ntpd
systemctl enable ntpd  (or: chkconfig ntpd on)

Error

Running the sync manually fails:
[root@localhost /]# /usr/sbin/ntpdate cn.pool.ntp.org
22 May 13:56:26 ntpdate[17023]: the NTP socket is in use, exiting

Fix

The NTP socket is held by the running ntpd service, which is why the sync fails. Stop ntpd, rerun the ntpdate command, and the sync succeeds.


Stop the ntp service:
[root@localhost /]# service ntpd stop
Shutting down ntpd: [  OK  ]

Keep ntpd from starting at boot:
[root@localhost /]# chkconfig ntpd off

Manual sync now succeeds:
[root@localhost /]# /usr/sbin/ntpdate cn.pool.ntp.org
22 May 14:11:27 ntpdate[17352]: step time server 5.79.108.34 offset 826.232303 sec

Save the configuration and restart the ntp service.
Apply the client configuration on every client node; once everything is running, the cluster syncs periodically on its own and all nodes keep consistent time.

Install the database: mariadb-server (10.1)

Environment

CentOS 7.5
10.1.38-MariaDB

Remove any existing MySQL/MariaDB completely (skip this on a fresh machine)

MySQL is no longer shipped in the CentOS 7 repositories; it has been replaced by MariaDB.

List what rpm already has installed:

[root@localhost logs]# rpm -qa | grep Maria*  
MariaDB-common-10.1.38-1.el7.centos.x86_64
MariaDB-server-10.1.38-1.el7.centos.x86_64
MariaDB-shared-10.1.38-1.el7.centos.x86_64
MariaDB-client-10.1.38-1.el7.centos.x86_64

Remove them all:


rpm -e MariaDB-common-10.1.38-1.el7.centos.x86_64

[root@cdh-1 mysql]# rpm -e MariaDB-shared-10.1.38-1.el7.centos.x86_64
error: Failed dependencies:
	libmysqlclient.so.18()(64bit) is needed by (installed) MySQL-python-1.2.5-1.el7.x86_64
	libmysqlclient.so.18(libmysqlclient_18)(64bit) is needed by (installed) MySQL-python-1.2.5-1.el7.x86_64

#force the uninstall with --nodeps when dependencies block it
rpm -e --nodeps MariaDB-shared-10.1.38-1.el7.centos.x86_64


#remove everything left via yum:
[root@localhost logs]# yum -y remove mari*
#delete the database files:
[root@localhost logs]# rm -rf /var/lib/mysql/*

Add the MariaDB yum repo

[root@cdh101 java]# cd /etc/yum.repos.d/
[root@cdh101 yum.repos.d]# cp CentOS-Base.repo CentOS-Base.repo.bak
[root@cdh101 yum.repos.d]# vim CentOS-Base.repo

[mariadb]
name = MariaDB
baseurl = https://mirrors.ustc.edu.cn/mariadb/yum/10.1/centos7-amd64/
gpgkey=https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck=1

#Mirror index: https://mirrors.ustc.edu.cn/mariadb/yum/
#Browse that mirror to pick another CentOS 7 amd64 release if you want a different version; just change the version number in baseurl. Before installing MariaDB you can import the GPG key:

[root@VM_0_4_centos ~]# rpm --import https://yum.mariadb.org/RPM-GPG-KEY-MariaDB

Run the install (it can take a while; the USTC mirror speeds it up considerably):

yum -y install MariaDB-server MariaDB-client

Remove the old InnoDB log files, if present:

cd /var/lib/mysql/
rm -rf /var/lib/mysql/ib_logfile0 
rm -rf /var/lib/mysql/ib_logfile1
vi /etc/my.cnf              # MariaDB 5.x
vi /etc/my.cnf.d/server.cnf # MariaDB 10+

Add the following under the [mysqld] section:


lower_case_table_names=1
collation-server = utf8_general_ci
character-set-server = utf8

transaction-isolation = READ-COMMITTED
key_buffer = 16M
key_buffer_size = 32M
max_allowed_packet = 32M
thread_stack = 256K
thread_cache_size = 64
query_cache_limit = 8M
query_cache_size = 64M
query_cache_type = 1

max_connections = 550
#expire_logs_days = 10
#max_binlog_size = 100M

#log_bin should be on a disk with enough free space.
#Replace '/var/lib/mysql/mysql_binary_log' with an appropriate path for your
#system and chown the specified folder to the mysql user.
log_bin=/var/lib/mysql/mysql_binary_log

#In later versions of MariaDB, if you enable the binary log and do not set
#a server_id, MariaDB will not start. The server_id must be unique within
#the replicating group.
server_id=1

binlog_format = mixed

read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M

# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit  = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M


Notes on the settings

  1. Use the READ-COMMITTED isolation level to reduce deadlocks:
    transaction-isolation = READ-COMMITTED
  2. With heavy write traffic, write straight to disk and bypass the OS cache:
    innodb_flush_method = O_DIRECT
  3. For max_connections, with fewer than 50 hosts the rule of thumb is 100 connections per database plus 50 spare.
    For example, with Cloudera Manager Server, Activity Monitor, Reports Manager, Cloudera Navigator, and the Hive metastore —
    5 databases — set the maximum to 550:
    max_connections = 550
    Above 50 hosts, split the databases across multiple database servers.
  4. The binary log section is optional; configure it according to your database management policy.

Enable and start the service:
systemctl enable mysql &&\
systemctl restart mysql


# if the unit name above doesn't exist, use:
systemctl start mariadb.service

Set the root password

/usr/bin/mysql_secure_installation

[...]
Enter current password for root (enter for none):
OK, successfully used password, moving on...
[...]
Set root password? [Y/n] Y  <-- set a root password: type y and Enter, or just press Enter
New password: <-- enter the new root password
Re-enter new password: <-- enter it again
Remove anonymous users? [Y/n] Y <-- remove anonymous users; recommended in production, so just press Enter
[...]
Disallow root login remotely? [Y/n] N <-- disallow remote root login? choose Y/n as your setup requires
[...]
Remove test database and access to it [Y/n] Y <-- remove the test database; just press Enter
[...]
Reload privilege tables now? [Y/n] Y <-- reload the privilege tables; just press Enter
All done!


Create the databases

Create them directly:

mysql -u root -p
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY '123123';

flush privileges;
# generic template for the statements above:
CREATE DATABASE <database> DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON <database>.* TO '<user>'@'%' IDENTIFIED BY '<password>';
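Typing nine near-identical statements invites typos, so the list above can equally be generated from the template. A sketch (create_dbs.sql is a scratch file name of my choosing; feed the resulting file to mysql -u root -p):

```shell
cd "$(mktemp -d)"     # scratch directory
# emit one CREATE DATABASE statement per CM service database
for db in scm amon rman hue metastore sentry nav navms oozie; do
  echo "CREATE DATABASE $db DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;"
done > create_dbs.sql
wc -l < create_dbs.sql    # nine statements, one per service database
```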

数据库列表描述

ServiceDatabaseUser
Cloudera Manager Serverscmscm
Activity Monitoramonamon
Reports Managerrmanrman
Huehuehue
Hive Metastore Servermetastorehive
Sentry Serversentrysentry
Cloudera Navigator Audit Servernavnav
Cloudera Navigator Metadata Servernavmsnavms
Oozieoozieoozie

Remove the embedded PostgreSQL database configuration

If this file exists, delete it:

rm /etc/cloudera-scm-server/db.mgmt.properties

Create the cloudera-scm user (all nodes)

[root@node101 schema]# xcall.sh 'useradd --system --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm'
============= node101.yyd.cn : useradd --system --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm ============
Command executed successfully
============= node102.yyd.cn : useradd --system --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm ============
Command executed successfully
============= node103.yyd.cn : useradd --system --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm ============
Command executed successfully
[root@node101 schema]# 
[root@node101 schema]# xcall.sh  id cloudera-scm
============= node101.yyd.cn : id cloudera-scm ============
uid=994(cloudera-scm) gid=993(cloudera-scm) groups=993(cloudera-scm)
Command executed successfully
============= node102.yyd.cn : id cloudera-scm ============
uid=993(cloudera-scm) gid=992(cloudera-scm) groups=992(cloudera-scm)
Command executed successfully
============= node103.yyd.cn : id cloudera-scm ============
uid=994(cloudera-scm) gid=993(cloudera-scm) groups=993(cloudera-scm)
Command executed successfully
[root@node101 schema]# 

Install Cloudera Manager Server and Agent (all machines)

Create the directory and upload the files

[root@node101 ~]# xcall.sh mkdir /root/download
============= node101.yyd.cn : mkdir /root/download ============
Command executed successfully
============= node102.yyd.cn : mkdir /root/download ============
Command executed successfully
============= node103.yyd.cn : mkdir /root/download ============
Command executed successfully
[root@node101 ~]# 
[root@node101 ~]# cd download/                            #upload the previously downloaded CDH files to node101.yyd.cn
[root@node101 download]#
[root@node101 download]# ll
total 2890556
-rw-r--r-- 1 root root 2120090032 Aug 23 03:54 CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel
-rw-r--r-- 1 root root         41 Sep 13 01:44 CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel.sha1
-rw-r--r-- 1 root root  838894986 Sep 13 02:47 cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz
-rw-r--r-- 1 root root      73767 Aug 24 08:33 manifest.json
-rw-r--r-- 1 root root     855946 Sep 12 18:14 mysql-connector-java-5.1.26.jar
[root@node101 download]# 
[root@node101 download]# 

Distribute the CM tarball (cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz) to each agent

[root@node101 download]# ll
total 2890556
-rw-r--r-- 1 root root 2120090032 Aug 23 03:54 CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel
-rw-r--r-- 1 root root         41 Sep 13 01:44 CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel.sha1
-rw-r--r-- 1 root root  838894986 Sep 13 02:47 cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz
-rw-r--r-- 1 root root      73767 Aug 24 08:33 manifest.json
[root@node101 download]# 
[root@node101 download]# xrsync.sh cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz         #distribute the CM tarball to each agent
=========== node102.yyd.cn : cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz ===========
Command executed successfully
=========== node103.yyd.cn : cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz ===========
Command executed successfully
[root@node101 download]# 
[root@node101 download]# 
[root@node101 download]# 
[root@node101 download]# xcall.sh ls -l `pwd`                                        #verify the distribution completed
============= node101.yyd.cn : ls -l /root/download ============
total 2890556
-rw-r--r-- 1 root root 2120090032 Aug 23 03:54 CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel
-rw-r--r-- 1 root root         41 Sep 13 01:44 CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel.sha1
-rw-r--r-- 1 root root  838894986 Sep 13 02:47 cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz
-rw-r--r-- 1 root root      73767 Aug 24 08:33 manifest.json
Command executed successfully
============= node102.yyd.cn : ls -l /root/download ============
total 819236
-rw-r--r-- 1 root root 838894986 Sep 13 07:13 cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz
Command executed successfully
============= node103.yyd.cn : ls -l /root/download ============
total 819236
-rw-r--r-- 1 root root 838894986 Sep 13 07:14 cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz
Command executed successfully
[root@node101 download]# 

Create the directory CM will be extracted into (all nodes)

[root@node101 download]# xcall.sh mkdir /opt/cloudera-manager
============= node101.yyd.cn : mkdir /opt/cloudera-manager ============
Command executed successfully
============= node102.yyd.cn : mkdir /opt/cloudera-manager ============
Command executed successfully
============= node103.yyd.cn : mkdir /opt/cloudera-manager ============
Command executed successfully
[root@node101 download]# 

Extract CM into /opt/cloudera-manager (all nodes)

[root@node101 download]# xcall.sh tar -zxf /root/download/cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz -C /opt/cloudera-manager
============= node101.yyd.cn : tar -zxf /root/download/cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz -C /opt/cloudera-manager ============
Command executed successfully
============= node102.yyd.cn : tar -zxf /root/download/cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz -C /opt/cloudera-manager ============
Command executed successfully
============= node103.yyd.cn : tar -zxf /root/download/cloudera-manager-centos7-cm5.15.1_x86_64.tar.gz -C /opt/cloudera-manager ============
Command executed successfully
[root@node101 download]# 
[root@node101 download]# xcall.sh ls -l /opt/cloudera-manager/
============= node101.yyd.cn : ls -l /opt/cloudera-manager/ ============
total 0
drwxr-xr-x 4 1106 4001 34 Jul 31 18:28 cloudera
drwxr-xr-x 9 1106 4001 81 Jul 31 18:28 cm-5.15.1
Command executed successfully
============= node102.yyd.cn : ls -l /opt/cloudera-manager/ ============
total 0
drwxr-xr-x 4 jenkins jenkins 34 Jul 31 18:28 cloudera
drwxr-xr-x 9 jenkins jenkins 81 Jul 31 18:28 cm-5.15.1
Command executed successfully
============= node103.yyd.cn : ls -l /opt/cloudera-manager/ ============
total 0
drwxr-xr-x 4 1106 4001 34 Jul 31 18:28 cloudera
drwxr-xr-x 9 1106 4001 81 Jul 31 18:28 cm-5.15.1
Command executed successfully
[root@node101 download]# 

Configure the CM Server database

Note: rename the JDBC jar and strip the version number from its name, or the next step will cause you unnecessary grief!

The init script /usr/share/cmf/schema/scm_prepare_database.sh needs /usr/share/java/mysql-connector-java.jar in order to initialize the database.

# the command CDH uses to initialize the database from its config file:
#  /usr/java/jdk1.8.0_181/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/usr/share/cmf/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
mkdir -p /usr/share/java/
Upload mysql-connector-java-5.1.46-bin.jar to /usr/share/java/
Rename it to mysql-connector-java.jar


Or: wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.46.tar.gz
You must extract the tarball — the .jar you need is inside it!
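The rename is the step people trip over, so here it is in isolation. This sketch runs in a scratch directory against an empty stand-in file (the real jar comes out of the tarball above, and the real destination is /usr/share/java/):

```shell
cd "$(mktemp -d)"                                 # scratch directory
touch mysql-connector-java-5.1.46-bin.jar         # empty stand-in for the real jar
# copy under the exact versionless name CM's init script looks for
cp mysql-connector-java-5.1.46-bin.jar mysql-connector-java.jar
ls mysql-connector-java.jar
```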

Initialize the CM database locally

[root@node101 download]# cd /opt/cloudera-manager/cm-5.15.1/share/cmf/schema/
[root@node101 schema]# 
[root@node101 schema]# ll
total 60
drwxr-xr-x 4 1106 4001  8192 Jul 31 18:29 mysql
drwxr-xr-x 4 1106 4001  8192 Jul 31 18:29 oracle
drwxr-xr-x 4 1106 4001 12288 Jul 31 18:29 postgresql
-rw-r--r-- 1 1106 4001  1437 Jul 31 18:29 scm_database_functions.sh
-rwxr-xr-x 1 1106 4001 12723 Jul 31 18:29 scm_prepare_database.sh
[root@node101 schema]# 
[root@node101 schema]# ./scm_prepare_database.sh mysql scm root

Configure the CM agents (point them at the server)

[root@node101 soft]# 
[root@node101 soft]# grep server_port /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini              #note: server_port is the Server/Agent communication port; don't change it without good reason!
server_port=7182
[root@node101 soft]# 
[root@node101 soft]# grep server_host /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini               #the CM server defaults to the local machine
server_host=localhost
[root@node101 soft]# 
[root@node101 soft]# sed -i 's/server_host=localhost/server_host=node101.yyd.cn/g' /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini         #point the agents at the CM Server
[root@node101 soft]# 
[root@node101 soft]# grep server_host /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini                  #check the change
server_host=node101.yyd.cn
[root@node101 soft]# 
[root@node101 soft]# 

Sync that configuration to the other nodes

[root@node101 soft]# xrsync.sh /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini        #sync the config to every node
=========== node102.yyd.cn : /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini ===========
Command executed successfully
=========== node103.yyd.cn : /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini ===========
Command executed successfully
[root@node101 soft]# 
[root@node101 soft]# 
[root@node101 soft]# xcall.sh grep server_host /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini              #verify the config synced
============= node101.yyd.cn : grep server_host /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini ============
server_host=node101.yyd.cn
Command executed successfully
============= node102.yyd.cn : grep server_host /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini ============
server_host=node101.yyd.cn
Command executed successfully
============= node103.yyd.cn : grep server_host /opt/cloudera-manager/cm-5.15.1/etc/cloudera-scm-agent/config.ini ============
server_host=node101.yyd.cn
Command executed successfully
[root@node101 soft]# 


Create the Parcel directories

Create the parcel-repo directory on the Server

[root@node101 schema]# mkdir -p /opt/cloudera/parcel-repo                                #create the Parcel repo directory on the Server
[root@node101 schema]# 
[root@node101 schema]# chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo/        #don't forget to hand ownership to the cloudera-scm user created earlier!
[root@node101 schema]# 
[root@node101 schema]# ll -d  /opt/cloudera/parcel-repo/
drwxr-xr-x 2 cloudera-scm cloudera-scm 6 Sep 13 07:34 /opt/cloudera/parcel-repo/
[root@node101 schema]# 

Create the parcels directory on the agents

[root@node101 schema]# xcall.sh mkdir -p /opt/cloudera/parcels                                    #create the parcels directory on every agent
============= node101.yyd.cn : mkdir -p /opt/cloudera/parcels ============
Command executed successfully
============= node102.yyd.cn : mkdir -p /opt/cloudera/parcels ============
Command executed successfully
============= node103.yyd.cn : mkdir -p /opt/cloudera/parcels ============
Command executed successfully
[root@node101 schema]# 
[root@node101 schema]# xcall.sh chown cloudera-scm:cloudera-scm /opt/cloudera/parcels                #again, hand the directory over to the cloudera-scm user!
============= node101.yyd.cn : chown cloudera-scm:cloudera-scm /opt/cloudera/parcels ============
Command executed successfully
============= node102.yyd.cn : chown cloudera-scm:cloudera-scm /opt/cloudera/parcels ============
Command executed successfully
============= node103.yyd.cn : chown cloudera-scm:cloudera-scm /opt/cloudera/parcels ============
Command executed successfully
[root@node101 schema]# 
[root@node101 schema]# 
[root@node101 schema]# xcall.sh ls -ld /opt/cloudera/parcels                                        #verify the ownership change
============= node101.yyd.cn : ls -ld /opt/cloudera/parcels ============
drwxr-xr-x 2 cloudera-scm cloudera-scm 6 Sep 13 07:35 /opt/cloudera/parcels
Command executed successfully
============= node102.yyd.cn : ls -ld /opt/cloudera/parcels ============
drwxr-xr-x 2 cloudera-scm cloudera-scm 6 Sep 13 07:34 /opt/cloudera/parcels
Command executed successfully
============= node103.yyd.cn : ls -ld /opt/cloudera/parcels ============
drwxr-xr-x 2 cloudera-scm cloudera-scm 6 Sep 13 07:34 /opt/cloudera/parcels
Command executed successfully
[root@node101 schema]# 


Build the local CDH repository

[root@node101 schema]# cd ~/soft/
[root@node101 soft]# 
[root@node101 soft]# cp CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel /opt/cloudera/parcel-repo/
[root@node101 soft]# 
[root@node101 soft]# cp manifest.json /opt/cloudera/parcel-repo/                                        #small as it is, this file records the version dependencies between CDH and the Hadoop ecosystem components!
[root@node101 soft]# 
[root@node101 soft]# cp CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel.sha1 /opt/cloudera/parcel-repo/CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel.sha                    #note: the file is renamed (.sha1 -> .sha) during the copy!
[root@node101 soft]# 
[root@node101 soft]# ll /opt/cloudera/parcel-repo/
total 2070484
-rw-r--r-- 1 root root 2120090032 Sep 13 07:40 CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel
-rw-r--r-- 1 root root         41 Sep 13 07:41 CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel.sha
-rw-r--r-- 1 root root      73767 Sep 13 07:41 manifest.json
[root@node101 soft]# 



Tip:
        If you didn't download the .parcel.sha file, open manifest.json, find the entry with "parcelName": "CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel", and copy its "hash" value ("deff00898e410a34cf0a1e66c5dbe87546608f0c") into the .sha file. The same trick works for other versions.
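That tip can be scripted. The sketch below inlines a two-line stand-in for manifest.json (the real file has many entries but the same field layout) and pulls the hash out with grep/sed; point it at the real manifest in /opt/cloudera/parcel-repo/ instead:

```shell
cd "$(mktemp -d)"
# stand-in manifest with the same field layout as the real one
cat > manifest.json <<'EOF'
{ "parcels": [ { "parcelName": "CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel",
                 "hash": "deff00898e410a34cf0a1e66c5dbe87546608f0c" } ] }
EOF
parcel="CDH-5.15.1-1.cdh5.15.1.p0.4-el7.parcel"
# find the parcelName line, take the following line, extract the hash value
grep -A1 "\"parcelName\": \"$parcel\"" manifest.json \
  | sed -n 's/.*"hash": "\([0-9a-f]*\)".*/\1/p' > "$parcel.sha"
cat "$parcel.sha"
```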

Start the CM Server and Agents

[root@node101 soft]# cd /opt/cloudera-manager/cm-5.15.1/etc/init.d/
[root@node101 init.d]# 
[root@node101 init.d]# ll
total 32
-rwxr-xr-x 1 1106 4001 8871 Jul 31 18:28 cloudera-scm-agent
-rwxr-xr-x 1 1106 4001 8417 Jul 31 18:28 cloudera-scm-server
-rwxr-xr-x 1 1106 4001 4444 Jul 31 18:28 cloudera-scm-server-db
[root@node101 init.d]# 
[root@node101 init.d]# ./cloudera-scm-server start                                #start the Server
Starting cloudera-scm-server:                              [  OK  ]
[root@node101 init.d]# 
[root@node101 init.d]# 
[root@node101 init.d]# xcall.sh `pwd`/cloudera-scm-agent start                    #start all the agents in one go
============= node101.yyd.cn : /opt/cloudera-manager/cm-5.15.1/etc/init.d/cloudera-scm-agent start ============
Starting cloudera-scm-agent: [  OK  ]
Command executed successfully
============= node102.yyd.cn : /opt/cloudera-manager/cm-5.15.1/etc/init.d/cloudera-scm-agent start ============
Starting cloudera-scm-agent: [  OK  ]
Command executed successfully
============= node103.yyd.cn : /opt/cloudera-manager/cm-5.15.1/etc/init.d/cloudera-scm-agent start ============
Starting cloudera-scm-agent: [  OK  ]
Command executed successfully
[root@node101 init.d]# 
[root@node101 init.d]# ./cloudera-scm-server status                                                #check the Server is running
cloudera-scm-server (pid  16569) is running...
[root@node101 init.d]# 
[root@node101 init.d]# xcall.sh `pwd`/cloudera-scm-agent status                                  #check the Agents are running
============= node101.yyd.cn : /opt/cloudera-manager/cm-5.15.1/etc/init.d/cloudera-scm-agent status ============
cloudera-scm-agent (pid  16667) is running...
Command executed successfully
============= node102.yyd.cn : /opt/cloudera-manager/cm-5.15.1/etc/init.d/cloudera-scm-agent status ============
cloudera-scm-agent (pid  13847) is running...
Command executed successfully
============= node103.yyd.cn : /opt/cloudera-manager/cm-5.15.1/etc/init.d/cloudera-scm-agent status ============
cloudera-scm-agent (pid  8815) is running...
Command executed successfully
[root@node101 init.d]# 
[root@node101 init.d]#

Starting the processes is only half the story; watch the Server log as well:

tail -f /opt/cloudera-manager/cm-5.15.1/log/cloudera-scm-server/cloudera-scm-server.log

Startup succeeded when lines like the following appear:

2019-06-14 14:28:56,128 INFO WebServerImpl:org.mortbay.log: Started SelectChannelConnector@0.0.0.0:7180
2019-06-14 14:28:56,128 INFO WebServerImpl:com.cloudera.server.cmf.WebServerImpl: Started Jetty server.


Verify that you can log in to the web UI

http://<server-ip>:7180

Username: admin
Password: admin

The rest of the installation runs through the web UI's wizard. Use Chrome; a crash-prone browser will set your install back. If the login page appears, congratulations: CM is deployed successfully.

Configuration walkthrough

  1. Accept the license agreement
  2. Choose the Express edition
  3. Enter the host names or IPs, search, and select the hosts
  4. Cluster installation: use Parcels
  5. CDH version: CDH-5.15.1-1.cdh5.15.1.p0.4
  6. Skip the JDK installation
  7. Leave single-user mode unchecked
  8. Enter the SSH password
  9. CM sets up a temporary yum repo and installs cloudera-manager-daemons and cloudera-manager-agent automatically
  10. Host inspection
  • For the first warning, run on every node: echo 10 > /proc/sys/vm/swappiness
  • For the second warning, run these on every node (and append both lines to /etc/rc.local so they survive reboots):
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
  11. Cluster setup - choose the services you need
  12. Cluster setup - custom role assignments: defaults are fine
  13. Cluster setup - database setup
  • if Hue cannot connect: install the missing mysql-community-libs-compat-5.7.20-1.el7.x86_64.rpm
  • if python-lxml is missing: yum install python-lxml
  14. Cluster setup - review: defaults are fine
  15. Cluster setup - first run: defaults are fine
  16. Start installing the services

Cluster configuration

Use the MySQL connector with Hadoop and Sqoop

As root:

cd /var/lib/hadoop-yarn/ &&\
cp mysql-connector-java-5.1.46-bin.jar /opt/cloudera/parcels/CDH/jars/mysql-connector-java.jar

cd /opt/cloudera/parcels/CDH/lib/hadoop/lib &&\
ln -s ../../../jars/mysql-connector-java.jar mysql-connector-java.jar

cd /opt/cloudera/parcels/CDH/lib/sqoop/lib &&\
ln -s ../../../jars/mysql-connector-java.jar mysql-connector-java.jar
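The ../../../ link targets resolve relative to the directory the link lives in, not the directory you ran ln from. A scratch-directory sketch of the same layout (jars/ and lib/hadoop/lib/ are stand-ins for the CDH parcel tree):

```shell
cd "$(mktemp -d)"
mkdir -p jars lib/hadoop/lib
touch jars/mysql-connector-java.jar          # empty stand-in for the real jar
cd lib/hadoop/lib
# from lib/hadoop/lib, three ../ climb back up to the tree root, then into jars/
ln -s ../../../jars/mysql-connector-java.jar mysql-connector-java.jar
readlink mysql-connector-java.jar            # the stored relative target
[ -e mysql-connector-java.jar ] && echo "link resolves"
```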

Enable the Oozie web console

As root:

yum -y install unzip

cd /var/lib/hadoop-yarn/ &&\
unzip ext-2.2.zip  &&\
cd /opt/cloudera/parcels/CDH/lib/oozie/libext/ &&\
mv /var/lib/hadoop-yarn/ext-2.2 ./ &&\
chown -R  oozie:oozie ext-2.2/

Tuning CDH configuration variables

Log in to the CDH web console at http://xx:7180/ ; the general procedure is:

  1. On the home page, click a service in the left-hand pane
  2. Open the Configuration tab
  3. Change the variable and save
  4. Restart the service or the cluster

hive

  • Check "Auto Create and Upgrade Hive Metastore Database Schema" (datanucleus.autoCreateSchema)
  • Uncheck hive.metastore.schema.verification

hdfs

  • Uncheck dfs.permissions (reason unknown, but needed here)

yarn

Hive jobs were being killed for exceeding container memory, e.g.:

Container [pid=23898,containerID=container_1540292190468_0015_01_000005] is running beyond physical memory limits.

Container killed on request. Exit code is 143

Use yarn-tuning-guide.xlsx to work out appropriate values.

The settings must satisfy yarn.nodemanager.resource.memory-mb >= yarn.scheduler.maximum-allocation-mb

  • mapreduce.map.memory.mb 4G (used on the 0502 job, otherwise it errors; start with this parameter)
  • mapreduce.reduce.memory.mb 4G
  • yarn.scheduler.maximum-allocation-mb 24G
  • yarn.nodemanager.resource.memory-mb 24G
  • yarn.scheduler.minimum-allocation-mb 1G
  • yarn.scheduler.increment-allocation-mb 512M
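A quick arithmetic check of those values against the constraint above (numbers converted to MB; purely illustrative):

```shell
nm_mem=24576      # yarn.nodemanager.resource.memory-mb   (24G)
max_alloc=24576   # yarn.scheduler.maximum-allocation-mb  (24G)
min_alloc=1024    # yarn.scheduler.minimum-allocation-mb  (1G)
map_mem=4096      # mapreduce.map.memory.mb               (4G)
# nodemanager capacity >= max allocation >= per-task request >= min allocation
[ "$nm_mem" -ge "$max_alloc" ] \
  && [ "$max_alloc" -ge "$map_mem" ] \
  && [ "$map_mem" -ge "$min_alloc" ] \
  && echo "memory settings consistent"
```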

Restart the cluster services after tuning.

问题

cdh安装中遇到“正在获取安装锁”的解决办法

  1. kill 带scm_prepare_node 的进程
ps aux|grep scm_prepare_node|awk '{print $2}'|xargs kill -9
  1. cd /tmp目录,删除scm_prepare_node.*的文件
cd /tmp
ls -a
rm -rf scm_prepare_node.*

  2. Database configuration
    Problem: when first configuring the database for Cloudera Manager, a command found online, /opt/cm-5.13.0/share/cmf/schema/scm_prepare_database.sh mysql cm -hlocalhost -uroot -p --scm-host localhost scm scm scm, kept failing with java.sql.SQLException: Your password does not satisfy the current policy requirements. Raising the password complexity and lowering the database's password policy both made no difference; presumably the command's parameters changed between CDH versions.
    Solution: drop some of the parameters:
    /opt/cm-5.13.0/share/cmf/schema/scm_prepare_database.sh mysql -uroot -p scm scm

  3. cloudera-scm-server fails to start with cm-5.13.0/etc/init.d/cloudera-scm-server: line 109: pstree: command not found. The CentOS minimal install is missing a package; install it with yum install psmisc

  4. Restarting the services

# on the mariadb node
systemctl restart mysql
# on the master node
sudo /opt/cloudera-manager/cm-5.15.1/etc/init.d/cloudera-scm-server stop
# on every node running an agent
sudo /opt/cloudera-manager/cm-5.15.1/etc/init.d/cloudera-scm-agent stop

  5. As a non-root user, startup reports OK but port 7180 never comes up; the log shows the following error

com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Access denied for user 'scm'@'%' to database 'cm'

Solution: log in with the mysql shell client and run the command below

grant all on cm.* to 'scm'@'%' identified by 'scm' with grant option;
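To confirm the grant took effect, one can log back in as scm (a sketch assuming the scm/scm credentials used above):

```shell
# Should list the cm schema's tables without an access-denied error.
mysql -h localhost -u scm -pscm -e 'USE cm; SHOW TABLES;' >/dev/null \
  && echo 'scm user can access the cm database'
```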

  6. As a non-root user, startup reports OK but port 7180 never comes up; the log shows the following errors

2017-09-28 12:48:52,618 ERROR WebServerImpl:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: The server storage directory [/var/lib/cloudera-scm-server] doesn't exist.
2017-09-28 12:48:52,618 ERROR WebServerImpl:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: No read permission to the server storage directory [/var/lib/cloudera-scm-server]
2017-09-28 12:48:52,618 ERROR WebServerImpl:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: No write permission to the server storage directory [/var/lib/cloudera-scm-server]
2017-09-28 12:48:54,663 INFO WebServerImpl:org.springframework.web.servlet.handler.SimpleUrlHandlerMapping: Root mapping to handler of type [class org.springframework.web.servlet.mvc.ParameterizableViewController]
2017-09-28 12:48:54,716 INFO WebServerImpl:org.springframework.web.servlet.DispatcherServlet: FrameworkServlet 'Spring MVC Dispatcher Servlet': initialization completed in 2536 ms
2017-09-28 12:48:54,738 INFO WebServerImpl:com.cloudera.server.web.cmon.JobDetailGatekeeper: ActivityMonitor configured to allow job details for all jobs.
2017-09-28 12:48:55,813 ERROR SearchRepositoryManager-0:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: The server storage directory [/var/lib/cloudera-scm-server] doesn't exist.
2017-09-28 12:48:55,813 ERROR SearchRepositoryManager-0:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: No read permission to the server storage directory [/var/lib/cloudera-scm-server]
2017-09-28 12:48:55,813 ERROR SearchRepositoryManager-0:com.cloudera.server.web.cmf.search.components.SearchRepositoryManager: No write permission to the server storage directory [/var/lib/cloudera-scm-server]

Solution:

sudo mkdir /var/lib/cloudera-scm-server
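Since the log also complains about missing read and write permission, the directory likely needs to be owned by the account the server runs as. A sketch, assuming that account is cloudera-scm (adjust to whatever user your cloudera-scm-server init script is configured to use):

```shell
sudo mkdir -p /var/lib/cloudera-scm-server
# Assumption: the server runs as cloudera-scm; change to your service account.
sudo chown -R cloudera-scm:cloudera-scm /var/lib/cloudera-scm-server
```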

Other issues

Hue error: the Load Balancer role's process fails to start

The httpd and mod_ssl packages must be installed beforehand:

yum install httpd

yum install mod_ssl

After installing them, restart the role and it comes up.

Activity Monitor fails to start

Check whether the JDBC connector driver is present in /usr/share/java.
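If it is missing, one way to supply it is to reuse the connector already placed in the parcel's jars directory earlier in this guide (a sketch; paths assume the copy step from the cluster-configuration section was done):

```shell
sudo mkdir -p /usr/share/java
# Reuse the connector copied into the parcel during cluster configuration.
sudo cp /opt/cloudera/parcels/CDH/jars/mysql-connector-java.jar \
    /usr/share/java/mysql-connector-java.jar
```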

Hive installation fails

The driver is missing: add mysql-connector-java-5.1.46-bin.jar under /opt/cloudera/parcels/CDH/lib/hive/lib.
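For example, reusing the connector already copied into the parcel's jars directory earlier (a sketch; assumes that copy step was done):

```shell
cp /opt/cloudera/parcels/CDH/jars/mysql-connector-java.jar \
   /opt/cloudera/parcels/CDH/lib/hive/lib/
```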

Impala Daemon fails to start

Error log:

Invalid short-circuit reads configuration:
  - Impala cannot read or execute the parent directory of dfs.domain.socket.path

Aborting Impala Server startup due to improper configuration. Impalad exiting.

Solution:

1. In HDFS, locate the dfs.domain.socket.path setting
2. Note the directory in its value (/var/run/hdfs-sockets/dn)
3. On the host that reports the error, create the corresponding directory (in this case /var/run/hdfs-sockets)
4. Restart the affected Impala Daemon process
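Steps 3 and 4 can be done on the failing host like this (the path comes from the dfs.domain.socket.path value above; ownership matches the DataNode fix in the next section):

```shell
# Create the parent directory of dfs.domain.socket.path and give it to hdfs.
sudo mkdir -p /var/run/hdfs-sockets
sudo chown -R hdfs:root /var/run/hdfs-sockets
```

Then restart the Impala Daemon from the CM console.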

DataNode won't start

Error message:

java.net.BindException: bind(2) error: Permission denied when trying to bind to '/var/run/hdfs-sockets/dn'
        at org.apache.hadoop.net.unix.DomainSocket.bind0(Native Method)

Solution:

chown hdfs:root -R /var/run/hdfs-sockets

Oozie web console is disabled

yum -y install unzip

Upload ext-2.2.zip to the soft directory and unzip it:
unzip ext-2.2.zip
Move it into the Oozie libext directory:
mv ext-2.2 /opt/cloudera/parcels/CDH/lib/oozie/libext
Fix the ownership:
chown -R oozie:oozie /opt/cloudera/parcels/CDH/lib/oozie/libext/ext-2.2/
Hadoop/Sqoop cannot connect to the database: no driver

Error:
The database connection fails because the JDBC driver is missing.

Fix:

Upload mysql-connector-java-5.1.46-bin.jar to the soft directory.

Move it into the CDH jars directory:
mv mysql-connector-java-5.1.46-bin.jar /opt/cloudera/parcels/CDH/jars
Link it into the CDH Hadoop lib directory:
cd /opt/cloudera/parcels/CDH/lib/hadoop/lib && ln -s ../../../jars/mysql-connector-java-5.1.46-bin.jar mysql-connector-java-5.1.46-bin.jar
Link it into the CDH Sqoop lib directory:
cd /opt/cloudera/parcels/CDH/lib/sqoop/lib && ln -s ../../../jars/mysql-connector-java-5.1.46-bin.jar mysql-connector-java-5.1.46-bin.jar
Hue: an Oozie import from MySQL to Hive fails with Could not load db driver class: com.mysql.jdbc.Driver

Fix:
hdfs dfs -put /opt/cloudera/parcels/CDH/lib/sqoop/lib/mysql-connector-java-5.1.46-bin.jar  /user/oozie/share/lib/lib_20190620105005/sqoop/

Then restart the Oozie service.
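Instead of a full restart, Oozie can also be asked to rescan its sharelib via the admin CLI (a sketch; port 11000 is the Oozie default, and the host is this cluster's master node):

```shell
oozie admin -oozie http://node101.yyd.cn:11000/oozie -sharelibupdate
```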

This DataNode is not connected to one or more of its NameNode(s)

Cause:
In /etc/sysconfig/network, the hostname on every node had been set to the master node's name.

Fix:
Set each node's hostname to its own name.
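For example, on the second node (names follow the /etc/hosts mapping set up at the start of this guide):

```shell
# Run on 192.168.26.150 with its own name; repeat per node.
hostnamectl set-hostname node102.yyd.cn
# Also set HOSTNAME=node102.yyd.cn in /etc/sysconfig/network, then:
service network restart
```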

[ERROR] Fatal error: Can't open and lock privilege tables: Table 'mysql.user
The old database was not removed cleanly.
