Enterprise Private Cloud (oVirt + K8S) Integrated with Cloudera Manager

My company recently decided to build a cloud platform and, at the same time, stand up a CDH deployment. My background is big data development, so I was thrown into this without much cloud experience; with a lot of searching and self-study I eventually got everything working, and this post summarizes the whole process, both so I don't forget it and to pave the way for newcomers. To be clear, I am new to cloud computing, so there is plenty of room for improvement; corrections from experienced readers are welcome!

Software environment

Software            Version
Operating system    CentOS 7.5 x86_64
Docker              18-ce
Kubernetes          1.12

Server specifications and roles

Address         CPU        RAM     Disk    K8S role   oVirt role   Type              Hostname
192.168.3.231   36 cores   256GB   26T     Master     /            Physical server   host231.zhijia
192.168.3.232   36 cores   256GB   26T     Slave      Slave        Physical server   host232.zhijia
192.168.3.209   4 cores    64GB    550GB   /          Master       Virtual machine   host209.zhijia

A quick explanation: the oVirt master node runs inside a virtual machine. The company originally allocated four physical servers for this experiment, but for various reasons only 231 and 232 were left. 231 had a problem at the time and could not serve as a master, and the VM host 209 does not support hardware virtualization, so it cannot be an oVirt slave either. The result: the 209 VM became the oVirt master, the 232 physical server became the slave, and the data domain lives on physical server 232.

Important: before building oVirt, confirm whether each host supports hardware virtualization, or you will waste a lot of effort later. My conclusion so far: a machine without virtualization support cannot be a slave node (slaves must be virtualization-capable hosts), but it is fine as a master, provided the master only does scheduling and does not run VMs itself!

I. Server Environment Preparation

The following steps must be run on every host.

Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

sed -i "s/enforcing/disabled/" /etc/selinux/config && setenforce 0

Install the basic packages and services that will be needed later

yum -y install wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum -y install epel-release
yum makecache
yum -y install lrzsz
yum -y install openssh*
yum -y install vim
yum install -y sudo
yum install -y initscripts
yum install -y net-tools.x86_64
yum -y install python-pip
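If you prefer to chain the commands above into one line, use `&&` (stop on the first failure) or `||` (fallback on failure) rather than trailing backslashes, which only continue the same command onto the next line. A minimal sketch of how the two operators short-circuit:

```shell
# '&&' runs the next command only if the previous one succeeded;
# '||' runs the next command only if the previous one failed
true && echo "ran after success"
false && echo "never printed"
false || echo "ran after failure"
```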

II. K8S Environment Setup

The whole K8S setup follows 李振良's video tutorial; the companion article is here:
Kubernetes v1.12/v1.13 二进制部署集群(HTTPS+RBAC)
The video course is also available:
click here
Now, on to the main part.

The official Kubernetes deployment options

minikube

Minikube is a tool that quickly runs a single-node Kubernetes locally, aimed at users trying out Kubernetes or doing day-to-day development. It is not suitable for production.

Official docs: https://kubernetes.io/docs/setup/minikube/

kubeadm

Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Binary packages

Download the release binaries from the official site and deploy each component by hand to form a Kubernetes cluster.

Summary:

For a production cluster only kubeadm and the binary packages are realistic options. Kubeadm lowers the barrier to entry but hides many details, which makes problems hard to troubleshoot. Here we deploy from binary packages, and I recommend this approach: manual deployment is more work, but you learn how the pieces actually fit together, which pays off in later maintenance.

The setup needs various installation packages prepared in advance. Everything can be downloaded from the Internet; I also prepared an offline bundle, and this walkthrough is done fully offline, with all resources placed under /root/resources on master node 231.
In the steps below, replace the following variables with the corresponding host IPs:

Variable        Value           Note
$MasterIP_01    192.168.3.231   referred to below as 231
$NodeIP_01      192.168.3.232   referred to below as 232
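So the commands below can be pasted as-is, the two placeholders can simply be exported in the shell on each host (values taken from the table above):

```shell
# export the placeholder variables used throughout the commands below
# (values are this cluster's addresses from the table above)
export MasterIP_01=192.168.3.231
export NodeIP_01=192.168.3.232
echo "master=$MasterIP_01 node=$NodeIP_01"
```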

1. Generate certificates

mkdir /root/k8s
cd /root/k8s
mkdir k8s-cert
mkdir etcd-cert
mv /root/resources/etcd-cert.sh /root/k8s/etcd-cert/
mv /root/resources/cfssl.sh /root/k8s/etcd-cert/
#Download the cfssl tools. Make sure the system time is correct, or this will fail; it can be fixed with: ntpdate time.windows.com
sh /root/k8s/etcd-cert/cfssl.sh
cd /root/k8s/etcd-cert/
#Replace the IP fields in etcd-cert.sh with the target IPs. If you edit the file by hand, mind the JSON: the last host entry must not have a trailing comma, or the server certificate may fail to generate
sed -i -e "s/10.206.240.188/$MasterIP_01/" -e "s/10.206.240.189/$NodeIP_01/" /root/k8s/etcd-cert/etcd-cert.sh
#Run etcd-cert.sh to generate ca.pem, ca-key.pem, server.pem, server-key.pem and the other certificates
sh /root/k8s/etcd-cert/etcd-cert.sh
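A minimal sketch of the sed substitution performed above, run against a hypothetical hosts block like the one embedded in etcd-cert.sh (the file name and contents here are made up for the demo):

```shell
# hypothetical miniature of the hosts block inside etcd-cert.sh
cat > /tmp/demo-cert.json <<'EOF'
"hosts": [
    "10.206.240.188",
    "10.206.240.189"
]
EOF
MasterIP_01=192.168.3.231
NodeIP_01=192.168.3.232
# same multi-expression sed as above; note the last entry carries no trailing
# comma, otherwise the JSON is invalid and the server cert will not generate
sed -i -e "s/10.206.240.188/$MasterIP_01/" -e "s/10.206.240.189/$NodeIP_01/" /tmp/demo-cert.json
cat /tmp/demo-cert.json
```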

2. Deploying the etcd cluster

Run the following on master node 231:
#Download the etcd release from https://github.com/etcd-io/etcd/releases (the tarball is already in the resources directory)
cd /root/resources
tar -zxvf /root/resources/etcd-v3.3.10-linux-amd64.tar.gz
#Create the working directories
mkdir -p /opt/etcd/{bin,cfg,ssl}
#Move the extracted binaries into the working directory
mv /root/resources/etcd-v3.3.10-linux-amd64/etcd /root/resources/etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin 
#Copy the certificates
cp /root/k8s/etcd-cert/{ca,server-key,server}.pem /opt/etcd/ssl/
#Run the etcd.sh script
sh /root/resources/etcd.sh etcd01 $MasterIP_01 etcd02=https://$NodeIP_01:2380
#Copy the etcd working directory to the NODE
scp -r /opt/etcd/ root@$NodeIP_01:/opt/
#Copy the etcd service unit to the NODE
scp /usr/lib/systemd/system/etcd.service root@$NodeIP_01:/usr/lib/systemd/system/
Run the following on slave node 232:
sed -i "2s/etcd01/etcd02/;1,9s/$MasterIP_01/$NodeIP_01/" /opt/etcd/cfg/etcd
systemctl daemon-reload
systemctl restart etcd
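The sed on the node uses line-scoped substitutions; here is a sketch of the same technique on a hypothetical four-line /opt/etcd/cfg/etcd (the real file is longer, which is why the range above is 1,9):

```shell
# hypothetical miniature of /opt/etcd/cfg/etcd
cat > /tmp/demo-etcd <<'EOF'
#[Member]
ETCD_NAME="etcd01"
ETCD_LISTEN_PEER_URLS="https://192.168.3.231:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.3.231:2379"
EOF
# 2s/ substitutes only on line 2 (rename the member);
# 1,9s/ substitutes the IP on lines 1-9 (here, the whole file)
sed -i "2s/etcd01/etcd02/;1,9s/192.168.3.231/192.168.3.232/" /tmp/demo-etcd
grep ETCD_NAME /tmp/demo-etcd
```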

The etcd cluster's health can be checked with:

/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://$MasterIP_01:2379,https://$NodeIP_01:2379" \
cluster-health

The etcd service status can be checked on every host with

systemctl status etcd

3. Deploying Docker

Run the following on slave node 232:
cd /root/resources/ 
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#You may see a warning here (the public key for docker-ce-19.03.5-3.el7.x86_64.rpm is not installed); it can be ignored
yum install docker-ce -y
#If image pulls are slow, switch to a registry mirror
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
systemctl enable docker
systemctl restart docker

The Docker service status can be checked on every host with

systemctl status docker
docker-compose down     #stop and remove containers started with docker-compose
docker run -it busybox  #run a test container

4. Deploying Flannel

Run the following on master node 231:
#Flannel releases: https://github.com/coreos/flannel/releases
#In earlier testing the Flannel service sometimes failed to start, possibly because of the command below; note that the value must be wrapped in single quotes
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://$MasterIP_01:2379,https://$NodeIP_01:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
scp /root/resources/flannel-v0.10.0-linux-amd64.tar.gz root@$NodeIP_01:/root/resources/
scp /root/resources/flannel.sh root@$NodeIP_01:/root/resources/
Run the following on slave node 232:
#Be careful to change into this directory first, or the tarball will be extracted into the home directory
cd /root/resources/
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
tar -zxvf /root/resources/flannel-v0.10.0-linux-amd64.tar.gz
mv /root/resources/flanneld /root/resources/mk-docker-opts.sh /opt/kubernetes/bin
sh /root/resources/flannel.sh https://$MasterIP_01:2379,https://$NodeIP_01:2379
systemctl restart docker

Once Flannel is deployed, you can run docker run -it (image name) on a NODE, get a shell inside the container, and ping the other side's IP to confirm connectivity.
#The Flannel service status can be checked on every host

systemctl status flanneld
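The network config written to etcd earlier must arrive as valid JSON; the single quotes keep the embedded double quotes intact through the shell. A quick quoting sketch:

```shell
# single quotes keep the embedded double quotes literal for the shell,
# so etcd stores valid JSON
cfg='{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
echo "$cfg"
# sanity check that the inner quotes were not eaten by the shell
echo "$cfg" | grep -c '"Type": "vxlan"'
```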

5. Deploying the MASTER

#The three K8S master components are kube-apiserver, kube-controller-manager and kube-scheduler
#Kubernetes releases: https://github.com/kubernetes/kubernetes/releases
cd /root/resources/
unzip /root/resources/master.zip
tar zxvf /root/resources/kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/{bin,cfg,ssl}
cp /root/resources/kubernetes/server/bin/kube-apiserver /root/resources/kubernetes/server/bin/kube-controller-manager /root/resources/kubernetes/server/bin/kube-scheduler /opt/kubernetes/bin/
sh /root/resources/apiserver.sh $MasterIP_01 https://$MasterIP_01:2379,https://$NodeIP_01:2379
mkdir /opt/kubernetes/logs
sed -i "2s/true/false/" /opt/kubernetes/cfg/kube-apiserver
#An earlier version of this step got the backslash escaping wrong when editing the kube-apiserver file; use the line below exactly as written
sed -i "3 i --log-dir=/opt/kubernetes/logs \ \\" /opt/kubernetes/cfg/kube-apiserver
sed -i -e "s/10.206.176.19/$MasterIP_01/;s/10.206.240.188/$NodeIP_01/" /root/resources/k8s-cert.sh
sh /root/resources/k8s-cert.sh
cp /root/resources/ca.pem /root/resources/ca-key.pem /root/resources/server.pem /root/resources/server-key.pem /opt/kubernetes/ssl/
cp /root/resources/kubeconfig.sh /root/k8s/k8s-cert/
cd /root/k8s/k8s-cert/
#Put the kubectl tool on the system PATH
cp /root/resources/kubernetes/server/bin/kubectl /usr/bin/
cp /root/resources/ca.pem /root/resources/kube-proxy.pem /root/resources/kube-proxy-key.pem /root/k8s/k8s-cert/
sh /root/k8s/k8s-cert/kubeconfig.sh $MasterIP_01 /root/k8s/k8s-cert/
mv /root/k8s/k8s-cert/token.csv /opt/kubernetes/cfg/
sh /root/resources/controller-manager.sh 127.0.0.1
sh /root/resources/scheduler.sh 127.0.0.1
sed -i "3s/true/false/" /opt/kubernetes/cfg/kube-controller-manager
sed -i "4 i --log-dir=/opt/kubernetes/logs \ \\" /opt/kubernetes/cfg/kube-controller-manager
sed -i "2s/true/false/" /opt/kubernetes/cfg/kube-scheduler
sed -i "3 i --log-dir=/opt/kubernetes/logs \ \\" /opt/kubernetes/cfg/kube-scheduler
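The "insert a --log-dir line" edits above are easy to get wrong because of the trailing backslash. A sketch of the same technique on a hypothetical three-line config in the style of /opt/kubernetes/cfg/kube-apiserver (file name and contents made up for the demo):

```shell
# hypothetical miniature of a kube-* options file
cat > /tmp/demo-kube-cfg <<'EOF'
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.3.231:2379"
EOF
# disable stderr logging (line 1 here; line 2 in the real file)
sed -i '1s/true/false/' /tmp/demo-kube-cfg
# insert the log-dir flag as a new line 2; the doubled backslash leaves a
# literal '\' at the end so the shell line continuation stays intact
sed -i '2 i --log-dir=/opt/kubernetes/logs \\' /tmp/demo-kube-cfg
sed -n '1,2p' /tmp/demo-kube-cfg
```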
#The kube-apiserver, kube-scheduler and kube-controller-manager service status can be checked on the MASTER, e.g.:
systemctl status kube-apiserver

6. Deploying the NODE

Run the following on master node 231:
#If the MASTER does not accept NODE requests once the cluster is up, first confirm that the MASTER's kube-apiserver and the NODE's kubelet are healthy; it may also be related to the command below, which would then need to be re-run
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
#To delete the binding: kubectl delete clusterrolebinding kubelet-bootstrap
scp /root/k8s/k8s-cert/bootstrap.kubeconfig /root/k8s/k8s-cert/kube-proxy.kubeconfig root@$NodeIP_01:/opt/kubernetes/cfg/
scp /root/resources/node.zip root@$NodeIP_01:/root/resources/
scp /root/resources/kubernetes/server/bin/kubelet /root/resources/kubernetes/server/bin/kube-proxy root@$NodeIP_01:/opt/kubernetes/bin/
Run the following on slave node 232:
cd /root/resources/
unzip /root/resources/node.zip
sh /root/resources/kubelet.sh $NodeIP_01
mkdir /opt/kubernetes/logs
sed -i "2s/true/false/" /opt/kubernetes/cfg/kubelet
#Careful here: this log-dir change can stop kubelet from starting; if it does, re-check the inserted line
sed -i "3 i --log-dir=/opt/kubernetes/logs \ \\" /opt/kubernetes/cfg/kubelet
systemctl restart kubelet
sh /root/resources/proxy.sh $NodeIP_01
#Check component status
kubectl get cs 
#Check certificate signing requests
kubectl get csr 
#Check nodes
kubectl get node 
#Approve a node's CSR so it can join the master (use the NAME shown by kubectl get csr)
kubectl certificate approve {CSR_NAME}

7. Installing the dashboard add-on

Run the following on slave node 232 to pull the image in advance:
cd /root/resources/
docker pull lizhenliang/kubernetes-dashboard-amd64:v1.10.1
Run the following on master node 231:
Download the manifest
cd /root/resources/
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
sed -i "s/k8s.gcr.io/lizhenliang/" /root/resources/kubernetes-dashboard.yaml
sed -i "158 i   type: NodePort" /root/resources/kubernetes-dashboard.yaml
sed -i "162 i   nodePort: 30001" /root/resources/kubernetes-dashboard.yaml
kubectl apply -f /root/resources/kubernetes-dashboard.yaml

Use the following to view pod and service information

kubectl get pods,svc -n kube-system

Firefox is the default choice for access; use https://IP:port, and note that it must be https.
For Chrome, see http://blogs.cpolar.cn/articles/2019/11/24/1574606459175.html
The web UI requires a login; we use the token method, as follows:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
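The command above pipes `kubectl get secret` output through an awk filter to pick out the secret name. A sketch of that filter on canned output (the secret names here are made up):

```shell
# canned two-line 'kubectl get secret' output; secret names are hypothetical
list='default-token-abcde          kubernetes.io/service-account-token   3      10d
dashboard-admin-token-xyz12  kubernetes.io/service-account-token   3      1m'
# /dashboard-admin/ selects the matching line; print $1 emits its first column
echo "$list" | awk '/dashboard-admin/{print $1}'
```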

8. A currently manual step: after a NODE joins, the master must approve it by hand

#On the master, get the NAME with kubectl get csr, then run kubectl certificate approve {NAME} to allow the NODE to join
#Node status can be checked with kubectl get node

9. Deploying NGINX (a test step; feel free to skip it!)

Run the following on master node 231:
kubectl run nginx --image=nginx 
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
kubectl get pod,svc
kubectl scale deployment nginx --replicas=3  #scales the deployment out to 3 replicas

#To reach the Nginx deployed in the cluster, open a browser at: http://192.168.3.232:38696 (38696 was the NodePort assigned in my run; check yours with kubectl get svc)

III. oVirt Cluster Setup

This platform setup follows the article below, which is quite thorough, except that the author repeats a lot of code; newcomers who don't notice can easily fall into traps, so I have tidied it up here.
Original article: 开源KVM虚拟化平台oVirt 4.3集群部署与配置
First, inside the Engine host, update the system and install some basic tools:

yum -y update
yum -y install epel-release
yum -y install supervisor nano net-tools

Every machine uses a static IP; do not use DHCP. Configure the Engine's NIC first. For example, my external NIC is ens33, so edit:

vim /etc/sysconfig/network-scripts/ifcfg-ens33

Static IP settings for the NIC:

BOOTPROTO="static"
IPADDR="192.168.3.209"
PREFIX="24"
GATEWAY="10.10.10.1"
DNS1="114.114.114.114"

Not sure what your VMWare gateway IP is? Run:

netstat -r -n

Restart the network service:

systemctl restart network

Add hosts entries:

echo "192.168.3.209 host209.zhijia" >> /etc/hosts
echo "192.168.3.232 host232.zhijia" >> /etc/hosts

Install the Engine:

yum -y install http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum -y install ovirt-engine

When installation completes, start the deployment with:

cd /root
engine-setup --generate-answer=~/answer.txt

The interactive flow looks roughly like this. Pressing Enter to keep the defaults works for almost everything; only answer No when asked to configure the firewall (blank answers below mean the default was used):

--== PRODUCT OPTIONS ==--

Set up Cinderlib integration
(Currently in tech preview)
(Yes, No) [No]: 
Configure Engine on this host (Yes, No) [Yes]: 
Configure ovirt-provider-ovn (Yes, No) [Yes]: 
Configure Image I/O Proxy on this host (Yes, No) [Yes]: 
Configure WebSocket Proxy on this host (Yes, No) [Yes]: 

* Please note * : Data Warehouse is required for the engine.
If you choose to not configure it on this host, you have to configure
it on a remote host, and then configure the engine on this host so
that it can access the database of the remote Data Warehouse host.
Configure Data Warehouse on this host (Yes, No) [Yes]: 
Configure VM Console Proxy on this host (Yes, No) [Yes]: 

--== PACKAGES ==--

[ INFO  ] Checking for product updates...
[ INFO  ] No product updates found

--== NETWORK CONFIGURATION ==--

Host fully qualified DNS name of this server [ovirtengine.lala.im]: 
Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.
NOTICE: iptables is deprecated and will be removed in future releases
Do you want Setup to configure the firewall? (Yes, No) [Yes]: No

--== DATABASE CONFIGURATION ==--

Where is the DWH database located? (Local, Remote) [Local]: 
Setup can configure the local postgresql server automatically for the DWH to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql and create DWH database, or prefer to perform that manually? (Automatic, Manual) [Automatic]: 
Where is the Engine database located? (Local, Remote) [Local]: 
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql and create Engine database, or prefer to perform that manually? (Automatic, Manual) [Automatic]: 

--== OVIRT ENGINE CONFIGURATION ==--

Engine admin password: 
Confirm engine admin password: 
[WARNING] Password is weak: The password is shorter than 8 characters
Use weak password? (Yes, No) [No]: Yes
Application mode (Virt, Gluster, Both) [Both]: 
Use default credentials (admin@internal) for ovirt-provider-ovn (Yes, No) [Yes]: 

--== STORAGE CONFIGURATION ==--

Default SAN wipe after delete (Yes, No) [No]: 

--== PKI CONFIGURATION ==--

Organization name for certificate [lala.im]: 

--== APACHE CONFIGURATION ==--

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]: 
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]: 

--== MISC CONFIGURATION ==--

Please choose Data Warehouse sampling scale:
(1) Basic
(2) Full
(1, 2)[1]: 

--== CONFIGURATION PREVIEW ==--

Application mode                        : both
Default SAN wipe after delete           : False
Update Firewall                         : False
Host FQDN                               : ovirtengine.lala.im
Set up Cinderlib integration            : False
Configure local Engine database         : True
Set application as default page         : True
Configure Apache SSL                    : True
Engine database secured connection      : False
Engine database user name               : engine
Engine database name                    : engine
Engine database host                    : localhost
Engine database port                    : 5432
Engine database host name validation    : False
Engine installation                     : True
PKI organization                        : lala.im
Set up ovirt-provider-ovn               : True
Configure WebSocket Proxy               : True
DWH installation                        : True
DWH database host                       : localhost
DWH database port                       : 5432
Configure local DWH database            : True
Configure Image I/O Proxy               : True
Configure VMConsole Proxy               : True

Please confirm installation settings (OK, Cancel) [OK]: 

--== SUMMARY ==--
......
An example of the required configuration for iptables can be found at:
/etc/ovirt-engine/iptables.example
Please use the user 'admin@internal' and password specified in order to login
Web access is enabled at:
http://ovirtengine.lala.im:80/ovirt-engine
https://ovirtengine.lala.im:443/ovirt-engine
Internal CA 68:63:4D:F5:FC:12:3F:FA:A3:82:4E:64:82:06:A8:6C:22:2E:E8:2A
SSH fingerprint: SHA256:...
......

Configuring the Engine's SSL certificate
By default the Engine can only be reached via its FQDN; we can also allow access directly by the server's IP. Write:

echo 'SSO_ALTERNATE_ENGINE_FQDNS="192.168.3.209"' >> /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf
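The value being written contains double quotes, so the whole string should be single-quoted or the shell will swallow the inner quotes. A quoting sketch, writing to a hypothetical temp file instead of the real config:

```shell
# quoting sketch: the value itself contains double quotes, so the whole
# string is single-quoted; a temp file stands in for the real conf here
demo_conf=/tmp/99-custom-sso-setup.conf
echo 'SSO_ALTERNATE_ENGINE_FQDNS="192.168.3.209"' > "$demo_conf"
cat "$demo_conf"
```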

Restart the Engine service:

systemctl restart ovirt-engine.service

Now the Engine can be reached by IP; my access address is:

https://192.168.3.209

Configuring NFS storage

An oVirt cluster can use local storage or shared storage. With local storage only a single Node can be added to the cluster, so to get the full benefit of clustering we use NFS shared storage. On each Node, set up the NFS shares that will hold the virtual machines by running:

#Create the iso storage domain directory for OS images
mkdir /ovirt/iso
#Create the data storage domain directory for VM data
mkdir /ovirt/data
#Give ownership to VDSM (uid/gid 36)
chown 36:36 -R /ovirt
#Set the permissions
chmod 755 -R /ovirt
#Edit /etc/exports so that it contains the following entries (shown here via cat)
cat /etc/exports
/ovirt/iso *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
/ovirt/data *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
#Enable the NFS service and start it immediately
systemctl enable --now nfs.service
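The two export lines follow one pattern, differing only in the directory. A sketch that generates them into a temp file instead of the real /etc/exports (uid/gid 36 is vdsm:kvm, which oVirt expects):

```shell
# generate the two export entries into a temp file (stand-in for /etc/exports)
demo_exports=/tmp/demo-exports
for dir in iso data; do
    echo "/ovirt/$dir *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)"
done > "$demo_exports"
cat "$demo_exports"
```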

IV. Cloudera Manager Cluster Setup

The CDH setup follows this article, which covers everything in detail: Docker容器部署CDH6.3.0
Before starting, remember to edit the hosts file and set up passwordless SSH login.
2.1 Initialize the environment
Connect to the container over SSH with Xshell or another tool; the container's credentials are root/root.

yum install -y kde-l10n-Chinese telnet vim wget ntp net-tools \
&& yum -y reinstall glibc-common \
&& yum clean all

Output

Installed:
  kde-l10n-Chinese.noarch 0:4.10.5-2.el7      net-tools.x86_64 0:2.0-0.24.20131004git.el7      ntp.x86_64 0:4.2.6p5-28.el7.centos      telnet.x86_64 1:0.17-64.el7      vim-enhanced.x86_64 2:7.4.160-6.el7_6     
  wget.x86_64 0:1.14-18.el7_6.1

2.2 Configure the Chinese locale environment variables

(
cat <<EOF
export LC_ALL=zh_CN.utf8
export LANG=zh_CN.utf8
export LANGUAGE=zh_CN.utf8
EOF
) >> ~/.bashrc \
&& localedef -c -f UTF-8 -i zh_CN zh_CN.utf8 \
&& source ~/.bashrc \
&& echo $LANG

Output

zh_CN.utf8

2.3 Check that these two commands report the same hostname

uname -a && hostname

Output

[root@cm ~]# uname -a && hostname
Linux cm.hadoop 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
cm.hadoop

2.4 Check that the eth0 NIC is up

ifconfig | head -n2

Output

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.10.0.2  netmask 255.255.0.0  broadcast 172.10.255.255

2.5 Check Internet connectivity

ping www.baidu.com -c 3

Output

PING www.a.shifen.com (220.181.38.150) 56(84) bytes of data.
64 bytes from 220.181.38.150 (220.181.38.150): icmp_seq=1 ttl=50 time=7.68 ms
64 bytes from 220.181.38.150 (220.181.38.150): icmp_seq=2 ttl=50 time=7.63 ms
64 bytes from 220.181.38.150 (220.181.38.150): icmp_seq=3 ttl=50 time=7.58 ms

--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 7.586/7.634/7.686/0.040 ms

2.6 Configure the NTP time synchronization service

vim /etc/ntp.conf

#Change the servers to the following four clock servers

server 0.cn.pool.ntp.org
server 1.cn.pool.ntp.org
server 2.cn.pool.ntp.org
server 3.cn.pool.ntp.org

Start the NTP service

systemctl start ntpd && \
systemctl enable ntpd && \
ntpdate -u 0.cn.pool.ntp.org && \
hwclock --systohc && \
date 

Output

Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
14 Aug 10:18:01 ntpdate[1853]: adjust time server 139.199.214.202 offset 0.005992 sec
2019年 08月 14日 星期三 10:18:02 CST

2.7 Prepare the MySQL packages

mkdir -p /root/hadoop_CHD/mysql \
&& wget -O /root/hadoop_CHD/mysql/mysql-5.7.27-1.el7.x86_64.rpm-bundle.tar \
https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.27-1.el7.x86_64.rpm-bundle.tar \
&& ls /root/hadoop_CHD/mysql

Output

mysql-5.7.27-1.el7.x86_64.rpm-bundle.tar

2.8 Prepare the MySQL JDBC driver

mkdir -p /root/hadoop_CHD/mysql-jdbc \
&& wget -O /root/hadoop_CHD/mysql-jdbc/mysql-connector-java-5.1.48.tar.gz \
https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz \
&& ls /root/hadoop_CHD/mysql-jdbc

Output

mysql-connector-java-5.1.48.tar.gz

2.9 Prepare the Cloudera Manager packages

mkdir -p /root/hadoop_CHD/cloudera-repos \
&& wget -O /root/hadoop_CHD/cloudera-repos/allkeys.asc \
https://archive.cloudera.com/cm6/6.3.0/allkeys.asc \
&& wget -O /root/hadoop_CHD/cloudera-repos/cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm \
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm \
&& wget -O /root/hadoop_CHD/cloudera-repos/cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm \
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm \
&& wget -O /root/hadoop_CHD/cloudera-repos/cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm \
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm \
&& wget -O /root/hadoop_CHD/cloudera-repos/cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm \
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm \
&& wget -O /root/hadoop_CHD/cloudera-repos/enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm \
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm \
&& wget -O /root/hadoop_CHD/cloudera-repos/oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm \
https://archive.cloudera.com/cm6/6.3.0/redhat7/yum/RPMS/x86_64/oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm \
&& ll /root/hadoop_CHD/cloudera-repos

Output

total 1378004
-rw-r--r-- 1 root root      14041 8月   1 00:08 allkeys.asc
-rw-r--r-- 1 root root   10479136 8月   1 00:08 cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root 1201341068 8月   1 00:08 cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root      11464 8月   1 00:08 cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root      10996 8月   1 00:08 cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root   14209884 8月   1 00:08 enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root  184988341 8月   1 00:08 oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm

2.10 Prepare the Parcel package

mkdir -p /root/hadoop_CHD/parcel \
&& wget -O /root/hadoop_CHD/parcel/CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel \
https://archive.cloudera.com/cdh6/6.3.0/parcels/CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel \
&& wget -O /root/hadoop_CHD/parcel/manifest.json \
https://archive.cloudera.com/cdh6/6.3.0/parcels/manifest.json \
&& ll /root/hadoop_CHD/parcel

Output

total 2036848
-rw-r--r-- 1 root root 2085690155 8月   1 00:03 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel
-rw-r--r-- 1 root root      33887 8月   1 00:04 manifest.json

2.11 Set up a local yum repository

yum -y install httpd createrepo \
&& systemctl start httpd \
&& systemctl enable httpd \
&& cd /root/hadoop_CHD/cloudera-repos/ && createrepo . \
&& mv /root/hadoop_CHD/cloudera-repos /var/www/html/ \
&& yum clean all \
&& ll /var/www/html/cloudera-repos

Output

total 1378008
-rw-r--r-- 1 root root      14041 8月   1 00:08 allkeys.asc
-rw-r--r-- 1 root root   10479136 8月   1 00:08 cloudera-manager-agent-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root 1201341068 8月   1 00:08 cloudera-manager-daemons-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root      11464 8月   1 00:08 cloudera-manager-server-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root      10996 8月   1 00:08 cloudera-manager-server-db-2-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root   14209884 8月   1 00:08 enterprise-debuginfo-6.3.0-1281944.el7.x86_64.rpm
-rw-r--r-- 1 root root  184988341 8月   1 00:08 oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm
drwxr-xr-x 2 root root       4096 8月  14 11:05 repodata

2.12 Install the JDK

cd /var/www/html/cloudera-repos/;rpm -ivh oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm

Output

warning: oracle-j2sdk1.8-1.8.0+update181-1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID b0b19c9f: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:oracle-j2sdk1.8-1.8.0+update181-1################################# [100%]

2.13 Install and configure MySQL

cd /root/hadoop_CHD/mysql/;tar -xvf mysql-5.7.27-1.el7.x86_64.rpm-bundle.tar \
&& yum -y remove mariadb-libs \
&& yum install -y libaio numactl \
&& rpm -ivh mysql-community-common-5.7.27-1.el7.x86_64.rpm \
&& rpm -ivh mysql-community-libs-5.7.27-1.el7.x86_64.rpm \
&& rpm -ivh mysql-community-client-5.7.27-1.el7.x86_64.rpm \
&& rpm -ivh mysql-community-server-5.7.27-1.el7.x86_64.rpm \
&& rpm -ivh mysql-community-libs-compat-5.7.27-1.el7.x86_64.rpm \
&& echo character-set-server=utf8 >> /etc/my.cnf \
&& rm -rf /root/hadoop_CHD/mysql/ \
&& yum clean all \
&& rpm -qa |grep mysql

Output

Installed:
  libaio.x86_64 0:0.3.109-13.el7                                                                           numactl.x86_64 0:2.0.9-7.el7                                                                          

Dependency Installed:
  numactl-libs.x86_64 0:2.0.9-7.el7                                                                                                                                                                               

Complete!
warning: mysql-community-common-5.7.27-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-common-5.7.27-1.e################################# [100%]
warning: mysql-community-libs-5.7.27-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-libs-5.7.27-1.el7################################# [100%]
warning: mysql-community-client-5.7.27-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-client-5.7.27-1.e################################# [100%]
warning: mysql-community-server-5.7.27-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-server-5.7.27-1.e################################# [100%]
warning: mysql-community-libs-compat-5.7.27-1.el7.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 5072e1f5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-libs-compat-5.7.2################################# [100%]
mysql-community-libs-5.7.27-1.el7.x86_64
mysql-community-server-5.7.27-1.el7.x86_64
mysql-community-common-5.7.27-1.el7.x86_64
mysql-community-client-5.7.27-1.el7.x86_64
mysql-community-libs-compat-5.7.27-1.el7.x86_64

2.14 Grant database privileges
Write the SQL script

(
cat <<EOF
set password for root@localhost = password('123456Aa.');
grant all privileges on *.* to 'root'@'%' identified by '123456Aa.';
flush privileges;
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON scm.* TO 'scm'@'%' IDENTIFIED BY '123456Aa.';
GRANT ALL ON amon.* TO 'amon'@'%' IDENTIFIED BY '123456Aa.';
GRANT ALL ON rman.* TO 'rman'@'%' IDENTIFIED BY '123456Aa.';
GRANT ALL ON hue.* TO 'hue'@'%' IDENTIFIED BY '123456Aa.';
GRANT ALL ON metastore.* TO 'hive'@'%' IDENTIFIED BY '123456Aa.';
GRANT ALL ON sentry.* TO 'sentry'@'%' IDENTIFIED BY '123456Aa.';
GRANT ALL ON nav.* TO 'nav'@'%' IDENTIFIED BY '123456Aa.';
GRANT ALL ON navms.* TO 'navms'@'%' IDENTIFIED BY '123456Aa.';
GRANT ALL ON oozie.* TO 'oozie'@'%' IDENTIFIED BY '123456Aa.';
SHOW DATABASES;
EOF
) >> /root/c.sql
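The per-service CREATE DATABASE / GRANT pairs in the script above all follow one pattern; a sketch that generates them from a "db:user" list (pairs taken from the script; the password is the demo value used there):

```shell
# generate the CREATE DATABASE / GRANT pairs from a "db:user" list
# (pairs taken from the SQL script above; password is the same demo value)
pairs='scm:scm amon:amon rman:rman hue:hue metastore:hive sentry:sentry nav:nav navms:navms oozie:oozie'
out=/tmp/demo-grants.sql
: > "$out"
for p in $pairs; do
    db=${p%%:*}; user=${p##*:}
    echo "CREATE DATABASE $db DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;" >> "$out"
    echo "GRANT ALL ON $db.* TO '$user'@'%' IDENTIFIED BY '123456Aa.';" >> "$out"
done
wc -l < "$out"
```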

Get MySQL's initial password

systemctl start mysqld && grep password /var/log/mysqld.log | sed 's/.*\(............\)$/\1/'
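The sed above keeps only the last 12 characters of the matching log line, which is exactly the length of the temporary password MySQL writes to mysqld.log. A sketch on a canned log line (the password value here is made up):

```shell
# canned mysqld.log line; the password value is hypothetical
line='2019-08-14T02:12:21.223694Z 1 [Note] A temporary password is generated for root@localhost: q!5#aBcDeF9z'
# the greedy .* eats everything before the final 12 captured characters
echo "$line" | sed 's/.*\(............\)$/\1/'
```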

Run the SQL script

[root@cm ~]# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.7.27

Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> source /root/c.sql

Result

Query OK, 0 rows affected, 1 warning (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

Query OK, 0 rows affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

Query OK, 1 row affected (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

Query OK, 0 rows affected, 1 warning (0.01 sec)

Query OK, 0 rows affected, 1 warning (0.00 sec)

+--------------------+
| Database           |
+--------------------+
| information_schema |
| amon               |
| hue                |
| metastore          |
| mysql              |
| nav                |
| navms              |
| oozie              |
| performance_schema |
| rman               |
| scm                |
| sentry             |
| sys                |
+--------------------+
13 rows in set (0.00 sec)

2.15 Configure the MySQL JDBC driver

mkdir -p /usr/share/java/ \
&& cd /root/hadoop_CHD/mysql-jdbc/;tar -zxvf mysql-connector-java-5.1.48.tar.gz \
&& cp  /root/hadoop_CHD/mysql-jdbc/mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar /usr/share/java/mysql-connector-java.jar \
&& rm -rf /root/hadoop_CHD/mysql-jdbc/ \
&& ls /usr/share/java/

Result

mysql-connector-java.jar

2.16 Install Cloudera Manager

(
cat <<EOF
[cloudera-manager]
name=Cloudera Manager 6.3.0
baseurl=http://192.168.3.232:7900/cloudera_6.3.0-repos/
gpgcheck=0
enabled=1
EOF
) >> /etc/yum.repos.d/cloudera-manager.repo \
&& yum clean all \
&& yum makecache \
&& yum install -y cloudera-manager-daemons cloudera-manager-agent cloudera-manager-server \
&& yum clean all \
&& rpm -qa | grep cloudera-manager 

Result

Installed:
  cloudera-manager-agent.x86_64 0:6.3.0-1281944.el7                   cloudera-manager-daemons.x86_64 0:6.3.0-1281944.el7                   cloudera-manager-server.x86_64 0:6.3.0-1281944.el7                  

Dependency Installed:
  GeoIP.x86_64 0:1.5.0-13.el7                   MySQL-python.x86_64 0:1.2.5-1.el7                 at.x86_64 0:3.1.13-24.el7                                    avahi-libs.x86_64 0:0.6.31-19.el7                 
  bc.x86_64 0:1.06.95-13.el7                    bind-libs.x86_64 32:9.9.4-74.el7_6.2              bind-utils.x86_64 32:9.9.4-74.el7_6.2                        cronie.x86_64 0:1.4.11-20.el7_6                   
  cronie-anacron.x86_64 0:1.4.11-20.el7_6       crontabs.noarch 0:1.11-6.20121102git.el7          cups-client.x86_64 1:1.6.3-35.el7                            cups-libs.x86_64 1:1.6.3-35.el7                   
  cyrus-sasl-gssapi.x86_64 0:2.1.26-23.el7      cyrus-sasl-plain.x86_64 0:2.1.26-23.el7           ed.x86_64 0:1.9-4.el7                                        file.x86_64 0:5.11-35.el7                         
  fuse.x86_64 0:2.9.2-11.el7                    fuse-libs.x86_64 0:2.9.2-11.el7                   gettext.x86_64 0:0.19.8.1-2.el7                              gettext-libs.x86_64 0:0.19.8.1-2.el7              
  initscripts.x86_64 0:9.49.46-1.el7            iproute.x86_64 0:4.11.0-14.el7_6.2                iptables.x86_64 0:1.4.21-28.el7                              keyutils-libs-devel.x86_64 0:1.5.8-3.el7          
  krb5-devel.x86_64 0:1.15.1-37.el7_6           less.x86_64 0:458-9.el7                           libcom_err-devel.x86_64 0:1.42.9-13.el7                      libcroco.x86_64 0:0.6.12-4.el7                    
  libgomp.x86_64 0:4.8.5-36.el7_6.2             libkadm5.x86_64 0:1.15.1-37.el7_6                 libmnl.x86_64 0:1.0.3-7.el7                                  libnetfilter_conntrack.x86_64 0:1.0.6-1.el7_3     
  libnfnetlink.x86_64 0:1.0.1-4.el7             libpipeline.x86_64 0:1.2.3-3.el7                  libselinux-devel.x86_64 0:2.5-14.1.el7                       libsepol-devel.x86_64 0:2.5-10.el7                
  libtirpc.x86_64 0:0.2.4-0.15.el7              libunistring.x86_64 0:0.9.3-9.el7                 libverto-devel.x86_64 0:0.2.5-4.el7                          libxslt.x86_64 0:1.1.28-5.el7                     
  m4.x86_64 0:1.4.16-10.el7                     mailx.x86_64 0:12.5-19.el7                        make.x86_64 1:3.82-23.el7                                    man-db.x86_64 0:2.6.3-11.el7                      
  mod_ssl.x86_64 1:2.4.6-89.el7.centos.1        openssl.x86_64 1:1.0.2k-16.el7_6.1                openssl-devel.x86_64 1:1.0.2k-16.el7_6.1                     patch.x86_64 0:2.7.1-10.el7_5                     
  pcre-devel.x86_64 0:8.32-17.el7               postfix.x86_64 2:2.10.1-7.el7                     postgresql-libs.x86_64 0:9.2.24-1.el7_5                      psmisc.x86_64 0:22.20-15.el7                      
  python-psycopg2.x86_64 0:2.5.1-3.el7          redhat-lsb-core.x86_64 0:4.1-27.el7.centos.1      redhat-lsb-submod-security.x86_64 0:4.1-27.el7.centos.1      rpcbind.x86_64 0:0.2.0-47.el7                     
  spax.x86_64 0:1.5.2-13.el7                    systemd-sysv.x86_64 0:219-62.el7_6.9              sysvinit-tools.x86_64 0:2.88-14.dsf.el7                      time.x86_64 0:1.7-45.el7                          
  zlib-devel.x86_64 0:1.2.7-18.el7             

Dependency Updated:
  bind-license.noarch 32:9.9.4-74.el7_6.2                openssl-libs.x86_64 1:1.0.2k-16.el7_6.1                systemd.x86_64 0:219-62.el7_6.9                systemd-libs.x86_64 0:219-62.el7_6.9               

Complete!
Loaded plugins: fastestmirror, ovl
Cleaning repos: base cloudera-manager extras updates
Cleaning up list of fastest mirrors
cloudera-manager-server-6.3.0-1281944.el7.x86_64
cloudera-manager-daemons-6.3.0-1281944.el7.x86_64
cloudera-manager-agent-6.3.0-1281944.el7.x86_64

2.17 Configure the parcel repository

cd /opt/cloudera/parcel-repo/;mv /root/hadoop_CHD/parcel/* ./ \
&& sha1sum CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel| awk '{ print $1 }' >CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha \
&& rm -rf /root/hadoop_CHD/parcel/ \
&& chown -R cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo/* \
&& ll /opt/cloudera/parcel-repo/ 

Result

total 2036852
-rw-r--r-- 1 cloudera-scm cloudera-scm 2085690155 Aug  1 00:03 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel
-rw-r--r-- 1 cloudera-scm cloudera-scm         41 Aug 14 11:38 CDH-6.3.0-1.cdh6.3.0.p0.1279813-el7.parcel.sha
-rw-r--r-- 1 cloudera-scm cloudera-scm      33887 Aug  1 00:04 manifest.json
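Note that the `.sha` file written above contains nothing but the bare 40-character SHA-1 digest of the parcel (41 bytes with the trailing newline, as the listing shows); the CM server compares this digest against the parcel when distributing it, and a mismatch makes the parcel show up as corrupt. The convention can be demonstrated on a throwaway file (a sketch; the file names are made up):

```shell
# Create a dummy "parcel" and its .sha companion in a temp directory
tmp=$(mktemp -d)
echo 'dummy parcel contents' > "$tmp/demo.parcel"
sha1sum "$tmp/demo.parcel" | awk '{print $1}' > "$tmp/demo.parcel.sha"
# The check the server effectively performs: digest of the file vs. the .sha contents
[ "$(sha1sum "$tmp/demo.parcel" | awk '{print $1}')" = "$(cat "$tmp/demo.parcel.sha")" ] \
  && echo 'checksum OK'
rm -rf "$tmp"
```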

2.18 Initialize the scm database

/opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm scm 123456Aa.

Result

[root@cm parcel-repo]# /opt/cloudera/cm/schema/scm_prepare_database.sh mysql scm scm 123456Aa.
JAVA_HOME=/usr/java/jdk1.8.0_181-cloudera
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/java/jdk1.8.0_181-cloudera/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/opt/cloudera/cm/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
Wed Aug 14 11:39:18 CST 2019 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[                          main] DbCommandExecutor              INFO  Successfully connected to database.
All done, your SCM database is configured correctly!

2.19 Start the service

systemctl start cloudera-scm-server \
&& sleep 2 \
&& tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log | grep "INFO WebServerImpl:com.cloudera.server.cmf.WebServerImpl: Started Jetty server"

Result

2019-08-14 11:42:34,191 INFO WebServerImpl:com.cloudera.server.cmf.WebServerImpl: Started Jetty server.

Once this line appears in the log, Cloudera Manager is up and you can log in from a browser with the default credentials admin/admin:
http://IP:7180
If the page does not load, check the host firewall and the Alibaba Cloud security-group rules to make sure port 7180 is open.
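A quick way to test reachability from another machine, without leaving the shell, is bash's built-in `/dev/tcp` pseudo-device. A sketch, where the helper name is made up and the host/port should be replaced with your CM server's address:

```shell
# Hypothetical helper: succeeds if a TCP connection to host:port can be opened
port_open() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Example: probe the CM web UI port (replace the IP with your CM server)
if port_open 192.168.3.231 7180; then
  echo '7180 reachable'
else
  echo '7180 blocked or server not up'
fi
```

A refused or filtered connection makes `port_open` fail, so this distinguishes "server down / port closed" from a browser-side problem.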
