Deploying CDH 5.16 automatically with Ansible in an offline CentOS 7 environment
Preface
This article shows how to use the author's automation scripts to deploy a CDH cluster offline. After setting just a few parameters, such as the yum repo and the cluster node IPs, one command deploys the whole cluster, sparing you the tedious work of configuring MySQL, the NTP service, host settings, and CDH file distribution. In my own tests, installing CDH on a three-node cluster took less than 15 minutes.
Note: configuring logical volumes on the hosts is not part of the automation. If you have already set up logical volumes, or do not need them, read on. (I recommend creating logical volumes first so storage can be expanded later; when I have time I will add logical volume setup to the scripts as well. My own disks are provided by Ceph block storage and can be expanded freely.)
Overview
The whole CDH deployment is driven by an ansible-playbook; MySQL runs in Docker, and the NTP service is provided by chrony.
Download the installation package
Download link: https://pan.baidu.com/s/1yosjmPLZHngL1QFbxV095g
Extraction code: w4uf
File size: 3.74 GB
The package contains: CDH 5.16, ansible 2.9.21, docker 20.10.7, chrony, mysql 5.7, basic tools such as vim, and the author's automation scripts.
Install and configure Ansible
Copy the package to the /root directory of the target host (the scm-server node) and extract it:
[root@cdh-auto-deploy-test-1 ~]# tar -xvf cdh.5.16.tar
[root@cdh-auto-deploy-test-1 ~]# ll cdh5.16
total 2984412
-rw-r--r--. 1 root root 2127506677 Jun 9 10:14 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
-rw-r--r--. 1 root root 41 Jun 9 10:13 CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha
-rw-r--r--. 1 root root 841524318 Jun 9 10:14 cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz
drwxr-xr-x. 4 root root 153 Jun 25 15:17 deployfiles
-rw-r--r--. 1 root root 5670 Jun 9 10:14 KAFKA-1.2.0.jar
-rw-r--r--. 1 root root 85897902 Jun 9 10:14 KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel
-rw-r--r--. 1 root root 41 Jun 9 10:14 KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel.sha
-rw-r--r--. 1 root root 66538 Jun 9 10:14 manifest.json
-rw-r--r--. 1 root root 5356 Jun 9 10:14 manifestkafka.json
-rw-r--r--. 1 root root 1007502 Jun 9 10:14 mysql-connector-java-5.1.47.jar
Point yum at the repos inside the extracted package:
[root@cdh-auto-deploy-test-1 ~]# mkdir /etc/yum.repos.d/back
[root@cdh-auto-deploy-test-1 ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/back/
[root@cdh-auto-deploy-test-1 ~]# cp /root/cdh5.16/deployfiles/yumPackages/software.repo /etc/yum.repos.d/
[root@cdh-auto-deploy-test-1 ~]# vi /etc/yum.repos.d/software.repo
[software]
name=software
## Set `baseurl` to point at the rpm packages.
baseurl=file:///root/cdh5.16/deployfiles/yumPackages/rpmPackages/
enabled=1
gpgcheck=0
[vim]
name=vim
## Set `baseurl` to point at the extracted files.
baseurl=file:///root/cdh5.16/deployfiles/yumPackages/vim/
enabled=1
gpgcheck=0
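Before running yum, it is worth checking that every file:// baseurl in the repo file actually points at an existing directory; yum's own errors for a bad local path are less direct. A minimal sketch (the check_repo_dirs helper name is my own, not part of the installation package):

```shell
#!/bin/bash
# Print OK/MISSING for every file:// baseurl in a yum .repo file.
check_repo_dirs() {
  # usage: check_repo_dirs <repo-file>
  grep '^baseurl=file://' "$1" | sed 's|^baseurl=file://||' | while read -r dir; do
    if [ -d "$dir" ]; then
      echo "OK: $dir"
    else
      echo "MISSING: $dir"
    fi
  done
}

# e.g.: check_repo_dirs /etc/yum.repos.d/software.repo
```

If anything prints MISSING, fix the baseurl before continuing.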
Install Ansible with yum:
[root@cdh-auto-deploy-test-1 ~]# yum install -y ansible vim perl
[root@cdh-auto-deploy-test-1 ~]# ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Apr 9 2019, 14:30:50) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
[root@cdh-auto-deploy-test-1 ~]#
Edit ansible.cfg to disable SSH host key checking:
[root@cdh-auto-deploy-test-1 ~]# vim /etc/ansible/ansible.cfg
host_key_checking = False
Edit the Ansible hosts file to define the managed hosts, substituting your own IPs and passwords:
[root@cdh-auto-deploy-test-1 ~]# vim /etc/ansible/hosts
[scm_server]
10.0.5.77 ansible_host=10.0.5.77 hostname=cdh1 ansible_user=root ansible_ssh_pass=12345 ansible_connection=local
[scm_agent]
10.0.5.74 ansible_host=10.0.5.74 hostname=cdh2 ansible_user=root ansible_ssh_pass=12345
[cdh:children]
scm_server
scm_agent
[db:children]
scm_server
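With this flat INI layout, a group's members can also be extracted without invoking ansible at all, e.g. to reuse the same host list in other scripts. A minimal sketch (list_group_hosts is a hypothetical helper; it does not expand [x:children] groups):

```shell
#!/bin/bash
# Print the first field (the host) of every entry in one inventory group.
list_group_hosts() {
  # usage: list_group_hosts <inventory-file> <group>
  awk -v g="[$2]" '
    $0 == g  { found = 1; next }   # entered the wanted group
    /^\[/    { found = 0 }         # any other section header ends it
    found && NF { print $1 }       # non-empty lines inside the group
  ' "$1"
}

# e.g.: list_group_hosts /etc/ansible/hosts scm_agent
```

For anything beyond this flat format, `ansible-inventory --list` is the authoritative way to inspect the inventory.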
Test connectivity by pinging all the nodes:
[root@cdh-auto-deploy-test-1 ~]# ansible all -m ping
cdh1 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
cdh2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
Install CDH 5.16
Edit /root/cdh5.16/deployfiles/vars.yaml and set the parameters:
## Variables file
#
# vm_type is the OS type of the Ansible-managed nodes.
# "centos7" is the only valid value in the current version.
vm_type: "centos7"
# IP of an existing NTP server, if one is available; leave empty otherwise.
# When empty, the other nodes synchronize their clocks to the scm-server node.
ntp_server:
# Directory containing the (extracted) installation package.
cdh_packages_dir: "/root"
# Data-disk directory. Docker persistent volumes, CDH's scm files, etc. go under this directory.
cdh_data_dir: "/opt"
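A missing or misspelled key here only surfaces later as a playbook failure, so a quick sanity check of the file can save a run. A minimal sketch (check_vars is my own helper name; it only greps for top-level keys and is not a YAML parser):

```shell
#!/bin/bash
# Report whether each key the playbook expects is present in vars.yaml.
check_vars() {
  # usage: check_vars <vars-file>
  local key
  for key in vm_type ntp_server cdh_packages_dir cdh_data_dir; do
    if grep -q "^${key}:" "$1"; then
      echo "present: $key"
    else
      echo "missing: $key"
    fi
  done
}

# e.g.: check_vars /root/cdh5.16/deployfiles/vars.yaml
```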
Run the playbook to install CDH 5.16 automatically:
[root@cdh-auto-deploy-test-1 ~]# ansible-playbook /root/cdh5.16/deployfiles/deploy-cdh.yaml
Note:
/root/cdh5.16/deployfiles/deploy-cdh.yaml
is the Ansible playbook that performs the automated CDH deployment.
You can edit this playbook to customize your own deployment.
The MySQL user password and the Docker volume directories are all set in this file.
When the playbook finishes, run tail -200f /opt/cloudera-manager/cm-5.16.1/log/cloudera-scm-server/cloudera-scm-server.log
to follow the scm-server log. After a few minutes, output like the following indicates that the server has started:
Started SelectChannelConnector@0.0.0.0:7180
Started Jetty server.
ScmActive completed successfully.
Discovered parcel on CM server: CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
Created torrent file: /opt/cloudera/parcel-repo/CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.torrent
Creating single-file torrent for CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel...
Hashing data from CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel with 4 threads (4058 pieces)...
... 10% complete
... 20% complete
... 30% complete
... 40% complete
... 50% complete
... 60% complete
... 70% complete
... 80% complete
... 90% complete
Hashed 1 file(s) (2127506677 bytes) in 4058 pieces (4058 expected) in 6605.6ms.
Single-file torrent information:
Torrent name: CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel
Announced at: Seems to be trackerless
Created on..: Fri Jun 25 16:49:59 CST 2021
Created by..: cm-server
Pieces......: 4058 piece(s) (524288 byte(s)/piece)
Total size..: 2,127,506,677 byte(s)
calParcelManagerImpl: Discovered parcel on CM server: KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel
calParcelManagerImpl: Created torrent file: /opt/cloudera/parcel-repo/KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel.torrent
Creating single-file torrent for KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel...
Hashing data from KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel with 4 threads (164 pieces)...
... 10% complete
... 20% complete
... 30% complete
... 40% complete
... 50% complete
... 60% complete
... 70% complete
... 80% complete
... 90% complete
Hashed 1 file(s) (85897902 bytes) in 164 pieces (164 expected) in 277.9ms.
Single-file torrent information:
Torrent name: KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel
Announced at: Seems to be trackerless
Created on..: Fri Jun 25 16:50:06 CST 2021
Created by..: cm-server
Pieces......: 164 piece(s) (524288 byte(s)/piece)
Total size..: 85,897,902 byte(s)
You can now log in at cdh1:7180
and deploy the cluster. User: admin
Password: admin
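Rather than refreshing the browser, you can poll the web UI port from the shell until it answers. A minimal sketch using bash's /dev/tcp redirection (wait_for_port is a hypothetical helper, not part of the package):

```shell
#!/bin/bash
# Print "up" and return 0 once host:port accepts a TCP connection,
# otherwise print "timeout" after the given number of attempts.
wait_for_port() {
  # usage: wait_for_port <host> <port> [tries] [pause-seconds]
  local host="$1" port="$2" tries="${3:-30}" pause="${4:-10}" i
  for ((i = 0; i < tries; i++)); do
    # the subshell opens and closes fd 3; success means the port answered
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      echo "up"
      return 0
    fi
    sleep "$pause"
  done
  echo "timeout"
  return 1
}

# e.g.: wait_for_port cdh1 7180
```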
Notes
Check the agent status:
/opt/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-agent status
Check the server status:
/opt/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-server status
Appendix: the automation scripts in the installation package
deploy-cdh.yaml
---
- hosts: all
  vars_files:
    - ./vars.yaml
  tasks:
    - name: send centos7init to all node
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/deployfiles/centos7init.sh"
        dest: "{{cdh_data_dir}}/"
    - name: init all node with centos7init
      shell: bash "{{cdh_data_dir}}/centos7init.sh"
      register: initinfo
      ignore_errors: yes
    - name: initinfo
      debug:
        msg:
          - "return code is {{initinfo.rc}}"
          - "{{initinfo.stdout_lines}}"
    - name: set hostname
      shell: hostnamectl set-hostname "{{hostname|quote}}"
    ## This task is disabled; use the task of the same name below.
    ## The blockinfile module avoids inserting duplicate entries.
    # - name: set hosts
    #   shell: echo "{{item.key}} {{item.value.hostname}}" >> /etc/hosts
    #   with_dict:
    #     - "{{hostvars}}"
    - name: set hosts
      blockinfile:
        path: /etc/hosts
        block: |
          {% for item in hostvars %}
          {{hostvars[item]['ansible_host']}} {{hostvars[item]['hostname']}}
          {% endfor %}
        state: present
    - name: disable selinux
      shell: setenforce 0
      ignore_errors: yes
    - name: set selinux config
      shell: sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
      ignore_errors: yes
    - name: shutdown firewalld
      shell: systemctl stop firewalld
      ignore_errors: yes
    - name: disable firewall
      shell: systemctl disable firewalld
      ignore_errors: yes
    - name: send yum packages to all node
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/deployfiles/yumPackages"
        dest: /root/
    - name: check yum repo
      shell: ls /etc/yum.repos.d | grep software.repo
      register: repos
      ignore_errors: yes
    - name: make back repo
      shell: mkdir -p /etc/yum.repos.d/back
      ignore_errors: yes
    - name: back yum repo
      shell: mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/back/
      when: repos.rc != 0
      ignore_errors: yes
    - name: set yum repo
      synchronize:
        src: /root/yumPackages/software.repo
        dest: /etc/yum.repos.d/software.repo
      when: repos.rc != 0
    - name: install openjdk1.8, chrony, psmisc with yum
      yum:
        name:
          - perl
          - psmisc
          - chrony
          - java-1.8.0-openjdk.x86_64
        state: present
    - name: set openjdk ssl
      shell: sed -i 's/jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4/jdk.tls.disabledAlgorithms=RC4/g' /usr/lib/jvm/jre-1.8.0-openjdk/lib/security/java.security
    - name: set chrony.conf
      template:
        src: chrony.conf.j2
        dest: "/etc/chrony.conf"
      when: vm_type == "centos7"
    - name: sync time
      shell: systemctl {{item}}
      with_items:
        - "enable chronyd"
        - "restart chronyd"

- hosts: db
  vars_files:
    - ./vars.yaml
  tasks:
    - name: install docker-ce
      yum:
        name:
          - docker-ce
        state: present
    - name: enable docker
      shell: systemctl enable docker
    - name: check docker status
      shell: systemctl status docker
      register: dockerstat
      ignore_errors: yes
    - name: check mysql status
      shell: docker ps -a | grep mysql
      register: mysqlstat
      ignore_errors: yes
    - name: print mysqlstat.rc
      debug:
        msg:
          - "{{mysqlstat.rc}}"
          - "{{mysqlstat.stdout_lines}}"
    - name: start docker-ce
      shell: service docker restart
      when: dockerstat.rc != 0
    - name: send mysql.tar
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/deployfiles/mysql.tar"
        dest: "{{cdh_data_dir}}/mysql.tar"
      when: mysqlstat.rc != 0
    - name: load mysql image
      shell: docker load < "{{cdh_data_dir}}"/mysql.tar
      when: mysqlstat.rc != 0
    - name: make dir for mysql volume
      command:
        cmd: mkdir -p {{item}}
      with_items:
        - "{{cdh_data_dir}}/mysql/mysql-config"
        - "{{cdh_data_dir}}/mysql/mysql-data"
      when: mysqlstat.rc != 0
    - name: set my.cnf
      template:
        src: my.cnf.j2
        dest: "{{cdh_data_dir}}/mysql/mysql-config/my.cnf"
    - name: docker restart mysql
      shell: docker restart mysql
      when: mysqlstat.rc == 0
    - name: docker run mysql
      shell: docker run -it -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD="1qaz2wsx" -v "{{cdh_data_dir}}"/mysql/mysql-config:/etc/mysql -v "{{cdh_data_dir}}"/mysql/mysql-data:/var/lib/mysql mysql:5.7.34
      register: result
      when: mysqlstat.rc != 0
    - name: wait for mysql ready
      wait_for:
        timeout: 300
        port: 3306
        delay: 60
        state: drained
    - name: copy init-cdh-server-mysql.sh
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/deployfiles/init-cdh-server-mysql.sh"
        dest: "{{cdh_data_dir}}/mysql/mysql-config/"
      when: mysqlstat.rc != 0
    - name: create databases and user
      shell: docker exec mysql /bin/bash /etc/mysql/init-cdh-server-mysql.sh
      # when: result.rc == 0 mysqlstat.rc == 0
      ignore_errors: yes

- hosts: cdh
  vars_files:
    - ./vars.yaml
  tasks:
    - name: mkdir java
      shell: mkdir -p /usr/share/java
    - name: copy mysql-connector-java
      synchronize:
        src: "{{cdh_packages_dir}}/cdh5.16/mysql-connector-java-5.1.47.jar"
        dest: /usr/share/java/mysql-connector-java.jar

- hosts: scm_server
  vars_files:
    - ./vars.yaml
  tasks:
    - name: create dir
      shell: mkdir -p "{{cdh_data_dir}}/{{item}}"
      with_items:
        - cloudera-manager
        - cloudera/parcel-repo
        - cloudera/parcels
    - name: extract cloudera manager
      shell: tar -zxvf "{{cdh_packages_dir}}"/cdh5.16/cloudera-manager-centos7-cm5.16.1_x86_64.tar.gz -C "{{cdh_data_dir}}"/cloudera-manager/
    - name: cp parcel-repo
      shell: cp "{{cdh_packages_dir}}/cdh5.16/{{item}}" "{{cdh_data_dir}}/cloudera/parcel-repo/"
      with_items:
        - "CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel"
        - "CDH-5.16.1-1.cdh5.16.1.p0.3-el7.parcel.sha"
        - "KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel"
        - "KAFKA-4.0.0-1.4.0.0.p0.1-el7.parcel.sha"
        - "manifest.json"
        - "manifestkafka.json"
        - "KAFKA-1.2.0.jar"
    - name: fix agent file
      shell: sed -i "s/server_host=localhost/server_host={{hostname}}/g" "{{cdh_data_dir}}"/cloudera-manager/cm-5.16.1/etc/cloudera-scm-agent/config.ini
    - name: set scm-server dbproperties
      shell: sed -i "s/^.*com.cloudera.cmf.db.{{ item.name }}=.*$/com.cloudera.cmf.db.{{ item.name }}={{ item.value }}/" "{{cdh_data_dir}}"/cloudera-manager/cm-5.16.1/etc/cloudera-scm-server/db.properties
      with_items:
        - { name: 'type', value: 'mysql' }
        - { name: 'host', value: 'cdh1' }
        - { name: 'name', value: 'cmf' }
        - { name: 'user', value: 'cmf' }
        - { name: 'password', value: '1qaz2wsx' }
        - { name: 'setupType', value: 'EXTERNAL' }
      ignore_errors: yes

- hosts: scm_agent
  vars_files:
    - ./vars.yaml
  tasks:
    - name: cp files
      synchronize:
        src: "{{cdh_data_dir}}/{{ item }}"
        dest: "{{cdh_data_dir}}/"
      with_items:
        - "cloudera"
        - "cloudera-manager"

- hosts: cdh
  vars_files:
    - ./vars.yaml
  tasks:
    - name: del cloudera-scm
      user:
        name: cloudera-scm
        state: absent
    - name: add user
      shell: useradd --system --home="{{cdh_data_dir}}"/cloudera-manager/cm-5.16.1/run/cloudera-scm-server/ --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
    - name: change owner of file
      shell: chown -R cloudera-scm:cloudera-scm "{{cdh_data_dir}}/{{item}}"
      with_items:
        - "cloudera"
        - "cloudera-manager"

- hosts: scm_server
  vars_files:
    - ./vars.yaml
  tasks:
    - name: copy scm-server
      synchronize:
        src: "{{cdh_data_dir}}/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-server"
        dest: "/etc/init.d/"
    - name: set scm-server enable
      shell: "chkconfig {{item}}"
      with_items:
        - "--add cloudera-scm-server"
        - "cloudera-scm-server on"
    - name: set scm-server env parameters
      shell: sed -i 's?CMF_DEFAULTS=${CMF_DEFAULTS:-/etc/default}?CMF_DEFAULTS=${CMF_DEFAULTS:-/opt/cloudera-manager/cm-5.16.1/etc/default}?g' /etc/init.d/cloudera-scm-server
    - name: start server
      shell: "{{cdh_data_dir}}/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-server restart"

- hosts: cdh
  vars_files:
    - ./vars.yaml
  tasks:
    - name: copy scm-agent
      synchronize:
        src: "{{cdh_data_dir}}/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-agent"
        dest: "/etc/init.d/"
    - name: set scm-agent enable
      shell: "chkconfig {{item}}"
      with_items:
        - "--add cloudera-scm-agent"
        - "cloudera-scm-agent on"
    - name: set scm-agent env parameters
      shell: sed -i 's?CMF_DEFAULTS=${CMF_DEFAULTS:-/etc/default}?CMF_DEFAULTS=${CMF_DEFAULTS:-/opt/cloudera-manager/cm-5.16.1/etc/default}?g' /etc/init.d/cloudera-scm-agent
    - name: wait for scm-server
      wait_for:
        timeout: 300
        port: 7182
        delay: 60
        state: drained
    - name: start agent
      shell: "{{cdh_data_dir}}/cloudera-manager/cm-5.16.1/etc/init.d/cloudera-scm-agent restart"
centos7init.sh
#!/bin/bash
cat << EOF
+--------------------------------------------------------------+
|            === Welcome to CentOS System init ===             |
+--------------------------------------------------------------+
EOF
#set transparent_hugepage
echo never > /sys/kernel/mm/transparent_hugepage/defrag
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo 'echo never > /sys/kernel/mm/transparent_hugepage/defrag' >> /etc/rc.local
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' >> /etc/rc.local
#set limits
cat > /etc/security/limits.conf << EOF
root soft nproc 65535
root hard nproc 65535
* soft nofile 1024000
* hard nofile 1024000
EOF
cat << EOF
+-------------------set limits success-----------------+
EOF
#set sysctl
cat > /etc/sysctl.conf << EOF
fs.file-max = 1024000
vm.swappiness = 0
kernel.sysrq = 0
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
EOF
/sbin/sysctl -p
cat << EOF
+-------------------set sysctl success-----------------+
EOF
init-cdh-server-mysql.sh
#!/bin/bash
creat_cmf="create database cmf DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
creat_hive="create database hive DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
creat_oozie="create database oozie DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
creat_hue="create database hue DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
creat_amon="create database amon DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
creat_activity="create database activity DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
creat_reports="create database reports DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
creat_audit="create database audit DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
creat_metadata="create database metadata DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
access_auth_cmf="grant all on cmf.* TO 'cmf'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_hive="grant all on hive.* TO 'hive'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_oozie="grant all on oozie.* TO 'oozie'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_hue="grant all on hue.* TO 'hue'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_amon="grant all on amon.* TO 'amon'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_activity="grant all on activity.* TO 'activity'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_reports="grant all on reports.* TO 'reports'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_audit="grant all on audit.* TO 'audit'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_metadata="grant all on metadata.* TO 'metadata'@'%' IDENTIFIED BY '1qaz2wsx';"
access_auth_remote="grant all PRIVILEGES on *.* to 'root'@'%' identified by '1qaz2wsx' with grant option;"
flush="flush privileges;"
sqls=("${creat_cmf}" "${creat_hive}" "${creat_oozie}" "${creat_hue}" "${creat_amon}" \
      "${access_auth_cmf}" "${access_auth_hive}" "${access_auth_oozie}" "${access_auth_hue}" "${access_auth_amon}" \
      "${access_auth_remote}" "${flush}")
for i in "${sqls[@]}"
do
  mysql -uroot -p1qaz2wsx -e "${i}";
done
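The per-service statements above all follow one pattern, so the same SQL can also be generated from a list instead of being written out by hand (note the script defines statements for activity, reports, audit, and metadata but never adds them to the sqls array, so those databases are not created). A minimal sketch (gen_cdh_sql is my own name; the password is the one used throughout this guide):

```shell
#!/bin/bash
# Emit the create-database and grant statements for each service name given.
gen_cdh_sql() {
  # usage: gen_cdh_sql <password> <db>...
  local pass="$1" db
  shift
  for db in "$@"; do
    echo "create database ${db} DEFAULT CHARACTER SET utf8 COLLATE 'utf8_general_ci';"
    echo "grant all on ${db}.* TO '${db}'@'%' IDENTIFIED BY '${pass}';"
  done
  echo "flush privileges;"
}

# e.g.: gen_cdh_sql 1qaz2wsx cmf hive oozie hue amon | mysql -uroot -p1qaz2wsx
```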
Ansible template for the MySQL configuration file: my.cnf.j2
[mysqld]
skip-name-resolve
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
datadir         = /var/lib/mysql
#log-error      = /var/log/mysql/error.log
# By default we only accept connections from localhost
#bind-address   = 127.0.0.1
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
server-id=1
log-bin=mysql-bin
binlog_format = ROW
binlog_row_image = full
max_binlog_size = 1G
max_allowed_packet = 2G
log_timestamps=SYSTEM
wait_timeout=2880000
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=6442450944
max_allowed_packet = 67108864
default-time_zone = '+8:00'
character-set-server=utf8
max_connections = 3000
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
#[client]
#default-character-set=utf8
[mysql]
#default-character-set=utf8
Ansible template for the chrony configuration file: chrony.conf.j2
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
{% if ntp_server is none or not ntp_server %}
server {{groups['scm_server'][0]}} iburst
{% else %}
server {{ntp_server}} iburst
{% endif %}

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16
{% if ntp_server is none or not ntp_server %}
{% if ansible_host == groups['scm_server'][0] %}
{% set list1 = ansible_host.split('.') %}
allow {{list1[0]}}.{{list1[1]}}.{{list1[2]}}.0/24
local stratum 10
{% endif %}
{% endif %}

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
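When ntp_server is empty, the allow rule in the template derives a /24 network from the scm-server's own IP. The same computation can be reproduced in shell, e.g. to predict what the rendered chrony.conf will contain (allow_subnet is a hypothetical helper):

```shell
#!/bin/bash
# Turn an IPv4 address into the /24 network the template would allow.
allow_subnet() {
  # usage: allow_subnet <ip>
  echo "$1" | awk -F. '{ printf "%s.%s.%s.0/24\n", $1, $2, $3 }'
}

# e.g.: allow_subnet 10.0.5.77   -> 10.0.5.0/24
```

After deployment, `chronyc sources` on any agent node should show the scm-server (or the configured ntp_server) as the selected time source.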
Deploy services
After logging in to the home page, you can deploy an example cluster as follows.
Steps:
- Accept the license terms and click Continue.
- Choose the free edition and click Continue.
- Click Continue.
- Enter the node IPs, separated by commas, and click Search.
- Check the hosts and click Continue.
- Under additional parcels, check Kafka and click Continue.
- Once the parcels are installed and the host inspection has finished, click Finish.
- For cluster installation, choose Custom Services and select the services (as shown in the figure), then click Continue.
- In the cluster setup, choose which nodes each service is assigned to; the figure uses six nodes as an example.
- Enter the database and user names (the navigator services all use the cmf database with user cmf; the other services' users and databases are as shown in the figure). Test the connections and click Continue.
- Change all data directories to the mounted data volume, here under /opt.
- Click Continue and watch the deployment run.
If the deployment stops or fails, check the error report and fix the reported problems.
This article is the author's original work; please credit the source when reposting.