[PROJECT] DEPLOY CLUSTER

1 Environment Preparation

1.1 All Nodes: Set Up Root Login

vim /etc/ssh/sshd_config

---------------------------
PermitRootLogin yes

systemctl restart sshd.service
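
A non-interactive alternative to the vim edit above (a sketch; it backs up the original file first):

cp /etc/ssh/sshd_config /etc/ssh/sshd_config.ori
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart sshd.service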

1.2 All Nodes: Disk Mount


fdisk -l

pvs
pvcreate /dev/vdb
pvs

vgs
vgextend vg00 /dev/vdb
vgs

lvs
lvcreate -l 50%FREE -n lv_data vg00
lvs
mkfs.xfs /dev/vg00/lv_data

mkdir /data
mount /dev/vg00/lv_data /data



cat >> /etc/fstab << EOF
/dev/vg00/lv_data      /data                    xfs     defaults        0 0
EOF

mount -a
reboot

df -h




pvcreate /dev/vdb
vgextend vg00 /dev/vdb

lvcreate -l 50%FREE -n lv_data vg00
lvcreate -l 100%FREE -n lv_pcc vg00

mkfs.xfs /dev/vg00/lv_data
mkfs.xfs /dev/vg00/lv_pcc
mkdir /data
mount /dev/vg00/lv_data /data
mkdir /pcc
mount /dev/vg00/lv_pcc /pcc

cat >> /etc/fstab << EOF
/dev/vg00/lv_data      /data                    xfs     defaults        0 0
/dev/vg00/lv_pcc       /pcc                     xfs     defaults        0 0
EOF
mount -a
reboot



pvcreate /dev/vdb
vgcreate vg00 /dev/vdb
lvcreate -l 100%FREE -n LVdata vg00
mkfs.xfs /dev/vg00/LVdata
mkdir /data
echo "/dev/vg00/LVdata  /data  xfs  defaults        0 0" >> /etc/fstab
mount -a
Replace data with the name of the directory you want to mount, then paste these commands in.
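
The same sequence can be parameterized for any mount point (a sketch; MOUNT_DIR and LV_NAME are example values, adjust as needed):

MOUNT_DIR=/pcc        # example target directory
LV_NAME=lv_pcc        # example logical volume name
pvcreate /dev/vdb
vgcreate vg00 /dev/vdb
lvcreate -l 100%FREE -n ${LV_NAME} vg00
mkfs.xfs /dev/vg00/${LV_NAME}
mkdir -p ${MOUNT_DIR}
echo "/dev/vg00/${LV_NAME}  ${MOUNT_DIR}  xfs  defaults  0 0" >> /etc/fstab
mount -a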


--------------------------------------------------------------------
pvcreate /dev/vdb
vgextend vg00 /dev/vdb
lvcreate -l 50%FREE -n lv_data vg00
lvcreate -L 50G -n lv_docker vg00
mkfs -t xfs /dev/vg00/lv_data
mkfs -t xfs /dev/vg00/lv_docker

# create directories and mount the volumes
mkdir /data
mkdir /var/lib/docker
mount /dev/vg00/lv_data /data
mount /dev/vg00/lv_docker /var/lib/docker

# mount automatically at boot
cat >> /etc/fstab << EOF
/dev/vg00/lv_data      /data                    xfs     defaults        0 0
/dev/vg00/lv_docker    /var/lib/docker          xfs     defaults        0 0
EOF

1.3 All Nodes: SSH Login

# check ssh-keygen
ls  /root/.ssh/ -l

#################
# create ssh-keygen
# press Enter at every prompt
#################
ssh-keygen


# master and worker nodes (host IP)
ssh-copy-id xxx.xxx.xxx.xxx

# deploy-node (host IP)
ssh-copy-id xxx.xxx.xxx.xxx

# log in to the machine
ssh 'xxx.xxx.xxx.xxx'

# exit machine
exit

2. Prepare the Deploy-Node

2.1 Copy the Required Files to the Deploy Node

# Deploy-Node
cd /pcc
mkdir breeze
cd breeze
rz
  • Copy the offline packages to the Deploy-Node
docker-rpm.tar.gz    		# offline rpm packages for docker and docker-compose; extracts to the ./rpms/ directory
deploy-image.tar           	# docker image of the deployment program
docker-compose.yml  		# yml file that brings up the deployment program

2.2 Install docker & docker-compose

  • Decompress the docker yum rpm package
cd /pcc/breeze

# Unzip to get ./rpms/
tar -zxvf docker-rpm.tar.gz
  • Configure the local yum repository
cd /etc/yum.repos.d/
ll

cat >> /etc/yum.repos.d/docker-local.repo << EOF
[docker]
name=docker-rpm
baseurl=file:///pcc/breeze/rpms/
gpgcheck=0
enabled=1
EOF

##########################
# Note: the baseurl must point to the directory the rpms were extracted to
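
# optional quick check that the local repo is picked up
yum repolist enabled | grep -i docker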
  • install docker & docker-compose
# yum ready
yum clean all && yum makecache

# local-install docker & docker-compose
yum install -y docker docker-compose

# start the docker service and enable it on boot
systemctl start docker && systemctl enable docker
# load the deploy image locally
docker load -i deploy-image.tar

# list images
docker images
# bring up the deployment program
docker-compose up -d 
# list containers
docker ps -a

3 Deploy K8S Cluster

3.1 Create the Cluster

3.2 Add Hosts

3.3 Add Components

3.4 Deploy K8S Cluster

  • Verify cluster status
# Master-Node
kubectl get nodes

# Master-node: list the running containers
docker ps -a

3.5 Check Node status

# Deploy-Node Reset
kubeadm reset

# View deploy-main logs
docker logs -f deploy-main

# Master Node: View component status
kubectl get cs

# Master Node: View nodes
kubectl get nodes

# Master Node: View pods
kubectl -n kube-system get pods -o wide

# Master Node: View control-plane static pod manifests
cd /etc/kubernetes/manifests
ll
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml

# restart docker
systemctl restart docker

# View daemon.json
cat /etc/docker/daemon.json

# View docker config
cd /etc/sysconfig/
ll | grep docker

# View Docker
docker ps 
docker ps -a
# View deploy-main logs
docker logs -f deploy-main

4 Add Deploy-Node to Cluster

Finally, add the Deploy node to the K8S cluster through the Breeze deployment program; the issues encountered and their solutions are described below.

4.1 hostname

  • Modify hostname
# Deploy-Node
hostnamectl set-hostname xxx

# disconnect and reconnect
exit

# check the hostname
hostname
  • Modify hosts
# check the current /etc/hosts entries
# Deploy-Node: modify hosts, using the other nodes' hosts configuration as a reference
cat /etc/hosts
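
For reference, the appended entries might look like this (a sketch; the IPs and hostnames are placeholders, copy the real entries from an existing node):

cat >> /etc/hosts << EOF
xxx.xxx.xxx.xxx   master1
xxx.xxx.xxx.xxx   master2
xxx.xxx.xxx.xxx   master3
xxx.xxx.xxx.xxx   deploy-node-hostname
EOF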

4.2 configure yum source

  • configure yum source
# deploy-node
cd /etc/yum.repos.d/
ls

mkdir /pcc/ori
mv docker-local.repo /pcc/ori/

# master-node or other worker-node except deploy-node
cd  /etc/yum.repos.d/
ls
scp wise2c.repo deploy-node-ip:/etc/yum.repos.d/

# Run the command below; if it completes without errors, the yum repo configuration is fine
yum makecache

4.3 Add Only the Deploy-Node to the Cluster

4.4 [Error] & [solution]

[Error] NotReady

  • [Error] After the install finishes, the node status shows NotReady
kubectl get nodes
  • [Solution]
    Docker on the newly added node was installed manually and is missing configuration files; scp the config files from any existing node to the new node.
# /etc/docker/daemon.json
cd /etc/docker/
ls
cat daemon.json

# /etc/cni directory
cd /etc/cni/net.d/
ls
cat /etc/cni/net.d/flannel.conflist

# worker-node or master-node
cd /etc/docker/
scp daemon.json xxx.xxx.xxx.xxx:/etc/docker/

cd /etc
scp -r cni/ xxx.xxx.xxx.xxx:/etc/

scp daemon.json 100.70.88.97:/etc/docker
scp -r cni/ 100.70.88.97:/etc

[Error] Docker configuration conflicts

  • [Solution] Clear the contents of the following four configuration files
[root@nodecti2 ~]# cd /etc/sysconfig/
[root@nodecti2 sysconfig]# ll | grep docker
cd /etc/sysconfig/
ll | grep docker
-rw-r--r--  1 root root    1 Mar 25 20:05 docker
-rw-r--r--  1 root root    0 Mar 25 20:06 docker-network
-rw-r--r--  1 root root    1 Mar 25 20:05 docker-storage
-rw-r--r--  1 root root    0 Mar 25 20:06 docker-storage-setup

cat docker
cat docker-network
cat docker-storage
cat docker-storage-setup

cp docker /pcc/ori
cp docker-network /pcc/ori
cp docker-storage /pcc/ori
cp docker-storage-setup /pcc/ori

echo > docker
echo > docker-network
echo > docker-storage
echo > docker-storage-setup

Example: echo > docker-network

  • Finally, restart the docker service and check that the node status is Ready
# deploy-node restart docker
systemctl restart docker

# master node or other node except deploy-node
kubectl get nodes

4.5 Verify cluster status

4.5.1 Check Node status

# Deploy-Node Reset
kubeadm reset

# View deploy-main logs
docker logs -f deploy-main

# Master Node: View component status
kubectl get cs

# Master Node: View nodes
kubectl get nodes

# Master Node: View pods
kubectl -n kube-system get pods -o wide

# Master Node: View control-plane static pod manifests
cd /etc/kubernetes/manifests
ll
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml

# restart docker
systemctl restart docker

# View daemon.json
cat /etc/docker/daemon.json

# View docker config
cd /etc/sysconfig/
ll | grep docker

# View Docker
docker ps 
docker ps -a
# View deploy-main logs
docker logs -f deploy-main

4.5.2 Master

# 
cd /var/tmp/wise2c/kubernetes/
cat kubeadm.conf


# View Plugin: Master-Node
cd /etc/kubernetes/manifests
ls
kube-apiserver.yaml
kube-controller-manager.yaml
kube-scheduler.yaml

# check node info
kubectl describe node node-name

# check pod logs
kubectl -n kube-system logs -f pod-name


# check pod status: Running
kubectl get pod -n kube-system -o wide
NAME	READY	STATUS	RESTARTS	AGE	IP	NODE	NOMINATED-NODE

# Plugin: Running
coredns(one of the master-nodes)
kube-apiserver-master
kube-controller-manager-master
kube-scheduler-master
kube-flannel(master & worker)
kube-proxy(master & worker)
kubernetes-dashboard(one of the master-nodes)

# delete node
kubectl delete node node-name

  • View cluster status
# Check cluster status:
kubectl get cs     # normal status is Healthy
# Check node status:
kubectl get nodes  # normal status is Ready
# Check the cluster system component pods:
kubectl -n kube-system get pod   # all pods should be Running

4.5.3 Access the Harbor Registry

Enter the registry node's IP in a browser.
Default username/password: admin/Harbor12345
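
A quick reachability check from any node before opening the browser (a sketch; replace the placeholder with the registry node's IP):

curl -sk -o /dev/null -w '%{http_code}\n' http://xxx.xxx.xxx.xxx/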

Deploy APP

Common troubleshooting

# deploy-node: view deployment logs
docker logs -f deploy-main

# log in to the container
docker exec -it deploy-main sh

Wise2c



vim /etc/


# delete /app and extend root 
umount /app/
lvremove vg00/app

lvextend -L +40G /dev/vg00/root
xfs_growfs /dev/vg00/root



# extend var 
lvextend -L +180G /dev/vg00/var
xfs_growfs /dev/vg00/var



mkdir /data
lvcreate -l 100%FREE -n lv_data vg00

mkfs.xfs /dev/vg00/lv_data
mount /dev/vg00/lv_data /data


# configure mounts at boot (/etc/fstab)

# remove this entry

vim /etc/fstab
/dev/mapper/vg00-app    /app                    xfs     defaults        0 0


# add this entry

cat >> /etc/fstab << EOF
/dev/vg00/lv_data      /data                    xfs     defaults        0 0
EOF


mount -a 
reboot



cat >> /etc/yum.repos.d/docker-ce-local.repo << EOF
[docker]
name=docker-local-repo
baseurl=file:///breeze/rpms/
gpgcheck=0
enabled=1
EOF


# build the local install source
yum clean all && yum makecache

# install docker and docker-compose
yum install -y docker docker-compose

# start the docker service and enable it on boot
systemctl start docker && systemctl enable docker


docker load -i docker-image.tar
docker images

docker-compose up -d
docker ps


ssh-keygen   # press Enter at every prompt
ls  /root/.ssh/ -l


 # log in to the container: docker exec -it deploy-main sh ; run exit to leave the container
 
 
 
 Add the last node
 
 In Breeze, check only the kubernetes component and tick "just add new worker nodes, do not reinstall this cluster".
 Do not select the master nodes; select only the last node, then click Start Install.
 
 
 After it finishes, the node status shows NotReady:
 kubectl get nodes
 
 Solution steps
 1. Docker on the newly added node was installed manually and is missing two configuration items;
    scp them from any existing node to the new node:
 /etc/docker/daemon.json
 /etc/cni
 

 scp daemon.json 100.67.33.10:/etc/docker
 scp -r cni/ 100.67.33.10:/etc
 
 
 2. The docker configuration conflicts also need to be resolved.
Clear the contents of the following four configuration files:
[root@nodecti2 ~]# cd /etc/sysconfig/
[root@nodecti2 sysconfig]# ll | grep docker
-rw-r--r--  1 root root    1 Mar 25 20:05 docker
-rw-r--r--  1 root root    0 Mar 25 20:06 docker-network
-rw-r--r--  1 root root    1 Mar 25 20:05 docker-storage
-rw-r--r--  1 root root    0 Mar 25 20:06 docker-storage-setup
Example: echo > docker-network

Finally, restart the docker service and check that the node status returns to Ready.

A. SIP-PROXY Installation

1. Local rpm Preparation

  • bzip2-1.0.6-13.el7.x86_64.rpm
  • docker-rpm.tar.gz

docker local install

cat >> /etc/yum.repos.d/docker-ce-local.repo << EOF
[docker]
name=docker-local-repo
baseurl=file:///breeze/rpms/
gpgcheck=0
enabled=1
EOF

# refresh the cache
yum clean all && yum makecache

# install docker and docker-compose
yum install -y docker docker-compose

# start the docker service and enable it on boot
systemctl start docker && systemctl enable docker

docker load -i docker-image.tar
docker images


docker load -i fsdeb.tar



  • Installation history
    1  [2019-03-28 17:29:19] vim /etc/ssh/sshd_config 
    2  [2019-03-28 17:30:06] cp /etc/ssh/sshd_config /etc/ssh/sshd_config.ori
    3  [2019-03-28 17:30:08] vim /etc/ssh/sshd_config 
    4  [2019-03-28 17:31:04] systemctl restart sshd
    5  [2019-03-28 17:35:06] cd /etc/ssh
    6  [2019-03-28 17:35:07] ls
    7  [2019-03-28 17:35:14] cp sshd_config sshd_config.ori
    8  [2019-03-28 17:35:18] cp sshd_config sshd_config.ori2
    9  [2019-03-28 17:35:21] vi sshd_config
   10  [2019-03-28 17:35:30] ifconfig
   11  [2019-03-28 17:35:39] systemctl restart sshd
   12  [2019-03-29 14:07:24] ls
   13  [2019-03-29 14:07:29] cd /app
   14  [2019-03-29 14:07:30] ls
   15  [2019-03-29 14:07:54] cat >> /etc/yum.repos.d/docker-ce-local.repo << EOF
[docker]
name=docker-local-repo
baseurl=file:///breeze/rpms/
gpgcheck=0
enabled=1
EOF

   16  [2019-03-29 14:08:17] vim /etc/yum.repos.d/docker-ce-local.repo 
   17  [2019-03-29 14:09:32] ls
   18  [2019-03-29 14:09:34] cd localrpm/
   19  [2019-03-29 14:09:35] ls
   20  [2019-03-29 14:09:51] tar -zxvf docker-rpm.tar.gz 
   21  [2019-03-29 14:10:02] ls
   22  [2019-03-29 14:10:14] vim /etc/yum.repos.d/docker-ce-local.repo 
   23  [2019-03-29 14:10:28] yum clean all && yum makecache
   24  [2019-03-29 14:11:00] yum install -y docker docker-compose
   25  [2019-03-29 14:11:59] systemctl start docker && systemctl enable docker
   26  [2019-03-29 14:12:11] ls
   27  [2019-03-29 14:12:34] yum localinstall bzip2-1.0.6-13.el7.x86_64.rpm 
   28  [2019-03-29 14:34:24] cd ../SIPPROXY/
   29  [2019-03-29 14:34:25] ls
   30  [2019-03-29 14:34:37] tar -zxvf rainny.tar.gz 
   31  [2019-03-29 14:34:44] ls
   32  [2019-03-29 14:34:55] cd app/
   33  [2019-03-29 14:34:57] ls
   34  [2019-03-29 14:35:05] mv rainny ../
   35  [2019-03-29 14:35:06] ls
   36  [2019-03-29 14:35:10] cd ..
   37  [2019-03-29 14:35:11] ls
   38  [2019-03-29 14:35:21] rm -rf app
   39  [2019-03-29 14:35:23] ls
   40  [2019-03-29 14:35:31] cd rainny/
   41  [2019-03-29 14:35:33] ls
   42  [2019-03-29 14:46:22] vim FreeswitchWatchdog 
   43  [2019-03-29 14:47:27] chmod +x FreeswitchWatchdog 
   44  [2019-03-29 14:47:31] ll
   45  [2019-03-29 14:47:46] chmod +x docker.sh 
   46  [2019-03-29 14:47:49] ll
   47  [2019-03-29 14:48:08] ./docker.sh 
   48  [2019-03-29 14:48:25] vim docker.sh 
   49  [2019-03-29 15:40:01] ps -ef |grep docker.sh 
   50  [2019-03-29 15:40:06] ps -ef |grep docke
   51  [2019-03-29 15:40:20] ps -ef |grep freeswitch
   52  [2019-03-29 15:43:30] docker images
   53  [2019-03-29 15:43:51] ls
   54  [2019-03-29 15:43:58] cd /app
   55  [2019-03-29 15:43:59] ls
   56  [2019-03-29 15:44:04] cd SIPPROXY/
   57  [2019-03-29 15:44:05] ls
   58  [2019-03-29 15:44:14] cd rainny/
   59  [2019-03-29 15:44:15] ls
   60  [2019-03-29 15:47:02] cd ..
   61  [2019-03-29 15:47:04] ls
   62  [2019-03-29 15:47:07] cd ..
   63  [2019-03-29 15:47:09] ls
   64  [2019-03-29 15:47:13] cd ..
   65  [2019-03-29 15:47:15] ls
   66  [2019-03-29 15:56:48] history 

Common Commands

PATH:/var/rainny/ippbx


Operations

Common Commands

  • Packet capture and analysis
tcpdump -np -s 0 -i eth0 -w /app/demp.pcap udp

tcpdump -np -s 0 -i eth0   udp
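
To read a saved capture back, or to narrow a live capture to SIP signaling (a sketch; assumes the standard SIP port 5060):

tcpdump -nr /app/demp.pcap
tcpdump -np -s 0 -i eth0 udp port 5060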

  • K8S debugging commands

  • Master node

  • Path: /root/

kubectl delete -f pcc.yml;
kubectl apply -f pcc.yml ; sleep 6;
kubectl get pod -n pcc -o wide;

kubectl exec -it ippbx-1 -n pcc sh
Function / Command / Remarks
Save history:    history >> /root/nodeippbx2
View logs:       kubectl logs -f ippbx-0
Delete pods:     kubectl delete -f pcc.yml;
Create pods:     kubectl apply -f pcc.yml ; sleep 6;
View pods:       kubectl get pod -n pcc -o wide;
Enter a pod:     kubectl exec -it ippbx-1 -n pcc sh
  • Search for files
  252  [2019-03-27 11:43:11] find -name "*.yml"
  253  [2019-03-27 11:44:24] find -name "*.yaml"
kubectl delete -f pcc.yml ; kubectl apply -f pcc.yml ; sleep 6 ;kubectl get pod -n pcc -o wide

./fs_cli -H 127.0.0.1 -p wzw -P 4521
docker load -i fsdeb.tar

docker images

mv  freeswitch freeswitch.old


kubectl delete -f pcc.yml ; kubectl apply -f pcc.yml ; sleep 6 ;kubectl get pod -n pcc -o wide

IPPBX / CTI Application Node Deployment

1. File Preparation

  • Copy the file zs招商copy.tar.gz to the jump host,
  • copy it from the jump host to /root on every machine,
  • and extract it to get three directories: keepalived, freeswitch, and a生产环境部署

zs招商copy.tar.gz

2 Prepare Local Image Files

Prepare the application-node image files

  • Load the freeswitch archive into a docker image and push it to the Harbor registry:

(a) On the deploy node, i.e. 100.67.33.10:

  • Preparation
[root@nodecti2 ~]# cd /root
[root@nodecti2 ~]# tar -xzvf zs*.tar.gz  
[root@nodecti2 ~]# ls
bin  slogs  zs????copy.tar.gz  zs招商copy
[root@nodecti2 ~]# cd zs招商copy/
[root@nodecti2 zs招商copy]# ls
a生产环境部署  freeswitch  keepalived
[root@nodecti2 zs招商copy]# cd freeswitch/
[root@nodecti2 freeswitch]# ls
freeswitch.tar.bz2  fsdeb.tar  rainny.tar.bz2

  • Load the freeswitch archive into a docker image
[root@nodecti2 freeswitch]# docker load -i fsdeb.tar 
f33a13616df9: Loading layer [==================================================>] 82.96 MB/82.96 MB
c8f952ba8693: Loading layer [==================================================>] 1.024 kB/1.024 kB
5ca3e1235786: Loading layer [==================================================>] 1.024 kB/1.024 kB
8cda936b4d69: Loading layer [==================================================>]  5.12 kB/5.12 kB
5329b221f182: Loading layer [==================================================>] 93.21 MB/93.21 MB
9c9167e39e1a: Loading layer [==================================================>] 359.9 MB/359.9 MB
9aa6a792bc60: Loading layer [==================================================>] 443.9 MB/443.9 MB

[root@nodecti2 freeswitch]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
wise2c/playbook                        v1.11.8             e52b775e14b3        2 weeks ago         1.14 GB
wise2c/yum-repo                        v1.11.8             6ff733f70993        3 weeks ago         758 MB
100.67.33.9/library/kube-proxy-amd64   v1.11.8             2ed65dca1a98        3 weeks ago         98.1 MB
fs                                     deb1                9c3b419a16e0        4 weeks ago         960 MB
100.67.33.9/library/flannel            v0.11.0-amd64       ff281650a721        8 weeks ago         52.6 MB
wise2c/pagoda                          v1.1                d4f2b4cabdec        2 months ago        483 MB
wise2c/deploy-ui                       v1.3                2f159b37bf13        2 months ago        40.1 MB
100.67.33.9/library/pause              3.1                 da86e6ba6ca1        15 months ago       742 kB
[root@nodecti2 freeswitch]# 

  • Supplement: common docker image commands
    docker images                                 # list images
    docker rmi <repo>:<TAG>                       # delete an image
    docker rmi <IMAGE ID>                         # delete an image by ID
    docker load -i <compressed image tar file>    # load an image
    docker tag <old repo>:<tag> <new repo>:<tag>  # re-tag an image
    docker push <repo>:<tag>                      # requires docker login <registry-server> first
    docker login <registry-server>

b) Push the Image to the Harbor Registry

  • Open the Harbor registry at 100.67.33.9 in a browser (the login button appears greyed out but can still be clicked), open the library project, and some images are already visible; nothing needs to be done in the UI at this point.

  • On the deploy node, run:

[root@nodecti2 freeswitch]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
wise2c/playbook                        v1.11.8             e52b775e14b3        2 weeks ago         1.14 GB
wise2c/yum-repo                        v1.11.8             6ff733f70993        3 weeks ago         758 MB
100.67.33.9/library/kube-proxy-amd64   v1.11.8             2ed65dca1a98        3 weeks ago         98.1 MB
fs                                     deb1                9c3b419a16e0        4 weeks ago         960 MB
100.67.33.9/library/flannel            v0.11.0-amd64       ff281650a721        8 weeks ago         52.6 MB
wise2c/pagoda                          v1.1                d4f2b4cabdec        2 months ago        483 MB
wise2c/deploy-ui                       v1.3                2f159b37bf13        2 months ago        40.1 MB
100.67.33.9/library/pause              3.1                 da86e6ba6ca1        15 months ago       742 kB
  • Re-tag the image
[root@nodecti2 freeswitch]# docker  tag fs:deb1 100.67.33.9/library/fs:deb1

[root@nodecti2 freeswitch]# 
[root@nodecti2 freeswitch]# docker images
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
wise2c/playbook                        v1.11.8             e52b775e14b3        2 weeks ago         1.14 GB
wise2c/yum-repo                        v1.11.8             6ff733f70993        3 weeks ago         758 MB
100.67.33.9/library/kube-proxy-amd64   v1.11.8             2ed65dca1a98        3 weeks ago         98.1 MB
100.67.33.9/library/fs                 deb1                9c3b419a16e0        4 weeks ago         960 MB
fs                                     deb1                9c3b419a16e0        4 weeks ago         960 MB
100.67.33.9/library/flannel            v0.11.0-amd64       ff281650a721        8 weeks ago         52.6 MB
wise2c/pagoda                          v1.1                d4f2b4cabdec        2 months ago        483 MB
wise2c/deploy-ui                       v1.3                2f159b37bf13        2 months ago        40.1 MB
100.67.33.9/library/pause              3.1                 da86e6ba6ca1        15 months ago       742 kB
[root@nodecti2 freeswitch]# docker rmi fs:deb1
Untagged: fs:deb1

[root@nodecti2 freeswitch]# docker images     
REPOSITORY                             TAG                 IMAGE ID            CREATED             SIZE
wise2c/playbook                        v1.11.8             e52b775e14b3        2 weeks ago         1.14 GB
wise2c/yum-repo                        v1.11.8             6ff733f70993        3 weeks ago         758 MB
100.67.33.9/library/kube-proxy-amd64   v1.11.8             2ed65dca1a98        3 weeks ago         98.1 MB
100.67.33.9/library/fs                 deb1                9c3b419a16e0        4 weeks ago         960 MB
100.67.33.9/library/flannel            v0.11.0-amd64       ff281650a721        8 weeks ago         52.6 MB
wise2c/pagoda                          v1.1                d4f2b4cabdec        2 months ago        483 MB
wise2c/deploy-ui                       v1.3                2f159b37bf13        2 months ago        40.1 MB
100.67.33.9/library/pause              3.1                 da86e6ba6ca1        15 months ago       742 kB
[root@nodecti2 freeswitch]# cat /etc/docker/daemon.json 
{
  "exec-opt": [
    "native.cgroupdriver=systemd"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  },
  "insecure-registries": [
      "100.67.33.9"
  ],
  "storage-driver": "overlay2"
}
  • Log in to the remote registry
[root@nodecti2 freeswitch]# docker login 100.67.33.9
Username: admin
Password:           # enter Harbor12345 here
Login Succeeded
  • Push the image to Harbor
[root@nodecti2 freeswitch]# docker push 100.67.33.9/library/fs:deb1
The push refers to a repository [100.67.33.9/library/fs]
8c7f9e41cf2f: Pushed 
0e8f26b8afa9: Pushed 
683e839e85ce: Pushed 
1999ce8fd68d: Pushed 
5f70bf18a086: Pushed 
16ada34affd4: Pushed 
deb1: digest: sha256:37a3b7114091c8ea7dfd2f6b16a6c708469443e6da6edf96a2b0201c226b0ed2 size: 1787
  • Refresh Harbor in the browser; the pushed image is now visible

3 Install and Configure keepalived on the Four Worker Nodes

a Install keepalived Locally

Run this on 100.67.33.7; the other three nodes are done the same way:

[root@nodeippbx1 ~]# cd zs招商copy/
[root@nodeippbx1 zs招商copy]# ls
a生产环境部署  freeswitch  keepalived
[root@nodeippbx1 zs招商copy]# cd keepalived/
[root@nodeippbx1 keepalived]# ls
keepalived-1.3.5-8.el7_6.x86_64.rpm  lm_sensors-libs-3.4.0-6.20160601gitf9185e5.el7.x86_64.rpm  net-snmp-agent-libs-5.7.2-37.el7.x86_64.rpm  net-snmp-libs-5.7.2-37.el7.x86_64.rpm
  • Install keepalived locally
    (same directory as above)
[root@nodeippbx1 keepalived]# yum localinstall *.rpm -y
...
Installed:
  keepalived.x86_64 0:1.3.5-8.el7_6                         net-snmp-agent-libs.x86_64 1:5.7.2-37.el7                         net-snmp-libs.x86_64 1:5.7.2-37.el7                        

Updated:
  lm_sensors-libs.x86_64 0:3.4.0-6.20160601gitf9185e5.el7                                                                                                                                 

Complete!
[root@nodeippbx1 keepalived]# 
systemctl enable keepalived.service;systemctl start keepalived.service

b Configure the IPPBX Nodes

Note: in keepalived.conf, the master and backup must use the same virtual_router_id (preferably not greater than 255), while router_id must differ; router_id, state and priority must not be the same on master and backup. Details follow.

  • 100.67.33.7
 # Back up the original config
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.ori



vim /etc/keepalived/keepalived.conf
  ###### begin
  [root@nodeippbx1 keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id ippbx_master
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 188
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       100.67.33.16/23 dev eth0 label eth0:1
    }
}
###### end
  • 100.67.33.8
###### begin
[root@nodeippbx2 keepalived]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id ippbx_backup
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 188
    priority 20
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
       100.67.33.16/23 dev eth0 label eth0:1
    }
}
 ###### end
  • Supplement
 On both hosts run:  systemctl enable keepalived && systemctl start keepalived && ps -ef | grep keepalived
 Check the IPs on each host with ifconfig; on the master node you will see:
   eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 100.67.33.16  netmask 255.255.254.0  broadcast 0.0.0.0
        ether fa:16:3e:dd:97:61  txqueuelen 1000  (Ethernet)
 Stop keepalived on the master and check again: the VIP moves to the backup node and disappears from the master:
   eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 100.67.33.16  netmask 255.255.254.0  broadcast 0.0.0.0
        ether fa:16:3e:06:a4:f6  txqueuelen 1000  (Ethernet)
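
The failover described above can be exercised explicitly (a minimal sketch; assumes 100.67.33.16 is the VIP):

# on the current master
systemctl stop keepalived
# on the backup, the VIP should now be present
ip addr show eth0 | grep 100.67.33.16
# start keepalived on the original master again to fail back
systemctl start keepalived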

c Configure the Two CTI Nodes

  • 100.67.33.9
##################begin
[root@nodecti1 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id cti_master
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 189
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
       100.67.33.17/23 dev eth0 label eth0:1
    }
}
##################end
  • 100.67.33.10
 ##################begin
 [root@nodecti2 keepalived]# cat /etc/keepalived/keepalived.conf
global_defs {
   router_id cti_backup
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 189
    priority 20
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
       100.67.33.17/23 dev eth0 label eth0:1
    }
}
##################end
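
A quick consistency check across a VRRP pair (a sketch; run from either node, assumes root ssh between the two):

for h in 100.67.33.9 100.67.33.10; do
  echo "== $h =="
  ssh $h "grep -E 'router_id|state|priority' /etc/keepalived/keepalived.conf"
done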

d Test Keepalived

Run the following on each of the two nodes:

[root@nodecti1 keepalived]# systemctl enable keepalived
[root@nodecti1 keepalived]# systemctl start keepalived 
[root@nodecti1 keepalived]# ps -ef | grep keep
root      1780     1  0 Jan29 ttyS0    00:00:00 /sbin/agetty --keep-baud 115200 38400 9600 ttyS0 vt220
root     16941     1  0 19:20 ?        00:00:00 /usr/sbin/keepalived -D
root     16942 16941  0 19:20 ?        00:00:00 /usr/sbin/keepalived -D
root     16943 16941  0 19:20 ?        00:00:00 /usr/sbin/keepalived -D
root     18826  4506  0 19:22 pts/1    00:00:00 grep --color=auto keep

[root@nodecti1 keepalived]# ifconfig | grep eth0   ## the master shows two IPs while the backup shows only one; after keepalived is stopped on the master, the backup gains the VIP
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
eth0:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
[root@nodecti1 keepalived]# ifconfig | grep 67
        inet 100.67.33.9  netmask 255.255.254.0  broadcast 100.67.33.255
        inet 100.67.33.17  netmask 255.255.254.0  broadcast 0.0.0.0
        RX packets 723588  bytes 667124730 (636.2 MiB)
        RX packets 459  bytes 86709 (84.6 KiB)
vethe922676: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        TX packets 750673  bytes 669843972 (638.8 MiB)
        inet6 fe80::867:1dff:fe29:55d0  prefixlen 64  scopeid 0x20<link>
        ether 0a:67:1d:29:55:d0  txqueuelen 0  (Ethernet)

4 Configure the fs App yaml File

a Install the bzip2 Package Locally and Extract the Files

  • rpm package preparation
    Copy bzip2-1.0.6-13.el7.x86_64.rpm to the deploy machine from outside, then run yum localinstall ./bzip2-1.0.6-13.el7.x86_64.rpm

  • Install bzip2 locally

yum localinstall ./bzip2-1.0.6-13.el7.x86_64.rpm
  • Extract rainny.tar.bz2 and freeswitch.tar.bz2
[root@nodecti2 freeswitch]# yum localinstall ./bzip2-1.0.6-13.el7.x86_64.rpm
[root@nodecti2 freeswitch]# pwd
/root/zs招商copy/freeswitch
[root@nodecti2 freeswitch]# ls
freeswitch.tar.bz2  fsdeb.tar  rainny.tar.bz2    
[root@nodecti2 freeswitch]# tar -xjvf rainny.tar.bz2  
[root@nodecti2 freeswitch]# tar -xjvf freeswitch.tar.bz2
[root@nodecti2 freeswitch]# ls
freeswitch  freeswitch.tar.bz2  fsdeb.tar  rainny  rainny.tar.bz2

b Prepare the Host Directories Mapped into the Pods

  • Label the four worker nodes
    On the master, run:
[root@master1 ~]# kubectl get nodes -o wide
NAME         STATUS    ROLES     AGE       VERSION
master1      Ready     master    1d        v1.11.8
master2      Ready     master    1d        v1.11.8
master3      Ready     master    1d        v1.11.8
nodecti1     Ready     <none>    1d        v1.11.8
nodecti2     Ready     <none>    1d        v1.11.8
nodeippbx1   Ready     <none>    1d        v1.11.8
nodeippbx2   Ready     <none>    1d        v1.11.8
[root@master1 ~]# kubectl label node nodeippbx1 pcc/ippbx=true
[root@master1 ~]# kubectl label node nodeippbx2 pcc/ippbx=true
[root@master1 ~]# kubectl label node nodecti1 pcc/cti=true
[root@master1 ~]# kubectl label node nodecti2 pcc/cti=true
  • [?] Configure passwordless transfer
    (passwordless login; see the sketch after this list)
  • Prepare the host directory mapped into the pods:
    Copy the rainny directory extracted above to the /data directory (a dedicated logical volume) on each of the four worker nodes:
 scp -r rainny 100.67.33.7:/data
 scp -r rainny 100.67.33.8:/data
 scp -r rainny 100.67.33.9:/data
 scp -r rainny 100.67.33.10:/data
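
A minimal sketch that combines the passwordless setup with the copy (assumes the key was generated earlier with ssh-keygen):

for h in 100.67.33.7 100.67.33.8 100.67.33.9 100.67.33.10; do
  ssh-copy-id root@$h           # passwordless login to the worker
  scp -r rainny root@$h:/data   # copy the extracted rainny directory
done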

Supplementary Script

IPPBX
 cp /root/zs招商copy/freeswitch/config/rc.local-pbbxc rc.local
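
If this file is meant to replace the system rc.local (an assumption; the destination is not spelled out above), note that on CentOS 7 it must also be executable:

cp /root/zs招商copy/freeswitch/config/rc.local-pbbxc /etc/rc.d/rc.local   # assumed destination
chmod +x /etc/rc.d/rc.local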

c Modify the Configuration File pcc.yaml

  • Path: /data
  • command (a hedged sketch follows below)
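
The original pcc.yaml is not reproduced in these notes. The sketch below shows only the fields this document implies need attention: the image pushed to Harbor, the pcc/ippbx node label, and the /data/rainny host directory. The StatefulSet name, service name and replica count are assumptions; adapt them to the real file.

cat > /root/pcc-ippbx-sketch.yaml << 'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ippbx             # assumed; pods ippbx-0/ippbx-1 appear elsewhere in these notes
  namespace: pcc
spec:
  serviceName: ippbx      # assumed headless service name
  replicas: 2
  selector:
    matchLabels:
      app: ippbx
  template:
    metadata:
      labels:
        app: ippbx
    spec:
      nodeSelector:
        pcc/ippbx: "true"              # label applied to nodeippbx1/nodeippbx2 above
      containers:
      - name: freeswitch
        image: 100.67.33.9/library/fs:deb1
        volumeMounts:
        - name: rainny
          mountPath: /var/rainny/ippbx # PATH referenced in the common-commands section
      volumes:
      - name: rainny
        hostPath:
          path: /data/rainny           # directory copied to the workers in step b
EOF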

Keepalived config

code

Operations

Common Commands

  • Search for files
  252  [2019-03-27 11:43:11] find -name "*.yml"
  253  [2019-03-27 11:44:24] find -name "*.yaml"
Action / Command / Remarks
view logs        kubectl logs -f pod/ippbx-0 -n pcc
describe pod     kubectl describe pod/ippbx-0 -n pcc
list nodes/pods  kubectl get nodes,pod -n pcc
restart          systemctl start keepalived
check process    ps -ef | grep keepalived
check ip         ifconfig | grep eth0
check ip         ip a



