OpenShift 4.6.8 Deployment Notes

一、Preface

This deployment was carried out on the company's private cloud platform; eight virtual machines were created on the private cloud in advance.

二、Pre-deployment resource preparation

1、Virtual machine preparation and roles

1.1 An OCP deployment needs four types of hosts:
bastion: host used to run the supporting services (DNS, Harbor, HTTP, HAProxy, etc.)
bootstrap: temporary node used to bring up the OpenShift cluster; it can be deleted afterwards
master: control-plane nodes
worker: compute nodes

Purpose               IP                 Host
haproxy               192.168.123.10
harbor, dns, http     192.168.123.11     bastion
bootstrap             192.168.123.101
master1               192.168.123.102
master2               192.168.123.103
master3               192.168.123.104
worker1               192.168.123.105
worker2               192.168.123.106

2、Deploy HAProxy on 192.168.123.10

2.1 Install HAProxy with yum

yum install haproxy

2.2 Edit the HAProxy configuration file (/etc/haproxy/haproxy.cfg)

frontend openshift-api-server              
    bind *:6443
    default_backend openshift-api-server
    mode tcp
    option tcplog

backend openshift-api-server
    balance source
    mode tcp
    server bootstrap 192.168.123.101:6443 check  
    server master1 192.168.123.102:6443 check 
    server master2 192.168.123.103:6443 check 
    server master3 192.168.123.104:6443 check
      
frontend machine-config-server          
    bind *:22623
    default_backend machine-config-server
    mode tcp
    option tcplog

backend machine-config-server
    balance source
    mode tcp
    server bootstrap 192.168.123.101:22623 check 
    server master1 192.168.123.102:22623 check   
    server master2 192.168.123.103:22623 check  
    server master3 192.168.123.104:22623 check
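
The configuration above only covers the API (6443) and machine-config (22623) endpoints. A typical UPI setup also load-balances the ingress ports 80 and 443 to the worker nodes; below is a minimal sketch in the same style, assuming the worker IPs from the table above (if the firewall is enabled, open 80/tcp and 443/tcp as well):

frontend ingress-http
    bind *:80
    default_backend ingress-http
    mode tcp
    option tcplog

backend ingress-http
    balance source
    mode tcp
    server worker1 192.168.123.105:80 check
    server worker2 192.168.123.106:80 check

frontend ingress-https
    bind *:443
    default_backend ingress-https
    mode tcp
    option tcplog

backend ingress-https
    balance source
    mode tcp
    server worker1 192.168.123.105:443 check
    server worker2 192.168.123.106:443 check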

2.3 Start HAProxy and enable it at boot

systemctl start haproxy
systemctl enable haproxy

2.4 If the firewall is enabled, the ports also need to be opened

firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=22623/tcp --permanent
firewall-cmd --reload

3、Deploy Harbor, DNS, and HTTP services on 192.168.123.11

3.1 Harbor deployment

Reference: https://blog.csdn.net/lhc0602/article/details/114575874

3.2 HTTP service deployment (mainly used to serve the OS installation images and boot configuration files during PXE installation; because Harbor and httpd run on the same server and Harbor occupies ports 80 and 443, the httpd port has to be changed, here to 8080)

yum install httpd -y
systemctl start httpd
systemctl enable httpd
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --reload
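
The notes above say the httpd port is changed to 8080 but do not show the edit itself. A minimal sketch, assuming the default /etc/httpd/conf/httpd.conf (do this before starting httpd, since Harbor already holds port 80):

sed -i 's/^Listen 80$/Listen 8080/' /etc/httpd/conf/httpd.conf
systemctl restart httpd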

3.3 DNS service deployment (dnsmasq is used here)

3.3.1 Install dnsmasq

yum install dnsmasq -y

3.3.2 Edit the dnsmasq configuration file

cat /etc/dnsmasq.d/more.conf
# ocp4 node
address=/master1.ocp4.lhc.com/192.168.123.102
address=/master2.ocp4.lhc.com/192.168.123.103
address=/master3.ocp4.lhc.com/192.168.123.104
address=/worker1.ocp4.lhc.com/192.168.123.105
address=/worker2.ocp4.lhc.com/192.168.123.106

# etcd
address=/etcd-0.ocp4.lhc.com/192.168.123.102
address=/etcd-1.ocp4.lhc.com/192.168.123.103
address=/etcd-2.ocp4.lhc.com/192.168.123.104
# etcd srv 
# <name>,<target>,<port>,<priority>,<weight>
srv-host=_etcd-server-ssl._tcp.ocp4.lhc.com,etcd-0.ocp4.lhc.com,2380,0,10
srv-host=_etcd-server-ssl._tcp.ocp4.lhc.com,etcd-1.ocp4.lhc.com,2380,0,10
srv-host=_etcd-server-ssl._tcp.ocp4.lhc.com,etcd-2.ocp4.lhc.com,2380,0,10

# lb
address=/.ocp4.lhc.com/192.168.123.10
address=/api.ocp4.lhc.com/192.168.123.10
address=/api-int.ocp4.lhc.com/192.168.123.10

# other
address=/bootstrap.ocp4.lhc.com/192.168.123.101
address=/bastion.ocp4.lhc.com/192.168.123.11
address=/harbor.lhc.com/192.168.123.11

3.3.3 Start dnsmasq and enable it at boot

systemctl start dnsmasq
systemctl enable dnsmasq
systemctl status dnsmasq

3.3.4 Firewall settings

 firewall-cmd --add-port=53/tcp --permanent
 firewall-cmd --add-port=53/udp --permanent
 firewall-cmd --reload
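
As a quick sanity check (not in the original notes), the records can be queried with dig from another host, assuming bind-utils is installed:

dig +short api.ocp4.lhc.com @192.168.123.11            # expect 192.168.123.10
dig +short test.apps.ocp4.lhc.com @192.168.123.11      # wildcard record, expect 192.168.123.10
dig +short master1.ocp4.lhc.com @192.168.123.11        # expect 192.168.123.102
dig +short SRV _etcd-server-ssl._tcp.ocp4.lhc.com @192.168.123.11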

4、Deploy OCP

4.1 Install the OpenShift CLI tool oc

wget https://mirror.openshift.com/pub/openshift-v4/clients/oc/4.6/linux/oc.tar.gz
tar -zxvf oc.tar.gz
mv oc /usr/local/bin
oc version

4.2 Create the mirror credential file pull-secret.json

4.2.1 Create the auth string for the private registry

[root@master1 ~]#  echo -n 'admin:Test@1234'  | base64 -w0
YWRtaW46VGVzdEAxMjM0

4.2.2 Download the pull-secret file from the official site (a Red Hat account is required)

Download URL: https://cloud.redhat.com/openshift/install/pull-secret (this gives you a txt file)
cat pull-secret.txt |  jq  . > pull-secret.json

4.2.3 Merge pull-secret.json (add the private registry auth created above to pull-secret.json, so that the official images can be mirrored into the private registry)

        {
         "auths": {
           "harbor.lhc.com": {
             "auth": "YWRtaW46VGVzdEAxMjM0",
             "email": ""
           },
           "cloud.openshift.com": {
             "auth": "b3BlbnNoaWZ0LXJlbGVhc2UtZGV2K29jbV9hY2Nlc3NfYTdmNGQ1MjZiMGVlNDkwNzk2MmViZWRiZTE1ZjEwNTI6SVVFSExFTk9SNVdQVVc4QldUT1k2VVlSMlc2V0xMQTQwNDA5UTRJRzNBRDRHS0lXR0NGTzJaN0dXOTJTMzIzMg==",
             "email": "lf_30y@163.com"
           },
           ……
         }
       }
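
The original does not show the merge command itself; one way to do it with jq, using the auth string from step 4.2.1:

cd /root/ocp4
jq '.auths["harbor.lhc.com"] = {"auth": "YWRtaW46VGVzdEAxMjM0", "email": ""}' pull-secret.json > pull-secret.json.tmp && mv pull-secret.json.tmp pull-secret.json
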
4.3 Mirror the images (run from /root/ocp4)

4.3.1 First create a project named "openshift" in the private registry (for the openshift/ocp4.6.8 path below; the ocp4.6.8 repository itself does not need to be created by hand, it is created automatically)

4.3.2 Set environment variables for the mirror operation

export LOCAL_REGISTRY='harbor.lhc.com'
export LOCAL_REPOSITORY='openshift/ocp4.6.8'
export PRODUCT_REPO='openshift-release-dev'
export RELEASE_NAME='ocp-release'
export OCP_RELEASE='4.6.8'
export ARCHITECTURE='x86_64'
export LOCAL_SECRET_JSON='/root/ocp4/pull-secret.json'
export GODEBUG='x509ignoreCN=0'

4.3.3 Run the command to mirror the images

oc adm release mirror -a ${LOCAL_SECRET_JSON}   --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}   --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}
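
If this command fails with an x509 "certificate signed by unknown authority" error, the bastion usually needs to trust the Harbor CA first. A sketch; the certificate path is a placeholder for wherever your Harbor CA file lives:

cp <harbor-ca.crt> /etc/pki/ca-trust/source/anchors/
update-ca-trust extract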

When the image sync completes, output like the following appears:

     Success
     Update image:  harbor.lhc.com/openshift/ocp4.6.8:4.6.8-x86_64
     Mirror prefix: harbor.lhc.com/openshift/ocp4.6.8

     To use the new mirrored repository to install, add the following section to the install-config.yaml:

     imageContentSources:
     - mirrors:
       - harbor.lhc.com/openshift/ocp4.6.8
       source: quay.io/openshift-release-dev/ocp-release
     - mirrors:
       - harbor.lhc.com/openshift/ocp4.6.8
       source: quay.io/openshift-release-dev/ocp-v4.0-art-dev


     To use the new mirrored repository for upgrades, use the following to create an ImageContentSourcePolicy:

     apiVersion: operator.openshift.io/v1alpha1
     kind: ImageContentSourcePolicy
     metadata:
       name: example
     spec:
       repositoryDigestMirrors:
       - mirrors:
         - harbor.lhc.com/openshift/ocp4.6.8
         source: quay.io/openshift-release-dev/ocp-release
       - mirrors:
         - harbor.lhc.com/openshift/ocp4.6.8
         source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

4.3.4 Extract the openshift-install tool

oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}"

When the command finishes, it produces an openshift-install binary:

mv openshift-install /usr/local/bin
openshift-install version

4.3.5 You can check in the private registry whether the images have been synced

5、Prepare the installation files

5.1 Because the default user on CoreOS is core, prepare an SSH key for the core user
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/core_rsa
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/core_rsa

5.2 Prepare the install-config.yaml file (/root/ocp4)
# cat /root/ocp4/install-config.yaml
apiVersion: v1
baseDomain: lhc.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-rsa ...'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  (certificate content omitted; note that these lines must be indented by two spaces)
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - harbor.lhc.com/openshift/ocp4.6.8
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - harbor.lhc.com/openshift/ocp4.6.8
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

Explanation of the install-config.yaml fields:

baseDomain: all DNS records inside OpenShift must be subdomains of this base domain and include the cluster name.

compute: compute (worker) node configuration. This is an array; each element must start with a hyphen (-).

hyperthreading: Enabled turns on simultaneous multithreading (hyper-threading), which is the default and improves core performance. If you disable it, disable it on both the control plane and the compute nodes.

compute.replicas: number of compute nodes. Because we create the compute nodes manually, this is set to 0.

controlPlane.replicas: number of control-plane nodes. It must match the number of etcd nodes; for high availability it is set to 3 here.

metadata.name: the cluster name, i.e. the <cluster_name> used in the DNS records above.

cidr: the address range from which Pod IPs are allocated; it must not overlap with the physical network.

hostPrefix: the subnet prefix length assigned to each node. For example, with hostPrefix set to 23, each node is given a /23 subnet out of the given cidr, which allows 510 (2^(32-23) - 2) Pod IP addresses.

serviceNetwork: the Service IP address pool; only one may be configured.

pullSecret: the merged pull secret prepared earlier; it can be compressed to a single line with cat /root/ocp4/pull-secret.json | jq -c
sshKey: the public key created above, viewable with cat ~/.ssh/core_rsa.pub

additionalTrustBundle: the trust certificate of the private image registry (Harbor in this deployment); paste the CA/SSL certificate that Harbor was configured with.

imageContentSources: taken from the output of the earlier oc adm release mirror command.

5.3 Create the ocp4install directory, generate the CoreOS Ignition (ign) configuration files, and prepare the remaining installation files (performed on the bastion host)

5.3.1 Create the ocp4install directory and copy install-config.yaml into it (use cp, because the file gets consumed and deleted when the ign files are generated later)

mkdir -p /root/ocp4/ocp4install
cp /root/ocp4/install-config.yaml /root/ocp4/ocp4install   

5.3.2 Generate the Ignition (ign) configuration files

openshift-install create manifests --dir=/root/ocp4/ocp4install
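# Optional step, not in the original notes: with dedicated workers it is common to mark
# the masters unschedulable in the generated manifests before creating the Ignition configs.
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/' /root/ocp4/ocp4install/manifests/cluster-scheduler-02-config.yml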
openshift-install create ignition-configs --dir=/root/ocp4/ocp4install

The ocp4install directory ends up with the following files (tree /root/ocp4/ocp4install):
.
├── auth
│ ├── kubeadmin-password
│ └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
[Note: from the moment these files are generated, the OCP cluster installation must be completed within 24 hours!]

mv /root/ocp4/ocp4install/*.ign /var/www/html
chmod +r /var/www/html/*.ign

Test that the ign files can be downloaded over HTTP:

wget http://192.168.123.11:8080/bootstrap.ign

5.3.3 Download the image files needed for installation and place them in /var/www/html

Download URL: https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/4.6.8/
Download the following files:
rhcos-4.6.8-x86_64-live-initramfs.x86_64.img
rhcos-4.6.8-x86_64-live-kernel-x86_64
rhcos-4.6.8-x86_64-live-rootfs.x86_64.img
rhcos-4.6.8-x86_64-live.x86_64.iso
rhcos-4.6.8-x86_64-metal.x86_64.raw.gz

cd /var/www/html

wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/4.6.8/rhcos-4.6.8-x86_64-live-initramfs.x86_64.img
wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/4.6.8/rhcos-4.6.8-x86_64-live-kernel-x86_64
wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/4.6.8/rhcos-4.6.8-x86_64-live-rootfs.x86_64.img
wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/4.6.8/rhcos-4.6.8-x86_64-live.x86_64.iso
wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.6/4.6.8/rhcos-4.6.8-x86_64-metal.x86_64.raw.gz

After pulling them, test that these files can be downloaded over HTTP:

wget http://192.168.123.11:8080/rhcos-4.6.8-x86_64-metal.x86_64.raw.gz

5.3.4 Create an init file for each node and place it in /var/www/html
init file template:

ip=PARAM1::PARAM2:PARAM3:PARAM4:ens3:none nameserver=PARAM5 coreos.inst.install_dev=vda coreos.inst.image_url=PARAM6 coreos.inst.ignition_url=PARAM7 coreos.inst=yes rd.neednet=1 coreos.inst.platform_id=qemu coreos.inst.insecure

Explanation of the init file parameters:

PARAM1: IP address of the machine to be installed
PARAM2: gateway for that IP (check with ip route)
PARAM3: netmask for that IP
PARAM4: hostname of the machine (the FQDN registered in DNS)
PARAM5: DNS server IP
PARAM6: CoreOS image URL, e.g. http://<HTTP-server-IP>:8080/rhcos-4.6.8-x86_64-metal.x86_64.raw.gz
PARAM7: URL of the ign file for this node type, e.g. http://<HTTP-server-IP>:8080/worker.ign

Example for bootstrap:

cat /var/www/html/bootstrap.init
ip=192.168.123.101::192.168.123.1:255.255.255.0:bootstrap.ocp4.lhc.com:ens3:none nameserver=192.168.123.11 coreos.inst.install_dev=/dev/vda coreos.inst.image_url=http://192.168.123.11:8080/rhcos-4.6.8-x86_64-metal.x86_64.raw.gz coreos.inst.ignition_url=http://192.168.123.11:8080/bootstrap.ign coreos.inst=yes rd.neednet=1 coreos.inst.platform_id=qemu coreos.inst.insecure
chmod +r /var/www/html/bootstrap.init

Verify:

wget http://192.168.123.11:8080/bootstrap.init
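
The original only shows bootstrap.init; the other nodes follow the same pattern. A sketch for generating the remaining files, assuming the same gateway (192.168.123.1), netmask, and NIC name (ens3) as the bootstrap example:

cd /var/www/html
for entry in \
    "master1:192.168.123.102:master.ign" \
    "master2:192.168.123.103:master.ign" \
    "master3:192.168.123.104:master.ign" \
    "worker1:192.168.123.105:worker.ign" \
    "worker2:192.168.123.106:worker.ign"; do
    name=${entry%%:*}; ip=$(echo ${entry} | cut -d: -f2); ign=${entry##*:}
    # same kernel-argument line as bootstrap.init, with per-node IP, hostname and ign file
    echo "ip=${ip}::192.168.123.1:255.255.255.0:${name}.ocp4.lhc.com:ens3:none nameserver=192.168.123.11 coreos.inst.install_dev=/dev/vda coreos.inst.image_url=http://192.168.123.11:8080/rhcos-4.6.8-x86_64-metal.x86_64.raw.gz coreos.inst.ignition_url=http://192.168.123.11:8080/${ign} coreos.inst=yes rd.neednet=1 coreos.inst.platform_id=qemu coreos.inst.insecure" > ${name}.init
done
chmod +r /var/www/html/*.init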

6、OCP 4.6.8 installation

6.1 All of the machines were pre-installed with CentOS (so the IP and NIC name of each are known in advance); prepare a script like the following template (placeholders in angle brackets):
#!/bin/bash
ip a add <node-IP>/24 dev <NIC>
cd /boot;curl -O http://<HTTP-server-IP>:8080/rhcos-4.6.8-x86_64-live-initramfs.x86_64.img
cd /boot;curl -O http://<HTTP-server-IP>:8080/rhcos-4.6.8-x86_64-live-kernel-x86_64
init=$(curl -s http://<HTTP-server-IP>:8080/<node>.init)
cat >> /etc/grub.d/40_custom <<EOF
menuentry 'RHEL CoreOS (Live)' --class fedora --class gnu-linux --class gnu --class os {
        linux /boot/rhcos-4.6.8-x86_64-live-kernel-x86_64  random.trust_cpu=on rd.luks.options=discard  ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://<HTTP-server-IP>:8080/rhcos-4.6.8-x86_64-live-rootfs.x86_64.img $init
        initrd /boot/rhcos-4.6.8-x86_64-live-initramfs.x86_64.img
}
EOF
grub2-set-default 'RHEL CoreOS (Live)'
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

For example, for bootstrap:

cat /root/init.sh
#!/bin/bash
ip a add 192.168.123.101/24 dev ens3
cd /boot;curl -O http://192.168.123.11:8080/rhcos-4.6.8-x86_64-live-initramfs.x86_64.img
cd /boot;curl -O http://192.168.123.11:8080/rhcos-4.6.8-x86_64-live-kernel-x86_64
init=`curl -s http://192.168.123.11:8080/bootstrap.init`
cat >> /etc/grub.d/40_custom <<EOF
menuentry 'RHEL CoreOS (Live)' --class fedora --class gnu-linux --class gnu --class os {
        linux /boot/rhcos-4.6.8-x86_64-live-kernel-x86_64  random.trust_cpu=on rd.luks.options=discard  ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://192.168.123.11:8080/rhcos-4.6.8-x86_64-live-rootfs.x86_64.img $init
        initrd /boot/rhcos-4.6.8-x86_64-live-initramfs.x86_64.img
}
EOF
grub2-set-default 'RHEL CoreOS (Live)'
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

6.2 Everything is ready; time to ignite

First run the script on the bootstrap machine (the host reboots after the script runs; if everything goes well, the reboot console shows an image-copy percentage, which means the system is being reinstalled):

./init.sh
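
The notes end here. The master and worker nodes are installed in the same way with their own .init files and scripts. One common way (not shown in the original) to follow the installation afterwards is to watch the bootstrap process from the bastion:

openshift-install wait-for bootstrap-complete --dir=/root/ocp4/ocp4install --log-level=info

export KUBECONFIG=/root/ocp4/ocp4install/auth/kubeconfig
oc get nodes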