III. Kubernetes: Cluster Installation

Table of Contents

I. Preliminary Preparation

II. Cluster Installation

1. System Initialization

2. Installing K8S with kubeadm

3. Installing Harbor, an enterprise-grade private Docker registry


I. Preliminary Preparation

This lab uses one master node, two worker nodes, and one private registry server, laid out as follows:

192.168.221.131  k8s-master      (master node)
192.168.221.132  k8s-node01      (worker node)
192.168.221.133  k8s-node02      (worker node)
192.168.221.134  hushensong.com  (Harbor private registry)

The virtual machines were created with VMware Workstation and installed with CentOS 7. For details on creating a VM in VMware Workstation and installing the operating system, follow the WeChat public account 【菜鸟自学大数据Hadoop】 and refer to the article 环境准备篇一:VMware workstations新建虚拟机并安装Centos7系统; it will not be repeated here.

II. Cluster Installation

1. System Initialization

1. Modify the hostname and the hosts file

The steps are demonstrated on one node; the procedure is identical on the other nodes (a helper sketch for pushing the changes to all three machines follows the hosts entries below).

# run on all three nodes
# vim /etc/hostname
k8s-master

# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# the lines below are new
192.168.221.131  k8s-master
192.168.221.132  k8s-node01
192.168.221.133  k8s-node02
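
The same changes have to land on k8s-node01 and k8s-node02 (each with its own hostname). As a convenience, here is a minimal sketch, assuming the master can reach all three machines over SSH as root; the script name and the SSH-based approach are my own additions, not part of the original procedure:

#! /bin/bash
# init_hosts.sh - hypothetical helper, run once from k8s-master
declare -A NODES=( [192.168.221.131]=k8s-master [192.168.221.132]=k8s-node01 [192.168.221.133]=k8s-node02 )
for ip in "${!NODES[@]}"
do
   # set the per-node hostname
   ssh root@$ip "hostnamectl set-hostname ${NODES[$ip]}"
   # append the three cluster entries to /etc/hosts on every node
   ssh root@$ip "cat >> /etc/hosts" <<EOF
192.168.221.131  k8s-master
192.168.221.132  k8s-node01
192.168.221.133  k8s-node02
EOF
done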

2. Install dependency packages

# run on all three nodes
# yum  install -y conntrack ntpdate ntp ipvsadm ipset  jq iptables curl  sysstat  libseccomp wget vim  net-tools git

3. Switch the firewall to iptables and flush the rules

# run on all three nodes
# systemctl  stop  firewalld.service  && systemctl  disable firewalld.service
# yum install -y iptables-services  &&   systemctl  start  iptables  &&  systemctl  enable iptables  && iptables  -F  &&  service  iptables save

4. Disable swap and SELinux

# run on all three nodes
# swapoff  -a  &&  sed -i  '/ swap / s/^\(.*\)$/#\1/g'  /etc/fstab
# setenforce 0 &&  sed -i  's/^SELINUX=.*/SELINUX=disabled/'  /etc/selinux/config

5. Tune kernel parameters (for K8S)

# run on all three nodes
# vim kubernetes.conf
# add the following content
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0  # forbid the use of swap space; it may only be used when the system is OOM
vm.overcommit_memory=1  # do not check whether enough physical memory is available
vm.panic_on_oom=0  # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720


# cp kubernetes.conf   /etc/sysctl.d/kubernetes.conf
# sysctl -p /etc/sysctl.d/kubernetes.conf
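
To confirm the parameters took effect, spot-check a few keys. Note that on a fresh CentOS 7 system the two net.bridge.* keys may not exist until the br_netfilter module is loaded (which is also done in the kube-proxy IPVS step further below), so if sysctl reports "No such file or directory" for them, load the module first and re-apply:

# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/kubernetes.conf
# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward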

6. Adjust the system time and configure time sync against Aliyun

# run on all three nodes
# set the system time zone to Asia/Shanghai
# timedatectl set-timezone  Asia/Shanghai
# keep the hardware clock (RTC) in UTC
# timedatectl  set-local-rtc  0

# crontab  -e
0 */1 * * * ntpdate  ntp.aliyun.com  > /dev/null 2>&1


# restart services that depend on the system time
# systemctl restart rsyslog
# systemctl  restart  crond

7. Stop services the system does not need

# run on all three nodes
# systemctl stop postfix
# systemctl  disable postfix

8. Configure rsyslogd and systemd journald

# run on all three nodes
# mkdir  /var/log/journal
# mkdir   /etc/systemd/journald.conf.d
# vim /etc/systemd/journald.conf.d/99-prophet.conf
# add the following content
[Journal]
# persist logs to disk
Storage=persistent
# compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# maximum disk usage: 10G
SystemMaxUse=10G
# maximum size of a single journal file: 200M
SystemMaxFileSize=200M
# keep logs for two weeks
MaxRetentionSec=2week
# do not forward logs to syslog
ForwardToSyslog=no



# systemctl restart systemd-journald

Once all of the above configuration is finished on the three machines, reboot all three.

2. Installing K8S with kubeadm

1. Prerequisites for enabling IPVS in kube-proxy

# run on all three nodes

# modprobe br_netfilter
# vim /etc/sysconfig/modules/ipvs.modules
# add the following content
#! /bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

# chmod 755 /etc/sysconfig/modules/ipvs.modules   &&  bash  /etc/sysconfig/modules/ipvs.modules   &&  lsmod |grep -e ip_vs -e nf_conntrack_ipv4

2. Install Docker

# run on all three nodes

# yum install -y yum-utils  device-mapper-persistent-data  lvm2
# yum-config-manager    --add-repo  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum update -y && yum install -y docker-ce


# create the /etc/docker directory
# mkdir /etc/docker

# vim /etc/docker/daemon.json
# add the following content
{
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file",
        "log-opts": {
                "max-size": "100m"
        }
}


# mkdir -p /etc/systemd/system/docker.service.d
# reload systemd and restart the Docker service
# systemctl  daemon-reload  && systemctl restart docker  &&  systemctl enable docker
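
Before moving on, it is worth confirming that Docker is actually using the systemd cgroup driver set in daemon.json, since a mismatch with the kubelet is a common source of trouble later; the command below should report "Cgroup Driver: systemd":

# docker info 2>/dev/null | grep -i "cgroup driver"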

3. Install kubeadm (on master and workers)

# run on all three nodes

# vim /etc/yum.repos.d/kubernetes.repo
# add the following content

[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg



# yum install -y kubeadm-1.18.6  kubectl-1.18.6  kubelet-1.18.6

# systemctl enable  kubelet.service
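
Note that the kubelet is only enabled here, not started; it will keep restarting until kubeadm init (or kubeadm join) writes its configuration, which is expected. A quick check of the installed versions, both of which should report 1.18.6:

# kubeadm version -o short
# kubelet --version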

4. Download the images

If, for whatever reason, the Google registry (k8s.gcr.io) cannot be reached from inside China, the required images can be obtained as follows. Note that although kubeadm 1.18.6 was installed above, kubeadm config images list queries the newest stable-1.18 patch release (v1.18.16 at the time of writing), which is why the image tags below are v1.18.16.

1) First, check which image versions are required

#  kubeadm config images list
I0311 23:07:15.531530   38268 version.go:252] remote version is much newer: v1.20.4; falling back to: stable-1.18
W0311 23:07:18.243533   38268 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.16
k8s.gcr.io/kube-controller-manager:v1.18.16
k8s.gcr.io/kube-scheduler:v1.18.16
k8s.gcr.io/kube-proxy:v1.18.16
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

2) Pull the images from a domestic (Aliyun) mirror

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.16
v1.18.16: Pulling from google_containers/kube-apiserver
4ba180b702c8: Pull complete 
73b4eaad3acc: Pull complete 
Digest: sha256:0f588dcfba503f25926d14b1457b6899284f0e795e36028758615740dc757b5b
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.16
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.16

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.16
v1.18.16: Pulling from google_containers/kube-controller-manager
4ba180b702c8: Already exists 
33729e2a1566: Pull complete 
Digest: sha256:4bd2a89ae02815adbadcb066c7cb8929c55a1f4438b291eb3f43d2e9259b13c5
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.16
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.16

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.16
v1.18.16: Pulling from google_containers/kube-scheduler
4ba180b702c8: Already exists 
ea10723547a2: Pull complete 
Digest: sha256:3ec3e389816a9176f6045d67decbd10df8590db347aac6896f26d4ac00e8d39d
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.16
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.16

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.16
v1.18.16: Pulling from google_containers/kube-proxy
4ba180b702c8: Already exists 
85b604bcc41a: Pull complete 
fafe7e2b354a: Pull complete 
b2c4667c1ca7: Pull complete 
c93c6a0c3ea5: Pull complete 
beea6d17d8e9: Pull complete 
0e370c89e9e0: Pull complete 
Digest: sha256:337c531ad42fb857f51bbe70569f3a26b3d422352331d0a12e1cf3630fe7ab44
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.16
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.16

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
3.2: Pulling from google_containers/pause
c74f8866df09: Pull complete 
Digest: sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
3.4.3-0: Pulling from google_containers/etcd
39fafc05754f: Pull complete 
3736e1e115b8: Pull complete 
79de61f59f2e: Pull complete 
Digest: sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0

[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
1.6.7: Pulling from google_containers/coredns
c6568d217a00: Pull complete 
ff0415ad7f19: Pull complete 
Digest: sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7


[root@k8s-master ~]# docker images
REPOSITORY                                                                    TAG        IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.18.16   f64b8b5e96a6   2 weeks ago     117MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.18.16   26e38f7f559a   2 weeks ago     173MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.18.16   b3c57ca578fb   2 weeks ago     162MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.18.16   5a84bb672db8   2 weeks ago     96.1MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   12 months ago   683kB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.6.7      67da37a9a360   13 months ago   43.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.3-0    303ce5db0e90   16 months ago   288MB

3) Re-tag the images with the names kubeadm expects

[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.16    k8s.gcr.io/kube-proxy:v1.18.16
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.16   k8s.gcr.io/kube-apiserver:v1.18.16
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.16   k8s.gcr.io/kube-controller-manager:v1.18.16
[root@k8s-master ~]# docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.16   k8s.gcr.io/kube-scheduler:v1.18.16
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2   k8s.gcr.io/pause:3.2
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7  k8s.gcr.io/coredns:1.6.7
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0   k8s.gcr.io/etcd:3.4.3-0

4) Remove the previously downloaded mirror-tagged images

[root@k8s-master ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.16
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.16
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy@sha256:337c531ad42fb857f51bbe70569f3a26b3d422352331d0a12e1cf3630fe7ab44

[root@k8s-master ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.16
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.16
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver@sha256:0f588dcfba503f25926d14b1457b6899284f0e795e36028758615740dc757b5b

[root@k8s-master ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.16
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.16
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager@sha256:4bd2a89ae02815adbadcb066c7cb8929c55a1f4438b291eb3f43d2e9259b13c5
 
[root@k8s-master ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.16 
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.16
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler@sha256:3ec3e389816a9176f6045d67decbd10df8590db347aac6896f26d4ac00e8d39d

[root@k8s-master ~]# docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f

[root@k8s-master ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7 
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800
 
[root@k8s-master ~]# docker rmi  registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0
Untagged: registry.cn-hangzhou.aliyuncs.com/google_containers/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646
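
Steps 2) through 4) above are repetitive, so they can also be collapsed into one loop. A minimal sketch (the script name is my own; the image list matches the output of kubeadm config images list above):

#! /bin/bash
# pull_k8s_images.sh - pull from the Aliyun mirror, re-tag to k8s.gcr.io, drop the mirror tag
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.18.16 kube-controller-manager:v1.18.16 kube-scheduler:v1.18.16 \
           kube-proxy:v1.18.16 pause:3.2 etcd:3.4.3-0 coredns:1.6.7
do
   docker pull $MIRROR/$img
   docker tag  $MIRROR/$img k8s.gcr.io/$img
   docker rmi  $MIRROR/$img
done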

5) Save the images to tar files, copy them to the worker nodes, and import them there

# docker save -o kube-proxy.tar  k8s.gcr.io/kube-proxy:v1.18.16
# docker save -o kube-controller-manager.tar  k8s.gcr.io/kube-controller-manager:v1.18.16
# docker save -o kube-apiserver.tar  k8s.gcr.io/kube-apiserver:v1.18.16
# docker save -o kube-scheduler.tar k8s.gcr.io/kube-scheduler:v1.18.16
# docker save -o pause.tar  k8s.gcr.io/pause:3.2
# docker save -o coredns.tar  k8s.gcr.io/coredns:1.6.7
# docker save -o etcd.tar  k8s.gcr.io/etcd:3.4.3-0
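
The tar files then need to be copied to both workers. A minimal sketch, assuming they are gathered under /usr/local/src/k8s-1.18.16 on the master (the same path the import script below reads from) and that root SSH access to the workers is available:

# mkdir -p /usr/local/src/k8s-1.18.16  &&  mv *.tar /usr/local/src/k8s-1.18.16/
# scp -r /usr/local/src/k8s-1.18.16  root@k8s-node01:/usr/local/src/
# scp -r /usr/local/src/k8s-1.18.16  root@k8s-node02:/usr/local/src/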

The following small script imports all the images in one go; run it on each worker node after the tar files have been copied over:

[root@k8s-node01 k8s-1.18.16]# cat load_images.sh 
#! /bin/bash
for i in `ls /usr/local/src/k8s-1.18.16`
do
   docker load  < $i
done

5. Initialize the master node

# kubeadm  config  print  init-defaults  > kubeadm-config.yaml

Edit kubeadm-config.yaml: change advertiseAddress to the master node's IP, set kubernetesVersion to v1.18.16, add the podSubnet line, and append the KubeProxyConfiguration block shown below:

localAPIEndpoint:
  advertiseAddress: 192.168.221.131
  bindPort: 6443
kubernetesVersion: v1.18.16
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}


---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs



# kubeadm   init  --config=kubeadm-config.yaml   --upload-certs    |tee  kubeadm-init.log

6. Configure kubectl on the master and join the worker nodes

Run the commands recorded in the installation log (kubeadm-init.log):

# on the master node
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config


# on each worker node
# kubeadm join 192.168.221.131:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:9e53bf68513f1799b718da015c966426b33113ef803b20b7f0871e6102132868
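
The token and hash come straight from kubeadm-init.log. If the log is lost or the token has expired (the default token TTL is 24 hours), a fresh join command can be generated on the master at any time:

# kubeadm token create --print-join-command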

7. Deploy the pod network

Checking the node status at this point shows NotReady, so the flannel network add-on needs to be deployed.

[root@k8s-master ~]# mkdir install--k8s
[root@k8s-master ~]# mv kubeadm-init.log   kubeadm-config.yaml  install--k8s/
[root@k8s-master install--k8s]# mkdir  core
[root@k8s-master install--k8s]# mv * core/
mv: cannot move ‘core’ to a subdirectory of itself, ‘core/core’
[root@k8s-master install--k8s]# mkdir plugin
[root@k8s-master flannel]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master flannel]# kubectl create -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created


[root@k8s-master flannel]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-rwxx2             0/1     Pending   0          11m
coredns-66bff467f8-t9ztd             0/1     Running   0          11m
etcd-k8s-master                      1/1     Running   0          12m
kube-apiserver-k8s-master            1/1     Running   0          12m
kube-controller-manager-k8s-master   1/1     Running   0          12m
kube-flannel-ds-cg77h                1/1     Running   0          28s
kube-proxy-tb2hw                     1/1     Running   0          11m
kube-scheduler-k8s-master            1/1     Running   0          12m




[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   16m     v1.18.6
k8s-node01   Ready    <none>   2m42s   v1.18.6
k8s-node02   Ready    <none>   112s    v1.18.6

8. Verification

[root@k8s-master ~]# kubectl get pod -n kube-system   # list the pods in the kube-system namespace
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-rwxx2             1/1     Running   0          15m
coredns-66bff467f8-t9ztd             1/1     Running   0          15m
etcd-k8s-master                      1/1     Running   0          15m
kube-apiserver-k8s-master            1/1     Running   0          15m
kube-controller-manager-k8s-master   1/1     Running   0          15m
kube-flannel-ds-7kmrb                1/1     Running   0          91s
kube-flannel-ds-cg77h                1/1     Running   0          3m39s
kube-flannel-ds-jpphf                1/1     Running   0          41s
kube-proxy-8b4bj                     1/1     Running   0          91s
kube-proxy-kx9fw                     1/1     Running   0          41s
kube-proxy-tb2hw                     1/1     Running   0          15m
kube-scheduler-k8s-master            1/1     Running   0          15m
[root@k8s-master ~]# 
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide   # show detailed pod information in kube-system
NAME                                 READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
coredns-66bff467f8-rwxx2             1/1     Running   0          15m     10.244.0.3        k8s-master   <none>           <none>
coredns-66bff467f8-t9ztd             1/1     Running   0          15m     10.244.0.2        k8s-master   <none>           <none>
etcd-k8s-master                      1/1     Running   0          15m     192.168.221.131   k8s-master   <none>           <none>
kube-apiserver-k8s-master            1/1     Running   0          15m     192.168.221.131   k8s-master   <none>           <none>
kube-controller-manager-k8s-master   1/1     Running   0          15m     192.168.221.131   k8s-master   <none>           <none>
kube-flannel-ds-7kmrb                1/1     Running   0          101s    192.168.221.132   k8s-node01   <none>           <none>
kube-flannel-ds-cg77h                1/1     Running   0          3m49s   192.168.221.131   k8s-master   <none>           <none>
kube-flannel-ds-jpphf                1/1     Running   0          51s     192.168.221.133   k8s-node02   <none>           <none>
kube-proxy-8b4bj                     1/1     Running   0          101s    192.168.221.132   k8s-node01   <none>           <none>
kube-proxy-kx9fw                     1/1     Running   0          51s     192.168.221.133   k8s-node02   <none>           <none>
kube-proxy-tb2hw                     1/1     Running   0          15m     192.168.221.131   k8s-master   <none>           <none>
kube-scheduler-k8s-master            1/1     Running   0          15m     192.168.221.131   k8s-master   <none>           <none>
[root@k8s-master ~]# 
[root@k8s-master ~]# 
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get pod -n kube-system -w   # watch pod status continuously
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-rwxx2             1/1     Running   0          15m
coredns-66bff467f8-t9ztd             1/1     Running   0          15m
etcd-k8s-master                      1/1     Running   0          15m
kube-apiserver-k8s-master            1/1     Running   0          15m
kube-controller-manager-k8s-master   1/1     Running   0          15m
kube-flannel-ds-7kmrb                1/1     Running   0          2m2s
kube-flannel-ds-cg77h                1/1     Running   0          4m10s
kube-flannel-ds-jpphf                1/1     Running   0          72s
kube-proxy-8b4bj                     1/1     Running   0          2m2s
kube-proxy-kx9fw                     1/1     Running   0          72s
kube-proxy-tb2hw                     1/1     Running   0          15m
kube-scheduler-k8s-master            1/1     Running   0          15m

This completes the K8S cluster installation.

3. Installing Harbor, an enterprise-grade private Docker registry

1) Install docker-ce (same procedure as above). After installation, edit daemon.json and add an insecure-registries entry; every machine in the K8S cluster that needs to use this registry must be configured the same way. Then restart Docker.

# vim  /etc/docker/daemon.json
{
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file",
        "log-opts": {
                "max-size": "100m"
        },
        "insecure-registries": ["https://hushensong.com"]
}
# systemctl  restart docker
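
To verify that the entry was picked up after the restart, check the registry list reported by Docker:

# docker info 2>/dev/null | grep -A 2 -i "insecure registries"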

2) Download and install docker-compose

# on the Harbor server
[root@habor ~]# curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
[root@habor ~]# chmod +x /usr/local/bin/docker-compose

[root@habor bin]# docker-compose  version 
docker-compose version 1.21.2, build a133471
docker-py version: 3.3.0
CPython version: 3.6.5
OpenSSL version: OpenSSL 1.0.1t  3 May 2016

3) Download the Harbor package and install it

# wget https://github.com/goharbor/harbor/releases/download/v2.2.0/harbor-offline-installer-v2.2.0.tgz

# tar xzvf harbor-offline-installer-v2.2.0.tgz 

# mv harbor  /usr/local/

# mkdir /data/cert
# cd /data/cert
# create the HTTPS certificate and set up the related directories and permissions

[root@habor cert]# openssl   genrsa  -des3  -out server.key  2048
Generating RSA private key, 2048 bit long modulus
....................................................................................................+++
.....................................+++
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:

[root@habor cert]# openssl  req  -new -key server.key  -out server.csr
Enter pass phrase for server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:SX 
Locality Name (eg, city) [Default City]:XA
Organization Name (eg, company) [Default Company Ltd]:K8S
Organizational Unit Name (eg, section) []:K8S 
Common Name (eg, your name or your server's hostname) []:hushensong.com
Email Address []:hushensong@136.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
You have new mail in /var/spool/mail/root

[root@habor cert]# cp server.key  server.key.org

[root@habor cert]# openssl  rsa  -in server.key.org   -out  server.key
Enter pass phrase for server.key.org:

[root@habor cert]# openssl   x509  -req -days 365 -in server.csr  -signkey server.key -out server.crt
Signature ok
subject=/C=CN/ST=SX/L=XA/O=K8S/OU=K8S/CN=hushensong.com/emailAddress=hushensong@136.com
Getting Private key
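
The interactive prompts above can also be skipped entirely. A minimal non-interactive equivalent (same subject fields, no passphrase), in case you prefer a one-liner:

# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt \
    -subj "/C=CN/ST=SX/L=XA/O=K8S/OU=K8S/CN=hushensong.com"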

There is no harbor.yml by default; copy the template and rename it, then edit harbor.yml (the fields that typically need changing are sketched after the copy command below).

# cp harbor.yml.tmpl harbor.yml
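
At a minimum, harbor.yml needs the registry hostname and the HTTPS certificate paths. A sketch of the relevant fields (field names follow the v2.2.0 template; the certificate paths assume the files generated under /data/cert above):

hostname: hushensong.com

https:
  port: 443
  certificate: /data/cert/server.crt
  private_key: /data/cert/server.key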

Then run install.sh to perform the installation:

[root@habor habor]# ./install.sh 

[Step 0]: checking if docker is installed ...

Note: docker version: 20.10.5

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.21.2

[Step 2]: loading Harbor images ...
Loaded image: goharbor/harbor-jobservice:v2.2.0
Loaded image: goharbor/harbor-exporter:v2.2.0
Loaded image: goharbor/registry-photon:v2.2.0
Loaded image: goharbor/harbor-core:v2.2.0
Loaded image: goharbor/harbor-db:v2.2.0
Loaded image: goharbor/notary-server-photon:v2.2.0
Loaded image: goharbor/trivy-adapter-photon:v2.2.0
Loaded image: goharbor/harbor-registryctl:v2.2.0
Loaded image: goharbor/redis-photon:v2.2.0
Loaded image: goharbor/harbor-log:v2.2.0
Loaded image: goharbor/nginx-photon:v2.2.0
Loaded image: goharbor/notary-signer-photon:v2.2.0
Loaded image: goharbor/chartmuseum-photon:v2.2.0
Loaded image: goharbor/prepare:v2.2.0
Loaded image: goharbor/harbor-portal:v2.2.0


[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /usr/local/habor
Clearing the configuration file: /config/portal/nginx.conf
Clearing the configuration file: /config/log/logrotate.conf
Clearing the configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir



[Step 5]: starting Harbor ...
Creating network "habor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-db     ... done
Creating registry      ... done
Creating registryctl   ... done
Creating harbor-portal ... done
Creating redis         ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done
✔ ----Harbor has been installed and started successfully.----
You have new mail in /var/spool/mail/root

4) Configure the hosts file

Add the following entry to the hosts file on the K8S cluster machines, the Harbor server, and the Windows machine:

192.168.221.134  hushensong.com
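
On the Linux machines this is a one-liner (on Windows, edit C:\Windows\System32\drivers\etc\hosts by hand):

# echo "192.168.221.134  hushensong.com" >> /etc/hosts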

5) Test logging in

  • On a Linux machine
[root@k8s-master ~]# docker login  https://hushensong.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded


[root@k8s-master ~]# docker tag hushensonglinux/myapp:v1 hushensong.com/library/myapp:v1


[root@k8s-master ~]# docker push hushensong.com/library/myapp:v1
The push refers to repository [hushensong.com/library/myapp]
a0d2c4392b06: Pushed 
05a9e65e2d53: Pushed 
68695a6cfd7d: Pushed 
c1dc81a64903: Pushed 
8460a579ab63: Pushed 
d39d92664027: Pushed 
v1: digest: sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e size: 1569
  • On the Windows machine

Enter the username and password: admin/Harbor12345 (Harbor's default admin credentials).

Test pulling an image:

# docker pull hushensong.com/library/myapp:v1

 

Test deploying a service with an image from Harbor:

[root@k8s-master ~]#  kubectl  run nginx-deployment  --image=hushensong.com/library/myapp:v1 --port=80  --replicas=1

[root@k8s-master ~]# kubectl  get pod -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment   1/1     Running   0          7m18s   10.244.2.2   k8s-node02   <none>           <none>


[root@k8s-node02 ~]# docker ps -a |grep nginx
c7cd2449265f   hushensong.com/library/myapp   "nginx -g 'daemon of…"   2 hours ago   Up 2 hours                         k8s_nginx-deployment_nginx-deployment_default_9c6bbe16-da45-4a92-9231-ba9799a928f9_0
9643e2a6b48c   k8s.gcr.io/pause:3.2           "/pause"                 2 hours ago   Up 2 hours                         k8s_POD_nginx-deployment_default_9c6bbe16-da45-4a92-9231-ba9799a928f9_0


[root@k8s-master ~]# curl 10.244.2.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>


[root@k8s-master ~]# curl 10.244.2.2/hostname.html
nginx-deployment

Hello everyone, I'm 菜鸟HSS. I firmly believe there are no born experts and no permanent beginners. I focus on learning and researching Linux, cloud computing, and big data. You are welcome to follow my WeChat public account 「菜鸟自学大数据Hadoop」; everything shared there is free. I hope to help more people while improving my own skills along the way, a win-win for everyone.
