Deploying a Stable Kubernetes Cluster Quickly with kubeadm

1. Operating System Initialization

1.1 Set the hostname

Set the hostname on each node according to your plan (all nodes):

hostnamectl set-hostname <hostname>
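
For example, matching the hosts entries defined in the next step, the concrete commands are:

# On 192.168.21.209
hostnamectl set-hostname k8s-master
# On 192.168.21.203
hostnamectl set-hostname k8s-node1
# On 192.168.21.202
hostnamectl set-hostname k8s-node2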

1.2 Configure hosts-based name resolution

Edit /etc/hosts so that the hostnames resolve (all nodes):

# cat /etc/hosts
192.168.21.209 k8s-master
192.168.21.203 k8s-node1
192.168.21.202 k8s-node2

1.3 Disable SELinux and the firewall

All nodes:

# Disable the firewall
systemctl disable firewalld   # permanent
systemctl stop firewalld      # temporary


# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

1.4 Disable swap

All nodes:

swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent
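
A quick check that swap is really off on each node:

free -h          # the Swap line should read 0
swapon --show    # prints nothing when no swap device is active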

1.5 Configure kernel parameters

All nodes:

vim /etc/sysctl.conf

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

net.ipv4.ip_forward = 1

# Make bridged traffic visible to iptables (transparent bridging)
# NOTE: kube-proxy requires the node OS to have /sys/module/br_netfilter present and bridge-nf-call-iptables set to 1. If these requirements are not met, kube-proxy only logs the failed check and keeps running, but some of the iptables rules it installs will not take effect.

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1


# Apply the configuration
sysctl -p
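
The bridge-nf-call-* settings above only take effect while the br_netfilter kernel module is loaded. A minimal sketch for loading it now and on every boot (the /etc/modules-load.d path assumes a systemd-based distribution such as CentOS 7):

# Load the module immediately and verify the directory kube-proxy checks for
modprobe br_netfilter
ls /sys/module/br_netfilter

# Load it automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf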

1.6 Set up passwordless SSH between nodes

All nodes (SSH listens on port 2322 in this environment):

ssh-keygen -t rsa
ssh-copy-id k8s-master -p 2322
ssh-copy-id k8s-node1 -p 2322
ssh-copy-id k8s-node2 -p 2322

1.7 Configure Docker

  • Install Docker

Docker must be installed on all nodes, and the Docker version must be identical across nodes. The installation procedure is not covered in detail here, but a brief sketch follows.
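
A minimal sketch of a yum-based install (assuming CentOS 7 and the Aliyun docker-ce mirror; pin whatever version your environment standardizes on, 19.03.11 is the version seen later in this document):

yum -y install yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-19.03.11 docker-ce-cli-19.03.11 containerd.io
systemctl enable docker && systemctl start docker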

  • Edit the daemon.json configuration file
# vim /etc/docker/daemon.json
{
    "log-driver": "json-file",
    "log-opts": {"max-size":"10m", "max-file":"1"},
    "insecure-registries": ["http://baas-harbor.peogoo.com"],
    "registry-mirrors": ["https://zn14eon5.mirror.aliyuncs.com"],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "max-concurrent-downloads": 10,
    "data-root": "/opt/data/docker",
    "hosts": ["tcp://127.0.0.1:5000", "unix:///var/run/docker.sock"]
}

Parameter reference:


{
    "log-driver": "json-file", # Default logging driver for containers (default: "json-file")
    "insecure-registries": ["http://10.20.17.20"], # Private registry addresses that may be accessed over plain HTTP
    "registry-mirrors": ["https://zn14eon5.mirror.aliyuncs.com"], # Aliyun registry mirror (pull accelerator)
    "log-opts": {"max-size":"10m", "max-file":"1"},  # Options for the default logging driver
    "exec-opts": ["native.cgroupdriver=systemd"], # Required by kubeadm on recent Kubernetes versions (systemd cgroup driver)
    "max-concurrent-downloads": 10, # Maximum concurrent downloads per pull (default: 3)
    "data-root": "/opt/data/docker", # Root directory used by the Docker runtime
    "hosts": ["tcp://0.0.0.0:5000", "unix:///var/run/docker.sock"], # Requires removing -H fd:// from the Docker service unit (/usr/lib/systemd/system/docker.service)
    "graph": "/opt/data/docker", # Deprecated in newer versions; use data-root instead
}
  • Edit the Docker systemd service file
# vim /usr/lib/systemd/system/docker.service

14 #ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock  --graph=/opt/data/docker
15 ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock
  • Restart Docker to apply the configuration
systemctl daemon-reload
systemctl restart docker
systemctl status docker

If Docker fails to start, run the daemon in the foreground to see why:

dockerd

The cause is almost always a mistake in /etc/docker/daemon.json.

If it still will not start, check the system log for the detailed error:

tail -f /var/log/messages

2. etcd Deployment

etcd is deployed here with etcdadm.

Pros: certificates, etcd installation, and scale-out are all handled with a single command.

Cons: it exposes very few etcd startup flags, so runtime parameters must be tuned through etcd.env after installation, as sketched below.
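
For example, raising the backend quota might look like this (a sketch; /etc/etcd/etcd.env is the file etcdadm normally writes, and the variable name is standard etcd configuration, but verify both on your installation):

# vim /etc/etcd/etcd.env
ETCD_QUOTA_BACKEND_BYTES=8589934592    # raise the backend size limit to 8 GiB

# Restart etcd so the new environment takes effect
systemctl restart etcd.service && systemctl status etcd.service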

2.1 Build the binary

# Install git
yum -y install git

# Clone the source
cd /opt/tools/
git clone https://github.com/kubernetes-sigs/etcdadm.git; cd etcdadm

# Build with a local Go toolchain
make etcdadm

# Or build inside a container if only Docker is available
make container-build

2.2 Start etcd

  • Start command:
/opt/tools/etcdadm/etcdadm init --certs-dir /etc/kubernetes/pki/etcd/ --install-dir /usr/bin/
  • The startup output looks like this:
INFO[0000] [install] Artifact not found in cache. Trying to fetch from upstream: https://github.com/coreos/etcd/releases/download 
INFO[0000] [install] Downloading & installing etcd https://github.com/coreos/etcd/releases/download from 3.4.9 to /var/cache/etcdadm/etcd/v3.4.9 
INFO[0000] [install] downloading etcd from https://github.com/coreos/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz to /var/cache/etcdadm/etcd/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz 
######################################################################## 100.0%
INFO[0003] [install] extracting etcd archive /var/cache/etcdadm/etcd/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz to /tmp/etcd103503474 
INFO[0003] [install] verifying etcd 3.4.9 is installed in /usr/bin/ 
INFO[0003] [certificates] creating PKI assets           
INFO[0003] creating a self signed etcd CA certificate and key files 
[certificates] Generated ca certificate and key.
INFO[0003] creating a new server certificate and key files for etcd 
[certificates] Generated server certificate and key.
[certificates] server serving cert is signed for DNS names [k8s-master] and IPs [192.168.21.209 127.0.0.1]
INFO[0004] creating a new certificate and key files for etcd peering 
[certificates] Generated peer certificate and key.
[certificates] peer serving cert is signed for DNS names [k8s-master] and IPs [192.168.21.209]
INFO[0004] creating a new client certificate for the etcdctl 
[certificates] Generated etcdctl-etcd-client certificate and key.
INFO[0004] creating a new client certificate for the apiserver calling etcd 
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki/etcd/"
INFO[0005] [health] Checking local etcd endpoint health 
INFO[0005] [health] Local etcd endpoint is healthy      
INFO[0005] To add another member to the cluster, copy the CA cert/key to its certificate dir and run: 
INFO[0005] 	etcdadm join https://192.168.21.209:2379  
  • Check that the etcd service is running
# systemctl status etcd.service

Note: the single-node etcd is now complete. If you do not need an etcd cluster, skip the steps below.

2.3 Join new nodes

  • Copy files
    Copy the generated CA certificate/key and the compiled etcdadm binary to the new node (run on an existing node):
# scp -P 2322 /etc/kubernetes/pki/etcd/ca.* 192.168.21.203:/etc/kubernetes/pki/etcd/
# scp -P 2322 /opt/tools/etcdadm/etcdadm 192.168.21.203:/opt/tools/
  • Join the new node to the etcd cluster

Add the new node to the etcd cluster (run on the new node):

# /opt/tools/etcdadm join https://192.168.21.209:2379 --certs-dir /etc/kubernetes/pki/etcd/ --install-dir /usr/bin/

Note: the IP address is that of an existing member.

The join output looks like this:

INFO[0000] [certificates] creating PKI assets           
INFO[0000] creating a self signed etcd CA certificate and key files 
[certificates] Using the existing ca certificate and key.
INFO[0000] creating a new server certificate and key files for etcd 
[certificates] Generated server certificate and key.
[certificates] server serving cert is signed for DNS names [k8s-node1] and IPs [192.168.21.203 127.0.0.1]
INFO[0000] creating a new certificate and key files for etcd peering 
[certificates] Generated peer certificate and key.
[certificates] peer serving cert is signed for DNS names [k8s-node1] and IPs [192.168.21.203]
INFO[0000] creating a new client certificate for the etcdctl 
[certificates] Generated etcdctl-etcd-client certificate and key.
INFO[0001] creating a new client certificate for the apiserver calling etcd 
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki/etcd/"
INFO[0001] [membership] Checking if this member was added 
INFO[0001] [membership] Member was not added            
INFO[0001] Removing existing data dir "/var/lib/etcd"   
INFO[0001] [membership] Adding member                   
INFO[0001] [membership] Checking if member was started  
INFO[0001] [membership] Member was not started          
INFO[0001] [membership] Removing existing data dir "/var/lib/etcd" 
INFO[0001] [install] Artifact not found in cache. Trying to fetch from upstream: https://github.com/coreos/etcd/releases/download 
INFO[0001] [install] Downloading & installing etcd https://github.com/coreos/etcd/releases/download from 3.4.9 to /var/cache/etcdadm/etcd/v3.4.9 
INFO[0001] [install] downloading etcd from https://github.com/coreos/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz to /var/cache/etcdadm/etcd/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz 
######################################################################## 100.0%
INFO[0003] [install] extracting etcd archive /var/cache/etcdadm/etcd/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz to /tmp/etcd662320697 
INFO[0003] [install] verifying etcd 3.4.9 is installed in /usr/bin/ 
INFO[0004] [health] Checking local etcd endpoint health 
INFO[0004] [health] Local etcd endpoint is healthy
  • Check member health (run on an existing node)
# echo -e "export ETCDCTL_API=3\nalias etcdctl='etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --endpoints=https://192.168.21.209:2379,https://192.168.21.203:2379,https://192.168.21.202:2379 --write-out=table'" >> /root/.bashrc; source /root/.bashrc


# etcdctl endpoint health
+-----------------------------+--------+-------------+-------+
|          ENDPOINT           | HEALTH |    TOOK     | ERROR |
+-----------------------------+--------+-------------+-------+
| https://192.168.21.209:2379 |   true | 10.540029ms |       |
| https://192.168.21.202:2379 |   true | 11.533006ms |       |
| https://192.168.21.203:2379 |   true | 10.948705ms |       |
+-----------------------------+--------+-------------+-------+
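
To additionally confirm cluster membership and see which member is the current leader, the same alias can be used with other etcdctl subcommands:

# etcdctl member list
# etcdctl endpoint status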

3. Kubernetes Cluster Deployment

3.1 Bootstrap the master node

3.1.1 Add the Kubernetes yum repository

Add the Kubernetes yum repository (run on all nodes):

# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3.1.2 Install kubelet, kubeadm, and kubectl

  • Install kubelet, kubeadm, and kubectl (run on the master node)
# yum -y install kubeadm-1.19.0 kubectl-1.19.0 kubelet-1.19.0

Note: if no version is specified, the latest version is installed by default:

# yum install kubeadm kubectl kubelet
  • Enable and start the kubelet service (run on the master node)
# systemctl enable kubelet && systemctl start kubelet

3.1.3 Preparation before kubeadm init

Preparation before kubeadm init (a YAML config file is used to point the cluster at the external etcd):

  • Copy the etcd certificates into the kubernetes PKI directory (run on the etcd node)
Note: if etcd and k8s-master are on different servers, run the following on the etcd node (add -P 2322 if, as configured above, SSH listens on the non-default port):
scp /etc/kubernetes/pki/etcd/ca.crt k8s-master:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/apiserver-etcd-client.crt k8s-master:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/apiserver-etcd-client.key k8s-master:/etc/kubernetes/pki/


Note: if etcd and k8s-master are on the same server, the following is sufficient:
cp /etc/kubernetes/pki/etcd/apiserver-etcd-client.crt /etc/kubernetes/pki/
cp /etc/kubernetes/pki/etcd/apiserver-etcd-client.key /etc/kubernetes/pki/
  • Write the kubeadm YAML config file (run on the master node)
# cd /opt/kubernetes/yaml

# cat > kubeadm-config.yml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  name: 192.168.21.209
---
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.21.209:6443 
controllerManager: {}
dns:
  type: CoreDNS
etcd:         # use an external etcd cluster
  external:
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    endpoints:                        # etcd cluster members
    - https://192.168.21.209:2379
    - https://192.168.21.203:2379
    - https://192.168.21.202:2379
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
kind: ClusterConfiguration
kubernetesVersion: v1.19.0  # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.2.0.0/16    # pod CIDR; must match the flannel config
  serviceSubnet: 10.1.0.0/16  # service CIDR
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs    # switch kube-proxy to IPVS mode
EOF
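
Optionally, the control-plane images can be pre-pulled so that kubeadm init does not stall on downloads (assuming the image registry implied by the config is reachable from the master):

# kubeadm config images pull --config /opt/kubernetes/yaml/kubeadm-config.yml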

3.1.4 Initialize the master node

Run the following on the master node:

# kubeadm init --config /opt/kubernetes/yaml/kubeadm-config.yml --upload-certs

# Initialization ends by printing the following three commands; remember to run them!
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

3.1.5 Check the current cluster state

  • List the running Kubernetes system containers (run on the master node)
# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED              STATUS              PORTS               NAMES
99f40d4ce5b3        bc9c328f379c           "/usr/local/bin/kube…"   57 seconds ago       Up 56 seconds                           k8s_kube-proxy_kube-proxy-9bz45_kube-system_e547ace5-518b-4ca0-9eeb-e95ab8d3e021_0
fdce58eb4153        k8s.gcr.io/pause:3.2   "/pause"                 57 seconds ago       Up 56 seconds                           k8s_POD_kube-proxy-9bz45_kube-system_e547ace5-518b-4ca0-9eeb-e95ab8d3e021_0
ab6a4a558649        cbdc8369d8b1           "kube-scheduler --au…"   About a minute ago   Up About a minute                       k8s_kube-scheduler_kube-scheduler-192.168.21.209_kube-system_5146743ebb284c11f03dc85146799d8b_0
62c66d10cfe9        09d665d529d0           "kube-controller-man…"   About a minute ago   Up About a minute                       k8s_kube-controller-manager_kube-controller-manager-192.168.21.209_kube-system_5500b5fcc2cfd6b7e22acb4ee171ced7_0
2820b1b23393        1b74e93ece2f           "kube-apiserver --ad…"   About a minute ago   Up About a minute                       k8s_kube-apiserver_kube-apiserver-192.168.21.209_kube-system_8c10638aa5b29a5f9775277f0ba78439_0
99a38423a26b        k8s.gcr.io/pause:3.2   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-scheduler-192.168.21.209_kube-system_5146743ebb284c11f03dc85146799d8b_0
dffa9e76cca6        k8s.gcr.io/pause:3.2   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-controller-manager-192.168.21.209_kube-system_5500b5fcc2cfd6b7e22acb4ee171ced7_0
90438c7966cf        k8s.gcr.io/pause:3.2   "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-apiserver-192.168.21.209_kube-system_8c10638aa5b29a5f9775277f0ba78439_0
  • List the current nodes (run on the master node)
# kubectl get node
NAME             STATUS     ROLES    AGE     VERSION
192.168.21.209   NotReady   master   5m39s   v1.19.0
  • List the existing namespaces (run on the master node)
# kubectl get namespace
NAME              STATUS   AGE
default           Active   119s
kube-node-lease   Active   2m
kube-public       Active   2m
kube-system       Active   2m
  • List all pods in the kube-system namespace (run on the master node)
# kubectl get pods -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-j4cfb                  0/1     Pending   0          3m25s
coredns-f9fd979d6-lvttk                  0/1     Pending   0          3m25s
kube-apiserver-192.168.21.209            1/1     Running   0          3m35s
kube-controller-manager-192.168.21.209   1/1     Running   0          3m35s
kube-proxy-9bz45                         1/1     Running   0          3m25s
kube-scheduler-192.168.21.209            1/1     Running   0          3m35s

3.2 Deploy flannel

  • Download the flannel manifest (run on the master node)
# cd /opt/kubernetes/yaml/
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • Edit the flannel manifest (run on the master node)
Modify the Network and image fields:
Network: must match the --pod-network-cidr / podSubnet value given to kubeadm init;
image: a flannel image that is actually reachable, for example registry.cn-beijing.aliyuncs.com/mayaping/flannel:v0.12.0-amd64.

128       "Network": "10.2.0.0/16",

Note: the Network CIDR here must match the pod CIDR defined above; otherwise pods will be unable to communicate with pods on other hosts in the cluster. A non-interactive way to apply the edits is sketched below.
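
A sketch of applying both edits with sed (this assumes the downloaded manifest still carries flannel's default 10.244.0.0/16 network and the quay.io/coreos/flannel:v0.12.0-amd64 image reference; check your copy of kube-flannel.yml before running):

# cd /opt/kubernetes/yaml/
# sed -i 's#10.244.0.0/16#10.2.0.0/16#g' kube-flannel.yml
# sed -i 's#quay.io/coreos/flannel:v0.12.0-amd64#registry.cn-beijing.aliyuncs.com/mayaping/flannel:v0.12.0-amd64#g' kube-flannel.yml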

  • Apply the flannel manifest (run on the master node)
# kubectl apply -f kube-flannel.yml

With images pulled from a private registry, the master node finishes starting up in about one minute.

The master node has full cluster functionality, but by default it does not schedule regular pods (this restriction can be removed, though that is not recommended).

# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.21.209   Ready    master   13m   v1.19.0

# kubectl get pods -o wide -n kube-system
NAME                                     READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
coredns-f9fd979d6-j4cfb                  1/1     Running   0          14m     10.2.0.3         192.168.21.209   <none>           <none>
coredns-f9fd979d6-lvttk                  1/1     Running   0          14m     10.2.0.2         192.168.21.209   <none>           <none>
kube-apiserver-192.168.21.209            1/1     Running   0          14m     192.168.21.209   192.168.21.209   <none>           <none>
kube-controller-manager-192.168.21.209   1/1     Running   0          14m     192.168.21.209   192.168.21.209   <none>           <none>
kube-flannel-ds-s8t6t                    1/1     Running   0          2m13s   192.168.21.209   192.168.21.209   <none>           <none>
kube-proxy-9bz45                         1/1     Running   0          14m     192.168.21.209   192.168.21.209   <none>           <none>
kube-scheduler-192.168.21.209            1/1     Running   0          14m     192.168.21.209   192.168.21.209   <none>           <none>

Note: once flannel is running, the master node status changes to Ready and the coredns pods change to Running.

3.3 Add worker nodes

  • Install kubeadm and kubelet on the worker (run on the node)
# yum -y install kubeadm-1.19.0 kubelet-1.19.0 
# systemctl enable kubelet.service
  • Generate a join token (run on the master node)
# kubeadm token create --print-join-command
W0202 00:04:54.117133    1292 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 192.168.21.209:6443 --token 1wdqkt.v0fh8u0acqq0m92b     --discovery-token-ca-cert-hash sha256:50bf1c5cc4db5bdf91937f4acbc036ec7e365064825311752e72f50f234ec17a 
  • Join the node to the cluster (run on the node)
# kubeadm join 192.168.21.209:6443 --token 1wdqkt.v0fh8u0acqq0m92b --discovery-token-ca-cert-hash sha256:50bf1c5cc4db5bdf91937f4acbc036ec7e365064825311752e72f50f234ec17a --node-name 192.168.21.203

Note: using the node's own IP address as the node name is strongly recommended; it saves a lot of trouble later.
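
Back on the master, the new worker should appear and turn Ready within a minute or two, once its flannel pod is running:

# kubectl get nodes -o wide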

4. Harbor Deployment

4.1 Install docker-compose

  • Download docker-compose via a domestic (China) mirror
# curl -L https://get.daocloud.io/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
  • Make it executable
# chmod +x /usr/local/bin/docker-compose
  • Check the docker-compose version
# docker-compose --version
docker-compose version 1.22.0, build f46880fe

4.2 Configure Harbor

  • Download Harbor

The version used here is 2.0.1:

# cd /opt/tools/
# wget https://github.com/goharbor/harbor/releases/download/v2.0.1/harbor-offline-installer-v2.0.1.tgz
  • Extract the Harbor offline installer
# tar xf harbor-offline-installer-v2.0.1.tgz
  • Prepare the Harbor directories
# Move the harbor directory
mv /opt/tools/harbor /opt/docker/

# Copy the Harbor configuration template
cp /opt/docker/harbor/harbor.yml.tmpl  /opt/docker/harbor/harbor.yml

# Create the directory that will hold Harbor's persistent data
mkdir -p /opt/docker/harbor/data
  • Edit the Harbor configuration file

The main parameters to change in harbor.yml are:

# Must be an IP address or a domain name
5 hostname: baas-harbor.peogoo.com   

# HTTP settings
 8 http:
 9 # port for http, default is 80. If https enabled, this port will redirect to https port
 10   port: 80
 
# HTTPS settings (comment these out if HTTPS is not needed)
 12 # https related config
 13 # https:
 14 # https port for harbor, default is 443
 15 #  port: 443
 16 # The path of cert and key files for nginx
 17 #  certificate: /your/certificate/path
 18 #  private_key: /your/private/key/path

# Admin password
 34 harbor_admin_password: Harbor12345

# Database settings

 37 database:
 39   password: root123
 41   max_idle_conns: 50
 44   max_open_conns: 100
 
# Persistent data directory
 47 data_volume: /opt/docker/harbor/data

4.3 Install and start Harbor

  • Docker must already be running; then execute the install script:
# ./install.sh 

[Step 0]: checking if docker is installed ...

Note: docker version: 19.03.11

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.22.0

[Step 2]: loading Harbor images ...
  • After installation, check the running Docker containers:
# docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS                            PORTS                       NAMES
e0b9445e89c9        goharbor/harbor-jobservice:v2.0.1    "/harbor/entrypoint.…"   5 seconds ago       Up 4 seconds (health: starting)                               harbor-jobservice
e8a7a9659de9        goharbor/nginx-photon:v2.0.1         "nginx -g 'daemon of…"   5 seconds ago       Up 4 seconds (health: starting)   0.0.0.0:80->8080/tcp        nginx
8f6de08dac41        goharbor/harbor-core:v2.0.1          "/harbor/entrypoint.…"   6 seconds ago       Up 5 seconds (health: starting)                               harbor-core
6bf71d634d1f        goharbor/registry-photon:v2.0.1      "/home/harbor/entryp…"   7 seconds ago       Up 5 seconds (health: starting)   5000/tcp                    registry
7d249368c18e        goharbor/harbor-registryctl:v2.0.1   "/home/harbor/start.…"   7 seconds ago       Up 6 seconds (health: starting)                               registryctl
53a677135f83        goharbor/redis-photon:v2.0.1         "redis-server /etc/r…"   7 seconds ago       Up 6 seconds (health: starting)   6379/tcp                    redis
d94b3e718501        goharbor/harbor-db:v2.0.1            "/docker-entrypoint.…"   7 seconds ago       Up 5 seconds (health: starting)   5432/tcp                    harbor-db
5911494e5df4        goharbor/harbor-portal:v2.0.1        "nginx -g 'daemon of…"   7 seconds ago       Up 6 seconds (health: starting)   8080/tcp                    harbor-portal
c94a91def7be        goharbor/harbor-log:v2.0.1           "/bin/sh -c /usr/loc…"   7 seconds ago       Up 6 seconds (health: starting)   127.0.0.1:1514->10514/tcp   harbor-log

4.4 Access Harbor

  • On every server that needs to log in to Harbor, add the registry to Docker's /etc/docker/daemon.json
# cat /etc/docker/daemon.json 
    "insecure-registries": ["http://baas-harbor.peogoo.com"],
  • Reload and restart Docker:
systemctl daemon-reload
systemctl restart docker
  • Verify login from a server
# docker login http://baas-harbor.peogoo.com
Username: admin
Password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
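
After logging in, pushing a test image verifies the registry end to end. A sketch, assuming Harbor's built-in library project and any small image available locally (busybox is used here as an example):

# docker pull busybox:latest
# docker tag busybox:latest baas-harbor.peogoo.com/library/busybox:latest
# docker push baas-harbor.peogoo.com/library/busybox:latest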
  • Access the Harbor web UI
URL: http://baas-harbor.peogoo.com
Username: admin
Password: Harbor12345

4.5 Troubleshooting notes

If the installation fails with the error:

ERROR:root:Error: The protocol is https but attribute ssl_cert is not set.

The cause is that harbor.yml configures the HTTPS port and certificate paths by default.

The fix is to comment out all of those settings:

# https related config
# https:
  # # https port for harbor, default is 443
  # port: 443
  # # The path of cert and key files for nginx
  # certificate: /your/certificate/path
  # private_key: /your/private/key/path