Building a Highly Available Web Cluster Around k8s (Part 3)

Setting up the k8s operations environment

  1. Install the Ansible automation tool and use it to distribute Prometheus's node_exporter

1. Set up passwordless SSH
[root@nfs-ansible ~]# ssh-keygen     # press Enter at every prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:jRzzCic2+cSA3cywivxrz+OELCVi7J7QDvnX+fRm0Vk root@nfs-ansible
The key's randomart image is:
+---[RSA 2048]----+
|      .          |
|     o *         |
|    . + *        |
|.. . . = *    E  |
|.o+ o * S o. o   |
|o+ = o B .. o    |
|+.o +...+  .     |
|o+..o++. .o      |
| ooo.o+o.o.      |
+----[SHA256]-----+

[root@nfs-ansible ~]# cd .ssh/
[root@nfs-ansible .ssh]# ls
id_rsa  id_rsa.pub

[root@nfs-ansible .ssh]# ssh-copy-id -i  id_rsa.pub root@192.168.249.141
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "id_rsa.pub"
The authenticity of host '192.168.249.141 (192.168.249.141)' can't be established.
ECDSA key fingerprint is SHA256:wEEjFpLr7JNKpX3OA4T6zzrUbvu17JHrXvGInOcril8.
ECDSA key fingerprint is MD5:93:cb:aa:b7:2f:d0:5b:37:72:ef:40:43:d8:48:b1:20.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.249.141's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.249.141'"
and check to make sure that only the key(s) you wanted were added.

[root@nfs-ansible .ssh]# ssh root@192.168.249.141
Last login: Sun Sep 17 17:28:18 2023 from 192.168.249.1
[root@master ~]# exit
logout


Repeat this for every server that Ansible will manage
[root@nfs-ansible .ssh]# ssh-copy-id -i  id_rsa.pub root@192.168.249.142
[root@nfs-ansible .ssh]# ssh 'root@192.168.249.142'
Last login: Sun Sep 17 17:28:25 2023 from 192.168.249.1
[root@node-1 ~]# 


[root@nfs-ansible .ssh]# ssh-copy-id -i  id_rsa.pub root@192.168.249.143
[root@nfs-ansible .ssh]# ssh 'root@192.168.249.143'
Last login: Sun Sep 17 17:28:29 2023 from 192.168.249.1
[root@node-2 ~]# 
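Running ssh-copy-id by hand for each host gets repetitive as the inventory grows. A small loop can push the key everywhere (a sketch: the host list and key path are assumptions matching this article, and the install command is injectable so the loop can be dry-run without live hosts):

```shell
#!/bin/bash
# Distribute the control node's public key to every host Ansible will manage.
HOSTS=(192.168.249.141 192.168.249.142 192.168.249.143)

push_keys() {
    # $1: command used to install the key; defaults to ssh-copy-id,
    # pass 'echo' to dry-run the loop without contacting any host
    local cmd=${1:-ssh-copy-id}
    local h
    for h in "${HOSTS[@]}"; do
        "$cmd" -i ~/.ssh/id_rsa.pub "root@$h"
    done
}
```

push_keys still prompts for each root password once; something like sshpass could remove that prompt if needed.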


2. Install Ansible
[root@nfs-ansible .ssh]# yum install epel-release -y
[root@nfs-ansible .ssh]# yum  install ansible -y
[root@nfs-ansible .ssh]# cd /etc/ansible
[root@nfs-ansible ansible]# ls
ansible.cfg  hosts  roles

ansible.cfg  is Ansible's configuration file
hosts  defines the host inventory

[root@nfs-ansible ansible]# vim hosts    
## [webservers]
## alpha.example.org
## beta.example.org
## 192.168.1.100
## 192.168.1.110
[k8s]              # define a k8s group; anything you later want Ansible to manage can be added here (set up passwordless SSH to it first, which makes things easier)
192.168.249.141
192.168.249.142
192.168.249.143
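Before running playbooks it is worth double-checking who is actually in a group (`ansible k8s --list-hosts` does this online). As an offline alternative, here is a sketch that parses the simple INI-style hosts file directly; the function name is illustrative:

```shell
#!/bin/bash
# List the members of one group in an INI-style Ansible inventory:
# print lines after "[group]" until the next "[...]" header,
# skipping blank lines and comments.
list_group() {
    local group=$1 inventory=$2
    awk -v g="[$group]" '
        $0 == g      { in_group = 1; next }
        /^\[/        { in_group = 0 }
        in_group && NF && $0 !~ /^#/ { print }
    ' "$inventory"
}
```

For the inventory above, `list_group k8s /etc/ansible/hosts` would print the three node IPs.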

[root@nfs-ansible ~]# ls          # the packages needed for the Prometheus setup
anaconda-ks.cfg
grafana-enterprise-9.5.1-1.x86_64.rpm
node_exporter-1.5.0.linux-amd64.tar.gz
prometheus-2.44.0-rc.1.linux-amd64.tar.gz

3. Use Ansible to push the node_exporter-1.5.0.linux-amd64.tar.gz archive to every server in the k8s group, so that Prometheus can monitor the k8s cluster later
[root@nfs-ansible ~]# vim exporter.yaml 
- hosts: k8s
  remote_user: root
  tasks:
  - name: mkdir /exporter
    file:
      path: "/exporter"
      state: directory
      owner: root
      group: root
      mode: '0755'
  - name: tar node_exporter.tar.gz
    unarchive:
      src: /root/node_exporter-1.5.0.linux-amd64.tar.gz
      dest: /exporter
      copy: yes

[root@nfs-ansible ~]# ansible-playbook --syntax-check  exporter.yaml   

playbook: exporter.yaml
[root@nfs-ansible ~]# ansible-playbook exporter.yaml 

PLAY [k8s] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [192.168.249.142]
ok: [192.168.249.141]
ok: [192.168.249.143]

TASK [mkdir /exporter] *********************************************************
changed: [192.168.249.141]
changed: [192.168.249.142]
changed: [192.168.249.143]

TASK [tar node_exporter.tar.gz] ************************************************
changed: [192.168.249.143]
changed: [192.168.249.141]
changed: [192.168.249.142]

PLAY RECAP *********************************************************************
192.168.249.141            : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.249.142            : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.249.143            : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   


[root@master ~]# cd /exporter          # verify on one of the managed nodes
[root@master exporter]# ls
node_exporter-1.5.0.linux-amd64




  

  2. Install Prometheus to monitor the k8s cluster

[root@nfs-ansible ~]# mkdir /prom
[root@nfs-ansible ~]# ls
anaconda-ks.cfg
exporter.yaml
grafana-enterprise-9.5.1-1.x86_64.rpm
node_exporter-1.5.0.linux-amd64
node_exporter-1.5.0.linux-amd64.tar.gz
prometheus-2.44.0-rc.1.linux-amd64.tar.gz
[root@nfs-ansible ~]# mv prometheus-2.44.0-rc.1.linux-amd64.tar.gz /prom
[root@nfs-ansible ~]# cd /prom
[root@nfs-ansible prom]# tar xf prometheus-2.44.0-rc.1.linux-amd64.tar.gz 
[root@nfs-ansible prom]# ls
prometheus-2.44.0-rc.1.linux-amd64
prometheus-2.44.0-rc.1.linux-amd64.tar.gz
[root@nfs-ansible prom]# mv prometheus-2.44.0-rc.1.linux-amd64 prometheus
[root@nfs-ansible prom]# cd prometheus
[root@nfs-ansible prometheus]# PATH=/prom/prometheus:$PATH
[root@nfs-ansible prometheus]# vim /root/.bashrc 
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi
PATH=/prom/prometheus:$PATH   # added


[root@nfs-ansible prometheus]# vim /usr/lib/systemd/system/prometheus.service 
[Unit]
Description=prometheus
[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target


[root@nfs-ansible prometheus]# systemctl daemon-reload   # reload systemd so it picks up the new unit

[root@nfs-ansible prometheus]# service prometheus start
Redirecting to /bin/systemctl start prometheus.service
[root@nfs-ansible prometheus]# ps aux|grep prom
root       1586  0.3  2.2 796664 42540 ?        Ssl  19:23   0:00 /prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
root       1595  0.0  0.0 112824   976 pts/0    R+   19:23   0:00 grep --color=auto prom


[root@nfs-ansible prometheus]# systemctl enable  prometheus
Created symlink from /etc/systemd/system/multi-user.target.wants/prometheus.service to /usr/lib/systemd/system/prometheus.service.


http://192.168.249.144:9090/   # visit port 9090 on your Prometheus server; the web UI should load


Repeat the following on every node you want to monitor; I used Xshell to send the commands to all three machines at once
[root@master ~]# cd /exporter/
[root@master exporter]# ls
node_exporter-1.5.0.linux-amd64
[root@master exporter]# mv node_exporter-1.5.0.linux-amd64/ node_exporter
[root@master exporter]# PATH=/exporter/node_exporter/:$PATH
[root@master exporter]# vim /root/.bashrc
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

PATH=/exporter/node_exporter/:$PATH

[root@master exporter]# cd node_exporter/
[root@master node_exporter]# ls
LICENSE  node_exporter  NOTICE

[root@master node_exporter]# nohup node_exporter --web.listen-address 0.0.0.0:8090  &
[1] 21618
[root@master node_exporter]# nohup: ignoring input and appending output to 'nohup.out'

[root@master node_exporter]# ps aux|grep node
root      21618  0.0  0.3 725168  6756 pts/0    Sl   19:34   0:00 node_exporter --web.listen-address 0.0.0.0:8090
Seeing this line means the exporter is running

http://192.168.249.141:8090/metrics
http://192.168.249.142:8090/metrics
http://192.168.249.143:8090/metrics
All three URLs should return a large amount of metrics data
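Checking the three endpoints by hand works, but a loop is less error-prone. A sketch follows (the node list matches this article; the probe command is injectable so the loop can be exercised without a live cluster, and with real hosts the default curl probe is used):

```shell
#!/bin/bash
# Probe each node_exporter /metrics endpoint and report up/down.
NODES=(192.168.249.141 192.168.249.142 192.168.249.143)

probe_all() {
    # $1: probe command; defaults to a silent curl that fails on HTTP errors.
    # Pass 'true' or 'false' to dry-run the loop.
    local probe=${1:-"curl -sf -o /dev/null"}
    local n
    for n in "${NODES[@]}"; do
        if $probe "http://$n:8090/metrics"; then
            echo "$n up"
        else
            echo "$n DOWN"
        fi
    done
}
```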


Set node_exporter to start on boot
[root@master node_exporter]# vim /etc/rc.local 
touch /var/lock/subsys/local

nohup /exporter/node_exporter/node_exporter --web.listen-address 0.0.0.0:8090 &

[root@master node_exporter]# chmod +x /etc/rc.d/rc.local
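rc.local works, but since prometheus.service above is already managed by systemd, node_exporter can be too. A sketch that writes an equivalent unit (for safety it defaults to a local file here; on a real node UNIT would be /usr/lib/systemd/system/node_exporter.service):

```shell
#!/bin/bash
# Write a systemd unit for node_exporter, mirroring prometheus.service.
UNIT=${UNIT:-./node_exporter.service}   # real path: /usr/lib/systemd/system/node_exporter.service
cat > "$UNIT" <<'EOF'
[Unit]
Description=node_exporter
After=network.target

[Service]
ExecStart=/exporter/node_exporter/node_exporter --web.listen-address 0.0.0.0:8090
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# Then: systemctl daemon-reload && systemctl enable --now node_exporter
```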


On the Prometheus server, add scrape configuration for the node servers so their metrics are collected into the time-series database


[root@nfs-ansible prometheus]# ls
console_libraries  prometheus
consoles           prometheus.yml
LICENSE            promtool
NOTICE
[root@nfs-ansible prometheus]# vim prometheus.yml 
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "master"     # add the node servers
    static_configs:
      - targets: ["192.168.249.141:8090"]    # the node's IP and the listen port you configured

  - job_name: "node-1"
    static_configs:
      - targets: ["192.168.249.142:8090"]

  - job_name: "node-2"
    static_configs:
      - targets: ["192.168.249.143:8090"]
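Each job above follows the same three-line shape, so new nodes can be appended with a helper instead of hand-editing the file (a sketch; the function name is illustrative and the indentation matches the scrape_configs block above):

```shell
#!/bin/bash
# Append a static scrape job to a Prometheus config file.
add_scrape_job() {
    local job=$1 target=$2 config=$3
    cat >> "$config" <<EOF

  - job_name: "$job"
    static_configs:
      - targets: ["$target"]
EOF
}
```

After editing, the file can be validated with promtool (it ships in the Prometheus tarball): promtool check config /prom/prometheus/prometheus.yml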


[root@nfs-ansible prometheus]# service  prometheus restart
Redirecting to /bin/systemctl restart prometheus.service

http://192.168.249.144:9090/targets?search=  # check that the targets were added successfully

Install Grafana:  wget https://dl.grafana.com/enterprise/release/grafana-enterprise-9.5.1-1.x86_64.rpm

[root@nfs-ansible prom]# cd ~
[root@nfs-ansible ~]# ls
anaconda-ks.cfg
exporter.yaml
grafana-enterprise-9.5.1-1.x86_64.rpm
node_exporter-1.5.0.linux-amd64
node_exporter-1.5.0.linux-amd64.tar.gz
[root@nfs-ansible ~]#  yum install grafana-enterprise-9.5.1-1.x86_64.rpm  -y
[root@nfs-ansible ~]# service grafana-server start   # start it, then enable it at boot
Starting grafana-server (via systemctl): [  OK  ]
[root@nfs-ansible ~]# systemctl enable grafana-server  
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.

[root@nfs-ansible ~]# ps aux|grep grafana  # check that it started
grafana    1869  7.2  5.3 1075408 99116 ?       Ssl  19:49   0:02 /usr/share/grafana/bin/grafana server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid --packaging=rpm cfg:default.paths.logs=/var/log/grafana cfg:default.paths.data=/var/lib/grafana cfg:default.paths.plugins=/var/lib/grafana/plugins cfg:default.paths.provisioning=/etc/grafana/provisioning
root       1899  0.0  0.0 112824   980 pts/0    R+   19:50   0:00 grep --color=auto grafana

http://192.168.249.144:3000/login
The default credentials are:
username: admin
password: admin

After installation you can switch the UI language to Chinese: Home -> Administration -> Default preferences -> Language -> Save


These settings can also be changed from the profile menu in the top-right corner.
1. First configure the Prometheus data source:
	Administration -> Data sources -> Add new data source -> Prometheus

2. Import a Grafana dashboard template: Dashboards -> Import dashboard
8919  -> recommended, since its labels are in Chinese
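The data-source step above can also be done without the UI: Grafana loads data sources from provisioning files at startup (a sketch using Grafana's provisioning format; the URL matches the Prometheus server in this article, and the provisioning directory is the rpm default visible in the ps output earlier):

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://192.168.249.144:9090
    isDefault: true
```

A `service grafana-server restart` picks the file up.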

Screenshots of the result:

All three k8s nodes are successfully monitored.

Prometheus data successfully displayed via template 8919.

  3. Install JumpServer to harden access to the k8s cluster

      JumpServer official open-source community - FIT2CLOUD: https://community.fit2cloud.com/#/products/jumpserver/getstarted

Install JumpServer in just two steps:

  1. Prepare a 64-bit Linux host with at least 2 cores and 4 GB of RAM and with internet access;
  2. As root, run the following command to install JumpServer in one step.

curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash

1. Start it with the following commands, then access it:
cd /opt/jumpserver-installer-v3.6.4
./jmsctl.sh start

2. Other management commands
./jmsctl.sh stop
./jmsctl.sh restart
./jmsctl.sh backup
./jmsctl.sh upgrade
For more commands, run ./jmsctl.sh --help

3. Web access
http://192.168.249.146:80
Default user: admin  Default password: admin

4. SSH/SFTP access
ssh -p2222 admin@192.168.249.146
sftp -P2222 admin@192.168.249.146

5. More information
Official website: https://www.jumpserver.org/
Documentation: https://docs.jumpserver.org/

Restricting the servers so that only the bastion can log in is implemented with TCP wrappers plus firewall rules; JumpServer itself just strengthens management and auditing

[root@nfs-ansible ~]# vim /etc/hosts.allow
sshd:192.168.249.1, 192.168.249.146
Strictly, only 192.168.249.146 is required; 192.168.249.1 (the VMnet8 virtual NIC) is also added so the host PC can still reach the VMs directly, which is convenient.

[root@nfs-ansible ~]# vim /etc/hosts.deny
sshd:ALL

Add the rules above to every internal server except the bastion.
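Since every internal server needs identical rules, the Ansible setup from section 1 can distribute them (a sketch playbook; it assumes the [k8s] group defined earlier, so extend the group or the inventory to cover the other internal servers):

```yaml
- hosts: k8s
  remote_user: root
  tasks:
  - name: allow sshd only from the bastion and the VMnet8 host NIC
    copy:
      dest: /etc/hosts.allow
      content: "sshd:192.168.249.1, 192.168.249.146\n"
  - name: deny sshd from everyone else
    copy:
      dest: /etc/hosts.deny
      content: "sshd:ALL\n"
```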


[root@iptables_server ~]# ssh gala@192.168.249.144     # rejected: this host is not in hosts.allow
ssh_exchange_identification: read: Connection reset by peer


[root@jumpsever jumpserver-installer-v3.6.4]# ssh gala@192.168.249.144     # allowed: the bastion (192.168.249.146) is whitelisted
[gala@nfs-ansible ~]$ 

4. Write a script to configure the iptables firewall server

Add a bridged NIC to the iptables server.
Run ip add to see the interface names.
[root@iptables_server ~]# cd /etc/sysconfig/network-scripts/
[root@iptables_server network-scripts]# ls
[root@iptables_server network-scripts]# cp ifcfg-ens33 ifcfg-ens36
[root@iptables_server network-scripts]# vim ifcfg-ens36
BOOTPROTO="none"
NAME="ens36"
DEVICE="ens36"
ONBOOT="yes"
IPADDR=192.168.1.144
PREFIX=24
GATEWAY=192.168.1.1
DNS1=114.114.114.114
[root@iptables_server network-scripts]# vim ifcfg-ens33
BOOTPROTO="none"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.249.145
PREFIX=24
[root@iptables_server network-scripts]# service network restart
Restarting network (via systemctl):                          [  OK  ]
[root@iptables_server network-scripts]# ping www.baidu.com
PING www.a.shifen.com (14.119.104.254) 56(84) bytes of data.
64 bytes from 14.119.104.254 (14.119.104.254): icmp_seq=1 ttl=54 time=25.4 ms
[root@iptables_server network-scripts]# ping 192.168.249.144
PING 192.168.249.144 (192.168.249.144) 56(84) bytes of data.
64 bytes from 192.168.249.144: icmp_seq=1 ttl=64 time=0.496 ms


Write the SNAT and DNAT script
[root@iptables_server /]# vim nat.sh
#!/bin/bash

# Enable routing
echo 1 >/proc/sys/net/ipv4/ip_forward

# stop firewalld
service firewalld stop
systemctl disable firewalld

# clear iptables 
iptables -F
iptables -t nat -F

# enable snat
iptables -t nat -A POSTROUTING -s 192.168.249.0/24 -o ens36 -j MASQUERADE


# enable dnat
iptables -t nat -A PREROUTING -d 192.168.1.144 -i ens36 -p tcp --dport 8001 -j DNAT --to-destination 192.168.249.142:80
iptables -t nat -A PREROUTING -d 192.168.1.144 -i ens36 -p tcp --dport 8002 -j DNAT --to-destination 192.168.249.143:80

# enable ssh
iptables -t nat -A PREROUTING -d 192.168.1.144 -i ens36 -p tcp --dport 2223 -j DNAT --to-destination 192.168.249.146:22

[root@iptables_server /]# bash nat.sh
Redirecting to /bin/systemctl stop firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@iptables_server /]# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DNAT       tcp  --  0.0.0.0/0            192.168.1.144        tcp dpt:8001 to:192.168.249.142:80
DNAT       tcp  --  0.0.0.0/0            192.168.1.144        tcp dpt:8002 to:192.168.249.143:80
DNAT       tcp  --  0.0.0.0/0            192.168.1.144        tcp dpt:2223 to:192.168.249.146:22

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  192.168.249.0/24     0.0.0.0/0        
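The three DNAT rules above differ only in port and destination, so they can be generated from a map instead of hand-written line by line (a sketch; the map mirrors nat.sh and the function name is illustrative):

```shell
#!/bin/bash
# Emit the DNAT rules from a port -> destination map (bash 4 associative array).
declare -A PORT_MAP=(
    [8001]="192.168.249.142:80"
    [8002]="192.168.249.143:80"
    [2223]="192.168.249.146:22"
)

gen_dnat_rules() {
    local p
    for p in "${!PORT_MAP[@]}"; do
        echo "iptables -t nat -A PREROUTING -d 192.168.1.144 -i ens36 -p tcp --dport $p -j DNAT --to-destination ${PORT_MAP[$p]}"
    done
}
```

gen_dnat_rules | bash would apply the rules; printing them first makes review easy.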


[root@jumpsever network-scripts]# vim ifcfg-ens33 
BOOTPROTO="none"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.249.146
PREFIX=24
GATEWAY=192.168.249.145
DNS1=114.114.114.114
                      
[root@jumpsever network-scripts]# service network restart
Restarting network (via systemctl):                        [  OK  ]

[root@jumpsever network-scripts]# ping www.baidu.com
PING www.a.shifen.com (14.119.104.254) 56(84) bytes of data.
64 bytes from 14.119.104.254 (14.119.104.254): icmp_seq=1 ttl=53 time=27.7 ms
64 bytes from 14.119.104.254 (14.119.104.254): icmp_seq=2 ttl=53 time=28.7 ms

Then connect to the bastion from Xshell, using the iptables server's WAN IP and port 2223.
Connection succeeds.

Set the script to run at boot:
[root@iptables_server /]# vim /etc/rc.local
bash /nat.sh
[root@iptables_server /]# chmod +x /etc/rc.d/rc.local 

5. Set up Harbor to manage images

Download harbor-offline-installer-v2.8.3.tgz from the internet beforehand
[root@node-1 ~]# mkdir /harbor && cd /harbor
[root@node-1 harbor]# tar xf ~/harbor-offline-installer-v2.8.3.tgz  
[root@node-1 harbor]# ls
harbor
[root@node-1 harbor]# cd harbor/
[root@node-1 harbor]# ls
common.sh  harbor.v2.8.3.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@node-1 harbor]# cp harbor.yml.tmpl harbor.yml
[root@node-1 harbor]# vim harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.249.142

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 8089

# https related config
#https:
  # https port for harbor, default is 443
 # port: 443
  # The path of cert and key files for nginx

[root@node-1 harbor]# ./install.sh 
[root@node-1 harbor]# docker compose ps
NAME                IMAGE                                COMMAND                                SERVICE       CREATED          STATUS                    PORTS
harbor-core         goharbor/harbor-core:v2.8.3          "/harbor/entrypoint.sh"                core          36 seconds ago   Up 35 seconds (healthy)   
harbor-db           goharbor/harbor-db:v2.8.3            "/docker-entrypoint.sh  13"            postgresql    37 seconds ago   Up 35 seconds (healthy)   
harbor-jobservice   goharbor/harbor-jobservice:v2.8.3    "/harbor/entrypoint.sh"                jobservice    36 seconds ago   Up 34 seconds (healthy)   
harbor-log          goharbor/harbor-log:v2.8.3           "/bin/sh -c /usr/local/bin/start.sh"   log           37 seconds ago   Up 36 seconds (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       goharbor/harbor-portal:v2.8.3        "nginx -g 'daemon off;'"               portal        37 seconds ago   Up 35 seconds (healthy)   
nginx               goharbor/nginx-photon:v2.8.3         "nginx -g 'daemon off;'"               proxy         36 seconds ago   Up 34 seconds (healthy)   0.0.0.0:8089->8080/tcp, :::8089->8080/tcp
redis               goharbor/redis-photon:v2.8.3         "redis-server /etc/redis.conf"         redis         37 seconds ago   Up 35 seconds (healthy)   
registry            goharbor/registry-photon:v2.8.3      "/home/harbor/entrypoint.sh"           registry      37 seconds ago   Up 35 seconds (healthy)   
registryctl         goharbor/harbor-registryctl:v2.8.3   "/home/harbor/start.sh"                registryctl   37 seconds ago   Up 35 seconds (healthy)   

http://192.168.249.142:8089/account/sign-in?redirect_url=%2Fharbor%2Fprojects
Account and password:
admin
Harbor12345

Let the other k8s machines use the Harbor registry
[root@master ~]# vim /etc/docker/daemon.json
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
 "insecure-registries": ["192.168.249.142:8089"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}             
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker    # or just reboot: k8s runs a lot of containers and restarting docker is disruptive, so it is best to install Harbor right after the environment is first built

[root@master ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
mynginx                                                           1.0        462596dea237   5 days ago      141MB

A user was created in Harbor beforehand
[root@master ~]# docker login 192.168.249.142:8089
Username: ga'la
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Create the gala project as the gala user, then tag the image accordingly before it can be pushed
[root@master ~]# docker tag mynginx:1.0 192.168.249.142:8089/gala/mynginx:1.0
[root@master ~]# docker push 192.168.249.142:8089/gala/mynginx:1.0
The push refers to repository [192.168.249.142:8089/gala/mynginx]
4a61cec06cca: Pushed 
d874fd2bc83b: Pushed 
32ce5f6a5106: Pushed 
f1db227348d0: Pushed 
b8d6e692a25e: Pushed 
e379e8aedd4d: Pushed 
2edcec3590a4: Pushed 
1.0: digest: sha256:b57010b822bf25996280ac0fb84a567c920f21485320ee54e41d48bcc7aab784 size: 1778
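Tagging and pushing each image by hand scales poorly once there are several. A small helper covers it (a sketch: the registry address and project come from this article, and the docker command is injectable so the function can be dry-run):

```shell
#!/bin/bash
# Tag a local image into the Harbor project and push it.
REGISTRY=192.168.249.142:8089
PROJECT=gala

push_image() {
    # $1: local image:tag   $2: runner (defaults to docker; pass 'echo' to dry-run)
    local image=$1 runner=${2:-docker}
    "$runner" tag "$image" "$REGISTRY/$PROJECT/$image"
    "$runner" push "$REGISTRY/$PROJECT/$image"
}
```

For example, `push_image mynginx:1.0` reproduces the tag and push commands shown above.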


[root@master ~]# docker rmi 192.168.249.142:8089/gala/mynginx:1.0
Untagged: 192.168.249.142:8089/gala/mynginx:1.0
Untagged: 192.168.249.142:8089/gala/mynginx@sha256:b57010b822bf25996280ac0fb84a567c920f21485320ee54e41d48bcc7aab784

[root@master ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
mynginx                                                           1.0        462596dea237   5 days ago      141MB
kubernetesui/metrics-scraper                                      v1.0.8     115053965e86   15 months ago   43.8MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.6    9a1ebfd8124d   2 years ago     118MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.6    b05d611c1af9   2 years ago     122MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.6    560dd11d4550   2 years ago     116MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.6    b93ab2ec4475   2 years ago     47.3MB
calico/pod2daemon-flexvol                                         v3.18.0    2a22066e9588   2 years ago     21.7MB
calico/node                                                       v3.18.0    5a7c4970fbc2   2 years ago     172MB
calico/cni                                                        v3.18.0    727de170e4ce   2 years ago     131MB
calico/kube-controllers                                           v3.18.0    9a154323fbf7   2 years ago     53.4MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.0    10cc881966cf   2 years ago     118MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.0    ca9843d3b545   2 years ago     122MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.0    b9fa1895dcaa   2 years ago     116MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.0    3138b6e3d471   2 years ago     46.4MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   3 years ago     253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   3 years ago     45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   3 years ago     683kB

[root@master ~]# docker pull 192.168.249.142:8089/gala/mynginx:1.0
1.0: Pulling from gala/mynginx
Digest: sha256:b57010b822bf25996280ac0fb84a567c920f21485320ee54e41d48bcc7aab784
Status: Downloaded newer image for 192.168.249.142:8089/gala/mynginx:1.0
192.168.249.142:8089/gala/mynginx:1.0
[root@master ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED         SIZE
192.168.249.142:8089/gala/mynginx                                 1.0        462596dea237   5 days ago      141MB
