Docker Operations Notes

I. Installing Docker

CentOS 7
Install from the official Docker repository; steps:

1. Remove old versions

yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine

2. Install dependencies

yum install -y yum-utils device-mapper-persistent-data lvm2

3. Add the repository

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

4. Disable the edge channel

yum-config-manager --disable docker-ce-edge

5. Install the CE version

yum install docker-ce

6. Edit the service unit to use a domestic registry mirror (e.g. DaoCloud or Aliyun)

vim /usr/lib/systemd/system/docker.service

Modify the following:

#ExecStart=/usr/bin/dockerd
ExecStart=/usr/bin/dockerd -H 0.0.0.0:2375 \
          -H unix:///run/docker.sock \
          --registry-mirror=http://ec41f627.m.daocloud.io
Tip: if startup fails with -H fd://, use -H unix:///var/run/docker.sock instead (systemd socket usage)

7. Reload the unit files

systemctl daemon-reload

8. Start the Docker service

systemctl restart docker
## Tip: for an offline install, download the packages and install them with rpm -ivh docker-ce*
## Note: CentOS 6 does not use systemd for service management, so the required changes differ

Installing from the EPEL repository (an older version than the method above, but relatively stable):
yum upgrade device-mapper-libs
yum install epel-release
yum install docker-io -y

Ubuntu: similar to CentOS; see the official documentation
Setting the registry mirror:
/etc/systemd/system/multi-user.target.wants/docker.service
--registry-mirror=https://gk6k4x63.mirror.aliyuncs.com
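On newer Docker versions the mirror can also be set in /etc/docker/daemon.json instead of editing unit files, which survives package upgrades. A sketch (writing to /tmp here so it runs without root; on a real host the path is /etc/docker/daemon.json, with the mirror URL taken from these notes):

```shell
# Demo path; use /etc/docker/daemon.json on a real host.
conf=/tmp/daemon.json
cat > "$conf" <<'EOF'
{
  "registry-mirrors": ["https://gk6k4x63.mirror.aliyuncs.com"]
}
EOF
cat "$conf"
# After writing the real file: systemctl daemon-reload && systemctl restart docker
```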

II. Running Docker without sudo
(To avoid typing sudo before every docker command, create a docker group and add the relevant users to it. When the Docker daemon starts, it makes its socket readable and writable by the docker group, so users in that group can run docker commands directly.)
To let a non-root user work without sudo:
1. Create the docker group

# groupadd docker

2. Add the user to the docker group

# usermod -aG docker shaojia
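The new group only takes effect after the user logs out and back in (or runs newgrp docker). Membership can be checked with id -nG; simulated below with a fixed group list so the check runs anywhere:

```shell
# Stand-in for: groups=$(id -nG shaojia)
groups="wheel docker users"
# Print one group per line and look for an exact "docker" match.
if printf '%s\n' $groups | grep -qx docker; then
  echo "in docker group"
fi
```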

III. Basic Docker operations
Service management (CentOS 7 shown; CentOS 6 does not use systemd, use the service command instead)

systemctl start docker      start the Docker service
systemctl stop docker       stop the Docker service
systemctl disable docker    disable start on boot
systemctl enable docker     enable start on boot
systemctl daemon-reload     reload unit files after editing them

Basic container and image commands

0. Show Docker system information

docker info

1. Pull an image

docker pull ubuntu:14.04

2. List images

docker images

3. Run an image in the foreground (use the -it flags)

sudo docker run -t -i ubuntu:14.04 /bin/bash
Use --name to set the container's name (tip: an image is like an OS ISO; a container is like the running operating system)
sudo docker run --name my_container_name -t -i ubuntu:14.04 /bin/bash

4. List containers

docker ps     list only running containers
docker ps -a  list all containers that have been created
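For scripting, docker ps also takes -q to print only container IDs (the cleanup commands later in these notes rely on it). The parsing it replaces looks roughly like this, shown on simulated output:

```shell
# Simulated `docker ps -a` output (real output is column-aligned like this):
ps_out='CONTAINER ID   IMAGE          STATUS
3f4e8a1b2c3d   ubuntu:14.04   Up 2 minutes
9a8b7c6d5e4f   nginx          Exited (0)'
# Equivalent of `docker ps -a -q`: skip the header row, keep column 1.
ids=$(printf '%s\n' "$ps_out" | awk 'NR > 1 {print $1}')
echo "$ids"
```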

5. Attach to a container running in the background

docker attach CONTAINER_ID or NAME

6. Remove a container

docker rm CONTAINER_ID or NAME

7. Remove an image

docker rmi IMAGE_ID or NAME

8. Run a container as a daemon (use the -d flag)

docker run --name daemon_ubuntu -d daocloud.io/library/ubuntu /bin/bash -c "while true; do echo hello world; sleep 1; done"

9. View container logs

docker logs daemon_ubuntu
docker logs -f daemon_ubuntu

10. View processes running in a container

docker top daemon_ubuntu

11. Stop a container

docker stop CONTAINER_ID or NAME
docker kill CONTAINER_ID or NAME

12. Automatic container restart

--restart=always, --restart=on-failure:5 (retry starting up to 5 times)

13. Get detailed container information (network settings, state, etc.)

docker inspect daemon_ubuntu
docker inspect --format='{{ .State.Running}}' daemon_ubuntu
docker inspect --format='{{.NetworkSettings.IPAddress}}' daemon_ubuntu
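--format applies a Go template to the JSON that docker inspect returns. The sketch below shows what the IPAddress and Running templates pull out, using a simulated fragment of inspect output and sed in place of the template engine:

```shell
# Simulated slice of `docker inspect` JSON (field names match the templates above):
json='{"State":{"Running":true},"NetworkSettings":{"IPAddress":"172.17.0.2"}}'
# Extract the boolean after "Running": and the quoted value after "IPAddress":
running=$(printf '%s' "$json" | sed -n 's/.*"Running":\([a-z]*\).*/\1/p')
ip=$(printf '%s' "$json" | sed -n 's/.*"IPAddress":"\([^"]*\)".*/\1/p')
echo "$running $ip"   # true 172.17.0.2
```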

14. For more options such as -p port mapping and -v volume mounts, see the official documentation

Removing Docker containers and images
1. Stop all containers first; images in use by containers cannot be removed:

docker stop $(docker ps -a -q)

To remove all containers as well, also run:

docker rm $(docker ps -a -q)

2. See what images are present

docker images

3. Remove an image, identified by its image ID

docker rmi <image id>

To remove untagged images (those whose repository and tag show <none>), use

docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
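The quoting around the awk program matters: inside double quotes the shell expands $3 before awk ever sees it, so the program collapses to {print } and prints whole lines; single quotes pass $3 through to awk. A simulation with fake `docker images` output:

```shell
# Fake `docker images` rows standing in for real output (whitespace-separated,
# image ID in column 3):
sample='<none> <none> abc123
nginx latest def456'
# Single quotes: awk receives the program text {print $3} intact.
right=$(printf '%s\n' "$sample" | grep '^<none>' | awk '{print $3}')
# Double quotes: the shell substitutes $3 (unset here) with nothing first.
wrong=$(printf '%s\n' "$sample" | grep '^<none>' | awk "{print $3}")
echo "$right"   # abc123
echo "$wrong"   # <none> <none> abc123  (the whole line)
```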

To remove all images:

docker rmi $(docker images -q)

Installing MySQL with Docker:

docker pull mysql:5.7.6
docker images |grep mysql
docker ps -a
docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=529529 -d mysql:5.7.6
docker exec -it  mysql bash 
mysql -uroot -p -h localhost
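mysqld needs a few seconds to initialize after docker run, so an immediate connection attempt may be refused. A retry loop is the usual fix; the real probe would be something like mysqladmin ping inside the container, replaced here with a counter so the sketch runs anywhere:

```shell
# Generic retry helper: run a command until it succeeds, up to $1 attempts.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    # sleep 2   # uncomment when polling a real service
  done
}
# Stand-in probe that succeeds on the third call (a real one might be
# `docker exec mysql mysqladmin ping -p529529`):
attempts=0
probe() { attempts=$((attempts + 1)); [ "$attempts" -ge 3 ]; }
retry 10 probe && echo "mysql ready"
```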

Installing a JDK with Docker:

docker search jdk
docker pull openjdk

Retag the image

docker tag {imageid} {name}:{tag}
docker run -d -it --name myopenjdk openjdk /bin/bash

Enter the container to verify:

docker exec -it myopenjdk /bin/bash

Installing nginx with Docker:

docker pull nginx
docker run --name justin-nginx -p 8081:80 -d nginx
docker exec -it justin-nginx /bin/bash

Deploying nginx (config path inside the container: /etc/nginx/):
First, create an nginx directory tree to hold the related files

mkdir -p /data/soft/nginx/web-ui /data/soft/nginx/logs /data/soft/nginx/conf
docker cp {container_id}:/etc/nginx/nginx.conf /data/soft/nginx/conf

Deploy command:

docker run -d -p 8082:80 --name justin-nginx-web -v /data/soft/nginx/web-ui/dist:/usr/share/nginx/html -v /data/soft/nginx/conf/nginx.conf:/etc/nginx/nginx.conf -v /data/soft/nginx/logs:/var/log/nginx nginx

Running an executable jar

docker run -dit --restart=always -p 9090:8402 -v /data/PlanResource/WizData-Portal/current:/data/PlanResource/WizData-Portal/current --name wizdata-portal openjdk:1.8.0 java -Duser.timezone=GMT+08 -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=128m -Xms512m -Xmx512m -Xmn256m -Xss256k -XX:SurvivorRatio=8 -XX:+UseConcMarkSweepGC -jar /data/PlanResource/WizData-Portal/current/wizdata-portal-show.jar

View the logs

docker logs -f -t --tail=100 {container_id}

IV. Writing a Dockerfile
Example: JDK 8 + Tomcat 8

FROM docker.io/ubuntu:14.04
MAINTAINER shaojia
ENV JRE_HOME=/opt/lastest
ENV CLASSPATH=$JRE_HOME/lib
ENV CATALINA_HOME=/opt/apache-tomcat-8.0.46
ENV PATH=$JRE_HOME/bin:$CATALINA_HOME/bin:$PATH
ADD jre-8u131-linux-x64.tar.gz /opt/
ADD apache-tomcat-8.0.46.tar.gz /opt/
RUN ln -s /opt/jre1.8.0_131 /opt/lastest
RUN groupadd -g 11024 tomcat
RUN useradd -g 11024 -u 10024 -m -s /bin/bash tomcat
RUN chown -R tomcat:tomcat /opt/apache-tomcat-8.0.46
EXPOSE 8080
USER tomcat
#ENTRYPOINT /opt/apache-tomcat-8.0.46/bin/startup.sh
ENTRYPOINT [ "/opt/apache-tomcat-8.0.46/bin/catalina.sh", "run" ]
# start tomcat on startup
#CMD [ "/opt/apache-tomcat-8.0.46/bin/catalina.sh", "run" ]
# start tomcat on startup 2
#RUN /opt/apache-tomcat-8.0.46/bin/catalina.sh start
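The exec-form ENTRYPOINT (JSON array) is used above deliberately: the shell form wraps the command in sh -c, so the shell becomes PID 1 and signals from docker stop never reach catalina.sh, and the command line is re-parsed so quoting is lost. That quoting difference can be simulated in plain shell:

```shell
# Exec form: each array element is one argument, passed verbatim.
exec_out=$(printf '[%s]' "hello world")
# Shell form: the line is re-parsed by sh, so the quoting is lost
# and printf sees two arguments (the format is reused for each).
shell_out=$(sh -c "printf '[%s]' hello world")
echo "$exec_out"    # [hello world]  -- one argument
echo "$shell_out"   # [hello][world] -- two arguments
```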

V. docker-compose syntax
The Dockerfile example above, rewritten for compose:

version: "2"
services:
  tomcat8:
    image: docker.io/ubuntu:14.04
    environment:
      JRE_HOME: /opt/lastest
      CLASSPATH: /opt/lastest/lib    # literal paths: compose would interpolate $JRE_HOME from the host environment
      PATH: /opt/lastest/bin:/opt/apache-tomcat-8.0.46/bin:$PATH
      CATALINA_HOME: /opt/apache-tomcat-8.0.46
  mysql5.7:
    image: daocloud.io/library/mysql:5.7.6
    environment:
      MYSQL_ROOT_PASSWORD: 123456

VI. Swarm cluster setup and use

Create a swarm cluster; this generates a token, e.g. token://3f9b7af791eded88e2036c7983c0b8e6
docker run --rm swarm create
Start the swarm manager for the cluster
docker run -d -p 2376:2375 swarm manage token://3f9b7af791eded88e2036c7983c0b8e6
Inspect the cluster
docker -H 192.168.137.111:2376 info
docker run --rm swarm list token://3f9b7af791eded88e2036c7983c0b8e6
Working with services (note: the docker service commands below require swarm mode, set up with docker swarm init, rather than the standalone swarm image above)
docker service ls
docker service create --replicas 1 --name helloworld alpine ping www.baidu.com
docker service inspect --pretty helloworld
docker service ps helloworld
docker service scale helloworld=5
See the official documentation for more

VII. Kubernetes cluster setup and use

Disable the firewall
# systemctl stop firewalld
# systemctl disable firewalld
Disable SELinux (setenforce 0; set SELINUX=disabled in /etc/selinux/config to persist)
Method 1: online install with yum (this installs Docker automatically; remove any existing Docker first)
# yum install -y etcd kubernetes
========================================================================================================================================
 Package                                Arch                   Version                                    Repository               Size
========================================================================================================================================
Installing:
 etcd                                   x86_64                 3.1.9-1.el7                                extras                  7.3 M
 kubernetes                             x86_64                 1.5.2-0.7.git269f928.el7                   extras                   36 k
Installing for dependencies:
 conntrack-tools                        x86_64                 1.4.4-3.el7_3                              updates                 186 k
 kubernetes-client                      x86_64                 1.5.2-0.7.git269f928.el7                   extras                   14 M
 kubernetes-master                      x86_64                 1.5.2-0.7.git269f928.el7                   extras                   25 M
 kubernetes-node                        x86_64                 1.5.2-0.7.git269f928.el7                   extras                   14 M
 libnetfilter_cthelper                  x86_64                 1.0.0-9.el7                                base                     18 k
 libnetfilter_cttimeout                 x86_64                 1.0.0-6.el7                                base                     18 k
 libnetfilter_queue                     x86_64                 1.0.2-1.el7                                base                     23 k
 socat                                  x86_64                 1.7.2.2-5.el7                              base                    255 k
Updating for dependencies:
 libnetfilter_conntrack                 x86_64                 1.0.6-1.el7_3                              updates                  55 k

A few files must be edited before starting the services:
vim /etc/kubernetes/apiserver
Find KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota", remove ServiceAccount, then save and exit.
systemctl restart kube-apiserver    restart the service

vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled=false --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi



After installing, start the services in this order
systemctl start etcd
systemctl start docker
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
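The order above matters: etcd must be up before kube-apiserver, and the apiserver before the controllers and node agents. The same sequence as a loop (dry run with echo; on a real host run systemctl directly):

```shell
services="etcd docker kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy"
for svc in $services; do
  echo "systemctl start $svc"   # dry run; remove the echo on a real host
done
```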

Installing MySQL:
docker pull mysql
sudo vi mysql-rc.yaml
mysql-rc.yaml (named for the ReplicationController it defines) contains:
apiVersion: v1
kind: ReplicationController                            # replication controller (RC)
metadata:
  name: mysql                                          # RC name, globally unique
spec:
  replicas: 1                                          # desired number of Pod replicas
  selector:
    app: mysql                                         # Pods matching this label belong to the RC
  template:                                            # template for creating Pod replicas (instances)
    metadata:
      labels:
        app: mysql                                     # Pod label, must match the RC selector
    spec:
      containers:                                      # container definitions for the Pod
      - name: mysql                                    # container name
        image: hub.c.163.com/library/mysql:5.7.6       # Docker image for the container
        ports:
        - containerPort: 3306                          # port the application listens on
        env:                                           # environment variables injected into the container
        - name: MYSQL_ROOT_PASSWORD
          value: "529529"

On the master node, publish it to the k8s cluster with kubectl:
kubectl create -f mysql-rc.yaml
-- to delete:
-- kubectl delete -f mysql-rc.yaml

Method 2: manual installation and configuration

1. Install etcd for service discovery (see etcd.txt)


// Cluster deployment
2. This example uses three machines: k8s-master, k8s-node1, k8s-node2
Download the release packages from GitHub first; per-machine steps follow
k8s-master
===========================================================
kube-apiserver unit on the master
[root@k8s-master ~]# cat /usr/lib/systemd/system/kube-apiserver.service 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
===========================================================
kube-controller-manager unit on the master
[root@k8s-master ~]# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
===========================================================
kube-scheduler unit on the master
[root@k8s-master ~]# cat /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
===========================================================
kube-apiserver configuration on the master
[root@k8s-master ~]# cat /etc/kubernetes/apiserver 
KUBE_API_ARGS="--etcd_servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=192.168.231.1/16  --service-node-port-range=1-65535 --admission_control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
===========================================================
kube-controller-manager configuration on the master
[root@k8s-master ~]# cat /etc/kubernetes/controller-manager 
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://192.168.37.138:8080 --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
===========================================================
kube-scheduler configuration on the master
[root@k8s-master ~]# cat /etc/kubernetes/scheduler 
KUBE_SCHEDULER_ARGS="--master=http://192.168.37.138:8080 --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
===========================================================








k8s-node1
===========================================================
kubelet unit on node1
[root@k8s-node1 ~]# cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
===========================================================
kube-proxy unit on node1
[root@k8s-node1 ~]# cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.service
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
===========================================================
docker unit on node1
[root@k8s-node1 ~]# cat /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
#Wants=docker-storage-setup.service
#Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
KillMode=process
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
# change systemd to cgroupfs
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --registry-mirror=https://docker.mirrors.ustc.edu.cn \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
#ExecStart=/usr/bin/dockerd -H 0.0.0.0:2375 -H unix:///run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
MountFlags=slave

[Install]
WantedBy=multi-user.target
===========================================================
kubelet configuration on node1
[root@k8s-node1 ~]# cat /etc/kubernetes/kubelet
KUBELET_ARGS="--api-servers=http://192.168.232.102:8080 --cgroup-driver=systemd --hostname-override=192.168.232.162 --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
===========================================================
kube-proxy configuration on node1
[root@k8s-node1 ~]# cat /etc/kubernetes/proxy 
KUBE_PROXY_ARGS="--master=http://192.168.232.102:8080 --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
===========================================================


k8s-node2
===========================================================
kubelet unit on node2
[root@k8s-node2 ~]# cat /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
===========================================================
kube-proxy unit on node2
[root@k8s-node2 ~]# cat /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.service
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
===========================================================
docker unit on node2
[root@k8s-node2 ~]# cat /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
#Wants=docker-storage-setup.service
#Requires=docker-cleanup.timer

[Service]
Type=notify
NotifyAccess=all
KillMode=process
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
# change systemd to cgroupfs
ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          --registry-mirror=https://docker.mirrors.ustc.edu.cn \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
#ExecStart=/usr/bin/dockerd -H 0.0.0.0:2375 -H unix:///run/docker.sock
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
MountFlags=slave

[Install]
WantedBy=multi-user.target
===========================================================
kubelet configuration on node2
[root@k8s-node2 ~]# cat /etc/kubernetes/kubelet
KUBELET_ARGS="--api-servers=http://192.168.232.102:8080 --cgroup-driver=systemd --hostname-override=192.168.232.180 --logtostderr=false --log-dir=/var/log/kubernetes --v=2"
===========================================================
kube-proxy configuration on node2 (the unit's EnvironmentFile points at /etc/kubernetes/proxy)
[root@k8s-node2 ~]# cat /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--master=http://192.168.232.102:8080 --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

Start the cluster:

k8s-master: kube-apiserver kube-controller-manager kube-scheduler
k8s-node1: docker kubelet kube-proxy
k8s-node2: docker kubelet kube-proxy

Tip: kubeadm can bootstrap a k8s cluster quickly, but it is not recommended for use in development environments
Tip: flannel can provide the k8s overlay network

A simple test deployment:

Write mysql-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
      
Create the mysql RC
kubectl create -f mysql-rc.yaml
List RCs
kubectl get rc
List pods
kubectl get pods

Write mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  
Create the mysql service
kubectl create -f mysql-svc.yaml

Write myweb-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 5
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: kubeguide/tomcat-app:v1
          ports:
          - containerPort: 8080
          env:
          - name: MYSQL_SERVICE_HOST
            value: 'mysql'
          - name: MYSQL_SERVICE_PORT
            value: '3306'

Create the myweb RC
kubectl create -f myweb-rc.yaml

Write myweb-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 8080
      nodePort: 30001
  selector:
    app: myweb

Create the myweb service
kubectl create -f myweb-svc.yaml

Check the pods and services
kubectl get pods
kubectl get services

Open http://192.168.232.102:30001/; if the Tomcat page appears, the deployment succeeded


Notes:
Master
  Manages and controls the whole cluster
  Processes:
    kube-apiserver: exposes the HTTP REST API; the single entry point for add/delete/update/query operations and for cluster control
    kube-controller-manager: the automation control center for all resource objects
    kube-scheduler: the resource (Pod) scheduling process
    etcd: the service discovery process (cluster state store)

Node
  Worker node carrying the cluster workload (Nodes can be added to the cluster dynamically at runtime)
  Processes:
    kubelet: creates, starts, and stops the containers belonging to Pods, cooperating with the Master to implement basic cluster management
    kube-proxy: implements communication and load balancing for Kubernetes Services
    docker: handles creation and management of containers on the local machine
  List nodes: kubectl get nodes
  Describe a node: kubectl describe node <node_name>

