Kubernetes Installation and Deployment
- Kubernetes is a container cluster management system: an open-source platform that automates the deployment, scaling, and maintenance of container clusters.
- With Kubernetes you can:
Automatically deploy and replicate containers
Scale the container fleet up or down at any time
Organize containers into groups and load-balance between them
Easily roll out new versions of application containers
Save resources and optimize hardware usage
Provide container resilience: if a container fails, replace it
- Kubernetes characteristics:
Portable: supports public, private, and hybrid clouds
Extensible: modular, pluggable, composable, mountable
Self-healing: automatic deployment, restart, replication, and scaling
- Kubernetes in brief:
Kubernetes is a system for running and coordinating containerized applications across a set of hosts. It is a platform designed to manage the full lifecycle of containerized applications and services in a way that is predictable, scalable, and highly available.
Pre-installation preparation # (execute on the master node and all node nodes)
Three CentOS 7 virtual machines, each with 2 GB RAM and 2 CPU cores
master: 192.168.176.137
node1: 192.168.176.138
node2: 192.168.176.139
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# yum -y install ntpdate
[root@localhost ~]# ntpdate pool.ntp.org
// Change the three hostnames to master, node1, and node2 respectively (note: the hostname command only sets the name until reboot; hostnamectl set-hostname makes it persistent)
[root@localhost ~]# hostname master
[root@localhost ~]# hostname node1
[root@localhost ~]# hostname node2
[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
## Add the IP and hostname of all three machines
192.168.176.137 master
192.168.176.138 node1
192.168.176.139 node2
Kubeadm部署Kubernetes集群
1. Install Docker (using the Aliyun mirror) # execute on the master node and all node nodes
// Configure the Docker yum repository (using the Aliyun mirror)
[root@master ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# ls /etc/yum.repos.d/
CentOS-Base.repo CentOS-Debuginfo.repo CentOS-SIG-ansible-29.repo docker-ce.repo
CentOS-Base.repo.backup CentOS-Media.repo CentOS-Vault.repo
// Install Docker
[root@master ~]# yum -y install docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
[root@master ~]# docker --version
Docker version 18.09.7, build 2d0083d
// Change the Docker cgroup driver to systemd
[root@master ~]# vim /etc/docker/daemon.json
[root@master ~]# cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master ~]# systemctl restart docker
[root@master ~]# docker info|grep Cgroup
Cgroup Driver: systemd
2. Install kubeadm # execute on the master node and all node nodes
// Configure the Kubernetes yum repository (using the Aliyun mirror)
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
[root@master ~]# yum makecache
// Install kubelet, kubectl, and kubeadm
[root@master ~]# yum -y install kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2
# The following error may occur
The failing package is: kubelet-1.15.2-0.x86_64
GPG keys are configured as: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
# Fix: import the GPG keys manually
[root@master ~]# rpm --import https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@master ~]# rpm --import https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# Retry the installation
[root@master ~]# yum -y install kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2
[root@master ~]# rpm -aq kubelet kubectl kubeadm
kubectl-1.15.2-0.x86_64
kubelet-1.15.2-0.x86_64
kubeadm-1.15.2-0.x86_64
// Enable kubelet at boot; it cannot be started right after installation (the cluster has not been created yet, so kubelet has nothing to run)
[root@master ~]# systemctl enable kubelet
3. Initialize the Master
Note: execute on the master node
As the kubeadm --help output shows, kubeadm init initializes a master node, and kubeadm join then adds a node to the cluster.
// Configure kubelet to ignore the swap error
[root@master ~]# vim /etc/sysconfig/kubelet
[root@master ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
// Initialize
[root@master ~]# kubeadm init --kubernetes-version=v1.15.2 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
......
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.176.137:6443 --token 0dscr0.79zkrsbjwea5ggl9 \
--discovery-token-ca-cert-hash sha256:24aaf28785ab342cd3d01675578fcffbc8546e9df06c232dd9bbde9867f22f3c
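The token printed by kubeadm init above expires after 24 hours by default; if a node joins later, a fresh join command can be printed on the master with `kubeadm token create --print-join-command`. As a sketch, the join command is simply the three values from the init output assembled into one line (the values below are the example values from this section):

```shell
#!/bin/sh
# Sketch: how the kubeadm join command line is assembled from the values
# printed by `kubeadm init` (example values from this section).
APISERVER="192.168.176.137:6443"
TOKEN="0dscr0.79zkrsbjwea5ggl9"
CA_HASH="sha256:24aaf28785ab342cd3d01675578fcffbc8546e9df06c232dd9bbde9867f22f3c"

join_cmd() {
  printf 'kubeadm join %s --token %s --discovery-token-ca-cert-hash %s\n' \
    "$APISERVER" "$TOKEN" "$CA_HASH"
}

join_cmd
```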
// Create the config file as instructed by the successful init output above
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
// After initialization completes, the required images have all been pulled
[root@master ~]# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.15.2 34a53be6c9a7 15 months ago 207MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.15.2 9f5df470155d 15 months ago 159MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.15.2 88fa9cb27bd2 15 months ago 81.1MB
registry.aliyuncs.com/google_containers/kube-proxy v1.15.2 167bbf6c9338 15 months ago 82.4MB
registry.aliyuncs.com/google_containers/coredns 1.3.1 eb516548c180 22 months ago 40.3MB
registry.aliyuncs.com/google_containers/etcd 3.3.10 2c4adeb21b4f 23 months ago 258MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
// Add the flannel network add-on
flannel project page: https://github.com/coreos/flannel
Method 1 (using a previously downloaded kube-flannel.yml):
[root@master ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Method 2 (applying the manifest straight from GitHub):
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
// Verify the flannel network plugin deployed successfully (Running means success)
[root@master ~]# kubectl get pods -n kube-system |grep flannel
kube-flannel-ds-765qc 1/1 Running 0 2m34s
4. Join the Node nodes
To add a new node to the cluster, run the kubeadm join command from the kubeadm init output on that node, after configuring the same swap-error workaround.
// Configure kubelet to ignore the swap error
[root@node1 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
[root@node2 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
// Join node1
[root@node1 ~]# swapoff -a
[root@node1 ~]# kubeadm join 192.168.176.137:6443 --token 0dscr0.79zkrsbjwea5ggl9 --discovery-token-ca-cert-hash sha256:24aaf28785ab342cd3d01675578fcffbc8546e9df06c232dd9bbde9867f22f3c
# Error handling
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
# Fix
[root@node1 ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
# Join again
[root@node1 ~]# kubeadm join 192.168.176.137:6443 --token 0dscr0.79zkrsbjwea5ggl9 --discovery-token-ca-cert-hash sha256:24aaf28785ab342cd3d01675578fcffbc8546e9df06c232dd9bbde9867f22f3c
......
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
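Note that writing to /proc/sys as above is lost on reboot. To make the setting persistent, a sysctl drop-in file can be used (a sketch; the file name k8s.conf is an assumption, any file under /etc/sysctl.d/ works, and the br_netfilter module must be loaded for these keys to exist):

```
# /etc/sysctl.d/k8s.conf (hypothetical file name)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

Apply it without rebooting via `sysctl --system`.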
// Join node2
[root@node2 ~]# swapoff -a
[root@node2 ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@node2 ~]# kubeadm join 192.168.176.137:6443 --token 0dscr0.79zkrsbjwea5ggl9 --discovery-token-ca-cert-hash sha256:24aaf28785ab342cd3d01675578fcffbc8546e9df06c232dd9bbde9867f22f3c
......
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
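Likewise, `swapoff -a` only disables swap until the next reboot. To keep swap off permanently, the swap entry in /etc/fstab can be commented out (a sketch assuming the default CentOS 7 LVM layout; the device path may differ on your machines):

```
# /etc/fstab -- comment out the swap line so swap stays off after reboot
#/dev/mapper/centos-swap swap swap defaults 0 0
```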
5. Check the cluster status
// On the master node, check the cluster status (the key point: when STATUS shows Ready, the cluster is healthy)
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 23m v1.15.2
node1 Ready <none> 5m32s v1.15.2
node2 Ready <none> 117s v1.15.2
// View client and server version information for the cluster
[root@master ~]# kubectl version --short=true
Client Version: v1.15.2
Server Version: v1.15.2
// View cluster information
[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.176.137:6443
KubeDNS is running at https://192.168.176.137:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
// View the images pulled on each node
# master node
[root@master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.13.0 e708f4bb69e3 4 weeks ago 57.2MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.15.2 34a53be6c9a7 15 months ago 207MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.15.2 88fa9cb27bd2 15 months ago 81.1MB
registry.aliyuncs.com/google_containers/kube-proxy v1.15.2 167bbf6c9338 15 months ago 82.4MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.15.2 9f5df470155d 15 months ago 159MB
registry.aliyuncs.com/google_containers/coredns 1.3.1 eb516548c180 22 months ago 40.3MB
registry.aliyuncs.com/google_containers/etcd 3.3.10 2c4adeb21b4f 23 months ago 258MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
# node1 node
[root@node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.13.0 e708f4bb69e3 4 weeks ago 57.2MB
registry.aliyuncs.com/google_containers/kube-proxy v1.15.2 167bbf6c9338 15 months ago 82.4MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
# node2 node
[root@node2 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.13.0 e708f4bb69e3 4 weeks ago 57.2MB
registry.aliyuncs.com/google_containers/kube-proxy v1.15.2 167bbf6c9338 15 months ago 82.4MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 2 years ago 742kB
6. Remove a node (only needed when a node fails)
// When a node fails, it may need to be removed from the cluster, as follows
Execute on the master node
[root@master ~]# kubectl drain <NODE-NAME> --delete-local-data --force --ignore-daemonsets
[root@master ~]# kubectl delete node <NODE-NAME>
Execute on the node being removed
[root@node1 ~]# kubeadm reset
Imperative Container Application Orchestration
1. Deploy an application Pod
// Create a Deployment controller object
# Create a deployment controller named nginx, whose pods use the nginx:1.12 image, expose container port 80, and run as 1 replica; first verify the command with --dry-run.
[root@master ~]# kubectl run nginx --image=nginx:1.12 --port=80 --replicas=1 --dry-run=true
[root@master ~]# kubectl run nginx --image=nginx:1.12 --port=80 --replicas=1
[root@master ~]# kubectl get pods # list all pod objects
NAME READY STATUS RESTARTS AGE
nginx-685cc95cd4-s2s9v 0/1 ContainerCreating 0 15s
### Parameter notes:
--image specifies the image to use.
--port specifies the port the container exposes.
--replicas specifies how many Pod replicas the target controller object should create automatically.
// Print information about the resource objects
[root@master ~]# kubectl get deployment # list all deployment controller objects
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 2m35s
[root@master ~]# kubectl get deployment -o wide # detailed information about deployment controller objects
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
nginx 1/1 1 1 3m45s nginx nginx:1.12 run=nginx
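The kubectl run command above creates the Deployment imperatively. The same object can be written declaratively and created with `kubectl apply -f`. A minimal sketch of the equivalent manifest (the file name is hypothetical; the run: nginx label mirrors what kubectl run generates in v1.15, as the SELECTOR column above shows):

```yaml
# nginx-deployment.yaml (hypothetical file name) -- declarative equivalent of
# `kubectl run nginx --image=nginx:1.12 --port=80 --replicas=1`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.12
        ports:
        - containerPort: 80
```

Applying it with `kubectl apply -f nginx-deployment.yaml` creates the same Deployment as the kubectl run command.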
// Access the Pod object
[root@master ~]# kubectl get pods -o wide # detailed pod information
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-685cc95cd4-s2s9v 1/1 Running 0 5m22s 10.244.1.2 node1 <none> <none>
// Access from the master node of the Kubernetes cluster
[root@master ~]# curl 10.244.1.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
// Access from a node of the Kubernetes cluster
[root@node1 ~]# curl 10.244.1.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
## The access above works while there is only one pod. However, if that pod dies unexpectedly, or the node it runs on fails, the deployment controller immediately creates a new pod; the old IP then stops working, and we cannot go look up the new IP on a node every time. Test as follows
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-685cc95cd4-s2s9v 1/1 Running 0 9m17s 10.244.1.2 node1 <none> <none>
// Delete the pod above
[root@master ~]# kubectl delete pods nginx-685cc95cd4-s2s9v
pod "nginx-685cc95cd4-s2s9v" deleted
// As soon as the pod above was deleted, the deployment controller immediately created a new pod, this time scheduled on node2.
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-685cc95cd4-87pwc 1/1 Running 0 103s 10.244.2.2 node2 <none> <none>
// The previous pod's IP is no longer reachable
[root@master ~]# curl 10.244.1.2
curl: (7) Failed connect to 10.244.1.2:80; No route to host
// The new pod is reachable as expected
[root@master ~]# curl 10.244.2.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Deploying Service Objects
1. Create a Service object (example: proxy the Service port to the Pod port)
[root@master ~]# kubectl expose deployment/nginx --name=nginx-svc --port=80 --target-port=80 --protocol=TCP
service/nginx-svc exposed
### Parameter notes:
--name specifies the name of the service object
--port specifies the port of the service object
--target-port specifies the container port of the pod objects
--protocol specifies the protocol
// View service objects (equivalently: kubectl get service)
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 138m
nginx-svc ClusterIP 10.108.170.23 <none> 80/TCP 66s
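The kubectl expose command can also be expressed as a manifest. A sketch of the equivalent ClusterIP Service (file name hypothetical; the run: nginx selector matches the label that kubectl run put on the pods):

```yaml
# nginx-svc.yaml (hypothetical file name) -- declarative equivalent of
# `kubectl expose deployment/nginx --name=nginx-svc --port=80 --target-port=80 --protocol=TCP`
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP
  selector:
    run: nginx
  ports:
  - protocol: TCP
    port: 80        # service port
    targetPort: 80  # pod container port
```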
// Now every node in the cluster can reach the nginx pods behind the deployment by accessing the nginx-svc cluster IP directly. In addition, any newly created pod in the cluster can reach them via this IP or simply via the service name.
[root@master ~]# curl 10.108.170.23
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
2. Create a Service object (expose the created Pod objects outside the cluster with a "NodePort"-type service)
// Create a deployment controller that runs the nginx image as the containerized application
[root@master ~]# kubectl run mynginx --image=nginx:1.12 --port=80 --replicas=2
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
mynginx-68676f64-c6cww 1/1 Running 0 9s
mynginx-68676f64-d4wkh 1/1 Running 0 9s
nginx-685cc95cd4-87pwc 1/1 Running 0 101m
// Create a service object and expose the pods created by mynginx outside the cluster using the NodePort type
[root@master ~]# kubectl expose deployments/mynginx --type="NodePort" --port=80 --name=myngnix-svc
service/myngnix-svc exposed
// View the service
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 149m
myngnix-svc NodePort 10.111.108.122 <none> 80:30081/TCP 46s
nginx-svc ClusterIP 10.108.170.23 <none> 80/TCP 11m
// Check whether the master node is listening on port 30081
[root@master ~]# netstat -lptnu|grep 30081
tcp6 0 0 :::30081 :::* LISTEN 63639/kube-proxy
// Check whether the node is listening on port 30081
[root@node1 ~]# netstat -lptnu|grep 30081
tcp6 0 0 :::30081 :::* LISTEN 7146/kube-proxy
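As with the ClusterIP service, the NodePort service can be written declaratively. A sketch (file name hypothetical; the nodePort field is optional and, if omitted, Kubernetes auto-assigns one from the 30000-32767 range, as it did above with 30081):

```yaml
# mynginx-svc.yaml (hypothetical file name) -- declarative equivalent of
# `kubectl expose deployments/mynginx --type="NodePort" --port=80 --name=myngnix-svc`
apiVersion: v1
kind: Service
metadata:
  name: myngnix-svc
spec:
  type: NodePort
  selector:
    run: mynginx
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 80    # pod container port
    nodePort: 30081   # port opened on every node (optional; auto-assigned if omitted)
```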
3. Clients access port 30081 of the Kubernetes cluster (a client can hit port 30081 on any node)
4. Describing Service resource objects
[root@master ~]# kubectl describe service
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.96.0.1
Port: https 443/TCP
TargetPort: 6443/TCP
Endpoints: 192.168.176.137:6443
Session Affinity: None
Events: <none>
Name: myngnix-svc
Namespace: default
Labels: run=mynginx
Annotations: <none>
Selector: run=mynginx
Type: NodePort
IP: 10.111.108.122
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30081/TCP
Endpoints: 10.244.1.3:80,10.244.2.3:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Name: nginx-svc
Namespace: default
Labels: run=nginx
Annotations: <none>
Selector: run=nginx
Type: ClusterIP
IP: 10.108.170.23
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.2.2:80
Session Affinity: None
Events: <none>
### Field notes:
Selector is the label selector this Service uses to pick its associated Pod objects
Type is the Service type: one of ClusterIP, NodePort, LoadBalancer, etc.
IP is the ClusterIP of this Service object
Port is the exposed port, i.e. the port on which this Service receives and responds to requests
TargetPort is the target port in the container; the Service Port routes requests to it
NodePort is this Service's NodePort; whether it has a meaningful value depends on the Type field
Endpoints are the backend endpoints: the IPs and ports of all Pods matched by this Service's Selector
Session Affinity indicates whether session stickiness is enabled
External Traffic Policy is the scheduling policy for external traffic
Scaling Up and Down
1. Scale-up example
// View pods with the label run=nginx
[root@master ~]# kubectl get pods -l run=nginx
NAME READY STATUS RESTARTS AGE
nginx-685cc95cd4-87pwc 1/1 Running 0 114m
// Scale up to 3 replicas
[root@master ~]# kubectl scale deployments/nginx --replicas=3
deployment.extensions/nginx scaled
// Check again
[root@master ~]# kubectl get pods -l run=nginx
NAME READY STATUS RESTARTS AGE
nginx-685cc95cd4-69b6q 1/1 Running 0 57s
nginx-685cc95cd4-87pwc 1/1 Running 0 116m
nginx-685cc95cd4-8n9w8 1/1 Running 0 57s
// View detailed information about the nginx Deployment object
[root@master ~]# kubectl describe deployments/nginx
2. Scale-down example (scaling down works the same way as scaling up, just set the Pod replica count to a smaller number; for example, shrink the nginx pods to 2)
[root@master ~]# kubectl scale deployments/nginx --replicas=2
deployment.extensions/nginx scaled
[root@master ~]# kubectl get pods -l run=nginx
NAME READY STATUS RESTARTS AGE
nginx-685cc95cd4-69b6q 1/1 Running 0 3m46s
nginx-685cc95cd4-87pwc 1/1 Running 0 118m
Deleting Objects
// View all current service objects
[root@master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 163m
myngnix-svc NodePort 10.111.108.122 <none> 80:30081/TCP 14m
nginx-svc ClusterIP 10.108.170.23 <none> 80/TCP 25m
// Delete the service object nginx-svc
[root@master ~]# kubectl delete service nginx-svc
service "nginx-svc" deleted
// Delete all Deployment controllers in the default namespace
[root@master ~]# kubectl delete deployment --all
deployment.extensions "mynginx" deleted
deployment.extensions "nginx" deleted