Environment is ready.
First delete the resources left over from the previous experiment:
I. Resource manifests
The format is as follows:
apiVersion: group/version   # which API group and version the resource belongs to; one group can have multiple versions
    kubectl api-versions    # list the available group/version pairs
kind:                       # the type of resource being created; k8s mainly supports the following kinds:
    Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, CronJob
metadata:                   # metadata
    name:      object name
    namespace: which namespace the object belongs to
    labels:    resource labels; a label is a key/value pair
spec:                       # the desired state of the resource
kubectl explain pod                    # built-in documentation for the pod resource
kubectl explain pods.spec.containers   # drill down into a specific field
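Putting the fields above together, an annotated manifest might look like the sketch below (the object name and label values are illustrative, not from the lab):

```yaml
apiVersion: v1            # core group, version v1
kind: Pod                 # resource type
metadata:
  name: example-pod       # object name (illustrative)
  namespace: default      # which namespace the object lives in
  labels:
    app: example          # free-form key/value labels
spec:                     # desired state of the resource
  containers:
  - name: main
    image: myapp:v2
```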
1. Write a minimal Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: jd618
spec:
  containers:
  - name: jd1
    image: myapp:v2
[root@node1 manifest]# kubectl apply -f pod1.yml
pod/jd618 created
[root@node1 manifest]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
jd618   1/1     Running   0          4s
A bare Pod like this has no controller behind it, so once deleted it is not recreated:
[root@node1 manifest]# kubectl delete pod jd618
pod "jd618" deleted
[root@node1 manifest]# kubectl get pod
No resources found in default namespace.
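By contrast, a Pod managed by a controller is recreated after deletion. A minimal Deployment sketch keeping one replica of the same image alive (the Deployment name and label value here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jd-deploy          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jd              # must match the pod template's labels
  template:
    metadata:
      labels:
        app: jd
    spec:
      containers:
      - name: jd1
        image: myapp:v2
```

If you `kubectl delete pod` one of its Pods, the Deployment's ReplicaSet immediately creates a replacement.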
2. If a Pod runs two of the same kind of container, they contend for shared resources (both myapp images are nginx-based and bind port 80):
[root@node1 manifest]# cat ~/manifest/pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: jd618
spec:
  containers:
  - name: jd1
    image: myapp:v2
  - name: jd2
    image: myapp:v1
Result: (the output appears in the transcript below, after example 3)
3. One Pod running two different containers
[kubeadm@node1 ~]$ vim pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    app: demo
spec:
  containers:
  - name: vm1
    image: nginx
  - name: vm2
    image: busybox
    command:
    - /bin/sh
    - -c
    - sleep 3300
Result of applying pod1.yml (the two-container jd618 manifest from example 2):
[root@node1 manifest]# kubectl apply -f pod1.yml
pod/jd618 created
[root@node1 manifest]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
jd618   2/2     Running   0          5s
[root@node1 manifest]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
jd618   1/2     Error     1          12s
[root@node1 manifest]# kubectl logs jd618    # the container name must be specified
error: a container name must be specified for pod jd618, choose one of: [jd1 jd2]
[root@node1 manifest]# kubectl logs jd618 -c jd1
[root@node1 manifest]# kubectl logs jd618 -c jd2    # containers in a Pod share the network and storage, so the port conflicts
2020/06/25 00:01:16 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/25 00:01:16 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/25 00:01:16 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/25 00:01:16 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/25 00:01:16 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2020/06/25 00:01:16 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()
Since this is a bare Pod, deleting it via the manifest removes it for good:
kubectl delete -f pod1.yml
How do we fix the conflict?
[root@node1 manifest]# kubectl explain pod.spec.containers | less    # look up the available fields
Swap the second container for an image that does not bind port 80 (busyboxplus), and add tty, stdin, and stdinOnce so its shell keeps running:
apiVersion: v1
kind: Pod
metadata:
  name: jd618
spec:
  containers:
  - name: jd1
    image: myapp:v2
  - name: jd2
    image: busyboxplus
    tty: true          # add these three fields
    stdin: true
    stdinOnce: true
[root@node1 manifest]# kubectl apply -f pod1.yml
pod/jd618 created
[root@node1 manifest]# kubectl get pod
NAME    READY   STATUS    RESTARTS   AGE
jd618   2/2     Running   0          55s
[root@node1 manifest]# kubectl describe pod jd618    # detailed information about each container in the pod
Now the Pod jd618 has two containers: jd1 (image myapp:v2) and jd2 (image busyboxplus). Attach to jd2 and probe localhost:
[root@node1 manifest]# kubectl attach jd618 -c jd2 -it
[root@node1 manifest]# kubectl exec -it jd618 -c jd1 -- sh    # open a shell in jd1
eg1. Port mapping
spec.containers.ports
[root@node1 manifest]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: jd618
spec:
  containers:
  - name: jd1
    image: myapp:v2
    ports:
    - name: httpd
      containerPort: 80
      hostPort: 80
[root@node1 manifest]# kubectl get pod -o wide    # check which node the pod landed on
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
jd618   1/1     Running   0          14s   10.244.1.33   node2   <none>           <none>
On node2, requests to the host's port 80 are DNATed into the container.
spec.hostNetwork
[root@node1 manifest]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: jd618
spec:
  containers:
  - name: jd1
    image: myapp:v2
  hostNetwork: true
[root@node1 manifest]# kubectl apply -f pod1.yml
pod/jd618 created
[root@node1 manifest]# kubectl get pod -o wide    # the pod uses the host's network, so its IP is the node's IP
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
jd618   1/1     Running   0          14s   172.25.26.3   node3   <none>           <none>
eg2. CPU and memory resources
[root@node1 manifest]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: jd618
spec:
  containers:
  - name: jd1
    image: myapp:v2
    resources:
      requests:
        memory: 100Mi    # Mi is a binary unit (1Mi = 1024Ki); M is decimal (1M = 1000K)
        cpu: 0.1
      limits:
        memory: 200Mi
        cpu: 0.2
[root@node1 manifest]# kubectl apply -f pod1.yml
pod/jd618 created
[root@node1 manifest]# kubectl describe pod jd618
eg3. Choosing the node a container runs on
spec.nodeSelector
The Pod was originally scheduled on node3.
Now pin it to node2:
[root@node1 manifest]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: jd618
spec:
  containers:
  - name: jd1
    image: myapp:v2
    resources:
      requests:
        memory: 100Mi
        cpu: 0.1
      limits:
        memory: 200Mi
        cpu: 0.2
  nodeSelector:
    kubernetes.io/hostname: node2
[root@node1 manifest]# kubectl apply -f pod1.yml
pod/jd618 created
[root@node1 manifest]# kubectl get pod -o wide    # now on node2
NAME    READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
jd618   1/1     Running   0          19s   10.244.1.34   node2   <none>           <none>
eg4. Labels
[root@node1 manifest]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  name: jd618
  labels:
    app: demo    # add a label
spec:
  containers:
  - name: jd1
    image: myapp:v2
    resources:
      requests:
        memory: 100Mi
        cpu: 0.1
      limits:
        memory: 200Mi
        cpu: 0.2
  nodeSelector:
    kubernetes.io/hostname: node2
[root@node1 manifest]# kubectl apply -f pod1.yml
pod/jd618 created
[root@node1 manifest]# kubectl get pod --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
jd618   1/1     Running   0          11s   app=demo
[root@node1 manifest]# kubectl label pod jd618 app=nginx
error: 'app' already has a value (demo), and --overwrite is false
[root@node1 manifest]# kubectl label pod jd618 app=nginx --overwrite    # changing an existing label requires --overwrite
pod/jd618 labeled
[root@node1 manifest]# kubectl get pod --show-labels
NAME    READY   STATUS    RESTARTS   AGE   LABELS
jd618   1/1     Running   0          75s   app=nginx
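Labels are mostly consumed by selectors. For illustration, a Service that would route traffic to any Pod carrying app=nginx (the Service name and ports below are assumptions, not part of the lab):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc          # illustrative name
spec:
  selector:
    app: nginx             # matches the label set above
  ports:
  - port: 80               # port exposed by the Service
    targetPort: 80         # port the container listens on
```

With the label changed from app=demo to app=nginx, jd618 would be picked up by this selector; you can also filter on the command line with kubectl get pod -l app=nginx.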