Getting Started with k8s in Practice

This article introduces the Namespace concept in Kubernetes, how Pod isolation works, and how to use labels for resource management and access control. It discusses how Namespaces, labels, and the authorization mechanism combine to achieve multi-environment isolation and multi-tenant resource control, with hands-on examples of adding, modifying, and deleting labels.

Namespace is a very important resource in Kubernetes. Its main purpose is to isolate resources between multiple environments, or between multiple tenants.

By default, all Pods in a Kubernetes cluster can access each other.
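For reference, a cluster ships with several namespaces out of the box (default, kube-system, kube-public, ...). They can be listed, and a new one created imperatively instead of via YAML, like this (the name dev is only an example and matches the manifest used later):

kubectl get namespaces
kubectl create namespace dev
kubectl get pods -n dev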

Test: all Pods in a Kubernetes cluster can access each other
Label operations

Entering a container: kubectl exec --help
Usage:
  kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]
[root@k8s-master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS       AGE
apache-855464645-4zxf2   1/1     Running   2 (115m ago)   22h
[root@k8s-master ~]# kubectl exec apache-855464645-4zxf2 -it -- bash
root@apache-855464645-4zxf2:/usr/local/apache2# ls
bin    cgi-bin  error   icons    logs
build  conf     htdocs  include  modules
root@apache-855464645-4zxf2:/usr/local/apache2# 

Check where sleep lives inside busybox
[root@k8s-node2 ~]# docker run -it --rm busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
5cc84ad355aa: Pull complete 
Digest: sha256:5acba83a746c7608ed544dc1533b87c737a0b0fb730301639a0179f9344b1678
Status: Downloaded newer image for busybox:latest
/ # which sleep
/bin/sleep
/ # exit
[root@k8s-node2 ~]# 
  
How to run a command inside the container:
[root@k8s-master ~]# kubectl explain pods.spec.containers

    command: ["/bin/sleep","6000"]
    
[root@k8s-master ~]# cd manifest/
[root@k8s-master manifest]# ls
nginxpod.yml
[root@k8s-master manifest]# cp nginxpod.yml test.yml
[root@k8s-master manifest]# vim test.yml 
[root@k8s-master manifest]# cat test.yml 
apiVersion: v1
kind: Namespace
metadata:
  name: dev

---

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: dev
spec:
  containers:
  - name: nginx-containers
    image: busybox 
    command: ["/bin/sleep","6000"] 
    
---

apiVersion: v1
kind: Pod
metadata:
  name: apache
spec:
  containers:
  - name: httpd
    image: busybox
    command: ["/bin/sleep","6000"] 
[root@k8s-master manifest]# 


Apply the manifest:
[root@k8s-master manifest]# kubectl apply -f test.yml 
namespace/dev created
pod/nginx created
pod/apache created
[root@k8s-master manifest]# kubectl get -f test.yml 
NAME            STATUS   AGE
namespace/dev   Active   17s

NAME         READY   STATUS    RESTARTS   AGE
pod/nginx    1/1     Running   0          17s
pod/apache   1/1     Running   0          17s
[root@k8s-master manifest]# 
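Before exec'ing into the containers, the Pod IPs used in the ping test below can also be read directly from kubectl; -o wide adds the IP and node columns (output omitted here):

kubectl get pods -o wide
kubectl get pods -n dev -o wide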
[root@k8s-master manifest]# kubectl exec apache -it -- sh
/ # ls
bin   dev   etc   home  proc  root  sys   tmp   usr   var
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether be:ab:f1:70:78:47 brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.19/24 brd 10.244.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::bcab:f1ff:fe70:7847/64 scope link 
       valid_lft forever preferred_lft forever
/ # 
/ # ping 10.244.2.18
PING 10.244.2.18 (10.244.2.18): 56 data bytes
64 bytes from 10.244.2.18: seq=0 ttl=64 time=0.117 ms
64 bytes from 10.244.2.18: seq=1 ttl=64 time=1.426 ms
64 bytes from 10.244.2.18: seq=2 ttl=64 time=0.078 ms
^C
--- 10.244.2.18 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.078/0.540/1.426 ms

      
[root@k8s-master ~]# kubectl exec nginx -itn dev -- sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
    link/ether ca:ef:b8:81:d1:b5 brd ff:ff:ff:ff:ff:ff
    inet 10.244.2.18/24 brd 10.244.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c8ef:b8ff:fe81:d1b5/64 scope link 
       valid_lft forever preferred_lft forever
/ # ping 10.244.2.19
PING 10.244.2.19 (10.244.2.19): 56 data bytes
64 bytes from 10.244.2.19: seq=0 ttl=64 time=0.065 ms
64 bytes from 10.244.2.19: seq=1 ttl=64 time=0.060 ms
^C
--- 10.244.2.19 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.060/0.062/0.065 ms
/ # 

      
The two Pods can reach each other.
  • All Pods in a Kubernetes cluster can access each other by default.

  • In practice you may not want two Pods to be able to reach each other. In that case you can place them in different namespaces: by assigning the cluster's resources to different Namespaces, Kubernetes forms logical "groups" so that resources belonging to different groups can be isolated and managed separately. (Note that a namespace by itself is a management boundary, not a network one; as the ping test above shows, Pods in different namespaces can still reach each other unless a NetworkPolicy blocks the traffic.)

  • Through Kubernetes' authorization mechanism, different namespaces can be handed to different tenants to manage, which gives you multi-tenant resource isolation. Combined with the resource quota mechanism, you can also cap the resources each tenant may consume, such as CPU and memory usage, to control what each tenant is allowed to use; see the sketch below.
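As a sketch of that quota mechanism (the namespace dev and all limit values below are example numbers, not taken from the cluster above), a ResourceQuota scoped to one namespace could look like this:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota          # example name
  namespace: dev
spec:
  hard:
    pods: "10"             # at most 10 Pods in this namespace
    requests.cpu: "2"      # total CPU requests across all Pods
    requests.memory: 2Gi   # total memory requests across all Pods
    limits.cpu: "4"        # total CPU limits
    limits.memory: 4Gi     # total memory limits

Apply it with kubectl apply -f and check usage with kubectl get resourcequota -n dev.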

1. View labels:

# kubectl get pod/nginx-pod --show-labels 
NAME        READY   STATUS    RESTARTS   AGE   LABELS
nginx-pod   1/1     Running   0          10m   app=nginx,env=qa

You can see two labels under LABELS: app=nginx and env=qa.

2. Add a label:

Method 1: on the command line

# kubectl label pod/nginx-pod release=v1.20
pod/nginx-pod labeled
 
Check:
# kubectl get pod/nginx-pod --show-labels 
NAME        READY   STATUS    RESTARTS   AGE   LABELS
nginx-pod   1/1     Running   0          15m   app=nginx,env=qa,release=v1.20

Method 2: in the YAML file

Add labels under metadata, then list the key-value pairs you want:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    env: qa
    app: nginx

3. Modify a label

# kubectl get pod/nginx-pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
nginx-pod   1/1     Running   0          19m   app=nginx,env=qa,release=v1.20
 
# kubectl label pod/nginx-pod release=v1.21 --overwrite 
pod/nginx-pod labeled
 
# kubectl get pod/nginx-pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
nginx-pod   1/1     Running   0          19m   app=nginx,env=qa,release=v1.21
release=v1.20 --> release=v1.21

4. Delete a label

# kubectl get pod/nginx-pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
nginx-pod   1/1     Running   0          23m   app=nginx,env=qa,release=v1.21
 
# kubectl label pod/nginx-pod release-
pod/nginx-pod labeled
 
# kubectl get pod/nginx-pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
nginx-pod   1/1     Running   0          23m   app=nginx,env=qa

Appending a "-" after the key deletes that label.

Nodes in the cluster have labels too; adding, deleting, modifying, and viewing them works the same way as above.

Add:
kubectl label node/<nodename> env=qa
 
Delete (append "-" to the key):
kubectl label node/<nodename> env-
 
Modify:
kubectl label node/<nodename> env=dev --overwrite
 
View:
kubectl get node/<nodename> --show-labels
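A common reason to label nodes is to steer Pods onto them with nodeSelector; a minimal sketch, assuming a node already carries env=qa (the Pod name and image here are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: qa-pod             # placeholder name
spec:
  nodeSelector:
    env: qa                # schedule only onto nodes labeled env=qa
  containers:
  - name: web
    image: nginx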
Label filtering: finding resources that carry a given label

Find all nodes labeled env=qa:

# kubectl get node --show-labels -l env=qa

Other filter expressions:

Expression                        Meaning
env=qa                            equals
env!=qa                           not equal to
'env in (qa,dev,qc)'              value is in the set
'env notin (qa,dev,qc)'           value is not in the set
'app,env notin (dev,qc)'          combined filter (has an app label, and env is not in the set)
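The same selector expressions work on any resource type via -l; for example (label values here are only illustrative):

kubectl get pod -l env!=qa
kubectl get pod -l 'env in (qa,dev,qc)'
kubectl get node -l 'app,env notin (dev,qc)'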
 

A Service selects Pods by label

# kubectl get pod/nginx-pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE   LABELS
nginx-pod   1/1     Running   0          23m   app=nginx,env=qa
 
 
Create a Service whose YAML matches Pods labeled app=nginx:
apiVersion: v1
kind: Service
metadata:
  labels:
    svcname: nginx-pod-svc
  name: nginx-pod-svc
spec:
  ports:
  - name: server
    port: 8080        # port the Service exposes
    protocol: TCP
    targetPort: 80    # container port on the Pod
  selector:
    app: nginx        # match Pods with this label
  type: ClusterIP
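To confirm the selector actually picked up nginx-pod, the Endpoints object that Kubernetes creates for the Service should list the Pod's IP together with the targetPort:

kubectl get endpoints nginx-pod-svc
kubectl describe svc nginx-pod-svc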

Test:

# kubectl get svc --field-selector metadata.name=nginx-pod-svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
nginx-pod-svc   ClusterIP   192.168.48.63   <none>        8080/TCP   23m
# curl 192.168.48.63:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>