K8s Part 7: Authentication and Authorization

K8s authentication and authorization:
  • Authentication: checks whether the requester is a legitimate user. Multiple authentication plugins are supported; a user only needs to pass one of them to be treated as authenticated.
  • Authorization: checks what the authenticated user is allowed to do.
  • Admission control: if the user's operation involves other related resources, admission control performs additional checks.
Clients operate on resources through the API server's URLs; each request carries:
  • user: username and uid.
  • group: the groups the user belongs to.
  • extra: additional information.
  • API Request Path: the K8s API server exposes a URL for every resource, and operations on a resource are performed against that URL; the generic path patterns are shown just below.
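For illustration, the generic URL patterns for namespaced resources look like this (placeholder names in braces, not output from this cluster):
  /api/v1/namespaces/{namespace}/pods/{name}                    #core API group
  /apis/apps/v1/namespaces/{namespace}/deployments/{name}       #named API group (apps/v1)
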
Reverse-proxy the API server's port 6443 to local port 8080 on k8smaster
#The k8s API server serves authenticated HTTPS on port 6443.
[root@k8smaster ~]# netstat -anput | grep 6443
tcp        0      0 192.168.43.45:46918     192.168.43.45:6443      ESTABLISHED 837/kubelet         
tcp        0      0 192.168.43.45:47050     192.168.43.45:6443      ESTABLISHED 4364/kube-controlle 
tcp        0      0 192.168.43.45:46760     192.168.43.45:6443      ESTABLISHED 4364/kube-controlle 
tcp        0      0 192.168.43.45:46962     192.168.43.45:6443      ESTABLISHED 5109/kube-proxy     
tcp        0      0 192.168.43.45:46764     192.168.43.45:6443      ESTABLISHED 4438/kube-scheduler 
tcp6       0      0 :::6443                 :::*                    LISTEN      4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.176:37346    ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.176:58268    ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.45:46760     ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.136:59222    ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      10.244.0.28:54622       ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      10.244.0.29:33264       ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.45:47050     ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.45:46962     ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 ::1:59460               ::1:6443                ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.136:59202    ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.136:38694    ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.176:58290    ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.45:46764     ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.45:39168     ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 192.168.43.45:6443      192.168.43.45:46918     ESTABLISHED 4307/kube-apiserver 
tcp6       0      0 ::1:6443                ::1:59460               ESTABLISHED 4307/kube-apiserver 

#The root user's home directory contains the k8s kubeconfig file, which embeds the CA certificate, the client certificate, and the client private key.
[root@k8smaster ~]# cat .kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1UQXlPREV3TlRBek9Gb1hEVEk1TVRBeU5URXdOVEF6T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSzQwCjhheHhqaS9PY1lFOHpEWmpyYjBrdXBaZHB6S2xYdGJ6bStrcXFtbkRJL01mNWN1a21ITlAxT295aDlUZVdkS1MKVDBLWkZuekR4TklyQTBYOEZVRzBpbll3QkswQnQrUEtjakVMUkNuUVcxWjJFdU9iOHBkZEtsS29ra0IwU0psSApOUGM1L0RqL2Z1ZHRxaEd0Ym1mc0RzUEMwZlFzaGZNLzkxYTlZZXBWOE41K0phaWxma0R6SHhIbG9aNmZid04wCnRiZm9JT2J6TjZwR2FCTFlnbkNJdzhPanBnelNZZ2U1N3pIR2tMQ1B3Q29sWUpJdFFtR0RKcUdDZUUzY1UvYW0KQ0ZRT0dRczE2R0pJbEFkNm5Tc1RSTElIRFRjektIb2diZFJUR3Nqb3NoV0xhZk50Y01PNEcxY04zVXBDNExtcAorZVNPMThMcDRnMXRPWlUyZHNNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLZkJhcVlpMXp2N2hSeFdsUHpuYmJOS2lkQkkKUjBydWdxenNDZU1FUnJCazVuMndaU2dJN1FwTml3UGk1ZXk0ZEVJbTc4NzBJTEJQUG9kbGt5a0haUkw1bWhOcwpxd1JjbkZOV3JKVkFuTldtSURSZ3JCZHdsOGFCaTBoSVJacGR1Q3lXU3RlV2RXME5zNEgrTVpxQUZrRG01d25rCmRqQlFDelZaR2pQMHBWZnJKUG03WCtmNXRKdUhmR0RISTA1ZC9uRVlmYnBSdXNqK1c5am56eE1PZjFLbFNSaUgKOE1PdElXMmtSbE5CM21zYWJ6MFFSWC9kdE40c1FOaitad2lBWmUwTjRKT3p5bXR0eXEzTktPNUNGamhmTWNuYgp3cTJmUVdPb1lWL1Fja1o4ckFudTNCQzN0eDhLS2lxMmExNjN5TzFrYktubE9HSzU3a244ZEJNL2ZWQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.43.45:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
        client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJZDBGd2gzUkhQZmt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4T1RFd01qZ3hNRFV3TXpoYUZ3MHlNREV3TWpjeE1EVXdOREZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXVIOEc3TTVhSFhmNXB1NzYKTWRNMmdRYzQ2d0wrL0lGZE5QbENzRXYwSEgzWXVrV2JDYWpMMU5WZ0QrbzdXbGlvRFZxRWwySGh1VW1Nb2NGcgpVZzEzTGlWLzdzZjJaNkFEQ1JIRnhpdEpsWEFDS2FOS3ZIRU5HRFVlRmROVHNVWUVvelhmRGE1Q3V1eC8yUUt5Ck5JbjFxdDA0ZGp1Z3JkK0V6eVB1cmYyZ295dG43WHJpcW1lbHNVanlJZTM3K2ZPbDBzNENxOEdublJPTE1KU2MKNGxsNmxmMlVXbHpNVUxPSmpCTEYvYVErRzIxaWc3eVZkUUJ0VHNpVUc1MkhzZllQZnMxVmNnNDNIaVBGZ2pmaAo4cTFQSFUwU0NmZVgrdStSMWVOT2pMd3RqMzVMcEdVNWZDYW82SDV4WnU1RjJROE1iSmU1L0ZyWCtERXYxS3RKCmdMczF3d0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEUjdyMDBoTE9DYWYyOFM5UHBPM2E4Y3FIOUllaVZ6ell5dAplZEk2bGx2QWJtS3hLRWxueWwxM2toaFU4MVRmcnprbXZublJaS25TNm41TFZDYkthQUF0RXNwK1Y3bXNnOVNMCjBzVkdIb0MrM1kyVFR1UVQ5VHVWM1FoRnNQOHFJcmR3Ni9kck1PdEdjMjE0dmlqTC9QV3RlWFh1VkZuZmd3QVUKanp4cGkvYXZ6bjVhd1hyK3hkTENBdmJJMW1Qb3RwalFvZUkyZWpWS3NnNlFWY2lsbVF3MisxOStoYThwSjFLSwo3MjRYendyZFp1eWRhbWZ2aFBiaDhkd3JNbGl4YVlKYWpwc0wxbkxPaHljeTJ2MjBEYUExeVFoR1FRdEx6OC9GCkZ4Ym9QUXg3Q0ZyYkVvdDZkZ3JhSkFBSmFGYllpRW9YUzNKREVSRlNTdElDVHM5OWVJMD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdUg4RzdNNWFIWGY1cHU3Nk1kTTJnUWM0NndMKy9JRmROUGxDc0V2MEhIM1l1a1diCkNhakwxTlZnRCtvN1dsaW9EVnFFbDJIaHVVbU1vY0ZyVWcxM0xpVi83c2YyWjZBRENSSEZ4aXRKbFhBQ0thTksKdkhFTkdEVWVGZE5Uc1VZRW96WGZEYTVDdXV4LzJRS3lOSW4xcXQwNGRqdWdyZCtFenlQdXJmMmdveXRuN1hyaQpxbWVsc1VqeUllMzcrZk9sMHM0Q3E4R25uUk9MTUpTYzRsbDZsZjJVV2x6TVVMT0pqQkxGL2FRK0cyMWlnN3lWCmRRQnRUc2lVRzUySHNmWVBmczFWY2c0M0hpUEZnamZoOHExUEhVMFNDZmVYK3UrUjFlTk9qTHd0ajM1THBHVTUKZkNhbzZINXhadTVGMlE4TWJKZTUvRnJYK0RFdjFLdEpnTHMxd3dJREFRQUJBb0lCQVFDZEFDd0tkSWVuTUNPWQo5U0NnS2RibDhobHpsRGNjOVpFMXRUQVZDbTJQbVdCSEUxaWQzYkNuUzNUVjFrUHYzQ1lXUndNeU42OTRsNmcvCk5uTjNmZEgveVJXWFF6N2lhLzVwUjJDQUJQSTNZdnZVSnd0QVZRd0puNW9jaEp0aDdlMmdYZ1dVaE1od2ZUVkcKbk02OWV2RStGOGNtaGhOMEl4UEhtaEpRcWRaN1F0RG1NSlU3T24wUi81SW5IV3BtZlFOUGdOV3hla2MrNGhzZQpWcGR5WTUweGxYVThQUGN2ZFUzVS9MTEt4MWd2VVgrZU1HbVo3dWtFeWNVSTVEQSsyRm45L2lSUTZGOFAzWUgwCkdIQlVIQllOck9LN2FhTm5hOU0vbWlvZnhWVnorMmxuSHVFREloemZPemFWZEZjM1FLaDE5REdiYUExV1VCUlcKWmF4T1hVOFJBb0dCQU5kblZUM29kbHh5VGU3VnlPRGtlVENrY1BaSTAyREpuL0N5cVVjRDNqM0x5VGFNTWx0YgpnMmhtTnc2ZklPaDM3OUVkakd0N0VGa1YrdzJQOU5wMklJMUx3U1ZULzVtTEQ1Z3hyRFBseHdCdjNuMTM2dEdxCjZVQ2cydkkwV3lQVkdLZDJPekl1Nkp3UTNaYnNkUlBadFNaTHgvcDByWGF5aTdWQzlpeE1JRmdOQW9HQkFOdEUKZlhMN1A3YmNGYVVldldlRmpxNzJFVTJobTBwTTBsUWFPcGViU0xOTDF4MmR3WGtzSS83OFZRZDlpSlRWcDVDVApDUGtkbzMvaVBGWDFmdmt1QmhIbXM1RUtmWHJxdEdXam94ajVVTXE4U24xY1ZTMlNHY2srNHRkTkFiOVFTTHdNCkh4RDZYVUUvZW4wYjZNemlkZVJNWmRmY1BiSHI3OFlyUGtBRzlnRVBBb0dBTUJvQVBCbnNUSXF1QXBhMURCdVoKUUphSUwwZG1CS2doMGxOalg5dHFScXg2VzNjRlM4ZHMyZVJ4aVE5Wi91L0JteFlaSkd0UDVFVDNVamtDZWNLRgpWR2hGVW51bWlYZzNYRXBEWlRkN3NBcExTZ044YWFQY0FMV3JEd2xJRFFGcVJ3TXRCdkRZdXZrOU1wWE5NMGliCm5saXY2S3NqalcwanE2K3ZYNGNFZGdVQ2dZQTZSTkV4cFNNaGJRc3pmaC9IU3U3SUFBeEpIUkV2aFlxL1h0a0QKUVBqbzdOYVZ3RDZSL1BEejZncU9td1dZeDg1bjFTc2xTSU1Ta1FTSHMxMnl5bEJDb1pSR2p3c1poeFc1ak9yaQowQjV3UWVscHR3Zkx2Ryt0MDFCazlzbm9GV1crMDFuT0lUcDNCRytBbjlJVjRIaUQydW1WbTZtcGhwR0prQ1JTCno0YkFjUUtCZ0V2THM0SjMwaCtIb0VUODdOVFQ5WTBWYzBGRFBXWG0xcDdmKzJKa1pzejJ0dUhOOUcvR0RIUU0KY1N6Tll2MFI4azJqR0tnQjdnSVBDWlFkN29YVW81YU5maXVtZ2dDQUJ0RjV3eWFBWGtFWTRxVExabWRTbm5peQpUcktMa2l3aXRRd2lmckg4bHo1TkxTL2tGcXFKMXRuOWhJWWRDMmtsUHFDdnVjbUliMk1ECi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
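#If you want to check which identity the embedded client certificate represents, you can decode it and print its subject (an optional quick check, assuming openssl is installed; on a kubeadm cluster the subject is typically O=system:masters, CN=kubernetes-admin):
[root@k8smaster ~]# grep client-certificate-data .kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -subject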

#We can operate on resources with curl against the API server's URLs, but curl carries no k8s credentials, so we use kubectl proxy to expose the API server on local port 8080; requests sent through the proxy do not have to handle the HTTPS client authentication themselves.
[root@k8smaster ~]# kubectl proxy --port=8080 &
Starting to serve on 127.0.0.1:8080
[root@k8smaster ~]# ss -tnl | grep 8080
LISTEN     0      128    127.0.0.1:8080                     *:*                  

#List the namespaces
[root@k8smaster ~]# curl http://localhost:8080/api/v1/namespaces
{
  "kind": "NamespaceList",
  "apiVersion": "v1",
  "metadata": {
    "selfLink": "/api/v1/namespaces",
    "resourceVersion": "242473"
  },
  "items": [
    {
      "metadata": {
        "name": "default",
        "selfLink": "/api/v1/namespaces/default",
        "uid": "7d5c1c7e-babc-4a03-9d5a-1f71a8f43bc3",
        "resourceVersion": "150",
        "creationTimestamp": "2019-10-28T10:51:23Z"
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    },
    {
      "metadata": {
        "name": "kube-node-lease",
        "selfLink": "/api/v1/namespaces/kube-node-lease",
        "uid": "bfe8130f-8dd9-4517-8eaa-24fdd4be2d8e",
        "resourceVersion": "38",
        "creationTimestamp": "2019-10-28T10:51:20Z"
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    },
    {
      "metadata": {
        "name": "kube-public",
        "selfLink": "/api/v1/namespaces/kube-public",
        "uid": "aa0e095d-57d3-433d-908e-b688572975c0",
        "resourceVersion": "37",
        "creationTimestamp": "2019-10-28T10:51:20Z"
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    },
    {
      "metadata": {
        "name": "kube-system",
        "selfLink": "/api/v1/namespaces/kube-system",
        "uid": "970dbea7-9819-4096-9769-2a588921b739",
        "resourceVersion": "35",
        "creationTimestamp": "2019-10-28T10:51:20Z"
      },
      "spec": {
        "finalizers": [
          "kubernetes"
        ]
      },
      "status": {
        "phase": "Active"
      }
    }
  ]
}

#List all deployments in the kube-system namespace.
[root@k8smaster ~]# kubectl get deploy -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   2/2     2            2           63d
[root@k8smaster ~]# curl http://localhost:8080/apis/apps/v1/namespaces/kube-system/deployments
HTTP Request verbs (HTTP actions):
  • get
  • post
  • put
  • delete
API Request verbs (actions as seen by the K8s API server; an example mapping follows this list):
  • get
  • list
  • create
  • update
  • patch
  • watch
  • proxy
  • redirect
  • delete
  • deletecollection
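A rough illustration of how HTTP verbs map to API request verbs (using the kubectl proxy started earlier; "mypod" is only a placeholder name):
  • GET on a collection URL -> list:   curl http://localhost:8080/api/v1/namespaces/default/pods
  • GET on a single object  -> get:    curl http://localhost:8080/api/v1/namespaces/default/pods/mypod
  • DELETE on an object     -> delete: curl -X DELETE http://localhost:8080/api/v1/namespaces/default/pods/mypod
  • GET with ?watch=true    -> watch:  curl "http://localhost:8080/api/v1/namespaces/default/pods?watch=true"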
Users:
  • User Account: intended for humans and generally managed by a service outside of Kubernetes, for example keys distributed by an administrator, a user store such as Keystone, or even a file containing a list of usernames and passwords. Kubernetes has no object that represents this kind of account, so such users cannot be added to the cluster through its API.
  • Service Account: an account managed by the Kubernetes API, used to provide an identity for the service processes inside Pods when they access the Kubernetes API. A Service Account is bound to a specific namespace; it is created by the API server automatically or manually through API calls, and carries a set of credentials, stored as a Secret, for accessing the API server.
ServiceAccount:
#Create a ServiceAccount
[root@k8smaster ~]# kubectl create serviceaccount admin
serviceaccount/admin created
[root@k8smaster ~]# kubectl get sa
NAME      SECRETS   AGE
admin     1         5s
default   1         63d
[root@k8smaster ~]# kubectl describe sa admin
Name:                admin
Namespace:           default
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   admin-token-bs7sd
Tokens:              admin-token-bs7sd
Events:              <none>
[root@k8smaster ~]# kubectl  get secret
NAME                  TYPE                                  DATA   AGE
admin-token-bs7sd     kubernetes.io/service-account-token   3      81s
default-token-kk2fq   kubernetes.io/service-account-token   3      63d
mysql-root-password   Opaque                                1      10d
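#The ServiceAccount's secret contains a token that can be used as a bearer token against the API server. A quick way to look at it (optional, shown only for illustration; not part of the original steps):
[root@k8smaster ~]# kubectl get secret admin-token-bs7sd -o jsonpath='{.data.token}' | base64 -d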

#Have a Pod use the ServiceAccount we just created
[root@k8smaster ~]# vim Pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-sa-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
  annotations:
    k8s.com/created-by: "cluster admin"
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
  serviceAccountName: admin
[root@k8smaster ~]# kubectl apply -f Pod-demo.yaml 
pod/pod-sa-demo created
[root@k8smaster ~]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
pod-sa-demo   1/1     Running   0          102s
[root@k8smaster ~]# kubectl describe pod pod-sa-demo
Name:         pod-sa-demo
Namespace:    default
Priority:     0
Node:         k8snode1/192.168.43.136
Start Time:   Tue, 31 Dec 2019 17:15:18 +0800
Labels:       app=myapp
                tier=frontend
Annotations:  k8s.com/created-by: cluster admin
            kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"k8s.com/created-by":"cluster admin"},"labels":{"app":"myapp","tier":"frontend"...
Status:       Running
IP:           10.244.1.139
IPs:          <none>
Containers:
  myapp:
    Container ID:   docker://08de4d8f62d0c5b900664fda23e7243a94ef925d2003a62f2fe95876bf034c79
    Image:          ikubernetes/myapp:v1
    Image ID:       docker-pullable://ikubernetes/myapp@sha256:40ccda7b7e2d080bee7620b6d3f5e6697894befc409582902a67c963d30a6113
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Tue, 31 Dec 2019 17:15:19 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from admin-token-bs7sd (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  admin-token-bs7sd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  admin-token-bs7sd    #matches the admin ServiceAccount's Tokens shown above
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  113s  default-scheduler  Successfully assigned default/pod-sa-demo to k8snode1
  Normal  Pulled     113s  kubelet, k8snode1  Container image "ikubernetes/myapp:v1" already present on machine
  Normal  Created    113s  kubelet, k8snode1  Created container myapp
  Normal  Started    113s  kubelet, k8snode1  Started container myapp
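#You can also confirm from inside the Pod that the ServiceAccount credentials were mounted (an optional check; the mount path comes from the describe output above, and the directory normally contains ca.crt, namespace and token):
[root@k8smaster ~]# kubectl exec pod-sa-demo -- ls /var/run/secrets/kubernetes.io/serviceaccount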

#View the current kubeconfig (cluster, context and user information)
[root@k8smaster ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.43.45:6443   #the cluster's API server URL
  name: kubernetes
contexts:
- context:
    cluster: kubernetes                  #cluster name
    user: kubernetes-admin               #user name
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes   #the context (user@cluster) currently in use
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
Create a custom user and authenticate it:
[root@k8smaster ~]# cd /etc/kubernetes/
[root@k8smaster kubernetes]# ls
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf
[root@k8smaster kubernetes]# cd pki/
[root@k8smaster pki]# ls
apiserver.crt              apiserver.key                 ca.crt  front-proxy-ca.crt      front-proxy-client.key
apiserver-etcd-client.crt  apiserver-kubelet-client.crt  ca.key  front-proxy-ca.key      sa.key
apiserver-etcd-client.key  apiserver-kubelet-client.key  etcd    front-proxy-client.crt  sa.pub

#Create the client private key and certificate.
[root@k8smaster pki]# (umask 077; openssl genrsa -out k8s.key 2048)
Generating RSA private key, 2048 bit long modulus
.....................+++
.....+++
e is 65537 (0x10001)
[root@k8smaster pki]# openssl req -new -key k8s.key -out k8s.csr -subj "/CN=k8s"
[root@k8smaster pki]# openssl x509 -req -in  k8s.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out k8s.crt -days 365
Signature ok
subject=/CN=k8s
Getting CA Private Key
[root@k8smaster pki]# ls
apiserver.crt              apiserver.key                 ca.crt  etcd                front-proxy-client.crt  k8s.csr  sa.pub
apiserver-etcd-client.crt  apiserver-kubelet-client.crt  ca.key  front-proxy-ca.crt  front-proxy-client.key  k8s.key
apiserver-etcd-client.key  apiserver-kubelet-client.key  ca.srl  front-proxy-ca.key  k8s.crt                 sa.key
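#Optionally verify the freshly signed certificate before loading it into the kubeconfig (just a sanity check, not part of the original steps):
[root@k8smaster pki]# openssl x509 -in k8s.crt -noout -subject -dates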

[root@k8smaster pki]# kubectl config set-credentials k8s --client-certificate=./k8s.crt --client-key=./k8s.key --embed-certs=true
User "k8s" set.
[root@k8smaster pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.43.45:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: k8s
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

#Define a context that ties the new user to the cluster
[root@k8smaster pki]# kubectl config set-context k8s@kubernetes --cluster=kubernetes --user=k8s 
Context "k8s@kubernetes" created.
[root@k8smaster pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.43.45:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: k8s
  name: k8s@kubernetes
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: k8s
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

#Switch to the account we just created
[root@k8smaster pki]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
#The request is denied, because the new account has not been granted any permissions yet.
[root@k8smaster pki]# kubectl get pods    
Error from server (Forbidden): pods is forbidden: User "k8s" cannot list resource "pods" in API group "" in the namespace "default"

#Switch back to the admin account
[root@k8smaster pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
#Now we have permissions again.
[root@k8smaster pki]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
pod-sa-demo   1/1     Running   0          90m

#Generate a separate kubeconfig file containing a cluster definition. (The default kubeconfig is .kube/config in the user's home directory.)
[root@k8smaster ~]# kubectl config set-cluster mycluster --kubeconfig=/tmp/test.conf --server="https://192.168.43.45:6443"  --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true
Cluster "mycluster" set.
[root@k8smaster ~]# kubectl config view --kubeconfig=/tmp/test.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.43.45:6443
  name: mycluster
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
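
#The same --kubeconfig flag can be used to finish building the standalone file, e.g. adding a user and a context and then selecting it (a sketch of the remaining steps using the k8s certificate created earlier; not executed in this walkthrough):
[root@k8smaster ~]# kubectl config set-credentials k8s --client-certificate=/etc/kubernetes/pki/k8s.crt --client-key=/etc/kubernetes/pki/k8s.key --embed-certs=true --kubeconfig=/tmp/test.conf
[root@k8smaster ~]# kubectl config set-context k8s@mycluster --cluster=mycluster --user=k8s --kubeconfig=/tmp/test.conf
[root@k8smaster ~]# kubectl config use-context k8s@mycluster --kubeconfig=/tmp/test.conf
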
The RBAC authorization plugin:

 RBAC (Role-Based Access Control) is a modern, flexible and widely used access control mechanism. Instead of granting permissions directly to individual users as traditional access control does, RBAC attaches permissions to a "role" and then binds users to that role; a bound user thereby holds exactly the permissions of the role.

Role and ClusterRole:
  • A Role is namespace-scoped and defines a set of permissions on resources within that namespace.
  • A ClusterRole defines a set of permissions at the cluster level. Both are standard API resource types.
  • In general, a ClusterRole's grants apply across the whole cluster, so it is typically used for things a Role cannot cover: cluster-scoped resources (such as Nodes), non-resource endpoints (such as /healthz), and resources that must be reachable in all namespaces (for example, the permission to read a given resource in any namespace).
RoleBinding and ClusterRoleBinding:
  • A RoleBinding grants the permissions of a role to one or more users; it belongs to, and only takes effect within, a single namespace. It may reference either a Role in the same namespace or a cluster-level ClusterRole.
  • A ClusterRoleBinding grants the permissions defined in a ClusterRole to one or more users; it can only reference a cluster-level ClusterRole.
  • A namespace may contain multiple Role and RoleBinding objects, and likewise the cluster may hold multiple ClusterRole and ClusterRoleBinding objects. A single account can be associated with several roles through RoleBindings and/or ClusterRoleBindings and thus hold multiple grants at once.
Role & RoleBinding (binding a user to a Role with a RoleBinding):
#Create a Role that grants permission to view Pods in the default namespace.
[root@k8smaster ~]# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: null
  name: pods-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@k8smaster ~]# kubectl create role pods-reader --verb=get,list,watch --resource=pods --dry-run -o yaml > role-demo.yaml
[root@k8smaster ~]# vim role-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pods-reader
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@k8smaster ~]# kubectl apply -f role-demo.yaml 
role.rbac.authorization.k8s.io/pods-reader created
[root@k8smaster ~]# kubectl get role
NAME          AGE
pods-reader   6s
[root@k8smaster ~]# kubectl describe role pods-reader
Name:         pods-reader
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"pods-reader","namespace":"default"},"rules...
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  pods       []                 []              [get list watch]

#Create a RoleBinding that binds the k8s user we created earlier to the Role.
[root@k8smaster ~]# kubectl create rolebinding k8s-read-pods --role=pods-reader --user=k8s --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: k8s-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl create rolebinding k8s-read-pods --role=pods-reader --user=k8s --dry-run -o yaml > rolebinding-demo.yaml
[root@k8smaster ~]# vim rolebinding-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8s-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pods-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl apply -f rolebinding-demo.yaml 
rolebinding.rbac.authorization.k8s.io/k8s-read-pods created
[root@k8smaster ~]# kubectl get rolebinding
NAME            AGE
k8s-read-pods   20s
[root@k8smaster ~]# kubectl describe rolebinding k8s-read-pods
Name:         k8s-read-pods
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"k8s-read-pods","namespace":"default...
Role:
  Kind:  Role
  Name:  pods-reader
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  k8s   

#Verify the permissions
[root@k8smaster ~]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
[root@k8smaster ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
pod-sa-demo   1/1     Running   1          21h
#But this Role only grants permission to view Pods, so listing namespaces is denied.
[root@k8smaster ~]# kubectl get namespace
Error from server (Forbidden): namespaces is forbidden: User "k8s" cannot list resource "namespaces" in API group "" at the cluster scope
#Nor can we see Pods in other namespaces.
[root@k8smaster ~]# kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "k8s" cannot list resource "pods" in API group "" in the namespace "kube-system"
ClusterRole & ClusterRoleBinding (binding a user to a ClusterRole with a ClusterRoleBinding):
#Create a ClusterRole that grants permission to view Pods in all namespaces.
[root@k8smaster ~]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@k8smaster ~]# kubectl create clusterrole cluster-reader --verb=get,list,watch --resource=pods -o yaml --dry-run 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: cluster-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@k8smaster ~]# kubectl create clusterrole cluster-reader --verb=get,list,watch --resource=pods -o yaml --dry-run > clusterrole-demo.yaml
[root@k8smaster ~]# vim clusterrole-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-reader
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
[root@k8smaster ~]# kubectl apply -f clusterrole-demo.yaml 
clusterrole.rbac.authorization.k8s.io/cluster-reader created

#Create a ClusterRoleBinding and bind it to our k8s user.
[root@k8smaster ~]# kubectl get rolebinding
NAME            AGE
k8s-read-pods   21m
[root@k8smaster ~]# kubectl delete rolebinding k8s-read-pods
rolebinding.rbac.authorization.k8s.io "k8s-read-pods" deleted
[root@k8smaster ~]# kubectl create clusterrolebinding k8s-read-all-pods --clusterrole=cluster-reader --user=k8s --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: k8s-read-all-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl create clusterrolebinding k8s-read-all-pods --clusterrole=cluster-reader --user=k8s --dry-run -o yaml  > clusterrolebinding-demo.yaml
[root@k8smaster ~]# vim clusterrolebinding-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: k8s-read-all-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl apply -f clusterrolebinding-demo.yaml 
clusterrolebinding.rbac.authorization.k8s.io/k8s-read-all-pods created
[root@k8smaster ~]# kubectl describe clusterrolebinding k8s-read-all-pods
Name:         k8s-read-all-pods
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRoleBinding","metadata":{"annotations":{},"name":"k8s-read-all-pods"},"ro...
Role:
  Kind:  ClusterRole
  Name:  cluster-reader
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  k8s   

#Verify the permissions
[root@k8smaster ~]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
[root@k8smaster ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
pod-sa-demo   1/1     Running   1          21h
#This time the k8s user can view Pods in other namespaces.
[root@k8smaster ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS             RESTARTS   AGE
coredns-bf7759867-8h4x8             1/1     Running            6          64d
coredns-bf7759867-slmsz             1/1     Running            6          64d
etcd-k8smaster                      1/1     Running            18         64d
kube-apiserver-k8smaster            1/1     Running            61         63d
kube-controller-manager-k8smaster   1/1     Running            16         64d
kube-flannel-ds-amd64-6zhtw         1/1     Running            15         64d
kube-flannel-ds-amd64-wnh9k         1/1     Running            6          64d
kube-flannel-ds-amd64-wqvz9         1/1     Running            15         64d
kube-proxy-2j8w9                    1/1     Running            15         64d
kube-proxy-kqxlq                    1/1     Running            14         64d
kube-proxy-nb82z                    1/1     Running            6          64d
kube-scheduler-k8smaster            1/1     Running            16         64d
traefik-ingress-controller-8wbtb    0/1     CrashLoopBackOff   343        15d
traefik-ingress-controller-bmpbk    0/1     CrashLoopBackOff   343        15d
#But it only has permission to view Pods.
[root@k8smaster ~]# kubectl get service -n kube-system
Error from server (Forbidden): services is forbidden: User "k8s" cannot list resource "services" in API group "" in the namespace "kube-system"
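#If the k8s user should also be able to read Services, the ClusterRole could simply list more resources (a hypothetical variation, not applied in this walkthrough):
[root@k8smaster ~]# kubectl create clusterrole cluster-reader --verb=get,list,watch --resource=pods,services --dry-run -o yaml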
ClusterRole & RoleBinding (referencing a ClusterRole from a namespaced RoleBinding):
[root@k8smaster ~]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@k8smaster ~]# kubectl delete clusterrolebinding k8s-read-all-pods
clusterrolebinding.rbac.authorization.k8s.io "k8s-read-all-pods" deleted
[root@k8smaster ~]# kubectl create rolebinding k8s-read-pods --clusterrole=cluster-reader --user=k8s --dry-run -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: k8s-read-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl create rolebinding k8s-read-pods --clusterrole=cluster-reader --user=k8s --dry-run -o yaml > rolebinding-clusterrole-demo.yaml
[root@k8smaster ~]# vim rolebinding-clusterrole-demo.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k8s-read-pods
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s
[root@k8smaster ~]# kubectl apply -f rolebinding-clusterrole-demo.yaml 
rolebinding.rbac.authorization.k8s.io/k8s-read-pods created
[root@k8smaster ~]# kubectl describe rolebinding k8s-read-pods
Name:         k8s-read-pods
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"k8s-read-pods","namespace":"default...
Role:
  Kind:  ClusterRole
  Name:  cluster-reader
Subjects:
  Kind  Name  Namespace
  ----  ----  ---------
  User  k8s   

#Verify the permissions: because the ClusterRole is referenced by a RoleBinding in the default namespace, the k8s user's read access is limited to that namespace.
[root@k8smaster ~]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
[root@k8smaster ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
pod-sa-demo   1/1     Running   1          21h
[root@k8smaster ~]# kubectl get pods -n kube-system
Error from server (Forbidden): pods is forbidden: User "k8s" cannot list resource "pods" in API group "" in the namespace "kube-system"
The admin role:

 The built-in admin ClusterRole, when bound through a RoleBinding in the default namespace, makes its subject an administrator of the default namespace: it can create, read, update and delete all resources there, but grants no permissions outside that namespace.

#Create a RoleBinding in the default namespace that binds the k8s user to the built-in admin ClusterRole
[root@k8smaster ~]# kubectl create rolebinding default-ns-admin --clusterrole=admin --user=k8s 
rolebinding.rbac.authorization.k8s.io/default-ns-admin created
[root@k8smaster ~]# kubectl config use-context k8s@kubernetes
Switched to context "k8s@kubernetes".
#Can read
[root@k8smaster ~]# kubectl get pods 
NAME          READY   STATUS    RESTARTS   AGE
pod-sa-demo   1/1     Running   1          21h
[root@k8smaster ~]# cd /data/configmap/
#Can create
[root@k8smaster configmap]# kubectl apply -f .
pod/pod-cm created
pod/pod-se created
[root@k8smaster configmap]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
pod-cm        1/1     Running   0          90s
pod-sa-demo   1/1     Running   1          21h
pod-se        1/1     Running   0          90s
#Can delete
[root@k8smaster configmap]# kubectl delete -f .
pod "pod-cm" deleted
pod "pod-se" deleted
[root@k8smaster configmap]# kubectl get pod
NAME          READY   STATUS    RESTARTS   AGE
pod-sa-demo   1/1     Running   1          21h
#But has no permissions in other namespaces
[root@k8smaster ~]# kubectl get pods  -n kube-system
Error from server (Forbidden): pods is forbidden: User "k8s" cannot list resource "pods" in API group "" in the namespace "kube-system"
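#To see exactly which permissions the built-in admin ClusterRole carries, you can inspect it as the cluster administrator (output omitted here for brevity):
[root@k8smaster ~]# kubectl describe clusterrole admin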
The cluster-admin role:

 cluster-admin is the cluster administrator role; it carries administrative permissions over the entire cluster.

[root@k8smaster configmap]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@k8smaster configmap]# kubectl get clusterrolebinding cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2019-10-28T10:51:21Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
  resourceVersion: "96"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
  uid: 78362692-0520-436c-bb3d-a5a5c6e0a8bd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:masters
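
#By the same pattern, a user such as k8s could be made a full cluster administrator with a ClusterRoleBinding to cluster-admin (shown only as a sketch and not executed here, since it grants unrestricted access):
[root@k8smaster configmap]# kubectl create clusterrolebinding k8s-cluster-admin --clusterrole=cluster-admin --user=k8s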
Reference: https://blog.csdn.net/IT8421/article/details/89389609