Deploying Kubernetes (K8S)

Kubernetes official site: https://kubernetes.io/
Docker CE install guide: https://docs.docker.com/install/linux/docker-ce/centos/
Official kubeadm install guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Introduction

K8S exists to manage containers.

K8S provides:
	1. Automated deployment
	2. Container management
	3. Scaling

  • What is inconvenient about using Docker alone?
    Each container is one process; starting them one by one is tedious.

  • For convenience, we use K8S.

We use K8S version 1.13.


  • kubeadm: the command to bootstrap the cluster; the program that bootstraps and initializes K8S
  • kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers; it runs as a background daemon
  • kubectl: the command line util to talk to your cluster; the client used for day-to-day operations

Environment preparation

hadoop001  : docker 、harbor
hadoop002  : docker
hadoop003  : docker

Deploy the K8S cluster

Preparation: the hosts file. Anything involving a cluster requires configuring /etc/hosts on every node:

172.19.242.226  hadoop002
172.19.242.225  hadoop001
172.19.242.227  hadoop003
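
One way to apply these entries on every node (a minimal sketch; skip any line already present in /etc/hosts):

cat >> /etc/hosts <<EOF
172.19.242.226  hadoop002
172.19.242.225  hadoop001
172.19.242.227  hadoop003
EOF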

Deploy Docker on all three machines

See these posts for the Docker deployment details:

https://blog.csdn.net/weixin_43212365/article/details/105612828
https://blog.csdn.net/weixin_43212365/article/details/105641773

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/
yum install docker-ce-18.06.1.ce -y
systemctl enable docker && systemctl start docker

An early problem: Docker and K8S version incompatibility.

Pitfall: https://stackoverflow.com/questions/54330068/kubelet-saying-node-master01-not-found

This walkthrough, however, did not use the pinned version above; Docker was installed straight from the official guide (https://docs.docker.com/install/linux/docker-ce/centos/), which resulted in:

Server: Docker Engine - Community
 Engine:
  Version:          19.03.1

Deploy Harbor (a private Docker registry) on hadoop001

[root@hadoop002 ~]# mkdir -p /etc/docker/certs.d/hadoop001

[root@hadoop001 ~]# scp  /etc/docker/certs.d/hadoop001/* hadoop002:/etc/docker/certs.d/hadoop001/

[root@hadoop002 ~]# cat  /etc/docker/daemon.json
{
  "insecure-registries" : ["172.19.242.225"],
  "registry-mirrors": ["https://hadoop001"]
}
[root@hadoop002 ~]# systemctl restart docker
//Reminder: before restarting docker, if this node hosts Harbor, run docker-compose stop first; in other words, check with docker ps before restarting. A sketch of that sequence follows below.
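
A minimal sketch of the restart sequence on the Harbor node, assuming Harbor was installed with docker-compose under /root/harbor (hypothetical path; adjust to your install directory):

cd /root/harbor        # hypothetical Harbor install directory
docker-compose stop    # stop the Harbor containers first
systemctl restart docker
docker-compose start   # bring Harbor back up
docker ps              # verify the Harbor containers are running again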

Do the same on the third machine.
What to verify:
	1. All three machines can log in to the Harbor service on the first machine.
That is:
	logging in to our private Harbor registry works from all three machines.
[root@hadoop001 harbor]# docker login -u admin -p Harbor12345 172.19.242.225
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@hadoop001 harbor]# 

[root@hadoop002 hadoop001]# cd /root/
[root@hadoop002 ~]# docker login -u admin -p Harbor12345 172.19.242.225
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@hadoop002 ~]# 

[root@hadoop003 ~]# docker login -u admin -p Harbor12345 172.19.242.225
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@hadoop003 ~]# 

Disable the firewall

Do this on all three machines.

If the firewall is enabled on a host, the ports required by each Kubernetes component must be opened; see the "Check required ports" section of Installing kubeadm. For simplicity, we disable the firewall on every node:

systemctl stop firewalld.service 
systemctl disable firewalld.service 
firewall-cmd --state 

Disable SELinux

Do this on all three machines.

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

Disable swap (required by the official docs)

Do this on all three machines.

swapoff -a
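
Note that swapoff -a only lasts until the next reboot; to keep swap off permanently, the swap entry in /etc/fstab is commonly commented out as well (an extra step, not in the original commands):

sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap mount line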

Fix bridged-traffic routing issues

Do this on all three machines.

echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.d/k8s.conf 
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.d/k8s.conf 
echo "vm.swappiness=0" >> /etc/sysctl.d/k8s.conf 
modprobe br_netfilter

//Take effect (no reboot required)
sysctl -p /etc/sysctl.d/k8s.conf

Or use the following commands instead:
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system

Note:
	the cat <<EOF > usage simply writes the lines above,
			net.bridge.bridge-nf-call-ip6tables = 1
			net.bridge.bridge-nf-call-iptables = 1
	into the k8s.conf file.
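
Either way, the result can be verified afterwards (a quick optional check):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# both should print "= 1"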

Install kubelet, kubeadm and kubectl

Do this on all three machines.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubernetes-cni-0.6.0-0 kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2 

systemctl enable kubelet 
#This registers the service. Only enable it; do NOT start it here. kubelet has no configuration yet and will not run correctly until kubeadm init/join generates one.
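
A quick optional sanity check that the expected versions were installed:

kubeadm version -o short         # should print v1.13.2
kubelet --version                # Kubernetes v1.13.2
kubectl version --client --short
systemctl is-enabled kubelet     # enabled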

# To uninstall:
yum remove -y kubelet kubeadm kubectl
Note:
	We do not use Kubernetes 1.14.1 for now,
	because Docker 18.06.1 has a known issue with it (https://stackoverflow.com/questions/54330068/kubelet-saying-node-master01-not-found),
	so we use Kubernetes 1.13.2.

Pull the images on all three nodes

Reference image repository: https://hub.docker.com/u/hackeruncle

images=(
    kube-apiserver:v1.13.2
    kube-controller-manager:v1.13.2
    kube-scheduler:v1.13.2
    kube-proxy:v1.13.2
    pause:3.1
    etcd:3.2.24
    coredns:1.2.6
)

for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}   
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}   
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} 
done

//The loop above pulls each image from Aliyun, tags it with the official k8s.gcr.io name, and deletes the Aliyun tag. kubeadm init pulls images by their official names and checks the local cache first; only if an image is missing does it fall back to the public registry, which is very slow from here. You can also push the images to a private registry (our Harbor) and pull from there instead.

//The commands above can be wrapped into a shell script, e.g. the sketch below.
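
A minimal script version of the loop (same images and registries as above):

#!/usr/bin/env bash
# pull_k8s_images.sh: pull the v1.13.2 control-plane images from Aliyun
# and re-tag them with the official k8s.gcr.io names.
set -e

REGISTRY=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(
    kube-apiserver:v1.13.2
    kube-controller-manager:v1.13.2
    kube-scheduler:v1.13.2
    kube-proxy:v1.13.2
    pause:3.1
    etcd:3.2.24
    coredns:1.2.6
)

for imageName in "${images[@]}"; do
    docker pull "${REGISTRY}/${imageName}"
    docker tag  "${REGISTRY}/${imageName}" "k8s.gcr.io/${imageName}"
    docker rmi  "${REGISTRY}/${imageName}"
done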

//Alternatively, pull each image directly and tag it with the k8s.gcr.io name:
docker pull hackeruncle/pause:3.1
docker tag  hackeruncle/pause:3.1 k8s.gcr.io/pause:3.1

docker pull hackeruncle/etcd:3.2.24
docker tag hackeruncle/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24

docker pull hackeruncle/coredns:1.2.6
docker tag hackeruncle/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

docker pull hackeruncle/kube-scheduler:v1.13.2
docker tag hackeruncle/kube-scheduler:v1.13.2 k8s.gcr.io/kube-scheduler:v1.13.2

docker pull hackeruncle/kube-controller-manager:v1.13.2
docker tag hackeruncle/kube-controller-manager:v1.13.2 k8s.gcr.io/kube-controller-manager:v1.13.2

docker pull hackeruncle/kube-proxy:v1.13.2
docker tag hackeruncle/kube-proxy:v1.13.2 k8s.gcr.io/kube-proxy:v1.13.2

docker pull hackeruncle/kube-apiserver:v1.13.2
docker tag hackeruncle/kube-apiserver:v1.13.2 k8s.gcr.io/kube-apiserver:v1.13.2


[root@hadoop002 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-apiserver            v1.13.2             177db4b8e93a        15 months ago       181MB
k8s.gcr.io/kube-controller-manager   v1.13.2             b9027a78d94c        15 months ago       146MB
k8s.gcr.io/kube-proxy                v1.13.2             01cfa56edcfc        15 months ago       80.3MB
k8s.gcr.io/kube-scheduler            v1.13.2             3193be46e0b3        15 months ago       79.6MB
k8s.gcr.io/coredns                   1.2.6               f59dcacceff4        17 months ago       40MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        19 months ago       220MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
[root@hadoop002 ~]# 

//The images are not pushed to Harbor here; each node pulls them directly.

Since all of my nodes can access the internet, I skipped the Harbor push to save time; it is straightforward anyway.

etcd: provides the cluster's storage.

Master node: hadoop001

kubeadm init \
--kubernetes-version=v1.13.2 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--ignore-preflight-errors=Swap

Notes:
pod-network-cidr: a pod is the smallest scheduling unit, roughly equivalent to a container (one running process); that is why the Harbor registry runs so many containers.
	10.244.0.0/16: pod IPs will start with 10.244.
service-cidr: the address range used for services (the externally facing network).

[root@hadoop001 harbor]# kubeadm init \
> --kubernetes-version=v1.13.2 \
> --pod-network-cidr=10.244.0.0/16 \
> --service-cidr=10.96.0.0/12 \
> --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.8. Latest validated version: 18.06
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [hadoop001 localhost] and IPs [172.19.242.225 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [hadoop001 localhost] and IPs [172.19.242.225 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [hadoop001 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.19.242.225]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.501523 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "hadoop001" as an annotation
[mark-control-plane] Marking the node hadoop001 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node hadoop001 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 03tvsf.a2qfxgtqsz3d1nsq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.19.242.225:6443 --token 03tvsf.a2qfxgtqsz3d1nsq --discovery-token-ca-cert-hash sha256:1624002031b2913ab207ba7a37db5fb80d38023bc66e11594eeb8f5635c2528d

//Containers spread across different machines in the cluster need a network to talk to each other; most companies use one of the add-ons listed at https://kubernetes.io/docs/concepts/cluster-administration/addons/
//We use the Flannel add-on, although its performance is not the most efficient.

//Following the instructions printed above, this step sets up authentication:
//copy /etc/kubernetes/admin.conf to ~/.kube/config so that the kubectl tool is authorized to access the cluster:
[root@hadoop001 ~]# mkdir -p $HOME/.kube
[root@hadoop001 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@hadoop001 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@hadoop001 ~]# ll $HOME/.kube/config
-rw------- 1 root root 5454 Apr 22 14:09 /root/.kube/config
[root@hadoop001 ~]# 

//Install Flannel to create the pod network
[root@hadoop001 ~]# kubectl apply \
> -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@hadoop001 ~]# 


//You can also download a kube-flannel.yml first (plenty of copies online):
[root@container01 ~]# ll
total 20
drwxr-xr-x 2 root root  4096 Feb  7 10:55 harbor
-rw-r--r-- 1 root root 11289 Feb 10 13:32 kube-flannel.yml
drwxr-xr-x 2 root root  4096 Feb  7 10:45 mysql5.7
[root@container01 ~]# 

[root@container01 ~]# kubectl apply -f kube-flannel.yml   
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@container01 ~]# 

Worker nodes: hadoop002 and hadoop003

Join the other machines to the cluster (hadoop001, the master, cannot join itself):

kubeadm join 172.19.242.225:6443 --token 03tvsf.a2qfxgtqsz3d1nsq --discovery-token-ca-cert-hash sha256:1624002031b2913ab207ba7a37db5fb80d38023bc66e11594eeb8f5635c2528d

//This command was printed by kubeadm init above. If the token has expired, see the sketch below.
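
Bootstrap tokens expire after 24 hours by default; if the join fails because the token is no longer valid, a fresh join command can be generated on the master:

kubeadm token create --print-join-command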

[root@hadoop002 ~]# kubeadm join 172.19.242.225:6443 --token 03tvsf.a2qfxgtqsz3d1nsq --discovery-token-ca-cert-hash sha256:1624002031b2913ab207ba7a37db5fb80d38023bc66e11594eeb8f5635c2528d
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.8. Latest validated version: 18.06
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "172.19.242.225:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.19.242.225:6443"
[discovery] Requesting info from "https://172.19.242.225:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.19.242.225:6443"
[discovery] Successfully established connection with API Server "172.19.242.225:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "hadoop002" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@hadoop002 ~]# 

[root@hadoop003 ~]# kubeadm join 172.19.242.225:6443 --token 03tvsf.a2qfxgtqsz3d1nsq --discovery-token-ca-cert-hash sha256:1624002031b2913ab207ba7a37db5fb80d38023bc66e11594eeb8f5635c2528d
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.8. Latest validated version: 18.06
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[discovery] Trying to connect to API Server "172.19.242.225:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.19.242.225:6443"
[discovery] Requesting info from "https://172.19.242.225:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.19.242.225:6443"
[discovery] Successfully established connection with API Server "172.19.242.225:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "hadoop003" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

[root@hadoop003 ~]# 

Verification

[root@hadoop001 soft]# kubectl get nodes
NAME        STATUS     ROLES    AGE   VERSION
hadoop001   NotReady   master   43m   v1.13.2
hadoop002   NotReady   <none>   44s   v1.13.2
hadoop003   NotReady   <none>   52s   v1.13.2
[root@hadoop001 soft]# 
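
NotReady right after joining is expected; nodes normally flip to Ready once the network add-on's pods are up on them. Two optional ways to watch or debug this:

kubectl get nodes -w                                    # watch until the nodes turn Ready
kubectl describe node hadoop002 | grep -A 8 Conditions  # inspect a node that stays NotReady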

[root@hadoop001 ~]# kubectl get all -n kube-system 
NAME                                    READY   STATUS              RESTARTS   AGE
pod/coredns-86c58d9df4-5rxkv            0/1     ContainerCreating   0          55m
pod/coredns-86c58d9df4-62lq2            0/1     ContainerCreating   0          55m
pod/etcd-hadoop001                      1/1     Running             0          54m
pod/kube-apiserver-hadoop001            1/1     Running             0          55m
pod/kube-controller-manager-hadoop001   1/1     Running             0          55m
pod/kube-proxy-5n9sg                    1/1     Running             0          13m
pod/kube-proxy-724sn                    1/1     Running             0          55m
pod/kube-proxy-gmbnl                    1/1     Running             0          13m
pod/kube-scheduler-hadoop001            1/1     Running             0          55m

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   56m

NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kube-flannel-ds-amd64     0         0         0       0            0           <none>          44m
daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           <none>          44m
daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           <none>          44m
daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           <none>          44m
daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           <none>          44m
daemonset.apps/kube-proxy                3         3         3       3            3           <none>          56m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   0/2     2            0           56m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-86c58d9df4   2         2         0       55m
[root@hadoop001 ~]# 

//-o wide gives more detail
[root@hadoop001 ~]# kubectl get all -n kube-system -o wide
NAME                                    READY   STATUS              RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
pod/coredns-86c58d9df4-5rxkv            0/1     ContainerCreating   0          56m   <none>           hadoop001   <none>           <none>
pod/coredns-86c58d9df4-62lq2            0/1     ContainerCreating   0          56m   <none>           hadoop001   <none>           <none>
pod/etcd-hadoop001                      1/1     Running             0          56m   172.19.242.225   hadoop001   <none>           <none>
pod/kube-apiserver-hadoop001            1/1     Running             0          56m   172.19.242.225   hadoop001   <none>           <none>
pod/kube-controller-manager-hadoop001   1/1     Running             0          56m   172.19.242.225   hadoop001   <none>           <none>
pod/kube-proxy-5n9sg                    1/1     Running             0          14m   172.19.242.228   hadoop002   <none>           <none>
pod/kube-proxy-724sn                    1/1     Running             0          56m   172.19.242.225   hadoop001   <none>           <none>
pod/kube-proxy-gmbnl                    1/1     Running             0          14m   172.19.242.227   hadoop003   <none>           <none>
pod/kube-scheduler-hadoop001            1/1     Running             0          56m   172.19.242.225   hadoop001   <none>           <none>

NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   57m   k8s-app=kube-dns

NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS     IMAGES                                   SELECTOR
daemonset.apps/kube-flannel-ds-amd64     0         0         0       0            0           <none>          45m   kube-flannel   quay.io/coreos/flannel:v0.12.0-amd64     app=flannel
daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           <none>          45m   kube-flannel   quay.io/coreos/flannel:v0.12.0-arm       app=flannel
daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           <none>          45m   kube-flannel   quay.io/coreos/flannel:v0.12.0-arm64     app=flannel
daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           <none>          45m   kube-flannel   quay.io/coreos/flannel:v0.12.0-ppc64le   app=flannel
daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           <none>          45m   kube-flannel   quay.io/coreos/flannel:v0.12.0-s390x     app=flannel
daemonset.apps/kube-proxy                3         3         3       3            3           <none>          57m   kube-proxy     k8s.gcr.io/kube-proxy:v1.13.2            k8s-app=kube-proxy

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                     SELECTOR
deployment.apps/coredns   0/2     2            0           57m   coredns      k8s.gcr.io/coredns:1.2.6   k8s-app=kube-dns

NAME                                 DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                     SELECTOR
replicaset.apps/coredns-86c58d9df4   2         2         0       56m   coredns      k8s.gcr.io/coredns:1.2.6   k8s-app=kube-dns,pod-template-hash=86c58d9df4
[root@hadoop001 ~]# 


[root@hadoop001 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   

//Restrict to a namespace
[root@hadoop001 ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS              RESTARTS   AGE
coredns-86c58d9df4-5rxkv            0/1     ContainerCreating   0          58m
coredns-86c58d9df4-62lq2            0/1     ContainerCreating   0          58m
etcd-hadoop001                      1/1     Running             0          57m
kube-apiserver-hadoop001            1/1     Running             0          57m
kube-controller-manager-hadoop001   1/1     Running             0          57m
kube-proxy-5n9sg                    1/1     Running             0          16m
kube-proxy-724sn                    1/1     Running             0          58m
kube-proxy-gmbnl                    1/1     Running             0          16m
kube-scheduler-hadoop001            1/1     Running             0          57m
[root@hadoop001 ~]# 

K8S is a standard master/slave architecture.

At this point the command-line deployment is done, but the dashboard is not yet deployed.

Deploy the Dashboard

Install the certificate and secret

[root@hadoop001 ~]# mkdir certs
[root@hadoop001 ~]# ls certs/

openssl req -nodes -newkey rsa:2048 -keyout certs/dashboard.key -out certs/dashboard.csr -subj "/C=/ST=/L=/O=/OU=/CN=kubernetes-dashboard"

//Or:
openssl req -nodes -newkey rsa:2048  \
-subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=kubernetes-dashboard" \
-keyout dashboard.key \
-out dashboard.csr

[root@hadoop001 ~]# openssl req -nodes -newkey rsa:2048 -keyout certs/dashboard.key -out certs/dashboard.csr -subj "/C=/ST=/L=/O=/OU=/CN=kubernetes-dashboard"
Generating a 2048 bit RSA private key
...................................+++
......+++
writing new private key to 'certs/dashboard.key'
-----
No value provided for Subject Attribute C, skipped
No value provided for Subject Attribute ST, skipped
No value provided for Subject Attribute L, skipped
No value provided for Subject Attribute O, skipped
No value provided for Subject Attribute OU, skipped
[root@hadoop001 ~]# ls certs/
dashboard.csr  dashboard.key

openssl x509 -req -sha256 -days 365 -in certs/dashboard.csr -signkey certs/dashboard.key -out certs/dashboard.crt

//Or:
openssl  x509 -req -sha256 -days 365 \
-in dashboard.csr \
-signkey dashboard.key \
-out dashboard.crt


[root@hadoop001 ~]# openssl x509 -req -sha256 -days 365 -in certs/dashboard.csr -signkey certs/dashboard.key -out certs/dashboard.crt
Signature ok
subject=/CN=kubernetes-dashboard
Getting Private key
[root@hadoop001 ~]# ls certs/
dashboard.crt  dashboard.csr  dashboard.key
[root@hadoop001 ~]# kubectl create secret generic kubernetes-dashboard-certs --from-file=certs -n kube-system
secret/kubernetes-dashboard-certs created
[root@hadoop001 ~]# 

Notes:
secret generic  creates an ordinary (opaque) secret
--from-file  the directory to read the files from
-n  the namespace
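
An optional check that the secret actually holds the three files:

kubectl describe secret kubernetes-dashboard-certs -n kube-system
# the Data section should list dashboard.crt, dashboard.csr and dashboard.key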

Create the dashboard

First prepare the manifest file:

https://www.jianshu.com/p/12d0b8b0be14
https://www.cnblogs.com/xuziyu/p/12504758.html

[root@hadoop001 k8s_dashboard]# cat kubernetes-dashboard.yaml 
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.1
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
[root@hadoop001 k8s_dashboard]# 
[root@hadoop001  k8s_dashboard]#  kubectl create -f  kubernetes-dashboard.yaml
secret/kubernetes-dashboard-csrf created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists

Get the token

[root@hadoop001 k8s_dashboard]# cat admin-token.yaml 
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
[root@hadoop001 k8s_dashboard]# 

[root@hadoop001 k8s_dashboard]# kubectl create -f admin-token.yaml
clusterrolebinding.rbac.authorization.k8s.io/admin created
serviceaccount/admin created

Note:

Opening the dashboard in a browser requires credentials.
We use a token here.
Tokens rotate periodically.

Viewing the token:
	Keep in mind that everything in K8S is a resource; the resource we use below is a secret.

[root@hadoop001 k8s_dashboard]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-lx7gf   kubernetes.io/service-account-token   3      114m
[root@hadoop001 k8s_dashboard]# 

This one is the default namespace's secret, not the one in kube-system.

1. Get the resource name:
[root@hadoop001 k8s_dashboard]# kubectl get secret -n kube-system |grep admin | awk '{print $1}'
admin-token-5m8w9
[root@hadoop001 k8s_dashboard]# 

//Look up the token by the resource name:
[root@hadoop001 k8s_dashboard]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -n kube-system
Name:         admin-token-5m8w9
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: 0535f42c-846e-11ea-b01f-00163e087411

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi01bTh3OSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjA1MzVmNDJjLTg0NmUtMTFlYS1iMDFmLTAwMTYzZTA4NzQxMSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.GsHkv8WvIDCeBphjbBtyrzEgVINaKmWN3E3YBPwZ4z4WSjWEX4-v2Z6Y51ilUAuo7DTqHpDZbOtwHL4hbF_jtXtvDcY8joJlbxm11M-sJTmCPii5PQNSAc1llTR_ce4Lu40_XGK1465Jt3QL9rM6pv786OzfQ1Ew8Gad7-Xs5dP0Dofn5am3x02-nBGZIr3CfTbGZccyJPL7yai8WWVDtaMosnKvd6jUo8qF4l-xcuV84EHphSObnfuzrt31fpCxXeFT1vsM27zgrsB8Jlycl02aLpQPZO_tsxwLPFEsH9_ziR-SqhP0gFuMIIeG4Qc3F3c-jR4D_cCAU-t60gTVaw
ca.crt:     1025 bytes
namespace:  11 bytes
[root@hadoop001 k8s_dashboard]# 
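
If only the raw token string is needed (for the dashboard login box), it can also be extracted directly; a sketch using kubectl's jsonpath output:

kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep admin | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d; echo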


Note:
 kubectl get all -n kube-system

get all: list all resources.

So the command above lists
	all resources under the kube-system namespace.

[root@hadoop001 k8s_dashboard]# kubectl get all -n kube-system
NAME                                        READY   STATUS              RESTARTS   AGE
pod/coredns-86c58d9df4-5rxkv                0/1     ContainerCreating   0          117m
pod/coredns-86c58d9df4-62lq2                0/1     ContainerCreating   0          117m
pod/etcd-hadoop001                          1/1     Running             0          116m
pod/kube-apiserver-hadoop001                1/1     Running             0          116m
pod/kube-controller-manager-hadoop001       1/1     Running             0          116m
pod/kube-proxy-5n9sg                        1/1     Running             0          75m
pod/kube-proxy-724sn                        1/1     Running             0          117m
pod/kube-proxy-gmbnl                        1/1     Running             0          75m
pod/kube-scheduler-hadoop001                1/1     Running             0          116m
pod/kubernetes-dashboard-79ff88449c-hpzhv   0/1     Pending             0          10m

NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   117m
service/kubernetes-dashboard   ClusterIP   10.101.98.184   <none>        443/TCP         32m

NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kube-flannel-ds-amd64     0         0         0       0            0           <none>          106m
daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           <none>          106m
daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           <none>          106m
daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           <none>          106m
daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           <none>          106m
daemonset.apps/kube-proxy                3         3         3       3            3           <none>          117m

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns                0/2     2            0           117m
deployment.apps/kubernetes-dashboard   0/1     1            0           32m

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-86c58d9df4                2         2         0       117m
replicaset.apps/kubernetes-dashboard-79ff88449c   1         1         0       32m
[root@hadoop001 k8s_dashboard]# 

In other words:
	the components under the kube-system namespace
	exist to run K8S itself.

service/kubernetes-dashboard   ClusterIP   10.101.98.184   <none>        443/TCP         32m

That is, a mapping like 443:32004 for kubernetes-dashboard; note that the port order here (servicePort:nodePort) is the reverse of Docker's -p hostPort:containerPort convention.
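
The manifest earlier pins nodePort 30001, so the dashboard should be reachable in a browser at https://<any-node-ip>:30001; use whichever port kubectl get svc actually reports for your service. A quick reachability check, assuming that port:

curl -k https://hadoop001:30001/
# -k skips certificate verification, since the dashboard uses our self-signed cert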



The most detailed listing:

 kubectl get all -n kube-system -o wide

Just add -o wide.

So from the dashboard you can edit configuration and view logs.

But:
!!! Always make a copy before editing anything.
If something goes wrong, you will be glad you did!!
