Building a 5-Node Kubernetes Cluster with Play with Kubernetes

This article walks through building a 5-node Kubernetes cluster on Play with Kubernetes, consisting of one master node and four worker nodes, as a first step into the Kubernetes world. It does not cover Kubernetes internals or detailed usage; instead it gets you hands-on through Play with Kubernetes, essentially the "Hello World" of the Kubernetes world. All you need is a computer with internet access, a browser (Google Chrome is recommended), and a GitHub account or a Docker ID to log in, and you are ready to start.

1. Resource limits

  • The cluster is bootstrapped with the built-in kubeadm; the kubeadm version at the time of writing is v1.11.3
  • Each instance gets 1 core and 4 GB of memory, and at most 5 instances can be created at a time; trying to create more produces the following message:
Max instances reached
Maximum number of instances reached
  • Each cluster can be used for 4 hours (you can run several clusters at the same time; clusters are tracked per browser session)
  • Services created inside the Kubernetes cluster cannot be reached from the public internet; they are only accessible from within the Play with Kubernetes network
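
These limits are easy to confirm once you have an instance running (see section 2.2). The commands below are a minimal sketch that assumes the standard nproc and free utilities and the preinstalled kubeadm on the PWK image:

[node1 ~]$ # CPU cores and memory available on this instance (expect 1 core / ~4 GB)
[node1 ~]$ nproc
[node1 ~]$ free -h

[node1 ~]$ # Version of the preinstalled kubeadm (expect v1.11.x)
[node1 ~]$ kubeadm version -o short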

2. Creating the Kubernetes cluster

2.1 Logging in

Open the Play with Kubernetes page in your browser and click login. There are two ways to log in:

  • GitHub login: if you already have a GitHub account, log in with it; otherwise, create one on GitHub first
  • Docker login: log in with a Docker account (Docker ID)
    Then click start to begin your Kubernetes journey! The login page looks like this:
    Play with Kubernetes Start Page
2.2 Creating the master node
  1. Click create instance to create a new instance, then run the following command to initialize the master node:
[node1 ~]$ kubeadm init --apiserver-advertise-address $(hostname -i)

Initializing machine ID from random generator.
[init] using Kubernetes version: v1.11.10
[preflight] running pre-flight checks
        [WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I1026 08:51:17.842390     800 kernel_validator.go:81] Validating kernel version
I1026 08:51:17.842768     800 kernel_validator.go:96] Validating kernel config
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-165-generic
DOCKER_VERSION: 18.06.1-ce
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module "configs": output - "", err - exit status 1
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.23]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.0.23 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 50.502361 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node node1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node node1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
[bootstraptoken] using token: p6wrgt.ukl5qipcf8ovg2ze
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.23:6443 --token p6wrgt.ukl5qipcf8ovg2ze --discovery-token-ca-cert-hash sha256:6d431bda8393afb76b95466e2a4b0c1a712a25b25f6281cb8b9663c063a27b46

Waiting for api server to startup
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.extensions/kube-proxy configured
No resources found
  2. Initialize the cluster network:
[node1 ~]$  kubectl apply -n kube-system -f \
>     "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"

serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
  3. Run the following commands so that you can use the cluster as a regular user; on PWK you are already root, so cp may report that the two files are the same. A quick verification sketch follows this list:
[node1 ~]$ mkdir -p $HOME/.kube
[node1 ~]$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

cp: '/etc/kubernetes/admin.conf' and '/root/.kube/config' are the same file
[node1 ~]$ chown $(id -u):$(id -g) $HOME/.kube/config
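
Before moving on to the worker nodes, it is worth checking that the control plane and the Weave Net CNI are actually up. This is just a quick verification sketch using standard kubectl commands, assuming the kubeconfig set up in the previous step:

[node1 ~]$ # All kube-system pods (apiserver, controller-manager, scheduler, etcd, coredns, kube-proxy, weave-net) should reach Running
[node1 ~]$ kubectl get pods -n kube-system -o wide

[node1 ~]$ # The master should report Ready once the CNI is running
[node1 ~]$ kubectl get nodes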
2.3 Creating the other worker nodes
  1. Create a new instance for a worker node, and run the following in the new instance's terminal:
[node3 ~]$   kubeadm join 192.168.0.23:6443 --token p6wrgt.ukl5qipcf8ovg2ze --discovery-token-ca-cert-hash sha256:6d431bda8393afb76b95466e2a4b0c1a712a25b25f6281cb8b9663c063a27b46

Initializing machine ID from random generator.
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: error getting required builtin kernel modules: exit status 1(cut: /lib/modules/4.4.0-165-generic/modules.builtin: No such file or directory
)
        [WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I1026 09:09:09.257504     422 kernel_validator.go:81] Validating kernel version
I1026 09:09:09.257670     422 kernel_validator.go:96] Validating kernel config
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-165-generic
DOCKER_VERSION: 18.06.1-ce
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
        [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module "configs": output - "", err - exit status 1
[discovery] Trying to connect to API Server "192.168.0.23:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.23:6443"
[discovery] Requesting info from "https://192.168.0.23:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.23:6443"
[discovery] Successfully established connection with API Server "192.168.0.23:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node3" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Note: run the join command exactly as printed in your own master node's terminal. 192.168.0.23:6443 is just my master node's IP and port, and everyone's token and SHA256 hash will be different!
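
If you no longer have the output that kubeadm init printed (for example, after clearing the terminal), the join command can be regenerated on the master. A short sketch using kubeadm's standard token subcommands:

[node1 ~]$ # Print a fresh join command (a new token plus the CA cert hash) to run on worker nodes
[node1 ~]$ kubeadm token create --print-join-command

[node1 ~]$ # List existing bootstrap tokens and when they expire
[node1 ~]$ kubeadm token list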

  2. Repeat the previous step to create three more worker nodes. Together with the master, you now have a 5-node Kubernetes cluster. Run the following command on the master node to see the details of each node:
[node1 ~]$ kubectl get nodes -o wide

NAME      STATUS    ROLES     AGE       VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION      CONTAINER-RUNTIME
node1     Ready     master    1h        v1.11.3   192.168.0.23   <none>        CentOS Linux 7 (Core)   4.4.0-165-generic   docker://18.6.1
node2     Ready     <none>    57m       v1.11.3   192.168.0.22   <none>        CentOS Linux 7 (Core)   4.4.0-165-generic   docker://18.6.1
node3     Ready     <none>    54m       v1.11.3   192.168.0.21   <none>        CentOS Linux 7 (Core)   4.4.0-165-generic   docker://18.6.1
node4     Ready     <none>    53m       v1.11.3   192.168.0.20   <none>        CentOS Linux 7 (Core)   4.4.0-165-generic   docker://18.6.1
node5     Ready     <none>    46m       v1.11.3   192.168.0.19   <none>        CentOS Linux 7 (Core)   4.4.0-165-generic   docker://18.6.1

That concludes this tutorial on building a 5-node K8s cluster with Play with Kubernetes. You can now create your own workloads on the cluster, such as Pods, Deployments, and StatefulSets; a small example follows below. A follow-up article will cover some basic Kubernetes concepts.
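
As a small worked example of a first workload, the commands below create an nginx Deployment and expose it as a NodePort Service. The name nginx, the replica count, and the node IP used in the curl are illustrative choices rather than part of the original walkthrough, and, per the limitation in section 1, the service is only reachable from inside the Play with Kubernetes network:

[node1 ~]$ # Create a Deployment running the public nginx image, then scale it to 2 replicas
[node1 ~]$ kubectl create deployment nginx --image=nginx
[node1 ~]$ kubectl scale deployment nginx --replicas=2

[node1 ~]$ # Expose it as a NodePort Service on port 80
[node1 ~]$ kubectl expose deployment nginx --port=80 --type=NodePort

[node1 ~]$ # Look up the assigned NodePort, then curl it from any node (replace 192.168.0.22 with one of your node IPs from kubectl get nodes -o wide)
[node1 ~]$ kubectl get svc nginx
[node1 ~]$ curl http://192.168.0.22:$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')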
