How To Create a Kubernetes Cluster Using Kubeadm on Ubuntu 16.04

Introduction

Kubernetes is a container orchestration system that manages containers at scale. Initially developed by Google based on its experience running containers in production, Kubernetes is open source and actively developed by a community around the world.

Note: This tutorial uses version 1.14 of Kubernetes, the officially supported version at the time of this article’s publication. For up-to-date information on the latest version, please see the current release notes in the official Kubernetes documentation.

Kubeadm automates the installation and configuration of Kubernetes components such as the API server, Controller Manager, and Kube DNS. It does not, however, create users or handle the installation of operating system level dependencies and their configuration. For these preliminary tasks, it is possible to use a configuration management tool like Ansible or SaltStack. Using these tools makes creating additional clusters or recreating existing clusters much simpler and less error prone.

In this guide, you will set up a Kubernetes cluster from scratch using Ansible and Kubeadm, and then deploy a containerized Nginx application to it.

Goals

Your cluster will include the following physical resources:

  • One master node

The master node (a node in Kubernetes refers to a server) is responsible for managing the state of the cluster. It runs Etcd, which stores cluster data among components that schedule workloads to worker nodes.

  • Two worker nodes

Worker nodes are the servers where your workloads (i.e. containerized applications and services) will run. A worker will continue to run your workloads once they’re assigned to it, even if the master goes down after scheduling is complete. A cluster’s capacity can be increased by adding workers.

After completing this guide, you will have a cluster ready to run containerized applications, provided that the servers in the cluster have sufficient CPU and RAM resources for your applications to consume. Almost any traditional Unix application including web applications, databases, daemons, and command line tools can be containerized and made to run on the cluster. The cluster itself will consume around 300-500MB of memory and 10% of CPU on each node.

Once the cluster is set up, you will deploy the web server Nginx to it to ensure that it is running workloads correctly.

Prerequisites

Step 1 — Setting Up the Workspace Directory and Ansible Inventory File

In this section, you will create a directory on your local machine that will serve as your workspace. You will also configure Ansible locally so that it can communicate with and execute commands on your remote servers. To do this, you will create a hosts file containing inventory information such as the IP addresses of your servers and the groups that each server belongs to.

Out of your three servers, one will be the master with an IP displayed as master_ip. The other two servers will be workers and will have the IPs worker_1_ip and worker_2_ip.

Create a directory named ~/kube-cluster in the home directory of your local machine and cd into it:

  • mkdir ~/kube-cluster
  • cd ~/kube-cluster

This directory will be your workspace for the rest of the tutorial and will contain all of your Ansible playbooks. It will also be the directory inside which you will run all local commands.

Create a file named ~/kube-cluster/hosts using nano or your favorite text editor:

  • nano ~/kube-cluster/hosts

Add the following text to the file, which will specify information about the logical structure of your cluster:

~/kube-cluster/hosts
[masters]
master ansible_host=master_ip ansible_user=root

[workers]
worker1 ansible_host=worker_1_ip ansible_user=root
worker2 ansible_host=worker_2_ip ansible_user=root

[all:vars]
ansible_python_interpreter=/usr/bin/python3

You may recall that inventory files in Ansible are used to specify server information such as IP addresses, remote users, and groupings of servers to target as a single unit for executing commands. ~/kube-cluster/hosts will be your inventory file and you’ve added two Ansible groups (masters and workers) to it specifying the logical structure of your cluster.

In the masters group, there is a server entry named “master” that lists the master node’s IP (master_ip) and specifies that Ansible should run remote commands as the root user.

masters组中,有一个名为“ master”的服务器条目,其中列出了主节点的IP( master_ip ),并指定Ansible应该以root用户身份运行远程命令。

Similarly, in the workers group, there are two entries for the worker servers (worker_1_ip and worker_2_ip) that also specify the ansible_user as root.

The last line of the file tells Ansible to use the remote servers’ Python 3 interpreters for its management operations.

Save and close the file after you’ve added the text.

Having set up the server inventory with groups, let’s move on to installing operating system level dependencies and creating configuration settings.

Step 2 — Creating a Non-Root User on All Remote Servers

In this section you will create a non-root user with sudo privileges on all servers so that you can SSH into them manually as an unprivileged user. This can be useful if, for example, you would like to see system information with commands such as top/htop, view a list of running containers, or change configuration files owned by root. These operations are routinely performed during the maintenance of a cluster, and using a non-root user for such tasks minimizes the risk of modifying or deleting important files or unintentionally performing other dangerous operations.

Create a file named ~/kube-cluster/initial.yml in the workspace:

  • nano ~/kube-cluster/initial.yml

Next, add the following play to the file to create a non-root user with sudo privileges on all of the servers. A play in Ansible is a collection of steps to be performed that target specific servers and groups. The following play will create a non-root sudo user:

~/kube-cluster/initial.yml
- hosts: all
  become: yes
  tasks:
    - name: create the 'ubuntu' user
      user: name=ubuntu append=yes state=present createhome=yes shell=/bin/bash

    - name: allow 'ubuntu' to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        line: 'ubuntu ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: set up authorized keys for the ubuntu user
      authorized_key: user=ubuntu key="{{item}}"
      with_file:
        - ~/.ssh/id_rsa.pub

Here’s a breakdown of what this playbook does:

  • Creates the non-root user ubuntu.

  • Configures the sudoers file to allow the ubuntu user to run sudo commands without a password prompt.

    sudoers文件配置为允许ubuntu用户在没有密码提示的情况下运行sudo命令。

  • Adds the public key on your local machine (usually ~/.ssh/id_rsa.pub) to the remote ubuntu user’s authorized key list. This will allow you to SSH into each server as the ubuntu user. (If you don’t have a key pair yet, see the note after this list.)
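Note: The playbook assumes that a public key exists at ~/.ssh/id_rsa.pub on your local machine. If you don’t have one yet, you can generate a key pair first (the default file locations are assumed here):

  • ssh-keygen -t rsa

Accept the default path so that the playbook’s with_file entry finds the public key.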

Save and close the file after you’ve added the text.

Next, execute the playbook by locally running:

  • ansible-playbook -i hosts ~/kube-cluster/initial.yml

The command will complete within two to five minutes. On completion, you will see output similar to the following:


   
   
Output
PLAY [all] ****

TASK [Gathering Facts] ****
ok: [master]
ok: [worker1]
ok: [worker2]

TASK [create the 'ubuntu' user] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [allow 'ubuntu' user to have passwordless sudo] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [set up authorized keys for the ubuntu user] ****
changed: [worker1] => (item=ssh-rsa AAAAB3...
changed: [worker2] => (item=ssh-rsa AAAAB3...
changed: [master] => (item=ssh-rsa AAAAB3...

PLAY RECAP ****
master  : ok=5 changed=4 unreachable=0 failed=0
worker1 : ok=5 changed=4 unreachable=0 failed=0
worker2 : ok=5 changed=4 unreachable=0 failed=0

Now that the preliminary setup is complete, you can move on to installing Kubernetes-specific dependencies.

Step 3 — Installing Kubernetes’ Dependencies

In this section, you will install the operating system level packages required by Kubernetes with Ubuntu’s package manager. These packages are:

  • Docker - a container runtime. It is the component that runs your containers. Support for other runtimes such as rkt is under active development in Kubernetes.

  • kubeadm - a CLI tool that will install and configure the various components of a cluster in a standard way.

  • kubelet - a system service/program that runs on all nodes and handles node-level operations.

  • kubectl - a CLI tool used for issuing commands to the cluster through its API Server.

Create a file named ~/kube-cluster/kube-dependencies.yml in the workspace:

  • nano ~/kube-cluster/kube-dependencies.yml

Add the following plays to the file to install these packages to your servers:

~/kube-cluster/kube-dependencies.yml
- hosts: all
  become: yes
  tasks:
   - name: install Docker
     apt:
       name: docker.io
       state: present
       update_cache: true

   - name: install APT Transport HTTPS
     apt:
       name: apt-transport-https
       state: present

   - name: add Kubernetes apt-key
     apt_key:
       url: https://packages.cloud.google.com/apt/doc/apt-key.gpg
       state: present

   - name: add Kubernetes' APT repository
     apt_repository:
      repo: deb http://apt.kubernetes.io/ kubernetes-xenial main
      state: present
      filename: 'kubernetes'

   - name: install kubelet
     apt:
       name: kubelet=1.14.0-00
       state: present
       update_cache: true

   - name: install kubeadm
     apt:
       name: kubeadm=1.14.0-00
       state: present

- hosts: master
  become: yes
  tasks:
   - name: install kubectl
     apt:
       name: kubectl=1.14.0-00
       state: present
       force: yes

The first play in the playbook does the following:

  • Installs Docker, the container runtime.

  • Installs apt-transport-https, allowing you to add external HTTPS sources to your APT sources list.

  • Adds the Kubernetes APT repository’s apt-key for key verification.

  • Adds the Kubernetes APT repository to your remote servers’ APT sources list.

  • Installs kubelet and kubeadm.

The second play consists of a single task that installs kubectl on your master node.

Note: While the Kubernetes documentation recommends you use the latest stable release of Kubernetes for your environment, this tutorial uses a specific version. This will ensure that you can follow the steps successfully, as Kubernetes changes rapidly and the latest version may not work with this tutorial.

Save and close the file when you are finished.

Next, execute the playbook by locally running:

  • ansible-playbook -i hosts ~/kube-cluster/kube-dependencies.yml

On completion, you will see output similar to the following:


   
   
Output
PLAY [all] ****

TASK [Gathering Facts] ****
ok: [worker1]
ok: [worker2]
ok: [master]

TASK [install Docker] ****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install APT Transport HTTPS] *****
ok: [master]
ok: [worker1]
changed: [worker2]

TASK [add Kubernetes apt-key] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [add Kubernetes' APT repository] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install kubelet] *****
changed: [master]
changed: [worker1]
changed: [worker2]

TASK [install kubeadm] *****
changed: [master]
changed: [worker1]
changed: [worker2]

PLAY [master] *****

TASK [Gathering Facts] *****
ok: [master]

TASK [install kubectl] ******
ok: [master]

PLAY RECAP ****
master  : ok=9 changed=5 unreachable=0 failed=0
worker1 : ok=7 changed=5 unreachable=0 failed=0
worker2 : ok=7 changed=5 unreachable=0 failed=0

After execution, Docker, kubeadm, and kubelet will be installed on all of the remote servers. kubectl is not a required component and is only needed for executing cluster commands. Installing it only on the master node makes sense in this context, since you will run kubectl commands only from the master. Note, however, that kubectl commands can be run from any of the worker nodes or from any machine where it can be installed and configured to point to a cluster.

All system dependencies are now installed. Let’s set up the master node and initialize the cluster.

Step 4 — Setting Up the Master Node

In this section, you will set up the master node. Before creating any playbooks, however, it’s worth covering a few concepts such as Pods and Pod Network Plugins, since your cluster will include both.

A pod is an atomic unit that runs one or more containers. These containers share resources such as file volumes and network interfaces in common. Pods are the basic unit of scheduling in Kubernetes: all containers in a pod are guaranteed to run on the same node that the pod is scheduled on.

Each pod has its own IP address, and a pod on one node should be able to access a pod on another node using the pod’s IP. Containers on a single node can communicate easily through a local interface. Communication between pods is more complicated, however, and requires a separate networking component that can transparently route traffic from a pod on one node to a pod on another.

This functionality is provided by pod network plugins. For this cluster, you will use Flannel, a stable and performant option.

Create an Ansible playbook named master.yml on your local machine:

  • nano ~/kube-cluster/master.yml

Add the following play to the file to initialize the cluster and install Flannel:

~/kube-cluster/master.yml
- hosts: master
  become: yes
  tasks:
    - name: initialize the cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16 >> cluster_initialized.txt
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: create .kube directory
      become: yes
      become_user: ubuntu
      file:
        path: $HOME/.kube
        state: directory
        mode: 0755

    - name: copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/ubuntu/.kube/config
        remote_src: yes
        owner: ubuntu

    - name: install Pod network
      become: yes
      become_user: ubuntu
      shell: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml >> pod_network_setup.txt
      args:
        chdir: $HOME
        creates: pod_network_setup.txt

Here’s a breakdown of this play:

  • The first task initializes the cluster by running kubeadm init. Passing the argument --pod-network-cidr=10.244.0.0/16 specifies the private subnet that the pod IPs will be assigned from. Flannel uses the above subnet by default; we’re telling kubeadm to use the same subnet.

  • The second task creates a .kube directory at /home/ubuntu. This directory will hold configuration information such as the admin key files, which are required to connect to the cluster, and the cluster’s API address.

  • The third task copies the /etc/kubernetes/admin.conf file that was generated from kubeadm init to your non-root user’s home directory. This will allow you to use kubectl to access the newly-created cluster.

  • The last task runs kubectl apply to install Flannel. kubectl apply -f descriptor.[yml|json] is the syntax for telling kubectl to create the objects described in the descriptor.[yml|json] file. The kube-flannel.yml file contains the descriptions of objects required for setting up Flannel in the cluster.

Save and close the file when you are finished.

Execute the playbook locally by running:

  • ansible-playbook -i hosts ~/kube-cluster/master.yml

On completion, you will see output similar to the following:


   
   
Output
PLAY [master] ****

TASK [Gathering Facts] ****
ok: [master]

TASK [initialize the cluster] ****
changed: [master]

TASK [create .kube directory] ****
changed: [master]

TASK [copy admin.conf to user's kube config] *****
changed: [master]

TASK [install Pod network] *****
changed: [master]

PLAY RECAP ****
master : ok=5 changed=4 unreachable=0 failed=0

To check the status of the master node, SSH into it with the following command:

  • ssh ubuntu@master_ip

Once inside the master node, execute:

  • kubectl get nodes

You will now see the following output:


   
   
Output
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   1d    v1.14.0

The output states that the master node has completed all initialization tasks and is in a Ready state from which it can start accepting worker nodes and executing tasks sent to the API Server. You can now add the workers from your local machine.

Step 5 — Setting Up the Worker Nodes

Adding workers to the cluster involves executing a single command on each. This command includes the necessary cluster information, such as the IP address and port of the master’s API Server, and a secure token. Only nodes that pass in the secure token will be able to join the cluster.

Navigate back to your workspace and create a playbook named workers.yml:

  • nano ~/kube-cluster/workers.yml

Add the following text to the file to add the workers to the cluster:

~/kube-cluster/workers.yml
- hosts: master
  become: yes
  gather_facts: false
  tasks:
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command_raw

    - name: set join command
      set_fact:
        join_command: "{{ join_command_raw.stdout_lines[0] }}"


- hosts: workers
  become: yes
  tasks:
    - name: join cluster
      shell: "{{ hostvars['master'].join_command }} >> node_joined.txt"
      args:
        chdir: $HOME
        creates: node_joined.txt

Here’s what the playbook does:

  • The first play gets the join command that needs to be run on the worker nodes. This command will be in the following format: kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>. Once it gets the actual command with the proper token and hash values, the task sets it as a fact so that the next play will be able to access that info.

  • The second play has a single task that runs the join command on all worker nodes. On completion of this task, the two worker nodes will be part of the cluster.

Save and close the file when you are finished.

Execute the playbook by locally running:

  • ansible-playbook -i hosts ~/kube-cluster/workers.yml

On completion, you will see output similar to the following:


   
   
Output
PLAY [master] ****

TASK [get join command] ****
changed: [master]

TASK [set join command] *****
ok: [master]

PLAY [workers] *****

TASK [Gathering Facts] *****
ok: [worker1]
ok: [worker2]

TASK [join cluster] *****
changed: [worker1]
changed: [worker2]

PLAY RECAP *****
master  : ok=2 changed=1 unreachable=0 failed=0
worker1 : ok=2 changed=1 unreachable=0 failed=0
worker2 : ok=2 changed=1 unreachable=0 failed=0

With the addition of the worker nodes, your cluster is now fully set up and functional, with workers ready to run workloads. Before scheduling applications, let’s verify that the cluster is working as intended.

Step 6 — Verifying the Cluster

A cluster can sometimes fail during setup because a node is down or network connectivity between the master and worker is not working correctly. Let’s verify the cluster and ensure that the nodes are operating correctly.

You will need to check the current state of the cluster from the master node to ensure that the nodes are ready. If you disconnected from the master node, you can SSH back into it with the following command:

  • ssh ubuntu@master_ip

Then execute the following command to get the status of the cluster:

  • kubectl get nodes

You will see output similar to the following:


   
   
Output
NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   1d    v1.14.0
worker1   Ready    <none>   1d    v1.14.0
worker2   Ready    <none>   1d    v1.14.0

If all of your nodes have the value Ready for STATUS, it means that they’re part of the cluster and ready to run workloads.

If, however, a few of the nodes have NotReady as the STATUS, it could mean that the worker nodes haven’t finished their setup yet. Wait for around five to ten minutes before re-running kubectl get nodes and inspecting the new output. If a few nodes still have NotReady as the status, you might have to verify and re-run the commands in the previous steps.

Now that your cluster is verified successfully, let’s schedule an example Nginx application on the cluster.

Step 7 — Running An Application on the Cluster

You can now deploy any containerized application to your cluster. To keep things familiar, let’s deploy Nginx using Deployments and Services to see how this application can be deployed to the cluster. You can use the commands below for other containerized applications as well, provided you change the Docker image name and any relevant flags (such as ports and volumes).

Still within the master node, execute the following command to create a deployment named nginx:

  • kubectl create deployment nginx --image=nginx

A deployment is a type of Kubernetes object that ensures there’s always a specified number of pods running based on a defined template, even if the pod crashes during the cluster’s lifetime. The above deployment will create a pod with one container from the Docker registry’s Nginx Docker Image.

Next, run the following command to create a service named nginx that will expose the app publicly. It will do so through a NodePort, a scheme that will make the pod accessible through an arbitrary port opened on each node of the cluster:

  • kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort

Services are another type of Kubernetes object that expose cluster internal services to clients, both internal and external. They are also capable of load balancing requests to multiple pods, and are an integral component in Kubernetes, frequently interacting with other components.

Run the following command:

  • kubectl get services

This will output text similar to the following:


   
   
Output
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP             1d
nginx        NodePort    10.109.228.209   <none>        80:nginx_port/TCP   40m

From the third line of the above output, you can retrieve the port that Nginx is running on. Kubernetes will assign a random port that is greater than 30000 automatically, while ensuring that the port is not already bound by another service.

To test that everything is working, visit http://worker_1_ip:nginx_port or http://worker_2_ip:nginx_port through a browser on your local machine. You will see Nginx’s familiar welcome page.

If you would like to remove the Nginx application, first delete the nginx service from the master node:

  • kubectl delete service nginx

Run the following to ensure that the service has been deleted:

  • kubectl get services

You will see the following output:


   
   
Output
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   1d

Then delete the deployment:

  • kubectl delete deployment nginx

Run the following to confirm that this worked:

  • kubectl get deployments

   
   
Output
No resources found.

Conclusion

In this guide, you’ve successfully set up a Kubernetes cluster on Ubuntu 16.04 using Kubeadm and Ansible for automation.

If you’re wondering what to do with the cluster now that it’s set up, a good next step would be to get comfortable deploying your own applications and services onto the cluster. Here’s a list of links with further information that can guide you in the process:

  • Dockerizing applications - lists examples that detail how to containerize applications using Docker.

  • Pod Overview - describes in detail how Pods work and their relationship with other Kubernetes objects. Pods are ubiquitous in Kubernetes, so understanding them will facilitate your work.

  • Deployments Overview - this provides an overview of deployments. It is useful to understand how controllers such as deployments work since they are used frequently in stateless applications for scaling and the automated healing of unhealthy applications.

  • Services Overview - this covers services, another frequently used object in Kubernetes clusters. Understanding the types of services and the options they have is essential for running both stateless and stateful applications.

Other important concepts that you can look into are Volumes, Ingresses and Secrets, all of which come in handy when deploying production applications.

Kubernetes has a lot of functionality and features to offer. The Kubernetes Official Documentation is the best place to learn about concepts, find task-specific guides, and look up API references for various objects.

Translated from: https://www.digitalocean.com/community/tutorials/how-to-create-a-kubernetes-cluster-using-kubeadm-on-ubuntu-16-04
