How To Migrate a Docker Compose Workflow to Kubernetes

Introduction

When building modern, stateless applications, containerizing your application’s components is the first step in deploying and scaling on distributed platforms. If you have used Docker Compose in development, you will have modernized and containerized your application by:

  • Extracting necessary configuration information from your code.

  • Offloading your application’s state.

  • Packaging your application for repeated use.

You will also have written service definitions that specify how your container images should run.

To run your services on a distributed platform like Kubernetes, you will need to translate your Compose service definitions to Kubernetes objects. This will allow you to scale your application with resiliency. One tool that can speed up the translation process is kompose, a conversion tool that helps developers move Compose workflows to container orchestrators like Kubernetes or OpenShift.

In this tutorial, you will translate Compose services to Kubernetes objects using kompose. You will use the object definitions that kompose provides as a starting point and make adjustments to ensure that your setup will use Secrets, Services, and PersistentVolumeClaims in the way that Kubernetes expects. By the end of the tutorial, you will have a single-instance Node.js application with a MongoDB database running on a Kubernetes cluster. This setup will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build out a production-ready solution that will scale with your needs.

Prerequisites

Step 1 — Installing kompose

To begin using kompose, navigate to the project’s GitHub Releases page, and copy the link to the current release (version 1.18.0 as of this writing). Paste this link into the following curl command to download the latest version of kompose:

  • curl -L https://github.com/kubernetes/kompose/releases/download/v1.18.0/kompose-linux-amd64 -o kompose

For details about installing on non-Linux systems, please refer to the installation instructions.

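For example, on macOS the release assets follow the same naming pattern; assuming a kompose-darwin-amd64 binary is published for the version you want, the download would look like this:

  • curl -L https://github.com/kubernetes/kompose/releases/download/v1.18.0/kompose-darwin-amd64 -o kompose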

Make the binary executable:

  • chmod +x kompose

Move it to your PATH:

  • sudo mv ./kompose /usr/local/bin/kompose

To verify that it has been installed properly, you can do a version check:

  • kompose version

If the installation was successful, you will see output like the following:

Output
1.18.0 (06a2e56)

With kompose installed and ready to use, you can now clone the Node.js project code that you will be translating to Kubernetes.

Step 2 — Cloning and Packaging the Application

To use our application with Kubernetes, we will need to clone the project code and package the application so that the kubelet service can pull the image.

Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application to demonstrate how to set up a development environment using Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

Clone the repository into a directory called node_project:

  • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

Navigate to the node_project directory:

  • cd node_project

The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application’s state has been offloaded to a MongoDB database.

For more information about designing modern, stateless applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

The project directory includes a Dockerfile with instructions for building the application image. Let’s build the image now so that you can push it to your Docker Hub account and use it in your Kubernetes setup.

Using the docker build command, build the image with the -t flag, which allows you to tag it with a memorable name. In this case, tag the image with your Docker Hub username and name it node-kubernetes or a name of your own choosing:

  • docker build -t your_dockerhub_username/node-kubernetes .

The . in the command specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

  • docker images

You will see the following output:

Output
REPOSITORY                                TAG         IMAGE ID       CREATED         SIZE
your_dockerhub_username/node-kubernetes   latest      9c6f897e1fbc   3 seconds ago   90MB
node                                      10-alpine   94f3c8956482   12 days ago     71MB

Next, log in to the Docker Hub account you created in the prerequisites:

  • docker login -u your_dockerhub_username

When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your user’s home directory with your Docker Hub credentials.

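If you would like to confirm that the login succeeded, you can inspect that file; it should contain an auths entry for the Docker Hub registry:

  • cat ~/.docker/config.json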

Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

  • docker push your_dockerhub_username/node-kubernetes

You now have an application image that you can pull to run your application with Kubernetes. The next step will be to translate your application service definitions to Kubernetes objects.

Step 3 — Translating Compose Services to Kubernetes Objects with kompose

Our Docker Compose file, here called docker-compose.yaml, lays out the definitions that will run our services with Compose. A service in Compose is a running container, and service definitions contain information about how each container image will run. In this step, we will translate these definitions to Kubernetes objects by using kompose to create yaml files. These files will contain specs for the Kubernetes objects that describe their desired state.

We will use these files to create different types of objects: Services, which will ensure that the Pods running our containers remain accessible; Deployments, which will contain information about the desired state of our Pods; a PersistentVolumeClaim to provision storage for our database data; a ConfigMap for environment variables injected at runtime; and a Secret for our application’s database user and password. Some of these definitions will be in the files kompose will create for us, and others we will need to create ourselves.

First, we will need to modify some of the definitions in our docker-compose.yaml file to work with Kubernetes. We will include a reference to our newly-built application image in our nodejs service definition and remove the bind mounts, volumes, and additional commands that we used to run the application container in development with Compose. Additionally, we’ll redefine both containers’ restart policies to be in line with the behavior Kubernetes expects.

Open the file with nano or your favorite editor:

  • nano docker-compose.yaml

The current definition for the nodejs application service looks like this:

~/node_project/docker-compose.yaml
...
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB 
    ports:
      - "80:8080"
    volumes:
      - .:/home/node/app
      - node_modules:/home/node/app/node_modules
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
...

Make the following edits to your service definition:

  • Use your node-kubernetes image instead of the local Dockerfile.

  • Change the container restart policy from unless-stopped to always.

  • Remove the volumes list and the command instruction.

The finished service definition will now look like this:

~/node_project/docker-compose.yaml
...
services:
  nodejs:
    image: your_dockerhub_username/node-kubernetes
    container_name: nodejs
    restart: always
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB 
    ports:
      - "80:8080"
    networks:
      - app-network
...

Next, scroll down to the db service definition. Here, make the following edits:

  • Change the restart policy for the service to always.

  • Remove the .env file. Instead of using values from the .env file, we will pass the values for our MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD to the database container using the Secret we will create in Step 4.

The db service definition will now look like this:

~/node_project/docker-compose.yaml
...
  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:  
      - dbdata:/data/db   
    networks:
      - app-network
...

Finally, at the bottom of the file, remove the node_modules volumes from the top-level volumes key. The key will now look like this:

~/node_project/docker-compose.yaml
...
volumes:
  dbdata:

Save and close the file when you are finished editing.

Before translating our service definitions, we will need to write the .env file that kompose will use to create the ConfigMap with our non-sensitive information. Please see Step 2 of Containerizing a Node.js Application for Development With Docker Compose for a longer explanation of this file.

In that tutorial, we added .env to our .gitignore file to ensure that it would not copy to version control. This means that it did not copy over when we cloned the node-mongo-docker-dev repository in Step 2 of this tutorial. We will therefore need to recreate it now.

Create the file:

  • nano .env

kompose will use this file to create a ConfigMap for our application. However, instead of assigning all of the variables from the nodejs service definition in our Compose file, we will add only the MONGO_DB database name and the MONGO_PORT. We will assign the database username and password separately when we manually create a Secret object in Step 4.

Add the following port and database name information to the .env file. Feel free to rename your database if you would like:

~/node_project/.env
MONGO_PORT=27017
MONGO_DB=sharkinfo

Save and close the file when you are finished editing.

You are now ready to create the files with your object specs. kompose offers multiple options for translating your resources. You can:

  • Create yaml files based on the service definitions in your docker-compose.yaml file with kompose convert.

  • Create Kubernetes objects directly with kompose up.

  • Create a Helm chart with kompose convert -c.

For now, we will convert our service definitions to yaml files and then add to and revise the files kompose creates.

Convert your service definitions to yaml files with the following command:

  • kompose convert

You can also name specific or multiple Compose files using the -f flag.

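For example, the following invocation is equivalent to the plain kompose convert above, since docker-compose.yaml is the file kompose reads by default; the explicit filename here is only for illustration:

  • kompose convert -f docker-compose.yaml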

After you run this command, kompose will output information about the files it has created:

Output
INFO Kubernetes file "nodejs-service.yaml" created
INFO Kubernetes file "db-deployment.yaml" created
INFO Kubernetes file "dbdata-persistentvolumeclaim.yaml" created
INFO Kubernetes file "nodejs-deployment.yaml" created
INFO Kubernetes file "nodejs-env-configmap.yaml" created

These include yaml files with specs for the Node application Service, Deployment, and ConfigMap, as well as for the dbdata PersistentVolumeClaim and MongoDB database Deployment.

These files are a good starting point, but in order for our application’s functionality to match the setup described in Containerizing a Node.js Application for Development With Docker Compose, we will need to make a few additions and changes to the files kompose has generated.

Step 4 — Creating Kubernetes Secrets

In order for our application to function in the way we expect, we will need to make a few modifications to the files that kompose has created. The first of these changes will be generating a Secret for our database user and password and adding it to our application and database Deployments. Kubernetes offers two ways of working with environment variables: ConfigMaps and Secrets. kompose has already created a ConfigMap with the non-confidential information we included in our .env file, so we will now create a Secret with our confidential information: our database username and password.

The first step in manually creating a Secret will be to convert your username and password to base64, an encoding scheme that allows you to uniformly transmit data, including binary data.

Convert your database username:

  • echo -n 'your_database_username' | base64

Note down the value you see in the output.

Next, convert your password:

  • echo -n 'your_database_password' | base64

Take note of the value in the output here as well.

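For instance, with a hypothetical username of sammy (substitute your own values), the encoding and a quick round-trip check would look like this:

  • echo -n 'sammy' | base64
  • echo 'c2FtbXk=' | base64 --decode

The first command prints c2FtbXk=, and the second decodes that value back to sammy, confirming that the string was encoded cleanly.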

Open a file for the Secret:

  • nano secret.yaml

Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your yaml files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

  • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

In general, it is a good idea to validate your syntax before creating resources with kubectl.

Add the following code to the file to create a Secret that will define your MONGO_USERNAME and MONGO_PASSWORD using the encoded values you just created. Be sure to replace the dummy values here with your encoded username and password:

~/node_project/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
data:
  MONGO_USERNAME: your_encoded_username
  MONGO_PASSWORD: your_encoded_password

We have named the Secret object mongo-secret, but you are free to name it anything you would like.

Save and close this file when you are finished editing. As you did with your .env file, be sure to add secret.yaml to your .gitignore file to keep it out of version control.

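One quick way to do that is to append the filename to .gitignore from the project root:

  • echo 'secret.yaml' >> .gitignore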

With secret.yaml written, our next step will be to ensure that our application and database Pods both use the values we added to the file. Let’s start by adding references to the Secret to our application Deployment.

Open the file called nodejs-deployment.yaml:

  • nano nodejs-deployment.yaml

The file’s container specifications include the following environment variables defined under the env key:

~/node_project/nodejs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      - env:
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: nodejs-env
        - name: MONGO_HOSTNAME
          value: db
        - name: MONGO_PASSWORD
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: nodejs-env
        - name: MONGO_USERNAME

We will need to add references to our Secret to the MONGO_USERNAME and MONGO_PASSWORD variables listed here, so that our application will have access to those values. Instead of including a configMapKeyRef key to point to our nodejs-env ConfigMap, as is the case with the values for MONGO_DB and MONGO_PORT, we’ll include a secretKeyRef key to point to the values in our mongo-secret secret.

Add the following Secret references to the MONGO_USERNAME and MONGO_PASSWORD variables:

~/node_project/nodejs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      - env:
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: nodejs-env
        - name: MONGO_HOSTNAME
          value: db
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_PASSWORD
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: nodejs-env
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_USERNAME

Save and close the file when you are finished editing.

Next, we’ll add the same values to the db-deployment.yaml file.

Open the file for editing:

  • nano db-deployment.yaml

In this file, we will add references to our Secret for the following variable keys: MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD. The mongo image makes these variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the database container starts.

Using the values we set in our Secret ensures that we will have an application user with root privileges on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

Under the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD variables, add references to the Secret values:

MONGO_INITDB_ROOT_USERNAMEMONGO_INITDB_ROOT_PASSWORD变量下,添加对Secret值的引用:

~/node_project/db-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      - env:
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_PASSWORD        
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: MONGO_USERNAME
        image: mongo:4.1.8-xenial
...

Save and close the file when you are finished editing.

With your Secret in place, you can move on to creating your database Service and ensuring that your application container only attempts to connect to the database once it is fully set up and initialized.

Step 5 — Creating the Database Service and an Application Init Container

Now that we have our Secret, we can move on to creating our database Service and an Init Container that will poll this Service to ensure that our application only attempts to connect to the database once the database startup tasks, including creating the MONGO_INITDB user and password, are complete.

For a discussion of how to implement this functionality in Compose, please see Step 4 of Containerizing a Node.js Application for Development with Docker Compose.

Open a file to define the specs for the database Service:

  • nano db-service.yaml

Add the following code to the file to define the Service:

~/node_project/db-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations: 
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: db
  name: db
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    io.kompose.service: db
status:
  loadBalancer: {}

The selector that we have included here will match this Service object with our database Pods, which have been defined with the label io.kompose.service: db by kompose in the db-deployment.yaml file. We’ve also named this service db.

Save and close the file when you are finished editing.

Next, let’s add an Init Container field to the containers array in nodejs-deployment.yaml. This will create an Init Container that we can use to delay our application container from starting until the db Service has been created with a Pod that is reachable. This is one of the possible uses for Init Containers; to learn more about other use cases, please see the official documentation.

Open the nodejs-deployment.yaml file:

  • nano nodejs-deployment.yaml

Within the Pod spec and alongside the containers array, we are going to add an initContainers field with a container that will poll the db Service.

Add the following code below the ports and resources fields and above the restartPolicy in the nodejs containers array:

~/node_project/nodejs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
...
    spec:
      containers:
      ...
        name: nodejs
        ports:
        - containerPort: 8080
        resources: {}
      initContainers:
      - name: init-db
        image: busybox
        command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
      restartPolicy: Always
...

This Init Container uses the BusyBox image, a lightweight image that includes many UNIX utilities. In this case, we’ll use the netcat utility to poll whether or not the Pod associated with the db Service is accepting TCP connections on port 27017.

This container command replicates the functionality of the wait-for script that we removed from our docker-compose.yaml file in Step 3. For a longer discussion of how and why our application used the wait-for script when working with Compose, please see Step 4 of Containerizing a Node.js Application for Development with Docker Compose.

Init Containers run to completion; in our case, this means that our Node application container will not start until the database container is running and accepting connections on port 27017. The db Service definition allows us to guarantee this functionality regardless of the exact location of the database container, which is mutable.

Save and close the file when you are finished editing.

With your database Service created and your Init Container in place to control the startup order of your containers, you can move on to checking the storage requirements in your PersistentVolumeClaim and exposing your application service using a LoadBalancer.

Step 6 — Modifying the PersistentVolumeClaim and Exposing the Application Frontend

Before running our application, we will make two final changes to ensure that our database storage will be provisioned properly and that we can expose our application frontend using a LoadBalancer.

First, let’s modify the storage resource defined in the PersistentVolumeClaim that kompose created for us. This Claim allows us to dynamically provision storage to manage our application’s state.

To work with PersistentVolumeClaims, you must have a StorageClass created and configured to provision storage resources. In our case, because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com, DigitalOcean Block Storage.

We can check this by typing:

  • kubectl get storageclass

If you are working with a DigitalOcean cluster, you will see the following output:

Output
NAME                         PROVISIONER                  AGE
do-block-storage (default)   dobs.csi.digitalocean.com    76m

If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.

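As a rough sketch of what such a definition can look like, the following manifest assumes an AWS cluster using the in-tree EBS provisioner; the name, provisioner, and parameters shown here are placeholders that you would replace for your own environment:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2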

When kompose created dbdata-persistentvolumeclaim.yaml, it set the storage resource to a size that does not meet the minimum size requirements of our provisioner. We will therefore need to modify our PersistentVolumeClaim to use the minimum viable DigitalOcean Block Storage unit: 1GB. Please feel free to modify this to meet your storage requirements.

Open dbdata-persistentvolumeclaim.yaml:

  • nano dbdata-persistentvolumeclaim.yaml

Replace the storage value with 1Gi:

storage值替换为1Gi

~/node_project/dbdata-persistentvolumeclaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: dbdata
  name: dbdata
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}

Also note that the accessMode: ReadWriteOnce means that the volume provisioned as a result of this Claim will be read-write only by a single node. Please see the documentation for more information about different access modes.

Save and close the file when you are finished.

Next, open nodejs-service.yaml:

  • nano nodejs-service.yaml

We are going to expose this Service externally using a DigitalOcean Load Balancer. If you are not using a DigitalOcean cluster, please consult the relevant documentation from your cloud provider for information about their load balancers. Alternatively, you can follow the official Kubernetes documentation on setting up a highly available cluster with kubeadm, but in this case you will not be able to use PersistentVolumeClaims to provision storage.

Within the Service spec, specify LoadBalancer as the Service type:

~/node_project/nodejs-service.yaml
apiVersion: v1
kind: Service
...
spec:
  type: LoadBalancer
  ports:
...

When we create the nodejs Service, a load balancer will be automatically created, providing us with an external IP where we can access our application.

Save and close the file when you are finished editing.

With all of our files in place, we are ready to start and test the application.

Step 7 — Starting and Accessing the Application

It’s time to create our Kubernetes objects and test that our application is working as expected.

To create the objects we’ve defined, we’ll use kubectl create with the -f flag, which will allow us to specify the files that kompose created for us, along with the files we wrote. Run the following command to create the Node application and MongoDB database Services and Deployments, along with your Secret, ConfigMap, and PersistentVolumeClaim:

  • kubectl create -f nodejs-service.yaml,nodejs-deployment.yaml,nodejs-env-configmap.yaml,db-service.yaml,db-deployment.yaml,dbdata-persistentvolumeclaim.yaml,secret.yaml

You will see the following output indicating that the objects have been created:

Output
service/nodejs created
deployment.extensions/nodejs created
configmap/nodejs-env created
service/db created
deployment.extensions/db created
persistentvolumeclaim/dbdata created
secret/mongo-secret created

To check that your Pods are running, type:

  • kubectl get pods

You don’t need to specify a Namespace here, since we have created our objects in the default Namespace. If you are working with multiple Namespaces, be sure to include the -n flag when running this command, along with the name of your Namespace.

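For example, if you had created your objects in a Namespace called your_namespace, the command would be:

  • kubectl get pods -n your_namespace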

You will see the following output while your db container is starting and your application Init Container is running:

db容器启动且应用程序初始化容器运行时,您将看到以下输出:


   
Output
NAME                      READY   STATUS              RESTARTS   AGE
db-679d658576-kfpsl       0/1     ContainerCreating   0          10s
nodejs-6b9585dc8b-pnsws   0/1     Init:0/1            0          10s

Once that container has run and your application and database containers have started, you will see this output:

Output
NAME                      READY   STATUS    RESTARTS   AGE
db-679d658576-kfpsl       1/1     Running   0          54s
nodejs-6b9585dc8b-pnsws   1/1     Running   0          54s

The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

Note: If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

  • kubectl describe pods your_pod

  • kubectl logs your_pod

With your containers running, you can now access the application. To get the IP for the LoadBalancer, type:

  • kubectl get svc

You will see the following output:

Output
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
db           ClusterIP      10.245.189.250   <none>        27017/TCP      93s
kubernetes   ClusterIP      10.245.0.1       <none>        443/TCP        25m12s
nodejs       LoadBalancer   10.245.15.56     your_lb_ip    80:30729/TCP   93s

The EXTERNAL_IP associated with the nodejs service is the IP address where you can access the application. If you see a <pending> status in the EXTERNAL_IP column, this means that your load balancer is still being created.

nodejs服务关联的EXTERNAL_IP是可以访问应用程序的IP地址。 如果您在EXTERNAL_IP列中看到<pending>状态,则表明您的负载均衡器仍在创建中。

Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

You should see the following landing page:

Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark’s general character:

In the form, add a shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

Click on the Submit button. You will see a page with this shark information displayed back to you:

You now have a single instance setup of a Node.js application with a MongoDB database running on a Kubernetes cluster.

Conclusion

The files you have created in this tutorial are a good starting point to build from as you move toward production. As you develop your application, you can work on implementing the following:

Translated from: https://www.digitalocean.com/community/tutorials/how-to-migrate-a-docker-compose-workflow-to-kubernetes
