How To Scale a Node.js Application with MongoDB on Kubernetes Using Helm


Introduction

Kubernetes is a system for running modern, containerized applications at scale. With it, developers can deploy and manage applications across clusters of machines. And though it can be used to improve efficiency and reliability in single-instance application setups, Kubernetes is designed to run multiple instances of an application across groups of machines.


When creating multi-service deployments with Kubernetes, many developers opt to use the Helm package manager. Helm streamlines the process of creating multiple Kubernetes resources by offering charts and templates that coordinate how these objects interact. It also offers pre-packaged charts for popular open-source projects.


In this tutorial, you will deploy a Node.js application with a MongoDB database onto a Kubernetes cluster using Helm charts. You will use the official Helm MongoDB replica set chart to create a StatefulSet object consisting of three Pods, a Headless Service, and three PersistentVolumeClaims. You will also create a chart to deploy a multi-replica Node.js application using a custom application image. The setup you will build in this tutorial will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build a resilient Node.js application with a MongoDB data store that can scale with your needs.


Prerequisites

To complete this tutorial, you will need:

  • A Kubernetes cluster with kubectl installed and configured to connect to it (this tutorial uses a DigitalOcean Kubernetes cluster)

  • The Helm package manager installed on your local machine, with Tiller installed on your cluster (the commands in this tutorial use Helm v2 syntax)

  • Docker installed on your machine, along with a Docker Hub account for storing the application image

Step 1 — Cloning and Packaging the Application

To use our application with Kubernetes, we will need to package it so that the kubelet agent can pull the image. Before packaging the application, however, we will need to modify the MongoDB connection URI in the application code to ensure that our application can connect to the members of the replica set that we will create with the Helm mongodb-replicaset chart.


Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application with a MongoDB database to demonstrate how to set up a development environment with Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.


Clone the repository into a directory called node_project:


  • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project


Navigate to the node_project directory:


  • cd node_project


The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application’s state has been offloaded to a MongoDB database.


For more information about designing modern, containerized applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.


When we deploy the Helm mongodb-replicaset chart, it will create:


  • A StatefulSet object with three Pods — the members of the MongoDB replica set. Each Pod will have an associated PersistentVolumeClaim and will maintain a fixed identity in the event of rescheduling.


  • A MongoDB replica set made up of the Pods in the StatefulSet. The set will include one primary and two secondaries. Data will be replicated from the primary to the secondaries, ensuring that our application data remains highly available.


For our application to interact with the database replicas, the MongoDB connection URI in our code will need to include both the hostnames of the replica set members as well as the name of the replica set itself. We therefore need to include these values in the URI.


The file in our cloned repository that specifies database connection information is called db.js. Open that file now using nano or your favorite editor:


  • nano db.js


Currently, the file includes constants that are referenced in the database connection URI at runtime. The values for these constants are injected using Node’s process.env property, which returns an object with information about your user environment at runtime. Setting values dynamically in our application code allows us to decouple the code from the underlying infrastructure, which is necessary in a dynamic, stateless environment. For more information about refactoring application code in this way, see Step 2 of Containerizing a Node.js Application for Development With Docker Compose and the relevant discussion in The 12-Factor App.


The constants for the connection URI and the URI string itself currently look like this:


~/node_project/db.js
...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB
} = process.env;

...

const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
...

In keeping with a 12FA approach, we do not want to hard code the hostnames of our replica instances or our replica set name into this URI string. The existing MONGO_HOSTNAME constant can be expanded to include multiple hostnames — the members of our replica set — so we will leave that in place. We will need to add a replica set constant to the options section of the URI string, however.


Add MONGO_REPLICASET to both the URI constant object and the connection string:


~/node_project/db.js
...
const {
  MONGO_USERNAME,
  MONGO_PASSWORD,
  MONGO_HOSTNAME,
  MONGO_PORT,
  MONGO_DB,
  MONGO_REPLICASET
} = process.env;

...
const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?replicaSet=${MONGO_REPLICASET}&authSource=admin`;
...

Using the replicaSet option in the options section of the URI allows us to pass in the name of the replica set, which, along with the hostnames defined in the MONGO_HOSTNAME constant, will allow us to connect to the set members.

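For illustration only, here is how the completed URI might resolve at runtime. The values below are hypothetical: they assume a user named sammy, a database named sharkinfo, the replica set name db, and the stable Pod hostnames that the MongoDB chart will create later in this tutorial. Note that the port from MONGO_PORT is appended only after the final hostname; the MongoDB driver assumes the default port 27017 for hosts listed without one:

mongodb://sammy:your_password@mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017/sharkinfo?replicaSet=db&authSource=admin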

Save and close the file when you are finished editing.


With your database connection information modified to work with replica sets, you can now package your application, build the image with the docker build command, and push it to Docker Hub.


Build the image with docker build and the -t flag, which allows you to tag the image with a memorable name. In this case, tag the image with your Docker Hub username and name it node-replicas or a name of your own choosing:


  • docker build -t your_dockerhub_username/node-replicas .


The . in the command specifies that the build context is the current directory.


It will take a minute or two to build the image. Once it is complete, check your images:


  • docker images


You will see the following output:



   
Output
REPOSITORY                              TAG         IMAGE ID       CREATED         SIZE
your_dockerhub_username/node-replicas   latest      56a69b4bc882   7 seconds ago   90.1MB
node                                    10-alpine   aa57b0242b33   6 days ago      71MB

Next, log in to the Docker Hub account you created in the prerequisites:


  • docker login -u your_dockerhub_username


When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your non-root user’s home directory with your Docker Hub credentials.


Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:


  • docker push your_dockerhub_username/node-replicas


You now have an application image that you can pull to run your replicated application with Kubernetes. The next step will be to configure specific parameters to use with the MongoDB Helm chart.


Step 2 — Creating Secrets for the MongoDB Replica Set

The stable/mongodb-replicaset chart provides different options when it comes to using Secrets, and we will create two to use with our chart deployment:


  • A Secret for our replica set keyfile that will function as a shared password between replica set members, allowing them to authenticate other members.


  • A Secret for our MongoDB admin user, who will be created as a root user on the admin database. This role will allow you to create subsequent users with limited permissions when deploying your application to production.


With these Secrets in place, we will be able to set our preferred parameter values in a dedicated values file and create the StatefulSet object and MongoDB replica set with the Helm chart.


First, let’s create the keyfile. We will use the openssl command with the rand option to generate a 756 byte random string for the keyfile:


  • openssl rand -base64 756 > key.txt


The output generated by the command will be base64 encoded, ensuring uniform data transmission, and redirected to a file called key.txt, following the guidelines stated in the mongodb-replicaset chart authentication documentation. The key itself must be between 6 and 1024 characters long, consisting only of characters in the base64 set.

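If you would like to sanity-check the generated keyfile, you can count its bytes. Assuming the command above ran as shown, 756 random bytes encode to 1008 base64 characters, and openssl adds a line break every 64 characters, so the count should come to roughly 1024 bytes:

  • wc -c key.txt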

You can now create a Secret called keyfilesecret using this file with kubectl create:


  • kubectl create secret generic keyfilesecret --from-file=key.txt


This will create a Secret object in the default namespace, since we have not created a specific namespace for our setup.


You will see the following output indicating that your Secret has been created:



   
Output
secret/keyfilesecret created

Remove key.txt:


  • rm key.txt


Alternatively, if you would like to save the file, be sure to restrict its permissions and add it to your .gitignore file to keep it out of version control.


Next, create the Secret for your MongoDB admin user. The first step will be to convert your desired username and password to base64.


Convert your database username:


  • echo -n 'your_database_username' | base64


Note down the value you see in the output.


Next, convert your password:


  • echo -n 'your_database_password' | base64


Take note of the value in the output here as well.

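If you want to double-check an encoded value before using it, you can decode it with the same utility. For example, with the hypothetical encoded username c2FtbXk=:

  • echo 'c2FtbXk=' | base64 --decode

This prints the original string (here, sammy) with no trailing newline, since the value was encoded with echo -n.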

Open a file for the Secret:


  • nano secret.yaml


Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your YAML files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:


  • kubectl create -f your_yaml_file.yaml --dry-run --validate=true


In general, it is a good idea to validate your syntax before creating resources with kubectl.


Add the following code to the file to create a Secret that will define a user and password with the encoded values you just created. Be sure to replace the dummy values here with your own encoded username and password:


~/node_project/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
data:
  user: your_encoded_username
  password: your_encoded_password

Here, we’re using the key names that the mongodb-replicaset chart expects: user and password. We have named the Secret object mongo-secret, but you are free to name it anything you would like.


Save and close the file when you are finished editing.


Create the Secret object with the following command:


  • kubectl create -f secret.yaml


You will see the following output:



   
Output
secret/mongo-secret created

Again, you can either remove secret.yaml or restrict its permissions and add it to your .gitignore file.

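If you would like to confirm that both objects now exist, you can list them by name:

  • kubectl get secret keyfilesecret mongo-secret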

With your Secret objects created, you can move on to specifying the parameter values you will use with the mongodb-replicaset chart and creating the MongoDB deployment.


Step 3 — Configuring the MongoDB Helm Chart and Creating a Deployment

Helm comes with an actively maintained repository called stable that contains the chart we will be using: mongodb-replicaset. To use this chart with the Secrets we’ve just created, we will create a file with configuration parameter values called mongodb-values.yaml and then install the chart using this file.


Our mongodb-values.yaml file will largely mirror the default values.yaml file in the mongodb-replicaset chart repository. We will, however, make the following changes to our file:


  • We will set the auth parameter to true to ensure that our database instances start with authorization enabled. This means that all clients will be required to authenticate for access to database resources and operations.


  • We will add information about the Secrets we created in the previous Step so that the chart can use these values to create the replica set keyfile and admin user.

  • We will decrease the size of the PersistentVolumes associated with each Pod in the StatefulSet to use the minimum viable DigitalOcean Block Storage unit, 1GB, though you are free to modify this to meet your storage requirements.


Before writing the mongodb-values.yaml file, however, you should first check that you have a StorageClass created and configured to provision storage resources. Each of the Pods in your database StatefulSet will have a sticky identity and an associated PersistentVolumeClaim, which will dynamically provision a PersistentVolume for the Pod. If a Pod is rescheduled, the PersistentVolume will be mounted to whichever node the Pod is scheduled on (though each Volume must be manually deleted if its associated Pod or StatefulSet is permanently deleted).


Because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.comDigitalOcean Block Storage — which we can check by typing:


  • kubectl get storageclass


If you are working with a DigitalOcean cluster, you will see the following output:



   
Output
NAME                         PROVISIONER                  AGE
do-block-storage (default)   dobs.csi.digitalocean.com    21m

If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.


Now that you have ensured that you have a StorageClass configured, open mongodb-values.yaml for editing:


  • nano mongodb-values.yaml


You will set values in this file that will do the following:


  • Enable authorization.

  • Reference your keyfilesecret and mongo-secret objects.


  • Specify 1Gi for your PersistentVolumes.


  • Set your replica set name to db.


  • Specify 3 replicas for the set.


  • Pin the mongo image to the latest version at the time of writing: 4.1.9.


Paste the following code into the file:


~/node_project/mongodb-values.yaml
replicas: 3
port: 27017
replicaSetName: db
podDisruptionBudget: {}
auth:
  enabled: true
  existingKeySecret: keyfilesecret
  existingAdminSecret: mongo-secret
imagePullSecrets: []
installImage:
  repository: unguiculus/mongodb-install
  tag: 0.7
  pullPolicy: Always
copyConfigImage:
  repository: busybox
  tag: 1.29.3
  pullPolicy: Always
image:
  repository: mongo
  tag: 4.1.9
  pullPolicy: Always
extraVars: {}
metrics:
  enabled: false
  image:
    repository: ssalaues/mongodb-exporter
    tag: 0.6.1
    pullPolicy: IfNotPresent
  port: 9216
  path: /metrics
  socketTimeout: 3s
  syncTimeout: 1m
  prometheusServiceDiscovery: true
  resources: {}
podAnnotations: {}
securityContext:
  enabled: true
  runAsUser: 999
  fsGroup: 999
  runAsNonRoot: true
init:
  resources: {}
  timeout: 900
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
extraLabels: {}
persistentVolume:
  enabled: true
  #storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 1Gi
  annotations: {}
serviceAnnotations: {}
terminationGracePeriodSeconds: 30
tls:
  enabled: false
configmap: {}
readinessProbe:
  initialDelaySeconds: 5
  timeoutSeconds: 1
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
livenessProbe:
  initialDelaySeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1

The persistentVolume.storageClass parameter is commented out here: removing the comment and setting its value to "-" would disable dynamic provisioning. In our case, because we are leaving this value undefined, the chart will choose the default provisioner — in our case, dobs.csi.digitalocean.com.


Also note the accessMode associated with the persistentVolume key: ReadWriteOnce means that the provisioned volume will be read-write only by a single node. Please see the documentation for more information about different access modes.


To learn more about the other parameters included in the file, see the configuration table included with the repo.

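If you would like to compare your file against the chart's defaults from the command line, Helm can print them for you:

  • helm inspect values stable/mongodb-replicaset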

Save and close the file when you are finished editing.


Before deploying the mongodb-replicaset chart, you will want to update the stable repo with the helm repo update command:


  • helm repo update


This will get the latest chart information from the stable repository.


Finally, install the chart with the following command:


  • helm install --name mongo -f mongodb-values.yaml stable/mongodb-replicaset


Note: Before installing a chart, you can run helm install with the --dry-run and --debug options to check the generated manifests for your release:


  • helm install --name your_release_name -f your_values_file.yaml --dry-run --debug your_chart


Note that we are naming the Helm release mongo. This name will refer to this particular deployment of the chart with the configuration options we’ve specified. We’ve pointed to these options by including the -f flag and our mongodb-values.yaml file.


Also note that because we did not include the --namespace flag with helm install, our chart objects will be created in the default namespace.


Once you have created the release, you will see output about its status, along with information about the created objects and instructions for interacting with them:



   
Output
NAME:   mongo
LAST DEPLOYED: Tue Apr 16 21:51:05 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME                               DATA   AGE
mongo-mongodb-replicaset-init      1      1s
mongo-mongodb-replicaset-mongodb   1      1s
mongo-mongodb-replicaset-tests     1      0s
...
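
If you need to see this status information again later, you can redisplay it for the release at any time:

  • helm status mongo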

You can now check on the creation of your Pods with the following command:


  • kubectl get pods


You will see output like the following as the Pods are being created:



   
Output
NAME                         READY   STATUS     RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running    0          67s
mongo-mongodb-replicaset-1   0/1     Init:0/3   0          8s

The READY and STATUS outputs here indicate that the Pods in our StatefulSet are not fully ready: the Init Containers associated with the Pod’s containers are still running. Because StatefulSet members are created in sequential order, each Pod in the StatefulSet must be Running and Ready before the next Pod will be created.

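Rather than re-running the previous command, you can watch the StatefulSet work through this sequence by adding the -w flag, which streams Pod status changes until you interrupt it with CTRL+C:

  • kubectl get pods -w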

Once the Pods have been created and all of their associated containers are running, you will see this output:



   
Output
NAME                         READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0   1/1     Running   0          2m33s
mongo-mongodb-replicaset-1   1/1     Running   0          94s
mongo-mongodb-replicaset-2   1/1     Running   0          36s

The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.


Note: If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:


  • kubectl describe pods your_pod


  • kubectl logs your_pod


Each of the Pods in your StatefulSet has a name that combines the name of the StatefulSet with the ordinal index of the Pod. Because we created three replicas, our StatefulSet members are numbered 0-2, and each has a stable DNS entry comprised of the following elements: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local.


In our case, the StatefulSet and the Headless Service created by the mongodb-replicaset chart have the same names:


  • kubectl get statefulset


   
Output
NAME                       READY   AGE
mongo-mongodb-replicaset   3/3     4m2s
  • kubectl get svc


   
Output
NAME                              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)     AGE
kubernetes                        ClusterIP   10.245.0.1   <none>        443/TCP     42m
mongo-mongodb-replicaset          ClusterIP   None         <none>        27017/TCP   4m35s
mongo-mongodb-replicaset-client   ClusterIP   None         <none>        27017/TCP   4m35s

This means that the first member of our StatefulSet will have the following DNS entry:


mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local

Because we need our application to connect to each MongoDB instance, it’s essential that we have this information so that we can communicate directly with the Pods, rather than with the Service. When we create our custom application Helm chart, we will pass the DNS entries for each Pod to our application using environment variables.

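If you would like to confirm that these DNS entries resolve from inside the cluster, one option is a throwaway busybox Pod; the Pod name dns-test here is arbitrary:

  • kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local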

With your database instances up and running, you are ready to create the chart for your Node application.


Step 4 — Creating a Custom Application Chart and Configuring Parameters

We will create a custom Helm chart for our Node application and modify the default files in the standard chart directory so that our application can work with the replica set we have just created. We will also create files to define ConfigMap and Secret objects for our application.


First, create a new chart directory called nodeapp with the following command:


  • helm create nodeapp


This will create a directory called nodeapp in your ~/node_project folder with the following resources:


  • A Chart.yaml file with basic information about your chart.


  • A values.yaml file that allows you to set specific parameter values, as you did with your MongoDB deployment.


  • A .helmignore file with file and directory patterns that will be ignored when packaging charts.


  • A templates/ directory with the template files that will generate Kubernetes manifests.


  • A templates/tests/ directory for test files.


  • A charts/ directory for any charts that this chart depends on.


The first file we will modify out of these default files is values.yaml. Open that file now:


  • nano nodeapp/values.yaml


The values that we will set here include:


  • The number of replicas.

  • The application image we want to use. In our case, this will be the node-replicas image we created in Step 1.


  • The ServiceType. In this case, we will specify LoadBalancer to create a point of access to our application for testing purposes. Because we are working with a DigitalOcean Kubernetes cluster, this will create a DigitalOcean Load Balancer when we deploy our chart. In production, you can configure your chart to use Ingress Resources and Ingress Controllers to route traffic to your Services.


  • The targetPort to specify the port on the Pod where our application will be exposed.


We will not enter environment variables into this file. Instead, we will create templates for ConfigMap and Secret objects and add these values to our application Deployment manifest, located at ~/node_project/nodeapp/templates/deployment.yaml.


Configure the following values in the values.yaml file:

values.yaml文件中配置以下值:

~/node_project/nodeapp/values.yaml
# Default values for nodeapp.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 3

image:
  repository: your_dockerhub_username/node-replicas
  tag: latest
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: LoadBalancer
  port: 80
  targetPort: 8080
...

Save and close the file when you are finished editing.


Next, open a secret.yaml file in the nodeapp/templates directory:


  • nano nodeapp/templates/secret.yaml


In this file, add values for your MONGO_USERNAME and MONGO_PASSWORD application constants. These are the constants that your application will expect to have access to at runtime, as specified in db.js, your database connection file. As you add the values for these constants, remember to use the base64-encoded values that you used earlier in Step 2 when creating your mongo-secret object. If you need to recreate those values, you can return to Step 2 and run the relevant commands again.


Add the following code to the file:


~/node_project/nodeapp/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-auth
data:
  MONGO_USERNAME: your_encoded_username
  MONGO_PASSWORD: your_encoded_password

The name of this Secret object will depend on the name of your Helm release, which you will specify when you deploy the application chart.


Save and close the file when you are finished.

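To preview how this template will render with a given release name, you can use the dry-run approach described in Step 3. The release name nodejs below anticipates the name we will use when we deploy the chart; the command prints the generated manifests, including the rendered nodejs-auth Secret, without creating any resources:

  • helm install --name nodejs ./nodeapp --dry-run --debug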

Next, open a file to create a ConfigMap for your application:


  • nano nodeapp/templates/configmap.yaml


In this file, we will define the remaining variables that our application expects: MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET. Our MONGO_HOSTNAME variable will include the DNS entry for each instance in our replica set, since this is what the MongoDB connection URI requires.


According to the Kubernetes documentation, when an application implements liveness and readiness checks, SRV records should be used when connecting to the Pods. As discussed in Step 3, our Pod SRV records follow this pattern: $(statefulset-name)-$(ordinal).$(service name).$(namespace).svc.cluster.local. Since our MongoDB StatefulSet implements liveness and readiness checks, we should use these stable identifiers when defining the values of the MONGO_HOSTNAME variable.


Add the following code to the file to define the MONGO_HOSTNAME, MONGO_PORT, MONGO_DB, and MONGO_REPLICASET variables. You are free to use another name for your MONGO_DB database, but your MONGO_HOSTNAME and MONGO_REPLICASET values must be written as they appear here:


~/node_project/nodeapp/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  MONGO_HOSTNAME: "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local,mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local"  
  MONGO_PORT: "27017"
  MONGO_DB: "sharkinfo"
  MONGO_REPLICASET: "db"

Because we have already created the StatefulSet object and replica set, the hostnames that are listed here must be listed in your file exactly as they appear in this example. If you destroy these objects and rename your MongoDB Helm release, then you will need to revise the values included in this ConfigMap. The same applies for MONGO_REPLICASET, since we specified the replica set name with our MongoDB release.


Also note that the values listed here are quoted, which is the expectation for environment variables in Helm.


Save and close the file when you are finished editing.


With your chart parameter values defined and your Secret and ConfigMap manifests created, you can edit the application Deployment template to use your environment variables.


Step 5 — Integrating Environment Variables into Your Helm Deployment

With the files for our application Secret and ConfigMap in place, we will need to make sure that our application Deployment can use these values. We will also customize the liveness and readiness probes that are already defined in the Deployment manifest.


Open the application Deployment template for editing:


  • nano nodeapp/templates/deployment.yaml


Though this is a YAML file, Helm templates use a different syntax from standard Kubernetes YAML files in order to generate manifests. For more information about templates, see the Helm documentation.


In the file, first add an env key to your application container specifications, below the imagePullPolicy key and above ports:


~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        ports:

Next, add the following keys to the list of env variables:


~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        env:
        - name: MONGO_USERNAME
          valueFrom:
            secretKeyRef:
              key: MONGO_USERNAME
              name: {{ .Release.Name }}-auth
        - name: MONGO_PASSWORD
          valueFrom:
            secretKeyRef:
              key: MONGO_PASSWORD
              name: {{ .Release.Name }}-auth
        - name: MONGO_HOSTNAME
          valueFrom:
            configMapKeyRef:
              key: MONGO_HOSTNAME
              name: {{ .Release.Name }}-config
        - name: MONGO_PORT
          valueFrom:
            configMapKeyRef:
              key: MONGO_PORT
              name: {{ .Release.Name }}-config
        - name: MONGO_DB
          valueFrom:
            configMapKeyRef:
              key: MONGO_DB
              name: {{ .Release.Name }}-config      
        - name: MONGO_REPLICASET
          valueFrom:
            configMapKeyRef:
              key: MONGO_REPLICASET
              name: {{ .Release.Name }}-config

Each variable includes a reference to its value, defined either by a secretKeyRef key, in the case of Secret values, or configMapKeyRef for ConfigMap values. These keys point to the Secret and ConfigMap files we created in the previous Step.


Next, under the ports key, modify the containerPort definition to specify the port on the container where our application will be exposed:


~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      ...

Next, let’s modify the liveness and readiness checks that are included in this Deployment manifest by default. These checks ensure that our application Pods are running and ready to serve traffic:


  • Readiness probes assess whether or not a Pod is ready to serve traffic, stopping all requests to the Pod until the checks succeed.

  • Liveness probes check basic application behavior to determine whether or not the application in the container is running and behaving as expected. If a liveness probe fails, Kubernetes will restart the container.


For more about both, see the relevant discussion in Architecting Applications for Kubernetes.


In our case, we will build on the httpGet request that Helm has provided by default and test whether or not our application is accepting requests on the /sharks endpoint. The kubelet service will perform the probe by sending a GET request to the Node server running in the application Pod’s container and listening on port 8080. If the status code for the response is between 200 and 400, then the kubelet will conclude that the container is healthy. Otherwise, in the case of a 400 or 500 status, kubelet will either stop traffic to the container, in the case of the readiness probe, or restart the container, in the case of the liveness probe.


Add the following modification to the stated path for the liveness and readiness probes:


~/node_project/nodeapp/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
...
  spec:
    containers:
    ...
      env:
    ...
      ports:
        - name: http
          containerPort: 8080
          protocol: TCP
      livenessProbe:
        httpGet:
          path: /sharks
          port: http
      readinessProbe:
        httpGet:
          path: /sharks
          port: http

Save and close the file when you are finished editing.


You are now ready to create your application release with Helm. Run the following helm install command, which includes the name of the release and the location of the chart directory:


  • helm install --name nodejs ./nodeapp


Remember that you can run helm install with the --dry-run and --debug options first, as discussed in Step 3, to check the generated manifests for your release.


Again, because we are not including the --namespace flag with helm install, our chart objects will be created in the default namespace.


You will see the following output indicating that your release has been created:



   
Output
NAME:   nodejs
LAST DEPLOYED: Wed Apr 17 18:10:29 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME            DATA   AGE
nodejs-config   4      1s

==> v1/Deployment
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nodejs-nodeapp   0/3     3            0           1s
...

Again, the output will indicate the status of the release, along with information about the created objects and how you can interact with them.


Check the status of your Pods:


  • kubectl get pods


   
Output
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          57m
mongo-mongodb-replicaset-1        1/1     Running   0          56m
mongo-mongodb-replicaset-2        1/1     Running   0          55m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          117s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          117s

Once your Pods are up and running, check your Services:


  • kubectl get svc


   
Output
NAME                              TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes                        ClusterIP      10.245.0.1     <none>        443/TCP        96m
mongo-mongodb-replicaset          ClusterIP      None           <none>        27017/TCP      58m
mongo-mongodb-replicaset-client   ClusterIP      None           <none>        27017/TCP      58m
nodejs-nodeapp                    LoadBalancer   10.245.33.46   your_lb_ip    80:31518/TCP   3m22s

The EXTERNAL-IP associated with the nodejs-nodeapp Service is the IP address where you can access the application from outside of the cluster. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.

nodejs-nodeapp服务关联的EXTERNAL_IP是可以从集群外部访问应用程序的IP地址。 如果您在EXTERNAL_IP列中看到<pending>状态,则表明您的负载均衡器仍在创建中。

Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.


You should see the following landing page:

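You can also check the application from the command line with curl against the /sharks endpoint that our probes target, replacing your_lb_ip with the external IP from the previous output. A 200-level response indicates that the application is serving traffic:

  • curl -I http://your_lb_ip/sharks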

Now that your replicated application is working, let’s add some test data to ensure that replication is working between members of the replica set.


Step 6 — Testing MongoDB Replication

With our application running and accessible through an external IP address, we can add some test data and ensure that it is being replicated between the members of our MongoDB replica set.


First, make sure you have navigated your browser to the application landing page:


Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark’s general character:


In the form, add an initial shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:


Click on the Submit button. You will see a page with this shark information displayed back to you:


Now head back to the shark information form by clicking on Sharks in the top navigation bar:


Enter a new shark of your choosing. We’ll go with Whale Shark and Large:


Once you click Submit, you will see that the new shark has been added to the shark collection in your database:


Let’s check that the data we’ve entered has been replicated between the primary and secondary members of our replica set.


Get a list of your Pods:


  • kubectl get pods


   
Output
NAME                              READY   STATUS    RESTARTS   AGE
mongo-mongodb-replicaset-0        1/1     Running   0          74m
mongo-mongodb-replicaset-1        1/1     Running   0          73m
mongo-mongodb-replicaset-2        1/1     Running   0          72m
nodejs-nodeapp-577df49dcc-b5fq5   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-bkk66   1/1     Running   0          5m4s
nodejs-nodeapp-577df49dcc-lpmt2   1/1     Running   0          5m4s

To access the mongo shell on your Pods, you can use the kubectl exec command and the username you used to create your mongo-secret in Step 2. Access the mongo shell on the first Pod in the StatefulSet with the following command:


  • kubectl exec -it mongo-mongodb-replicaset-0 -- mongo -u your_database_username -p --authenticationDatabase admin


When prompted, enter the password associated with this username:



   
Output
MongoDB shell version v4.1.9
Enter password:

You will be dropped into an administrative shell:



   
Output
MongoDB server version: 4.1.9

Welcome to the MongoDB shell.
...
db:PRIMARY>

Though the prompt itself includes this information, you can manually check to see which replica set member is the primary with the rs.isMaster() method:


  • rs.isMaster()


You will see output like the following, indicating the hostname of the primary:



   
Output
db:PRIMARY> rs.isMaster()
{
        "hosts" : [
                "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-1.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
                "mongo-mongodb-replicaset-2.mongo-mongodb-replicaset.default.svc.cluster.local:27017"
        ],
        ...
        "primary" : "mongo-mongodb-replicaset-0.mongo-mongodb-replicaset.default.svc.cluster.local:27017",
        ...

Next, switch to your sharkinfo database:


  • use sharkinfo



   
Output
switched to db sharkinfo

List the collections in the database:


  • show collections


   
Output
sharks

Output the documents in the collection:


  • db.sharks.find()


You will see the following output:



   
Output
{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
{ "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }
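
Before exiting, you can optionally get a fuller picture of replication health with the rs.status() method, which reports the state (PRIMARY or SECONDARY) and sync status of each member:

  • rs.status()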

Exit the MongoDB Shell:


  • exit


Now that we have checked the data on our primary, let’s check that it’s being replicated to a secondary. kubectl exec into mongo-mongodb-replicaset-1 with the following command:


  • kubectl exec -it mongo-mongodb-replicaset-1 -- mongo -u your_database_username -p --authenticationDatabase admin


Once in the administrative shell, we will need to use the db.setSlaveOk() method to permit read operations from the secondary instance:


  • db.setSlaveOk(1)


Switch to the sharkinfo database:


  • use sharkinfo


   
Output
switched to db sharkinfo

Permit the read operation of the documents in the sharks collection:


  • db.setSlaveOk(1)


Output the documents in the collection:


  • db.sharks.find()


You should now see the same information that you saw when running this method on your primary instance:



   
Output
db:SECONDARY> db.sharks.find()
{ "_id" : ObjectId("5cb7702c9111a5451c6dc8bb"), "name" : "Megalodon Shark", "character" : "Ancient", "__v" : 0 }
{ "_id" : ObjectId("5cb77054fcdbf563f3b47365"), "name" : "Whale Shark", "character" : "Large", "__v" : 0 }

This output confirms that your application data is being replicated between the members of your replica set.


Conclusion

You have now deployed a replicated, highly-available shark information application on a Kubernetes cluster using Helm charts. This demo application and the workflow outlined in this tutorial can act as a starting point as you build custom charts for your application and take advantage of Helm’s stable repository and other chart repositories.


As you move toward production, consider implementing the following:


To learn more about Helm, see An Introduction to Helm, the Package Manager for Kubernetes, How To Install Software on Kubernetes Clusters with the Helm Package Manager, and the Helm documentation.


Translated from: https://www.digitalocean.com/community/tutorials/how-to-scale-a-node-js-application-with-mongodb-on-kubernetes-using-helm
