Deploying Tarantool Cartridge Applications with Zero Effort (Part 2)

We have recently talked about how to deploy a Tarantool Cartridge application. However, an application's life doesn't end with deployment, so today we will update our application and figure out how to manage topology, sharding, and authorization, and change the role configuration.

Feeling interested? Please continue reading under the cut.

Where did we leave off?

Last time, we set up the following topology:

The sample repository has changed a bit: it now contains two new files, getting-started-app-2.0.0-0.rpm and hosts.updated.2.yml. You do not have to pull the new version; you can simply download the package via this link. And you only need hosts.updated.2.yml as a reference in case you have trouble changing the current inventory.

If you have followed all the steps from the previous part of this tutorial, you now have a cluster configuration with two storage replica sets in the hosts.yml file (hosts.updated.yml in the repository).

First, start the virtual machines:

$ vagrant up

An up-to-date version of the Tarantool Cartridge Ansible role should already be installed. Just in case, run the following command:

$ ansible-galaxy install tarantool.cartridge,1.0.2

So, the current cluster configuration:

--- 
all:
  vars:
    # common cluster variables
    cartridge_app_name: getting-started-app
    cartridge_package_path: ./getting-started-app-1.0.0-0.rpm  # path to package
 
    cartridge_cluster_cookie: app-default-cookie  # cluster cookie
 
    # common ssh options
    ansible_ssh_private_key_file: ~/.vagrant.d/insecure_private_key
    ansible_ssh_common_args: '-o IdentitiesOnly=yes -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
 
  # INSTANCES
  hosts:
    storage-1:
      config:
        advertise_uri: '172.19.0.2:3301'
        http_port: 8181
 
    app-1:
      config:
        advertise_uri: '172.19.0.3:3301'
        http_port: 8182
 
    storage-1-replica:
      config:
        advertise_uri: '172.19.0.3:3302'
        http_port: 8183
 
    storage-2:
      config:
        advertise_uri: '172.19.0.3:3303'
        http_port: 8184
 
    storage-2-replica:
      config:
        advertise_uri: '172.19.0.2:3302'
        http_port: 8185
 
  children:
    # GROUP INSTANCES BY MACHINES
    host1:
      vars:
        # first machine connection options
        ansible_host: 172.19.0.2
        ansible_user: vagrant
 
      hosts:  # instances to be started on the first machine
        storage-1:
        storage-2-replica:
 
    host2:
      vars:
        # second machine connection options
        ansible_host: 172.19.0.3
        ansible_user: vagrant
 
      hosts:  # instances to be started on the second machine
        app-1:
        storage-1-replica:
        storage-2:
 
    # GROUP INSTANCES BY REPLICA SETS
    replicaset_app_1:
      vars:  # replica set configuration
        replicaset_alias: app-1
        failover_priority:
          - app-1  # leader
        roles:
          - 'api'
 
      hosts:  # replica set instances
        app-1:
 
    replicaset_storage_1:
      vars:  # replica set configuration
        replicaset_alias: storage-1
        weight: 3
        failover_priority:
          - storage-1  # leader
          - storage-1-replica
        roles:
          - 'storage'
 
      hosts:   # replica set instances
        storage-1:
        storage-1-replica:
 
    replicaset_storage_2:
      vars:  # replica set configuration
        replicaset_alias: storage-2
        weight: 2
        failover_priority:
          - storage-2
          - storage-2-replica
        roles:
          - 'storage'
 
      hosts:   # replica set instances
        storage-2:
        storage-2-replica:

Go to http://localhost:8181/admin/cluster/dashboard and make sure that your cluster is operating correctly.

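If you prefer the command line to the Web UI, a quick sanity check (assuming the instance layout matches the inventory above) is to make sure the instance services are running on the machines. For example, on the first machine:

$ vagrant ssh vm1
[vagrant@vm1 ~]$ sudo systemctl status getting-started-app@storage-1
[vagrant@vm1 ~]$ sudo systemctl status getting-started-app@storage-2-replica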

As before, we change this file step by step and watch how the cluster changes. You can always look up the final version in hosts.updated.2.yml.

Let's start!

开始吧!

Updating the application

First, we are going to update our application. Make sure you have the getting-started-app-2.0.0-0.rpm file in your current directory (otherwise, download it from the repository).

Specify the path to a new version of the package:

---
all:
  vars:
    cartridge_app_name: getting-started-app
    cartridge_package_path: ./getting-started-app-2.0.0-0.rpm  # <==
    cartridge_enable_tarantool_repo: false  # <==

We have set cartridge_enable_tarantool_repo: false so that the role does not set up the repository with the Tarantool package, which we already installed last time. This slightly speeds up the deployment process, but it isn't obligatory.

Run the playbook with the cartridge-instances tag:

$ ansible-playbook -i hosts.yml playbook.yml \
                   --tags cartridge-instances

And check that the package has been updated:

$ vagrant ssh vm1
[vagrant@vm1 ~]$ sudo yum list installed | grep getting-started-app

Check that the version is 2.0.0:

getting-started-app.x86_64          2.0.0-0            installed

Now you can safely try out the new version of the application.

Enabling sharding

Let's enable sharding so that we can later get to managing storage replica sets. It's an easy thing to do. Add the cartridge_bootstrap_vshard variable to the all.vars section:

---
all:
  vars:
    ...
    cartridge_cluster_cookie: app-default-cookie  # cluster cookie
    cartridge_bootstrap_vshard: true  # <==
    ...
  hosts:
    ...
  children:
    ...

Run:

$ ansible-playbook -i hosts.yml playbook.yml \
                   --tags cartridge-config

Note that we have specified the cartridge-config tag to run only the tasks related to the cluster configuration.

Open the Web UI at http://localhost:8181/admin/cluster/dashboard and note that the buckets are distributed between the storage replica sets in proportion to their weights, 3 for storage-1 and 2 for storage-2 (as you may recall, we specified these weights for the replica sets):

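To see where this proportion comes from, here is the arithmetic, assuming the default vshard bucket_count of 30000 (we have not overridden it anywhere in the inventory):

# storage-1: 30000 * 3 / (3 + 2) = 18000 buckets
# storage-2: 30000 * 2 / (3 + 2) = 12000 buckets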

Enabling automatic failover

Now we are going to enable the automatic failover mode in order to find out what it is and how it works.

Add the cartridge_failover flag to the configuration:

---
all:
  vars:
    ...
    cartridge_cluster_cookie: app-default-cookie  # cluster cookie
    cartridge_bootstrap_vshard: true
    cartridge_failover: true  # <==
    ...
  hosts:
    ...
  children:
    ...

Start cluster management tasks again:

$ ansible-playbook -i hosts.yml playbook.yml \
                    --tags cartridge-config

When the playbook finishes successfully, you can go to the Web UI and make sure that the Failover switch in the top right corner is now switched on. To disable the automatic failover mode, simply change the value of cartridge_failover to false and run the playbook again.

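For reference, switching failover back off would be the mirror image of what we just did, roughly like this, followed by the same cartridge-config playbook run:

---
all:
  vars:
    ...
    cartridge_failover: false  # <==
    ...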

Now let's take a closer look at this mode and see why we enabled it.

Looking into failover

You have probably noticed the failover_priority variable that we specified for each replica set. Let's look into it.

Tarantool Cartridge provides an automatic failover mode. Each replica set has a leader, that is, the instance that writes are directed to. If anything happens to the leader, one of the replicas takes over its role. Which one? Look at the storage-2 replica set:

---
all:
  ...
  children:
    ...
    replicaset_storage_2:
      vars:
        ...
        failover_priority:
          - storage-2
          - storage-2-replica

In failover_priority, we specified the storage-2 instance as the first one. In the Web UI, it is the first one in the replica set instance list and is marked with a green crown. This is the leader, or the first instance specified in failover_priority:

Now let's see what happens if something is wrong with the replica set leader. Go to the virtual machine and stop the storage-2 instance:

$ vagrant ssh vm2
[vagrant@vm2 ~]$ sudo systemctl stop getting-started-app@storage-2

Back to the Web UI:

The crown of the storage-2 instance turns red, which means that the assigned leader is unhealthy. But storage-2-replica now has a green crown, so this instance took over the leader role until storage-2 comes back into operation. This is the automatic failover in action.

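If you want, you can double-check from the second machine that the instance really is down; its systemd service should be reported as inactive:

$ vagrant ssh vm2
[vagrant@vm2 ~]$ sudo systemctl status getting-started-app@storage-2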

Let's bring storage-2 back to life:

$ vagrant ssh vm2
[vagrant@vm2 ~]$ sudo systemctl start getting-started-app@storage-2

Everything is back to normal:

Now we change the instance order in failover priority. We make storage-2-replica the leader and remove storage-2 from the list:

---
all:
  vars:
    ...
  hosts:
    ...
  children:
    ...
    replicaset_storage_2:
      vars:  # replica set configuration
        ...
        failover_priority:
          - storage-2-replica  # <==
        ...

Run cartridge-replicasets tasks for instances from the replicaset_storage_2 group:

$ ansible-playbook -i hosts.yml playbook.yml \
                   --limit replicaset_storage_2 \
                   --tags cartridge-replicasets

Go to http://localhost:8181/admin/cluster/dashboard and check that the leader has changed:

But we removed the storage-2 instance from the configuration, so why is it still here? The fact is that when Cartridge receives a new failover_priority value, it arranges the instances as follows: the first instance from the list becomes the leader, followed by the other instances in the listed order. Instances left out of failover_priority are sorted by UUID and appended to the end.

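In other words, for our storage-2 replica set the effective order is now roughly the following (just a sketch of what Cartridge ends up with, not something you need to write anywhere):

failover_priority:
  - storage-2-replica  # listed explicitly, becomes the leader
  - storage-2          # left out, appended to the end (ordered by UUID)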

Expelling instances

What if you want to expel an instance from the topology? It is straightforward: just assign the expelled flag to it. Let's expel the storage-2-replica instance. It is the leader now, so Cartridge will not let us do this. But we're not afraid so we'll try:

---
all:
  vars:
    ...
  hosts:
    storage-2-replica:
      config:
        advertise_uri: '172.19.0.2:3302'
        http_port: 8185
      expelled: true  # <==
  ...

We specify the cartridge-replicasets tag because expelling an instance is a change in topology:

$ ansible-playbook -i hosts.yml playbook.yml \
                   --limit replicaset_storage_2 \
                   --tags cartridge-replicasets

Run the playbook and observe the error:

Cartridge doesn't let the current replica set leader be removed from the topology. This makes good sense because the replication is asynchronous, so expelling the leader is likely to cause data loss. We need to specify another leader and only then expel the instance. The role first applies the new replica set configuration and then proceeds to expelling the instance. So we change the failover_priority and run the playbook again:

---
all:
  vars:
    ...
  hosts:
    ...
  children:
    ...
    replicaset_storage_2:
      vars:  # replica set configuration
        ...
        failover_priority:
          - storage-2 # <==
        ...
$ ansible-playbook -i hosts.yml playbook.yml \
                   --limit replicaset_storage_2 \
                   --tags cartridge-replicasets

And so storage-2-replica disappears from the topology!

Please note that the instance is expelled permanently and irrevocably. After removing the instance from the topology, our Ansible role stops the systemd service and deletes all the files of this instance.

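If you are curious, you can check this on the first machine (storage-2-replica lived there according to our inventory); the systemd service of the expelled instance should no longer be running:

$ vagrant ssh vm1
[vagrant@vm1 ~]$ sudo systemctl status getting-started-app@storage-2-replica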

If you suddenly change your mind and decide that the storage-2 replica set still needs a second instance, you will not be able to restore it. Cartridge remembers the UUIDs of all the instances that have left the topology and will not allow the expelled one to return. You can start a new instance with the same name and configuration, but its UUID will obviously be different, so Cartridge will allow it to join.

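We won't do it here, but for illustration, bringing a replacement instance into the replica set would mean describing a brand-new instance and adding it to the host and replica set groups. The name, URI, and port below are hypothetical:

---
all:
  hosts:
    ...
    storage-2-replica-2:  # hypothetical replacement instance
      config:
        advertise_uri: '172.19.0.2:3304'
        http_port: 8186
  children:
    host1:
      hosts:
        ...
        storage-2-replica-2:
    replicaset_storage_2:
      hosts:
        ...
        storage-2-replica-2: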

Deleting replica sets

We have already found out that the replica set leader cannot be expelled. But what if we want to remove the storage-2 replica set permanently? Of course, there is a solution.

In order not to lose the data, we must first transfer all the buckets to storage-1. For this purpose, we set the weight of the storage-2 replica set to 0:

---
all:
  vars:
    ...
  hosts:
    ...
  children:
    ...
    replicaset_storage_2:
      vars:  # replica set configuration
        replicaset_alias: storage-2
        weight: 0  # <==
        ...
  ...

Start the topology control:

$ ansible-playbook -i hosts.yml playbook.yml \
                   --limit replicaset_storage_2 \
                   --tags cartridge-replicasets

Open the Web UI http://localhost:8181/admin/cluster/dashboard and watch all the buckets flow into storage-1:

Assign the expelled flag to the storage-2 leader and say goodbye to this replica set:

---
all:
  vars:
    ...
  hosts:
    ...
    storage-2:
      config:
        advertise_uri: '172.19.0.3:3303'
        http_port: 8184
      expelled: true  # <==
  ...
$ ansible-playbook -i hosts.yml playbook.yml \
                   --tags cartridge-replicasets

Note that this time we did not specify the limit option, because at least one of the instances the playbook runs on must not be marked as expelled.

So we're back to the original topology:

Authorization

Let's take our minds off replica set control and think about security. Right now, any unauthorized user can manage the cluster via the Web UI. We have to admit, that doesn't look too good.

With Cartridge, you can connect your own authorization module, such as LDAP (or whatever), and use it to manage users and their access to the application. But here we'll be using the built-in authorization module that Cartridge uses by default. This module allows you to perform basic operations with users (delete, add, edit) and implements password verification.

Please note that our Ansible role requires the authorization backend to implement all these functions.

Okay, we need to put theory into practice now. First, we are going to make authorization mandatory, set the session parameters, and add a new user:

---
all:
  vars:
    ...
 
    # authorization
    cartridge_auth:  # <==
      enabled: true   # enable authorization
      cookie_max_age: 1000
      cookie_renew_age: 100
 
      users:  # cartridge users to set up
        - username: dokshina
          password: cartridge-rullez
          fullname: Elizaveta Dokshina
          email: dokshina@example.com
          # deleted: true  # uncomment to delete user
    ...

Authorization is managed within the cartridge-config tasks, so specify this tag:

$ ansible-playbook -i hosts.yml playbook.yml \
                   --tags cartridge-config

Now http://localhost:8181/admin/cluster/dashboard has a surprise for you:

You can log in with the username and password of the new user, or as admin, the default user. The admin password is the cluster cookie; we specified this value in the cartridge_cluster_cookie variable (it is app-default-cookie, no need to go back and check).

After a successful login, we open the Users tab to make sure that everything goes well:

Try adding new users and changing their parameters. To delete a user, specify the deleted: true flag for that user. The email and fullname values are not used by Cartridge, but you can specify them for your convenience.

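For example, here is a sketch of what marking our user for deletion and adding a second, purely hypothetical one might look like:

cartridge_auth:
  enabled: true
  users:
    - username: dokshina
      password: cartridge-rullez
      fullname: Elizaveta Dokshina
      email: dokshina@example.com
      deleted: true  # this user will be removed
    - username: operator  # hypothetical second user
      password: change-me-please
      fullname: Cluster Operator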

Application configuration

Let's step back and skim through the whole story.

We have deployed a small application that stores data about customers and their bank accounts. As you may recall, this application has two implemented roles: api and storage. The storage role deals with data storage and sharding using the built-in vshard-storage role. The second role, api, implements an HTTP server with an API for data management; it relies on another built-in role, vshard-router, which routes requests to the storage shards.

So, we send the first request to the application API to add a new client:

$ curl -X POST -H "Content-Type: application/json" \
               -d '{"customer_id":1, "name":"Elizaveta", "accounts":[{"account_id": 1}]}' \
               http://localhost:8182/storage/customers/create

In return, we get something like this:

{"info":"Successfully created"}

Note that in the URL we have specified port 8182 of the app-1 instance, as this is the port where the API is served.

Now we update the balance of the new user:

$ curl -X POST -H "Content-Type: application/json" \
               -d '{"account_id": 1, "amount": "1000.00"}' \
               http://localhost:8182/storage/customers/1/update_balance

We see the updated balance in the response:

{"balance":"1000.00"}

All right, it works! The API is implemented, Cartridge takes care of data sharding, we have already configured the failover priority in case of emergency and enabled authorization. It's time to get down to configuring the application.

The current cluster configuration is stored in a distributed configuration file. Each instance stores a copy of this file, and Cartridge ensures that it is synchronized among all the nodes in the cluster. We can specify the role configuration of our application in this file, and Cartridge will make sure that the new configuration is distributed across all the instances.

Let's take a look at the current contents of this file. Go to the Configuration files tab and click on the Download button:

In the downloaded config.yml file, we find an empty table. It's no surprise because we haven't specified any parameters yet:

--- []
...

In fact, the cluster configuration file is not empty: it stores the current topology, authorization settings, and sharding parameters. Cartridge does not share this information so easily; the file is intended for internal use, and therefore stored in hidden system sections that you cannot edit.

Each application role can use one or more configuration sections. The new configuration is loaded in two steps. First, all the roles verify that they are ready to accept the new parameters. If there are no problems, the changes are applied; otherwise, the changes are rolled back.

Now get back to the application. The api role uses the max-balance section, where the maximum allowed balance for a single client account is stored. Let's configure this section using our Ansible role (not manually, of course).

So far, the application configuration (more precisely, its available part) is an empty table. Let's add a max-balance section with a value of 1,000,000 by specifying the cartridge_app_config variable in the inventory file:

---
all:
  vars:
    ...
    # cluster-wide config
    cartridge_app_config:  # <==
      max-balance:  # section name
        body: 1000000  # section body
        # deleted: true  # uncomment to delete section max-balance
    ...

We have specified a section name (max-balance) and its contents (body). The content of the section can be more than just a number; it can also be a table or a string depending on how the role is written and what type of value you want to use.

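For instance, a section whose body is a table rather than a number could be described like this (the notify section and its fields are made up purely for illustration; your role has to know what to do with whatever you put here):

cartridge_app_config:
  max-balance:
    body: 1000000
  notify:  # hypothetical section with a table body
    body:
      email: admin@example.com
      threshold: 500000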

Run:

$ ansible-playbook -i hosts.yml playbook.yml \
                   --tags cartridge-config

And check that the maximum allowed balance has indeed changed:

$ curl -X POST -H "Content-Type: application/json" \
               -d '{"account_id": 1, "amount": "1000001"}' \
               http://localhost:8182/storage/customers/1/update_balance

In return, we get an error, just as we wanted:

{"info":"Error","error":"Maximum is 1000000"}

You can download the configuration file from the Configuration files tab once again to make sure the new section is there:

---
max-balance: 1000000
...

Try adding new sections to the application configuration, changing their contents, or deleting them altogether (to delete a section, set the deleted: true flag inside it).

For more information on using the distributed configuration in roles, see the Tarantool Cartridge documentation.

Don't forget to run vagrant halt to stop the virtual machines when you're done.

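That is, once you are done experimenting:

$ vagrant halt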

Summary

Last time we learned how to deploy distributed Tarantool Cartridge applications using a special Ansible role. Today we updated the application and learned how to manage application topology, sharding, authorization, and configuration.

As a next step, you can try different approaches to writing Ansible playbooks and use your applications in the most convenient way.

If something doesn't work or you have ideas on how to improve our Ansible role, please feel free to create a ticket. We are always happy to help and open to any ideas and suggestions!

Original article: https://habr.com/en/company/mailru/blog/496982/
