A Complete Guide to End-to-End API Testing with Docker

Testing is a pain in general. Some don't see the point. Some see it but think of it as an extra step slowing them down. Sometimes tests exist but take too long to run, or are unstable. In this article you'll see how you can engineer tests for yourself with Docker.


We want fast, meaningful and reliable tests written and maintained with minimal effort. It means tests that are useful to you as a developer on a day-to-day basis. They should boost your productivity and improve the quality of your software. Having tests because everybody says  "you should have tests" is no good if it slows you down.


Let's see how to achieve this with not that much effort.


The example we are going to test

In this article we are going to test an API built with Node/Express and use chai/mocha for testing. I've chosen a JS'y stack because the code is super short and easy to read. The principles apply to any tech stack. Keep reading even if JavaScript makes you sick.


The example will cover a simple set of CRUD endpoints for users. It's more than enough to grasp the concept and apply it to the more complex business logic of your API.


We are going to use a pretty standard environment for the API:


  • A Postgres database

  • A Redis cluster

  • Our API will use other external APIs to do its job

Your API might need a different environment. The principles applied in this article will remain the same. You'll use different Docker base images to run whatever component you might need.


Why Docker? And in fact, Docker Compose

This section contains a lot of arguments in favour of using Docker for testing. You can skip it if you want to get to the technical part right away.


The painful alternatives

To test your API in a close to production environment you have two choices. You can mock the environment at code level or run the tests on a real server with the database etc. installed.


Mocking everything at code level clutters the code and configuration of our API. It is also often not very representative of how the API will behave in production. Running everything on a real server is infrastructure heavy: it means a lot of setup and maintenance, and it does not scale. With a shared database, you can run only one test suite at a time to ensure runs do not interfere with each other.


Docker Compose allows us to get the best of both worlds. It creates "containerized" versions of all the external parts we use. It is mocking, but outside of our code: our API thinks it is in a real physical environment. Docker Compose will also create an isolated network for all the containers in a given test run. This allows you to run several of them in parallel on your local computer or a CI host.


Overkill?

You might wonder if it isn't overkill to perform end to end tests at all with Docker compose. What about just running unit tests instead?


For the last 10 years, large monolith applications have been split into smaller services (trending towards the buzzy "microservices"). A given API component relies on more external parts (infrastructure or other APIs). As services get smaller, integration with the infrastructure becomes a bigger part of the job.


You should keep a small gap between your production and your development environments. Otherwise problems will arise when deploying to production. By definition these problems appear at the worst possible moment. They lead to rushed fixes, drops in quality, and frustration for the team. Nobody wants that.


You might wonder if end to end tests with Docker compose run longer than traditional unit tests. Not really. You'll see in the example below that we can easily keep the tests under 1 minute, and at great benefit: the tests reflect the application behaviour in the real world. This is more valuable than knowing if your class somewhere in the middle of the app works OK or not.


Also, if you don't have any tests right now, starting from end to end gives you great benefits for little effort. You'll know all stacks of the application work together for the most common scenarios. That's already something! From there you can always refine a strategy to unit test critical parts of your application.


Our first test

Let’s start with the easiest part: our API and the Postgres database. And let’s run a simple CRUD test. Once we have that framework in place, we can add more features both to our component and to the test.


Here is our minimal API with a GET/POST to create and list users:


const express = require('express');
const bodyParser = require('body-parser');
const cors = require('cors');

const config = require('./config');

const db = require('knex')({
  client: 'pg',
  connection: {
    host : config.db.host,
    user : config.db.user,
    password : config.db.password,
  },
});

const app = express();

app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
app.use(cors());

app.route('/api/users').post(async (req, res, next) => {
  try {
    const { email, firstname } = req.body;
    // ... validate inputs here ...
    const userData = { email, firstname };

    const result = await db('users').returning('id').insert(userData);
    const id = result[0];
    res.status(201).send({ id, ...userData });
  } catch (err) {
    console.log(`Error: Unable to create user: ${err.message}. ${err.stack}`);
    return next(err);
  }
});

app.route('/api/users').get((req, res, next) => {
  db('users')
  .select('id', 'email', 'firstname')
  .then(users => res.status(200).send(users))
  .catch(err => {
      console.log(`Unable to fetch users: ${err.message}. ${err.stack}`);
      return next(err);
  });
});

try {
  console.log("Starting web server...");

  const port = process.env.PORT || 8000;
  app.listen(port, () => console.log(`Server started on: ${port}`));
} catch(error) {
  console.error(error.stack);
}

Here are our tests written with chai. The tests create a new user and fetch it back. You can see that the tests are not coupled in any way with the code of our API. The SERVER_URL variable specifies the endpoint to test. It can be a local or a remote environment.


const chai = require("chai");
const chaiHttp = require("chai-http");
const should = chai.should();

const SERVER_URL = process.env.APP_URL || "http://localhost:8000";

chai.use(chaiHttp);

const TEST_USER = {
  email: "john@doe.com",
  firstname: "John"
};

let createdUserId;

describe("Users", () => {
  it("should create a new user", done => {
    chai
      .request(SERVER_URL)
      .post("/api/users")
      .send(TEST_USER)
      .end((err, res) => {
        if (err) return done(err);
        res.should.have.status(201);
        res.should.be.json;
        res.body.should.be.a("object");
        res.body.should.have.property("id");
        createdUserId = res.body.id;
        done();
      });
  });

  it("should get the created user", done => {
    chai
      .request(SERVER_URL)
      .get("/api/users")
      .end((err, res) => {
        if (err) return done(err);
        res.should.have.status(200);
        res.body.should.be.a("array");

        const user = res.body.pop();
        user.id.should.equal(createdUserId);
        user.email.should.equal(TEST_USER.email);
        user.firstname.should.equal(TEST_USER.firstname);
        done();
      });
  });
});

Good. Now, to test our API, let's define a Docker Compose environment. A file called docker-compose.yml will describe the containers Docker needs to run.


version: '3.1'

services:
  db:
    image: postgres
    environment:
      POSTGRES_USER: john
      POSTGRES_PASSWORD: mysecretpassword
    expose:
      - 5432

  myapp:
    build: .
    image: myapp
    command: yarn start
    environment:
      APP_DB_HOST: db
      APP_DB_USER: john
      APP_DB_PASSWORD: mysecretpassword
    expose:
      - 8000
    depends_on:
      - db

  myapp-tests:
    image: myapp
    command: dockerize
        -wait tcp://db:5432 -wait tcp://myapp:8000 -timeout 10s
        bash -c "node db/init.js && yarn test"
    environment:
      APP_URL: http://myapp:8000
      APP_DB_HOST: db
      APP_DB_USER: john
      APP_DB_PASSWORD: mysecretpassword
    depends_on:
      - db
      - myapp

So what do we have here? There are three containers:


  • db spins up a fresh PostgreSQL instance. We use the public Postgres image from Docker Hub. We set the database username and password, and tell Docker to expose port 5432, which the database will listen on, so other containers can connect.

  • myapp is the container that will run our API. The build command tells Docker to actually build the container image from our source. The rest is like the db container: environment variables and ports.

  • myapp-tests is the container that will execute our tests. It uses the same image as myapp because the code will already be there, so there is no need to build it again. The command node db/init.js && yarn test run in the container will initialize the database (create tables etc.) and run the tests. We use dockerize to wait for all the required servers to be up and running. The depends_on option ensures that containers start in a certain order, but it does not ensure that the database inside the db container is actually ready to accept connections, nor that our API server is already up.

The whole environment is defined in about 20 lines of easy-to-understand code. The only brainy part is the environment variables: user names, passwords and URLs must be consistent so the containers can actually work together.

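For reference, the `yarn start` and `yarn test` commands used by the containers map to package.json scripts along these lines, consistent with the `node server.js` and `mocha` invocations visible in the run logs later (the repo's actual file may differ):

```json
{
  "scripts": {
    "start": "node server.js",
    "test": "mocha --timeout 10000 --bail"
  }
}
```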

One thing to notice is that Docker Compose makes each container reachable under its service name. So the database won't be available under localhost:5432 but under db:5432. In the same way, our API will be served under myapp:8000. There is no localhost of any kind here.


This means that your API must support environment variables when it comes to environment definition. No hardcoded stuff. But that has nothing to do with Docker or this article. A configurable application is point 3 of the 12 factor app manifesto, so you should be doing it already.

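To make that concrete, here is what a minimal env-driven `config` module could look like (a sketch; the actual config.js in the repo may be organized differently, and the Redis block only becomes useful in a later section):

```javascript
// config.js: every environment-specific value comes from an environment
// variable, with localhost defaults for running outside Docker.
// The variable names match the ones set in docker-compose.yml above.
const config = {
  db: {
    host: process.env.APP_DB_HOST || 'localhost',
    user: process.env.APP_DB_USER || 'postgres',
    password: process.env.APP_DB_PASSWORD || '',
  },
  redis: {
    host: process.env.APP_REDIS_HOST || 'localhost',
    port: Number(process.env.APP_REDIS_PORT || 6379),
  },
};

module.exports = config;
```

The same image then runs unchanged on your laptop, inside Docker Compose, or in CI — only the environment changes.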

The very last thing we need to tell Docker is how to actually build the container myapp. We use a Dockerfile like below. The content is specific to your tech stack but the idea is to bundle your API into a runnable server.


The example below for our Node API installs Dockerize, installs the API dependencies and copies the code of the API inside the container (the server is written in raw JS so no need to compile it).


FROM node AS base

# Dockerize is needed to sync containers startup
ENV DOCKERIZE_VERSION v0.6.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
    && tar -C /usr/local/bin -xzvf dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
    && rm dockerize-alpine-linux-amd64-$DOCKERIZE_VERSION.tar.gz

RUN mkdir -p /app

WORKDIR /app

COPY package.json .
COPY yarn.lock .

FROM base AS dependencies

RUN yarn

FROM dependencies AS runtime

COPY . .

Typically, from the line WORKDIR /app and below, you would run the commands that build your application.

And here is the command we use to run the tests:


docker-compose up --build --abort-on-container-exit

This command tells Docker Compose to spin up the components defined in our docker-compose.yml file. The --build flag triggers the build of the myapp container by executing the content of the Dockerfile above. The --abort-on-container-exit flag tells Docker Compose to shut down the environment as soon as one container exits.


That works well, since the only component meant to exit is the test container myapp-tests, after the tests are executed. Cherry on the cake: the docker-compose command exits with the same exit code as the container that triggered the exit. This means that we can check whether the tests succeeded from the command line, which is very useful for automated builds in a CI environment.

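Because the exit code propagates, wiring this into CI is essentially a one-liner. As an illustration, a hypothetical GitLab CI job could look like this (the provider, image tag and job name are all illustrative, not taken from the example repo):

```yaml
e2e-tests:
  image: docker/compose:1.29.2
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
  script:
    # the job fails if and only if the test container exits non-zero
    - docker-compose up --build --abort-on-container-exit
```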

Isn't that the perfect test setup?


The full example is here on GitHub. You can clone the repository and run the docker compose command:


docker-compose up --build --abort-on-container-exit

Of course you need Docker installed. Docker has the troublesome tendency of forcing you to sign up for an account just to download the thing. But you actually don't have to. Go to the release notes (link for Windows and link for Mac) and download not the latest version but the one right before. This is a direct download link.


The very first run of the tests will be longer than usual. This is because Docker will have to download the base images for your containers and cache a few things. The next runs will be much faster.


Logs from the run will look as below. You can see that Docker is cool enough to put logs from all the components on the same timeline. This is very handy when looking for errors.


Creating tuto-api-e2e-testing_db_1    ... done
Creating tuto-api-e2e-testing_redis_1 ... done
Creating tuto-api-e2e-testing_myapp_1 ... done
Creating tuto-api-e2e-testing_myapp-tests_1 ... done
Attaching to tuto-api-e2e-testing_redis_1, tuto-api-e2e-testing_db_1, tuto-api-e2e-testing_myapp_1, tuto-api-e2e-testing_myapp-tests_1
db_1           | The files belonging to this database system will be owned by user "postgres".
redis_1        | 1:M 09 Nov 2019 21:57:22.161 * Running mode=standalone, port=6379.
myapp_1        | yarn run v1.19.0
redis_1        | 1:M 09 Nov 2019 21:57:22.162 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1        | 1:M 09 Nov 2019 21:57:22.162 # Server initialized
db_1           | This user must also own the server process.
db_1           | 
db_1           | The database cluster will be initialized with locale "en_US.utf8".
db_1           | The default database encoding has accordingly been set to "UTF8".
db_1           | The default text search configuration will be set to "english".
db_1           | 
db_1           | Data page checksums are disabled.
db_1           | 
db_1           | fixing permissions on existing directory /var/lib/postgresql/data ... ok
db_1           | creating subdirectories ... ok
db_1           | selecting dynamic shared memory implementation ... posix
myapp-tests_1  | 2019/11/09 21:57:25 Waiting for: tcp://db:5432
myapp-tests_1  | 2019/11/09 21:57:25 Waiting for: tcp://redis:6379
myapp-tests_1  | 2019/11/09 21:57:25 Waiting for: tcp://myapp:8000
myapp_1        | $ node server.js
redis_1        | 1:M 09 Nov 2019 21:57:22.163 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
db_1           | selecting default max_connections ... 100
myapp_1        | Starting web server...
myapp-tests_1  | 2019/11/09 21:57:25 Connected to tcp://myapp:8000
myapp-tests_1  | 2019/11/09 21:57:25 Connected to tcp://db:5432
redis_1        | 1:M 09 Nov 2019 21:57:22.164 * Ready to accept connections
myapp-tests_1  | 2019/11/09 21:57:25 Connected to tcp://redis:6379
myapp_1        | Server started on: 8000
db_1           | selecting default shared_buffers ... 128MB
db_1           | selecting default time zone ... Etc/UTC
db_1           | creating configuration files ... ok
db_1           | running bootstrap script ... ok
db_1           | performing post-bootstrap initialization ... ok
db_1           | syncing data to disk ... ok
db_1           | 
db_1           | 
db_1           | Success. You can now start the database server using:
db_1           | 
db_1           |     pg_ctl -D /var/lib/postgresql/data -l logfile start
db_1           | 
db_1           | initdb: warning: enabling "trust" authentication for local connections
db_1           | You can change this by editing pg_hba.conf or using the option -A, or
db_1           | --auth-local and --auth-host, the next time you run initdb.
db_1           | waiting for server to start....2019-11-09 21:57:24.328 UTC [41] LOG:  starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1           | 2019-11-09 21:57:24.346 UTC [41] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1           | 2019-11-09 21:57:24.373 UTC [42] LOG:  database system was shut down at 2019-11-09 21:57:23 UTC
db_1           | 2019-11-09 21:57:24.383 UTC [41] LOG:  database system is ready to accept connections
db_1           |  done
db_1           | server started
db_1           | CREATE DATABASE
db_1           | 
db_1           | 
db_1           | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
db_1           | 
db_1           | waiting for server to shut down....2019-11-09 21:57:24.907 UTC [41] LOG:  received fast shutdown request
db_1           | 2019-11-09 21:57:24.909 UTC [41] LOG:  aborting any active transactions
db_1           | 2019-11-09 21:57:24.914 UTC [41] LOG:  background worker "logical replication launcher" (PID 48) exited with exit code 1
db_1           | 2019-11-09 21:57:24.914 UTC [43] LOG:  shutting down
db_1           | 2019-11-09 21:57:24.930 UTC [41] LOG:  database system is shut down
db_1           |  done
db_1           | server stopped
db_1           | 
db_1           | PostgreSQL init process complete; ready for start up.
db_1           | 
db_1           | 2019-11-09 21:57:25.038 UTC [1] LOG:  starting PostgreSQL 12.0 (Debian 12.0-2.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1           | 2019-11-09 21:57:25.039 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1           | 2019-11-09 21:57:25.039 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db_1           | 2019-11-09 21:57:25.052 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1           | 2019-11-09 21:57:25.071 UTC [59] LOG:  database system was shut down at 2019-11-09 21:57:24 UTC
db_1           | 2019-11-09 21:57:25.077 UTC [1] LOG:  database system is ready to accept connections
myapp-tests_1  | Creating tables ...
myapp-tests_1  | Creating table 'users'
myapp-tests_1  | Tables created succesfully
myapp-tests_1  | yarn run v1.19.0
myapp-tests_1  | $ mocha --timeout 10000 --bail
myapp-tests_1  | 
myapp-tests_1  | 
myapp-tests_1  |   Users
myapp-tests_1  | Mock server started on port: 8002
myapp-tests_1  |     ✓ should create a new user (151ms)
myapp-tests_1  |     ✓ should get the created user
myapp-tests_1  |     ✓ should not create user if mail is spammy
myapp-tests_1  |     ✓ should not create user if spammy mail API is down
myapp-tests_1  | 
myapp-tests_1  | 
myapp-tests_1  |   4 passing (234ms)
myapp-tests_1  | 
myapp-tests_1  | Done in 0.88s.
myapp-tests_1  | 2019/11/09 21:57:26 Command finished successfully.
tuto-api-e2e-testing_myapp-tests_1 exited with code 0

We can see that db is the container that takes the longest to initialize, which makes sense. Once it's done, the tests start. The total runtime on my laptop is 16 seconds. Compared to the 880ms spent actually executing the tests, that is a lot. In practice, tests that run in under 1 minute are gold, as the feedback is almost immediate. The roughly 15 seconds of overhead is a fixed buy-in that stays constant as you add more tests. You could add hundreds of tests and still keep execution time under 1 minute.


Voilà! We have our test framework up and running. In a real world project the next steps would be to enhance functional coverage of your API with more tests. Let's consider CRUD operations covered. It's time to add more elements to our test environment.


Adding a Redis cluster

Let's add another element to our API environment to understand what it takes. Spoiler alert: it's not much.


Let us imagine that our API keeps user sessions in a Redis cluster. If you wonder why we would do that, imagine 100 instances of your API in production. Users hit one or another server based on round robin load balancing. Every request needs to be authenticated.


This requires user profile data to check for privileges and other application specific business logic. One way to go is to make a round trip to the database to fetch the data every time you need it, but that is not very efficient. Using an in memory database cluster makes the data available across all servers for the cost of a local variable read.

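The read-through idea can be sketched in a few lines. Here a plain `Map` stands in for the Redis client and `fetchUserFromDb` is a hypothetical stub, just to keep the example self-contained and show the round trips saved:

```javascript
// Read-through cache: check the in-memory store first and only fall
// back to the (expensive) database on a miss.
const cache = new Map();

let dbCalls = 0;
const fetchUserFromDb = (id) => {
  dbCalls += 1; // count round trips to show the cache working
  return { id, email: 'john@doe.com', firstname: 'John' };
};

const getUser = (id) => {
  if (!cache.has(id)) {
    cache.set(id, fetchUserFromDb(id)); // miss: one DB round trip
  }
  return cache.get(id); // hit: no DB round trip
};

getUser(1);
getUser(1);
console.log(dbCalls); // → 1: the second call is served from the cache
```

With Redis instead of a `Map`, that cache is shared across all 100 API instances instead of living in one process.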

This is how you enhance your Docker compose test environment with an additional service. Let’s add a Redis cluster from the official Docker image (I've only kept the new parts of the file):


services:
  db:
    ...

  redis:
    image: "redis:alpine"
    expose:
      - 6379

  myapp:
    environment:
      APP_REDIS_HOST: redis
      APP_REDIS_PORT: 6379
    ...
  myapp-tests:
    command: dockerize ... -wait tcp://redis:6379 ...
    environment:
      APP_REDIS_HOST: redis
      APP_REDIS_PORT: 6379
      ...
    ...

You can see it's not much. We added a new container called redis, using the official minimal Redis image redis:alpine. We added the Redis host and port configuration to our API container. And we made the test container wait for it, along with the other containers, before executing the tests.


Let’s modify our application to actually use the Redis cluster:


const redis = require('redis').createClient({
  host: config.redis.host,
  port: config.redis.port,
})

...

app.route('/api/users').post(async (req, res, next) => {
  try {
    const { email, firstname } = req.body;
    // ... validate inputs here ...
    const userData = { email, firstname };
    const result = await db('users').returning('id').insert(userData);
    const id = result[0];
    
    // Once the user is created store the data in the Redis cluster
    await redis.set(id, JSON.stringify(userData));
    
    res.status(201).send({ id, ...userData });
  } catch (err) {
    console.log(`Error: Unable to create user: ${err.message}. ${err.stack}`);
    return next(err);
  }
});

Let's now change our tests to check that the Redis cluster is populated with the right data. That's why the myapp-tests container also gets the Redis host and port configuration in docker-compose.yml.


it("should create a new user", done => {
  chai
    .request(SERVER_URL)
    .post("/api/users")
    .send(TEST_USER)
    .end((err, res) => {
      if (err) return done(err);
      res.should.have.status(201);
      res.should.be.json;
      res.body.should.be.a("object");
      res.body.should.have.property("id");
      res.body.should.have.property("email");
      res.body.should.have.property("firstname");
      res.body.id.should.not.be.null;
      res.body.email.should.equal(TEST_USER.email);
      res.body.firstname.should.equal(TEST_USER.firstname);
      createdUserId = res.body.id;

      redis.get(createdUserId, (err, cacheData) => {
        if (err) return done(err);
        cacheData = JSON.parse(cacheData);
        cacheData.should.have.property("email");
        cacheData.should.have.property("firstname");
        cacheData.email.should.equal(TEST_USER.email);
        cacheData.firstname.should.equal(TEST_USER.firstname);
        done();
      });
    });
});

See how easy this was. You can build a complex environment for your tests like you assemble Lego bricks.


We can see another benefit of this kind of containerized full-environment testing: the tests can actually look into the environment's components. Our tests can not only check that the API returns the proper response codes and data, but also that the data in the Redis cluster has the proper values. We could also check the database content.


Adding API mocks

A common pattern for API components is calling other API components.


Let's say our API needs to check for spammy user emails when creating a user. The check is done using a third party service:


// `fetch` is assumed to be available here (built into recent Node
// versions; older ones would need a package such as node-fetch)
const validateUserEmail = async (email) => {
  const res = await fetch(`${config.app.externalUrl}/validate?email=${email}`);
  if (res.status !== 200) return false;
  const json = await res.json();
  return json.result === 'valid';
}

app.route('/api/users').post(async (req, res, next) => {
  try {
    const { email, firstname } = req.body;
    // ... validate inputs here ...
    const userData = { email, firstname };

    // We don't just create any user. Spammy emails should be rejected
    const isValidUser = await validateUserEmail(email);
    if(!isValidUser) {
      return res.sendStatus(403);
    }

    const result = await db('users').returning('id').insert(userData);
    const id = result[0];
    await redis.set(id, JSON.stringify(userData));
    res.status(201).send({ id, ...userData });
  } catch (err) {
    console.log(`Error: Unable to create user: ${err.message}. ${err.stack}`);
    return next(err);
  }
});

Now we have a problem testing anything: we can't create any users if the API that detects spammy emails is not available. Modifying our API to bypass this step in test mode is a dangerous cluttering of the code.


Even if we could use the real third-party service, we don't want to. As a general rule our tests should not depend on external infrastructure. First, because you will probably run your tests a lot as part of your CI process, and it's not that cool to hammer another production API for this purpose. Second, the API might be temporarily down, failing your tests for the wrong reasons.


The right solution is to mock the external APIs in our tests.


No need for any fancy framework. We'll build a generic mock in vanilla JS in about 20 lines of code. This gives us the opportunity to control what the external API returns to our component, and it allows us to test error scenarios.


Now let’s enhance our tests.


const express = require("express");

...

const MOCK_SERVER_PORT = process.env.MOCK_SERVER_PORT || 8002;

// Some object to encapsulate attributes of our mock server
// The mock stores all requests it receives in the `requests` property.
const mock = {
  app: express(),
  server: null,
  requests: [],
  status: 404,
  responseBody: {}
};

// Define which response code and content the mock will be sending
const setupMock = (status, body) => {
  mock.status = status;
  mock.responseBody = body;
};

// Start the mock server
const initMock = async () => {
  mock.app.use(bodyParser.urlencoded({ extended: false }));
  mock.app.use(bodyParser.json());
  mock.app.use(cors());
  mock.app.get("*", (req, res) => {
    mock.requests.push(req);
    res.status(mock.status).send(mock.responseBody);
  });

  mock.server = await mock.app.listen(MOCK_SERVER_PORT);
  console.log(`Mock server started on port: ${MOCK_SERVER_PORT}`);
};

// Destroy the mock server
const teardownMock = () => {
  if (mock.server) {
    mock.server.close();
    delete mock.server;
  }
};

describe("Users", () => {
  // Our mock is started before any test starts ...
  before(async () => await initMock());

  // ... killed after all the tests are executed ...
  after(() => {
    redis.quit();
    teardownMock();
  });

  // ... and we reset the recorded requests between each test
  beforeEach(() => (mock.requests = []));

  it("should create a new user", done => {
    // The mock will tell us the email is valid in this test
    setupMock(200, { result: "valid" });

    chai
      .request(SERVER_URL)
      .post("/api/users")
      .send(TEST_USER)
      .end((err, res) => {
        // ... check response and redis as before
        createdUserId = res.body.id;

        // Verify that the API called the mocked service with the right parameters
        mock.requests.length.should.equal(1);
        mock.requests[0].path.should.equal("/api/validate");
        mock.requests[0].query.should.have.property("email");
        mock.requests[0].query.email.should.equal(TEST_USER.email);
        done();
      });
  });
});

The tests now check that the external API has been hit with the proper data during the call to our API.

现在,测试会在调用我们的API时检查外部API是否已被正确的数据击中。

We can also add other tests checking how our API behaves based on the external API response codes:

我们还可以添加其他测试,以根据外部API响应代码检查API的行为:

describe("Users", () => {
  it("should not create user if mail is spammy", done => {
    // The mock will tell us the email is NOT valid in this test ...
    setupMock(200, { result: "invalid" });

    chai
      .request(SERVER_URL)
      .post("/api/users")
      .send(TEST_USER)
      .end((err, res) => {
        // ... so the API should fail to create the user
        // We could test that the DB and Redis are empty here
        res.should.have.status(403);
        done();
      });
  });

  it("should not create user if spammy mail API is down", done => {
    // The mock will tell us the email checking service
    //  is down for this test ...
    setupMock(500, {});

    chai
      .request(SERVER_URL)
      .post("/api/users")
      .send(TEST_USER)
      .end((err, res) => {
        // ... in that case also a user should not be created
        res.should.have.status(403);
        done();
      });
  });
});

How you handle errors from third-party APIs in your application is of course up to you. But you get the point.

当然,如何在应用程序中处理第三方API的错误由您决定。 但是你明白了。
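As one illustration, here is a minimal sketch of the policy the tests above imply. The function name `decideUserCreation` and the exact mapping are assumptions for illustration, not part of the article's codebase: anything other than an explicit "valid" answer from the validation service, including a 5xx failure, blocks user creation.

```javascript
// Hypothetical policy sketch (not from the article's codebase): map the
// validation service's outcome to our API's response. Anything other than
// an explicit "valid" result — including a 5xx from the service — blocks
// user creation with a 403, which is what the tests above expect.
const decideUserCreation = (status, body) => {
  if (status === 200 && body && body.result === "valid") {
    return { allowed: true, httpStatus: 201 };
  }
  // Invalid email or service failure: refuse to create the user
  return { allowed: false, httpStatus: 403 };
};

module.exports = { decideUserCreation };
```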

To run these tests we need to tell the myapp container what the base URL of the third party service is:

要运行这些测试,我们需要告诉容器myapp第三方服务的基本URL是什么:

myapp:
    environment:
      APP_EXTERNAL_URL: http://myapp-tests:8002/api
    ...

  myapp-tests:
    environment:
      MOCK_SERVER_PORT: 8002
    ...
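Putting it all together, a full docker-compose.yml for the test run might look like the sketch below. The image tags, ports and service names other than `myapp` and `myapp-tests` are assumptions for illustration; adapt them to your project.

```yaml
version: "3"
services:
  postgres:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: password

  redis:
    image: redis:5

  myapp:
    build: .
    environment:
      DATABASE_URL: postgres://postgres:password@postgres:5432/postgres
      REDIS_HOST: redis
      # Point the app at the mock running inside the test container
      APP_EXTERNAL_URL: http://myapp-tests:8002/api
    depends_on:
      - postgres
      - redis

  myapp-tests:
    build: ./tests
    environment:
      MOCK_SERVER_PORT: 8002
      APP_URL: http://myapp:8000
    depends_on:
      - myapp
```

The key point is that `myapp` and `myapp-tests` resolve each other by service name on the compose network, so the app calls the mock instead of the real third party service.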

结论和其他一些想法 (Conclusion and a few other thoughts)

Hopefully this article gave you a taste of what Docker compose can do for you when it comes to API testing. The full example is here on GitHub.

希望本文能够带您了解Docker compose在进行API测试时可以为您做些什么。 完整的示例在GitHub上

Using Docker compose makes tests run fast in an environment close to production. It requires no adaptations to your component code. The only requirement is to support environment-variable-driven configuration.

使用Docker compose可以使测试在接近生产环境中快速运行。 它不需要修改您的组件代码。 唯一的要求是支持环境变量驱动的配置。
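That requirement can be as small as a single config module. The sketch below is a hypothetical `config.js` (the variable names match the compose snippets above, the defaults are made up for local development); Docker compose then overrides any value per environment without touching the code.

```javascript
// Hypothetical config.js: every tunable comes from an environment variable,
// with a default for running the app locally outside Docker.
const config = {
  port: parseInt(process.env.PORT || "8000", 10),
  databaseUrl: process.env.DATABASE_URL || "postgres://localhost:5432/myapp",
  redisHost: process.env.REDIS_HOST || "localhost",
  // Base URL of the email validation service (the mock during tests)
  externalUrl: process.env.APP_EXTERNAL_URL || "https://validator.example.com/api"
};

module.exports = config;
```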

The component logic in this example is very simple but the principles apply to any API. Your tests will just be longer or more complex. They also apply to any tech stack that can be put inside a container (that's all of them). And once you are there you are one step away from deploying your containers to production if need be.

此示例中的组件逻辑非常简单,但其原理适用于任何API。 您的测试只会更长或更复杂。 它们还适用于任何可以放进容器的技术栈(也就是所有技术栈)。 一旦做到这一点,如果需要,您离将容器部署到生产环境就只有一步之遥。

If you have no tests right now, this is how I recommend you start: end-to-end testing with Docker compose. It is so simple you could have your first test running in a few hours. Feel free to reach out to me if you have questions or need advice. I'd be happy to help.

如果您现在没有测试,这就是我的建议,您应该开始:用Docker compose进行端到端测试。 如此简单,您可以在几个小时内进行第一次测试。 如果您有任何疑问或需要建议,请随时与我联系 。 我很乐意提供帮助。

I hope you enjoyed this article and will start testing your APIs with Docker Compose. Once you have the tests ready you can run them out of the box on our continuous integration platform Fire CI.

我希望您喜欢这篇文章,并会开始使用Docker Compose测试您的API。 一旦准备好测试,就可以在我们的持续集成平台Fire CI上开箱即用地运行它们。

自动测试成功的最后一个想法。 (One last idea to succeed with automated testing.)

When it comes to maintaining large test suites, the most important feature is that tests are easy to read and understand. This is key to motivate your team to keep the tests up to date. Complex tests frameworks are unlikely to be properly used in the long run.

在维护大型测试套件时,最重要的功能是易于阅读和理解。 这是激励您的团队保持测试最新状态的关键。 从长远来看,复杂的测试框架不太可能正确使用。

Regardless of the stack for your API, you might want to consider using chai/mocha to write tests for it. It might seem unusual to have different stacks for runtime code and test code, but if it gets the job done ... As you can see from the examples in this article, testing a REST API with chai/mocha is as simple as it gets. The learning curve is close to zero.

无论您的API使用哪种技术栈,您都可以考虑使用chai / mocha为其编写测试。 运行时代码和测试代码使用不同的技术栈似乎不太寻常,但如果能把事情做好……正如本文示例所示,用chai / mocha测试REST API再简单不过了。 学习曲线几乎为零。

So if you have no tests at all and have a REST API to test written in Java, Python, RoR, .NET or whatever other stack, you might consider giving chai/mocha a try.

因此,如果您根本没有测试,并且有REST API可以用Java,Python,RoR,.NET或任何其他堆栈编写测试,则可以考虑尝试chai / mocha。

If you wonder how to get started with continuous integration at all, I have written a broader guide about it. Here it is: How to get started with Continuous Integration

如果您完全不知道如何开始进行持续集成,那么我已经写了一个更广泛的指南。 这是: 如何开始进行持续集成

Originally published on the Fire CI Blog.

最初发布在Fire CI博客上

翻译自: https://www.freecodecamp.org/news/end-to-end-api-testing-with-docker/
