Splitting a Jest Test Suite into Multiple Parallel Jobs with Jenkins and Kubernetes

There is something about speed that makes it an object of desire… somehow when you are on a highway going somewhere, getting there faster is the goal and achieving that goal makes you feel “better” than everyone else on the same highway.


I love speed and constantly strive to run the fastest in the room… but some problems are best solved by just cutting the track in half. If you can afford the extra work, doing multiple things at the same time can take you to the next level.


What is acceptable?

I remember this one time I was reading a really unmemorable blog post, but what I did remember about it was that it said it took their company an entire DAY to run their end-to-end test suite, and that waiting for these tests was cumbersome. I’m sitting here reading and thinking… well YEAH DUH, a day is an extremely long feedback loop. Though this anecdote is not 100% related, because this particular blog is about server unit/integration tests and not e2e tests, it does a good job of conveying that oftentimes test suites grow with a lack of attention.


People add coverage because adding coverage is easy. Tending to the area around the test suite is often overlooked or not as habitual. Small gains in the turnaround time of a Continuous Integration pipeline can lead to MASSIVE time savings for the development team and faster deploy cycles for releasing new features and critical bug fixes. These two things are priceless.


Regardless of time savings or potential benefit, it behooves us as developers to pursue quality and efficiency in our work. We mustn’t let the weeds in our garden get out of control. Always remember to ask yourself the question… If something is cumbersome for you to wait for, is it really acceptable?


Our World

At Galley Solutions we recently had some great work done to reduce our e2e test run time from around 35 minutes to around 13 minutes — huge time savings!


While this was fantastic, it did not equate to huge time savings in our CI pipeline as a whole. We have a Node.js API test suite that runs for about 30 minutes, and after the e2e optimization the API tests became the new rate-limiting step. For developers getting ready to put the bow on a given feature or PR, it is critical to receive timely feedback on their branch tests. We decided it was time to leverage some existing strengths in our infrastructure and create multiple parallel execution stages, each stage executing a given fraction of the test suite.


Steps for Success

We need to execute the following three steps.


  1. We need to be able to run a command that executes a specific set of test files against their own DB.

  2. We need some code that will take a constant NUM_PARALLEL_STEPS and generate a set of files, each containing the files to be run in a given step.


  3. Lastly, we need some code that will dynamically generate each API stage based on the NUM_PARALLEL_STEPS constant that will call the command from step 1 with the file from step 2 above.


Assumptions and Dependencies

Alright, before we get started let’s state some of the dependencies and constraints we are working with…


  1. Most of our tests are integration tests so a DB is required

  2. Jenkins running on Kubernetes (or able to execute worker jobs)

  3. Scripted Jenkinsfile syntax (not required but this is the syntax of code examples)

  4. Jest and Node.js with Typescript and/or .js files

  5. Sequelize/Postgres as an ORM/DB (not required but this is what the examples are tailored to)


I think we are ready.


Step 1. Test Command

We will first set up our sequelize config to read from a DATABASE_URL environment variable.


module.exports = {
  "test-parallel": {
    use_env_variable: "DATABASE_URL",
    dialect: "postgres",
    logging: false
  }
}

In the above example, test-parallel refers to our NODE_ENV. The corresponding sequelize.ts initialization file would look something like this…


import { Sequelize } from "sequelize";

const env = process.env.NODE_ENV!;
const { use_env_variable, ...config } = require("./config")[env];

const sequelize = new Sequelize(process.env[use_env_variable]!, config);

export default sequelize;

This may seem a bit excessive, but we normally have a lot more environments in the config. What is really important here is that new Sequelize("DATABASE_URL", {dialect: "postgres", logging: false}) is getting instantiated and being told in the first argument to look for a DATABASE_URL.


It does us no good to have a sequelize config without an actual database. When we spin up a postgres container we won’t have a database, and we need something to handle the creation and migration of our database. A shell script will do fine.


db_rebuild.sh would look something like this…


NODE_ENV=test-parallel yarn sequelize db:create 2> /dev/null || :

m=`NODE_ENV=test-parallel yarn sequelize db:migrate:status | grep ^down | wc -l`

if [ $m -gt 0 ]
then
  echo RECREATING DATABASE
  NODE_ENV=test-parallel yarn sequelize db:drop
  NODE_ENV=test-parallel yarn sequelize db:create
  NODE_ENV=test-parallel NODE_PATH=./src yarn sequelize db:migrate
else
  echo DATABASE IS UP TO DATE
fi

Credit goes to Kyle Hennessy for the above


The first line will create the db or give us some console output; the second command asks how many migrations have not been applied. If the number of migrations that have not been applied is greater than 0, we drop and recreate the database, then execute migrations.

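As a quick sanity check of that counting step, here is a self-contained sketch using a simulated `db:migrate:status` output (the real format comes from sequelize; the migration names here are made up):

```shell
# Simulated `sequelize db:migrate:status` output: applied migrations are
# prefixed "up", pending ones "down". Names are illustrative.
status_output="up 20200101-create-users.js
up 20200102-create-orders.js
down 20200103-add-index.js
down 20200104-add-column.js"

# The same pipeline db_rebuild.sh uses to count pending migrations.
m=$(printf '%s\n' "$status_output" | grep ^down | wc -l)
echo "$m"
```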

Now we have a DB that exists and is fully migrated. Next, we will need to tackle our test command that will bring these pieces together.


In our package.json (we are using yarn), we need a command in the scripts section. Our desire is to run this in CI, and what differentiates this command is that it executes tests listed in the output of a file, so we can name it something like test:ci:parallel. This command will need to call our db script and then execute our test command.


"scripts": {
  "test:ci:parallel": "./<PATH_TO>/db_rebuild.sh && NODE_ENV=test-parallel yarn run jest --ci --runInBand"
}

Now we can test this!


Let’s create a file with two test file names from your test suite.


touch my_tests && vi my_tests


Paste in the filenames and save, then


export DATABASE_URL=postgres://<USER_NAME>:<PASSWORD>@localhost:5432/<DB_NAME>

Replace the user_name, password, and db_name with placeholder values; eventually these will be variables defined on a given Node.js container’s env.


Lastly, let’s call our command with the output of the file using cat…


yarn test:ci:parallel $(cat my_tests)


Jest should execute the two tests (based on pattern matching) against whatever DB_NAME you gave above.


Problems? Drop a response below and we can troubleshoot.


Step 2. Generate Test Manifest Files

Our goal for step 2 is to create a piece of code that will take a NUM_PARALLEL_STEPS and divide our test suite into as many files as there are parallel steps. We could do this using groovy in the Jenkinsfile; however, after poking around a bit I found it was easier to go a different route… which means it’s about to get bashey.


Note: In order to simplify things, we can take a more complex command and break down the desired pieces of behavior into smaller chunks. Stringing all the smaller commands together will get us the global output we desire.


List All Files With Globbing

Let’s start by simply listing all the test files; to do this, let’s utilize globbing.


If we are in the project root and all our code and tests are in the ./src subdirectory, we can do something like this…


MacOS:


ls src/**/*.spec.js src/**/*.spec.ts


Debian: (the Linux flavor of the Jenkins image we use)



find ./src -printf '%f\n' | grep .spec
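If you want to see exactly what that find invocation emits, here is a throwaway demo (the directory layout and file names are made up):

```shell
# -printf '%f\n' prints just the basename of each match, so nested spec
# files come out as a flat list of file names.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/users" "$tmp/src/orders"
touch "$tmp/src/users/users.spec.js" \
      "$tmp/src/orders/orders.spec.ts" \
      "$tmp/src/orders/orders.js"

specs=$(find "$tmp/src" -printf '%f\n' | grep .spec | sort)
echo "$specs"
rm -rf "$tmp"
```

Note that grep .spec also filters out the plain orders.js file and the directory names that find prints.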

Split the Output Into Files

Great! Now that we have a list of test file strings, how do we put a certain number of them into a file manifest? Let’s say I have 10 file strings in my list, and I want 5 in one file and 5 in the other.


We can leverage split with these options… split -a 1 -d -l 5 - api_


The -a option specifies the length of the suffix that is appended, -d specifies a numeric suffix, -l is the number of lines per file, the - tells split to read from standard input, and api_ is the prefix we will give each file. So this command will split 10 lines of input into two 5-line files called api_0 and api_1.

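A quick throwaway demo of those flags (the spec file names are made up):

```shell
# Feed 10 lines into split, asking for 5 lines per file with a single
# numeric suffix and the prefix api_.
tmp=$(mktemp -d)
cd "$tmp"
printf 'test%d.spec.js\n' 1 2 3 4 5 6 7 8 9 10 | split -a 1 -d -l 5 - api_

files=$(ls api_*)      # the manifests split created
c0=$(wc -l < api_0)    # lines in the first manifest
c1=$(wc -l < api_1)    # lines in the second manifest
cd - > /dev/null
rm -rf "$tmp"
echo "$files"
```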

All you have to do is pipe the output of the first command to the split command.


MacOS:


ls src/**/*.spec.js src/**/*.spec.ts | split -a 1 -d -l 5 - api_


Debian: (the Linux flavor of the Jenkins image we use)



find ./src -printf '%f\n' | grep .spec | split -a 1 -d -l 5 - api_

Perfect!


Do Math for the Number of Files per Manifest

The last piece is dynamically computing the number for -l in the split command. As our test suite grows, the number of tests per file needs to grow, because NUM_PARALLEL_STEPS is constant.


If we take the number of lines from our test list using wc -l as NUM_TESTS, divide NUM_TESTS by NUM_PARALLEL_STEPS, and add 1 (to handle cases where the division leaves a remainder), we get the number of lines per file.


MacOS:


echo $(($(ls src/**/*.spec.js src/**/*.spec.ts | wc -l)/3+1))


Debian: (the Linux flavor of the Jenkins image we use)



echo $(($(find ./src -printf '%f\n' | grep .spec | wc -l)/3+1))
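To make the arithmetic concrete, here it is with made-up numbers: 10 test files across 3 parallel stages.

```shell
NUM_TESTS=10
NUM_PARALLEL_STEPS=3

# Integer division: 10/3 = 3, then +1 = 4 lines per manifest file.
LINES_PER_FILE=$((NUM_TESTS / NUM_PARALLEL_STEPS + 1))
echo "$LINES_PER_FILE"
```

With 4 lines per file, split produces manifests of 4, 4, and 2 lines, i.e. exactly 3 files for our 3 stages.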

Congrats! That’s step two. On to the last one…


Step 3. Parallel Jenkins Stages

Here is where we bring it all together. The Jenkinsfile. Game time. No Prisoners. Freedom. Eyes on the prize…


Photo by Japheth Mast on Unsplash

During our build phase we will use our command from step 2, because we need our test manifest files to exist before we generate and run the parallel stages. We can utilize a for loop and our NUM_PARALLEL_STEPS constant to add stages to a tests object that we will invoke with the parallel command in the Jenkinsfile.


Vamos!


Ensure DB is Set Up Correctly

containerTemplate(
  name: 'api-db',
  image: 'postgres:11.2-alpine',
  ttyEnabled: true,
  envVars: [
    envVar(key: 'POSTGRES_USER', value: '<USER_NAME>'),
    envVar(key: 'POSTGRES_DB', value: 'default'),
    envVar(key: 'POSTGRES_PASSWORD', value: '<PASSWORD>')
  ]
)

In the api-db container above, let’s specify environment variables for the USER_NAME and PASSWORD. For POSTGRES_DB you can put any name there.


The other container we will be working with is a node container.


Build Stage

Let’s say our build stage looks something like…


stage('Build Dependencies'){
  container('node') {
    try {
      sh "yarn install --frozen-lockfile"
      sh "cd api && yarn build:test"
    } catch (Exception e) {
      notifySomethingErrored()
      throw e
    }
  }
}

We install our dependencies, then we build from our ./src directory. If an error happens we notify.


NOTE: In our case we run tests against our build directory, so the rest of these commands will reference only .spec.js files but you can easily run test against ./src and include .spec.ts in your commands.


Right after we run yarn build:test we want to compute our numAPISpecsPerStage. To do this we can pull in our command from step 2 and capture its STDOUT into numAPISpecsPerStage.


def numAPISpecsPerStage = sh(
  script: "cd packages/api && echo \$((\$(find ./build -printf '%f\n' | grep .spec.js\$ | wc -l)/${NUM_PARALLEL_API_STAGES}+1))",
  returnStdout: true
).trim()

NOTE: When executing bash commands in groovy, we need to escape any $ characters if we want them to be interpreted by bash, otherwise, groovy will attempt to interpret them as variables.


Next, we will want to build our manifest files using the above variable.


sh "cd packages/api && find ./build -printf '%f\n' | grep .spec.js\$ | split -a 1 -d -l ${numAPISpecsPerStage} - api_"

Finally, let’s call a function which we haven’t written yet, buildAPIStages().


Plugging those into the build stage yields,


stage('Build Dependencies'){
  container('node') {
    try {
      sh "yarn install --frozen-lockfile"
      sh "cd api && yarn build:test"
      def numAPISpecsPerStage = sh(
        script: "cd packages/api && echo \$((\$(find ./build -printf '%f\n' | grep .spec.js\$ | wc -l)/${NUM_PARALLEL_API_STAGES}+1))",
        returnStdout: true
      ).trim()
      sh "cd packages/api && find ./build -printf '%f\n' | grep .spec.js\$ | split -a 1 -d -l ${numAPISpecsPerStage} - api_"
      buildAPIStages()
    } catch (Exception e) {
      notifySomethingErrored()
      throw e
    }
  }
}

Yes, we can absolutely extract our file list output to a variable that we echo and pipe to wc and split to clean things up.

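A sketch of that cleanup, using a stand-in list in place of the real find output (the spec names and stage count are illustrative):

```shell
# Compute the spec list once, then reuse it for both the per-file line
# count and the split itself.
NUM_PARALLEL_STEPS=2
SPEC_LIST=$(printf '%s\n' a.spec.js b.spec.js c.spec.js d.spec.js e.spec.js)

LINES=$(( $(echo "$SPEC_LIST" | wc -l) / NUM_PARALLEL_STEPS + 1 ))

tmp=$(mktemp -d)
cd "$tmp"
echo "$SPEC_LIST" | split -a 1 -d -l "$LINES" - api_
c0=$(wc -l < api_0)
c1=$(wc -l < api_1)
cd - > /dev/null
rm -rf "$tmp"
echo "$LINES $c0 $c1"
```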

API Stage

Next, we can flesh out the buildAPIStages() function. Near the top of the file, or wherever you like to define your functions, let’s copy in our current API test stage code.


"api": {
  timeout(time: 40, unit: 'MINUTES') {
    container('node') {
      try {
        sh "cd api && yarn test:ci"
      } catch (Exception e) {
        notifySomethingErrored()
        throw e
      }
    }
  }
}

The current code executes tests or notifies of a failure. Let’s modify this by first exposing the environment variables we care about to the container.


Right above the try, we will utilize a withEnv wrapper.

try上方,我们将使用withEnv包装器。

withEnv(["DATABASE_URL=$DATABASE_URL", "NODE_ENV=test-parallel", "NODE_PATH=./build"]){
  ...code
}

We expose a $DATABASE_URL and set it to a variable which we haven’t built yet. Then, we set our env to test-parallel and point our NODE_PATH at ./build (you can specify ./src if you are running against source).


Finally, we can add our test command from step 1, and for the file that we want to cat out, let’s use a variable that we haven’t built yet, like UNIT_TEST_FILE_NAME.


sh "cd packages/api && yarn test:ci:parallel \$(cat ${UNIT_TEST_FILE_NAME})"

This gives us an API stage that looks like…


"api": {
  timeout(time: 40, unit: 'MINUTES') {
    container('node') {
      withEnv(["DATABASE_URL=$DATABASE_URL", "NODE_ENV=test-parallel", "NODE_PATH=./build"]){
        try {
          sh "cd api && yarn test:ci:parallel \$(cat ${UNIT_TEST_FILE_NAME})"
        } catch (Exception e) {
          notifySomethingErrored()
          throw e
        }
      }
    }
  }
}

Multiple API Stages

If you notice, “api” is just a key, and we are setting that key equal to a closure (our stage). For this we can instantiate a tests map, then utilize a for loop to set a number of different keys, each equal to its own stage.


tests = [:]


Now let’s define our variable NUM_PARALLEL_API_STAGES=2 and start our function and loop.


def buildAPIStages(){
  for(int i = 0; i < NUM_PARALLEL_API_STAGES; i++) {
    ...code
  }
}

Additionally, we have some more variables to define in the context of our loop.


def UNIT_TEST_FILE_NAME = "api_${i}"


We will utilize this as the key in our tests map. This variable links up with the file names from the split command. split gave us api_0, api_1, etc., so each stage will cat a particular manifest file based on the loop’s index.


def DATABASE_URL = "postgres://<USER_NAME>:<PASSWORD>@localhost:5432/$UNIT_TEST_FILE_NAME"

Fill in the <PASSWORD> and <USER_NAME> that correspond to the variables defined on the api_db postgres container.


Now, for the final piece, let’s set our key in our tests map to our api stage closure. This gives us a final function of…


def buildAPIStages(){
  for(int i = 0; i < NUM_PARALLEL_API_STAGES; i++) {
    def UNIT_TEST_FILE_NAME = "api_${i}"
    def DATABASE_URL = "postgres://<USER_NAME>:<PASSWORD>@localhost:5432/$UNIT_TEST_FILE_NAME"
    tests["${UNIT_TEST_FILE_NAME}"] = {
      timeout(time: 40, unit: 'MINUTES') {
        container('node') {
          withEnv(["DATABASE_URL=$DATABASE_URL", "NODE_ENV=test-parallel", "NODE_PATH=./build"]){
            try {
              sh "cd api && yarn test:ci:parallel \$(cat ${UNIT_TEST_FILE_NAME})"
            } catch (Exception e) {
              notifySomethingErrored()
              throw e
            }
          }
        }
      }
    }
  }
}

We are SOOO close to being done. During the build stage, we set up our files and built our dynamic stages; now we just need to tell Jenkins to execute them in parallel.


With this function…


def doParallelStages(){
  parallel tests
}

And this stage, placed after the build stage…


stage('Run Tests'){
  doParallelStages()
}

(Fin)

Our world is now parallelized. Problems, issues, ideas on how to do something better: drop me a line in the responses and I’ll be happy to learn from you or help out.


What is really cool about this kind of work is that you’re really saving the whole organization so much time. It makes work life (and maybe personal life too??) easier and more productive for your coworkers and teammates. At times it may seem not urgent or necessary, but when you take the time to empower your organization with efficiency, developer happiness and productivity will skyrocket.


As a result, our total test time went from around 30 minutes to 20 minutes flat. To me this does not really add up and will require some investigation; I would have expected the test time to go down to 15 minutes. We are sharing resources, so it might be that running more parallel jobs on the same node gave a slight performance hit. However, I’ll take the 10 min savings :-).


I have more work to do on this CI/CD front so stay tuned for some future posts on new technologies and approaches to creating the best CI/CD pipelines.


Until then, cheers, and happy developing.


Translated from: https://medium.com/javascript-in-plain-english/splitting-a-jest-node-js-test-suite-into-multiple-parallel-jobs-on-jenkins-kubernetes-dabf48567f50
