You rang, m'lord? Docker in Docker with Jenkins declarative pipelines

Resources. When they are unlimited they are not important. But when they're limited, boy do you have challenges!

Recently, my team has faced such a challenge ourselves: we realised that we needed to upgrade the Node version on one of our Jenkins agents so we could build and properly test our Angular 7 app. However, we learned that we would also lose the ability to build our legacy AngularJS apps which require Node 8.

What were we to do?

Apart from eliminating the famous "It works on my machine" problem, Docker came in handy to tackle such a problem. However, there were certain challenges that needed to be addressed, such as Docker in Docker.

For this purpose, after a long period of trial and error, we built and published a Docker image that fit our team's needs. It helps run our builds, which look like the following:

1. Install dependencies
2. Lint the code
3. Run unit tests
4. Run SonarQube analysis
5. Build the application
6. Build a docker image which would be deployed
7. Run the docker container
8. Run cypress tests
9. Push docker image to the repository
10. Run another Jenkins job to deploy it to the environment
11. Generate unit and functional test reports and publish them
12. Stop any running containers
13. Notify chat/email about the build

The docker image we needed

Our project is an Angular 7 project, which was generated using the angular-cli. We also have some dependencies that need Node 10.x.x. We lint our code with tslint, and run our unit tests with Karma and Jasmine. For the unit tests we need a Chrome browser installed so they can run with headless Chrome.

This is why we decided to use the cypress/browsers:node10.16.0-chrome77 image. After we installed the dependencies, linted our code, and ran our unit tests, we ran the SonarQube analysis. This required us to have OpenJDK 8 as well.

FROM cypress/browsers:node10.16.0-chrome77

# Install OpenJDK-8
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get clean;

# Fix certificate issues
RUN apt-get update && \
    apt-get install ca-certificates-java && \
    apt-get clean && \
    update-ca-certificates -f;

# Setup JAVA_HOME -- useful for docker commandline
# (ENV persists into every later layer and the running container;
# a `RUN export JAVA_HOME` would only last for that single RUN shell)
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/

Once the sonar scan was ready, we built the application. One of the strongest principles in testing is that you should test the thing that will be used by your users. That is the reason we wanted to test the built code in exactly the same Docker container as it would run in production.

We could, of course, serve the front-end from a very simple Node.js static server. But that would mean that everything an Apache HTTP server or an NGINX server usually does would be missing (for example, all the proxies, gzip or brotli compression).

Now while this is a strong principle, the biggest problem was that we were already running inside a Docker container. That is why we needed DIND (Docker in Docker).

After my colleague and I spent a whole day researching, we found a solution that ended up working like a charm. The first and most important thing is that our build container needed the Docker executable.

# Install Docker executable
RUN apt-get update && apt-get install -y \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common \
    && curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
    && add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/debian \
        $(lsb_release -cs) \
        stable" \
    && apt-get update \
    && apt-get install -y \
        docker-ce

RUN usermod -u 1002 node && groupmod -g 1002 node && gpasswd -a node docker

As you can see, we installed the Docker executable and the necessary certificates, but we also added the rights and groups for our user. This second part is necessary because the host machine, our Jenkins agent, starts the container with -u 1002:1002. That is the user ID of our Jenkins agent, which runs the container unprivileged.

Of course, this isn't everything. When the container starts, the docker daemon of the host machine must be mounted. So we needed to start the build container with some extra parameters. It looks like the following in a Jenkinsfile:

pipeline {
  agent {
    docker {
     image 'btapai/pipelines:node-10.16.0-chrome77-openjdk8-CETtime-dind'
     label 'frontend'
     args '-v /var/run/docker.sock:/var/run/docker.sock -v /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket -e HOME=${workspace} --group-add docker'
    }
  }

// ...
}

As you can see, we mounted two Unix sockets. /var/run/docker.sock mounts the docker daemon to the build container.

/var/run/dbus/system_bus_socket is a socket that allows cypress to run inside our container.

We needed -e HOME=${workspace} to avoid access rights issues during the build.

--group-add docker passes the host machine's docker group down, so that inside the container our user can use the docker daemon.

With these proper arguments, we were able to build our image, start it up and run our cypress tests against it.

But let's take a deep breath here. In Jenkins, we wanted to use multibranch pipelines, which create a Jenkins job for each branch that contains a Jenkinsfile. This meant that when we developed on multiple branches, each would have its own view.

There were some problems with this. The first problem was that if we built our image with the same name in all the branches, there would be conflicts (since our docker daemon was technically not inside our build container).

The second problem arose when the docker run command used the same port in every build (because you can't start the second container on a port that is already taken).

The third issue was getting the proper URL for the running application, because Dorothy, you are not in Localhost anymore.

Let's start with the naming. Getting a unique name is pretty easy with git, because commit hashes are unique. However, to get a unique port we had to use a little trick when we declared our environment variables:

pipeline {

// ..

  environment {
    BUILD_PORT = sh(
        script: 'shuf -i 2000-65000 -n 1',
        returnStdout: true
    ).trim()
  }

// ...

    stage('Functional Tests') {
      steps {
        sh "docker run -d -p ${BUILD_PORT}:80 --name ${GIT_COMMIT} application"
        // be patient, we are going to get the url as well. :)
      }
    }

// ...

}

With the shuf -i 2000-65000 -n 1 command you can generate a random number on certain Linux distributions. Our base image uses Debian, so we were lucky here. The GIT_COMMIT environment variable was provided in Jenkins via the SCM plugin.

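The port trick can be tried on its own in any shell on a Debian-based image. Note that shuf only picks a random number; it does not check that the port is actually free, collisions are simply unlikely across a 63,000-port range:

```shell
# Pick a pseudo-random host port in the 2000-65000 range,
# the same trick the pipeline's environment block uses
BUILD_PORT=$(shuf -i 2000-65000 -n 1)
echo "$BUILD_PORT"
```

In the Jenkinsfile above, the trailing .trim() matters: sh(returnStdout: true) includes the newline that echo/shuf print, which would otherwise corrupt the -p ${BUILD_PORT}:80 mapping.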
Now came the hard part: we were inside a docker container, there was no localhost, and the network inside docker containers can change.

It was also funny that when we started our container, it was running on the host machine's docker daemon. So technically it was not running inside our container. We had to reach it from the inside.

After several hours of investigation, my colleague found a possible solution: docker inspect --format "{{ .NetworkSettings.IPAddress }}"

But it did not work, because that IP address was not an IP address inside the container, but rather outside it.

Then we tried the NetworkSettings.Gateway property, which worked like a charm. So our functional testing stage looked like the following:

stage('Functional Tests') {
  steps {
    sh "docker run -d -p ${BUILD_PORT}:80 --name ${GIT_COMMIT} application"
    sh 'npm run cypress:run -- --config baseUrl=http://`docker inspect --format "{{ .NetworkSettings.Gateway }}" "${GIT_COMMIT}"`:${BUILD_PORT}'
  }
}

It was a wonderful feeling to see our cypress tests running inside a docker container.

But then some of them failed miserably, because the failing cypress tests expected to see some dates.

cy.get("created-date-cell")
  .should("be.visible")
  .and("contain", "2019.12.24 12:33:17")

But because our build container was set to a different timezone, the displayed date on our front-end was different.

Fortunately, it was an easy fix, and my colleague had seen it before. We installed the necessary time zones and locales. In our case we set the build container's timezone to Europe/Budapest, because our tests were written in this timezone.

# SETUP-LOCALE
RUN apt-get update \
    && apt-get install --assume-yes --no-install-recommends locales \
    && apt-get clean \
    && sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen \
    && sed -i -e 's/# hu_HU.UTF-8 UTF-8/hu_HU.UTF-8 UTF-8/' /etc/locale.gen \
    && locale-gen

ENV LANG="en_US.UTF-8" \
    LANGUAGE= \
    LC_CTYPE="en_US.UTF-8" \
    LC_NUMERIC="hu_HU.UTF-8" \
    LC_TIME="hu_HU.UTF-8" \
    LC_COLLATE="en_US.UTF-8" \
    LC_MONETARY="hu_HU.UTF-8" \
    LC_MESSAGES="en_US.UTF-8" \
    LC_PAPER="hu_HU.UTF-8" \
    LC_NAME="hu_HU.UTF-8" \
    LC_ADDRESS="hu_HU.UTF-8" \
    LC_TELEPHONE="hu_HU.UTF-8" \
    LC_MEASUREMENT="hu_HU.UTF-8" \
    LC_IDENTIFICATION="hu_HU.UTF-8" \
    LC_ALL=

# SETUP-TIMEZONE
RUN apt-get update \
    && apt-get install --assume-yes --no-install-recommends tzdata \
    && apt-get clean \
    && echo 'Europe/Budapest' > /etc/timezone && rm /etc/localtime \
    && ln -snf /usr/share/zoneinfo/'Europe/Budapest' /etc/localtime \
    && dpkg-reconfigure -f noninteractive tzdata
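The timezone part can be sanity-checked without rebuilding the image: on any Linux system with tzdata installed, setting TZ in the environment resolves time the same way the baked-in /etc/localtime does:

```shell
# With tzdata present, Budapest time resolves to CET in winter
# and CEST in summer instead of the container default (usually UTC)
export TZ=Europe/Budapest
date +%Z
```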

Since every crucial part of the build was now resolved, pushing the built image to the registry was just a docker push command. You can check out the whole dockerfile here.

One thing remained, which was to stop running containers when the cypress tests failed. We did this easily using the always post step.

post {
  always {
    script {
      try {
        sh "docker stop ${GIT_COMMIT} && docker rm ${GIT_COMMIT}"
      } catch (Exception e) {
        echo 'No docker containers were running'
      }
    }
  }
}
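If you ever need the same cleanup outside a Jenkinsfile, the try/catch translates to a small shell helper. This is a sketch: stop_container is a hypothetical name, and the container name (the commit hash in our pipeline) is passed as the first argument:

```shell
# Shell equivalent of the try/catch post step: attempt to stop and
# remove the container, and fall back to a log line when nothing is running
stop_container() {
  if docker stop "$1" >/dev/null 2>&1 && docker rm "$1" >/dev/null 2>&1; then
    echo "stopped $1"
  else
    echo "No docker containers were running"
  fi
}
```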

Thank you very much for reading this blog post. I hope it helps you.

The original article can be read on my blog:

Translated from: https://www.freecodecamp.org/news/you-rang-mlord-docker-in-docker-with-jenkins-declarative-pipelines/
