A CI/CD Pipeline Project for a Trunk-Based Development Strategy in a Kubernetes Environment

The main purpose of this story is to present a project on how to build a pipeline for Continuous Integration and Continuous Deployment (CI/CD) on a Trunk-Based Development (TBD) strategy in a Kubernetes environment, leveraging open-source tools to foster automation, control and agility.

In the next paragraphs, we will outline the Trunk-Based Development process and how it can be supported in a real-world CI/CD pipeline. We will also present a set of open-source tools that enable automation of a TBD strategy, along with the general steps to integrate them and achieve their full potential in delivering an automated TBD CI/CD pipeline.

The Trunk-Based Development (TBD) Strategy in the real world

Many organizations are now working with (or migrating to) TBD. As described at https://trunkbaseddevelopment.com/, this model aims to reduce the effort spent on merging procedures, focusing on maintaining a constantly buildable “master” branch. Due to its simplicity, it integrates seamlessly with CI/CD pipelines.

Having an everlastingly deployable master branch offers additional benefits. First, it promotes shared responsibility, where all developers are committed to maintaining a stable trunk. Second, since the merge becomes more challenging the longer changes sit apart, developers commit smaller changes more frequently, delivering features and bug fixes faster. On the other hand, since risk rises when commits are made straight into master, TBD often requires experienced developers to support this strategy.

In my organization, every commit (and push) to the master branch is considered a “good to go” build. This means that whenever an image is built, it is safe to say that it has already passed all tests and is ready for production.

In this pipeline project, feature branches, which can be short- or long-lived, are considered alongside the master branch. It is up to the developer to decide, bearing in mind that the longer a feature branch lives, the more challenging the merge process will be.

Meanwhile, the validation process with our clients is done either in a feature branch, before merging into the master branch, or in staging, leveraging feature toggles, which can be enabled or disabled by configuration. This aligns with the DevOps idea of “fail fast”, which encourages the team to fix issues quickly and improve the system.
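Just to illustrate the idea (a hedged sketch; the flag name and wiring are made up, not our actual configuration), such a toggle can be as simple as a configuration value that each environment overrides:

```yaml
# Hypothetical feature-toggle value; staging could enable it for client validation,
# while production keeps it off until the feature is approved.
featureToggles:
  newSearchExperience: false
```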

The picture below outlines the actual TBD pipeline in my organization:

Pic. 1: TBD Pipeline

The downside of this workflow is the Release Manager’s lack of control over what is promoted into production. As soon as the image is deployed on staging, it only takes a sync step (the “auth” part) to deploy that release to production. There are no validations or release candidates. Therefore, it requires strong communication between Devs and the Release Manager to avoid unwanted releases in production.

An improvement of this process is under discussion, however, in which the Release Manager will be responsible for tagging the release whenever it is ready for the staging and production stages. As a consequence, this moves the “authorization” phase one step earlier. Some automation between our local workflow management tool and our local git tool is also on the radar, enabling auto-tagging once the ticket (or user story) is validated by the Release Manager.

In that scenario, the new TBD pipeline would look something like this:

Pic. 2: The new TBD Pipeline with the tagging step

This new pipeline empowers the Release Manager to decide when the code is ready to move on. The drawback is that emergency bug fixes will be held up waiting for the tag, which reduces agility and can ultimately delay fast fixes.

Despite this new approach, for now this project considers the first TBD pipeline (Pic. 1) for the CI/CD implementation.

The Toolkit

In the DevOps world, it can be extremely challenging to find the perfect tool. There are literally hundreds available that offer amazing features. There are, however, some differences that can make a few of them particularly interesting for a project. At the end of the day, it is a matter of taste, usability, features and integration with your environment. In our case, in addition to their core features, being an open-source tool was also a requirement. A good way to start is to check whether the project is supported by the CNCF (Cloud Native Computing Foundation, https://landscape.cncf.io/). As we had already decided to orchestrate containers with Kubernetes, having a k8s-native tool was also a plus.

Besides, all tools, applications and systems should be deployable in a fully managed private cloud. This is definitely not a constraint, though. Rather, being a cloud-ready project, all infrastructure can easily be migrated to hybrid or public cloud providers, such as AWS, Azure or GCP.

After some research, testing and sweat, this is our chosen fleet (we will see how they work together later on):

  • Argo CD — “Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes”. We decided to use Argo-cd as the tool for deploying helm charts to the staging and production environments. It provides not only visibility and rollbacks but also the option to manually sync newly available helm charts to prod. This complies with the authorization requirement for publishing to production (only those with proper rights can sync into prod). In staging, nevertheless, the synchronization is automatic: whenever a new helm chart of a project becomes available in the chart repository, Argo-cd takes care of its deployment. This also means it enforces configuration. If someone accidentally changes or deletes an object in a k8s namespace watched by Argo, it triggers an auto-sync (or fires a not-in-sync warning in the UI for a production environment). Argo-cd has recently joined the CNCF as an incubating project.

  • Drone — Drone is used for the pipeline automation. This is the core tool that enables the CI/CD pipeline. Drone is an amazing project that has recently been acquired by Harness.io. The pipeline is described within the built project itself, in a drone.yaml file. It is a docker-based pipeline tool, meaning that every step in the pipeline is a container, which dramatically enhances performance with reduced resource consumption. Describing the whole pipeline in a drone.yaml file in each repository can be a problem when you have an application with dozens or even hundreds of microservices managed by independent repos. Luckily, Drone provides the possibility to centralize configuration in only one repo and describe the rules for forwarding the proper drone.yaml file by configuring a drone-config service (https://github.com/drone/boilr-config). Despite already being a stable tool, Drone obviously still has a lot to evolve, but it is a promising project that is definitely worth a look.

  • Jenkins — The well-known alternative to Drone. As Drone still doesn’t support pipeline triggers based on branch deletion (its inner logic is based on a drone.yaml file available on that branch), we decided to use Jenkins only to execute the deletion of a “feature” k8s cluster every time a feature branch is deleted.

  • Ansible — Tool for automation and configuration management of infrastructure and applications. Ansible was used in this project to automate the creation and deletion of a k8s cluster to host a feature-branch build.

  • Harbor — Docker image and Helm chart repository. It also has native integration with the Clair and Trivy image security scanners. It is possible to define security thresholds for pulling images: if an image is tainted with a critical vulnerability, the pull for that image will fail, making it possible to use this feature in the Security Test step.

  • Gitea — Git-based service for code hosting. It is a great option for self-hosting a git service on-premise.

  • Helm — Tool for templating and packaging kubernetes objects. With Helm, it is possible to manage the creation and deployment of all kubernetes declarative files (deployment.yaml, service.yaml, ingress.yaml, etc.) by templating them. In this project, we defined only one template for all repos, which enabled centralized configuration with simplified management; the trade-off, however, is more complex templates. A sketch of the kind of per-service values file this implies is shown after this list.

  • Kubernetes — Container orchestration infrastructure. The foundation platform for our container-based projects.

  • Rancher — Rancher is a powerful stack capable of managing hundreds of kubernetes clusters, providing an API, a friendly UI and integrating many tools for cluster administration.

  • VMWare — Private cloud infrastructure, already part of our inventory.
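To give a feel for the single-template approach mentioned under Helm above, here is a hedged sketch of the kind of per-service values.yaml a developer might write; the field names and values are illustrative assumptions, not the project’s actual schema:

```yaml
# Hypothetical per-service values.yaml consumed by the one shared chart template.
name: billing-api                         # illustrative service name
image:
  repository: harbor.example.local/myapp/billing-api
  tag: 4-master-93d62a1                   # overridden by the pipeline at deploy time
replicaCount: 2
service:
  port: 8080
ingress:
  enabled: true
  host: billing.staging.example.local     # overridden per environment
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```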

The automated CI/CD pipeline

So, let’s put it all together, keeping in mind that we will not delve into the configuration of all the pipeline’s components; otherwise, this would be even longer than it already is. All config is specific to the environment and must be adapted for each situation. In this project, all tools are already running in a kubernetes cluster (except Jenkins, which was already installed in a standalone VM).

Drone is the backbone of the process and is responsible for the underlying pipeline automation. All steps in the pipeline are commanded or orchestrated by Drone and its plugins.

Let’s take a second look at the pipeline, now with our tools in place:

Pic. 3: Our tool choice for the TBD process

First, let’s consider a commit on the master branch and a push to Gitea for an arbitrary repo. This is the default path to production in the automated pipeline.

  • Gitea is configured to send webhooks to Drone that trigger the pipeline process. This configuration is easy and can be done by creating an OAuth application in Gitea (https://docs.drone.io/server/provider/gitea/). By default, Gitea will send webhook events for branch or tag creation, pull requests and pushes, but this can also be configured.

  • Push: The Drone server will receive this event and call the drone-config service (another container, from a boilerplate project) to retrieve the proper centralized drone.yaml file, which is also versioned in a git repo. It is possible to write logic for this step, in case you have different pipelines for different repos.

  • The Drone server will then pass this selected drone.yaml to the Drone agent, the third component of the “Drone suite”. The Drone agent is responsible for parsing this yaml file and calling the plugins. Plugins in Drone are nothing more than docker containers running specific images to execute a step; many are available at http://plugins.drone.io/. The first (and default) step for Drone is to clone the target repo into a workspace. This workspace is a temporary docker volume that is mounted on each step’s container, which is quite useful for passing information from one step to another. Drone will then execute the following steps:

Pic. 4: Print from a master-branch build in the Drone Server UI
  • Unit Test: Drone will execute the unit tests defined in the repo.

  • Build and Push: Drone will call the build plugin to build the image and tag it with a combination of the build number, the branch and 8 digits of the commit’s hash (e.g. 3-master-93d62a1). This enables traceability for the image in production. This tagged image is then pushed to Harbor to become available for the next steps. (A simplified drone.yaml sketch covering this and the following steps appears after this list.)

  • Security Test: While in Harbor, Clair (our choice for security scanning) will inspect the image for vulnerabilities. We’ve set an auto-scan on push and a rule to prevent vulnerable images with low severity and above from being pulled. This constraint will force the pipeline to fail in further steps in case of vulnerabilities.

  • Helm: The next step is to build and publish the helm chart that will be used for deployment. We used the helm package and helm push commands in a custom docker image to execute this step. As one of our principles is to reduce complexity and redundancy in configuration, we maintain only one repo for templating all helm charts. The developer only needs to describe the specific configs for that service in a values.yaml file inside the project, and the script will blend it with the unified template to generate the final chart. Our chart versioning policy is to match the minor version of the chart (e.g. 0.1.4) with each image build (e.g. 4-master-abcdefgh).

  • Deploy Test Environments: In this step, Drone invokes the drone-helm3 plugin to deploy the helm chart to each environment. This plugin can be configured to set values during the deploy, which is quite useful for setting environment-specific URLs for ingress.

  • Deploy Staging and Production: Despite the name, Drone will only configure the Argo-cd application to sync the newly available helm chart. Argo will then be the one that actually deploys and enforces configuration on the kubernetes staging and production environments. What Drone does is check whether the Argo application already exists (in case of a new repo) and create it otherwise. It sets some helm values, just like the previous step, and sets the staging deployment to auto-sync and production to manual sync (a sketch of such an Application object is shown below, after Pic. 5).

  • Authorization: This is not a Drone step. The authorization occurs when the Release Manager approves the deployment in production and manually syncs all “out-of-sync” repos in Argo-cd. In Argo, it is possible not only to sync specific kubernetes objects but also to check the history of all deployments and execute rollbacks to previously working releases.
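To make the master-branch flow more concrete, below is a heavily simplified sketch of what the centralized drone.yaml behind these steps could look like. It is an illustration only: the images, registry URLs, secret names and plugin settings are assumptions rather than our actual configuration.

```yaml
kind: pipeline
type: docker
name: default

steps:
  - name: unit-test
    image: node:14                     # assumption: the test image depends on the repo
    commands:
      - npm ci
      - npm test

  - name: build-and-push
    image: plugins/docker              # Drone docker plugin
    settings:
      registry: harbor.example.local
      repo: harbor.example.local/myapp/${DRONE_REPO_NAME}
      # build number + branch + 8 chars of the commit hash, e.g. 3-master-93d62a1
      tags: ${DRONE_BUILD_NUMBER}-${DRONE_BRANCH}-${DRONE_COMMIT_SHA:0:8}
      username:
        from_secret: harbor_username
      password:
        from_secret: harbor_password

  - name: helm-package-and-push
    image: example/helm-tools          # assumption: custom image bundling helm plus a chart-push plugin
    commands:
      # chart minor version tracks the image build number (policy described above)
      - helm package chart/ --version 0.1.${DRONE_BUILD_NUMBER}
      - helm push ./*.tgz harbor-charts

  - name: deploy-test
    image: pelotech/drone-helm3        # drone-helm3 plugin
    settings:
      mode: upgrade
      chart: ./chart
      release: ${DRONE_REPO_NAME}
      namespace: test
      values: ingress.host=${DRONE_REPO_NAME}.test.example.local
```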

This is how it looks in the Argo-cd UI:

Pic. 5: Images (Helm Charts) ready for sync
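Behind this view, each repo corresponds to an Argo-cd Application object that Drone creates or updates. A hedged sketch is shown below; the repo URL, chart name, namespaces and values are placeholders, not our real setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: billing-api-staging            # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://harbor.example.local/chartrepo/myapp   # the Helm chart repository
    chart: billing-api
    targetRevision: 0.1.4              # set by Drone for each new chart build
    helm:
      values: |
        ingress:
          host: billing.staging.example.local
  destination:
    server: https://kubernetes.default.svc
    namespace: billing-staging
  syncPolicy:
    automated: {}   # staging syncs automatically; the production Application omits this
                    # block, so the Release Manager has to sync it manually in the UI
```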

Now, let’s describe the feature-branch pipeline. It also starts in Gitea, but this time it is triggered by a push to a non-master branch:

  • Push: It is possible to configure conditions in Drone pipelines. In our case, whenever the commit is on a branch matching the “feature-*” name pattern, the alternative pipeline is executed. This means that we don’t need to specify another drone.yaml; rather, we can reuse the common steps and execute others when the branch condition is satisfied. A minimal sketch of such a condition follows, and the image after it shows those new steps:
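```yaml
# Excerpt from the shared drone.yaml: a step guarded by a branch condition,
# so the same file serves master and feature-* pushes alike. Names are illustrative.
steps:
  - name: feature-only-step
    image: alpine
    commands:
      - echo "runs only for feature branches"
    when:
      branch:
        - feature-*
```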

Pic. 6: Pipeline for a push in a feature-branch
  • Unit Test, build, push, sec test and helm: Nothing new or special here, except that images are now tagged with the name of the feature branch instead of “master” (e.g. 5-myfeature-fedcba99). This also makes it easier to find the production-to-be microservice among all the others in the development cluster.

  • Create k8s cluster: This is probably the most interesting step in the whole pipeline. In order to test and validate the new feature, we automatically create a temporary kubernetes cluster that will live for as long as the feature branch exists. This not only creates a standalone, isolated environment for testing but also curbs unnecessary resource consumption (especially useful in the pay-as-you-go public cloud model). For this automation, Drone calls the ansible plugin to execute this step, roughly as sketched below.
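A hedged sketch of what that step might look like in the drone.yaml; the playbook path, inventory and extra variables are assumptions:

```yaml
  - name: create-k8s-cluster
    image: plugins/ansible             # Drone Ansible plugin
    settings:
      playbook: playbooks/create-rancher-cluster.yml   # assumed path
      inventory: inventories/feature-hosts.ini         # assumed path
      extra_vars: branch=${DRONE_BRANCH}
    when:
      branch:
        - feature-*
```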

This ansible playbook can be quite extensive and can be found at https://github.com/alexismaior-ansible/play-create-rancher-cluster-per-branch. The overall steps executed by this playbook are listed below, followed by a rough skeleton of the flow:

  1. Check if there are available VMs in VMWare vCenter. As we are in a private cloud environment and we don’t want to wait a long time for VM provisioning, the VMs were created beforehand and their “availability” is controlled by tagging. If a VM has the “available” tag, it can be used in our new cluster. This playbook can easily be adapted to dynamically create and delete VMs on the fly.

  2. Install prerequisites on those VMs, such as docker, helm and kubectl.

  3. Create a kubernetes cluster in Rancher. For this, we use an ansible role designed for communicating with the Rancher API (alexismaior.ansible_role_manage_rancher_cluster).

  4. Add the VMs to this new cluster, using the same ansible role.

  5. Create and set a VMWare tag on the VMs. Now they are formally assigned to the cluster and won’t be available for other feature-branch clusters.

  6. Register DNS entries for the kubernetes ingress.

  7. Clone the staging environment. For testing purposes, it is critical to have an environment as close as possible to production. Therefore, we run a shell script (https://github.com/alexismaior/kubernetes/blob/master/scripts/kubeconfig-devtest.yaml) that deploys all helm charts available in staging to the new cluster.

  8. Finally, communicate in a Telegram group that the new k8s cluster is available and the application deployed, with its respective ingress URL.
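The playbook linked above is the authoritative version; purely as an orientation aid, a skeleton of the flow described in these steps might look roughly like this (module choices, arguments and variable names are simplified assumptions):

```yaml
# Simplified skeleton of the cluster-per-feature-branch playbook.
- hosts: localhost
  gather_facts: false
  vars:
    cluster_name: "feature-{{ branch }}"    # branch is passed in from the Drone step
    selected_vms: []                        # filled from the tag lookup in the real playbook
  tasks:
    - name: Look up VM tags in vCenter to find "available" VMs
      community.vmware.vmware_tag_info:
        hostname: "{{ vcenter_host }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
      register: tag_info

    - name: Install prerequisites (docker, helm, kubectl) on the selected VMs
      ansible.builtin.package:
        name: docker
        state: present
      delegate_to: "{{ item }}"
      loop: "{{ selected_vms }}"

    - name: Create the cluster in Rancher and add the nodes to it
      ansible.builtin.include_role:
        name: alexismaior.ansible_role_manage_rancher_cluster

    - name: Re-tag the VMs so they are assigned to this cluster
      community.vmware.vmware_tag_manager:
        hostname: "{{ vcenter_host }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        tag_names: ["{{ cluster_name }}"]
        object_name: "{{ item }}"
        object_type: VirtualMachine
        state: present
      loop: "{{ selected_vms }}"

    # Remaining steps (DNS registration, cloning the staging charts and the
    # Telegram notification) are omitted from this sketch.
```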

Continuing with the feature-branch steps, we still have:

  • Deploy Feature in dev: Back to Drone; here we call the drone-helm3 plugin to do the job and deploy the new feature’s helm chart to the new cluster.

  • Delete k8s Cluster: As said, Drone cannot yet be triggered by branch deletion, therefore we have chosen to rely on Jenkins to execute the cluster-deletion playbook. Jenkins receives a webhook from Gitea whenever a branch is deleted and passes its name to the ansible playbook. P.S.: automatic deletion of a cluster can be dangerous for obvious reasons (imagine someone creating a “production” feature branch). Besides creating clusters with the “feature” prefix, we also guarantee that the service user running this action in Rancher does not have RW rights in prod.

That’s it! This is the happy ending of the pipeline.

Conclusion

This project has incredibly enhanced our control and observability over the deployment process. Now we can measure delivery and deployment times, test error rates and deployment frequency. Besides, the DevOps team can now prioritize slower steps and improve testing.

Many other tools, such as fluentd, Elasticsearch, Kibana, Prometheus and Grafana, are also being used for logging and reporting purposes. Rancher also provides great features for kubernetes cluster management, security auditing and controls. Much more remains to be done, but the overall result has shown us that leveraging automation is the key to greater productivity with managed risks.

Source: https://medium.com/swlh/a-ci-cd-pipeline-project-for-a-trunk-based-development-strategy-in-a-kubernetes-environment-c4ffea9700fe
