Build an EKS Cluster with Terraform

Introduction

This post describes the creation of a multi-zone Kubernetes Cluster in AWS, using Terraform with some AWS modules. Specifically, we are going to use infrastructure as code to create:

  • A new VPC with multi-zone public & private Subnets, and a single NAT gateway.

  • A Kubernetes Cluster, based on Spot EC2 instances running in private Subnets, with an Autoscaling Group based on average CPU usage.

  • An Elastic Load Balancer (ELB) to accept public HTTP calls and route them into Kubernetes nodes, as well as run health checks to scale Kubernetes services if required.

  • An Nginx Ingress Gateway inside the Cluster, to receive & forward HTTP requests from the outside world into Kubernetes pods.

  • A DNS zone with an SSL certificate, to provide HTTPS for each Kubernetes service.

  • A sample application to deploy into our Cluster, using a small Helm Chart.

Using official Terraform modules brings us the simplicity of coding AWS components, like private networks or Kubernetes Clusters, following the best practices of verified providers (a.k.a. do not reinvent the wheel).

Requirements

  • AWS Account, with programmatic access. We will use these credentials to configure some environment variables later.

  • Terraform CLI or Terraform Cloud. This document uses version 0.12.24, but feel free to use a newer version if you want to. My recommendation is to use a Docker container, to simplify installation and pin a specific version.

  • A terminal to run the Terraform CLI, or a source control repo if you are using Terraform Cloud. In my case I use a CI pipeline for this, to avoid depending on a particular computer to run Terraform commands, and to keep a history of past deployments.

💰 Disclaimer: creating VPC, EKS & DNS resources will likely add some cost to your monthly AWS bill, since some resources may go beyond the free tier. So, be aware of this before applying any Terraform plans!

Terraform Configuration

After a short introduction, let’s get into our infrastructure as code! We will see small snippets of the Terraform configuration required at each step; feel free to copy them and try applying these plans on your own. But if you are getting curious or impatient to get this done, take a look into this repository, which gathers all Terraform configurations in a single place and uses a CI pipeline to apply them.

The very first step is to define the overall Terraform configuration: the state file backend and the Terraform version to be used:

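A minimal sketch of this block could look like the following (the empty backend block is filled in at init time, and the version constraint matches the 0.12.24 release used here):

terraform {
  # Pin the Terraform version used for this infrastructure
  required_version = "~> 0.12.24"

  # The backend is left empty on purpose: bucket, key and region
  # are provided externally via backend.tfvars at terraform init time
  backend "s3" {}
}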

Recommendation: it is a good idea to declare the version of Terraform used while coding our infrastructure, to avoid breaking changes that could affect our code if we run a newer/older version of terraform in the future.

Recommendation: the backend configuration is almost empty, and that is on purpose. It is recommended to externalize this setup into separate files if required (e.g. one config per environment). In this case we will use a single S3 backend, with one state file per Terraform workspace:

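For instance, a backend.tfvars file matching this setup could look like this (the bucket name is the illustrative one used below, and workspace_key_prefix produces the per-workspace layout shown next):

bucket               = "my-vibrant-and-nifty-app-infra"
key                  = "tf-state.json"
region               = "us-west-2"
workspace_key_prefix = "environment"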

This means we will use an S3 bucket called “my-vibrant-and-nifty-app-infra”, whose contents will look like this:

s3://my-vibrant-and-nifty-app-infra/
|_ environment/
   |_ development/
   |  |_ tf-state.json
   |_ staging/
   |  |_ tf-state.json
   |_ production/
      |_ tf-state.json

⚠️ Important: the S3 bucket defined here will not be created by Terraform if it does not exist in AWS. This bucket has to be created externally, either by manual action or by a CI/CD tool running a command like this:

aws s3 mb s3://my-vibrant-and-nifty-app-infra --region us-west-2

⚠️ Important: bear in mind that S3 bucket names must be unique worldwide, across all AWS accounts and regions. Try to use a custom name for your bucket when running the aws s3 mb command, and also when defining the backend.tfvars file. That is the reason why I chose a very customized name such as “my-vibrant-and-nifty-app-infra”.

To initialize each workspace, for instance “development”, we should run the following commands:

terraform workspace new development
terraform init -backend-config=backend.tfvars

In future executions, we can select our existing workspace using the following command:

terraform workspace select development
terraform init -backend-config=backend.tfvars

Recommendation: resource providers can be handled automatically by Terraform when running the init command. However, it is a good idea to define them explicitly, pinning their versions:

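For example, a pinned AWS provider block could look like this (version and region are illustrative):

provider "aws" {
  # Pinning the provider version avoids unexpected upgrades on future init runs
  version = "~> 2.57"
  region  = "us-west-2"
}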

It is also recommended to avoid defining AWS credentials in provider blocks. Instead, we can use environment variables for this purpose, which Terraform picks up automatically to authenticate against the AWS APIs:

AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
AWS_DEFAULT_REGION=us-west-2

Now, we’re ready to start writing our infrastructure as code!

VPC

Let’s start by creating a new VPC to isolate our EKS-related resources in a safe place, using the official VPC Terraform module published by AWS:

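A trimmed sketch of this module call could look like the following; the variable names (vpc_cidr, availability_zones and so on) are illustrative and must match your own variables.tf:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0"

  name = "eks-vpc"
  cidr = var.vpc_cidr

  azs             = var.availability_zones
  private_subnets = var.private_subnet_cidrs
  public_subnets  = var.public_subnet_cidrs

  # A single NAT gateway (instead of one per AZ) keeps costs down
  enable_nat_gateway   = true
  single_nat_gateway   = true
  enable_dns_hostnames = true

  # Tags required by EKS to discover subnets for load balancers
  public_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/elb"                    = "1"
  }

  private_subnet_tags = {
    "kubernetes.io/cluster/${var.cluster_name}" = "shared"
    "kubernetes.io/role/internal-elb"           = "1"
  }
}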

As noted in the comments of the previous code block, we will create a new VPC with subnets in each Availability Zone, and a single NAT Gateway to save some costs, adding the tags required by EKS. Remember to also define a variable values file (e.g. one per environment) for the previous block:

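For instance, a network-development.tfvars file could contain (all values illustrative):

cluster_name         = "my-eks-cluster"
vpc_cidr             = "10.0.0.0/16"
availability_zones   = ["us-west-2a", "us-west-2b", "us-west-2c"]
private_subnet_cidrs = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnet_cidrs  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]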

Now we should be ready to create these VPC resources with Terraform. If we already ran the init command, we can examine the resources to be created or updated by Terraform using the plan command:

terraform plan -out=development.tfplan -var-file=network-development.tfvars

Then we can apply those changes using the apply command, after user confirmation:

terraform apply development.tfplan

EKS Cluster

The next move is to use the official EKS Terraform module to create a new Kubernetes Cluster:

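A condensed sketch of this module call could look like this; the module version, instance types and the admin_users/developer_users/aws_account_id variables are illustrative, and the CPU-based autoscaling is expressed as a target tracking policy attached to the workers' Auto Scaling group:

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 12.0"

  cluster_name    = var.cluster_name
  cluster_version = "1.16"
  vpc_id          = module.vpc.vpc_id
  subnets         = module.vpc.private_subnets

  # Worker group based on Spot EC2 instances, running in the private subnets
  worker_groups_launch_template = [
    {
      name                                     = "spot-workers"
      override_instance_types                  = ["t3.medium", "t3a.medium"]
      spot_instance_pools                      = 2
      on_demand_base_capacity                  = 0
      on_demand_percentage_above_base_capacity = 0
      asg_min_size                             = 1
      asg_desired_capacity                     = 2
      asg_max_size                             = 5
      kubelet_extra_args                       = "--node-labels=kubernetes.io/lifecycle=spot"
    }
  ]

  # Map IAM users into the "admins" and "developers" Kubernetes groups
  map_users = concat(
    [for user in var.admin_users : {
      userarn  = "arn:aws:iam::${var.aws_account_id}:user/${user}"
      username = user
      groups   = ["system:masters"]
    }],
    [for user in var.developer_users : {
      userarn  = "arn:aws:iam::${var.aws_account_id}:user/${user}"
      username = user
      groups   = ["developers"]
    }]
  )
}

# Autoscale the workers based on average CPU usage (target tracking)
resource "aws_autoscaling_policy" "eks_workers_cpu" {
  name                   = "eks-workers-cpu-target"
  autoscaling_group_name = module.eks.workers_asg_names[0]
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 60.0
  }
}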

As shown in the previous code block, we are creating:

  • An EC2 autoscaling group for Kubernetes, composed of Spot instances scaled out/in based on average CPU usage.

  • An EKS cluster, with two groups of users (called “admins” and “developers”).

  • An EC2 Spot termination handler for Kubernetes, which takes care of rescheduling Kubernetes objects when Spot instances get automatically terminated by AWS. This installation uses Helm to ease things up.

And we also define some Kubernetes/Helm Terraform providers, to be used later to install & configure stuff inside our Cluster using Terraform code.

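A sketch of these provider definitions, together with the Spot termination handler installed through Helm, could look like this (provider and chart versions are illustrative):

# Data sources used to authenticate against the new EKS cluster
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  version                = "~> 1.11"
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
  token                  = data.aws_eks_cluster_auth.cluster.token
  load_config_file       = false
}

provider "helm" {
  version = "~> 1.1"

  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
    token                  = data.aws_eks_cluster_auth.cluster.token
    load_config_file       = false
  }
}

# Reschedules pods before AWS reclaims the Spot instances
resource "helm_release" "spot_termination_handler" {
  name       = "aws-node-termination-handler"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-node-termination-handler"
  namespace  = "kube-system"
}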

Bear in mind that this Terraform configuration block uses some variables defined in the previous Terraform blocks, so it has to be stored as a new file in the same folder as the VPC definition file. Some variables are new, though, so we need to define their corresponding values in a new file:

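For example, an eks-development.tfvars file could look like this:

aws_account_id  = "123456789012"
admin_users     = ["john.doe", "jane.roe"]
developer_users = ["dev.one", "dev.two"]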

⚠️ Note: the user IDs displayed above are fictitious, and of course they have to be customized according to the user groups present in your AWS account. Bear in mind that these usernames do not have to exist as AWS IAM identities at the moment of creating the EKS Cluster or assigning RBAC access, since they will live inside the Kubernetes Cluster only. The IAM/Kubernetes username correlation is handled by the AWS CLI at the moment of authenticating against the EKS Cluster.

Recommendation: to facilitate code reading and make variable files easy to use, it is a good idea to create a separate Terraform configuration file defining all variables at once (e.g. variables.tf), and then define several variable values files as:

  • A single terraform.tfvars file (automatically loaded by Terraform commands) with all generic variable values, which do not have customized or environment-specific values.

  • Environment-or-case-specific *.tfvars files with all variable values which are specific to a particular case or environment, and will be explicitly used when running the terraform plan command.

However, for the sake of this article we will skip these rules, to make each step in the creation of AWS resources easier to follow. This means we will run the terraform plan command adding every variable values file explicitly, as we write new configuration blocks:

terraform plan -out=development.tfplan -var-file=network-development.tfvars -var-file=eks-development.tfvars
terraform apply development.tfplan

Once the plan is applied, we have a brand-new EKS cluster in AWS!

Load Balancer

Now we can move on to creating an Elastic Load Balancer (ELB) to handle HTTP requests to our services. The creation of the ELB will be triggered by a new Kubernetes Service, deployed through the Helm Chart of an Nginx Ingress deployment:

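A condensed sketch of this configuration could look like the following, using the community ingress-nginx chart (the chart repository, release name, DNS record name and certificate handling are illustrative, and the DNS validation records for the ACM certificate are omitted for brevity):

# Existing DNS zone, managed outside this Terraform workspace
data "aws_route53_zone" "base_domain" {
  name = var.dns_base_domain
}

# New AWS-issued SSL certificate covering the base domain and its subdomains
resource "aws_acm_certificate" "eks_domain_cert" {
  domain_name               = var.dns_base_domain
  subject_alternative_names = ["*.${var.dns_base_domain}"]
  validation_method         = "DNS"
}

# Nginx Ingress controller; its LoadBalancer Service makes Kubernetes
# create the ELB, which terminates SSL using the ACM certificate
resource "helm_release" "ingress_nginx" {
  name       = "ingress-nginx"
  repository = "https://kubernetes.github.io/ingress-nginx"
  chart      = "ingress-nginx"
  namespace  = "kube-system"

  set {
    name  = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-ssl-cert"
    value = aws_acm_certificate.eks_domain_cert.arn
  }

  set {
    name  = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-backend-protocol"
    value = "http"
  }

  set {
    name  = "controller.service.annotations.service\\.beta\\.kubernetes\\.io/aws-load-balancer-ssl-ports"
    value = "https"
  }

  # Send decrypted HTTPS traffic to the controller's plain HTTP port
  set {
    name  = "controller.service.targetPorts.https"
    value = "http"
  }
}

# Read the ELB hostname from the Service created by the chart; it is
# also reused in the DNS section below
data "kubernetes_service" "ingress_nginx" {
  metadata {
    name      = "ingress-nginx-controller"
    namespace = "kube-system"
  }
  depends_on = [helm_release.ingress_nginx]
}

# New DNS entry associated with the ELB (record name is illustrative)
resource "aws_route53_record" "ingress_elb" {
  zone_id = data.aws_route53_zone.base_domain.zone_id
  name    = "ingress.${var.dns_base_domain}"
  type    = "CNAME"
  ttl     = 300
  records = [data.kubernetes_service.ingress_nginx.load_balancer_ingress.0.hostname]
}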

As you can see above, the Ingress definition uses a new AWS-issued SSL certificate to provide HTTPS on our ELB, placed in front of our Kubernetes pods, and also defines some annotations required by Nginx Ingress on EKS. At the end it creates a new DNS entry associated with the ELB, which in this example depends on a manually configured DNS zone in Route53.

⚠️ Note: in this case I decided to re-use a DNS zone created outside of this Terraform workspace (defined in the “dns_base_domain” variable). That is the reason why we are using a data source to fetch an existing Route53 zone, instead of creating a new resource. Feel free to change this if required, and create new DNS resources if you do not have any already.

Like the other Terraform configuration files, this one also uses some new variables. So, let’s define them for our “development” environment:

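For our “development” environment this could simply be (domain is illustrative):

dns_base_domain = "example.com"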

And then run terraform plan & apply:

terraform plan -out=development.tfplan -var-file=network-development.tfvars -var-file=eks-development.tfvars -var-file=ingress-development.tfvars
terraform apply development.tfplan

DNS

The next step is to create some DNS subdomains associated with our EKS Cluster, which the Ingress Gateway will use to route requests to specific applications:

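A sketch of such a configuration could look like this, reusing the Route53 zone data source and the ELB hostname fetched in the previous section (the load_balancer_ingress attribute path is the one exposed by the 1.x kubernetes provider):

# One CNAME record per application subdomain, all pointing at the ingress ELB
resource "aws_route53_record" "app_subdomains" {
  for_each = toset(var.app_subdomains)

  zone_id = data.aws_route53_zone.base_domain.zone_id
  name    = "${each.value}.${var.dns_base_domain}"
  type    = "CNAME"
  ttl     = 300
  records = [data.kubernetes_service.ingress_nginx.load_balancer_ingress.0.hostname]
}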

This code requires one variable value, which could be something like:

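For instance (subdomain names are illustrative):

app_subdomains = ["hello-world", "api"]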

It will be applied as follows, after user confirmation:

terraform plan -out=development.tfplan -var-file=network-development.tfvars -var-file=eks-development.tfvars -var-file=ingress-development.tfvars -var-file=subdomains-development.tfvars
terraform apply development.tfplan

Kubernetes Namespaces

The next step, not really mandatory but recommended, is to define some Kubernetes namespaces to separate our Deployments, and get better management & visibility of the applications in our Cluster:

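A minimal sketch of this configuration could be:

# Create one Kubernetes namespace per entry in the list
resource "kubernetes_namespace" "namespaces" {
  for_each = toset(var.namespaces)

  metadata {
    name = each.value
  }
}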

This configuration file expects a list of namespaces to be created in our EKS Cluster:

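For example (names are illustrative):

namespaces = ["applications", "tools"]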

Which could be applied as:

terraform plan -out=development.tfplan -var-file=network-development.tfvars -var-file=eks-development.tfvars -var-file=ingress-development.tfvars -var-file=subdomains-development.tfvars -var-file=namespaces-development.tfvars
terraform apply development.tfplan

RBAC Access

The last step is to set up RBAC permissions for the developers group defined in our EKS Cluster:

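A sketch of such an RBAC configuration could look like this, binding a ClusterRole to the “developers” group mapped earlier in the EKS module (the resources and verbs shown are illustrative):

resource "kubernetes_cluster_role" "developers" {
  metadata {
    name = "developers"
  }

  # Read access to the most common application objects
  rule {
    api_groups = ["", "apps", "extensions"]
    resources  = ["pods", "pods/log", "deployments", "ingresses", "services"]
    verbs      = ["get", "list", "watch"]
  }

  # Allow exec-ing into running pods and proxying to local ports
  rule {
    api_groups = [""]
    resources  = ["pods/exec", "pods/portforward"]
    verbs      = ["create"]
  }
}

resource "kubernetes_cluster_role_binding" "developers" {
  metadata {
    name = "developers"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = kubernetes_cluster_role.developers.metadata.0.name
  }

  subject {
    api_group = "rbac.authorization.k8s.io"
    kind      = "Group"
    name      = "developers"
  }
}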

As you can see, this configuration block grants access to view some Kubernetes objects (like pods, deployments, ingresses and services), as well as to execute commands in running pods and create proxies to local ports. On the other hand, this configuration block does not require any new variable values apart from those used previously, so we can apply it using the same command as before:

terraform plan -out=development.tfplan -var-file=network-development.tfvars -var-file=eks-development.tfvars -var-file=ingress-development.tfvars -var-file=subdomains-development.tfvars -var-file=namespaces-development.tfvars
terraform apply development.tfplan

That’s it! We finally have a production-ready EKS Cluster, ready to host applications with public IP access 🎉. Remember to visit this repository for a complete view of all these Terraform configurations, plus a sample CI pipeline to apply them in AWS.

Sample Application Deployment

As a bonus, I will leave a link to a sample application, which uses Helm to deploy a very small container, based on this Docker image, into our new Kubernetes Cluster. It also contains some CI jobs that could help you get familiar with the aws eks and helm commands.

[Image: sample application running in Kubernetes 😊]

Wrapping Up

That’s it for now! I hope this page helped you understand some key concepts behind a basic Kubernetes Cluster in AWS, and get hands-on with some good practices for Terraform configuration files.

I would really appreciate any kind of feedback, doubts or comments. Feel free to ping me here, or leave a comment on this post.

Translated from: https://itnext.io/build-an-eks-cluster-with-terraform-d35db8005963
