Provision Your Cloud Infrastructure With Terraform

The infrastructure-as-code concept allows you to manage an entire cloud infrastructure in a declarative way. AWS provides the CloudFormation service to model infrastructure and application resources using text files or programming languages in an automated way. Although CloudFormation is simple to learn and use, its main downside is that it is tied to AWS only. If you manage a multi-cloud infrastructure with applications deployed across AWS, Azure, Google Cloud, and others, you would use Terraform to build, change, and manage infrastructure in a safe and repeatable way across clouds.

In this article, you will be introduced to the way Terraform lets you define infrastructure through its configuration language, called HCL. After that, we will model and apply changes to a multi-region infrastructure on AWS consisting of multiple custom VPCs and EC2 instances.

Introduction to Terraform

The starting point for using Terraform is installing it following the instructions from the official HashiCorp website. Once it’s installed, you need to create an empty directory and a .tf file with a name of your choice. You will use that file to define the entire cloud infrastructure, but we will get to that shortly.

When creating a new project, Terraform creates a state file. This local state is used to create plans and make changes to your infrastructure, and it serves as a single source of truth to match configuration changes with the provisioned infrastructure. Prior to any operation, Terraform does a refresh to update the state with the real cloud infrastructure.
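
You can inspect that state from the command line at any time. As a quick illustration, the commands below are standard Terraform CLI subcommands; the resource address in the second one refers to the VPC we will define later in this article:

# List all resources tracked in the state
terraform state list

# Print the recorded attributes of a single resource
terraform state show aws_vpc.custom-vpc-1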

To create the state, you need to initialize a new Terraform project in the same directory where you created the .tf file by calling the terraform init command.
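
A minimal session might look like the following sketch; the directory name is just a placeholder, and terraform validate is an optional extra step that checks the configuration for syntax errors:

cd terraform-demo
terraform init
terraform validate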

Now we can start defining the architecture by understanding how Terraform works. The starting building block of a Terraform infrastructure declaration is a provider, which is a plugin that offers a collection of resource types. A provider usually provides resources to manage a single cloud or on-premises infrastructure platform. Providers are distributed separately from Terraform itself, but Terraform can automatically install most providers when initializing a working directory. We will be focused on AWS, so the provider we need is the AWS plugin, configured as shown below.

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  profile = "terraform"
  region  = "us-east-1"
}
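
As a side note, in real projects it's usually a good idea to pin the provider to a version range so that terraform init doesn't silently pull in a release with breaking changes. The constraint below is only an illustrative sketch; pick the version that matches your setup:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      # Example constraint only: any 3.x release
      version = "~> 3.0"
    }
  }
}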

Before we dive any deeper, you might be wondering: Since Terraform will need to interact with an AWS account in order to provision the required architecture, shouldn’t we specify some access keys for this?

Of course! You will need to create a new user with programmatic access through the IAM console, and download and install the AWS CLI on the same machine as the Terraform installation. Then, you need to configure AWS credentials by running aws configure --profile terraform and entering the access keys you just created. You can then instruct Terraform to use these credentials by specifying the credentials profile in the provider.
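
The configuration dialog looks roughly like this; the key values shown here are placeholders for the ones generated in the IAM console:

$ aws configure --profile terraform
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: json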

The next step is to start provisioning the infrastructure. To begin with, we will define an infrastructure from the bottom up following the steps below:

  • Create a custom VPC
  • Create a custom subnet
  • Create an internet gateway and attach it to the VPC
  • Create a route table and associate the IGW and the custom subnet with it
  • Create a custom security group with HTTP and SSH rules
  • Create a key pair to securely log in to the EC2 instance
  • Create an EC2 instance

The first thing that comes to mind right now is that we cannot provision these resources in just any order: We first need to create the custom VPC before we can launch an EC2 instance in it. Terraform makes this very easy: We can specify implicit dependencies between resources by linking them with the IDs of previously provisioned resources.

A. Create custom VPC

As a first step, we will create the custom VPC by specifying the cidr_block and other options to enable DNS resolution in that VPC. We do that by creating a Terraform resource, which consists of three parts: a resource type from the provider, a local resource name, and a configuration block.
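
In HCL, that general form looks like the sketch below; the angle-bracketed names are placeholders, not real identifiers:

resource "<RESOURCE_TYPE>" "<RESOURCE_NAME>" {
  # configuration arguments for this resource
  argument = value
}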

Since we are only dealing with AWS resources, the resource type will always begin with the aws_ prefix. The following code demonstrates the definition of a VPC.

resource "aws_vpc" "custom-vpc-1" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
}

B. Create custom subnet

When specifying these details about the VPC, Terraform cannot infer its unique random ID before the resource is actually created, so in order to define dependent resources, we reference the VPC ID, as shown in the snippet below for the custom subnet. By writing vpc_id = aws_vpc.custom-vpc-1.id, we define an implicit dependency stating that the VPC must be created before the subnet.

resource "aws_subnet" "custom-subnet-1" {
cidr_block = "10.0.1.0/24"
vpc_id = aws_vpc.vpc-1.id
map_public_ip_on_launch = true
}

C. Create internet gateway

The snippet below shows the definition of an internet gateway and its attachment to the previously defined VPC.

resource "aws_internet_gateway" "custom-igw-1" {
vpc_id = aws_vpc.custom-vpc-1.id
}

D. Create and associate a custom route table

As you know, when creating a VPC, a default route table is created, but we won’t modify that. Instead, we will create a new custom route table, and associate it with the internet gateway and the custom subnet. We can achieve that by initially creating a route table and adding a custom route 0.0.0.0/0 that will point to the gateway, and then creating a route association to associate the custom subnet with it.

resource "aws_route_table" "custom-rt1" {


  vpc_id = aws_vpc.custom-vpc-1.id


  route {


    cidr_block = "0.0.0.0/0"


    gateway_id = aws_internet_gateway.custom-igw-1.id


  }


}
resource "aws_route_table_association" "custom-rt-association-1" {


  provider = aws


  route_table_id = aws_route_table.custom-rt1.id


  subnet_id = aws_subnet.custom-subnet-1.id


}

E. Create a custom security group with HTTP and SSH rules

The next step is to create an EC2 instance. We could do that with minimal configuration, in which case a default security group would be created, but we want to define a custom security group that allows HTTP and SSH traffic to any instance it's attached to. We do that using the following snippet:

resource "aws_security_group" "custom-sg1" {


  provider = aws


  name = "allow_http_and_ssh"


  description = "Allow HTTP and SSH traffic"


  vpc_id = aws_vpc.custom-vpc-1.id






  ingress {


    description = "HTTP access"


    from_port = 80


    protocol = "tcp"


    to_port = 80


    cidr_blocks = ['0.0.0.0/0']


  }




  ingress {


    description = "SSH access"


    from_port = 22


    protocol = "tcp"


    to_port = 22


    cidr_blocks = ['0.0.0.0/0']


  }




  egress {


    from_port = 0


    protocol = "-1"


    to_port = 0


    cidr_blocks = ['0.0.0.0/0']


  }


}

First of all, we associate the security group with the custom VPC and define ingress rules to allow traffic from anywhere on port 80 and port 22 (don't allow traffic from anywhere on port 22 for your own systems, as it's a security concern). We also need to provide egress rules to allow the EC2 instance to talk to the internet.

F. Create key pair to securely log in to EC2 instance

As a final step before creating the EC2 instance, we need to define an RSA key pair that will allow us to SSH into the instance. We can do that by first generating a public and private key locally (using ssh-keygen) and then defining the key pair using the following snippet.

resource "aws_key_pair" "custom -kp1" {
key_name = "terraform-key"
public_key = "PASTE_PUBLIC_KEY_HERE"
}
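
If you don't have a key pair yet, a typical way to generate one locally looks like this (the file path is just an example):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/terraform-key

Rather than pasting the public key inline, you can also load it with Terraform's built-in file() function, assuming the path above:

resource "aws_key_pair" "custom-kp1" {
  key_name   = "terraform-key"
  public_key = file("~/.ssh/terraform-key.pub")
}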

G. Create an EC2 instance

Now we are ready to create our EC2 instance in the custom subnet within the custom VPC. There are several things to keep in mind here:

  • ami: For each instance, you must specify the ID of the AMI that you will use. You can get this from the AWS console, but note that AMI IDs are region-specific, so the same image has a different ID in each region.

  • instance_type: This is the instance type that will be used to launch the EC2 instance. For this example, we choose t2.micro, which is free-tier eligible.

The snippet below shows the definition of the EC2 instance and its association with the previously defined subnet, security group, and key pair.

resource "aws_instance" "custom-ec2-1" {


  ami = "ami-02354e95b39ca8dec"


  instance_type = "t2.micro"


  key_name = aws_key_pair.custom-kp1.key_name


  subnet_id = aws_subnet.custom-1.id


  security_groups = [aws_security_group.custom.id]


}

Right now, you are ready to deploy the infrastructure. But before you do that, you need to check what changes are going to be applied to your account, using the terraform plan command. The output of this command will look like the following:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_instance.custom-ec2-1 will be created
  + resource "aws_instance" "custom-ec2-1" {
      + ami                          = "ami-02354e95b39ca8dec"
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      ...

You can see that the instance's properties, along with their values, are listed in the output. The values stated as "known after apply" will only be generated after the resource is provisioned in the infrastructure.

Once you confirm that everything is OK in the Terraform plan, you can deploy the infrastructure using the terraform apply command. It should take a few seconds or minutes, after which you will see all the newly created components in your AWS account.

Note: It’s a good idea to add tags to all resources in order to better identify them when doing cost analysis.
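For example, tagging the VPC could look like the snippet below; the tag keys and values here are just a suggestion:

resource "aws_vpc" "custom-vpc-1" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name        = "custom-vpc-1"
    Environment = "demo"
  }
}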

Define Multi-Region Infrastructure

Now that we are done with defining a single-region architecture, we can easily move on to defining the same architecture in multiple regions. The reason for this might be that you want a global, resilient architecture replicated in multiple regions, in which case you could add a weighted routing policy to Route53 to distribute traffic equally across regions. But we will leave the Route53 configuration for another article, even though it's straightforward to use.

As you noticed, we defined an AWS provider at the beginning of the Terraform configuration, specifying the region us-east-1. Now we also want to deploy resources in two more regions, eu-central-1 and ap-northeast-1, so we will need to define a provider for each region.

provider "aws" {
   alias = "eu"
   profile = " terraform "
   region  = "eu-central-1"
 }
 
 provider "aws" {
   alias = "ap"
   profile = " terraform "
   region  = "ap-northeast-1"
 }

Note that you can only have one provider with a given name and alias combination, but we can define more providers with different aliases, one for each region. When defining resources, we can reference a specific provider using the scheme PROVIDER_NAME.ALIAS (e.g. aws.eu). Following this logic, we can simply copy and paste the resources and specify the respective provider for each resource.

resource "aws_vpc" "custom-vpc-1" {
   provider = aws
   cidr_block = "10.0.0.0/16"
   enable_dns_support = true
   enable_dns_hostnames = true
 }
 
 resource "aws_vpc" "custom-vpc-2" {
   provider = aws.eu
   cidr_block = "10.0.0.0/16"
   enable_dns_support = true
   enable_dns_hostnames = true
 }
 
 resource "aws_vpc" " custom-vpc-3" {
   provider = aws.ap
   cidr_block = "10.0.0.0/16"
   enable_dns_support = true
   enable_dns_hostnames = true
 }

You can follow the same logic for the remaining resources. However, I want to point out a simple level of reusability that will come in handy when defining security groups for different regions.

Since security groups live in a specific region and VPC, we need to create a security group for each region. These security groups will have the same ingress and egress rules, so we want to reuse that code.

In Terraform, we can define variables and reference them throughout the code. That's why we will define the ingress and egress rules as variables, as shown below. The variable definition is pretty straightforward: the values go in the default argument, and the data type in the type argument.

variable "ingress-rules" {
   default = {
     "http-ingress" = {
       description = "For HTTP"
       from_port   = 80
       to_port     = 80
       protocol    = "tcp"
       cidr_blocks = ["0.0.0.0/0"]
     },
     "ssh-ingress" = {
       description = "For SSH"
       from_port   = 22
       to_port     = 22
       protocol    = "tcp"
       cidr_blocks = ["0.0.0.0/0"]
     }
   }
   type = map(object({
     description = string
     from_port   = number
     to_port     = number
     protocol    = string
     cidr_blocks = list(string)
   }))
 }
 
 variable "egress-rules" {
   default = {
     "all-egress" = {
       description = "All"
       from_port   = 0
       to_port     = 0
       protocol    = "-1"
       cidr_blocks = ["0.0.0.0/0"]
     }
   }
   type = map(object({
     description = string
     from_port   = number
     to_port     = number
     protocol    = string
     cidr_blocks = list(string)
   }))
 }

Once the variables are defined, we can reference them in dynamic blocks when defining the security groups for each region, as shown below:

resource "aws_security_group" "custom-sg1" {
   provider = aws
   name = "allow_http_and_ssh"
   description = "Allow HTTP and SSH traffic"
   vpc_id = aws_vpc.custom-vpc-1.id
 
   dynamic "ingress" {
     for_each = var.ingress-rules
     content {
       description      = lookup(ingress.value, "description", null)
       from_port        = lookup(ingress.value, "from_port", null)
       to_port          = lookup(ingress.value, "to_port", null)
       protocol         = lookup(ingress.value, "protocol", null)
       cidr_blocks      = lookup(ingress.value, "cidr_blocks", null)
     }
   }
 
   dynamic "egress" {
     for_each = var.egress-rules
     content {
       description      = lookup(egress.value, "description", null)
       from_port        = lookup(egress.value, "from_port", null)
       to_port          = lookup(egress.value, "to_port", null)
       protocol         = lookup(egress.value, "protocol", null)
       cidr_blocks      = lookup(egress.value, "cidr_blocks", null)
     }
   }
 }

You can find the complete code in the gist below:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}




provider "aws" {
  profile = "Encite"
  region  = "us-east-1"
}


provider "aws" {
  alias = "eu"
  profile = "Encite"
  region  = "eu-central-1"
}


provider "aws" {
  alias = "ap"
  profile = "Encite"
  region  = "ap-northeast-1"
}






resource "aws_vpc" "custom-vpc-1" {
  provider = aws
  cidr_block = "10.0.0.0/16"
  enable_dns_support = true
  enable_dns_hostnames = true
}


resource "aws_vpc" "custom-vpc-2" {
  provider = aws.eu
  cidr_block = "10.0.0.0/16"
  enable_dns_support = true
  enable_dns_hostnames = true
}


resource "aws_vpc" "custom-vpc-3" {
  provider = aws.ap
  cidr_block = "10.0.0.0/16"
  enable_dns_support = true
  enable_dns_hostnames = true
}






resource "aws_subnet" "custom-subnet-1" {
  provider = aws
  cidr_block = "10.0.1.0/24"
  vpc_id = aws_vpc.custom-vpc-1.id
  map_public_ip_on_launch = true
}


resource "aws_subnet" "custom-subnet-2" {
  provider = aws.eu
  cidr_block = "10.0.1.0/24"
  vpc_id = aws_vpc.custom-vpc-2.id
  map_public_ip_on_launch = true
}


resource "aws_subnet" "custom-subnet-3" {
  provider = aws.ap
  cidr_block = "10.0.1.0/24"
  vpc_id = aws_vpc.custom-vpc-3.id
  map_public_ip_on_launch = true
}






resource "aws_internet_gateway" "custom-igw-1" {
  provider = aws
  vpc_id = aws_vpc.custom-vpc-1.id
}


resource "aws_internet_gateway" "custom-igw-2" {
  provider = aws.eu
  vpc_id = aws_vpc.custom-vpc-2.id
}


resource "aws_internet_gateway" "custom-igw-3" {
  provider = aws.ap
  vpc_id = aws_vpc.custom-vpc-3.id
}






resource "aws_route_table" "custom-rt1" {
  provider = aws
  vpc_id = aws_vpc.custom-vpc-1.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.custom-igw-1.id
  }
}


resource "aws_route_table" "custom-rt2" {
  provider = aws.eu
  vpc_id = aws_vpc.custom-vpc-2.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.custom-igw-2.id
  }
}


resource "aws_route_table" "custom-rt3" {
  provider = aws.ap
  vpc_id = aws_vpc.custom-vpc-3.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.custom-igw-3.id
  }
}






resource "aws_route_table_association" "custom-rt-association-1" {
  provider = aws
  route_table_id = aws_route_table.custom-rt1.id
  subnet_id = aws_subnet.custom-subnet-1.id
}


resource "aws_route_table_association" "custom-rt-association-2" {
  provider = aws.eu
  route_table_id = aws_route_table.custom-rt2.id
  subnet_id = aws_subnet.custom-subnet-2.id
}


resource "aws_route_table_association" "custom-rt-association-3" {
  provider = aws.ap
  route_table_id = aws_route_table.custom-rt3.id
  subnet_id = aws_subnet.custom-subnet-3.id
}


variable "ingress-rules" {
  default = {
    "http-ingress" = {
      description = "For HTTP"
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    },
    "ssh-ingress" = {
      description = "For SSH"
      from_port   = 22
      to_port     = 22
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
  type = map(object({
    description = string
    from_port   = number
    to_port     = number
    protocol    = string
    cidr_blocks = list(string)
  }))
}


variable "egress-rules" {
  default = {
    "all-egress" = {
      description = "All"
      from_port   = 0
      to_port     = 0
      protocol    = "-1"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
  type = map(object({
    description = string
    from_port   = number
    to_port     = number
    protocol    = string
    cidr_blocks = list(string)
  }))
}


resource "aws_security_group" "custom-sg1" {
  provider = aws
  name = "allow_http_and_ssh"
  description = "Allow HTTP and SSH traffic"
  vpc_id = aws_vpc.custom-vpc-1.id


  dynamic "ingress" {
    for_each = var.ingress-rules
    content {
      description      = lookup(ingress.value, "description", null)
      from_port        = lookup(ingress.value, "from_port", null)
      to_port          = lookup(ingress.value, "to_port", null)
      protocol         = lookup(ingress.value, "protocol", null)
      cidr_blocks      = lookup(ingress.value, "cidr_blocks", null)
    }
  }


  dynamic "egress" {
    for_each = var.egress-rules
    content {
      description      = lookup(egress.value, "description", null)
      from_port        = lookup(egress.value, "from_port", null)
      to_port          = lookup(egress.value, "to_port", null)
      protocol         = lookup(egress.value, "protocol", null)
      cidr_blocks      = lookup(egress.value, "cidr_blocks", null)
    }
  }
}


resource "aws_security_group" "custom-sg2" {
  provider = aws.eu
  name = "allow_http_and_ssh"
  description = "Allow HTTP and SSH traffic"
  vpc_id = aws_vpc.custom-vpc-2.id


  dynamic "ingress" {
    for_each = var.ingress-rules
    content {
      description      = lookup(ingress.value, "description", null)
      from_port        = lookup(ingress.value, "from_port", null)
      to_port          = lookup(ingress.value, "to_port", null)
      protocol         = lookup(ingress.value, "protocol", null)
      cidr_blocks      = lookup(ingress.value, "cidr_blocks", null)
    }
  }


  dynamic "egress" {
    for_each = var.egress-rules
    content {
      description      = lookup(egress.value, "description", null)
      from_port        = lookup(egress.value, "from_port", null)
      to_port          = lookup(egress.value, "to_port", null)
      protocol         = lookup(egress.value, "protocol", null)
      cidr_blocks      = lookup(egress.value, "cidr_blocks", null)
    }
  }
}


resource "aws_security_group" "custom-sg3" {
  provider = aws.ap
  name = "allow_http_and_ssh"
  description = "Allow HTTP and SSH traffic"


  dynamic "ingress" {
    for_each = var.ingress-rules
    content {
      description      = lookup(ingress.value, "description", null)
      from_port        = lookup(ingress.value, "from_port", null)
      to_port          = lookup(ingress.value, "to_port", null)
      protocol         = lookup(ingress.value, "protocol", null)
      cidr_blocks      = lookup(ingress.value, "cidr_blocks", null)
    }
  }


  dynamic "egress" {
    for_each = var.egress-rules
    content {
      description      = lookup(egress.value, "description", null)
      from_port        = lookup(egress.value, "from_port", null)
      to_port          = lookup(egress.value, "to_port", null)
      protocol         = lookup(egress.value, "protocol", null)
      cidr_blocks      = lookup(egress.value, "cidr_blocks", null)
    }
  }
}






resource "aws_key_pair" "custom-kp1" {
  provider = aws
  key_name = "terraform-keys2"
  public_key = "ssh-rsa XT3RdHf7oZcdjPjf0OYSvDHk/WNMvkjF0FMoW+RBtakDyMLFJxIlXqI3lAiwk173n65AlBn1gQ3hgpMT/IgTZLJg6EluyjgL4heyVRuAedh3dBjNHkucRKSCfcNQVFVIeJbAbWG0JJbmVwLIa/JWY+YPyXlYpSqCwCicRcZXea1e6p+TX5GZvKn+MO/rIFRIXbEFIPDIV1nEivj7HW4hADLTIPA1CjGAqaVbqr65Xr4sbpDl0KvDha+uPjueMjKOV93A6a/RUIP5EftZ40cIR2oqu7GH677R5f19GtK6yHfUBlzCbclBlVnrMYWEEBFiG3dIQv55cDs97u9iyeSLqcde2OX4ZEhjb5PH7YOtG8AS0qbu1Y70RG2UgDa3Bv5AcT673mw0ab3kXtUjng1d05eC6pA+voW5jxV/g4a3ESlGtnD029jpfl6vaz53cjL4ml+JXRRgnBVMb= x@Nasis-MacBook-Pro.local"
}


resource "aws_key_pair" "custom-kp2" {
  provider = aws.eu
  key_name = "terraform-keys2"
  public_key = "ssh-rsa XT3RdHf7oZcdjPjf0OYSvDHk/WNMvkjF0FMoW+RBtakDyMLFJxIlXqI3lAiwk173n65AlBn1gQ3hgpMT/IgTZLJg6EluyjgL4heyVRuAedh3dBjNHkucRKSCfcNQVFVIeJbAbWG0JJbmVwLIa/JWY+YPyXlYpSqCwCicRcZXea1e6p+TX5GZvKn+MO/rIFRIXbEFIPDIV1nEivj7HW4hADLTIPA1CjGAqaVbqr65Xr4sbpDl0KvDha+uPjueMjKOV93A6a/RUIP5EftZ40cIR2oqu7GH677R5f19GtK6yHfUBlzCbclBlVnrMYWEEBFiG3dIQv55cDs97u9iyeSLqcde2OX4ZEhjb5PH7YOtG8AS0qbu1Y70RG2UgDa3Bv5AcT673mw0ab3kXtUjng1d05eC6pA+voW5jxV/g4a3ESlGtnD029jpfl6vaz53cjL4ml+JXRRgnBVMb= x@Nasis-MacBook-Pro.local"
}


resource "aws_key_pair" "custom-kp3" {
  provider = aws.ap
  key_name = "terraform-keys2"
  public_key = "ssh-rsa XT3RdHf7oZcdjPjf0OYSvDHk/WNMvkjF0FMoW+RBtakDyMLFJxIlXqI3lAiwk173n65AlBn1gQ3hgpMT/IgTZLJg6EluyjgL4heyVRuAedh3dBjNHkucRKSCfcNQVFVIeJbAbWG0JJbmVwLIa/JWY+YPyXlYpSqCwCicRcZXea1e6p+TX5GZvKn+MO/rIFRIXbEFIPDIV1nEivj7HW4hADLTIPA1CjGAqaVbqr65Xr4sbpDl0KvDha+uPjueMjKOV93A6a/RUIP5EftZ40cIR2oqu7GH677R5f19GtK6yHfUBlzCbclBlVnrMYWEEBFiG3dIQv55cDs97u9iyeSLqcde2OX4ZEhjb5PH7YOtG8AS0qbu1Y70RG2UgDa3Bv5AcT673mw0ab3kXtUjng1d05eC6pA+voW5jxV/g4a3ESlGtnD029jpfl6vaz53cjL4ml+JXRRgnBVMb= x@Nasis-MacBook-Pro.local"
}






resource "aws_instance" "custom-ec2-1" {
  provider = aws
  ami = "ami-02354e95b39ca8dec"
  instance_type = "t2.micro"
  key_name = aws_key_pair.custom-kp1.key_name
  subnet_id = aws_subnet.custom-subnet-1.id
  security_groups = [aws_security_group.custom-sg1.id]
  user_data = ""
}


resource "aws_instance" "custom-ec2-2" {
  provider = aws.eu
  ami = "ami-0c115dbd34c69a004"
  instance_type = "t2.micro"
  key_name = aws_key_pair.custom-kp2.key_name
  subnet_id = aws_subnet.custom-subnet-2.id
  security_groups = [aws_security_group.custom-sg2.id]
  user_data = ""
}


resource "aws_instance" "custom-ec2-3" {
  provider = aws.ap
  ami = "ami-0cc75a8978fbbc969"
  instance_type = "t2.micro"
  key_name = aws_key_pair.custom-kp3.key_name
  subnet_id = aws_subnet.custom-subnet-3.id
  security_groups = [aws_security_group.custom-sg3.id]
  user_data = ""
}

Now you can use the terraform plan command to check the changes that will be applied on top of the single-region infrastructure you already have. Once you confirm that everything looks good, you can proceed with terraform apply to provision the remaining resources.
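
To verify the instances without clicking through three regional consoles, you can optionally declare outputs for their public IPs, which Terraform prints after every apply. This block is an optional addition and not part of the gist above:

output "instance-public-ips" {
  description = "Public IPs of the EC2 instances per region"
  value = {
    us = aws_instance.custom-ec2-1.public_ip
    eu = aws_instance.custom-ec2-2.public_ip
    ap = aws_instance.custom-ec2-3.public_ip
  }
}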

Destroy the Infrastructure

Destroying the infrastructure is as easy as it gets. You can simply do it with the terraform destroy command, which will destroy all the provisioned resources.
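
A typical run looks like the sketch below; Terraform lists everything it is about to delete (the resource count here is illustrative) and asks for an explicit confirmation:

$ terraform destroy
...
Plan: 0 to add, 0 to change, 21 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes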

I hope you enjoyed this article and that it helps you get started with Terraform for managing your cloud infrastructure. Stay tuned for more Terraform articles.

Take care.

Translated from: https://medium.com/better-programming/provision-your-cloud-infrastructure-with-terraform-7b581b8fe38f
