How to Build a Complete Back-End System with Serverless

This article will teach you how to build and deploy everything you need to be able to build a back-end for your application. We'll be using AWS to host all of this and deploying it all using the Serverless Framework.

By the end of this article you'll know how to:

  • set up the Serverless Framework to work with your AWS account
  • deploy Lambda functions
  • deploy an S3 bucket and sync files up to it
  • create APIs with Lambda and API Gateway
  • create a DynamoDB table and read and write to it from your APIs
  • store and retrieve files in S3 through an API
  • secure your endpoints with API keys

Being able to do all these things gives you the ability to create all the functionality needed from most application back ends.

Serverless Setup with AWS

The Serverless Framework is a tool that we can use as developers to configure and deploy services from our computers. There's a bit of setup to allow all of this to work together and this section will show you how to do that.

To allow Serverless to do work on your account, you need to set up a user for it. To do this, navigate into AWS and search for "IAM" (Identity and Access Management).

Once on the IAM Page, click on Users in the list on the left hand side. This will open the list of users on your account. From here we'll be clicking Add user.

We need to create a user which has Programmatic access and I've called my user ServerlessAccount, but the name doesn't matter too much.

Next, we need to give the user some permissions. When on the permissions screen, select Attach existing policies directly and then select AdministratorAccess. This will give the Serverless Framework permission to create all the resources it needs to.

We don't need to add any tags, so we can move straight onto Review.

On the review window, you'll see the user has been given an Access key ID and a Secret access key. We'll be needing those in the next part so keep this page open.

Serverless Install and Configuration

Now that we've created our user, we need to install the Serverless Framework on our machine.

Open up a terminal and run this command to install Serverless globally on your computer. If you haven't got NodeJS installed check out this page.

npm install -g serverless

Now that we've got Serverless installed, we need to set up the credentials for Serverless to use. Run this command, putting your access key ID and Secret access key in the correct places:

serverless config credentials --provider aws --key ${Your access key ID} --secret ${Your secret access key} --profile serverlessUser

Once this has been run, you're all set up with Serverless.

Deploying Your First AWS Lambda

With our serverlessUser set up, we want to deploy something using the Serverless Framework. We can use a Serverless template to set up a basic project that we can deploy. This will be the base for the whole of this Serverless project.

In your terminal we can create a Serverless project from a template. This command will create a NodeJS Serverless project in the myServerlessProject folder:

serverless create --template aws-nodejs --path myServerlessProject

If you now open the folder up in your code editor we can look at what we've created.

We've got two files worth talking about: handler.js and serverless.yml.

handler.js

This file is a function that will be uploaded as a Lambda function to your AWS account. Lambda functions are great and we'll use a lot more of them later on in the series.

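If you open up handler.js you'll see the template's example function. The exact contents vary between template versions, but it looks roughly like this:

'use strict';

module.exports.hello = async event => {
    return {
        statusCode: 200,
        body: JSON.stringify(
            {
                message: 'Go Serverless v1.0! Your function executed successfully!',
                input: event,
            },
            null,
            2
        ),
    };
};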

serverless.yml

This is a very important file for us. This is where all the configuration for our deployment goes. It tells Serverless what runtime to use, which account to deploy to, and what to deploy.

We need to make a change to this file so that our deployment works properly. In the provider object we need to add a new line of profile: serverlessUser. This tells Serverless to use the AWS credentials we created in the last section.

We can scroll down to functions and see that we have one function which is called hello and points to the function within the handler.js file. This means we will be deploying this Lambda function as part of this project.

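Putting this together, and ignoring the template's commented-out examples, the relevant parts of our serverless.yml now look something like this (your service name and runtime may differ depending on the template version):

service: myserverlessproject

provider:
    name: aws
    runtime: nodejs10.x
    profile: serverlessUser

functions:
    hello:
        handler: handler.hello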

We'll learn a lot more about this serverless.yml file later on in this article.

Deploying Our Project

Now that we've looked at the files it's time to do our first deployment. Open up a terminal and navigate to our project folder. Deploying is as simple as typing:

serverless deploy

This takes a while, but when it's done we can check that everything has deployed successfully.

Open up your browser and navigate to your AWS account. Search for Lambda and you'll see a list of all your Lambda functions. (If you don't see any then check that your region is set to N. Virginia). You should see the myserverlessproject-dev-hello Lambda which contains the exact code that is in the handler.js file in your project folder.

Deploying an S3 Bucket and Uploading Files

In this section we're going to learn how we can deploy an Amazon S3 bucket and then sync up files from our computer. This is how we can start using S3 as cloud storage for our files.

Open up the serverless.yml file and remove all the commented out lines. Scroll to the bottom of the file and add the following code to include our S3 resources:

resources:
    Resources:
        DemoBucketUpload:
            Type: AWS::S3::Bucket
            Properties:
                BucketName: EnterAUniqueBucketNameHere

Change the name of the bucket and we're ready to deploy again. Open up your terminal again and run serverless deploy. You may get an error saying that the bucket name is not unique, in which case you'll need to change the bucket name, save the file and rerun the command.

If it is successful we can then go and see our new S3 bucket in our AWS Console through our browser. Search for S3 and then you should see your newly created bucket.

Syncing Up Your Files

Having a bucket is great, but now we need to put files in the bucket. We're going to be using a Serverless plugin called S3 Sync to do this for us. To add this plugin to our project we need to define the plugins. After your provider object, add this code:

plugins:
    - serverless-s3-sync

This plugin also needs some custom configuration, so we add another field to our serverless.yml file, changing out the bucket name for yours:

custom:
    s3Sync:
        - bucketName: YourUniqueBucketName
          localDir: UploadData

This section of code is telling the S3 Sync plugin to upload the contents of the UploadData folder to our bucket. We don't currently have that folder so we need to create it and add some files. You can add a text file, an image, or whatever you want to be uploaded, just make sure there is at least 1 file in the folder.

The last thing we need to do is to install the plugin. Luckily, all Serverless plugins are also npm packages, so we can install it by running npm install --save-dev serverless-s3-sync in our terminal.

As we've done before, we can now run serverless deploy and wait for the deployment to complete. Once it is complete we can go back into our browser and into our bucket and we should see all the files that we put in the UploadData folder in our project.

Creating an API with Lambda and API Gateway

In this section we'll learn to do one of the most useful things with Serverless: create an API. Creating an API allows you to do so many things, from getting data out of databases or S3 storage to hitting other APIs, and much more!

To create the API we first need to create a new Lambda function to handle the request. We're going to make a few Lambdas, so we're going to create a lambdas folder in our project with two subfolders, common and endpoints.

Inside the endpoints folder we can add a new file called getUser.js. This API is going to allow someone to make a request and get back data based on the ID of a user. This is the code for the API:

const Responses = require('../common/API_Responses');

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.ID) {
        // failed without an ID
        return Responses._400({ message: 'missing the ID from the path' });
    }

    let ID = event.pathParameters.ID;

    if (data[ID]) {
        // return the data
        return Responses._200(data[ID]);
    }

    //failed as ID not in the data
    return Responses._400({ message: 'no ID in data' });
};

const data = {
    1234: { name: 'Anna Jones', age: 25, job: 'journalist' },
    7893: { name: 'Chris Smith', age: 52, job: 'teacher' },
    5132: { name: 'Tom Hague', age: 23, job: 'plasterer' },
};

If the request doesn't contain an ID then we return a failed response. If there is data for that ID then we return that data. If there isn't data for that user ID then we also return a failure response.

As you may have noticed we are requiring in the Responses object from API_Responses. These responses are going to be common to every API that we make, so making this code importable is a smart move. Create a new file called API_Responses.js in the common folder and add this code:

const Responses = {
    _200(data = {}) {
        return {
            headers: {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Methods': '*',
                'Access-Control-Allow-Origin': '*',
            },
            statusCode: 200,
            body: JSON.stringify(data),
        };
    },

    _400(data = {}) {
        return {
            headers: {
                'Content-Type': 'application/json',
                'Access-Control-Allow-Methods': '*',
                'Access-Control-Allow-Origin': '*',
            },
            statusCode: 400,
            body: JSON.stringify(data),
        };
    },
};

module.exports = Responses;

This set of functions is used to simplify creating the correctly formatted response needed when using a Lambda with API Gateway (which we'll do in a second). The methods add headers and a status code, and stringify any data that needs to be returned.

Now that we have the code for our API, we need to set it up in our serverless.yml file. Scroll to the functions section of the serverless.yml file. In the last part of this guide we deployed the hello function, but we no longer need that. Delete the functions object and replace it with this:

functions:
    getUser:
        handler: lambdas/endpoints/getUser.handler
        events:
            - http:
                  path: get-user/{ID}
                  method: GET
                  cors: true

This code is creating a new Lambda function called getUser, whose handler is the handler method exported from the lambdas/endpoints/getUser.js file. We then define the events that can trigger this Lambda function to run.

To make a Lambda into an API we can add an http event. This tells Serverless to add an API Gateway to this account, and then we can define the API endpoint using path. In this case get-user/{ID} means the URL will be https://${something-provided-by-API-Gateway}/get-user/{ID}, where the ID is passed into the Lambda as a path parameter. We also set the method to GET and enable CORS so that we could access this endpoint from a front-end application if we wanted.

We can now deploy again, and this time we can use the shorthand command sls deploy. This only saves a few characters, but helps avoid a lot of typos. When this is completed we'll get an output that also includes a list of endpoints. We can copy our endpoint and head over to a browser to test it out.

If we paste our API URL into our browser and then add an ID of 5132 to the end, we should get back a response of { name: 'Tom Hague', age: 23, job: 'plasterer' }. If we enter a different ID such as 1234 we'll get different data, but entering an ID of 7890 or not entering an ID will return an error.

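For example, with a hypothetical API Gateway URL (yours will be different), the same request from the command line would look something like this:

curl https://abc123.execute-api.us-east-1.amazonaws.com/dev/get-user/5132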

If we want to add more data to our API, we can simply add a new entry to the data object in the getUser.js file. We can then run a special command which only deploys one function, sls deploy -f ${functionName}. So for us that is:

sls deploy -f getUser

If you now make a request using the ID of the new data, the API will return that new data instead of an error.

Creating a Database on AWS

DynamoDB is a fully hosted, non-relational database on AWS. This is the perfect solution for storing data that you need to access and update regularly. In this section we're going to learn how we can create a DynamoDB table with Serverless.

In our serverless.yml file we're going to add some configuration to the Resources section:

resources:
    Resources:
        DemoBucketUpload:
            Type: AWS::S3::Bucket
            Properties:
                BucketName: ${self:custom.bucketName}
        # New Code
        MyDynamoDbTable:
            Type: AWS::DynamoDB::Table
            Properties:
                TableName: ${self:custom.tableName}
                AttributeDefinitions:
                    - AttributeName: ID
                      AttributeType: S
                KeySchema:
                    - AttributeName: ID
                      KeyType: HASH
                BillingMode: PAY_PER_REQUEST

In this code we can see that we are creating a new DynamoDB table with a TableName of ${self:custom.tableName}, defining an attribute of ID and setting the billing mode to pay per request.

This is our first look at the use of variables in our serverless.yml file. We can use variables for a few reasons and they can make our jobs much easier. In this case, we're referencing the variable custom.tableName. We can then reference this variable from multiple locations without having to copy and paste the table name. To get this to work we also need to add tableName to the custom section. In our case we're going to add the line tableName: player-points to create a table to store the points a player has. This table name only needs to be unique to your account.

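Our custom section might now look something like this. Note that the bucket name from earlier has also been pulled into a bucketName variable, which the resource snippet above references as ${self:custom.bucketName}:

custom:
    tableName: player-points
    bucketName: YourUniqueBucketName
    s3Sync:
        - bucketName: ${self:custom.bucketName}
          localDir: UploadData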

When defining a table you need to define at least one of the fields which will be your unique identifying field. Because DynamoDB is a non-relational database, you don't need to define the full schema. In our case we've defined the ID, stating that it has an attribute type of string and a key type of HASH.

The last part of the definition is the billing mode. There are two ways to pay for DynamoDB:

  • pay per request
  • provisioned resources

Provisioned resources lets you define how much data you're going to be reading from and writing to the table. The issues with this are that if you start using more than you provisioned, your requests get throttled, and that you pay for the resources even if no one is using them.

Pay per request is much simpler, as you just pay for each request. This means that if no one is using your table then you pay nothing, and if you have hundreds of people using it at once, all the requests work. For this added flexibility you pay slightly more per request, but in the long run it usually works out to be cheaper.

Once we've run sls deploy again we can open up our AWS console and search for DynamoDB. We should be able to see our new table and we can see that there is nothing in there.

To add data to the table, click Create item, give it a unique ID, then click the plus button, choose Append, and select a type of String. Give the new field a name of name and a value of Jess. In the same way, append a Number field of score set to 12. Click Save and you now have data in your Dynamo table.

Getting Data from Your DynamoDB Table

Now that we have our Dynamo table created, we want to be able to get and add data to the table. We're going to start with getting data from the table with a get endpoint.

We're going to create a new file in our endpoints folder called getPlayerScore.js. This Lambda endpoint is going to handle the requests for a user and get that data from the Dynamo table.

const Responses = require('../common/API_Responses');
const Dynamo = require('../common/Dynamo');

const tableName = process.env.tableName;

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.ID) {
        // failed without an ID
        return Responses._400({ message: 'missing the ID from the path' });
    }

    let ID = event.pathParameters.ID;

    const user = await Dynamo.get(ID, tableName).catch(err => {
        console.log('error in Dynamo Get', err);
        return null;
    });

    if (!user) {
        return Responses._400({ message: 'Failed to get user by ID' });
    }

    return Responses._200({ user });
};

The code used here is very similar to the code inside the getUser.js file. We are checking that a path parameter of ID exists, getting the user data, and then returning the user. The main difference is how we are getting the user.

We have imported the Dynamo function object and are calling Dynamo.get. We're passing in the ID and the table name and then catching any errors. We now need to create that Dynamo function object in a new file called Dynamo.js in the common folder.

const AWS = require('aws-sdk');

const documentClient = new AWS.DynamoDB.DocumentClient();

const Dynamo = {
    async get(ID, TableName) {
        const params = {
            TableName,
            Key: {
                ID,
            },
        };

        const data = await documentClient.get(params).promise();

        if (!data || !data.Item) {
            throw Error(`There was an error fetching the data for ID of ${ID} from ${TableName}`);
        }
        console.log(data);

        return data.Item;
    },
};
module.exports = Dynamo;

Reading and writing to Dynamo requires a reasonable amount of code. We could write that code every time we want to use Dynamo but it is much cleaner to have functions to simplify the process for us.

The file first imports AWS and then creates an instance of the DynamoDB Document Client. The document client is the easiest way for us to work with Dynamo from our Lambdas. We create a Dynamo object with an async get function. The only things we need to make a request are an ID and a table name. We format those into the correct parameter format for the DocumentClient, await a documentClient.get request, and make sure that we add .promise() to the end. This turns the request from a callback to a promise which is much easier to work with. We check that we managed to get an item from Dynamo and then we return that item.

Now that we have all the code we need, we also have to update our serverless.yml file. The first thing to do is add our new API endpoint to our list of functions.

getPlayerScore:
        handler: lambdas/endpoints/getPlayerScore.handler
        events:
            - http:
                  path: get-player-score/{ID}
                  method: GET
                  cors: true

There are two more changes that we need to make to get our endpoint working:

  • environment variables
  • permissions

You may have noticed in the getPlayerScore.js file we had a line of code like this:

const tableName = process.env.tableName;

This is where we are getting the table name from the environment variables of the Lambda. To create our Lambda with the correct environment variables, we need to set a new object in the provider called environment with a field of tableName and a value of ${self:custom.tableName}. This will ensure that we are making the request to the correct table.

We also need to give our Lambdas permissions to access Dynamo. We have to add another field to the provider called iamRoleStatements. This has an array of policies which can allow or disallow access to certain services or resources:

provider:
    name: aws
    runtime: nodejs10.x
    profile: serverlessUser
    region: eu-west-1
    environment:
        tableName: ${self:custom.tableName}
    iamRoleStatements:
        - Effect: Allow
          Action:
              - dynamodb:*
          Resource: '*'

As all this has been added to the provider object, it will be applied to all Lambdas.

We can now run sls deploy again to deploy our new endpoint. When that is done we should get an output with a new endpoint of https://${something-provided-by-API-Gateway}/get-player-score/{ID}. If we copy that URL into a browser tab and add the ID of the player that we created in the last section, we should get a response.

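Assuming the item you created earlier was given a hypothetical ID of 0001, the response body should look something like this:

{
    "user": {
        "ID": "0001",
        "name": "Jess",
        "score": 12
    }
}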

Adding New Data to DynamoDB

Being able to get data from Dynamo is cool, but it's quite useless if we can't also add new data to the table. We're going to be creating a POST endpoint to create new data in our Dynamo table.

Start by creating a new file in our endpoints folder called createPlayerScore.js and adding this code:

const Responses = require('../common/API_Responses');
const Dynamo = require('../common/Dynamo');

const tableName = process.env.tableName;

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.ID) {
        // failed without an ID
        return Responses._400({ message: 'missing the ID from the path' });
    }

    let ID = event.pathParameters.ID;
    const user = JSON.parse(event.body);
    user.ID = ID;

    const newUser = await Dynamo.write(user, tableName).catch(err => {
        console.log('error in dynamo write', err);
        return null;
    });

    if (!newUser) {
        return Responses._400({ message: 'Failed to write user by ID' });
    }

    return Responses._200({ newUser });
};

This code is very similar to the getPlayerScore code with a few changes. We are getting the user from the body of the request, adding the ID to the user, and then passing that to a Dynamo.write function. We need to parse the event body as API Gateway stringifies it before passing it to the Lambda.

We now need to modify the common Dynamo.js file to add the .write method. This performs very similar steps to the .get function and returns the newly created data:

async write(data, TableName) {
        if (!data.ID) {
            throw Error('no ID on the data');
        }

        const params = {
            TableName,
            Item: data,
        };

        const res = await documentClient.put(params).promise();

        if (!res) {
            throw Error(`There was an error inserting ID of ${data.ID} in table ${TableName}`);
        }

        return data;
    }

We've created the endpoint and common code, so the last thing we need to do is modify the serverless.yml file. As we added the environment variable and permissions in the last section, we just need to add the function and API configuration. This endpoint is different from the previous two because the method is POST instead of GET:

createPlayerScore:
        handler: lambdas/endpoints/createPlayerScore.handler
        events:
            - http:
                  path: create-player-score/{ID}
                  method: POST
                  cors: true

Deploying this with sls deploy will now create three endpoints, including our create-player-score endpoint. Testing a POST endpoint is more complex than a GET request, but luckily there are tools to help us out. I use Postman to test all my endpoints as it makes testing quick and easy.

Create a new request and paste in your create-player-score URL. You need to change the request type to POST and set the ID at the end of the URL. Because we're doing a POST request we can send up data within the body of the request. Click body then raw and select JSON as the body type. You can then add the data that you want to put into your table. When you click Send, you should get a successful response:

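If you prefer the command line, an equivalent request with a hypothetical URL, ID, and body would look something like this:

curl -X POST \
    -H 'Content-Type: application/json' \
    -d '{"name": "Sam", "score": 42}' \
    https://abc123.execute-api.us-east-1.amazonaws.com/dev/create-player-score/0002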

To validate that your data has been added to the table, you can make a get-player-score request with the ID of the new data you just created. You can also go into the Dynamo console and look at all the items in the table.

Creating S3 GET and POST Endpoints

Dynamo is a brilliant database storage solution, but sometimes it isn't the best storage solution. If you've got data that isn't going to change and you want to save some money, or if you want to store files other than JSON, then you might want to consider Amazon S3.

Creating endpoints to get and create files in S3 is very similar to DynamoDB. We need to create two endpoint files, a common S3 file, and modify the serverless.yml file.

We're going to start with adding a file to S3. Create a createFile.js file in the endpoints folder and add this code:

const Responses = require('../common/API_Responses');
const S3 = require('../common/S3');

const bucket = process.env.bucketName;

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.fileName) {
        // failed without an fileName
        return Responses._400({ message: 'missing the fileName from the path' });
    }

    let fileName = event.pathParameters.fileName;
    const data = JSON.parse(event.body);

    const newData = await S3.write(data, fileName, bucket).catch(err => {
        console.log('error in S3 write', err);
        return null;
    });

    if (!newData) {
        return Responses._400({ message: 'Failed to write data by filename' });
    }

    return Responses._200({ newData });
};

This code is almost identical to the createPlayerScore.js code, but uses a filename instead of an ID and S3.write instead of Dynamo.write.

Now we need to create our S3 common code to simplify requests made to S3:

const AWS = require('aws-sdk');
const s3Client = new AWS.S3();

const S3 = {
    async write(data, fileName, bucket) {
        const params = {
            Bucket: bucket,
            Body: JSON.stringify(data),
            Key: fileName,
        };
        const newData = await s3Client.putObject(params).promise();
        if (!newData) {
            throw Error('there was an error writing the file');
        }
        return newData;
    },
};
module.exports = S3;

Again, the code in this file is very similar to the code in Dynamo.js, with a few differences around the parameters for the request.

The last thing we need to do for writing to S3 is change the serverless.yml file. We need to do four things: add environment variables, add permissions, add the function, and add an S3 bucket.

In the provider we can add a new environment variable of bucketName: ${self:custom.s3UploadBucket}.

To add permission to read and write to S3 we can add a new permission to the existing policy. Straight after - dynamodb:* we can add the line - s3:*.

Adding the function is the same as we've been doing with all our other functions. Make sure that the path has a parameter of fileName as that is what you are checking for in your endpoint code:

createFile:
        handler: lambdas/endpoints/createFile.handler
        events:
            - http:
                  path: create-file/{fileName}
                  method: POST
                  cors: true

Lastly we need to create a new bucket to upload these files into. In the custom section we need to add a new field s3UploadBucket and set it to a unique bucket name. We also need to configure the resource. After the Dynamo table config, we can add this to create a new bucket for our file uploads:

s3UploadBucket:
            Type: AWS::S3::Bucket
            Properties:
                BucketName: ${self:custom.s3UploadBucket}
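
By this point the custom section might look something like this, where both bucket names are placeholders you'd replace with your own unique names:

custom:
    tableName: player-points
    bucketName: YourUniqueBucketName
    s3UploadBucket: YourUniqueUploadBucketName
    s3Sync:
        - bucketName: ${self:custom.bucketName}
          localDir: UploadData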

With this set up it is time to deploy again. Running sls deploy again will deploy the new upload bucket as well as the S3 write endpoint. To test the write endpoint, we'll need to head back over to Postman.

Copy the create-file URL that you get when Serverless has completed the deployment, paste it into Postman, and change the request type to POST. Next, we need to add the filename that we are uploading to the end of the URL. In our case we're going to be uploading car.json. The last thing we need to do is add the data to the request. Select Body, then raw, and choose a type of JSON. You can add whatever JSON data you would like, but here's some example data:

{
	"model": "Ford Focus",
	"year": 2018,
	"colour": "red"
}

When you post this data up, you should get a 200 response with an ETag reference to the file. Going into the console and your new S3 bucket you should be able to see car.json.

Getting Data from S3

Now that we can upload data to S3, we want to be able to get it back too. We start by creating a getFile.js file inside the endpoints folder:

const Responses = require('../common/API_Responses');
const S3 = require('../common/S3');

const bucket = process.env.bucketName;

exports.handler = async event => {
    console.log('event', event);

    if (!event.pathParameters || !event.pathParameters.fileName) {
        // failed without an fileName
        return Responses._400({ message: 'missing the fileName from the path' });
    }

    const fileName = event.pathParameters.fileName;

    const file = await S3.get(fileName, bucket).catch(err => {
        console.log('error in S3 get', err);
        return null;
    });

    if (!file) {
        return Responses._400({ message: 'Failed to read data by filename' });
    }

    return Responses._200({ file });
};

This should look pretty similar to the GET endpoints we've created before. The differences are the use of the fileName path parameter, S3.get, and returning the file.

Inside the common S3.js file we need to add the get function. The main difference between this and getting from Dynamo is that when we get from S3, the result is not a JSON response but a Buffer. This means that if we upload a JSON file, it won't come back down in JSON format, so we check whether we're getting a JSON file and, if so, convert the Buffer back into a string:

async get(fileName, bucket) {
        const params = {
            Bucket: bucket,
            Key: fileName,
        };
        let data = await s3Client.getObject(params).promise();
        if (!data) {
            throw Error(`Failed to get file ${fileName}, from ${bucket}`);
        }
        if (fileName.slice(fileName.length - 4, fileName.length) == 'json') {
            data = data.Body.toString();
        }
        return data;
    }

Back in our serverless.yml file, we can add a new function and endpoint for getting files. We've already configured the permissions and environment variables:

getFile:
        handler: lambdas/endpoints/getFile.handler
        events:
            - http:
                  path: get-file/{fileName}
                  method: GET
                  cors: true

As we're creating a new endpoint we need to do a full deployment again with sls deploy. We can then take the new get-file endpoint and paste it into a browser or Postman. If we add car.json to the end of the request we'll receive the JSON data that we uploaded earlier in this section.

Securing Your Endpoints with API Keys

Being able to create API endpoints quickly and easily with Serverless is great for starting a project and creating a proof of concept. When it comes to creating a production version of your application, you need to start being more careful about who can access your endpoints. You don't want anybody being able to hit your APIs.

There are loads of ways to secure your APIs, and in this section we're going to be implementing API keys. If you don't pass the API key with the request then it fails with an unauthorised message. You can then control who you give the API keys to, and therefore who has access to your APIs.

You can also add usage policies to your API keys so that you can control how much each person uses your API. This allows you to create tiered usage plans for your service.

To start we're going to be creating a simple API Key. To do this we need to go into our serverless.yml file and add some configuration to the provider.

apiKeys:
    - myFirstAPIKey

This will create a new API key. Now we need to tell Serverless which API endpoints to protect with the API key. This has been done so that we can have some of the APIs protected, whilst some of them stay public. We specify that an endpoint needs to be protected by adding the option private: true:

getUser:
        handler: lambdas/endpoints/getUser.handler
        events:
            - http:
                  path: get-user/{ID}
                  method: GET
                  cors: true
                  private: true

You can then add this field to as many of your APIs as you would like. To deploy this we can run sls deploy again. When this completes, you will get back an API key in the return values. This is very important and we'll use it very soon. If you try and make a request to your get-user API you should get a 401 Unauthorised error.

To get the request to succeed, you now need to pass an API key in the headers of the request. To do this we need to use Postman or another API request tool and add a header to our GET request. We do this by selecting Authorisation and choosing the API Key type. The key needs to be X-API-KEY and the value is the key that you got as an output from your Serverless deploy:

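From the command line, the same request would look something like this, where the URL and the key value are placeholders:

curl -H 'X-API-KEY: YOUR_API_KEY_VALUE' \
    https://abc123.execute-api.us-east-1.amazonaws.com/dev/get-user/5132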

Now when we make the request we get a successful response. This means that the only people who can access your API are people who you have given your API key to.

This is great, but we can do more. We can add a usage policy to this API key. This is where we can limit the number of requests a month as well as the rate at which requests can be made. This is great for running a SaaS product, as you can provide an API key that gives users a set number of API calls.

To create a usage plan we need to add a new object in the provider. The quota section defines how many requests can be made using that API key. You can change the period to either DAY or WEEK if that would suit your application better.

The throttle section allows you to control how frequently your API endpoints can be hit. Adding a throttle rate limit sets a maximum number of requests per second. This is very useful as it stops people from setting up a denial of service attack. The burstLimit allows the API to be hit more often than your rateLimit but only for a short period of time, normally a few seconds:

usagePlan:
        quota:
            limit: 10
            period: MONTH
        throttle:
            burstLimit: 2
            rateLimit: 1

If we were to deploy this again, the deployment would fail as we would be trying to deploy the same API key. API keys need to be unique so we have to change the name of the API key. When we deploy this and copy our new API key into Postman, we'll be able to make requests as we normally would. If we try and make too many requests per second or reach the maximum number of requests then we'll get a 429 error of

{
    "message": "Limit Exceeded"
}

This means that you can't use this API key again until next month.

Whilst creating a single usage plan is great, you often want to give different people different levels of access to your services. You might give free users 100 requests per month whilst paying users get 1,000. You might want different payment plans which give different numbers of requests. You would also probably want a master API key for yourself which has unlimited requests!

To do this we can set up multiple groups of API keys that each have their own usage policy. We need to change the apiKeys and usagePlan sections:

apiKeys:
        - free:
              - MyAPIKey3
        - paid:
              - MyPaidKey3
usagePlan:
        - free:
              quota:
                  limit: 10
                  period: MONTH
              throttle:
                  burstLimit: 2
                  rateLimit: 1
        - paid:
              quota:
                  period: MONTH
                  limit: 1000
              throttle:
                  burstLimit: 20
                  rateLimit: 10

Once you've saved and deployed this you'll get two new API keys, each with a different level of access to your API endpoints.

Thanks for reading this guide! If you've found it useful, please subscribe to my Youtube channel where I release weekly videos on Serverless and software development.

Translated from: https://www.freecodecamp.org/news/complete-back-end-system-with-serverless/
