canitravelto

Can I travel to was born from my interest in developing, and learning how to develop, a full production-ready environment using the latest technologies (Go, Docker, AWS, CI/CD, microservices…).

When deciding what I wanted to develop, I was also planning a trip with some friends to the Netherlands and Belgium, and that’s when I noticed how confusing it was to know whether you could travel there, due to the world situation… I don’t know if you’ve heard about Covid-19 😂.

So, as any good engineer would do, I decided to build a service that tells the traveler which countries they can visit!

Its purpose is to let the user know if they can travel to a destination country from a country of origin. The functionality is partially working.

Among the web service’s functionalities: given the user’s passport, it shows whether they can travel to the destination visa-free, stay only for a specific number of days, or whether a visa is always required.

It also displays the number of Covid cases in the destination country, but the actual travel restrictions are not considered: to get that information I would have to insert it manually and keep it updated, or pay for a premium API. Another option would be to scrape the website of each country to find the information automatically, but I’m not that crazy/don’t have that much time (195 countries!).

It’s not meant to be a “usable/user-friendly” service, which is why the frontend may look a bit rough. The service has a simple static website frontend that consumes the backend through an API (TLS ready, so the website can be served over HTTPS).

Website: canitravelto.com

Github: https://github.com/marcllort/CanITravelTo_Backend

[Image: Website example]
[Image: Postman (API) example]

Now that you have a brief idea of what the project is about, let’s see how I coded each part of it!

In the following sections, I will explain each part/technology used in the project and how everything has been set up.

It gets a bit more technical, so in case you’re not interested in that, here are a few bullet points about the project I’d like you to know:

  • Golang: Google’s backend programming language.
  • Gin Gonic: Golang HTTP framework used to create the API.
  • Microservices: the project is split into separate “mini programs”, so its distinct parts are easier to maintain and independently deployable.
  • AWS (EC2): Amazon’s cloud platform, where the backend is deployed.
  • CI/CD: Continuous Integration means that each time a new version of the project is released, a collection of tests runs to ensure the latest version works properly. Continuous Delivery is the process of deploying that code to the cloud once we know the new version works.
  • Docker: a unit of software that packages up code so it can run easily on any platform/OS.
  • Kanban: Agile methodology.

Database

Running on an Amazon RDS MySQL free-tier instance (t2.micro) → basic/simple setup (hosted in Paris).

Steps to configure:

  • Set its configuration to publicly available.
  • In Security Groups, add two inbound rules (port 3306, MySQL): one for your development computer with your own IP, and another for the EC2 instance where the backend server is hosted. No need to set the IP, just the name of its security group/launch-wizard number.
  • The current dataset in the DB has the information from PassportIndex about the places you can travel to with your passport.
  • This dataset, which should be regularly updated, can be found on GitHub in CSV. Transform it to MySQL here. If it is updated and the list of countries has changed, the list in Countries.go must be changed as well.
  • Once the import script is prepared, just connect to the DB with DataGrip/Workbench and run the script.

The DB credentials should always be stored in the Creds folder of the GoLang backend, with the following format:

{
  "user": "admin",
  "hostname": "x.x.eu-west-3.rds.amazonaws.com",
  "port": "3306",
  "database": "db name"
}
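As a sketch of how the backend might consume this file, here is a minimal Go snippet that parses the same structure and builds the DSN that go-sql-driver/mysql expects. The struct and function names are my own, not taken from the repository:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Creds mirrors the JSON file stored in the Creds folder.
type Creds struct {
	User     string `json:"user"`
	Hostname string `json:"hostname"`
	Port     string `json:"port"`
	Database string `json:"database"`
}

// dsn builds the connection string in the format go-sql-driver/mysql expects.
func dsn(c Creds, password string) string {
	return fmt.Sprintf("%s:%s@tcp(%s:%s)/%s", c.User, password, c.Hostname, c.Port, c.Database)
}

func main() {
	raw := []byte(`{"user": "admin", "hostname": "x.x.eu-west-3.rds.amazonaws.com", "port": "3306", "database": "db name"}`)
	var c Creds
	if err := json.Unmarshal(raw, &c); err != nil {
		log.Fatal(err)
	}
	// sql.Open("mysql", dsn(c, password)) would then open the connection.
	fmt.Println(dsn(c, "secret"))
}
```

Note that the password is deliberately not part of the JSON file; in the real pipeline it is passed in separately (see the update script in the SSH section).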

EC2 Ubuntu

First, I’ll explain how to create the virtual machine where the backend will run; later, in the Backend GoLang section, I’ll explain the backend itself.

Hosted on a free-tier t2.micro Amazon EC2 instance, running the Ubuntu 18.04 default configuration (hosted in Paris).

When creating the instance, download the keypair.pem to be able to SSH into the machine.

The only configuration needed is in Security Groups: in the inbound rules, ports 22 (SSH), 80 (HTTP), 8080 (DEV) and 443 (HTTPS) should be open to “Anywhere”, i.e. 0.0.0.0/0.

To SSH into the AWS Ubuntu machine:

ssh -i keypair.pem ubuntu@[Public-DNS] (e.g. whatever.eu-west-3.compute.amazonaws.com)

Backend GoLang

The backend is written in Go, using the Gin framework for the HTTP requests. The connection to the MySQL DB is done with go-sql-driver. There are currently two microservices: one for data retrieval, and another handling the requests.

When working locally, you should connect to localhost:8080/travel. If testing against the hosted backend in EC2, use publicIP:8080/travel.

Libraries used:

  • Gin Gonic
  • go-sql-driver

Using the following commands, Go generates Go modules, which facilitate the download of the project’s different packages/dependencies.

go mod init     // Creates the file where dependencies are saved
go mod tidy     // Tidies and downloads the dependencies

API Usage

The request to the backend should always be a POST; this could be an example JSON body for the request:

{
  "destination": "Spain",
  "origin": "France"
}

The request must include an ‘X-Auth-Token’ header with the API key (for now the token is SUPER_SECRET_API_KEY, original, I know xD); if not, a 401 Unauthorized code is returned. To enforce the API key, a middleware is used, added to the “router”, so the auth check is done every time it receives a request.

The same endpoint is also implemented with GET, but it is not being used at the moment.

When running the Gin router I previously used Run, which serves HTTP. But since deploying, I use RunTLS, which serves HTTPS. In this case you need to provide two certificates, explained later in Domains and Cloudflare.

So far, the response time locally is about 1ms, while on AWS it is around 36ms. In both cases it has been stress tested with thousands of requests every 1ms without dropping a single request.

To stress test the backend, I used the Chrome extension RestfulStress.

CORS

Cross-Origin Resource Sharing (CORS) is a mechanism that uses additional HTTP headers to tell browsers to give a web application running at one origin, access to selected resources from a different origin. A web application executes a cross-origin HTTP request when it requests a resource that has a different origin (domain, protocol, or port) from its own.

In the backend, when responding to requests, the following headers must be added to the response so it complies with the CORS policy:

c.Header("Access-Control-Allow-Origin", "*")  
c.Header("Access-Control-Allow-Headers", "access-control-allow-origin, access-control-allow-headers")

“Preflighted” requests first send an HTTP request with the OPTIONS method to the resource on the other domain, to determine whether the actual request is safe to send. Cross-site requests are preflighted like this since they may have implications for user data (my case, as frontend and backend are hosted separately). If the request is a CORS preflight (OPTIONS request) and an API key is in use, we also add “X-Auth-Token” to the allowed headers, so the client knows that requests must contain an API key/token:

c.Header("Access-Control-Allow-Origin", "*")  	
c.Header("Access-Control-Allow-Headers", "access-control-allow-origin, access-control-allow-headers, X-Auth-Token")

The “Access-Control-Allow-Origin” header determines which origin/website can use the endpoint. I could configure it so the backend can only be used by canitravelto.com and my own IP; in that case, I should also include an extra header: “Vary: Origin”.

To avoid all this and not have to deal with certificates, HTTPS and so on, another solution would be to run the service behind Nginx, which would act as a “middleware” between the user and the backend and deal with the SSL certificates and their renewals.

Data Retriever

A microservice, coded in Go, responsible for updating the daily Covid data; in the future it will also handle other database-related functions. It uses a Go cronjob to update the data every day at 10:30 AM.

Business Handler

A microservice, coded in Go, responsible for handling all the API requests. It uses Gin Gonic to handle the endpoints in HTTPS mode, so the content can be served to the HTTPS frontend.

Docker

The different microservices are run with docker-compose on the EC2 AWS instance. The images are hosted in a private GitHub Docker registry for this project. I added an automation so older images are deleted once a month, or when a limit is reached. There are two Dockerfiles, one for each microservice, plus the docker-compose file to launch them together and create the “internal network” so they can communicate.

Performance-wise, the difference between the compiled binary and the Docker images has been negligible. Both are extremely fast, averaging around 53ms per request. The backend itself, from receiving the request to sending back the response, takes just 5 or 6ms.

[Image: Backend “logic” (Docker) response time: 5ms (POST/GET), 12µs (OPTIONS)]
[Image: Binary response time: 51ms]
[Image: Docker response time: 53ms]

Kubernetes

Even though it is REALLY overkill for this project, given the small number of visitors, I wanted to implement Kubernetes to handle the Docker containers. I haven’t been able to make it work on the production server (EC2 t2.micro instance) due to its limited resources: it makes the server unusable, always at 100% CPU and RAM usage.

I have a simple implementation of the project running in Kubernetes in my local environment; I’ll upload its configuration once it works properly.

Git

I’m using a mono-repo, as it lets me share the docker-compose file, readme, credentials… It also makes CI/CD much easier to deal with later, as there is only one Git repository to pull from. I also use the GitHub Projects feature, with the Kanban methodology, to organize the new “stories” I have to develop/fix.

[Image: Kanban board]

CI/CD

I’m using GitHub Actions, to keep everything centralized in GitHub. It uses a YML file, really similar to Bitbucket/GitLab or Jenkins.

So far, I have two pipelines, the CI (ci.yml) and CD (cd.yml).

In the CI pipeline, the steps are: build the two microservices, then run each microservice’s unit and integration/E2E tests. If it fails, it notifies me by email.

The unit tests are done with the vanilla Golang test methodology, similar to JUnit. The E2E tests are a collection of Postman calls/tests run in the pipeline with Newman (the CLI version of Postman). The tests are written in JavaScript; here is a simple example:

pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("No error messages", function () {
    var jsonData = pm.response.json();
    pm.expect(jsonData.error).to.eql("");
});

pm.test("Origin/Destination Correct", function () {
    var jsonData = pm.response.json();
    pm.expect(jsonData.origin).to.eql("France");
    pm.expect(jsonData.destination).to.eql("Spain");
});

In the CD pipeline, if the commit is on the master branch and is marked as a release, it will decrypt the credentials needed to build each microservice image (explained in the next section), build and upload the new Docker images (to GitHub’s Docker registry), and then SSH into the EC2 instance to stop the old containers and start the updated images with the latest changes.

Another solution could be building the Docker images on the server instead of directly in the pipeline. This would let me keep the credentials only on my server and avoid uploading an encrypted version to GitHub. If this were a long-term project or a business, it would probably be better to keep the creds only on your server; but then again, the repository is private, so security shouldn’t be a big problem… Make your choice! I decided to use both: building the image in the pipeline, and deploying through an SSH script.

If we are on another branch (other than master), only the CI pipeline runs (build and unit/integration tests).

[Image: CI (left) and CD (right) pipeline execution example]

SSH from the pipeline

To SSH into AWS, a PEM key file is needed. As uploading the key to GitHub would be insecure, I’m using a workaround (also used for the credentials in the CI/CD pipelines). I encrypted the PEM file (and uploaded it to GitHub) using:

gpg --quiet --batch --yes --symmetric --cipher-algo AES256 --passphrase="XXXX" --output BusinessHandler/Creds/key-aws.pem.gpg key-aws.pem

Then in the pipeline I decrypt the file using the passphrase (which is saved as a Secret environment variable), change the permissions and SSH (plus run the update script).

It is important to use -oStrictHostKeyChecking=no when SSHing from a script/pipeline, so it automatically accepts the ECDSA key.

gpg --quiet --batch --yes --decrypt --passphrase="$LARGE_SECRET_PASSPHRASE" --output key-aws.pem BusinessHandler/Creds/key-aws.pem.gpg
chmod 600 key-aws.pem
ssh -oStrictHostKeyChecking=no -i key-aws.pem ubuntu@ec2-35-180-85-2.eu-west-3.compute.amazonaws.com "chmod +x CanITravelTo_Backend/Documentation/Ubuntu/update.sh && exit"
ssh -oStrictHostKeyChecking=no -i key-aws.pem ubuntu@ec2-35-180-85-2.eu-west-3.compute.amazonaws.com "CanITravelTo_Backend/Documentation/Ubuntu/update.sh $DB_PASSWORD && exit"

Frontend

The frontend is a static website coded in vanilla HTML/CSS/JS and hosted on GitHub Pages, which is free with a maximum of 100GB of bandwidth per month. To avoid this limitation, Cloudflare can be used: it will cache the website on its servers (for free) and provide a secure SSL certificate. To do so, follow this tutorial.

In the future, I plan to move to a React frontend. I already have a React implementation running, but so far it’s not as nice as the vanilla one, because I don’t have much experience with it! Once I’m done with CI/CD and backend tests, I’ll continue with the React training to improve it.

Domains and Cloudflare

Currently, two domains are being used: canitravelto.com (where the frontend is hosted, GitHub Pages) and canitravelto.wtf (where the backend API is hosted, EC2 AWS).

Both use Cloudflare, which provides caching for the website (important for the frontend, as GitHub offers a maximum of 100GB of bandwidth per month), but most importantly provides TLS certificates, so the website and backend are HTTPS encrypted and safe to use.

At first, only the frontend used Cloudflare, as there was no need for the backend to use HTTPS, until I saw that an HTTPS website can’t consume an HTTP API. The first option was to go back to HTTP in the frontend (I didn’t manage to do it, because GitHub Pages always serves HTTPS).

The second option was to serve the API over HTTPS. I created my own certificates, but as they were self-signed, browsers didn’t like them, so I had to get valid certificates. To obtain an SSL certificate you need a domain name, so I acquired canitravelto.wtf, to use instead of the public AWS IP.

canitravelto.com is hosted by godaddy.com

canitravelto.wtf is hosted by name.com

Cloudflare

To configure Cloudflare on both domains, there are a few steps to follow:

Frontend

For GitHub Pages (canitravelto.com), just follow the Cloudflare set-up. Once you receive the email saying your website is active, navigate to SSL/TLS and change the mode to Full. Otherwise, the webpage can’t be reached (I still don’t know why).

Another change to be made is CORS in the API requests. CORS stands for Cross-Origin Resource Sharing, which means the site will consume resources from another origin. If not correctly configured, the requests will fail.

To make this work, in the frontend we only need to add the API key to the request header:

myHeaders.append("X-Auth-Token", "SUPER_SECRET_API_KEY")

Backend

For the backend (canitravelto.wtf), I first hosted it on AWS with Route 53 (not needed) and then did the same Cloudflare configuration as with GitHub Pages.

In this case we must also navigate to SSL/TLS -> Origin Server and create an origin certificate. We need to copy the two keys and save them as https-server.key and https-server.crt.

In the GoLang backend (main.go):

router.RunTLS(PORT, "Creds/https-server.crt", "Creds/https-server.key")

We also need to enable CORS on the backend, to allow the requests from the frontend. Just add these two headers to the response:

c.Header("Access-Control-Allow-Origin", "*")
c.Header("Access-Control-Allow-Headers", "access-control-allow-origin, access-control-allow-headers")

Conclusion

That’s it: all you need to create an API in Golang and deploy it for free with a fully working environment. Hopefully, at least some of this will help you when building your next project. You can find all the source code here, and if you have any suggestions or improvements, feel free to create an issue, open a pull request, or just fork/star the repository. If you liked this article, look out for the next one, where I’ll improve on the methodologies I used here and make use of different languages and technologies.

Translated from: https://medium.com/@mac12llm/canitravelto-21d19160bb42
