Fully managed Kong Gateway SaaS on Konnect and AWS, with live demo

Thank you everyone.

Ok. So today we're going to be talking about APIs and we're also going to be seeing a live demonstration of how we can run a globally distributed API infrastructure on AWS.

So before we start, we really have to recognize that the world we live in has changed and is changing in front of our eyes, one API at a time. We're transitioning from a pre-API world to an API world. Everything that we do, the business that we run, is at the end of the day powered by an API. The API is the interface of our business, and APIs allow us to cater to the new use cases that we're building. AI is one of them; I like to say, the more AI, the more API. It is how AI communicates with the data and the services that we want to make available.

And in order for us to be successful with this transformation, we must have an API vision that we execute against. If we have a vision for our business, we must have a vision for APIs. Look at AWS: AWS wouldn't have been possible without a vision for APIs. And typically, to go from a pre-API world to a successful API world, we need a few pieces in place in our API vision.

It starts from the developers: from how developers are building, creating, and designing APIs. We want to make our developers self-service in the way they build new APIs and in the way their developer pipelines are connected with the API creation and testing phases.

Then, once we have APIs, we need a gateway in place: a gateway that allows us to expose APIs as a product with a product life cycle, either internally to other teams and applications, or externally at the edge for any client and any use case that we may be building.

And then finally, some of our APIs are going to be served by microservice-oriented applications. For these applications, we need a secure, observable, and fast network overlay that we can deliver through a service mesh.

In order to be successful with this change, it doesn't matter how good our API infrastructure is if the consumption of that infrastructure is zero: the value of API infra is given by its consumption. So all of these components, the infrastructure components and the developer experience component, are necessary for a successful API vision.

Then, once we have all of these in place, we need a single pane of glass where we can manage all of the APIs emerging from the meshes and from the gateways: one single pane of glass to manage all the configuration of our infrastructure, and the boundaries and guardrails we want to put in place so that we still own, for example, the security and observability aspects of our API infra, and yet give our teams a degree of freedom to innovate and move fast.

And then finally, we want something that allows us to scale our API governance across all the teams in the organization, because every team is an API team in this new API world. At Kong, we serve this API vision with an API platform: a platform that delivers the API developer experience with Insomnia, the gateway and API management experience with Kong Gateway, and the service mesh with Kong Mesh.

And then finally, that platform is delivered in the cloud with Konnect. Konnect is our single source of truth for all the gateways and the meshes that you provision, and it is what allows us to iterate on our API vision, because it gives us the visibility of knowing all the APIs that we are running in the organization. Without a control plane like this, we are navigating in the dark; we are blind, because we don't know what the teams are doing.

So today, we're going to see a demonstration of Konnect. Now, when it comes to every transformation, including the API transformation, it is all about reducing complexity, about doing more with less. So in order to run a modern API infrastructure in the cloud on AWS, we also must reduce complexity: we must reduce costs, decrease risks, and make our API infra as easy and as self-service as possible for everything that's consuming or publishing APIs in the organization.

Let's take an example. Say we want to deploy 24/7 API infrastructure across the world. People-wise, we have to staff a team that is in charge of maintaining the uptime and reliability of our API infrastructure across the globe. For example, if we assume we need three SREs per region to provide roughly 24/7 coverage in each region, multiply that across all regions, and add some redundancy, we see that, people-wise, the costs are very high just for the staff that maintains this infrastructure. Being able to reduce costs, reduce complexity, and deploy global API infra as quickly as possible is critical in this day and age.
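As a rough back-of-the-envelope sketch of that staffing math (every number here, the region count, the SREs per region, and the salary, is an illustrative assumption, not a figure from the talk):

```shell
#!/bin/sh
# Back-of-the-envelope staffing cost for self-managed 24/7 API infra.
# All numbers are illustrative assumptions, not Kong's actual figures.
REGIONS=5            # regions we want covered
SRES_PER_REGION=3    # follow-the-sun coverage per region
REDUNDANCY=2         # extra engineers for on-call rotation and attrition
COST_PER_SRE=150000  # assumed fully loaded annual cost in USD

HEADCOUNT=$(( REGIONS * SRES_PER_REGION + REDUNDANCY ))
ANNUAL_COST=$(( HEADCOUNT * COST_PER_SRE ))

echo "headcount: $HEADCOUNT"
echo "annual people cost: \$$ANNUAL_COST"
```

Even with these conservative placeholder numbers, the people cost alone runs into the millions per year, which is the point the talk is making.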

So what if we could reduce cost by 10x? Not only people costs, salary costs, and training costs, but also infra costs. What if we could deploy dedicated API infrastructure with the same ease of use as serverless API infrastructure? And what if we could do that with Kong Gateway, the most adopted open-source gateway in the world? We do that on Konnect.

Thanks to this new offering that we are about to ship early next year, available as a tech preview today, called Dedicated Cloud Gateways on Konnect, we can use the Kong Gateway that we all know, with all its plugins and the whole Kong ecosystem, and run it dedicated in the cloud, in one click, across any region where we need that API infrastructure running. It supports one-click global provisioning.

We can choose the cloud and the region where we want our gateway infrastructure to run, and in one click we can provision it in a dedicated way. Dedicated means that we're not sharing this infrastructure with anybody else. It is yours, and it has to be yours, because APIs are mission-critical components of your business; therefore, API infra is mission critical as well. It has to have the reliability and performance of dedicated infrastructure with the ease of use of serverless.

Dedicated Cloud Gateways on Konnect supports two different modes of operation. We have what we call Autopilot, which means we choose the regions where we want to run the gateway, and Konnect automatically deploys the right amount of resources to always cater to the live traffic you're handling at any given time. This is auto-scaling built in, with upscaling and downscaling based on live traffic, and it's extremely easy to use. Or we can run in custom mode.

In custom mode, we have a little more flexibility in determining how we want to run our infra. We can choose not only the regions, but also how many nodes and what type of nodes (small, medium, large) we want to provision, based on the use cases we want to cater to. Are we doing transformations at the API layer? Then we will need higher-provisioned instances to handle those transformations; if not, we can go a little lower. Therefore, with custom mode we have full predictability on costs and on how the API infra is going to run.

All of this runs with private networking built in. We can run a Dedicated Cloud Gateway on Konnect in a way that is as secure as if you were running your own Kong Gateway in your own AWS account. We support transit gateway to create a secure link between our cloud and your cloud, and you can have different transit gateway attachments for the different regions you decide to use with Dedicated Cloud Gateways.

And then there are a lot of capabilities we put into this new feature. We also ship an edge global DNS, which gives us a globally reachable DNS name that targets all the regions you have decided to use, based on live latency and performance metrics that we capture. That way, we can always redirect your customers to the closest edge location you have decided to deploy.
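The latency-based selection can be pictured with a toy sketch: given measured latencies per region, always pick the lowest one. The region names and millisecond values below are made up for illustration; Konnect's actual edge DNS logic is of course more involved than this.

```shell
#!/bin/sh
# Toy sketch of latency-based region selection: route the client to the
# region with the lowest measured latency. Sample latencies are made up.
LATENCIES="us-east-2:42 eu-west-1:88 ap-southeast-1:190"

BEST=""
BEST_MS=999999
for entry in $LATENCIES; do
  region=${entry%%:*}   # part before the colon
  ms=${entry##*:}       # part after the colon
  if [ "$ms" -lt "$BEST_MS" ]; then
    BEST_MS=$ms
    BEST=$region
  fi
done

echo "route to: $BEST (${BEST_MS}ms)"
```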

And then finally, Dedicated Cloud Gateways works in addition to hybrid cloud gateways, the mode of operation we already support in Konnect. With hybrid, you can use Konnect as your global API control plane and then run your own data plane gateways or meshes in a self-managed way on your own infra. Therefore, you always own the traffic, but the management of that global infrastructure is in the cloud. It's the best of both worlds: you own the traffic, yet you can use the cloud to manage it.

So I think it's time to see a demo and see how we can use this capability on Konnect. To do that, I'm going to go over to my computer, and let's wait for Konnect to show up. This is the Konnect dashboard. Konnect is the global cloud control plane of Kong, where you can provision gateways and meshes to cater to pretty much every API use case you have in the organization. Of course, infrastructure is only part of the story. We also provide products like the developer portal, analytics, and so on, to support a full API life cycle in the cloud for your organization.

So today we're going to focus on provisioning a Dedicated Cloud Gateway cluster. To do that, we go to Gateway Manager. In Gateway Manager, I can create multiple virtual control planes, one for each team or each application, in such a way that I can segregate both the configuration of my API infrastructure and the infra running on each virtual control plane.

We would create a virtual control plane for each team or each application, and everything you see in the GUI can also be fully automated with APIs or declarative configuration. So all of this can be fully automated in your CI/CD workflows.
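For instance, a team's declarative configuration could live in a file that a CI/CD job applies with decK, Kong's declarative configuration tool. This is only a sketch: the control plane name is hypothetical, and the exact `_konnect` fields and CLI flags depend on your decK version.

```yaml
# kong.yaml: hypothetical decK declarative config for one virtual
# control plane; a pipeline step such as `deck gateway sync kong.yaml`
# (authenticated with a Konnect token) reconciles the live state to it.
_format_version: "3.0"
_konnect:
  control_plane_name: team-a-staging   # hypothetical control plane
services: []   # this team's services and routes go here
```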

So here we can see that I have a few control planes, and I can go ahead and provision a new one. When I provision a new control plane, I can choose Kong Gateway, assign the control plane a name, and then choose how I want my gateways to run: do I want to manage them myself in a self-managed way, or do I want to use Dedicated Cloud Gateways?

If I choose dedicated cloud and click Next, the next step is a very simple wizard that lets us choose what version of Kong we want to run, how we want to run it, and finally in which regions we want to run our infra.

For this example, I'm going to choose a staging environment, and then some regions: Ohio with one node, and perhaps another region like Singapore with two nodes. Based on this, we will provision a default network that you can later configure with a private networking attachment using transit gateway. That's it: in one click, I can provision global API infrastructure on top of Kong Gateway, and Konnect will do all the heavy lifting for us.

Of course, the cluster is now being provisioned. I came prepared, so I have a cluster already running, which allows us to see how we can start using Konnect on top of our new globally distributed infrastructure.

So I have this cluster here called AWS reInvent. I provisioned this cluster the same way you have just seen, and here I'm basically covering the whole world: I have nodes in Singapore, Ireland, the US, and APAC. And once I have my data planes running, I can go ahead and choose how I want to manage my services.

In this case, I have a service called httpbin. But let's do this: let me create a new one, which can still point to httpbin, so I can show you how easy it is to create an API and consume it.

To do that, I'm going to fetch the base URL of httpbin, there it is, and create my service on Konnect, on top of that cluster. Once I've done that, I'm going to create a route, and this route is going to be called hello, for example. We call it the hello route, and it points to /hello. If I do that, we're making our httpbin API available on /hello on top of our gateway infrastructure.
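Expressed declaratively, the service and route created in the demo would look roughly like this (a sketch in decK's declarative format; the upstream URL assumes the public httpbin deployment):

```yaml
_format_version: "3.0"
services:
  - name: httpbin
    url: https://httpbin.org     # upstream the demo points at
    routes:
      - name: hello-route
        paths:
          - /hello               # exposes the upstream on /hello
```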

So I can go ahead and save. Once I've done that, I can connect to this cluster: I click this Connect button, and Konnect presents us with a public DNS name that points to all the regions, as well as specific regional addresses if we want to go straight to one region. Of course, when I provisioned this cluster, one of the settings I configured was to make it publicly available. The clusters we provision can be public, with a public DNS name we can consume, or they can be private-only, so this can also be used for entirely internal API usage of the gateway.

So if I go back and try to consume my cluster, I copy and paste this URL and make my request. We see how this request goes to our dedicated cloud gateway infrastructure through the public DNS: it parses that /hello path, associates it with httpbin, and then proxies to my API.

Of course, this is a very simple example. We could have one, a thousand, or a million APIs and routes configured here to cater to all the requirements that we have.

Now that I have this API, let's make things a little more interesting. Let's apply plugins, for example, that allow me to enhance how the traffic that goes through this gateway is managed.

Among the many plugins that Kong has (there are plugins for pretty much every use case, and plugins you can create yourself), I can go ahead and choose the rate limiting plugin. I enable it and say that I want no more than five requests per minute. Let's see, everything is configured; I can point it at a specific route, or it can be global. Let's make it global.

Now the plugin is created and the rate limiting plugin is on. So if I go back here and connect again to my cluster, or just refresh here, and start making a bunch of requests, eventually the rate limiting plugin goes into effect and stops me from making new requests.
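The rate limit set up in the demo corresponds to a plugin configuration roughly like this (a sketch; `minute: 5` matches the five-requests-per-minute limit from the demo, while `policy: local` is an assumed counter policy):

```yaml
plugins:
  - name: rate-limiting
    config:
      minute: 5        # allow at most five requests per minute
      policy: local    # assumed; cluster and redis policies also exist
```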

Of course, this is a simple example; there is a lot more that can be done. But I hope you have enjoyed this presentation of Dedicated Cloud Gateways. I've been in the API business for a very long time as the co-founder of Kong, and I truly believe that we've made something that was already very easy to use, self-managed Kong Gateway, even easier with Konnect and one-click provisioning.

This is the end of my presentation and of my demo. What I encourage you to do is scan this QR code. By scanning it, you can sign up for Konnect for free, and from there you can start using all the gateway and service mesh capabilities we have available, in one click.

Thank you so much.
