Amazon EventBridge Pipes simplifies connecting event-driven services

Good morning folks, and thanks so much for joining us this morning. I know it's the last day of re:Invent and it's just awesome to see a really full house here to learn more about pipes. My name is Nick Smit and I'm the principal product manager for Amazon EventBridge, and I'm joined by Jamie Dool, who is the general manager for Amazon EventBridge.

So today we're really excited to be talking to you about pipes, which launched yesterday in Werner's keynote. Pipes provides a simple, consistent and cost-effective way to create point-to-point integrations between event producers and event consumers. We're gonna talk about why you need to create integrations in the first place and why that can sometimes be quite difficult and complex. We're then gonna jump into how pipes helps make that easy.

We'll explore a couple of use cases for pipes and then actually hop into a demo to see how it works. From there, we'll do a quick wrap-up about some of the things that we're gonna be adding to pipes in the future.

Ok. So let's start off by talking about the problem that we're trying to solve, which in essence is about integrations. AWS has more services and more features within those services than any other cloud provider by a long way. And these services are designed as building blocks that you can assemble together to create a complete solution.

And this gives you an application that is highly scalable and resilient. And crucially, it is really easy to extend and add more features by adding more of these building blocks. This saves you from having to invest all of your developer resources into managing infrastructure and data centers and allows your developers to instead focus on creating business value and creating differentiated experiences for your customers.

And so the services that I've talked about here as building blocks are incredibly powerful and allow you to pick the right tool for the right job. So if you need a database, you can use DynamoDB, which provides incredible performance and massive scale at low latency. If you're streaming huge amounts of data, Kinesis Data Streams has you covered. And if you need to handle spiky workloads in a queuing kind of form, then you can use SQS, which gives you pretty much unlimited throughput.

And so the idea is that you can assemble these building blocks together to create a solution that scales, that's resilient and reliable, and that you can flexibly extend.

Ok. So we have these building blocks and we assemble them together, and we get this: your architecture diagram. You've probably seen hundreds of these before, and they're really a visual representation of your application. They explain what services you're using and how you're connecting them together.

But one of the difficulties with this building block type approach to assembling your application is that you have to spend quite a lot of time connecting those building blocks together. Think about all of those architecture diagrams. They all have these lines that connect the services together. But a lot of the time, the lines between those blocks can be quite complex.

So for example, here we've got an architecture diagram where we've got a DynamoDB stream that we want to connect to SQS. Perhaps we have something that wants to process changes to our database from that queue. And so in this case, you know, it looks fairly simple: we have a DynamoDB stream and then we have SQS on the right. But what you don't realize here is that you actually have to write integration code to poll that DynamoDB stream and place those events onto the SQS queue.

So we call this integration code or glue code. And it can actually be quite tricky and time consuming to write. On the face of it, it might not seem like it, but when you actually think it through, there's quite a lot that you have to consider.

First off, there are a lot of nuances between the different services and how they communicate. Take Kinesis, for example: you need to use the Kinesis Client Library to poll a Kinesis stream. You need to know how shards work and that you have to have a single consumer per shard. You also need to keep track of your sequence IDs so that you know whereabouts you are in that stream.

On the other hand, if you're using SQS, you need to know that you can have many consumers that can all poll the same queue, unlike Kinesis. With SQS, you use the AWS SDK, not the Kinesis Client Library. And with SQS, you need to know that you have to delete a message after you've consumed it.
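
To make that difference concrete, here's a minimal sketch, in Python with boto3, of the kind of hand-written polling glue code the SQS side alone would need; the queue URL and the forwarding function are hypothetical stand-ins.

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"  # hypothetical

def poll_once():
    """One long-poll iteration of hand-written SQS glue code."""
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,   # batching is on you
        WaitTimeSeconds=20,       # long polling to reduce empty receives
    )
    for msg in resp.get("Messages", []):
        forward_to_target(msg["Body"])  # your integration / business logic
        # Unlike Kinesis, SQS makes you delete each message after a successful
        # consume, otherwise it becomes visible again and is redelivered.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

def forward_to_target(body: str) -> None:
    print("processing", body)  # placeholder for the real target call
```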

So even between these two services, there are a lot of differences in how they work that developers have to figure out. And once you've figured out those differences, you have to think about how you handle errors in your integration. If you're doing batching, you'll have to handle partial failures. And if you're doing ordering, you'll need to keep track of the last message you processed, so that if something impacts your service, you can come back online and continue where you left off.

Once you've done that and written that code, you then have to think about your target service, the one that you actually want to send your events or messages to. And here you have to consider authentication and permissions for that service, and what the API looks like. And if you're using a non-AWS service as a target, say you have a third-party SaaS API that you're trying to call, you have to figure out how to authenticate against that. You also have to consider things like flow control or rate control: you don't want to overwhelm some third-party API that might not be able to take the load that you want to send it.

So once you've written all of that code, you then need to consider performance testing. You want to make sure you allocate the right amount of CPU and memory resources, and you also need to actually benchmark that integration to ensure you have the right amount of throughput for your use case. Then you'll have to deploy that code, and lastly manage the ops and monitoring for it. You want to make sure that if you have increased latency on that integration, or failures, you're actually alarmed and notified about that.

So even a simple integration like a DynamoDB stream to SQS can end up having a lot that you and your developers have to consider. All of these considerations reduce your developer velocity, they increase the risk of having bugs in your code, and they make your app more complex and inefficient, which increases your operational load. And so the end result is a higher total cost of ownership, as you spend more of your developer resources writing integrations instead of focusing on the business logic that differentiates your business.

Great, thanks Nick. So there we go. So now we understand some of the challenges that developers are facing, and to address those challenges we have announced EventBridge Pipes, a new feature of EventBridge that makes it easy for developers to build event-driven applications that integrate event-driven services in a simple, consistent, cost-effective manner without having to write additional code.

Today, pipes support consuming from six event sources, enrichment through four services, and delivery to 14 services, including HTTP endpoints via API destinations. And we're very excited to be here today to tell you how pipes can help you in your applications.

But before I jump into all the great things that pipes can do, I wanted to do a little informal poll here. We have been monitoring our dashboard since we announced the feature yesterday, and I've seen upwards of several hundred customers already give it a try. So I'm curious, who in the room here, still on Friday, has tried it out? Just raise your hand.

Ok. A few. Not too many. So, ok, lots of new learning today. That's great.

Ok. So what can a pipe do? Firstly, a pipe supports filtering. As customers evolve their applications with discrete microservices, they often want to consume only the events that are appropriate to that service. So being able to filter a source ensures that the service gets only what it needs.

Pipes support batching on both consumption and delivery, ensuring integrations between services are highly efficient. For ordered sources, pipes retain that ordering and deliver each event sequentially to ensure ordering is maintained.

Pipes support high concurrency, ensuring the integrations between services scale as the volume of events grows. And finally, pipes support an enrichment step, allowing unique business logic to be applied to events consumed from a source and to change them before delivery to the target.

So why would you want to use a pipe? Pipes enable developers to move faster by eliminating the undifferentiated integration code that you're writing and maintaining today. Pipes help you save money by not needlessly processing events your applications don't need; you only pay for events you want to process.

Pipes provide fully managed polling, allowing your applications to react as and when new events arrive. Finally, pipes remove the burden of managing the code, infrastructure and scaling of your point-to-point service integrations.

So I want to step back for just a moment and share a little bit of what inspired us to build pipes in the first place. In 1973, nearly 50 years ago, Unix pipes were introduced as a way to compose novel solutions using purpose-built executables connected together over a common text protocol.

Later, this concept evolved into software architectures and then into enterprise integration patterns, and all three of these innovations continue to be practiced by developers today. As Nick talked about earlier, AWS provides a tremendous breadth of purpose-built services that enable developers to innovate at cloud scale faster than ever before.

And so we conceived of EventBridge pipes as a way to extend these integration patterns to the AWS cloud.

So before we get into the details of pipes, let's first address the fact that we are launching pipes as a capability of EventBridge, and take a moment to talk about how pipes are different from the core event bus capability of EventBridge.

Looking back, we launched EventBridge in 2019, and if you've been with AWS for a while now, you may know that it evolved from CloudWatch Events. Customers love CloudWatch Events because it allows them to respond to changes in their AWS-based environments in near real time. Many of these early use cases were mostly DevOps or SRE type use cases, but they demonstrated the power of event-driven development in the cloud.

So we saw an opportunity to bring this powerful paradigm, with the ability to react in near real time, to all of the changes occurring across your business. We launched EventBridge with a serverless spin as a platform to make it easy for customers to bring all of their business events to an infinitely scalable, always available, pay-only-for-what-you-use enterprise solution that allows many publishers and many consumers to easily integrate with each other, and in so doing remove much of the heavy lifting.

Pipes extend the same benefits to event-driven point-to-point integrations by removing the undifferentiated heavy lifting required in wiring together the many building blocks of AWS. Both event buses and pipes help developers move faster, focus on their core business logic, and reduce the code your teams have to manage and scale.

Event buses address many producers and many consumers, while pipes focus on one-to-one integrations between services. For those of you that are familiar with Lambda, you may be starting to see some similarities in the sources that pipes support. For those not familiar with Lambda, the commonality in the sources that pipes support is that they are all services that require some form of polling.

Every developer using these services needs to understand how the unique or nuanced approaches to polling work. They need to understand how to scale as the workload grows, how to handle unhappy path error conditions and again how to develop and maintain this code, which takes time away from building the unique capabilities of your business.

Pipes share the same at-scale managed polling infrastructure that Lambda event source mappings do, and we extend that capability to poll and deliver to over 14 AWS services. This production-proven capability is used by over 150,000 users operating over 1 million ESMs that process over 1 million transactions per second. So this lineage makes pipes ready to tackle the scale of all your supported sources today.

Let's talk about a few use cases. With six supported sources, four supported enrichment services and 14 supported delivery targets, pipes allow for over 100 integrations without requiring you to write any integration code.

So let's jump into a few of those integrations and get a sense of what's possible.

Ok. So the most basic use case: on the left-hand side, we have an SQS queue, a hugely popular serverless queue service often referred to as the infinite buffer in the sky. On the right-hand side, we have a Step Functions state machine, another popular service that makes it easy to orchestrate complex workflows.

There are many ways to achieve this, but one of the easiest is to use event source mappings to connect a queue to a Lambda. In this case, you have to write the code to invoke the state machine, consider the throughput and concurrency requirements, test the code, deploy the code, and manage the code through the life cycle of the application.

Now with pipes, navigate to the EventBridge console, create a new pipe, select the source queue and the state machine as your destination, and finish with create pipe. And that's it. You're done. Your queue is now wired to your state machine, no code written, and now you're ready to move on to your next business challenge.
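
If you'd rather script that than click through the console, the same pipe can be created through the Pipes API. A minimal boto3 sketch, assuming this parameter shape for CreatePipe; the ARNs and role are hypothetical, and the role must allow the pipe to read the queue and start executions.

```python
import boto3

pipes = boto3.client("pipes")

pipes.create_pipe(
    Name="order-processing-pipe",
    RoleArn="arn:aws:iam::123456789012:role/order-pipe-role",   # hypothetical
    Source="arn:aws:sqs:us-east-1:123456789012:orders-queue",   # hypothetical
    SourceParameters={
        "SqsQueueParameters": {"BatchSize": 10}
    },
    Target="arn:aws:states:us-east-1:123456789012:stateMachine:order-processor",
    TargetParameters={
        # Asynchronous invocation of the state machine for each batch.
        "StepFunctionStateMachineParameters": {"InvocationType": "FIRE_AND_FORGET"}
    },
)
```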

Ok. This next use case will leverage the filtering capability of pipes. In this example, we have a Kinesis data stream, and the stream may contain, say, all of the clickstream data from your web application, and your teams have a desire to split this stream into multiple domain-specific streams.

The need to split a stream may come from one team being interested only in processing the user behavior associated with the portion of the web app that they own. Another reason might be a specific use case or analytics case that only applies to a subset of the stream. And yet one more reason may be simply to expand the read capacity of a given stream to more consumers.

With pipes, you can introduce two or more pipes from the same stream, apply a filter pattern with the same semantics as an EventBridge rule, and target a new stream that will contain only the subset of events that the stream consumer is interested in.

Like we saw in the first example, to achieve this you would follow the same set of steps, first creating a new pipe, then choosing the aggregated stream as the source. But in this case, our second step is to define a filter pattern to match only the events that we're interested in. Once again, without writing any code, we have easily split the stream into two, and now we're ready to move on to solving the business problems offered by those individual streams.
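
As a rough sketch of what those filter patterns might look like: the matching syntax is the same as EventBridge rules, and for stream sources the pattern is applied to the decoded record payload. The field names and values here are hypothetical.

```python
import json

# Two hypothetical filter patterns, one per pipe, splitting an aggregated
# clickstream by an "app" field inside each record's JSON payload.
CHECKOUT_FILTER = {"data": {"app": ["checkout"]}}
SEARCH_FILTER = {"data": {"app": ["search"]}}

# When creating each pipe, the pattern goes into the source's FilterCriteria
# as a JSON string, for example:
source_parameters = {
    "FilterCriteria": {"Filters": [{"Pattern": json.dumps(CHECKOUT_FILTER)}]},
    "KinesisStreamParameters": {"StartingPosition": "LATEST"},
}
```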

Ok. This next example builds on the previous one, where we had pipes with filters, and now we introduce the enrichment capability of the pipe. Here we have a managed Kafka cluster living in a VPC, and we want to leverage more of the purpose-built capabilities of AWS while integrating this event data with another service that we own, which sits behind an API.

To achieve this, we create a pipe, define a filter, and choose a SageMaker pipeline to train our SageMaker model. We follow the same steps as before, finishing with create pipe, and again, without writing any code, we have integrated our self-managed Kafka source with a native AWS machine learning capability.

Next, we want to integrate one or more topics from our Kafka instance with an existing system that is fronted by an API. Before we do that, we need to apply some business logic to the events. To achieve this, we follow the same steps. We select our source, we define a filter to select the subset of events we're interested in processing, and then we leverage the optional enrichment step to select a Lambda function that will apply our business logic on incoming events before sending them to the API destination configured for our service.

Now, you might be saying, but Jamie, you wrote some code this time. I thought this was integration without code. And you're right, but there's a difference here: in the enrichment Lambda function, you are only writing business logic. There's no glue code required to call the API. You simply return from the Lambda with the data in the shape your API requires, and the pipe takes care of making the API call and passing your freshly enriched data.
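
To underline that point, an enrichment function for a pipe is just a plain Lambda handler: it receives the batch of filtered events and returns the reshaped batch, and the pipe makes the API call. A minimal sketch, with hypothetical field names.

```python
def handler(events, context):
    """Pipe enrichment: receives a batch of filtered events and returns the
    enriched/reshaped batch. The pipe itself calls the API destination."""
    enriched = []
    for event in events:
        enriched.append({
            "orderId": event.get("detail", {}).get("id"),  # hypothetical fields
            "status": "READY",
            "origin": "kafka-orders-topic",                # hypothetical
        })
    return enriched
```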

And finally, this last use case, which I'll move through quickly: I want to demonstrate how pipes can fit into your event-driven journey. In this case, our source is a DynamoDB table, and over time, data in this table is becoming more and more interesting to more and more consumers within your business who want to react in real time to record-level changes to this table.

Now, you can have many pipes and event source mappings connected to this table. However, each contributes to the read capacity and may result in throttling, which is why we recommend only two consumers at one time.

To solve this, we can use a pipe as a single consumer of the stream to publish the events to EventBridge, where many consumers can subscribe to the events without impacting the capacity of that stream. Starting off with this pattern provides some future-proofing of this event source as more of your teams and systems find unique ways to derive value from these events.
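
A rough boto3 sketch of that fan-out pattern, assuming the CreatePipe parameter shape shown earlier; the stream, bus and role ARNs, detail type and source string are all hypothetical.

```python
import boto3

pipes = boto3.client("pipes")

pipes.create_pipe(
    Name="orders-table-to-bus",
    RoleArn="arn:aws:iam::123456789012:role/orders-pipe-role",   # hypothetical
    # DynamoDB stream of the table (hypothetical ARN).
    Source="arn:aws:dynamodb:us-east-1:123456789012:table/orders/stream/2022-12-01T00:00:00.000",
    SourceParameters={
        "DynamoDBStreamParameters": {"StartingPosition": "LATEST", "BatchSize": 10}
    },
    # Single consumer on the stream; fan-out happens on the bus via rules.
    Target="arn:aws:events:us-east-1:123456789012:event-bus/orders-bus",  # hypothetical
    TargetParameters={
        "EventBridgeEventBusParameters": {
            "DetailType": "order-table-change",   # hypothetical
            "Source": "pipes.orders-table",       # hypothetical
        }
    },
)
```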

Now I'll pass it back to Nick so you can show us a demo.

Thanks Jamie.

Ok. So we're gonna hop into a demo. Jamie has given us some great use cases, and I'm gonna go through two demos here, two pipes that we're gonna create.

The first is a super simple one. Jamie showed us that use case of going from SQS to Step Functions. Today, that's something that you would actually have to write code for - typically a Lambda function to go and poll that SQS queue and then invoke your Step Function. So we're gonna see how we can do that really simply with a pipe.

The next one is a more complex, more advanced use case that we'll build. In this one, we've got a Kinesis stream that has a whole lot of customer support messages or events coming in. But this stream has customer support tickets that are what we call thin events. So the event in that Kinesis stream has only the type of event - support ticket created, or an agent logged off or logged on - and then an ID to reference for that particular event.

And so what we actually need to do here is go out and look up that particular ticket ID or resource that we've mentioned in an event and populate it with all of the data related to that event. To do that, we're gonna use Zendesk, a popular third-party SaaS application, for managing our tickets. So we wanna make an API call out to Zendesk to go and get the data about a ticket, and we then wanna put that onto an SQS queue.

So we'll see the filtering and enrichment capabilities of the pipe in the second demo.

Let's take a look. Here we go. We've got my screen, and I'm going to refresh just to make sure that I am logged in. Okay? So here I am on the Pipes console. You can see this is now a new navigation item in the EventBridge console on the left-hand side, and I've got no Pipes created so far. I've gone ahead and created an SQS queue - you can see I've got my orders queue, and then later on we'll use this support tickets queue.

So I'm gonna go into my orders queue, and you can see that I don't have any messages in there right now. And then I have a Step Function that I've created. So this is the order processor Step Function, and all it does is super simple: it takes in an event and then passes it out as an output. So really, we're just kinda using it to log what's coming in and what's going out.

Okay. So I've got my source, the SQS queue, and I've got my target, the Step Function. So I'll head back to the Pipes console and click the Create Pipe button. I'm gonna give this a name - we'll call it order processing pipe - and then I can select my source for this particular pipe. You can see we've got the range of different sources that Jamie mentioned: Kinesis, SQS, DynamoDB, Amazon MQ, self-managed Kafka and Amazon MSK. In this case, I'm gonna select SQS and then choose my orders queue. In this example, I'm kind of pretending that I have some ecommerce system that's putting events onto that SQS queue, and I'll use those in Step Functions.

You can also go and set some properties, for example the batch size and the batch window. I'm just going to use the defaults that we have here. You can see the Pipe has that filtering and enrichment step. I'm not gonna use those for this example - we'll get to those in the next example - so I'll remove them.

The next thing I've got here is the target. So I want to use Step Functions as my target, and I can see my state machine, that order processor that I showed you just now. So that's it. That's the Pipe. It's got a source and it's got a target. And remember, if I wanted to do this today, I would have to write that code to integrate the two.

So what I'll do is hit the Create Pipe button. That happens asynchronously, so I'm gonna let that complete and hop over to my SQS queue, and we'll start building a message that we're gonna send. We'll give this an order ID - we'll give it an ID of 10 - and let's just say we have an order value, and we'll make that 100. This is obviously not gonna get used; it'll just come through in our Step Function.

So I'll hop back to the Pipe page, and I can see that that Pipe is now running. So I'm gonna go ahead and send that message from SQS. I hit send, and now my expectation is when I hop over to the Step Function, I can go over to the executions and we can see that there's a successful execution. And if I look at the input and output here, you can see that that event that we sent, with an order ID of 10 and an order value of 100, has come through.
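
If you wanted to script that same smoke test instead of using the console, a rough boto3 equivalent (queue URL and state machine ARN hypothetical) would be:

```python
import json
import boto3

sqs = boto3.client("sqs")
sfn = boto3.client("stepfunctions")

# Send the same kind of test order message the console demo sends.
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue",  # hypothetical
    MessageBody=json.dumps({"orderId": 10, "orderValue": 100}),
)

# A moment later, the pipe should have started an execution with that payload.
executions = sfn.list_executions(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:order-processor",
    maxResults=5,
)
for ex in executions["executions"]:
    print(ex["name"], ex["status"])
```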

So I think that was less than a minute, and we managed to get an integration from SQS straight to Step Functions. Okay? So that was our simpler example. Let's now hop over to the more advanced one. Here I've got a Kinesis stream - it's our support stream, we're calling it - and I've got a bunch of messages that I'm gonna send to that stream just now. I told you I had those thin events.

So you can see there's this support ticket created event. It's got a ticket ID of one. I've got a support ticket resolved, a support ticket updated and an agent logged out. In my case, I don't actually care about the support ticket resolved, updated and signed out use cases; I just care about when support tickets are created.

So I wanna filter out all those other events. But then what I wanna do is populate this event with more data. I wanna know who created the ticket and what's in there. So if I hop back here, you can see I've got my Zendesk agent platform up here, and you can see that there's this support ticket that I have. I'll click on it. It's got a whole bunch of details: who requested this support ticket, and the contents of the ticket.

So what I want is for the event that I actually place on that SQS queue to have all of this data added to it. So what I'm gonna do is start by creating an API destination. API destinations is a feature of EventBridge that allows you to make API calls out to third-party APIs.

So I'm gonna create this API destination using the Zendesk API, which will go and fetch the full data of those tickets from Zendesk and then return that to the Pipe for our enrichment step. So I'll go ahead and create my API destination. We'll call this the Zendesk get ticket.

And then I've got to pop in the actual endpoint, so I'll hop back - I've got a little cheat sheet here - and we'll put in this URL. What you can see, if I zoom in a touch, is that this URL actually has a star in it. And what that does is allow us to dynamically map parts of the URL to a field in our event.

So you saw that those events that I had had a ticket ID. I actually want to map that ID to this particular part of the URL, so that depending on my ticket ID, it'll go out and get the right ticket information.

Then I'm gonna set this as a GET request to the API, and I'm gonna leave the rate limiting. And I'll choose the authentication that I've set up previously for Zendesk. So I go ahead and create that API destination, and now I've got all the different pieces that I need for my Pipe.
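
For reference, a rough boto3 sketch of the same API destination setup; the connection name, Zendesk subdomain and credentials are hypothetical, and the star in the endpoint is the placeholder the pipe fills in later.

```python
import boto3

events = boto3.client("events")

# The connection holds the auth details used when calling the endpoint.
conn = events.create_connection(
    Name="zendesk-connection",                       # hypothetical
    AuthorizationType="BASIC",
    AuthParameters={
        "BasicAuthParameters": {
            "Username": "agent@example.com/token",   # hypothetical
            "Password": "ZENDESK_API_TOKEN",         # hypothetical
        }
    },
)

# The API destination has a wildcard in the path that the pipe maps to the ticket ID.
events.create_api_destination(
    Name="zendesk-get-ticket",                       # hypothetical
    ConnectionArn=conn["ConnectionArn"],
    InvocationEndpoint="https://example.zendesk.com/api/v2/tickets/*",  # hypothetical subdomain
    HttpMethod="GET",
    InvocationRateLimitPerSecond=10,
)
```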

I have the support stream, the Kinesis stream that we have. I've got the API destination that we'll use for enrichment. And then I've got this SQS queue that I've created, our support tickets queue, that we're gonna actually send these events to.

So I'll hop into the Pipes console and I'm gonna create a new Pipe. We'll name this Pipe the support tickets pipe. And we're gonna go ahead and select that Kinesis stream as the source - so that's our support stream - and then I can choose my starting position.

So I can tell the Pipe to go all the way back in time - the trim horizon, the very start of the events that that Kinesis stream has - or I can start at a particular timestamp, or I can start just at the latest events that come through. I'm going to pick the latest here.

I can also configure things like the batch size, the batch window, how much concurrency we want on the stream, and partial batch failures. And another interesting thing here with Kinesis is that we can actually set a dead letter queue for this particular Pipe.

So with Kinesis, we process events in order, and there might be a situation where one of the events in that Kinesis stream cannot be successfully processed. That's normally called a poison pill, and we're not actually able to move on from that event.

But with Pipes, we can set an SQS queue as a dead letter queue, and we'll simply take that event that cannot be delivered to the target, place it in the SQS queue, and then continue processing the rest of that Kinesis stream. But in this case, I'm gonna just leave the dead letter queue unset. I hopefully won't need it - famous last words.

So we've got our Kinesis stream set up. The next step is to actually create a filter on this particular Pipe. I mentioned that I have all these events coming into the stream, but I'm actually only interested in these support ticket created events. So again, I'm gonna hop over to my cheat sheet, copy this, and I'll talk through it.

Essentially, what we're doing here is telling the Pipe we only want to match against events that have a type of support ticket created. And like EventBridge rules, we are able to use a whole bunch of different content filters here. So you can do things like prefix matching, anything-but matching and numeric ranges. There's a whole lot of powerful ways that you can filter events.
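
A small sketch of what those patterns can look like; the exact-match pattern mirrors the one used in this demo, while the other operators use hypothetical fields, and all the field names are stand-ins.

```python
import json

# Exact match on the event type, along the lines of this demo.
created_only = {"data": {"type": ["support_ticket_created"]}}   # hypothetical field names

# A few of the other content filters the same pattern syntax supports:
examples = {
    "prefix": {"data": {"region": [{"prefix": "us-"}]}},
    "anything_but": {"data": {"type": [{"anything-but": ["agent_logged_out"]}]}},
    "numeric": {"data": {"priority": [{"numeric": [">=", 3]}]}},
}

# In the pipe, each pattern is supplied as a JSON string:
print(json.dumps(created_only))
```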

So I've got my event filter set up to match those support ticket created events. The next step is the enrichment step. We want to go out to Zendesk and query for more information about our ticket.

So I select API destinations - and while I'm here, you can see you can use Lambda, Step Functions and API Gateway as enrichments as well. But in this case, we'll use API destinations, and then I'm gonna use that Zendesk get ticket API destination.

Now, I mentioned earlier that we have that asterisk in the URL. This is where I get to actually map that asterisk to a particular field in our event. So back to my cheat sheet, and I've got the JSON path for this particular field, which is the ticket ID that we have in that source event.

So I've got my enrichment set up and the last bit that we'll do here is then select that SQS queue that we created. And so that is our support tickets queue.

So just going through that once more, we have a source that's a Kinesis stream. We've set up filtering to only get those support ticket created events. We've got our API destination configured to go and fetch the full payload of that particular ticket from Zendesk and then we're going to go and place that onto SQS.
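
Putting those pieces together outside the console, a rough boto3 sketch of the whole pipe might look like this, assuming the parameter names shown; the ARNs, the role and the event field names are hypothetical. Note the path parameter value that maps the asterisk in the API destination URL to the ticket ID in each event.

```python
import json
import boto3

pipes = boto3.client("pipes")

pipes.create_pipe(
    Name="support-tickets-pipe",
    RoleArn="arn:aws:iam::123456789012:role/support-pipe-role",             # hypothetical
    Source="arn:aws:kinesis:us-east-1:123456789012:stream/support-stream",  # hypothetical
    SourceParameters={
        "KinesisStreamParameters": {
            "StartingPosition": "LATEST",
            "BatchSize": 10,
            # Optional dead letter queue for events that repeatedly fail.
            "DeadLetterConfig": {"Arn": "arn:aws:sqs:us-east-1:123456789012:support-dlq"},
        },
        "FilterCriteria": {
            "Filters": [
                {"Pattern": json.dumps({"data": {"type": ["support_ticket_created"]}})}
            ]
        },
    },
    Enrichment="arn:aws:events:us-east-1:123456789012:api-destination/zendesk-get-ticket/abc123",
    EnrichmentParameters={
        # Fill the * in the API destination URL from the event's ticket ID.
        "HttpParameters": {"PathParameterValues": ["$.data.ticketId"]}   # hypothetical path
    },
    Target="arn:aws:sqs:us-east-1:123456789012:support-tickets-queue",   # hypothetical
)
```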

So I'll go ahead and create the Pipe. Now, when we set up Kinesis as a source, it takes a couple of seconds - hopefully not minutes - to configure that. But one of the things I'll show you while that's getting created is the IAM roles that get created for you. And I think this is a really powerful part of Pipes.

If you think about having to write all of this on your own and actually create the code for this, you've got to go and figure out what permissions you need for each of the steps of that integration. You've got to figure out, you know, what you need to poll that Kinesis stream, you've got to think about calling API destinations and the permissions related to that, and then your target as well.

But with the Pipes console, what we do is actually figure out the permissions that you need for each of the resources in that particular Pipe, and we go ahead and create that IAM role for you.

So you can see here, I've got an API destination set up; it's configured the permissions for my Zendesk get ticket API destination. We've got that Kinesis stream configured, so it figures out all of the different actions that it needs to perform on the Kinesis stream. And then we've got our SQS queue configured here as well.

So the Pipe takes care of creating and figuring out all of those permissions for you.
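
For illustration only, the generated role's policy ends up with statements roughly along these lines; this is a hedged approximation expressed as a Python dict, not a copy of what the console actually writes, and the resource ARNs are hypothetical.

```python
# Approximate shape of the execution-role policy the console generates for this pipe.
PIPE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read the Kinesis source
            "Effect": "Allow",
            "Action": [
                "kinesis:DescribeStream",
                "kinesis:DescribeStreamSummary",
                "kinesis:GetRecords",
                "kinesis:GetShardIterator",
                "kinesis:ListShards",
            ],
            "Resource": "arn:aws:kinesis:us-east-1:123456789012:stream/support-stream",
        },
        {   # Invoke the API destination used for enrichment
            "Effect": "Allow",
            "Action": ["events:InvokeApiDestination"],
            "Resource": "arn:aws:events:us-east-1:123456789012:api-destination/zendesk-get-ticket/*",
        },
        {   # Deliver to the SQS target
            "Effect": "Allow",
            "Action": ["sqs:SendMessage"],
            "Resource": "arn:aws:sqs:us-east-1:123456789012:support-tickets-queue",
        },
    ],
}
```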

Okay. Let's head back to the Pipe. Hopefully this is created - great, failed. Let me see if I can try that once more. I think this might have been just a little something. There we go, it's running. So we have some delays in creating an IAM role and I think I ran into one of those - a known issue, it should be fixed soon.

So hopefully now my Pipe is running, and what I'm gonna do is go ahead and actually send those events that I showed you earlier to our Kinesis stream. Kinesis requires these events to be base64-encoded, so I've got this records-base64 file, which is essentially just the same set of events, base64-encoded.
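
For anyone following along with code rather than the CLI: boto3 takes the raw record bytes and handles the encoding itself. A rough sketch of sending a similar set of test events, with a hypothetical stream name and field names.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# Hypothetical thin events along the lines of the demo's support stream.
test_events = [
    {"type": "support_ticket_created", "ticketId": 1},
    {"type": "support_ticket_resolved", "ticketId": 1},
    {"type": "agent_logged_out", "agentId": 42},
]

kinesis.put_records(
    StreamName="support-stream",   # hypothetical
    Records=[
        {
            "Data": json.dumps(e).encode("utf-8"),          # boto3 base64-encodes for you
            "PartitionKey": str(e.get("ticketId", "agent")),
        }
        for e in test_events
    ],
)
```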

So I'll hop over to my CLI and I'm gonna call the put-record CLI command, and you can see that's gone ahead and sent those events to Kinesis. If I hop over then to Kinesis and look at our data viewer, I can go and say, let's look at a particular sequence number, and hopefully we will see our records in there. Whoops, what's happening there? Ah, it was on a different shard, thanks.

Ok, let's try and check that we got those. There we go. So we can see that the Kinesis stream has got the various support events that we created there - the support ticket created, the agent logged out. And what we're expecting to happen here is for the Pipe to actually filter those events, and for only that support ticket created event to land up on our SQS queue.

And of course, we expect it then to have gone out to Zendesk and got the full details about that ticket. So if I hop back over to SQS - and hopefully I don't have to send those events again, we'll see if they're there; it might still be setting up the permissions on this, so I hope that it comes through. Yeah, there we go.

So we can see here - I'll copy this from the SQS queue and just put it into Sublime so we can format it a bit nicer. What's happened here is that the support ticket created event, which just had the ID in it, is not what we see. Instead, what we have when we look at the event placed on our SQS queue is the full ticket, the full response that we got back from Zendesk.

And you can see that's got things like the subject of the ticket, the description of it, and all of the other details. So our consumer that's processing these tickets from that SQS queue now has all of the information that it needs to actually be able to go and react to that particular event.

The last thing I wanted to show you is that the Pipe also has some helpful monitoring metrics that you can use, for example the invocations, failures, and various other things related to the throughput of your particular Pipe. So this makes it easy to figure out what's going on, what kind of throughput you're getting on that Pipe, and whether there are any failures.

Cool. Here we go, Jamie. Thanks, Nick. Nicely done. Always nice when a live demo actually works, for the most part.

Okay. So now you've heard about what Pipes can offer today. You've seen them in action, but this is just the beginning for us. We're excited to keep enhancing Pipes with more capabilities that enable developers to move faster while focusing on their unique business challenges.

So coming in 2023, we intend to support CloudWatch Logs and VPC endpoints. We'll also include support for AWS service integrations. Now, this is something I'm not sure everybody has seen yet, but in Step Functions and in our latest scheduler launch, we have introduced a new capability called service integrations that allows services - and eventually Pipes - to target any AWS service API.

Essentially enabling, you know, easy delivery from Pipes to any service. We'll be able to deliver across accounts, we will support archive and replay on the Pipe, and we will integrate with the CDK, IDEs and all the tools that you're familiar with using today.

So let's recap a little bit of what we learned. Pipes connect poll-based sources to AWS services, as well as SaaS-based services through API destinations, while allowing filtering of sources and enrichment before delivery.

Pipes allow for over 100 integration combinations without the need to write any additional code. Pipes share the same managed polling infrastructure as Lambda event source mappings, bringing our extensive experience with the complexity and scaling of polling, and making Pipes ready to tackle your largest workloads starting today.

In closing, we're so excited to add Pipes to the family of EventBridge capabilities, including event buses, the scheduler, the schema registry and now Pipes, making EventBridge your first stop on your journey to modernizing and scaling your technology investments through event-driven applications, to deliver better experiences for your customers and better outcomes for your business.

So to get started, we have our blog post introducing the feature and how it works, and you can jump right into the console, of course, with either of these QR codes. We can't wait to see some of the creative integrations that you come up with. Thank you very much.
