Accelerate innovation with real-time data

Hi, welcome to Accelerate Innovation with Real-Time Data. My name is Mindy Ferguson, and I'm the Vice President for AWS Streaming and Messaging. These are services like Amazon SQS, Amazon MQ, Kinesis Data Streams, and Managed Streaming for Apache Kafka. These services are not only the foundation for how data moves between microservices and applications; real-time streaming data is also the backbone companies use to move data, breaking through data silos and accelerating access to information.

Now, 10 years ago, as a customer, I sat in a room very much like this one here at re:Invent and listened to AWS announce a brand-new service called Amazon Kinesis. I was so incredibly excited about the possibilities of real-time data. Here we are at this year's re:Invent, celebrating the 10th anniversary of this revolutionary fully managed service, and I'm even more excited, because we've moved from what's possible to living in a world powered by real-time data.

Today, I'm going to be joined by the General Manager for Kinesis Data Streams, Arvind Ravi. Together, we're going to talk about all of the AWS streaming services. We'll share examples of how customers are using real-time data and how that data is used to improve machine learning and artificial intelligence use cases. Customers who have mature data streaming strategies are often the fastest to adopt new technology like generative AI. And today we will be joined here on stage by Harsh Perk, a Principal Scientist from Adobe, and we'll have the opportunity to hear how Adobe is using real-time data even to power generative AI applications like Firefly.

Data is the foundation for every business decision we make, and the world is generating data at an unconstrained rate. IDC says that 90% of today's data was generated in the last two years alone. That might seem unprecedented, but it's actually becoming the new norm, and it brings with it both opportunities and challenges. On one hand, data-driven insights drive better, faster decision making and can improve your operations. On the other hand, managing and analyzing such a huge amount of data can be overwhelming. So what's the solution?

I'd love to see a show of hands here today. And I'm going to be honest with you, there's a lot of light in my eyes, so I need you to hold your hands up really high. How many people here have been overwhelmed by just the amount of data available to them? We've got quite a few in the room. That's excellent - you're not alone. Data is one of the most valuable assets any company can have, if it's used appropriately. And yet according to an Accenture study, only 32% of companies say they're able to realize value from their data. So what is the solution?

Well, data is perishable, and it can lose value quickly. So if your systems and mechanisms aren't set up to capture data as it's being created, you can miss out on its value. Imagine you're sitting here this afternoon and you receive a text alert: there's been fraudulent activity on your bank account. If you're like me, you'd grab for your phone right away. How would you feel if you found out that the alert was for activity that actually took place last month? If you're like me, that brings about an entire range of emotions. Real-time data allows you to act as fast as needed. Organizations are using real-time data to detect fraudulent activity and suspicious behavior as it happens, alerting customers to things like multiple failed login attempts, large transfers, or withdrawals. And by detecting that fraudulent activity in real time, they not only protect their customers' financial assets, they're even beginning to prevent fraudulent transactions.

So to bring out the value of data, you need to act on it as soon as it's created, instead of waiting hours, days, or even weeks. Now, I want to go back to something I just said: data is perishable. The value of recency in data cannot be overstated. But data is also your differentiator. When you're looking to use generative AI and create applications that bring unique business value to your organization, it's your data that's the differentiator. Every company out there has access to the same foundation models, but the companies that will be successful with generative AI - delivering meaningful business value - are those that do so using their own data estate. It's the difference between just creating a generative AI app and creating one that knows your business and your customers deeply.

I've had such a pleasure talking with so many of you here this week at re:Invent, and I've heard - not as much as last year, but still - that cost is top of mind for so many of us. Streaming data technologies allow you to process and store data more efficiently, significantly reducing cost over traditional batch processing and storage methods. Here's a very fundamental example. Let's say I have a website and I'm really only interested in users who have made some type of transaction - say, a purchase, or maybe another type of transaction that's of interest to me. By filtering out all of the irrelevant data in real time, I'm not only reducing the amount of data that I need to consume and process, I'm also reducing the amount of data that I need to store.
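
To make that filtering idea concrete, here is a minimal sketch of a Kinesis-triggered Lambda function that keeps only purchase events and drops everything else before it is processed or stored. The record shape and the `event_type` field are hypothetical, invented for this illustration rather than taken from any specific AWS example.

```python
import base64
import json


def lambda_handler(event, context):
    """Keep only purchase events from a Kinesis batch; drop the rest early."""
    kept = []
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        if payload.get("event_type") == "purchase":   # hypothetical field
            kept.append(payload)
    # Downstream you would forward `kept` to another stream, queue, or store;
    # everything else is discarded before it incurs processing or storage cost.
    print(f"kept {len(kept)} of {len(event['Records'])} records")
    return {"kept": len(kept)}
```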

Continuous real-time streaming is also incredibly important in modern architectures: it's what allows us to keep all of the components of a system in sync with one another and up to date. Think about change data capture, communication between microservices, and even event-driven architectures - it's real-time streaming that helps enable them. Consider a real-time stock trading platform where your goal is to show the current market price, and your users can be anywhere in the world, on any device. By streaming data from the stock exchange into the back end of your system, and then streaming that data across your platform to keep all of the components in sync, you can provide users - regardless of location or device - a current market price.
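
To make that pattern concrete, here is a minimal, hypothetical producer sketch using boto3: each price tick is written once to a Kinesis data stream, and any number of downstream components can consume the same stream to stay in sync. The stream name and fields are placeholders, not part of the example above.

```python
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")


def publish_quote(symbol: str, price: float) -> None:
    """Publish one price tick; every consumer of the stream sees the same update."""
    kinesis.put_record(
        StreamName="stock-quotes",                       # placeholder stream name
        Data=json.dumps({"symbol": symbol, "price": price}).encode("utf-8"),
        PartitionKey=symbol,                             # keeps each symbol's ticks ordered
    )


publish_quote("AMZN", 151.23)
```

Because web, mobile, and back-end components all read the same stream, they all observe the same sequence of price updates.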

I want to put this into play and bring it to life a little with two specific examples of customers who have been able to drive meaningful business value out of real-time data. The first is British Telecom (BT), a telecommunications company with 30 million customers that provides support for 1.2 million businesses. BT has millions of Smart Hub 2 devices, which they use for broadband, Wi-Fi, and phone connectivity. They stream data from those Smart Hub 2 devices containing quality-of-service and performance metrics, and those metrics go back to the customer support representatives who answer calls from customers.

British Telecom wanted to create a brand-new product called Digital Voice, a consumer application that enables high-definition voice calling on top of the UK broadband network. To be successful with Digital Voice, BT needed to understand the real-time state of the network and troubleshoot any hotspots as they occurred. So let's take a look at their previous system: BT was using a self-managed Hadoop cluster with 15-minute batch latencies, and it could only support 12,000 customers. But for their Digital Voice product, BT needed to handle 6.5 million customers.

They chose to use streaming data solutions and, in fact, were able to put together an MVP in just a few days. They chose Flink because they wanted to join multiple streams of data together and enrich that data. On one stream, they had the data from those Smart Hub 2 devices; on the other, they had data about the network topology - the path a call takes as it traverses the BT network. They joined those two streams together in Flink and enriched the data with geographic information, so support representatives could troubleshoot network hotspots in real time, querying the data by device serial number, origin, and destination. The outcome for BT was an 80% reduction in latency: they moved from 15 minutes with their previous batch cycles down to three minutes using real-time streaming.
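
For readers who haven't written a Flink join before, here is an illustrative PyFlink SQL sketch of the general join-and-enrich pattern described above. The stream names, fields, and connector options are invented for the example - they are not BT's - and it assumes the Kinesis SQL connector jar is available on the Flink classpath.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Device quality-of-service metrics (hypothetical stream and schema).
t_env.execute_sql("""
    CREATE TABLE device_metrics (
        device_serial STRING, jitter_ms DOUBLE, packet_loss DOUBLE
    ) WITH (
        'connector' = 'kinesis', 'stream' = 'smart-hub-metrics',
        'aws.region' = 'eu-west-2', 'scan.stream.initpos' = 'LATEST',
        'format' = 'json'
    )
""")

# Network topology with geographic enrichment data (hypothetical).
t_env.execute_sql("""
    CREATE TABLE network_topology (
        device_serial STRING, origin_node STRING, dest_node STRING, geo_region STRING
    ) WITH (
        'connector' = 'kinesis', 'stream' = 'network-topology',
        'aws.region' = 'eu-west-2', 'scan.stream.initpos' = 'LATEST',
        'format' = 'json'
    )
""")

# Print sink stands in for whatever system the support tooling reads from.
t_env.execute_sql("""
    CREATE TABLE enriched_quality (
        device_serial STRING, origin_node STRING, dest_node STRING,
        geo_region STRING, jitter_ms DOUBLE, packet_loss DOUBLE
    ) WITH ('connector' = 'print')
""")

# Join the two streams so metrics are enriched with topology and geography,
# queryable by serial number, origin, and destination.
t_env.execute_sql("""
    INSERT INTO enriched_quality
    SELECT m.device_serial, t.origin_node, t.dest_node,
           t.geo_region, m.jitter_ms, m.packet_loss
    FROM device_metrics AS m
    JOIN network_topology AS t ON m.device_serial = t.device_serial
""").wait()
```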

The next company we'll talk about is Orangetheory Fitness, a fitness company with more than 1,500 studios in multiple countries, about a million members, and more than a billion dollars in revenue. Orangetheory wanted to use technology and a data-driven approach to inspire members to improve their fitness. Now, I'm not sure how many of you have been to Orangetheory, but they have some very interesting fitness concepts - I love them. They have what's called a splat point: a splat point is when you've achieved 84% of your maximum heart rate. Afterburn is when you earn 14 or more splat points in the course of a one-hour workout. I feel tired just saying that. But the thing is, people design their fitness routine to achieve afterburn. So how do you know if you're actually achieving it?

Well, Orangetheory has studio equipment and wristwatch-like devices, and they're able to stream 54,000 data points in the course of a one-hour workout, giving customers a real-time way of knowing whether they've achieved afterburn. But it does more than that: they centralize the data and are able to tell their members whether they're improving their fitness over time.

In both of these examples, we saw data working across organizational and data silos, but we can use data for so much more. We can use it to improve machine learning and artificial intelligence - think of things like personalization, recommendations, fraud detection, even predictive analytics. There are so many more areas. Let's dive into two particular companies and talk about how they're using real-time data with machine learning.

Poshmark is a social commerce marketplace where users can buy and sell new and second-hand home goods, fashion, and electronics. Poshmark has more than 80 million registered users and more than 200 million listings. Poshmark wanted to drive revenue through personalization, and they wanted to use personalization to improve the user experience. In their previous batch-based system, they had some success: they were able to take the insights from a daily batch process and incorporate them back into the user experience. But what they couldn't do was pair those insights with real-time activity from customers. They also wanted to be able to detect fraud in less than a second and catch it before checkout.

So Poshmark decided to use Amazon Managed Service for Apache Flink, and they chose it for its data enrichment. Poshmark brought in all of their data, ran it through Flink to enrich it, and then fed that data into their personalization, recommendation, and fraud detection machine learning models. The results were substantial. In their previous batch-based system, they were able to detect and prevent 45% of account takeover activity; in real time, that moved to 80%. And with personalized search results, they were able to increase the click-through rate on search by 8% at the top of the funnel. That is an incredible difference - it meant their search conversion and revenue increased.

Now, let's hope we have some hockey fans out there today. The National Hockey League has reimagined the fan experience by streaming live NHL EDGE game data and stats. It offers hockey fans valuable insights, and trust me, it keeps those fans on the edge of their seats. Last year, AWS partnered with the National Hockey League to develop the Face-off Probability statistic, part of NHL EDGE IQ powered by AWS. The face-off is one of the most anticipated and contested moments in hockey. Before the puck is even dropped, the Face-off Probability machine learning models take real-time data and historical data and identify where on the ice the face-off is likely to occur, who is likely to take it, and even the probability of each player winning the draw.

So how is that done? You're going to be so excited to hear this - it's so cool. Every single puck has an embedded circuit board, a battery, and six one-inch tubes that emit an infrared signal 2,000 times per second. In the back of every single hockey jersey, or sweater, you'll see a little pouch. Inside that pouch is a tag, and that tag emits an infrared signal 200 times per second. And in every single NHL arena there are between 14 and 16 antennas that collect all of the signals from those tags, along with cameras that handle the tracking functionality.

The National Hockey League chose Amazon Kinesis Data Streams and Amazon Managed Service for Apache Flink to ingest all of this data, process it, and deliver it to the Face-off Probability machine learning model. It's all part of their puck and player tracking technology, powered by NHL EDGE IQ. For the National Hockey League, this has been a game-changing way to keep fans engaged, and they are only now beginning to imagine the multitude of other ways to put it to use.

So we've talked about real-time data, and we've talked about real-time data with machine learning and artificial intelligence, but there are companies out there, like Adobe, that are going even further. Please welcome to the stage Harsh Perk from Adobe to tell us how Adobe is using real-time data to power generative AI applications like Firefly.

Thank you, Mindy, and good afternoon, all. I'm Harsh Perk, a Principal Scientist on the Data and Intelligence Services team in Adobe's Digital Media business unit. The Digital Media business consists of our flagship applications such as Photoshop, Illustrator, InDesign, Acrobat, and Premiere Pro - many of you have been using them - and our new-generation applications like Adobe Express and Adobe Firefly.

Our charter in the company includes managing the data lifecycle across all of our products and services. So what does data lifecycle mean for us? When we look at the data lifecycle, we look at two dimensions: breadth and depth.

With breadth, we have three focus areas:

  1. Bringing disparate data sources and silos together. These include telemetry and operational data, crash logs, transactional and behavioral usage, user-generated content and feedback, and ML inference results.

  2. Focusing on foundational aspects: scale, latency, reliability, throughput, size, and volume - and, more importantly, economics, given the scale at which we operate.

  3. The third one, which is very critical to us, is cross-platform. We have a host of applications and services that range from desktop (Windows and Mac) and mobile applications (iOS and Android) to web services and a large number of backend services.

As we look at depth, it is organized into six steps, ranging from data models all the way to AI and ML. Let's look at some of the use cases that we solve for.

We will now look at how we solve our customer needs holistically with a use case pyramid framework that we have developed. It has six steps. The steps represent the evolution and sophistication of our use cases, and the colors represent maturity, ranging from green to orange: green represents higher maturity, and orange represents relative infancy. This is a point-in-time snapshot that illustrates our evolution. I'm sure all of you can relate to this in some form and can visualize what this looks like for your team, organization, and company.

Let's spend the next few minutes covering the steps of the use case pyramid. The first and foundational step is associated with defining your data models, your taxonomy, your instrumentation and privacy enhancement techniques. This is what we call the privacy by design step.

There are two approaches in the industry - no right or wrong. One approach is taking all the data as-is - what we call garbage in, garbage out - ingesting different types of logs with different structures, putting them into a data lake, and then processing them. With such an approach, there is a cost associated with the subsequent steps, because depending on the use case, each step has to curate, transform, and enrich the data before using it.

And there's another approach in the industry which is less known but evolving fast: clean-in, clean-out data, where the emphasis is on instrumentation with standard structures, schemas, and taxonomies. We've adopted the latter, and it's helped us tremendously in the subsequent steps, which are built on top of this foundation.
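
As a rough illustration of that schema-first idea, here is a minimal, hypothetical sketch of a standardized event envelope in Python; the field names and taxonomy are invented for the example and are not Adobe's actual schema.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class ProductEvent:
    """Hypothetical 'clean-in, clean-out' envelope: every producer emits the
    same top-level structure, so downstream steps can skip per-use-case cleanup."""
    event_name: str                  # from an agreed taxonomy, e.g. "export.completed"
    product: str                     # e.g. "express"
    platform: str                    # "windows" | "macos" | "ios" | "android" | "web"
    user_pseudo_id: str              # pseudonymous id, privacy by design
    properties: dict = field(default_factory=dict)
    event_time: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> bytes:
        return json.dumps(asdict(self)).encode("utf-8")


# The producer conforms to the schema at instrumentation time, instead of
# pushing raw, inconsistent logs into the lake.
evt = ProductEvent("export.completed", "express", "web", "u-123",
                   {"format": "png", "duration_ms": 840})
payload = evt.to_json()
```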

The second step of this pyramid is around data collection with ultra-low latency, typically in the range of about 10 milliseconds, where we enable a range of use cases such as cross-platform observability, personalization at scale, trial abuse detection, and fraud detection.

The third one is around product and feature analysis. This is to better understand user adoption of our products as well as features within our products.

The fourth step focuses on correlated and common insights. Given the breadth of the applications that we have, the data that we collect allows us to identify user journeys and 360 degree workflows - users going from one product to another and from one platform to another.

The fifth one is an evolution where we start influencing the behavior of a user with growth experiments - call it product-led growth or marketing-led growth - and personalization with next-best recommended actions and best offers.

The last step is associated with AI and ML, where we move from predicting behaviors to now generating them - what's known as generative AI.

One of the most important steps that most of us often miss is the ability to close the feedback loop with the customer. That feedback could come by way of relevance, context, or simply user comments.

On the left hand side, you'll see the maturity and complexity of the use cases in this pyramid. And on the right hand side, there are some of the table stakes capabilities.

Now let's look at how we've achieved some of our objectives. Here's a high-level representation of the architecture. You'll see two color themes as we progress through the slides: orange and purple represent AWS technologies, and gray represents other services, whether Adobe's or otherwise.

Our data lifecycle starts with users and devices - over a billion of them over the period we've built this out - where data is collected by cross-platform SDKs sending data to our ingestion endpoint, which is powered by Amazon Kinesis Data Streams. We retain the data here for about seven days, allowing data to persist even if there are issues downstream.

We have six use cases, corresponding to the six pyramid steps we saw earlier. The first is associated with personalization, experimentation, and offers. This is where the data from the Kinesis stream is fed into an EFO Lambda (enhanced fan-out Lambda), where we process data within 10 milliseconds. It goes into our campaign systems, and from there the use cases we serve are growth experiments, offers, and next-best actions.
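
For readers unfamiliar with enhanced fan-out, here is a minimal boto3 sketch of the wiring such a path typically requires: register a dedicated-throughput consumer on the stream, then point a Lambda event source mapping at the consumer's ARN. The stream ARN, consumer name, and function name are placeholders, not Adobe's actual resources.

```python
import boto3

kinesis = boto3.client("kinesis")
lambda_client = boto3.client("lambda")

STREAM_ARN = "arn:aws:kinesis:us-east-1:111122223333:stream/ingestion-stream"  # placeholder

# 1. Register an enhanced fan-out (EFO) consumer: it gets dedicated read
#    throughput and push-based, low-latency delivery via SubscribeToShard.
consumer = kinesis.register_stream_consumer(
    StreamARN=STREAM_ARN,
    ConsumerName="personalization-efo",
)
consumer_arn = consumer["Consumer"]["ConsumerARN"]

# 2. Point the Lambda event source mapping at the consumer ARN (not the
#    stream ARN) so the function reads through the EFO pipe.
lambda_client.create_event_source_mapping(
    EventSourceArn=consumer_arn,
    FunctionName="personalization-processor",   # placeholder function name
    StartingPosition="LATEST",
    BatchSize=100,
)
```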

The second set of use cases is around release readiness, fraud detection, observability, and trial abuse. Again, it uses a similar structure: processing through an EFO Lambda that feeds into our observability system.

The third set of use cases is your typical product analytics, where latency is not as time-sensitive. So the option we use here is a classic Lambda, which helps us with economics; the data is processed within a matter of seconds to minutes. It then feeds into our analytics systems with Experience Cloud and offers self-serve analytics.
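
By way of contrast with the enhanced fan-out path above, a shared-throughput event source mapping with a batching window is a common way to trade latency for cost on this kind of analytics path. The sketch below is illustrative, with placeholder names; it is not Adobe's configuration.

```python
import boto3

lambda_client = boto3.client("lambda")

# Shared-throughput (non-EFO) mapping: the function polls the stream and
# batches records, accepting seconds-to-minutes latency in exchange for
# fewer, larger invocations.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:111122223333:stream/ingestion-stream",
    FunctionName="product-analytics-loader",   # placeholder function name
    StartingPosition="TRIM_HORIZON",
    BatchSize=10000,                            # larger batches => fewer invocations
    MaximumBatchingWindowInSeconds=300,         # wait up to 5 minutes to fill a batch
)
```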

Fourth and fifth use cases are very interesting. They serve a large number of requirements for our team and for internal teams within Adobe. The first one is around exploratory analysis and machine learning algorithms. And the second one is around prompt engineering.

Data from Kinesis is read by Apache Flink and processed within a matter of a few seconds. We have state persistence that allows us to persist the state of a user, depending on the use case. The data is then curated and saved in S3 buckets. From there, we use EMR Serverless to read the data and create outputs and feature data stores, which are used for analysis, machine learning, and prompt engineering for generative AI.
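
As a rough sketch of the batch leg of such a pipeline, the boto3 call below submits a Spark job on EMR Serverless that reads the curated S3 data and writes feature outputs. The application ID, IAM role, script location, and bucket paths are all placeholders.

```python
import boto3

emr = boto3.client("emr-serverless")

# Hypothetical batch step: read the curated data Flink wrote to S3 and
# build feature/output tables for analysis, ML, and prompt engineering.
emr.start_job_run(
    applicationId="00f1example",                                       # placeholder
    executionRoleArn="arn:aws:iam::111122223333:role/emr-serverless-job-role",
    jobDriver={
        "sparkSubmit": {
            "entryPoint": "s3://my-code-bucket/jobs/build_feature_store.py",
            "entryPointArguments": [
                "s3://my-curated-bucket/events/",
                "s3://my-feature-bucket/",
            ],
            "sparkSubmitParameters": "--conf spark.executor.memory=4g",
        }
    },
)
```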

And the last part is around operational profiling - again, a low-latency use case where we use an EFO Lambda that feeds into our graph system.

So this entire architecture is designed for low latency - ranging from a few milliseconds to seconds to minutes, depending on the use case - and it is a hybrid of several serverless and non-serverless technologies.

Real time is very relative in the industry - to some it may be milliseconds. As Mindy was saying, to us it ranges from milliseconds to seconds to minutes.

At the beginning of the session, Mindy mentioned her 10-year journey with AWS. We've been extremely fortunate to be part of this journey with them for the last eight years, helping to pioneer and evolve Kinesis, Lambda, Managed Service for Apache Flink, EMR, ECS, and more. As the industry evolves with generative AI, so will we.

However, what's going to remain front and center is efficiency, ease of use, low latency, scale and economics.

Some fun stats to share with you on this entire architecture, which has served over a billion devices over the years:

  • It processes 100 terabytes of data on a daily basis.
  • Lowest latency is about 6 milliseconds.
  • We process about 40 billion requests in a single day, which translates to about 100 billion events in a single day.
  • We have load tested this to about 300 billion events in a day and can process data up to 350,000 requests per second.

If you have any questions about these slides, I'm happy to take them offline using the contact information or during the Q&A session.

Thank you. And with that, I'll pass it on to Arvind, with whom we've been working very closely, and he can share some of the exciting work they've been doing on the AI/ML side.

Thank you.

We intentionally evolve all of our services - specifically our stream ingestion and storage services - to meet you where you are in your data streaming journey.

As an example, if you're already using Kafka actively and have deep familiarity with Kafka, you should pick our Amazon Managed Streaming for Apache Kafka (MSK) service.

On the other hand, if you do not have prior experience with open-source products like Kafka, and you do not want to be in the business of actively managing clusters and scaling partitions up and down, you should pick our Kinesis Data Streams service.

We evolve both of these services deliberately, depending on the amount of overhead you want to offload to AWS.

Again, as examples: if you do not want to provision, scale, and manage stream capacity, you should pick our Kinesis Data Streams on-demand mode or our MSK Serverless product.

On the other hand, if you want deep control over managing stream clusters and stream capacity, you should pick the provisioned mode in both MSK and Kinesis Data Streams. That's how we meet you where you are in your stream processing journey.
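
As a small illustration of that choice on the Kinesis side, here is a hedged boto3 sketch that creates one stream in on-demand mode and one in provisioned mode; the stream names and shard count are placeholders.

```python
import boto3

kinesis = boto3.client("kinesis")

# On-demand mode: no shard counts to plan; capacity scales automatically.
kinesis.create_stream(
    StreamName="clickstream-ondemand",                 # placeholder name
    StreamModeDetails={"StreamMode": "ON_DEMAND"},
)

# Provisioned mode, by contrast, requires an explicit shard count that
# you scale and manage yourself.
kinesis.create_stream(
    StreamName="clickstream-provisioned",
    ShardCount=4,
    StreamModeDetails={"StreamMode": "PROVISIONED"},
)
```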

We also deliberately evolve all of our services along three core pillars of innovation. These are the pillars that customers have repeatedly told us are the most important to them:

  1. Scale and effective cost. All of our services combined process on the order of trillions of records across tens of thousands of customers, and in excess of 45 petabytes of data per day. Our customers run millions of streams, with the largest streams processing in excess of 15 gigabytes per second of data. At AWS, we handle scale like no other - this is why the largest enterprises trust us with internet-scale data.

  2. Ease of use. Through our Kinesis Data Streams on-demand mode and MSK Serverless, customers do not have to get into the business of actively managing stream capacity. Their bandwidth is freed up to build compelling business applications while they offload the overhead of managing stream capacity to AWS. This significantly lowers the total cost of operations for our customers.

     When it comes to cost, it's useful to think about the total cost of operations and not just the monthly bill that customers pay.

  3. A rich set of no-code integrations. The third pillar is what customers tell us makes it easy for them to move data between popular source and destination services. We achieve this by continuously improving our no-code integration options. Currently, we support in excess of 50 integrations with AWS services, and we also support integrations with popular big data services external to AWS.

Next, let's look at innovations that we've made recently in each of our services.

First, let's look at innovations in MSK. As I mentioned, at AWS we are consistently innovating to improve scale and effective cost for our customers. Today, I'm happy to announce that we're bringing Graviton's effective economics, high performance, and scale to MSK with Graviton3-based M7g instances. You can now save up to 24% in compute costs, and you can achieve up to 29% improved write and read throughput. Further, these instances use 60% less energy. All in all, it's a win-win - for AWS, for customers, and for the environment.
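
For orientation, here is a hedged boto3 sketch of creating a provisioned MSK cluster on Graviton3 (kafka.m7g.*) brokers. The cluster name, Kafka version, subnets, and storage size are placeholders to adapt to your environment.

```python
import boto3

kafka = boto3.client("kafka")

# Provisioned MSK cluster using Graviton3-based brokers.
kafka.create_cluster(
    ClusterName="orders-msk-m7g",                    # placeholder name
    KafkaVersion="3.5.1",
    NumberOfBrokerNodes=3,
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m7g.large",           # Graviton3-based broker type
        "ClientSubnets": ["subnet-aaaa", "subnet-bbbb", "subnet-cccc"],  # placeholders
        "StorageInfo": {"EbsStorageInfo": {"VolumeSize": 1000}},
    },
)
```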

Another excellent example of running at any scale is MSK's Tiered Storage. We're increasingly seeing customers use data ingestion services to store data for longer periods of time - for purposes like training machine learning models, governance and compliance, and reprocessing delayed data in the event of unforeseen outages. Using Tiered Storage, we've seen customers extend their retention periods and reduce cost at the same time.

Let's look at two specific examples:

ironSource's mission is to help developers monetize their apps and content. Developers at ironSource use Kafka to exchange data between their microservices while improving monetization and distribution of their content. With MSK Tiered Storage, they were able to improve retention threefold and reduce cost by one-third at the same time.

Another example is FactSet. FactSet generates user data to improve the effectiveness of investment decisions. As developers at FactSet were building business applications, they were having challenges scaling their observability platform. As a result, they moved to AWS to achieve superior economics, higher scale, and better elasticity. By leveraging Tiered Storage in MSK, they were able to reduce their monthly spend from $30,000 to $13,000 - a full 56% savings. This freed up their developers to build applications that move the needle for their business, and it also reduced their total cost of operations, because they did not have to actively manage the underlying storage capacity anymore.

Our innovations in MSK don't stop there. With the combination of Tiered Storage and Graviton, customers can achieve up to 30% reduced spend on a dollars-per-gigabyte basis. It's quite compelling: if you want to store your data in the cloud using data ingestion services for longer periods of time, you want to do so at effective cost, and you run Kafka workloads, you should be using Graviton in combination with Tiered Storage.
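
For reference, here is a hedged boto3 sketch of switching an existing MSK cluster to tiered storage; the cluster ARN is a placeholder, and per-topic tiering settings still go through your normal Kafka topic configuration.

```python
import boto3

kafka = boto3.client("kafka")

CLUSTER_ARN = "arn:aws:kafka:us-east-1:111122223333:cluster/orders-msk/abcd-1234"  # placeholder

# UpdateStorage requires the cluster's current version string.
current = kafka.describe_cluster_v2(ClusterArn=CLUSTER_ARN)["ClusterInfo"]["CurrentVersion"]

# Switch the cluster's storage mode to TIERED.
kafka.update_storage(
    ClusterArn=CLUSTER_ARN,
    CurrentVersion=current,
    StorageMode="TIERED",
)

# Per topic, tiering is then enabled with standard Kafka topic configs
# (e.g. remote.storage.enable=true plus a shorter local.retention.ms)
# using your usual Kafka admin tooling.
```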

Next, let's look at an example of how we've improved ease of use for our customers in MSK. As you may already know, all of our services run in multiple Availability Zones, and each Availability Zone is separated by miles within a Region. This offers our customers world-class resiliency. But many of our customers want resiliency across Regions, because they want to store data across Regions.

Achieving this by building a solution on your own costs a good amount of money and a lot of effort. With MSK Replicator, which we launched a few months ago, customers can now achieve it with just a few clicks and within minutes. Developers do not have to write any code or deploy complex network infrastructure to achieve cross-Region replication with MSK.

Once again, this significantly lowers the cost of operations for customers looking for multi-Region resiliency. If you want to store your data in multiple Regions, share data with partners in other Regions, or simply build highly resilient applications with your Kafka workloads, you should be using MSK Replicator.

Rounding out our recent innovations in MSK, let's look at a new integration we now support. As you all know, S3 is the backbone for data lakes, and many customers have told us that moving data from MSK to S3 is a common pattern - and they do not want to build application logic to do it.

We now support Kinesis Data Firehose integration with MSK. With this integration, customers can move data from MSK to S3 with just a few clicks and without having to write any code. Kinesis Data Firehose is our fully managed service that helps customers capture streams, transform data into useful formats, and deliver it to desired destinations.

With Kinesis Data Firehose, customers can now deliver data to S3 without having to worry about error handling; Firehose handles automated retries and automated data buffering. Further, customers can also achieve in-line data conversion and transformation, because Firehose supports rich features such as Parquet or ORC data conversion and S3 dynamic partitioning, and it supports server-side schema validation using the AWS Glue Schema Registry.

In short, if your organization has a data lake and you're running Kafka workloads, you should be using our Kinesis Data Firehose integration to move your data from MSK to S3.
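
To illustrate what that looks like in practice, here is a hedged boto3 sketch of a Firehose delivery stream that reads a topic from MSK and lands it in S3. The ARNs, roles, topic, bucket, and buffering values are placeholders; the IAM roles would need permissions for the MSK cluster and the destination bucket respectively.

```python
import boto3

firehose = boto3.client("firehose")

# Delivery stream with MSK as the source and S3 as the destination.
firehose.create_delivery_stream(
    DeliveryStreamName="msk-orders-to-s3",            # placeholder name
    DeliveryStreamType="MSKAsSource",
    MSKSourceConfiguration={
        "MSKClusterARN": "arn:aws:kafka:us-east-1:111122223333:cluster/orders-msk/abcd-1234",
        "TopicName": "orders",                        # placeholder topic
        "AuthenticationConfiguration": {
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-msk-source-role",
            "Connectivity": "PRIVATE",
        },
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-s3-delivery-role",
        "BucketARN": "arn:aws:s3:::my-data-lake-bucket",
        "Prefix": "msk/orders/",
        "BufferingHints": {"IntervalInSeconds": 60, "SizeInMBs": 64},
    },
)
```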

Next, let's look at our recent innovations in Kinesis Data Streams. With Kinesis Data Streams on-demand mode, customers do not have to worry about actively managing stream capacity or scaling shards up and down as their workloads scale.

Just in the last year, we have improved the throughput offered in Kinesis Data Streams on-demand mode tenfold. We now support two gigabytes per second of write throughput and four gigabytes per second of read throughput.

If you have large internet-scale workloads, you want to lower the total cost of your operations, and you don't want to actively manage stream capacity, you should be using Kinesis Data Streams on-demand mode.

We already have media and entertainment giants with huge variations in their average-to-peak throughput actively using Kinesis Data Streams on-demand mode. We also consistently focus on improving ease of use for our customers across all of our services, including Kinesis Data Streams.

A very common pattern is for customers to share the data in one stream with users in other AWS accounts. To make this simpler, we now support cross-account access. With cross-account access, customers can grant permissions to users in other accounts to both ingest data into and read data from a stream in a different account.

Before this, customers had to copy data across accounts in order to achieve cross-account access. Once again, with this feature we have lowered the total cost of operations for our customers and made it incredibly simple to share data. This was a top-requested feature from our largest enterprise customers.
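
As a rough sketch of how cross-account access is granted, the boto3 snippet below attaches a resource policy to a stream so a second (consumer) account can read it directly, with no copying. The stream ARN, account IDs, and statement details are placeholders.

```python
import json

import boto3

kinesis = boto3.client("kinesis")

STREAM_ARN = "arn:aws:kinesis:us-east-1:111122223333:stream/shared-events"  # placeholder

# Resource policy granting a consumer account read access to the stream.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "CrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},  # consumer account
        "Action": [
            "kinesis:DescribeStreamSummary",
            "kinesis:GetShardIterator",
            "kinesis:GetRecords",
            "kinesis:ListShards",
        ],
        "Resource": STREAM_ARN,
    }],
}

kinesis.put_resource_policy(ResourceARN=STREAM_ARN, Policy=json.dumps(policy))
```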

To close out our innovations in Kinesis Data Streams, I'm going to talk a little bit about our integrations.

We support integrations with common data storage services, database services, data enrichment services, data analytics services, and AI/ML services. In total, we have over 50 integrations. We also support integrations with popular external big data companies and their products.

Recently, we announced an integration with Amazon Monitron. With this integration, we've made it incredibly simple for customers to build Internet of Things data lakes. We also actively support an integration with SageMaker, our machine learning product, and it's an integration we're seeing pick up incredibly. This is the integration Mindy Ferguson was talking about when she mentioned how the National Hockey League uses real-time data streaming to deliver real-time insights.

To round up all of our services in the data streaming and data processing realm, I'm going to talk about Amazon Managed Service for Apache Flink. Apache Flink is a popular open-source framework that lets customers analyze real-time streaming data.

We were the first to offer Apache Flink as a managed service for processing real-time streaming data, and we've been doing this since 2018. Formerly, the service was called Kinesis Data Analytics; we recently renamed it Amazon Managed Service for Apache Flink.

With Amazon Managed Service for Apache Flink, we see thousands of customers process real-time streaming data across tens of thousands of applications. To summarize: we consistently innovate to deliver the best experience for our customers, and we do so across three core pillars.

One is cost and scale, the second is ease of use, and the third is a rich set of no-code integrations. When it comes to cost and scale, our recent innovations in MSK add Graviton support, through which you can reduce your compute costs by up to 24% and improve your write and read throughput by up to 29%, while the instances use 60% less energy.

On Kinesis Data Streams, we now support improved scale in on-demand mode. We increased that scale tenfold just in the last year, and we now support two gigabytes per second of write throughput and four gigabytes per second of read throughput.

When it comes to ease of use: in MSK, we announced MSK Replicator, which helps customers set up cross-Region replication within minutes, without having to write a single line of code. In Kinesis Data Streams, we announced cross-account access, which allows customers to share the data in their streams with users in different AWS accounts, once again without having to copy data across accounts.

In our set of integrations, we now support the Kinesis Data Firehose integration with MSK, which allows you to move data from MSK to S3 without having to write code. We also support the Kinesis Data Streams integration with Amazon Monitron. Once again, this allows you to build Internet of Things data lakes using real-time streaming data without having to write a single line of code.

With this, I'd like to hand off to Mindy Ferguson for closing remarks.

Great job, Arvind, and thank you so much. Thank you to Harsh from Adobe as well - that was incredibly powerful to hear. Across every type of industry, you can find customers using AWS streaming services. Sometimes they come to us with very unique use cases - and I'm sure many of you out there have unique use cases, regulatory requirements, and even operational goals that you want to tackle.

At AWS, we will meet you not only where you are on your transformation journey, but we will go along with you on the journey to your future. We'll work backward from the problems you're trying to solve and partner with you on developing the right solutions that meet your needs today and tomorrow. There are some sessions up here on the screen that you can check out - there are a lot of ways for you to begin your data streaming journey while you're still in Vegas. Because whatever happens in Vegas doesn't necessarily need to stay here: you can get started on your data streaming journey right away, today.

So check out these sessions, and check them out again after re:Invent - they're great to look back on and think about. Also, pull out your mobile app and give us a review on the survey. We'd love your feedback - we are a data-driven company, and we want to know how we can make this better for you next time. Harsh, Arvind, and I will be out in the lobby, and we'll take any questions after this as well.

On behalf of all of us who work on the AWS streaming and messaging teams, we say thank you. You continue to amaze and inspire us every single day with what you create, and we are so proud to be a part of your architecture and your innovation. Thank you, and enjoy re:Invent.
