Break down data silos using real-time synchronization with Flink CDC

Hi, everyone. I wanted to start by asking a few questions. How many of you have data spread across multiple data silos? (Some raise hands) And who is also running ETL jobs, maybe daily or weekly, trying to access the data across all of those silos and having to reconcile it in a central repository?

So in this session, we're going to see how we can simplify moving data across all of those data silos while also doing real-time processing, so that we can extract value and gain insights as we consume all of those changes across our data silos.

My name is Francisco Murillo. I'm a Streaming Solutions Architect at AWS. I've been working here for the past four years, and I'm originally from Venezuela and based in Spain. And today with me is Ali, who is a Streaming Specialist Solutions Architect as well. And we are going to get into our topic.

So first we are going to get into data integration challenges that exist when you want to move the data from the data silos and then you want to process them, gain immediate insight or you want to also send them to other destinations in order to sync other targets and other systems.

And we are going to compare batch processing with stream processing for this purpose. We are going to do a quick refresher and introduction to Apache Flink for those who are not familiar with it. And we are also going to talk about why we think Apache Flink can help us solve this problem and why it is a good fit for this solution.

We are going to also show some reference architectures and at the end, we are going to have a summary and call to action.

So in the past, decision making revolved around enterprise data warehouses. All the data from different sources was going into a data warehouse solution, and that was what fed the business intelligence.

As companies started having more data silos, they had to purchase or add another data warehouse engine to the stack and connect that to the business intelligence. Then they had two different business intelligence dashboards and had to figure out a way to join that data somehow.

As the number of data silos grew, this solution became expensive. It doesn't support multiple personas, it requires a central team to manage everything, and it comes with vendor lock-in and licensing costs. As a result, when the needs of the company grow, the central team cannot scale to the size of the organization.

As a result, the organization loses its agility and starts seeing SLAs being missed. On top of that, data grows exponentially every year, and decision makers need faster insights than before in order to make decisions that have an impact and help the business grow.

As a result, traditional analytics, which revolved around enterprise data warehouse engines, evolved into the data lakehouse architecture. A data lakehouse opens doors for opportunities such as machine learning, business intelligence and analytics, and also data warehousing.

The reason the data lake is at the center of the modern data architecture is that it can store massive amounts of data, it's durable, reliable, and very cost effective, and it can store data whether it's relational or non-relational, scaling across different types of storage.

It also drives a broad range of analytics and other use cases, such as machine learning and stream analytics. Consumers can even read the data directly from the data lake without moving it: with support for engines such as Apache Spark, PrestoDB, or Amazon Athena, you can query the data right where it is, and you don't have to move it in order to get insights.

And as I said, it's designed for low cost and high performance for analytics. So in order to move the data from their data silos into a lakehouse architecture, customers need a process that first extracts the data from those sources.

These processes are often batch jobs that take a snapshot of the data. Each time they run, they take a snapshot, and they have to move a lot of data from those sources in raw format into the data lake.

Then there are other batch jobs that subsequently have to run, transforming this data, harnessing the insights we need from it, and writing it back into the data lake in a consumable format for business intelligence.

And when we have a data lake and a data warehouse in a lakehouse architecture, the data warehouse engine will need to load the data that is ready to consume in order to provide it to the business intelligence application.

So as you can see, there are two issues with this kind of architecture. One is the latency, because we have to wait for all of these processes to complete. The other is parallelism: we only have a single job running at a time, and if the size of the data grows, we have to wait longer for these jobs to finish. And we know the value of data diminishes over time.

So for example, in a fraud detection scenario, if we get a fraud alert about something that happened this morning, it's a little bit too late for that, isn't it? We need to act upon the data as soon as possible, preferably as it's happening.

Streaming data technologies solve this problem because they work differently: they run continuously and can help us harness all the data in real time. They can also cost-optimize our processes by processing only the data that changes. They are built for scalability, so we can scale them as the throughput and the size of the data grow, and that way we get faster insights and better analytics.

We can also use these processes if we want to keep other systems synced with the source systems. In contrast with batch processing, stream analytics would look like this: we have our source systems, and this time we run a streaming process that captures only the data changes.

This process is often called change data capture, or CDC. A CDC process generates change events, and each event includes information about what has changed: what the record looked like before and after, whether it was an insert, an update, or a delete, and the time the change happened. That's a lot of information we can use afterward.

Because this is streaming data, we need streaming storage. We can use streaming storage such as Apache Kafka, Amazon MSK, or Amazon Kinesis to store this data, and then feed it to stream processors, anything that can process data events.

And because we are processing an unbounded stream of events, the output of those processes is also an unbounded stream of events, so streaming storage is again a perfect choice for storing the output of this stream processing.

There are stream processing frameworks that can connect to other systems, and we can use them to feed the streaming data to those systems, whether it's a relational database that we want to keep in sync with the source system or a data warehouse that supports streaming as an input.

So we can send the output from the stream processors to the destination as well, or we can have microservices subscribe to topics in the streaming storage in order to receive the output of the stream processors.

For example, a microservice that sends fraud alerts only needs to subscribe: it would be an event-driven process that receives events, processes them, and sends a notification to the appropriate receiver.

But customers who tried to build custom streaming applications found that these processes are difficult to set up. They are often hard to scale because it's not easy to just increase the number of workers.

These processes often need to be stateful, and as a result they're also hard to make highly available, because we need to account for scenarios where a worker fails and another worker running in a different data center needs to take over its tasks. That's not easy.

We need some level of orchestration, and as a result of failures and retries, we often find that even a simple stream processor we wrote ourselves can produce inconsistent results, because it might process an event twice and produce a result we don't expect.

Because of all that, these processes become error prone, complex to manage, and expensive to maintain.

Apache Flink is a very popular framework for all of those reasons: it's open source, it's built for stream analytics, and it solves a lot of these problems.

It can be used to build event-driven applications, streaming analytics, and streaming ETL. It can also run in batch mode, so we can have batch jobs and streaming jobs while maintaining only one code base.

It provides processing guarantees to overcome the data inconsistency challenges I mentioned, such as exactly-once state consistency and processing events based on event time rather than the wall-clock time of the processor. It also has mechanisms to handle events that arrive late.
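As a rough illustration of how event time shows up in Flink SQL (the table, topic, and broker names here are made up for the example), a watermark on the source table tells Flink to order and trigger computations by the event's own timestamp and how long to wait for late arrivals:

```sql
-- Hypothetical source table: the WATERMARK clause makes Flink process rows by the
-- event's own timestamp and tolerate events that arrive up to 5 seconds late.
CREATE TABLE clicks (
  first_name STRING,
  click_time TIMESTAMP(3),
  WATERMARK FOR click_time AS click_time - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'clicks',
  'properties.bootstrap.servers' = 'broker:9092',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);
```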

And in terms of simplicity of use and adoption, Apache Flink provides a range of APIs: from the lowest level, where you can access timers and state and write your own operators, all the way up to the level where you only write a SQL statement and process your events in real time with familiar SQL queries.

In terms of scalability, it can adapt to virtually any throughput because it's highly scalable and built for scaling horizontally.

So you can basically just scale out the Apache Flink workers, and that way you scale out to meet the throughput that you're receiving as input. There is a very vibrant open source community supporting Apache Flink, and in fact AWS also contributes to open source Apache Flink: we have contributed over 300 pull requests, most of which have been merged, and we have conducted more than 300 code reviews.

Apache Flink can connect to and read data from streaming storage such as Amazon Kinesis Data Streams, Apache Kafka, or Amazon MSK. It can also connect to other data sources such as Amazon RDS and Amazon Aurora to help us with CDC ingestion, and to messaging systems such as RabbitMQ, and it helps us process that data.

And then once the result is ready, it can write that result not only to the streaming storage we mentioned, but also to the Hadoop Distributed File System (HDFS) or Amazon S3, or to systems other than Apache Kafka and Amazon Kinesis, such as Apache Pulsar or even Apache Cassandra.

So customers tried running open source Apache Flink on Amazon EC2 or Amazon EKS and found that there is still infrastructure to manage, and processes and DevOps practices they need to build in order to manage scalability, monitor the job, know when to scale, and make sure the workers have enough resources.

They did not want to do that, and asked Amazon: why aren't you building that? So AWS built a service. It has been around for a while; it was called Amazon Kinesis Data Analytics, and recently it was rebranded to Amazon Managed Service for Apache Flink.

Amazon Managed Service for Apache Flink is basically the easiest way to transform and analyze your streaming data. You can run Apache Flink applications continuously and scale them automatically by enabling auto scaling on the job, and it gives you a platform for running streaming jobs.

It also gives you a development environment, Managed Service for Apache Flink Studio, so you don't have to set up your own development environment and spend time on that. You get a quick development environment in the form of a Zeppelin notebook where you can write queries, interact with the streaming data, and basically start exploring it interactively. Once you're done, you can deploy the notebook as a job that runs continuously.

The service then takes care of Apache Flink checkpointing and snapshots, and it backs up the state so you don't lose data; the state is stored as a backup and you can recover if anything fails, for example if the hardware fails. It basically manages all the infrastructure in those scenarios.

You can also use the Apache Flink APIs that I mentioned; all of them are fully available when you use this service, from SQL to Java, Python, or Scala.

Customers who run on premises have to perform all the tasks on the left side, from power and networking in the data centers and the hardware life cycle all the way up to building, deploying, and maintaining the streaming applications. When customers move to AWS but use Amazon EC2 or self-managed Kubernetes, they get some level of management, from power and networking up to operating system installation. But they still need to handle operating system patching, make sure they provision enough infrastructure and capacity for running the Apache Flink job, take care of encryption and security, and build for high availability and resiliency. Those are many tasks customers had to manage. With Managed Service for Apache Flink, customers can focus only on building their streaming applications and deploy them on AWS in a very easy way, without having to worry about all of those tasks.

All right, I'll hand it over to Francisco. Thank you, Ali.

So now that we know what Apache Flink is and how we can deploy it on AWS, let's see how we can use Apache Flink to break down our data silos. As Ali mentioned, we can use Apache Flink to consume data from Apache Kafka clusters and from Amazon Kinesis Data Streams. So if you are already capturing those changes from your data sources, you could already start using Apache Flink to do the processing, consume that data, and send it to a destination or to other consumers.

However, with Apache Flink you also have the possibility of using the Flink CDC connectors, which allow you to connect directly to the database, do a snapshot migration, or consume the data directly from the transaction logs, without having to manage a separate CDC connector or any other CDC tool and ingest that data into an Apache Kafka cluster first.

It is available through the APIs that Flink offers, the DataStream API in Java and the SQL API. And as of now, it works with MySQL, MongoDB, PostgreSQL, SQL Server, and Oracle.
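As a hedged sketch (connection details, database, and table names are placeholders, and the exact options depend on the Flink CDC connector version you use), a MySQL CDC source declared through the SQL API could look like this:

```sql
-- Hypothetical Flink SQL source table using the Flink CDC MySQL connector.
-- 'scan.startup.mode' = 'initial' takes a consistent snapshot first and then tails
-- the binlog; 'latest-offset' would skip the snapshot and read only new changes.
CREATE TABLE person_cdc (
  id INT,
  first_name STRING,
  last_name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'my-rds-endpoint',
  'port' = '3306',
  'username' = 'flink_user',
  'password' = 'flink_password',
  'database-name' = 'appdb',
  'table-name' = 'person',
  'scan.startup.mode' = 'initial'
);
```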

When we talk about how Flink CDC works, we need to understand how Apache Flink manages CDC under the hood as it consumes those changes. Apache Flink can transform all of those events and records into a materialized view, which they call a dynamic table. This is how Apache Flink, as it consumes the data, is able to apply the inserts, updates, and deletes to the information we're consuming from all of those tables.

Let's see how this works with an example. To the left, we have the stream of events that we're consuming, the changes that are happening in the database. With Apache Flink, not only are we able to consume that data, we can also apply continuous queries or transformations to it in order to gain real-time insights as soon as those changes happen.

So we can run a continuous query, for example a simple aggregation of how many clicks our users are doing on our website, where we're capturing those events in our transactional database. Later on we'll decide where we're going to output or send this information; within Flink, we're going to send all of those transformations or aggregations to a stream.

So on the first insert, we capture a click in our database from Alex. When we run our continuous query in our Apache Flink application, we get a simple record: Alex has done one click. That is output to the downstream operator in Flink, which receives the information as well as what type of change this event represents, as we can see: Alex, one, insert. As we continue to consume those changes directly from the database, we keep appending to and modifying our dynamic table and sending those records out as a stream of events.

Now, the question is what happens when we receive an update, a new record that we're going to aggregate with our continuous query. We have received a new click from Alex, and what we see in our dynamic table is that it doesn't just append this event; the table is modified internally, and we send that update out to the downstream applications. The same happens as we consume the click events that Mary is generating, and the dynamic table handles not only the updates but the deletes as well.

So when we have a delete operation in our database, it is reflected in our dynamic table as well as in the rest of our application. But notice something: even though our dynamic table is constantly modified internally within the Apache Flink application, we're sending all of those aggregated events out to the stream.
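As a rough Flink SQL rendering of this walkthrough (reusing the hypothetical clicks table sketched earlier), the continuous query below is what maintains the dynamic table and emits an updated row downstream every time a user's count changes:

```sql
-- Continuous query: the result is a changelog, so each new click for a user
-- retracts the old count and emits the new one to the downstream operator or sink.
SELECT first_name, COUNT(*) AS click_count
FROM clicks
GROUP BY first_name;
```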

So then we have the challenge of choosing the destination for this real-time synchronization as we extract our data from the databases. Because if we send that data to a stream, what the sink operator will see is many events: the first event for Frank, the first event for Alex, and all of those updates as individual events.

So if we want to run that continuous query against our destination, we're going to see inconsistent results. Here is where we have to decide what our destination for CDC data is going to be. If we choose an append sink, all of those events are simply written to our destination or our data lake, and when we run the query there, we see outliers, events that shouldn't be there.

However, what we want is an upsert sink. Then, based on a primary key, as we're doing with our users, even though we have all of those individual events that were calculated with the Flink CDC connector and our continuous query, we always end up with the same result that we had inside the Apache Flink application in the dynamic table.
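One way to get that behavior with Flink SQL, sketched here under the assumption that the downstream system is another Kafka topic (topic and broker names are placeholders), is the upsert-kafka connector, which keys the output on the primary key so consumers see one current row per user:

```sql
-- Upsert sink: rows are written keyed by first_name, so later values for the same
-- key overwrite earlier ones instead of piling up as separate append-only events.
CREATE TABLE click_counts (
  first_name STRING,
  click_count BIGINT,
  PRIMARY KEY (first_name) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'click-counts',
  'properties.bootstrap.servers' = 'broker:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);

INSERT INTO click_counts
SELECT first_name, COUNT(*) FROM clicks GROUP BY first_name;
```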

But how can we do this if we're working with a data lake? Well, here is where transactional data lakes come to the rescue.

So in order to process and write CDC data, as we saw in the example, we need a data sink that is idempotent, such as a key-value store, or a sink that allows us to do upserts. But then again, if we're trying to leverage a data lake, as we saw with how traditional analytics has been evolving, we run into the issue that Amazon S3 works with immutable objects.

So we need something that allows us to run upserts and deletes on our data in Amazon S3. You may have heard of transactional data lakes, or open table formats. First of all, they give us snapshot isolation as we continuously stream data into the lake, so we can guarantee some consistency for our users when they access and query this information. They also fit perfectly with CDC, because we can leverage a primary key, as we did with the clicks and our users, which is what we use to upsert our data lake as we stream all of those changes.

We also have to deal with the small files problem: to achieve near real-time synchronization, we are writing constantly and continuously to our data lake, which leaves us with far too many files, and that is not optimal for querying or post-processing. Here is where transactional data lake formats let us either run asynchronous compaction to optimize those file sizes, or run a separate batch job, apart from our stream processing, to do the compaction and backfills separately.

And lastly, as we saw with our database, we also have to maintain consistency across our data silos by being able to do upserts and deletes if we want to comply with privacy regulations.

Modern data lakes use open table formats. The most common ones are Apache Hudi, Apache Iceberg, and open source Delta Lake. These frameworks allow us to solve our challenges: since we're already using Flink to consume the data directly from the databases, we can leverage the Apache Flink connectors that also let us write to S3 using these transactional table formats.

You may be more familiar with Apache Hudi or Apache Iceberg being used with Apache Spark. But since we want to consume that data from the databases and do real-time streaming analytics, Apache Flink has direct integrations to write in these formats into Amazon S3.

So at the beginning, I made a promise that I would show you how we can simplify how we consume data directly from the databases. Instead of having too many components, one option could be this: we have our data source, which in this case is an RDS database, let's say MySQL, and we use Amazon Managed Service for Apache Flink with the Flink CDC connectors to consume the data directly from our tables.

We can specify whether we want to consume the data from the beginning, doing a snapshot migration, or consume the data only from the transaction logs, so we capture just the latest changes since the moment we started our Flink job. Then, if we want, we can write that data into a data lake.

We can leverage one of those open table formats and write that data into Amazon S3, which is then compatible with and queryable through Amazon Athena, and where we can apply data governance with AWS Lake Formation. For other use cases where we also want to send that data back into a stream, we can again leverage Apache Flink by using the Amazon MSK connector or the Amazon Kinesis Data Streams connector.

That allows us, by using the streaming ingestion feature of Amazon Redshift, to ingest the data directly into our data warehouse, or to empower other consumers to work with the changes or aggregations that we're producing in real time. But then again, you might be thinking: what happens if I actually need access to the data as soon as it's produced, from multiple applications? Maybe the Flink CDC connectors are not the best fit for that use case.

Well, in that case, we can run Amazon MSK and run Kafka Connect connectors on Amazon MSK Connect, which also allow us to consume those changes from databases and push that information into Amazon MSK. You can also leverage AWS Database Migration Service, which lets you consume data from databases and push it to a destination; it's more commonly used for database migrations, but it can also write that data into a Kinesis data stream or an Apache Kafka cluster.

And once we have all of those change data capture events in our Amazon MSK cluster, even though in this architecture we might not be using the Flink CDC connectors, we still leverage the dynamic table concept in our Flink applications, where we can apply and run that continuous query.

If we're doing our transformations and aggregations with the Flink SQL API, we send that data again into our data lake or to a separate Kafka topic that is ingested directly into Amazon Redshift. But then again, if you need to, you could also run your stream processing with AWS Lambda, or run an Apache Flink application on an Amazon EMR cluster if you want more control over your environment.

So what I want to show you now is how this looks live, through some screenshots of how it actually works. We're going to see this exact architecture: we're going to ingest some events into our database, run a Debezium connector on Amazon MSK Connect, and use Amazon Managed Service for Apache Flink Studio to consume that data.

From the Kafka topic, we run a continuous query and send the data into Amazon S3. So first we connect to the database. We have a simple person table in our Amazon RDS instance, into which we insert a record, an event with my colleague's name. Because we're using Amazon MSK Connect with Debezium, we can then consume those changes from the database.

Debezium also allows us to do some format transformations if needed, and we already have that change available as we consume the data from our Kafka topic. When we go to Amazon Managed Service for Apache Flink Studio, what I have access to is a Zeppelin notebook environment that lets me run Apache Flink applications using Scala, Python, or SQL.

I can then create a table that references the topic I want to consume from. Since I want to run that continuous query, I provide the schema, the topic, and all the connection properties I need in order to consume the data from Apache Kafka. Then I can run the same continuous query we were running before, selecting the first name and a count from that specific topic.
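The exact DDL isn't shown in full on the slides, but a hedged approximation of that notebook table and query, with placeholder topic, broker, and schema, could look like this:

```sql
-- Table over the Debezium topic: the 'debezium-json' format interprets the CDC
-- envelope (before/after/op) and turns it into an insert/update/delete changelog.
CREATE TABLE person (
  first_name STRING,
  last_name STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'mysql.appdb.person',
  'properties.bootstrap.servers' = 'msk-broker:9092',
  'properties.group.id' = 'flink-studio',
  'format' = 'debezium-json',
  'scan.startup.mode' = 'earliest-offset'
);

-- The same continuous query as before, now against the CDC-backed table.
SELECT first_name, COUNT(*) AS click_count
FROM person
GROUP BY first_name;
```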

As we can see, we're consuming and displaying that data within the notebook. At this moment we haven't deployed a Flink application and we're not writing anywhere; this is a way to run Apache Flink code interactively and see the results in real time as we modify and insert records into the database.

But then again, let's insert more records, since we want to see how many clicks or events we have for Ali in our table. And funnily enough, you can see that we're not seeing an append; we're seeing, directly in the notebook environment, that the count by first name, our primary key, is actually updated on the fly.

But then again, this is how the dynamic table looks within the Apache Flink application. If we were to send this data to a Kinesis data stream or an Amazon MSK topic, remember from the animation in the slides that we would see each individual update event as that continuous query runs.

So that's where we need to choose which open table format to use. The most popular ones at this time are Apache Hudi and Apache Iceberg, and for CDC, what we see customers use more is Apache Hudi. Why? Because it has direct integration with AWS DMS; it lets you specify the primary key on which the upserts run, as well as which column from the table to use to decide which record is the latest update, the last modified event that happened to your table; and it lets us run asynchronous compaction as we write continuously to our data lake, making sure we can compact all of those small files and not have any issues when querying our data.

So how can I then write my data, my continuous query, into Amazon S3? Well, I create an Apache Hudi table, where I only need to provide the schema, the S3 bucket I want to write to, my primary key, which is the first name, and how frequently I want compaction to run. Then all I need to do, and it's a little bit small on the slide, is run an INSERT INTO query in which I consume the data from my Kafka topic and insert it into my Apache Hudi table, as the data is constantly being written and I'm modifying and adding new records.
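A hedged sketch of that Hudi sink, reusing the person table from the previous step; the bucket path is a placeholder, and the exact connector option names can vary between Hudi releases, so treat this as an outline rather than the demo's exact DDL:

```sql
-- Hudi table on S3: the primary key drives upserts, and asynchronous compaction
-- periodically merges the many small files produced by continuous writes.
CREATE TABLE click_counts_hudi (
  first_name STRING,
  click_count BIGINT,
  PRIMARY KEY (first_name) NOT ENFORCED
) WITH (
  'connector' = 'hudi',
  'path' = 's3://my-bucket/click_counts_hudi/',
  'table.type' = 'MERGE_ON_READ',
  'compaction.async.enabled' = 'true',
  'compaction.delta_commits' = '5'
);

-- Continuously upsert the aggregated counts into the Hudi table.
INSERT INTO click_counts_hudi
SELECT first_name, COUNT(*) FROM person GROUP BY first_name;
```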

What I see in my Amazon S3 bucket is the Hudi metadata that keeps track of the partitions, the changes, and the files my consumers need when they query, so they access only the files that represent the latest changes seen in my database.

And since this data is already in Amazon S3, I can use consumers that can work with and query Apache Hudi tables, such as Amazon Athena. With Amazon Athena, I'm able to query the Apache Hudi tables while I'm writing the data in with my Managed Service for Apache Flink application.

As we can see here, at this point we have two records, Francisco and Ali, with one and two clicks. If we then go and insert several more records, what you see at the bottom are the actual events, which is what the dynamic table view would look like if I were just using the Studio notebook for interactive analysis instead of writing out.

But if I then go back to Amazon Athena and run the same query, instead of the errors or duplicates we often see when breaking down our data silos and landing CDC data in our data lake, we get a consistent result. By combining streaming analytics and open table formats, we're able to reduce the ingestion latency, have a materialized, consistent view of how the data looks in my database, and, on top of that, run queries, analytics, and aggregations directly.

Of course, in this particular use case we're just doing a simple count, but with the Apache Flink SQL runtime you can run window queries, joins, and enrichment from external data sources, and register user-defined functions for more complex logic that goes beyond simple SQL querying. You're able to do all of this simply by using these APIs, either consuming the data from the Kafka topics or, then again, using the CDC connectors to consume the changes directly from the database.
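For instance, here is a hedged sketch of a windowed aggregation using the Flink SQL window table-valued functions, assuming a source table with an event-time attribute and watermark like the clicks table sketched earlier:

```sql
-- Clicks per user per one-minute tumbling window, computed on event time.
SELECT window_start, window_end, first_name, COUNT(*) AS click_count
FROM TABLE(
  TUMBLE(TABLE clicks, DESCRIPTOR(click_time), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end, first_name;
```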

So, a little summary of what we have seen today. Apache Flink lets us do streaming analytics as well as batch processing, as Ali mentioned, and we can use the broad range of connectors available. So if your applications are currently running in batch and you're doing daily extractions from your data sources, one option to migrate to streaming analytics is to leverage Apache Flink for batch processing first, get familiar with Apache Flink that way, and then switch to extracting and gaining insights as the data flows in real time.

Because even though we access that data in batch from our databases, all of those changes are actually happening in real time. So with Apache Flink and dynamic tables, we are able to reduce our latency and the time it takes to make better informed decisions. With Flink CDC, we can simplify things: we don't have to manage Amazon MSK Connect, a Debezium connector, or AWS DMS, which lets us consume that data in a simplified manner.

But then again, we do recognize that some use cases need that data available in your streaming storage, and you can still leverage the dynamic table concept to apply those CDC inserts, updates, and deletes. If we're choosing a destination for CDC changes, we need to choose an idempotent sink: either a key-value store or, if we're using a data lake on Amazon S3, one of the open table formats.

And lastly, when choosing which technology to use, what we recommend is to use the managed service offerings on AWS, so that the heavy lifting of a migration is taken care of, you don't have to maintain infrastructure, and you only have to worry about how you consume your data and how you can optimize your application.

So, a few links, or a few announcements I would say. We offer a lot of documentation and samples on how you can deploy this architecture. The first link is the Amazon Managed Service for Apache Flink documentation, if you want to know a little more about how it works, which regions it's available in, and the pricing. You also have information on how dynamic tables work, directly from the Apache Flink docs.

And if you actually want to deploy a simplified sample that creates a database, creates a Managed Service for Apache Flink Studio notebook, and uses Flink CDC to write your data into Hudi, we have an end-to-end sample available on GitHub, as well as a blog that explains how you can run this by leveraging Amazon MSK Connect.

And lastly, to show the flexibility you have with Apache Flink, we also have a managed Flink repository with many samples, one of which is how you can write your data into your data lake using Apache Iceberg.

Hopefully, although I would say they're probably already packing up the booths at this time, you were able to go to the Modern Applications and Open Source zone, which was in the right corner below the AWS Village. And I know we have a little bit of extra time. We actually planned this on purpose so we can give you some time if you want to ask questions or tell us a bit about the use cases you have today.

So with that, I would ask, before or after you ask your questions, please remember to fill out the survey for the session you attended today. And first of all, thank you so much for making the time. I know it was a late session on the last day, so I really appreciate you taking the time to be here, and also Ali for accompanying me. Thank you all so much for being here today.
