Self-service analytics with Amazon Redshift Serverless

Welcome, everyone. Today we are going to talk about self-service analytics with Amazon Redshift.

My name is Naresh Chainani. I'm an engineering director at Redshift. I've been at Amazon for six years, I've been working on databases for two decades, and I'm having a lot of fun.

I would like to welcome Raj. He's from Broadridge Financial, and he's going to be co-presenting with me, talking about Broadridge's experience with Redshift Serverless.

Thank you, Naresh.

For today's session, the flow is as follows: I'm going to start by talking about what self-service analytics means, why it is important, and what approach Redshift is taking. Then we will segue to Raj to talk about the challenges Broadridge was facing with their current system, how they migrated, and the results of moving to Redshift Serverless. Self-service analytics actually deals with both the compute side and the data side.

Then we are going to talk about data simplification: how enterprises with different data silos are able to query across those silos. And finally, we are going to talk about autonomics and why that is important to get the best out-of-box performance as your workloads scale.

Let's take a look at what self-service analytics means. As you can imagine, and this may be true in your organizations, there are hundreds of thousands of analysts and line-of-business users trying to get insights from their data. To get started, we don't want them to have to worry about decisions about what compute to provision, or to take hours, days, or weeks to get going.

Basically, self-service analytics is where an analyst is able to start querying the data in a matter of seconds. So what are the properties of a data warehouse that is suitable for self-service analytics?

You have some questions you're trying to figure out, and you have some data; maybe it's in S3, or maybe it's already ingested into the data warehouse. You want to be able to get started quickly and start asking questions, and often those initial questions lead to deeper questions rather than answers. It's important to be able to query all your data: it could be ingested into a data warehouse, it could be in a transactional system, or it may not even be in your analytics system at all.

So the ability to query across different data silos becomes very important, so that you are able to make timely, good decisions. And finally, there is getting the best price performance: if the cost is too high, that's going to be a problem from a business perspective; if the performance is not satisfactory, if your dashboard latencies are not where they need to be, that's also a problem.

At Redshift, we took these as guiding principles as we started thinking about self-service analytics, and it's been a journey. We have customers today that have data in S3, for example Parquet files, and they may have some data ingested into Redshift using COPY or some kind of ETL processing.

They could have data in a transactional system like Aurora, or in a streaming system, and I'm going to talk more about streaming systems in a bit. The ability to analyze all of that data is important; otherwise, your insights may be meaningless. It's also super important from a user perspective to have ease of use.

You're not making infrastructure decisions, and you're not worrying about when to take the next backup. And with sensitive data, security is super important; Redshift takes that very seriously. All data is encrypted by default, and data exchanged for query processing is also encrypted. With advanced features like row-level security, dynamic data masking, which is something we announced as a preview this week, and role-based access control, we make it simple to ensure that users who should have access to the data can get in, and everyone else is blocked.
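
These controls are defined in SQL. A minimal sketch of what attaching them might look like through the Redshift Data API, with hypothetical table, policy, role, and workgroup names (treat it as a sketch under those assumptions, not a drop-in script):

```python
import boto3

client = boto3.client("redshift-data")

statements = [
    # Only let analysts see rows for their own region (row-level security).
    "CREATE RLS POLICY region_policy WITH (region VARCHAR(16)) "
    "USING (region = current_user)",
    "ATTACH RLS POLICY region_policy ON trades TO ROLE analyst",
    "ALTER TABLE trades ROW LEVEL SECURITY ON",
    # Mask account numbers for the analyst role (dynamic data masking, in preview).
    "CREATE MASKING POLICY mask_account WITH (account_id VARCHAR(12)) "
    "USING ('XXXXXXXX' || SUBSTRING(account_id, 9, 4))",
    "ATTACH MASKING POLICY mask_account ON trades(account_id) TO ROLE analyst",
]

for sql in statements:
    client.execute_statement(
        WorkgroupName="my-serverless-workgroup",  # assumed workgroup name
        Database="dev",
        Sql=sql,
    )
```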

And finally, what we often see with customers is that some applications start small, and as they become more interesting and provide valuable insights, the number of users grows, the data volumes grow, or both. It's important that consistent performance is maintained as the data and users scale.

I want to take a moment to talk about the scale of Amazon Redshift, and when I start talking about autonomics, I'll tell you why this is important. Every single day, tens of thousands of customers use Redshift, running 750 million queries in aggregate. Collectively, these queries crunch exabytes of data. Redshift is a global service, available across 27 regions and 84 Availability Zones.

We take pride in monitoring the fleet: what are the shapes of the queries that are running, so that we can optimize where it matters for you, the customers.

As an example, Redshift machine learning is a feature we launched a year ago. The other thing we do is look at feature adoption, see how customers are able to use a feature, and add more value as time goes by. Every single week, Redshift ML is doing 80 billion predictions.

This has changed the way customers work with their data: without machine learning, you don't get to those advanced levels of analytics, and there are customers like Jobcase that are using this every day to further their value.

Let's take a deeper look at the three pillars we talked about. It's important that you are able to get started quickly; I mentioned that before. So how does Redshift Serverless fit in? This is a fairly young service; we launched it in July of this year, so it's five months old right now, and it's one of our fastest-growing services, with thousands of users using it every single day.

What users find most attractive about this, and this goes back to line-of-business users, is that they don't have to worry about maintenance windows or patching, and they don't have to worry about security; it's taken care of. In the event of an AZ outage, failover kicks in so that your workloads continue uninterrupted. Tuning of the database is fully automated. What this means is that, with AWS taking care of all of this, you as an analyst or line-of-business user get to focus on querying, paying only for what you use.

Let's zoom in on the Redshift Serverless high-level architecture. At the heart of Redshift Serverless is the intelligent compute management layer. What's intelligent about it? We see query workload patterns vary during the day.

For example, during the daytime you may have hundreds or thousands of users querying your data, while at night it may be lower concurrency with, say, ETL or reporting jobs running. What the intelligent compute layer does is scale the compute based on the workload. It is made up of a set of subcomponents, and we'll dive into some of them. The workload manager first tries to maximize the utilization of the compute that's already allocated, which is what you are paying for.

Once your compute reaches peak utilization, it starts scaling automatically. Think of the 9 a.m. syndrome: the first few users log in, compute is provisioned behind the scenes, and as the workload scales, the compute scales. Maybe an hour or two later things start winding down, and the compute may go all the way down to zero. All of this is managed automatically behind the scenes.

There is one other thing that happens, which is automatic tuning based on query patterns. Maybe it makes sense to sort the data differently or distribute it differently so that your joins are co-located. All of this is at the heart of Redshift Serverless. We have had customers who are using provisioned Redshift wonder what it would take to move some of their workloads to serverless. Is it a long migration?

One design tenet was to maintain the same JDBC interface and SQL language compatibility, so you can essentially lift and shift your workloads; variable workloads and spiky patterns are a very good fit for serverless. Your JDBC endpoints and your Tableau and Looker dashboards continue to work unchanged. At the data layer, it's actually the same data layer that both provisioned and serverless compute are querying.

So we have queries that might go against S3 data in open file formats, or against Redshift managed storage; all of that continues to work. At this point, I would like to invite Raj to talk about how they took this architecture and realized their objectives.

Thank you, Naresh. Good evening, everyone. My name is Raj Barisa, and I'm a senior solution architect at Broadridge Financial Solutions. I'm very excited to be here today and present my experience using Amazon Redshift Serverless to solve some of the challenges we had at Broadridge, like performance, scalability, availability, and cost, for one of the self-service analytics applications that we use at Broadridge.

Before I do that, I wanted to take a few minutes to give a little background about Broadridge Financial Solutions. Broadridge is a global fintech company with $5 billion in revenue. We provide settlement and clearance, regulatory and compliance services, and investor and customer communications. We operate in 100-plus markets, and we process nearly $9 trillion in fixed income and equity trades. We also deliver more than 7 billion customer communications annually across print and digital channels.

Eleven of the 15 global investment banks use Broadridge Financial Solutions for their equity trade processing. Broadridge also provides data analytics, insights, and advisory services; more than 200,000 finance professionals use Broadridge services. Some of the analytics that we provide in the capital markets, wealth, and asset management domains are portfolio performance analytics, P&L analytics, risk analytics, market data analytics, and trade lifecycle analytics. These analytics facilitate better investment planning, help navigate risk and compliance, optimize operations, and drive revenue for financial advisors, portfolio managers, and investors.

We utilize various on-premises and cloud services to provide these analytics solutions to our internal and external customers, and here is the landscape of services that we use across analytics.

A little background about the self-service analytics application that we use at Broadridge: it is a finance regulatory platform. It ingests data from our customers who buy and sell securities on the market. We receive this security transaction data every day and ingest it into our data stores. We implemented a simple data warehouse pattern: the data is ingested and stored in file stores and RDBMS systems, and our ETL jobs extract the data, perform validations, do the transformations and calculations, and then load the data into our on-premises data warehouse. Our customers use this data through a custom self-service analytics application. This application has specialized widgets that allow users to create custom dashboards and customized reports, as well as use search and filtering functions to create custom data queries.

Our financial data management team performs research on the data and makes sure the data quality is good before it is made ready for regulatory requirements. The current on-premises footprint for this application is around 40 terabytes of data, and it runs on a fixed appliance on old hardware. It has daily batch ingestion of data through the ETL, and we perform additional calculations and transformations on the data once it is received. It also has highly variable daily usage: depending on the day, and at month end, quarter end, and year end, the data and workload usage patterns differ. Users run historical queries against this data, perform data corrections, and generate the reports that are required for regulatory purposes.

So what are the challenges? We have five major challenges with this application in the on-premises environment.

The first challenge is difficulty performing data management operations. When users create complex data queries, the query response times are so high that the UI experience is very poor and the custom dashboards load data very slowly.

The second challenge was difficulty processing the data as volumes increased. Data quality validations, generation of reports, and review of the reportable data on pre-published schedules, which we need to send to our clients and vendors, were all impacted.

The third challenge is scaling the infrastructure and environment to our seasonal demands. At month end, quarter end, and year end we have a huge workload volume, and we are not able to scale these systems in that short time to meet those seasonal demands.

The fourth challenge: with data growth, we have increased infrastructure costs and operations and maintenance costs.

The fifth challenge is the operational overhead to maintain and manage these systems and perform maintenance activities like backup and restore, resource scheduling, partitioning, and performance tuning.

With all of these challenges, we are not able to onboard new customers, new applications, or new users onto these existing systems.

So what did we want in the new system? Here are some of the requirements we set for the new platform.

We wanted the new system to process the data in a short time, significantly reduce our query processing times, and provide a better UI experience.

We wanted the new platform to support our seasonal demand, which is up to four times the normal volume.

We wanted to support our research and investigation requirements, which require us to take snapshots of the existing data, build new clusters, perform research on that data, and discard those environments when they are not in use.

We wanted to avoid the static costs associated with licensing and dedicated hardware, and to improve our price performance as well.

We wanted to reduce the operational overhead of managing and maintaining the systems, and also to improve the monitoring, security, and audit capabilities of these systems.

Overall, we wanted to improve data processing times and provide a better experience to our users for their data analytics requirements.

So why Redshift Serverless? We reached out to the AWS team with our requirements and our challenges, and we researched the cloud-native solutions that are available. Redshift Serverless provides the features that fulfill our requirements.

Some of the features that were of interest to us: we were able to create new environments within minutes, compared to the months it was taking us on premises.

The auto concurrency scaling feature creates transient clusters during variable workloads, supporting our workload demands and our performance requirements.

Auto workload management scales resources up and down, and we do not have to manage resource allocations.

Automatic backups and recovery point creation every 30 minutes enable us to create snapshots and spin up new clusters within 15 minutes, which supports our research and investigation requirements.
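
As a rough sketch of that workflow, assuming the Redshift Serverless API in boto3 and hypothetical namespace, workgroup, and snapshot names, taking a snapshot of production and restoring it into a disposable research environment might look like this:

```python
import boto3

serverless = boto3.client("redshift-serverless")

# Take a manual snapshot of the production namespace (names are hypothetical).
serverless.create_snapshot(
    namespaceName="analytics-prod",
    snapshotName="research-baseline-2022-12",
)

# Restore that snapshot into a separate namespace/workgroup used for research,
# which can be deleted once the investigation is finished.
serverless.restore_from_snapshot(
    namespaceName="analytics-research",
    workgroupName="research-workgroup",
    snapshotName="research-baseline-2022-12",
)
```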

With the built-in query and database resource monitoring, you can log in to the AWS console and view the resources, the query executions, the execution times, and the CPU cycles being used by the database. And the best part: we only pay for query execution time. When we are not running any queries against the system, we do not pay for anything.
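
The same information is also exposed through system views. A minimal sketch, assuming the serverless sys_query_history view and a hypothetical workgroup name, of pulling recent query runtimes through the Data API:

```python
import boto3

client = boto3.client("redshift-data")

# sys_query_history is Redshift's serverless query monitoring view; verify the
# column names against your Redshift version.
resp = client.execute_statement(
    WorkgroupName="my-serverless-workgroup",  # assumed workgroup name
    Database="dev",
    Sql="""
        SELECT query_id, status, elapsed_time, query_text
        FROM sys_query_history
        ORDER BY start_time DESC
        LIMIT 20
    """,
)

# The Data API is asynchronous; poll describe_statement / get_statement_result
# with resp["Id"] to fetch the rows once the statement finishes.
print(resp["Id"])
```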

With all of these features in mind, we wanted to compare Redshift Serverless performance against our on-premises system. We reached out to the AWS team about the migration process, and they recommended using AWS SCT, the Schema Conversion Tool, along with additional services like Amazon S3 to transfer the data and load it into Redshift Serverless.

We used a simple pattern for the initial data migration. We installed the AWS SCT tool and the SCT extractor agents on a couple of EC2 instances; it was an easy setup, and we were able to stand up this environment in a couple of hours. We connected the SCT instance to our on-premises data warehouse and ran an assessment report.

The SCT assessment report provides detailed information, as well as a summary, on how much of the database objects can be converted automatically versus how much you need to migrate manually.

It also provides details on the tasks and the complexity associated with them. With the SCT tool you can automatically convert schemas between multiple platforms, so we were able to convert the schemas from our on-premises data warehouse to the Redshift Serverless environment with minimal changes.

We installed multiple SCT extractor agents to migrate the data from our on-premises system to Redshift Serverless. For some of the tables, which contained 1.9 billion records, it took us two hours to migrate the data from our on-premises system to the Redshift Serverless environment.

Some of the learnings from this migration process: Amazon Redshift did not support one of our source data types, so we had to convert that column, split it out into a reference table, unnest it, and then reference it by key. We also had to set explicit column sizes for some of the columns in the source tables, because SCT was treating them as LOBs, which caused delays in the data migration.

Once we migrated our initial data set to the Redshift Serverless environment, we were ready to run the performance test. We identified five different procedures that are frequently used by our end users, of low, medium, and high complexity; they have aggregations, joins, and multiple parameters. We also identified a common user base that actually uses the application.

We set up the performance test environment with custom scripts pointing to Redshift Serverless as well as the on-premises data warehouse, providing 250 unique parameters that produced approximately 5,000 queries across 20 user sessions. Those queries scanned one terabyte of data. The Redshift Serverless environment we configured was 128 RPUs, and the on-premises data warehouse was 768 vCPUs with six terabytes of memory.
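
As a rough illustration of that kind of harness (not Broadridge's actual scripts), here is a sketch that fans parameterized queries out across concurrent sessions against a serverless workgroup via the Data API, with hypothetical SQL and parameter values:

```python
import boto3
import time
from concurrent.futures import ThreadPoolExecutor

WORKGROUP = "perf-test-workgroup"   # assumed workgroup name
DATABASE = "dev"
SESSIONS = 20                       # concurrent "user" sessions

# Hypothetical parameterized query and parameter list (the real test used 250
# unique parameter values producing ~5,000 queries).
SQL = "SELECT COUNT(*) FROM trades WHERE trade_date = :trade_date"
PARAMS = [f"2022-11-{d:02d}" for d in range(1, 26)]

def run_session(param_slice):
    client = boto3.client("redshift-data")
    timings = []
    for value in param_slice:
        resp = client.execute_statement(
            WorkgroupName=WORKGROUP,
            Database=DATABASE,
            Sql=SQL,
            Parameters=[{"name": "trade_date", "value": value}],
        )
        # Poll until the statement finishes, then record status and duration.
        while True:
            desc = client.describe_statement(Id=resp["Id"])
            if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
                break
            time.sleep(0.5)
        timings.append((value, desc["Status"], desc.get("Duration")))
    return timings

with ThreadPoolExecutor(max_workers=SESSIONS) as pool:
    slices = [PARAMS[i::SESSIONS] for i in range(SESSIONS)]
    results = list(pool.map(run_session, slices))
```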

When we ran the test, we were really impressed by the results. There was a huge difference in total query execution time between the on-premises system and Redshift Serverless: Redshift Serverless took approximately 30 minutes for those 5,000 query executions, whereas the on-premises system took almost three hours to complete the same number of queries.

Some additional performance results: we saw an 81% reduction in total query execution time compared to our on-premises platform, which allowed us to process more data and support our KPIs and SLAs. We also saw 2x to 75x query speed improvements, resulting in improved user experience, faster data quality reviews, and faster report generation. For the 20 user sessions and 5,000 query executions, the total Redshift Serverless cost was just $32.

We were really impressed by that number, so we wanted to estimate how much it would cost if we compared an on-premises system running 24 hours a day, 365 days a year, against a Redshift Serverless environment running eight hours a day, five days a week, for a year. We saw significant cost savings: the on-premises system was approximately $300,000, versus approximately $100,000 for Redshift Serverless.
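
A back-of-the-envelope version of that comparison, with the hourly rate treated purely as a placeholder assumption (the talk only gives the roughly $300K versus $100K totals):

```python
# Rough annual cost comparison in the shape Raj describes.
# The rate below is a placeholder, not actual pricing.
ON_PREM_ANNUAL = 300_000          # stated approximate on-prem cost (USD/year)

SERVERLESS_HOURS = 8 * 5 * 52     # 8 h/day, 5 days/week, 52 weeks ~= 2,080 hours
ASSUMED_COST_PER_HOUR = 48        # placeholder effective $/hour at the tested RPU level

serverless_annual = SERVERLESS_HOURS * ASSUMED_COST_PER_HOUR
print(f"Serverless estimate: ${serverless_annual:,} / year")   # ~= $100K
print(f"Savings vs on-prem:  ${ON_PREM_ANNUAL - serverless_annual:,}")
```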

So in summary, we saw five times more data processed within the same time frame as on-premises, which enabled us to perform more tasks, avoid bottlenecks in our reporting and submissions, and support our seasonal demands. We saw 2x to 75x query performance improvements, so we are able to provide a better user experience and meet our data KPIs and SLAs. With pay-per-query execution, we saw significant savings, and the cost for the performance is very low.

We were able to create new clusters from snapshots, which supported our research and investigation requirements. We were able to reduce maintenance costs, maintenance activities, and downtime, which allowed our DBAs to spend more time on data design and data modeling and unlock more opportunities with the data. With the integrated monitoring and logging, we were able to support our security requirements and interactively visualize and analyze these systems in the AWS console. With all of these benefits, we are able to onboard new customers and new applications in a short period of time. As part of our next steps in the serverless journey, we want to do a detailed cost comparison against our on-premises systems and conduct similar performance and cost comparisons.

We want to include additional data ingestion and transformation use cases. We would also like to explore the multi-tenancy and data sharing capabilities of Redshift, which enable more cost savings and fewer data management tasks. With that, I would like to pass it back to Naresh. Thank you, Raj. I'm blown away by those results: an 81% query execution time improvement at 3x lower cost. What this means is that the users you could not onboard before, you can now onboard without sacrificing performance. Thank you so much for sharing that.

So far, we have focused on serverless, the compute simplification side of things. A very common use case is to query different silos of data, and the way customers deal with that today is data pipelines. The problem with data movement pipelines is that they are error prone: you need an on-call rotation to support them, they tend to fail on a Friday evening when everybody has gone home, and once they fail, it's incredibly difficult to figure out where to resume the ingestion.

We want you to not have to deal with data quality issues, duplicates, or stale results. When I speak with customers, I commonly see different data sources: data that lands in a data lake, some of which is ingested into the data warehouse; operational data systems like Aurora; and streaming sources for your log analytics or gaming data, through something like Amazon MSK (Apache Kafka) or Kinesis. All of these feed into the analytics subsystem.

The goal is that you are able to combine this data for timely and reliable insights. Redshift has several different ways to deal with these different sources, and today I am very excited to talk about two new announcements that we made this week. The first is auto-ingestion: auto-copy from Amazon S3 into Redshift.

The most common way to ingest data into Redshift is to run the COPY command: you point it to an S3 folder, give it the credentials, and it ingests the data. You might be doing this continuously, hourly, or daily, whatever your business data SLAs might be. That's great. What we have done with the new functionality is this: you create a copy job, you point it to the S3 folder, and you provide the credentials again. From that point on, when new data lands in the S3 folder, Redshift automatically ingests it into the data warehouse.
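
A minimal sketch of what creating such a job might look like via the Data API, assuming the preview COPY JOB syntax and hypothetical bucket, table, IAM role, and workgroup names:

```python
import boto3

client = boto3.client("redshift-data")

# Hypothetical table, bucket, IAM role, and job name; JOB CREATE ... AUTO ON is
# the auto-copy (preview) extension to the regular COPY command.
sql = """
COPY sales
FROM 's3://my-ingest-bucket/sales/'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
FORMAT AS PARQUET
JOB CREATE sales_auto_copy
AUTO ON;
"""

client.execute_statement(
    WorkgroupName="my-serverless-workgroup",  # assumed workgroup name
    Database="dev",
    Sql=sql,
)
```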

As you're querying, you start seeing the new data come in. This is available as a public preview, which means you can try it out and give us feedback. The next launch this week is Redshift streaming ingestion, for use cases where your data is in a Kinesis data stream or in Amazon MSK (managed Apache Kafka).

What used to happen is that the streaming data would be staged on S3, and then you would ingest it from S3. This provided excellent throughput, but the challenge was that the latency was in single-digit minutes, and for an application trying to do any kind of near-real-time analytics, that's several minutes too late. With Redshift streaming ingestion, we have introduced a way for Redshift to pull the data directly from Kinesis or Kafka, which are the two supported sources, into Redshift, bypassing the staging to S3.

The way you get started is that you create an external schema pointing to the Kinesis or Kafka data source, and then you create a materialized view as a landing zone for the streaming data. This streaming data could be unstructured, semi-structured, or strongly typed. As part of these materialized views, you can even do transformations; for example, you might do some typecasting or more advanced transformations, and you might merge this data into the rest of your analytics database. With this, you continue to get the benefit of the high throughput of streaming ingestion without sacrificing latency; the latency is less than 10 seconds.
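
A minimal sketch of those two steps for a Kinesis source, assuming a hypothetical stream name, IAM role, workgroup, and JSON payloads:

```python
import boto3

client = boto3.client("redshift-data")

# Step 1: external schema mapped to Kinesis (hypothetical IAM role ARN).
create_schema = """
CREATE EXTERNAL SCHEMA kds
FROM KINESIS
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-streaming-role';
"""

# Step 2: a materialized view as the landing zone; kinesis_data arrives as
# VARBYTE, so it is decoded and parsed into SUPER here (one common pattern).
create_mv = """
CREATE MATERIALIZED VIEW clickstream_mv AUTO REFRESH YES AS
SELECT approximate_arrival_timestamp,
       partition_key,
       shard_id,
       sequence_number,
       json_parse(from_varbyte(kinesis_data, 'utf-8')) AS payload
FROM kds."my-click-stream";
"""

for sql in (create_schema, create_mv):
    client.execute_statement(
        WorkgroupName="my-serverless-workgroup",  # assumed workgroup name
        Database="dev",
        Sql=sql,
    )
```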

We were speaking with Adobe, one of our customers, and they were one of our preview customers for this feature. Their use case was to build out the Adobe Experience Platform, where, based on clickstream analytics and web application data, they wanted to personalize and customize the experience for CRM applications, and that's what they are launching using Redshift streaming ingestion. This is now generally available, which means you can use it for your production workloads.

Finally, as another example of data pipeline simplification, I wanted to talk about a feature that we actually launched over a year ago: Redshift machine learning. From Redshift, you now have the ability to create advanced machine learning models, either automatically or by specifying the algorithm, say XGBoost and so on. Under the covers, Redshift uses Amazon SageMaker to build and train the model on your data. The nice thing about using SageMaker is that SageMaker has hundreds of scientists innovating in machine learning, on both the accuracy of the models and the performance of training, and you have all of that available right from your Redshift command line.

Once the models are trained and built, the inference functions are pushed down into Redshift, so the inference actually happens on the Redshift compute nodes. Why is that interesting? What we see is that inferences happen all the time, over large volumes of data, so physics kicks in: if you have to make a remote network hop to do the inference, that becomes the limit. This was limiting customers like Jobcase that were trying to use Redshift machine learning. By pushing the inference functions into Redshift, you get the full MPP scale of Redshift, and as you scale your Redshift compute with your data, you are able to benefit from Redshift ML.
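
A minimal sketch of what this flow looks like in SQL, with hypothetical table, column, bucket, role, and workgroup names: Redshift ML's CREATE MODEL statement, followed by in-database inference.

```python
import boto3

client = boto3.client("redshift-data")

# Train a model with Redshift ML; SageMaker Autopilot picks the algorithm
# unless one is specified. Table, columns, role, and bucket are hypothetical.
create_model = """
CREATE MODEL customer_churn
FROM (SELECT age, tenure_months, monthly_spend, churned
      FROM customer_activity
      WHERE signup_date < '2022-01-01')
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-ml-role'
SETTINGS (S3_BUCKET 'my-redshift-ml-artifacts');
"""

client.execute_statement(
    WorkgroupName="my-serverless-workgroup",  # assumed workgroup name
    Database="dev",
    Sql=create_model,
)

# After training completes (check progress with SHOW MODEL customer_churn),
# the inference function runs on the Redshift compute nodes like any other
# SQL function:
score = """
SELECT customer_id,
       predict_churn(age, tenure_months, monthly_spend) AS churn_prediction
FROM customer_activity
WHERE signup_date >= '2022-01-01';
"""
```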

This is just another way an analyst can start applying machine learning to their data, and this data could now also be streaming in from different sources. So we have talked about compute simplification and data simplification, and a few times we have mentioned autonomics. The goal of autonomics is to provide you with the best out-of-box performance without you having to make tuning decisions. There is a lot of skepticism and a lot of mystery around autonomics: how does it work, how can I control it? I wanted to take a few minutes today to demystify it and talk about the approach that Redshift takes to autonomics.

Remember, early on I talked about the scale of Redshift, where every day we process 750 million queries. That's a wealth of data in terms of query shapes that we can learn from to optimize Redshift to best serve these complex workloads. The first step in this process is identifying the opportunity. For example, we may see access patterns that would benefit from sorting the data if it is not already sorted, or sorting it differently if it is; that way we can cut down on the I/O requirements and significantly speed up query performance.

Once we identify the opportunity, the next thing our models do is estimate the benefit. Why is that important? Because autonomics actually ends up doing additional work; sorting the data doesn't come for free. We want to make sure there is significant benefit to be realized for the workload. Once we have enough confidence in our estimates, we go ahead and take the action, which could be automatically creating a materialized view, sorting the data, or one of many other actions.

At that point, are we done? No, because your workloads are not static: your daytime workload might be different from your nightly workload, and week over week, month over month, your workloads evolve. So we continuously observe shifts in the workload and, in a loop, identify the next set of opportunities. Currently in Redshift, automated tuning spans multiple layers.

First, we have automatic table optimization, or ATO, which some of you may have heard of. It deals with the physical design of tables: how data should be distributed based on which columns queries join a table on, and what the benefit is. We may decide to distribute the data for co-location to optimize join performance. The other part is maximizing compute utilization using the automatic workload manager; that way you get consistent peak performance, you maximize utilization of the compute you are using, and the system scales beyond that point.

And finally, we have recently introduced some advanced optimization techniques, including automatically creating materialized views. This actually makes a huge difference in getting consistent dashboard performance. We'll deep dive into a couple of examples where you can get a sense of how the approach I outlined is being applied, and see some of the thinking behind it. The automatic workload manager is something we introduced a few years ago, and it has been continuously updated and improved based on our learnings.

One key aspect of Auto WLM is adaptive concurrency. Different classes of queries enter the system, like in Raj's example of a 20-concurrency test with simple, medium-complexity, and high-complexity queries. As these queries enter the system, the query planner, which is a cost-based optimizer, uses statistics to determine the best way to execute each query.

Then our machine learning models predict the execution time of the query, based on those statistics as well as the resources the query needs. These are important inputs into two decisions: one is the scheduling of these queries, and the second is how many queries can be packed onto the system so that it is well utilized but not overloaded. Once the system is operating at its peak capacity, the automatic workload manager automatically provisions additional compute to serve your workloads. That way, query performance for your users does not degrade as more users query the data.

Materialized views are actually a fairly old concept; they have been around in databases for 30 or 40 years. The idea is very simple: if there are query patterns that could be precomputed and reused multiple times, you can save a lot of processing time. A couple of years ago, Redshift introduced materialized views, where you would identify those common workload patterns and decide what views should be created. Then, as your queries ran against your base tables, we would automatically figure out which materialized views made sense to use, and as the table data changed, the materialized views would be automatically refreshed. All of that was great.
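
A minimal sketch of that manual flow, with a hypothetical fact table and workgroup; the interesting part is that queries written against the base table can be transparently rewritten by the planner to use the view:

```python
import boto3

client = boto3.client("redshift-data")

# Hypothetical precomputed daily aggregate over a trades fact table.
create_mv = """
CREATE MATERIALIZED VIEW daily_trade_totals
AUTO REFRESH YES AS
SELECT trade_date, symbol, SUM(quantity) AS total_qty, SUM(amount) AS total_amt
FROM trades
GROUP BY trade_date, symbol;
"""

# Dashboards keep querying the base table; matching aggregations can be
# rewritten to read from the materialized view instead of scanning trades.
dashboard_query = """
SELECT trade_date, SUM(amount)
FROM trades
WHERE trade_date >= '2022-11-01'
GROUP BY trade_date;
"""

for sql in (create_mv, dashboard_query):
    client.execute_statement(
        WorkgroupName="my-serverless-workgroup",  # assumed workgroup name
        Database="dev",
        Sql=sql,
    )
```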

But there was one challenge, specifically for line-of-business users: they had no idea which materialized views might be interesting. We saw cases where folks trying to use the feature would create views that were not very useful, which can get expensive because you are now paying to maintain those materialized views. The other angle is lost opportunity: not knowing that something could have been precomputed.

So we took the next step, applied our autonomics approach, and introduced automated materialized view creation. With this, Redshift analyzes the workload and identifies the candidate set of materialized views that are interesting. These views are then ordered by the estimated benefit they can provide, and we start creating them. As time goes by, one of two things happens: some materialized views remain very useful, so they are kept around; others may fall out of favor, maybe because your query patterns have changed, so they are no longer interesting.

In that case, Redshift will automatically drop such views to avoid the overhead of maintaining them. So how do autonomics tie into performance? This is an experiment we ran in the lab about 10 months ago. It is actually without automatic materialized views; it focuses mainly on automatic table optimization. What you see here is a workload, in this case a 30-terabyte TPC-H data set, running continuously with result caching disabled, so all of the processing happens every time. On the y-axis is the time it takes to run the workload; on the x-axis is the time since the workload began. As you can see, initially the workload takes a little over 100 minutes to execute.

The autonomics modules of Redshift were observing, and a couple of hours later they identified some opportunities and applied them, and the workload is now running 20% faster. Mind you, there is no action at this point from the user: no hint, no intent declared. These were just opportunities that were identified. As more time goes by, additional autonomics kick in, and the steady state for this test was a 40% speedup.

So we went from 100 minutes to about 60 minutes for this TPC-H 30-terabyte workload, with no input and no action needed. That is how autonomics, by monitoring the workload as your queries run, taking actions, and correcting itself, is able to add value.

I would like to encourage all of you to realize these benefits. To make it easy to give Redshift Serverless a try, there is a $300 credit, which you can use with either sample data or your own workloads, and within minutes, like Raj did at Broadridge, you can test it for yourself. Thank you so much for joining us. Raj and I will stay around for any questions, and before you leave, please fill out the session survey. Thank you so much for joining.
