Modernize your data warehouse

Good afternoon, everyone. Thanks for joining the session today. My name is Neeraja Rentachintala. I lead product management for Amazon Redshift. I have two co-presenters with me:

  • Shruti Warlick - Senior Manager for the Analytics Specialist SA team at AWS
  • Shyam Malhotra - Director of the Janssen North America Commercial Data and Insights group at Johnson and Johnson

Today, we are very excited to talk to you about how AWS and Amazon Redshift can help modernize your data warehouse environments.

Before I dive in, can I get a quick show of hands: how many of you already use Redshift today? OK. And how many of you are using an on-premises system, database, or data warehouse and looking to migrate to the cloud? Nice. We have a good mix here.

So this is our agenda today:

First, I will start with some of the motivations for considering data warehouse modernization.

And then I will talk about some of the key tenets and the use cases of a modern data warehouse.

Then I will go a bit deeper into the capabilities of Amazon Redshift that enable this modernization.

Then Shruti will do a demo, followed by Shyam sharing their data warehouse modernization journey at J&J.

Data in organizations is growing exponentially. We see that customers want to derive insights from all their data and make it available to all their users across the organization.

Traditional systems are complex and rigid, and do not scale for growing data and analytics demands. We see customers struggle with performance at scale and rising costs, and, most importantly, they are unable to get to their data fast enough.

These are some of the common reasons we hear from customers for why they consider modernizing their data warehouse on AWS.

We believe that a data warehouse is not a silo. A modern data warehouse should do everything that a traditional data warehouse did, which is supporting your business intelligence and analytics needs with high performance, but it should do more.

It should enable you to scale to petabytes of data and make analytics available to thousands of users in the organization, including users who don't have database management experience, enabling them with self-service capabilities.

In addition, a modern data warehouse should open up new use cases: for example, being able to run machine learning directly on top of data warehouse data, and enabling analytics on top of all data sources, such as the data lake, operational databases, and streaming data.

And once you have access to all this data, you should be able to share it consistently throughout the organization. Another emerging use case we hear about from customers is third-party collaboration: from your data warehouse, you want to be able to consume third-party data sets and join them with your own data, and also take the data in your data warehouse and make it available as a service to your customers, your partners, and the broader business ecosystem.

So third party collaboration is another very common use case with Amazon Redshift.

We have been closely listening to our customers, anticipating the changing landscape, and looking at our fleet to understand what is working and what is not. We have been reinventing the data warehouse for the past 10 years, since Redshift launched in 2012 as the very first cloud data warehouse. I'll cover some of the key capabilities in subsequent slides.

The ability to query the data lake directly, the ability to query operational databases directly, in-database machine learning, concurrency scaling: all these capabilities are enabling tens of thousands of customers to derive meaningful and differentiated value from all their data.

At the core of Redshift innovation is its superior price performance, enabled by a number of capabilities such as distributed data processing, vectorized execution, materialized views, advanced query optimizations, and efficient encodings: performance features that enable Redshift to deliver up to 5x better price performance compared to alternative data warehouse systems.

And Redshift's price performance holds as you scale to more users and more queries, and as your data volumes grow, making data warehouse costs a lot more predictable.

On the left side of this slide is the price performance graph for a high-throughput workload: many users coming in and querying the data warehouse, so small queries at high concurrency, where throughput is very important. In these scenarios, Redshift offers up to 7x better price performance compared to alternative systems.

The graph on the right is for data volume: Redshift allows you to go from gigabytes to terabytes to petabytes of data seamlessly while maintaining price performance.

So now let's dive into some of the capabilities of Redshift as well as its integration with the rest of the ecosystem to enable data warehouse modernization.

The first step in data warehouse modernization is migration from your on-premises system to Redshift in the cloud. This is where the capabilities and features matter.

Redshift offers rich ANSI SQL capabilities and connectivity to BI and analytics tools. We also offer a simpler application development API called the Data API, with which developers can write applications in Python, Node.js, or whatever language they choose, and build analytic applications very easily.

Redshift also supports all data types, including semi-structured data and spatial data, and at re:Invent this week we launched additional SQL capabilities such as MERGE, standard SQL syntax with which you don't have to delete and re-insert rows; you can just merge, for fast-changing data workloads. We also added functionality such as OLAP functions: ROLLUP, CUBE, and GROUPING SETS.
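For illustration, here is a minimal sketch of the MERGE pattern described above; the table and column names are hypothetical:

```sql
-- Upsert fast-changing rows in one statement instead of separate
-- DELETE and INSERT steps (illustrative table names).
MERGE INTO sales
USING staged_sales s
ON sales.order_id = s.order_id
WHEN MATCHED THEN
    UPDATE SET amount = s.amount, updated_at = s.updated_at
WHEN NOT MATCHED THEN
    INSERT (order_id, amount, updated_at)
    VALUES (s.order_id, s.amount, s.updated_at);
```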

The idea is that Redshift has all the necessary SQL capabilities to make migrations easier. A very important thing to note is security: security is critical for the adoption of data warehouses across different use cases and across your organization.

Redshift offers security capabilities out of the box at no additional cost. This includes things like compliance certifications, the ability to run Redshift in your own VPC, granular row-level and column-level controls, and a new feature we launched this week, dynamic data masking, with which you can protect sensitive data.
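As a rough sketch of how dynamic data masking is expressed in SQL (the policy, table, and column names here are hypothetical):

```sql
-- Mask all but the last four digits of a sensitive column for
-- everyone; names are illustrative.
CREATE MASKING POLICY mask_credit_card
WITH (credit_card VARCHAR(32))
USING ('XXXX-XXXX-XXXX-' || SUBSTRING(credit_card, 16, 4));

ATTACH MASKING POLICY mask_credit_card
ON customers(credit_card)
TO PUBLIC;
```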

All these capabilities are available out of the box at no extra cost. We also understand that it is often hard and challenging for customers to take the first step with a data warehouse migration.

So we want to make that simple with a number of tools: for example, the AWS Database Migration Service to move the data from on premises to the cloud, and the AWS Schema Conversion Tool (SCT), with which you can map an existing on-premises data warehouse or database to Redshift, accounting for any syntax and semantic differences.

We also have a comprehensive partner ecosystem that can help accelerate migrations, and we have a number of customers that have migrated from on-premises systems.

One example is DoCoMo, the largest mobile services provider in Japan, with 79 million customers. They moved from an on-premises system to Redshift and saw a 10x performance improvement at a fraction of the cost.

And with managed storage in Amazon Redshift, they are able to scale to petabytes of data and make analytics available throughout the organization. And we have many other customers.

This is one of the most common use cases: the cloud journey typically starts with a data warehouse modernization strategy. Scaling analytics in the organization also means making the data available to all users, including users who don't have data warehouse management experience, and to all workloads, however dynamic and variable those workloads are. To deal with these challenges, we have introduced Redshift Serverless.

It was announced in preview at re:Invent last year, and we made it generally available in July this year. The idea is that with serverless, you don't have to deal with managing clusters; Redshift automatically provisions and scales compute based on your workload demands to maintain performance.

It takes care of things like patching, version upgrades, backups, and recovery so that you can focus on getting insights from your data. And a very important feature of Redshift Serverless is that you pay only for what you use.

So if you have a sporadic workload, for example one that runs for 10 minutes within an hour, you pay for just those 10 minutes of capacity usage. This is a very important consideration when deciding whether serverless makes sense for you or whether you should go for provisioned Redshift.

We have thousands of customers using serverless every month. Pro is one of the largest pharmaceutical companies, and they benefit from the simplification of serverless: they are able to onboard new analytics use cases very easily, without friction, and they also benefit from reduced operational burden and optimized costs with serverless for self-service analytics with AWS and Redshift.

One of the benefits you get is flexibility in your data management. You can take a data-warehouse-centric approach where you keep all the data in Redshift. Redshift has managed storage, which is backed by S3 as the primary storage, and its pricing is pretty much the same as S3.

So you can take a complete data-warehouse-oriented approach, or you can take a hybrid approach: keep some data in open formats and some data in Redshift, and run the analytics appropriately. Redshift supports both types of use cases.

With Redshift, you can directly query data in open formats in the data lake with high performance and join it with the data in your data warehouse. At re:Invent, we also introduced a brand new capability: you can continue to do ad hoc queries on the data lake, but if you want to ingest data from the data lake, you can do it very easily without any pipelines. This ties into the zero-ETL vision, where data is automatically loaded into Redshift once you set it up.

Redshift also integrates deeply with the AWS Glue Data Catalog and Lake Formation so that you have centralized governance and security.
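To make the hybrid pattern concrete, here is a minimal sketch of querying open-format data in the lake alongside a local table; the schema, table names, and IAM role ARN are all hypothetical:

```sql
-- External schema over the AWS Glue Data Catalog (illustrative names).
CREATE EXTERNAL SCHEMA lake
FROM DATA CATALOG
DATABASE 'sales_lake'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftSpectrumRole';

-- Join Parquet files in S3 with a table stored in Redshift.
SELECT c.segment, SUM(o.amount) AS revenue
FROM lake.orders AS o            -- open-format data in the lake
JOIN dim_customer AS c           -- local warehouse table
  ON c.customer_id = o.customer_id
GROUP BY c.segment;
```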

We have a number of customers that take a hybrid approach to managing data. Nasdaq is a multinational financial services company that operates the US stock exchange. They have taken a lake house approach, collecting data from all their internally operated exchanges and storing it in S3, and their teams, such as the research and surveillance teams, query S3 directly in place, improving time to insight.

A modern data warehouse should also open up new use cases such as machine learning, data processing, and more. Redshift offers in-database machine learning capabilities, integrating with Amazon SageMaker to provide them.

With this, you are able to create, train, and deploy models entirely in SQL, without having to learn any new skill sets. Another nice feature of the machine learning capability is that you can bring your own model.

If you have models that were trained outside Redshift, in SageMaker for example, you can bring those models into Redshift and run in-database inference, which is a new capability.
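As a sketch of what in-database ML looks like in Redshift ML (all names, the IAM role, and the S3 bucket are hypothetical):

```sql
-- Train a model from a query; Redshift ML hands training off to
-- SageMaker behind the scenes (illustrative names).
CREATE MODEL customer_churn
FROM (SELECT age, tenure_months, monthly_spend, churned
      FROM customer_history)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-staging');

-- In-database inference with the generated SQL function.
SELECT customer_id,
       predict_churn(age, tenure_months, monthly_spend) AS churn_risk
FROM customers;
```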

Continuing on advanced analytics on the data warehouse: in addition to machine learning, we just announced the capability to integrate Redshift with Apache Spark. The idea is to simplify and speed up Apache Spark applications on top of the data in the data warehouse.

With this integration, you can get started very easily with a packaged and certified integration from Amazon EMR or AWS Glue, and start building Spark applications using Spark's native programming model.

So with DataFrames and Spark SQL, you can operate directly on top of Redshift data. The key thing to think about is that modern data warehousing is not limited to traditional business intelligence workloads; it extends to workloads like machine learning and Spark.

All of these are workloads that you want to be able to run directly on the data warehouse.

We have a lot of customers using the machine learning capabilities in production. Jobcase is one example: a technology company that runs a job marketplace.

They have a lot of big data sets, including things like job seeker information and job interaction data. They store all of that in Redshift, and they have recommendation engines that apply machine learning models to these big data sets.

They had a familiar problem: their machine learning models and data were not co-located. What they used to do was take the data out of Redshift, run batch inference, and bring the results back into Redshift.

With the in-database machine learning capabilities of Redshift, they are able to run millions, in fact billions, of predictions directly on the data warehouse data, without having to move it out of Redshift.

As customers become more and more data driven and start to see data as a competitive advantage, we hear from them that they want near-real-time analytics.

Near-real-time analytics is very interesting because customers want to understand their business drivers, grow sales, reduce costs, and improve customer experiences. There are so many use cases for real-time analytics.

From a Redshift point of view, we are deeply integrating with the rest of the AWS ecosystem to enable real-time analytics on top of the data warehouse.

At re:Invent this week, we made the streaming ingestion capability generally available. With this, you can ingest high volumes of streaming data into Redshift and make it available for analytics within seconds.

And this is a very simple integration. You don't have to build any pipelines or deal with the complexity of ETL pipelines; you just use SQL.

You simply create a materialized view that points to a Kinesis stream or a Kafka topic, and that's it: the data shows up in the materialized view. As new data lands on the Kafka topic or Kinesis stream, the materialized view is automatically refreshed with the new data.

A very simple and straightforward integration where you use just SQL.
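A minimal sketch of the streaming ingestion setup for a Kinesis stream; the stream name and IAM role are hypothetical:

```sql
-- Map the Kinesis stream into Redshift (illustrative names).
CREATE EXTERNAL SCHEMA kds
FROM KINESIS
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftStreamingRole';

-- Materialize the stream; Redshift refreshes it automatically as
-- new records arrive.
CREATE MATERIALIZED VIEW clickstream_mv AUTO REFRESH YES AS
SELECT approximate_arrival_timestamp,
       JSON_PARSE(kinesis_data) AS payload   -- record body as SUPER
FROM kds."my-click-stream";
```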

This capability, as I mentioned, is available for Amazon Kinesis as well as Amazon Managed Streaming for Apache Kafka (Amazon MSK). We had streaming ingestion in preview for a few months. Jim is one of our customers; they use streaming ingestion for risk control reporting on user financial data. Previously they did hourly reporting, and now, with streaming ingestion, they are able to do near-real-time reporting, improving their business efficiency.

In addition to streaming analytics, we are also doing deep integrations with our operational databases to enable analytics directly on top of transactional data. For example, a couple of years back, Redshift introduced the federated query capability, with which you can directly query Aurora and RDS MySQL and PostgreSQL databases from Redshift and join the results with the data in the data warehouse. Federated query is very useful for situations like ad hoc queries and data exploration: you're just trying to understand what data is there, and then you figure out the specific data you want to bring into Redshift.
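For illustration, a sketch of a federated query setup against an Aurora PostgreSQL database; the endpoint, role, and secret ARNs are hypothetical:

```sql
-- Federated schema over a live operational database (illustrative names).
CREATE EXTERNAL SCHEMA ops
FROM POSTGRES
DATABASE 'orders' SCHEMA 'public'
URI 'my-aurora.cluster-abc123.us-east-1.rds.amazonaws.com'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-creds';

-- Explore live transactional rows next to warehouse history.
SELECT o.order_id, o.status, h.lifetime_value
FROM ops.orders AS o
JOIN customer_history AS h ON h.customer_id = o.customer_id
WHERE o.status = 'open';
```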

Now at re:Invent, we are very excited about the launch of a new capability: zero-ETL integration between Amazon Aurora and Amazon Redshift. The idea is to enable near-real-time analytics on petabytes of transactional data, which you can combine from multiple Aurora databases into one Redshift warehouse. With this, data is continuously replicated from Aurora and made available in Redshift within seconds of being written to Amazon Aurora. It is a very powerful integration: as you can see, we are moving towards real-time analytics across streaming and transactional data, because the decision-making systems in our customers' organizations simply need real-time data for a variety of use cases.

So now we have seen how you get data across the data lake, streams, and operational databases. Next, we want to make it available in a consistent fashion throughout the organization, which is where data sharing comes into the mix. With Redshift data sharing, you can share live, transactionally consistent data across Redshift data warehouses, whether they are provisioned or serverless, in the same account, in different accounts, or even across regions; cross-region sharing is a very popular use case. And the idea, again, is that you make no data copies and do no data movement: you collect the data from your source systems and make it available across the organization.
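A minimal sketch of the data sharing flow; the share name and namespace GUIDs are hypothetical:

```sql
-- Producer warehouse: publish live tables, no copies (illustrative names).
CREATE DATASHARE sales_share;
ALTER DATASHARE sales_share ADD SCHEMA public;
ALTER DATASHARE sales_share ADD TABLE public.daily_sales;
GRANT USAGE ON DATASHARE sales_share
TO NAMESPACE 'a1b2c3d4-5678-90ab-cdef-1234567890ab';  -- consumer namespace

-- Consumer warehouse: surface the share as a database and query it live.
CREATE DATABASE sales_from_share
FROM DATASHARE sales_share OF NAMESPACE 'f0e1d2c3-4567-89ab-cdef-0987654321fe';
SELECT * FROM sales_from_share.public.daily_sales LIMIT 10;
```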

Data sharing is enabling our customers to form very flexible multi-cluster architectures, what we refer to as data mesh architectures. This morning, in the data keynote, we announced an integration of data sharing with AWS Lake Formation: in addition to sharing data directly between data warehouses, you can, as the producer, share it through AWS Lake Formation. Consumers then discover the data in AWS Lake Formation and query it. The benefit is that you can centrally manage and specify access control, governing data access from Lake Formation in one central place across all your data sharing consumers.

We have thousands of data sharing customers; data sharing has been GA for almost a year now. Fannie Mae is one of our largest data sharing customers. They have a decentralized approach to data warehouse management, with tens of Redshift clusters. Previously they had to unload data from one Redshift cluster and load it into another; with data sharing, they collaborate seamlessly across all their application teams, and they also share data between the pre-production research and production environments without any data copies.

Redshift also integrates with AWS Data Exchange (ADX) for third-party collaboration. As I mentioned, the idea is that within your data warehouse you want to be able to discover, subscribe to, and consume third-party data sets and join them with the data in your data warehouse. In addition, a common and growing use case is making the data in your data warehouse available to the rest of your business ecosystem: both inbound and outbound data sharing outside your organization. The Data Exchange integration is built on data sharing, so this is live data access: no ETL, no copies, no data movement needed. And Data Exchange takes care of the subscriptions and payments, basically the licensing aspects of third-party collaboration.

So now that you have some idea of the key use cases enabled, I'd like to ask Shruti to show a quick demo illustrating some of them. Thank you.

Speaker 1: So we built a small model to run. Do I actually get the prediction? What do we see? We have the payment measure name, the payment ID, and the predicted amounts for different payments.

Finally, let's make this data available to our end users, our patients. Our patients want to know which hospitals are available to them, what the average wait times are, and roughly how much they would have to pay for a given health condition.

So select a state, select a county, and we see that there are two hospitals available here. Oh, look at that: the automated data masking in action. I did not want you to see the name, and I did not have to do anything extra; I could just configure it in Redshift and select it.

And I can see that the wait time for the second hospital is lower, so that's it, I'll go to the second hospital. And the second thing, as you can see: once I select a condition, I'm able to get the estimated range of what I might pay for that condition.

So what did we see? One, we were able to pull in data from multiple different types of data sets to avoid the data silo challenge. Two, we were able to make that data readily available within a cluster so that multiple personas can take advantage of it and utilize it the way they want for their business use case. And last but not least, we made it readily available for our end users. With that, I would like to call Shyam, Director at J&J, to talk about how they were able to take all of this and put it into action. Thank you, Shruti.

Speaker 2: Hello, everybody, and good afternoon. I hope everybody is enjoying the sessions here at the conference and learning about new technology, use cases, and features. My name is Shyam Malhotra; I'm the Director of the Data and Insights product group in the Janssen North America commercial pharmaceutical business.

Janssen is the pharmaceutical division of Johnson and Johnson. Johnson and Johnson has led the reimagining of health services across the world, helping patients, doctors, and nurses for over 130 years.

At Janssen Commercial, we help patients with complex and challenging medical issues across different therapeutic areas.

Today, in the agenda, I'll go over our data warehouse modernization journey, focusing on the challenges, the voice of the customer, our journey at Janssen Commercial, the business benefits, the data architecture, and the learnings and next steps we have planned.

Let me start by providing a brief overview and introduction. My group is the Data and Insights product group under Janssen North America Commercial Business Technology. Our vision is to shape the future of health care through a diverse team, unlocking the power of data in an insight-driven organization to improve efficiency and generate value by empowering data users to deliver a better customer experience.

Our business data users are not limited to sales and marketing: data scientists, contracting teams, medical teams, market research teams, and many more. Some of the key opportunities we see: as the business grows, the data grows, and there is a need for data sharing across the organization.

New channels are emerging, new data users are joining the organization, and new applications are being developed, generating yet more data. There is a huge need for self-service in the organization. With that, the organization needs faster time to insight and faster time to data, and there is ongoing pressure to reduce the cost of delivering them.

When I was traveling here to attend the conference, I was thinking about how much data I generate: taking an Uber, paying on the toll roads, going through the airline ticketing system, going to restaurants, traveling around. All of them generate data. We are generating data right now.

And that is just the data we are generating; there is also a humongous amount of data which is still untapped. With data come challenges, and the challenges will not go away as the data grows. As the explosion of data happens, we need to address it, look at innovative ways to handle it, and optimize the rising cost of delivering capabilities.

In the organization, as the data grows, the cost of storage, the cost of compute, and the cost of delivering insights grow with it, and the support cost also increases. The data ecosystem gets more complex, and we see more data silos across the organization.

Scalability has been an issue with legacy tools and technologies. When we look at some of the workloads, at some points it impacts our ability to deliver the right data at the right time to our customers.

Data sharing: Neeraja talked about this and some of the features that are coming. Across the organization, there has been a need to share data across multiple data silos. With that, there is also operational overhead in managing the clusters and the pipelines; you saw the zero-ETL pipelines launched this morning, which speak to that.

So how do you avoid those operational overheads? There is also a growing need for self-service: the business doesn't want to go through IT, they want self-service so they can do it themselves. So how do you enable those user groups? With data growth, new users are coming in and new applications are being created, generating more data. With all of this, there is always demand from the organization and the business to deliver data and insights faster.

Our journey started in 2014. Before I go through it, I'm not sure if some of you have seen the video: Redshift had a two-minute commercial showing its capabilities.

There was a character sitting on the beach while the data ran automatically. So I went to my manager and said, you know, you could be sitting here on the beach while the data runs automatically, and you'd be enjoying yourself.

So our journey started in 2014-2015, when we moved some of our on-premises RDBMSs to Redshift as the foundation of our data ecosystem. Over the years we have optimized, using some of the features that came along, such as Spectrum, to reduce and optimize cost.

We also moved a lot of our legacy data warehouses into the Redshift data warehouse. Last year we started migrating our DC and DS instances to RA3, and we also started to leverage some of the data sharing features that became available.

And as we speak, we are looking at serverless and other features that are available for us to use. Some of the key business benefits we see: when we started our journey, one issue was that whenever we needed more storage, we were stuck with coupled compute and storage; even though our workloads were not compute-intensive, we had to add compute.

With the benefit of RA3, where compute and storage are separated, we could support the data growth and the business growth. We also want to be sustainable, and it reduces the carbon footprint, because we are not provisioning compute we don't need.

It lowers the compute wherever it is not needed. Data sharing enabled a better user experience for us, reduced data duplication across the organization, and enabled data interoperability and innovation; it genuinely drove innovation in the organization.

Before the data sharing features were available, we had to take a cut of the data and ship it to the other cluster or other parts of the organization; data sharing helped streamline that. There is also better cluster management, with query and concurrency capabilities and other aspects that helped reduce our administration and operational overhead.

As the business grows, the data grows with it: we had 20x growth in our data, and RA3 was able to sustain that growth and demand. And new use cases keep coming, with personalized medicine, additional data coming in through omnichannel, and personalized patient experiences as well.

With that, new applications and a new user base are coming in that need self-service, and it also helps democratize data across the organization. Finally, and this is important, we need to look at cost and effort.

It also helped deliver a 30% reduction in our operational cost and 50% faster time to delivery for our business users. Yes, there are new user groups coming in, and they need self-service: citizen data scientists and citizen data users as well as analytics users.

Also, with AI/ML capabilities, real-time data, and BI, self-service helped the organization scale. It also optimized our production workloads and supported the new dynamic workloads that came with AI/ML.

This was our prior architecture, from left to right, from the data producers to the data consumers. There were multiple ETL tools and pipelines and PL/SQL, and the data was copied into multiple RDBMS systems, so it was hard to manage and the time to market for adding new data assets was slow.

Back then, as some of you may remember, when you needed more disk space you had to create a request, and it took three to six months. Now, at the click of a button, you can increase the size.

From there we moved to a new architecture. On the left-hand side, we are able to add more data assets and more channels as they emerge, with EHR and others. Data comes into S3, a lot of our processing happens on EKS on Fargate, and finally it goes to Redshift.

We also use AWS Data Exchange to improve time to delivery. It connects directly to Redshift: once the data provider publishes the data, it automatically shows up in Redshift, and you can start querying and building your applications, visualizations, and other things.

With AWS Data Exchange, the architecture can scale easily and generate faster insights for the end user. With our recent modernization effort, we migrated all our dense storage and dense compute clusters to Redshift RA3: 300+ terabytes of data migrated.

Some of the bigger clusters took less than two hours to migrate, we didn't have any downtime, and there was zero net cost increase. RA3 compatibility also enabled a seamless transition for our users and applications.

The AWS team was there to help, and whenever challenges or issues were encountered, they were able to help us during the journey.

Some of the key recent learnings: when you start the migration journey, look at right-sizing. Going from DS and DC instances to RA3, start with a small cluster and a non-production environment so that you can test it out.

You can plan to use some of the new features that are enabled, such as advanced WLM, the data sharing capabilities, or the integration with ADX and other capabilities that are out there. Not everything went smoothly; there were some hiccups along the way.

Encryption for bigger clusters was taking more time, and encryption is needed when you want to enable cross-cluster data sharing. The AWS team was there to help us solve that pretty quickly.

Also, when we tried elastic resize, there was a bug in the system, and the AWS team provided timely help to mitigate those issues.

What's next for us: we are planning to expand serverless to run and scale some of the analytics use cases. We want to expand our use of ADX to enable more self-service for the organization and faster data access.

We also want to scale the data sharing capabilities across the organization; we have been using them, and we want to expand further. With that, thank you, everybody, for listening.

Speaker 1: Thank you, Shyam. So I think we are at the end of our session. You can get started with Redshift today: if you are new to Redshift, or if you have new use cases you want to try out, you can spin up Redshift Serverless very quickly. You can visit the Redshift page for more information; whether you are new to Redshift or already using it, there is information for you. We also have a migration program if you are considering a migration, and there is a bit more information on that as well. With that, thanks, everyone, and we can take some questions.
