What’s new with Amazon Redshift

Hi folks, welcome to the session. My name is Eugene Kawamoto. I'm the Director of Product Management for Amazon Redshift.

So today we'll be talking about what's new with Amazon Redshift, covering some of the new launches over the past few days, as well as having a guest speaker, Neema Raphael, Chief Data Officer from Goldman Sachs. He'll be talking about how Goldman Sachs is modernizing its analytics stack using Redshift today.

So here's our agenda. We have a lot to talk about, so let's get into the presentation.

So first, to set some context: as everybody's aware, the amount of data has been growing exponentially over the past decade. This has been primarily driven by the cloud, and a lot of that data is going into analytics systems. As data has been growing, customers have been seeing unique challenges. First and foremost, analytics requirements are coming in from all over the enterprise. Everyone's now a data user, and enabling all these data users with low friction is becoming important. Given the strategic importance of data, analytics workloads are becoming mission critical; downtime leads to business loss, and we see customers running their analytics systems 24/7 now. As more sensitive data such as PII comes to the cloud and into analytics systems, finding a better way to secure and govern that data is also a common ask that we're hearing. And finally, as more and more analytics are used, customers want to make sure that they have a highly scalable system while also making sure that their costs don't get out of control.

So as data has been growing, customers are evolving their data platforms to accommodate new use cases and harness data as a strategic advantage. When we look at the journey of customers using their data, typically they start from things like dashboarding and reporting. As data is spread across the enterprise and they have multiple data sources such as operational databases and data lakes, they also need to build data pipelines to bring that data over to analytics systems. More recently, we also see customers doing real-time analytics to get insights much faster, as well as predictive analytics leveraging machine learning technologies. And finally, a recent trend that we're starting to see is more and more customers wanting to monetize their data. Customers are using Redshift not just to support internal systems but also to externalize data and support external customers. As part of that, a lot of customers are going through data modernization and moving from on-premises legacy data warehouses onto Redshift to support these needs.

So this year marks the 10-year anniversary of Amazon Redshift. We originally launched in 2012 as the first cloud data warehouse, and since then we've been listening to customers about how we can make Redshift better and innovate faster. These are things like better elasticity to accommodate peak-hour demand, querying into the data lake so customers can query into S3 as they accumulate more data, and compute and storage separation to provide better flexibility for their clusters. More recently, we are centering our investments around three pillars.

The first and foremost is around price performance at any scale. This is something that we've been hearing from customers for a very long time: customers are looking for high performance and scalability while keeping their costs low. So we introduced things like concurrency scaling to serve high peak-demand workloads, and materialized views to accelerate your queries. Most recently, we also introduced automatic materialized views as an easy way to create and maintain your materialized views without any manual intervention.
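To make that concrete, here is a minimal sketch of a manually defined materialized view with auto refresh; the table and column names are illustrative. Automatic materialized views need no DDL at all, since Redshift creates and maintains them behind the scenes based on your workload.

```sql
-- A manually defined materialized view that precomputes a daily aggregate
-- (table and column names are illustrative).
CREATE MATERIALIZED VIEW daily_revenue_mv
AUTO REFRESH YES
AS
SELECT order_date, SUM(amount) AS total_revenue
FROM sales.orders
GROUP BY order_date;

-- Queries can select from the view directly, and Redshift can also rewrite
-- eligible queries against sales.orders to use it automatically.
SELECT order_date, total_revenue
FROM daily_revenue_mv
ORDER BY order_date DESC
LIMIT 7;
```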

The second pillar is around analyzing all your data. This is really centered around what we're hearing a lot from customers about data mesh architectures. You can use data sharing to share your data across multiple clusters. You can use federated query to query data that's sitting in an operational database, live. And you can use Spectrum to run data lake queries against S3 without moving or copying your data.
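As a rough sketch of what federated query and a Spectrum data lake query look like in SQL (the endpoint, IAM roles, secret ARN, and table names below are placeholders you would replace with your own):

```sql
-- Federated query: expose a live Aurora PostgreSQL schema inside Redshift.
CREATE EXTERNAL SCHEMA orders_live
FROM POSTGRES
DATABASE 'orders' SCHEMA 'public'
URI 'my-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftFederatedRole'
SECRET_ARN 'arn:aws:secretsmanager:us-east-1:111122223333:secret:aurora-creds';

-- Spectrum: expose AWS Glue Data Catalog tables whose data lives in S3.
CREATE EXTERNAL SCHEMA lake
FROM DATA CATALOG
DATABASE 'clickstream_lake'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftSpectrumRole';

-- Join operational, data lake, and local warehouse data in one query.
SELECT o.order_id, c.campaign, d.customer_segment
FROM orders_live.orders o
JOIN lake.click_events c ON c.order_id = o.order_id
JOIN dim_customers d ON d.customer_id = o.customer_id;
```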

And finally, the last pillar is around being easy, secure, and reliable. This is really centered around customer asks to make data warehousing a lot easier to use. As customers make data and analytics available to all end users within the enterprise, not all of these users know how to maintain a cluster, manage infrastructure, or handle patching. So an easy-to-use data warehouse is becoming even more important for many customers. But at the same time, they can't sacrifice things like security and reliability. We have features like role-based access control, column-level security, as well as row-level security, and we'll talk a little more about these features later.
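To give a flavor of those access controls, here is a minimal SQL sketch; the user, role, table, and policy names are hypothetical, and the row-level predicate is intentionally simplified (in practice it would often reference session context).

```sql
-- Role-based access control: bundle permissions into a role and grant it to a user.
CREATE ROLE sales_analyst;
GRANT SELECT ON TABLE sales.regional_orders TO ROLE sales_analyst;
GRANT ROLE sales_analyst TO alice;

-- Column-level security: expose only non-sensitive columns to another role.
CREATE ROLE support_team;
GRANT SELECT (order_id, order_date, amount) ON sales.regional_orders TO ROLE support_team;

-- Row-level security: analysts only see EMEA rows.
CREATE RLS POLICY emea_only
WITH (region VARCHAR(32))
USING (region = 'EMEA');

ATTACH RLS POLICY emea_only ON sales.regional_orders TO ROLE sales_analyst;
ALTER TABLE sales.regional_orders ROW LEVEL SECURITY ON;
```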

So today, tens of thousands of customers analyze exabytes of data every day on Redshift. Over the past few years, especially during the COVID period, we've seen a lot of customers in life sciences and healthcare, such as Pfizer, Moderna, and Johnson & Johnson, using Redshift to support the analytics systems behind their drug research to accelerate the fight against COVID. We've also seen a lot of customers going through digital transformation, shifting to a modern data architecture in the cloud, with various financial services companies such as JPMorgan Chase, Intuit, and Goldman Sachs, who will be talking today.

One good example is Nasdaq, a major stock exchange in the US. Nasdaq ingests tens of billions of market data records into their S3 data lake, and getting insights in real time matters. What they do is leverage Spectrum queries to query directly into the S3 data lake and combine that data with their local Redshift data warehouse cluster to get real-time insights.

So there are a ton of use cases that we're seeing from customers today, and we'll be walking through some of the most common ones as well as some of the new features that we've launched this week.

So as everyone's becoming a data user, a low-friction, self-service analytics environment is becoming essential. This is particularly common for lines of business where individual organizations are starting analytics on their own. To address these customer challenges, we announced the GA of Amazon Redshift Serverless this year. Since GA, thousands of customers have been using it, and we have customers such as Roche, Classmethod, as well as Rail Delivery Group using Serverless today. Serverless is focused on getting insights in seconds while we take care of the infrastructure management on your behalf. Redshift automatically and intelligently handles infrastructure provisioning for both steady-state workloads and unpredictable workloads. And the big benefit of using Redshift Serverless is that you pay for only what you use. Also, given all the innovation that has happened over the past 10 years on Redshift, with Serverless you can continue to use all the advanced features such as Redshift ML, data sharing, and federated queries as part of your experience.

And since GA in July of this year, we've also announced a lot of new features, including support for tagging, additional monitoring views, and query monitoring rules, and we continue to expand into additional regions as well.

A great example of a customer using Serverless today is Peloton. Peloton, as many of you are probably aware, is a fitness equipment company known for their internet-connected exercise bikes, and they provide on-demand fitness classes. So data matters to them, and understanding user behavior really drives engagement and business growth. They use open source tools like Apache Airflow to orchestrate their workflows, Apache Spark to process their data, and dbt for building data pipelines to ingest data into Redshift. They have a hub-and-spoke model using data sharing: all the data is ingested into a producer cluster, a provisioned cluster running 24/7, and they have consumer clusters with various different workloads running on them. A couple of examples are ad hoc reporting as well as connecting to their favorite BI tools like Tableau and Looker, and a lot of these endpoints are actually operated on Serverless today. The big benefit of Peloton going to Serverless is really cost optimization: they were able to benefit from the pay-as-you-go model on Serverless and save close to $300,000 on an annual basis.

Another popular tool for customers using Serverless today is Query Editor V2. Query Editor is an easy-to-use SQL client for supporting end-user demand as well as improving analyst productivity. It allows end users to browse and explore multiple databases, external tables, views, stored procedures, and so on. You can also visualize your query results, as shown in this image, and share them with other members of your team. And most recently we launched support for notebooks, which is very useful for collaboration and for organizing and annotating queries.

As we introduced Serverless, it was essential for us to also make sure that various performance tuning and maintenance tasks are fully automated, so we're investing heavily in automation today. One good example is ATO, which stands for Automatic Table Optimization, a feature that automates the physical data distribution on your storage layer. Instead of manually setting sort keys or distribution keys, you can use ATO, where we learn your workload patterns and automatically set those parameters so you get better price performance. We also have common maintenance tasks like vacuum delete and vacuum sort automated as well.
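For illustration, here is a minimal sketch of handing those layout decisions to ATO; the table definitions are hypothetical.

```sql
-- New table: let Redshift choose and adjust the physical layout automatically.
CREATE TABLE sales.orders (
    order_id   BIGINT,
    order_date DATE,
    amount     DECIMAL(12,2)
)
DISTSTYLE AUTO
SORTKEY AUTO;

-- Existing table: switch it over to automatic optimization.
ALTER TABLE sales.legacy_orders ALTER DISTSTYLE AUTO;
ALTER TABLE sales.legacy_orders ALTER SORTKEY AUTO;
```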

A common feature customers use today is workload management, which enables you to manage your query priorities to get better throughput for your workloads. Using automation, customers like EA are getting noticeably better performance, around 15% better on the same hardware footprint.

Another common use case that we see with customers is data ingestion. Today, customers have multiple data sources sitting across the enterprise, typically in silos. These span data lakes, operational databases, streaming sources, as well as S3 buckets where you want to ingest data into Redshift from multiple file formats. So customers need to manually build multiple different data pipelines, which can take engineers weeks or even months to accomplish, and maintenance costs can also be pretty expensive: as schema changes happen, customers need to reconstruct these pipelines.

So in order to make these things easier and to make automated ETL jobs and data pipelines a lot easier to build, we're introducing multiple new features today.

So a common architecture pattern that we see today is using Aurora as your operational database and then Redshift as your main analytic system. So typically customers have multiple Aurora databases that they're operating and this requires building multiple data pipelines to ingest into a Redshift cluster.

So in order to make that easier, we're really excited to announce Aurora zero-ETL integration with Amazon Redshift. This was announced in the keynote yesterday as a preview.

This really eliminates the need for customers to build and maintain complex ETL pipelines, and customers can get near real-time analytics on petabytes of transactional data.

For customers that want to ingest data from Amazon S3 buckets, we also have the COPY statement as a simple way to ingest data into Redshift. We support various different file formats like CSV, Parquet, and other text formats as well.
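Here is a minimal sketch; the bucket, prefix, table names, and IAM role are placeholders.

```sql
-- Load Parquet files from S3.
COPY sales.orders
FROM 's3://my-ingest-bucket/orders/2022/12/'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
FORMAT AS PARQUET;

-- The same command handles delimited text, e.g. gzipped CSV with a header row.
COPY sales.orders_staging
FROM 's3://my-ingest-bucket/orders_csv/'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
FORMAT AS CSV
IGNOREHEADER 1
GZIP;
```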

For customers that want to automate this process, we're introducing auto-copy for Amazon S3 today. This is a new capability that helps you automate the process of data ingestion from your S3 bucket into Redshift tables, and it eliminates the need to run COPY procedures manually and repeatedly.

So with a low-code approach using the COPY command, you can benefit from automated ingestion without any experience in building data pipelines.
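As a hedged illustration, the preview documentation describes a JOB clause on COPY along the lines of the sketch below; the job name, table, bucket, and IAM role are placeholders, and the exact syntax may differ in the release you're using.

```sql
-- Hypothetical auto-copy job: once created, new files landing under the prefix
-- are loaded automatically without re-running COPY by hand.
COPY sales.orders
FROM 's3://my-ingest-bucket/orders/'
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
FORMAT AS PARQUET
JOB CREATE orders_ingest_job
AUTO ON;
```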

We've also introduced streaming ingestion as a preview earlier this year. This is an easy way to ingest streaming data into your warehouse for real-time analytics use cases. We enabled integration with Kinesis Data Streams, or KDS, and today we're also announcing integration with Amazon Managed Streaming for Apache Kafka, which is MSK.

Both are really common streaming engines, and this enables you to stream data directly into Redshift in near real time without creating a staging environment in S3.
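A minimal sketch of the Kinesis flavor looks roughly like this; the stream name, IAM role, and payload fields are assumptions, and the MSK variant uses FROM MSK and points at your Kafka cluster and topic instead.

```sql
-- Map a Kinesis Data Streams source into Redshift.
CREATE EXTERNAL SCHEMA kinesis_src
FROM KINESIS
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftStreamingRole';

-- A materialized view over the stream; refreshing it pulls new records
-- directly into Redshift without staging them in S3 first.
CREATE MATERIALIZED VIEW clickstream_mv
AUTO REFRESH YES
AS
SELECT approximate_arrival_timestamp,
       partition_key,
       JSON_PARSE(FROM_VARBYTE(kinesis_data, 'utf-8')) AS payload
FROM kinesis_src."my-click-stream";
```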

One great example is Adobe, which is using Redshift streaming ingestion as part of the Adobe Experience Platform. This is for ingesting data such as clickstream logs and other session data from things like CRM and customer support applications to provide a better personalized experience for their end users.

And last but not least in data ingestion is our collaboration with Informatica. In addition to ingesting data from databases and from S3 buckets, customers have multiple other data sources that they want to ingest from. These might be third-party data sets like Salesforce data or even Google Analytics, where you want a data pipeline between those third-party data sets and Redshift.

As part of that, we're really excited to announce the Informatica Data Loader integration. This is now GA and allows customers to run high-speed, high-volume data ingestion into Redshift.

Informatica supports over 30 different sources to connect to, and it is actually provided for free, so please test it out.

So let's shift gears and go into architecture patterns.

An emerging trend that we see right now with customers is using data sharing. Instead of having a monolithic cluster with all your workloads running on the same hardware, what we're seeing is customers splitting up their cluster and going to a multi-cluster environment using data sharing. By doing that, they can have workload isolation and also serve their end customers better, as they have various different SLA requirements.

Data sharing allows you to share data across multiple clusters in live format without losing transactional consistency. It also enables workload isolation as well as chargeback capabilities.

A common pattern we see is various different organizations sharing data with one another. You can have a producer cluster on Serverless and a consumer cluster on provisioned, and you can also do it vice versa, so you can mix and match based on what your workload patterns look like.

We also support cross-account as well as cross-region sharing, and we've also integrated with AWS Data Exchange. This is a third-party data marketplace where you can subscribe to data sets like FactSet and use your Redshift cluster to immediately purchase and start querying FactSet data.
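A minimal end-to-end sketch of the SQL involved looks like this; the share, schema, and table names are illustrative, and the namespace GUIDs and account ID are placeholders.

```sql
-- On the producer cluster or workgroup: publish a schema and a table.
CREATE DATASHARE sales_share;
ALTER DATASHARE sales_share ADD SCHEMA sales;
ALTER DATASHARE sales_share ADD TABLE sales.daily_orders;

-- Same-account consumers are granted by namespace; cross-account by account ID.
GRANT USAGE ON DATASHARE sales_share TO NAMESPACE '<consumer-namespace-guid>';
GRANT USAGE ON DATASHARE sales_share TO ACCOUNT '444455556666';

-- On the consumer (same account): create a database from the share and query it live.
CREATE DATABASE sales_shared FROM DATASHARE sales_share
OF NAMESPACE '<producer-namespace-guid>';

SELECT COUNT(*) FROM sales_shared.sales.daily_orders;
```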

So today we have thousands of customers using data sharing, with over 12 million data sharing queries running on a daily basis.

A great example of a customer using data sharing is Orion. Orion is a data-as-a-service provider that serves various different financial services companies. They have 2,500 different data sources streaming data, mainly from SQL Server databases on premises and on AWS, using a Kafka connector that streams into a data sharing environment.

They have a producer cluster that receives all this data and shares it in real time with end users for collaboration. This is a multi-tenant architecture that serves multiple different clients, and given the sensitivity of the data, data sharing is a great way to provide workload isolation between clusters and also securely share that data with their end customers.

And as we've seen more and more customers leveraging data sharing, especially in large-scale environments with multiple consumer and producer clusters across both provisioned and Serverless, customers have been asking for an easier way to manage that data sharing environment.

So today we're announcing the preview of centralized access control for data sharing with AWS Lake Formation. This integration allows you to use Lake Formation to manage granular access control, for things like row level and column level, based on the different permission requirements in your data sharing environment.

There's really no scripting or complex coding required as part of this experience.

So with that, let me hand it over to Neema Raphael, Chief Data Officer from Goldman Sachs. He'll be talking about how Goldman Sachs is modernizing its data stack in the cloud using Redshift. He'll also be talking about some of the new features that I talked about, including data sharing as well as Serverless.

So, Neema, please come on stage.

Oh, bright lights.

Thanks, Eugene, for that nice intro, and thanks, everybody, for joining. Like Eugene said, I'm going to talk a little bit about our cloud journey at Goldman Sachs and how AWS and Redshift have helped us make data great again.

So, just a quick rundown: we'll do a quick intro about Goldman, myself, and our team. Then I really want to get into what problems we're even trying to solve and how we cracked the code leveraging the cloud, and then we'll nerd out on some architecture. So let's get started.

All right. So what is Goldman Sachs? Of course, people have come up with all sorts of clever names and descriptions, but at heart, we're really a financial services company that cares deeply about the financial well-being of individuals and institutions.

We're not, quote unquote, a technology company, but we are a platform business that leans heavily on tech, and of course, not to forget, we have 12,000 engineers working at Goldman.

A quick intro about my team and myself. I'm a pretty big data nerd. I've been a software engineer, a data engineer, and a strat at Goldman, which is the Goldman word for quant. And now I lead our data efforts at GS.

What makes Goldman data engineering interesting and unique? We have every sort of data available, from super fast nanosecond and millisecond data to slow-moving data. But really, we usually work with medium-sized data, as I like to say, with super complex relationships and formats across many different business lines.

And the other thing to note is we need our data to be right. Not directionally correct, not statistically correct: the data has to be correct for us to run our business.

And then just another quick shout-out, and of course a shameless plug, for our data platform that we open sourced recently, called Legend. It's on GitHub. Take a look, and if you like the talk, throw us a star on GitHub.

All right, let's get into it.

So first, a little GS history lesson. Before data was cool, data engineering was super application-centric, and you can see here that it sort of turns into a little bit of data spaghetti on the right side. We'll talk a little more about that shortly.

OK. So when your data crew is as big as ours, 12,000 engineers, most of them working with data in some shape or form, the natural entropy tends toward disjoint islands and what I like to call Rube Goldberg machines.

So a lot of spaghetti in the middle of those boxes, and a lot of SQL ocean. And what does that lead to? New code and data take a long time to onboard and deliver to our clients or users; duplicate data sources, and often different data models for the same data; orphan feeds; lack of ownership; unknown data quality; and ongoing costs of unused data.

And of course, all of that comes with the headache of maintenance and support and cost. You can see some of our stats: it used to take months to onboard new data, millions of unmanaged and ungoverned SQL queries, thousands of consumers.

And actually, it says thousands of complex data sets, but it's really hundreds of thousands of complex data stores.

I'm not going to spend a lot of time on this, but I do want to point out two interesting things. One is a half joke, half truth that we talk about at Goldman, where we say that we have about 1.5 ETL frameworks for every engineer at Goldman.

And the second thing, again, is on the consumption side: we really didn't know what was going on. There were data silos, but also just a free-for-all on the consumption side.

All right. So after a little bit of self-reflection, some meditation, and some sound baths, we started plotting what we wanted to get out of our new data stack.

And the main thesis for us is that we really want to treat data like we treat software. That was a big revelation for us about how data engineering should work. We really wanted to bring the same great tools, techniques, and patterns that people have built over 40 years of software engineering to our data teams.

Things like CI/CD, lineage, understanding and self-service, and using the right data. That was the key fundamental building block for us.

All right, I hope there are some eighties kids in the room who like the title.

All right. So this is really the money slide for us. The key for us was starting from unified data models and having those data models work directly on our execution stack.

And of course, we don't want to run that execution stack or the infrastructure ourselves, and that's what the Legend platform lets us do.

The Legend platform is our semantic data mesh that lets people talk and work at a business layer and then map that to the underlying infrastructure.

And so what we get out of this is Legend doing the heavy data modeling and AWS and Redshift doing the heavy lifting on infrastructure and execution, which is like a match made in heaven from our perspective in data engineering.

And now we can start with this pattern, coalescing everyone on one source of truth, both at the infrastructure level and at the data model level, with unlimited scale and elasticity.

And so now you go from that spaghetti to the picture you have here. The key here, again, is that there are two levels of scale: you get the amazingness of AWS services, and then on top of that we get the scale of Goldman data engineering and our platform, which reduces the complexity to the picture you see here and helps all our data engineers innovate faster.

All right. So what do we get at the end of this? AWS and Redshift do the heavy lifting on the infrastructure.

And that allows our data engineers to focus on their main job, which is data curation for our businesses and our clients, all with the right guardrails of governance, data quality, and consistency. You can see some of the stats here, but I like to point out the one unified data model as the key stat.

All right, last eighties reference, I promise. So what are we super excited about? We love how the services are evolving: Serverless, as Eugene mentioned, and zero-ETL. Again, the theme here is that we do not want to run infrastructure; we want to offload that to the heavy lifting that AWS has done.

All right, let's do a quick behind-the-scenes tour and nerd out a little bit to close the talk. It now sort of feels like magic to our data engineers, and they're actually happy doing their jobs instead of managing DevOps pipelines all day long and looking after jobs. We've got zero-ETL, data sharing, and finally, for consumption, our financial cloud that features Legend. You can see the consumption patterns on the left, all from a single source of truth for the consumer. And here's a little bit of a deeper dive.

But the point of this slide is that now that we've cleaned up our mess internally, we can, through what we call the GS Financial Cloud for Data, also make that data available to our clients: the same data and the same curation that we use internally.

And then finally, to wrap it up, I'll throw up a picture of a self-service tool that we've built as part of Legend, so you can see that it's real. It's a real query that people run internally to get real data. And with that, I'll wrap it up. Thanks.

Thank you, Neema. That was awesome.

And we're really looking forward to the future partnership and to supporting them with additional features like Redshift Serverless as well as Redshift ML, which I'm going to talk about in a moment. So let's shift gears and talk a little bit about data science and machine learning.

As customers have been asking for machine learning capabilities on their data warehouse, we introduced Redshift ML to make it easy for SQL developers to run machine learning tasks. The beauty of this is that you can use a SQL interface to create and train models without really having to learn machine learning technologies.

Behind the scenes, we integrate with SageMaker Autopilot to create and train the models. In addition, as models are created, you can deploy each model as a UDF within the Redshift environment to do predictions, so you can leverage the blazing-fast Redshift environment to run billions of inferences.

Redshift ML supports both supervised and unsupervised machine learning, and we also support bring-your-own-model for pre-trained SageMaker models as well.
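A minimal sketch of the SQL flow, with hypothetical schema, table, column, and role names, looks like this:

```sql
-- Train a churn classifier from a SQL query; behind the scenes, Redshift ML
-- hands the training data to SageMaker Autopilot.
CREATE MODEL demo.customer_churn
FROM (SELECT age, plan_type, monthly_spend, support_calls, churned
      FROM demo.churn_training)
TARGET churned
FUNCTION predict_churn
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftMLRole'
SETTINGS (S3_BUCKET 'my-redshift-ml-artifacts');

-- Once training completes, the model is exposed as a SQL function and
-- inference runs inside Redshift.
SELECT customer_id,
       predict_churn(age, plan_type, monthly_spend, support_calls) AS churn_flag
FROM demo.active_customers;
```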

A great example of a customer using Redshift ML is Jobcase. Jobcase is a job marketplace for job seekers with over 110 million registered users, so accurate job matching is essential to their business. That includes matching candidates with the right skill sets and years of experience, and making sure the commute is a reasonable distance for the candidate.

On any given day, Jobcase generates predictions for over 10 million job seekers, which adds up to billions of predictions on a daily basis. This is where Redshift really comes into play: they've significantly improved their recommendation system by enabling billions of nonlinear predictions in just a few minutes, and by leveraging Redshift ML they have made 5 to 10% improvements in revenue and membership engagement.

So it's a pretty awesome use case where Redshift has directly impacted the end user's business. Now, as Apache Spark workloads are becoming extremely popular, especially for things like data processing and machine learning use cases, we're hearing from customers about finding easier ways to integrate Apache Spark with Redshift.

So today we introduced Amazon Redshift integration for Apache Spark. This is a simple way to speed up Apache Spark applications accessing Redshift data from AWS services such as Amazon EMR and AWS Glue.

There's really no manual setup required, and you can do it easily with the default Spark connector. We've also improved performance significantly with predicate pushdown that enables faster reads, and based on our study, Amazon EMR together with Apache Spark runs up to 10x faster compared to existing Redshift Spark connectors.

So let's dive deep into security and reliability. This is especially important as we see more customers running mission-critical analytics on Redshift. A big benefit of using Redshift is that we provide built-in security and compliance as part of the Redshift experience, out of the box.

This spans multiple areas like authentication, access control, auditing, and encryption, as well as helping you achieve compliance with things like FedRAMP and HIPAA for regulated industries. Over the past year, we've invested heavily in access control, which has been a big ask for many customers.

So earlier this year we introduced role-based access control, and also provided column-level as well as row-level security as part of that. Today we also talked about the Lake Formation integration with data sharing, which is in preview. And today we're excited to announce dynamic data masking as a preview.

Dynamic data masking is a new feature for customers that want to protect sensitive data by managing data masking policies through a SQL interface. You can mask sensitive or PII data based on different access control requirements. Masking can be full or partial, and we support both cell-level and column-level masking based on specific conditions that you define.
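Since this is a preview feature, treat the following as a rough sketch of the policy DDL rather than definitive syntax; the table, column, policy, and role names are illustrative.

```sql
-- Partial mask: keep only the last four digits of a credit card number.
CREATE MASKING POLICY mask_cc_partial
WITH (credit_card VARCHAR(256))
USING ('XXXX-XXXX-XXXX-' || SUBSTRING(credit_card, 13, 4));

-- Full mask for everyone else.
CREATE MASKING POLICY mask_cc_full
WITH (credit_card VARCHAR(256))
USING ('XXXXXXXXXXXXXXXX'::VARCHAR(256));

-- Attach with priorities: analysts see the partial value, all other users
-- see the fully masked value.
ATTACH MASKING POLICY mask_cc_partial
ON customers(credit_card)
TO ROLE analyst PRIORITY 10;

ATTACH MASKING POLICY mask_cc_full
ON customers(credit_card)
TO PUBLIC;
```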

The next features are around availability and reliability. As customers' systems become mission critical, they also want to make sure that they have a highly available architecture for their data warehouse. We already have the ability to relocate your cluster from one Availability Zone to another using cluster relocation.

But for customers that want minimal downtime, literally in seconds, and also a capacity guarantee, today we're introducing Redshift Multi-AZ in preview. Multi-AZ enables a single endpoint that spans your compute across two Availability Zones. In case of an AZ failure, you can continue to run your applications, and there's really no idle capacity: you can leverage all your compute resources during normal times, and during an unfortunate failure you can continue running your applications with minimal downtime, zero data loss, and no manual intervention.

For customers that are using AWS Backup, this is a really easy way to manage your backups across different AWS resources such as EC2, S3, RDS, and DynamoDB, and there are many others that it supports as well. Today we're also introducing AWS Backup integration for Redshift.

You can create backup plans to automate backup scheduling and retention for Redshift resources, and you can also use these backups to restore Redshift data to a point in time at the cluster or table level, based on what you specify.

Our last area is around price performance. We've done various benchmarking studies over the years to compare Redshift with other cloud data warehouses. In particular, we've compared Redshift in an out-of-the-box environment, meaning no performance tuning on the data warehouse, just using Redshift and other cloud data warehouses out of the box.

We ran TPC-DS and TPC-H benchmarks, which are industry-standard benchmarks, using three terabytes of data, and you can see that Redshift achieved better price performance in both scenarios. Benchmarking matters, but real-life workloads matter even more.

So we took a typical BI dashboard scenario. This is a dashboard with typically short-running queries over small amounts of data, where you might have a large number of end users or analysts accessing the data warehouse cluster at the same time. In this case, query throughput matters: you want to be able to serve a lot of queries and make sure they run concurrently.

We compared Redshift with other cloud data warehouses for a single cluster. The red line here is Redshift, and you can see that we've achieved higher throughput compared to other cloud data warehouses.

And in the graph on the right, you can also see that when we use concurrency scaling, not just a single cluster but bursting to additional clusters, you get linear scaling: as concurrent queries increase, query throughput increases as well.

In order to support these performance improvements, we've introduced various new Redshift capabilities behind the scenes. Just to give you a few: we introduced vectorized scans for Redshift tables and added write capabilities for concurrency scaling.

So in addition to reads, you can also burst your cluster for writes to serve peak-hour demand; these are things like ETL-type workloads, which has been a big ask for many customers. We also added improvements in our commit performance for faster writes. So there's a lot happening behind the scenes to make Redshift perform even better.

Finally, various migrations are occurring. We've heard from customers like Nexis, Intuit, McDonald's, Moderna, Toyota, and Zynga moving from legacy data warehouses onto Redshift, and we continue to support customers moving from various different database technologies into the cloud and onto Redshift.

As part of that migration, we're also introducing additional SQL commands that have been popular requests from many customers. Those are MERGE, CONNECT BY, GROUPING SETS, ROLLUP, and CUBE. These are all very common SQL commands that customers want to use on Redshift as they migrate from legacy data warehouses.
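For example, a typical upsert with MERGE and a multi-level aggregation with GROUPING SETS look roughly like this; table and column names are illustrative.

```sql
-- MERGE: upsert staged rows into a target table.
MERGE INTO sales.orders
USING sales.orders_staging s
ON sales.orders.order_id = s.order_id
WHEN MATCHED THEN
    UPDATE SET order_date = s.order_date, amount = s.amount
WHEN NOT MATCHED THEN
    INSERT VALUES (s.order_id, s.order_date, s.amount);

-- GROUPING SETS / ROLLUP: several aggregation levels in a single pass.
SELECT region, product, SUM(amount) AS total
FROM sales.orders_by_product
GROUP BY GROUPING SETS ((region, product), (region), ());
```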

Another common ask we've been hearing from customers is support for larger values in the SUPER data type. The SUPER data type supports semi-structured data such as JSON, and we now support JSON values up to 16 megabytes, which is in preview now.
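A small sketch of working with SUPER and PartiQL navigation, using a hypothetical table and fields:

```sql
-- Store semi-structured JSON in a SUPER column.
CREATE TABLE app.events (
    event_id BIGINT,
    payload  SUPER
);

INSERT INTO app.events
SELECT 1, JSON_PARSE('{"user_id": "u-42", "action": "login", "tags": ["web", "mobile"]}');

-- Dot and bracket navigation over the nested document.
SELECT e.payload.user_id,
       e.payload.tags[0] AS first_tag
FROM app.events e
WHERE e.payload.action::VARCHAR = 'login';
```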

So that's all for today. We covered a lot, and we're really looking forward to everyone testing out these features and telling us how they worked out. We have various content available, whether you're just starting out on Redshift or you're already an advanced user.

So thank you for the time, and make sure to complete the session survey. Thank you.
