Hello. Can you hear me? Yes. Ok, great. Welcome. Wonderful to have you all with us this afternoon. Thank you for joining us, and we hope you're having a wonderful week at AWS re:Invent, our favorite week of the year.
Welcome to "Making Better Decisions with No Code ML." Before I get started, my colleague Danny Smith asked for a show of hands: who here is a data scientist, a line-of-business user, or somebody with a data background? And it was a mix. That's great, because that's what we want. We want to talk to you about how you can make better business decisions using what we call no-code ML, and today we're going to dive deep into that.
My name is Sham Srinivasan. I'm a product manager handling Amazon SageMaker Canvas, the no-code ML service that AWS offers. I'll be joined shortly by my esteemed co-speakers: Danny Smith, and our honored speaker from all the way from Korea, Derek Lee, who is going to talk us through the Samsung story. So let's get started.
If you listened to Swami's Data and ML keynote this morning, one of the aspects he talked about was the democratization of machine learning. That's what this session is all about: we're going to dive deep into what that actually means. What do we mean by ML democratization? How do we create value from machine learning? We'll follow that up with a demonstration, because a picture, or a demo, is worth a thousand words; no slide can do it justice unless you see the actual product.
So Danny is going to give us an end-to-end demo of what Canvas is and what we mean by end-to-end ML value creation. We'll follow that with Derek telling us about the Samsung story. It's a fascinating story, and you're gonna love it. And then we'll wrap it up with some Q&A.
So let's take a step back, not too long ago, about 10 to 12 years, to around 2010, when machine learning was getting out of the research labs. AI and ML have been around for decades; in fact, Amazon had been using AI and ML long before AWS was born, and we continue to do so. Around that time, in 2010, machine learning was gaining a foothold in the enterprise, and it looked as if it held promise to solve business problems.
Technologies were built, and AWS was at the forefront of this journey. For those of you who have seen us through the years, we have launched a number of solutions and services focused on machine learning, making us the leader. We wanted to ease ML adoption into the business, not restrict it to the research labs. But of course, the folks who were using ML were still the ML practitioners, some of you in the audience today: data scientists, ML engineers, data engineers.
But going back to ML democratization: it makes sense only when we take ML mainstream. When a business analyst, a business user who has no background in coding or in technology, can use machine learning, then we call it a success. But the business analyst needs to be unblocked. A typical business analyst loves data; they know their data, but they may not have the ML expertise, and upskilling is difficult. You can't become a data scientist overnight, as we know.
The second part is transparency: the business needs transparency. You need validation from the experts, but it has to be transparent and collaborative. So no code is the future. In fact, Gartner in a report last year predicted that in just about two years, 80% of technology products and services are going to be built by non-tech professionals. And the key part is seamless collaboration, which I talked about earlier. You need the business teams and the data teams to get out of their silos and collaborate seamlessly, without being artificially forced to learn machine learning skills overnight.
So welcome to Amazon SageMaker Canvas. We launched Amazon SageMaker Canvas on this very stage last year, at re:Invent 2021, so it's a one-year-old baby. Canvas is a no-code visual machine learning tool; it is literally a workspace, a canvas. It gives you the entire journey of taking your data and generating predictions to solve business problems, all in a single workspace.
You can import data from different sources, understand and explore the data, and make the data work for you. And then, with the click of a button, ask Canvas to build a model for you. The data science nuances are hidden, or abstracted, from you if you're a business analyst: Canvas uses AutoML under the hood to build a model and give you the most accurate prediction.
For those of you who are interested, you can still go deep into the model metrics. Some of you may be interested in things like the F1 score, or you may just want to know the importance of a particular feature in your data that is driving the model. And then you can share the same model with your data science teams for validation or review. And last but not least, like any other AWS service, it's a pay-as-you-go model: you pay only when you use Canvas, with no license fees.
We've been busy. This is a crowded slide, and intentionally so. It's been one year, and we have been busy launching different capabilities in Canvas, thanks to feedback from customers like you. Customers have kept us honest and told us what they would like to see, be it in areas like exploratory data analysis, easy onboarding, or letting administrators easily provision Canvas users. And we have just begun; it's an exciting road ahead for us.
So you may ask: this is all good, but where can I use Canvas? What is the use case? Well, what I have here on the slide is just the tip of the iceberg; I've just scratched the surface. I talked to some of you in the audience before we started who come from a finance background: there you might talk about credit risk scoring or fraud detection. On the sales side, you might talk about propensity to buy, or avoiding churn and keeping customers loyal. In operations and logistics, you might want to predict delivery times, or keep your machines up and running. Canvas can make that happen for you.
So as I said, you have data, which is the foundation for machine learning. Ask questions; make the data work for you. Canvas is as simple as selecting a target column: if you have a table of data, choose a column and see how that target column is explained by the other features. It's as simple as bringing in a spreadsheet, creating a model, and generating predictions.
But don't take my word for it. Let's hear from Danny as you see the demo. Welcome, Danny.
Thank you, Sham. Thank you, everybody. Sorry, I want one slide first, because I want to explain the use case. This is a manufacturing use case. Imagine I'm a manufacturing quality engineer, and what I'm trying to do is understand the end-of-line quality test. When people make things, they test them before they send them to you, to make sure they work. And this manufacturing line has a lot of sensors and other information on it. So the use case is: we want to predict end-of-line quality, or at least explain it by those other variables we have, the other sensor readings, so that we can understand how to improve, but also spot problems ahead of time. That's the use case, and this is Canvas.
So the first thing I would say is, if you go into Canvas, I would highly recommend you press the help button, because the help button has a built-in guided experience for you. It'll teach you what you need to do, and the demo that I'm going to run today is actually built into the tool. Just as an example, press it and it'll say: this is how you import data, go to the data sets page and then click on import, pick the data you have; you get the idea. So never, never forget that there's help inside the application.
So let's go back to our use case. The first thing we have to do is get data in, and it's actually pretty straightforward. What I did was just upload data, right? So here's the data I wanted; I pulled it in, and we can take a peek at it. It's just the data I think we want. Yes, it is: we've got some test information, we've got some sensor readings, and we've got this x and y offset, which is the position of a component relative to where it's placed on another component; x and y offset is how far off from the ideal it is. And then, of course, we've got this end-of-line test, right? So we have some fails, and if we scroll down, here are some passes. That's the data I pulled in.
Now, I could have also pulled it in from S3, which is Amazon's storage service, or, if I had access to a database, I could pull in information from there. So as an example, tell you what, let me make this a little bit bigger so that you can see it. Here we go. If I had a few tables and wanted to pull them in together, Canvas would automatically figure out how they're related to each other and join them. If I wanted to, I could look at the SQL, but I certainly don't want to look at SQL. So that's how you bring data in.
Alright, let's get back to the story. So then I started building some models to analyze this data. Let me catch you up, because I've already started taking a look. The first thing I did was select the data, the data I showed you just a second ago. And then I came in here and said the end-of-line test, the EOL test, is the target column I want to predict. And as soon as I did that, Canvas started doing a bunch of things.
So Canvas looks at the data: what kind of data type is it, is it binary, does it have missing or mismatched values, how many unique values are there, what are the average values? And when you pick the end-of-line test as the target, it also analyzes that column: it sees you have pass and fail, and it automatically recommends a two-category prediction. There are lots of different things Canvas can do; this one is what a data scientist would call a binary classifier, a prediction between two categories.
There's also a multi-class classifier, for three or more categories, multiple-value predictions. If the target column is numeric, it'll recommend regression and other numeric models, even time series, for time series forecasting. In this case, I have a classifier; I want to see if I can predict pass or fail, so that's the model type it suggested.
We can also do other things to get our data ready. I always like to look at the data, so here are some visualizations of the data itself. You see that some of these columns, like this test one, are actually binary; so is gate one.
And if there are things that need to be fixed: first of all, you notice that it says validate your data here. Canvas is already looking at the data, running through validations; I'm just having it go check and make sure there's nothing wrong in there. But let's say there was. Missing values, for instance, are sometimes hard to deal with; we could remove rows that have missing values, as an example.
So for reading six, if there are any missing values, we could just remove them, and the same goes for replacing them. Or if reading six has outliers, in other words really strange, random occurrences in the data, we could scope those down and bring them into the normal range, in this case based on the standard deviation: if a value is beyond three standard deviations, bring it back within three standard deviations, so it doesn't throw off the data.
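For readers who want to see the idea outside Canvas, here's a minimal pandas sketch of that three-standard-deviation rule. The column values are made up for illustration, and this is just the concept, not Canvas's actual implementation.

```python
import pandas as pd

# Hypothetical sensor readings standing in for "reading six";
# the lone 500.0 is an extreme outlier.
readings = pd.Series([100.0] * 20 + [500.0])

mean, std = readings.mean(), readings.std()
lower, upper = mean - 3 * std, mean + 3 * std

# Pull any value outside +/- 3 standard deviations back to the boundary
# instead of letting it throw off the rest of the data.
clipped = readings.clip(lower=lower, upper=upper)
```

The clipped series keeps every normal reading untouched and only reins in the extreme value.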
So you can do a lot of data preparation, data fixing. One of the most interesting things is functions. Functions are just like what you're used to in a spreadsheet. So let's take a look at this reading six. Reading six seems to have a strange distribution in the data, right? Maybe, for human understanding, what we want to do is say: if reading six is greater than 130, call it high, and if reading six is less than 85, call it low, and otherwise something else.
You know, it's hard to type and talk at the same time, but you get the idea. Maybe we call that column r6 bin, and then we can just process the data, take a look, and see if we want to accept that or not, right? Canvas can spin through that. I've already done all this, so we don't need to do it again.
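That spreadsheet-style IF function maps roughly onto this pandas sketch. The "medium" label for the otherwise case is my own placeholder, since the demo never finished spelling it out, and the column name is illustrative.

```python
import pandas as pd

df = pd.DataFrame({"reading_six": [70.0, 90.0, 135.0, 110.0]})

def bin_reading(value):
    # Mirror the spreadsheet IF: >130 -> "high", <85 -> "low",
    # else "medium" (placeholder label for the unspoken "otherwise").
    if value > 130:
        return "high"
    if value < 85:
        return "low"
    return "medium"

# The derived column, akin to the "r6 bin" column in the demo.
df["r6_bin"] = df["reading_six"].apply(bin_reading)
```

This is the same kind of derived column a business analyst would build in a spreadsheet, just expressed in code.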
But as users, we're probably curious about our data, right? I certainly am. Right now, I'm trying to figure out whether any of this data you see can actually explain the end-of-line test. So maybe, before I do any modeling, I just want to take a look at the data, and probably the first thing I would look at is a correlation matrix.
A correlation matrix shows how the data is related to each other, right? So here's my end-of-line test, and the end-of-line test is 100% correlated with itself, which is logical. But it looks like this x offset has a high relationship with it, so that value may be useful for prediction.
Now, there are some other things in here too. Reading one and reading five are 100% correlated with each other, so we probably don't need both of those in the model. But if we wanted to take a look at that, I can come in and view a scatter plot of reading one against reading five, maybe run it against the entire data set instead of just a sample. OK, that's exactly the same information, right? And you can do other things with these visualizations.
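A correlation matrix is a one-liner in pandas, if you want to reproduce the idea outside Canvas. The synthetic columns below mimic the demo's perfectly correlated reading one and reading five; none of this is Canvas's own code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
x = rng.normal(size=200)

df = pd.DataFrame({
    "reading_one": x,
    "reading_five": x,                    # exact duplicate, as in the demo
    "x_offset": rng.normal(size=200),     # an independent column
})

# Pairwise Pearson correlations between every column.
corr = df.corr()
```

The duplicated columns show up as a correlation of 1.0, the same signal Canvas surfaces to suggest dropping one of them.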
And I remember this x offset. So how does x offset relate to the end-of-line test? This is a box plot, and a box plot is really useful: the line in the middle shows the median, the box spans the 25th percentile to the 75th percentile, the middle quartile range, and the whiskers show the long tail of outliers beyond those.
So if you think about it: does the data for the items that passed behave differently from the data for the items that failed? I would say yes, so this is probably going to be a pretty good variable. But now I'm impatient; time is ticking, and it's only a short demo.
So what I want to do is start building some models. The first thing I would suggest before training is to just press preview. Preview is really quick; it's just a single pass, and we don't charge you for it. So it's useful for seeing whether we want to submit all the data or scope it down.
Here's an example I ran just a second ago. This model ran one quick pass: 93% accuracy. It looks like we can explain the end-of-line test really well. Here are the columns that matter most, and you see down here that gate two, gate one, and test aren't really explaining much or contributing to the prediction.
So maybe I just get rid of those. Now, when I'm getting rid of these, a couple of things are going on. The total cell count is going down, and when we actually train a model, the pricing is based on the cell count. Cell count is just rows times columns; everybody knows what a spreadsheet is. So this is a great way of reducing your cost, by narrowing down the scope without giving up much.
And we keep track of that: we're dropping test, gate one, gate two, and reading one, and we keep track of that with the recipe. But does it help us any, or hurt us? We can just run it again and see what happens. It always seems longer when I'm standing up here in front of you. And we can see what the result is.
Well, OK, here's the result. The accuracy is about the same, so we can get rid of these columns without really impacting accuracy. Now I'm probably ready to continue and actually build a machine learning model that takes advantage of the AutoML.
So the AutoML does some things; let me show you for a second. There are two types of AutoML model building. One of them is called standard build. Standard build is the best of AutoML: it does a bunch of data preprocessing, it cleans up missing values, it standardizes the data, it does all this preprocessing, then it looks at all these different algorithms, and then it does what's called hyperparameter tuning.
If you don't know what hyperparameter tuning is: remember old radios, where you had to turn the knobs to pick up the radio signal? That's only for us older people. Hyperparameter tuning is just tuning all those input parameters to see if there's better reception, a better model. So it'll do all of that, it'll do about 100 iterations of it, and then it'll show you the best one. That's two hours, and I don't know if I'm ready for two hours.
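For the curious, hyperparameter tuning can be sketched in a few lines of scikit-learn: sample some candidate knob settings, cross-validate each one, and keep the best. The random forest and the tiny search space here are illustrative stand-ins; Canvas's standard build runs its own AutoML, on the order of 100 such trials.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=200, random_state=0)

# "Turning the knobs": try several candidate settings, cross-validate,
# and keep the one with the best reception (score).
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [10, 50, 100],
        "max_depth": [2, 4, None],
    },
    n_iter=5,      # an AutoML run would do on the order of 100 trials
    cv=3,
    random_state=0,
)
search.fit(X, y)
best = search.best_estimator_   # the tuned model that "picked up the signal"
```

Scaling `n_iter` from 5 to 100 is exactly the difference between a quick look and the two-hour standard build.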
So I'm going to go for a quick build, which takes something like 2 to 15 minutes. It does a subset of that best practice, but it's faster. And just so you all know, I'm not going to make you sit here and wait for 15 minutes; I'm going to show you one I already ran in a different version. It's the magic of cooking demos.
So here, let's pick up the story. Here we are: I pressed the quick build button. Now, let's see what kind of results we get. The first thing I want to know is, should I trust this model? That 94% sounds really good, but is it trustworthy?
So best-practice machine learning, and what Canvas does behind the scenes, is to take the data, use about 80% of it to build a model (that's the training data set), and hold out the other 20% as the test data set. After the model is built, we feed that test data set back in and see whether the model learned generally, so that it's good on data it hasn't seen yet, or whether it doesn't do a good job.
And that's what this shows. Of the 1,300 rows in this data set, we held out 263 of them. And I'm interested in fails: of those, 102 were actual failures, and this model predicted 101. However, there are a few false positives, right? We got it wrong in a few places.
Now, if you have a little citizen data scientist in you, you can take a look at the details and say: OK, for the fail category, we have 94% accuracy, but we have some false positives and false negatives, and that balance is represented by this F1 score. So this is a pretty good model; I would feel I could trust it.
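For anyone who wants to see that hold-out idea in code, here's a minimal scikit-learn sketch: split off roughly 20% of the rows, train on the rest, and score the held-out set with accuracy and F1. The synthetic data set and the logistic regression model are stand-ins, not what Canvas actually runs.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# 1300 synthetic rows, echoing the size of the demo data set.
X, y = make_classification(n_samples=1300, random_state=0)

# Hold out 20% of the rows as unseen test data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

accuracy = accuracy_score(y_test, pred)
# F1 balances false positives against false negatives in one number.
f1 = f1_score(y_test, pred)
```

A high F1 on the held-out rows is the same kind of evidence Canvas surfaces when it asks you whether to trust the model.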
So if I can trust it, let's do something with it. We could go straight to prediction, run some new data through, and get some predictions, but I'm not going to do that yet; I'll tell you about that in a minute. What I want to do first is see if there's more I can learn just from the behavior of the model.
And this one's really interesting. Look at this variable, x offset. The physical reality behind this model is that we're taking a component and putting it down on another piece of equipment, and if the placement is perfect, x offset and y offset are 0, 0, because these are offsets from the ideal.
Now, nothing's perfect in life, especially in my manufacturing process. So that's what x offset means: how far off we put it down. And what's really interesting to me as a quality engineer is this relationship.
Here's the impact on the prediction of failure, plotted against the value of x offset. As you can see, once it gets above about 10, and by the time it hits maybe 18 or 19, it's really causing problems with my quality result.
Now, I don't even have to run a prediction. I can pick up the phone, call the line, and say: hey, you need to make really sure that this thing's adjusted. I can just take action; they'll go look at it, and we fix the problem.
But let's run some new data through instead, because this is pretty useful for predicting the future. I can do predictions in two different ways. One is a batch. So here is new data, new manufacturing that's come off the line. What we don't have is the test result, because that's what we want to predict.
So I select that and just run it through. It's only 35 rows, so it'll run in a few seconds, and then we can take a look. What we get is not only the prediction, so this one's probably going to fail, but also a probability.
So here the model is saying there's a 99% probability that this item is going to fail. I would be very interested in that if I were the production line manager.
OK, there's another way we can do predictions, a feature our customers call what-if. Let's say we have certain values and we want to see what a change in one of those values does.
You remember I said, you saw the curve for x offset, right? So if we take it up to 17, what do you think is going to happen? It's a demo, so of course, yes, we did get a fail. What's interesting, though, is that we also get the probability.
At 17, there's an 87% chance of failure, with everything else being the same. At 16, it's only a 40% chance of failing. So you can get a sense of where the sensitive areas are, and this is why our customers love it.
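That what-if behavior is easy to sketch with any model that exposes class probabilities: hold every feature fixed and sweep just one of them, watching how the failure probability responds. This is an illustrative scikit-learn version, not Canvas's internals; the first feature plays the role of x offset.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=4, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# What-if analysis: copy one row, vary a single feature,
# and record the predicted probability of the positive class.
row = X[0].copy()
probs = []
for value in (-2.0, 0.0, 2.0):     # candidate "x offset" settings
    row[0] = value
    probs.append(model.predict_proba([row])[0, 1])
```

Plotting `probs` against the swept values gives exactly the kind of sensitivity curve the demo used to find the problem zone above an offset of about 10.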
So I'm almost done; I've got one more thing to show you. Let's say I want my ML engineering group to take a look at this. I want data scientists to review it, to make sure they're comfortable with the AutoML. And then, more importantly, I want them to take it and productionize it, so I can embed this in an automated system on the line.
So how do I do that? Well, the first thing we do is run that other kind of model building I told you about, the standard build. I don't want to ask a data scientist to look at the quick model; I want them to look at the best-practice model.
So I ran it; I spent the two hours running it. I got a more nuanced result, and actually a better result as well, and now I can share it. So I share this, and if I send the link to the data scientists, they can see it in SageMaker Studio. SageMaker Studio is the interface a data scientist uses to take advantage of the SageMaker managed service.
And here is the Canvas model from the data scientist's viewpoint. In this case, if there had been manual feature engineering, it would show here. I didn't actually do anything in this model; I just submitted it all and let them sort it out.
But behind the scenes, as I told you, it does a lot of preprocessing and things like that. So the data scientists can come in, scroll through, and look at all the analysis we did on the data.
Here's an example: reading one and reading five were the same value. Since I didn't get rid of one of them, we just show that to the data scientists so they'd be aware of it.
What else can we look at? Well, we can look at all of those 100 iterations; a data scientist calls those trials in an experiment. We can see them all, and we can even go into a notebook and see how they were generated.
So I can scroll down a bit. Here's an example: this one was looking at an XGBoost algorithm, which is a tree-based algorithm. So what did it do? It converted some numeric features using an imputer, it converted categorical features using an encoder, and it looks like it used some robust PCA followed by some standard scaling.
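The generated pipeline described above (impute, encode, reduce, scale, then a tree-based model) has roughly this shape. In this sketch I substitute scikit-learn's gradient boosting for XGBoost and plain PCA for robust PCA, and I skip the categorical encoder because the toy data is all numeric; it illustrates the pipeline structure, not the notebook's exact code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(0).normal(size=(200, 6))
X[::10, 2] = np.nan                    # sprinkle in some missing sensor values
y = (X[:, 0] > 0).astype(int)          # synthetic pass/fail label

# The same shape of pipeline the generated notebook describes:
# impute -> scale -> PCA -> tree-based model.
pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=4)),
    ("model", GradientBoostingClassifier(random_state=0)),
])
pipeline.fit(X, y)
```

Packaging the preprocessing and the model into one pipeline object is also what makes the artifact easy to productionize later.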
And here's the code. If you're not a data scientist, this is maybe not the most compelling thing in the world, but if you are, it's really interesting, right? It's full transparency, full visibility.
We can also look at the best model, and we can look at explainability at whatever depth a data scientist wants. So, data scientists: all those little hyperparameters I was talking about, and where they were tuned.
You can come down and see those. Here's an example: here's our learning rate hyperparameter. This was the tweak, the tuning, that gave us the best result. They can look at all this; they can look at performance the way they want to.
And rather than scrolling through all of this, I think this is a really cool feature: it's also self-documenting. We document everything the AutoML did and learned, so the data scientists have that.
And then, finally, the last thing I want to show you: a data scientist will want to take any of these artifacts and productionize them using the standard SageMaker functionality. They have access to the artifacts themselves.
So, the model artifacts: not just the model, but the input data, where we split the training and test data, and where we ran it through the transformation logic. If we wanted to see the feature engineering code itself, we could take a look at that.
People always ask me this, so I'm just heading off some questions before the Q&A section. They ask, what's the feature engineering code? Well, there it is. I have to say, you've got full access to everything you need as a data scientist.
All right, so let's go back. What did I do? I selected data. I said I wanted to predict, or explain, the end-of-line test results. Canvas suggested a model type for me. I explored some data, ran a preview model, maybe scoped the data down, and submitted it for formal training. I was able to see that I could trust the model, and I could get some interesting insights from the data itself, maybe go act immediately. I could run predictions, and I could share, just like all the things in the help that I told you about.
So again, if you remember nothing else, go into the help, and the help will guide you. But hopefully you got to see how I could make better business decisions by analyzing my data with SageMaker Canvas.
And one last thing: since I'm done with it, I'm going to log out, so that I stop the session charges. All right, so thank you for that part; my part is done. Now I have a really exciting job: I get the privilege and pleasure of introducing Derek Lee from South Korea. He works for Samsung, and he's going to tell you how his group uses Canvas. So Derek, please come on up. Thank you very much.
Wouldn't it be amazing to have a demand forecast we can trust, implement, and execute and materialize one by one? That would be really amazing, right? Then how would I score the past demand forecasts Samsung Electronics' memory division made? Yes, I will be generous and give 100 points. So please give me and my team a round of applause. Thank you so much.
Yes, our seniors have done well in the past; the memory division has been at the top for more than 30 years and is still doing well. But is it OK to stick to the existing way? It may have been OK in the past, but it's different now. The environment that surrounded us in the past was less complex than now. In particular, a single device, the PC, carried more than half the weight, and the future was predictable based on simple information. But now there are a lot of new applications and various devices, such as PC, server, mobile, IoT, and cloud, and environmental factors such as the pandemic and logistics chaos pose greater impacts on the future.
So demand forecasting has become even more important than before, and we concluded we should change our demand forecasting methodology to cope with this complex environment. I think you are worried about the same thing in your own fields. So let's look back at the demand forecasting methodology we used in the past.
First, the customer's view. Customer demand signals are really important, but they are also highly volatile and the result of the customer's own strategy, so it is really difficult to make an important decision by solely listening to the customers.
Second, you cannot simply make decisions relying on external research forecasts. And simple regression is a good technique, but it is really hard to detect inflection points with it, and really hard to reflect new factors shaping the future.
So that's why we have tried to advance our demand forecasting methodology: by discovering new and sophisticated ways of observing this complex environment, such as technological change and the uncertain future, and by trying to combine our experience and domain knowledge with data analysis to gain insights and solutions to our problems.
So we looked for the sort of service that business users like me could easily use to get output. It may sound funny, but to me, the news on this stage about SageMaker Canvas was like a movie. I immediately contacted the AWS Korea team after encountering this phrase: AI and machine learning without much machine learning expertise, as shown here.
Of course, I had some skepticism back then. So, looking back on my steps to today's presentation: we had a kickoff meeting with the AWS Korea team in April. As I mentioned, we are business users, so we needed training. Through the AWS Data Lab program, we learned how to use SageMaker Canvas, and we learned how to prepare and process the data. And now I'm presenting in front of you. As expected, the whole process did not go smoothly.
Most of you probably know what S3 is, but I didn't; I had never used the cloud or AI and machine learning services before. So I started by trying to understand what S3 was. Even my boss, Tommy Kong, asked me: what does S3 mean, three-story storage? No, Simple Storage Service. With more questions answered, I gradually became more and more acquainted with AWS.
Second, I thought that if I just uploaded the data to the AI and machine learning service, it would predict the future. But I soon realized that we needed the right data; data preparation, formatting, and structuring were required. At first, even when I got output, I didn't get good results.
So I had challenges judging whether to believe an unconvincing result. Despite the difficulties, our team solved the challenges one by one with the AWS team. We started with a very small project. First, we collected the data with Excel, a tool we are very familiar with, and we processed the data through a tool called Data Wrangler. Data Wrangler was available with a click in the console, and its functions were very impressive, as they made our data richer and helped us understand the data by seeing the distribution of the values.
After that, I uploaded the well-organized data, matching the required data structure, to SageMaker Canvas. What I want to emphasize here is that all of this was simply done with a click in the console, so business users like me could easily use the tool.
Now I will explain the steps for using SageMaker Canvas. My first impression was that Canvas was easy and user-friendly. Even my first-line manager, Lena Lee, took a one-day training program, and after that she had mastered how to use it. Don't be surprised; you will find it very simple. You can just start right away by selecting your data on your PC or in your S3 account.
And once you've uploaded it, the appropriate model is automatically recommended. I mostly used the time series forecasting model. When I started the training, which is called build here, it ran for 10 minutes to several hours. And when the results came out, we were able to analyze them. It was especially great that you could immediately check which related data affected the result.
With this, you can get better training results by adding or deleting related data. Isn't it very simple? The two graphs here show actual versus predicted results. The graph above is the result of forecasting with historical data only; the graph below is the result of forecasting with related data. Comparing the two graphs, which do you think is more accurate?
Yes. So, you know, I wanted to see the quarterly PC set shipment demand for the next eight quarters. When I added the related data that influences future demand, the forecast accuracy increased dramatically. And the graph gave us an upper bound and a lower bound rather than a single specific value, so we were more convinced when we combined the result with our own domain knowledge.
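The two ideas in this part of the story, adding related (exogenous) data that influences demand and getting an upper/lower bound instead of one number, can be illustrated with a toy least-squares model plus residual quantiles. This is only a sketch of the general technique; it is not how Canvas computes its forecasts or intervals, and the synthetic "related" series is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy history: 20 quarters of demand driven by a trend plus one
# "related" series (e.g., a market indicator), with noise.
t = np.arange(20)
related = np.sin(t / 3.0)
demand = 100 + 2.0 * t + 15.0 * related + rng.normal(0, 3, size=20)

# Fit demand on [1, t, related] by ordinary least squares.
X = np.column_stack([np.ones_like(t, dtype=float), t, related])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)

# Forecast the next 8 quarters, assuming the related series is
# known or projected for the future.
t_f = np.arange(20, 28)
related_f = np.sin(t_f / 3.0)
X_f = np.column_stack([np.ones_like(t_f, dtype=float), t_f, related_f])
point = X_f @ coef

# Use residual quantiles from the history to form rough lower/upper
# bounds, mimicking an interval forecast rather than a single value.
resid = demand - X @ coef
lower = point + np.quantile(resid, 0.10)
upper = point + np.quantile(resid, 0.90)

for q, lo, pt, hi in zip(t_f, lower, point, upper):
    print(f"quarter {q}: {lo:.1f} .. {pt:.1f} .. {hi:.1f}")
```

Dropping the `related` column from `X` and refitting shows the same effect Derek describes: the trend-only model misses the swings that the related series explains.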
You know, after using SageMaker Canvas, we experienced positive changes. Business users like me have been able to use AI and machine learning tools and get help making data-driven decisions for the future.
Yeah, of course, forecast accuracy is getting better. But you know, our team still has some very important tasks left. I think most of you are also worried about this in your own demand forecasting work: how would you explain it? When your supervisor, your boss, your manager asks why the AI/ML model forecast like this, it is really difficult to explain, you know.
So explainable AI is very important now, and making forecasts reliable and actionable is also a very difficult problem. And how can we make a better demand forecast next time? You know, I think we are not predicting the future; we are analyzing the data. We are going to develop this model further, build it up again, and keep improving it.
You know, the difference between us and fortune tellers is that a fortune teller's prediction accuracy is based on love. It's all about love. Meanwhile, using data and AI/ML like this, we can continuously advance our forecast accuracy over time, even though there may be some errors initially. Accordingly, our team wants to be the evangelists of this movement and spread this culture within our team and beyond the small project we are currently working on.
So let's work together to design a future beyond the demand outlook prediction I just discussed. Thank you so much for your time, and I hope you enjoy the rest of re:Invent. Thank you very much. Thank you, Derek, for sharing your story. What a wonderful story. It's always gratifying to hear our customers talk through their journey, the challenges and the successes. Thank you again, Derek, for coming all the way from South Korea. We truly appreciate it.
So let's bring it all together, right? We talked about the democratization of machine learning. The key is that no code is the future: you really don't need to know how to code to use machine learning and achieve your outcomes. Give analysts best practices without code. And finally, seamless collaboration: you saw in the demo how we built a model and could share it with the data science team so they could go under the hood and look through the details if needed, because the goal is for analysts and ML practitioners to operate as a single team.
So I would love for you to explore this further. We have a couple of workshops tomorrow for Canvas so you can actually get hands-on. We have two workshop sessions tomorrow at the Caesars Forum, and we also have a lab tomorrow at the Venetian. We'd love for you to get hands-on with Canvas. If you want to do it at home, we have a course on Coursera which we launched earlier this year. It's a course about practical decision making: it gives you the fundamentals of data science and a chance to try Canvas for free. It was written by AWS professionals, and I would definitely encourage you to try this course.
For those of you who like to play with code, we have more sessions on some of the low-code services: SageMaker Data Wrangler for data prep, SageMaker Autopilot for AutoML, and SageMaker JumpStart for pre-trained models. We have a bunch of sessions throughout this week, so please check them out.
And finally, you can always go to the Canvas website, where all this information is available: how Canvas works, resources, blog posts, videos, and more. Last but not least, we want to hear from you. Canvas is built based on feedback from customers like you, so please let us know your feedback, not only on this session but also on what you would like to see and how we can help you use Canvas to achieve your goals.
Thank you again and we truly appreciate you joining us today.