Build without limits: The next-generation developer experience at AWS

Please welcome the Vice President of AWS Generative Builders, Adam Seligman.

Hey, everybody. Welcome to Thursday afternoon. Who's ready for the post-lunch, big-energy session? Yes. Yes. Yes.

Okay. Hey, I wanted to start by just thanking everybody for coming. I'm Adam Seligman. I'm the VP of Generative Builders, so I work in this next-generation developer experience organization, and we're incredibly passionate about bringing you builder tools for everything you do on AWS.

But I wanted to start by thanking you. I wanted to thank everybody who's new to the community for coming, and everybody who's a longtime AWS community member. I wanted to thank our heroes and our community builders. That's what makes this community so special, and it's what makes this event so special: all the human connections that you bring to it. So I just wanted to start with that. Thank you.

Okay. Now, on that note, I thought the best way to start a Thursday afternoon session was with this. Who has heard of PartyRock? Can I get a show of hands? Yeah. Pretty fun. You probably didn't see that coming from us. Pretty awesome.

So, PartyRock is a playground powered by Bedrock to build generative AI applications and get hands-on with important skills like building prompts, chaining together multiple steps, learning about temperature, swapping different models with Bedrock, and also just having fun.

So if you haven't seen it, let me show you a little bit about PartyRock. You can start by just describing the application that you want to build, no coding or anything, and click Generate app.

Here, we decided we're going to do team bonding in Las Vegas for the next-generation developer experience team, and it came up with the inputs and outputs and LLM steps and prompts, chained it all together, and built this really neat, fun application.

Now, you can build applications like this from scratch, or you can go in and hand-edit the blocks, connect things, and adjust parameters. It's really fun. And I think my favorite part is that you can share apps, privately or publicly, and then your friends and family can go and remix applications.

Has everybody shared an app or remixed an application yet? Yes. Really fun stuff. I want to give a special thanks again to the builders and the heroes. We launched this a little early with those communities, and they did a really neat job of building some applications.

We took some of our favorites here, the plant care companion and the chat with 480 otis. And I just love the creativity that we saw from the community as you all go off and build these cool applications and experiment with generative AI. I think this is a good starting point for the talk, too, because sometimes the best ideas come when we can get into that creative space, right? When you get out of the day-to-day drudgery and get to the creative stuff.

And that's actually where PartyRock came from. PartyRock started with a couple of engineers who had an idea and just wanted to experiment with LLMs and putting these things together, so they built an internal playground on Bedrock. It went viral internally; the Slack channel kind of overflowed with people saying, "I want in, I want in, I want in."

So we decided not to keep it just an internal thing, but to bring it to our community and see all the things you can do. That's what this talk is about: getting to that creative part of software development, getting out of the undifferentiated stuff, the heavy lifting, and into that really creative zone. It's something that inspires us on this mission we're on here, building developer tools for all of you. Sound good?

Okay. So there are three things that I wanted to anchor on today. First, we wanted to show you a bunch of the new tools we're building, new developer experiences for the whole life cycle of software development. Second, we wanted to communicate how we're providing them to you in a responsible way, so that you can trust these tools in terms of quality and responsible use of AI.

And the third thing we want to do is recognize that we're in this moment where these tools are going to enable more people to do more things. We're going to talk about some of the new avenues that open up for other kinds of builders, in addition to the deep cloud experts that some of you, but not all of you, are.

Now, there are some things that I think we all carry in the field of software development as truisms, things we assume are just true and are never going to change. Here are three of those that I thought would be fun to start with, just to set the stage.

First, creating a full modern application is just a time-consuming thing: tens of thousands of lines of code and a ton of heavy lifting just to build everything. Second, scaffolding application architectures, wiring them up, building pipelines, and keeping it all running: that's hard and tedious. And third, maintaining: upgrading, migrations, moving to new versions. These things are tedious; we often don't get to them, but we need to, and meanwhile security vulnerabilities loom.

So I think a lot of us carry those around as if they're just truths, just part of software development. But what's inspiring is that we think all of these things, and we're going to show you this today, seem to be changing. With the kinds of tools we're building now and the way you're using them, a lot of these assumptions are no longer true. So it's an exciting time, and we propose it's an entirely new era of building, one that's going to transform how organizations build software, how developers work, how teams of developers work, and even the kinds of applications that we can go on and build together.

Okay. So that's exciting, and all that AI is great, but it's important we do this in a responsible way. While we at AWS understand there's immense potential in this generative AI stuff, we're incredibly focused on delivering it with responsible use of AI. And I want to give you a couple of examples.

One: it's really important that your tools aren't trained on the low-quality code out there, because you're likely to get low-quality responses. A second example: a tool should give proper attribution to open source, so you can make an informed decision about whether you want to take that code into your code base under its license. And also, scanning for toxicity and bias: these things are really important.

So this new era of generative AI is incredibly exciting, but we have to do it in a responsible way. That brings us to the mission of the next-generation developer experience organization and what we're here to do, which is to help you build without limits and actually reimagine what building on AWS can be. And this is the start of a pretty momentous shift; I just want to call that out.

So it's all about allowing you to bring that creativity to bear: let all this other stuff get out of the way, and use the tools to build things you could otherwise only dream of. Remove all those undifferentiated parts so you can get to the really joyful, creative parts and build the future. Okay?

So our first big push in this area was Amazon CodeWhisperer. I'm sure you've all heard of it, and many of you have tried it. We launched CodeWhisperer in general availability early in the year, so you can build applications faster, get inline code suggestions, and just speed up your development.

It's pretty neat if you haven't seen it. It generates code suggestions based on prompts, inline in your code, with contextual awareness of what's going on inside your files. It works to generate very high-quality output because it's been trained on the best of 17 years of Amazon best practices and documentation.

We really curate that to have a great training set, so you get great output, and we think it's the best way to work with Amazon services. Whether you're using EC2 or Step Functions or Lambda, all that expertise in how to use those things is built into CodeWhisperer.

And recently we worked with MongoDB to make sure you get an awesome MongoDB experience, too. Curated training data for their APIs and services is how this works, so you get great code suggestions when working with MongoDB.

Oh, before I forget, by the way: we want to make it possible for everyone to take advantage of this, so CodeWhisperer is free for individual developers, which we're very excited about.

I talked about toxicity scanning for anything that's biased or unfair, and also attribution of potential licenses. For example, with open-source licenses, you can make an informed choice about whether you want to take that license on.

Okay. Instead of just talking about CodeWhisperer, I thought what we could do is show you where it is. If you're not aware, these tools are evolving very fast, so if you haven't tried CodeWhisperer in the last month, or even six weeks, you're missing out, because the evolution of these tools, and what we're doing behind the scenes to improve the quality and relevancy of suggestions, is super fun to watch. Very exciting.

So I'd like to bring up Brooke. Could you come out and give us a great demo of CodeWhisperer today?

When developers in organizations write code, we need to make sure that the code complies with all of the internal specifications: things like internal SDKs, APIs, libraries, packages, and classes. Until recently, though, coding tools were limited to suggesting code from public data; they weren't aware of your company's internal code or nuanced best practices.

This often meant spending hours talking to experienced developers, trying to figure out what's going where and why it's going there, because the documentation could be sparse or not even exist. You then end up spending even more hours reading through code, trying to figure out what's going on.

You're also going to spend time writing and reviewing lots of undifferentiated code to go along with all of this as you try to follow best practices. That's why we've recently launched, in preview, a new CodeWhisperer customization capability.

It can make much more tailored suggestions that include your internal SDKs, APIs, libraries, and packages. This is going to help developers like me go faster and improve code quality, which helps ensure security.

Administrators also control which developers get access to which customization. All customizations are isolated, and we never use customers' CodeWhisperer content to train the underlying models.

To see how CodeWhisperer accelerates development, I'm going to walk through an example using a GitHub repo for a serverless ecommerce platform. What I'm doing up here is creating the customization that you can see on screen, and then activating it within my VS Code terminal, so it's using the tools that are already part of my workflow.

I'm now going into the order code to look at this Python function. We've got a lot of pieces here, but I want to add a coupon component so that my online store can accept coupons. What I'm doing here is working out that I need to define a constant for the coupon API URL and organize it with the other configuration. Later on I'm going to add a function called validate coupons that calls out to the coupons API to validate any coupons on an order. I'm actually going to call this from the existing validate function so that it can run concurrently with the other validations that already exist in this codebase.

As you can see up on the screen, this is all happening with the internal standards of the repo that I have on GitHub. This is just an example I've used to show you how it works, but I imagine you can all go back and think of the internal packages and standards you might have in your organizations, and how this can help you along the way.

We worked with Persistent, a global digital engineering and enterprise modernization company, to conduct a study on how CodeWhisperer customizations impact productivity. Persistent found that developers completed their tasks 28% faster with customizations than without.

Now I also want to show you another new capability that we recently made available in CodeWhisperer. The command line is used by every software developer every single day to write, run, build, debug, and deploy software. Despite how critical it is to the software development process, until recently the command line hadn't changed much since the 1970s. And it is notoriously hard to use: with tens of thousands of command line applications, it's almost impossible to remember the correct input syntax every time, so developers constantly need to switch context to their browser and back to find guidance and then use it.

Now CodeWhisperer is available for the command line, which solves these challenges by bringing CLI completions and AI natural-language-to-code translation to your existing command line. It integrates into your existing tools so you can keep your workflow, making the command line easier for beginners to pick up but also more productive for advanced developers.

As you type, you receive typeahead completions with inline documentation for over 500 different CLIs like git, npm, and docker. And of course, AWS completions are going to reduce the time developers spend typing repetitive boilerplate, context-switching to the browser for documentation, and learning new commands.

You can also write a task in natural language, such as "copy all files to S3", and CodeWhisperer will convert it into an instantly executable shell command specific to your local environment. I know how to do this, and I still check it every time because I doubt myself if it's in a different language. So this helps so much.

If you're using the native macOS Terminal, or the terminal in VS Code, or the terminal in JetBrains, you can download it and get started in minutes.

Thank you so much. We're going to swap back to you, Adam. It can automatically write all my git commit messages for me. Awesome.

OK. So this is great, and it's just the start of the direction we're going with generative AI tools and CodeWhisperer. Bringing AI into the IDE, into VS Code, is kind of just the beginning.

What we did is step back and look at the whole cycle of development as people work on building applications and solutions. If you think about it, a lot of developer time is not actually spent in that core creative part where you're adding some logic or building a feature. There's all this other stuff that you do. Think about when you get an issue assigned to you: you start working on the issue, flesh out how you're going to approach it, maybe research some of the technologies you're going to use and apply, and of course you have to make sense of the code base, because it might be new to you.

So there's a whole lot along the journey of software development that isn't just the point in time when you're writing a line or a block or a function of code. What we wanted to do was reinvent how you build over that entire cycle.

Now, this is really new for AWS, and I think it's new for the industry. It's super exciting: that whole cycle of planning, creating, coding, and now operating and then improving. Across that whole journey, we would like to provide tools, and we are providing tools, to help you.

And what's interesting is that we feel there's an opportunity to bring context from all those different stages together. You don't have a tool that does just one part; that context is with you through the entire journey, so you get the best guidance and the best assistance.

That's why we announced Amazon Q earlier this week. Amazon Q is a new generative AI assistant that helps you with every part of the journey. It's specifically for work, and it can be tailored for your particular business. You get fast, relevant answers to pressing questions, you can solve problems with it, and you can even generate content and take actions in different systems. It uses the data inside your company and the expertise inside your documents, and it knows about your code and your AWS environment.

And again, we've trained Q on the best of 17 years of AWS expertise, which is a really amazing training set of data. So you get really actionable, helpful guidance at every step of the development life cycle.

Now, Q is available everywhere you need it. We're going to show you that Q is available in the Management Console, in our documentation, in the AWS mobile app, and in your IDE, and you're going to see Q show up everywhere you're doing your work, so you can do your job faster and focus on the really creative, highest-value, exciting parts.

Again, Q is pretty neat. Another place you're going to find it is in Slack and Microsoft Teams; I don't want to miss that. It's pretty neat when you can chat with Q as a teammate in Slack or Microsoft Teams. The spirit of this is just helping your team go faster, helping you build faster, and giving you confidence through all the stages of the development life cycle.

OK. So Q in the console is pretty neat. This is available now, and I hope you've all tried it out. You can just chat with it right on the side (that icon up in the top right there), ask questions, and learn about unfamiliar technologies. You can get help architecting solutions, and it provides links to the expert resources too, supporting material like a Well-Architected Framework document, so you can do that deeper research if you decide you want to.

Previously, when you started a project, you often started with a lot of research: okay, what services should I use, and how do I best get started with them? With Q, you can not only ask questions about a service, you can ask questions that help direct you, make trade-offs between different services, and explain your particular situation.

And Q's knowledge of your systems and architecture running inside AWS is going to increase over time. So it's going to get smarter and smarter, providing you the best guidance and expertise.

Again, all of this is in the spirit of letting you put more of your energy into the creative part and less into the heavy lifting and research and all those other parts.

Now, Q is more than a conversational interface, and I wanted to talk about how we're bringing Q to more of the places where you work. We saw an opportunity to bring Q beyond just the IDE, beyond just the command line, and beyond just the console. There's so much more work, if you think about it, in tasks like understanding a large existing code base.

You know, whenever I looked at some code I was unfamiliar with, I really wanted to talk to a person, right? Someone who was familiar with it, to get oriented and find my way around. Well, now Q can help you with that.

So we've put Q in CodeWhisperer; it's part of the CodeWhisperer experience. Over in the left-hand panel (you can actually move the panel around, depending on your IDE), you can chat and ask questions about the code base. And remember, it's not just the single file you're looking at on the screen, but your entire code base, and Q is informed by other context from your AWS fleet.

You can ask questions about architecture, and it can summarize things for you. It's like having that expert available to guide you, even when you're unfamiliar with a code base or you're trying to figure out the best approach to tackle a problem.

I'll give you an example. Once you have it in there, you can ask all sorts of questions. Here: "Provide me a description of what this application does and how it works." You could ask about AWS APIs or best practices. You could even do things like generate tests, and that's nice when you're walking into an older code base that might not have great tests: before you make changes, you generate those tests so you feel a little more confident doing the work.

So again, it's back to that same theme of helping you speed up and get to the creative parts.

OK, but we didn't just want to work on the code-writing part; we wanted to meet you across the full software development life cycle. So we talked about some other key capabilities earlier in the week, such as checking off backlog tasks, troubleshooting errors, and generating tests. Let's walk through some of those.

So one thing that happens when you're implementing a new feature in an application is that there are a lot of steps to the process.

The first thing you need to do is make sure you understand what you're being asked, like the issue: what is the issue? Then come up with a plan: how are we going to tackle this? What are the pieces? What is the logic we're going to need to write, and which files is it going to live in?

Then you have to actually write the code and consume some APIs, and if you think about all those edits you're going to make, they might be across many files and thousands of lines of code.

Now again, I come back to where we started with CodeWhisperer. There are tools out there that can generate some lines or blocks of code as you're typing. This is a slightly different problem, right? We're talking about walking into a code base and really implementing a feature, which might be a lot of changes in a lot of places.

So we thought we could help with this problem, and we're really excited to share with you this week Amazon Q's feature development capability, which can ship an entire feature in a fraction of the time and automate the entire process: starting from a natural language prompt describing a feature and getting all the way to an application feature in minutes.

This is really, really neat. I hope you saw this in Adam's keynote and in Werner's keynote; we're going to show you more in a minute. This is in preview right now in CodeCatalyst, so every single one of you can try it and have Q building features for you in CodeCatalyst today, and we're going to release it more broadly soon.

Let me show you how it works. The first thing it does is create a plan. Then it collaborates with you iteratively to make sure the plan looks good to you, describes its approach, and then goes on and implements it for you. We'll give you a demo in a minute.

Another thing we wanted to do was bring the best of serverless development, pair Application Composer up with Q, and put it right in your IDE. So we're really pleased to announce that Application Composer is now available in VS Code. This is pretty cool for building serverless applications.

If you haven't seen this before: you have a fully visual, drag-and-drop serverless application composer, and over on the right is your code. As you move things visually, the code is updated in sync, and if you update the code, you see the updates visually.

What's neat is that Q is now right there: you can ask questions, get best practices, and even get code suggestions that you click to insert right into the code, and again see them visually. You also have the code there if you want to make any fine-grained edits. This is a really exciting, almost revolutionary way to build serverless applications.

So I'm really excited for you to kick the tires on this, and it's available now in the AWS Toolkit.

OK. So we've kind of covered the planning and creation stages of the application; now we want to move on to operations. We've talked a lot about building new features with Q so far, building new things, and that's creative and fun. But we also have to operate these things, right?

And there's a lot of undifferentiated heavy lifting when it comes to operations. There's troubleshooting and debugging: you might have an error in your code, you might find an error in the console or in your logs, and making sense of it can be a lot of work of researching and understanding what's going on.

And not only do you have to understand what's going on and find the problem, you then have to go implement a fix, implement it safely, test it, and make sure you can roll it out. That's a lot of work, right?

So we're pleased to announce Amazon Q's troubleshooting capability, which brings Q right into the console, into all these services, where you can diagnose and troubleshoot errors like that really quickly and easily.

So for example, you could see an EC2 permissions error or an S3 configuration error, and you can just press that "Troubleshoot with Amazon Q" button there. Amazon Q will come back, diagnose the error, and provide personalized, step-by-step guidance.

So you know what's going on, how to tackle the problem, and how to get back into production as fast as possible. Again, it's about covering this whole life cycle. We think the way to really make you successful is bringing Amazon Q and this generative AI capability to your entire life cycle.

So let's talk about another area, and that is modernization and migration. We build these apps, or someone before us left us an app, but the apps have to be updated over time, right? You have to migrate to new dependencies, libraries with security vulnerabilities need updating, and there are new versions of languages and frameworks.

It's a lot of work. It could take months, it could take years, and sometimes, dangerously, it just doesn't happen, right?

So we're pretty excited that Amazon Q now offers a code transformation capability to do updates and modernizations of apps in a fraction of the time. With just a few clicks, starting with Java, you can do things like move from Java 8 to Java 17, so you get the benefits of the latest security and performance updates. And soon Q will also do .NET to cross-platform .NET, allowing you to move from legacy operating systems to modern Linux.

And when you do that, as you know, you get a bunch of performance and cost benefits from moving your .NET applications to modern Linux. So using Amazon Q's code transformation capability can save you a ton of time on maintenance in these later parts of an application's life cycle. Oh, and Q will create and run tests to validate that the applications work correctly after the update.

Now, we had a team of about five engineers inside Amazon who used Amazon Q's transformation capability on 1,000 existing Java applications at AWS; we actually have a lot of Java applications internally. They were able to update 1,000 production applications from Java 8 to Java 17 in two days.

It took an average of 10 minutes each, and the hardest one took just under an hour. This used to take a couple of days, 2 to 3 days per application, to really go through all the steps and validate things correctly. So this was a game changer for us; we think it will be a game changer for all of you, and we're excited for you to try it out.

And again, use that generative AI to make this lean part of the life cycle go faster, so you can focus on the creative parts. Pretty neat. Are you ready to see it instead of just hear about it?

Oh, come on, if you want a demo, I need a little energy on a Thursday afternoon. Massimo, thank you.

So, what I would like to do in the next few minutes is show you a few of the things Adam has been talking about and try to ground them with three sample demonstrations. There are three scenarios that I would like to cover here.

The first one is optimizing your application: I want to take an existing application and optimize it. The second scenario that I would like to talk about is how you can implement a new feature, a new API, in the same Python application.

And the third scenario we're going to cover is how you can migrate a Java application from Java 8 to Java 17, as Adam was alluding to. So let's get into the first scenario. Imagine that you have been asked to optimize a Python application. You have never seen the application, and you have been asked to optimize specifically the way it interacts with DynamoDB.

We heard that this application has latency and cost issues that could be optimized, so you're going to start exploring, right? Investigating. The first thing you can do is use Amazon Q to figure out a way to create a baseline to monitor your DynamoDB interaction.

So you're going to ask Q, for example: how can I create this baseline? What metrics can I use to create it, so that I can then measure the level of optimization I achieve later on? Q is suggesting these two metrics.

So I switch to the CloudWatch console and configure these metrics. I'm now ready with this baseline to move to the next step; I give a name to this dashboard and then I move into the IDE.
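For reference, a baseline like this can also be scripted rather than clicked together. Here's a minimal sketch, assuming the standard DynamoDB CloudWatch metrics (ConsumedReadCapacityUnits and ConsumedWriteCapacityUnits) and an illustrative table name; with boto3 you would pass the returned arguments to CloudWatch's get_metric_statistics call.

```python
from datetime import datetime, timedelta, timezone

def build_baseline_query(table_name, metric_name, minutes=60):
    """Build GetMetricStatistics arguments for a DynamoDB capacity metric.

    The table name is an illustrative assumption; ConsumedReadCapacityUnits
    and ConsumedWriteCapacityUnits are the standard DynamoDB metrics.
    """
    now = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/DynamoDB",
        "MetricName": metric_name,
        "Dimensions": [{"Name": "TableName", "Value": table_name}],
        "StartTime": now - timedelta(minutes=minutes),
        "EndTime": now,
        "Period": 60,           # one data point per minute
        "Statistics": ["Sum"],
    }

# With boto3 this would be used roughly as:
#   cloudwatch = boto3.client("cloudwatch")
#   stats = cloudwatch.get_metric_statistics(
#       **build_baseline_query("Counters", "ConsumedReadCapacityUnits"))
```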

So this is my Python application. But as I said before, I've never seen this application; I was just asked to optimize some components. So I can ask Q (and I think this is a killer application for generative AI) to describe what this application is doing and how it works.

Q goes through the files and gives me an overview of what this application does. The second thing I'm doing here is starting to exercise an endpoint of the application so that I can see some DynamoDB activity going on in the background, and then I switch back into the Q chat console.

Again, not knowing the application, I ask Q to locate where the code for the interaction with DynamoDB is, and Q gives me the snippet of code that interacts with DynamoDB.

And immediately I see something that is very weird: the way I'm dealing with DynamoDB is that I'm reading a counter, increasing the counter, and putting the counter back. I know that DynamoDB can do better.

So I ask Q: is there a way to optimize the way I'm doing this plus-one on the counter? And Q comes back with: yes, you can use the UpdateItem API, specific to DynamoDB, which allows you to do an in-place update of that counter.

So instead of reading, increasing, and putting it back, I can just tell DynamoDB to increase that counter by one. What I do is copy that snippet of code and restart the application, and immediately what I see from the baseline is that my reads drop dramatically, basically to zero, because now I'm doing an in-place update.
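The change Q suggests here boils down to replacing a read-modify-write with a single atomic UpdateItem call. A minimal sketch of the two versions, assuming a boto3 Table resource with partition key "id" and a numeric "hits" attribute (the names are illustrative, not the demo's actual code):

```python
# Before: get_item + put_item -- two round trips, and racy under concurrency.
def increment_read_put(table, counter_id):
    item = table.get_item(Key={"id": counter_id})["Item"]
    item["hits"] = item.get("hits", 0) + 1
    table.put_item(Item=item)

# After: one atomic in-place update. DynamoDB does the +1 server-side,
# so the reads on the baseline dashboard drop to essentially zero.
def increment_in_place(table, counter_id):
    table.update_item(
        Key={"id": counter_id},
        UpdateExpression="ADD hits :one",
        ExpressionAttributeValues={":one": 1},
    )
```

Here `table` would be a boto3 Table resource, e.g. `boto3.resource("dynamodb").Table("Counters")`.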

So that was the baseline I was using to monitor and track my improvement. The second scenario I wanted to talk about is how you can create a new feature, in this case a new API, in this application. To do this, I'm going to use a capability of Q that we call feature development.

The way you use this feature is by invoking the capability in the chat with the /dev directive. So instead of having a very transactional back-and-forth with the chat to explore the application, you ask Q to do a very specific task. In this case, I'm asking Q to connect to another DynamoDB table and read another counter that I'm later going to expose through this API, for example to another UI.

Q comes back with a plan, and the thing is that I can steer the plan. So if I see something that doesn't look good to me, I can steer it. And as you can see here, there is also Adam's favorite feature: Q is going to add a unit test along the way.

So in addition to adding the function and the Flask route, the plan looks good. For the sake of the demo, I'm not going to steer it; the plan looks good. So I'm going to say: write code. At this point I'm fast-forwarding the demo, because it would take a few minutes.

But basically, Q comes back with the list of files it has updated or created. As you can see here, my original Python file has been updated with the new function and the new Flask route. In addition, as you can see, it has created the unit test. All looks good.

So I'm going to accept these changes. Once I've accepted them, the changes get written into these files, and the files actually get created in my local environment. Once I've done this, I want to deploy this application to the cloud.

This application runs in Lambda, so I'm going to use SAM to build and deploy it to Lambda. Then I switch into the Lambda console, where I'm going to test this specific route. For some reason I'm getting an error, and I'm not sure why this is happening.

So I want to use Amazon Q to troubleshoot this error. Amazon Q's troubleshooting capability tells me there is a permission issue here, and how to fix it: Amazon Q actually gives me the exact IAM inline policy update I need to allow this application to reach out to the additional DynamoDB table I've been using for this new API.

So this is exactly what I do, and I'm going to retest this Lambda function. And now it gives me a status code of 200 and the counter that was present in that DynamoDB table.
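The kind of inline policy statement being added looks roughly like the following. This is a sketch of the general shape of an IAM policy granting a Lambda role access to one extra DynamoDB table; the table ARN, account ID, and the specific actions listed are illustrative assumptions, not the demo's actual values.

```python
import json

# Hypothetical inline policy: allow the Lambda execution role to read
# and update items in the additional table the new API depends on.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:UpdateItem"],
            # Illustrative ARN; scope to the one table, not "*".
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/ExtraCounters",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Scoping the `Resource` to the single table follows least privilege, which is the same shape of fix the troubleshooting suggestion applies here.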

The last scenario I wanted to show you is the migration, or transformation, of a Java application. To do this,

I'm in a different IDE, a JetBrains IDE in this case. And I'm pointing to a Java project here, a Java 8 project, as you can see from my local setup: I'm running Java 8 in my local environment. I want to migrate this application to Java 17. So I'm pointing to the pom.xml file and I'm going to say /transform.

What happens in this case is that Q takes my project, uploads it into a secure cloud sandbox, runs the existing tests in the project against the Java 8 sandbox, and then provides me a plan with the steps it's going to take to update this application to Java 17. In the meanwhile, it is actually updating the application to Java 17 and rerunning the very same tests in another sandbox with Java 17.

Once this process has completed, I'm offered the possibility, through this modal that you're going to see in a second, to explore what it has actually done in the files it has modified. So this is the source file, and I can see all the changes it has made. In addition, there is also a report that Q has written with all the changes and all the steps it has taken.

The last step here is that I change the runtime in my local environment to Java 17 and test the application locally. So I verify that my application actually builds, and builds successfully.

Thank you very much. Adam, back to you.

Ok. So I think you've seen, over the first part of the presentation, how the job of an individual developer, their experience, will transform. But we also saw an opportunity to help teams. Teams use DevOps to work together, take ideas from the first stages all the way to production, and operate over the life cycle. But it's a pretty complex orchestration of tools: when you talk to a typical team and ask them about their DevOps environment, they've put a lot of different pieces together, and unfortunately, in many places, those are silos or they're missing best practices.

Now, what you really want is to bring that DevOps culture to life, so that every team gets the best, works the best way, can be most efficient, and is really empowered to do their best work. If you want developers to work without silos, you're going to need tools that aren't in silos. And what we want to do is make sure every team has that right from the start: a pre-integrated experience with best practices built in, cohesive, with everything right from day one. In particular, we want to make it flexible enough that if you have a particular set of tools, or a particular place you want to work, we can accommodate those too.

So this is why we launched Amazon CodeCatalyst. Amazon CodeCatalyst is our unified platform for software development teams to work together and streamline the software development process of building AWS applications. We wanted to bring everything together, so you could plan, code, manage the project, collaborate, build, test, and deploy all in one tool, as opposed to having to assemble lots of different pieces: one service with best practices built in, so you can ship code from idea all the way to running applications.

There are some really neat features in there, like blueprints, so you can get a project started in minutes. CI/CD pipelines are included in those blueprints, along with project code, so everybody on the team can understand how the whole system works. It's not separate operations and development teams; it's really aligned with the DevOps culture. Your whole development team can see how everything is wired up to develop and deploy applications.

Also in CodeCatalyst are cloud development environments, which are really neat. These are development workstations in the cloud with everything preloaded and preconfigured for the particular project you're working on: the dependencies are there, the libraries are installed, the CLIs are already authenticated into AWS, and it's easy to just drop in. You can connect with the IDE of your choice; for example, you can use VS Code or JetBrains, and they integrate right out of the box. It's easy for a team member to drop into a part of the codebase that maybe isn't their normal part of the project, without having to do all that shifting and changing and managing dependencies on their laptop or desktop machine, which can really slow you down.

There's also built-in project management that works with your Git workflows, so you can do things like code reviews and issue tracking. CodeCatalyst is opinionated, so it has all these capabilities in the box and you can start really quickly, but it's also flexible, so it works with the tools you already have if you like using GitHub or Jira, for example.

Ok. One thing we've seen with large organizations that have really sophisticated project needs is that they want to modernize their DevOps approach. What we hear from them is that they want to learn from Amazon's best practices, and they also want to codify their own best practices and roll them out across the organization. The idea is that everybody gets those best practices on all your DevOps teams, not just the ones that have done all the work and had all the years of experience getting everything dialed in exactly the way they want it.

That's a pretty complex thing to do on your own. So rather than having that be a time-consuming distraction, we found we could build it right into the product, into CodeCatalyst, so it's ready for your teams and they can accelerate development and get the benefit of the best practices right on day one, even in very large environments.

Ok. So earlier this week, we announced general availability of the CodeCatalyst Enterprise tier, and it includes Amazon Q features. But rather than me talking about it, I'd like to bring up the GM of that product family, Harry Mauer, to come out and give you a demo. Sound good?

Yeah. Oh, that's great. Hi, everybody. As Adam mentioned, earlier this week we announced the general availability of our Enterprise tier. And one of the things Adam also mentioned is that CodeCatalyst has a unique capability called Project Blueprints. Project Blueprints are the fastest way for you to build applications on AWS confidently. Simply choose the application type you want, and the blueprint will generate everything you need to build, plan, test, and deploy your application, including all of the infrastructure, the workflows, the application code, and everything your teams need to go from idea to production in one CodeCatalyst project.

When we first launched CodeCatalyst and showed customers this, the first question they asked was: how can I create my own blueprints? And I'm excited to announce that as part of the Enterprise tier, we now offer private and custom blueprints, as well as great new features that help you keep your projects in sync as you change your blueprints.

So let me show you a quick demo of how you can use blueprints in your project today. The first thing I want to point out is that project blueprints are not a static file or a static template repo; these are fully functioning TypeScript applications that give platform engineers a tremendous amount of power and flexibility. This is a blueprint we use to stand up a simple Lambda stack. Today it uses Node 16, but we want to start moving to Node 18.

So I'm going to make some changes here in the TypeScript file, and then I'm going to use some CI/CD tools to publish those changes to our blueprint catalog in our CodeCatalyst space. Now, obviously any new project that uses this blueprint going forward is going to get the new version. But what if you have projects that used the old version and you want to update those to Node 18 as well?

Well, through our project lifecycle management features, this is pretty easy. What happens is that after the new blueprint version is updated and the space administrator makes it available in your space, any project that was using the previous version of the blueprint is automatically notified. The project owner can then choose to look at that update and apply it to their existing project.

And so here, the project owner sees that a new version of the blueprint is available and can go and update the blueprint. What we do is show you a quick diff of where the blueprint is versus where your project is today, and you can also make some configuration changes if you want to. After that, we simply generate a pull request, and then you can merge those changes back into your project.

So it's pretty simple if you've got a couple of projects, you know, 10 or so. But what if you have 100 or 1,000 projects? This can be a lot of work, right? And the drift is real. So we think this is going to save our customers a ton of time and energy in keeping projects up to date with the latest standards, so that your developers are building on AWS the way you want them to.

So this is available today as part of the Enterprise tier at codecatalyst.aws. In addition to the new features we added in the Enterprise tier, Adam also mentioned that we brought the power of Amazon Q into CodeCatalyst. I believe this has the ability to fully transform how we build software today. Developers can now go from idea to fully merged pull request simply by assigning a task to Q in CodeCatalyst.

After you do that, Q does all the heavy lifting of understanding what your codebase does, and then develops an approach that you get to review. Then we go to the next step and actually generate code to add that new feature to your application. In addition to using the power of Q to create new features and do really remarkable work inside your application, we also use the power of Q to make quality-of-life improvements for everyday developer tasks.

One of the other new features we introduced is the ability to summarize a pull request using Q, and also to create a digest of all the comments on an existing pull request so you can see all the activity.

So I want to show you a demo of how you can use Amazon Q in CodeCatalyst to build a new feature in your application. Here we have a simple to-do app that I created using one of the blueprints in CodeCatalyst. It's just a simple checklist: I can create a task, and when I complete it, I can mark it done. But in the application today, when I complete a task, nothing else happens.

What I want to do is sort all the completed tasks to the end. So I'm going to work with Q to add this new feature to my application. I go into CodeCatalyst and create a new issue from the issue board. I give it a title, and I give it a really precise description of what I'm trying to achieve.

Then I can assign the task to Q, and Q will ask me how I want to work with it. We offer the ability to interact with Q, so you can nudge it and guide it in the right direction. The first thing Q does, as I mentioned, is go and read all the things in your project to try to understand what your codebase does and how your application is constructed.

Then, based on my intent, it creates an approach for how it's going to build that new functionality. Now, this is what's really cool about doing this as part of your team here in CodeCatalyst: I can work back and forth with Q through this issue ticket and guide it to where I want it to go.

In this particular case, I don't want it to update an existing file; what I actually want is to create a new file with that new functionality. So I put the comment on the issue and push it back to Q, and this time it generates a new approach, which I like better. Then it takes the next step, actually writes the code for this capability, and creates a new pull request.

And I can continue to work with Q even in the pull request. I can review the code it generated, and if I see something I want changed, I can just make a comment inline in the PR. In this case, it was referencing something I didn't want to reference, so I made that comment in the PR and pushed it back to Q. Q then went through all the comments and, based on them, revised the code to create this new capability.

So now I like this code a lot better. I'm going to merge it into the codebase, and then our CI/CD pipeline kicks off, the one that was created as part of that blueprint. And then we should see this new capability added to our application.

If you remember, before, when I clicked complete on a task, nothing happened. Now when I do this, of course, it sorts to the end. So I want to take a breath here, because we've talked a lot about Q and its power today, but I want you all to realize what you just saw: I was able to create a new feature inside my application simply by assigning the task to Q. It did all the work, right? This is a completely new way of collaborating with technology, one that I think not only makes you more productive but also preserves some of the things we all love about building software, like exploring codebases, collaborating, and making decisive changes that really effect change.
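The feature itself, sorting completed tasks to the end of the list, is small enough to sketch. This is an illustrative stand-in for the code Q generated in the demo, not the demo's actual implementation; the task dictionaries and the `done` field name are assumptions.

```python
def sort_completed_last(tasks):
    # Completed tasks (done == True) sort after incomplete ones,
    # because False < True. Python's sort is stable, so tasks with
    # the same status keep their original relative order.
    return sorted(tasks, key=lambda t: t["done"])

tasks = [
    {"title": "book flights", "done": True},
    {"title": "reserve room", "done": False},
    {"title": "print badges", "done": False},
]

for t in sort_completed_last(tasks):
    print(t["title"], "(done)" if t["done"] else "")
```

Returning a new sorted list rather than mutating in place keeps the UI's source list untouched, which is a common choice for list-rendering frontends.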

And as I said, this is available for you to try in preview today. I'm really excited to see what you're going to do with it. Go to codecatalyst.aws and give us your feedback.

All right, thanks. So again, with CodeCatalyst you have a solution that's fully integrated in the cloud. These tools work together, and the context of your code and your operating environment is available; that's why Q can do such a great job building features. I hope you all try it. It's a pretty exciting world we're entering. Ok?

Which takes us to a new topic, another area I wanted to bridge into: reinventing what you can build. Front-end developers are incredibly important in the world, and they build highly valued experiences. Think about the experiences you have with brands and companies; frequently it's a digital experience, an application, a mobile app, a website, a marketing campaign, or something on social.

Companies really rely on front-end developers to build and bring these experiences to life. What matters is that they capture the imagination, connect with the customer or the user, and really make a lasting mark. So the energy is in the front end. But front-end developers, and people working on front-end tasks, still need a backend to make those applications go; they still need features, and those need to be cloud connected.

They want to focus on the front end, or they might not even be experts in the backend but still need to take advantage of backend features. So we think this is a time when we can really do some amazing things for these kinds of builders and these kinds of tasks.

That's why we launched AWS Amplify: to serve the needs of front-end developers and really anyone building front-end experiences. With Amplify, you have everything you need to build full-stack web and mobile apps in just hours. The idea is you can use it to build things like a cloud-connected UI, add features to your app like data, auth, and push notifications, and then add more and more features backed by AWS, while going really fast and focusing on your front end and your business logic.

We host the web application for you, we host it globally, and we scale out on AWS so you can scale to millions of users. But front-end developers have told us that while that's cool, they'd like to simplify things and really stay in the front-end domain, with the languages and tools they're most familiar with.

Coordination between front-end and backend teams can be kind of complicated. And as you get closer to production, coordinating deployments and keeping front end and backend synchronized can be even more complicated, and a little dicey.

So we thought there was an opportunity for us to help front-end teams and backend teams solve these problems. Ok?

That's why we announced this week, actually last week, AWS Amplify's new code-first developer experience. This is a preview, and the idea is that we can enable front-end developers to accelerate full-stack development with just TypeScript and JavaScript skills.

So what does this mean? It means developers can focus on code, not infrastructure. Just by working in code, they can automatically provision cloud resources for use cases like auth and data, and the code that configures these features and backend services sits right next to their front-end code.

So it's really easy to understand and keep in sync. In addition, because we have CodeWhisperer and Q, you can generate this code that configures your features and backend using natural language. You can also use Q to get advice, ask questions, and help you decide how you want to use these features.

And again, your backend configuration and your feature configuration sit right there in a really simple, almost declarative code format, all in one place.

Now, a couple of other new things came out with this. One is instant sandboxes per developer: as you make changes to the code that configures your app and your backend and you hit save, your sandbox instantly updates. And Amplify continues to provide CI/CD pipelines to manage deployments, so you can smoothly do deployments across testing and production, again keeping front end and backend in sync. Pretty neat. Ok?

One thing that's really important to us in this space is that we don't want to just build a front-end tool with a lot of constraints. We want to make sure all of AWS is available to you and that you won't ever outgrow it, so you can extend this with the CDK. Now front-end developers, and people who focus on the front end, can build full-stack applications with these great features in a really natural, expressive way, just working in code in their IDE, in a way that's very familiar to them.

We think this is super exciting. Again, I think a demo is the best way to show it to you, and I'd like to bring up Ali, our manager of Amplify developer advocacy, to come out and give us a demo.

Ali: Hey, thanks so much, Adam. I want to go back to that Party Rock demo we saw at the very beginning; it gave us a few ideas for team-building experiences at re:Invent. The very last one was a tech trivia dance party. I'm going to build out the tech trivia; if you want to do the dance party, you can do that.

I'm going to build out my data model first. If you've used Amplify before, you may have used a CLI or Studio-based approach, but we just rolled out this new code-first approach where you use TypeScript or JavaScript to build out your app's backend.

As I'm building this question model for my trivia app, you can see I'm getting IntelliSense, which suggests TypeScript code for me as I go. I'm also using CodeWhisperer, which is trained on Amplify data, so you'll see its suggestions too.

I'm also going to add authorization rules, so that anybody can see a question, but people who create a question can update and delete it, and only signed-in users can create new questions. The data is fully backed by AWS AppSync, and the database layer is Amazon DynamoDB, so it's fully scalable as your user base grows.

Now I want to add auth, and similarly, I'm going to build with TypeScript, so again I'll use the suggestions as I go. I'm going to write a custom email verification line for when my users sign up for my application.

Everything with Amplify Gen 2 is built on top of the AWS CDK; in fact, under the hood we're using L3 CDK constructs. This means you can build with any AWS service if you want.

I've been building a ton of Amplify and Bedrock sandboxes recently. Speaking of sandboxes: as I develop, every person on my team can have their own cloud sandbox. This sandbox updates as I change my code, and any infrastructure changes deploy automatically with every save. Each of my teammates has their own isolated development environment, which is really helpful so that we don't all clobber one another or make changes that affect one another.

Once I'm done, it tears down my resources for me. I've pre-built most of my UI, but I'm going to show you just a couple of lines of code. The first one is this withAuthenticator: with this one line of code, in a couple of seconds, it adds a full sign-in and sign-up flow for me that integrates with my Amplify-built backend.

So my auth flow looks like this: I'll sign up, enter my password, and confirm it. Then in my inbox you'll see the verification email for the trivia app, which I set up previously. Once I sign in, I have a form, and I'll add a couple of trivia questions.

Now I want to list all my trivia questions in my UI. We just deployed a new version of the Amplify client libraries, optimized for a smaller bundle size, a TypeScript-first developer experience, and really great Next.js support, which I know is important to a lot of front-end developers.

So now all my questions are retrieved, and I have a couple of Amplify questions up there; the first ever Amplify app was a pet tracker. Now I need to ship this to production. That's my next step: with Amplify Gen 2, your source code is your source of truth.

I can have multiple Git branches, and each one can be its own deployment environment. My full-stack code can live in one repo or multiple. It's really easy to use these defaults, but if I want, I can build my own pipeline and use that instead.

So I'll push up my code, then go to the Amplify console, which has been redesigned; it's purple, and I love it. I'll choose GitHub as my provider, authorize it, and select my repository. It then automatically detects that I'm using Next.js and an Amplify backend.

I cut the deploy time for the demo's sake, but I'm live: my front end and backend are deployed, and I'm in production.

Building next-generation apps got a whole lot easier with Amplify's new code-first developer experience. It makes full-stack provisioning simple and fits the development paradigm that front-end developers are already comfortable with. The backend architecture means your app is ready for scale, security, and reliability right out of the gate, no extra work, and the workflow means less time configuring infrastructure and more time building features for your users.

I was admiring the rainbow trivia backstage, and I didn't run out fast enough, but that's pretty cool. I'm really excited about that. This Gen 2 Amplify experience is a really big shift.

So overall, I hope you got a good feel for this whole arc that we're on. This is a really exciting time to go build. We are here to reimagine software development on AWS together with you, with entirely new tools. I think we're just scratching the surface of what's going to be possible. As science fiction as some of this seems, it's here now, ready for you to explore, and we're going to keep bringing you the tools, guidance, and best practices you need. But what it really comes down to is you taking advantage of it: getting into that creative zone, giving us feedback, exploring with your peers, and sharing with us.

So again, we're really excited about the future that's coming. Thank you for all your contributions here and for participating in the community. We're super excited to see you take these tools and run with them. Thanks, everybody.
