Good afternoon, everyone, and thank you for joining us. This is amazing. This is a very intimidating room, with all of you looking at us with all these eyes. But we are here to talk about conversational AI and how generative AI is impacting what we do on a day-to-day basis in building these customer experiences.
My name is Marcelo Sil, and I'm a product manager with Amazon Lex. And my co-speakers: I'm Ganesh Gella, General Manager for Amazon Lex and Bedrock Agents. And I'm Greg Doppelhower; I am a Lockheed Martin Fellow and the MFC Enterprise Chief Architect.
Ok, so let's get started. We'll go quickly through the agenda. I'll give you a quick introduction, not on generative AI, because I know you're tired of that and you already know what conversational AI and generative AI are, but I want to give you a perspective on what this talk is going to be about: especially as Ganesh introduces new features in Amazon Lex that have been implemented with large language models to take advantage of generative AI, and even more important, when Greg comes in and shows how Lockheed Martin has used these features to enable their use cases.
So let's get through the first slides. Again, I'm not here to tell you what generative AI is; I'm just going to contextualize generative AI within the Amazon Lex service and within what we're going to show you, both in terms of the new features we developed and the use case that Greg is going to present to you.
How many of you are Amazon Lex users, or use some kind of conversational AI experience? That's awesome. First of all, thank you. I really love to hear this and see this many people using Amazon Lex. We are super passionate about it; we've done a lot of work on it.
So let's talk quickly about what generative AI is to us. When we talk about generative AI in the context of large language models, it's nothing new to Lex. We've been using pre-trained models with hundreds of millions of parameters in Lex since the inception of the product. You've all been working with a sort of large language model; they were not as large as they are today, but you've all been part of this revolution. And perhaps the same thing happens to you that happens to me: now my family knows what I do. Since the beginning of the year, they go, "oh, you do this," and they offer all kinds of advice; they know all kinds of things they did not know before, and they have all kinds of suggestions. But we've all been doing this, so it's super cool that now everyone is talking about it.
So the language models that we have used in Lex are large and purposely built with context for conversational experiences. And we use the bot definitions that you build to fine-tune that model for your experience. So if you're building a bot to book a hotel, we actually bias the model using the intents in your bot definition, fine-tuning the large model to your experience, right?
So there are two things I want you to get out of this initial slide. One: you've been using large language models for a while, and they have to be contextual to the conversational experiences that you build. Two: it's all based on training data. And we're going to show you today what we have done over the past few months in Lex to make the builder experience and the end-user experience faster and better, using large language models to help you get started.
We've talked about foundation models before; you've heard this in many talks. The only difference between the traditional machine learning models of the past and the foundation models you use today in Bedrock, from Anthropic and all the other partners that are part of the Bedrock ecosystem, is that the machine learning models of the past were built for very specific purposes: fraud detection, sentiment analysis, and things of that nature. The large language models of today are much broader; the corpus of data is larger and they can do many more things, right?
So let's talk about conversational experiences and their challenges. It is super challenging to build these experiences. Those of you who have used Lex, or who are planning to use Lex or other tools, know how hard it is. You need conversational designers who understand the nuances of language and the concepts of dialogue strategies and dialogue states, and how these nuances of language need to be disambiguated and retried, and things of that nature, right?
You need predictable dialogue management in some circumstances, for example in regulated and compliance-driven industries. I don't want to call my doctor asking for the results of my cholesterol test and have the thing go, "Hey, it's 320, woo!" I don't want that. I need the personality, and the type of information that's going to be given to me, to be contextual: to have a personality, but a personality appropriate to the type of information I'm being given. Right?
So these concepts are super important. You need control at times, and sometimes you don't. But even when you're creating experiences that include chit chat, or that are much more open in nature, you don't want a bot that everyone can have a therapy session with on your dime, just by calling your phone line or chatting with your bot, right? If you are a financial institution, you don't want your bot talking about the weather in Honolulu because the caller is leaving in a couple of days. You want financial information pertinent to your business and to the type of exchange you want to create with your user population, right?
Ok. So let's quickly go into what Amazon Lex is, just to give those of you who have never experienced Lex some lexicon on the terms you're going to hear in a little bit.
So as we talk, especially on the speech or voice channel, we call what someone says an utterance, much like what someone would type on a text channel. Amazon Lex is a speech and text interface for applications. You can put Lex in front of a contact center, associated with Amazon Connect, to create an overall flow for both text and speech in what we call IVRs. Or you can put Lex in a web page and use it as a chatbot people can chat with. Or you can add it to your mobile app, as you're going to see with what Greg has done at Lockheed Martin. It's an amazing use case: he's using Lex in his own app that provides employee productivity to Lockheed's executives, and we'll see how that works.

In these use cases, Lex is trying to identify the action the user wants to perform: the intent. What is the intent of the utterance the customer has? And based on that utterance, sometimes we need to collect slots, which are input data. If you call an airline and say "I want to book a flight," the airline doesn't know that two minutes ago you talked to your partner and said, "hey, we're going to Hawaii." So it's going to ask, "where would you like to go?" and you'll say "Hawaii." You sometimes need to collect data to be routed, in the sense of first-call resolution, to an expert who can help you, or to fulfill an end-to-end transaction like booking a flight or a hotel. And at the end you do a fulfillment, where you connect to the back-end systems and say, "hey, your reservation is confirmed, is there anything else I can do?", and you move on.

So remember three concepts: utterances, which are user inputs in both text and speech; intents, the actions or intentions of the user; and slots, the data inputs that might be needed to fulfill those intents.
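To make those three terms concrete, here is a minimal sketch, in Python with boto3, of a single turn against a Lex V2 bot at run time: the `text` is the utterance, the response carries the matched intent, and any collected slots come back as resolved values. The bot and alias IDs are placeholders you would replace with your own.

```python
import uuid

import boto3

# Runtime client for Lex V2; use one sessionId per end-user conversation.
lex = boto3.client("lexv2-runtime")

response = lex.recognize_text(
    botId="EXAMPLEBOTID",         # placeholder
    botAliasId="EXAMPLEALIASID",  # placeholder
    localeId="en_US",
    sessionId=str(uuid.uuid4()),
    text="I want to book a hotel in Hawaii",  # the utterance
)

intent = response["sessionState"]["intent"]
print(intent["name"])  # the recognized action, e.g. "BookHotel"
for name, slot in (intent.get("slots") or {}).items():
    if slot:  # unfilled slots come back as None
        print(name, "=", slot["value"]["interpretedValue"])
```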
Ok. So again, since the beginning of the year, we've been going crazy with this generative AI thing. How does it impact conversational AI? The impact is tremendous, both in terms of how we build the bots and in how end users experience the bots that we build, right?
So, advanced language understanding: you're going to see a bunch of examples today. We humans are super weird, because at times we use different terminology. I may ask what city you were born in, and you may say "the place near the White House." For a bot, or for whatever you're building, it might be hard to reason that "White House" might mean Washington, DC, or, based on context, the big white house near my town, the only white house around, the hospital where I was born. Context is super important. And beyond context, you need this general knowledge of the world, an understanding of what kind of exchange you're having with this device, right?
Dynamic dialogue generation is super cool. The generative capabilities of generative AI now allow this chit chat, this conversation with the bot outside of the intents we built the bot for. So we don't have to think about all the different paths the bot needs to take, and we can create FAQs and Q&As based on our knowledge.
Think about how we built websites in the early days of the internet, right? You put up your website, and someone says, hey, why don't you add a question-and-answer section? Why don't you put up this kind of information, based on your specific policy regarding baggage returns and things of that nature? It's the same thing: you're trying to build that dialogue. And generative AI also gives you this general world capability. If you're booking travel and someone says "the White House," they likely want to go to Washington, DC. "I want to go to Big Ben": oh, you're trying to go to London, that kind of stuff. And it needs to have a personality. The cool thing about generative AI is that, through prompt engineering and a variety of other artifacts, you can put a personality on the bot. You don't want a happy bot giving you bad news. And at the same time, when you're getting consent from a customer who has just completed a financial transaction, you may have to say things in a specific persona, with specific verbiage, for compliance and regulatory reasons, right?
Ok. So what is good about the traditional conversational AI that we have in Lex? It is predictable: you define a path, and you know exactly how you want the customer to traverse this set of intents and slots. It's still fairly free-flowing: in Lex you can do intent switching, going from one intent to the next, but you want that experience to be a certain way. You have complete control; you start with control. The generative models that exist today are the opposite: you start extremely broad, and you have to use things like guardrails and a variety of other techniques to narrow down the path. With Lex, you build what you want, you put the users on a path, and from there you open up with generative AI. And you're going to see how we're doing this by introducing large language models into Amazon Lex.
Ok. So our idea for Lex, since its inception in 2017, was to power enterprises and conversations at scale, backed by the AWS infrastructure. It allows you to have sophisticated conversations at high scale, on an ASR (automatic speech recognition) and NLU system that scales to thousands and thousands of customer interactions, and that grows elastically whether you're handling one phone call or 100,000 people calling to book the show when Taylor Swift is playing at the Sphere. When I play the Sphere, I don't think it's going to get that kind of volume, but we'll check that out after tonight, now that you all know I'm going to be on stage, right? I'm going to be singing. No, I'm not. You don't want that.
So again, we're building an enterprise experience. Ok, with that, I want to introduce Ganesh Gella, our GM for Lex and Bedrock Agents. Ganesh is going to talk about all the work his team has done in generative AI and how it makes Lex an amazing product. Thank you all.
I'm super excited to talk about the generative AI features that we added to Amazon Lex this week. I am Ganesh Gella, General Manager for Amazon Lex and Bedrock Agents. I've been with Amazon Lex since its launch in 2017.
Before we go into the features that we launched this week, let's do a quick recap of some of the advancements we made to Lex over the last few years. We added several accuracy-improvement features for our NLU and ASR, like context carry-over, multi-valued slots, custom vocabulary, and runtime hints. Raise your hand or make some noise if you know or have used these features. Cool. Then we simplified our bot schema and added bidirectional streaming with our V2 set of APIs. We added purpose-built IVR features so that Amazon Lex is efficient at IVR automation with Amazon Connect. We added 25 languages, almost at the rate of 10 languages per year once we started adding new languages. And we provided a no-code canvas for developers to author their intents efficiently with the Visual Conversation Builder.
Anyone here use it? Cool, I see some hands. And then we added Analytics and the Test Workbench to make the post-deployment experience better for our developers.
Now, in 2023, we added several generative AI features in Lex, built using large language models and Bedrock. We made sure these help across the complete development life cycle.
At a high level, I categorize these features into two groups:
- A set of features that helps with developer productivity, so that developers can build bots and deliver them to production faster.
- Another set of features that helps improve the end-user experience.
So let's do a deep dive on the productivity features today. I'm going to talk about Descriptive Bot Builder and Utterance Generator.
What is the Descriptive Bot Builder? It looks like most of you, or some of you, have already built Lex bots, so you're aware of this concept. Building a Lex bot today consists of coming up with the set of intents. Then you need to identify the entities that you want to extract from the conversations, and map those entities to slot types.
For example, you may want to capture location information, then map this location entity to a data type like AMAZON.City, AMAZON.Address, or AMAZON.ZipCode, based on the requirements of the bot, right?
Not only that; you then need to curate the training data required for these intents. The quality of these utterances, which we call training data, and the cohesiveness of that training data play a big role in the accuracy of your bot.
So, no doubt, all of us spend a lot of time today preparing the training data for these intents.
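For a sense of the manual work being described here, this is a hedged sketch of defining one intent and one slot through the Lex V2 model-building API with boto3. Every ID is a placeholder, and a production bot needs far more curated utterances than this:

```python
import boto3

models = boto3.client("lexv2-models")
BOT = dict(botId="EXAMPLEBOTID", botVersion="DRAFT", localeId="en_US")  # placeholders

# Hand-curated training data: each utterance is one more way a user
# might express the BookHotel intent.
intent = models.create_intent(
    intentName="BookHotel",
    sampleUtterances=[
        {"utterance": "I want to book a hotel"},
        {"utterance": "I need a room"},
        {"utterance": "Can you reserve a hotel for me"},
    ],
    **BOT,
)

# Map the location entity to the built-in AMAZON.City slot type.
models.create_slot(
    slotName="City",
    slotTypeId="AMAZON.City",
    intentId=intent["intentId"],
    valueElicitationSetting={
        "slotConstraint": "Required",
        "promptSpecification": {
            "messageGroups": [
                {"message": {"plainTextMessage": {"value": "Which city?"}}}
            ],
            "maxRetries": 2,
        },
    },
    **BOT,
)
```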
But now, with the Descriptive Bot Builder, you simply start with the objective of the bot, specify that objective in simple natural language, and select the large language model on Bedrock that you want to use for this purpose. And now you have an entire Lex bot generated, with the intents, all the slot types, and all the training data required for those intents.
Previously, a simple use case might have taken hours to create the bot, a medium use case days, and a complex use case weeks. But now, in a matter of minutes, you can start with the objective and have a bot that you can start testing.
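The demo that follows uses the console; if you would rather drive this from code, here is a hedged sketch. It assumes the StartBotResourceGeneration and DescribeBotResourceGeneration operations that launched alongside this feature; check your SDK version for the exact names and response shapes.

```python
import time

import boto3

models = boto3.client("lexv2-models")
BOT = dict(botId="EXAMPLEBOTID", botVersion="DRAFT", localeId="en_US")  # placeholders

# Kick off generation from a plain-language objective, mirroring the demo text.
start = models.start_bot_resource_generation(
    generationInputPrompt=(
        "Build me a bot for booking a hotel, with a BookHotel intent and "
        "slots for city, check-in date, number of nights and room type."
    ),
    **BOT,
)

# Poll until the draft intents, slot types, and utterances are ready to review.
while True:
    status = models.describe_bot_resource_generation(
        generationId=start["generationId"], **BOT
    )
    if status["generationStatus"] in ("Complete", "Failed"):
        break
    time.sleep(5)
```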
Let's take a deeper look at this feature in action.
First, I'll add in my own description here for a hotel booking bot: "Build me a bot for booking a hotel. I'd like the bot to have the following intents and slots:"
- Book hotel intent with slots for city, check-in date, number of nights, and room type.
When I press done, we use Amazon Bedrock to generate a draft bot based on this description. If I select review, I can review the generated results.
I am provided a list of the suggested intents and slot types based on my description. If I select my book hotel intent, I can also see the generated sample utterances for my intent. If I scroll down a bit, I can see the suggested slots and slot types.
Note that the Descriptive Bot Builder has automatically matched the correct built-in slot types for check-in date, city, and number of nights. It has also created a custom slot type to handle the room type. This looks good, so I'll select confirm intents and slot types.
Make some noise if you liked it. Thank you all.
So now let's go to utterance generation. We have created the bot, right? Now you may want to augment its functionality by adding a new intent, and that's where utterance generation comes into the picture.
Utterance generation is similar to the Descriptive Bot Builder, where you started with the objective for the bot. With this feature, you're working on creating an intent: you simply provide the intent name and a description, and utterance generation produces all the required utterances for the intent, also annotating the required slots.
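As a sketch of how that might look programmatically, assuming the GenerateBotElement operation that backs this feature (verify the name and response shape against your SDK version). The intent's name and description drive the generation, so write them descriptively:

```python
import boto3

models = boto3.client("lexv2-models")

# Ask Lex to propose training utterances for an existing intent.
suggested = models.generate_bot_element(
    intentId="EXAMPLEINTENTID",  # placeholder, e.g. a CancelReservation intent
    botId="EXAMPLEBOTID",
    botVersion="DRAFT",
    localeId="en_US",
)

for utterance in suggested["sampleUtterances"]:
    print(utterance["utterance"])  # review and edit before adding to the intent
```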
Cool. So let's switch gears to the features that help with the end user experience.
So the first one I will be talking about is Generative AI Assisted Slot Resolution. Before going into the feature, let's cover some basics on what slot resolution is.
Slot resolution is the activity of taking the given user input, in spoken or text form, and converting it to a data type or value that programmers can directly consume in their business logic.
As an example, if the user says "day after tomorrow" or "the coming Monday", slot resolution's job is to convert this input into a specific date that you can feed directly into your APIs.
In the previous example, we saw entities like check-in date, location, and number of guests; these map to slot types like date, city, and number, right?
So it may look simple: it's about taking the input and converting it to a data type that programmers can use directly in their APIs. But the expansiveness of natural language expressivity can sometimes trump even the most curated and sophisticated grammars.
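To see why, here is a purely illustrative sketch (not Lex's implementation) of hand-rolling a grammar for relative dates; every phrasing the grammar forgets becomes a failed conversation:

```python
from datetime import date, timedelta

# A toy hand-written grammar: phrase -> offset in days from today.
RELATIVE_DATES = {
    "today": 0,
    "tomorrow": 1,
    "day after tomorrow": 2,
}

def resolve_date(phrase: str, today: date) -> str | None:
    """Return an ISO date an API can consume, or None if the grammar misses."""
    offset = RELATIVE_DATES.get(phrase.strip().lower())
    if offset is None:
        return None  # "the coming Monday", "first of next month", ... all miss
    return (today + timedelta(days=offset)).isoformat()

print(resolve_date("day after tomorrow", date(2023, 11, 29)))  # 2023-12-01
print(resolve_date("the coming Monday", date(2023, 11, 29)))   # None: grammar gap
```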
Let me give an example. Say this hotel booking bot asks the user, "Hey, how many guests are on this reservation?"
A user might say "four guests", or "I'm here with 4 guests, I need 4". Great: today, Lex resolves that to the value 4 and gives it to your APIs.
What if the user says, "I'm super excited about my trip. I'm traveling with my family: me, my wife, and our two kids are on this reservation"?
No problem: with Generative AI Assisted Slot Resolution, Lex will now be able to resolve this to 4, and the end user will be able to move forward in the conversation.
So, to recap: developers are in control of this feature and can enable it on a slot-by-slot basis. Lex's existing NLU kicks in first to resolve the value, and when the existing NLU fails, as in some of the examples I mentioned, that's when Generative AI Assisted Slot Resolution kicks in.
And developers have the choice of using either Anthropic Claude or Claude Instant for this purpose. Again, with the help of a quick video, let's take a look at a few more examples of this kind.
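Configuration-wise, this is a per-slot switch on the slot's value elicitation setting. A hedged sketch, assuming the slotResolutionSetting field and the EnhancedFallback strategy that shipped with this feature (field names may differ slightly in your SDK version):

```python
import boto3

models = boto3.client("lexv2-models")

# Opt a single slot into Assisted Slot Resolution. The existing NLU still
# runs first; the LLM is consulted only when the NLU cannot resolve.
models.update_slot(
    slotId="EXAMPLESLOTID",      # placeholder
    slotName="NumberOfGuests",
    slotTypeId="AMAZON.Number",
    intentId="EXAMPLEINTENTID",  # placeholder
    botId="EXAMPLEBOTID",
    botVersion="DRAFT",
    localeId="en_US",
    valueElicitationSetting={
        "slotConstraint": "Required",
        "promptSpecification": {
            "messageGroups": [
                {"message": {"plainTextMessage": {
                    "value": "How many guests are on this reservation?"}}}
            ],
            "maxRetries": 2,
        },
        "slotResolutionSetting": {"slotResolutionStrategy": "EnhancedFallback"},
    },
)
```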
I'll go back into the different slot values and enable Assisted Slot Resolution for all of the slots where it is supported. Then I'll save my intent and rebuild the bot. And when the bot has finished rebuilding, we'll test it one more time, this time with Assisted Slot Resolution.
This time, we can see that things are interpreted a little bit better. "What city? I'm heading to London for a conference" is interpreted immediately as London. "What day will I be checking in? The first of next month" is interpreted as October 1st, which is correct based on when I'm recording this. "Friday, Saturday, and Sunday night" is interpreted as three nights. And "me, my wife and our two kids" is four people.
By enabling this new feature, you will allow your customers to converse with your bots in a much more human-like way without much re-prompting or error handling. I encourage you to see how this can improve the customer experience for your chatbot or contact center.
Thank you.
And the final, even more powerful feature that I'll talk about now is our Conversational FAQ intent. The CFAQ intent is a built-in intent that you can now add to your bots.
So what is the CFAQ intent? This Conversational FAQ intent allows you to handle queries coming from your users that you have not anticipated ahead of time, that you have not defined intents for.
These are questions coming from your customers where you have the information, but that information may be lurking somewhere in your documents. The CFAQ intent can now read your documents with the help of the RAG technique and provide summarized responses back to your users.
RAG stands for Retrieval Augmented Generation. The way RAG works is: it takes the set of documents, chunks them, creates the corresponding vector embeddings, and stores those embeddings in a vector database. Then, at run time, when the user query comes in, it retrieves the relevant vectors and corresponding documents, and the CFAQ intent summarizes that information back to the end user.
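In sketch form, that whole pipeline fits in a few lines. The embed() call below is a hypothetical stand-in for any embedding model (for instance one hosted on Bedrock), and the brute-force in-memory store stands in for a real vector database:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for an embedding model call."""
    raise NotImplementedError

# Indexing: chunk the documents and store (embedding, chunk) pairs.
def index(documents: list[str], chunk_size: int = 500) -> list[tuple[np.ndarray, str]]:
    chunks = [d[i:i + chunk_size] for d in documents for i in range(0, len(d), chunk_size)]
    return [(embed(c), c) for c in chunks]

# Retrieval: rank every stored chunk by cosine similarity to the query.
def retrieve(query: str, store: list[tuple[np.ndarray, str]], k: int = 3) -> list[str]:
    q = embed(query)
    def score(entry):
        v, _ = entry
        return float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
    return [chunk for _, chunk in sorted(store, key=score, reverse=True)[:k]]

# Generation: the LLM summarizes only what was retrieved.
def answer(query: str, store, llm) -> str:
    context = "\n".join(retrieve(query, store))
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```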
All of this complexity is encapsulated for you. You simply point to your data in an S3 folder, and the CFAQ intent takes care of providing responses back to your users.
Not only that, CFAQ intent marries the query with the context it already has and provides even more pertinent responses to your users.
The CFAQ intent works with the recently launched Amazon Bedrock Knowledge Bases. But if your information is in other retrievers, like Amazon Kendra or OpenSearch, the CFAQ intent works with those retrievers as well.
Again, for this feature, you have a choice of which large language model on Bedrock you would like to use; you select it, provide the knowledge base source, and your CFAQ intent is ready.
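Wiring that up looks roughly like the sketch below. It assumes the AMAZON.QnAIntent built-in intent signature and a qnAIntentConfiguration block as they appeared at launch; both ARNs are placeholders, and Kendra or OpenSearch retrievers plug into the same dataSourceConfiguration block. Check the current API reference for exact field names.

```python
import boto3

models = boto3.client("lexv2-models")

# Add the built-in conversational FAQ intent, pointed at a Bedrock knowledge base.
models.create_intent(
    intentName="HotelFAQ",
    parentIntentSignature="AMAZON.QnAIntent",
    qnAIntentConfiguration={
        "bedrockModelConfiguration": {
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-instant-v1"
        },
        "dataSourceConfiguration": {
            "bedrockKnowledgeStoreConfiguration": {
                "bedrockKnowledgeBaseArn": "arn:aws:bedrock:us-east-1:123456789012:knowledge-base/EXAMPLE"
            }
        },
    },
    botId="EXAMPLEBOTID",
    botVersion="DRAFT",
    localeId="en_US",
)
```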
Let's take a quick look at some of the scenarios that CFAQ can light up. In this scenario, the customer is asking for a comparison between two different hotels in two different locations. Your documents have information about Property A and Property B, but you have never compared the two and stored that comparison.
But the CFAQ intent now not only retrieves the information about those two properties, but also compares them and provides the summary back to your end users.
Let's take a look at another powerful scenario. The customer is in the process of making a hotel reservation, and along the way asks, "Hey, what are the pool hours for this property?"
And the CFAQ intent retrieves the relevant information about that property and gives it back to the user, keeping the context alive so that the user can go back to completing the reservation.
Previously, without the CFAQ intent, this might have caused reduced customer satisfaction or a loss of business. But now the CFAQ intent knows when to surface itself and provide the relevant information based on your documents.
And again, you can learn more about these features from the blog post that we launched this Monday.
Thank you all. And with that, let me invite Greg Doppelhower. Greg has been a user of Lex for multiple years. Greg is from Lockheed Martin; let's hear from him on how he uses Lex for internal employee productivity.
Thank you.
And if you've been paying attention to what's going on in the geopolitical global climate, some of the products we build in MFC are performing extremely well and are being called on to help, whichever side you're on.
So Jason can now concentrate his time where he needs to, to work and pay attention, to help us produce the products we produce, not just for the US but for our global customers. And you're going to see more about that as Lockheed becomes a global company; the world is a changing place.
So let me tell you a little bit about Lockheed Martin. I don't know how much anybody here knows about Lockheed Martin. We have four business areas. They're backwards on the slide; we usually go alphabetically: Aeronautics, Missiles and Fire Control, Rotary and Mission Systems, and Space. We are in probably every state in the United States.
Twenty percent of our employees are veterans. We're very proud of that; I'd like to see that number a little higher. And the other thing I'm really proud of is 100 years of innovation. I like to say we build cool stuff and we solve some really, really hard problems, and they don't all involve mass destruction.
So I think it's an interesting company to work for, and I've worked in every one of those four business areas across my career. I just celebrated 35 years with Lockheed Martin. So it's a great place to think about if you want to come work for a really cool technology company.
So let's talk about how we got here with Lex. About six years ago, I was the corporate IoT architect for Lockheed Martin, and we were on our journey of figuring out how we connect our factories, how we understand how our factories operate, and how we more efficiently use those capital assets. You can imagine it's not cheap to build the products we build, and the machinery that does it isn't easy to buy off the shelf.
So we started with what we call our IFF, our Intelligent Factory Framework. And I worked with Amazon to build a demonstrator product, and that demonstrator got me really thinking. I said, well, if I can control this thing through topics and queues and all that stuff, why can't I use Alexa and voice?
So I started showing production operations folks how I could turn the machines on and off with my voice. And you can imagine how that freaked them out. You know, in the middle of cutting a $100 million, or $100,000, part, I tell a machine to stop or pause; they weren't real happy with that.
But the really interesting thing is that when I showed the corporate CTO, he said to me: how do we take this and voice-enable our enterprise, to the point where we can run our business with it? And I thought that was kind of interesting. He said, I want to be able to get program status wherever I am, anywhere in the world, any time of day.
So, being the turbo nerd I am, I sat down and thought about it, and I contacted the teams that manage our enterprise reporting system for projects and programs. And when I say enterprise programs, I'm talking across the entire corporation. I got with those folks and said, how do I get to your data?
And of course, the first question was "why?" and the second answer was "no." Then, when I explained that we were just doing a small proof of concept, I was able to get a little data set, and it really started me down a path of looking at how you develop skills in Alexa. And I mean, you take the "A" off either end of Alexa and what have you got? Lex, right?
So within two weeks, I built a skill where I demonstrated what we call EPRS, Enterprise Program Reporting, to the corporate CTO. That Monday at nine o'clock in the morning, he told me I'd made his entire week. So he set me on a path to figure out how we do this across the enterprise.
Well, I don't know how many of you know this, but Alexa exists just in the commercial cloud. Being in the defense industrial base, we need things more on the GovCloud side: IL4, IL6 type stuff. So I started working with Amazon trying to figure out, number one, how do we get Alexa into GovCloud? That was an interesting conversation.
Then the conversation switched to, well, can I use Lex? At that time, Lex wasn't quite in GovCloud either. So, guys, how are you going to help me out here? We were at an impasse somewhere in the 2020 time frame. COVID's killing us, everybody's home, and I need to move at the pace of business.
So a conversation ensued with, I don't know if Ganesh was in charge at that time, but another person who owned Lex as a product, and he told me: if I can get it into GovCloud for you in three months, would you take your project, migrate it, and demonstrate it? Of course, you know, no problem.
So we got Lex into GovCloud, and then they started talking to me about Lex 2.0. The one disappointing thing for me was that it wasn't as easy as Alexa. If any of you develop Alexa skills, there's a skill builder: you build intents, boom, bang, boom, everything's there. Well, you saw what they just announced today, right? I wish I had that 10 months ago.
So in 2021 and 2022, I finally got Lex in GovCloud and started demonstrating. Then this year in April, my team set out to build an iOS mobile app that uses Lex for the chat experience. Because everybody in Lockheed Martin, by the way, uses iPhones; that's our corporate MDM platform.
So now we have the Lex app running, and I released the beta to my senior leadership team in MFC: what we call the MFC Lex Program Management application. So now guys, and gals, like Jason can go in. I actually have it in my vice presidents' hands; we have two vice presidents in manufacturing, and both have the application and are beta testing it for us right now. I've already gotten quite a bit of good feedback.
So I take that back to the development team. I'm the idea guy; the development team does all the real hard work. It's been a really, really positive response so far, but I want to tell you where we're going next. It's an interesting enterprise, Lockheed Martin; we have so much data. And again, pardon me, it's so dry up here. I also feel like I'm at open mic night.
So you see the program performance right there, on demand, always available, and the ability for us to auto-generate trend metrics, all that kind of good stuff, is in the models today. Some of them are HANA data models, and some of them are small generated models; not language models, but what we'll call small trend models.
But where we want to go next is the manufacturing operations team. Again, I told you we have nine core sites, and probably a dozen smaller sites. They want to be able to get visibility into that data. How many people are working today? How many work orders? How far behind production am I? What does IOP look like? All this other kind of good stuff. How do I voice-enable that interaction without having some lower-level admin-type person gathering this data, bringing it all together, and handing it out in a spreadsheet? That's classic, right?
So I wanted to provide the ability for a manufacturing operations team to ask what-if questions. What if I move manufacturing of Hellfire from, let's say, Ocala to Orlando? What if I double that line? So we're working on that now; that's where we're bringing in some large language model capabilities.
I think the really important one is the third one you see there. We have a specific program called JASSM. It's a very large missile, but the government, the Air Force, fed it to us in 80 separate contracts. I'll get the 81st, the 82nd, the 83rd, the 84th. But what did I learn from the first 80?
So we're taking those first 80, all the artifacts: RFPs, RFP responses, CDRLs, TRAs, all those types of things, and we're bringing them into a large language model. So that when I get the 81st contract, how can I cut out that 6-to-12-month requirements definition phase and say, based on past performance, what are the requirements coming out of the 81st contract? And then, out of the 81st contract, did I miss anything that I didn't do in the previous 80? How do I learn from my mistakes in the previous 80? Because we made mistakes, don't get me wrong. We made mistakes.
And then I want to be able to put Lex and that voice experience in front of all that, not just for the RFP, contracting, or program teams, but for our manufacturing team and our customer. Imagine the power of our customer being able to ask: how good was Lockheed at building the first 80 contracts? And I said good, I didn't say bad. How good is Lockheed at consuming the 81st contract? You know, it's all about speed.
Our customer, mainly the Department of Defense, is telling us: increase your speed of execution, I need things yesterday. It's interesting; in the DIB we've gotten comfortable. Over my 30 years of experience, developing a new airplane would take anywhere from 6 to 14 years. The Air Force has challenged us to get that to five. We've been challenged in Missiles and Fire Control to cut our program execution, from design to what they call LRIP, low-rate initial production, from seven years to 18 months.
There's no way we're going to be able to do that without using tools like large language models, and without changing the culture inside Lockheed Martin to use new tools, to think differently, and to execute differently. So I want you to understand: if we can do this for one program, JASSM, with 80-plus separate contracts and thousands of artifacts, what's stopping me from doing it for every program? Everything I do in MFC, and some of my teams are in the audience here, everything we do, we do with enterprise scale in mind.
So if I solve a problem at Lockheed Martin Missiles and Fire Control, I'm solving a problem for the rest of the enterprise, and we approach it that way. So I want you to think about that. Unfortunately, I couldn't share the application with you all today, but I'm happy to have a conversation afterwards about how the application works, what it can do, and some of the visibility it has given us so far into our financial operations and our production operations.
And I think it's really important; a lot of times, folks in IT lose touch with the business acumen piece, with understanding what it takes to build the products we build and how IT solutions impact that bottom line. So I can answer any of those questions for y'all after we're done here. And I want to leave you with one final thought: it's about IT solutions and partnerships, Lockheed and AWS, Lockheed and our employees.
I think our IT starts with our employees, and then we start bringing in folks who really want to change the culture. And I keep telling my leadership: you can't legislate culture change. We have to demonstrate it and live it. And this is how we're going to start living it.
How many younger employees are coming in who don't know a time when there wasn't an Alexa or a Google device in their house to ask a question, when a cognitive assistant didn't exist? So we have to start bringing those experiences into the office. So with that, I think I'm done. I want to thank you all for your time. I'll be happy to answer any questions.