How United Airlines accelerates innovation with Amazon DocumentDB

Hi there. Welcome to our session today. We're going to talk about how United Airlines accelerates innovation with Amazon DocumentDB. My name is Rashi Gupta. I lead the product management team here in DocumentDB. I've been with DocumentDB for about three years and with AWS for five years, and before that I spent about 10 years working on cloud and databases at Microsoft and Meta.

Good afternoon, everybody. My name is Paul McLean. I'm the managing director of the passenger service systems team for United Airlines. I've only been at United Airlines for eight months, but I've been in the industry for almost 24 years now, working both on the technology side and on the airline operations side. So, thanks very much for having us today.

Great. So this is our agenda. I'll first give a brief intro to DocumentDB and what's new with DocumentDB, especially in 2023, and then I'll hand it off to Paul to talk about how United Airlines uses DocumentDB.

By show of hands, how many people here use DocumentDB? I would say about 25%. So I think a primer on what DocumentDB is and what we've launched will be useful, and it will be a good segue into how United Airlines uses DocumentDB.

So before we start, let's take a 30,000-foot view of where DocumentDB sits in the AWS portfolio of database services. In that portfolio we have our relational services, which are RDS and Aurora, and then we have purpose-built non-relational services: DynamoDB for key-value, ElastiCache for caching, Neptune for graphs, Timestream for time series, MemoryDB for in-memory, Keyspaces for wide column, and QLDB for ledger. DocumentDB is the purpose-built non-relational service for document databases.

So what are document databases, and why are they important? What we're seeing in the industry is that JSON is becoming the most common data format that customers use. This is due to three reasons. Firstly, JSON is a very flexible format: if you've used JSON, you know it's very easy to add new fields, and that makes it very extensible. Secondly, modeling is very easy: since it's plain text, it's easy to understand the context of an object without knowing any programming. And finally, most APIs use JSON as a common output format.

As JSON becomes so prevalent, there is a need to store, query, and index JSON data, and that's where document databases come in and shine. Firstly, document databases have a very flexible schema for JSON. They allow ad hoc querying: you can add fields to a JSON object and your existing applications and queries don't break, they still work. They also provide flexible indexing.

If you look at document databases, you're not limited to a single-field index: you can have multi-field indexes, geospatial indexes, vector indexes, and text indexes. So there's a variety of flexible indexing you can apply to the same document.

And what we're seeing across our customers is that the line between operational workloads and analytical workloads is blurring more and more, and document databases are becoming a place where you can use the same data whether it's for your operational or your analytical workloads.

So Amazon DocumentDB is, like I said, a purpose-built document database. It is a scalable, highly durable, and fully managed database service for operating mission-critical MongoDB API based workloads.

If you double-click on that, there are four things I want to talk about here. Firstly, it is very fast and scalable. What do I mean by fast and scalable? Let me talk about the architecture first, so you can better understand it. If you look at how a traditional database is built, you have a bunch of compute nodes that perform the compute and are also used for storage, so as your need for storage increases, you need to add more and more compute nodes. That's not how we built DocumentDB. We built it so that compute and storage are separate layers, and that gives you a lot of flexibility and makes the service more scalable. How so?

Firstly, your storage can scale independently of compute. A lot of our customers, for their dev/test workloads, use only one compute node with our storage layer, and our storage is by default spread across three AZs. So you get the three-AZ durability even with one compute node.

Secondly, if you need to increase your read IOPS for your read operations, you can add up to 15 read replicas with DocumentDB. You can also vertically scale up, and because the data is not stored on the compute node, there is no need to rehydrate it. Let's say you're on a 4xlarge and you want to go to a 16xlarge because you have a need for it: all it takes is a failover. You launch a new instance, make it the primary, and within a few seconds your cluster is up and ready to go. There's no data rehydration. This gives customers flexible, fast, and scalable database clusters.
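
To make that scaling model concrete, here's a minimal sketch using boto3 (the cluster name, instance identifiers, and instance classes are placeholders, not from the talk): it adds a read replica, then scales up by adding a larger instance and failing over to it.

```python
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

# Add a read replica to an existing cluster (up to 15 replicas per cluster).
docdb.create_db_instance(
    DBInstanceIdentifier="my-cluster-replica-2",
    DBInstanceClass="db.r6g.4xlarge",
    Engine="docdb",
    DBClusterIdentifier="my-cluster",
)

# Scale up vertically: add a larger instance and fail over to it.
# No data rehydration is needed because storage is a separate layer.
docdb.create_db_instance(
    DBInstanceIdentifier="my-cluster-16xl",
    DBInstanceClass="db.r6g.16xlarge",
    Engine="docdb",
    DBClusterIdentifier="my-cluster",
)
docdb.failover_db_cluster(
    DBClusterIdentifier="my-cluster",
    TargetDBInstanceIdentifier="my-cluster-16xl",
)
```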

We are also fully managed. By default, we take care of all the administration tasks that you may have. We have built-in high availability, and I already talked about three-AZ durability. We follow security best practices always, and we take care of backups and patching automatically.

We also use CloudWatch for monitoring and alerting. Enterprise ready: more and more enterprise customers are using DocumentDB, and they need high availability and durability. We provide four 9s of high availability and three-AZ durability. Enterprise customers also need DR from region-wide outages. We offer a feature called global clusters, which I'm going to talk about in a few minutes, but at a high level, global clusters let you have a primary cluster in one region and up to five secondary regions, which allows you to fail over in case of a region outage.

Security is always number one at AWS, and we have built in security best practices, with encryption in transit and at rest and integrations with VPC and KMS. We're also MongoDB compatible: if you're using any existing MongoDB API based database service, your application drivers and tools work with no change.

We also support hundreds of APIs and operators, and we continue to add more. I already talked about the architecture. So why do customers choose DocumentDB over alternatives? There are four reasons.

Firstly, operational excellence. Customers like the operational excellence that DocumentDB offers. I talked about three-AZ durability; fast cloning is another, and let me give a specific example. Let's say you had a dev/test cluster with one compute node and storage, and you decide to stand up production with three nodes and storage. All you need to do is clone your database, which typically takes minutes, and attach three instances to it. Now you have a production cluster in less than 10 minutes. That's what makes your operations faster.
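
As an illustration of that cloning flow, here's a hedged boto3 sketch. The identifiers are placeholders, and the copy-on-write restore type shown is our reading of how fast cloning is exposed in the API; verify it against the current DocumentDB documentation before relying on it.

```python
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

# Clone the dev/test cluster's storage volume (copy-on-write, so it's fast
# and doesn't duplicate data up front).
docdb.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="prod-cluster",
    SourceDBClusterIdentifier="devtest-cluster",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# Attach three instances to the clone to make it a production cluster.
for i in range(1, 4):
    docdb.create_db_instance(
        DBInstanceIdentifier=f"prod-cluster-instance-{i}",
        DBInstanceClass="db.r6g.4xlarge",
        Engine="docdb",
        DBClusterIdentifier="prod-cluster",
    )
```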

I talked about how easily, within seconds, you can scale vertically. We also enable you to do direct reads from specific replicas, which is another thing customers really like. What this means is, let's say you have a cluster with 10 read replicas. In DocumentDB, there are two ways to access your data: you can either go to the cluster endpoint, and we take care of load balancing and picking the right read replica, or you can choose specific read replicas to read your data from. In that case you can say five of my 10 read replicas are for dev/test and five are for production, and you can make sure the dev/test load isn't interfering with your production read replicas. Customers like that.
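
A minimal pymongo sketch of those two access paths, assuming placeholder hostnames and credentials (the TLS CA bundle option is omitted for brevity):

```python
from pymongo import MongoClient

# Option 1: the cluster endpoint with a replica-set connection; reads can be
# served by replicas when the read preference allows secondaries.
cluster = MongoClient(
    "mongodb://user:password@my-cluster.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017"
    "/?tls=true&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false"
)

# Option 2: connect straight to one specific replica's instance endpoint,
# for example to keep dev/test or reporting traffic off the production replicas.
reporting = MongoClient(
    "mongodb://user:password@my-cluster-replica-5.xxxx.us-east-1.docdb.amazonaws.com:27017"
    "/?tls=true&directConnection=true&retryWrites=false"
)

print(cluster.admin.command("ping"), reporting.admin.command("ping"))
```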

Secondly, price performance. Our goal is to offer the best performance at the lowest price, and we do that in different ways. Firstly, we offer a choice of instance types, including Graviton. I don't know how many of you have used Graviton, but Graviton instances offer up to 40% better price performance, and we pass that on to you.

I also talked about our architecture, and because of it, if your storage increases you don't have to pay extra for compute. We also launched a new feature called I/O-Optimized. I'm going to talk about it in a few minutes, but that's another way for you to save on costs.

Customers also choose DocumentDB for our deep AWS integrations. Our number of integrations continues to increase, and I have a few more coming up. With these integrations, you get the best of both the MongoDB APIs and the AWS services you can use in your applications.

Finally, compatibility. There's no vendor lock-in with DocumentDB: because we use the standard MongoDB API, you can migrate in and out of DocumentDB. Every day we get a report of new customers, the APIs they run, and how many of those APIs we support, and on any given day it's between 95 and 99%. So with the APIs we support, we're pretty sure most, if not all, of your use cases will be met with DocumentDB.

So what are some use cases that customers use DocumentDB for? A lot of our media customers store their media content in DocumentDB, along with the metadata for content management. A lot of retail customers store their catalog data in DocumentDB, and that is actually well suited because catalog data keeps changing: you can have more reviews, more sizes, and things like that. It's ever changing, and DocumentDB is a good fit.

A lot of our internet customers build their mobile and web applications and store their data in DocumentDB. A lot of gaming companies store user profile and personalization data in DocumentDB. And this is reflected in our number of customers: we have tens of thousands of daily active customers using DocumentDB.

This is a list of our referenceable customers. The key thing I wanted to point out here is that we have customers from pretty much every vertical, every industry, and every geography using DocumentDB. You'll see travel customers, internet customers, finance customers, gaming customers, and retail customers using DocumentDB.

Before I talk about what's new, I want to cover two other concepts. First, cluster types. DocumentDB has two cluster types: instance-based clusters and elastic clusters. With an instance-based cluster, you create a cluster and define specific instance types, and from that point you can vertically scale up or down. You get up to 128 TiB of storage, you attach instances to it, and there's only one primary, with up to 15 read replicas.

A lot of our customers use instance-based clusters and are quite happy with them. But we heard from customers that, as they scale, they needed more than one writer and more than 128 TiB of storage. So last year we launched elastic clusters, which are an alternate way of spinning up clusters in DocumentDB. With elastic clusters, we made it much easier for you to use DocumentDB. You no longer have to choose an instance type: you just specify the number of compute units and we take care of provisioning the instances underneath. You can have multiple writers, so your write throughput can scale to millions of writes, and you can store petabytes of data. And not only can you scale vertically, you can also scale horizontally using MongoDB-compatible sharding APIs.
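
A small sketch of what those MongoDB-compatible sharding APIs look like against an elastic cluster, assuming placeholder endpoint, database, collection, and shard key names:

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:password@my-elastic-cluster.xxxx.us-east-1"
    ".docdb-elastic.amazonaws.com:27017/?tls=true&retryWrites=false"
)

# Shard a collection with a hashed shard key so writes spread across shards.
client.admin.command("shardCollection", "bookings.orders",
                     key={"orderId": "hashed"})
```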

So you have a choice between these two cluster types. I also wanted to cover global clusters. We touched on it briefly: we're seeing more and more customers launch global clusters, and this is becoming one of the basic things they need before spinning up a cluster. What are global clusters? Global clusters allow you to have a primary cluster in one region and up to five secondary clusters in different regions. If you write to the primary, in under a second it gets replicated across those secondaries. We're seeing customers use this in two ways.

One is, of course, disaster recovery. If the region where your primary is hosted goes down for whatever reason, you can fail over to a secondary; that's the first use case. The second use case is that a lot of our gaming customers have high performance needs and want low latency. So what they do is write to the primary and then use the secondary regions for reads: the game just picks the closest location and uses that for the read.
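A hedged boto3 sketch of setting up a global cluster with one secondary region; the identifiers and ARN are placeholders, and you should confirm the exact parameters against the current DocumentDB API:

```python
import boto3

# Promote an existing regional cluster into a global cluster.
primary = boto3.client("docdb", region_name="us-east-1")
primary.create_global_cluster(
    GlobalClusterIdentifier="orders-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:orders-primary",
)

# Add a secondary cluster in another region (up to five secondaries).
secondary = boto3.client("docdb", region_name="eu-west-1")
secondary.create_db_cluster(
    DBClusterIdentifier="orders-secondary-eu",
    Engine="docdb",
    GlobalClusterIdentifier="orders-global",
)
```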

So that was a DocumentDB primer. Now I want to cover what's new in DocumentDB, especially in 2023. Here's a list of the features we've delivered in 2023. I'm going to cover each of these in detail on its own slide, so I'm not going to walk through the list right now; I just want to talk about the themes.

Firstly, AWS integrations and analytics. That's a big theme: we continue to integrate more and more AWS services, with a special focus on AI. We have some gen AI announcements that I'm going to talk about, including the SageMaker Canvas integration, and Paul also has a gen AI use case with DocumentDB.

Performance and scale: as customers build more and more scalable applications, they want better performance, so that continues to be a focus, and we've done a lot of work there this year. MongoDB compatibility: we continue to invest in compatibility and, based on what customers want, we continue to launch more capabilities. Regional availability: customers want DocumentDB globally. And finally security, which is our most important thing. Our goal is to enable you to build the most secure applications using DocumentDB, so we've launched features around that as well.

So let's talk about each of these. Firstly, new region availability. This year we launched DocumentDB in three regions: Hong Kong, Hyderabad, and GovCloud East. With this, we are now in more than 20 regions, and our goal is to be wherever AWS regions are globally.

We've also launched elastic clusters in three additional regions: Singapore, Sydney, and Tokyo.

Query capabilities: until last year, DocumentDB supported the MongoDB 3.6 and 4.0 APIs. Customers asked for newer versions of MongoDB, so this year we launched MongoDB 5.0 API compatibility, and with that you get more operators, new features, and 5.0 wire compatibility.

We also launched the ability to do managed version upgrades. If you're on 3.6 or 4.0, you have a fully managed experience to upgrade to DocumentDB 5.0.
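
As a sketch of what kicking off that upgrade can look like with boto3 (the cluster identifier and target version string are placeholders; check the supported target versions and upgrade prerequisites first):

```python
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

# Request an in-place managed major version upgrade to 5.0.
docdb.modify_db_cluster(
    DBClusterIdentifier="my-cluster",
    EngineVersion="5.0.0",
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)
```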

We launched JSON schema validation. With this feature, you can perform schema validation before an insert or update. For example, if you have a field that is supposed to be a double and the incoming value isn't a double, the write will not go through. So you can validate against a JSON schema before a document is inserted.
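
A minimal pymongo sketch of that validation, assuming placeholder connection details and an illustrative collection and field:

```python
from pymongo import MongoClient

db = MongoClient(
    "mongodb://user:password@my-cluster.cluster-xxxx.us-east-1"
    ".docdb.amazonaws.com:27017/?tls=true&retryWrites=false"
)["travel"]

# Create a collection that rejects documents where "price" is not a double.
db.create_collection("fares", validator={
    "$jsonSchema": {
        "bsonType": "object",
        "required": ["price"],
        "properties": {
            "price": {"bsonType": "double", "description": "must be a double"},
        },
    }
})

db.fares.insert_one({"price": 129.99})        # passes validation
# db.fares.insert_one({"price": "129.99"})    # would be rejected by the validator
```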

We also continue to add operators and APIs for both instance-based and elastic clusters. A list is here, and you'll see more of that.

Performance and scale: up to last year, our instance-based clusters supported 64 TiB. This year we've increased that to 128 TiB for instance-based clusters, and even an elastic cluster shard can have up to 128 TiB. As a reminder, even though a single shard tops out at 128 TiB, elastic clusters can support petabytes of data. Next, parallel indexing and index build status.

Customers have to build indexes, and one of the things we heard, especially as document sizes increase, is that until last year index creation could take hours, sometimes even days. That was a big pain point.

So we did two things. One, we added parallel index builds, which, as the name suggests, allow you to create indexes in parallel; I'm going to talk about that in more detail. In addition, we added the ability to see the status of your index build, so you get feedback and know how much of it has completed.

So let's double-click on faster parallel indexing. Parallel indexing works on 4xlarge or higher instance types, so as long as your primary is 4xlarge or higher, you can use it. The way it works is you can now define the number of threads to use to create the index, up to a maximum of 50% of the total vCPUs available on your primary instance, and those threads are used for index creation. This has reduced index build times from days to hours, so customers are seeing significant improvement.
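
Here's a hedged sketch of what a parallel index build could look like through the MongoDB createIndexes command. The `workers` option name is only our reading of the feature described in the talk, and the connection string is a placeholder; confirm the exact option name and limits in the DocumentDB documentation before using it.

```python
from pymongo import MongoClient

db = MongoClient(
    "mongodb://user:password@my-cluster.cluster-xxxx.us-east-1"
    ".docdb.amazonaws.com:27017/?tls=true&retryWrites=false"
)["travel"]

# Build the index with multiple worker threads (option name assumed; keep the
# thread count at or below 50% of the primary instance's vCPUs).
db.command(
    "createIndexes",
    "bookings",
    indexes=[{"key": {"flightDate": 1}, "name": "flightDate_1"}],
    workers=4,
)
```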

And with DocumentDB, like I said, if you have an index to create, you can go from a smaller instance to a 16xlarge; it takes about 30 seconds for the failover. You create your index and then scale back down to save costs. All of this is possible because of our architecture.

One callout I do want to make: the more threads you give to index creation, the more you're taking away from your query operations. So you need to plan for it to make sure it doesn't interfere, or vertically scale up to a higher instance type while index creation is happening. In addition to indexes, we also launched some engine improvements, specifically around document compression and operator improvements.

Customers have asked for document compression because they wanted lower storage and IO costs; that's our focus, like I said, to give you better performance at a cheaper price. So this year in DocumentDB we enabled document compression for documents whose size is greater than 2 KB. The document is actually compressed, which saves you on storage costs, and less data has to be transferred over the wire, which means lower IO costs.

We also launched operator improvements. Customers told us that some of the operators they were using, like $elemMatch and $in, which are frequently used in a nested fashion, would take longer than expected. Both of these we fixed in DocumentDB 5.0.

I just wanted to share some data. On the top left, I don't know if you can see the numbers, but that chart shows the performance of $in as the number of nested fields increases. You can see there's an order-of-magnitude difference between DocumentDB 4.0 and 5.0.

If you look at the lower chart, it shows that the IOPS with compression are a fraction of the IOPS without, so again that's direct cost savings for you because you have fewer IOs to pay for. And finally, load performance with compression: you can see up to 60% improvement with compressed data, which again is a performance gain for you as you load your data into DocumentDB.

I talked about security. DocumentDB already supports encryption at rest and encryption in transit. A lot of customers said they have sensitive data and personally identifiable information, and they want a way to store that data encrypted in DocumentDB. So we launched client-side field-level encryption this year. With that, you can use your own keys from KMS to encrypt data on the client side, and it's stored in DocumentDB in an encrypted format.
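
A hedged sketch of explicit client-side field-level encryption with pymongo (requires the pymongocrypt package). The connection string, KMS credentials, key ARN, and field names are placeholders.

```python
from pymongo import MongoClient
from pymongo.encryption import ClientEncryption, Algorithm
from bson.codec_options import CodecOptions
from bson.binary import UuidRepresentation

client = MongoClient(
    "mongodb://user:password@my-cluster.cluster-xxxx.us-east-1"
    ".docdb.amazonaws.com:27017/?tls=true&retryWrites=false"
)

# Data keys are wrapped by a customer-managed key in AWS KMS.
kms_providers = {"aws": {"accessKeyId": "<AWS_ACCESS_KEY_ID>",
                         "secretAccessKey": "<AWS_SECRET_ACCESS_KEY>"}}
client_encryption = ClientEncryption(
    kms_providers,
    "encryption.__keyVault",            # key vault namespace (placeholder)
    client,
    CodecOptions(uuid_representation=UuidRepresentation.STANDARD),
)
data_key_id = client_encryption.create_data_key(
    "aws",
    master_key={"region": "us-east-1",
                "key": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE"},
)

# Encrypt the sensitive field on the client; the server only sees ciphertext.
encrypted_ssn = client_encryption.encrypt(
    "123-45-6789",
    Algorithm.AEAD_AES_256_CBC_HMAC_SHA_512_Deterministic,
    key_id=data_key_id,
)
client.travel.customers.insert_one({"name": "J. Smith", "ssn": encrypted_ssn})
```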

Let's move on to AWS integrations. Earlier this year, we launched an AWS integration with Lambda. The way it works is you can now trigger Lambda functions from change stream events. So in real time, as changes happen in DocumentDB and are reflected in change streams, you can trigger Lambda functions. Again, it gives you more flexibility to use more AWS services, via Lambda, based on changes in your DocumentDB database.
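
A minimal sketch of a Lambda handler for this integration. The payload shape shown (an "events" list whose items wrap the change document under "event") reflects our reading of the event source mapping; verify the exact field names in the Lambda documentation.

```python
import json

def lambda_handler(event, context):
    """Process a batch of DocumentDB change stream events delivered by the
    event source mapping."""
    changes = event.get("events", [])
    for record in changes:
        change = record.get("event", {})
        op = change.get("operationType")      # e.g. insert, update, delete
        doc_key = change.get("documentKey")
        print(json.dumps({"operation": op, "documentKey": str(doc_key)}))
    return {"processed": len(changes)}
```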

Integration with analytics is a big focus for us. Last year, we launched the JDBC driver, and we've seen many customers use Tableau, for example, to connect to DocumentDB. This year, we announced the ODBC driver, so applications like Power BI can connect to your data in DocumentDB.

Gen AI is on everybody's mind, and a lot of customers are building gen AI applications.

So last week, we announced the in-console integration of DocumentDB with SageMaker Canvas. What this enables you to do is, if you have data in DocumentDB, you can now, without writing a single line of code, take that data, bring it into SageMaker Canvas, create forecasting and regression models, train them, and then publish to QuickSight. All of that happens without you writing a single line of code, all within the in-console experience. That is now live and available for you to use.

And with that, we expect to see customers use more and more of their DocumentDB data for ML and AI use cases. Saving you cost is our focus, as I've mentioned a few times already. So last week, we also announced DocumentDB I/O-Optimized. What does this feature do? Taking a step back, before this feature launched, outside of backups we charged customers on three dimensions: compute, storage, and IOs. Compute and storage are self-explanatory. IOs were the one where customers would often see unpredictability, depending on whether they had a read-heavy or a write-heavy workload.

Customers would often say they needed a more predictable way of paying for DocumentDB. So we launched DocumentDB I/O-Optimized, which is a choice that customers have: they can continue to use the current pricing model or they can switch to this one. With this pricing model, there is no more charge for IOs. Customers pay 10% extra on compute and 3x on storage, and IO is included.

Not only that, if you are using I/O-Optimized, we have also seen 15 to 20% performance gains when you're inserting data or running write-heavy workloads. So how do you choose between the two? The default storage type is standard, which I just talked about, where you are charged for IOs. If you have a primarily read-only workload and you don't have a lot of IO, the standard pricing is a good choice. But if you are looking for price predictability, or you have a high IO workload or high write throughput, you might see better price performance with I/O-Optimized.
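
For reference, a hedged boto3 sketch of switching an existing cluster's storage configuration. The cluster identifier is a placeholder, and the "iopt1" storage type value is an assumption based on how the feature appears to be exposed in the API family; confirm before using.

```python
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

# Move an existing cluster to the I/O-Optimized storage configuration.
docdb.modify_db_cluster(
    DBClusterIdentifier="my-cluster",
    StorageType="iopt1",
    ApplyImmediately=True,
)
```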

So that's what's new with DocumentDB. I'm now going to pass it off to Paul to talk about how United Airlines uses DocumentDB.

Alright. Thank you very much, Rashi, for that great update on DocumentDB. Let me talk to you a little bit about the journey we're working on right now at United Airlines.

Let me start off with a little bit of background. What is a passenger service system? Who flew here? Who flew here on United? Thank you very much. A passenger service system is a series of critical systems that the airline uses to control inventory, your reservations, what you do when you go to the airport, and all the different services underneath that communicate with the outside world, such as travel agencies or other airlines.

At United, we have a mainframe that does that, called SHARES. It's based on a legacy technology which has been around for over 50 years, and it has been the cornerstone of United's operations. Every booking that you make, every time you check in at the airport, everything that you do, all 144 million passengers that we carry, pass through that mainframe.

What we have done over the years, though, is slowly start to externalize some of the pieces and expose them through multiple integration layers, so you can have a friendlier user experience when you check in on the web or in the mobile app. But at the core of everything that still happens beneath that is our mainframe system, which is over 60 years old: not the system itself, but the platform it runs on.

So what are we doing right now? Well, United Airlines is at a bit of a crossroads. The people who run this mainframe are all very great people, but they've been around for a very long time, and nobody comes out of university anymore and says, hey, I want to code in assembler, right?

So we're planning for the future. We also have to work to ensure that we meet rising customer expectations. I'm sure all of you have shopped on Amazon: when you go to do your shopping, you know exactly what you're buying, you know when it's going to show up, and if something happens, you know how to get a refund. Everything is very transparent to you from A to Z, right?

What we're trying to do is start to give that type of visibility to our customers and employees. There's also a lot of industry momentum: over the last number of years, the airline industry has been talking about modernizing. The industry is very restricted by things that were decided 50 to 60 years ago, when electronic reservation systems were first built.

Does anybody here know what an e-ticket is? Thank you to the three people from the airline. Electronic tickets are what the industry uses to be able to reconcile; it's like the bill that goes along with you as you go through your journey. It's an industry standard that all airlines and all travel agencies follow. But these are things the airline is saddled with, and they hold us back from doing certain new things we want to do, like offering new products. This is what I mean by untapped product potential.

Airlines are there to help ensure our customers have the best and smoothest journey they can have, from when they purchase, to what they're buying, all the way through to when they get off the aircraft at their destination. And the modernization and transformation that United Airlines is going through is going to help us do that for our customers.

So, as I was going through: what we have today is a legacy system built around industry standards, and we're trying to change to a modern, more customer-centric offer and order management system. We have mainframe technology that is very monolithic, and while you can carve some pieces out, it's still very hard and challenging to do.

We want to make this a platform-based solution where we can pick and choose the best of breed among the new technologies out there and bring them together to give you the best experience. The integrations into the existing mainframe that we have right now are many, and that causes some challenges as you try to get information out in different ways.

We have everything from screen scraping, to the most modern technology, to still using teletype messages to communicate back and forth between airlines. We're looking to set ourselves up with a modern common integration layer that will unlock the potential of the data we're going to have in there.

We have a lot of manual business processes, and a lot of processes that still rely on people using a green screen. A green screen is basically a window into the mainframe: you're looking at the raw, cryptic text of what's going on in there. I always laugh at this: when you see videos of people at an airport asking to change their flight and somebody's banging away on the keyboard running 50 or 60 commands, some of those people working for United Airlines are working in a green screen to help change your tickets or make sure you're moved along your way, right?

So these are things we're looking to change: the way our employees can service the customer and also the way our customers can service themselves. It's about improving the overall experience of getting you to your destination safely.

So where are we looking to go? Our PSS of the future is broken down into about five different areas, and some of these things you've heard about already when Rashi was talking. The way we look at it is, when you go to purchase something on the website, you're shopping, just like when you go on Amazon and you're looking through different products, putting them together, getting a bundle, and so on. That's what we call the offer stage.

These offers that we make today, though, are still built on legacy systems that were built, once again, 40 or 50 years ago. We're constrained by certain things, like how many booking classes we can have: we can have 26 booking classes, that's all. We have fare basis codes. This is all airline lingo that just holds back the industry, because you have to make all your new technology fit into these little old boxes. Then there's the order, which is what we're building on DocumentDB right now.

This is the part of the system that today you'd call your record locator or your PNR. This is where everything lives about who you are, where you're going, what you've purchased, how we've delivered it to you or haven't delivered it to you, and any special things you're looking for. All of that is going to go into the order. Today, all that information is stored basically in a flat file on a mainframe, so it's very inflexible in how you can access it. We also have the flight management part, which covers when you go to the airport and check in: did we deliver something to you?

And then we have our foundation, which is there to pull all the systems together. This is where we have our integration layer, as I mentioned, and also the translators. The airline industry doesn't do anything very fast; it takes a long time. E-tickets were introduced in 1994; they were meant to get away from the paper ticket you used to hand over when you got on the airplane, where you held your boarding pass and a little coupon was torn out. That has all been made electronic now, but it took many, many years to get there, and I've got to tell you, some airlines still aren't there yet, right?

So we have to build technology that will work with the future, built for what United needs to deliver to our customers. But we still have to work with every other airline that's our partner and every other travel agency and GDS, because they are a significant part of our business. So we have to make this work back and forth: interoperability.

So let's talk a little bit about the journey United has been on over the last couple of years. I wasn't there from the beginning of this journey, so I've got to shout out to my team, some of whom are in this room, who have been working on this journey of airline seating transformation.

When you go to purchase a ticket on United Airlines, you're buying the ability to travel on the airplane, and then you're choosing the seat you're going to sit in, the actual physical seat. Over the years we've exposed this as nice pictures on the website or in the mobile app, but once again, in the background there were a lot of hoops to jump through to make that accessible in the mainframe. And the mainframe used to have seat maps that were Xs and Os and dots on a screen that you had to choose your seats from.

So what United looked at is that we need something that is going to be scalable as we build this out. We want to do rapid innovation, so if we see something we need to tackle quickly, we can, and we want real-time data validation. We still need to keep our mainframe, which is the core operating platform for the airline, in sync with anything we off-host from the system.

We want to make sure it's accessible. One of the most frustrating things for a customer is not being able to get the truth, or getting two or three different truths. Why did this change? Why did this happen? You ask one person and get one answer, you ask another person and get another answer. It happens, and it's unfortunate, but it's a consequence of so many things having been off-hosted and built out into different areas: you're not necessarily using a single source of truth, you're using whatever source of truth that person is looking at right now.

We want to empower our customers and employees with access to the information they need and the ability to leverage it for their journey. We also wanted to start toward our new data model. Like I said, we're looking to go from a PNR to an order. The order is set up as a database with all the information about where you're going, whether something has been used or not, whether you've been refunded or not. That is what we're looking to set up as our order management, our order database.

And then we want to start to pull out our customer information. Right now, on these PNRs, everything is there: all the customer information and personal information is stored in with the flight information and your baggage information. And if your flight goes through a change or a cancellation, there can be 500 lines of information on a PNR that people have to go through to figure out what happened.

So this is what we started to look at. Seating was the place to start. This was a big deal: like I said, seating is what we sell. We sell seats on an airplane to get you from A to B to your destination. We had over 30 different applications at United Airlines that had something to do with seating, and not every one of them did it the same way.

Some did it a little differently. Some, when you did a rebooking, might have given you a similar seat, or not; not everything was the same. So it was a mixture of many different applications managing seat maps.

A seat map is what's on an airplane right now: which seats are available and what their characteristics are, whether they're a window or an aisle. That was all managed in the mainframe, once again with Xs and Os.

So we decided this was a great place to start, a bold place to start, but a great place to start carving information out of the mainframe. Once again, we had flat-file data structures, very difficult to change. 90% of our code was in assembler: very hard to change, very hard to update, not very flexible, and it takes a long time to do anything.

If you've ever worked with mainframes: multiple interfaces, multiple ways of doing things. And once again, it's a complex platform, and if you need to scale up, it's expensive. Mainframes are expensive to run.

Where we got to today is that all of our applications now come into one service, what we call the seat assigner service. That service looks at our Amazon DynamoDB table to find the seat inventory, which seats are available for you to look at and choose. Once the customer has made the decision, it makes another call over to DocumentDB and creates the record there in what we call our new order, which has the customer's name, the information about where they're traveling, and what seats they chose. And if there are any changes, those are recorded in DocumentDB as well.
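
To illustrate that flow, here's a hypothetical sketch in Python: the table name, collection names, connection string, and document shape are all made up for illustration and are not United's actual schema or service code.

```python
import boto3
from pymongo import MongoClient

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
orders = MongoClient(
    "mongodb://user:password@orders-cluster.cluster-xxxx.us-east-1"
    ".docdb.amazonaws.com:27017/?tls=true&retryWrites=false"
)["orders"]["orders"]

def assign_seat(order_id: str, flight: str, seat: str, passenger: str) -> bool:
    # 1) Check seat availability in the DynamoDB seat inventory table.
    inventory = dynamodb.Table("seat-inventory")   # hypothetical table name
    item = inventory.get_item(Key={"flight": flight, "seat": seat}).get("Item")
    if not item or item.get("status") != "AVAILABLE":
        return False

    # 2) Record the selection on the customer's order document in DocumentDB,
    #    keeping a history entry for later "why did my seat change?" questions.
    orders.update_one(
        {"_id": order_id},
        {
            "$set": {f"seats.{flight}": {"seat": seat, "passenger": passenger}},
            "$push": {"history": {"type": "SEAT_ASSIGNED",
                                  "flight": flight, "seat": seat}},
        },
        upsert=True,
    )
    return True
```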

What's also important is that we have to sync this information back to our mainframe, and it happens almost in real time for all of these situations. DocumentDB gives us a new, flexible database that can give us new capabilities for our customers. It gives us a common set of isolated rules that we can build into this microservice architecture, and it's in the cloud.
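
One way this kind of near-real-time sync can be driven from DocumentDB is with change streams; the talk doesn't say this is the mechanism United uses, so treat this as a hedged sketch with placeholder names, and note that change streams must first be enabled on the collection.

```python
from pymongo import MongoClient

client = MongoClient(
    "mongodb://user:password@orders-cluster.cluster-xxxx.us-east-1"
    ".docdb.amazonaws.com:27017/?tls=true&retryWrites=false"
)
db = client["orders"]

def publish_to_mainframe(doc):
    # Stand-in for whatever bridge forwards the change to the mainframe.
    print("sync to mainframe:", doc.get("_id"))

# Enable change streams on the collection (DocumentDB admin command).
client["admin"].command(
    "modifyChangeStreams", 1, database="orders", collection="orders", enable=True
)

# Tail changes and hand each full document to the forwarding bridge.
with db["orders"].watch(full_document="updateLookup") as stream:
    for change in stream:
        publish_to_mainframe(change["fullDocument"])
```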

We need to scale when something starts to go wrong. Airlines go through peaks and valleys: when big storms hit all over the place, airlines are scrambling to rebook their passengers and get them from A to B, lots of people are on the phone, and you get a massive uptick in how many transactions are going against your system.

Normally you're scaling mainframes to handle the peak of what's going on, so if you're handling those two days, you've sized your mainframe to those peaks. This gives us the ability to scale up and down as we plan for how much data we're going to have to store and how much access we're going to require. So, like I said, Amazon DocumentDB was our chosen database.

As Rashi mentioned, it adapts easily to a changing schema. What we've built now is a starting point for what we call the United Order, and we want to grow that, so we want something that's flexible. We also want to make sure it can scale reads. When you make your booking, you can make it 300-something days out, and once you've made it and selected your seat, unless something changes, nobody's going to look at that information until you start getting close to when travel is going to happen.

When you start getting close to travel, there might be an aircraft swap or an airplane change, which causes a reaccommodation, and your seating gets changed and so on. But up until that point, a lot of the access is just reads, not writes back to the data. We also wanted consistent indexes and the ability to add them as new access patterns emerge.

Once again, we need this to be flexible, we need it to change as we need it to, and it needs to change fast. And the ability for global clusters to perform fast cross-region failover: this is going to become a critical part of the operation of the airline, just like our mainframe. It needs to be stable, it needs to be strong, it needs to perform, it needs to have redundancy, and it can't go down. If this goes down and you can't figure out where people are sitting on an airplane, it doesn't go very well at an airport.

Our seat inventory is on Amazon DynamoDB, our history is on Amazon RDS, and our archive is on Amazon S3. So we've used a lot of different Amazon services to develop this entire seating capability that we've pulled, or are in the process of pulling, out of our mainframe.

Along the way, working with AWS, we had some challenges. One of them is what I talked about: we have data that needs to be accessed really far out and data that needs to be accessed really close in, and the close-in data has a lot more churn, a lot more reads and writes on it.

So we had to build application logic to look up across those different selections; there was no built-in way to do that with DocumentDB. Failover capabilities: as we discussed, there are some great failover capabilities there, but the automation of it we had to do in house, and we worked with AWS to figure out how to do it.

Some of the things we've been talking about have been fed back into what Rashi was talking about earlier today. I also talked about eventing: a lot of the data that was coming out of the eventing, once again, was not really usable right off the bat for us.

So we had to build custom, meaningful events, and we had to add our own layer on top of that to be able to understand it and event it out.

Let's shift a bit to the future and talk about gen AI. We've all heard about gen AI for the last six or seven months; it's become very popular. What we started to look at, as we transform our mainframe and transform United Airlines, is how we take generative AI and build it into the core of what we're looking to do.

As an example, we started with a simple use case. If you see those four quadrants there, that's the green screen I was talking about, which some of our agents use to see what's going on. We took the information from our order and seating history, extracted it out of DocumentDB, and were able to load it into Amazon Bedrock and teach it how to read what we call cryptic PNRs and read the history.

So if you had multiple lines of changes in there and you asked somebody, hey, why did my seat change from 14D to 36F, what happened? If you went to certain agents, they would look at you and say, sorry, sir, I don't know, or sorry, ma'am, I don't know, because it takes investigation to work out what happened.

What we taught generative AI to do is read that matrix of cryptic information that we exposed from DocumentDB, so that if somebody asks a simple question like "why did my seat change?", it can go through the history, see all the different events that went on, and say: your seat changed on this date because there was an aircraft change due to a schedule change, due to this and that, and give a real answer in real time.

So what we're working on right now is being able to pull that information out of DocumentDB, run it through, and build it into some of the GUIs we're building for our call centers and airports, so agents can ask these simple questions, or have some of this precanned, and get the information out without having to do deep investigation into why something changed.
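
A hedged sketch of that pattern, pulling an order's history out of DocumentDB and asking a Bedrock-hosted model to explain it. The connection string, collection schema, and model ID are placeholders, not United's implementation.

```python
import json
import boto3
from pymongo import MongoClient

orders = MongoClient(
    "mongodb://user:password@orders-cluster.cluster-xxxx.us-east-1"
    ".docdb.amazonaws.com:27017/?tls=true&retryWrites=false"
)["orders"]["orders"]
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def explain_seat_change(order_id: str) -> str:
    # Pull the order's event history out of DocumentDB.
    order = orders.find_one({"_id": order_id}, {"history": 1}) or {}
    prompt = (
        "You are an airline agent assistant. Using the following order history, "
        "explain in plain language why the passenger's seat changed:\n"
        + json.dumps(order.get("history", []), default=str)
    )
    # Ask a Bedrock model to summarize it (model ID is a placeholder; use one
    # your account has access to).
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```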

This gives us the ability to build generative AI into the core of what we're doing. Do we have all the answers? No, trust me, we don't; we're still figuring it out as we go along. But it's very exciting, and some of the feedback we've seen so far has been really interesting. It's mind-changing when you tell people that something that was 500 lines can be explained in two seconds. That's going to be an exponential change for how we serve our customers, or how our agents serve our customers.

Why I'm bringing this up is because what we're talking with AWS about right now is how we make DocumentDB interface with this directly. We're looking to do this at our core for everything we do using generative AI: how you interface with the data, how you can have gen AI running in the background.

Some of you may have shown up at the airport, gone to check in, and been told, sorry, I can't check you in because something's out of sync, or you haven't paid a fee yet. It would have been nice if you'd known that before you showed up at the airport. This is where we have gen AI working in the background, going through all the different phases of your journey and saying: Mr. Smith, sorry, you have a fee you still haven't paid, and sending out a note asking if you want to pay it. That's done instantaneously instead of being offloaded to a contact center or somebody else, or stopping you at the airport when you actually get there.

So where is this taking us? This is a multiyear and challenging journey for any airline. At my previous airline, they took a bit of a different path: they didn't go through a transformation, but moved off a mainframe to a modern PSS system that's still based around all the old legacy capabilities of e-tickets and PNRs.

So that's one option. United has taken the other option, of trying to transform, and this is not going to happen in two years. This is going to be a multiyear journey as we start to carve up the mainframe. We want to start with a solid foundation to get there, like the DocumentDB and Amazon Bedrock pieces we talked about. How do we set up our foundation? What's the communication? How are we going to work with the other airlines and our other partners to ensure we can work together to move the industry away from the legacy it's had for the last 60 years?

And how do we start to transform it for the future? We have to make sure we coexist, again very important, and we want to make sure that anything we build enables rapid innovation.

To give an example: there were new rules that came in around family seating, because of a regulation that if somebody's flying with their children, they should be able to sit with them. If they didn't choose seats and it didn't happen, it was up to the airport to figure it out, and if the airport didn't figure it out, it was up to the flight attendants to figure it out on the plane. That's a complete disaster for getting an aircraft out on time, and it's also not a great experience for our customers, where all of a sudden you're not sitting with your child or with the family you want to travel with.

Because we had off-hosted some of this seating, we had the ability to adapt to that quickly and offer our customers a self-service capability to choose their seats together. If they're a family that meets the criteria, children under a certain age, traveling together on the same reservation, they can actually choose the seats they want to sit in together. Before, you had to call somebody; no more of that. It's all done by the customers themselves now.

If we had to do that on a mainframe, it would have taken us months, but we were able to do it within weeks.

This has been a great partnership and a great journey so far with AWS. We continue to share information, we continue to improve, and we're looking to see what the future holds. Like I said, this is very exciting for anybody who works in the airline industry, given where we're trying to go. Very challenging, but I'm looking forward to the adventure we're all going to be on.

So that's what we have for you today. Rashi and I are going to take some questions off stage. Rashi will be over here, and I will be over here on the left. If you have any questions, please come up and talk to us; I'd like to give you a bit more personal, one-on-one service instead of people shouting out questions in the room.

So with that, thank you very much, everybody. It's been a great pleasure to talk to you today.
