How Oaktree Capital saved 50% by modernizing its Microsoft workloads

Good afternoon, everyone. Welcome to ENT 318: How Oaktree Capital saved 50% by modernizing its Microsoft workloads. I have Chitra Hota on the stage here with me. She is the Managing Director and head of data and architecture at Oaktree. I also have Yogi Barra. She is a Principal Solutions Architect here at AWS focusing on Microsoft technologies. My name is Thor Giddings. I am the Americas Solutions Architecture leader, also working on Microsoft technologies.

So let's talk about what we're gonna talk about today, right?

The agenda:

  • I'm gonna talk a little bit about Microsoft modernization on AWS.
  • Yogi's gonna talk a little bit about SQL Server to Amazon Redshift and what that can look like.
  • Then Chitra is gonna dive deep on the Oaktree database modernization story, their data lake and reporting architecture, their index processing use case on AWS, and then some next steps that we're taking.

So let's talk about modernization, right? So I always make the joke, you know, if I asked eight people, what is modernization, I'd probably get 10 different answers, right? So let's just baseline it for you.

We at AWS see modernization as a process of progressively transforming existing applications and infrastructure to extend into higher value, cloud native services that unlock new business capabilities, accelerate innovation and reduce technical debt.

Ok, let's see how that actually works, right? So modernization doesn't happen by accident, right? You have to plan these sorts of things, you have to really focus on this, and modernization can be very expensive. So you need to work with intention, and it all starts with business value. You should always start with: what is that thing that differentiates my company from everyone else? Where's the innovation? How's that gonna help the customer experience? That's where you should be spending your money, right? So always think about the business value of what you're doing and make sure that it exists. If it doesn't, keep things the way they are, right?

So in this first phase, we have assess, and that's all about building out that business case, understanding the impact, understanding how everything glues together, who's affected, and how much risk is involved, and figuring out: is this a good candidate? Then you figure out, OK, I have this candidate, what am I gonna do next? Well, you're gonna mobilize, right? You're gonna get your whole team. I'm not just talking about IT people, I'm talking about business leaders too. Every team that's affected, you have to get them on the same page, right? You have to make sure that they understand the changes and the impact, so there are no surprises. And then finally there is the modernize step, probably what you thought the whole thing was, right? That is the actual technical changing of the code and deploying of the systems. You're talking about making sure that everything is wired together, it is tested, it is ready to go. And for most of the customers that I talk to, you're not looking at wholesale modernization; you might take a component or a piece of a critical application, find where you can apply that innovation, and just modernize that one little slice. Find that business value. OK?

So when we talk about Microsoft workloads on the AWS ecosystem, we have innovation at every level at AWS. We try to support you wherever you are in that journey, and most customers are at multiple points, if not every point, in this journey. But a lot of customers will start out at rehost: let's just bring our stuff over to AWS, let's see how it works, right? And you're talking about maybe Windows on EC2 workloads and SQL Server on EC2 workloads, and you can optimize that and run it and be happy there. But a lot of customers don't stop there. They think, how can I make this better? How can I make it more cost effective? And a lot of customers will go to the re-platforming stage, and that's maybe putting some applications in containers, maybe looking at managed services, right? Why do the undifferentiated heavy lifting yourself? Let AWS take care of that. Focus on the innovation, focus on the customer experience, focus on that business value. And then finally, you have refactoring. So when you're all in on modernization and you're changing your code, maybe you're going from a legacy version of .NET Framework to a newer version that runs on a different operating system. Maybe you're looking at serverless as a way to not only scale but maybe save a little cost. Also, you're using cloud-native technologies like purpose-built databases such as Aurora and DynamoDB, bringing in those cloud-native services to help fuel that modernization.

Ok. So with that, I'm gonna hand it off to Yogi. Thank you.

So let's talk about SQL Server to Amazon Redshift modernization. SQL Server is an expensive product, right? It has been used both as a data warehouse and as a database engine. In the past, what used to happen was that business leaders were going through the data engineers for their analytics needs, and developers were going through the database administrators to manage the data warehouse for them. This old paradigm does not exist in the modern world. In the modern world, customers want a data warehouse which is fully managed, self-serve, a zero-touch, fault-tolerant system that they can use in their day-to-day work to deliver faster insights from data.

So a modern cloud data warehouse has to deliver analytics without compromises. First, you need to analyze all your data without movement, because the data is growing: you have gone from gigabytes to terabytes to petabytes of data that you need to analyze without having to move it. At the same time, it needs to support all standard file formats such as Parquet or JSON, and access your data seamlessly across your data warehouse and operational databases.

Second, you don't want to think about provisioning or even managing your data warehouse.

Third, you want to make sure that the system can handle your unpredictable and diverse workloads. At the same time, you want to stay within your budget and not incur cost when the system is idle. You also want to make sure you have top-notch price performance to make it affordable for your ever-growing volume of data.

And lastly, there are real-time users connecting to it: thousands of users accessing your system simultaneously using business intelligence tools. So you want a system which can handle all of that.

So data warehousing today has to do a lot more than it did before. That's why Amazon Redshift, the most widely used and affordable cloud data warehouse, has been investing and innovating along all these areas to meet your challenges and address your modern needs. It helps you analyze all of your data, whether it's log files, clickstream data, semi-structured data, unstructured data, nested data, or transaction data, without movement or transformation. It's the most affordable cloud data warehouse, delivering up to three times better price performance compared to the competition. It requires zero administration and reliably delivers insights from your data in seconds, in a more secure way. And the best part is we bring you flexibility and the choice is yours: if you want to start your analytics system very quickly, you can use serverless, or if you are an advanced user who wants granular control and a custom provisioned environment, you can use a provisioned cluster. So we have multiple ways you can use Redshift.

There are tons of use cases that we are seeing from the thousands of customers we have today. One of them is self-service analytics. It's a broad category, and serverless gets you to your destination faster, as it takes seconds to get started. So if you are a data scientist, a data developer, or a line-of-business analyst, you can easily create machine learning workloads in SQL, set up reporting and dashboarding, or conduct real-time analytics.

And the second use case we have here is addressing your unpredictable or spiky workloads, because we have seen that sometimes the workloads are heavy during working hours and sometimes demand is low, right? So you have spiky workloads, or even situations where you have predictable processing windows. We are seeing many of our customers using the Redshift Serverless data stack for that.
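
To make the "self-serve, zero administration" point concrete, here is a minimal sketch (not from the talk) of querying Redshift Serverless through the Redshift Data API with boto3, so an analyst can run SQL without managing clusters or JDBC connections. The workgroup name, database, and table are hypothetical.

```python
# Minimal sketch: run a SQL statement against a Redshift Serverless workgroup
# via the Redshift Data API and print the result. Names are hypothetical.
import time
import boto3

client = boto3.client("redshift-data")

resp = client.execute_statement(
    WorkgroupName="analytics-serverless",   # hypothetical serverless workgroup
    Database="dev",
    Sql="SELECT fund_id, SUM(market_value) AS mv FROM holdings GROUP BY fund_id LIMIT 10;",
)
statement_id = resp["Id"]

# Poll until the statement finishes, then fetch the first page of results.
while True:
    desc = client.describe_statement(Id=statement_id)
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] == "FINISHED":
    for record in client.get_statement_result(Id=statement_id)["Records"]:
        print(record)
```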

So now I'm gonna hand it over to Chitra to talk about how Oaktree Capital used AWS technologies to modernize their technology. Thank you, Yogi.

Hello everyone. It's great to be here today. I'm sharing the data modernization journey at Oaktree. Just to give you a little bit about who we are: Oaktree was founded in 1995 by Howard Marks and Bruce Karsh, along with Larry Keele, Richard Masson, and Sheldon Stone as principals. It started with $5 billion of assets under management in 1995, and today we are at $179 billion in assets under management. But along with the AUM, the data has also grown, adding more complex portfolios and private offerings. Our data challenges are like those of any other firm that has grown organically.

Let's talk a bit about our challenges. When I started with Oaktree in July of 2021 and began meeting the functional leads, the strongest theme was around the ability to scale the data platforms, and also the other application platforms, without disrupting the business. Tall order, right? The second thing was identifying golden sources of data and enabling on-demand, self-serve reporting. But the challenge was to lay out a vision that was long term, yet have some short-term wins along the way.

So to give you a sense of our platforms: most of the platform was monolithic SQL Server and C#. Most of the databases were end of life, 15 to 20 years old. We did not have any harmonized, governed, self-serve data analytics capabilities; all of that had to be built, and more. More than that, the business needs were changing. During the fundraising season there was a lot of demand on our data team and our IT teams to scale up. So we had to somehow be more agile and have faster turnarounds on how we onboarded our data sets.

The other thing that was happening on the sidelines was that the business was getting more savvy and data science teams were getting embedded in the business teams, which means faster access to data and a lot of alternate data coming into the picture. Think unstructured data, think earnings calls, think quarterly reports, some of the staples of the asset management business. And we were not able to process that, because we only knew how to process structured data in SQL Server.

The other thing was, once they started hiring leaders in data, they were looking for ROI in the data space. Last but not least, we were not proactive in our monitoring. We used to have SLA delays, we used to have delays in our processes, and we had to be working around the clock to fix some of these issues.

So after the challenges were discovered, what did we do next? We had to lay out a vision. Now, how many of you here have gone through any modernization? Please raise your hands. Quite a few. Then you must have heard of the phrase, "we've always done it this way and it works, I don't need to change anything," right? So how do you change minds?

The key steps to a successful data strategy and modernization effort are having a long-term vision, connecting it with some short-term wins, and taking your partners along on the journey. Changing the culture of the firm is very essential; it's a tall order, but it can be done. So you have to pick and choose some of these early adopters. And how do you do that? Let me tell you a story.

So one of my key business users, very savvy with data, very savvy with writing SQL, told me he doesn't need to move to the cloud; he can do what he needs to do. I said, OK, let's sit down and have a conversation over lunch. So I asked him, what is your biggest impediment? And then it comes out: oh, tech is not fast enough. I said, OK, other than that, what is your biggest data impediment? The biggest data impediment, it appeared, was SLAs for data. We were not able to serve data for Europe and Asia-Pacific in time.

So when I started looking into the history of our data, we realized that we were processing six months to a year of data overnight, every day, with no material change in either the transactions or the holdings. So we started thinking about incremental processing.

And within a few weeks, we set up incremental processing, and lo and behold, we were able to get six hours back on the SLA. Happy customer. And he also became an early adopter and started spreading the word about why this was good for us. So find those early adopters and create that sponsorship by defining those small business cases that have big impact.
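
To make the incremental-processing idea concrete, here is a minimal PySpark sketch (not Oaktree's actual framework, which is metadata-driven) that reads only the partitions newer than a watermark instead of reprocessing six to twelve months of data every night. The bucket paths, table layout, and "business_date" partition column are hypothetical.

```python
# Minimal sketch of incremental (delta) processing against an S3 data lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-holdings").getOrCreate()

# Last successfully processed date, e.g. read from a job-control table or parameter store.
watermark = "2023-06-30"

# Instead of reprocessing 6-12 months every night, read only partitions newer than the watermark.
incoming = (
    spark.read.parquet("s3://datalake-raw/holdings/")          # hypothetical raw-zone path
    .where(F.col("business_date") > F.lit(watermark))
)

# Transform only the delta and append it to the conformed zone.
conformed = incoming.dropDuplicates(["portfolio_id", "security_id", "business_date"])
(
    conformed.write.mode("append")
    .partitionBy("business_date")
    .parquet("s3://datalake-conformed/holdings/")               # hypothetical conformed-zone path
)
```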

The next thing is inspiring with examples of success. Sometimes people don't know what good or great looks like. So if you have been in the industry, bring in those examples. In my case, I brought in examples of modernization that I'd worked on at JP Morgan, at Morgan Stanley, at Merrill Lynch, and showed them what good could look like, what great could look like. That helped jumpstart the thinking process.

The other thing is thinking out of the box. People are used to doing things a certain way and it is very difficult to change behaviors, so bringing in that innovative thinking is key. To do that, we held our very first hackathon at Oaktree last year. This brought up a lot of great ideas, and we started making these ideas a reality by creating small prototypes. I'll talk about a few of those later on in the session.

The other thing that we had to set our minds to was agile practices. We were waterfall: requirements were coming through email, there was scope creep all the time, and the process of thinking of data as a product that you could deliver iteratively did not exist. So we had to bring in agile practices and start training not just IT but the business analysts and the business users as well. As I touched upon, the whole organization gets involved in the change.

The other one, and the most important one, is: am I getting my ROI? How do you measure success? Let me tell you another story. You don't have to move an entire application to the cloud if you don't have to; you can move parts of it, right? So we had a risk analytics application, a vendor application, which wasn't scaling. We would run a scenario and it would take about four hours for a single security, and a monthly run would take 24 to 48 hours. So basically, we couldn't run multiple shocks during the day.

So we thought of moving just the compute over to the cloud, because we couldn't really throw away the software. What we did was scale the compute: we refactored the code onto Linux using .NET Core, and then we created a series of Lambda functions that could be spun up concurrently as needed, process, and then be turned down. With that, we brought the four hours down to 20 minutes for multiple scenarios.
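
As an illustration of that fan-out pattern, here is a minimal sketch (assumed, not the actual implementation, which was refactored .NET Core) of invoking a scenario-engine Lambda concurrently for several shocks instead of running them sequentially. The function name and payload shape are hypothetical.

```python
# Minimal sketch: fan risk scenarios out to a Lambda function in parallel.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def run_scenario(scenario: dict) -> dict:
    """Invoke one scenario synchronously and return its result payload."""
    resp = lambda_client.invoke(
        FunctionName="risk-scenario-engine",      # hypothetical function name
        InvocationType="RequestResponse",
        Payload=json.dumps(scenario).encode("utf-8"),
    )
    return json.loads(resp["Payload"].read())

scenarios = [
    {"shock": "rates_+100bp", "portfolio": "FUND_A"},
    {"shock": "spreads_+50bp", "portfolio": "FUND_A"},
    {"shock": "fx_-10pct", "portfolio": "FUND_A"},
]

# Run the shocks concurrently; the compute scales out for the run and then goes away.
with ThreadPoolExecutor(max_workers=len(scenarios)) as pool:
    results = list(pool.map(run_scenario, scenarios))
print(results)
```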

So this was tangible, real business value that could be measured. That again jumpstarts your KPI journey, because now you can show tangible evidence of what good looks like. Last but not least, upskill your teams. That is probably the hardest thing to do, because you have the legacy team and you're hiring new people. You can do that by having some consulting capacity brought in initially to help jumpstart the journey, but we made use of all the services we could find for learning: immersion days, data labs, build labs, some self-learning, and we encouraged everyone to get certified. So that was huge, and it kind of needed a whole village to get us there. So now we've gone through the challenges and what we needed to do for the data strategy.

Let's talk about our architecture. Our initial focus was to work with AWS and come up with something simple that was flexible enough and could be extended later on. A single data lake is what we landed on. The data lake had S3 with AWS Lake Formation on top, and we had two patterns for getting data in: one from on-prem systems of record for core data sources like holdings, transactions, and performance data, and even the bookkeeping data; and the second one for vendor data, think index and alternate data sources.

And then we had other producer accounts for risk analytics and anything that is produced using all of this other data. Athena was used as the data virtualization layer for serverless querying; we have a lot of ad hoc queries getting fired through the day. Redshift Serverless was used as the cloud data warehouse. We went with Redshift Serverless because there would have been a huge effort to train the infrastructure team to create and manage clusters. So initially we went with serverless, and if things did not pan out (which they did pan out), we would have gone back to provisioned clusters.
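
For the ad hoc, serverless querying pattern described here, a minimal boto3 sketch of running an Athena query over the data lake might look like the following. The Glue database, table, and results bucket names are hypothetical.

```python
# Minimal sketch: run an ad hoc Athena query over the conformed zone and print results.
import time
import boto3

athena = boto3.client("athena")

query = """
SELECT index_name, business_date, COUNT(*) AS constituents
FROM index_constituents
WHERE business_date = DATE '2023-06-30'
GROUP BY index_name, business_date
"""

start = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "datalake_conformed"},                 # hypothetical Glue database
    ResultConfiguration={"OutputLocation": "s3://athena-query-results-bucket/"},  # hypothetical bucket
)
qid = start["QueryExecutionId"]

# Poll for completion, then read the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```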

The architecture was flexible enough that we could put all our conformed data on S3, so if tomorrow we needed to switch to a different warehouse, we could. A lot of people always ask me, why didn't you go with Snowflake? We wanted to stay cloud native with AWS, but we could have done that; because our conformed data was on S3, we could have moved it to any warehouse. And on the consumption side, we kept Power BI as the visualization tool of choice, because we didn't want to change the user habits; changing UIs and user habits takes a little bit of time.

Now, let's talk about what it took to move from SQL Server to a data lake and how our SLAs improved. To give you a little perspective on what our SQL Server footprint looked like: we had 514 SQL Server databases, 500 terabytes of storage, 18 replicated copies, and 200,000 stored procedures. Horizontal scaling was only achieved by replicating the databases, and there were 229 linked servers for cross-database access. In the first week that I joined, a couple of our production servers went down, and lo and behold, it was because of linked server calls with NOLOCK not being used. So think mayhem, and it's because of ad hoc queries.

So as of today, we have added zero linked servers and brought down all of the linked server queries by enabling virtualization. In the old state, we used SSIS for ETL, C# jobs and Windows scheduler for processing and job scheduling, and our data warehouse was relational and on a physical server. We did not have row- and column-level access control for governance, and all of our applications were using WPF Windows desktop.

So how does the modernized state look? We are using Lake Formation over S3. All of our ETL has been transformed to AWS Glue; we're using Spark and Python with homegrown data frameworks and pipelines. We also use EventBridge, Lambda, Step Functions, and Glue workflows for orchestration, and Amazon SNS, SQS, and CloudWatch for monitoring. We use Athena, like I mentioned, for distributed querying, Redshift for advanced analytics with granular access control, and Power BI for visualization. For some of our applications, instead of sticking with Windows Server, we migrated to Linux; a good example of that was the risk analytics workload. In some cases you have to bring your licenses along so that you can BYOL (bring your own license) for SQL Server as needed.

On the data access layer, we moved from stored procedures to APIs, and on the application front, we moved to web-based, Dockerized applications using the React framework and AG Grid. Now let's talk about some of our modernization use cases. I touched upon one of the bigger ones, which is the index modernization use case. We started our modernization journey with the daily index constituent processing.

So you would ask me, why did we choose index? The first thing was that index data is vendor data; there's no PII to worry about, right? The second thing was that the volume is huge, and we really had problems with historical data for indices. We were only processing current data, or about six months' worth of data, on our on-prem data sources, and we needed scalable compute and scalable storage to meet our SLAs. And we had daily production issues for the 1,500 index files that we received from the seven providers. One of the reasons for the SLA drag was that all of the processing was sequential; there was not much parallel processing happening.

And also, whenever there was an issue with the vendor, the metadata wasn't getting mapped, so we didn't know where the problem was. If you're not taking the metadata and creating a conformed view, you don't know the file layout has changed on you, because there is no way of knowing; none of it was cataloged. The other thing was data quality: because of these timing issues, data quality was not consistent and we were not able to monitor it on a regular basis.

The other thing was that we could not do real-time or event-based processing. Think about calculating a custom or a carved-out index: in the old state, you could do that maybe once a month or once a quarter. But now, in the current state, with the data coming in real time or intraday, you can do that calculation any time by running the engine. The other, bigger hindrance we were facing was that we did not know how to process unstructured data: PDFs, images, invoices, quarterly reports, earnings calls, and so forth.

So we needed an infrastructure, a data architecture, that could help process both structured and unstructured data together at different velocities. So what did we move to? Now that we were using S3 with Parquet, storage was no longer an issue. Since we were using metadata-driven, config-driven processing, processing was much faster: 50% faster than what we had seen before. All the historical data was in the data lake, so the cost of holding all this historical data in multiple replicated copies could be eliminated, so you're saving on that. Also compute: you could use compute as and when you needed it.

So the compute was not trapped forever like it was on prem; you use it as you need it, and you can make it faster or slower as you need it. All of our business rules were now business-led, and everything was processed with metadata-driven logic. We could do pre-validation, so we could catch errors in the vendor files before they even landed in the raw zone, which was a huge bonus because we now knew what a predictable SLA would look like.
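
To illustrate the metadata-driven pre-validation idea, here is a minimal sketch (assumed, not the actual framework) where each vendor file is checked against a small layout spec before it is allowed into the raw zone. The spec entries, bucket, and file layout are hypothetical; in practice the metadata would live in a catalog or config store.

```python
# Minimal sketch: validate a vendor file's layout against a metadata spec before landing it.
import csv
import io

import boto3

s3 = boto3.client("s3")

# One hypothetical spec per vendor/file type; real specs would be externally managed metadata.
FILE_SPECS = {
    "vendor_x_index": {
        "delimiter": ",",
        "expected_columns": ["index_id", "constituent_id", "weight", "business_date"],
        "required_columns": ["index_id", "constituent_id", "business_date"],
    }
}

def prevalidate(bucket: str, key: str, spec_name: str) -> list:
    """Return a list of validation errors; an empty list means the file may land in the raw zone."""
    spec = FILE_SPECS[spec_name]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    header = next(csv.reader(io.StringIO(body), delimiter=spec["delimiter"]), [])

    errors = []
    if header != spec["expected_columns"]:
        errors.append(f"layout changed: got {header}")
    for required in spec["required_columns"]:
        if required not in header:
            errors.append(f"missing required column {required}")
    return errors

# Example: quarantine the file instead of landing it if the layout has drifted.
issues = prevalidate("vendor-landing-bucket", "vendor_x/2023-06-30/index.csv", "vendor_x_index")
if issues:
    print("Quarantine file:", issues)
```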

So we put in proactive monitoring frameworks and reduced the number of errors in a day. Our data engineering framework was crafted for reusability, and this led to a 50% improvement in our data ingestion. So now onboarding new data sets was not a problem at all; we could handle multiple formats very quickly, instead of having to wait a few months for each data set.

Let's talk a bit about our real-life business cases with Athena. I already touched upon index, and cash flow processing used the same pattern as index. So we moved from SQL Server to S3 and the data lake. We had high-volume compute needs, which we addressed with AWS. We could create temporal views of historical data; not a problem anymore. We could do on-demand and custom processing for carved-out indices, and we enabled ad hoc querying capabilities using Athena's distributed computing capabilities.

The second use case was ESG reporting. ESG is a very tricky use case, high visibility; it's a mandate now for most portfolio companies, and it comes with structured and unstructured data. Some of it, like CDP, comes through reports that some of the bigger companies produce, and we also had MC data to look at. Some of the data had to be extrapolated, and estimation models were created for carbon emissions. So there was a lot of transformation needed, a lot of data to be landed, and a lot of calculation to be done. This is perfect for another very important cloud use case: enabling ESG reporting across the firm, and also for portfolios.

The third important use case was the document parsing use case. I talked about the hackathon and the ideas that came out of it.

One of the ideas that came out of it was called Doctor Document, and it was created specifically for the need to parse PDFs, like quarterly reports or invoices coming from different companies. So OCR and PDFs were what we were looking to parse. We used Amazon Textract, and we extracted the key-value pairs, lines, and tables. We created rules-based templates, and we created metadata that could capture the document type and what the mapping was. Then we used Amazon Kendra on top of it for indexing, not just for document searches but also for the parsed content. On top of that, you could then read the documents and have sentiment analysis or named entity recognition done on them; the use cases on top of that could be endless.
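
As a rough illustration of the Textract step described here, a minimal sketch of pulling key-value pairs out of a document might look like the following. The bucket and document key are hypothetical, and the real product added rules-based templates, metadata mapping, and Kendra indexing on top of this.

```python
# Minimal sketch: extract key-value pairs from a single-page document with Amazon Textract.
# (Multi-page PDFs would go through the asynchronous start_document_analysis API instead.)
import boto3

textract = boto3.client("textract")

resp = textract.analyze_document(
    Document={"S3Object": {"Bucket": "documents-bucket", "Name": "invoices/acme-2023-06.pdf"}},
    FeatureTypes=["FORMS", "TABLES"],
)

blocks = {b["Id"]: b for b in resp["Blocks"]}

def block_text(block):
    """Concatenate the child WORD blocks of a KEY or VALUE block."""
    words = []
    for rel in block.get("Relationships", []):
        if rel["Type"] == "CHILD":
            words += [blocks[i]["Text"] for i in rel["Ids"] if blocks[i]["BlockType"] == "WORD"]
    return " ".join(words)

# Walk KEY_VALUE_SET blocks and print key -> value pairs.
for b in resp["Blocks"]:
    if b["BlockType"] == "KEY_VALUE_SET" and "KEY" in b.get("EntityTypes", []):
        value_ids = [i for rel in b.get("Relationships", []) if rel["Type"] == "VALUE" for i in rel["Ids"]]
        value_text = " ".join(block_text(blocks[i]) for i in value_ids)
        print(block_text(b), "->", value_text)
```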

Now let's take a minute and see how we did the architecture for the unstructured data in the same data pipeline architecture. If you look to your left, you can see the PDFs, the news content, the voice (which is earnings calls), and then the social media content, which could be Twitter, Facebook, Instagram, and so on. We used out-of-the-box services, with a little bit of custom coding and business rules engines: Textract for a lot of the PDFs and OCR, Transcribe for a lot of the voice calls, and managed services such as managed Kafka (Amazon MSK) and Kinesis for a lot of the streaming content. Below that, you can see that we have structured data coming from the traditional sources, the Bloombergs, the Markits, the S&Ps, either through files or through APIs, and we could process all of that as well. We landed all of that in the data lake, made the unstructured content structured through the parsing, and that enabled us to join it with all the traditional data that we have. And lo and behold, now we could have additional insights and more data available to whoever needed it, mostly the investment teams. We built the conformed layers on the cloud and then made them available either through Athena or through Redshift, depending on the use case, and we also made them available through the sandboxes that most of our investment teams use.

Just to take a minute to talk about the document parsing product with Athena: it's a very simple product, using Textract, using Lambda and EventBridge for orchestration, and then putting the output, that metadata and that template, in Amazon Aurora, again cloud native. Then, using that template data and the parsed-out data, we store it in Athena to be used as a landing zone for querying from the UI. The UI, again, was redone; we did not use WPF this time, we used React and AG Grid, and we were able to query all of the cloud-native databases we had used for ingestion of this unstructured and structured content.
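
To sketch the event-driven orchestration piece, here is a minimal, assumed example of a Lambda handler triggered by an EventBridge rule for S3 "Object Created" events, which kicks off an asynchronous Textract analysis for the newly landed document. The bucket contents and the downstream handling of the job ID (e.g., persisting it in Aurora) are hypothetical.

```python
# Minimal sketch: Lambda handler for an EventBridge S3 "Object Created" event that
# starts an asynchronous Textract document-analysis job for the new document.
import boto3

textract = boto3.client("textract")

def handler(event, context):
    # EventBridge S3 notifications carry the bucket and key under event["detail"].
    bucket = event["detail"]["bucket"]["name"]
    key = event["detail"]["object"]["key"]

    resp = textract.start_document_analysis(
        DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}},
        FeatureTypes=["FORMS", "TABLES"],
    )
    # The JobId would be persisted (e.g. in Aurora) so a downstream step can pick up
    # the parsed output once Textract signals completion.
    return {"jobId": resp["JobId"], "document": f"s3://{bucket}/{key}"}
```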

So this was a simple use case that was actually used and adopted by a loan processing team, a transaction team, and a trade operations team, with immediate savings from removing BPO processing and manual entry, and also ensuring that we could get better accuracy: 99% accuracy, more than manual entry would have gotten us. It's a huge uplift, and we could actually quantify the cost savings with the volume of documents we were processing daily. We extended this beyond PDFs to bankruptcy documents, to invoices, and even to rent rolls where possible.

Let's talk a bit about the reporting and BI use cases that we enabled with Amazon Redshift Serverless. We are using it mostly as an enterprise reporting platform, so we had to redo the UI for our enterprise reporting; we moved from WPF, again, to the React stack with AG Grid. We used holdings as the first MVP. The reason is that 80% of our use cases and our ad hoc query and reporting workloads were off holdings, so that was the first thing we had to move. APIs were created off the holdings data. But the additional impact here was that it wasn't just current-year holdings: we could take all historical holdings onto the cloud, which we were not able to do before. That enabled the next use case, which is historical RFPs.

In the asset management business, before the investors come in, they always ask questions, so the RFPs were always about fund performance since inception. There are a lot of people involved in gathering these queries, getting the data together, and responding to these RFP requests. By enabling all of that data in one place, including historical, the RFP querying became very simple: a single query could give you both current and historical data, and give you data since inception of the fund. So that was huge. We had time series views that could be created very quickly now.

So let's talk a bit about our reporting architecture. On the front end, we are using Amazon ECS with Fargate for hosting our front-end application, which is built on React and AG Grid. We also kept embedded Power BI as an option because we had so many legacy dashboards, 500 of them, that needed to be migrated over. The data API over Redshift caches data in S3 buckets, especially if the payload is big, and we use signed URLs for retrieving that. We are still a Microsoft shop, so we federate single sign-on via Azure AD and encapsulate entitlements in role-based access control frameworks.
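
For the "cache large payloads in S3 and hand back a signed URL" pattern mentioned here, a minimal sketch might look like this. The cache bucket name, key layout, and payload shape are hypothetical.

```python
# Minimal sketch: stage a large API result set in S3 and return a time-limited presigned URL.
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "reporting-api-cache"          # hypothetical cache bucket

def stage_large_payload(request_id: str, rows: list) -> str:
    """Write the result set to S3 and return a presigned GET URL valid for 15 minutes."""
    key = f"payloads/{request_id}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(rows).encode("utf-8"))
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=900,
    )

url = stage_large_payload("req-123", [{"fund": "FUND_A", "market_value": 1000000}])
print(url)
```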

So next you ask me, what have we learned on this modernization journey? The first thing we've learned is to curate your use cases for impact and effort: choose high-impact, medium-effort use cases. Examples of that would be the index use case or the document parsing use case. The second thing: assess and identify areas where your on-prem infrastructure and applications do not suffice. Again, going back to the examples I gave you, index constituent processing was one example, and the risk analytics scaling of the compute was another. We couldn't have done those on prem.

Next, identify the best way to accelerate modernization by using existing cloud services. AWS has a lot of cloud services. We used Transcribe and Textract out of the box, but we had to code around them and customize them; they were receptive to some of the changes we suggested, and we got those edits done so we could build products around them. So you can accelerate some of your cloud journey that way.

Identify areas where you have compute and storage issues on prem; that was another example I gave you, on index and on the historical data. Identify use cases with high TCO and low data availability; go back and think about the SLA improvement use case, a perfect example. Identify use cases where SLAs can be improved through parallel processing; think about the risk analytics use case, where the compute went from four hours for a single scenario to 20 minutes for multiple scenarios, an example of parallel processing. And evaluate a lift and shift.

Take a minute to think about the number of stored procedures: we had 200,000 of them. There was no way we were lifting and shifting that, because, number one, it would have been extremely expensive, and number two, a lot of the code was not even worth lifting and shifting. So we had to evaluate what we were re-engineering or reverse engineering and what we were keeping as is. In a lot of cases, we created materialized views on Redshift and Athena and ported some of the logic over there where it was absolutely essential, but we had to do a reconciliation to ensure we were not breaking anything and the data was actually correct.

Reskilling: this is a continuous learning process, and it needs to happen right from the start of a modernization, continuously; I don't think there's an end to it, and the more the better. Set up best practices: the best practices are around architecture, coding practices, and DevOps, and it's highly essential to set them up right, along with how you mitigate risk. Last but not least, adoption. Like I was telling Thor, unlike baseball, "build it and they will come" doesn't happen in data modernization. You have to be very, very particular and prescriptive about adoption. Push for adoption in your business areas, and the way you do it is to choose use cases that are beneficial to them and deliver business value immediately.

So what's next? Next is optimizing on the cloud. In the case of the risk analytics use case I was talking about, Lambda can be expensive, so we went and looked at whether there was an alternate way of doing some of those scenario loads. There was: we went to ECS Fargate, using Spot capacity, to see if we could reduce some of the cost. We are always looking at how to turn down instances when they're not in use, also in lower environments. We also look at which licenses can be retired, which licenses can be ported over, and what we can do in terms of Windows Server licenses versus Linux. Then, prioritize your modernization targets.

This is where you have to have a conversation with your business users to identify what the priority modernization targets are, and then work with the AWS teams and solutions architects to find the right optimal architecture for them. In a lot of cases, you will be reusing what you've already built; in some cases, you may have to refactor.

I'll hand it over to Thor. Thank you. Thank you so much. I absolutely love that story. The reason why I love that story is that it's not just about cost optimization, right? It's about getting better SLAs, it's about bringing features to your customers sooner, giving your customers what they actually need, and saving money. It's amazing.

So you're probably thinking to yourself: OK, great, I heard the story, I love the story, but how do I do that, right? So I've got some good news for you.

First thing: what my solutions architecture team does here at AWS is help customers just like you with these sorts of problems. We talk modernization of Microsoft workloads every single day, and we talk about it at a lot of different levels. We have immersion days, where your people can get hands-on training on the different technologies that live inside of AWS. We're talking about EBAs, or experience-based accelerators, where you pick a piece of architecture and run it through a very fast, hackathon-style process to see what you can actually get done. We're talking about POCs and MVPs, and also architecture reviews.

Also, from a cost optimization perspective, we released the Microsoft on AWS cost optimization initiative this year, where you can cost-optimize some of your workloads, and maybe take some of that money and put it towards modernization, right?

Secondly, you're not in it alone, right? Your engineers might be able to do this, they might be able to get skilled up on it, but they might be on other projects; you might already have them slated for other things. So we have the amazing AWS Partner Network that can help you, along with AWS Professional Services.

Finally, there are cost programs where we can actually help you, incentivize you, to modernize. There are some internal programs inside of AWS to help you with that modernization and bear a little bit of that cost; we're more than happy to talk about those.

Finally, you know, there is no one journey for modernization, and certainly not for modernization of Microsoft technologies on AWS. We see customers every day moving from .NET Framework over to a newer version of .NET that runs on Linux, or from SQL Server to a purpose-built database like DynamoDB or Aurora, and also using containerization, running ECS and EKS and Fargate, along with serverless. So these are a few of the customers that have had success with Microsoft-specific workloads and modernizing them on AWS.

I wanna thank everyone for being here and being such a great audience. We'll take questions now, but I really, really appreciate it and I hope you have a wonderful re:Invent.
