Welcome to re:Invent 2023. Super excited to see you here.
How many of you in the audience have experience with mainframes or mainframe modernization? Awesome. There are so many of us. I'm really excited for all of us who are used to mainframes and mainframe modernizations.
We know that mainframe modernizations are very complex to execute, take a lot of time, are very expensive, and often end in failure. So I'm super excited to share with you how AWS is helping our customers reduce the time, cost, and risk of mainframe modernizations.
We are also super excited to welcome Casey O'Connor, who is a Senior Director of Technology Strategy and Modernization at the Cigna Group, and Thiago Moraes, who is the Head of Cloud Platform at Itaú. Both of them are going to share their experiences of modernizing complex mainframes using AWS cloud technologies and AWS Mainframe Modernization.
Let's get started.
So mainframes have been around for several decades. They run 50 billion plus transactions every day, and the world's top banks, the world's top insurers, the largest public sector enterprises, retail companies, and travel companies are using mainframes to drive core enterprise transaction processing.
With the advent of distributed computing a couple of decades ago, a lot of these enterprises started moving user interface and customer-facing applications to distributed environments and later to AWS and other clouds. However, they were lacking a scalable platform for the core. With the AWS cloud now proven for large-scale, mission-critical applications, we are finding that enterprises are starting to modernize their mainframe applications and move them to AWS.
In doing so, we find that our customers are able to drive several benefits for themselves. First of all is cost savings: by modernizing legacy applications, customers reduce the cost of proprietary hardware and software. They're also able to move away from peak-load provisioning to a scalable platform like AWS. And they're able to use modern technologies to create new services and solutions with agility.
Customers are able to leverage all the modern technologies in AWS, such as AI and ML, to drive both business insights and new customer experiences. They're able to use DevOps to drive down the time it takes to create new products and services and reduce those development cycles. They're able to use integrated tool sets from a single platform, taking the time and cost out of procuring those tool sets.
There are no limits to innovation at this point. You are able to drive a lot of value out of mainframe data by bringing it into AWS on a real-time basis and building on top of it, and you are able to use the technologies we offer to create cloud-native solutions and challenge digital-native competitors in the market.
You get the resilience of the AWS cloud: you're able to monitor all your mainframe applications and your modernized applications on AWS simultaneously, and derive the benefits of cloud security and resilience.
So before we go forward, we have a polling question. I'd love to hear from the audience on if and when you are planning to modernize your mainframes. Could you use the QR code and give us some responses?
Oh, good. I think we see the results coming in. Could we see the results? Oh, that's amazing. Incremental migrations to cloud is the largest, then exiting the mainframe fully, and all of the above with modernization. Super amazing, thanks for that.
If we get back to the slides: thanks for sharing that. We find that a lot of customers who are migrating or modernizing their applications in the cloud are leveraging multiple patterns very quickly.
The pattern customers are leveraging most immediately is replication of mainframe data onto the AWS cloud on a real-time basis, to build new applications, customer interfaces, and business insights on top.
We also see a lot of customers, as you said, exit the databases and exit the mainframe altogether by replicating and emulating the mainframe in the cloud, using technologies such as Micro Focus, UniKix, et cetera.
We are also seeing a lot of customers starting to refactor their applications. We now have automated refactoring tools such as Blu Age and mLogica, which allow customers to convert COBOL, JCL, Easytrieve, and Assembler into open technologies that run very well on the cloud.
We also have customers who are replacing business applications with repurchased commercial off-the-shelf solutions running natively on AWS; examples can be a core banking solution, Infor on AWS, Guidewire on AWS, et cetera.
We see a lot of customers rehosting when they need a rapid exit from contracts that might be very inflexible. A lot of customers are using AWS partners like Ensono and Kyndryl to bring their mainframes as close to the cloud as possible, to reduce latency and almost co-locate these applications.
We also see, of course, retention and retirement of applications, because the products running on mainframe applications have a life cycle. As the product life cycle winds down, the mainframe application gets retained and then eventually retired.
Finally, we are seeing customers starting to reimagine their applications. If your enterprise depends on certain applications that give you competitive advantage in the market, you want to start thinking about deploying them very differently in the cloud.
So we are seeing limited rewrites and rearchitecture: for example, thinking about batch processing, parallelizing the batch and rearchitecting it in a very different way in the cloud.
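As a rough illustration of that batch-parallelization idea (a minimal sketch, not any specific AWS service; the record fields and shard count are hypothetical), a serial nightly batch can be partitioned by a stable key so the shards process concurrently:

```python
from concurrent.futures import ThreadPoolExecutor

def split_batch(records, num_partitions):
    """Partition a batch by a stable key so each shard can run independently."""
    shards = [[] for _ in range(num_partitions)]
    for rec in records:
        # Hash on the business key so related records stay in one shard.
        shards[hash(rec["account_id"]) % num_partitions].append(rec)
    return shards

def run_parallel_batch(records, process_shard, num_partitions=4):
    """Run each shard concurrently instead of as one serial nightly job."""
    shards = split_batch(records, num_partitions)
    with ThreadPoolExecutor(max_workers=num_partitions) as pool:
        results = list(pool.map(process_shard, shards))
    # Flatten the per-shard results back into a single output list.
    return [item for shard_result in results for item in shard_result]
```

The point is only that once the batch is split on a key with no cross-shard dependencies, the cloud lets you fan the shards out across as many workers as the window requires.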
Those are the other patterns people are deploying. Just to pause for a moment and get a little feedback from the audience: there's a QR code here with a polling question. Could you share with us what kind of patterns you are deploying?
Oh, good. Can we see the responses? That's amazing. We can see replatform at 31%, rehost at 23%, refactor at 25%, rearchitect at 15%, and augment and replace at 6%. That actually aligns very, very well with our experience.
If we can go back to the slides: a lot of times customers ask me, "You can do this with smaller mainframes, but my mainframe is super complex, it's very large, I don't think we can do it." And my answer to them is that you'd be surprised. We are seeing a lot of large customers with large mainframes embark on this journey with AWS.
So we've got a bank and Cigna here today, and we also have companies like State Farm, Fidelity Investments, and more modernizing their very complex mainframe estates with AWS.
What we are trying to do as AWS is to help our customers reduce the time, cost, and risk of these modernizations. We built a dedicated service called AWS Mainframe Modernization. It is essentially a platform service that integrates best-in-class tool sets from the market with the AWS cloud, in order to provide a range of patterns customers can use. In our experience, no single mainframe modernization can deploy only one pattern; customers need a choice, and they need to be able to use multiple patterns to execute these journeys.
On the left-hand side, you will see that we've got technologies that support strategic migration, such as replatform and automated refactor. These are tool-based accelerators for automating modernizations, which reduce the time of the journey. Due to automated refactoring, we can reduce the risk as well, and we provide automated testing services to drive those modernizations.
In the same process, we also support manual rewrites as well as repurchase of applications, helping you integrate third-party solutions onto AWS while you modernize your legacy code. On the right-hand side, you will see the augmentation and integration patterns.
These are the patterns we offer our customers to allow connectivity between the mainframe and the AWS cloud, because these modernizations will take some time. So what we are doing is first helping our customers offload their MIPS, perhaps by bringing backup and archival functions into AWS, or by reducing the read-only MIPS usually consumed by chatty applications on the mainframe and bringing those onto the AWS cloud through tool sets such as Precisely, which offer change data capture and real-time replication into AWS.
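To make the change-data-capture side of that augmentation pattern concrete, here is a minimal sketch of the consuming end (illustrative only; this is not Precisely's actual API, and the event shape and field names are assumptions):

```python
def apply_change_event(replica, event):
    """Apply one CDC event (insert/update/delete) to a cloud-side replica.

    `replica` is a dict keyed by primary key, standing in for the real
    cloud datastore; `event` mirrors a typical CDC record shape."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        replica[key] = event["row"]          # upsert the latest row image
    elif op == "delete":
        replica.pop(key, None)               # tolerate out-of-order deletes
    else:
        raise ValueError(f"unknown CDC operation: {op}")
    return replica
```

Read-only consumers then query the replica instead of the mainframe, which is exactly how the read-only MIPS get offloaded while the system of record stays put.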
AWS Mainframe Modernization is a fully managed runtime offered to our customers, pre-integrated with the cloud. It's also a reference architecture. It offers a sandbox and the services you need, fully integrated with DevOps and CI/CD tool sets, including third-party tool sets like Git and Jenkins.
It integrates third-party tool sets and offers them as a consumption-based service rather than an upfront license, and it allows you to scale up and down with your usage.
I also want to share that yesterday we made a series of announcements, because our intention is to offer customers maximum choice. In addition to replatform, which is powered by Micro Focus, refactor, which is powered by Blu Age, and augmentation patterns powered by Precisely and Model9, yesterday we announced file transfer with BMC and replatforming with NTT DATA's UniKix, to expand the number of tool sets customers can use and address specific problems such as IMS.
We added a resource manager from Kyndryl to provide a managed services layer on top of M2, the AWS Mainframe Modernization service. We've added mLogica to tackle the challenges of Assembler and Easytrieve conversions. And we are doing data replication for AS/400s with Precisely; we were already doing it for IBM Z, and we've added AS/400s as well.
We are also supporting our customers in midrange modernizations, such as AS/400 and SPARC, so we've added Stromasys to support those journeys, and we continue to evolve. We launched an application testing service: 70% of the cost of these projects actually goes into application testing, so that's a key area where we are investing. And we've also launched artificial intelligence demos.
Anybody who's interested in those, please contact me outside the session and we can share what we've got.
Very quickly: in addition to the AWS Mainframe Modernization service, we are also supporting our customers by providing dedicated mainframe teams with knowledge of these systems, to enable customers to adopt and deploy them much faster. Not only do we have engineering teams, we also have solutions architects who collect customer feedback and requests and take them back to our engineering teams, so we can add more components to the AWS Mainframe Modernization service.
We built a Mainframe Competency: we've curated a large number of partners, and we've created our own Professional Services practice which supports these partners in delivering these large-scale migrations. We've created a partner network of 12 GSIs and eight technology partners, so that we can leverage that expertise, curate that knowledge, and provide it back to customers and other partners who are eager to learn, through our Skill Builder program on AWS.
And of course, finally, we support our customers with investments and assets in the mobilize and migrate stages through our MAP program.
Now I would like to invite Casey from the Cigna Group to tell us about his experiences.
Thank you, Maddie. My name is Casey O'Connor. I am the Senior Director of Tech Strategy and Modernization for Evernorth Health Services, which is a subsidiary of the Cigna Group. Evernorth offers a portfolio of health services, care coordination, and point solutions to help make health care more affordable and more accessible for people. It includes pharmacy benefit services, a specialty pharmacy, and care services. I joined about two years ago, moving through a series of cloud roles, and earlier this year I joined this project, which was already underway.
So in looking at everything, I came in with some opinions, based on my software engineering and solution architecture background from financial services at GE Capital and Wells Fargo, where we had done a rehost at GE Capital and some refactoring to Java when I was at Wells Fargo.
The benefit I had coming into the organization is that I knew mainframe modernization was possible. So I approached the problem as "how do we prove that we can get this to work here," as opposed to "how do we prove whether it works?"
The objectives we looked at as we came into mainframe modernization were these. We have three LPARs running today on our mainframe, and we were seeing excessive growth, over 10%, somewhere around 13% year over year. We had workloads being driven by digital adoption and customer service; as we grew our business, our mainframe costs were growing as well.
So we wanted to look at ways of offloading, especially the read-only transactions, so that the workloads that were harder to migrate had room to grow.
The other part was that as we're improving all of our digital assets, we're moving them to the cloud, and we also wanted the ability to decouple some of the relationships back to our on-premises data.
In the end, we wanted a platform that was secure, scalable, and gave us the agility to scale up and down as our workloads changed throughout the day and through the seasonality we have in our business.
So what did our journey look like? Again, I joined about a year ago. We had broken the assessment up; I came in after the assessment was done and we had surveyed a lot of the applications. Coming in and looking at it, we approached the problem by breaking it down into waves.
So we worked with AWS Professional Services, the Mainframe Modernization product teams, Blu Age, and Accenture, working through some of the Micro Focus conversion and pulling the whole solution together. For wave zero, we set up a foundational platform. We didn't want to build one data platform for Blu Age and a separate data platform for what we were building on Micro Focus, so we looked at where the similarities were, where we could combine efforts, and set up that foundational infrastructure so we could get to compliance, reliability, and operational readiness once and share that across the overall platform.
There were some proof points in how we selected the applications we would target. We went after our highest-MIPS-utilization services, the ones driving the most load, first. These were also the most complicated services we had across our entire ecosystem: services cutting across multiple mainframe applications with common modules and everything else. We were able to prove the Blu Age refactor solution's functional equivalence with 800,000 lines of code across 20-some-odd applications, and the replatform solution across 25 to 27 applications and 3.2 million lines of code. So 4 million lines of code was in our first wave. As we went through it, we recognized that as part of that learning process we'd have to initiate change data capture. We had tried some alternate approaches: could we copy the data off and migrate it with its various information life cycles? We recognized quickly that change data capture was the way to go.
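The wave-selection logic described above, targeting the highest-MIPS services first until the wave is full, can be sketched as a simple greedy pass (service names and numbers are hypothetical; the real selection also weighed complexity and shared modules):

```python
def pick_wave(services, mips_budget):
    """Select the highest-MIPS services first until the wave budget is filled.

    `services` is a list of {"name": str, "mips": int}; returns the chosen
    names and the total MIPS the wave would offload."""
    wave, used = [], 0
    for svc in sorted(services, key=lambda s: s["mips"], reverse=True):
        if used + svc["mips"] <= mips_budget:
            wave.append(svc["name"])
            used += svc["mips"]
    return wave, used
```

Greedy-by-load is a natural first cut because the goal was offloading MIPS, not minimizing application count; later waves can reorder on other factors.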
Putting all that through wave one, we were able to verify the financial forecasts we had and make sure we were approaching the problem right and that our sizings were right; we got some nice surprises there. For wave two, through the lessons of wave one, we realized we wanted to focus more on refactoring as we went forward, looking more at Blu Age and more Java. That's also when we focused on winning the hearts and minds of our development community and recognizing all of the change that we needed. Waves one and two are still in flight as of right now. Wave three and beyond is where we're focused for next year and going forward, and that's going to be more acceleration of reimagine opportunities: asking what common code we can pull in, leverage, and expand on. We'll still be looking at MIPS as a primary driver, but we may have other factors at play now that help us better prioritize as we go.
On our decision process: when we looked at wave one, we saw that the refactoring of the COBOL applications to Java and Spring on AWS worked; we were able to get through all of that. We saw the same with the COBOL replatform. We went into this thinking that the replatform would save us lots of time, and we really didn't see that much of a variation between moving the COBOL code and moving to Java. That was a key learning for us. With both of them we were able to get functional equivalence and performance equivalence, and we were able to see the path forward through the extensive data replication we had to do. We initially tried going back to the mainframe for data: don't do that. Another learning: when we compared runtime costs, the Blu Age runtime was very cost effective when it comes to taking the code and running it, and running on open source technology made us very comfortable. That fed into how we were building the new applications with Spring Boot and Java, so we saw a path forward to leverage more of those assets on a go-forward basis.
On the development side, the developers needed to learn new frameworks and things like that. What we found is that our partners are actively out there training the resources on our account today in Java and getting them certified through the AWS Mainframe Modernization competency, so that they have Blu Age level one and level two certifications. They're actively working on getting us that talent in teams who are already subject matter experts. And as we've worked through our Blu Age process, we still uncover things: we have code that's 40 years old in some cases, and the way it's been maintained over time tests the limits of the Spring framework and even Java. We've had to work around that and work with the product team, and they've been very responsive as we've gone to them for enhancements.
On our platform, from a developer tool standpoint, this was very important to me. I've been doing DevOps work since CruiseControl and some of the earliest technology in that field, and it was important to me that the mainframe tools we were migrating had that approach from the very beginning. We did a bake-off against our enterprise dev pipelines and saw that we needed an exception for the mainframe modernization code: it was big, it was unwieldy, and it had some challenges on our on-premises infrastructure. So we went with AWS CodePipeline, CodeBuild, and CodeDeploy, and are using those tools, along with some of the managed container offerings, to help us build and move forward.
Our VSAM data pipeline was another challenge as we went through. We're using SFTP off the mainframe to AWS Transfer Family, landing the files on EFS. Then, depending on whether they're going to Blu Age or to our Micro Focus instances, we split those files and land them on EBS volumes for our replatform side or in PostgreSQL for our refactor side. For security we have web application firewalls in place, we're using API gateways, and we have load balancers going back to fleets of instances. On the Micro Focus side we scale them based on a schedule; on the Blu Age side, the runtime scales on demand. We really like the scalability features because they let us add and subtract resources throughout the day as our business needs change. We have hot spots for this real-time traffic, as I'm sure most of you do, and this allows us to control costs effectively.
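The split step in that pipeline can be pictured with a toy routing function (a sketch only; the dataset names and path layout are made up, and the real pipeline reacts to files landing on EFS via Transfer Family):

```python
def route_landed_file(filename, refactored_datasets):
    """Decide where a VSAM extract landed on EFS should go next.

    Datasets already moved to the Blu Age (refactor) side load into the
    relational store; everything else goes to the replatform volumes."""
    # e.g. "/mnt/efs/inbound/CUSTMAST.20231128.dat" -> dataset "CUSTMAST"
    dataset = filename.rsplit("/", 1)[-1].split(".")[0].upper()
    if dataset in refactored_datasets:
        return ("postgres", dataset)   # refactor side: relational load
    return ("ebs", dataset)            # replatform side: file-based volume
```

Keeping the routing table explicit is what lets both runtimes coexist against one landing zone while datasets cut over one at a time.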
Then we have our databases. As I said, we're doing a lot of the CDC replication into them, and that's a shared database across each of the solutions.
On tool set lessons: we've learned a lot. In my prior experience we did big bang, and when you do big bang you don't think about the incremental workflows as much. So we've spent a lot of time looking at how things work with refactor, with replatform, with the hearts and minds, and how we look at the code better so we can accelerate. One of the things we needed to work out: as things change, how do we keep the startup transactions that feed our CICS regions in sync with what we're migrating out to AWS?
And because our code bases are a mix of read and write transactions, how do we really get to understand that? That's where our test cases and dynamic code analysis have really helped us narrow down the scope of what's actually used. I was having a conversation earlier about a prior project where we went from 60 million lines of source code down to 16 million. We're only touching a fraction of the code in our environment with the 4 million lines we're modernizing now, but by using techniques like this, we're trying to narrow the scope of what we actually have to touch.
And these are my lessons learned. Again, I came into this as it was going, and we had it described as a pilot. Mindset matters: the POC mindset is very experimental, almost a science project, versus the pilot mindset of "we will run this in production." Having the attitude that it's possible and you can move forward, and making decisions around how you land this in production successfully, changes how you approach certain problems. Bringing that to the team, I think, was essential.
Migrate data in parallel. We should have started our data migration journey with the future state in mind from the beginning. We started on CDC too late, thinking we could get by with other methods, and that was probably informed by the POC-versus-pilot mindset. So move your data in parallel, up front, and consider the data first.
Developer impact. Here's where messaging across the organization and all of your change acceleration processes matter: you can't just rely on top-down messages. We were seeing things get filtered, so we had to set up office hours, communicate in town halls, and get directly to the people touching the code to help them understand the impact of the change.
And then operational excellence: no shortcuts here. We did extensive failure mode effects analysis, extensive game days, extensive monitoring, just to make sure the resiliency of the new platform could match the expectations set by 40 years of lessons learned on the mainframe.
On outcomes: by next year, we expect to have 25% of our read-only workflows moved off from a traffic migration perspective, allowing our remaining applications room to grow. On risk-based failure modes, as we discover issues and continue to improve the platform, we will improve there. And we're looking to accelerate those reimagine opportunities: we look at the areas of our business that have years and years of business knowledge built into the code, take those, use the Blu Age technology to bring them to AWS, and then wrap them with the other business logic we want, to change how our business works and operates. I appreciate your time. Thiago, thank you.
Hello everyone. Is it working? OK. Hello everyone. My name is Thiago Moraes. I'm the Head of Cloud Platform at Itaú, and let's talk a little bit about our journey to the cloud. We are an almost 100-year-old company. We provide full-service banking for our customers, with operations in Latin America, Europe, and the US. In Latin America we are evaluated as the most valued brand, at $8.7 billion in valuation, and we have 97,000 employees overseas and in Brazil. Out of those 97,000 employees, 12,000 are engineers working on a day-to-day basis with the mainframe, AWS, or our on-premises platforms.
So how did we start our journey? We started in 2017 with a private cloud. At that moment in Brazil, we didn't have authorization from the central bank to operate in the public cloud, so we took the decision to iterate on top of the private cloud, see how our teams would adapt to the platforms, and understand what skills we should grow internally to support this transformation. In 2018, the central bank in Brazil allowed us to move production workloads to the AWS cloud, so we started accelerating, moving and building solutions in the cloud. But first, in partnership with AWS, we understood that we needed to create a cloud operating model that would support us in accelerating the journey.
Here's one key takeaway: if you're moving to the cloud, it's imperative to create your cloud operating model. It's important because it's a change management process: you change not only the technology but also how you organize your company, to provide the right experience and take advantage of this transformation within the company. And in 2023, right now, we are evolving that to think about the platform. Casey mentioned how to build the platform; we not only think about it as a platform, but also apply a product management perspective on top of what we are building. Internally we are calling this a product operating model.
Some of the results we have so far from this modernization journey: we have 5,900 AWS-certified employees through this journey, which is one of the largest certification programs in the world. We also reduced incidents on our platform by 98% since we started moving our workloads to the cloud, reducing the impact on our customers when we make changes or publish changes into our production environment. We also increased the number of changes 13 times compared to when we started the journey; comparing 2018 to 2023, we had a huge increase in velocity deploying new features to the platform. And last, we are now at 50% of our modernization journey for our business services.
So what is the role of the mainframe here? Some big numbers about our mainframe platform: right now we have 1,300 business services that run on top of the mainframe. Here, business services are journeys or services we provide to our end users, comprising a set of multiple workloads to deliver that journey. We use three transaction monitors: CICS, IMS, and GRB, which is our homegrown transaction monitor. We started that about 40 years ago, I believe in the late eighties.
We still have this running, and we process 9 billion transactions per day on our platform. When we decided on our modernization strategy, we had two major objectives:
- Think about business service quality: how to think about quality from the beginning, how to provide a good experience to our users and to our business, and how to change the competitiveness of the business as we go through this modernization.
- Reduce complexity as we go: how to build software while reducing the complexity we had accumulated as we grew on the mainframe.
We understood that we had a lot of complexity in how we operate and orchestrate. We wanted to reduce coupling, because we had those monoliths, and get faster time to market, from idea to value, so we could iterate quicker and provide value to our customers as we go.
There was also integration with the existing environment, with more than 40 years of code there. We needed to think about how to provide integration and coexistence as we go.
And also observability and analytics: it's imperative that you build that into your platform from the beginning, so you have the right visibility into what's going on once you go to production.
So how did we decide what to rewrite and what to replatform?
For the rewrite, we understood that if we were thinking about transforming the user experience, we needed to go with the rewrite approach. It's not only a matter of modernizing the platform but also modernizing the experience of our customers: how they interact with or consume our services through the mobile app or the web applications we provide.
Based on this, we decided that these should be prioritized by the business criticality and business impact they provide to our customers and our business.
The replatform approach would be for systems or platforms that had no functional changes planned in the mid or long term, and no plans to retire at that moment.
It was also about gaining scalability and replicable patterns as we go. As we modernize those systems, we understand there are certain patterns in that software that allow us to gain scale as we move.
But later, we understood that we needed to create some tooling to provide this integration more seamlessly: a platform for integrating inbound and outbound requests between the cloud and the mainframe, and between the mainframe and the cloud.
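One way to picture such an integration layer (a deliberately simplified sketch; the actual tooling is far richer and the names are hypothetical) is a routing facade that knows which business services have already cut over to the cloud:

```python
class CoexistenceRouter:
    """Route a business-service call to the cloud or the mainframe,
    depending on whether that service has been migrated yet."""

    def __init__(self, migrated):
        self.migrated = set(migrated)

    def route(self, service_name):
        """Return which backend should handle this service today."""
        return "cloud" if service_name in self.migrated else "mainframe"

    def cut_over(self, service_name):
        """Flip one service to the cloud once it passes equivalence tests."""
        self.migrated.add(service_name)
```

Centralizing that decision in one place is what makes incremental cut-over possible: callers never need to know where a service lives this week.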
For the rewrite approach, we thought about the foundation. It's imperative to think about how you build software in a way that streamlines setting up the foundation of your accounts.
So when you're going to AWS, think about how you create the pipelines to provision accounts in a scalable way. It's imperative to do that because once you start reaching momentum of creating those accounts or modernizing, it's imperative that you have the right foundation - plugging in security, observable, guard rails.
So you make sure that even with all the pipelines, sometimes you mess up. So we are engineers. So we mess up, we put some code that is not gonna might hurt the customer. So the guard rails are imperative to support you with that.
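One common form such a guardrail takes is a service control policy that denies actions outside approved regions, so a bad deploy can't land resources where they shouldn't be. The sketch below builds an SCP-style JSON document; the region list is made up, and a production SCP would also need exemptions for global services like IAM.

```python
# Illustrative guardrail sketch: generate an SCP-style policy document
# that denies all actions outside an approved-region list. The regions
# and policy shape are an example, not a recommendation.

import json

def region_guardrail(approved_regions: list[str]) -> dict:
    """Build a deny-outside-approved-regions policy document."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": approved_regions}
            },
        }],
    }

policy = region_guardrail(["us-east-1", "sa-east-1"])
print(json.dumps(policy, indent=2))
```

Baking a document like this into the account-provisioning pipeline is what makes the guardrail automatic rather than a manual review step.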
Think about coupling - how you design your architectures and microservices. It's imperative that you take advantage of the cloud in those designs. With microservices, think about scalability; scale breaks everything, so think about scale all the time.
While you're building your software and solutions, think data-driven. Think about how you provide extension points to export your data, and about building your data mesh platform.
And think about how you design to scale and to evolve - take the long-term view. Try as much as you can to avoid creating the legacy of the cloud, and think about how you can evolve the platform as you go.
Lessons learned here: think about prioritization. How you prioritize what to tackle first is hard - what are the big problems, the big rocks, you want to move first? It's imperative that you prioritize that.
Coexistence - it's not a flip of a switch where you move your data and workloads to the cloud and you're good. There are a lot of integrations that have been there for years, so you have to think about how you move those legacy integrations to your new platform before you can retire the old one.
Think about cost. I didn't put it here, but cost is very important. When your legacy application and your cloud applications coexist, you have to put that on paper and think about how you're going to reduce the bubble of running both systems for a certain period of time.
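The coexistence "bubble" lends itself to a back-of-the-envelope calculation: while both platforms run, you pay for both, so the length of the overlap window directly drives migration cost. All figures below are made up for illustration.

```python
# Back-of-the-envelope sketch of the coexistence "bubble": total spend
# while legacy and cloud platforms run side by side. Figures invented.

def bubble_cost(legacy_monthly: int, cloud_monthly: int,
                overlap_months: int) -> int:
    """Total spend on both platforms during the overlap window."""
    return (legacy_monthly + cloud_monthly) * overlap_months

# e.g. $200k/month legacy + $80k/month cloud over a 6-month overlap
print(bubble_cost(200_000, 80_000, 6))  # prints 1680000
```

Shortening the overlap (or migrating in slices that let legacy capacity shrink early) is what "reducing the bubble" means in practice.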
Some of the numbers we have from our rewrite strategy:
- We reached 55% modernization of our business services, and it's still going. We've moved most of those workloads to the cloud through this rewrite strategy.
- We got 27% faster feature releases. As we created and evolved our pipelines, we reached a point where teams have the maturity to evolve and iterate on top of the pipelines we have, so idea-to-value accelerated as we learned how to operate in the cloud.
- We measure business complexity points, and we reduced the hours per business complexity point by 4%.
Through this modernization, we created some tools to support our engineers in building good solutions on top of AWS:
- IU ConfiA - an internal tool we built on top of the Well-Architected Framework. Based on the questions AWS provides for the Well-Architected Framework, we created a set of guardrails and tooling that gives account owners and application owners visibility into how they are doing against it. This helped us evolve the platform and our solutions with cost, security, and availability in mind, and we can add new questions or checks for new applications to help engineers evolve their platforms.
- Cloud For You - pretty much a control plane to provision accounts on AWS. Here we provision accounts for dev, QA, and production, so we have a streamlined, governed approach to scaling accounts on AWS.
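A Well-Architected-style guardrail report like the one described above can be thought of as a set of named checks scored per pillar. The toy sketch below shows that shape; the checks, pillars, and account-configuration fields are hypothetical, loosely inspired by the tool rather than taken from it.

```python
# Toy sketch of a Well-Architected-style check runner: evaluate named
# predicates against an account's configuration and report a per-pillar
# pass rate. Checks and account shape are hypothetical.

from typing import Callable

# pillar -> list of (check name, predicate over account config)
CHECKS: dict[str, list[tuple[str, Callable[[dict], bool]]]] = {
    "security": [
        ("mfa_enforced", lambda acct: bool(acct.get("mfa"))),
        ("no_public_buckets", lambda acct: not acct.get("public_buckets")),
    ],
    "cost": [
        ("budget_alarm_set", lambda acct: bool(acct.get("budget_alarm"))),
    ],
}

def score_account(acct: dict) -> dict[str, float]:
    """Return the fraction of passing checks for each pillar."""
    return {
        pillar: sum(1 for _, check in checks if check(acct)) / len(checks)
        for pillar, checks in CHECKS.items()
    }

acct = {"mfa": True, "public_buckets": ["legacy-assets"], "budget_alarm": True}
print(score_account(acct))  # {'security': 0.5, 'cost': 1.0}
```

Surfacing scores like these per account is what gives application owners a concrete, comparable view of where to improve.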
For the replatform approach, we tried as much as we could to set up a landing zone for the new workloads we are moving to AWS, leveraging zero functional changes.
As much as possible, we don't change the code or the functional implementation that was on the mainframe. We just want to lift it and move it to the new platform.
We had an important concern with engineers - we didn't want to change their experience moving to this new way of working. So it's really important that you work through that.
And communication, as mentioned - how do you communicate and drive reskilling of the team if needed for this new strategy?
Think about data integration again - how you push data to your data mesh, and how you build your data strategy on top of this replatform. It's really important.
And again, think about scalability, observability, and self-service as much as you can. Try to leverage a DevOps approach to how you replatform.
Some results we achieved:
- Of our 1,300 business services, in 2021 we chose our home loan services as the workload to experiment with, and we moved that specific workload with the replatform strategy.
- We achieved 27.5% of this migration for the home loan service.
- From the decision to move with the replatform strategy, it took 12 months from planning to deployment in production for this home loan platform.
- Based on this replatform execution, we started identifying patterns we could document, understand, and replicate to other workloads. We looked for similar workloads where we could leverage the same experience and start scaling out the process of modernizing other applications.
- We found close to 10,000 MIPS matching those patterns, so we started assessing other workloads that might succeed with this replatform strategy.
We are leveraging a lot of the tooling provided by AWS and Micro Focus to accelerate this strategy.
Looking ahead:
- In 2022 we started the home loan pilot, which we executed successfully. We are taking the gains from that replatform and accelerating through 2023-2024, scaling it out to other workloads.
- For rewrite, we have reached 1,500 business services already successfully rewritten and migrated to AWS. We will continue to scale this strategy to other workloads using that matrix of what to rewrite versus replatform.
- We are also starting to experiment with generative AI to help the rewrite strategy - looking at legacy code and generating more modern code to help us scale.
Thank you!