AWS Mainframe Modernization Automated Refactor Transformation Center

Hi, nice to meet all of you. My name is Xavier Pro, and I lead the qualification team within Blu Age. We also build and deliver expertise to help our partners. Talking about modernization is like a life insurance: you make an investment, and you get the benefit by the end of the project.

Some organizations are not very mature: they are still looking at the tooling and do not care about the process yet. Some organizations are fully mature, with ongoing projects and successes. In the middle, we find organizations that are at the point of making their decision. The idea of the demonstration I am going to play today is to help you understand the process and methodology that apply to mainframe modernization.

So mainframe modernization powered by AWS Blu Age is automated refactoring. What we do is re-architect a legacy application by changing, at the technical level, the way its functions are implemented. We deliver an outcome that behaves exactly the same in terms of functionality, with equivalent performance in terms of non-functional requirements. Our goal is to simplify the transformation and streamline the process, so that people can get the result as fast as possible.

First, the transformation that is going to be successful is the one that delivers an application that works, and delivers it in a predictable manner, with the duration of the project and the budget kept under control. That is why we have assembled into a methodology and process the building blocks that are monitored to achieve such a transformation: by defining the tasks, and by making those tasks repeatable at scale, so that we keep everything under control and deliver smoothly in an automated manner. You can then concentrate on the unexpected parts of your project instead of simply diverging from your timeline.

So the first part of the transformation is to prepare everything. Preparing for the project means running a very rapid assessment of the code base of the application you want to transform, in order to identify the main findings: which technologies are involved, what the different building blocks of that application are, and what needs to be solutioned upfront before you consider modernizing that application. For that purpose, we have a code-base analysis tool, accessible from the AWS console, which relearns from the current implementation of that application the artifact dependencies. You can analyze exactly which artifacts are required to implement the functional scope of the transformation, then group them per affinity: functional affinity, because they belong to the same functional domains, or technical affinity, because they share the same resources. This produces work packages that you then order into a timeline, in order to build your test strategy and your detailed project timeline, which will be monitored using predefined dashboards.

The technologies we cover range from IBM z/OS to AS/400: we cover COBOL and RPG, among others, and we are currently announcing the process and the tooling to cover the Fujitsu platform, the GS21. So I am going to make a first demonstration here about the code base and what you can do. From the AWS console, you log in to the AWS Blu Insight tool, where you load your source code into a project in order to run the code-base analysis I was talking about.

So this code-base analysis gives you the metrics of your application on screen. You can then go into the assets and understand the different technologies that compose this application, and run some additional features to enrich the findings. Here we run a complexity analysis, so we can immediately spot the most complex artifacts or components of the application, which we may need to focus on for the purpose of testing the result.
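To make the idea concrete, here is a minimal sketch of that kind of complexity ranking: a crude decision-point count over COBOL-like source, used to put the riskiest artifacts first. The keyword list and the scoring are illustrative assumptions, not Blu Insight's actual metric.

```python
# Crude, hypothetical complexity score for a COBOL-like artifact:
# count decision points as a cyclomatic-style proxy (illustration only,
# not Blu Insight's actual metric).

DECISION_WORDS = ("IF ", "EVALUATE ", "PERFORM UNTIL", "WHEN ")

def complexity(source):
    """1 + number of decision points found in the artifact."""
    upper = source.upper()
    return 1 + sum(upper.count(word) for word in DECISION_WORDS)

def rank_by_complexity(artifacts):
    """Most complex artifacts first, to focus the testing effort."""
    return sorted(artifacts, key=lambda name: complexity(artifacts[name]),
                  reverse=True)

artifacts = {
    "SIMPLE":  "MOVE A TO B.",
    "COMPLEX": "IF A > B EVALUATE X WHEN 1 PERFORM UNTIL DONE.",
}
print(rank_by_complexity(artifacts))  # ['COMPLEX', 'SIMPLE']
```

A real analyzer would of course parse the source rather than count keywords, but the output is the same kind of ranked list the dashboard shows.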

Those predefined dashboards are interactive dashboards that follow you all along the project, monitoring progress in real time and showing failures and successes, so that you can see directly where you stand without spending your time in Excel spreadsheets or PowerPoint. The complexity analysis graph lets you identify complex programs, which you can open from the workspace window to browse the source code and better understand why they were flagged as complex.

The dependency graph is very important because it can be queried. You can query the graph to identify the files needed to capture the data, because they are part of an input or output process. You can also identify the elements missing from the code base you received, because you want to go back to your customer and say: I am not complete, I do not have everything in my hands to do the transformation. So before you start, you do a kind of sanity check, and you get an exact picture of what you are going to do.
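That sanity check can be sketched in a few lines: compare every artifact referenced in the relearned dependency graph against the inventory of artifacts actually delivered. The data model and names below are illustrative assumptions, not Blu Insight's internal representation.

```python
# Hypothetical sanity check: list every artifact that is referenced in the
# dependency graph but absent from the delivered code base.

def missing_artifacts(dependencies, delivered):
    """Artifacts referenced somewhere in the graph but not delivered."""
    referenced = set()
    for deps in dependencies.values():
        referenced.update(deps)
    return sorted(referenced - delivered)

# Toy code base: CUST01 calls CUST02 and copies CUSTREC; CUSTREC is missing.
deps = {
    "CUST01": {"CUST02", "CUSTREC"},
    "CUST02": {"DB2IO"},
}
delivered = {"CUST01", "CUST02", "DB2IO"}
print(missing_artifacts(deps, delivered))  # ['CUSTREC']
```

The resulting list is exactly what you would bring back to the customer before starting the transformation.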

Some of the queries let you identify the entry points. What we call an entry point is a way to trigger a feature in the application: it could be a batch job, it could be a screen, it could be an API. It is a way to trigger that feature and to identify it. From this graph, we can create subgraphs in order to identify a subset of the scope that will be a candidate for a PoC or for a calibration phase before we go to the full migration.

So here we pick a screen definition and build a subgraph. From that screen definition, we ask Blu Insight to give us all the artifacts required to implement and run the screen as a vertical slice of the application, including the data accessed by this screen and how they are accessed. Immediately, we can identify the artifacts that are required, and in case some are missing, we can go back to the customer again.
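The vertical-slice extraction is essentially a graph traversal: starting from one entry point (here, a screen), collect everything it transitively depends on. A minimal sketch, with illustrative artifact names:

```python
# Hypothetical vertical-slice subgraph: walk the dependency graph from one
# entry point and collect every artifact needed to run it.

def vertical_slice(dependencies, entry):
    """All artifacts transitively required by one entry point."""
    required, stack = set(), [entry]
    while stack:
        node = stack.pop()
        if node in required:
            continue
        required.add(node)
        stack.extend(dependencies.get(node, ()))
    return required

deps = {
    "MENU01":  {"MENUPGM"},    # screen -> program
    "MENUPGM": {"CUSTFILE"},   # program -> data file it reads
    "BATCH01": {"RPTPGM"},     # unrelated batch chain, excluded from the slice
}
print(sorted(vertical_slice(deps, "MENU01")))  # ['CUSTFILE', 'MENU01', 'MENUPGM']
```

Everything outside the returned set can be ignored for the PoC scope.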

So here we are on a real code-base analysis of more than 100 million lines of source code; you can imagine the complexity of the graph we are going to see in a minute. This is not one single application, it is an application portfolio, and the purpose of this analysis is to identify how we can organize the multi-year project that we are going to run for the customer.

So we are analyzing 100 million lines of source code, which produces a very, very complex graph, and this level of complexity is not manageable by any human being. That is why we have scripted a way to produce a simplified version of that graph, in which every single node is just one application.

So in the graph we are going to see in a few seconds, we will see a simplification of the same graph, representing the same code base, in which we have also included the data dependencies: the input and output files as well as the persistence into the relational database tables. This way we can see how the applications are coupled through the persistence layer, and how we need to group them together because they share common resources that must be modernized at the same time. Two applications sharing an IMS database, for instance, absolutely need to be modernized simultaneously, because there is no way for one application to keep accessing the non-modernized IMS database while the other accesses its modernized replacement.
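This grouping constraint is a connected-components problem: any two applications sharing a persistence resource end up in the same migration group. A minimal union-find sketch, with illustrative application and resource names:

```python
# Hypothetical wave-grouping constraint: applications sharing a persistence
# resource (an IMS database, a DB2 table, a file) must land in the same
# migration group.

def migration_groups(app_resources):
    """Connected components of applications linked by shared resources."""
    parent = {app: app for app in app_resources}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    first_owner = {}
    for app, resources in app_resources.items():
        for res in resources:
            if res in first_owner:
                parent[find(app)] = find(first_owner[res])  # union
            else:
                first_owner[res] = app

    groups = {}
    for app in app_resources:
        groups.setdefault(find(app), []).append(app)
    return sorted(sorted(g) for g in groups.values())

apps = {
    "BILLING": {"IMS_CUSTDB"},
    "ORDERS":  {"IMS_CUSTDB", "DB2_ORDERS"},  # shares the IMS db with BILLING
    "HR":      {"VSAM_EMP"},                  # independent
}
print(migration_groups(apps))  # [['BILLING', 'ORDERS'], ['HR']]
```

Each resulting group is a candidate cluster inside a wave; what the tool adds on top is the customer's functional input for ordering the waves.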

So again, using the different features of the dependency graph, we can split this huge graph into subgraphs. Taking the customer's functional requirements as input, we group the applications while keeping a perfect understanding of what is behind those applications in terms of dependencies and how they relate to each other. This way we build waves, which can themselves be decomposed into clusters, and we define the planning of the full migration journey.

So the way we group the applications depends both on the customer's input at the non-functional level, because some business lines are candidates for early adoption of the transformation and its early benefits, and they do not want mission-critical applications in the first wave, for instance, and on the technologies implementing them.

We want to demonstrate that the solution works in the first wave, not wait until the end. This way we build a complete timeline for the project that is going to be followed. This timeline is the basis of the project, and of the way we distribute the applications to all the team members, so that they can monitor their progress and check that they are in line with what is expected.

The second step is the transformation itself. This transformation is made of three different steps; I will describe the first one and the last one before focusing on the one in the middle. The first step of the transformation consists in creating a model by relearning the current implementation of the application. This model contains all the information that is in the current implementation. Then, from a model, we generate the target source code in Java: this is the last step, the generation.

The generation is based on templates that can be customized to converge with the customer's requirements for future maintainability and so on. In the middle, we have the refactor step, which is very important. This step parses and screens the first model and recognizes its patterns in order to map them onto modern patterns, because we want to obtain an object-oriented application by modernizing that legacy application.

It means that this refactor process automatically re-architects the application into a second model. The transformation is not a line-to-line transformation; it is an architecture-to-architecture transformation, or a pattern-to-pattern transformation. It also means that, on top of this automated transformation, we have the ability to add transformation rules specific to the project: because we want to refactor a piece of source code to make it more maintainable in the future, because we want to change a naming convention on the fly during the automated transformation, or because we want to capture a customer requirement so that the generated source code reuses pieces of their in-house Java framework, and so on.

So it means that we have a refactoring toolbox in our hands, to fully customize the output and obtain the quality expected by the customer while implementing their requirements. So let's go to the demonstration of that. We are going to go through the configuration, based on the code base we looked at just before, and explore what is possible: how we could patch the source code so that one pattern is automatically replaced by another, for a reason specific to this customer's context. We will also go to the properties and see how we can define the different refactoring pieces we want to inject into the transformation process.

So let's go. From the landing page of Blu Insight, we access the Transformation Center. From this Transformation Center, which was created from the code base we defined earlier, we get this dashboard. This dashboard is a bit different from the other ones: it focuses on the different steps of the transformation I just described. What we can see here is that sometimes we want to change the source code of the input we received, because there is a typo or something that is not fully compatible with the transformation.

This way we can fix something very quickly before we receive the official retrofit from the customer, and this becomes a version of the input. It is also used to take code refreshes from the customer in the middle of the project. So here we have an extra dash that we are going to remove.

The customer refreshes the source code in the middle of the project because maintenance continues during the transformation. This way, we automatically get the impact of any such refresh: which pieces of the transformation need to be re-transformed and retested, reapplying the test strategy.

In the configuration of the Transformation Center, we can customize and set up the different properties for each step. Then we focus on the one that allows customization: here I can define my charset, I can define the paths or the classpath used to generate the source code, and I can customize things.

So here we are going to define a patch. This is the target pattern we want to substitute for a pattern from the customer's code, so that everywhere we recognize the legacy pattern, we apply the target pattern instead of the one that would be generated by default.

The reason we do this is that the customer has a requirement: either we use some of their Java classes from their framework, or we do something a bit different from the usual generation, for instance to adopt their logging mechanism, and so on. Then we can define some ad hoc refactoring here.
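The logging example can be sketched as a pattern patch: wherever the default generation would emit plain stdout logging, substitute a call into the customer's in-house framework. The `AppLogger` class and the pattern itself are illustrative assumptions, not the tool's actual patch syntax.

```python
import re

# Hypothetical pattern patch: swap the default generated logging call for
# the customer's in-house framework call, everywhere it occurs.

PATCH = (re.compile(r'System\.out\.println\((.*?)\);'),
         r'AppLogger.info(\1);')

def apply_patch(java_source):
    """Replace every occurrence of the legacy pattern by the target one."""
    pattern, replacement = PATCH
    return pattern.sub(replacement, java_source)

generated = 'System.out.println("customer created");'
print(apply_patch(generated))  # AppLogger.info("customer created");
```

Because the replacement is declarative, it is applied uniformly across the whole generated code base rather than fixed by hand after the fact.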

The ad hoc refactoring is what I introduced earlier: the ability to define additional rules that will be applied during the transformation in order to implement a requirement from the customer.

Here, the refactoring that is defined automatically expands and renames the terse naming convention we had in the legacy code, which was limited to six or eight characters. We want meaningful names in the generated source code, so that future maintenance will be easier than if we did not do it.
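A minimal sketch of such a name-expansion rule: a project dictionary maps the 6-to-8-character legacy identifiers to meaningful Java names, and every known identifier is rewritten. The dictionary entries are illustrative assumptions.

```python
import re

# Hypothetical name-expansion rule: rewrite known legacy identifiers to
# meaningful Java names, leaving unknown tokens untouched.

RENAMES = {"CUSTNO": "customerNumber", "WSAMT": "workingAmount"}

def expand_names(source):
    """Rewrite every legacy identifier found in the rename dictionary."""
    return re.sub(r'[A-Za-z][A-Za-z0-9]*',
                  lambda m: RENAMES.get(m.group(0), m.group(0)),
                  source)

print(expand_names("WSAMT = WSAMT + CUSTNO;"))
# workingAmount = workingAmount + customerNumber;
```

A real rule would work on the model rather than raw text, so that renames stay consistent across declarations, references, and generated accessors.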

So here we select the "online" work package that we defined on the code base, to say: I want to transform the online part of my application. Then we run the transformation, selecting the transformation engine and applying the process as I described it, and we can monitor the results.

We can see here that this artifact was automatically transformed with an ad hoc transformation rule that changes the naming: the engine recognized that the naming needed to be changed in this artifact to converge with the customer requirement, in addition to the default transformation rules that were applied. For each refactor, you can access the logs.

So you can have a look at everything that was renamed here; you have a trusted history of everything that was done, and you can repeat the process as many times as needed until you are satisfied with the result. The tool versions the different outputs you produce.

This way you can compare a previous version with the current one, in case you broke something or you want an explanation of something that has changed. We are also able to download the generated source code, which we can explore and share with the customer in the context of the PoC.

They can then give us feedback on anything they want to change. Here we look in the source code for the patch we defined and wanted to apply: we search for the keyword "patch", and we can see that where the pattern occurred, the generated source code was replaced by the source code we wanted to apply.

So you can see that the source code is fully readable. At the end of the transformation, we make a handover to the future maintenance team, so that they can fully understand the application structure and architecture we have produced, as it is a functional equivalence.

People who are familiar with the functional algorithms in the legacy application will fully recognize them by looking at the Java source code, even if they are not Java experts. But the transformation also gives you the ability to make big changes, and to take the change as a chance to enhance the application's maintainability.

Because what we have in our hands is a refactoring toolbox, we can take the opportunity of this change to resolve some elements of the technical debt. An application maintained for 20 years was perhaps worked on by different teams going in different directions; if you want uniformity in the new application you are going to maintain, you can decide to refactor some pieces of the source code to make them easier to maintain.

I was talking about the naming conventions and so on. So the transformation itself produces an application that will be more maintainable in the future and able to take advantage of the new, modern services of the cloud.

Thank you for your attention; I was happy to present to you today.
