Evolution from migration to modernization using modernization pathways

Good morning. Wow, that's the energy. Thanks for attending our session; that's awesome. It's Friday, I know, but I really appreciate your time. Thank you very much. We have a great agenda for you all, and we will make sure that we keep you awake, right? That's the most important thing.

OK. I'll start with a question first. Are your customers struggling with modernization, with moving from migration to modernization? Yeah, great. Who's working on migrations at the moment? Great. And folks on modernization? Great. This session is for both of you, so don't worry. As I mentioned, it's a very interesting session, and it helps you solve your problems whether you are focused on migration or on modernization.

This is a 300-level session intended for both business and technical personnel. So again, who here is from the business side? OK, quite a few. Technical? Good. Wow, and we have a demo for you as well, so stay tuned. As the slide says, my name is Pras Shah. I'm a Senior Solutions Architect focusing on partners, and I specialize in migration and modernization. I'm very excited to be joined by my colleague Giovanni.

Hello, everyone. I'm Giovanni Graves, and I'm also a Partner Solutions Architect for migration and modernization, based in New York.

Great. And Alex. Hi. Oh, wow, I'm sure I'm not the only person with a sore throat this morning. My name is Alex Cairns. I'm a Principal Solutions Architect at Ubertas Consulting, an AWS Advanced Tier partner based in the UK. We focus on migration and modernization of Windows workloads, and I'm really excited to share one of our customer stories with you today, as well as how we've been on the journey of modernization pathways. Thank you very much.

OK. So what are we covering today? We want to take you on a journey of modernization evolution. We'll focus on why modernization is important and on the modernization pathways, specifically around infrastructure, databases, applications, and analytics. We also want to talk about AWS solutions and partner solutions, since this is a partner track. And we have a demo where we modernize a legacy application with one of our ISV partner tools and AWS services. So let's get started.

Migration and modernization is a journey, you agree? The first step starts with assessment and discovery: you select the migration strategy, create a business case, and get a sense of readiness. The Mobilize phase is more about building the early muscles: get some momentum, define your operating model, and, if you have one, leverage a cloud center of excellence. In the last phase, once your landing zone is ready, you are ready to migrate at scale. And most importantly, you need to do continuous optimization, right from performance to cost; that's very important. We see this structured, phased approach work with all customers.

So the question here is: do you migrate and modernize on the way in, or do you modernize after the migration? That's the question we normally get: how do I do modernization, before or after? And the answer is both. Let's find out how.

When a customer has urgency to move off their current on-premises systems, they want cloud flexibility and agility quickly. In that case it starts with migration as a first step, followed by incremental modernization. This is a two-step approach to modernization, and I call it an organic evolution to modernization. This approach starts with the low-hanging fruit: focus on replatforming first, and then, as you become more mature, you go further. Certainly this incremental modernization is a low-risk approach, and sometimes it provides better performance as well. On the other side, when a customer wants innovation and performance faster, specifically to gain business agility, modernizing on the way in is the answer. It is a leapfrog approach to modernization to achieve a specific goal. Of course, adopting a higher degree of modernization is complex and challenging, but it actually improves the odds of long-term success. So: two-step or one-step. You agree with this one, right?

Modernization is complex. Customers have to make fundamental changes to their business capabilities, operating model, and technology when they're considering modernization, and they often bump into a number of challenges. One of the challenges we see is decision fatigue: making quality decisions from a vast sea of information and getting the priorities right. It's like finding needles in a haystack. There is too much information, and the quality of decisions is the number one priority.

Consensus on a target architecture is another one. There are a number of teams across the organization that need to be involved in making the decisions. So the challenge here is: how do you get their voices in? How do you define and build an architecture that is specifically designed for scalability and drives business value? That is the most important thing.

And the last one, which is the most critical as well: the skills gap. How do I build my modernization muscles, specifically on the skills side? Another thing we see is: how do I get momentum, upskill my resources, and shorten the learning curve? This is the most important thing. You don't want to spend month after month on training; you want to shorten the learning curve. We have to come up with very innovative ideas for that.

So many times customers ask: do you have a well-tested and well-defined approach for modernization? To address this, we have streamlined the pathways so we can establish a modernization vocabulary and best practices. It also helps us rationalize applications from a modernization perspective. And finally, it helps accelerate modernization. So how do you do that?

We start by defining and decomposing workloads into the six modernization pathways. The first is move to cloud native: it is about moving out of legacy applications and decomposing them into microservices. Move to containers is more about gaining efficiency; it focuses on portability and consistency across environments. Move to managed databases is the next one: here you are moving from self-hosted, self-managed commercial databases to fully managed and purpose-built databases, like Amazon DynamoDB for key-value or Amazon Neptune for graph databases. Move to open source is again moving from commercially licensed software to open-source tech and containers, and we see that many customers, by adopting this modernization pathway, are able to reduce TCO. Move to managed analytics is the adoption of data lakes, data mesh, and especially data lakehouse implementations; here you are moving from a transactional data warehouse to a fully managed data warehouse. And finally, move to modern DevOps is more about adopting a test-driven approach and CI/CD pipelines, and of course culture matters the most; that's the common thread.

As a partner, you might be wondering how your modernization practice aligns with these pathways. So I want to introduce the modernization pathway assessment. It is a structured framework to understand your organization's alignment, business alignment, and capabilities with respect to the six pathways, as I mentioned earlier. It is designed specifically for partners. It provides actions and recommendations based on the capabilities and the gaps we identify, to help you close those gaps. And this assessment is based on your past delivery and your people capabilities.

So now the question is: as a partner, how do you get started? Please reach out to your partner development team for enrollment; that's the first step. The second step is to reach out to your partner solutions architect, who will work with you on the assessment and provide a report. I'm sure you have used the Migration Readiness Assessment through the A2T tool, or we can say the AWS Assessment Tool; this assessment is already available there.

Now I'll hand over to Giovanni to focus on the specific pathways: move to containers, move to cloud native, and move to managed databases. Giovanni, the floor is yours.

All right, great. Thank you. Hello, everyone. Thanks again for joining us this morning. So now what we'll do is explore these pathways in a little more detail and discuss the common methodologies and tools that we've seen used effectively in the field, before we move to a demonstration.

All right. So we see move to containers as a minimum viable modernization path: taking our legacy applications and containerizing them without extensive refactoring. OK. Now, a key evolution is that the tools available to us have vastly improved our ability to do containerization at large scale. And so we've actually been enabling the partner community on a number of low-code/no-code automation platforms for containerization, such as Matilda Cloud and CloudHedge.

Now, our native toolsets have been pretty busy as well. AWS App2Container we like to think of as not just a lift-tinker-and-shift tool: it is also an assessment tool and a recommendations tool that reduces the manual effort of determining runtime dependencies in many of our applications. Microservice Extractor is an interesting one; we're using the power of AI within this tool, and it gives us recommendations for breaking down monolithic .NET code. It now natively includes the Porting Assistant for .NET capability, so we're able to convert .NET Framework to .NET Core up to six times faster using this cloud-enabled version rather than the installer you might have in a developer IDE.

Now I'm actually going to invite Alex from Ubertas Consulting to share a customer story with us about move to containers and cost optimization. It's going to happen every time; I didn't even go to re:Play last night. This is just Friday.

Before I share this customer story, I have one question for you all: who saw Werner's keynote yesterday? OK, most of the room. And what was the main theme running through his keynote? I want you to shout it out; some energy this Friday morning. Yes, great.

So this is how we're going to approach this customer story. It does follow the move to containers pathway; however, the recurring theme throughout is cost optimization and cost saving.

This particular customer is a financial services customer operating at global scale, with billions of dollars of assets under management. Their workload began in a colocation facility: Java workloads running quantitative financial models on Windows Server 2012, on VMware virtual machines. This came with a number of overheads, both cost-wise, due to the licensing of Windows Server, and performance-wise.

They had identified this need to migrate and modernize from the colocation facility into AWS to start trying to save some money. Using the Migration Acceleration Program, or MAP as you may know it, we were able to establish Ubertas and AWS as trusted partners for this customer, that power of three together, and then leverage the funding available through the MAP program to reduce the financial burden of modernization.

The first step of this was lifting and shifting. In Werner's keynote yesterday, he talked about this idea that if you continue to run your applications in the cloud as you did in your data center, you are never going to be able to realize their full efficiency, and it is absolutely true.

We'll get a little bit more onto that; it's something I want you to keep in the back of your mind as we go through this.

So we know that these modernization pathways aren't necessarily a straight line. They're not necessarily one step, and they're not always one pathway either.

Once we'd moved this application as-is onto EC2, running the latest compatible version of Windows Server, we started to explore where these cost savings would come from.

We identified that there was a 38% potential cost optimization just by replatforming from Windows Server to Linux. We know that Java workloads, in theory, should be portable across operating systems.

I'm sure there is enough experience in this room to know that in reality, perfection is probably not quite so easily attained.

What we identified in this scenario was a number of Windows dependencies in the Java application, specifically relating to SQL Server, that meant some application refactoring was required before we could explore this move to open source pathway.

The customer was able to do that, which then unlocked the ability to realize this 38% cost saving straight off the bat.

Let's not forget that we are talking about the move to containers pathway. So, bringing it back to containers: not a single step, and not a single pathway.

We began to incorporate containers because, now that we were running a Linux operating system, it became a little easier to containerize: there were fewer dependencies, we didn't have that licensing cost overhead, and the ecosystem was a little better.

Much, much easier. The term that was used earlier was minimum viable modernization, and that's what we were looking to do at this point.

It was going from the colocation facility on the very far left and going all the way through to see how far we could go, realizing as much as we could each time from incremental cost optimization: not looking for the big-bang cost optimization, doing it once and stopping after one step, but taking it steady and finding improvements along the way.

What we'd found was that 75% of the EC2 instance cost came from idle time. These financial modeling applications only ran once per day, around the time the markets opened.

And as a result, there was a massive opportunity there to cut some costs.

So we started to explore things like task scheduling on ECS: making sure those tasks only run at the time they are needed, and getting rid of the EC2 instances that were running 24/7 and incurring that cost. A great result for the customer: being able to realize a 38% cost optimization in the first step, and then a further 75% cost optimization by scheduling those tasks.
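
As a rough illustration of that scheduling pattern (a minimal sketch, not the customer's actual stack; the image name, cron expression, and task sizing are assumptions), an ECS task can be run on a schedule with the AWS CDK's ScheduledFargateTask construct instead of paying for instances 24/7:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';
import * as ecsPatterns from 'aws-cdk-lib/aws-ecs-patterns';
import * as appscaling from 'aws-cdk-lib/aws-applicationautoscaling';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'ScheduledModelingStack');

// Cluster with no standing capacity; Fargate bills per task run.
const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2 });
const cluster = new ecs.Cluster(stack, 'Cluster', { vpc });

// Run the financial-modeling container once per weekday around market open
// (13:30 UTC here, purely illustrative), instead of idling 24/7.
new ecsPatterns.ScheduledFargateTask(stack, 'DailyModelRun', {
  cluster,
  schedule: appscaling.Schedule.cron({ minute: '30', hour: '13', weekDay: 'MON-FRI' }),
  scheduledFargateTaskImageOptions: {
    image: ecs.ContainerImage.fromRegistry('example/financial-models:latest'), // hypothetical image
    cpu: 1024,
    memoryLimitMiB: 2048,
  },
});
```

Fargate is used here so there is no standing EC2 capacity at all; the customer's actual setup may have differed, but the principle of paying only while tasks run is the same.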

Now, this doesn't even scratch the surface of the opportunities that are present through containerization.

So what I'll do is hand back to Giovanni, and he will explore that further. Thank you for that.

Wonderful story. We can see how we can compound our gains when we stack our modernization pathways together, as in this example.

Now, also in this example, we've noticed that our customers continue to choose Amazon ECS for its simplicity, but we are seeing many of our customers now moving to Amazon EKS to access a large ecosystem of open-source solutions.

Our customers have been telling us: please get us to production faster on EKS. They've been asking us for quite some time.

And so in 2022 we launched the EKS Blueprints project, and our customers are telling us they are able to onboard workloads in a matter of weeks rather than months.

You've got partners as well with offerings built around EKS Blueprints, and you might find them right in the AWS Marketplace, actually.

So, it is Friday, and I don't want to presume that you've heard about Blueprints this week, but just to quickly recap: it's opinionated infrastructure-as-code templates intended to help us quickly bootstrap best-practice configurations, such as including controllers like cert-manager, or bootstrapping observability for EKS using managed services such as Amazon Managed Grafana and Amazon Managed Service for Prometheus.

I'll spend a little more time on Blueprints for a moment here. There are a couple of components to understand. There's the Blueprints core, and there are projects built on top of the Blueprints core; we'll hear more about those projects later, one example being Data on EKS. But the Blueprints core itself leverages the concept of teams.

We're borrowing this terminology from the GitOps philosophy, so to speak. So here we've got platform teams: they have administrative access to our clusters, and they then vend namespaces to application teams; you can think of that as namespaces as a service, almost. The second component of the Blueprints core is that it allows us to bootstrap a wide range of cluster add-ons,

in some cases with a single toggle within our infrastructure as code. And that's for both EKS managed add-ons as well as self-managed add-ons from the open-source ecosystem: for auto scaling we've got Karpenter; for continuous delivery, Argo CD. We're actually going to see Argo CD in the demonstration in a moment. And the final thing to understand about the Blueprints core is that it's really a library of deployment patterns using best-practice reference architectures.

So you'll find reference architectures for multi-cluster and multi-region setups, utilizing managed or self-managed node groups, Fargate, even Bottlerocket, blue/green and canary configurations, et cetera.

And so if you haven't deep-dived EKS Blueprints this week yet, please take a look. And a spoiler: this is not going to be the last time you hear about Blueprints today.
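
For a flavor of what that looks like (a minimal sketch using the CDK flavor of EKS Blueprints; the account, region, role ARN, and stack name are placeholder assumptions):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

// Declare an EKS cluster bootstrapped with add-ons and a platform team.
blueprints.EksBlueprint.builder()
  .account('111122223333')                  // placeholder account
  .region('us-east-1')
  .addOns(
    new blueprints.addons.ArgoCDAddOn(),    // GitOps continuous delivery
    new blueprints.addons.KarpenterAddOn(), // node auto scaling
    new blueprints.addons.MetricsServerAddOn(),
  )
  .teams(
    // The platform team gets administrative access via this IAM role and
    // can then vend namespaces to application teams.
    new blueprints.PlatformTeam({
      name: 'platform',
      userRoleArn: 'arn:aws:iam::111122223333:role/PlatformAdmin', // placeholder role
    }),
  )
  .build(app, 'blueprint-demo-cluster');
```

The same builder accepts ApplicationTeam entries, which is the namespaces-as-a-service idea mentioned above.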

All right, so move to cloud native. This is the pathway based on microservices and modern DevOps, and sometimes it is easier to build from scratch; this is the leapfrog approach.

We heard Pras mention this earlier: we find that incremental design patterns, starting small and building momentum, are most effective for modernization, especially in brownfield projects.

You've got these large monoliths; they've been built over years, maybe even decades in some cases we've seen. And so two such patterns we see are leave-and-layer, where we enhance the application around the perimeter and leave the core intact,

and the strangler fig pattern, popularized from the twelve-factor applications handbook, so to speak. And what we've done is launch Refactor Spaces to model an environment for those two incremental patterns.

Quite simply, our customers are able to use this to redirect traffic from a common ingress between a monolith account and any number of microservice accounts. So this is step one, right?
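
As a sketch of that step-one wiring (using the CloudFormation-level CDK constructs for AWS Migration Hub Refactor Spaces; the identifiers, VPC, and path are illustrative assumptions, and in practice the environment typically spans multiple accounts):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as refactorspaces from 'aws-cdk-lib/aws-refactorspaces';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'RefactorSpacesStack');

// Environment: the umbrella that bridges the monolith and microservice accounts.
const environment = new refactorspaces.CfnEnvironment(stack, 'Env', {
  name: 'modernization-demo',
  networkFabricType: 'TRANSIT_GATEWAY',
});

// Application: provisions the common ingress (an API Gateway proxy).
const application = new refactorspaces.CfnApplication(stack, 'App', {
  name: 'order-management',
  environmentIdentifier: environment.attrEnvironmentIdentifier,
  proxyType: 'API_GATEWAY',
  vpcId: 'vpc-0123456789abcdef0', // placeholder VPC
});

// Route: send /order traffic to the extracted microservice; everything else
// keeps falling through to the monolith via the application's default route.
new refactorspaces.CfnRoute(stack, 'OrderRoute', {
  environmentIdentifier: environment.attrEnvironmentIdentifier,
  applicationIdentifier: application.attrApplicationIdentifier,
  serviceIdentifier: 'svc-0123456789abcdef0', // placeholder extracted service
  routeType: 'URI_PATH',
  uriPathRoute: { sourcePath: '/order', activationState: 'ACTIVE' },
});
```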

We've got our, so to speak, landing zone for a microservice with Refactor Spaces. We now need to think about day two for modernization success.

How do we think about observability, governance, and third-party integrations? And how do we do this at scale? Every time we want to split out a new microservice, are we reinventing the wheel?

And so what our customers are doing is looking to adopt internal developer platforms to solve this challenge, so to speak.

And so with our customers, we've used AWS Proton to build this self-service platform. To borrow some GitOps terminology again: your platform teams create standardized, opinionated infrastructure-as-code templates.

Developers then consume those templates, and they just get to focus on their code.

So what we're saying here is: solutions such as Proton, which enables a high-velocity developer experience; EKS Blueprints, which pushes us to production much faster;

and our landing zone with Refactor Spaces. These are the fundamental components of a package for a self-service internal developer platform and modernization at scale.

Now, we still need tools to observe our applications; that's especially true for incremental modernization. And so we've been creating stronger partnerships with our APN technology partners over the past year or so, such as CAST, Dynatrace, and many others.

We're actually going to see vFunction in action, which is going to help us decompose a monolithic application into microservices and accelerate domain-driven design.

Before we get to the demo, we'll talk about move to managed databases. Who thinks moving to managed databases is about Amazon RDS? OK. All right, it's a little bit more than that. Fundamentally, this pathway is about moving to purpose-built databases.

And so we still follow the methodology from earlier today: assess, mobilize, migrate and modernize. We start with database discovery and license evaluation, and we've seen a number of tools in the APN marketplace to help our customers here, such as Clarise, ModernizeIT, and many others.

From our native toolsets, we've got DMS Fleet Advisor. It helps us automate inventory collection and migration planning for a large database server fleet, and you'll actually see it right within the Database Migration Service console. DMS is very popular;

it is our most popular database migration tool, along with the Schema Conversion Tool, SCT. But we do find that our customers are now starting to leverage partner solutions for much higher velocity and scale: solutions from Qlik, BryteFlow, and Confluent.

These are just a couple of examples that we've seen, using any number of different replication technologies: change data capture, even some interesting Kafka-based technology with sinks and connectors, all to expedite heterogeneous database migration.
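
To make the CDC piece concrete, here is a minimal sketch using the CloudFormation-level CDK construct for a DMS task (the endpoint and replication instance ARNs are placeholders; a real stack would also define the endpoints and the instance):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as dms from 'aws-cdk-lib/aws-dms';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'DmsCdcStack');

// Bulk-load the existing data, then keep the target in sync via ongoing
// change data capture (CDC) from the source's transaction log.
new dms.CfnReplicationTask(stack, 'HeterogeneousMigration', {
  migrationType: 'full-load-and-cdc',
  sourceEndpointArn: 'arn:aws:dms:us-east-1:111122223333:endpoint:SRC',  // e.g. SQL Server (placeholder)
  targetEndpointArn: 'arn:aws:dms:us-east-1:111122223333:endpoint:TGT',  // e.g. Aurora PostgreSQL (placeholder)
  replicationInstanceArn: 'arn:aws:dms:us-east-1:111122223333:rep:INST', // placeholder
  // Selection rules are passed as JSON; this includes every table in dbo.
  tableMappings: JSON.stringify({
    rules: [{
      'rule-type': 'selection',
      'rule-id': '1',
      'rule-name': 'include-dbo',
      'object-locator': { 'schema-name': 'dbo', 'table-name': '%' },
      'rule-action': 'include',
    }],
  }),
});
```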

It's about purpose-built. One last thing with move to managed databases, and the theme here is the move to open source: we need more mechanisms to break free, so to speak. We call it database freedom internally at AWS.

Specifically with the Microsoft SQL Server to PostgreSQL pattern, we see Babelfish becoming a key enabler for some of our customers. If you're not familiar with it, Babelfish is a feature of Amazon Aurora that effectively gives us an interface to translate from T-SQL to PostgreSQL.

Babelfish 3.2 was launched in July of this year, and feature compatibility has increased up to 70% over the GA 1.0 version. So we think that if you haven't taken a look at Babelfish since GA, you should take a look now.
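
The practical effect is that an application keeps speaking the SQL Server wire protocol (TDS, port 1433) while the data lives in Aurora PostgreSQL. A rough sketch, assuming the widely used mssql Node.js driver and a placeholder Babelfish cluster endpoint:

```typescript
import * as sql from 'mssql';

// Point an existing SQL Server client at the Babelfish TDS endpoint of an
// Aurora PostgreSQL cluster; Babelfish translates the T-SQL below.
async function main(): Promise<void> {
  const pool = await sql.connect({
    server: 'demo-cluster.cluster-abc123.us-east-1.rds.amazonaws.com', // placeholder endpoint
    port: 1433,                  // Babelfish listens on the TDS port
    user: 'babelfish_admin',     // placeholder credentials
    password: process.env.DB_PASSWORD ?? '',
    database: 'orders_db',
    options: { encrypt: true, trustServerCertificate: true },
  });

  // Plain T-SQL, unchanged from the SQL Server days.
  const result = await pool
    .request()
    .query('SELECT TOP 5 * FROM dbo.orders ORDER BY order_date DESC;');
  console.log(result.recordset);
  await pool.close();
}

main().catch(console.error);
```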

So I will invite Alex one more time to the stage to discuss a little about Babelfish and how it might play a wider role in their Windows modernization practice.

Yeah, that's great. Thanks, Giovanni. It worked that time. Great. So: we speak to many customers looking to migrate and modernize Microsoft workloads onto, and inside, AWS.

And often the natural choice for migrating a SQL Server workload to AWS is to replatform and look at something like RDS for SQL Server, the path of least resistance.

But often the desire to move isn't just to start realizing the benefits of a managed service; it's also to save some money. And moving from an on-premises SQL Server to RDS for SQL Server, you still have that commercial license overhead; you're still paying for the pleasure of using SQL Server.

And that's where Babelfish comes in. So it's not just about moving to managed databases; that commercial licensing, of course, comes into it.

So with anything where you're moving a Microsoft workload into AWS, this idea of moving to open source and exploring options to break free from those licensing overheads is always good.

So, before Giovanni mentioned it, who'd heard of Babelfish before today? OK, probably about half. And of those people, who has used it? A lot fewer people.

So yeah, definitely some calls to action there: give it a go, even with some simple workloads, and get a feel for how it works.

Babelfish is one tool that we've explored to try and solve this problem: the problem of moving from Microsoft SQL Server to an open-source engine inside AWS, to combine cost optimization with the benefits of modernization.

Now, Giovanni talked about the leaps and bounds in terms of compatibility improvements that Babelfish has made since GA.

But we're still not quite at 100%, so there is a compatibility assessment tool for Babelfish.

The tool is called Compass. It's really great in those presales scenarios where you need to fail fast, where you need to evaluate the viability of using a tool like Babelfish in later parts of the project.

What Compass will do is evaluate both SQL and DDL scripts written in T-SQL, SQL Server's native dialect, and it will categorize its findings by severity, in terms of how hard it would be to solve manually the ones that need manual fixes.

At Ubertas, we have a real key focus on Microsoft migration and modernization, and we see Babelfish as a tool that forms part of that practice.

It's something that should give customers additional options on the move to managed databases pathway. In some cases, maybe SQL Server is the correct target state for them; in some cases, maybe not.

But it's always great to have more options. The goal with this is always to lessen the risk and the cost compared to other kinds of heterogeneous migrations, where you might need to explore application refactoring as well as database refactoring.

So I'll hand back now to introduce the live demo.

OK. So it's Friday, and we thought: let's not wait until the end of this session to do our demonstration.

So Pras and Alex will man the booth here and walk us through. We're going to focus on cloud native here for a bit.

And if you remember, again, it's about microservices. And so we really have to get into the code here, by decomposing the legacy app.

One common approach is, of course, to use domain-driven design, and that typically might happen in event-storming workshops.

And that's where we get an opportunity to isolate and free business logic from a large code base. And so vFunction is going to help us here to automate this domain-driven design.

And so for our demo approach, vFunction is going to dynamically observe our order management system, the OMS. It will give us an initial recommendation for domain-driven design, and then we will extract a service from the Java monolith.

Now, to maximize our time here, we have decoupled our build and deploy steps, which is actually consistent with the GitOps philosophy, and so much of the build has already occurred. We've taken the extracted code, we've done some minor refactoring to Java Spring Boot, we've containerized it, and that image is available to us in our repository.

So what you'll see here is really the deployment step, using Argo CD, open-source tooling and a graduated project in the CNCF. Argo has already been bootstrapped into our EKS cluster from our Blueprints core modules. And then finally, Refactor Spaces is providing that strangler fig environment for us: it will transparently allow us to reroute this new service's traffic from the common ingress coming in from API Gateway, all behind the scenes.
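
For reference, that "bootstrapped from the Blueprints core modules" step might look roughly like this (a sketch; the workloads repository URL and path are hypothetical):

```typescript
import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

// Install Argo CD as a cluster add-on and point it at the Git repository
// holding our Kubernetes manifests, so deployments happen by git push.
blueprints.EksBlueprint.builder()
  .region('us-east-1')
  .addOns(
    new blueprints.addons.ArgoCDAddOn({
      bootstrapRepo: {
        repoUrl: 'https://github.com/example-org/eks-workloads.git', // hypothetical repo
        path: 'envs/demo',
        targetRevision: 'main',
      },
    }),
  )
  .build(app, 'demo-cluster');
```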

Gentlemen, let's get into action. Can you hear me? Yeah, let's get into action.

OK. So this is vFunction, currently pointed at the OMS, the order management system, and there are multiple steps we will go through. First, vFunction has two core capabilities: architectural observability and a refactoring engine.

Architectural observability focuses on identifying the key domains in the existing application, and the refactoring engine specifically helps to extract the microservices. Two core capabilities.

Excellent. The first step always starts with learning; this is AI-based learning. When vFunction is pointed at the application, it learns what is happening, and once learning is completed, vFunction presents a great analysis of the existing domain-driven design.

In this case, the domain-driven design is focused on the order management system. Let's explore what's inside. The first thing to understand is what these circles are: each circle is focused on a functional domain.

Each circle represents a functional domain within the application, and in this case we have the inventory, order, and product domains. The color of each domain specifically indicates how exclusive that domain is.

And within each domain, you have the software components. If you think about how each application is designed: you have the software components in the classes, you have the databases, as well as other API calls, and each resource has domain-specific elements: database transactions, API calls, and messaging.

You can see that vFunction clearly provides all of that information. And as I mentioned earlier, you also want to identify the entry points and how the internal business flows move around within the application, and you can see the whole detailed tree it provides.

Now, another important aspect of the tool is dependencies. Between the domains there are lines that represent the dependencies between them. That is another important aspect when you do this domain-driven design discovery: what are the dependencies within the application?

Another important aspect is identifying domain exclusivity: how exclusive is my domain? It's like asking about the quality of my domain. That exclusivity is indicated by the colors: red represents low exclusivity, meaning my domain is in very poor shape, and green represents a highly exclusive domain in good condition. From that, you can see the exclusivity of each domain.

The next phase, once you understand what is inside the application: another important aspect of vFunction is that it provides recommendations. This is about reducing technical debt. In my architecture, in this specific application, how do I actually reduce my technical debt?

There are two aspects to that. The first is identifying high-debt classes: which of my classes are highly dependent? The second important aspect is dead code: in your legacy application, there may be code that is never executed or perhaps decommissioned, which is important to identify.

So vFunction will identify and present both the high-debt classes and the dead code. From this example, you can see a specific class that is highly dependent; the recommendation here is how to decompose that part of the application. For dead code, again, vFunction presents the code it found, and if you are not happy with it, you can remove that dead code.

The next thing is understanding my resource requirements. Another aspect, when you are doing microservice decomposition, is the database, which is very important: are you also extracting the database? Are you moving the database with the service, or keeping the database where it is?

vFunction clearly identifies how exclusive my database dependencies are within my application. This is one example: for a specific area it will show you, these are my domains, and these are the databases and resources they depend on. That's another great feature.

So it gives you end-to-end insight into your application: your dependencies, your domain dependencies, functional dependencies, and database dependencies.

Now, another important aspect: once you have identified and discovered how the pieces of your application depend on each other, you want to extract some of the microservices. In this demo, we have the order management system, and I want to extract the order service out of my domain design.

So how do I do that? Before that, I also want to improve my architecture, like reducing my debt. vFunction provides this capability out of the box. Here, say I want to reduce a dependency, or improve the exclusivity of a domain.

What I can do, with vFunction, on screen, is merge two domains. That improves the exclusivity, and here you can see my order domain is now highly exclusive: all the red turned green. Simple.

So we have completed the domain-driven design for order management, and now I'm ready to extract. Another feature of vFunction is that it provides the capability to extract a specific domain, and this is done by the extract domain service.

What I do here is identify the domain I want to extract, click it, and save it. When I confirm the extraction of the domain, I need to identify my target: where do I want to run this service? At the moment the application is running on Java, but the target I want to run the microservice on is Spring Boot.

The second thing is what I need when I extract the microservice: I also need the dependencies, a common class library, as well as the API endpoints to communicate with my microservice. And vFunction makes this simple: you can extract the domain out of the application.

You can see another thing: for extraction, vFunction also provides a copy utility out of the box. With the code-copy utility, you can point at the extraction and seamlessly pull your domain out.

Now let's move to my development environment. This is what my development environment looks like. What we have done with vFunction is identify the domains and extract the identified domain we were interested in.

Now I'm in my workstation environment. You can see all my files here; the original source code is here, and I have also downloaded the code-copy utility. If I list what my current code looks like: this is my original source code. With the vFunction utility, I can extract using the service specification file, and my domain is ready to use; it also extracts the REST APIs, as I mentioned earlier.

And now, out of all this, I have my extracted domain available. So what we have done so far, beyond the visual representation of the domain design, is extract the order management service from our OMS.

So what is the next step once I have my source code? I can onboard this source code into my DevOps practices: push the code to my source code repository, build it, and make the container image available in my repository.

And now I invite Alex to walk us through the CD part. So we understand that, based on the code, I have triggered my continuous integration; now let's look at the CD side in action.

Thank you, Alex. That's great, thank you. Let's just get this demo up and we'll be on our way. OK, it's fine.

OK. So, first: we've talked about EKS Blueprints, and we've talked about tools like Argo CD. I want us to actually see them in action.

What we have in Cloud9 here is our workloads repository. It's a repository containing the Kubernetes manifest files, so it's a really, really easy way for us to deploy workloads into the Kubernetes cluster that's been created by Amazon EKS Blueprints.

What we can see in here is that we've got the three manifest files we need set up inside our team's area, separate from the platform team, keeping everything nicely isolated. These manifest files allow us to specify resource requirements, the Docker images, et cetera.

First, we want to have a look at Argo CD in its current state. We can see here our three manifest files deployed as resources in our EKS cluster: our Service, our Deployment, and our Ingress.

At this point, this is just the extracted order controller that we saw in the vFunction demo. So that we can come back to this and see that Argo CD is live, functional, and working,

I'm going to add another example application here in Cloud9. This is a different app, so we're just going to take this, pop it in here, and then commit and push. So we should have the three new files we've created here for our example application. OK?

We'll come back to that one a little later. What we're going to do now: we've talked about Refactor Spaces and how Refactor Spaces can help you implement the strangler fig pattern when you're modernizing an application.

If we head over to Refactor Spaces now, we can see we've got an environment set up for our demo today. I'm assuming this is large enough for everybody to see as well. We've got an application set up, and we currently have two routes active.

We have one route going to our legacy application, which is the monolith, and one route going to our decomposed order controller. What we're going to do is deactivate the order controller route to start with.

Once this is deactivated, what's happening in the background is things like updating API Gateway and closing that route off, so all the traffic will then be directed to the legacy monolith.

What we then want to do is look at some of the requests we can send and see how we can direct traffic to different places with Refactor Spaces. So we can send this first request here.

This is getting an order. The URL it's going through is the URL of the API Gateway that sits inside Refactor Spaces. And what I want to point out here, for demonstration purposes: on the decomposed order service,

we've added a custom header so we can show you when it has changed over. At the moment this is hitting the monolith; we haven't got this extra header in here, and the request returns. We can make another request, again still going to the monolith; this is the inventory controller, and post-cutover that will still remain with the monolith.

What we can then do is go back into Refactor Spaces and activate our route. Now it's doing the inverse of what we've just done: it's taking all the traffic that was going to the monolith destined for /order, and rather than sending it to our monolith service, it's sending it to our order controller service, which is deployed in EKS via Blueprints.

So if we now head back over to Postman and create an order this time, the request we're sending is going to /order; it's a POST request. And what we should find comes back is this here: we've got this extra header that we created. So this time it's coming from our order controller. Hopefully it's just about large enough to see. But that's the idea: being able to incrementally modernize, sending some traffic over to one service while still retaining this legacy monolith and implementing that strangler pattern.

And just to prove that everything isn't going to the microservice, we can do another request to /inventory and see that the header isn't there, because it's coming from the monolith.
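
A quick way to script that same check outside Postman (a sketch; the API Gateway URL and the custom header name are placeholder assumptions for this demo):

```typescript
// Verify strangler-fig routing by inspecting which backend served each path.
// Assumes the demo added an 'x-served-by' response header on the new service.
const BASE_URL = 'https://abc123.execute-api.us-east-1.amazonaws.com/prod'; // placeholder

async function checkRoute(path: string, method: 'GET' | 'POST' = 'GET'): Promise<void> {
  const res = await fetch(`${BASE_URL}${path}`, { method });
  const servedBy = res.headers.get('x-served-by') ?? 'monolith (no header)';
  console.log(`${method} ${path} -> ${res.status}, served by: ${servedBy}`);
}

async function main(): Promise<void> {
  await checkRoute('/order', 'POST'); // extracted microservice once its route is active
  await checkRoute('/inventory');     // still falls through to the monolith
}

main().catch(console.error);
```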

So, just to finish up this live demo, we can go back to Argo CD, and if we refresh, we will see that we've now got our new Service, Deployment, and Ingress created from the commit and push I did for that example application. I can take this one here, and we've got our example app that's been deployed through Argo CD. And that concludes our live demo.

Thank you. Thank you, Alex. That worked, finally; that's good. So what we just saw is how AWS services and an ISV partner tool can help accelerate modernization. That's very important; that's the synergy we just saw. Now let's turn to another important aspect: data.

Data is a strategic asset, and it's a cornerstone of digital transformation for every organization. Companies are collecting, creating, and storing more data than ever before, but much of it remains underutilized, or sometimes never utilized at all. Accenture found that as many as 68% of organizations are unable to realize the value of their data. Customers have already made capital investments in big data and analytics ecosystems, but these solutions are complex, time consuming, very expensive, and hard to scale when you need them.

By moving to fully managed, purpose-built AWS services, customers can unlock the true potential of their data and benefit from advanced capabilities. Some examples from the field: customers can move Hadoop and Spark to Amazon EMR, migrate ETL workloads to AWS Glue, and move transactional databases or data warehouses to Amazon Redshift for exabyte-scale analysis and complex query workloads.

On the streaming side, customers can leverage Kafka and Kinesis. And finally, customers running the ELK stack can move to the open-source-based Amazon OpenSearch Service for live analysis, search, and log analytics on AWS. We see customers get the most out of their data when they build a high-performance data platform. Such a platform has the capability to ingest data from various sources, transform it, and store it in a consumable format. It should also provide actionable insights and, at the same time, make the data available and accessible to the right application and the right user when they need it, which is extremely important.

But the challenge with a modern data platform is specifically: how do I jump-start it? I don't want to spend month after month designing and building the solution, right?

Second, we see that customers want to leverage Kubernetes for consistency across environments. Another important aspect is that customers also want to integrate with third-party tools, and they want to leverage open-source tools.

What we have seen is that Kubernetes has emerged as a popular platform, specifically for data workloads. And for that, we have designed an open-source project called Data on EKS, for data workloads on Amazon EKS. I will dive into it on the next slide. I also recommend you check this QR code, which is specifically for AWS industry blueprints for data and AI.

We also have the AWS SCT data extractors, mainly for migrating data on Hadoop clusters to Amazon EMR, and from data warehouses to Amazon Redshift. I also want to call out some partner tools, Unravel and Pepperdata, especially for customers who want to migrate or transform their legacy big data and analytics estates.

So we have partner tools for you as well. Now, let's look at Data on EKS specifically. Data on EKS is an open-source project with which customers, and specifically you, our partners, can design and build scalable data platforms on EKS. It primarily consists of infrastructure-as-code templates and reusable reference architectures, designed specifically for data-centric workloads.

So you can take the patterns and reference architectures and deploy them. And of course, you get AWS best practices and industry expertise as part of the Data on EKS code. You can check the GitHub repo as well as the website.

One more thing: Data on EKS is built on the EKS Blueprints project that Giovanni mentioned earlier. Data on EKS has five patterns: you can build data processing and analytics platforms and AI/ML workloads, you can run distributed databases, you can write and build complex workflows for your data pipelines, and you can build streaming platforms.

We have seen Amazon EMR on Amazon EKS provide up to 61% lower costs and 65% performance improvement, specifically for Spark workloads; check this QR code for more details. You can leverage the patterns and configurable templates to provision the infrastructure faster, in hours and minutes, and you can pick the right-sized compute and storage to scale rapidly, even to a thousand nodes.

So that's the important benefit of Data on EKS. Now let's look at how you build an end-to-end data platform. At the core, of course, you use Amazon EKS Blueprints. What we see is that customers typically start with stateless applications and then move to stateful applications, like AI/ML and data, as they become more mature. Customers also want to leverage a hybrid machine learning platform, or hybrid machine learning architecture.

So they can run Kubeflow for experimentation, use Amazon EKS for high scale, especially with GPU instances, and then later integrate with Amazon SageMaker. You can also run, deploy, and tune large language models on that platform. And finally, Amazon EMR is another popular use case to run on Amazon EKS: you can run your big data platform and Spark workloads side by side.
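
As a sketch of how that EMR on EKS attachment works (using the CloudFormation-level CDK construct; the cluster name and namespace are placeholder assumptions), you register an EKS namespace as an EMR virtual cluster and then submit Spark jobs to it:

```typescript
import * as cdk from 'aws-cdk-lib';
import * as emrcontainers from 'aws-cdk-lib/aws-emrcontainers';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'EmrOnEksStack');

// Register a namespace of an existing EKS cluster as an EMR virtual cluster;
// Spark jobs submitted to it run as pods, side by side with other workloads.
new emrcontainers.CfnVirtualCluster(stack, 'SparkVirtualCluster', {
  name: 'spark-on-eks',
  containerProvider: {
    id: 'blueprint-demo-cluster', // placeholder EKS cluster name
    type: 'EKS',
    info: {
      eksInfo: { namespace: 'emr-data-team' }, // placeholder namespace
    },
  },
});
```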

Now, moving to the next slide: as a partner, what is the partner opportunity, and how can you build accelerators for this?

As a partner, you can build professional services and managed services offerings around this, and you can differentiate your services; some of our ISV partners have already built observability, scalability, and security-specific offerings around Amazon EKS accelerators. As a partner, you can integrate Data on EKS as well as EKS Blueprints as part of your delivery capabilities or delivery projects.

So you can improve developer productivity with it. And finally, because these are infrastructure-as-code templates, you can provision infrastructure faster and build a POC environment faster, so you can earn the customer's trust.

Now, let's hand over to Giovanni to talk more about how partners can earn customer trust on their modernization journey.

Thank you, Pras, and thank you everyone again for sticking with us; I'll get us through here real quickly. So, we are a technical crowd here, of course, but with modernization we also have to address some of the business elements with some of our customers. And so you, as a partner, are going to want to know how we can unblock stalled modernization programs and earn trust with our customers.

Has anyone ever participated in an Experience-Based Acceleration, an EBA, with AWS? Anyone? Wow. OK, not many of you; so this is a good quick introduction. This is a mechanism that we use for unblocking stalls with customers, and there are a number of different EBA variants. This is specifically the modernization EBA; we call it ModAx.

We have a number of customer examples of success with this. It's a very straightforward methodology; part of the secret sauce is in the assessment and architecture, and this is where we apply a lot of these pathways together. So we start with an assessment, using a tool called MoTA, and that is a tool that is available to you as a partner.

It's now in the A2T portal, so please contact your alliance leads for access to it. The assessment covers both technical and business dimensions. It leads us to the EBA itself, the modernization party, which is what most folks associate with an EBA.

Ultimately, it helps our customers bring their core teams together very quickly, and we find that is the key mechanism to get a lot of blocked projects moving forward. And many of our partners have actually supported some of these ModAx engagements; you can become a ModAx delivery partner as well.

So first, if you'd like to learn more about this methodology, you can use the QR code on the left to find out more. We'll be running some boot camps in the first half of next year, and with the QR code on the right you can register your interest with that mailing list. I'll give you a moment there.

All right. So I'm just going to send you folks away with a couple of takeaways. Our goal for today was simply to demonstrate a way you can combine these different tools, which you may already be familiar with, into an end-to-end process. OK. So the first takeaway is that each of these pathways is technical in nature, but there's a value benefit at the end of each one of them. And we saw that with Alex's example, where for each incremental pathway taken, there was a quantifiable business benefit.

And so think about these pathways, both the technical and the value sides, as a value framework, so to speak, and help customers answer these questions: where do I start? What do I use? They want a modernization road map; use the pathways to provide them that.

And lastly, your last takeaway: we've introduced a lot of tools to you today. Maybe you've already developed a lot of these tools in-house on your own, but we think you can bootstrap a modernization practice by adopting some of the elements that we've talked about today.

Some of our APN partners, such as vFunction, you can reach out to at the end of the session if you'd like. Refactor Spaces, Babelfish, EKS Blueprints, Data on EKS: please check them out and see how they could be leveraged in your practice.

"And with that, uh we'd really love for you to complete the survey. We are a data driven company. We'd love to know what you, what you thought of us today.

Um and one last thing also is the partner pathway assessment that pradesh mentioned earlier. You can reach out to your partner development reps to uh enroll and request uh this assessment as well.

All right. So these resources will be available to you in the pdf that will be attached to the artifacts at the end of probably about two weeks, i think just about and with that myself, pras and alex.

Um thank you very much and wish you safe journeys.

Thank you very. Thank you. Thanks for coming."
