Use AWS data lakes and RISE with SAP to transform business processes

Good evening, everybody. Welcome to re:Invent. And I think it's been a long day for most of us, but welcome to Session ENT324 - Using AWS Data Lake and RISE with SAP to Transform Your Business Processes.

The session is with one of our customers who's taken this journey to use AWS services to transform not only their IT but also their business processes. And that's what we're fundamentally going to be talking about today as we share the journey Apollo Tires India has taken to transform themselves.

I'll be joined on stage by Himesh Hasan, who's the Chief Digital Officer and Chief Supply Chain Officer at Apollo Tires. It's been a pleasure and privilege working with him over the course of the last few years and seeing how this journey has panned out. And that's what we're gonna be focusing on: sharing the learning experience, the partnership, and what it takes, once you set out a vision, to actually go about realizing it.

My name is Saurav, I lead the EPN AWS business line for India. I've been in this industry for over 22 years, and in the SAP ecosystem for close to 14 years, including working at SAP, and I'll try and share how we go about working with customers as well.

I'll also be joined on stage by Ani. Ani is a Principal Solutions Architect who works very closely with engineering teams, helping set up how we deliver RISE for our customers.

So, a brief insight into what we're going to cover today in the session. Himesh is going to share the vision that he laid out when he charted the digital strategy for Apollo. We're gonna talk about how this journey started: how we went about looking at what Apollo needed and going the distance to make that vision a reality; how we collaborated, AWS with Apollo, to bring the right solutions, the right strategies, the right perspectives; and then a little detail around some of the benefits and use cases Apollo has achieved over the course of the last few years. These should be great learnings for you to pick up wherever you see them to be relevant, because we believe these learnings are the reason we want to talk to you and see how this helps you.

And at the end, I'll hand it over to Ani to talk about what's going on in the world of RISE: what we're doing with SAP, and more insights into the engineering and collaboration there. With that, I'd invite Himesh on stage. Himesh, it's been a pleasure working with you and we'd love to hear the story from you. Thank you.

So, a very quick overview of Apollo Tires. As Saurav said, we are India's largest tire manufacturer. We obviously have a huge presence in India, and through a second brand called Vredestein, we've got a fairly large presence in northern Europe. So those are the main home markets geographically, but we're growing in the US.

We sell tires into all of the main types of vehicles, as you can see here. Ok. So, to where the journey started: I joined Apollo three years ago as Chief Digital Officer, so my responsibility spans the whole digital space, the digital transformation - the IT, the ERP system and the underlying infrastructure. And for my sins, because digital technology plays such a pivotal part in running a global supply chain for a large manufacturer, I was given the Chief Supply Chain role as well not so long ago. And it does make sense, because it is such an integral part, right?

So what we then developed was clear: we said the whole digital and technology strategy, even the look at cloud, needs to be worked backwards from our core business strategy, right? And our core business strategy was to become a $5 billion business with 15% EBITDA and 12% return on capital employed. We've set that for ourselves for the fiscal year 2026.

And I know this is gonna be a busy slide. But what this shows is that everything we did in digital and technology had to have a clear line of sight to these core elements of the business strategy. So, you know, it's about delivering highly transformative customer interaction. A highly efficient order-to-cash cycle. And we did use this phrase, the "Amazonization" of our supply chain, which basically means agility and speed at a target net working capital level. Then we saw Industry 4.0 manufacturing as a key element of that. And certainly driving back office functions.

Now, it is a very busy slide. What we did say to start with: anything related to legal and sustainability had to be a priority. But apart from that, we were gonna focus first on delivering efficiency and transformation to our customers and linking our supply chain, and then a big focus was on driving manufacturing efficiency. The reason we selected those as the core priorities is that if you look at tire manufacturing, it's highly capital intensive, ok? And our return on those assets was below peer target levels. So we made a pledge to use data to really understand where we're losing efficiency and to drive forward with that. And that was really the beginning of our journey towards cloud and our selection of our cloud partner, which we'll talk a little later about.

Yeah, so I'm not gonna go through this. It's a repeat of the high-level strategy, but it goes to show, in these key areas I just talked about, what the core processes were that we had to be excellent at, and therefore where digital and technology had to be a key enabler in those core processes.

Ok. As I said, if you look at the middle bucket, picking up on a few things: we said we had to really drive more kilograms of tire per head count in the factory, and thereby reduce conversion costs, and certainly use that to then drive asset turns and asset utilization.

So given that, what we said was that we had to really look at transforming the underlying technology infrastructure. Ok? It was about utilizing and outsourcing key elements of it, creating resiliency and business continuity. SAP was on prem - and to give you an example on business continuity, we did have a backup with another data center, however it didn't give us the resiliency or the business continuity we needed. So this was identified as a high business risk, and that was one of the reasons, plus a couple of others, why we migrated to the cloud, and we did select RISE with SAP on that journey.

Ok. And the other key reason was to operationalize - or what I call it, turn IT infrastructure from capex to opex. Ok? So when I joined, we had very little use of cloud. Today, I can say that close to 85% of our data and compute runs on cloud, and predominantly AWS cloud. So the data center we had is now being turned into office space. Ok?

And it's not just typical applications, you know, like SAP or non-SAP applications. Interestingly, a lot of our automation systems are there too. The automation providers - the Rockwells, the Siemens of this world - are great on the hardware side, but they're not excellent on the software side. It does the job in terms of running the automation, but in terms of data management, visualization and insights from that, they're not very good.

So what we do is we actually use the cloud environment not just as the backup of the data, but as the whole visualization and analytical layer, and we keep very little data on the edge on these automation systems. That has not only saved storage and compute costs at the edge, but it has also significantly improved the efficiency of those automation systems.
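The pattern Himesh describes - very little data retained at the edge, full history in the cloud - can be sketched as a bounded local buffer. This is a minimal illustration, not Apollo's actual implementation; the window size and record shape are hypothetical:

```python
from collections import deque

# Illustrative edge-side pattern: keep only a small rolling window of
# readings locally; forward every reading to the cloud, which holds the
# full history for visualization and analytics.
EDGE_WINDOW = 5   # hypothetical local retention

edge_buffer = deque(maxlen=EDGE_WINDOW)  # old readings fall off automatically
cloud_store = []                          # stands in for the cloud data lake

def ingest(reading):
    edge_buffer.append(reading)
    cloud_store.append(reading)

for t in range(12):
    ingest({"t": t, "temp_c": 150 + t})

print(len(edge_buffer), len(cloud_store))   # 5 12
```

The design choice is that the edge pays only for a constant-size buffer, while storage growth and the analytics workload shift entirely to the cloud tier.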

So we went through a process of selecting a cloud provider, and I know a lot of CIOs and CDOs talk about multi-cloud. And yes, you should certainly look at multi-cloud - keep your cloud providers on their toes in terms of commercials. But in the initial phase of the journey, we wanted to work with one partner, and we made that very clear in the selection process. We went with AWS because they had real strength in the whole IoT and manufacturing environment. And that partnership has helped us a lot in terms of the journey - educating the internal IT function and also transforming the IT function.

Ok. So with that, let me hand you back over to Saurav, who's gonna talk through the journey. Thanks, Himesh. I think one of the factors, while this evaluation and the whole discussion was ongoing, was: how do we ensure that a vision translates fast enough, soon enough, and we start getting some quick results? And when we were doing that, the brief we had from Apollo was: look at all aspects, right? Look at what we need to do to get quick wins, how we transform our legacy applications, and how we go on the journey.

So we worked backwards from the requirements, and what we laid out was a two-speed strategy for this roadmap. We looked at some quick-start projects that could drive efficiencies - that's where we started with IoT - and then we bunched the whole transformation program around the data lake and the migration of the ERP systems.

So for this, we ran a lot of workshops and ideations, did immersion days, and got the Apollo team across to get a feel of what other customers have done. We fundamentally focused on purpose-built solutions - what mattered most for Apollo - and looked at partners who could help them deliver the right use cases. We backed it up with the investments and programs we usually bring to the table, so that the projects were done at an optimal cost. And that helped us get to the outcome, which was laying out a roadmap for how they could do this. We'll talk about the timelines shortly - this was not a single-day project; it obviously went on for quite a few months.

So that's largely what we focused on. During the discovery sessions, we figured out a couple of very strong standout things Apollo needed in order to go to the next stage. For example, the entire approach to how they could cut down on capex - the need for frequent upgrades and hardware refreshes could be avoided - and how to turn that into the opportunity to go to cloud. Himesh talked about improving the posture in terms of high availability, security and resilience, and the need for a data strategy - because without all of these, how do you actually go on to do the AI/ML that whole parts of the industry were moving toward?

So if we look at the pillars, in terms of how the project panned out as part of the roadmap, there were clearly four distinct parts to this. One was definitely taking SAP into the cloud - all the processes running on SAP, plus some other applications like Hyperion used for financial close; I'll talk about RISE in a while. The second was how you build a unified data landscape - the brief was a single location - and how you build a data strategy. How you go on the AI/ML journey was the third pillar we looked into. And the fourth, which was most critical for Apollo's transformation, was how you ensure the Industry 4.0 journey happens.

A quick view on how we went about doing this transformation around ERP. A quick ten-second brief - I think most of you would know what RISE with SAP is: it's a managed services offering from SAP, taking in the infrastructure services, running them, and offering that under an SLA, right? And in this project, the entire set of systems that were on prem, like Himesh mentioned, was moved onto RISE; some parts went onto native AWS because of the integrations, etc. And we successfully migrated 120 applications.

They were also using a whole lot of RISC-based systems, so there was a transition onto the x86 architecture, and there was also the need to move all the integrated applications around it. All of this also needed an upgrade to HANA 2.0, plus the need to improve the business resilience and HA/DR posture as well.

So we looked at some of the technical drivers that were very critical for this migration. One was definitely how you avoid the capacity enhancements and frequent upgrades being warranted by the peak loads that would typically come at month end - and various parts of the business also had seasonality, so how do you ensure the system lives up to supporting that? Another was the ability to provide the business continuity the business was asking for - their on-prem setup did not meet those standards. These came together to formulate the strategy for why they should move ERP to the cloud, and were primarily the drivers for moving SAP onto RISE.

And once we were clear that we would migrate SAP onto RISE, we also wanted to create an architecture that ensured their SAP applications run seamlessly with their non-SAP integrated applications. At the same time, there was a need for a lot of IoT integration with systems that would still stay on premises - which I think is the case with a lot of enterprises today. You will not be able to move all your applications in a day or a couple of weeks; it will happen in the fullness of time, and the journey will warrant an architecture where you can bring in projects one at a time, at your own pace and in steps, right?

So this helped us ensure that we were seamlessly able to move workloads whenever we wanted to. We took advantage of the AWS capabilities for creating virtual private clouds, ensured there was peering, and brought in the right elements of networking and security, so that you're seamlessly able to use these target architectures.

Key requirements we kept in mind while this migration was happening: since there is a disruption anyway, let's use it to upgrade to the latest versions and improve the security posture. There were quite a few contractual constructs that were up for renewal, so we wanted a migration plan that made good use of that window. And we wanted to minimize business downtime - the entire cutover happened in less than 48 hours. For a business that's global, run across various continents, it's very important to minimize downtime, so mock runs, etc. were done to keep that to a minimum. Last but not least: how do you augment your business continuity posture? We went for a concept - and I will explain this in a little more detail - of zero RPO. The business was ok with something less than 15 minutes, but we went with zero RPO. And we successfully ensured 300-plus integrations were migrated, because in a manufacturing organization there are always systems that talk to each other; you cannot just move systems to the cloud in an isolated manner without thinking about what the downstream integration landscape is likely to look like. So that was the story.

That was the first pillar - I talked about how we went ahead and did the ERP migration. Once that was done, I think the next critical thing was the data strategy. And this is where the brief from Himesh was that, being a multi-locational organization, there was a need to bring all the data together. To add a little more perspective: there was a real need for data democratization. The brief was, how do we bring everything into one location, into a single data lake, right?

So one of the challenges we had, as most corporates will have, is that data is in silos, not in one location, and certainly not transformed, blended, merged and available for consumption, right? So we made a big effort to come up with a common data standard and common metadata. Certainly we couldn't retrofit that into the transaction systems - that's too difficult. But we said that's how the whole globe, from executives to regional to local management, will look at data: a common product hierarchy, customer hierarchy, machine hierarchy.

One of the biggest reasons we wanted to do that was because the first source of business benefit or efficiency gain comes through benchmarking. Factory A is producing this tire in a cycle time of, let's call it, a minute - why is this factory taking a minute and a half? Those are very quick wins, and you can only get them when you bring data together and harmonize it in one environment. And the other thing we've done, by making it available, is democratize the whole analytical phase. We've got Power BI power users all over the world who now know how to use this data and how to interact with the data in AWS. Cool.
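Once the data sits in one harmonized hierarchy, the cross-plant benchmarking described here is a simple comparison against the best performer. A minimal sketch - the plant names and cycle times are hypothetical, echoing the one-minute vs. minute-and-a-half example:

```python
# Hypothetical cycle-time benchmarking across plants, assuming the data
# has already been harmonized under a common machine/product hierarchy.
cycle_times_sec = {        # average cycle time per tire, by plant
    "Plant A": 60,
    "Plant B": 90,
    "Plant C": 72,
}

best_plant = min(cycle_times_sec, key=cycle_times_sec.get)
benchmark = cycle_times_sec[best_plant]

for plant, secs in sorted(cycle_times_sec.items()):
    gap_pct = (secs - benchmark) / benchmark * 100
    print(f"{plant}: {secs}s per tire ({gap_pct:+.0f}% vs {best_plant})")
```

Plant B's +50% gap against the benchmark is exactly the kind of quick win the talk refers to - visible only once all plants report into the same model.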

And I think that brief led us to put a modern data strategy in place, which meant there was data that would come in at various speeds. Because there was an approach to Industry 4.0, there was a lot of streaming data that could come in - I'm going to talk about that in a little detail as well - which meant you should be able to put a data strategy in place that brings in data at scale.

How do you choose the parts of the solution that give you the best price performance? You can pretty much go and select the best of the tools, but then how do you ensure they come at the right cost-benefit for the business? And then, once you bring data together, how are you improving governance around it? There's a whole stress around data security, and with a lot of things coming into the cloud, we wanted to ensure that's taken care of. So that's the approach we take when we go and talk about a modern data strategy.

And the journey, from an Apollo standpoint, was: at first, the approach was how you bring a lot of this data together for a multi-location operation like theirs - how you bring all of it into one location and a single data warehouse. So the first phase was integrating all the data sources, setting up the data lake foundation, and starting with the initial analytics and visualization, so that you entice and bring users onto the platform. Then, conceptually, as you grow the data warehouse you plug in more sources and improve your analytics use cases. So we started looking at a lot more business data and the complex insights that were needed, and at automating some of the operational reporting - because that's how analytics adoption in an organization grows, right?

And then once you've brought all of this together - and I think they were also shutting down data warehouses at that point in time, which had come up through various siloed approaches - you hand over some of these data elements to the data science teams to be used for AI/ML and insights. That required clean data to come together. The AI/ML teams then wanted to look at what predictive use cases they could start on, which could fuel a lot more insights for the business. And there's the democratization of analytics: a set of data meant for user A may not necessarily serve the purpose for user B, so how do you give access to all sorts of data? Someone wants it summarized, someone wants the entire raw data - how do you ensure all of that fits into one architecture? That's very important, because you might say, look, I've cleaned all the data and given it to you for use, but someone on the ML and data science side of things may want to train on the raw data. So how do you ensure all of this is taken care of?

So we built an architecture that ensured all the data from the various sources came together. We had data sources from PLCs - streaming data coming in via AWS IoT Greengrass, one of the managed services that lets you get streaming telemetry from your systems. There was a lot of data coming in from custom applications with their own databases, from ERP applications like SAP, and from other data sources. Then there was the ingestion of streaming data using AWS IoT SiteWise on the IoT side of things. AWS DMS was used to bring in data using change data capture from the custom RDBMSs, and in some parts AWS Glue was also being used. So we looked at all the options available to ensure a scalable data strategy - various types of data, brought together - so that all of this works as an elaborate data strategy, meaning that tomorrow, if you were to do a lot more, this would support it as the base layer.
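The change-data-capture piece mentioned for the custom RDBMS sources boils down to replaying insert/update/delete events against a target copy. Here's a toy sketch of that idea in plain Python - the event format is illustrative only, and real AWS DMS output looks quite different:

```python
# Toy change-data-capture replay: apply a stream of change events to a
# target table held as a dict keyed by primary key. The (op, key, row)
# event shape is made up for illustration; actual AWS DMS records differ.
def apply_cdc(target, events):
    for op, key, row in events:
        if op in ("insert", "update"):
            target[key] = row        # upsert the new row image
        elif op == "delete":
            target.pop(key, None)    # remove if present
    return target

target = {1: {"qty": 10}}
events = [
    ("insert", 2, {"qty": 5}),
    ("update", 1, {"qty": 12}),
    ("delete", 2, None),
]
print(apply_cdc(target, events))   # {1: {'qty': 12}}
```

The point of CDC in this architecture is that only the deltas cross the wire, so the lake copy stays current without repeatedly re-extracting full tables from the source systems.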

And all of this data was initially brought into Amazon S3 as the data lake. This was done in an offline manner, so that tomorrow, if certain links get cut off, you don't have to re-ingest the data. The warm data would pretty much stay in S3, and if it doesn't get used over a period of time, S3 Intelligent-Tiering moves it to the cold tier. The data that would fundamentally be used for analytics and other purposes was moved into Amazon Redshift for consumption at the analytics layer. Some of the other users who wanted to query a lot of the raw data on S3 - Apollo started using Amazon Athena for that. And to develop more AI/ML use cases, which the teams were doing, they started using Amazon SageMaker as well.
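The warm-to-cold flow described here can be expressed as a simple access-age rule. S3 Intelligent-Tiering applies this automatically on the service side; the function below is only a sketch of the idea, with thresholds modeled on (but not guaranteed to match) the service's defaults:

```python
# Illustrative tiering rule in the spirit of S3 Intelligent-Tiering:
# objects not accessed recently drift to cheaper storage classes.
# The 30/90-day thresholds here are a sketch, not the authoritative
# service behavior.
def pick_tier(days_since_last_access):
    if days_since_last_access < 30:
        return "frequent-access"
    if days_since_last_access < 90:
        return "infrequent-access"
    return "archive-instant-access"

for days in (3, 45, 200):
    print(days, "->", pick_tier(days))
```

The attraction for a data lake like Apollo's is that nothing about the query path changes - Athena still reads the same S3 keys - while storage cost drops for the data nobody is touching.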

So if I summarize: we talked about the ERP pillar, we talked about how the data strategy came into being, and we talked about some of the ML conversations. Fundamentally, the solutions we proposed to help Apollo were about a few things. Build and deploy faster - which means look at purpose-built solutions, solutions that can solve and simplify problems, and select them; that was the approach. Gain fast time to value - which means go with MVPs wherever possible; that helps you test a lot of things and at times helps you prove the business cases, for example the investments around IoT Greengrass or Glue, so we started with MVPs and then industrialized them. Go with the right architectures, so that you improve the posture around governance, security and resilience - that's really critical for manufacturing organizations today. Deploy the right services - and when I say right services, I mean not taking a one-size-fits-all approach; if it's streaming data, look at what sort of solutions are right for that. And more importantly, look at the right partners - no one partner can do everything, so select the right partners to ensure the right outcomes are delivered. And of course, wherever we could, we brought in AWS best practices - things we learn from our customers' implementations - so that it seamlessly helps the customer invent and simplify.

I'll now invite Himesh to talk about how this journey expanded. Obviously, some of these pillars that you've seen definitely did not happen in a day, right? So as you listen, I want you to take away that this is a journey to go through. Yeah.

So, you sort of talked about how we started this: we did these MVPs, we then created the data lake, right? And we've seen some fantastic benefits from this. First of all, as I said, we now have 80-plus percent of our data and compute in the cloud. And you know, when I've spoken to a lot of CIOs, they've said, oh, cloud's a lot more expensive than on prem. I would actually challenge you the other way around. If you manage cloud and you understand how it works - how to use the data, being very clever in segmenting the data and having purpose - it can actually be more cost effective. We also saw, very interestingly, when we moved SAP onto RISE on AWS cloud, that the performance improved - response times improved by 30%, because we've been measuring that. And we significantly reduced the number of unplanned downtimes. It used to be that when we got to month end, the response time was very slow; today, I don't get any escalations during month end. And that's a big, big benefit for me.

Then on the Industry 4.0 side, as I said, we have seen some great savings on efficiency. Mixers - when you look at a tire, a black round thing, it starts with compound mixing, and those are about four-story-high big machines with 180 sensors. We've got 45 or 50 of those around the world, and we basically got the data, looked at it, and the immediate savings, just using benchmark data, were around 20%. Yeah. And we are now targeting 50% productivity gains.

The other big benefit of moving a big, big enterprise system like SAP onto the cloud is what I call removing the brains out of ECC - because we run SAP ECC on HANA, right?

So what do I mean by that? Your core ERP is relatively static; it's heuristic, it's rules based. So what we've done with RISE, and the licensing structure we've agreed with SAP, is that by putting it into the same cloud environment, we're able, in certain core processes, to bring in intelligence outside SAP - either in-house developed or best in class.

For example, we're working with a startup to improve sales forecasting, ok? Because the SAP forecasting model is not great, here's this specialist forecasting environment. Another one we're looking at is credit control. Half of our orders get stuck in credit control, because the rules you have in SAP for credit control are quite static, right?

So now we're looking to develop an engine, integrated with Dun & Bradstreet, that does it more dynamically. And you can do this much more quickly because you're running it in a single infrastructure.
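The static-versus-dynamic contrast can be sketched like this. Everything below is hypothetical - the limits, the risk score, and the scaling rule are invented for illustration and are not Apollo's or Dun & Bradstreet's actual model:

```python
# Hypothetical contrast between a static credit rule (fixed limit, as in
# the SAP setup described) and a dynamic one that flexes the limit with
# an external risk score. All numbers are illustrative.
def static_credit_check(order_value, credit_limit=100_000):
    return order_value <= credit_limit

def dynamic_credit_check(order_value, base_limit, risk_score):
    # risk_score in [0, 1]: higher = safer customer right now.
    effective_limit = base_limit * (0.5 + risk_score)
    return order_value <= effective_limit

order = 120_000
print(static_credit_check(order))                 # False: order blocked
print(dynamic_credit_check(order, 100_000, 0.9))  # True: limit flexed to 140k
```

The same order that a fixed limit would park in credit control clears when an up-to-date external score raises the effective limit - which is the "fewer orders stuck" outcome the talk is after.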

So that's what I mean by removing the brains out of SAP. And we'll do more and more of this in the world of supply chain - things like safety stock calculation and lot size calculation become much easier.

And we think that way now. When we say, ok, let's look at how we optimize the lot size, we think in a way that's very different to when everything sits in a siloed environment.

Yeah, on the world of IoT and Industry 4.0. Industry 4.0 is not a silver bullet; there is not one set of actions that's suddenly gonna give you 15-20% gains in OEE or manufacturing efficiency. Ok.

It's a series of projects. When we started to collect this streaming data into the cloud, at any given time we would be running 30-40 projects in parallel - relatively smallish projects run by people at the different plants, right?

And what we did was we said: first, let's look at the capacity - the bottleneck capacity, ok? - and let's put in projects there to de-bottleneck it. And then you go into the production processes which have high conversion costs.

So it was a constant stream of projects: looking at the data, analyzing it, data scientists working with it to find the root cause, then the industrial engineers coming in to find the solution.

Then the beauty of this is, because we've harmonized the data, everybody can see each other's data. So you'd take that best practice and cross-fertilize it across all of the plants.

Yeah. So through that, we have, you know, made some great progress, and I'll talk a little bit more about those benefits along the line. But just to also highlight the journey - one of the things we really wanted to do with this Industry 4.0, right?

The challenge is not technology, right? Most of these technologies you can now buy with a credit card, ok? And in some cases they're free, because they're open source. The challenge was change management and mindset within the organization.

So what we therefore did was we said the only way we can do this is if we do it really fast. So one of the agreements with AWS and the partner we selected was that we created this thing called the IoT in a box. Ok.

So we took the relevant technology that linked and connected to our machines to stream the data, and we said that nobody can touch it, nobody can customize it - none of the plants. We're gonna plug it in, and there will be a central team that does that.

So for the first set of machines, that's what we did. And therefore, within three months, we had set up the data lake, we were streaming the data, and we had started to visualize the data across all of the plants.

In this particular case, we actually selected the compound mixing machines, and that was the best way to sell the art of the possible to the production environment - you know, production people, industrial engineers, etc. We saw that as a key selling point.

And after that, pretty much, we then empowered the industrial engineers to go and do the connection - take the IoT in a box, link it to their machine controllers, the PLCs, and start streaming the data. Ok?

So today we must be streaming - my guess is - probably 1,500 data elements. Temperature is a data element; door open and door close are two other data elements. And because some of these machines have lots of sensors - they're highly capital intensive - and then you have that number of machines, a huge amount of data gets streamed.
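To get a feel for why "a huge amount of data gets streamed": data element types multiply by sensors per machine and machine count. A back-of-envelope sketch using only the mixer figures quoted earlier in the talk (180 sensors per mixer, ~45 machines) - this counts live sensor signals from one machine type, so it's a lower bound, not a fleet total:

```python
# Back-of-envelope count of live streamed signals, using the compound
# mixer figures quoted in the talk (180 sensors per mixer, ~45 mixers).
# Other machine types are omitted, so this is a lower bound.
sensors_per_mixer = 180
mixer_count = 45

signals = sensors_per_mixer * mixer_count
print(f"{signals} live signals from compound mixers alone")
```

Over 8,000 concurrent signals from a single machine family explains why the edge keeps almost nothing and the lake does the heavy lifting.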

And the interesting thing is, now people know what data to look at, and where - and how to use the data to really find the root cause behind why they're losing efficiency.

So that journey has given us a lot of - I think the word I wanna use is confidence. Ok?

With those operational colleagues, whether it's in supply chain or manufacturing, or certainly even the C-suite - the word is confidence. We've seen the quick wins, and in the case of manufacturing it's given us a productivity increase, right?

And what these numbers tell us, and the confidence they've given us, is this: in highly capital-intensive environments like tire manufacturing, car building, automotive assembly, your increase in capacity comes in big chunks. OK?

There's a lot of civil works, a lot of machines you need to buy, so there's a long lead time. You need to forecast what your sales are going to be in two to three years' time.

So we took the decision that for the next chunk of capacity, given the success of this and the use of data, we're not going to buy the next set of machines we would have bought based on recent levels of manufacturing efficiency. OK?

We bet ourselves that we'd use the data and be able to unlock more capacity instead. That's the journey we've gone on, and we've now gone past the point of no return: it has to work to achieve the sales targets next year and the year after.

And what that's done is force even the naysayers to accept that the only way forward to achieve our sales target is to use data to unlock more capacity in the supply chain.

So it's a brave move the CEO has made, and we're sticking to it. Some of the early projects gave us the confidence to do that.

The other crazy thing we've done is the manufacturing execution system. For those who may not know, an MES is the core system in the manufacturing environment that runs your manufacturing. OK.

It has the bill of materials, it has the routing, it tells you what to move and where to move it. It controls every single batch that goes through, and it creates the genealogy from the finished tire all the way back to the raw material: which batch of raw material went in.
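That genealogy can be pictured as a tree of batch records: each produced batch lists the input batches it consumed. A toy sketch, with invented batch IDs (the talk doesn't describe Apollo's actual data model), of walking from a finished tire back to its raw-material batches:

```python
# Toy batch genealogy: produced batch -> input batches it consumed.
# Batch IDs are invented for illustration.
GENEALOGY = {
    "tire-9001":      ["green-tire-800"],
    "green-tire-800": ["compound-55", "steel-belt-12"],
    "compound-55":    ["rubber-raw-3", "carbon-black-7"],
}

def trace_to_raw(batch: str) -> list[str]:
    """Return the raw-material batches (leaves) behind a finished batch."""
    inputs = GENEALOGY.get(batch)
    if not inputs:                  # no recorded inputs -> raw material
        return [batch]
    leaves: list[str] = []
    for parent in inputs:
        leaves.extend(trace_to_raw(parent))
    return leaves

print(trace_to_raw("tire-9001"))
# -> ['rubber-raw-3', 'carbon-black-7', 'steel-belt-12']
```

This is why an MES accumulates so much data: every batch movement adds edges to this tree, and a recall or quality investigation needs to traverse it in either direction.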

So it's a huge amount of data. It also controls and instructs the machines: these are the orders you are producing. Typically MES systems come from the large players, the Siemens, the Rockwells, the GEs. These Goliath systems are huge, and we have them too on some of our lines; they run on-prem. OK?

Because if you tell a manufacturing person you want to put this in the cloud, they'll say no way, because when an MES stops, the factory stops. OK.

But we were not very happy with these huge Goliath systems. So we decided to work with a startup, create our own true cloud-native MES, and put it in the cloud. OK.

We did that for one of our large factories in the state of Gujarat in India, and it actually performs better than the on-prem environment. I'm not a techie, don't ask me how that works, but it works.

Now everybody's coming to me and saying: when are you going to put our MES into the cloud? What it does do, though, is put a much bigger strain on your network and connectivity; you need resiliency, performance, and high availability there. OK?

But interestingly, the advantage is that it now allows better interconnectivity with other systems in the ecosystem, and the scalability is a big thing. The biggest challenge with an on-prem MES is that you're constantly putting in more hard disks, more CPU, et cetera, because with more transactions and more data it just gets bigger and bigger.

Obviously, in the cloud you don't have to worry about that. So what's our target end game? All of this, as I said, gives us confidence; that's again the word to use. It gives us a little bit of swagger to say: right, let's look at the next phase of our journey.

So really, the whole thing is to continue to drive IoT and machine learning, and certainly bring in AI and generative AI, as we've all talked about. Data is the key differentiator.

And we can continue to connect our vendors and our customers together, and pursue this whole idea of putting the brains of our enterprise into the cloud: use best-in-class services or apps, or develop in-house for the critical differentiators that matter for a manufacturing organization.

As I mentioned, sales forecasting and production scheduling are hugely important for us; scheduling is another source of capacity extraction. These are the key areas of benefit we're targeting.

So, the lessons learned. Get the technology build-out of the way quickly: what we did was avoid turning it into a huge techie IT project where the business doesn't see the end of it for a long time. Three months: select something, get it out of the way, and then work on the data. And always work backwards from a problem statement; try to avoid having a solution looking for a problem. Turn it around and work backwards from the problem.

We've also insisted that problems must have a direct line of sight to our business priorities and the strategy I outlined before.

There's no big bang here, OK? You have to break this into small, bite-size pieces. And finally, as I said, leverage partnerships. We like partnerships: we work very closely with SAP on our business process improvement, and AWS is our cloud technology partner. Work closely with partners, because they understand the underlying core technologies behind their services, and that has been a really powerful message for us as well. OK.

With that, I think this pretty much sums it up for us. How do you unlock productivity? How do you ensure efficiency gains come in when you're implementing such a digital transformation? And I can't close without saying this: Himesh, thank you for making the rubber hit the road and ensuring that success. It's really been a pleasure and a privilege to partner with you on this journey.

I'd now like to take this opportunity to invite my colleague Ani on stage to give a little more insight into RISE with SAP and how we take the conversation forward. Thank you.

Thanks, and thanks for sharing these customer insights. It's very cool to see how customers innovate and benefit from RISE with SAP on AWS, and also innovate using additional AWS services and SAP BTP services, which run on AWS. My name is Ani; I'm the tech lead for SAP as a strategic customer. In the last couple of minutes, I want to give you some insights into how we work, from an engineering perspective, together with SAP to bring innovation to RISE with SAP that customers like Apollo benefit from.

So let's dive into some engineering updates. As you might know, the AWS Cloud spans 102 Availability Zones across 32 geographic Regions, and we have announced 15 more Availability Zones in five Regions: Canada, Germany, Malaysia, New Zealand, and Thailand. RISE with SAP today is available in 31 of the 32 AWS Regions, and we're working hard, driven by customer demand, to maintain a global presence so we can offer RISE with SAP in nearly every AWS Region.

We're also working with the SAP engineering team to reduce the adoption time when we roll out a new Region, so the service can be enabled faster. We've reduced that time from months to a couple of days or weeks, so RISE with SAP is now available either at GA of a new Region or shortly after, and the benefit for you as customers is that you can adopt the RISE with SAP service faster. An AWS Region consists of a minimum of three isolated and physically separated Availability Zones, and an Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity.

That unique design enables SAP to offer an RPO of zero in every AWS Region. What does that mean? SAP can offer their short-distance DR, which is basically a failover within an AWS Region, without losing any committed transaction. Because of the low latency between the Availability Zones, we can establish synchronous replication between the SAP systems without losing data, and that's key: even if there is a failover, you don't want to lose data. But this might not always be good enough.
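The reason synchronous replication gives RPO zero can be shown with a toy model: the primary acknowledges a commit only after the standby has a copy, so a failover can never lose an acknowledged transaction. This is a deliberately simplified sketch, not how SAP HANA system replication is actually implemented:

```python
# Toy model of synchronous replication and RPO = 0. The commit path writes
# to both copies before acknowledging, so the standby is always complete
# up to the last acknowledged transaction. Illustrative only.

class SyncReplicatedLog:
    def __init__(self) -> None:
        self.primary: list[str] = []
        self.standby: list[str] = []

    def commit(self, txn: str) -> bool:
        self.primary.append(txn)
        self.standby.append(txn)   # replicate BEFORE acknowledging
        return True                # ack only once both copies exist

    def failover(self) -> list[str]:
        # Promote the standby: every acknowledged transaction is present.
        return self.standby

log = SyncReplicatedLog()
log.commit("order-1")
log.commit("order-2")
assert log.failover() == ["order-1", "order-2"]   # nothing lost: RPO = 0
```

The catch is that the synchronous round trip must be fast, which is why this works between low-latency Availability Zones but not across distant Regions, where asynchronous replication (and a non-zero RPO) is the usual trade-off.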

Customers are also asking for failover across Regions, and the SAP offering there is the so-called long-distance disaster recovery: a failover between two AWS Regions. This is also possible. SAP started by offering this only between specific Region pairs, so you could fail over from Region A to Region B, but you could not leverage the entire AWS footprint.

That's why we collaborated with SAP engineering to offer the so-called flexible disaster recovery option. With that, you can now choose your Region of choice for disaster recovery. Specifically for customers in India like Apollo, if they are running in Mumbai they can select Hyderabad as the disaster recovery location, and data stays in country. It also means a customer in Europe can fail over to the US, giving a really long distance and a truly geographic disaster recovery.

Another innovation we brought into the SAP portfolio is the AWS SDK for SAP ABAP, which gives SAP developers the power to change business processes in the ABAP code line by connecting to 200-plus AWS services. It's very powerful because it brings this connectivity and integration to the people who understand your business processes, and it can be easily adopted and very natively integrated.

We are also working with SAP to get this into ABAP Cloud on BTP, targeting next year. The question when we were implementing this was: how do we get this into RISE? Because RISE with SAP typically comes in an SAP-managed account, but the services you are calling through ABAP run in your own AWS account, because you want to control them.

So the question is: how do you authenticate your SAP system against the AWS services running in your account, given that you have two different accounts? The easiest and most straightforward way is to use the instance metadata, but that doesn't really scale for SAP, because they would have thousands of different roles and policies attached to an instance granting access to AWS services. And you actually want to control which services a developer can access, without having to notify SAP each time you want to change this.

That's why we decided to start with simple user/password authentication. You store the user and password in a secure store in the SAP system inside the RISE account, which authenticates it to your AWS account, and only against the services you are allowed to access.

To make that even easier, we are working on the next step: using IAM Roles Anywhere to get rid of the user/password and use certificates instead. We are planning to launch this early next year, so that you can authenticate via X.509 certificates and have full control in your account over which services can be consumed from the RISE system.
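The control point being described (authenticate first, then permit only an approved set of services) can be reduced to a toy check. Everything here, the allow list and function names included, is invented for illustration; the real mechanism is IAM policy evaluated in the customer's account:

```python
# Hypothetical sketch of the authorization gate between the RISE-side SAP
# system and the customer account: a call succeeds only if the caller
# authenticated AND the target service is on the customer-managed allow list.
# In reality this is expressed as IAM policy, not application code.

ALLOWED_SERVICES = {"s3", "sns", "translate"}   # customer-controlled

def authorize_call(service: str, authenticated: bool) -> bool:
    """Permit a service call only for an authenticated, allow-listed request."""
    return authenticated and service in ALLOWED_SERVICES

assert authorize_call("s3", authenticated=True) is True
assert authorize_call("ec2", authenticated=True) is False   # not allow-listed
assert authorize_call("s3", authenticated=False) is False   # bad credentials
```

The point of moving from user/password to X.509 via IAM Roles Anywhere is only the `authenticated` half; the allow-list half stays entirely in the customer's hands either way.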

We also spent a lot of time this year on resiliency overall, reducing the failover time of a HANA system, whether scale-up or scale-out. To reduce that time, we decided to implement a cluster, and we worked together with SAP to follow AWS best practices in the implementation and bring this into their offerings for short-distance DR, within an AWS Region, but also across AWS Regions.

I already mentioned the flexible disaster recovery option, which gives you the choice of which AWS Region to use for disaster recovery. That also brings the additional service offering, the geographic DR: you don't have to stay within a country or go to the next AWS Region; you can also select a Region that is much farther away.

Another project customers were asking for is the so-called fenced live DR test: the option to test your disaster recovery system while productive operation keeps running. SAP fences the disaster recovery system in a long-distance setup, and you can log on to that system and test your data, and also some interfaces and connectivity, while your primary productive system keeps running. Managed firewall as a service was another project where we collaborated with SAP to apply best practices and implement a scalable and robust service for you. And then there's the SAP PrivateLink service.

You might have heard of this: another project we launched this year that lets you access AWS resources in a private and secure fashion. PrivateLink provides the option, without using public interfaces, to connect from an SAP system, whether a RISE system or a native SAP system, to either AWS services or to a RISE system.

So from a BTP application, for example a CAP application running on BTP, you can use the private link to connect to AWS services like S3 and SNS, but also to Bedrock, or to connect to your SAP system running under RISE. The RISE part is planned for next year; it's already available today for native AWS customers.

Now let's look into support: how we support SAP, and what support you get from SAP. First of all, every AWS customer should be familiar with the shared responsibility model. It basically says that AWS is responsible for the security of the cloud, meaning the physical infrastructure and providing the AWS services, and that you as a customer are responsible for security in the cloud, for example defining which ports to open in a security group. The same model can be applied to the shared responsibility you have with SAP, and we replicate it in what we do.

So in the RISE context, we at AWS take care of the Regions, the Availability Zones, the physical infrastructure, and the services being used: compute, storage, networking, and the software stack. SAP takes care of the infrastructure and SAP services: the orchestration of the AWS account, the logging, the operating system management, the database management of course, and some parts of the SAP Basis management. You as a customer, or a partner of yours, are then responsible for application managed services, the actual S/4HANA transformation of your business processes, or additional consulting. Network connectivity sits a little bit in between, because here SAP, AWS, and customers share responsibility for setting up the connectivity to the RISE account. Those are the fundamentals, but how do we provide support to SAP? Because with RISE with SAP, you get one contract and one contact: SAP.

That means if you have an issue or a question, or if there is an outage, you get support through SAP. But we also help SAP not only during incidents or outages, but very much proactively. We know and understand that the workload is a critical SAP system running on these accounts, we know the architecture, and we can respond and support SAP accordingly. This happens through regular incident identification and reduction mechanisms.

We regularly run architecture reviews where we optimize services and introduce new services under the hood for SAP, which you can also benefit from and which enrich the overall portfolio. We do cost optimization on an ongoing basis, along with incident response and major event management, and we provide SAP with a root cause analysis in case something goes wrong. All of this happens in a continuous fashion, continuous optimization and enablement for SAP, so that you, in the end, get a solid and robust SAP service.

And with that, I can say that SAP runs on AWS with RISE, that we provide a reliable and secure SAP service, and that customers are able to innovate using AWS services and SAP BTP services on top, all under RISE with SAP. Thank you, enjoy the rest of the day, and enjoy re:Play.
