Save money & increase value using the CFM Capability Assessment

Good afternoon and welcome to this session, "Save Money and Increase Value Using the CFM Capability Assessment." My name is Nan Basu. I'm an AWS Cloud Economics Specialist based out of London. I support my UK customers, helping them improve their CFM capabilities and their FinOps maturity.

I'm really delighted to present this session along with my AWS colleague, Himanshu Sahu, and our esteemed guest, Christian Martini from Mercado Libre.

Quick overview of the session:

  • I'm going to talk a little bit about what CFM is and what this assessment is, take you through some of the processes we use, and share some insights that will hopefully help you take away some actionable activities.

  • After that, Christian will share the Mercado Libre story, and Himanshu will give you some best practices and actionable advice based on the insights we have found from the assessment results.

  • Then we'll have some time for Q&A at the end.

More than 450 organizations across industries and geographies have already leveraged this assessment mechanism to drive cost efficiencies and increase value. Let us talk about a few examples briefly:

TUI Group:

  • It's the world's largest integrated tourism organization with a network of 400 hotels, 130 aircraft and 16 cruise ships across the globe.

  • They've orchestrated a business transformation and technology transformation to build a resilient IT and provide an outstanding digital experience to their customers.

  • To quote Yasin Qureshi, their Head of Technology, "The CFM capability assessment helped us prioritize initiatives, achieve our goals and build a strong foundation for FinOps maturity, resulting in a 28% reduction in costs and a 12x faster time to market."

  • They achieved 28% cost reduction, up from 15% in the previous year, and have improved their cost visibility with insights, established KPI dashboards, centralized management of reservations and Savings Plans, and many such initiatives throughout the year.

  • The FinOps team is also enabling innovation at a faster rate - they rolled out a dynamic accommodation feature in 3 months, within budget and meeting SLAs. Such a change used to take almost 36 months previously.

  • They are developing a strong organization-wide FinOps community with regular training and active engagement.

Adevinta:

  • It's an online classifieds specialist operating in 11 countries with leading local brands.

  • On average, they process 1.3 billion visits to their websites every month.

  • While a digital-native company with mature FinOps, they face complexities from rapid growth and acquisitions, leading to heterogeneous cloud adoption and differing maturity across the organization.

  • Javier Gavilan, their Head of Cloud Governance, and Fran Gao, their Head of FinOps, say: "The CFM capability assessment helped us benchmark the practices implemented by the FinOps team. We will use this to calibrate, standardize and improve FinOps across our recently acquired companies and aim to capture 10% additional cost savings overall through new initiatives."

  • They have already achieved 33% cost savings this year by leveraging robust commitment-based purchasing strategies.

  • They got 5x faster time to insights on their cost and usage reports.

  • They are aiming to achieve high FinOps maturity across the organization, including new acquisitions.

So TUI, Adevinta, and many others are using this assessment as a mechanism to improve. Given more time, I could have shared further examples, but you will hear from Mercado Libre shortly for a more detailed understanding.

Before I jump into the assessment itself, let's take a quick refresher on CFM, or Cloud Financial Management. It's also called Cloud FinOps.

It's a cultural practice that enables organizations to get maximum business value by helping engineering, finance, and technology teams collaborate on data-driven spending decisions within AWS.

We use a simple framework of 4 key pillars - See, Save, Plan, Run. This provides structure to organize and group activities to continuously evolve FinOps maturity.

These areas help address challenges around measurement, accountability, cost allocation, cost optimization, planning, forecasting, and building a cost-aware culture.

When done right, organizations leverage efficiencies to meet business agility needs. We share many best practices and drive awareness, but many organizations still face challenges, captured in this popular Jeff Bezos quote:

"Good intentions never work. You need good mechanisms to make anything happen."

This captures the root cause - many don't know where to start or take a holistic CFM approach. To drive cultural change, robust mechanisms are needed.

Customers repeatedly ask:

  • I'm doing things, but where am I?

  • How do I compare with others?

  • Where do I start? What's my next step?

These are valid questions that you may have faced too. That's why I want to share the CFM Capability Assessment - a 3 step approach:

  1. Inputs & Observations: Understand current state through ~40 questions across the 4 pillars. Facilitated discussion brings stakeholders to a common baseline.

  2. Evaluate & Create Results: Average scoring per question, aggregated by pillar and overall CFM score. Visualize areas to improve via a heat map. Benchmark teams or peers.

  3. Recommendations & Action Plan: Most important - tailored recommendations and an action plan. Prioritize based on business needs.
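The scoring mechanics in step 2 are simple averaging; here is a minimal Python sketch. The pillar names come from the framework, but the question IDs and participant ratings are invented for illustration:

```python
from statistics import mean

# Hypothetical participant ratings (1-5) per question, grouped by pillar.
# Question IDs and the ratings themselves are made up for this example.
ratings = {
    "See":  {"cost_reporting": [3, 2, 4, 3], "tagging": [2, 2, 3, 2]},
    "Save": {"rightsizing": [4, 3, 3, 4]},
    "Plan": {"forecasting": [2, 3, 2, 3]},
    "Run":  {"automation": [3, 3, 4, 3]},
}

# Average each question's ratings, then aggregate by pillar
# and roll up into an overall CFM score.
question_scores = {
    pillar: {q: mean(r) for q, r in qs.items()}
    for pillar, qs in ratings.items()
}
pillar_scores = {p: mean(qs.values()) for p, qs in question_scores.items()}
overall = mean(pillar_scores.values())

for pillar, score in pillar_scores.items():
    print(f"{pillar}: {score:.2f}")
print(f"Overall CFM score: {overall:.2f}")
```

In the real assessment, the facilitator first drives participants to an agreed rating per question before any roll-up happens.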

Let's take a sneak peek at the process:

We look at a question around cost reporting and monitoring. Don't worry about the rating guide details - the facilitator will help. The key is that the discussion reveals differing opinions, which are then aligned to a common baseline.

For example, 4 participants had different views, but after discussion agreed they are at Stage 3 out of 5.

Repeat this for 40+ questions to get a holistic view of current CFM status.

Next, we see the score is 3/5 while the global average is 2.78 and top performers are at 4.13. This helps understand where you are and how much you can improve.

Finally, build recommendations - e.g. look at the cost intelligence dashboard, set up a workshop. Prioritize based on business needs and collaborate with AWS to create an action plan.

Aggregate this for all questions to get a roadmap for the next 6-12 months.

450+ organizations are benefiting. You're able to:

  • Visualize, identify and deploy the right initiatives

  • Continuously drive efficiency and cost savings

  • Prioritize based on benchmarks and business needs

  • Set goals to improve maturity

  • Standardize CFM across the organization

Let me share some data from 400+ completed assessments:

  • Global average score is ~2.8 out of 5 - the majority are at an intermediate stage, reflecting the evolving nature of CFM/FinOps.

  • 48% are intermediate, 6% are experts (score >4).

  • Top 10% average 4.12, just crossing the expert threshold.

So it's still evolving, but let's see what top performers are doing differently.

So we dug a little deeper and compared a sample of the top-performing organizations against some of the low-performing ones. We found that, unsurprisingly, the top performers - the high-maturity organizations - are able to drive more efficiencies and deliver more business value.

We took two examples. First, we looked at how they use Amazon S3 storage. High-maturity organizations keep 18% more of their data in optimized storage classes - S3 Intelligent-Tiering and the like - compared to standard storage. In other words, they match the right storage class to the right data object for optimized usage.

Next, we looked at the adoption of managed data solutions. Again, the high-maturity organizations put 22% of their overall spend into managed data solutions, versus only 13% for the low-maturity ones.

In a recent study we published on the business value of modernization, it was established that having more data for insights, faster time to insights, and more cloud-native apps results in a 28% increase in revenue for organizations using managed data solutions. So the data clearly indicates that reaching high CFM maturity helps you be more cost efficient and increase the value you get from your cloud usage.

The data tells one story, but now let's hear a real-life story. I'm delighted to invite Christian, the Head of IT Governance at Mercado Libre, to come on stage and share their story.

Thank you. Thank you very much, Nan. As mentioned, I am the IT Governance Head for Mercado Libre, and it's a pleasure to be here today - not only to share our story on cloud financial management, but also how we used the capability assessment from AWS to build a framework and a strategy which we have been executing over the last couple of years.

Before that, I would like to share a little bit about ourselves. Mercado Libre is a large tech company based in LatAm. We have operations in 18 countries, and we have grown from a very small startup in a garage into having 55,000 employees, of which 14,000 are developers who work on a daily basis to bring the best experience to our users - not only in the e-commerce industry, but also in fintech and financial services, where we have Mercado Pago, our largest product, alongside Mercado Libre for e-commerce. The latest addition has been our logistics network, probably the most sophisticated in the region, and we have built it in only five years across Latin America.

As you can imagine, all these products and services are based on 25,000 microservices, which use roughly 200 services from our cloud providers to render and build this experience for our customers - and this is no easy task. We handle 20 million requests per second and process almost 3 billion lines of billing information on a monthly basis.

We have not always been this big. We started, as I mentioned, as a startup, and we used to have a monolith. There are certain key dates I want to share with you before moving on: in 2008, we decided to move to the microservices architecture I mentioned earlier, and by 2015 we were already acknowledging that we needed to increase our speed, our time to market, so we decided to move to the public cloud.

Looking back, I think the best decision we took was not only to move to multiple clouds but also to build our own platform as a service, Fury, which brought four key benefits that we have gained along the way.

Number one, we have a more seamless experience for our developers. Number two, we use components, so we develop once and all of our developers can access and reuse that component. Number three, it reduces complexity, meaning developers only focus on delivering the product they own or the challenge they need to solve. But for this session in particular, the fourth element is, I think, the most important: the platform was always the best place for us to introduce all the optimizations for our users without our development teams even noticing.

But before moving forward, let me also share with you - we will review the tools afterwards - how we used the AWS capability assessment to build the framework and strategy we have been executing since 2019-2020.

I joined the company back in 2019, and one of the first tasks I wanted to execute was to understand not only our strengths and weaknesses but also how we benchmarked against other companies. It took a while - our AWS team helped us a lot in this sense - but by 2021, when the first AWS capability assessment was available, we decided to execute it. And it was really interesting, because it showed us that we were lacking refinement in our strategy, which had been completely designed to keep our uptime as high as possible. It also made us realize that, having Fury, we could not waste the opportunity to introduce all those algorithms and tools directly within it.

So we came up with this framework, which is very well summarized on this slide. Of course, this is a work in progress: the more we continue to mature, the more we realize that each of the layers is a capability we need to develop before even thinking about having a product delivered to our stakeholders internally.

For instance, one of the things we were struggling with back in 2021 was having the correct visibility, because most of the billing information was coming through the same pipelines as all our operational metrics. So we decided to decouple that and built specific pipelines for it. Based on that, we then started thinking about better price negotiation and how to allocate the costs of shared resources. So we built this layer, and we started thinking about how to deliver on those products.

Now let me share with you what having that visibility looks like. Roughly 50% of our costs come from compute - I'm pretty sure everyone in the audience can relate to that. What you are seeing here is not only a snapshot of our consumption on a certain date, but also how things are moving along. Based on this information, we can understand whether we need to address higher costs in compute and start thinking about why those costs are increasing. Or, as you can see below, analytics has been growing heavily for us, so we might now work on solutions in that area.

A very good example of this is a situation we came across this year. We used to be very heavy users of Spot: roughly 42% of our 150,000 instances were based on Spot, and using Spot means you need to be very aware of changes in context. Sometime this year, things changed. Luckily for us, we had all the controls and alarms in place to acknowledge those changes, and we were able to make very fast decisions to stay as efficient as possible and adapt our ecosystem. As you can see, sometime around May we decided to leave Spot.

We now have a very small percentage of Spot instances. We decided to increase our usage of Compute Savings Plans, and we always keep an eye out and leave some spare On-Demand instances, just to be sure we have the flexibility to open up to Spot once again if pools become available or something changes.

On the other graph, you can also see how we benchmark this blend of Spot instances, reservations, Compute Savings Plans, and On-Demand against efficiency levels we had in the past. Around the same time in May, we see the drop in efficiency, and then we recover slowly based on the decisions we make.

We also track what this means for us in terms of budget. Each of those efficiency levels - each blend we decide upon - has an impact on our cash flow, which means having this information also gave us the opportunity to have richer conversations with our teams - the CFO team, financial planning - about the impact we were hoping to have for that fiscal year.

Now, remember I told you about the tools and Fury, and how we decided based on our knowledge of our compute usage. Back in 2015, when we moved into the cloud, we were already working on being as efficient as possible, and - without FinOps or cloud economics even existing yet - we started working on certain algorithms which help us be as efficient as possible every day. Let's review a couple of them. The Minimizer is an algorithm which uses machine learning to forecast traffic and then adjusts the number of instances for the traffic ahead.

It either upscales or downscales the instances automatically, so you don't need the warm-up times you have with Auto Scaling groups. Right Size is self-explanatory. Memo is right-sizing for memory, which we are still trying out, with very good results.

Dynamo Optimizer forecasts reads and writes and feeds that into Dynamo Reserva, which makes all the DynamoDB reservations across Mercado Libre. Savings Plan Optimizer and Reserva basically make recommendations for us to make reservations. And probably the best-performing algorithm, the one with the biggest impact for us, is Spotify - I'm pretty sure you can find very similar algorithms in the marketplace nowadays - which uses machine learning to forecast how deep the Spot pools are for each availability zone, each instance type, each family, in each region we use. That information is used as an input to Fury, and Fury deploys using it. And last but not least, the four remaining algorithms: since we are a company which uses four major cloud providers, they coordinate between clouds, so we can move workloads and use spare capacity out of reservations we may have in any of those clouds.

This is how our fleet moves: on any given day, we go from 80,000 instances at night to over 150,000 during the day, at full load. This was a key component for us in handling the lockdown, when traffic spiked: we were already ready for it, so nothing changed for us in terms of the infrastructure team - it was really easy for us to handle the traffic.

I also mentioned that we had several opportunities along the way. We have built an opportunity management system within Fury, in which you get recommendations for right-sizing. These are either applied automatically if your application is non-critical, or given as a suggestion for the next deployment if it's a critical API. As you can see on screen, this team has several of them, which means the system is pushing recommendations not only for cost opportunities but also for security opportunities - and applying them is as easy as hitting apply.

Now, all this, as you can imagine, is a lot of work, right? So the team has been growing steadily. The first person on the team was hired back in 2018 as an individual contributor. I joined the team in 2019, when there were three of us; by December 2019, four of us. And we have grown into a 19-person team by the end of this year - meaning direct reports - plus 15 others from other teams that help us, for instance machine learning teams which help us deliver on these solutions.

I don't want to leave the stage before sharing with you how, after building this strategy using the capability assessment to shape our framework, we now benchmark against that tool again. As you can see, we improved slightly on those dimensions which are more related to centralized team decisions.

We improved on our algorithms, and we improved on visibility. We are now suffering a little on the dimensions that mean better forecasting and better anomaly detection. Why? Because we have raised the bar. We used to forecast, for instance, at the total-bill level, and we now wish to forecast at a different level - maybe team level, maybe business-unit level. That makes us rethink: what was awesome two years ago is not so good nowadays.

Now I want to wrap up with three key takeaways.

Number one: Mercado Libre is going to continue working on better allocating shared resources, since with this platform as a service, most of the services we use are shared internally.

Second, as you might wonder: we are very good as a central team, but I think we still have a very large opportunity in democratizing decision-making across Mercado Libre.

And last but not least, we are going to leverage gen AI technologies, mainly for forecasting and anomaly detection - in order to have fewer false positives - and also to build better insights for our teams to make those decisions.

I hope this was helpful. I now leave you with Himanshu for the remainder of the session. Thank you.

Thank you, Christian.

Hi everyone. My name is Himanshu Sahu. I am a Cloud Financial Management specialist within our Cloud Economics team, supporting enterprise customers in the North America region. I've been with Amazon for two years so far.

We've looked at the AWS CFM framework. We've looked at the CFM capability assessment, and we also heard from Mercado Libre how they leveraged the capability assessment to identify areas of improvement to act upon, improve their CFM maturity, and save on their cloud costs.

In this section, we will cover two things: first, how you can get started on your own by building some foundational CFM capabilities; then we'll take a look at an example of how you can build on those capabilities to develop a roadmap to meet specific business objectives.

We can group CFM capabilities as foundational, operational, and strategic. Foundational capabilities are the fundamental components that are essential CFM building blocks; operational capabilities focus on continuous improvement of ongoing CFM processes; and strategic capabilities focus on deriving business value from CFM.

Now let's take a look at a subset of foundational CFM capabilities that have been identified as having direct correlation with overall CFM maturity. In other words, organizations that reported overall high CFM maturity also reported high maturity in these areas.

The key to a successful CFM program is to first secure executive buy-in. This executive sponsor will act as a champion and an advocate for the CFM program, help identify and prioritize CFM objectives, align those objectives with organizational goals, and help remove any roadblocks. While CFM is a practice everybody takes ownership of, we recommend identifying a dedicated owner for the CFM program. Having a 100% dedicated CFM owner will help establish a standardized and programmatic approach to centralized cost management, and help drive decisions and results faster.

Establishing cross-functional relationships and partnerships is important: identify key stakeholders across business, technology, and finance, and improve their cost awareness through reporting and dashboarding, education, and sharing success stories. The key is to meet regularly and frequently.

Discussion points can include historical spend trends, actuals versus forecast, anything that could potentially increase your AWS spend - such as new migrations, new applications being developed, or net-new cloud-native builds in AWS - and anything that could potentially reduce your AWS spend through cost optimizations. As their understanding of cross-functional impact gets better, stakeholders become better aligned with CFM objectives; they better understand the relationship between AWS spend and business objectives; and over time they begin speaking a similar language, if not the same one, when it comes to cloud costs. Most importantly, you build trust among cross-functional stakeholders.

A study conducted by the Hackett Group in 2022 of more than 1,000 organizations revealed that organizations with formal, strategic partnerships in place across cross-functional teams allocate two times more cloud spend to the consuming business units than organizations that lack these partnerships.

Having a robust AWS spend reporting mechanism is crucial for improving cost transparency, driving financial accountability, and enabling informed decision making. The AWS Bills page and AWS Cost Explorer are two out-of-the-box solutions that AWS offers for you to leverage.

The AWS Bills page allows you to view your invoice by service, linked account, or region.

AWS Cost Explorer allows you to visualize your spend by filtering and grouping on multiple dimensions. As your business grows, you will have a need to report at a finer level of granularity using a custom solution tailored specifically to your organization's needs. This can be addressed by the AWS Cost Intelligence Dashboards. These dashboards are based on Amazon QuickSight; using them, you can create custom calculated fields, measure and track KPIs, and build custom views for reports and dashboards based on persona - business, finance, or other stakeholder.
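To make the grouping idea concrete, here is a toy sketch of the kind of roll-up Cost Explorer performs. The line items, service names, and account IDs below are invented; in practice you would use the Cost Explorer console or API rather than code like this:

```python
from collections import defaultdict

# Toy billing line items standing in for Cost and Usage Report rows;
# services, accounts, and amounts are made up for illustration.
line_items = [
    {"service": "AmazonEC2", "account": "111", "cost": 120.0},
    {"service": "AmazonS3",  "account": "111", "cost": 30.0},
    {"service": "AmazonEC2", "account": "222", "cost": 80.0},
    {"service": "AmazonRDS", "account": "222", "cost": 50.0},
]

def group_costs(items, dimension):
    """Sum cost by a single dimension, like Cost Explorer's GroupBy."""
    totals = defaultdict(float)
    for item in items:
        totals[item[dimension]] += item["cost"]
    return dict(totals)

# Same data, two views: spend per service, or spend per linked account.
print(group_costs(line_items, "service"))
print(group_costs(line_items, "account"))
```

The point is that one set of line items supports many views, which is why getting the underlying dimensions (accounts, tags) right matters so much.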

There are also third-party solutions offered through the AWS Partner Network with similar capabilities. The key will be to provide access to these tools not only to as many stakeholders as possible, but to the right stakeholders. The study conducted by the Hackett Group revealed that organizations that monitor their cloud spend systematically and consistently saw a 22% incremental increase in cost savings.

On the previous slide, we saw the importance of having a robust reporting mechanism, but numbers without context don't mean much. This is where cost allocation comes in. Cost allocation is a mechanism through which you distribute, in a predefined way, the costs incurred by the users, projects, or applications that consume the corresponding AWS resources.

The three most common AWS cost allocation methodologies are the AWS account strategy, the tagging strategy, and AWS cost categories.

The account strategy involves developing a hierarchical structure and logical grouping of your accounts. It's probably the least effort to implement and provides high accuracy for spend showback and chargeback at the linked-account level or at a group-of-accounts level.

The tagging strategy allows you to attach your organizational metadata to AWS resources in the form of tags. Tagging does require more effort, but can provide high accuracy if you have mixed, shared, or complex accounts.

AWS cost categories is a tool that allows you to combine costs across accounts and tags and provide you further capabilities to allocate and analyze your spend.

Whether you select one or multiple mechanisms, the key will be how you enforce them. Going back to the study completed by the Hackett Group: it revealed that cost allocation is a force multiplier. In other words, organizations with higher levels of cost allocation apply nearly two times more cost-optimization strategies to reduce spend.
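As a rough illustration of account-based allocation with an unallocated bucket, here is a hypothetical sketch; the account-to-business-unit mapping and the amounts are made up:

```python
# Hypothetical mapping from linked accounts to business units, in the
# spirit of AWS cost categories. Account "333" is deliberately unmapped.
account_to_bu = {"111": "Retail", "222": "Payments"}

line_items = [
    {"account": "111", "cost": 120.0},
    {"account": "222", "cost": 80.0},
    {"account": "333", "cost": 25.0},   # not yet mapped
]

# Anything unmapped lands in "Unallocated" - a bucket to drive to zero
# as your allocation strategy matures.
allocation = {}
for item in line_items:
    bu = account_to_bu.get(item["account"], "Unallocated")
    allocation[bu] = allocation.get(bu, 0.0) + item["cost"]

print(allocation)
```

Tracking the size of the "Unallocated" bucket over time is one simple way to measure how well an allocation strategy is being enforced.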

AWS offers multiple pricing models so you can pay for your AWS resources in the way that best meets your organizational needs. A Savings Plan allows you to make an hourly spend commitment in exchange for discounted pricing across compute services. A Reserved Instance allows you to make a commitment to running a minimum amount of resources. Both Savings Plans and Reserved Instances can offer savings as high as 72% over On-Demand.

We recommend adopting a rolling purchase strategy: start by making smaller commitments, evaluate your requirements, analyze your utilization data, and then gradually purchase additional commitments every month or quarter. We recommend a coverage rate of 70-90% using Savings Plans and Reserved Instances; as we've come to learn from our successfully optimized customers, that's the sweet spot to be in.
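Here is a minimal sketch of tracking coverage against that 70-90% band under a rolling strategy. The hourly spend figures and the half-the-gap increment heuristic are assumptions for illustration, not an AWS recommendation:

```python
# Hypothetical figures: on-demand-equivalent compute spend per hour,
# and the current Savings Plan hourly commitment.
eligible_hourly_spend = 100.0
committed_hourly_spend = 62.0

coverage = committed_hourly_spend / eligible_hourly_spend
target_low, target_high = 0.70, 0.90

if coverage < target_low:
    # Rolling strategy: close half the gap now rather than jumping
    # straight to the top of the band in one purchase.
    increment = (target_low - coverage) * eligible_hourly_spend * 0.5
    print(f"Coverage {coverage:.0%}: consider an extra "
          f"${increment:.2f}/hr commitment")
elif coverage > target_high:
    print(f"Coverage {coverage:.0%}: hold off on new commitments")
else:
    print(f"Coverage {coverage:.0%}: within target band")
```

Re-running a check like this monthly or quarterly, against fresh utilization data, is the essence of the rolling purchase approach.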

The study completed by the Hackett Group revealed that organizations that subject at least 65% of their spend to efficient pricing models such as Savings Plans and Reserved Instances see an average 35% improvement in incremental savings.

Now that we've seen some foundational CFM capabilities, let's see how you can leverage them to build a roadmap to meet your specific business objectives.

Here's an example: being able to forecast your cloud spend with higher accuracy - a fairly typical scenario that many of our customers face.

Typically, you want to start with a lightweight mechanism such as a trend-based methodology, which is basically extrapolating your historical spend to predict your future spend. Then you want to establish cross-functional partnerships across business, finance, and technology; improve your key stakeholders' visibility into cloud spend through reporting, dashboarding, and education; and establish a cadence for them to meet regularly and frequently - again discussing actuals versus forecast, anything that could increase your AWS spend, anything that could decrease it, and how you need to adjust your forecast numbers accordingly.
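A trend-based forecast can be as simple as a least-squares line through recent monthly spend; the figures below are purely illustrative:

```python
# Last six months of spend in $k (made-up numbers).
history = [100.0, 104.0, 109.0, 113.0, 118.0, 121.0]

n = len(history)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(history) / n

# Ordinary least-squares slope and intercept for spend vs. month index.
slope = (
    sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
    / sum((x - x_mean) ** 2 for x in xs)
)
intercept = y_mean - slope * x_mean

# Extrapolate the trend line for the next three months.
forecast = [intercept + slope * (n + i) for i in range(3)]
print([round(f, 1) for f in forecast])
```

This is deliberately naive - it ignores seasonality and step changes, which is exactly why the methodology should mature into demand-driver forecasting for workloads that don't follow a simple trend.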

You may want to develop a different methodology for different types of workloads as you're getting started. For a steady-state workload, a trend-based methodology may be best suited. For workloads being migrated into AWS, you may want to leverage AWS Migration Evaluator or engage the Cloud Economics team. For workloads being built cloud-native in AWS, you can leverage the publicly accessible AWS Pricing Calculator. For workloads with seasonality, you may want to develop one forecast for peak season and another for off season. And for short-term POC-type projects, you may want to develop a t-shirt sizing model - small, medium, large - based on the number of instances and amount of storage. Then you want to mature anything that isn't following trend-based forecasting well into demand-driver-based forecasting.

Demand-driver-based forecasting is essentially identifying your key business drivers and working backwards to calculate AWS spend per business driver - for example, AWS spend per transaction, per claim processed, or per payment processed. Once you have that, your cross-functional partnerships let you exchange information: we know the spend per transaction; marry that up with the expected number of transactions over the next quarter, and you have a better way to forecast.
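The demand-driver approach boils down to unit economics times expected volume; all figures here are hypothetical:

```python
# Hypothetical last-quarter figures.
historical_spend = 450_000.0         # AWS spend, $
historical_transactions = 9_000_000  # business driver: transactions

# Work backwards to the unit economics: AWS spend per transaction.
spend_per_transaction = historical_spend / historical_transactions

# Finance expects 10.5M transactions next quarter; marry the unit cost
# with the expected volume to get the forecast.
expected_transactions = 10_500_000
forecast_spend = spend_per_transaction * expected_transactions

print(f"${spend_per_transaction:.4f}/txn -> forecast ${forecast_spend:,.0f}")
```

The same pattern applies to any driver (claims processed, payments, active users); the hard part is the partnership that supplies a credible volume estimate, not the arithmetic.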

Then you want to combine the trend-based and demand-driver-based forecasts and roll them up into an overall AWS forecast model. From there, it's continuous improvement: learning lessons, fine-tuning your processes, and bringing in new key stakeholders and their processes.

These steps together can form a comprehensive strategy to enhance your forecasting processes.

The best time to start your CFM journey is today. Leverage the AWS Cloud Financial Management framework as is, or tailor it to meet your specific business needs. Implement your foundational CFM capabilities first; that will set your organization up for long-term success when it comes to managing cloud expenditure. To continue the momentum, experiment and iterate on your foundational CFM capabilities to meet your specific business objectives, leverage the CFM capability assessment to help you build a CFM roadmap, and engage your account manager to learn more.

You can find helpful resources, additional information, blogs, and customer success stories on the AWS Cloud Financial Management web page. We would encourage you to attend the demo of the capability assessment later today at the Venetian.

We're happy to answer any questions at the side of the stage. Thanks so much for joining us today, and please remember to take the survey in the app. Thanks so much, we appreciate it.
