[NEW LAUNCH] AWS Resilience Partners: Best practices to create a resilient organization

Hello, everyone, and thank you for being here today. I'm really thankful that you could make time, and I hope you are excited for the re:Play party tonight. We are all looking forward to hosting you.

My name is Ashu and I lead the worldwide partners team for resilience. We are here to talk about and learn some best practices for building a resilient organization. I'm joined by Steve and Nathan, leaders from Cigna and Deloitte. Together we are going to take a deep dive into identifying the best practices to build a resilient organization.

Now, as part of the key takeaways of this session, we really want to understand what resilience is, why it matters, and how we approach a customer's resilience goals. How do we get there? What are the key objectives that we need to uncover before we can embark on a resilience journey and achieve those goals?

Before I really get started, I want to dig deeper and figure out what AWS's view of resilience is. How do we view resilience as the expectation on the cloud? Most customers have an always-on, always-available mindset. What that means is that critical applications have to be up and running all the time without any disruptions. Think about cyber attacks, human error, unauthorized third-party access - customers' mindset today is that everything has to be up and running all the time.

Resilience is all about planning for the unexpected. Think about a time when your favorite airline app goes down, leaving 1,000 passengers stranded at an airport, or when you're not able to binge-watch Game of Thrones. So resilience is all about planning for the unexpected, and the way we view this is as the ability to bounce back, or recover, from any kind of unauthorized third-party access or any kind of cyber attack.

I'm super excited because we announced the AWS Resilience Competency as part of Ruba's partner keynote yesterday. This is a brilliant specialization that validates partners on their capabilities and experience in building resilience best practices and operating resilient AWS workloads.

As part of the competency, we have three categories: design, operations, and recovery. We have a curated set of partners, and partners who qualify across all three categories - design, operations, and recovery - get a special status: Core Resilience Partner. I'm super excited to be here with one such partner - Deloitte.

So I'm really excited that we are here to share a customer story that deep dives into how we apply these best practices and how we achieve a customer's resilience goals.

With that said, I would like to call upon Steve from Cigna to take us through that story. Over to you, Steve.

Steve: Thanks and thanks everyone for attending. My name is Steve Sefton. I am a senior principal engineer with Cigna and I'm also a lead engineer for our resiliency program.

Now, before we get into our resiliency journey, I want to set some context about the Cigna group and why resiliency is so important to us.

A little bit about the Cigna group - we are a Fortune 20 company and at our topmost level, we have two brands: Cigna HealthCare and Evernorth Health Services. Cigna HealthCare is a health benefits provider servicing US commercial, US government and international markets. Evernorth Health Services, on the other hand, has a range of services from pharmacy benefits management, pharmacy home delivery, specialty pharmacy home delivery, and a growing range of care services.

And our mission at the Cigna group is to improve the health and vitality of those we serve – our patients.

Now, let's look at the scale of this. At the Cigna group, we have thousands of clients and across those clients, we have more than 100 million patients – 100 million – and they depend on us. They depend on us to have our applications and services available and working correctly when they need them the most and unfortunately, oftentimes that's when they're sick or when someone that they love is sick like a spouse or a child.

So this is a huge responsibility and we take this very, very seriously. System stability is always front and center for us. But we're also always looking for ways to improve it and setting up a formal resiliency program was another step that we could take to improve.

Now, if you're on a journey like this, you might think of engaging a partner that could help you be successful - so reaching out to a partner that has been successful in helping other companies on the same journey. And for us, we selected Deloitte to work with.

Now Nathan from Deloitte is going to come up on stage and set some context for us on Deloitte and their resiliency offerings.

Nathan: Hi, thank you for joining us for this session. I know you had multiple session options today and you chose to come here, so thank you.

My name is Nathan Gupta. I'm a leader in the technology strategy practice at Deloitte. I started my career as a software developer, did that for quite a long time, and now I focus exclusively on providing technology resiliency services to Fortune 500 clients like Cigna.

I was also the lead when we started this engagement with Cigna, working out which initiatives we needed to launch and what we needed to take care of as we moved along this journey. I'm very excited to talk about that - both the things that worked very well and the things that did not go so well, and what we learned from those.

Before I talk about what we did, a little bit about Deloitte. We are a $60 billion company with 700 offices spread across the world. We provide a lot of services - you know Deloitte from the work we do in our strategy, audit, and tax practices - but we also focus a lot on software engineering.

Today, we have done more than 3,500 engagements on the software engineering side and built thousands of solutions for our clients over the last few years. What we have been seeing is a growth in the number of technology disruptions our clients face, and there are two reasons for this:

  1. Customer tolerance for downtime keeps going down. Customers want systems to be available all the time, which is not always possible.

  2. At the same time, systems have become super complex. So any small outage can now land you on the front page of the Wall Street Journal, which is never good.

So now we have a dedicated practice to help our clients in this area. These are some of the services which we provide:

  • We provide strategy and operating model where we help in identifying what your north star should be from the resiliency side and how to achieve that north star.

  • We help in identifying any architecture gaps that may exist and help in closing those gaps.

  • We also help in enabling resiliency testing and observability, and in achieving your goals in those areas.

  • Disaster recovery - resiliency is not complete without talking about disaster recovery. We help in conducting the disaster recovery tests and also help achieve your disaster recovery posture and the goals that you are looking for.

  • And last but not least, we also provide SRE as a managed service. What we mean by that is we can supplement the SRE teams in your company, support them, and help you close any resiliency gaps that might exist.

So as we move on, one thing I think we need to level-set on is what resiliency is. I should talk a little bit about technology resiliency, because there are multiple definitions in the market today, and I want to make sure that we are all on the same page.

When we talk about resiliency, it means having a reliable design - what I mean by that is a system that is designed to scale up or down if issues occur. It also has intelligent visibility, which means the system provides enough monitoring that you can predict when failures may occur.

It is also highly available, which means it is structured in a way that if some Availability Zones go down, you can still operate from the other zones - for example, by running across multiple Availability Zones in the AWS environment.

And it is also prepared for disaster, which means you have mechanisms built in - maybe alternative mechanisms, manual mechanisms - which allows you to function in case a disaster does occur in your systems.

And last is fault tolerance - your system is designed to continue working when small disruptions happen, and disruptions will happen, so a failure in one component does not take the whole environment down with it.

Now, hopefully we're on the same page on what is technology resilience.

Now, in order to get started, how do you think about the guiding principles? What are some of the things that you need to think when you start your resiliency journey? And that's what Steve will talk about next.

Steve: All right. So these are the guiding principles that we put together at Cigna group and what we wanted from this was something that would be memorable - something that our engineers could keep top of mind as they're building, testing, deploying and running our applications.

So we're gonna walk through these one by one and hopefully it will give you some inspiration if you're thinking of setting up a resiliency program back at your company.

All right. So first we have "Integrate Defensively". So this is about assuming that every application dependency that you have at some point could be slow or unavailable and you want to protect your application from the impacts of that.

So things that we think about in this space are software design patterns like circuit breakers, retries, setting timeouts correctly - basic things like that. But we also think about avoiding consuming single points of failure.

So let's say that you have an endpoint that you're about to integrate with. You should be aware of what's behind that endpoint, right? Are you talking to an Elastic Load Balancer or API Gateway or are you talking to a single EC2 instance which would be bad?

And even if you're talking to a load balancer, what do the targets behind that load balancer look like? Are you talking to targets that are spread across Availability Zones, making them fault tolerant, or are you talking to targets that are within one Availability Zone, which wouldn't be as good?
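To make "Integrate Defensively" concrete, here is a minimal Python sketch of a defensive call to a dependency - an explicit timeout, bounded retries with backoff, and a very simple circuit breaker. This is an illustration, not Cigna's actual code; the URL, thresholds, and class names are hypothetical.

```python
import time
import requests

class CircuitOpenError(Exception):
    """Raised when the circuit breaker is open and calls are short-circuited."""

class SimpleCircuitBreaker:
    """Very small circuit breaker: opens after N consecutive failures,
    then rejects calls until a cooldown period has passed."""
    def __init__(self, failure_threshold=5, cooldown_seconds=30):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.consecutive_failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if time.time() - self.opened_at >= self.cooldown_seconds:
            self.opened_at = None            # half-open: let one call through
            self.consecutive_failures = 0
            return True
        return False

    def record(self, success):
        if success:
            self.consecutive_failures = 0
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.failure_threshold:
                self.opened_at = time.time()

breaker = SimpleCircuitBreaker()

def call_dependency(url, retries=3, timeout_seconds=2):
    """Call a dependency defensively: explicit timeout, bounded retries
    with exponential backoff, and a circuit breaker around the whole thing."""
    if not breaker.allow():
        raise CircuitOpenError("dependency circuit is open; failing fast")
    for attempt in range(retries):
        try:
            response = requests.get(url, timeout=timeout_seconds)
            response.raise_for_status()
            breaker.record(success=True)
            return response.json()
        except requests.RequestException:
            breaker.record(success=False)
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, 2s, ... backoff between retries
```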

So next, we have "Test Completely" and this is about proving that your application handles those slow or unavailable dependencies as designed.

So the thing that we think about in this space is something called Game Day. So Game Day is an exercise where you use a technique called fault injection to simulate your application dependencies being slow or unavailable or returning errors. You essentially just want to test that those defensive integrations that you put in place are working as designed.

Now, you also want to test that your monitoring is working correctly, you want to test that your alerts are firing as they should when things that are unexpected happen.

So in this regard, you can kind of think of Game Day as like a dress rehearsal for future production incidents that could occur, right?

And we'll refer back to this "Test Completely" on a couple of other of these principles. This is a critical one.
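As a rough illustration of the fault-injection idea behind Game Days, here is a small Python test sketch that simulates a dependency timing out or returning errors and verifies that a defensive fallback behaves as designed. The function, URL, and fallback value are hypothetical.

```python
import requests
from unittest import mock

# Hypothetical application code under test: returns cached/fallback data when
# the dependency is slow or unavailable instead of propagating the failure.
FALLBACK = {"status": "degraded", "items": []}

def get_items_with_fallback(url):
    try:
        response = requests.get(url, timeout=1)
        response.raise_for_status()
        return response.json()
    except requests.RequestException:
        return FALLBACK

def test_dependency_timeout_triggers_fallback():
    """Game Day-style check: inject a timeout into the dependency and
    verify the defensive integration degrades gracefully."""
    with mock.patch("requests.get", side_effect=requests.exceptions.Timeout):
        assert get_items_with_fallback("https://dependency.example.com/items") == FALLBACK

def test_dependency_error_triggers_fallback():
    """Inject an HTTP 500 and verify we still fall back instead of crashing."""
    fake = mock.Mock()
    fake.raise_for_status.side_effect = requests.exceptions.HTTPError("500")
    with mock.patch("requests.get", return_value=fake):
        assert get_items_with_fallback("https://dependency.example.com/items") == FALLBACK
```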

Next we have "Deploy Pessimistically" - this is about assuming that every application change that you ever make will fail. And how do you take steps to limit the impact of that within your ecosystem?

So the thing that we think about in this space are rollout strategies like blue-green rollout where you can test a candidate release before cutting over to it in production or percentage based rollouts like canary or linear rollouts where you can set up a small percentage of traffic, like 2% or 5%, and you route that production traffic to your candidate. Such that if there is a problem with it, you've only affected 2% or 5% of your traffic, not 100%. So the impact is less.

Now on the AWS platform, there are several services that support rollout strategies like this - ECS and Lambda are a couple of examples. You can also do this with EKS if you use third party projects like Argo CD and Argo Rollouts.
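As one concrete example of a percentage-based rollout, here is a hedged boto3 sketch that uses Lambda alias weighted routing to send 5% of traffic to a candidate version. The function name, alias, and version numbers are hypothetical.

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function, alias, and versions.
FUNCTION = "orders-api"
ALIAS = "live"
STABLE_VERSION = "7"
CANDIDATE_VERSION = "8"

# Route 5% of traffic to the candidate version; 95% stays on the stable one.
lambda_client.update_alias(
    FunctionName=FUNCTION,
    Name=ALIAS,
    FunctionVersion=STABLE_VERSION,
    RoutingConfig={"AdditionalVersionWeights": {CANDIDATE_VERSION: 0.05}},
)

# Later, if monitoring looks healthy, promote the candidate to 100% of traffic...
lambda_client.update_alias(
    FunctionName=FUNCTION,
    Name=ALIAS,
    FunctionVersion=CANDIDATE_VERSION,
    RoutingConfig={"AdditionalVersionWeights": {}},
)
# ...or, if problems show up, roll back by pointing the alias at the stable
# version again - only the 5% slice of traffic was ever exposed to the issue.
```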

All right. Next, we have "Run Cautiously". So this is about running redundant instances of applications that scale to meet demand within defined limits. Now, there's three ideas packed into this.

First, we have running redundant instances of applications. So earlier I talked about we don't want to consume single points of failure. This is about avoiding creating single points of failure.

Next, we have scaling to meet demand. So this is referring to horizontal auto scaling, scaling out to meet demand, scaling in for cost effectiveness when that demand isn't there.

And then finally, we have within defined limits. So if you're doing auto scaling, there's usually a finite limit to that, you can't auto scale infinitely, usually there's some limit. And so when you hit that limit, you start rejecting traffic and this is called load shedding.

So these concepts again are supported on a variety of AWS services. The ones I mentioned before - ECS, Lambda, EKS - you can do these types of things with all of those.
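To illustrate the load-shedding idea, here is a minimal Python sketch of a bounded-concurrency gate that rejects requests once a defined limit is reached instead of letting the service fall over. The limit and response shape are illustrative, not a specific framework's API.

```python
import threading

class LoadShedder:
    """Admit at most `max_concurrent` in-flight requests; reject the rest
    immediately (HTTP 429/503 in a real service) instead of queueing forever."""

    def __init__(self, max_concurrent=100):
        self._semaphore = threading.BoundedSemaphore(max_concurrent)

    def try_acquire(self):
        # Non-blocking: if we are at the defined limit, shed the request.
        return self._semaphore.acquire(blocking=False)

    def release(self):
        self._semaphore.release()

shedder = LoadShedder(max_concurrent=100)

def handle_request(process):
    if not shedder.try_acquire():
        # Defined limit reached: shed load rather than degrading everyone.
        return {"status": 503, "body": "service busy, please retry"}
    try:
        return {"status": 200, "body": process()}
    finally:
        shedder.release()
```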

Next. We have "Observe Obsessively". This is about being the first to know if your application has failed and having the data points available to determine why.

So if you're like us and you have 100 million users, you can't let your users tell you that your applications are down or not working correctly - you need to know. So if you're running something in production, whether it's a database or an application, you have to own that and you have to be the first to know that it's not working correctly.

And this ties back to testing completely. I mentioned knowing that your monitoring is working correctly and knowing what your monitoring is telling you. So these two tie together in that regard.
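As a sketch of "being the first to know," here is a hedged boto3 example that creates a CloudWatch alarm on a 5XX error count so the on-call team is paged before users start reporting problems. The metric dimensions, threshold, and SNS topic ARN are hypothetical.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm if the application's 5XX count exceeds a threshold so the team,
# not the users, is the first to know. Names and thresholds are hypothetical.
cloudwatch.put_metric_alarm(
    AlarmName="orders-api-5xx-errors",
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/orders-alb/abc123"}],
    Statistic="Sum",
    Period=60,                   # evaluate one-minute windows
    EvaluationPeriods=3,         # three consecutive breaches before alarming
    Threshold=20,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall-page"],  # hypothetical SNS topic
)
```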

Next, we have "Recover Urgently". This is about being able to efficiently use that observable data to understand what action is required to bring your application back into a working state if it fails and it wasn't resilient, meaning it didn't recover on its own.

So again, this ties back to test completely with that dress rehearsal for production incidents and knowing what you're looking at with your monitoring data.

Finally, we have "Update Frequently" and this is about reducing exposure to security vulnerabilities and defects by keeping your application dependencies up to date.

Software doesn't age well. The longer neglected software goes without being updated, the more difficult it is to update when the time comes. And if that time is when you find a security vulnerability in those dependencies, that's not good. So it's a best practice to keep your software up to date.

All right. So we're gonna pivot from this and we're gonna take a look at some of the initiatives that Cigna and Deloitte worked on to launch our resiliency program. So you can kind of think of this as like a step by step guide for how you might wanna go about setting up your own resiliency program back at your companies, if you're interested in that.

As we go through this, keep in mind these guiding principles that we just talked about and think about how some of these initiatives tie back to those guiding principles.

Now, if you're starting on this journey, one of the first things you would probably think about is: what should my organization for my resiliency program look like, and how would I enable that structure for success? So Nathan is going to cover that for us.

Nathan: As you start your SRE journey, one thing you want to think about is bringing your people along with you, because without them, resiliency will remain the responsibility of a small group.

We wanted to make sure, as we were starting at Cigna, that SRE is the responsibility of the broader group - the testers, the product teams, the application teams - and not just limited to Steve and his team. And that's where we focused first: on the SRE operating model.

We defined and identified the key SREs responsible for the key verticals within Cigna and then assigned their roles and responsibilities.

One thing we wanted to make sure of was that we identified their focus areas - the things they should work on and, quite frankly, the things they should not work on. The idea was that SREs, being experts in their area, are going to keep getting pulled in multiple directions, and as a result they may not focus on the things they need to focus on. Hence, defining the areas of responsibility was super critical.

Next, we also identified their OKRs - Objectives and Key Results. The idea was that we would define a north star for them and let them decide how to achieve it. So we provided enough room for them to work the way they want while still achieving the common goals that are needed.

Following this, we also identified some resiliency programs that needed to be started immediately to close any open tickets in the environment and any major gaps.

And last but not least was the culture aspect. We wanted to make sure there is a platform available for everyone to voice their concerns, learn about the SRE principles, discuss ideas, and share successes. That's where we started the Community of Practice, which has been a big success.

We also opened up multiple communication channels where people can come and discuss their ideas on what Cigna should focus on from the resiliency perspective.

Now, as you start on this resiliency journey and you've taken this first step, how do you move forward? How do you know you're moving in the right direction? What should the criteria be for measuring success? And that's what Steve will talk about next.

Steve: All right. So you can't go on a journey like this if you don't understand if you're succeeding or failing. And it's important to actually know at an application level if you're succeeding or failing.

So the tool that we have for this is a concept called Service Level Objectives, or SLOs. And you can think of SLOs as goals that you set for your application.

Some examples could be your availability for the month, your average response time for the month, or your error rate for the month. All of these metrics are called Service Level Indicators, or SLIs, and we'll talk in a minute about how you can harvest those from your tooling, your monitoring, and your environment.

So we talked about SLIs and SLOs. So what are error budgets? So you can think of error budgets as like the space between your current SLI readings for your metrics and the Service Level Objectives that you've defined.

So think of it like a buffer, right? And as long as that buffer is full, you're not in danger of violating your Service Level Objective that you've set. But if that error budget gets depleted, if that buffer is depleted, then you've violated your Service Level Objective, which isn't what you want.

So you can use this as a management tool to help you manage to your Service Level Objectives and meet them.

So here's an example, let's say that you have a release coming up and your error budget is looking good, right? It's not in danger of being depleted. So you would proceed with that release. Of course, you would deploy pessimistically like we talked about with our guiding principles.

Now, what if that error budget wasn't looking so good? What if you were in danger of depleting it and then violating your Service Level Objective? Well, you might think about mitigation strategies. Maybe you move the release out until your error budget is replenished, or maybe you take extra precautions on that release.

All right. So we talked about SLIs, SLOs, and error budgets. Working with Deloitte, we defined the SLIs we wanted to use - I mentioned them earlier: availability, response time, and error rate. We call these our golden signals. So we defined those, and then we had to determine where we would harvest them from.

So we have several observability tools within Cigna. We have Splunk for log aggregation and a couple of APM tools, New Relic and Dynatrace, and really, we could get all of the metrics we needed from those. If you're on the AWS platform, you certainly have CloudWatch available to harvest this type of information, and for availability, the synthetic monitoring capabilities in APM tools are really what you would want to use for that metric.

So once we knew our SLIs and we knew where to harvest them from, we were then set to begin defining SLOs. And we did that for many of our critical applications and processes and then also defined error budgets as well to help us manage to those Service Level Objectives.
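To make the error-budget arithmetic concrete, here is a small Python sketch using an illustrative 99.9% monthly availability objective; the numbers are examples, not Cigna's actual objectives.

```python
# A minimal sketch of the SLO / error budget arithmetic described above.
# Numbers are illustrative, not Cigna's actual objectives.

SLO_AVAILABILITY = 0.999          # 99.9% availability objective for the month
MINUTES_IN_MONTH = 30 * 24 * 60   # 43,200 minutes

# The error budget is everything the SLO allows you to "spend" on failure.
error_budget_minutes = (1 - SLO_AVAILABILITY) * MINUTES_IN_MONTH   # 43.2 minutes

def budget_remaining(observed_downtime_minutes):
    """Fraction of the monthly error budget still left (the 'buffer')."""
    return 1 - observed_downtime_minutes / error_budget_minutes

# Example: compare two months with different amounts of downtime so far.
print(f"Error budget: {error_budget_minutes:.1f} min/month")
print(f"Budget remaining: {budget_remaining(12):.0%}")   # ~72% left: safe to release
print(f"Budget remaining: {budget_remaining(40):.0%}")   # ~7% left: consider delaying
```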

Ok. So this helps you to understand if you're succeeding or failing at an application by application level. But how do you know where your opportunities are for improvement? Right? How do you know what you should work on first to get the most bang for the buck?

So for this, we have a concept called Failure Mode Analysis. So what is Failure Mode Analysis? It's an exercise where you look at your application's architecture and think of all the ways that your application may fail. You think through those, you document them, and each one is called a failure mode.

All right. So working with Deloitte, we actually did hundreds of these failure mode assessments and we uncovered more than 400 failure modes through that exercise.

Now, many of those failure modes had already been addressed - they already had remediations in place, such as circuit breakers or retries or what have you - but some did not. So this gives us the start of a to-do list, and any good to-do list has a sorting order. What should you do first?

So this gets into risk-based scoring of those failure modes. And for us, the way we scored is that we looked at each failure mode and asked: how likely is that failure mode to occur? What is the impact if it occurred? And what is our ability to detect it? Based on those factors, we compute a score for each failure mode and then we can sort the failure modes, even across applications. And this gives us the priority for our to-do list, if you will.
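Here is a minimal Python sketch of that risk-based scoring: each failure mode gets likelihood, impact, and detectability ratings, and the product gives a sortable score across applications. The scale, weights, and example failure modes are illustrative.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    application: str
    description: str
    likelihood: int     # 1 (rare) .. 5 (frequent)
    impact: int         # 1 (minor) .. 5 (severe)
    detectability: int  # 1 (easily detected) .. 5 (hard to detect)

    @property
    def risk_score(self):
        # Higher score = more urgent. The multiplicative scheme is illustrative.
        return self.likelihood * self.impact * self.detectability

# Hypothetical failure modes across two applications.
failure_modes = [
    FailureMode("claims-api", "Downstream pricing service times out", 4, 4, 2),
    FailureMode("pharmacy-portal", "Session cache node fails", 2, 5, 3),
    FailureMode("claims-api", "Single-AZ database replica lags behind", 3, 3, 4),
]

# Sort across applications so the riskiest failure modes rise to the top of the to-do list.
for fm in sorted(failure_modes, key=lambda f: f.risk_score, reverse=True):
    print(f"{fm.risk_score:3d}  {fm.application:16s} {fm.description}")
```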

Now, in terms of tooling for doing these failure mode analysis exercises, we did what most companies do, which is use spreadsheets: we created a spreadsheet template and had our engineers use it.

Now, we talk about challenges through this journey, and this is one of them. Failure mode analysis is a bit of an art, and some engineers will get it and some may not.

So one of the things that we're doing to improve this is that we're currently building an application with Deloitte to start doing failure mode analysis within that tool, and that tool will suggest failure modes based on the workload type of your application and the workload types of your dependencies.

So if you're integrating with a database, it understands what those failure modes for that integration may look like and it can suggest those. But in addition to just suggesting the failure modes, it can also suggest the mitigations that you might want to put in place.

And it can suggest the fault injection tests that you might want to run to test them.
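To illustrate the idea behind that tool, here is a tiny Python sketch that maps a dependency's workload type to suggested failure modes, mitigations, and fault injection tests. The catalog entries are hypothetical and far smaller than a real one would be.

```python
# Illustrative only: a tiny lookup of suggested failure modes, mitigations,
# and fault injection tests keyed by dependency workload type, in the spirit
# of the tool described above (a real catalog would be far richer).
SUGGESTIONS = {
    "relational_database": {
        "failure_modes": ["connection pool exhaustion", "replica lag", "failover pause"],
        "mitigations": ["connection timeouts", "read retries with backoff", "circuit breaker"],
        "fault_tests": ["inject 2s query latency", "kill the primary during a load test"],
    },
    "http_service": {
        "failure_modes": ["timeouts", "5xx bursts", "partial brown-out"],
        "mitigations": ["timeouts + retries", "fallback response", "bulkhead isolation"],
        "fault_tests": ["inject a 30% error rate", "inject 1s of added latency"],
    },
}

def suggest(workload_type):
    """Return suggested failure modes, mitigations, and tests for a workload type."""
    return SUGGESTIONS.get(
        workload_type,
        {"failure_modes": [], "mitigations": [], "fault_tests": []},
    )

for key, values in suggest("relational_database").items():
    print(key, "->", ", ".join(values))
```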

All right. So in addition to streamlining our failure mode analysis exercises, this application that we're building actually becomes a bit of a teaching tool. It's teaching engineers what to look for, so they improve their understanding of their applications.

This is a couple of screenshots of the app we're building with Deloitte. You can browse or search for your applications and then once you see them, you can manage your failure modes, you can adjust them as needed, adjust the scores, sort the scores, et cetera.

All right. So now we know where our opportunities are - we have our priority list, if you will. But how do we go about remediating our failure modes, and how do we stay consistent when we're doing these remediations? For that, we created something we call the Reliability Guide. This is a catalog of resiliency patterns organized into three categories: system design patterns - think circuit breakers, retries, horizontal auto scaling, caching, things like that - plus deployment patterns and observability patterns as the other two.

Now, we bring this up in the context of failure mode analysis, but from an implementation standpoint, we actually got a lot of benefit even outside that context. We get questions all the time from engineers: "Hey, I'm having this problem. I had this production incident," or "I'm seeing a problem in QA. How do I address it?" Now we can point engineers to a common place where they can see the strategies to mitigate these things, as well as code samples that they can follow to implement them.

Now, as our engineers worked with this, they might have questions, so we set up checkpoints with them to review whether they're on the right track - whether they're implementing these patterns when they should and implementing them correctly. But that was all manual, and what we want to do is automate it and actually put those checks into our CI/CD pipeline as policy as code. We think that will help streamline this process.

Now all of this has been about internal integrations, what I was just speaking about here. But you also need to think of resiliency when you're doing external integrations from your applications and how you might mitigate those failure modes.

So, Nathan is going to cover that for us.

Nathan: A chain is only as strong as its weakest link, right? So what we wanted to make sure was that at Cigna we take responsibility for the end-to-end value chain from the resiliency perspective, which meant not only the internal systems - we also needed to bring the vendors and third parties along on this resiliency journey.

We started by identifying the major existing vendors and doing a quick assessment of what their disaster recovery plans, BCP plans, and data protection capabilities look like, and any gaps that were identified are being planned for remediation.

Next was preparing for future vendor engagements: making sure the contracts for these vendors are updated and resiliency requirements are written right into the contract, so that any specific requirements Cigna may have are now part of the agreement. Beyond that, we know issues will sometimes occur on the vendor side, and we should be prepared for that.

Cigna is working to prepare itself in case any of its critical vendors goes down, and to have things in place to handle those situations proactively. A lot of this will be built right into the process, so that at the time of onboarding a vendor, or when you want to assess a vendor's resiliency capability, everything is in place and the right decisions can be made about how to handle that vendor's resiliency issues.

So, if you are following the story: we talked about how we set up an SRE operating model, how we do failure mode analysis to identify issues, how we fix those, and how we fix them across the value chain. What's next? How do you make sure your system is going to work in a real-life scenario? And that's where chaos testing comes into play.

Chaos testing is a way to introduce faults into the system and see how your system, your people, and your processes react to those scenarios. This is exactly what we did at Cigna by conducting chaos testing in the critical application environment. The goal is ultimately to get to a place where we can do this at scale and no deployment happens without chaos testing. But for now, we focus on critical applications in specific areas, introducing things like CPU spikes and latency, seeing how things react, and fixing any gaps we find. Tools and technology play a big role in this.

We wanted to make sure we chose the right tools for Cigna to work with in this space. There are lots of tools available, but we tried and tested Chaos Monkey, Gremlin, AWS-native tooling, and some internal tools for this purpose. Several pilots are being conducted, and SREs are playing a big role in helping run those experiments - providing hands-on support so that the teams are prepared to handle any issues that may occur while chaos testing is being performed. Game Day - Steve talked a little bit about that earlier.

It requires that you set up your environment and your applications, with clear roles and responsibilities for the testers and the SREs as you conduct those Game Days. Those have been well documented and are available through technical guides at Cigna, so anyone can now go and conduct their own Game Days.

As you perform this Game Day testing, what you need is some kind of guide for what kinds of tests you want to conduct. By no means is this exhaustive, but these are some of the test cases that you can run in your environment as you start working through chaos testing in your organization. They take you across the layers of the application stack and also cover scenarios like: when you're deploying an application and the servers become unavailable, what do you do at that time? So you can test some of those.

What if you can't roll back? Those are some of the scenarios which do occur in the real world, and you want to be prepared for them - there's no better way to prepare than chaos testing.
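As a rough sketch of the latency-injection scenarios mentioned above, here is a small Python decorator that randomly adds delay to a call in a non-production environment so you can watch how timeouts, fallbacks, and alerts behave. The probability, delay, and function are illustrative; dedicated tools such as Gremlin, Chaos Monkey, or AWS Fault Injection Service do this far more safely and at scale.

```python
import functools
import random
import time

def inject_latency(probability=0.2, delay_seconds=1.5):
    """Chaos-style decorator: with some probability, add artificial latency
    before calling the real function. For non-production experiments only."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if random.random() < probability:
                time.sleep(delay_seconds)   # simulate a slow dependency
            return func(*args, **kwargs)
        return wrapper
    return decorator

@inject_latency(probability=0.3, delay_seconds=2.0)
def lookup_member(member_id):
    # Stand-in for a real dependency call.
    return {"member_id": member_id, "status": "active"}

# During the experiment, watch whether timeouts, fallbacks, and alerts
# behave as designed while this latency is being injected.
print(lookup_member("12345"))
```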

So, as we move forward, how do you make sure the rest of the organization is coming along with you? You want to make sure that no one is reinventing the wheel on their own by trying to do chaos testing in a different way or doing failure mode analysis in a different way. And that's where resiliency training plays a big role. You want to make sure you are bringing people along and increasing their knowledge so that they are with you on this journey.

We started by conducting live sessions where we identified the different training personas - for example, product owners may get one training, testers another, and application teams yet another from the resiliency perspective. This made sure that everyone understood their roles and responsibilities in this area. Office hours were super critical, so that if there are any issues when application teams are conducting their tests, they have a place to go and ask their questions.

Hands-on assistance is needed when teams are new and are trying to conduct their tests or run a failure mode analysis; SREs now play that role, working with the teams to set up the environment and conduct those tests or other resiliency-related initiatives. And the last is self-service. You want to make sure that when the project winds down, people are still self-sufficient and able to run all these tests and scenarios on their own. That's where we have resources and content made available internally, on our sites and other channels, so that they have a place to go to learn.

And last but not least, a lot of the trainings are now mandatory. There is a place for anyone who wants to go and learn, and making the trainings mandatory means everyone has to actually take them, so their knowledge stays up to date.

Now, how do you make all of this scalable? How do you make sure something that started with a few applications can be scaled across the organization? And that's what Steve will talk about next.

Steve: All right. So we're going to talk about application resiliency certification, which is a concept that we came up with. There's an important distinction to draw here: this is not about certifying people, this is about certifying software as resilient. If you imagine everything that we've been talking about - defining Service Level Objectives, performing FMAs, mitigating your failure modes, running your Game Days - and you bundle all of that up, this is what we refer to as our resiliency certification. It's our requirements for resiliency at a point in time.

So once we defined this process so that it was repeatable and we were asking every team for the same things, we looked at our applications collectively and said, OK, which applications have the most impact if they're slow or unavailable, as we were talking about earlier? We categorized applications by the impact they cause, and this gave us a way of planning so that we could focus on the most critical apps first.

So then we went to the dev teams that own those most critical applications and asked them to go through this resiliency certification. That went well for some teams, but it was a challenge with teams that had competing priorities, because this isn't a small amount of work - it's a considerable amount of work.

So one of the things we're thinking about pivoting to for the rollout is actually requiring each step of our resiliency certification by certain dates and then having our CI/CD pipelines check for those. For example, for Service Level Objectives, maybe on some date in the future we will not allow a production deployment unless that application has its Service Level Objectives defined and configured. On another date further out, we won't allow a production deployment for an application unless it has an up-to-date failure mode analysis document. So these are some of the things we're looking at to improve this.
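As a sketch of what such a pipeline gate could look like, here is a small Python policy check that fails the build if an SLO definition is missing or the failure mode analysis document is stale. The file paths, schema, and freshness window are hypothetical conventions, not Cigna's actual pipeline.

```python
import json
import sys
import time
from pathlib import Path

# Hypothetical repository conventions for the policy check.
SLO_FILE = Path("resiliency/slo.json")
FMA_FILE = Path("resiliency/failure_mode_analysis.json")
MAX_FMA_AGE_DAYS = 180

def check_slo_defined():
    """Return an error message if the SLO definition is missing or empty."""
    if not SLO_FILE.exists():
        return "no SLO definition found (resiliency/slo.json)"
    slos = json.loads(SLO_FILE.read_text())
    if not slos.get("objectives"):
        return "SLO file exists but defines no objectives"
    return None

def check_fma_fresh():
    """Return an error message if the FMA document is missing or too old."""
    if not FMA_FILE.exists():
        return "no failure mode analysis document found"
    age_days = (time.time() - FMA_FILE.stat().st_mtime) / 86400
    if age_days > MAX_FMA_AGE_DAYS:
        return f"failure mode analysis is {age_days:.0f} days old (max {MAX_FMA_AGE_DAYS})"
    return None

failures = [msg for msg in (check_slo_defined(), check_fma_fresh()) if msg]
if failures:
    print("Resiliency policy check failed:")
    for msg in failures:
        print(" -", msg)
    sys.exit(1)   # non-zero exit blocks the production deployment
print("Resiliency policy check passed.")
```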

Now, if we look at the value that this resiliency program has given us: we had goals of reducing our high and critical production incident counts and durations, both by 15% for the year. To date, we're beating that - we're at about 25% currently. So that's a good improvement, and from a resiliency culture aspect, we've moved the needle.

Nathan talked about training earlier. To date, we've trained more than 2,000 of our technology employees with our resiliency training, and that number is going to continue to rise. And as Nathan mentioned, we're going to repeat that every year so it stays fresh for those individuals. I talked earlier about failure mode analysis and how we identified hundreds of failure modes. That gave us the opportunity to address things we had not seen yet, which also improved things for us and probably ties into our reduction in incident counts and duration.

So this is all good progress, but it's a starting point and we have a lot more work to do. If you recall the two brands under the Cigna group: everything we've talked about has been done on the Evernorth Health Services side. So we're now going to apply this to the Cigna Healthcare side as well, and to our infrastructure services that span both. We're looking forward to the future benefits of that as well.

Steve: All right. So with that, I want to thank you all for attending today. I hope you got some inspiration if you're considering setting up resiliency programs back at your companies.

Nathan: Thank you, Steve, for coming down here and talking about the journey we had together, and thank you all for coming. Hopefully you have something to take away from this session. Steve, Ashu, and I will be around in case you have any questions - happy to take them.

Ashu: Wonderful. Thank you, gentlemen. This was really insightful; I learned a lot from the session. So thanks a lot for being here.
