Building operational resilience through AI and automation (sponsored by PagerDuty)

JONATHAN RENDI (SVP of Products): Well, good afternoon, or good evening, depending on which time zone you're in. I'm Jonathan Rendi, SVP of Products at PagerDuty, and I'm honored and excited to host this panel today with three partners, customers, and colleagues from the industry. We're going to talk about automation and AI and their use in organizations. Before we get started, maybe it makes sense to do quick introductions.

DAVE JUSTICE (VP of Engineering at Sunrun): Sure. Hi y'all, I'm Dave Justice, VP of Engineering at Sunrun, which is America's largest residential solar company.

DAN GRAY (VP and Deputy CIO of DXC Technology): Yeah. Hi, everyone. I'm Dan Gray, Vice President and Deputy CIO of DXC Technology, a large global technology services provider.

YASIN QURASHI (Head of Technology for TUI): Hi, it's Yasin Qurashi here, head of technology for TUI, the world's largest travel and tourism organization.

JONATHAN: Great. Well, welcome. I know we've flown from all over to get here. Before we get into AI and automation, I'm curious: given the economic climate, things have changed a lot over the last year, even over the last six months, with budgets getting centralized and tightening. One of the balancing questions that goes on in every organization is whether it's more about cost containment or more about innovation. So, Dave, starting with you: are you focused on one or the other as goals for the team, or balancing both?

DAVE: We're trying to have our cake and eat it too, which I think is what every organization is trying to do. When I came into Sunrun about a year and a half ago, we were starting a very large digital transformation project, and that's not something you stop just because things have gotten a little tighter economically. That being said, there's still a conscious mindset toward driving efficiency where we can. And the real question is how to enable my engineers and my team to focus on high-value things rather than things we can just automate away, the plumbing work that annoys developers anyway. That's really what we've been focusing on: how do we drive innovation by freeing up our engineers to focus on high-value activities?

JONATHAN: Great, so efficiency and innovation together. Anybody else?

DAN: Yeah. For me it's exactly what Dave said about the distribution of work: how can we generate as much value as possible, while still making existing operations as efficient as possible, and not forgetting to do everything in a secure, resilient way? Those are the top three things on my mind: maximize value, continue to optimize existing operations, and do everything in a secure and resilient way.

YASIN: As a travel organization, we were impacted severely during the downturn; our business was completely shut down. We are quite focused on efficiency and resiliency, but the biggest priority we have is actually growth. We have a huge ambition to gain market share. From a business perspective, the expectation is that we'll grow our customer base, currently a bit over 20 million customers a year that we take on holiday, to 30 to 40 million customers. The products we provide sit under the umbrella of travel and tourism: we own our own planes, our own hotels, and our own cruise ships, and we provide the end-to-end holiday experience for our customers. We operate from a base in Europe, but we fly to over 180 destinations across the world. And we're improving market share through customer growth and also through the markets we're in, so we're moving into new countries and providing our services from there. So a lot of cost containment and growth, innovation, moving into new regions, et cetera.

JONATHAN: You know, we have a pretty diverse set of organizations and businesses here, both enterprise and consumer. The common thread is a never-ending pressure to innovate, to drive more velocity and more innovation out of the teams. So the question that comes to mind is: what strategies are your teams using, when you do have cost measures, to still innovate and still drive more? I think everybody can relate to the travel industry, having gotten here at different times today. Yasin, how are you dealing with this?

YASIN: Yeah. So we started our transformation journey about seven years ago, and we had a strategy we called ABCDS. A was APIs: make sure that everything we produce is event-driven. B was big data: we have a huge customer base and a huge amount of data that we collect, and we want to be able to use that to provide better-quality holidays and experiences for our customers. C was moving to the cloud: we chose AWS as our partner and moved everything into the cloud. D was DevOps, more focused on how we actually work as a team; we moved away from project delivery to product delivery and created two-pizza-sized teams. And S is security, which is the first principle: we care about the service we provide and we need to make sure it's secure. That journey started seven years ago, and we're innovating at a rapid pace. We've been able to lay the foundations for all those strategic goals, and we're constantly reviewing them and making sure we're moving in the right direction. The good thing is that the foundations we've laid have actually helped us accelerate and given us the confidence to grow as a business. It's very appropriate for these times.

JONATHAN: It takes not only that balancing act but also a level of autonomy for the teams to be able to move, which leads me to a follow-up: to have some of that innovation and growth, you have to have a culture of ownership and autonomy in the organization. How do you make sure you have that ownership in the organization?

DAN: Yeah, really good question. What we've done at DXC is, at the very outset of the technology group, we aligned by our critical business processes, our value streams, to get stakeholder involvement from not just the technology leaders but, more importantly, the business function leaders. That allows us to really focus on getting that agility, but also on making sure our roadmaps are aligned to those priorities. We've then, similar to Yasin, moved from a project to a product delivery model, where we've shrunk the teams into services focused on delivering value to a particular value stream, versus the very large shared-pool teams we've had in the past. Those are some of the strategies we use.

DAVE: Yeah, we've done similar. We were largely project-driven historically, moved to a product mindset, and follow a more domain-driven architecture where there's direct accountability and ownership of key business functions, along with trust in our partnership with the business units. But part of it has really come from allowing space for experimentation within the team, because highly efficient systems are not resilient systems. We want to allow people to do moonshots sometimes, to try things out, because that's really where you get leapfrog efficiency and actually drive meaningful value, rather than just maximizing to the local optimum.

JONATHAN: So you brought up efficiency, and we have innovation and efficiency again, the balance of the two. I always think it's a bit of a struggle to get efficiency at scale. That takes some level of governance across the board, some level of following consistent practices or processes, but you don't want to take the autonomy away from individuals and individual teams. So to achieve some of those efficiencies, I'm curious, Dave, what you do at Sunrun to have a more consistent, scaled approach.

DAVE: Yeah, I think it's all tied to incentives, right? If you want people to behave in specific ways, you incentivize appropriately; I think everyone can relate to that. So it's all about defining key KPIs and metrics that we hold the teams accountable to, and then we're a little more fuzzy on how you deliver that. Obviously we have best practices, north-star architectures, all that stuff, but we give teams the autonomy. We look at things like: are you meeting your domain's business objectives? What's your time to market? How many errors do you have? How often are you having production outages? What's your MTTR? By creating these metrics, holding teams accountable to them, and cross-sharing them, it drives some of that accountability to improve efficiency. Cost is obviously one of those metrics, but it shouldn't be the only metric.

JONATHAN: And how frequently do you set those? Quarterly, annually?

DAVE: We do quarterly reviews. We set metrics basically every quarter, and every team's are broken out between human metrics (team retention, happiness, culture fit), business objectives (are you driving whatever your domain's business function is), and more operational things (cost, efficiency, and so on). We set those every quarter, re-evaluate, readjust, and we also cross-share them across teams, so each team can see and share what's working: oh, you got your MTTR way down, how did you do that? And that provides that culture of innovation and growth.

JONATHAN: So that's Sunrun's approach. Any others when it comes to scaling?

DAN: Yeah. Just to give an idea of our scale: at DXC Technology we're present in over 70 countries, we have 130,000 employees, and we deliver mission-critical services to some of the largest customers in the world. What I've found in the past is that when people strive for efficiency, they tend to go for a totally horizontal shared-service model, often over-rotate in that direction, and end up with a very diluted set of services with a very low sense of purpose. In fact, I've got anecdotes of talking to members of the team, maybe someone who has worked on a server, who don't really understand the purpose of the system running on that infrastructure because they're so highly leveraged. Efficiency isn't just about cost; it's about delivering the value for the lowest cost. So, going back to what I said before about that operating model, creating strong service-oriented teams with a strong sense of purpose who can then deliver that value is how we have chosen to scale, and it's been extremely effective for us.

JONATHAN: So I'm curious, Dan: in an organization where you have autonomy and a service-ownership model, services are only getting more complicated, with greater dependencies and greater complexity. It's not a question of if something will go wrong, it's a matter of when, and when that happens you want the right team to step forward and the other teams to take a step back. How do you deal with that? What do you put in place in your organization?

DAN: The service owner is very aware that they own the quality of service for that particular service unit of measure. But we do have a reliability engineering team and some shared-service teams who work with those service teams to implement design-for-operations, who work through the communication strategy and stakeholder templates, and who ensure the right people are notified. I think that's the key thing when something does break, which inevitably it does: the communication is clear, crisp, and timely, and the service is back up and running as quickly as possible, ideally before there's any real impact. What we try to do is tune our measures, our SLOs, to the point where it's a proactive event versus a business-impacting incident. So those are some of the techniques we use.
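To make that distinction concrete, here is a minimal sketch of the kind of SLO tuning Dan describes, where a rising error-budget burn rate raises a proactive event well before the SLO is actually breached. The target, window, and thresholds are illustrative assumptions, not DXC's actual configuration.

```python
# Sketch: error-budget burn-rate check for an assumed 99.9% availability SLO.
SLO_TARGET = 0.999              # 99.9% of requests should succeed over the SLO window
ERROR_BUDGET = 1 - SLO_TARGET   # the fraction of requests allowed to fail

def burn_rate(failed: int, total: int) -> float:
    """How fast the error budget is being consumed relative to the allowed rate."""
    if total == 0:
        return 0.0
    return (failed / total) / ERROR_BUDGET

def classify(failed: int, total: int) -> str:
    """Turn the burn rate into a signal before the SLO is actually breached."""
    rate = burn_rate(failed, total)
    if rate >= 10:   # at this pace a 30-day budget is gone in about 3 days: page now
        return "page"
    if rate >= 2:    # trending toward a breach: raise a proactive, low-urgency event
        return "proactive-event"
    return "ok"

# Example: 90 failures out of 30,000 requests in the last window
print(classify(90, 30_000))   # 0.3% errors vs. a 0.1% budget -> "proactive-event"
```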

YASIN: From our perspective, we run mission-critical services, and what we really strive for is predictability and consistency. Whenever there is a service-impacting incident, the group of people responsible for resolving the issue is fully aware of what processes and procedures to follow. We have a single focal point with a defined process for service management: how do we manage incidents, how do we manage change, and how do we manage disruptions to service that impact not only customer experience but potentially revenue as well. So we have a very clearly defined process for managing that, and we use PagerDuty as the single pane of glass that gives visibility, autonomy, and control to a very large set of engineers. We have over 3,000 engineers in IT delivering our products and services, and we've completely decentralized; we do not have central teams managing issues. It's all based on self-service, on getting notification of events to the right person at the right time, and on being able to coordinate those incidents within the tool set itself.
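As a concrete illustration of the "right event to the right person at the right time" routing Yasin describes, a service can send an event to PagerDuty's Events API v2 and let the service's own escalation policy decide who gets notified. This is a generic sketch: the routing key, service name, and payload values are placeholders, not TUI's actual setup.

```python
import requests  # third-party HTTP library

def trigger_event(routing_key: str, summary: str, source: str, severity: str = "error") -> dict:
    """Send a trigger event to the PagerDuty Events API v2.

    The routing key ties the event to a specific service integration, and that
    service's escalation policy determines which on-call engineer is notified.
    """
    body = {
        "routing_key": routing_key,   # per-service integration key (placeholder)
        "event_action": "trigger",
        "payload": {
            "summary": summary,       # one-line description of what went wrong
            "source": source,         # host or service that observed the problem
            "severity": severity,     # critical | error | warning | info
        },
    }
    resp = requests.post("https://events.pagerduty.com/v2/enqueue", json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Hypothetical usage from a booking service's health check:
# trigger_event("EXAMPLE_ROUTING_KEY", "Checkout latency above threshold", "booking-api-eu-west-1")
```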

DAVE: Yeah, we're pretty much the same way. We do residential solar, and for those who aren't aware, a lot of that is door knockers going up to people's houses, which I know for this audience is kind of a terrifying thought: a bunch of people showing up at your door and trying to get into your house.

JONATHAN: I've answered that doorbell twice, in two places. I use Sunrun. So, thank you.

DAVE: Yeah, that's awesome. Glad to hear the power bills are much lower now; that's the goal, that's our objective. But to that point, you have people interacting in person, so when there's an outage, it's really awkward for the sales rep, because you're literally in somebody's living room trying to tap-dance around the fact that there's an outage. So, similar to you, we have a very distributed model and set processes, but we've also given our end users the power to set alerts and notifications through PagerDuty. That's really accelerated our ability to figure out when something's gone wrong and get it back up, because there are a number of permutations of how something can break within our system.

JONATHAN: So I'm curious: there's the service model, and there are more centralized models that maybe many of your organizations have and use today. More often than not, I'm starting to see a blend of the two. When issues are flowing into your organization, interruptions, disruptions, unplanned events, do you still have the notion of an L1 or a network operations center that, while not managing the issue from beginning to end, is helping route the issue and adding in the automation and efficiency? How do you blend the two models together today? Maybe Yasin, I'll start with you.

YASIN: We've converted everything to a product now. Network services are provided as a product and dealt with as a product, so the whole philosophy of DevOps, agile, and automation is built into the same construct. We also have incident command teams, teams readily available to coordinate major incidents, because that's very much required as a service. To ensure we get consistency of decision-making, we need a single owner for major incidents, and that's where the incident command team plays a big role.

JONATHAN: Dan, how about you?

DAN: Yes, we also have a centralized incident command team, which is just essential, especially in those mission-critical events. We're still on that journey: we still have an L1 service team, but it's increasingly shrinking, and we're actually using the service structure as a bit of a career path for that L1 team. As we convert more of our services into fully self-service products, it allows that team to move. I talked before about the distribution of work, and what I mean by that is giving people the ability to work on more feature-enhancement work and new initiatives versus the more toil-heavy operational work. That's been really successful for us in terms of retaining that talent and giving people a line out of the bad old days of operations we all remember.

DAVE: Yeah, it's funny, I think we're exactly the same. We have tier-one support, the support desk, then tier two, and then it goes to the software engineers. But we've now turned that into a career path as well, and honestly, some of the best engineers come out of those pathways, which is really nice.

JONATHAN: How aggressively are you trying to automate L1? Fully automate it away, or do you take the approach that there will always be a small team in place and it will never be fully automated? I'm just curious.

DAN: We don't have a target of zero, but we do have targets for reducing and automating. We do ongoing toil analysis of the manual, repetitive work that flows into the L1s, and we automate those flows; those are then handed over to the service teams to run. We haven't contemplated a zero-L1 operations model, but we have initiatives running to shrink it, and we see that as really another value lever, because we're then creating new features that help our business grow.

YASIN: We've created a maturity model which has the ambition of NoOps. We realize it's very difficult in many situations to actually achieve that in reality, but what we are promoting is that the product team has to take ownership of what they build: if they build poor quality, they're responsible for fixing it. And to help them, we provide guidance on what the best thing to do is and how to do it. So before you run something in a production environment, do you actually have a runbook? Have you done the checks to make sure you've actually delivered a good service into production? It's not just that the code is working; is it secure, and does it meet our compliance and security standards as well?

DAVE: To be provocative, I would argue that you never want zero at tier one, or at that level of support, because you have to worry about employee experience too. There's a lot of proof that when employees are happy, customers are happier, and that drives business value. And just like anybody, when you talk to an automated system and it works, it's awesome, but when it doesn't work, it isn't; I was just arguing with Delta today about something, trying to get an upgrade, saying "talk to a representative." So you want that interaction. What it comes down to, at least for me, is how we can use automation to get rid of the, for lack of a better word, BS stuff that you can automate away, and let your teams focus on the high-value things that drive employee satisfaction.

JONATHAN: We talk a lot about the quality of life of the people who are doing that operations work, and improving it; that's really important, and I totally agree with it. Not just human ops, but humane ops.

JONATHAN: So we've talked about scale and efficiency at scale, and we've talked a little bit about automation. Let's switch gears and talk a little bit about AI. Yasin, in the travel industry, how are you thinking about it in your organization? How are you using AI today?

YASIN: I think we've embraced the notion of AI quite heavily in our organization, and we're very keen to see whether we can create new models of how we offer travel, hospitality, and tourism as an industry; we want to lead in that space. We've been using machine learning for many, many years. One good example is how we price our holidays: we've used machine learning to help us balance the demand and the supply we have, and we do it in real time. The way we build our products is in near real time as well: a packaged holiday, which includes the flight, the hotel, attractions and experiences, the transfer from airport to destination, and anything additional, is actually assembled in real time.

So we're providing a dynamic inventory management system, and automating that is very, very critical. We've got several touch points where we've used machine learning quite extensively over the years to evaluate patterns and give us predictions of the best outcome we can achieve. Another good example is our flights. We have to make sure we have the right amount of fuel to get to the destination, always a good thing, and from a safety perspective we also have a requirement to carry additional fuel in case there is turbulence or any other situation that requires extra time in the air.

We're also conscious that, from a travel industry perspective, fuel is the most expensive element of the holiday cost, and on top of that there is a sustainability question in terms of how much fuel we consume. So we've applied machine learning algorithms to calculate the best amount of fuel we can put on a plane in a safe manner, and we always rely on the pilot to make the final decision. We've improved the efficiency of the fueling by at least 5 to 10% since we applied the machine learning algorithms. So it's more about assisting the decision-making at the end of the day.
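A minimal sketch of what ML-assisted fuel planning of the kind Yasin describes might look like: a regression model trained on historical flight features predicts fuel burn, a safety margin is always added, and the output is only a recommendation for the pilot. The features, figures, model choice, and margin are assumptions for illustration, not TUI's actual system.

```python
# Illustrative sketch only: features, numbers, model, and safety margin are assumptions.
from sklearn.ensemble import GradientBoostingRegressor
import numpy as np

# Hypothetical training data: [distance_km, payload_kg, headwind_kts, expected_holding_min]
X_train = np.array([
    [2400, 18000, 15, 10],
    [1800, 16500,  5, 20],
    [3100, 19000, 25, 15],
    [2400, 17000, 10,  5],
])
y_train = np.array([12800, 9900, 16700, 12400])  # fuel actually burned, kg (made up)

model = GradientBoostingRegressor().fit(X_train, y_train)

def recommend_fuel(distance_km, payload_kg, headwind_kts, holding_min, margin=0.07):
    """Predict fuel burn and add a fixed safety margin; the pilot makes the final call."""
    predicted = float(model.predict([[distance_km, payload_kg, headwind_kts, holding_min]])[0])
    return round(predicted * (1 + margin))

print(recommend_fuel(2600, 18500, 20, 12), "kg recommended (pilot decides the final uplift)")
```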

JONATHAN: How about at Sunrun?

DAVE: Yeah, similarly, we use AI and machine learning and statistics and all that fun stuff throughout the process. I'd say we chunk it into three major buckets. The first is the business intelligence side of the house: how do we get smarter about the hardware we're picking? We don't make our own hardware for any of this, so we have to figure out what's going to fail when it rains versus what's going to work well in a desert. AI and machine learning are very good at that kind of business intelligence to inform our business decisions.

The second is the consumer experience. A really good example: we have an older fleet for a number of our customers, and we needed a way to provide near-real-time data, in this case in the customer app. You'd look at your app and see two-day-old data, because that's the refresh rate at which that particular inverter provides us data. We were able to use AI to predict, with very high accuracy, the likely production and present that as estimated production, and people were ecstatic, because they could still see that their system was working. And that reduced call volumes, or hopefully reduces them, we just launched it, into our call center from people asking, hey, is my system working, because I don't see data from yesterday.
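A minimal sketch of the staleness fallback Dave describes: if the latest inverter reading is older than its refresh window, the app shows a model-based estimate labeled as such instead of stale data. The threshold and data shapes are assumptions for illustration, not Sunrun's implementation.

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=24)  # assumed threshold; older fleets may refresh every day or two

def production_for_app(latest_reading, estimate_kwh):
    """Return (value_kwh, label) for the customer app.

    latest_reading: dict with 'timestamp' (aware datetime) and 'kwh' (float) from the inverter.
    estimate_kwh: model-predicted production for the same period (from a trained model).
    """
    age = datetime.now(timezone.utc) - latest_reading["timestamp"]
    if age <= STALE_AFTER:
        return latest_reading["kwh"], "measured"
    # Data is stale: show the prediction, clearly labeled as an estimate.
    return estimate_kwh, "estimated"

reading = {"timestamp": datetime.now(timezone.utc) - timedelta(hours=40), "kwh": 18.2}
print(production_for_app(reading, estimate_kwh=21.5))  # -> (21.5, 'estimated')
```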

DAVE: So that's the consumer experience side. And now, with generative AI and LLMs and all that fun stuff, we're looking at how we can do more intelligent assistants, more focused chatbots; again, that's around reducing call center volume for repeatable issues.

JONATHAN: I love that app, by the way. I love seeing how many kilowatts were generated each day. It's fun.

DAVE: It's really cool when you can look at the entire fleet and go, oh, it was raining in North Carolina, and you see the production go down.

JONATHAN: But anyway, how about you?

DAN: Yeah. As a technology services company, we have AI built into all of our offering teams, whether it's analytics and engineering, modern workplace, cloud, or security; it's embedded in all of our customer-facing offerings. Actually, just a plug: if you go over to the expo center, we have a Ferrari racing car in our booth, and we're doing some pretty cool work with Ferrari, bringing in the IoT sensor data and using it for insight. So some really cool stuff on the client-facing side.

On the internal IT side, we use AI heavily in our device management for our PC fleet. As I said, we have 130,000 employees, and in 2019 we took the decision to go virtual-first, so pretty much all of those employees are working from home, and we have no plans to change that. So we had to come up with ways to manage that, and it carries a large capital expense, as I'm sure you can imagine. We've done a lot in terms of maximizing the performance of those devices rather than just replacing them after a given time.

And we've seen real benefits there. We've also done a lot with intelligent assistants for support services: HR, payroll, IT support. And from an operations point of view, we've heavily invested in an AI-driven operations stack, of which PagerDuty is part, and as we've already discussed, we're seeing good operational efficiencies from that.

JONATHAN: So, the last 36 months: the amount of innovation, with all the LLMs that have come out and with generative AI, has really changed a lot, changed many organizations' perception of what they can do. On top of that, Yasin, I think you were saying the data science and machine learning have been going on for many years; it's the same at PagerDuty.

I'm curious how you view some of the new generative advances versus existing data science and machine learning. Do you view them as different in your organization? Are they run differently, or are you basically applying one on top of the other?

YASIN: Yeah. I think there's a huge opportunity we see with the advances in generative AI: there are possibilities that make it a lot easier to connect the human to the machine and have a natural-language conversation where you can't tell whether it's the machine or a human you're actually talking to. I've got an analogy for AI versus gen AI: I walk into a bar, and AI predicted which cocktails I should have, whereas gen AI actually produced the next cocktail for me. The caution for us is that you need to be careful about the first sip of that drink from gen AI, because you're not sure what you're going to get.

On the concerns side, we are a customer-driven organization and we have to ensure we build trust in the services we provide. So for us, it's very important to be conscious about the data we use and the security we embed within the processes, and to make sure we can actually trust the answers that come out of the tool itself.

From our perspective, we've created a very clear strategy for how we're going to use gen AI in our organization. We're very ambitious about exploiting the technology and making the best use of it, but we need to be very careful about the safety concerns with the technology as well.

JONATHAN: So to that point, are you applying any of the generative AI to the consumer side today, to the customer experience, or not?

YASIN: Yes. We've categorized it into a few buckets. One is the front office, which is customer-facing: how do we improve processes, engagement, and customer experience? The second is the back office: how do we improve the employee experience, and how efficient are the processes we manage?

And the third bucket is really the difficult one, the game changer: how do we see the travel industry, and how can we use the technology to transform the services we provide? That's the core of it in terms of the real competitive advantage we can gain.

At the front office, one example is the call center. We get millions and millions of emails on a monthly basis, and agents look at those emails and respond to each customer; it usually takes about three or four minutes to respond with an answer. We've applied gen AI to analyze the text, read the sentiment and the context of what the customer is asking, and produce a response within three seconds. So we've drastically reduced the time the contact center spends per customer while still giving a good response in the moment.

We are very careful: we don't just automatically send those responses to the customer. The contact center agent will actually review the draft and then send it across.

DAVE: And I think you had a good point there, which is that whenever you're deploying, and I have deployed AI systems before, it's the same thing: test it with humans first, control the output, make sure it works and make sure it's predictable, and then you deliver it to the customer, because it only takes one wrong instance and you have a disaster on your hands.
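A minimal sketch of that human-in-the-loop drafting flow: a model drafts a reply from the customer email, and the draft is queued for an agent to review rather than sent automatically. The provider, model name, and prompt are placeholders, not TUI's actual implementation.

```python
# Illustrative only: model, prompt, and queueing are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(customer_email: str) -> dict:
    """Draft a reply and note sentiment; nothing is sent without agent review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "You are a travel support assistant. State the customer's sentiment in one word, "
                "then draft a polite, factual reply. Do not invent booking details."
            )},
            {"role": "user", "content": customer_email},
        ],
    )
    draft = response.choices[0].message.content
    return {"draft": draft, "status": "pending_agent_review"}  # an agent approves before sending

# print(draft_reply("Hi, my transfer never arrived at the airport and I missed check-in."))
```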

YASIN: Yeah. So one of the concerns is: is it going to tell a lie beautifully, in a grammatically correct fashion? The other thing is just making sure it's not abusive, because that can result in a negative experience for the customer. On the back office side, we're actually looking at content generation. We have thousands of hotel combinations available, we have different markets, and we have different sentiments in terms of which market requires what type of language to describe a hotel and its facilities. So we're using gen AI in a production environment at the moment to create that content for us.

From an ethical perspective, we've also made a decision that we will not generate images of hotels through gen AI, because that's not a true representation of what the property is. We do not want to mislead our customers by promising something that in reality doesn't exist.

JONATHAN: Is that something that's stated?

YASIN: Yes, it's a clear principle.

DAVE: I know you're the moderator, but I actually have a question off the back of that. Do you have an ethical AI committee or anything like that?

YASIN: Yeah. We've actually created an AI community, and we have a single owner responsible for the principles, the strategy, and the policy. One of the key aspects we're focused on at the moment is upskilling our organization: it's not an IT problem to solve, it's a business challenge as well. So it's not something where the IT department can say, great ideas, we'll go and solve these 20 different possibilities. What we're creating is a funnel, a playground: a platform which is safe for everyone to play in. We do not restrict the use of gen AI at all in our organization, but we provide guardrails, and security always comes first: we will protect customer data, and we do not allow customer data to go into public tools. We're also incubating at least 20 projects at the moment, starting with the very easy ones where you can make a huge impact immediately.

One example, something we deployed only a few weeks ago, is in Microsoft 365: we've got a bot which listens in on the conversations we have in meetings and creates the agenda, the action items, and a summary of what we actually spoke about. From a productivity perspective that's immense, because it usually takes me 20 to 30 minutes to write up a summary and actions, and this is now done automatically for us. And it's the trust we build over time that allows us to gain the confidence to exploit the technology.

The other area we've really exploited is the knowledge base we have. We use Confluence as our knowledge base across IT and we share all our information there, but over a period of time it becomes very difficult to find things: you know the information is somewhere, but you can't pinpoint which page or shared location it's in. We now use gen AI models to let us simply chat, "I'm looking for this information, where can I find it?", and it gives us the information instantaneously.
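A minimal sketch of the retrieval step behind that kind of knowledge-base chat: score pages against the question and surface the most relevant one, which in a fuller setup would then be handed to a gen AI model to phrase the answer. The page contents and scoring approach here are illustrative assumptions, not TUI's Confluence setup.

```python
# Minimal retrieval sketch: find the knowledge-base page most relevant to a question.
# Page contents are made up; a real setup would pull pages via the Confluence API
# and typically pass the retrieved text to a gen AI model to phrase the answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pages = {
    "Incident process": "How to declare a major incident and page the incident commander.",
    "Runbook: checkout": "Steps to restart the checkout service and verify payment flow.",
    "Onboarding": "Laptop setup, VPN access, and first-week checklist for new engineers.",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(pages.values())

def find_page(question: str) -> str:
    """Return the title of the most relevant page for a free-text question."""
    scores = cosine_similarity(vectorizer.transform([question]), matrix)[0]
    return list(pages)[scores.argmax()]

print(find_page("where do I find the steps to restart checkout?"))  # -> "Runbook: checkout"
```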

DAVE: We're using gen AI in similar ways too. We're doing release-note management: we ship stuff basically hourly, and we wanted to send our senior leadership a recap of everything that went to market. But capturing 8,000 PRs and everything associated with them is not digestible, so we've used gen AI to turn that into plain-language summaries of what actually went to market, which we then review. That saves me and my team hours of work coalescing all of that and making reports. And similarly, related to the PagerDuty space, we've been looking at runbook automation in certain regards too, like surfacing incident information: we have this issue in production, what was the solution last time? And it comes back with, last time you kicked the ccs instance and that solved the problem. Cool, great, now I know to do that, rather than spending the time to triage and figure out where that Confluence article is.
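Following the same pattern as the earlier drafting sketch, a minimal illustration of the release-note summarization Dave describes: merged PR titles for a window are handed to a gen AI model to produce a plain-language recap, which a human reviews before it goes out. The PR titles, model, and prompt are made-up placeholders, not Sunrun's pipeline.

```python
# Illustrative only: PR titles are made up; model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

merged_pr_titles = [
    "Add retry logic to inverter telemetry ingestion",
    "Fix rounding error in estimated-production display",
    "Upgrade payments SDK to v4",
]

def release_recap(titles: list[str]) -> str:
    """Summarize merged PR titles into a short recap for non-engineers (drafted, then reviewed)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[
            {"role": "system", "content": "Summarize these merged pull requests as three plain-language bullet points for non-engineers."},
            {"role": "user", "content": "\n".join(titles)},
        ],
    )
    return response.choices[0].message.content

# print(release_recap(merged_pr_titles))  # a human reviews the draft before it is sent out
```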

JONATHAN: The summarization, the reporting, the generation can be a powerful efficiency gain. I'm curious: DXC?

DAN: Yeah. First of all, as we've all said, we've been using AI and machine learning for years. I think one of the great things about this generative AI surge has been the awareness it has built for AI everywhere, and certainly across our company. People who previously weren't talking about AI now have a thirst for how they can use AI in their project, with their client, et cetera.

So one of the first things we did, similar to what's already been discussed, was set up an office of AI, with ethics, legal, and strong governance procedures around the technology. Any client-facing use case needs to be put forward and reviewed to ensure it's ethical and responsible. But we don't restrict; we just have strong policies on which tools should be used, and we're providing ring-fenced alternatives to some of the more publicly available tools, kept within our environment. And we're seeing huge value, similarly.

One of our major use cases: we get a lot of requests for work from clients or potential clients, and we have to turn those around, RFPs et cetera. We have a huge wealth of knowledge, examples of work we've done in the past that went well, and maybe some that didn't go so well, and understanding those cases and bringing them together is hugely valuable, especially at our scale. In the past we've had a lot of local discovery, maybe within one account or one industry, but now we can make that global knowledge, and that is truly powerful. But again, it all has to be underpinned by strong governance principles.

DAVE: Yeah, it's funny, we basically had the same thing. My background is mathematics, pure science, and AI, so I was the nerdy guy doing equations and everyone was like, what is he doing? And then all of a sudden it became CMOs and CEOs, everybody saying generative AI is going to solve all the world's problems, it's going to be amazing, world hunger is going to end. It was really nice, because there was excitement about it, and understanding, and a desire to do that stuff. But at the same time, what happened was everybody everywhere was trying to do AI stuff, and we were like, whoa, whoa, whoa, that's not a measure of success.

So similarly, we had to create an AI group that's responsible and tasked with how we broadly deploy AI within our organization, to drive efficiency and some of these things we've talked about. But that's a short-lived group; everyone knows it's going to exist for two or three years and then get woven back into the fabric, and then it just becomes a thing that every team does.

DAN: Yeah, and I think putting together those multi-layered training curricula is really important as well. Just a simple thing: when we first mentioned hallucination, people were like, what is that? This is a computer program; it's going to give me a different output to the same question? So that level of education is really important at different levels of the organization, from stakeholders down.

JONATHAN: I'm curious: just by having 3,000 engineers who have access to this technology, you can easily duplicate the effort of solving the same problem.

YASIN: So what we've ensured is that we've created visibility in a single community group that everyone has access to: what are the key things individuals are working on, and sharing the experience of the experiments that are being done.

JONATHAN: That leads to one of the questions I was going to ask. At PagerDuty, machine learning has also been a team and a project we've been working on for six-plus years now. Generative AI started with a SWAT group, but now it's part of every team: each is investigating what they can do, whether it's summarization or a capability that could be on the roadmap in the future.

I'm curious: when you think about not only the data science and machine learning that's been going on for some time, but also generative AI, are there any implications for the structure of your teams?

DAVE: Yeah, as I said, we did create an AI group that sits somewhat independently to drive a lot of that. But to your point, at the end of the day it's going to start weaving back into the teams. So it really goes back to the original questions you were asking: how do we set teams up to be successful, to have accountability, to drive these efficiency goals, but also to have the autonomy to experiment and understand? When you set that up right, the rest of it follows suit.

DAN: Yeah, and I think it's important not to get distracted just because it's a cool technology. There will be new problem sets we can maybe discover and solve, but what I keep saying to my team is: we have a lot of problems we already want to solve on the backlog; how can generative AI accelerate that? So I think the service structure becomes even more important, because those teams already have a backlog of things that need to be solved, they have a roadmap. How can generative AI accelerate the value of that delivery?

And I think this is, again, more of a temporary thing. We have consolidated our data, AI, and automation teams together as a platform organization to provide for and enable the service teams on top of that. So that is an organizational change we've made, in addition to the formation of the governance structure.

JONATHAN: Great.

YASIN: I think in the short term, if it's used the right way, it's going to create great efficiencies for our engineers in terms of where they spend their time. There's also an opportunity we see in terms of creating a new product, entering a new market, or having a new proposition, doing things differently. And it's going to improve the overall processes and operations we deliver for our customers and our employees as well.

So I think it will be disruptive, definitely in a positive way in the short term; in the long term, we just don't know how disruptive it will be. But there was a question asked: will gen AI replace humans eventually? And the answer is no. I think humans will use AI to augment their experience even more. The ones who choose not to use AI will be replaced.

JONATHAN: It's always interesting: whenever these disruptions come along, there's a level of FUD put out there, but at some point it usually becomes a rising tide that raises all boats. So I wanted to open it up for questions from the audience, if you have any.

Yes. In the back.
