Clinton: Good afternoon, everybody. Thursday afternoon at re:Invent. We are the survivors, so everybody should give themselves a good pat on the back and a round of applause, please. Yes, it's called winning re:Invent.
I hope everybody did save a little bit of time for this conversation about DevSecOps and what it takes to culturally transform a venerable company into modern ways of building and, more importantly, securing software.
My name is Clinton Herget. I'm the Field CTO at Snyk. We are a developer security platform; we can go into a little bit of detail about what that means. The company is pronounced like a pair of sneakers or sneaking around. It's also an acronym: S-N-Y-K stands for So Now You Know. So now you know that much, at least. Hopefully by the end of this hour, we'll all know a little bit more about implementing DevSecOps in a traditional organization.
As for me, I spent about 20 years building software for a living. Now I talk about building software for a living, and that is much, much easier. I am joined by my friend Dan, who is the lead security engineer in the cybersecurity engineering and risk management organization at REI. Dan, do you want to introduce yourself?
Dan: Hey, everyone. My name is Dan. I joined REI about 4.5 years ago. For those that may not know what REI stands for, that's also an acronym, for Recreational Equipment Inc. We are a co-op and an outdoor retailer here in the United States.
Prior to that, I spent four years over at Nordstrom, another retailer here in the US. And then prior to that, about nine years serving in the United States Army as a cybersecurity operations officer. So I'm glad to be here to talk about our journey into DevSecOps with Clinton.
Clinton: Awesome. Yeah, and I'm so excited to get into the details of that transformation that you were really spearheading over at REI. I think we'll do first, just to define some terms so we're all on the same page: what is DevSecOps? How are we using that terminology? What does it mean when we're introducing security into modern ways of building software? And then I'd love to dig deep into the details of how Dan and his team accomplished that over at REI.
So we're going to talk about what makes modern software different. Why is DevSecOps something we need to think about in building our modern applications when we maybe didn't, you know, 10 and more years ago? Why has application security had to change as a result? Then we'll get into how REI built its developer security program, and in particular how the cultural implications of that transformation were managed over time. This is much more a talk about people and process than it is about tools and technology. And then we'll talk about the future: how is REI thinking about some of those issues related to DevSecOps moving forward? We'll leave some time for audience Q&A at the end; we really want this to be as interactive as possible.
So I think from my perspective, and, you know, talking specifically about how we see the world evolving here at Snyk: look, obviously we wouldn't all be here at re:Invent if DevOps hadn't changed our lives and the way we do our work one way or another. The way we build software today is simply fundamentally different. And that's more than saying it's quantitatively different. Yes, there's more software, but it's also fundamentally developed and structured differently than the software of yesterday.
You know, it used to be we would build a release every few months and go through a very manual release process of deploying that out into production, with a lot of manual stop gates involved. Today we do agile development, we deploy every few minutes, and we have continuous integration pipelines and automated tools that do a lot of that undifferentiated heavy lifting for us.
We used to have siloed developer and operations teams, and we recognized that by integrating those teams together, into DevOps centers of excellence or integrated platform teams within organizations, we could build software much, much more quickly. Because if we integrate the operation of a particular application or service into the team that builds it, those teams can then run out ahead; they're no longer bottlenecked by the operations side of the house. So by building and maintaining software as part of this continuous, infinite loop, we can unlock a lot of innovation and potential.
And of course, the way we define the requirements of software has changed. We no longer have these rigid waterfall requirements documents where we have to decide every aspect of user interaction before a piece of software is built. We define the software as we build it, as part of these rapid, agile, iterative cycles where we can make changes in real time. And again, that allows things like microservice transformation: we break down monoliths into microservices to allow teams building individual features to run out much further ahead, because they own that entire application, cradle to grave.
And then finally, and most importantly, I think, from a risk perspective, we no longer deal with very limited and visible software supply chains. You know, it used to be, in the olden days, everybody building software for an organization basically worked in the same building, and they deployed every few months onto servers that often were in the basement of that building. That's a very easy supply chain to understand.
Now, obviously, we have these nearly infinite cloud-based software supply chains, because we have things like containers, things like open-source dependencies, these massive application graphs, to say nothing of the third-party services and the cloud providers that are providing a lot of that agility. So software risk fundamentally looks different today than it did even just a few years ago.
What hasn't necessarily kept pace is security, the way we implement the risk management of that software. Traditional application security relies on testing after the fact, testing the artifact of a software development process rather than testing the process itself, continuously, as part of the act of building software. It's generally audits, right? It generates a big PDF or a report saying here's everything you did wrong, as opposed to a fix-based or iterative approach that allows developers to continuously build better.
It relies on siloing, right? We have to secure our application and then secure the cloud environment it goes into independently of each other, without recognizing that risk is often holistic. We need to understand the security implications of an application holistically, by looking at multiple factors of context across that SDLC.
And finally, traditional AppSec relies on security being a bottleneck, on being sort of the office of no, and on siloing that knowledge in a handful of security experts within an organization, rather than farming out that knowledge, relying on developers to be security champions and embedding that software risk knowledge within the teams that are building the software.
So when we talk about DevSecOps, we're talking about the right-hand side of this diagram: a developer-first approach to security, in precisely the same way that we made operations developer-first by integrating it into the process of building software.
OK, you all have heard me talk enough. I think we've defined the landscape of what we mean by DevSecOps. I really want to dig in with you, Dan, to find out how this was implemented at REI. You're talking about an 85-year-old retailer, and you said you're about four years into this journey now. What did that landscape look like when you were on the ground? And what was the relationship like between the builders of software and the assessors of risk when you arrived?
Dan: Yeah, that's a good question. So I arrived at REI four years ago, and essentially we didn't have an official application security program. Our security department was pretty much siloed off from what we call our digital retail team. They were the main group developing software and microservices and managing our website, REI.com.
So when we got there, I suddenly realized that, hey, within our software development program there really wasn't any kind of security knowledge per se. We definitely had a lot of developers within REI that really championed security, though, so that was a very encouraging thing.
There were no official security tools for SCA or SAST, software composition analysis or static code analysis; a lot of that depended on open-source tooling. So one of the funny things when I came in: I asked some of the developers, oh, what do you do for security? And they said, well, we don't really have anything scanning our code or looking at the dependencies, but we do have this thing in SonarQube. It scans our code, but there are no gates there. It just gives us a report and we don't really take a look at it. Similar to that, they also used a dependency checker, which is an open-source tool.
So there were some efforts in there, and there were some engineers and developers that had a security mindset, but they implemented a tool without really having the knowledge of, what are the results supposed to be? What are we supposed to do with these results?
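To make the idea of a gate concrete, here is a minimal sketch of what failing a build on scan output could look like. The report path and the JSON field names are assumptions for illustration, not the actual output format of the tools Dan describes.

```python
import json
import sys

# Hypothetical report shape: {"vulnerabilities": [{"id": "...", "severity": "critical"}, ...]}
REPORT_PATH = "scan-report.json"   # assumed scanner output location
BLOCKING = {"critical", "high"}    # severities that should fail the build

def main() -> int:
    with open(REPORT_PATH) as f:
        report = json.load(f)

    blocking = [v for v in report.get("vulnerabilities", [])
                if v.get("severity", "").lower() in BLOCKING]

    if blocking:
        print(f"Gate failed: {len(blocking)} high/critical issues found.")
        for v in blocking:
            print(f"  - {v.get('id', 'unknown')} ({v['severity']})")
        return 1  # a non-zero exit code fails the CI stage

    print("Gate passed: no blocking issues.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```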
And really, from us on the security side of things, we were very much on the, I would say, right side, protecting in runtime. So we did have a web application firewall, bot protection, things to protect our website after things were in production. But to your point, a lot of the areas where we could have improved were on the application development side, so that we could identify the problems earlier and make sure they don't get introduced into production.
Clinton: That problem you mentioned of developers saying, yeah, we have security tools in place but we don't really look at the results, right? Or that's there because of a compliance reason, or somebody said we had to integrate that into our pipeline, but it's not ours, we don't own it, we don't actually get any value from it. How did you go about handling those objections? I mean, do you have any particular horror stories concerning that initial recognition of, hey, wait a minute, even though we say we're doing security, it's not actually integrated into the way our engineers are approaching their jobs on a day-to-day basis?
Dan: I wouldn't say we had any particular horror stories starting out four years ago. From my perspective, Log4j was definitely an interesting time, just trying to get an idea, from a security perspective, of what kind of software is out there. Where is Log4j actually used? That was initially a hard time for discovery. Fortunately, we had some other tooling and a very engaged developer community to help us understand how their applications were structured, so we were able to tackle that problem. But we didn't have overall visibility from a security perspective, tooling to give us that robust reporting. So it was a very onerous process to track down, remediate, and understand what actually had that dependency problem.
Clinton: Yeah. And when I think back to four years ago, I'm sure you were also doing a cloud migration at the same time, right, as part of a larger effort at digital modernization. How did the security conversations fit into the way you approached developers when it came to saying, hey, now we want you to take ownership of all of these additional pieces? You're probably moving to things like containerization and infrastructure as code at the same time. What was the relationship like between the cloud migration piece on the one hand and modernizing your security on the other? How did those interact?
Dan: Yeah, definitely. Also starting out four years ago, we were actually just starting our cloud security program as well. Four years ago our cloud migration wasn't really fully implemented; we were just starting an effort to move most of our microservices and our website over into AWS. So that was the first big initial push.
With that, there was definitely security in mind as we were moving out of our data center. And I think what it started out with was utilizing cloud-native solutions like GuardDuty and Security Hub to help us gain an understanding of the risks in the cloud environment.
But again, as I came in, that started naturally within the cloud engineering team, which was very encouraging. And then we began understanding the problem set and the environment, and really just started that cloud security journey, eventually moving into cloud security posture management and other tooling.
And going ahead now, we have moved our website into AWS, and going forward, our next three-year journey is to actually complete our cloud migration by moving out of our data center.

Clinton: Yeah, and I'd love to get into that a little bit more, maybe a little later.
What I'm particularly interested in, though: you made references to a bunch of other teams. You have cloud engineering, you have platform engineering, all of which are on some level designed to enhance the developer experience within the organization. But I think when we think about DevSecOps transformation, it's never as simple as buying a bunch of tools, right? Or procuring a bunch of open source and slapping them all together. It really has to be a cultural transformation involving people and process.
So, you know, we've got the lay of the land: you had these very disparate relationships between software development on one hand and security on the other. How did you get that initial buy-in? What were some of the stories you had to tell to developers to get their initial excitement level up about what, on some level, could sound like you're asking them to take on a bunch of additional responsibility on top of their day-to-day job? How did you ease that transition and mindset for them?
Dan: Yeah. So one of the interesting things is, when I first started at REI, again four years ago, I was hired into the security organization. However, in order to address that problem, and for security to understand our platform engineering, developer community, and cloud engineering teams, I was actually embedded with our SRE and platform engineering team to better understand the processes. How are they deploying their code? How are they building their pipelines?
So in a way, I got a very good understanding of how things worked within our digital realm at REI, because prior to that, I would say our security team was very segmented off. Again, we were very policy focused, much more into what I would call our enterprise security environment, with endpoints and email protection, things like that.
But starting off there, in order for us to gain that trust with the development community and the platform community, it was really like we said, hey, Dan's gonna go be part of SRE for 50% of his time; he's not actually going to be in the security engineering team. And for me, that was very, very helpful.
I was able to just embed with them and, as an engineer, help contribute to code, understand how microservices work, understand how an infrastructure engineer did their work. And from there, it was that building of trust, and then coming in to introduce more security concepts to them. So yeah, that's how we started that.
Clinton: That's a huge investment, though, right? To take someone like you and say 50% of your time is going to be embedded with one of these platform SRE teams. I would imagine maybe that created a little bit of friction internally. But I could also imagine that, from the engineering perspective, being able to hear from someone who has actually done that work, right, who's had the experience of actually getting the Black Friday push done into production, probably built some trust within those organizations. Like, hey, security gets it, because they've actually been down here in the trenches with us.
Dan: Yeah, absolutely. That was definitely the intent, and that's, I think, the result that really happened. And with that, we were able to better understand the requirements. So kind of stepping back: when we finally decided to start an application security program, initially what occurred was we tried to push a different static analysis scanning toolkit from the security side.
However, because of that divide and that gap of not understanding our process, we actually had a vendor where we tried to say, hey, here's a SAST scanning tool, we want to start scanning our code, we want to be proactive about it. Come to find out, the developer community pushed back and said there was tons of friction. They said, this doesn't even work in our Jenkins pipeline, we can't even integrate it. What are we doing? What is security doing? Do they even understand our process?
And so that really took us aback. So going forward, I spent about a year or two, I would say, embedded with those teams, and from my perspective I gained a lot of knowledge on how Jenkins works and how CI/CD pipelines work.
And from there, we kind of restarted our AppSec program. We actually invited the developers and architects and said, all right, we're going to restart this journey. What are some of the requirements? We want to bring you along, do POCs and POVs, and really work together on securing our application environment.
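As a rough sketch of what that kind of pipeline integration can look like once requirements are agreed with the teams, here is a CI-style step that shells out to the Snyk CLI. It assumes the CLI is installed and authenticated in the build environment; the threshold flag and exit-code handling are an approximation, not REI's actual Jenkins setup.

```python
import subprocess
import sys

def run_dependency_scan(severity_threshold: str = "high") -> int:
    """Run an open-source dependency scan and return the process exit code."""
    cmd = [
        "snyk", "test",
        f"--severity-threshold={severity_threshold}",  # only fail at or above this severity
    ]
    # A non-zero exit code generally indicates issues were found (or an error);
    # a pipeline stage would typically fail on it.
    result = subprocess.run(cmd)
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_dependency_scan())
```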
Clinton: It turns out, yeah, when you actually get developers' hands on a tool that you're thinking about procuring, one that's going to impact their workflow, they're ultimately more likely to adopt it if they've had a chance to really put it through its paces ahead of time, right? Because I think the way we interact with tools as engineers is just fundamentally different from the way that might look to other stakeholders in the organization. And that can unveil either a lot of friction or a lot of potential to collaborate, right, and increase that velocity.
I'm also especially interested in the role of the platform team in the organization, because I think as we go down this story of how DevOps has really shifted our thinking over the last 10 years or so, there was initially this rush of excitement, right, to say, well, now teams are going to own everything about their service, and they can choose their own tech stack and build their own pipelines. And what happened? Organizations ended up with thousands of Jenkins pipelines without a lot of management, or at least visibility, into what they're even doing. Do we have the right security controls in place?
And at least what I've seen is the reaction to that being the creation of these centers of excellence or platform teams to build paved roads for developers. Which is important not just from the security perspective of getting that risk oversight and ensuring there's some level of commonality in what all those teams are doing while still embracing the DevOps philosophy, but also doing that in a way that is congruent with what we want from a positive developer experience, right? Which is reducing the cognitive load, shortening the feedback loops as much as possible, and ultimately keeping developers in their flow state, so they can actually be doing what we hired them to do rather than constantly playing ping pong back and forth with AppSec.
So did that team become an ally in projecting that narrative for DevSecOps throughout the organization?
Dan: Oh, yeah, absolutely. Our platform engineering team internally, we give outdoor names to a lot of things, so we call them our Alpine team. That's basically our platform engineering team, and they really were champions and really our allies in implementing a lot of the tools and application security.
They gave us a lot of insight and a lot of feedback into how to integrate within pipelines and how to modify what we call our Crampon framework to be more secure and to help those developers. Within REI, for a lot of our site, we utilize this Crampon framework as a standard, like a golden path, a happy path, so that developers have a really standard set of libraries and a standard set of deployment mechanisms into AWS.
That way a developer can just focus on developing their features, developing the code, and then rely on this core Crampon framework and our deploy mechanism, called Sherpa, to deploy a container and microservice into ECS.
So yeah, I totally agree. During that whole transformation at REI, within the last two years I would say, that really started when we actually created a platform engineering center of excellence and really coalesced around that. That was immensely helpful. And then from security, we can just focus on that core platform to implement guardrails and our tooling.
Clinton: And I think that's such an important concept: what do we need to do to enable security to move past the concept of being gatekeepers, right, being the office of no within the organization, and really turn them into creators of guardrails? They're building the road, they're defining where the road goes. Platform engineering builds the road and then developers drive on it, right? At least for, you know, roughly the 80% part of the problem, although that does leave the 20%. And what happens when you've got a team that really believes they're a special snowflake, right? And they say, here's why we don't want to drive on the paved road. What was your strategy for handling some of those objections within the organization, for people who maybe didn't want to play nice with that platform vision?
Dan: That is a good question. We do have one or two of those types of programs. I would say we're still taking the approach of, they're outside of our regular Alpine process, but we're really engaging with those teams to understand how it is that they're deploying code and how they're developing within AWS.
But if they have that special deviation from the platform, we're still trying to figure out the right process to really handle that. And just as an example, with Snyk, I'll use that: we've onboarded the core of our Alpine platform and all the microservices that fall underneath it.
Going into the future, though, it's considering how we bring in our merchandising teams that also develop code, some of our internal teams, our mobile application team for our mobile app. How do we integrate that into Snyk? And that's a process and a journey where I'm sitting down with the team to just understand what they're doing.
But we don't have a very hard enforcement saying you must go onto this platform.

Clinton: But ultimately, at the end of the day, we're engineers, right? We want the carrot and the stick; we want to understand, what do I get from doing that? And typically what you get is a lot less friction, as long as you're staying within this realm of controls that we've already implemented at the platform level, while understanding that there are always exceptions, right?
You want to remain flexible enough to not get that sort of reputation, again, as the department of no. One of the other problems that tends to come up in these circumstances: obviously there's the friction issue, but then there's also kind of an ownership and attribution problem, right? You're understanding that we've got all these services, and I would assume a pretty large amount of legacy code for an 85-year-old company, especially as a retailer.
And when you say we're going to devolve this responsibility to teams, we're going to move from the hub to the spokes of the wheel, that can sometimes leave out critical areas of responsibility, right? So how do you deal with those services running in production that don't have clear ownership, or where you don't necessarily understand who owns that responsibility, particularly for the one thing you can't actually shift out to the spokes, which is the risk appetite of the organization?
Dan: Yeah, I think to answer that question there is a good example. As we started moving into AWS, we were also starting to decompose our monolith; our website used to be a completely monolithic application.
As we started doing that, it became a little bit easier. Internally, as we started splitting out into microservices, we intentionally identified a team, we identified the owner. So internally we knew who the developers and back-end and front-end engineers were for the microservices as we did that.
However, during the migration, we still have pieces of code and services and functions in our monolithic application. So this monolith is still there, but when issues arise there, there is no ownership. So it's interesting to tackle, because when a security issue arises, I look at it and see, oh, there's a dependency issue in the monolith.
It relates to this function within the monolith, so who owns it? And everyone's like, I don't own it, it's not me. So one of the things I've sort of tackled, as I've started to gain more experience with our application security process and how we develop code, and using Snyk as well,
is that when I do see an issue, I'm able to actually go into our monolith, find a dependency, and commit the fix myself. It's almost, I would say in my words, like chaos engineering, because as that change goes in, they see myself, security, committing code, and then it's like, hey, security is doing something within our monolithic application, we should take a look at that.
So it's not a perfect answer of saying, OK, we definitively find the owner for this. But for a lot of unknown things, our security team will take the initiative to make the change and push the change, and the development community takes notice of that, because they may have a dependency on that feature, or suddenly someone who's been working on it realizes, hey, security pushed a change. And then the other thought that comes into their mind is, oh, well, security is actually committing code into our application, and that gets a lot of attention.
Clinton: I've got to say, I've never heard that solution to the problem before. If security becomes the default owner, I think you find ownership chains pretty quickly within engineering. So that's especially interesting. I've got to ask, though: are you thinking about any kind of frameworks of adoption or tooling around explicit lineage of software? That's been a big buzzword recently as we think about how we can track, given that we're in this cloud-native world of almost impenetrable, opaque software supply chains, something even as simple as who's the owner of a particular repo, and then how does that end up in a container that ultimately ends up in a production pod in our infrastructure. Which could be as simple as adopting best practices like CODEOWNERS files, but there are also frameworks like SLSA and others that have been gaining a lot of traction. Are you looking in any directions like that, to make that sort of thing more explicit, if only for yourself, so you're not doing so many late-night commits?
Dan: No, but that is a good point, actually; I do like that. I think at REI we've been modernizing a lot of our tech stack behind the scenes, so I do see CODEOWNERS as a possible solution as we mature our program and go into that. But to answer your question, no, I actually haven't looked at a particular framework to help manage that, but that's a good suggestion.
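For illustration, here is a simplified sketch of a CODEOWNERS-based ownership lookup. Real CODEOWNERS files use gitignore-style patterns; the matching below is only an approximation, and the example path is hypothetical.

```python
import fnmatch

def parse_codeowners(path="CODEOWNERS"):
    """Return (pattern, owners) pairs in file order, skipping comments and blanks."""
    rules = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            pattern, *owners = line.split()
            rules.append((pattern, owners))
    return rules

def owners_for(file_path, rules):
    """Last matching rule wins, mirroring CODEOWNERS precedence."""
    matched = []
    for pattern, owners in rules:
        p = pattern.lstrip("/")
        if p.endswith("/"):                # directory rule: approximate as a prefix match
            hit = file_path.startswith(p)
        else:                              # file or glob rule: approximate with fnmatch
            hit = fnmatch.fnmatch(file_path, p)
        if hit:
            matched = owners
    return matched

if __name__ == "__main__":
    rules = parse_codeowners()
    print(owners_for("services/checkout/pom.xml", rules))  # hypothetical path
```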
Clinton: Well, I think that's a good segue, because what I would love to hear more about is, as you approach 2024: I don't think any organization on earth would say we are a mature DevSecOps organization; there are always new things to learn, there's always a next step. But given that you're four years into this journey now, what do you see as the logical next set of steps you're looking at over roughly the next year or so on your DevSecOps roadmap?
Dan: So, yeah, I think what I came to realize as we went through this journey is that REI is not only a retailer that sells outdoor gear; we manage a website, we manage mobile applications, we have a very robust, very creative team that manages the website. As I mentioned earlier, we created our own software framework with Crampon, and we have some very incredible engineers managing that platform behind the scenes at REI. So in a way, REI also has its own mini software company. The Crampon framework is versioned, and it has a ton of dependencies and transitive dependencies, open-source libraries of its own.
So the next step in the journey is really taking a look at, OK, what does it look like to do monthly updates or monthly version updates to our Crampon framework? How do we get our application teams to consistently, or in a more programmatic way, update their Crampon library? That way it's not just one big major update, and we can keep pace with open-source vulnerabilities and out-of-date libraries. So yeah, that to me is the near term for the next year of the journey, and maturing that. The other thing is, we've also recently done a bug bounty program. That's really a big win and a good way to start highlighting and giving developers more awareness of security vulnerabilities within their applications. And next year, tying that together with tooling like Snyk pushes them to stay aware and maintain their code.
Clinton: Yeah, I think that's a really interesting analogy, right? We're now, what, 15 years into this concept of software eating the world; every company is now a software company. But there's almost an additional layer of abstraction where these platform teams inside almost become software companies within software companies, right? Because they're building the tooling that other developers rely on. And so we're seeing this more and more, where you almost need to build out explicit SLAs and expectations as if you're dealing with external customers, because you are now in the critical path for every other team that is building on top of that paved road.
Has there been any, I guess, cultural backlash from that, when it comes to even things like writing documentation or how you handle dependencies internally, when you can have maybe one team managing a library that's utilized by hundreds or thousands of other services, a library that, if it were to go down, would cause tremendous business disruption? How are you dealing with that, especially coming from the days of a monolith where everybody's maintaining the same code base without a lot of really explicit ownership when things might get in trouble?
Dan: Yeah. So I guess that ties back to the really close relationship with our platform engineering team and with Crampon. Within that, when we first introduced Snyk, for instance, we had office hours on helping developers address their open-source vulnerabilities and dependencies. And a lot of them came back saying, hey, these dependencies, these libraries, are not necessarily tied to us; they're tied to Crampon.
And they were a little bit wary that, hey, eventually, if security decides to put up a gate and enforcement and say, no, if you have critical vulnerabilities in your dependencies, you're going to be blocked, but it's not our fault because we're utilizing this Crampon version. So I think we now have a deprecation policy, we have SLAs, and we're working towards defining what that minimum Crampon version, that minimum framework version, is and communicating that out to developers so they understand they need to be on a certain minimum version of our internal tooling.
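A hedged sketch of how a minimum-framework-version policy could be checked automatically. Since Crampon is internal to REI, the manifest name, the version field, and the policy floor below are all hypothetical placeholders.

```python
import json

MINIMUM_SUPPORTED = (4, 2, 0)  # hypothetical policy floor for the internal framework

def parse_version(text):
    """Turn a version string like '4.3.1' into (4, 3, 1) for tuple comparison."""
    return tuple(int(part) for part in text.split("."))

def check_manifest(path="service-manifest.json"):
    """Return True if the service's declared framework version meets the floor."""
    with open(path) as f:
        manifest = json.load(f)
    declared = parse_version(manifest["crampon_version"])  # hypothetical field name
    ok = declared >= MINIMUM_SUPPORTED
    status = "OK" if ok else "BELOW MINIMUM"
    print(f"Declared {'.'.join(map(str, declared))} vs floor "
          f"{'.'.join(map(str, MINIMUM_SUPPORTED))}: {status}")
    return ok

if __name__ == "__main__":
    raise SystemExit(0 if check_manifest() else 1)
```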
So yeah, I think that's how we're addressing that right now.

Clinton: What goes into constructing that roadmap, whether that's on an annual basis or however you chunk it out? How do you measure success? How do you really understand where you're at? How do you think holistically about where the gaps in your security program are and what you might need to bring in next, either from a tooling perspective, a process perspective, or ultimately where you're at on that cultural change and the readiness of the organization to accept new responsibilities and new ways of thinking? If you zoom way out, what are some of your metrics for deciding where you're at and what comes next?
Dan: So some of the metrics that I look at: mean time to remediation is usually the one we measure, to be able to say, OK, what is that mean time to remediate when a development team gets a finding? How long does it take to resolve criticals? How long does it take to resolve highs? The critical and high severities are usually what we use for that. As we start to reduce that mean time to remediate, we understand that, hey, teams are starting to take security a little more seriously; they're integrating it into their Jira, into their sprints.
And then when we look at tooling, it really depends. The next step is, OK, if we introduce DAST, for instance, what will that surface, and is the team ready to take on some of those additional findings? That's actually similar to how we did our bug bounty program: we didn't take it public for a while because we wanted to make sure we had a good cadence and that teams had a good process to address the findings from a bug bounty program in a timely manner. So looking at that, one of our key metrics is mean time to remediation.
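As a worked example of that metric, here is a small sketch computing mean time to remediation by severity from finding records; the record shape and dates are made up for illustration.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical finding records: (severity, opened, resolved) as ISO dates.
FINDINGS = [
    ("critical", "2023-10-01", "2023-10-04"),
    ("critical", "2023-10-10", "2023-10-12"),
    ("high",     "2023-10-02", "2023-10-16"),
    ("high",     "2023-10-20", "2023-10-27"),
]

def mttr_days(findings):
    """Mean time to remediation in days, grouped by severity."""
    buckets = defaultdict(list)
    for severity, opened, resolved in findings:
        delta = datetime.fromisoformat(resolved) - datetime.fromisoformat(opened)
        buckets[severity].append(delta.days)
    return {sev: sum(days) / len(days) for sev, days in buckets.items()}

if __name__ == "__main__":
    for severity, avg in mttr_days(FINDINGS).items():
        print(f"{severity}: {avg:.1f} days")  # e.g. critical: 2.5 days, high: 10.5 days
```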
Clinton: Yeah, because there's a huge balance you have to strike here, right? You want to get developers the right information at the right time, but with the necessary context to understand it. And if you're just dumping undifferentiated alerts, particularly from a tool or a process that runs further to the right, and you're not integrating it into their day-to-day ways of working, you're doing everything wrong from a developer experience perspective, right? You're getting them out of their flow state, you're adding to their cognitive load, you're lengthening the feedback loop, and that, I think, undermines the core currency you're dealing with, which is developer trust. You want your engineers to say, I have confidence that the security program is pointing me in the right direction and giving me everything I need to make the right decision the first time around, rather than having to guess.
Right, and only find out later that there's a critical bit of context they weren't aware of because it wasn't integrated, correlating all of that information together, and it just came as the endless stream of, what, 900-page PDFs that we've all had to deal with. So I wonder, as you think more broadly, past the next 12 months, do you have a north star in mind? What would success look like to cause you to say, hey, I think we've implemented as much as we can of this DevSecOps cultural shift, and I'm proud of where we're at? Do you have any sense of where your utopia would be, so that you would know, hey, I think we're really doing this right?
Dan: Yeah, I think my north star would basically be when the process is complete. So if a finding or security issue does arise, a developer is able to get that understanding and context within a system they're used to working in. Take Jira, for example: if there's a process where a finding comes in, the developer gets it as an issue that just shows up on their Jira board, and they're able to contextualize it, understand the problem, get it prioritized, move it forward, and implement the fix within our defined SLAs, then that to me is a success, to be able to address those vulnerabilities in a timely way.
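A minimal sketch of what routing a finding onto a team's Jira board might look like, using Jira's standard REST issue-creation endpoint. The domain, project key, credentials, and the shape of the finding dictionary are placeholders, and a real integration would also map severity to priority and SLA due dates.

```python
import os
import requests

JIRA_URL = "https://example.atlassian.net/rest/api/2/issue"  # placeholder domain
AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])

def create_finding_issue(finding):
    """Open a Jira issue for a security finding and return its issue key."""
    payload = {
        "fields": {
            "project": {"key": "SEC"},          # placeholder project key
            "issuetype": {"name": "Bug"},
            "summary": f"[{finding['severity'].upper()}] {finding['title']}",
            "description": (
                f"Component: {finding['component']}\n"
                f"Recommended fix: {finding['remediation']}\n"
                f"SLA: resolve within {finding['sla_days']} days"
            ),
        }
    }
    resp = requests.post(JIRA_URL, json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]

if __name__ == "__main__":
    key = create_finding_issue({
        "severity": "high",
        "title": "Vulnerable transitive dependency in checkout service",
        "component": "checkout-service",
        "remediation": "Upgrade the affected library to a patched version",
        "sla_days": 14,
    })
    print(f"Created {key}")
```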
So, yeah, piggybacking off that: again, mean time to remediation is one of those things. We have our own set of SLAs internally for criticals, highs, mediums, and lows. And I would say, if in every category an application team were able to meet remediation of security issues within the defined SLAs, even all the way down to a low finding, that would mean we have a very robust process and very good trust between security and the development teams to address any kind of security issue.
So yeah, going through our journey, and I shared this with you earlier, something made me a little bit happy and encouraged. We have a Slack channel for deployments, and there was a deployment recently where someone said, hey, I'm deploying a fix to our application because Snyk said it was a problem. And I was like, oh, someone actually looked at Snyk. They actually credited Snyk and our security team, saying, all right, I'm taking this and I'm going to deploy it because I recognize that this is the issue and this is a problem. So that was an encouraging sign, and I hope to see more.
Clinton: Well, now, when I heard that story originally, this happened on a very particular day, right?

Dan: Oh, yeah, I think it was close to Cyber Monday. So I was like, oh wow, yeah, you're going to deploy on that day?

Clinton: OK. Well, we're glad that went well. But I do think that speaks to the point of not just understanding that your SLAs are met from an application security perspective, but that the engineers are also able to implement those fixes at their DevOps velocity and at scale. Because, let's face it, their day job is not to respond to tickets from you, right? It's not to constantly be worried about the risk management of the software. It's to build features, to innovate, to get user stories into production, and to ultimately move the enterprise forward. The question is, how do we do that at scale while ensuring we have adequate visibility of all the risk? So I think that makes sense as a north star, and hopefully your engineering and platform colleagues are sharing in that, while recognizing that they've got an entirely different set of incentives that they're motivated by at the same time.
So I think we're getting down to the last maybe third of the hour here. I'd love to open it up if anyone in the audience has questions.
Well, I just wanted to say thank you to everybody for coming out today and learning more about DevSecOps. Again, we are Snyk, a developer security platform covering the entire modern application stack, from code, open source, container, and infrastructure as code security, moving rapidly into the application security posture management space. If you're interested in hearing more about Snyk, please don't hesitate to come see us at the booth for, I think, the next couple of hours at least, or at snyk.io if you're interested in learning more.
We are also available, of course, on the AWS Marketplace. We're very proud to have integrations with things like AWS CodeBuild, Security Hub, EKS, and probably a dozen other AWS services that I'm forgetting; a really important partnership for us moving forward. Thank you to our friends here at Amazon for giving us the time, and thank you to Dan for a great conversation about DevSecOps. All right, thanks. All right, thanks, everybody, for making the time.