I’ve always been fascinated by how other cybersecurity professionals ended up in their roles. For some it was a childhood dream to be a hacker (or to catch hackers) after watching an old-school hacker movie; others fell into their roles organically after a career in enterprise IT; and, if we’re being honest, some are in it for the money. With record shortages of skilled security personnel, the field is attracting an increasingly diverse mix of people and backgrounds.
I’ve always enjoyed my own origin story (though I may be biased), partially because it was so unexpected. Here it is.
Notice: Any names and specific details have been changed due to privacy concerns.
Deployment Day
This story starts back when I was a QA Analyst. I’d actually just been promoted to a Senior Analyst and one of my new duties was to manage our biweekly production deployments.
At the time our primary application was a monolith, and our deployment process relied on a number of different databases which often needed to be migrated, as well as a handful of other build steps. All in all, the deployment itself would take roughly 30 minutes and pretty much needed constant babysitting in case we encountered unexpected errors or needed to retry a stage.
During and following a deployment I would typically keep a tab with the site open — to make sure we were still resolving successfully, as well as a handful of Loggly tabs that I would use for running ad-hoc queries to confirm migrations had been successful as well as to look for errors.
An Anomaly
Most commonly, if something went wrong with a deployment because of the new changes, I would see one of three things: the deployment job itself failed, the login page failed to load, or errors began appearing in the logs and being returned to active users or to our QA staff internally.
Shortly after the deploy, while looking through the logs, I noticed that our overall error count was up slightly. However, on further inspection it was actually only 404s that were up. Thinking that maybe we’d somehow removed a route, I tried to hit the endpoint in question with my own account using Postman.
GET /api/v2/users/3625 returned a 200 for me.
Digging into the logs further, I began to notice a strange trend — the 404s had actually started a few hours before the deployment. In addition, it seemed like nearly all of the errors came from a single account. What the heck.
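A pattern like this — one account generating nearly all of the 404s — falls out of a simple aggregation over the logs. Here’s a minimal sketch of the idea; the record format and account names are hypothetical, not our real log schema:

```python
from collections import Counter

# Hypothetical parsed log records: (timestamp, account_id, status, path)
logs = [
    ("2019-06-04T09:12:01", "acct_8841", 404, "/api/v2/users/3550"),
    ("2019-06-04T09:12:03", "acct_8841", 404, "/api/v2/users/3551"),
    ("2019-06-04T09:12:05", "acct_8841", 200, "/api/v2/users/3552"),
    ("2019-06-04T10:03:10", "acct_1203", 200, "/api/v2/users/77"),
]

# Count 404s per account; one account dominating the errors is a red flag.
not_found_by_account = Counter(
    account for _, account, status, _ in logs if status == 404
)
print(not_found_by_account.most_common(3))
```

In our case the equivalent ad-hoc query in Loggly made it obvious that the errors weren’t spread across users the way a broken route would be.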
The “Oh Shit” Moment
Then it hit me: the account wasn’t just making requests for a handful of users, but for all users — and the IDs were appearing sequentially, with a lot of large gaps.
I turned around and asked our lead developer, “Hey, should an account admin be able to make requests to fetch any users?” “Uh, no”, he said.
I went back to Postman and tried incrementing the ID in my request using my authenticated session.
GET /api/v2/users/3626 >
200 Success. Oh shit. These were not my user details.
To their credit, the team jumped into action immediately once I shared these results. While one developer worked on locking the account in question, others were quick to preserve logs and begin investigating the bug that the attacker had been exploiting. We found out that our discovery vector, the uptick in 404 errors, was actually the attacker trying to request details for deleted users.
During all of this I scoured the rest of our API for any other endpoints that could potentially leak data to outside accounts. I quickly found another, though the data it exposed was far less sensitive than user details.
It was a long day — but by the end of it we had confirmed that the attack was stopped, closed the initially identified attack vectors, preserved the evidence, including IP details, and recorded all of the affected accounts that would need to be notified.
The Aftermath
A seismic shift in our security practices started the next day. Our development team spent upwards of a week huddled in a conference room doing a security audit, often going line by line together trying to identify logic issues that could lead to data exposure.
We had a mindset shift in development going forward. There were additional security considerations with every new feature.
We were also able to get access to new security testing tools and I quickly tried to learn as much as I could — I was hooked.
I was able to run Web App scans from Qualys, learned more about web application security from OWASP and countless Troy Hunt Pluralsight videos and even started finding XSS and other security issues within our application.
In Retrospect
As a team, security had not been part of our development culture prior to the hack. I can’t speak to the mindset of every individual — I’m sure the experience varied — but at the team and leadership level it was lacking.
The company had been going through difficult times, we’d laid off a significant portion of our staff up to this point and had just started to turn the business around — but we were in survival mode, trying to keep the product alive and deliver new features as quickly as possible, so something that didn’t immediately add value like security likely wasn’t a priority.
Those familiar with the type of attack we suffered may know it by a more general term: Broken Access Control. Essentially, the attacker was able to manipulate URL parameters in API requests from an authorized session to bypass our access controls and read user details for other accounts.
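The class of bug is easy to illustrate. This sketch uses hypothetical handlers and data, not our actual codebase: the vulnerable version trusts the ID in the URL, while the fixed version checks that the requested record actually belongs to the caller’s account.

```python
# Hypothetical user store keyed by sequential numeric ID.
USERS = {
    3625: {"name": "Alice", "account": "acct_A"},
    3626: {"name": "Bob",   "account": "acct_B"},
}

def get_user_vulnerable(session, user_id):
    # Broken Access Control: any authenticated session can read any user
    # simply by incrementing the ID in the request.
    return USERS.get(user_id)

def get_user_fixed(session, user_id):
    # Authorization check: the record must belong to the caller's account.
    user = USERS.get(user_id)
    if user is None or user["account"] != session["account"]:
        return None  # respond as not found / forbidden
    return user

session = {"user_id": 3625, "account": "acct_A"}
print(get_user_vulnerable(session, 3626))  # leaks Bob's details
print(get_user_fixed(session, 3626))       # None
```

Returning the same "not found" response for both missing and unauthorized records also avoids confirming which IDs exist — one reason the attacker in our incident left a trail of 404s on deleted users.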
I ended up leaving the company some time after the incident, but not before becoming an internal security champion on the team and working to gain several security certifications.
From there I ended up getting a job with a prominent security vendor and had the pleasure and honor of working alongside many brilliant minds in the industry. I was lucky enough to be mentored and encouraged by many of these individuals, and it’s been exciting to watch as many of them have gone on to lead new and exciting companies and products. I also ended up getting a degree in Cybersecurity and a handful of additional certifications.
Nowadays I spend my days helping the world’s largest organizations understand their internet footprint and reduce their attack surface, and I spend my nights learning, writing, and trying to build the next iteration of software composition analysis tools.
I love being part of the infosec industry — the gallows humor, the technical challenges, the unrelenting advance of attack and defense capabilities, and not least of all, the community.
What is your infosec origin story? I’d love to hear from everyone, from longtime veterans to folks just getting into the field.