How to control bots and help prevent account fraud using AWS WAF

Hi there. Welcome to re:Invent.

Looking out here, I'm guessing a lot of you are builders or security professionals, or maybe you're curious about bots, or maybe you even have a problem with bots and you're hoping you'll learn something from this talk. I hope you will.

Our talk is going to be about how you can control bots and prevent account fraud using AWS WAF. My name is Chris, and I'm the general manager for AWS WAF. With me is Nitin, my partner in crime, who will be going deep in just a little bit.

First of all, let's go over what we're going to talk about today. We'll give you an overview of the capabilities of AWS WAF, focusing on the bot and fraud technologies. We'll dive deep into how those technologies work and how you can put them to use in your website and application. And lastly, we'll go through some real-life scenarios where we've been able to help customers using the technology.

So first, what is AWS WAF?

AWS WAF is a multi-layered security execution engine that evaluates requests. It protects you against common internet threats like cross-site scripting, SQL injection, and application vulnerabilities. In a pinch, it will serve as a zero-day mitigation. It can implement policy like geo-blocking if you don't want to serve certain areas, and it does traffic shaping so it can protect you against denial-of-service attacks. And of course, it does bot control and fraud prevention.

This is a normal data flow for WAF. You can see that a user or a malicious actor submits a request. AWS WAF evaluates that request with either Amazon managed rules, which are rules we keep up to date to make sure they protect you against common threats, or your own rules. Those are served by our front-door services: Amazon CloudFront, the Application Load Balancer, Amazon API Gateway, AWS AppSync, and Amazon Cognito.

The thing about this is that it's very easy to implement in your application. If you want protection, even for an on-premises application, you can pretty easily put a CloudFront distribution in front of it, attach WAF to it, and then you have a good level of protection against basic internet threats. It's also really great for AWS workloads. It's a managed service, so all the resources you need are deployed automatically; you don't need to do very much at all.
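As a rough illustration of that setup, here is a minimal sketch using boto3. The resource names and the choice of managed rule group are placeholders, not something from the talk; the point is just how a CloudFront-scoped web ACL with an AWS managed rule group is created and then referenced from a distribution.

```python
# Minimal sketch (placeholder names): create a CloudFront-scoped web ACL containing
# one AWS managed rule group, then attach its ARN to a CloudFront distribution.
import boto3

# CloudFront-scoped web ACLs are created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

resp = wafv2.create_web_acl(
    Name="example-web-acl",                      # placeholder name
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},                 # allow by default, block on rule match
    Rules=[{
        "Name": "aws-common-rules",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS",
            "Name": "AWSManagedRulesCommonRuleSet",
        }},
        "OverrideAction": {"None": {}},          # keep the managed rules' own actions
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "aws-common-rules"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "example-web-acl"},
)
# Attach this ARN to the CloudFront distribution's WebACLId to put WAF in front of it.
print(resp["Summary"]["ARN"])
```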

The focus of this session is going to be bots. So what is a bot?

Actually, I bet most of you are probably running bots of some kind. We use them for health monitoring: we want to make sure the service is running, so we have lots of queries going out. Basically, a bot is a service that makes repeated queries to a web service or application without human intervention.

There are good bots and bad bots, and they've both been around since the beginning of the internet. The first bots were about exploring the internet and getting it indexed so people would be able to find things. People would also use them to make sure their site was up.

Good bots are easy to spot because they don't try to hide. They have a user agent that does not change, and they respect your robots.txt. For those of you who don't know, robots.txt tells the bot where it can and can't explore, the rates it's allowed to operate at, and so forth.

A good bot tries to keep your site up; it's designed for good, and it provides a lot of utility from both an operations point of view and a customer point of view. Malicious bots, on the other hand, have also existed since the beginning of the internet. They put your business at operational risk, business risk, security risk, and financial risk. They absolutely need to be controlled.

The first ones were actually denial-of-service bots. They would just try to overload a site and make it crumble to the ground. Later they evolved into more interesting bots: bots that try to find security vulnerabilities, bots that try to take data from your application and publish it, maybe as an aggregator, possibly putting you at a competitive disadvantage. Sometimes they're used for very specific purposes and may be targeted at your business. An example would be a bot that crawls sites looking for betting odds and then performs bet arbitrage, for those of us who are here in Vegas.

One of the effects of bots that you don't think about too much, but which is really important, is that they obscure all of your web metrics. Think about trying to find out whether your marketing campaign is working. Are you bringing more people onto your site? What user journey are people following? Are they completing workflows, fulfilling their orders, quickly finding what they're looking for? When you have a lot of bot traffic, it becomes very difficult to see what's real and what's not, and that makes it harder to improve your site.

There are a lot of other risks too, like reputational risk: bots that post fake reviews, manipulate SEO, or commit account fraud. They'll actually log in as users and perform actions ranging from those kinds of review manipulations to stealing user data and stealing your data.

Lastly, it just costs a lot. One of the surprising things, again, that you don't notice until you start poking around, is that we find nearly half the traffic a site gets, if it's not employing some sort of control mechanism, is bots. And for some customers, for example during a one-time sale, up to 95% of the traffic can be bots trying to perform some action. So it's something you really want to take control of, because it can have a material impact on your business.

So with that, I'm going to turn it over to Nitin to do a deep dive on our bot control.

Thank you, Nitin.

Right. Thanks, Chris.

All right, good afternoon, everyone, and welcome to re:Invent.

After that introduction, I know you're eager to learn how AWS WAF works to protect your applications and prevent the risks associated with bots and fraudulent activity.

So let's dive right in. I'm going to start with an architectural overview of AWS WAF.

We're going to see how a request comes in and how it's handled. We start with an application that sits behind a protected service such as CloudFront, an ALB, or API Gateway. A normal user or a malicious actor sends a request; at this point we don't know which it is. When you have AWS WAF configured to protect that service, any request that comes in is inspected by the AWS WAF rule engine. As Chris alluded to earlier, the rule engine is where you place different kinds of rules: some are managed rules, some are custom rules, and we're going to go over quite a few of them. If you have configured your protected resource to be protected against bots and fraud, the request is further inspected by the bot and fraud detection services, which also run behind the AWS WAF rule engine. Depending on the decision that comes back, the request is either considered safe and allowed to proceed to the application origin, or it is stopped from reaching your application.

When we started designing bot and fraud control, we realized one thing: this is an evolving landscape. It is constantly changing. The bot operators change tactics, trying to fly under the radar and evade detection. So we decided to go the route of making these services available as Amazon managed rule groups, or AMRs; I'm going to use the term AMR quite a bit in this context. AMRs are packaged sets of rules. Think of a rule group that comes prepackaged, contains a set of rules, and lets those rules be customized to a degree.

What that allows us to do is keep adding new rules to the AMR as bot and fraud operators evolve. So once you add an Amazon managed rule group, you can take advantage of new innovations without doing anything; we do all the heavy lifting on that front.

I'm going to draw your attention to three AMRs and four use cases. The first AMR I'm going to talk about is AWS Managed Rules Bot Control; that's number one over there. This AMR supports two use cases: common bots and targeted bots.

Common bots is our way of helping you manage good, verifiable bots. There are reasons why you want to manage them: they may all be verified search engine bots, advertising bots, or monitoring bots, but you still want to control how much impact they have on your website or infrastructure. And sometimes malicious bots pose as good bots; they'll try to use the same user agents. So we want to verify them and make sure that when a bot comes in, it is what it says it is.

So that's common bots. Targeted bots, which is part of the same AMR but at a different inspection level (we'll talk about inspection levels a bit later), is aimed at managing and controlling evasive and potentially malicious bots. These are bots that try to emulate human behavior, evade detection, evolve and change their tactics, fly under your rate limits, and try to figure out what those rate limits are. That's targeted bots.

The second AMR I'm going to talk about is the AWS Managed Rules ATP rule set, which is geared towards protecting against account takeover attacks. And the last one is account creation fraud prevention; the corresponding AMR is the AWS Managed Rules ACFP rule set, which is designed to prevent fake account creation.

We'll go over all of these topics, but let's start with some configuration fundamentals. The first concept is the web access control list, or as we fondly call it, the web ACL. The web ACL gives you fine-grained control over all the requests: how you want to inspect them and how you want to process them. You start by adding your protected resource to the web ACL. You could create a new web ACL, or take an existing one and add the resource there.

What that tells the web ACL is: I want this resource to be protected. To get started with bots and fraud, you add one of the bot or fraud AMRs. The bot and fraud AMRs have two kinds of configuration.

The first kind is the rule group configuration, which applies to all the rules within the rule group. The second level of configuration is the action you want to take for each rule. Each rule comes with a default action, and you can change those actions depending on how you want AWS WAF to process the request.

Lastly, you can add scope-down statements. Let's say you want the bot and fraud services to look at only a subset of URIs. You can create a scope-down statement so the AMR only evaluates those specific requests.
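To make those pieces concrete, here is a hedged sketch of what a Bot Control rule entry for a web ACL could look like, with the rule group configuration (inspection level) and a scope-down statement restricting inspection to one URI prefix. The path and metric names are placeholders.

```python
# Sketch of a Bot Control AMR entry for a web ACL (names and paths are placeholders).
bot_control_rule = {
    "Name": "bot-control",
    "Priority": 1,
    "Statement": {"ManagedRuleGroupStatement": {
        "VendorName": "AWS",
        "Name": "AWSManagedRulesBotControlRuleSet",
        # Rule group configuration: applies to every rule inside the group.
        "ManagedRuleGroupConfigs": [{
            "AWSManagedRulesBotControlRuleSet": {"InspectionLevel": "COMMON"},
        }],
        # Scope-down statement: only run bot control against a subset of URIs.
        "ScopeDownStatement": {"ByteMatchStatement": {
            "SearchString": b"/api/",
            "FieldToMatch": {"UriPath": {}},
            "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
            "PositionalConstraint": "STARTS_WITH",
        }},
    }},
    "OverrideAction": {"None": {}},   # keep each rule's own action unless overridden
    "VisibilityConfig": {"SampledRequestsEnabled": True,
                         "CloudWatchMetricsEnabled": True,
                         "MetricName": "bot-control"},
}
```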

Then, in the same web ACL, custom rules can be created. One of the concepts I'm going to talk about here is labels. Labels are like tags: when AWS WAF processes a request, it tags the request stream with a string, which we call the label. The label can be used to fine-tune detections and to fine-tune how you want to process the request.

You can also add custom rules, such as matching on a path or URI based on regular expressions, rate-based rules, and custom responses. For instance, if you want to send a 403 back in a certain situation, you can add a rule that will do that.
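As one hedged example of combining labels with a custom response, the rule below matches on a label and returns a 403. The label key is an assumption about the naming scheme Bot Control uses; check the labels that actually appear in your WAF logs before relying on it.

```python
# Sketch: a custom rule that matches a label emitted by Bot Control and sends a 403.
# The label key below is assumed for illustration; verify against your WAF logs.
custom_block_rule = {
    "Name": "block-scraping-bots",
    "Priority": 2,
    "Statement": {"LabelMatchStatement": {
        "Scope": "LABEL",
        "Key": "awswaf:managed:aws:bot-control:bot:category:scraping_framework",
    }},
    # Block with a custom HTTP status code instead of the default response.
    "Action": {"Block": {"CustomResponse": {"ResponseCode": 403}}},
    "VisibilityConfig": {"SampledRequestsEnabled": True,
                         "CloudWatchMetricsEnabled": True,
                         "MetricName": "block-scraping-bots"},
}
```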

Let's go into common bots. As I mentioned earlier, the common bot rules are designed for managing good bots. To get started, you take the Bot Control AMR, add it to the web ACL, and this is where the inspection level comes into play: you select the inspection level as common.

Once you do that, all the rules within the common bot category become available to you.

We know that you have complex business requirements. You may want to allow certain kinds of bots, disallow certain kinds of bots, or rate limit certain kinds of bots. For example, you may want to allow search engine bots but disallow link checker, monitoring, or scraping bots.

So what we have done is divide bots by category, and this is where a lot of Amazon's experience battling bots on our retail side comes into play. We have taken a lot of signals and intelligence from our experience handling bots, categorized them, and created these rules, which we keep evolving as the categories evolve and new things show up on our radar. You can take advantage of all of this while we do the heavy lifting.

Next, the rules. Each rule comes with five choices of rule action: allow, block, count, CAPTCHA, or challenge. This is where the second part of the configuration comes into play: you take a rule, say category advertising, where the default action is block, and change it to allow.

One thing I want to call out here is the mode called count. Allow is allow, block is block, CAPTCHA will throw a CAPTCHA, and challenge will throw a silent challenge. Count is the interesting one, so I'm going to call attention to it. When AWS WAF runs a rule in count mode, it counts the matching requests and also creates labels for those requests as it counts them.

This is a very good mechanism when you start playing with AWS WAF, and especially with bot control, and you're not sure what kind of traffic you're getting, what you want to allow, and what you want to rate limit. Our recommendation when you're trying out bot control as a fresh start is to set the rule action to count. This gives you the counts for each category of bots, and it also gives you the labels, which you can use to start fine-tuning.
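Here is a hedged sketch of what that count-mode override could look like for the Bot Control rule sketched earlier. The rule names mirror the categories described in the talk, but they are assumptions for illustration; verify the exact rule names in the rule group before using them.

```python
# Sketch: set selected Bot Control rules to Count while you study the traffic.
# Rule names such as "CategoryAdvertising" are assumed here; confirm them against
# the managed rule group's published rule list.
count_mode_overrides = [
    {"Name": "CategoryAdvertising",  "ActionToUse": {"Count": {}}},
    {"Name": "CategorySearchEngine", "ActionToUse": {"Count": {}}},
    {"Name": "CategoryMonitoring",   "ActionToUse": {"Count": {}}},
]
# These entries go into the ManagedRuleGroupStatement as "RuleActionOverrides".
# Requests are still labeled while the rules only count, so the category mix shows
# up in CloudWatch metrics and WAF logs before you decide what to block or allow.
```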

One example of how you can use a label is rate limiting. Let's say you want to allow search engine bots on your website, but not above a specific rate, because you don't want to pay the added infrastructure costs associated with search engine and scraping bots. You can set the search engine category to count mode, which gives you the count and also the names of the bots that are coming in, and then you can create a custom rule. In this example, we're showing a rate-based rule with a limit of 1000, scoped to a label; the label string here stands in for the actual search engine bot, whose name will be part of the label. This configuration allows that particular search engine bot to come in only 1000 times in a five-minute window, helping you manage those bots.
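A hedged sketch of such a rate-based rule follows. The label key is an assumption standing in for whatever bot-name label WAF actually emits for the search engine you care about; pull the real key from your WAF logs.

```python
# Sketch of the rate-based rule described above: limit one verified search engine
# bot to 1000 requests per five-minute evaluation window.
rate_limit_search_bot = {
    "Name": "rate-limit-search-engine-bot",
    "Priority": 3,
    "Statement": {"RateBasedStatement": {
        "Limit": 1000,                      # requests per evaluation window
        "AggregateKeyType": "IP",
        # Only count requests carrying this bot-name label (assumed key for illustration).
        "ScopeDownStatement": {"LabelMatchStatement": {
            "Scope": "LABEL",
            "Key": "awswaf:managed:aws:bot-control:bot:name:googlebot",
        }},
    }},
    "Action": {"Block": {}},                # block once the limit is exceeded
    "VisibilityConfig": {"SampledRequestsEnabled": True,
                         "CloudWatchMetricsEnabled": True,
                         "MetricName": "rate-limit-search-engine-bot"},
}
```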

Recapping common bots: you don't need any code or integration to start using them. It's very easy to start: you attach the resource you want to protect to a new or existing web ACL, and you can start managing self-identified bots on your website. As I said, we use multiple techniques. We use signals and intelligence generated over the years to detect and classify those bots, as well as techniques such as looking up published IP ranges and reverse DNS information. All of that is packaged into the rule group; there is no extra configuration required for us to verify the bots.

Anything that is not verifiable gets blocked by default. For instance, a bot comes in and says "I am so-and-so bot," but when we look at its signature and other attributes, we determine that it is only pretending to be that bot.

Our customers tell us that this rule group has very low false positives, so you can use it with quite a bit of confidence.

Now I'm going to go back to the architecture diagram, a slightly simplified version of it, and explain the request flow. I'll keep coming back to this architecture to tie quite a few things together as we proceed through this presentation.

The request comes in, the resource is configured for bot protection, and the request is inspected by the WAF engine and then by the bot and fraud service. It checks the signature and makes a decision, and you can allow, rate limit, or stop the request based on rule actions, labels, and the custom rules you write, among other things.

Now let's talk about targeted bots. These are bots that evade protection and try to emulate human behavior, and in many cases there are financial incentives behind them. So how do we detect these bots? They are constantly changing their techniques and evolving, and the moment you find a signature, an IP address, or a rate, they're going to go right past it, sometimes within minutes.

When we were thinking about targeted bots, we had quite a few decisions to make about our methodology for finding and detecting them. So what are our strategies?

The first thing we want to do is raise the cost for them, because many times there are financial incentives: they are scraping something, buying tickets, holding inventory, causing financial impact to you and financial gain to somebody else. To do that, we force them to execute a proof of work, and we'll talk about proof of work and silent challenges in more detail. Proof of work is a cryptographic technique in which a server tells a client to run a particular script and solve a problem; if the problem is solved, we know this is a valid client.

Second, we want to identify the clients that are connected to the application. Think about how requests come into your application: they come from a lot of IP addresses, but behind those IP addresses there is a large number of clients, and they could be sharing an IP address. So an IP address is really a bad attribute to look at for bot traffic. What we want to do is look at the unique client.

So we do two things: as the proof of work runs, we collect client telemetry and generate a unique ID. Using that unique ID we can track a specific client globally, anywhere, and we use the collected telemetry to make determinations about what kind of client is connecting and what the posture of that client is.

Lastly, there is a lot of data science and machine learning. Most of our techniques for detecting targeted bots are deeply rooted in statistical analysis and machine learning.

A few concepts we just briefly touched on: the first is the client challenge and fingerprinting, and the second is the AWS WAF token. These two are really important for targeted bots.

So what is the cheapest, fastest approach for a bot operator? They write a script, copy it to hundreds or thousands of nodes, and launch an attack. Scripts don't cost a whole lot and they run pretty fast; the infrastructure cost associated with scripting is fairly low, but the cost your applications and websites incur can be quite high, because you are handling all those requests.

Here comes the silent challenge. The client comes into AWS WAF (this is a simplified diagram to show the flow), and AWS WAF redirects the client to a silent challenge and tells it to solve a particular proof of work. This is a computational proof of work in which the client has to perform some action, and plain scripts don't work here for two reasons.

Number one is the redirect. Number two, solving the cryptographic challenge requires JavaScript to be enabled, or some sort of integration that could be native to an iOS or Android app. So we force the client from a script to a simulated or automated browser, and the cost difference between running a script versus running Node.js, Selenium, or Puppeteer can be anywhere from 100 to 1000x for the bot operator. It's very effective at rooting out the scripts.

The client presents a solution, AWS WAF validates it and issues a token to the client. This token is sent back with every subsequent request, and using the token AWS WAF knows whether the challenge was solved or not, because that fact is embedded within the token as part of running the script.

On the client side, we also collect a number of attributes. We collect details from the browser or the client such as GPU capabilities, canvas fingerprints, browser plugins, and screen sizes; whether there are inconsistencies in the timestamps for when the challenge was solved; how long it took the client to solve the challenge; and human interactions like mouse movement and clicks. We take all those attributes and encode them into what we call the AWS WAF token, which can be sent back to WAF either as a cookie or as a special request header. This allows us to identify unique clients, and the token is encrypted and tamper-proof. We also know that your application environments are complex and that there are constraints on your end on how you can integrate such a solution.

So we have provided different options to acquire a token and run the proof of work on the client side. The first is the challenge action, which is typically very useful for browser workloads. If you have web workloads, your clients are going to be normal browsers, and the challenge redirect and proof of work have visibly no impact on a real user, but they raise the cost by orders of magnitude for bot operators.

We recognize that a redirect doesn't fit every situation, for example if you have a POST workflow or an API workload. For those situations we have an SDK, which achieves the same thing a little differently: it goes directly to AWS WAF, acquires a token, and proves who it is. So we have an SDK, and then we have CAPTCHA in two flavors: one is a web redirect, and the other is embedded using a JavaScript API for single-page applications or iframes.

As you integrate with targeted bots, you want to think about which of these is the best option for your different workloads and applications, so that WAF can acquire tokens and understand the clients.

CAPTCHA can be invoked by a rule action. You saw the rule action dropdown in which CAPTCHA was one of the options; if you take any rule and set CAPTCHA as the action, it will invoke a CAPTCHA redirect. Single-page applications can use the JavaScript API to embed the CAPTCHA.

CAPTCHA also runs the proof of work and the client-side telemetry collection, so everything that happens behind the scenes with a silent challenge also happens with AWS WAF CAPTCHA. And you can configure how long a token acquired by solving a CAPTCHA remains valid.
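That token lifetime is controlled by the immunity-time settings; a minimal sketch is below, with illustrative values (the numbers are assumptions, not recommendations from the talk).

```python
# Sketch: control how long a token earned via CAPTCHA or a silent challenge stays
# valid. Values are illustrative placeholders, in seconds.
web_acl_token_settings = {
    "CaptchaConfig":   {"ImmunityTimeProperty": {"ImmunityTime": 300}},
    "ChallengeConfig": {"ImmunityTimeProperty": {"ImmunityTime": 1800}},
}
# These keys can be supplied on the web ACL (and overridden per rule) in the same
# create_web_acl / update_web_acl call shown earlier.
```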

We talked about the Bot Control inspection levels: common and targeted. Common was for good, verifiable bots; the targeted inspection level is what you choose if you want to protect your application against evasive bots.

There's an option here to use machine learning, and it is enabled by default. When this option is enabled, it allows AWS WAF to capture website traffic statistics, run them through our ML models, and provide detections with a level of confidence that a given request is coming from a bot or from a human. This is optional: if you do not want AWS WAF to use your website traffic statistics for machine learning, the option can be turned off, but it is enabled by default when you set the inspection level to targeted.
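A hedged sketch of that configuration, assuming the same Bot Control rule group shown earlier but switched to the targeted inspection level, with the machine learning flag made explicit:

```python
# Sketch: Bot Control at the TARGETED inspection level. EnableMachineLearning is on
# by default; set it to False to opt out of the traffic-statistics ML detections.
targeted_bot_statement = {"ManagedRuleGroupStatement": {
    "VendorName": "AWS",
    "Name": "AWSManagedRulesBotControlRuleSet",
    "ManagedRuleGroupConfigs": [{
        "AWSManagedRulesBotControlRuleSet": {
            "InspectionLevel": "TARGETED",
            "EnableMachineLearning": True,
        },
    }],
}}
```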

These are all the rules from targeted bots that become available. The first rule is challenge pass or fail, and its default action is challenge. When this rule is enabled, it lets a few requests come in from each IP address, because at that point we don't know what the client is. If the client hasn't yet acquired a token by any other means, AWS WAF forces a token acquisition, and if that acquisition fails, it blocks the request right there. Any subsequent requests coming from those IP addresses are going to get blocked.

Second, we look for volume anomalies. Volume anomalies work like rate limiting, but they're a bit more sophisticated. In rate limiting, you specify how many requests you'll accept in what time window. What we do for bot control is dynamic: as clients come into your application, and since we're able to track both IP addresses and the individual clients behind those IP addresses using tokens, we establish traffic baselines and the normal ranges of traffic. When we see clients exceed those thresholds, we start issuing blocks. These baselines can change with the time of day, and as traffic patterns change during the day, week, and month on your website, our baselines are continuously recalculated across your entire client base.

We collect the signals we talked about, like canvas fingerprints and plugins, and based on them we can tell whether the traffic is coming from an automated browser, or whether there is some inconsistency about the browser behind a particular request. If we detect any such thing, we can block, though the default action for these rules is actually CAPTCHA.

Tokens cannot be tampered with, since they're encrypted, but they certainly can be reused, because they are a client's identity. There's nothing preventing anybody from opening a browser, making a request, acquiring a token, then copying the token from the developer console into a hundred scripts. But when they do that, two things can happen:

  1. We will start to see a lot of requests coming from the same token so that can trigger the volumetric rule.

  2. We may see different IP addresses associated with the token. Now, that can be a valid scenario: say you go to a website on your phone right now, then walk out of this hall and connect to a 5G network instead of the WiFi; your IP address changes. So we allow a certain number of IP address changes. But if we see more IP address changes than a normal scenario would produce, we can block the client. The way we determine those thresholds is by doing extensive data research on normal patterns: how often people change IP addresses, what the 90th and 99th percentiles look like. So a lot of data science goes into determining the thresholds.

And lastly, we run your traffic through machine learning and use behavioral analysis. The rules are called coordinated activity, and coordinated activity looks at the pattern rather than the rate of the requests. When a normal user interacts with the site, they come in and click here and click there; when bots come in, they follow a certain pattern, and that pattern generally differs from a normal user's. The machine learning models discern the difference between a normal user's pattern and the patterns created by bots.

In targeted bots, you can also customize the rule action for each of the rules. So in this request flow, this is the same diagram you saw before, but I'm going to introduce a couple of new things.

First, when a request comes in, we issue a redirect to a silent challenge or a CAPTCHA. When that's solved, the request comes back in and is inspected by the bot detection services. A bunch of rules run, the result is emitted back, and based on the actions and labels, you can decide to rate limit, block, or allow the requests.

Let's move to AWS WAF Fraud Control. The first thing we're going to talk about is account takeover prevention.

Our customers tell us that the majority of brute force and credential stuffing attacks are carried out by botnets, so there's a good overlap between account takeover prevention and bot control. This rule group is designed to prevent bot activity on login pages and prevent attacks such as credential stuffing and brute force.

There are two parts to account takeover prevention, or as we call it, ATP. The rule group configuration looks at requests and responses. The first part is request inspection: you tell us where the login fields are and how they're laid out. In response inspection, you tell us what a valid response looks like versus an invalid one, so a successful login versus a failed login, and that could be a status code, a header, or the body.

When a request comes in, AWS WAF looks at both the incoming request and the outgoing response from the origin to determine whether these are normal users, or whether somebody is trying to do something malicious, such as brute force or credential stuffing attacks.
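A hedged sketch of what that ATP rule group configuration could look like is below. The login path, field identifiers, and status codes are placeholders for your own application, and the exact shape should be verified against the ATP rule set documentation.

```python
# Sketch of an ATP rule entry: where the login page is, where the credential fields
# live in the request, and what a failed login looks like in the response.
atp_rule = {
    "Name": "account-takeover-prevention",
    "Priority": 4,
    "Statement": {"ManagedRuleGroupStatement": {
        "VendorName": "AWS",
        "Name": "AWSManagedRulesATPRuleSet",
        "ManagedRuleGroupConfigs": [{
            "AWSManagedRulesATPRuleSet": {
                "LoginPath": "/api/login",                       # placeholder path
                "RequestInspection": {
                    "PayloadType": "JSON",
                    "UsernameField": {"Identifier": "/username"},
                    "PasswordField": {"Identifier": "/password"},
                },
                # Response inspection: classify success vs. failure by status code.
                "ResponseInspection": {
                    "StatusCode": {"SuccessCodes": [200], "FailureCodes": [401, 403]},
                },
            },
        }],
    }},
    "OverrideAction": {"None": {}},
    "VisibilityConfig": {"SampledRequestsEnabled": True,
                         "CloudWatchMetricsEnabled": True,
                         "MetricName": "account-takeover-prevention"},
}
```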

Since we need to inspect usernames and passwords, these rules are only available when AWS WAF can see the credentials. If you're using Amazon Cognito or another IdP where AWS WAF is not going to see the user ID and password, account takeover prevention cannot be used.

Two more things I want to call out. One is stolen credentials: we maintain a vast database of stolen credentials that we constantly keep up to date, and if a request matches any stolen credentials, it is automatically flagged and blocked.

The second thing I want to call attention to is response inspection. Think of the response inspection capability as a safety net: say a client comes in, they're not using stolen credentials, but the response is a failure multiple times. That tells us somebody is trying to traverse user IDs and passwords, looking for combinations that work, and based on the risk score we can block those requests.

Third, we look at behavioral patterns, like a large number of accounts created with the same phone number. There's a reasonable limit there, and once things go past it, we start to block those clients. And since a token has been acquired here, we can block based on the IP address as well as the individual client presenting the token.

And lastly, if anybody is trying to create an account using stolen credentials, we can block that as well, improving the security posture of the users of your applications and websites.
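For account creation fraud prevention, a hedged configuration sketch is below. The sign-up paths and field identifiers are placeholders, and the field names used here are assumptions about the ACFP rule set's configuration shape; check the ACFP documentation before using it.

```python
# Sketch of an ACFP rule group configuration (placeholder paths and fields).
acfp_rule_config = {
    "AWSManagedRulesACFPRuleSet": {
        "CreationPath": "/api/accounts",          # where the account-creation POST lands
        "RegistrationPagePath": "/register",      # the page that serves the sign-up form
        "RequestInspection": {
            "PayloadType": "JSON",
            "UsernameField": {"Identifier": "/username"},
            "PasswordField": {"Identifier": "/password"},
            "EmailField": {"Identifier": "/email"},
            "PhoneNumberFields": [{"Identifier": "/phone"}],
        },
        "ResponseInspection": {
            "StatusCode": {"SuccessCodes": [200], "FailureCodes": [400, 409]},
        },
    },
}
# This dict goes into ManagedRuleGroupConfigs of a ManagedRuleGroupStatement whose
# Name is "AWSManagedRulesACFPRuleSet", analogous to the ATP rule sketched above.
```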

Here's the request flow for ATP and account creation fraud prevention; it's very similar to targeted bots. The reason I want to show this diagram here, and why it's important, is that as we started designing bot control and fraud prevention, we recognized that these operators are constantly evolving.

So we did two things. First, we packaged our configurations in managed rule groups, where we're able to innovate and add more rules. Second, we built a very complementary architecture in the back end that is extensible as we add new kinds of services, and new rules become automatically available as part of the AMRs.

With that, I'm going to hand it back to Chris, and we're going to talk about a few real-life scenarios. Thank you.

Thanks, Nitin. So, who here is a gamer? I am; I like games, and I hate people who try to keep me from playing games. In this particular instance, there was an online gaming company, and they had a challenge: they were getting hit by denial-of-service attacks. The attacks weren't particularly amazing in terms of their complexity, but they were causing infrastructure impact, making it so users couldn't log in, and causing the company to lose money.

So we engaged with them. The nice thing about their scenario was that it was fairly quick to identify from their traffic: you have these kinds of DoS events happening, and they're probably largely generated by scripts that aren't super complicated. So what we're going to do is enable bot control and enable the challenge action.

If you remember what Nitin was talking about, that means when a customer, or a DoS agent, performed an action on the site, we would send back a challenge; they would have to run a script and come back with a proof of work, and then they were allowed to proceed, or we would block those actions.

In the graph at the top, you can see the scripts executing and how high they were peaking, and if you look at the graph at the bottom, you can see that the website traffic didn't even move. The challenge effectively prevented those DDoS attacks and kept the gaming company operating.

Another great scenario is this travel website. They had a couple of challenges. One was that they operated a lot of websites, so they had a broad footprint, and they had agents targeting their websites specifically. It wasn't just somebody trying to cause mayhem and overload a website; they were trying to accomplish a specific purpose.

The first thing was to consolidate their websites behind a smaller number of web ACLs, using CloudFront as a controlled front door, so everything is easier to manage.

The second thing was simply to get breathing space. When something bad is happening and your site isn't working, you need breathing space so you can figure out what the long-term solution is going to be. The easiest thing was enabling our common bots with the challenge action, and that immediately reduced traffic by about 50% in their scenario.

Previous to that, they had really big challenges with their infrastructure, and this gave them breathing space. But as they got that breathing space, the bots began to evolve, because they were targeted at this company and there was a financial motivation behind it. The bots adapted and started using automated browsers instead of just scripts that hammered the website and scraped information from it.

So after doing the traffic analysis, we worked with them and suggested they move to targeted bots along with some custom rules, and that was able to reduce the traffic another 19%, even after it started ramping back up again. It was a relatively easy thing for them to do, and it had great effects on their operational health.

They were able to manage more of their security in one place, and they also got the traffic reduction. The last one is a mobile e-commerce app. They had threats from malicious bots doing account takeovers, which means they had bots performing actions as if they were people.

The bots were trying to masquerade as humans: they were going in, posting fake reviews, and trying to extract user information. The company had their own custom-built solution, which isn't unusual; if your company has been around for a long time, you've probably run into this problem and tried to control it.

What's pretty clear is that it's a cat-and-mouse game, so you have to keep investing in it and maintaining it. One of the nice things about AWS WAF is that we take that burden from you. So we implemented account takeover prevention with them.

It was effective, which is great, but they were getting an awful lot of requests, something like 28 million requests in five minutes. It was sort of nutty. What was happening is that we would mitigate the attack, but we had to issue an awful lot of challenges and process an awful lot of responses, which turned out to cost them more money than they were expecting.

It was actually a higher rate than we had expected for account takeover prevention. So we engaged with them and had our solutions architects work with them to come up with a different solution. One of the nice things about AWS WAF, and AWS services in general, is that they're very composable; there are lots of easy ways to do this.

Using the Amazon managed rules, the AMRs, you can just say: I want to block bots, I'd like both common and targeted, and it will do most of the work for you. In this case, they came up with an optimized solution built around running the rule in count mode. Once they saw a pickup in count mode, they knew there was a spike in account takeover traffic.

They switched it over to block, and then they added the source IPs to a deny list with a TTL, so an IP wouldn't be blocked forever; it would age off over time. As a result, they were able to achieve similar levels of protection, and they saved about 99% on cost, because they were willing to live with a little latency.
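A hypothetical sketch of that pattern follows; it is not the customer's actual code. The IP set name and ID are placeholders, and the TTL-based aging would be a separate scheduled job (omitted here) that removes addresses after a set time.

```python
# Hypothetical sketch: when count-mode metrics spike, push offending source IPs into
# an IP set that a block rule references. Names and IDs are placeholders.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

def block_ips(ip_set_name: str, ip_set_id: str, new_cidrs: list[str]) -> None:
    """Append attacker CIDRs to an existing CloudFront-scoped IP set."""
    current = wafv2.get_ip_set(Name=ip_set_name, Scope="CLOUDFRONT", Id=ip_set_id)
    addresses = sorted(set(current["IPSet"]["Addresses"]) | set(new_cidrs))
    wafv2.update_ip_set(
        Name=ip_set_name,
        Scope="CLOUDFRONT",
        Id=ip_set_id,
        Addresses=addresses,
        LockToken=current["LockToken"],  # optimistic-locking token from get_ip_set
    )

# Example (placeholder values):
# block_ips("atp-deny-list", "00000000-0000-0000-0000-000000000000", ["203.0.113.7/32"])
```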

So one of the nice things here is that we have a composable system: you can make your own rules and use the logs to accommodate your particular scenarios. It gives you flexibility. As for takeaways: we have some easy-to-use solutions that could help you literally today if you went back and tried them, and always try to give yourself breathing space when you're dealing with a DDoS attack or some other kind of attack.

You want to focus on mitigating first, then focus on fixing afterwards. And lastly, don't be afraid to reach out and have us help you. We love this work, and we love working with our customers to make it even better.

There are some related sessions; you can use the QR codes to learn even more about AWS WAF. Thank you very much for coming, and please fill out your surveys. Thank you very much.
