Fighting DDoS attacks with nginx

Author: David, Karlsruhe, Germany


DDoS is an attack type that is becoming more and more frequent, largely because of the increasing number of devices feeding botnets: besides smartphones and tablets, IoT devices keep growing in number, and the botnets built from them are far beyond any capacity a web application can bear.

Among the first countermeasures, filtering the “IP address:port” combination via classic firewall rules is the most effective. IP-based blocking/limiting has been, and still is, a good standard solution for all kinds of cyber defense, and with good reason: blocking at the IP level is super effective with a very small footprint, because there is no protocol overhead to take care of.

But the times when an IPv4 address resolved back to a single machine are long gone. In Southeast Asia especially, a majority of the traffic comes from proxies, and in every part of the world providers are starting to use internal IPv6 networks for their customers, masquerading this traffic behind an IPv4 gateway.

So, when operating high-performance web applications that receive huge amounts of traffic (I am talking about millions of hits per hour), you cannot afford to block the IP addresses that attacks originate from: you would also block potentially good traffic originating from these sources.

The same goes for User-Agent blocking: for example, you don’t want to exclude users of a certain mobile Chrome version just because millions of bots are faking that browser.

Here, the nginx web server with its http_limit_req_module can do awesome things, along with other built-in nginx mechanisms:


If you are looking for examples of nginx request limiting, you will often find just one configuration snippet:

limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

Basically this means: attacking clients are registered by their IP, and the request is limited when the same IP shows up again. It does not mean that the IP is blocked! One request per second per IP still makes it through (burst settings aside).
If you are hosting a website and want to survive distributed or just plain DoS attacks, this is the way to go.
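For completeness, here is a minimal sketch of how that zone is typically wired up; the zone name “one” and the burst value are illustrative, and the limit_req_zone line must live in the http context:

        limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

        server {
            listen 80;

            location / {
                # allow short bursts of up to 5 requests before excess
                # requests are delayed (add "nodelay" to reject them instead)
                limit_req zone=one burst=5;
            }
        }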

But this rule also means: one request per second is accepted, and nginx does not distinguish between bogus DDoS requests and real user requests as long as the remote address matches. Not good if you are running a traffic network…

Thus, another key for identifying malicious requests is required, one that is preferably not IP-bound, or at least not only IP-bound. A combination of several given parameters makes much more sense. For example, remote_addr and user_agent combined make a better indicator: DDoS attacks usually originate from botnets and are very simple in their structure. The requests usually share the same User-Agent, or at least lack the huge variety that real user traffic offers. Often the bots just bomb your servers without dealing with your application at all, for instance omitting a URL parameter that your site always expects.
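A naive way to express that combination would be to put both values into the zone key directly. A sketch (not recommended as-is, for the reason that follows):

        # every distinct IP/User-Agent combination is stored in the zone in full
        limit_req_zone "$binary_remote_addr$http_user_agent" zone=peragent:10m rate=1r/s;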

The problem is, you should not use keys that consume too much memory, because
“If the zone storage is exhausted, the server will return the 503 (Service Temporarily Unavailable) error to all further requests.” – Quote from http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone

You could of course increase the size of the zone storage, but this could impact performance. Also, increasing limits is not the solution we want in this case, because these are values that cannot be defined properly in advance: a botnet can always produce more distinct keys than you budgeted for.

So this narrows it down to two points:

  1. a zone key should consist of several indicators
  2. the zone key itself should be kept small, so we never reach the zone storage limit

We can achieve this with the map directive:

        # Request limiter checking multiple values with a small footprint

        # flag requests where the Referer header is not set
        map $http_referer $limit_referer {
            default     "";
            ""          "1";
            "-"         "1";
        }

        # flag requests where an expected URL parameter is missing,
        # i.e. when you always expect URLs like "/tracker?campaignid=1234"
        map $arg_campaignid $limit_campaignid {
            default     "";
            ""          "1";
            "-"         "1";
        }

        # $custom_limit only becomes non-empty when both flags are set
        map "$limit_referer$limit_campaignid" $custom_limit {
            default     "";
            "11"        1;
        }

        limit_req_zone $custom_limit zone=custom:1m rate=1r/s;

With three map directives, a variable $custom_limit is generated. Each hash table holds only a couple of entries, so there is almost no footprint in the nginx hash tables. The same goes for the limit_req_zone size: $custom_limit can only be empty or “1”, and requests with an empty key are not accounted at all, so unsuspicious traffic is never limited. Now try to fill one megabyte with that ;)
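To actually enforce the limit, the zone still has to be referenced where your traffic arrives. A sketch, with the burst value and the 429 status being my own choices:

        server {
            listen 80;

            location / {
                limit_req zone=custom burst=5 nodelay;
                # answer rejected requests with 429 instead of the default 503
                limit_req_status 429;
            }
        }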

You can even add more indicators, or register different key combinations like “11”, “1a”, “abc” and so on.
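As one possible extension (assuming you also want to flag requests with an empty User-Agent header), a third map could feed the combined key:

        # flag requests where the User-Agent header is missing
        map $http_user_agent $limit_useragent {
            default     "";
            ""          "a";
        }

        # rate-limit only the combination of all three flags
        map "$limit_referer$limit_campaignid$limit_useragent" $custom_limit {
            default     "";
            "11a"       1;
        }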

This is just one way to go, but it offers us possibilities to deal with DDoS without giving up our business. This method is also highly extensible, allowing us to react and adapt to different types of attacks.

Let me make one thing clear: when DDoS hits you, your servers will suffer. You cannot prevent it completely. You need to react and analyze the attack patterns. Make some “stupid” IP blocking rules to keep the business running during the attack. Just don’t stay with these quick’n’dirty solutions, because not every request that fulfils the criteria is a malicious one. Try different patterns, run A/B tests, increase request limits (1r/s may be good for a blog, but for a traffic network even 10r/s may still limit real user traffic) and so on…
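Such a temporary block can also live in nginx itself. A sketch using the geo directive, with a placeholder range standing in for whatever sources your analysis turns up:

        # placeholder CIDR, replace with the ranges observed during the attack
        geo $blocked {
            default         0;
            192.0.2.0/24    1;
        }

        server {
            listen 80;

            if ($blocked) {
                # 444 is nginx-specific: close the connection without a response
                return 444;
            }
        }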

Another important thing to take care of is access logs: when a DDoS happens, log file sizes grow rapidly, so make sure you keep them on an isolated mount point that provides enough space to analyze the attack in the aftermath.
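Buffering the log writes also helps to keep disk I/O sane while the flood is running. A sketch, with the mount point path being an assumption:

        # buffered writes to a dedicated mount point, flushed every 5 seconds
        access_log /mnt/nginx-logs/access.log combined buffer=64k flush=5s;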

Good Luck!
