Handling the Microsoft 365 outage: a lesson for future DDoS attacks
Battling Layer 7 Attacks: The Road to Better Cyber Defence
Cyber threats have been growing rapidly in both complexity and volume, with Layer 7 attacks proving especially challenging to defend against. Each layer of the network stack presents a unique set of vulnerabilities that threat actors can exploit. The seventh layer, the application layer, deals with application-specific communications, and the complexity and diversity of applications make it a prime target for attackers.
Defending against cyber threats, particularly Layer 7 attacks, requires continuous innovation and adaptation. This was highlighted in June 2023, when Microsoft reported a surge in traffic that temporarily impacted the availability of some of its services.
Microsoft's Layer 7 DDoS Attacks: An Overview
Microsoft's security team detected and tracked the DDoS activity of a threat actor they named Storm-1359. The actor used a variety of resources, including multiple virtual private servers (VPS), rented cloud infrastructure, open proxies, and DDoS tools.
Rather than targeting layers 3 or 4, the attacker launched a more complex attack on layer 7. These attacks pose a significant challenge because they mimic regular traffic and are distributed globally across many source IPs.
The Attack Methods
Storm-1359 used various attack types, including:
- HTTP(S) Flood Attack: The attacker aimed to exhaust system resources with a high load of SSL/TLS handshakes and HTTP(S) request processing, causing the application backend to run out of compute resources such as CPU and memory.
- Cache Bypass: The attacker attempted to overload the origin servers by bypassing the CDN layer.
- Slowloris: The client opens a connection to a web server, requests a resource (such as an image), but then fails to acknowledge the download (or accepts it very slowly). This forces the web server to keep the connection open and the requested resource in memory.
Strengthening Layer 7 Protections
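A common server-side defence against Slowloris-style attacks is to enforce a deadline on receiving the complete request headers, closing any client that trickles bytes too slowly. The sketch below is illustrative only (the deadline value and function name are our own), not a description of any particular server's implementation:

```python
import socket
import time

HEADER_DEADLINE = 10.0  # seconds allowed to deliver the full header block

def read_headers_with_deadline(conn: socket.socket):
    """Return the raw request headers, or None if the client was too slow."""
    conn.settimeout(1.0)  # wake up regularly to re-check the deadline
    deadline = time.monotonic() + HEADER_DEADLINE
    buf = b""
    while b"\r\n\r\n" not in buf:  # blank line terminates the header block
        if time.monotonic() > deadline:
            conn.close()           # drop the slow (possibly malicious) client
            return None
        try:
            chunk = conn.recv(4096)
        except socket.timeout:
            continue
        if not chunk:              # client disconnected
            return None
        buf += chunk
    return buf
```

Production servers expose this as configuration (e.g. header read timeouts) rather than application code, but the principle is the same: slow clients cannot hold a worker open indefinitely.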
Microsoft mitigated the majority of disruptions by hardening its Layer 7 protections, fine-tuning Azure Web Application Firewall (WAF) to better defend customers from the impact of similar DDoS attacks.
Azure Web Application Firewall, ModSecurity, and DDoS Attacks
Azure Web Application Firewall (WAF), an integral part of Microsoft's security architecture, is built upon ModSecurity, a well-established open-source WAF engine. However, the recent DDoS attack Microsoft faced highlighted potential limitations in using ModSecurity, or any conventional WAF, as the primary defence mechanism against such threats.
ModSecurity's Limitations in DDoS Defence
ModSecurity, while effective against a variety of web application threats, has certain limitations when dealing with DDoS attacks:
- Lack of Scalability: ModSecurity is not inherently scalable. It can struggle to handle the enormous traffic volume associated with DDoS attacks.
- Delayed Response: ModSecurity's rule-based approach may result in slower response times to evolving DDoS threats. While it can block threats based on established rules, it can take time to identify and create rules for new or uncommon attack patterns.
- Operational Complexity: ModSecurity requires substantial expertise and constant fine-tuning to remain effective, potentially slowing down response times during a fast-paced DDoS attack.
These limitations were evident during the recent DDoS attack Microsoft experienced. Even though Microsoft utilised ModSecurity via Azure WAF, the time it took for Azure to respond underlines the challenges of using traditional WAFs to handle such attacks.
The Role of Residential Proxy Networks in Layer 7 DDoS Attacks
Residential proxy networks present a unique and complex challenge in the defence against Layer 7 DDoS attacks. These networks leverage IP addresses tied to physical locations, often originating from typical home or office internet connections, making it increasingly difficult to differentiate between legitimate and malicious traffic.
Unlike traditional proxy or VPN networks, where traffic can be blocked or rate-limited based on their recognisable IP ranges, residential proxy networks blend in with legitimate users. This blending complicates the process of identifying and blocking malicious requests, as any blocking or limiting measures could potentially impact legitimate traffic from residential IPs.
Fingerprinting: A Potential Solution
In this context, fingerprinting emerges as a crucial mechanism to distinguish between legitimate clients and malicious actors. Fingerprinting involves gathering various data points from each client request, including user agent, IP address, headers, cookies, and more. The combination of these data points creates a unique 'fingerprint' for each client.
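As a simplified illustration, such a fingerprint can be sketched as a hash over these data points. The function name and field choices below are our own; real deployments also fold in lower-level signals (such as TLS handshake fingerprints) that are not visible at this layer:

```python
import hashlib

def client_fingerprint(ip: str, user_agent: str, headers: dict) -> str:
    """Combine request attributes into a stable fingerprint hash."""
    # The *set and order* of header names is often more telling than
    # their values: different clients send headers in different orders.
    header_names = ",".join(h.lower() for h in headers)
    material = "|".join([ip, user_agent, header_names])
    return hashlib.sha256(material.encode()).hexdigest()[:16]
```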
By analysing these fingerprints, it's possible to detect anomalous patterns in the requests, potentially identifying malicious clients hidden behind residential IPs. However, it's important to bear in mind that while fingerprinting can improve the accuracy of identifying malicious traffic, it isn't foolproof and should be part of a broader, layered defence strategy.
Moreover, implementing effective fingerprinting measures requires substantial technical expertise and resources. It's crucial to ensure that such measures do not inadvertently affect the user experience or breach privacy regulations.
The Need for Specialised Rate Limiting Services
A specialised rate limiting service could have offered a quicker and more effective response to the DDoS attack. Rate limiting restricts the number of requests that an IP address can make within a specific time period.
Such a service offers several advantages in defending against DDoS attacks:
- Rapid Response: Rate limiting can provide a quick initial defence against a DDoS attack by immediately limiting traffic from suspicious IP addresses.
- Flexibility: Rate limiting rules can be applied to various factors such as IP addresses, URL, headers, response codes, and more, offering flexible and comprehensive defence mechanisms.
- Reduced Load: By limiting the rate of requests, these services can reduce the load on the server, preserving resources for legitimate traffic.
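A minimal per-IP sliding-window limiter illustrates the mechanism. This is a sketch under our own naming, not any vendor's implementation; production services distribute these counters across a fleet rather than keeping them in one process:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key (e.g. an IP)."""

    def __init__(self, limit: int = 100, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        q = self.hits[key]
        while q and now - q[0] > self.window:  # evict expired timestamps
            q.popleft()
        if len(q) >= self.limit:
            return False                       # over the limit: reject (HTTP 429)
        q.append(now)
        return True
```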
Advanced Rate Limiting and Custom Keys
One way to defend against these attacks is through advanced rate limiting. Rate limiting restricts the number of requests an IP address, URL, or another custom key can make in a set time period, preventing a single actor from flooding a network with traffic.
Criteria Used in Rate Limiting
Rate limits can be defined based on various criteria:
- IP Address
- URL
- Query String
- Headers
- Response Codes
- GeoIP Information: ASN or Country Code
- Parsed User Agent Information: Different rules for search engines vs. generic 'bots'
- Fingerprints: TCP, TLS or H2 fingerprints can uniquely identify the connecting software
- Meta Information: Signals from a bot protection service
These criteria allow a rate limiter to 'bucket' requests in different ways, effectively rate limiting a larger group of connections at once.
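The bucketing idea can be sketched as choosing what a limit counts by. In this hypothetical example the `asn` and `tls_fingerprint` fields stand in for a GeoIP lookup and a TLS-fingerprinting layer that the serving stack would normally provide:

```python
def bucket_key(req: dict, mode: str) -> str:
    """Derive the rate-limit key ('bucket') from request attributes."""
    if mode == "ip":
        return "ip:" + req["ip"]
    if mode == "asn":
        # One counter for a hosting provider's entire address range.
        return "asn:" + str(req["asn"])
    if mode == "tls":
        # One counter for every IP running the same client software,
        # e.g. a botnet's shared attack tool.
        return "tls:" + req["tls_fingerprint"]
    raise ValueError(f"unknown mode: {mode}")
```

Keying on an ASN or a TLS fingerprint is what makes it possible to throttle a distributed attack whose individual IPs each stay under any per-IP threshold.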
The Role of Anomaly Detection
Anomaly detection is another key tool in our arsenal against these attacks. It involves identifying patterns or events that deviate from the norm, which might indicate suspicious activities. By promptly detecting such anomalies, we can respond faster, identify a suitable rate limit key and thwart the potential attack.
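A deliberately simple baseline detector conveys the idea: flag any traffic sample that dwarfs the recent average. This sketch uses our own names and thresholds; production systems use seasonality-aware models, but the principle is the same:

```python
from collections import deque

class RateAnomalyDetector:
    """Flag a traffic sample that exceeds a multiple of the recent average."""

    def __init__(self, history: int = 60, factor: float = 5.0):
        self.samples = deque(maxlen=history)  # e.g. requests-per-second samples
        self.factor = factor

    def observe(self, rps: float) -> bool:
        baseline = sum(self.samples) / len(self.samples) if self.samples else None
        self.samples.append(rps)
        # Anomalous if the new sample far exceeds the established baseline.
        return baseline is not None and rps > self.factor * baseline
```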
Caching as a Mitigation Strategy
Caching is an effective strategy to mitigate Layer 7 attacks. A cache stores static responses to requests, reducing server load by serving stored responses instead of processing each request individually. In a DDoS scenario, where a flood of requests hits the server, caching helps maintain availability. Ignoring client-provided Cache-Control headers such as 'max-age=0' or 'no-cache' can be an effective measure, as attackers typically use these headers to bypass a CDN.
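The edge-cache behaviour described above can be sketched as follows. This is a hypothetical, single-process illustration (names and TTL are our own); the key point is that the client's Cache-Control header is deliberately never consulted:

```python
import time

CACHE: dict = {}   # url -> (response_body, stored_at)
TTL = 300          # seconds a cached response stays valid

def handle(url: str, request_headers: dict, fetch_origin) -> bytes:
    # NOTE: request_headers["Cache-Control"] is intentionally ignored, so
    # "no-cache" / "max-age=0" cannot be used to stampede the origin.
    entry = CACHE.get(url)
    if entry and time.monotonic() - entry[1] < TTL:
        return entry[0]                        # cache hit: origin untouched
    body = fetch_origin(url)                   # cache miss: one origin fetch
    CACHE[url] = (body, time.monotonic())
    return body
```

However many times an attacker repeats a cache-busting request for the same URL, the origin sees at most one fetch per TTL window.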
Peakhour Recommendations for Defence Against Layer 7 Attacks
- Utilise anomaly detection to know when an attack is happening.
- Use Layer 7 protection services, including rate limiting; past 99th-percentile hit rates are a sensible starting point for limits.
- Employ bot mitigation techniques; most Layer 7 attacks originate from bots.
- Utilise IP reputation as an early warning sign; many attacking IPs have been involved in previous attacks.
- Block, limit, or redirect traffic from outside a defined geographic region.
- Rate limit or block requests from datacentre and hosting ASNs.
- Create custom WAF rules to automatically block and rate limit HTTP or HTTPS attacks with known signatures.
- Utilise effective CDN caching, and ignore client presented Cache-Control headers.
Defending against Layer 7 attacks requires an array of strategies. Rate limiting, anomaly detection, and effective use of caching are all valuable tools.