andrewmathu
Introduction
Bots are a common presence on the internet, serving a range of functions from automating customer service to indexing pages for search engines. However, their capabilities can be exploited for malicious activities, such as launching botnet attacks that can compromise web applications and disrupt services. Businesses continuously face the delicate balancing act of allowing good bots to perform their functions while preventing bad bots from causing harm.
To address these challenges, Azure Web Application Firewall (WAF) has new enhancements that provide advanced protection against such threats, ensuring the security and integrity of web applications. In this blog, we will explore the Azure WAF Bot Manager 1.1 ruleset, available now in Azure Front Door (AFD) and coming soon to Application Gateway WAF, as well as the WAF JavaScript Challenge, which is available in both Application Gateway and Azure Front Door. These features offer comprehensive protection against malicious bots while ensuring that good bots can continue their work without interruption.
The Malicious Bot Landscape
Bots account for approximately 48% of all internet traffic, with 30% of this attributed to malicious bots. These malicious bots are automated programs designed to attack web and mobile applications for fraudulent and harmful purposes. A sizable portion of these, around 33%, are simple bad bots that use automated scripts to conduct their malicious activities.
Bad bots can engage in a variety of attacks, including:
- Launching DDoS attacks on customer-facing websites.
- Gaining initial access by escalating privileges in critical systems, then using that access to launch additional attacks through lateral movement.
- Spamming customer websites with form submission pages.
- Spoofing legitimate mobile user agents to execute a range of fraudulent and malicious activities.
- Scraping website content, tampering with SEO rankings or prices, and launching denial-of-inventory attacks.
- Spreading false information, performing targeted phishing, and conducting social engineering attacks.
Given the wide range of threats posed by malicious bots, it is crucial to have robust defenses in place to protect your web applications. In the next section, we will explore how the new Bot Manager 1.1 ruleset and JavaScript Challenge work to effectively prevent the threats posed by malicious bots.
Azure WAF Bot Manager 1.1 Ruleset
The Azure WAF Bot Manager ruleset helps protect web applications by identifying and managing bot traffic, distinguishing between good bots and malicious bots, and applying appropriate actions (Block, Allow, Log, JS Challenge) to each rule.
Azure WAF’s Bot Manager 1.1, available in Azure Front Door, represents an improvement over its predecessor, Bot Manager 1.0. This ruleset is designed to provide more precise detection of both good bots and bad bots, reducing false positives and improving security.
Bot Manager 1.1 introduces advanced detection capabilities by refining and expanding the rules that differentiate between legitimate and malicious bots. These enhancements have been made in the Goodbot and Badbot rules.
The Goodbot Rule Group
The Goodbot rule group in Bot Manager 1.1 has been significantly enhanced to reduce false positives and improve SEO rankings by allowing a broader range of legitimate bots to access websites. This group now includes a variety of verified good bots categorized into specific roles such as search engine crawlers, advertising bots, social media bots, link checkers, content fetchers, and feed fetchers. These enhancements ensure that well-known legitimate bots such as Bingbot and Googlebot can perform their functions without being blocked, preventing issues like lower SEO rankings and disrupted services. Additionally, the flexibility to customize actions for each Goodbot rule gives users granular control over their web application’s interaction with these bots.
The screenshot below displays the new Goodbot rules added to the Bot Manager 1.1 ruleset:
For more details on the Goodbot rules, you can check out – Goodbot Rules.
The Badbot Rule Group
The Badbot rule group in Bot Manager 1.1 introduces a powerful new rule, Bot100300, which targets IPs with high-risk scores identified through threat intelligence. This rule complements existing bad bot detection mechanisms, such as Bot100100, which focuses on verified malicious IPs. By enhancing the detection of risky and malicious bots, this rule group helps mitigate threats like scraping, phishing, spamming, and denial-of-inventory attacks. The default action for these bots is set to "block," ensuring that harmful activities are effectively thwarted, although users have the option to modify this action if needed.
The screenshot below displays the new Badbot rule added to the Bot Manager 1.1 ruleset:
For more details on the Badbot rules, you can check out – Badbot rules.
Enabling and using the new Bot Manager 1.1 Ruleset
To enable the Bot Manager 1.1 ruleset on your Azure Front Door WAF in the Azure portal, navigate to your AFD WAF policy:
- In the policy settings, go to the Managed rules tab. Here, you will find the option to assign the Bot Manager 1.1 ruleset.
- Simply select the Bot Manager 1.1 ruleset from the dropdown menu under the Assign option.
- Click on Save to apply the change.
- After assigning the ruleset, you can customize the specific actions for each rule group based on your security needs, such as blocking or allowing certain bot categories.
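For teams that deploy infrastructure as code, the same assignment can be expressed declaratively. The fragment below is an illustrative ARM template sketch, not a complete deployment: the policy name is a placeholder, and the apiVersion and SKU should be verified against the current FrontDoorWebApplicationFirewallPolicies reference (managed rulesets require the Premium Front Door tier).

```json
{
  "type": "Microsoft.Network/FrontDoorWebApplicationFirewallPolicies",
  "apiVersion": "2022-05-01",
  "name": "MyAfdWafPolicy",
  "location": "Global",
  "sku": { "name": "Premium_AzureFrontDoor" },
  "properties": {
    "policySettings": { "enabledState": "Enabled", "mode": "Prevention" },
    "managedRules": {
      "managedRuleSets": [
        {
          "ruleSetType": "Microsoft_BotManagerRuleSet",
          "ruleSetVersion": "1.1"
        }
      ]
    }
  }
}
```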
With the enablement complete, the ruleset is available for use, providing your application with enhanced protection against malicious bots while allowing legitimate bot traffic.
To demonstrate the Bot Manager 1.1 ruleset in action, we conduct a simple test to show how a bad bot can be blocked. In our setup, we install Postman in a virtual machine with internet access and configure Azure Front Door with a WAF Policy that has the Bot Manager 1.1 ruleset enabled. Behind this Azure Front Door, a web application is running and is actively protected by the WAF. We use Postman as it allows us to manually craft HTTP requests, making it an ideal tool to simulate bot traffic and test the WAF's response to malicious IP addresses.
In Postman, we simulate a request from a bad bot attempting to access the protected web application. This is done by injecting a known malicious IP address into the ‘x-forwarded-for’ header—a technique often employed by bots to disguise their actual origin. We configure Postman to send a GET request to the web application's endpoint. In the headers section, we add the ‘x-forwarded-for’ header and assign it the malicious IP address, which has been flagged for engaging in malicious activities.
With the request configured, we send the GET request to the web server through the AFD address. The WAF policy, with Bot Manager 1.1 ruleset enabled, detects the request as malicious based on the IP address and blocks it before it can reach the web application. The server responds with a 403 Forbidden status code, confirming that the bad bot has been successfully prevented from accessing the application.
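The Postman steps above can also be scripted. The sketch below builds the same spoofed request with Python's standard library; the AFD endpoint and the flagged IP are placeholders, not values from a real threat-intelligence feed:

```python
from urllib.request import Request, urlopen

def build_bad_bot_request(url: str, spoofed_ip: str) -> Request:
    """Build a GET request carrying a spoofed X-Forwarded-For header,
    mimicking a bot that hides its true origin behind a forged client IP."""
    return Request(url, headers={"X-Forwarded-For": spoofed_ip}, method="GET")

# Placeholders only: substitute your own AFD endpoint and a flagged IP address.
prepared = build_bad_bot_request("https://myapp-demo.azurefd.net/", "203.0.113.50")
# urlopen(prepared) would raise an HTTPError with code 403 once the WAF blocks it.
```

Sending the prepared request against a policy with Bot Manager 1.1 in prevention mode should produce the 403 Forbidden response described above.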
In our AFD WAF logs, we observe that the request was blocked by the Bot Manager ruleset:
Azure WAF JavaScript (JS) Challenge
The JavaScript (JS) Challenge in Azure WAF is an invisible, non-interactive web challenge designed to differentiate between legitimate users and bad bots. When triggered, it presents a challenge to the user's browser, which is processed automatically without any human intervention. Legitimate users pass through seamlessly, while malicious bots fail the challenge and are blocked. This approach effectively protects web applications from bot attacks while maintaining a smooth experience for real users, as it operates behind the scenes without disrupting normal browsing activities.
The JavaScript Challenge is triggered when it is enabled on Azure WAF and a client's HTTP(S) request matches a specific rule in the WAF policy. The challenge prompts the client's browser to perform a computational task on a dedicated JavaScript challenge page. While the user may briefly see this page, the challenge runs automatically in the background without requiring any user interaction. If the browser successfully completes the task, the request is validated and allowed to proceed, indicating that the client is a legitimate user. If the challenge fails, the request is blocked, effectively stopping the bad bot from accessing the application.
The JS Challenge is particularly beneficial because it reduces friction for legitimate users; it is invisible and requires no human intervention. This seamless approach ensures that the user experience remains unaffected while providing robust protection against bad bots. Additionally, the challenge is reissued under certain conditions, such as when a user’s IP address changes or when accessing the page from a different domain, ensuring continuous and adaptive protection.
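The flow just described can be modeled with a short sketch. This is purely conceptual: the cookie name, token format, and validation logic below are invented for illustration and are not Azure's implementation, which uses signed, expiring cookies.

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    """A minimal model of an incoming HTTP request (illustrative only)."""
    client_ip: str
    host: str
    cookies: dict = field(default_factory=dict)

def handle(request: Request, issued: dict) -> str:
    """Sketch of the JS Challenge decision flow.

    'issued' maps a challenge token to the (ip, host) pair it was issued for.
    A real WAF validates signed, expiring cookies; this is a toy stand-in."""
    token = request.cookies.get("js_challenge")  # hypothetical cookie name
    if token in issued and issued[token] == (request.client_ip, request.host):
        return "allow"       # valid cookie: the request proceeds
    return "challenge"       # no/invalid cookie: serve the challenge page

def solve_challenge(request: Request, issued: dict) -> None:
    """A real browser runs the JS task and earns a cookie; bots never get here."""
    token = f"tok-{request.client_ip}-{request.host}"
    issued[token] = (request.client_ip, request.host)
    request.cookies["js_challenge"] = token

# A browser is challenged once, then allowed; an IP change forces a re-challenge.
issued: dict = {}
browser = Request(client_ip="198.51.100.7", host="contoso.example")
assert handle(browser, issued) == "challenge"
solve_challenge(browser, issued)
assert handle(browser, issued) == "allow"
browser.client_ip = "198.51.100.8"   # IP changed: cookie no longer valid
assert handle(browser, issued) == "challenge"
```

The same mechanism explains the reissue conditions above: binding the token to the client's context means a changed IP or a different domain invalidates it.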
Azure WAF JS Challenge Characteristics:
- Invisible, Non-Interactive Challenge: The JS Challenge operates without requiring input from users, allowing for a smooth browsing experience while blocking malicious bots. The user very briefly sees the challenge page (shown below):
- Customizable Cookie Lifetime: The validity of the JS Challenge cookie can be customized, with options ranging from 5 to 1,440 minutes (24 hours). The default setting is 30 minutes. This is found in the Policy Settings page of the WAF policy in Application Gateway and Azure Front Door.
JS Challenge action settings in Application Gateway WAF:
JS Challenge action settings in Azure Front Door WAF:
- JS Challenge in Managed rules: The JS Challenge is integrated in the WAF managed rulesets within the Bot Manager ruleset. To enable the JavaScript Challenge within the Bot Manager's managed rules, users can navigate to the Managed rules section in their WAF policy and adjust the actions for each rule group. This setup allows the WAF to adapt to various security needs, applying the JavaScript Challenge as necessary to ensure ongoing protection.
JS Challenge action in Managed rules for Application Gateway WAF:
JS Challenge action in Managed rules for Azure Front Door WAF:
- JS Challenge in Custom Rules: The JavaScript Challenge can be applied within custom rules, allowing administrators to target specific traffic patterns or conditions, such as IP addresses or request headers. This provides granular control over when the challenge is triggered, enhancing security by focusing on specific threats.
JS Challenge Custom rule action in Application Gateway WAF:
JS Challenge Custom rule action in Azure Front Door WAF:
- Cross-Origin Resource Sharing (CORS) Protection: The challenge is reapplied when accessing resources from a different domain, ensuring consistent security across multiple domains.
- Logging and Metrics: Detailed logs and metrics are captured whenever the JS Challenge is triggered. These allow security administrators to track the challenges and analyze traffic patterns and security incidents. The JS Challenge logs and metrics are available in both AFD and Application Gateway.
Example JS Challenge Metric for Application Gateway WAF:
Example JS Challenge Logs in Azure Front Door:
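The cookie-lifetime setting from the characteristics above can be captured in a small helper. The function names are ours; Azure enforces the same 5 to 1,440 minute bounds (default 30) in the portal and API:

```python
from datetime import datetime, timedelta
from typing import Optional

# Azure WAF JS Challenge cookie-lifetime bounds (minutes)
MIN_MINUTES, MAX_MINUTES, DEFAULT_MINUTES = 5, 1440, 30

def validate_challenge_lifetime(minutes: Optional[int]) -> int:
    """Return a JS Challenge cookie lifetime inside the allowed 5-1,440 minute
    range, falling back to the 30-minute default when no value is supplied."""
    if minutes is None:
        return DEFAULT_MINUTES
    if not MIN_MINUTES <= minutes <= MAX_MINUTES:
        raise ValueError(
            f"lifetime must be {MIN_MINUTES}-{MAX_MINUTES} minutes, got {minutes}")
    return minutes

def cookie_expiry(issued_at: datetime, minutes: Optional[int] = None) -> datetime:
    """Compute when a challenge cookie issued at 'issued_at' stops being honored."""
    return issued_at + timedelta(minutes=validate_challenge_lifetime(minutes))
```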
Enabling and using the JavaScript Challenge
As seen earlier, the JavaScript (JS) Challenge can be enabled within both the Bot Manager ruleset and custom rules. To enable it within the Bot Manager ruleset, simply navigate to the Managed Rules section of your WAF policy in either Application Gateway or Azure Front Door, select the Bot Manager rule you want to configure, and change the action to JS Challenge. For custom rules, you would create a new rule and select the JS Challenge as the action. Additionally, within the Policy Settings, you can adjust the JS Challenge cookie’s validity period, with options ranging from 5 to 1,440 minutes.
Azure Front Door (Example) – Enabling JS Challenge in the Bot Manager Ruleset:
Application Gateway WAF (Example) – Enabling JS Challenge in a Custom Rule:
To demonstrate the JS Challenge in action, we set up a simple scenario using an Application Gateway with a WAF policy and use the custom rule we created above. We have a demo web application behind the Application Gateway protected by our WAF. Our custom rule is configured to inspect the RequestUri and trigger the JS Challenge when the URI contains /ftp. If a request matches this condition, the WAF challenges it using the JS Challenge. A bot will fail to solve the challenge, whereas a legitimate user using a browser will pass through without issues. In our setup, within Policy Settings, the JavaScript Challenge timeout is set to 5 minutes.
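Assuming a lowercase transform on the match variable, the demo rule's logic (RequestUri Contains /ftp, action JS Challenge) reduces to a simple predicate. This sketch models only the decision, not the WAF engine itself:

```python
def matches_custom_rule(request_uri: str, match_value: str = "/ftp") -> bool:
    """Mirror the demo custom rule: RequestUri Contains '/ftp'.
    A lowercase transform is assumed, making the match case-insensitive."""
    return match_value.lower() in request_uri.lower()

def waf_action(request_uri: str) -> str:
    """Return the action our demo policy would take for a given request URI."""
    return "JSChallenge" if matches_custom_rule(request_uri) else "Allow"

print(waf_action("/ftp/files"))  # JSChallenge
print(waf_action("/home"))       # Allow
```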
We first open Developer Tools (pressing F12) in our browser and navigate to the Network section to monitor the requests. Then, we launch the web application and click on the link that leads to the /ftp path. The browser briefly displays the challenge, confirming that the JS Challenge is active and functioning.
After the challenge finishes, the JS challenge cookie will appear under the Response Headers:
When we navigate to any other page within our application website, we notice the same cookie included in the Request Headers:
The same JS challenge cookie appears on other pages of the application, confirming that the user has already passed the challenge. Once the challenge is completed, the cookie is stored in the user's browser and sent with every request to any page within the same domain. This prevents the user from being re-challenged on each page, ensuring they can navigate smoothly across the application without interruption while maintaining security.
The Application Gateway WAF logs provide detailed insights into JS Challenge requests, showing the issued and passed challenges as well as active challenges:
Conclusion
Malicious bots pose serious risks to web applications, from scraping content to launching denial-of-service attacks. Azure WAF’s Bot Manager 1.1 and JavaScript Challenge provide robust protection by effectively blocking bad bots while allowing legitimate traffic to flow seamlessly. By implementing these features, businesses can safeguard their web applications from automated threats without compromising the user experience. These tools offer a powerful, adaptive defense against the evolving landscape of bot-driven attacks.
Resources:
- What is Azure Web Application Firewall on Azure Application Gateway? - Azure Web Application Firewall | Microsoft Learn
- What is Azure Web Application Firewall on Azure Front Door? | Microsoft Learn
- Bot Protection Ruleset
- Configure bot protection for Web Application Firewall with Azure Front Door | Microsoft Learn
- General availability of Azure WAF Bot Manager 1.1 Ruleset - Microsoft Community Hub
- Azure Web Application Firewall JavaScript challenge (preview) overview | Microsoft Learn
- Azure WAF Public Preview: JavaScript Challenge - Microsoft Community Hub