Dev Overflow – Part 7
By Mikhail Sudakov, Cyber Security Architect and Analyst, LEO Cyber Security.
As previous posts demonstrate, (software) security is a state of mind, not a checkbox to be marked off to meet some criteria, and definitely not something to be bolted on top of a completed product. Broken access control in particular highlights the limits of automated testing: these vulnerabilities are notoriously hard to detect with the usual static or dynamic analyses.
A5 – Broken Access Control
User access controls deal with authorization. Do not confuse that with authentication, which must happen first. How can we successfully determine what a user is allowed to do if we do not first explicitly establish the user to be Alice, Bob, or whoever? Once the authentication process is complete and we can trust the user’s identity, proper authorization of user actions and controls must be carefully implemented and enforced.
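To make the ordering concrete, here is a minimal Python sketch of the two distinct steps. All names (`USERS`, `PERMISSIONS`, the functions) are hypothetical illustrations, not any particular framework's API:

```python
# Hypothetical sketch: authentication establishes WHO the user is;
# authorization then decides WHAT that user may do. Toy data stores.

USERS = {"alice": "s3cret"}                  # credential store (illustrative only)
PERMISSIONS = {"alice": {"view_reports"}}    # per-user allowed actions

def authenticate(username, password):
    """Step 1: establish identity. Returns the user id, or None on failure."""
    if USERS.get(username) == password:
        return username
    return None

def authorize(user, action):
    """Step 2: only meaningful AFTER authentication has succeeded."""
    return action in PERMISSIONS.get(user, set())

user = authenticate("alice", "s3cret")
assert user is not None                      # identity established first
print(authorize(user, "view_reports"))       # True  -- Alice may do this
print(authorize(user, "delete_reports"))     # False -- she may not
```

The point of keeping the two functions separate is that an authorization decision about an unauthenticated "user" is meaningless: there is no trustworthy identity to look up permissions for.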
“Restrictions on what authenticated users are allowed to do are often not properly enforced. Attackers can exploit these flaws to access unauthorized functionality and/or data, such as access other users’ accounts, view sensitive files, modify other users’ data, change access rights, etc.” – OWASP
Perhaps one of the most crucial mistakes inexperienced developers make, especially in web applications, is assuming that users (including the adversary) cannot interact with an element they cannot see. Absolutely not true! This often ends up being a very costly mistake. There are so many places where a similar blunder can be committed, and so many attack vectors for exploiting the resulting vulnerabilities, that I won’t go into detail on them here. Instead, I want to address what I believe is the primary issue at the very heart of broken access controls.
In my opinion, the main blunder isn’t that a developer might think an attacker cannot interact with a hidden or non-existent element (yep, attackers can create any page element they want, and more!), but that the developer puts trust in anything that comes from client to server. This principle is fairly straightforward, and it is surprising that it is not a universal truth in development. Alas, it is not; maybe one day…
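To see why trusting the client is so dangerous, consider that an attacker can put anything into a request, including fields the UI never exposed. The following sketch (handler and field names are hypothetical) contrasts the blunder with the fix:

```python
# Sketch of a request handler. The client controls form_data completely,
# so it may contain ANY field -- even ones no page ever rendered.

SERVER_SIDE_ROLES = {"alice": "user", "admin_bob": "admin"}  # authoritative store

def handle_request(authenticated_user, form_data):
    # WRONG (the blunder): trusting a client-supplied privilege flag.
    #   is_admin = form_data.get("is_admin") == "true"
    # RIGHT: derive privileges only from server-side state.
    is_admin = SERVER_SIDE_ROLES.get(authenticated_user) == "admin"

    if form_data.get("action") == "delete_all" and not is_admin:
        return "403 Forbidden"
    return "200 OK"

# A forged request: the attacker adds a flag the page never showed.
print(handle_request("alice", {"action": "delete_all", "is_admin": "true"}))
# -> 403 Forbidden, because the server ignored the client's claim
```

Note that the commented-out "wrong" line would have granted Alice admin powers purely on her own say-so; the correct version consults data the client cannot touch.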
The concept of zero-trust is relatively new (less than a decade), and it is often hard to achieve with networks. However, it is a must for any application that wants to be successful in protecting against such vulnerabilities as broken user access controls. Again, remember that this cannot be easily tested automatically and often requires manual human inspection (costly!). So what’s this zero-trust model all about when applied to software development? After positive user authentication happens (at this point, we know exactly who and what the user is)…
The server trusts nothing the user sends over. NOTHING! Not. One. Bit.
Is this hard to achieve? Not at all! It sounds harsh and unnecessary, but all this really means is that the server has to double-check everything the client sends. That is quite manageable. The reason we must double-check everything is that software running in the client (e.g. a browser) is completely malleable, meaning the adversary can very easily change some or even all of it. Let’s take a look at a simple example:
Alice (client): "Bob, this is Alice."
Bob (server): "Identity confirmed and verified. Alice is acknowledged. What's up?"
Alice: "Give me a list of account numbers I control."
Bob: "Your inquiry ID is 'qwerty'. Your account numbers are: 123, 456, 789."
Alice: "Further inquiry for ID 'qwerty'. Show me account number 1337."
Hopefully, you can see where this is going… At this point, Bob (server) checks the inquiry ID (or whatever token is in play) and confirms it is valid. Now, it would be an obvious blunder if the conversation completed like this:
Bob (server): "Inquiry ID 'qwerty' confirmed and verified. The details of account number 1337 are 'blah_blah_blah'."
Whoops… And here we have it. Alice sees something she was not supposed to see – 1337 is not one of her account numbers. Bob did some due diligence in verifying Alice’s inquiry ID (again, or whatever other token), but not enough. Specifically, the server failed to confirm that the account information it was sending Alice belongs to an account she really owns. Yes, Bob should have checked the same thing twice. This silly example does seem obvious, but the same mistake happens much more easily in modern applications (and it does happen – quite often) when many different numbers and identifiers are passed between client and server.
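The missing second check can be sketched in a few lines of Python. Everything here is illustrative (the stores, the function name); the point is that a valid inquiry ID is not enough – the server must also verify ownership against its own authoritative data:

```python
# Sketch of the check Bob's handler should have performed.
# Ownership lives server-side; the client's request is never believed.

ACCOUNTS_BY_OWNER = {"alice": {123, 456, 789}}   # authoritative ownership data
ACCOUNT_DETAILS = {123: "details-123", 456: "details-456",
                   789: "details-789", 1337: "details-1337"}

def get_account_details(authenticated_user, account_number):
    # The crucial second check: not just "is this a valid inquiry/session?",
    # but "does THIS user actually own THIS account?"
    if account_number not in ACCOUNTS_BY_OWNER.get(authenticated_user, set()):
        return None  # deny (e.g. respond 403) -- never leak the data
    return ACCOUNT_DETAILS[account_number]

print(get_account_details("alice", 456))   # Alice owns 456: details returned
print(get_account_details("alice", 1337))  # None: request denied
```

This is exactly the class of bug usually called an insecure direct object reference: the identifier the client sends (here, the account number) must never be treated as proof of entitlement.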
I believe all developers must follow the zero-trust model as closely as possible. And the model is very, very simple: trust nothing, verify everything.
Mikhail is a cybersecurity professional with an enormous passion for the field and thirst for knowledge. In addition to his day job, Mikhail is a professor of cryptology at St. Bonaventure University. A programmer specializing in software security and cryptology, Mikhail always seeks to expand his horizons, hone his skills, and take on new challenges. One day, he hopes to know enough to realize that he knows nothing. Mikhail holds an Offensive Security certification (OSCP) and, although he is a “Russian hacker”, his hat is white and his directive is to protect.