Lack of Input Validation and Security by Obscurity

This post is about something very common in web applications: user input validation. I've assessed plenty of web applications that rely on trusting users to behave well, hoping they respect, or at least never discover, what their current profile is actually able to do. This is a bad development practice, and it leads us to the root cause of the problem: the lack of user input validation (checking every command a user sends to the application) combined with simply "hiding" the menus a user is not supposed to access (hidden links).

It is pretty common for developers to build their application with just a single authorization step: once you've passed the login screen, the application assumes that anything you do beyond that point is an authorized action, and no further profile or credential check is performed. This usually comes paired with "menu hiding", where the functions the user is not supposed to access are simply marked with the "hidden" attribute.
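To make this concrete, here is a minimal sketch of the fix: authorization is re-checked on every action, not only at login. The profile names and actions below are my assumptions for illustration, not the real application's:

```python
# Hypothetical permission table: usernames and actions are made up for
# illustration; the real application's profiles are unknown.
PERMISSIONS = {
    "alexb": {"view_tickets", "create_ticket"},
    "admin": {"view_tickets", "create_ticket", "delete_ticket"},
}

def is_authorized(username, action):
    """Check the user's profile on *every* action, not only at login."""
    return action in PERMISSIONS.get(username, set())

def handle_request(username, action):
    # Deny by default: hiding a menu item is cosmetic, not a security control.
    if not is_authorized(username, action):
        return "403 Forbidden"
    return f"200 OK: {action} executed for {username}"
```

With this in place, a user who finds a hidden link still gets a 403 when the server re-checks the profile.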

To illustrate this, I'll show a practical example: an application that I assessed in the past.

Figure 1 – Web application main interface

The image above is the web app, pretty simple and straightforward; it was developed to register tickets and services. The highlighted text is the username of the logged-in user, passed in the URL. This is a bad practice in itself, since it is sensitive information that could easily be captured or stored in the browsing history. The example is really simple: all we need to exploit this vulnerability is to know another valid username and change it in the URL. This URL is supposed to be something a regular user would never see; the web app uses redirects and URL rewrites to obfuscate it, but it comes to light as soon as you use a web debugging tool.


Figure 2 – Captured traffic, browser-app

Here's the view from the web debugging tool, which shows the full URL. We just have to change the username and see if the application accepts the new input without validating it.

Figure 3 – Changed user credentials

And there we go: the application simply assumed that the user "alexb" made the request, since his username was supplied in the previous request. This allowed me to log into another user's account without knowing its password. This kind of problem can be remediated by performing a credential check on each request the application receives, or by implementing a session ID scheme that confirms the currently logged-in user has sufficient permissions to execute the requested command.
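A minimal sketch of that session ID approach, assuming a simple token-to-user store (the function names are mine, not the application's): the server resolves the identity from an unguessable token, so a username supplied in the URL is never trusted.

```python
import secrets

# Hypothetical server-side session store: token -> username.
SESSIONS = {}

def login(username):
    # Issue a cryptographically random, unguessable session token at login.
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = username
    return token

def current_user(token):
    # Identity comes from the session token only; any "user=" parameter
    # the client puts in the URL is ignored.
    return SESSIONS.get(token)
```

Changing the username in the URL then has no effect, because the server never reads it: a forged or missing token simply resolves to no user.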

Another easy-to-exploit example can be seen below:


Figure 4 – Disabled button

In the image above, I'm highlighting a button that was disabled, probably because the function was already developed but not yet in production, so the developer decided to just disable the button and leave the function running in the background until it reached the production phase. By checking the source code of the web app, we can see that the button is merely disabled. If you are wondering what would happen if we simply enable and click it, here it is:


Figure 5 – Access to the hidden feature

By enabling the button and clicking it, I got into the disabled portion of the web app. Although this may seem really simple to do, it is a real-world example of how some web applications are developed, and it applies to many other kinds of attacks that share the same root cause: bad development practices and trusting the goodwill of the user.
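As a hedged sketch of what the server side should have done: gate the unreleased feature behind a backend check, so re-enabling the button in the browser's DOM achieves nothing. The flag and role names below are assumptions for illustration:

```python
# Hypothetical backend feature flag and role list; the real application's
# internals are unknown.
FEATURE_ENABLED = False          # the not-yet-released feature
ADMIN_USERS = {"admin"}

def handle_hidden_feature(username):
    # Re-check on the server: a disabled button is purely cosmetic, so the
    # endpoint itself must refuse requests while the feature is off.
    if not FEATURE_ENABLED or username not in ADMIN_USERS:
        return "404 Not Found"   # behave as if the endpoint does not exist
    return "200 OK"
```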

To finish this post, I'll show how some types of HTML injection and XSS (Cross-Site Scripting) attacks work. OWASP states that:

“Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.”

In other words, XSS happens when you send a command to the web application and it "renders" or processes it and shows the result back to you. This is widely used to trick users into believing they're accessing a trustworthy site when, in reality, they're seeing a malicious copy of the original, designed to steal information.


Figure 6 – HTML injection

The image above shows the example. The red rectangle marks the vulnerable parameter, which ties back to the point I made earlier: lack of user input validation. This is possible because the parameter echoes back to the user something they sent to the web app, without checking it for anything malicious.

The orange rectangle is the malicious command I sent: a set of HTML instructions that builds a username-and-password form (blue rectangle). The idea is to convince the legitimate user to send their credentials to me (the attacker). Since they're still on the trustworthy website, this looks perfectly secure and legitimate.
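As a sketch of the missing defense, HTML-encoding the reflected input would have rendered the injected form harmless. A minimal Python illustration, where the surrounding page markup is my assumption:

```python
import html

# Minimal sketch: encode user-supplied text before reflecting it, so injected
# markup is displayed as inert text instead of a working fake login form.
def render_message(user_input):
    # html.escape converts <, >, & (and quotes by default) into entities.
    return "<p>You searched for: " + html.escape(user_input) + "</p>"
```

An injected `<form>` payload comes back as the literal text `&lt;form&gt;…`, which the browser displays instead of rendering.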

Cross-site scripting attacks are really hard to avoid and can be found on many websites across the web, but there are a few things you can do to mitigate them; the main objective is to filter user input down to something the application can trust and expects to receive. I strongly recommend checking out the OWASP guide to preventing XSS attacks, which can be found here:


URL Manipulation

As my second post on this blog, I've chosen something a bit more interesting from my previous tests: a URL manipulation vulnerability, where the attacker can change the content that is accessed or called by the application to some other function or content of their choosing.

The root cause of this vulnerability is, again, the lack of user input validation; in other words, the application trusts anything sent to it and executes any command it is given. I'm planning to cover this topic in more depth in a future post, focusing on the SDLC (Software Development Life Cycle) and best practices in application development.

About the application: it is a widely used cloud application with headquarters in Brazil and the US. I'm not disclosing its name or any other details just yet, because I believe the vulnerability has not been fixed across all of the vendor's client websites.

So, here’s the deal…


The image above shows the application's printing functionality: you click the print icon and this window shows up. In every pen test I perform, I normally go through the entire application with a web debugging tool to capture all the requests made to the server for later analysis. By doing this, I was able to capture the request sent to the web server whenever I clicked the print-to-.doc or print-to-.pdf function, and this is what I got:


As you can see above, the big red rectangle marks a bad development practice: the application requests a file from the web server using its full directory path, ultimately disclosing the server's directory structure. This caught my attention, of course, and I started fuzzing the path looking for interesting responses until the vulnerability showed its full potential.

By modifying the path to a file I knew existed (because it exists on most Windows systems), I was able to download anything from the web server, such as system files. As shown in the figure above, the request the application sends is the following:


The path that feeds the "reportFile=" parameter is URL-encoded, which is nothing to worry about, since you just need to google something like "URI encoder" and drop in the path of the file you want. Something like this:

  • C:\Windows\System32\mmc.exe


  • C%3A%5CWindows%5CSystem32%5Cmmc.exe
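If you'd rather do the encoding locally than through an online tool, Python's standard library performs the same transformation:

```python
from urllib.parse import quote, unquote

# Percent-encode every reserved character, matching what a "URI encoder" does.
path = r"C:\Windows\System32\mmc.exe"
encoded = quote(path, safe="")   # -> 'C%3A%5CWindows%5CSystem32%5Cmmc.exe'

# unquote() reverses it, handy for reading captured requests.
decoded = unquote(encoded)
```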

For the evidence below, I used the "hosts" file, located at "C:\Windows\System32\drivers\etc\hosts":


And there it is: we can now download anything from the web server; you just have to know where to find it, or simply brute-force the likely paths. The awful part is that there are plenty of published websites with this vulnerability. Take a look:
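For completeness, here is a hedged sketch of the server-side fix: resolve the requested path and refuse anything that escapes the directory the application is allowed to serve. The directory name and function are assumptions for illustration, not the vendor's code:

```python
import os

# Hypothetical directory the report feature is allowed to serve from.
REPORT_DIR = os.path.realpath("/var/app/reports")

def safe_report_path(requested):
    # realpath() collapses "../" sequences and resolves symlinks, so a
    # traversal payload cannot sneak past a simple string comparison.
    full = os.path.realpath(os.path.join(REPORT_DIR, requested))
    if not full.startswith(REPORT_DIR + os.sep):
        raise PermissionError("path escapes the report directory")
    return full
```

With this check, `reportFile=..%2F..%2FWindows%2F...` style requests fail before any file is opened.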


I found this while doing an assessment for one of our clients, and we contacted the vendor, who fixed the problem on that specific web server. At the time of this post, the vulnerability still persists on other websites, and I'm considering opening a CVE for it, since the vendor hasn't fixed it for their other clients.

The possibilities here are endless; I only downloaded the hosts file to prove the concept, since my time is really limited when I'm doing this kind of work.

Feel free to comment and criticize my post!

XML Injection – Bosch Security Systems

A few months ago I assessed a web application from Bosch Security Systems, which is basically a front end for a surveillance camera. The system is pretty simple and straightforward: you can view the camera's live feed and also do some recording.

The problem is that most people set up these kinds of systems with the default options, and almost every time those options aren't enough for a secure setup. This time was no different, so here's the deal:

XML Injection Vulnerability – Bosch Security Systems

  • Camera Model – Dinion NBN-498-P IVA

The vulnerability was found in the web interface used to monitor the camera's live feed, which can also be published to the web. The application does not properly sanitize the input of the "idstring" field, allowing arbitrary XML or HTML commands to be injected through it. The vulnerability was found only in this specific component.

  • Vulnerable component:
    “camera address”/rcp.xml?idstring=
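A minimal sketch of the missing sanitization, assuming the input ends up inside an XML element ("idstring" is the real parameter; the surrounding element structure is my assumption):

```python
from xml.sax.saxutils import escape

# Hypothetical server-side rendering of the response: escaping the user's
# input turns injected tags into inert text before they reach the XML.
def build_response(idstring):
    return "<idstring>" + escape(idstring) + "</idstring>"
```

An injected `<tagnode>` payload then survives only as the literal text `&lt;tagnode&gt;`, so it can no longer alter the document structure.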

Figure 1 – Web Camera Interface

The image above shows the web interface of the camera. As you can see, it is pretty simple and easygoing. It is also an administrative interface with restricted functions, which was set up with default security settings and no passwords at all.


Figure 2 – POC – Command injection at “idstring”

The command injection can be performed by sending the payload in the "idstring" field; anything you type there is accepted by the system. My lack of background knowledge of this system made me run out of tests here: at the time I did this testing I didn't know anything about the camera's backend, so I wasn't able to mount more elaborate attacks.


Figure 3 – Cont. Command Injection

This last image is the proof of concept: as you can see, the element "tagnode" was inserted through the "idstring" parameter. As I stated before, this isn't an elaborate attack, and I wasn't able to compromise the system due to time constraints at the time of testing, but it is proof that the system is vulnerable to XML command injection.

I'm looking forward to extending my testing of this system to something more practical.

Here’s the timeline for this finding:

  • First contact: 09/17/2015 - no answer
  • Second contact: 09/21/2015 - no answer
  • Disclosure: 03/27/2016