Lack of Input Validation and Security by Obscurity

This post is about something very common in web applications: user input validation. I've assessed plenty of web applications that rely on trusting users to behave well, hoping they respect, or at least never discover, what their current profile is actually able to do. This is a bad development practice, and it leads us to the root cause of the problem: the lack of user input validation (checking every command a user sends to the application), combined with simply "hiding" the menus a user is not supposed to access (hidden links).

It is pretty common for developers to build their application with just a single authorization step: once you've passed the login screen, the application assumes that anything you do beyond that point is an authorized action, and no further profile or credential check is performed. This usually comes together with "menu hiding", where the functions the user is not supposed to access are simply marked with the "hidden" attribute.
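The fix for this pattern is to authorize every request, not just the login. The sketch below illustrates the idea; the permission table, usernames and handler names are all illustrative, not taken from any real application:

```python
# Per-request authorization sketch. Instead of checking credentials only at
# login, every handler verifies the session user's permissions before acting.
# PERMISSIONS, the usernames and create_ticket are hypothetical examples.

PERMISSIONS = {
    "alice": {"view_ticket", "create_ticket"},
    "bob":   {"view_ticket"},  # bob's UI may hide "create", but hiding is not enforcement
}

def authorize(session_user, action):
    """Return True only if the logged-in user actually holds the permission."""
    return action in PERMISSIONS.get(session_user, set())

def create_ticket(session_user, payload):
    # This check runs on every request, regardless of what the UI exposed.
    if not authorize(session_user, "create_ticket"):
        raise PermissionError("not allowed")
    return {"owner": session_user, "ticket": payload}
```

With this in place, a user who discovers a hidden link still hits the server-side check and gets denied.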

To illustrate this, I'll show a practical example: an application I assessed in the past.

Figure 1 – Web application main interface

The image above shows the web app, pretty simple and straightforward: it was developed to register tickets and services. The highlighted text is the username of the logged-in user. That alone is a bad practice, since it is sensitive information that can easily be captured or stored in the browser history. The example is really simple: all we need to exploit this vulnerability is to know another valid username and change it in the URL itself. This URL is something the regular user would never see; the web app uses redirects and URL rewrites to obfuscate it, but it comes to light when you use a web debugging tool.


Figure 2 – Captured traffic, browser-app

Here's the view from the web debugging tool, which shows the full URL. We just have to change the username and see if the application accepts the new input without validating it.

Figure 3 – Changed user credentials

And there we go: the application simply assumed that the user "alexb" made the request, since his username was supplied in the previous request. This allowed me to log into another user's account without knowing its password. This kind of problem can be remediated by performing a credential check on every request the application receives, or by implementing a session ID scheme that confirms whether the currently logged-in user has enough permissions to execute the command.
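The remediation boils down to one rule: the acting identity comes from the server-side session, never from a client-supplied parameter. A minimal sketch, assuming an in-memory session store (SESSIONS, PROFILES and load_profile are illustrative names, not from the assessed application):

```python
# Sketch of the session-based remediation: the user is resolved from the
# server-side session, and a client-supplied ?user= value is only accepted
# if it agrees with that session. All names here are hypothetical.

SESSIONS = {"token-123": "alexb"}            # session ID -> authenticated user
PROFILES = {"alexb": "admin", "maryk": "user"}

def load_profile(session_id, requested_user):
    session_user = SESSIONS.get(session_id)
    if session_user is None:
        raise PermissionError("no valid session")
    # Reject tampering: the URL parameter must match the session's identity.
    if requested_user != session_user:
        raise PermissionError("URL user does not match session user")
    return PROFILES[session_user]
```

Changing "alexb" to "maryk" in the URL now fails, because the session token still belongs to alexb.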

Another easy-to-exploit example can be seen below:


Figure 4 – Disabled button

In the image above, I'm highlighting a button that was disabled, probably because the feature was already developed but not yet in production, so the developer decided to just disable the button and leave the function working in the background until it reached the production phase. By checking the source code of the web app, we can see that the button is merely disabled. If you are wondering what would happen if we simply enabled and clicked it, here it is:


Figure 5 – Access to the hidden feature

By enabling the button and clicking it, I got into the disabled portion of the web app. Although this may seem like something really simple to do, it is a real-world example of how some web applications are built. The same root cause, bad development practice and trusting the goodwill of the user, underlies many other kinds of attacks.
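The lesson is that the client-side "disabled" attribute is cosmetic: the control has to live on the server. A sketch of the idea, assuming a simple feature-flag gate (FEATURE_FLAGS and handle_new_report are hypothetical names):

```python
# Sketch: the not-yet-released feature's handler enforces its own gate
# server-side, so re-enabling the button in the browser buys the attacker
# nothing. The flag name and handler are illustrative assumptions.

FEATURE_FLAGS = {"new_report": False}    # feature developed but not in production

def handle_new_report(user):
    if not FEATURE_FLAGS.get("new_report", False):
        # Respond as if the route does not exist; the disabled button in
        # the HTML is decoration, this check is the actual control.
        return (404, "not found")
    return (200, f"report for {user}")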

To finish this post, I'll show how some types of HTML injection and XSS (cross-site scripting) attacks work. OWASP states that:

“Cross-Site Scripting (XSS) attacks are a type of injection, in which malicious scripts are injected into otherwise benign and trusted web sites. XSS attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user within the output it generates without validating or encoding it.”

In other words, XSS happens when you send a command to the web application so that it "renders" or processes it and shows you the result. This is widely used to trick users into believing they are accessing a trustworthy site when, in reality, it is a malicious copy of the original, designed to steal information.


Figure 6 – HTML injection

The image above shows the example. The red rectangle marks the vulnerable parameter, which ties back to the point I made earlier: lack of user input validation. The attack is possible because this parameter echoes back to the user something he sent to the web app, without checking it for anything malicious.

The orange rectangle is the malicious payload I sent: a set of HTML instructions that builds a username-and-password form (blue rectangle). The idea is to convince the legitimate user to send his credentials to me, the attacker. Since he is still on the trustworthy website, the form looks perfectly secure and legitimate.
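The standard defense for this reflected case is output encoding: escape the user-supplied value before embedding it in the page, so injected tags render as inert text instead of markup. A minimal sketch using Python's standard-library `html.escape` (the `render_message` wrapper is an illustrative name):

```python
import html

# Output-encoding sketch: the echoed parameter is escaped before being
# placed in the HTML response, so an injected <form> or <script> payload
# is displayed as text rather than interpreted by the browser.

def render_message(user_input):
    # html.escape converts <, >, & and quotes to HTML entities.
    return "<p>Message: " + html.escape(user_input) + "</p>"
```

With this in place, the injected login form from Figure 6 would appear on screen as literal angle-bracket text instead of a rendered form.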

Cross-site scripting flaws are really hard to avoid and can be found in many sites around the web, but there are a few things you can do to mitigate them. The main objective is to filter user input down to something the application can trust and expects to receive, and to encode output before rendering it. I strongly recommend checking out the OWASP guidance on preventing XSS attacks.


Vulnerability Management pt.1 – A custom approach

Nowadays, companies must face an ever-growing risk named cybercrime. From the very first time a company publishes its systems or resources on the internet, for the world to see, it exposes itself to threats like cybercrime, hacktivism, or plain malice. Vulnerability management should allow an organization to understand, on a continuous basis, the risks associated with the vulnerabilities in its assets. The goal is to identify and mitigate vulnerabilities in its IT systems so the organization can prevent attackers from causing damage.

In this post I'll be writing about a relatively new subject, at least for me and for most companies in Brazil, and maybe in South America as a whole. As most people know, or should know, methodologies and good market practices don't work as a silver bullet; they are very useful as guidelines from which you (or your consulting company) can draw a customized, efficient process that fits your needs.

Based on my experience, study and years as a security consultant/analyst, I started designing and developing a vulnerability management cycle, drawing on many published management practices, including sources like NIST and SANS. This work was also presented as my graduation thesis, which was accepted and approved.

To start off, I'll quote some basics about vulnerability management as described by SANS in one of its publications. A vulnerability management process typically has the following steps or fields:

  • Asset Inventory
  • Information Management
  • Risk Assessment
  • Vulnerability Assessment
  • Reporting and Remediation Tracking
  • Response Planning

Each field has its own challenges and good practices, which are beyond the scope of this post, but if you are interested I definitely recommend reading "Vulnerability Management: Tools, Challenges and Best Practices" by SANS. These fields are the baseline for a successful vulnerability management process and therefore must be covered.

To illustrate the process itself, SANS uses the following image:


Moving on to the main objective of this post, I'll present the field I stressed the most during this project and a complete overview of the proposed custom vulnerability management approach. Before I move forward, here's a little background on my current company and the environment I have to deal with:

"We are a multi-business, multinational enterprise, a holding of 5 different companies ranging from energy (gas and petroleum) to retail and logistics, with 10,000+ employees. My team and I are responsible for the information security processes and risk analysis for all 5 businesses."

Given that, I think it's fair to say that our network environment is pretty big and complex, something that fully justifies the need to implement such a process.

My goal was to develop a process flow that could be executed repeatedly and would feed back into itself, something like the PDCA model and the many others that aim for continuous improvement. The following cycle was developed based on the good practices mentioned above and on my real-world experience, while also considering the company's needs and our GRC (Governance, Risk and Compliance) objectives.

GV0 – Macro Flow – v1.0 – EN

This cycle is the main overview of the vulnerability management process. It is divided into three basic high-level processes, as you can see above:

  • GV1 Detect Vulnerabilities;
  • GV2 Report;
  • GV3 Manage Vulnerabilities.

Each of these processes has its own set of activities and tasks to be completed before moving to the next step. For GV1, the key activities are:

  • Assess systems, applications and infrastructure;
  • Program automated security tests, tool-assisted;
  • Safely exploit critical vulnerabilities, verifying their full impact;
  • Vendor and vulnerabilities newsletter analysis.

It is crucial that this process be automated as much as possible, since it requires the analysis of many applications and infrastructures. Recurrence is also very important: as time goes by, new threats and vulnerabilities will be spotted in the wild, and consequently new risks will appear.
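The recurrence idea can be sketched as a simple schedule keyed to asset criticality, so critical systems are reassessed more often. The intervals, asset names and criticality labels below are my own assumptions for illustration, not values from the SANS material:

```python
from datetime import date, timedelta

# Illustrative GV1 recurrence sketch: each asset records its last scan date
# and a criticality level, and the scan interval shrinks as criticality
# grows. All intervals and names here are hypothetical.

SCAN_INTERVAL_DAYS = {"critical": 7, "high": 30, "normal": 90}

def next_scan(last_scan, criticality):
    return last_scan + timedelta(days=SCAN_INTERVAL_DAYS[criticality])

def due_for_scan(assets, today):
    """Return the assets whose next scheduled scan date has passed."""
    return [name for name, (last, crit) in assets.items()
            if next_scan(last, crit) <= today]
```

A scheduler (cron, a CI job, etc.) would run `due_for_scan` daily and feed the resulting list to the automated assessment tooling.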

Moving to the second step, GV2, the main activities are:

  • Develop and maintain a report standard;
  • Document and inform the findings;
  • Keep stakeholders aware of the known risks;
  • Expectation alignment, risk acceptance, remediation plans, etc.

It is important to stay up to date when reporting findings and to make sure the stakeholders involved are well aware of the risks and impacts the vulnerabilities may present. This is also the time to correlate and compile all the information regarding the vulnerable asset, using the asset inventory and vulnerability databases. A callback to GV1 can occur; it should happen whenever the findings may have changed, for example when stakeholders have taken some mitigation action and the vulnerability must be reevaluated.

For the GV3 step, the key tasks are:

  • Document, manage and monitor vulnerable assets;
  • Keep the risk acceptance or remediation plans on track and up to date;
  • Study and apply vulnerability remediation options (firewalls, IPS, etc.);
  • Focus efforts in mitigating critical vulnerabilities.

This step is meant to organize the change requests, incident handling and risk management related to vulnerabilities; the idea is to keep track of the risks and keep people aware in a timely manner. For example, if a given incident's root cause is a previously found vulnerability, were the stakeholders aware of the issue and the impact it could lead to? Did they accept the risk and keep the vulnerability for a later study? Whatever the answer, it is important that the information security team does its job of safeguarding the company's IT assets, informing stakeholders that there are vulnerable assets and that the risks are real.
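The GV3 bookkeeping described above can be captured with a minimal finding record that keeps each vulnerability's disposition and history, so the team can answer "were the stakeholders aware?" after an incident. The field names, statuses and severity labels are illustrative choices of mine:

```python
from dataclasses import dataclass, field

# Minimal GV3 tracking sketch: each finding carries its asset, severity,
# current disposition and an audit trail of transitions. All names here
# are hypothetical, not from the actual process documentation.

@dataclass
class Finding:
    asset: str
    severity: str                 # e.g. "critical", "high", "medium"
    status: str = "open"          # open | risk_accepted | remediated
    history: list = field(default_factory=list)

    def transition(self, new_status, note):
        # Record who/why in the note, so risk acceptance is auditable later.
        self.history.append((self.status, new_status, note))
        self.status = new_status

def open_critical(findings):
    """GV3 priority view: critical findings still awaiting treatment."""
    return [f for f in findings if f.status == "open" and f.severity == "critical"]
```

In practice this record would live in a GRC tool or ticketing system rather than in code, but the shape of the data, and especially the audit trail, is the point.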

In this first part I've shown just a quick preview of this work; I'll dig into more detail in part 2, covering the GV1 process itself.

Source material:

  • SANS – Vulnerability Management: Tools, Challenges and Best Practices
  • SANS – Implementing a Vulnerability Management Process