Cyber hygiene and security awareness programs

Security-sensitive companies (nowadays almost every one connected to the internet) spend a great deal of manpower and, most importantly, financial resources trying to keep their infrastructure and users safe from the latest threats the internet has to offer. This means spending thousands of dollars on recent technology, training people and monitoring the environment. The irony is that all this effort and investment can be undone by a single click; of course, the more security layers you have, the lower the chance of someone clicking or running something suspicious on their computer.

Cyber hygiene comes into play when we look for an answer to this problem. It can be defined as the individual's responsibility for maintaining safe behavior at the workplace and even at home. Safe behavior includes, for example, checking whether an e-mail is legitimate or expected before opening it or downloading any attachments, and never providing personal information, like passwords, to anyone. Unfortunately, this kind of behavior isn't present in most companies around the world, and that's the problem.

Most users assume that the company alone is responsible for keeping their information, work tools (such as PCs) and everything work-related safe and sound from threats. As a result, people usually don't think about, or critically analyze, what they are doing before it's done, for example opening a file that arrives by e-mail or clicking a link. Others may say that fast-paced day-to-day tasks leave them no time to stop and analyze everything.

Regardless of the reason, the truth is that everyone should approach their day-to-day work tasks the same way they act on the street with strangers. You usually don't accept anything offered by someone you have never seen or who looks suspicious on the street, and you don't follow people around when they call you over for an irresistible offer at the store around the corner, do you?

So, how should we reach these people and pass on some knowledge about cyber hygiene? It's crystal clear that people who don't care about this subject won't invest much time or attention in it, and making them go through long training sessions or read extensive documentation won't bring much result. That's where security awareness comes in.

Successful security awareness programs should deliver the following:

  • Information relevant to the people you are trying to reach;
  • Quick and easy to understand directives (tips);
  • Illustrative images regarding the messages you send;
  • Gamification of security awareness is also a plus if possible;
  • Up to date subjects, latest information leakages, attacks or trends;
  • Physical actions: desk habits and behaviors that take place in the physical world should also be included;
  • Recurrence and knowledge evaluation.

Unfortunately, there's no silver bullet for security awareness programs, but there are guidelines you should follow and adapt to your reality. The goal is the same for any program: to make people think and question before taking any action.

I recommend starting small with informative e-mails or perhaps phishing campaigns, and measuring the results of those actions to check whether they are effective. It's also very important to be aligned with your Human Resources department, as they have the expertise to talk with employees and can require them to take the awareness courses or tests.
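To make the measuring part concrete: the basic campaign metrics are simple ratios, tracked per campaign over time. A minimal sketch (the numbers below are made up for illustration):

```python
# Hypothetical phishing-campaign results; track these per campaign over time
sent = 500       # e-mails delivered
clicked = 85     # recipients who clicked the link
reported = 40    # recipients who reported the e-mail to security

click_rate = 100.0 * clicked / sent     # lower is better over successive campaigns
report_rate = 100.0 * reported / sent   # higher is better

print("click rate: %.1f%%, report rate: %.1f%%" % (click_rate, report_rate))
```

Comparing these two rates across campaigns is a simple, defensible way to show whether the awareness program is actually changing behavior.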

There's an awesome free resource for this kind of awareness, but it is from a Brazilian entity with all its content in Portuguese; if you can read Brazilian Portuguese, I strongly recommend checking that site out. As soon as I find anything similar in English, I'll be sure to share it with you all.

As usual, feel free to comment below and to get in touch with me.

Hardening HTTPS connections on your server

In this post, I'll be talking about a very common weakness in HTTPS-encrypted connections and how to fix it. Most web servers and services that use HTTPS don't worry about hardening their ciphers and protocols.

The main problem is that encryption protocols and ciphers become obsolete over time, and new vulnerabilities arise from their deprecation. For example, SSLv2 and SSLv3 have long been considered vulnerable, yet you can still find many services that use these protocols.

Getting to what matters, this guide is about enabling only strong, compliant protocols for your encrypted connections on Windows and on some web services on Linux. The procedures in this guide may need to be tweaked to work properly in your environment, as I can't predict every possible variation. Here are some benefits of applying this hardening guide:

  • It will remediate attacks known as DROWN, Logjam, FREAK, POODLE and BEAST;
  • Insecure ciphers and protocols will be disabled, such as SSL 2.0, 3.0, PCT 1.0, TLS 1.0, MD5 and RC4;
  • Only TLS 1.1 and TLS 1.2 protocols will be accepted;
  • These changes are compliant with PCI 3.1 and FIPS 140-2 practices;
  • Old web browsers, such as Internet Explorer versions before 7.0, may no longer be able to establish HTTPS connections.

Note: it's highly recommended to try these changes in a test environment before applying them to production.
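The same principle applies if you terminate TLS in your own application code rather than in IIS or Apache. As a hedged sketch using Python's standard ssl module (constant names as of Python 3.6+), a server context restricted to TLS 1.1 and later looks like this:

```python
import ssl

# Build a server-side context and refuse SSLv3 and TLS 1.0 outright
# (modern OpenSSL builds have already dropped SSLv2 entirely)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS)
ctx.options |= ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1

# Reject weak cipher suites such as RC4 and MD5-based ones
ctx.set_ciphers("HIGH:!aNULL:!RC4:!MD5")
```

The context can then be passed to your server framework of choice; the point is that the protocol floor is enforced in one place.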

Windows environments

There's a tool called IIS Crypto that will do basically everything for you; here's how its developer describes it:

  • IIS Crypto is a free tool that gives administrators the ability to enable or disable protocols, ciphers, hashes and key exchange algorithms on Windows Server 2008, 2012 and 2016. It also lets you reorder SSL/TLS cipher suites offered by IIS, implement best practices with a single click, create custom templates and test your website.

Here’s what to do once you download and run it:

  • Run it as administrator;
  • Click the “Best Practices” button;
  • Uncheck the “TLS 1.0” option, as TLS 1.0 is no longer recommended or safe. Note that this may break some RDP (Remote Desktop) functionality;
  • Click “Apply”;

That's all it takes. If you want to inspect the changes the tool made, do the following:

  • Run “regedit.exe”;
  • Navigate to the following key: “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL”;
  • Check the new subkeys and values that were generated.

You can also do it all by yourself if you want; Microsoft's documentation covers the same registry settings.

Linux environments

Doing this kind of thing on Linux is a bit trickier: there are a lot of distros and several types of web services, so this guide may not apply to everything. Fortunately, there's also an online configuration generator that helps a lot here.

It is a web page where you select your web service and its version; once you've done that, the tool displays the configuration lines to be imported into the file that sets the security characteristics of your web server. Whether you use Apache or any other technology, just navigate to the folder where this file resides and modify it; remember to always keep a backup copy.

  • Set your technology (yellow);
  • Set the “modern” option (blue). This restricts the acceptable protocols and ciphers to only the strong ones;
  • Set the server version and OpenSSL version. “HSTS” is a security header option that may not be compatible with older web applications;
  • Check the configuration to be imported (green);
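For reference, the lines the generator produces for Apache look roughly like the fragment below. This is an illustrative sketch, not the tool's exact output: directive support depends on your Apache and OpenSSL versions, and the cipher list here is an assumed example, so generate your own for your exact setup.

```apache
# Illustrative mod_ssl hardening in the spirit of the "modern" profile
SSLProtocol             all -SSLv3 -TLSv1
SSLCipherSuite          ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384
SSLHonorCipherOrder     on
SSLCompression          off
# HSTS: only enable once you are sure the whole site works over HTTPS
Header always set Strict-Transport-Security "max-age=15768000"
```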

All right, you are in good shape now, at least better than before. The main downside of all this is that some old browsers may have trouble connecting to your web page or service; older versions of IE, like 6 or 7, do not support TLS 1.1 or higher.

If you are worried about this, you can check out an awesome reference on Wikipedia that compiles HTTPS support across most browsers; look for the big table named “TLS/SSL support history of web browsers”.

Comments are always welcome!

Installing and running Cuckoo malware analysis platform – Part 2

As promised, this is the second post of the Cuckoo tutorial set. I'll be guiding you through the process of building the Windows VM (sandbox) where Cuckoo will run all the malware you throw at it. This part also covers a first run of the platform.

It is important to state that this step isn't as easy as it seems; the hardest part is tuning the VM as much as possible so that most malware found around the internet won't be able to identify it as a VM. Malware nowadays has various ways to check whether it is being run on a VM or a real host, because malware authors put a lot of effort into this and won't be pleased to see their malware reverse engineered and countered.

To start off, as you might have imagined, you are going to need a Windows 7 ISO image to install on your new VM. Check the list below for the recommended specs; some of them, such as disk size and available memory, are also checkpoints for malware. Remember that this tutorial is based on a VirtualBox environment.

  • At least a 60 GB virtual hard disk;
  • At least 2 GB of RAM;
  • At least 2 processor cores;
  • Set the “Pointing Device” to “PS/2 Mouse” (otherwise the mouse may malfunction while operating the VM on the Linux machine through xRDP);
  • Set the processor execution cap at 100%;
  • Enable the extended feature “PAE/NX”;
  • Enable the hardware virtualization features “VT-x/AMD-V” and “Nested Paging”;
  • No video acceleration is required;
  • Set the network to “Host-only adapter”.

After setting up the VM's characteristics, it is time to install your Windows 7 image. It's best to install a fully up-to-date image, since your sandbox should look like a real machine. After the steps above, your VM should look something like this:


Note that my VM has only a 40 GB disk; this is something I only noticed after creating it and running some tests. It is widely advised that you build yours with at least an 80 GB disk, since disk size is something malware nowadays looks at. So, when Windows finishes installing, there are some steps you'll need to take to continue setting up your sandbox; here they are:

  • Do not install VirtualBox Guest Additions. Some malware looks for its registry entries and may find them; if you do install it, this guide covers cleaning up those traces later on;
  • Fully update the system via Windows update;
  • Turn off Windows update after the step above;
  • Turn off Windows firewall;
  • Turn off Windows defender;
  • Turn off Security Center;
  • Turn off UAC;
  • Turn off all the notifications you will get by disabling these services;
  • Set the “Adjust for best performance” option in System Properties;
  • Set a fixed IP address. Cuckoo's default network is 192.168.56.x, so set yours to an address in that range. This address must also be placed in the virtualbox.conf file in the Cuckoo conf folder (check this out in part 1);
  • Set video resolution to 1024×768;
  • Put some filler content in the user folders, like images and music, and surf the web a bit to build up browser history.

Now that you're done tweaking Windows, it's time to install all the software and tools you will need to run the vast majority of malware you will find. There are basically three ways to do so:

  • The first is to build an ISO image with all the software you need and mount it in the VM;
  • The second is to create a network share between your host machine and the VM, then move the files over;
  • The third, and the least recommended, is to install VirtualBox Guest Additions and transfer the files;

The third way is the least recommended because, as I already stated above, it leaves traces that the machine is a virtual machine. You can still install it and remove all the registry entries that relate to VirtualBox; I've done that. As for the software you need to install, here's the list:

  • Microsoft Office 2013 x86 (32 bits)
  • Microsoft .NET Framework 4.6 and 4.6.1
  • Microsoft Visual C++ 2005, 2008, 2010, 2012, 2013, 2015
  • Adobe Reader v9.0
  • Flash Player v11
  • Java RE 6 (I’ve installed v6u22)
  • Python 2.7
  • Pillow 2.9.0
  • 7zip
  • Cuckoo agent “agent.pyw”
  • PaFish – Paranoid Fish (tool used to check whether the VM is well obfuscated or not)

After every installation, be sure to run the software once, accept any terms that pop up, leave it maximized and then close it. Cuckoo won't be able to hook every single piece of software that exists; it is compatible with specific software at specific versions, so be sure to check the Cuckoo documentation for details.

You can find all this software around the web with a few clicks, but I know how tedious it would be to gather all this stuff. Knowing that, I will soon put a link on this post with everything you need in a single ISO file, so stay tuned. Note that x64 or more recent versions of some software, such as Office and Adobe Reader, may not work properly with Cuckoo; you can try them out if you want.

Moving on, there are still some things you need to do before you can fire Cuckoo up. There's a piece of the Cuckoo platform that we need to put on the VM so it starts every time the VM boots: the “agent.pyw”. You can find the file in the Cuckoo directory you downloaded before. Here are the steps:

  • On the Windows VM, navigate to “C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup”;
  • Copy the agent.pyw file into that folder;

All right, we now need to add a file I made myself that applies some changes to Windows every time it starts up. Since Cuckoo runs a snapshot of the live VM, as soon as the VM fires up to analyze a sample this script clears some artifacts that malware may use for detection, such as the registry entries from Guest Additions.

  • Open up a notepad;
  • Type in the following:
Windows Registry Editor Version 5.00


[-HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\VirtualBox Guest Additions]
  • Save the file as any name you want with the extension “.reg”
  • Put it or create a shortcut to it in the startup folder “C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup”


This file deletes the telltale registry entries (the leading “-” in the key name tells the Registry Editor to remove that key); you can extend it to rewrite other identifying data about the machine as well.


The next step is to remove a device that VirtualBox installs; it comes back every time the machine starts up. I couldn't find any way to prevent it from being installed or to remove it with a script. If you find a more automated way to do that, please let me know!


It's almost over now. Run the “pafish” tool to check how well obfuscated your VM is. I couldn't make mine perfect; from all the research I've done, many people have stumbled over the same items I have, and I haven't found a fix. Anyway, here's roughly how it should look.



As you can see, I got flagged on a few checks, which means my sandbox setup isn't as good as it could be.

For the final steps, you'll need to do the following:

  • Export the VM to an “.ova” file (if you were running VirtualBox outside of the Cuckoo Linux host) and move it to the Linux host;
  • Import the machine on the Virtual Box of the Linux host:
    • Log in the Linux host with xRDP
    • Run the console and type “sudo virtualbox”
    • Import the appliance
  • Close Virtualbox and type the following on console:
    • sudo vboxmanage list vms (check the VM name)
    • sudo vboxmanage controlvm "Sandbox-Windows7" poweroff (make sure it's off)
    • sudo vboxmanage startvm "Sandbox-Windows7"
    • Wait for the machine to start, manually uninstall the device shown above, then close every window and leave the desktop clean
    • Go back to the console on the Linux host and type: sudo vboxmanage snapshot "Sandbox-Windows7" take "baseline" --pause
    • sudo vboxmanage controlvm "Sandbox-Windows7" poweroff
    • sudo vboxmanage snapshot "Sandbox-Windows7" restorecurrent

OK, from now on VirtualBox is ready to receive the samples from Cuckoo, and the virtual machine will resume right where we left it whenever a job is sent. Double-check the Cuckoo conf files to make sure all settings match the VM; for example, the IP address you set on the VM must be the same as in the virtualbox.conf file, as must the VM name.
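For reference, the relevant part of conf/virtualbox.conf should end up looking roughly like this. The machine name, snapshot name and IP below are hypothetical examples; they must match what you actually created in the steps above, so check your own copy of the file:

```ini
[virtualbox]
mode = headless
machines = Sandbox-Windows7

[Sandbox-Windows7]
label = Sandbox-Windows7
platform = windows
ip = 192.168.56.101
snapshot = baseline
```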

Now it’s time to run a few commands and try out Cuckoo, do the following and start testing!

  • Start the VirtualBox network interface (you will have to do this every time the Linux host boots)
    • VBoxManage hostonlyif create
    • VBoxManage hostonlyif ipconfig vboxnet0 --ip <host-only ip> --netmask <netmask> (if you didn't change the default IP address, it will be the same)
  • Open two console windows on the Linux host
  • Run sudo -i to make sure you have root privileges in both
  • Navigate to the main Cuckoo folder and type this:
    • python cuckoo.py -d
  • On the other console, also navigate to the Cuckoo main folder, then into the web folder, and type this:
    • python manage.py runserver 192.168.X.X:80 (where 192.168.X.X is the IP address of the Linux host)


Now you can point your browser at the IP address of your Linux host; if everything went fine, you should see this:


Try out Cuckoo by sending it your first sample. You can also watch the VM working on its own.



And that’s it!

It's all good to go and you can start testing. Check out the results of any analysis you run on the web interface. You can open xRDP on the Linux host to watch Cuckoo working or to troubleshoot any problems you face.

I hope I've covered everything in these two parts. If you run into any trouble or have ideas or suggestions, please comment below or just leave your feedback. I'll be around to improve anything that may need an extra touch.

Installing and running Cuckoo malware analysis platform – Part 1

In this post I'll be guiding you through all the steps required to install and run a Cuckoo malware analysis platform. I talked about it briefly in my previous post and promised this guide as a continuation. I estimate the time to complete this installation at about 40 to 60 minutes, depending on how closely you follow this guide.

I faced many dependency problems and errors before I was able to compile (or at least I hope so) everything you need to run the platform on the first try. I also spent a lot of time reading different guides until I could finally put this one together. Most guides out there only help you set up the platform with basic settings and modules, which may not deliver satisfactory results.

This guide covers everything from preparing the platform host to the creation of the Windows 7 VM where the files will be run. I'm splitting this tutorial into two main parts: preparing the host, and preparing the virtual machines. Let us begin with the host.


Preparing the Host

You'll need a physical machine with a Linux distro. This machine must be able to run at least a single virtual machine, so something around 4 GB of RAM and a quad-core processor should do the job just fine, but the more, the better.

Install Ubuntu Server

Ubuntu Server was my OS of choice for installing Cuckoo; it is also the recommended OS on Cuckoo's website.

Install SSH

The first thing you should do is install an SSH server on the host. SSH will allow you to connect to this machine from anywhere on your network or the internet, which is useful if you want to finish this tutorial from another machine.

  • sudo apt-get install openssh-server
  • sudo service ssh restart


Install a graphic (XFCE) interface and RDP compatibility

I added this step because in my corporate network we mainly use Windows with the Remote Desktop app. It is not mandatory to install a GUI, but it helps a lot.


  • sudo apt-get install xfce4
  • sudo apt-get install xfce4-terminal
  • sudo apt-get install gnome-icon-theme-full tango-icon-theme
  • sudo apt-get install xrdp

The next steps set XFCE as the default GUI when connecting with the Remote Desktop app. Then edit the xrdp startup file and add the text below to it.

  • echo xfce4-session >~/.xsession
  • nano /etc/xrdp/
  • Type in the following:
 if [ -r /etc/default/locale ]; then
 . /etc/default/locale
 fi
  • sudo service xrdp restart


Install SAMBA

Samba will be used for directory sharing between Linux and Windows systems. You'll need a share on the host for transferring the VMs and any other files.

  • sudo apt-get install -y samba samba-common python-glade2 system-config-samba

Edit smb.conf to define the share: run the following command and add the text in the box below at the end of the smb.conf file.

  • sudo nano /etc/samba/smb.conf
  • Type in the following at the very bottom of the file (note the section headers: the global options and the share definition belong to different sections):
 [global]
 workgroup = WORKGROUP
 server string = Samba Server %v
 netbios name = ubuntu
 security = user
 map to guest = bad user
 dns proxy = no

 [share]
 path = /samba/share
 browsable = yes
 writable = yes
 guest ok = yes
 read only = no
  • sudo mkdir -p /samba/share (the share path must exist)
  • sudo service smbd restart

Install VirtualBox

Cuckoo needs virtualization software in order to automate its malware analysis functions. For this guide, I recommend VirtualBox, Oracle's open source virtualization solution.

  • sudo apt-get update
  • sudo apt-get install virtualbox-5.1
  • sudo apt-get install dkms

Install Cuckoo and Dependencies

This step installs the Cuckoo platform itself, as well as all its dependencies. Being modular means that Cuckoo depends on many other tools to work properly. I went through this process a few times and tried to make sure that I noted down all the tools needed.

  • sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y && sudo apt-get autoremove -y
  • sudo apt-get install python python-pip python-dev libffi-dev libssl-dev libxml2-dev libxslt1-dev libjpeg-dev
  • sudo apt-get install git mongodb python python-dev python-pip python-m2crypto libmagic1 swig libvirt-dev upx-ucl libssl-dev wget unzip p7zip-full geoip-database libgeoip-dev libjpeg-dev mono-utils yara python-yara ssdeep libfuzzy-dev exiftool curl openjdk-8-jre-headless
  • sudo pip install --upgrade pip

Install Cuckoo Modules

  • PDF Reports
    • sudo apt-get install wkhtmltopdf xvfb xfonts-100dpi
  • TCP Dump
    • sudo apt-get install tcpdump libcap2-bin
    • sudo chmod +s /usr/sbin/tcpdump
  • ClamAV for malware id
    • sudo apt-get install clamav clamav-daemon clamav-freshclam
  • Pydeep for fuzzy hashes
    • sudo pip install git+
  • Malheur for malware behavior analysis
    • sudo apt-get install uthash-dev libconfig-dev libarchive-dev libtool autoconf automake checkinstall
    • git clone
    • cd malheur
    • ./bootstrap
    • ./configure --prefix=/usr
    • make
    • sudo make install
    • cd
  • Volatility for memory analysis
    • sudo apt-get install python-pil
    • sudo pip install distorm3 pycrypto openpyxl
    • sudo pip install git+
  • PyV8 JavaScript engine for malicious JavaScript analysis
    • sudo apt-get install libboost-all-dev
    • sudo pip install git+
  • Suricata IDS
    • sudo apt-get install suricata
    • sudo cp /etc/suricata/suricata-debian.yaml /etc/suricata/suricata-cuckoo.yaml
    • sudo nano /etc/suricata/suricata-cuckoo.yaml
      • Search for “# a line based alerts log similar to Snort’s fast.log” by pressing “ctrl+w”
      • Set “enabled” to “no” for both “fast.log” and “unified2”
      • Find “file-store” and set “enabled” to “yes”
      • Set the fields “force-md5” and “file-log” to “yes”
      • Find “# Stream engine settings. Here the TCP stream tracking and reassembly” and set “depth” to “0”
      • Find “request-body-limit” and “response-body-limit” under “default-config” and set both to 0, without any unit
      • Find “vars” and, under “address-groups”, set “EXTERNAL_NET” to “any”
    • Update threats on open IDS rules
      • git clone
      • sudo cp etupdate/etupdate /usr/sbin
      • sudo /usr/sbin/etupdate -V
      • sudo crontab -e
        • choose 2
        • Add the line 0 22 * * * /usr/sbin/etupdate so it updates daily at 22:00, or modify the schedule at will;

Installing Cuckoo

For this step, you can either download the ZIP file from the Cuckoo website or download an improved and modified, but outdated, version from the git link mentioned below, where you can also check out the improvements.

Starting Cuckoo

Every time you restart the machine, you will have to re-create and start the virtual network interface. You will also need to start Cuckoo and the web service used for checking results and statistics and for submitting malware.

  • sudo VBoxManage hostonlyif create
  • sudo VBoxManage hostonlyif ipconfig vboxnet0 --ip <host-only ip> --netmask <netmask>
  • cd cuckoo-modified
  • sudo python cuckoo.py -d (start the Cuckoo platform)
  • cd cuckoo-modified/web
  • sudo python manage.py runserver XXX.XXX.XXX.XXX:YY (X should be the Linux machine IP address and Y the HTTP port)


Note: Cuckoo won't run properly on this first try, since we haven't set up any virtual machine as the sandbox yet.


In this post I covered everything you need to install and run Cuckoo, including an RDP interface for using the GUI with Windows Remote Desktop and a network share for reaching this host. The main difference between this guide and others on the web is that it is a compilation of my efforts to run Cuckoo in an enterprise production environment; as I stated before, most guides only help you install the basic functionality of the platform, which won't be as good as a fully geared Cuckoo.

I'll soon post the continuation of this guide, in which I'll help you create your sandbox VM with most of the tweaks needed to make it harder to detect when analyzing sandbox-aware malware.


Cuckoo – An open source malware analysis platform

In this post I'll be covering an awesome open source solution named Cuckoo.

Cuckoo is a very modular platform for managing sandboxes and automating malicious file analysis. Like any other open source platform, it is supported by a community and has most of its components developed by its supporters and users. Setting the lack of “professional” support aside, the platform itself is very stable; I can say that from being a tester myself, and mostly because I've set up a production environment with this platform at my current job.

Using the tool is very intuitive and user friendly, including the way the platform presents analysis results; you don't need to be a reverse engineering expert to understand what it says after a job is done. The not-so-easy part is, of course, the installation: being that modular, you will depend on other tools and their dependencies, which may give you a hard time.

A hosted instance of this solution is available online for anyone to use on the go; I highly recommend trying it out if you are interested in Cuckoo. My goal for this post was only to give a brief intro to this great tool; I'm working on a detailed tutorial of the installation process and will be posting it soon!


Pentesting Script – Guidelines

In this post I'll cover the basics of any pentest assessment: essentially a generic checklist of what to test and the tools you should be using. Obviously, these recommendations aren't the only way to do things; this is a compilation of best practices and experiences I've gathered over my years working as a security analyst. The best way will always be the one you get along with, mixed with the one that suits your needs.

To start off, this post covers five major phases that I generally run in every pentest assessment. This list doesn't need to be run exhaustively in every test, since each assessment has its own particularities. It works great as a guide during a test, treating the phases and their tasks as testing recommendations; as you may imagine, I'm not listing every single step and task, since my full testing script is very extensive. Starting with the first phase:

  • Phase 1 – Automated Testing

This phase is responsible for the heavy, extensive work during an assessment. Automated scanning tools should be used first, as they can set up the initial path you should take for more detailed testing and save precious time. These tools can be split mostly into two categories: the first focuses on web application testing, and the second on infrastructure testing. As for the tasks in this phase, here are the most important:

  • Run infrastructure automated scan. (Nmap, Nexpose, Nessus);
  • Run application automated scan. (Acunetix, Burp Suite, OWASP ZAP);
  • Run Nmap or similar tool to scan all TCP ports;
  • If any vulnerabilities are identified, verify whether public exploits exist (e.g., Metasploit);

Acunetix, Nexpose and Nessus are excellent commercial tools, but they can all be replaced by manual testing, open source tools and a lot of patience if you can't afford the licenses.
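To make the infrastructure side less abstract, here is a toy sketch of the TCP connect test that tools like Nmap build on. It only tells you whether a port accepts connections (real scanners do far more), and it should of course only be pointed at hosts you are authorized to test:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Check a few common ports on the local machine
for p in (22, 80, 443):
    print(p, "open" if port_open("127.0.0.1", p) else "closed/filtered")
```

A full TCP scan is this idea repeated over all 65535 ports, which is exactly why delegating it to a dedicated scanner saves so much time.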

  • Phase 2 – Manual Testing – Information Gathering

In this phase, where it applies, we look for relevant public information about the target of the assessment, such as e-mail addresses that may be used on the application, manuals, and any sensitive information indexed by the major search providers (Google and Shodan, for example). Any information found in this phase may help further testing at some point, especially the manual testing. The tasks of this phase are:

  • Provoke application errors and analyze responses for possible information leakage;
  • Identify potentially dangerous functionalities such as file uploads;
  • Attempt to identify possible hidden features of the application (e.g. Hidden debug / admin parameters or links);
  • Verify whether error pages can be influenced by user input parameters.

Any information that may seem useless at first should be stored for later analysis. I've had countless assessments where I ended up using something I initially thought I'd never use.

  • Phase 3 – Manual Testing – Authentication Testing

This phase's main objective is to test how the application handles authentication. Whether it uses manual credential input, an SSO (Single Sign-On) feature or none at all, we focus our efforts on finding vulnerabilities in the authentication process.

  • Verify whether HTTPS is used to encrypt credentials and / or sensitive data;
  • Test for user enumeration vulnerabilities;
  • Test for bypassing authentication by forced browsing;
  • Test for bypassing authentication by SQL Injection on the login page;
  • Test if password reset/reminder can be guessed or bypassed;
  • If possible, verify that all users have a unique user id;

The idea is to try to bypass the application's regular authentication, accessing it without any authentication at all or by using a profile you shouldn't have access to.
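The login-page SQL injection check above can be illustrated with a self-contained sketch (a toy in-memory database standing in for a real application):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pass TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

# What an attacker types into the password field of the login form
payload = "' OR '1'='1"

# Vulnerable pattern: user input concatenated straight into the SQL string
query = "SELECT * FROM users WHERE name = 'x' AND pass = '%s'" % payload
rows = conn.execute(query).fetchall()    # authentication bypassed: admin row returned

# Safe pattern: a parameterized query treats the payload as literal data
safe = conn.execute(
    "SELECT * FROM users WHERE name = ? AND pass = ?", ("x", payload)
).fetchall()                             # no rows: the payload is just a wrong password
```

This is exactly why the testing script treats every login field as a potential injection point; findings here are usually critical.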

  • Phase 4 – Manual Testing – Session Management Testing

Session management is how the application handles user sessions from the moment you first authenticate, for example after you provide the login credentials. The main idea in this phase is to check whether the application keeps good track of your privileges as an authorized user and of which actions you can take “inside” the application. Tasks in this phase can be summarized as:

  • Verify session timeout enforced in a reasonable amount of time;
  • Check for session fixation (not invalidating/re-issuing current session token after authenticating or forcing a known session ID on a user);
  • If the site is secure (HTTPS), check whether the session ID is passed over an unencrypted connection (HTTP) at any stage;
  • Check whether the session ID is sent in a GET request at any stage, and verify whether it is possible to force it into the GET request;
  • Verify that all pages that require authentication also contain a clear Logout button;
  • Check for weak obfuscation or encryption of cookie data;
  • Test if concurrent logins are possible.

Common exploits involving session management are privilege escalation, user impersonation and session theft; these can be achieved by manipulating session cookies and by modifying user IDs.
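Two of the checks above can be sketched in a few lines: session-ID rotation after login (the session fixation test) and inspection of cookie security attributes. This is only an illustration using Python's standard cookie parser; the header values are made up:

```python
from http.cookies import SimpleCookie

def session_rotated(pre_login_cookie: str, post_login_cookie: str) -> bool:
    """Session fixation check: the session ID must change after login."""
    return pre_login_cookie != post_login_cookie

def missing_flags(set_cookie_header: str) -> list:
    """Report absent Secure/HttpOnly attributes on a Set-Cookie header."""
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    morsel = next(iter(cookie.values()))
    missing = []
    if not morsel["secure"]:
        missing.append("Secure")
    if not morsel["httponly"]:
        missing.append("HttpOnly")
    return missing

# A cookie with no attributes is exposed to interception and script access:
weak = missing_flags("SESSIONID=abc123")
# A properly flagged cookie reports nothing missing:
ok = missing_flags("SESSIONID=abc123; Secure; HttpOnly")
```

In practice you would capture the Set-Cookie headers from the unauthenticated and authenticated responses and feed them straight into these checks.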

  • Phase 5 – Manual Testing – Data Validation & Business Logic Testing

This final phase targets vulnerabilities that your automated tool of choice may already have pointed out, such as SQL injection, XSS (cross-site scripting), HTML injection and many others. If none were flagged, manual testing should focus on the areas where the vulnerability type you are testing for normally appears: forms, fields and anywhere the user can input data. This phase is also the time for specific business logic testing of functionality that is critical to the business.

  • Attempt to subvert critical business logic (change transfer limits, access client A's data while logged in as client B, change other users' preferences, etc.);
  • Manually test all suspicious parameters (POST & GET parameters, SOAP headers, etc.) for (blind) SQL injection;
  • Generic user input validation testing;
  • Command injection;
  • Manually test all suspicious parameters (POST & GET parameters, SOAP headers, etc.) for cross-site scripting.
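For the parameter testing above, a small helper can generate one candidate URL per (parameter, payload) pair, mutating a single suspicious GET parameter at a time. This is only a sketch with two illustrative payloads and a placeholder target URL; a real engagement would use a much larger, context-aware payload set:

```python
from urllib.parse import urlparse, urlencode, parse_qsl, urlunparse

# Illustrative probe payloads: one classic SQLi string, one reflected-XSS string.
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>"]

def fuzz_urls(url: str):
    """Yield one test URL per (parameter, payload) pair."""
    parts = urlparse(url)
    params = parse_qsl(parts.query)
    for i, (name, _value) in enumerate(params):
        for payload in PAYLOADS:
            mutated = list(params)
            mutated[i] = (name, payload)       # replace one parameter's value
            yield urlunparse(parts._replace(query=urlencode(mutated)))

urls = list(fuzz_urls("http://target.example/search?q=shoes&page=1"))
# 2 parameters x 2 payloads -> 4 candidate requests to send and inspect
```

Each generated URL is then requested and the response inspected for injection symptoms (SQL errors, reflected markup, timing differences).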

Remember that the best way to do any pentesting is to follow good practices and, most importantly, to do what works for you: play to your strengths, whether that is programming, thinking outside the box or, if everything else fails, reading a lot on Google. Check out the OWASP Testing Guide that I linked in my previous post; it should serve you well as a starting point.

Vulnerability Management pt.2 – Detecting Vulnerabilities

This is the second part of my previous post. In this post I'll be talking about the process of detecting vulnerabilities in applications and infrastructure.

When doing a vulnerability assessment, I usually split it into two sections: application findings and infrastructure findings. The reason is that they're two different things, and in an enterprise-wide environment you will certainly have different teams taking care of these resources. It is very important to address the issues to the right people if you want a functional vulnerability management process.

Keep in mind that automating this detection process is very important; taking care of large environments is an arduous task and using tools in your favor is key, even if by doing so you may reduce the level of detail in what you find. Summarizing the tasks of this step, we can list the main objectives as:

  • Setting up automated, tool-assisted scans for both infrastructure and applications;

Tools like Nessus, Acunetix, Nmap and many others should be used to automate the vulnerability assessment as much as possible.
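Automating these scans can be as simple as a scheduled job that runs a scanner and stores its output for the reporting step. As a minimal sketch (with placeholder targets and an assumed local Nmap install), building the command separately from running it keeps the schedule easy to audit:

```python
# Sketch: building a repeatable Nmap command for a scheduled scan.
# The target list and output path are placeholders; the command is only
# constructed here, not executed.

def build_nmap_command(targets, output_file):
    """Compose an Nmap service/version scan with XML output for later parsing."""
    return ["nmap", "-sV",        # probe open ports for service/version info
            "-oX", output_file,   # XML output feeds the reporting step
            *targets]

cmd = build_nmap_command(["10.0.0.0/24", "10.0.1.5"], "scan-results.xml")
# Hand `cmd` to subprocess.run(cmd) from your scheduler of choice (cron, etc.).
```

The XML output (`-oX`) is the important part: it is what lets you parse results into your vulnerability database instead of re-reading console output by hand.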

  • Scheduling manual testing for business-critical environments;

Systems that are critical to the business, or that are sensitive to scanning tools, must be treated differently. Whether the point is to do deeper testing or simply to avoid bringing them down, you must list them and be aware of them.

  • Maintaining an up-to-date security newsletter base;

Subscribe to vendor and security newsletters to stay on top of new critical patches and those nasty zero-day vulnerabilities. CVE offers a free newsletter subscription.

  • Safely exploit critical vulnerabilities to find out their full potential.

Manually test any critical or potentially critical vulnerability to find out its full potential. Some vulnerabilities may lead to access to the company's network and other systems; it's a pay-for-one-get-two type of problem.

As for the tools, you will find a lot of options in this area. The free ones mostly do only specific things, and you may run into trouble trying to fill the gaps, as you will find yourself running multiple tools to achieve one goal. Of course, there are a few paid professional tools that will do the job just fine and also manage all the results in one console.

With your favorite tools in hand, automated scans running and results coming in, you may find yourself in a sea of documentation and vulnerabilities. At this point you'll realize that you probably won't be able to handle all the reports and, most importantly, to relate all the information. But fear not: these problems are addressed in another section of the vulnerability management process, GV3 Manage Vulnerabilities. Software like Nessus, Nexpose and Acunetix is usually the first pick for automated scanning; these are top commercial tools used worldwide and, personally speaking, the best around.
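One practical first step for relating that flood of information is to merge the findings from different tools and de-duplicate them by host and vulnerability ID. The sketch below assumes an illustrative, simplified finding format, not any real scanner's export schema:

```python
# Sketch: merging findings from multiple scanners, de-duplicating by
# (host, vulnerability ID). Field names are illustrative assumptions.

def merge_findings(*reports):
    merged = {}
    for report in reports:
        for finding in report:
            key = (finding["host"], finding["vuln_id"])
            # Keep the highest severity reported by any tool for the same issue.
            if key not in merged or finding["severity"] > merged[key]["severity"]:
                merged[key] = finding
    return list(merged.values())

nessus = [{"host": "10.0.0.5", "vuln_id": "CVE-2014-0160", "severity": 9}]
acunetix = [{"host": "10.0.0.5", "vuln_id": "CVE-2014-0160", "severity": 7},
            {"host": "10.0.0.9", "vuln_id": "CVE-2012-1823", "severity": 8}]

findings = merge_findings(nessus, acunetix)  # 2 unique findings, not 3
```

Even this simple key-based approach cuts duplicate tickets dramatically when two or more scanners cover the same hosts.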

Besides having the tools, it's important to define what matters most for you to look after. For example, if it is important to your organization that little to no information about its infrastructure is published, your analysis should focus on the footprinting step of the vulnerability assessment. I strongly recommend reading OWASP's Testing Guide, a huge document that covers all the steps you should take and what you should be looking for.

This is one of the most complete and extensive guides I have ever found; it will surely provide all the direction you'll need to start testing.

After setting up your testing script, scheduling the automated testing and mapping the systems that deserve special attention, you are ready to move forward and map the vulnerable systems. Relating all the data you collect and prioritizing the systems that the responsible teams should focus on fixing is a whole different subject, and the main deliverable of the vulnerability management process, but that is material for a new post.
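Prioritization deserves its own post, but the core idea can be sketched now: combine technical severity (say, a CVSS base score) with the business criticality of the asset. The weighting below is an illustrative assumption, not a standard formula:

```python
# Sketch: prioritizing vulnerable systems by severity x business criticality.
# The multiplication is a deliberately simple illustrative weighting.

def priority(cvss: float, asset_criticality: int) -> float:
    """asset_criticality: 1 (low impact) to 5 (business critical)."""
    return cvss * asset_criticality

# (system, CVSS base score, business criticality) - placeholder data
queue = sorted(
    [("intranet-wiki", 9.8, 2),
     ("payment-gateway", 6.5, 5),
     ("hr-portal", 7.2, 3)],
    key=lambda item: priority(item[1], item[2]),
    reverse=True,
)
# payment-gateway (32.5) outranks intranet-wiki (19.6) despite its lower CVSS
```

The point of the example: raw scanner severity alone would put the wiki first, while the business context puts the payment system first.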

To conclude this post, I would like to point out that tools may seem very important to a successful vulnerability management process, but in reality the most important thing such a process should deliver is identifying the vulnerabilities that all the effort must be focused on, based on their risk and the company's goals.

Vulnerability Management pt.1 – A custom approach

Companies nowadays must face an ever-growing risk named cybercrime. From the very first moment a company publishes its systems or resources on the internet, for the world to see, it exposes itself to threats like cybercrime, hacktivism or plain malicious intent. Vulnerability management should allow an organization to understand, in a continuous fashion, the risks associated with the vulnerabilities in its assets. The goal is to identify and mitigate vulnerabilities in its IT systems so the organization can prevent attackers from causing damage.

For this post I'll be writing about a relatively new subject, at least for me and for most companies in Brazil, and maybe in South America as well. As most people know, or should know, the methodologies and good market practices out there are not a silver bullet; they are very useful as guidelines that you (or your consulting company) can use to draw up a customized, efficient process that fits your needs.

Based on my experience, study and years as a security consultant/analyst, I started drawing and developing a vulnerability management cycle, reading from many published good management practices, including sources like NIST and SANS. This work was also my graduation thesis, which was accepted and approved.

To start off, I'll be quoting some basics about vulnerability management as told by SANS in one of its publications. A vulnerability management process typically has the following steps or fields:

  • Asset Inventory
  • Information Management
  • Risk Assessment
  • Vulnerability Assessment
  • Reporting and Remediation Tracking
  • Response Planning

Each field has its own challenges and good practices, which are not my objective in this post, but if you are interested I definitely recommend reading “Vulnerability Management: Tools, Challenges and Best Practices” by SANS. These fields are the baseline for a successful vulnerability management process and therefore must be covered.

To illustrate the process itself, SANS uses the following image:


Moving to the main objective of this post, I’ll be presenting one of the fields which I stressed the most during this project and the complete overview of the custom vulnerability management approach proposed. Before I move forward, here’s a little background from my current company and the environment that I have to deal with:

“We are a multi-business, multi-national enterprise, a holding of 5 different companies ranging from energy (gas and petrol) to retail and logistics, with 10,000+ employees. My team and I are responsible for the information security processes and risk analysis for all 5 businesses.”

With that, I think it's fair to say that our network environment is pretty big and complex, which thoroughly justifies the need for such a process.

My goal was to develop a flow of processes that could be executed repeatedly and would feed back into itself, something like the PDCA model and other models that aim for continuous improvement. The following cycle was developed based on the good practices mentioned above and on my real-world experience, also taking into account the company's needs and our GRC (Governance, Risk and Compliance) objectives.

GV0-Fluxo Macro-v1.0-EN

This cycle is the main overview of the vulnerability management process; it is divided into 3 basic processes, as shown above:

  • GV1 Detect Vulnerabilities;
  • GV2 Report;
  • GV3 Manage Vulnerabilities.

Each one of these processes has its own set of activities and tasks to be completed before moving to the next step. For GV1, the key activities are:

  • Assess systems, applications and infrastructure;
  • Program automated security tests, tool-assisted;
  • Safely exploit critical vulnerabilities, checking their full potential;
  • Vendor and vulnerabilities newsletter analysis.

It is crucial that this process gets as automated as possible, since it requires the analysis of many applications and infrastructures. Recurrence is also very important: as time goes by, new threats and vulnerabilities will be spotted in the wild, and consequently new risks will appear.

Moving to the second step, GV2, the main activities are:

  • Develop and maintain a report standard;
  • Document and inform the findings;
  • Keep stakeholders aware of the known risks;
  • Expectation alignment, risk acceptance, remediation plans, etc.

It is important to stay up to date with reporting findings and to make sure that the stakeholders involved are well aware of the risks and impacts the vulnerabilities may present. This is also the time to relate and compile all the information about the vulnerable asset, using the asset inventory and vulnerability databases. A callback to GV1 may occur; it should happen whenever the findings may have changed, for example when the stakeholders have taken some mitigation action and the vulnerability must be reevaluated.
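A minimal data model for a reported finding helps keep these activities consistent. The sketch below uses illustrative field names (not any real tool's schema) and shows the callback to GV1 as a simple status change:

```python
# Sketch: a minimal GV2 report entry tying a finding to its asset and
# stakeholder. Fields and status values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    vuln_id: str
    risk: str              # e.g. "critical", "high", "medium", "low"
    stakeholder: str
    status: str = "reported"   # reported -> accepted | remediating | fixed

    def mitigated(self):
        """Mitigation was applied: send the finding back to GV1 for retest."""
        self.status = "retest"  # the GV2 -> GV1 callback described above

f = Finding("erp-server", "CVE-2017-0144", "critical", "infra-team")
f.mitigated()  # finding now queued for reevaluation
```

Even a structure this small makes it possible to answer, per asset, who was informed, what they decided, and which findings still need a retest.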

For the GV3 step, the key tasks are:

  • Document, manage and monitor vulnerable assets;
  • Keep the risk acceptance or remediation plans on track and up to date;
  • Study and apply vulnerability remediation options: firewalls, IPS, etc.;
  • Focus efforts in mitigating critical vulnerabilities.

This step is supposed to organize the change requests, incident handling and risk management related to vulnerabilities; the idea is to keep track of the risks and keep people aware in a timely manner. For example, if a given incident's root cause is a previously found vulnerability, were the stakeholders aware of the issue and the impact it could lead to? Did they accept the risk and keep the vulnerability for a later study? Whatever the answer, it is important that the information security team does its job of safeguarding the company's IT assets, informing the stakeholders that there are vulnerable assets and that the risks are real.
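To keep remediation plans on track in a timely manner, it helps to attach an SLA window to each severity and flag anything overdue. The windows and dates below are illustrative assumptions, not a standard:

```python
# Sketch: flagging overdue vulnerabilities per severity, so GV3 can chase
# the responsible teams. SLA windows are illustrative assumptions.

from datetime import date, timedelta

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def overdue(found_on: date, severity: str, today: date) -> bool:
    """True when the remediation window for this severity has passed."""
    return today > found_on + timedelta(days=SLA_DAYS[severity])

# A critical finding reported 10 days ago has blown its 7-day window:
late = overdue(date(2015, 3, 1), "critical", date(2015, 3, 11))
```

Running this check over the finding inventory gives GV3 a concrete, dated list of items to escalate, instead of a vague sense that "things are pending".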

For this first part, I've shown just a quick preview of this work; I'll dig in with a more detailed post in part 2, talking about the GV1 process itself.

Source material:

  • SANS – Vulnerability Management: Tools, Challenges and Best Practices
  • SANS – Implementing a Vulnerability Management Process