Malware and security metrics

At some point in your security career, you have probably been asked the most difficult question a specialist can face: how secure are we (the company)? It is a tricky question that can leave you struggling to figure out how to respond. Numbers matter to any CEO, and it is no different when talking about security; that’s why it is important to have data when requesting more resources or showing that you have been doing a good job. Consistent, security-related data can be very hard to gather in a compiled and organized way, and in this post I’ll talk about how we can use malware-related data to get interesting numbers.

System Center Configuration Manager (SCCM) is a solution that I’d say the vast majority of mid-sized to large companies have (or should have) in their Microsoft domain. Among many other features, it facilitates workstation and server administration, allowing mass deployment, monitoring and compliance checks. As you may know or suspect, its database holds a lot of information, ranging from installed software, updates and last logged-on user to malware-related data, the latter if and only if you’re using SCEP (System Center Endpoint Protection).

I’ve been working lately with the virus/malware data that SCEP generates and SCCM compiles into its database. My idea isn’t to discuss how good or bad SCEP is as an anti-malware solution, but to work with what I have and turn all this data into useful information that points to the most relevant security risks, rather than just a ton of alerts from a keygen or crack being quarantined on a user’s thumb drive. My goal with this post is to open your mind to the possibilities we have (if you have at least the same or similar resources as I do) through some data analysis.

Some interesting security indicators that reflect how your organization has been handling malware infections are:

  • How many malware infections occurred in a given period; having this information broken down by location would be even better;
  • How many malware infections required manual/admin intervention to be resolved, i.e. when the anti-malware solution isn’t capable of cleaning the machine by itself;
  • How many malware infections resulted in a security incident that disrupted the business in some way;
  • How long it takes to first remove an infected machine from the network and, ultimately, re-image it if needed;
  • Whether all anti-malware agents are up to date and executing security scans in a timely manner; this indicates the health of the solution;
  • How many APTs (advanced persistent threats) or high-risk infections occurred in a given period; recurrent infections, malware family names and disabled anti-malware agents may indicate some kind of persistent malware.
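As a toy illustration of how a few of these indicators could be computed, here is a minimal Python sketch. All field names and values below are hypothetical stand-ins for whatever your anti-malware export actually contains, not real SCCM/SCEP columns:

```python
from collections import Counter
from datetime import date

# Hypothetical export of anti-malware events; field names are illustrative.
events = [
    {"host": "WS001", "threat": "Win32/Keygen", "day": date(2017, 5, 2),
     "cleaned_automatically": True},
    {"host": "WS002", "threat": "Win32/Ramnit", "day": date(2017, 5, 3),
     "cleaned_automatically": False},
    {"host": "WS002", "threat": "Win32/Ramnit", "day": date(2017, 5, 10),
     "cleaned_automatically": False},
]

def period_metrics(events, start, end):
    """Compute a few of the indicators above for a given period."""
    in_period = [e for e in events if start <= e["day"] <= end]
    return {
        "total_infections": len(in_period),
        "manual_intervention": sum(not e["cleaned_automatically"] for e in in_period),
        # Hosts infected more than once may point to persistent malware.
        "recurrent_hosts": [h for h, n in Counter(e["host"] for e in in_period).items() if n > 1],
    }

print(period_metrics(events, date(2017, 5, 1), date(2017, 5, 31)))
```

In practice the `events` list would come from a query against the SCCM database or an exported report, but the aggregation logic stays the same.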

Each one of these measures can be a good challenge to develop, and many security vendors will promise that their solution easily provides this type of data, but it may not be that simple. The intelligence to build these relations isn’t tied to a technology; it’s a matter of knowing what matters most to you and what kind of information you have, and only then will it be possible to make the links between each piece of information. That’s why it’s important to first do your homework of knowing what you have and what is important, before buying solutions or developing anything.

Here are some examples of data that can be very useful when building relations regarding malware:

  • Date of last scan;
  • Date of last update;
  • Date of last infection (historical data can be very useful);
  • Date of operating system installation;
  • Date of last login and reboot;
  • Indication that all anti-malware modules are operational;
  • Anti-malware solution infection status;
  • Infection path (indicating the infection root);
  • User who caused the infection or has the most console runtime.

You can then relate data to flag your high-risk infections. For example, relate the last scan, last update and operational status to build a compliance health check for the anti-malware solution across all domain machines. Another example is relating the infection path to a probable root cause: if it comes from a thumb drive (F: or G:), it may indicate that the user is infecting the machine, and you can use these numbers to run a security training or awareness campaign. A last one could be the infection status against the last infection date, indicating that the anti-malware solution is cleaning the machine but the malware is persistently re-infecting it.
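Sketching the first example in code: a minimal compliance health check, assuming you have already pulled each machine’s last scan date, last signature update date and engine status out of the database (the field names below are made up for illustration):

```python
from datetime import date, timedelta

# Illustrative per-machine status records; these are not real SCCM field names.
machines = [
    {"host": "WS001", "last_scan": date(2017, 5, 28), "last_update": date(2017, 5, 29), "engine_ok": True},
    {"host": "WS002", "last_scan": date(2017, 4, 1),  "last_update": date(2017, 5, 29), "engine_ok": True},
    {"host": "WS003", "last_scan": date(2017, 5, 28), "last_update": date(2017, 5, 29), "engine_ok": False},
]

def non_compliant(machines, today, max_age_days=7):
    """Flag machines whose anti-malware agent looks unhealthy:
    stale scans, stale signatures or a non-operational engine."""
    limit = today - timedelta(days=max_age_days)
    return [m["host"] for m in machines
            if m["last_scan"] < limit or m["last_update"] < limit or not m["engine_ok"]]

print(non_compliant(machines, today=date(2017, 5, 30)))  # prints ['WS002', 'WS003']
```

The threshold of seven days is just an example; pick whatever your own policy defines as an acceptable scan/update age.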

Your current technology won’t be the barrier here; whether you have SCEP or any other market solution, the goal is to have the data and know what you’re looking for. Of course, the technology will matter when it comes to compiling the information for you. Most importantly, never assume you have it all covered: security risks, malware and hackers change every day, and you must adapt to these changes. Having efficient and relevant measures will help your organization adapt to emerging threats, as well as answer that “question”.




User behavior analytics – How to use data analytics for security

You have probably already heard about new trends in how security is evolving: instead of working reactively and detecting malware signatures on each workstation, for example, it should work by observing how your users behave inside your corporate network, keeping an eye on malicious actions like trying to connect directly to your main AD or executing files without sufficient privileges.

This way of thinking about security is supported by technologies named UBA (User Behavior Analytics) and UEBA (User and Entity Behavior Analytics). Both are more or less the same thing; the difference is that UBA only worries about user behavior, while UEBA also looks at entities like hosts, network devices, etc. In today’s world, what matters isn’t how impenetrable your network is but how fast you can detect an incident, react and contain it. You must work on the premise that you will eventually be hacked, sooner or later, and these solutions will assist you better than any firewall.

Most security vendors will say that this is the approach of the future, but how much will this kind of intelligence and technology cost and, more importantly, is it worth it? Cutting-edge appliances and cloud services tend to have a very high price, and if you think about monitoring everyone’s behavior, you can certainly add to that bill a huge amount of data coming and going.

This post is about what you can do to bring more intelligence to the analysis of the information you already have, so you can increase your maturity in detecting malicious behavior on your network without having to invest enormous amounts of money. As an example, I’ll use some analysis I developed inside my company, looking at all the data we had about the countless malware infections SCEP detects daily, ultimately compiled and provided by SCCM. If you have a SIEM technology, you can go even further when analyzing data, but that is material for another post.

So, here’s the scenario and what you need to have so you can get going:

  • Aggregated and organized data regarding malware infections detected in your environment
  • A centralized way to consult and display this information in a structured way, like a BI tool
  • Relevant information, for example:
    • The malware family/name
    • Hostname and last logged-on user
    • Time and day of occurrence
    • The path where the malware was first detected/executed
    • Whether it was successfully removed or needed a manual action like a reboot

You can now think of building relations and linking information that is useful to you. Generally speaking, as a security manager you would definitely be interested in knowing the source of your malware infections; they could be coming from a malicious user or maybe from system vulnerabilities. Knowing this is key to directing your already limited resources to mitigate risks. Check below some examples of relations that will help you do so:

As shown above, we are creating links between pieces of data that may seem useless when taken separately, but which take on a whole new meaning when put together. To automate the identification of possible root causes, we could create measures indicating the most probable root cause of infection in a given period of time. This data could then be used to drive investments in areas like user awareness, policies and standards, or even to acquire new technology. Of course, we are targeting only infections that we can detect, but it’s a way to at least have better knowledge of your own environment. This data could also feed an incident management process, saving investigation time by suggesting a probable root cause and raising alerts about the risk of a malware outbreak.
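As a rough sketch of that root-cause automation, here is a hypothetical path classifier in Python. The drive letters and folder fragments are assumptions for illustration; you would tune them to how detection paths actually look in your environment:

```python
def probable_root_cause(path):
    """Map an infection path to a probable root cause.
    The drive letters and folder names below are illustrative assumptions."""
    p = path.lower()
    if p.startswith(("e:", "f:", "g:")):          # removable media
        return "user-introduced (thumb drive)"
    if "temporary internet files" in p or "\\downloads\\" in p:
        return "web download"
    if "\\appdata\\" in p or "\\temp\\" in p:
        return "dropped by another process"
    return "unknown"

detections = [
    r"F:\games\keygen.exe",
    r"C:\Users\jdoe\AppData\Local\Temporary Internet Files\evil.js",
]
for d in detections:
    print(d, "->", probable_root_cause(d))
```

Aggregating the classifier’s output over a period gives you exactly the kind of measure described above: which root cause dominates, and therefore where awareness, patching or new controls would pay off most.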

I feel safe saying that if you have the data and the tools to dig into it, you can transform it into information and bring intelligence and facts to the actions you take inside your organization. For higher management, this is key when requesting investments and, most importantly, for doing a good job as a security professional.

As always, feel free to share feedback and your experiences about this subject.

A new “hacking” trend – Mining Bitcoin on the comfort of your browser

You may have already heard about Bitcoin or some other cryptocurrency at work, in talks with friends, in the news or on the internet, so I suppose this subject isn’t new to you. If it is, make sure to check out some good sources about it at the end of this post. For now, I’ll be talking about something that I perceive as a new “hacking” trend and maybe even something that companies could use to generate income (if done legally).

Mining Bitcoin is something that happens backstage of the whole Bitcoin subject; people do it to generate their own Bitcoins, or at least they used to.

  • So, what is this all about? How does one mine Bitcoins?

The short answer: when people say that someone is mining Bitcoin, it basically means that the person or group of people is exchanging computational power for Bitcoins.

  • Why would I exchange computational power for Bitcoins?

The technology behind Bitcoin consists of a huge network of computers; each computer in this network processes transactions made with Bitcoin, something like a real-life broker. So, imagine that you’re buying something from a friend and paying with Bitcoins; you send the amount of 1 Bitcoin to his wallet. For the Bitcoin network to complete the transaction, it must be processed and validated by the computers on the network; this guarantees the transaction’s uniqueness and safely “registers” it on the network.

If you opt to join Bitcoin’s processing network, you will be able to execute and register these kinds of trades, receiving a “salary” for doing so. This is, in simple terms, how you “mine” Bitcoins.
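To make the “exchanging computational power” idea concrete, here is a deliberately oversimplified Python sketch of proof-of-work, the puzzle miners race to solve. Real Bitcoin mining hashes 80-byte block headers against an astronomically higher difficulty; this toy version only shows why mining burns CPU cycles:

```python
import hashlib

def toy_mine(block_data, difficulty=2):
    """Extremely simplified 'mining': find a nonce so that the block's
    SHA-256 hash starts with `difficulty` zero hex digits. The higher the
    difficulty, the more hashes (and electricity) it takes on average."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = toy_mine("alice pays bob 1 BTC")
print(nonce, digest)
```

Raising `difficulty` by one multiplies the expected work by 16, which is essentially why home computers stopped being competitive: the network keeps raising the bar.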

  • Is this profitable?

If you are willing to put your home computer to work while you’re at the office, the short answer is no. Nowadays, there are so many computers “mining” Bitcoins that it is totally impractical to use home computers for it; the electrical power you will use to keep your computer running will surpass the amount of money you’ll make.

  • So, why are hackers using my browser to do that?

That’s the golden question and the answer is scalability.

Imagine Facebook: how many people go to Facebook every day and stay there for a while? A lot… Now, think about my previous statement, where I said that a single home computer won’t be able to mine enough Bitcoins to be profitable. So, what about 1 million computers working together at a given hour/minute/second?

That sounds like a lot of computational power, right? And that’s exactly how it’s being done, not only via browsers but also via computer viruses.

  • How do they do that?

In a way most people won’t even notice, “hackers” add a piece of code to their own or to stolen/hacked websites, so when someone opens the site, the code starts using the visitor’s computing power to mine Bitcoins for them through the browser. They usually set the code to use just some of your processing power, so most users won’t notice it, and it stays there consuming your computational power until you leave the website or close the browser completely.

One of the first and most famous sites to do this was the torrenting site The Pirate Bay. A few pages of the site were set to mine Bitcoins using its visitors’ processing power. The site said they were testing a way to generate revenue from the people who use its services, but they didn’t alert users about it and kept doing it until someone noticed and brought it to the news.

Now you are probably asking yourself: how can I detect and avoid this? For now, you pretty much have two ways of doing it:

  • Stay alert for sudden loss of computer processing power when visiting websites;
  • Block resources used by your browser to load and run web pages, more specifically JavaScript.

Right now may not be the best time to worry about it, since this technique is pretty new and most sites that do it are the “underground” ones, but it is indeed very interesting to be ready and aware of what comes next.

So, just like me, you may be wondering: will this new trend become popular among hackers? Will my favorite website start doing it for additional revenue? These are questions that only time will be able to answer.

At the end of the day, this could probably be done legally and become an alternative to those annoying ads. In my opinion, this won’t be a problem if visitors and customers are warned about what’s going on with their processors; after all, what’s bad about sharing a little processing power in exchange for accessing your favorite content?

As always, thanks for your time reading this and feel free to share any comments about the subject!

References and more content:



Planning your Infosec strategy with ISO 27000

This post is about how to establish a strategy to properly implement the security controls your company needs most, based on the global security standard ISO 27000. First things first: if you have never heard of ISO 27000, here’s a short explanation:

“The ISO/IEC 27000 family of standards helps organizations keep information assets secure.

Using this family of standards will help your organization manage the security of assets such as financial information, intellectual property, employee details or information entrusted to you by third parties.

ISO/IEC 27001 is the best-known standard in the family providing requirements for an information security management system (ISMS).” Source:

In other words, ISO 27000 is a series of documents that define, suggest and explain what you, as a manager, need to worry about when defining your company’s information security strategy, using security controls to mitigate risks. There are a few other frameworks that provide guidance on this matter, like the NIST Cybersecurity Framework and the SANS Critical Security Controls, but in this post I’ll be referencing ISO. ISO is also the worldwide standard for most companies and is recognized as the set of best practices around information security.

My point here isn’t to get into every single detail of the standard but to bring awareness to everyone who is seeking directives, good practices or even a starting point for their company. From small to big businesses, ISO’s directives can be applied based on your company’s needs.

Speaking of business focus and needs, this should be the first thing to keep in mind before drawing up your strategy: knowing what your business does and what it is willing to do is key.

  • Get to know your business needs, worries and how flexible it is to changes in the short-medium term;

As I stated before, this step is key because your business will be very inflexible, or even intolerant, toward changes that impact its operations or sales. You can even end up in a complicated situation by trying to force safe behaviors on your company, so it’s very important to work with your business, not against it.

  • Summarize the main risks that your business is exposed to;

Map the risks your company is exposed to. For example, if the core business of your company is transporting goods, I would say the main risks are related to goods transportation, storage and inventory (in a very simplistic analysis). You should then check for controls that mitigate these risks.

  • Check the ISO (or any other framework) for suggested security controls regarding the high risks you’ve mapped before;

Using the example above, ISO has a few directives for physical access controls that may make sense in this business scenario. If you check the directives from control group A.11, for example, you can see that there are controls for security perimeters, physical entrances, protection against external threats, etc. You can always look to other market standards if ISO doesn’t cover all the gaps; combining more than one standard, like ISO, PCI and SOX, will always increase your security maturity.

  • Start with the quick wins first, anything that is easy to implement, any controls that just need some tweak, security policies and standards or even security awareness;

Based on your maturity ruler (all ISO controls), map the quick wins and show how much progress could be made with them. Showing your board how they can mitigate risks with quick and cheap actions is a good way to acquire their support. Once the board has seen how valuable these risk-mitigating actions are, it will be a lot easier to move on to the hard ones later.

  • Plan the rest of your actions accordingly. Invest your own and the company’s resources in actions that will bring valuable results for what the business is worried about;

For a company that doesn’t have, or doesn’t see, the IT department as a core resource for the business, it is not worthwhile to implement all of ISO’s controls aiming at a possible certification.

In the end, remember that 100% secure will never be possible, and at some companies even 50% secure can be a real challenge; you should be realistic about the current situation and about what is feasible to do. I’m summarizing below some key success factors you should note before creating your strategy:

  • Align your strategy with the business. Define how much compliance with the framework is enough for your company to mitigate the main risks;
  • Don’t push long-term cultural changes in short periods of time. Losing stakeholders or sponsorship can end your strategy and even your position;
  • Work on the quick wins first and show the results. In other words, use the 80-20 strategy: fix 80% of the problems with 20% of the effort/resources;
  • After the quick wins, show how far your company could go in terms of security, risk mitigation and money saved if more resources were invested in the security plan;
  • Spread security awareness and mentality. The more people you have thinking about security, the more attention and sponsorship your work gets.

At the end of the day, by following these tips, planning your strategy aligned with your company’s reality and going one step at a time, your job as information guardian should be done successfully. Companies need to follow technological evolution in order to keep up with the market’s pace, and the business is always looking for profit; it’s your job to keep their feet on the ground and guide them while minimizing security risks.

Cyber hygiene and security awareness programs

Security-sensitive companies (nowadays almost every single one connected to the internet) spend a lot of manpower and, most importantly, financial resources trying to keep their infrastructure and users safe from the most recent threats the internet has to offer. This means spending thousands of dollars on the most recent technology, training people and monitoring the environment. The irony of it all is knowing that all this effort and investment could come down at once with a single click; of course, the more security layers you have, the lower the chance of someone clicking or running something suspicious on his or her computer.

Cyber hygiene comes into place when we look for an answer to this matter. It can be defined as the individual’s responsibility to maintain safe behavior in his or her actions at the workplace and even at home. Safe behavior includes, for example, checking whether an e-mail is legitimate or expected before opening it or downloading any attachments, and not providing your personal information, like passwords, to anyone. Unfortunately, this kind of behavior isn’t present in most companies around the world, and that’s the problem.

Most users have the notion that the company alone is responsible for keeping their information, work tools (such as PCs) and everything work-related safe and sound from threats. Because of that, people usually don’t think about or critically analyze what they are doing before it’s done, for example opening a file that came by e-mail or clicking a link. Others may say that fast-paced day-to-day tasks leave them no time to stop and analyze everything.

Regardless of the reason, the truth is that everyone should approach their day-to-day work tasks the same way they act on the street with strangers. You usually don’t accept anything offered by someone you have never seen or who looks suspicious on the street, and you don’t follow people around when they call you about an irresistible offer at the store around the corner, do you?

So, how should we reach these people and pass on some knowledge about cyber hygiene? It’s crystal clear that people who don’t care about this kind of subject won’t invest much time or attention in it, and making them go through long training sessions or read extensive documentation won’t bring much result. That’s where security awareness takes place.

Successful security awareness programs should deliver the following:

  • Relevant information for the people you are trying to inform;
  • Quick and easy to understand directives (tips);
  • Illustrative images regarding the messages you send;
  • Gamification of security awareness is also a plus if possible;
  • Up to date subjects, latest information leakages, attacks or trends;
  • Physical actions; work-desk habits and behaviors that take place in the physical world should also be included;
  • Recurrence and knowledge evaluation.

Unfortunately, there’s no silver bullet for security awareness programs, but there are directives you should follow and adapt to your reality. The goal is the same for any program: to make people think and question before taking any action.

I would recommend starting small, with informative e-mails or maybe phishing campaigns, and measuring the results of those actions to check whether they are being effective. It’s also very important to be aligned with your Human Resources department, as they have the expertise to talk to employees and can perhaps require them to take the awareness courses or tests.
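Measuring those campaigns can be as simple as tracking a few rates over time. A tiny sketch with made-up numbers (in practice they would come from your campaign tool’s report):

```python
# Toy sketch: measuring a phishing simulation. All numbers are illustrative.
def campaign_stats(sent, opened, clicked, reported):
    return {
        "open_rate": opened / sent,
        "click_rate": clicked / sent,    # you want this to fall over time
        "report_rate": reported / sent,  # you want this to rise over time
    }

print(campaign_stats(sent=200, opened=120, clicked=30, reported=10))
```

Comparing these rates campaign over campaign is what tells you whether the awareness work is actually changing behavior.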

There’s an awesome free resource for this kind of awareness, but it is from a Brazilian entity with all its content in Portuguese; if you can understand Brazilian Portuguese, I strongly recommend checking that site out. As soon as I find anything like it in English, I’ll be sure to share it with you all.

As usual, feel free to comment below and to get in touch with me.

Hardening HTTPS connections on your server

In this post, I’ll be talking about a very common vulnerability in HTTPS-encrypted connections and how to fix it. Most web servers and services that use HTTPS don’t worry about hardening their ciphers and protocols.

The main problem is that encryption protocols and ciphers become obsolete over time, and new vulnerabilities arise from their deprecation. For example, SSLv2 and SSLv3 have long been considered vulnerable, and yet you can still find many services that use these protocols.

Getting to what matters, this guide is about enabling only strong, compliant protocols for your encrypted connections on Windows and on some web services on Linux. The procedures in this guide may need to be tweaked to work properly in your environment, as I can’t predict all the possible variations. Here are some benefits of applying this hardening guide:

  • It will remediate attacks known as DROWN, Logjam, FREAK, POODLE and BEAST;
  • Insecure ciphers and protocols will be disabled, such as SSL 2.0, 3.0, PCT 1.0, TLS 1.0, MD5 and RC4;
  • Only TLS 1.1 and TLS 1.2 protocols will be accepted;
  • These changes are compliant with PCI 3.1 and FIPS 140-2 practices;
  • Old web browsers, such as Internet Explorer <7.0, may no longer be able to establish HTTPS connections.

Note: it’s highly recommended to use a test environment before applying any change to production.

Windows environments

There’s a tool called IIS Crypto that will do basically everything for you; you can find it here:

  • IIS Crypto is a free tool that gives administrators the ability to enable or disable protocols, ciphers, hashes and key exchange algorithms on Windows Server 2008, 2012 and 2016. It also lets you reorder the SSL/TLS cipher suites offered by IIS, implement best practices with a single click, create custom templates and test your website.

Here’s what to do once you download and run it:

  • Run it as administrator;
  • Click the “Best Practices” button;
  • Uncheck the “TLS 1.0” option. TLS 1.0 is no longer recommended or safe. Note that this may break some RDP (Remote Desktop) functionality;
  • Click “Apply”;

As fast as that, it’s all done. If you want to check the changes the tool made, do the following:

  • Run “regedit.exe”;
  • Go to the following folder “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL”;
  • Check the new folders and keys generated.

You can also do it all by yourself if you want; check out Microsoft’s guides about it:

Linux environments

Doing this kind of thing on Linux is a bit trickier; since there are a lot of distros and several types of web services, this guide may not apply to everything. Anyway, there’s also a tool that may help you a lot here, an online tool that can be found at:

It is a web page where you select your web service and its version; once you’ve done that, the tool displays the configuration lines that must be imported into the file that sets the security characteristics of your web server. Whether you use Apache or any other technology, just navigate to the folder where this file resides and modify it; remember to always keep a backup copy.

  • Set your technology (yellow);
  • Set the “Modern” option (blue). This defines the acceptable protocols and ciphers, only the strong ones;
  • Set the server version and OpenSSL version. “HSTS” is a security header option that may not be compatible with older web applications;
  • Check the configuration to be imported (green);
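For reference, the generated output for Apache looks roughly like the fragment below. Treat this as an abbreviated sketch only: the cipher list here is shortened, and the exact lines the generator produces for your versions are what you should actually import.

```apache
# Sketch of a hardened Apache SSL configuration ("Modern"-style profile).
# The cipher list is abbreviated for illustration; use the generator's full output.
SSLProtocol             all -SSLv2 -SSLv3 -TLSv1
SSLCipherSuite          ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
SSLHonorCipherOrder     on
# HSTS (optional; may break older applications, as noted above)
Header always set Strict-Transport-Security "max-age=15768000"
```

After editing, reload Apache and confirm it still serves your pages before leaving the change in place.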

All right, you are all good now, or at least better than before. The worst downside of doing all this is that some old browsers may have issues connecting to your web page or service; older versions of IE, like 6 or 7, do not support TLS 1.1 or higher.

If you are worried about this, you can check out an awesome reference on Wikipedia that compiles HTTPS support across most browsers; look for the big table named “TLS/SSL support history of web browsers”.
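Besides the browser tables, you can also probe a hardened endpoint yourself. Python’s standard ssl module can build a client context that refuses exactly the protocols this guide disables; this sketch only builds the context, and you would pair it with `ctx.wrap_socket` and a real connection to probe a server:

```python
import ssl

# Client-side context that refuses the same protocols this guide disables
# on the server; useful for checking an endpoint after hardening it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1  # TLS 1.1+ only
print(bool(ctx.options & ssl.OP_NO_TLSv1))  # prints True
```

If the handshake succeeds with this context, your server offers at least TLS 1.1; a dedicated scanner will still give you a much more complete picture of ciphers and known attacks.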

Comments are always welcome!

Installing and running Cuckoo malware analysis platform – Part 2

As promised, this is my second post of the Cuckoo tutorial set. I’ll be guiding you through the process of building a Windows VM (sandbox) where Cuckoo will run all the malware you throw at it. This part will also show a first run of the platform.

It is important to state that this step isn’t as easy as it seems; the hardest part is tuning the VM as much as possible so most of the malware found around the internet won’t be able to identify it as a VM. Malware nowadays has various ways to check whether it is being run on a VM or on a real host. This happens because the people who write malware put a lot of effort into it, and they won’t be pleased to learn that their malware got reverse engineered and countered.

To start off, as you might have imagined, you are going to need a Windows 7 ISO image to install on your new VM. Check the list below for the recommended specs; some of them are also checkpoints for malware, like HD size and available memory. Remember that this tutorial is based on a VirtualBox environment.

  • At least 60 GB HD;
  • At least 2 GB RAM Memory;
  • At least 2 processor cores;
  • Set up the “Pointing Device” as “PS/2 Mouse” (note that this may cause the mouse to malfunction while operating the VM on the Linux machine through xRDP);
  • Set up the processor execution cap at 100%;
  • Set up the extended feature “PAE/NX”;
  • Set up the hardware virtualization “VT-x/AMD-V” and “Nested Paging”;
  • No video acceleration is required;
  • Set up the network to “Host-only adapter”.

After setting up the VM’s characteristics, it is time to install your Windows 7 image. It’s best to install a fully up-to-date image, since your sandbox should look like a real machine. After the steps above, your VM should look something like this:


Note that my VM has only a 40 GB HD; this is something I came across while creating it and running some tests. It is widely advised that you build yours with at least an 80 GB HD, since this is something that malware nowadays looks for. So, when Windows finishes installing, there are some steps you’ll need to take to continue the setup of your sandbox. Here they are:

  • Do not install VirtualBox Guest Additions. Some malware looks for its registry entries and may find them. If you do install it, this guide will cover you later on;
  • Fully update the system via Windows update;
  • Turn off Windows update after the step above;
  • Turn off Windows firewall;
  • Turn off Windows defender;
  • Turn off Security Center;
  • Turn off UAC;
  • Turn off all the notifications you will get by disabling these services;
  • Set the “Adjust for Better Performance” option on System Properties
  • Set a fixed IP address; Cuckoo’s default network is 192.168.56.x, so set yours up with an address in that range. This address must be placed in the virtualbox.conf file in the Cuckoo conf folder (check this out in part 1);
  • Set video resolution to 1024×768;
  • Put some garbage in the user folders, like images and music, and also surf the web a bit to build browser history.

Now that you’re done tweaking Windows, it’s time to install all the software and tools you’ll need to run the vast majority of malware you will find. You basically have three ways to do so:

  • The first is to build an ISO image with all the software you need and mount it on the VM;
  • The second is to create a network share between your host machine and the VM, then move the files over;
  • The third, and least recommended, is to install VirtualBox Guest Additions and transfer all the files.

The third way is the least recommended because, as I stated above, it leaves traces on the machine showing that it is a virtual machine. You can still install it and then remove all the registry entries that relate to VirtualBox; I’ve done that. So, about the software you need to install, here’s the list:

  • Microsoft Office 2013 x86 (32 bits)
  • Microsoft .NET Framework 4.6 and 4.6.1
  • Microsoft Visual C++ 2005, 2008, 2010, 2012, 2013, 2015
  • Adobe Reader v9.0
  • Flash Player v11
  • Java RE 6 (I’ve installed v6u22)
  • Python 2.7
  • Pillow 2.9.0
  • 7zip
  • Cuckoo agent “agent.pyw”
  • PaFish – Paranoid Fish (tool used to check whether the VM is well obfuscated or not)

After every installation, be sure to run the software once, accept any terms it pops up, leave the window maximized and then close it. Cuckoo can’t hook every piece of software that exists; it supports specific versions of certain applications, so be sure to check the Cuckoo documentation for details.

You can find all this software around the web with a few clicks, but I know how tedious gathering it all can be, so I will soon add a link to this post with everything you need in a single ISO file, stay tuned. x64 or more recent versions of some software, such as Office and Adobe Reader, may not work properly with Cuckoo, but you can try them out if you want.

Going forward, there are still some things you need to do before you can fire Cuckoo up. There’s a piece of software from the Cuckoo platform that we need to put on the VM so that it starts every time the VM runs: the “agent.pyw”. You can find the file in the Cuckoo directory that you downloaded earlier. Here are the steps:

  • On the Windows VM, navigate to “C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup”;
  • Put the agent.pyw file in that folder;

All right, we now need to add a file I made myself that makes some changes to Windows every time it starts up. Since Cuckoo runs a snapshot of the live VM, as soon as the VM fires up to analyze a sample this script clears some artifacts that malware may use for VM detection, such as the registry entries from Guest Additions.

  • Open up a notepad;
  • Type in the following:
Windows Registry Editor Version 5.00


[-HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\VirtualBox Guest Additions]
  • Save the file as any name you want with the extension “.reg”
  • Put it or create a shortcut to it in the startup folder “C:\Users\<username>\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup”


This file will erase the registry entries that identify the machine as a VirtualBox guest every time it boots.


The next step is to delete a device that VirtualBox installs; it comes back every time the machine starts up. I couldn’t find any way to prevent it from being installed or to remove it with a script. If you find a more automated way to do that, please let me know!


It’s almost over now. Run the “pafish” tool to check how well disguised your VM is. I couldn’t make mine perfect; from all the research I’ve done, many people have stumbled over the same checks I have, and I haven’t found out how to fix them. Anyway, here’s how it should look.



As you can see, I got detected on a few checks, which means my sandbox setup isn’t as good as it could be.

For the final steps, you’ll need to do the following:

  • Export the VM to an “.ova” file (if you built it with VirtualBox outside of the Cuckoo Linux host) and move it to the Linux host;
  • Import the machine into VirtualBox on the Linux host:
    • Log in to the Linux host with xRDP
    • Open a console and type “sudo virtualbox”
    • Import the appliance
  • Close VirtualBox and type the following in the console:
    • sudo vboxmanage list vms (check the VM name)
    • sudo vboxmanage controlvm “Sandbox-Windows7” poweroff (make sure it’s off)
    • sudo vboxmanage startvm “Sandbox-Windows7”
    • Wait for the machine to start, manually uninstall the device shown above, then close every window and leave the desktop clear
    • Go back to the console on the Linux host and type “sudo vboxmanage snapshot “Sandbox-Windows7” take “baseline” --pause”
    • sudo vboxmanage controlvm “Sandbox-Windows7” poweroff
    • sudo vboxmanage snapshot “Sandbox-Windows7” restorecurrent

OK, from now on VirtualBox is ready to receive samples from Cuckoo, and the virtual machine will resume right where we left it whenever a job is sent. Double-check Cuckoo’s conf files to make sure all settings match the VM; for example, the IP address you set in the VM must be the same one in the virtualbox.conf file, as well as the VM name.

Now it’s time to run a few commands and try out Cuckoo, do the following and start testing!

  • Start the VirtualBox network interface (you will have to do this every time the Linux host boots)
    • VBoxManage hostonlyif create
    • VBoxManage hostonlyif ipconfig vboxnet0 --ip --netmask (if you didn’t change the default IP address, use the same values as before)
  • Open two console windows on the Linux host
  • Run sudo -i to make sure you got root privileges on both
  • Navigate to the main Cuckoo folder and type this:
    • python -d
  • In the other console, navigate to the “web” folder inside the main Cuckoo folder and type this:
    • python runserver 192.168.X.X:80 (where 192.168.X.X is the IP address of the Linux host)


Now you can go to your browser and type in the IP address of your Linux host; if everything went fine, you should see this:


Try out Cuckoo by sending your first sample. You can also watch the VM working on its own.
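If you’d rather submit samples from the command line than the web interface, Cuckoo ships a submission utility under its utils folder. The sketch below only builds the argv list for it; the install path and the --timeout flag are assumptions about your setup, so check your own tree before running it with subprocess.

```python
# Hypothetical helper: build the command line for Cuckoo's bundled
# submission script (utils/ The install path and flags are
# assumptions -- adapt them to your setup, then run with subprocess.
import os

CUCKOO_DIR = "/opt/cuckoo"  # assumption: wherever you unpacked Cuckoo

def build_submit_command(sample_path, timeout=120):
    """Return the argv list that submits one sample for analysis."""
    return [
        "python",
        os.path.join(CUCKOO_DIR, "utils", ""),
        "--timeout", str(timeout),   # analysis timeout in seconds
        sample_path,
    ]

print(build_submit_command("/samples/suspicious.exe"))
```

Wrapping the submission like this makes it easy to loop over a whole directory of samples instead of uploading them one by one.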



And that’s it!

Everything is good to go and you can start testing. Check the results of any analysis you run on the web interface. You can also open xRDP on the Linux host to watch Cuckoo working or to troubleshoot any problems you face.

I hope I’ve covered everything in these two parts. If you run into any trouble, or have ideas or suggestions, please comment below or just leave your feedback. I’ll be around to improve anything that may need an extra touch.

Installing and running Cuckoo malware analysis platform – Part 1

In this post I’ll be guiding you through all the steps required to install and run a Cuckoo malware analysis platform. I talked about it briefly in my previous post and promised this guide as a continuation. I estimate the installation takes about 40 to 60 minutes, depending on how closely you follow this guide.

I faced many dependency problems and errors until I was able to compile (or at least I hope so) everything you need to run the platform on the first try. I also spent a lot of time reading different guides until I could finally put this one together. Most guides out there only help you set up the platform with basic settings and modules, which may not deliver satisfactory results.

This guide covers everything from preparing the platform host to creating the Windows 7 VM where the files will be run. I’m splitting this tutorial into two main parts: preparing the host and preparing the virtual machines. Let us begin with the host.


Preparing the Host

You’ll need a physical machine with a Linux distro. It must be able to run at least a single virtual machine, so something like 4 GB of RAM and a quad-core processor should do the job just fine, but the more, the better.

Install Ubuntu Server

Ubuntu Server was my OS of choice for installing Cuckoo; it is also the OS recommended on Cuckoo’s website.

Install SSH

The first thing you should do is install an SSH server on the host. SSH will allow you to connect to this machine from anywhere on your network or over the internet, which is useful if you want to finish this tutorial from another machine.

  • sudo apt-get install openssh-server
  • sudo service ssh restart


Install a graphic (XFCE) interface and RDP compatibility

I added this step because my corporate network mainly uses Windows with the Remote Desktop app. Installing a GUI is not mandatory, but it helps a lot.


  • sudo apt-get install xfce4
  • sudo apt-get install xfce4-terminal
  • sudo apt-get install gnome-icon-theme-full tango-icon-theme
  • sudo apt-get install xrdp

The next steps set XFCE as the default GUI for Remote Desktop sessions: create the .xsession file, then edit xrdp’s startup script and add the locale snippet below to it.

  • echo xfce4-session >~/.xsession
  • nano /etc/xrdp/
  • Type in the following:
 if [ -r /etc/default/locale ]; then
 . /etc/default/locale
 fi
  • sudo service xrdp restart


Install SAMBA

Samba will be used for sharing directories between Linux and Windows systems. You’ll need a share on the host for transferring the VMs and any other files.

  • sudo apt-get install -y samba samba-common python-glade2 system-config-samba

Edit smb.conf to define the share: run the following command and add the text in the box below at the end of the file.

  • sudo nano /etc/samba/smb.conf
  • Type in the following at the very bottom of the file:
 [global]
 workgroup = WORKGROUP
 server string = Samba Server %v
 netbios name = ubuntu
 security = user
 map to guest = bad user
 dns proxy = no

 [share]
 path = /samba/share
 browsable = yes
 writable = yes
 guest ok = yes
 read only = no
  • sudo service smbd restart

Install VirtualBox

Cuckoo needs virtualization software in order to automate its malware analysis functions. For this guide I’ll be recommending VirtualBox, Oracle’s open source virtualization solution.

  • sudo apt-get update
  • sudo apt-get install virtualbox-5.1
  • sudo apt-get install dkms

Install Cuckoo and Dependencies

This step installs the Cuckoo platform itself, as well as all of its dependencies. Being modular means that Cuckoo depends on many other tools to work properly. I went through this process a few times and tried to make sure I noted down all the tools needed.

  • sudo apt-get update && sudo apt-get upgrade -y && sudo apt-get dist-upgrade -y && sudo apt-get autoremove -y
  • sudo apt-get install python python-pip python-dev libffi-dev libssl-dev libxml2-dev libxslt1-dev libjpeg-dev
  • sudo apt-get install git mongodb python python-dev python-pip python-m2crypto libmagic1 swig libvirt-dev upx-ucl libssl-dev wget unzip p7zip-full geoip-database libgeoip-dev libjpeg-dev mono-utils yara python-yara ssdeep libfuzzy-dev exiftool curl openjdk-8-jre-headless
  • sudo pip install --upgrade pip

Install Cuckoo Modules

  • PDF Reports
    • sudo apt-get install wkhtmltopdf xvfb xfonts-100dpi
  • TCP Dump
    • sudo apt-get install tcpdump libcap2-bin
    • sudo chmod +s /usr/sbin/tcpdump
  • ClamAV for malware id
    • sudo apt-get install clamav clamav-daemon clamav-freshclam
  • Pydeep for fuzzy hashes
    • sudo pip install git+
  • Malheur for malware behavior analysis
    • sudo apt-get install uthash-dev libconfig-dev libarchive-dev libtool autoconf automake checkinstall
    • git clone
    • cd malheur
    • ./bootstrap
    • ./configure --prefix=/usr
    • make
    • cd
  • Volatility for memory analysis
    • sudo apt-get install python-pil
    • sudo pip install distorm3 pycrypto openpyxl
    • sudo pip install git+
  • PyV8 JavaScript engine for malicious JavaScript analysis
    • sudo apt-get install libboost-all-dev
    • sudo pip install git+
  • Suricata IDS
    • sudo apt-get install suricata
    • sudo cp /etc/suricata/suricata-debian.yaml /etc/suricata/suricata-cuckoo.yaml
    • sudo nano /etc/suricata/suricata-cuckoo.yaml
      • Search for “# a line based alerts log similar to Snort’s fast.log” by pressing “ctrl+w”
      • Set “enabled” to “no” for “fast.log” and “unified2”
      • Find “file-store” and set “enabled” to “yes”
      • Set “force-md5” and “file-log” to “yes”
      • Find “# Stream engine settings. Here the TCP stream tracking and reassembly” and set “depth” to “0”
      • Find “request-body-limit” and “response-body-limit” under “default-config” and set them to 0, without any unit
      • Find “vars” and, under “address-groups”, set “EXTERNAL_NET” to “any”
    • Update the open IDS threat rules
      • git clone
      • sudo cp etupdate/etupdate /usr/sbin
      • sudo /usr/sbin/etupdate -V
      • sudo crontab -e
        • choose 2
        • Add the line 0 22 * * * /usr/sbin/etupdate so the rules update daily at 22:00, or modify the schedule at your will;

Installing Cuckoo

For this step, you can either download the ZIP file from the Cuckoo website ( or download an improved and modified, but outdated, version from the git link mentioned below. You can check out the improvements at

Starting Cuckoo

Every time you restart the machine, you will have to re-create and start the virtual network interface. You will also need to start Cuckoo and the web service used for checking results and statistics and for submitting malware.

  • sudo VBoxManage hostonlyif create
  • sudo VBoxManage hostonlyif ipconfig vboxnet0 --ip --netmask
  • cd cuckoo-modified
  • sudo python -d (start Cuckoo platform)
  • cd cuckoo-modified/web
  • sudo python runserver XXX.XXX.XXX.XXX:YY (X should be the Linux machine IP address and Y should be the http port)


Note: Cuckoo won’t run properly on this first try, since we haven’t set up any virtual machine as the sandbox yet.


In this post I covered everything you need to install and run Cuckoo, including an RDP interface so you can use the GUI from Windows Remote Desktop, and a network share for connecting to the host. The main difference between this guide and others on the web is that it compiles my efforts to run Cuckoo in an enterprise production environment; as I stated before, most guides only help you install the platform’s basic functionality, which won’t be as good as a fully geared Cuckoo.

I’ll soon be posting the continuation of this guide, in which I’ll help you create your sandbox VM with most of the tweaks needed to make it harder to detect when analyzing sandbox-aware malware.


Cuckoo – An open source malware analysis platform

In this post I’ll be covering an awesome open source solution named Cuckoo.

Cuckoo is a very modular platform used for managing sandboxes and automating malicious file analysis. Like any other open source platform, it is supported by a community, and most of its components are developed by its supporters and users. Setting aside the question of “professional” support, the platform itself is very stable; I can say that from being a tester myself, and mostly because I’ve set up a production environment with this platform at my current job.

The tool is very intuitive and user friendly, including the way the platform presents analysis results; you don’t need to be a reverse engineering expert to understand what it says after a job is done. The not-so-easy part is, of course, the installation: being that modular, it depends on many other tools and their dependencies, which may give you a hard time.

This solution has also been set up online so anyone on the internet can use it on the go; I highly recommend trying it out if you are interested in Cuckoo. As for this post, my goal was only to give a brief intro to this great tool. I’m working on a detailed tutorial on the installation process for this platform and will be posting it soon!


Pentesting Script – Guidelines

In this post I’ll be covering mostly the basics of any pentest assessment: essentially a generic checklist of what to test and the tools you should be using. Obviously, these recommendations aren’t the only way to do things; this is a compilation of best practices and experiences that I’ve gathered over my years working as a security analyst. The best way will always be the one you get along with, mixed with the one that suits your needs.

To start off, this post covers 5 major phases that I generally run in every pentest assessment. This list doesn’t need to be run exhaustively in every test, since each assessment has its own particularities. It works great as a guide during a test, treating the phases and their tasks as testing recommendations; as you may imagine, I’m not listing every single step and task, since my full testing script is very extensive. So, for the first phase:

  • Phase 1 – Automated Testing

This phase is responsible for the hard and extensive work during an assessment. Automated scanning tools should be used first, as they can set up the initial path for more detailed testing and save precious time. These tools can be split mostly into two categories: the first focuses on web application testing and the second on infrastructure testing. As for the tasks in this phase, here are the most important:

  • Run automated infrastructure scans (Nmap, Nexpose, Nessus);
  • Run automated application scans (Acunetix, Burp Suite, OWASP ZAP);
  • Run Nmap or a similar tool to scan all TCP ports;
  • If any vulnerabilities are identified, verify whether public exploits exist (Metasploit,

Acunetix, Nexpose and Nessus are excellent commercial tools, but they can all be replaced by manual testing, open source tools and a lot of patience if you can’t afford the licenses.
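The full TCP port scan from the task list above can be sketched as a small helper that assembles the Nmap command line. The flag choices here (-p- for all ports, -sV for service detection, -T4 timing) are just one reasonable combination, not the only valid one.

```python
# Sketch: build an Nmap command covering all 65535 TCP ports with service
# detection. Flag choices are one reasonable combination, not the only one.
def nmap_full_tcp_scan(target, output_prefix="scan"):
    """Return the argv list for a full TCP port scan of `target`."""
    return [
        "nmap",
        "-p-",          # all TCP ports, not just the default top 1000
        "-sV",          # probe open ports for service/version info
        "-T4",          # faster timing template for internal assessments
        "-oA", "%s_%s" % (output_prefix, target),  # save in all formats
        target,
    ]

print(" ".join(nmap_full_tcp_scan("")))
```

Saving with -oA keeps greppable and XML copies of the results, which is handy when you come back to the findings during the manual phases.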

  • Phase 2 – Manual Testing – Information Gathering

In this phase, if it applies, we look for relevant public information about the target of the assessment, such as e-mail addresses that may be used on the application, manuals and any sensitive information indexed by the major search providers (Google and Shodan, for example). Any information found in this phase may help further testing at some point, especially the manual testing. The tasks of this phase are:

  • Provoke application errors and analyze responses for possible information leakage;
  • Identify potentially dangerous functionalities such as file uploads;
  • Attempt to identify possible hidden features of the application (e.g. Hidden debug / admin parameters or links);
  • Verify whether error pages can be influenced by user input parameters.

Any information that may seem useless at first should be stored for later analysis. In countless assessments I found myself using something that at first I thought I’d never use.

  • Phase 3 – Manual Testing – Authentication Testing

This phase’s main objective is to test how the application handles authentication. Whether it uses manual credential input, an SSO (Single Sign-On) feature or none at all, we focus our efforts on finding vulnerabilities in the authentication process.

  • Verify whether HTTPS is used to encrypt credentials and / or sensitive data;
  • Test for user enumeration vulnerabilities;
  • Test for bypassing authentication by forced browsing;
  • Test for bypassing authentication by SQL Injection on the login page;
  • Test if password reset/reminder can be guessed or bypassed;
  • If possible, verify that all users have a unique user id;

The idea is to try to bypass the application’s regular authentication, accessing it without any authentication at all or even by using a profile you don’t have access to.

  • Phase 4 – Manual Testing – Session Management Testing

Session management is how the application handles user sessions once you are authenticated, for example after you provide your login credentials. The main idea in this phase is to check whether the application keeps good track of your privileges as an authorized user and of which actions you can take “inside” the application. The tasks in this phase can be summarized as:

  • Verify session timeout enforced in a reasonable amount of time;
  • Check for session fixation (not invalidating/re-issuing current session token after authenticating or forcing a known session ID on a user);
  • If the site is secure (HTTPS), check whether the session ID is passed over an unencrypted connection (HTTP) at any stage;
  • Check whether the session ID is sent in a GET request at any stage, and verify if it is possible to force it into the GET request;
  • Verify that all pages that require authentication also contain a clear Logout button;
  • Check for weak obfuscation or encryption of cookie data;
  • Test if concurrent logins are possible.

Common exploits regarding session management are privilege elevation, user impersonation and session theft; these can be done by manipulating session cookies and by modifying user IDs.
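One quick, concrete check for weak session management: collect a handful of session IDs (log out and back in a few times) and look for obvious predictability. The sketch below only catches the crudest case, a constant-step hex counter; passing it is merely a starting point, not proof the IDs are random.

```python
# Sketch: flag trivially predictable session IDs (constant-step hex
# counters). Passing this check does NOT prove the IDs are random.
def ids_look_sequential(session_ids):
    """True if the hex session IDs increase by one constant step."""
    values = [int(s, 16) for s in session_ids]
    # Collect the differences between consecutive IDs; a single distinct
    # delta means the next ID is trivially guessable.
    deltas = {b - a for a, b in zip(values, values[1:])}
    return len(deltas) == 1

print(ids_look_sequential(["00a1", "00a2", "00a3"]))  # constant step: True
print(ids_look_sequential(["9f3b", "0c71", "e2a8"]))  # no pattern: False
```

If the IDs aren’t plain hex, decode whatever encoding the application uses first (Base64, URL encoding) before comparing them.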

  • Phase 5 – Manual Testing – Data Validation & Business Logic Testing

This final phase tests for vulnerabilities that your automated tool of choice may already have pointed out, such as SQL injection, XSS (cross-site scripting), HTML injection and many others. If none of these were flagged, manual testing should take place in the areas where the vulnerability type you are testing for normally appears: forms, fields and any place where data can be input by the user. This phase is also the time for specific business logic testing, i.e. functionality that is critical to the business.

  • Attempt to subvert critical business logic (change transfer limits, access client A’s data as client B, change user preferences, etc.);
  • All suspicious parameters (POST & GET parameters, SOAP Headers, etc) manually tested for (Blind) SQL Injection;
  • Generic user input validation testing;
  • Command injection;
  • All suspicious parameters (POST & GET parameters, SOAP Headers, etc) manually tested for Cross-Site Scripting.
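To keep the parameter testing in this phase organized, it helps to enumerate every (parameter, payload) pair up front and work through them one by one. A minimal sketch, using classic textbook probes as placeholder payloads:

```python
# Minimal sketch: pair every suspicious parameter with canary payloads for
# manual injection testing. The payloads are classic textbook probes only,
# not an exhaustive list.
import itertools

PAYLOADS = {
    "sqli": "' OR '1'='1",
    "xss": "<script>alert(1)</script>",
}

def test_matrix(params):
    """Yield (parameter, vuln_class, payload) tuples to try one by one."""
    for param, (kind, payload) in itertools.product(params, PAYLOADS.items()):
        yield param, kind, payload

for case in test_matrix(["id", "q"]):
    print(case)
```

Extending PAYLOADS with command injection and HTML injection probes gives you a simple coverage checklist for every suspicious GET/POST parameter and SOAP header.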

Remember that the best way to do any pentesting is to follow good practices and, most importantly, do what works for you: focus on your strongest field, like programming or thinking outside the box, and if everything else fails, read a lot on Google. Check out the OWASP Testing Guide that I linked in my previous post; it should serve you just right to start off.