Sunday, December 08, 2013

When I was writing Using robots.txt to locate your targets, I felt the need to develop a tool to automate the task of auditing the robots.txt files of web servers.

Now, I am really proud to introduce my first tool, called Parsero. I hope you enjoy it...


One of the things you need to do when you are auditing a website is to look at the robots.txt file. Web administrators write this file to tell crawlers like Google, Bing or Yahoo what content they are allowed to index and which directories must not be indexed.

But... why do administrators want to hide some web directories from the crawlers?

Sometimes they want to hide the web portal login, management directories, private information, sensitive data, pages with vulnerabilities, documents, etc. If they hide these directories from the crawlers, they can't be found through Google Hacking or by just searching in the search engines...

Why do you need Parsero?

We've said that administrators tell the crawlers which directories or files hosted on the web server are not allowed to be indexed. They achieve this by writing as many "Disallow: /URL_Path" entries as they want in the robots.txt file, each pointing to one of these directories. Sometimes the paths typed in the Disallow entries are directly accessible by users (without using a search engine) just by visiting the URL and the path, and sometimes they are not available to anybody. Because it is really common for administrators to write a lot of Disallow entries, some available and some not, you can use Parsero to check the HTTP status code of each one and find out automatically whether these directories are available or not.

When we execute Parsero, we can see the HTTP status codes, for example the codes below:

  • 200 OK: The request has succeeded.
  • 403 Forbidden: The server understood the request, but is refusing to fulfill it.
  • 404 Not Found: The server hasn't found anything matching the Request-URI.
  • 302 Found: The requested resource resides temporarily under a different URI.
  • ... 
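
The idea behind Parsero can be sketched in a few lines of Python using only the standard library. This is an illustration of the technique only, not Parsero's actual code; example.com and the sample paths are made up:

```python
from urllib.error import HTTPError
from urllib.parse import urljoin
from urllib.request import Request, urlopen

def parse_disallows(base_url, robots_txt):
    """Extract the Disallow paths from a robots.txt and build the full URLs."""
    urls = []
    for line in robots_txt.splitlines():
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:                      # an empty Disallow means "allow everything"
                urls.append(urljoin(base_url, path))
    return urls

def check_status(url):
    """Return the HTTP status code of a URL, like the codes listed above."""
    try:
        return urlopen(Request(url, method="HEAD")).status
    except HTTPError as e:
        return e.code                     # 403, 404... still carry a status code

robots = "User-agent: *\nDisallow: /admin/\nDisallow: /logs/"
for url in parse_disallows("http://example.com", robots):
    print(url)                            # probe each one with check_status(url)
```

The interesting entries are the ones whose status code is 200, because those Disallow paths are directly reachable by anybody who types the URL.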


Parsero needs at least Python 3 and can be executed on any operating system which supports this language. It also needs urllib3.
sudo apt-get install python3
sudo apt-get install python3-pip
sudo pip-3.3 install urllib3
When you have installed this software, just download the project from:

On Linux you can use the command below.

git clone

When you download Parsero, you will see a folder with three files.

Before starting, you need to check that your default Python version is 3 or later. If you have already installed Python 3 but it is not your default version, you can run the script using the command "python3" instead of "python".

If you don't type any argument, you will see the help below.

Example 1

In the picture below you can see the robots.txt file of a web server in one of my environments. If you are a web security auditor, you should check all the Disallow entries in order to try to get some valuable information. A security auditor will want to know which directories or files hosted on the web server the administrators don't want published on the search engines.

You can do this task automatically using Parsero with the command:
python -u 

Notice in the picture below that the green links are the ones which are available on the web server. You don't need to waste your time checking the other links; just click on the green ones.

If we visit that link, we can see the Apache logs, which are public but hidden from the crawlers...

Example 2

In the picture below you can see another robots.txt. The picture has been cut because this server has a lot of Disallow entries. Can you imagine checking all of them manually?

If you use Parsero, you will audit the whole robots.txt file in just a few seconds...

... and discover for example, the portal login for this site.

The future of Parsero

I am working on new features for this tool, which will be delivered in the next few months... I would be really grateful if you decide to give me your feedback about it.

I want to give thanks to cor3dump3d for his support and help!!! He has saved me a lot of time by sharing his knowledge of Python with me!!

Posted on Sunday, December 08, 2013 by Javier Nieto


Sunday, December 01, 2013

You already know that malware developers create packed executables in order to try to thwart the security analyst's job and to make a lighter file which is easier to download... If the executable is packed, we cannot examine the original program, which prevents us from employing static analysis to learn what the malware does...

In this post we have a malware sample called DEMO.exe which we will work with.

If we open the file with PEiD, we can check that it has been compressed with UPX. You already know that it is a common packer...

Most of the time we are able to unpack this type of compression automatically with the UPX program. But this time, we haven't been lucky...

If we want to perform a static analysis, first we need to unpack the file. To achieve this purpose, we need to follow these steps:

  • Unpack the original file into memory.
  • Resolve all of the imports of the original file.
  • Find and transfer execution to the original entry point (OEP).

The key to this technique is to run the packed file until it decompresses itself, set a breakpoint, dump the process, set the OEP and rebuild its imports.

To follow these steps we will use OllyDbg v1.10 and the plugin OllyDump.

The first thing we need to do is to load the executable in OllyDbg. We can see that a warning appears telling us what we already know: "...reports that its code section is either compressed, encrypted, or contains large amount of embedded data..."

Ok, let's go. As soon as the file is loaded, we can see the program stop at PUSHAD. Press F7 or click on the Step into button.

Right click at the ESP value and select Follow in Dump...

 ... select the first four characters and set a Breakpoint.

Press the play button and wait until the program stops at the breakpoint. Now we can see the tail jump at 00ACD7BD. The tail jump is the last instruction of the uncompressing routine, and in this type of compression it is usually followed by a lot of 0x00 bytes. These 0x00 bytes are padding to ensure that the section is properly byte-aligned. Notice that the function jumps to an address which is very far away, 004090E8. At that instruction, the original program will start.

We can set a breakpoint at the tail jump.

We resume the program again and it stops at the tail jump. We need to press F7 or click on the Step into button.

Now we are at the OEP. Right-click on this line and click on Dump debugged process in order to dump the process to disk.

Notice that the OEP (004090E8) is the image base (00400000) plus the last value in the "Modify" field, 90E8. You can press "Get EIP as OEP". Untick "Rebuild import" because the imports will be rebuilt using Import REConstructor. It is a good idea to copy the value "90E8" for later...

Save the dump wherever you want. I have saved the file as "Dumped.exe". Notice that if you try to run the program, it will fail because it doesn't have the imports. To fix that, just open Import REConstructor and select the process which is being debugged with OllyDbg, in this case "demo.exe".

The next window will appear.

Just paste the value which was copied before into the OEP field and click on IAT AutoSearch.

Click on Get Imports first and then click on the Fix Dump button.

After pressing "Fix Dump", select the file which was dumped.

Now we have two files: the first one, which was dumped with OllyDbg, and the second one, Dumped_.exe, which has the imports rebuilt.

If you look at the strings in OllyDbg, you can see the differences. In the picture below, on the left you can see the strings detected in the packed file and on the right the strings of the unpacked file.

Now, if we open the unpacked file, we can detect that it was developed in Visual Basic and we can start with the static analysis.

With VB Decompiler we could try to decompile it...

Posted on Sunday, December 01, 2013 by Javier Nieto


Sunday, November 24, 2013

We usually need to create an executive report when we are involved in incident handling. In these cases, a good option could be to include a world map with the connections which were established during the incident. Maybe we are interested in showing on a map where the command and control servers are hosted, or which countries a distributed denial of service attack came from...

To achieve this purpose, I am going to show you how to create a map using Wireshark. The latest Wireshark version, 1.10.2, will be used in this guide.

The first thing we need to do is to download the GeoIP databases, GeoLite City, Country and ASNum, from the link below (free download).

Then, we need to put the files contained in the downloads above into a folder, for example "C:\Geoip".

Now, we need to tell Wireshark where the GeoIP files are. To achieve this, we need to open Wireshark, go to Edit -> Preferences -> Name Resolution and click on Edit in the "GeoIP database directories" section...

... and create a New path where the files were saved, in this case "C:\Geoip".

It is necessary to restart Wireshark in order to apply the changes. Now, we only need to load a PCAP file or create a new traffic capture. When we have all the traffic captured and we want to create the map with the connections involved in the incident, we need to go to Statistics -> Endpoints...

... select the IPv4 tab and click on the Map button. Notice that if, for example, you have set a filter in Wireshark with only the UDP connections which are related to the malware, you can select "Limit to display filter" in order to print only these connections on the map. Then you click on Map.

Finally, we have a dynamic map with all the connections plotted on it. In this case, I've used the PCAP file related to the attack, which can be downloaded from the Barracuda website here.

Posted on Sunday, November 24, 2013 by Javier Nieto


Sunday, November 17, 2013

RFC 1945 says in section 10.15:

"The User-Agent request-header field contains information about the user agent originating the request. This is for statistical purposes, the tracing of protocol violations, and automated recognition of user agents for the sake of tailoring responses to avoid particular user agent limitations. Although it is not required, user agents should include this field with requests."

We know that the infected hosts which belong to a botnet make callbacks to the command and control (C&C) server, usually through port 80, which is commonly open in the majority of networks. Currently, network security administrators have "next generation firewalls" which are able to detect whether a connection crossing this port is a standard HTTP connection or not (like a shell on the tcp/80 port) in order to allow or drop it. For this reason, some malware developers create malicious binaries with HTTP capabilities, and sometimes they use the User-Agent field to send information to the C&C server to achieve their goals. (Notice that if the malware implements SSL, the next generation firewall administrator would need to configure SSL decryption in their firewalls if they want to look into these connections.)

In this post, I am going to show you some examples of malware which transmits information in the User-Agent HTTP field.

Malware Sample 1

In this example, the malware creates a Visual Basic script which will run to connect to the C&C server. We talked about it in the last post, Decoding the code encoded. In this case, I could edit the Visual Basic script in order to change the C&C domain name to the localhost address, where I have a netcat listening on port 80. Netcat will receive the malware connections instead of the C&C server. In the picture below, we can see the result.

  1. This line corresponds to Netcat running on the computer where the malware is being analyzed. It is listening "-l" on the 80/tcp port "-p 80".
  2. We can see that the connection executes a POST request.
  3. The malware is sending information about the compromised host in the User-Agent field, with no more data in the HTTP body. This is the information which is being sent to the C&C server:

    User-Agent: {DiskVolumeSerial}<|>{Hostname}<|>{Username}<|>{OS}<|>underworld final<|>{AVProductInstalled or nan-av}<|>{USBSpread: true or false}

Malware Sample 2

I sent a malware sample to my Cuckoo Sandbox to analyze its behavior and I got a traffic capture. If we use Wireshark to look at the connections in the traffic capture, we can filter by "http.user_agent" to show only the requests made by the malware which contain this field in the HTTP headers. You can see these connections in the picture below.

Right-clicking on each HTTP request, we can select "Follow TCP Stream" in order to see the data as the application layer sees it. In the picture below, we can see the Follow TCP Stream of the first connection.

  1. A GET request including the MAC address of the infected host has been made. Normally the malware sends information with POST requests, but in this case the malware requests a URL which contains the infected computer's MAC address, sending this information to the C&C server in a different way... The MAC address will be registered in the remote HTTP server logs.
  2. The User-Agent is the same as the name of the malicious executable. Maybe the malware developer has the same malware hosted on different servers and wants to trace them, or maybe wants to know the malicious program version.
  3. In the response to the GET request, the infected host receives three codes: 1,1,0. We would need to dig into it with reverse engineering to try to figure out what the malware exactly does with these codes.
The next thing the compromised host does is make another GET request to the C&C server.

  1. We can see another GET request including the MAC address. Now, the HTTP request has three fields: "v1", "v2" and "v3".
  2. The connection continues using the same User-Agent.
  3. Now, the host receives the "0" code.
In the picture below, we can see that in the next request made by the malware, the User-Agent has changed.

  1. The malware visits the "/version" path.
  2. The User-Agent has changed. From this name, we could think that the malware is checking whether it has a connection to the Internet, but that does not make sense, because the malware has already received codes in two different requests...
  3. We can see four different executables separated by @date| . Maybe it is the date when they were compiled.
The next connections are related to the last request described above. We can see how the malware requests the same executables it received before. The question here is why it downloads each binary twice.

The malware downloads the four executables and changes its User-Agent again when requesting them.

  1. The URL path where the malware is hosted.
  2. The User-Agent is changed to "installer-agent".
  3. The executable download.
The last connection the malware makes is the same as the second one.

This malware seems to use the User-Agent as if it were a radio announcer.

Malware Sample 3

This example comes from the FireEye blog and belongs to the well-known Flamer malware.

In the picture below, which has been taken from the blog mentioned above, we can see that the .NET version used by this malware is NET CLR 1.1.2150. This version has never been released by Microsoft. It is really difficult to know what these numbers mean... It may be the malware version...
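
This kind of giveaway is easy to hunt for. A small sketch could flag User-Agents advertising a .NET CLR build Microsoft never shipped; the list of genuine builds below is illustrative and non-exhaustive:

```python
import re

# Illustrative, non-exhaustive sample of CLR build strings Microsoft did ship
KNOWN_CLR = {"1.0.3705", "1.1.4322", "2.0.50727", "4.0.30319"}

def suspicious_clr_tokens(user_agent):
    """Return the '.NET CLR x.y.z' versions in a User-Agent which are not
    in the KNOWN_CLR sample above."""
    return [v for v in re.findall(r"\.NET CLR ([\d.]+)", user_agent)
            if v not in KNOWN_CLR]

ua = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; .NET CLR 1.1.2150)"
print(suspicious_clr_tokens(ua))          # ['1.1.2150']
```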

Malware Sample 4

This example comes from the FireEye blog too.

In the picture below (taken from the FireEye blog) we can see that the User-Agent contains the string "sleep 300000" and a date, "ct:Mon Feb 25 23:11:58 2013". It could be possible that the zombie computer is telling the C&C how long the malware has been sleeping and since what day, or maybe it informs the C&C server that it received the sleep command.


In this post we have seen how malware exchanges information with the command and control servers using the User-Agent, and the importance of this HTTP field. Much malware does not modify its User-Agent, or implements a well-known User-Agent to try to stay undetected; but if it changes, you can create a custom signature for your IDS (Snort, Suricata...) in order to locate more infected hosts in your network which are making connections with the User-Agent customized by the malware.

Posted on Sunday, November 17, 2013 by Javier Nieto


Sunday, November 10, 2013

This post is the continuation of the last one:

Remember that in the last post we obtained the first password, "r0b0RUlez!", for the challenge. In this post, I am going to show you how to get the first and the second passwords using IDA Pro instead of OllyDbg. Ok, let's go...

In order to get the first password, we can do something similar to what we did in the previous post. (I am going to explain this swiftly because it was explained in the previous one.)

If we set a breakpoint at "call strcmp" at 0x00401B6C, the program will stop, while being debugged, when it is comparing two strings: our password and the real one. After setting the breakpoint, press F9 in order to debug the program.

The program is open and we just need to type a password. In this case, "behindthefirewalls".

If we go to the Stack...

... we can see the picture below.

  1. Our attempt to figure out the password.
  2. The real password which the first one is being compared to.
  3. We are not sure about this string... Could it be the second password?
  4. The string which will ask for the second password...
It seems too easy... We type the first password "r0b0RUlez!", which we already know is correct, and we try "u1nnf2lg" as the second password...

But it does not work... The next step we can take is to set a breakpoint at "u1nnf2lg", "0x0023FDFC", in the stack, in order to stop the program at this address while it is being debugged and look at the code there... Just press F2 over the string to set the breakpoint.

After pressing OK, you will see a red line where the program will be stopped.

We debug the program again by pressing F9. It is necessary to type the first password again and then, the program will be stopped. But...

... the program has stopped at "0x0040161F" instead of "0x0023FDFC", where we set the breakpoint... What is happening? If we look at the assembly code in the picture below, we can see "int 3"... It seems that the software developer is trying to thwart our reverse engineering attempts when we set a breakpoint in the executable's code...

Don't worry when the pop-up below appears. We need to click on "Change exception definition"...

... tick the "Pass to application"...

... and press OK and Yes and press F9 again.

After that the second password is required. We type for example "behindthefirewalls" and press F9 one more time.

Now, the program stops at the right address, "0x0023FDFC".

If we look at the assembly code in graph view, we can see the picture below, where we can check that the program has stopped at "cmp al, 2". We can see that there is a loop and a "xor eax, 2" instruction...

We can check that the EAX value is equal to 75 in hexadecimal, which in ASCII is "u" (the first character of "u1nnf2lg"), and then it will be XORed with 0x02. 0x75 XOR 0x02 = 0x77, and 77 in hexadecimal is "w"... We can suppose the first character of the password could be "w"...

What would happen if we XORed the string "u1nnf2lg", which was found at the beginning of our post, with 0x02?

python -c "print ''.join([chr(ord(c) ^ 0x2) for c in 'u1nnf2lg'])"

We have the string "w3lld0ne" which seems to be the second password...

... and Yes!! We win!!!

If we analyze the loop, we can say that it XORs the string "u1nnf2lg" with 0x02 character by character, and the result is compared character by character with the typed password. If the first XORed character is the same as the first character typed by the user, it continues with the second one and so on... If not, the game is over...
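
The loop the binary implements can be modeled in Python as follows; this is a sketch of the logic recovered from the disassembly, not the challenge's actual source:

```python
SECRET = "u1nnf2lg"   # the mystery string found in the stack

def check_password(typed):
    """XOR SECRET with 0x02 character by character and compare the result
    against the typed password, stopping at the first mismatch."""
    for secret_char, typed_char in zip(SECRET, typed):
        if chr(ord(secret_char) ^ 0x2) != typed_char:
            return False                  # first wrong character: game over
    return len(typed) == len(SECRET)      # reject too-short or too-long guesses

print(check_password("w3lld0ne"))         # True
print(check_password("behindthefirewalls"))  # False
```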

Posted on Sunday, November 10, 2013 by Javier Nieto
