Archive for the ‘Network Security’ category

Tshark Challenge – Uber-geek Answer

October 13th, 2009

In my last post I left you with a question: Given what we have seen in the decode file with tshark, what impact (if any) would there be if we place a stateful inspection firewall between the attacker and the Web server? In other words, if the attacker is running a packet sniffer, would they still see the Web server leaking out 404 errors?

And the answer is…

Maybe. :)

Not all stateful inspection firewalls are equal; some handle packets slightly differently than others. For example, some (Check Point, NetScreen and others) let non-SYN packets that match a permit rule generate new state table entries. Others (Cisco, Netfilter and others) will only create a state table entry after proper session establishment (they must see the full TCP three-way handshake).

The NIPS sent a valid reset packet to the attacker on the Internet. Each of the above firewalls would see the reset packet and remove its entry from the state table. When the Web server continued to communicate, however, only the first set of firewalls would let the packets leak out to the attacker. The second set of firewalls would simply drop the traffic. In fact, if we set them up with a reject rule (one that answers with a reset) instead of a silent drop rule, they would kill the session on the Web server, thus fixing the problem created by the NIPS.

Why do some vendors let acknowledgement packets generate new state table entries? This seems a bit counterintuitive as a legitimate session is always going to start with a SYN packet. There are two reasons this makes good functional sense:

  • Updating the firewall rules does not kill active sessions
  • Active-active setup will pass traffic prior to state table sync

Of course we’ve increased functionality at the cost of security. Unfortunately, that is the typical trade-off.

Hope you had fun with this challenge. If there is interest I will post more in the future.

Analyzing packets with tshark

October 1st, 2009

In an earlier post I discussed how to adjust the display output in tshark. The post generated a lot of interest, so I decided to add some additional information on using tshark to decode packets. This post assumes you have read the one linked to above.

Why use tshark instead of tcpdump/windump?

Many old-time decoders swear by tcpdump and its Windows counterpart windump. Both are great tools, but they have become a little dated. While patches are still released from time to time, little has been done to update or expand their decode capability. Wireshark, on the other hand, along with its included tools such as tshark, includes decode support for hundreds of protocols, and the list is growing all the time. While you can certainly analyze packets without the decoders, they make the process go far more quickly.

Why use tshark instead of Wireshark?

Wireshark is a great tool when you are doing an in-depth payload analysis. It can be a little tedious however if you wish to follow a specific field over multiple packets. For example let’s say we wish to watch the TCP sequence number increment over multiple packets. With Wireshark, I would have to note the sequence number location in the middle pane and page through each packet. Since there is no way to line up the value over multiple packets, I’m forced to remember previous values when performing my calculations. With tshark however, we can do something like this:

tshark -n -T fields -e ip.src -e tcp.seq -e tcp.len

    0        0
    1        0
    1      363
  364     1448
 1812     1448
 3260     1448
 4708     1448
 6156     1448
 7604     1448
10500      310
 9052     1448
10810        0

Remember that the TCP sequence number (the second field) should increment based on the size of the payload (the third field). Note that packets 10 and 11 were received out of order. This could mean there are multiple paths available between our location and the identified IP address. While Wireshark would show us this information as well, in this view it is a bit easier to follow the flow.
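This bookkeeping can also be checked mechanically. The sketch below uses awk on hypothetical (seq, len) pairs — not the capture above — and flags any packet whose sequence number does not equal the previous packet’s seq plus its payload length:

```shell
# Flag packets whose relative TCP sequence number does not follow from the
# previous packet's seq + len. The sample data is made up; in practice you
# would pipe in the tcp.seq and tcp.len columns from tshark's fields output.
printf '1 363\n364 1448\n1812 1448\n3260 1448\n' |
awk 'NR > 1 && $1 != prev_seq + prev_len { print "out of order at seq " $1 }
     { prev_seq = $1; prev_len = $2 }'
```

An in-order stream prints nothing; any out-of-order or missing segment is flagged immediately.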

More on displaying fields

As discussed in my earlier post, as well as shown above, the “-T” switch can be used to manipulate the output being displayed. You can choose from XML, Postscript or plain text. The most useful option is “fields” as it lets you pick and choose which specific fields you want printed out. As shown above, the “-e” switch can then be used to identify which fields you wish to display. The complete list of filters can be found here. A nice cheat sheet of the most commonly used values can be found here.

If you define a specific protocol, tshark will display some of the more important fields from that header. For example to look at only the Ethernet header:

tshark -T fields -e eth

Ethernet II, Src: TyanComp_56:3b:14 (00:e0:81:56:3b:14), Dst: Dell_d1:fe:ef (00:12:3f:d1:fe:ef)

Ethernet II, Src: TyanComp_56:3b:14 (00:e0:81:56:3b:14), Dst: Dell_d1:fe:ef (00:12:3f:d1:fe:ef)

Ethernet II, Src: TyanComp_56:3b:14 (00:e0:81:56:3b:14), Dst: Dell_d1:fe:ef (00:12:3f:d1:fe:ef)

Note the type and CRC fields are not displayed, as they are not as “interesting” as the source and destination MAC addresses. We would have to specify these fields explicitly (eth.type and eth.trailer) if we wish to see them.

One side effect of printing fields is that tshark will add a blank line for any packet that does not contain the specified field. This can be a pain when analyzing HTTP packets as not every packet will contain a URI. An easy way to clean this up is to pipe it through grep. For example:

tshark -T fields -e http.request.uri | grep -v "^$"

In my last post I discussed grep as well as where to grab a free version for Windows. The above grep command uses the “-v” switch to match all lines that do not contain the specified value. “^$” defines a blank line. So the above grep command filters out all blank lines.
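Here is a quick way to convince yourself of what that grep is doing, using printf in place of tshark:

```shell
# Three lines of output, one of them blank; grep -v "^$" drops the blank line
# and passes the rest through untouched.
printf 'index.html\n\nlogin.php\n' | grep -v "^$"
```

Only index.html and login.php survive; the empty line is filtered out.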

More display options

Tshark has a number of other useful display options. For example you can print headers at the beginning of the output:

tshark -n -T fields -e ip.src -e ip.dst -E header=y

ip.src  ip.dst

If you plan on importing the information into a spreadsheet or database, you can define which character to use between fields:

tshark -T fields -e ip.src -e ip.dst -e tcp.dstport -E header=y -E "separator=;"


Packet statistics

Tshark has solid statistical capability as well. If you need to process a lot of files, sometimes it is helpful to start by looking at the raw stats. The “-z” switch is used to specify the statistics you wish to analyze. Normally these will be printed at the end of the decode information, but if you use the “-q” switch only the stats will be printed. Here’s an example:

C:\testing>tshark -q -z http,stat, -z http,tree -r test.cap


HTTP/Packet Counter              value        rate       percent
----------------------------------------------------------------
Total HTTP Packets               64915    0.048999
 HTTP Request Packets              459    0.000346         0.71%
  GET                               24    0.000018         5.23%
  HEAD                             433    0.000327        94.34%
  OPTIONS                            2    0.000002         0.44%
 HTTP Response Packets             448    0.000338         0.69%
  ???: broken                        0    0.000000         0.00%
  1xx: Informational                 0    0.000000         0.00%
  2xx: Success                      12    0.000009         2.68%
   200 OK                           12    0.000009       100.00%
  3xx: Redirection                   0    0.000000         0.00%
  4xx: Client Error                436    0.000329        97.32%
   404 Not Found                   432    0.000326        99.08%
   403 Forbidden                     4    0.000003         0.92%
  5xx: Server Error                  0    0.000000         0.00%
 Other HTTP Packets              64008    0.048314        98.60%



HTTP Statistics

* HTTP Status Codes in reply packets
    HTTP 403 Forbidden
    HTTP 404 Not Found

* List of HTTP Request methods
    GET    24
    HEAD  433


A couple of things stick out in this output. First, we have four 403 errors indicating that someone was attempting to access something they did not have permission to. Also, out of 459 HTTP requests, 432 of them were for non-existent files. We are also seeing a lot of “HEAD” requests which could be a proxy, or could be an attacker attempting to keep from being logged to the Web server’s access log. Clearly this capture file includes some suspect traffic that warrants further investigation.

Tshark can even produce general throughput statistics if you need them. This is an excellent way to check for DoS attacks:

tshark -q -z io,stat,10 -r test.cap


IO Statistics
Interval: 10.000 secs
Column #0:
                | Column #0
Time            | frames |  bytes
000.000-010.000      254   145081
010.000-020.000      145    80003
020.000-030.000      125    65527
030.000-040.000        4      264


Note that tshark will print the frame and byte count for each interval, with the interval length defined in seconds. The only problem is that if you are capturing packets off the wire, the stats are not displayed until the capture ends.

Exec Summary

Tshark is an extremely capable packet analysis tool that has surpassed its counterparts tcpdump and windump. Combine its extensive decode capability with the flexible output display, and tshark has become the tool of choice for many packet decoders.

How To Review A Firewall Log In 15 Min Or Less – Part 2

September 29th, 2009

In my last post I introduced the concept of using white listing in order to review firewall logs. I discussed how this process can both simplify as well as expedite the log review process, by automating much of the up front work. In this post we will look at some actual examples, as well as start creating a firewall log parsing script.

The basics of grep

In order to show you the process of white listing your firewall logs, I am going to use grep. Grep is a standard Linux/UNIX tool, with free versions available for Windows (grab both the Binaries as well as the Dependencies). Grep is certainly not the most efficient tool for the job, but it is by far the simplest to learn. If you are a Perl, PHP, AWK&SED, SQL, etc. guru, by all means stick with your tool of choice. Simply mimic the process I’ve defined here using your appropriate command set.

Grep is a pattern-matching tool. It allows you to search one or more files looking for a specific pattern. When the pattern is found, the entire line is printed out. So for example the command:

grep firewall.log

would produce all lines in the file “firewall.log” that contain the IP address “”. Grep has a number of supported switches, but the only one we need for firewall log review is the “-v” switch. This switch tells grep to match all lines that DO NOT contain the specified pattern. So for example:

grep -v firewall.log

would only print out lines that do not contain the specified IP address.

With grep, a period is actually a wildcard character. So while I said the first grep command would match on the IP address, it could actually match on more than that. Grep interprets the string to read:

Match on: 192 <any single character> 168 <any single character> 1  <any single character> 10

If we want grep to match periods as periods, we have to precede them with a backslash character. So the proper syntax would actually be:

grep 192\.168\.1\.10 firewall.log
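A quick demonstration of the difference, using two fabricated lines:

```shell
# The unescaped pattern treats each period as a wildcard, so it also matches
# a line with some other character sitting in the dot positions.
printf '192.168.1.10\n192x168x1x10\n' | grep 192.168.1.10

# The escaped pattern matches only the literal IP address.
printf '192.168.1.10\n192x168x1x10\n' | grep '192\.168\.1\.10'
```

The first command prints both lines; the second prints only the genuine dotted quad.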

Finally, sometimes we want to match on multiple patterns that are strung together. For example, what if we are only interested in traffic when is the destination IP address? Depending on your firewall log format, the command may look something like this on a Linux or UNIX system:

grep 'dst 192\.168\.1\.10' firewall.log

On Windows, the command would look like this:

grep "dst 192\.168\.1\.10" firewall.log

Note the only difference is that Linux and UNIX use single quotes, while Windows uses double quotes.

Logical AND’s and OR’s

Sometimes we need to match on multiple patterns within the same line. For example what if we only wish to see TCP/80 traffic to our Web server? In this case there are actually two patterns we wish to match on the same line. The problem is there may be other stuff in the middle we don’t care about.

To perform a logical AND, simply use the grep command twice on the same line. For example:

grep "dst 192\.168\.1\.10" firewall.log | grep "dst_port 80 "

The pipe symbol feeds the output of the first command into the second. So the first grep command will grab all traffic going to and then pass it to the second grep command. The second grep command then searches this output for all traffic headed to port 80. Look closely after the port number and you will see I included a space character. Without the space character, we could potentially match on port 800, 8080, etc.
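You can see the AND behavior with a few fabricated log lines (the format here is made up purely for illustration):

```shell
# Only the line containing BOTH the destination IP and port 80 survives the
# two chained greps. Note the trailing space after "80" in the pattern.
printf 'dst 192.168.1.10 dst_port 80 \ndst 192.168.1.10 dst_port 443 \ndst 192.168.1.99 dst_port 80 \n' |
grep 'dst 192\.168\.1\.10' | grep 'dst_port 80 '
```

The 443 line fails the second grep, and the line to the .99 host fails the first, so exactly one line comes out.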

Sometimes we may wish to match on either of two values. For example what if we wanted to see both HTTP and HTTPS traffic to our Web server? In this case we would need to do a logical AND combined with a logical OR. Here’s how to do that with grep:

grep 'dst 192\.168\.1\.10' firewall.log | grep 'dst_port \(80 \|443 \)'

The first half of the command should look familiar, but the second half needs some explaining. We need to tell grep that the parentheses are grouping operators and not part of the string we wish to match. We do this by preceding them with a backslash character. The pipe character is what tells grep to process this command as a logical OR; note that the pipe also needs to be preceded by a backslash.
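Again with fabricated lines, the alternation keeps both Web ports while dropping everything else:

```shell
# \( \| \) groups the two alternatives; the trailing spaces inside the group
# prevent a false match on ports like 8080.
printf 'dst_port 80 \ndst_port 443 \ndst_port 8080 \ndst_port 25 \n' |
grep 'dst_port \(80 \|443 \)'
```

Only the 80 and 443 lines match; 8080 is excluded because “80” is never followed directly by a space in that line.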

Sorting logs with grep

OK, so we have the basics; now let’s start applying them to reviewing a firewall log file. The first thing you need to do is get the log file into ASCII format. This is the native format for many firewalls, so no conversion may be required. If the log uses a proprietary format, the vendor usually supplies a tool to do the conversion. Personally, I just send the logs to a SIM (security information management system). From there you can simply copy them off to a working directory.

Next we need to open a text editor. While we can run our grep commands on the command line to test their accuracy, we want to place the commands in a shell script or batch file so they can be easily run later. For the rest of this post I will use single quotes, which is the syntax for both Linux and UNIX. Remember that your Windows version of grep may want to see double quotes instead.

The next step is to review the log file looking for traffic patterns you recognize. Let’s take an easy one like HTTP traffic to your Web server. Look closely at a log entry and identify what unique characteristics it has which tells you its HTTP traffic to your Web server. Most likely this will be the target IP address of your Web server, as well as a target port of TCP/80. Now simply create a grep command to copy these entries to a new file:

grep 'dst 192\.168\.1\.10' firewall.log | grep 'dst_port 80 ' > web_server_http.txt

Note that rather than print the output to the screen, we redirected it to a file with a descriptive name. That way the log entries are available for later review.

If our Web server is only offering HTTP, we should only see traffic headed to port TCP/80. Any other port connection attempts can be considered suspect traffic, and may be part of a scan or probe. With the TCP/80 traffic in its own file, we now simply redirect everything else to another file:

grep 'dst 192\.168\.1\.10' firewall.log | grep -v 'dst_port 80 ' > web_server_scan.txt

This should account for all traffic headed to our Web server. Now we need to get all of this saved traffic out of the way so it will be easier to spot other entries. While we could try to delete them, it would be prudent to keep an unmodified version of the firewall log just in case we need to refer back to it. The easiest way to handle this conflict is to simply create a new temporary file.

grep -v 'dst 192\.168\.1\.10' firewall.log > temp1.txt

Now we simply open “temp1.txt” and look for the next pattern we recognize. Let’s say that’s inbound and outbound SMTP. That section of our script may look something like this:

grep 'dst 192\.168\.1\.12' temp1.txt | grep 'dst_port 25 ' > smtp_inbound.txt

grep 'dst 192\.168\.1\.12' temp1.txt | grep -v 'dst_port 25 ' > smtp_server_scan.txt

grep -v 'dst 192\.168\.1\.12' temp1.txt > temp2.txt

grep 'src 192\.168\.1\.12' temp2.txt | grep 'dst_port 25 ' > smtp_outbound.txt

grep 'src 192\.168\.1\.12' temp2.txt | grep -v 'dst_port 25 ' > smtp_server_compromise.txt

grep -v 'src 192\.168\.1\.12' temp2.txt > temp3.txt

SMTP is a bidirectional service, so the first three lines take care of inbound traffic, while the last three look at outbound. Note that in line five we expect to see the server only communicating out on TCP/25. Any other port attempts may indicate the system has been compromised and is now calling home. Obviously it would be a good idea to do the same for the Web server:

grep 'src 192\.168\.1\.10' temp3.txt > web_server_compromise.txt

grep -v 'src 192\.168\.1\.10' temp3.txt > temp4.txt

Closing out your script

Now simply repeat this process until you are left with a temp file that has log entries you don’t expect to see. It is now time to start closing out our script. First, rename your last temporary file to something that will catch your attention. On Windows the command would be:

ren temp23.txt interesting_stuff.txt

On Linux or UNIX the command would be:

mv temp23.txt interesting_stuff.txt

This interesting file will probably be the first file you are going to want to review, as it will contain all of the unexpected patterns. Now that all the normal traffic flow is out of the way, it should take substantially less time to spot anything you truly need to worry about.

One nice thing about the temp files is that they can aid in troubleshooting. For example if grep moves 1.5 MB of log entries into a new file, I should expect to see the next temp file shrink by 1.5 MB as well. If not, something is wrong in my script. Also, if you notice that all of your temp files after “temp12.txt” have a zero file length, chances are you have a syntax error just after you created “temp12.txt”.
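A one-liner makes the size comparison easy; this assumes the original log and the temp files sit in your working directory:

```shell
# Print the byte count of the original log and every temp file so you can
# verify that each sorting pass shrinks the data by the expected amount.
wc -c firewall.log temp*.txt
```

Any temp file that is unexpectedly empty, or that fails to shrink relative to the previous one, points you straight at the broken line in your script.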

Once your script is vetted and working however, you may not want to have the temp files in your working directory. That way it is easier to focus in on the sorted files during a review. When you reach this point, simply have the last line of your script delete the temp files. On Windows the syntax would be:

del /q temp*.txt

and on UNIX or Linux the command would be:

rm -f temp*.txt
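Putting the pieces together, a minimal version of the whole script might look like the sketch below. The IP addresses, log format, and file names are the hypothetical ones used throughout this post; adjust them to match your own firewall’s output.

```shell
#!/bin/sh
# Whitelist-style firewall log sort: peel off each expected traffic pattern
# into its own file, then keep whatever is left over for manual review.

LOG=firewall.log

# Expected: HTTP to the Web server; anything else to that host is suspect.
grep 'dst 192\.168\.1\.10' "$LOG" | grep 'dst_port 80 '    > web_server_http.txt
grep 'dst 192\.168\.1\.10' "$LOG" | grep -v 'dst_port 80 ' > web_server_scan.txt
grep -v 'dst 192\.168\.1\.10' "$LOG"                       > temp1.txt

# Expected: SMTP into the mail server; anything else is suspect.
grep 'dst 192\.168\.1\.12' temp1.txt | grep 'dst_port 25 '    > smtp_inbound.txt
grep 'dst 192\.168\.1\.12' temp1.txt | grep -v 'dst_port 25 ' > smtp_server_scan.txt
grep -v 'dst 192\.168\.1\.12' temp1.txt                       > temp2.txt

# Whatever survives every pass is the interesting stuff.
mv temp2.txt interesting_stuff.txt
rm -f temp*.txt
```

Each recognized pattern you add follows the same three-line shape: match the expected traffic into a named file, match the suspect variant into another, then strip the host from the working copy.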

Automating the process

Once you have a working script, it is time to automate the process. If you are running Linux or UNIX, simply set up the script to run via cron. If you need help configuring a cron job there are some excellent help pages. The equivalent on Windows is called a Scheduled Task, and Microsoft has some excellent help in the knowledgebase.

Final thoughts

I mentioned in my last post that I like to look for error packets. I will typically do this right at the beginning of my script. Also, it is not uncommon for systems to call home in order to check for patches. I usually put these exceptions at the beginning of my script as well. Something like:

grep 'dst 1\.2\.3\.6[01] ' firewall.log > server_patching.txt

grep 'dst 10\.20\.30\.[1-8] ' firewall.log >> server_patching.txt

The first command grabs all traffic headed to or (a bracket expression matches a single character, so 6[01] means a 6 followed by a 0 or a 1). The second looks for traffic headed to through Be very careful when specifying multiple systems! This is a great way to miss something interesting in your log files. You should only use this syntax to describe known patch servers. Never specify all the subnets for a particular company. For example, if you filter out all Microsoft IPs this way, then a single owned system on their network could control bots on your network and you would never know it.
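As an aside, remember that a bracket expression always matches exactly one character. A quick check against some fabricated host entries:

```shell
# 6[01] matches a literal 6 followed by a 0 or a 1 -- i.e. hosts .60 and .61
# only. Hosts .62 and .6 fall through. (Sample lines are made up.)
printf 'dst 1.2.3.60 \ndst 1.2.3.61 \ndst 1.2.3.62 \ndst 1.2.3.6 \n' |
grep 'dst 1\.2\.3\.6[01] '
```

Exactly the .60 and .61 lines print; writing something like [60-61] instead would be interpreted as a one-character class, not a numeric range.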

This syntax is also handy when writing filters for outbound log entries. For example, assume that we use through for internal addressing. To grab all outbound HTTP traffic we could say:

grep 'src 192\.168\.\([1-9]\|1[0-9]\|2[0-4]\)\.' temp15.txt | grep 'dst_port 80 ' > outbound_http.txt

The alternation matches third octets 1 through 24; the fourth octet can be left unanchored, since any host value in those subnets is internal.

As mentioned in the last post, we may wish to check outbound HTTP during non-business hours to see if we have any Malware calling home.

Don’t forget you can use grep as part of your review process as well. For example, let’s say we are reviewing interesting_stuff.txt and spot a source IP ( we know is hostile. We want to check the file to see if there are any other IP addresses we need to be concerned with. While we could keep paging through the file, a simpler solution would be to use grep:

grep -v 1\.2\.3\.4 interesting_stuff.txt

Your review script has the potential to act as a mini-IDS. By this I mean if you find a suspect pattern but figure out what’s going on, don’t be afraid to leverage your script to categorize this pattern in the future.

For example, let’s say we notice a number of systems on the Internet attempting to access port 5060, but we do not have this port open through our firewall. A quick Google search will tell us that this is SIP (Session Initiation Protocol) traffic. Why bother to look it up every time we see it, however, if our script can identify it for us? Now that we know 5060 is SIP, simply add the following line to our script:

grep 'dst_port 5060 ' temp.txt > sip_traffic.txt

We can do the same for any ports that are commonly being probed.

Exec Summary

What makes firewall log review so time consuming is that you need to sift through all of the normal traffic patterns in order to find the log entries that identify a true security issue. By white listing all expected traffic patterns, it becomes far simpler to find and review any unexpected entries within your logs. By further automating the process of sorting the logs, log review can be reduced to a quick activity that can easily be performed on a daily basis.

How To Review A Firewall Log In 15 Min Or Less – Part 1

September 25th, 2009

One of the most difficult and time consuming parts of maintaining a perimeter is reviewing firewall logs. It’s not uncommon for an organization to generate 50, 100, 500 MB or more worth of firewall log entries on a daily basis. The task is so daunting in fact, that many administrators choose to ignore their logs. In this series I’ll show you how to expedite the firewall log review process so that you can complete it faster than that morning cup of coffee.

Why firewall log review is important

I once took part in a panel discussion where one of my fellow SANS instructors announced to the crowd “the perimeter is dead and just short of useless”. I remember thinking I was glad I was not one of his students. I occasionally take on new clients and find that 7/10 times I can identify at least one compromised system they did not know about. In every case it has been the client’s own firewall logs that pointed me to the infected system.

In the old days firewall log review was all about checking your inbound drop entries to look for port scans. Today the focus is on outbound traffic. Specifically, you should be checking permitted patterns. With the plethora of non-signature Malware today it has become far too easy for an attacker to get malicious code onto a system. A properly configured perimeter will show you when a compromised system tries to call home. This is typically your best chance to identify when a system has become compromised.

What needs to be logged?

Dropped traffic does not have to be logged, provided you are not blind to DoS flood attacks. For example, if you are running a tool such as NTOP on your perimeter, or collecting RMON or Netflow data, then it is OK not to log dropped packets, as you can collect this information through other means.

When traffic is permitted across the perimeter however, you need to log it. This includes all permitted traffic, regardless of direction (egress as well as ingress). At a minimum we want to see header information for the first packet in a session. Anything beyond that can be considered a bonus.

Some kernel level rootkits do an excellent job of hiding themselves within the infected system. In fact many are so stealthy they cannot be detected by checking the system directly. One possible option is to pull the hard drive and check it from a known to be clean system. Obviously this is highly impractical whenever you have more than just a couple of systems.

A better option is to check the network for tell tale signs of the Malware calling home. Malware typically creates outbound sessions either to transfer a toolkit or check in for marching orders. The firewall is in an optimal position to potentially block, or at the very least log, both of these activity patterns. So by reviewing our firewall logs, we can quickly check every system on our network for indications of a compromise.

Malware can leverage any socket to call home, but most use TCP/80 (HTTP) or TCP/443 (HTTPS). This is because Malware authors know most firewall administrators do not log these outbound sessions, as they are responsible for the greatest portion of perimeter traffic. So again, if we are going to permit the traffic to pass our perimeter, we must ensure we are logging it.

Log review as a process

The mistake I see most administrators make is they perform a time linear analysis of their log entries looking for “the interesting stuff”. The problem is suspect traffic can be extremely difficult to detect this way as it will be mixed in with normal traffic flow. So the first thing we need to do is get the normal traffic out of the way.

Think of the rectangle in Figure #1 as representing your firewall log. Assume it contains a mixture of normal as well as suspect traffic patterns. Rather than immediately looking for the suspect patterns, let’s first get the normal patterns out of the way. For example HTTP headed to our Web server from the Internet is an expected pattern. If we pull all of these entries out of the log file, the log file becomes a little bit smaller. Inbound and outbound SMTP to our mail server is another expected pattern. Again, if we can remove these entries as well the firewall log file becomes even smaller.


Now we simply continue this process for every traffic pattern we expect to see crossing our perimeter. The more traffic patterns we recognize and move out of the way, the smaller the final log file becomes. What’s left is just the unexpected traffic patterns that require review time from a firewall administrator. I’ve seen sites that typically generate 250-300 MB worth of logs daily end up with a final file less than 100 KB in size. Needless to say, 100 KB takes far less time to review than 300 MB.

Automate, automate, automate

If this seems like a lot of work, it only will be initially. What I do is create a batch file, shell script, or set of database queries to automate the process of parsing the firewall log. We can then run this process as a CRON job or scheduled task. This means that all of the hard work (breaking up the main log file into smaller files) can be done off hours. When you walk in the door in the morning, the log file will already be segregated. You can then immediately focus in on the suspect patterns.

Helpful tips

Here are some tips I’ve developed over the years:

  • There is no “single right way” to segregate log entries. It is all about how you personally spot unexpected patterns. You can sort by IP address, port number, or whatever info you have to work with in your logs.
  • This is not about obsessively putting one log entry into every sort file. This process is about creating easier-to-spot patterns. For example a TCP reset in an HTTP stream could go in both an “error” file and an “HTTP” file. Each would make it easier to spot different types of patterns.
  • Start by pulling out error packets (TCP resets, ICMP type 3’s & 11’s). They always indicate something is broken or someone did something unexpected.
  • A smart attacker will never make your “top 5 communicators” list. I’ve seen infected systems make as few as four outbound connections in a day.
  • Make a note of the average size of each of your sort files. A sharp spike in traffic may warrant further investigation.
  • Sometimes it is helpful to parse the same pattern into two different files. For example I create an “outbound HTTP” file, and then parse out all of the traffic generated during non-business hours. This makes it much easier to find infected systems calling home.
  • Whitelist known patch sites. For example systems may call home all night long to Microsoft and Adobe to check for updated patches. If you can parse out these entries, you’ll end up with far less noise in your final file.
  • Some sites find it helpful to parse out users checking their personal email. This can be helpful information if data leakage occurs.
  • I like to segregate traffic based on security zone. For example I would be far less concerned about SSH from the internal network to the DMZ than I would about SSH headed to the Internet. If you are not sure why, read this.
  • In an ideal world, every traffic pattern you find will be described in your organization’s network usage policy. If it’s not, then further investigation may be required.
  • Expect to tweak your script over time, as networks are an evolving entity.

Exec Summary

White listing expected traffic patterns in your firewall log can help to expedite the log review process. Similar traffic becomes grouped together, and can be more easily checked for suspect patterns. In part 2 of this series I’ll walk you through the process of creating your own script using a number of different firewall products.

Spoofing Your IP Address During A Port Scan – Part 2

August 31st, 2009

In my last post I discussed an idle scan and how it can permit an attacker to mask their IP address during a port scan. In this installment we’ll look at some traces, as well as discuss how to identify when an idle scan has been used against your network.

Monitoring the IP ID increment

Let’s start by looking at the packets that were monitoring the IP ID field on the Windows system. Here is what our probe packet looked like:

07:22:15.367140 IP (tos 0x0, ttl 64, id 63897, offset 0, flags [none], proto TCP (6), length 40) > ., cksum 0xeca2 (correct), win 512

A few of these fields are kind of interesting. The TTL value is set to “64”, which suggests a Linux or UNIX system. Since we are raw-writing the packet directly to the wire, this value is actually controlled by hping; 64 just happens to be the program default, but we could change the value to anything we wish with the “-t” switch.

In the target address, tcpdump appends the TCP port to the IP address, so the trailing “.0” means the packet was sent to TCP port 0 on the Windows system. Windows does not offer services on TCP port 0, but that’s OK. We’re not actually trying to connect to the Windows system. We just need it to send us an IP packet so we can check the IP ID value. If we wanted to hit a different port, we could use hping’s “-p” switch.

The “.,” after the target specification means that no TCP flags were set within the packet. This is referred to as a null packet and would never occur during normal TCP communications. For this reason, many NIDS, NIPS and firewalls will flag it. We created a null packet because we did not tell hping to set any of the TCP flags. For example, adding the “-S” switch would have turned on the SYN flag, “-A” would turn on the acknowledgment flag, and so on.

All of our probe packets to the Windows system would look similar to this one. The only values that would change are the time stamp (obviously), the TCP source port, and the checksum value.

Here’s what the responses look like coming back from the Windows system:

07:22:15.367296 IP (tos 0x0, ttl 128, id 108, offset 0, flags [none], proto TCP (6), length 40) > R, cksum 0xc431 (correct), 0:0(0) ack 918250228 win 0

07:22:16.367453 IP (tos 0x0, ttl 128, id 109, offset 0, flags [none], proto TCP (6), length 40) > R, cksum 0xfa78 (correct), 0:0(0) ack 2127488152 win 0

07:22:17.367763 IP (tos 0x0, ttl 128, id 110, offset 0, flags [none], proto TCP (6), length 40) > R, cksum 0x2b9f (correct), 0:0(0) ack 1611256374 win 0

The important value here is the IP ID, identified as “id”. In the first packet it is set to 108, and then the value increments by +1 for each subsequent packet (109 then 110). As long as we continue to see an uninterrupted sequence in the IP ID, we know the Windows system is not transmitting any other packets except the ones that are part of this session.

Probing a closed port

Remember that as part of our scan, we will be spoofing the source IP address of the Windows system when targeting ports on a remote system. A change in the IP ID increment is what tells us we found an open port.

Let’s take a look at one of these scan packets:

10:30:28.852602 IP (tos 0x0, ttl 64, id 41256, offset 0, flags [none], proto TCP (6), length 40) > S, cksum 0x97a6 (correct), 1704542340:1704542340(0) win 512

Note the source IP address is that of the Windows system ( We know this could not have come from the Windows system however, because the TTL is set to 64. So this is a packet we generated using a second instance of hping.

Also note we have targeted TCP port 79 and have turned on the SYN flag. Here is the response we received from the remote target:

10:30:28.852839 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 40) > R, cksum 0xbb1a (correct), 0:0(0) ack 1704542341 win 0

Note the “R” value that tells us the reset flag was turned on in the TCP header. We also see “ack” which tells us the acknowledgement flag was turned on as well. This is an error packet informing the source that the TCP port is in a closed state. Also note the packet is being transmitted to the Windows system since that was the source IP address in the probe packet.

So how will the Windows system respond? You may remember from my last post that the RFCs state you should never respond to an error packet. Even though the Windows system has no idea what the target is talking about, it will quietly discard this error packet. In other words, the Windows system WILL NOT send a response. This means we should see no change in the IP ID increment.

Probing an open port

Now let’s take a look at what happens when we target a port that is actually in an open state. Here’s the probe packet. Note it is pretty similar to the last one, except now we are checking TCP port 80.

10:29:46.947964 IP (tos 0x0, ttl 64, id 15249, offset 0, flags [none], proto TCP (6), length 40) > S, cksum 0x476b (correct), 947341260:947341260(0) win 512

Here is the response from the target. Note that we see an “S” instead of an “R”, which means the SYN flag is turned on rather than the reset flag. This is the target’s way of informing the source that TCP port 80 is open and has an application sitting behind it servicing connections.

10:29:46.953333 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 44) > S, cksum 0x0de6 (correct), 262246932:262246932(0) ack 947341261 win 5840 <mss 1460>

So this time the Windows system receives a packet saying “Sure, you can connect to TCP port 80”. This is a problem for the Windows system because it has no connection entry saying that it actually tried to connect to port 80 on the target. Without the connection entry, Windows has no way of knowing which application requested the session. With this in mind, it does the only thing it can do: transmit a reset packet to the target, killing the session. Here is what the packet looked like:

10:29:46.953439 IP (tos 0x0, ttl 128, id 176, offset 0, flags [none], proto TCP (6), length 40) > R, cksum 0x5df1 (correct), 947341261:947341261(0) win 0

Note that Windows stamped the next available IP ID in the IP header (176). So had we been monitoring the IP ID increment on the Windows system in a different session, we would see: 173, 174, 175, 177, 178, 179. In other words, we would have seen that 176 was missing (+2 from the previous packet). This would be our indication that the TCP/80 probe sent to the target hit an open port.
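The gap-spotting logic just described can be sketched in a few lines. This is a hypothetical helper (not part of hping): it walks a list of IP IDs we observed and reports the ones the host consumed on packets we never saw.

```python
def missing_ipids(observed):
    """Return the IP IDs a host used between our probes.

    IP IDs are 16-bit values, so deltas are taken modulo 65536.
    """
    gaps = []
    for prev, curr in zip(observed, observed[1:]):
        delta = (curr - prev) % 65536
        # A delta of 1 means only the response to our probe consumed an ID;
        # anything larger means the host transmitted packets we did not see.
        for k in range(1, delta):
            gaps.append((prev + k) % 65536)
    return gaps

print(missing_ipids([173, 174, 175, 177, 178, 179]))  # → [176]
```

Run against the sequence from the example above, it flags 176 as the IP ID that went to the reset packet.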

Identifying an idle scan

So an idle scan does an excellent job of masking the true source IP address of a port scan. This means that without access to log information on the Windows host (something that will not exist if the attacker has done their homework), we will never be able to discover the true source IP address of the attack.

Is there a way to tell an idle scan was performed rather than a straightforward port scan, so we know if it is worth investigating the source IP address? There are in fact a couple of clues we can look for.

Probe retries

If the targeted system is sitting behind a firewall, you will notice a difference in the number of probe retries vs. a normal port scanner. When someone runs a normal port scanner, the scanner will typically hit open ports only once but firewalled ports multiple times. This is because the firewalled ports return no response. The scanner will make multiple attempts to ensure the packets are not just getting lost. Since the open port will actually respond, only one probe packet is required.

When an idle scan is performed, the opposite is true. If a firewalled port is probed there will be no IP ID increment change on the Windows system. So only one check is required to confirm this is not an active listening port. When an open port is probed however, we’ll detect a change in the Windows system’s IP ID increment. While we think this is due to us finding an open port, it could just as likely be because the Windows system sent a broadcast. So we will want to perform multiple probes against the open port to ensure the Windows IP ID data remains constant.


  • Probe firewalled ports more often than open ports = normal port scanner
  • Probe open ports more often than firewalled ports = idle scan
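As a toy illustration of this heuristic, here is a hypothetical classifier; the inputs are simply lists of how many probes were observed against each open and each firewalled port.

```python
def classify_by_retries(open_port_probes, firewalled_port_probes):
    """Compare the average probe count per port type (see bullets above)."""
    avg_open = sum(open_port_probes) / len(open_port_probes)
    avg_fw = sum(firewalled_port_probes) / len(firewalled_port_probes)
    if avg_fw > avg_open:
        return "normal port scanner"
    if avg_open > avg_fw:
        return "possible idle scan"
    return "inconclusive"

# Open ports hit three times each, firewalled ports only once:
print(classify_by_retries([3, 3, 3], [1, 1, 1]))  # → possible idle scan
```

Real traffic is noisier than this, of course, but the direction of the imbalance is the clue.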


Normal port scans can be extremely fast. It is not uncommon to see hundreds of probe packets per second. With an idle scan however, we have to take things a bit slower. This is because we must check the IP ID of the spoofed address, send the probe packet, permit enough time for the probe to elicit a response, give the spoofed system enough time to respond with a reset if required (thus using up an IP ID), and then check the IP ID again to see if the increment has changed.

Try to perform all of these steps too quickly, and you will pay the price in accuracy.  For example nmap supports idle scans with the “-sI” switch. The default extremely fast scanning speed works fine if all of the systems are on the same switched network. If the hosts are 15 hops away from each other on the Internet however, my experience has been the accuracy rating drops off considerably unless you slow down the scanning speed.


  • Hundreds of packets per second = normal port scanner
  • A dozen or fewer packets per second = possible idle scan, could just be a slow scan

TTL field

Let’s look again at the spoofed probe packet as well as the legitimate reset response sent by the Windows system:

10:29:46.947964 IP (tos 0x0, ttl 64, id 15249, offset 0, flags [none], proto TCP (6), length 40) > S, cksum 0x476b (correct), 947341260:947341260(0) win 512


10:29:46.953439 IP (tos 0x0, ttl 128, id 176, offset 0, flags [none], proto TCP (6), length 40) > R, cksum 0x5df1 (correct), 947341261:947341261(0) win 0

You may have noticed the TTLs are way off from each other. If both of these packets had in fact originated from then they would both have the same starting TTL value. The fact that they are different tells us that one of them is spoofed.

If you are monitoring the perimeter of the target system and notice that the SYN and reset packets have different TTL values, that’s a clue that you may be experiencing an idle scan. Now, as mentioned earlier, it is entirely possible to set the TTL to whatever we wish in our spoofed packet. So a smart attacker is simply going to set their starting TTL to 128 to better mimic the Windows system. If they do this, your only hope is that the attacker and the Windows system are a different number of hops away. In other words, if you consistently see the SYN packets with a TTL of 115, but the resets consistently have a TTL of 113, you are most likely looking at an idle scan.

If the TTLs do match, we can’t entirely rule out an idle scan. The attacker may just be that good.


  • TTLs of SYNs and RSTs are consistent but do not match = idle scan
  • TTLs of SYNs and RSTs match = normal port scanner or smart idle scanner
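A sketch of the TTL clue; the "consistent" test here (every observed value identical) is my own simplification, since real traffic can show a hop or two of jitter.

```python
def ttl_clue(syn_ttls, rst_ttls):
    """Consistent-but-different TTLs on SYNs vs. resets suggest an idle scan."""
    syn_consistent = len(set(syn_ttls)) == 1
    rst_consistent = len(set(rst_ttls)) == 1
    if syn_consistent and rst_consistent and syn_ttls[0] != rst_ttls[0]:
        return "likely idle scan"
    return "normal port scanner or careful idle scanner"

# The 115-vs-113 example from the text:
print(ttl_clue([115, 115, 115], [113, 113, 113]))  # → likely idle scan
```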

IP ID value

The best way to tag an idle scan is to leverage the same thing the attacker uses, namely the IP ID value. Take another look at those last two decoded packets. Note that the IP ID in the first one is 15249 but in the second it is 176. If you are seeing an idle scan, you will notice the reset packets consistently have an IP ID increment of +2 (the same thing the attacker sees), but the SYN packets will have some completely unrelated value. The IP ID increment in the SYN packets might be predictable or it might be random. Quite honestly, it does not matter. What does matter is that you can spot some pattern in the IP ID of the reset packets that does not jibe with the IP IDs in the SYN packets.


  • IP ID of SYN and RST packets appear related = normal port scanner
  • IP ID increment of RST does not match SYNs = idle scan
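This pattern check is easy to express in code. A minimal sketch (hypothetical helper; the sample ID values below are illustrative):

```python
def fixed_step(ids, step, modulus=65536):
    """True if the IP ID sequence climbs by a constant step (mod 2^16)."""
    return all((b - a) % modulus == step for a, b in zip(ids, ids[1:]))

rst_ids = [176, 178, 180, 182]    # +2 each time, exactly what the attacker sees
syn_ids = [15249, 2201, 40933]    # unrelated values from the real scanning host
print(fixed_step(rst_ids, 2))     # → True: resets follow the zombie's counter
print(fixed_step(syn_ids, 2))     # → False: SYN IDs come from somewhere else
```

A True for the resets combined with a False for the SYNs is the idle-scan signature described above.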

Exec Summary

Hopefully you now have a better understanding of what is happening on the wire when an attacker performs an idle scan. If we are targeted with an idle scan, we cannot determine the true source IP address of the attacker. The best we can hope to accomplish is to determine that the scan is an idle scan vs. a normal port scan. There are a number of telltale clues in the packets, the best of which is the IP ID value.

Spoofing Your IP Address During A Port Scan – Part 1

August 28th, 2009

I love debunking myths; one of my favorites is “a port scanner must reveal their true source IP address”. In this series I’ll show you how to perform a port scan while hiding your source IP address from the host being scanned. I’ll also tell you how you can detect the technique when it is used against you.

Nmap’s decoy mode

An alternative to the technique I will describe is nmap’s decoy mode. With decoy mode you identify a number of bogus source IP addresses. From the target host, it looks like all of the bogus IP addresses, as well as the true source IP address, are all performing a port scan at the same time. The concept is the administrator under attack will have no way of knowing which IP address is in fact the true IP performing the scan.

This technique really does not mask the true source, as the source IP address is one of the IPs performing the scan. If you know what to look for, you can easily figure out which source IP is actually scanning you. So while this technique will work, it is not completely effective at hiding the source IP address.

What is an idle scan?

When we perform an idle scan, we do not actually directly detect open ports. Rather, we detect the effect an open port would have on a third party system. The technique is similar to how many viruses are detected in the human body. Rather than detecting the actual virus, we look for antibodies that get produced when the virus is present in the system. An idle scan detects open ports in much the same fashion.

Before we can dig too deeply into an idle scan, we need to look at some of the intricacies of IP.

Predictable header values

While the RFCs are designed to be specific enough that dissimilar operating systems will still be able to communicate via IP, they still leave quite a bit open to interpretation. For example, the RFCs specify that the maximum Time To Live (TTL) value that can be used is 255. They do not however specify what initial TTL value must be used, so different operating systems use different starting TTLs. The RFCs describe how Ping should work, but do not specify what should be in the payload of Echo-Request packets. Again, different vendors use different values. These nuances can permit you to identify the source operating system based on variations in the packet contents. The technique is referred to as passive fingerprinting.

The IP identifier (IP ID) field in the IP header (bytes 4 and 5) is a similar situation. RFC 791 specifies that the number must be unique on a per host, per session basis. For example, let’s say I connect to a remote SSH server. Each IP ID in that session must be unique. If I close the session and then connect back later, it is RFC compliant if one or more IP ID values get used again. They don’t have to be, but if it does happen it is not a problem.

So the RFCs say the IP ID needs to be unique, but they do not really tie down how to go about generating the value. This has led to different operating systems deploying different methodologies. For example, Windows starts at an IP ID value of 1 and simply increments the value by +1 for every packet leaving the system. When the maximum value of 65,535 is reached, it starts back over at 1. BSD puts a random value into the IP ID field of each packet leaving the system. Linux is random for TCP packets (except initial responses, which are always zero), +1 incremental for ICMP, and time based for UDP. Whew!
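A toy model of the Windows behavior just described (start at 1, add 1 per packet, wrap from 65,535 back to 1) makes the predictability obvious:

```python
class WindowsIPID:
    """Simulates the predictable +1-per-packet IP ID counter."""

    def __init__(self):
        self.value = 0            # next_id() will return 1 first

    def next_id(self):
        self.value = self.value % 65535 + 1   # cycles 1..65535, then wraps
        return self.value

counter = WindowsIPID()
print([counter.next_id() for _ in range(3)])  # → [1, 2, 3]
counter.value = 65534
print([counter.next_id() for _ in range(2)])  # → [65535, 1]
```

Given any observed ID, the next one is fully determined; that is the property the idle scan exploits.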

The one that is interesting for our purposes is Windows. The fact that each packet leaving the system gets a +1 IP ID makes the value extremely predictable. For example, consider the following output:

[root@fubar ~]# hping -r

HPING (eth0 NO FLAGS are set, 40 headers + 0 data bytes

len=46 ip= ttl=128 id=108 sport=0 flags=RA seq=0 win=0 rtt=0.4 ms

len=46 ip= ttl=128 id=+1 sport=0 flags=RA seq=1 win=0 rtt=0.4 ms

len=46 ip= ttl=128 id=+1 sport=0 flags=RA seq=2 win=0 rtt=0.4 ms

len=46 ip= ttl=128 id=+2 sport=0 flags=RA seq=3 win=0 rtt=0.4 ms

len=46 ip= ttl=128 id=+1 sport=0 flags=RA seq=4 win=0 rtt=0.4 ms

len=46 ip= ttl=128 id=+1 sport=0 flags=RA seq=5 win=0 rtt=0.4 ms

hping is a packet crafting tool which allows you to create your own IP packets. In the above output we are using the “-r” switch to have hping monitor the IP ID increment of a remote system. We know it is a Windows system because Windows always uses a starting TTL of 128. Now look at the “id=” values. In the first line of output, hping prints the absolute IP ID value used by the system; in this case the value is 108. Each subsequent line then prints out the delta change from the previous packet. So in the second line the actual IP ID was 109, which is “+1” from the previous value of 108. The next packet had an IP ID of 110, which is “+1” from the previous IP ID value of 109.

Look closely at the fourth line of output. Note the delta change was “+2”. Since Windows uses sequential IP IDs, this tells us a packet we didn’t get to see just left the Windows system. We don’t know where it was going, but that’s OK. What’s important is that we can identify when the Windows system transmits and how many packets it sends out. For example had that line read “+5”, we would know that the Windows system transmitted four other packets since responding to our last probe.
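The arithmetic here is simple: each “+n” delta means n−1 packets left the box unseen. A quick sketch:

```python
def hidden_packets(deltas):
    """Count packets the monitored box sent that were not responses to us."""
    return sum(d - 1 for d in deltas)

# The hping run above showed deltas +1, +1, +2, +1, +1 after the baseline.
print(hidden_packets([1, 1, 2, 1, 1]))  # → 1
print(hidden_packets([5]))              # → 4 (the "+5" case in the text)
```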

Detecting open ports

So how can we leverage the predictable IP ID value of Windows for evil? One possibility is to turn the Windows system into an open port sensor. Here’s how we do it:

  1. Monitor the current IP ID being used by a Windows system. We should check the value at regular intervals over a relatively short period of time, say once per second.
  2. Find a target system we wish to port scan.
  3. While spoofing the source IP address of the Windows system, send a SYN packet to the TCP port we wish to probe on the target.

The target system will send a response packet back to the Windows system. This response will either be:

  • A TCP reset error, if the probed port is closed
  • A SYN/ACK, if the probed port is open

The RFCs state you should never respond to error packets, regardless of whether you consider them to be legitimate or not. So when the Windows box receives the TCP reset error packet from the target host, it quietly ignores and discards the packet.

Things get a bit more interesting when a SYN/ACK is received however. From the Windows system’s perspective, it is just hanging out minding its own business when some unknown system sends it a SYN/ACK packet (remember we spoofed the Windows system’s IP address in the probe packet). A SYN/ACK effectively means “Sure, you can connect to me on that TCP port, no problem”. Of course since the Windows system didn’t actually send the SYN packet, it has no idea what the remote target is talking about.

With this in mind the Windows system sends a TCP reset error packet back to the target host. When the reset packet is transmitted, the next available IP ID is used within the IP header. This missing IP ID would be detected if we are still monitoring the IP ID increment once per second. So to review:

  • Closed port on target = No packets leaving Windows system
  • Open port on target = Windows sends a TCP reset using up an IP ID

So by monitoring the IP ID increment, we can identify when an open port is discovered as only probes to open ports will cause the IP ID increment to change.
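Putting those two bullets together, the inference made each monitoring interval looks roughly like this (a sketch assuming exactly one monitoring probe per interval, which itself consumes one IP ID on the zombie):

```python
def infer_port_state(ipid_delta):
    """Interpret the zombie's IP ID movement between two monitoring probes."""
    if ipid_delta == 1:
        return "closed"   # only the reset to our own probe used an ID
    if ipid_delta == 2:
        return "open"     # the zombie also reset the target's SYN/ACK
    return "noisy zombie - re-probe to confirm"

print(infer_port_state(1))  # → closed
print(infer_port_state(2))  # → open
```

Anything larger than +2 means the zombie sent unrelated traffic, which is why the quiet-host requirement below matters.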


You can’t use just any Windows system for this attack. The box must meet certain criteria:

  • Relatively quiet system generating little traffic (like a home system)
  • No stateful filtering of TCP traffic

Of course go to any cable or DSL network at 2:00 AM local time and you can find hundreds of thousands of systems that meet these criteria. Remember that Windows systems love to arbitrarily broadcast, so you may wish to perform multiple checks of each open port just to ensure the IP ID increment change was in fact due to an open port being probed.

Exec Summary

An idle scan lets you probe open ports on a remote target, while fooling the target into believing that some third party system is performing the scan. Open ports are detected by monitoring for irregularities in the IP ID increment of the Windows box.

In the next installment we’ll actually see what these packets look like on the wire, as well as discuss how to detect an idle scan when it is used against you.

Network Mapping Through A Firewall – Part 3

August 26th, 2009

In my last two posts I talked about two different methods that can be used to map a network through a firewall. The first leveraged ICMP time exceeded in transit errors, while the second used the IP record route option. In both posts I also gave possible solutions for preventing an attacker from using these techniques against your network.

In both cases however, supported features available in commercial grade firewalls limited our security options. In this third and final part of the series, I will cover how to properly prevent these attacks if you are using an open source firewall. I will specifically be using Netfilter, but many of the techniques are applicable to pf as well.

What is Netfilter?

Netfilter is the stateful inspection firewall that is included in every modern distribution of Linux. If you have a copy of Linux, you also have a copy of Netfilter. Netfilter is sometimes referred to as iptables, but this is because iptables is the name of the binary you use to manipulate the Netfilter rulebase. Netfilter is an extremely capable firewall with too many features to cover in this post. I highly recommend you check out some of the FAQs and tutorials as they do an excellent job of describing many of the features.

Controlling tcptraceroute

In the first post I described how tools like tcptraceroute could punch through an open firewall rule to map the network sitting behind it. With commercial firewalls, we were limited to controlling the flow of outbound ICMP time exceeded in transit errors.

With Netfilter, we have the ability to control traffic based on the TTL value. We can look for a specific value, or a value above or below a certain threshold. The supported switches are:

  • -m ttl --ttl-eq = Match packets with a TTL of a specified value
  • -m ttl --ttl-gt = Match packets with a TTL higher than a specified value
  • -m ttl --ttl-lt = Match packets with a TTL below a specified value

Here is a possible Netfilter rule we can use:

iptables -A FORWARD -m ttl --ttl-lt 5 -j DROP

This rule would be processed prior to any permit rules in the rulebase. The rule simply checks the TTL value to see if it is less than 5. If so, the packet is dropped. Since the lowest initial TTL used by a modern OS is 64, and most systems are about 15 hops away from each other on the Internet, we should never inadvertently filter out legitimate traffic.
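The arithmetic behind that safety claim is easy to sanity-check; the initial TTL and hop counts below are the assumptions just stated in the text.

```python
def arrives_above_threshold(initial_ttl, hops, threshold=5):
    """Would a packet still clear the --ttl-lt filter after `hops` routers?"""
    return (initial_ttl - hops) >= threshold

print(arrives_above_threshold(64, 15))   # → True: normal traffic unaffected
print(arrives_above_threshold(64, 62))   # → False: traceroute-style probe dropped
```

Only a sender deliberately crafting tiny TTLs (i.e., a traceroute tool) arrives below the threshold.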

Here’s tcptraceroute running though a regular firewall:

[root@fubar ~]# tcptraceroute -n -f 1 -m 5 -q 1 -S 80
Selected device eth0, address, port 39142 for outgoing packets
Tracing the path to on TCP port 80 (http), 5 hops max
1 0.353 ms
2 0.450 ms
3 [open] 0.586 ms

And here is what tcptraceroute sees once we implement the above Netfilter rule:

[root@fubar ~]# tcptraceroute -n -f 1 -q 1 -S 80
Selected device eth0, address, port 54531 for outgoing packets
Tracing the path to on TCP port 80 (http), 30 hops max
1 10.175 ms
2 0.464 ms
3 *
4 *
5 *
6 *
7 [open] 1.007 ms

Note that once we start filtering on TTL value, the appearance of our perimeter changes. Without the rule, an attacker could enumerate our IP addressing scheme. Even if we filtered outbound TimeX packets, they would still know the proper hop count. The Netfilter rule makes it much more difficult to accurately identify our network layout.

Adding in some deception

One of the more powerful features of Netfilter is the ability to customize reject messages. While most firewalls reject packets by returning an administratively prohibited error message, Netfilter lets you choose from a number of different unreachable error codes. This makes for some interesting possibilities. For example, consider the following rule:

iptables -A FORWARD -m ttl --ttl-lt 5 -j REJECT --reject-with icmp-host-unreachable

This rule tells Netfilter that whenever it sees a packet with a TTL less than 5, it should return an ICMP destination host unreachable packet. In other words, Netfilter will impersonate a router and tell the transmitting system that the target host is off-line. Here’s an example of tcptraceroute output once this rule has been implemented:

[root@fubar ~]# tcptraceroute -n -f 1 -q 1 -S 80
Selected device eth0, address, port 47555 for outgoing packets
Tracing the path to on TCP port 80 (http), 30 hops max
1 0.299 ms
2 0.450 ms
3 0.403 ms !H

Compare this output to the first tcptraceroute output shown above. Note that line 3 is now different. With a regular firewall, hop three was a response from the target host. In this output however, it appears the upstream router is returning an ICMP host unreachable (designated as “!H”) signifying the host is off-line. Since tcptraceroute thinks the host is off-line, it gives up trying and never actually reaches the target host.

So while this technique is a bit of security through obscurity, it is effective at disabling a tool that would normally punch right through a firewall. Since regular traffic would not have an abnormally low TTL value, it does not match this rule and is unaffected.

Controlling record route

In my second post in this series I talked about record route and how it can be leveraged to map through a firewall. I discussed that the range of the tool is limited (max 8 hops, 3 if you want hop info in both directions), but that there are ways for an attacker to get around this restriction. I also mentioned that commercial firewalls typically do not give you the ability to control record route traffic.

With Netfilter, there is support for controlling IP options via the ipv4options module. The supported switches are:

  • -m ipv4options --ssrr = Match packets with strict source routing set
  • -m ipv4options --lsrr = Match packets with loose source routing set
  • -m ipv4options --rr = Match packets with record route set
  • -m ipv4options --ts = Match packets with timestamp set
  • -m ipv4options --ra = Match packets with router-alert set
  • -m ipv4options --any-opt = Match packets with at least one IP option set

Here’s an example of a rule that would block packets with the record route option set:

iptables -A FORWARD -m ipv4options --rr -j REJECT --reject-with icmp-host-unreachable

Note we are sending back an ICMP host unreachable in response. This is in order to shut down the tool mapping our network.

Exec Summary

While commercial firewalls excel at centralized management and selecting pleasing colors for their graphical interfaces, they usually pale in comparison to open source firewalls with regard to controlling traffic on the wire. In order to protect their networks, firewall administrators need greater control of the IP header than simply scrutinizing the source and destination IP addresses.

Network Mapping Through A Firewall – Part 2

August 25th, 2009

In my last post I discussed how to use ICMP time exceeded in transit errors to map a network perimeter. I also discussed how to prevent attackers from using this technique against your network. In this post I’ll discuss another network mapping technique using the record route IP header options.

IPv4 header options

The IP header is normally 20 bytes in size but can grow larger if one or more options are enabled. IP options get added to the end of the IP header, as shown in Figure #1. There are a number of registered IP options. The ones most frequently implemented however are the ones defined in RFC 791. Most operating systems and hardware devices have implemented the IP option record route (option 7), which is a part of the RFC 791 specification.


Record Route

The record route option can produce similar data to traceroute, but has a completely different methodology for identifying intermediary hops. As I discussed in my last post, traceroute uses the receipt of ICMP time exceeded in transit errors to map all of the network hops between two points. This requires multiple packets to be transmitted, as the tool needs to increment the TTL value.

Record route does not vary the TTL, and only requires a single packet to record hops along a link. Since the option exists within the IP header, it can be leveraged with any IP transport or application.

Here is example output of a record route session using Ping under Linux:

[root@fubar ~]# ping -c 1 -R

PING ( 56(124) bytes of data.

From icmp_seq=1 Redirect Host(New nexthop:

64 bytes from icmp_seq=1 ttl=125 time=6.56 ms



— ping statistics —

1 packets transmitted, 1 received, 0% packet loss, time 6ms

rtt min/avg/max/mdev = 6.564/6.564/6.564/0.000 ms

Note that by setting the record route option in Ping (the “-R” switch) we’ve recorded all the router hops out to the target system at, and back again. So we’ve effectively generated a map of the network between the two points.

Record route decode

Here is an example decode of a record route packet:

07:04:32.934999 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 124, options (NOP,RR, > ICMP echo request, id 43604, seq 1, length 64

0x0000:  4f00 007c 0000 4000 4001 6858 c0a8 c90a  O..|..@.@.hX....

0x0010:  c0a8 cc0a 0107 2708 c0a8 c90a 0000 0000  ......'.........

0x0020:  0000 0000 0000 0000 0000 0000 0000 0000  ................

0x0030:  0000 0000 0000 0000 0000 0000 0800 5df6  ..............].

0x0040:  aa54 0001 4022 914a 2544 0e00 0809 0a0b  .T..@".J%D......

0x0050:  0c0d 0e0f 1011 1213 1415 1617 1819 1a1b  ................

0x0060:  1c1d 1e1f 2021 2223 2425 2627 2829 2a2b  .....!"#$%&'()*+

0x0070:  2c2d 2e2f 3031 3233 3435 3637               ,-./01234567

A couple of points in the above decode are worth noting. Normally the beginning of the IP header starts with a Hex value of 4500. This means:

  • 4 = IP version
  • 5 = 5 32-bit words, or (32/8) x 5 = 20 bytes, the size of the IP header
  • 00 = Type Of Service (TOS) field, no values set

The decode above starts with the Hex value “4f00”, which means the IP header is larger than a regular IP header. This is our first clue that at least one IP option is set. How big is the IP header? If we convert “f” in Hex to decimal we get 15. 15 32-bit words converts to 60 bytes, which is the largest possible size for an IP header.
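That first-byte arithmetic is easy to verify with a quick sketch; the byte values come straight from the decode above.

```python
def parse_first_byte(first_byte):
    """Split the IP version nibble from the header length (in bytes)."""
    version = first_byte >> 4
    header_bytes = (first_byte & 0x0F) * 4   # 32-bit words → bytes
    return version, header_bytes

print(parse_first_byte(0x45))  # → (4, 20): a normal IP header
print(parse_first_byte(0x4F))  # → (4, 60): maximum size, options present
```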

Also, note the series of zeros at the end of the header. When a record route packet is transmitted, the sending system needs to reserve space for all of the IP addresses that must be included. Windows will ask you to identify this value up front. Linux and UNIX simply go for the maximum. It does not cause a problem if reserved space goes unused. The rest of the packet carries a normal Echo-Request payload.

Record route limitations

You may have noticed that the above decode only reserved space for 8 IP addresses. Since most systems on the Internet are about 15 hops away from each other, what happens when 8 is not enough? Remember we said 60 bytes is the maximum size for an IP header. If we remove the rest of the IP header fields, that leaves us enough room to store 9 IP addresses. The transmitting system always stores its IP address in the option field, since technically it is the first IP address to forward the packet. This leaves us enough room for 8 more IP addresses maximum. If the packet travels over more than 8 hops, the remaining routers will simply ignore the record route option.
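The capacity math works out like this (the 3-byte overhead is the record route option's type, length and pointer fields from RFC 791):

```python
# Maximum IP header is 60 bytes; the fixed portion is 20, leaving 40 for options.
# The record route option spends 3 bytes on its type, length and pointer fields,
# then 4 bytes per recorded IPv4 address.
option_space = 60 - 20
address_slots = (option_space - 3) // 4
print(address_slots)  # → 9: the sender's own address plus 8 more hops
```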

Here’s an example of what I mean. This output was generated with the Ping utility under Windows. The “-r” switch identifies that the record route option should be set. The numeric value identifies how many hops to record.

C:\test>ping -r 8 -n 1

Pinging [] with 32 bytes of data:

Reply from bytes=32 time=702ms TTL=50

Route: -> -> -> -> -> -> ->

Ping statistics for

Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),

Approximate round trip times in milli-seconds:

Minimum = 702ms, Maximum = 702ms, Average = 702ms

The Wikipedia Web site is actually 19 hops away from my current location. Record route is only capable of recording the first 8 hops along the way.

Do I need to be concerned with record route?

Since record route is only capable of recording 8 hops, and most of us are 15 hops away from each other, is it truly a valid security concern? The 15 hop rule is only true a majority of the time. If I attempt to record route to a network that uses the same ISP that I do, I’ll probably generate a full network map. Further, if an internal system becomes compromised, record route can easily be leveraged to map the network from the compromised host’s location.

So record route is not a common attack vector, but it’s certainly going to be one of the tools a smart attacker will leverage when possible.

Protecting against record route

Record route is one of those communication parameters that gets ignored by most commercial firewall vendors. By that I mean they include support for record route in their RFC compliant IP stack, but give you little ability to control it via policy enforcement. Open source firewalls tend to do a better job controlling record route, but I’ll get into that in part 3 of this series.

If your firewall, HIPS or HIDS gives you access to the signature language, you can usually write a signature to flag all packets with an IP header size larger than 20 bytes. This does not guarantee the packet is using record route, as it could also mean that some other IP option is being used. To be frank however, all of the IP options can be leveraged for evil. Every one of them should be blocked, or at the very least detected, at the perimeter. I’ll cover more about IP options in a later post.
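Such a signature boils down to inspecting one nibble of the packet. A minimal sketch over raw IPv4 bytes (the sample packets are fabricated for illustration):

```python
def has_ip_options(packet):
    """True for any IPv4 packet whose header exceeds 20 bytes."""
    ihl_words = packet[0] & 0x0F
    return ihl_words > 5          # more than 5 words means options are present

plain = bytes([0x45]) + bytes(19)   # ordinary 20-byte header
rr = bytes([0x4F]) + bytes(59)      # 60-byte header, e.g. record route set
print(has_ip_options(plain), has_ip_options(rr))  # → False True
```

As the text notes, this flags any IP option, not just record route, which for perimeter-blocking purposes is exactly what you want.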

Exec Summary

Record route can produce a network map similar to the traceroute tool, but is limited to only recording 8 hops. While this limits its usefulness to an attacker, it’s entirely possible to run a record route session close enough to the target network to enumerate valuable data. Most firewalls do not give you the ability to control record route traffic, but you may be able to control/detect it with a signature based device.

Network Mapping Through A Firewall – Part 1

August 24th, 2009

When we create a set of firewall rules, one of our objectives is usually to stop attackers on the Internet from being able to map the internal network sitting behind the firewall. In this write-up I'll discuss two different techniques which will let an attacker punch right through most firewall setups, and what additional steps must be taken to prevent them.

The two techniques we will cover are:

  • Eliciting time exceeded in transit errors
  • IP header record route options

Understanding Time exceeded in transit errors

When a router receives a packet traveling from one network to another, it is required to decrement the TTL value by one. So if the packet currently has a TTL of 120, the router would change the value to 119 as it passes the packet along the network. The TTL field sits at byte offset 8 within the IP header and is shown in Figure #1.


If a router receives a packet with a TTL value of 1, it is not allowed to decrement the value to 0. Rather, the router generates an ICMP type 11, code 0 packet; referred to as an ICMP time exceeded in transit (TimeX) error. The TimeX error is then sent to the source IP address listed in the packet that had a TTL value of 1. Here’s an example TimeX packet. Note that 28 bytes of the original packet that caused the TimeX to be generated is embedded in the payload. The TTL value of this embedded header is 1.

10:14:19.947925 IP (tos 0xc0, ttl 63, id 26344, offset 0, flags [none], proto ICMP (1), length 88) > ICMP time exceeded in-transit, length 68
IP (tos 0x0, ttl 1, id 34730, offset 0, flags [none], proto ICMP (1), length 60) > ICMP echo request, id 18212, seq 1, length 40

One interesting point here is that RFC 792 defines that packets should be dropped when the TTL reaches 0, not 1. I’m unaware of any router or system that actually follows the RFCs. Every device I’ve seen drops the packet when the TTL is 1. You will however find many incorrect documents that describe this process quoting the RFCs rather than reality.

Network mapping with TimeX

Most network administrators are familiar with the traceroute and LFT tools under Linux and UNIX, and tracert and pathping under Windows. Each tool will identify all of the router hops from a source system to a specified target. This is accomplished by transmitting multiple packets and incrementing the TTL value.

Each of the above-mentioned tools uses TimeX errors to map all of the routers between two hosts. An example is shown in Figure #2. The tool starts by transmitting packets with an initial TTL value of 1. This causes the first router to return a TimeX error. The tool then looks at the source IP address of the TimeX error and records this as the first hop along the link.


Packets with a TTL of 2 are then transmitted. When they pass through the first router, the TTL is decremented to 1. This causes the second router to generate a TimeX error. Again, we simply record the source IP address of the TimeX error as the second hop along the link. When an initial TTL value of 3 is transmitted, the third router generates the TimeX error. This continues until we eventually reach the target system. We’ve now efficiently mapped the IP addresses of all of the routers between the source and target system.
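The loop above can be sketched as a simulation. The router names here are hypothetical placeholders for the TimeX source addresses; a real tool of course sends actual probe packets:

```python
def probe(path, initial_ttl):
    """path: routers in order, ending with the target host.
    Each router decrements the TTL; a router holding a packet with
    TTL 1 answers with a TimeX error instead of forwarding it."""
    ttl = initial_ttl
    for i, hop in enumerate(path):
        if i == len(path) - 1:
            return ("reply", hop)   # reached the target system
        if ttl == 1:
            return ("timex", hop)   # router returns ICMP type 11, code 0
        ttl -= 1                    # router forwards with TTL - 1
    return ("lost", None)

def traceroute(path):
    """Increment the initial TTL until the target answers, recording
    each TimeX source address -- exactly the loop traceroute performs."""
    hops = []
    for ttl in range(1, len(path) + 1):
        kind, who = probe(path, ttl)
        hops.append(who)
        if kind == "reply":
            break
    return hops

path = ["router1", "router2", "router3", "webserver"]
print(traceroute(path))  # ['router1', 'router2', 'router3', 'webserver']
```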

Here’s an example of what the output might look like:

[root@fubar ~]# traceroute -I -q 1 -N 1
traceroute to (, 30 hops max, 60 byte packets
1 ( 0.270 ms
2 ( 0.395 ms
3 ( 0.589 ms
4 ( 0.707 ms

Mapping through a firewall with time exceeded packets

The tools tracert and traceroute are easily defeated by a firewall. This is because tracert transmits Echo-Request packets, which most environments block at the border. traceroute will also transmit Echo-Requests if the “-I” switch is used, but by default it targets UDP ports above 33,000. Again, most firewalls block this by default, so the tool is easily defeated.

But what if an attacker targets an open port on the firewall? In other words, what if they transmit TCP/80 packets to your Web server, but vary the TTL values in a similar fashion to traceroute? This is exactly how the tool tcptraceroute operates. There is even a version available for Windows. Usually, tools like this can map right through a firewall.

For example, we have a Web server at with a firewall sitting in front of it. The firewall has the standard “only let in TCP/80 to the Web server” policy set. Here is what traceroute reports:

[root@fubar ~]# traceroute -q 1 -N 1 -m 5
traceroute to (, 5 hops max, 60 byte packets
1 ( 0.279 ms
2 ( 0.521 ms
3 *
4 *
5 *

And here is the same network mapped with tcptraceroute:

[root@fubar ~]# tcptraceroute -n -f 1 -m 5 -q 1 -S 80
Selected device eth0, address, port 39142 for outgoing packets
Tracing the path to on TCP port 80 (http), 5 hops max
1 0.353 ms
2 0.450 ms
3 0.586 ms
4 [open] 0.701 ms

Because traceroute is sending UDP packets, our firewall policy drops them at the border. tcptraceroute however is sending TCP/80 packets to the Web server’s IP address. Since this is permitted by the policy, the packets make it through. We now know is acting as a firewall. We also know that it is sitting directly in front of the Web server.

Here’s a copy of one of the packets generated by tcptraceroute. To the untrained eye, it looks like a perfectly normal TCP/80 SYN packet, except the TTL value is very low (there are other clues that this packet is not normal, but I’ll save that for another post):

18:33:21.531117 IP (tos 0x0, ttl 3, id 41587, offset 0, flags [none], proto TCP (6), length 40) > S, cksum 0x7eaa (correct), 1793661553:1793661553(0) win 0
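For illustration only, here is how the fixed portion of such a packet's IP header could be assembled in Python with an abnormally low TTL. The addresses and checksum are zeroed out, and this is not how tcptraceroute itself is implemented; the point is simply where the TTL lives:

```python
import struct

def build_ip_header(ttl: int, proto: int = 6, total_len: int = 40) -> bytes:
    """Minimal 20-byte IPv4 header. Checksum and addresses are zeroed
    for illustration -- a real tool fills these in before sending."""
    return struct.pack(
        "!BBHHHBBH4s4s",
        0x45,        # version 4, IHL 5 (no options)
        0,           # TOS
        total_len,   # total length: 20-byte IP + 20-byte TCP
        41587,       # IP ID (value taken from the dump above)
        0,           # flags / fragment offset
        ttl,         # the interesting field: abnormally low TTL
        proto,       # 6 = TCP
        0,           # checksum (left zeroed here)
        bytes(4),    # source address (zeroed here)
        bytes(4),    # destination address (zeroed here)
    )

hdr = build_ip_header(ttl=3)
print(hdr[8])   # 3 -- the TTL sits at byte offset 8
```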

Protection against TimeX mapping

Most stateful inspection based firewalls are horrible at stopping TimeX mapping. In part 3 of this post, I’ll get into the proper way to control TimeX if you are running an open source firewall. For now however, I want to limit the advice I give to solutions that will work for every product.

There are two parts to every conversation: the stimulus and the response. When it comes to network mapping, we can effectively nullify a scan if we can control either portion of the conversation. In this case we have:

  • Stimulus = IP packet with an abnormally low TTL value
  • Response = TimeX from routers, port response from target

Since most commercial firewalls do not permit you to filter traffic based on TTL, we can’t control the stimulus in this situation. Nor can we control the port response, because it will be identical to a normal conversation. This leaves us with the outbound TimeX packets.

As close as possible to the edge of your perimeter, install a filter preventing ICMP type 11, code 0 (Time Exceeded in transit) packets from being sent to the Internet. For example, if you have a border router outside of your firewall, install the filter on the router. Note that if you are running Cisco IOS, the router will partially ignore the filter and still transmit TimeX packets generated by its own interface. Running the “no ip unreachables” command can prevent this, but this command disables all ICMP error reporting and can cause communication problems. Make sure you understand the full impact of this command before using it.
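As a sketch, an outbound filter on a Cisco border router might look like the following. The ACL number and interface name are placeholders; verify the syntax against your IOS release before deploying:

```
! Drop outbound time-exceeded errors before they reach the Internet
access-list 110 deny   icmp any any time-exceeded
access-list 110 permit ip any any
!
interface FastEthernet0/1
 description Link to ISP
 ip access-group 110 out
```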

By filtering outbound TimeX packets, we will prevent the attacker from seeing the IP address of all routers and firewalls between the filter installation point and the target host. The attacker will still be able to enumerate how many hops are on the link; they just will not be able to determine the IP address of each.

Exec Summary

Tools that perform traceroute-type activity through open ports on a firewall are effective at mapping the links along a target network. Further, these tools are usually effective at enumerating network address translation (NAT) settings. Since most firewalls cannot filter traffic based on TTL, we are usually left with trying to control the transmission of TimeX packets headed out towards the Internet.

Top 5 Firewall Threats – Part 2

August 3rd, 2009

In the last post I started counting down the five greatest threats to perimeter security. In this post I’ll complete the list.

Firewall Threat #3: Outbound HTTP

The popularity of HTTP (TCP/80) has become both a blessing and a tragedy. Certainly the Internet would not be as popular as it is today without the World Wide Web. While HTTP has led to the greatest exchange of information in mankind’s history, our implementation of the service has caused it to become one of our greatest security problems on the Internet.

Why is it a threat?

The first issue is that TCP/80 access has become so commonplace that many firewall administrators have chosen to ignore it. An overwhelming majority of the networks I have audited permit outbound TCP/80 access but then never log its use. When I ask why the permitted traffic pattern is not being logged, the standard answer I receive is “it makes my firewall logs too big”. Hummm, didn’t realize the perimeter security mantra was “no fatties”. ;)

Permitted traffic is inherently a higher risk than denied traffic because it facilitates the exchange of information. That passing traffic could be a zombie calling home or an internal system leaking sensitive data. If we do not log the use of a permitted protocol, we are completely blind to its abuse. The problem “how do I process large log files?” is much easier to solve than “how do I spot evil traffic when I’m not bothering to look for it?”.

Since HTTP has become a “turn it on and forget it” service, vendors and attackers alike have started running everything through this port. The brainchild at Microsoft who thought tunneling RPC/DCOM through HTTP was a good idea obviously had zero concern for how we would actually secure the implementation. While IRC used to be the protocol of choice for call-home Malware, it is now HTTP because attackers can usually count on that port being wide open and unlogged.

How to prevent it

All permitted traffic patterns need to be logged. This includes outbound HTTP traffic traveling from the internal network to the Internet. In a later post I’ll tackle the problem of processing firewall log files so it is relatively easy to pull out the interesting bits.

Firewall threat #2: Banner grabbing

Most Internet based servers will happily identify themselves to connecting clients. For example whenever you connect to this hosted server, your browser sees:

Server: Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/

While this information can be useful for troubleshooting, it can also be extremely useful for someone wanting to attack the system.

Why is it a threat?

Consider the following analogy: You are playing Five Card Draw against a number of opponents. Your opponent’s cards are well hidden in their hands, while your cards are laid out on the table for all to see. What are your chances of walking away the big winner at the end of the night?

Displaying version banners to connecting clients puts you in a similar position. If an attacker can see what software you are running along with the specific version, they can immediately determine if you are vulnerable to any of the attacks in their arsenal. So by displaying a software banner you’ve effectively helped the attacker get it right on the first try.

Without the benefit of the banner, the attacker would be forced to try each of their attacks in order to see if they will work. If we are vulnerable, we’re still going to get whacked. If we’re not, we’ve just forced the attacker to start generating log entries that will clue us in that the source IP is hostile. In other words, we’ve called their bluff so we now get to see their losing cards. This gives us an audit history and time to respond accordingly.

How to prevent it

Change the banners on any Internet facing services. This includes Web, FTP, name servers, mail servers, or any other service that can be reached from the Internet. Do not forget to restart the service after you make the change.

How easy or hard this is depends on the vendor. For example, to fix this problem with the Apache Web server we would simply edit the “httpd.conf” file and change the “ServerTokens” parameter to “Prod”. With IIS however, we do not have this type of flexibility. Microsoft does not let you change the banner, to help ensure they can properly identify their current market share. Your only real option is to put a reverse proxy in front of the Web server and leverage the proxy to scrub the banner.
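On the Apache side, the fix is a two-line edit to httpd.conf. ServerSignature is a related standard directive that removes the version footer from server-generated error pages and listings:

```
# Report only "Server: Apache" with no version or module details
ServerTokens Prod
# Omit the version footer on error pages and directory listings
ServerSignature Off
```

Remember to restart Apache after the change, as noted above.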

Can changing the banner cause problems?

Most vulnerability scanners are primarily banner grabbing devices. For example, when you run a vulnerability scanner against your mail server, it does not try every attack pattern it’s been programmed to test. Rather, it will grab the server’s banner and check it against a built-in database. If the reported software version has known vulnerabilities, they get printed to a report. If you have ever run a vulnerability scan which claims to check for thousands of known attacks, but your IDS barely notices the scan, this is why.
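A toy version of that lookup looks like this. The vulnerability database below is purely hypothetical, included only to illustrate the match:

```python
# Hypothetical vulnerability database: product -> versions with known issues.
VULN_DB = {
    "Apache": {"2.2.11", "2.2.8"},
    "Sendmail": {"8.12.9"},
}

def check_banner(banner: str):
    """Match each product/version token in a server banner against
    the database -- no traffic beyond the banner grab is required."""
    findings = []
    for token in banner.split():
        if "/" not in token:
            continue
        product, _, version = token.partition("/")
        if version in VULN_DB.get(product, set()):
            findings.append((product, version))
    return findings

banner = "Apache/2.2.11 (Unix) mod_ssl/2.2.11 OpenSSL/0.9.8k"
print(check_banner(banner))  # [('Apache', '2.2.11')]
```

Change the banner to something generic and this entire class of check comes back empty, which is exactly why scrubbed banners also confuse your own scanner.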

Now with that said, not all checks are banner based. For example reading the banner will not tell the vulnerability scanner whether your mail server can be used as a spam relay. The scanner has to specifically test for that condition. So some exploits do need to be tested directly. Simply reading the banner however can satisfy a majority of the verification testing.

So, changing the banners not only makes it more difficult for attackers to assess your vulnerabilities, but it makes it more difficult for you to do so as well. You may be forced to drop to the command line to verify the version of software you are running. Luckily, most software supports a “-v” or “-V” option which will print out its version information. Sometimes a different switch value is used, so we will need to do a bit of research in the application’s help files. For example, to get version information for Sendmail we would type:

[root@fubar ~]# sendmail -d0.1

Version 8.14.3

Firewall threat #1: Non-signature Malware

I’ve written extensively on the problems with detecting Malware. Feel free to use the search option on this site to pull up earlier posts for more info.

The bottom line is we try very hard to solve this problem at the perimeter by leveraging Unified Threat Management (UTM), firewall plug-ins, anti-virus proxies, etc. These solutions will never be 100% effective. In fact, their effectiveness has been declining sharply over the last few years. If you truly want to get a handle on modern day Malware threats you have to look at an application control solution.

Exec Summary

So the top five firewall threats are:

  1. Non-signature Malware
  2. Leaking banner info
  3. Outbound HTTP
  4. Outbound SSH
  5. Commercial VPN services

Note that the last four of the five are outbound traffic patterns. While we tend to focus heavily on what is trying to get into our network, we also tend to blindly trust the traffic leaving it. It’s this misplaced confidence that has led to each of these items making our list.