cipherdyne.org

Michael Rash, Security Researcher



2009 Blog Archive

Creating Ghost Services with Single Packet Authorization

Most uses of Single Packet Authorization focus on gaining access to a service such as sshd that is behind a default-drop packet filter. The point is that anyone who is scanning for the service cannot even tell that it is listening - let alone target it with an exploit or a password brute-force attempt. This is fine, but given that firewalls such as iptables offer well designed NAT capabilities, can a more interesting access model be developed for SPA? How about accessing sshd through a port where another service (such as Apache on port 80) is already bound? Because iptables operates on packets within kernel space, NAT functions apply before there is any conflict with a user space application such as Apache. This makes it possible to create "ghost" services where a port switches for a short period of time to whatever service is requested within an SPA packet (e.g. sshd), but everyone else always just sees the service that is normally bound there (e.g. Apache on port 80).
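Conceptually, the access granted in this mode boils down to a DNAT rule plus an ACCEPT rule, both scoped to the authenticated client's source IP. The rules below are only a sketch using the example addresses from this post - they are not the literal chains and rules that fwknopd builds, and fwknopd also removes its rules after a short timeout:

```shell
# Sketch only (not the literal fwknopd rules): 1.1.1.1 is the authenticated
# SPA client, 2.2.2.2 is the spaserver. Packets from 1.1.1.1 to port 80 are
# rewritten in the nat table to port 22 before Apache ever sees them;
# everyone else still reaches Apache on port 80.
iptables -t nat -A PREROUTING -s 1.1.1.1 -p tcp --dport 80 \
    -j DNAT --to-destination 2.2.2.2:22
# Accept the rewritten connection to sshd; fwknopd's timeout mechanism
# would delete both rules once the access window closes.
iptables -A INPUT -s 1.1.1.1 -p tcp --dport 22 -j ACCEPT
```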

To illustrate this concept, let's use fwknop from the spaclient system below to access sshd on the spaserver system, but request that the access be granted through port 80. Further, on the spaserver system, let's verify that Apache is running and that, from the perspective of any scanner out on the Internet, it is the only service that is accessible. That is, sshd and all other services are firewalled off by iptables. We'll assume that the spaclient system has IP 1.1.1.1, the spaserver system has IP 2.2.2.2, and the scanner system has IP 3.3.3.3.

First, let's scan the spaserver system from the scanner system and verify that only port 80 is accessible:

[scanner]# nmap -P0 -n -sV 2.2.2.2

Starting Nmap 5.00 ( http://nmap.org ) at 2009-11-29 15:14 EST
Interesting ports on 2.2.2.2:
Not shown: 999 filtered ports
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.2.8 ((Ubuntu) mod_python/3.3.1 Python/2.5.2)

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 90.94 seconds
Good. Now, from the spaclient system, let's fire up fwknop and request access to sshd on the spaserver system. But instead of just gaining access to port 22, we'll request access to port 22 via port 80 - the fwknopd daemon will build the appropriate DNAT iptables rules to make this work:

[spaclient]$ fwknop -A tcp/22 --NAT-access 2.2.2.2:80 --NAT-local -a 1.1.1.1 -D 2.2.2.2

[+] Starting fwknop client (SPA mode)...
[+] Enter an encryption key. This key must match a key in the file
/etc/fwknop/access.conf on the remote system.

Encryption Key:

[+] Building encrypted Single Packet Authorization (SPA) message...
[+] Packet fields:

Random data: 3829970026924871
Username: mbr
Timestamp: 1259526613
Version: 1.9.12
Type: 5 (Local NAT access mode)
Access: 0.0.0.0,tcp/22
NAT access: 2.2.2.2,80
SHA256 digest: Om/GsIVQIRyAp6UWyqjXVqlEQhxz+lVsQhCl1rFBfuI

[+] Sending 182 byte message to 2.2.2.2 over udp/62201...
Requesting NAT access to tcp/22 on 2.2.2.2 via port 80
With the receipt of the SPA packet, the fwknopd daemon has reconfigured iptables to allow an ssh connection through port 80 from the spaclient IP 1.1.1.1 (note the "-p 80" argument on the ssh command line):

[spaclient]$ ssh -p 80 -l mbr 2.2.2.2
mbr@2.2.2.2's password:
[spaserver]$
If we scan the spaserver again from the scanner system, we still only see Apache:

[scanner]# nmap -P0 -n -sV 2.2.2.2

Starting Nmap 5.00 ( http://nmap.org ) at 2009-11-29 15:29 EST
Interesting ports on 2.2.2.2:
Not shown: 999 filtered ports
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.2.8 ((Ubuntu) mod_python/3.3.1 Python/2.5.2)

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 89.22 seconds
So, the IP 1.1.1.1 has access to sshd, but all the scanner can ever see is 'Apache httpd 2.2.8'.

A legitimate question at this point is 'why is this useful?'. Well, I have been on networks where local access controls only allowed outbound DNS, HTTP, and HTTPS traffic, so this technique allows ssh connections to be made in a manner that is consistent with those access controls. (This assumes that HTTP connections are not made through a proxy, and that the fwknopd daemon is configured to sniff SPA packets over port 53.) Further, to anyone who is able to sniff traffic, it can be hard to figure out what is really going on in terms of SPA packets and associated follow-on connections. This is especially true when other tricks such as port randomization are also applied.
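For the DNS-only scenario above, the client needs to direct the SPA packet at port 53 instead of the default udp/62201. Assuming the 1.9.x perl client's --Server-port option, the invocation would look something like this (the destination IP is the example spaserver from this post):

```shell
# Send the SPA packet over udp/53 so it blends in with outbound DNS traffic;
# this only works if fwknopd on the server side is sniffing port 53.
fwknop -A tcp/22 --Server-port 53 -R -D 2.2.2.2
```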

I demonstrated the service ghosting technique at DojoCon a few weeks ago, and the video of this is available here (towards the end of the video).

Speaking at the SANS Incident Detection Summit

At the upcoming SANS Incident Detection Summit on December 9th and 10th I will be participating in two panel discussions. The first is entitled "Enterprise Network Detection Tools and Tactics" and is described by Richard Bejtlich (who has organized the whole conference) as a venue where "speakers with large-scale experience will share their tools and tactics for identifying suspicious and malicious activity". The second, "Detection Using Logs", focuses on the usage of platform, operating system, and application logs to detect intrusions; Security Information Management and log aggregation and search systems will also be discussed.

If you are going to be at the conference, please say hello!

Speaking at DojoCon

UPDATE: The talk slides can be downloaded here, and a video of the talk is available too.

On Friday November 6th I will be giving a talk on Single Packet Authorization with fwknop at the upcoming DojoCon conference at Capitol College in Laurel, MD. This talk will focus on some recent developments in the world of fwknop development and the resulting enhancements to SPA capabilities. Topics to be emphasized include:

  • The new libfko C implementation and what it means for SPA deployments.
  • Interface monitoring for administrative control changes and increasing packet counts. This adds an extra dimension of reliability for fwknopd server operations.
  • HTTP proxy support, as well as the development of an SSL web gateway for sending SPA packets on behalf of client systems that cannot construct SPA packets themselves.
  • Development efforts for future fwknop features (such as additional client integration, and deployments on embedded Linux distributions).

Software Release - fwknop-1.9.12

The 1.9.12 release of fwknop is ready for download. This is a significant release that moves by default to the new libfko library and the new FKO module for SPA encryption and decryption. Other new features include interface monitoring for the fwknopd daemon so it can survive administrative changes due to things like a DHCP address change, and the ability to send SPA packets through HTTP proxies.

An excerpt from the 1.9.12 ChangeLog appears below:

  • Fully integrated the FKO module that is part of the libfko library for all SPA routines - encryption/decryption, digest calculation, replay attack detection, etc. The default is to always use the FKO module if it has been installed, but the original perl code remains intact as well in case FKO does not exist on the local system. The libfko code can be viewed with Trac here.
  • Added the ability to recover from interface error conditions, such as when fwknopd sniffs a ppp interface (say, associated with a VPN) that goes away and then is recreated. In previous versions of fwknop, this would result in the fwknopd daemon no longer being able to receive SPA packets. This new functionality is controlled by five new configuration variables in the fwknop.conf file: ENABLE_INTF_CHECKS, INTF_CHECKS_INTERVAL, ENABLE_INTF_EXISTS_CHECK, ENABLE_INTF_RUNNING_CHECK, and ENABLE_INTF_BYTES_CHECK. By default, all of these checks are enabled and are run every 20 seconds by the knoptm daemon. If any check fails, then knoptm stops the fwknopd daemon once the error condition is corrected (such as when the interface comes back) so that knopwatchd will then restart it. The fwknopd daemon cannot receive packet data until the error condition is cleared (except perhaps for the "RUNNING" check), but restarting the fwknopd daemon is better than not being able to access a service.
  • Updated the fwknop client to include the SPA destination before DNS resolution when sending an SPA packet over an HTTP request. This allows more flexible Apache configurations with virtual web hosts to function properly with HTTP requests that contain SPA packet data. Also updated the fwknop client to include a leading "/" in SPA packets over HTTP, and updated the fwknopd server to strip this out before attempting SPA packet decryption.
  • Updated the fwknop client to resolve external IP addresses (with the -R argument) here by default.
  • (Jonathan Bennett): Submitted a patch to the fwknop client to add HTTP proxy support when sending SPA packets over HTTP. The result is a new --HTTP-proxy option that expects the proxy host to be given as "http://HOST", and it also supports the "http://HOST:PORT" notation.
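Putting the proxy support together with the existing --HTTP mode, a proxied SPA invocation would look something like the following (the proxy host and port here are hypothetical placeholders):

```shell
# Send the SPA packet as an HTTP request routed through a local proxy;
# -R resolves the client's external IP, -D names the SPA server.
fwknop -A tcp/22 -R -D 2.2.2.2 --HTTP --HTTP-proxy http://proxy.example.com:3128
```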

Google Indexing of Trac Repositories

There has been a Trac repository available on trac.cipherdyne.org since 2006, and in that time I've collected quite a lot of Apache log data. Today, the average number of hits per day against the fwknop, psad, fwsnort, and gpgdir Trac repositories combined is over 40,000. These hits come mostly from search engine crawlers such as Googlebot/2.1, msnbot/2.0b, Baiduspider+, and Yahoo! Slurp/3.0. However, not all such crawlers are created equal. It turns out that Google is far and away the most dedicated indexer of the cipherdyne.org Trac repositories, which is somewhat surprising to me considering the reputation Google has for efficiently crawling the web. That is, I would have expected the non-Google crawlers to hit the Trac repositories more often on average than Google. But perhaps Google is extremely interested in getting the latest code made available via Trac indexed as quickly as possible, and for svn repositories that are accessible only via Trac (such as the cipherdyne.org repositories), brute force indexing of all possible links in Trac may be the best approach. Or, perhaps the other search engines are simply not as interested in code within Trac repositories so they don't bother to aggressively index it, or maybe they are just not very thorough compared to Google.

Let's see some examples. The following graphs are produced with the webalizer project. This graph shows the uptick in hits against Trac shortly after it was first made available in 2006:

[graph: cipherdyne.org 2006 Trac usage]

So, the average number of hits goes from 803 starting out in May and jumps rapidly to nearly 17,000 in December. Here are the top five User-Agents and associated hit counts:

Hits   Percentage  User-Agent
5062   37.06%      Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
5004   36.63%      noxtrumbot/1.0 (crawler@noxtrum.com)
1283   9.39%       msnbot/0.9 (+http://search.msn.com/msnbot.htm)
475    3.48%       Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.13) Gecko/20060418...
457    3.35%       Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.7.12) Gecko/20051229...

Right off the bat Google is the top crawler, but only just barely. In December, this changes to the following:

Hits    Percentage  User-Agent
478063  90.94%      Mozilla/5.0 (compatible; Googlebot/2.1; ...
11299   2.15%       Mozilla/5.0 (compatible; BecomeBot/3.0; ...
8287    1.58%       msnbot-media/1.0 (+http://search.msn.com/msnbot.htm)
6423    1.22%       Mozilla/5.0 (compatible; Yahoo! Slurp; ...
4029    0.77%       Mozilla/2.0 (compatible; Ask Jeeves/Teoma; ...

Now things are drastically different - Google's crawler accounts for over 90% of all hits against Trac, and has generated over 42 times as many hits as the second place crawler "BecomeBot/3.0" (a shopping-related crawler - maybe they like the price of "zero" for the cipherdyne.org projects).

Let's fast forward to 2009 and take a look at how things are shaping up (note that data from 2008 is not included in this graph):

[graph: cipherdyne.org 2009 Trac usage]

The month of May was certainly an aberration with over 56,000 hits per day, and August topped out at 42,000 hits per day. In May, the top five crawlers were:

Hits     Percentage  User-Agent
1510435  86.14%      Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
73740    4.21%       Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/
33482    1.91%       Mozilla/5.0 (compatible; Charlotte/1.1; http://www.searchme.com/support/)
22699    1.29%       Mozilla/5.0 (compatible; Yahoo! Slurp/3.0; ...
20980    1.20%       Mozilla/5.0 (compatible; MJ12bot/v1.2.4; ...

Google maintains a crawling rate of over 20 times as many hits as the second place crawler "DotBot/1.1". In August, 2009 (some of the most recent data), the crawler hit counts were:

Hits    Percentage  User-Agent
947703  86.02%      Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
43876   3.98%       msnbot/2.0b (+http://search.msn.com/msnbot.htm)
30951   2.81%       Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/
11578   1.05%       Mozilla/5.0 (compatible; Yahoo! Slurp/3.0; ...
10880   0.99%       msnbot/1.1 (+http://search.msn.com/msnbot.htm)

This time "msnbot/2.0b" makes it to second place, but it is still far behind Google in terms of hit counts. So, what is Google looking at that needs so many hits? One clue perhaps is that Google likes to re-index older data (probably to ensure that a content update has not been missed). Here is an example of all hits against a link that contains the "diff"-formatted output for changeset 353 in the fwknop project. The output is organized by year since 2006; the first command counts the number of hits from Google, and the second command shows all of the non-Googlebot hits:

[trac.cipherdyne.org]$ for f in 200*; do echo $f; grep diff $f/trac_access* |grep "/trac/fwknop/changeset/353" | grep Googlebot |wc -l; done
2006
6
2007
4
2008
6
2009
2

[trac.cipherdyne.org]$ for f in 200*; do echo $f; grep diff $f/trac_access* |grep "/trac/fwknop/changeset/353" | grep -v Googlebot ; done
2006
74.6.74.211 - - [10/Oct/2006:07:38:21 -0400] "GET /trac/fwknop/changeset/353?format=diff HTTP/1.0" 200 - "-" "Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)"
205.209.170.161 - - [19/Nov/2006:14:17:57 -0500] "GET /trac/fwknop/changeset/353?format=diff HTTP/1.1" 200 - "-" "MJ12bot/v1.0.8 (http://majestic12.co.uk/bot.php?+)"
2007
2008
38.99.44.102 - - [27/Dec/2008:17:51:34 -0500] "GET /trac/fwknop/changeset/353/?format=diff&new=353 HTTP/1.0" 200 - "-" "Mozilla/5.0 (Twiceler-0.9 http://www.cuil.com/twiceler/robot.html)"
2009
208.115.111.250 - - [14/Jul/2009:20:42:32 -0400] "GET /trac/fwknop/changeset/353?format=diff&new=353 HTTP/1.1" 200 - "-" "Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/, crawler@dotnetdotcom.org)"
208.115.111.250 - - [28/Jun/2009:10:00:32 -0400] "GET /trac/fwknop/changeset/353?format=diff&new=353 HTTP/1.1" 200 - "-" "Mozilla/5.0 (compatible; DotBot/1.1; http://www.dotnetdotcom.org/, crawler@dotnetdotcom.org)"
So, Google keeps hitting the "changeset 353" link multiple times each year, whereas the other crawlers (except for DotBot) each hit the link once and never came back. Further, many crawlers have not hit the link at all, so perhaps they are not nearly as thorough as Google.
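A quick way to get per-crawler hit counts like the ones above, without waiting for webalizer, is to split the raw Apache combined-format logs on double quotes and tally the User-Agent field. The log excerpt below is fabricated for illustration (the IPs and paths are placeholders in the style of the real entries shown earlier):

```shell
# Fabricated stand-in for a trac_access log in Apache combined log format.
cat > /tmp/trac_access.sample <<'EOF'
1.2.3.4 - - [10/Oct/2006:07:38:21 -0400] "GET /trac/fwknop/changeset/353?format=diff HTTP/1.0" 200 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
5.6.7.8 - - [11/Oct/2006:08:00:00 -0400] "GET /trac/fwknop/changeset/353?format=diff HTTP/1.0" 200 - "-" "Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)"
1.2.3.4 - - [12/Oct/2006:09:00:00 -0400] "GET /trac/fwknop/timeline HTTP/1.0" 200 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
EOF
# Split each line on '"'; field 6 is the User-Agent string. Count hits
# per crawler and sort with the busiest crawler first.
awk -F'"' '{ ua[$6]++ } END { for (u in ua) printf "%d %s\n", ua[u], u }' \
    /tmp/trac_access.sample | sort -rn
```

On the sample data this reports Googlebot with two hits and Yahoo! Slurp with one, mirroring (in miniature) the webalizer tables above.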

A few questions come to mind to conclude this blog post. Please contact me if you would like to discuss any of these:
  • For any other people out there who also run Trac, what crawlers have you seen in your logs, and does Google stand out as a more dedicated indexer than the other crawlers?
  • For anyone who runs Trac but also makes the svn repository directly accessible, does Google continue to aggressively index Trac? Does the svn access imply that whatever code is versioned within is used as a more authoritative source than Trac itself?
  • It would seem that any crawler could implement an optimization around the Trac timeline feature so that source code change links are indexed only when a timeline update is made. But, perhaps this is too detailed for crawlers to worry about? It would require additional machinery to interpret the Trac application, so search engines most likely try to avoid such customizations.
  • Why do the non-Google crawlers lag so far behind in terms of hit counts? Are the differences in resources that Google can bring to bear on crawling the web vs. the other search engines so great that the others just cannot keep up? Or, maybe the others are just not so interested in code that is made available in Trac?

Software Release - gpgdir-1.9.5

The 1.9.5 release of gpgdir is ready for download. This release adds support for decrypting files that have been encrypted with PGP (so they have the .pgp file extension). This feature essentially came for free in gpgdir because GnuPG, along with the GnuPG::Interface perl module, can decrypt PGP-encrypted files. Here is an example:

$ cd dir
$ find . -type f
./file0.pgp
./file5.pgp
./subdir2/file8.pgp
./subdir2/file7.pgp
./subdir1/file6.pgp
./file4.pgp
./file3.pgp
./file2.pgp
./file1.pgp
$ gpgdir -d .
[+] Executing: gpgdir -d .
Using GnuPG key: ABCD1234
Password:

[+] Decrypting files in directory: /home/mbr/tmp/dir
[+] Building file list...
[+] Decrypting: /home/mbr/tmp/dir/file3.pgp
[+] Decrypting: /home/mbr/tmp/dir/subdir2/file8.pgp
[+] Decrypting: /home/mbr/tmp/dir/subdir2/file7.pgp
[+] Decrypting: /home/mbr/tmp/dir/subdir1/file6.pgp
[+] Decrypting: /home/mbr/tmp/dir/file1.pgp
[+] Decrypting: /home/mbr/tmp/dir/file2.pgp
[+] Decrypting: /home/mbr/tmp/dir/file5.pgp
[+] Decrypting: /home/mbr/tmp/dir/file4.pgp
[+] Decrypting: /home/mbr/tmp/dir/file0.pgp

[+] Total number of files decrypted: 9

$ !find
find . -type f
./file4
./file3
./file5
./subdir2/file8
./subdir2/file7
./subdir1/file6
./file1
./file2
./file0

Morpheus-fwknop Windows UI update

The speed at which the open source software movement develops new code is frequently quite impressive, and people are most productive when working on something they really enjoy. This remains true for the new Morpheus-fwknop Windows UI written by Daniel Lopez, and a new release (0.7) of Morpheus is now available from Sourceforge. The release notes for 0.7 read as follows:

  • Implement PGP Encryption.
  • Allow to Open multiple port on Forward/Nat Local mode.
  • New UI with a little more color.
  • Launch an Application after sending the SPA Packet.
Here are a couple of screenshots of Morpheus in action; the look and feel has changed quite a bit from the previous release.

[screenshots: morpheus-fwknop 0.7]

Nmap-5.00, Zenmap, and ndiff

Fyodor recently released Nmap-5.00, and this release marks a major milestone in Nmap development: the project is now quite mature, has a large following, and is feature rich. With Nmap, service discovery and interrogation have never been easier or more automated, and for me Nmap provided much of the inspiration to develop psad (scan detection via iptables log messages) and fwknop (hide services from all types of scans behind a default-drop packet filter with Single Packet Authorization).

Some of the more interesting Nmap features these days include the Nmap Scripting Engine (NSE), the Zenmap user interface, and ndiff. The NSE is an Nmap extension that allows users to express networking tasks via the Lua embedded programming language, and the resulting scripts are executed via Nmap against targeted systems. As an example of some of the power that NSE provides, a recent update allows Nmap to interrogate a system in order to see if it is infected with the Conficker worm.

Zenmap provides a nice graphical interface for point-and-click Nmap scanning, complete with interactive editing of Nmap command line arguments, scan results display with context-sensitive text colors, and even a network topology viewer to represent scan targets. The screenshot below illustrates the scan results view of a scan against a Netgear router:

[screenshot: Zenmap scan results view]

An excellent example of the topology view in Zenmap can be found here.

With the new Nmap release, some questions the security community may ask include:

  • Will scan activity significantly increase? Most likely there will be a burst of scanning over the next few weeks as people adopt and experiment with the new release - especially after the broad news coverage Nmap is getting.
  • Is it possible, by direct examination of network traffic, to differentiate Nmap-5.00 scans from those that originate from older versions? My guess is most likely not, but a source code diff against older versions should make this clear.
  • Does the new release imply that the Conficker worm will accelerate its decline as more scans are made to find infected systems? Note that Conficker seems to already be on the decline by one measure.
Finally, I wonder if ndiff will change how people use Nmap in the long run. It is great to have accurate scan information about a target, but it is even better to see how this information changes over time. For example, if a system is compromised and is forced to stand up a new backdoor service, then this will cause a "blip" in ndiff results if the system is the target of regular Nmap scans. Or, if a broad policy change is made in a router ACL or firewall rule set, then this can result in broad ndiff changes too. As another example, if a networked application is upgraded such that it advertises itself differently from one scan to the next (say, via a banner such as "Apache/2.2.11 (Ubuntu) Server at localhost Port 80"), then ndiff might alert you more effectively than other techniques (this assumes that you have enabled version scanning).

On a technical note, it is possible to introduce false positives into ndiff output if the Nmap command line is altered from one scan to the next. Suppose that scans for a particular UDP service finish fairly quickly and reliably because the target returns an ICMP port unreachable message (indicating that the service is not filtered). But, in the interest of speeding scans up further, suppose the --max-rtt-timeout argument is added to the Nmap command line, and suppose that timeout is reduced too far. In this case, through no fault of its own, Nmap would report the service as filtered because the ICMP port unreachable message returned after the reduced timer had already expired. If the before and after Nmap scan results are compared, ndiff would report the difference even though the user is responsible for creating it. Nmap is doing its job though, and changing how Nmap is invoked for automated scans is probably not very common; over time, the way Nmap is invoked should average out to the same. The main goal of comparing scan results is wonderfully automated by ndiff, and it is a powerful mechanism for seeing how network service availability changes over time.
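The before/after workflow described above can be sketched as follows. The target address and file names are placeholders, and the key point is to keep the Nmap command line identical between runs so that any reported difference reflects the target, not the scan:

```shell
# Baseline scan, saved in XML form for later comparison.
nmap -P0 -n -sV -oX baseline.xml 2.2.2.2

# ... some time later, rescan with the SAME command line ...
nmap -P0 -n -sV -oX today.xml 2.2.2.2

# Report only what changed between the two scans (new/removed ports,
# state changes, version string changes).
ndiff baseline.xml today.xml
```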

Congratulations to Fyodor and the Nmap developers on a great release.

Software Releases - libfko and morpheus-fwknop

This past week was a good one for Single Packet Authorization, with two software releases. First, an initial version of libfko, the all-C implementation of SPA developed by Damien Stuart, can be downloaded along with documentation. The libfko library allows other programs to easily implement the SPA protocol, and a new C client is bundled with fwknop-c-0.62 along with a new perl module "FKO" that implements a perl XS extension of the libfko functions. Once the fwknopd server piece is also developed (a replacement fwknop C client is included, but not yet a replacement for fwknopd), the libfko code will allow SPA to be extended easily to systems where perl is either not installed or cannot be run (due to hardware constraints such as small routers running OpenWRT).

fwknop-c follows the standard autoconf method of installing open source software, so just:

$ ./configure --prefix=/usr && make
$ su
# make install
The new fwknop-c client can be found at /usr/bin/fwknop once you have installed per the above, and all important options are supported similarly to the perl fwknop client. So, familiar commands like:

$ fwknop -A tcp/22 -R -D <host_or_ip>

should work just the same. A few of the command line arguments have changed in the C version, and by default the output on stdout is reduced (just use -v to change this).

Second, a new project called morpheus-fwknop was developed and released by Daniel Lopez. This project is a Windows UI written in .NET to send SPA packets that fwknopd can decrypt. With morpheus-fwknop, there is no need to install Cygwin in order to access services protected by SPA from Windows. This is the first viable UI to succeed Sean Greven's UI developed in Delphi (which still works too!), and morpheus-fwknop is released under the GPL. Naturally, source code can be downloaded, and here are two screenshots that show the look and feel:

[screenshots: morpheus-fwknop in action]

It is excellent to see work going on in the world of user interfaces to fwknop and SPA. Without an effective UI on Windows, a large user base is effectively cut off.

iptables Script Update - Logging and IPv6 Issues

Recently, Bobby Krupczak, a reader of "Linux Firewalls", pointed out to me that the iptables script used in the book does not log traffic over the loopback interface, and that such traffic is also blocked because of the INPUT and OUTPUT policies of "DROP" (instead of having a separate DROP rule). This should be made clearer in the book. Quite right - all logging is excluded for traffic that is sent or received over the loopback interface, and the iptables policy also drops loopback traffic because no ACCEPT rule exists. The lack of a logging rule is mostly because logging traffic generated locally and restricted to the loopback interface is usually a distraction from logging more important (and potentially malicious) traffic from remote networks. However, if a local process seems to have connectivity issues, then making sure that loopback traffic flows unimpeded is important. The iptables.sh script has been updated to ACCEPT all loopback traffic handled by the INPUT and OUTPUT chains.

On another note, I would also like to mention that the script has been updated to block IPv6 traffic altogether. With more networks routing IPv6 these days, and with things like Federal mandates for IPv6 compliance on Federal networks, IPv6 adoption is... still slow. However, Linux has had the ability to speak IPv6 for a long time, and Nmap can scan for IPv6-enabled services. Hence it is important to apply iptables controls to IPv6 traffic via ip6tables. The consequences of not doing this could be a system compromise via a service that can communicate over IPv6, but that is normally firewalled off in the IPv4 world.
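A minimal sketch of blocking IPv6 altogether with ip6tables looks like the following (this is an illustration of the idea, not the literal contents of the updated iptables.sh script; it must be run as root, and the loopback ACCEPT rules keep local IPv6 services working):

```shell
# Default-drop everything over IPv6 in all three built-in chains.
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT DROP

# Allow loopback traffic so local processes using ::1 are not broken.
ip6tables -A INPUT  -i lo -j ACCEPT
ip6tables -A OUTPUT -o lo -j ACCEPT
```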

Here is an example of scanning ::1 on an Ubuntu-9.04 system with Nmap without any ip6tables controls applied. Note that three important services are available over IPv6:

[root@isengard ~]# nmap -6 -P0 ::1

Starting Nmap 5.00 ( http://nmap.org ) at 2009-07-28 21:10 EDT
Interesting ports on ip6-localhost (::1):
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
53/tcp open domain
80/tcp open http

Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds

With the updated iptables script deployed, Nmap no longer sees these services.

Have you checked the output of ip6tables -v -nL | grep DROP lately on your Linux system? If you are running a different operating system, are you confident that IPv6 traffic is being filtered appropriately?

[root@isengard ~]# ip6tables -v -nL | grep DROP
Chain INPUT (policy DROP 0 packets, 0 bytes)
Chain FORWARD (policy DROP 0 packets, 0 bytes)
Chain OUTPUT (policy DROP 0 packets, 0 bytes)