

Before jumping headlong into configuring a network, it is always a good idea to spend a few moments thinking about what the network should look like. In my case it was clear from the beginning that I did not want a complicated network, but that I would be satisfied with a single, central server that would

  • Connect the home intranet to the internet, by whatever means are available (xDSL, cable, etc.)
  • Serve as a router for the home intranet
  • Protect the home intranet against the internet (i.e. act as a firewall)
  • Provide services (e.g. DHCP, email) to the computers on the home intranet

This page tries to explain how I achieved these goals, starting with the layout of the physical network, then moving on to kernel and basic network configuration, and finally explaining some details about the iptables package which I use for NAT, security/firewalling stuff and other IP packet mangling.

Physical network layout

 +----------+     +------------------+     +--------------+
 | Internet | --- | xDSL/cable modem | --- | Switch/Hub 1 |
 +----------+     +------------------+     +--------------+
                ^                      ^          |          eth2
                +--- WAN        LAN ---+          | <- eth1   |
                                           +--------------+   v                         +--------------------+
                                           | Linux server | --------------------------- | Wi-Fi access point |
                                           +--------------+                             +--------------------+
                                                  | <- eth0                                        |
                                                  |                                                |
                                           +--------------+                                        |
                                           | Switch/Hub 2 |                                        |
                                           +--------------+                                        |
                                                  |                                                |
                                     +------------+                                                |
                                     |                                                             |
           +-------------------------+------------------+                     +--------------------+---------------+
           |                         |                  |                     |                    |               |
 +---------------------+   +---------------------+   +-----+          +----------------+   +----------------+   +-----+                 
 | Intranet computer 1 |   | Intranet computer 2 |   | ... |          | Wi-Fi client 1 |   | Wi-Fi client 2 |   | ... |                 
 +---------------------+   +---------------------+   +-----+          +----------------+   +----------------+   +-----+                 

The schema shows that the Linux server has two different network interfaces: eth1 to connect to the internet, and eth0 to connect to the intranet. Physically eth0 and eth1 could be one and the same network adapter card (the Linux kernel supports this feature), but I chose to have two different cards, one for each network interface.

More important is that eth0 and eth1 are not connected to the same switch/hub. This physical separation of internet and intranet is an important step towards securing the home network against intruders from the internet. Any network traffic coming in from the internet must first pass through the Linux server, so if I manage to give the server good security, my intranet computers are automatically safer, too.


  • Physical separation is vital if the xDSL/cable modem box is not equipped with firewalling capabilities, because in such a scenario the modem box is merely a bridge that translates between different physical network system types (xDSL, Ethernet) but leaves the LAN side wide open to any intruders. If the modem box is a router-like system with firewalling capabilities, physical separation is not strictly necessary, but still a good idea - two system barriers to overcome are better than one.
  • Ten flawless firewalls with superb configurations are still powerless if the enemy comes from within the protected network: An intranet machine may have been infected by a virus or some other bit of malicious software (trojan, spybot, whatever) because the user has been carelessly opening email attachments or visiting infected websites. The best protection against this threat is a wary and knowledgeable user! Supporting roles may be filled by server and/or desktop virus scanners and desktop firewalls that monitor outgoing connections (e.g. Little Snitch on Mac OS X, ZoneAlarm on Windows).

Logical network layout

In this diagram I have added network addresses and removed the boxes that represented hubs and/or switches. Only the most important routers and gateways have been left in, to illustrate the network boundaries.

   Internet  -------------  | xDSL/cable modem |
                                      |
                               +--------------+                                        +--------------------+
                               | Linux server | -------------------------------------- | Wi-Fi access point |
                               +--------------+                                        +--------------------+
                                     |                                                           |
                                     |                                                           |
                                     |                                                           |
                                     |                                                           |
           +-------------------------+------------------+                   +--------------------+---------------+
           |                         |                  |                   |                    |               |
 +---------------------+   +---------------------+   +-----+        +----------------+   +----------------+   +-----+                 
 | Intranet computer 1 |   | Intranet computer 2 |   | ... |        | Wi-Fi client 1 |   | Wi-Fi client 2 |   | ... |                 
 +---------------------+   +---------------------+   +-----+        +----------------+   +----------------+   +-----+                 

Kernel and modules

Basic networking support

If you build your own kernel, you will have to enable some things in the kernel configuration sub-menu /kernel/device drivers/networking support:

  • Set network device support = YES
  • Choose the ethernet sub-menus that contain the device drivers for your network cards, then enable the device drivers either as a module or compiled directly into the kernel
  • Compile the new kernel & reboot
  • If you configured the device drivers as a module, you should use "modconf" to make sure that the modules have been loaded and are available for the following configuration
  • I did not choose the module option because networking is an integral part of the system I intend to build


Kernel configuration options required for iptables are located in the kernel configuration hierarchy under "Networking->Networking options->Networking packet filtering framework (Netfilter)". Most important for iptables and NAT are the connection tracking options.

It seems as if netfilter options can be built as modules only. On the running system, available modules can be found in


Kernel modules can be managed interactively through the modconf utility, or on the command line via insmod, rmmod and lsmod.

Note: Before the modules can be unloaded, any rules that are currently active must be removed.
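As a sketch, removing the active rules before unloading might look like this (the module names at the end are assumed examples; check lsmod for the actual names on your system):

```
# Flush all rules and delete user-defined chains in every table,
# so that the netfilter modules are no longer in use and can be unloaded.
iptables -F               # flush rules in the filter table
iptables -t nat -F        # flush rules in the nat table
iptables -t mangle -F     # flush rules in the mangle table
iptables -X               # delete user-defined chains
rmmod iptable_nat         # assumed module names; verify with lsmod
rmmod iptable_filter
```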

Interface names

Network interface names are assigned by the udev subsystem by rules located in


Normally udev uses the MAC address of a network interface card to assign interface names persistently and across system reboots. If this does not happen for any reason, it is possible to manually add a udev rule to the file named above.

For instance, I once had a system with an Asus motherboard and an on-board nForce NIC. The nVidia kernel driver forcedeth, which is responsible for managing this type of NIC, was not capable of determining a valid MAC address and therefore assigned a random MAC address to the NIC on each reboot. This confused udev and without help, it assigned a strange network interface name eth2_rename to the nForce NIC. To fix this, I had to manually add the following udev rule to the file named above:

# PCI device 0x10de:0x0066 (forcedeth)
SUBSYSTEM=="net", DRIVERS=="?*", ATTRS{vendor}=="0x10de", ATTRS{device}=="0x0066", NAME="eth2"

Note: The nForce NIC still got a random MAC address assigned on each reboot, which may have caused some problems with systems on the same physical network as that NIC. At least the ARP cache might have needed to be cleared on such systems after a reboot of the Linux server.
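If a neighbouring system does get confused by the changing MAC address, its ARP cache can be inspected and the stale entry flushed; a sketch using the classic net-tools commands (the IP address is an assumed example for the server's LAN address):

```
arp -n                # show the current ARP cache, numerically
arp -d 192.168.1.1    # delete the stale entry for the assumed server address
```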

Interface configuration


This file contains the network interfaces and their configuration:


Some or all interfaces may already be present if you have configured them during the initial system setup (e.g. "base-config"). Otherwise you will have to add them manually now.

man interfaces

gives useful information about the options that you can tweak. Basically the following rules apply:

  • The definition of a TCP/IP interface always starts with the line "iface eth<xxx> inet". Note that the interface name is merely a reference; the name definition has happened somewhere else in the udev subsystem
  • On the same line you add a word that determines how the interface gets its characteristics (e.g. IP address)
    • The word "dhcp" is used if the interface is configured by a DHCP server; this usually concludes the interface definition
    • The word "static" is used if you want to statically configure the interface in the interfaces file; you have to add a few more lines in this case
      • A line that defines the IP address: "address <ip address>"
      • A line that defines the netmask: "netmask <netmask>"
  • There should be a line "auto eth<xxx>" for every network interface that should be automatically brought up when the system is booting

If you need to know more, you should read the man page mentioned above. Here is an example of what my configuration in /etc/network/interfaces looks like:

# The loopback network interface
auto lo
iface lo inet loopback

# Integrated gigabit ethernet controller: 34:15:9e:2e:ca:38
# The Intranet interface for wired connections
auto eth0
iface eth0 inet static

# Fast ethernet-over-USB controller: 00:24:32:01:a7:83
# The Internet uplink
auto eth1
iface eth1 inet dhcp

# Gigabit ethernet-over-USB controller: 00:12:17:f2:34:05
# The Intranet interface for wireless connections
auto eth2
iface eth2 inet static


Even though the Linux server runs its own DHCP server, the eth0 interface must still be statically set up. If it were configured via DHCP, there would be potential timing problems because the DHCP server comes up rather late in the boot process, and other services may already need the proper IP address on eth0 (e.g. slapd).
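Note that a complete "static" stanza also needs the address and netmask lines described earlier; a minimal sketch with assumed RFC 1918 values:

```
# /etc/network/interfaces - sketch of a complete static stanza
# (192.168.1.1 / 255.255.255.0 are assumed example values)
auto eth0
iface eth0 inet static
    address 192.168.1.1
    netmask 255.255.255.0
```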

Command line utilities

This command prints out the currently defined interfaces, and their status:


These commands manually start or stop interfaces:

ifup eth<xxx>
ifdown eth<xxx>

To renew a DHCP lease, first issue an ifdown command, followed by an ifup command.

Note: I am not entirely sure about the consequences, but possibly the command dhclient eth1 could also be used to renew a DHCP lease on a specific interface (eth1 in the example). When no interface is specified, the command renews the leases on all interfaces. Use with caution!


Before I go into more details about concrete network configuration, here is a short chapter about the general concepts and usage of the iptables software package.


iptables implements a kernel-level packet filter. Basically, you can do everything you like with iptables:

  • Routing
  • NAT
  • Firewall rules
  • Arbitrary security rules and other stuff

This section provides information condensed from the excellent iptables man page. The section is probably not useful if you do not have a basic understanding of how iptables works.


Every IP packet that arrives in the Linux kernel is fed into iptables. iptables is configured with a number of rules. Each rule can match a set of packets. Each rule specifies what to do with a packet that matches.

Rules are organized in two levels:

  • tables
  • chains


The following tables exist:

  • filter = the default table
  • nat = rules in this table are consulted first when a packet that creates a new connection is encountered
  • mangle = this table is used for specialized packet alteration.
  • raw = not considered here


Every table contains a number of pre-defined chains. A table may also contain user-defined chains.

The filter table has the following pre-defined chains:

  • INPUT: for packets that come in from outside and have the host as their target
  • FORWARD: for packets that come in from outside and that the host is supposed to route to a target != the host
    • Note: if IP forwarding is disabled, or if the kernel does not know how to route a packet, the packet is discarded before it is given to iptables (= it never arrives in the FORWARD chain)
  • OUTPUT: for packets that are generated on the host

The nat table has the following pre-defined chains:

  • PREROUTING: for altering packets that come in from outside and that the host is supposed to route to a target != the host
  • OUTPUT: for altering packets that are generated on the host, before the packets leave the host
  • POSTROUTING: for altering packets as they are about to leave the host (packet source does not matter!)

The mangle table has the following pre-defined chains:

  • INPUT: see filter/INPUT; only available since kernel 2.4.18
  • FORWARD: see filter/FORWARD; only available since kernel 2.4.18
  • POSTROUTING: see nat/POSTROUTING; only available since kernel 2.4.18
  • OUTPUT: see filter/OUTPUT

Schematic overview

The following schematic provides an overview of the existing tables/chains and the flow of a packet:

      |                                                                                                         |
  IN  |     +-----------------------+                                         +-----------------------+         |  OUT
--------->  | PREROUTING (chain)    |                                         | POSTROUTING (chain)   |  --------------->
      |     | mangle + nat (tables) |                                         | mangle + nat (tables) |         |
      |     +-----------------------+                                         +-----------------------+         |
      |                                                                                                         |
      |                 |                                                                 ^                     |
      |                 |                                                                 |                     |
      |                 v                                                                 |                     |
      |                                                                                   |                     |
      |                 ^                                                                 |                     |
      |                / \                                                                |                     |
      |               /   \                +--------------------------+                   |                     |
      |              routing?  --------->  | FORWARD (chain)          |  -------------->  |                     |
      |               \   /                | filter + mangle (tables) |                   |                     |
      |                \ /                 +--------------------------+                   |                     |
      |                 v                                                                 |                     |
      |                                                                                   |                     |
      |                 |                                                                 |                     |
      |                 |                                                                 |                     |
      |                 v                                                                 |                     |
      |                                                                                                         |
      |     +--------------------------+        +---------------+        +--------------------------------+     |
      |     | INPUT (chain)            |  --->  | local process |  --->  | OUTPUT (chain)                 |     |
      |     | filter + mangle (tables) |        +---------------+        | filter + mangle + nat (tables) |     |
      |     +--------------------------+                                 +--------------------------------+     |
      |                                                                                                         |

Rules / Policy

Every chain consists of a number of rules (may be 0). Every built-in chain also has a policy:

  • A policy is a kind of default target (see below)
  • The policy is applied if none of the chain's rules match
  • The target defined by a policy cannot be another chain

Every rule consists of a number of "selection criteria" that determine whether or not a packet is matched by the rule. If a match occurs the rule is further consulted for a "statement" about what should happen to the packet.

  • The "statement" (not the "selection criteria"!) is called a "target"
  • If the "selection criteria" does not match a packet, the next rule in the chain is consulted
  • If the "selection criteria" matches a packet, the target determines the next rule that should be consulted; the following is a short overview of the most common and important targets
    • The target can be a "jump" to a user-defined chain in the same table
    • The target ACCEPT accepts the packet
    • The target DROP completely drops the packet
    • The target QUEUE passes the packet to userspace (not covered in this document)
    • The target RETURN goes back to the "calling" chain
      • Exception: if the current chain is a built-in chain, the target RETURN uses the target defined by the chain's policy
    • Many more targets are possible; often targets depend on the table or chain that the rule is located in
  • If a packet is not matched by any rule in a built-in chain, the target defined by the chain's policy is used
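For illustration, the default policies for the built-in chains of the filter table could be set like this (a restrictive sketch, not necessarily the configuration used on this server):

```
iptables -P INPUT   DROP    # applied only when no rule in INPUT matches
iptables -P FORWARD DROP    # applied only when no rule in FORWARD matches
iptables -P OUTPUT  ACCEPT  # locally generated packets pass by default
```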


This chapter provides a few useful examples for copy-paste.

List the rules in all chains in the table filter, without trying to resolve host names:

iptables -t filter -L -n
iptables -L -n

List the rules in the chain PREROUTING in the table nat, without trying to resolve host names:

iptables -t nat -L PREROUTING -n

List the rules in all chains in the table mangle, including rule numbers:

iptables -t mangle -L -n --line-numbers

Insert (-I) a rule at position 666 into the chain INPUT in the table filter. The rule is: DROP all packets from a given source to a given destination

iptables -t filter -I INPUT 666 -s <source> -d <destination> -j DROP

Delete a rule in table filter:

iptables -t filter -D <all options used when the rule was added with -A or -I>

Append a rule to the chain FORWARD in the table filter:

iptables -A FORWARD ...

Useful examples

Log TCP packets coming in to this host with destination port 1234:

iptables -t filter -A INPUT -p tcp --dport 1234 -j LOG --log-prefix "foobar: "

Log TCP packets coming in to this host via interface eth42:

iptables -t filter -A INPUT -i eth42 -j LOG --log-prefix "eth42 input: "


Behind the scenes iptables is organized as a collection of Linux kernel modules under the common name "netfilter". The man page describes many of these modules, but if you need to be sure whether a certain module is available on your system, have a look at the content of the folder



IP Forwarding

By default, a Linux box will not route IP packets from one IP network to another. To enable routing, you have to edit


(/etc/network/options is deprecated), and add the line


To activate the changes in sysctl.conf, run the command

sysctl -p

(sysctl is used to change kernel parameters at runtime). From now on, /etc/init.d/networking will turn on routing whenever it is run (e.g. during a system boot). It is also possible to enable or disable routing on the fly, by issuing one of the following commands:

echo 0 >/proc/sys/net/ipv4/ip_forward
echo 1 >/proc/sys/net/ipv4/ip_forward
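For reference, the sysctl.conf key that corresponds to the /proc path above is:

```
# /etc/sysctl.conf - enable IPv4 forwarding at boot
net.ipv4.ip_forward=1
```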

Now that we have basically enabled routing, we also have to define HOW routing should be done.

Basic routing

Basic routing is done via the IP routing table (or via entries in iptables, but this is not covered in this document). The command


is used to display the current entries in the table. An alternative is

netstat -rn

Example output on my machine:

pelargir:~# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
                                                U         0 0          0 eth1
                                                U         0 0          0 eth2
                                                U         0 0          0 eth0
                                                UG        0 0          0 eth1

I have already mentioned in the previous chapter about the physical network layout that eth1 is the interface that connects to the outside world. This is the reason why the default route points to that interface.

Automatic updates to the routing table

When ifup brings up a network interface that is configured to use DHCP, entries in the routing table are automatically created from the information provided by the DHCP server. If the DHCP server defines a gateway (router) and a subnet mask, the entries in the routing table might look like this:

Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
                                                U         0 0          0 eth1
                                                UG        0 0          0 eth1

On a system that has more than one interface configured to use DHCP, care must be taken in the DHCP configuration that ifup does not add multiple default routes to the routing table. See the DHCP page for more details.

Manual updates to the routing table (static routing)

To add/remove entries to/from the routing table, you use the route command with the add or del option. For instance:

route add -net <network> netmask <netmask> dev eth1
route add -net <network> netmask <netmask> dev eth0
route add -net <network> netmask <netmask> dev eth2

To manually add a default route (i.e. a route that covers all traffic for which there is not a specific network route):

route add default gw <gateway ip>

In the default route example no device/interface was specified. In this case the kernel tries to determine the correct device/interface by examining already existing routes.


Network-address translation (NAT) can be enabled using a single iptables rule. For better understanding you may have to read the general chapter above about iptables.

iptables -t nat -A POSTROUTING -o <outgoing interface> -j MASQUERADE

Dissection of the rule:

  • -t nat = we want to add a rule to the nat table, i.e. we want to manipulate packets using NAT
  • -A POSTROUTING = we manipulate only after the routing target has been determined
  • -o <outgoing interface> = we manipulate only packets that are going to leave the host on the given interface
  • -j MASQUERADE = we want to manipulate the packet so that the source address is changed to be the address of <outgoing interface>
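Putting the dissected pieces together, a minimal NAT gateway sketch might combine forwarding, masquerading and a pair of FORWARD rules (interface names are taken from the diagrams above; the rules are an assumed example, not necessarily the exact setup used here):

```
echo 1 > /proc/sys/net/ipv4/ip_forward                # enable routing
iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE  # rewrite source address on the uplink
iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT         # allow intranet -> internet
iptables -A FORWARD -i eth1 -o eth0 \
         -m state --state ESTABLISHED,RELATED -j ACCEPT  # allow replies back in
```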

The iptables man page says that the MASQUERADE target should be used with dynamically assigned IP (dialup) connections. If a static IP is present, the man page recommends usage of the SNAT target. I have not investigated the reason for this, but the appropriate rule would look like this:

iptables -t nat -A POSTROUTING -o <outgoing interface> -j SNAT --to-source <ip address of outgoing interface>

Note: I no longer use NAT on my Linux box because NAT services are now provided by my xDSL router.



Once we go beyond the pure realm of IP addresses we come to the humanly tainted realm of domain and host names. This starts with the hostname of the system and progresses via the domain name that the system is in towards the full domain name system (DNS) of the Internet.

This section provides the essentials and pointers to other pages in this wiki with more information.

For information about configuration of a DNS server, see the BIND page on this wiki.


The system's current hostname can be queried by invoking the hostname command without parameters.

While the system is running, its hostname is set by invoking the hostname command with the desired name:

hostname pelargir

Note: The hostname is set purely in memory, no file is changed by this command.

When the system is booting, a startup script reads the hostname from the file /etc/hostname and sets it by invoking hostname.

To permanently change the system's hostname even across a reboot:

  1. Invoke the command hostname to change the hostname while the system is running
  2. Modify /etc/hostname to make sure that the same name is used when the system reboots the next time
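The two steps can be condensed into two commands (using the example hostname from this page):

```
hostname pelargir                 # step 1: set the hostname of the running system
echo pelargir > /etc/hostname     # step 2: persist it for the next boot
```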

Domain name and FQDN

The FQDN (fully qualified domain name) of the system is determined by concatenating the system's hostname and the system's domain name.

To query the system's current domain name:

hostname --domain

To query the system's FQDN:

hostname --fqdn

The question remains: How do these utilities determine the system's domain name? The answer: They ask the so-called resolver. From the man page (man hostname):

The FQDN of the system is the name that the resolver(3) returns for the host name. Technically: The FQDN is the name gethostbyname(2) returns for the host name returned by gethostname(2). The DNS domain name is the part after the first dot.
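The quoted rule ("the DNS domain name is the part after the first dot") can be illustrated with plain shell parameter expansion (the FQDN is an assumed example):

```shell
fqdn="pelargir.example.org"   # assumed example FQDN
host="${fqdn%%.*}"            # everything before the first dot
domain="${fqdn#*.}"           # everything after the first dot
echo "$host"                  # prints: pelargir
echo "$domain"                # prints: example.org
```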

Read the next section for the ongoing discussion of the topic.

The resolver

What is the resolver? From the man page (man resolver):

The resolver is a set of routines in the C library that provide access to the Internet Domain Name System (DNS).

The resolver man page mainly talks about its configuration file /etc/resolv.conf which is used when the resolver talks to a DNS server. We come to this in a moment. What is also important (and what the resolver man page does not tell us) is that the resolver may have other sources of information besides DNS servers: For instance, the /etc/hosts file, or an LDAP server. Also important is that one can influence the order in which the resolver queries these sources. The mechanism involved is called "Name Service Switch" (NSS) and the configuration file involved is



  • The hostname man page mentions another resolver configuration file /etc/host.conf. This is misleading, that file is no longer used by modern Linux systems. As the "Resolver library" section in this interesting FAQ explains: "The older Linux standard library, libc, used /etc/host.conf as its master configuration file, but Version 2 of the GNU standard library, glibc, uses /etc/nsswitch.conf."
  • For more information on my use of NSS, see the NSS page on this wiki

On my system, the relevant entry in /etc/nsswitch.conf is this:

hosts: files dns ldap

Before we try to solve the mystery of how the system's FQDN is determined, we have a quick look at /etc/resolv.conf. The resolver man page tells us that in that file the "domain" configuration option refers to the "local domain name". A further quote:

Most queries for names within this domain can use short names relative to the local domain. If no domain entry is present, the domain is determined from the local hostname returned by gethostname(2); the domain part is taken to be everything after the first '.'.

For instance, on my system /etc/resolv.conf looks like this:


The file is automatically generated when the system acquires a DHCP lease and the DHCP server hands out various information such as the domain and the name servers. On a system that does not use DHCP, /etc/resolv.conf must be edited by hand.
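For illustration, a minimal /etc/resolv.conf might look like this (all values are assumed examples, not the actual content of my file):

```
# /etc/resolv.conf - sketch with assumed values
domain example.org          # local domain name, used to complete short names
nameserver 192.168.1.1      # first DNS server to query
```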

The mystery of the FQDN

From what we have learned by now, we can construct the following sequence of events:

  • User issues the command hostname --fqdn
  • hostname makes the system call gethostname(); result is "pelargir"
  • hostname makes the library call gethostbyname(), which triggers the resolver and returns the result
  • resolver performs lookups in the order defined by /etc/nsswitch.conf; as we have seen, on my system this is "files dns ldap"
  • resolver looks in /etc/hosts and finds nothing for "pelargir" (/etc/hosts contains only one entry that maps to "localhost")
  • resolver queries the DNS server; because "pelargir" is not a FQDN, the resolver uses the "local domain name" defined in /etc/resolv.conf for completion, and queries the DNS server for the completed name
  • the DNS server, of course, knows the name and returns a result to the resolver, which in turn returns the FQDN to hostname

Note: We have ignored nscd in our discussion because on my system this caching daemon is currently set not to cache host lookup information.

Why is the FQDN important?

All kinds of programs need this information; it is simply part of a properly configured system.

At one time, when I tried to remove "pelargir" from /etc/hosts but hadn't set up my own DNS server yet, I ran into trouble because slapd couldn't be started anymore. The hostname command would also block completely. The reason for this was that NSS was configured with "ldap" as the last lookup source, so when slapd or hostname made the lookup, NSS couldn't find the hostname in /etc/hosts, nor in any of the (at that time external) DNS servers, so it tried to contact the LDAP server, which was either down or not properly configured for host lookups. Certainly this was a misconfiguration on my part, but it shows that many tools not only require a hostname but also rely on a properly configured FQDN.

Note: One incorrect solution I have found on the Internet was to place the FQDN directly into /etc/hostname. This is completely wrong, as it would result in the hostname being set to something that merely resembles a FQDN.

FQDN without a DNS server

Without a DNS server, the FQDN would have to be defined in /etc/hosts. The FQDN would be the first parameter, with optional abbreviations and aliases as the following parameters. For instance:

       localhost  pelargir

List hostname in /etc/hosts

Although this is not strictly necessary if DNS is up and running, it might still be a good idea to add the system's hostname to /etc/hosts. For instance:

      localhost pelargir
  • This entry might save the day if DNS is down for any reason
  • The entry might also help if a service makes trouble for some network-oriented reason (e.g. for some strange reason Exim 4.76 has problems and hangs when 1) it gets an invalid HELO host name that cannot be resolved, and 2) it also cannot find the local hostname in /etc/hosts (a running DNS server that is capable of providing the FQDN appears to be insufficient))

The problem with the entry above is that it sets the FQDN to "localhost". This is not really a FQDN, and a few software packages (e.g. Apache) do not like that. So in the end, to make all parties happy, I have decided to put the following line into /etc/hosts:

  pelargir
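A hedged sketch of what such an /etc/hosts could look like as a whole (the LAN address and domain are assumed examples):

```
# /etc/hosts - sketch; 192.168.1.1 and example.org are assumed values       localhost
192.168.1.1     pelargir.example.org  pelargir
```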

Packet mangling with iptables

Avoid routing local traffic to ADSL router

The problem

Regular access to one of the public hostnames comes in from the WAN side. In this case the router correctly routes traffic to the Linux server, which can then serve the request properly.

Unfortunately the same happens when I try to access a public hostname from within the LAN: Traffic is first routed to the ADSL router, which then re-routes everything to the Linux server. The result is reduced LAN speed (in some cases the reduction is quite dramatic!) because packets need to travel 3 network segments instead of just 1 (or 6 for the roundtrip, instead of just 2).

The reason for this behaviour is that public hostnames always resolve to my public IP address, and that address is occupied by the ADSL router.

The solution that does not work

The correct solution for this problem would be, of course, to set up a proper DNS configuration in which the public hostnames resolve to a local IP address when the DNS request comes in from the LAN.

Unfortunately this is not possible because the domain name registrar for the TLD ch (Switch) does not accept arbitrary DNS servers for delegation, which means I am not allowed to deploy my own primary DNS server for the domain. I am therefore stuck with using a public DNS server as the primary, and that public server is, of course, incapable of resolving names to LAN IP addresses.

iptables to the rescue

A solution with iptables is this: whenever traffic is routed from the Linux server towards the static public IP (= the ADSL router), iptables must make sure that the traffic is not actually delivered to the ADSL router, but is instead mangled so that it is delivered to the Linux server's own address on eth1.

The rule looks like this:

iptables -t nat -A PREROUTING -s <lan-network> -d <public-ip> -j DNAT --to-destination <server-lan-ip>

Dissection of the rule:

  • We use NAT so that the client has no clue that we are mangling
  • Unlike the "regular" NATting where the source address is mangled (SNAT), we mangle the destination address (-j DNAT)
  • Mangling happens before routing (-A PREROUTING)
  • Mangling occurs for all packets that have their origin in the network given by -s (= the LAN, including all subnets)
    • If the LAN has more networks, the -s option would have to include those, too
    • An alternative might be to specify an incoming interface, e.g. -i eth0

The rule above uses the PREROUTING chain, which means that the rule affects only packets that come in from outside and that the Linux server is supposed to route. We need another rule that affects packets that are generated on the Linux server itself. That rule uses the OUTPUT chain and looks like this:

iptables -t nat -A OUTPUT -d <public-ip> -j DNAT --to-destination <server-lan-ip>

Dissection of the rule:

  • The basic elements are the same as in the first rule
  • We can leave out the -s option because using the OUTPUT chain already restricts the rule to packets originating on the Linux server

In order to activate the rules on a server reboot, the iptables commands are added to an init script (e.g. /etc/init.d/<foo>, linked into the runlevel directories as S99<foo>).
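A minimal init-style script sketch tying both rules together might look as follows. All addresses are placeholders (203.0.113.1 is from the documentation range) and must be adapted to the actual setup; the script name is hypothetical as well.

```sh
#!/bin/sh
# Sketch of /etc/init.d/local-dnat (placeholder name); on Debian an S99
# symlink can be created in the runlevel directories with update-rc.d.

LAN_NET=192.168.1.0/24   # placeholder: the LAN network (-s)
PUBLIC_IP=203.0.113.1    # placeholder: the static public IP held by the ADSL router
SERVER_IP=192.168.1.1    # placeholder: the Linux server's own LAN address

# Rewrite routed LAN traffic that is destined for the public IP...
iptables -t nat -A PREROUTING -s $LAN_NET -d $PUBLIC_IP -j DNAT --to-destination $SERVER_IP

# ...and traffic that is generated on the Linux server itself
iptables -t nat -A OUTPUT -d $PUBLIC_IP -j DNAT --to-destination $SERVER_IP
```

The current contents of the nat table can be inspected afterwards with `iptables -t nat -L -n -v`.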

Route file sharing traffic to dedicated host

When a client on the LAN side starts a file sharing application (e.g. for the BitTorrent or the eDonkey file sharing networks), the client typically needs to be accessible from the outside of the LAN on certain ports. These file sharing ports are always located in the port range >= 1000.

One solution is to modify the ADSL router's port forwarding rules (Menu 15.2.1 "NAT Setup -> NAT Server Sets -> Server Set 1 (Used for SUA only)") so that traffic on the required ports is directly forwarded to the IP address of the client that runs the file sharing application(s).

The problem with this is that the router supports only a limited number of slots for defining port forwarding rules (12 slots in my case). I prefer to use these slots for defining exceptions to the "block all ports under 1000" rule. Since all traffic on ports >= 1000 (i.e. file sharing ports are included by this) is already routed to the Linux server, it is therefore easier to define an iptables rule that forwards file sharing traffic to the correct client.

The rule looks like this:

iptables -t nat -A PREROUTING -p tcp -i eth1 -d <public-ip> --dport 50116 -j DNAT --to-destination <client-lan-ip>

Dissection of the rule:

  • We use NAT so that the forwarding occurs transparently: only the destination address is rewritten, so the file sharing application still sees the original Internet peer's IP address as the source and simply replies through the Linux server (its default gateway)
  • Unlike the "regular" NATting where the source address is mangled (SNAT), we mangle the destination address (-j DNAT)
  • Mangling happens before routing (-A PREROUTING)
  • Mangling occurs for all packets that come in on the eth1 interface, i.e. that have their origin on the Internet
    • Note: it is easier to use -i to match all Internet traffic than to use -s (which would require specifying a source IP address)
  • -p tcp is required so that a port can be specified (ports exist at the TCP/UDP level; plain IP has none)
  • Last but not least, --dport is used to specify the port
    • If multiple ports need to be specified and the port numbers are not consecutive (i.e. cannot be expressed as a port range), the multiport module (loaded with -m multiport if the kernel module exists) provides the --dports option
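As an illustration of the multiport variant, here is a sketch that forwards several non-consecutive ports with a single rule (the port list and the client address are placeholders):

```sh
# -m multiport allows a comma-separated list of up to 15 ports in --dports
iptables -t nat -A PREROUTING -p tcp -i eth1 -m multiport --dports 50116,6346,4662 \
  -j DNAT --to-destination 192.168.1.10   # placeholder client address
```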

The following file sharing ports are known:

  • BitTorrent
    • Azureus
      • Determines a random port number when the application is launched for the first time
      • I always use 50116
    • Default Ports
      • TCP 6881-6889
  • eDonkey2000 (eD2k)
    • aMule
      • Uses default ports
    • Default Ports
      • TCP 4662 (to get an eD2k HighID)
      • UDP 4665 (always TCP port +3)
      • UDP 4672
  • Gnutella
    • Phex
      • Determines a random port number when the application is launched for the first time
      • I always use 5381
    • Default Ports
      • TCP + UDP 6346
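Based on the default ports above, a sketch that forwards all of them to a single file sharing client could look like this (192.168.1.10 is a placeholder address); note that the UDP ports need separate rules with -p udp:

```sh
CLIENT=192.168.1.10   # placeholder: the LAN client running the file sharing apps

# BitTorrent default port range
iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 6881:6889 -j DNAT --to-destination $CLIENT
# eDonkey2000: TCP 4662, UDP 4665 and 4672
iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 4662 -j DNAT --to-destination $CLIENT
iptables -t nat -A PREROUTING -p udp -i eth1 -m multiport --dports 4665,4672 -j DNAT --to-destination $CLIENT
# Gnutella: TCP and UDP 6346
iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 6346 -j DNAT --to-destination $CLIENT
iptables -t nat -A PREROUTING -p udp -i eth1 --dport 6346 -j DNAT --to-destination $CLIENT
```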

xDSL configuration

See the DSL page.

WiFi router configuration

Currently my WiFi router is an Airport Extreme base station. Details about its configuration can be found on the WiFi page.



At one time at my workplace it looked as if I would have to configure a Linux machine to connect to the corporate Novell network. The following chapters document the research I did on this topic. The information presented here is probably not complete because I abandoned the effort after it became clear that a network connection would be feasible through TCP/IP after all. I leave these chapters as they are in case I have to follow up on the issue at a later time.


(partly quoted from the Wikipedia articles)

IPX/SPX is a networking protocol used by the Novell NetWare operating systems. IPX is comparable to IP because it is responsible for end-to-end (source to destination) packet delivery. SPX is comparable to TCP because it sits on top of IPX and provides connection-oriented services between two nodes on the network.

NCP (the NetWare Core Protocol) is used to access file, print and other network service functions. TCP/IP and IPX/SPX are the supported underlying protocols.

On Linux

The IPX/SPX protocol is available as a Linux kernel module. On Debian, the ipx package provides utilities to configure the IPX network.

NCP server services can be provided through the MARS_NWE NetWare emulator.

An NCP client implementation is available through the Debian ncpfs package.


ipx_* utilities

The Wikipedia article about IPX has the following to say about "IPX addressing":

  • Logical networks are assigned a unique 32-bit hexadecimal address in the range of 0x1 - 0xFFFFFFFE.
  • Hosts have a 48-bit node address which by default is set to the network interface card's MAC address. The node address is appended to the network address to create a unique identifier for the host on the network.
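The two bullets can be illustrated with a little shell arithmetic: the full host identifier is simply the 32-bit network number followed by the 48-bit node address. Both values below are made-up placeholders.

```shell
# Compose an IPX host identifier in the conventional network:node notation
network=0x00A0B0C0         # placeholder 32-bit logical network number
node="00:11:22:33:44:55"   # placeholder 48-bit node address (normally the NIC's MAC)
printf '%08X:%s\n' "$network" "${node//:/}"   # prints "00A0B0C0:001122334455"
```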

The Wikipedia article also lists 4 encapsulation types of "IPX over Ethernet":

  • 802.3 (raw)
  • 802.2 (Novell)
  • 802.2 (SNAP)
  • Ethernet II

From this information, and the name of the available ipx_* utilities, I deduce that an IPX network needs to be set up as detailed in the following chapters.

IPX network interface

First an IPX network interface needs to be set up with ipx_interface. If for instance IPX traffic uses the ethernet interface eth0, we can define the IPX interface like this:

ipx_interface add eth0 802.2 0

According to the man page for ipx_interface:

  • 802.2 is a valid frame type; I don't know whether 802.2 (Novell) or 802.2 (SNAP) will be used
    • by specifying an invalid frame type "foobar", we can fool ipx_interface into printing out a help text that informs us that the following frame types are valid
      • 802.2
      • 802.2TR
      • 802.3
      • SNAP
      • EtherII
    • so we still don't know what exactly 802.2 is, but we assume that it's the reasonable-sounding "Novell" variant
  • a network number "0" is the same as not specifying a network number at all; the effect is that the network number will be detected automatically from the traffic on the network

Note: There is also an option to specify a primary interface. I don't know what this is, so I don't care...

As an alternative to all this stuff above, it might also be feasible to let IPX configure itself automatically. It does not do so by default, so we have to instruct it:

ipx_configure --auto_interface=on
ipx_configure --auto_primary=on [auto-select a primary interface]

IPX routing

ipx_route add target_network router_network router_node
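As a hedged, unverified example (made-up values throughout, following the argument order shown above): to reach logical network 0x2 via a router that sits on network 0x1 with a known node address:

```sh
# placeholders throughout; syntax per the ipx_route man page:
#   ipx_route add target_network router_network router_node
ipx_route add 0x2 0x1 00:80:C8:12:34:56
```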

ncp* utilities

The man page for ncpmount says

  • on IPX: "You must configure the IPX subsystem before ncpmount will work. It is especially important that there is a route to the internal network of your server."
  • on IP: "You must specify both -S logical_name and -A dns_name. logical_name is used for searching .nwclient, other configuration files and is logged into /etc/mtab, dns_name is used for connecting to server."

ncpmount provides a huge number of options that can be used for mounting a NetWare volume.

Worth investigating:

  • mount option "tcp" = use TCP for connection to server
  • -A dns name (mount option ipserver=dns name): When you are mounting volumes from NetWare 5 server over UDP, you must specify dns name of server here and logical server name in -S (or in server=). This name is used to switch ncpmount into UDP mode and to specify server to connect. Currently, DNS is only supported IP name resolution protocol. There is currently no support for SLP.
  • -b (mount option bindery): If you are connecting to NetWare 4 or NetWare 5 through bindery emulation instead of NDS, you must specify this option.
  • -i level (mount option signature=level): Enables packet signing. level is from 0 to 3: 0 means disable, 1 means sign if server needs it, 2 means sign if server allows it and 3 means sign packets always.

Mount all volumes (because no -V passed) from a server:

ncpmount -S <server> -U <user> -P <pwd> /mnt/<mount point>
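For an IP connection (per the -S/-A description quoted from the man page above), a sketch with placeholder server and user names might look like this:

```sh
# placeholder values throughout; -S is the logical server name used for
# .nwclient/mtab, -A is the DNS name actually used for connecting
ncpmount -S NWSERV1 -A nwserv1.example.com -U myuser -P secret /mnt/nwserv1
```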