Autoconfiguration
By udhayaforu (@udhayaforu), India
October 12, 2006 2:24am CST
IEEE Internet Computing, May/June 2001, 1089-7801/01/$10.00 © 2001 IEEE, http://computer.org/internet/
On the Wire
Autoconfiguration for IP Networking:
Enabling Local Communication
It would be ideal if a host implementation of the
Internet protocol suite could be entirely self-configuring.
This would allow the whole suite to be
implemented in ROM or cast into silicon, it would
simplify diskless workstations, and it would be an
immense boon to harried LAN administrators as
well as system vendors. We have not reached this
ideal; in fact, we are not even close. —RFC 1122 [1]
IP hosts and network infrastructure have historically
been difficult to configure—requiring network
services and relying on highly trained network
administrators—but emerging networking
protocols promise to enable hosts to establish IP networks
without prior configuration or network services.
Even very simple devices with few computing
resources will be able to communicate via standards-track
protocols wherever they are attached. Current
IETF standardization efforts, such as those in the
Zeroconf working group, aim to make this form of
networking simple and inexpensive.
Hosts that are permanently connected to an
administered network are usually assigned static
network configurations by network administrators.
Other hosts are attached to administered networks
(such as corporate local-area networks or dial-in
accounts) using dynamic network configuration.
In these, all necessary parameters are assigned to
the host by a network configuration service, which
also requires configuration. In many situations—
impromptu meetings, administered network misconfigurations,
or network service failures, for
example—establishing an IP network is desirable,
but administering it can be impractical or impossible.
In these cases, automatic network configuration
of parameters is valuable for hosts. The IETF
Zeroconf WG’s real goal is to enable direct communications
between two or more computing
devices via IP. In this tutorial, I examine the background,
current status, and future prospects for
zero configuration networking.
Zero Configuration Networking
Automatic configuration parameters have different
properties from those assigned by static and
dynamic configuration. They are ephemeral; they
will likely be different each time they are obtained
and might even change at any time. Automatically
configured hosts actively participate in assigning
and maintaining their configuration parameters,
which have only local significance. Autonomy
from network services implies that hosts must manage
their own network configuration.
In direct contrast, normal IP configuration is
persistent (especially for servers), or at the very
least, stable. The IP protocol suite aims at scalability,
especially with respect to configuration.
Addresses and names often have global significance,
which has proven essential for enabling
Internet growth. Obtaining and managing global
addresses and names requires a great deal of
administrative work, however. These processes are
not at all automatic and likely never will be.
Despite these differences, the essential zero configuration
networking protocols really imply
changes to only the lower layers of IP-enabled
devices. (See the sidebar “IP Host Layering” for an
introduction to the terminology required for discussing
automatic configuration.)
Existing network-based applications will work
without modification over enhanced network service
and application layers using standard interfaces.
Indeed, users should not even be aware that the network
service layer has been configured automatically
rather than statically or dynamically.
Erik Guttman, Sun Microsystems, Germany

Four functions will benefit from zero configuration
protocols, in the context of both IPv4 and
IPv6. With no modification to existing interfaces,
zero configuration protocols will improve
name-to-address translation (at the application level) and
IP interface configuration (at the network level).
Functions previously unavailable to IP hosts will
introduce new interfaces: service discovery at the
application layer and multicast address allocation
at the network layer.
These additional services will not disrupt existing
applications. They will “raise the bar” by providing
additional features long absent from the
Internet protocol suite, but (in the case of service
discovery) available in proprietary network protocol
suites from Apple, Microsoft, and Novell. (See
the sidebar “Early Autoconfiguration Efforts.”)
These proprietary protocols continue to be
used only because of their ease-of-configuration
features. Adopting emerging zero configuration
protocol standards will let us retire proprietary networking
—a move that has broad support. Even network
equipment vendors uniformly accept that
proprietary network protocols have seen their day
and should be replaced by IP standards.
For reasons of scalability and reducing impact
on existing networks, the zero configuration protocols’
effect on the overall network must be limited.
The algorithms used for zero configuration
protocols generally use multicast. In practice, these
protocols are limited to either a single network link
(that is, routers do not forward these protocol messages)
or to a set of networks (where some routers
are configured as boundaries, over which protocol
messages are not forwarded).
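The single-link limitation described above can be illustrated with a short sketch. This is a hypothetical example, not part of the original article: it builds a UDP socket for the group 224.0.0.251 (later used by multicast DNS), which lies in the link-local multicast block 224.0.0.0/24 that routers never forward; setting the TTL to 1 makes the single-link scope explicit.

```python
import socket

# 224.0.0.0/24 is the link-local multicast scope: routers never
# forward datagrams addressed to these groups, regardless of TTL.
LINK_SCOPE_GROUP = "224.0.0.251"  # group later used by multicast DNS
MDNS_PORT = 5353

def make_link_scope_socket() -> socket.socket:
    """Build a UDP socket whose multicast traffic cannot leave the link."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL 1 keeps the datagram on the local link even for groups
    # outside the always-local 224.0.0.0/24 block.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    return sock
```

A zero configuration protocol built on such a socket affects only hosts sharing the sender's link, which is exactly the scoping property the paragraph above describes.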
Defining an Approach
Those working on IETF zero configuration protocol
standardization (currently in the Zeroconf, Service
Location Protocol, DNS Extensions, and IPng
working groups) have considered two main
approaches to overcoming the differences between
configured and automatic operation.
The first strategy requires transitions between
local and global configuration and has been
explored through consumer-oriented operating
system software since 1998. This strategy implies
that hosts would support automatic configuration
only for as long as they lacked global configuration.
The two modes are exclusive, and the presence
of a dynamic configuration service requires
a transition from automatic (local) to dynamic
(global) configuration.
An example of this transition strategy is the
network interface autoconfiguration protocol
adopted for desktop operating systems from Apple
and Microsoft. This protocol (which the IETF has
not yet standardized) enables a host to simply
choose an unassigned IP address from a reserved
range. The host then attempts to obtain (global) IP
configuration parameters from the network via the
Dynamic Host Configuration Protocol (DHCP) [2]. The host
issues periodic DHCP requests, which will eventually
succeed in reaching a DHCP server if one ever
becomes available on the network. Once a DHCP
server responds and offers IP configuration parameters,
these replace automatic configuration.
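The address-selection step of this protocol (which the IETF later standardized for IPv4 as RFC 3927) can be sketched as follows. This is an illustrative sketch, not the vendors' implementation: it assumes the 169.254.0.0/16 range reserved for link-local use, in which the first and last /24 blocks are excluded from selection.

```python
import random

def pick_link_local_candidate(rng: random.Random = random) -> str:
    """Pick a random candidate address from the reserved 169.254.0.0/16 range.

    The first and last /24 blocks of the range are reserved, so
    candidates fall in 169.254.1.0 through 169.254.254.255.  A real
    implementation would next probe the link (via ARP) for a host
    already using the candidate and retry with a fresh one on
    collision, while continuing to issue periodic DHCP requests in
    the background as the article describes.
    """
    return f"169.254.{rng.randint(1, 254)}.{rng.randint(0, 255)}"

def is_valid_link_local(addr: str) -> bool:
    """Check that an address lies in the selectable link-local range."""
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a, b) == (169, 254) and 1 <= c <= 254 and 0 <= d <= 255
```

Because the candidate is random rather than assigned, the resulting configuration is exactly the kind of ephemeral, locally significant parameter described earlier: a host may receive a different address every time it joins a link.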
This mechanism works fine for clients
employing common client-server protocols
because very few make use of long-duration
connections. Individual network application
operations result in distinct transactions even
when connections fail. If the client host experiences
network reconfiguration, applications simply
establish new connections.

IP Host Layering

Layering provides the foundation for numerous stable, extensible
computing platforms. Figure A depicts the pervasive layered
architecture, often called the IP stack, which is used for Internet
hosts. This roughly corresponds to the OSI seven-layer model [1]. The
figure excludes the OSI presentation and session layers: IP
applications implement data presentation functions themselves, and
session features such as encryption, compression, or persistence
between protocol transactions are added in an ad hoc fashion at
various layers.

Each layer provides services to the layer above it through standard
interfaces. If lower layers provide the same functionality using the
same interfaces, services can be implemented in different ways; new
mechanisms defined at the network service layer can thus support
unmodified existing applications. Avoiding changes in upper layers
eases the adoption of new Internet technologies. Network service
layer enhancements that require client applications (such as e-mail
readers and Web browsers) to be upgraded are not broadly adopted.

Each layer could be automatically configured. In practice, the less
configuration required, the better, because simpler technology works
more predictably and eases deployment. The transport service, link
control, and media access layers rarely require configuration in
Internet hosts. By contrast, the application and network service
layers nearly always require configuration in order to operate at all.

Figure A. Internet host layering: application, transport service,
network service, link control, physical network. Zero configuration
protocols will be implemented at the application and network service
layers of the Internet protocol stack.

Reference
1. A. Tanenbaum, Computer Networks, 2nd ed., Prentice-Hall,
Englewood Cliffs, N.J., 1989.
If a server’s configuration changes, however,
recovery is not so easy: Client applications cease
to function if they cannot find a server. Servers
with dynamic addresses can only be located via
a dynamic service discovery protocol, and very
few IP-based applications currently employ service
discovery.
Server address reconfiguration can break server
software, which typically binds to a (presumably
immutable) address to accept incoming messages.
Moreover, when a server reconfigures via DHCP, it
can no longer communicate with clients that have
not yet reconfigured. Conversely, if DHCP configures
client systems and then fails to configure a
server (if the DHCP server becomes