Securing the Infrastructure - The Internet Protocol Journal - Volume 3, No. 3

by Chris Lonvick, Cisco Systems

People are becoming much more reliant upon the proper operation of their networks. Consequently, the administrators of these networks are being tasked with providing an ever-increasing level of service. At this time of high reliance upon the network, methods and procedures need to be built into the network so that the operators can maintain control of it and can know with some certainty the effect of each potential change. This may become increasingly difficult as network resiliency techniques are proposed and deployed with the intent of automatically keeping these networks in top operation. A predictable network that is secured in a proper manner is more suitable for its users and better meets the intended purpose of the network.

Most of the current network security models start with the physical perimeter of the network as its defining boundary. All things within this boundary are supposed to be protected from the perceived inimical forces that are outside of the perimeter. We are, however, finding that the perimeter of the network is no longer solidly defined. There are many exceptions to the "hard-shell perimeter" model: companies merge, remote sites are linked through Virtual Private Networks (Site-to-Site VPNs) across untrusted paths, access is granted in-bound for the network users through Access Virtual Private Networks (Access VPNs), and there are several other exceptions. For this article, let's consider a different model. This model has a boundary of the acceptable network users rather than any geographical or logical perimeter. It is important that these users are allowed access to the services provided by the network. It is equally important that the people who are not authorized to use the network be prevented from consuming its resources and otherwise disrupting its services.

Other models tend to focus on restricting users' access to devices as the means of securing the network. This model, however, looks at the effect that the users and each of the devices have upon the state of the network. To conceptualize this model, visualize that the only time this network would be running at a "steady state" is when there is no user traffic, no administrative or management traffic, and no routing update changes. The insertion of any traffic, or the addition or removal of any device or link, would change the state of this network. Changes to the state of this network may come from any number of sources, but they can be seen as coming from four different, quantifiable areas.
Operators may enable or disable lines and devices.
A network device publishing a new route or a different metric to a destination may cause the remainder of the network devices to dynamically recompute paths to all other destinations.
Servers may insert traffic.
Users may insert traffic.

Of these, the last two should be the least disruptive to the network as long as the traffic amounts are within the predicted and acceptable ranges. Changes that are within the goals of the network (for example, to provide a service to the users) are considered good, while changes that cause outages or other disruptions are to be avoided. As such, it is vital that the network administrators understand the potential impact and consequences of each possible change in their network.

In this model, then, the administrators must know and understand the influences that will change the state of the network. The desire to achieve this goal sometimes leads to improper restrictions placed upon the users. Consider one extreme case of this model where each change in the network must be stringently authorized and authenticated. As a narrow example, this would mean that even traffic that is fundamentally taken for granted as a proper process of the network would have to be authenticated and authorized. Domain Name System (DNS) transactions would show that this extreme case is impractical. Each DNS query would have to be associated with a user or authenticated process, and that user or process would have to be authorized to make each specific query. A vastly more practical case for real networks would be for the administrators to allow any DNS query from any device without authentication as it is done in existing dynamic networks today. In the model, the normal DNS queries and responses would be an influence upon the state of the network. For this influence to change the network in a way that meets the goals of the network, the administrators would have to feel comfortable that the servers and the available bandwidth will adequately handle the amount of DNS traffic as well as all other traffic. On the other hand, the administrators do need to establish a strict set of rules for the influences that they consider sensitive or possibly disruptive to their network. Continuing this example, the administrators may want to place restrictions upon the devices and processes that can insert and update the DNS records. It would be rather inappropriate, and potentially devastating, if any unauthorized person or network device were allowed to overwrite any existing records. If anyone were allowed to perform any DNS update that he or she wished, chaos would soon result. 
There must be a middle ground for this example that allows the operators to maintain control but still permits the dynamic freedoms expected by the users. Specifically to address this, the DNS Extensions Working Group has proposed several Internet Drafts [1].
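That middle ground can be sketched as a shared-secret signing scheme in the spirit of the secure-update proposals: only hosts holding a registered key may change records, while queries remain open to all. The host names and keys below are illustrative assumptions, not part of any real DNS deployment.

```python
import hashlib
import hmac

# Hypothetical shared secrets for the only hosts authorized to update records.
UPDATE_KEYS = {
    "dhcp1.example.com": b"s3cret-key-1",
}

def sign_update(update_bytes, key):
    """Compute a keyed digest over the serialized update message."""
    return hmac.new(key, update_bytes, hashlib.md5).hexdigest()

def verify_update(host, update_bytes, digest):
    """Apply an update only if it was signed with a registered host's key."""
    key = UPDATE_KEYS.get(host)
    if key is None:
        return False  # unknown host: reject the update outright
    expected = hmac.new(key, update_bytes, hashlib.md5).hexdigest()
    return hmac.compare_digest(expected, digest)

msg = b"UPDATE www.example.com. 3600 IN A 192.0.2.10"
digest = sign_update(msg, b"s3cret-key-1")
print(verify_update("dhcp1.example.com", msg, digest))   # True: authorized key
print(verify_update("rogue.example.com", msg, digest))   # False: unknown host
```

The keyed digest binds the update to a known host, so an unauthorized machine cannot overwrite records, yet ordinary queries need no such ceremony.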

In the broader sense, this places a very heavy responsibility upon the people who are running the network. They must find an acceptable middle ground between the desire to rigidly control all aspects of the network and the freedoms that are expected by the users, while at the same time satisfying the business requirements of their network. However, defining the freedoms and restrictions of the users is only one part of maintaining the network. The administrators and operators must have an understanding of the influences on the network as described in the model. Each part of the network must be understood well enough to predict its behavior under normal use, and to limit the potential for disruption if it is used beyond its means. The one area that is vital to the proper working of the network is the infrastructure. This article explores some of the thoughts that may go into the process of securing the network infrastructure.

Table 1: Sources of Change to the Network

Description of Problem
In this abstracted network model, four sources of change were noted. As shown in Table 1, these changes, or influences to the network, may come from the operators, the network devices, the servers, and the users of the network. Let's look at the influences that each of these groups can have upon the network by first categorizing the network devices. All the devices on the network may be roughly separated into four groups that correspond to the four sources. These groups of network devices can be seen in the third column of the table.
Operators: For the purpose of this article, let's describe the Operators as all the people who operate the network, including the network engineers, the installers, the people who monitor the network, and all the other people who make it work. The first group then is made up of the operators and the devices that these operators use to run the network, such as the network management stations and all other operations consoles. Operators periodically make changes for moves and additions, to improve network performance, or to overcome disruptions. They will also monitor the network through polling, receiving alerts, and sometimes directly interacting with the network devices. Generally the amount of traffic inserted into the network from their activities is minimal. Because they generally have physical access to all locations, they can insert or remove network devices. Operators can have influence over all aspects of the network at all layers, from the physical layer all the way up the stack. Operators can influence the network either in band or out of band, and they should be the only people who directly access the network infrastructure devices such as the routers and DNS servers. Usually this access will be from the management platforms, but in many situations, operators require access from devices that would otherwise be classified as a user's workstation.
Infrastructure Devices: The network infrastructure devices themselves have the ability to change the network as well. This is mostly done through the dynamic nature of the network. At some times the physical portions of the network might fail and cause outages. In some cases, such as self-healing ring topologies, physical-layer devices may heal the network. In other cases, such as when a router is taken out of the network for maintenance, the routing updates will heal the network to the best of their abilities. The network infrastructure devices can be somewhat separated into two categories. The first of these would be the infrastructure devices that have no direct interaction with the users of the network. This category would consist of the devices such as the routers, switches, access control devices, and perhaps even the physical-layer devices such as multiplexers and modems. The user machines and content servers normally would not form sessions or require any information from these devices. The second category would be the devices with which customers indirectly interact. These would be devices such as the DNS servers, Dynamic Host Configuration Protocol (DHCP) servers, Network Time Protocol (NTP) servers, authentication servers, and the like. The users and servers would form sessions with these supporting devices and would require information from them for the basic operation of the network. In some cases, such as with a DNS/DHCP server, the results of the indirect user interaction would even update the servers with information. This latter group may be called "supporting devices." These two categories can be taken together with all the wires, circuits, and lines to form the infrastructure of the network. Although the users do not actively see their presence, this infrastructure must be available and functioning before any user can actually do anything productive on the network.
Servers: The servers in this group are those that contain content or services with which the users directly interact. These would be databases, Web servers, application servers, and the like. Like the operators group, this group is not considered to be part of the network infrastructure.
Users: The users and their machines constitute the bulk of the network. The changes that the users make upon the network will probably come through transferring content or requesting and utilizing services. They can change the nature of the network by withdrawing from the network, or by causing others to withdraw from the network. In a nonmalicious way, the user base can degrade the state of the network by using it beyond its expected capacity. In certain situations, users with malicious intent may find exploitable network vulnerabilities. In most normal cases, however, the influence from the users upon the network will be through their interactions with the servers.

Each type of influence may also be considered to have a different weight. For example, the insertion of a new router into an existing network would be expected to have a larger effect upon the operations of the network than the change to the network caused by a user retrieving some information through a Web browser. To quantify some of the expected network changes, consider that there may be spheres and levels of influence. Any influence that may cause a change over the entire network may be considered to have a global sphere of influence. A router recently inserted into the network would start exchanging routing information with its neighbors. With no restrictions placed upon routing updates, this router could announce a new network, or it could announce the best path to an otherwise difficult-to-reach network. The remainder of the network would be affected, and all other routers would have to recalculate their paths. If the announcements were true, then the network would continue servicing the needs of the users. If the announcements were false, possibly because of an incorrect configuration, then the whole network could suffer. In this case, it is possible to limit the sphere of influence by restricting the acceptance of routing updates. In one method, all the routers could be restricted to disallow the acceptance of an announcement to the "default" network. Additionally, all the routers may be restricted to accept only announcements that are known to be within an acceptable address range. In another method, the routers could be grouped to accept announcements only from a select set of other routers. Additionally, some routing protocols have an option to include an authentication and integrity check through signing the updates. Any of these methods would help to reduce the sphere of influence and thus the potential for changes that could be made by the insertion of a router. There is, however, a cost associated with this; the operators would have to diligently enforce this control.
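Those restrictions can be sketched as a simple filter applied to each incoming announcement. The neighbor list and address range below are assumptions chosen for illustration; a real router would express the same rules as route filters in its peer configuration.

```python
import ipaddress

# Illustrative policy: the routers we peer with, and our own address space.
ACCEPTED_NEIGHBORS = {"192.0.2.1", "192.0.2.2"}
ACCEPTED_RANGE = ipaddress.ip_network("10.0.0.0/8")

def accept_announcement(neighbor, prefix):
    """Apply the restrictions described above to one routing announcement."""
    if neighbor not in ACCEPTED_NEIGHBORS:
        return False                      # only a select set of other routers
    net = ipaddress.ip_network(prefix)
    if net.prefixlen == 0:
        return False                      # disallow a "default" announcement
    return net.subnet_of(ACCEPTED_RANGE)  # only the acceptable address range

print(accept_announcement("192.0.2.1", "10.20.0.0/16"))    # True
print(accept_announcement("192.0.2.1", "0.0.0.0/0"))       # False: default
print(accept_announcement("203.0.113.9", "10.20.0.0/16"))  # False: stranger
```

Each check narrows the sphere of influence a misconfigured or rogue router could have, at the cost of the operators having to keep these lists current.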

The level of influence can also be considered a factor in this model. The sphere of influence of a single transmission line can be defined to include any portion of the network that uses that line. If that line develops a fault, it may corrupt or discard packets and the associated network devices may automatically disable that line. If there is a backup line or an alternate path, then this change will be a small problem to the operations staff and its loss may go unnoticed by the users. That would be a low level of influence upon the network. On the other hand, if the line has an intermittent fault that can cause a route flap, or if the line has no backup, then major disruptions may occur. That would be considered a high level of influence.

If the goal of the network is to provide a service to its users, then its operators must try to quantify each of the influences. In a theoretically ideal network, the administrators would appropriately limit the sphere and would try to minimize the level for every influence. As was noted above, however, attempting to do this would require numerous operations tasks. Many of those may be unnecessary for their specific environment. For example, in a small business where there is a high degree of trust that no one has any malicious intent, controls would still be placed upon the influences that would most probably cause network problems through accidents. If the security policy allowed anyone to connect any device to the network, it may still be prudent to disallow the routers from receiving routing updates from any source other than the other routers.

A well-running network is the result of a well-controlled network. These networks must have a separation of authorized administration from other influences, and these other influences must be understood well enough to know how they will change the network. The following diagram shows the network and the groupings of influences upon it, and the table below that describes the elements of this model. This model does not show access paths, but rather the influences that each grouping of devices has upon the infrastructure and upon other devices. As can be seen, the users are pervasive throughout the network (because they are a principal reason for its existence), and they must have the access paths to contact the servers and necessary infrastructure devices. The users will influence the infrastructure as they insert traffic upon the lines, but they should have no direct influence upon the infrastructure devices such as the routers and digital access cross-connects. The operators do have influence upon the infrastructure devices and must have an access path to those devices. It would be most appropriate if the users were not allowed to usurp the access paths of the operators. However, because the two are sometimes nearly indistinguishable, the task of separating the administrative channels from the user channels becomes difficult.

Figure 1: The Network Security Model

The following table describes the elements in this model.

Table 2: Network Model Elements

Know Your Business
All well-running networks must have a security policy defined. This policy must reflect the goals of the network and must also be acceptable to the users and administrators. There are good examples of policies as well as methods that can be used to generate them. RFC 2196 [2] contains several thoughts about constructing a policy, and SANS [3] offers courses on the subject. While defining a network security policy, it will be advantageous to list the most likely disruptive influences to the network. This is commonly called the Threat Model. All potentially disruptive factors should be considered when forming the threat model, and they must be addressed when writing the security policy. It may, however, be beyond the capabilities of the operations staff to negate all of them. It may also be prohibitively expensive to try. In those cases, the writers of the policy should acknowledge the factors that won't be negated, but they should still find ways to minimize them. For example, in an Enterprise network the operators are somewhat likely to require access to routers and switches from any physical location in the network. In Service Provider networks, there may be less of a chance of that because the operators traditionally reside with the network management devices. In both cases, it would not be considered good for the network if a user could gain control of a router. The security policy for an Enterprise network may state that network access to routers will be open and available to any other device within the network. This will allow any operator to access the routers from any location. On the other hand, the security policy for a Service Provider network may state that access to the routers will be opened only for specific address ranges. Implementing this will prevent the users, who reside within the address space assigned to them, from accessing the infrastructure devices, while allowing the operators, who reside within their own address space, to access the routers.
In both cases, however, strong authentication will probably be required to additionally limit access to only authorized people.

Within the business of the network, operators must have the ability to control the infrastructure devices. Traditionally, the ways to interact with a device have been called "interfaces." A terminal with a keyboard attached to the console port of a router is an interface, just as is a Web browser accessing the router via the Hypertext Transfer Protocol (HTTP) through the network. Also, the path between the controlling device and the infrastructure device has traditionally been called a "channel." The wire connecting the terminal to the router is a channel, just as is the TCP session that transports the HTTP in the prior example. The channels between the operators and the infrastructure devices must be secured, as well as the channels between the infrastructure devices. The first step in achieving this goal is to identify all of the interfaces needed by the operations staff to access each of the remote devices. Along with this, they also need to identify each of the interfaces needed for the proper functioning of the network. The following lists show some of the possible network-available interfaces to some of the infrastructure devices in a dynamic network, broken down into the interfaces needed by the operations staff, the interfaces usually needed by the other infrastructure devices, and some ancillary interfaces.

Table 3: Some Interfaces of Infrastructure Devices

Each of these interfaces may be exposed to the nefarious forces that are known to inhabit large networks, and each of these exposures has vulnerabilities that may be exploited. Telnet sessions may be hijacked, DNS queries may be answered by nonauthoritative and possibly maliciously incorrect responses, and sinister people can insert forged routing updates to confound and disrupt the network. The network security policy should expect that these vulnerabilities may be exploited, and it should address the mechanisms that may be used to either negate the vulnerabilities or minimize the exposures. In this model, these mechanisms may be used to limit the sphere and level of the influences. The policy may also make some attempt to identify the potential consequences of the disruption caused by the exploitation of these interfaces. It should also describe an escalation procedure for dealing with encountered problems.

Possibly, during the exercise of identifying the open interfaces in an existing network, some of them may be closed or removed if it is determined that they are not needed or if their function can be fulfilled by the use of another interface. As an example, consider a UNIX host that has both the Secure Shell Protocol (SSH) and finger services running on it. If the policy of the network is to tightly control the information that anyone can obtain from any device, then the operators may want to remove the finger service. The operators will be able to obtain similar information by running the who command on the UNIX system through an SSH remote execution request. On the other hand, if the operations processes have been built upon the format of the information returned by finger, then the operators may want to prevent direct access to finger from the network and require that it be run on the device or through the SSH request.

At some point, it would be a good idea to run a scanner against the infrastructure devices. The Network Mapper (NMAP) [4] is a freely available tool that can pick out some of the active interfaces of a device. This, or a similar tool, should be periodically used by the operations staff to ensure that the open ports of an infrastructure device are those that are known to be open. This investigation should not be limited to operations channels, but should also include application channels. For example, the question should be asked if the operations workstations should have open application interfaces such as Simple Mail Transfer Protocol (SMTP) or Network File System (NFS). There are exploitable vulnerabilities associated with some application interfaces that should be addressed in the security policy. In most cases, it would be prudent to remove applications that are not needed from infrastructure devices and supporting servers, as well as from operations devices when they are not needed. In all cases, it is usually considered to be a good practice to review the entries in the inetd configuration in UNIX systems.
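A minimal version of such a check can be sketched with an ordinary TCP connect() probe, the same technique used by NMAP's connect scan. To stay self-contained, the snippet probes a listener it creates itself; against real infrastructure devices, the set of ports found open would be compared with the set the security policy says should be open.

```python
import socket

def scan(host, ports, timeout=0.5):
    """TCP connect() scan: return the subset of ports accepting connections."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Self-contained demonstration: listen on an ephemeral port, then scan it.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(5)
port = listener.getsockname()[1]

found = scan("127.0.0.1", [port])
listener.close()
print(found == [port])  # True: the port we opened shows up as open
```

Run periodically from an operations workstation, the difference between the found set and the expected set is exactly the list of interfaces that need explaining or closing.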

It should be remembered that there will almost certainly be an access path between the users and the network interfaces of the infrastructure devices. The network security model diagram shows that neither the users nor the servers should have any direct influence to change or control the infrastructure devices. This is somewhat analogous to the policy of giving privileges on a multiuser system. In most well-run multiuser computing systems, the operators give only the most meager of privileges to the users of the system. This prevents most accidental and malicious disruptions. If the users need to run a privileged process or to access the files of other users, processes that utilize setuid are used or consensual groups are established. Generally, efforts are made to prevent users from having significant privileges on these machines. The alternative of giving each user high-level privileges usually results in disaster after a short time because the users then have the ability to overwrite or delete files, and may run processes that are generally disruptive to the operating system and to others.

Similarly, giving users high-level access to the routers of a network would have a deleterious effect. In the case of Quality of Service (QoS), a user given the privileges to reconfigure routers along a path would be able to provide his/her own designated flows with bandwidth and priority assurances. Subsequent users would also have that capability, and their modifications may leave the first user without his/her expected QoS and possibly without a session at all. A far better mechanism to fairly deploy QoS is through the use of a brokering service. In a "policy network," users or authenticated processes may request a level of service for their flows through a Policy Manager. This Policy Manager should have the capability to arbitrate requests to provide a semblance of fairness. The Policy Manager would then directly control the appropriate routers within the rules established by the administrators.
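A Policy Manager of this kind can be sketched as a small arbiter that tracks grants against a configured capacity. The class name, units, and numbers below are illustrative assumptions, not any particular product's API; a real manager would also push the resulting configuration to the routers.

```python
class PolicyManager:
    """Broker QoS requests so no user reconfigures the routers directly."""

    def __init__(self, capacity_kbps):
        self.capacity = capacity_kbps
        self.granted = {}  # flow id -> granted kbps

    def request(self, flow_id, kbps):
        """Arbitrate one request within the administrators' rules."""
        in_use = sum(self.granted.values())
        if in_use + kbps > self.capacity:
            return False  # deny rather than preempt earlier grants
        self.granted[flow_id] = kbps
        # A real manager would now configure the routers along the path.
        return True

    def release(self, flow_id):
        self.granted.pop(flow_id, None)

pm = PolicyManager(capacity_kbps=1000)
print(pm.request("alice-video", 600))  # True: within capacity
print(pm.request("bob-video", 600))    # False: would exceed capacity
pm.release("alice-video")
print(pm.request("bob-video", 600))    # True: capacity freed
```

Because every grant passes through one arbiter, a later request cannot silently strip an earlier user of the service already promised.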

Along these lines, conveying security-related policy to infrastructure devices should take a similar path. For example, if the network security policy states that user access to a particularly sensitive network resource must be authenticated and controlled, the operators may elect to place a firewall between the users and that resource. That firewall would be classified as an infrastructure device and users should not directly access or control it. Rather, the users may authenticate themselves to an authentication service, which would notify the firewall that their access to the resource is permitted or denied. The authentication service may also send a set of restrictions for the access method; it may permit HTTP access but deny Telnet and File Transfer Protocol (FTP) for one person, but for another it may permit only Telnet.
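The per-user restrictions that the authentication service hands to the firewall can be sketched as a simple lookup. The user names and access methods below are hypothetical; a real deployment would carry this information in whatever notification protocol the firewall and authentication service share.

```python
# Hypothetical restrictions sent by the authentication service: which access
# methods each successfully authenticated user may use toward the resource.
PERMITTED = {
    "alice": {"http"},    # HTTP permitted; Telnet and FTP denied
    "bob":   {"telnet"},  # only Telnet permitted
}

def firewall_allows(user, method):
    """Permit a session only for an authenticated user's allowed methods."""
    return method in PERMITTED.get(user, set())

print(firewall_allows("alice", "http"))    # True
print(firewall_allows("alice", "ftp"))     # False: method not permitted
print(firewall_allows("mallory", "http"))  # False: never authenticated
```

The default of an empty set means anyone the authentication service has not vouched for is denied every method, which matches the model's rule that users never control the firewall directly.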

The reasons for authentication, authorization, and access control must be described in the network security policy. It would be simple to mandate strict controls at many places in the network. However, that may not meet the needs of the business or the tolerance of the users. More to the point in this article is the requirement in the model that the disruptive influences be negated or minimized. Having a firewall or other access control device silently discard disruptive packets may be preferable to having a user or unconstrained process continue to spew garbage around the network.

Decide on the Methods of Securing the Channels and Interfaces
Some of the very first computing devices were designed to be managed locally and not remotely. Consoles consisting of a teletype device and a roll of paper were among the first interfaces to modern computing devices. Various methods were devised to extend these administrative interfaces beyond the confines of the frigid "Computer Room." The first efforts kept these interfaces out of band, meaning that separate wires ran from the physical port on the machine to a console in the operations room. In many cases, the wires from the remote terminals to the system were still visible because they were laid along the floor and could, therefore, be considered a secure channel. While this maintained a secure administrative channel, or path, that could not be tapped or exploited by others, it didn't scale as more and more computing and ancillary devices were placed into the computer room, each requiring its own console. When remote terminals became commonplace, administrative functions were allowed over that channel. In almost all cases, the operating systems were mature enough to require some form of authentication before critical management operations were allowed.

The out-of-band channels for secure remote administration of devices may no longer be applicable to large networks. There are costs associated with running separate secure networks for the sole purpose of out-of-band operations, and there is the impracticality of one-at-a-time access through the console port of each device. This applies equally to the practice of placing a modem on the console ports of devices, a deployment that is not considered secure because there are still many automated dialers looking for answering modems. For these reasons, in-band access has become the preferred method of operations for modern networks. Telnet is the oldest channel and interface for remote operations. Since its introduction, other remote interfaces have been opened for controlling, commanding, and operating devices.

Many attempts have been made to "secure" Telnet and its use as a command and control channel. These efforts address the vulnerabilities of the protocol, and some address the interface itself. The Berkeley Software Distribution (BSD) "r" command set, including rlogin, rsh, rexec, and others, was meant to be a substitute for the most common uses of Telnet within a trusted environment. It was assumed that the person initiating the command had previously been successfully authenticated. SSH was meant to be a secure replacement for the Berkeley "r" tools. The SSH console session has been widely deployed to remotely operate devices. This replicated the Telnet interface while replacing the channel. The protocol addresses machine authentication, user authentication, and session confidentiality and integrity. When used as it was intended, it can effectively replace Telnet as a secure interface and channel. The scp feature of SSH can also securely replace rcp, and it has been used as a replacement for FTP. Likewise, a Kerberized Telnet and Kerberized FTP have been released to do the same.

Several other efforts have also been undertaken to secure some of the other administrative interfaces and channels. For example, the security issues of Simple Network Management Protocol (SNMP) are being addressed with the options of SNMPv3 [5]. Also, applications that utilize HTTP can be secured with HTTP over SSL (HTTPS) (Secure Sockets Layer/Transport Layer Security [SSL/TLS]) [6]. At this time, it appears that SSL/TLS is emerging as a mechanism that can be utilized to provide some security to many different applications. Beyond the operational interfaces and channels, work has been done to secure some of the infrastructure and ancillary interfaces. Some routing protocols have built-in authentication and integrity checks, provided by signing the routing updates with a shared key. Each mechanism that has been secured has been the subject of a focused effort to address that specific interface and channel. However, unlike those named above, some channels, such as Syslog and Trivial File Transfer Protocol (TFTP), have not been explicitly secured at this time.

IP Security (IPSec) [7] was developed as a general-purpose mechanism that may be used to provide a secure wrapper around any unicast flow. Its cryptographic mechanisms can provide strong authentication, confidentiality, and integrity. While IPSec can be used to secure any flow, it may require additional infrastructure. A Public Key Infrastructure (PKI) must be established within the network. The alternative is to use preshared keys, a solution that is operationally intensive and doesn't scale well. IPSec also requires consistent time synchronization between the devices, as well as a consistent DNS. If these pieces are in place, the operations staff can utilize IPSec to secure each of the needed operations channels. If the operators and administrators choose this method, then they should ensure that the unsecured channels are unavailable to anyone but themselves. For example, if the Telnet channel is secured with IPSec, then the Telnet port on remote devices should be closed for inbound access.
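The scaling problem with preshared keys is easy to quantify: a full mesh of n devices needs a distinct key for every pair of devices, and each key must be generated, distributed, and periodically changed by hand. The arithmetic below makes the quadratic growth concrete.

```python
def preshared_keys_needed(n_devices):
    """Pairwise keys for a full mesh: n choose 2, i.e., n * (n - 1) / 2."""
    return n_devices * (n_devices - 1) // 2

for n in (10, 100, 1000):
    print(n, preshared_keys_needed(n))
# 10 devices already need 45 keys; 1000 devices would need 499500.
```

A PKI replaces those pairwise secrets with one certificate per device, which is why it is the scalable (if infrastructure-heavy) alternative.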

One method of closing the exposures is through Access Control Lists. Routers and switches usually have mechanisms that can be used to allow inbound and outbound sessions from only certain devices. UNIX devices usually have the ability to run TCP wrappers that can provide access-control mechanisms for inbound and outbound sessions. If infrastructure devices can be grouped together, the operators may decide to place them behind an internal firewall. The decision to do that should be thought through. Generally the internal firewall will limit access of the protected devices to the specified interfaces [8]. If this is done for a group of network management stations, the net effect may be that any attempt to access those workstations from outside of the firewall would be denied. The only inbound flows may be SNMP responses and traps. This implementation would limit the operations staff to being physically present before they could operate those devices. On the other hand, the firewall would prevent users from mistakenly or intentionally forming sessions with those devices. Because any received packet must be assessed by the device, a firewall that discards packets before they reach the device helps to prevent denial-of-service attacks. The use of internal firewalls should not be an excuse for poor security measures on the protected devices. Regardless of how effective the operators feel their firewall is, the protected devices must be treated as if they were otherwise exposed.

Selecting the channels that will be used for the administration of the infrastructure devices also means selecting the software packages that use them. At this time, many devices are sporting Telnet, FTP, and HTTP channels, and the operators may utilize workstations that already have these packages loaded onto them. Also, networks comprising Microsoft NT servers may be managed remotely by the NT administrative tools, which commonly run on NetBIOS over TCP/IP (NBT). When given the choice, the operations staff will most often select easy-to-use and commonly available packages to access the interfaces of the infrastructure devices for remote operations and control. In all cases, these packages will also be available to the user community of the network. The users of the network may easily download packages of these types if they don't already have them on their machines. For example, the operators may choose to utilize SSH for secured access to some devices. It is a trivial task for a user to also download an SSH client package and start poking around the network to see what can be found. Even SNMP packages can be easily downloaded to the workstations of the users.

The operators and administrators must avoid the temptation to select a less-well-known package for infrastructure management based upon the thought that the users probably won't know about it. Users may not initially be aware that some packages are being used, but they can also download sniffer packages. Given enough time, even passive sniffing will give them enough clues to determine the channels used for administration. Once they know that, they can probably download the package themselves, and may then attempt to use it to explore the network. It should also be noted that the more heavily used packages have been scrutinized much more closely than the newer or less-used packages. As a very general rule, the older a package gets, the more it becomes trusted, because more people have been using it, and probably attempting to break it.

As described above, and as it is seen in the diagram of the model, some of the channels that are available to the operators are also available to the users. This means that if the operators utilize Telnet to control their routers, it may be possible for a user to also initiate a Telnet session to a router. There must be an extremely strong discriminator to differentiate between the authorized operators and the unauthorized users before access to control the device is granted. Almost exclusively, the discriminator used is some form of authentication: an operator should be able to satisfy an authentication challenge, whereas an unauthorized user should not. A username and password is the most common form of in-band authentication. Specifically within Telnet and FTP, an in-stream challenge is presented to the user attempting a session; the user is asked for a username and then for a password. If these credentials match the values stored on the host, then the session is permitted. In these sessions, however, the credentials are exposed to casual observation; anyone with a packet-sniffing device will be able to plainly see the username and password. These credentials must be regarded as secrets that must be protected. If they are compromised or stolen, then the operators have lost control of their network. Some packages, such as SSH and Kerberos, have addressed these problems and avoid passing secrets in the clear during authentication.
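Challenge-response schemes of the kind SSH and Kerberos build upon can be sketched in a few lines. The following is an illustrative HMAC-based example, not either protocol's actual exchange: only a digest of a random challenge, never the password itself, would cross the network. The secret value is a placeholder.

```python
import hashlib
import hmac
import os

# Hypothetical shared secret; in practice this is the operator's stored
# credential, and it is never transmitted.
SECRET = b"operator-password"

def server_challenge() -> bytes:
    # A fresh random nonce per attempt defeats simple replay.
    return os.urandom(16)

def client_response(challenge: bytes, secret: bytes) -> bytes:
    # Only this digest is sent over the wire, never the secret itself.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, response)

nonce = server_challenge()
print(server_verify(nonce, client_response(nonce, SECRET), SECRET))
print(server_verify(nonce, client_response(nonce, b"wrong-guess"), SECRET))
```

A passive sniffer observing this exchange sees only the nonce and the digest; recovering the password from them requires an offline guessing attack, which is why the strength of the secret still matters.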

It must also be noted that some infrastructure devices do not offer any in-band channels for control. Many Channel Service Units/Data Service Units (CSU/DSUs) are not IP aware and do not offer any in-band channels for control. In cases like those, physical access may be the discriminator that prevents unauthorized users from controlling the device. Typically, a lock on a door or a cabinet would be the "challenge," and the key would be the authentication credential, which must be treated like a secret. It cannot be emphasized enough that these secrets must be protected. The CERT Coordination Center has written a very broad Tech Tip, which explores the topic of password security [9]. Many companies have found it very beneficial to periodically hold training courses to highlight the importance of this subject both to their operators and to their users.

Ancillary Channels Also Require Security
One of the parallel problems with using authentication credentials is their distribution. Many devices are capable of maintaining a local database of usernames and passwords. However, maintaining identical databases on each device throughout a large network is infeasible. More often, the authentication credentials are stored in a centralized database, and an Authentication, Authorization, and Accounting (AAA) protocol is used to transfer them as needed. The AAA protocols most often used are Remote Access Dial-In User Service (RADIUS), TACACS+, and Kerberos authentication. Each of these has different characteristics and security mechanisms. Kerberos authentication was designed to transport authentication material securely; a password is never transferred across the network in this architecture. This protocol has withstood the test of time, but it has been difficult to establish in networks that aren't committed to maintaining it. This situation seems to be changing because more "productized" versions are becoming available on the market. TACACS+ has a mechanism to hide the exchanges between the TACACS+ client and the server, and it is also capable of transferring authorization rules for each user. RADIUS likewise uses a mechanism to hide portions of the exchange between the RADIUS client and server.
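As a concrete illustration of one of these hiding mechanisms, the following is a sketch of the User-Password hiding scheme that RADIUS defines (RFC 2865): the password is padded to a 16-octet boundary and XORed, block by block, with an MD5 keystream derived from the shared secret and the per-request authenticator. The secret and password values here are placeholders, and this is an illustration of the scheme, not a usable RADIUS client.

```python
import hashlib

def radius_hide_password(password: bytes, shared_secret: bytes,
                         authenticator: bytes) -> bytes:
    """Hide a User-Password attribute as described in RFC 2865."""
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        # Keystream block: MD5(secret + previous ciphertext block),
        # seeded with the 16-octet request authenticator.
        key = hashlib.md5(shared_secret + prev).digest()
        block = bytes(p ^ k for p, k in zip(padded[i:i + 16], key))
        out += block
        prev = block
    return out

def radius_unhide_password(hidden: bytes, shared_secret: bytes,
                           authenticator: bytes) -> bytes:
    """Reverse the hiding; the server does this to recover the password."""
    out, prev = b"", authenticator
    for i in range(0, len(hidden), 16):
        key = hashlib.md5(shared_secret + prev).digest()
        out += bytes(c ^ k for c, k in zip(hidden[i:i + 16], key))
        prev = hidden[i:i + 16]
    return out
```

Note that this hides only the password attribute, not the rest of the packet, and its strength depends entirely on the shared secret; it is an obfuscation of one field rather than encryption of the session.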

Beyond this, the channels for telemetry, audit, and accounting may need to be secured. There are no inherent mechanisms to secure syslog at this time. SNMPv1 may be protected with a Community String, but that protection is considered weak; one mitigation is to allow only read-only access to the SNMP interface. SNMPv3 adds many of the security features that have been requested for this protocol. Other channels that are required by the operations staff should also be critically reviewed, because many forms of attack target open channels.

It would be appropriate for the operations staff to keep up with new exploits and to assume that the users of the network have access to the latest "hacker" tools. It is quite common for people to hear about an exploit or published vulnerability and then "try it out" in the nearest available network. For this reason, it should be in the security policy of the network that "security patches" be given the highest priority and should be loaded on the affected platforms as soon as they are available and have been approved for the environment.

When any security mechanism is applied, the appropriateness and applicability of the solution should be questioned. On the surface, some security solutions may appear to be good; however, their applicability to the situation must be verified. As an example, SSL may be used to secure HTTP traffic, and it is commonly found in many Web browsers. Unfortunately, few people explore the browser options that are enabled by default. In most browsers, SSLv2 is still available, even though it has published and exploitable vulnerabilities. Additionally, even in SSLv3, which negates the vulnerabilities of SSLv2, low key-length cipher suites are still available and enabled by default; in many cases, a null-cipher crypto algorithm is available. In the internal networks of many companies, SSL may be selected and implemented using a self-signed certificate, and care must be taken to ensure that this certificate is the one distributed to each administrative workstation. SSL sessions may be formed without certificates supplied by either endpoint, and an attacker could exploit this through a man-in-the-middle attack. Another example is the use of SSH. SSHv1 has known vulnerabilities. If the administrators decide to deploy SSH for the control of remote infrastructure devices, they should first decide whether they need to worry about attacks against those known vulnerabilities in their infrastructure. If they do, then they should either deploy SSHv2, which addresses the vulnerabilities of SSHv1, or explore the use of Telnet over IPSec.
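To make the point concrete, here is a sketch, using Python's standard ssl module, of a client-side context hardened along the lines described: legacy protocol versions refused, peer verification required, and null or export-grade cipher suites disallowed. The certificate path is hypothetical, standing in for the self-signed certificate distributed to each administrative workstation.

```python
import ssl

# Client context with sensible verification defaults.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Refuse the legacy protocol versions with published vulnerabilities.
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Require and verify the peer certificate, defeating the
# certificate-less man-in-the-middle attack.
ctx.verify_mode = ssl.CERT_REQUIRED
ctx.check_hostname = True
# Trust only the certificate we distributed ourselves (hypothetical path):
# ctx.load_verify_locations("/etc/mgmt/self-signed-ca.pem")

# Disallow null and export-grade cipher suites explicitly.
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT")
```

The lesson generalizes: the defaults of a security package are chosen for interoperability, not for the threat model of a particular network, so each option should be reviewed and tightened deliberately.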

In many cases, rather than using the "most secure" solution, a simpler solution may still provide adequate protection. The "most secure" solution, the one that mitigates all perceived threats, is usually too costly to implement. In many cases, network operators and administrators with many years of experience have decided that SSHv1 is adequate for their needs and that they can mitigate or minimize the exposure. In other cases, some operators are turning to SSHv2 or IPSec to cover the vulnerabilities that have been found in SSHv1. In some cases, the use of SNMPv1 may also be acceptable, as long as its exposures are understood and the operators determine that its use will not pose a problem.

Excessive "security" may also intolerably reduce the usability of the network. It is important to remember that the network is there for the users. Placing security restrictions upon them to keep them out of the infrastructure is like keeping the doors locked to the building boiler room. Untrained people entering that area may hurt themselves or they may cause serious problems to others. If they have malicious intent, they could damage the machinery. Excessive security for that analogy would be similar to locking the boiler room, locking the ingress and egress points to the building, and mandating that armed guards accompany anyone that is permitted to enter the building. In some cases, that may be appropriate for the perceived threat. However, in the case that this applies to an elementary school building, it is inappropriate and would make some parents think of moving their children to other schools.

The model described in this article may be used as a thought process to review an entire network at a high layer to see the relationships between the various devices. It may also be used to design the security policy and the acceptable use policy of the network. Another use for it may be to define the operational procedures for the operators to securely administer the network and to define how the infrastructure devices will communicate. However it is used, some compromises must be made between the desire to provide security and the usefulness of the network. The cost of the security mechanisms cannot be unreasonably high, and the mechanisms cannot change the business model of the company. The enforcement of the policy must be effective, yet above all it must not change the expectations of the users. In all cases, the administrators and operators must find some balance between their need to secure the infrastructure and the need for the users to have the ability to actually use their network.

[1] Internet Engineering Task Force DNS Extensions Working Group, last updated July 2000,

[2] Fraser, B., "Site Security Handbook," RFC 2196, September 1997.

[3] System Administration, Networking, and Security Institute,

[4] Fyodor, "NMAP: The Network Mapper,"

[5] Stallings, William, "Security Comes to SNMP: The New SNMPv3 Proposed Internet Standards," The Internet Protocol Journal, Vol. 1, No. 3, December 1998.

[6] Stallings, William, "SSL: Foundation for Web Security," The Internet Protocol Journal, Vol. 1, No. 1, June 1998.

[7] Stallings, William, "IP Security," The Internet Protocol Journal, Vol. 3, No. 1, March 2000.

[8] Avolio, Fred, "Firewalls and Internet Security," The Internet Protocol Journal, Vol. 2, No. 2, June 1999.

[9] CERT® Coordination Center, Tech Tips, "Protecting Yourself from Password File Attacks," Last revised February 12, 1999.

CHRIS LONVICK holds a Bachelor of Science degree from Christian Brothers College and is in the Consulting Engineering Department of Cisco Systems in Austin, Texas. He is currently the chair of the IETF Syslog Working Group. Chris can be reached at