Cisco on Cisco

Routing and Switching Case Study: How Cisco IT Architected Headquarters MAN


The Cisco Catalyst 6500 Series Switch and Gigabit Ethernet provide a platform for a campuswide standard and support for future applications.
BACKGROUND

The fabled growth that Cisco Systems® experienced during the 1990s was driven by a broader vision than the business community had ever anticipated. The growing availability of the public Internet and networking technologies made geographically dispersed groups manageable and distributed assets easier to access, and it enabled a single network to support data and voice. Cisco understood the power of the network it was helping to build and continued to pour resources into developing new solutions for old problems and supporting applications that would lead to better ways of doing business.

It was also a time when each new product from the Cisco engineering group answered new challenges that Cisco, and enterprise businesses just like it, was encountering. From a business and an IT perspective, it has always been important for Cisco to be an early adopter of its own technologies and products, acting as a technology showcase and advocate for customers, and demonstrating what the network makes possible.

CHALLENGE

Cisco, in many instances, has solved its own challenges first, designing solutions that would end up in customer networks just after they were tested and deployed by the Cisco IT team. Historically, when it was time to expand the network, the Cisco IT team would install the most appropriate and often the newest equipment available at the time. Usually, the equipment wasn't just new to customers; it was new to the IT team as well.

Over time, the Cisco headquarters production network in and around San Jose resembled a large patchwork quilt, stitched together from many product families and generations. Although patchwork makes great heirlooms, heirloom networks are expensive and difficult for any business to manage.

By the late 1990s, this phenomenal growth had taken its toll on the San Jose headquarters. The Cisco LAN had two sites: the original Tasman campus and the new Zanker campus, two-thirds of a mile away. The company needed to connect 50 locations in and around those two San Jose sites efficiently and cost-effectively. Connecting the sites was a challenge because high speeds would be needed to carry data and applications. Because the infrastructure was so varied, it often wasn't possible to determine the effects that adding new applications would have on network traffic and on the network itself, yet it was important to keep the network ahead of application needs. Although all the equipment worked, there was no standard across the enterprise, and without one, network management became more and more demanding. Its cost skyrocketed, as did the cost of spares, replacements, and training to support so many different products.

Beyond manageability, the future was calling, figuratively and literally. IP telephony loomed as a profoundly important application that would again force the IT team to rethink network design across the San Jose sites. Network convergence, moving data and voice across a single network, would save Cisco money in the long run and act as a platform for growth; but in the short term, it would tax the heterogeneous platforms that dotted the Cisco network infrastructure.

SOLUTION

The solution had to be threefold. First, the IT team needed to plan and deploy better connectivity between the two campuses. Second, it had to develop a single-platform standard that would support the growing demands for data, voice, and new applications. Finally, it needed to be well-positioned for unanticipated future applications. The answers emerged in two technologies: Gigabit Ethernet and the Cisco Catalyst 6500 Series Switch.

Connecting the San Jose Campuses: The San Jose Gigabit Ethernet Triangle

Darrell Root, Cisco IT network engineer for the San Jose LAN, described the campus connectivity challenge: "When we built the new Zanker campus, we needed to connect it with our original campus across North First Street on Tasman. We were looking at the options that would let us have high-speed connectivity between the two. In the end, we decided to lease dark fiber, and we opted to run Gigabit Ethernet throughout the campuses. The combination of fiber and capacity would give us the performance and bandwidth between the campuses that we really needed. Running Gigabit Ethernet on the LAN had a lot of advantages: It guaranteed plentiful bandwidth, and once we switched over, we wouldn't need to worry about new provisioning or running new fiber." It was also critical to provide separate redundant physical paths for this fiber, to be certain that any fiber failure would not bring the network down.

Standard for the Future: Cisco Catalyst 6500 Series Switch

As Cisco evolved, so did the network infrastructure of the LAN/MAN. Cisco IT had deployed several different routers within the San Jose LANs as new technology became available, but supporting multiple router models within the same network brought with it some management problems. Although each component had operational integrity and could solve a set of tactical problems, Cisco needed a standardized solution that would give administrators predictable application behavior and would make management, training, sparing, and repair cost-effective.

When the Cisco Catalyst 6500 Series began to offer both routing and switching capabilities, it became an attractive platform for standardization. Craig Huegen, chief architect for the global Cisco IT network, described the situation: "We already had a significant operational investment in the Cisco Catalyst 6500 Series. It was already present to support the beginnings of IP telephony, along with the data center and the core. But it had new capabilities, including high-density Gigabit Ethernet. Before, when we'd chosen to use Cisco Catalyst 6500s, we'd confined them to a campus, but with this new capability, we ended up migrating from the Cisco 12000 routers to the Cisco Catalyst 6500 Series for intercampus links. The Cisco 12000 is a great platform, but we decided on the migration for purely operational reasons: a campus environment supported entirely by Cisco Catalyst 6500s was going to be easier to administer. We could get economies of scale for training, spares, and maintenance, and we were driving down costs by driving down operational expenses."

Today, across both campuses at San Jose headquarters, the IT team has deployed the full range of Cisco Catalyst 6500 Series switches, including the 6-, 9-, and 13-slot models. Cisco is using a variety of feature cards, including supervisor engine modules and policy feature cards, to upgrade the network with new technology features as they become available. For example, Cisco IT built a Layer 3 (routing) core in the San Jose MAN while still using Cisco Catalyst 6500 Series switches by adding the Cisco Catalyst 6500 Series Supervisor Engine 2, which supports the Cisco Catalyst 6500 Multilayer Switch Feature Card (MSFC2), adding robust routing capabilities to the original switching functions. The cards act as Layer 3 gateways, moving traffic through the distribution and core layers into data centers across the metropolitan-area network (MAN). "Cisco Catalyst 6500s are an edge-to-core platform," Root says. "They are easier to use, and our people are familiar with them. It eases both training and spares; Cisco Catalyst 6500s are in our labs for training." There are other advantages to using policy feature cards, too. Root added: "Periodically, people misconfigure DHCP [Dynamic Host Configuration Protocol] servers on their desktops. With the policy feature cards, we can deny that traffic."
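One common way to enforce that kind of restriction on a Cisco Catalyst 6500 is a VLAN access control list (VACL), which the policy feature card applies in hardware. The following is a minimal sketch in Cisco IOS, assuming a hypothetical sanctioned DHCP server at 10.1.1.10 and user VLAN 10; the names, addresses, and VLAN number are illustrative, not the actual Cisco IT configuration.

    ! Hypothetical example: 10.1.1.10 is the sanctioned DHCP server for VLAN 10.
    ! In a VLAN access map, traffic permitted by the matched ACL takes the
    ! sequence's action; traffic denied by the ACL falls through to the next
    ! sequence. So the real server's replies are forwarded, while any other
    ! host answering from UDP port 67 (bootps) is dropped.
    ip access-list extended ROGUE-DHCP
     deny   udp host 10.1.1.10 eq bootps any
     permit udp any eq bootps any
    !
    vlan access-map BLOCK-ROGUE-DHCP 10
     match ip address ROGUE-DHCP
     action drop
    vlan access-map BLOCK-ROGUE-DHCP 20
     action forward
    !
    vlan filter BLOCK-ROGUE-DHCP vlan-list 10

Because sequence 20 has no match clause, all remaining traffic, including the sanctioned server's replies, is forwarded normally.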

Huegen was equally enthusiastic: "The Cisco Catalyst 6500 Series is a 'Swiss Army knife' platform. We can use the Cisco Catalyst 6500 as the access layer switch, but it also works in the core and for distribution. Its services and features, like the content delivery, content switching, Secure Sockets Layer, and firewall modules, give us real flexibility. We use the T1 blades for PSTN connectivity and FXS blades (analog phone port blades) for analog devices such as fax machines. The network analysis modules let us look for problems. The Cisco Catalyst 6500 does it all."

IP Telephony: The Future Calls

Before IP telephony came on the scene, Cisco needed to support bandwidth growth across the LAN, especially in and near the data centers. The Cisco 7500 routers, Cisco Catalyst 5000 Series switches, and software-based routers in use earlier weren't going to address Gigabit Ethernet speeds and the required densities effectively, and the data centers needed greater speeds, as did the LAN/WAN. But one application more than any other drove Cisco IT to a quick migration to the Cisco Catalyst 6500 Series switches, as Root explains: "In the data network and in the core, the Cisco Catalyst 6500 Series was important because of the bandwidth demands in the data center and the core, but the most important driver for Gigabit Ethernet and the Cisco Catalyst 6500 in the LAN was IP telephony. Up to that point, we had low utilization numbers in the LAN, and our Cisco Catalyst 5000s were doing most of the job. They had some Gigabit Ethernet capabilities, but when IP telephony came around, we wanted the powered line cards to power our telephones. I guess we were preparing for the IP telephony revolution." The Cisco Catalyst 6500 Series supports inline power, which makes IP phone deployment, and wireless LAN deployment as well, much easier. It supports the quality-of-service (QoS) features that IP telephony and IP video transmission demand, as well as multicast, which is required to broadcast Cisco IP/TV® communications within the company. The Cisco Catalyst 6500 Series also supports high-availability features that help the Cisco IT LAN teams provide 99.998 percent availability for many of the data center LANs within Cisco.
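As an illustration, an access port serving a desk with an IP phone combines inline power, an auxiliary voice VLAN, and QoS trust in a few lines of Cisco IOS. This is a minimal sketch; the interface and VLAN numbers are hypothetical, not the actual Cisco IT configuration.

    ! Hypothetical access port for a PC plus IP phone on a Catalyst 6500 IDF switch.
    interface FastEthernet5/1
     description User port: PC and IP phone
     switchport
     switchport access vlan 10         ! data VLAN for the PC
     switchport voice vlan 110         ! auxiliary VLAN for phone traffic
     power inline auto                 ! deliver inline power to the phone
     mls qos trust cos                 ! honor the phone's CoS markings
     spanning-tree portfast            ! bring the edge port up immediately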

A Detailed Look at the San Jose MAN

In the mid-1990s, the San Jose LAN expanded along Tasman Drive in San Jose, initially in three interconnected sites (sites 1, 2, and 3). Buildings A through P in these three sites were connected by trenching dark fiber between the buildings within the campus. Both single-mode and multimode fiber were installed and turned up as needed between pairs of buildings, and each building had at least two fiber paths to every other building for reliability. When the Cisco San Jose campus needed to expand beyond these first 16 buildings, no adjoining property was available, so Cisco purchased land more than half a mile farther east along Tasman Drive and, in 2000, completed buildings 1 through 19 in what would be called Site 4. A second LAN interconnected these newer Site 4 buildings, this time in a much more organized and hierarchical fashion, with primary and secondary physical paths connecting each building to one of three centralized hub sites.

Figure 1. Fiber Topology of Cisco Headquarters in San Jose


Cisco IT was able to lease dark fiber along Tasman Drive to connect the LAN in sites 1 through 3 to the LAN in Site 4. When Cisco expanded once more in San Jose and Site 5 (buildings 20 through 25) was added to the campus, the same hierarchical LAN connections were built to a hub site, and dark fiber was again leased, this time to connect Site 4 to Site 5. However, there was no way to provide a third, separate physical path to back up the dark fiber between these three sites, so Cisco IT leased a Gigabit Ethernet service from SBC to provide the redundant path, forming the third leg of a Gigabit Ethernet triangle connecting the three main sites. In the end, the three LAN sites were interconnected to become a miniature MAN. Figure 1 shows the MAN, and the LANs it connects, as they appear today.

Although it's not easy to see from the fiber map, each of the switches in each building connects into a standard hierarchy in the Cisco San Jose MAN. Each campus contains core, distribution, and access layers. Before standardization on the Cisco Catalyst 6500 Series occurred, the core contained several different Cisco router models. This standardization helps control management and maintenance costs and complexity.

At the top of the hierarchy is the router MAN backbone, consisting of two Cisco Catalyst 6500 routers for each of the three geographical regions on the campus MAN. These routers interconnect with each other and also connect the Cisco sites with the outside world. Traffic from the 50 locations in San Jose aggregates onto the MAN and then makes outside connections through several OC-48 rings to two central offices: one in San Jose (connecting to San Francisco) and one in Milpitas (connecting to Oakland). All other campuses connect over dual physical links, ranging from T1 to OC-12.

Figure 2. San Jose Desktop LAN Architecture


The router MAN backbone connects to a redundant pair of site backbone routers (also Cisco Catalyst 6500s), which connect to one or more desktop cluster router pairs. These desktop cluster routers aggregate traffic from each of the 41 buildings on the main San Jose campus (other Cisco buildings in San Jose are at a greater distance from the main campus). Traffic within each building is aggregated by a pair of routers in the building distribution frame (BDF), one BDF per building. These interconnect with one or more Cisco Catalyst 6500 switches in the intermediate distribution frames (IDFs). Each desk connects to the nearest IDF switch over switched 10/100-Mbps Ethernet running on category 5 twisted-pair copper cable. There are two IDFs on every floor of a building, because the San Jose buildings are large enough that cable runs to a single IDF would exceed the 100-meter limit for category 5 cable. Figure 2 shows the hierarchy of Cisco Catalyst 6500s, both routers and switches, within the San Jose MAN.

Every desk or office, in every building, has a set of six RJ-45 ports. Older buildings have four IP ports and two telephony ports. Newer buildings are wired without a separate telephony network (not having to build and maintain two separate copper cable networks per building saves a lot of money); their four IP ports carry both data and voice. The IP ports support switched 10/100-Mbps Ethernet back to the Cisco Catalyst 6500 Series Switch in the IDF. Two IDFs reside on each floor of every building, with one or more Cisco Catalyst 6500 Series switches supporting roughly two live ports per person. Each switch is connected by two Gigabit Ethernet fiber links to the two Cisco Catalyst 6500 Series routers in the building's BDF. The two routers run Hot Standby Router Protocol (HSRP) between them, along with spanning tree for path redundancy. Each BDF switch connects to a desktop cluster hub (a Cisco Catalyst 6500 Series router) through a routed Gigabit Ethernet link. Figure 3 shows connectivity in each building within the San Jose campus, with Cisco Catalyst 6500 switches connecting to Cisco Catalyst 6500 routers in the BDF.

Figure 3. Cisco Catalyst 6500 Series Standardization Strategy in Cisco Buildings

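The HSRP arrangement described above can be sketched in a few lines of Cisco IOS on one of the two BDF routers. The addresses, VLAN, and group numbers here are hypothetical; the actual Cisco IT values aren't given in this study.

    ! Primary BDF router for a hypothetical user VLAN.
    interface Vlan10
     description User VLAN gateway (primary BDF router)
     ip address 10.10.10.2 255.255.255.0
     standby 10 ip 10.10.10.1        ! shared virtual gateway address
     standby 10 priority 110         ! higher priority makes this router active
     standby 10 preempt              ! reclaim the active role after recovery

The peer router would be configured the same way except for its own interface address and a lower (default) priority. Desktops use the virtual address 10.10.10.1 as their default gateway, so a failure of either BDF router is transparent to users.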

Deployment

IP telephony took 15 months to deploy, including the replacement of all Cisco Catalyst 5000 switches with Cisco Catalyst 6000 Series switches across 55 buildings; buildings were retrofitted at an average rate of one per week. Putting IP phones on each desk took more time, but the infrastructure had to be deployed first.

The Cisco Catalyst 6500 Series switches brought with them inline power, meaning that IP phones did not need a separate power connection. When Cisco IT first deployed IP phones during early pilots, the phones required separate power supplies; the extra supplies were bulky, and each desk that got an IP phone needed extra wires to support it. Wireless access points also run on inline power. Before the Cisco Catalyst 6500 inline power upgrade became available, wireless access points needed special power wiring installed in the ceilings for each new access point, which raised labor costs significantly. Upgrading the IDF switches to the Cisco Catalyst 6500 Series with inline power eliminated the need for this extensive IP phone and wireless access point power wiring.



RESULTS

Supporting IP telephony and connectivity between the two campuses meant that the Cisco IT team had to rebuild the Cisco network uniformly. The results were impressive, and availability improved dramatically: campus LANs have surpassed 99.995 percent availability, and fully power-protected data centers have reached 99.998 percent availability, which translates to less than 12 minutes of unplanned downtime per year.
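The downtime figure follows directly from the arithmetic: a year contains 365 × 24 × 60 = 525,600 minutes, so (1 − 0.99998) × 525,600 ≈ 10.5 minutes of unplanned downtime per year.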

The IT team found that it was far easier to troubleshoot problems on a single hardware platform with a single diagnostic screen, or a small set of them, and a standard configuration. When administrators needed to fix problems, their efficiency and speed increased dramatically. Outages were less frequent, and when they did occur, they did not last as long.

The identical architecture from building to building also makes it easier to anticipate problems: when the team finds a flaw in one building, it can prevent the same problem in all the others.

Cisco users have seen huge benefits. Gigabit connectivity lets them move files faster, they have faster access to the data center, and IP phones, with more and better features than their analog cousins, sit on every desk.

NEXT STEPS

Because of their success with Gigabit Ethernet, the team plans to deploy 10 Gigabit Ethernet in the core for even greater performance. "Cisco IT has a number of Gigabit Ethernet-connected servers in data centers right now; most are performing backups between data centers on the San Jose campus. Although individual servers tend not to generate more than 200 Mbps of traffic at a time, the aggregate of this traffic is pushing Cisco IT to install 10 Gigabit Ethernet in the core between the two campuses. Cisco IT plans to migrate the core of the San Jose campus to 10 Gigabit Ethernet by upgrading its campus backbone Cisco Catalyst 6500 Series switches with Cisco Catalyst 6500 Series Supervisor Engine 720s and 4-port 10 Gigabit Ethernet cards," says John Cornell, technical lead in the Cisco IT Network Design group. Cisco IT is still evaluating other technologies, such as running the Small Computer System Interface over IP (iSCSI) protocol to extend Fibre Channel fabric and attached storage devices to IP servers between data centers, which could drive further bandwidth growth on the San Jose campus backbone. The use of 10 Gigabit Ethernet in the core and distribution layers also allows Cisco IT to extend Gigabit Ethernet to the desktop. "In future network build-outs, we will support 10/100/1000-Mbps Ethernet to the user using the Cisco Catalyst 6500 Series platform," Huegen added.
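For reference, a routed 10 Gigabit Ethernet core link on a Supervisor Engine 720-based Catalyst 6500 needs only a few lines of Cisco IOS. This sketch uses hypothetical slot, port, and addressing, not Cisco IT's actual plan.

    ! Hypothetical point-to-point 10 Gigabit Ethernet backbone link.
    interface TenGigabitEthernet4/1
     description Campus backbone: Site 1 core to Site 4 core
     ip address 10.0.0.1 255.255.255.252   ! /30 point-to-point subnet
     mls qos trust dscp                    ! preserve QoS markings across the core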

In addition, the team continues to investigate new and planned functions of the Cisco Catalyst 6500 Series. Cisco IT is already using Network Analysis Module (NAM) and NAM-2 blades on Cisco Catalyst 6500 Series switches in data centers and campus distribution layer gateways to provide application-level visibility into the network infrastructure for real-time traffic analysis, performance monitoring, and troubleshooting. Cisco IT is also doing preliminary planning to use other Cisco Catalyst 6500 Series service modules in several new areas:

  • Standardizing on the high-performance Cisco Firewall Services Module at the Internet gateway and at data center gateways, supporting Layer 2 and Layer 3 firewall policy checking with stateful failover.
  • Providing a content switching service within the Cisco network for any new application or application architecture spanning multiple servers, delivered from a pool of Cisco Catalyst 6500 Series switches in the data centers using the new Cisco Content Switching Module.
  • Providing accelerated SSL traffic performance for secure Web-enabled applications using the SSL Services Module on the same or nearby Cisco Catalyst 6500 Series services switches within the data centers.
  • Using the Cisco Communication Media Module (CMM) in field sales offices for consolidated T1 services in sites with a Cisco Catalyst 6500 Series distribution layer.