Physical Infrastructure Deployment for Level 3 Site Operations
The Level 3 Site Operations Area provides the switching, compute, and storage resources needed to efficiently operate a plant-wide IACS architecture. This area is the foundation for data collection and application hosting in the industrial setting. Level 3 equipment may be physically housed in an Industrial Data Center, in a rack in the control room, or in several other locations on the premises. Level 3 Site Operations applications include MES functions such as Overall Equipment Effectiveness (OEE), lot traceability, preventive maintenance schedules, process monitoring/management, safety/security dashboards, and productivity key performance indicators (KPIs). Continuity of service is imperative because these functions are used for daily decision-making on an ever-increasing basis. Manufacturing downtime is readily measured in minutes and in thousands of dollars from missed customer commitments. Reliable and secure network support for these applications keeps operations and business communication running smoothly.
The successful deployment of Level 3 Site Operations depends on a robust network infrastructure built on a rock-solid physical layer that addresses the environmental, performance, and security challenges of deploying IT assets (servers, storage arrays, and switching) in industrial settings. Level 3 Site Operations is a key convergence point for IT and OT. Many businesses obtain these functions as a pre-engineered package, the Industrial Data Center (IDC). IDC systems include the appropriate IT assets housed in a suitable cabinet, with patching, power, grounding/bonding, identification, and physical security considerations already addressed, making them a plug-and-play solution.
The Level 3 Site Operations Area presented in this chapter is a model for integrating a scalable, modular, logical network and compute systems into the physical infrastructure. Some benefits of this approach are listed below:
- Improves network availability, agility, scalability and security
- Reduces operational costs, including energy costs, with improved efficiencies
- Can help to simplify resource provisioning
- Lays the foundation for consolidation and virtualization
- Helps to create a path to future technology requirements such as 10/40GbE
This chapter includes the following major topics:
- Key Requirements and Considerations
- Physical Network Design Considerations
- Level 3 Site Operations—The Industrial Data Center from Rockwell Automation
Key Requirements and Considerations
Industrial network deployments have evolved over the years from a network gateway layout to a converged plant-wide architecture. CPwE architecture provides standard network services to the applications, devices, and equipment found in modern IACS applications and integrates them into the wider enterprise network. The CPwE architecture provides design and implementation guidance to achieve the real-time communication and deterministic requirements of the IACS and to help provide the scalability, reliability and resiliency required by those systems.
In support of industrial network performance, many physical infrastructure aspects of the Level 3 Site Operations serve an important role, and must be considered in the design and implementation of the network:
- Industrial Characteristics—Plant/site networking assets and cabling used in Level 3 Site Operations are typically not environmentally hardened and are therefore almost exclusively installed in controlled, IP20 or better environments. The main environmental risks at Level 3 Site Operations are thermal management of the heat dissipated by equipment and power quality considerations.
- Physical Network Infrastructure Life Span—IACS and plant/site backbone networks can be in service for 20 years or more. Hardware used in Level 3 Site Operations, being IT gear, has a much shorter life span, generally three to five years. The infrastructure used to connect and house the hardware, such as cabinets, cabling, connectivity, and enclosures, has a much longer life span, generally 10-15 years. Specifying higher performance cabling enables the needs of current and future data communications to be fully met, and the choice between copper and fiber-optic cabling determines the ability to support higher data rate transport requirements.
- Maintainability—Moves, adds, and changes (MACs) at Level 3 have dependencies that affect many Cell/Area Zones. Changes must be planned and executed correctly to avoid bringing down the IACS process. Proper cable management, such as bundling, identification, and access, is vital for proper Level 3 Site Operations maintenance.
- Scalability—The high growth of EtherNet/IP and IP connections can strain network performance and cause network sprawl that threatens uptime and security. A strong physical building block design accounts for traffic growth and management of additional cabling to support designed network growth. Use a physical zone topology with structured copper and fiber-optic cabling chosen for high data throughput. Choose building block pre-configured solutions to enable a network infrastructure comprised of modular components that scale to meet increasing industrial Ethernet communications needs in the IACS network.
- Designing for High Availability—A robust, reliable physical infrastructure achieves the service levels required of present and future IACS networks. The use of standards-based cabling with measured, validated performance confirms reliable data throughput. Redundant logical and physical networks help assure the highest availability, and properly designed and deployed pathways provide resilient, redundant cable paths.
- Network Compatibility and Performance—Cable selection is the key to optimal physical network performance. Network performance is governed by the poorest performing element in any link. Network compatibility and optimal performance are essential from port to port, including port data rate and cabling bandwidth.
- Grounding and Bonding—A well architected grounding/bonding system is crucial to industrial network performance at every level, whether internal to control panels, across plants/sites, or between buildings. A single, verifiable grounding network avoids ground loops that can degrade data and have equipment uptime and safety implications.
- Security—Network security is a critical element of network uptime and availability. Physical layer security measures, like logical security measures, should follow a defense-in-depth approach. The Level 3 Site Operations physical defense-in-depth strategy could take the form of locked access to industrial data center/control room spaces and key card access to cabinets to help limit access, use of lock-in/block-out (LIBO) devices to control port usage, and keyed patch cords to avoid inadvertent cross patching. Using a physical strategy in concert with the logical strategy helps prevent inadvertent or malicious damage to equipment and connectivity, supporting service level goals.
- Wireless—Unified operation of wireless APs requires a WLC at Level 3 Site Operations and distribution of Lightweight Wireless Access Points (LWAPs) across the Industrial Zone and Cell/Area Zones. Autonomous wireless APs, typically WGBs, in Cell/Area Zones involve cabling for the APs and WGBs. The Industrial Zone backbone media selection and the cabling for APs using PoE are critical for future readiness and bandwidth considerations. PoE is evolving to deliver more power over copper cabling, so understanding industrial applications with scalability and environmental considerations is critical (see the PoE budget sketch following this list).
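To make the PoE planning consideration above concrete, the short sketch below (Python) checks whether a proposed set of PoE-powered access points fits within a switch's available PoE budget. The IEEE 802.3af/at/bt per-port figures are standard values; the switch budget and device counts are hypothetical assumptions for the example.

```python
# Rough PoE budget check: do the planned powered devices fit the switch budget?
# Per-port maximums sourced at the switch port, per IEEE 802.3af/at/bt.
POE_CLASS_WATTS = {
    "802.3af (PoE)": 15.4,
    "802.3at (PoE+)": 30.0,
    "802.3bt (Type 3)": 60.0,
    "802.3bt (Type 4)": 90.0,
}

def poe_budget_check(switch_budget_w, devices):
    """devices: list of (description, count, poe_class) tuples."""
    total = 0.0
    for desc, count, poe_class in devices:
        draw = count * POE_CLASS_WATTS[poe_class]
        total += draw
        print(f"{desc}: {count} x {POE_CLASS_WATTS[poe_class]} W = {draw:.1f} W")
    headroom = switch_budget_w - total
    status = "OK" if headroom >= 0 else "OVER BUDGET"
    print(f"Total draw {total:.1f} W vs. budget {switch_budget_w} W ({status}, headroom {headroom:.1f} W)")

# Hypothetical example: 12 LWAPs and 4 WGB-area APs on a 740 W PoE budget.
poe_budget_check(740, [
    ("LWAP access points", 12, "802.3at (PoE+)"),
    ("WGB-area access points", 4, "802.3at (PoE+)"),
])
```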
Industrial Characteristics
Room Environment
A sustainable cooling system design that follows industry best practices is essential to the success of a modern Level 3 Site Operations deployment. The result is optimized cooling management and a more energy-efficient system, which protects IT equipment from unplanned downtime due to overheating and yields significant operational expense (OpEx) savings.
The Level 3 Site Operations room is divided into a hot aisle/cold aisle arrangement to separate the cold inlet air on the front side of the cabinets from the hot exhaust air on the rear side of the cabinets. Cold air in this design is fed through the raised floor via a Computer Room Air Conditioning (CRAC) unit. Cold air can be delivered by several different means, provided the hot aisle/cold aisle arrangement is maintained. Hot air in this design is exhausted through fans in the ceiling.
Thermal Ducting
Network switches deployed in data centers (for example, Catalyst 4500-X/6800) often use side-to-side airflow cooling. This airflow design requires less vertical space and permits high switch port density. Given proper inlet air conditions, these switches are well designed to cool themselves. However, large bundles of cabling driven by high port density can impede airflow. Also, hot exhaust air can recirculate to the intake, raising inlet temperatures and inhibiting the self-cooling ability of the switch. For network equipment that uses side-to-side airflow patterns, in-cabinet ducting optimizes cooling system efficiency by establishing front-to-back airflow patterns throughout the cabinet. By using Computational Fluid Dynamics (CFD) analysis and testing, ducting solutions have been developed that improve inlet air conditions for cabinet applications.
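To give a feel for the airflow a cabinet requires, the sketch below applies the common data center rule of thumb, CFM ≈ 3.16 × watts / ΔT(°F), to a cabinet heat load. The equipment wattage and allowable temperature rise are hypothetical values for illustration; actual sizing should follow CFD-validated ducting guidance such as that referenced above.

```python
# Estimate cabinet airflow using a common data center rule of thumb:
#   CFM ~= 3.16 * heat_load_watts / delta_T_degF
# where delta_T is the allowable rise from inlet (cold aisle) to exhaust (hot aisle).
def required_airflow_cfm(heat_load_watts: float, delta_t_degf: float) -> float:
    return 3.16 * heat_load_watts / delta_t_degf

# Hypothetical cabinet: two switches at 350 W each plus 4 kW of servers,
# with a 25 degF allowable inlet-to-exhaust temperature rise.
heat_load = 2 * 350 + 4000  # watts
print(f"Approximate airflow required: {required_airflow_cfm(heat_load, 25):.0f} CFM")
```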
Physical Network Infrastructure Life Span
IACS and plant/site backbone technologies can have a lifespan of up to 20 years. IT hardware has a much shorter life span, generally in the 3-5 year range. While a three-year refresh cycle is commonplace for IT personnel in enterprise and data center domains, it is not routine in current OT domains. Level 3 physical infrastructure that supports IT hardware should have a much longer lifespan, usually three to four IT hardware refresh cycles. As a result, it is important to consider the structural dependability of cabinets, future-proof types of cabling, and pathways sized for capacity growth projections. It is also important to adhere to structured cabling practices and to install upgraded and spare media to help protect against physical layer obsolescence. Making informed physical infrastructure decisions from the beginning affects the ability to complete future projects in a timely and cost-effective manner.
Patching
The best practice for patching is to land fiber cable into an enclosure or box following a structured cabling approach. When fiber is terminated, several inches of the glass strand are exposed. This part of the fiber is fragile and needs the enclosure or box to help protect it from damage. Terminating directly to a connector without protection or support can lead to failures. For rack/cabinet installations, a 19" style fiber enclosure is a suitable choice. When installing fiber into a control panel, components such as the DIN Patch Panel or Surface Mount Box provide patching. Another advantage of structured cabling is the ability to build in spares, ready for use to accommodate growth or for recovery from link failure.
Bundling
The best practice for bundling numerous standard jacket fiber cables is to use a hook and loop cable tie. This type of tie does not overtighten the bundle, which can lead to damage. However, environmental exposure can degrade hook and loop ties; in those instances, a more durable cable tie for fiber is the elastomeric tie. It can stretch to help prevent over-tensioning and guards against weather, oil, salts, and so on. Nylon cable ties can potentially damage the cable and are not recommended for standard jacket fiber cable, but they are appropriate for armored or dielectric double-jacketed fiber cabling.
Identification
Identification facilitates MACs and aids in troubleshooting. Fiber cabling identification includes labeling and color coding. Labeling can be applied to enclosures, ports, patch cords, horizontal cables, and so on. Because some fiber cables have a small diameter, applying a label can be a challenge; a sleeve can be applied to increase the diameter and make the label readable. The text on the labeling should follow TIA-606A standards. Labels typically break down the location by enclosure/rack, RU, port, and so on, in a scheme that is consistent throughout the facility. For example, CP1.PP1.02 could mean control panel 1, patch panel 1, port 2. Color coding can identify VLANs, zones, functional areas, network traffic type, and so on, and can be achieved with labels, hook and loop cable ties, and color bands. Although fiber cable is color coded by cable type, additional color coding can be applied for clarity using the preceding methods.
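The sketch below illustrates one way to generate and parse labels that follow the hierarchical CP1.PP1.02 pattern described above (control panel, patch panel, port). The exact field names and zero-padding are assumptions for the example; an actual scheme should follow TIA-606A and the site's own conventions.

```python
# Generate and parse hierarchical identifiers like "CP1.PP1.02"
# (control panel 1, patch panel 1, port 2). Field layout is illustrative only.
def make_label(control_panel: int, patch_panel: int, port: int) -> str:
    return f"CP{control_panel}.PP{patch_panel}.{port:02d}"

def parse_label(label: str) -> dict:
    cp, pp, port = label.split(".")
    return {
        "control_panel": int(cp.removeprefix("CP")),
        "patch_panel": int(pp.removeprefix("PP")),
        "port": int(port),
    }

print(make_label(1, 1, 2))        # CP1.PP1.02
print(parse_label("CP1.PP1.02"))  # {'control_panel': 1, 'patch_panel': 1, 'port': 2}
```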
Pathway Decisions—Type and Sizing
Pathways for cables are critical for distributing copper and fiber cabling securely across the plant/site while helping protect from physical threats that can degrade or damage the cable. The TIA-1005 standard, ODVA Media Planning and Installation manual, and TIA resources provide recommendations on pathways including cable spacing and installation guidance to minimize risks from environmental threats. Several options exist for routing cables via pathways that simplify deployment using best practices for various environments across the plant. Figure 5-1 describes some of the options.
Figure 5-1 Pathway Options
The simplest and lowest cost pathways are J-Hooks. J-Hooks can be mounted to a wall, beam, or other surface. Network cables are held in place by the hook feature and often secured with a cable tie. The J-Hook maintains proper bend radius control when transitioning down. J-Hook systems should be used with cables rigid enough to maintain an acceptable bend between spans and are suitable for small bundles. Standard fiber distribution cable is not suitable for J-Hooks unless supported by corrugated loom tube.
When routing large or many cable bundles, a tray or wire basket can be installed overhead to form a solid and continuous pathway. Since cabling is exposed to the plant environment, cable jackets must be specified for the environment. Cable tray material should also be rated for the environment. An enclosed tray such as a fiber tray provides a high level of environmental protection for light to heavy cable densities. For highest protection with few network cables, conduit is the preferred choice. Care must be taken to maintain proper bend radius and lineal support to prevent cable sag.
Designing for High Availability
Cable Management
Proper cable management is essential for high system performance, availability, and reliability. A resilient cable management system is critical in areas such as MACs (for example, identification and color coding) to verify that the proper action is taken and to aid in troubleshooting. Care must also be taken to help protect the cabling from excessive bend, flexing, sag, rubbing, crush, and so on. A strong cable management system needs to be designed around the following key elements:
- Bend radius control
- Cable routing and protection
Bend Radius Control
It is important to have a bend radius greater than or equal to the manufacturer-specified minimum when handling fiber-optic cabling to help prevent it from excessive bending, which can cause physical damage. Two considerations for bend radius control are:
- Dynamic—Cable flexing (especially during installation)
- Static—Cable held in place
Fiber cable manufacturers specify both radii, where the dynamic bend radius can be 20 times the Outer Cable Diameter (OD) and the static bend radius is typically 10 times the OD. Although fiber strands are very thin and made of glass, the strand is durable and can withstand some flexing. However, flexing in general can cause deformities in the fiber, leading to signal loss over time (microbend). Therefore, the stricter bend radius is the dynamic one. Another risk to a properly terminated and validated fiber cable is attenuation due to excessive cable bend (static). Essentially, the bend allows the light to escape the glass, reducing signal strength (macrobend). Because the bend radius is based on the OD, the minimum bend radius for an application can vary. For example, a fiber patch cord cable has a static bend radius of 1.1" while a 12-fiber distribution cable has a 2.4" bend radius. Bend radius is maintained by spools, clips, fins, product features, and proper coiling. In a 19" data center style server or switch rack/cabinet, bend radius accessories can be installed to route fiber cable vertically and horizontally. Fiber enclosures and boxes used in data center cabinets/racks typically have built-in slack spools. Also, supplied adhesive-backed clips secure the cabling and provide strain relief.
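The sketch below applies the multipliers discussed above (a static minimum of roughly 10 × OD and a dynamic minimum of roughly 20 × OD) to compute minimum bend radii from a cable's outer diameter. The multipliers and example diameters are typical values used for illustration; always defer to the manufacturer's specification for a given cable.

```python
# Minimum bend radius from outer cable diameter (OD), using typical multipliers:
# static (installed) ~10x OD, dynamic (during pulling/flexing) ~20x OD.
def min_bend_radius_in(od_inches: float, static_mult: int = 10, dynamic_mult: int = 20) -> dict:
    return {
        "static_in": od_inches * static_mult,
        "dynamic_in": od_inches * dynamic_mult,
    }

# Illustrative ODs: a simplex patch cord (~0.11") and a 12-fiber distribution cable (~0.24").
for name, od in [("patch cord", 0.11), ("12-fiber distribution", 0.24)]:
    r = min_bend_radius_in(od)
    print(f"{name}: static >= {r['static_in']:.1f} in, dynamic >= {r['dynamic_in']:.1f} in")
```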
Cable Routing and Protection
Often, fiber cable is routed through conduit for protection. Although conduit is an excellent safeguard, it is designed for power cabling and has fittings, junction boxes, and so on, where fiber cable under some tension can potentially bend below the minimum bend radius. Pathways such as wire basket and J-hooks have challenges for sagging and bends that can attenuate the signal, especially for unprotected standard distribution cable. Therefore, the cable must be supported and bends must be controlled. Draping unprotected distribution cable over J-Hooks or laying the cable over an open tray with grids can potentially lead to fiber sagging below the minimum bend radius. To avoid this risk, fiber cabling can be pulled through corrugated loom tube, providing rigidity and support. Typically, fibers in interlocking armor or DCF do not encounter these issues because the armor provides support and restricts the bend radius.
To protect fiber cabling when routed in control panels, it must be carefully laid in duct or tubing to avoid snags and kinks, maintaining proper bend radius. Adhesive backed mounting clips can hold fiber cabling in place outside of ducts along the panel wall or top.
Network Compatibility and Performance
Cable Media Selection
Depending on cable construction, many considerations exist for cable media selection such as reach, industrial characteristics, life span, maintainability, compatibility, scalability, performance, and reliability. See Table 5-1 .
Table 5-1 Cable Media Selection Characteristics
| Characteristic | Copper (Twisted Pair) | Fiber (Multimode) | Fiber (Single-mode) |
|---|---|---|---|
| Reach | 100 m | 2,000 m (1 Gbps), 400 m (10 Gbps) | 10 km (1 Gbps), 10 km (10 Gbps) |
| Noise immunity | Foil shielding | Noise immune* | Noise immune* |
| Data rate | 100 Mbps (Cat 5e), 1 Gbps (Cat 6), 10 Gbps (Cat 6a) | 1 Gbps, 10 Gbps | 1 Gbps, 10 Gbps |
| Cable diameter | Large | Small | Small |
| Power over Ethernet (PoE) capable | Yes | Yes, with media conversion | Yes, with media conversion |
*Fiber-optic media is inherently noise immune; however, optical transceivers can be susceptible to electrical noise.
Fiber and Copper Considerations
When cabling media decisions are made, the most significant constraint is reach. If the required cable reach exceeds 100 m (328 feet), then the media choice is optical fiber cable, because copper Ethernet is limited to a maximum link length of 100 m. Other considerations exist for distances under 100 m, such as EMI, where fiber-optic cable is chosen for its inherent noise immunity. Another consideration is that IES uplinks connected with optical fiber converge faster after an IES power interruption, lessening the duration of the network outage. In terms of data rates, copper and fiber are similar for typical industrial applications; however, higher performing optical fiber is available for high demand networks.
Regarding media performance, a major consideration is network lifespan. For instance, a network installed with Cat 5e cabling could become obsolete in a couple of years, especially if transporting video. If the expectation is 10 or more years of service, the fastest category cabling should be considered, because future networked equipment and switches may come only with higher data rate ports.
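The sketch below captures the reach, noise, and data rate rules of thumb from this subsection in a simple decision helper. The thresholds mirror the guidance above (the 100 m copper limit, fiber for high-EMI links); the function and its inputs are illustrative, not a formal selection procedure.

```python
# Very simplified cable media recommendation based on the considerations above:
# reach (copper limited to 100 m), EMI exposure, and required data rate.
def recommend_media(link_length_m: float, high_emi: bool, data_rate_gbps: float) -> str:
    if link_length_m > 100:
        return "fiber (copper Ethernet links are limited to 100 m)"
    if high_emi:
        return "fiber (inherent noise immunity)"
    if data_rate_gbps <= 1:
        return "copper, Cat 6 or better (consider Cat 6A for future headroom)"
    return "copper Cat 6A or fiber (10 Gbps capable)"

print(recommend_media(250, False, 1))   # fiber: exceeds 100 m
print(recommend_media(60, True, 1))     # fiber: EMI exposure
print(recommend_media(45, False, 10))   # Cat 6A or fiber
```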
Multimode versus Single-mode
Fiber has two types, single-mode and multimode. Multimode has many glass grades from OM1 to OM5. Single-mode has two glass grades, OS1 and OS2. When selecting any optical fiber, the device port must be considered first and must be the same on both ends (that is, port types cannot be mixed). In general, the port determines the type of fiber and glass grade. If the device port is SFP, flexibility exists to select compatible transceivers and the transceiver best for the application. In addition to different media, the number of strands, mechanical protection, and outer jacket protection should be considered.
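As a simple illustration of matching the port (or SFP transceiver) to the fiber type, the sketch below maps a few well-known Ethernet optics to the fiber they are designed for and flags mismatches. The mapping covers only a small, commonly cited subset; always confirm against the switch and transceiver data sheets.

```python
# Common Ethernet optics and the fiber type they are designed for (illustrative subset).
TRANSCEIVER_FIBER = {
    "1000BASE-SX": "multimode",
    "1000BASE-LX/LH": "single-mode",  # also usable on multimode with mode conditioning
    "10GBASE-SR": "multimode",
    "10GBASE-LR": "single-mode",
}

def check_link(transceiver: str, installed_fiber: str) -> str:
    expected = TRANSCEIVER_FIBER.get(transceiver)
    if expected is None:
        return f"{transceiver}: not in table, check the data sheet"
    if expected == installed_fiber:
        return f"{transceiver} over {installed_fiber}: OK"
    return f"{transceiver} over {installed_fiber}: mismatch, expects {expected}"

print(check_link("10GBASE-SR", "multimode"))
print(check_link("10GBASE-LR", "multimode"))
```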
Grounding and Bonding
It is essential that robust and clean power is supplied to the Level 3 Site Operations to keep the IACS network running smoothly without compromises in communications and equipment performance. The incoming power feed typically includes a UPS and one or more Power Outlet Units (POUs) to distribute power to IT assets where needed. POU voltages range from 100V to 125V or 220V to 250V depending on the region of the world, with currents ranging from 15 amps to 30 amps. Connectors may be straight blade or twist lock. Popular IEC configurations are C13 to C14 and C19 to C20. Additionally, POUs may include intelligent features such as power and environmental monitoring to aid in troubleshooting and diagnostics.
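The sketch below illustrates a basic POU capacity check: derate the circuit to 80 percent for continuous loads (a customary North American practice) and compare against the connected equipment's draw. The circuit rating and equipment figures are hypothetical examples, not sizing guidance.

```python
# Check connected equipment against a POU circuit, derated to 80% for continuous load.
def pou_capacity_check(voltage_v: float, breaker_a: float, loads_w: list[float]) -> None:
    usable_w = voltage_v * breaker_a * 0.8   # 80% continuous-load derating
    total_w = sum(loads_w)
    status = "OK" if total_w <= usable_w else "OVERLOADED"
    print(f"Usable capacity: {usable_w:.0f} W, connected load: {total_w:.0f} W -> {status}")

# Hypothetical 230 V / 30 A POU feeding two servers, two switches, and a WLC.
pou_capacity_check(230, 30, [650, 650, 350, 350, 120])
```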
Grounding of the Level 3 Site Operations is critical to optimizing performance of all equipment located within the Level, reducing downtime due to equipment failures and reducing the risk of data loss. Exemplary grounding and bonding provide demonstrable improvement in bit error rate in data center and enterprise applications. Given the higher potential for EMI in industrial applications, the improvement stands to be remarkable. Also, Cisco and Rockwell Automation logic components require good grounding practices to maintain warranty support. The use of a single ground path at low impedance is commonly achieved through a busbar.
Bridging the grounding connection from Level 3 Site Operations to the busbar can occur in several ways. First, a braided grounding strap connects the rack or cabinet to the building ground network. Second, grounding jumpers connect equipment to the housing structure. Finally, paint-piercing screws and washers achieve a direct metal-to-metal connection throughout the Level 3 Site Operations. Unless a clear and deliberate effort is made to confirm that proper grounding has been performed, ground loops can be established that result in lost communication data and compromised equipment performance.
Security
Both physical and logical security should be important considerations when designing Level 3 Site Operations. Physical layer security measures, such as locked access to data center/control room spaces and key card access to cabinets, help limit access and help prevent inadvertent or malicious damage to equipment and connectivity, supporting service level goals. Often overlooked basics such as labeling, color coding, and use of keyed jacks can help prevent crossing channels or inadvertently removing active links. Lock-in connectors can secure connections in switches or patching to control who can make changes and help protect against cables being unintentionally disconnected.
Note For more information, please see the Securely Traversing IACS Data Across the Industrial Demilitarized Zone Design and Implementation Guide at the following URL:
Wireless
Secure wireless access in industrial environments is a growing need. Planning and design begin with an understanding of the jobs to be completed wirelessly and an idea of future growth. Two main applications are served wirelessly today: mobility and WGB communications.
Mobility helps knowledge workers and guest workers securely access data to make more timely and accurate decisions, increasing their overall productivity and effectiveness. Therefore, mobility designs must give sufficient coverage to afford access as needed. Knowledge workers such as engineers and technicians employed by the business typically need to access not only the Internet but also internal network resources, while guest workers such as contractors and repair personnel typically require more limited access. WGB deployments are typically used to enable machinery communications. Often the machinery is dynamic (for example, moving through the facility) and needs to quickly reestablish wireless communications with the network once in place. WGB wireless does an excellent job for this use case.
Note For more information, see the Deploying 802.11 Wireless LAN Technology within a Converged Plantwide Ethernet Architecture Design and Implementation Guide at the following URLs:
Physical Network Design Considerations
Logical to Physical Simplified
Figure 5-2 represents a potential layout of a plant/site with enterprise office space. The CPwE logical architecture (on the top) is mapped to the locations within the plant/site where the different levels of operation could potentially be placed.
Figure 5-2 CPwE Logical to Physical Plant/Site
Level 3 Site Operations—Large Scale Deployment
Figure 5-3 Level 3 Site Operations Logical to Physical Deployment
Overview
The Level 3 Site Operations can function differently in diverse environments and layouts, but generally it houses the networking that connects the plant/site to the enterprise and the applications that allow the plant/site to operate efficiently. The layout described in Figure 5-3 demonstrates a larger industrial deployment. This deployment is likely much larger than most plants/sites would require, but it demonstrates how the Level 3 Site Operations infrastructure would be deployed if this scale were required. The core of the network uses two Cisco Catalyst 6800 switches with a full fiber distribution network. This deployment employs virtualized servers to provide efficient and resilient compute power to the plant/site and beyond. Four distinct cabinets house equipment for three different areas of the CPwE architecture: Level 3 Site Operations (IDC), IDMZ, and core switching. Each of the cabinets is detailed further below.
IDC Cabinets
The IDC cabinet (see Figure 5-4) houses the Catalyst 9300 switches that provide access to the WLCs and the virtualized servers, which can provide several services and applications to the plant/site.
Figure 5-4 IDC Cabinet Logical to Physical Deployment
Within the cabinet, the servers are placed at the bottom and the switches are at or near the top. The servers are the heaviest pieces of equipment within the cabinet and are placed at the bottom to create the lowest possible center of gravity. This helps prevent the cabinet from tipping over and makes it more stable in case it needs to be moved as a unit. The Catalyst 3850/9300 switches are placed at the top of the cabinet to allow for easy connections to the Catalyst 6800 switches. The Catalyst 3850/9300 switches have back-to-front airflow, which means the port side of the switch faces the hot aisle. The Cisco WLCs are placed a few RU below the Catalyst 3850/9300 switches. The Cisco WLCs have their ports on the front side and have airflow from front to back only, so a cable pass-through is required to connect them to the Catalyst 3850/9300 switches, which have their ports on the back side. The servers have their network ports on the hot aisle side (back side), which allows them to connect easily to the Catalyst 3850/9300 switches.
IDMZ Cabinet
The IDMZ cabinet (see Figure 5-5) houses the equipment that logically separates the enterprise from the plant/site.
Figure 5-5 IDMZ Cabinet Logical to Physical Deployment
The IDMZ cabinet has two Cisco Catalyst 2960 switches, two (active/standby) Cisco firewalls, and several virtualized servers that house applications and software that must reside within the IDMZ. The servers are the heaviest pieces of equipment within the cabinet and are placed at the bottom to create the lowest possible center of gravity. This helps prevent the cabinet from tipping over and makes it more stable in case it needs to be moved as a unit. The Catalyst 2960 switches are placed at the top of the cabinet to allow for easy connections to the Catalyst 9500/6800 switches. The equipment has network ports on the hot aisle (back side), which allows for easy connections between equipment.
Server Cabinets
Server cabinets offer several features that enhance cable and thermal management and should be used when deploying servers, such as the Cisco UCS C220. Cabinet air seal features and integration with passive hot and cold air containment components drive efficient utilization of cooling capacity and reduce cooling energy consumption. Modular cable management systems available on server cabinets enable simplified organization of cables and proper bend radius control. Smart cabinet features allow access to environmental, power, and security information.
Core Switching Cabinets
The core switching cabinets (see Figure 5-6) house the redundant Cisco Catalyst 9500/6800 switches that serve as the plant/site core switches and the gateway between the plant/site and the enterprise.
Figure 5-6 Core Switching Cabinets Logical to Physical Deployment
The Catalyst 9500/6800 cabinets use switch cabinets or network cabinets, which provide additional space for cable and thermal management. The switch cabinet has an air dam installed that separates the cold air in the front of the cabinet from the hot exhaust air on the back side of the cabinet. Because the Cisco Catalyst 9500/6800 switch has side-to-side airflow, a 6800 thermal ducting system is required to operate within a hot/cold aisle arrangement in the data center. The Cisco Catalyst 9500/6800 switches are at the bottom of the cabinet because they are the heaviest equipment within the cabinet and create the lowest center of gravity. The patch panels at the top of the cabinet provide connectivity to the plant/site and to the IDMZ cabinet.
Network Cabinets
Network cabinets offer several features that enhance cable and thermal management and should be used when deploying large chassis-based switches, such as the Cisco Catalyst 9500/6800 switch. An inset frame design efficiently manages large quantities of cables and provides space for maintenance access. Cabinet air seal features and integration with passive hot and cold air containment components drive efficient utilization of cooling capacity and reduce cooling energy consumption. Dual-hinged doors reduce the time needed to perform MACs. Modular cable management systems available on network cabinets enable simplified organization of cables and proper bend radius control.
Physical to Physical Simplified
IDMZ Cabinet to Core Switching Cabinets
The IDMZ to Core Switch connections shown in Figure 5-7 and Figure 5-8 can be copper or fiber depending on the model of firewall and the line card used in the Catalyst switch.
Figure 5-7 IDMZ Cabinet to Core Switches Cabinets Physical to Physical Deployment
In Figure 5-7 and Figure 5-8, copper connects the Cisco firewall and the Cisco Catalyst 9500/6800 switch. Starting at the Cisco firewall, one end of the Cat 6A patch cable plugs into the Cisco firewall port and the other end plugs into the jack module housed in the adapter, which is installed in the patch panel. On the back side of the jack module, the Cat 6A horizontal cable is connected and runs to the core switch cabinet. When the Cat 6A horizontal cable reaches the Catalyst cabinet, it is terminated on the back side of a jack module. On the front of that jack module, a Cat 6A patch cable is plugged in, and the other end of the patch cable is plugged into the RJ45 port on the Cisco Catalyst 9500/6800 line card.
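The copper channel just described (patch cord, patch panel, horizontal cable, patch panel, patch cord) must stay within the structured cabling limits for a 100 m channel, commonly 90 m of horizontal cable plus roughly 10 m of combined patch cords. The sketch below performs that check with hypothetical lengths.

```python
# Validate a Cat 6A channel (patch cord + horizontal + patch cord) against
# common structured cabling limits: 90 m horizontal, 100 m total channel.
def channel_ok(patch_a_m: float, horizontal_m: float, patch_b_m: float) -> bool:
    total = patch_a_m + horizontal_m + patch_b_m
    return horizontal_m <= 90 and total <= 100

# Hypothetical run from the IDMZ cabinet to the core switching cabinet.
print(channel_ok(3, 42, 3))   # True: well within limits
print(channel_ok(5, 95, 5))   # False: horizontal exceeds 90 m
```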
Figure 5-8 Cisco Firewall to Catalyst 9500/6800 Switch Single Line
IDC Cabinet to Core Switching Cabinets
The IDC Cabinet-to-Core Cabinets connections shown in Figure 5-9 and Figure 5-10 can also be copper or fiber depending on the model of switch in the IDC cabinet and the line card used in the Catalyst switch.
Figure 5-9 IDC Cabinet to Core Switching Cabinets Physical to Physical Deployment
In this scenario, copper connects the Cisco Catalyst 9300 switch and the Cisco Catalyst 9500/6800 switch. Starting at the Cisco Catalyst 9300 switch, one end of the Cat 6A patch cable plugs into the RJ45 port on the Cisco Catalyst 9300 switch and the other end plugs into the jack module housed in the adapter, which is installed in the patch panel. On the back side of the jack module, the Cat 6A horizontal cable is connected and runs to the Catalyst cabinet. When the Cat 6A horizontal cable reaches the Catalyst cabinet, it is terminated on the back side of a jack module. On the front of that jack module, a Cat 6A patch cable is plugged in, and the other end of the patch cable is plugged into the RJ45 port on the Cisco Catalyst 9500/6800 line card.
Figure 5-10 Catalyst 9300 Switch to Catalyst 9500/6800 Switch Single Line
Level 3 Site Operations—The Industrial Data Center from Rockwell Automation
The IDC from Rockwell Automation (see Figure 5-11) can help your business realize the cost savings of virtualization in a production environment through a pre-engineered, scalable infrastructure offering. The hardware you need to run multiple operating systems and multiple applications on virtualized servers is included in the cost. Industry-leading collaborators including Cisco, Panduit, EMC, and VMware work together with Rockwell Automation to help your business realize the benefits of virtualization through this integrated offering.
As a pre-engineered solution, the IDC from Rockwell Automation is designed to ease your business's transition to a virtualized environment, saving you time and money. Instead of ordering five different pieces of equipment with five purchase orders, and hiring the appropriate certified installation professionals to get up and running, the IDC combines equipment from industry leaders, pre-configured specifically for the manufacturing and production industries. All equipment is shipped pre-assembled, and a Rockwell Automation professional comes to your site to commission the system.
Figure 5-11 The Industrial Data Center from Rockwell Automation—Physical Layout
Features and Benefits:
- Reduced Cost of Ownership—Decrease the server footprint in your facility and realize savings over the lifetime of your assets
- Uptime Reliability—Deliver high availability and fault tolerance
- Designed for Your Industry—Engineered specifically for use in production and manufacturing environments
- Ease of Ordering and Commissioning—Pre-assembled solution that includes configuration service. No need to place multiple equipment orders
- One Number for Technical Support—Includes TechConnect™ support so you have one phone number to call. Additional support levels include 24x7 support and remote monitoring