Oracle PeopleSoft Payroll 9.0 for North America provides the tools to calculate earnings, taxes, and deductions efficiently; maintain balances; and report payroll data while reducing the burden on IT managers and payroll staff.
With Oracle PeopleSoft Payroll for North America, you can design the payroll system to meet your organization's specific requirements. You provide the system with some basic information about the types of balances that you want to maintain, how you want to group the workforce, and when you want to pay them. You can define and establish earnings, deductions, taxes, and processes that fit your unique business needs. The payroll system enables you to calculate gross-to-net or net-to-gross pay, leave accruals, and retroactive pay. You can automatically calculate imputed income for group-term life insurance and process unlimited direct deposits.
To study and understand scalability, an adequate workload must be applied so that the systems under test are stressed and show the optimum utilization of the architecture under peak loads. For this test, the Oracle PeopleSoft Payroll for North America batch workload was chosen.
The payroll workload requires very intensive computing and throughput (I/O) capabilities to perform the large batch jobs being run, whereas the human resources workload offers an online-style workload with demanding throughput characteristics.
Why Upgrade to Oracle PeopleSoft Payroll 9.0 for North America
The Oracle PeopleSoft Payroll for North America batch workload measures three payroll application business process runtimes for one database model representing a large organization. For this workload, tests of different sizes were run, for environments of up to 240,000 employees.
Cisco Unified Computing System™ (Cisco UCS™) benchmarking and reference architecture demonstrates Cisco UCS performance characteristics for a range of Oracle PeopleSoft processing volumes in a specific controlled configuration. Oracle PeopleSoft prospects can use this information to determine the right matrix of software, hardware, and network configurations necessary to support their processing volumes.
The new garnishment processing model in Oracle PeopleSoft Payroll 9.0 for North America has been restructured from the previous version, providing flexible calculations through a configurable rules-based engine and easier management of definitions of disposable earnings deductions. There is also a new employee garnishment history for tracking garnishments over time.
Improved integration of Oracle PeopleSoft Payroll for North America with Oracle Enterprise Time and Labor offers filtering parameters for the Oracle Enterprise Time and Labor load. Error reporting has been enhanced to include the capability to view error messages for each employee whose time has not been successfully loaded into payroll. The payable-time reversal process automatically creates offsets for a paycheck reversal in the payable time table, flagged with the reversed status.
Cisco UCS: Flexible System Architecture for Oracle PeopleSoft
This performance result is one clear indication that a properly configured Cisco UCS platform can complete the payroll payment process for a 240,000-employee organization in less than one hour. Following Oracle's best practice of running the process scheduler on the database server eliminates the TCP/IP overhead and improves processing time and performance. Customers who are running older versions of Oracle PeopleSoft and are considering upgrading to Oracle PeopleSoft Payroll 9.0 and running the North America (NA) module should strongly consider this performance result in their system evaluations.
Cisco Unified Computing System
Cisco UCS is a set of preintegrated data center components that includes blade servers, adapters, fabric interconnects, and extenders integrated under a common embedded management system. This model results in far fewer system components within the servers and much better manageability, operation efficiency, and flexibility than comparable data center platforms. Through a number of world-record benchmarks and proof points, Cisco has proven that a system architecture designed around the network that balances I/O, memory, and processor resources delivers superior performance. Furthermore, the capability to repurpose blades in real time to meet various workload challenges provides an architecture that is a clear departure from competitive alternatives and that allows the user to implement cloud computing when desired and to achieve the full benefits of virtualization.
Main Differentiating Technologies of Cisco UCS
The technologies described here are what make Cisco UCS unique and give it advantages over competing offerings. The discussion is high level and does not include the supporting technologies (such as Fibre Channel over Ethernet [FCoE]) that underlie these high-level elements.
Unified Fabric
Unified fabric can dramatically reduce the number of network adapters, blade-server switches, cables, and management touch points by passing all network traffic to parent fabric interconnects, where the traffic can be prioritized, processed, and managed centrally. This approach improves performance, agility, and efficiency and dramatically reduces the number of devices that need to be powered, cooled, secured, and managed.
Embedded Multirole Management
Cisco UCS Manager is a centralized management application that is embedded on the fabric switch. Cisco UCS Manager controls all Cisco UCS elements within a single redundant management domain. These elements include all aspects of system configuration and operation, eliminating the need to use multiple, separate element managers for each system component. Massive reductions in the number of management modules and consoles and in the proliferation of agents resident on all the hardware (which must be separately managed and updated) are important deliverables of Cisco UCS. Cisco UCS Manager, using role-based access and visibility, helps enable cross-function communication efficiency, promoting collaboration between data center roles for increased productivity.
Cisco Extended Memory Technology
Significantly enhancing the available memory capacity of some Cisco UCS servers, Cisco® Extended Memory Technology helps increase performance for demanding virtualization and large-data-set workloads. Data centers can now deploy very high virtual machine densities on individual servers as well as provide resident memory capacity for databases that need only two processors but can dramatically benefit from more memory. The high-memory dual in-line memory module (DIMM) slot count also lets users more cost-effectively scale this capacity using smaller, less costly DIMMs.
Cisco Data Center Virtual Machine Fabric Extender Virtualization Support and Virtualization Adapter
With Cisco Data Center Virtual Machine Fabric Extender (VM-FEX), virtual machines have virtual links that allow them to be managed in the same way as physical links. Virtual links can be centrally configured and managed without the complexity of traditional systems that interpose multiple switching layers in virtualized environments. I/O configurations and network profiles move along with virtual machines, helping increase security and efficiency while reducing complexity. Cisco Data Center VM-FEX helps improve performance and reduce network interface card (NIC) infrastructure.
Dynamic Provisioning with Service Profiles
Cisco UCS Manager delivers service profiles, which contain abstracted server-state information, creating an environment in which everything unique about a server is stored in the fabric, and the physical server is simply another resource to be assigned. Cisco UCS Manager implements role- and policy-based management focused on service profiles and templates. These mechanisms fully provision one or many servers and their network connectivity in minutes, rather than hours or days.
Cisco UCS Manager
Cisco UCS Manager is an embedded, unified manager that provides a single point of management for Cisco UCS. Cisco UCS Manager can be accessed through an intuitive GUI, a command-line interface (CLI), or the comprehensive open XML API. It manages the physical assets of the server and storage and LAN connectivity, and it is designed to simplify the management of virtual network connections through integration with several major hypervisor vendors. It provides IT departments with the flexibility to allow people to manage the system as a whole, or to assign specific management functions to individuals based on their roles as managers of server, storage, or network hardware assets. It simplifies operations by automatically discovering all the components available on the system and enabling a stateless model for resource use.
The elements managed by Cisco UCS Manager include:
• BIOS firmware and settings, including server universally unique identifier (UUID) and boot order
• Converged network adapter (CNA) firmware and settings, including MAC addresses and worldwide names (WWNs) and SAN boot settings
• Virtual port groups used by virtual machines, using Cisco Data Center VM-FEX technology
• Interconnect configuration, including uplink and downlink definitions, MAC address and WWN pinning, VLANs, VSANs, quality of service (QoS), bandwidth allocation, Cisco Data Center VM-FEX settings, and EtherChannels to upstream LAN switches
Cisco UCS Components
Figure 1 shows the Cisco UCS components.
Figure 1. Cisco UCS Components
Cisco UCS is designed from the start to be programmable and self-integrating. A server's entire hardware stack, ranging from server firmware and settings to network profiles, is configured through model-based management. With Cisco virtual interface cards (VICs), even the number and type of I/O interfaces is programmed dynamically, making every server ready to power any workload at any time.
With model-based management, administrators manipulate a model of a desired system configuration and associate a model's service profile with hardware resources, and the system configures itself to match the model. This automation speeds provisioning and workload migration with accurate and rapid scalability. The result is increased IT staff productivity, improved compliance, and reduced risk of failure due to inconsistent configurations.
Cisco Fabric Extender Technology (FEX Technology) reduces the number of system components that need to be purchased, configured, managed, and maintained by condensing three network layers into one. It eliminates both blade server and hypervisor-based switches by connecting fabric interconnect ports directly to individual blade servers and virtual machines. Virtual networks are now managed exactly like physical networks, but with massive scalability. This approach represents a radical simplification compared to traditional systems, reducing capital and operating costs while increasing business agility, simplifying and accelerating deployment, and improving performance.
Cisco UCS Fabric Interconnects
Cisco UCS fabric interconnects create a unified network fabric throughout Cisco UCS. They provide uniform access to both networks and storage, eliminating the barriers to deployment of a fully virtualized environment based on a flexible, programmable pool of resources. Cisco fabric interconnects comprise a family of line-rate, low-latency, lossless 10 Gigabit Ethernet, IEEE Data Center Bridging (DCB), and FCoE interconnect switches. Based on the same switching technology as the Cisco Nexus® 5000 Series Switches, Cisco UCS 6100 Series Fabric Interconnects provide additional features and management capabilities that make them the central nervous system of Cisco UCS. The Cisco UCS Manager software runs inside the Cisco UCS fabric interconnects. The Cisco UCS 6100 Series Fabric Interconnects expand the Cisco UCS networking portfolio and offer higher capacity, higher port density, and lower power consumption. These interconnects provide the management and communication backbone for the Cisco UCS B-Series Blade Servers and Cisco UCS blade server chassis. All chassis and all blades attached to the interconnects are part of a single, highly available management domain. By supporting unified fabric, the Cisco UCS 6100 Series provides the flexibility to support LAN and SAN connectivity for all blades within its domain at configuration time. Typically deployed in redundant pairs, Cisco UCS fabric interconnects provide uniform access to both networks and storage, facilitating a fully virtualized environment.
The Cisco UCS fabric interconnect portfolio currently consists of the Cisco 6100 and 6200 Series Fabric Interconnects:
• Cisco UCS 6248UP 48-Port Fabric Interconnect: The Cisco UCS 6248UP 48-Port Fabric Interconnect is a one-rack-unit (1RU), 10 Gigabit Ethernet, IEEE DCB, and FCoE interconnect providing more than 1 terabit per second (Tbps) throughput with low latency. It has 32 fixed Fibre Channel, 10 Gigabit Ethernet, IEEE DCB, and FCoE Enhanced Small Form-Factor Pluggable (SFP+) ports. One expansion module slot can provide up to 16 additional Fibre Channel, 10 Gigabit Ethernet, IEEE DCB, and FCoE SFP+ ports.
• Cisco UCS 6120XP 20-Port Fabric Interconnect: The Cisco UCS 6120XP 20-Port Fabric Interconnect is a 1RU, 10 Gigabit Ethernet, IEEE DCB, and FCoE interconnect providing more than 500 Gbps throughput with very low latency. It has 20 fixed 10 Gigabit Ethernet, IEEE DCB, and FCoE SFP+ ports. One expansion module slot can be configured to support up to six additional 10 Gigabit Ethernet, IEEE DCB, and FCoE SFP+ ports.
• Cisco UCS 6140XP 40-Port Fabric Interconnect: The Cisco UCS 6140XP 40-Port Fabric Interconnect is a 2RU, 10 Gigabit Ethernet, IEEE DCB, and FCoE interconnect built to provide 1.04 Tbps throughput with very low latency. It has 40 fixed 10 Gigabit Ethernet, IEEE DCB, and FCoE SFP+ ports. Two expansion module slots can be configured to support up to 12 additional 10 Gigabit Ethernet, IEEE DCB, and FCoE SFP+ ports.
Cisco UCS 2100 and 2200 Series Fabric Extenders
The Cisco UCS 2100 and 2200 Series Fabric Extenders multiplex and forward all traffic from blade servers in a chassis to a parent Cisco UCS fabric interconnect over 10-Gbps unified fabric links. All traffic, even traffic between blades in the same chassis or virtual machines on the same blade, is forwarded to the parent interconnect, where network profiles are managed efficiently and effectively by the fabric interconnect. At the core of the Cisco UCS fabric extender are application-specific integrated circuit (ASIC) processors developed by Cisco that multiplex all traffic.
Up to two fabric extenders can be placed in a blade chassis.
• The Cisco UCS 2104XP Fabric Extender has eight 10GBASE-KR connections to the blade chassis midplane, with one connection per fabric extender for each of the chassis' eight half slots. This configuration gives each half-slot blade server access to each of two 10-Gbps unified fabric-based networks through SFP+ sockets for both throughput and redundancy. It has four ports connecting to the fabric interconnect.
• The Cisco UCS 2208XP Fabric Extender is the first product in the Cisco UCS 2200 Series. It has eight 10 Gigabit Ethernet, FCoE-capable SFP+ ports that connect the blade chassis to the fabric interconnect. Each Cisco UCS 2208XP has thirty-two 10 Gigabit Ethernet ports connected through the midplane to each half-width slot in the chassis. Typically configured in pairs for redundancy, two fabric extenders provide up to 160 Gbps of I/O to the chassis.
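The 160-Gbps figure above follows directly from the uplink counts; a quick arithmetic check:

```python
# Aggregate chassis I/O with a redundant pair of Cisco UCS 2208XP
# fabric extenders: each extender uplinks to the fabric interconnect
# through eight 10 Gigabit Ethernet ports.
extenders = 2
uplinks_per_extender = 8
gbps_per_uplink = 10

total_gbps = extenders * uplinks_per_extender * gbps_per_uplink
print(total_gbps)  # 160
```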
Cisco UCS M81KR Virtual Interface Card
The Cisco UCS M81KR VIC is unique to the Cisco UCS blade system. This mezzanine adapter is designed around a custom ASIC that is specifically intended for VMware-based virtualized systems. It uses custom drivers for the virtualized host bus adapter (HBA) and the 10 Gigabit Ethernet NIC. As is the case with the other Cisco CNAs, the Cisco UCS M81KR VIC encapsulates Fibre Channel traffic within the 10 Gigabit Ethernet packets for delivery to the fabric extender and the fabric interconnect.
The Cisco UCS VIC is also unique in its ability to present up to 128 virtual PCI devices to the operating system on a given blade. Eight of those devices are used for management, leaving 120 virtual devices available for either storage or network use. The configurations can be changed as needed using Cisco UCS Manager. To the operating software running within VMware or another virtualized environment, each virtualized device appears to be a directly attached device. The adapter supports Cisco Data Center VM-FEX, which allows visibility all the way through to the virtual machine. This adapter is exclusive to Cisco and is not offered outside the Cisco UCS B-Series Blade Server product line.
Cisco UCS 5100 Series Blade Server Chassis
The Cisco UCS 5108 Blade Server Chassis is a 6RU blade chassis that accepts up to eight half-width Cisco UCS B-Series Blade Servers or up to four full-width Cisco UCS B-Series Blade Servers, or a combination of the two. The Cisco UCS 5108 Blade Server Chassis can accept four redundant power supplies with automatic load sharing and failover and two Cisco UCS 2100 or 2200 Series Fabric Extenders. The chassis is managed by Cisco UCS chassis management controllers, which are mounted in the Cisco UCS fabric extenders and work in conjunction with Cisco UCS Manager to control the chassis and its components.
A single Cisco UCS managed domain can scale to up to 20 individual chassis and 160 blade servers.
Basing the I/O infrastructure on a 10-Gbps unified network fabric allows Cisco UCS to have a streamlined chassis with a simple yet comprehensive set of I/O options. The result is a chassis that has only five basic components:
• The physical chassis with passive midplane and active environmental monitoring circuitry
• Four power supply bays with power entry in the rear and hot-swappable power supply units accessible from the front panel
• Eight hot-swappable fan trays, each with two fans
• Two fabric extender slots accessible from the back panel
• Eight blade server slots accessible from the front panel
Cisco UCS B200 M2 Blade Server
The Cisco UCS B200 M2 Blade Server is a half-slot, 2-socket blade server. The system uses two Intel Xeon processors 5600 series, up to 192 GB of double-data-rate-3 (DDR3) memory, two optional Small Form Factor (SFF) SAS and SSD disk drives, and a single CNA mezzanine slot for up to 20 Gbps of I/O throughput. The Cisco UCS B200 M2 Blade Server balances simplicity, performance, and density for production-level virtualization and other mainstream data center workloads.
Cisco UCS B250 M2 Extended Memory Blade Server
The Cisco UCS B250 M2 Extended Memory Blade Server is a full-slot, 2-socket blade server using Cisco Extended Memory Technology. The system supports two Intel Xeon processors 5600 series, up to 384 GB of DDR3 memory, two optional SFF SAS and SSD disk drives, and two CNA mezzanine slots for up to 40 Gbps of I/O throughput. The Cisco UCS B250 M2 blade server provides increased performance and capacity for demanding virtualization and large-data-set workloads, with greater memory capacity and throughput.
Cisco UCS B230 M2 Blade Server
The Cisco UCS B230 M2 Blade Server is a full-slot, 2-socket blade server offering the performance and reliability of the Intel Xeon processor E7-2800 series and up to 32 DIMM slots, which support up to 512 GB of memory. The Cisco UCS B230 M2 supports two SSD drives and one CNA mezzanine slot for up to 20 Gbps of I/O throughput. The Cisco UCS B230 M2 Blade Server platform delivers outstanding performance, memory, and I/O capacity to meet the diverse needs of virtualized environments with advanced reliability and exceptional scalability for the most demanding applications.
Cisco UCS B440 M2 High-Performance Blade Server
The Cisco UCS B440 M2 High-Performance Blade Server is a full-slot, 2-socket blade server offering the performance and reliability of the Intel Xeon processor E7-4800 series and up to 512 GB of memory. The Cisco UCS B440 M2 supports four SFF SAS and SSD drives and two CNA mezzanine slots for up to 40 Gbps of I/O throughput. The Cisco UCS B440 M2 blade server extends Cisco UCS by offering increased performance, scalability, and reliability for mission-critical workloads.
Cisco conducted a benchmark test in the Cisco Oracle Competency Center with an Oracle PeopleSoft workload to measure the response times of Oracle's PeopleSoft Enterprise Human Resources Management System (HRMS) 9.1 with Oracle PeopleSoft Payroll for North America using Oracle Database 11g on Red Hat Enterprise Linux (RHEL) 5.6. The Cisco UCS platform consisted of a standard Oracle PeopleSoft three-tier tech stack of web, application, and database servers. The database servers were set up as a two-node Oracle Real Application Clusters (RAC) configuration. The web server was on a Cisco UCS B200 M2 (two CPUs and four cores), the application servers were on the Cisco UCS B200 M2 (two CPUs and six cores), and the database servers were on the Cisco UCS B230 M2 (two CPUs and 10 cores). The entire setup was SAN bootable as specified by Cisco UCS standards. The web, application, and database storage was carved out of an EMC VNX5500. Table 1 summarizes the results.
Table 1. Summary of Results
Oracle PeopleSoft Payroll for North America: 255,319 payments per hour (calculated)
This benchmark measured response times for three of the major payroll activities that are normally run in sequence. The workload used a standard database composition model representing a large organization with approximately 240,000 employees. The testing was conducted in a controlled environment with no other applications running. All parameter changes made across the batch scheduler and database to fine-tune the Oracle PeopleSoft setup on Cisco UCS followed Oracle PeopleSoft best practices. The objective was to create a baseline benchmark result for Oracle PeopleSoft HRMS with Oracle PeopleSoft Payroll 9.0 for North America running Oracle Database 11g in a 2-node Oracle RAC environment on RHEL 5.6 on Cisco UCS servers and EMC VNX storage.
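The "calculated" rate in Table 1 can be reproduced from the check count and an end-to-end run time. The sketch below assumes a run of roughly 56.4 minutes for the 240,000-employee model; that duration is an illustrative assumption chosen to match the published rate, not a figure stated in the tables.

```python
# Derive a payments-per-hour rate from a batch run's size and duration.
checks_produced = 240_000   # one check per payroll employee in the model
elapsed_minutes = 56.4      # assumed end-to-end run time (illustrative)

payments_per_hour = checks_produced / (elapsed_minutes / 60)
print(round(payments_per_hour))  # 255319
```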
For this benchmark, all jobs were initiated on the server from the browser. The application was run with eight concurrent processes.
Batch processes are background processes requiring no operator intervention or interactivity. Results of these processes are automatically logged in the database. The run times are posted to the process request database table, where they are stored for subsequent analysis.
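Because each run's begin and end times are posted to the process request table, run times like those reported later in Table 2 can be derived from timestamp pairs. A minimal sketch, using made-up timestamps rather than actual benchmark data:

```python
from datetime import datetime

# Compute batch run times from begin/end timestamps such as those posted
# to the process request table. The rows below are illustrative only.
FMT = "%Y-%m-%d %H:%M:%S"
runs = [
    ("Paysheet creation",   "2012-10-15 08:00:00", "2012-10-15 08:14:30"),
    ("Payroll calculation", "2012-10-15 08:15:00", "2012-10-15 08:41:12"),
]

durations = {}
for name, begin, end in runs:
    delta = datetime.strptime(end, FMT) - datetime.strptime(begin, FMT)
    durations[name] = delta.total_seconds() / 60  # minutes
    print(f"{name}: {durations[name]:.1f} minutes")
```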
For this benchmark, two scenarios were available for the Cisco UCS test infrastructure.
Scenario 1: Process Scheduler and Database Hosted on Separate Cisco UCS Servers
Running the process scheduler on a device other than the database server uses a TCP/IP connection to connect to the database. Because the batch process may involve extensive SQL processing, this TCP/IP connection incurs significant overhead and may affect processing times. The impact is more evident in processes that perform intensive row-by-row processing. In processes in which most of the SQL statements are set based, the TCP/IP overhead is likely lower. Dedicate a network connection between the batch server and the database to reduce the overhead (Figure 2).
Figure 2. Database Server and Batch Server Isolation
Scenario 2: Process Scheduler and Database Hosted on Cisco UCS Database Server
Running the process scheduler on the database server eliminates the TCP/IP overhead and improves processing time (Figure 3). However, keep in mind that this scenario uses additional server memory. Set UseLocalOracleDB=1 in the psprcs.cfg process scheduler configuration file to use a direct connection instead of TCP/IP.
This kind of setup is useful for programs that perform intensive row-by-row processing.
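The change from Scenario 1 to Scenario 2 amounts to a single line in the process scheduler configuration file. A representative fragment is shown below; surrounding settings are omitted, and the exact section layout may vary by PeopleTools release.

```
; psprcs.cfg -- process scheduler configuration (fragment)
; 1 = connect directly to the local Oracle database, bypassing TCP/IP
; 0 = connect to the database over TCP/IP
UseLocalOracleDB=1
```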
Figure 3. Database Server Hosting Batch Server
The process scheduler runs Oracle PeopleSoft batch processes. As in any Oracle PeopleSoft architecture, you can set up the process scheduler (batch server) to run on the database server or on any other server.
As specified in the best practices, it is advisable to install the batch server on the database server if you have enough server resources to accommodate both the batch server and the database, so that processing is faster.
For the Oracle PeopleSoft Payroll for North America benchmark activity, each tier was run on a discrete server or servers (Figure 4). The test used the same three-tier setup that was used for benchmarking the self-service load (search and retrieval). The batch servers were run on the database server tier. Two batch servers were set up on the 2-node Oracle RAC, each running on one of the Oracle RAC nodes. This approach was used not only as a best practice to make the best use of the database server resources, but also to work around the COBOL compiler license limitation on running COBOL SQL processes on a single batch server in the Cisco lab setup.
Figure 4. Three-Tier Configuration
The following three payroll processes were used in the test:
• Paysheet creation: This process generates payroll data worksheets for employees consisting of standard payroll information for each employee for the given pay cycle. The paysheet process can be run separately from the other two tasks, usually before the end of the pay period.
• Payroll calculation: This process examines the paysheets and calculates checks for those employees. Payroll calculation can be run any number of times throughout the pay period. The first run will perform most of the processing, while each successive run updates only the calculated totals of changed items. This iterative design reduces the amount of time required to calculate a payroll as well as the processing resources required. In this workload, payroll calculation was run only once, as though at the end of a pay period.
• Payroll confirmation: This process uses the information generated by the payroll calculation process to update the employees' balances with the calculated amounts. The system assigns check numbers at this time and creates direct-deposit records. The payroll confirm process can be run only once, and therefore it must be run at the end of the pay period.
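The iterative behavior described for payroll calculation can be sketched abstractly: the first pass computes every employee's check, and later passes recompute only employees whose inputs changed. This is an illustration of the recalculation pattern only, with a toy gross-to-net rule; it is not PeopleSoft's actual implementation.

```python
# Iterative recalculation sketch: pass changed rows to recompute a subset.
def calculate_payroll(paysheets, results, changed_since_last_run=None):
    # First run processes every paysheet; later runs touch only changes.
    to_process = paysheets if changed_since_last_run is None else changed_since_last_run
    for emp_id, gross in to_process.items():
        results[emp_id] = round(gross * 0.75, 2)  # toy rule: flat 25% withholding
    return results

paysheets = {"E1": 4000.00, "E2": 2500.00, "E3": 3200.00}
results = calculate_payroll(paysheets, {})                         # first run: all employees
results = calculate_payroll(paysheets, results, {"E2": 2600.00})   # later run: only E2
print(results["E2"])  # 1950.0
```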
Batch Process Strategies
Figure 5 summarizes the processing strategies that were undertaken for this benchmark. This run did not use the single-check option, but instead used multiple job streams.
Figure 5. Batch Job Stream Processing
Performance may vary on other hardware and software platforms and with other data composition models.
Tables 2 and 3 show the actual runtimes in minutes for the payroll processes. They also show the number of employees processed and the number of checks produced.
Table 2. Oracle PeopleSoft 9.1 Payroll Process Run Times
Table 3. Average Payroll Run-Time Processes
Columns: Run Count ID, Duration in Minutes, Average in Group, Elapsed Time for Group
Cisco UCS System Performance
Figure 6 shows the average CPU utilization for each of the Oracle RAC servers in the standard Oracle PeopleSoft three-tier tech stack under test. The batch scheduler used to run the Oracle PeopleSoft Payroll for North America process was hosted on the Oracle RAC servers. Each Cisco UCS server had two CPUs, with multiple cores per CPU providing many hardware threads per server. The Cisco UCS hardware components used are listed in the "Benchmark Environment" section of this document. Table 4 summarizes the utilization metrics for all CPUs in each server.
Figure 6. Average Server CPU Utilization
Table 4. Average Server CPU Utilization Metrics
Rows: Paysheet creation 1, Paysheet creation 2, Payroll calculation 1, Payroll calculation 2, Payroll confirmation 1, Payroll confirmation 2
The EMC VNX5500 SAN storage was set up with different RAID levels to serve different database components and SAN boot requirements. I/O performance is crucial for any benchmark, so the storage was carved optimally to take advantage of the latest EMC VNX features (Figure 7).
Figure 7. Disk Layout of the EMC VNX5500 for Hosting Oracle PeopleSoft
The I/O performance is summarized in Table 5 and Figure 8.
Table 5. I/O Metrics
I/O Metrics (Kbps)
Rows: Paysheet creation 1, Paysheet creation 2, Payroll calculation 1, Payroll calculation 2, Payroll confirmation 1, Payroll confirmation 2
Figure 8. I/O Performance
Data Composition Description
The system contains 279,000 active employees; 240,000 of these employees are part of the Oracle PeopleSoft Payroll for North America system, and each has 10 months of payroll history. All Oracle PeopleSoft Payroll for North America employees have a single active job.
The employees were distributed over four weekly and four biweekly pay groups with eight different employee profiles. Each of these profiles was assigned to one of the eight pay groups. Hence, the benchmark for this test was set up for eight concurrent processes. The profiles are divided across pay groups and pay frequencies as shown in Table 6.
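With eight pay groups driving eight concurrent batch streams, simple arithmetic gives the per-stream workload. The even split below is an assumption for illustration; the actual distribution follows the profiles in Table 6.

```python
# Per-stream workload for the 240,000-employee payroll model,
# assuming an even split across pay groups (illustrative).
employees = 240_000
weekly_groups = 4
biweekly_groups = 4
paygroups = weekly_groups + biweekly_groups  # one batch stream per pay group

per_group = employees // paygroups
print(paygroups, per_group)  # 8 30000
```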
Table 6. Pay Groups and Pay Frequencies
Profiles include part-time 20 hours and part-time 30 hours, with benchmarking payroll end dates of October 15-16, 2012, and December 17-18, 2012.
Before choosing the Cisco UCS servers, you should check the interoperability matrices for Cisco UCS components and configurations that have been tested and validated by Cisco, by Cisco partners, or both. Cisco provides a hardware and software interoperability matrix document, which is updated on a regular basis. The current version of that document applies to the Cisco UCS B-Series Blade Servers in Cisco UCS Release 1.4(3).
Table 7 lists the hardware components and Table 8 lists the software components of the benchmark environment.
This performance result is one clear indication that a properly configured Cisco UCS platform can complete the payroll payment process for a 240,000-employee organization in less than one hour. This test also demonstrated that this result was achieved without increasing CPU utilization to undesirable levels, and that the workload was balanced between the nodes in the 2-node Oracle RAC cluster. Following Oracle's best practice of running the process scheduler on the database server eliminates the TCP/IP overhead and improves processing time and performance. Customers who are running older versions of Oracle PeopleSoft and are considering upgrading to Oracle PeopleSoft Payroll 9.0 and running the North America module should strongly consider this performance result in their system evaluations. Although end users' own data may alter the exact performance results documented here, Cisco believes that these results show that, in most customer environments, Cisco UCS can provide a leading level of performance for Oracle PeopleSoft applications, and that future Cisco UCS product advancements will further improve the performance attained by these joint solutions.