
Oracle JD Edwards on FlexPod with Oracle Linux


Table Of Contents

About the Authors

Acknowledgment

About Cisco Validated Design (CVD) Program

Oracle JD Edwards on FlexPod with Oracle Linux

Executive Summary

Purpose of this Guide

Business Needs

Audience

Solution Overview

Technology Overview

FlexPod

Cisco Unified Computing System Overview

Main Differentiating Technologies

Cisco Unified Computing System Components

Cisco UCS Fabric Interconnects

Cisco UCS Virtual Interface Card 1240

Cisco UCS Virtual Interface Card 1280

Cisco UCS 5108 Blade Server Chassis

Cisco Nexus 5500 Switch

Cisco UCS Service Profiles

Programmatically Deploying Server Resources

Dynamic Provisioning with Service Profiles

Service Profiles and Templates

UCS Central

Features and benefits

NetApp FAS 3270 Storage

Oracle JD Edwards

HTML Server

Enterprise Server

Database Server

Deployment Server

Server Manager

Batch Server

Design Considerations for Oracle JD Edwards implementation on FlexPod

Scalable Architecture Using Cisco UCS Servers

Boot from SAN

Cisco UCS and NetApp Unified Storage

Benefits of the NetApp FAS Family of Storage Controllers

RAID-DP

FlexVol

NetApp OnCommand Unified Manager Software

Snapshot

NetApp Strategy for Storage Efficiency

Sizing Guidelines for Oracle JD Edwards

Oracle JDE HTML Server

Oracle JDE Enterprise Server

JDE Database Server

Oracle JD Edwards Deployment Architecture on FlexPod

Infrastructure Setup

Cisco Nexus Switch Configuration

Setting up of Cisco Nexus 5548 UP Switch

Enabling Features and Global Configuration

Creating VSAN and Adding FC Interfaces

Configuring VLAN

Configuring Virtual Port Channel (vPC)

Cisco Unified Computing System Configuration

Validate Installed Firmware

Chassis Discovery Policy

Enabling Network Components

Creating MAC Address Pools

Creating WWPN Pools

Creating WWNN Pools

Creating UUID suffix pools

Creating VLANs

Creating Uplink Ports Channels

Creating VSANs

SAN Boot Policy Configuration

BIOS Policy

Cisco UCS Manager Quality-of-Service System and Policy

UCS Service Profile Configuration

Creating Service Profile Templates

Creating Service Profile from the Template and associating it to Blade

Creating Zoneset and Zones on Cisco Nexus 5548 UP Switch

Configuring NetApp FAS 3270

NetApp FAS3270HA (Controller A)

NetApp Multimode Virtual Interfaces

Oracle Linux Installation

Oracle RAC Setup

Installing Oracle Database 11g R2 GRID Infrastructure with Oracle RAC and the Database

Oracle JD Edwards Installation

Pre-Requisites

General Install Requirements

Oracle JD Edwards Specific Install Requirements

Oracle JDE Install Port Numbers

Installing Oracle JDE Deployment Server

Tools Upgrade on the Deployment Server

Install Planner ESU

Enterprise Server Install

Database Server Install

Installation Plan

Installation Workbench

Change Assistant

Baseline ESU Install

UL2 Install

HTML Server Install

Configuring the Cluster

Oracle HTTP Server Installation

Oracle JDE User Creation

Full Package Build

Summary

Oracle JD Edwards Performance and Scalability

Workload Description

Test Methodology

Test Scenarios

Interactive Workload Mix

Interactive with Batch Test Scenario

Interactive Workload Scaling

User Response Time

CPU Utilization

Memory Utilization

I/O Performance

Individual UBEs

Only Batch Execution

Interactive with Batch on Separate Physical Server

User Response Time

CPU Utilization

Memory Utilization

I/O Performance

Best Practices & Tuning Recommendations

System Configuration

Oracle RAC Configuration

WebLogic Server Configuration

Oracle JD Edwards Enterprise Server Configuration

Conclusion

Bill of Materials

Appendix A - Workload Mix for Batch and Interactive Test

Appendix B - Reference Documents

Appendix C - Reference Links


Oracle JD Edwards on FlexPod with Oracle Linux
Last Updated: December 19, 2013

Building Architectures to Solve Business Problems

About the Authors

Anil Dhiman, Technical Marketing Engineer, SAVBU, Cisco Systems

Anil Dhiman is a Technical Marketing Engineer with the Server Access Virtualization Business Unit (SAVBU) at Cisco. Anil has over 12 years of experience in benchmarking and performance analysis of large multi-tier systems, such as foreign exchange products. Anil specializes in optimizing and tuning applications deployed on J2EE application servers and has delivered world-record numbers for the SPECjbb2005 and SPECjbb2013 benchmarks on the Cisco Unified Computing System. Anil has also worked as a performance engineer on the Oracle iAS team. Prior to joining Cisco, Anil worked as a performance engineering architect with Symphony Services.

Acknowledgment

For their support and contribution to the design, validation, and creation of the Cisco Validated Design, we would like to thank:

John McAbel, Cisco

Vadiraja Bhatt, Cisco

Shankar Govindan, Cisco

Amar Venkatesh, Cisco

Ramakrishna Nishtala, Cisco

Niranjan Mohapatra, Cisco

Rini Kuriakose, Cisco

Naveen Harsani, NetApp

About Cisco Validated Design (CVD) Program


The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit:

http://www.cisco.com/go/designzone

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

Cisco and the Cisco Logo are trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at http://www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1005R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.


© 2013 Cisco Systems, Inc. All rights reserved.

Oracle JD Edwards on FlexPod with Oracle Linux


Executive Summary

Oracle JD Edwards is a suite of products from Oracle that caters to the Enterprise Resource Planning (ERP) requirements of an organization. Oracle has three flagship ERP applications: Oracle E-Business Suite, PeopleSoft, and Oracle JD Edwards. ERP applications have been improving the productivity of organizations for a couple of decades now. However, with increased complexity and extreme performance requirements, customers are constantly looking for better infrastructure on which to host and run ERP applications.

This design guide presents a differentiated solution using FlexPod that validates the Oracle JD Edwards environment on the Cisco Unified Computing System (Cisco UCS) with Oracle Linux 5.8, an Oracle 11gR2 RAC database, Cisco UCS Blade Servers, Cisco Nexus 5548 UP switches, Cisco UCS Manager (UCSM), and the NetApp FAS 3270 storage controller. The Cisco Oracle Competency Center tested, validated, benchmarked, and showcased the Oracle JD Edwards ERP application using the Oracle Day in the Life (DIL) kit.

Purpose of this Guide

This design guide describes best practices for deploying Oracle JD Edwards on FlexPod. It provides a step-by-step procedure to deploy Cisco UCS®, the Cisco Nexus® family of switches, NetApp FAS 3270 storage controllers, and the Oracle JD Edwards application.

The design is validated by measuring the performance and scalability of Oracle JD Edwards in the FlexPod environment. This is achieved by benchmarking the Oracle JD Edwards Day in the Life (DIL) kit on FlexPod. The Oracle JDE DIL kit comprises interactive application workloads, batch workloads, and Universal Batch Engine processes (UBEs). The interactive applications were validated and benchmarked by scaling from 1,000 to 15,000 concurrent users, and various sets of UBEs were executed concurrently with 5,000 interactive users. Achieving sub-second response times across a large variation of interactive applications and UBEs clearly demonstrates the suitability of FlexPod for small to large Oracle JD Edwards enterprise deployments. In addition, it helps customers make an informed decision when choosing FlexPod for their Oracle JD Edwards implementation.

Business Needs

Customers constantly look for value-for-money propositions when they transition from one platform to another or migrate from proprietary systems to commodity hardware platforms; they endeavor to improve operational efficiency and achieve optimal resource utilization.

Other important aspects are management and maintenance. ERP applications are business-critical and need to be up and running all the time. Ease of maintenance, efficient management with minimal staff, and reduced budgets are pushing infrastructure managers to balance uptime and Return on Investment (ROI).

Server sprawl, older technologies that consume precious real estate, and the power and cooling demands of aging equipment have pushed customers to look for innovative technologies that can address these challenges.

Audience

The target audience for this guide includes sales engineers, field consultants, professional services staff, IT managers, partner engineering staff, and customers who want to deploy Oracle JDE on FlexPod.

This design guide is intended to assist solution architects, Oracle JDE project managers, infrastructure managers, sales engineers, field engineers, and consultants in planning, designing, and deploying Oracle JD Edwards in the FlexPod environment. The prerequisite for readers of this design guide is an architectural understanding of Cisco UCS servers, Cisco Nexus 5548 UP switches, Oracle JD Edwards, NetApp FAS 3270 storage controllers, and related software.

Solution Overview

The solution in this design guide elaborates the deployment of Oracle JD Edwards on FlexPod. The Oracle JD Edwards solution architecture is designed to run on multiple platforms and multiple databases. In this deployment, Oracle JD Edwards EnterpriseOne (JDE E1) Release 9.0.2 is deployed on Oracle Linux 5.8 (Red Hat compatible kernel). The Oracle JDE E1 database is hosted on a two-node Oracle 11gR2 RAC cluster, and the Oracle JDE HTML server runs on Oracle WebLogic Server Release 10.3.6. The design validates the use of an Oracle RAC database for mission-critical applications, such as Oracle JDE E1, to provide high availability and scalability.

The deployment and testing were conducted in a Cisco® test and development environment to measure the performance of Oracle JDE E1 Release 9.0.2 on the FlexPod infrastructure stack. The Oracle JDE E1 DIL kit was deployed as the test workload; it is a suite of 17 test scripts representative of the most popular Oracle JDE E1 applications, including Supply Chain Management (SCM), Supplier Relationship Management (SRM), Human Capital Management (HCM), Customer Relationship Management (CRM), and Financial Management System. This complex mix of applications simulates workloads that closely reflect customer environments.

The solution describes the following aspects of an Oracle JD Edwards deployment on Cisco UCS:

Sizing and Design guidelines for Oracle JD Edwards for both interactive and batch processes.

Configuring Cisco UCS for Oracle JD Edwards

Configuring Fabric Interconnect

Configuring Cisco UCS Blade Servers

Configuring Cisco Nexus 5548 UP switches

Configuring NetApp FAS 3270 Storage Controller for Oracle JD Edwards

Configuring the storage and creating aggregates, volumes, and LUNs

Configuring FC LUNs and initiator groups

Exporting data volumes to servers

Installing and configuring Oracle JDE E1 Release 9.0.2 with Tools update 8.98.4.10

Provisioning the required server resources

Installing and configuring Oracle JDE HTML server, Oracle JDE Enterprise server, and Oracle 11gR2 RAC on Oracle Linux 5.8

Performance characterization of Oracle JD Edwards on FlexPod

Performance and Scaling analysis for Oracle JDE E1 interactive applications

Performance Analysis of Oracle JDE Universal Batch Engine Processes (UBEs)

Split Configuration Scaling: Performance analysis of interactive applications and UBEs executed on separate servers.

Best practices and tuning recommendations to deploy Oracle JD Edwards on Cisco UCS.

Figure 1 illustrates the components of Oracle JD Edwards deployed on Cisco UCS servers.

Figure 1 Deployment Overview of Oracle JD Edwards on FlexPod

Technology Overview

FlexPod

Business leaders are laying out clear mandates for their IT departments as they navigate the current economy. IT infrastructure has to consolidate to save energy costs, and it has to host more applications, share resources across different departments, and become more secure.

Chief Information Officers (CIOs) translate these requirements into fewer data centers and consolidated server, networking, and storage resources that can host multiple applications shared by diverse departments. They want IT to operate as a service, much as large public service providers do.

Cisco and NetApp have developed the FlexPod platform to meet these demanding requirements.

FlexPod is a data center platform that hosts infrastructure software and business applications in virtualized and bare-metal environments. The platform has been tested and validated across leading hypervisors and operating systems from VMware, Red Hat, and Microsoft and can be managed by FlexPod ecosystem partner software.

FlexPod combines servers, storage resources, and the network fabric to create an agile, efficient, and scalable platform for hosting applications. FlexPod allows hardware to be abstracted into service profiles, resulting in a very agile platform. The platform is configured and tested with industry-leading virtualization and native OS environments using the Cisco Validated Design methodology.

The resulting design guides are used by FlexPod partners and service teams of Cisco and NetApp to deploy the platform across many different industries. FlexPod is delivered with a unique set of programmable interfaces that allow automation of deployment, fault monitoring, and use-based billing.

Figure 2 The FlexPod Platform

FlexPod builds on the core capabilities of both the Cisco and NetApp portfolios of industry-leading products (Figure 2):

Cisco Unified Computing System (Cisco UCS): Deploys high-performance, expanded-memory server architecture

Cisco Nexus switching: Converges Fibre Channel and Ethernet on a unified 10 Gigabit Ethernet fabric.

NetApp Storage: Provides storage access through Network File System (NFS), Internet Small Computer System Interface (iSCSI), Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE), all with the unified approach of Data ONTAP.

Figure 3 depicts the components of the FlexPod platform.

Figure 3 FlexPod System Components

Cisco Unified Computing System Overview

Cisco UCS is a set of pre-integrated data center components, including blade servers, adapters, fabric interconnects, and fabric extenders, which are integrated within a common embedded management system. This approach results in far fewer system components and much better manageability, operational efficiency, and flexibility than comparable data center platforms.

Main Differentiating Technologies

The main differentiating technologies described here make Cisco UCS unique and provide several advantages over competing offerings. The technologies presented are high level, and the discussion does not include the underlying technologies (such as Fibre Channel over Ethernet [FCoE]) that support these high-level elements.

Unified Fabric

Unified fabric reduces the number of network adapters, blade-server switches, cables, and management touch points by passing all network traffic to parent fabric interconnects, where it is prioritized, processed, and managed centrally. This approach improves performance, agility, and efficiency and dramatically reduces the number of devices that need to be powered, cooled, secured, and managed.

Embedded Multirole Management

Cisco UCS Manager (UCSM) is a centralized management application that is embedded on the fabric switch. Cisco UCSM controls all the Cisco UCS elements within a single redundant management domain. These elements include all aspects of system configuration and operation, eliminating the need to use multiple, separate element managers for each system component. Massive reduction in the number of management modules and consoles and in the proliferation of agents resident on all the hardware (which must be separately managed and updated) are important deliverables of Cisco UCS. Cisco UCS Manager, using role-based access and visibility, helps enable cross-function communication efficiency, promoting collaboration between data center roles for increased productivity.

Cisco Extended Memory Technology

Significantly enhancing the available memory capacity of some Cisco UCS servers, Cisco Extended Memory Technology helps increase performance for demanding virtualization and large-data-set workloads. Data centers can deploy very high virtual machine densities on individual servers as well as provide resident memory capacity for databases that need only two processors but can dramatically benefit from more memory. The high-memory Dual In-Line Memory Module (DIMM) slot count also allows users to cost-effectively scale the capacity using smaller and less expensive DIMMs.

Cisco Data Center VM-FEX Virtualization Support and Virtualization Adapter

With Cisco Data Center VM-FEX, virtual machines have virtual links that allow them to be managed in the same way as physical links. Virtual links can be centrally configured and managed without the complexity of traditional systems, which interpose multiple switching layers in virtualized environments. I/O configurations and network profiles move along with virtual machines, helping increase security and efficiency while reducing complexity. Cisco Data Center VM-FEX helps improve performance and reduce Network Interface Card (NIC) infrastructure.

Dynamic Provisioning with Service Profiles

Cisco UCS Manager delivers service profiles, which contain abstracted server-state information, creating an environment in which everything unique about a server is stored in the fabric, and the physical server is simply another resource to be assigned. Cisco UCS Manager implements role-based and policy-based management focused on service profiles and templates. These mechanisms provision servers and their network connectivity in minutes rather than hours or days.

Cisco UCS Manager

Cisco UCS Manager (UCSM) is an embedded, unified manager that provides a single point of management for Cisco UCS. Cisco UCSM can be accessed through an intuitive Graphical User Interface (GUI), a command-line interface (CLI), or a comprehensive open XML API. It manages the physical assets of the server and storage-LAN connectivity and it is designed to simplify the management of virtual network connections through integration with several major hypervisor vendors. It provides IT departments with the flexibility to allow people to manage the system as a whole, or to assign specific management functions to individuals based on their roles as managers of server or network hardware assets. It simplifies operations by automatically discovering the available components on the system and enabling a stateless model for resource use. Every instance of Cisco UCSM and all the components managed by it form a domain. Multiple domains can be managed through Cisco UCS Central, which integrates with Cisco UCS Manager, and utilizes it to provide global configuration capabilities for pools, policies, and firmware. Cisco UCS Central software manages multiple, globally distributed Cisco UCS domains with thousands of servers from a single pane.

The elements managed by Cisco UCSM include:

Cisco UCS Integrated Management Controller (CIMC) firmware

Redundant Array of Independent Disks (RAID) controller firmware and settings

Basic Input/Output System (BIOS) firmware and settings, including server Universal User ID (UUID) and boot order

Converged Network Adapter (CNA) firmware and settings, including MAC addresses, World Wide Names (WWNs), and SAN boot settings

Virtual port groups used by virtual machines, using Cisco Data Center VM-FEX technology

Interconnect configuration, including uplink and downlink definitions, MAC address and WWN pinning, VLANs, VSANs, Quality of Service (QoS), bandwidth allocations, Cisco Data Center VM-FEX settings, and EtherChannels to upstream LAN switches.
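The managed objects listed above are exposed through the UCS Manager XML API mentioned earlier. The sketch below builds an aaaLogin request and a configResolveClass query and extracts a session cookie from a response. The /nuova endpoint path, method names, and attribute names follow the published UCS Manager XML API, but the host, credentials, and canned response values are placeholders, and the HTTP transport itself is omitted.

```python
import xml.etree.ElementTree as ET

# Placeholder host; /nuova is the UCS Manager XML API endpoint path.
UCSM_URL = "https://ucsm.example.com/nuova"

def build_login(user: str, password: str) -> str:
    """Build the aaaLogin request body that opens an API session."""
    return '<aaaLogin inName="{0}" inPassword="{1}" />'.format(user, password)

def parse_cookie(response_xml: str) -> str:
    """Extract the session cookie (outCookie) from an aaaLogin response."""
    return ET.fromstring(response_xml).attrib["outCookie"]

def build_class_query(cookie: str, class_id: str) -> str:
    """Resolve all managed objects of a class, e.g. computeBlade for blade inventory."""
    return ('<configResolveClass cookie="{0}" classId="{1}" '
            'inHierarchical="false" />'.format(cookie, class_id))

# In practice the login body is POSTed to UCSM_URL; a canned response stands in here.
sample = '<aaaLogin cookie="" response="yes" outCookie="1383739416/9e5a" outRefreshPeriod="600" />'
cookie = parse_cookie(sample)
print(build_class_query(cookie, "computeBlade"))
```

A real client would POST these bodies over HTTPS and refresh the cookie within the outRefreshPeriod window; both behaviors are left out of this sketch.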

Cisco Unified Computing System Components

Figure 4 depicts the Cisco Unified Computing System (UCS) components.

Figure 4 Cisco UCS Components

Cisco UCS is designed to be programmable and self-integrating. A server's entire hardware stack, ranging from server firmware and settings to network profiles, is configured through model-based management. With Cisco Virtual Interface Cards (VICs), even the number and type of I/O interfaces is programmed dynamically, making every server ready to power any workload at any time.

With model-based management, administrators manipulate a model of a desired system configuration and associate a model's service profile with hardware resources, and the system configures itself to match the model. This automation accelerates provisioning and workload migration with accurate and rapid scalability. The result is increased IT staff productivity, improved compliance, and reduced risk of failures due to inconsistent configurations.

Cisco FEX technology reduces the number of system components that need to be purchased, configured, managed, and maintained by condensing three network layers into one. It eliminates both blade server and hypervisor-based switches by connecting fabric interconnect ports directly to individual blade servers and virtual machines. Virtual networks are now managed exactly the same way that physical networks are, but enable massive scalability. This approach represents a radical simplification compared to traditional systems, reducing capital expenditures (CapEx) and operating expenses (OpEx) while increasing business agility, simplifying and accelerating deployment, and improving performance.

Cisco UCS Fabric Interconnects

Cisco UCS Fabric Interconnects create a unified network fabric throughout Cisco UCS. They provide uniform access to both networks and storage, eliminating the barriers to deployment of a fully virtualized environment based on a flexible, programmable pool of resources. Cisco fabric interconnects comprise a family of line-rate, low-latency, lossless 10 Gigabit Ethernet, IEEE Data Center Bridging (DCB), and FCoE interconnect switches. Based on the same switching technology as the Cisco Nexus 5000 Series Switches, Cisco UCS 6200 Series Fabric Interconnects provide additional features and management capabilities that make them the central nervous system of Cisco UCS. The Cisco UCS Manager software runs inside the Cisco UCS fabric interconnects. The Cisco UCS 6200 Series Fabric Interconnects expand the Cisco UCS networking portfolio and offer higher capacity, higher port density, and lower power consumption. These interconnects provide the management and communication backbone for the Cisco UCS B-Series Blade Servers and Cisco UCS Blade Server chassis. All the chassis and blades attached to the fabric interconnects are part of a single, highly available management domain. By supporting unified fabric, the Cisco UCS 6200 Series Fabric Interconnects provide the flexibility to support LAN and SAN connectivity for all blades within their domain at configuration time. Typically deployed in redundant pairs, Cisco UCS fabric interconnects provide uniform access to both networks and storage, facilitating a fully virtualized environment.

The Cisco UCS Fabric Interconnect portfolio currently consists of the Cisco 6200 Series Fabric Interconnects.

Figure 5 Cisco UCS 6248UP 48-Port Fabric Interconnect

The Cisco UCS 6248UP 48-Port Fabric Interconnect is a one-rack-unit (1RU) 10 Gigabit Ethernet, IEEE DCB, and FCoE interconnect providing more than 1 terabit per second (Tbps) of throughput with low latency. It has 32 fixed Fibre Channel, 10 Gigabit Ethernet, IEEE DCB, and FCoE Enhanced Small Form-Factor Pluggable (SFP+) ports.

One expansion module slot can provide up to 16 additional Fibre Channel, 10 Gigabit Ethernet, IEEE DCB, and FCoE SFP+ ports.

Cisco UCS Virtual Interface Card 1240

A Cisco innovation, the Cisco UCS VIC 1240 is a four-port 10 Gigabit Ethernet (GbE), FCoE-capable modular LAN on motherboard (mLOM) designed exclusively for the M3 generation of Cisco UCS B-Series blade servers. When used in combination with an optional port expander, the Cisco UCS VIC 1240 capabilities can be expanded to eight ports of 10 GbE.

Cisco UCS Virtual Interface Card 1280

A Cisco innovation, the Cisco UCS VIC 1280 is an eight-port 10 Gigabit Ethernet, FCoE-capable mezzanine card designed exclusively for Cisco UCS B-Series blade servers.

The Cisco UCS VIC 1240 and 1280 enable a policy-based, stateless, agile server infrastructure that can present up to 256 PCI Express (PCIe) standards-compliant interfaces to the host that can be dynamically configured as either NICs or HBAs. In addition, the Cisco UCS VIC 1280 supports Cisco Data Center VM-FEX technology, which extends the Cisco UCS Fabric Interconnect ports to virtual machines, simplifying server virtualization deployment.

Cisco UCS 5108 Blade Server Chassis

The Cisco UCS 5108 Blade Server Chassis is a 6RU blade chassis that accepts up to eight half-width Cisco UCS B-Series Blade Servers, up to four full-width Cisco UCS B-Series Blade Servers, or a combination of the two. The Cisco UCS 5108 can accept four redundant power supplies with automatic load sharing and failover and two Cisco UCS 2100 or 2200 Series Fabric Extenders. The chassis is managed by Cisco UCS chassis management controllers, which are mounted in the Cisco UCS fabric extenders and work in conjunction with Cisco UCS Manager to control the chassis and its components.

Figure 6 Cisco UCS 5100 Series Blade Server Chassis

A single Cisco UCS managed domain can theoretically scale to up to 40 individual chassis and 320 blade servers. At this time, Cisco UCS supports up to 20 individual chassis and 160 blade servers.
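The chassis and domain arithmetic above can be sanity-checked with a short sketch. The constants mirror the figures in this section (eight half-width slots per chassis, 20 chassis per domain currently supported); the helper function is purely illustrative and is not part of any Cisco tooling.

```python
CHASSIS_SLOTS = 8        # half-width blade slots per Cisco UCS 5108 chassis
SUPPORTED_CHASSIS = 20   # chassis per managed domain supported at this time

def chassis_fits(half_width: int, full_width: int) -> bool:
    """A full-width blade occupies two half-width slots."""
    return half_width + 2 * full_width <= CHASSIS_SLOTS

print(chassis_fits(8, 0))   # eight half-width blades: True
print(chassis_fits(0, 4))   # four full-width blades: True
print(chassis_fits(4, 2))   # a mixed configuration: True
print(chassis_fits(6, 2))   # would need 10 slots: False

# Maximum half-width blades in one managed domain: 20 x 8 = 160
print(SUPPORTED_CHASSIS * CHASSIS_SLOTS)
```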

Basing the I/O infrastructure on a 10-Gbps unified network fabric allows Cisco UCS to have a streamlined chassis with a simple yet comprehensive set of I/O options. The result is a chassis that has only five basic components:

The physical chassis with passive midplane and active environmental monitoring circuitry.

Four power supply bays with power entry in the rear and hot-swappable power supply units accessible from the front panel.

Eight hot-swappable fan trays, each with two fans.

Two fabric extender slots accessible from the back panel.

Eight blade server slots accessible from the front panel.

Figure 7 Cisco UCS B200 M3 Blade Servers

The Cisco UCS B200 M3 Blade server delivers performance, versatility, and density without compromise. It addresses the broadest set of workloads, from IT and web infrastructure to distributed databases. Building on the success of the Cisco UCS B200 M2 Blade server, the enterprise-class Cisco UCS B200 M3 Blade server further extends the capabilities of the Cisco UCS portfolio in a half-width blade form factor. The Cisco UCS B200 M3 harnesses the power of the latest Intel Xeon processor E5-2600 product family, with up to 384 GB of RAM (using 16-GB DIMMs), two disk drives, and up to dual 4x 10 Gigabit Ethernet throughput. In addition, Cisco UCS has the architectural advantage of not having to power and cool excess switches in each blade chassis. With a larger power budget per blade server, Cisco can design uncompromised expandability and capabilities in its blade servers, as evidenced by the new Cisco UCS B200 M3, with its leading memory slot and drive capacity.

Cisco Nexus 5500 Switch

The Cisco Nexus 5500 Series Switch is designed for data center environments with cut-through technology that enables consistent, low-latency Ethernet solutions with front-to-back or back-to-front cooling and data ports in the rear, bringing switching into close proximity with servers and making cable runs short and simple. The switch series is highly serviceable, with redundant, hot-pluggable power supplies and fan modules. It uses data-center-class Cisco NX-OS Software for high reliability and ease of management.

Cisco Nexus 5500 series switches provide higher density, lower latency, and multilayer services. The Cisco Nexus 5500 series switch is well suited for enterprise-class data center server access layer deployments across a diverse set of physical, virtual, storage-access, and High-Performance Computing (HPC) data center environments.

Figure 8 Cisco Nexus 5548UP

The Cisco Nexus 5548UP is a 1RU 10 Gigabit Ethernet (10 GE), Fibre Channel (FC), and Fibre Channel over Ethernet (FCoE) switch offering up to 960 Gbps of throughput and up to 48 ports. The switch has 32 unified ports and one expansion slot supporting modules with 10 Gigabit Ethernet and FCoE ports or connectivity to Fibre Channel SANs with 8/4/2/1 Gbps Fibre Channel switch ports, or both.

Cisco UCS Service Profiles

Programmatically Deploying Server Resources

Cisco UCS Manager provides centralized management capabilities, creates a unified management domain, and serves as the central nervous system of the Cisco UCS. Cisco UCS Manager is an embedded device management software that manages the system from end-to-end as a single logical entity through an intuitive GUI, CLI, or XML API. Cisco UCS Manager implements role- and policy-based management using service profiles and templates. This construct improves IT productivity and business agility. Now infrastructure can be provisioned in minutes instead of days, shifting IT's focus from maintenance to strategic initiatives.

Dynamic Provisioning with Service Profiles

Cisco UCS resources are abstract in that their identity, such as I/O configuration, MAC addresses and WWNs, firmware versions, BIOS boot order, and network attributes (including QoS settings, ACLs, pin groups, and threshold policies), is programmable using a just-in-time deployment model. Cisco UCSM stores this identity, connectivity, and configuration information in service profiles that reside on the Cisco UCS 6200 Series Fabric Interconnect. A service profile can be applied to any blade server to provision it with the characteristics required to support a specific software stack. A service profile allows server and network definitions to move within a management domain, enabling flexibility in the use of system resources. Service profile templates allow different classes of resources to be defined and applied to a number of resources, each with its own unique identities assigned from predetermined pools.
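The pool-based identity assignment described above can be illustrated with a minimal allocator. The 00:25:B5 prefix is Cisco's recommended starting block for UCS MAC pools; the class name, pool size, and sequential allocation policy below are illustrative sketches, not UCS Manager's actual implementation.

```python
class MacPool:
    """Hands out sequential MAC addresses from a predefined block,
    modeling how identities are drawn from a pool when a service
    profile is associated with a blade (illustrative only)."""

    def __init__(self, base: str, size: int):
        self.base = int(base.replace(":", ""), 16)  # base MAC as an integer
        self.size = size
        self.next = 0

    def allocate(self) -> str:
        if self.next >= self.size:
            raise RuntimeError("MAC pool exhausted")
        value = self.base + self.next
        self.next += 1
        raw = format(value, "012x")  # 12 hex digits, zero-padded
        return ":".join(raw[i:i + 2] for i in range(0, 12, 2)).upper()

pool = MacPool("00:25:B5:00:00:00", size=256)
print(pool.allocate())  # 00:25:B5:00:00:00
print(pool.allocate())  # 00:25:B5:00:00:01
```

Releasing an identity back to the pool when a service profile is deleted is part of the real model but omitted from this sketch.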

Service Profiles and Templates

A service profile contains configuration information about the server hardware, interfaces, fabric connectivity, and server and network identity. The Cisco UCS Manager provisions servers utilizing service profiles. The Cisco UCS Manager implements a role-based and policy-based management focused on service profiles and templates. A service profile can be applied to any blade server to provision it with the characteristics required to support a specific software stack. A service profile allows server and network definitions to move within the management domain, enabling flexibility in the use of system resources.

Service profile templates are stored in the Cisco UCS 6200 Series Fabric Interconnects for reuse by server, network, and storage administrators. Service profile templates consist of server requirements and the associated LAN and SAN connectivity. Service profile templates allow different classes of resources to be defined and applied to a number of resources, each with its own unique identities assigned from predetermined pools.

The Cisco UCS Manager can deploy the service profile on any physical server at any time. When a service profile is deployed to a server, the Cisco UCS Manager automatically configures the server, adapters, Fabric Extenders, and Fabric Interconnects to match the configuration specified in the service profile. A service profile template parameterizes the UIDs that differentiate between server instances.
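
The pool-and-template model described above can be illustrated with a short sketch. The following Python is a conceptual model only, not the Cisco UCS Manager XML API; all class names, pool prefixes, and the boot-order tuple are hypothetical, chosen just to show how a template stamps out profiles whose unique identities come from shared predetermined pools.

```python
from dataclasses import dataclass
from itertools import count

class IdentityPool:
    """Hands out unique identifiers (MAC addresses, WWPNs, UUID suffixes)
    from a predefined block; hypothetical stand-in for a UCS identity pool."""
    def __init__(self, prefix, size):
        self.prefix = prefix
        self.size = size
        self._next = count(0)

    def allocate(self):
        n = next(self._next)
        if n >= self.size:
            raise RuntimeError(f"pool {self.prefix} exhausted")
        return f"{self.prefix}:{n:02X}"

@dataclass
class ServiceProfile:
    name: str
    mac: str
    wwpn: str
    boot_order: tuple

class ServiceProfileTemplate:
    """Parameterizes the identities that differentiate server instances."""
    def __init__(self, mac_pool, wwpn_pool, boot_order):
        self.mac_pool = mac_pool
        self.wwpn_pool = wwpn_pool
        self.boot_order = boot_order

    def instantiate(self, name):
        # Each instance draws fresh, unique identities from the shared pools.
        return ServiceProfile(name, self.mac_pool.allocate(),
                              self.wwpn_pool.allocate(), self.boot_order)

mac_pool = IdentityPool("00:25:B5:00:00", 16)
wwpn_pool = IdentityPool("20:00:00:25:B5:00", 16)
template = ServiceProfileTemplate(mac_pool, wwpn_pool, ("san-primary", "lan"))

# Two profiles from one template: identical policy, unique identities.
profiles = [template.instantiate(f"jde-ent-{i}") for i in range(2)]
```

Because the identities live in the profile rather than in the hardware, moving a profile to another blade carries the MAC, WWPN, and boot order with it, which is the mechanism the surrounding text describes.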

This automation of device configuration reduces the number of manual steps required to configure servers, Network Interface Cards (NICs), Host Bus Adapters (HBAs), and LAN and SAN switches. Figure 9 shows the service profile, which contains abstracted server state information, creating an environment to store unique information about a server.

Figure 9 Service Profile

UCS Central

Cisco UCS Manager provides a single point of management for an entire Cisco Unified Computing System (Cisco UCS) domain of up to 160 servers and associated infrastructure. Cisco UCS Manager uses a policy-based approach to provision servers by applying a desired configuration to physical infrastructure. Using Cisco UCS service profiles, administrators can reproduce existing physical environments, including the I/O configuration, firmware, and settings. The configuration is applied quickly, accurately, and automatically, improving business agility. A Role-Based Access Control (RBAC) model helps ensure security of the system configurations.

Cisco UCS Central software extends the simplicity and agility of managing a single Cisco UCS domain across multiple Cisco UCS domains. Cisco UCS Central software allows companies to easily work on a global scale, putting computing capacity close to users while managing infrastructure with centrally defined policies. Cisco UCS Central software makes it easy to help ensure global policy compliance, with subject-matter experts choosing the resource pools and policies that need to be enforced globally or managed locally. With a simple drag-and-drop operation, Cisco UCS service profiles can be moved between geographies to enable fast deployment of infrastructure, when and where it is needed, to support business workloads. Cisco UCS Central software does not replace Cisco UCS Manager, which is the basic engine for managing a Cisco UCS domain. It builds on the capabilities provided by Cisco UCS Manager and requires it to be in place to effect changes in individual domains.

Figure 10 Cisco UCS Central Software Architecture

Features and benefits

Cisco UCS Central software enables global management of many Cisco UCS domains, making staff more efficient and effective. It gives Cisco UCS administrators a high-level view and management of all, or groups of, Cisco UCS domains with:

Centralized inventory of all Cisco UCS components for a definitive view of the entire infrastructure and simplified integration with current Information Technology Infrastructure Library (ITIL) processes.

Centralized, policy-based firmware upgrades that can be applied globally or selectively through automated schedules or as business workloads demand.

Global ID pooling to eliminate identifier conflicts.

Global administrative policies that enable both global and local management of the Cisco UCS domains.

An XML API, building on the Cisco UCS Manager XML API, for easy integration into higher-level data center management frameworks.

NetApp FAS 3270 Storage

The NetApp FAS 3270 can handle today's diverse, virtualized workloads and easily respond to future data center expansion. The NetApp FAS 3270 series meets the storage needs of business applications in both virtual and traditional environments in a cost-effective manner. It is ideal for demanding business applications in Enterprise Resource Planning environments and dramatically reduces the consumption of raw storage, power, cooling, and space thanks to the NetApp FAS 3270's highly efficient storage utilization.

The NetApp FAS 3270 supports FC, FCoE, IP SAN (iSCSI), NFS, CIFS, HTTP, and FTP storage networking. It provides high availability features, such as Alternate Control Path (ACP), an Ethernet-based service processor and NetApp Data ONTAP management interface, and redundant hot-swappable controllers, cooling fans, power supplies, and optics. Additionally, it supports highly available controller configurations, such as active-active controllers with controller failover and multipath HA storage.

NetApp Data ONTAP is the fundamental NetApp software that runs on all NetApp storage systems. Data ONTAP is a highly optimized, scalable operating system that supports mixed NAS and SAN environments and a range of protocols, including Fibre Channel, iSCSI, FCoE, NFS, and CIFS. The platform includes a patented file system, Write Anywhere File Layout (WAFL), and storage virtualization capabilities. By leveraging the NetApp Data ONTAP platform, the NetApp Unified Storage Architecture offers the flexibility to manage, support, and scale different business environments using a common NetApp Data ONTAP operating system. This architecture allows users to collect, distribute, and manage data from all locations and applications at the same time. This allows the investment to scale by standardizing processes, cutting management time, and increasing availability. Figure 11 shows the different NetApp Unified Storage Architecture platforms.

Figure 11 NetApp Unified Storage Architecture Platforms

Oracle JD Edwards

Oracle's JD Edwards (JDE) is the ERP solution of choice for many small and medium-sized businesses (SMBs). Oracle JDE E1 offers an attractive combination of a large number of easy-to-deploy and easy-to-use ERP applications across multiple industries. These applications include Supply Chain Management (SCM), Human Capital Management (HCM), Supplier Relationship Management (SRM), Financials, and Customer Relationship Management (CRM). Figure 12 describes the various components of Oracle JD Edwards.

Figure 12 Oracle JD Edwards Components

HTML Server

The HTML server is the interface of Oracle JDE to the outside world. It allows Oracle JDE ERP users to connect to their applications from their browsers through the Web server. It is one of the tiers of the standard three-tier Oracle JDE architecture. The HTML server is not merely an interface: it has logical capabilities and runs Web services that process some of the data, so that only the result set is sent over the WAN to end users.

Enterprise Server

An enterprise server hosts the Oracle JDE applications that execute all the basic functions of the Oracle JDE ERP system, such as running the transaction processing service, batch services, data replication, security, and time-stamp and distributed processing. Multiple enterprise servers can be added for scalability, especially when customers need to apply Electronic Software Updates (ESUs) to one server while another remains online.

Database Server

The database server in an Oracle JDE environment is used to host the data. This server is a data repository and does not process Oracle JDE application logic. The Oracle JDE database server can run any supported database, such as Oracle, SQL Server, DB2, or Microsoft Access. Since this server does not run any of the Oracle JDE applications, the only licensing required for it is the database license; therefore, the server should be sized correctly. If the server has excess capacity, UBEs can be run on it to improve their performance.

Deployment Server

The deployment server is essentially a centralized software (C code) repository for deploying software packages to all the servers and workstations that are part of the Cisco and Oracle JDE solution. Although the deployment server is not a business-critical server, it is a critical piece of the Oracle JDE architecture, without which installing, upgrading, developing, or modifying packages (code) or reports would become impossible.

Server Manager

Server Manager is a key Oracle JDE software component that helps customers deploy the latest Oracle JDE tools software onto the various Oracle JDE servers registered with it. Server Manager is Web based and enables lifecycle management of Oracle JDE products, such as the enterprise server and HTML server, through a Web-based console. It has built-in configuration management capabilities and maintains an audit history of changes made to the components and configuration of the various Oracle JDE server software.

Batch Server

Batch processes are background processes requiring no operator intervention or interactivity. One important batch process in Oracle JDE is the Material Requirements Planning (MRP) process. Batch processes can be scheduled using a process scheduler that runs on the batch server. Oracle JDE customers running a high volume of reports often split the load on their enterprise server such that one or more batch servers handle the high-volume reporting loads. This load sharing frees the enterprise server to handle interactive user loads more efficiently.

Design Considerations for Oracle JD Edwards implementation on FlexPod

This design document provides best practices for designing the Oracle JD Edwards environment, which offer several advantages to organizations choosing the FlexPod platform. These best practices are applicable to organizations of all sizes and requirements. Several factors need to be considered with respect to the JD Edwards HTML server, the JDE E1 application server for interactive and batch (UBE) processes, and, most importantly, the scalability, ease of deployment, and maintenance of the hardware installed for the JD Edwards deployment.

Scalable Architecture Using Cisco UCS Servers

An obvious immediate benefit with Cisco is a single trusted vendor providing all the components needed for the Oracle JD Edwards deployment. Cisco components also provide a scalable platform, dynamic provisioning, failover with minimal downtime, and reliability.

Some of the capabilities offered by Cisco UCS that complement the scalable architecture include the following:

Dynamic provisioning and service profiles: Cisco UCS Manager supports service profiles, which contain abstracted server states, creating a stateless environment. It implements role-based and policy-based management focused on service profiles and templates. These mechanisms fully provision one or many servers and their network connectivity in minutes, rather than hours or days. This can be very valuable in Oracle JD Edwards environments, where new servers may need to be provisioned on short notice, or even a whole new farm built for specific development activities.

Cisco Unified Fabric and Fabric Interconnects: The Cisco Unified Fabric leads to a dramatic reduction in network adapters, blade-server switches, and cabling by passing all network and storage traffic over one cable to the parent Fabric Interconnects, where it can be processed and managed centrally. This improves performance and reduces the number of devices that need to be powered, cooled, secured, and managed. The Cisco UCS 6200 Series Fabric Interconnects offer key features and benefits, including:

High performance Unified Fabric with line-rate, low-latency, lossless 10 Gigabit Ethernet, and Fibre Channel over Ethernet (FCoE).

Centralized unified management with Cisco UCS Manager software.

Virtual machine optimized services with the support for VN-Link technologies.

Unified Fabric for Oracle JD Edwards deployment helps to use the same fabric path for both the Ethernet and FC traffic. This provides multiple storage options using either block or file-level storage. 10GE lossless Ethernet connectivity provides high throughput between Oracle JDE E1 and the database server, which helps in reducing the long running UBE execution time, such as Material Requirements Planning (MRP).

To accurately design Oracle JD Edwards on any hardware configuration, engineers need to understand the characteristics of each tier in the Oracle JDE deployment, such as CPU, memory, and I/O operations. For instance, the Oracle JDE Enterprise server for interactive processes is both CPU- and memory-intensive but low on disk utilization, whereas the database server is more memory- and disk-intensive. Table 1 describes some of the important design characteristics for the Oracle JD Edwards on Cisco UCS server solution.

Table 1 Design Considerations for the Solution

Oracle JDE HTML server - CPU: Medium; Memory: High; Disk I/O: Low

Multiple JVMs run across a single HTML server, and due to high Garbage Collection (GC) activity each JVM requires intensive CPU processing cycles. Therefore, a server with an optimal number of CPU cores and clock frequency and high physical memory is an ideal choice, even with few local disks attached. Multiple JVMs configured on a single server with optimal heap sizes reduce the full GC pause time of each JVM as compared to a single JVM with a large heap size. If customers attach SAN storage to the server, a RAID pool with fewer disks delivering lower IOPS is a good option.

A Cisco UCS B200 M3 server with 256 GB of memory operating at 1600 MHz and two Intel Xeon E5-2690 processors operating at 2.9 GHz is an ideal choice, as it supports the above requirements.

Oracle JDE Enterprise server for interactive applications - CPU: High; Memory: Medium; Disk I/O: Low

The Oracle JDE Enterprise server for interactive applications is CPU-intensive, requires very little disk I/O, and has moderate memory utilization.

A Cisco UCS B200 M3 server with 128 GB of memory operating at 1600 MHz and two Intel Xeon E5-2690 processors operating at 2.9 GHz is an ideal choice, as it supports the above requirements.

Oracle JDE Enterprise server for batch - CPU: High; Memory: Low; Disk I/O: Low

The Oracle JDE Enterprise server for batch is CPU-intensive but low on memory and disk utilization. Therefore, a Cisco UCS B200 M3 server with a 128 GB memory configuration is an ideal choice for the Oracle JDE batch server. The 128 GB memory configuration is selected because organizations use the batch server as a failover option for interactive applications.

Database server - CPU: High; Memory: High; Disk I/O: High

The database server has high CPU, memory, and disk utilization, notably for UBE processes.

A Cisco UCS B200 M3 equipped with 256 GB of memory operating at 1600 MHz, two Intel Xeon E5-2690 processors, and two I/O adapter cards (VIC 1240 and VIC 1280) is a good choice for the Oracle JDE database server.

Deployment server and Server Manager - CPU: Low; Memory: Low; Disk I/O: Low

The deployment server is used to build and deploy packages for the Oracle JDE deployment. Server Manager is used for monitoring the Oracle JDE runtime, such as the daemon processes. Thus, a Cisco UCS B200 M3 with two Intel E5-2609 (4-core) processors and just 32 GB of physical memory is deployed. The utilization of this server is maximized by deploying both Server Manager and the deployment server on a single physical server.


Boot from SAN

Boot from SAN is a critical feature that maximizes the benefits of Cisco UCS stateless computing, which does not require static binding between a physical server and the OS and applications hosted on that server. The OS is installed on a SAN LUN and is booted using a service profile. When the service profile is moved to another server, the server policy and the PWWNs of the HBAs move along with it. The new server then takes on the identity of the old server and appears identical to it.

Following are the benefits of the boot from SAN feature:

Reduce server footprint: Boot from SAN eliminates the need for each server to have its own direct-attached disk (internal disk), which is a potential point of failure. Following are the advantages of diskless servers:

Require less physical space

Require less power

Require fewer hardware components

Are less expensive

Disaster Recovery: Boot information and production data stored on a local SAN can be replicated to another SAN at a remote disaster recovery site. When server functionality at the primary site goes down in the event of a disaster, the remote site can take over with a minimal downtime.

Recovery from server failures: Recovery from server failures is simplified in a SAN environment. Data can be quickly recovered with the help of server snapshots, and mirrors of a failed server in a SAN environment. This greatly reduces the time required for server recovery.

High Availability: A typical data center is highly redundant in nature, with redundant paths, redundant disks, and redundant storage controllers. The operating system images are stored on SAN disks, which eliminates potential problems caused by mechanical failure of a local disk.

Rapid Re-deployment: Businesses that experience temporary high production workloads can take advantage of SAN technologies to clone the boot image and distribute the image to multiple servers for rapid deployment. Such servers may only need to be in production for hours or days and can be readily removed when the production need has been met. Highly efficient deployment of boot images makes temporary server usage highly cost effective.

Centralized Image Management: When operating system images are stored on SAN disks, all upgrades and fixes can be managed at a centralized location. Servers can readily access changes made to disks in a storage array.

With boot from SAN, the server image resides on the SAN, and the server communicates with the SAN through a Host Bus Adapter (HBA). The HBA BIOS contains instructions that enable the server to find the boot disk. After the Power On Self Test (POST), the server hardware looks up the designated boot device in its BIOS settings. After the hardware detects the boot device, it follows the regular boot process.

Cisco UCS and NetApp Unified Storage

This section describes Cisco UCS networking and computing design considerations for deploying Oracle JD Edwards and the Oracle Database 11g R2 GRID Infrastructure with the RAC option in an Oracle Linux bare-metal environment. In this design, NFS traffic is isolated from the regular management and application data network on the same Cisco UCS infrastructure by defining logical VLAN networks. This helps reduce OpEx and CapEx compared to a topology in which a separate, dedicated physical switch is deployed to handle NFS traffic.

Figure 13 presents a detailed view of the physical topology, identifying the various levels of the architecture and the main components of a Cisco UCS in an FC and NFS network design.

Figure 13 Cisco UCS Component in FC and NFS Network Design

As shown in Figure 13, a pair of Cisco UCS 6248UP Fabric Interconnects carries both storage and network traffic from the blades to the Cisco Nexus 5548UP switches. Both the Fabric Interconnects and the Cisco Nexus 5548UP switches are clustered, with a peer link between them, to provide high availability. Six virtual Port Channels (vPCs) are configured to provide public network, private network, and storage access paths from the blades to the northbound switches. Each vPC has VLANs created for the application network data, storage data, and management data paths. This allows separation of the Oracle RAC interconnect traffic, storage traffic, and public traffic, and helps secure the network traffic of each of the three networks. For information on the VLAN and vPC configuration, see the Cisco Nexus Switch Configuration section.

Quality of Service (QoS) is applied to each of the three VLANs to set priorities for the different networks. For instance, the private network (Oracle RAC interconnect) requires high bandwidth and is therefore configured with Platinum QoS. For more information, see the Cisco UCS Manager Quality-of-Service System and Policy section.
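
The vPC and VLAN separation described above can be sketched as an NX-OS configuration fragment. This is an illustrative sketch only; the VLAN IDs, names, port-channel numbers, and the peer-keepalive address are placeholders, and the validated commands appear in the Cisco Nexus Switch Configuration section.

```shell
# Illustrative Cisco NX-OS fragment; all IDs, names, and addresses are placeholders.
feature vpc
feature lacp

vpc domain 100
  peer-keepalive destination 10.29.135.2   # hypothetical mgmt address of the vPC peer

vlan 134
  name public-mgmt                         # public and management traffic
vlan 191
  name oracle-rac-interconnect             # private network, prioritized with Platinum QoS
vlan 192
  name storage-nfs                         # NFS storage traffic

interface port-channel 10
  description vPC to UCS Fabric Interconnect A
  switchport mode trunk
  switchport trunk allowed vlan 134,191,192
  vpc 10
```

Trunking all three VLANs over each vPC while keeping them logically separate is what lets one physical fabric carry public, private, and storage traffic without a dedicated NFS switch.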

Benefits of the NetApp FAS Family of Storage Controllers

The NetApp Unified Storage Architecture offers customers an agile and scalable storage platform. All NetApp storage systems use the NetApp Data ONTAP operating system to provide SAN (FCoE, FC, iSCSI), NAS (CIFS, NFS), and primary and secondary storage in a single unified platform so that all virtual desktop data components can be hosted on the same storage array.

A single process for activities, such as installation, provisioning, mirroring, backup, and upgrading is used throughout the entire product line, from the entry level to enterprise-class controllers. Having a single set of software and processes simplifies even the most complex enterprise data management challenges. Unifying storage and data management software and processes streamlines data ownership, enables companies to adapt to their changing business needs without interruption, and reduces total cost of ownership.

The Flash Cache module in each NetApp FAS 3200 series platform allows faster reads, which benefits Oracle JD Edwards interactive applications and the short-running UBE modules executed during working hours. It increases storage efficiency and helps reduce capacity requirements and costs by 50 percent or more.

Customers who want to run multiple Oracle applications on the same storage infrastructure can benefit from isolation of shared storage and network. This allows secure multi-tenancy and is therefore an ideal choice for data centers hosting multiple Oracle applications for different customers on the same infrastructure.

Oracle JD Edwards customers running multiple UBE reports during working hours or overnight can benefit from NetApp service automation and analytics, automated storage provisioning, and comprehensive visibility and monitoring.

RAID-DP

RAID-DP is NetApp's implementation of double-parity RAID 6, which is an extension of NetApp's original Data ONTAP WAFL RAID 4 design. Unlike other RAID technologies, RAID-DP provides the ability to achieve a higher level of data protection without any performance impact while consuming a minimal amount of storage. For more information on RAID-DP, see: http://www.netapp.com/us/products/platform-os/raid-dp.html

FlexVol

NetApp® FlexVol® storage-virtualization technology enables you to respond to changing storage needs fast, lower your overhead, avoid capital expenses, and reduce disruption and risk. FlexVol technology aggregates physical storage in virtual storage pools, so you can create and resize virtual volumes as your application needs change. NetApp FlexVol allows storage to be provisioned in a similar fashion as the traditional storage.

This reference architecture focuses on the use case of deploying Oracle JD Edwards on NetApp to solve customers' challenges and to meet their needs in the data center. Specifically, this entails FC boot of Cisco UCS hosts, provisioning of data by using NFS, and application access through Gigabit Ethernet, all while leveraging NetApp unified storage.

In a shared infrastructure, the availability and performance of the storage infrastructure are critical because storage outages and performance issues can affect thousands of users. The storage architecture must provide a high level of availability and performance. NetApp and its technology partners have developed a variety of documents detailing these best practices.

This reference architecture highlights the use of the NetApp FAS3200 product line, specifically the FAS3270-A with the 10GE mezzanine card and SAS storage. This architecture supports multiple protocols while offering customers an affordable and powerful choice for delivering shared infrastructure.

For more information on NetApp storage systems support and best practices, see:

NetApp storage systems: www.netapp.com/us/products/storage-systems/

NetApp FAS3270 storage systems: http://www.netapp.com/us/products/storage-systems/fas3200/

NetApp TR-3437: Storage Best Practices and Resiliency Guide

NetApp TR-3450: Active-Active Controller Overview and Best Practices Guidelines

NetApp TR-3884: FlexPod Solutions Guide

NetApp OnCommand Unified Manager Software

NetApp OnCommand management software delivers efficiency savings by unifying storage operations, provisioning, and protection for both physical and virtual resources.

Following are the important product benefits that add value to the businesses:

Simplicity: A single unified approach and a single set of tools to manage both the physical and virtual worlds as organizations move toward a services model of service delivery. This makes NetApp an effective storage platform for the virtualized data center, with a single configuration repository for reporting, event logs, and audit logs.

Efficiency: Automation and analytics capabilities deliver storage and service efficiency, reducing IT CapEx and OpEx by up to 50 percent.

Flexibility: With tools that let you gain visibility and insight into your complex multiprotocol, multivendor environments and open APIs enabling integration with third-party orchestration frameworks and hypervisors, OnCommand offers a flexible solution that helps you rapidly respond to changing demands.

OnCommand gives you visibility across your storage environment by continuously monitoring and analyzing its health. You get a view of what is deployed and how it is being used, enabling you to improve your storage capacity utilization and enhance the productivity and efficiency of the system. This unified dashboard gives at-a-glance status and metrics, making it far more efficient than having to use multiple resource management tools.

Snapshot

NetApp Snapshot technology provides zero-cost, near-instantaneous, point-in-time backup copies of a volume or LUN by preserving Data ONTAP WAFL consistency points (CPs).

Creating Snapshot copies incurs minimal performance impact because data is never moved, as it is with other copy-out technologies. The cost of Snapshot copies grows at the rate of block-level changes, not 100 percent per backup as with mirror copies. Using Snapshot copies can reduce storage costs for backup and restore and opens up a number of efficient data-management possibilities.
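
The cost difference described above can be made concrete with a simple calculation. The figures below (a 1 TB volume, a 5 percent daily change rate, seven retained copies) are hypothetical and ignore metadata overhead; they only illustrate how change-rate-based Snapshot consumption compares with full mirror copies.

```python
def snapshot_space_gb(volume_gb, change_rate, copies):
    """Space consumed by Snapshot copies: only changed blocks per copy."""
    return volume_gb * change_rate * copies

def mirror_space_gb(volume_gb, copies):
    """Space consumed by full mirror copies: 100 percent of the volume per copy."""
    return volume_gb * copies

# Hypothetical 1 TB (1024 GB) volume, 5% daily change, 7 retained daily copies.
snap = snapshot_space_gb(1024, 0.05, 7)   # roughly 358 GB for a week of copies
full = mirror_space_gb(1024, 7)           # 7168 GB for the same retention as mirrors
```

Under these assumptions the week of Snapshot copies consumes about one twentieth of the space that full mirror copies would, which is the savings the paragraph above refers to.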

NetApp Strategy for Storage Efficiency

NetApp's strategy for storage efficiency is based on the built-in foundation of storage virtualization and unified storage provided by its core Data ONTAP operating system and the WAFL. Unlike competitors' technologies, NetApp's technologies surrounding its FAS and V-Series product line have storage efficiency built into their core. Customers who already have other vendors' storage systems and disk shelves can still leverage all storage saving features that come with the NetApp FAS system simply by using the NetApp V-Series product line. This is in alignment with NetApp's philosophy of storage efficiency because customers can continue to use their existing third-party storage infrastructure and disk shelves, and save more by leveraging NetApp's storage-efficient technologies.

Sizing Guidelines for Oracle JD Edwards

Sizing ERP deployments is a complicated process, and getting it right depends largely on the input provided by customers with respect to ERP system usage, business priorities, and end user expectation.

Answers to common ERP sizing questions, such as the number of concurrent interactive users on the system, the total number of ERP end users, the applications those users will access, and the number and type of reports generated during peak activity, help size the system for optimal performance. Proper sizing also requires analyzing the demand the Oracle JDE system is expected to handle during different periods of the fiscal year.

The Oracle JD Edwards configuration used in this solution deployment is geared to handle a very high workload of end users running heavy SRM interactive applications as well as a high number of batch processes. A physical three-tier solution, with the enterprise server, HTML Web server, and database server residing on different physical machines, provides an optimal solution in terms of end-user response times as well as batch throughput.

The following sections briefly describe the sizing aspects of each tier of the three tier Oracle JD Edwards deployment architecture.

Oracle JDE HTML Server

The Oracle JDE HTML server serves interactive application requests from Oracle JDE users. The Oracle JDE HTML server loads the application forms and requests services from the Oracle JDE Enterprise server for application processing. Some very lightweight application logic also runs on the Oracle JDE HTML server. Client requests place significant load on the Oracle JDE HTML servers, since these servers create and manage database and network connections. Oracle JDE HTML server CPU and memory utilization depends on the number of interactive users on the server. Disk utilization is not a major factor in sizing the Oracle JDE HTML server.

Typically, on an Oracle Linux server, the number of interactive users per Java Virtual Machine (JVM) is capped at around 250 to 300 for optimal performance. This minimizes the heap size of each JVM, reducing full GC pause times. It is recommended to configure multiple JVMs, load balanced through the Oracle HTTP Server, to reduce GC pause times and offer high application availability. Due to the high number of JVMs, an HTML server is typically configured with high memory and medium computing power.
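
The per-JVM cap translates directly into a JVM count for a given user population. The following sketch applies the 250-to-300 users-per-JVM guideline from the paragraph above; the function name and default are illustrative, not part of any Oracle JDE tool.

```python
import math

def jvms_needed(concurrent_users, users_per_jvm=300):
    """JVM instances needed at the ~250-300 users-per-JVM cap, rounding up
    so no JVM exceeds the cap."""
    return math.ceil(concurrent_users / users_per_jvm)

# 1000 concurrent interactive users at a conservative 250 users/JVM
# would be load balanced across 4 JVMs behind the Oracle HTTP Server.
jvm_count = jvms_needed(1000, 250)
```

Sizing at the conservative end of the range (250) yields more, smaller JVMs, which is consistent with the goal of keeping each heap small enough to bound full GC pauses.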

Oracle JDE Enterprise Server

The Oracle JDE Enterprise server acts as the central point for serving requests for application logic. The Oracle JDE EnterpriseOne clients make requests for application processing, and depending on the Oracle JDE environment used as well as user preferences, the input data is then processed and returned to the client. The call object kernels running on the Oracle JDE Enterprise server process end-user application requests, and the security kernel handles authentication of end users. Application processing is CPU-intensive, and the CPU frequency and number of cores available to the Enterprise server play a large part in the performance and throughput of the system. As the number of interactive user requests grows, the memory requirements of the Oracle JDE Enterprise server also increase. This is also true for the batch (UBE) reports that the Oracle JDE Enterprise server processes.

The typical sizing recommendation on a Linux server is between 8 and 12 users per call object kernel and one security kernel for every 50 interactive users. The in-memory cache usage of the call object kernels increases as user loads grow.
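
These rules of thumb can be applied mechanically for a given user count. The sketch below uses 10 users per call object kernel as a midpoint of the 8-to-12 range; the function and its defaults are illustrative only, not an Oracle-provided sizing tool.

```python
import math

def kernel_counts(interactive_users, users_per_call_object=10,
                  users_per_security_kernel=50):
    """Apply the 8-12 users per call object kernel (10 as a midpoint) and
    one-security-kernel-per-50-users rules of thumb, rounding up."""
    call_object = math.ceil(interactive_users / users_per_call_object)
    security = math.ceil(interactive_users / users_per_security_kernel)
    return call_object, security

# 500 interactive users -> 50 call object kernels and 10 security kernels.
call_objects, security_kernels = kernel_counts(500)
```

Because call object kernel cache usage grows with user load, a deployment sized this way should also budget Enterprise server memory headroom beyond the kernel counts themselves.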

JDE Database Server

The Oracle JDE Database server services data requests made by both the Oracle JDE Enterprise server and the Oracle JDE HTML servers. Oracle JDE Database server sizing depends on the type of reports being processed as well as the interactive user loads. Some Oracle JDE reports can be very disk I/O-intensive, and depending on the kind of reports being processed, careful consideration must be given to the disk layout. If the database has ample memory available and that memory is used to cache data, application performance benefits from reduced disk I/O operations. The Oracle JDE Database server typically benefits from faster disks and a high memory allocation. The choice of an Oracle RAC database for the Oracle JD Edwards environment provides high availability, improved transaction response times for concurrent UBE processes, and enhanced scalability for Oracle JD Edwards online users. Table 2 lists the minimum server configuration required to deploy Oracle JD Edwards on Cisco UCS. In this solution deployment, Cisco UCS B200 M3 servers with E5-2690 processors are used, as they allow scaling to a high number of concurrent interactive users.

Table 2 Suggested Minimum Server Configuration

Workload Type                Component                                       Blade Type   CPU Type              Memory

Small                        Deployment server                               B22 M3       Dual Socket E5-2403   16 GB
(less than 500 users)        HTML server                                     B200 M3      Dual Socket E5-2650   32 GB
                             Oracle JDE Enterprise for Interactive and Batch B200 M3      Dual Socket E5-2650   32 GB
                             Database server (single instance)               B200 M3      Dual Socket E5-2650   64 GB

Medium                       Deployment server                               B22 M3       Dual Socket E5-2403   16 GB
(500 to 1500 users)          HTML server                                     B200 M3      Dual Socket E5-2650   64 GB
                             Oracle JDE Enterprise for Interactive           B200 M3      Dual Socket E5-2650   64 GB
                             Oracle JDE Enterprise for Batch                 B200 M3      Dual Socket E5-2650   64 GB
                             Database server (node1 and node2)               B200 M3      Dual Socket E5-2650   64 GB

Large                        Deployment server                               B22 M3       Dual Socket E5-2403   16 GB
(more than 1500 users)       HTML server 1 and 2                             B200 M3      Dual Socket E5-2650   64 GB
                             JDE Enterprise for Interactive 1 and 2          B200 M3      Dual Socket E5-2650   64 GB
                             JDE Enterprise for Batch 1 and 2                B200 M3      Dual Socket E5-2650   64 GB
                             Database server (node1 and node2)               B200 M3      Dual Socket E5-2650   128 GB


Figure 14 shows typical Cisco UCS server configurations, derived under ideal lab conditions, for small, medium, and large enterprises. Actual requirements may vary depending on the customer workload and technology landscape. It is recommended that customers contact Cisco consultants for server configuration and related technology implementation guidance.

Figure 14 Oracle JD Edwards on Cisco UCS- Sizing Chart

Oracle JD Edwards Deployment Architecture on FlexPod

Figure 15 illustrates the deployment architecture of Oracle JD Edwards on FlexPod.

Figure 15 Deployment Architecture of Oracle JD Edwards on FlexPod

Table 3 lists the main components and their configurations used in this design guide.

Table 3 Configuration Components

Oracle JDE HTML server 1 and 2

Cisco UCS B200 M3 Blade Server equipped with two Intel Xeon E5-2690 2.9-GHz processors and 256 GB of physical memory; Oracle WebLogic 10.3.6 on Oracle Linux 5.8 (Red Hat compatible kernel); Oracle JD Edwards HTML server code release 8.98.4.10.

Oracle JDE E1 Enterprise server (interactive applications)

Cisco UCS B200 M3 Blade Server equipped with two Intel Xeon E5-2690 2.9-GHz processors and 128 GB of physical memory; Oracle JDE E1 Release 9.0, Update 2, with Tools Release 8.98.4.10, deployed on Oracle Linux 5.8 (Red Hat compatible kernel).

Oracle JDE E1 Enterprise server (batch / UBEs)

Cisco UCS B200 M3 Blade Server equipped with two Intel Xeon E5-2690 2.9-GHz processors and 128 GB of physical memory; Oracle JDE E1 Release 9.0, Update 2, with Tools Release 8.98.4.10, deployed on Oracle Linux 5.8 (Red Hat compatible kernel).

Oracle JDE Database server (node1 and node2)

Cisco UCS B200 M3 Blade Server equipped with two Intel Xeon E5-2690 2.9-GHz processors and 128 GB of physical memory; Oracle RAC 11.2.0.3.

Deployment server and Server Manager

Cisco UCS B200 M3 Blade Server equipped with two 4-core Intel Xeon E5-2609 2.4-GHz processors and 32 GB of physical memory, running Microsoft Windows 2008 R2.

Storage

NetApp FAS 3270 HA pair (two controllers) with Data ONTAP 8.1.2.

Operating System (64-bit)

Oracle Linux 5.8 (Red Hat compatible kernel).

Test Client

HP LoadRunner 9.5 on Microsoft Windows 2003 server.


Infrastructure Setup

This section details the infrastructure setup used to deploy Oracle JD Edwards on FlexPod. Figure 16 shows the high-level workflow for configuring this design solution.

Figure 16 Oracle JD Edwards on FlexPod Setup Workflow

Cisco Nexus Switch Configuration

This section describes the procedure to configure the Cisco Nexus 5548 UP switches. The main configuration tasks detailed in this section are:

Setting up of Cisco Nexus 5548 UP Switch

Enabling Features and Global Configuration

Creating VSAN and Adding FC Interfaces

Configuring VLAN

Configuring Virtual Port Channel (vPC)

Setting up of Cisco Nexus 5548 UP Switch

The NX-OS setup utility starts automatically on the initial boot when you connect to the serial or console port of the switch. Enter the following on the NX-OS CLI to configure the Cisco Nexus switch:

1. Enter yes to enforce secure password standards: yes

2. Enter the password for the administrator (adminuser): <xxxxx>

3. Enter the password a second time to confirm it: <xxxxx>

4. Enter yes to enter the basic configuration dialog: yes

5. Create another login account (yes/no) [n]: Enter

6. Configure read-only SNMP community string (yes/no) [n]: Enter

7. Configure read-write SNMP community string (yes/no) [n]: Enter

8. Enter the switch name: FlexPod-N5K-A Enter

9. Continue with out-of-band (mgmt0) management configuration? (yes/no) [y]: Enter

10. Mgmt0 IPv4 address: <Nexus A mgmt0 IP> Enter

11. Mgmt0 IPv4 netmask: <Nexus A mgmt0 netmask> Enter

12. Configure the default gateway? (yes/no) [y]: Enter

13. IPv4 address of the default gateway: <Nexus A mgmt0 gateway> Enter

14. Enable the telnet service? (yes/no) [n]: Enter

15. Enable the ssh service? (yes/no) [y]: Enter

16. Type of ssh key you would like to generate (dsa/rsa): rsa

17. Number of key bits <768-2048>: 1024 Enter

18. Configure the ntp server? (yes/no) [y]: Enter

19. NTP server IPv4 address: <NTP Server IP> Enter

20. Enter basic FC configurations (yes/no) [n]: Enter

21. Would you like to edit the configuration? (yes/no) [n]: Enter


Note Be sure to review the configuration summary before enabling it.


22. Use this configuration and save it? (yes/no) [y]: Enter


Note Configuration may be continued from the console or by using SSH. To use SSH, connect to the mgmt0 address of Cisco Nexus A.


23. Log in as user admin with the password entered in step 2.

Follow the same steps to configure the Cisco Nexus 5548 Switch B (FlexPod-N5K-B).

Enabling Features and Global Configuration

To enable the required Cisco Nexus 5548 switch features, perform the steps in this section on Cisco Nexus 5548 A (FlexPod-N5K-A) and Cisco Nexus 5548 B (FlexPod-N5K-B) separately.

Login to the Cisco Nexus 5548 Switch, and on the NX-OS CLI, type the following commands:

1. Type "config t" to enter into the global configuration mode.

2. Type "feature lacp".

3. Type "feature fcoe".

4. Type "feature npiv".

5. Type "feature vpc".

6. Type "feature fport-channel-trunk".


Note The FCoE feature must be enabled before enabling NPIV.


Verification: Figure 17 lists the enabled features on the Cisco Nexus 5548 (run "show feature | include enabled").

Figure 17 Features enabled in Cisco Nexus 5548
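Entered directly on the NX-OS CLI, the six steps above reduce to the following configuration block (a sketch of this deployment's commands; note the ordering, with FCoE enabled before NPIV):

```
config t
feature lacp
feature fcoe
feature npiv
feature vpc
feature fport-channel-trunk
```

Run the same block on the second switch, then verify on each with "show feature | include enabled".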

Creating VSAN and Adding FC Interfaces

To create a VSAN and add FC interfaces, follow the steps in this section on Cisco Nexus 5548 A (FlexPod-N5K-A) and Cisco Nexus 5548 B (FlexPod-N5K-B) separately.

Login to the Cisco Nexus 5548 Switch, and type the following commands in the NX-OS CLI:

1. Type "config t" to enter into the global configuration mode.

2. Type "vsan database".

3. Type "vsan 16 name JDE".

4. Type "vsan 16 interface fc1/29-32".

5. Type "y" on the "Traffic on fc1/29 may be impacted. Do you want to continue? (y/n) [n]".

6. Similarly, type "y" for the remaining interfaces fc1/30 to fc1/32.

Verification: Run the command "show vsan membership" to verify that ports fc1/29-32 are listed under vsan 16.

To configure the ports 29-32 as FC ports on Cisco Nexus 5548 A - (FlexPod-N5K-A), and Cisco Nexus 5548 B - (FlexPod-N5K-B) separately, follow the steps mentioned in this section.

Login to the Cisco Nexus 5548 Switch, and type the following commands in the NX-OS CLI.

1. Type "config t" to enter into the global configuration mode.

2. Type "slot 1".

3. Type "interface fc 1/29-32".

4. Type "switchport mode F".

5. Type "no shut".

Follow the same steps to configure the Cisco Nexus Switch B.

Verification: The command "show interface brief" lists these interfaces as FC (Admin Mode "F").
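The VSAN and FC interface procedures above correspond to the following NX-OS configuration block (a sketch assuming ports fc1/29-32, as used in this deployment; answer "y" to the traffic-impact prompts):

```
config t
vsan database
vsan 16 name JDE
vsan 16 interface fc1/29-32
exit
slot 1
interface fc1/29-32
switchport mode F
no shutdown
```

Apply the block on both switches, then verify with "show vsan membership" and "show interface brief".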

Configuring VLAN

A VLAN is a group of end stations in a switched network that is logically segmented by function, project team, or application, irrespective of the physical locations of the users. VLANs have the same attributes as physical LANs, but the end stations can be grouped even if they are not physically located on the same LAN segment.

Table 4 lists the solution design components used to define the VLANs.

Table 4 VLAN Configuration

VLAN Name
VLAN Purpose
VLAN ID

VLAN0613

For external traffic

613

VLAN0760

For traffic among HTML server, E1 server and DB server

760

VLAN 0192

For NFS storage traffic

192

VLAN0191

For Oracle Clusterware InterConnect Traffic

191


To configure the VLANs, follow these steps:

1. Type "config t" to enter into the global configuration mode.

2. From the global configuration mode, type "vlan 613" and press "Enter".

3. Type "name <<name>>" to assign a descriptive name to the VLAN.

4. Type "vlan 760".

5. Type "name <<name>>".

6. Type "vlan 192".

7. Type "name <<name>>".

8. Type "vlan 191".

9. Type "name <<name>>".

10. Type "interface ethernet 1/17-19" (make sure to choose the Ethernet interfaces where the Fabric Interconnects are connected).

11. Type "switchport mode trunk".

12. Type "switchport trunk allowed vlan 613, 760".

13. Type "exit".

14. Repeat step 10 to step 12 to configure the storage ports and the Oracle RAC interconnect ports with allowed VLANs 192 and 191, respectively.

Verification: The command "show vlan" should list the VLANs and the interfaces assigned to them. Alternatively, the command "show run interface <interface name>" shows the configuration for a given interface or port channel. Figure 18 shows the command output.

Figure 18 Summary of all the VLAN Configurations
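The VLAN steps above can be applied as one configuration block. The VLAN names below are illustrative placeholders; the IDs match Table 4, and the trunk is applied to the Fabric Interconnect-facing ports used in this deployment:

```
config t
vlan 613
name External_Traffic
vlan 760
name JDE_Internal_Traffic
vlan 192
name NFS_Storage_Traffic
vlan 191
name Oracle_Interconnect
exit
interface ethernet 1/17-19
switchport mode trunk
switchport trunk allowed vlan 613,760
```

Repeat the trunk configuration on the storage ports and the Oracle RAC interconnect ports with allowed VLANs 192 and 191, respectively, then verify with "show vlan".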

Configuring Virtual Port Channel (vPC)

A virtual port channel (vPC) allows links that are physically connected to two different Cisco Nexus 5000 Series devices to appear as a single port channel to a third device.

Table 5 lists the Cisco Nexus 5548UP vPC configuration, with the vPC domain and the corresponding vPC names and IDs for this design solution. To provide Layer 2 and Layer 3 switching, a pair of Cisco Nexus 5548UP Switches with upstream switching is deployed, providing high availability to Cisco UCS in the event of a failure while handling management, application, and network storage data traffic. In the Cisco Nexus 5548UP switch topology, the vPC feature is enabled to provide high availability, faster convergence in the event of a failure, and greater throughput.

Table 5 vPC Mapping for Cisco Nexus 5548A

vPC Domain   vPC Name      vPC ID   LAN Uplink Ports                  Connected Components

100          vPC-Public1   110      FI-A eth1/9 to N5K-A eth1/7       FI-A and N5K-A provide vPC for all public and storage network traffic
100          vPC-Public2   111      FI-B eth1/9 to N5K-A eth1/8       FI-B and N5K-A provide vPC for all public network traffic
100          vPC-Private1  120      FI-A eth1/11 to N5K-A eth1/13     FI-A and N5K-A provide vPC for the Oracle cluster interconnect
100          vPC-Private2  121      FI-B eth1/11 to N5K-A eth1/14     FI-B and N5K-A provide vPC for the Oracle cluster interconnect
100          vPC-Storage1  130      NetApp-A eth1a to N5K-A eth1/1    N5K-A to NetApp Controller A for NFS traffic
100          vPC-Storage2  131      NetApp-B eth1a to N5K-A eth1/2    N5K-A to NetApp Controller B for NFS traffic

Table 6 vPC Mapping for Cisco Nexus 5548B

vPC Domain   vPC Name      vPC ID   LAN Uplink Ports                  Connected Components

100          vPC-Public1   110      FI-A eth1/10 to N5K-B eth1/7      FI-A and N5K-B provide vPC for all public and storage network traffic
100          vPC-Public2   111      FI-B eth1/10 to N5K-B eth1/8      FI-B and N5K-B provide vPC for all public and storage network traffic
100          vPC-Private1  120      FI-A eth1/12 to N5K-B eth1/13     FI-A and N5K-B provide vPC for the Oracle cluster interconnect
100          vPC-Private2  121      FI-B eth1/12 to N5K-B eth1/14     FI-B and N5K-B provide vPC for the Oracle cluster interconnect
100          vPC-Storage1  130      NetApp-A eth1b to N5K-B eth1/1    N5K-B to NetApp Controller A for NFS traffic
100          vPC-Storage2  131      NetApp-B eth1b to N5K-B eth1/2    N5K-B to NetApp Controller B for NFS traffic



Note Table 5 and Table 6 show the port configuration used in this demonstration. Customers can configure ports as available in their environment. Figure 13 illustrates the recommended network diagram, with the Cisco UCS components in the FC and NFS network design.


Table 5 and Table 6 list the vPC design elements: a single vPC domain, Domain 100, is created across the Cisco Nexus 5548UP member switches to define vPCs that carry specific network traffic. This topology defines six vPCs. The vPC IDs 110, 111, 120, and 121 carry traffic from the Cisco UCS fabric interconnects, and vPC IDs 130 and 131 carry traffic to NetApp storage. These vPCs are managed within the Cisco Nexus 5548UP switches, which connect the Cisco UCS Fabric Interconnects and the NetApp storage system.

To create the vPCs on both Cisco Nexus 5548 A and Cisco Nexus 5548 B, per the configuration details in Table 5 and Table 6, follow the steps in this section.

Login to the Cisco Nexus 5548 UP, and type the following commands in the NX-OS CLI:

1. In the global configuration mode, type "vpc domain 100".

2. Type "role priority 1000".

3. Type "peer-keepalive destination 10.x.x.x". (This IP is the Cisco Nexus 5548 B Management IP).

4. Type "int port-channel 100".

5. Type "switchport mode trunk".

6. Type "switchport trunk allowed vlan all".

7. Type "vpc peer-link".

8. Type "int ethernet 1/3" (peer link port).

9. Type "switchport mode trunk".

10. Type "switchport trunk allowed vlan all".

11. Type "channel-group 100 mode active".

12. Type "exit".


Note Steps 1 through 12 create the vPC domain and peer link. When repeating this procedure on the second Cisco Nexus 5548 UP switch, use the management IP address of the peer switch as the peer-keepalive destination in step 3.


13. Type "int port-channel 110".

14. Type "switchport mode trunk".

15. Type "switchport trunk allowed vlan 192, 760, 613".

16. Type "vpc 110".

17. Type "int ethernet 1/7".

18. Type "channel-group 110 mode active".

19. Type "switchport mode trunk".

20. Type "int port-channel 111".

21. Type "switchport mode trunk".

22. Type "switchport trunk allowed vlan 192, 760, 613".

23. Type "vpc 111".

24. Type "int ethernet 1/8".

25. Type "channel-group 111 mode active".

26. Type "switchport mode trunk".

27. Type "int port-channel 120".

28. Type "switchport mode trunk".

29. Type "switchport trunk allowed vlan 191".

30. Type "vpc 120".

31. Type "int ethernet 1/13".

32. Type "channel-group 120 mode active".

33. Type "switchport mode trunk".

34. Type "int port-channel 121".

35. Type "switchport mode trunk".

36. Type "switchport trunk allowed vlan 191".

37. Type "vpc 121".

38. Type "int ethernet 1/14".

39. Type "channel-group 121 mode active".

40. Type "switchport mode trunk".

41. Type "int port-channel 130".

42. Type "switchport mode trunk".

43. Type "switchport trunk allowed vlan 192".

44. Type "vpc 130".

45. Type "int ethernet 1/1".

46. Type "channel-group 130 mode active".

47. Type "switchport mode trunk".

48. Type "int port-channel 131".

49. Type "switchport mode trunk".

50. Type "switchport trunk allowed vlan 192".

51. Type "vpc 131".

52. Type "int ethernet 1/2".

53. Type "channel-group 131 mode active".

54. Type "switchport mode trunk".

Verification: Verify that the status for all the vPCs is "Up" for connected Ethernet ports (run show vpc), as displayed in Figure 19.

Figure 19 vPC Status on Cisco Nexus 5548UP A & B
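As a consolidated sketch, the vPC domain, peer link, and one representative uplink vPC (ID 110) from the steps above look as follows on Cisco Nexus 5548 A. The peer-keepalive address is a placeholder for the peer switch's mgmt0 IP, and the remaining vPCs (111, 120, 121, 130, and 131) follow the same pattern with their own allowed VLANs and member ports:

```
config t
vpc domain 100
role priority 1000
peer-keepalive destination 10.x.x.x
interface port-channel 100
switchport mode trunk
switchport trunk allowed vlan all
vpc peer-link
interface ethernet 1/3
switchport mode trunk
switchport trunk allowed vlan all
channel-group 100 mode active
interface port-channel 110
switchport mode trunk
switchport trunk allowed vlan 192,760,613
vpc 110
interface ethernet 1/7
channel-group 110 mode active
switchport mode trunk
```

After configuring both switches, "show vpc" should report each vPC as up.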

For more information on vPC configuration for Cisco Nexus 5548 UP switches, see:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-543563.html

Cisco Unified Computing System Configuration

This section details the Cisco UCS configuration that is performed as part of the infrastructure build for deploying the Oracle JD Edwards. Information on racks, power, and installation is beyond the scope of this document. For more information on these tasks, see:

Cisco UCS 5108 Server Chassis Installation Guide: http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/chassis/install/install.html

Cisco Unified Computing System CLI Configuration Guide: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/cli/config/guide/2.1/b_UCSM_CLI_Configuration_Guide_2_1_chapter_010.html

Cisco UCS Manager GUI Configuration Guide: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.1/b_UCSM_GUI_Configuration_Guide_2_1.html

Validate Installed Firmware

To log into Cisco UCS Manager, follow these steps:

1. Open a Web browser and navigate to the Cisco UCS 6248 Fabric Interconnect cluster address.

2. Click Launch to download the Cisco UCS Manager software.

3. You might be prompted to accept security certificates; accept as necessary.

4. In the login page, enter "admin" for both username and password text boxes.

5. Click Login to access the Cisco UCS Manager software.

6. Click Equipment and then Installed Firmware.

7. Verify the installed firmware version. The firmware version used during this deployment is 2.1(1a).

For more information on Firmware Management, see: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/firmware-mgmt/gui/2.1/b_GUI_Firmware_Management_21_chapter_0100.html

Verification: The Installed Firmware should be displayed as 2.1(1a) as displayed in Figure 20.

Figure 20 Verifying the Installed Firmware Version

Chassis Discovery Policy

To edit the chassis discovery policy, follow these steps:

1. Click Equipment tab in the navigation pane of the UCS Manager window.

2. In the right pane, click Policies tab.

3. Under Global Policies, change the Chassis Discovery Policy to 8-link.

4. Click Save Changes.

Verification: The chassis discovery policy configured to 8-link is displayed in Figure 21.

Figure 21 Displaying the Chassis Discovery Policy Configuration

Enabling Network Components

To enable Fibre Channel, servers, and uplink ports, follow these steps:

1. Click Equipment tab in the UCS Manager window.

2. Choose Equipment> Fabric Interconnects > Fabric Interconnect A (primary) > Fixed Module.

3. Expand the Unconfigured Ethernet Ports section.

4. Select ports 1-8 that are connected to the Cisco UCS chassis and right-click on the ports and select Configure as Server Port.

5. Click Yes to confirm, and then click OK to continue.

6. Select ports 9 and 10. These ports are connected to the Cisco Nexus 5548 UP switches for NFS traffic. Right-click on the ports and select Configure as Uplink Port.

7. Click Yes to confirm, and then click OK to continue.

8. Select ports 11 and 12. These ports are connected to the Cisco Nexus 5548 UP switches. Right-click on the ports and select Configure as Uplink Port.

9. Click Yes to confirm, and then click OK to continue.

10. On the FI Expansion Module, configure port 15 and port 16 as Fibre Channel.

11. Choose Equipment > Fabric Interconnects >Fabric Interconnect A (primary).

12. Right-click, and select Set FC End-Host Mode to put the Fabric Interconnect in Fibre Channel end-host mode.

13. Click Yes to confirm.

14. A message appears stating that the Fibre Channel End-Host Mode has been set and that the switch will reboot.

15. Click OK to continue. Wait until the Cisco UCS Manager is available again and log back into the interface.

16. Repeat step 2 to step 15 for Fabric Interconnect B.

Verification: Check that all configured links show their status as "up", as shown in Figure 22 for Fabric Interconnect A. This can also be verified on the Cisco Nexus switch side by running "show int status"; all ports connected to the Cisco UCS Fabric Interconnects should show as "up".


Note The FC ports are initially enabled with the default VSAN ID 1. After VSAN 16 is created (as in the Cisco Nexus 5548 UP FC configuration steps), re-enable the FC ports with VSAN 16.


Figure 22 Enabling the Server Ports on Fabric Interconnect

Creating MAC Address Pools

To create MAC address pools, follow these steps:

1. Click LAN tab in the UCS Manager window.

2. Choose Pools > root.


Note Two MAC address pools will be created, one for fabric A and one for fabric B.


3. Right click on MAC Pools under the root organization, and select Create MAC Pool to create the MAC address pool for Fabric A.

4. Enter FlexPod_MAC_FIA in the name text box for the MAC pool for fabric A.

5. Enter a description of the MAC pool in the description text box. This is optional; you can choose to omit the description.

6. Click Next to continue.

7. Click Add to add the MAC address pool.

8. Specify a starting MAC address for fabric A.


Note Default pool address can be used, however, it is recommended that the pool address be changed as per the deployment requirements. This also differentiates the MAC address for Fabric A and Fabric B. In this solution it is configured as: (DE:25:B5:0A:00:00).


9. Specify the size as 24 for the MAC address pool for fabric A. (MAC address Size 24 is just for demonstration and not a hard limit for MAC address pool)

10. Click OK.

11. Click Finish. A pop-up message box appears, click OK to save changes.

12. Right click on MAC Pools under the root organization and select Create MAC Pool to create the MAC address pool for fabric B.

13. Enter FlexPod_MAC_FIB in the name text box for the MAC pool of fabric B.

14. Enter a description of the MAC pool in the description text box. This is optional; you can choose to omit the description.

15. Click Next to continue.

16. Click Add to add the MAC address pool.

17. Specify a starting MAC address for fabric B.


Note Default pool address can be used, however, it is recommended that the pool address be changed as per the deployment requirements. This also differentiates the MAC address for Fabric A and Fabric B. In this solution it is configured as: (DE:25:B5:0B:00:00)


18. Specify the size as 24 for the MAC address pool for fabric B.

19. Click OK.

20. Click Finish.

21. A pop-up message box appears; click OK to save changes and exit.

Verification: Choose LAN tab > Pools > root. Select MAC Pools to expand and show the MAC pools created. In the right pane, details of the MAC pools are displayed as shown in Figure 23.

Figure 23 MAC Pool Details
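As an alternative to the GUI steps above, the same MAC pools can be created from the Cisco UCS Manager CLI. The following is a sketch assuming the UCS Manager 2.1 CLI scope/create conventions; the ending block addresses (...:17 hex) are chosen so that each pool holds 24 addresses, matching the pool size above:

```
scope org /
create mac-pool FlexPod_MAC_FIA
create block DE:25:B5:0A:00:00 DE:25:B5:0A:00:17
exit
commit-buffer
scope org /
create mac-pool FlexPod_MAC_FIB
create block DE:25:B5:0B:00:00 DE:25:B5:0B:00:17
exit
commit-buffer
```

Verify the result under LAN tab > Pools > root > MAC Pools, as in the GUI procedure.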

Creating WWPN Pools

To create WWPN pools, perform the following steps:

1. Click SAN tab in the UCS Manager window.

2. Choose WWPN Pools > root.

3. Two WWPN pools will be created, one for fabric A and one for fabric B.

4. Right-click on WWPN Pools, and select Create WWPN Pool.

5. Enter FlexPod_wwpn_A as the name for the WWPN pool for fabric A.

6. Enter a description of the WWPN pool in the description text box. This is optional; you can choose to omit the description.

7. Click Next.

8. Click Add to add a block of WWPNs.

9. Enter 20:00:ED:25:B5:A0:00:00 as the starting WWPN in the block for fabric A.


Note Change the WWPN prefix as per the deployment. This would help in identifying WWPNs initiated from Fabric A or Fabric B.


10. Set the size of the WWPN block to 24. (WWPN Pool Size 24 is just for demonstration and not a hard limit for WWPN Pool)

11. Click OK to continue.

12. Click Finish to create the WWPN pool.

13. Click OK to save changes.

14. Right-click the WWPN Pools and select Create WWPN Pool.

15. Enter FlexPod_wwpn_B as the name for the WWPN pool for fabric B.

16. Enter a description of the WWPN pool in the description text box. This is optional; you can choose to omit the description.

17. Click Next.

18. Click Add to add a block of WWPNs.

19. Enter 20:00:ED:25:B5:B0:00:00 as the starting WWPN in the block for fabric B.

20. Set the size of the WWPN block to 24.

21. Click OK to continue.

22. Click Finish to create the WWPN pool.

23. Click OK to save changes and exit.

Verification: As displayed in Figure 24, the new WWPN pools appear with the 24-address block size.

Figure 24 WWPN Pool Details

Creating WWNN Pools

To create WWNN pools, follow these steps:

1. Click SAN tab in the UCS Manager window.

2. Choose Pools > root.

3. Right-click the WWNN Pools and click Create WWNN Pool.

4. Enter FlexPod_JDE as the name of the WWNN pool.

5. Enter a description of the WWNN pool in the description text box. This is optional; you can choose to omit the description.

6. Click Next to continue.

7. A pop-up window Add WWN Blocks appears; click Add button at the bottom of the page.

8. A pop-up window Create WWN Blocks appears; set the size of the WWNN block to 24.

9. Click OK to continue.

10. Click Finish.

11. Click OK to save changes and exit.

Verification: The new name with the 24 block size displays in the right panel when WWNN pools is selected on the left panel, as shown in Figure 25. (WWNN Pool Size 24 is used for demonstration and is not a hard limit for WWNN Pool)

Figure 25 WWNN Pool Details

Creating UUID suffix pools

To create UUID suffix pools, perform the following steps:

1. Click Servers tab in the UCS Manager window.

2. Choose Pools > root.

3. Right-click the UUID Suffix Pools and select Create UUID Suffix Pool.

4. Enter FlexPod_JDE as the name of the UUID suffix pool.

5. Enter a description of the UUID suffix pool in the description text box. This is optional; you can choose to omit the description.

6. Prefix is set to derived by default. Do not change the default setting.

7. Click Next to continue.

8. A pop-up window Add UUID Blocks appears. Click Add button to add a block of UUID suffixes.

9. Retain the default setting in From field.

10. Set the size of the UUID suffix pool to 24.

11. Click OK to continue.

12. Click Finish to create the UUID suffix pool.

13. Click OK to save changes and exit.

Verification: Ensure that the UUID suffix pools created are displayed as shown in Figure 26.

Figure 26 UUID Suffix Pool Details

Creating VLANs

To create VLANs, follow these steps:

1. Click LAN tab in the UCS Manager window.


Note In this solution design, four VLANs will be created, Management Traffic, Data traffic, Oracle RAC database inter-node private traffic, and Storage NFS traffic as elaborated in Table 4.


2. Right-click the VLANs in the tree, and click Create VLAN(s).

3. Enter <name> as the name of the VLAN (for example, 760). This name will be used for traffic management.

4. Keep the option Common/Global selected for the scope of the VLAN.

5. Enter a VLAN ID for the management VLAN. Keep the sharing type as none.

6. Similarly create other three VLANs.

Verification: Choose LAN tab > LAN Cloud > VLANs. Expand VLANs; all of the created VLANs are displayed. The right pane gives the details of each VLAN, as shown in Figure 27.

Figure 27 Details of Created VLANs

Creating Uplink Port Channels

To create uplink port channels to Cisco Nexus 5548 UP switches, perform the following steps:

1. Click LAN tab in the UCS Manager window.


Note Two port channels should be created on Fabric A, connecting to both Cisco Nexus 5548 UP switches: one for the Oracle RAC interconnect and one for storage and public traffic. A similar configuration is deployed for Fabric B.


2. Expand the Fabric A tree.

3. Right-click on the Port Channels and click Create Port Channel.

4. Enter 110 as the unique ID of the port channel.

5. Enter Public-Storage as the name of the port channel.

6. Click Next.

7. Select ports 1/9 and 1/10 to be added to the port channel.

8. Click >> to add the ports to the Port Channel.

9. Click Finish to create the port channel.

10. A pop-up message box appears, click OK to continue.

11. In the left pane, click the newly created port channel.

12. In the right pane under Actions, choose Enable Port Channel option.

13. In the pop-up box, click Yes, and then click OK to save changes.

14. Repeat step 4 to step 13 to create the port channel for the private interconnect with ports 1/11 and 1/12 and Port Channel ID 120.

15. Expand the Fabric B tree.

16. Right-click on the Port Channels and click Create Port Channel.

17. Enter 111 as the unique ID of the port channel.

18. Enter Public_Storage as the name of the port channel.

19. Click Next.

20. Select ports 1/9 and 1/10 to be added to the Port Channel.

21. Click >> to add the ports to the Port Channel.

22. Click Finish to create the port channel.

23. A pop-up message box appears, click OK to continue.

24. In the left pane, click the newly created port channel.

25. In the right pane under Actions, choose Enable Port Channel option.

26. In the pop-up box, click Yes, and then click OK to save changes.

27. Repeat step 16 to step 26 to create the port channel for the private interconnect with ports 1/11 and 1/12 and Port Channel ID 121.

Verification: Choose LAN tab > LAN Cloud. In the Right Pane, select the LAN Uplinks and expand the Port channels listed as shown in Figure 28.


Note The vPC must already be configured on the Cisco Nexus switches for the Fabric Interconnect port channels to come up.


Figure 28 Details of Port Channels

Creating VSANs

All the servers under Oracle JD Edwards deployment on FlexPod are booted from SAN. The following section elaborates on VSAN creation, which was configured in the Cisco Nexus Switch Configuration section.

To create VSANs, perform the following steps:

1. Click SAN tab in the UCS Manager window.

2. Expand the SAN cloud tree.

3. Right-click on the VSANs and click Create VSAN.

4. Enter VSAN16 as the VSAN name.

5. Enter 16 as the VSAN ID.

6. Enter 16 as the FCoE VLAN ID.

7. Click OK to create the VSANs.

Verification: Choose SAN tab >SAN Cloud >VSANs on the left panel. The right panel displays the created VSANs as shown in Figure 29.

Figure 29 Details of the VSANs

SAN Boot Policy Configuration

In this solution deployment, the Cisco UCS servers boot from the NetApp FAS 3270. With boot from SAN, the OS image resides on the SAN, and the server communicates with the SAN through a host bus adapter (HBA). The HBA BIOS contains instructions that enable the server to find the boot disk. After the power-on self test (POST), the server hardware retrieves the boot device that is designated in the hardware BIOS settings. When the hardware detects the boot device, it follows the regular boot process.

In this setup, vhba0 and vhba1 are used for SAN Boot. Storage WWPN ports are connected in the boot policy as listed in Table 7. (It is essential to retrieve WWPN numbers of FC ports from NetApp Storage Configuration).

Table 7 SAN Boot Ports

vhba0

Storage Controller A Port 0c Primary Target - 50:0A:09:83:9D:93:40:7F

Storage Controller B Port 0d Secondary Target - 50:0A:09:84:8D:93:40:7F

vhba1

Storage Controller B Port 0c Primary Target - 50:0A:09:83:8D:93:40:7F

Storage Controller A Port 0d Secondary Target - 50:0A:09:84:9D:93:40:7F



Note Create the same vHBA names for Fabric A and Fabric B (vhba0 & vhba1) while creating the service profiles.


To create a boot policy, follow these steps:

1. Choose Servers tab > Policies > Boot Policies, and then click Add. A pop-up window Create Boot Policy appears.

2. Enter JDE_FlexPod in the Name text box and "for Oracle JD Edwards" in the Description text box; ensure that the check box Reboot on Boot Order Change is checked.

3. Add the first boot target as CD-ROM; this allows you to install the operating system from the KVM console.

4. Click Add SAN Boot on the vHBAs section; in the Add SAN Boot pop-up window, type vHBA0 and select the type as Primary and click OK. This is the SAN Primary Target.

5. Click Add SAN Boot Target to add a target to the SAN Boot Primary in the vHBAs window. In the Add SAN Boot Target pop-up window, type 51 in the Boot Target LUN. Enter WWPN of port 0c for Storage Controller A in the Boot Target WWPN and select the type as Primary. Click OK.


Note The same LUN ID (51) would be used in Initiators for LUN Configured for NetApp Storage.


6. To add another target to the SAN Boot Primary, click Add to add another SAN Boot Target in the vHBAs window; in the Add SAN Boot Target pop-up box, type 51 in the Boot Target LUN; Enter WWPN of port 0d for Storage Controller B in the Boot Target WWPN, and ensure that the type selected is Secondary. Click OK.

7. Similarly, for the SAN Boot Secondary, click Add SAN Boot in the vHBAs window; in the Add SAN Boot pop-up window, type vhba1 and select the type as Secondary. Click OK.

8. Click Add SAN Boot Target to add a target to the SAN Boot Secondary (vHBA1) in the vHBAs window. In the Add SAN Boot Target pop-up window, type 51 in the Boot Target LUN. Enter WWPN of port 0c for Storage Controller B in the Boot Target WWPN and select the type as Primary. Click OK.

9. To add another target to the SAN Boot Secondary, click Add to add another SAN Boot Target in the vHBAs window; in the Add SAN Boot Target pop-up box, type 51 in the Boot Target LUN; Enter WWPN of port 0d for Storage Controller A in the Boot Target WWPN and ensure that the type selected is Secondary. Click OK.

10. Click Save Changes to save all the settings. The Boot Policy window in Cisco UCS Manager is as shown in Figure 30.

Figure 30 Boot Policy Service Profile

BIOS Policy

The BIOS policy is a feature of Cisco UCS Service Profiles that lets administrators apply consistent BIOS settings across all deployed servers. It also supports application-specific BIOS settings; for instance, in a virtualized environment VT for Directed I/O is enabled, while in a non-virtualized environment it is left at the default. Because the policy maintains a consistent configuration, administrators do not need to interrupt the boot process on each server (pressing F2 on hundreds of servers in a data center) to alter BIOS settings, which speeds deployment. The BIOS policy configured for the Oracle JDE deployment and the memory configuration are shown in Figure 31 and Figure 32.

Figure 31 BIOS Setting for CPU performance

Figure 32 BIOS setting for Physical Memory Performance

Cisco UCS Manager Quality-of-Service System and Policy

Cisco UCS uses IEEE Data Center Bridging (DCB) to handle all traffic within Cisco UCS. This industry-standard enhancement to Ethernet divides the bandwidth of the Ethernet pipe into eight virtual lanes. System classes determine how the DCB bandwidth in these virtual lanes is allocated across the entire Cisco UCS platform.

Each system class reserves a specific segment of the bandwidth for a specific type of traffic, providing an assured level of traffic management even in an oversubscribed system. For example, you can configure the Fibre Channel priority system class to determine the percentage of DCB bandwidth allocated to FCoE traffic.

Table 8 describes the system classes.

Table 8 QoS Classes and Description

System Class
Description

Platinum Priority

Gold Priority

Silver Priority

Bronze Priority

These classes set the Quality of Service (QoS) for all servers that include one of these system classes in the QoS definition in the service profile associated with the server. Each of these system classes manages one lane of traffic. All properties of these system classes are available and can be used to assign custom settings and policies to the servers.

Best-Effort Priority

This class sets the QoS for the lane that is reserved for basic Ethernet traffic. Some properties of this system class are preset and cannot be modified. For example, this class has a drop policy to allow it to drop data packets if required.

Fibre Channel Priority

This class sets the QoS for the lane that is reserved for FCoE traffic. Some properties of this system class are preset and cannot be modified. For example, this class has a no-drop policy to help ensure that it never drops data packets.


QoS policies assign a system class to the outgoing traffic for a vNIC or a virtual HBA (vHBA). You must select a QoS policy in a vNIC policy and then include that vNIC policy in a service profile to make the QoS setting applicable to a specific vNIC.

QoS system classes and corresponding policies are defined for network traffic generated by NFS storage, the Oracle Database 11g R2 GRID, communication between Oracle JD Edwards tiers and the public network. The QoS policy options, which provide efficient network utilization and bandwidth control in an Oracle JD Edwards deployment with Oracle Database 11g R2 GRID Infrastructure and RAC on FlexPod over an NFS network are explained below.

Oracle Clusterware heartbeat requires high bandwidth and a fast response for cache fusion and interconnect traffic. To meet this requirement, a RAC_HB QoS policy is created and defined with the Platinum class with the highest weight (bandwidth), and a maximum transmission unit (MTU) of 9000. This helps in higher throughput required for heavy Oracle RAC interconnect traffic.

Data files and redo logs of Oracle Database, binaries and application files of Oracle JD Edwards Enterprise One and HTML server are deployed with Oracle DB QoS policy. This policy is created with Gold class, which has the second highest weight (bandwidth) and an MTU of 9000.

For Oracle JD Edwards network traffic for clients, inter-tier communication and operations that have lower bandwidth requirements, the Best-Effort QoS class with the least weight (bandwidth) is defined on Cisco UCS.


Note To apply QoS across the entire system, from Cisco UCS to the upstream switches (Cisco Nexus 5548UP Switches), you need to configure similar QoS class and policy types in Cisco Nexus 5548 UP switches with the right Class-of-Service (CoS) values that match the Cisco UCS QoS classes.


On the Cisco Nexus 5548UP switch, a policy map of each type is defined with multiple class types, with configuration values matching the Cisco UCS QoS classes, and applied at the global system level. Table 9 shows the Cisco UCS and Cisco Nexus 5548UP switch QoS mapping used to achieve end-to-end QoS.

Table 9 QoS Mapping and QoS Policy Configurations

Cisco UCS QoS                              Cisco Nexus 5548UP QoS
Policy Name   Priority      MTU    CoS     (Class and Policy Type: Network QoS and QoS)

RAC_HB        Platinum      9000   5       Network QoS: MTU 9000 and CoS 5; QoS: QoS group 5
OracleDB      Gold          9000   4       Network QoS: MTU 9000 and CoS 4; QoS: QoS group 4
Public        Best Effort   1500   Any     Network QoS: MTU 1500
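The Cisco Nexus side of this mapping can be sketched as an NX-OS configuration outline. This is a minimal sketch, not the validated configuration from this solution: the class-map and policy-map names (RAC_HB, OracleDB, JDE_QOS, JDE_NQ) are illustrative, while the CoS values, qos-group numbers, and MTUs follow Table 9.

```
class-map type qos match-all RAC_HB
  match cos 5
class-map type qos match-all OracleDB
  match cos 4
policy-map type qos JDE_QOS
  class RAC_HB
    set qos-group 5
  class OracleDB
    set qos-group 4

class-map type network-qos RAC_HB_NQ
  match qos-group 5
class-map type network-qos OracleDB_NQ
  match qos-group 4
policy-map type network-qos JDE_NQ
  class type network-qos RAC_HB_NQ
    mtu 9000
  class type network-qos OracleDB_NQ
    mtu 9000

system qos
  service-policy type qos input JDE_QOS
  service-policy type network-qos JDE_NQ
```

Applying the policies under system qos makes the classification and MTU settings effective switch-wide, which is what the global-system-level mapping in Table 9 assumes.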


To configure CoS on Cisco Nexus 5548 UP for Oracle RAC private interconnect traffic and storage traffic, type the following commands at the CLI:

CiscoNexus5548UP Switch A

FlexPod-N5K-A# configure terminal
FlexPod-N5K-A(config)# interface port-channel 120
FlexPod-N5K-A(config-if)# untagged cos 5
FlexPod-N5K-A(config-if)# interface port-channel 130
FlexPod-N5K-A(config-if)# untagged cos 4

CiscoNexus5548UP Switch B

FlexPod-N5K-B# configure terminal
FlexPod-N5K-B(config)# interface port-channel 121
FlexPod-N5K-B(config-if)# untagged cos 5
FlexPod-N5K-B(config-if)# interface port-channel 131
FlexPod-N5K-B(config-if)# untagged cos 4

For more information on Cisco Nexus QoS configurations, see: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/qos/Cisco_Nexus_5000_Series_NX-OS_Quality_of_Service_Configuration_Guide_chapter3.html#con_1150612

Table 10 shows each QoS policy name with the corresponding priority, weight, and MTU value. These values are applied to static vNICs in the Oracle JD Edwards Infrastructure deployment environment.

Table 10 Cisco UCS QoS Policy Variables

Policy Name   Priority      Weight (Percentage)   MTU
RAC_HB        Platinum      10                    9000
OracleDB      Gold          9                     9000
Public        Best Effort   5                     1500


Figure 33 shows Cisco UCS QoS system class, and QoS policy configurations defined for an application on static and dynamic vNICs for accessing the Oracle database server.

Figure 33 Summary of the QoS System Class and Policy Configuration

Figure 34 elaborates the navigation to QoS policy screen.

Figure 34 QoS policies Screen Navigation

Figure 35 shows how the class priorities are applied to the named QoS policies in Cisco UCS Manager.

Figure 35 Applying the QoS policies to the QoS System Class

For more information about configuration, see: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/qos/Cisco_Nexus_5000_Series_NX-OS_Quality_of_Service_Configuration_Guide_chapter3.html#con_1150612

UCS Service Profile Configuration

An important aspect of configuring a physical server in a Cisco UCS 5108 chassis is to develop a service profile through Cisco UCS Manager. A service profile is an extension of the virtual machine abstraction applied to physical servers. The definition has been expanded to include elements of the environment that span the entire data center, encapsulating the server identity (LAN and SAN addressing, I/O configurations, firmware versions, boot order, network VLAN, physical port, and Quality-of-Service [QoS] policies) in logical service profiles that can be dynamically created and associated with any physical server in the system within minutes rather than hours or days. The association of service profiles with physical servers is performed as a simple, single operation. It enables migration of identities between servers in the environment without requiring any physical configuration changes and facilitates rapid bare metal provisioning of replacements for failed servers.

Service profiles can be created in several ways:

Manually: Create a new service profile using the Cisco UCS Manager GUI.

From a Template: Create a service profile from a template.

By Cloning: Cloning a service profile creates a replica of that service profile. Cloning is equivalent to creating a template from the service profile and then creating a new service profile from that template to associate with a server.

In this scenario, a service profile updating template is created and used to instantiate service profiles. A service profile template parameterizes the unique IDs that differentiate one instance of an otherwise identical server from another. Templates can be categorized into two types: initial and updating.

Initial Template: An initial template is used to create a new server from a service profile with unique IDs, but after the server is deployed, there is no link between the server and the template. Changes to the template do not propagate to the server, and all changes to items defined by the template must be made individually on each server deployed with the initial template.

Updating Template: An updating template maintains a link between the template and the deployed servers, and changes to the template cascade to the servers deployed with that template on a schedule determined by the administrator.

Service profiles, templates, and other management data are stored in a high-speed storage system on the Cisco Unified Computing System fabric interconnects and mirrored between fault-tolerant pairs of fabric interconnects.

Creating Service Profile Templates

Create two service profile templates: the first for the Oracle JD Edwards HTML and E1 servers, and the second for the Oracle RAC nodes. A separate template is needed for the Oracle RAC nodes so that two additional vNICs can be created for them; these vNICs are assigned VLAN 191 for cluster interconnect traffic.

To create service profile templates in UCSM, follow these steps:

1. Click the Servers tab at the top left of the UCS Manager window.

2. Choose Service Profile Templates > root. In the right window, click Create Service Profile Template under the Actions tab. The Create Service Profile Template window appears.

3. In the Identify Service Profile Template section:

a. Enter the name of the service profile template as JD Edwards Template.

b. Select the type as Updating Template. When there is a change in the server configuration, this template helps propagate the changes to all the service profiles.

c. In the UUID section, select FlexPod_JDE as the UUID pool.

d. Click Next to continue to the next section.

e. In the Storage section:

f. Keep default for the Local Storage option.

g. Select the option Expert for the field How would you like to configure SAN connectivity.

h. In the WWNN Assignment field, select FlexPod_JDE_WWNN.

i. Click Add to add a vHBA to the template. The Create vHBA window appears. Ensure that the vHBA name is vhba0.

j. In the WWPN Assignment field, select FlexPod_wwpn_A.

k. Ensure that the Fabric ID is set to A.

l. In the Select VSAN field, select VSAN16.

m. Click OK to save changes.

n. Click Add to add another vHBA to the template. The Create vHBA window appears. Ensure that the vHBA name is vhba1.

o. In the WWPN Assignment field, select FlexPod_wwpn_B. Ensure that the Fabric ID is set to "B".

p. In the Select VSAN field, select VSAN16.

q. Click OK to save changes.

r. Ensure that both vHBAs are created.

s. Click Next to continue.


Note The WWPN of vhba0 and vhba1 would be used during zoning in Cisco Nexus 5548 UP and in Initiators during NetApp FC LUN configuration.


4. Network Section

a. Restore the default setting for Dynamic vNIC Connection Policy field.

b. Select the option Expert for the field How would you like to configure LAN connectivity.

c. Click Add to add a vNIC to the template.

d. The Create vNIC window appears. Enter the name of the vNIC as eth0.

e. Select the MAC address assignment field as FlexPod_MAC_FIA.

f. Select Fabric ID as Fabric A.

g. Select appropriate VLANs (760) in the VLANs.

h. Click OK to save changes.

i. Click Add to add a vNIC to the template.

j. The Create vNIC window appears. Enter the name of the vNIC eth1.

k. Select the MAC address assignment field as FlexPod_MAC_FIB.

l. Select Fabric ID as Fabric B.

m. Select appropriate VLANs (760) in the VLANs. The VLAN was already created in Creating VLANs section.

n. Select Adaptor Policy as Linux.

o. Similarly, create two vNICs for storage using VLAN 192. Select the QoS policy OracleDB created in the Cisco UCS Manager Quality-of-Service System and Policy section. Name them vStorageNIC1 and vStorageNIC2 for easy identification.

p. Click OK to add the vNIC to the template. Ensure that all the vNICs are created.

q. Click Next to continue.

5. vNIC/vHBA Placement section

a. Restore the default setting as Let System Perform Placement in the Select Placement field.

b. Ensure that all the vNICs and vHBAs are listed.

c. Click Next to continue.

d. In the Server Boot Order section, select the boot policy (JDE_FlexPod) configured under SAN Boot Configuration.

6. Select default settings for Maintenance Policy and click Next.

7. Server assignment.

a. Create a Server Pool (JDE_ServerPool) for Oracle JD Edwards deployment and click Next.

A server pool contains a set of servers. These servers typically share the same characteristics, such as their location in the chassis, or an attribute, such as server type, amount of memory, local storage, type of CPU, or local drive configuration. You can manually assign a server to a server pool, or use server pool policies and server pool policy qualifications to automate the assignment.

If your system implements multi-tenancy through organizations, you can designate one or more server pools to be used by a specific organization. For example, a pool that includes all servers with two CPUs could be assigned to the marketing organization, while all servers with 64 GB memory could be assigned to the finance organization.

For an Oracle JD Edwards deployment, this feature helps assign servers with specific CPU and memory requirements to server pools for batch workloads or for interactive workloads. The Cisco UCS XML API enables easy integration with Oracle Enterprise Manager (OEM) and provides automatic failover when a server assigned to a specific server pool fails. For instance, if a batch server assigned to the batch server pool fails, OEM invokes Cisco UCS XML API scripts, and a server from the spare server pool is moved to the batch server pool and automatically associated with the service profile of the failed batch server.
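As a point of reference, such an integration script talks to the Cisco UCS XML API over HTTPS. The sketch below builds the two XML request bodies involved: the aaaLogin method and the /nuova endpoint path are part of the published UCS XML API, but the host name, credentials, and the exact pool-reassignment payload (a configConfMo on the service profile's lsRequirement child object) are illustrative assumptions, not the validated failover scripts of this solution.

```python
import xml.etree.ElementTree as ET

# Hypothetical UCS Manager address; the XML API endpoint path is /nuova.
UCSM_URL = "https://ucsm.example.com/nuova"

def build_login_request(username: str, password: str) -> str:
    """Build the aaaLogin XML body used to open a UCS XML API session."""
    login = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(login, encoding="unicode")

def build_pool_assignment(sp_dn: str, pool_name: str, cookie: str) -> str:
    """Sketch of a configConfMo request that points a service profile
    (identified by its DN) at a different server pool."""
    req = ET.Element("configConfMo", cookie=cookie, dn=sp_dn,
                     inHierarchical="false")
    in_config = ET.SubElement(req, "inConfig")
    # lsRequirement is the managed object that names the server pool
    # a service profile draws its server from.
    ET.SubElement(in_config, "lsRequirement", name=pool_name)
    return ET.tostring(req, encoding="unicode")

if __name__ == "__main__":
    # Print the request bodies; an actual script would POST them to UCSM_URL.
    print(build_login_request("admin", "secret"))
    print(build_pool_assignment("org-root/ls-SP-JDEBatch1",
                                "JDE_ServerPool", "cookie-from-login"))
```

An OEM-triggered script would POST the login body, take the outCookie from the response, and then POST the pool-assignment body with that cookie.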

8. Under Operational Policies, select the BIOS policy (JDE_BIOS) created under the BIOS Policy section.

9. Click Finish.

Repeat steps 1 through 9 to create a service profile template for the Oracle RAC nodes. While creating vNICs, you must create two additional vNICs named vPrivateNIC1 and vPrivateNIC2 and assign the QoS policy RAC_HB. These two separate vNICs carry Oracle RAC interconnect traffic between the cluster nodes. Both of these vNICs are assigned Fabric ID A.

Figure 36 shows the successful creation of the service profile templates for the Oracle JD Edwards HTML and E1 servers and the Oracle RAC database.

Figure 36 Service Profile Templates

Creating Service Profile from the Template and associating it to Blade

To create a service profile from a template and associate it to a blade, follow these steps:

1. Click Servers tab in the UCS Manager window.

2. Choose Service Profile Templates > root > Sub-Organizations > Service Template JD Edwards Template.

3. Click Create Service Profiles From Template in the Actions tab of the right pane of the window.

4. Enter JDEHTML in the Naming Prefix text box and 1 as the number of instances.

5. Click OK to create service profile.

6. Choose the created Service profile Servers > Service profiles > root > SP-JDEHTML1 and go to Change Service Profile Association.

7. Under Server Assignment, choose Select existing Server from the list.

8. Select the right server based on Chassis ID/Slot number.

9. Click OK to associate the service profile to that blade. Figure 37 shows successful association of the service profile.

Figure 37 Successful Association of the Service Profile

Similarly, associate the service profiles for the Oracle JDE enterprise server, the Oracle JDE batch server and the Oracle RAC database servers. The Oracle RAC nodes would be instantiated through service profile template created specifically for the Oracle RAC nodes. Figure 38 shows all the service profiles created.

Figure 38 Summary of Service Profiles

Figure 39 elaborates an example of Oracle RAC node1.

Figure 39 Displaying the Service Profile for Oracle RAC node1

Creating Zoneset and Zones on Cisco Nexus 5548 UP Switch

After the server association to a service profile is completed, verify the published WWPNs of vhba0 and vhba1 on Cisco Nexus 5548 UP A and Cisco Nexus 5548 UP B, respectively, for that specific server. This section describes the steps to configure the zoneset and zones on each Cisco Nexus 5548 switch for the server JDEApp1. Similar steps must be executed for the other servers in the Oracle JD Edwards deployment on the Cisco Nexus 5548 A and Cisco Nexus 5548 B switches.

To configure the Zoneset and Zones on Cisco Nexus 5548, follow these steps:

1) Identify Storage and Server WWPNs
FlexPod-N5K-A# sh flogi database
--------------------------------------------------------------------------------
INTERFACE        VSAN    FCID           PORT NAME               NODE NAME
--------------------------------------------------------------------------------
fc2/5            16    0x6d000a  50:0a:09:83:9d:93:40:7f 50:0a:09:80:8d:93:40:7f
fc2/6            16    0x6d0002  50:0a:09:84:8d:93:40:7f 50:0a:09:80:8d:93:40:7f
fc2/7            16    0x6d0000  20:4f:54:7f:ee:56:ca:00 20:10:54:7f:ee:56:ca:01
fc2/7            16    0x6d0003  20:00:00:25:b5:de:aa:0f 20:de:00:25:b5:00:00:0f
fc2/7            16    0x6d0005  20:00:00:25:b5:de:aa:0c 20:de:00:25:b5:00:00:0c
fc2/7            16    0x6d0006  20:00:00:25:b5:de:aa:0a 20:de:00:25:b5:00:00:0a
fc2/7            16    0x6d0007  20:00:00:25:b5:de:aa:09 20:de:00:25:b5:00:00:09
fc2/8            16    0x6d0001  20:50:54:7f:ee:56:ca:00 20:10:54:7f:ee:56:ca:01
fc2/8            16    0x6d0004  20:00:00:25:b5:de:aa:0e 20:de:00:25:b5:00:00:0e
fc2/8            16    0x6d0008  20:00:00:25:b5:de:aa:08 20:de:00:25:b5:00:00:08
fc2/8            16    0x6d0009  20:00:00:25:b5:de:aa:17 20:de:00:25:b5:00:00:17
Total number of flogi = 11.

Note The storage and JDEApp1 server WWPNs are the entries of interest in this output. The WWPN 20:00:00:25:b5:de:aa:0c is from vhba0 of JDEApp1, and the WWPNs starting with 50: are from the NetApp controller.


2) Create zone and zoneset
FlexPod-N5K-A# conf t
Enter configuration commands, one per line. End with CNTL/Z.
FlexPod-N5K-A(config)# zone name jde-app1-vhba0 vsan 16
FlexPod-N5K-A(config-zone)# member pwwn 20:00:00:25:b5:de:aa:0c
FlexPod-N5K-A(config-zone)# member pwwn 50:0a:09:83:9d:93:40:7f
FlexPod-N5K-A(config-zone)# member pwwn 50:0a:09:84:8d:93:40:7f
FlexPod-N5K-A(config-zone)# zoneset name jde-n5k1 vsan 16
FlexPod-N5K-A(config-zoneset)# member jde-app1-vhba0
FlexPod-N5K-A(config-zoneset)# zoneset activate name jde-n5k1 vsan 16
Zoneset activation initiated. check zone status
FlexPod-N5K-A(config)# sh zo
zone                   zone-attribute-group   zoneset
FlexPod-N5K-A (config)# sh zoneset active vsan 16
zoneset name jde-n5k1 vsan 16
  zone name jde-app1-vhba0 vsan 16
  * fcid 0x700007 [pwwn 20:00:00:25:b5:de:aa:0c]
  * fcid 0x7000ef [pwwn 50:0a:09:83:9d:93:40:7f]
  * fcid 0x7001ef [pwwn 50:0a:09:84:8d:93:40:7f]
FlexPod-N5K-A(config)# copy r s
[########################################] 100%
 
 

Similar configuration has to be applied on Cisco Nexus 5548 B for vhba1 of the JDEApp1 service profile. After the zones and zoneset are configured on Cisco Nexus 5548 B, the following configuration can be seen:

FlexPod-N5K-B# sh zoneset active vsan 16
zoneset name jde-n5k2 vsan 16
zone name jde-app1-vhba1 vsan 16
  * fcid 0x5f0000 [pwwn 50:0a:09:83:8d:93:40:7f]
  * fcid 0x5f0001 [pwwn 50:0a:09:84:9d:93:40:7f]
  * fcid 0x5f0007 [pwwn 20:00:00:25:b5:de:bb:0c]

Configuring NetApp FAS 3270

This section elaborates the steps to configure the NetApp storage required for Oracle JD Edwards deployment on FlexPod.

The NetApp aggregation layer provides a large virtualized pool of storage capacity and disk IOPS to be used on demand by the hosts attached to it. Aggregate Aggr0 hosts the root volume, which the NetApp Data ONTAP operating system uses to store storage system settings and configuration files. For detailed NetApp storage command options, see: https://library.netapp.com/ecmdocs/ECMP1147528/html/index.html

Figure 40 shows a high-level storage design overview of a NetApp FAS3270 HA pair.

Figure 40 Design Overview on NetApp Storage HA

Table 11 shows the NetApp storage layout with the volumes and LUNs created for various purposes.

Table 11 NetApp Storage Layout with Volumes and LUNs

NetApp Storage Layout

Aggregation and NetApp Controller   NetApp FlexVol   LUN            Size     Comments
OS_Aggr_A on Controller A           OS_VOL_A         OS_HTML1       100 GB   FC boot LUN for the Oracle JD Edwards HTML server
OS_Aggr_A on Controller A           OS_VOL_A         OS_JDEAPP1     60 GB    FC boot LUN for the Oracle JD Edwards Enterprise server
OS_Aggr_A on Controller A           OS_VOL_A         OS_JDEDB1      150 GB   FC boot LUN for the Oracle RAC node1 binaries
OS_Aggr_B on Controller B           OS_VOL_B         OS_HTML2       100 GB   FC boot LUN for the Oracle JD Edwards HTML server
OS_Aggr_B on Controller B           OS_VOL_B         OS_JDEBATCH1   60 GB    FC boot LUN for the Oracle JD Edwards Batch server
OS_Aggr_B on Controller B           OS_VOL_B         OS_JDEDB2      150 GB   FC boot LUN for the Oracle RAC node2 binaries
OS_Aggr_B on Controller B           OS_VOL_B         OS_JDEDEPMGR   200 GB   FC boot LUN for the Oracle JD Edwards Deployment server
Data_Aggr_A on Controller A         JDE_OCR_VOTE     -              30 GB    Used to store OCR and voting disks using NFS
Data_Aggr_A on Controller A         JDE_DATA_A       -              400 GB   Used to store data files, spfiles, and copies of control files
Data_Aggr_A on Controller A         JDE_LOG_A        -              40 GB    Used to store redo log files and copies of control files
Data_Aggr_A on Controller A         JDE_BATCH1       -              160 GB   Used for Oracle JD Edwards Enterprise server and batch binaries and execution logs
Data_Aggr_A on Controller A         JDE_HTML1        -              200 GB   Used for Oracle JDE HTML server binaries and execution logs
Data_Aggr_B on Controller B         JDE_DATA_B       -              400 GB   Used to store data files, spfiles, and copies of control files
Data_Aggr_B on Controller B         JDE_LOG_B        -              40 GB    Used to store redo log files and copies of control files
Data_Aggr_B on Controller B         JDE_APP1         -              160 GB   Used for Oracle JD Edwards Enterprise server binaries and execution logs
Data_Aggr_B on Controller B         JDE_HTML2        -              200 GB   Used for Oracle JDE HTML server binaries and execution logs


To configure NetApp storage systems to implement the storage layout design described above, follow these steps:

NetApp FAS3270HA (Controller A)

1. Create Data_Aggr_A with a RAID group size of 16, 48 disks, and RAID_DP redundancy for hosting NetApp FlexVols as shown in Table 11.

FlexPod-Oracle-A> aggr create Data_Aggr_A -t raid_dp -r 16 -B 64 48
 
 

2. Create OS_Aggr_A with a RAID group size of 8, 6 disks, and RAID 4 redundancy, as shown in Table 11.

FlexPod-Oracle-A> aggr create OS_Aggr_A -t raid4 -r 8 -B 64 6
 
 

3. Create NetApp FlexVols on Data_Aggr_A for hosting Oracle JD Edwards and Database binaries as described in Table 11. These volumes are exposed to deployed servers for Oracle JD Edwards.

FlexPod-Oracle-A> vol create JDE_DATA_A Data_Aggr_A 400g
FlexPod-Oracle-A> vol create JDE_LOG_A Data_Aggr_A 40g
FlexPod-Oracle-A> vol create JDE_APP1 Data_Aggr_A 160g
FlexPod-Oracle-A> vol create JDE_HTML1 Data_Aggr_A 200g
FlexPod-Oracle-A> vol create OS_VOL_A OS_Aggr_A 2048g
 
 

4. Create FC LUNs on OS_Aggr_A for hosting the Oracle JD Edwards boot images, as described in Table 11.

FlexPod-Oracle-A> lun create -s 60g -t linux /vol/OS_VOL_A/OS_JDEAPP1
FlexPod-Oracle-A> lun create -s 100g -t linux /vol/OS_VOL_A/OS_JDEHTML1
FlexPod-Oracle-A> lun create -s 150g -t linux /vol/OS_VOL_A/OS_JDEDB1
 
 

5. Repeat steps 1 through 4 to create the aggregates, FlexVols, and FC LUNs on Controller B, as described in Table 11.

6. Login to NetApp OnCommand System Manager and select Controller A Storage.

7. Select LUNs in the left pane and view all the LUNs created.

8. Select OS_JDEAPP1 and verify that the LUN size, container path, and OS match the parameters of the lun create command executed in step 4. Figure 41 shows the summary of the LUNs created.

Figure 41 Verifying the LUN Properties

9. LUN Management and Initiator Groups:

a. Right-click OS_JDEAPP1 and select Edit.

b. Click Initiator Group tab and select Add Initiator Group.

c. Define Initiator Group Name as JDEAPP1 in the text box.

d. Select Operating System as Linux.

e. Check supported protocol as FC/FCoE.

f. Click Initiators tab and add WWPN of vhba0 and vhba1 of JDEAPP1 Service Profile.

g. Click Create.

h. Select the Initiator Group and define the LUN ID as 51.

Figure 42 WWPNs of JDEAPP1 Service Profile

10. Repeat step 9 to create initiator groups for the other FC LUNs on Controller A and Controller B. Figure 43 shows the initiator groups for the JDEAPP1 LUN.

Figure 43 Initiator Group of JDEApp1.
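The same initiator group and LUN mapping can also be done from the Data ONTAP 7-Mode CLI instead of System Manager. A sketch, using the JDEApp1 WWPNs published in the flogi and zoneset output earlier (the igroup name follows the convention used in step 9):

```
FlexPod-Oracle-A> igroup create -f -t linux JDEAPP1 20:00:00:25:b5:de:aa:0c 20:00:00:25:b5:de:bb:0c
FlexPod-Oracle-A> lun map /vol/OS_VOL_A/OS_JDEAPP1 JDEAPP1 51
FlexPod-Oracle-A> lun show -m
```

The final command displays the LUN-to-igroup mappings so you can confirm that LUN ID 51 matches the ID used in the Cisco UCS boot policy.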

NetApp Multimode Virtual Interfaces

The NetApp multimode virtual interface (VIF) feature is enabled on the NetApp storage systems' 10 Gigabit Ethernet ports (e1a and e1b), and all the flexible volumes created to store Oracle JD Edwards data files and Oracle RAC Database files are accessed over these interfaces using the NFS protocol.

To configure the multilevel dynamic VIF on NetApp FAS3270A (Controllers A and B) HA Pair storage system in the NetApp CLI, type the following commands:

NetApp FAS3270HA (Controller A)

FlexPod-Oracle-A> ifgrp create lacp nfs-A
FlexPod-Oracle-A> ifgrp add nfs-A e1a e1b
FlexPod-Oracle-A> ifconfig nfs-A mtusize 9000 192.191.1.5 netmask 255.255.255.0 partner nfs-B up

NetApp FAS3270HA (Controller B)

FlexPod-Oracle-B> ifgrp create lacp nfs-B
FlexPod-Oracle-B> ifgrp add nfs-B e1a e1b
FlexPod-Oracle-B> ifconfig nfs-B mtusize 9000 192.191.1.6 netmask 255.255.255.0 partner nfs-A up
 
 

Set the MTU to 9000 and enable the jumbo frames on the Cisco UCS static and dynamic vNICs and on the upstream Cisco Nexus 5548UP switches.

To verify the Virtual Interface (VIF) values, follow these steps:

1. Login to NetApp OnCommand System Manager.

2. Choose Configuration > Network > Network Interfaces.

3. Verify the Virtual Interface (VIF) nfs-A created. The MTU size should be 9000 and the trunk mode should be set to multiple, using two 10 Gigabit Ethernet ports (e1a and e1b) on NetApp storage Controller A.

4. Similarly, verify the MTU, trunk mode values on the NetApp Controller B.

For more on information on configuring the Cisco Nexus 5548 UP switch, see: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/qos/Cisco_Nexus_5000_Series_NXOS_Quality_of_Service_Configuration_Guide_chapter3.html#con_1150612

Oracle Linux Installation

Oracle Linux 5.8 can be installed once the Boot LUN is visible to the host.

Some of the important steps during Oracle Linux installation are detailed below.

1. Add linux mpath at the boot prompt to enable native multipathing during installation.

Figure 44 Linux mpath definition

2. Select the NetApp LUN configured in NetApp OnCommand System Manager.

3. Select appropriate Oracle Packages under Base System Software Configuration and install Oracle Linux operating system.

Some of the important steps executed post Oracle Linux installation are detailed below:

1. Verify that the native Linux multipath device maps all the paths to the NetApp OS LUN.

Figure 45 Verify mpath configuration

2. Edit the network configuration to add the IP settings for VLAN 760 (public) and VLAN 192 (storage) on the respective Ethernet interfaces.

3. Edit /etc/hosts to define IP address mapping for Storage 1 (NetApp Controller A) and Storage 2 (NetApp Controller B).

4. Edit /etc/fstab to include the NFS volume for Oracle JD Edwards and Oracle Database binary installation. Figure 46 shows the summary of the mounted NFS volumes.

Figure 46 Verifying the NFS Volume

5. Edit /etc/grub.conf with Oracle Linux Server-base (2.6.18-308.el5) as the default kernel and reboot the server.

6. After the reboot, Oracle Linux runs the Red Hat compatible kernel with the appropriate data volumes available for the Oracle JDE E1 server.

Follow the same steps to configure the Oracle JDE HTML and database servers. For more information on installing the Oracle Linux operating system on Cisco UCS B-Series servers, see: http://www.cisco.com/en/US/docs/unified_computing/ucs/os-install-guides/linux/BSERIES-LINUX_chapter_010.html

Oracle RAC Setup

This section describes the deployment of Oracle Database 11g R2 GRID Infrastructure with the RAC option for the Oracle JD Edwards deployment on FlexPod. After installing Oracle Linux 5.8 (Red Hat compatible kernel) on each RAC node, verify that all the RPM packages required for the Oracle GRID installation were installed as part of the OS installation.

For information on pre-installation tasks, such as setting up the kernel parameters, RPM packages, user creation for the Oracle RAC setup, see:

http://download.oracle.com/docs/cd/E11882_01/install.112/e10812/prelinux.htm#BABHJHCJ
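As a point of reference, the kernel parameters from that pre-installation guide typically end up in /etc/sysctl.conf as shown below. This is a sketch of Oracle's documented 11g R2 starting values; validate them against the guide and size shared-memory parameters for the actual server memory.

```
# /etc/sysctl.conf additions for Oracle 11g R2 (typical starting values)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```

Run sysctl -p after editing the file so the values take effect without a reboot.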

Follow these steps to complete the Oracle Database 11g R2 GRID Infrastructure with RAC option installation. For this solution, the users, groups, directory structure, kernel parameters, and user limits are created as shown in the steps below. You can resize the kernel parameters and user limits, and rename the directory structure, user names, and groups per your business requirements.

1. Create required Oracle users and groups in each RAC node.

groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 2000 -g oinstall -G dba grid
passwd grid
useradd -u 1100 -g oinstall -G dba oracle
passwd oracle
 
 

2. Create directories on each RAC node, and give the ownership of these directories to newly created users in step 1.

mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle
mkdir /data_A
mkdir /data_B
mkdir /log_A
mkdir /log_B
mkdir /ocrvote
chown -R oracle:oinstall   /u01/app/oracle  /data_A   /data_B   /log_A  /log_B
chmod -R 775   /u01/app/oracle   /data_A   /data_B   /log_A  /log_B
chown -R grid:oinstall   /u01/app   /ocrvote 
chmod -R 775   /u01/app   /ocrvote

Note The grid user owns the Oracle GRID installation, whereas the oracle user owns the Oracle Database installation. In this solution, a directory on the OS LUN is used for the GRID and database binary installations. However, you can also install the binaries in a shared directory on NFS volumes.


Table 12 shows the mapping of NFS volumes to the newly created directories on each RAC node.

Table 12 NFS Volumes Mapping in Oracle RAC Node

Local Directory on Guest OS | NetApp NFS Volume | Owner  | FC LUN                  | Purpose
/u01/app/11.2.0/grid        | NA                | grid   | /vol/OS_VOL_A/OS_JDEDB1 | Oracle GRID binary installation
/u01/app/oracle             | NA                | oracle | /vol/OS_VOL_A/OS_JDEDB1 | Oracle Database binary installation
/data_A                     | /vol/JDE_DATA_A   | oracle | NA                      | Datafiles and control files
/data_B                     | /vol/JDE_DATA_B   | oracle | NA                      | Datafiles and control files
/log_A                      | /vol/JDE_LOG_A    | oracle | NA                      | Redo log files and control files
/log_B                      | /vol/JDE_LOG_B    | oracle | NA                      | Redo log files and control files
/ocrvote                    | /vol/JDE_OCR_VOTE | grid   | NA                      | OCR and voting disks


3. Edit the /etc/fstab file in each RAC node, and add an entry for each volume and its corresponding local directory created in the above steps, with the mount options shown below.

Storage1:/vol/JDE_OCR_VOTE /ocrvote nfs rw,bg,hard,rsize=65536,wsize=65536,vers=3,actimeo=0,nointr,suid,timeo=600,tcp 0 0
Storage1:/vol/JDE_DATA_A /data_A nfs rw,bg,hard,rsize=65536,wsize=65536,vers=3,actimeo=0,nointr,suid,timeo=600,tcp 0 0
Storage1:/vol/JDE_LOG_A /log_A nfs rw,bg,hard,rsize=65536,wsize=65536,vers=3,actimeo=0,nointr,suid,timeo=600,tcp 0 0
Storage2:/vol/JDE_DATA_B /data_B nfs rw,bg,hard,rsize=65536,wsize=65536,vers=3,actimeo=0,nointr,suid,timeo=600,tcp 0 0
Storage2:/vol/JDE_LOG_B /log_B nfs rw,bg,hard,rsize=65536,wsize=65536,vers=3,actimeo=0,nointr,suid,timeo=600,tcp 0 0
 
 

For more information on how to find the proper mount options for different file systems of Oracle 11g R2, see:

https://kb.netapp.com/support/index?page=content&id=3010189


Note The rsize and wsize of 65536 are supported by NFS v3 and are used in this configuration to improve performance.
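The five fstab entries above differ only in the storage controller, export path, and mount point. A short script can generate them consistently; the sketch below is a hypothetical helper that assumes the Storage1/Storage2 controller names and volume layout shown above, and writes to a scratch file for review instead of editing /etc/fstab directly.

```shell
#!/bin/sh
# Illustrative helper: generate the NFS fstab entries for review.
# Writes to /tmp/fstab.jde on purpose; copy into /etc/fstab after checking.
OPTS="rw,bg,hard,rsize=65536,wsize=65536,vers=3,actimeo=0,nointr,suid,timeo=600,tcp"
OUT=/tmp/fstab.jde
: > "$OUT"
# server:export:mountpoint triples taken from Table 12
for entry in \
    "Storage1:/vol/JDE_OCR_VOTE:/ocrvote" \
    "Storage1:/vol/JDE_DATA_A:/data_A" \
    "Storage1:/vol/JDE_LOG_A:/log_A" \
    "Storage2:/vol/JDE_DATA_B:/data_B" \
    "Storage2:/vol/JDE_LOG_B:/log_B"
do
    server=${entry%%:*}          # text before the first colon
    rest=${entry#*:}             # everything after the first colon
    export_path=${rest%%:*}
    mnt=${rest#*:}
    printf '%s:%s %s nfs %s 0 0\n' "$server" "$export_path" "$mnt" "$OPTS" >> "$OUT"
done
cat "$OUT"
```

Generating the lines this way keeps the mount options identical across all five entries, which avoids subtle per-volume differences when the file is edited by hand on each RAC node.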


4. Mount all the local directories created to store database, OCR file and voting disks.

jdedb-node1# mount /ocrvote
jdedb-node1# mount /data_A
jdedb-node1# mount /data_B
jdedb-node1# mount /log_A
jdedb-node1# mount /log_B

Give ownership of the mounted directories to the appropriate users:

chown -R oracle:oinstall /data_A  /data_B   /log_A  /log_B
chown -R grid:oinstall /ocrvote
 
 

5. Configure the private and public NICs with the appropriate IP addresses.

6. Identify the virtual IP addresses, SCAN IPs and configure them in DNS, as per Oracle's recommendation. Alternatively, if the DNS service is not available, you can update the /etc/hosts file with all the details (private, public, SCAN and virtual IP).
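If DNS is not available, the /etc/hosts entries might look like the following sketch. All host names and IP addresses below are hypothetical placeholders, not values from this configuration; note that Oracle recommends DNS round-robin resolution for the SCAN name, and a hosts-file SCAN entry can resolve to only a single address.

```
# Public network
10.10.10.11   jdedb-node1
10.10.10.12   jdedb-node2
# Private interconnect
192.168.10.11 jdedb-node1-priv
192.168.10.12 jdedb-node2-priv
# Virtual IPs
10.10.10.21   jdedb-node1-vip
10.10.10.22   jdedb-node2-vip
# SCAN (single address only when DNS is unavailable)
10.10.10.31   jdedb-cluster-scan
```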

7. Create the OCR and voting disk files under /ocrvote.

a. Log in as the grid user from any one node and create the following files.

mkdir -p /ocrvote/ocr /ocrvote/vote
dd if=/dev/zero of=/ocrvote/ocr/ocr1 bs=1M count=500
dd if=/dev/zero of=/ocrvote/ocr/ocr2 bs=1M count=500
dd if=/dev/zero of=/ocrvote/ocr/ocr3 bs=1M count=500
dd if=/dev/zero of=/ocrvote/vote/vote1.dbf bs=1M count=500
dd if=/dev/zero of=/ocrvote/vote/vote2.dbf bs=1M count=500
dd if=/dev/zero of=/ocrvote/vote/vote3.dbf bs=1M count=500
 
 

8. Configure passwordless ssh for the oracle and grid users. For more information about ssh configuration, see the Oracle installation documentation.

9. Configure the /etc/sysctl.conf file by adding shared memory and semaphore parameters required for Oracle GRID Installation. Also configure /etc/security/limits.conf file by adding user limits for Oracle and Grid users. For more information, see: http://docs.oracle.com/cd/E11882_01/install.112/e22489/prelinux.htm#BABIADGG
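The exact values depend on memory size and Oracle's current recommendations; the fragment below is an illustrative sketch based on the typical 11g R2 minimums from the linked pre-installation guide, not the values validated in this solution. Verify each value against the guide before applying it.

```
# /etc/sysctl.conf additions (illustrative 11g R2 minimums)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

# /etc/security/limits.conf additions for the oracle and grid users
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
```

Run sysctl -p after editing /etc/sysctl.conf so the kernel parameters take effect without a reboot; the limits apply at the next login of each user.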

Installing Oracle Database 11g R2 GRID Infrastructure with Oracle RAC and the Database

This section highlights the major steps to install Oracle Database 11g R2 GRID infrastructure with Oracle RAC.

1. Download the Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.3.0) and Oracle Database 11g Release 2 (11.2.0.3.0) for Linux x86-64.

2. Install Oracle Database 11g Release 2 Grid Infrastructure (11.2.0.3.0).

During the grid installation, you are prompted for the location of the Oracle Cluster Registry (OCR) and voting disk files. Select the Shared File System option as shown in Figure 47 and click Next.

Figure 47 Shared File System to Store OCR and Voting disks

3. Specify the locations for the OCR files and the voting disk files, pointing to valid paths on the shared file system, as shown in Figure 48 and Figure 49.

Figure 48 Configure OCR File Location for Normal Redundancy Level

Figure 49 Voting Disk File location

For more information on Linux Grid Infrastructure installation, see: http://www.oracle.com/pls/db112/to_toc?pathname=install.112/e10812/toc.htm

After installing the Oracle Grid Infrastructure, install Oracle Database 11g Release 2. The important steps for Oracle Database 11g installation are detailed below.

1. After the Oracle GRID installation, install Oracle Database 11g Release 2 as the Oracle user with the Software Only option; do not create the database at this stage. The orcl database instance is created in the next step.

2. Run the dbca tool as Oracle user to create the database (orcl).

3. Make sure to place the data files, redo logs, and control files in the proper directories created in the previous steps.

4. Configure Direct NFS client. For improved performance, Oracle recommends using the Direct NFS client shipped with Oracle 11g. The direct NFS client looks for mount point entries in the following order:

$ORACLE_HOME/dbs/oranfstab
/etc/oranfstab
/etc/mtab
 
 

The NFS mount point details are defined in the /etc/fstab file, so there is no need to configure additional connection details. While setting up NFS mounts, see the Oracle documentation for information on the types of data that can and cannot be accessed through the Direct NFS client. For the Direct NFS client to work, the libodm11.so library must be switched to the libnfsodm11.so library on each node, as shown below:

# Stop the database before switching the ODM library
srvctl stop database -d orcl
cd $ORACLE_HOME/lib
# Preserve the stub library, then link in the Direct NFS ODM library
mv libodm11.so libodm11.so_stub
ln -s libnfsodm11.so libodm11.so
# Restart the database to pick up the new library
srvctl start database -d orcl
 
 

For more information on Oracle DNFS configuration, see: http://www.oracle.com/technetwork/articles/directnfsclient-11gr1-twp-129785.pdf
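Although the Direct NFS client in this configuration falls back to the mount details in /etc/fstab, an oranfstab file can optionally be created to specify multiple network paths per storage controller. The sketch below is hypothetical: the path IP addresses are placeholders and not part of this validated configuration, while the server names, exports, and mount points follow Table 12.

```
server: Storage1
path: 192.168.20.10
path: 192.168.21.10
export: /vol/JDE_DATA_A mount: /data_A
export: /vol/JDE_LOG_A mount: /log_A

server: Storage2
path: 192.168.20.20
path: 192.168.21.20
export: /vol/JDE_DATA_B mount: /data_B
export: /vol/JDE_LOG_B mount: /log_B
```

Because oranfstab is first in the Direct NFS client's lookup order, entries placed here take precedence over /etc/mtab; listing multiple path lines lets the client load balance and fail over across storage network interfaces.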

After the configuration is complete, verify Direct NFS client usage by querying the following views:

v$dnfs_servers
v$dnfs_files
v$dnfs_channels
v$dnfs_stats

For example:

SQL> SELECT svrname, dirname FROM v$dnfs_servers;
SVRNAME DIRNAME
---------- --------------------------------------------------
Storage1 /vol/JDE_DATA_A
Storage1 /vol/JDE_LOG_A
Storage2 /vol/JDE_DATA_B
Storage2 /vol/JDE_LOG_B
SQL>

Note The Direct NFS Client supports direct I/O and asynchronous I/O (by default).


For more information on Oracle RAC database installation, see: http://www.oracle.com/pls/db112/to_toc?pathname=install.112/e10813/toc.htm

Oracle JD Edwards Installation

This section describes the steps to install Oracle JD Edwards (JDE) 9.0.2 suite on Oracle Linux 5.8, with Oracle RAC 11.2.0.3 as the RDBMS.

Pre-Requisites

Ensure that you refer to the latest Oracle JDE Minimum Technical Requirements (MTR) documentation for information regarding the pre-requisites for your install. For more information, see: http://docs.oracle.com/cd/E24902_01/nav/installation.htm

Ensure that all the required Oracle JDE software is downloaded from Oracle eDelivery and Oracle UpdateCenter.

Ensure that there is network connectivity between all the machines involved.

General Install Requirements

Following are the general requirements for installing the Oracle JDE Enterprise server.

Make sure the disk space is sufficient for the installation. For more information on space requirements, see: https://support.oracle.com/epmos/faces/DocumentDisplay?id=747323.1

Database server software must be installed on the Oracle JDE Database server.

Database client software must be installed on the Oracle JDE Application server and the Deployment server.

Make sure enough temporary disk space is available for the installers and wizards.

Oracle JD Edwards Specific Install Requirements

Table 13 lists the requirements for installing Oracle JD Edwards.

Perform the installation by running the installers using the 'run as administrator' option.

Table 13 Installation Requirements for the Servers

Server Type
Install Requirements

Oracle JDE Deployment server

Oracle JDE Deployment server for Oracle JDE version 9.0

Application update 2 for 9.0 Oracle JDE applications

Oracle JDE Deployment server tools version 8.98.4.10

Oracle JDE Server Manager for 8.98.4.6

Microsoft Visual Studio 2008 SP1

Microsoft Windows SDK v6.0A

Visual Studio 2005 SP1 runtime libraries on the Deployment server

Oracle JDE Enterprise server

Oracle JDE Enterprise server tools version 8.98.4.10

Oracle Linux 5.8

Oracle JDE Database server

Oracle JDE 9.0 databases in Database server machine

Oracle Linux 5.8

Oracle 11.2.0.3 RAC DB

Oracle JDE HTML server

Oracle Linux 5.8

JDK or jrockit-jdk1.6.0_29-R28.2.2-4.1.0 in Web Server machine before installing the WebLogic server

Oracle WebLogic 10.3.6 in web server machine

Oracle HTTP server

Oracle JDE HTML server 8.98.4.10


Oracle JDE Install Port Numbers

The port numbers used for WLS, Oracle JDE, and HTTP are listed below:

WLS: 7503-7561

Oracle JDE Enterprise Server: 9700

HTTP: 7777

Figure 50 illustrates the workflow to install Oracle JD Edwards on Cisco UCS.

Figure 50 Oracle JD Edwards Installation Workflow

Installing Oracle JDE Deployment Server

The Deployment server is used as a repository of Oracle JD Edwards installation and upgrade software and data artifacts. This section describes the steps for installing the Oracle JD Edwards Deployment server. The installation steps listed in this section are specific to the Oracle JD Edwards 9.0.2 application suite used in conjunction with Oracle JD Edwards tools release 8.98.4.10.

To install the Oracle JDE Deployment server, follow these steps:

1. Download the Deployment server binaries from Oracle eDelivery into a directory, and extract the zip files in place using tools like Winzip or 7zip.

Run the steps listed below, as referenced from Oracle support Document 1310036.1.

Download the installer for the database and the installer for the .NET Framework from Microsoft. In addition, you must download a program from Oracle that runs the two Microsoft installers, passing them the parameters that EnterpriseOne needs to run properly.

2. To download the Microsoft SQL Server Express 2005 SP3 Installer on the Deployment server, follow these steps:

a. Go to the Microsoft Download Center: http://www.microsoft.com/downloads

b. In the search field near the top of the screen, enter SQL server 2005 Express Edition SP3 and click search.

c. Click SQL Server 2005 Express Edition SP3 link.

d. Next to the file called SQLEXPR.EXE, click Download button.

e. Save the file to your Deployment server in this location: <dep_svr_install_dir>\OneWorld Client Install\ThirdParty\SSE

3. The .NET Framework contains new Windows files that applications such as SSE can use. Oracle highly recommends installing at least version 4.0 of the Microsoft .NET Framework. For this procedure, download the installer to the Deployment server as described below.

a. Go to the Microsoft Download Center: http://www.microsoft.com/downloads.

b. In the menu bar at the top of the screen, click Downloads A-Z, and then select N to locate the .NET downloads.

c. Click Microsoft .NET Framework 4 (Web Installer) link.

d. Next to the file called dotNetFx40_Full_setup.exe, click Download button.

e. Save the file to your deployment server in this location: <dep_svr_install_dir>\OneWorld Client Install\ThirdParty\SSE.

4. The DotNetSSESetup.exe program runs the .NET Framework and SSE installers. Locate and download the EnterpriseOne DotNetSSESetup.exe and related file called settings.ini from E-Delivery using the part number and description listed below:

a. V24818-01 Oracle JD Edwards EnterpriseOne Tools 8.98.4.2 - Microsoft SQL Server 2005 Express SP3 Local Database Installer for Deployment server and Development client

b. Place the SSE 2005 SP3 installer SQLEXPR.exe and the .NET Framework 4 installer dotNetFx40_Full_setup.exe onto your Deployment Server in the following directory: <dep_svr_install_dir>\OneWorld Client Install\ThirdParty\SSE

5. Edit the settings.ini file in the following directory: <dep_svr_install_dir>\OneWorld Client Install\ThirdParty\SSE.

The settings.ini file contains settings for installing the .NET Framework and SSE. For completeness, these settings include those for .NET Framework 2.0/SSE 2005 prior to SP3 and for .NET Framework 4.0/SSE 2005 SP3.

6. In the settings.ini file, uncomment the settings for the set of installers that you will use, and comment out (by adding a semicolon at the start of the line) the settings for the set of installers that you will not use.


Note Only one set of installers must be uncommented, and the other set must be commented out.


7. Save the settings.ini file.

8. Run the DotNetSSESetup.exe file as administrator.

9. After the .NET Framework is installed successfully, download the appropriate SQL Server JDBC driver for SQL server 2005 SP3 from MSDN.

10. Place the SQL Server JDBC drivers in a folder named JDBC.

11. Execute the RunInstaller in Admin mode from the deployment server disk1 folder that was unzipped in step 1.

12. Choose the directory for installing the Deployment server, as well as the directory that contains the SQL Server JDBC driver.

13. Install the Deployment Server Installer and then install the Deployment server.

14. Install the Microsoft Visual Studio C Runtime libraries using the vcredist.exe for Visual Studio 2005 SP1.

15. Run the vcredist.exe as an administrator. Click Yes to accept the license agreement, and proceed to Install the Microsoft Visual Studio C runtime libraries.

Figure 51 Accepting the License Agreement Prior to Install

Tools Upgrade on the Deployment Server

After installing the Deployment server you must perform a tools upgrade to 8.98.4.10. To upgrade the tools release, follow these steps:

1. Download the appropriate tools release (8.98.4.10 deployment server in this case) from Update center and run the InstallManager.exe as an administrator. Click Workstation Install link.

2. Install the chosen tools release for the deployment server (8.98.4.10 in this case).

Figure 52 Installing the Oracle JD Edwards Enterprise One Workstation

3. Click Next to proceed with the installation, and click Tools Release radio button to define the setup type.

4. Click Finish to complete the tools upgrade.

Figure 53 Selecting the Tools Upgrade Setup Type

Install Planner ESU

To download and install the Install Planner ESU, follow these steps:

1. Download the appropriate planner ESU from Oracle update center onto the deployment server and unzip the contents.

2. Run the executable as an administrator. The planner is installed using the EnterpriseOne client workstation installation program.

Figure 54 Selecting the Setup Type

3. After the Planner has installed successfully, run the Special Instructions for the planner. Open Windows cmd prompt in the administrator mode.

4. In the cmd prompt, change the directory to the scripts directory of the planner ESU.

Figure 55 Changing the Directory to Scripts

5. Run specialInstrs.bat command on the command prompt.

Figure 56 Running the command SpecialInstrs

6. Choose the SQL Server Express option and type S in the command prompt.

Figure 57 Selecting the SQL Server Express

7. Run the R98403XB XJDE0002 report, which copies control records from the shipped XML database into the local planner databases.

Figure 58 Copying the Control Records

8. Edit jdbj.ini in the OC4J webclient deployment on the deployment server (C:\JDEdwards\E900\SYSTEM\OC4J\j2ee\home\applications\webclient.ear\webclient\WEB-INF\classes).

9. Comment out as400 and oracle drivers with #

Figure 59 Commenting the AS400 and Oracle Drivers

10. In the Connection Pooling Settings, increase the connection Timeout in jdbj.ini to 3600000.

Figure 60 Incrementing the Connection Timeout

11. Run report R98403XB, which copies the control records from the shipped XML database into the local planner databases.

12. Open ActiveConsole in the administrator mode.

13. Sign in using the Oracle JDE user credentials using Environment-JDEPLAN, Role-ALL.

14. Type BV in fastpath text box to open the batch versions.

15. Type in R98403XB in Batch Application text box and click search.

Figure 61 Searching for the Batch Application

16. Select the Copy Control Tables to planner environment batch version.

Figure 62 Selecting the Batch Application version

17. Set the processing options for the selected version of the UBE. Choose Copy Control Tables to planner environment> Row> Processing Option to launch the Processing Option window.

Figure 63 Setting the Processing Options

18. Enter the path to the planner data folder, which contains the planner ESU folder and enter target environment name. Click OK to proceed.

Figure 64 Defining the Path to the Planner Data Folder

19. Check the Data Selection checkbox, and click submit icon.

Figure 65 Submitting the Data Selection

20. Click the On Screen radio button to define the report output destination.

Figure 66 Selecting the Report Destination

21. After the report download completes, verify it for any errors.

Figure 67 Verifying the Report for Errors

Enterprise Server Install

In this setup, the Oracle JDE Enterprise server and Oracle JDE Database server are on different machines. The platform pack software for these two Oracle JDE servers must be installed separately onto the two machines. Download the relevant platform pack software (part IDs V12449-01 through V12452-01) from Oracle eDelivery and extract its contents by unzipping the files. Note that all zip files must be extracted into a single folder, so that sub-folders such as disk1 and disk2 are created.

To install the Enterprise server, follow these steps:

1. After extracting the zip files, open an xWindows session, change to the disk1 directory, and run install.sh.

Figure 68 Running the install.sh Command

2. Launch the InstallShield wizard and Click Next.

3. Choose the directory where the Oracle JDE Enterprise server binaries must be installed.

Figure 69 Defining the Directory

4. Click Install EnterpriseOne radio button. Click Next.

Figure 70 Selecting the Install Option

5. Click Custom radio button to define the installation type.

Figure 71 Selecting the Installation Type

6. Check the Oracle JD Edwards EnterpriseOne Enterprise Server checkbox, and all the components. Click Next.


Note The DB components are not chosen since DB is installed on a different server.


Figure 72 Selecting the Features to be Installed

7. Type the Database Server Name and Database Type in the text boxes. Select the Database creation options. Use the default password. Click Next.

Figure 73 Defining the DB Attributes

8. Set the ini for the appropriate number of users. Click Yes radio button. Click Next.

Figure 74 Defining the Concurrent User Number

9. Enter the Oracle DB SID and the name of the UNIX account for Oracle DB. Click Next.

Figure 75 Naming the Database Instance

10. Verify the selections used for installing the Enterprise server in the summary page. Click Next.

Figure 76 Summary of the Enterprise Server Features

11. Click Finish.

Database Server Install

The Platform Pack binaries used for the Enterprise server are copied onto the DB server.

1. Choose the directory where the Oracle JDE Enterprise server binaries are to be installed. Click Next.

2. Click Install EnterpriseOne radio button. Click Next.

3. Click Custom Install radio button. Click Next.

4. Check the Oracle JD Edwards EnterpriseOne Database Server checkbox, and all the components. Click Next.

Figure 77 Selecting the DB Server Features

5. Type the Database Server Name and Database Type in the text boxes, select the Database creation options, and use the default password in the Database Options window. Click Next.

Figure 78 Defining the DB Options

6. Type the UNIX owner and the group owning the Oracle install, connect string, sysadmin user and password in the corresponding text boxes. Define the directory.

Figure 79 Defining the Database Instance References

7. Click Yes radio button to install the database. Click Next.

8. Verify the selections made in the summary page. Click Next to complete installation. Click Finish to exit.

Figure 80 Summary of the Database Server Features

Installation Plan

To install the Installation Plan, follow these steps:

1. Open the Oracle JDE admin console on the Deployment server. Log in as an Oracle JDE user in the JDEPLAN environment.

2. In the JD Edwards Solution Explorer window, type GH961 in the Fast Path text box to go to the Installation Plan.

3. Right-click Custom Installation Plan. Choose Prompt For> Values.

Figure 81 Selecting the Values for Installation Plan Prompt

4. In the Processing Options window, type 2 in the Prompt Mode text box and click OK.

Figure 82 Defining the Verbose Mode

5. Click Process Mode tab. Type 1 in the Installation text box and click OK.

Figure 83 Defining the Process Mode

6. Click Default tab. Type 1 as prompt value for all default options at runtime and click OK.

Figure 84 Defining the Default Mode Values

7. Click Status Change tab. Type the values in the text boxes. Click OK.

Figure 85 Defining the Detail Status

8. Click Completion tab. Type 2 (run automatically) in finalize and validate plan text boxes and click OK.

Figure 86 Defining the Process after Completion

9. Click Replication tab. Type 1 (Prompt for option at runtime) in all the text boxes and click OK.

Figure 87 Defining the Values for Replication

10. Click Packages tab. Type 1 (Prompt for pushing at runtime) in the text box and click OK.

Figure 88 Defining the Package Push Options

11. Click Add to add the custom installation plan.

Figure 89 Adding the Custom Installation Plan

12. Enter the name, description, and status in the text boxes. Click Install radio button.

Figure 90 Define the Name and Description for the Plan

13. After adding the plan, define a location that must be associated with the Plan. Click OK in the Location message box.

Figure 91 Adding the Location

14. Type the location properties, and click OK.

Figure 92 Defining the Location Parameters

15. For various Oracle JDE servers, type the Machine Name, Description, Release, and Primary User in the text boxes. Define the Deployment Server Share Path.

Figure 93 Defining the Oracle JDE Servers Details

16. Type Enterprise server installation path and description and the name of deployment server.

Figure 94 Defining the Installation Path and Server Details

17. Choose to go through DataSource revisions so as to confirm default data source information.

Figure 95 Confirming the Data Source

18. Verify the installation information for the HTML server in the HTML tab. Type the ports, server url, installation path, and the Deployment server name.

Figure 96 Defining the Information for HTML Server

19. Verify the database specific installation plan information, as database resides on separate server. Click OK to define Data Server properties.

Figure 97 Defining the Data Server on Separate Server

20. Type Datasource type, Platform, Database Server Name and Database Name and ID.

Figure 98 Defining the Name and Type

21. Verify the Data Dictionary data source configuration values- platform, server name and data source type for all the datasources.

Figure 99 Verify all Data Sources

22. Check the Default Environments, and Default Data Load check boxes to configure valid environments and load relevant environment data.

Figure 100 Defining the Environment Data

23. Click No in the Location message box, as another location is not added.

Figure 101 Completion of Plan

24. The Information Prompt message box confirms the plan is finalized. Click On Screen radio button to choose the report destination. Click OK.

25. Run the Planner Validation Report, R9840B, and validate all records.

Installation Workbench

To install the Oracle JDE Workbench, follow these steps:

1. Execute the custom plan created previously.

2. Sign into JDEPLAN on Deployment server, and fastpath to GH961.

3. Choose Installation workbench >Prompt For>Values. The Processing Options window opens.

4. In the Process tab, type 1 for unattended workbench mode, and 60 in the Plan Detail Status text box. Click OK.

Figure 102 Defining the Workbench Mode

5. Search for the available plan status.

Figure 103 Search the Plans Available

6. Select the custom plan created in the Installation Plan section.

Figure 104 Selecting the Plan


Note For the unattended workbench mode, all workbenches are completed without any intervention, since no task breaks were set.


7. After the Installation Plan is completed, the status changes from Validated to Installed.

Figure 105 Verifying the Plan Status

Change Assistant

To download and install the Change Assistant, follow these steps:

1. Download the Change Assistant from Oracle update Center onto the Deployment server.

2. Install the Change Assistant on the Deployment server.

Figure 106 Downloading the Change Assistant

Baseline ESU Install

To install the Baseline ESU, follow these steps:

1. On the Deployment server, sign into the Change Assistant using Oracle support credentials.

2. Choose Search for Packages > JD Edwards > Electronic Software Update > 9.0 > 9.0 baseline ESUs. Choose the unattended mode and click OK.

Figure 107 Selecting the Unattended Mode

3. Wait for the ESUs to be downloaded and applied. After the baseline ESUs are applied, the summary status for all the ESUs is displayed as Succeeded.

Figure 108 Downloading the Baseline ESU

Figure 109 Summary of the ESU Downloaded


Note Apply the special instructions for the baseline ESUs. There might be variations in the special instructions due to localization requirements that may not be relevant to your installation, so it is essential to review them.


UL2 Install

The Oracle JDE DIL kit, which represents a standard customer workload for Oracle JD Edwards applications, requires the Oracle JDE application level to be at 9.0 update level 2. Therefore, UL2 should be applied to the installed base application.

To install the UL2, follow these steps:

1. Download the UL2 file from eDelivery (Oracle software delivery cloud) and extract the files and run them as administrator.

2. Click 9.0 Update 2 Installation link in the Oracle JD Edwards Install Manager window.

Figure 110 Clicking the UL2 Update Installation

3. The Client Workstation Setup window launches. Click Next.

4. Click Next after disk check completes. Click Finish.

5. Click UL2 radio button to select the Setup type. Click OK in the installation message box.

Figure 111 Successful Installation of the UL2

6. Perform the software update after logging into the active console on the Deployment server, and apply UL2 to the chosen environments.

Figure 112 Defining the Software Update Parameters

Figure 113 Select the Environment (DV900)

Figure 114 Uncheck the Create OMW Project and Package Assembly

Figure 115 Completion of the Software Update

HTML Server Install

Download the WebLogic Server (WLS) 10.3.6 binaries from Oracle eDelivery, and install the WLS server on the HTML server. For more information on installing Oracle WebLogic server 10.3.6, see:

http://docs.oracle.com/cd/E21764_01/doc.1111/e14142.pdf

Create a domain. Configure the admin server port to 7501 as detailed in the above link.

After installing the WLS, perform the steps described in this section to create a cluster on WLS and then deploy the Oracle JDE HTML server on it. A cluster defines a group of WebLogic servers that work together to improve scalability and reliability.

Configuring the Cluster

To configure a cluster, follow these steps:

1. Log into the admin console using the user and password configured during WLS install.

Figure 116 WebLogic Admin Console Page

2. Choose Environment > Cluster > Lock&Edit > New.

3. Type Cluster E1C1 in the Name field.

Figure 117 Naming the Cluster

4. Click OK to finish creating the cluster.

Figure 118 Summary of the Clusters Created

5. Click Activate Changes, and then select the machines that will be configured in the current WebLogic server domain.

Figure 119 Create a new Machine entry

Figure 120 Summary of Machines Added

6. Type the Cluster Node name and other properties for the new server. Click Next.

Figure 121 Defining Properties for the New Server

7. Select the machine and cluster from the respective drop down menus.

Figure 122 Configuring the General Features

8. Sign into Server Manager, and click on Create/Register A Managed Instance of type HTML server. Use the port configured in previous step.

Figure 123 Defining the Instance Properties

9. Type JDV900 in the Bootstrap Environment text box, and type the values for the related configuration items. Click Continue.

Figure 124 Validating the Configuration Items

10. Click Create Instance. On completion, you will be redirected to the management page for the newly created instance.

Figure 125 Completing the Creation of the Managed Instance

Oracle HTTP Server Installation

Install the Oracle HTTP server on the Oracle JDE HTML server. The detailed steps for installing the Oracle HTTP server on the Oracle JDE HTML server are beyond the scope of this document.

For information on how to install Oracle JRF and Oracle HTTP server, see:

Oracle JRF: http://www.oracle.com/technetwork/developer-tools/adf/documentation/index.html

Oracle HTTP Server: http://docs.oracle.com/cd/E23943_01/doc.1111/e14260/overview.htm

Oracle JDE User Creation

To create the Oracle JDE user, log in to the DEP900 environment on the Deployment server and follow these steps:

1. Log in to Oracle JDE using your login credentials.

2. In the JD Edwards Solution Explorer type P980001 in the Fast Path text box.

3. Click Add to add users.

Figure 126 Adding the Users

4. Type the Oracle JDE user and set the password. Type Default in the Data Source text box.

Figure 127 Defining the User Credentials

5. To verify the user created, search for the user using the name.

Figure 128 Searching for the User Created

Figure 129 Verifying the User

6. Type P98OWSEC in the Fast Path text box.

7. Search for Oracle JDE user created in previous steps.

8. Configure system user mapping and password management.

Figure 130 Setting the Password Management Options

Figure 131 Modifying the Password Change Frequency

Full Package Build

To build the Full package, follow these steps:

1. Login into the Oracle JDE and open the Active Console.

2. Type GH9083 in the Fast Path text box, and fastpath to the Package and Deployment Tools menu.

3. Select the Package Assembly from the Package and Deployment Tools menu.

Figure 132 Selecting the Package Assembly

4. Click Add to create a new package assembly.

Figure 133 Defining the new Package Assembly

5. The Package Assembly Director opens. Click Next.

Figure 134 Launching the Package Assembly Director

6. Type package name, description and pathcode. Click Express radio button.

Figure 135 Defining the Package Information

7. Verify the Package Assembly properties. Click End.

Figure 136 Summary of the Package Features

8. Select the package and click Activate. Click Define Build.

Figure 137 Selecting the Package Build

9. Click Next.

Figure 138 Proceeding to Package Building

10. Check the Client, the Server, and the Share Specs checkboxes in the Build Location area.

Figure 139 Defining the Package Build Location

11. Click Next to get a list of available Enterprise servers. Select the Enterprise server.

Figure 140 Selecting Enterprise Servers

12. Verify the server specifications. Click End.

Figure 141 Server Build Specifications

13. Select the package to add, and click Sequence.

Figure 142 Adding the Package Sequence

14. Click Active/Inactive.

Figure 143 Activating the Package Build

15. Click Submit Build.

Figure 144 Submitting the Package Build Defined

16. Click the On Screen radio button to print the report to the screen.


Note The full package build usually takes 4 to 6 hours to complete. After the full package is built, you can deploy it on the enterprise server.


Summary

The preceding sections described the approach taken to install Oracle JD Edwards 9.0.2 in a physical N-tier Oracle Linux environment with a two-node Oracle RAC database. The benchmarking effort required a WebLogic cluster fronted by an Oracle HTML server to load balance users among the cluster nodes. Check the Oracle support documents for the latest support statements in the various Oracle JDE MTRs, and for recently released patches.

Oracle JD Edwards Performance and Scalability

Workload Description

The Oracle JD Edwards Day in the Life (DIL) kit captures how a typical customer interacts with the Oracle JDE system during the course of a typical day. The DIL kit accomplishes this with a set of scripts for 17 interactive applications as well as a set of Oracle JDE reports (UBEs) that process a specific set of data included in the DIL database. Because this standard set of scripts and UBEs is available, various hardware vendors, including Cisco, have characterized Oracle JDE implementations on their hardware platforms to deliver a value proposition for prospective customers.

The DIL kit interactive application workload skews more towards SRM applications, which feature prominently in application workloads used by the large Oracle JDE customer base in the mid-scale manufacturing industry segment. The UBE workload is also representative of the type of reports that are run by customers in this segment, though it does incorporate reports that cater to a larger audience of customers.

The DIL workload incorporates a good mix of applications ranging from multiple line items sales order and purchase order entries, coupled with light weight applications like supplier ledger enquiry. Similarly, the UBEs range from long running MRP processing and general ledger post reports to the short running company constants and business unit reports.

The LoadRunner scripts for the Oracle JDE interactive applications in the DIL kit measure the response times for certain key, representative transactions, which are reported in this document. UBE performance is measured as the total time taken to generate each report, based on the timings recorded in the Oracle JDE logs for those UBEs.

Test Methodology

The interactive and batch versions of the Oracle JDE E1 DIL kit were run to capture the variation in end-user response time and batch execution rate against important system characteristics, such as CPU, memory, and I/O, across the servers in the test system. All four components of the Oracle JDE E1 deployment (HTML server, Enterprise server for interactive users, Enterprise server for batch, and Oracle database server) were monitored through the nmon Linux monitoring tool. NetApp OnCommand System Manager 2.1 was used to measure the total IOPS and latency on the NetApp FAS 3270 cluster.

Test Scenarios

The test scenarios included a broad range of Oracle JDE applications, chosen to closely mimic how a potential customer would use Oracle JD Edwards. The documented response times and the best practices for deploying the Oracle JDE E1 server offer a good indication of how this solution can be expected to perform in a customer's production environment.

Cisco endeavors to truly stress the hardware configuration, as well as provide customers with scenarios that offer a mix of interactive and batch processes running concurrently. Because batch processes are typically resource hungry and can impact the responsiveness of Oracle JDE interactive applications, Cisco devised various scenarios to test and record the impact of running a mixed batch workload on the interactive performance of Oracle JDE applications.

Following are the test scenarios executed for the Oracle JDE deployment on Cisco UCS:

Interactive Scaling: Scaling of Oracle JDE interactive users from 1000 to 15,000 concurrent users.

Individual UBEs: Execution of individual long running UBEs on Oracle JDE E1 server for batch/UBE processes.

Only Batch: Execution of batch/UBE processes on Oracle JDE Enterprise server without interactive applications.

Interactive with Batch on separate physical servers: Concurrent execution of interactive users on Oracle JDE E1 server for interactive applications and a mix of batch/UBE processes on Oracle JDE E1 server for batch/UBEs. In this scenario, interactive applications and UBEs were configured to run on separate servers, and observations on the scaling characteristics of this scenario were recorded. Around 5000 concurrent Interactive users were run on the Enterprise server with a mix of UBEs running on a separate Enterprise server.

Interactive Workload Mix

The Oracle JDE E1 DIL kit is a set of 17 scripts that include Oracle SCM, SRM, HCM, CRM, and Financials applications.

Table 14 shows the transaction mix used for the Oracle JD Edwards interactive test with the Oracle JDE E1 DIL kit.

Table 14 Workload Mix

Oracle Application                  Percentage Weight
Financial Management System         20
Supplier Relationship Management    24
Supply Chain Management             49
Customer Relationship Management    5
Human Capital Management            2
Total                               100
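The weighted average response times reported later (for example, in Figure 145) combine per-application response times using the Table 14 percentages. The short sketch below shows that calculation; the percentage weights are taken from Table 14, but the per-application response times in it are hypothetical placeholders, not measured results.

```python
# Sketch: weighted average response time from the Table 14 workload mix.
# The percentage weights come from Table 14; the per-application response
# times below are hypothetical placeholders, not measured values.
workload_mix = {
    "FMS": 20,   # Financial Management System
    "SRM": 24,   # Supplier Relationship Management
    "SCM": 49,   # Supply Chain Management
    "CRM": 5,    # Customer Relationship Management
    "HCM": 2,    # Human Capital Management
}

def weighted_avg_response(mix, response_times):
    """Weight each application's response time by its share of the mix."""
    total_weight = sum(mix.values())
    return sum(mix[app] * response_times[app] for app in mix) / total_weight

# Hypothetical per-application response times in seconds.
rt = {"FMS": 0.10, "SRM": 0.15, "SCM": 0.12, "CRM": 0.20, "HCM": 0.18}
print(round(weighted_avg_response(workload_mix, rt), 4))
```

The weights sum to 100, so the result is a true percentage-weighted mean of the per-application times.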


Interactive with Batch Test Scenario

In this scenario, a combination of short-running and long-running UBEs was executed with a constant load of 5000 concurrent interactive users. The batch jobs and the interactive users were executed on different Enterprise servers with a common Oracle RAC database. The batch workload mix executed is listed in Appendix A - Workload Mix for Batch and Interactive Test. This scenario helps customers analyze how interactive user response time varies when batch jobs are executed during working hours, and demonstrates that Cisco UCS servers keep the degradation in interactive response time minimal during batch execution.

Interactive Workload Scaling

The Interactive Scaling test scenario was executed to determine the variation in end-user response time as the number of interactive users increases from 1000 to 15,000 concurrent users. System resource utilization (CPU, memory, and disk I/O) was captured across all three tiers: the HTML server, the Oracle JDE E1 server, and the Oracle database server. For up to 5000 concurrent interactive users, a single HTML server and E1 server with a two-node Oracle RAC was used. Beyond 5000 concurrent users, two HTML and two E1 servers with a common two-node Oracle RAC were used. This deployment enables high interactive user scaling and also demonstrates a failover setup for the HTML and E1 servers.

Figure 145 illustrates the weighted average response time for 1000 to 15,000 interactive users.

Figure 145 Oracle's JDE E1 DIL Kit Weighted Average Response Time

As illustrated in Figure 145, the Oracle JDE E1 deployment on a Cisco UCS blade server infrastructure scales exceptionally well, with a response time of around 0.13 seconds while scaling from 1000 to 10,000 concurrent users. The response time increases to around 0.18 seconds when scaled to 15,000 users.

User Response Time

The user response time is captured at the LoadRunner Controller for all 17 interactive Oracle JDE E1 DIL test scripts. The five important Oracle JD Edwards applications measured using the Oracle JDE E1 DIL Kit were:

Financial Management System (FMS)

Supplier Relationship Management (SRM)

Supply Chain Management (SCM)

Customer Relationship Management (CRM)

Human Capital Management (HCM)

The transaction mix for these applications is detailed in the Interactive Workload Mix section.

Figure 146 shows the weighted average response time for all 17 Oracle JDE E1 DIL kit scripts and for the five Oracle JD Edwards applications.

Figure 146 Oracle JDE E1 DIL Kit Weighted Average Response Time

As illustrated in Figure 146, the average response time of each of the Oracle JD Edwards applications for the interactive user tests remained below 0.56 seconds while scaling from 1000 to 15,000 concurrent users.

CPU Utilization

The components of each tier (HTML server, Enterprise server, and the two-node Oracle RAC database) are deployed on separate Cisco UCS B200 M3 Blade Servers, each equipped with two Intel Xeon E5-2690 processors. Figure 147 illustrates the average CPU utilization across the 3-tier Oracle JD Edwards technology stack.

Figure 147 Oracle JDE E1 CPU Utilization

Observations

The following observations were made during the CPU utilization tests:

The maximum CPU utilization for Oracle JDE Enterprise server is around 34 percent for 15,000 interactive users.

Each of the Oracle RAC nodes is at similar average CPU utilization of around 34 to 35 percent.

The average CPU utilization recorded on each of HTML servers is around 16 to 17 percent.

CPU utilization across all tiers gradually increased, reflecting the linear scalability of the workload.

Memory Utilization

Memory utilization for the test with 1000 to 15,000 Interactive users across the 3-Tier Oracle JD Edwards technology stack is illustrated in Figure 148.

Each of the Oracle JDE HTML servers and the two-node Oracle cluster is deployed on separate Cisco UCS B200 M3 servers, each equipped with 256 GB of memory. The Oracle JDE E1 server, also deployed on a Cisco UCS B200 M3 server, is installed with 128 GB of memory. The average memory utilization on each of the Oracle database nodes is between 144 GB and 192 GB for loads ranging from 1000 to 15,000 concurrent users. At 15,000 concurrent interactive users, the average memory utilization on the Oracle JD Edwards HTML server is around 110 GB, whereas on the Oracle JDE E1 server it is just around 68 GB.

Figure 148 Oracle JDE E1 Memory Utilization

Observations

The following observations were made during the memory utilization tests:

Memory utilization on the Oracle JDE HTML server is relatively high with a maximum of around 110 GB for 15,000 users. This is due to the fact that for 15,000 concurrent users, around 30 JVM instances with heap size of 3 GB each were configured in Oracle WebLogic. These instances were load balanced through the Oracle HTTP server, which is installed on the same HTML server.

For lower user loads, the Enterprise server configuration was set so that memory scaled linearly. As higher user loads were introduced, the Oracle JDE E1 configuration was further optimized through the Oracle JDE E1 kernel processes to provide ample memory for running additional Oracle JDE E1 processes, such as UBE processes.

The Oracle RAC database is configured with a 110 GB SGA target and a 40 GB PGA target.
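The HTML-tier memory figure above is consistent with a simple back-of-the-envelope estimate: heap per JVM times the JVM count, plus some non-heap overhead per JVM. In the sketch below, the JVM count and heap size come from the observation above; the 0.5 GB per-JVM non-heap overhead is an assumed figure, not a measured one.

```python
# Sketch: rough HTML-tier memory estimate from the JVM layout described
# above (30 WebLogic JVM instances with a 3 GB heap each for 15,000 users).
# The 0.5 GB per-JVM non-heap overhead is an assumption for illustration.

def html_tier_memory_gb(jvm_count, heap_gb, overhead_gb=0.5):
    """Total memory = JVM count x (heap plus assumed non-heap overhead)."""
    return jvm_count * (heap_gb + overhead_gb)

estimate = html_tier_memory_gb(jvm_count=30, heap_gb=3.0)
print(estimate)  # 30 * 3.5 = 105.0 GB, close to the ~110 GB observed
```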

I/O Performance

The NetApp FAS 3270 A is configured as the storage system for each of the three components of the Oracle JDE E1 deployment: the HTML server, the Oracle JDE E1 server, and the Oracle RAC database. The Cisco UCS servers were configured to SAN boot from NetApp LUNs, which unlocks the full capabilities of Cisco UCS stateless computing. The Cisco UCS stateless configuration allows migration of Cisco UCS service profiles from a failed physical server to a standby server.

The Oracle JD Edwards HTML and E1 servers were installed on NFS volumes. The data files, redo logs, and OCR/voting disk files of the Oracle RAC database were also created on NFS volumes.

Figure 149 illustrates the total disk I/O performance captured through NetApp OnCommand System Manager.

Figure 149 Oracle JDE E1 Average IOPs on NetApp FAS 3270

Observations

The following observations were made about the I/O performance tests:

The number of I/O operations per second (IOPS) generated on the NetApp FAS 3270 scaled linearly, reflecting the gradual increase in the user count.

The IOPS count on the HTML and Enterprise servers is very low.

The response time observed from the NetApp OnCommand System Manager is less than three milliseconds for the duration of the test.

The CPU utilization on each of the NetApp FAS 3270 controllers is around 25-26% for 15,000 concurrent interactive users.

The NetApp FAS 3270 is capable of handling a higher number of IOPS than reflected in this graph. The IOPS shown reflect only the load driven by this Oracle JDE DIL kit workload.

Individual UBEs

Batch processing is another critical activity in an Oracle JD Edwards environment, and it is important to test and determine the execution time of long-running individual UBE processes. This helps in proper sizing of batch servers for an Oracle JD Edwards deployment. In a real-world Oracle JD Edwards deployment, several long-running UBEs are run after business hours, and they must complete within a fixed window of time. Table 15 summarizes the performance characteristics of these UBEs, with a brief description of what each UBE does and the dataset it operates on. These long-running reports were run on a fixed set of data with a standard set of processing options against the Oracle JDE DIL database.

Table 15 Long Running UBE Execution Time

UBE       Time Taken (mm:ss)    Description
R43500    14:37     This Purchase Order Print UBE processed records in the F4311 table (Purchase Order Detail File) with a status code of 280 in one business unit; 255,471 records were processed in this data range.
R3483     12:02     This MRP UBE processed 50,000 records in F4102 using one business unit. This is a night-only process.
R31410    20:30     The Work Order Processing UBE acts on the document invoice numbers in the Work Order Master File (F4801); 28,751 records were processed with the data selection.
R09801    5:47      This General Ledger Post UBE acts on records in the Batch Control Records table (F0011); the data selection used batch status A and batch type G, and processed 990,099 records.
R31802a   19:12     The Manufacturing Account Journal UBE acted on document invoice numbers in the Work Order Master File (F4801) with a status code of 95, and processed 1,501 records for our data selection criterion.
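From the record counts and elapsed times in Table 15, a per-UBE processing rate can be derived for sizing comparisons. The figures below come from the table above; the helper function itself is illustrative, not part of any Oracle tooling.

```python
# Sketch: per-UBE processing rate derived from Table 15 (records processed
# and elapsed time in mm:ss). Record counts and timings are from Table 15.

def records_per_second(records, mmss):
    """Convert an mm:ss elapsed time to a records-per-second rate."""
    minutes, seconds = mmss.split(":")
    elapsed = int(minutes) * 60 + int(seconds)
    return records / elapsed

ubes = {
    "R43500":  (255_471, "14:37"),  # Purchase Order Print
    "R3483":   (50_000,  "12:02"),  # MRP processing
    "R31410":  (28_751,  "20:30"),  # Work Order Processing
    "R09801":  (990_099, "5:47"),   # General Ledger Post
    "R31802a": (1_501,   "19:12"),  # Manufacturing Account Journal
}

for name, (records, elapsed) in ubes.items():
    print(f"{name}: {records_per_second(records, elapsed):.0f} records/sec")
```

Rates computed this way vary widely across UBEs, which is why batch-server sizing should consider the specific report mix rather than a single aggregate throughput number.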


Only Batch Execution

Some customers run only Oracle JDE batch processing for various business functions during a dedicated batch window, and it is imperative that Cisco validates and provides enough information for such customers to make an informed decision regarding their deployment of Oracle JDE on Cisco UCS. For these customers, a test scenario was configured in which a high volume of short-running UBEs as well as three long-running UBEs were executed, and the impact was measured in terms of CPU and memory consumed on the Enterprise and database servers. This test scenario revealed that in the absence of a large concurrent interactive user load, the Oracle JDE system can handle far more throughput in terms of UBE completions per minute.

The test successfully achieved 802 UBEs/minute. The average IOPS measured on the NetApp FAS 3270 is around 12,200. The CPU utilization on each of the NetApp FAS 3270 controllers is around 30 to 31 percent. A good strategy for real-world Oracle JD Edwards customers is to schedule very high volumes of UBEs during off-peak hours, when minimal interactive users are logged in to the Oracle JDE system.

Figure 150 illustrates the CPU and memory utilization on Oracle JDE Enterprise server and two-node Oracle RAC database during the execution of Only Batch Processes.

Figure 150 Resource Utilization for Only Batch Execution

Interactive with Batch on Separate Physical Server

As a best practice, Cisco executed a test scenario to determine the effect on interactive user response time when a mix of short-running and four long-running UBEs is executed, with the Oracle JDE interactive server and the Oracle JDE batch server deployed on two separate Cisco UCS B200 M3 servers. The two-node Oracle RAC cluster is deployed on separate Cisco UCS B200 M3 servers. The Oracle RAC cluster is common to both the Oracle JDE interactive and batch servers, thereby maintaining the same database schema for interactive and batch processes.

The number of interactive application users was fixed at 5000, and various batch loads, ranging from low to high, were run to measure the impact on interactive application response times.

For more information on workload mix, see Interactive with Batch Test Scenario.

User Response Time

As shown in Figure 151, the weighted average response time for 5000 concurrent interactive users is around 0.12 seconds for batch loads ranging from 60 UBEs/min to 210 UBEs/min. No degradation in response time was observed for concurrent UBE loads of up to 60 UBEs/minute. The medium and high UBE loads of 114 and 210 UBEs/min caused minimal degradation in response time for 5000 interactive users, which increased from 0.1 seconds to 0.12 seconds. This is attributed to the database server being shared between the Oracle JDE interactive and batch servers.

Figure 151 Response Time for 5000 Users

CPU Utilization

The HTML server, the Oracle JDE Enterprise server for interactive applications, the Oracle JDE Enterprise server for batch, and the two-node Oracle RAC cluster were deployed on separate Cisco UCS B200 M3 Blade Servers configured with two Intel Xeon E5-2690 processors.

Figure 152 illustrates the average CPU utilization across all four Oracle JD Edwards tiers.

Figure 152 Oracle JDE E1 CPU Utilization for Interactive

Observations

The following observations were made during the CPU utilization tests:

Average CPU utilization on the HTML server and the Oracle JDE Enterprise server for interactive remained almost steady throughout the test. This is expected, as only the workload on the UBE batch server was increased.

CPU utilization on the two-node Oracle RAC server increased from 12 to 13 percent with 5000 interactive users and no UBE load to around 24 to 25 percent with the same 5000 interactive users and a high UBE load of 210 UBEs/min.

For low-to-high batch load, the CPU utilization on Oracle JDE Batch server varied from 8 to 22 percent.

Memory Utilization

In this test scenario, the batch server is deployed on a separate Cisco UCS B200 M3 server, which had the same physical memory configuration of 128 GB as used for the Oracle JDE Enterprise server for interactive applications. Figure 153 illustrates the memory utilization for a batch load with 5000 interactive users.

Figure 153 Memory Utilization for Interactive with Batch

Observations

The following observations were made during the memory utilization tests:

Because the batch server is deployed on a separate server and the interactive load is constant at 5000 users, it is expected that memory utilization on the HTML server and the Oracle JDE Enterprise server for interactive is almost the same as for 5000 users without the UBE load.

Memory utilization on the Oracle RAC nodes increased marginally from 152 GB to 154 GB with a low batch load of 60 UBEs/min, and to around 162 GB for a batch load of 210 UBEs/min.

Memory utilization on the batch server is relatively low at around 10 GB, because only 40 call object kernels were configured on the Oracle JDE batch server. This also confirms that the batch server is CPU-intensive, not memory-intensive.

I/O Performance

Figure 154 illustrates the total average IOPS measured on the NetApp FAS 3270 with the help of sysstat and NetApp OnCommand System Manager.

Figure 154 Average IOPs on FAS 3270 A for Interactive with Batch

Observations

The following observations were made during the I/O performance tests:

The number of I/O operations per second (IOPS) generated on the NetApp FAS 3270 increased from around 1000 IOPS for 5000 users with no batch load to around 1700 IOPS for 5000 users with the high batch load of 210 UBEs/min.

Between 90 and 95 percent of the total IOPS were generated by the Oracle RAC database server. This is expected, as the executed UBEs significantly stressed the Oracle database data and log files.

The response time observed from the NetApp is less than 3 milliseconds for the duration of the test.

The CPU utilization on each of the controllers for 5000 concurrent interactive users with 210 UBE/minute batch load is around 35 to 40 percent.

Best Practices & Tuning Recommendations

Oracle JD Edwards deployed on FlexPod is configured for medium- to large-scale ERP deployments. The Oracle JDE DIL kit benchmark demonstrated exceptional performance for Oracle JDE interactive users and Oracle JDE batch processes, both when executed in isolation and when run concurrently. The following sections elaborate on the tuning parameters and best practices applied across the hardware and software stack.

System Configuration

All tests were executed using Cisco UCS B200 M3 servers and NetApp FAS 3270 A with Data ONTAP 8.1.2.

All Cisco UCS Blade Servers were attached to a BIOS policy. The BIOS policy is a feature of Cisco UCS service profiles that enables users to apply identical BIOS settings across all deployed servers. This ensures a consistent configuration, and administrators need not interrupt the boot process on each server to alter BIOS settings. The BIOS policy configured for the Oracle JDE deployment is shown in Figure 155.

Figure 155 BIOS Setting for CPU Performance

Figure 156 BIOS Setting for Physical Memory Performance

Oracle RAC Configuration

Several settings were changed on the Oracle RAC cluster to support the high load that the RDBMS handles. The following are the important tuning parameters:

The data files and redo logs were split between the two NetApp controllers A and B. The IOPS on each controller were analyzed, and the data files were split between controllers A and B such that each controller receives a similar number of IOPS.

To understand the memory required for high interactive and batch workloads, the AWR report was analyzed, and the SGA and PGA targets were set to 110 GB and 40 GB, respectively.

In general, an Oracle JD Edwards deployment consumes around 150 to 200 database processes per 100 concurrent interactive users. For the high interactive user tests, the process limit was increased to 20,000.

At high interactive user concurrency, a very high UDP socket count was seen on the Oracle database server due to Oracle RAC interconnect traffic. The root cause of this port exhaustion is Oracle RAC Bug 13951907 - SKGXP SHOULD REUSE UDP PORT ACROSS MULTIPLE IPS. Apply patch 10071792 to resolve this issue.

Each of the data aggregates on NetApp controllers A and B is built from 48 15,000 RPM SAS drives. These aggregates were configured with RAID-DP.
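The rule of thumb cited above (roughly 150 to 200 database processes per 100 concurrent interactive users) can be turned into a starting range for the Oracle processes parameter. The sketch below is illustrative only; the ratio is from the text, and any result should be validated against AWR data rather than taken as a fixed answer.

```python
# Sketch: estimating the Oracle "processes" limit from the rule of thumb
# above (about 150 to 200 database processes per 100 concurrent interactive
# users). The ratio comes from the text; the helper is illustrative.

def process_range(concurrent_users, low=150, high=200):
    """Return a (low, high) estimate of database processes needed."""
    per_user_low = low / 100
    per_user_high = high / 100
    return (int(concurrent_users * per_user_low),
            int(concurrent_users * per_user_high))

print(process_range(10000))  # (15000, 20000)
print(process_range(5000))   # (7500, 10000)
```

Under this rule of thumb, 10,000 concurrent users would call for a limit on the order of 15,000 to 20,000 processes, which is in the same range as the processes = 20,000 setting used in this deployment.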

Table 16 lists some of the important Oracle RAC configuration parameters.

Table 16 Oracle RAC Configuration Parameters

Name                       Value
memory_max_target          0
memory_target              0
parallel_servers_target    512
pga_aggregate_target       40 GB
sga_target                 110 GB
processes                  20000
session_cached_cursors     6000
sessions                   30048


WebLogic Server Configuration

The JRockit Java Virtual Machine (JVM) is used with WebLogic 10.3.6. A vertical cluster of up to 20 JVMs is created, and the Oracle HTTP server is used to distribute the load among the nodes of the vertical cluster.

The following are the important configuration details for the WebLogic server:

For optimal performance, about 250 to 300 Oracle JDE interactive users were hosted per cluster node/JVM.

The minimum and maximum heap size for each node is set to 3 GB.

The garbage collection policy is set to gencon, since the pattern of object creation and destruction on the Oracle JDE HTML server indicated that a large number of short-lived objects were created and destroyed frequently.

The nursery size is set to 512 MB.

The number of gcthreads is set to 6, Java Flight Recorder is switched off for formal runs, and a minimum TLA size of 4 KB is chosen, with a preferred size of 1024 KB.
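The guidance above (about 250 to 300 interactive users per cluster node/JVM, with a 3 GB heap per node) can be used to size a vertical cluster for a given user load. The helper below is illustrative, not part of any Oracle or Cisco tool; it simply applies the stated ratio.

```python
import math

# Sketch: sizing a WebLogic vertical cluster from the guidance above
# (roughly 250 to 300 Oracle JDE interactive users per cluster node/JVM,
# 3 GB heap per node). Illustrative helper only.

def cluster_size(concurrent_users, users_per_jvm=250, heap_gb_per_jvm=3):
    """Return (jvm_count, total_heap_gb) for a target interactive load."""
    jvms = math.ceil(concurrent_users / users_per_jvm)
    return jvms, jvms * heap_gb_per_jvm

print(cluster_size(5000))  # (20, 60)
```

At the conservative 250-users-per-JVM figure, 5000 concurrent users map to 20 JVM nodes, which matches the vertical cluster of up to 20 JVMs described above.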

Oracle JD Edwards Enterprise Server Configuration

Oracle JD Edwards tools release 8.98.4.10 is used with Oracle JD Edwards application release 9.0.2. The number of interactive users per call object kernel peaked at around 18.

The following are the important configuration settings in the Oracle JDE initialization files:

JDE.ini

Kernel configurations

Security kernels 60

Call Object kernels 400

Workflow kernels 30

Metadata kernels 1

[JDENET]

maxNetProcesses=60

maxNetConnections=8000

maxKernelProcesses=1000

maxNumSocketMsgQueue=1000

maxIPCQueueMsgs=600

maxLenInlineData=4096

maxLenFixedData=16384

maxFixedDataPackets=2000

internalQueueTimeOut=90

[JDEIPC]

maxNumberOfResources=4000

maxNumberOfSemaphores=2000

startIPCKeyValue=6001

avgResourceNameLength=40

avgHandles=200

hashBucketSize=53

maxMsgqMsgBytes=5096

maxMsgqEntries=1024

maxMsgqBytes=65536

msgQueueDelayTimeMillis=40

jdbj.ini

JDBj-CONNECTION POOL

minConnection=5

maxConnection=800

poolGrowth=5

initialConnection=25

maxSize=500

jas.ini

OWWEB

MAXUser=500

OWVirtualThreadPoolSize=800

JDENET

maxPoolSize=500
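The kernel counts listed above must stay within the JDENET maxKernelProcesses limit. The sketch below shows a generic sanity check using Python's configparser; the section and key names in the ini fragment are simplified stand-ins for the real JDE.ini layout (which uses different section names), so treat this as an assumption-laden illustration of the check, not a parser for actual JDE.ini files.

```python
import configparser

# Sketch: sanity-checking the configured kernel counts against the JDENET
# maxKernelProcesses limit. The ini fragment below is a simplified stand-in
# for JDE.ini; real JDE.ini files use different section names and many more
# settings. The check itself is generic.
JDE_INI = """
[KERNELS]
securityKernels = 60
callObjectKernels = 400
workflowKernels = 30
metadataKernels = 1

[JDENET]
maxKernelProcesses = 1000
"""

config = configparser.ConfigParser()
config.read_string(JDE_INI)

total_kernels = sum(int(v) for v in config["KERNELS"].values())
limit = config.getint("JDENET", "maxKernelProcesses")

# Total configured kernels (491 here) must stay below maxKernelProcesses.
print(total_kernels, limit, total_kernels <= limit)
```

With the counts listed above (60 + 400 + 30 + 1 = 491), the configuration sits comfortably below the maxKernelProcesses value of 1000.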

Conclusion

This CVD elaborates how FlexPod, comprising Cisco UCS servers and a NetApp unified storage system, forms a highly reliable, robust solution for Oracle JD Edwards implementations.

Enterprise Resource Planning (ERP) has been around for many decades and has enabled agile IT practices for the business. Organizations that use ERP packages have benefitted immensely by streamlining their back-end processes to improve management and ROI.

Because ERP is a business-critical application that takes a long time to implement and test, there is always reluctance to move to newer technologies or to experiment with the advanced features available today. Another important concern is predictability: will it work for us, how will it work, and at what cost?

Cisco has invested considerable time and effort to test, validate, and characterize the Oracle JD Edwards deployment on FlexPod, providing a comprehensive scalable architecture and best practices. By leveraging the best practices and lessons learned in this extensive Oracle JDE benchmark activity, customers can confidently deploy Oracle JD Edwards on FlexPod and reduce the risks involved.

The Cisco Oracle Competency Center has provided considerable information in this design guide by testing and characterizing the Cisco UCS environment with the Oracle JD Edwards software stack. Given the scalability demonstrated by the test results, Cisco is confident that these results substantiate FlexPod as a solid fit for any customer considering Oracle JD Edwards as their ERP platform.

Bill of Materials

Table 17 and Table 18 offer details of all the hardware and software components used in this CVD.

Table 17 Solution Components-Hardware

Description                                                                   Part Number
Cisco Unified Computing System                                                N20-Z0001
Cisco UCS 5108 Blade Server Chassis                                           N20-C6508
Cisco UCS 6248UP 1RU Fabric Interconnect/No PSU/32 UP/12p LIC/fans/no SFP+    UCS-FI-6248UP
Cisco UCS 2208XP Fabric Extender/8 external 10Gb ports                        UCS-IOM-2208XP
Cisco UCS B200 M3 Blade Server w/o CPU, mem, HDD, mLOM/mezz                   UCSB-B200-M3-CH
Intel Xeon E5-2690 CPUs (2.9 GHz, 8 cores)                                    UCS-CPU-E5-2690=
16GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v                            UCS-MR-1X162RY-A=
8GB DDR3-1600-MHz RDIMM/PC3-12800/dual rank/1.35v                             UCS-MR-1X082RY-A=
Cisco VIC 1280 dual 40Gb capable Virtual Interface Card                       UCS-VIC-M82-8P=
Cisco Nexus 5548UP                                                            N5K-C5548UP-FA
Cisco Nexus 5548UP Storage Protocols Services License                         N5548P-SSK9
10GBASE-SR SFP Module                                                         SFP-10G-SR
10GBASE-CU SFP+ Cable 3 Meter                                                 SFP-H10GB-CU3M
8 Gbps Fibre Channel LW SFP+, LC                                              DS-SFP-FC8G-LW=
NetApp Unified Storage System: NetApp FAS 3270 A with Data ONTAP 8.1.2


Table 18 Solution Components-Software

Platform               Software Type           Name                                            Version
Cisco UCS 6248         Management              UCSM                                            2.1(1a)
Cisco UCS 6248         OS                      NX-OS                                           2.1(1a)
Cisco Nexus 5548 UP    OS                      NX-OS                                           5.0(3)N2(1)
NetApp FAS 3270        Management              OnCommand System Manager                        2.1
Blade Servers          OS                      Oracle Linux 5.8 (Red Hat compatible kernel)    OL 5.8
Database               Database                Oracle RAC                                      11.2.0.3
Application            Application Software    Oracle JD Edwards 9.0.2 with Tools 8.98.4.10    9.0.2/8.98.4.10


Appendix A - Workload Mix for Batch and Interactive Test

Table 19 Batch Workload Mix

UBE Name    Description                            Long/Short
R03b31      Activity Log Report                    Short
R03b155     A/R Summary Analysis                   Short
r0004p      UDC Record Types Print                 Short
r0006p      Business Unit Report                   Short
r0008p      Date Patterns Report                   Short
r0010p      Company Constants Report               Short
r0012p1     AAI Report                             Short
r0014       Payment Terms Report                   Short
r0018p      Tax Detail Report                      Short
r01402w     Who's Who Report                       Short
r41542      Item Ledger As Of Record Generation    Short
r42072      Price Category Print                   Short
r41411      Select Items Cost Count                Short
R31410      Work Order Processing                  Long
R43500      Purchase Order Print                   Long
R3483       MRP Report                             Long


Table 20 Oracle JD Edwards EnterpriseOne Interactive Transactions

Transaction Name        Description                                     Virtual Users (per 500-user workload)
H03B102E_OK             Apply Receipts                                  50
H0411I_1_FIND           Supplier Ledger Inquiry                         50
H051141E_Row_OK         Daily Time Receipt                              10
H17500E_Find            Case Management Add                             25
H31114U_OK              Work Order Completion                           15
H3411AE_Post_OK         MRP Messages (WO Orders)                        10
H3411BE_Post_OK         MRP Messages (OP Orders)                        10
H3411CE_Post_OK         MRP Messages (OT Orders)                        10
H4113E_OK               Inventory Transfer                              25
H42101E_Submit_Close    Sales Order Entry - 10 Line Items               125
H42101U_SubmitClose     Sales Order Update                              25
H4310E_Post_OK          Purchase Order Entry - 25 Line Items            100
H4312U_OK               Purchase Order Receipts                         10
H4314U_Row_OK           Voucher Match                                   10
H4915AU_Find            Ship Confirmation - Approval Only               15
H4915CE_Find            Ship Confirmation - Confirm/Ship Only           5
H4915CU_Find            Ship Confirmation - Confirm and Change Entry    5


Appendix B - Reference Documents

Oracle JD Edwards MTRs for Windows client, enterprise server, web server and database server.

Oracle JD Edwards 9.0.2 Release notes.

Oracle JD Edwards EnterpriseOne Applications Release 9.0 Installation Guide for Oracle On Linux.

JD Edwards EnterpriseOne 8.98.3 Clustering Best Practices with Oracle WebLogic Server.

Appendix C - Reference Links

Cisco UCS 5108 Server Chassis Installation Guide: http://www.cisco.com/en/US/docs/unified_computing/ucs/hw/chassis/install/ucs5108_install.html

Cisco Unified Computing System CLI Configuration Guide: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/cli/config/guide/2.1/b_UCSM_CLI_Configuration_Guide_2_1_chapter_010.html

Cisco UCS Manager GUI Configuration Guide: http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/2.1/b_UCSM_GUI_Configuration_Guide_2_1.html

NetApp Storage Command Options: https://library.netapp.com/ecmdocs/ECMP1147528/html/index.html

NetApp OnCommand System Manager: http://support.netapp.com/documentation/docweb/index.html?productID=61456

NetApp Data ONTAP 8.1.2: http://support.netapp.com/documentation/docweb/index.html?productID=61539

Cisco Nexus 5548 Configuration Guide: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide.html