Cisco UCS B-Series Blade Servers

SAP Applications with Sybase ASE on Cisco UCS with NetApp Storage


October 2013



The Challenge

Today's IT departments are increasingly challenged by the complexity of managing disparate components within their data centers. Rapidly proliferating silos of server, storage, and networking resources, combined with numerous management tools and operational processes, have led to crippling inefficiencies and costs. Savvy organizations understand the financial and operational benefits of moving from infrastructure silos to a virtualized, shared environment. However, many of them are hesitant to make the transition because of potential short-term business disruptions and long-term architectural inflexibility, which can impede scalability and responsiveness to future business changes. Enterprises and service providers need a tested, cost-effective virtualization solution that can be easily implemented and managed within their existing infrastructures and that scales to meet their future cloud-computing objectives.

Business Challenges Facing the SAP Customer

Corporations deploying SAP software today are under pressure to reduce costs, minimize risk, and control change by accelerating deployments and increasing the availability of their SAP landscapes. Changing market conditions, restructuring activities, and mergers and acquisitions often result in the creation of new SAP landscapes based on the SAP NetWeaver platform. Deployment of these business solutions usually exceeds a single production instance of SAP. Business process owners and project managers must coordinate with IT management to optimize the scheduling and availability of systems to support rapid prototyping and development, frequent parallel testing or troubleshooting, and appropriate levels of end-user training. The ability to access these systems as project schedules dictate, with current datasets and without affecting production operations, often determines whether SAP projects are delivered on time and within budget.

The Solution

To meet these challenges, NetApp, VMware, and Cisco have collaborated to create a solution for SAP applications built on a flexible, shared infrastructure that can scale easily or be configured for secure multitenancy and cloud environments. This approach uses a prevalidated configuration that delivers a virtualized data center in a rack, composed of leading computing, networking, storage, and infrastructure software components.

SAP Applications Built on Cisco UCS and NetApp Storage

This solution differs from other virtualized infrastructure offerings by providing these advantages:

• Validated technologies from industry leaders in computing, storage, networking, and server virtualization

• A single platform, built from unified computing, fabric, and storage technologies, that scales to meet the largest data center requirements without disruption or architectural changes in the future

• Integrated components that enable you to centrally manage all your infrastructure pools

• An open-design management framework that integrates with your existing third-party infrastructure management solutions

• Support for VMware vSphere and bare-metal servers

• Virtualization on all layers of the solution stack

• Secure multitenancy for operating fenced SAP systems or landscapes

• Application and data mobility

• Integrated storage-based backup

• Provisioning of infrastructure components; for example, tenants and operating systems

• Automated SAP system copies

• Provisioning of fenced SAP systems based on clones of production systems

Solution Overview

SAP Applications with Sybase ASE Database on Cisco UCS and NetApp Storage with VMware and Cisco Nexus 1000V

This solution provides an end-to-end architecture with Cisco Unified Computing System (Cisco UCS®), VMware, SAP, and NetApp technologies that demonstrates the implementation of SAP applications on Cisco UCS, NetApp storage, and VMware with the Cisco Nexus® 1000V Switch and highlights the advantages of using the Sybase Adaptive Server Enterprise (ASE) database.
The following components are used for the design and deployment:

• SAP NetWeaver 7.31

• Sybase ASE 15.7

• Cisco UCS 2.1 (1a) server platform

• Cisco Nexus 5548UP Switch

• Cisco Nexus 1000V Switch

• VMware vSphere 5.1 virtualization platform

• Data center business advantage architecture

• LAN architectures

• NetApp storage components

• NetApp OnCommand System Manager 2.1

Figure 1 shows the physical architecture of the design solution discussed in this white paper.

Physical Architecture

The setup used in this white paper consists of Cisco® servers, fabric interconnects, and Cisco Nexus switches connected to a NetApp storage controller and disk shelves, retaining the essential FlexPod configuration, as shown in Figure 1. The Cisco UCS C220 M3 Rack Servers and Cisco Nexus 2232PP 10GE Fabric Extenders (FEX) are used for management purposes and are optional.

Figure 1. Physical Architecture

Logical Architecture

SAP applications are built on this setup using a multitenancy model, as documented in the "SAP Applications Built on FlexPod" Cisco Validated Design. A tenant is defined as a set of standardized, virtualized resources taken from a shared pool. Each tenant is isolated by VLAN technology on the networking layer and by NetApp vFiler technology on the storage layer.
This setup consists of two tenants, one infrastructure tenant and one tenant for all SAP applications. Additional tenants can be created following multitenancy requirements, for example, isolation of subsidiaries or isolation of clients. Additional tenants are also used to cover specific use cases, such as fenced clones of SAP systems or landscapes. The infrastructure tenant is used to run all management components for the infrastructure and the application layer. All "managed tenants" are administered from the infrastructure tenant.
Figure 2 shows how an SAP landscape is deployed with a single managed tenant. All SAP systems are running within this tenant (Managed Tenant 1).

Figure 2. Logical Architecture

In this setup, separate virtual machines (VMs) are created for Sybase ASE and SAP NetWeaver in Managed Tenant 1. The guest OS (Red Hat Enterprise Linux [RHEL] 6.4) boots from a VMDK residing on the VMware NFS datastore, and database/log NFS volumes are mounted directly on the guest OS.
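Because the database and log volumes are mounted directly on the guest OS, they can be expressed as ordinary NFS entries in /etc/fstab. The following fragment is a sketch only; the vFiler and volume names match the provisioning examples later in this paper, and the mount options are typical NFSv3 settings rather than values prescribed by the solution:

```shell
# Hypothetical /etc/fstab entries on the RHEL 6.4 guest (names as used in the
# provisioning examples in this paper; mount options are generic NFSv3 defaults):
t002-1-prim:/vol/t002_saplog_SYB/saplog_SYB/sybase_SYB   /sybase/SYB            nfs  rw,bg,hard,vers=3,tcp,rsize=65536,wsize=65536  0 0
t002-1-prim:/vol/t002_sapdata_SYB/sapdata_SYB/sapdata_1  /sybase/SYB/sapdata_1  nfs  rw,bg,hard,vers=3,tcp,rsize=65536,wsize=65536  0 0
```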

Technology Overview

Cisco Unified Computing System

Cisco UCS is a third-generation data center platform that unites computing, networking, storage access, and virtualization resources into a cohesive system designed to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class x86-architecture servers. The system is an integrated, scalable, multichassis platform in which all resources participate in a unified management domain that is controlled and managed centrally. Figure 3 shows the unification of the computing, network, and storage in a Cisco UCS environment.

Figure 3. Cisco Unified Computing Components in a Data Center

The following are the main components of Cisco UCS:

Computing: The system is based on an entirely new class of computing system that incorporates blade servers based on Intel® Xeon® E5-2600 Series processors. The Cisco UCS blade servers offer the patented Cisco Extended Memory technology to support applications with large datasets and allow more VMs per server.

Network: The system is integrated into a low-latency, lossless, 80-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks that are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.

Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are extended into virtualized environments to better support changing business and IT requirements.

Storage access: The system provides consolidated access to both SAN storage and network-attached storage (NAS) over the unified fabric. By unifying the storage access, Cisco UCS can access storage over Ethernet, Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI. This provides customers with choices for storage access and investment protection. In addition, the server administrators can preassign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management for increased productivity.

Management: The system uniquely integrates all system components, enabling the entire solution to be managed as a single entity by Cisco UCS Manager. Cisco UCS Manager has an intuitive GUI, a command-line interface (CLI), and a robust API to manage all system configuration and operations.

Cisco UCS is designed to deliver the following benefits:

• Reduced TCO, increased ROI, and increased business agility.

• Increased IT staff productivity through just-in-time provisioning and mobility support.

• A cohesive, integrated system that unifies the technology in the data center. The system is managed, serviced, and tested as a whole.

• Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand.

• Industry standards supported by a partner ecosystem of industry leaders.

Cisco UCS Components

This section describes the various components that constitute Cisco UCS. Figure 4 shows these components.

Figure 4. Cisco UCS Components

Cisco UCS Blade Server Chassis

The Cisco UCS 5100 Series Blade Server Chassis is a crucial building block of Cisco UCS, delivering a scalable and flexible blade server chassis.
The Cisco UCS 5108 Blade Server Chassis is six rack units (6RU) high and can mount in an industry-standard 19-inch rack. A single chassis can house up to eight half-width Cisco UCS B-Series Blade Servers and can accommodate both half-width and full-width blade form factors.

Cisco UCS B200 M3 Blade Server

The Cisco UCS B200 M3 Blade Server is a half-width, two-socket blade server. The system uses two Intel Xeon E5-2600 Series Processors, up to 384 GB of DDR3 memory, two optional hot-swappable small form-factor (SFF) serial-attached SCSI (SAS) disk drives, and two virtual interface cards (VICs) that provide up to 80 Gbps of I/O throughput. The server balances simplicity, performance, and density for production-level virtualization and other mainstream data center workloads.

Cisco UCS Virtual Interface Card 1240

A Cisco innovation, the Cisco UCS VIC 1240 is a 4-port 10 Gigabit Ethernet, FCoE-capable modular LAN on motherboard (mLOM) designed exclusively for the M3 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional port expander, the capabilities of the Cisco UCS VIC 1240 can be expanded to eight ports of 10 Gigabit Ethernet.

Cisco UCS Virtual Interface Card 1280

The Cisco UCS VIC 1280 is an 8-port 10 Gigabit Ethernet, FCoE-capable mezzanine card designed exclusively for Cisco UCS B-Series Blade Servers.
The Cisco UCS VIC 1240 and 1280 enable a policy-based, stateless, agile server infrastructure that can present up to 256 PCI Express (PCIe) standards-compliant interfaces to the host that can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1280 supports Cisco Nexus 1000V technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying the server virtualization deployment.

Cisco UCS 6248UP 48-Port Fabric Interconnects

The Cisco UCS 6248UP 48-Port Fabric Interconnects provide a single point for connectivity and management for the entire system. Typically deployed as an active-active pair, the system's fabric interconnects integrate all components into a single, highly available management domain controlled by Cisco UCS Manager. The fabric interconnects manage all I/O efficiently and securely at a single point, resulting in deterministic I/O latency regardless of a server's or virtual machine's topological location in the system.
The Cisco UCS 6200 Series Fabric Interconnects support the system's 80-Gbps unified fabric with low-latency, lossless, cut-through switching that supports IP, storage, and management traffic using a single set of cables. The fabric interconnects feature virtual interfaces that terminate both physical and virtual connections equivalently, establishing a virtualization-aware environment in which blade servers, rack servers, and virtual machines are interconnected using the same mechanisms.

Cisco UCS 2200 Series Fabric Extenders

The Cisco UCS fabric extenders are zero-management, low-cost, low-power-consuming devices that distribute the system's connectivity and management planes into rack and blade chassis to scale the system without complexity. Designed never to lose a packet, Cisco fabric extenders eliminate the need for top-of-rack switches and blade-server-resident Ethernet and FC switches and management modules, dramatically reducing the infrastructure cost per server.

Cisco UCS Manager

Cisco UCS Manager is an embedded, unified manager that provides a single point of management for Cisco UCS. It can be accessed through an intuitive GUI, a CLI, or the comprehensive open XML API. Cisco UCS Manager manages the physical assets of the server and storage and LAN connectivity and is designed to simplify the management of virtual network connections through integration with several major hypervisor vendors. It provides IT departments with the flexibility to allow people to manage the system as a whole, or to assign specific management functions to individuals based on their roles as managers of server, storage, or network hardware assets. It simplifies operations by automatically discovering all the components available on the system and enabling a stateless model for resource use.
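The XML API mentioned above accepts simple request documents over HTTP. The sketch below shows the general shape of a login and inventory query, assuming the standard aaaLogin and configResolveClass methods; the hostname, credentials, and cookie value are placeholders:

```xml
<!-- Hypothetical exchange with the Cisco UCS Manager XML API (POST to http://<ucsm>/nuova) -->
<!-- 1. Log in; the response carries an outCookie for subsequent calls: -->
<aaaLogin inName="admin" inPassword="<password>"/>
<!-- 2. Query all blade servers in the management domain using the returned cookie: -->
<configResolveClass cookie="<outCookie>" classId="computeBlade" inHierarchical="false"/>
```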

Cisco Nexus 5548UP Switch

The Cisco Nexus 5548UP is a 1RU 1 and 10 Gigabit Ethernet switch offering up to 960 Gbps of throughput and scaling up to 48 ports. It offers thirty-two 1 and 10 Gigabit Ethernet fixed Enhanced Small Form-Factor Pluggable (SFP+) Ethernet/FCoE or 1/2/4/8-Gbps native FC unified ports and one expansion slot, which can be configured with a combination of Ethernet/FCoE and native FC ports.

Cisco Nexus 1000V

The Cisco Nexus 1000V Switch for VMware vSphere is a virtual machine access switch that is an intelligent software switch implementation based on the IEEE 802.1Q standard for VMware vSphere environments running the Cisco NX-OS operating system. Operating inside the VMware ESX hypervisor, the Cisco Nexus 1000V Switch supports Cisco VN-Link server virtualization technology.
With the Cisco Nexus 1000V, you can have a consistent networking feature set and provisioning process all the way from the virtual machine access layer to the core of the data center network infrastructure. Virtual servers can use the same network configuration, security policy, diagnostic tools, and operational models as their physical server counterparts attached to dedicated physical network ports. Virtualization administrators can access predefined network policy that follows mobile virtual machines to help ensure proper connectivity, saving valuable time. Developed in close collaboration with VMware, the Cisco Nexus 1000V Switch is certified by VMware to be compatible with VMware vSphere, vCenter, ESX, and ESXi, and with many other vSphere features. You can use the Cisco Nexus 1000V Switch to manage your virtual machine connectivity with confidence in the integrity of the server virtualization infrastructure.
The Cisco Nexus 1000V Switch is compatible with VMware vSphere as a VMware vNetwork Distributed Switch (vDS), with support for the VMware ESX and ESXi hypervisors, integration with VMware vCenter Server, and compatibility with many other VMware vSphere features. The Cisco Nexus 1000V Switch has two major components:

• The Virtual Ethernet Module (VEM), which runs inside the hypervisor

• The external Virtual Supervisor Module (VSM), which manages the VEMs

VMware Architecture

VMware vSphere provides a foundation for virtual environments, including clouds. In addition to the hypervisor itself, it provides tools, such as VMware vMotion, to manage the virtual landscape, and it allows the creation of secure private landscapes. With vMotion, you can move a virtual machine from one physical compute node to another without service interruption. The powerful VMware virtualization solution enables you to pool server and desktop resources and dynamically allocate them with service-level automation so you can deploy a private cloud and deliver IT as a service (ITaaS). VMware components provide a scalable approach to virtualization that delivers high availability and agility to meet changing business requirements. VMware vSphere, the industry's most complete and robust virtualization platform, increases IT efficiency through consolidation and automation, reducing capital and operating costs while giving you the freedom to choose your applications, OS, and hardware. VMware vCenter Server Standard offers proactive end-to-end centralized management of virtual environments, delivering the visibility and responsiveness you need for cloud-ready applications.

VMware vNetwork Distributed Switch

The VMware vNetwork Distributed Switch maintains network runtime state for VMs as they move across multiple hosts, enabling inline monitoring and centralized firewall services. It provides a framework for monitoring and maintaining the security of virtual machines as they move from physical server to physical server and enables the use of third-party virtual switches such as the Cisco Nexus 1000V to extend familiar physical network features and controls to virtual networks.
In combination with a Cisco VIC such as the Cisco UCS VIC 1240, Cisco UCS VM-FEX can be used to connect and manage the VMware vNetwork Distributed Switch directly. Using VM-FEX with Cisco UCS Manager instead of a Cisco Nexus 1000V shifts management from a separate network device to Cisco UCS Manager while keeping all the advantages of a distributed switch, as discussed in the "Network Architecture" section of the "SAP Applications Built on FlexPod" Cisco Validated Design.
Figure 5 shows the VMware components and the plug-in architecture for VMware vCenter Server, which enables you to integrate additional plug-ins from NetApp and Cisco to manage integrated landscapes.

Figure 5. Overview of VMware Components and Management Plug-In

Storage Architecture

SAP applications built on FlexPod use the NetApp MultiStore software, which lets you quickly and easily create discrete and private logical partitions on a single storage system. Each virtual storage partition, a vFiler secure partition, maintains separation from every other storage partition, so you can enable multiple tenants to share the same storage resource without compromising privacy or security.
The infrastructure tenant is used to run all management components for the infrastructure and the application layer. All "managed tenants" are administered from the infrastructure tenant. During the process of provisioning storage volumes for an SAP system, you can decide on the primary vFiler unit on which the SAP system should run. This allows different SAP systems to run on different storage hardware. Each SAP system consists of two volumes, the sapdata and saplog volumes. Backups of SAP systems with Sybase ASE are controlled by NetApp SnapCreator and NetApp OnCommand Unified Manager. The backups are stored on the backup vFiler unit of each tenant.
Figure 6 shows an example configuration with two Fabric-Attached Storage (FAS) high-availability (HA) pairs located in different data centers. An FAS HA pair consists of two storage controllers in an HA configuration. Each storage controller is assigned to a resource pool in NetApp Provisioning Manager, and vFiler units can be provisioned to each resource pool. In the figure, Resource Pools 2 and 4 have SAS and SATA drives and are therefore used for both primary and backup vFiler units. The other resource pools are mainly used for primary vFiler units.

Figure 6. vFiler Configuration for Infrastructure Tenant

SAP Sybase ASE 15.7

SAP Sybase ASE is an enterprise-grade relational database management system (RDBMS) that is focused on providing extreme transaction processing and continuous availability to business-critical applications. It provides compelling technical and commercial optimizations for SAP applications.
Commercially, SAP Sybase ASE is a preferred database for transaction processing environments, especially for SAP ERP and other SAP Business Suite applications, including the SAP Solution Manager and SAP NetWeaver Business Warehouse. As part of SAP, the application and the database releases are synchronized through joint roadmaps, and the maintenance periods follow the supported solutions. SAP uses the joint technical roadmap to optimally integrate SAP solutions with the Sybase ASE database and thus achieves the goal of making SAP Sybase ASE the most cost-efficient database for customers.
Technically, SAP Sybase ASE 15.7 has been greatly enhanced to provide optimal performance and operational efficiency for SAP applications. While this paper is not the right medium to go into all the advantages, a number of key innovations have been implemented to address key requirements in running SAP applications.


Data Compression

SAP Sybase ASE for SAP Business Suite uses a number of compression strategies to achieve high compression ratios. These include compression within a single row to remove empty space and zeros in fixed-length columns. At the page/block level, both page-dictionary and page-index compression strategies are used: repeated data items and repeated sets of data items are replaced by a single reference, resulting in dramatic savings for duplicate data. For indexes, SAP Sybase ASE uses duplicate-key suppression and suffix compression techniques to further reduce storage. Finally, SAP Sybase ASE also supports large object (LOB) compression. Given that LOBs can be very large (up to 2 GB), compression can result in very significant space savings. FastLZ and ZLib compression techniques are supported; the former provides lower CPU usage and execution times, while the latter provides higher compression ratios. Data and LOBs are also buffered in compressed form in ASE's data caches, reducing the memory resources required to run SAP on ASE.
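As an illustration, row- or page-level compression can be enabled per table with a short DDL statement submitted through isql. The table name, server name, and credentials below are placeholders; the DDL is a sketch based on the ASE 15.7 syntax for the feature described above:

```shell
# Write a hypothetical compression DDL statement to a file for isql:
cat > /tmp/compress.sql <<'EOF'
alter table dbo.ZDOCTAB set compression = page
go
EOF
# Submit with placeholder server and user, for example:
# /sybase/SYB/OCS-15_0/bin/isql -X -S SYB -U sapsa -i /tmp/compress.sql
```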

In-Row LOBs

SAP pool and cluster tables make heavy use of LOBs, such as text (CLOB) and binary (BLOB) data types. SAP Sybase ASE supports in-row LOBs for situations in which the LOBs are fairly small and can readily fit within the corresponding data row. This helps reduce I/O while accessing small LOBs and also further decreases the overall database size. The in-row LOB size is freely configurable.
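For example, an in-row LOB limit can be declared per column. The table and column names below are hypothetical, and the DDL is a sketch of the ASE 15.7 "in row" clause described above:

```shell
# Hypothetical DDL declaring a text column stored in-row up to 400 bytes:
cat > /tmp/inrow.sql <<'EOF'
create table docstore (
    doc_id   int               not null,
    content  text in row (400) null
)
go
EOF
```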

Data Partitioning

SAP Sybase ASE supports several data partitioning types (range, hash, list, round-robin). Partitioning can be selectively used to reduce contention on hot tables. In SAP BW, partitioning is used to optimize information lifecycle management tasks.
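A range partition, for instance, can be declared directly in the table DDL. The table name and boundary values below are hypothetical, sketched from the ASE range-partitioning syntax:

```shell
# Hypothetical DDL for a range-partitioned table (names and boundaries assumed):
cat > /tmp/partition.sql <<'EOF'
create table sales_hist (doc_id int, posting_date date)
partition by range (posting_date)
    (p2013h1 values <= ('Jun 30 2013'),
     p2013h2 values <= ('Dec 31 2013'))
go
EOF
```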

Task Scheduler

In contrast to most other DBMSs, ASE controls decisions regarding which user task to run in its own task scheduler. This provides maximum throughput and minimum response time by minimizing the time needed to perform user context switches. In addition, it enables a level of resource management to optionally separate user workloads according to business priorities. User tasks are organized in engine run queues, where the number of engines can be configured up to the number of processor cores or hardware threads available. The "threaded" kernel introduced with ASE 15.7 greatly enhances ASE scalability on systems with a very large number of processors, processor cores, and hardware threads. Figure 7 depicts ASE's internal architecture and its use of both internal DBMS threads (known as SPIDs) as well as operating system threads for system services to balance performance and resource consumption for optimal performance, even when servicing thousands of concurrent users.

Figure 7. Overview of SAP Sybase ASE's Kernel

Expanded Resource Configuration Limits

SAP Sybase ASE can manage up to 4 TB of physical memory, 64 TB of storage for a single database, and 1024 cores. Many SAP Sybase ASE customers are running systems with thousands of concurrent users.

Solution Setup and Operation

This section gives an overview of the tasks and workflows for setting up and operating SAP NetWeaver with Sybase ASE built on Cisco UCS and NetApp storage. Although it lists all the tasks involved, details of these tasks are described in the "SAP Applications Built on FlexPod" Cisco Validated Design, and the following sections describe only the differences and additional tasks required.

Infrastructure Setup Tasks

1. Set up VMware vSphere built on FlexPod.
2. Perform additional steps to configure SAP applications built on FlexPod.
3. Set up infrastructure tenant:

a. Configure NetApp Operations, Provisioning, and Protection Manager.

b. Set up infrastructure volumes.

c. Back up configuration of infrastructure volumes.

d. Install SAP Landscape Virtualization Management (LVM).

e. Install SnapCreator server.

f. Configure tenant-specific services (DNS).

4. Install and configure the OS:

a. OS template for VMware (SUSE Linux Enterprise Server [SLES] and/or RHEL)

b. OS template and/or autoinstall framework for bare metal (SLES and/or RHEL)

5. Provision one or more managed tenants:

a. Configure network.

b. Configure storage.

c. Configure tenant-specific services (Dynamic Host Configuration Protocol [DHCP], DNS, Network Information Service [NIS]).

Operational Tasks

1. Provision additional managed tenants:

a. Configure network.

b. Configure storage.

c. Configure tenant-specific services (DHCP, DNS, NIS).

2. Provision OS into target tenants.
3. Provision SAP system:

a. Prepare.

b. Install system.

4. Configure backup services for SAP systems:

a. Create Protection Manager protection policy.

b. Create SnapCreator profile.

c. Configure data protection.

5. Configure SAP LVM for SAP systems.

SAP System Provisioning

Preparing the SID-Specific Configuration File

Before starting the provisioning process, you must provide a SID-specific configuration file. For example, t002-SYB.conf is the name of the configuration file for SID=SYB in tenant 2. The configuration file must be stored at /mnt/data/conf in the infrastructure tenant. For details about all parameters of the configuration file, refer to the "SID-Specific Configuration File" section of the "SAP Applications Built on FlexPod" Cisco Validated Design. In addition, the following parameters must be changed to meet the specific requirements of the Sybase ASE installation. After adjusting all parameters, you must copy the configuration file to /mnt/data/conf in the target tenant. The Sybase ASE-specific file system layout looks as follows:
db_mountlist_1="t002-1-prim:/vol/t002_saplog_SYB/saplog_SYB/sybase_SYB ==> /sybase/SYB"
db_mountlist_2="t002-1-prim:/vol/t002_sapdata_SYB/sapdata_SYB/sapdata_1 ==> /sybase/SYB/sapdata_1"
db_mountlist_3="t002-1-prim:/vol/t002_sapdata_SYB/sapdata_SYB/sapdata_2 ==> /sybase/SYB/sapdata_2"
db_mountlist_4="t002-1-prim:/vol/t002_sapdata_SYB/sapdata_SYB/sapdata_3 ==> /sybase/SYB/sapdata_3"
db_mountlist_5="t002-1-prim:/vol/t002_sapdata_SYB/sapdata_SYB/sapdata_4 ==> /sybase/SYB/sapdata_4"
db_mountlist_6="t002-1-bck:/vol/t002_backup/data ==> /mnt/backup"

Provisioning a New OS

The new OS image is deployed as described in the "OS Provisioning" section of the "SAP Applications Built on FlexPod" Cisco Validated Design.

Configuring DNS and NIS

To configure the necessary DNS and NIS entries at the tenant-specific services VM, follow the description in the Cisco Validated Design. Instead of the ora<sid> user, the Sybase-specific user syb<sid> must be created with primary group sapsys. For the system used in this example (SID=SYB), the user should look like this:
uid=1016(sybsyb) gid=1001(sapsys) groups=1001(sapsys)
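A minimal sketch of the corresponding user creation follows, assuming the numeric IDs from the example above. Here the commands are only echoed as a dry run; a root user on the tenant-specific services VM would execute them:

```shell
#!/bin/sh
# Derive the syb<sid> user name and print the commands root would run.
SID=SYB
sid=$(echo "$SID" | tr 'A-Z' 'a-z')   # syb
SYBUSER="syb${sid}"                   # sybsyb for SID=SYB
echo "groupadd -g 1001 sapsys"
echo "useradd -u 1016 -g sapsys -m ${SYBUSER}"
```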

Provisioning Storage

To provision storage, run the commands in the following steps on the NetApp Data Fabric Manager (DFM) host. These steps can be executed either manually or by using an example script, which must be executed at the DFM host. Make sure to use a Sybase ASE-specific configuration file.
/mnt/software/scripts/ /mnt/data/conf/t002.conf /mnt/data/conf/t002-SYB.conf

Creating Subdirectories

After the storage volumes have been provisioned, several subdirectories must be created within these volumes. These steps can be executed either manually or by using a script, which must be executed at the tenant-specific services VM in the target tenant of the SAP system. For example:
/mnt/software/scripts/ /mnt/data/conf/t002.conf /mnt/data/conf/t002-SYB.conf

Mounting File Systems and Configuring the IP Alias

A script is used to mount the necessary file systems and configure the IP alias. This script is executed at the host where the SAP system is to be installed or migrated, with the following command:
/mnt/software/scripts/ <tenant_parameter_file> <SID_parameter_file> startmountonly
The script executes the following tasks:

• Creates a directory for archive log backups (if none exists)

• Configures virtual interfaces for the SAP and database services

• Creates mountpoints (if none exist)

• Mounts file systems
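The tasks above could be sketched as follows. The vFiler name, IP address, and interface are placeholder assumptions, and with RUN=echo the script only prints what it would do:

```shell
#!/bin/sh
# Dry-run sketch of the mount/alias steps for SID=SYB (placeholders throughout).
RUN=echo                              # set RUN="" to actually execute as root
SID=SYB
PRIM=t002-1-prim                      # primary vFiler (assumed name)

$RUN mkdir -p /mnt/backup/log_${SID}  # archive-log backup directory, if missing
$RUN ip addr add 192.168.2.50/24 dev eth0 label eth0:db   # virtual DB interface
for mp in /sybase/${SID} /sybase/${SID}/sapdata_1; do     # mountpoints
    $RUN mkdir -p "${mp}"
done
$RUN mount -t nfs "${PRIM}:/vol/t002_saplog_${SID}/saplog_${SID}/sybase_${SID}" "/sybase/${SID}"
```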

Installing the SAP System with SAPinst

The SAP system is installed by using SAPinst. To install an SAP system using virtual host names (necessary to move systems between hosts and for SAP LVM integration), the SAP instances must be installed in the following order (select the Distributed System option):
1. "Global host preparation" using the SAPinst parameter
2. "Database instance" using the SAPinst parameter
3. "Central instance" using the SAPinst parameter

Configuring Backup Services

Backup services can be configured for newly installed or migrated SAP systems as well as for SAP systems that have been created as a system copy. Backup services for Sybase ASE-based SAP systems are provided by the NetApp SnapCreator framework. Therefore, if SnapCreator is not yet available in the environment, it must be installed first.

Creating SnapCreator Profiles

For each Sybase ASE-based SAP system, a NetApp SnapCreator backup configuration must be set up. Configurations are grouped into profiles. In the SAP Applications Built on FlexPod environment, backup configurations are grouped into tenant profiles. If no tenant profile exists for a new SAP system, it must be created first. The following steps show how to configure a tenant profile and a backup configuration for the SAP system SYB.
1. Create a new tenant profile.
2. Follow the wizard to create a new backup configuration.
3. Enter a configuration name.
4. Select the Application plug-in type.
5. Select the Sybase ASE plug-in.
6. Enter the Sybase ASE database parameters.
7. Enter the Sybase ASE database host. Make sure to use the virtual hostname of the database host.
8. Select OnCommand proxy (DFM).
9. Use the credentials from the global config. Select HTTPS as the transport protocol.
10. Enter the hostname of the NetApp vFiler that hosts the SAP installation volumes.
11. Select all volumes that belong to the system.
12. Configure the retention settings. In this example, all backup snapshots are prefixed with "sc-backup", and the retention is set to 5 hourly backups and 10 daily backups. Daily backups are deleted only if they are older than 7 days.
13. Keep the default settings.
14. Do not configure SnapMirror or SnapVault. Backup protection will be handled via OnCommand Protection Manager capabilities.
15. Select OnCommand data protection capabilities.
16. A new empty application dataset is created in OnCommand.
17. Confirm all settings and finish the wizard.
The configuration is stored on the NetApp SnapCreator server. Some parameters must be adapted in the configuration the wizard created. To change these settings, log in to the SnapCreator server VM and open the configuration file for editing.
1. Open the configuration file in the SnapCreator server installation path <path_to_scserver>/engine/configs/<profile>/<configuration>.conf. For example,
t001-scserver:/opt/NetApp/scServer4.0p1/engine/configs/t002 # vi t002_SYB.conf
2. In the ProtectionManager Options section, change the NTAP_DFM_DATA_SET parameter so that it refers to the application dataset created for this configuration.
3. In the Plug-In Parameter section, add the SYBASE_USER parameter next to the existing plug-in entries, such as:
# Plug-In Parameter #
SYBASE_ISQL_CMD=/sybase/SYB/OCS-15_0/bin/isql -X
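
With placeholder values (the dataset name and database user below are illustrative, not taken from the environment), the adapted sections might look like this:

```conf
# ProtectionManager Options #
# Dataset created during profile setup; OnCommand names it
# snapcreator_<backup_profile_name>.
NTAP_DFM_DATA_SET=snapcreator_t002_SYB

# Plug-In Parameter #
SYBASE_ISQL_CMD=/sybase/SYB/OCS-15_0/bin/isql -X
# Database user the Sybase ASE plug-in connects with; "sapsa" is a placeholder.
SYBASE_USER=sapsa
```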

Configuring Data Protection

To protect backups created via SnapCreator, the framework integrates with NetApp OnCommand Protection Manager capabilities. During the profile creation, an empty application dataset is created in Protection Manager. Perform the following steps to configure the dataset according to the required protection policies.
1. Open the OnCommand management console and select the dataset that has been created via SnapCreator. The dataset is named snapcreator_<backup_profile_name>.
2. Edit the dataset and add the qtrees of the sapdata and saplog volumes.
3. Finish the dataset wizard.
4. In the dataset list, select the SnapCreator dataset again and click "Protection Policy." Select the appropriate protection policy to meet the requirements of the SAP system.
5. Let Protection Manager provision new backup volumes as part of the configuration.
6. Select the backup resource pool.
7. Select the backup vFiler unit for the Backup node.
8. Check and confirm all changes.
9. Finish the wizard.
10. Protection Manager starts to create the necessary backup volumes and SnapVault relationships.
11. Wait until the initial transfer finishes and the dataset is conformant.

Performing a Test Backup

To verify that all configuration settings are correct, run an initial test backup as follows.
1. Open the configuration profile for the Sybase ASE system in SnapCreator. Click Actions -> Backup to start a backup. Select the required policy, for example, hourly.
2. Monitor the log window and verify that all steps of the backup run successfully.
3. Go to the NetApp management console and verify that the newly created backup shows up. Once the next scheduled backup transfer has finished (or after an on-demand protection run), the backup should also be shown as available on the Backup node.
The backup services for the SAP system are now completely configured. Regular backups can be scheduled via the SnapCreator scheduler or an external scheduler as needed.
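
As an illustration of external scheduling, a cron entry could call the SnapCreator CLI directly. The host, credentials, and log path below are placeholders, and the exact option names should be verified against the SnapCreator documentation for the installed release:

```sh
# Hypothetical crontab entry: hourly backup of the SYB system via SnapCreator.
# Server, user, password, and log path are placeholders.
0 * * * * /opt/NetApp/scServer4.0p1/snapcreator \
    --server localhost --port 8080 --user scadmin --passwd '<password>' \
    --profile t002 --config t002_SYB --action backup --policy hourly \
    >> /var/log/snapcreator_SYB.log 2>&1
```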

Configuring SAP LVM for SAP Systems

SAP Landscape Virtualization Management (LVM) is an optional component of the SAP applications built on FlexPod environment. SAP LVM reduces the capital investment and operational costs of managing SAP systems and landscapes on physical or virtual infrastructure and increases business agility. The SAP applications built on FlexPod architecture, including the Sybase ASE deployment covered in this paper, is fully prepared to be integrated into and managed by SAP LVM. The following steps describe how to add a Sybase ASE system to an existing LVM installation in the FlexPod environment.

Configuring the SAP System

1. Log in to LVM and go to the Configuration page. Select the Systems tab.
2. Click Discover and enter the host information of the host on which the Sybase system is running.
3. Click Detect. Three new instances should be discovered (Sybase ASE database, SAP Central Services, and SAP Central Instance).
4. Select the correct pool for the new system and click Auto Assign. Verify all settings.
5. Finish the wizard and save the configuration.
6. The new SAP system is now listed in the Systems and Instances overview.
7. Select the system and click Edit.
8. Provide a system description.
9. Select additional options to flag the system as a source for provisioning tasks. For Cloning and Renaming, you also have to provide a valid remote function call (RFC) connection to the system.
10. Select the ACM (Automatic Capacity Management) option, if required.
11. Verify the network isolation settings. Then save and exit the wizard.

Configuring the Database Instance

1. In the Overview of Systems and Instances, select the database instance of the new Sybase ASE system. Then click Edit.
2. Verify all settings and test the database connection.
3. Select the AC-enabled option and the Linux operating system.
4. Configure all necessary NFS file systems for the database instance. Mark the /home/<sid>adm and /sapmnt/<SID> mounts as System Wide.
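
The mount configuration entered in LVM corresponds to NFS mounts such as the following sketch; the vFiler name and volume paths are placeholders and will differ per landscape:

```conf
# Hypothetical NFS mounts for the Sybase ASE system SYB (fstab-style notation).
# "t002-vfiler" and the volume names are placeholders.
t002-vfiler:/vol/SYB_sapdata  /sybase/SYB/sapdata_1  nfs  rw,bg,hard,vers=3  0 0
t002-vfiler:/vol/SYB_saplog   /sybase/SYB/saplog_1   nfs  rw,bg,hard,vers=3  0 0
t002-vfiler:/vol/SYB_sapmnt   /sapmnt/SYB            nfs  rw,bg,hard,vers=3  0 0
t002-vfiler:/vol/SYB_sidadm   /home/sybadm           nfs  rw,bg,hard,vers=3  0 0
```

In LVM, the /home/&lt;sid&gt;adm and /sapmnt/&lt;SID&gt; entries are the ones marked System Wide.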

Configuring the Central Services Instance

1. Select the Central Services instance in the overview list, and click Edit.
2. Verify all settings for the instance.
3. Select the AC-Enabled option and the Linux operating system.
4. Configure the necessary file system mounts. Mark all entries as System Wide.
5. On the Mass Configuration screen, select the remaining Central Instance of the system and click Save & Apply Mass Configuration.
6. Select all entries you want to set via the mass configuration.
7. Finish the wizard and go to the Operations page. The new system shows up as running and ready for all LVM use cases.

Verifying the Installation Using DBA Cockpit

In this example, DBA Cockpit was installed to verify the setup as well as to gather monitoring data at the database level. An SAP GUI client screenshot is captured below.


Conclusion

FlexPod combines various technologies, mainly Cisco UCS, VMware vSphere 5.1, and NetApp storage, to form a highly reliable, robust, and virtualized solution for SAP applications built on the Sybase ASE database.
Here's what makes the combination of Cisco UCS with NetApp storage so powerful for SAP environments:

• The stateless computing architecture provided by the service profile capability of Cisco UCS allows for fast, nondisruptive workload changes to be executed simply and seamlessly across the integrated Cisco UCS infrastructure and the Cisco x86 servers.

• Cisco UCS, combined with a highly scalable NAS platform from NetApp, provides the ideal combination for SAP's complex and demanding workloads.

• All of this is made possible by Cisco's Unified Fabric, with its focus on secure IP networks as the standard interconnect for server and data management solutions.

The Cisco Nexus 1000V technology employed in this solution is compatible with VMware vSphere as a VMware vNetwork distributed switch (vDS), as it supports the VMware ESX and the ESXi hypervisors and integrates efficiently with the VMware vCenter server.
In addition to the traditional switching capability, the Cisco Nexus 1000V Switch offers the Cisco vPath architecture to support virtualized network services, while the Cisco VN-Link technology provides a common management model for both physical and virtual network infrastructures through policy-based virtual machine connectivity, mobility of virtual machine security and network properties, and a nondisruptive operational model.
The Cisco server fabric switch enables utility computing by dramatically simplifying the data center architecture. It creates a unified, "wire-once" fabric that aggregates I/O and server resources. With the unified fabric, instead of servers having many cables attached to them, the server switch connects every server with a single high-bandwidth, low-latency network cable (two cables for redundancy).

Bill of Materials

Table 1, Table 2, and Table 3 detail the components used in this solution design.

Table 1. Hardware Components Used in the Deployment

Server Details                                       Storage Details
2 Cisco UCS B200 M3 Blade Servers                    NetApp FAS 3270
CPU: Intel Xeon E5-2690                              Protocol license: NFS, iSCSI
Memory: 256 GB                                       Network: 10-Gbps Ethernet and iSCSI
Network: VIC adapter with 80-Gbps bandwidth          Flash Cache: Two 500 GB
Server role: VMware ESXi server hosting guest VMs    Type and number of disk drives: 144 SAS, 15,000 rpm

Table 2. Component Description

• Cisco UCS 5108 Blade Server Chassis
• Cisco UCS 2208XP I/O Module (8 external, 32 internal 10 Gigabit Ethernet ports)
• Cisco UCS B200 M3 Blade Server (part number UCS B200 M3); dual Intel Xeon E5-2690 CPUs (2.7 GHz, 8 cores), 256 GB RAM (DDR3, 1600 MHz)
• Cisco UCS 6248UP 1RU Fabric Interconnect, no PSU, 32 UP, 12p LIC
• Cisco UCS 6200 16-port expansion module, 16 UP, 8p LIC
• NetApp FAS3240 single-enclosure HA (single 3U chassis)
• Dual-port 10 Gigabit Ethernet unified target adapter with fiber
• Disk shelf with 600-GB SAS drives, 15,000 rpm, 4 PSUs, 2 IOM3 modules
• NFS software license
• Cisco Nexus 5548UP Switch
• Cisco Nexus 5548UP Storage Protocols Services License

Table 3. Software Details

• Cisco UCS 6248UP
• Cisco Nexus 5548UP
• NetApp FAS3240
• Cisco UCS blade servers
• Cisco Nexus 1000V
• SAP NetWeaver 7.31 (SAP application)
• Sybase ASE Database 15.7

References

• SAP Applications Built on FlexPod Cisco Validated Design
• NetApp Virtual Storage Console 2.1 for VMware vSphere Backup and Recovery Administration Guide
• TR-3939: VMware vSphere Built on FlexPod Implementation Guide
• NetApp VSC Provisioning and Cloning PowerShell Cmdlets
• NetApp SnapCreator Installation and Configuration Guide
• SAP Note 1496410: Red Hat Enterprise Linux 6.x: Installation and Upgrade
• SAP Note 1597765: Known Problems with Support Packages in SAP NW 7.31 AS ABAP
• SAP Note 1539124: SYB: Database Configuration for SAP on Sybase ASE
• SAP Note 1554717: SYB: Planning Information for SAP on Sybase ASE
• SAP Note 1680803: SYB: Migration to SAP Sybase ASE - Best Practice
• SAP Note 1605680: SYB: Troubleshoot the Setup of the DBA Cockpit on Sybase ASE
• SAP Note 1749935: SYB: Configuration Guide for SAP Sybase ASE
• SAP Note 1835008: Database Performance Optimizations for SAP ERP


Appendix A: SnapCreator for Sybase Database Backup

Backup services for Sybase ASE-based SAP systems are supported by the NetApp SnapCreator framework; therefore, a SnapCreator installation must be performed in the FlexPod environment. The SnapCreator framework consists of a server and an agent layer. The server sends operations such as quiesce and unquiesce to a given database through the agent, which runs remotely or locally. The server installation becomes part of the infrastructure tenant, whereas the agent must be installed as part of the operating system image for new SAP systems.

Installing and Configuring SnapCreator Server

Perform the following steps to create a new SnapCreator server installation in the infrastructure tenant.
1. Deploy a new Linux virtual machine into the infrastructure tenant. Create the installation directory /opt/NetApp and extract the SnapCreator installation files there.
2. Run the SnapCreator server installer to set up SnapCreator. For example,
t001-scserver:/opt/NetApp/scServer4.0p1 # ./snapcreator --setup
3. Follow the instructions to finish the installation. For details, refer to the SnapCreator Framework installation and administration guide.
4. Start the SnapCreator server. For example,
t001-scserver:/opt/NetApp/scServer4.0p1 # /opt/NetApp/scServer4.0p1/bin/scServer start
Starting scServer:
Checking Status of scServer:
To perform the initial configuration of the SnapCreator server, open a web browser and go to http://hostname:8080 (or http://hostname:<gui_port> if a nondefault GUI port was chosen). Log in using the credentials provided during the installation, and then perform the following steps.
1. Cancel the process to create a new profile. This will be done later for each new SAP system.
2. Select Management -> Global Configuration from the top menu.
3. Click "Create Global" to create a new global configuration.
4. Keep the default values for the configuration type and the plug-in type.
5. In the Storage Connection Settings dialog, select the "Use OnCommand Proxy" option.
6. Provide connection settings for the OnCommand (DFM) server.
7. Finish the wizard.

Installing and Configuring the SnapCreator Agent

The SnapCreator agent should be installed as part of a new operating system master template for Sybase ASE systems. Follow the process described in the "Linux Template Maintenance" section of the "SAP Applications Built on FlexPod" Cisco Validated Design. The main steps are as follows:
1. Create a new master VM.
2. Start the new master VM and make the modifications.
3. Clean up the VM, shut down, and convert the VM to a template.
4. Test and release the new template.
To install the SnapCreator agent, as part of step 2, perform the following steps:
1. Create the installation directory /opt/NetApp and extract the SnapCreator installation files there.
2. Delete the scServer folder.
3. Run the SnapCreator agent installer to set up SnapCreator. For example,
t002-25-lnx:/opt/NetApp/scAgent4.0p1 # ./snapcreator --setup
4. Follow the instructions to finish the installation. For details, refer to the SnapCreator Framework installation and administration guide.
5. Start the SnapCreator agent. For example,
t002-25-lnx:/opt/NetApp/scAgent4.0p1 # bin/scAgent start
Starting scAgent:
Checking Status of scAgent:
Running on port 9090
6. To make sure the agent is started every time the operating system boots, add the agent start command to the flexpod_config boot script or create a new boot script.
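
One possible way to implement this is to append a guard like the following to the boot script; the installation path and the scAgent status call are assumptions to verify against the installed release:

```sh
# Hypothetical snippet for the flexpod_config boot script:
# start the SnapCreator agent at boot if it is not already running.
SC_AGENT_HOME=/opt/NetApp/scAgent4.0p1   # adjust to the installed version
if ! "$SC_AGENT_HOME/bin/scAgent" status >/dev/null 2>&1; then
    "$SC_AGENT_HOME/bin/scAgent" start
fi
```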