Microsoft SharePoint 2010 with Microsoft Hyper-V on Cisco UCS Rack-Mount Servers

Table of Contents

About the Authors

Acknowledgment

About Cisco Validated Design (CVD) Program

Microsoft SharePoint 2010 with Microsoft Hyper-V on Cisco UCS Rack Server

Executive Summary

Introduction

Objective

Audience

Purpose of this Guide

Solution Overview

Cisco Unified Computing System

Cisco C240 M3 Rack-Mount Servers

Cisco C220 M3 Rack-Mount Servers

Cisco UCS P81E VIC

Cisco Nexus 5548UP Switch

Cisco Nexus 3048 Switch

Microsoft SharePoint 2010 SP1

Three-Tier Role-Based Architecture

Advantages of Three-Tier Architecture

Microsoft SharePoint 2010 SP1 Sizing Considerations

Web Front-End Server

Application Server

Microsoft SharePoint 2010 SP1 Search

Database Server

Microsoft SharePoint 2010 SP1 Design Considerations

Cisco UCS Virtual Interface Cards

Cisco Nexus 5548 Switches

Cisco Nexus 3048 Switches

Microsoft Windows Server 2008 R2 SP1 Hyper-V

Hyper-V Architecture

Microsoft SharePoint 2010 Farm Architecture

Prepare the Cisco UCS C240 M3 Servers

Configure Cisco Integrated Management Controller (CIMC)

Connecting and Powering on the Server (Standalone Mode)

Enable Virtualization Technology in BIOS

Configure RAID

Install Microsoft Windows Server on Cisco UCS C240 M3 Servers

Install Microsoft Windows Server 2008 R2 SP1

Install the Device Driver

Configure the Network

NIC Teaming of Cisco VIC P81E

Prepare and Configure the Cisco Nexus 5548UP Switch

Configure the Cisco Nexus Switches

Enable Features

Set Global Configurations

Configure VLANs

Configure Port Channels

Adding Port Channel Configurations

Configuring Virtual Port Channels

Enabling Jumbo Frames

Prepare and Configure the Cisco Nexus 3048 Switch

Microsoft Windows Server 2008 R2: Adding Roles and Features

Create Virtual Machines

Network Design Considerations

Network Components

Storage Considerations

Implement High Availability in SharePoint 2010

Microsoft SharePoint 2010 Farm Scale Out with Cisco UCS C220 M3 Rack-Mount Servers

Measuring Microsoft SharePoint 2010 Server Performance

Workload Characteristics

Workload Mix (60 RPH)

Dataset Capacity

Performance Test Framework

Test Methodology

VSTS Test Rig

Performance Tuning

Environment Configuration Tuning

HTTP Throttling

Performance Results and Analysis

Requests Per Second

Average Page Time

Average Response Time

Pages per Second

Virtual Server Processor Utilization

Physical Host CPU Utilization

Performance Results and Analysis

Summary

Bill of Materials

References


Microsoft SharePoint 2010 with Microsoft Hyper-V on Cisco UCS Rack-Mount Servers
Last Updated: November 1, 2012

Building Architectures to Solve Business Problems

About the Authors

SY Abrar, Technical Marketing Engineer, SAVBU, Cisco Systems

SY Abrar is a Technical Marketing Engineer with the Server Access Virtualization Business Unit (SAVBU). With over 10 years of experience in information technology, his focus areas include Microsoft product technologies, server virtualization, and storage design. Prior to joining Cisco, Abrar was a Technical Architect at NetApp. Abrar holds a bachelor's degree in computer science and is storage certified and a Microsoft Certified Technology Specialist.

Vadiraja Bhatt, Performance Architect, SAVBU, Cisco Systems

Vadiraja Bhatt is a Performance Architect at Cisco, managing the solutions and benchmarking effort on the UCS platform. Vadi has over 17 years of experience in performance analysis and benchmarking of large enterprise systems deploying mission-critical applications. Vadi specializes in optimizing and fine-tuning complex hardware and software systems and has delivered many benchmark results on TPC and other industry-standard benchmarks. Vadi has 6 patents to his credit in the database (OLTP and DSS) optimization area.

Acknowledgment

For his support and contribution to the design, validation, and creation of this Cisco Validated Design, we would like to thank:

Tim Cerling

About Cisco Validated Design (CVD) Program


The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit http://www.cisco.com/go/designzone.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.

Cisco and the Cisco Logo are trademarks of Cisco Systems, Inc. and/or its affiliates in the U.S. and other countries. A listing of Cisco's trademarks can be found at http://www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1005R)

Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental.

© 2012 Cisco Systems, Inc. All rights reserved.

Microsoft SharePoint 2010 with Microsoft Hyper-V on Cisco UCS Rack Server


Executive Summary

A Microsoft SharePoint Server 2010 environment, comprising various servers that collectively host the core applications and provide services, is termed a SharePoint farm. This farm is responsible for providing various functions to the user. A three-tier architectural topology, in which the SharePoint tiers (Web, Application, and Database) are deployed using independent servers for each tier, is among the most widely used SharePoint 2010 farm topologies. To size each tier of a SharePoint farm, it is essential to study in detail the workload requirements and farm usage patterns, along with the performance capabilities of each hardware component deployed in the system.

This document describes the performance of a medium-sized SharePoint farm built using Microsoft Hyper-V on Cisco UCS Rack Servers implementing a three-tier architecture. A load generation framework developed by the SharePoint engineering team at Cisco performs the load tests and measures the performance metrics while keeping the required response time under one second. This paper shares the test results, which provide guidelines and a better understanding of the performance impact of different SharePoint workloads. It also assists you in sizing and designing the best farm architecture to support different workloads and recommends the best infrastructure elements for an optimal SharePoint implementation.

This study provides detailed information on how the recommended farm architecture supports up to 20,000 users with 10 percent of the total users working concurrently. It also shows how to achieve sub-second response time and highlights the performance benefits of the Cisco Servers used in this study.

The virtualized SharePoint Server 2010 medium farm is deployed on multiple virtual machines hosted by Cisco UCS C240 M3 Rack Servers, using Microsoft® Windows Server® 2008 R2 with Microsoft Hyper-V™, instead of a conventional solution deployed on physical servers.

A separate whitepaper describes how a SharePoint Server 2010 medium farm is built and configured on physical servers, and presents performance results for various load tests of the physical-server SharePoint 2010 deployment. For more information, see:

http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns944/cisco_ucs_scalability_performance.pdf

For larger configurations, the SharePoint 2010 on FlexPod with VMware farm design solution provides an end-to-end architecture with Cisco, VMware, Microsoft, and NetApp technologies. This solution virtualizes the Microsoft SharePoint 2010 servers to support up to 100,000 users with 10 percent concurrency, offering high availability and server redundancy.

That performance study aims at understanding the performance characteristics of a large SharePoint farm on the FlexPod architecture, where Cisco UCS servers with Virtual Machine Fabric Extender (VM-FEX) universal pass-through switching provide improved application performance and operational efficiency, NetApp tools and technologies improve storage efficiency, and the VMware virtualization stack provides the hypervisor layer.

For more information, see: http://www.cisco.com/en/US/docs/unified_computing/ucs/UCS_CVDs/ucs_sharepoint2010_flexpod_vmware.html

Introduction

Microsoft SharePoint 2010 allows users to easily create, share, and connect information and applications. Microsoft SharePoint 2010 evolved from the earlier Microsoft Office SharePoint Server 2007 and provides all the useful features present in that version, such as collaboration, information sharing, and document management, along with several new features and important architectural changes that improve the product. The essentials for any SharePoint capacity planning are to understand the requirements, such as the total number of users, the number of concurrent users, the number of service applications, and so on. It is also important to consider how the requirements will scale in the future and how a custom-developed application, which can be part of an implementation, may influence the sizing of storage and servers.

Objective

The main objective of this document is to study the capacity and performance characteristics of a medium-sized Microsoft SharePoint 2010 farm in a Microsoft Hyper-V environment using the Microsoft Visual Studio 2010 Team System to generate the workload. This workload consists of SharePoint collaboration, search, and design based on the most commonly adopted three-tier architectural topology built using the Cisco UCS C240 M3 Rack Servers.

Audience

The target audience for this document includes solution architects, sales engineers, field engineers, and consultants involved in the planning, design, and deployment of Microsoft SharePoint Server 2010 hosted on Microsoft Hyper-V virtualization solutions on the Cisco Unified Computing System. This document assumes that the reader has an architectural understanding of the Cisco Unified Computing System, Microsoft Hyper-V, Microsoft Office SharePoint 2010 Server, and other related software.

Purpose of this Guide

This Cisco Validated Design guide for deploying SharePoint 2010 on Cisco UCS Rack Servers with Microsoft Hyper-V demonstrates how small and medium-sized business (SMB) customers can apply best practices across the Microsoft Hyper-V SharePoint environment, the Cisco Unified Computing System, and the Cisco Nexus family of switches.

The design validation for this study is completed using specific Microsoft Visual Studio 2010 workloads, and Visual Studio Load Test agents simulating a realistic 20,000 user load with 10 percent concurrency across SharePoint 2010.

Solution Overview

This solution provides an end-to-end architecture employing Cisco and Microsoft technologies.

It demonstrates that the Microsoft SharePoint 2010 servers can be virtualized to support a medium farm topology and provide high availability for small and medium-sized business (SMB) customers.

The following components are used for the design and deployment of SharePoint 2010:

SharePoint 2010 SP1 application

Unified Computing System server platform

Microsoft Windows 2008 R2 SP1

Microsoft Network Load Balancer

Microsoft SQL 2012 Mirroring

This solution is designed to host scalable and mixed application workloads. The scope of this design is limited to the SharePoint 2010 SP1 deployment only. The SharePoint farm consists of the following physical and virtual machines:

Two Load Balanced Web Front-End Servers (WFE)

Two Application Servers with redundant services

Two SQL Servers Mirrored with Witness Server

Figure 1 Application Server Assignment with Cisco UCS Rack Server

Cisco Unified Computing System

The Cisco Unified Computing System (UCS) is a next-generation data center platform that unites computing, network, and storage access. The platform, optimized for virtual environments, is designed using open industry-standard technologies. It aims to reduce the Total Cost of Ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet (GE) unified network fabric with enterprise-class, x86-architecture servers. It is an integrated, scalable, multi-chassis platform in which all the resources participate in a unified management domain.

The main components of the Cisco Unified Computing System are:

Computing—The system is based on an entirely new class of computing system that incorporates blade/rack servers based on Intel Xeon 5500/5600/E5-2600 Series Processors. Selected Cisco UCS blade/rack servers offer the patented Cisco Extended Memory Technology to support applications with large datasets and allow more virtual machines per server.

Network—The system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates Local Area Networks (LANs), Storage Area Networks (SANs), and high-performance computing networks which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.

Virtualization—The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended to virtualized environments to support the changing business and IT requirements.

Storage Access—The system provides consolidated access to both SAN and Network Attached Storage (NAS) over the unified fabric. By unifying the storage access, the Cisco Unified Computing System can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and Internet Small Computer System Interface (iSCSI). This provides customers with choice of storage access and investment protection. In addition, the server administrators can pre-assign storage access policies for system connectivity to the storage resources. This results in simplified storage connectivity and increased productivity.

Management—The system uniquely integrates all the components within the solution. This single entity can be effectively managed using the Cisco UCS Manager (UCSM). The Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to manage all system configuration and operations.

The Cisco Unified Computing System is designed to deliver:

A reduced Total Cost of Ownership and increased business agility.

Increased IT staff productivity through just-in-time provisioning and mobility support.

A cohesive, integrated system that unifies the technology at the data center. The system is managed, serviced, and tested as a whole.

Scalability through a design for hundreds of discrete servers and thousands of virtual machines and the capability to scale I/O bandwidth to match demand.

Industry standards supported by a partner ecosystem of industry leaders.

Cisco C240 M3 Rack-Mount Servers

Building on the success of the Cisco UCS C-Series M2 Rack Servers, the enterprise-class Cisco UCS C240 M3 server enhances the capabilities of the Cisco Unified Computing System portfolio in a 2-rack-unit (2RU) form factor. With the addition of the Intel® Xeon® processor E5-2600 product family, it delivers significant performance and efficiency gains. Figure 2 shows the Cisco UCS C240 M3 rack server.

Figure 2 Cisco UCS C240 M3 Rack Server

The Cisco UCS C240 M3 Rack Server also offers up to 384 GB of RAM, 24 drives or SSDs, and two 4 GE LAN interfaces built into the motherboard. This translates to outstanding levels of density and performance in a compact package.

The Cisco UCS C240 M3 Rack Server balances simplicity, performance, and density for production-level virtualization and other mainstream data-center workloads. The server is a two-socket server with substantial throughput and scalability. The Cisco UCS C240 M3 Rack Server extends the capabilities of the Cisco Unified Computing System. It uses Intel's latest Xeon E5-2600 series multi-core processors to deliver enhanced performance and efficiency. These processors adjust server performance according to application needs and use DDR3 memory technology, with memory scalable up to 384 GB for demanding virtualization and large-dataset applications. Alternatively, they can have a more cost-effective memory footprint for less demanding workloads. The options for the mezzanine card include the Cisco UCS P81E Virtual Interface Card, a converged network adapter (Emulex or QLogic compatible), or a single 10 Gigabit Ethernet adapter.

Cisco C220 M3 Rack-Mount Servers

Building on the success of the Cisco UCS C-Series M2 Rack-Mount Servers, the enterprise-class Cisco UCS C220 M3 Rack-Mount Server enhances the capabilities of the Cisco Unified Computing System portfolio in a 1-rack-unit (1RU) form factor. With the addition of the Intel® Xeon® processor E5-2600 product family, it delivers significant performance and efficiency gains. Figure 3 shows the Cisco UCS C220 M3 rack server.

Figure 3 Cisco UCS C220 M3 Rack Server

The Cisco UCS C220 M3 Rack Server also offers up to 256 GB of RAM, eight drives or SSDs, and two 1 GE LAN interfaces built into the motherboard. This results in outstanding levels of density and performance in a compact package.

The Cisco UCS C220 M3 Rack Server is ideal in situations where the SharePoint topology needs to be scaled out, as it offers excellent computing resources that can easily accommodate the scaling out of the Web and Application tiers. Neither tier requires large storage, but both need substantial compute resources, which the Cisco UCS C220 M3 Rack-Mount Server aptly offers.

Cisco UCS P81E VIC

The Cisco UCS Rack-Mount Server has various Converged Network Adapter (CNA) options. The Cisco UCS P81E Virtual Interface Card (VIC) option used in this design is unique to the Cisco UCS Rack-Mount Server system. This mezzanine card adapter is designed around a custom ASIC specifically intended for virtualized systems. As is the case with the other Cisco CNAs, the Cisco UCS P81E VIC encapsulates Fibre Channel traffic within 10 GE packets for delivery to the Ethernet network.

The Cisco UCS P81E VIC provides the capability to create multiple vNICs (up to 128) on the CNA. This allows complete I/O configurations to be provisioned in virtualized or non-virtualized environments using just-in-time provisioning. It leads to tremendous system flexibility and allows consolidation of multiple physical adapters.

System security and manageability is improved by providing visibility and portability of network policies and security even to the virtual machines. Additional P81E features like VN-Link technology and pass-through switching minimize implementation overhead costs and complexity. Figure 4 shows the Cisco UCS P81E VIC.

Figure 4 Cisco UCS P81E VIC

Cisco Nexus 5548UP Switch

The Cisco Nexus 5548UP is a 1RU 1 Gigabit and 10 Gigabit Ethernet switch that offers up to 960 gigabits per second of throughput and scales up to 48 ports. It offers 32 fixed 1/10 Gigabit Ethernet enhanced Small Form-Factor Pluggable (SFP+) Ethernet/FCoE or 1/2/4/8-Gbps native Fibre Channel unified ports and one expansion slot, which can take a combination of Ethernet/FCoE and native Fibre Channel ports. The Cisco Nexus 5548UP switch is shown in Figure 5.

Figure 5 Cisco Nexus 5548UP switch

For the purpose of this study, the Cisco Nexus 5548 has been used. However, the Cisco Nexus 3048, a 1 GE switch, is also recommended, as it offers a solution that is well suited for an SMB environment.
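The 960-Gbps throughput figure above follows directly from the port count: 48 ports at a 10-Gbps line rate, counted bidirectionally. A quick illustrative arithmetic check (not from the original document):

```python
# Cisco Nexus 5548UP aggregate throughput sanity check (illustrative only).
ports = 48                  # maximum port count cited for the 5548UP
line_rate_gbps = 10         # 10 Gigabit Ethernet line rate per port
# x2 accounts for full-duplex (bidirectional) traffic per port
throughput_gbps = ports * line_rate_gbps * 2
print(throughput_gbps)      # prints 960
```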

Cisco Nexus 3048 Switch

The Cisco Nexus® 3048 Switch is a line-rate Gigabit Ethernet top-of-rack (ToR) switch and is part of the Cisco Nexus 3000 Series Switches portfolio. The Cisco Nexus 3048, with its compact one-rack-unit (1RU) form factor and integrated Layer 2 and 3 switching, complements the existing Cisco Nexus family of switches. This switch runs the industry-leading Cisco® NX-OS Software operating system, providing customers with robust features and functions that are deployed in thousands of data centers worldwide.

Figure 6 Cisco Nexus 3048 switch

The Cisco Nexus 3048 is ideal for SMB customers that require a Gigabit Ethernet ToR switch with local switching that connects transparently to upstream Cisco Nexus switches, providing an end-to-end Cisco Nexus fabric in their data centers. This switch also provides features such as vPC, which has been demonstrated using the 5548 in this paper. The switches can be interchanged to suit the environment without any impact on the performance or results observed in the Microsoft SharePoint 2010 farm reported in this document.

Microsoft SharePoint 2010 SP1

Microsoft SharePoint 2010 is an extensible and scalable web-based platform. It consists of tools and technologies that support collaboration and sharing of information within teams throughout the enterprise and on the web. The total package is a platform on which one can build business applications for better storage, sharing, and management of information within an organization. Microsoft SharePoint turns users into participants; allowing users to easily create, share, and connect with information, applications, and people. SharePoint 2010 provides all the useful features present in the earlier versions along with several new features and important architectural changes that improve the product.

Three-Tier Role-Based Architecture

The three-tier role based architecture of Microsoft SharePoint 2010 Farm includes Web Server Role, Application Server Role, and Database Server Role. See Figure 7.

Web Server Role—The SharePoint web server is responsible for hosting web pages, web services, and web parts that are necessary to process the requests served by the farm. The server is also responsible for directing requests to the appropriate application servers.

Application Server Role—The SharePoint application server is associated with services, where each service represents a separate application service that can potentially reside on a dedicated application server. Services with similar usage and performance characteristics can be grouped on a server. The grouped services can then be scaled out onto multiple servers.

Database Server Role—SharePoint databases can be categorized broadly by their roles as Search Database, Content Database, and Service Database. In larger environments, SharePoint databases are grouped by roles and deployed onto multiple database servers.

Figure 7 Three-Tier Architecture

Advantages of Three-Tier Architecture

A few of the major benefits that a three-tier architecture provides are described below:

Maintainability—Three-tier architecture follows a modular approach; each tier is independent of the other tiers. Thus, each tier can be updated without affecting the service as a whole.

Scalability—Scalability is a major benefit of the three-tier architecture: each tier can be scaled as and when required without dependence on the other tiers (independent scaling). For instance, when scaling web servers, provisioning servers at multiple geographical locations enables faster end-user response times and avoids high network latency. Another aspect is scaling application servers that require high computing resources; a farm of clustered application servers can provide on-demand application performance.

Availability and Reliability—Applications using the three-tier approach can exploit its modular architecture to scale components and servers at each tier. This provides redundancy and avoids a single point of failure, which in turn improves the availability of the overall system.

Microsoft SharePoint 2010 SP1 Sizing Considerations

In the context of SharePoint, the term "farm" is used to describe a collection of one or more SharePoint servers and one or more SQL servers. These servers together provide a set of basic SharePoint services bound together by a single Configuration Database in SQL. A farm in SharePoint marks the highest level of SharePoint administrative boundary.

Microsoft SharePoint 2010 SP1 can be configured as a small, medium, or large farm deployment. The topology service provides you with an almost limitless amount of flexibility, so you can tailor the topology of your farm to meet your specific needs.

A key concept of any SharePoint design is "right sizing" the SharePoint implementation to meet the needs of the organization. Classic design errors include either overbuilding or underbuilding the SharePoint environment. Overbuilding can result in an overly complex SharePoint environment, with upwards of a dozen servers (or even more), often with an overwhelming number of features enabled (such as managed metadata, workflows, forms, business intelligence, and other SharePoint Enterprise features) that exceed the ability of the IT staff to support. This creates the impression that SharePoint is "too complicated" or "never works right." Underbuilding can yield results that include slow page loads, timeouts during uploads or downloads, and system outages.

Analyzing the characteristics of the demand that the solution is expected to handle is necessary for proper sizing. One must understand both the workload characteristics, such as the number of users and concurrent users at peak time and the most frequently used operations, and the dataset characteristics, such as content size and distribution.
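The workload figures used in this study (20,000 users at 10 percent concurrency, with the 60 requests-per-hour workload mix described later) can be turned into a rough throughput target with simple arithmetic. The sketch below is illustrative only; real farms should be sized from measured usage patterns, and the helper function name is not part of any tool used in this study.

```python
# Rough SharePoint farm throughput estimate from workload characteristics.
# Figures come from this study's workload definition; treat the result as a
# sizing illustration, not a performance guarantee.

def required_rps(total_users, concurrency_pct, requests_per_hour):
    """Average requests per second the farm must sustain at peak."""
    concurrent_users = total_users * concurrency_pct / 100
    return concurrent_users * requests_per_hour / 3600

concurrent = 20000 * 10 / 100        # 2,000 concurrent users
rps = required_rps(20000, 10, 60)    # 2,000 users x 60 RPH / 3,600 s

print(f"Concurrent users: {concurrent:.0f}")     # prints 2000
print(f"Average load: {rps:.1f} requests/second")  # about 33.3 RPS
```

A peak-hour multiplier (often 2x or more) is typically applied on top of such an average before choosing hardware.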

The farm used in this solution has two web front-end servers, two application servers and a mirrored database server. The farm serves various tiers fulfilling the realistic enterprise needs while being flexible, scalable, and maintainable. The farm has no single point of failure which makes it reliable.

The following sections briefly describe the sizing aspects of each tier of the three-tier role-based SharePoint farm.

Web Front-End Server

Web front-end servers form the SharePoint connection point for clients that request content or services. All client requests result in some load on the WFE servers, as the WFE servers render pages before returning the requested pages to a browser. WFE servers do not require large amounts of disk storage but rely heavily on processor and memory for performance.

Table 1 provides a high-level description of the resource requirements (processor and memory load characteristics) for the different services on the web front-end server.

Table 1 Processor and Memory Load Characteristics of Web Front-End Servers

Service Application            CPU Load    Memory Load
SharePoint Foundation Service  High        High
Timer Service                  Very High   Very High
Logging Service                Very High   Very High
Sandboxed Solution             Low         Low
Workflow Capabilities          High        High


Application Server

Different service applications have different workload profiles, but specific servers can be dedicated to specific service applications. Scale out can be achieved by assigning multiple servers to a specific service application. Most application services do not require local storage on the application server but demand a faster processor and low-latency memory.

Table 2 provides a high-level description of the resource requirements (processor and memory load characteristics) for the different services on the application server.

Table 2 Processor and Memory Load Characteristics of Application Servers

Service Application             CPU Load    Memory Load
SharePoint Foundation Service   None        None
Central Administration Service  Medium      Medium
Timer Service                   Very High   Very High
Logging Service                 None        None
User Profile Service            Very High   Very High
Word Viewing Service            High        High
SharePoint Search Service       Very High   Very High


The following are the best practices for web front-end and application servers:

Have more than one web server in the farm that hosts the Microsoft SharePoint Foundation Web Application which allows end users to access SharePoint sites and data.

Make a careful choice of the service applications which go on the web server, as these applications may impact the overall performance of the web servers and the performance perception of end users.

Evaluate the need for application-server-based service applications in the SharePoint solution. Install and enable only those service applications that will enable the organization to meet specific business and technology goals. Also keep in mind the skill sets of the IT staff and the user population that support and use these features.

Enabling some SharePoint 2010 Enterprise features, such as Excel Services or PerformancePoint, can put a strain on the application and SQL servers and requires additional configuration, such as the installation of SQL Server Analysis Services for PerformancePoint.

Installing "companion products" such as Project Server 2010 or Office Web Applications may justify the addition of other application servers to the farm.

Microsoft SharePoint 2010 SP1 Search

The Microsoft SharePoint 2010 SP1 Search service offers significant benefits for users but places a large workload burden on the farm, so search performance must be considered specifically in the context of the overall farm.

The following are the components of the search servers:

Crawl component—The crawl component crawls and indexes content in the SharePoint content databases and in other types of storage repositories. The crawl role builds the index and submits index updates to the search query role. Crawl components use CPU aggressively; memory is not critical for crawl components.

Query component—The query component responds to the user's search requests. When a user enters a search in a SharePoint site, Microsoft SharePoint 2010 SP1 submits the query to a server that hosts the query role and returns a result set. All servers that host the query role have a copy of the index that the crawl role generates.

Database Server

All data, including content, configuration, and metadata, is stored on the SQL server. Not all service applications affect database servers, because only some of them require databases. However, storage access times and storage capacity are key requirements for this role.

Table 3 provides a high-level description of the processor, I/O, and storage load that the different service applications place on the database servers.

Table 3 Processor and Memory Load Characteristics of Database Servers

Service Application             CPU Load     I/O Load   Storage
SharePoint Foundation Service   Very High    High       High
Timer Service                   None         None       None
Logging Service                 Very High    High       High
User Profile Service            High         High       Very High
Word Viewing Service            None         None       None
Sandboxed Solution              None         None       None
Workflow Capabilities           None         None       None
SharePoint Search Service       High         High       High


In the default configuration, SharePoint 2010 stores data uploaded to a SharePoint site in a SQL Server database (SQL Server 2012 in this solution). Because uploading a document to the SQL database is not as efficient as simply storing a file on a file share, optimizing I/O on the SQL server is very important.

When using a Cisco UCS-based environment, an organization can choose to create physical SQL 2012 servers or create a Microsoft Hyper-V virtual SQL 2012 environment and implement a server cluster or high-availability mirror (HA mirror).

Microsoft SharePoint 2010 SP1 Design Considerations

Best practices for designing SharePoint environments have evolved to demonstrate the many advantages available to organizations that choose the Cisco platform, and these practices apply to organizations of all sizes. Various options need to be considered when designing an enterprise-class SharePoint 2010 environment, including SQL Server design, SharePoint "front-end" server configurations, and the configuration of the software and hardware that runs SharePoint 2010 farms. Organizations must also look ahead when planning the management and governance of the SharePoint environment, to ensure that it continues to offer acceptable levels of performance and reliability in the future. An obvious immediate benefit with Cisco is a single trusted vendor providing all the components needed for a SharePoint farm.

The additional capabilities offered by the Cisco Unified Computing System include the following:

Cisco UCS Virtual Interface Cards

Cisco offers a variety of adapter cards designed for use with Cisco UCS B-Series and C-Series servers. A Cisco innovation, the Cisco UCS P81E Virtual Interface Card is a virtualization-optimized Fibre Channel over Ethernet (FCoE) PCI Express (PCIe) 2.0 x8 10-Gbps adapter. It is designed for use with Cisco UCS C-Series Rack-Mount Servers. The virtual interface card is a dual-port 10 Gigabit Ethernet PCIe adapter that can support up to 128 PCIe standards-compliant virtual interfaces. These interfaces can be dynamically configured so that both their interface type (network interface card [NIC] or host bus adapter [HBA]) and their identity (MAC address and worldwide name [WWN]) are established using just-in-time provisioning. In addition, the Cisco UCS P81E can support network interface virtualization and Cisco VM-FEX technology.

To an operating system or a hypervisor running on a Cisco UCS C-Series Rack-Mount Server, the virtual interfaces appear as regular PCIe devices. In a virtualized environment, Cisco VM-FEX technology allows virtual links to be centrally configured and managed without the complexity that traditional approaches interpose with multiple switching layers in virtualized environments. I/O configurations and network profiles move along with virtual machines, helping increase security and efficiency while reducing complexity. As a result of close cooperation between Cisco and VMware, network policies and virtual interfaces can be applied to virtual machines in the VMware vCenter. The partnership also enables pass-through switching in the virtual switch, improving hypervisor performance.

Cisco Nexus 5548 Switches

Cisco Nexus® 5000 Series switches provide low latency, a choice of front-to-back or back-to-front cooling, copper or fiber access ports, and rear-facing data ports. The Cisco Nexus 5000 Series is designed for a broad range of physical, virtual, storage access, and high-performance computing environments. This gives customers the flexibility to meet and scale their data center requirements gradually and at a pace that aligns with their business objectives.

The switch series, using cut-through architecture, supports line-rate 10 GE on all ports while maintaining consistently low latency independent of packet size and services enabled. It supports a set of network technologies known collectively as Data Center Bridging (DCB) that increases the reliability, efficiency, and scalability of Ethernet networks. These features allow the switches to support multiple traffic classes over a lossless Ethernet fabric, thus enabling consolidation of LAN, SAN, and cluster environments. The ability of the switch series to connect Fibre Channel over Ethernet (FCoE) to native Fibre Channel protects existing storage system investments while dramatically simplifying in-rack cabling.

Figure 8 Cisco Nexus 5548UP

Cisco Nexus 3048 Switches

Cisco Nexus 3048 switches can also be considered for designing the Microsoft SharePoint 2010 server farm in an SMB environment. The Cisco Nexus 3048 offers wire-rate Layer 2 and Layer 3 switching and integrates transparently with the Cisco Nexus family of switches to deliver a consistent end-to-end Cisco Nexus fabric. The default system software has a comprehensive Layer 2 feature set with extensive security and management features. The switch supports SFP+ direct-attach 10 GE copper, an innovative solution that integrates transceivers with Twinax cables into an energy-efficient, low-cost solution. The modular operating system is built for resiliency and provides integration with Cisco Data Center Network Manager (DCNM) and XML management tools. Virtual PortChannel (vPC) provides Layer 2 multipathing through the elimination of Spanning Tree Protocol, enabling fully utilized bisectional bandwidth and simplified Layer 2 logical topologies without the need to change existing management and deployment models. The Cisco Nexus 3048 is ideal for small and medium-sized businesses that require a Gigabit Ethernet top-of-rack (ToR) switch with local switching that connects transparently to upstream Cisco Nexus switches.

The next critical components of the SharePoint 2010 environment are the servers, virtual or physical, that run the Windows Server 2008, the SQL Server 2012 and the SharePoint 2010. A key part of the design process involves complying with organizational standards while meeting the anticipated needs of the organization for the foreseeable life cycle of the technology. For example, some organizations have standards that require all servers to be virtualized and designs that will meet the anticipated end user requirements for the next three to five years.

The growth of SharePoint in an organization demands more hardware. The implementation of this new hardware, either physical or virtual, is often the biggest bottleneck. A lot of work is involved in implementation of the server configurations and in controlling the overall complexity and the number of servers.

The design considerations mentioned above and Microsoft best practices were followed in designing the different SharePoint 2010 SP1 server roles, both for the servers deployed virtually on Microsoft Hyper-V and for the servers installed natively on the Cisco UCS rack servers.

Microsoft Windows Server 2008 R2 SP1 Hyper-V

Microsoft Hyper-V is an integral part of the Windows Server operating system environment and provides a foundational virtualization platform that enables you to transition to the cloud. With Windows Server 2008 R2, you get a compelling solution for core virtualization scenarios—production server consolidation, dynamic datacenter, business continuity, VDI, and test and development.

Microsoft Hyper-V provides greater flexibility with features like live migration and cluster shared volumes for storage flexibility. Microsoft Hyper-V also delivers greater scalability with support for up to 64 logical processors, 2 TB of RAM, and NUMA awareness, along with improved performance through dynamic memory and enhanced networking support.

Hyper-V Architecture

For detailed information on Hyper-V Architecture, see: http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv-R2.mspx

Microsoft SharePoint 2010 Farm Architecture

The enterprise deployment design was determined using results from the evaluation deployment based on concurrent users, request per second, and page response times for different features. The final design incorporated additional Cisco UCS Rack Servers and Cisco Nexus end-to-end solution components. The environment comprised two web front-end servers, two application servers, and a mirrored SQL database with a witness server. See Figure 9.

Figure 9 Microsoft SharePoint 2010 Farm Scenario

Table 4 lists the hardware and software components that make up the different tiers of the Microsoft SharePoint 2010 SP1 large farm under test.

Table 4 Hardware/Software Components for Microsoft SharePoint 2010 Large Farm

Vendor      Name                          Version               Description
Cisco       Cisco UCS C240 Rack Server    BIOS version 1.4(5g)  Rack server
Cisco       Nexus 5548UP Switch           NX-OS                 Nexus 5000 Series switch
Microsoft   Hyper-V                       2008 R2 SP1           Hypervisor
Microsoft   Windows Server 2008 R2        2008 R2 SP1           Operating system
Microsoft   Microsoft SharePoint Server   SharePoint 2010 SP1   Web front-end, SharePoint 2010 Enterprise Edition
Microsoft   Microsoft SharePoint Server   SharePoint 2010 SP1   Application server, SharePoint 2010 Enterprise Edition
Microsoft   SQL Server                    2012                  Database server, SQL 2012 Enterprise Edition
Microsoft   SQL Server                    2012                  Witness server, SQL 2012 Standard Edition
Microsoft   VSTS 2010                     SP1                   Load test controller, VSTS 2010 Ultimate Edition
Microsoft   VSTS 2010                     SP1                   Load test agents, VSTS 2010 Ultimate Edition


Table 5 lists the details of all the hardware components used during the deployment. Figure 9 depicts the components of the Microsoft SharePoint 2010 SP1 farm.

Table 5 Hardware Components Details

Server                           Quantity   Role                   Hosted Server Role                                    Processor            Max Memory Supported   Storage                                     Network
Cisco UCS C240 M3 Rack Server    2          Microsoft hypervisor   Virtual servers: WFE/application tier/database tier   Intel Xeon E5-2690   384 GB                 Internal SAS drives, 24 x 300 GB, 15K RPM   10 Gbps
Cisco UCS P81E VIC               2          Network adapter        Cisco UCS C240 M3 rack server I/O adapter             NA                   NA                     NA                                          10 Gbps
Cisco UCS B200 M2 Blade Server   1          VSTS controller        Load test controller                                  Intel Xeon 5680      96 GB                  Internal SAS drives, 2 x 2.5"               10 Gbps
Cisco UCS B200 M2 Blade Server   2          VSTS agent             Load test agents                                      Intel Xeon 5680      96 GB                  Internal SAS drives, 2 x 2.5"               10 Gbps


Table 6 Virtual Server Configuration Details

VM                    Role                            vCPU #   Memory (GB)   Network Adapters
SharePoint 2010 SP1   Web front-end (NLB)             4        8             2
SharePoint 2010 SP1   Web front-end (NLB)             4        8             2
SharePoint 2010 SP1   Application server (Search)     4        16            2
SharePoint 2010 SP1   Application server (Search)     4        16            2
SQL 2012              Database server (principal)     4        32            2
SQL 2012              Database server (mirror)        4        32            2
SQL 2012              Witness server                  2        4             1


Prepare the Cisco UCS C240 M3 Servers

Preparing the Cisco C240 M3 servers is a common step for all the Hyper-V architectures. To begin, install the C240 M3 server in a rack. For more information on mounting the Cisco C240 servers, see:

http://www.cisco.com/en/US/docs/unified_computing/ucs/c/hw/C240/install/install.html

To prepare the servers, follow these steps:

Configure Cisco Integrated Management Controller (CIMC)

Enable Virtualization Technology in BIOS

Configure RAID

These steps are discussed in detail in the following sections.

Configure Cisco Integrated Management Controller (CIMC)

This section describes the procedure to configure the Cisco Integrated Management Controller.

Connecting and Powering on the Server (Standalone Mode)

For connecting and powering on the server in Standalone Mode, follow these steps:

1. Attach the supplied power cord to each power supply in your server.

2. Attach the power cord to a grounded AC power outlet.

3. Connect a USB keyboard and VGA monitor using the supplied KVM cable connected to the KVM connector on the front panel.

4. Press the Power button to boot the server. Watch for the prompt to press F8.

5. During bootup, press F8 when prompted to open the BIOS CIMC Configuration Utility.

6. Set the "NIC mode" to Dedicated and "NIC redundancy" to None. See Figure 10.

7. Choose whether to enable DHCP for dynamic network settings or to enter static network settings.

8. Press F10 to save your settings and reboot the server.

When the CIMC IP address is configured, the server can be managed using the HTTPS-based web GUI or the CLI.

Figure 10 Configuring CIMC


Note The default username for the server is "admin" and the default password is "password". Cisco strongly recommends changing the default password.


Enable Virtualization Technology in BIOS

Microsoft Hyper-V requires an x64-based processor, hardware-assisted virtualization (Intel VT enabled), and hardware data execution prevention (Execute Disable enabled).

To enable Intel® VT and Execute Disable in BIOS, follow these steps:

1. Press the Power button to boot the server. Watch for the prompt to press F2.

2. During bootup, press F2 when prompted to open the BIOS Setup Utility.

3. Choose Advanced > Processor Configuration. See Figure 11.

Figure 11 Enabling Virtualization Technology in BIOS

Configure RAID

The RAID controller type is Cisco UCSC RAID SAS 2008, which supports RAID levels 0, 1, and 5. For this setup, configure RAID level 1 and set the virtual drive as the boot drive.

To configure the RAID controller, follow these steps:

1. Using a web browser, connect to the CIMC using the IP address configured in the CIMC Configuration section.

2. Launch the KVM from the CIMC GUI. See Figure 12.

Figure 12 Cisco Integrated Management Controller Home Page

3. During bootup, press <Ctrl> <H> when prompted to configure RAID in the WebBIOS. See Figure 13.

Figure 13 Logging into WebBIOS

4. Choose the adapter and click Start.

Figure 14 Adapter Selection for RAID Configuration

5. Choose the New Configuration radio button and click Next.

Figure 15 Selecting Configuration Type

6. Click Yes and then click Next to clear the configuration.

7. If you choose the Automatic Configuration radio button and the "Redundancy when possible" option from the "Redundancy" drop-down list, and if only two drives are available, the WebBIOS will create a RAID 1 configuration.

Figure 16 Creating RAID Configuration

8. Click Accept when you are prompted to save the configuration.

Figure 17 Saving RAID Configuration in Virtual Drives

9. Click Yes when you are prompted to initialize the new virtual drives.

10. Choose the Set Boot Drive (current=U) radio button for the virtual drive created above and click Go.

Figure 18 Setting a Boot Drive for the Created Virtual Drive

11. Click Exit and reboot the system.

Install Microsoft Windows Server on Cisco UCS C240 M3 Servers

This section describes the installation of the Microsoft Windows Server 2008 R2 SP1 along with the driver installation.

Install Microsoft Windows Server 2008 R2 SP1

To install Microsoft Windows Server 2008 R2 SP1, using the virtual media, on all the Cisco UCS C240 M3 Bare Metal servers, follow these steps:

1. Find the drivers for the devices installed on the Cisco UCS C-Series Drivers DVD provided with your C-Series server. Alternatively, download the drivers from: http://www.cisco.com/cisco/software/navigator.html and extract them to a local machine such as your laptop.

2. Log in to the CIMC Manager using your administrator user ID and password.

Figure 19 CIMC Login Page

3. Enable the Virtual Media feature, which enables the server to mount virtual drives:

a. In the CIMC Manager Server tab, click Remote Presence.

b. In the "Remote Presence" pane, choose the Virtual Media tab and check the Enabled check box in the Virtual Media Properties.

c. Click Save Changes.

Figure 20 Enabling Virtual Media

4. In the "Remote Presence" pane, choose the Virtual KVM tab and click Launch KVM Console.

Figure 21 Launching Virtual KVM

5. When the "Virtual KVM Console" window launches, choose the Virtual Media tab.

6. In the "Virtual Media" window, provide the path to the Windows installation image by clicking Add Image. Use the dialog to navigate to your Microsoft Windows 2008 R2 SP1 ISO file and select it. The ISO image is displayed in the "Client View" pane.

Figure 22 Adding ISO Image

7. In the "Virtual KVM Console" window, watch during bootup for the F2 prompt and then press F2 to enter the BIOS setup. Wait for the setup utility screen to appear.

8. In the "BIOS Setup utility" screen, choose the Boot Options tab and verify that the virtual DVD device that you added in Step 6 is listed as a bootable device.

9. Move the device to the top under "Boot Option Priorities" as shown in Figure 23.

Figure 23 Verifying New Virtual Drive

10. Exit the BIOS Setup utility.

11. The Microsoft Windows installation begins when the image is booted.

12. Press Enter when prompted to "boot from CD".

13. Observe the Windows installation process and respond to prompts in the wizard as per your preferences and company standards.

14. When Windows prompts you with "Where do you want to install Windows?", install the drivers for the mass storage device. To install the drivers, follow these steps:

a. In the "Install Windows" window, click Load Driver. You are prompted by a "Load Driver" dialog to choose the driver to be installed. In the following steps, you first have to define a virtual device with your driver ISO image.

Figure 24 Creating a Storage Drive for Installing Microsoft Windows

Figure 25 Selecting Drivers

b. If not already open, open a KVM Virtual Media window as you did in Step 5.

c. In the "Virtual Media" window, unmount the virtual DVD that you mapped in Step 6 (uncheck the check box under "Mapped").

d. In the "Virtual Media" window, click Add Image.

e. Use the dialog to navigate to the location where you saved the Cisco driver ISO image for your mass storage device in Step 1 and choose it. The ISO appears in the "Client View" pane.

f. In the "Virtual Media" window, check the check box under "Mapped" to mount the driver ISO that you just chose. Wait for the mapping to complete, as indicated in the "Details" pane. After the mapping is done, you can choose the device for Windows installation. In the "Load Driver" dialog that you opened in Step a, click Browse.

g. Use the dialog to choose the virtual DVD device that you just created.

h. Navigate to the location of the drivers, choose them, and click OK. Windows loads the drivers and, when finished, the driver is listed under the prompt "Select the driver to be installed". Driver path: <CD-ROM drive>:\Windows\Storage\LSI\2008M\W2K8R2\x64.

i. After Windows loads the drivers, choose the driver for your device from the list in the "Install Windows" window and click Next. Wait while the drivers for your mass storage device are installed, as indicated by the progress bar.

Figure 26 Selecting the Driver

15. After driver installation finishes, unmap the driver ISO and map the Windows installation image.

To unmap the driver ISO and map the Windows installation image, follow these steps:

a. In the "Virtual Media" window, uncheck the check box under "Mapped" that corresponds to the driver ISO.

b. In the "Virtual Media" window, check the check box under "Mapped" that corresponds to your Windows installation image (the same one that you defined in Step 6). Wait for the mapping to complete. Observe the progress in the "Details" pane.

c. In the "Install Windows" window, choose the disk or partition where you want to install Windows from the list, and then click Next.

Figure 27 Allocating Disk Space

16. Complete the Windows installation according to the requirements and standards of your company. Continue to observe the Windows installation process and answer prompts as per your preferences. Verify that Windows lists the drivers that you added.

17. After the Windows installation is complete, Windows reboots the server again and you are prompted to press Ctrl-Alt-Del and to log in and access the Windows desktop. Use the login credentials that you supplied during the Windows installation process.

Install the Device Driver

The Cisco UCS C240 M3 solution contains the LAN on Motherboard (LOM) and Cisco VIC P81E adapter for which you need to install the drivers. This section explains how to locate and install the chipset and adapter drivers for Microsoft Windows Server 2008 R2 SP1.

Figure 28 Installing LOM Drivers

To locate and install the chipset and adapter drivers for Microsoft Windows Server 2008 R2 SP1, follow these steps:

1. Use a Windows file manager to navigate to the folder where you extracted the Cisco driver package that you got from the Cisco UCS C-Series Drivers DVD or downloaded from Cisco.com in Step 1 of the Install Microsoft Windows Server 2008 R2 SP1 section. Drivers for all of the devices are included in the folders named for each device.

2. Install Intel chipset drivers from: \Windows\ChipSet\Intel\C240\W2K8R2\setup.exe and reboot the server.

3. Install the LOM drivers from: \Windows\Network\Intel\I350\W2K8R2\x64\PROWinx64.exe and reboot the server, if prompted.

Figure 29 LOM Drive Installation in Progress

4. Install the drivers for the Cisco P81E VIC from the \Windows\Network\Cisco\P81E\W2K8R2\x64 folder.

5. Repeat the driver installation process for each device that still needs drivers, as indicated by yellow flags in the Microsoft Windows Server 2008 R2 SP1 Device Manager.

Configure the Network

This section provides the steps to configure the NICs and assign IP addresses on all Windows host servers.

The following steps explain how to modify the existing vNICs properties and create vNICs on Cisco P81E VIC for the configuration.

To create and modify the existing vNICs properties, follow these steps:

1. Using a web browser, connect to the CIMC using the IP address configured in the CIMC Configuration section.

2. Click Inventory on the left pane under the Server tab and choose the Network Adapters tab from the right pane.

3. Choose UCS VIC P81E under "Adapter Cards".

4. To add a vNIC, click the Add button under the vNICs tab as shown in Figure 30.

Figure 30 Adding vNIC

5. To modify the vNIC properties, click the Properties button as shown in Figure 31.

Figure 31 Modifying vNIC Properties

6. Repeat steps 1 through 3 above, then choose the vNICs tab. Choose eth0 and click Properties.

7. Set the MTU to "9000", the uplink port to "0", and the channel number to "1" in the MTU, Uplink Port, and Channel Number fields respectively.

8. Repeat the above steps to create additional vNICs as required by the solution. The created vNICs will be teamed in the next step for redundancy and increased bandwidth.

Figure 32 Window Showing vNIC Properties

NIC Teaming of Cisco VIC P81E

Teaming the Cisco VIC P81E ports provides redundancy and doubles the available bandwidth. This teamed adapter is used for Microsoft Hyper-V VM access.

For NIC teaming of Cisco VIC P81E, follow these steps:

1. Open Control Panel > Network Connections window.

2. Right click and choose the properties of any of the Ethernet interfaces.

3. Click Install and choose Protocol > Add.

4. Point the installation to the drivers directory and click OK.

5. The Cisco NIC Teaming Protocol driver will be installed and listed in the Ethernet interface properties.

6. Open the command prompt and run the enictool.exe utility to create and delete teams.

Figure 33 Command to Create and Delete NIC Teams

Figure 34 Network Connections Window Showing NIC Teaming

7. Choose Internet Protocol Version 4 (TCP/IPv4) > Properties and assign an IP address from the Management VLAN (VLAN 1) subnet.

Figure 35 Assigning IP Address to the Teamed Adapters

8. Repeat the above steps to complete the configuration of the Cisco VIC P81E on all the Cisco UCS C240 M3 servers. This Cisco VIC P81E teamed adapter will be used for VM access.


Note Before enabling the Microsoft Hyper-V role, NIC teaming must be completed.


Prepare and Configure the Cisco Nexus 5548UP Switch

This section provides a detailed procedure for preparing and configuring the Cisco Nexus 5548 switches for the Microsoft SharePoint 2010 farm.

In a vPC, a pair of switches acting as vPC peer endpoints, one as primary and the other as secondary, appears as a single entity to the devices attached to the port channel. The attached device can be a Fabric Extender, a switch, a server, or any other networking device. Thus, vPC provides hardware redundancy along with port-channel benefits. Figure 36 shows two Cisco Nexus 5548UP switches configured for vPC.

Figure 36 Cisco Nexus 5548 Switches Configuration for vPC

Configure the Cisco Nexus Switches

The NX-OS setup should start automatically after booting and connecting to the serial port of the switch. In the NX-OS CLI, follow these steps to set up Cisco Nexus switches A and B:

1. Enter yes to enforce secure password standards

2. Enter the password for the administrator (adminuser)

3. Enter the password a second time to commit the password

4. Enter yes to enter the basic configuration dialog

5. Create another login account (yes/no) [n]: Enter

6. Configure read-only SNMP community string (yes/no)[n]: Enter

7. Configure read-write SNMP community string (yes/no) [n]: Enter

8. Enter the switch name: <Nexus A Switch name> Enter

9. Continue with out-of-band (mgmt0) management configuration? (yes/no) [y]: Enter

10. Mgmt0 IPv4 address: <Nexus A mgmt0 IP> Enter

11. Mgmt0 IPv4 netmask: <Nexus A mgmt0 netmask> Enter

12. Configure the default gateway? (yes/no) [y]: Enter

13. IPv4 address of the default gateway: <Nexus A mgmt0 gateway> Enter

14. Enable the telnet service? (yes/no) [n]: Enter

15. Enable the ssh service? (yes/no) [y]: Enter

16. Type of ssh key you would like to generate (dsa/rsa): rsa

17. Number of key bits <768-2048>:1024 Enter

18. Configure the ntp server? (yes/no) [y]: Enter

19. NTP server IPv4 address: <NTP Server IP> Enter

20. Enter basic FC configurations (yes/no) [n]: Enter

21. Would you like to edit the configuration? (yes/no) [n]: Enter

22. Be sure to review the configuration summary before enabling it.

23. Use this configuration and save it? (yes/no) [y]: Enter

24. Configuration may be continued from the console or by using SSH. To use SSH, connect to the mgmt0 address of Nexus A or B.

25. Log in as user admin with the password previously entered.

Enable Features

In the NX-OS CLI, follow these steps to enable lacp, interface-vlan and vPC features on Cisco Nexus switches A and B:

1. Type config t to enter the global configuration mode

2. Type feature lacp

3. Type feature interface-vlan

4. Type feature vpc
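Consolidated, the commands in the steps above look like this on each switch:

```
switch# configure terminal
switch(config)#feature lacp
switch(config)#feature interface-vlan
switch(config)#feature vpc
```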

Set Global Configurations

To set the global configurations on Cisco Nexus switches, follow these steps:

1. From the global configuration mode, type spanning-tree port type network default to make sure that, by default, the ports are considered as network ports with regard to the spanning-tree.

2. Type spanning-tree port type edge bpduguard default to enable bpduguard on all edge ports by default.

3. Type spanning-tree port type edge bpdufilter default to enable bpdufilter on all edge ports by default.
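Entered together at the global configuration prompt, the commands from the steps above are:

```
switch(config)#spanning-tree port type network default
switch(config)#spanning-tree port type edge bpduguard default
switch(config)#spanning-tree port type edge bpdufilter default
```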

Configure VLANs

Configure VLANs on the Cisco Nexus switches as per Table 7.

Table 7 VLAN ID List to Configure Cisco Nexus Switches

VLAN Name             VLAN Purpose          ID Used in this Document   Host NICs in VLAN
VM_External Traffic   For VM data           810                        2 Cisco vNICs
VM_Inbound Traffic    For VM data           192                        2 Cisco vNICs
Default               For host management   1                          1 Cisco 1 GigE I350 LOM


To configure VLANs on Cisco Nexus switches A and B, follow these steps:

1. Type config-t.

2. Type vlan < vm_External traffic VLAN ID>.

3. Type name vm_traffic.

4. Type exit.

5. Type vlan < VM_Inbound traffic VLAN ID>.

6. Type name VM_Inbound traffic.

7. Type exit.
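Using the VLAN IDs from Table 7 (810 and 192), the steps above consolidate to the following. Note that NX-OS VLAN names cannot contain spaces, so an underscore is used for the inbound-traffic VLAN name here:

```
switch(config)#vlan 810
switch(config-vlan)#name vm_traffic
switch(config-vlan)#exit
switch(config)#vlan 192
switch(config-vlan)#name VM_Inbound_Traffic
switch(config-vlan)#exit
```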

Configure Port Channels

This section provides information on creating and configuring the port-channels.

To create port channels in Cisco Nexus switches A and B, follow these steps:

1. From the global configuration mode, type interface Po100. (This port channel serves as the vPC peer link, as configured later in this section.)

2. Type description vPC peer-link.

3. Type exit.

4. Type interface Eth1/9-10.

5. Type channel-group 100 mode active.

6. Type no shutdown.

7. Type exit.
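A consolidated example of the peer-link configuration follows, using port channel 100 (Po100) to match the vPC peer-link assignment made later in this section:

```
switch(config)#interface port-channel 100
switch(config-if)#description vPC peer-link
switch(config-if)#exit
switch(config)#interface ethernet 1/9-10
switch(config-if-range)#channel-group 100 mode active
switch(config-if-range)#no shutdown
switch(config-if-range)#exit
```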

To create port channels on the Cisco UCS Server 1, follow these steps:

1. Type interface Eth1/7.

2. Type channel-group 7 mode active.

3. Type no shutdown.

4. Type exit.

5. Type interface Po7.

6. Type description <Cisco VIC on UCS Server 1 - For Nexus A>.

7. Type exit.

To create port channels on the Cisco UCS Server 2, follow these steps:

1. Type interface Eth1/8.

2. Type channel-group 8 mode active.

3. Type no shutdown.

4. Type exit.

5. Type interface Po8.

6. Type no shutdown.

7. Type description <Cisco VIC on UCS Server 2 - For Nexus B>.

8. Type exit.
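Consolidated, the server-facing port-channel configuration from the two procedures above looks like this (Eth1/7 to Server 1 and Eth1/8 to Server 2, per this design; substitute your own interface numbers as needed):

```
switch(config)#interface ethernet 1/7
switch(config-if)#channel-group 7 mode active
switch(config-if)#no shutdown
switch(config-if)#exit
switch(config)#interface port-channel 7
switch(config-if)#description Cisco VIC on UCS Server 1
switch(config-if)#exit
switch(config)#interface ethernet 1/8
switch(config-if)#channel-group 8 mode active
switch(config-if)#no shutdown
switch(config-if)#exit
switch(config)#interface port-channel 8
switch(config-if)#description Cisco VIC on UCS Server 2
switch(config-if)#exit
```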

Adding Port Channel Configurations

To add port channel configurations on Cisco Nexus switches A and B, follow these steps:

1. From the global configuration mode, type interface Po100.

2. Type switchport mode trunk.

3. Type switchport trunk allowed vlan <VLAN ID 810, VLAN ID 192>.

4. Type no shutdown.

5. Type exit.

6. Type interface Po7.

7. Type switchport mode trunk.

8. Type switchport trunk allowed vlan <mgmt VLAN ID, vm_traffic VLAN ID>.

9. Type no shut.

10. Type exit.

11. Type interface Po8.

12. Type switchport mode trunk.

13. Type switchport trunk allowed vlan < mgmt. VLAN ID, vm_traffic VLAN ID >.

14. Type no shut.

15. Type exit.
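With the VLAN IDs from Table 7 substituted in (810 and 192 for VM traffic, VLAN 1 for host management), the trunk configuration from the steps above consolidates to:

```
switch(config)#interface port-channel 100
switch(config-if)#switchport mode trunk
switch(config-if)#switchport trunk allowed vlan 810,192
switch(config-if)#no shutdown
switch(config-if)#exit
switch(config)#interface port-channel 7
switch(config-if)#switchport mode trunk
switch(config-if)#switchport trunk allowed vlan 1,810,192
switch(config-if)#no shutdown
switch(config-if)#exit
switch(config)#interface port-channel 8
switch(config-if)#switchport mode trunk
switch(config-if)#switchport trunk allowed vlan 1,810,192
switch(config-if)#no shutdown
switch(config-if)#exit
```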

Configuring Virtual Port Channels

To configure Virtual Port Channels (vPCs) on Cisco Nexus switches A and B, follow these steps:

1. From the global configuration mode, type vpc domain <Nexus vPC domain ID>.

2. Type peer-keepalive destination <Nexus B mgmt0 IP> source <Nexus A mgmt0 IP>.

3. Type exit.

4. Type interface Po100.

5. Type vpc peer-link.

6. Type exit.

7. Type interface Po7.

8. Type vpc 7.

9. Type exit.

10. Type interface Po8.

11. Type vpc 8.

12. Type exit.

13. Type copy run start.
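The full vPC configuration from the steps above can be entered as follows on each switch. The domain ID and keepalive addresses in angle brackets are placeholders; on Nexus B, swap the source and destination mgmt0 addresses:

```
switch(config)#vpc domain <Nexus vPC domain ID>
switch(config-vpc-domain)#peer-keepalive destination <Nexus B mgmt0 IP> source <Nexus A mgmt0 IP>
switch(config-vpc-domain)#exit
switch(config)#interface port-channel 100
switch(config-if)#vpc peer-link
switch(config-if)#exit
switch(config)#interface port-channel 7
switch(config-if)#vpc 7
switch(config-if)#exit
switch(config)#interface port-channel 8
switch(config-if)#vpc 8
switch(config-if)#exit
switch# copy running-config startup-config
```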

Now, all the ports and port-channels are configured with necessary VLANs, switchport mode and vPC configuration.

To validate this configuration, run the show vpc brief command in the CLI as shown in Figure 37.

Figure 37 Validating vPC Configuration

Make sure that all the required VLANs are in "active" state and correct set of ports and port-channels are part of the necessary VLANs in both the switches.

The port channel configuration can be verified using the show port-channel summary command. Figure 38 shows the expected output of this command.

Figure 38 Validating the Port Channel Configuration

Make sure that the vPC peer status is "peer adjacency formed ok" and that all the port channels, including the peer-link port channel, are "up".

Enabling Jumbo Frames

The Cisco solution for SharePoint 2010 on Microsoft Hyper-V architectures requires the MTU to be set to 9000 (jumbo frames) for efficient storage traffic. MTU configuration on Cisco Nexus 5000 Series switches falls under the global QoS configuration. You may need to configure additional QoS parameters as needed by the applications.

The following commands enable jumbo frames on Cisco Nexus switches A and B:

switch(config)#policy-map type network-qos jumbo

switch(config-pmap-nq)#class type network-qos class-default

switch(config-pmap-c-nq)#mtu 9216

switch(config-pmap-c-nq)#exit

switch(config-pmap-nq)#exit

switch(config)#system qos

switch(config-sys-qos)#service-policy type network-qos jumbo

Figure 39 Verifying the Jumbo Frame Size

Prepare and Configure the Cisco Nexus 3048 Switch

To prepare and configure Cisco Nexus 3048 switch, see: http://www.cisco.com/en/US/docs/switches/datacenter/nexus3000/sw/layer2/503_U3_1/b_Cisco_n3k_Layer_2_Switching_Config_503_U31.pdf

Microsoft Windows Server 2008 R2: Adding Roles and Features

To install server roles and features in Microsoft Windows Server 2008 R2, follow these steps:

1. Click Start > Server Manager.

2. In the "Roles Summary" view area of the "Server Manager" window, click Add Roles.

Figure 40 Adding Roles in Server Manager

3. In the "Select Server Roles" window, check the "Hyper-V" check box and click Next.

Figure 41 Adding Hyper-V

4. In the Hyper-V > Virtual Networks window, click Next without choosing any Ethernet card.

5. In the "Confirmation" window, click Install and reboot the server when prompted.

6. Login to the server and click Start > Administrative Tools > Hyper-V Manager.

7. In the "Hyper-V Manager", choose the server and click Virtual Network Manager.

8. In "New virtual network", choose External from the "What type of virtual network do you want to create" list and click Add.

Figure 42 Defining Type and Adding the Virtual Network

9. Enter a valid name and click the External radio button. From the drop-down list, choose the NIC assigned to vm_traffic VLAN and click Next.

Figure 43 Adding Virtual Network to NICs for Teaming

10. Repeat the above steps on all other servers to install the roles and features.
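On Windows Server 2008 R2, the Hyper-V role installation in steps 1 through 5 can also be scripted with the Server Manager PowerShell module, as sketched below; creating the external virtual network still requires Virtual Network Manager or WMI:

```powershell
# Run in an elevated PowerShell session on each host
Import-Module ServerManager
Add-WindowsFeature Hyper-V -Restart
```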

Create Virtual Machines

After the server roles and features have been configured, the next step is to create the virtual machines.

1. To create the first VM1, follow these steps:

a. Click Start > Administrative Tools > Hyper-V Manager.

b. In the Action area, click New, and then click Virtual Machine.

c. Click Next in the New Virtual Machine Wizard.

Figure 44 Creating the First Virtual Machine

d. In the Specify Name and Location window, enter the name of the virtual machine and the location to store it.

Figure 45 Adding Name and Location for the VM

e. In the Assign Memory window, enter the amount of memory required to run the guest operating system used on the VM.

Figure 46 Allocating Memory Required for the Operating System

f. To establish the network connectivity, connect the network adapter to an existing virtual network in the Configure Networking window.

Figure 47 Connecting the Network Adapter


Note Select the External network in case you want to use a remote image server to install an operating system on the test virtual machine.


g. In the Connect Virtual Hard Disk window, enter a name, location, and size to create the virtual hard disk required to install the operating system.

Figure 48 Creating Storage to Install the Operating System

h. In the Installation Options window, choose the method you want to use to install the operating system (see Figure 49).


Note To install an operating system from a network-based installation server, it is important that you configure the virtual machine with a legacy network adapter connected to an external virtual network. The external virtual network must have access to the same network as the image server.


Figure 49 Choosing an Option to Install the Operating System

i. Click Finish in the Summary Page window.

Figure 50 Summary of the VM Created

2. Configure the first VM created, as defined by the solution specification.

3. Run the Sysprep utility in the VM and shut down the VM.

4. To create the other VMs, copy the sysprepped VHD as required.

5. Define new VMs using the copied VHD.

6. Build each target VM.

The Microsoft SharePoint 2010 servers can now be installed and configured on the VMs created. For more information, see: http://technet.microsoft.com/en-us/library/cc262957.aspx

Network Design Considerations

In a SharePoint environment, high network traffic is possible between clients and WFE servers, between WFE servers and the database server, and between WFE servers and application servers. New features in Microsoft SharePoint 2010, such as Office Web Apps and digital asset storage and playback, introduce additional network traffic requirements compared with previous versions of Office SharePoint. As a best practice, separate the client-to-WFE HTTP traffic from the WFE-to-database traffic to improve network efficiency.

To fulfill the high network demands of the Microsoft SharePoint 2010 environment, the Cisco UCS is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates the three separate networks:

LANs

SANs

High-performance computing networks

The unified fabric lowers cost by reducing the number of network adapters, switches, and cables, and by decreasing power and cooling requirements, making it an efficient and cost-effective network.

Figure 51 VLAN Inbound Outbound Network Connections

For the purpose of this performance study, the servers were configured with the Cisco UCS P81E VIC, and the virtual servers that take part in the Network Load Balancing cluster were configured with two network adapters. One network adapter was connected to the private LAN (inbound network traffic) and the second to the user LAN (outbound network traffic). Jumbo frames (9000-byte MTU) were enabled on the adapters.

Figure 52 Virtual Port Channel Network Configuration

In this test SharePoint 2010 deployment, the access-layer LAN configuration for the Cisco UCS C240 M3 servers with the Cisco UCS P81E VIC consists of a pair of Cisco Nexus 5548UP switches: low-latency, line-rate, 10 Gigabit Ethernet and FCoE switches.

Two 10 Gigabit Ethernet uplink ports are configured with vPC, and NIC teaming is configured on the OS side. Together, vPC and NIC teaming combine multiple network connections in parallel to increase throughput beyond that of a single connection and to provide redundancy in case of a link failure.

Network Components

Table 8 lists the network components used for the deployment.

Table 8 Network Configuration Details

Network Components
Number of Components Used

Cisco Nexus 5548UP Switch

2

Cisco UCS P81E VIC Adapter

2


See the steps in the corresponding section to complete the task:

Configure the Cisco Nexus Switches

Enable Features

Set Global Configurations

Configure VLANs

Configure Port Channels

Adding Port Channel Configurations

Configuring Virtual Port Channels

Enabling Jumbo Frames

Storage Considerations

For content storage on SharePoint 2010, you must choose a suitable storage architecture. SharePoint 2010 content storage depends significantly on the underlying database; therefore, database and SQL Server requirements drive the storage choices.

One of the major advantages of the Cisco UCS C240 M3 Rack Server is its capability to expand over a wide range of storage-intensive infrastructure workloads ranging from large data to collaboration. The Cisco UCS C240 M3 offers up to 384 GB of RAM, 24 drives, and four 1 GE LAN interfaces built into the motherboard.

The Cisco UCS P81E VIC is a dual-port 10 Gigabit Ethernet PCIe adapter that can support up to 128 PCIe standards-compliant virtual interfaces to provide outstanding levels of internal memory and storage expandability along with exceptional performance.

Because this performance study addresses SharePoint 2010 for SMBs, 7.2 TB of disk capacity was allocated to the SharePoint environment using 300 GB drives. However, customers can use higher-capacity drives (SAS, SATA, or SSD) for their implementation, as the Cisco UCS C240 M3 Rack Server supports 300 GB, 600 GB, 1 TB, 2 TB, and 3 TB drives for various other workloads.

For more information on supported hardware, see:

http://www.cisco.com/en/US/prod/collateral/ps10265/ps10493/ps12370/data_sheet_c78-700629.html

The following storage provisioning was used in the SharePoint 2010 farm:

2 x 300 GB SAS drives were configured as boot drives (parent partition) with RAID 10.

4 x 300 GB SAS drives were provisioned to store VM files (child partitions) with RAID 5.

8 x 300 GB SAS drives were configured to host SQL data with RAID 5.

4 x 300 GB SAS drives were configured to host SQL logs with RAID 10.

6 x 300 GB SAS drives were provisioned for backup of VMs and data with RAID 10.

SharePoint database storage is provisioned on separate drives for databases and logs. Disks are configured with RAID 5 and RAID 10: database (.mdf) files are hosted on RAID 5 and log (.ldf) files on RAID 10.

To complete the RAID configuration, see the section Configure RAID.

Table 9 Storage Components

Storage Subsystem
LSI MegaRAID
Disks
Drive Type

Rack C240 M3 Local

LSI MegaRAID SAS9266-8i with SuperCap Power Module

(RAID backup unit) (RAID 0, 1, 5, 6, 10, 50, or 60)

24

SAS 15K RPM, 300 GB


Figure 53 SharePoint 2010 Storage Layout

Implement High Availability in SharePoint 2010

The SharePoint 2010 farm topology discussed in this paper is architected with SharePoint high availability at the application layer. High availability for the three-tier SharePoint farm under test is implemented as described in the following sections.

To implement high availability for the Web front-end tier, two Web Front-End servers are used. These servers host the same Web application and are load balanced with the Windows Network Load Balancing (NLB) feature.

Figure 54 Status of WFE Servers with Network Load Balancing

NLB is available in Windows Server 2008 Web, Standard, Enterprise, and Datacenter editions.

To set up NLB, follow these steps:

1. Click Start > Control Panel > Network Connections, and then click Local Area Connection on which Network Load Balancing is installed.

2. Click Properties. In the Local Area Connection Properties dialog box, click Internet Protocol (TCP/IP), and then click Properties.

3. In IP address, enter the address that you used as the "Cluster IP address" in the Network Load Balancing Properties dialog box under Cluster parameters. If the correct address is already shown, do not change the address.


Note You can also enter the dedicated IP address (that corresponds to the "Dedicated IP address" that you typed in the Network Load Balancing Properties dialog box under Host parameters) in this space, and then wait to enter the primary IP address of the cluster in the Advanced TCP/IP Settings dialog box.


4. In Subnet Mask, enter the subnet mask and the default gateway information for your TCP/IP network.

5. If you have to configure additional virtual IP addresses for your cluster (for example, if you are running a multihomed Web server), click Advanced and then click Add. You can specify additional virtual IP addresses for this network adapter and provide other information.

Figure 55 shows how to create an entry in DNS for the cluster.

Figure 55 Create New Host in DNS

Figure 56 shows how to set the site binding at the Web front-end server.

Figure 56 Set Binding in the Web Front-End Server

Figure 57 shows how to modify the Alternate Access Mapping for the default site collection http://Webfront-end-1 to http://SAV.

Figure 57 Modifying the Alternate Access Mapping Through System Settings

For information on configuring NLB, see:

http://technet.microsoft.com/en-us/library/cc754833(WS.10).aspx

To implement high availability for service applications, two application servers that host the same services are used.

Configuring High Availability at the Database Tier: SQL Mirroring

To implement high availability for databases, two machines are used to run the SQL Server. You can configure these for database mirroring (which requires duplicate storage), or you can configure them as part of a SQL Server failover cluster (which requires shared storage).

For the purposes of this study, the database tier is implemented with SQL mirroring along with a witness server. SQL mirroring is integrated with the SharePoint 2010 application. Figure 58 shows the database mirroring configuration.

Figure 58 SQL Mirroring Configuration for SharePoint Databases

1. Select the SharePoint database in the source server (Principal database), for mirroring.

2. Create a backup of the SharePoint database in the source SQL Server (Principal database) and restore it in the destination server with the same name. Make sure to restore the database with NORECOVERY. The destination server SharePoint database is the Mirror database.

3. Select Mirroring. Select Configure Security to configure mirroring for the required database.

4. Select the Mirror Server Instance and click Connect. Specify the destination (mirror) database server credentials to connect.

5. Leave the service account fields blank for both the principal and mirror instances.

6. Click Start Mirroring to start the mirroring from Principal database to Mirror database.

7. Verify that the databases are fully synchronized.

8. The SharePoint database in the source server is now marked as Principal, Synchronized.

9. You can do a manual failover, by selecting the Failover option.
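The wizard steps above correspond roughly to the following T-SQL sketch; the database name, file paths, fully qualified server names, and port 5022 are illustrative placeholders, and database mirroring endpoints are assumed to already exist on all three instances:

```sql
-- On the principal: back up the database and its transaction log
BACKUP DATABASE WSS_Content TO DISK = 'C:\Backup\WSS_Content.bak';
BACKUP LOG WSS_Content TO DISK = 'C:\Backup\WSS_Content.trn';

-- On the mirror: restore both WITH NORECOVERY, then point at the principal
RESTORE DATABASE WSS_Content FROM DISK = 'C:\Backup\WSS_Content.bak' WITH NORECOVERY;
RESTORE LOG WSS_Content FROM DISK = 'C:\Backup\WSS_Content.trn' WITH NORECOVERY;
ALTER DATABASE WSS_Content SET PARTNER = 'TCP://principal.contoso.local:5022';

-- On the principal: complete the session and add the witness
ALTER DATABASE WSS_Content SET PARTNER = 'TCP://mirror.contoso.local:5022';
ALTER DATABASE WSS_Content SET WITNESS = 'TCP://witness.contoso.local:5022';
```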

Figure 59 Window Showing Principal and Mirror Database

Figure 60 SharePoint 2010 configuration of Failover Server

Failover instances for the SharePoint 2010 content databases can be set through the SharePoint Central Administration Web UI, but for the configuration database the failover instance can be set only through PowerShell.


Note Microsoft Windows PowerShell is a task automation framework consisting of a command-line shell and an associated scripting language built on top of, and integrated with, the .NET Framework. PowerShell provides full access to COM and WMI, enabling administrators to perform administrative tasks on both local and remote Windows systems.


For more information on PowerShell see: http://technet.microsoft.com/en-us/library/bb978526.aspx
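As an illustration of the PowerShell method, the failover (mirror) instance for the configuration database can be set through the SPDatabase object's AddFailoverServiceInstance method; the database and server names below are placeholders:

```powershell
# Run in the SharePoint 2010 Management Shell; "SQLMIRROR" is a placeholder name
$db = Get-SPDatabase | Where-Object { $_.Name -eq "SharePoint_Config" }
$db.AddFailoverServiceInstance("SQLMIRROR")
$db.Update()
```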

Figure 61 shows the output of PowerShell when the failover instances for the SharePoint 2010 configuration databases are set.

Figure 61 PowerShell Output Showing the Failover Server Instance

For more information on configuring database mirroring, see:

http://technet.microsoft.com/en-us/library/ms188712.aspx

Microsoft SharePoint 2010 Farm Scale Out with Cisco UCS C220 M3 Rack-Mount Servers

To meet the needs of an ever-growing user base, the enterprise should plan for growth and look for solutions that scale. SharePoint 2010, with its flexible architecture, can scale to meet enterprise demands.

The SharePoint topology can be easily scaled out, and the Cisco UCS C220 M3 Rack Server meets the substantial compute requirements of the Web and application tiers. These two tiers do not have large storage requirements; therefore, the Cisco UCS C220 M3, with its excellent computing resources, is an ideal fit.

The Cisco UCS C220 M3 is a 1-rack-unit (1RU) server based on the Intel® Xeon® processor E5-2600 product family. It is paired with the Cisco UCS P81E Virtual Interface Card, a Cisco innovation: a virtualization-optimized Fibre Channel over Ethernet (FCoE) PCI Express (PCIe) 2.0 x8 10-Gbps adapter designed for use with Cisco UCS C-Series Rack-Mount Servers. The virtual interface card is a dual-port 10 Gigabit Ethernet PCIe adapter that can support up to 128 PCIe standards-compliant virtual interfaces, which can be dynamically configured so that both their interface type (network interface card [NIC] or host bus adapter [HBA]) and identity (MAC address and worldwide name [WWN]) are established using just-in-time provisioning, delivering significant performance and efficiency gains.

The Cisco UCS C220 M3 Rack Server also offers up to 256 GB of RAM, eight drives or SSDs, and two 1 Gigabit Ethernet LAN interfaces built into the motherboard, delivering outstanding levels of density and performance in a compact package.

Figure 62 SharePoint 2010 Farm Architecture with Cisco UCS Rack 220 M3

Table 10 SharePoint 2010 Farm VM Components

VM Tier
Role Name
Number of vCPUs
Memory Size (GB)

WFE

Web Front-End Server (NLB)

4

8

App

Application Server (Search Service)

4

16

SQL

SQL Server 2012 (Mirrored)

4

32


Measuring Microsoft SharePoint 2010 Server Performance

The physical architecture of the test harness consists of one Microsoft Visual Studio Team System (VSTS) 2010 SP1 controller and VSTS agents. The servers and network infrastructure that together create the SharePoint environment are tailored with respect to size and topology, as explained in the subsequent sections.

Modeling a SharePoint Server 2010 SP1 environment begins with analyzing current requirements and estimating the expected future demands and targets of the deployment.

Key solution decisions are based on the important metrics and parameters that the environment must support.

The SharePoint Server 2010 SP1 environment is modeled considering enterprise workload characteristics such as the number of users and the most frequently used operations, and dataset characteristics such as content size and content distribution, and is tailored in accordance with Microsoft recommendations.

Figure 63 shows the specific architecture of the SharePoint 2010 server farm used as the test.

Figure 63 SharePoint Farm Architecture Under Test

Workload Characteristics

Sizing the workload of a SharePoint environment is one of the key factors of the solution. The system under test should sustain the described workload demands, user base, and usage characteristics.

Table 11 provides workload characteristics considered to size the farm under test.

Table 11 Workload Characteristics

Workload Characteristics
Value

Number of Users (unique users in a 24-hour period)

20,000

Concurrent users at Peak Time (Distinct users generating requests in a given time frame)

2,000


Workload Mix (60 RPH)

Farm requirements include the number of users and their usage characteristics. This performance test considers a heavy profile, in which a single active user makes 60 different requests per hour to the SharePoint 2010 farm under test; that is, 60 requests/hour/user.

User activities, or usage characteristics, are modeled on the needs of an enterprise business environment such as marketing or engineering. In general, the environment hosts central team sites and publishing portals for internal teams, as well as enterprise collaboration for organizations, teams, and projects. Sites created in these environments are used as communication portals, host applications for business solutions, and provide a channel for general collaboration. Searching, editing and uploading documents, participating in discussions, posting to blogs, and commenting on blogs are among the most common activities. These activities, considered typical of an enterprise user, are included in the workload used for the performance test.

Figure 64 shows the various requests a user makes over a period of one hour, depicting the workload generated during a peak hour. This workload is applied to the SharePoint farm after a brief warm-up time.

Figure 64 Workload Requests Over an Hour

Dataset Capacity

The dataset capacity of the farm under test consists of the defined SharePoint 2010 content workload.

Table 12 provides a few key metrics that were used to determine the capacity of the dataset for the test.

Table 12 Dataset Characteristics

Dataset Characteristics
Value

Content Database size each

1TB

Number of sites

4000

Number of content databases

2

Total number of databases

10

Number of site collections

500

Number of Web applications

20


Performance Test Framework

This performance test measures the responsiveness, throughput, reliability, and scalability of a Microsoft SharePoint 2010 SP1 farm under a given workload. The results and analysis of this performance test can help you estimate the hardware configuration required for a Microsoft SharePoint 2010 SP1 farm to support up to 20,000 users with 10 percent concurrency in production operation.

Test Methodology

For this performance test, the load on the farm is applied using the Microsoft Load Testing Kit (LTK), modified to make it more flexible and enhanced to generate the desired load. The LTK generates a Visual Studio Team System (VSTS) 2010 SP1 load test based on the Windows SharePoint Services 3.0 Internet Information Services logs. A content database of 1 TB was created, containing the sites, site collections, and other important features that constitute the dataset. For more information on the created dataset, see the section Dataset Capacity.

VSTS Test Rig

A group of servers is used to generate a simulated load for testing. A single controller machine runs tests remotely and simultaneously on several servers with the help of one or more agents; together these are called a rig. The rig is employed to generate more load than a single computer can. The controller coordinates the agents: it sends the load test to the agents, the agents run the test, and the controller collects the test results.

Figure 65 shows the rig created for this performance test. In the test scenario the rig consists of one controller and multiple agents with a Domain Controller.

Figure 65 Rig to Generate Loads


Note The agent takes a set of tests and a set of simulation parameters as input. A key concept in Test Edition is that the tests are independent of the computer on which they are run.


Performance Tuning

Caching Microsoft SharePoint 2010 Farm

Microsoft SharePoint 2010 SP1 has several methods of caching data and objects to help improve performance for the end user. When a client requests a page, a copy of the page is stored temporarily in the output cache. Although the duration of the cache is typically short (the default is 60 seconds), this can boost the performance of WFE servers and reduce latency.

Cache profiles are available so that the site administrators can control the output cache behavior. Administrators can also determine whether different users such as the content editors can receive cached pages. Administrators can also adjust the output cache at the site, site collection, or Web application level.

Microsoft SharePoint 2010 uses the object cache to temporarily store objects such as lists, libraries, or page layouts on the WFE server. Caching enables the WFE server to render pages more quickly, reducing the amount of data that is required from the SQL Server databases.

Microsoft SharePoint 2010 has a BLOB cache that temporarily stores digital assets, such as image or media files, although you can use it with any file type. Using the BLOB cache in conjunction with the Bit Rate Throttling feature in IIS 7.0 enables progressive download for digital assets: media files are downloaded in chunks, and playback starts after the first chunk is downloaded rather than after the whole file. You can enable the BLOB cache and control its size at the Web application level. The default size is 10 GB, and the BLOB cache is disabled by default.
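The BLOB cache is controlled by the BlobCache element in the Web application's web.config file; the location, file-type pattern, and size below are illustrative (the stock path expression lists many more extensions):

```xml
<!-- maxSize is in GB; set enabled="true" to turn the BLOB cache on -->
<BlobCache location="C:\BlobCache\14"
           path="\.(gif|jpg|jpeg|png|css|js|wmv|mp3)$"
           maxSize="10"
           enabled="true" />
```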

In the performance test, the Microsoft SharePoint 2010 caches were enabled to improve the overall response time of the farm.

For more information on Cache Setting Operations, see: http://technet.microsoft.com/en-us/library/cc261797.aspx

Environment Configuration Tuning

Table 13 shows the settings applied when configuring the environment to enhance its performance and capacity.

Table 13 Environment Configuration Tuning

Settings
Value
Notes

Site collection:
Object Caching (On | Off): ON
Anonymous Cache Profile (Select): Enabled
Object Cache (Off | n MB): ON, 10 GB
Cross List Query Cache Changes (Every Time | Every n seconds): 60 seconds

Enabling the output cache improves server efficiency by reducing calls to the database for data that is frequently requested.

Site collection cache profile (Select): Intranet (Collaboration Site)

"Allow writers to view cached content" is checked, bypassing the ordinary behavior of not letting people with edit permissions have their pages cached.

Object Cache (Off | n MB): ON, 1 GB

The default is 100 MB. Increasing this setting enables additional data to be stored in the front-end Web server memory.

Usage Service: Trace Log, days to store log files (default 14 days): 5 days

The default is 14 days. Lowering this setting can save disk space on the server where the log files are stored.

Database Server, default instance: max degree of parallelism: 1

The default is 0. To ensure optimal performance, Microsoft strongly recommends that you set max degree of parallelism to 1 for database servers that host SharePoint Server 2010 SP1 databases.

For more information about how to set max degree of parallelism, see max degree of parallelism Option (http://go.microsoft.com/fwlink/?LinkId=189030).
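The max degree of parallelism setting in Table 13 can be applied on the SQL Server instance with sp_configure, as sketched below:

```sql
-- Run on the SQL Server instance hosting the SharePoint 2010 SP1 databases
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
```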


HTTP Throttling

HTTP throttling is a new feature in SharePoint 2010 that allows the server to reject requests when it is too busy. Every five seconds, a dedicated job runs that checks server resources against the resource levels configured for the server. By default, server CPU, memory, requests in queue, and request wait time are monitored. The server enters a throttling period after three consecutive unsuccessful checks and remains in that period until a successful check occurs. Requests generated before the server entered throttling mode are accepted and completed, but any new HTTP GET or Search Robot request generates a 503 error message and is logged in the event viewer. While the server is in a throttling period, no new timer jobs are started.

To monitor server performance, HTTP throttling is turned off, as shown in Figure 66.

Figure 66 Setting Off HTTP Throttling to Monitor Server Performance

Performance Results and Analysis

Microsoft SharePoint 2010 server performance in general varies from environment to environment, depending on the complexity of the deployment and the components involved in the architecture.

Performance of the SharePoint architecture is determined by the user experience.

The following are the major performance counters that measure user experience:

Requests per second—The number of requests per second handled by the SharePoint server.

Average response time—The amount of time the SharePoint server takes to return the results of a request to the user.

Average page response time—The average time to download a page and all of its dependent requests, such as images, CSS, and JS files.

The following graphs show the results as we applied the described workload (60 RPH) on the test SharePoint Farm.

Requests Per Second

The following graph shows the highest number of requests received per second for several user loads (60 RPH). The graph shows smooth performance of the virtual servers on the Cisco UCS C240 M3 servers: requests per second increase along with the user load without stressing the server. On average, the Microsoft SharePoint 2010 farm sustained 171 requests per second. The graph also shows the possibility of scaling the user load further while maintaining stable server performance.

Figure 67 Requests Received for Varying User Loads

Average Page Time

The following graph shows the SharePoint 2010 average page time remaining well below one second. The SharePoint response time was less than one second in the performance test for a concurrent load of 2,000 users; response time varies as the load on SharePoint 2010 increases.

Figure 68 Average Page Response Time for Varying User Load

Average Response Time

The following graph shows the average response time for several user loads (60 RPH) on the SharePoint farm. The designed SharePoint farm can support more than 2,000 concurrent users while achieving sub-second response times; the graph shows the average response time well below one second, demonstrating the farm's efficiency and headroom.

Figure 69 Sub-second Response Time Average Response Time

Pages per Second

The following graph shows the pages-per-second metric for several user loads (60 RPH) on the SharePoint farm. The SharePoint 2010 farm served an average of 113 pages per second.

Figure 70 Page Response for Varying User Loads

Virtual Server- Processor Utilization

Web Front-End Server

The following graph shows the CPU utilization of the two SharePoint Web Front-End servers. Under a heavy workload of 2,000 users, with the Network Load Balancing feature balancing load at the Web front-end tier, the virtual servers hosted on the Cisco UCS C240 M3 servers showed average CPU utilization of around 65 percent. The graph shows a linear rise in CPU utilization as the user load and workload increase.

Note WFE server CPU utilization was measured with the search service turned off.

Figure 71 CPU Utilization of the Web Font-End Tier

Application Server

The following graph shows the application server CPU utilization. The application server hosted the Central Administration and search services. The spikes shown in the graph are due to the search crawl during the performance test; at the application tier, both virtual servers showed spikes of 100 percent CPU utilization.

Figure 72 Application Server CPU Utilization

Database Server

The following graph shows the SQL Server 2012 database server CPU utilization. The high-availability scenario requires one server to be active and another to be mirrored. The CPU spikes in the graph show the search crawl service updating the search databases. On average, the overall CPU utilization at the database tier remained at around 30 percent with the virtual servers hosted on the Cisco UCS C240 M3.

Figure 73 Database Server CPU Utilization

Note The database server CPU utilization shown in Figure 74 was measured with the search crawl kept idle.

Figure 74 Database Server CPU Utilization

Network Utilization

The following graph shows the network utilization at Web Front-End, Application and Database tiers of the SharePoint 2010 SP1 farm. The graph also shows the aggregated performance numbers of the network utilization on all servers in the farm.

Figure 75 Network Utilization at Web Front-End, Application and Database Tiers

SharePoint 2010 Server Memory Utilization

The following graph shows the memory utilization of the SharePoint 2010 server farm under heavy workloads of 2000 users.

Maximum memory utilization on the Web Front-End servers, application server, and SQL Server under the maximum user load is within 50 percent of the available physical memory. This indicates memory headroom for further expansion while providing high availability for all the SharePoint roles hosted on the various SharePoint tiers.

Figure 76 Memory Utilization of SharePoint 2010 Server Farm

Physical Host CPU Utilization

Hyper-V has performance counters specific to both the host environment (Hyper-V) and each virtual machine, allowing characterization of the workload across the entire environment.

To monitor the performance of Hyper-V, the perfmon.exe tool was used. Perfmon displays and records the Hyper-V performance counters; the names of the relevant counter objects are prefixed with "Hyper-V". Always measure the CPU usage of the physical host using the Hyper-V Hypervisor Logical Processor performance counters: the CPU utilization counters reported by Task Manager and Performance Monitor within the root and child partitions do not accurately capture the physical CPU usage.
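These hypervisor counters can also be sampled from the command line with the built-in typeperf tool; the five-second interval and sample count below are arbitrary choices:

```
typeperf "\Hyper-V Hypervisor Logical Processor(*)\% Total Run Time" -si 5 -sc 60
typeperf "\Hyper-V Hypervisor Logical Processor(*)\% Guest Run Time" -si 5 -sc 60
```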

For this performance study, we considered the counters \Hyper-V Hypervisor Logical Processor (*) \% Total Run Time and \Hyper-V Hypervisor Logical Processor (*) \% Guest Run Time. The (*) indicates that statistics are gathered for all logical processors on the physical host. The following graph shows \Hyper-V Hypervisor Logical Processor (*) \% Total Run Time.

The counter represents the total non-idle time of all logical processors.

Figure 77 Total Execution Time of all Logical Processors of Physical Host 1

The graph below represents \Hyper-V Hypervisor Logical Processor(*)\% Guest Run Time.

The counter is an average percentage of time the guest code runs across all the logical processors of the host.

Figure 78 Guest Execution Percentage for Physical Host 1

The graph below represents \Hyper-V Hypervisor Logical Processor(*)\% Total Run Time.

The counter represents the total non-idle time of the logical processor(s).

Figure 79 Total Execution Time of all Logical Processors of Physical Host 2

The graph below represents \Hyper-V Hypervisor Logical Processor(*)\% Guest Run Time.

The counter is an average percentage of time the guest code runs across all the logical processors of the host.

Figure 80 Guest Execution Percentage for Physical Host 2
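Counter data like the above is typically captured with typeperf, which can log the Hyper-V counters to CSV for offline analysis. As a minimal sketch of such post-processing, the following Python snippet averages the sampled values of the % Total Run Time counter; the host name and sample values in SAMPLE_CSV are hypothetical, not figures from this study.

```python
import csv
import io

# Hypothetical sample of typeperf CSV output, e.g. from:
#   typeperf "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" -sc 4 -f CSV -o hv.csv
SAMPLE_CSV = """\
"(PDH-CSV 4.0)","\\\\HOST1\\Hyper-V Hypervisor Logical Processor(_Total)\\% Total Run Time"
"04/10/2013 10:00:01","18.5"
"04/10/2013 10:00:16","22.1"
"04/10/2013 10:00:31","25.4"
"04/10/2013 10:00:46","19.0"
"""

def average_counter(csv_text: str) -> float:
    """Average the counter value (second column) across all sample rows."""
    reader = csv.reader(io.StringIO(csv_text))
    next(reader)                                   # skip the PDH header row
    values = [float(row[1]) for row in reader if len(row) > 1]
    return sum(values) / len(values)

avg = average_counter(SAMPLE_CSV)
print(f"Average % Total Run Time: {avg:.2f}")      # 21.25 for the sample above
```

The same parsing applies to the % Guest Run Time counter; only the counter name in the typeperf command changes.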

Performance Results and Analysis

The test was functionally successful, meeting the criteria set for a 20,000-user workload with approximately 10 percent concurrency. Table 14 summarizes the performance results with respect to the most important realistic enterprise concerns.

Table 14 Performance Results for Significant Enterprise Concerns

Crucial Concern: Performance Result

Response Time (User Concern): Less than one second

Throughput (Business Concern): 60 tests per user per hour

Requests Per Second: 171 RPS

Resource Utilization (System Concern): Under 25 percent CPU utilization on the hosts running the SharePoint 2010 SP1 farm
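The Table 14 figures can be cross-checked with simple arithmetic. The following sketch derives the implied tests per second and requests per test; the requests-per-test figure is our own derivation, not a number stated in the study.

```python
# Back-of-the-envelope check of the Table 14 figures (derived values are
# our own arithmetic, not stated in the source document).
total_users = 20_000
concurrency = 0.10                 # ~10 percent concurrency
tests_per_user_per_hour = 60       # throughput from Table 14
measured_rps = 171                 # requests per second from Table 14

concurrent_users = total_users * concurrency                      # 2,000
tests_per_second = concurrent_users * tests_per_user_per_hour / 3600
requests_per_test = measured_rps / tests_per_second

print(f"Concurrent users:  {concurrent_users:.0f}")
print(f"Tests per second:  {tests_per_second:.1f}")               # ~33.3
print(f"Requests per test: {requests_per_test:.1f}")              # ~5.1
```

In other words, 2,000 concurrent users running 60 tests per hour generate roughly 33 tests per second, so the measured 171 RPS corresponds to about five requests per test.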


Summary

The Cisco Unified Computing System meets server virtualization challenges with a next-generation data center platform that unifies computing, networking, storage access, and virtualization support in a cohesive, centrally managed system that coordinates with virtualization software such as Microsoft Hyper-V. The system integrates enterprise-class servers in a 10 Gigabit Ethernet unified network fabric that provides the I/O bandwidth and functions that both virtual machines and virtualization software need. Cisco Extended Memory Technology offers a highly economical approach to establishing the large memory footprints that high virtualization density requires. Finally, the Cisco Unified Computing System integrates the network access layer into a single, easily managed entity in which links to virtual machines can be configured, managed, and moved as readily as physical links. The Cisco Unified Computing System continues Cisco's long history of innovation in architecture, technology, partnerships, and services.

The Cisco UCS C240 M3 Rack-Mount Server delivers high performance and internal storage from a single vendor with a long history of innovation in architecture, technology, partnerships, and services.

The Cisco UCS C240 M3 Rack-Mount Server is geared towards organizations with mounting data and storage demands. The enterprise-class Cisco C240 M3 Server is equipped with Intel Xeon E5-2600 processors, which deliver top-class performance, energy efficiency, and flexibility. Cisco engineers have designed the Cisco C240 M3 Rack Server with the Cisco UCS P81E VIC to handle a broad range of applications, including workgroup collaboration, virtualization, consolidation, massive data infrastructures, and small and medium-sized business (SMB) databases. The Cisco UCS C240 M3 Rack Server, with its rich stack of technology offerings, delivers optimal compute and network performance and provides internal storage in one cohesive system to meet the challenges of virtualization technologies and the growing demands of small and medium-sized businesses.

Microsoft SharePoint 2010 is an extensible and scalable web-based platform consisting of tools and technologies that support collaboration and information sharing within teams, throughout the enterprise, and on the web. Microsoft SharePoint 2010 is both performance and storage intensive. Not all storage-intensive workloads are alike, and the Cisco UCS C240 M3 server's disk configuration delivers balanced performance and expandability to best meet individual workload requirements. The Cisco UCS C240 M3 Rack Server is designed for both performance and expandability across a wide range of storage-intensive infrastructure workloads, from large data sets to collaboration.

Three-tier architecture provides an ideal SharePoint topology. Servers at each tier host the SharePoint components that together make up a SharePoint 2010 farm: servers at the web tier handle web and search query functions, servers at the application tier are responsible for search indexing and various service application functions, and the server at the database tier hosts the SQL Server databases for the farm. This paper provides guidelines on creating a virtualized SharePoint 2010 farm using Microsoft Hyper-V on the Cisco UCS C240 Rack Server.

This performance study is intended to characterize the performance and capacity of a medium-sized, Hyper-V virtualized SharePoint 2010 farm hosted on the Cisco UCS C240 M3 Rack Server.

The performance study showed that the medium-sized, Hyper-V virtualized SharePoint farm hosted on the Cisco UCS C240 M3 Rack Server with its own internal storage offered improved application performance and operational efficiency, and could easily support 20,000 users with a minimum of 10 percent concurrency. With 10 Gbps network connectivity between the tiers, the farm provided an average response time well below one second.

Bill of Materials

Table 15 and Table 16 provide details of the components used in this solution.

Table 15 Component Description

Description: Part #

Cisco UCS C240 M3 rack servers: UCSC-C240-M3S

CPU for C240 M3 rack servers: UCS-CPU-E5-2690

Memory for C240 M3 rack servers: UCS-MR-1X082RY-A

RAID local storage for C240 rack servers: UCSC-RAID-11-C240

Cisco VIC adapter: N2XX-ACPCI01

Cisco UCS C220 M3 rack servers: UCSC-C220-M3S

CPU for C220 M3 rack servers: UCS-CPU-E5-2650

Memory for C220 M3 rack servers: UCS-MR-1X082RY-A

RAID local storage for C220 rack servers: UCSC-RAID-11-C220

Cisco VIC adapter: N2XX-ACPCI01

Cisco Nexus 5548UP: N5K-C5548UP-FA

Cisco Nexus 3048: N3K-C3048TP-1GE

Cisco Nexus 5548UP Storage Protocols Service License: N5548P-SSK9

10GBASE-SR SFP Module: SFP-10G-SR


Table 16 Software Details

Platform: Software Type

Cisco UCS Rack C240: Management

Cisco Nexus 5548UP OS: OS

NetApp 3270 OS: OS

SQL 2012: Database application

SharePoint 2010 Enterprise: Application

References

Microsoft SharePoint 2010:

http://technet.microsoft.com/en-us/sharepoint/ee263917

Cisco UCS:

http://www.cisco.com/en/US/netsol/ns944/index.html

VMware vSphere:

http://www.vmware.com/products/vsphere/overview.html

NetApp Storage Systems:

http://www.netapp.com/us/products/storage-systems/

Cisco Nexus:

http://www.cisco.com/en/US/products/ps9441/Products_Sub_Category_Home.html

Cisco Validated Design-FlexPod for VMware:

http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/flexpod_vmware.html

Cisco Nexus 5000 Series NX-OS Software Configuration Guide:

http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/configuration/guide/cli/CLIConfigurationGuide.html

NetApp TR-3298: RAID-DP: NetApp Implementation of RAID Double Parity for Data Protection:

http://www.netapp.com/us/library/technical-reports/tr-3298.html

Microsoft Visual Studio Ultimate 2010:

http://www.microsoft.com/visualstudio/en-us/products/2010-editions/ultimate

Microsoft TechNet Articles:

Capacity management and sizing overview for SharePoint Server 2010

http://technet.microsoft.com/en-us/library/ff758647.aspx

Cache settings for a Web application (SharePoint Server 2010)

http://msdn.microsoft.com/en-us/library/ms189910.aspx

Microsoft SharePoint 2010

http://technet.microsoft.com/en-us/sharepoint/ee263917