Last Updated: June 25, 2015
About Cisco Validated Designs
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2015 Cisco Systems, Inc. All rights reserved.
Table of Contents
Data Center Virtualization and Cloud Management
VMware vSphere ESXi and VMware vCenter Server
IBM Storwize V7000 Unified Storage
Cloud Overview and Considerations
On-Demand Self-Service Provisioning and Automation
Solution Architecture and Design
Cloud Management Environment Sizing
Minimum System Requirements for a Single-Node Setup
Cisco UCS Director Installation and Configuration
Initial Cisco UCS Director Setup
IBM Storwize V7000 Storage Tiering
IBM Storwize V7000 Storage Pool Setup
Case Study - Infrastructure Management for In-House Software Development
Create End User Self-Service Policy
Clone Cisco UCS Director Policies
Workflow Triggers and Schedules
Scenarios for Triggers and Schedulers
UCS Director Bare-Metal Provisioning
Cisco UCS Director-BMA Configuration
Bare Metal Workflow Orchestration
Virtual Machine Lifecycle Services
Manage/Reconfigure an Existing Virtual Machine
Decommission a Virtual Machine
Bare-Metal Workflow for Linux
IT departments have embraced the efficiencies, such as hardware consolidation and agility, brought about by virtualization and have looked to extend those efficiencies, in an agnostic manner, to application-ready platforms. A platform with these characteristics sets the foundation for the delivery of IT resources as a service: the cloud. Because not all workloads can or will be virtualized on a hypervisor, it is also desirable to extend the essential Infrastructure-as-a-Service (IaaS) features of agility and measured self-service to non-virtual environments. Capabilities that allow the easy introduction of such an application-ready and platform-independent approach lead to a more cost-effective and inclusive IT-as-a-Service (ITaaS) cloud. Cloud computing requires automation and self-service mechanisms that allow users to consume infrastructure without manual intervention for provisioning or configuration of pooled resources. Customized workflows that orchestrate catalog items offered securely through a self-service portal give businesses the opportunity to offer IT as a service.
This Cisco Validated Design (CVD) leverages the capabilities of Cisco UCS Director (UCSD) to deploy a multi-tenant IaaS cloud environment on the VersaStack integrated platform. The deployment model described in this document is an enterprise private cloud.
IaaS is a cloud service model where Information Technology (IT) resources are delivered as a service rather than a product. Due to the nature of delivery and capabilities expected and provided, cloud computing offers a value proposition that is different from traditional enterprise IT environments. Instances can be provisioned and terminated more quickly while sharing resources. The consumer can therefore expect to be billed only for resources used without incurring steep initial capital costs or hiring a dedicated IT department. For the provider, since the Cloud can reside in a remote location with a lower cost structure, a centralized model that can provide greater economies of scale is feasible. However, a standard implementation of an IaaS platform requires certain key features to be available. These features include self-service provisioning, a means of measuring and billing for services used and security to ensure appropriate access to data.
Any shared platform, including cloud, opens up access to key resources such as infrastructure, users and applications. Ensuring the consistent and correct delivery of data on a shared platform comes with increased risk and complexity. System consolidation efforts have also accelerated the movement toward co-hosting on integrated platforms and the likelihood of compromise is increased in a highly shared environment. This situation presents a need for enhanced security and an opportunity to create a framework and platform that instills trust. Many enterprises and IT service providers are developing cloud service offerings for public and private consumption. Regardless of whether the focus is on public or private cloud services, these efforts share several common objectives:
· Cost-effective use of IT resources through co-hosting
· Better service quality through resiliency and QoS features
· Increased operational efficiency and agility through automation
Enabling enterprises to migrate environments to a cloud architecture requires the capability to provide customer confidentiality while delivering the management and flexibility benefits of shared resources. Both private and public cloud providers must secure all customer data, communication and application environments from unauthorized access.
This document illustrates the design and deployment steps required for implementing an IaaS solution using Cisco UCS Director (UCSD) 5.3.1 on the VersaStack platform, which consists of Cisco UCS compute, Cisco Nexus Ethernet switches, Cisco MDS Fibre Channel switches, and the IBM Storwize V7000 Unified storage array. The hypervisor used for virtual machines is VMware vSphere 5.5 U2. The solution as implemented provides an enterprise Private Cloud (ePC) that can be hypervisor/OS agnostic and application ready. Standardized integration points between Cisco UCS Director and third-party tools for trouble-ticketing, notification, and event monitoring provide for a cohesive and complete IaaS solution.
Most Cisco UCS Director features covered in this Cisco Validated Design are available in a platform-agnostic manner. Features such as the self-service portal, monitoring, chargeback for billing, orchestration/automation, and Role-Based Access Control (RBAC) lead to benefits such as agility, efficiency, and cost savings while providing the necessary levels of security. RBAC can also be leveraged to conform to organizational roles and responsibilities where different groups manage compute, network, and storage resources. In this scenario, groups of users are assigned privileges (read/write for tasks) and mapped to resources (compute, network, or storage), with provisions to set resource limits and approvals as necessary. This framework provides a consistent method, through a single tool as the management point for the entire infrastructure, while maintaining established organizational boundaries.
Configuration details unique to this deployment are described here, while the VersaStack platform deployment procedure is covered by reference to an earlier CVD built on similar components. This end-to-end enterprise Private Cloud (ePC) solution takes full advantage of the unified infrastructure components and Cisco UCS Director device support to provide provisioning, monitoring, and management of the infrastructure by consumers.
It is beyond the scope of this document to consider performance-related details pertaining to the platform. Also excluded are details about integrating Cisco UCS Director with third-party enterprise tools, such as those for trouble-ticketing and monitoring.
The reader of this document is expected to have the necessary training and background on Cisco UCS Director, along with knowledge about installing and configuring the VersaStack platform. References to previous works of relevance, both internal and external, are provided where applicable and it is recommended that the reader be familiar with these documents. Readers are also expected to be familiar with the infrastructure and database security policies of the customer installation. This document is intended for executives, partners, system architects and cloud administrators of IT environments who want to implement or use an IaaS platform with Cisco UCS Director.
This solution consists of a foundational platform of VersaStack and a management suite as shown in Figure 1.
Figure 1 VersaStack Components
This Cisco cloud solution integrates the best of Cisco's hardware and management suite with IBM Storwize V7000 Unified storage and VMware products to accelerate implementation and adoption of cloud infrastructure. The architecture provides sufficient flexibility to allow for customer choice while ensuring compatibility and support for the entire stack. The solution is applicable to customers who wish to preserve their investment and to those who want to build out new infrastructure dedicated to a cloud. This solution takes advantage of the strong integration between Cisco and IBM products/technologies and Cisco UCS Director. The Cisco Nexus 9396 switch used in this configuration operates in standalone mode, with capabilities similar to other Cisco Nexus 9000 Series switches. Also included is a pair of Cisco MDS Fibre Channel switches. The MDS switches support storage cluster traffic and provide for scaling the architecture beyond the Point-of-Delivery (PoD) to eliminate stranded capacity. At the storage layer, the configuration has been tested with the IBM Storwize V7000 Unified storage system. The validated architecture used for this CVD is shown in Figure 2.
Figure 2 Validated Architecture
Cisco UCS Director enables customized, self-service provisioning and lifecycle management of cloud services that comply with established business policies. Cisco UCS Director provides a secure portal where authorized administrators, developers, and business users can request new IT services and manage existing compute resources from predefined user-specific menus. It also enables administrators and architects to develop complex automation tasks within the workflow designer using predefined tasks from a library.
VMware vSphere ESXi is a virtualization platform for building cloud infrastructures. vSphere enables users to confidently run their business-critical applications to meet demanding service level agreements (SLAs). This solution gives the consumer operational insight into the virtual environment for improved availability, performance and capacity utilization.
IBM Storwize V7000 Unified is a virtualized, flash-optimized, enterprise-class storage system that provides the foundation for implementing an effective storage infrastructure with simplicity while transforming the economics of data storage. Designed to complement virtual server environments, these modular storage systems deliver the flexibility and responsiveness required for changing business needs. The system supports the Fibre Channel (FC), iSCSI, and NFS/CIFS protocols. IBM Storwize V7000 Unified can deploy the full range of Storwize software features, including:
· IBM Real-time compression
· IBM Easy Tier for automated storage tiering
· External storage virtualization and data migration
· IBM Active Cloud Engine for policy-based file tiering
· Synchronous data replication with Metro Mirror
· Asynchronous data replication with Global Mirror
· FlashCopy for near-instant data backups
Cloud computing is a model for enabling convenient and on-demand access to a shared pool of configurable computing resources. The expectation is to be able to rapidly provision and release with minimal effort or interaction. The cloud model promotes availability and consists of characteristics deemed essential and categorized along service and deployment models.
In keeping with the National Institute of Standards and Technology (NIST) model (Figure 3), this solution with Cisco UCS Director will be shown to provide the capability to provision processing, storage, network, and other fundamental computing resources on which the consumer can deploy and run arbitrary software, including operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications within allocated resources.
Figure 3 National Institute of Standards and Technology (NIST) Model
With respect to the above NIST definition, this solution leverages the functionality of Cisco UCS Director for implementing an Infrastructure-as-a-service (IaaS) for a Private Cloud to be deployed with all essential characteristics detailed.
This feature focuses on the ability of the platform to support dynamic provisioning and decommissioning based on the needs of consumers (capacity on demand), in an efficient manner (faster time to market), and without service interruption. Consumers can take advantage of the cloud when they have large fluctuations in their IT resource usage. For example, an organization may be required to double the number of web and application servers for the duration of a specific task. The organization would not want to pay the capital expense of having dormant (idle) servers on the floor most of the time and would want to release these server resources after the task is completed. The cloud model enables these resources to grow and shrink dynamically and allows the organization to pay on a usage basis. Elasticity requires seamless integration between the orchestration layer (UCSD) and the underlying integrated VersaStack to take full advantage of compute, network, and storage scalability options.
Given the borderless nature of our networks and the number of devices used for access, this broad network access requirement translates to supporting non-traditional endpoints such as tablets and cell phones in a secure manner. Cisco UCS Director supports secure technologies such as TrustSec and security-related devices such as the ASA and VSG firewalls. Mobile and tablet access is provided by the Android-based Cloud Genie application, which interfaces with Cisco UCS Director. Cloud Genie access is not within the purview of this CVD at this time.
In Cisco UCS Director, users get access privileges based on their roles (RBAC). The cloud administrator sets privileges based on available role templates and has the flexibility to create new roles or modify existing ones to suit the need. There is separation between users within a group and across groups as well. User-space confidentiality should be preserved at multiple levels through encryption and other means, such as access controls, virtual storage controllers, VLAN segmentation, firewall rules, and intrusion protection, wherever possible. Data protection through continuous encryption of data in flight and at rest is essential for integrity. Cisco TrustSec SGT and NetFlow support by Cisco UCS Director on most Cisco devices makes it easy to enable proper access control and visibility in a distributed manner for a scalable and secure platform.
In this deployment, the requirement is for flexibility in resourcing the tenant at the virtual level while preventing unauthorized data access. To this end, individual FC boot LUNs are created for each ESXi host in the PoD. Data served over the Network File System (NFS) can be mapped either from a common share or individually. To ensure secure separation, user access controls at the hypervisor level (VMware) ensure that users do not have unauthorized access to NFS space. Further access controls may be exercised through TrustSec (SGT) and VMware vShield if desired. System access controls at the time of creating NFS exports on IBM Storwize allow for mapping to a host.
An IaaS platform consists of pooled resources serving multiple workloads and tenants. Given the services model followed, end users are expected to pay only for resources used. End users can belong to different departments within an enterprise or come from entirely different business entities. Whether internal to a company or across multiple companies, the platform, due to its shared nature, needs to incorporate a means to measure resource utilization for the purpose of billing. Cisco UCS Director has chargeback/showback capabilities based on cost models that can be set by the cloud administrator/provider. Data generated from chargeback can then be integrated with a payment gateway (such as First Data). Internal to Cisco UCS Director, there are also budget mechanisms tied to individual groups for resource management.
The Chargeback module in Cisco UCS Director gathers metering information at frequent intervals. This data can then be matched with cost models to arrive at tenant costs and for reporting as well. Dashboard reports are also an offshoot of this module. The first step is to configure a budget policy for individual organizations.
Within Cisco UCS Director, cost models can be created for each tenant. Costs for resources used in a vDC may be computed by the hour, month or year. Each tenant is typically created in a separate vDC to facilitate easy separation for billing purposes.
The Standard cost model is a basic, linear cost model based on resource consumption over the allotted period. CPU, memory, and disk resources used and idle over the period, and their respective cost structures, are used to estimate cost. The Advanced cost model is more customized and allows for greater granularity in choices and billing through scripts. Scripts tailored to customer needs have to be developed, as they are not packaged with the system.
The setup below considers a straight-line Standard cost model to illustrate functionality and setup.
1. Select Policies > Virtual/Hypervisor Policies > Service Delivery. Edit the default cost model.
Select a Standard cost model type to illustrate chargeback with a hypothetical initial setup cost of $50.00. The initial setup cost is assumed to include only costs pertaining to setting up the account. The VM cost needs to contain amortized fixed (CapEx) and variable (OpEx) costs for all underlying system components that constitute a virtual instance: compute, network, and storage. The capital expense component covers infrastructure (facilities and host platform). The variable operational expense portion could include components such as power and cooling, management, and support costs. The approximate baseline used here to estimate chargeback is an active VM cost of $1.00 per hour and an inactive VM cost of $0.10 per hour. The figures chosen are approximate and are only used to illustrate the method and functionality in Cisco UCS Director; a simple sketch of the underlying arithmetic follows these steps. The reader is referred to external whitepapers if more detailed chargeback figures are needed. The assumption is that the VM cost covers compute, network, and storage. It is also possible to define units and costs for individual components for greater accuracy.
2. Integration with a payment gateway such as First Data is available for third-party billing.
An end-user/customer must set up a merchant account with First Data, which will then provide the necessary secure certificate and password for authorizing payments through its gateway. The provided First Data certificate and password need to be entered in the above form to set up payments to the provider for IaaS resources used.
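To make the straight-line Standard cost model in step 1 concrete, the following is a minimal sketch of the chargeback arithmetic, assuming the illustrative rates quoted above ($50.00 one-time setup, $1.00 per active VM-hour, $0.10 per inactive VM-hour). The actual metering and rollup are performed by the Cisco UCS Director Chargeback module; this sketch only mirrors the calculation.

```python
# Minimal sketch of the straight-line Standard cost model described in step 1.
# The rates are the illustrative figures from this section, not recommended values.
ONE_TIME_SETUP_COST = 50.00      # per account, charged once
ACTIVE_VM_COST_PER_HOUR = 1.00
INACTIVE_VM_COST_PER_HOUR = 0.10

def vm_charge(active_hours: float, inactive_hours: float,
              include_setup: bool = False) -> float:
    """Estimate the charge for a single VM over a billing period."""
    charge = (active_hours * ACTIVE_VM_COST_PER_HOUR
              + inactive_hours * INACTIVE_VM_COST_PER_HOUR)
    if include_setup:
        charge += ONE_TIME_SETUP_COST
    return round(charge, 2)

# Example: a VM powered on for 600 hours and idle for 120 hours in its first month.
print(vm_charge(600, 120, include_setup=True))   # 50 + 600*1.00 + 120*0.10 = 662.0
```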
The customer needs to be able to provision and manage their environment on a shared platform with the least amount of intervention and delay from the provider. Providing this functionality requires the establishment of a self-service portal with necessary privileges. The portal should provide a catalog of items available for consumption to which the customer has access. It should also include automated means of deploying instances to contribute to overall agility. Cisco UCS Director provides out-of-the-box self-service portal capability after setting up a set of policies and mapping entities (groups and users) to resources (on VersaStack). Orchestration of workflows consisting of available and customizable tasks is enabled through a graphical workflow designer. Standard items are mapped to available workflows while advanced catalog items are mapped to newly created workflows for deployment.
The policies and cost model presented above, along with quotas set for tenants, come together when designing the self-service portal described below.
1. Select Physical > Compute, highlight the VersaCloud PoD, and select the Summary tab. A list of available metrics is shown above the graphs when the arrow next to the wheel at the right of the screen (below the CloudSense tab) is selected. A summary of compute-related metrics is displayed.
2. For a snapshot of VM-related metrics, select Virtual > Compute and then select the PoD. If any of these metrics/graphs need to be on the main dashboard, click the down arrow to the right of each graph or summary and select Add to Dashboard.
3. For private cloud storage metrics, select Virtual > Storage, select VMware-Cloud, and click the Summary tab.
4. For a virtual network metric snapshot, select Virtual > Network, select VMware-Cloud, and click the Summary tab.
The features mentioned in the previous section must be supported throughout the integrated stack for correct and consistent execution. The VersaStack platform, with Cisco UCS compute, Nexus and MDS switches, and the IBM Storwize V7000 Unified storage system, has the necessary flexibility at every layer to allow for elasticity within and beyond the PoD. Compute can scale to 160 hosts/blades within a single Cisco UCS domain, with storage on the Storwize system scaling to a maximum of 1,056 disks of varying capacity and performance distributed across four redundant block storage control enclosures. The architecture places common infrastructure components and services such as Active Directory, DNS, DHCP, vCenter, the Cisco Nexus 1000v VSM (optional), and Cisco UCS Director external to the IaaS PoD to provide a centralized and uniform management structure. This model also allows for the addition of more integrated PoDs for growth while preserving the cloud capabilities of Cisco UCS Director.
The current setup consists of several components and their respective native tools, leading to a myriad of integration points, as depicted in Figure 4. Cisco UCS Director has tight integration at the infrastructure layer with all underlying components within the VersaStack: UCS Manager, the Nexus switches, and the IBM Storwize V7000 Unified storage system. The Cisco Nexus 1000v VSM communicates with both vCenter and Cisco UCS Director for distributed virtual switch functionality. Cisco UCS Director also integrates with vCenter and with its Bare Metal Agent (BMA) to extend functionality to non-virtual instances within the integrated stack. External to this setup, Cisco UCS Director provides standard northbound APIs for integration with third-party ITSM tools for event monitoring, trouble-ticketing, and billing.
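As an illustration of that northbound integration point, the sketch below queries the Cisco UCS Director REST API for the list of catalogs using the Python requests library. The host name and access key are placeholders, and the header name, endpoint, and operation name shown here are assumptions that should be verified against the Cisco UCS Director REST API guide for the release in use.

```python
# Hedged sketch: query the Cisco UCS Director northbound REST API for catalogs.
# The appliance address, API key, and operation name are assumptions to be
# checked against the UCS Director REST API documentation for release 5.3.
import requests

UCSD_HOST = "ucsd.cloud.versa.com"       # hypothetical appliance address
API_KEY = "<api-access-key>"             # generated per user in the UCSD GUI

def ucsd_get(op_name: str, op_data: str = "{}") -> dict:
    """Issue a read-only REST call against the UCS Director API."""
    url = f"https://{UCSD_HOST}/app/api/rest"
    headers = {"X-Cloupia-Request-Key": API_KEY}
    params = {"formatType": "json", "opName": op_name, "opData": op_data}
    # verify=False only because the appliance ships with a self-signed certificate.
    resp = requests.get(url, headers=headers, params=params, verify=False, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(ucsd_get("userAPIGetAllCatalogs"))
```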
The architecture for this solution uses two sets of hardware resources:
· Common Infrastructure services on redundant and self-contained hardware
· VersaStack for IaaS workloads under Cisco UCS Director Management
The common infrastructure services include Active Directory (AD), Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), VMware vCenter, Cisco UCS Director, and the Cisco Nexus 1000v Virtual Supervisor Module (VSM). These components are considered core infrastructure because they provide necessary data-center-wide services where the IaaS Point-of-Delivery (PoD) resides. Since these services are integral to the deployment of IaaS, best practices must be followed in their design and implementation, including high availability, appropriate RAID setup, and performance and scalability considerations, given that they may have to extend their services to multiple PoDs. Another consideration is to avoid introducing dependencies between management tools and the hosts/platforms they manage (for example, installing vCenter on an ESXi host that it manages). At a customer site, depending on whether this is a new data center, this infrastructure may already exist and may not need to be built. In our setup, given the limited scope of one VersaStack, this environment consists of a pair of Cisco UCS C220 servers with internal disks. VMware cloning of the VMs is used to provide backups.
The IaaS VersaStack consists of Cisco UCS blade and rack servers. FC LUNs from the IBM Storwize V7000 Unified storage system were provisioned for booting these servers. The FC connections can go either directly from the servers to the fabric interconnects (6248) or through redundant MDS FC switches, as in this case. The NFS space and the corresponding mount point are visible to all hosts, with hypervisor-based user access control. At the network layer, there are five VLANs: OOB-Mgmt (3171), NFS (3172), vMotion (3173), VM Traffic (3174), and IB-MGMT (3175). The Cisco UCS Director appliance was set up as a single node with the Bare Metal Agent connected over a VLAN (3175). A highly available and scalable multi-node Cisco UCS Director setup is available if there is a need to scale across multiple data centers.
User groups and accounts for the IaaS platform are created and managed from Cisco UCS Director. For this exercise, three groups with two users in each group were created. The user groups were mapped to resources through the virtual data center (vDC) construct to constitute a multi-tenant setup. Each tenant has an administrator user and another end-user role. Catalog items were created and shared by the cloud administrator (admin) through appropriate access to the self-service portal after setting up the required policies. Each tenant group was assigned a budget and resources within the PoD, and approximate costs for active and inactive instances were assigned. The understanding is that instances use compute, network, and storage resources and, as such, capture the overall requirements of the customer while also simplifying cost estimation from the provider's perspective. If there is a need for more granular or accurate cost estimation, Cisco UCS Director has provisions for specifying cost at the compute, network, and storage levels as well.
Cisco UCS Director uses Role-Based Access Control (RBAC) to assign resource privileges to users. Many standard roles are predefined, and there is the flexibility to add users with customized access levels. The group admin role has the privilege to create end users within the group, so the cloud admin needs to create only a group admin for each tenant. This framework speaks to the delegation features available within Cisco UCS Director to facilitate the assignment of roles and responsibilities by a local administrator. It also supports existing organizational boundaries, for example between compute, network, and storage teams, maintained through the same tool. This ensures a single pane of glass with consistent management practices that add to efficiency.
The minimum system requirements depend on how many VMs you plan to manage.
For optimal performance, reserve additional CPU and memory resources. We recommend that you reserve the following resources in addition to the minimum system requirements listed in the tables below: CPU resources of 3000 MHz or more, and additional memory of 4 GB or more.
For information about minimum system requirements for a multi-node setup, see Minimum System Requirements for a Multi-Node Setup.
If you plan to manage up to 2,000 VMs, the Cisco UCS Director environment must meet at least the minimum system requirements in the following table.
Table 1 Minimum System Requirements for up to 2,000 VMs
Element | Minimum Supported Requirement
vCPU | 4
Memory | 8 GB
Hard Disk | 100 GB
If you plan to manage no more than 5,000 VMs, the Cisco UCS Director environment must meet at least the minimum system requirements and recommended configurations in the following tables.
Table 2 Minimum System Requirements for up to 5,000 VMs and Recommended Memory Configuration for Cisco UCS Director Services
Element | Minimum Supported Requirement
vCPU | 4
Memory | 20 GB
Hard Disk | 100 GB

Service | Recommended Configuration | File Location | Parameter
broker | 256 MB | /opt/infra/broker/run.sh | -Xms -Xmx
client | 512 MB | /opt/infra/client/run.sh | -Xms -Xmx
controller | 256 MB | /opt/infra/controller/run.sh | -Xms -Xmx
eventmgr | 512 MB | /opt/infra/eventmgr/run.sh | -Xms -Xmx
idaccessmgr | 512 MB | /opt/infra/idaccessmgr/run.sh | -Xms -Xmx
inframgr | 8 GB | /opt/infra/inframgr/run.sh | -Xms -Xmx
Tomcat | 1 GB | /opt/infra/web_cloudmgr/apache-tomcat/bin/catalina.sh | JAVA_OPTS="$JAVA_OPTS -Xms<size>m -Xmx<size>m"
Table 3 Minimum Database Configuration
Element | Minimum Supported Configuration
thread_cache_size | 100
max_connections | 1000
innodb_lock_wait_timeout | 100
query_cache_size | 128 MB
innodb_buffer_pool_size | 4096 MB
max_connect_errors | 10000
connect_timeout | 20
innodb_read_io_threads | 64
innodb_write_io_threads | 64
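After tuning, the database parameters in Table 3 can be spot-checked from the appliance. The sketch below assumes the mysql-connector-python package and placeholder database credentials; it only reads the running values and changes nothing. (Size-based parameters such as query_cache_size and innodb_buffer_pool_size are reported by the server in bytes and are omitted here for simplicity.)

```python
# Hedged sketch: compare running MySQL variables with the values in Table 3.
# Host, user, and password are placeholders for the appliance database credentials.
import mysql.connector

EXPECTED = {
    "thread_cache_size": "100",
    "max_connections": "1000",
    "innodb_lock_wait_timeout": "100",
    "max_connect_errors": "10000",
    "connect_timeout": "20",
    "innodb_read_io_threads": "64",
    "innodb_write_io_threads": "64",
}

conn = mysql.connector.connect(host="localhost", user="<db-user>", password="<db-password>")
cur = conn.cursor()
for name, expected in EXPECTED.items():
    cur.execute("SHOW VARIABLES LIKE %s", (name,))
    row = cur.fetchone()
    actual = row[1] if row else "<missing>"
    status = "OK" if actual == expected else "CHECK"
    print(f"{status:5} {name}: expected {expected}, running {actual}")
cur.close()
conn.close()
```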
This deployment will consider a single VersaStack and its management with Cisco UCS Director in a non-redundant fashion. This is because Cisco UCS Director is not in the data path and one instance can support multiple integrated stacks. To ensure best practices, the Cisco UCS Director instance is installed external to the managed PoD (VersaStack) on common infrastructure components consisting of a pair of Cisco UCS C-220 rack servers with local storage. For deployments that require greater scale and/or connectivity across Data Centers, a highly redundant setup of Cisco UCS Director is available (multi-node setup). The user is referred to the following Cisco UCS Director guide if there is a need for such a redundant and scalable setup:
The private cloud platform can reside on premises or in provider space (hosted). As such, this deployment is an enterprise Private Cloud (ePC) with the characteristics deemed essential in the model defined by the National Institute of Standards and Technology (NIST). Common areas of monitoring, management, and on-boarding pertaining to the ePC will also be shown through Cisco UCS Director.
Cisco UCS Director uses a policy-based model for managing assigned resources. Policies are sets of rules that set forth the framework for how resources are provisioned and accounted for. For example, setting up a self-service portal requires establishing compute, network, storage, and system policies and applying a cost model to leverage chargeback for billing purposes. Setting up the required policies to provide the necessary functionality for an IaaS platform is covered in the following sections.
This document assumes that you have followed the procedure detailed in the CVD link below to build the base VersaStack platform:
http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/Versastack_n9k_vmw55.pdf
Figure 5 Cisco UCS Director Infrastructure Abstraction- Single Pane Management
The above-referenced CVD presents Fibre Channel (FC) LUNs and NFS mount points from the IBM Storwize V7000 Unified storage system as datastores to the Cisco UCS compute within VersaStack. The connectivity is through a set of redundant Cisco MDS (FC) and Cisco Nexus 9396 (Ethernet) switches. One FC LUN is used to boot each ESXi host, with an NFS share providing storage for workloads across the hosts. There are five VLANs: OOB-Mgmt (3171), NFS (3172), vMotion (3173), VM Traffic (3174), and IB-MGMT (3175). Changes to the VersaStack infrastructure detailed above include the use of VMware 5.5 U2 in place of VMware 5.5 U1 and Cisco UCS Director 5.3.1 for providing IaaS cloud functionality.
Figure 6 illustrates the high-level architecture for all devices in this solution. The focus is on using a validated converged infrastructure (VersaStack) to provide resources for the cloud, with IaaS features provided by Cisco UCS Director.
Figure 6 High-Level Architecture
The following section outlines prerequisites to install and setup a working instance of Cisco UCS Director. The intent is to leverage the automation features of Cisco UCS Director for correct and consistent cloud deployment.
You must obtain a license to use Cisco UCS Director. Please see sections titled About Licenses and Fulfilling the Product Access Key (PAK) at the following link before you begin: http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/ucs-director/vsphere-install-guide/5-3/b_Installing_UCSDirector_on_vSphere_5_3.pdf
Download the VMware virtual appliance (OVF) for Cisco UCS Director 5.3.1 and the Cisco UCS Director Bare Metal Agent 5.2 from the following link:
You will need the Bare Metal Agent software for building bare-metal instances, such as installing Linux or ESXi on blade and rack servers. This step is a prerequisite to installing applications or databases on operating systems and on hypervisor-supported virtual instances:
1. Through the vSphere web client, connect to vCenter 5.5U2 installed external to the VersaStack on common infrastructure.
2. Select File and then Deploy OVF Template. Browse to the directory where the CUCSD_5_3_0_0_VMWARE_GA OVF file is located, click Next twice, accept the terms, and click Next. Provide a name and click Next three times without changing the default selections. Provide new root and shelladmin user passwords, along with the IP address, subnet mask, and default gateway for the appliance, and click Next. A confirmation screen with your selections is displayed. After making sure your selections are correct, check the Power on after deployment check box and click Finish. The installation takes about two minutes. Proceed to install the Bare Metal Agent (CUCSD_BMA_5_2_0_0.ovf) in a similar manner.
To configure the Cisco UCS Director Virtual Machine on VMware, complete the following steps:
1. Right-click the UCSD VM and click Edit Settings.
2. Select the Resources tab.
3. Select CPU and change the Reservation to about 4000 MHz, then select Memory, change the Reservation to over 4000 MB, and click OK.
Note: Increase the reserved resources for the newly created VM as recommended above.
4. Right-click the UCSD VM, select Guest, and click Install/Upgrade VMware Tools.
5. When the Cisco UCS Director VM comes up, it has the necessary interface configured and available for connectivity. Logging in as user shelladmin takes you to the Cisco UCS Director shell menu:
You may also access the appliance through the console or a browser (shown below):
6. Connect to the URL for your UCSD system through the IP address you assigned.
7. Log in as the default user admin with the password admin and click Login.
8. Click OK to temporarily ignore the popup information message for login profile.
Note: UCSD version 5.3.1 comes with a set of wizards that help with setup after installing the product.
9. The Initial System Configuration wizard takes the user through a series of steps. The first step is to upload a UCSD license file: under the License tab, browse to the license file and click Upload.
10. Click Next and, under Locale, select a Language and click Next. Selecting Next after each step in the Initial System Configuration wizard allows for mapping to the DNS, Mail (SMTP), and NTP servers, in that order.
1. The Device Discovery wizard is deployed next. It takes inputs such as an address range and account type for each device to be discovered.
Note: The number of credential policies multiplied by the number of addresses should be less than 1000 for discovery. If this value exceeds 1000 (for example, four credential policies across 300 addresses gives 1,200 combinations), break the discovery process into smaller batches.
2. Each Account Type has a credential policy associated with it. Clicking the "+" sign under Credential Policy opens a pop-up where the Account Type is selected by checking a box.
Note: MDS 9148S switches have the Account type of Cisco Nexus OS.
3. When an Account Type is selected, another pop-up with policy name, user name, and password is presented. The policy name is any unique name for the device type of interest. The user name and password should be for an account that is set up on the device in question to allow logging in and adding it to the PoD.
4. After the above inputs are provided, the process discovers the required devices and allows for grouping them into a PoD, which in this case is called VersaCloud.
5. Select Next and then click the "+" sign after Select Pod to bring up the following screen. Enter VersaCloud as the Name and pick a Type of Generic from the drop-down list.
6. Click the Add button at the bottom left of the screen above and close the wizard.
7. At the end of this stage, the converged infrastructure PoD – VersaCloud is built in Cisco UCS Director.
8. After initial polling, data from all devices is gathered and displayed in a tabular manner.
Note: For some browsers you may need to add the web URL to trusted sites to display correctly.
In Cisco UCS Director, you can use local accounts and/or LDAP accounts. The following steps create local groups and users within each group.
1. From the main menu, click Administration and select Users and Groups.
2. Under the User Groups tab, click Add (+) to create a group.
3. In the Name field, enter the name of the group (for example, DevGroup), along with an email address and a first and last name.
4. Repeat Steps 2 and 3 to create two more groups (TestGroup and ProdGroup).
5. Select the Login Users tab and click Add (+) to create Group Admin logins for the three groups.
6. Select the User Role of Group Admin and the user group that was recently created. Provide a login name and password for the account and click Add.
7. Repeat Steps 5 and 6 to create the Test and Prod Group Admin accounts.
8. Click Add (+) to create a Service End-User login for each of DevGroup, TestGroup, and ProdGroup.
9. Click Add.
Note: The User Role determines whether an account is specific to a group or not. Therefore, only accounts with privileges that can be limited to the group will be presented with the User Group field and a drop-down list.
LDAP can be integrated to synchronize the LDAP server groups and users with Cisco UCS Director. LDAP authentication enables synchronized users to authenticate with the LDAP server. LDAP synchronization for users and groups can be done either automatically or manually. In addition, LDAP synchronization is also available as a system task. When new organizational units (OU) are added in the LDAP directory, and a synchronization process is run, either manually or automatically, the recently added LDAP users and groups are displayed in Cisco UCS Director.
Note: Users that do not belong to a group or to the domain users group are displayed in LDAP as User With No Group. These users are added under the domain users group in Cisco UCS Director.
1. From the main menu, click the Administration tab and select Users and Groups.
2. Click the Authentication Preferences tab and set Authentication Preferences to LDAP First, fallback to Local.
3. Click Save.
4. Click the LDAP Integration tab, and then click Add (+).
5. Add the LDAP configuration details: Account Name, Server Type, Server Name/IP, Domain Name, and the LDAP user name and password.
6. Click Next.
7. In LDAP Search Base, click Search Base DN and make selections in the pop-up to choose the correct Base DN (a connectivity pre-check sketch follows these steps).
8. Click Select, Submit, and then OK.
9. To update records again, click Request LDAP Sync and click Submit.
10. Click OK.
11. Click the Login Users tab and click Refresh; you will see your LDAP users, if present.
Note: Local groups and users can also be added and managed.
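Before pointing Cisco UCS Director at the directory, it can help to confirm that the LDAP server, bind account, and Search Base DN chosen in step 7 return the expected entries. The sketch below uses the ldap3 Python library for such a pre-check; the server address, bind DN, and base DN are placeholders for the customer directory, and the query runs outside Cisco UCS Director, which performs its own synchronization.

```python
# Hedged pre-check of LDAP connectivity and the Search Base DN used in step 7.
# Server, bind account, and base DN are placeholders for the customer directory.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldaps://ldap.cloud.versa.com", get_info=ALL)   # hypothetical LDAP server
conn = Connection(server,
                  user="CN=svc-ucsd,OU=Service Accounts,DC=cloud,DC=versa,DC=com",
                  password="<bind-password>",
                  auto_bind=True)

# List a handful of user entries under the base DN that will be configured in UCS Director.
conn.search(search_base="DC=cloud,DC=versa,DC=com",
            search_filter="(objectClass=user)",
            search_scope=SUBTREE,
            attributes=["sAMAccountName"],
            size_limit=10)

for entry in conn.entries:
    print(entry.entry_dn)   # distinguished names confirm the Base DN is correct

conn.unbind()
```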
A Virtual Data Center (vDC) is an environment that combines virtual resources, operational details, and policies to manage specific group requirements. It is a construct that allows for the mapping of resources and policies to groups of users on shared infrastructure. Tenant requirements (SLAs) are captured as policies, which are then mapped to allocated resources to deliver the desired QoS. An organization or department can manage multiple vDCs, and each vDC allows for a set of approvers (if necessary) and quotas to control and limit resource usage for the assigned users. Operational details, such as budgeting of resources with or without limits, can be included with chargeback to help with billing. A VM that is provisioned using a service request is associated with a vDC. When you create a service request, you choose the vDC on which the VM is provisioned. You can view a list of vDCs that are available for a particular group and choose the required vDC when provisioning VMs. The following is a mapping between the constructs of policies, vDCs, users, and groups.
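This mapping can be pictured using the case-study tenants built later in this document. The sketch below is purely illustrative: the group, user, and policy names mirror the Dev/Test/Prod example that follows, and the specific user names and flags are assumptions. In Cisco UCS Director itself, these relationships are maintained by the vDC construct, not by an external file.

```python
# Illustrative mapping of groups -> users -> vDC -> policies for the case study.
# Names mirror the Dev/Test/Prod structure built later in this document; user
# names and flags are assumptions. This is documentation only, not UCSD input.
tenants = {
    "DevGroup": {
        "users": {"devadm": "Group Admin", "devuser1": "Service End-User"},
        "vdc": "Dev_vDC",
        "policies": {
            "system": "VersaAdm",
            "compute": "Versa_Dev_Web_Pol",
            "storage_tier": "Bronze",
            "approvers": [],                    # no approvals: fast build/teardown
            "budget_capped": True,
        },
    },
    "TestGroup": {
        "users": {"testadm": "Group Admin", "testuser1": "Service End-User"},
        "vdc": "Test_vDC",
        "policies": {"storage_tier": "Silver", "approvers": ["testadm"], "budget_capped": True},
    },
    "ProdGroup": {
        "users": {"prodadm": "Group Admin", "produser1": "Service End-User"},
        "vdc": "Prod_vDC",
        "policies": {"storage_tier": "Gold", "approvers": ["prodadm", "cloudadmin"], "budget_capped": False},
    },
}

for group, details in tenants.items():
    print(f"{group} -> {details['vdc']} "
          f"(tier: {details['policies']['storage_tier']}, approvers: {details['policies']['approvers']})")
```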
In VersaStack, tiering can be controlled at multiple levels: at the storage pool, the MDisk, or the volume (VDisk) level. This allows for complete flexibility when choosing the data to be managed by tiering. When tiering is turned on, the system automatically allocates the most active data to the best-performing set of disks in the pool. For a Storwize V7000 storage system, it is recommended to use disks of the same type and performance characteristics within each MDisk; the system automatically determines the tier of the MDisk from its member drives. Each pool has an extent size, the default being 1 GiB. This is the unit of allocation from the pool to volumes; a volume is typically striped over all the MDisks in the pool, with the extent size being the unit of striping (a worked example follows the table below). Given the mix of disks available, the tiers are set as follows, with Gold focused on performance, Silver offering a mix of capacity and performance, and Bronze geared only toward capacity.
Just as storage pools containing multiple tiers of disks can benefit from Easy Tier, storage pools with a single tier can also benefit from the system automatically balancing data across the disks in that tier. A set of storage pools with the following characteristics is created to provide three levels of service to cater to SLAs.
Storage Pool No. | QoS | Disk Types in Pool | Tiering Status
1 | Gold | Flash/SSD + SAS & NL-SAS disks [or] Flash/SSD + SAS | ON
2 | Silver | SAS + NL-SAS disks | ON
3 | Bronze | NL-SAS | ON
Note: For the Gold tier, the second option consisting of Flash and SAS without NL-SAS disks was validated. Based on performance requirements, tiers could be deployed in various configurations. For example, Gold could be all flash followed by Silver consisting of all SAS and Bronze with all NL-SAS disks.
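As a worked example of the extent-based allocation mentioned above (illustrative figures only, assuming the default 1 GiB extent size and a hypothetical pool layout):

```python
# Worked example of extent allocation with the default 1 GiB extent size.
# Volume size and MDisk count are hypothetical figures for illustration.
import math

EXTENT_SIZE_GIB = 1          # default pool extent size
volume_size_gib = 500        # hypothetical volume carved from the Gold pool
mdisks_in_pool = 4           # hypothetical number of MDisks in that pool

extents = math.ceil(volume_size_gib / EXTENT_SIZE_GIB)
print(f"A {volume_size_gib} GiB volume consumes {extents} extents, striped at "
      f"roughly {extents // mdisks_in_pool} extents per MDisk across "
      f"{mdisks_in_pool} MDisks in the pool.")
```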
1. On the Storwize V7000 management GUI, navigate to the Internal Storage page (under Pools > Internal Storage).
2. Click the Configure Storage button to bring up the Configure Internal Storage wizard.
Note: You need to have candidate/free disks available to assign to the pool.
3. Select Use a different configuration, choose Flash drive class, and select the desired RAID type and number of drives to provision. Click Next.
4. Select Create one or more new pools and enter the pool name of Gold. Click Finish, and wait for the task to complete, and then click Close.
5. Click Configure Storage and select Select a different configuration. This time select Enterprise drive class, the desired RAID type and number of drives to provision, then click Next.
Note: Retain some enterprise drives for the Silver tier.
6. Select Expand an existing pool then select the Gold pool and click Finish.
7. Click Configure Storage and select a different configuration. Select Enterprise drive class, the desired RAID type and number of drives to provision, and then click Next.
8. Select Create one or more new pools, enter Silver for the name and click Finish. Wait for the task to complete, and then click Close.
9. Click Configure Storage and select Select a Different Configuration. Select Nearline drive class, the desired RAID type and number of drives to provision, and then click Next.
10. Select Expand an existing pool then select the Silver pool and click Finish. Wait for the task to complete, and then click Close.
11. Click Configure Storage and click Select a Different Configuration. Select Nearline drive class, the desired RAID type and number of drives to provision, and then click Next.
12. Select Create one or more new pools enter a name of Bronze, and then click Finish. Wait for the task to complete, and click Close.
13. In the MDisks by Pools view, you can see the three storage pools. These pools can now be used to allocate volumes.
14. Clicking the (+) sign before each pool provides for an expanded view with details on mdisks in each pool.
Note: "Enterprise" in the last column shown below refers to SAS disks.
The following section details a method of deploying a multi-tenant platform using Cisco UCS Director based on above storage QoS levels and the vDC construct. At the vDC level, relevant compute, network and storage constructs are assigned based on desired customer Service Level Agreements (SLA).
In this hypothetical structure, there are three vDCs: Dev_vDC, Test_vDC, and Prod_vDC. They represent the three functions required for developing software in-house: development, testing, and production.
The following case is presented to underscore the relevance of features in Cisco UCS Director to a real-world scenario. As with any hypothetical case, certain assumptions are made as detailed below.
The department in question builds three-tiered proprietary Geospatial Information System (GIS) software in-house and provides it as a service for consumption by customers of the parent service provider; thus, there is a need for a production environment as well. The software architecture allows for horizontal scaling, where capacity at each of the web, application, and database layers can be scaled by adding more instances from templates of the corresponding catalog items.
Developers of the GIS software are expected to make frequent changes to their environment, including building and tearing down instances. They are also expected to focus on functional and interoperability features more than on performance. This translates to the development group requiring loose infrastructure change-control procedures, dedicated platforms to prevent collateral damage, and possibly the Bronze tier for storage. Further, if developers are performing different functions, the interoperability and resiliency components could be allocated more resources as needed. In Cisco UCS Director, this translates to having no approvers, to facilitate quick deployments, with a group budget to prevent excessive cost due to high infrastructure use. Allowing the budget to be exceeded for the interoperability and resiliency pieces, while limiting resource usage for the functional portion of development through resource limits, provides a layer of control that leads to flexibility and greater alignment with business goals.
The Test group may require tighter change-control mechanisms to ensure a group of testers can work in lockstep to perform end-to-end testing. This will prevent unforeseen consequences due to missteps such as deletion of instances or results. Apart from testing functional and interoperability pieces, performance and resiliency play a major part as well, which points to a need for faster and redundant infrastructure resources. Therefore, the Test group may require Silver-tier storage with an approver within the group (say, testadm) for workflows that delete or alter running instances. Group budgeting could still be instituted to prevent excessive infrastructure cost, along with deletion of inactive instances, upon approval, to reclaim unused resources after 90 days (for example).
The resource requirements for the production group are expected to be higher due to a greater focus on performance, high availability, and geographical redundancy for higher uptime (99.999%). The change-control mechanism needs to be the strongest for this group to prevent changes from affecting critical services. From a Cisco UCS Director perspective, there need to be multiple approvers for all catalog items, both within and external to the vDC. Group budgeting needs to be turned off to allow for agile, elastic scaling to meet demand upon approval of changes.
Within Cisco UCS Director, there are two methods of providing Catalog items – the ISO method and the template method. The ISO method allows for certain flexibilities, which can also lead to more mistakes.
The Cisco UCS Director template method leverages the options presented by vCenter: Clone to Template or Convert to Template. The corresponding vCenter template needs to be created, made available through Cisco UCS Director discovery, and mapped to the catalog item. Convert to Template is available when the virtual instance is down, while Clone to Template is available when it is up. With the virtual instance up, right-click the VM and select Clone to Template under Template. After the vCenter template is created, back in Cisco UCS Director, complete the following steps:
1. Force ISO gathering by going to Administration > System > System Tasks. Then select the VMware inventory collector and select Run Now.
2. Select Policies > Virtual/Hypervisor Policies > Service Delivery.
3. Select the Mark Datastores for ISO Inventory Collection tab and click the datastore where the template was saved in vCenter. Select Edit, pick the datastore for the Select Data Stores option, and click Submit.
4. In Guest OS ISO Image Mapping Policy, select the policy to edit and click the (+) next to ISO. Provide a name, select the ISO image to map, and click Submit twice.
Cloning an existing virtual instance to a template and using that template from Cisco UCS Director is the most restrictive method: settings and data are captured without a way to reverse-engineer the VM. Converting to a template in vCenter provides a way to convert the template back to a VM if required, for example to apply patches. Deploying new instances based on templates of catalog items allows for a consistent, standards-based rollout with a determinable outcome. Depending on the level of control needed in an environment, a combination of templates and clones through vCenter and Cisco UCS Director can be used.
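For reference, the convert-to-template operation described above can also be driven programmatically against vCenter. The sketch below uses pyVmomi and is an assumption-laden illustration only: the vCenter address, credentials, and inventory path are placeholders, and this deployment itself uses the vSphere client rather than scripting.

```python
# Hedged pyVmomi sketch of the convert-to-template behavior described above.
# The vCenter address, credentials, and inventory path are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()   # lab only: vCenter uses a self-signed certificate
si = SmartConnect(host="vcenter.cloud.versa.com",
                  user="administrator@vsphere.local",
                  pwd="<password>",
                  sslContext=ctx)

# Look up the powered-off source VM by its inventory path (placeholder path).
vm = si.content.searchIndex.FindByInventoryPath("VersaStack-DC/vm/rhel-web-src")
if vm is None:
    raise SystemExit("Source VM not found; adjust the inventory path.")

# Convert to template. This is reversible: MarkAsVirtualMachine (supplying a
# resource pool) turns the template back into a VM so patches can be applied,
# after which it can be re-converted and rediscovered by Cisco UCS Director.
vm.MarkAsTemplate()

Disconnect(si)
```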
The vDC’s (and groups) differ with respect to resource limits and approvers with potentially the same consistent set of templates available to deploy a three-tiered application on which development, test and production can take place.
Note: Cost is presented as $x < $y < $z, where $x is for the Development group with Bronze-tier storage capped by a budget, $y is for the Test group with the Silver tier and partial budget capping, and $z is for Production, where Gold-tier storage is required. Thus, the total cost for each vDC is determined by the number of instances, which is predicated on performance and resiliency needs (multiple instances for redundancy/DR).
5. Click the Administration tab and, from the drop-down menu, select Guided Setup.
6. Select vDC Creation.
7. View the Prerequisites and click Next.
8. In the vDC General Information tab, enter the vDC Name Dev_vDC.
9. Provide access to the resources in this vDC to the previously created group: select the Group Name DevGroup.
10. Select the Cloud Name VMware.
11. Enter the required approvers and other details as below.
12. Click Next.
1. In the Policies tab, policies for System, Compute, Network, and Storage are to be entered. Click the add (+) icon next to each of these policies to add the required policies. When the add (+) icon next to System Policy is selected, a screen requiring the following inputs is presented.
Field | Value
Policy Name | VersaAdm
VM Name Template | Versa-SR${SR_ID}
Host Name Template | ${VMNAME}
DNS Domain | cloud.versa.com
Linux Time Zone | US/Pacific
VM Image Type | Windows and Linux
Product ID | Windows Product ID
License Mode | Per Seat
Number of License Users | Optional (say 5)
Auto Login Count | Number of auto logins (say 5)
Administrative Password | Initial administrative password
Domain/Workgroup | Select Domain or Workgroup
Workgroup | Name of the workgroup, if Workgroup is selected
2. Click Submit. Click Next to add the Compute Policy.
1. Back in the Policies tab, click the add (+) icon next to Compute Policy and enter the details as follows.
2. Click Submit and then OK.
3. Optional: Uncheck Allow Resizing of VM, check Override Template, and enter the required values for vCPU (2) and memory (4096), with CPU and memory shares set to Normal. This allows for a mapping of the previously cloned template (through vCenter) to the service request, preventing the operator/self-service user from altering the resources used by the instance.
1. Back in the Policies tab, click the add (+) icon next to Network Policy to create it.
2. Click Add VM Networks, provide a NIC Alias, and click the add (+) icon to select Port Groups.
3. Click Select to select a Port Group.
4. Check the desired Port Group and click Select.
5. Pick the adapter type E1000, select the port group, and then click the pencil icon to edit.
6. Select the IP Address Type DHCP, click Submit, and then click OK.
7. Click Submit and OK twice.
Note: The Adapter Type needs to match the corresponding item mapped to the template in vCenter. This allows for a mapping of the previously cloned template (through vCenter) to the service request, preventing the operator/self-service user from altering the resources used by the instance.
1. Back in the Policies tab, click the add (+) icon to create a Storage Policy with the Policy Name Versa_Dev_Web_Pol.
2. Under Selected Data Stores, click Select and make the selection as below.
3. Click Select.
4. Click Next.
5. In Additional Disk Policies, click Submit.
6. Click OK.
7. Optional: Check the Override Template and Manual Disk Size boxes and enter the required value for disk space (40 GB).
Note: The disk size specified should be larger than what is used in the template created in vCenter. This will allow for a mapping of the previously cloned template (in vCenter) to the service request to prevent the operator/self-service user from altering resources used by the instance.
1. In the Policies tab, click the add (+) icon to create the Cost Model policy.
There is a provision to assign cost to both virtual and bare-metal instances. For virtual instances, cost may be assigned in a more granular manner by providing costs for the compute, memory, network, and storage components, or by setting the cost at the VM level. Bare-metal cost is calculated by considering individual component costs, with a provision to also include fixed costs that may have been incurred due to other incidentals. In our case, we provide cost estimates at the VM level as a one-time cost, an active VM cost, and an inactive VM cost.
2. Click Add and then OK.
Note: Skip the User Action Policy at this stage. It is used as a post-provisioning option within the vDC. The option to delete inactive VMs allows for reclaiming resources from VMs that have not been powered on (inactive) for between 1 and 90 days, if desired.
The End User Self-Service Policy allows setting access to particular user operations within the vDC. There is the flexibility to make user access as restrictive or as open as needed. User management and access includes areas such as VM power management, resizing, snapshots, VM deletion, disk management, and network and console management. This policy controls the actions and tasks that a user can perform on a vDC.
1. Back in the Policies tab, click the (+) icon to create an End User Self-Service Policy.
2. In the End User Policy dialog box, provide the Policy Name versa_Dev_SelfServ (say), an optional Description, and select the required options as shown below.
3. Click Submit and then click OK to get back to the vDC creation screen.
4. Click Next and in the Summary tab, view all the steps and status and click OK.
5. Click Next and then Close.
At this point, we have created a vDC (Dev_vDC) and all associated policies pertaining to the DevGroup. Next, we will clone these policies for use in cloned virtual data centers (vDCs) for the test and production groups. This provides a framework for a multi-tenant platform with room to customize policies as required.
Cloning is a quick and easy method to replicate policies and/or vDCs. Further, it ensures consistency and reduces human error during setup.
1. From the main menu bar, click Policies > Virtual/Hypervisor Policies > Computing.
2. Select Versa_Dev_Web_Pol and click Clone.
3. Change the Policy Name to Versa_Test_Web_Pol.
4. Enter a Description as required.
5. Select the Cloud Name VMWare.
6. Set Host Node/Cluster Scope to All.
7. Click Resource Pool and check the resource pool.
8. Click Select.
9. Under Resizing Options, leave the default Permitted Values for vCPUs and Memory.
10. Set Deploy to Folder to Test.
11. Click Submit and click OK.
12. From the main menu bar, click Policies > Virtual/Hypervisor Policies > Storage.
13. Click the VMware Storage Policy tab.
14. Select Versa_Dev_Web_Pol and click Clone.
15. Change the Policy Name to Versa_Test_Web_Pol.
16. Enter a Description as required.
17. Set Data Store/Datastore Clusters Scope to Include Selected Datastore.
18. Under Selected Data Stores, click Select.
19. Check versa_common_nfs, a datastore that will be available through the SAN.
20. Click Select and then Next.
Note: There is the option to filter storage selections on the disk characteristics of capacity, performance, and usage.
21. In Additional Disk Policies, click Submit.
22. Click OK.
23. From the main menu bar, click Policies > Virtual/Hypervisor Policies > Network.
24. Click the VMware Network Policy tab.
25. Select Versa_Dev_Web_Pol and click Clone.
26. Change the Policy Name to Versa_Test_Web_Pol.
27. Enter a Description as required.
Note: There is the option of picking required port-groups and addressing scheme (static/DHCP) for desired resiliency, bandwidth and scale.
28. Click Submit.
29. Click OK.
30. From the main menu bar, click Policies > Virtual/Hypervisor Policies > Service Delivery.
31. Click the VMware System Policy tab.
32. Select VersaAdm and click Clone.
33. Change the Policy Name to Versa_User.
34. Click Submit.
35. Click OK.
Note: The above policies for the TestGroup were cloned from the DevGroup policies. Edit the cloned policies as required. Follow the same process to create another set of policies for the ProdGroup.
1. Click Policies > Virtual/Hypervisor Policies > Virtual Data Centers.
2. Pick the recently created vDC (Dev_vDC) and click the Clone button.
3. Change the names as required and map to the cloned policies tied to the vDC in question.
4. Click Add and then click OK to add the new vDC.
5. Perform the same set of vDC cloning steps to create another vDC called Prod_vDC.
A catalog includes the definition of service items and how they are delivered or provisioned. The self-service portal user interface (UI) in Cisco UCS Director provides a non-administrative interface to Cisco UCS Director catalog service items. The end user sees a catalog for self-provisioning. A catalog item is created by the system administrator/cloud admin and defines parameters such as the cloud name and the group name to which it is bound. To aid in managing catalogs, Cisco UCS Director allows you to group similar catalog items within a folder. While creating a catalog, you can either select a folder created earlier or create a new folder if one does not exist. A folder is visible only when it contains a catalog item. Following is a depiction of how different functions within a software development company may be set up.
Note: All users have the same catalog items, with slight variations due to the tier of storage and corresponding cost differences, because the groups deliver different functions (development, test, and production) on the same application. Template-based catalog items in Cisco UCS Director provide this flexibility. However, if the tenants have different requirements, Cisco UCS Director can also accommodate different catalog items for each group.
1. Click Policies > Catalog and then click (+) Add.
2. Select the catalog type Standard and click Submit. Provide a catalog name and other details as shown.
3. Click Next and then Submit.
4. Perform the same set of steps for the other two catalog items (application and database instances) for the Dev group.
The steps above create a catalog with the three items listed below for the Dev user.
5. Optional: Uncheck the Provision New VM Using ISO Image checkbox (as in step 4 above) and select the image to map the catalog item to the corresponding vCenter template. This mapping prevents the operator/self-service user from altering resources used by the instance.
The Advanced catalog type allows mapping custom workflows to catalog items for end-user deployment. End users can then use these catalog items during a service request to execute workflows. Following is the procedure to create an advanced catalog item.
1. Click Policies > Catalogs and then Add. Pick Catalog Type of Advanced and click Submit.
2. Enter the details as shown and click Next.
3. Click Select and select the workflow of interest. In this case, select a shell workflow (Create VM from ISO), which creates a basic VM with the option to mount an ISO for install.
4. Click Next, Submit, and then OK.
5. When the above sequence of steps is completed, the DevGroup user (say, devuser1) will have access to an advanced catalog item mapped to the workflow "Create VM from ISO".
The Cloud Administrator creates tenant groups and users within the group as a prerequisite step. Following this step, the tenant group is associated with cloud resources and privileges assigned to users. Catalog items for self-service portal are then created and associated with tenant users. These steps are required prior to tenant user provisioning activities on the VersaStack Cloud platform with Cisco UCS Director. Tenant users generate a service request when one of the catalog items is selected for deployment with optional approvals prior to execution. Tenant Administrators and Operations personnel will then consume/release cloud resources as needed with chargeback tied to resource utilization.
Figure 7 Tenant Catalog for Self-Service Portal
The self-service portal in Cisco UCS Director is available out of the box, with catalog items created by the administrator as described in previous sections. When a catalog is created and made available to a group of users, the user has the capability to deploy the tasks and workflows associated with the catalog item. This is done through service requests, with provision for approvals if necessary, and with budget/resource constraints applied as set by the cloud administrator.
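Service requests can also be raised outside the portal. The sketch below shows the general shape of a call to the Cisco UCS Director REST API using Python's requests library; the operation name, opData parameter layout, host name, and API key are assumptions for illustration only and should be verified against the Cisco UCS Director REST API guide for this release. The GUI-driven procedure used in this validation follows.

import json
import requests

# Hypothetical sketch of submitting a catalog-based service request through the
# Cisco UCS Director REST API. The opName, opData layout, host name, and API key
# below are illustrative assumptions; verify them against the UCS Director REST
# API guide before use.
UCSD_HOST = "ucsd.cloud.versa.com"                 # placeholder UCSD address
API_KEY = "REPLACE_WITH_USER_API_ACCESS_KEY"       # from the user's UCSD profile

op_data = {"param0": "Dev_Web_Catalog",            # catalog item name (illustrative)
           "param1": "Dev_vDC",                    # target vDC (illustrative)
           "param2": 1}                            # quantity (illustrative)

resp = requests.get(
    "https://" + UCSD_HOST + "/app/api/rest",
    params={"formatType": "json",
            "opName": "userAPISubmitServiceRequest",  # assumed operation name
            "opData": json.dumps(op_data)},
    headers={"X-Cloupia-Request-Key": API_KEY},
    verify=False)                                  # lab setup, self-signed cert
print(resp.json())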
1. Go to the folder (Advanced) with the catalog item to be deployed and select the icon associated with the item.
2. Click Next and then provide details as below.
3. Click Next and Submit to generate a service request. Go to the Services tab and select the recently deployed SR for details on progress and outcome.
4. The instance corresponding to the above workflow is created and available under the Virtual Resources area. Click the VMs tab to see a list of VMs with their status.
5. The VM actions permitted for the user can be viewed by selecting any VM and right-clicking, as follows.
The permitted operations shown above are set by the administrator in the end-user self-service policy and can be modified as required. Service requests created by the user can be seen under the Services button. Service requests created by the administrator are not visible to the end user; however, the outcome of such requests is visible.
Workflows can be deployed manually through service requests, through triggers, or on a schedule. Triggers are used to execute workflows when specified conditions are met. As an example, a trigger is used below to create a new VM for added capacity when an application is approaching system limits.
1. From the main menu, click Policies > Orchestration. Click the Triggers tab and then click Add to add a trigger.
2. Click Next. In the Specify Conditions page, click the (+) icon to add a condition and click Submit.
Note: Type of Object to Monitor can be one of the following, with relevant parameters.
3. Back in the Specify Conditions screen, click Next.
4. In the Specify Workflow page, set Maximum Invocations to 1 (for now, or as many as required).
5. Select the workflow Create VM from ISO (or something of relevance to the condition) for "when trigger state becomes Active". Optionally, another workflow that does the reverse (if available) may be selected for "when trigger state becomes Clear" to reclaim resources when system demand goes down.
6. Click Next and provide details.
7. Click Submit and OK.
The trigger will create a service request once the hosts reach the threshold limit. A conceptual sketch of this behavior follows.
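Conceptually, a trigger is a periodic evaluation of a monitored metric against a condition, with a workflow fired when the trigger state changes. The following minimal sketch (in Python) models that behavior; the metric source, threshold, and workflow calls are placeholders, not UCS Director code.

# Conceptual sketch of trigger behavior: when the monitored value crosses the
# threshold the trigger becomes Active and fires a workflow (for example,
# "Create VM from ISO"); when it falls back below the threshold it becomes
# Clear and can fire a reclaim workflow.
def evaluate_trigger(get_metric, threshold, on_active, on_clear, state="Clear"):
    value = get_metric()
    if state == "Clear" and value >= threshold:
        on_active()          # e.g. submit the "Create VM from ISO" workflow
        return "Active"
    if state == "Active" and value < threshold:
        on_clear()           # e.g. submit a workflow that reclaims the VM
        return "Clear"
    return state

# Example evaluation with a fake metric source (host CPU utilization in percent).
state = evaluate_trigger(get_metric=lambda: 92.0,
                         threshold=90.0,
                         on_active=lambda: print("trigger Active: add capacity"),
                         on_clear=lambda: print("trigger Clear: reclaim capacity"))

A workflow may also be executed on a schedule, as follows.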
1. From the main menu, click Policies > Orchestration. In the Orchestration page, click the IBM Storwize folder and select the workflow Create VM from ISO. Right-click the workflow and select Schedule.
2. In Schedule Workflow, select the Recurrence Type Only Once and set the Start Time and User ID.
3. Click Submit and OK. Select the Workflow Schedules tab and verify the workflow schedule.
4. The workflow will execute at the given time, as shown below.
Here are some suggestions on when triggers and schedulers may be used in conjunction with workflows to further automate infrastructure management.
1. Trigger a notification (say, an email) if available (un-associated) compute/storage capacity is less than or equal to 15% of total capacity in the PoD.
2. Execute the ESX bare-metal workflow as a prerequisite step to adding capacity (elasticity), based on a trigger such as: available vCPUs less than or equal to 90% of capacity, or available memory less than or equal to 90% of the capacity of the PoD.
Assumption: there is un-associated capacity available in the PoD.
3. Trigger a notification (say, a text message) if available (un-associated) compute/storage capacity is less than or equal to 5% of total capacity in the PoD.
4. Schedule snapshots/clones for production VMs to ensure delivery of the SLA. Trigger an initial/full snapshot/clone at the time of creation and schedule incrementals once every week.
5. Trigger an end-of-life notification to testadm after 60 days (say) of creating a test VM, saying "VM at EOL. Will be deleted in 2 weeks. Please re-create/delete." Delete a test VM after 90 days to reclaim resources.
Resource limits at the group level, in units pertaining to either physical or virtual instances, can be set as shown below.
1. Select the Administration > Users and Groups > User Groups tab, then select the group of interest and click the Edit Resource Limits button.
2. Enter the limit parameters.
3. Click Save.
4. Click OK.
The Chargeback feature allows for accounting of resources. The total cost incurred by a group is the sum of all resource costs within the group. The total cost for the group may be capped through a budget policy, with provision to exceed it (allow over budget) if necessary. Resource limits, for a non-service-provider setup, can be set at the parent organization level or below (group). Irrespective of the level at which limits are set, the sum of all resources used cannot exceed the limit set at the next higher level.
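As an illustration of the aggregation rule just described, the short sketch below sums per-group costs and checks them against a limit set at the next higher (parent) level; the group names and amounts are placeholders, not values from this setup.

# Illustrative check of the budget rule described above: the sum of resource
# costs consumed by the groups must stay within the limit set at the next
# higher (parent organization) level. Group names and amounts are placeholders.
group_costs = {"DevGroup": 420.0, "TestGroup": 310.0, "ProdGroup": 880.0}
parent_limit = 2000.0                  # limit set at the parent organization level

total = sum(group_costs.values())
if total > parent_limit:
    print("Over budget: %.2f exceeds the parent limit of %.2f" % (total, parent_limit))
else:
    print("Within budget: %.2f of %.2f used" % (total, parent_limit))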
1. Select Administration > Users and Groups, select the group created (DevGroup), and click Budget Policy. Enabling Budget Watch is required for monitoring resource usage for this group. The other option allows for exceeding the allocated budget, if necessary.
2. Click Save.
3. Click OK.
The Dashboard provides a snapshot and trend of selected data in easy-to-read graphs. It forms the basis of monitoring and provides a summary of the state of the enterprise in a single pane. This functionality can be enabled as follows:
1. Select the admin account in the top right corner of the screen and click the Dashboard tab. Then, select the Enable Dashboard option and click Apply.
2. Click the Dashboard tab and turn on automatic refresh.
3. From the main menu, click Virtual > Cloud VMWare.
4. Click the Summary tab, select VMs Active vs Inactive, click the top right corner of the report, and select Add to Dashboard.
5. Repeat for each report you want to add to the Dashboard.
Following is an example screenshot of the metrics and reports selected.
The admin user has the necessary privileges to monitor the entire cloud or converged stack for a global view. Selecting each of the components (VMWare, Compute, Network, or Storage) below brings up comprehensive sets of metrics in tabbed displays for the component. Following is a sampling of the metrics and views offered.
1. From the main menu, select Converged and then click VersaCloud for individual components and their status.
2. Select VMWare, then click the Summary tab to get familiar with the tool.
3. The Topology tab to the right presents various options to visualize physical and virtual instance mappings.
4. Selecting the Compute category brings up the following set of tabs with polled information for each compute component and other relevant data.
5. A similar operation (selecting IBM V7000 from the Storage section) results in the following screen, with tabs that present comprehensive data on the storage array.
Cisco UCS Director uses a Bare Metal Agent (BMA) to automate the use of a Preboot Execution Environment (PXE) to install operating systems on bare-metal servers and on hypervisor virtual machines. Bare Metal Agent provides the following services that are required for a functional PXE install environment:
· Dynamic Host Configuration Protocol (DHCP)
· Hypertext Transfer Protocol (HTTP)
· Trivial File Transfer Protocol (TFTP)
When this environment is operational and Bare Metal Agent and Cisco UCS Director are correctly configured, it is possible to build PXE installation tasks into any Cisco UCS Director infrastructure workflow. Following is the architecture used in this VersaStack setup, where the PXE and MGMT traffic flows over VLAN 3175.
1. Right-click the deployed BMA and select Open Console.
2. Wait for the initial boot of the UCSD-BMA until a blue screen appears on the console.
3. Back in UCS Director, click Administration > Guided Setup and select the Bare Metal Agent Setup wizard.
4. Provide the required inputs and click Next.
5. Click Next and then provide details on the DHCP network. Then, click Next and Close.
6. At this point, go to Administration > Physical Accounts and select the Bare Metal Agents tab. Select the agent and check its status and interfaces using the buttons on top. Then, click the Set default BMA button to pick it as the default BMA.
7. Select the BMA and click Start Services.
8. The BMA should show all services enabled (DHCP, TFTP, and HTTP) and be reachable with a status of Active.
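As an optional sanity check from a management host, the sketch below probes only the BMA's HTTP service over TCP; DHCP and TFTP are UDP services and are better verified from the BMA status screen shown above. The BMA address is a placeholder.

import socket

# Minimal reachability check for the BMA HTTP service (TCP port 80). DHCP and
# TFTP are UDP services and are better verified from the BMA status screen in
# UCS Director. The address below is a placeholder for the BMA IP on the
# PXE/MGMT VLAN (3175).
BMA_ADDRESS = "192.168.175.10"           # placeholder

def tcp_port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("BMA HTTP reachable:", tcp_port_open(BMA_ADDRESS, 80))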
Cisco UCS Director has the capability to install operating systems and hypervisors on servers through customized workflows that automate the entire process. The preference on Cisco UCS is to provision servers to boot from the SAN, as this allows for easy migration of workloads from one physical server to another, much like on virtual platforms. This functionality allows for two use cases:
1. Maintain resiliency and capacity of the integrated platform by pre-provisioning resources with OS/hypervisors to readily accept workloads in the event of hardware failures.
2. Provide much needed elasticity for a cloud platform through quick and consistent provisioning of hardware resources based on demand.
Bare-metal provisioning is required to meet one of the essential characteristics (elasticity) of a shared platform (IaaS) as stipulated by NIST.
Prior to using an image (say, ESXi 5.5u2), it needs to be uploaded and made available for PXE boot. This is done as follows:
1. Upload the binary to the /opt/image directory on the BMA (say, through sftp/pscp; a scripted example follows this procedure).
2. Through a terminal session (such as PuTTY), log in to the BMA server as root.
3. Change directory: cd /opt/infra
4. Set up your environment: ./infraenv.sh
5. Run the ISO extractor script: ./isoExtractor.sh
Following is a screenshot of the proceedings:
At this stage, a directory for the extracted image (RHEL71 in the example shown) is created under /opt/cnsaroot/images.
6. Make the ISO available to the BMA through a common datastore.
7. Force inventory collection.
8. Go to the Administration > System > System Tasks tab.
a. Expand VMware VM Tasks, select VMware Inventory Collector, and click Run Now at the top of the screen.
b. Expand Cisco UCS Tasks, select UCS Inventory Collector, and click Run Now at the top of the screen.
9. In UCS Director, select Policies > Virtual/Hypervisor Policies > Service Delivery and click the Mark Datastores for ISO Inv. Collection tab. Pick the common share and click Mark Datastores for ISO.
10. Next, go to the Guest OS ISO Image Mapping Policy tab, select the policy, and add the new ISO.
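The upload in step 1 of this procedure can be scripted if desired. The following is a minimal sketch using the paramiko library over SFTP; the host address, credentials, and file names are placeholders, and any SFTP/SCP client (for example, pscp) works equally well.

import paramiko

# One way to script the upload in step 1: copy the ISO to /opt/image on the BMA
# over SFTP using the paramiko library. Host address, credentials, and file
# names are placeholders.
BMA_HOST = "192.168.175.10"                         # placeholder BMA address
LOCAL_ISO = "VMware-ESXi-5.5u2.iso"                 # placeholder local file
REMOTE_PATH = "/opt/image/VMware-ESXi-5.5u2.iso"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(BMA_HOST, username="root", password="REPLACE_ME")
sftp = ssh.open_sftp()
sftp.put(LOCAL_ISO, REMOTE_PATH)                    # upload the ISO to /opt/image
sftp.close()
ssh.close()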
Cisco UCS Director Orchestrator allows for automation of out-of-the-box tasks arranged as workflows, using an intuitive graphical interface called the workflow designer. Please see Appendix A for a list of relevant tasks supported in version 5.3.1. Both virtual and physical tasks can be included to design custom workflows. Triggers can be set up to initiate workflows when specified conditions are met, or workflows may be executed manually. A typical workflow consists of the following elements:
· Workflow Designer (GUI interface)
· Predefined Tasks for the supported component
The simplest workflow consists of two connected tasks. A task represents a particular action or operation. The workflow determines the order in which Orchestrator executes your tasks. When constructing workflows by dragging and dropping tasks, it is necessary to route the output of one task into the input of the next task. This connecting of multiple tasks is how a workflow is created; a conceptual sketch follows.
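The data-flow idea behind the designer can be expressed in a few lines: tasks run in order, and the outputs of earlier tasks become candidate inputs for later ones (the way, for example, an output such as 1303.SP_VHBA1_WWPN is mapped into a later task in the bare-metal workflow shown later in this section). The sketch below is a conceptual model only, not UCS Director code, and the task names and outputs are placeholders.

# Conceptual model of an Orchestrator workflow: tasks execute in order, and the
# outputs of earlier tasks are collected so later tasks can consume them (much
# as the designer lets an output such as a WWPN be mapped into a later task's
# input). This is an illustration of the data flow only, not UCS Director code.
def run_workflow(tasks):
    context = {}                                  # accumulated task outputs
    for name, task in tasks:
        outputs = task(context)                   # a task may read earlier outputs
        context.update({name + "." + key: value for key, value in outputs.items()})
    return context

tasks = [
    ("CreateServiceProfile", lambda ctx: {"SERVICE_PROFILE_IDENTITY": "SP-01"}),
    ("SetupPXEBoot",         lambda ctx: {"OUTPUT_PXE_BOOT_ID": 134}),
    ("PowerOnServer",        lambda ctx: {"STATUS": "powered-on"}),
]
print(run_workflow(tasks))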
1. Click Policies > Orchestration and select the Workflows tab.
2. Click (+) Add Workflow and provide a Workflow Name and folder to place the workflow.
3. Click Next twice and then Submit to create a shell workflow.
4. Expand the folder (IBM Storwize) where the workflow was created and select it to bring up the workflow designer.
5. The design pane to the right, with the Start and Completed conditions, is where tasks from the library (Available Tasks) are dragged and dropped to create the workflow. The search pane under Available Tasks accepts keywords to identify and locate required tasks, as shown below.
6. Dropping the task in the design pane opens the task for inputs.
7. Once the required inputs are provided, the task is ready to be linked.
8. Drag and drop other required tasks as before and provide the necessary inputs for each to create a complete workflow. Below, we have a newly created ESX bare-metal workflow (Deploy ESXi FC host on IBM Storwize V7000). The workflow uses a LUN on fibre channel (FC) residing on the IBM Storwize V7000 system for booting a new server. The initial image is picked up from the BMA through PXE. The tasks that constitute the workflow are:
a. Create UCS Service Profile from Template
b. Setup PXE Boot
c. Create IBM Storwize Volume
d. Create IBM Storwize Host
e. Map IBM Storwize Volume to Host
f. Configure SAN Zoning
g. Power ON UCS Server
h. Monitor PXE Boot
i. Reset UCS Server and
j. Register Host with vCenter
The following presents the inputs for the tasks constituting the bare-metal workflow built for VersaStack, with the boot LUN on fibre channel (FC) residing on the IBM Storwize V7000.
Note: Only screenshots where input is required are shown. Continue to the next screen (click Next) for each task until you arrive at a screen as shown below to provide the required inputs.
9. Create UCS Service Profile from Template.
10. The service profile template in UCS Manager, Versa-VM-Host-Fabric-A, is as shown below.
11. Setup PXE Boot.
Variable | Input/Selection
Server MAC Address | 1303.OUTPUT_UCS_BLADE_MAC_ADDRESS
Server Hostname | Host Name
12. Create IBM Storwize Volume.
13. Create IBM Storwize Host.
Variable | Input/Selection
Host Name | Host Name [or] IBM_STORWIZE_VOLUME_NAME
Fibre Channel Port Definitions | 1303.SP_VHBA1_WWPN
14. Map IBM Storwize volume to Host.
Variable | Input/Selection
Volume | 1329.IBM_STORWIZE_OUTPUT_VOLUME_IDENTITY
Host | 840.IBM_STORWIZE_OUTPUT_HOST_IDENTITY
15. Generic Configure SAN Zoning.
Variable | Input/Selection
Service Profile | 1303.SERVICE_PROFILE_IDENTITY
Select vHBA | 1303.SP_VHBA1
VLAN ID | VLAN ID
VSAN ID | 1303.SP_VSAN1
Select vHBA | 1303.SP_VHBA2
VSAN ID | 1303.SP_VSAN2
16. Power ON UCS Server.
Variable | Input/Selection
Server | 1303.SERVER_IDENTITY
17. Monitor PXE Boot.
Variable | Input/Selection
PXE Request ID | PXEBoot_134.OUTPUT_PXE_BOOT_ID
With a maximum wait time of 1 hour.
18. Wait for Specified Duration. Set the value to 30 seconds.
19. Reset UCS Server.
Variable | Input/Selection
Server | 1303.SERVER_IDENTITY
20. Register Host with vCenter.
Variable | Input/Selection
PXEBoot Request ID | PXEBoot_134.OUTPUT_PXE_BOOT_ID
The entire workflow consists of creating a service profile from a template, associating the UCS service profile with an available blade, creating the required FC LUN on the IBM Storwize V7000 storage array, mapping it to a host, and SAN zoning on the Cisco MDS switches. The ESXi 5.5u2 PXE image from the Bare Metal Agent (BMA) is also installed and the server rebooted twice. This entire process, with waits, took approximately 22 minutes from start to finish, as shown in the following service request report.
Use cases are a well-known tool for expressing requirements at a high level. They provide a description of how groups of users and their resources may interact with one or more cloud computing systems to achieve specific goals.
The following section presents descriptions of some actors, their goals, and an idea of success and failure conditions, with a view to clarifying the interactions while meeting a set of IaaS tasks defined in the NIST model.
Table 4 Actors
Actor Name | Description
unidentified-user | An entity in the Internet (human or script) that interacts with a cloud over the network and that has not been authenticated.
cloud-subscriber | A person or organization that has been authenticated to a cloud and maintains a business relationship with a cloud.
cloud-subscriber-user | A user of a cloud-subscriber organization who will be consuming the cloud service provided by the cloud-provider as an end user. For example, an organization's email user who is using a SaaS email service the organization subscribes to would be a cloud-subscriber's user.
cloud-subscriber-administrator | An administrator type of user of a cloud-subscriber organization that performs (cloud) system related administration tasks for the cloud-subscriber organization.
cloud-user | A person who is authenticated to a cloud-provider but does not have a financial relationship with the cloud-provider.
payment-broker | A financial institution that can charge a cloud-subscriber for cloud services, by either checking or credit card.
cloud-provider | An organization providing network services and charging cloud-subscribers. A (public) cloud-provider provides services over the Internet.
transport-agent | A business organization that provides physical transport of storage media such as high-capacity hard drives.
legal-representative | A court, government investigator, or police.
identity-provider | An entity that is responsible for establishing and maintaining the digital identity associated with a person, organization, or (in some cases) a software program. [NSTIC]
attribute-authority | An entity that is responsible for creating and managing attributes (e.g., age, height) about digital identities, and for asserting facts about attribute values regarding an identity in response to requests. [NSTIC]
cloud-management-broker | A service providing cloud management capabilities over and above those of the cloud-provider and/or across multiple cloud-providers. The service may be implemented as a commercial service apart from any cloud-provider, as cross-provider capabilities supplied by a cloud-provider, or as cloud-subscriber-implemented management capabilities or tools.
Cisco UCS Director supports user roles. These user roles are system-defined and available by default.
Actors: unidentified-user (devuser1), cloud-subscriber (devadm), payment-broker, cloud-provider (admin).
Goals: The cloud-provider opens a new account for an unidentified-user, who then becomes a cloud-subscriber.
Assumptions: The service offered, cost, and payment mechanism are known and agreed upon, and the user request is valid.
Success Scenario: The unidentified-user provides:
· A unique name for the new account (devuser2)
· Optional: information about the unidentified-user's financials, and
· When the unidentified-user wants the account opened (now).
The cloud-provider verifies the unidentified-user's information. If the information is deemed valid and approved by cloud-provider, the unidentified-user becomes a cloud-subscriber and the cloud-provider returns authentication information that the cloud-subscriber can subsequently use to access the service.
Observation: As admin, with system admin privileges, we created a new user, devadm, with Group Admin privileges for the DevGroup. We logged back in as devadm and ascertained that access was as provisioned; devadm could see and do only what was allowed by the admin user.
As devadm, create a user with end-user privileges:
1. Select Administration > Organization, select the Users tab, and click (+) Add.
2. After entering the above details, click Add to add the new user devuser2.
3. Log in as devuser2 to confirm.
Actors: unidentified-user, cloud-subscriber, cloud-provider, payment-broker.
Goals: Close an existing account belonging to a group for a cloud-subscriber.
Success Scenario: The cloud-subscriber requests closing an account. The cloud-provider:
· Performs the requested actions on the timetable requested;
· Deletes the cloud-subscriber's payment-broker information from the cloud-provider's systems; and
· Revokes the cloud-subscriber's authentication information. Now the cloud-subscriber is classified as an unidentified-user.
Observation: We proceeded to close (delete) devuser2 as devadm. Logging in as devuser2 after the deletion was unsuccessful. Data categorized as ‘public’ was still available to the group admin account (devadm) and hence recoverable if necessary.
Actors: unidentified-user, cloud-subscriber, cloud-provider.
Goals: Cloud-provider terminates a cloud-subscriber's account.
Assumptions: A cloud-provider determines that a cloud-subscriber's account should be terminated per the terms of the SLA. The issue of multiple accounts for a cloud-subscriber is not considered part of the scope of this use case, nor is the issue of retaining sufficient information to recognize an abusive cloud-subscriber trying to create a new account to continue the abuse.
Success Scenarios: (terminate, IaaS) Possible reasons for termination may be that the cloud-subscriber has violated acceptable usage guidelines (e.g., by storing illegal content, conducting cyber-attacks, or misusing software licenses), or that the cloud-subscriber is no longer paying for service. The cloud-provider sends a notice to the cloud-subscriber explaining the termination event and any actions the cloud-subscriber may take to avoid it (e.g., paying overdue bills, deleting offending content) or to gracefully recover data. Optionally, the cloud-provider may freeze the cloud-subscriber's account pending resolution of the issues prompting the termination. The cloud-provider then performs the requested actions, charges the cloud-subscriber according to the terms of the service, notifies the cloud-subscriber that the account has been terminated, deletes the cloud-subscriber's payment information from the cloud-provider's system, and revokes the cloud-subscriber's identity credentials. At this point, the cloud-subscriber becomes an unidentified-user.
Test/Observation: As ‘admin’, a password reset without revealing the new password locks the user out while retaining data, providing an opportunity for remediation. A permanent account delete removes the user and associated data from the system and converts the user into an unidentified-user.
Actors: cloud-subscriber, cloud-provider, transport-agent.
Goals: The cloud-subscriber initiates a copy of data objects from the cloud-subscriber's system to a cloud-provider's system. Optionally, protect the transferred objects from disclosure.
Assumptions: Assumes the use case "Open an Account" for the cloud-subscriber on the cloud-provider's system. The cloud-subscriber has modify access to a named data object container on the cloud-provider's system.
Success Scenario: (cloud-subscriber-to-network copy, IaaS) The cloud-subscriber determines a local file for copying to the cloud-provider's system. The cloud-subscriber issues a command to the cloud-provider's system to copy the object to a container on the cloud-provider's system. The command may perform both the object creation and the data transfer, or the data transfer may be performed with subsequent commands. The command specifies the location of the local file, the data encoding of the local file, and the name of the new object within the container.
Observation: There are two scenarios for this case: an ‘upload’ option for placing ova/zip/jar files for build purposes, and a second method pertaining to file/data transfer from a virtual instance. The upload option is strict, with only certain types of files allowed for upload to ‘public’, ‘user’, or ‘group’ space. Files uploaded to public space are available to all users in the group.
Actors: unidentified-user, cloud-subscriber, cloud-provider.
Goals: Erase a data object on behalf of a cloud-subscriber or unidentified-user.
Assumptions: One or more data objects already exist in a cloud-provider's system. A request to erase a data object includes the unique identifiers of the objects to delete. There is no redundant data storage by cloud-provider or redundant copies are deleted together.
Success: A cloud-subscriber sends a delete-objects request to the cloud-provider's system. At the requested deletion time, the system disables all new attempts to access the object.
Observation: A user with the privilege to delete can remove images and data from the VMs created. The deleted image becomes unavailable to others in the group as well.
Actors: cloud-subscriber, cloud-subscriber-administrator, cloud-provider
Goals: The cloud-subscriber needs to provision (create) user accounts for cloud-subscriber-users to access the cloud. Optimally, the cloud-subscriber requires the synchronization of enterprise system-wide user accounts from enterprise data center-based infrastructure to the cloud, as part of the necessary process to streamline and enforce identical enterprise security (i.e., authentication and access control policies) on cloud-subscriber-users accessing the cloud.
Assumption: The cloud-subscriber has well defined policies and capabilities for identity and access management for its enterprise IT applications and data objects. The cloud-subscriber has enterprise infrastructure to support the export of cloud-subscriber-user account identity and credential data. The cloud-subscriber can establish trusted connections to these cloud services.
Success: This scenario illustrates how a cloud-subscriber can provision accounts on the Versacloud (IaaS).
Observation: User account provisioning allows for local and domain user creation (User Group -> Domain Users).
Actors: cloud-subscriber, cloud-subscriber-user, cloud-provider, identity-provider (optional)
Goals: The cloud-subscriber-user should be able to authenticate through a central LDAP/Active Directory system.
Assumption: The cloud-subscriber-user's account has been already provisioned in the cloud, see use case Identity Management – User Account Provisioning.
Success: This scenario illustrates how a cloud-subscriber-user can authenticate against a cloud-based authentication service using the appropriate credentials to gain access to cloud-based applications/services.
Observation: A combination of steps such as setting Authentication Preferences, LDAP Integration and a Domain group account provides necessary mechanism.
Actors: cloud-subscriber, cloud-provider
Goals: The cloud-subscriber should have the capability to create VM images that meet functions, performance and security requirements and launch them as VM instances to meet IT support needs.
Assumption: The cloud-subscriber has an account with an IaaS cloud service that enables creation of Virtual Machine (VM) images and launching of new VM instances. The cloud-provider shall offer the following capabilities for VM Image creation to the cloud-subscriber:
· A set of pre-defined VM images that meets a range of requirements (O/S version, CPU cores, memory, and security)
· Tools to create a new VM image from scratch
The cloud-provider shall support the following capabilities with respect to launching of a VM instance:
· Secure administration of the cloud-subscriber's VM instance through the ability to configure certain ports (e.g., opening port 3389 for Windows to enable remote desktop and port 22 for Linux to enable an SSH session).
Observation: A generic instance was created from the self-service catalog. Provisioning succeeded after sufficient funds were made available for the group and a budget ceiling was removed.
Actors: cloud-subscriber, cloud-provider
Goals: A cloud-subscriber stops, terminates, reboots, starts, or otherwise manages the state of a virtual instance.
Assumptions: A suitable VM image (operating system executables and configuration data) exists. Possible formats include OVF.
Success: A cloud-subscriber identifies a VM image to run. The cloud-provider provisions the VM and performs the loading and boot-up cycle for the selected image for the requesting cloud-subscriber, including power-on, power-off, and resizing of the VM.
Observation: The selected VM was powered off from UCS Director, and memory and CPU were resized prior to power-on. vCenter status was monitored and noted to reflect correct operation.
Actors: cloud-subscriber, cloud-provider
Goals: The cloud-subscriber should have the capability to decommission VM resources that are no longer needed or do not meet functional, performance, and security requirements, and either reclaim such resources or relinquish them to the provider.
Assumption: The cloud-subscriber has an account with an IaaS cloud service that enables decommissioning/removal of Virtual Machine (VM) images.
Success: The cloud-subscriber selects a specific virtual machine image supplied by the cloud-provider (O/S, CPU cores, memory, and security), which is decommissioned to reclaim/relinquish the associated resources.
Observation: A shutdown of the VM in question, while reducing active resource usage from a customer perspective, does not return resources for reuse by the provider. A VM delete option is preferred and sought.
Below is a summary of main components used in building the VersaStack.
Please refer to the “Bill of Material” in the following link for a complete list:
http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/Versastack_n9k_vmw55.pdf
Part Number | Product Description | Quantity
Cisco Nexus 9300 Switching | |
N9K-C9372PX | Nexus 9300 with 48p 10G SFP+ and 6p 40G QSFP+ | 2
N9KDK9-612I3.1 | Nexus 9500 or 9300 Base NX-OS Software Rel 6.1(2)I3(1) | 2

Part Number | Product Description | Quantity
Cisco MDS FC Switch | |
DS-C9148S-12PK9 | MDS 9148S 16G FC switch, w/ 12 active ports | 2
M91S5K9-6.2.9 | MDS 9100 Supervisor/Fabric-5, NX-OS Software Release 6.2.9 | 2

Part Number | Product Description | Quantity
Cisco UCS Unified Compute System | |
UCSB-5108-AC2 | UCS 5108 Blade Server AC2 Chassis, 0 PSU/8 fans/0 FEX | 1
UCS-IOM-2208XP | UCS 2208XP I/O Module (8 External, 32 Internal 10Gb Ports) | 2
UCSB-B200-M4 | UCS B200 M4 w/o CPU, mem, drive bays, HDD, mezz | 4
UCS-CPU-E52650D | 2.30 GHz E5-2650 v3/105W 10C/25MB Cache/DDR4 2133MHz | 8
UCS-MR-1X162RU-A | 16GB DDR4-2133-MHz RDIMM/PC4-17000/dual rank/x4/1.2v | 32
UCSB-MLOM-40G-01 | Cisco UCS VIC 1240 modular LOM for blade servers | 4

Part Number | Product Description | Quantity
Cisco UCS UCS-FI-6248UP Fabric Interconnect | |
UCS-FI-6248UP | UCS 6248UP 1RU Fabric Int/No PSU/32 UP/ 12p LIC | 2
N10-MGT012 | UCS Manager v2.2 | 2

Part Number | Product Description | Quantity
Cisco FEX (optional) | |
N2K-C2232PF | Nexus 2232PP with 16 FET, choice of airflow/power | 2

Part Number | Product Description | Quantity
Cisco UCS Rack Servers | |
UCSC-C220-M4S | UCS C220 M4 SFF w/o CPU, mem, HD, PCIe, PSU, rail kit | 2
UCS-CPU-E52640D | 2.60 GHz E5-2640 v3/90W 8C/20MB Cache/DDR4 1866MHz | 4
UCS-MR-1X162RU-A | 16GB DDR4-2133-MHz RDIMM/PC4-17000/dual rank/x4/1.2v | 16
UCSC-PCIE-CSC-02 | Cisco VIC 1225 Dual Port 10Gb SFP+ CNA | 2
The following table details the software revisions for the various components of the VersaStack architecture:
Layer | Device | Version or Release | Details
Compute | Cisco UCS fabric interconnect | 2.2(3e) | Embedded management
Compute | Cisco UCS C 220 M3/M4 | 2.2(3e) | Software bundle release
Compute | Cisco UCS B 200 M3/M4 | 2.2(3e) | Software bundle release
Network | Cisco Nexus 9396 | 6.1(2)I2(2a) | Operating system version
Network | Cisco MDS 9148S | 6.2(9) | FC switch firmware version
Storage | IBM Storwize V7000 | 7.4.0.2 | Software version
Storage | IBM Storwize V7000 Unified | 1.5.1.2-1 | Software version
Software | Cisco UCS hosts | VMware vSphere ESXi™ 5.5u2 | Operating system version
Software | VMware vCenter™ | 5.5u2 | VM (1 each): VMware vCenter
Software | Cisco UCS Director | 5.3.1 | Cloud orchestration software
The IaaS platform discussed here, deployed using UCS Director, uses the common components of Cisco UCS compute and Nexus switching with the IBM Storwize V7000 Unified storage system. It addresses business requirements such as agility and cost, with security. These functional requirements promote uniqueness and innovation in the integrated computing stack, augmenting the original VersaStack architecture with support for the essential IaaS services stipulated by the NIST model. The result is a framework for the easy and efficient consumption of resources, both within and external to the integrated platform, in the form of an application-ready IaaS. This setup is designed and built to address the diverse workloads, activities, and business goals of any organization. The design and validation discussed here describe the benefits of UCS Director on the VersaStack integrated stack.
VersaStack for Data Center – Deployment of VMware 5.5 on Cisco UCS and IBM Storwize V7000 Unified storage:
http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/UCS_CVDs/Versastack_n9k_vmw55.pdf
The NIST Definition of Cloud Computing, Peter Mell and Timothy Grance.
http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
Cloud Computing Use Cases, National Institute of Standards and Technology (NIST).
http://www.nist.gov/itl/cloud/use-cases.cfm
Cloud Computing Use Cases ver. 1.0, Cloud Standards Customer Council, 10/2011.
http://www.cloudstandardscustomercouncil.org/use-cases/CloudComputingUseCases.pdf
Cisco UCS Security: Target of Evaluation (ToE), 11/2012.
https://www.commoncriteriaportal.org/files/epfiles/st_vid10403-st.pdf
Cisco UCS Director Literature: http://www.cisco.com/en/US/products/ps13050
Cisco Validated Designs: http://www.cisco.com/go/designzone
Cisco UCS Director Installation and Upgrade Guide on VMware vSphere, Release 5.3.
Cisco UCS Director Administration Guide, Release 5.3.
Cisco Systems Inc., Whitepaper: “Managing Real Cost of On-Demand Enterprise Cloud Services with Chargeback Models”
http://www.techdata.com/content/tdcloud/files/cisco/Cloud_Services_Chargeback_Models_White_Paper.pdf
Cisco UCS Director Bare Metal Agent Installation and Configuration Guide.
Following is a list of tasks grouped by category for the components in this CVD. It is not comprehensive and is only provided to inform on out-of-the-box task support for relevant components. These tasks may be used to create workflows through the designer tool in Cisco UCS Director, as shown before. For detailed information, go to the Cisco UCS Director Online Help.
Log in to UCS Director, select Administration > Orchestration, and then click the Task Library button at the top to generate the document for the first time. This brings up the hyperlinked list of tasks supported by that version of UCS Director. Selecting an individual task presents detailed input and output information for the task, as follows:
List of Tasks pertaining to components covered in this CVD.
Custom - VMware Host Tasks
Custom Tasks
Containers
VDCProvisioning
User and Group Tasks
vDC Tasks
Context Mapper Tasks
IBM Storwize Volume Tasks
IBM Storwize Pool Tasks
IBM Storwize Host Tasks
IBM Storwize Mdisk Tasks
IBM Storwize FileSet Tasks
IBM Storwize Snapshot Tasks
IBM Storwize Snapshot Rule Tasks
IBM Storwize DataStore Tasks
IBM Storwize FileShare Tasks
IBM Storwize FileSystems Tasks
IBM Storwize FC Consistency Group Tasks
IBM Storwize RemoteCopy Tasks
IBM Storwize Tasks
IPMI Tasks
General Tasks
VMware Host Tasks
Cisco Network Tasks
Cisco Security Tasks
Network Services Tasks
System Activity Tasks
RHEV KVM VM Tasks
Service Container Tasks
Tier3 VM Tasks
Cisco UCS Tasks
CIMC Tasks
VMware VM Tasks
VMware Network Tasks
VMware Storage Tasks
PNSC Tasks
Amazon VM Tasks
F5 Big IP Tasks
Cisco Fabric Tasks
Procedural Tasks
Generic VM Tasks
Business Process Tasks
Total number of tasks: 1508
The workflow below is a variant of the previous ESX bare-metal workflow in that it uses a similar set of tasks, with minor variations, to install a CentOS Linux distribution on a Cisco UCS blade. The operating system is installed on FC LUNs from the IBM Storwize V7000 Unified storage system. This workflow is shown to illustrate the flexibility and common methodology used for building infrastructure with UCS Director:
Overall workflow with list of tasks:
Here is a screenshot to give an idea of workflow progress with time spent at each stage:
Task Details/Inputs:
Note: In this section, only screenshots where the default inputs are not taken are shown. Screenshots follow the same sequence as the tasks in the above workflow, with the task name shown at the top left within parentheses.
It takes approximately 10 minutes for this workflow to execute and another 5 minutes for the install to complete and the system to be rebooted. This platform can be the foundation for a bare-metal workload or provide the basis for a Docker container, which requires Linux as the operating system. This step could be followed by the execution of custom scripts to dockerize the Linux platform. One option is to create a set of custom tasks and integrate them with the above workflow after the OS is installed. A basic task running a shell command (uname -a) and its output is shown below for reference:
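For reference, the kind of post-install step described above can be modeled as a remote shell command over SSH, as in the minimal sketch below. The host address and credentials are placeholders, and in UCS Director this would be implemented as a custom task or script module rather than standalone Python.

import paramiko

# Sketch of the kind of post-install step described above: run a shell command
# (here, uname -a) on the freshly provisioned CentOS host over SSH. The host
# address and credentials are placeholders.
HOST = "192.168.175.50"                 # placeholder address of the new host

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(HOST, username="root", password="REPLACE_ME")
stdin, stdout, stderr = ssh.exec_command("uname -a")
print(stdout.read().decode().strip())
ssh.close()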
K. Shivakumar Shastri, Technical Marketing Engineer (TME), DC Solutions Engineering, Cisco Systems, Inc.
Shiva has about 20 years of experience in various aspects of Systems Engineering. Currently, his focus is on cloud and hyper-convergence on integrated platforms.
Jeffrey Fultz, Cisco Systems, Inc.
Chris O’Brien, Cisco Systems, Inc.
Phani Penmethsa, Cisco Systems, Inc.
Muhammad Ashfaq, Cisco Systems, Inc.
Sally Neate, IBM