The following sections provide examples of Cisco VIM OpenStack configurations in the setup_data.yaml file.
OpenStack Admin Credentials
ADMIN_USER: <admin>
ADMIN_TENANT_NAME: <admin tenant>
OpenStack HAProxy and Virtual Router Redundancy Protocol Configuration
external_lb_vip_address: <Externally routable IP address on the API network>
VIRTUAL_ROUTER_ID: vrrp_router_id #eg: 49 (range of 1-255)
internal_lb_vip_address: <Internal IP address on mgmt network>
#internal_lb_vip_ipv6_address: 2001:c5c0:1234:5678:1002::10 <== optional, enable IPv6 for OpenStack admin endpoint
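Putting the parameters above together, a minimal sketch of this portion of setup_data.yaml might look as follows (all addresses and the router ID are illustrative values, not defaults):

```yaml
external_lb_vip_address: 172.29.86.9    # externally routable IP on the API network
internal_lb_vip_address: 10.23.220.10   # internal IP on the management network
VIRTUAL_ROUTER_ID: 49                   # VRRP router ID, range 1-255
#internal_lb_vip_ipv6_address: 2001:c5c0:1234:5678:1002::10  # optional IPv6 admin endpoint
```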
OpenStack DNS Name Configuration
For web and REST
interfaces, names are commonly used instead of IP addresses. You can set the
optional external_lb_vip_fqdn parameter to assign a name that resolves to the
external_lb_vip_address. You must configure the services to ensure the name and
address match. Resolution can be made through DNS and the Linux /etc/hosts
files, or through other options supported on your hosts. The Cisco VIM
installer adds an entry to /etc/hosts on the management and other Cisco NFVI
nodes to ensure that this resolution can be made from within the pod. You must
ensure the resolution can be made from any desired host outside the pod.
#external_lb_vip_fqdn: <host or DNS name matching external_lb_vip_address>
#external_lb_vip_ipv6_address: 2001:c5c0:1234:5678:1003::10 <== optional, enable IPv6 for OpenStack public endpoint
VIRTUAL_ROUTER_ID: <vrrp router id, eg:49>
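For hosts that resolve the name through /etc/hosts rather than DNS, an entry of the following form maps the FQDN to the external VIP (both the address and name are illustrative):

```
172.29.86.9   cloud.example.com
```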
OpenStack TLS and HTTPS Configuration
Enabling TLS is
important to ensure the Cisco VIM network is secure. TLS encrypts and
authenticates communication to the cloud endpoints. When TLS is enabled, two
additional pieces of information must be provided to the installer: haproxy.pem
and haproxy-ca.crt. These files must be placed in the
~/installer-xxxx/openstack-configs directory.
haproxy.pem is the
server side certificate file in PEM format. It must include the server
certificate, any intermediate certificates, and the private key for the server.
The common name of the certificate must match the external_lb_vip_address
and/or the external_lb_vip_fqdn as configured in the setup_data.yaml file.
haproxy-ca.crt is the certificate of the trusted certificate authority that
signed the server-side certificate.
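As a rough illustration of the name-matching requirement, the sketch below checks a certificate common name against the configured external_lb_vip_fqdn, allowing a single left-most wildcard label. The function is illustrative only; real TLS clients apply full RFC 6125 hostname matching, including subject alternative names.

```python
import re

def cn_matches(cert_cn: str, configured_name: str) -> bool:
    """Check whether a certificate common name matches the configured
    name, allowing one wildcard in the left-most label only."""
    if cert_cn.startswith("*."):
        # The wildcard covers exactly one DNS label.
        pattern = r"[^.]+" + re.escape(cert_cn[1:])
        return re.fullmatch(pattern, configured_name) is not None
    return cert_cn.lower() == configured_name.lower()

print(cn_matches("cloud.example.com", "cloud.example.com"))  # True
print(cn_matches("*.example.com", "cloud.example.com"))      # True
print(cn_matches("*.example.com", "a.b.example.com"))        # False
```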
For production clouds,
these certificates should be provided by a trusted third party CA according to
your company IT policy. For test or evaluation clouds, self-signed certificates
can be used to quickly enable TLS. For convenience, the installer includes a
script that creates and installs self-signed certificates.
Note |
Do not use the
certificates generated by this tool for production. They are for test purposes
only.
|
To use this tool, make
the following changes to the setup data file, then run the tool:
external_lb_vip_address: <IP address on external network>
external_lb_vip_tls: True
external_lb_vip_fqdn: <host or DNS name matching external_lb_vip_address> # if FQDN is needed
external_lb_vip_ipv6_address: 2001:c5c0:1234:5678:1003::10 <== optional, enable IPv6 for OpenStack public endpoint
To run the tool, from the /working_dir/ directory, execute:
./tools/tls_cert_gen.sh -f openstack-configs/setup_data.yaml
OpenStack Glance Configuration with Dedicated Ceph
For OpenStack Glance, the OpenStack image service, the dedicated Ceph object storage configuration is shown below. Do not change it. The Ceph and Glance keys are generated during the Ceph installation step, so you do not need to specify the keys in the setup_data.yaml file.
STORE_BACKEND: ceph # supported as 'ceph' for Ceph backend store; do not change
OpenStack Cinder Configuration with Dedicated Ceph
For OpenStack Cinder, the OpenStack block storage service, the dedicated Ceph object storage configuration is shown below. Do not change it. The Ceph and Cinder keys are generated during the Ceph installation step, so you do not need to specify the keys in the setup_data.yaml file. Use the vgs command to check the volume groups available on your controller nodes. The controller nodes run the Cinder volume containers and hold the volume groups for use by Cinder. If you have available disks and want to create a new volume group for Cinder, use the vgcreate command.
VOLUME_DRIVER: ceph
OpenStack Nova Configuration
To reduce the boot time, the NOVA_BOOT_FROM parameter is set to local for Cisco VIM in the OpenStack Newton release. While this reduces the boot time, it does not provide Ceph back-end redundancy. To override it, you can set NOVA_BOOT_FROM to ceph.
# Nova boot from CEPH
NOVA_BOOT_FROM: ceph # optional
OpenStack Neutron Configuration
OpenStack Neutron
configuration is shown below.
# ML2 Conf – choose from either option 1 or option 2
# option 1: LinuxBridge-VXLAN
MECHANISM_DRIVERS: linuxbridge
TENANT_NETWORK_TYPES: "VXLAN"
Or
## option 2: OVS VLAN
MECHANISM_DRIVERS: openvswitch
TENANT_NETWORK_TYPES: "VLAN"
# VLAN ranges can be one continuous range or comma separated discontinuous ranges
TENANT_VLAN_RANGES: 3001:3100,3350:3400
# Jumbo MTU functionality. Only in B series, OVS-VLAN
# ENABLE_JUMBO_FRAMES: True
# for Provider networks, just specifying the provider in the segments under
# the NETWORKING section is enough.
# Note : use phys_prov as physical_network name when creating a provider network
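The TENANT_VLAN_RANGES value above is a comma-separated list of start:end pairs. A small sketch of how such a string breaks down (the parser is illustrative, not part of Cisco VIM):

```python
def parse_vlan_ranges(spec: str) -> list:
    """Expand a TENANT_VLAN_RANGES string such as '3001:3100,3350:3400'
    into (start, end) VLAN ID pairs."""
    ranges = []
    for chunk in spec.split(","):
        start, end = (int(v) for v in chunk.split(":"))
        ranges.append((start, end))
    return ranges

print(parse_vlan_ranges("3001:3100,3350:3400"))  # [(3001, 3100), (3350, 3400)]
```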
Note |
When creating an external network, specify physical_network=phys_ext; when creating a provider network, specify physical_network=phys_prov.
|
The JUMBO_MTU functionality is available only for OVS over VLAN in a UCS B-Series pod. In a VLAN setup, the default MTU size is 1500 bytes (1450 for VXLAN). When JUMBO_MTU is enabled (with 28 bytes left for the header), the VLAN MTU is 9000 bytes and the VXLAN MTU is 8950 bytes.
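The arithmetic behind those tenant-network numbers can be sketched as follows. The 50-byte figure used here is the standard VXLAN encapsulation overhead (outer Ethernet + IPv4 + UDP + VXLAN headers); it is an illustration consistent with the values above, not a constant taken from Cisco VIM itself.

```python
JUMBO_MTU = 9000      # VLAN tenant MTU when JUMBO_MTU is enabled
DEFAULT_MTU = 1500    # default VLAN tenant MTU
VXLAN_OVERHEAD = 50   # outer Ethernet(14) + IPv4(20) + UDP(8) + VXLAN(8)

print(DEFAULT_MTU - VXLAN_OVERHEAD)  # 1450: default VXLAN tenant MTU
print(JUMBO_MTU - VXLAN_OVERHEAD)    # 8950: VXLAN tenant MTU with jumbo frames
```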
Cisco VIM also supports the installation of a handful of optional services, namely Keystone v3 and Heat. OpenStack Heat is an orchestration service that allows you to spin up multiple instances, logical networks, and other cloud services in an automated fashion. To enable Heat, add the following Optional Services section to the setup_data.yaml file:
# Optional Services:
OPTIONAL_SERVICE_LIST:
- heat
To disable Heat,
remove the Optional Services section from the setup_data.yaml file. The
Optional Services support provides an infrastructure to support additional
services in the future.
Note |
Auto-scaling is not
supported in Cisco VIM, release 2.2.
|
To enhance security and multi-tenancy through the use of domains, Keystone v3 support is added in Cisco VIM release 2.2 as an authentication endpoint. Keystone v2 and Keystone v3 are mutually exclusive; an administrator has to decide on the authentication endpoint during installation. By default, the VIM orchestrator picks Keystone v2 as the authentication endpoint.
To enable Keystone v3,
add the following line under the optional services section.
# Optional Services:
OPTIONAL_SERVICE_LIST:
- keystonev3
LDAP/AD Support with Keystone v3
With the introduction of Keystone v3, OpenStack service authentication can be delegated to an external LDAP/AD server. In Cisco VIM 2.2, this optional feature is available only when authentication is handled by Keystone v3.
The prerequisite for enabling LDAP/AD integration is that the LDAP/AD endpoint must be reachable from all the controller nodes that run the OpenStack Keystone Identity Service.
To use the LDAP/AD support with the Keystone v3 feature, add the following section to the setup_data.yaml file during the installation of the pod:
LDAP:
domain: <Domain specific name>
user_objectclass: <objectClass for Users> # e.g organizationalPerson
group_objectclass: <objectClass for Groups> # e.g. groupOfNames
user_tree_dn: '<DN tree for Users>' # e.g. 'ou=Users,dc=cisco,dc=com'
group_tree_dn: '<DN tree for Groups>' # e.g. 'ou=Groups,dc=cisco,dc=com'
suffix: '<suffix for DN>' # e.g. 'dc=cisco,dc=com'
url: '<ldaps|ldap>://<fqdn|ip-address>:[port]'
# e.g. 'ldap://172.26.233.104:389'
# e.g. 'ldap://172.26.233.104'
# e.g. 'ldaps://172.26.233.104'
# e.g. 'ldaps://172.26.233.104:636'
# e.g. 'ldap://fqdn:389'
# e.g. 'ldap://fqdn'
# e.g. 'ldaps://fqdn'
# e.g. 'ldaps://fqdn:636'
# IPv6 form: '<ldaps|ldap>://[<ip6-address>]:[port]'
# e.g. 'ldap://[2001:420:293:2487:d1ca:67dc:94b1:7e6c]:389' (the "[ ]" around the IPv6 address are mandatory)
user: '<DN of bind user>' # e.g. 'dc=admin,dc=cisco,dc=com'
password: <password> # e.g. password of bind user
user_filter: '(memberOf=CN=os-users,OU=OS-Groups,DC=mercury,DC=local)'
user_id_attribute: sAMAccountName
user_name_attribute: sAMAccountName
user_mail_attribute: mail # Optional
# group_tree_dn example: 'ou=OS-Groups,dc=mercury,dc=local'
group_name_attribute: sAMAccountName
Note |
The parameter values differ based on the Directory Service provider, for example, OpenLDAP or Microsoft Active Directory.
|
Integrating identity with LDAP/AD over TLS: The automation supports Keystone integration with LDAP over TLS. To enable TLS, the CA root certificate must be presented as part of the /root/openstack-configs/haproxy-ca.crt file. The url parameter within the LDAP stanza must use the ldaps scheme.
The url parameter supports the following format:
url: '<ldaps|ldap>://<FQDN|IP-Address>:[port]'
The protocol can be one of the following: ldap for non-SSL, or ldaps when TLS is to be enabled. The LDAP host can be a fully qualified domain name (FQDN) or an IPv4 or IPv6 address, depending on how the SSL certificates are generated.
The port number is optional. If it is not provided, the LDAP services are assumed to be running on the default ports: 389 for non-SSL and 636 for SSL. If non-default ports are in use, the port numbers must be provided.
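The accepted url forms can be sanity-checked with Python's standard URL parser. The helper below is purely illustrative (Keystone performs its own validation); note that urlsplit handles the mandatory brackets around IPv6 addresses.

```python
from urllib.parse import urlsplit

def ldap_url_ok(url: str) -> bool:
    """Accept ldap:// or ldaps:// with an FQDN, IPv4, or bracketed IPv6
    host; the port is optional (389/636 are assumed when absent)."""
    parts = urlsplit(url)
    return parts.scheme in ("ldap", "ldaps") and bool(parts.hostname)

for u in ("ldap://172.26.233.104:389",
          "ldaps://ldap.example.com",
          "ldap://[2001:420:293:2487:d1ca:67dc:94b1:7e6c]:389"):
    print(u, ldap_url_ok(u))  # all True
```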
OpenStack Object Storage integration with Cisco VIM
Cisco VIM supports automated integration with a customer-managed object storage solution. The integration points reside primarily in the OpenStack Identity (Keystone) component of Cisco VIM. In Cisco VIM 2.2, this integration is restricted to Keystone v2 only. It currently integrates with SwiftStack as the choice of object storage solution. The deployment assumes a customer-managed SwiftStack solution. Installation of the SwiftStack Controller/PACO cluster is out of scope of this document; customers should contact the SwiftStack team for license and installation details. While OpenStack can support multiple endpoints for a given object-store service, the current automation supports a single Keystone object-store service per SwiftStack PACO cluster endpoint.
The current automation
uses the admin role for authentication and authorization of SwiftStack users
between the Keystone SwiftStack tenant and SwiftStack account.
Pre-requisites
For a customer-managed deployment model, the minimum prerequisites are:
- You must have a SwiftStack controller and cluster deployed, with the appropriate PAC (Proxy/Account/Container) and Object services configured ahead of time.
- You must know the Swift endpoint of the PAC outward-facing IP address, and the corresponding admin user, password, and service tenant information at the time of configuring the Keystone integration.
- The networking must be configured so that the PAC outward-facing IP address and the pod API network can reach each other.
- The Keystone Auth and Keystone Auth Token middleware must be pre-configured in SwiftStack (see Keystone Configuration Requirements in SwiftStack).
The OpenStack
controllers must have network reachability to the SwiftStack API endpoints, so
that the Horizon and Cinder Backup service can talk to the SwiftStack
endpoints.
Keystone Configuration Requirements in SwiftStack
To configure Keystone authorization, from the SwiftStack controller, choose the Cluster > Manage > Middleware > Keystone Auth option.
Note |
The
reseller_prefix setting enables the Keystone Auth middleware invocation at the
time of authentication.
|
Figure 2. Configuring Keystone
To configure Keystone Auth Token support, from the SwiftStack controller, choose the Cluster > Manage > Middleware > Keystone Auth Token Support option.
Note |
auth_uri is
deprecated.
|
Usage in Cisco VIM
To support SwiftStack endpoint configuration, the following section must be configured in the setup_data.yaml file:
##########################################
# Optional Swift configuration section
##########################################
# SWIFTSTACK: # Identifies the objectstore provider by name
# cluster_api_endpoint: <IP address of PAC (proxy-account-container) endpoint>
# reseller_prefix: <Reseller_prefix configured in SwiftStack Keystone middleware, e.g. KEY_>
# admin_user: <admin user for swift to authenticate in keystone>
# admin_password: <swiftstack_admin_password>
# admin_tenant: <The service tenant corresponding to the Account-Container used by Swiftstack>
# protocol: <http or https> # protocol that SwiftStack is running on
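Uncommented and filled in with illustrative values (the endpoint, user, and tenant names below are examples, not defaults), the stanza might look like:

```yaml
SWIFTSTACK:
  cluster_api_endpoint: 10.30.116.252   # PAC outward-facing IP
  reseller_prefix: KEY_
  admin_user: swiftstack_admin
  admin_password: <swiftstack_admin_password>
  admin_tenant: services                # service tenant for the Account-Container
  protocol: http
```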
The automation supports two modes of integration with SwiftStack: integration during a fresh installation of the pod, and a reconfigure option to add a SwiftStack endpoint to an existing pod running Cisco VIM 2.2.
In the fresh
installation mode, the addition of the Optional Swift configuration section in
the setup_data.yaml file will automatically provision the following in
Keystone:
- Keystone service for Object Store.
- Keystone endpoints for the Object Store service.
- A SwiftStack admin user with admin role in a SwiftStack tenant.
Integration Testing: To test whether the Keystone integration has been successful, request a token for the configured Swift user and tenant.
The output must
contain a properly generated endpoint for the object-store service that points
to the SwiftStack PAC cluster endpoint with the expected "reseller_prefix".
For example:
curl -d '{"auth":{"passwordCredentials":{"username": "<username>", "password": "<password>"},"tenantName":"<swift-tenant>"}}' -H "Content-type: application/json" <OS_AUTH_URL>/tokens
The output should
list endpoints generated by Keystone for the object-store cluster endpoint of
SwiftStack for the user tenant (SwiftStack account).
A sample output snippet (all IPs and keys are examples only; they vary from pod to pod):
{
"access": {
"metadata": {
"is_admin": 0,
"roles": [
"33f4479e42eb43529ec14d3d744159e7"
]
},
"serviceCatalog": [
{
"endpoints": [
{
"adminURL": "http://10.30.116.252/v1",
"id": "3ca0f1fee75d4e2091c5a8e15138f78a",
"internalURL": "http://10.30.116.252/v1/KEY_8cc56cbe99ae40b7b1eaeabb7984c77d",
"publicURL": "http://10.30.116.252/v1/KEY_8cc56cbe99ae40b7b1eaeabb7984c77d",
"region": "RegionOne"
}
],
"endpoints_links": [],
"name": "object-store",
"type": "object-store"
},
......
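To locate the object-store endpoint programmatically in a response of that shape, a minimal sketch (the sample string below is trimmed to the relevant fields, with values copied from the output above):

```python
import json

sample = '{"access": {"serviceCatalog": [{"type": "object-store", "name": "object-store", "endpoints": [{"publicURL": "http://10.30.116.252/v1/KEY_8cc56cbe99ae40b7b1eaeabb7984c77d"}]}]}}'

# Walk the v2 service catalog and collect object-store public URLs.
catalog = json.loads(sample)["access"]["serviceCatalog"]
urls = [svc["endpoints"][0]["publicURL"]
        for svc in catalog if svc["type"] == "object-store"]
print(urls[0])  # http://10.30.116.252/v1/KEY_8cc56cbe99ae40b7b1eaeabb7984c77d
```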
Verify that the Keystone user has access to the SwiftStack cluster. Using the token generated above for the SwiftStack user and tenant, make a request to the SwiftStack cluster:
curl -v -H "x-auth-token: <auth-token>" http://10.30.116.252/v1/KEY_8cc56cbe99ae40b7b1eaeabb7984c77d
This command
displays all the containers (if present) for the SwiftStack tenant (account).
Integrating SwiftStack over TLS
The automation supports SwiftStack integration over TLS. To enable TLS, the CA root certificate must be presented as part of the /root/openstack-configs/haproxy-ca.crt file. The protocol parameter within the SWIFTSTACK stanza must be set to https. As a prerequisite, the SwiftStack cluster has to be configured to enable HTTPS connections for the SwiftStack APIs, with termination at the proxy servers.
Cinder Volume Backup on SwiftStack
Cisco VIM enables the Cinder service to be configured to back up its block storage volumes to the SwiftStack object store. This feature is automatically configured if the SWIFTSTACK stanza is present in the setup_data.yaml file. The mechanism to authenticate against SwiftStack during volume backups leverages the same Keystone SwiftStack endpoint configured for managing objects. The default SwiftStack container for Cinder volume backups within the account (the Keystone tenant specified by "admin_tenant") is volumebackups.
Once configured, the Cinder backup service is enabled automatically, as shown below:
cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup | c43b-control-1 | nova | enabled | up | 2017-03-27T18:42:29.000000 | - |
| cinder-backup | c43b-control-2 | nova | enabled | up | 2017-03-27T18:42:35.000000 | - |
| cinder-backup | c43b-control-3 | nova | enabled | up | 2017-03-27T18:42:33.000000 | - |
| cinder-scheduler | c43b-control-1 | nova | enabled | up | 2017-03-27T18:42:32.000000 | - |
| cinder-scheduler | c43b-control-2 | nova | enabled | up | 2017-03-27T18:42:32.000000 | - |
| cinder-scheduler | c43b-control-3 | nova | enabled | up | 2017-03-27T18:42:31.000000 | - |
| cinder-volume | c43b-control-1 | nova | enabled | up | 2017-03-27T18:42:35.000000 | - |
| cinder-volume | c43b-control-2 | nova | enabled | up | 2017-03-27T18:42:30.000000 | - |
| cinder-volume | c43b-control-3 | nova | enabled | up | 2017-03-27T18:42:32.000000 | - |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
Backing up an existing Cinder volume:
openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID | Display Name | Status | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| f046ed43-7f5e-49df-bc5d-66de6822d48d | ss-vol-1 | available | 1 | |
+--------------------------------------+--------------+-----------+------+-------------+
openstack volume backup create f046ed43-7f5e-49df-bc5d-66de6822d48d
+-------+--------------------------------------+
| Field | Value |
+-------+--------------------------------------+
| id | 42a20bd1-4019-4571-a2c0-06b0cd6a56fc |
| name | None |
+-------+--------------------------------------+
openstack container show volumebackups
+--------------+--------------------------------------+
| Field | Value |
+--------------+--------------------------------------+
| account | KEY_9d00fa19a8864db1a5e609772a008e94 |
| bytes_used | 3443944 |
| container | volumebackups |
| object_count | 23 |
+--------------+--------------------------------------+
swift list volumebackups
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00001
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00002
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00003
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00004
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00005
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00006
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00007
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00008
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00009
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00010
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00011
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00012
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00013
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00014
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00015
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00016
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00017
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00018
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00019
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00020
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc-00021
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc_metadata
volume_f046ed43-7f5e-49df-bc5d-66de6822d48d/20170327185518/az_nova_backup_42a20bd1-4019-4571-a2c0-06b0cd6a56fc_sha256file