The industrial revolution modernized the techniques used to manufacture
goods, going from hand production methods to mechanized manufacturing. This
movement from manual to automated operations changed human productivity,
allowing people to free themselves from repetitive tasks that could be more
easily accomplished by a machine. The associated decrease in costs, increase in
speed and increased quality allowed for more work to be done for less money in
less time, yielding a higher quality product. Programmability promises to offer
the same outcome for networks as the industrial revolution did for goods.
The inevitable move toward automation in the IT industry has given
people and businesses a faster way to achieve their goals, a more
cost-effective way to provision infrastructure on demand, and more
consistency in the configured results.
ACI is able to take advantage of all of these benefits by completely exposing
all of the native functionality in programmable ways, using common tools and
languages to provide network engineers, developers and even novices an
approachable path toward automation. Though ACME is just getting started with
true DevOps in their IT organization, they realize that these benefits will
allow them to keep up with the pace of business.
Given the comprehensiveness of the programmability features available on
ACI, everyone can benefit. ACME's network engineering and design teams gain
quick time to provision large configurations, and the
consistency provided by the ability to automate all of the moving parts. Their
operations teams can utilize the plethora of information contained within the
APIC to streamline their processes, gather better metrics and correlate events
more accurately, yielding faster time to resolution and higher customer satisfaction.
The goals for network programmability are clear; however, the methods by
which these goals may be realized have been more difficult to grasp.
Traditional networking devices provide output that is meant for visual
consumption by people, and configurations are driven using text input that is
simple for a person to type; these conventions stand in contrast to an
automation-driven approach. Machines can more easily process data that
is provided in a structured form. Structured data that may not be visually
appealing can be rapidly parsed, and can also easily represent the full detail
of a comprehensive object-oriented configuration model.
ACI uses an advanced object model that represents network configuration
with application-based semantics, which can be read and written using
a well-documented REST API. In addition to providing this interface into the
object model, ACI also provides a number of access methods to read and
manipulate this data, at a variety of levels that cater to the user's
comfort with programming, all of which use open standards and open source tools.
Reference to the Object Model
While a comprehensive overview of the object model is outside the scope of this
book, from a programmability perspective it is important to note that every
aspect of ACI functionality is encompassed within the object model. This means
that any configuration that can be made on the fabric can be made
programmatically using the REST API. This includes internal fabric networking,
external networking, virtualization integration, compute integration, and all
other facets of the product.
This data is stored within the Management Information Tree, with every
piece of the model represented as a programmatic object with properties,
identity, and consistency rules that are enforced. This ensures that the
configured state of the model will never accumulate stale nodes or
entries, and every aspect can be inspected, manipulated, and made to cater to
the user's needs.
APIC is very flexible in how it accepts configuration and
provides administrative and operational state, as well as how it extends that
configuration into subordinate components. There are two primary categories of
interfaces that facilitate these functions: the northbound REST API and the
southbound programmatic interfaces.
The northbound REST API is responsible for accepting configuration, as
well as providing access to management functions for the controller. This
interface is a crucial component for the GUI and CLI, and also provides a touch
point for automation tools, provisioning scripts and third party monitoring and
management tools. The REST API is a singular entry point to the fabric for
making configuration changes, and as such is a critical aspect of the
architecture for being able to provide a consistent programmatic experience.
Southbound interfaces on APIC allow for the declarative model of intent
to be extended beyond the fabric, into subordinate devices. This is a key
aspect to the openness of the ACI fabric, in that policy can be programmed once
via APIC and then pushed out to hypervisors, L4-7 devices and potentially more
in the future, without the need to individually configure those devices. This
southbound extension is realized through two methods: L4-7 Device Packages and OpFlex.
The L4-7 device package interface allows for ACI to apply policy to
existing L4-7 devices that do not have an implicit knowledge of ACI policy.
These devices can be from any vendor, so long as the device has some form of
interface which is accessible via IP. The actual implementation of device
packages is done via Python scripts which run on the APIC within a contained
execution environment and which reach the device through its native
configuration interfaces, be that REST, CLI, SOAP or others. As a user makes
changes to service graphs or EPG policy, the device package will translate the
APIC policy into API calls on the L4-7 device.
OpFlex is designed to allow a data exchange of a set of managed objects
that is defined as part of an informational model. OpFlex itself does not
dictate the information model, and can be used with any tree-based abstract
model in which each node in the tree has a universal resource identifier (URI)
associated with it. The protocol is designed to support XML and JSON (as well
as the binary encoding used in some scenarios) and to use standard remote
procedure call (RPC) mechanisms such as JSON-RPC over TCP. In ACI, OpFlex is
currently used to extend policy to the Application Virtual Switch as well as
extend Group Based Policy into OpenStack.
About the REST API
The Application Policy Infrastructure Controller (APIC)
REST API is a programmatic interface that uses REST architecture. The API
accepts and returns HTTP (not enabled by default) or HTTPS messages that
contain JSON or XML documents. You can use any programming language to generate the messages and
the JSON or XML documents that contain the API methods or MO descriptions.
The REST API is the
interface into the MIT and allows manipulation of the object model state. The
same REST interface is used by the
command-line interface (CLI), GUI, and SDK, so that whenever information is
displayed, it is read through the REST API, and when configuration changes are
made, they are written through the REST API. The REST API also provides an
interface through which other information can be retrieved, including
statistics, faults, and audit events, and it even provides a means of
subscribing to push-based event notification, so that when a change occurs in
the MIT, an event can be sent through a web socket.
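As a sketch of how such a subscription might be requested, a client can append a subscription option to an ordinary query URI (the ?subscription=yes option, the host name, and the fvTenant class here are assumptions drawn from Cisco's public REST documentation, not from this section):

```python
# Build a class query URI that also opens a subscription, so that
# subsequent changes to matching MOs are pushed over the web socket.
def subscription_url(apic_host, class_name, fmt="json"):
    return "https://%s/api/class/%s.%s?subscription=yes" % (apic_host, class_name, fmt)

print(subscription_url("apic.example.com", "fvTenant"))
# https://apic.example.com/api/class/fvTenant.json?subscription=yes
```

The response to such a query contains a subscription identifier that the client then watches for on the web socket connection.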
Standard REST methods
are supported on the API, including POST, GET, and DELETE operations
through HTTP. The POST and DELETE methods are idempotent, meaning that there is
no additional effect if they are called more than once with the same input
parameters. The GET method is nullipotent, meaning that it can be called zero
or more times without making any changes (or that it is a read-only operation).
Payloads to and from
the REST interface can be encapsulated through either XML or JSON encoding. In
the case of XML, the encoding operation is simple: the element tag is the name
of the package and class, and any properties of that object are specified as
attributes of that element. Containment is defined by creating child elements.
For JSON, encoding
requires definition of certain entities to reflect the tree-based hierarchy;
however, the definition is repeated at all levels of the tree, so it is fairly
simple to implement after it is initially understood.
All objects are
described as JSON dictionaries, in which the key is the name of the package and
class, and the value is another nested dictionary with two keys: attributes and children.
The attributes key
contains a further nested dictionary describing key-value pairs that define
attributes on the object.
The children key
contains a list that defines all the child objects. The children in this list
are dictionaries containing any nested objects, which are defined as described above.
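To illustrate the encoding rules above, a tenant with one child object might be represented as follows (the fvTenant class appears later in this chapter; the child fvAp application profile and the names are illustrative assumptions):

```python
import json

# A minimal sketch of the JSON encoding described above: the key is the
# package and class name, and the value holds "attributes" and "children".
tenant = {
    "fvTenant": {
        "attributes": {"name": "ExampleTenant"},
        "children": [
            # each child repeats the same dictionary structure
            {"fvAp": {"attributes": {"name": "ExampleApp"}, "children": []}}
        ],
    }
}
print(json.dumps(tenant, indent=2))
```

The equivalent XML encoding, per the rules described earlier, would be `<fvTenant name='ExampleTenant'><fvAp name='ExampleApp'/></fvTenant>`.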
After the object
payloads are properly encoded as XML or JSON, they can be used in create, read,
update, or delete operations on the REST API. The following diagram shows the
syntax for a read operation from the REST API.
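As a sketch, the two read forms described here might be constructed as follows (the host name, the tn- naming prefix in the Dn, and the fvTenant class are illustrative assumptions, not taken from this section):

```python
# Build REST read URIs for the two query types: an object-level (mo)
# query against a fully qualified Dn, and a class-level query.
def mo_query(host, dn, fmt="xml"):
    return "https://%s/api/mo/%s.%s" % (host, dn, fmt)

def class_query(host, cls, fmt="json"):
    return "https://%s/api/class/%s.%s" % (host, cls, fmt)

print(mo_query("apic.example.com", "uni/tn-ExampleTenant"))
print(class_query("apic.example.com", "fvTenant"))
```

Note how the final suffix (.xml or .json) selects the encoding, as described below.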
Because the REST API
is HTTP based, defining the universal resource identifier (URI) to access a
certain resource type is important. The first two sections of the request URI
simply define the protocol and access details of the APIC.
Next in the request URI is the literal string /api,
indicating that the API will be invoked. Generally, read operations are for an
object or class, as discussed earlier, so the next part of the URI specifies
whether the operation will be for an MO or class. The next component defines
either the fully qualified Dn being queried for object-based queries, or the
package and class name for class-based queries. The final mandatory part of the
request URI is the encoding format: either .xml or .json. This is the only
method by which the payload format is defined (the APIC
ignores Content-Type and other headers).
Create and update
operations in the REST API are both implemented using the POST method, so that
if an object does not already exist, it will be created, and if it does already
exist, it will be updated to reflect any changes between its existing state and the desired state.
Both create and
update operations can contain complex object hierarchies, so that a complete
tree can be defined in a single command so long as all objects are within the
same context root and are under the 1MB limit for data payloads for the REST
API. This limit is in place to guarantee performance and protect the system
under high load.
The context root
helps define a method by which the APIC
distributes information to multiple controllers and helps ensure consistency.
For the most part, the configuration should be transparent to the user, though
very large configurations may need to be broken into smaller pieces if they
result in a distributed transaction.
Create and update
operations use the same syntax as read operations, except that they are always
targeted at an object level, because you cannot make changes to every object of
a specific class (nor would you want to). The create or update operation should
target a specific managed object, so the literal string mo indicates
that the Dn of the managed object will be provided, followed next by the actual
Dn. Filter strings can be applied to POST operations; if you want to retrieve
the results of your POST operation in the response, for example, you can pass
the rsp-subtree=modified query string to indicate that you want
the response to include any objects that have been modified by your POST operation.
The payload of the
POST operation will contain the XML or JSON encoded data representing the
managed object that defines the Cisco API command body.
REST API username-
and password-based authentication uses a special subset of request URIs,
including aaaLogin, aaaLogout, and aaaRefresh, as
the Dn targets of a POST operation. Their payloads contain a simple XML or JSON
document containing the MO representation of an aaaUser object,
with the attributes name and pwd defining
the username and password: for example, <aaaUser
name='admin' pwd='insieme'/>. The response to the POST operation will
contain an authentication token, both as a Set-Cookie header and as an attribute of the
aaaLogin object in the response named token, for which the XPath is
/imdata/aaaLogin/@token if the encoding is XML. Subsequent
operations on the REST API can use this token value as a cookie named APIC-cookie
to authenticate future requests.
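A minimal sketch of this login exchange using only the Python standard library follows; the /api/aaaLogin.xml URI is an assumption based on the aaaLogin object named above, and the host and credentials are placeholders:

```python
import ssl
import urllib.request

def build_login_request(host, username, password):
    """Build (but do not send) the aaaLogin POST described above."""
    url = "https://%s/api/aaaLogin.xml" % host
    body = "<aaaUser name='%s' pwd='%s'/>" % (username, password)
    return url, body

def send_login(host, username, password):
    """Send the login request; the response carries the token both as a
    Set-Cookie header and as an attribute of the returned aaaLogin object."""
    url, body = build_login_request(host, username, password)
    ctx = ssl._create_unverified_context()  # lab fabrics often use self-signed certs
    req = urllib.request.Request(url, data=body.encode("utf-8"))
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.read()

url, body = build_login_request("apic.example.com", "admin", "password")
print(url)   # https://apic.example.com/api/aaaLogin.xml
print(body)  # <aaaUser name='admin' pwd='password'/>
```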
The REST API
supports a wide range of flexible filters, useful for narrowing the scope of
your search to allow information to be located more quickly. The filters
themselves are appended as query URI options, starting with a question mark (?)
and concatenated with an ampersand (&). Multiple conditions can be joined
together to form complex filters.
The following query
filters are available: query-target, target-subtree-class, query-target-filter,
rsp-subtree, rsp-subtree-class, and rsp-subtree-filter.
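As an illustrative sketch of how filters are concatenated (the query-target-filter option and eq() operator are assumptions drawn from Cisco's public REST documentation; the host is a placeholder):

```python
# Filters are appended after "?" and joined with "&", as described above.
base = "https://apic.example.com/api/class/fvTenant.json"
filters = [
    'query-target-filter=eq(fvTenant.name,"Cisco")',
    "rsp-subtree=children",
]
url = base + "?" + "&".join(filters)
print(url)
```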
The REST API supports subscriptions to one or more MOs during your
active API session. When any MO is created, changed, or deleted because of a
user- or system-initiated action, an event is generated. If the event changes
the data on any of the active subscribed queries, the APIC
will send out a notification to the API client that created the subscription.
All operations that
are performed in the GUI invoke REST calls to fetch and commit the information
being accessed. The API Inspector further simplifies the process of examining
what is taking place on the REST interface as the GUI is navigated by
displaying in real time the URIs and payloads. When a new configuration is
committed, the API Inspector displays the resulting POST requests, and when
information is displayed on the GUI, the GET request is displayed.
To get started with
the API Inspector, it can be accessed from the account menu, visible at the top
right of the Cisco APIC GUI. Click Welcome, <username> and then choose
the Show API Inspector option.
After the API
Inspector is brought up, time stamps will appear along with the REST method,
URIs, and payloads. There may also be occasional updates in the list as the GUI
refreshes subscriptions to data being shown on the screen.
From the output above,
it can be seen that the last logged item is a POST request with a JSON payload
containing a tenant named
Cisco and some
attributes defined on that object:
ACI has a number of methods for developing code that can be used by
engineers who have varying levels of comfort with programming or interacting
with programmatic interfaces.
The most basic and straightforward technique involves simply taking
information gleaned by the API inspector, Visore, or by saving XML/JSON
directly from the GUI, and using common freely available tools, such as
POSTman, to send this information back to the REST API.
A step up from this method lets users combine common terminology and
well-understood networking constructs with the power and
flexibility of the ACI policy language and the popular Python programming
language to configure ACI programmatically. The ACI Toolkit is an
open-source utility that exposes the most common ACI building blocks, enabling
users to rapidly create tenants, application profiles, EPGs and the
associated concepts that connect those to physical infrastructure. The
streamlined interface it provides makes it very quick to adopt and allows users to
begin developing their applications quickly.
The most powerful of the development tools available is the Cobra SDK.
With a complete representation of the ACI object model available, comprehensive
data validation, and extensive support for querying and filtering, Cobra
ensures that the complete ACI experience is available to developers and users alike.
POSTman is an open
source extension for the Chrome web browser, which provides REST client
functionality in an easy-to-use package. POSTman can be used to interact with
the APIC REST interface, to both send and receive data which may represent
configuration, actions, policy and operational state data. For an individual
unfamiliar with the structure of REST, it is very simple to use the API
Inspector to view the underlying calls the GUI makes for
certain operations, capture those, and then use POSTman to replay those
operations. Furthermore, POSTman allows the requests to be modified: GUI
operations can be made once, attributes changed in the captured data and then
sent back to the REST API to make the modifications.
To get started with
POSTman, the first step is to download the plugin for the Chrome web browser,
which is available at
http://www.getpostman.com. Once the plugin is
installed, it can be accessed using the Chrome App launcher.
Initially the user
will be presented with an interface that has two primary sections: the sidebar
on the left and the request constructor on the right. Using the sidebar, the
user can switch between the history of REST requests sent by POSTman, as well
as Collections of requests that contain common tasks.
A useful post to
create in a collection is a basic Login operation. In order to do this, the
user should first click into the Collections tab in the sidebar. Within the
sidebar, a small folder with a plus (+) sign will become visible, which should
then be clicked, at which point a popup will appear prompting the user to give
a name to the collection. For this example, the collection can be named "APIC",
after which the Create button should be clicked.
Now a new request
can be built. In the request constructor, where "Enter request URL here" is
shown, the following request URI should be entered, substituting APICIPADDRESS
with the IP of the APIC:
This request URI
will call the Login method in the REST API. Since a Login will require posting
data, the HTTP method should be changed, which can be done by clicking the
dropdown list to the right of the request URL. By default it will be a GET
request, but POST will need to be selected from the drop down list.
With the POST method
selected, it is now possible to provide the REST payload. Given that the data
will be sent via REST, the "raw" Request body selector should be picked.
Now the payload for
the request can be entered, which will be the simple XML containing the
username and password that will be used for authentication. Note that the URL
is https, meaning that it will be encrypted between the web browser and the
APIC, so no data is being transmitted in clear text. The following request body
should be entered, substituting the correct username and password in place of
USERNAME and PASSWORD:
<aaaUser name='USERNAME' pwd='PASSWORD'/>
With this request
built, it is now possible to Send the request, but since this will be a
commonly used method, the request should be added to a collection. This can be
accomplished by clicking the "Add to collection" button beneath the request
body. Select the "APIC" collection from the existing collection list, and
change the Request name to "Login" and then click "Add to collection".
By adding the
request to a collection it can later be quickly accessed to establish a login
session with APIC as needed.
After completing the
above steps, the request is ready to be sent. Click the "Send" button in the
request constructor, and the REST API will return the XML representing a login
session with the APIC. The following will be visible in the POSTman GUI:
Make a Query to APIC
The next request
that will be built is one that queries the APIC for a list of tenants on the
system. First click the "Reset" button in the request constructor, and proceed
with the same steps as above, except that the request URL will be a class query
for tenants, and the request method will be changed to GET.
Click "Add to
collection" and place the request into the APIC collection, and for the name
enter "Query APIC for tenants".
Now upon clicking
"Send", this request will return an XML encoded list of tenants in the response
body section of the constructor pane on the right.
Configuration Change in APIC
Making a configuration change will use a POST request similar to logging in; however, the
request URL and body will contain a different set of information.
For this example, a
new tenant will be created in the fabric. Click the "Reset" button in the
request constructor to clear out all existing request fields, enter the request
URL, and change the method to POST.
In the request
payload, enter the following data:
The request URL
specifies that the target for this query will be the policy universe, which is
where tenants live. With this target properly scoped, the data representing the
tenant can be provided in the payload, in this case creating the new tenant.
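Put together, the request might look like the following sketch (the /api/mo/uni.xml target for the policy universe, the placeholder APICIPADDRESS, and the tenant name are illustrative assumptions):

```python
# Sketch of the create-tenant POST: the policy universe is the target Dn,
# and the payload is the XML representation of the new tenant.
url = "https://APICIPADDRESS/api/mo/uni.xml"
payload = "<fvTenant name='NewTenant'/>"
print(url)
print(payload)
```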
API Inspector for Query Guidance
As discussed in the
Introduction to Scripting section, API Inspector can be used as a guide for
building custom REST requests. Building on the example in that section, it is
possible to modify the request URI and payload and substitute the tenant name
"Cisco" with another tenant name, to create an entirely new tenant with the
same configuration.
These values can be
placed into a POST request in POSTman, and after establishing a Login session
using the saved Login request, the new tenant "Acme" can be created, identical
to the previously created Cisco tenant, without needing to manually click
through the GUI or use other manual methods.
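The substitution itself can also be done programmatically; the following sketch uses the JSON object encoding described earlier (the exact captured payload shape is illustrative):

```python
import json

# A captured payload from API Inspector, containing the original tenant name
captured = '{"fvTenant": {"attributes": {"name": "Cisco"}, "children": []}}'

# Decode, swap the naming attribute, and re-encode for replay via POST
payload = json.loads(captured)
payload["fvTenant"]["attributes"]["name"] = "Acme"
replay = json.dumps(payload)
print(replay)
```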
Cobra SDK and Arya
The complete Cisco ACI Python SDK is named Cobra. It is a pure Python
implementation of the API that provides native bindings for all the REST
functions and also has a complete copy of the object model so that data
integrity can be ensured, as well as supporting the complete set of features
and functions available in ACI. Cobra provides methods for performing lookups
and queries and object creation, modification, and deletion that match the REST
methods used by the GUI and those that can be found using API Inspector. As a
result, policy created in the GUI can be used as a programming template for development with the SDK.
The installation process for Cobra is
straightforward, and you can use standard Python distribution utilities. Cobra
is distributed on the APIC as an .egg file that can be installed using
easy_install, and it is also available on GitHub.
The first step in any code that uses Cobra is establishing a login
session. Cobra currently supports username- and password-based authentication,
as well as certificate-based authentication. The example here uses username-
and password-based authentication.
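A minimal sketch of such a login, based on the Cobra classes shown in the Arya-generated code later in this chapter (the APIC address and credentials are placeholders):

```python
import cobra.mit.access
import cobra.mit.session

# create a login session and a directory object against the APIC
ls = cobra.mit.session.LoginSession('https://apic.example.com', 'admin', 'password')
md = cobra.mit.access.MoDirectory(ls)
md.login()  # raises an exception if authentication fails
```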
This example provides an
MoDirectory object named
md, which is logged in and authenticated to the Cisco APIC. If for
some reason authentication fails, Cobra will raise a
cobra.mit.request.CommitError exception. With the session logged in,
you are ready to proceed.
Use of the Cobra SDK to manipulate the MIT generally follows this process:
1. Identify the object to be manipulated.
2. Build a request to change attributes or add or remove children.
3. Commit the changes made to the MIT.
For example, if you want to create a new tenant, you must first identify
where the tenant will be placed in the MIT, where in this case it will be a
child of the
policy Universe managed object (polUniMo):
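A sketch of these steps, mirroring the Arya-generated code shown later in this chapter (the md MoDirectory object from the login example is assumed, and the tenant name is a placeholder):

```python
import cobra.mit.request
import cobra.model.fv
import cobra.model.pol

# the policy universe is the parent under which tenants are created
polUniMo = cobra.model.pol.Uni('')

# create the tenant object as a child of the policy universe
tenantMo = cobra.model.fv.Tenant(polUniMo, name='ExampleTenant')

# package the change into a ConfigRequest and commit it via the
# logged-in MoDirectory object
c = cobra.mit.request.ConfigRequest()
c.addMo(tenantMo)
md.commit(c)
```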
All these operations have resulted only in the creation of Python
objects. To apply the configuration, you must commit it. You can do this using
an object called a ConfigRequest. A
ConfigRequest acts as a container for MO-based classes that fall
into a single context, and they can all be committed in a single atomic POST. A
ConfigRequest object is created, then the
tenantMo object is added to the request, and then the
configuration is committed through the MoDirectory.
For the preceding example, the first step builds a local copy of the
polUni object. Because it does not have any naming properties
(reflected by the empty pair of single quotation marks), you don't need to look
it up in the MIT to figure out what the full Dn for the object is; it is always uni.
If you wanted to post something deeper in the MIT, where the object has
naming properties, you would need to perform a lookup for that object. For
example, if you wanted to post a configuration to an existing tenant, you could
query for that tenant and create objects beneath it.
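A sketch of such a lookup follows; the propFilter argument is an assumption based on common Cobra usage, and md is the logged-in MoDirectory object from the earlier example:

```python
import cobra.model.fv

# query for an existing tenant by class, filtering on the naming attribute
tenants = md.lookupByClass('fvTenant',
                           propFilter='eq(fvTenant.name, "ExampleTenant")')
if tenants:
    tenantMo = tenants[0]
    # new objects can now be created beneath the existing tenant,
    # for example an application profile as a child of tenantMo
    fvAp = cobra.model.fv.Ap(tenantMo, name='ExampleApp')
```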
The tenantMo object will be of class
cobra.model.fv.Tenant and will contain properties such as .dn,
.status, and .name, all describing the object itself. The
lookupByClass() call returns an array, because it can return more
than one object. In this case, the command is specific and is filtering on an
fvTenant object with a particular name. For a tenant, the name
attribute is a special type of attribute called a naming attribute. The naming
attribute is used to build the relative name, which must be unique in its local
namespace. As a result, you can be assured that
lookupByClass on an
fvTenant object with a filter on the name always returns either an
array of length 1 or None, meaning that nothing was found.
To entirely avoid a lookup, you can build a Dn object and make an object
a child of that Dn. This method works only in cases in which the Dn of the
parent object is already known.
These fundamental methods for interacting with Cobra provide the
building blocks necessary to create more complex workflows that can help
automate network configuration, perform troubleshooting, and manage the fabric.
Cisco APIC REST to Python Adapter
The process of building a request can be time consuming, because you
must represent the object data payload as Python code reflecting the object
changes that you want to make. Because the Cobra SDK is directly modeled on the
Cisco ACI object model, you should be able to generate code directly from what
resides in the object model. As expected, you can do this using a tool
developed by Cisco Advanced Services. The tool is the Cisco APIC REST to Python
Adapter, known as Arya.
The above figure shows how input that might come from the
API Inspector, Visore, or even the output of a REST query can be
quickly converted into Cobra SDK code, tokenized, and reused in more advanced scripts.
Installation of Arya is relatively simple, and the tool has few external
dependencies. To install Arya, you must have Python 2.7.5 and git installed.
Use the following quick installation steps to install it and place it in your path:
git clone https://github.com/datacenter/ACI.git
sudo python setup.py install
After Arya has been installed, you can take XML or JSON representing
Cisco ACI modeled objects and convert it to Python code quickly. For example,
arya.py -f /home/palesiak/simpletenant.xml
This command will yield the following Python code:
'''
Autogenerated code using arya.py
Original Object Document Input:
<fvTenant name='bob'/>
'''
raise RuntimeError('Please review the auto generated code before ' +
                   'executing the output. Some placeholders will ' +
                   'need to be changed')
# list of packages that should be imported for this code to work
import cobra.mit.access
import cobra.mit.request
import cobra.mit.session
import cobra.model.fv
import cobra.model.pol
from cobra.internal.codec.xmlcodec import toXMLStr
# log into an APIC and create a directory object
ls = cobra.mit.session.LoginSession('https://184.108.40.206', 'admin', 'password')
md = cobra.mit.access.MoDirectory(ls)
md.login()
# the top level object on which operations will be made
topMo = cobra.model.pol.Uni('')
# build the request using cobra syntax
fvTenant = cobra.model.fv.Tenant(topMo, name='bob')
# commit the generated code to APIC
print toXMLStr(topMo)
c = cobra.mit.request.ConfigRequest()
c.addMo(topMo)
md.commit(c)
The placeholder raising a runtime error must first be removed before
this code can be executed; it is purposely put in place to help ensure that any
tokenized values that must be updated are corrected. For example, the
placeholder Cisco APIC IP address in the generated code should be updated to
reflect the actual Cisco APIC IP address, and the same applies to the
credentials and any other placeholder values.
Note that if you provide input XML or JSON that does not have a fully
qualified hierarchy, Arya may not be able to identify it through heuristics. In
this case, a placeholder will be populated with the text
REPLACEME, which you will need to replace with the correct Dn. You
can find this Dn by querying for the object in Visore or inspecting the request
URI for the object shown in the API Inspector.
The Cisco Application Centric Infrastructure
object model contains many entities, which may be daunting for a user being
introduced to network programmability for the first time. The ACI
Toolkit makes available a simplified subset of the model that can act as an
introduction to the concepts in ACI
and give users a way to quickly bring up common tasks and workflows. In
addition, a number of applications have been built on top of the toolkit. While the
Toolkit provides some useful tools for an operator to immediately use, the real
value is in the ability to take these examples as a starting point, and modify
or extend them to suit your particular needs. Give it a try! Be sure
to share your work back with the community!
ACI Toolkit Applications
Endpoint Tracker
The endpoint tracker application creates a subscription to the endpoint
class (fvCEp) and populates a MySQL database with pertinent details about each
endpoint present on the fabric (for example servers, firewalls, load balancers,
and other devices). Installing MySQL is outside the scope of this book, so we
will assume you have access to create a new database on a MySQL server. The
endpoint tracker application has two primary components that are both Python scripts:
aci-endpoint-tracker.py - This
script creates the subscription to the endpoint class and populates the MySQL database.
The second script creates a web interface that provides a way to present the contents
of the database to the operator. A sample is shown below:
To launch Endpoint Tracker, run the following Python scripts. The first
script, aci-endpoint-tracker.py, will actually connect to the APIC and populate
the database. The second script enables the content to be viewed in an
understandable web UI.
user@linuxhost:~/acitoolkit/applications/endpointtracker$ python aci-endpoint-tracker.py
MySQL IP address: 127.0.0.1
MySQL login username: root
user@linuxhost:~/acitoolkit/applications/endpointtracker$ python aci-endpoint-tracker-gui.py
MySQL IP address: 127.0.0.1
MySQL login username: root
 * Running on http://127.0.0.1:5000/
 * Restarting with reloader
After running those Python scripts you can now bring up a browser and go
to the web UI. Using the ACI Endpoint Tracker is simply a matter of inputting an
IP or MAC address into the search field, and the table is filtered accordingly.
In the example below, the IP address 192.168.5.20 has been input into the
search field, and the matching results are displayed.
One more interesting usage of the endpoint tracker application is a
series of visualizations that represent how various endpoints are mapped to
other fabric constructs, including Tenants, Applications, and EPGs.
Some sample screenshots are shown below. These are representations of
where endpoints are within the ACI fabric and how they relate to or depend on
other objects in the environment.
ACI Lint
In programming, "lint" is a term that refers to a tool that identifies
discrepancies and flags common errors. In the sense that ACI provides infrastructure as
code, it is appropriate for ACI to also have a lint application, and the ACI Toolkit
provides just that. ACI Lint is an application that checks and notifies an
operator of misconfiguration errors in two primary capacities:
Security Issues - ACI Lint
supports the ability to tag EPGs as either secure or insecure, and then runs a
validation that contracts are not used to cross security boundaries.
Configuration Issues - ACI Lint
checks for common configuration errors and reports them to the user.
A sample output is
provided here for reference:
Getting configuration from APIC....
Critical 001: EPG 'default' in tenant 'infra' app 'access' is not assigned security
Critical 001: EPG 'x' in tenant 'common' app 'default' is not assigned security
Warning 001: Tenant 'Cisco' has no Application Profile.
Warning 001: Tenant 'Books' has no Application Profile.
Warning 001: Tenant '3tierapp' has no Application Profile.
Warning 001: Tenant 'mgmt' has no Application Profile.
Warning 002: Tenant 'Books' has no Context.
Warning 002: Tenant '3tierapp' has no Context.
Warning 004: Context 'oob' in Tenant 'mgmt' has no BridgeDomains.
Warning 005: BridgeDomain 'CiscoBd' in Tenant 'Cisco' has no EPGs.
Warning 005: BridgeDomain 'inb' in Tenant 'mgmt' has no EPGs.
Warning 006: Contract 'default' in Tenant 'common' is not provided at all.
Warning 006: Contract 'WebServers' in Tenant 'Acme' is not provided at all.
Warning 006: Contract 'External' in Tenant 'Acme' is not provided at all.
Warning 007: Contract 'default' in Tenant 'common' is not consumed at all.
Warning 007: Contract 'WebServers' in Tenant 'Acme' is not consumed at all.
Warning 007: Contract 'External' in Tenant 'Acme' is not consumed at all.
Warning 007: Contract 'outside-to-web' in Tenant 'roberbur' is not consumed at all.
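Checks of the kind shown above can be sketched in a few lines of Python. This is an illustrative toy, not the ACI Toolkit implementation, and the configuration layout used here is an assumption:

```python
# Toy configuration model; the layout is an assumption for illustration,
# not the ACI Toolkit object model.
config = {
    "tenants": {
        "common": {
            "app_profiles": ["default"],
            "contracts": {"default": {"provided_by": [], "consumed_by": []}},
        },
        "Books": {"app_profiles": [], "contracts": {}},
    }
}

def lint(cfg):
    """Scan the configuration and return lint-style warning strings."""
    findings = []
    for tname, tenant in sorted(cfg["tenants"].items()):
        if not tenant["app_profiles"]:
            findings.append(f"Warning 001: Tenant '{tname}' has no Application Profile.")
        for cname, contract in sorted(tenant["contracts"].items()):
            if not contract["provided_by"]:
                findings.append(
                    f"Warning 006: Contract '{cname}' in Tenant '{tname}' is not provided at all.")
            if not contract["consumed_by"]:
                findings.append(
                    f"Warning 007: Contract '{cname}' in Tenant '{tname}' is not consumed at all.")
    return findings

findings = lint(config)
```

Each check is just a query over the configuration tree, which is what makes lint-style validation a natural fit for infrastructure expressed as code.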
Cable Plan Module
Cable management is a
crucial aspect of supporting a data center, and cabling issues can cause
several hours of delay when deploying something new in the data center. The
Cable Plan module allows the programmer to easily import existing cable plans
from XML files, import the currently running cable plan from an
Application Policy Infrastructure Controller (APIC), export previously
imported cable plans to a file, and compare cable plans. More advanced users
can use the Cable Plan module to easily build a cable plan XML file, query a
cable plan, and modify a cable plan.
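The comparison capability can be sketched as follows. The XML schema and element names here are invented purely for illustration; they are not the Cable Plan module's actual file format:

```python
# Sketch of comparing two cable plans held as XML. The <cablePlan>/<link>
# schema is hypothetical, invented only to illustrate the comparison idea.
import xml.etree.ElementTree as ET

PLAN_A = """<cablePlan>
  <link src="leaf1/eth1/1" dst="spine1/eth1/1"/>
  <link src="leaf2/eth1/1" dst="spine1/eth1/2"/>
</cablePlan>"""

PLAN_B = """<cablePlan>
  <link src="leaf1/eth1/1" dst="spine1/eth1/1"/>
  <link src="leaf2/eth1/1" dst="spine2/eth1/2"/>
</cablePlan>"""

def links(xml_text):
    """Parse a cable plan into a set of (source, destination) link tuples."""
    root = ET.fromstring(xml_text)
    return {(l.get("src"), l.get("dst")) for l in root.iter("link")}

def compare(plan_a, plan_b):
    """Report links unique to each plan and links common to both."""
    a, b = links(plan_a), links(plan_b)
    return {"only_in_a": a - b, "only_in_b": b - a, "common": a & b}

diff = compare(PLAN_A, PLAN_B)
```

Treating each plan as a set of links reduces "compare two cable plans" to simple set arithmetic, which is one reason an XML interchange format is convenient here.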
CLI Emulator Module
Cisco Application Centric Infrastructure (ACI)
introduces many new concepts, and its policy-driven architecture can be hard for some
users to grasp at first. For users who want to bridge the CLI that they
are already familiar with and the policy-driven architecture, the
CLI emulator assists with this task. With the CLI emulator module, you can run
show commands on tenants, contexts, and other policies
that are unique to ACI.
Multisite
The multisite application allows a
Cisco Application Centric Infrastructure (ACI)
fabric to be extended across multiple sites. These sites are independent
fabrics, where each fabric has its own
Application Policy Infrastructure Controller (APIC)
cluster. The multisite application preserves the group-based policy model of ACI
by allowing a contract to be extended across multiple sites so that endpoint
groups from different sites can communicate.
Configuration Snapshot and Rollback
Snapback is a
configuration snapshot and rollback tool for
Cisco Application Centric Infrastructure (ACI)
fabrics. Specifically, the tool allows an administrator to perform the following:
Live snapshots of the running configuration
One-time and recurring snapshots, both immediate and scheduled
Versioned storage of the configuration
Full viewing of any snapshot configuration, including the differences between snapshots
Rollback to any previous configuration snapshot: full or partial
Web-based or command line administration
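One capability listed above, viewing the differences between snapshots, can be illustrated with Python's standard difflib module. The snapshot contents below are invented for the example; this is not Snapback's implementation:

```python
# Illustrative snapshot diffing with the standard library. The snapshot
# contents are made up; real snapshots would be exported APIC configuration.
import difflib

snapshot_v1 = ["tenant Acme", "  bridge-domain WebBD", "  contract WebServers"]
snapshot_v2 = ["tenant Acme", "  bridge-domain WebBD",
               "  bridge-domain DbBD", "  contract WebServers"]

# unified_diff marks lines added (+) or removed (-) between versions.
diff_lines = list(difflib.unified_diff(snapshot_v1, snapshot_v2,
                                       fromfile="snapshot-1",
                                       tofile="snapshot-2",
                                       lineterm=""))
```

Because each snapshot is versioned, any two versions can be diffed this way, and rolling back amounts to re-applying an earlier stored version.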
Events to Atom Feed
The Events to Atom Feed application subscribes to
Application Policy Infrastructure Controller (APIC)
managed objects and records any updates to the objects over a websocket
connection. These updates can be viewed in a variety of Atom feeds provided
by the application. Some sample use cases for the
Cisco Application Centric Infrastructure (ACI)
Events to Atom Feed app:
Display recent endpoints in a feed client
Display updated tenants in a feed client
Monitor endpoint group changes in a feed client
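Conceptually, the app turns recorded object updates into Atom feed entries. The sketch below shows that transformation with the standard library; the event fields and minimal feed structure are assumptions for illustration, not the app's actual implementation:

```python
# Sketch: render recorded object updates as a minimal Atom feed.
# The event dictionaries are invented examples, not real APIC events.
import xml.etree.ElementTree as ET

events = [
    {"title": "EPG Web created", "id": "urn:event:1",
     "updated": "2015-06-01T12:00:00Z"},
    {"title": "Tenant Acme modified", "id": "urn:event:2",
     "updated": "2015-06-01T12:05:00Z"},
]

def to_atom(entries):
    """Build a bare-bones Atom feed, one <entry> per recorded update."""
    ns = "http://www.w3.org/2005/Atom"
    feed = ET.Element("{%s}feed" % ns)
    for ev in entries:
        entry = ET.SubElement(feed, "{%s}entry" % ns)
        for field in ("title", "id", "updated"):
            ET.SubElement(entry, "{%s}%s" % (ns, field)).text = ev[field]
    return ET.tostring(feed, encoding="unicode")

feed_xml = to_atom(events)
```

Any standard feed client can then poll the generated feed, which is what makes Atom a convenient delivery format for fabric events.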
Open source software has been a popular movement in IT, and has been
the motivation behind many successful projects, including consumer software,
web servers, databases and even entire operating systems. One of the key
aspects to the success of open source is the ability for many developers around
the globe to collaborate together on a single project. Previous tools like
Concurrent Versions System (CVS) and Subversion (SVN) were used to allow many
developers to work together, with a central server maintaining a common
database of source code. While these tools have and continue to work well,
there has been a slow migration away from those server-based tools to
decentralized utilities, with the foremost being Git. Git was created by Linus
Torvalds, the author of the popular open-source operating system Linux. Git has
a number of advantages over most other source control tools: complete local
repository copies, a distributed architecture, and more efficient support for
branching and merging.
GitHub is a hosting platform based around git that provides both
free and paid hosting services, allowing individuals to collaborate
on projects with over eight million other GitHub users. Aside from being a
wrapper around git, GitHub also provides techniques for tracking issues,
securing access to projects, and built-in project documentation. The
combination of all of these features has made GitHub a very common place for
members of the community to share code with one another, build on each other's
work, and contribute their efforts back into larger projects.
What is stored on GitHub is usually source code, not limited to any
specific language; however, the git protocol itself supports storage and version
control of any file type, so it is not uncommon for users to store documentation
or other frequently changing files in git. The primary advantage is that the
version control provided by git allows a user to revert a file to any
previously stored version, or alternately move forward to a newer version. Git
also maintains an audit trail of the changes that have been made to files, and even
has advanced support for branching, allowing multiple concurrent
modifications to a file to take place and be merged after the
work efforts have completed.
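The revert-and-advance behavior just described can be illustrated with a toy version store. This is a conceptual sketch only; git's actual storage model (content-addressed objects, commits, and branches) is considerably more sophisticated:

```python
# Toy version store illustrating revert/advance semantics. This is a
# conceptual sketch, not how git stores data internally.
class VersionedFile:
    def __init__(self):
        self.versions = []  # every committed version of the file, in order

    def commit(self, content):
        """Store a new version and return its version number."""
        self.versions.append(content)
        return len(self.versions) - 1

    def checkout(self, version):
        """Retrieve any previously stored version of the file."""
        return self.versions[version]

f = VersionedFile()
v0 = f.commit("hello")
v1 = f.commit("hello world")
old = f.checkout(v0)  # revert to an earlier version
new = f.checkout(v1)  # move forward to the newer version again
```

Because every version is retained, moving backward or forward is just a lookup, which is the essence of what version control gives a user.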
"It's on github"
A common phrase in modern IT jargon is, "It's on github", and for
users familiar with GitHub, this is an invitation to download, modify, and
contribute to the project. For those who have not had an introduction, however,
it can seem like a complex topic. GitHub is actually a very simple tool to use,
and the easiest way to begin taking advantage of the information stored on
GitHub is to access a project's main page and look for the "Download ZIP"
button at the bottom right. The resulting downloaded
file will contain the latest version of the files in the project. What a user
does with these files will depend greatly on the contents; however, one
of the most highly encouraged behaviors on GitHub is to provide clear and
obvious documentation for a project, so a new user accessing the front page
of a project will typically find instructions on how to
download and install the project right on the first page they see.
For users looking to contribute back to a project, the next step would
be to sign up for an account on GitHub and download a graphical client,
which provides a simpler interface to the command line-based git tool. GitHub
itself offers a graphical client, with versions available for both Windows and Mac.
Other common source control tools include SourceTree from Atlassian.
Once a user has an account and a GitHub client, they can "Fork", or
split off, an available project into their own repository, make
changes, and commit those back to their own branch. If those changes work,
and the user wishes to contribute them back to the original project, it is
possible to submit a "Pull" request, which essentially means that the user is
proposing that their efforts be pulled back into the original project. The
process can be that simple, though many more advanced projects have standards
and rules for contributing that place requirements
around how work is committed back into the project, which may require some
reading before attempting to contribute.