Digitisation and data centres: Building foundations for the future
The world is poised on the cusp of a transformation that will make the way we currently use digital networks seem quaint. Data centres are the unsung heroes of this shift, but enormous challenges loom.
It has become a truism that today every company is a technology company. Businesses are digitising their operations at a relentless pace as they seek to secure their place in the digital future.
This white paper centres on the fundamental role data centres play as businesses prepare for the continued rise of Big Data, the emergence of the Internet of Things (IoT) and machine-to-machine communication, to name just a few of the looming digital challenges.
One of the key challenges relates to the pace of change; most established businesses are saddled with legacy infrastructure and processes, handicapping them when it comes to competing with new entrants boasting faster, more capable, more modern IT infrastructure.
To keep up with nimble new players and ensure they thrive, today's businesses need to focus on four key areas: networking and storage, security and privacy, application delivery and software-defined networking, and analytics. These are the pillars that hold up the data centre as the key to greater digitisation.
These four pillars will form the basis of this white paper, and each section will discuss:
Networking and storage: With businesses increasingly relying on cloud computing, and the exponential growth in data that's being produced, numerous challenges lie ahead for businesses determined to stay at the forefront of their field. We'll look at current demands being placed on data centre networks, how this demand is likely to trend, and the options available for storing data, be that in private, public or hybrid clouds.
Security and privacy: A chief obstacle to businesses migrating to the cloud is security. That's because there are huge reputational and productivity consequences for those who don't put the proper systems and processes in place to secure their data. This section will discuss how digitisation is increasing that risk, key approaches to maintaining security across virtual applications and workloads, and how data centre managers should balance security with digitisation goals.
Application delivery and software-defined networking: A successful digital business needs to be agile and have the ability to scale quickly to thrive in today's competitive online world, and this is where SDN is making huge strides. This section will provide an overview of best-practice SDN and give insights into the hardware that makes it possible. It will also discuss what the rise of container-based microservices means for data centre architecture.
Analytics: Exponential growth in data is presenting huge challenges to IT managers hoping to get visibility around their data centre operations, a challenge Cisco has confronted with the recent launch of its Tetration analytics platform. This section will explore common issues legacy systems can create for managers, the advantages good analytical insights can give you over your competitors and how machine learning is increasingly being used to improve data centre management.
Networking and storage
Businesses are increasingly relying on cloud computing as more of their operations move online, with growth forecasts suggesting that up to 86 percent of workloads will be processed by cloud data centres by 2019, according to the Cisco Global Cloud Index (GCI).
This growth in cloud adoption is largely a response to the extraordinary growth in data being produced, and to high customer expectations for speed and security. As a result of this shift, data centre IP traffic loads will also increase enormously; in fact, annual global data centre IP traffic is expected to triple in the next three years, reaching 10.4 zettabytes, up from a comparatively modest 3.4 zettabytes today.
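As a back-of-the-envelope check on these figures, tripling over three years implies a compound annual growth rate of roughly 45 percent (a sketch only, using the zettabyte figures quoted above):

```python
# Implied compound annual growth rate (CAGR) from the traffic forecast:
# 3.4 ZB today, growing to 10.4 ZB over three years.
start_zb, end_zb, years = 3.4, 10.4, 3

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")  # roughly 45% per year
```

In other words, data centre networks must be planned around traffic that grows by close to half again every year.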
The ongoing growth in Internet of Things (IoT) deployment across nearly all industries is also accelerating demand for cloud-based data centres, as traditional data centres would struggle to handle the ever-increasing load.
In fact, within three years, IoT data transmitted to data centres will be 269 times higher than the amount of data transmitted from end-user devices, placing massive demand on the infrastructure needed to support this scale.
These trends will create numerous challenges for enterprises that want to stay at the forefront of their field, as many face exponential increases in the volume of data to be transmitted and stored as they digitise their operations.
Demands on data centre infrastructure and trends
Over the last few years, cloud adoption has evolved from an emerging technology to an established solution with widespread acceptance, driven by advantages such as faster delivery of services and data, increased application performance and improved operational efficiencies.
Cloud services also address varying customer requirements, such as privacy, mobility and multi-device access. For consumers in an increasingly mobile world, cloud services offer worldwide access to content and services, on multiple devices, wherever users are located.
In support of this trend, over the past year, a large swathe of world-renowned enterprises have reported plans to move their business critical applications to the cloud. Netflix, for example, recently announced plans to shut down the last of its traditional data centres.
With more and more well-known enterprises moving to the cloud, all three types of cloud service delivery models—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)—will continue to grow as enterprises look to capitalise on the benefits of moving to a more flexible computing environment.
Public vs private vs hybrid
But which cloud model is right for your business: private, public or hybrid? The choice will ultimately depend on each business's individual circumstances and the respective benefits offered by each cloud deployment model, as there is no true “one-size-fits-all” approach to cloud.
As businesses are increasingly looking to cut down on IT-related costs, and at the same time wanting more agile platforms for application delivery, we are seeing a greater adoption of public clouds—especially as public cloud security continues to strengthen. Although today, public clouds deliver only 30 percent of all cloud workloads, by 2019, it is projected that this will grow to 56 percent, highlighting the shift in balance between private and public cloud solutions as the public cloud market matures and competition drives new competitive pricing and features.
When it comes to public clouds, one primary advantage seen by IT organisations is a low and predictable cost of ownership, achieved by shifting from capital expenditure and ongoing IT costs towards a predominantly operating-expense-driven model. Key technical benefits of the public cloud include elastic scalability, automated deployment, high reliability and self-service provisioning.
With these benefits in mind, public cloud systems are a popular option for newer cloud native and mobile applications, agile application development and test environments, building scalable web tiers of applications with seasonal or unpredictable demand and also long-term data storage capacity.
Private clouds, on the other hand, offer a similar set of advantages, including scalability, self-service IT and automation, but delivered within the business's own IT environment (either self-operated or delivered as a managed service, e.g. Cisco MetaPod).
Many businesses looking to private cloud already run mission-critical business applications built on traditional architectures with shared storage, have specific compliance requirements and need high-performance application connectivity, typically with predictable usage patterns.
New application demands—including both new microservice-based cloud-native apps as well as the redesign of traditional monolithic applications—are driving the demand for a cloud service delivery model to be delivered in-house as a private cloud, ensuring that both the existing and new application requirements can be catered for, while still meeting the business needs for security compliance and application performance.
While private clouds are optimised for the delivery of cloud-native applications, they can be adapted to support any application, offering the ideal platform for workloads requiring high-bandwidth, low-latency access to application components and data stored within the private cloud or data centre.
Considering the relative strengths of private and public clouds across a diverse application portfolio, many businesses are also looking to adopt a hybrid cloud approach, providing choice in cloud deployment across private and numerous public cloud providers to meet not only the needs of the application (including performance and availability), but also the needs of the business (cost, control, security, governance and so on).
In such a hybrid cloud environment, cloud resources are split between the in-house IT-managed private cloud and the externally controlled and delivered public cloud offered through one or more public cloud providers. Application components can then be deployed to the cloud of choice, leveraging the management tools and APIs offered by each cloud provider.
The hybrid cloud approach is typically an attractive option for organisations that desire flexibility and scalability to meet their needs for rapid application deployment, often driven by their digitisation strategy and a focus on new growth opportunities and competitive differentiation. Given the flexibility of deployment options, it also caters to enterprises concerned about storing sensitive data in the public cloud.
This flexibility allows existing, traditional applications to be deployed within the organisation's existing IT infrastructure while new apps launch in the cloud, offering a cost-effective transition strategy for a complex application environment with differing needs. A further scenario in which the hybrid cloud offers benefits is cloud-bursting, in which everyday computing requirements are handled within the private cloud infrastructure, but peak demands, whether seasonal or demand-driven, are handled by the public cloud. This is one way IT teams can build flexibility into their IT infrastructure to cater for surging demand.
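The cloud-bursting split described above can be sketched as a toy placement function (an illustrative model with hypothetical capacity figures, not a real scheduler):

```python
def burst_split(demand, private_capacity):
    """Place work on the private cloud up to its capacity; burst the rest to public."""
    private = min(demand, private_capacity)
    public = demand - private
    return private, public

# Everyday demand fits in-house; a seasonal spike bursts to the public cloud.
print(burst_split(80, private_capacity=100))   # (80, 0)   - all private
print(burst_split(140, private_capacity=100))  # (100, 40) - 40 units burst
```

The point of the sketch is simply that the private cloud absorbs the steady baseline, while only the overflow incurs public cloud costs.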
Despite these benefits, the hybrid cloud by its nature consists of a set of unique environments with different terminology, capabilities, services, APIs and pricing models, creating a major management and operational challenge for an IT organisation. Each new cloud also brings a new set of skills required to deploy applications consistently, which can prove costly and time-consuming and adds risk to any new deployment.
In response to these challenges, Cisco CloudCenter offers an application-centric hybrid cloud management platform that delivers a consistent view and application model, catering for the full lifecycle of application deployment across more than 19 separate clouds. This approach vastly simplifies delivering applications across a diverse hybrid cloud environment, providing the ability to model, benchmark, deploy and monitor applications, and delivering the complete application stack to the cloud of choice, from a single VM to a complex multi-tier application environment.
Cisco IT's hybrid cloud strategy
As with any large organisation facing the challenge of keeping pace with the ever increasing speed of business, Cisco's own IT department is also responding to the changing demands that come with being a leading global IT provider.
Cisco IT has recently developed and released its new cloud strategy (2016 Global Cloud Strategy or GCS) in which over the next five years, Cisco will evolve its existing cloud model to design, build and operate a self-service, automated, multi-tenant hybrid cloud to meet the needs of an increasingly diverse application portfolio.
In essence, the GCS will create a unified, intelligent, virtualised IT infrastructure with an application focus that will improve capacity, flexibility, resiliency and performance. This new environment will also be optimised for new container-based applications built on a microservice, cloud-native architecture.
The GCS focuses on two key dimensions. The first is a programmable infrastructure extending capabilities for multi-tenant cloud services to support a hybrid private and public cloud environment.
The second dimension is the architecture of applications. GCS transforms applications, making them cloud-native: designed from the start to intelligently maximise their use of cloud capabilities to achieve positive business outcomes.
The key aims of the GCS are to achieve the following:
Speed and agility:
Transforming applications to a cloud-native architecture, making all platforms and infrastructure programmable and API-driven, and using policies and profiles for secure application placement and flexible workload delivery via containers, virtual machines, and physical servers.
Simplicity and operational excellence:
Using data-driven operational insights to manage complexity in an increasingly dynamic cloud environment.
Avoiding large capital investment by optimising infrastructure that resides within existing data centre facilities, increasing the density and use of infrastructure resources, and shifting to a converged, multi-tenant infrastructure with flexible infrastructure assets.
Using public cloud services opportunistically with a secure, hybrid cloud model.
Resiliency and performance:
Enabling geographically distributed applications and data that can self-heal and be located close to end users for the best user experience.
Shifting security tools to a software-centric, virtualised form factor where security can accompany highly portable applications and data.
By building flexibility and programmability into its cloud services, both private and public, the GCS improves Cisco's effectiveness in responding to business activity.
This flexibility will serve business needs by allowing for faster file downloads for customers, and faster performance for the engineering platforms used by Cisco development teams located globally.
Security and privacy
Security concerns are a chief stumbling block for businesses looking to migrate to cloud paradigms.
In fact, research shows that data security is the dominant barrier for moving to the cloud for almost half of enterprises interested in making the switch.
Until recently, organisations have had to compromise on the level of security when implementing application virtualisation and cloud-computing models.
That's because replicating security policies of physical environments limits the advantages of virtualisation and fails to adequately address new security challenges.
However, as we'll explore, there is a growing range of best-practice policies and systems that will not only secure your data, but ensure system functions are not impeded.
Increasing digitisation, increasing risk
The past few years have been an extremely eventful period from a cloud security threat perspective.
For example, in 2014, network security vendors moved quickly to patch appliances against Heartbleed, a vulnerability in the popular OpenSSL cryptographic software library, and then months later against Shellshock, a vulnerability in the widely used Bash shell.
The quick response highlights the ability of security vendors to provide support and assistance to customers in need, which will be critical going forward, as data breaches are estimated to cost companies US$2.1 trillion globally by 2019.
The incidents also showcase how quickly security risks can emerge, thanks to the rapidly evolving cloud landscape and widespread adoption. This is further highlighted by the fact that more than a quarter of users have uploaded sensitive data to the cloud.
The reality is that users expect their online experiences to always be fast, available and secure. Yet as more data, business processes, and services move to the cloud, organisations will find it challenging to protect websites and security without sacrificing performance.
To help meet user expectations, more secure internet servers are being deployed worldwide. This growing infrastructure footprint will allow for better authorisation and authentication controls, which in turn will provide end users with better service and more secure transactions.
Additionally, the impending explosion of IoT devices will throw up new and challenging security concerns, for example around the route data takes to the provider. In some devices, data may be sent to an insecure local hub, where it is stored until it can be uploaded in a larger batch. This method could potentially place large amounts of data in less secure locations.
Business risks from data breaches
All organisations depend on their data to function. Yet headline-grabbing data breaches are reported each month around the world, costing businesses millions of dollars each time.
It's critical to develop a security strategy that not only protects your business against data breaches, but will respond swiftly if a situation arises. Doing so will go a long way to ensuring you maintain business continuity and protect your reputation, while avoiding fines, judicial penalties, legal fees and remediation costs.
Factors data centre managers should keep in mind when looking to balance security with digitisation goals include:
High-value target:
Application virtualisation and cloud-computing environments concentrate valuable assets, making them a high-value target for both hackers and threats from within organisations.
Mobility of workloads:
Server virtualisation enables mobility of applications between servers. However, flexibly moving security policies along with virtual workloads can be challenging.
More points of attack:
Server virtualisation introduces additional points of attack.
Disparate user groups:
Today's consolidated data centres and clouds have disparate user groups that require complete separation of network traffic and strict access control policies.
Separation of duties:
Traditional IT environments had a strict division of responsibilities between server administrators, network administrators and the security team. Server virtualisation complicates this division of labor.
Making virtual applications secure is one of the most difficult challenges to overcome when looking to maximise the benefits of moving from a traditional data centre to a cloud model.
That's because new virtual security services with visibility into virtual applications and switches are required to run in conjunction with the traditional data centre's security appliances and modules.
Here are a few best-practice points for data centre managers to keep in mind when designing a new security model.
Defend against unauthorised users:
Block all unauthorised traffic from the rest of the LAN. This can be achieved with a high-bandwidth network security appliance, such as the Cisco ASA 5585-X Adaptive Security Appliance.
Prevent intrusion and contain malware:
Legitimate traffic from outside the data centre may still contain malware, including Trojan horses, viruses and worms. The Cisco ASA 5585-X helps here too.
Defend with a proven firewall:
Extend the well-proven security component of the physical environment to the virtual and cloud infrastructure by integrating the Cisco ASA 1000V Cloud Firewall with the Cisco Nexus 1000V Virtual Switch.
Provide centralised multi-tenant policy management:
Creating security profiles using a template-based configuration approach can simplify authoring, deployment and management of your security policies. This can be made possible with the Cisco Virtual Network Management Centre (VNMC), which can manage the Cisco VSG and the Cisco ASA 1000V virtual firewall.
Support virtual machine mobility:
When security policies are assigned to virtual machines, those policies need to move around the data centre with each virtual machine as it moves to a new server. Traditional firewall policies make this mobility difficult, but it's a fundamental capability of the Cisco VSG.
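The principle of security policy following a mobile workload can be illustrated with a toy model (a sketch only; products such as the Cisco VSG enforce this in the network fabric, not in application code, and the VM and profile names here are hypothetical):

```python
# Toy model: security profiles are bound to workloads, not to hosts,
# so a live-migrated VM keeps its policy wherever it lands.
profiles = {"web-vm": {"allow": [80, 443]}, "db-vm": {"allow": [5432]}}
placement = {"web-vm": "server-a", "db-vm": "server-a"}

def migrate(vm, new_host):
    """Move a VM; its profile travels with it because it is keyed by VM, not host."""
    placement[vm] = new_host
    return profiles[vm]  # the policy is unchanged by the move

policy = migrate("web-vm", "server-b")
print(placement["web-vm"], policy)  # server-b {'allow': [80, 443]}
```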
A final word on security
If you think your enterprise is too small a target to bother with properly securing your systems, you couldn't be further from the truth.
Research shows enterprises with 10,000 records or fewer have a one-in-four chance of experiencing a data breach over a two-year period. On the other hand, companies with a minimum of 100,000 records have just a 1 percent probability of their data being breached over the same period.
How Cisco secures its data centres
Cisco is entrusted to protect crucial information on behalf of some of the world's leading companies, so it's vital that it takes a data-centric approach to security.
Multi-tenancy is an important security innovation that helps Cisco create the same segmentation in the virtual environment that exists in physical data centres. Multi-tenant architecture allows applications to virtually partition data and configurations, providing each client customised virtual applications.
The foundation of data centre security is a series of processes that are designed as layers, to protect critical assets such as customer data and intellectual property.
In essence, there are five security capabilities: prevent and mitigate, detect, contain, measure and remediate. From a network perspective, Cisco operates mainly in the first three capabilities, using firewalls as a primary control.
The five capabilities are applied to technology within the data centre, including system applications, operating systems running on the hardware, virtualisation technologies such as VMware or OpenStack, storage systems and the network.
Securing the data centre
Data centre networks are evolving. The traditional model is to have standalone switches in an access/distribution/core topology. However, this is being replaced by data centre network fabrics with a flat spine/leaf topology.
While Cisco is aggressively adopting a programmable switching fabric through its deployment of Application Centric Infrastructure (ACI) on internal data centre networks, a significant amount of data centre workloads still runs on traditional switched networks. As such, Cisco must apply different security models to the different network types. Both are discussed in detail in later sections.
Management, service and support
Currently, data centre network security has two major management categories. The first centres on the access control lists (ACLs) on firewalls and the Nexus switches, in the case of a traditional switched network. The second focuses on the management of the IPS/IDS monitoring and incident response capabilities.
Configurations and devices for firewalls and Nexus switches are managed by the Cisco network team under Global Infrastructure Services (GIS). GIS governs and manages changes to the ACLs on these devices. For example, if an employee requires a specific server to communicate with another server on a certain port, InfoSec will first investigate whether it is a valid request before routing the request to GIS for implementation.
Management and monitoring of the IPS/IDS are the responsibility of the Cisco Computer Security Incident Response Team (CSIRT). CSIRT operates the sensors that generate data around network traffic and devices. When an anomaly arises, it's flagged as an event related to that packet or session and sent to a Tier 1 analyst for review. If the analyst finds something about the event worthy of further investigation, Tier 1 escalates the alert to an investigator for deeper analysis. If an investigator is unable to track down the offending system's owner, a tool within the network can “black hole” the offending system until long-term actions can be taken.
If an infected system on the network tries to spread to other systems, Cisco has the ability to black hole it at the network level, thanks to a process called real-time blacklisting (RBL), ensuring that the offending packets never reach their destination.
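The escalation flow above can be sketched as a small triage function (illustrative only; the decision points mirror the process described, but the event fields and action names are hypothetical):

```python
def triage(event):
    """Route a flagged network event through Tier 1 review and escalation."""
    if not event.get("anomalous"):
        return "ignore"                      # normal traffic, no event raised
    if not event.get("worth_investigating"):
        return "tier1-review-closed"         # Tier 1 analyst closes the event
    if event.get("owner_found"):
        return "escalate-to-investigator"    # deeper analysis with the owner
    # Owner cannot be tracked down: black-hole the offending system
    # until long-term remediation can be taken.
    return "black-hole"

print(triage({"anomalous": True, "worth_investigating": True, "owner_found": False}))
```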
Application delivery and software-defined networking
A successful digital business today thrives on agility and the ability to scale quickly as needed. This is where software-defined networking (SDN) is making an enormous impact.
SDN enables organisations to accelerate application deployment and delivery, reducing IT costs through policy-enabled workflow automation. It centralises network and application services into extensible orchestration platforms that can automate the provisioning and configuration of the entire infrastructure, bringing together disparate IT groups and workflows. The result is a modern infrastructure that gives controllers the ability to deliver new applications and services in minutes, rather than days or weeks.
To quote Joe Skorupa, vice president and analyst at Gartner, in SDN architecture "the control and data planes are decoupled, network intelligence and state are logically centralised, and the underlying network infrastructure is abstracted from network applications and features. In addition, programmability enables external control and automation that allow for highly scalable, flexible networks that readily adapt to changing business needs."
SDN has garnered much attention in the networking industry since the OpenFlow 1.1 specification was introduced in 2011. Yet for many years, IT professionals viewed operations, scalability and reliability as challenges that SDN technologies needed to address.
That was until two years ago, when Cisco introduced Cisco Application Centric Infrastructure (ACI), which will be discussed in more detail in the case study.
There are three main models of SDN deployment: switch-based, overlay and hybrid. Here are the definitions according to Gartner's research note “Ending the Confusion Around Software Defined Networking (SDN): A Taxonomy”:
Switch-based:
This model is well suited for greenfield (new) deployments in which the cost of physical infrastructure and multivendor options are important. Its drawback is that it does not use existing Layer 2 and 3 networking equipment.
Overlay:
This model is well suited for deployments over existing IP networks or those in which the server virtualisation team manages the SDN environment. Here, the SDN endpoints reside in the hypervisor environment. The biggest drawbacks are that this model doesn't address the overhead required to manage the underlying infrastructure, debugging problems in an overlay can be complicated, and the model does not support bare-metal hosts.
Hybrid:
This model is a combination of the other two approaches, offering non-disruptive migration that can evolve to a switch-based model over time. Gateways link devices that do not natively support overlay tunnels, such as bare-metal servers.
Hardware and other considerations for data centre managers
While SDN is shaping up to transform the internetworking industry, hardware remains a big part of every network.
Commercial off-the-shelf hardware—such as servers with x86 processors that use software-based forwarding capabilities or emergent white-box switches—will find a place in many cases. But a more important consideration is the evolving role of high-performance hardware in SDN.
As service providers optimise their networks to take full advantage of the very best tools in SDN, network functions virtualisation (NFV) and other technologies, they need to strategically assess their hardware choices, based on functions and performance requirements and the intended business outcome for individual applications and services.
Low-cost, off-the-shelf hardware can support many standard applications. But high-performance hardware—such as core data-centre switches that support more stringent I/O requirements, high throughput, and high reliability—is still crucial. Of course, providers need to drive network costs down, and using more mass-produced hardware is one way to do so. But the SDN and NFV solutions that you deploy must satisfy the top requirements across applications, services, service provider architectures, and topologies, now and in the future.
It's worth noting that if you focus on establishing a clean abstraction of the services from the underlying hardware infrastructure using SDN principles, it is easier to change or evolve your network deployment, keeping it independent of the upper services and applications.
Along with an abstraction layer, an orchestration system is a vital component of your SDN environment. This system manages across virtualised and physical infrastructures and can easily migrate workloads, when necessary, without exposing complexity to the upper layers of services and applications. In a nutshell, a clean abstraction between the orchestration, application and infrastructure layers allows different types of hardware to be switched quickly and easily as changing needs dictate.
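The value of a clean abstraction between the orchestration and infrastructure layers can be sketched with a minimal interface (a design illustration with hypothetical class names, not any vendor's API):

```python
from abc import ABC, abstractmethod

class Switch(ABC):
    """Abstract infrastructure layer: orchestration only sees this interface."""
    @abstractmethod
    def configure_vlan(self, vlan_id: int) -> str: ...

class WhiteBoxSwitch(Switch):
    def configure_vlan(self, vlan_id):
        return f"whitebox: vlan {vlan_id} configured"

class CoreSwitch(Switch):
    def configure_vlan(self, vlan_id):
        return f"core: vlan {vlan_id} configured"

def orchestrate(switches, vlan_id):
    """The orchestration layer works against the abstraction, so hardware
    can be swapped without touching the upper services and applications."""
    return [s.configure_vlan(vlan_id) for s in switches]

print(orchestrate([WhiteBoxSwitch(), CoreSwitch()], 42))
```

Because `orchestrate` depends only on the `Switch` interface, replacing commodity hardware with high-performance switches (or mixing the two) requires no change to the layers above.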
The rise of container-based microservices
Cisco’s internal application architecture is moving towards Linux containers and microservices. Containers offer a lightweight, operating system-level virtualisation technology that makes applications easy to build, package and run.
Docker, the dominant container runtime, provides the capability to package an application with all its dependencies into a standardised unit for software development. Because containers run on the same operating system, they are lighter weight than virtual machines and make more efficient use of shared resources such as the file system and RAM. Therefore, containers can be launched and run faster than virtual machines. Running applications in different containers provides namespace isolation and resource control. It also provides modularity, with the flexibility to port a given application from one infrastructure to another, unlike in a hypervisor environment.
When containers are used, an application always runs the same regardless of the environment in which it runs. Also, because resources share the same kernel and operating system, they can be brought up and taken down much more quickly, allowing faster application scalability during peak use.
The emergence of Docker containers and the underlying support in the Linux kernel has enabled a shift in the way that applications are designed and built, using new microservice architectures.
Microservice architectures are an approach to building complex applications through small, independent components that communicate with each other over language-independent application programming interfaces (APIs). The environment is extremely dynamic, with new services being instantiated and torn down constantly.
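A microservice in this style exposes a small, language-independent API over the network. The sketch below uses only the Python standard library to stand up one such service and call it over HTTP (the service name and payload are hypothetical):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PriceService(BaseHTTPRequestHandler):
    """A tiny microservice: GET /<sku> returns a JSON document."""
    def do_GET(self):
        body = json.dumps({"sku": self.path.strip("/"), "price": 42}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# Start the service on an ephemeral port, as an orchestrator would.
server = HTTPServer(("127.0.0.1", 0), PriceService)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any consumer, in any language, talks to it over plain HTTP/JSON.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/widget") as resp:
    data = json.loads(resp.read())
server.shutdown()
print(data)
```

Because the contract is just HTTP and JSON, the consumer neither knows nor cares what language or container the service runs in, which is what makes these components independently deployable and disposable.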
Cisco requires a network infrastructure that is programmable while at the same time offering an application-centric network security model. Application Centric Infrastructure (ACI) delivers this. Endpoints are placed into End Point Groups (EPGs), and network contracts then define and permit the network flows between EPGs. ACI allows the network fabric to be programmed via fully open APIs through the Application Policy Infrastructure Controller (APIC), which fully configures and operates the network. This strong alignment makes Cisco ACI well suited to container-based technology.
The integration of ACI with Docker containers allows an end user to create containers either directly through the Docker command-line interface (CLI) or through higher-level tools such as Docker Compose.
The user can also join the service tiers in a multi-host cluster and automate the creation of network policy in the APIC, allowing communication for the given containers.
Cisco ACI views containers like any other computing resource, and through this integration containers are mapped transparently to End Point Groups (EPGs) in the controller without the need for additional end-user intervention. This EPG-to-container mapping is intuitive and enables end users to benefit from Cisco ACI capabilities such as multi-tenancy, security, performance, and mobility.
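A toy model can make the EPG-and-contract abstraction concrete. The Python sketch below is purely illustrative (it does not use the APIC API, and all names are invented): endpoints of any kind, containers included, map to EPGs, and a flow is permitted only if a contract covers it:

```python
# Illustrative model of ACI's policy abstraction -- not the APIC API.
# Endpoints (containers, VMs, bare metal) belong to End Point Groups (EPGs).
epg_membership = {
    "web-ctr-1": "web",   # a Docker container, mapped to the "web" EPG
    "web-ctr-2": "web",
    "app-ctr-1": "app",
    "db-vm-1":   "db",    # a virtual machine -- treated exactly the same
}

# Contracts: (consumer EPG, provider EPG, TCP port) tuples explicitly allowed.
contracts = {("web", "app", 8080), ("app", "db", 5432)}

def is_permitted(src, dst, port):
    """White-list check: deny unless an EPG-to-EPG contract allows the flow."""
    return (epg_membership.get(src), epg_membership.get(dst), port) in contracts

print(is_permitted("web-ctr-1", "app-ctr-1", 8080))  # -> True
print(is_permitted("web-ctr-1", "db-vm-1", 5432))    # -> False (no web->db contract)
```

Note that policy is expressed between groups, not individual endpoints, which is why new containers can appear and disappear without any per-container rule changes.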
Cisco ACI integration with containers allows end users to build a network that reflects the way containerised software actually functions, and then add management and system controls. This approach is significantly different from that of other software-defined networking (SDN) solutions, in which the network is built around the management tools, which then try to adapt the network so that containers can function with it.
ACI, security and analytics
ACI, by default, is a white-list model, which allows for an improved security posture. In this model, all network traffic between devices must be explicitly allowed. While this is very secure, it can still be difficult to work out how an application behaves on a network when application developers have never had to deal with this level of granularity. That's where Tetration Analytics comes in (which we'll discuss in more detail in section four).
Essentially, Tetration Analytics provides the unprecedented visibility needed to profile applications, yielding the exact security configuration the infrastructure must be programmed with for an application to run under a white-list model.
In a nutshell: ACI is a key piece in enabling security, and Tetration Analytics provides the visibility to define the exact security configuration.
Data centres don’t just process vast amounts of data—they generate incredible amounts of it too.
The sheer volume alone presents substantial challenges to IT and data centre managers hoping to get clarity and visibility around their data centre operations.
Exacerbating these challenges, technology advancements mean 76 percent of data centre traffic is now east-west, and the complexity of processing data will only increase as digitisation grows and machine-to-machine communication becomes more commonplace.
This dynamic environment contributes to three main challenges: network operations not having full visibility into the traffic; IT operations being unable to know the exact application communication pattern; and policies being improperly implemented between different application tiers.
Legacy system issues
Legacy systems create no shortage of issues for managers seeking insights into their data. In many cases, data centre managers have to manage their data in central data warehouses, using cumbersome extract, transform, and load (ETL) techniques to access their data across multiple systems. Additionally, some enterprises have adopted bolt-on front-ends to their existing legacy systems, and as such, will potentially miss the opportunity to innovate beyond legacy-imposed back-end constraints.
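For readers unfamiliar with the pattern, a minimal ETL sketch in Python (all systems and records invented) shows why consolidating data this way is cumbersome at scale: every source system needs its own normalisation rules before anything can be loaded into the warehouse:

```python
# Two "legacy" sources with inconsistent field names and formats (invented data).
crm_rows = [{"Customer": "Acme", "Spend": "1,200"}]
billing_rows = [{"customer": "acme", "spend": 300}]

def extract():
    """Extract: pull raw records from each source system."""
    return crm_rows + billing_rows

def transform(rows):
    """Transform: normalise field names, casing and number formats."""
    out = []
    for r in rows:
        name = str(r.get("Customer") or r.get("customer")).lower()
        spend = int(str(r.get("Spend") or r.get("spend")).replace(",", ""))
        out.append({"customer": name, "spend": spend})
    return out

def load(rows, warehouse):
    """Load: aggregate the cleaned records into the central store."""
    for r in rows:
        warehouse[r["customer"]] = warehouse.get(r["customer"], 0) + r["spend"]

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # -> {'acme': 1500}
```

Multiply the per-source transformation logic by dozens of legacy systems and the cost in time and brittleness becomes clear.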
Until recently, enterprises have performed fragmented tasks when addressing operational issues, resulting in slow and disjointed processes that are costly in both time and money.
IT managers today are also often hamstrung by a lack of visibility into data centre infrastructure and how applications are interacting. In some instances, they're unable to migrate applications to the cloud or set up a Disaster Recovery site with precision and speed. In other cases, they're unable to adopt a zero-trust model because they lack the critical information and resources to implement or maintain it.
However, the time it takes to make important data-based decisions can be drastically cut by employing services that provide good analytical insights—such as Cisco Tetration Analytics, released in July 2016.
These services can give enterprises an edge over their competitors by providing access to data in a quicker, easier-to-digest and more reliable way. These insights also often increase end-user productivity and reduce IT operational costs.
With the aid of good analytical insights, organisations can understand what applications are dependent on each other and move from reactive to proactive when it comes to operational decision making. They can also search across billions of flows in less than a second using Tetration's forensics search engine, and continuously monitor application behaviour to quickly identify any deviation in communication patterns.
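The mechanics of both capabilities can be sketched in a few lines of Python; Tetration performs the equivalent at massive scale with purpose-built indexing, so the invented records and linear scan below are only illustrative:

```python
# Invented flow records; a real platform holds billions and indexes them.
flows = [
    {"src": "web", "dst": "app", "port": 8080},
    {"src": "app", "dst": "db",  "port": 5432},
    {"src": "web", "dst": "db",  "port": 5432},  # not part of the baseline
]

def search(flows, **criteria):
    """Forensic search: return every flow matching all of the given fields."""
    return [f for f in flows if all(f[k] == v for k, v in criteria.items())]

# Deviation monitoring: flag anything outside the learned communication baseline.
baseline = {("web", "app", 8080), ("app", "db", 5432)}
unexpected = [f for f in flows
              if (f["src"], f["dst"], f["port"]) not in baseline]

print(len(search(flows, dst="db")))  # -> 2
print(unexpected)                    # -> [{'src': 'web', 'dst': 'db', 'port': 5432}]
```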
Running the numbers
Cisco Tetration Analytics is designed to help customers gain complete visibility across their entire data centre in real time.
Key benefits of the platform include the ability to maintain up-to-date infrastructure for disaster recovery, build and deploy a consistent and secure whitelist policy model, and monitor applications for network policy compliance. It also allows you to migrate applications to SDN solutions and the cloud more easily, as you have contextual information of all up- and downstream dependencies of the application.
Through continuous monitoring, analysis, and reporting, Tetration provides IT managers with a deep understanding of the data centre that will simplify operational reliability.
It can also run like a time machine for the data centre, enabling organisations to rewind and have a look at what happened in the past, see what's happening in the present and model future risk assessments.
How it works
Powered by big data technologies, the platform gathers telemetry from hardware and software sensors and then analyses the information using advanced machine learning techniques.
Software sensors are installed on end hosts, whether virtual machines or bare-metal servers, and support Linux and Windows server hosts. Hardware sensors are embedded in the ASICs of Cisco Nexus 9200-X and Nexus 9300-EX network switches to collect flow data at line rate from all ports. The platform can also be deployed in any data centre, with any servers and any network switches.
A single Tetration appliance will monitor up to one million unique flows per second, and the platform as a whole can search tens of billions of telemetry records and return actionable insights in less than a second.
With so much data to sift through, and the ever increasing need for it to be done quickly, machine learning is proving critical in improving the efficiency of data centre management.
Providing a turnkey solution that uses unsupervised machine learning and a behaviour-based algorithmic approach, the Tetration platform drastically reduces the human input required to understand communication patterns. Meanwhile, its self-monitoring and self-diagnostics capabilities eliminate the need for big data expertise to operate the cluster.
The platform can generate policies between the various tiers of any application after observing flow data over time. It can then export the policies it discovers to other applications.
Furthermore, it can also monitor application components for network policy compliance, with the ability to detect any violations in just minutes using behaviour analysis techniques.
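A simplified sketch of that idea, with invented flow samples: observe (source tier, destination tier, port) patterns over time and keep only those seen repeatedly, so that one-off flows do not become policy:

```python
from collections import Counter

# Observed flows as (source tier, destination tier, port). In practice these
# come from sensor telemetry; here they are made-up samples.
observed = [
    ("web", "app", 8080), ("web", "app", 8080), ("app", "db", 5432),
    ("web", "app", 8080), ("app", "db", 5432), ("lab", "db", 5432),
]

def discover_policy(flows, min_count=2):
    """Keep only patterns seen at least min_count times as candidate rules."""
    counts = Counter(flows)
    return sorted(p for p, n in counts.items() if n >= min_count)

print(discover_policy(observed))
# -> [('app', 'db', 5432), ('web', 'app', 8080)]
```

The stray lab-to-db flow is excluded, while the recurring tiers yield exportable white-list candidates; compliance monitoring is then the inverse check, flagging flows that fall outside the discovered policy.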
Other machine learning benefits for data centres
Tetration is not the only machine-learning system generating major benefits for data centres. Earlier this year, Google's DeepMind AI system was put in control of parts of its data centres and tasked with reducing power consumption by manipulating computer servers and cooling systems.
The savings translated into a 15 percent improvement in power usage efficiency, which could deliver hundreds of millions of dollars in savings over multiple years, while also providing a great outcome for the environment.
Just as DeepMind learnt to master a diverse range of Atari 2600 games with only raw pixels and scores as inputs, the software has been able to figure out how to get the best score when it comes to using power most efficiently at the data centre.
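Power usage effectiveness (PUE) is total facility power divided by IT equipment power, so trimming the non-IT overhead pushes it toward the ideal of 1.0. The figures below are invented purely to show the arithmetic, not Google's actual numbers:

```python
# Invented figures, purely to illustrate the PUE arithmetic.
it_power_kw = 1000.0
overhead_kw = 120.0  # cooling, power distribution, lighting, ...

pue_before = (it_power_kw + overhead_kw) / it_power_kw

# Cutting 15% of the non-IT overhead moves PUE toward the ideal of 1.0.
overhead_after = overhead_kw * 0.85
pue_after = (it_power_kw + overhead_after) / it_power_kw

print(round(pue_before, 3), round(pue_after, 3))  # -> 1.12 1.102
```

Small PUE movements translate into large absolute savings because the overhead is charged against every kilowatt of IT load, around the clock.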
Tetration on the ground
With almost 3000 business applications at data centres around the world, Cisco's migration of thousands of applications to SDN and the cloud is no mean feat, particularly while complying with evolving security policies and creating a zero-trust security environment.
Enter Cisco Tetration, the origins of which stem back to 2014, when Cisco began deploying its Application Centric Infrastructure (ACI) to support the move.
Not long after the migration began, it became apparent that staff time costs were blowing out. Consequently, Cisco sought an analytics-based solution, and got Tetration up and running in early 2016.
Pros for professionals
The major benefits of Tetration are that it helps Cisco IT staff achieve:
- Deep and near-real-time visibility into its application environment,
- Staff efficiencies and cost-effective migrations to SDN zero-trust operations and private cloud, and
- Substantially better compliance by monitoring policies and data flows between customer-facing, production, lab and partner systems.
It also has numerous potential uses, including:
- Application behaviour insights: Capturing real-time traffic data between application components and behavioural analysis to find application groups, communication patterns and service dependencies.
- White-list policy recommendations: Providing white-list policy recommendations for an application once application behaviour has been understood.
- Policy simulation: Simulating white-list policy and testing it before moving it into production.
- Policy compliance: Monitoring for deviations from policies once applied.
- Flow search and forensics: Searching billions of flow records in near real-time.
Tetration, in conjunction with ACI, will save Cisco millions of dollars and thousands of staff working hours in migration tasks, as outlined in an IDC report.
The report estimates it takes staff 70% less time to carry out application traffic analyses and establish zero-trust operational environments when using Tetration and ACI (reducing the staff time needed per 100 applications from 5200 IT staff hours to just 1550).
Assuming a $100 per hour fully loaded cost of IT staff time, this would represent a saving of $365,000 per 100 applications that Cisco migrates to ACI and applies white-list security models.
Cisco also expects to reduce the staff time needed to carry out a traffic analysis by 69%, compared with using manual processes.
This means that Cisco's IT team will need to expend only 1250 staff hours per 100 applications migrated to its SDN zero-trust environment, compared with 4000 hours per 100 applications using manual processes.
In addition, based on early results of using Tetration Analytics and Cisco ACI together, Cisco IT is seeing a further 75% staff time efficiency in terms of implementing whitelist security models once applications are migrated.
It's important to remember that cloud adoption is not just being driven by a business's need to cater for the exponential growth in data—it's also being driven by the customer and their high service expectations.
The explosion of data we're seeing is posing unprecedented service delivery challenges for legacy systems, while the cloud promises faster delivery of data, increased application performance and improved operational efficiencies.
Another advantage cloud computing offers is that businesses do not have to struggle to keep up with competitors by forking out for expensive CapEx infrastructure, but can instead rent it as a service and scale when it suits them best.
This scalability and agility is being both complemented and enabled by software-defined networking (SDN). Cisco's Application Centric Infrastructure (ACI) was designed in anticipation of the significant shift now occurring toward Linux containers and microservices, which offer a lightweight, operating system-level virtualisation technology that makes applications easy to build, package and run.
Used in tandem with ACI, Tetration grants IT managers pervasive visibility into data centre infrastructure and how applications are interacting. Tetration also allows for cost-effective migrations to SDN zero-trust operations and private cloud, while offering substantially better monitoring of policy compliance. Last, but certainly not least, it enables enterprises to operate systems in a zero-trust security environment.
This will dramatically reduce the chances of data being compromised, and of business reputation and productivity being damaged in the process. Thanks to ACI and Tetration, what was once a weakness of cloud computing, security, is becoming a strength: a monumental achievement considering the internet was designed as a loose platform with security as an afterthought.
While it might feel like a leap into a brave new world, the good news is that you won't be the first, and you certainly won't be alone.
Cisco IT will continue to pave the way forward when it comes to cloud migration and the evolution of the data centre, making your journey easier when the time comes to make the transition. We understand that sustaining future growth—in particular from an infrastructure perspective—is a huge challenge for companies big and small, which is why we've made it a priority to make huge strides in this field both now and into the future.