U.S. Public Sector/Defense
As the U.S. government increasingly embraces as-a-Service (aaS) consumption models, adoption of cloud technologies continues to accelerate. Government agencies are implementing cloud for many of the same reasons as nongovernment organizations.
Cloud offers the unique ability to achieve the following:
● Address technical debt and gain access to ongoing commercial innovation.
● Unlock IT value by focusing IT efforts and investments on driving mission impact.
● Enable agility in the cyberspace domain.
● Handle and analyze growing volumes of data to support data-driven policies and operationalization of data.
The Department of Defense’s cloud strategy outlines the strategic approaches DoD will pursue in the process of adopting General Purpose and Fit for Purpose clouds: “Warfighter First,” “Cloud Smart-Data Smart,” “Leverage Commercial Industry Best Practices,” and “Create a Culture Better Suited for Modern Technology Evolution.” This plan recognizes mission and tactical edge needs along with the necessity of preparing for artificial intelligence. To meet these needs, agencies’ networks must also be able to support the growing number of hyper-distributed applications provisioned across virtual machines, containers, and bare-metal hardware throughout data centers and clouds. Applications, Internet of Things (IoT) sensors, big data analytics, and Artificial Intelligence/Machine Learning (AI/ML) engines also generate hyper-distributed data at the edge, all of which will need to be processed in the cloud, on the premises, or both – wherever it can be executed at speed to meet the agency’s mission needs.
The Air Force, for example, is turning to cloud-hosted artificial intelligence and predictive maintenance powered by reference data to drive supply chain and engineering efficiency. At the tactical and operational levels, predictive maintenance analytics allows the organization to anticipate demand and distribute spares, reducing non-mission-capable time for critical combat systems. At the strategic level, it allows systems engineers to identify trends that drive redesign of parts and production processes in order to increase component reliability.
The future is here. Agencies are leveraging multiple cloud-deployment models (private, public, community, and hybrid) to deliver best-fit services based on mission needs. One prerequisite for enabling the successful adoption of cloud is the network’s readiness to support architecture shifts, driven by cloud and other concurrent disruptive technology vectors, including big data, AI, and IoT. Defense agencies seeking to step into the future are asking a crucial question: Is our network cloud-ready?
Today’s network requirements are experiencing a fundamental shift as the traditional model of accessing highly centralized resources is coupled with the need for a distributed, decentralized architecture. This shift is driven by two principal forces:
1. Centralized cloud computing models, where IaaS, PaaS, SaaS, and other Cloud Solution Provider (CSP) hosted services are consumed from public and community clouds.
2. The rapid growth of edge devices – and the distributed architectures that enable them to process data locally and to communicate and share resources with other edge devices.
The upsurge of edge devices has given rise to a distributed architecture that brings the core building blocks of cloud – computing, storage, and networking – closer to the edge. Where latency and bandwidth constraints exist, agencies will leverage edge computing. In some cases, they leverage both an edge computing model (for real-time data processing) and a centralized cloud computing model (for heavy processing). For example, ML models are trained in a public cloud and then deployed at the edge to enable near-real-time predictions. Furthermore, agencies can leverage on-premises private clouds or hybrid cloud deployment models to make core cloud capabilities available at the edge.
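The train-centrally, predict-at-the-edge pattern described above can be sketched in a few lines. This is a minimal illustration, assuming a toy linear predictor for failure risk; in practice the weights would come from a model trained in a public cloud and shipped to the edge device, and the feature names are hypothetical.

```python
# Sketch of the "train in the cloud, infer at the edge" pattern.
# CLOUD_TRAINED_WEIGHTS stands in for parameters produced by cloud-side
# training; the features (engine hours, vibration level) are illustrative.

CLOUD_TRAINED_WEIGHTS = [0.8, 0.15]   # e.g., normalized engine hours, vibration
BIAS = -0.2

def predict_failure_risk(features: list[float]) -> float:
    """Run inference locally on the edge device (no round trip to the cloud)."""
    score = BIAS + sum(w * x for w, x in zip(CLOUD_TRAINED_WEIGHTS, features))
    return max(0.0, min(1.0, score))   # clamp to a probability-like range

print(round(predict_failure_risk([0.9, 0.6]), 2))  # → 0.61
```

Because inference runs locally, a maintenance alert can be raised in near real time even when the link back to the cloud is constrained or intermittent.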
To facilitate their cloud journey, and to de-risk that journey, agencies like DoD will need a cloud-ready network able to support both centralized cloud and distributed architectures that comprise private, community, public, and hybrid clouds. For DoD, a robust cloud environment is a game changer. Suddenly, cloud-hosted applications and reference data are linked to real-time incoming data identifying alerts that both reduce operational risk and expose opportunities to increase operational effectiveness. These opportunities include ensuring the availability of critical information; securing mission networks; streamlining logistics and sustainment; increasing situational awareness in the battlespace; and enhancing quality of life for soldiers, sailors, airmen, Marines, and their families.
Traditional hub-and-spoke network architectures were designed to support consolidated applications and services hosted at centralized “Demilitarized Zones” (DMZs) and data centers. This layout forces the backhaul of internet traffic through the DMZ, creating inefficient traffic routes that increase the distance between end user and application. Today, most agencies still rely on this approach, backhauling traffic destined for off-premises IaaS, PaaS, and SaaS services through a trusted, central connection point. Yet the reality of today’s landscape, with an ever-growing influx of data and devices, is pushing the limits of hub-and-spoke networks: traditional network designs are increasingly unable to keep up with the edge-to-cloud shift of internet traffic.
Today’s agency branch office users collaborate more online through the use of SaaS applications (e.g., WebEx, Office 365) and other cloud services. Branch-based end users are also consuming more and more bandwidth-intensive cloud-hosted applications. In this scenario, two common approaches are available to address IaaS, PaaS, and SaaS performance challenges:
1. Decentralize and deploy multiple internet exits.
2. Provide high-bandwidth connectivity directly from the branch sites.
However, the combination of security, complexity, and cost arising from the rigidity of traditional Wide-Area Network (WAN) technologies makes these solutions impracticable on a large scale.
By providing an architecture that integrates routing, security, centralized policy, and orchestration, Software-Defined Wide-Area Networks (SD-WANs) enable agency branch offices and remote users operating government-furnished equipment to securely connect to applications by leveraging any combination of WAN transport services (e.g., MPLS, cellular, broadband). SD-WAN technology addresses the bandwidth and performance issues related to cloud-hosted applications, so agencies can extend their secure footprint anywhere.
SD-WAN provides agencies the following advantages:
● Predictable application experience using multiple hybrid links with real-time steering based on SLA policies.
● Zero-trust network security and segmentation.
● Integrated security composed of enterprise firewall, intrusion prevention, advanced malware protection, DNS-layer enforcement, URL filtering, and antivirus.
● Seamless public cloud expansion and SaaS optimization.
● Centralized management, zero-touch provisioning, and a high degree of automation.
● Rich analytics for visibility, troubleshooting, and planning.
● Highly scalable architecture supporting 10,000+ locations.
With SD-WAN, agencies like DoD can build a scalable, carrier-neutral WAN infrastructure while also reducing WAN transport costs and network operational expenses. The technology ensures a predictable end-user experience for cloud-hosted applications and supports a seamless, multicloud architecture with simplified operational experience, integrated security, and rich analytics.
Poor end-user experience is one of the top complaints when an agency adopts SaaS, often due to unpredictable SaaS performance when confronting the many dynamic changes in internet gateways and SaaS hosting servers. SD-WAN solves these problems and ensures an optimal SaaS user experience across all agency branches by creating multiple internet exit points and dynamically steering around bandwidth and latency issues in real time.
To achieve this, the SD-WAN fabric continuously measures the performance of designated SaaS applications through all permissible paths leading from a branch. For each path, the fabric computes a quality-of-experience score that gives network administrators visibility into application performance. The SD-WAN technology also makes real-time decisions about the best-performing path between the end users at a remote branch and the cloud SaaS application.
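The per-path scoring and steering loop described above can be sketched as follows. This is an illustrative model, not a vendor's actual formula: the metric weights, thresholds, and path names are all assumptions chosen to show how probe measurements might roll up into a quality-of-experience score that drives path selection.

```python
from dataclasses import dataclass

@dataclass
class PathMetrics:
    """Probe measurements for one permissible path from a branch to a SaaS."""
    name: str
    latency_ms: float
    loss_pct: float
    jitter_ms: float

def qoe_score(m: PathMetrics) -> float:
    """Map raw probe metrics to a 0-10 quality-of-experience score.
    The weights and budgets here are illustrative, not a product's formula."""
    latency_penalty = min(m.latency_ms / 300.0, 1.0)   # 300 ms latency budget
    loss_penalty = min(m.loss_pct / 5.0, 1.0)          # 5% loss budget
    jitter_penalty = min(m.jitter_ms / 50.0, 1.0)      # 50 ms jitter budget
    penalty = 0.5 * loss_penalty + 0.3 * latency_penalty + 0.2 * jitter_penalty
    return round(10.0 * (1.0 - penalty), 1)

def best_path(paths: list[PathMetrics]) -> PathMetrics:
    """Steer the SaaS session onto the highest-scoring permissible path."""
    return max(paths, key=qoe_score)

paths = [
    PathMetrics("mpls-dc-backhaul", latency_ms=180, loss_pct=0.1, jitter_ms=8),
    PathMetrics("broadband-direct", latency_ms=40, loss_pct=0.5, jitter_ms=4),
]
print(best_path(paths).name)  # → broadband-direct
```

Re-running the selection as fresh probe results arrive is what lets the fabric steer around bandwidth and latency degradation in real time, while the per-path scores give administrators the visibility described above.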
In the quest for higher availability and a better end-user experience, agencies have the flexibility to deploy this capability in multiple ways based on their mission needs and security requirements.
Option 1: Direct cloud access from a remote branch
Agencies using one or more inexpensive broadband internet circuits at remote sites can instruct the branch router to direct some types of traffic destined for a designated SaaS to break out directly to the internet. Only trusted and critical traffic to the designated SaaS will be allowed a secure local breakout, while all other internet-bound traffic will follow its usual path. For example, an agency can specify a policy that permits the most performance-demanding and trusted Office 365 applications, such as Exchange Online and SharePoint Online, to take advantage of a local and direct internet connection, while the remaining user network communication outside of the customer network will be routed through the customer data center.
Option 2: Cloud access through the optimal regional hub or carrier-neutral facility
For agencies that want their SaaS to employ a regional hub egress architecture, SD-WAN can help ensure the best possible path through the available regional hub infrastructure. For example, SD-WAN capabilities can be deployed to dynamically choose the optimal regional gateway for the agency’s Office 365 application traffic.
Option 3: Local internet access through secure web gateways
Agencies can connect remote branches to the SD-WAN fabric using inexpensive broadband internet circuits and can apply differentiated security policies depending on the types of services to which users are connecting. Instead of sending all branch traffic to a Secure Web Gateway (SWG) or Cloud Access Security Broker (CASB), an organization may wish to enforce its IT security policies in a targeted manner by routing regular internet traffic through an SWG, while allowing performance-optimal direct connectivity for a limited set of sanctioned SaaS applications. For example, the agency can leverage SD-WAN capabilities to dynamically choose the optimal path among multiple Internet Service Providers (ISPs), both for Office 365 applications permitted to travel directly and for applications that are routable through the SWG per agency policy.
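The steering logic common to the three options above reduces to a small classification decision per flow. The sketch below is a hypothetical illustration, not real SD-WAN configuration syntax: the application names, domain, and next-hop labels are assumptions standing in for an agency's actual policy.

```python
# Illustrative per-flow egress policy covering the breakout options above.
# Sanctioned, latency-sensitive SaaS breaks out directly; internal traffic
# is backhauled; everything else is inspected by the Secure Web Gateway.

SANCTIONED_SAAS = {"exchange-online", "sharepoint-online"}  # trusted SaaS apps
INTERNAL_DOMAINS = {"intranet.agency.gov"}                  # hypothetical example

def next_hop(app: str, destination: str) -> str:
    """Return the egress decision for a flow leaving a branch site."""
    if destination in INTERNAL_DOMAINS:
        return "dc-backhaul"          # internal traffic stays on the trusted path
    if app in SANCTIONED_SAAS:
        return "direct-internet"      # secure local breakout for sanctioned SaaS
    return "secure-web-gateway"       # all other traffic is routed via the SWG

print(next_hop("exchange-online", "outlook.office365.com"))  # → direct-internet
print(next_hop("unknown-app", "example.com"))                # → secure-web-gateway
```

In a deployment, the `direct-internet` decision would additionally select among the available ISPs using real-time path scores, per the quality-of-experience steering described earlier.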
SD-WAN enables agencies to apply routing policies that are mission-centric, application-aware, and differentiated. Organizations can leverage SD-WAN to intelligently route SaaS traffic and provide a fast, secure, reliable end-user experience while simultaneously providing network administrators with real-time and historical visibility into application performance, through a quality-of-experience metric.
Learn about Cisco’s SD-WAN, Secure Internet Gateway (Cisco Umbrella), and Cloud Access Security Broker (Cisco Cloudlock) solutions.
Applications are more dynamic and complicated than ever, with many decomposed into services and microservices, often deployed across on-premises and off-premises cloud environments. Delivering an exceptional digital experience in a blended workload environment requires application and IT infrastructure teams to focus on what matters most: making certain that applications always perform, whether they are deployed in traditional data centers or in complex multicloud environments.
Dynamic Workload Optimization
Agencies must be able to develop and deploy applications on the infrastructure that makes sense for their programmers, their users, and their budget. Agentless workload optimization management technologies can detect elements in an agency’s environment, from applications to individual components, and deliver a topological map of that environment and its interdependent relationships. This empowers agencies to quickly model “what-if” scenarios based on the real-time environment in order to forecast capacity needs accurately and make the right deployment decisions.
Workload optimization management technologies can also automate the scaling of workloads, storage, and databases based on the level of comfort among IT personnel: recommend (view only), manual (select and apply), or automated (executed in real time by software). Automated workload optimization can eliminate human error and free IT staff to focus on higher-value initiatives.
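The three comfort levels above amount to a dispatch decision per recommended action. The following is a minimal sketch under assumed names (the `Mode` enum, `handle_action` function, and scaling action string are all illustrative, not a particular product's API).

```python
from enum import Enum
from typing import Callable

class Mode(Enum):
    RECOMMEND = "recommend"   # view only
    MANUAL = "manual"         # operator selects and applies
    AUTOMATED = "automated"   # executed in real time by software

def handle_action(action: str, mode: Mode, execute: Callable[[str], None],
                  approved: bool = False) -> str:
    """Dispatch a scaling recommendation according to the configured comfort level."""
    if mode is Mode.RECOMMEND:
        return f"RECOMMENDED: {action}"        # surfaced for review only
    if mode is Mode.MANUAL and not approved:
        return f"PENDING APPROVAL: {action}"   # waits for an operator to apply
    execute(action)                            # manual-and-approved, or automated
    return f"EXECUTED: {action}"

log = []
print(handle_action("scale out app tier by 2 VMs", Mode.RECOMMEND, log.append))
print(handle_action("scale out app tier by 2 VMs", Mode.AUTOMATED, log.append))
```

The point of the tiered model is that an agency can start in recommend mode, build trust in the analytics, and only then graduate specific action types to full automation.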
Learn about Cisco’s Workload Optimization Manager Solution.
Application Performance Visibility
In order to deliver consistently positive digital experiences, agencies will need to connect end-user experience and application performance to mission outcomes. A solution that can monitor, correlate, analyze, and act on application and mission performance data in real time, regardless of where the application is hosted (on-premises private, hybrid cloud, or off-premises CSP cloud), can enable developers, IT operations, and mission owners to gain the insights needed to make mission-critical and strategic improvements. Application performance monitoring solutions that leverage AI and ML to enable AIOps and cognitive operations can offer automated insights that allow agencies to both avoid mission-impacting performance issues before they occur and perform automated root-cause analysis that reduces Mean Time To Repair (MTTR).
By leveraging workload optimization and application performance visibility solutions, agencies can replace sizing guesswork with real-time analytics and modeling so they know how much infrastructure is needed for applications to keep pace with mission demand. Gaining insights through these solutions will allow agencies to adopt a proactive approach to IT operations and stay focused on end-user experience and mission impact.
Software-defined networking can facilitate the application agility and data center automation required to accelerate cloud adoption. An application-centric infrastructure enables simplified operations, automated network connectivity, consistent policy management, and visibility for multiple on-premises data centers and for public clouds or multicloud environments. This infrastructure also offers agencies the flexibility to move applications seamlessly to any location or cloud while maintaining security and high availability.
Furthermore, an application-centric infrastructure captures mission and user intents and translates them into native policy constructs for applications deployed across various cloud environments. Using a holistic approach, it enforces availability and segmentation for bare-metal, virtualized, containerized, or microservices-based applications deployed across multiple cloud domains. The common policy and operating model can drastically reduce both cost and complexity associated with managing multicloud deployments.
DevSecOps and Containers
The drive toward shorter and more iterative development cycles, with a focus on delivering mission needs, is leading agencies to adopt DevSecOps (development, security, and operations) methodologies that enable development, security, and IT teams to work more closely and collaboratively. For example, the Navy’s transformational architecture Compile to Combat in 24 Hours (C2C24) has established a cloud-enabled Consolidated Afloat Networks and Enterprise Services (CANES) DevSecOps environment for all content providers to use. The DevSecOps environment for hosting shore-based applications in the cloud is under development, with operations to commence in spring 2019. The Navy expects cost and time savings of 15 percent or more as these architectures reach full adoption.
In parallel with DevSecOps models, containers and microservices are being adopted as the building blocks of today’s software development – a preferred path for both new application development and application modernization projects. Containers, which encompass the operating system, libraries, and anything else that the application needs, offer a lightweight, portable way to bundle applications. This isolation brings portability, standardization, and flexibility to development environments; applications are decoupled from the platform, so containers can move from platform to platform or from cloud to cloud without modification to the application. With containers, developers can spend less time debugging and assessing differences between environments and more time on development.
The benefits of cloud increase exponentially when organizations bring together containers and DevSecOps (pairing a lightweight means of virtualizing applications with a methodology that unites siloed IT teams). For organizations making this transition, one of the biggest challenges is maintaining common and consistent environments throughout an application’s life cycle, from development through deployment. To address this challenge, agencies will need hybrid cloud architectures that deploy applications across on-premises and cloud environments in a secure, consistent manner. The supporting hybrid architectures must be tested and validated, and must deliver consistent container clusters both on-premises and in the cloud, leveraging the best attributes of each. Agencies that can extend on-premises capabilities and resources to the cloud, and that can also utilize services and resources from the cloud on-premises, will reduce the burden on their IT teams with respect to people, processes, and skill sets. This in turn will accelerate the application deployment cycle, resulting in faster innovation and increased agility.
Learn about Cisco’s integrated system for Azure Stack, Cisco’s hybrid solution for Kubernetes on AWS, Cisco’s hybrid solution for Kubernetes on Google Cloud Platform, and Cisco’s validated solutions with Docker.
To protect a government agency’s data, infrastructure, and networks from growing digital threats in today’s connected world, agencies must adopt a zero-trust network architecture based on a “verify and never trust” approach. Attackers and malicious insiders will penetrate threat-centric defenses, so the zero-trust network model is centered around one guiding principle: Security must extend throughout the network, not just at the perimeter.
The original tenets of a zero-trust network center around the following elements:
● Eliminate network trust. Assume that all traffic, regardless of location, is threat traffic until it is verified (authorized, inspected, and secured).
● Segment network access. Adopt a least-privileged strategy and strictly enforced controls so users have access only to the resources needed to perform their job.
● Gain visibility and analytics. Continuously inspect and log all traffic both internally and externally, using real-time protection capabilities, to monitor for malicious activity.
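The three tenets above (deny by default, least-privilege segmentation, and continuous logging of every decision) can be sketched as a single authorization check. The role names, resources, and policy table below are hypothetical illustrations, not any agency's actual policy model.

```python
# Minimal zero-trust sketch: no flow is trusted until verified, access is
# scoped per role (least privilege), and every decision is logged for
# visibility and analytics. All names here are illustrative.

SEGMENT_POLICY = {
    # role -> resources that role may reach
    "logistics-analyst": {"supply-db"},
    "maintenance-tech": {"maintenance-db", "telemetry-feed"},
}
audit_log = []

def authorize(user_role: str, resource: str, authenticated: bool) -> bool:
    """All traffic is threat traffic until verified; every decision is logged."""
    allowed = authenticated and resource in SEGMENT_POLICY.get(user_role, set())
    audit_log.append((user_role, resource, allowed))   # continuous visibility
    return allowed

print(authorize("logistics-analyst", "supply-db", authenticated=True))        # → True
print(authorize("logistics-analyst", "maintenance-db", authenticated=True))   # → False
```

Note that the default outcome is denial: an unknown role, an unverified session, or an unlisted resource all fail closed, and the audit log captures denials as well as grants so analytics can flag malicious activity.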
Network segmentation and visibility remain critical, yet users also access workloads and data beyond the network. Agencies must extend the zero-trust approach to their workforce, workloads, and data:
● Zero-trust workforce. Users must be authenticated; their access and privileges must be continuously monitored and governed. Users must also be secured as they interact with the internet.
● Zero-trust workloads. Control must be enforced across the entire application stack, including connections between containers or hypervisors in the cloud.
● Zero-trust data. Data must be secured, managed, classified, and encrypted, both at rest and in transit.
As DoD continues to progress through its digital transformation journey, the importance of mission assurance in the network of networks becomes more and more evident. In an era when data is core to operations, the cloud environment becomes an essential medium of maneuver for data. As with other maneuver force elements, the freedom to operate across the entire spectrum of cloud-hosted resources is essential. That freedom to access and capitalize on data requires confidentiality, integrity, and availability via edge-to-edge threat management and oversight of all interactions.
For military leaders, soldiers, sailors, Marines, and airmen, few things are more important than situational awareness. The U.S. government is working toward a future in which a robust global network enables access to cloud resources so that the armed services can capitalize on AI technologies and the availability of reference data and real-time analysis – enabling our military to become hyper-aware on multiple fronts, expedite information-sharing across theaters, and establish an advantage on the battlefield.
As DoD looks to private, hybrid, and public clouds; software-defined networking; and integration with IoT, it encounters a great opportunity: a chance to implement an integrated cyber platform that delivers a joint information function by providing decision advantage and enabling information effects across multiple domains. Military decisions will become powered by real-time access to data across local edges, private data centers, and public clouds. Such a cyber platform must be a multinetwork architecture that is agile, integrated, and resilient – capable of adapting in a continually changing environment. The network of networks can ensure data has the freedom to operate: to enhance mission assurance with continuous monitoring of the network’s own health; to lock down and remediate threats at machine speed; and to manage routing intuitively based on the need for speed, efficiency, security, and arrival surety. This is possible today with intent-based, zero-trust networking – generating advanced, decision-based effects with data delivered across the spectrum, from the tactical edge to the cloud and back.
For more information please visit cisco.com/go/DOD.