As mobile and other connected devices proliferate, edge computing architecture will provide new pathways for data transport and an alternative to cloud-based networks.
As the number of mobile devices and connected sensors accelerates, network architectures must evolve. These devices generate data volumes and require data velocities that cloud computing architecture can't accommodate. Edge computing pushes these network connections away from cloud-based, centralized resources to distributed models of edge and fog computing, helping to sustain the volume and velocity of data.
Edge computing takes resources such as traditional computation, networking, storage and compute accelerators that often reside in the cloud and moves them closer to Internet of Things (IoT)-enabled or mobile endpoints. This architecture distributes intelligence throughout the IoT network, boosting performance, bandwidth, efficiency, security and reliability.
This article discusses some of the challenges associated with deploying edge computing architecture (or fog computing), and techniques to overcome these challenges.
No doubt: Edge computing architecture brings speed, performance and security. But IT departments should consider that, just as not every application or process should live in the cloud, not every app or process is best suited to the edge.
An initial challenge is to determine your business objectives. To do so, identify tasks the organization and its users need to achieve at the edge. Healthcare practitioners taking patient vitals in remote areas might need certain capabilities. Programmers developing and testing virtual reality features for a new videogame might need others. Smart cities may have significant capacity and latency requirements. If a company remote-monitors IoT sensors, the process may be constrained by bandwidth. But in all cases, performance and speed of data transport are critical.
Your mission could be to create or enhance activities in smart cities, transportation, factories or other vertical markets.
Next, consider the use cases for these verticals, including surveillance, self-driving cars or predictive maintenance for machines. All these real-world applications require rapid, real-time, high-volume data. Then, select specific applications within these use cases. An app might detect abandoned packages, monitor renewable energy sources on a power grid or discover impending bearing failures. Knowing the specific taxonomy of your selected verticals, use cases and applications will help you develop detailed requirements.
The next challenge is to develop network architecture and element partitioning to meet the requirements of users and applications. It is important to understand which portions of the system will run in the cloud and which portions will execute at the edge.
In general, a given quantity of CPU power or storage is cheaper the closer it resides to the cloud, but its latency, bandwidth, reliability and security properties improve as processing moves closer to the edge.
Edge computing architecture involves a hierarchy of levels (for example, regional, neighborhood, street- and building-level nodes in a smart city), and each level may have numerous peer nodes sharing the load. Partitioning can map applications up and down the hierarchy and distribute them across the multiple fog nodes on each level.
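As a minimal sketch of how such partitioning might work, the mapping of applications onto hierarchy levels can be driven by latency requirements. The level names follow the smart-city example above; the thresholds, function and application names are purely illustrative, not drawn from any standard.

```python
# Hypothetical sketch: map applications onto a smart-city fog hierarchy
# by latency requirement. Levels run from the edge (building) up toward
# the cloud (regional); thresholds are illustrative only.
LEVELS = [
    ("building", 10),       # max tolerable round-trip latency in ms
    ("street", 50),
    ("neighborhood", 200),
    ("regional", 1000),
]

def place(app_name, max_latency_ms):
    """Return the highest (cheapest) level that still meets latency."""
    # Walk from the cloud side down; stop at the first level whose
    # latency budget the application can tolerate.
    for level, limit in reversed(LEVELS):
        if max_latency_ms >= limit:
            return level
    # The most latency-sensitive apps land at the building level.
    return LEVELS[0][0]

print(place("traffic-signal-control", 10))  # -> building
print(place("video-archival", 1000))        # -> regional
```

A real partitioner would also weigh bandwidth, node capacity and data-residency constraints, but the cost-versus-latency tradeoff described above is the core of the decision.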
Several standards bodies are at work perfecting fog and edge computing architecture, including the OpenFog Consortium. [Editor’s note: Cisco has several leadership roles represented in the consortium.] The ETSI Multi-access Edge Computing initiative provides an excellent edge perspective. Finally, the Industrial Internet Consortium edge computing task group has studied these areas extensively. It’s worthwhile to consult these resources as well as experts as you refine your IoT network architectural models.
You'll need to choose the individual modular components of fog nodes. Your choice of hardware may dictate performance levels, physical size, energy use and programming model at the edge.
Some fog nodes allow different complements of hardware modules to be equipped, depending on application needs. For example, different types of CPUs, accelerators (including field-programmable gate arrays, general-purpose graphics processing units, or tensor processing units), and storage devices (including large RAM for in-memory databases, flash arrays or rotating disk drives) can be installed in modular fog, or edge, nodes. I/O interfaces between IoT-enabled things and the edge, among edge nodes, and between edge nodes and the cloud offer many options, including licensed or unlicensed wireless links and copper or fiber-wired links.
Modular software infrastructure components can be selected, too, including security packages, management packages, databases, analytics algorithms and protocol stacks. Finally, the application-specific hardware and software customized for the selected applications can be installed on the edge network. Doing so would, for example, enable application-specific interfaces. Interoperability among these modular components provides the network operator a choice of suppliers.
Security is perhaps the most difficult challenge facing edge computing architecture and deployments. On the one hand, keeping the data nearer the sensors and actuators where it is created and used reduces the number of attack vectors. On the other hand, fog nodes, because of their remote nature, can be subject to many types of network-based and physical assaults. Hardware roots of trust, trusted platform modules and trusted execution environments will be key features of fog nodes, building a solid base of security and extending it all the way up the stack to the applications.
Management may be the second most important challenge facing fog deployments. By 2030, the number of connected IoT devices is expected to reach 500 billion. That many devices could overwhelm the edge nodes available today.
Each fog node may process the traffic from somewhere between 100 and 10,000 connected IoT devices, meaning the next decade could require the installation of between 50 million and 5 billion edge, or fog, nodes. This will be an extreme challenge for those responsible for the installation, configuration and ongoing management of IoT networks. The management systems that support fog computing need to be advanced, and all management operations must be highly automated to support the kind of fog network growth anticipated.
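The back-of-the-envelope arithmetic behind that range can be made explicit, using the 500 billion device figure and the 100-to-10,000 devices-per-node assumption above:

```python
# Reproduce the article's estimate: 500 billion connected IoT devices
# by 2030, with each fog node serving 100 to 10,000 devices.
devices = 500e9

nodes_high = devices / 100     # every node lightly loaded
nodes_low = devices / 10_000   # every node heavily loaded

print(f"{nodes_low:,.0f} to {nodes_high:,.0f} fog nodes")
# -> 50,000,000 to 5,000,000,000 fog nodes
```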
Orchestration is also important to management; orchestration enables edge and fog networks to dynamically configure, monitor and reapportion their various resources and software packages. Fog orchestration needs to be aware of the hierarchical nature of fog nodes as well as peer-to-peer capabilities, with the ability to dynamically assign and rebalance workloads and decide where the various portions of the application software run.
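The peer-level rebalancing an orchestrator performs can be sketched as a simple greedy routine: repeatedly move the smallest workload off the busiest node, stopping once a move would no longer lower the peak load. This is an illustrative toy, assuming interchangeable workloads and ignoring migration cost; real orchestrators are far more sophisticated.

```python
# Hypothetical sketch of peer-level workload rebalancing among fog
# nodes at one hierarchy level. Node names and loads are illustrative.
def rebalance(loads):
    """loads: {node: [workload_sizes]}; mutated in place toward balance."""
    def total(node):
        return sum(loads[node])

    while True:
        busiest = max(loads, key=total)
        idlest = min(loads, key=total)
        if not loads[busiest]:
            break
        w = min(loads[busiest])  # smallest movable workload
        # Only move if doing so actually lowers the peak load;
        # otherwise we are as balanced as this greedy scheme gets.
        if total(idlest) + w >= total(busiest):
            break
        loads[busiest].remove(w)
        loads[idlest].append(w)
    return loads

fog = {"node-a": [5, 5, 5, 5], "node-b": [2], "node-c": [3]}
rebalance(fog)
print({n: sum(w) for n, w in fog.items()})
# -> {'node-a': 10, 'node-b': 7, 'node-c': 8}
```

Because every accepted move strictly reduces the spread of load across peers, the loop always terminates.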
Initially, edge and fog networks will be deployed with only a vague understanding of which applications will run on them and the resource requirements of future applications. Therefore, fog or edge computing deployments should be designed to scale in multiple dimensions. One dimension is performance, such as the ability to retrofit processors, upgrade link bandwidths, or add nodes as performance requirements grow. Another dimension is reliability. Fog networks should support various redundancy schemes and should be able to scale to five nines of availability or better, which is required for life-critical services.
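The reliability dimension can be quantified. Under the standard assumption of independent failures, a service backed by n redundant peers is down only when all n are down, so its availability is 1 - (1 - a)^n. The sketch below shows why modest per-node availability can still reach the five nines cited above; the function name is ours.

```python
# Availability of a service backed by n redundant fog nodes, assuming
# each node is independently available a fraction `per_node` of the time.
def redundant_availability(per_node, n):
    # The service fails only if every redundant node fails at once.
    return 1 - (1 - per_node) ** n

# Three peers at "two nines" (99%) each already exceed five nines:
print(f"{redundant_availability(0.99, 3):.6f}")  # -> 0.999999
```

This is why fog networks that support redundancy schemes across peer nodes can deliver life-critical availability even when individual nodes are far less dependable.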
Ultimately, fog deployment involves a series of engineering challenges that need well-balanced solutions. Some of the challenges described above suggest engineering tradeoffs in dimensions such as architectural complexity, performance, security, reliability, time to service and total-lifecycle cost. Successful fog deployments will carefully consider and address these challenges.