Connecting more people, data, and things to the Internet will help automated processes do more for an enterprise, its suppliers, and its customers.
By Anil Kalbag, Distinguished IT Engineer, Cisco
Contributor: Gerald Silverman, IT Architect, Cisco
Everyday physical objects are being connected to the Internet and given a digital representation, enabling their interaction with people and information systems. This article discusses how business processes and workflows can be enriched with real-time information and deeper visibility into the physical world. Technology enablers for Internet of Everything (IoE)-enriched business processes are also discussed, as are some of the challenges for networks and business operations.
In the 1990s, the advent of the Internet and the World Wide Web revolutionized almost every aspect of our lives. These technologies created new opportunities for human endeavor and new ways of interacting, and they made information widely and easily available around the globe. The Internet and web also revolutionized enterprises and businesses of every size, in every industry. They unleashed novel business models, gave rise to new business processes, improved existing ones, and changed the ways in which businesses interact with each other and with customers.
Today with the Internet of Everything (IoE), we are at the cusp of another such revolution that has the potential to change the way we live, work, and play. This change has begun with the Internet of Things (IoT), the practice of connecting millions of devices and sensors to the Internet. “As these ‘things’ add capabilities like context awareness, increased processing power, and energy independence, and as more people and new types of information are connected, we will quickly enter the Internet of Everything (IoE), where things that were silent will have a voice. Cisco defines IoE as bringing together people, process, data, and things to make networked connections more relevant and valuable than ever before—turning information into actions that create new capabilities, richer experiences, and unprecedented economic opportunity for businesses, individuals, and countries.”
As noted by Robert Metcalfe, the value of a network increases proportionally to the square of the number of users on the network. This relationship is often referred to as Metcalfe’s law or the Network Effect. By enabling millions of devices, sensors, people, business processes, and data to be connected to the Internet, we are creating exponential value and giving rise to unprecedented opportunities, wealth generation, and improvement in the quality of life and experiences.
However, significant challenges need to be addressed before we can realize the promise of the IoE. The millions of things connected to the Internet will produce a deluge of data that needs to be assimilated to generate useful information. In the context of a business, this information should result in coordinated action that generates value for customers. Even in the context of governments and society at large, the data generated from the devices and sensors should lead to meaningful action that produces value.
While the network connectivity and infrastructure aspects of the IoE have received a lot of attention, the question of how business processes will be affected is just beginning to be discussed and researched. This article focuses on how business processes can leverage the IoE. The term business process is used broadly here and is not limited to the processes of a for-profit business. It includes any kind of coordinated activity that produces value and is managed with the goal of improving it and making it more efficient and effective. The term workflow has traditionally been used for coordinated human activity. In this article, business process and workflow are used interchangeably. Likewise, the terms data center and cloud are used interchangeably to describe centralized compute, storage, and network resources.
Business process is defined as “a collection of related, structured activities or tasks that produce a specific service or product (serve a particular goal) for a particular customer or customers.”
The concept of a business process is said to date back to the eighteenth century with Adam Smith’s description of the business process employed in a pin factory. In the early days, humans, often equipped with tools, carried out the various activities in a business process. With the industrial revolution, many of the manufacturing processes were mechanized, with machines performing activities that were formerly carried out by humans. The information age and increasing use of information technology (IT) in business and enterprises of various kinds (including non-profits and government agencies) have led to even more automation and evolution of business processes. However, even today, most business processes involve human workflows, with some IT systems interspersed to execute parts of the processes that are automated.
Business Process as a Corporate Asset
Over time, businesses and enterprises of various kinds have come to view their processes as a valuable asset. In the early 1990s, business process reengineering came to be viewed as a discipline and a management strategy aimed at analyzing business processes and redesigning them with the goal of improving efficiency and productivity, increasing customer satisfaction, reducing costs, and providing business agility. Many enterprises embarked on business process reengineering efforts, and techniques and tools for process improvement, such as Six Sigma, became popular.
Business Process Management
Businesses use their processes to gain competitive advantage and differentiate themselves from competitors. Consequently, rather than perform a one-time effort of business-process reengineering, most businesses now employ business process management (BPM) to “continuously improve business effectiveness and efficiency while striving for innovation, flexibility and technology integration. BPM sees processes as strategic assets of an organization that must be understood, managed, and improved to deliver value-added products and services to clients.” The use of IT in business has been so pervasive that some use the term business technology to refer to the application of information technology to enable business capability and processes.
Business Process Management Systems
Business processes are often embedded in the applications and systems that are used to operate an enterprise. Alternatively, they are reflected in the way these applications and systems are integrated and interact with each other. This arrangement makes the business processes somewhat rigid and costly to change. Modifying a business process often means making code changes, with the associated complexities of release management. These complexities hamper business agility and the ability to rapidly adapt business processes to the changing needs of the marketplace, respond to competitive pressures, and comply with regulatory requirements.
The need to better manage business processes and abstract them out of applications and systems as much as possible has created a market for BPM solutions. The promise of BPM systems is that they help better manage business processes, put them in the hands of the business, and reduce reliance on the IT department for creating and modifying processes.
The BPM system orchestrates activities between people and information systems with the goal of improving and optimizing human-centric business activities. The system focuses on defining, executing, and monitoring long-running business processes involving human activity.
Blind Spots in Business Processes
Modern business processes are complex, involving a wide spectrum of tasks and activities, some of which are carried out by humans, some by machines (e.g., those on a manufacturing shop floor), and some by other physical equipment (e.g., trucks handling logistics). An increasing number of those tasks and activities are automated using information and communications technologies. Added to this complexity is the fact that some parts of a process are not executed by the enterprise. Enterprises of all kinds are increasingly leveraging technology partners, channel partners, contract manufacturers, warehousing and logistics partners, break/fix service partners, and other outside services to handle all or part of a business process. Most enterprises come to view these partners as the extended enterprise and look for ways to have tight integration and collaboration with them.
To be successful and stay competitive, an enterprise (especially a for-profit business) needs full visibility into and control over its business processes from end to end. Today, enterprise BPM systems provide visibility only into activities that are automated using IT and, to some extent, those carried out by humans. Human activity is visible only when it involves interaction with information systems or when a human reports the activity status in the BPM system. Activities carried out by machines and physical equipment have largely been absent from process models and executable versions of the processes.
Gartner uses the term operational technology (OT) to describe the machines, equipment, sensors, actuators, and software used in manufacturing and physical product value creation. The term IT typically describes systems and software for processing data as well as supporting business applications such as Enterprise Resource Planning (ERP) and Customer and Partner Relationship Management (xRM). In most enterprises, OT and IT have existed as separate islands with little to no interaction. (Figure 1)
Figure 1. Separation between Information Technology and Operational Technology in the Internet of Things
Control systems—made up of sensors, meters, measurement instruments, actuators, and control units—have been used to manage operational processes in real time in industries such as manufacturing, energy exploration and distribution, chemical plants, and transportation. However, these control systems have typically used proprietary or specialized protocols, and the span of control and management has been limited to small sets of devices and actuators managed by a control unit. Due to the heterogeneity and lack of standards in operational technology, interoperability is a challenge, with devices and equipment from different vendors unable to communicate with each other or with the enterprise IT systems.
BPM systems have lacked visibility into OT, which has resulted in blind spots in the business process. Often humans need to bridge the gap between OT and IT by updating information systems based on the status of activities and tasks that are carried out by OT. However, this approach leads to delays, lack of sufficient insight into operational technology, and potential for human errors. Additionally, this model cannot scale, and it does not provide real-time insight into the activities carried out by the OT.
Consider order fulfillment at Cisco. When an order is captured in the ERP, a message is sent to the contract manufacturer to build that product based on the configuration specified in the order. At this point Cisco has no visibility into what happens on the shop floor where the product is manufactured until the contract manufacturer updates the order information in the ERP. Likewise, when the product is shipped, the order status is updated only at certain points, for example, when it reaches a Strategic Logistics Center (SLC). No visibility is available to the location of the product when it is in transit, and any delays in transit will only be known when and if someone updates the ERP system. Typically this update involves human intervention.
Now consider a future scenario where the machines and equipment used to build Cisco products are connected to the Internet. Real-time status of shop-floor machines is now available to enterprise information systems. (Figure 2) Failure of a machine is detected automatically and can trigger a new production plan and schedule a maintenance work order.
One of the manufacturing scenarios described by the Industrial Internet Consortium is that of extreme customization where “Raw materials will be programmed with information that it will be part of product X, to be delivered to customer Y. Once the material is in the factory, the material itself records any deviations from the standard process, determines when it’s ‘done,’ and knows how to get to its customer.”
Even after the product leaves the manufacturing site, the IoE enables tracking of the product until it is delivered to the customer. Real-time weather and traffic information is available to the logistics carrier to determine optimal routes for delivery. Changes in a truck’s delivery plans can be communicated immediately to the carriers and, if appropriate, to the customer.
Figure 2. Connecting OT to IT Enables Real-Time Product Tracking from Order to Delivery
A similar scenario exists with the Cisco service supply chain. Some Cisco customers who cannot afford even small service interruptions in their networks purchase premium service levels with a service level agreement (SLA) as stringent as two hours. When a network outage is detected, a replacement part must be delivered to the customer site immediately. Cisco locates parts depots in proximity to customer sites that have premium service levels to enable fast delivery of spare parts and help ensure service restoration within the specified SLA.
However, once the part leaves the depot on a vehicle, little visibility into the part’s real-time location is possible. The vehicle carrying the part could be stuck in traffic or have a breakdown. Real-time traffic information gathered via traffic sensors can help guide the vehicle to an optimal route. In some situations another vehicle may need to be dispatched with the spare parts, from the same or a different depot. Additionally, service engineers must be sent to the customer site to install the part and restore service. Here again, the IoE helps identify engineers in the vicinity who could be dispatched to the customer site.
Over time, the technology stacks used for OT and IT have begun to converge. This convergence has led some to suggest that CIOs now need to broaden their purview to include operational technology in addition to the traditional focus on information technology. The transformational change in operational technology that is brought about by the IoE is that it will be Internet Protocol (IP)-enabled and connected to the Internet, opening a world of possibilities. Following is a list of benefits that will accrue from the IoE:
● These newly IP-enabled “things” can now participate in the Internet, which in the past has been the realm of only humans and information systems.
● Each of these “things” is now uniquely addressable via its IP address.
● Sensors, devices, actuators, and equipment from different vendors can interoperate via the common Internet protocols.
● Businesses, governments, and consumers can get real-time information about and deeper insights into the physical world.
Pushing Process to the Edge
Besides the elimination of blind spots, the IoE will bring about another change in business processes related to where processes are executed. Execution of automated business processes currently takes place primarily in enterprise data centers and more recently in the cloud. The IoE will require decomposition of processes, with parts of processes running close to the physical elements that carry out the tasks and activities. For example, data gathered from sensor networks will need to be analyzed and assimilated locally, with aggregated data and high-level business events propagated to the processes and services running centrally in enterprise data centers or the cloud. In some scenarios, the analysis of sensor data will drive action via actuators in real time, creating a feedback loop and a closed-loop system.
The center of gravity of computing is already shifting from the data center to mobile devices such as smartphones and tablets. From depositing checks into bank accounts to receiving airline boarding passes on mobile devices, physical objects are being given a digital presence, leading to better customer experiences. The IoE will shift computing further to the network edge and closer to the billions of everyday physical objects.
Cisco has coined the term Fog Computing to describe the extension of the cloud paradigm to the network edge, where it enables a new breed of applications and services. Being a cloud close to the ground, fog emphasizes the proximity of compute, storage, and network resources to the sensors and devices that use them. Fog Computing is characterized by: a) low latency and location awareness; b) widespread geographical distribution; c) mobility; d) a very large number of nodes; e) a predominant role of wireless access; f) a strong presence of streaming and real-time applications; and g) heterogeneity.
Cisco products such as the Cisco® 1000 Series Connected Grid Routers are ruggedized to function in harsh environments and powered with Cisco IOX software, which offers a guest operating system that is ideal for running third-party applications at the network edge. (Figure 3) The guest operating system also enables management of these apps via the IOX platform. This software design opens the possibility of running simple application logic at the network edge, close to the physical objects.
Figure 3. Ruggedized Cisco 1000 Series Connected Grid Router with Guest Operating System
As mentioned earlier, the ability to do computing at the network edge is crucial to the IoE. Not every piece of data needs to make the multi-hop trip from devices to the data center. An IoE application consists of components that are distributed across the IoE stack, from the cloud and data center to the fog and network edge. The Cisco IOX platform raises the intriguing possibility of leveraging powerful integration technologies traditionally employed in the enterprise closer to the network edge.
A leading enterprise service bus (ESB) vendor has launched this type of project. The focus of the initiative is to enable the ESB product to run on lightweight platforms such as Raspberry Pi and support IoE-related protocols such as ZigBee and Message Queuing Telemetry Transport (MQTT).
We can envision a scenario where the initial filtering, analysis, and preprocessing of sensor events are implemented using ESB technology located near the network edge. These functions will produce actionable business events that can be consumed by business processes running in the cloud. (Figure 4)
Figure 4. IoE Application Enabled on the Network Edge and in the Cloud
For example, a Cisco device supporting the IOX platform is connected to sensors that monitor the temperature of a cargo container carrying perishable goods. When the temperature in the container exceeds a designated value for a sustained period of time, a remediation process that is hosted in the data center needs to be launched.
Figure 5 represents an ESB integration flow that would execute within an edge or fog device. The example flow receives raw temperature values from the sensors and feeds them to a Complex Event Processing (CEP) component. This component computes the average temperature over a sliding window of values. If the average temperature exceeds a critical value, an event that includes the temperature and other salient information, such as the location of the cargo container, is published to an MQTT queue.
Figure 5. Example of Using Integration Technology to Support Event Processing at the Network Edge
A component residing in the cloud or data center subscribes to the MQTT queue. Upon receipt of the event, a business process (implemented using a BPM system) is launched to address the anomalous temperature conditions.
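The edge-side portion of this flow can be sketched in Python. This is a minimal sketch, not Cisco's implementation: the window size, threshold, container identifier, and event format are illustrative assumptions, and returning the event stands in for publishing it to the MQTT queue.

```python
from collections import deque

WINDOW_SIZE = 5        # readings per sliding window (assumption)
CRITICAL_AVG_C = 8.0   # critical average temperature, Celsius (assumption)

class TemperatureMonitor:
    """Edge-side CEP sketch: keep a sliding window of raw readings and
    emit a business event when the window average exceeds the critical value."""

    def __init__(self, container_id, location):
        self.container_id = container_id
        self.location = location
        self.readings = deque(maxlen=WINDOW_SIZE)

    def on_reading(self, temp_c):
        """Feed one raw sensor reading; return a business event or None."""
        self.readings.append(temp_c)
        if len(self.readings) < WINDOW_SIZE:
            return None                        # window not yet full
        avg = sum(self.readings) / WINDOW_SIZE
        if avg > CRITICAL_AVG_C:
            # In a real deployment this event would be published to an
            # MQTT queue for the cloud-side remediation process.
            return {"event": "TemperatureExceeded",
                    "container": self.container_id,
                    "location": self.location,
                    "avg_temp_c": round(avg, 1)}
        return None

monitor = TemperatureMonitor("C-1042", "Port of Oakland")
events = []
for temp in [6.0, 6.5, 7.0, 9.5, 12.0, 13.0]:
    event = monitor.on_reading(temp)
    if event:
        events.append(event)
```

Note that raw readings never leave the edge device; only the aggregated business event does, which is the essential division of labor between fog and cloud described above.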
Technology Enablers for IoE-enriched Processes
Several technology enablers such as IPv6, mobility, and Fog Computing are fueling the IoE. Some of these enablers focus on network connectivity, communication protocols, and infrastructure components. This section discusses technology enablers for IoE-enriched business processes. Many of these enablers have existed for some time but may need to evolve for the IoE.
Big Data
With billions of everyday physical objects connected to the Internet, able to identify themselves and communicate with each other, the question is no longer how to get data. Instead, the question is how to convert the copious amounts of raw data produced by these billions of network-enabled objects into useful information. From millions of tweets generated by people through their smartphones, to data from smart meters, to weather and climate data captured by sensors, identifying patterns and events of interest in this vast amount of data requires new technologies.
This is not just a data warehousing and mining problem. In some use cases, the data stream needs to be analyzed in real time so that appropriate action may be taken based on the information gleaned from the data.
Big Data technologies that enable analysis of large data sets, often in the range of petabytes, have been key to fueling the IoE revolution. They provide the capability to process and analyze these large data sets and data streams to identify events of interest.
Complex Event Processing
While individual events can be of interest, the ability to correlate and detect patterns of events can yield insight that is not discernable by observing individual events. Consider a home security system in which a door sensor generates a signal indicating that the front door was opened. If the security system is in its armed state, this signal could indicate an intrusion. The event would trigger a notification to the operations center of the company providing the home security service and police would be dispatched. However, if the system was disarmed within seconds of opening the door, it would imply that an authorized person entered. The ability to correlate the arming, disarming, and door-open events and the sequence in which they happen is critical to deciding what action needs to be taken.
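The correlation logic in the home security example can be sketched as follows. The 30-second grace period, event names, and batch classification over a recorded event sequence are assumptions made for illustration; a production system would evaluate events as they arrive.

```python
GRACE_SECONDS = 30   # time allowed to disarm after the door opens (assumption)

def classify(events):
    """Classify a time-ordered list of (timestamp, event) pairs.
    A door opening while armed indicates an intrusion unless the
    system is disarmed within the grace period (authorized entry)."""
    armed = False
    door_opened_at = None
    for ts, ev in events:
        if ev == "armed":
            armed = True
        elif ev == "disarmed":
            if door_opened_at is not None and ts - door_opened_at <= GRACE_SECONDS:
                door_opened_at = None        # authorized person entered
            armed = False
        elif ev == "door_open" and armed:
            door_opened_at = ts
    return "intrusion" if door_opened_at is not None else "normal"

# Authorized entry: the system is disarmed 10 seconds after the door opens.
authorized = classify([(0, "armed"), (100, "door_open"), (110, "disarmed")])
# Intrusion: the door opens while armed and no disarm follows in time.
intrusion = classify([(0, "armed"), (100, "door_open")])
```

The point of the example is that no single event is decisive; only the sequence armed, door-open, and (the presence or absence of) a timely disarm determines the action.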
As another example, consider an eldercare service, such as the one offered by MyLively.com, in which sensors are placed around the house of an elderly person living alone. Sensors on drawers, doors, the refrigerator, TV, medicine cabinet, etc. monitor the daily routine of the person. (Figure 6) These events are then analyzed to detect unusual event patterns. For example, if the bathroom door closes and does not open again for a significant length of time, it could mean the person has slipped and fallen in the bathtub and needs help.
Figure 6. Sensors Placed on Everyday Objects to Monitor Activity (Image source: www.mylively.com)
Wikipedia defines complex event processing (CEP) as a method for event processing that “combines data from multiple sources to infer events or patterns that suggest more complicated circumstances. The goal of complex event processing is to identify meaningful events.” CEP provides the capability to detect event patterns, aggregate and filter events, model event hierarchies, abstract events into higher level events, and detect relationships between events.
While CEP has been traditionally applied to fraud detection and stock trading, it is a key enabler for IoE-enriched business processes. It will support complex analysis of data generated by Internet-enabled physical objects, allowing them to become active participants in the overall business process by influencing decisions, triggering activities, and modifying the workflows based on changes in the real-world environment.
Service-Oriented Architecture
The OASIS Reference Model for Service Oriented Architecture defines a service-oriented architecture (SOA) as “a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains.” It also defines a service as “a mechanism to enable access to one or more capabilities, where the access is provided using a prescribed interface and is exercised consistent with constraints and policies as specified by the service description.” Exposing physical objects as services provides a relevant abstraction of their capabilities, enabling them to participate in SOA and business processes.
Business processes are orchestrations and choreographies of services, typically web or representational state transfer (RESTful) services. The term orchestration is used when a central controller, such as a Business Process Execution Language (BPEL) engine, is involved and usually means all the orchestrated services are under a single ownership domain. In contrast, choreography is used to describe a peer-to-peer model where services participating in the choreography are typically defined under multiple ownership domains. In the case of IoE, orchestration will typically be used at the network edge where services provided by multiple sensors and actuators are defined under the same ownership domain. Choreography will be used between local services at the network edge and global, higher-level centralized services in the data center and cloud.
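A minimal illustration of edge orchestration, with all service names invented for the example: a central controller invokes a local sensor service and a local actuator service in a fixed sequence, under a single ownership domain, which a BPEL engine would normally express declaratively.

```python
def read_temperature():
    """Stand-in for a local sensor service at the network edge."""
    return 9.5   # illustrative fixed reading, Celsius

def start_cooling():
    """Stand-in for a local actuator service."""
    return "cooling_started"

def orchestrate(threshold_c=8.0):
    """Central controller: read the sensor, decide, invoke the actuator.
    All orchestrated services sit in one ownership domain, as in
    orchestration at the network edge."""
    temp = read_temperature()
    if temp > threshold_c:
        return start_cooling()
    return "no_action"

outcome = orchestrate()
```

Choreography, by contrast, would have no such central controller: the edge service would simply publish its aggregated event, and the cloud-side service would react to it on its own terms.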
Through its key principles of encapsulation, discovery, and loose coupling, SOA enables development of large-scale, adaptive, and manageable systems that integrate functionality across ownership boundaries, making it a suitable paradigm for IoE-enriched processes.
APIs and API Management
The mobile revolution has created what has come to be referred to as the App Economy. Mobile devices connected to the Internet are providing experiences that surpass those provided by the web. Today these mobile devices are predominantly smartphones and tablets. However, we are already beginning to see cars and devices such as thermostats, smoke and CO detectors, smart switches, and wearables (e.g., Google Glass) being connected to the Internet. Services running in the cloud and corporate data centers are being exposed to mobile apps via Application Programming Interfaces (APIs). Independent Software Vendors (ISVs) are creating innovative apps that leverage the APIs, and in turn, API providers are able to collect valuable information about usage patterns.
Numerous IoT APIs are popping up, enabling devices to communicate with each other and with information systems, which in turn, enables them to participate in business processes and workflows. Examples of IoT APIs include:
● Smart Object API
● ThingSpeak open source API for meaningful connections between things and people
● OGC SensorThings API
● SkyNet API
The proliferation of APIs has resulted in the need for API management. Developers need the ability to find APIs and understand how to use them. Apps must be authorized to access APIs, which may involve distribution of keys and tokens. APIs need to be versioned so that they can evolve without breaking apps that use older versions. Access to APIs needs to be controlled so that requests can be throttled to prevent the API from being overloaded. Information about API usage needs to be collected and analyzed. As the IoE multiplies the number of APIs, API management becomes imperative.
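The throttling function mentioned above is commonly implemented as a token bucket. The sketch below uses illustrative rates; a real API gateway would also key buckets by client credential.

```python
class TokenBucket:
    """Throttle API requests: each request consumes one token, and
    tokens refill at a fixed rate up to a burst capacity."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec      # sustained request rate
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = 0.0               # time of the previous request

    def allow(self, now):
        """Return True if a request arriving at time `now` is admitted."""
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=1, capacity=2)
# Two requests at t=0 fit within the burst; the third is throttled;
# by t=1 a token has refilled and the fourth request is admitted.
results = [bucket.allow(0.0), bucket.allow(0.0),
           bucket.allow(0.0), bucket.allow(1.0)]
```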
Enterprise Service Bus
An Enterprise Service Bus (ESB) is defined in Wikipedia as “a software architecture model used for designing and implementing communication between mutually interacting software applications in a service-oriented architecture (SOA).” It was inspired by the bus concept in computer hardware, where modules connect to a common bus and data flows between modules via the bus. The ESB provides intelligent routing of messages between services as well as commodity services for message transformation, event handling, message queuing and sequencing, and protocol conversion.
Events in the physical world often take place in an unpredictable manner. Also, the services related to the physical world need an orchestration mechanism that can scale to handle large volumes of messages in real time. An ESB is characterized by the loosely coupled, event-driven, adaptive nature of the interaction between services. This characterization makes an ESB ideal for composing adaptive choreography based on situational awareness. It complements a BPM system, and an ESB is often used in conjunction with a BPM system for handling system-to-system interaction, especially where large volumes of real-time message exchanges are in play.
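The intelligent routing role of an ESB can be illustrated with a content-based router. The route names, message fields, and predicates are invented for the example; a commercial ESB expresses the same idea through configurable routing rules.

```python
def route(message, routes):
    """Content-based routing: dispatch a message to every destination
    whose predicate matches, decoupling producers from consumers."""
    return [name for name, predicate in routes if predicate(message)]

# Illustrative routing table: destination name -> matching rule.
ROUTES = [
    ("maintenance", lambda m: m.get("type") == "machine_fault"),
    ("logistics",   lambda m: m.get("type") == "shipment_update"),
    ("audit",       lambda m: m.get("severity", 0) >= 3),
]

fault = {"type": "machine_fault", "severity": 4}
destinations = route(fault, ROUTES)
```

A severe machine fault is delivered to both the maintenance and audit destinations; neither the sender nor the receivers need to know about each other, which is the loose coupling the ESB provides.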
If-This-Then-That
If-This-Then-That (IFTTT) is a service that lets an application developer rapidly and easily connect services. The services are referred to as Channels and are the building blocks of IFTTT. Recipes are simple conditional statements of the form “if this then that.” The “this” part of a recipe is called a Trigger, and the “that” part is an Action. Triggers are essentially events that cause some action to be taken. Integration of devices and sensors enables recipes to interact with the physical world. Channels for the Belkin WeMo switch and SmartThings are already available.
In a way, recipes are miniature processes with a single activity or action. It is conceivable that an action from one recipe could serve as a trigger for another recipe, creating a chain of value-generating actions. They could also trigger more complex business processes.
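The chaining idea can be sketched as follows. This is not IFTTT's implementation; the engine, event names, and the convention that an action may emit a follow-up event are assumptions made to show how one recipe's action can become another recipe's trigger.

```python
class RecipeEngine:
    """Toy 'if this then that' engine: each recipe maps a trigger
    event to an action, and an action may emit a new event, which
    chains into the next matching recipe."""

    def __init__(self):
        self.recipes = {}   # trigger event -> action function

    def on(self, trigger, action):
        self.recipes[trigger] = action

    def fire(self, event):
        """Process an event, following the chain of resulting events."""
        log = []
        while event in self.recipes:
            log.append(event)
            event = self.recipes[event](event)  # action may emit an event
        return log

engine = RecipeEngine()
# Recipe 1: when the WeMo switch turns on, adjust the lights
# (the action emits a follow-up event). Recipe 2 consumes that event.
engine.on("switch_on", lambda e: "lights_adjusted")
engine.on("lights_adjusted", lambda e: None)   # terminal action
chain = engine.fire("switch_on")
```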
Message Queuing Telemetry Transport
Message Queuing Telemetry Transport (MQTT) is a lightweight publish/subscribe connectivity protocol for use on embedded platforms. It was designed for applications that run on resource-constrained devices where power or network bandwidth may be limited and/or a small code footprint is required. It has been applied in a diverse set of IoE use cases ranging from home automation to healthcare. The Eclipse Paho project provides an open-source implementation of MQTT.
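The publish/subscribe pattern at the heart of MQTT can be shown with a tiny in-process stand-in. This is not the MQTT protocol itself (a real client, such as Eclipse Paho's, talks to a broker over the network), and the topic name is an assumption; the sketch only illustrates how topics decouple publishers from subscribers.

```python
class MiniBroker:
    """Toy publish/subscribe broker: publishers and subscribers
    know only the topic name, never each other."""

    def __init__(self):
        self.subscribers = {}   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every subscriber of the topic.
        for callback in self.subscribers.get(topic, []):
            callback(payload)

broker = MiniBroker()
received = []
# Cloud-side process subscribes to the temperature topic...
broker.subscribe("cargo/temperature", received.append)
# ...and the edge device publishes an aggregated business event.
broker.publish("cargo/temperature", {"avg_temp_c": 9.6})
```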
Challenges
The scale and complexity of the IoE pose several challenges in everything from network connectivity and operational management to security, law, and regulation. This section focuses on some of the challenges that are specific to the IoE in the area of business processes.
Process Modeling and Execution
Most current business process management systems are geared toward defining static processes and workflows. While they may contain decision points, branching, and merging, the various paths that a workflow may take are statically defined and few in number. Likewise, while some BPM systems handle events, they do so only at predefined points in the process. In the real world, events may happen at any time. The IoE will require dynamic processes whose workflows can be altered based on real-world events, whenever they occur.
As discussed earlier, the IoE will require some process steps to be executed at the network edge close to the real-world objects involved, while other steps will be executed in the data center or cloud. Process modelers will need training on how to model those distributed processes, and BPM systems will need to evolve to handle highly distributed and dynamic processes.
Dealing with Unreliable Data and Devices
Real-world physical objects are resource constrained and are predominantly connected over wireless links of unpredictable quality, known as low-power and lossy networks (LLNs). Processes that include IoE events will need to deal with unreliable data. The wrapper that presents a real-world physical object as a service will need to assign a quality measure to the data gathered from the object, and in turn, the business process in which the IoE service participates will need the ability to assimilate that quality measure.
Besides the quality of the data being an issue, the devices themselves may be unreliable and may fail. Sometimes it may be difficult to even detect whether the device has failed. For example, consider the warning light in a car that should come on if the engine overheats. If the light is not on, it could mean the engine is operating in the normal temperature range, or it could mean the sensor or the light is defective.
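One simple way to realize the quality measure discussed above is to score each reading by its age, so that a sensor that has gone silent degrades toward zero quality instead of being mistaken for a healthy one reporting a constant value. The linear decay rule and thresholds are illustrative assumptions.

```python
def quality(reading_age_sec, max_age_sec=60):
    """Quality in [0, 1]: 1.0 for a fresh reading, decaying linearly
    to 0.0 once the reading is older than max_age_sec. A silent or
    failed sensor thus yields low quality rather than presenting a
    stale value as current."""
    if reading_age_sec >= max_age_sec:
        return 0.0
    return 1.0 - reading_age_sec / max_age_sec

def assimilate(value, age_sec, min_quality=0.5):
    """Process step consuming the reading: act on the value only if
    its quality is sufficient, otherwise flag the sensor for
    inspection instead of acting on unreliable data."""
    if quality(age_sec) >= min_quality:
        return ("use", value)
    return ("inspect_sensor", None)

fresh = assimilate(22.5, age_sec=10)    # recent reading, high quality
stale = assimilate(22.5, age_sec=120)   # silent sensor, zero quality
```

The same pattern answers the warning-light ambiguity: absence of a signal is treated not as "all is well" but as declining confidence that eventually triggers an inspection.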
Event Definition and Taxonomy
Events modeled in business processes today pertain to activities performed by people, such as Document Approved, or to those generated by information systems, such as Order Booked. The IoE will entail a whole range of real-world events that business modelers may not be familiar with, for example, a change in the acidity of wine in the making. Defining meaningful events that lead to appropriate action will require individuals with domain knowledge. Additionally, the need for complex event processing is likely to increase significantly. Because the IoE is a relatively new discipline, people will be needed who have the skills to identify event patterns and correlations and to define how lower-level events aggregate into higher-level composite events.
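To make the winemaking example concrete, a complex-event rule might aggregate a window of low-level pH readings into a single higher-level composite event. The window size, threshold, and event name below are illustrative assumptions, not an actual oenological rule.

```python
"""Illustrative complex-event processing rule: several consecutive
acidity (pH) readings that barely change are aggregated into one
higher-level 'fermentation_stalled' composite event. The window size
and threshold are assumptions for the sketch."""

from collections import deque


def detect_fermentation_stall(ph_events, window=3, threshold=0.05):
    # ph_events: iterable of (timestamp, ph) low-level sensor events.
    recent = deque(maxlen=window)
    composites = []
    for t, ph in ph_events:
        recent.append(ph)
        if len(recent) == window and max(recent) - min(recent) < threshold:
            composites.append(("fermentation_stalled", t))
    return composites
```

The business process then subscribes only to the composite event, not to the raw sensor stream.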
Besides the technical challenges, using information gathered from real-world physical objects in business processes is likely to raise privacy concerns. Just because something is technologically possible does not mean that it will be accepted by society. Today, auto insurance quotes are generated by taking into consideration several factors, such as the ZIP code of the insured party, the commuting distance to work, and the age of the vehicle's primary user. However, with smart keys it is possible to detect which key, and thus which person, is operating the vehicle, and GPS devices can record where it has been driven. Car speed, tire pressure, and other data of interest in the event of an accident can also be tracked. While insurance companies may like to get this data, vehicle owners may not be willing to share it due to privacy concerns.
Even when privacy is not a concern, user experience should be taken into consideration. Location-based, targeted advertisements and offers sent as push notifications to mobile devices can be useful. However, if not done appropriately, they can also be annoying and may prompt individuals to turn off the location service on their mobile device.
The interdependence between software development and IT operations brought about by the DevOps approach becomes even more critical in the development and operation of IoE applications and environments. Best practices in continuous integration and continuous delivery (CI/CD) and release management that are currently required for on-premises, cloud, and hybrid enterprise applications must now extend to the network edge. Dependency management and the synchronization of software releases must be coordinated across the entire IoE stack. The use of container and configuration management technologies will become critical to the enablement of IoE applications, with a concomitant need for automation and orchestration. Monitoring the health of an IoE application will require end-to-end visibility into application and system components that reside in the cloud and on the edge, as well as in the multitude of sensors that feed those components.
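End-to-end health monitoring across those tiers implies some roll-up logic that maps component states to one application status. The sketch below is a minimal illustration; the tier names, state vocabulary, and roll-up rule are all assumptions rather than a particular monitoring product's behavior.

```python
"""Sketch of end-to-end health roll-up for an IoE application whose
components span cloud, edge, and sensor tiers. Tier names and the
state vocabulary ('ok', 'degraded', 'down') are illustrative."""


def roll_up_health(statuses):
    # statuses: {component_name: (tier, state)}
    # Returns the worst overall state and the tiers responsible for it.
    rank = {"ok": 0, "degraded": 1, "down": 2}
    worst, culprits = "ok", []
    for name, (tier, state) in statuses.items():
        if rank[state] > rank[worst]:
            worst, culprits = state, [tier]
        elif state == worst != "ok" and tier not in culprits:
            culprits.append(tier)
    return worst, culprits
```

An operations dashboard could use such a roll-up to show at a glance whether a fault lies in the cloud, at the edge, or in the sensor population.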
As discussed earlier, APIs are a key enabling technology for the IoE. Northbound APIs residing on numerous edge devices will need to be brought into the API ecosystem. An API gateway or proxy typically mediates API policy enforcement and metrics collection, but hosting the gateway on an edge device, closer to the implementation, may not be feasible due to resource constraints. Moreover, instances of a single API may be running on numerous edge devices. These differences will pose a challenge for API owners who need to manage the lifecycle of an API across many devices, and for application developers who need to integrate information from the APIs. Proven integration platforms such as an enterprise service bus (ESB) can play a role in such scenarios.
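The mediation role a gateway plays can be sketched in a few lines: enforce a policy and record metrics before forwarding a call to whichever edge device hosts the API instance. Everything here (the key check as the "policy," the path names, the backend callables) is an illustrative assumption, not a real gateway product's interface.

```python
"""Minimal sketch of gateway-style API mediation: enforce a policy
(here, a simple API-key check standing in for richer policies) and
collect per-path call metrics before dispatching to a backend on an
edge device. All names are illustrative."""

from collections import Counter


class ApiGateway:
    def __init__(self, valid_keys, backends):
        self.valid_keys = valid_keys
        self.backends = backends      # {path: callable hosted on an edge device}
        self.metrics = Counter()      # per-path call counts

    def handle(self, path, api_key):
        self.metrics[path] += 1       # metrics collected even for rejected calls
        if api_key not in self.valid_keys:
            return 401, "unauthorized"
        handler = self.backends.get(path)
        if handler is None:
            return 404, "not found"
        return 200, handler()
```

The resource-constraint issue noted above is visible even in this sketch: the key store, metrics, and routing table all consume memory that a small edge device may not have, which is why the gateway often sits one hop away from the device.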
The IoE is giving everyday physical objects a digital representation, enabling them to connect to the Internet. This connectivity is helping enrich business processes by providing real-time information, deeper visibility, and real-world awareness about an object’s status and activity. Many technologies and products that form the building blocks for IoE-enabled processes are already available, and more are being developed. As with the web, enterprises that take advantage of IoE will be able to provide a better user experience and value-added services, as well as gain competitive advantage and differentiation. However, the IoE is a new phenomenon, and challenges need to be addressed. The potential of IoE to enrich business processes should be part of an enterprise’s overall IoE strategy.
For More Information
● Internet of Everything
● To read additional Cisco IT case studies on a variety of business solutions, visit Cisco on Cisco: Inside Cisco IT.
This publication describes how Cisco has benefited from the deployment of its own products. Many factors may have contributed to the results and benefits described; Cisco does not guarantee comparable results elsewhere.
CISCO PROVIDES THIS PUBLICATION AS IS WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Some jurisdictions do not allow disclaimer of express or implied warranties, therefore this disclaimer may not apply to you.