Set Up Crosswork Data Gateway

This section contains the following topics:

  • Set Up Controller

  • Set Up Output Channels

  • Crosswork Data Gateway Authentication and Bootstrap

Set Up Controller

The Controller controls the collection of data and its distribution to output destinations. It also instructs Crosswork Data Gateway where to download functional images from.

The Controller is hosted as a separate software entity and controls the functionality of one or more Crosswork Data Gateway instances. Its implementation varies with the use case:

  • In the Crosswork Cloud use case, the Controller functionality is subsumed within the Cloud application business logic.

  • In the Customer-hosted solution use case, the Controller is a custom HTTPS server provided by you, which provides a rudimentary means of managing images on Crosswork Data Gateway and specifying static collection jobs.

The Controller requirements are described in Cisco Crosswork Data Gateway 1.0 Installation Guide. It is important to meet these requirements for successful deployment and functioning of Crosswork Data Gateway. The Controller must be ready for integration with Crosswork Data Gateway.


Note

  • Developing and integrating a Controller with Crosswork Data Gateway is out of the scope of this document; however, Cisco provides guidelines on how it should be implemented.

  • A Crosswork Data Gateway instance can be integrated with only one Controller at a time.


Set Up Output Channels

Crosswork Data Gateway allows you to distribute collected data to either a Kafka server or a gRPC server.

To do this, provide the Kafka or gRPC server information in the "sink" component of the collection job payload.
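
The exact schema of the collection job payload is defined by your Controller integration. Purely as an illustration, a "sink" entry might carry the destination type and endpoint details along the following lines; every field name and value shown here is a hypothetical example, not the actual payload schema.

# Hypothetical "sink" entries for a collection job payload, written as Python
# dictionaries. Field names and values are illustrative assumptions only.
kafka_sink = {
    "sink": {
        "type": "KAFKA",                          # destination type: Kafka server
        "address": "kafka.example.com",           # hypothetical broker host
        "port": 9092,                             # hypothetical broker port
        "topic": "cdg-collected-data",            # hypothetical topic for collected data
    }
}

grpc_sink = {
    "sink": {
        "type": "GRPC",                           # destination type: gRPC server
        "address": "grpc-collector.example.com",  # hypothetical server implementing OutputService
        "port": 50051,                            # hypothetical gRPC port
    }
}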


Note

Crosswork Data Gateway is read-only to the network. The devices must be configured with the correct data metrics before Crosswork Data Gateway collects them.
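
When the sink is a Kafka server, the collected data arrives on the configured topic as serialized DataEnvelop messages (the message is defined in Output Data Format below). As a minimal sketch, assuming the .proto shown below is compiled to a Python module named output_pb2 and that each Kafka record carries one serialized DataEnvelop, a consumer might look like this; the broker address and topic name are hypothetical, and kafka-python is used only as an example client.

# Sketch of a Kafka consumer for Crosswork Data Gateway collected data.
# Assumptions: the proto below compiled to output_pb2, one DataEnvelop per
# Kafka record, hypothetical broker and topic names.
from kafka import KafkaConsumer

import output_pb2

consumer = KafkaConsumer(
    "cdg-collected-data",                          # hypothetical topic from the sink definition
    bootstrap_servers=["kafka.example.com:9092"],  # hypothetical broker
)

for record in consumer:
    envelope = output_pb2.DataEnvelop()
    envelope.ParseFromString(record.value)         # deserialize the collected-data envelope
    header = envelope.sensor_header
    print(header.collection_job_id, header.device_name, header.sensor_path)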


Output Data Format

Crosswork Data Gateway generates output in protobuf format. The protobuf message definition for this output is shown below:


Note

The sequence number in the Crosswork Data Gateway collected data output resets if the Crosswork Data Gateway VM is rebooted or the collector is restarted.


Data Gateway Output protobuf message:

/*
 * Copyright (c) 2018 Cisco Systems, Inc.
 * All rights reserved.
 */
syntax = "proto3";

package output;
import "google/protobuf/timestamp.proto";

option java_package = "com.cisco.dg.protobuf.output";
option java_outer_classname = "DataOutput";

/*
    Supported collector types
*/
enum CollectorType {
    INVALID_COLLECTOR_TYPE = 0;
    CLI = 1;
    SNMP = 2;
    MDT = 3;
    NETCONF = 4;
}

/*
    CollectionSensorHeader contains all the fields that the client app needs to
    relate the data back to the collection job that requested it. It also helps the client app
    determine whether data is missing or any cadences are not met.
*/
message CollectionSensorHeader {
    string collection_job_id = 1; // collection job id via collection job request.
    string source_name = 2; // Source name via collection job request.
    string device_host = 3; // Host or ip address from which data is collected.
    string device_name = 4; // device name or tag that was given via collection job request.
    string sensor_config_id = 5; // sensor config id that identifies each sensorConfig via collection job request.
    CollectorType collector_type = 6; // Type(CLI/SNMP/MDT etc.) of the collector used for data collection.
    string sensor_path = 7; // sensor path for which the data is collected.
}

message CollectionDetailHeader {
    google.protobuf.Timestamp collection_start_time = 1;    // time when the data was requested from device.
    google.protobuf.Timestamp collection_end_time = 2;      // time when the data was received from device.
    // Incrementing number per cadence for this device and sensor path.
    // Starts at 0. Incremented before requesting data.
    int64 sequence_number = 3;
}

/*
    DataEnvelop is how the data is received by the client app. The Dispatcher constructs this message and
    streams it to the client app transport set in the collection job request.
*/
message DataEnvelop {
    CollectionSensorHeader sensor_header = 1; // Header to identify collection job and sensor path.
    CollectionDetailHeader detail_header = 2; // Collection statistics (timestamps and sequence number).
    CollectionOutput output = 3; // collected data or error.
}

/*
    CollectionOutput holds Data and Error details.
*/
message CollectionOutput {
    oneof type {
        CollectionData data = 1; // Data would have value in case of success, would be null in case of error.
        CollectionError error = 2; // Error string in case of collection error, would be null in case of success.
    }
}

/*
    CollectionData holds the data collected from the device.
*/
message CollectionData {
    // The data bytes field carries data serialized in a different format by each collector. Based on the "CollectorType" in
    // CollectionSensorHeader, the format of the serialized bytes varies. The following format can be expected per type:
    //
    // CLI: UTF-8 string containing the console output text of a CLI. In case of custom XDE packages the format would
    // depend on output format of a custom package. (ex. UTF-8 string which contains xml data etc.)
    //
    // MDT: serialized Telemetry proto. (can contain GPB or GPB-KV data). XR devices expose this telemetry.proto.
    //
    // SNMP: serialized SnmpData proto. (For OID, Table and MIB walk operations)
    // In case of custom XDE packages the format would depend on output format of a custom package.
    // (ex. UTF-8 string which contains xml data etc.)

    bytes data = 1; // Data collected from device.
}
/*
    SNMP data collected for OID, Table, and MIB walk operations has the following format.
*/

message SnmpData {
    repeated OidRecord oid_records = 1;    // list of OID records.
}

message OidRecord {
    string oid = 1;                    // SNMP object identifier (OID), e.g. 1.3.6.1.2.1.2.2.1.4.2
    string symbol = 2;                 // symbol for OID, textual / user friendly name representing OID, e.g. ifMtu
    string value = 3;                  // corresponding value from device, e.g. 1500
}

/*
    CollectionError holds the details of the error that occurred during processing.
*/
message CollectionError {
    string error_message = 1; // Error message to describe the reason for collection error.
    string detail_error = 2; // Detailed error message to analyse the failure.
    string error_code = 3; // Error code to determine the type of error.
}

/*
    This RPC service is exposed to produce DataEnvelop messages.
    The gRPC server has to implement this service to receive the streamed data.
    The gRPC client uses this service to send the streamed data.
*/
service OutputService {
    rpc streamData (stream DataEnvelop) returns (stream DataEnvelop) {
    };
}
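
For the gRPC output channel, the server named in the "sink" must implement the OutputService defined above. The following is a minimal sketch of such a server in Python, assuming the .proto file is compiled with grpcio-tools into modules named output_pb2 and output_pb2_grpc (those module names, and the port used, are assumptions).

# Minimal sketch of a gRPC server implementing OutputService to receive
# streamed DataEnvelop messages. Module names output_pb2/output_pb2_grpc and
# the listening port are assumptions for illustration.
from concurrent import futures

import grpc

import output_pb2
import output_pb2_grpc


class OutputServiceServicer(output_pb2_grpc.OutputServiceServicer):
    def streamData(self, request_iterator, context):
        # Bidirectional stream: consume each DataEnvelop and yield an (empty)
        # envelope back, since the RPC also returns a DataEnvelop stream.
        for envelope in request_iterator:
            header = envelope.sensor_header
            detail = envelope.detail_header
            print(f"job={header.collection_job_id} device={header.device_name} "
                  f"path={header.sensor_path} seq={detail.sequence_number}")

            if envelope.output.HasField("error"):
                print("collection error:", envelope.output.error.error_message)
            elif header.collector_type == output_pb2.SNMP:
                # SNMP payloads carry a serialized SnmpData message.
                snmp = output_pb2.SnmpData()
                snmp.ParseFromString(envelope.output.data.data)
                for rec in snmp.oid_records:
                    print(f"  {rec.oid} ({rec.symbol}) = {rec.value}")
            elif header.collector_type == output_pb2.CLI:
                # CLI payloads are UTF-8 console text.
                print(envelope.output.data.data.decode("utf-8"))

            yield output_pb2.DataEnvelop()


def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    output_pb2_grpc.add_OutputServiceServicer_to_server(OutputServiceServicer(), server)
    server.add_insecure_port("[::]:50051")  # example port; use TLS credentials in practice
    server.start()
    server.wait_for_termination()


if __name__ == "__main__":
    serve()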

Crosswork Data Gateway Authentication and Bootstrap

Enroll Crosswork Data Gateway

Crosswork Data Gateway generates an Enrollment package using information from the OVF template. The enrollment package is required for the unique identification of the Crosswork Data Gateway with the Controller. For more information on Enrollment Package, see Enrollment Package.

The enrollment package is uploaded to the Controller, which then instantiates a new Crosswork Data Gateway object in its database and waits for the first sign of life from the Crosswork Data Gateway.
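
The mechanism used to upload the package depends entirely on your Controller. As a purely hypothetical sketch for a customer-hosted Controller, the upload could be as simple as an HTTPS POST; the endpoint, file name, and CA certificate path below are all assumptions.

# Hypothetical upload of an exported enrollment package to a customer-hosted
# Controller. Endpoint, file name, and certificate path are assumptions.
import requests

with open("cdg-enrollment.json", "rb") as package:
    response = requests.post(
        "https://controller.example.com/enroll",   # hypothetical Controller endpoint
        files={"enrollment": package},
        verify="/etc/cdg/controller-ca.pem",       # hypothetical CA root certificate
    )
response.raise_for_status()  # the Controller now waits for first-sign-of-life from the gateway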

Session Establishment

The Image Manager component uses the Base URL and CA root certificate to set up a session with the Crosswork Cloud Service. During this initial connection, it confirms the identity of the Controller and offers its own proof of identity via signed certificates.
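
The handshake itself is internal to the Image Manager. As a conceptual illustration only, a mutually authenticated HTTPS session of this kind might be set up as follows; the base URL, certificate paths, and endpoint are hypothetical.

# Conceptual sketch of a mutually authenticated HTTPS session, analogous to
# the session the Image Manager establishes. All URLs and paths are
# hypothetical assumptions.
import requests

BASE_URL = "https://controller.example.com"   # Base URL provided at deployment
CA_ROOT_CERT = "/etc/cdg/controller-ca.pem"   # CA root certificate used to verify the Controller
CLIENT_CERT = ("/etc/cdg/gateway.pem",        # signed certificate proving the gateway's identity
               "/etc/cdg/gateway.key")

session = requests.Session()
session.verify = CA_ROOT_CERT                 # confirm the identity of the Controller
session.cert = CLIENT_CERT                    # offer the gateway's own proof of identity

response = session.get(BASE_URL + "/status")  # hypothetical endpoint to exercise the session
print(response.status_code)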

Download Images

Once the session is established, the Image Manager requests the boot-config from the Controller. The manifest is sent as a signed JSON envelope containing a signature.

The Image Manager compares the SHA IDs of the images with the SHA IDs of the images in its local cache, and forms a list of images that need to be upgraded, created, or removed.

The Image Manager downloads the signed image .tar files and validates their SHA hashes. It also downloads the docker-compose file.
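
As a rough sketch of the kind of check involved, verifying a downloaded image archive against the hash carried in the signed manifest could look like the following; the use of SHA-256 and the file name are assumptions for illustration.

# Sketch of validating a downloaded image .tar against the hash listed in the
# signed manifest. SHA-256 and the file name are assumptions.
import hashlib

def sha256_of_file(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected_sha = "..."  # hash taken from the signed manifest (placeholder)
actual_sha = sha256_of_file("collector-image.tar")  # hypothetical downloaded image archive

if actual_sha != expected_sha:
    raise ValueError("image hash mismatch; discarding download")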

It then proceeds to install and boot the containers.

Crosswork Data Gateway is now ready to collect data.