
Flow to Flow communication

Introduction

One of the benefits of the Crosser solution is that you can deploy multiple flows (processes) into a single running container.

This means you can add new use cases without affecting processes already running at the edge, and without restarting or re-deploying the runtime (Docker container or Windows service).

This allows you to realize your use cases step by step while keeping the processes isolated.

You might also consider spreading use cases across different flows to get a clear separation between process steps, or even between responsibilities within your team.

This article describes some examples of how you can utilize flow-to-flow communication and use it to your advantage.

Flow to flow communication options

To understand how you can realize flow-to-flow communication, you need to know that the Crosser Edge Node ships with three additional features alongside the processing engine.

MQTT Broker

The Crosser Edge Node comes with an integrated MQTT Broker which can be exposed to allow external client communication.

This allows you to publish data into the Crosser Edge Node's broker from external systems like sensors, PLCs and many more.

The same principle can also be used if you want to interconnect flows. Since the broker is accessible from within the container, you do not even need to expose it to the outside world if you just want to realize flow to flow communication.

You will therefore find two modules in the module library which connect only to the integrated MQTT Broker:

MQTT Pub Broker: This module is designed to publish messages to the internal MQTT Broker topics.

MQTT Sub Broker: This module is designed to subscribe to messages on the internal MQTT Broker topics.
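Outside of the Crosser modules, the same pattern can be sketched with a generic MQTT client. This is a hypothetical illustration: the host, port, topic name, and payload helper below are all assumptions to adapt to your own node configuration, not part of the Crosser product.

```python
import json

BROKER_HOST = "localhost"      # the integrated broker, reachable inside the container (assumption)
BROKER_PORT = 1883             # assumed default MQTT port
TOPIC = "flowdata/aggregated"  # hypothetical topic shared between the two flows

def build_payload(machine_id, temp_avg):
    """Serialize one aggregated reading as JSON for subscribing flows."""
    return json.dumps({"machineId": machine_id, "tempAvg": temp_avg})

def publish_reading(machine_id, temp_avg):
    """Publish one aggregated reading to the internal broker."""
    # Third-party client (pip install paho-mqtt); imported lazily so the
    # pure helpers above work without it installed.
    import paho.mqtt.publish as publish
    publish.single(
        TOPIC,
        build_payload(machine_id, temp_avg),
        hostname=BROKER_HOST,
        port=BROKER_PORT,
    )
```

A second flow would then subscribe to the same topic to receive each reading as it is published.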

Example

Let's assume you have your PLC connected and want to send transformed and aggregated data to your Azure IoT Hub.

Later on, you decide to add another flow which should consume the same data but calculate KPIs or store the data in an on-premises database.

If you make changes to your KPI flow, you do not want to interrupt the data acquisition and transport to your Azure IoT Hub.

To achieve this, you can separate the flows (simplified here to illustrate the concept):

Flow A - Data acquisition and aggregation

Flow B - Upload to Azure IoT Hub

Flow C - Bulk insert into MsSQL

Flow D - KPI generation

Merging this into the Crosser Edge Node detail view, it would look like this.

HTTP

As an alternative to MQTT you can use HTTP to send information from one flow to another.

Similar to MQTT, you can expose the Web Server to the outside world to allow external systems to push data into a flow.

Keep in mind that in order to send data to an HTTP endpoint, the endpoint must first be opened by the web server.

The first thing you need to do is open an HTTP endpoint using the HTTP Listener module and deploy the flow.

Let's assume you want to send information from an external system or flow to this endpoint (/mydata). You would do that by simply posting data to:

http://<my-node>:9090/mydata

The flow in that case would start with the HTTP Listener module with the following configuration.

(the requested path must match the endpoint you want to post data to)

To post data to another flow, you can use the HTTP Request module and set the action to POST.
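Both sides of this exchange can be sketched in plain Python to show the mechanics. Note this is a stand-alone illustration, not the Crosser modules themselves: the handler below plays the role of the HTTP Listener flow, the POST helper plays the HTTP Request module, and the /mydata path and payload are assumptions. The node's web server uses port 9090; the demo binds an ephemeral port so it can run anywhere.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ListenerHandler(BaseHTTPRequestHandler):
    """Stand-in for the flow that starts with the HTTP Listener module."""
    received = []  # messages accepted on the /mydata endpoint

    def do_POST(self):
        if self.path == "/mydata":
            length = int(self.headers["Content-Length"])
            ListenerHandler.received.append(json.loads(self.rfile.read(length)))
            self.send_response(200)
        else:
            self.send_response(404)  # only the opened endpoint accepts data
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def post_trigger(url, payload):
    """Roughly what the HTTP Request module does with action set to POST."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Start the listener side, then post one message to it from the sender side.
server = HTTPServer(("localhost", 0), ListenerHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()
status = post_trigger(f"http://localhost:{port}/mydata", {"event": "read-plc"})
server.shutdown()
```

Because the exchange is a plain request/response, the sender needs nothing more than the endpoint URL; no standing connection exists between the two sides.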

The basic idea is the same as for MQTT, but keep in mind that HTTP endpoints are bound to flows, not to the Crosser Edge Node.

In addition, this is plain HTTP, which means you do not need to keep a connection open between flows at all times.

This is especially helpful when persistent connections are restricted by firewalls, or when MQTT is not allowed at all because it enables bi-directional communication.

Most of the time, customers use MQTT for flow to flow communication.

HTTP, on the other hand, comes with the big advantage that it allows external systems to send in triggers with a simple POST command.

Example

Let's assume you want to read certain data from your S7 PLC just in time, perhaps triggered by some event.

You do not only want to read the data but also forward it to your KPI flow as soon as possible.

Your KPI flow, though, is not hosted on the same Crosser Edge Node but on another one, which can only be reached through HTTP communication.

In that case, you can split your flows as below (simplified here to illustrate the concept):

Flow A - Trigger and data acquisition

Flow B - Listen to data and generate KPIs

Merging this into the Crosser Edge Node detail view, it would look like this.

Key/Value store

The Key/Value store is a great tool if you need to pick up certain data at some step in a flow.

While MQTT and HTTP push data, the Key/Value store lets you decide where and when to read or write it.

In general, you can have multiple Key/Value stores within one Crosser Edge Node. You create a new Key/Value store by simply entering a new name in the module configuration.

Overall you have three different Key/Value modules.

1. Key/Value Set

This module is used to create a Key/Value store and store the Key/Value pairs.

Important: By default, Key/Value stores are kept in memory. By enabling the 'persistent storage' flag, you can write the Key/Value store to the local disk.

You can only access Key/Value stores created by other flows if the 'persistent storage' flag is set to true.

2. Key/Value Get

This module allows you to pick up values from the Key/Value store.

3. Key/Value Delete

This module allows you to remove values from a Key/Value store.

Example

Let's say you need to enrich streaming data with metadata held in a MsSQL database. The streaming data arrives at a high frequency and you need to enrich every single message.

You could of course add the MsSQL Select module to your streaming flow, but this would put a lot of load on your database, since you would execute the query for every message.

Usually the metadata does not change very often, so it is most likely sufficient to update it every now and then.

You can use the Key/Value store to hold the metadata fetched from your MsSQL Server, e.g. once per hour, and use the Key/Value Get module in your streaming flow instead.

Your flows could look like below (simplified here to illustrate the concept):

Flow A - Fetch metadata and save in Key/Value store

Flow B - Get streaming data and enrich with metadata

Merging this into the Crosser Edge Node detail view, it would look like this.
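The caching pattern behind Flow A and Flow B can be sketched in plain Python. This is a hypothetical stand-alone illustration of the idea, not the Crosser modules: the periodic fetch plays Flow A's role, the get() lookups play Flow B's, and the one-hour refresh interval is an assumption.

```python
import time

class MetadataCache:
    """Refresh metadata periodically instead of querying the database per message."""

    def __init__(self, fetch_fn, max_age_s=3600):
        self._fetch = fetch_fn        # e.g. a MsSQL SELECT wrapped in a function
        self._max_age = max_age_s     # refresh at most once per interval
        self._store = {}              # the in-memory key/value pairs
        self._loaded_at = None        # when the store was last refreshed

    def get(self, key):
        """Return the value for key, refreshing the whole store if it is stale."""
        now = time.monotonic()
        if self._loaded_at is None or now - self._loaded_at > self._max_age:
            self._store = self._fetch()   # one query refreshes all keys
            self._loaded_at = now
        return self._store.get(key)
```

Every streaming message then reads from memory, and the database is only queried once per refresh interval regardless of the message rate.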

About the author

David Nienhaus | Senior Solution Engineer

David is a Senior Solution Engineer at Crosser. He has over 10 years of experience working with software integration and digitization projects for critical infrastructure.
His engineering background gives him the understanding and focus needed to solve customer use cases in the most efficient and successful way.