
Kafka is an event streaming platform typically used for collecting, persisting and delivering large volumes of data (events) in a reliable and scalable way. Crosser is a low-code stream analytics and integration solution.

Why Crosser and Kafka make sense together

Both Crosser and Kafka/Confluent talk about stream processing when describing their respective solutions. So why would you want to use two stream processing systems together? As we will show in this article, there are several good reasons. At a high level, they can be summarized as follows:

Ingest options - Crosser offers a wide variety of connectors for collecting data, especially in the industrial space, for example OPC UA, PLCs and historians. Collect and prepare your data with Crosser and then send it to Kafka.

Egress options - In the same way, Crosser provides a wide variety of connectors for delivering your data from Kafka to other systems, sending data back to the factory floor as well as to on-premises systems and cloud services.

Low-code application development - Use Crosser’s low-code development tools to build processing applications for the data that resides in Kafka.

Crosser offers a low-code solution where modules from the Crosser library are combined into data pipelines, or flows as we call them, using a visual drag-and-drop design tool. Use the Kafka Consumer and Producer modules to connect to the data in your Kafka system, and then use other modules from our library to process your data and connect to other systems.
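Under the hood, the Kafka Producer and Consumer modules perform standard Kafka produce and consume operations. As a rough stand-alone sketch of the equivalent in plain Python (using the confluent-kafka client; the broker address, topic name and message shape below are made-up examples, not Crosser internals):

```python
import json

# Hypothetical settings for illustration only.
BOOTSTRAP_SERVERS = "localhost:9092"
TOPIC = "machine-data"

def encode_event(event: dict) -> bytes:
    """Serialize a flow message into the bytes stored in a Kafka record."""
    return json.dumps(event).encode("utf-8")

def decode_event(raw: bytes) -> dict:
    """Deserialize a Kafka record value back into a flow message."""
    return json.loads(raw.decode("utf-8"))

def produce_and_consume():
    """Round-trip one event; requires a broker and `pip install confluent-kafka`."""
    from confluent_kafka import Consumer, Producer

    producer = Producer({"bootstrap.servers": BOOTSTRAP_SERVERS})
    producer.produce(TOPIC, value=encode_event({"sensor": "temp-1", "value": 21.5}))
    producer.flush()

    consumer = Consumer({
        "bootstrap.servers": BOOTSTRAP_SERVERS,
        "group.id": "demo-group",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe([TOPIC])
    msg = consumer.poll(timeout=10.0)
    if msg is not None and msg.error() is None:
        print(decode_event(msg.value()))
    consumer.close()
```

In a Crosser flow these details are hidden behind module settings; the sketch only shows the kind of work the modules do for you.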

Preview your pipeline within the design tool to verify your data flows with live data, and then deploy your flows into a Crosser execution environment (Node). Crosser Nodes can be deployed on premises, in your cloud infrastructure or hosted by Crosser.

A single Docker container can run any number of flows. A flow is just a configuration deployed from the management tool, so there is no need to update containers to change or add a flow.

The design tool is one part of our management and configuration tool called Crosser Cloud. Crosser Cloud is available as a hosted service but can also be deployed on your own infrastructure. 

Let’s take a closer look at each of the use cases.



Crosser to Kafka

The Crosser module library has over 60 native input modules, i.e. modules that get data from different sources, and with the Crosser Connect Tools you can access hundreds of additional data sources. Some popular examples:

Industrial connectors - OPC UA, Modbus, Siemens S7, Rockwell, Aveva Historian, OSI PI

IoT protocols - MQTT, HTTP

Databases - MS SQL, PostgreSQL, MySQL, Oracle, InfluxDB, TimescaleDB, MongoDB, Couchbase

Files - Local and remote (FTP)

Getting access to data is only the starting point. Often you receive data in a format that is not suitable for sending to a system like Kafka, or you collect data from multiple sources that use different formats and want to harmonize these into a single format before sending it to Kafka. This is easily done with standard modules in the Crosser library.

Using other modules from the Crosser library you can remove invalid data, filter your sensor data to remove noise, roll up data into batches and much more, before using the Kafka Producer module to deliver the result to Kafka.
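As a sketch of this harmonization step (the payload shapes and field names below are invented for the example; in a Crosser flow this mapping would be done with standard transform modules rather than hand-written code):

```python
import json

def harmonize_opcua(sample: dict) -> dict:
    """Map a (hypothetical) OPC UA-style reading onto a common schema."""
    return {
        "source": "opcua",
        "tag": sample["NodeId"],
        "value": sample["Value"],
        "timestamp": sample["SourceTimestamp"],
    }

def harmonize_modbus(sample: dict) -> dict:
    """Map a (hypothetical) Modbus-style reading onto the same schema."""
    return {
        "source": "modbus",
        "tag": f"register-{sample['address']}",
        "value": sample["raw"] * sample.get("scale", 1.0),
        "timestamp": sample["ts"],
    }

# Two readings in different source formats...
opcua_reading = {"NodeId": "ns=2;s=Line1.Temp", "Value": 72.4,
                 "SourceTimestamp": "2023-05-01T12:00:00Z"}
modbus_reading = {"address": 40001, "raw": 724, "scale": 0.1,
                  "ts": "2023-05-01T12:00:01Z"}

# ...become a single uniform format, ready for one Kafka topic.
events = [harmonize_opcua(opcua_reading), harmonize_modbus(modbus_reading)]
payloads = [json.dumps(e).encode("utf-8") for e in events]
# A Kafka Producer step would then deliver each payload, e.g.:
# producer.produce("harmonized-readings", value=payload)
```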



Kafka to Crosser

In the same way, we can get data from a Kafka topic into a Crosser flow with the Kafka Consumer module and use one of the over 70 native output modules to deliver the data to other systems. For output, too, you can leverage the Crosser Connect Tools to connect to hundreds of additional systems. You can also add logic to trigger notifications or other types of processing.

Some examples of popular output modules:

Databases - MS SQL, PostgreSQL, MySQL, Oracle, InfluxDB, TimescaleDB, MongoDB, Couchbase

Storage - Azure Data Lake, AWS S3, Google Cloud Storage

ERP systems - SAP ECC, IFS Aurena, Aveva Insight

Sales/Marketing - Salesforce, Hubspot, Active Campaign

Support - Zoho Desk, Service Now, Zendesk

Notifications - Slack, email, Teams, Twilio
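For example, a flow could watch the consumed events and trigger a notification when a value crosses a limit. A minimal sketch of that trigger logic in plain Python (the tag name and threshold are made up; in a flow, a notification module such as Slack or email would handle the delivery):

```python
def should_alert(event: dict, limit: float = 75.0) -> bool:
    """Trigger logic: alert when a reading exceeds its limit."""
    return event.get("value", 0.0) > limit

def format_alert(event: dict) -> str:
    """Build a human-readable notification text."""
    return (f"High reading on {event['tag']}: "
            f"{event['value']} at {event['timestamp']}")

event = {"tag": "Line1.Temp", "value": 81.2,
         "timestamp": "2023-05-01T12:03:00Z"}

if should_alert(event):
    text = format_alert(event)
    # A notification module (Slack, email, Teams, ...) would deliver `text`;
    # in plain Python you might POST it to a webhook instead.
    print(text)
```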



Processing Kafka data with Crosser - add intelligence to your data workflows for automation and integration

At this point it should be pretty obvious that you can also use Crosser to process Kafka data: pick up data from one topic, process it in Crosser and deliver the result back to Kafka on another topic.

By using fixed-function modules from the library you can build basic conditional logic, filtering and restructuring of data. You can also add more advanced logic with one of our code modules, which let you run your own Python or C# code within a pipeline. Any custom algorithm or processing can then be applied to your data, including running machine learning models built with any of the common Python machine learning frameworks.
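As an illustration of the kind of custom logic such a code module could run (this is plain Python, not the actual Crosser code-module interface), here is a simple exponential moving average that smooths a noisy sensor stream:

```python
class EmaFilter:
    """Exponential moving average: smooths a noisy stream of readings."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha  # weight of the newest sample
        self.state = None   # running average, None until first sample

    def process(self, value: float) -> float:
        if self.state is None:
            self.state = value
        else:
            self.state = self.alpha * value + (1 - self.alpha) * self.state
        return self.state

# Feed an oscillating signal through the filter.
f = EmaFilter(alpha=0.5)
smoothed = [f.process(v) for v in [10.0, 30.0, 10.0, 30.0]]
# smoothed == [10.0, 20.0, 15.0, 22.5] - the output converges toward the mean
```

In a deployed flow, each Kafka message would pass through logic like this before being produced to the output topic.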


Crosser and Kafka are a powerful combination of simplicity and functionality for building streaming data pipelines (data flows). The combined solution makes it easy to connect to and collect data from various sources, and to integrate with and send data to various systems and services. And last but not least, it is very easy to add intelligence and processing to your streaming data pipelines.

Learn more about Intelligent Workflows and Data Automation here →

Look into the rich library of connectors here →

About the author

Göran Appelquist (Ph.D.) | CTO

Göran has 20 years of experience leading technology teams. He is the lead architect of our end-to-end solution and is extremely focused on securing the lowest possible Total Cost of Ownership for our customers.

“Hidden lifecycle (employee) costs can account for 5-10 times the purchase price of software. Our goal is to offer a solution that automates and removes most of the tasks that are costly over the lifecycle.

My career started in the academic world, where I got a PhD in physics by researching large-scale data acquisition systems for physics experiments, such as the LHC at CERN. After leaving academia I have worked in several tech startups in different management positions over the last 20 years.

In most of these positions I have stood with one foot in the R&D team and the other in the product/business teams. My passion is learning new technologies, using them to develop innovative products and explaining the solutions to end users, technical or non-technical."
