Apache Kafka with DataRow.io

Process streaming data with ease 

Consuming Kafka topics with DataRow.io provides a simple, scalable solution for ingesting streaming data on AWS. You can rapidly build, visualize, and automate real-time data ingestion pipelines from a web browser in minutes, and you can track performance and start, stop, or replicate an entire data pipeline.

[Image: Kafka with DataRow.io]

Processing Kafka messages with DataRow.io

To process Kafka messages with DataRow.io, create a new job in the web interface, drag the Kafka reader activity onto the designer canvas, and specify the list of Kafka brokers, the topics, and the message format. Then select the transformation and writer activities, and finally specify the number and type of EC2 nodes needed for processing. Hundreds of built-in activities remove the complexity of implementation details and let you focus on "what" rather than "how". You no longer have to design, author, and optimize complex real-time data ingestion scripts, saving valuable production time.
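To illustrate the kind of work the reader and transformation activities handle, here is a minimal, hypothetical sketch in plain Python of decoding JSON-formatted Kafka message payloads and applying a filter step (the payloads, field names, and filter rule are illustrative assumptions, not DataRow.io internals):

```python
import json

def parse_message(raw: bytes) -> dict:
    """Decode one Kafka message payload; JSON message format is assumed."""
    return json.loads(raw.decode("utf-8"))

def keep_errors(event: dict) -> bool:
    """Example filter activity: pass only error-level events."""
    return event.get("level") == "error"

# Sample raw payloads standing in for messages read from a Kafka topic.
payloads = [
    b'{"level": "info", "msg": "started"}',
    b'{"level": "error", "msg": "disk full"}',
]

errors = [e for e in map(parse_message, payloads) if keep_errors(e)]
print(errors)  # [{'level': 'error', 'msg': 'disk full'}]
```

In a visual pipeline, each of these functions corresponds to an activity configured on the canvas rather than code you maintain yourself.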

Consuming Kafka topics with DataRow.io

To consume Kafka topics with DataRow.io, create a new streaming job in the web interface and specify the Kafka coordinates, the transformation steps, and the target destination. DataRow.io will automatically provision an ephemeral cluster in your AWS account and execute the job.
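For comparison, a hand-rolled version of such a streaming job might look like the sketch below, using the kafka-python client (the broker address, topic name, and transformation are illustrative placeholders, not DataRow.io's implementation):

```python
import json

def transform(record: dict) -> dict:
    """Example transformation step: normalize selected fields."""
    return {"user": record["user"].lower(), "amount": round(record["amount"], 2)}

def run():
    # Requires a reachable Kafka broker; kafka-python is an assumed client choice.
    from kafka import KafkaConsumer  # pip install kafka-python
    consumer = KafkaConsumer(
        "events",                          # topic name (placeholder)
        bootstrap_servers=["broker1:9092"],  # Kafka coordinates (placeholder)
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for msg in consumer:
        out = transform(msg.value)
        print(out)  # a real job would write to the target destination instead

# run()  # uncomment when a Kafka broker is available
```

Keeping a script like this running, scaled, and monitored is exactly the operational burden the managed job is meant to remove.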
