Real-Time Data Replication to Apache Kafka®

Welcome to our Data Replication to Apache Kafka Resources Page

On this page, learn how to stream data to Apache Kafka with log-based change data capture (CDC).

Why HVR for Replication to Apache Kafka?

  • Fast Data: Stream data from a variety of sources in real time with log-based CDC, a non-intrusive, transactional solution that supports many source technologies with optimized continuous delivery into Kafka.
  • Easy to Get Started: A one-time load into topics, referred to as a refresh, is integrated with continuous log-based change data capture (CDC). This capability simplifies populating a topic with an initial data set.
  • Flexible: Many options to deliver data in JSON, Avro, and other formats.
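As a rough sketch of what a change record delivered in JSON might look like, consider a hypothetical orders table; the field names and structure below are illustrative only, not HVR's actual output schema:

```python
import json

# Hypothetical log-based CDC change event as it might be delivered
# to a Kafka topic in JSON format. Field names are illustrative only.
change_event = {
    "op": "update",              # insert / update / delete
    "table": "orders",
    "tx_id": 90412,              # source transaction identifier
    "commit_ts": "2018-10-09T14:03:22Z",
    "before": {"order_id": 1001, "status": "pending"},
    "after":  {"order_id": 1001, "status": "shipped"},
}

# Serialize to the JSON payload that would become the Kafka message value
payload = json.dumps(change_event)
print(payload)
```

A message like this carries both the old and new row images, so downstream consumers can react to the change without querying the source database.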

Datasheet: HVR is a Confluent Certified Connector for Kafka

Apache Kafka is a distributed streaming platform that is used to:

  1. Distribute data between systems or applications
  2. Build real-time applications that respond to data events
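The two uses above both rest on Kafka's publish/subscribe model: producers publish events to named topics, and any number of applications subscribe and react. The in-memory stand-in below is only a conceptual illustration of that model; a real deployment would use a Kafka client library against a running broker:

```python
from collections import defaultdict

# Conceptual stand-in for Kafka's publish/subscribe model.
# Real applications would use a Kafka client and a broker; this
# only illustrates topics, publishers, and subscribers.

class MiniBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A real-time application registers interest in a topic
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Distribute the event to every application subscribed to the topic
        for callback in self.subscribers[topic]:
            callback(event)

broker = MiniBroker()
received = []
broker.subscribe("orders", received.append)   # application reacting to events
broker.publish("orders", {"order_id": 1001, "status": "shipped"})
print(received)
```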

HVR is a Confluent Certified Connector for Kafka, providing native, high-performance data delivery into Kafka and integration with the schema registry.

The HVR technology delivers end-to-end data integration capabilities to set up and manage real-time change data capture and continuous data delivery with Kafka as a target.


Kafka Integration Demo

HVR: On-Premises SQL Server to Amazon Redshift and Kafka

This 30-minute demo walks through how to use HVR to quickly set up capture of real-time changes from an on-premises SQL Server database and deliver them to both Amazon Redshift and Apache Kafka hosted on AWS.

Modernizing Your Application Architecture with Microservices

Recorded on: Tuesday, October 9, 2018
A joint webinar.



Organizations are quickly adopting microservice architectures to achieve better customer service and improve user experience while limiting downtime and data loss. However, transitioning from a monolithic architecture based on stateful databases to truly stateless microservices can be challenging and requires the right set of solutions.

In this webinar, learn from field experts as they discuss how to convert the data locked in traditional databases into event streams using HVR and Apache Kafka®. They will show you how to implement these solutions through a real-world demo use case of microservice adoption.

You will learn:

  • How log-based change data capture (CDC) converts database tables into event streams
  • How Kafka serves as the central nervous system for microservices
  • How the transition to microservices can be realized without throwing away your legacy infrastructure
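The first bullet, converting database tables into event streams, can be pictured as follows. Real log-based CDC reads committed changes from the database transaction log rather than diffing snapshots, but the resulting stream has roughly this shape; the function and field names below are purely illustrative:

```python
# Minimal sketch of the table-to-event-stream idea behind log-based CDC:
# successive row versions (keyed by primary key) become an ordered stream
# of insert/update/delete events. Illustrative only, not HVR's mechanism.

def diff_to_events(before: dict, after: dict) -> list:
    """Compare two states of a table (pk -> row) and emit change events."""
    events = []
    for pk, row in after.items():
        if pk not in before:
            events.append({"op": "insert", "key": pk, "after": row})
        elif before[pk] != row:
            events.append({"op": "update", "key": pk,
                           "before": before[pk], "after": row})
    for pk, row in before.items():
        if pk not in after:
            events.append({"op": "delete", "key": pk, "before": row})
    return events

stream = diff_to_events(
    {1: {"status": "pending"}, 2: {"status": "paid"}},
    {1: {"status": "shipped"}, 3: {"status": "new"}},
)
for event in stream:
    print(event)
```

Each event in the stream could then be published to a Kafka topic, where microservices consume and react to it independently of the source database.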