Real-Time Data Replication to Apache Kafka®
Welcome to our Data Replication to Apache Kafka Resources Page
On this page, learn how to stream data to Apache Kafka with log-based change data capture (CDC).
Datasheet: HVR is a Confluent Certified Connector for Kafka
Apache Kafka is a distributed streaming platform that is used to:
- Distribute data between systems or applications
- Build real-time applications that respond to data events
HVR is a Confluent Certified Connector for Kafka, providing native, high-performance data delivery into Kafka and integration with the schema registry.
The HVR technology delivers end-to-end data integration capabilities to set up and manage real-time change data capture and continuous data delivery with Kafka as a target.
Kafka Integration Demo
HVR: On-Premises SQL Server to Amazon Redshift and Kafka
This 30-minute demo walks through how to use HVR to quickly set up capture of real-time changes from an on-premises SQL Server database and deliver them to both Amazon Redshift and Apache Kafka hosted on AWS.
Modernizing Your Application Architecture with Microservices
Recorded on: Tuesday, October 9, 2018
Joint webinar brought to you by:
Organizations are quickly adopting microservice architectures to achieve better customer service and improve user experience while limiting downtime and data loss. However, transitioning from a monolithic architecture based on stateful databases to truly stateless microservices can be challenging and requires the right set of solutions.
In this webinar, learn from field experts as they discuss how to convert the data locked in traditional databases into event streams using HVR and Apache Kafka®. They will show you how to implement these solutions through a real-world demo use case of microservice adoption.
You will learn:
- How log-based change data capture (CDC) converts database tables into event streams
- How Kafka serves as the central nervous system for microservices
- How the transition to microservices can be realized without throwing away your legacy infrastructure
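To make the first point above concrete, here is a minimal sketch of how log-based CDC turns database table changes into an event stream. The log-entry fields and table names are illustrative assumptions, not HVR's actual format:

```python
import json

# Hypothetical, simplified sketch of log-based CDC: entries read from a
# database transaction log are converted into change events suitable for
# publishing to a Kafka topic. Field names are illustrative only.

def log_entry_to_event(entry):
    """Convert one transaction-log entry into a CDC event dict."""
    return {
        "table": entry["table"],
        "op": entry["op"],              # "insert", "update", or "delete"
        "key": entry["key"],            # primary-key value of the changed row
        "before": entry.get("before"),  # row image before the change, if any
        "after": entry.get("after"),    # row image after the change, if any
        "lsn": entry["lsn"],            # log sequence number, preserves order
    }

# Simulated transaction-log entries for a "customers" table.
log_entries = [
    {"table": "customers", "op": "insert", "key": 1, "lsn": 100,
     "after": {"id": 1, "name": "Ada"}},
    {"table": "customers", "op": "update", "key": 1, "lsn": 101,
     "before": {"id": 1, "name": "Ada"},
     "after": {"id": 1, "name": "Ada Lovelace"}},
    {"table": "customers", "op": "delete", "key": 1, "lsn": 102,
     "before": {"id": 1, "name": "Ada Lovelace"}},
]

# The resulting event stream: one JSON message per change, in commit order.
events = [json.dumps(log_entry_to_event(e)) for e in log_entries]
for msg in events:
    print(msg)
```

Because each event carries the row's key and both row images, downstream consumers (or microservices) can rebuild the table's state, or react to individual changes, without ever querying the source database.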
Why HVR for Apache Kafka?
- Populating the schema registry.
- Flexible options to deliver data in JSON, Avro and other formats.
- One-time load into topics, referred to as refresh, integrated with continuous log-based Change Data Capture (CDC). This capability simplifies populating a topic with an initial data set.
- Non-intrusive, log-based, transactional CDC on many supported source technologies with optimized continuous delivery into Kafka.
- Transactionality can be propagated through manifests.
- A graphical management console to set up data flows and automated alerts, with access to rich data-movement insights.
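As an illustration of the manifest idea in the list above, the sketch below groups change events by transaction and emits one manifest message per transaction, so a consumer can tell which events belong together and apply them atomically. This is a hedged illustration of the concept, not HVR's actual manifest format:

```python
import json
from collections import defaultdict

# Illustrative only: individual change events are published to a data topic,
# while a manifest per transaction lists which events form one atomic unit.

changes = [
    {"tx": "tx-1", "seq": 1, "table": "orders",      "op": "insert", "key": 10},
    {"tx": "tx-1", "seq": 2, "table": "order_lines", "op": "insert", "key": 501},
    {"tx": "tx-2", "seq": 3, "table": "orders",      "op": "update", "key": 7},
]

def build_manifests(events):
    """Group events by transaction id and emit one manifest per transaction."""
    by_tx = defaultdict(list)
    for ev in events:
        by_tx[ev["tx"]].append(ev["seq"])
    return [
        {"tx": tx, "event_count": len(seqs), "sequence_numbers": seqs}
        for tx, seqs in by_tx.items()
    ]

manifests = build_manifests(changes)
print(json.dumps(manifests, indent=2))
```

A consumer that buffers events until it has seen every sequence number named in a transaction's manifest can then apply the whole transaction at once, preserving source-side atomicity across topics.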