Real-Time Data Integration into Apache Kafka®

HVR is a low-impact way to deliver the data you need to get started on your Kafka initiatives.

Apache Kafka® (“Kafka” for short) is a popular technology for sharing data between systems and building applications that respond to data events. Well-known organizations like Netflix, Uber, and Microsoft (LinkedIn) have built data-sharing infrastructure on Kafka.

Organizations often need to feed their Kafka systems from their traditional RDBMS. HVR is real-time data integration software that enables you to move high volumes of data into your Apache Kafka system for real-time updates. HVR is a Confluent Certified Connector for Kafka, providing native, high-performance data delivery into Kafka and integration with the Schema Registry.

HVR includes log-based CDC technology, a non-intrusive way to move data in real time from a variety of relational database technologies, and provides native support for Kafka as a target.
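For downstream applications, the result is an ordinary Kafka topic that can be read with any standard client. The sketch below is a minimal, illustrative consumer using the confluent-kafka Python client; the broker address, topic name, and JSON field names are assumptions made for this example, not HVR defaults.

```python
# Illustrative only: reads change events that a CDC pipeline has delivered
# into a Kafka topic. Broker address, topic name, and payload layout are
# assumed for this sketch; they are not HVR-specific settings.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",   # assumed broker address
    "group.id": "cdc-readers",             # assumed consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders_changes"])     # assumed topic fed by CDC

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # Each message is one captured change; the field names below are
        # hypothetical and depend on how the pipeline formats its output.
        change = json.loads(msg.value())
        print(change.get("op"), change.get("table"), change.get("row"))
finally:
    consumer.close()
```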


Simplified Real-Time Data Integration into Kafka

The following HVR features allow you to easily and continuously move data to Kafka:

  • Easy Management: A graphical management console to set up data flows and automated alerts, with access to rich data movement insights
  • Easy Deployment: Schema Registry population
  • Flexibility: Options to deliver data in JSON, Avro and other formats (see the consumer sketch after this list)
  • Simplified Operations: One-time load into topics, referred to as refresh, integrated with continuous Change Data Capture (CDC). This capability simplifies populating a topic with an initial dataset
  • Broad Platform Support: Transactional CDC on many supported source technologies with optimized continuous delivery into Kafka
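To show what Schema Registry integration can look like on the consuming side, the following sketch deserializes Avro-encoded change records using the confluent-kafka Python client and a Schema Registry endpoint. All addresses, topic names, and group names are placeholders invented for this example; the actual topic and subject naming depends on how your channel is configured.

```python
# Illustrative only: consumes Avro-encoded change records whose schemas are
# registered in a schema registry. Broker, registry URL, topic, and group
# names are placeholders for this sketch.
from confluent_kafka import DeserializingConsumer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer

schema_registry = SchemaRegistryClient({"url": "http://schema-registry:8081"})
avro_deserializer = AvroDeserializer(schema_registry)

consumer = DeserializingConsumer({
    "bootstrap.servers": "broker1:9092",
    "group.id": "avro-cdc-readers",
    "auto.offset.reset": "earliest",      # start from the initial refresh load
    "value.deserializer": avro_deserializer,
})
consumer.subscribe(["customer_changes"])  # assumed topic name

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    record = msg.value()  # a dict built from the registered Avro schema
    print(record)
```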

Get Real-Time Data Integration from These Supported Sources

Non-intrusive, log-based change data capture is the most efficient and reliable method to retrieve changes from relational database technologies. HVR supports multiple technologies, with options to minimize the impact on the transactional database.

  • DB2 for Linux, Unix and Windows (LUW)
  • DB2 iSeries (AS400)
  • Oracle, including cloud-hosted services like Amazon’s Oracle RDS (Relational Database Service) and Oracle Cloud
  • Microsoft SQL Server
  • SAP HANA
  • SAP ECC*
  • PostgreSQL, including AWS PostgreSQL RDS
  • Ingres
  • MySQL
    • AWS MySQL RDS
    • Amazon’s MySQL-compatible Aurora database
    • MariaDB

*SAP ECC is commonly used by large organizations. HVR includes technology to decode data in cluster and pool tables, allowing you to unlock data from all tables in older versions of SAP ECC.

HVR also supports file capture. Delimited files like CSV can be parsed into separate records, or entire files can be replicated as LOBs.
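To make the delimited-file case concrete, the sketch below shows the parsing idea in isolation: splitting a CSV file into individual records, each of which could become one message on a topic. This is a generic illustration of the concept, not HVR’s internal file-capture code; the file name and columns are invented for the example.

```python
# Illustrative only: turn each row of a delimited file into a separate
# record, the way a file-capture step would before publishing to Kafka.
# The file name and columns are invented for this example.
import csv
import json

with open("orders.csv", newline="") as f:
    reader = csv.DictReader(f)      # header row supplies the field names
    for row in reader:
        record = json.dumps(row)    # one row -> one self-contained record
        print(record)               # a real pipeline would produce this to a topic
```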

Setting up Real-Time Data Integration into Apache Kafka

HVR recommends the use of agents to distribute load and optimize network communication. An agent, an additional software installation on or near the source or target, facilitates the movement of changes between data endpoints.

Learn more about when to use agents, and when not to, in our video.

The use of agents is especially beneficial when systems are separated by a Wide Area Network (WAN), e.g. between on-premises and cloud environments, or when data is shared between multiple cloud availability zones or cloud providers. Communication can be encrypted using SSL/TLS, and authentication can be strengthened using custom-generated certificates.

A data integration setup always includes a central HVR installation, referred to as the hub, that controls CDC and data delivery. Use our Graphical User Interface to define data movement, set up automated alerts, and gain data movement insights based on the rich statistics HVR gathers as data moves into Kafka.

