Comprehensive real-time data replication. Simplified.

A subscription includes everything you need for efficient data replication and integration, whether you're moving data between databases, into a cloud or multi-cloud environment, or into a data lake or data warehouse.


Fast

Low-impact data movement even at high volumes with Log-Based Change Data Capture (CDC) and compression. Get value from your data faster with analytics tools, backed by a stellar customer service team.

Efficient

The most efficient way to replicate and integrate data in hybrid and complex environments is with HVR’s distributed, flexible and modular architecture. Design your integration flow the way you need it and stream data from one source to many, all at once, without needing to define your setup multiple times.

Secure

HVR understands the importance of data security. HVR is the only real-time data replication solution that enables routing through a firewall proxy in hybrid environments. Data is also encrypted for an added layer of protection.

With the combination of flexibility, performance and robustness, HVR has proven to be a very good choice to embed in our flight planning system.
– Senior Database Software Architect, Lufthansa

Key features. What you get with an HVR subscription:


Table Creation and Initial Load

Mapping data between source and target is automated and made easy.


Log-Based Change Data Capture

Only the changes are moved between source and target, a low impact way to move data.


Data Validation and Compare

Have assurance that your data is accurate: compare data before consumption. Live Compare capabilities allow comparison of in-flight data.


Insights: Statistics and more

This tool gives you the ability to view how data is moving in real time, be proactive, and identify chokepoints.

Broad source and target support

Whether you’re replicating your data to a data lake or data warehouse, from on-prem to the cloud, we support it.

Amazon Redshift
Amazon RDS
Amazon Aurora on PostgreSQL
PostgreSQL
Amazon S3
Amazon Aurora on MySQL
MySQL
Snowflake
Snowflake on AWS
Snowflake on Azure
Teradata
SAP HANA
SAP ECC
Apache Kafka
Apache Hive
Apache HBase
Apache Cassandra
Microsoft SQL Server
Microsoft Azure SQL Database
Microsoft Azure Data Warehouse
Microsoft Azure DLS
Microsoft Azure Blob Storage
Oracle
Salesforce
MapR
HDFS
IBM DB2 LUW
IBM DB2 on z/OS
IBM DB2 iSeries (AS400)
MariaDB
MongoDB
Ingres
SharePoint
Greenplum
Google BigQuery
Actian Vector
HVR Agent Plug-in

Don’t see your platform?

Please contact us to learn more about our API agent plug-in. This plug-in gives you the ability to connect to a target not listed.


HVR has proven its stability and robustness. It keeps on running and running with minimal maintenance effort. HVR guarantees secure delivery of all our data.
– Director of IT, PostNL

A flexible, distributed architecture with a central point of control

What is a distributed architecture?

In a distributed architecture, changes take place as close to the source as possible for high performance and low impact, enabled via the central point of control, the Hub, and optional agents. A distributed architecture is the most efficient solution for moving data in complex environments.

Benefits of a distributed architecture include:

  • Flexibility in how you design your environment
  • Performance
  • Manageability

How does it work?

Log-Based Change Data Capture (CDC) takes place on or as close to the source server as possible. This is where relevant transaction data is extracted and compressed. The data is then sent across the wire to the central hub, the distributor. The hub guarantees recoverability and as needed queues the compressed transactions.

Separately from the capture, an integrate process picks up the compressed changes for its destination; they are sent to the target, where they are unpacked and applied using the most efficient method for that target.
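The capture, queue, and integrate steps described above can be sketched in a few lines of Python. The names (Hub, capture, integrate) and the in-memory queue are illustrative assumptions, not HVR's actual API:

```python
import json
import zlib
from collections import deque

class Hub:
    """Central point of control: queues compressed transactions (here, in memory)."""
    def __init__(self):
        self.queue = deque()

    def receive(self, compressed_txn):
        # In the real product the hub guarantees recoverability; here we just queue.
        self.queue.append(compressed_txn)

def capture(transactions):
    """Runs on or close to the source: extract relevant changes and compress them."""
    for txn in transactions:
        yield zlib.compress(json.dumps(txn).encode())

def integrate(hub, apply_fn):
    """Separate process: pick up compressed changes, unpack, and apply to the target."""
    while hub.queue:
        txn = json.loads(zlib.decompress(hub.queue.popleft()))
        apply_fn(txn)

# Usage: two transactions flow source -> hub -> target
hub = Hub()
for c in capture([{"op": "insert", "table": "orders", "row": {"id": 1}},
                  {"op": "update", "table": "orders", "row": {"id": 1, "qty": 2}}]):
    hub.receive(c)

target = []
integrate(hub, target.append)
```

Because capture and integrate are decoupled by the hub's queue, the target can fall behind or come online later without affecting the source.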

Example

Sources — Oracle and SQL Server On-Prem
Target — Amazon Redshift Data Lake

An organization using on-premises Oracle and SQL Server databases as sources and a data lake in Amazon Redshift can scale to many sources, with capture running on the individual database servers. These servers send compressed (and encrypted) changes into the AWS cloud to be applied to Redshift. The changes go through S3 and are copied into Redshift tables, followed by set-based SQL statements on the target tables, so that in aggregate the analytical database can keep up with the transaction load from multiple sources.
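The set-based apply step can be illustrated with a small Python sketch, using an in-memory dict keyed by primary key to stand in for a Redshift table. The two-phase delete-then-insert logic is the general staging-table pattern, not HVR internals:

```python
def apply_batch(table, changes):
    """Apply a batch of CDC changes set-based rather than row by row.

    table:   dict keyed by primary key, standing in for the target table
    changes: list of (op, row) tuples in commit order
    """
    affected = {row["id"] for _, row in changes}
    # Phase 1: one set-based delete of every affected key.
    for key in affected:
        table.pop(key, None)
    # Phase 2: compute the latest surviving image per key, then one set-based insert.
    latest = {}
    for op, row in changes:
        if op == "delete":
            latest.pop(row["id"], None)
        else:
            latest[row["id"]] = row
    table.update({row["id"]: row for row in latest.values()})
    return table

# Usage: one update, one delete, and one insert applied as a single batch
table = {1: {"id": 1, "qty": 1}, 2: {"id": 2, "qty": 4}}
changes = [("update", {"id": 1, "qty": 9}),
           ("delete", {"id": 2}),
           ("insert", {"id": 3, "qty": 5})]
apply_batch(table, changes)
```

On a columnar analytical database, replacing thousands of single-row statements with two set-based statements per batch is what lets the target keep pace with the source's transaction rate.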

Data integration architecture: understanding agents

The question of whether or not to use an agent when performing data integration, especially around use cases with Log-Based Change Data Capture (CDC) and continuous, near real-time delivery, is common. Through the use of agents, changes take place as close to the source as possible for low impact and high performance. Agents are optional with HVR; whether to deploy them depends on your goals.

In this video, HVR’s CTO, Mark Van de Wiel, goes into detail about:

  • The pros and cons of using an agentless setup, versus an agent setup
  • When to consider one over the other
  • Two common distributed architectures using an agent setup

Topologies supported

  • Uni-Directional: Real-Time Reporting; Migrations (Reverse Post Migration)
  • Broadcast: Data Distribution
  • Bi-Directional: Active / Active Standby; High Availability
  • Integration / Consolidation: Data Warehouse / Data Lake
  • Multi-Directional: Multi-Way Active / Active; Geographical Distribution
  • Cascading: Data Marts

Management and operations

Total cost of ownership for many software implementations is determined by the administrative effort to manage and maintain the environment once the system goes live.

HVR is extremely well-architected to limit administrative efforts to manage real-time data replication environments:

  • The hub is the central point of control and all logs are stored there.
  • Administrators have access to an easy-to-use GUI, with rich context-sensitive online help, to interact with the hub.
  • All functionality is available through straightforward, easy-to-learn commands that can be used to script automated tasks.
  • HVR is upward and backward compatible, allowing users to adopt new features or implement fixes by upgrading only the necessary installation(s).
  • Administrators can configure maintenance tasks for lights-out management. HVR will send alerts as needed by email or through SNMP traps into management software such as HP Open View, Nagios or Oracle Enterprise Manager.

HVR provides administrators access to rich statistics retrieved from the detailed data replication logs. Administrators can get a graphical overview of system activity over time, or slice and dice the data to drill into the detailed DML operations on individual database objects. An integrated process can store historical statistics in the database, or users can simply copy the metrics and create their own charts.


FAQs

Can HVR replicate data from a single source to multiple targets?
Yes, in fact that is one of the advantages of HVR’s architecture. HVR can capture from a single Oracle instance, queue the captured changes on the hub, and then integrate those changes to as many targets as needed. HVR does not have any limitation on the number of targets.

Can HVR replicate data from multiple sources to a single target?
Yes, HVR can be configured to capture data from many sources and replicate to a single target. Many data warehousing solutions require data to be collected from any number of sources and either combined into a single target warehouse database or loaded into separate target schemas. Some applications and data are designed so that there will not be any conflicts on primary key constraints. If that is not the case for your scenario, HVR offers the option to add extra columns and set them to values stored in the metadata, to make sure you don't have any conflicting primary keys.
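The extra-column technique can be sketched in Python. The `source_id` column name is a hypothetical stand-in for the metadata values HVR can populate:

```python
def qualify_key(row, source_id):
    """Add a metadata column so rows from different sources never collide
    on the target's primary key (illustrative, not HVR's column naming)."""
    return {**row, "source_id": source_id}

# Two sources each ship a row with primary key id=1; on the target the key
# becomes the pair (source_id, id), so both rows coexist without conflict.
target = {}
sources = {"erp": [{"id": 1, "amount": 10}],
           "crm": [{"id": 1, "amount": 99}]}
for src, rows in sources.items():
    for row in rows:
        qualified = qualify_key(row, src)
        target[(qualified["source_id"], qualified["id"])] = qualified
```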

Do my source and target tables need to have the same structure and layout?
No, tables do not need to have the same layout. You can instruct HVR to ignore certain columns or populate extra columns during replication. Column values can also be changed through transformations, or enriched with the results of querying other tables on either the source or the target. HVR also makes additional transaction metadata, such as source timestamps or transaction identifiers, available to be mapped to columns.
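A column-mapping step of this kind might look like the following sketch. The function and parameter names are illustrative, not HVR configuration syntax:

```python
def map_row(source_row, ignore=(), rename=None, transforms=None, extra=None):
    """Sketch of column mapping during replication: drop ignored columns,
    rename columns, transform values, and populate extra (metadata) columns."""
    rename = rename or {}
    transforms = transforms or {}
    extra = extra or {}
    out = {}
    for col, val in source_row.items():
        if col in ignore:
            continue
        target_col = rename.get(col, col)
        out[target_col] = transforms.get(col, lambda v: v)(val)
    out.update(extra)  # e.g. replication metadata such as a timestamp
    return out

# Usage: drop "temp", rename and upper-case "name", add a metadata column
row = map_row({"id": 7, "name": "ada", "temp": 1},
              ignore=("temp",),
              rename={"name": "customer_name"},
              transforms={"name": str.upper},
              extra={"replicated_at": "2019-05-01T00:00:00Z"})
# row == {"id": 7, "customer_name": "ADA", "replicated_at": "2019-05-01T00:00:00Z"}
```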

To minimize any impact on our network, can we compress the change data before it is sent over the network?
Yes; in fact, HVR compresses the data by default before sending it over the network, using an internal algorithm that achieves very high compression rates. This reduces impact on your corporate network while adding little overhead on the source.
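The effect of compressing repetitive change records can be demonstrated with zlib. HVR uses its own internal algorithm; zlib here is purely illustrative:

```python
import json
import zlib

# Change records in a CDC stream tend to be highly repetitive (same tables,
# same column names, similar values), which is why they compress so well.
changes = [{"op": "insert", "table": "orders", "id": i, "status": "NEW"}
           for i in range(1000)]
raw = json.dumps(changes).encode()
packed = zlib.compress(raw)

assert zlib.decompress(packed) == raw   # lossless round-trip
ratio = len(raw) / len(packed)          # repetitive payloads shrink many-fold
```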

When instantiating the target database, does the user have to pre-create the target tables, or can HVR help with that?
The initial load of the target tables takes place by running an HVR Refresh operation. The Refresh can create all the target tables if they don't already exist. The target tables are created based on the DDL of the source tables, in conjunction with any column re-mapping the user has configured in the replication channel.

Can HVR convert all insert, update, and delete operations and insert them into a time-based journal or history table?
Yes, HVR Integrate provides a feature known as TimeKey, which converts all changes (inserts, updates, and deletes) into inserts into separate tables. HVR logs both the before and after image for update operations, the after image for insert operations, and the before image for delete operations. HVR also logs additional transaction metadata to provide more time-based detail for every row replicated. HVR will automatically create the tables with the preferred structure for TimeKey integration.
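A TimeKey-style, insert-only journal can be sketched as follows. The row layout, column names, and sequence metadata are illustrative assumptions, not HVR's actual table structure:

```python
import itertools

_seq = itertools.count(1)  # stand-in for transaction/sequence metadata

def timekey_rows(op, before=None, after=None, ts="2019-05-01T00:00:00Z"):
    """Convert one change into insert-only journal rows: updates produce a
    before and an after image, inserts an after image, deletes a before image."""
    if op == "update":
        images = [("before", before), ("after", after)]
    elif op == "insert":
        images = [("after", after)]
    else:  # delete
        images = [("before", before)]
    return [{"seq": next(_seq), "op": op, "image": kind, "ts": ts, **image}
            for kind, image in images]

# Usage: the full lifecycle of one row becomes four journal inserts
journal = []
journal += timekey_rows("insert", after={"id": 1, "qty": 5})
journal += timekey_rows("update", before={"id": 1, "qty": 5},
                        after={"id": 1, "qty": 7})
journal += timekey_rows("delete", before={"id": 1, "qty": 7})
```

Because nothing is ever updated or deleted in the journal, the full history of every row remains queryable by time.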

© 2019 HVR

Try now Contact us