Comprehensive real-time data replication. Simplified.
A subscription includes everything you need for efficient, high-volume data replication and integration, whether you are moving data between databases, to the cloud, across multiple clouds, or into a data lake or data warehouse.
Low-impact data movement, even at high volumes, with Log-Based Change Data Capture (CDC) and compression. Fast access to data for analytics tools, a stellar customer service team, and more.
The most efficient way to replicate and integrate data in hybrid and complex environments is with HVR’s distributed, flexible and modular architecture. Design your integration flow the way you need it and stream data from one source to many, all at once, without needing to define your setup multiple times.
HVR understands the importance of data security. HVR is the only real-time data replication solution that enables routing through a firewall proxy in hybrid environments. Data is also encrypted for an added layer of protection.
With the combination of flexibility, performance and robustness, HVR has proven to be a very good choice to embed in our flight planning system.
– Senior Database Software Architect, Lufthansa
Table Creation and Initial Load
Mapping data between source and target is automated and made easy.
Log-Based Change Data Capture
Only the changed data is moved between source and target, which keeps the impact on your systems low.
Data Validation and Compare
Be assured your data is accurate. Compare data before consumption. Live Compare allows comparison of data in flight.
Insights: Statistics and more
This tool gives you the ability to view how data is moving in real time, act proactively, and identify chokepoints.
Broad source and target support
Whether you’re replicating your data to a data lake or data warehouse, from on-prem to the cloud, we support it.
HVR has proven its stability and robustness. It keeps on running and running with minimal maintenance effort. HVR guarantees secure delivery of all our data.
– Director of IT, PostNL
Sources — Oracle and SQL Server On-Prem
Target — Amazon Redshift Data Lake
An organization using on-premises Oracle and SQL Server databases as sources and a data lake in Amazon Redshift can scale to many sources, with capture running on the individual database servers. These servers send compressed (and encrypted) changes into the AWS cloud to be applied to Redshift. Changes land in S3 and are copied into Redshift tables, followed by set-based SQL statements on the target tables, so that in aggregate the analytical database can keep up with the transaction load from multiple sources.
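The set-based apply pattern described above can be illustrated with a small, self-contained sketch. This is not HVR's implementation; it uses an in-memory SQLite database with hypothetical table names to show why applying a staged batch of changes with a couple of set-based statements scales better than row-by-row DML:

```python
import sqlite3

# Illustrative sketch (not HVR's actual mechanism): a batch of captured
# changes lands in a staging table, then two set-based statements apply
# the whole batch to the target at once. All names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT);
    CREATE TABLE orders_staging (id INTEGER, status TEXT, op TEXT);
    INSERT INTO orders VALUES (1, 'new'), (2, 'new');
    -- a compressed batch of changes, landed in the staging area
    INSERT INTO orders_staging VALUES
        (1, 'shipped', 'U'),   -- update
        (2, NULL,      'D'),   -- delete
        (3, 'new',     'I');   -- insert
""")
# Set-based apply: one DELETE plus one INSERT cover the entire batch,
# so the analytical target keeps up with the aggregate transaction load.
conn.executescript("""
    DELETE FROM orders WHERE id IN (SELECT id FROM orders_staging);
    INSERT INTO orders
        SELECT id, status FROM orders_staging WHERE op IN ('I', 'U');
""")
rows = conn.execute("SELECT id, status FROM orders ORDER BY id").fetchall()
print(rows)  # [(1, 'shipped'), (3, 'new')]
```

The same idea applies to Redshift: changes staged in S3 are copied into staging tables, then merged with bulk statements rather than one DML operation per source row.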
Data integration architecture: understanding agents
Whether to use an agent when performing data integration is a common question, especially for use cases involving Log-Based Change Data Capture (CDC) and continuous, near real-time delivery. With agents, changes are captured as close to the source as possible, for low impact and high performance. Agents are optional with HVR; whether to deploy them depends on your goals.
In this video, HVR’s CTO, Mark Van de Wiel, goes into detail about:
- The pros and cons of using an agentless setup, versus an agent setup
- When to consider one over the other
- Two common distributed architectures using an agent setup
Integration / Consolidation
Data Warehouse / Data Lake
HVR provides administrators access to rich statistics it retrieves from the detailed data replication logs. Administrators can get a graphical overview of system activity over time, or slice and dice the data to drill into the detailed DML operations on individual database objects. An integrated process can store historical statistics in the database, or users can simply copy the metrics and create their own charts.
Yes, in fact that is one of the advantages of HVR’s architecture. HVR can capture from a single Oracle instance, queue the captured changes on the hub, and then integrate those changes to as many targets as needed. HVR does not have any limitation on the number of targets.
Yes, HVR can be configured to capture data from many sources and replicate to a single target. Many data warehousing solutions require data collected from any number of sources to be combined into a single target warehouse database or into separate target schemas. Some applications and data are designed so that there will not be any conflicts on primary key constraints. If that is not the case for your scenario, HVR offers the option to add extra columns, populated with values from the replication metadata, to ensure you don’t have conflicting primary keys.
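The extra-column technique can be sketched in a few lines. This is illustrative only, not HVR syntax: an in-memory SQLite target with a hypothetical source-identifier column shows how a composite key lets rows with the same source-side primary key coexist:

```python
import sqlite3

# Illustrative sketch (not HVR syntax): consolidating two sources whose
# primary keys overlap by adding an extra source-identifier column on
# the target. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers_consolidated (
        source_db TEXT NOT NULL,      -- extra column set from metadata
        id        INTEGER NOT NULL,   -- original source primary key
        name      TEXT,
        PRIMARY KEY (source_db, id)   -- composite key avoids conflicts
    )
""")
# Both sources carry a customer with id=1; the extra column keeps them apart.
for source, cid, name in [("oracle_prod", 1, "Alice"),
                          ("sqlserver_eu", 1, "Bob")]:
    conn.execute("INSERT INTO customers_consolidated VALUES (?, ?, ?)",
                 (source, cid, name))
count = conn.execute(
    "SELECT COUNT(*) FROM customers_consolidated").fetchone()[0]
print(count)  # 2 rows coexist without a primary key conflict
```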
No, tables do not need to have the same layout. You can instruct HVR to ignore certain columns or populate extra columns during replication. Column values can also be changed through transformations, or enriched with the results of querying other tables, either on the source or the target. HVR also makes additional transaction metadata available to be mapped to columns, such as source timestamps or transaction identifiers.
To minimize any impact on our network, can we compress the change data before it is sent over the network?
Yes, HVR compresses the change data by default before sending it over the network, using an internal algorithm that achieves very high compression rates. The high compression ratio reduces the impact on your corporate network while adding little overhead on the source.
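The effect is easy to demonstrate. HVR uses its own internal algorithm, so the sketch below substitutes Python's standard-library zlib purely to show the idea: row-change records are highly repetitive, so they compress very well before transfer:

```python
import zlib

# Illustrative only: repetitive row-change data compresses very well,
# which is why compressing before network transfer cuts bandwidth use.
# (HVR uses its own internal algorithm; zlib here just shows the idea.)
changes = b"".join(
    b"UPDATE;orders;id=%d;status=shipped\n" % i for i in range(1000)
)
compressed = zlib.compress(changes, 6)
ratio = len(changes) / len(compressed)
print(f"{len(changes)} -> {len(compressed)} bytes ({ratio:.0f}x smaller)")
```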
When instantiating the target database, does the user have to pre-create the target tables, or can HVR help with that?
The initial load of the target tables takes place by running an HVR Refresh operation. The Refresh can create all the target tables if they don’t already exist. The target tables are created based on the DDL of the source tables, in conjunction with any column re-mapping the user has configured in the replication channel.
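Conceptually, that step reads the source table's column definitions and emits target DDL with the configured re-mapping applied. The sketch below is a hypothetical illustration of the idea using SQLite's catalog, not HVR's actual mechanism; the table, column, and mapping names are made up:

```python
import sqlite3

# Hypothetical sketch of a refresh-style "create target from source DDL
# plus column re-mapping" step; not HVR's actual implementation.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE employees (emp_id INTEGER, full_name TEXT, dept TEXT)")

# Read the source table's column definitions from the catalog...
cols = src.execute("PRAGMA table_info(employees)").fetchall()
rename = {"full_name": "name"}          # user-configured column re-mapping
# ...and generate the target DDL with the re-mapping applied.
ddl_cols = ", ".join(f"{rename.get(c[1], c[1])} {c[2]}" for c in cols)
target_ddl = f"CREATE TABLE employees ({ddl_cols})"
print(target_ddl)
# CREATE TABLE employees (emp_id INTEGER, name TEXT, dept TEXT)

tgt = sqlite3.connect(":memory:")
tgt.execute(target_ddl)   # target table created without manual pre-creation
```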
Can HVR convert all insert, update, and delete operations and insert them into a time-based journal or history table?
Yes, HVR Integrate provides a feature known as TimeKey which converts all changes (inserts, updates, and deletes) into inserts into separate tables. HVR will log both the before and after image for update operations, the after image for insert operations, and the before image for delete operations. HVR also logs additional transaction metadata to provide more time-based details for every row replicated. HVR will also automatically create the tables with the preferred structure for TimeKey integration.
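The journaling rules above (after image for inserts, before and after images for updates, before image for deletes) can be sketched as follows. This is an illustrative model of a time-keyed journal, not HVR's TimeKey implementation, and all table and column names are hypothetical:

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative model of a time-keyed journal table (not HVR's TimeKey
# implementation): every change becomes an insert, and updates record
# both the before and after image. Names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE orders_journal (
        op_time TEXT,     -- transaction metadata: when the change happened
        op_type TEXT,     -- I (insert), U (update), D (delete)
        image   TEXT,     -- 'before' or 'after'
        id      INTEGER,
        status  TEXT
    )
""")

def journal(op_type, before=None, after=None):
    """Convert one captured change into journal-table inserts."""
    ts = datetime.now(timezone.utc).isoformat()
    for image, row in (("before", before), ("after", after)):
        if row is not None:
            conn.execute("INSERT INTO orders_journal VALUES (?, ?, ?, ?, ?)",
                         (ts, op_type, image, row[0], row[1]))

journal("I", after=(1, "new"))                         # insert: after image
journal("U", before=(1, "new"), after=(1, "shipped"))  # update: both images
journal("D", before=(1, "shipped"))                    # delete: before image
rows = conn.execute(
    "SELECT op_type, image, status FROM orders_journal").fetchall()
print(rows)
```

Replaying the journal in op_time order reconstructs the table's state at any point in time, which is what makes this structure useful as a history table.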