A subscription includes everything you need for efficient data replication and integration, whether you need to move your data between databases, to a cloud or multi-cloud environment, or into a data lake or data warehouse.
Get low-impact data movement, even at high volumes, with log-based Change Data Capture (CDC) and compression. Enjoy fast data benefits with analytics tools, a stellar customer service team, and more.
The most efficient way to replicate and integrate data in hybrid and complex environments is with HVR’s distributed, flexible and modular architecture. Design your integration flow the way you need it and stream data from one source to many, all at once, without needing to define your setup multiple times.
HVR understands the importance of data security. HVR is the only real-time data replication solution that enables routing through a firewall proxy in hybrid environments. Data is also encrypted for an added layer of protection.
With the combination of flexibility, performance and robustness, HVR has proven to be a very good choice to embed in our flight planning system.
– Senior Database Software Architect, Lufthansa
Mapping data between source and target is automated and made easy.
Only the changes are moved between source and target, a low impact way to move data.
Have assurance that your data is accurate by comparing it before consumption. Live Compare capabilities allow you to compare data in flight.
This tool gives you the ability to view how data is moving in real time, be proactive, and identify chokepoints.
HVR has proven its stability and robustness. It keeps on running and running with minimal maintenance effort. HVR guarantees secure delivery of all our data.
– Director of IT, PostNL
Log-based Change Data Capture (CDC) takes place on, or as close as possible to, the source server. This is where relevant transaction data is extracted and compressed. The data is then sent across the wire to the central hub, the distributor. The hub guarantees recoverability and, as needed, queues the compressed transactions.
Separately from capture, an integrate process picks up the compressed changes for its destination and sends them to the target, where they are unpacked and applied using the most efficient method for that target.
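The capture, hub, and integrate stages described above can be sketched as a minimal Python pipeline. This is an illustration of the flow only, not HVR's implementation: the function and class names are assumptions, the queue is in memory (HVR's hub persists changes for recoverability), and JSON with zlib stands in for HVR's own change format and compression.

```python
import json
import zlib
from collections import deque

def capture(transactions):
    """Sketch of capture: extract relevant changes at the source and compress them."""
    payload = json.dumps(transactions).encode("utf-8")
    return zlib.compress(payload)

class Hub:
    """Sketch of the central hub: queues compressed transactions per destination."""
    def __init__(self):
        self.queue = deque()  # in-memory stand-in; the real hub persists for recoverability

    def receive(self, compressed):
        self.queue.append(compressed)

    def next_batch(self):
        return self.queue.popleft() if self.queue else None

def integrate(compressed, apply_fn):
    """Sketch of integrate: unpack compressed changes at the target and apply them."""
    changes = json.loads(zlib.decompress(compressed))
    for change in changes:
        apply_fn(change)

# One captured batch flows source -> hub -> target.
hub = Hub()
hub.receive(capture([{"op": "insert", "table": "orders", "id": 1}]))

applied = []
integrate(hub.next_batch(), applied.append)
```

Note that capture and integrate run independently: integrate pulls batches from the hub at its own pace, which is what lets one source feed many targets.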
Sources — Oracle and SQL Server On-Prem
Target — Amazon Redshift Data Lake
An organization using on-premises Oracle and SQL Server databases as sources and a data lake in Amazon Redshift can scale to many sources, with capture running on the individual database servers. These servers send compressed (and encrypted) changes into the AWS cloud to be applied to Redshift. Changes bound for Redshift go through S3 and are copied into Redshift tables, followed by set-based SQL statements on the target tables, so that in aggregate the analytical database can keep up with the transaction load from multiple sources.
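The S3-staging-plus-set-based-apply pattern above can be sketched as the SQL a target process might issue against Redshift. This is a hedged illustration of the general pattern, not HVR's generated SQL: the helper name, table names, S3 URI, IAM role, and the delete-then-insert merge strategy are all assumptions for the example.

```python
def stage_and_merge_sql(table, s3_uri, iam_role, key="id"):
    """Build illustrative Redshift statements: COPY staged changes from S3
    into a temp table, then apply them to the target with set-based SQL."""
    stage = f"{table}_stage"
    return [
        f"CREATE TEMP TABLE {stage} (LIKE {table});",
        # Bulk-load the compressed change files staged in S3.
        f"COPY {stage} FROM '{s3_uri}' IAM_ROLE '{iam_role}';",
        # Set-based apply: replace matching rows rather than row-by-row DML.
        f"DELETE FROM {table} USING {stage} WHERE {table}.{key} = {stage}.{key};",
        f"INSERT INTO {table} SELECT * FROM {stage};",
        f"DROP TABLE {stage};",
    ]

statements = stage_and_merge_sql(
    "orders",
    "s3://example-bucket/changes/orders/",  # hypothetical staging location
    "arn:aws:iam::123456789012:role/redshift-copy",  # hypothetical role
)
for stmt in statements:
    print(stmt)
```

The point of the set-based apply is that one DELETE/INSERT pair handles an entire batch of changes, which is far cheaper on a columnar warehouse like Redshift than replaying each source transaction individually.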
The question of whether or not to use an agent when performing data integration is common, especially around use cases with log-based Change Data Capture (CDC) and continuous, near real-time delivery. Through the use of agents, changes are captured as close to the source as possible for low impact and high performance. Agents are optional with HVR; whether to deploy them depends on your goals.
In this video, HVR's CTO, Mark Van de Wiel, goes into detail about:
Real-Time Reporting Migrations
(Reverse Post Migration)
Active / Active Standby
Data Warehouse / Data Lake
Multi-Way Active / Active
HVR provides administrators access to rich statistics it retrieves from the detailed data replication logs. Administrators can get a graphical overview of system activity over time, or slice and dice the data to drill into the detailed DML operations on individual database objects. An integrated process can store historical statistics in the database, or users can simply copy the metrics and create their own charts.
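Slicing the statistics down to DML operations per object, as described above, amounts to a simple aggregation. The sketch below assumes a hypothetical row format for metrics copied out of the replication logs; HVR's actual log and statistics schema will differ.

```python
from collections import Counter

# Hypothetical metric rows copied from replication logs: (table, operation, count).
log_stats = [
    ("orders", "insert", 120),
    ("orders", "update", 40),
    ("customers", "insert", 15),
    ("orders", "delete", 5),
]

def dml_totals(stats):
    """Aggregate DML operation counts per (table, operation) pair,
    the kind of slice an administrator might chart over time."""
    totals = Counter()
    for table, op, count in stats:
        totals[(table, op)] += count
    return totals

totals = dml_totals(log_stats)
print(totals[("orders", "insert")])  # 120
```

From a structure like this it is straightforward to pivot by table, by operation type, or by time window and feed the result into any charting tool.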