Challenges and Solutions for Geographical Replication

One of the most common real-time data integration scenarios is still on-premises real-time reporting and Business Intelligence. Such a scenario involves one or more source databases (e.g. Oracle, SQL Server, DB2) and one target database (e.g. Teradata, Greenplum, Vector, SQL Server, Oracle), often all located in the same data center. The real-time reporting use case generally has relatively low complexity:

  • Uni-directional replication with no end-user changes made directly on the target.
  • High network bandwidth on a reliable network between source(s) and target.

Geographical replication introduces a number of additional challenges that I will discuss in this post.

A traditional use case for geographical distribution is a database application that requires local/regional access to a database in order to perform well. The need for a local database may be driven by network latency, reliability, bandwidth, or a combination of these. A good example that illustrates this use case is on-shore versus off-shore data processing, but there are many others.

Consider the on-shore/off-shore data processing example and contrast this to the on-premises real-time reporting use case:

  • Data has to be replicated in two (or more) directions, and end users (or applications) modify data on both sides.
  • Network connectivity is unreliable and bandwidth may be low most of the time.

Geographical distribution often involves active/active replication, which generally requires more thought than uni-directional replication: how are unique identifiers created, when should triggers fire, how can we avoid or deal with data collisions, and so on. There is a great white paper on our website that discusses such challenges.
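
To make the unique identifier question concrete, here is a minimal sketch of two common strategies for generating keys that cannot collide across sites: prefixing a local sequence with a site identifier, or using random UUIDs. The site name and function names are hypothetical illustrations for this post, not part of any particular product:

```python
import uuid

# Hypothetical site identifier; in a real deployment each location would be
# configured with its own value (e.g. "onshore", "offshore-1").
SITE_ID = "onshore"

_local_sequence = 0

def next_site_scoped_key():
    """Key that cannot collide with keys generated elsewhere, because the
    local sequence number is prefixed with this site's identifier."""
    global _local_sequence
    _local_sequence += 1
    return f"{SITE_ID}-{_local_sequence}"

def next_uuid_key():
    """Alternative: a random UUID is unique with overwhelming probability,
    no matter which site generates it."""
    return str(uuid.uuid4())
```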

The on-shore/off-shore example introduces additional challenges beyond regular active/active replication:

  • Network connectivity may frequently be disrupted, yet the DBA does not want a day (and night) job monitoring and managing data replication.
  • Network bandwidth can be extremely low and latency very high, so optimized network communication with maximum compression is important.
  • A temporary lack of network connectivity increases the likelihood that the same data is changed in multiple locations (a simple conflict-resolution sketch follows this list).
  • Depending on the size of the organization there may be dozens of active sites rather than just a few, which introduces a management challenge.
  • Etc.
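
As an illustration of the conflict point above, the sketch below shows one common resolution rule, last-writer-wins with a deterministic tie-breaker, applied when the same row was changed at two sites while the link between them was down. The types and field names are assumptions for the example, not the behavior of any specific replication product:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RowVersion:
    site: str             # site where the change was committed
    changed_at: datetime  # commit timestamp of the change
    values: dict          # column values after the change

def resolve_conflict(a: RowVersion, b: RowVersion) -> RowVersion:
    """Last-writer-wins: keep the version with the latest commit timestamp.
    Ties are broken on the site name so that every site, applying the same
    rule independently, converges to the same surviving row."""
    if a.changed_at != b.changed_at:
        return a if a.changed_at > b.changed_at else b
    return a if a.site < b.site else b
```

Timestamp-based rules are only one option; other strategies keep both versions and route the conflict to a person or application for review.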

HVR provides a great technology to address these challenges. Fundamentally, its distributed architecture makes a complex active/active setup much easier to manage than many point-to-point replication tools do. To see the manageability benefits, check out this video for a simplified example. On top of that, automatic recovery through the built-in scheduler and optimized network communication with high data compression ratios are core to the technology.
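
To see why compression matters on a slow link, here is a small, generic sketch (plain Python with zlib, not HVR's actual transport) that compresses a batch of captured changes before shipping them; repetitive change records typically compress very well:

```python
import json
import zlib

# Hypothetical batch of captured changes waiting to be sent over a
# low-bandwidth WAN link.
changes = [
    {"table": "orders", "op": "INSERT", "row": {"id": i, "status": "NEW"}}
    for i in range(1000)
]

payload = json.dumps(changes).encode("utf-8")
compressed = zlib.compress(payload, level=9)

print(f"raw: {len(payload):,} bytes, compressed: {len(compressed):,} bytes")
```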

Sound interesting? If you’d like to learn more, check out the HVR technical white paper!

About Mark

Mark Van de Wiel is the CTO for HVR. He has a strong background in data replication as well as real-time Business Intelligence and analytics.

