Server Requirements

Last updated on Jul 22, 2020

This section describes the requirements for the HVR hub server machine, as well as for the servers running the HVR remote agent at the source and target locations.

Hub Server

The HVR hub is an installation of HVR on a server machine (the hub server). The HVR hub orchestrates replication in logical entities called Channels. The hub runs a Scheduler service that manages the jobs moving data between source location(s) and target location(s) (Capture jobs, Integrate jobs, Refresh jobs, Compare jobs).

For the list of databases that HVR supports as a hub database, see section Hub Database in Capabilities.

In order to operate, the HVR Scheduler must connect to a repository database consisting of a number of relational tables. By design, HVR hub processing is limited to job orchestration, recording system state information, and temporary storage of router files and transaction files. During the Refresh process, no data is persisted on the HVR hub, so the hub acts as a simple pass-through. The HVR hub therefore needs storage to hold the following:

  1. HVR state information ($HVR_CONFIG)
  2. Repository database (including statistics retention, location information, etc.)
  3. HVR installation (<100 MB standard)

Resource Consumption

HVR is designed to distribute work between HVR remote agents. As a result, resource-intensive processing rests on the HVR remote agents, with the HVR hub machine performing as little processing as possible. The HVR hub machine controls all the jobs that move data between sources and targets, and stores the system's state to enable recovery without any loss of changes. All data transferred between sources and targets passes through the HVR hub machine, including data from a one-time load (hvrrefresh) and a detailed row-wise comparison (hvrcompare).

The HVR hub machine needs resources to:

  • Run the HVR Scheduler.
  • Spawn jobs to perform one-time data movement (Compare and Refresh) and continuous replication (Capture and Integrate). In all cases, the resource-intensive part of data processing, including data compression, runs on an HVR remote agent machine, with the HVR hub machine simply passing the data from source to target. During a refresh or compare, the data is transferred directly, skipping the disk. During normal capture activity, data is temporarily stored on disk to allow the quickest possible recovery, with capture(s) and integrate(s) running asynchronously for optimal efficiency. If the data transfer is encrypted, the HVR hub machine decrypts the data and encrypts it again (typically using different encryption certificates) as needed to deliver it to the target.
  • Transfer compressed data from source to target. Since the amount of data transferred is reduced by 5-10 times, large volumes of data can be moved without the need for very high network bandwidth.
  • Collect metrics from the log files to be stored in the repository database.
  • Provide real-time process metrics to any Graphical User Interfaces (GUIs) connected to the HVR hub machine. HVR runs as a service, regardless of whether any GUI is connected, and real-time metrics are provided for monitoring purposes.
  • Allow configuration changes in the design environment.
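The compressed-transfer point above can be turned into a rough bandwidth estimate. A minimal sketch, where the 5-10x compression ratio comes from the text and the change rate is an illustrative assumption, not an HVR default:

```python
def required_mbps(change_rate_gb_per_hour: float, compression_ratio: float) -> float:
    """Sustained network throughput needed, in megabits per second."""
    compressed_gb_per_hour = change_rate_gb_per_hour / compression_ratio
    # GB/hour -> Gbit/hour -> Mbit/s (1 GB = 8 Gbit, decimal units)
    return compressed_gb_per_hour * 8 * 1000 / 3600

# 100 GB/hour of source changes compressed 5x needs roughly 44 Mbit/s sustained.
print(round(required_mbps(100, 5), 1))
```

Even a heavily loaded channel therefore fits comfortably within the network interfaces listed in the sizing table below.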


CPU: every HVR job spawns a process – i.e. one for every Capture, one for every Integrate. The CPU utilization of each of these processes on the hub machine is generally very low unless heavy transformations are processed on the hub machine (i.e. depending on the channel design). In addition, Refresh or Compare may spawn multiple processes when running. A row-by-row refresh or compare can use a lot of CPU.

Memory: memory consumption is slightly higher on the hub machine than on the source, but still fairly modest. Some customers run dozens of channels on a dedicated hub machine with a fairly modest configuration. Row-by-row refresh and compare may use a lot of memory but are not run on an ongoing basis.

Storage space: storage utilization on the hub machine can be high. If Capture is running but Integrate is not running for at least one destination, HVR accumulates transaction files on the hub machine. These files are compressed, but depending on the activity on the source database and how long it takes until the target starts processing transactions, a lot of storage space may be used. Start with at least 10 GB, and allow more if the hub machine manages multiple channels or network connectivity is unreliable. A large row-by-row refresh or compare can also use a lot of storage space.
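The accumulation described above can be sized with a back-of-the-envelope calculation. A minimal sketch; all figures are illustrative assumptions, not HVR defaults:

```python
def backlog_gb(change_rate_gb_per_hour: float,
               compression_ratio: float,
               outage_hours: float) -> float:
    """Compressed transaction-file volume accumulated while a target is down."""
    return change_rate_gb_per_hour / compression_ratio * outage_hours

# 20 GB/hour of source changes, 5x compression, target down for 24 hours: ~96 GB.
print(backlog_gb(20, 5, 24))
```

Running this against your own change rate and worst-case outage window gives a more realistic floor than the generic 10 GB starting point.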

I/O: if HVR is running Capture and keeping up with the transaction log generation on a busy system that processes many small transactions, then transaction files will be created at a rapid pace. Make sure that the file system can handle frequent I/O operations. Typically, a storage system cache or file system cache or SSD (or a combination of these) can take care of this.

Sizing Guidelines for Hub Server

The most important factor impacting the HVR hub size is whether the hub also performs the role of a source and/or a target HVR agent. General recommendations include:

  • Co-locate the HVR hub with a production source database only if the server(s) hosting the production database has (have) sufficient available resources (CPU, memory, storage space, and I/O capacity) to support the HVR hub for your setup.
  • HVR Capture may run against a physical standby of the source database with no direct impact on the source production database. In this case, consider CPU utilization of the capture process(es) running on the source database. For the Oracle RAC production database, there is one log parser per node in the source database, irrespective of the standby database configuration.
  • Sorting data to coalesce changes for burst mode and to perform a row-wise data compare (also part of the row-wise refresh) is CPU, memory, and (temporary) storage space intensive.
  • Utilities to populate the database target like TPT (for Teradata) and gpfdist (for Greenplum) can be very resource-intensive.

Storage for HVR_CONFIG

The most important resource for the HVR hub machine to function well is fast I/O (in terms of IOPS), especially for the $HVR_CONFIG directory, where runtime data and state are written. To support capture on a busy source system, transaction files can be written to disk every second or two, with updates to the (tiny) capture state file at the same rate, as well as very frequent updates to the log files that track activity. With multiple channels running, there will be many small I/O operations into the $HVR_CONFIG directory every second. A disk subsystem with a sizable cache, preferably on Solid-State Drives (SSDs), is a good choice for HVR hub storage.
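One way to sanity-check a candidate file system for this small-synchronous-write pattern is a quick latency probe. A minimal sketch using only the Python standard library; pass the directory that will host $HVR_CONFIG as `path`:

```python
import os
import tempfile
import time

def small_write_latency_ms(path=None, writes=200) -> float:
    """Average latency, in ms, of a 4 KB write followed by an fsync."""
    with tempfile.NamedTemporaryFile(dir=path) as f:
        start = time.perf_counter()
        for _ in range(writes):
            f.write(os.urandom(4096))  # mimic a tiny state/transaction file write
            f.flush()
            os.fsync(f.fileno())       # force the write through to stable storage
        return (time.perf_counter() - start) / writes * 1000

print(small_write_latency_ms())
```

On SSD-backed or well-cached storage the result is typically well under a millisecond; values of several milliseconds suggest the file system may struggle with many channels.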

Hub Database

The HVR hub stores channel metadata, a very limited amount of runtime data, and aggregated process metrics (statistics) in its repository database. The most important resource for the hub database is storage, and even quite modest provisioning suffices: up to 20 GB of disk space allocated for the repository database supports virtually all hub setups. Traditionally, the repository database is stored locally on the HVR hub, but in some cases a database service hosts the repository database away from the HVR hub. The main advantage of a local repository database is a lower likelihood that the database connection fails (all data flows stop in that case, because the Scheduler fails); a remote database, in turn, offloads whatever resources the repository requires to another machine.

The statistics stored in the repository database (hvr_stats) can take up a large amount of storage space.

Sizing Guidelines for Hub Database

Review the guidelines and decide what the best HVR hub configuration is for your situation. For example:

  • Your HVR hub may capture changes for one of multiple sources, using HVR remote agents for the other sources.
  • One of your sources may be a heavily-loaded 8-node Oracle Exadata database that requires far more resources to perform CDC than a single mid-size SQL Server database.
  • You may plan to run very frequent (resource-intensive) CDC jobs, etc.

The change rate mentioned in the sizing guideline below is the volume of transaction log changes produced by the database (irrespective of whether or not HVR captures all table changes from the source or only a subset).

Small
  • Resources: CPU cores 4-8, memory 16-32 GB, disk 50-500 GB SSD, network 10GigE HBA (or equivalent)
  • Standalone hub: 5 channels with average change rate up to 20 GB/hour
  • With Capture, no Integrate: 2 channels with average change rate up to 20 GB/hour
  • With Integrate, no Capture: 2 channels with average change rate up to 20 GB/hour
  • With Capture and Integrate: 1 channel processing up to 20 GB/hour

Medium
  • Resources: CPU cores 8-32, memory 32-128 GB, disk 300 GB - 1 TB SSD, network 2x10 GigE HBA
  • Standalone hub: 20 channels, up to 5 with a high average change rate of 100 GB/hour
  • With Capture, no Integrate: 8 channels, up to 2 with a high average change rate of 100 GB/hour
  • With Integrate, no Capture: 6 channels, up to 2 with a high average change rate of 100 GB/hour
  • With Capture and Integrate: 4 channels, up to 2 with a high average change rate of 100 GB/hour

Large
  • Resources: CPU cores 32+, memory 128 GB+, disk 1 TB+ SSD, network 4+ x 10 GigE HBA
  • Standalone hub: 50+ channels
  • With Capture, no Integrate: 15+ channels
  • With Integrate, no Capture: 12+ channels
  • With Capture and Integrate: 8+ channels

Monitoring Disk Space on HVR Hub

Even though the HVR hub uses limited storage, a shortage of free disk space can significantly impact repository database performance and therefore HVR performance. Standard database monitoring tools can be employed to verify the amount of disk space left on the HVR hub server, taking into account the type of repository database installed. Since every database has unique requirements for the storage needed for operational health, set the alerting thresholds accordingly; the values here are guidelines only, not reference architectures. In most cases, disk alerts should be set at 80%, 85%, and 90% capacity. Above 90%, treat the situation as a production support call and immediately add disk or free up storage. Standard database monitoring solutions can also help monitor disk usage from the hub database perspective.
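A minimal sketch of this 80/85/90% alerting using only the Python standard library; production hubs would normally wire the same thresholds into their existing monitoring stack:

```python
import shutil

# Thresholds as described above: 80/85/90% used, checked highest first.
THRESHOLDS = [(90, "CRITICAL"), (85, "WARNING"), (80, "NOTICE")]

def alert_level(used_pct: float):
    """Return the alert label for a disk-usage percentage, or None if below 80%."""
    for limit, label in THRESHOLDS:
        if used_pct >= limit:
            return label
    return None

def disk_alert(path: str):
    """Check the file system holding `path` (e.g. the repository database volume)."""
    usage = shutil.disk_usage(path)
    return alert_level(usage.used / usage.total * 100)
```

A CRITICAL result corresponds to the "immediately add disk or free up storage" case above.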

Source Location Server

The HVR remote agent machine on the capture location needs resources to perform the following functions:

  • For one-time data loads (refresh) and row-wise compare, the HVR remote agent machine retrieves data from a source database, compresses it, optionally encrypts it, and sends it to the HVR hub. For optimum efficiency, data is not written to disk during these operations. The matching source database session(s) may use a fair amount of database (and system) resources; resource consumption for Refresh and Compare is only intermittent.
  • For bulk compare jobs, the HVR remote agent machine computes a checksum for all the data.
  • To set up CDC during Initialize, the HVR remote agent machine retrieves metadata from the database and adds table-level supplemental logging as needed.
  • During CDC, resources are needed to read the logs, parse them, and store information about in-flight transactions in memory (until a threshold is reached and additional change data is written to disk). The amount of resources required for this task varies from one system to another, depending on numerous factors, including:
      • the log read method (direct or through an SQL interface),
      • data storage for the logs (on disk or in, for example, Oracle Automatic Storage Management),
      • whether the system is clustered or not,
      • the number of tables in the replication and the data types for columns in these tables, and
      • the transaction mix (the ratio of inserts versus updates versus deletes, and whether there are many small, short-running transactions versus larger, longer-running transactions).

Log parsing is generally the most CPU-intensive operation and can use up to 100% of a single CPU core when capture is running behind. HVR uses one log parser per database thread, and every database node in an Oracle cluster constitutes one thread.

For a real-world workload with the HVR agent running on the source database server, it is extremely rare to see the HVR remote agent consume more than 10% of total system resources during CDC; typical consumption is well below 5%.

For an Oracle source database, HVR will periodically write the memory state to disk to limit the need to re-read archived log files to capture long-running transactions. Consider storage utilization for this if the system often processes large, long-running transactions.

Resource Consumption

  • CPU: every channel uses up to one CPU core. If HVR runs behind and there is no bottleneck accessing the transaction logs or memory, HVR can use up to a full CPU core per channel. In a running system with HVR reading the tail end of the log, CPU consumption per channel is typically much lower than 100% of a core. Most of the CPU is used to compress transaction files. Compression can be disabled to lower CPU utilization, but this increases network utilization (between the source HVR remote agent machine and the HVR hub, and between the HVR hub and any target HVR remote agent machine). Refresh and Compare operations, which are not run on an ongoing basis, add as many processes as the number of tables refreshed/compared in parallel. In general, the HVR process itself uses relatively few resources, but the associated database job uses a lot of resources to retrieve data (if parallel select operations run against the database, Refresh or Compare can use up to 100% of the CPU on the source database).
  • Memory: memory consumption is up to 64 MB per open transaction per channel. Generally, the 64 MB per transaction is not reached and much less memory is used, but this depends on the size of the transactions and what portion of them touches tables that are part of a channel. Note that the 64 MB threshold can be adjusted (upwards or downwards).
  • Storage space: the HVR installation is about 100 MB in size. While running, Capture uses no additional disk space until the 64 MB memory threshold is exceeded and HVR starts spilling transactions to disk. HVR writes compressed files, but in rare cases (for example, large batch jobs modifying tables in the channel that only commit at the end) HVR may write a fair amount of data to disk; start with at least 5 GB for $HVR_CONFIG. Note that HVR Compare may also spill to disk in this area. If transaction logs are backed up aggressively so that they become unavailable to the source database, consider using hvrlogrelease to keep copies of the transaction logs until HVR no longer needs them. This can add a lot of storage space to the requirements, depending on the log generation volume of the database and how long transactions run (whether they are idle or active makes no difference here).
  • I/O: every channel will perform frequent I/O operations to the transaction logs. If HVR is current, then each of these I/O operations is on the tail end of the log, which could be a source of contention in older systems (especially if there are many channels). Modern systems have a file system or storage cache, and frequent I/O operations should barely be noticeable.
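The 64 MB per-transaction threshold above supports a back-of-the-envelope memory estimate for capture. A minimal sketch; the transaction count and fill fraction are illustrative assumptions, not measured HVR values:

```python
SPILL_THRESHOLD_MB = 64  # per open transaction per channel, adjustable in HVR

def capture_memory_mb(open_transactions: int, avg_fill_fraction: float) -> float:
    """Estimated in-memory footprint; beyond the threshold HVR spills to disk."""
    return open_transactions * SPILL_THRESHOLD_MB * avg_fill_fraction

# 50 concurrent open transactions, each averaging 10% of the threshold: ~320 MB.
print(capture_memory_mb(50, 0.1))
```

Because the threshold caps per-transaction memory, the worst case is bounded: lower the threshold to trade memory for disk spill, or raise it to avoid spilling large transactions.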

Target Location Server

The HVR remote agent machine on the integrate location needs resources to perform the following functions:

  • Apply data to the target system, both during a one-time load (refresh) and during continuous integration. The resource utilization for this task varies a lot from one system to another, mainly depending on whether changes are applied in so-called burst mode or using continuous integration. Burst mode requires HVR to compute a single net change per row per cycle, so that a single batch insert, update, or delete results in the correct end state for the row. For example, when a row is first inserted and then updated twice within a single cycle, the net operation is an insert with the two updates merged into the initial insert data. This so-called coalesce process is CPU intensive and even more so memory intensive, with HVR spilling data to disk if memory thresholds are exceeded.
  • Some MPP databases like Teradata and Greenplum use a resource-intensive client utility (TPT and gpfdist respectively) to distribute the data directly to the nodes for maximum load performance. Though resource consumption for these utilities is not directly attributed to the HVR remote agent machine, you must consider the extra load when sizing the configuration.
  • For data compare, the HVR integrate agent machine retrieves the data from the target system to either compute a checksum (bulk compare) or perform a row-wise comparison. Depending on the technologies involved, HVR may sort the data in order to perform the row-wise comparison, which is memory intensive and will likely spill significant amounts of data to disk (up to the total data set size).
  • Depending on the replication setup, the HVR integrate agent machine may perform extra tasks like decoding SAP cluster and pool tables using the SAP Transform or encrypt data using client-side AWS KMS encryption.
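The coalesce step described above, collapsing all changes for a row within one cycle into a single net operation, can be sketched as follows. The change-tuple format is an assumption for illustration, not HVR's internal representation:

```python
def coalesce(changes):
    """changes: iterable of (op, key, values) in commit order -> net change per key."""
    net = {}
    for op, key, values in changes:
        prev = net.get(key)
        if prev is None:
            net[key] = (op, values)
        elif op == "delete":
            if prev[0] == "insert":
                del net[key]                  # insert then delete cancels out
            else:
                net[key] = ("delete", None)
        else:
            # update after insert/update: merge the changed column values
            net[key] = (prev[0], {**(prev[1] or {}), **values})
    return net

# A row inserted and then updated twice in one cycle becomes a single insert
# carrying the merged values, matching the example in the text.
cycle = [("insert", 1, {"id": 1, "qty": 5}),
         ("update", 1, {"qty": 7}),
         ("update", 1, {"status": "shipped"})]
print(coalesce(cycle))
# {1: ('insert', {'id': 1, 'qty': 7, 'status': 'shipped'})}
```

Holding one merged row image per key for the whole cycle is what makes coalescing memory intensive, and why HVR spills to disk when thresholds are exceeded.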

With multiple sources sending data to a target, a lot of data has to be delivered by a single HVR integrate agent machine. Load balancers (both physical and software-based like AWS’s Elastic Load Balancer (ELB)) can be used to manage integration performance from many sources into a single target by scaling out the HVR integrate agent machines.

Resource Consumption

  • CPU: on the target, HVR itself typically does not use a lot of CPU, but the database session it initiates does (this also depends on whether any transformations run as part of the channel definition). A single Integrate process has a single database process that can easily use a full CPU core. Multiple channels into the same target each add one process (unless specifically configured to split into more than one Integrate process). Compare/Refresh can use more cores depending on the parallelism in HVR, and the associated database processes may each use more than one core depending on parallelism settings at the database level.
  • Memory: the memory consumption for HVR on the target is very modest unless large transactions have to be processed. Typically, less than 1 GB per Integrate is used. Row-by-row refresh and compare can use gigabytes of memory but are not run on an ongoing basis.
  • Storage space: $HVR_CONFIG on the target may be storing temporary files for row-by-row compare or refresh, and if tables are large, a significant amount of space may be required. Start with 5 GB.
  • I/O: the I/O performance for HVR on the target is generally not critical.

Monitoring Integrate Agent Machine Resources

Replication between heterogeneous sources and targets depends heavily on the computing capacity available on the integrate agent machine: data type conversions during refresh and CDC, coalescing operations in a burst cycle, computing checksums for compares, declustering and depooling tables when HVR uses SAP Xform, decrypting data received from the hub, and the like. It is therefore imperative to determine scale-out integrate agent strategies before deployment.
