Mark Van de Wiel

Is too much flexibility a bad thing?

Flexibility in software often leads to complexity. Power users, however, want the knobs to tune things for optimum results and squeeze the last few percent of performance out of the system. That complexity can make for very challenging environments, especially in global organizations where groups of engineers manage many environments.

Below are a couple of scenarios I ran into recently that make you say “just because you can does not mean you should…”.

1. The Oracle Database is a mature and very powerful piece of software. Oracle clearly documents how to install multiple versions of its software in different Oracle homes, so that a single server (or cluster) can run instances of different versions side by side. In this case, the customer had an Oracle10g and an Oracle11g installation running side by side on a Linux server. Database files were created in ASM, which complicates the picture because there can only be one ASM instance on the server. Here it must be an Oracle11g (or higher) ASM instance in order to support both Oracle10g and Oracle11g database files (with compatibility in ASM set to allow 10g files to be created in ASM). The grid installation (for ASM) was owned by an operating system user grid. However, it turned out the ASM instance was running as user oracle (ps -ef | grep pmon). OS user oracle also owned the database installations. There were also multiple operating system groups to allow more fine-grained privilege assignments, like asmdba, asmoper and asmadmin, in addition to the traditional oinstall and dba groups. The DBA mentioned that this system did not have a standard setup…

  • In this environment HVR had its own user account – hvr – to capture out of several Oracle11g database instances as well as a 10g database instance. It turned out not to be straightforward to get a setup like this to work… The lesson learned: just because you can run Oracle10g databases on Oracle11g ASM doesn’t mean you should.
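As an aside, the mismatch between the grid software owner and the user actually running ASM only showed up in the process list. A quick way to check for this kind of non-standard setup is sketched below; the diskgroup name is hypothetical, and the sqlplus statements are shown as comments since they must run inside the ASM instance:

```shell
# Sketch: find which OS user actually owns the ASM instance (the asm_pmon process).
# On the system described above this printed "oracle" even though the grid
# installation was owned by the "grid" user.
asm_owner=$(ps -eo user,comm | awk '/asm_pmon/ {print $1; exit}')
echo "${asm_owner:-no ASM instance found}"

# The diskgroup compatibility that allows 10g database files can be checked
# (and set) from the ASM instance; "data" is a hypothetical diskgroup name:
#   SQL> SELECT name, compatibility, database_compatibility FROM v$asm_diskgroup;
#   SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '10.1';
```

Comparing the process owner with the owner of the Oracle home directories is often the fastest way to spot a setup that deviates from the documented installation.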

2. Another example I recently came across was a test for an active/active setup for SQL Server.

Active/active replication is very easy to set up with HVR, yet in this environment there were unexpected performance and latency problems.

  • To explain the initial setup, let’s say there were two servers, A and B, each running SQL Server, with HVR installed on both. It turned out that lots of traffic was going back and forth between the servers, causing the problems:
    • The user connected to server A to run the HVR GUI.
    • The HVR hub ran on server B, where a remote HVR listener was also running. However, the hub’s database connection went back to server A.
    • Database capture was set up on server B to perform both local capture and remote capture (for the database on the other server).

When this configuration started running, there were simply too many data packets going in each direction for it to work well.
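A rough back-of-the-envelope calculation illustrates why this hurts. The numbers below are assumptions for illustration, not measurements from this environment: if every captured row operation has to cross the network individually, the per-packet round-trip time quickly dominates.

```shell
# Illustrative only: assume a 0.5 ms round trip between servers A and B,
# and 200,000 captured row operations, each needing its own round trip
# in the chatty setup versus a handful of batched transfers when local.
rtt_ms=0.5
ops=200000
chatty_ms=$(awk -v r="$rtt_ms" -v n="$ops" 'BEGIN { print r * n }')
echo "network wait in chatty setup: ${chatty_ms} ms"   # 100000 ms of pure network wait
```

With local connections, the same round-trip cost is paid only a few times per batch rather than per operation, which is why keeping the hub and capture connections local made such a difference.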

So the configuration was changed as follows:

  • The HVR hub was shifted to server A so that the hub database would be local.
  • The location definitions were changed to connect locally on server A, and, for server B, to connect to the HVR listener on that server and then to a local database.

Almost magically, the performance and latency problems vanished and things worked fine. The lesson learned: simply because you can easily connect to SQL Server on a remote server – even to capture changes – doesn’t mean you always should.

If you would like to see how our software can aid your organization’s replication needs, feel free to contact us!

About Mark

Mark Van de Wiel is the CTO for HVR. He has a strong background in data replication as well as real-time Business Intelligence and analytics.
