This section describes the requirements, access privileges, and other features of HVR when using Snowflake for replication. For information about compatibility and supported versions of Snowflake with HVR platforms, see Platform Compatibility Matrix.
For information about the supported data types and mapping of data types in source DBMS to the corresponding data types in target DBMS or file format, see Data Type Mapping.
For instructions to quickly set up replication using Snowflake, see Quick Start for HVR - Snowflake.
HVR requires the Snowflake ODBC driver to be installed on the machine from which HVR connects to Snowflake. For more information on downloading and installing the Snowflake ODBC driver, see the Snowflake Documentation.
For information about the supported ODBC driver version, refer to the HVR release notes (hvr.rel) available in the hvr_home directory or on the download page.
This section lists and describes the connection details required for creating a Snowflake location in HVR.
|Node |The hostname or IP address of the machine on which the Snowflake server is running. |
|Port |The port on which the Snowflake server expects connections. |
|Role |The name of the Snowflake role to use. |
|Warehouse |The name of the Snowflake warehouse to use. |
|Database |The name of the Snowflake database. |
|Schema |The name of the default Snowflake schema to use. |
|User |The username to connect HVR to the Snowflake database. |
|Password |The password of the User to connect HVR to the Snowflake database.|
Linux / Unix
Driver Manager Library
The optional directory path where the ODBC Driver Manager Library is installed. This field is applicable only to Linux/Unix operating systems.
For a default installation, the ODBC Driver Manager Library is available in /usr/lib64 and does not need to be specified. However, when UnixODBC is installed in, for example, /opt/unixodbc, the value for this field would be /opt/unixodbc/lib.
The optional directory path where the odbc.ini and odbcinst.ini files are located. This field is applicable only to Linux/Unix operating systems.
For a default installation, these files are available in /etc and do not need to be specified. However, when UnixODBC is installed in, for example, /opt/unixodbc, the value for this field would be /opt/unixodbc/etc.
The odbcinst.ini file should contain information about the Snowflake ODBC Driver under the heading [SnowflakeDSIIDriver].
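As an illustration, such an entry might look like the following; the driver path is an assumption and depends on where the Snowflake ODBC driver package was actually installed:

```ini
[SnowflakeDSIIDriver]
; The Driver path below is an example; use the actual install
; location of libSnowflake.so on your system.
Description=Snowflake ODBC Driver
Driver=/usr/lib64/snowflake/odbc/lib/libSnowflake.so
```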
|The user-defined (installed) ODBC driver used to connect HVR to the Snowflake server.|
Integrate and Refresh Target
HVR supports integrating changes into a Snowflake location. This section describes the configuration requirements for integrating changes (using Integrate and Refresh) into a Snowflake location. For the list of supported Snowflake versions into which HVR can integrate changes, see Integrate changes into location in Capabilities.
HVR uses the Snowflake ODBC driver to write data to Snowflake during continuous Integrate and row-wise Refresh. However, the preferred methods for writing data to Snowflake are Integrate with /Burst and Bulk Refresh using staging, as they provide better performance (see section 'Burst Integrate and Bulk Refresh' below).
When performing the refresh operation using slicing (option -S), a separate refresh job is created for each slice, refreshing only the rows contained in that slice. These refresh jobs must not be run in parallel; they should be scheduled one after another to avoid the risk of corruption in the Snowflake target location.
Grants for Integrate and Refresh Target
The User should have permission to read and change replicated tables.
The User should have permission to create and drop HVR state tables.
The User should have permission to create and drop tables when HVR Refresh will be used to create target tables.
Burst Integrate and Bulk Refresh
During Integrate with parameter /Burst and during Bulk Refresh, HVR can either stream data into a target database straight over the network, using a bulk loading interface specific to each DBMS, or put the data into a temporary directory ('staging file') before loading it into the target database. For more information about staging files on Snowflake, see the Snowflake Documentation.
For best performance, HVR performs Integrate with /Burst and Bulk Refresh into Snowflake using staging files. HVR implements Integrate with /Burst and Bulk Refresh (with file staging) into Snowflake as follows:
- HVR first stages data into the configured staging platform:
- Snowflake Internal Staging using Snowflake ODBC driver (default)
- AWS S3 or Google Cloud Storage using the cURL library
- Azure Blob FS using the HDFS-compatible libhdfs API
- HVR then uses the Snowflake SQL command 'copy into' to ingest data from the staging directories into the Snowflake target tables.
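The ingest step above can be sketched as follows; the stage, path, and table names are hypothetical, and the exact file format options HVR generates may differ:

```sql
-- Hypothetical example: load staged CSV files into a target table.
-- Stage name, staging path, and table name are illustrative only.
COPY INTO my_schema.my_target_table
  FROM @my_stage/hvr_staging/
  FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"');
```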
HVR supports the following cloud platforms for staging files:
Snowflake Internal Staging
By default, HVR stages data in Snowflake internal staging before loading it into Snowflake while performing Integrate with Burst and Bulk Refresh. To use Snowflake internal staging, it is not required to define the action LocationProperties on the corresponding Integrate location.
Snowflake on AWS
HVR can be configured to stage the data on AWS S3 before loading it into Snowflake. For staging the data on S3 and performing Integrate with Burst and Bulk Refresh, the following are required:
- An AWS S3 location (bucket) - to store temporary data to be loaded into Snowflake. For more information about creating and configuring an S3 bucket, refer to AWS Documentation.
- An AWS user with the AmazonS3FullAccess permission policy - to access the S3 bucket. Alternatively, an AWS user with a minimal set of permissions can also be used.
Sample JSON file with a user role permission policy for S3 location
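A minimal policy of this kind could look like the following; the bucket name is a placeholder, and the exact set of actions HVR requires may differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my_bucket_name",
        "arn:aws:s3:::my_bucket_name/*"
      ]
    }
  ]
}
```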
For more information on the Amazon S3 permissions policy, refer to the AWS S3 documentation.
- Define action LocationProperties on the Snowflake location with the following parameters:
- /StagingDirectoryHvr: the location where HVR will create the temporary staging files (e.g. s3://my_bucket_name/).
- /StagingDirectoryDb: the location from where Snowflake will access the temporary staging files. If /StagingDirectoryHvr is an Amazon S3 location, the value for /StagingDirectoryDb should be the same as /StagingDirectoryHvr.
- /StagingDirectoryCredentials: the AWS security credentials. For AWS, multiple formats are available for supplying the credentials (including encryption). For more information, see LocationProperties /StagingDirectoryCredentials.
Following are the two basic formats:
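As an illustration (the key and role values are placeholders; refer to LocationProperties /StagingDirectoryCredentials for the authoritative syntax), the two basic formats are an access key pair or an IAM role:

```
aws_access_key_id=<access_key_id>;aws_secret_access_key=<secret_access_key>
role=<AWS_IAM_role_name>
```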
By default, HVR connects to us-east-1 once to determine your S3 bucket region. If a firewall restriction or a service such as AWS PrivateLink is preventing the determination of your S3 bucket region, you can change this region (us-east-1) to the region where your S3 bucket is located by defining the following action:
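Assuming the standard HVR Environment action is meant here (an assumption based on typical HVR configurations; the region value is a placeholder), the override could look like:

```
Environment /Name=HVR_S3_BOOTSTRAP_REGION /Value=eu-west-1
```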
Snowflake on Azure
HVR can be configured to stage the data on Azure Blob storage before loading it into Snowflake. For staging the data on Azure Blob storage and performing Integrate with Burst and Bulk Refresh, the following are required:
- An Azure Blob storage location - to store temporary data to be loaded into Snowflake.
- An Azure user (storage account) - to access this location. For more information, refer to the Azure Blob storage documentation.
- Define action LocationProperties on the Snowflake location with the following parameters:
- /StagingDirectoryHvr: the location where HVR will create the temporary staging files (e.g. wasbs://myblobcontainer).
- /StagingDirectoryDb: the location from where Snowflake will access the temporary staging files. If /StagingDirectoryHvr is an Azure location, this parameter should have the same value.
- /StagingDirectoryCredentials: the Azure security credentials. The supported format for supplying the credentials is azure_account=azure_account;azure_secret_access_key=secret_key.
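Put together, a sketch of the three parameters on an Azure-staged Snowflake location might look like the following; the container, account, and key values are placeholders:

```
LocationProperties /StagingDirectoryHvr=wasbs://myblobcontainer
LocationProperties /StagingDirectoryDb=wasbs://myblobcontainer
LocationProperties /StagingDirectoryCredentials="azure_account=myaccount;azure_secret_access_key=<secret_key>"
```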
For Linux (x64) and Windows (x64), since HVR 5.7.0/8 and 5.7.5/4, it is not required to install and configure the Hadoop client. However, if you want to use the Hadoop client, set the environment variable HVR_AZURE_USE_HADOOP=1 and follow the steps mentioned below.
For HVR versions prior to 5.7.0/8 or 5.7.5/4, the Hadoop client must be present on the machine from which HVR will access the Azure Blob FS.
Internally, HVR uses the WebHDFS REST API to connect to the Azure Blob FS. Azure Blob FS locations can only be accessed through HVR running on Linux or Windows; it is not required to run HVR on the Hadoop NameNode, although it is possible to do so. For more information about installing the Hadoop client, refer to Apache Hadoop Releases.
Hadoop Client Configuration
The following are required on the machine from which HVR connects to Azure Blob FS:
- Hadoop 2.6.x client libraries with Java 7 Runtime Environment or Hadoop 3.x client libraries with Java 8 Runtime Environment. For downloading Hadoop, refer to Apache Hadoop Releases.
- Set the environment variable $JAVA_HOME to the Java installation directory. Ensure that this is the directory that contains a bin folder; e.g., if the Java bin directory is d:\java\bin, $JAVA_HOME should point to d:\java.
- Set the environment variable $HADOOP_COMMON_HOME or $HADOOP_HOME or $HADOOP_PREFIX to the Hadoop installation directory, or the hadoop command line client should be available in the path.
- One of the following configurations is recommended:
- Set $HADOOP_CLASSPATH=$HADOOP_HOME/share/hadoop/tools/lib/*
- Create a symbolic link for $HADOOP_HOME/share/hadoop/tools/lib in $HADOOP_HOME/share/hadoop/common or any other directory present in the classpath.
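The first option above can be sketched as follows; the Hadoop installation path is an assumption and should be adjusted to your environment:

```shell
# Assumption: Hadoop is installed under /usr/local/hadoop;
# adjust HADOOP_HOME to your actual installation directory.
export HADOOP_HOME=/usr/local/hadoop
# Add the tools/lib jars (which include the Azure support jars)
# to the Hadoop classpath. Quoting keeps the * unexpanded so
# Hadoop itself resolves the wildcard.
export HADOOP_CLASSPATH="$HADOOP_HOME/share/hadoop/tools/lib/*"
echo "$HADOOP_CLASSPATH"
```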
Since the binary distribution available on the Hadoop website lacks Windows-specific executables, a warning that winutils.exe cannot be located is displayed. This warning can be ignored when using the Hadoop library for client operations to connect to an HDFS server using HVR. However, performance on the integrate location can be poor due to this issue, so it is recommended to use a Windows-specific Hadoop distribution to avoid it. For more information about this warning, refer to the Hadoop Wiki and Hadoop issue HADOOP-10051.
Verifying Hadoop Client Installation
To verify the Hadoop client installation:
- The HADOOP_HOME/bin directory in the Hadoop installation location should contain the hadoop executables.
Execute the following commands to verify Hadoop client installation:
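For example (assuming the hadoop executable is in the path), commands such as the following can be used; a working client prints its version and the resolved classpath:

```
hadoop version
hadoop classpath
```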
If the Hadoop client installation is verified successfully then execute the following command to check the connectivity between HVR and Azure Blob FS:
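A connectivity check might look like the following; the container and storage account names are placeholders for your own Azure Blob FS location:

```
hadoop fs -ls wasbs://<container>@<account>.blob.core.windows.net/
```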
To execute this command successfully and avoid the error "ls: Password fs.adl.oauth2.client.id not found", a few properties need to be defined in the file core-site.xml available in the Hadoop configuration folder (e.g., <path>/hadoop-2.8.3/etc/hadoop). The properties to be defined differ based on the Mechanism (authentication mode). For more information, refer to the section 'Configuring Credentials' in the Hadoop Azure Blob FS Support documentation.
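For the common storage account key mechanism, the core-site.xml entry typically has the following shape; the account name and key are placeholders, and other authentication mechanisms require different properties:

```xml
<configuration>
  <!-- Account key authentication for wasbs:// access;
       replace the account name and key with your own. -->
  <property>
    <name>fs.azure.account.key.myaccount.blob.core.windows.net</name>
    <value>MY_ACCOUNT_KEY</value>
  </property>
</configuration>
```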
Verifying Hadoop Client Compatibility with Azure Blob FS
To verify the compatibility of the Hadoop client with Azure Blob FS, check if the following JAR files are available in the Hadoop client installation location ($HADOOP_HOME/share/hadoop/tools/lib):
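One way to check (assuming $HADOOP_HOME is set to the Hadoop installation directory) is to list the Azure-related JARs in that directory:

```
ls "$HADOOP_HOME/share/hadoop/tools/lib" | grep -i azure
```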
Snowflake on Google Cloud Storage
HVR can be configured to stage the data on Google Cloud Storage before loading it into Snowflake. For staging the data on Google Cloud Storage and performing Integrate with Burst and Bulk Refresh, the following are required:
- A Google Cloud Storage location - to store temporary data to be loaded into Snowflake.
- A Google Cloud user (storage account) - to access this location.
- Configure the storage integrations to allow Snowflake to read and write data into a Google Cloud Storage bucket. For more information, see Configuring an Integration for Google Cloud Storage in Snowflake documentation.
- Define action LocationProperties on the Snowflake location with the following parameters:
- /StagingDirectoryHvr: the location where HVR will create the temporary staging files (e.g. gs://mygooglecloudstorage_bucketname).
- /StagingDirectoryDb: the location from where Snowflake will access the temporary staging files. If /StagingDirectoryHvr is a Google Cloud Storage location, this parameter should have the same value.
- /StagingDirectoryCredentials: the Google Cloud Storage credentials. The supported format for supplying the credentials is "gs_access_key_id=key;gs_secret_access_key=secret_key;gs_storage_integration=integration_name_for_google_cloud_storage".
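Put together, a sketch of the three parameters on a Google Cloud Storage-staged Snowflake location might look like the following; the bucket name, keys, and integration name are placeholders:

```
LocationProperties /StagingDirectoryHvr=gs://mygooglecloudstorage_bucketname
LocationProperties /StagingDirectoryDb=gs://mygooglecloudstorage_bucketname
LocationProperties /StagingDirectoryCredentials="gs_access_key_id=<key>;gs_secret_access_key=<secret_key>;gs_storage_integration=<integration_name>"
```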
Compare and Refresh Source
The User should have permission to read the replicated tables.