Requirements for Snowflake
This section describes the requirements, access privileges, and other features of HVR when using Snowflake for replication. For information about the capabilities supported by HVR on Snowflake, see Capabilities for Snowflake.
For information about compatibility and supported versions of Snowflake with HVR platforms, see Platform Compatibility Matrix.
HVR requires that the Snowflake ODBC driver is installed on the machine from which HVR will connect to Snowflake. For more information on downloading and installing the Snowflake ODBC driver, see the Snowflake Documentation.
This section lists and describes the connection details required for creating a Snowflake location in HVR.
| Field | Description |
| --- | --- |
| Server | The hostname or IP address of the machine on which the Snowflake server is running. |
| Port | The port on which the Snowflake server expects connections. |
| Role | The name of the Snowflake role to use. |
| Warehouse | The name of the Snowflake warehouse to use. |
| Database | The name of the Snowflake database. |
| Schema | The name of the default Snowflake schema to use. |
| User | The username to connect HVR to the Snowflake database. |
| Password | The password of the User to connect HVR to the Snowflake database. |

Linux / Unix

| Field | Description |
| --- | --- |
| Driver Manager Library | The optional directory path where the ODBC Driver Manager Library is installed. For a default installation, the ODBC Driver Manager Library is available at /usr/lib64 and does not need to be specified. When UnixODBC is installed in, for example, /opt/unixodbc-2.3.1, this would be /opt/unixodbc-2.3.1/lib. |
| ODBCSYSINI | The directory path where the odbc.ini and odbcinst.ini files are located. For a default installation, these files are available at /etc and do not need to be specified. When UnixODBC is installed in, for example, /opt/unixodbc-2.3.1, this would be /opt/unixodbc-2.3.1/etc. The odbcinst.ini file should contain information about the Snowflake ODBC Driver under the heading [SnowflakeDSIIDriver]. |
| ODBC Driver | The user-defined (installed) ODBC driver used to connect HVR to the Snowflake server. |
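For illustration, an odbcinst.ini entry for the Snowflake driver might look like the sketch below. The Description text and Driver path are assumptions based on a typical Linux installation; adjust the path to where the driver is actually installed on your machine.

```ini
[SnowflakeDSIIDriver]
; Example entry only - the Driver path below is an assumed default
; installation location; verify it against your own installation.
Description=Snowflake ODBC Driver
Driver=/usr/lib64/snowflake/odbc/lib/libSnowflake.so
```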
Integrate and Refresh Target
- The User should have permission to read and change replicated tables.
grant select, insert, update, delete, truncate on table tbl to role hvruser
grant usage, modify, create table on schema schema in database database to role hvruser
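Substituting concrete names for the placeholders, the privileges above could be granted as in the following sketch. The object names (hvrdb, hvrschema, hvr_wh) are hypothetical, and the warehouse and database usage grants are additional assumptions reflecting that a Snowflake role generally also needs usage on the warehouse and database it works in.

```sql
-- Hypothetical object names; replace with your own warehouse,
-- database, schema, and table names.
grant usage on warehouse hvr_wh to role hvruser;
grant usage on database hvrdb to role hvruser;
grant usage, modify, create table on schema hvrdb.hvrschema to role hvruser;
-- Repeat for each replicated table:
grant select, insert, update, delete, truncate on table hvrdb.hvrschema.tbl to role hvruser;
```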
Burst Integrate and Bulk Refresh
HVR allows you to perform Integrate with Burst and Bulk Refresh on Snowflake, using the Snowflake COPY INTO command for maximum performance. The following are required to perform Integrate with Burst and Bulk Refresh in Snowflake:
- HVR requires an AWS S3 location to store the temporary data that will be loaded into Snowflake, and an AWS user with the AmazonS3FullAccess policy to access this location.
For more information, refer to the following AWS documentation:
- Amazon S3 and Tools for Windows PowerShell
- Managing Access Keys for IAM Users
- Creating a Role to Delegate Permissions to an AWS Service
- /StagingDirectoryHvr: the location where HVR will create the temporary staging files. HVR highly recommends using an Amazon S3 location (for example, s3://my_bucket_name/).
- /StagingDirectoryDb: the location from which Snowflake will access the temporary staging files. If /StagingDirectoryHvr is set to an Amazon S3 location, this parameter should have the same value.
- /StagingDirectoryCredentials: the AWS security credentials. The supported formats are 'aws_access_key_id=<key>;aws_secret_access_key=<secret_key>' and 'role=<AWS_role>'. For information on obtaining your AWS credentials or Instance Profile role, refer to the AWS documentation.
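Taken together, these parameters might be set as in the sketch below, assuming they are defined on HVR's LocationProperties action for the Snowflake location; the bucket name and credential values are placeholders.

```
LocationProperties /StagingDirectoryHvr=s3://my_bucket_name/
                   /StagingDirectoryDb=s3://my_bucket_name/
                   /StagingDirectoryCredentials='aws_access_key_id=<key>;aws_secret_access_key=<secret_key>'
```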
Compare and Refresh Source
- The User should have permission to read replicated tables.
grant select on table tbl to role hvruser