Requirements for Redshift
This section describes the requirements, access privileges, and other features of HVR when using Amazon Redshift for replication. For information about the capabilities supported by HVR on Redshift, see Capabilities for Redshift.
For information about compatibility and supported versions of Redshift with HVR platforms, see Platform Compatibility Matrix.
To quickly set up replication using Redshift, see Quick Start for HVR into Redshift.
HVR uses an ODBC connection to the Amazon Redshift clusters. The Amazon Redshift ODBC driver should be installed on the machine from which HVR connects to the Amazon Redshift clusters. For more information about downloading and installing the Amazon Redshift ODBC driver, refer to the AWS documentation.
On Linux, HVR additionally requires unixODBC 2.3.0 or later.
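On Linux, the installed driver is registered with unixODBC in the odbcinst.ini file. As a hedged illustration only, an entry might look like the following; the driver library path is a placeholder and depends on where your package installed the driver:

```ini
; Illustrative odbcinst.ini entry for the Amazon Redshift ODBC driver.
; The Driver path below is an assumption; adjust it to your installation.
[Amazon Redshift (x64)]
Description = Amazon Redshift ODBC Driver (64-bit)
Driver      = /opt/amazon/redshiftodbc/lib/64/libamazonredshiftodbc64.so
```

The section heading [Amazon Redshift (x64)] matches the driver name HVR expects (see ODBCSYSINI below).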
This section lists and describes the connection details required for creating a Redshift location in HVR.
| Field | Description |
| --- | --- |
| Node | The hostname or IP address of the machine on which the Redshift server is running. |
| Port | The port on which the Redshift server expects connections. |
| Database | The name of the Redshift database. |
| User | The username for connecting HVR to the Redshift database. |
| Password | The password of the User for connecting HVR to the Redshift database. |

Linux / Unix

| Field | Description |
| --- | --- |
| Driver Manager Library | The optional directory path where the ODBC Driver Manager Library is installed. For a default installation, the library is located in /usr/lib64 and does not need to be specified. If unixODBC is installed in, for example, /opt/unixodbc-2.3.1, this would be /opt/unixodbc-2.3.1/lib. |
| ODBCSYSINI | The directory path where the odbc.ini and odbcinst.ini files are located. For a default installation, these files are located in /etc and do not need to be specified. If unixODBC is installed in, for example, /opt/unixodbc-2.3.1, this would be /opt/unixodbc-2.3.1/etc. The odbcinst.ini file should contain information about the Amazon Redshift ODBC driver under the heading [Amazon Redshift (x64)]. |
| ODBC Driver | The user-defined (installed) ODBC driver used to connect HVR to the Amazon Redshift clusters. |
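The connection fields above map onto a standard DSN-less ODBC connection string. The following is a minimal sketch, not HVR's internal logic; the helper name, hostname, and credentials are illustrative placeholders:

```python
# Sketch: assembling an ODBC connection string for Redshift from the
# location fields (Node, Port, Database, User, Password, ODBC Driver).
# All concrete values below are hypothetical examples.

def redshift_conn_str(node, port, database, user, password,
                      driver="Amazon Redshift (x64)"):
    """Build a DSN-less ODBC connection string from the location fields."""
    return (f"Driver={{{driver}}};Server={node};Port={port};"
            f"Database={database};UID={user};PWD={password}")

conn_str = redshift_conn_str(
    "examplecluster.abc123.us-west-2.redshift.amazonaws.com",
    5439, "dev", "hvruser", "secret")

# With the pyodbc package installed and a reachable cluster, the
# connection would then be opened as:
#   import pyodbc
#   conn = pyodbc.connect(conn_str)
```

The driver name in braces must match the heading registered in odbcinst.ini (here assumed to be [Amazon Redshift (x64)]).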
Bulk Refresh and Burst Integrate
HVR allows you to perform Bulk Refresh and Integrate with Burst on Redshift (it uses the Redshift "copy from" feature for maximum performance). The following are required to perform Bulk Refresh and Integrate with Burst on Redshift:
- HVR requires an AWS S3 location to store temporary data to be loaded into Redshift, and an AWS user with the AmazonS3FullAccess policy to access this location.
For more information, refer to the following AWS documentation:
- Amazon S3 and Tools for Windows PowerShell
- Managing Access Keys for IAM Users
- Creating a Role to Delegate Permissions to an AWS Service
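AmazonS3FullAccess is a broad AWS managed policy. As a hedged alternative sketch, a custom inline policy scoped to only the staging bucket could look like the following; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my_bucket_name",
        "arn:aws:s3:::my_bucket_name/*"
      ]
    }
  ]
}
```

Whether such a reduced policy is sufficient for HVR's staging operations is an assumption; the documented requirement is AmazonS3FullAccess.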
- /StagingDirectoryHvr: the location where HVR will create the temporary staging files. It is highly recommended to use an Amazon S3 location (e.g., s3://my_bucket_name/).
- /StagingDirectoryDb: the location from where Redshift will access the temporary staging files. If /StagingDirectoryHvr is an Amazon S3 location, this parameter should have the same value.
- /StagingDirectoryCredentials: the AWS security credentials. The supported formats are 'aws_access_key_id=<key>;aws_secret_access_key=<secret_key>' or 'role=<AWS_role>'. For information on obtaining your AWS credentials or an Instance Profile Role, refer to the AWS documentation.
If the AWS S3 location is in a different region than the EC2 node where the target agent is installed, an extra action is required: Environment. The name of the environment variable should be HVR_S3_REGION, with its value set to the region where the S3 bucket was created (e.g., Oregon).
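Putting the parameters above together, a hedged sketch of how these action parameters might be combined for a Redshift target location follows; the bucket name, credentials, and region value are placeholders, and the exact action syntax depends on your HVR version:

```
Integrate /Burst
  /StagingDirectoryHvr=s3://my_bucket_name/
  /StagingDirectoryDb=s3://my_bucket_name/
  /StagingDirectoryCredentials='aws_access_key_id=<key>;aws_secret_access_key=<secret_key>'

Environment /Name=HVR_S3_REGION /Value=Oregon
```

The Environment action is only needed in the cross-region case described above.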