This section describes the requirements, access privileges, and other features of HVR when using Hive ACID (Atomicity, Consistency, Isolation, Durability) for replication. For information about compatibility and supported versions of Hive ACID with HVR platforms, see Platform Compatibility Matrix.
For information about the supported data types and mapping of data types in source DBMS to the corresponding data types in target DBMS or file format, see Data Type Mapping.
HVR uses an ODBC connection to the Hive ACID server. One of the following ODBC drivers must be installed on the machine from which HVR connects to the Hive ACID server: Hortonworks ODBC driver 2.1.7 (or above) or Cloudera ODBC driver 2.5.12 (or above).
We do not recommend using Hortonworks Hive ODBC driver versions 2.1.7 and 2.1.10 on Windows.
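As an illustrative sketch only (not HVR internals), the following shows how a DSN-less ODBC connection string for Hive Server 2 with User Name and Password authentication is typically assembled; the driver name, hostname, and credentials below are placeholders.

```python
# Illustrative sketch: building a DSN-less ODBC connection string for
# Hive Server 2. All values are placeholders, not HVR defaults.
params = {
    "Driver": "Hortonworks Hive ODBC Driver",  # or the Cloudera Hive driver
    "Host": "hive.example.com",                # placeholder hostname
    "Port": "10000",                           # default Hive Server 2 TCP port
    "HiveServerType": "2",                     # Hive Server 2
    "AuthMech": "3",                           # 3 = User Name and Password
    "UID": "hvruser",                          # placeholder credentials
    "PWD": "secret",
}
conn_str = ";".join(f"{key}={value}" for key, value in params.items())
# With an ODBC driver manager installed, a client such as pyodbc could then
# connect with: pyodbc.connect(conn_str, autocommit=True)
print(conn_str)
```

HVR builds the equivalent connection from the location fields described below; this snippet only shows how the individual fields map onto ODBC connection-string keys.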
HVR can deliver changes into Hive ACID tables as a target location for its refresh and integration. For Hive versions before 2.3, delivery of changes into Hive ACID tables is supported only with the action ColumnProperties /TimeKey.
For file formats (JSON and Avro), the following action definition is required to handle certain limitations when executing SQL statements against Hive external tables (due to compatibility issues with the Hive 3 Metastore and Hive ODBC drivers):
Environment /Name=HVR_ODBC_CONNECT_STRING_ADD /Value="UseNativeQuery=1"
This section lists and describes the connection details required for creating a Hive ACID location in HVR.
Hive ODBC Connection
Hive Server Type
The type of Hive server. Available options:
Service Discovery Mode
The mode for connecting to Hive. This field is enabled only if Hive Server Type is Hive Server 2. Available options:
The hostname or IP address of the Hive server.
The TCP port that the Hive server uses to listen for client connections. This field is enabled only if Service Discovery Mode is No Service Discovery.
The name of the database schema to use when a schema is not explicitly specified in a query.
The namespace on ZooKeeper under which Hive Server 2 nodes are added. This field is enabled only if Service Discovery Mode is ZooKeeper.
The authentication mode for connecting HVR to Hive Server 2. This field is enabled only if Hive Server Type is Hive Server 2. Available options:
The username to connect HVR to the Hive server. This field is enabled only if Mechanism is User Name or User Name and Password.
The password of the User to connect HVR to the Hive server. This field is enabled only if Mechanism is User Name and Password.
The Kerberos service principal name of the Hive server. This field is enabled only if Mechanism is Kerberos.
The Fully Qualified Domain Name (FQDN) of the Hive Server 2 host. The value of Host can be set as _HOST to use the Hive server hostname as the domain name for Kerberos authentication.
The realm of the Hive Server 2 host.
The transport protocol to use in the Thrift layer. This field is enabled only if Hive Server Type is Hive Server 2. Available options:
The partial URL corresponding to the Hive server. This field is enabled only if Thrift Transport is HTTP.
Linux / Unix
Driver Manager Library
The directory path where the Unix ODBC Driver Manager Library is installed.
The directory path where odbc.ini and odbcinst.ini files are located.
The user defined (installed) ODBC driver to connect HVR to the Hive server.
Show SSL Options.
Enable/disable (one way) SSL. If enabled, HVR authenticates the Hive server by validating the SSL certificate shared by the Hive server.
Enable/disable two-way SSL. If enabled, both HVR and the Hive server authenticate each other by validating each other's SSL certificate. This field is enabled only if Enable SSL is selected.
Trusted CA Certificates
The directory path where the .pem file containing the server's public SSL certificate signed by a trusted CA is located. This field is enabled only if Enable SSL is selected.
SSL Public Certificate
The directory path where the .pem file containing the client's SSL public certificate is located. This field is enabled only if Two-way SSL is selected.
SSL Private Key
The directory path where the .pem file containing the client's SSL private key is located. This field is enabled only if Two-way SSL is selected.
Client Private Key Password
The password of the private key file that is specified in SSL Private Key. This field is enabled only if Two-way SSL is selected.
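For the Linux/Unix fields above, a minimal driver-manager setup might look like the following; the driver library path, DSN name, and hostname are placeholder examples assuming the Hortonworks driver, not values taken from this document.

```
# odbcinst.ini -- registers the driver (library path is a placeholder)
[Hortonworks Hive ODBC Driver]
Driver=/usr/lib/hive/lib/native/Linux-amd64-64/libhortonworkshiveodbc64.so

# odbc.ini -- defines a DSN for the Hive ACID location
[HiveACID]
Driver=Hortonworks Hive ODBC Driver
Host=hive.example.com
Port=10000
HiveServerType=2
AuthMech=3
```

The ODBCSYSINI/ODBCINI directory containing these files is what the "odbc.ini and odbcinst.ini" path field above points to.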
Hive ACID on Amazon Elastic MapReduce (EMR)
To enable Hive ACID on Amazon EMR:
- Add the following configuration details to the hive-site.xml file available in /etc/hive/conf on Amazon EMR:
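As a hedged sketch (these are the standard Hive properties for enabling ACID transactions; the exact set and values for your HVR and EMR release may differ), the additions typically look like:

```xml
<!-- Illustrative only: common Hive ACID transaction settings. -->
<property>
  <name>hive.support.concurrency</name>
  <value>true</value>
</property>
<property>
  <name>hive.txn.manager</name>
  <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
</property>
<property>
  <name>hive.compactor.initiator.on</name>
  <value>true</value>
</property>
<property>
  <name>hive.compactor.worker.threads</name>
  <value>1</value>
</property>
```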
- Save the modified hive-site.xml file.
- Restart Hive on Amazon EMR.
For more information on restarting a service in Amazon EMR, refer to How do I restart a service in Amazon EMR? in AWS documentation.
Integrate and Refresh Target
HVR supports integrating changes into a Hive ACID location. This section describes the configuration requirements for integrating changes (using Integrate and Refresh) into a Hive ACID location. For the list of supported Hive ACID versions into which HVR can integrate changes, see Integrate changes into location in Capabilities.
Burst Integrate and Bulk Refresh
While HVR Integrate is running with the parameter /Burst, or during Bulk Refresh, HVR can either stream data into a target database straight over the network into a bulk loading interface specific to each DBMS (e.g., direct-path load in Oracle), or put data into a temporary directory ('staging files') before loading it into the target database.
For best performance, HVR performs Integrate with /Burst and Bulk Refresh on a Hive ACID location using staging files. HVR implements Integrate with /Burst and Bulk Refresh (with file staging) into Hive ACID as follows:
- HVR first creates Hive external tables using the Amazon/Hortonworks Hive ODBC driver
- HVR then stages data into:
- S3 using AWS S3 REST interface (cURL library) or
- HDFS/Azure Blob FS/Azure Data Lake Storage using HDFS-compatible libhdfs API
- HVR uses the Hive SQL commands 'merge' (Integrate with /Burst) or 'insert into' (Bulk Refresh) against the Hive external tables linked to S3/HDFS/Azure Blob FS/Azure Data Lake Storage to ingest data into Hive ACID managed tables.
- HVR requires an AWS S3 or HDFS/Azure Blob FS/Azure Data Lake Storage location to store temporary data to be loaded into Hive ACID.
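The 'merge' step above can be sketched in Hive SQL; the table names (orders, orders__bur) and columns here are hypothetical, not the identifiers HVR actually generates.

```sql
-- Illustrative only: merging staged changes from an external table
-- (orders__bur) into an ACID managed table (orders).
MERGE INTO orders AS t
USING orders__bur AS s
ON t.order_id = s.order_id
WHEN MATCHED AND s.hvr_op = 0 THEN DELETE
WHEN MATCHED THEN UPDATE SET amount = s.amount
WHEN NOT MATCHED THEN INSERT VALUES (s.order_id, s.amount);
```

For Bulk Refresh, a plain `INSERT INTO orders SELECT ... FROM orders__ext` against the external staging table serves the same purpose.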
If AWS S3 is used to store temporary data, HVR requires an AWS user with the 'AmazonS3FullAccess' policy to access this location. For more information, refer to the following AWS documentation:
- Define action LocationProperties on Hive ACID location with the following parameters:
- /StagingDirectoryHvr: the location where HVR will create the temporary staging files. The format for AWS S3 is s3://S3 Bucket/Directory and for HDFS is hdfs://NameNode:Port/Directory
- /StagingDirectoryDb: the location from where Hive ACID will access the temporary staging files.
If /StagingDirectoryHvr is an AWS S3 location, then the value for /StagingDirectoryDb should be the same as /StagingDirectoryHvr.
- /StagingDirectoryCredentials: the AWS security credentials. The supported formats are 'aws_access_key_id="key";aws_secret_access_key="secret_key"' or 'role="AWS_role"'. For information on obtaining AWS credentials or an Instance Profile Role, refer to the AWS documentation.
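Putting the parameters above together, a LocationProperties action for an S3 staging location might look like the following; the bucket name, directory, and credential values are placeholders.

```
LocationProperties /StagingDirectoryHvr=s3://mybucket/hvr_stage /StagingDirectoryDb=s3://mybucket/hvr_stage /StagingDirectoryCredentials='aws_access_key_id="key";aws_secret_access_key="secret_key"'
```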
Since HVR uses CSV file format for staging, the following action definitions are required to handle certain limitations of the Hive deserialization implementation: