This section describes HVR actions and their parameters. Actions in HVR define the behavior of replication. When a replication channel is created, at least two actions, Capture and Integrate, must be defined on the source and target locations respectively to activate replication.
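In outline, the minimal action set for a channel looks like the sketch below. The group and table names are placeholders, and the exact definition mechanism depends on the HVR interface in use; all other actions in this section refine this basic behavior.

```
Group    Table  Action
-------  -----  ---------
SOURCE   *      Capture
TARGET   *      Integrate
```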
Add new tables to channel if they match.
Ignore new tables which match pattern.
Database schema for matching tables.
Generate schema for target location(s).
OnEnrollBreak pol: Applies policy pol to control the behavior of the capture job for an existing table to handle a break in the enroll information.
OnPreserveAlterTableFail pol: Applies policy pol to control the behavior of the capture job for an existing table to handle any failure while preserving the table during an ALTER TABLE operation.
Configure options for adapt's refresh of the target.
Behavior when a source table is dropped. Default: remove from channel only.
Preserve old columns in the target, and do not reduce data type sizes.
Preserve old rows in target during recreate.
Call OS command during replication jobs.
Call database procedure dbproc during replication jobs.
Pass argument str to each agent execution.
Execute agent on hub instead of location's machine.
Specify order of agent execution.
Capture changes directly from DBMS logging system.
Coalesce consecutive changes on the same row into a single change.
Only capture the new values for updated rows.
Do not capture truncate table statements.
Capture job must select for column values (for example, when logging is incomplete).
Ignore changes that satisfy expression.
Ignore update changes that satisfy expression.
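Both conditions are ordinary SQL predicates evaluated against each captured change. A hedged sketch follows; the /Param=value notation and the column names are illustrative, not exact syntax:

```
Capture /IgnoreCondition="status = 'ARCHIVED'"
Capture /IgnoreUpdateCondition="last_modified_by = 'ETL_USER'"
```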
Hash structure to improve parallelism of captured tables.
Hash capture table on specific key columns.
Delete file after capture, instead of capturing recently changed files.
Only capture files whose names match pattern.
Ignore files whose names match pattern.
Ignore files whose last line does not match pattern.
Changes in file size during capture are not considered an error.
Delay read for secs seconds to ensure writing is complete.
Check timestamp of parent dir, as Windows move doesn't change mod-time.
Do not resolve collisions automatically.
Exploit timestamp column col_name for collision detection.
Delete history table row when no longer needed for collision detection.
During row-wise refresh, discard updates if the target timestamp is newer.
Name of column in the HVR_COLUMN repository table.
Data type used for matching instead of Name.
Database column name differs from the HVR_COLUMN repository table.
Column exists in base table but not in the HVR_COLUMN repository table.
Column does not exist in base table.
SQL expression for column value when capturing or reading.
Type of mechanism used by the Capture, Refresh, and Compare jobs to evaluate the value in parameter CaptureExpression.
SQL expression for column value when integrating.
Operation scope for expressions.
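As an illustration of capture and integrate expressions: the column names below are placeholders, and {hvr_integ_tstamp} is shown as an example substitution that should be checked against your HVR version.

```
ColumnProperties /Name=loaded_at /IntegrateExpression="{hvr_integ_tstamp}"
ColumnProperties /Name=email     /CaptureExpression="lower(email)"
```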
Capture values from table's DBMS row-id.
Reduce width of data type when selecting or capturing changes.
Add column to table's replication key.
Use column instead of the regular key during replication.
Distribution key column.
PartitionKeyOrder int: Define the column as a partition key and set partitioning order for the column.
Convert deletes to updates that set this column to 1. Value 0 means not deleted.
Convert all changes to inserts, using this column for time dimension.
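The two parameters above describe different ways of retaining history on the target. The following is an illustrative sketch, not HVR code: it shows how a delete can become a soft-delete update, or how any change can become a time-keyed insert. The column names `is_deleted`, `op`, and `op_tstamp` are assumptions for the example.

```python
from datetime import datetime, timezone

def apply_soft_delete(row: dict) -> dict:
    """Turn a delete into an update that marks the row as deleted (SoftDelete idea)."""
    out = dict(row)
    out["is_deleted"] = 1  # 1 = deleted, 0 = not deleted
    return out

def apply_time_key(row: dict, op: str) -> dict:
    """Turn any change into an insert carrying a time-dimension column (TimeKey idea)."""
    out = dict(row)
    out["op"] = op  # original operation: insert, update or delete
    out["op_tstamp"] = datetime.now(timezone.utc).isoformat()
    return out

print(apply_soft_delete({"id": 7, "name": "widget"}))
# → {'id': 7, 'name': 'widget', 'is_deleted': 1}
```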
Ignore values in column during compare and refresh.
Data type in database if it differs from the HVR_COLUMN repository table.
String length in database if it differs from the length in the HVR repository tables.
Precision in database if it differs from the precision in the HVR repository tables.
Integer scale in database if it differs from the scale in the HVR repository tables.
Nullability in database if it differs from the nullability in the HVR repository tables.
Inhibit generation of capture insert trigger.
Inhibit generation of capture update trigger.
Inhibit generation of capture delete trigger.
Inhibit generation of capture database procedures.
Inhibit generation of capture tables.
Inhibit generation of integrate database procedures.
Search directory for include SQL file.
Search directory for include SQL file.
Clause for trigger-based capture table creation statement.
Clause for state table creation statement.
Clause for integrate burst table creation statement.
Clause for fail table creation statement.
Clause for history table creation statement.
Clause for base table creation statement during refresh.
RefreshTableGrant: Executes a grant statement on the base table created during HVR Refresh.
Only capture database sequences, do not integrate them.
Only integrate database sequences, do not capture them.
Name of database sequence in the HVR repository tables.
Schema which owns database sequence.
Name of sequence in database if it differs from name in HVR.
Name of environment variable.
Value of environment variable.
Transform rows from/into XML files.
Transform rows from/into CSV files.
Transform rows into Apache Avro format. Integrate only.
Transform rows into JSON format. The content of the file depends on parameter JsonMode. Integrate only.
Read and write files as Parquet format.
Write compact XML tags like <r> and <c> instead of <row> and <column>.
Compress/uncompress while writing/reading.
Encoding of file.
First line of file contains column names.
Field separator. Defaults to comma (,). Examples: , \x1f or \t
Line separator. Defaults to newline (\n). Examples: ;\n or \r\n
Character to quote a field with, if the field contains separators. Defaults to double quote (").
Character to escape the quote character with. Defaults to double quote (").
File termination at end-of-file. Example: EOF or \xff
String representation for columns with NULL value.
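The CSV-related parameters above (field separator, quote character, escape character, NULL representation) interact. A small Python sketch, independent of HVR, shows the effect of quoting and an explicit NULL marker; the marker string "NULL" is an assumption for the example.

```python
import csv
import io

rows = [
    ["id", "name", "note"],
    ["1", "a,b", None],        # field containing the separator
    ["2", 'say "hi"', "ok"],   # field containing the quote character
]

buf = io.StringIO()
writer = csv.writer(buf, delimiter=",", quotechar='"',
                    doublequote=True,   # escape the quote by doubling it
                    lineterminator="\n")
for row in rows:
    # Like NullRepresentation: write an explicit marker string for NULL values
    writer.writerow(["NULL" if v is None else v for v in row])

print(buf.getvalue())
# id,name,note
# 1,"a,b",NULL
# 2,"say ""hi""",ok
```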
JsonMode mode: Style used to write rows into JSON format.
Compression codec for Avro and Parquet.
Version of Apache AVRO format.
Parquet page size in bytes.
Maximum row group size in bytes for Parquet.
Category of data types to represent complex data into Parquet format.
BeforeUpdateColumns prefix: Merges the 'before' and 'after' versions of an updated row into one row.
BeforeUpdateColumnsWhenChanged: Adds the prefix (defined in BeforeUpdateColumns) only to columns whose values were updated.
Write files with UNIX or DOS style newlines.
Run files through converter before reading.
Arguments to the capture converter.
Run files through converter after writing.
Arguments to the integrate converter program.
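The exact converter contract is HVR-specific, but a common pattern for such external filter programs is reading the file from stdin and writing the transformed file to stdout, with arguments on the command line. A minimal sketch under those assumptions; the line-prefix transformation is a stand-in for real logic:

```python
import sys

def convert(text: str, prefix: str = "") -> str:
    """Stand-in transformation: prefix every line of the input file."""
    return "".join(prefix + line for line in text.splitlines(keepends=True))

def main() -> None:
    # A real converter would be invoked by the replication job with the
    # file on stdin and its arguments on the command line.
    prefix = sys.argv[1] if len(sys.argv) > 1 else ""
    sys.stdout.write(convert(sys.stdin.read(), prefix))

# Demonstrate the transformation on an in-memory string:
print(convert("a\nb\n", prefix="> "), end="")
```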
Method of writing or integrating changes into the target location.
Frequency of commits.
Coalesce consecutive changes on the same row into a single change.
Control order in which changes are written to files.
Resilient integrate for inserts, updates and deletes.
Write failed row to fail table.
Apply changes by calling integrate database procedures.
Bundle small transactions for improved performance.
Split very large transactions to limit resource usage.
Enable/Disable triggering of database rules.
Integrate changes with special session name sess_name.
Name of the Kafka topic. You can use strings/text or expressions as the Kafka topic name.
MessageKey expression: Expression to generate a user-defined key in a Kafka message.
MessageKeySerializer format: Encodes the generated Kafka message key in string or Kafka Avro serialization format.
Expression to name new files, containing brace substitutions.
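Brace substitutions expand per file; an illustrative pattern follows. Substitution names such as {hvr_tbl_name} and {hvr_integ_tstamp} should be verified against your HVR version.

```
Integrate /RenameExpression="{hvr_tbl_name}/{hvr_integ_tstamp}.csv"
```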
ComparePattern patt: Perform direct file compare.
Error if a new file has same name as an existing file.
Limit each XML file to size bytes.
Report name of each file integrated.
API name of table to upload attachments into.
API name of attachment table's key column.
Max amount of routed data (compressed) to process per integrate cycle.
Move processed router files to journal directory on hub.
Delay integration of changes for N seconds.
Restrict during capture.
Restrict during integration.
Restrict during refresh and compare.
Restrict during compare.
SliceCondition sql_expr: During sliced Refresh or Compare, only rows where the condition sql_expr evaluates to TRUE are affected.
Horizontally partition the table based on the value in col_name.
Join partition column with horizontal lookup table.
Changes to lookup table also trigger replication.
Only send changes to locations specified by address.
Get copy of any changes sent to matching address.
Trigger capture job at specific times, rather than continuous cycling.
Capture job runs for one cycle after trigger.
Trigger integrate job only after capture job routes new data.
Trigger integrate job at specific times, rather than continuous cycling.
Integrate job runs for one cycle after trigger.
Trigger refresh job at specific times.
Trigger compare job at specific times.
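The start-times parameters accept a crontab-style schedule. A hedged example; the parameter spellings below reflect typical HVR usage and are worth verifying for your version:

```
Scheduling /CaptureStartTimes="0,30 * * * *"    crontab-style: every half hour
Scheduling /IntegrateStartAfterCapture          integrate only after new data is routed
```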
Size of history maintained by hvrstats job, before it purges its own rows.
LatencySLA threshold: Threshold for the latency.
TimeContext times: Time range during which the LatencySLA is active/valid.
Name of a table in a database differs from the name in the HVR repository tables.
Exclude a table that is in the channel from being replicated/integrated into the target.
Replication table cannot have duplicate rows.
Database schema which owns table.
Defines a policy to handle type coercion error.
Defines which types of coercion errors are affected by CoerceErrorPolicy.
Remove trailing whitespace from varchar.
Trim time when converting from Oracle and SQL Server date.
Convert between empty varchar and Oracle varchar space.
Convert between constant date (dd/mm/yyyy) and Ingres empty date.
On table creation use Unicode data types, e.g. map varchar to nvarchar.
Maximum number of columns in the implicit distribution key.
Avoid putting given columns in the implicit distribution key.
BucketsCount: Number of buckets to be specified while creating a table in Hive ACID.
Specify the replacement rules for unsupported characters.
Specify how binary data is represented on the target side.
Insert value str into string data type columns if the value is missing/empty during integration.
Insert value str into numeric data type columns if the value is missing/empty during integration.
Insert value str into date data type columns if the value is missing/empty during integration.
Context context: Action is only effective/applied if the context matches the context (option -C) defined in Refresh or Compare.
Path to script or executable performing custom transformation.
Value(s) of parameter(s) for transform (space separated).
SapUnpack: Unpack the SAP pool, cluster, and long text table (STXL).
Execute transform on hub instead of location's machine.
Distribute rows to multiple transformation processes.