hvradapt – Explore base table definitions and adapt them into channel information
hvradapt [-ddel_tbls]... [-ffname] [-hclass] [-iletters]... [-I] [-lloc] [-ntbltmplfile]... [-Ntblpatt]... [-R] [-rtbls]... [-sscope] [-Sscope] [-uuser] [-Udictsch] [-x] [-X] hubdb chn
Command hvradapt compares the base tables in a database with the table information of a channel. It then adds, replaces or deletes table information (in catalogs hvr_table and hvr_column) so that this information matches the base tables.
This is equivalent to the "Table Explore" dialog in the HVR GUI.
If the adapt location contains a table which is not in the channel but is matched by the template, it is added to the channel. If a table is in the channel but is not matched by the template, it is deleted from the channel. And if a table is both matched by the template and included in the channel, but has the wrong column information in the channel, then this column information is updated. If no template is supplied (no -n or -N option), then no tables are added or deleted; only existing column information is updated where necessary.
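The matching rules above can be approximated with ordinary shell glob patterns. The following is only a sketch with hypothetical table and pattern names; it mimics how a template decides which base tables qualify for the channel, not HVR's actual implementation:

```shell
#!/bin/sh
# Hypothetical sketch: shell globs standing in for adapt template patterns.
patterns="order_* customer"                             # template patterns (assumed)
location_tables="order_lines order_hdr customer tmp_x"  # base tables in the adapt location (assumed)

matched=""
for t in $location_tables; do
  for p in $patterns; do
    case $t in
      $p) matched="$matched $t"; break;;                # table matches a template pattern
    esac
  done
done
echo "tables matched by the template:$matched"          # these would be added to / kept in the channel
```

Tables matched by a pattern (here order_lines, order_hdr and customer) would be added or kept; unmatched tables (tmp_x) would be deleted from the channel.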
|-ddel_tbls...||Delete the specified table names from the channel. No other tables are compared or changed.|
|-ffname||Write (append) the list of modified or added HVR table names to file fname. This is useful for a script which calls hvradapt and then performs extra steps (e.g. hvrrefresh) for the tables which were affected.|
|-hclass||Specify hub database class. Valid values are oracle, ingres, sqlserver, db2, db2i, postgresql, and teradata.|
|-iletters||Ignore certain differences. Value letters can contain:
|-I||Controls whether HVR converts catalog datatypes into datatypes that could be created in the DBMS. If this option is not supplied, the datatypes are converted before they are compared. Otherwise the actual catalog datatype is compared without any conversion.|
|-lloc||Specifies the adapt location, typically the channel's capture location.|
|-ntbltmplfile||Specifies a table template file for the channel. This template file defines which base tables in the adapt location should be included in the channel. The file contains table names and/or patterns. For the format, see the Example below.|
|-Ntblpatt||Specifies a table template. This template defines which base tables in the adapt location should be included in the channel. Multiple templates can be supplied; each can be either a base table name or a pattern (such as mytbl*). These patterns can also be supplied in a pattern file (option -n).|
|-R||Do not re-describe tables.|
|-rtbls||Re-describe only the specified tables.|
|-sscope||Add a TableProperties /Schema action with the given scope. Valid values for scope are all (a), group (g) or location (l). The default is all (a).|
|-Sscope||Add ColumnProperties /Absent actions with the given scope instead of updating the column information. Valid values for scope are group (g) or location (l). The default is not to add these ColumnProperties /Absent actions.|
|-uuser[/pwd]||Connect to the hub database using DBMS account user. For some databases (e.g. SQL Server) a password pwd must also be supplied.|
|-Udictsch||SAP dictionary tables are owned by DB schema dictsch. If this option is not supplied, HVR looks for the SAP dictionary tables in the HVR location's default schema, and also in any schemas indicated by a TableProperties /Schema parameter. This option can only be used with option -X.|
|-x||Check-only mode. Do not apply any changes to the catalogs.|
|-X||Explore table and column information from the SAP dictionaries instead of the DBMS catalogs. For SAP "transparent" tables this can give more accurate datatype information. It also shows the logical tables inside SAP "cluster" and "pool" tables; these are shown as 'unpacked' tables.|
First, a template file (e.g. /tmp/adapt.tmpl) is created with the contents:
An adapt template defines which base tables in the adapt location should be included in or excluded from the channel. Each pattern line should have the following format:
[ ! ] [schema.] tablename [-T target_schema] [-K (col_list)]
Value tablename can be a literal (enclosed in double quotes) or a pattern (matching can be done with *, ? or [letters]). The special symbol + matches all tables already in the channel. Empty lines and hash comments (#Test) are ignored.
Options -K and -T only have an effect at the moment a table is added to the channel, and are ignored afterwards. Option -K marks the listed columns as distribution key columns in the 'column catalog', instead of defining a new ColumnProperties /DistributionKey action.
+                      # Match all tables already in channel
order                  # Table order
order_lines            # Table order_lines
schema1.*              # All tables in schema1
schema2.tbl1           # Table in non-default schema
history_*              # Pattern for dynamic tables
!tmp_*                 # Negative pattern to exclude tables
"my table"             # Table called "my table"
sch1."sch2.tab*"       # Table called "sch2.tab*" in schema "sch1"
sch3.xx* -T sch4       # Replicate tables in schema sch3 to schema sch4
my_table2 -K(c1 c2)    # Table should be added with distribution key (c1, c2)
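To make the example concrete, the template file can be generated from a script. This is only a sketch, using the /tmp/adapt.tmpl path from above and a subset of the pattern lines:

```shell
#!/bin/sh
# Sketch: write a subset of the adapt template shown above to /tmp/adapt.tmpl.
tmpl=/tmp/adapt.tmpl
cat > "$tmpl" <<'EOF'
+            # Match all tables already in channel
history_*    # Pattern for dynamic tables
!tmp_*       # Negative pattern to exclude tables
EOF
grep -c '^' "$tmpl"    # counts lines in the file: 3
```

The quoted heredoc delimiter ('EOF') keeps the shell from expanding anything inside the template, so patterns like history_* are written literally.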
The following example demonstrates the use of a shell script to automatically enroll new or modified tables.
The script below runs hvradapt to check for new or modified tables in location loc1. If hvradapt finds any, the script executes the commands needed to enroll the tables in channel mychn and perform a refresh.
#!/bin/sh
hub=myhub/passwd        # Hub database and password (if required)
chn=mychn               # Channel
src=loc1                # Source location
F=/tmp/adapt_$chn.out   # File where hvradapt writes list of new or changed tables

hvradapt -f$F -n/tmp/adapt.tmpl -l$src $hub $chn    # Add new or changed tables from source to channel

if test -f $F           # If file $F exists then new or changed tables were detected
then
    hvrsuspend $hub $chn-integ                      # Stop integrate jobs
    hvrinit -oelj $hub $chn                         # Regenerate supplemental-logging, jobs and enroll info
    hvrrefresh -r$src -t$F -qrw -cbkr $hub $chn     # Re-create and online refresh tables in file $F
    hvrstart -r -u $hub $chn-integ                  # Re-trigger stopped jobs
    hvradapt -x $hub $chn                           # Double-check channel now matches target location (optional)
    rm $F                                           # Remove file with list of new or changed tables
fi