Loading Data with SQL
This topic describes several ways to load data into OmniSci using SQL commands.
    If there is a potential for duplicate entries, and you want to avoid loading duplicate rows, see How can I avoid creating duplicate rows? on the Troubleshooting page.
    If a source file uses a reserved word, OmniSci automatically adds an underscore at the end of the reserved word. For example, year is converted to year_.
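The renaming rule can be sketched in a few lines of Python; the reserved-word set below is a small illustrative subset, not OmniSci's actual reserved-word list:

```python
# Sketch of the reserved-word renaming rule: append an underscore to any
# incoming column name that collides with a reserved word.
# RESERVED is an illustrative subset, not OmniSci's full list.
RESERVED = {"year", "month", "day", "table", "select"}

def sanitize_column(name: str) -> str:
    return name + "_" if name.lower() in RESERVED else name

print(sanitize_column("year"))   # year_
print(sanitize_column("price"))  # price
```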

COPY FROM

CSV/TSV Import

Use the following syntax for CSV and TSV files:
COPY <table> FROM '<file pattern>' [WITH (<property> = value, ...)];
<file pattern> must be local on the server. The file pattern can contain wildcards if you want to load multiple files. In addition to CSV, TSV, and TXT files, you can import compressed files in TAR, ZIP, 7-ZIP, RAR, GZIP, BZIP2, or TGZ format.
COPY FROM appends data from the source into the target table. It does not truncate the table or overwrite existing data.
You can import client-side files using the \copy command in omnisql, but it is significantly slower. For large files, OmniSci recommends that you first scp the file to the server and then issue the COPY command.
OmniSci supports Latin-1 ASCII format and UTF-8. If you want to load data with another encoding (for example, UTF-16), convert the data to UTF-8 before loading it to OmniSci.
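A pre-load conversion step can be sketched in Python (file names here are illustrative); a command-line tool such as iconv works equally well:

```python
# Convert a UTF-16 file to UTF-8 before loading it with COPY FROM.
# File names are illustrative.
def convert_to_utf8(src_path: str, dst_path: str, src_encoding: str = "utf-16") -> None:
    with open(src_path, "r", encoding=src_encoding) as src, \
         open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            dst.write(line)

# Demo: write a small UTF-16 file, then convert it.
with open("demo_utf16.csv", "w", encoding="utf-16") as f:
    f.write("id,name\n1,Zürich\n")

convert_to_utf8("demo_utf16.csv", "demo_utf8.csv")
```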
Available properties in the optional WITH clause are described in the following table.
Parameter
Description
Default Value
array_delimiter
A single-character string for the delimiter between input values contained within an array.
, (comma)
array_marker
A two-character string consisting of the start and end characters surrounding an array.
{ }(curly brackets). For example, data to be inserted into a table with a string array in the second column (for example, BOOLEAN, STRING[], INTEGER) can be written as true,{value1,value2,value3},3
buffer_size
Size of the input file buffer, in bytes.
8388608
delimiter
A single-character string for the delimiter between input fields; most commonly:
    , for CSV files
    \t for tab-delimited files
Other delimiters include |, ~, ^, and ;.
Note: OmniSci does not use file extensions to determine the delimiter.
',' (CSV file)
escape
A single-character string for escaping quotes.
'"' (double quote)
header
Either 'true' or 'false', indicating whether the input file has a header line in Line 1 that should be skipped.
'true'
line_delimiter
A single-character string for terminating each line.
'\n'
lonlat
In OmniSci, POINT fields require longitude before latitude. If your source data lists latitude before longitude, set lonlat to 'false'.
'true'
max_reject
Number of records that the COPY statement allows to be rejected before terminating the COPY command. Records can be rejected for a number of reasons, including invalid content in a field, or an incorrect number of columns. The details of the rejected records are reported in the ERROR log. COPY returns a message identifying how many records are rejected. The records that are not rejected are inserted into the table, even if the COPY stops because the max_reject count is reached.
Note: If you run the COPY command from OmniSci Immerse, the COPY command does not return messages to Immerse once the SQL is verified. Immerse does not show messages about data loading, or about data-quality issues that result in max_reject triggers.
100,000
nulls
A string pattern indicating that a field is NULL.
An empty string, 'NA', or \N
parquet
Import data in Parquet format. Parquet files can be compressed using Snappy. Other archives such as .gz or .zip must be unarchived before you import the data.
'false'
plain_text
Indicates that the input file is plain text so that it bypasses the libarchive decompression utility.
CSV, TSV, and TXT are handled as plain text.
quote
A single-character string for quoting a field.
" (double quote). All characters inside quotes are imported “as is,” except for line delimiters.
quoted
Either 'true' or 'false', indicating whether the input file contains quoted fields.
'true'
source_srid
When importing into GEOMETRY(*, 4326) columns, specifies the SRID of the incoming geometries, all of which are transformed on the fly. For example, to import from a file that contains EPSG:2263 (NAD83 / New York Long Island) geometries, run the COPY command and include WITH (source_srid=2263). Data targeted at non-4326 geometry columns is not affected.
0
threads
Number of threads for performing the data import.
Number of CPU cores on the system
By default, the CSV parser assumes one row per line. To import a file with multiple lines in a single field, specify threads = 1 in the WITH clause.
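Python's csv module illustrates why a multiline field defeats a one-row-per-line parser: a quoted field may legally contain the line delimiter. This is an illustration of CSV semantics only, not OmniSci's parser:

```python
import csv
import io

# A quoted field that spans two physical lines: a naive one-row-per-line
# parser would see three lines here, but a proper CSV parser sees two rows.
data = 'id,comment\n1,"line one\nline two"\n'

rows = list(csv.reader(io.StringIO(data)))
print(rows)  # [['id', 'comment'], ['1', 'line one\nline two']]
```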

Examples

COPY tweets FROM '/tmp/tweets.csv' WITH (nulls = 'NA');
COPY tweets FROM '/tmp/tweets.tsv' WITH (delimiter = '\t', quoted = 'false');
COPY tweets FROM '/tmp/*' WITH (header='false');
COPY trips FROM '/mnt/trip/trip.parquet/part-00000-0284f745-1595-4743-b5c4-3aa0262e4de3-c000.snappy.parquet' WITH (parquet='true');

Globbing, Filtering, and Sorting Examples

These examples assume the following folder and file structure:

Globbing

Local Parquet/CSV files can be globbed by specifying either a path name with a wildcard or a folder name.
Globbing a folder recursively returns all files under the specified folder. For example,
COPY table_1 FROM ".../subdir";
returns file_3, file_4, file_5.
Globbing with a wildcard returns any file paths matching the expanded file path. So
COPY table_1 FROM ".../subdir/file*";
returns file_3 and file_4.
Globbing does not apply to S3, because file paths specified for S3 always use prefix matching.

Filtering

Use file filtering to filter out unwanted files that have been globbed. To use filtering, specify the REGEX_PATH_FILTER option; files not matching this pattern are excluded from the import. Filtering behavior is consistent across local and S3 use cases.
For example, the following command:
COPY table_1 from ".../" WITH (REGEX_PATH_FILTER=".*file_[4-5]");
imports only file_4 and file_5.
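The filter semantics can be sketched with Python's re module, assuming the pattern must match the whole path (paths here are illustrative):

```python
import re

# REGEX_PATH_FILTER sketched with Python's re module: a file is kept only
# if its full path matches the pattern. Paths are illustrative.
pattern = re.compile(r".*file_[4-5]")
paths = ["/data/dir/file_3", "/data/subdir/file_4", "/data/subdir/file_5"]

kept = [p for p in paths if pattern.fullmatch(p)]
print(kept)  # ['/data/subdir/file_4', '/data/subdir/file_5']
```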

Sorting

Use the FILE_SORT_ORDER_BY option to specify the order in which files are imported.
FILE_SORT_ORDER_BY Options
    pathname (default)
    date_modified
    regex *
    regex_date *
    regex_number *
*FILE_SORT_REGEX option required
Using FILE_SORT_ORDER_BY
COPY table_1 from ".../" WITH (FILE_SORT_ORDER_BY="date_modified");
Using FILE_SORT_ORDER_BY with FILE_SORT_REGEX
Regex sort keys are formed by the concatenation of all capture groups from the FILE_SORT_REGEX expression. Regex sort keys are strings but can be converted to dates or FLOAT64 with the appropriate FILE_SORT_ORDER_BY option. File paths that do not match the provided capture groups or that cannot be converted to the appropriate date or FLOAT64 are treated as NULLs and sorted to the front in a deterministic order.
Multiple Capture Groups:
FILE_SORT_REGEX=".*/data_(.*)_(.*)_"
    /root/dir/unmatchedFile -> <NULL>
    /root/dir/data_andrew_54321_ -> andrew54321
    /root/dir2/data_brian_Josef_ -> brianJosef
Dates:
FILE_SORT_REGEX=".*data_(.*)"
    /root/data_222 -> <NULL> (invalid date conversion)
    /root/data_2020-12-31 -> 2020-12-31
    /root/dir/data_2021-01-01 -> 2021-01-01
Import:
COPY table_1 from ".../" WITH (FILE_SORT_ORDER_BY="regex", FILE_SORT_REGEX=".*file_(.)");
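The sort-key construction described above can be sketched in Python; the paths and regex mirror the capture-group example, and the NULL-first ordering is modeled with a tuple key. This is an illustration, not OmniSci's implementation:

```python
import re

# Regex sort keys: concatenate all capture groups; paths that do not match
# get a NULL-like key that sorts to the front deterministically.
pattern = re.compile(r".*/data_(.*)_(.*)_")
paths = [
    "/root/dir2/data_brian_Josef_",
    "/root/dir/unmatchedFile",
    "/root/dir/data_andrew_54321_",
]

def sort_key(path):
    m = pattern.fullmatch(path)
    # (0, "") models NULL keys sorting first; matched keys sort by the
    # concatenation of their capture groups.
    return (0, "") if m is None else (1, "".join(m.groups()))

print(sorted(paths, key=sort_key))
```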

Geo Import

You can use COPY FROM to import geo files. You can create the table based on the source file and then load the data:
COPY tableName FROM 'source' WITH (geo='true', ...);
You can also append data to an existing, predefined table:
COPY tableName FROM 'source' WITH (geo='true', ...);
Use the following syntax, depending on the file source.
Source
Syntax
Local server
COPY [tableName] FROM '/filepath' WITH (geo='true', ...);
Web site
COPY [tableName] FROM 'http[s]://website/filepath' WITH (geo='true', ...);
Amazon S3
COPY [tableName] FROM 's3://bucket/filepath' WITH (geo='true', s3_region='region', s3_access_key='accesskey', s3_secret_key='secretkey', ... );
    If you are using COPY FROM to load to an existing table, the field type must match the metadata of the source file. If it does not, COPY FROM throws an error and does not load the data.
    COPY FROM appends data from the source into the target table. It does not truncate the table or overwrite existing data.
    Supported DATE formats when using COPY FROM include mm/dd/yyyy, dd-mmm-yy, yyyy-mm-dd, and dd/mmm/yyyy.
    COPY FROM fails for records with latitude or longitude values that have more than 4 decimal places.
The following WITH options are available for geo file imports from all sources.
Parameter
Description
Default Value
geo_assign_render_groups
Enable or disable automatic render group assignment for polygon imports; can be true or false. If polygons are not needed for rendering, set this to false to speed up import.
true
geo_coords_type
Coordinate type used; must be geography.
N/A
geo_coords_encoding
Coordinates encoding; can be geoint(32) or none.
geoint(32)
geo_coords_srid
Coordinates spatial reference; must be 4326 (WGS84 longitude/latitude).
N/A
geo_explode_collections
Explodes MULTIPOLYGON, MULTILINESTRING, or MULTIPOINT geo data into multiple rows in a POLYGON, LINESTRING, or POINT column, with all other columns duplicated.
When importing from a WKT CSV with a MULTIPOLYGON column, the table must have been manually created with a POLYGON column. Similarly, a MULTILINESTRING column must be created with LINESTRING, and a MULTIPOINT column must be created with POINT. Storing MULTILINESTRING or MULTIPOINT directly is not supported, but can be exploded into a LINESTRING or POINT column.
When importing from a geo file, the table is automatically created with the correct type of column.
When the input column contains a mixture of MULTI and single geo, the MULTI geo are exploded, but the singles are imported normally. For example, a column containing five two-polygon MULTIPOLYGON rows and five POLYGON rows imports as a POLYGON column of fifteen rows.
false
Currently, a manually created geo table can have only one geo column. If it has more than one, import is not performed.
The following file types are supported:
    ESRI Shapefile (.shp)
    ESRI file geodatabase (.gdb)
    GeoJSON (.geojson, .json, .geojson.gz, .json.gz)
    KML (.kml, .kmz)
    File bundles:
      .zip
      .tar
      .tar.gz
      .tgz
An ESRI file geodatabase can have multiple layers, and importing it results in the creation of one table for each layer in the file. This behavior differs from that of importing shapefiles, GeoJSON, or KML files, which results in a single table. For more information, see Importing an ESRI File Geodatabase.
The first compatible file (.shp, .gdb, .geojson, .kml) in the bundle is loaded; subfolders are traversed until a compatible file is found. The rest of the contents in the bundle are ignored. If the bundle contains multiple filesets, unpack the file manually and specify it for import.
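The "first compatible file wins" traversal can be sketched in Python; the exact traversal order OmniSci uses is not specified here, so alphabetical order is an assumption:

```python
import os
import tempfile

# Walk the unpacked bundle top-down and return the first file with a
# compatible extension; everything else in the bundle is ignored.
# Alphabetical traversal order is an assumption for this sketch.
COMPATIBLE = (".shp", ".gdb", ".geojson", ".kml")

def first_compatible(root: str):
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()
        for name in sorted(filenames):
            if name.lower().endswith(COMPATIBLE):
                return os.path.join(dirpath, name)
    return None

# Demo with a temporary bundle layout (names are illustrative).
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "layers"))
    open(os.path.join(root, "readme.txt"), "w").close()
    open(os.path.join(root, "layers", "parcels.shp"), "w").close()
    print(first_compatible(root))  # path ending in layers/parcels.shp
```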
For more information about importing specific geo file formats, see Importing Geospatial Files.
CSV files containing WKT strings are not considered geo files and should not be parsed with the geo='true' option. When importing WKT strings from CSV files, you must create the table first, specifying the geo column type and encoding as part of the DDL. For example, for a polygon column with 32-bit compressed encoding:
ggpoly GEOMETRY(POLYGON, 4326) ENCODING COMPRESSED(32)

SQLImporter

SQLImporter is a Java utility run at the command line. It runs a SELECT statement on another database through JDBC and loads the result set into OmniSciDB.

Usage

java -cp [OmniSci utility jar file]:[3rd party JDBC driver]
SQLImporter
-u <userid> -p <password> [(--binary|--http|--https [--insecure])]
-s <omnisci server host> -db <omnisci db> --port <omnisci server port>
[-d <other database JDBC driver class>] -c <other database JDBC connection string>
-su <other database user> -sp <other database user password> -ss <other database sql statement>
-t <OmniSci target table> -b <transfer buffer size> -f <table fragment size>
[-tr] [-nprg] [-adtf] -i <init commands file>

Flags

-r <arg> Row load limit
-h,--help Help message
-u,--user <arg> OmniSci user
-p,--passwd <arg> OmniSci password
--binary Use binary transport to connect to OmniSci
--http Use http transport to connect to OmniSci
--https Use https transport to connect to OmniSci
-s,--server <arg> OmniSci Server
-db,--database <arg> OmniSci Database
--port <arg> OmniSci Port
--ca-trust-store <arg> CA certificate trust store
--ca-trust-store-passwd <arg> CA certificate trust store password
--insecure <arg> Insecure TLS - Do not validate OmniSci server credentials
-d,--driver <arg> JDBC driver class
-c,--jdbcConnect <arg> JDBC connection string
-su,--sourceUser <arg> Source user
-sp,--sourcePasswd <arg> Source password
-ss,--sqlStmt <arg> SQL Select statement
-t,--targetTable <arg> OmniSci Target Table
-b,--bufferSize <arg> Transfer buffer size
-f,--fragmentSize <arg> Table fragment size
-tr,--truncate Truncate table if it exists
-nprg,--noPolyRenderGroups Disable render group assignment
-adtf,--allowDoubleToFloat Allow narrow casting
-i,--initializeFile <arg> File containing init command for DB
OmniSci recommends that you use a service account with read-only permissions when accessing data from a remote database.
In release 4.6 and higher, the user ID (-u) and password (-p) flags are required. If your password includes a special character, you must escape the character using a backslash (\).
If the table does not exist in OmniSciDB, SQLImporter creates it. If the target table in OmniSciDB does not match the SELECT statement metadata, SQLImporter fails.
If the truncate flag is used, SQLImporter truncates the table in OmniSciDB before transferring the data. If the truncate flag is not used, SQLImporter appends the results of the SQL statement to the target table in OmniSciDB.
The -i argument provides a path to an initialization file. Each line of the file is sent as a SQL statement to the remote database. You can use -i to set additional custom parameters before the data is loaded.
The SQLImporter string is case-sensitive. Incorrect case returns the following:
Error: Could not find or load main class com.mapd.utility.SQLimporter

PostgreSQL/PostGIS Support

You can migrate geo data types from a PostgreSQL database. The following table shows the correlation between PostgreSQL/PostGIS geo types and OmniSci geo types.
PostgreSQL/PostGIS Type
OmniSci Type
point
point
lseg
linestring
linestring
linestring
polygon
polygon
multipolygon
multipolygon
Other PostgreSQL types, including circle, box, and path, are not supported.

OmniSciDB Example

java -cp /opt/omnisci/bin/omnisci-utility-5.6.0.jar
com.mapd.utility.SQLImporter -u admin -p HyperInteractive -db omnisci --port 6274
-t mytable -su admin -sp HyperInteractive -c "jdbc:omnisci:myhost:6274:omnisci"
-ss "select * from mytable limit 1000000000"
By default, 100,000 records are selected from OmniSciDB. To select a larger number of records, include a LIMIT clause in the SELECT statement.

Hive Example

java -cp /opt/omnisci/bin/omnisci-utility-5.6.0.jar:/hive-jdbc-1.2.1000.2.6.1.0-129-standalone.jar
com.mapd.utility.SQLImporter
-u user -p password
-db OmniSci_database_name --port 6274 -t OmniSci_table_name
-su source_user -sp source_password
-c "jdbc:hive2://server_address:port_number/database_name"
-ss "select * from source_table_name"

Google Big Query Example

java -cp /opt/omnisci/bin/omnisci-utility-5.6.0.jar:./GoogleBigQueryJDBC42.jar:
./google-oauth-client-1.22.0.jar:./google-http-client-jackson2-1.22.0.jar:./google-http-client-1.22.0.jar:./google-api-client-1.22.0.jar:
./google-api-services-bigquery-v2-rev355-1.22.0.jar
com.mapd.utility.SQLImporter
-d com.simba.googlebigquery.jdbc42.Driver
-u user -p password
-db OmniSci_database_name --port 6274 -t OmniSci_table_name
-su source_user -sp source_password
-c "jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=project-id;OAuthType=0;
[email protected];OAuthPvtKeyPath=/home/simba/myproject.json;"
-ss "select * from schema.source_table_name"

PostgreSQL Example

java -cp /opt/omnisci/bin/omnisci-utility-5.6.0.jar:/tmp/postgresql-42.2.5.jar
com.mapd.utility.SQLImporter
-u user -p password
-db OmniSci_database_name --port 6274 -t OmniSci_table_name
-su source_user -sp source_password
-c "jdbc:postgresql://127.0.0.1/postgres"
-ss "select * from schema_name.source_table_name"

SQLServer Example

java -cp /opt/omnisci/bin/omnisci-utility-5.6.0.jar:/path/sqljdbc4.jar
com.mapd.utility.SQLImporter
-d com.microsoft.sqlserver.jdbc.SQLServerDriver
-u user -p password
-db OmniSci_database_name --port 6274 -t OmniSci_table_name
-su source_user -sp source_password
-c "jdbc:sqlserver://server:port;DatabaseName=database_name"
-ss "select top 10 * from dbo.source_table_name"

MySQL Example

java -cp /opt/omnisci/bin/omnisci-utility-5.6.0.jar:mysql/mysql-connector-java-5.1.38-bin.jar
com.mapd.utility.SQLImporter
-u user -p password
-db OmniSci_database_name --port 6274 -t OmniSci_table_name
-su source_user -sp source_password
-c "jdbc:mysql://server:port/database_name"
-ss "select * from schema_name.source_table_name"

StreamInsert

Stream data into OmniSciDB by attaching the StreamInsert program to the end of a data stream. The data stream can be another program printing to standard out, a Kafka endpoint, or any other real-time stream output. You can specify the appropriate batch size, according to the expected stream rates and your insert frequency. The target table must exist before you attempt to stream data into the table.
<data stream> | StreamInsert <table name> <database name> \
{-u|--user} <user> {-p|--passwd} <password> [{--host} <hostname>] \
[--port <port number>] [--delim <delimiter>] [--null <null string>] \
[--line <line delimiter>] [--batch <batch size>] [{-t|--transform} \
transformation ...] [--retry_count <num_of_retries>] \
[--retry_wait <wait in secs>] [--print_error] [--print_transform]
Setting
Default
Description
<table_name>
n/a
Name of the target table in OmniSci
<database_name>
n/a
Name of the target database in OmniSci
-u
n/a
User name
-p
n/a
User password
--host
n/a
Name of OmniSci host
--delim
comma (,)
Field delimiter, in single quotes
--line
newline (\n)
Line delimiter, in single quotes
--batch
10000
Number of records in a batch
--retry_count
10
Number of attempts before job fails
--retry_wait
5
Wait time in seconds after server connection failure
--null
n/a
String that represents null values
--port
6274
Port number for OmniSciDB on localhost
-t, --transform
n/a
Regex transformation
--print_error
False
Print error messages
--print_transform
False
Print description of transform.
--help
n/a
List options
For more information on creating regex transformation statements, see RegEx Replace.

Example

cat file.tsv | /path/to/omnisci/SampleCode/StreamInsert stream_example \
omnisci --host localhost --port 6274 -u imauser -p imapassword \
--delim '\t' --batch 1000

Importing AWS S3 Files

You can use the SQL COPY FROM statement to import files stored on Amazon Web Services Simple Storage Service (AWS S3) into an OmniSci table, in much the same way you would with local files. In the WITH clause, specify the S3 credentials and region information of the bucket accessed.
COPY <table> FROM '<S3_file_URL>' WITH ([[s3_access_key = '<key_name>', s3_secret_key = '<key_secret>',] | [s3_session_token = '<AWS_session_token>',]] s3_region = '<region>');
Access key and secret key, or session token if using temporary credentials, and region are required. For information about AWS S3 credentials, see https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys.
OmniSci does not support the use of asterisks (*) in URL strings to import items. To import multiple files, pass in an S3 path instead of a file name, and COPY FROM imports all items in that path and any subpath.
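Prefix matching can be illustrated with plain string comparison; the bucket keys below are made up:

```python
# S3 prefix matching sketched: passing a path (prefix) selects every object
# whose key begins with it, including objects in subpaths. Keys illustrative.
keys = [
    "trip-data/2019/file_1.csv",
    "trip-data/2020/file_2.csv",
    "other-data/file_3.csv",
]

prefix = "trip-data/"
matched = [k for k in keys if k.startswith(prefix)]
print(matched)  # ['trip-data/2019/file_1.csv', 'trip-data/2020/file_2.csv']
```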

Custom S3 Endpoints

OmniSci supports custom S3 endpoints, which allows you to import data from S3-compatible services, such as Google Cloud Storage.
To use custom S3 endpoints, add s3_endpoint to the WITH clause of a COPY FROM statement; for example:
COPY trips FROM 's3://omnisci-importtest-data/trip-data/trip_data_9.gz' WITH (header='true', s3_endpoint='storage.googleapis.com');
For information about interoperability and setup for Google Cloud Services, see Cloud Storage Interoperability.
You can also configure custom S3 endpoints by passing the s3_endpoint field to Thrift import_table.

Examples

The following examples show failed and successful attempts to copy the table trips from AWS S3.
omnisql> COPY trips FROM 's3://omnisci-s3-no-access/trip_data_9.gz';
Exception: failed to list objects of s3 url 's3://omnisci-s3-no-access/trip_data_9.gz': AccessDenied: Access Denied

omnisql> COPY trips FROM 's3://omnisci-s3-no-access/trip_data_9.gz' with (s3_access_key='xxxxxxxxxx',s3_secret_key='yyyyyyyyy');
Exception: failed to list objects of s3 url 's3://omnisci-s3-no-access/trip_data_9.gz': AuthorizationHeaderMalformed: Unable to parse ExceptionName: AuthorizationHeaderMalformed Message: The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-1'

omnisql> COPY trips FROM 's3://omnisci-testdata/trip.compressed/trip_data_9.csv' with (s3_access_key='xxxxxxxx',s3_secret_key='yyyyyyyy',s3_region='us-west-1');
Result
Loaded: 100 recs, Rejected: 0 recs in 0.361000 secs
The following example imports all the files in the trip.compressed directory.
omnisql> copy trips from 's3://omnisci-testdata/trip.compressed/' with (s3_access_key='xxxxxxxx',s3_secret_key='yyyyyyyy',s3_region='us-west-1');
Result
Loaded: 105200 recs, Rejected: 0 recs in 1.890000 secs

trips Table

The table trips is created with the following statement:
omnisql> \d trips
CREATE TABLE trips (
medallion TEXT ENCODING DICT(32),
hack_license TEXT ENCODING DICT(32),
vendor_id TEXT ENCODING DICT(32),
rate_code_id SMALLINT,
store_and_fwd_flag TEXT ENCODING DICT(32),
pickup_datetime TIMESTAMP,
dropoff_datetime TIMESTAMP,
passenger_count SMALLINT,
trip_time_in_secs INTEGER,
trip_distance DECIMAL(14,2),
pickup_longitude DECIMAL(14,2),
pickup_latitude DECIMAL(14,2),
dropoff_longitude DECIMAL(14,2),
dropoff_latitude DECIMAL(14,2))
WITH (FRAGMENT_SIZE = 75000000);

Using Server Privileges to Access AWS S3

You can configure OmniSci server to provide AWS credentials, which allows S3 Queries to be run without specifying AWS credentials.
Run the following to enable OmniSci server privileges:
./startomnisci --allow-s3-server-privileges
or
./omnisci_server --allow-s3-server-privileges
S3 Regions are not configured by the server and may require specification on the client side.
Examples
\detect:
$ export AWS_REGION=us-west-1
omnisql> \detect <s3-bucket-uri>
import_table:
$ ./OmniSci-remote -h localhost:6274 import_table "'<session-id>'" "<table-name>" '<s3-bucket-uri>' 'TCopyParams(s3_region="'us-west-1'")'
COPY FROM:
omnisql> COPY <table-name> FROM <s3-bucket-uri> WITH (s3_region='us-west-1');

Configuring AWS Credentials

Configure one of the following credential sources before running omnisci_server.
Environment variables: Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Credential profile: Specify a shared AWS credentials file and profile with the environment variables AWS_SHARED_CREDENTIALS_FILE (default: ~/.aws/credentials) and AWS_PROFILE (default: default).
IAM roles (EC2 instances only): Assign an IAM role to the EC2 instance, as described in the following prerequisites and setup steps.
Prerequisites
1. An IAM Policy that has sufficient access to the S3 bucket.
2. An IAM AWS Service Role of type Amazon EC2, which is assigned the IAM Policy from (1).
Setting Up an EC2 Instance with Roles
For a new EC2 Instance:
1. AWS Management Console > Services > Compute > EC2 > Launch Instance.
2. Select desired Amazon Machine Image (AMI) > Select.
3. Select desired Instance Type > Next: Configure Instance Details.
4. IAM Role > Select desired IAM Role > Review and Launch.
5. Review other options > Launch.
For an existing EC2 Instance:
1. AWS Management Console > Services > Compute > EC2 > Instances.
2. Mark desired instance(s) > Actions > Security > Modify IAM Role.
3. Select desired IAM Role > Save.

KafkaImporter

You can ingest data from an existing Kafka producer to an existing table in OmniSci using KafkaImporter on the command line:
KafkaImporter <table_name> <database_name> {-u|--user} <user_name> \
{-p|--passwd} <user_password> [{--host} <hostname>] \
[--port <OmniSciDB_port>] [--http] [--https] [--skip-verify] \
[--ca-cert <path>] [--delim <delimiter>] [--batch <batch_size>] \
[{-t|--transform} transformation ...] [--retry_count <retry_number>] \
[--retry_wait <delay_in_seconds>] --null <null_value_string> [--quoted true|false] \
[--line <line_delimiter>] --brokers=<broker_name:broker_port> \
--group-id=<kafka_group_id> --topic=<topic_type> [--print_error] [--print_transform]
KafkaImporter requires a functioning Kafka cluster. See the Kafka website and the Confluent schema registry documentation.

KafkaImporter Options

Setting
Default
Description
<table_name>
n/a
Name of the target table in OmniSci
<database_name>
n/a
Name of the target database in OmniSci
-u <username>
n/a
User name
-p <password>
n/a
User password
--host <hostname>
localhost
Name of OmniSci host
--port <port_number>
6274
Port number for OmniSciDB on localhost
--http
n/a
Use HTTP transport
--https
n/a
Use HTTPS transport
--skip-verify
n/a
Do not verify validity of SSL certificate
--ca-cert <path>
n/a
Path to the trusted server certificate; initiates an encrypted connection
--delim <delimiter>
comma (,)
Field delimiter, in single quotes
--line <delimiter>
newline (\n)
Line delimiter, in single quotes
--batch <batch_size>
10000
Number of records in a batch
--retry_count <retry_number>
10
Number of attempts before job fails
--retry_wait <seconds>
5
Wait time in seconds after server connection failure
--null <string>
n/a
String that represents null values
--quoted <boolean>
false
Whether the source contains quoted fields
-t, --transform
n/a
Regex transformation
--print_error
false
Print error messages
--print_transform
false
Print description of transform
--help
n/a
List options
--group-id <id>
n/a
Kafka group ID
--topic <topic>
n/a
The Kafka topic to be ingested
--brokers <broker_name:broker_port>
localhost:9092
One or more brokers

KafkaImporter Logging Options

Setting
Default
Description
--log-directory <directory>
mapd_log
Logging directory; can be relative to data directory or absolute
--log-file-name <filename>
n/a
Log filename relative to logging directory; has format KafkaImporter.{SEVERITY}.%Y%m%d-%H%M%S.log
--log-symlink <symlink>
n/a
Symlink to active log; has format KafkaImporter.{SEVERITY}
--log-severity <level>
INFO
Log-to-file severity level: INFO, WARNING, ERROR, or FATAL
--log-severity-clog <level>
ERROR
Log-to-console severity level: INFO, WARNING, ERROR, or FATAL
--log-channels
n/a
Log channel debug info
--log-auto-flush
n/a
Flush logging buffer to file after each message
--log-max-files <files_number>
100
Maximum number of log files to keep
--log-min-free-space <bytes>
20,971,520
Minimum number of bytes available on the device before oldest log files are deleted
--log-rotate-daily
1
Start new log files at midnight
--log-rotation-size <bytes>
10485760
Maximum file size, in bytes, before new log files are created
Configure KafkaImporter to use your target table. KafkaImporter listens to a pre-defined Kafka topic associated with your table. You must create the table before using the KafkaImporter utility. For example, you might have a table named customer_site_visit_events that listens to a topic named customer_site_visit_events_topic.
The data format must be a record-level format supported by OmniSci.
KafkaImporter listens to the topic, validates records against the target schema, and ingests topic batches of your designated size to the target table. Rejected records use the existing reject reporting mechanism. You can start, shut down, and configure KafkaImporter independent of the OmniSciDB engine. If KafkaImporter is running and the database shuts down, KafkaImporter shuts down as well. Reads from the topic are nondestructive.
KafkaImporter is not responsible for event ordering; a streaming platform outside OmniSci (for example, Spark streaming, flink) should handle the stream processing. OmniSci ingests the end-state stream of post-processed events.
KafkaImporter does not handle dynamic schema creation on first ingest, but must be configured with a specific target table (and its schema) as the basis. There is a 1:1 correspondence between target table and topic.
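The batch-and-insert loop that KafkaImporter performs can be sketched generically in Python; a plain iterable stands in for the Kafka consumer, and load_batch is a hypothetical stand-in for the insert into OmniSciDB:

```python
from typing import Iterable, List

# Generic batch-and-load loop: accumulate records up to the batch size,
# then flush. A real deployment would read from a Kafka consumer and insert
# into OmniSciDB; here the stream and load_batch() are stand-ins.
def ingest(stream: Iterable[str], batch_size: int, load_batch) -> int:
    batch: List[str] = []
    total = 0
    for record in stream:
        batch.append(record)
        if len(batch) >= batch_size:
            load_batch(batch)
            total += len(batch)
            batch = []
    if batch:  # flush the final partial batch
        load_batch(batch)
        total += len(batch)
    return total

loaded = []
n = ingest((f"row_{i}" for i in range(7)), batch_size=3, load_batch=loaded.append)
print(n, [len(b) for b in loaded])  # 7 [3, 3, 1]
```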
cat tweets.tsv | ./KafkaImporter tweets_small omnisci \
-u imauser \
-p imapassword \
--delim '\t' \
--batch 100000 \
--retry_count 360 \
--retry_wait 10 \
--null null \
--port 9999 \
--brokers=localhost:9092 \
--group-id=testImport1 \
--topic=tweet

StreamImporter

StreamImporter is an updated version of the StreamInsert utility used for streaming reads from delimited files into OmniSciDB. StreamImporter uses a binary columnar load path, providing improved performance compared to StreamInsert.
You can ingest data from a data stream to an existing table in OmniSci using StreamImporter on the command line.
StreamImporter <table_name> <database_name> {-u|--user} <user_name> \
{-p|--passwd} <user_password> [{--host} <hostname>] [--port <OmniSciDB_port>] \
[--http] [--https] [--skip-verify] [--ca-cert <path>] [--delim <delimiter>] \
[--null <null string>] [--line <line delimiter>] [--quoted <boolean>] \
[--batch <batch_size>] [{-t|--transform} transformation ...] \
[--retry_count <number_of_retries>] [--retry_wait <delay_in_seconds>] \
[--print_error] [--print_transform]

StreamImporter Options

Setting
Default
Description
<table_name>
n/a
Name of the target table in OmniSci
<database_name>
n/a
Name of the target database in OmniSci
-u <username>
n/a
User name
-p <password>
n/a
User password
--host <hostname>
n/a
Name of OmniSci host
--port <port>
6274
Port number for OmniSciDB on localhost
--http
n/a
Use HTTP transport
--https
n/a
Use HTTPS transport
--skip-verify
n/a
Do not verify validity of SSL certificate
--ca-cert <path>
n/a
Path to the trusted server certificate; initiates an encrypted connection
--delim <delimiter>
comma (,)
Field delimiter, in single quotes
--null <string>
n/a
String that represents null values
--line <delimiter>
newline (\n)
Line delimiter, in single quotes
--quoted <boolean>
true
Either true or false, indicating whether the input file contains quoted fields.
--batch <number>
10000
Number of records in a batch
--retry_count <retry_number>
10
Number of attempts before job fails
--retry_wait <seconds>
5
Wait time in seconds after server connection failure
-t, --transform
n/a
Regex transformation
--print_error
false
Print error messages
--print_transform
false
Print description of transform
--help
n/a
List options

StreamImporter Logging Options

Setting
Default
Description
--log-directory <directory>
mapd_log
Logging directory; can be relative to data directory or absolute
--log-file-name <filename>
n/a
Log filename relative to logging directory; has format StreamImporter.{SEVERITY}.%Y%m%d-%H%M%S.log
--log-symlink <symlink>
n/a
Symlink to active log; has format StreamImporter.{SEVERITY}
--log-severity <level>
INFO
Log-to-file severity level: INFO, WARNING, ERROR, or FATAL
--log-severity-clog <level>
ERROR
Log-to-console severity level: INFO, WARNING, ERROR, or FATAL
--log-channels
n/a
Log channel debug info
--log-auto-flush
n/a
Flush logging buffer to file after each message
--log-max-files <files_number>
100
Maximum number of log files to keep
--log-min-free-space <bytes>
20,971,520
Minimum number of bytes available on the device before oldest log files are deleted
--log-rotate-daily
1
Start new log files at midnight
--log-rotation-size <bytes>
10485760
Maximum file size, in bytes, before new log files are created
Configure StreamImporter to use your target table. StreamImporter listens to a pre-defined data stream associated with your table. You must create the table before using the StreamImporter utility.
The data format must be a record-level format supported by OmniSci.
StreamImporter listens to the stream, validates records against the target schema, and ingests batches of your designated size to the target table. Rejected records use the existing reject reporting mechanism. You can start, shut down, and configure StreamImporter independent of the OmniSciDB engine. If StreamImporter is running but the database shuts down, StreamImporter shuts down as well. Reads from the stream are non-destructive.
StreamImporter is not responsible for event ordering; a streaming platform outside OmniSci (for example, Spark streaming, flink) should handle the stream processing. OmniSci ingests the end-state stream of post-processed events.
StreamImporter does not handle dynamic schema creation on first ingest; it must be configured with a specific target table (and its schema) as the basis. There is a 1:1 correspondence between the target table and the stream.
cat tweets.tsv | ./StreamImporter tweets_small omnisci \
-u imauser \
-p imapassword \
--delim '\t' \
--batch 100000 \
--retry_count 360 \
--retry_wait 10 \
--null null \
--port 9999

Importing Data from HDFS with Sqoop

You can consume a CSV or Parquet file residing in HDFS (Hadoop Distributed File System) into OmniSciDB.
Copy the OmniSci JDBC driver into the Apache Sqoop library, normally found at /usr/lib/sqoop/lib/.

Example

The following is a straightforward import command. For more information on options and parameters for using Apache Sqoop, see the user guide at sqoop.apache.org.
sqoop-export --table iAmATable \
--export-dir /user/cloudera/ \
--connect "jdbc:omnisci:000.000.000.0:6274:omnisci" \
--driver com.omnisci.jdbc.OmniSciDriver \
--username imauser \
--password imapassword \
--direct \
--batch