Troubleshooting

These questions are culled from our public forum and support team. If you have a question to contribute (or, better still, a question and answer), please post it on the OmniSci Community Forum.

Why do I keep running out of memory for GPU queries?

This typically occurs when the system cannot keep the entire working set of columns in GPU memory.

OmniSci provides two options when your system does not have enough GPU memory available to meet the requirements for executing a query.

The first option is to turn off the watchdog (--enable-watchdog=0). That allows the query to run in stages on the GPU. OmniSci orchestrates the transfer of data through layers of abstraction and onto the GPU for execution. See Advanced Configuration Flags for OmniSci Server.

The second option is to set --allow-cpu-retry. If a query does not fit in GPU memory, it falls back and executes on the CPU. See Configuration Flags for OmniSci Server.
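
For example, a minimal omnisci.conf sketch that combines both options might look like this (the file location and any other settings depend on your installation):

  enable-watchdog = false
  allow-cpu-retry = true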

OmniSci is an in-memory database. If your common use case exhausts the capabilities of the VRAM on the available GPUs, try re-estimating the scale of the implementation required to meet your needs. OmniSci can scale across multiple GPUs in a single machine: up to 20 physical GPUs (the most OmniSci has found in one machine), and up to 64 using GPU virtualization tools such as Bitfusion Flex. OmniSci can also scale across multiple machines in a distributed model, allowing for many servers, each with many cards. The operational data size limit is very flexible.

Why do I keep running out of memory for rendering?

This typically occurs when there is not enough OpenGL memory to render the query results.

Review your mapd_server.INFO log to see whether you are exceeding GPU memory; memory pressure appears as EVICTION messages.
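
For example, assuming the default storage location described later in this section, a quick way to search for those messages is:

  grep EVICTION /var/lib/omnisci/data/mapd_log/mapd_server.INFO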

You might need to increase the amount of buffer space for rendering using the --render-mem-bytes configuration flag. Try setting it to 1000000000 (roughly 1 GB). If that does not work, increase it to 2000000000 (roughly 2 GB).
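
In the configuration file, the equivalent setting is a single line (adjust the value as described above):

  render-mem-bytes = 1000000000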

How can I confirm that OmniSci is actually running on GPUs?

The easiest way to check where your queries run is with the omnisql command-line client, which you can find at $OMNISCI_PATH/bin/omnisql.

To start the client, use the command bin/omnisql -p HyperInteractive, where HyperInteractive is the default password.

Once omnisql is running, use one of the following methods to see where your query is running:

  • Prepend the EXPLAIN command to a SELECT statement to see a representation of the code that will run on the CPU or GPU. The first line is important; it shows either IR for the GPU or IR for the CPU. This is the most direct method; see the sketch after this list.
  • The server logs show a message at startup stating whether OmniSci has fallen back to CPU mode. The logs are in your $OMNISCI_STORAGE directory (default /var/lib/omnisci/data), in a directory named mapd_log.
  • After you perform some queries, the \memory_summary command shows how much memory is in use on the CPU and on each GPU. OmniSci manages memory itself, so you will see separate columns for in use (actual memory being used) and allocated (memory assigned to omnisci_server, but not necessarily in use yet). Data is loaded lazily from disk, which means that you must first perform a query before the data is moved to CPU and GPU. Even then, OmniSci only moves the data and columns on which you are running your queries.
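
As a sketch of the EXPLAIN check, with placeholder table and column names and abbreviated output, an omnisql session might look like this:

  omnisql> EXPLAIN SELECT dest_city, COUNT(*) FROM flights GROUP BY dest_city;
  IR for the GPU
  ...

If the server has fallen back to CPU execution, the first line reads IR for the CPU instead.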

How do I compare the performance on GPUs vs. CPUs to demonstrate the performance gain of GPUs?

To see the performance advantage of running on GPU over CPU, manually switch where your queries run (a sketch of such a session follows these steps):

  1. Enable timing reporting in omnisql using \timing.
  2. Ensure that you are in GPU mode (the default): \gpu.
  3. Run your queries a few times. Because data is lazily moved to the GPUs, the first time you query new data/columns takes a bit longer than subsequent times.
  4. Switch to CPU mode: \cpu. Again, run your queries a few times.
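
A sketch of such a session, with placeholder table and column names and purely illustrative timings:

  omnisql> \timing
  omnisql> \gpu
  omnisql> SELECT carrier_name, AVG(arrdelay) FROM flights GROUP BY carrier_name;
  ...
  Execution time: 45 ms, Total time: 46 ms
  omnisql> \cpu
  omnisql> SELECT carrier_name, AVG(arrdelay) FROM flights GROUP BY carrier_name;
  ...
  Execution time: 780 ms, Total time: 781 ms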

If you are using a sufficiently large dataset, you should see a significant difference between the two. However, if the sample set is relatively small (for example, the 7-million-row flights dataset that comes preloaded in OmniSci), the fixed overhead of running on the GPUs can make those queries appear to run slower than on the CPU.

Does OmniSci support a single server with different GPUs? For example, can I install OmniSci on one server with two NVIDIA GTX 760 GPUs and two NVIDIA GTX TITAN GPUs?

OmniSci does not support mixing different GPU models. Initially, you might not notice many issues with that configuration because the GPUs are from the same generation. However, in this case you should consider removing the GTX 760 GPUs or configuring OmniSci not to use them.

To configure OmniSci to use specific GPUs:

  1. Run the nvidia-smi command to see the GPU IDs of the GTX 760s. Most likely, the GPUs are grouped together by type.
  2. Edit the omnisci_server config file as follows:
    1. If the GTX 760 GPUs are 0,1, configure omnisci_server with the option start-gpu=2 to use the remaining two TITAN GPUs.
    2. If the GTX 760s are 2,3, add the option num-gpus=2 to the config file.

The location of the config file depends on how you installed OmniSci.
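
For example, if nvidia-smi shows the GTX 760s as device IDs 0 and 1, a minimal sketch of the relevant config lines would be the following, which tells omnisci_server to skip the first two devices and use the two TITANs:

  start-gpu = 2
  num-gpus = 2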

How can I avoid creating duplicate rows?

To detect duplication prior to loading data into OmniSciDB, you can perform the following steps. For this example, the files are labeled A, B, C, ... Z; a SQL sketch of one pass through the steps follows the list.

  1. Load file A into table MYTABLE.
  2. Run the following query.
    select uniqueCol, count(*) as dups from MYTABLE group by uniqueCol having count(*) > 1;

    There should be no rows returned; if rows are returned, file A contains duplicate values in uniqueCol and is not unique.

  3. Load file B into table TEMPTABLE.
  4. Run the following query.
    select t1.uniqueCol from MYTABLE t1 join TEMPTABLE t2 on t1.uniqueCol = t2.uniqueCol;

    There should be no rows returned; if rows are returned, file B contains values that already exist in MYTABLE. Fix file B using the values returned by the query. (To confirm that file B is also unique within itself, you can run the group-by check from step 2 against TEMPTABLE.)

  5. Load the fixed B file into MYTABLE.
  6. Drop table TEMPTABLE.
  7. Repeat steps 3 through 6 for each remaining file in the set.
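
As a sketch of one pass through steps 3 through 6, assuming the unique column is named uniqueCol, that TEMPTABLE already exists with the same schema as MYTABLE, and that the file paths are placeholders:

    COPY TEMPTABLE FROM '/path/to/B.csv';
    select t1.uniqueCol from MYTABLE t1 join TEMPTABLE t2 on t1.uniqueCol = t2.uniqueCol;
    COPY MYTABLE FROM '/path/to/B_fixed.csv';
    DROP TABLE TEMPTABLE;

The select should return zero rows; any values it does return are already present in MYTABLE and must be removed from file B before the final COPY.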