
7. Tuning Hints

This section contains notes on tuning the cartridge.


  • INIT.ORA parameters:

    There are several INIT.ORA parameters that affect query performance for the cartridge. These primarily impact the ddblob index, which is the most resource-intensive of the four indextypes.

    db_file_multiblock_read_count: The number of blocks read per I/O request in a sequential scan. Larger is better, up to a value of 32.

    db_block_buffers: The number of buffers in the buffer cache. The blob index uses the buffer cache, so if the buffer cache is larger than the blob index, no physical disk I/O is required for a search. The size of the blob data for an index can be obtained by examining the log data for a search at debug level 9 (select c$dcischem.ddpackage.fsetdebug(9) from dual;). An entry of the form "Data LOB size for search: 9726184" gives the LOB size in bytes.

    log_buffer: The size of the redo log buffer. Index creation is a long transaction which benefits from a large log_buffer. Also, see the section on logfile sizes.
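    Whether the buffer cache can hold the blob data reduces to a simple calculation: the LOB size reported in the search log, divided by the database block size, rounded up. The sketch below is illustrative only (min_buffers is not part of the cartridge, and the 8 KB block size is an assumption; check db_block_size for your instance):

```python
import math

def min_buffers(lob_bytes, db_block_size=8192):
    """Smallest db_block_buffers value that can hold the whole LOB in cache."""
    return math.ceil(lob_bytes / db_block_size)

# LOB size reported by a debug-level-9 search log entry:
#   "Data LOB size for search: 9726184"
print(min_buffers(9726184))   # 1188 blocks at an 8 KB block size
```

    In practice, db_block_buffers should be set well above this floor, since ordinary table and index blocks compete for the same cache.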

  • LOB caching:

    The default behavior of the ddblob indextype is to enable LOB caching for all ddblob indexes created in the system. This means that the Oracle Server will attempt to cache BLOB data in memory like normal tabular data. Provided that the SGA is sufficiently large (db_block_buffers), searches over ddblob indexes will not require any physical disk I/O.

    The downside of LOB caching is that BLOB data displaces other tabular data from the buffer cache during a search. On a system with limited memory, this can slow all queries substantially, since each search can fill the buffer cache with BLOB data. In that scenario it is preferable to disable BLOB caching on a per-index basis.

    Each ddblob index has an auxiliary table associated with it. That auxiliary table has two BLOB columns. The following SQL commands alter the CACHE status for the auxiliary table:

      SQL> create index test_index on test(asmiles) indextype is c$dcischem.ddblob;
      Index created.
      -- the auxiliary table is named <index_name> || '_DDT'
      SQL> desc test_index_ddt;
      Name                   Null?    Type
      ---------------------- -------- --------------
      N1                             NUMBER
      N2                             NUMBER
      HASH                           BLOB
      DAT                            BLOB
      SQL> alter table test_index_ddt modify lob (dat) (nocache);
      Table altered.

    One can turn caching back on with "alter ... modify lob (dat) (cache)". Note that the HASH BLOB is used for inserts, deletes, and updates. It is small compared to the DAT BLOB, so it should remain cached for best overall performance. The DAT BLOB is larger and is used for searching, so select its cache behavior based on overall throughput and tuning requirements.
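    To confirm the current setting, the LOB storage attributes of the auxiliary table can be read from the USER_LOBS data dictionary view. For instance (the table name below assumes the test_index example above; the CACHE column reports YES or NO):

      SQL> select column_name, cache
             from user_lobs
            where table_name = 'TEST_INDEX_DDT';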


  • Logfile sizing:

    Index creation time for the ddblob index is sensitive to logfile availability, as it performs a large amount of I/O that must be logged. In our testing, adding a total of four logfile groups minimized checkpoint blocking. If you see entries in the instance alert log admin/<sid>/bdump/alert_<sid>.log like:

        Current log# 1 seq# 15 mem# 0: /oracle/db/oradata/dev/redo02.log
      Thread 1 cannot allocate new log, sequence 16
      Checkpoint not complete
        Current log# 1 seq# 15 mem# 0: /oracle/db/oradata/dev/redo02.log

    during index creation, then increasing the number and size of your redo logs will improve index creation and modification performance substantially.

      SQL> select * from v$logfile;
          GROUP# STATUS  MEMBER
      ---------- ------- ---------------------------------
               1         /oracle/db/oradata/dev/redo01.log
               2         /oracle/db/oradata/dev/redo02.log

    Something like the following sequence will work. This will give you a total of 16 MB of log file space spread over four separate logfile groups:

      -- Create two new logfiles
      SQL> alter database add logfile group 3
             ('/oracle/db/oradata/dev/redo03.log') size 4000k;
      SQL> alter database add logfile group 4
             ('/oracle/db/oradata/dev/redo04.log') size 4000k;
      -- Advance the current log file into the new ones
      SQL> alter system switch logfile;
      SQL> alter system switch logfile;
      -- Get rid of the original two logfiles
      SQL> alter database drop logfile group 1;
      SQL> alter database drop logfile group 2;
      -- Remove the files redo01.log and redo02.log
      -- from the system and re-create them.
      SQL> quit
      $ rm /oracle/db/oradata/dev/redo01.log
      $ rm /oracle/db/oradata/dev/redo02.log
      $ sqlplus sys/pw
      SQL> alter database add logfile group 1
             ('/oracle/db/oradata/dev/redo01.log') size 4000k;
      SQL> alter database add logfile group 2
             ('/oracle/db/oradata/dev/redo02.log') size 4000k;
      SQL> select * from v$logfile;
          GROUP# STATUS  MEMBER
      ---------- ------- ---------------------------------
               1         /oracle/db/oradata/dev/redo01.log
               2         /oracle/db/oradata/dev/redo02.log
               3         /oracle/db/oradata/dev/redo03.log
               4         /oracle/db/oradata/dev/redo04.log


  • Multithreaded searches:

    Beginning with version 4.93, Daycart supports multithreaded searches for contains(), matches(), and isin() queries. These are precisely the queries whose performance tends to be limited by CPU availability.

    On Solaris, to get the best performance from multithreaded searches it is desirable to use the memory-management library that Sun provides for this purpose (libmtmalloc.a). The mtmalloc library is optimized by Sun for multithreaded applications: it minimizes inter-thread contention for shared heap data (e.g., malloc() calls). By default, Oracle does not use this library when building extproc.

    One can relink the extproc executable on Solaris to include libmtmalloc.a and see a significant improvement in search performance (20% or more). The steps are as follows:

    1. Find the make.log file in your Oracle installation. Typically it is in $ORACLE_HOME/install/.
    2. In make.log, locate the block of messages pertaining to extproc and copy the link command into a separate script file. Typically, a string search for 'extproc' will find the command. The desired line starts with:
      /usr/ccs/bin/ld -o ..../extproc -L ...
    3. Edit the link line and add the argument '-lmtmalloc' before the sysliblist entry. Change the name of the output target to extproc_mt.
    4. Run the link command.
    5. Change the SQLNet listener.ora file to use the new binary and restart SQLNet.

    An example link command follows; the edits are the extproc_mt output name and the added -lmtmalloc argument:

    /usr/ccs/bin/ld -o /oracle/products/o92e/bin/extproc_mt \
             -L/oracle/products/o92e/rdbms/lib/ \
             -L/oracle/products/o92e/lib/ \
             -dy /oracle/products/o92e/lib/WS6U1/crti.o \
             /oracle/products/o92e/lib/WS6U1/crt1.o  \
             /oracle/products/o92e/rdbms/lib/hormc.o \
             /oracle/products/o92e/rdbms/lib/defopt.o \
             /oracle/products/o92e/rdbms/lib/homts.o  \
             -lagtsh -lpls9 -lplp9 -lthread  -lclntsh \
             -lxml9 -lcore9 -lunls9 -lnls9 \
             /oracle/products/o92e/lib/libgeneric9.a \
             -lmtmalloc \
             `cat /oracle/products/o92e/lib/sysliblist` \
             -R /opt/SUNWcluster/lib:/oracle/products/o92e/lib  \
             -Y P,:/opt/SUNWcluster/lib:/usr/ccs/lib:/usr/lib  \
             -Qy -lc -laio -lposix4 -lkstat -lm \
             /oracle/products/o92e/lib/WS6U1/crtn.o -lvsn9
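
    Step 5 above amounts to pointing the external-procedure entry in listener.ora at the new binary. A hypothetical SID_DESC entry is sketched below; the SID_NAME and ORACLE_HOME values are placeholders for your installation, and only the PROGRAM line changes:

      SID_LIST_LISTENER =
        (SID_LIST =
          (SID_DESC =
            (SID_NAME = PLSExtProc)
            (ORACLE_HOME = /oracle/products/o92e)
            (PROGRAM = extproc_mt)
          )
        )

    After editing, restart the listener (lsnrctl stop; lsnrctl start) so the change takes effect.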