Dec 31, 2016

Conversion of DB2 LRSN to Timestamp and vice-versa


Redirected Recovery (a component of IBM DB2 Recovery Expert for z/OS) migrates data from Source to Target Databases using the Image Copies of the Source Database's Tablespaces. As it can perform only point-in-time (PIT) Recovery, we have to specify a Timestamp (in the Recovery JCL build process) that is close to the actual maximum Image Copy completion time across all Tablespaces in the Database.

Getting the max Image Copy completion time is trickier than it looks, as I realized recently. Below is the scenario.

1.  I used the below SQL query to get the max ICTIME for the Image Copies of a partitioned Tablespace.

  SELECT MAX(ICTIME)
    FROM SYSIBM.SYSCOPY
   WHERE DBNAME = 'XXXYYY'
     AND ICDATE = '161202'   -- ICDATE is CHAR(6), so it must be quoted
     AND DSNUM  > 0
   WITH UR;

2. I used a Timestamp 5 minutes later than the MAX(ICTIME) in my Recovery JCL build process, but the process failed for one Tablespace. After a PMR with IBM, the following was confirmed.
a. Though the Timestamp I used is greater than the ICTIME of the Tablespace, it is not greater than the timestamp represented by the Tablespace's START_RBA (the hexadecimal LRSN).
b. The ICTIME of the Tablespace is 215511, and the Timestamp I used is 2016-12-02-22.26.00. The LRSN for my Timestamp is D1BC9BC5FA6, which is less than the LRSN of the Image Copy, D1BC9BC9594.
For the above reason, the Tool was unable to generate the Recovery JCL for the Tablespace.

Knowing how to convert an LRSN to a Timestamp and vice versa is needed in scenarios like the one above. With help from the IDUG DB2 community, I was able to understand the whole conversion process.


→ LRSN to Timestamp

1) First, get the START_RBA (which holds the LRSN in a data-sharing environment) in hexadecimal format. The below SQL query can be used for that.

SELECT HEX(START_RBA) FROM SYSIBM.SYSCOPY
WHERE
   DBNAME = 'LPAHCA'   AND
   TSNAME = 'TSATAETT' AND
   ICDATE = '161231'   AND
   ICTYPE = 'F'
WITH UR;

2) The below SQL query converts the hex value of START_RBA to a Timestamp.

SELECT TIMESTAMP(X'hex value of START_RBA') FROM SYSIBM.SYSDUMMY1;


→ LRSN to Timestamp and vice versa can be achieved using the below REXX code.

/* REXX */

/* CONVERT TIMESTAMP TO DB2 LRSN */
 TS = '2016-12-31-07.04.37.747296'
 NUMERIC DIGITS 31
 PARSE VAR TS YR '-' MT '-' DY '-' HR '.' MI '.' SE '.' MS
 DYS = DATE('B', YR||MT||DY, 'S')  /* DAYS SINCE 0001-01-01         */
 DYS = DYS - 693595                /* MINUS DAYS UP TO 1900-01-01   */
 TMS = (DYS * 86400000000),
     + (HR * 3600000000),
     + (MI * 60000000),
     + (SE * 1000000),
     + MS                          /* TOTAL MICROSECONDS SINCE 1900 */
 LRSN = SUBSTR(D2X(TMS), 1, 12)    /* HIGH-ORDER 12 HEX DIGITS      */
 SAY 'MICROSECONDS: ' TMS
 SAY 'TIMESTAMP:    ' TS
 SAY 'LRSN:         ' LRSN

/* CONVERT DB2 LRSN TO TIMESTAMP */
 SAY STCK(LRSN)
 EXIT
 STCK: PROCEDURE
 ACC = ARG(1)
 ACC = X2C(ACC || '0000')   /* PAD 6-BYTE LRSN TO AN 8-BYTE TOD VALUE */
 TDATE = COPIES('0', 26)
 ADDRESS LINKPGM "BLSUXTOD ACC TDATE"
 RETURN TDATE
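For quick checks off the mainframe, the same arithmetic can be sketched in Python. This is a minimal sketch under the assumption stated in the REXX above: a pre-DB2 11, 6-byte LRSN, i.e. the high-order 12 hex digits of the microseconds elapsed since 1900-01-01. The function names are mine.

```python
from datetime import datetime, timedelta

EPOCH = datetime(1900, 1, 1)  # LRSN values count microseconds from here

def timestamp_to_lrsn(ts: str) -> str:
    """Convert a DB2 timestamp string to a 12-hex-digit LRSN."""
    dt = datetime.strptime(ts, "%Y-%m-%d-%H.%M.%S.%f")
    delta = dt - EPOCH
    micros = (delta.days * 86400 + delta.seconds) * 1_000_000 + delta.microseconds
    # keep the high-order 12 hex digits, like SUBSTR(D2X(TMS),1,12) in the REXX
    return format(micros, "013X")[:12]

def lrsn_to_timestamp(lrsn: str) -> str:
    """Convert a 12-hex-digit LRSN back to a DB2 timestamp string."""
    # pad back to 13 hex digits; the truncated low digit costs at most ~15 us
    micros = int(lrsn.ljust(13, "0"), 16)
    dt = EPOCH + timedelta(microseconds=micros)
    return dt.strftime("%Y-%m-%d-%H.%M.%S.%f")
```

Because equal-length hex strings compare in chronological order, comparing two LRSNs as strings reproduces exactly the comparison that failed in the scenario above.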



Oct 9, 2016

Caché Journaling Status Monitoring

In InterSystems Caché, Journals are the files that record the changes made to the Databases. 
The process of recording the Database changes is called Journaling.

Journaling Uses:
1. Record of changes made to Data
2. Supports restoring Data from backup
3. Supports Transaction processing
4. Protects Application Data
5. Prevents Data loss or inconsistent Data
6. Required for
a.      Shadowing
b.      Mirroring
c.       Shared-disk cluster configurations

In our Caché on AIX architecture, Journals are continuously shipped from the Production Server (called the Primary Server) to the Reporting Server (called the Secondary Server). Journal files are mounted on the /journal file system.

Whenever the /journal file system becomes full, the Journaling daemon retries the activity multiple times before Journaling gets Disabled. Journaling is not re-Enabled automatically after additional Storage is added to the /journal file system; it has to be done manually.

There is no direct approach for proactive monitoring of the Journaling Status, so I coded a simple Shell Script to help monitor it. The Script is scheduled to run every 30 minutes and sends an email if the Journaling Status is found to be Disabled.

#!/bin/ksh
DB_JRN_DET=/home/rfo7936/db_jrn_det
csession cache "Status^JOURNAL" > $DB_JRN_DET
# grep -q exits 0 when "enabled" appears in the status output;
# redirecting grep's output back onto $DB_JRN_DET would truncate it first
if grep -qi enabled $DB_JRN_DET; then
  RC=0
else
  RC=1
fi
if (( RC > 0 )); then
  mail -s "JOURNALING is disabled on $HOSTNAME" <email ID>
fi
exit $RC

Sep 30, 2016

Redirected Recovery of IBM DB2 Recovery Expert for z/OS v3.2


In our ERP DB2 environment on z/OS, we perform frequent Data migrations. Most of the time it's the whole Database (~3 TB) migrated from Prod to QA or DEV sub-systems. My DBA team had been using a Vendor Tool (with REXX customizations) for the Data migrations, but it was unable to handle the volume of Data migrated each time and also had Performance issues. So, we needed a better Tool for Data migrations.

IBM DB2 Recovery Expert for z/OS has been in the market for many years, but it didn't have a component for Data migrations from one DB2 sub-system to another. IBM released the Redirected Recovery component in 2015, which can perform the Data migrations with better Performance.

We are one of the early adopters of the new component, and I was assigned to test and implement the Tool in our DB2 environment. After 8 months of rigorous testing, I implemented Redirected Recovery successfully in our DB2 for z/OS environments.

My colleague and I are going to present our Redirected Recovery implementation experiences at the IBM WoW Conference. Our Presentation is scheduled on 27-Oct-16 from 1:00 to 1:45 PM. Below are our Session details.
Session ID:       DMT-1112
Session Name: Stories from the Front Line on Improving DB2 for z/OS Availability and Operations 







Sep 29, 2016

Caché Database Size Monitoring


We use a vendor supplied application called Artiva for maintaining Medical bills collection details, hosted on AIX servers. Artiva uses InterSystems Caché as the DBMS.

When I started as the DBA, there was a size monitoring alert mechanism for the AIX File systems (on which the Caché Databases are mounted), but none for the Caché Databases themselves. Neither InterSystems nor Artiva's vendor provides a straightforward approach for Database size monitoring.

All Caché Databases have the following Size details.
  1. Current size of the Database
  2. Expansion size                – how many MB the Database can expand further
  3. Maximum size of the Database  – with expansion, the Database can grow to this size

I coded a Shell script that will do the following.
  1. Search for the Artiva-Caché Databases
  2. Calculate the Total Free Storage percentage, available for the Database to grow
  3. Send an email if any Database's Total Free Storage percentage is less than 10%
This Script is scheduled to be executed every 15 minutes, every day.

The Script is as follows.
#!/bin/ksh

######################################################
DB_FREE_SPC=/cache/cachedba/scripts/db_free_spc
DB_FREE_SPC_MOD=/cache/cachedba/scripts/db_free_spc_mod
DB_FREE_REPORT=/cache/cachedba/scripts/db_free_report
######################################################

RC=0

#####Getting Database Size Details from Caché######
csession cache "ALL^%ZUTFREE" > $DB_FREE_SPC
#############################################

# filter to the per-database lines and strip '%' in one pipeline;
# redirecting a grep back onto its own input file would truncate it first
grep -i / $DB_FREE_SPC | grep -v Auto | sed -n 's/%//p' > $DB_FREE_SPC_MOD
echo "--------------------------------------------------------------------------------------------------------------" > $DB_FREE_REPORT
echo " DB_NAME \t\t  CUR_DB_SIZE \t\t FREE_SPC_PCT_IN_DB         MAX_DB_SIZE \t      TOT_FREE_SPC_PCT \t      FREE_SPC_IN_FS" >> $DB_FREE_REPORT
echo "--------------------------------------------------------------------------------------------------------------"  >> $DB_FREE_REPORT

cat $DB_FREE_SPC_MOD |
while read LINE
do
  DB_NM=$(echo $LINE | awk '{print $2}' )
  DB_CUR_SIZ=$(echo $LINE | awk '{print $5}')
  DB_FREE_PCT=$(echo $LINE | awk '{print $4}')
  DB_MAX_SIZ=$(echo $LINE | awk '{print $6}')
  FS_FREE_SIZ=$(echo $LINE | awk '{print $8}')
  DB_CUR_SIZ=${DB_CUR_SIZ%?}        # strip the trailing unit character
  DB_MAX_SIZ=${DB_MAX_SIZ%?}
  ((DB_TOT_FREE_PCT = 100 - DB_CUR_SIZ * 100 / DB_MAX_SIZ))  # multiply first: ksh arithmetic is integer

 if (( DB_TOT_FREE_PCT < 10 )); then
      (( RC += 1 ))
      echo " $DB_NM  \t\t\t $DB_CUR_SIZ   \t\t\t $DB_FREE_PCT  \t\t\t $DB_MAX_SIZ  \t\t\t $DB_TOT_FREE_PCT  \t\t\t $FS_FREE_SIZ"  >> $DB_FREE_REPORT
 fi
done

#####Email step#####
if (( RC > 0 )); then
    mail -s "DATABASE SIZE ALERT ON $HOSTNAME"  <email ID> < $DB_FREE_REPORT
    exit $RC
  else exit $RC
fi
########################
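The percentage arithmetic in the script deserves a note: ksh arithmetic is integer-only, so the current size must be multiplied by 100 before dividing by the maximum size, otherwise the quotient truncates to 0 and every database reports 100% free. The same calculation as a small Python sketch (function name mine):

```python
def total_free_pct(cur_size_mb: int, max_size_mb: int) -> int:
    """Total free storage percentage left for a database to grow.

    Multiply before dividing: with integer arithmetic,
    cur_size_mb / max_size_mb on its own would truncate to 0.
    """
    return 100 - (cur_size_mb * 100) // max_size_mb
```

For example, a database currently at 900 MB of a 1000 MB maximum has 10% total free, right at the alert threshold.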


Sep 24, 2016

InfoSphere Optim Data Growth for Archiving on z/OS


We have ~3 TB of operational data in our ERP DB2 Prod environment, and applying Application upgrades with this volume of data is complicated.
So, we decided to Archive and Delete data from the Prod Operational Database, using InfoSphere Optim Data Growth for Archiving on z/OS.

The Optim Archive works in the following sequence of steps.
1. Define the DASD needed to hold the Archive Datasets
2. Define Tables Relationships within the Optim Directory
    Note: Tables Relationships hold the Tables to be archived that together form a whole unit.
3. Define Access Definitions within the Optim Directory
    Note: Access Definitions hold the criteria used for archiving the Tables.
4. Define the Target for holding the Archive Data
5. Archive the DB2 Operational Data from the Tables into Archive Datasets
6. Verify the Archive Datasets
7. Transfer the Data from Archive Datasets to Target
8. Delete the archived data from DB2 Operational Database

Optim Archive provides multiple options for archiving the DB2 Operational data on z/OS. 
Below are the options provided based on the Target.

1. z/OS as the Target for the Archive Data

2. Hadoop as the Target for the Archive Data

3. Distributed Platform as the Target for the Archive Data

Note:
ODM stands for Open Data Manager. It's a collection of drivers that Reporting Tools need in order to connect to the Archive Datasets on z/OS.
It's included in the Optim Archive for z/OS installation kit.



Sep 12, 2016

DB2 MSTR log processing


DB2 MSTR logs have all kinds of DB2 Error messages, which can provide more insight into the Application performance & activity.

We use a Data-Sharing setup and have 13 MSTR logs across our DEV/QA/Prod regions combined. There is no way of getting customized details from the MSTR logs, short of reading the complete logs.

I developed a new Process that’ll access the MSTR logs, parse the Data and provide the details needed for Application monitoring.
The Process includes 2 Steps.


        Step 1: This Step is executed on a Daily basis.

1.   Connect to SDSF
I coded a REXX routine that connects to SDSF and retrieves the DB2 MSTR logs.

ISFIN_PARA:                                                     
 "ALLOC F(ISFIN) UNIT(VIO) NEW REU",                           
  "CYLINDERS SPACE(10,10) RECFM(F,B) LRECL(80) DSORG(PS)"      
 "ALLOC FI(ISFOUT) NEW DELETE REU ",                           
  "CYLINDERS SPACE(10,10) LRECL(133) RECFM(F,B,A) DSORG(PS)"   
 "ALLOC F(OUTP) UNIT(VIO) NEW REU",                            
  "CYLINDERS SPACE(100,100) LRECL(133) RECFM(F,B,A) DSORG(PS)" 
 QUEUE "ST"                     /* OPEN SDSF STATUS PANEL     */
 QUEUE "OWNER *"                /* SHOW JOBS FOR ALL OWNERS   */
 QUEUE "PRE "A""                /* FILTER ON STC PREFIX IN A  */
 QUEUE "FIND "A""               /* LOCATE YOUR JOB            */
 QUEUE "++S"                    /* BROWSE JESYSMSG DATASET    */
 QUEUE "PRINT FILE OUTP"        /* PRINT JESYSMSG TO DATASET  */
 QUEUE "PRINT 1 999999"         /* PRINT ALL RECS OF JESYSMSG */
 QUEUE "PRINT CLOSE"            /* CLOSE PRINT FILE           */
 QUEUE "END"                    /* END SDSF SESSION           */
 QUEUE "EXIT"                   /* EXIT SDSF                  */
 "EXECIO" QUEUED()" DISKW ISFIN (FINIS"                        
RETURN;   
                                                      
2.  A second REXX routine parses the output captured by the above routine, looking for details of the below DSN messages.
DSNL030I
DSNI031I
DSNT375I
DSNT376I
DSNT501I
DSNB260I
DSNJ139I

3.  Finally, the details parsed by the above routine are inserted into a DB2 Table by another REXX routine I coded.
I created a View for each DSN message, so the corresponding details are inserted through the respective Views.
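The message scan in sub-step 2 can be sketched as follows, a minimal Python illustration of the idea (the real routine is REXX, and the sample log line format here is hypothetical):

```python
import re

# the DSN messages the process looks for (from the list above)
DSN_MSGS = {"DSNL030I", "DSNI031I", "DSNT375I", "DSNT376I",
            "DSNT501I", "DSNB260I", "DSNJ139I"}

# DSN message IDs look like DSN + letter + 3 digits + I
MSG_PATTERN = re.compile(r"\b(DSN[A-Z]\d{3}I)\b")

def extract_messages(log_lines):
    """Yield (message_id, line) for each monitored DSN message found."""
    for line in log_lines:
        match = MSG_PATTERN.search(line)
        if match and match.group(1) in DSN_MSGS:
            yield match.group(1), line.strip()
```

Each yielded pair corresponds to one row inserted into the DB2 Table in sub-step 3.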




   Step 2: This Step is executed on a Weekly basis.
                I set up a JCL that executes the below SQL query and emails the details to my Team.

          SELECT
              STC_REGION
             ,DSN_MSG
             ,COUNT(*) AS OCCURRENCES
          FROM DBMSTRLG.MSTRLOG
          WHERE WEEK(ROW_ENTRY_TS) = WEEK(CURRENT TIMESTAMP) - 1
          GROUP BY
              STC_REGION
             ,DSN_MSG
          WITH UR;

This SQL query gives the number of occurrences of each DSN message during the last week. (Note that WEEK(CURRENT TIMESTAMP) - 1 is 0 during the first week of January, so that week's run returns no rows.)
For more details about the DSN messages, we can query the base DB2 Table.




Jul 23, 2016

Virtual Indexes for Query performance optimization, in DB2 v10 for z/OS


The DSN_VIRTUAL_INDEXES table enables Optimization Tools to test the effect of creating and/or dropping Indexes on the performance of SQL queries.

Optimizer tools like Data Studio Query Optimizer or Dell SQL Optimizer make entries in the DSN_VIRTUAL_INDEXES Table when trying to optimize SQL query performance. It's not advised to make entries in the Table manually.

We don’t have any SQL query optimizer tool installed, and I wanted to test whether an Index creation on the Table will improve the SQL query performance. For that, I did the following.

Step 1: I created the DSN_VIRTUAL_INDEXES Table.
The DDL for the DSN_VIRTUAL_INDEXES Table can be found in the SDSNSAMP library. Below is the DDL I used to create the Table.

CREATE TABLE RFO7936.DSN_VIRTUAL_INDEXES (
  TBCREATOR     VARCHAR(128) FOR MIXED DATA NOT NULL,
  TBNAME        VARCHAR(128) FOR MIXED DATA NOT NULL,
  IXCREATOR     VARCHAR(128) FOR MIXED DATA NOT NULL,
  IXNAME        VARCHAR(128) FOR MIXED DATA NOT NULL,
  ENABLE        CHARACTER(1) FOR MIXED DATA NOT NULL,
  MODE          CHARACTER(1) FOR MIXED DATA NOT NULL,
  UNIQUERULE    CHARACTER(1) FOR MIXED DATA NOT NULL,
  COLCOUNT      SMALLINT                    NOT NULL,
  CLUSTERING    CHARACTER(1) FOR MIXED DATA NOT NULL,
  NLEAF         INTEGER                     NOT NULL,
  NLEVELS       SMALLINT                    NOT NULL,
  INDEXTYPE     CHARACTER(1) FOR MIXED DATA NOT NULL WITH DEFAULT,
  PGSIZE        SMALLINT                    NOT NULL,
  FIRSTKEYCARDF DOUBLE                      NOT NULL DEFAULT -1,
  FULLKEYCARDF  DOUBLE                      NOT NULL DEFAULT -1,
  CLUSTERRATIOF DOUBLE                      NOT NULL DEFAULT -1,
  PADDED        CHARACTER(1) FOR MIXED DATA NOT NULL WITH DEFAULT,
  COLNO1             SMALLINT           WITH DEFAULT NULL,
  ORDERING1      CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO2             SMALLINT           WITH DEFAULT NULL,
  ORDERING2      CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO3             SMALLINT           WITH DEFAULT NULL,
  ORDERING3      CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO4             SMALLINT           WITH DEFAULT NULL,
  ORDERING4      CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO5             SMALLINT           WITH DEFAULT NULL,
  ORDERING5      CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO6             SMALLINT           WITH DEFAULT NULL,
  ORDERING6      CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO7             SMALLINT           WITH DEFAULT NULL,
  ORDERING7      CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO8             SMALLINT           WITH DEFAULT NULL,
  ORDERING8      CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO9             SMALLINT           WITH DEFAULT NULL,
  ORDERING9      CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO10           SMALLINT           WITH DEFAULT NULL,
  ORDERING10   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO11           SMALLINT           WITH DEFAULT NULL,
  ORDERING11   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO12           SMALLINT           WITH DEFAULT NULL,
  ORDERING12   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO13           SMALLINT           WITH DEFAULT NULL,
  ORDERING13   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO14           SMALLINT           WITH DEFAULT NULL,
  ORDERING14   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO15           SMALLINT           WITH DEFAULT NULL,
  ORDERING15   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO16           SMALLINT           WITH DEFAULT NULL,
  ORDERING16   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO17           SMALLINT           WITH DEFAULT NULL,
  ORDERING17   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO18           SMALLINT           WITH DEFAULT NULL,
  ORDERING18   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO19           SMALLINT           WITH DEFAULT NULL,
  ORDERING19   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO20           SMALLINT           WITH DEFAULT NULL,
  ORDERING20   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO21           SMALLINT           WITH DEFAULT NULL,
  ORDERING21   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO22           SMALLINT           WITH DEFAULT NULL,
  ORDERING22   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO23           SMALLINT           WITH DEFAULT NULL,
  ORDERING23   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO24           SMALLINT           WITH DEFAULT NULL,
  ORDERING24   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO25           SMALLINT           WITH DEFAULT NULL,
  ORDERING25   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO26           SMALLINT           WITH DEFAULT NULL,
  ORDERING26   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO27           SMALLINT           WITH DEFAULT NULL,
  ORDERING27   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO28           SMALLINT           WITH DEFAULT NULL,
  ORDERING28   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO29           SMALLINT           WITH DEFAULT NULL,
  ORDERING29   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO30           SMALLINT           WITH DEFAULT NULL,
  ORDERING30   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO31           SMALLINT           WITH DEFAULT NULL,
  ORDERING31   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO32           SMALLINT           WITH DEFAULT NULL,
  ORDERING32   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO33           SMALLINT           WITH DEFAULT NULL,
  ORDERING33   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO34           SMALLINT           WITH DEFAULT NULL,
  ORDERING34   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO35           SMALLINT           WITH DEFAULT NULL,
  ORDERING35   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO36           SMALLINT           WITH DEFAULT NULL,
  ORDERING36   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO37           SMALLINT           WITH DEFAULT NULL,
  ORDERING37   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO38           SMALLINT           WITH DEFAULT NULL,
  ORDERING38   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO39           SMALLINT           WITH DEFAULT NULL,
  ORDERING39   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO40           SMALLINT           WITH DEFAULT NULL,
  ORDERING40   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO41           SMALLINT           WITH DEFAULT NULL,
  ORDERING41   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO42           SMALLINT           WITH DEFAULT NULL,
  ORDERING42   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO43           SMALLINT           WITH DEFAULT NULL,
  ORDERING43   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO44           SMALLINT           WITH DEFAULT NULL,
  ORDERING44   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO45           SMALLINT           WITH DEFAULT NULL,
  ORDERING45   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO46           SMALLINT           WITH DEFAULT NULL,
  ORDERING46   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO47           SMALLINT           WITH DEFAULT NULL,
  ORDERING47   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO48           SMALLINT           WITH DEFAULT NULL,
  ORDERING48   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO49           SMALLINT           WITH DEFAULT NULL,
  ORDERING49   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO50           SMALLINT           WITH DEFAULT NULL,
  ORDERING50   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO51           SMALLINT           WITH DEFAULT NULL,
  ORDERING51   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO52           SMALLINT           WITH DEFAULT NULL,
  ORDERING52   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO53           SMALLINT           WITH DEFAULT NULL,
  ORDERING53   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO54           SMALLINT           WITH DEFAULT NULL,
  ORDERING54   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO55           SMALLINT           WITH DEFAULT NULL,
  ORDERING55   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO56           SMALLINT           WITH DEFAULT NULL,
  ORDERING56   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO57           SMALLINT           WITH DEFAULT NULL,
  ORDERING57   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO58           SMALLINT           WITH DEFAULT NULL,
  ORDERING58   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO59           SMALLINT           WITH DEFAULT NULL,
  ORDERING59   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO60           SMALLINT           WITH DEFAULT NULL,
  ORDERING60   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO61           SMALLINT           WITH DEFAULT NULL,
  ORDERING61   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO62           SMALLINT           WITH DEFAULT NULL,
  ORDERING62   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO63           SMALLINT           WITH DEFAULT NULL,
  ORDERING63   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL,
  COLNO64           SMALLINT           WITH DEFAULT NULL,
  ORDERING64   CHARACTER(1) FOR MIXED DATA             WITH DEFAULT NULL
  )
  IN LDARFO.VIRIDX
  AUDIT NONE
  DATA CAPTURE NONE
  NOT VOLATILE
  APPEND NO;


Step 2: I made the following entry into the DSN_VIRTUAL_INDEXES Table.

  COLUMN          VALUE
  --------------  -----------
  TBCREATOR       LDARFO
  TBNAME          PAEMPLOYEE
  IXCREATOR       RFO7936
  IXNAME          VIR_IDX
  ENABLE          Y
  MODE            C
  UNIQUERULE      D
  COLCOUNT        1
  CLUSTERING      N
  NLEAF           384715
  NLEVELS         -1
  INDEXTYPE       2
  PGSIZE          4
  FIRSTKEYCARDF   384715
  FULLKEYCARDF    384715
  CLUSTERRATIOF   -1
  PADDED          Y
  COLNO1          36
  ORDERING1       A
The above entry says that I want to test whether creating a non-unique Index on the 36th column of the PAEMPLOYEE Table will improve query performance.


Step 3: I executed the following EXPLAIN statement in SPUFI.
EXPLAIN PLAN SET QUERYNO = 2 FOR                          
SELECT * FROM LDARFO.PAEMPLOYEE WHERE SECURITY_NBR='300668';


Step 4: Once the statement in Step 3 is executed, we can query PLAN_TABLE to find out whether the Optimizer considered the Virtual Index for SQL query optimization.
The PLAN_TABLE entry for the query showed that the Virtual Index was indeed considered by the Optimizer.



We have to make sure that the statistics values for the virtual index are valid. Having something like -1 for the number of leaf pages will not help the Optimizer consider the Virtual Index during optimization.

As long as you have 'Y' in the ENABLE column for the Index in DSN_VIRTUAL_INDEXES and the Index definition is valid, the Optimizer should consider it during access path selection for any query against the underlying table.

Note: The above is a simple scenario, but the same process can be followed for more complex ones.