Kamran Agayev's Oracle Blog

Oracle Certified Master


Step by Step Oracle 10g RAC installation ebook

Posted by Kamran Agayev A. on 15th March 2013

You can download my "Step by Step Oracle 10g RAC installation on VMware" ebook from the following link.



Debugging Data Pump session

Posted by Kamran Agayev A. on 11th March 2013

While importing a dump file, the job sometimes takes too long, seems to be "hanging", and appears to be processing something unknown in the background or queuing for something. As a DBA, you should NOT just wait "until it finishes". You have to investigate and find the cause.

Yesterday, while importing the metadata of a big database, the session appeared to hang and took hours to finish.

This is the command I ran to import the dump file into the new database:

impdp system/oracle directory=mydir dumpfile=dump01.dmp exclude=tablespace,statistics parfile=param.ini METRICS=y

The import stalled at the following step:

Processing object type DATABASE_EXPORT/SCHEMA/CLUSTER/CLUSTER

     Completed 1 CLUSTER objects in 0 seconds

Processing object type DATABASE_EXPORT/SCHEMA/CLUSTER/INDEX

     Completed 1 INDEX objects in 0 seconds

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE

Note: I suggest using the undocumented METRICS=Y parameter so that the number of objects and the time it takes to process them are written to the log file.

After waiting for a while, I became curious about what was happening in the background, so I decided to debug the Data Pump session and ran the following query to get the SID and SERIAL# of the running Data Pump sessions. These values will be used to trace the session in the next step:

col username format a10
set linesize 150
col job_name format a20
col program format a25
SELECT TO_CHAR (SYSDATE, 'YYYY-MM-DD HH24:MI:SS') "DATE",
     s.program,
     s.sid,
     s.status,
     s.username,
     d.job_name,
     p.spid,
     s.serial#,
     p.pid
FROM V$SESSION s, V$PROCESS p, DBA_DATAPUMP_SESSIONS d
WHERE p.addr = s.paddr AND s.saddr = d.saddr;

DATE                PROGRAM                          SID STATUS   USERNAME   JOB_NAME             SPID            SERIAL#        PID
------------------- ------------------------- ---------- -------- ---------- -------------------- ------------ ---------- ----------
2013-03-07 14:52:04 udi@TestDB02 (TNS V1-V3)         152 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01   12143                36         17
2013-03-07 14:52:04 oracle@TestDB02 (DM00)           148 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01   12147                 3         19
2013-03-07 14:52:04 oracle@TestDB02 (DW01)           142 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01   12149                15         20

Then I queried the V$SESSION_WAIT view to get the wait events:

SELECT   w.sid, w.event, w.seconds_in_wait
  FROM   V$SESSION s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w
 WHERE   s.saddr = d.saddr AND s.sid = w.sid;

       SID EVENT                                                            SECONDS_IN_WAIT
---------- ---------------------------------------------------------------- ---------------
       142 control file sequential read                                                   1
       148 wait for unread message on broadcast channel                                  43
       152 wait for unread message on broadcast channel                                 306

If you run the same query a number of times, you'll see how the wait events change.
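As a side note, for Data Pump jobs that are moving table data you can also follow the progress through V$SESSION_LONGOPS, where the job registers a row under its job name. A minimal sketch, assuming the job name from the query above (a metadata-only import like this one may not show anything here):

SELECT opname, sofar, totalwork,
       ROUND (100 * sofar / NULLIF (totalwork, 0), 1) AS pct_done
  FROM v$session_longops
 WHERE opname = 'SYS_IMPORT_FULL_01'
   AND sofar <> totalwork;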

To get more detailed information, I decided to trace the first session (SID = 142), passing the values of the SID and SERIAL# columns to the DBMS_SYSTEM.SET_EV procedure to start a 10046 trace:

SQL> EXECUTE SYS.DBMS_SYSTEM.SET_EV (142, 15, 10046, 8, '');
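As a side note, on 10g and later the same level 8 trace (waits, no binds) can also be switched on and off with the documented DBMS_MONITOR package, using the same SID and SERIAL# values:

SQL> EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE (session_id => 142, serial_num => 15, waits => TRUE, binds => FALSE);

SQL> EXECUTE DBMS_MONITOR.SESSION_TRACE_DISABLE (session_id => 142, serial_num => 15);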

Then I switched to the udump folder and checked the trace file named "dbname_ora_<SPID>", which in this case was testdb_ora_12149.trc:
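If you are not sure where the udump folder is on a particular server, its location can be read from the USER_DUMP_DEST parameter:

SQL> SELECT value FROM v$parameter WHERE name = 'user_dump_dest';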

-bash-4.2$ tail -f testdb_ora_12149.trc

WAIT #4: nam='Data file init write' ela= 3 count=4294967295 intr=32 timeout=2147483647 obj#=51780 tim=13410366772411
WAIT #4: nam='control file sequential read' ela= 21 file#=0 block#=1 blocks=1 obj#=51780 tim=13410366772623
WAIT #4: nam='control file sequential read' ela= 15 file#=1 block#=1 blocks=1 obj#=51780 tim=13410366772660
WAIT #4: nam='control file sequential read' ela= 15 file#=2 block#=1 blocks=1 obj#=51780 tim=13410366772695
WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=16 blocks=1 obj#=51780 tim=13410366772728
WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=18 blocks=1 obj#=51780 tim=13410366772760
WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=24 blocks=1 obj#=51780 tim=13410366772802
WAIT #4: nam='db file sequential read' ela= 12 file#=4 block#=1 blocks=1 obj#=51780 tim=13410366772834
WAIT #4: nam='db file single write' ela= 11313 file#=4 block#=1 blocks=1 obj#=51780 tim=13410366784170
WAIT #4: nam='control file parallel write' ela= 38910 files=3 block#=17 requests=3 obj#=51780 tim=13410366823104
WAIT #4: nam='control file parallel write' ela= 46532 files=3 block#=15 requests=3 obj#=51780 tim=13410366869661
WAIT #4: nam='control file parallel write' ela= 40258 files=3 block#=1 requests=3 obj#=51780 tim=13410366909955
WAIT #4: nam='control file sequential read' ela= 12 file#=0 block#=1 blocks=1 obj#=51780 tim=13410366909986
WAIT #4: nam='rdbms ipc reply' ela= 7016 from_process=5 timeout=21474836 p3=0 obj#=51780 tim=13410366917049
=====================
PARSING IN CURSOR #15 len=296 dep=3 uid=0 oct=6 lid=0 tim=13410366917452 hv=2379717279 ad='de799c58'
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL, :13),groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:17),scanhint=:18 where ts#=:1 and file#=:2 and block#=:3
END OF STMT
PARSE #15:c=0,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=3,og=3,tim=13410366917447
EXEC #15:c=0,e=337,p=0,cr=5,cu=1,mis=0,r=1,dep=3,og=3,tim=13410366918000
STAT #15 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEG$ (cr=5 pr=0 pw=0 time=277 us)'
STAT #15 id=2 cnt=1 pid=1 pos=1 obj=14 op='TABLE ACCESS CLUSTER SEG$ (cr=5 pr=0 pw=0 time=166 us)'
STAT #15 id=3 cnt=1 pid=2 pos=1 obj=9 op='INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=2 pr=0 pw=0 time=31 us)'

Using tkprof, I formatted the content of the trace file into readable output:

tkprof testdb_ora_12149.trc impdp_output.dat

And read the file:

insert into seg$ (file#,block#,type#,ts#,blocks,extents,minexts,maxexts,
  extsize,extpct,user#,iniexts,lists,groups,cachehint,bitmapranges,hwmincr,
  spare1, scanhint)
values
 (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,0,:16,DECODE(:17,0,NULL,
  :17),:18)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      905      0.04       0.01          0          0          0           0
Execute    905      0.26       0.32          0       2756       6420         905
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1810      0.30       0.34          0       2756       6420         905

Everything was fine: the import was loading the metadata into the database by inserting rows into the seg$ (segments) dictionary table. I kept the tail command running on the trace file and ran the first query again from time to time.
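As a side note, when a trace file contains many different statements, tkprof's sort options help bring the most expensive ones to the top. A variant of the command above, using the standard sort keys, might look like this:

tkprof testdb_ora_12149.trc impdp_output_sorted.dat sort=exeela,fchela sys=yes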

After a while I got the "statement suspended, wait error to be cleared" wait event, checked the alert.log file and found that the USERS tablespace was FULL:

SQL> /

       SID EVENT                                                            SECONDS_IN_WAIT
---------- ---------------------------------------------------------------- ---------------
       142 statement suspended, wait error to be cleared                                 20
       148 wait for unread message on broadcast channel                                 905
       152 wait for unread message on broadcast channel                                 246

SQL>

Running tail -f on the alert.log file showed:

statement in resumable session 'SYSTEM.SYS_IMPORT_FULL_01.1' was suspended due to
    ORA-01658: unable to create INITIAL extent for segment in tablespace USERS
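Besides the alert.log, suspended resumable statements and the error that suspended them can also be checked from the DBA_RESUMABLE view:

SELECT session_id, status, name, error_msg
  FROM dba_resumable
 WHERE status = 'SUSPENDED';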

After adding some space to the USERS tablespace the message disappeared.
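For completeness, fixing it comes down to checking the free space in the USERS tablespace and extending it. A minimal sketch (the datafile path below is hypothetical, adjust it to your own layout):

SELECT ROUND (SUM (bytes) / 1024 / 1024) AS free_mb
  FROM dba_free_space
 WHERE tablespace_name = 'USERS';

-- the datafile path is hypothetical
ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/testdb/users02.dbf' SIZE 2G AUTOEXTEND ON NEXT 256M;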

Then I suddenly began receiving the "Data file init write" wait event. That turned out to be the main reason why the import was taking so much time.

The cause was that the AUTOEXTEND increment of the datafile was set very low, so Oracle was making too many calls to the OS to extend the datafile.

The output from the trace file was as follows:

WAIT #4: nam='Data file init write' ela= 3 count=4294967295 intr=32 timeout=2147483647 obj#=51780 tim=13410366772411

To identify the tablespace behind this event, I took the object id from the trace file (obj#=51780) and checked the obj$ table:

SELECT   name
  FROM   OBJ$
 WHERE   obj# = 51780;
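The same lookup can be done through DBA_OBJECTS, which also returns the owner and object type without querying the dictionary base table directly:

SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE object_id = 51780;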

Then, using the DBMS_METADATA.GET_DDL function, I got the tablespace name of the object that couldn't be imported:

SELECT   DBMS_METADATA.get_ddl ('TABLE', 'TAB_TEST', 'USR01') FROM DUAL;
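Alternatively, since only the tablespace name is needed, it can be read directly from DBA_TABLES for the object found in the previous step:

SELECT tablespace_name
  FROM dba_tables
 WHERE owner = 'USR01' AND table_name = 'TAB_TEST';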

As the "Data file init write" event is directly related to AUTOEXTEND and occurs while newly added datafile space is being initialized, I checked the AUTOEXTEND settings of the datafiles. Their increments were too small. After I increased the AUTOEXTEND values, the event disappeared from the trace file and the import job finished successfully.
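To check and then raise the increment, something along these lines can be used (INCREMENT_BY is expressed in database blocks; the datafile name below is hypothetical):

SELECT file_name, autoextensible, increment_by
  FROM dba_data_files
 WHERE tablespace_name = 'USERS';

-- the datafile name is hypothetical
ALTER DATABASE DATAFILE '/u01/oradata/testdb/users01.dbf' AUTOEXTEND ON NEXT 512M MAXSIZE UNLIMITED;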

Reference: Export/Import DataPump Parameter TRACE – How to Diagnose Oracle Data Pump [ID 286496.1]
