Kamran Agayev's Oracle Blog

Oracle Certified Master

My book is published!

Posted by Kamran Agayev A. on September 13th, 2013

My book is out!

I would like to announce to all my fellow DBA friends that my book is published and is available at Amazon.

http://www.amazon.com/Oracle-Backup-Recovery-secrets-In-Focus/dp/0984428232/

It took me more than a year to write it. Three years ago I received an email from Rampant TechPress asking whether I could write a book about RMAN backup and recovery. I wasn't ready at that time, but I decided to accept the challenge and agreed. The first person who came to my mind to help me was my friend Aman Sharma. He also agreed, and we started the journey together.

After writing the first chapter I realized that we really needed technical reviewers, so I asked Syed Sabdar and Hemant K. Chitale whether they could help us. Fortunately, they agreed. I would like to thank them both, as they did a great job and made a lot of corrections.

All chapters are based on practical scenarios and deep research, and I hope the book will help you understand RMAN in depth, create your own RMAN backup strategy, and recover from any type of failure you may face in your production database. Each chapter contains many scenarios with explanations and step-by-step guides to help you reproduce them in your test environment.

book

Posted in Administration, RMAN Backup and Recovery | 14 Comments »

How to become an Oracle Certified Master (My OCM journey)

Posted by Kamran Agayev A. on August 16th, 2013

This is it! I've finally passed the OCM exam. I had been preparing for it for more than a year.

It's almost impossible to find additional information about the exam online, so I decided to make my own preparation plan, as I had done for the OCP exams before. I would like to share my preparation tips with you and hope they will motivate you to start thinking about the OCM.

"It is literally true that you can succeed best and quickest by helping others to succeed." - Napoleon Hill

Please note that this post has already been checked and approved by the Oracle Certification Team.

 

My preparation

- While working as a production DBA at a leading mobile operator in Azerbaijan (Azercell Telecom LLC), I made a nine-month preparation plan. I printed the exam topics from this link and put the list on my desk. I planned to spend a month preparing for each section (there are eight sections in total).

OCM_Prep1

- For each section and topic I went through the documentation, checked blog posts, studied Metalink notes, and created my own practice exercises. I installed VirtualBox on my laptop and used it for all kinds of practice, including RAC and Data Guard. I created different exercises for every topic of every section, assigned a time frame to each one, and tried to solve them within a very limited time. I tried not to use the GUI in any scenario. I was able to install the Oracle software and create a database in silent mode, using almost all parameters, within a few minutes (with my eyes closed :) ). I learned all the commands by heart; I was able to set up Data Guard from the command line without looking at the documentation. I created RAC several times, added and deleted nodes, and went through all the configuration and management issues. If you're preparing for the OCM exam, you have to forget the GUI altogether!

EM is a very important tool that should be used on your production systems; it helps you administer all your databases from one window. You have to know every single link and page in EM. However, you shouldn't rely only on EM, because some tasks may have to be performed without it. And don't forget to master EM, as the second section is all about Grid Control :)

- You have to be prepared for any type of corruption at any time during the exam. You have to test all kinds of failure scenarios and recover from any type of error. If you are not able to recover your database during the exam, you will be going home early. I created all kinds of corruption and failure scenarios, and prepared and tested all the "How do I recover if …" cases.

- Each time I had studied everything related to a topic and solved all its practice exercises, I crossed that line off the paper. My aim was to end up with a sheet where everything was crossed off, like this :)

OCM_Prep2

- I sensed that I would be fighting against time during the exam, so I decided to improve my typing speed. For the last seven years I have been typing without looking at the keyboard, but that wasn't enough. I worked on both speed and accuracy, registered at www.typeracer.com, and competed every day. As a result, my typing accuracy improved further.

- After eight months I had finished my research for almost every section and topic of the OCM exam. For the first three weeks of the last month I went through all the practice exercises I had created (all of them every week!). During the last week I did all of them every DAY! Each run took me a few hours to finish. In that last week I realized that my fingers were typing faster than my brain could process :) It was very hard to devote several hours a day to all the practices, so I prepared during lunch breaks, after work, and mostly on weekends.

 

Finally, registration day arrived. Actually, I had planned that date before starting the preparation, and I registered two months before the exam. When you register, look for the nearest country, book the hotel closest to the exam center, get the documents you need for a visa from the embassy, and book your flight tickets. Don't forget to contact the OCM exam center 1-2 weeks before the exam to make sure the exam will take place, and make sure you've received a confirmation letter from Oracle.

Before taking the OCM exam, you have to earn the OCP certification and take two Oracle University (OU) courses. You can check this link for more information. If you haven't taken the required courses yet, it is best to pick a topic you don't feel at ease with. After taking the required courses, you should submit them via the Course Submission Form using the course Enrollment ID.

Also, don't forget to print the addresses of the hotel and the exam center. On the first day I could not explain where to go until I showed the printed address to the taxi driver. Don't forget to take your ID to the exam center. In my case, I forgot it, went back to the hotel an hour before the exam started, got stuck in a traffic jam, and reached the exam center only 10 minutes before the start :)

Although I didn't get a chance to relax (I slept only three hours before the exam), I would strongly advise you to rest the day before the exam.

 

About the exam 

The exam starts at 9:00 and finishes at 18:00. You should arrive one hour before the exam. Don't forget to bring your Oracle Testing ID with you.

You will not hit any bugs during the exam. The proctor is responsible for discipline, provides information at the beginning of each section, and doesn't answer any technical questions. You can use the non-searchable documentation.

You need to be able to find the syntax of any command within seconds; you have to know which book, which chapter, and which section to look in for a specific topic or question. During your preparation, don't use the documentation search or Google AT ALL! In some cases you won't get a second chance to correct a mistake you've made. You have no chance to learn something new during the exam. If you don't know a topic, haven't done or practiced it before, or don't even know where to find the information in the documentation, just move on to the next question. I'm sure you've already done the same in the OCP exams.

As I've signed an NDA, I can't provide more information about the exam. In a nutshell, you'll struggle against the clock during the whole exam. I want to repeat it again: if you come across a question that you haven't prepared for and have never seen before, that can be a big problem. Theoretical knowledge is not enough; you have to have practical experience.

To avoid feeling nervous during the exam, it helps to have gone through similar situations in your DBA life. If you've faced such critical issues and solved them in time, the exam will be less stressful.

 

In conclusion …  

No matter how experienced you are, it is still very challenging to use only the command-line interface to fulfill every requirement, resolve data corruption and recover the database from any data loss, create and manage RAC and Data Guard within a very tight timeline, tune the database, and perform different kinds of administrative tasks one after another without any mistakes.

I hope my preparation method will help you start preparing for the OCM exam. If you have any questions, please do not hesitate to post a comment.

First and foremost I would like to thank Almighty God. He has given me the power to believe in my passion and pursue my dreams. I could never have done this without my faith. Sincere thanks to my family, especially to my father, who motivated me every day. I would also like to thank my colleagues, my friends, and all my students for supporting me. Thank you for being genuinely happy for my success and good fortune, as if it were your own.

And last but not least, I would like to share some motivational quotes that can help keep you motivated during the preparation, and throughout your life:

  • "If my mind can conceive it, and my heart can believe it, then I can achieve it." – Muhammad Ali
  • "Most people have the will to win, few have the will to prepare to win." – Bobby Knight
  • "You're never a loser until you quit trying." – Mike Ditka
  • "Many of life's failures are people who did not realize how close they were to success when they gave up."
  • "He who is not courageous enough to take risks will accomplish nothing in life."
  • "Accept challenges in life, so that you may feel the exhilaration of victory."
  • "The greatest danger for most of us is not that we aim too high and we miss it, but that we aim too low and reach it." – Michelangelo
  • "Edison failed 10,000 times before he made the electric light. Do not be discouraged if you fail a few times." – Napoleon Hill

 

 OCM_Certificate_Kamran_Aghayev

Posted in Administration | 69 Comments »

User equivalence check failed for user “oracle”.

Posted by Kamran Agayev A. on August 14th, 2013

If you've ever installed and configured a RAC environment, you have probably faced the "User equivalence check failed" error during the prerequisite checks.

There are three common reasons for this error:

- You didn't configure passwordless ssh between the nodes. Make sure you can ssh to all nodes that are passed to the runcluvfy.sh script (see the quick check after this list).

- The UID of the oracle user is different on one of the nodes.

- The runcluvfy.sh script can't find the executables it needs to perform the connectivity or equivalence check.
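A quick way to verify the first point is to test passwordless ssh in both directions (a minimal sketch; node1 and node2 stand for your actual host names, and neither command should prompt for a password):

[oracle@node1 ~]$ ssh node2 date
[oracle@node2 ~]$ ssh node1 date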

 

In my case, I had configured ssh on both nodes and all parameters of the oracle user were the same, so I decided to debug the runcluvfy.sh script. To debug it, you need to set some environment variables:

export CV_HOME=/home/oracle/cv   (the Cluster Verify debugging home directory)

export SRVM_TRACE=true

./runcluvfy.sh stage -post hwos -n node1,node2 -verbose

 

Then I checked the log file under /home/oracle/cv/pid/cv/log/log_file.log and got the following output:

[Worker 0] [8:57:44:666] [UnixSystem.checkRemoteExecutionSetup:1833] checkRemoteExecutionSetup:: Error checking user equivalence using Secured Shell '/usr/local/bin/ssh'; Wed Aug 14 08:57:44 GMT+04:00 2013

[Thread-7] [8:57:44:850] [StreamReader.run:65] ERROR>remshd: Login incorrect.; Wed Aug 14 08:57:44 GMT+04:00 2013

[Thread-5] [8:58:15:773] [StreamReader.run:65] ERROR>rcmd_af: connect: node1: Connection refused; Wed Aug 14 08:58:15 GMT+04:00 2013

 

Hey, wait. I don't have a /usr/local/bin/ssh file! I do have the ssh executable, but it's in a different directory. And why didn't the script simply use the "which ssh" command to get the exact path of the ssh executable?

Anyway, I created a symbolic link to the original ssh binary and ran the script again. It worked!
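For reference, the fix boils down to one command per node (a sketch; I assume here that the real ssh binary is /usr/bin/ssh, so check your own path with "which ssh" first):

[oracle@node1 ~]$ which ssh
/usr/bin/ssh
[oracle@node1 ~]$ sudo ln -s /usr/bin/ssh /usr/local/bin/ssh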

Posted in Administration, RAC issues | No Comments »

Peer not authenticated – OEM error

Posted by Kamran Agayev A. on August 14th, 2013

A few days ago I was asked to deploy a 9i database to 10g Grid Control. According to the compatibility matrix (check the Metalink note Oracle Enterprise Manager Grid Control Certification Checker (Doc ID 412431.1)), you can install a 10g agent on a 9i database host and deploy it.

So I downloaded the 10.2.0.2 agent from this link and installed it on the 9i machine.

Before getting the installation screen, I got the following error:

Unable to convert from "UTF-8" to "ISO8859-1" for NLS!

After struggling a little, I set the DISPLAY environment variable and was able to get the installation screen. The installation went smoothly and the agent was deployed successfully. However, when trying to reach that database through OEM Grid Control, I got a "peer not authenticated" error:

GRID_Error

 

 

I checked the OEM and agent logs but didn't get much information about the error. The output in the OEM log was as follows:

2013-08-13 11:19:13,950 [EMUI_11_19_13_/console/database/config] ERROR perf.sitemap logStackTrace.2401 – oracle.sysman.emSDK.emd.comm.CommException: peer not authenticated
at oracle.sysman.emSDK.emd.comm.EMDClient.getResponseForRequest(EMDClient.java:1680)
at oracle.sysman.emSDK.emd.comm.EMDClient.getTarget(EMDClient.java:1214)
at oracle.sysman.db.conf.DBConfigObject.getTargetInstance(DBConfigObject.java:463)
at oracle.sysman.db.conf.DBConfigObject.initPropertiesFromRepository(DBConfigObject.java:634)
at oracle.sysman.db.conf.DBConfigObject.initialize(DBConfigObject.java:434)
at oracle.sysman.db.conf.DBConfigUI.initialize(DBConfigUI.java:271)

 

Then I compared the targets.xml file on the 9i machine with the one on a 10g machine where the agent was working properly, and found that two key lines were missing from the 9i targets.xml file:

<Property NAME="UserName" VALUE="emuser"/>
<Property NAME="password" VALUE="bae2342769a2fac163dfdf256813e716" ENCRYPTED="TRUE"/>

 

I added both lines to the 9i targets.xml file, created the "emuser" user, changed the VALUE attribute of the password property to its plain-text password, and set ENCRYPTED to "FALSE".
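So the properties added to the 9i targets.xml looked roughly like this (a sketch; "emuser_password" is a placeholder for the real password of the monitoring user):

<Property NAME="UserName" VALUE="emuser"/>
<Property NAME="password" VALUE="emuser_password" ENCRYPTED="FALSE"/>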

Then I performed the following steps to delete all files pending upload and start the agent again in non-secure mode:

 

$AGENT_HOME/bin/emctl stop agent

 

rm -rf $AGENT_HOME/sysman/emd/recv

rm -rf $AGENT_HOME/sysman/emd/collection

rm -rf $AGENT_HOME/sysman/emd/state

rm -rf $AGENT_HOME/sysman/emd/upload

rm -rf  $AGENT_HOME/sysman/emd/lastupld.xml

$AGENT_HOME/bin/emctl clearstate agent

$AGENT_HOME/bin/emctl start agent
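Optionally (this is my addition, not one of the original steps), you can push the collected data to the OMS immediately instead of waiting for the next scheduled upload:

$AGENT_HOME/bin/emctl upload agent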

 

After a few minutes I was able to manage the database from OEM and the error disappeared.

Posted in Administration | No Comments »

I’m Oracle Certified Master now!

Posted by Kamran Agayev A. on July 10th, 2013

Last week I got an email from Oracle Certification Team that I’ve successfully passed OCM exam. I can proudly say that I’m Oracle Certified Master now!

Although I work as a production DBA, I prepared for this exam for more than six months. I made a plan for each month, even for each week, prepared my own scenarios, and practiced for hours.

The exam was tough: you need to be prepared for anything, you have to make decisions very quickly, and you even have to type very fast.

I will write a blog post on how I prepared for the OCM exam, and I hope it will help and inspire you to take it.

 OCM_ODb10gAdmin_clr

Posted in Uncategorized | 39 Comments »

Step by step video tutorial of installing Oracle Database 12c on OEL 6

Posted by Kamran Agayev A. on July 2nd, 2013

In this video tutorial you’ll learn how to install Oracle Database 12c on Oracle Enterprise Linux 6

Oracle12c_installation

 

You can download .mp4 format of the video from this link

You can watch it online from this link  (Don’t forget to watch it in HD format)

You can download the installation configuration file from this link.

Posted in Administration, Oracle on Linux, Video Tutorials | 5 Comments »

Moving RAC redolog files to the new ASM diskgroup

Posted by Kamran Agayev A. on May 17th, 2013

Here are the steps you need to perform in order to move redo log files to a different ASM diskgroup in a RAC environment.

First of all, make sure the system administrators have provided you with shared disks. If you're using ASMLib, create the disks using oracleasm as follows:

[oracle@node1 ~]$ sudo /etc/init.d/oracleasm createdisk RLOGS /dev/mapper/mpath6

Then rescan and list the disks from the other nodes:

[oracle@node2 ~]$ sudo /etc/init.d/oracleasm scandisks
[oracle@node2 ~]$ sudo /etc/init.d/oracleasm listdisks

Once the disk appears in the list, check for it in the /dev/oracleasm/disks directory:
[oracle@node1 ~]$ ls -ltr /dev/oracleasm/disks/

If the disk is there, everything is OK. Now you can connect to the ASM instance and create a diskgroup. To do this, get the name of the ASM instance from the OS and connect to it:
ps -ef | grep ASM
export ORACLE_SID=+ASM1
sqlplus / as sysdba

SQL> create diskgroup RLOGS external redundancy
2 DISK '/dev/oracleasm/disks/RLOGS';

Diskgroup created.

SQL>

If it's not mounted on the other nodes, connect to them and mount it manually:
SQL>alter diskgroup RLOGS mount;

Also make sure the name of the new disk group is added to the ASM_DISKGROUPS parameter of each ASM instance, so that it is mounted automatically after a restart.
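A minimal sketch of that check on an spfile-managed ASM instance (DATA and FRA are placeholders for whatever disk groups you already have):

SQL> show parameter asm_diskgroups
SQL> alter system set asm_diskgroups='DATA','FRA','RLOGS' scope=spfile sid='*';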

After the disk group is created, add a new redo log group to each thread, and then drop one redo log group from the same thread once it is INACTIVE.
You can check the status of each group for a given thread using the V$LOG view:

SELECT group#, thread#, status
FROM v$log
WHERE thread# = 2
ORDER BY 2, 1

    GROUP#    THREAD# STATUS
---------- ---------- ----------------
        20          2 CURRENT
        21          2 INACTIVE

Now add a new group to the same thread (THREAD#=2) and check the V$LOGFILE view:

SQL> ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 22 ('+RLOGS') SIZE 50M;

SELECT lf.group#, lf.MEMBER
FROM v$logfile lf, v$log l
WHERE lf.group# = l.group# AND l.thread# = 2

    GROUP# MEMBER
---------- -------------------------------------------------------
        20 +OLD_LOG/PROD/onlinelog/group_20.259.815656575
        21 +OLD_LOG/PROD/onlinelog/group_21.260.815656475
        22 +RLOGS/PROD/onlinelog/group_22.260.815656475

Now you can safely drop the INACTIVE redo log group from the same thread:

SQL> ALTER DATABASE DROP LOGFILE GROUP 21;

Perform the above steps one by one for each thread.

Each time, switch the log file and make sure the new redo log member is being used. Also run both of the queries shown above after each add and drop to confirm that the group was added or removed successfully.
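A sketch of the commands I mean here (standard statements, not copied from a specific environment): force a log switch in every thread, checkpoint so the old group can become INACTIVE, and check V$LOG again before dropping. In NOARCHIVELOG mode, use ALTER SYSTEM SWITCH LOGFILE on each instance instead of ARCHIVE LOG CURRENT.

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;
SQL> ALTER SYSTEM CHECKPOINT;
SQL> SELECT group#, thread#, status FROM v$log ORDER BY thread#, group#;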

Posted in Administration | No Comments »

TT0802: Database permanent space exhausted (TimesTen db)

Posted by Kamran Agayev A. on April 18th, 2013

During a TimesTen stress test, we encountered the following error while trying to perform a bulk insert:

TT0802: Database permanent space exhausted -- file "blk.c", lineno 3477, procedure "sbBlkAlloc"

The reason is the size of the permanent memory region, which is set by the PermSize attribute in the sys.odbc.ini file and reported by dssize as PERM_ALLOCATED_SIZE. The default value is 64 MB, which is too small for a bulk-insert stress test. You can check the current value either in the sys.odbc.ini file or with the dssize command in ttIsql:

Command> dssize

PERM_ALLOCATED_SIZE: 65536
PERM_IN_USE_SIZE: 64973
PERM_IN_USE_HIGH_WATER: 65408
TEMP_ALLOCATED_SIZE: 49152
TEMP_IN_USE_SIZE: 11242
TEMP_IN_USE_HIGH_WATER: 12445

Command> exit

 

To increase it, go to the $TIMESTEN_HOME/info directory and change the PermSize attribute of the corresponding DSN in sys.odbc.ini:

[prod_db]
DataStore=/home/oracle/OracleCache/datastore1
PermSize=10000
OracleNetServiceName=PROD_DB
DatabaseCharacterSet=WE8ISO8859P1

 

Here we changed PermSize to 10000 MB (roughly 10 GB). Now restart the TimesTen daemon and check the value again:

[oracle@timesten info]$ ttDaemonAdmin -stop
TimesTen Daemon stopped.
[oracle@timesten info]$ ttDaemonAdmin -start
TimesTen Daemon startup OK.

 

Command> dssize

PERM_ALLOCATED_SIZE: 10240000
PERM_IN_USE_SIZE: 64593
PERM_IN_USE_HIGH_WATER: 64593
TEMP_ALLOCATED_SIZE: 49152
TEMP_IN_USE_SIZE: 8711
TEMP_IN_USE_HIGH_WATER: 8775

Command>

 

Now run the bulk insert again: you won't get the error, and you'll see the PERM_IN_USE_SIZE value grow instead.

 

Posted in TimesTen | No Comments »

Step by Step Oracle 10g RAC installation ebook

Posted by Kamran Agayev A. on March 15th, 2013

From the following link you can download my Step by Step Oracle 10g RAC installation on VMware ebook

Downloaded 1638 times

Posted in Uncategorized | 8 Comments »

Debugging Data Pump session

Posted by Kamran Agayev A. on March 11th, 2013

While importing a dump file, the import sometimes takes too long, seems to be "hanging", processes something unknown in the background, or queues for something. As a DBA, you should NOT just wait "until it finishes"; you have to try to find the cause and the solution.

Yesterday, while importing the metadata of a big database, the session stalled and took hours to finish.

This is the command I ran to import the dump file into the new database:

impdp system/oracle directory=mydir dumpfile=dump01.dmp exclude=tablespace, statistics parfile=param.ini METRICS=y 

The session was stuck at the following section:

Processing object type DATABASE_EXPORT/SCHEMA/CLUSTER/CLUSTER

     Completed 1 CLUSTER objects in 0 seconds

Processing object type DATABASE_EXPORT/SCHEMA/CLUSTER/INDEX

     Completed 1 INDEX objects in 0 seconds

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE

Note: I suggest using the undocumented METRICS=Y parameter so that the number of objects and the time taken to process them are written to the log file.

I waited a little, but I was curious about what was happening in the background, so I decided to debug the Data Pump session. I ran the following query to get the SID and serial number of the running Data Pump sessions; these values will be used to trace a session in the next step:

col username format a10
set linesize 150
col job_name format a20
col program format a25
SELECT TO_CHAR (SYSDATE, 'YYYY-MM-DD HH24:MI:SS') "DATE",
     s.program,
     s.sid,
     s.status,
     s.username,
     d.job_name,
     p.spid,
     s.serial#,
     p.pid
FROM V$SESSION s, V$PROCESS p, DBA_DATAPUMP_SESSIONS d
WHERE p.addr = s.paddr AND s.saddr = d.saddr;

DATE                PROGRAM                          SID STATUS   USERNAME   JOB_NAME             SPID            SERIAL#        PID

——————- ————————- ———- ——– ———- ——————– ———— ———- ———-

2013-03-07 14:52:04 udi@TestDB02 (TNS V1-V3)         152 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01   12143                36         17

2013-03-07 14:52:04 oracle@TestDB02 (DM00)           148 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01   12147                 3         19

2013-03-07 14:52:04 oracle@TestDB02 (DW01)           142 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01   12149                15         20


Then I queried the V$SESSION_WAIT view to get the wait events:

SELECT   w.sid, w.event, w.seconds_in_wait

  FROM   V$SESSION s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w

 WHERE   s.saddr = d.saddr AND s.sid = w.sid;

       SID EVENT                                                 SECONDS_IN_WAIT
---------- ----------------------------------------------------- ---------------
       142 control file sequential read                                         1
       148 wait for unread message on broadcast channel                        43
       152 wait for unread message on broadcast channel                       306

If you run the same query a number of times, you'll see how the wait events change.

To get more detail, I decided to trace the first session (SID = 142), passed the values of the SID and SERIAL# columns to the DBMS_SYSTEM.SET_EV procedure, and started a 10046 trace:

SQL> EXECUTE SYS.DBMS_SYSTEM.SET_EV (142, 15, 10046, 8, '');
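As a side note (my addition, not from the original post): the same call with level 0 turns the 10046 trace off again once you are done:

SQL> EXECUTE SYS.DBMS_SYSTEM.SET_EV (142, 15, 10046, 0, '');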

Then I switched to the udump directory and checked the trace file named "dbname_ora_<spid>", in my case "testdb_ora_12149.trc":

-bash-4.2$ tail -f testdb_ora_12149.trc

WAIT #4: nam='Data file init write' ela= 3 count=4294967295 intr=32 timeout=2147483647 obj#=51780 tim=13410366772411

WAIT #4: nam='control file sequential read' ela= 21 file#=0 block#=1 blocks=1 obj#=51780 tim=13410366772623

WAIT #4: nam='control file sequential read' ela= 15 file#=1 block#=1 blocks=1 obj#=51780 tim=13410366772660

WAIT #4: nam='control file sequential read' ela= 15 file#=2 block#=1 blocks=1 obj#=51780 tim=13410366772695

WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=16 blocks=1 obj#=51780 tim=13410366772728

WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=18 blocks=1 obj#=51780 tim=13410366772760

WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=24 blocks=1 obj#=51780 tim=13410366772802

WAIT #4: nam='db file sequential read' ela= 12 file#=4 block#=1 blocks=1 obj#=51780 tim=13410366772834

WAIT #4: nam='db file single write' ela= 11313 file#=4 block#=1 blocks=1 obj#=51780 tim=13410366784170

WAIT #4: nam='control file parallel write' ela= 38910 files=3 block#=17 requests=3 obj#=51780 tim=13410366823104

WAIT #4: nam='control file parallel write' ela= 46532 files=3 block#=15 requests=3 obj#=51780 tim=13410366869661

WAIT #4: nam='control file parallel write' ela= 40258 files=3 block#=1 requests=3 obj#=51780 tim=13410366909955

WAIT #4: nam='control file sequential read' ela= 12 file#=0 block#=1 blocks=1 obj#=51780 tim=13410366909986

WAIT #4: nam='rdbms ipc reply' ela= 7016 from_process=5 timeout=21474836 p3=0 obj#=51780 tim=13410366917049

=====================

PARSING IN CURSOR #15 len=296 dep=3 uid=0 oct=6 lid=0 tim=13410366917452 hv=2379717279 ad='de799c58'

update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL, :13),groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:17),scanhint=:18 where ts#=:1 and file#=:2 and block#=:3

END OF STMT

PARSE #15:c=0,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=3,og=3,tim=13410366917447

EXEC #15:c=0,e=337,p=0,cr=5,cu=1,mis=0,r=1,dep=3,og=3,tim=13410366918000

STAT #15 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEG$ (cr=5 pr=0 pw=0 time=277 us)'

STAT #15 id=2 cnt=1 pid=1 pos=1 obj=14 op='TABLE ACCESS CLUSTER SEG$ (cr=5 pr=0 pw=0 time=166 us)'

STAT #15 id=3 cnt=1 pid=2 pos=1 obj=9 op='INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=2 pr=0 pw=0 time=31 us)'

Using tkprof, I formatted the trace file into readable output:

tkprof testdb_ora_12149.trc impdp_output.dat

And read the file:

insert into seg$ (file#,block#,type#,ts#,blocks,extents,minexts,maxexts,
extsize,extpct,user#,iniexts,lists,groups,cachehint,bitmapranges,hwmincr,
spare1, scanhint)
values
(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,0,:16,DECODE(:17,0,NULL,
:17),:18)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      905      0.04       0.01          0          0          0           0
Execute    905      0.26       0.32          0       2756       6420         905
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1810      0.30       0.34          0       2756       6420         905

Everything was fine: the import was loading the metadata into the database by inserting rows into the seg$ (segments) dictionary table. I kept a tail running on the trace file and re-ran the first query from time to time.

After a while I got the "statement suspended, wait error to be cleared" wait event, checked the alert.log file, and found that the USERS tablespace was FULL:

SQL> /

       SID EVENT                                                 SECONDS_IN_WAIT
---------- ----------------------------------------------------- ---------------
       142 statement suspended, wait error to be cleared                       20
       148 wait for unread message on broadcast channel                       905
       152 wait for unread message on broadcast channel                       246

SQL>

tail -f alert.log

statement in resumable session 'SYSTEM.SYS_IMPORT_FULL_01.1' was suspended due to

    ORA-01658: unable to create INITIAL extent for segment in tablespace USERS

After adding some space to the USERS tablespace the message disappeared.
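For completeness, "adding some space" here means resizing an existing datafile or adding a new one, for example (a sketch; the file names and sizes are placeholders, not values from this system):

SQL> ALTER DATABASE DATAFILE '/u01/oradata/testdb/users01.dbf' RESIZE 10G;
SQL> ALTER TABLESPACE users ADD DATAFILE '/u01/oradata/testdb/users02.dbf' SIZE 5G;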

Then suddenly I began receiving the "data file init write" wait event. That was the main reason why the import took so much time.

The reason was that the AUTOEXTEND increment of the datafile was set very low, and Oracle was making too many calls to the OS in order to extend the file.

The output from the trace file was as follows:

WAIT #4: nam='Data file init write' ela= 3 count=4294967295 intr=32 timeout=2147483647 obj#=51780 tim=13410366772411

To identify the tablespace that caused this event, I took the object ID from the trace file (obj#=51780) and checked the OBJ$ table:

SELECT   name
  FROM   OBJ$
 WHERE   obj# = 51780

Then, using the DBMS_METADATA.GET_DDL function, I got the tablespace name of the object that couldn't be imported:

SELECT DBMS_METADATA.get_ddl ('TABLE', 'TAB_TEST', 'USR01') FROM DUAL;

As the 'Data file init write' event directly relates to AUTOEXTEND and occurs while extents are being added to a datafile, I checked the AUTOEXTEND settings of the datafiles. Their increments were too small. I increased the AUTOEXTEND values, the event disappeared from the trace log, and the import job finished successfully.
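A sketch of such a change (the datafile name and the 512M increment are illustrative placeholders, not the actual values from this system):

SQL> ALTER DATABASE DATAFILE '/u01/oradata/testdb/users01.dbf'
  2    AUTOEXTEND ON NEXT 512M MAXSIZE UNLIMITED;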

Reference: Export/Import DataPump Parameter TRACE – How to Diagnose Oracle Data Pump [ID 286496.1]

Posted in Uncategorized | 5 Comments »