Kamran Agayev's Oracle Blog

Oracle Certified Master

Archive for the 'Uncategorized' Category

OCM Exam Tips and Tricks at www.ocmguide.com

Posted by Kamran Agayev A. on 13th October 2017

Dear friends

I hope most of you have already got my book and started preparing for the OCM exam. Every month I get emails from my readers, as well as from those who used my book and passed the OCM exam successfully!

If you haven’t subscribed to the OCM Newsletter and want to read the previous articles, use the following link:
http://www.ocmguide.com/category/ocm-tips-and-tricks/

 

If you want to get a free trial copy of the book in PDF format, use the following address:
http://www.ocmguide.com/

 

If you also want to successfully pass the exam, then use the following address to purchase the book:
https://www.amazon.com/Oracle-Certified-Master-Study-Guide/dp/1536800791/ref=sr_1_1?ie=UTF8&qid=1474879527&sr=8-1&keywords=oracle+exam+guide

 

If you are on my Facebook friend list, you already know that I collect pictures of my readers and make them famous on my Facebook account :) So if you are a reader of my book, please send me your photo with the book and become famous! :)

Please do not hesitate to contact me directly about any OCM topic you find complicated. And please post your comments about the book on Amazon and here on my blog. Your feedback is highly appreciated!

 

ocm-book

Posted in Uncategorized | 1 Comment »

Get your weekly OCM exam tip to your email – sign up at www.ocmguide.com

Posted by Kamran Agayev A. on 18th April 2017

Dear reader.

You probably already know about my OCM exam Study Guide, which I published a few months ago. If not, get your PDF copy of the sample chapters from www.ocmguide.com now. Along with the sample PDF you will also be registered for the weekly OCM exam tips email list!

Good luck with your exam preparation, and feel free to contact me with any questions about it. I'm ready to help you through it.

Check the following link to read this week's OCM tip!

 

OCM Tip of the week – Implement fine-grained access control

http://www.ocmguide.com/ocm-tip-of-the-week-implement-fine-grained-access-control/ 

Posted in Uncategorized | No Comments »

Step by step guide to create an Oracle Database in the Cloud

Posted by Kamran Agayev A. on 5th December 2016

In this blog post I will share the steps to create an Oracle Database in the Cloud. We will create a database service, create a second database on the same machine in silent mode, access the cloud machine over SSH and monitor the database with OEM.

First of all, make sure you have an Oracle account. Open oracle.com and click the Register link to get a free Oracle account. Next, open cloud.oracle.com, select "Compute" from the Infrastructure menu and click the "Try It" button to get a free one-month trial account. Then log in with your Oracle account and register for a cloud account. Make sure you don't apply with a generic email address such as Hotmail or Gmail; use your company address instead.

After successfully registering you will get an email with your credentials. Click on the link in the email, provide your username/password and log in to your cloud account. Then click the Oracle Database Cloud Service link.

 

image-1

Click on the link under "My Service URL" to access the list of cloud services you have.

 

image-2

If you haven't signed in yet, the sign-in page will appear. Use the username and password provided in the registration email and log in.

image-3

Next, Oracle Cloud Services dashboard will appear.

 

image-4

To create and manage database instances, click on the “Open Service Console” menu of the Database tab as shown below:

 

image-5

On this page you will see the list of your database services. Here I have two database services (myfirstdb, myseconddb), each provisioned with 7.5 GB of memory and 150 GB of storage.

image-6

Click Delete in the drop-down menu for each service and delete both services.

 

image-7

After successfully deleting the available instances, click on the "Create Service" button to create a new database instance:

image-8

Select the "Oracle Database Cloud Service" option to create a database using the wizard. For the billing frequency you have two options: Hourly and Monthly. The choice doesn't matter on a trial account, so select either one and click Next.

 

image-9

For the software release you have 3 options: 11.2, 12.1 and 12.2. Choose any of them and click Next.

 

image-10

For the software edition, you have 4 options:

Standard Edition

Enterprise Edition

Enterprise Edition – High Performance

Enterprise Edition – Extreme Performance

For more information about the features each edition provides, check the following documentation:

 

Home / Cloud / Oracle Database Cloud Service/ Using Oracle Database Cloud Service/ About Database Cloud Service Database Deployments/ Oracle Database Software Package

https://docs.oracle.com/cloud/latest/dbcs_dbaas/CSDBI/GUID-660363B8-0E2F-4A4F-A9BD-70A43F332A16.htm#GUID-6C0B1E17-98A5-4522-A3B9-36EFA05E64F9

Select the "Enterprise Edition" option and click Next.

 

image-11

On the next page you will be asked to provide an SSH public key for the cloud service. Use the PuTTY Key Generator executable to generate one: click the Generate button and move your mouse over the blank area to produce the required randomness.
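If you are working from a Linux or macOS client rather than Windows, an equivalent key pair can be generated with ssh-keygen. This is only a minimal sketch and the file name is an example, not part of the original walkthrough:

# Generate an RSA key pair protected by a passphrase (the file name is just an example)
ssh-keygen -t rsa -b 2048 -f ~/.ssh/oracle_cloud_key

# The text to paste into the "SSH Public Key" field is the contents of the .pub file
cat ~/.ssh/oracle_cloud_key.pub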

 

image-12

The SSH key will be generated as shown below. Provide a passphrase and click the "Save private key" button to save the private key.

 

image-13

To save the public key, copy the key text from the window and save it to a file.

 

image-14

image-15

Provide this public key file in the "SSH Public Key" field of the database service creation wizard.

image-16

Provide the service name and database storage, select the necessary compute shape, and click Next.

image-17

Review the information and click the Create button to create the service.

image-18

Click the "In Progress" link to check the service creation progress.

image-19

After a while, the service will be created successfully.

 

image-20

Click on the proddb link to open the service.

 

image-21

Before trying to connect to the cloud database from outside, you should enable the dblistener security rule. Open the "Oracle Database Cloud Service" dashboard.

 

image-22

Click on the Network tab.

image-23

When you click on the Network tab, you will see the list of security rules.

 

image-24

Click on the menu icon for the ora_p2_dblistener rule and select Update.

 

image-25

Set the status of this security rule to Enabled.

 

image-26

Similarly, enable the ora_p2_dbconsole security rule to get access to OEM.

 

image-27

Now, let's connect to the cloud machine over SSH. In PuTTY, provide the private key that was saved above.

image-28

Provide the public IP address of the virtual machine, shown on the main page of the database cloud service, and click Open.

 

image-29

Log in with the username "oracle" and the passphrase you set when generating the private key with the PuTTY Key Generator tool. Once on the server where the database is running, connect to SQL*Plus and run SQL commands:
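The screenshots below show this step. For reference, from a Linux or macOS client the same connection and a quick check could look roughly like the sketch below; the key file name is only an example, and 141.144.32.70 is the public IP used later in this post:

# Connect to the cloud machine as the oracle OS user with the private key
ssh -i ~/.ssh/oracle_cloud_key oracle@141.144.32.70

# On the cloud machine: point the environment at the PRODDB instance and run a test query
export ORACLE_SID=PRODDB
sqlplus / as sysdba <<'EOF'
SELECT name, open_mode FROM v$database;
EOF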

image-31

image-32

Now open the tnsnames.ora file on your client machine and add the following entry:

 

tnsnames.ora

PRODDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 141.144.32.70)(PORT = 1521))
    (CONNECT_DATA =
      (SID = PRODDB)
    )
  )

 

Open a command prompt, log in to the cloud database and run SQL commands:
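For example, assuming an Oracle client is installed on your machine and the PRODDB entry above is in its tnsnames.ora, a minimal sketch of the connection is:

# Connect from the client through the listener using the PRODDB alias;
# sqlplus will prompt for the SYSTEM password chosen in the creation wizard
sqlplus system@PRODDB

At the SQL> prompt you can then run something like SELECT name, open_mode FROM v$database; to verify the connection.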

 

image-33

To open OEM, click on the menu icon on the Database Cloud Service home page and select "Open EM Console".

 

image-34

Provide the username and password and log in.

 

image-35

image-36

image-37

After creating and configuring a database using the wizard, I decided to create a second database on the same machine in silent mode, as follows:

 

 

[oracle@proddb dbhome_1]$ dbca -silent -createdatabase -gdbname mydb \
  -templatename /u01/app/oracle/product/11.2.0/dbhome_1/assistants/dbca/templates/General_Purpose.dbc \
  -sid mydb -syspassword oracle -systempassword oracle -emConfiguration none \
  -datafileDestination /u02/app/oracle/oradata/mydb -memoryPercentage 40

 

Copying database files
1% complete
3% complete
11% complete
18% complete
26% complete
37% complete
Creating and starting Oracle instance
40% complete
45% complete
50% complete
55% complete
56% complete
60% complete
62% complete
Completing Database Creation
66% complete
70% complete
73% complete
74% complete
75% complete
76% complete
77% complete
88% complete
99% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/mydb/mydb.log" for further details.
[oracle@proddb dbhome_1]$

 

Check if the database is up and running and connect to it:

 

[oracle@proddb dbhome_1]$ ps -ef | grep smon
oracle    7040     1  0 11:40 ?        00:00:00 ora_smon_mydb
oracle    7226  2625  0 11:40 pts/1    00:00:00 grep smon
oracle   11837     1  0 10:30 ?        00:00:00 ora_smon_PRODDB
[oracle@proddb dbhome_1]$ export ORACLE_SID=mydb
[oracle@proddb dbhome_1]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Sat Dec 3 11:40:39 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>

 

Run the free command to check the free memory on the machine. With 2 databases now running on this machine, we have only about 1 GB of free memory.

[oracle@proddb dbhome_1]$ free
             total       used       free     shared    buffers     cached
Mem:       7397060    6295684    1101376     266948      66356    1364664
-/+ buffers/cache:    4864664    2532396
Swap:      4194300      29916    4164384
[oracle@proddb dbhome_1]$

Once you have a database service with specific parameters, you can change them at any time. Let's add 2 GB of storage to the machine. Switch to the home page of the database cloud service, click on the menu icon and choose the "Scale Up/Down" link.

image-38

Provide the size of the new storage and click the "Yes, Scale Up/Down Service" button.

image-39

The host will reboot and the required space will be added:

image-40

The new space is mounted at a new mount point (/u05):

 

[oracle@proddb ~]$ df -kh
Filesystem            Size  Used Avail Use% Mounted on
/dev/xvdb3             25G   12G   12G  51% /
tmpfs                 3.6G     0  3.6G   0% /dev/shm
/dev/xvdb1            477M  148M  300M  34% /boot
/dev/xvde1             59G  7.4G   49G  14% /u01
/dev/mapper/dataVolGroup-lvol0
                       25G  3.8G   20G  17% /u02
/dev/mapper/fraVolGroup-lvol0
                      6.8G  2.1G  4.4G  32% /u03
/dev/mapper/redoVolGroup-lvol0
                       26G  3.1G   22G  13% /u04
/dev/xvdg1            2.0G  3.1M  1.9G   1% /u05
[oracle@proddb ~]$

Posted in Uncategorized | 3 Comments »

“Oracle Certified Master 11g Study Guide” – www.ocmguide.com – Free Ebook

Posted by Kamran Agayev A. on 14th October 2016

ocm-book

 

This is it! My second book is out!

There are a bunch of certified associate and professional Oracle DBAs around the world who plan to take the OCM exam, but without a study guide it is hard to prepare for that many different topics and to do all kinds of practical tests. Most of them just give up.

Having successfully passed the exam, I decided to help DBAs by sharing my preparation method. It has been two years since I started writing this book on how to prepare for the OCM 11g exam. The guide covers all OCM topics with documentation references and MOS notes, step-by-step installation and configuration instructions, and practical, real-life examples.

The book contains 200+ practical scenarios, 260 screenshots and 300+ assigned tasks with specified time limits that make you 100% ready for the exam. Every single topic is explained with practical examples using both the graphical user interface and the command line.

If you are a certified DBA looking forward to becoming an OCM, then this study guide is for you!

If you are a DBA and want practical experience with topics such as Oracle Enterprise Manager, Backup and Recovery, ASM, RAC, Grid Infrastructure, Performance Tuning and Data Guard, then this book is for you!

To get a free copy of some chapters of the book, visit www.ocmguide.com

 

Posted in Uncategorized | 7 Comments »

Oracle 11g Clusterware failure scenarios with practical demonstrations – Webinar

Posted by Kamran Agayev A. on 5th October 2015

Sharing my webinar on "Oracle 11g Clusterware failure scenarios with practical demonstrations", which I did for the RAC SIG a few days ago.

 

racsig

Posted in Uncategorized | 2 Comments »

Wrong parallelism degree for the heavy SQL commands

Posted by Kamran Agayev A. on 24th June 2015

Yesterday I got a call from a data warehouse team member who claimed that some reports had started to run slowly. Hmm … slowly … Right after the call, I checked the session and found that it was running with parallel degree 8. That degree was too low, so I checked the parallelism parameters of the database:

SQL> set linesize 150
SQL> show parameter parallel_degree_policy

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_policy               string      AUTO

SQL>

The automatic degree of parallelism feature was enabled.

The parallel_degree_limit parameter was set to CPU. As the formula to calculate the limit is PARALLEL_THREADS_PER_CPU * CPU_COUNT * the number of available instances, I got the necessary values and calculated it:

SQL> show parameter PARALLEL_THREADS_PER_CPU

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_threads_per_cpu             integer     2

SQL> show parameter cpu_count

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     47

SQL>

So parallel_degree_limit = 47 * 2 = 94, which is pretty high.
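The same arithmetic can be pulled straight from the instance; the query below is only a sketch and assumes a single-instance database:

sqlplus -s / as sysdba <<'EOF'
-- default degree limit = parallel_threads_per_cpu * cpu_count (single instance assumed)
SELECT TO_NUMBER(a.value) * TO_NUMBER(b.value) AS degree_limit
  FROM v$parameter a, v$parameter b
 WHERE a.name = 'parallel_threads_per_cpu'
   AND b.name = 'cpu_count';
EOF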

So there was no issue with the SQL statement or the database-level parallelism configuration. I got an execution plan of the same SQL command as the SYS user, and here is the note at the end of the plan:

Note
   - automatic DOP: Computed Degree of Parallelism is 94 because of degree limit

Hmm… If I run the query as the SYS user, it runs with the correct parallelism degree. If I run it as a different user, I get only 8 parallel sessions. The first thing that came to my mind was Resource Manager.

I needed to check Resource Manager, but … but … there was a problem with Enterprise Manager :) I know you're smiling, because you know how hard it is to view and configure Resource Manager from the command line.

There was no way I could make the customer wait for me to configure the Grid Control agent. First of all, I queried the DBA_RSRC_CONSUMER_GROUP_PRIVS view to get the list of consumer groups the specific user is assigned to:

SQL> SELECT * FROM DBA_RSRC_CONSUMER_GROUP_PRIVS
  2  where grantee='PROD_USER';

GRANTEE                        GRANTED_GROUP                  GRA INI
------------------------------ ------------------------------ --- ---
PROD_USER                      CG_BATCHES                     NO  NO
PROD_USER                      CG_REP_LOW                     NO  NO
PROD_USER                      CG_REP_HIGH                    NO  NO
PROD_USER                      CG_REP_NORMAL                  NO  NO

SQL>

Then I queried the DBA_RSRC_PLAN_DIRECTIVES view to get detailed information about the resource plan directives for the consumer groups the user belongs to:

SQL> SELECT plan, group_or_subplan, cpu_p1, cpu_p2, cpu_p3, cpu_p4, parallel_degree_limit_p1
  2    FROM DBA_RSRC_PLAN_DIRECTIVES
  3   WHERE plan = 'DWH_DAY_PLAN'
  4         AND group_or_subplan IN (SELECT granted_group
  5                                    FROM DBA_RSRC_CONSUMER_GROUP_PRIVS
  6                                   WHERE grantee = 'PROD_USER');

PLAN                           GROUP_OR_SUBPLAN                   CPU_P1     CPU_P2     CPU_P3     CPU_P4 PARALLEL_DEGREE_LIMIT_P1
------------------------------ ------------------------------ ---------- ---------- ---------- ---------- ------------------------
DWH_DAY_PLAN                   CG_REP_LOW                              0          0          0         50                       16
DWH_DAY_PLAN                   CG_REP_HIGH                             0         65          0          0                       32
DWH_DAY_PLAN                   CG_REP_NORMAL                           0          0         80          0                       24
DWH_DAY_PLAN                   CG_BATCHES                              0          0          0         50                       16

SQL>

So the highest parallelism degree was set for the CG_REP_HIGH consumer group, thus the user should use that group. Next, I checked the V$SESSION view to see which consumer groups were being used by that user:

SQL> SELECT resource_consumer_group, COUNT (1)
  2    FROM v$session
  3   WHERE username = 'PROD_USER'
  4   GROUP BY resource_consumer_group;

RESOURCE_CONSUMER_GROUP            COUNT(1)
-------------------------------- ----------
CG_REP_HIGH                              70
OTHER_GROUPS                            154
_ORACLE_BACKGROUND_GROUP_                 3

Oops. Most of the connections were using the OTHER_GROUPS consumer group, which has a very limited parallelism degree. So why were the users switching to OTHER_GROUPS? The first candidate was the consumer group switching feature: sessions are switched either manually (which could not have happened here) or automatically based on the CPU and I/O resource limits defined for the group (which were not defined in the group specification). So what was the problem? Ah, maybe mapping priority. I checked the dba_rsrc_mapping_priority view to get the list of mapping priorities:

 

SQL> SELECT * FROM dba_rsrc_mapping_priority;

ATTRIBUTE                        PRIORITY STATUS
------------------------------ ---------- ------------------------------
EXPLICIT                                1
SERVICE_MODULE_ACTION                   2
SERVICE_MODULE                          3
MODULE_NAME_ACTION                      4
MODULE_NAME                             5
SERVICE_NAME                            6
ORACLE_USER                             7
CLIENT_PROGRAM                          8
CLIENT_OS_USER                          9
CLIENT_MACHINE                         10
CLIENT_ID                              11

11 rows selected.

SQL>

It didn't give me enough information, so I checked the consumer group usage by Oracle user, OS user, module name and so on. What I found was that all sessions coming from the machine from which the heavy SQL commands were running used the OTHER_GROUPS consumer group. I queried the dba_rsrc_group_mappings view, which maps session attributes (such as module name, OS user and so on) to consumer groups, and provided the OS username as the value for the VALUE column:

SQL> SELECT * FROM dba_rsrc_group_mappings
  2   WHERE value='REP_USR';

It returned no rows. I asked the guy who called me whether they had changed the OS username, and he told me "Yes" :) Two days ago they had migrated the software to a new machine and now used a different username to run the reports (the old one was REPUSER). I queried DBA_RSRC_GROUP_MAPPINGS for the old user and got a row!

SQL> col value format a15
SQL> col consumer_group format a15
SQL> select * from dba_rsrc_group_mappings
  2  WHERE value='REPUSER';

ATTRIBUTE                      VALUE           CONSUMER_GROUP  STATUS
------------------------------ --------------- --------------- ------
CLIENT_OS_USER                 REPUSER         CG_REP_HIGH

SQL>

So the previous OS user was mapped to the specific consumer group; after the software moved to the new host and started using a new OS user to run the reports, Oracle could not find which consumer group the sessions belonged to and placed them in OTHER_GROUPS. After finding this, I manually mapped the new OS user to the CG_REP_HIGH consumer group as follows:

BEGIN
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.set_consumer_group_mapping (DBMS_RESOURCE_MANAGER.CLIENT_OS_USER, 'REP_USR', 'CG_REP_HIGH');
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
END;
/

After assigning the user to the consumer group, we opened a new session to run the report and checked the consumer group of that session. It was what we expected – CG_REP_HIGH. We got the full parallelism degree back and the users were happy :)
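The check itself is just the same kind of v$session query used earlier in this post, for example:

sqlplus -s / as sysdba <<'EOF'
-- confirm which consumer group the new report sessions ended up in
SELECT sid, username, resource_consumer_group
  FROM v$session
 WHERE username = 'PROD_USER';
EOF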

Posted in Uncategorized | 2 Comments »

My LAD Tour 2014 impressions

Posted by Kamran Agayev A. on 10th September 2014

After returning home from the LAD Tour, I would like to share some of my experience from my first ever trip to Argentina and Uruguay. First of all, it was a very long journey :) It took me 26 hours to get to Buenos Aires (Baku->Istanbul->Sao Paulo->Buenos Aires) and it was really a long flight! When I arrived in Buenos Aires and exited the airport, the first thing I felt was cold :), because the weather was chilly. I took a shuttle directly to the hotel. In the morning, I got up early and took a wonderful tour to the Tigre. The trip was phenomenal and I had a great time. Tigre is special when you are among its islands and its 500 rivers and streams, where there are no streets or bridges. Another incredible place is the Rio de la Plata, 40 miles wide, with its new islands.

LAD_Tour1

LadTour5

I came back to the hotel at 3 PM. On the first day of my stay I was jetlagged: I felt sleepy at 4 PM (it was 12 AM midnight in Baku, Azerbaijan :) ), woke up at 2 AM :) (10 AM in Baku) and couldn't sleep till the morning. For the remaining 4 hours I prepared my presentations, had breakfast and went straight to San Andres University, which was a few blocks from the hotel. In the lobby I met Marcelo Ochoa (an Oracle ACE from Argentina), who was one of the organizers of the conference together with Gustavo Gonzalez (ACE Director). Afterwards, I met Hans Forbrich and Michele Malcher in the speakers' room. In a few minutes the conference started and all attendees gathered in the auditorium. After 15 minutes of introduction, I was invited to deliver my speech. I was very lucky to give the first presentation of the event, in front of the participants who had gathered to listen to the introduction before me and stayed there :). The lecture theater was completely packed and I was very excited that so many people were listening to my presentation.

Kamran A - Argentina

I talked about my RAC investigation and discussed real-life solutions to some "Oracle RAC how-to" scenarios. Then Hans Forbrich talked about Oracle VM. Once his speech was finished, I developed a strong interest in his field and decided to use Oracle VM in real life. I delivered my second presentation 3 hours after my initial speech; it was dedicated to "Success story of migrating Oracle 10g single node to Oracle 11gR2 RAC environment in AzerCell". After my second presentation all the participants were invited to lunch in the dining hall. Some of them approached me to introduce themselves and to ask questions about my presentation, and during the lunch we had a very lovely conversation. The lunch was followed by a seminar of practical RAC simulation – RacAttack! The simulation was attended by 15+ participants, all of whom brought their laptops to install a 2-node Oracle 12c RAC. I noticed that most of them had only 2 GB or 4 GB of RAM, which caused difficulties in installing RAC; nevertheless, the rest of the participants installed Grid Infrastructure successfully.

So the first day was over. Unfortunately, due to continuing jetlag, I was forced to miss the speakers' dinner. The next day, I attended two of Mike Dietrich's sessions, on Pluggable Databases and Oracle Upgrade. Both sessions were outstanding in the quality of delivery and content! Then I left the conference and took a city tour. I was fascinated by Buenos Aires and its tempestuous history.

LadTour3 LAD_Tour4

Being a BJJ (Brazilian Jiu-Jitsu) practitioner, I have trained in different countries and try to learn various techniques from different BJJ schools. This time I called Sebastian Munoz from the Gracie Humaita school and took a private and a public class. Both classes were awesome. I learned a lot of different techniques and rolled with the professor and the students.

Lad Tour bjj

I was very tired, returned to the hotel and the following morning packed for the Montevideo trip. When I arrived at the Argentina Airlines terminal (it was very cold!) I met Hans, Michele and Mike. I bought my tickets and, thanks to Hans, we had some rest in the Star Alliance VIP Lounge at Ezeiza.

The flight to Montevideo wasn't long; after 55 minutes we landed. Nelson Calero (President of the Uruguay Oracle User Group) and Edelweiss (Oracle ACE) were already there; they picked us up at the airport and took us to the hotel. Then we went out for a wonderful lunch, did a city tour and returned to the hotel.

Uruguay_lunch2 Uruguay_lunch1

The following day, the conference started. It was a few blocks from the hotel, at ORT University. Both my sessions were after midday, and there were plenty of participants. In the first session I talked about RAC best practices, while in the second I presented "Oracle RAC failure test cases".

Kamran A - ORT University

After the presentation was finished, I had a short discussion with the participants; afterwards I went to the Gracie Humaita Montevideo gym and took a class there. Once I entered the gym, I noticed the generous kindness of the students of this Jiu-Jitsu school. Everybody was friendly to me. Master De Souza introduced us to different techniques. Then I rolled with them a little and went to join the speakers' dinner. The dinner was awesome and very delicious. Once it was over, we returned to the hotel and I slept.

Uruguay_Bjj

The next day, we did a RacAttack with 30+ participants! It was the most interesting RacAttack I've ever done in my professional career. I wasn't expecting such a huge number of participants, and almost all of them had different kinds of problems with the installation and configuration. With the great support and guidance of Michele, Hans and Nelson, we resolved most of the problems, and most of the participants were finally able to install Grid Infrastructure. The entire session lasted 8 hours and it was exciting.

RacAttack2 RacAttack1

Finally, I would like to underline that both events were very well planned, professionally organized and went extremely well. Travelling 60 hours, presenting 4 sessions, organizing 2 RacAttack events for 50+ attendees and joining 3 different BJJ (Brazilian Jiu-Jitsu) classes gave me a unique opportunity to make new friends and to get to know good experts in my field. I also enjoyed the exposure to a new culture and traditions.

I would like to express my deep appreciation and gratitude to the organizers, to OTN, to the ACE Team and especially to Francisco Munoz Alvarez for creating such great tours and helping us share our knowledge. Thanks to Nelson Calero and Edelweiss for their hospitality; I truly appreciate the time they spent with us. It's nice to know that you are surrounded by such lovely people.

Uruguay group photo

Posted in Uncategorized | No Comments »

I’m Oracle Certified Master now!

Posted by Kamran Agayev A. on 10th July 2013

Last week I got an email from the Oracle Certification Team saying that I've successfully passed the OCM exam. I can proudly say that I'm an Oracle Certified Master now!

Although I work as a production DBA, I prepared for this exam for more than 6 months. I made a plan for each month, even for each week, prepared my own scenarios and practiced for hours.

The exam was tough: you need to be prepared for anything, you have to make decisions very quickly, and you even have to type very fast.

I will prepare a blog post on how I prepared for the OCM exam and hope it will help and inspire you to take it.

 OCM_ODb10gAdmin_clr

Posted in Uncategorized | 39 Comments »

Step by Step Oracle 10g RAC installation ebook

Posted by Kamran Agayev A. on 15th March 2013

From the following link you can download my Step by Step Oracle 10g RAC installation on VMware ebook

Downloaded 2675 times

Posted in Uncategorized | 8 Comments »

Debugging Data Pump session

Posted by Kamran Agayev A. on 11th March 2013

While importing a dump file, it sometimes takes too long and seems to be "hanging", processing something unknown in the background or queuing for something. As a DBA, you should NOT just wait "until it finishes"; you have to try to find out what is going on.

Yesterday, while importing the metadata of a big database, the session hung and took hours to finish.

This is the command I ran to import the dump file into the new database:

impdp system/oracle directory=mydir dumpfile=dump01.dmp exclude=tablespace,statistics parfile=param.ini METRICS=y

The session was stuck at the following step:

Processing object type DATABASE_EXPORT/SCHEMA/CLUSTER/CLUSTER
     Completed 1 CLUSTER objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/CLUSTER/INDEX
     Completed 1 INDEX objects in 0 seconds
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE

Note: I suggest using the undocumented METRICS=Y parameter to write information about the number of objects, and the time it takes to process them, to the log file.
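Another quick check (not the approach used below) is to attach to the running job from a second terminal and ask for its status; a sketch, using the job name that appears in the DBA_DATAPUMP_SESSIONS output later in this post:

# Attach to the running import job and query its state interactively
impdp system/oracle attach=SYS_IMPORT_FULL_01
# then, at the Import> prompt, type: STATUS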

I waited a short while and, curious about the activities in the background, decided to debug the Data Pump session. I ran the following query to get the SID and SERIAL# of the running Data Pump sessions; these values will be used to trace the session in the next step:

col username format a10
set linesize 150
col job_name format a20
col program format a25
SELECT TO_CHAR (SYSDATE, 'YYYY-MM-DD HH24:MI:SS') "DATE",
     s.program,
     s.sid,
     s.status,
     s.username,
     d.job_name,
     p.spid,
     s.serial#,
     p.pid
FROM V$SESSION s, V$PROCESS p, DBA_DATAPUMP_SESSIONS d
WHERE p.addr = s.paddr AND s.saddr = d.saddr;

DATE                PROGRAM                          SID STATUS   USERNAME   JOB_NAME             SPID            SERIAL#        PID
------------------- ------------------------- ---------- -------- ---------- -------------------- ------------ ---------- ----------
2013-03-07 14:52:04 udi@TestDB02 (TNS V1-V3)         152 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01   12143                36         17
2013-03-07 14:52:04 oracle@TestDB02 (DM00)           148 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01   12147                 3         19
2013-03-07 14:52:04 oracle@TestDB02 (DW01)           142 ACTIVE   SYSTEM     SYS_IMPORT_FULL_01   12149                15         20

Then I queried the V$SESSION_WAIT view to get the wait events:

SELECT   w.sid, w.event, w.seconds_in_wait
  FROM   V$SESSION s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w
 WHERE   s.saddr = d.saddr AND s.sid = w.sid;

       SID EVENT                                                            SECONDS_IN_WAIT
---------- ---------------------------------------------------------------- ---------------
       142 control file sequential read                                                   1
       148 wait for unread message on broadcast channel                                  43
       152 wait for unread message on broadcast channel                                 306

If you run the same query a number of times, you'll see how the wait events change.

To get more detail I decided to trace the first session (SID=142), so I provided the values of the SID and SERIAL# columns to the DBMS_SYSTEM.SET_EV procedure and started a 10046 trace:

SQL> EXECUTE SYS.DBMS_SYSTEM.SET_EV (142,15,10046,8,'');

Then I switched to the udump folder and checked the trace file named "dbname_ora_PID" – testdb_ora_12149.trc:

-bash-4.2$ tail -f testdb_ora_12149.trc
WAIT #4: nam='Data file init write' ela= 3 count=4294967295 intr=32 timeout=2147483647 obj#=51780 tim=13410366772411
WAIT #4: nam='control file sequential read' ela= 21 file#=0 block#=1 blocks=1 obj#=51780 tim=13410366772623
WAIT #4: nam='control file sequential read' ela= 15 file#=1 block#=1 blocks=1 obj#=51780 tim=13410366772660
WAIT #4: nam='control file sequential read' ela= 15 file#=2 block#=1 blocks=1 obj#=51780 tim=13410366772695
WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=16 blocks=1 obj#=51780 tim=13410366772728
WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=18 blocks=1 obj#=51780 tim=13410366772760
WAIT #4: nam='control file sequential read' ela= 13 file#=0 block#=24 blocks=1 obj#=51780 tim=13410366772802
WAIT #4: nam='db file sequential read' ela= 12 file#=4 block#=1 blocks=1 obj#=51780 tim=13410366772834
WAIT #4: nam='db file single write' ela= 11313 file#=4 block#=1 blocks=1 obj#=51780 tim=13410366784170
WAIT #4: nam='control file parallel write' ela= 38910 files=3 block#=17 requests=3 obj#=51780 tim=13410366823104
WAIT #4: nam='control file parallel write' ela= 46532 files=3 block#=15 requests=3 obj#=51780 tim=13410366869661
WAIT #4: nam='control file parallel write' ela= 40258 files=3 block#=1 requests=3 obj#=51780 tim=13410366909955
WAIT #4: nam='control file sequential read' ela= 12 file#=0 block#=1 blocks=1 obj#=51780 tim=13410366909986
WAIT #4: nam='rdbms ipc reply' ela= 7016 from_process=5 timeout=21474836 p3=0 obj#=51780 tim=13410366917049
=====================
PARSING IN CURSOR #15 len=296 dep=3 uid=0 oct=6 lid=0 tim=13410366917452 hv=2379717279 ad='de799c58'
update seg$ set type#=:4,blocks=:5,extents=:6,minexts=:7,maxexts=:8,extsize=:9,extpct=:10,user#=:11,iniexts=:12,lists=decode(:13, 65535, NULL, :13),groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=DECODE(:17,0,NULL,:17),scanhint=:18 where ts#=:1 and file#=:2 and block#=:3
END OF STMT
PARSE #15:c=0,e=37,p=0,cr=0,cu=0,mis=0,r=0,dep=3,og=3,tim=13410366917447
EXEC #15:c=0,e=337,p=0,cr=5,cu=1,mis=0,r=1,dep=3,og=3,tim=13410366918000
STAT #15 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  SEG$ (cr=5 pr=0 pw=0 time=277 us)'
STAT #15 id=2 cnt=1 pid=1 pos=1 obj=14 op='TABLE ACCESS CLUSTER SEG$ (cr=5 pr=0 pw=0 time=166 us)'
STAT #15 id=3 cnt=1 pid=2 pos=1 obj=9 op='INDEX UNIQUE SCAN I_FILE#_BLOCK# (cr=2 pr=0 pw=0 time=31 us)'

Using tkprof, I formatted the content of the trace file into readable output:

tkprof testdb_ora_12149.trc impdp_output.dat

And read the file:

insert into seg$ (file#,block#,type#,ts#,blocks,extents,minexts,maxexts,
extsize,extpct,user#,iniexts,lists,groups,cachehint,bitmapranges,hwmincr,
spare1, scanhint)
values
(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,0,:16,DECODE(:17,0,NULL,
:17),:18)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      905      0.04       0.01          0          0          0           0
Execute    905      0.26       0.32          0       2756       6420         905
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1810      0.30       0.34          0       2756       6420         905

Everything was fine: it was importing the metadata into the database by inserting rows into the seg$ (segments) table. I kept tail running on the trace file and re-ran the first query from time to time.

After a while, I got the "statement suspended, wait error to be cleared" wait event, checked the alert.log file and found that the USERS tablespace was FULL:

SQL> /

       SID EVENT                                                            SECONDS_IN_WAIT
---------- ---------------------------------------------------------------- ---------------
       142 statement suspended, wait error to be cleared                                 20
       148 wait for unread message on broadcast channel                                 905
       152 wait for unread message on broadcast channel                                 246

SQL>

Running tail -f on the alert.log file showed:

statement in resumable session 'SYSTEM.SYS_IMPORT_FULL_01.1' was suspended due to
    ORA-01658: unable to create INITIAL extent for segment in tablespace USERS

After adding some space to the USERS tablespace the message disappeared.

Then I began seeing the "Data file init write" wait event. That was the main reason why the import took so much time.

The cause was that the AUTOEXTEND increment of the datafile was set very low, so Oracle was making too many calls to the OS in order to extend the file.

The output from the trace file was as follows:

WAIT #4: nam='Data file init write' ela= 3 count=4294967295 intr=32 timeout=2147483647 obj#=51780 tim=13410366772411

In order to identify the object (and its tablespace) causing this event, I took the object id from the trace file (obj#=51780) and checked the obj$ table:

SELECT   name
  FROM   OBJ$
 WHERE   obj# = 51780;

Then, using the DBMS_METADATA.GET_DDL function, I got the tablespace name of the object that was being imported:

SELECT DBMS_METADATA.get_ddl ('TABLE', 'TAB_TEST', 'USR01') FROM DUAL;

As the 'Data file init write' event directly relates to AUTOEXTEND and occurs while adding extents to the datafile, I checked the AUTOEXTEND values of the datafiles. They were too small. I increased the AUTOEXTEND values, the event disappeared from the trace log, and the import job finished successfully.
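For reference, the autoextend settings can be checked and raised like this; the datafile name and the 100M increment below are only examples:

sqlplus -s / as sysdba <<'EOF'
-- check whether each datafile autoextends and by how much (increment_by is in blocks)
SELECT file_name, autoextensible, increment_by
  FROM dba_data_files;

-- example only: raise the increment for one datafile
ALTER DATABASE DATAFILE '/u02/oradata/testdb/users01.dbf' AUTOEXTEND ON NEXT 100M;
EOF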

Reference: Export/Import DataPump Parameter TRACE – How to Diagnose Oracle Data Pump [ID 286496.1]

Posted in Uncategorized | 7 Comments »