Kamran Agayev's Oracle Blog

Oracle Certified Master

Archive for the 'RAC issues' Category

Real Application Clusters

Interim patch apply best practices in Oracle

Posted by Kamran Agayev A. on 2nd October 2015

Yesterday, after successfully applying an interim patch to a 3 node clusterware environment, I decided to share my experience with you. In this blog post you can find some best practices that I think must be followed before and during patch installation to bring the downtime and failure risks to a minimum.

First of all, make sure you read the following metalink notes:

Master Note For OPatch (Doc ID 293369.1)
FAQ: OPatch/Patch Questions/Issues for Oracle Clusterware (Grid Infrastructure or CRS) and RAC Environments (Doc ID 1339140.1) 

 

Before applying any interim patches or upgrading the database or the clusterware, make sure you have answers to the following questions:

– Have you tested the patch installation? 

– Have you tested rollback of the patch? 

– What will you do if you can’t roll back the patch with the default rollback mechanism?

– What will you do if you fail to open the database after the patch installation?

– Do you have a backup? Have you tested it? What if you don’t have enough time to restore?

 

Here is the list of what I would prefer to do before applying any interim patches to the mission critical environment:

– Backup the home folder that is going to be patched

tar -cvf grid_home_before_patch.tar /home/oracle/app/11.2.0

If the patch installation goes wrong and you can’t roll back the patch using the default method, restore the backup of the installation home folder and bring the database (or clusterware) up.
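As a rough sketch of that fallback (assuming the archive was created with the tar command above, the stack is already down on this node, and the archive path is hypothetical; GNU tar strips the leading slash on create, so extract from /):

# run as root, with the clusterware stack down on this node
cd /
mv /home/oracle/app/11.2.0 /home/oracle/app/11.2.0.failed
tar -xvf /path_to_backup/grid_home_before_patch.tar
# for a Grid home you may also need to re-lock it afterwards, e.g. with rootcrs.pl -patch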

– Make sure you have a full backup of the database 

Most probably you will never need to restore it, but you never know what might happen.

– Make sure your backup is recoverable

Yes. This might be a discussion topic, but I strongly believe (as the author of an RMAN Backup and Recovery book :) ) in the “If you don’t test your backup, you don’t have a backup” philosophy. Restore it and make sure that a) the backup is restorable/recoverable and b) your restore/recover scripts work fine. In my experience, I once had a restore fail while restoring a backup for the developers for testing purposes, all because of untested recovery scripts. And a couple of years ago, at a wonderful OOW session I attended, one of the attendees told how they failed to restore a backup when the production environment went down, and that days-long downtime cost them a couple of million dollars.
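If a full test restore to another server is not possible every time, at least a non-destructive RMAN check can be run regularly; this is only a sketch, and it reads the backup pieces without actually writing any datafiles, so it does not replace a real test restore:

RMAN> restore database validate;
RMAN> restore archivelog all validate;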

– Make sure you have a Standby database.

Why? Imagine you took a 30-minute downtime window to apply the patch, and for some reason the apply failed and you can’t roll back because you are in the middle of the patch apply procedure and trying to fix the issue. Or you can’t roll back the patch for any other reason. You’re stuck! You don’t have time to solve it, you are forced to open the database right away, and you can’t do that either. In this critical case, you can redirect the applications to the standby database. Build a standby database and make sure the archived log files of the production database are shipping to the standby server. You can also perform a failover to test your standby database and then rebuild it.

– Test the patch apply procedure on the test environment with the “same binaries”. 

Clone the database and clusterware software to the test machine (or install the same release and apply the same patches as in the production environment) and apply the patch. Hit the errors in the test environment before you hit them in production.

– Make sure there are no sessions running in the background that use binaries of the home that is being patched.

Yesterday, when I was trying to apply an interim patch to the 3 node clusterware (11.2.4), I ran into the following error:

Backing up files…

UtilSession failed: Map failed
java.lang.OutOfMemoryError: Map failed
Log file location: /home/oracle/11.2.0/grid_1124/cfgtoollogs/opatch/opatch2015-10-01_17-40-56PM_1.log

OPatch failed with error code 73

The reason was not memory at all; we had plenty of free memory at the time. Some binary files were still in use in the background, despite the fact that the whole clusterware stack was down. I killed those processes and the installation proceeded.

The following metalink note might also be useful: OPatch Apply/Rollback Errors: 'Prerequisite check "CheckActiveFilesAndExecutables" failed' (Doc ID 747049.1)
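A quick way to spot such leftover processes before starting opatch is something like the sketch below (the home path is the one from my environment; lsof/fuser options may differ between platforms):

# anything still running out of the Grid home?
ps -ef | grep -v grep | grep /home/oracle/11.2.0/grid_1124
# anything still holding files open under it? (can be slow on large homes)
lsof +D /home/oracle/11.2.0/grid_1124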

– Download the latest OPatch

Check the following metalink note to download the latest OPatch. How To Download And Install The Latest OPatch Version (Doc ID 274526.1)

Download and extract it under the home folder that is going to be patched. Add $GRID_HOME/OPatch or $ORACLE_HOME/OPatch to the PATH environment variable, and make sure the “which opatch” command returns the expected path.
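For example (a minimal sketch; use whichever home you are actually patching):

export PATH=$GRID_HOME/OPatch:$PATH
which opatch        # should point to the OPatch inside the home being patched
opatch version      # confirm it meets the minimum version required by the patch README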

– Make sure you have enough free space in the mount point where the home folder resides.

A few years ago, when I was about to install a patch on a production environment, I decided to try it on the test environment first (10 minutes before patching the production database). I ended up with a “There is no free space to proceed the installation” error. The home folder’s file system was full, because OPatch takes a backup of the binaries and library files that are being patched. Check the following metalink note for more information: Opatch Fails Updating Archives with ”No space left on device” Error. (Doc ID 1629444.1)

 – Bring the instance down before patching

If you have Grid Infrastructure installed, you have a RAC database, and you plan to apply the patch node by node without downtime, bring down the instance on the node you’re patching using the following command:

srvctl stop instance -d RACDB -i RACDB1

Why? Because if you start installing the patch and run the rootcrs.pl -unlock command (the first step, which brings the clusterware stack down on that node), the database instance will be closed in ABORT mode and none of its sessions will be failed over.
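A quick sanity check before running rootcrs.pl -unlock might look like this (database and instance names are the ones from the example above):

srvctl status instance -d RACDB -i RACDB1   # should report that the instance is not running
srvctl status database -d RACDB             # the remaining instances keep serving the users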

– Try to rollback the patch installation at test environment after installing it

Why? Get a feel for how you should roll back the specific patch (and see whether you get any errors) in case the installation fails and you can’t proceed, or it installs successfully but causes another bug or problem. Check the following metalink note to learn how to roll back the patch, and run opatch lsinventory to make sure it has been rolled back: How to Rollback a Failed Interim Patch Installation (Doc ID 312767.1)
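As a sketch (the patch number 12345678 below is just a placeholder for your patch):

opatch rollback -id 12345678
opatch lsinventory | grep 12345678    # should return nothing once the rollback succeeded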

Sometimes the rollback itself might fail :) In this case the best option is to restore the whole home folder from the backup, although that option is not mentioned in this metalink note: OPatch Fails to Rollback Patch Due to Relink Errors (Doc ID 1534583.1)

– Debug the OPatch if it is stuck 

You can set the OPATCH_DEBUG=TRUE environment variable to debug OPatch. If it doesn’t generate enough information, use truss (or strace on Linux) to trace OPatch. Check the following metalink note to learn how to use truss with OPatch: How To Use Truss With OPatch? (Doc ID 470225.1)
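A minimal sketch of both approaches (the output file path is arbitrary; truss takes a similar -o option on the platforms that ship it):

export OPATCH_DEBUG=TRUE
opatch apply
# if the debug output is still not enough (Linux):
strace -f -o /tmp/opatch_trace.txt opatch apply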

OPatch might also hang due to corrupted jar and java executables. Check this metalink note: opatch napply Hanging (Doc ID 1055397.1)

 

That is all I have :) Please let me know if this document helped you, and share your experience with me :) Have successful patching days ahead! :)

Posted in Administration, RAC issues | 8 Comments »

ORA-00304: requested INSTANCE_NUMBER is busy

Posted by Kamran Agayev A. on 27th August 2015

There are a lot of explanations and different solutions for the error “ORA-00304: requested INSTANCE_NUMBER is busy”. But today, in my case, while I was trying to shut down one of the cluster nodes, it hung. There was no additional information related to the hang in the log and trace files, so I went with shutdown abort and startup, and got the following message:

SQL> startup

ORA-00304: requested INSTANCE_NUMBER is busy

SQL>

The second node of the RAC database was up and running, and its instance_number was set to 2. After a little investigation, I found out that there was one process related to the database still running at the OS level (even though the database was closed). I killed that process, started the first node, and it opened successfully.
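A sketch of that check (the instance name RACDB1 is hypothetical; kill only processes you have positively identified as belonging to the dead instance):

ps -ef | grep -v grep | grep ora_.*RACDB1   # leftover background processes of the instance
kill -9 <pid>                               # <pid> of the confirmed leftover process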

Posted in Administration, RAC issues | 3 Comments »

Default listener “LISTENER” is not configured when running DBCA

Posted by Kamran Agayev A. on 6th January 2015

When running dbca to create a new database you can get the following message:

Default Listener “LISTENER” is not configured in Grid Infrastructure home. Use NetCA to configure Default Listener and rerun DBCA


Actually, there’s no need to run netca; all you need to do is create a new listener as follows:

srvctl add listener

srvctl start listener
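You can then verify it, for example:

srvctl config listener     # shows the listener name, port and the home it runs from
srvctl status listener     # should report the listener running on the nodes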

 

Posted in RAC issues | No Comments »

Node names are missing from ./runInstaller output

Posted by Kamran Agayev A. on 4th January 2015

While installing the Oracle Database software after the Oracle Grid Infrastructure installation, I was supposed to get the list of all nodes where I needed to install the Oracle software (11gR2 – 11.2.0.4). Instead, I got nothing:


I checked the status of the clusterware, it was up and running on both nodes:

[oracle@node1 bin]$ ./olsnodes
node1
node2
[oracle@node1 bin]$ ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

 

Then I checked the inventory.xml file and found out that CRS=”true” was missing from the Grid home entry.

[oracle@node1 bin]$ cat /etc/oraInst.loc | grep inventory_loc
inventory_loc=/u01/app/oraInventory

[oracle@node1 bin] cd /u01/app/oraInventory/ContentsXML/

[oracle@node1 bin] more inventory.xml

<output trimmed>

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/product/11.2.0.3/grid" TYPE="O" IDX="1">

</output trimmed>

After running the following command to update the inventory.xml file, the node list appeared:

[oracle@node1 ~]$ cd /u01/app/product/11.2.0.3/grid/oui/bin/
[oracle@node1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME="/u01/app/product/11.2.0.3/grid" CRS=true
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 3919 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

 

[oracle@node1 bin] more inventory.xml

<HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/product/11.2.0.3/grid" TYPE="O" IDX="1" CRS="true">

 


Posted in RAC issues | No Comments »

Struggling with RAC Installation – ORA-15018: diskgroup cannot be created

Posted by Kamran Agayev A. on 9th December 2014

I’ve said it before: there was only one time I managed to install Oracle Clusterware without any issues, and that was during the OCM exam :) I didn’t hit any bug and didn’t have to reconfigure anything; the installation went smoothly. But …

Today, I got all of the following errors :)

ORA-15032: not all alterations performed
ORA-15131: block of file in diskgroup could not be read
ORA-15018: diskgroup cannot be created
ORA-15031: disk specification '/dev/mapper/mpathh' matches no disks
ORA-15025: could not open disk "/dev/mapper/mpathh"
ORA-15056: additional error message
ORA-15017: diskgroup "OCR_MIRROR" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks for diskgroup "OCR_MIRROR"
ORA-15033: disk '/dev/mapper/mpathh' belongs to diskgroup "OCR_MIRROR"

In the beginning, while installing Oracle 11g RAC, I got the following output:

CRS-2672: Attempting to start 'ora.diskmon' on 'vsme_ora1'

CRS-2676: Start of 'ora.diskmon' on 'vsme_ora1' succeeded

CRS-2676: Start of 'ora.cssd' on 'vsme_ora1' succeeded

 

Disk Group OCR_MIRROR creation failed with the following message:

ORA-15018: diskgroup cannot be created

ORA-15031: disk specification '/dev/mapper/mpathh' matches no disks

ORA-15025: could not open disk "/dev/mapper/mpathh"

ORA-15056: additional error message

 

 

Configuration of ASM … failed

see asmca logs at /home/oracle/app/cfgtoollogs/asmca for details

Did not succssfully configure and start ASM at /home/oracle/11.2.4/grid1/crs/install/crsconfig_lib.pm line 6912.

/home/oracle/11.2.4/grid1/perl/bin/perl -I/home/oracle/11.2.4/grid1/perl/lib -I/home/oracle/11.2.4/grid1/crs/install /home/oracle/11.2.4/grid1/crs/install/rootcrs.pl execution failed

 

The bad news is that the installation failed. The good news is that I could easily restart the installation without any issues, as the root.sh script is restartable. If you don’t need to install the software on all nodes again, solve the problem and run the root.sh script again; if the problem is solved, it will run smoothly. If you need to install the software on all nodes, you have to deconfigure and run the installation again. To remove the failed RAC installation, run the rootcrs.pl script on all nodes except the last one, as follows:

$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

 

Run the following command on the last node:

$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force -lastnode

 

Now, run ./runInstaller command and start the installation again.

 

So let’s go back to the problem. It was claiming that “disk specification ‘/dev/mapper/mpathh’ matches no disks”. Hmm … The first thing that came to my mind was the permission of the disk, so I checked it; it was root:disk. I changed it to oracle:dba and ran the root.sh script. Got the same problem again.

I checked the asmca logs under the following location:

/home/oracle/app/cfgtoollogs/asmca

 

[main] [ 2014-12-09 17:26:29.220 AZT ] [UsmcaLogger.logInfo:143]  CREATE DISKGROUP SQL: CREATE DISKGROUP OCR_MIRROR EXTERNAL REDUNDANCY  DISK '/dev/mapper/mpathh' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M'

[main] [ 2014-12-09 17:26:29.295 AZT ] [SQLEngine.done:2189]  Done called

[main] [ 2014-12-09 17:26:29.296 AZT ] [UsmcaLogger.logException:173]  SEVERE:method oracle.sysman.assistants.usmca.backend.USMDiskGroupManager:createDiskGroups

[main] [ 2014-12-09 17:26:29.296 AZT ] [UsmcaLogger.logException:174]  ORA-15018: diskgroup cannot be created

ORA-15031: disk specification '/dev/mapper/mpathh' matches no disks

ORA-15025: could not open disk "/dev/mapper/mpathh"

ORA-15056: additional error message

 

Oracle wasn’t able to create the diskgroup, claiming that the specified device matches no disks. I logged in to the ASM instance and tried to create the diskgroup on my own:

SQL> CREATE DISKGROUP OCR_MIRROR EXTERNAL REDUNDANCY  DISK '/dev/mapper/mpathh' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M';

SQL> CREATE DISKGROUP OCR_MIRROR EXTERNAL REDUNDANCY  DISK '/dev/mapper/mpathh' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M'

ERROR at line 1:

ORA-15018: diskgroup cannot be created

ORA-15031: disk specification '/dev/mapper/mpathh' matches no disks

ORA-15025: could not open disk "/dev/mapper/mpathh"

ORA-15056: additional error message

Linux-x86_64 Error: 13: Permission denied

Additional information: 42

Additional information: -807671168

 

I checked the permission, it was root:disk. I changed it to oracle:dba and ran the command again.

SQL> CREATE DISKGROUP OCR_MIRROR EXTERNAL REDUNDANCY  DISK '/dev/mapper/mpathh' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M'

ERROR at line 1:

ORA-15018: diskgroup cannot be created

ORA-15017: diskgroup "OCR_MIRROR" cannot be mounted

ORA-15063: ASM discovered an insufficient number of disks for diskgroup "OCR_MIRROR"

 

I ran the query again, and this time got a different message:

SQL> CREATE DISKGROUP OCR_MIRROR EXTERNAL REDUNDANCY  DISK '/dev/mapper/mpathh' ATTRIBUTE 'compatible.asm'='11.2.0.0.0','au_size'='1M'

ERROR at line 1:

ORA-15018: diskgroup cannot be created

ORA-15033: disk '/dev/mapper/mpathh' belongs to diskgroup "OCR_MIRROR"

 

 

I tried to mount the diskgroup and got the following error:

SQL> alter diskgroup ocr_mirror mount;

alter diskgroup ocr_mirror mount

*

ERROR at line 1:

ORA-15032: not all alterations performed

ORA-15017: diskgroup "OCR_MIRROR" cannot be mounted

ORA-15063: ASM discovered an insufficient number of disks for diskgroup "OCR_MIRROR"

 

I checked the permission. It had been changed again! I changed it back to oracle:dba, tried to mount the diskgroup and got the following error:

SQL> alter diskgroup ocr_mirror mount

ERROR at line 1:

ORA-15032: not all alterations performed

ORA-15131: block  of file  in diskgroup  could not be read

 

Ohhh … Come on! I logged in to the ASM instance and queried the v$asm_disk and v$asm_diskgroup views.

SQL> select count(1) from v$asm_disk;

   COUNT(1)

———-

         0

 

I changed the permission to oracle:dba and ran the query again:

SQL> /

  COUNT(1)

———-

         1

 

I ran it again:

 

SQL> select count(1) from v$asm_diskgroup;

   COUNT(1)

———-

         0

 

What??? The permission is changed automatically while I query the V$ASM_DISKGROUP view? Yes … Even when you query V$ASM_DISKGROUP, Oracle checks the ASM_DISKSTRING parameter and reads the header of all disks listed in that parameter. For more information on this topic, you can check my following blog post:

V$ASM_DISKGROUP displays information from the header of ASM disks

So this means that when I query the V$ASM_DISK view, Oracle scans the disk (with a process that runs under the root user) and the permission of the disk gets changed back.

After making a change to the /etc/udev/rules.d/99-oracle-asmdevices.rules file and adding the following line, the problem was solved:

NAME="/dev/mapper/mpathh", OWNER="oracle", GROUP="dba", MODE="0660"
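After editing the rules file, the rules can usually be re-applied without a reboot; a sketch (the exact udev commands depend on your Linux distribution and version):

udevadm control --reload-rules
udevadm trigger
ls -l /dev/mapper/mpathh    # ownership should now stay oracle:dba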

 

So I checked the permission of the disks again after querying V$ASM_DISK multiple times, made sure the permission no longer changed, and ran the root.sh script. Everything worked fine and I got the following output:

ASM created and started successfully.

Disk Group OCR_MIRROR mounted successfully.

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-4256: Updating the profile

Successful addition of voting disk 5feed4cb66df4f43bf334c3a8d73af92.

Successfully replaced voting disk group with +OCR_MIRROR.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   5feed4cb66df4f43bf334c3a8d73af92 (/dev/mapper/mpathh) [OCR_MIRROR]

Located 1 voting disk(s).

CRS-2672: Attempting to start 'ora.asm' on 'vsme_ora1'

CRS-2676: Start of 'ora.asm' on 'vsme_ora1' succeeded

CRS-2672: Attempting to start 'ora.OCR_MIRROR.dg' on 'vsme_ora1'

CRS-2676: Start of 'ora.OCR_MIRROR.dg' on 'vsme_ora1' succeeded

Preparing packages for installation…

cvuqdisk-1.0.9-1

Configure Oracle Grid Infrastructure for a Cluster … succeeded

 

 

Posted in RAC issues | 1 Comment »

Getting ORA-01105 during RAC db startup

Posted by Kamran Agayev A. on 30th July 2014

Today, while starting the RAC instances of a 2 node RAC database (10gR2 on Linux), I got the following error on the first node:

ORA-01105: mount is incompatible with mounts by other instances
ORA-01677: standby file name convert parameters differ from other instance

 

I checked the alert.log file, but there was not enough information there to solve the issue:

Wed Jul 30 09:58:48 AZST 2014
Setting recovery target incarnation to 2
ORA-1105 signalled during: ALTER DATABASE MOUNT…
Wed Jul 30 09:58:58 AZST 2014
SUCCESS: diskgroup DATA was dismounted

 

After playing with some initialization parameters, I found a metalink note where this behaviour is described as a bug (bug 13001004).

Check out the following metalink note:

Spfile defined in OCR is not used if one exists in $ORACLE_HOME/dbs (Doc ID 1373622.1)

 

The solution is to move the parameter file to the centralized directory (/ocfs) and remove any instance_name parameter.
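A common way to arrange that (just a sketch, with hypothetical file names) is to keep only a one-line pointer pfile in $ORACLE_HOME/dbs on each node, so every instance picks up the same shared spfile:

# $ORACLE_HOME/dbs/initRACDB1.ora on node 1 (and initRACDB2.ora on node 2)
SPFILE='/ocfs/oradata/RACDB/spfileRACDB.ora'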

Posted in Administration, RAC issues | No Comments »

Using odd number of disks for Voting disk

Posted by Kamran Agayev A. on 29th May 2014

As you already know, you should use an odd number of disks for the voting disk. A node must be able to access strictly more than half of the voting disks at any time. Let me show you how it works. I have installed and configured a two-node 11gR2 RAC on VirtualBox and will use the following scenario to demonstrate it:

– Create a diskgroup with 3 failure groups and 3 different disks

– Move the voting disk to the new diskgroup. Shut down the second node and detach one of the disks. In this case, the cluster should still start, as it can access more than half of the voting disks (2 out of 3)

– Start the second node. The cluster should be up. Shut the second node down again, detach a second voting disk and start it. This time the cluster will not start. Check the ocssd.log file

– Shut down all nodes, attach the previous disks again and start the cluster. It will come up

Here’re the detailed steps:

– Create a diskgroup :

(screenshot)
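The screenshot shows the diskgroup being created; in SQL it would be roughly the sketch below (disk paths and failure group names are hypothetical):

SQL> CREATE DISKGROUP vdisk NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/dev/oracleasm/disks/VDISK1'
       FAILGROUP fg2 DISK '/dev/oracleasm/disks/VDISK2'
       FAILGROUP fg3 DISK '/dev/oracleasm/disks/VDISK3';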

– Mount the diskgroup at the second node:

SQL> ALTER DISKGROUP vdisk MOUNT;

 

– Replace voting disk, move it to the new diskgroup and query the voting disk:

(screenshot)
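The replacement and the check shown in the screenshot come down to commands along these lines (run from the Grid home):

crsctl replace votedisk +VDISK
crsctl query css votedisk      # should list three voting files inside the VDISK diskgroup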

– Shut down the second node and detach one of the disks of the VDISK diskgroup:

(screenshot)

– Start the second node, query the voting disks and check whether the clusterware is up:

(screenshot)

– Shut down the second node again, remove a second disk from the voting diskgroup and start the node:

(screenshot)

– Check the log file at $GRID_HOME/log/node2/cssd/ocssd.log :

2014-05-29 01:51:23.055: [ CSSD][2946955008]clssnmvVerifyCommittedConfigVFs: Insufficient voting files found, found 1 of 3 configured, needed 2 voting files
2014-05-29 01:51:23.055: [ CSSD][2946955008](:CSSNM00020:)clssnmvVerifyCommittedConfigVFs: voting file 0, id 279c162c-1b964f88-bfb1d622-aecc9e4e not found
2014-05-29 01:51:23.055: [ CSSD][2946955008](:CSSNM00020:)clssnmvVerifyCommittedConfigVFs: voting file 1, id 7e282f3f-5e514f42-bfb79396-c69fda76 not found
2014-05-29 01:51:23.055: [ CSSD][2946955008](:CSSNM00021:)clssnmCompleteVFDiscovery: Found 1 voting files, but 2 are required. Terminating due to insufficient configured voting files

– As you can see, the cluster is down. Now shut down both nodes, attach the disks back to the second node and check the status of the clusterware:

(screenshot)

Posted in RAC issues | No Comments »

How to troubleshoot CRSCTL REPLACE VOTEDISK error?

Posted by Kamran Agayev A. on 27th May 2014

It took me some time to investigate why the CRSCTL REPLACE VOTEDISK command was not working.

[oracle@node1 ~]$ crsctl replace votedisk VDISK
CRS-4264: The operation could not be validated
CRS-4000: Command Replace failed, or completed with errors.

When you get an error during voting disk replacement, make sure you check the following items:

– Make sure the disk group you’re moving the voting disk to is mounted on all nodes.

– Make sure the compatible.asm attribute of the diskgroup matches the version of the Grid software you’re using. You can change it using the following command:

alter diskgroup VDISK set attribute 'compatible.asm'='11.2';

Query the V$ASM_DISKGROUP view to make sure it is the same as for the other disk groups and matches the version of the Grid software:

select group_number, name, compatibility, database_compatibility from v$asm_diskgroup;

– Check the alert.log file of the ASM instance and any available ASM trace files. Check the /var/log/messages file, and trace the replace command using strace. See if you can catch any error in the output:

[grid@node5 ~]$ strace crsctl replace votedisk VDISK

– Make sure you have an odd number of voting disks

– Make sure there’s enough space in the diskgroup

– Make sure the disk permissions are correct

– Make sure you’re running the command as the Grid software owner

Today, none of the above checks revealed the problem :) In my case, the issue was that I was using the wrong “crsctl” binary. After upgrading the RAC environment from 11.2.0 to 11.2.3, I was still using the old crsctl (by accident; I forgot to set the environment variables correctly). But no need to worry, it was a test database.
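So one more check worth adding to the list: make sure you are calling crsctl from the correct (new) Grid home, for example:

which crsctl
crsctl query crs activeversion     # active clusterware version of the cluster
crsctl query crs softwareversion   # version of the software the local binaries belong to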

Let me know if you have any additional checks for investigating a voting disk replacement failure.

Cheers

Posted in RAC issues | 4 Comments »

Cluster won’t start if diagnostic_dest folder is missing

Posted by Kamran Agayev A. on 3rd March 2014

One of the reasons why the cluster won’t start is that the DIAGNOSTIC_DEST folder is missing. Here is what I got today on one of the nodes of a RAC environment:

db-bash: crs_stat -t

HA Resource                                   Target     State
-----------                                   ------     -----
error connecting to CRSD at [(ADDRESS=(PROTOCOL=IPC)(KEY=ora_crsqs))] clsccon 184

 

While checking the alert log file of the clusterware ($GRID_HOME/log/node1/alertnode1.log), I found the following:

[/home/oracle/11.2.0/grid_1124/bin/oraagent.bin(4745)]CRS-5011:Check of resource "+ASM" failed: details at "(:CLSN00006:)" in "/home/oracle/11.2.0/grid_1124/log/node01/agent/ohasd/oraagent_oracle/oraagent_oracle.log"

 

The ASM instance had failed to start. I connected to it and tried to start it manually:

db-bash-$ asm

SQL*Plus: Release 11.2.0.4.0 Production on Mon Mar 3 10:32:24 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

ASM> startup

ORA-48108: invalid value given for the diagnostic_dest init.ora parameter

ORA-48140: the specified ADR Base directory does not exist [/home/oracle/11.2.0/dbhome]

ORA-48187: specified directory does not exist

HPUX-ia64 Error: 2: No such file or directory

Additional information: 1

ASM>

 

The ADR base directory was missing. After creating it, I successfully started the CRS and got the happiest message :) :

db-bash-$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
db-bash-$
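For completeness, the fix itself was nothing more than recreating the missing directory and starting the stack again; roughly (the path is the one from the ORA-48140 message above, and the ownership is an assumption to be matched to your installation):

mkdir -p /home/oracle/11.2.0/dbhome
chown oracle:oinstall /home/oracle/11.2.0/dbhome   # assumption: match the Grid/ASM software owner
crsctl start crs                                   # run as root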

Posted in Administration, RAC issues | 2 Comments »

V$ASM_DISKGROUP displays information from the header of ASM disks

Posted by Kamran Agayev A. on 17th January 2014

While playing with OCR recovery, I suddenly realized that the V$ASM_DISKGROUP view gets its information from the headers of the ASM disks specified by the ASM_DISKSTRING parameter. Here’s the description of the V$ASM_DISKGROUP view from the documentation:

V$ASM_DISKGROUP displays one row for every ASM disk group discovered by the ASM instance on the node.

http://docs.oracle.com/cd/E18283_01/server.112/e17110/dynviews_1027.htm

 

I got the explain plan of V$ASM_DISKGROUP to find out which X$ table stands behind it, and got X$KFGRP:

SQL> set autotrace on

SQL> select count(1) from v$asm_diskgroup;

COUNT(1)

———-

3

 

(screenshot of the execution plan)

SQL> select name_kfgrp from x$kfgrp;

NAME_KFGRP

——————————

 

DATA

FLASH

OCR

 

SQL> select GRPNUM_KFDSK, NUMBER_KFDSK, STATE_KFDSK, ASMNAME_KFDSK, PATH_KFDSK from x$KFDSK;

(screenshot of the query output)

Then I queried ASM_DISKGROUPS parameter :

 

SQL> show parameter disk

 

NAME                                 TYPE        VALUE

———————————— ———– ——————————

asm_diskgroups                       string      OCR, DATA, FLASH

asm_diskstring                       string      /dev/oracleasm/disks

 

Now I will create a new tablespace in the FLASH diskgroup, create a new table, change the owner of the disk of the FLASH diskgroup to make it disappear from the V$ASM_DISKGROUP view, and then return everything back.

SQL> create tablespace new_tbs datafile '+FLASH';

Tablespace created.

SQL> create table new_table (id number) tablespace new_tbs;

Table created.

SQL> insert into new_table values(1);

1 row created.

SQL> commit;

Commit complete.

SQL> select * from new_table;

ID
———-
1

 

I created a parameter file from the spfile, changed the ASM_DISKGROUPS parameter to OCR, DATA (removing FLASH), and mounted the ASM instance again using a parameter file with ASM_DISKGROUPS='OCR','DATA' specified:

 

(screenshot of the V$ASM_DISK output)

 

So DISK6 is still a MEMBER, and as the disk is discovered by the ASM instance, the FLASH diskgroup is dismounted but still visible.

Let’s change the owner of the disk and check it again. But before changing the owner, let’s read its header with kfed:

[root@node1 disks]# kfed read DISK6

(screenshot of the kfed output)

 

Now let’s start the instance and check the V$ASM_DISKGROUP view:

(screenshot of the query output)

 

Query X$KFGRP view:


 
SQL> select NAME_KFGRP from X$KFGRP;

NAME_KFGRP

——————————

 

DATA

OCR

SQL>

 

Switch to the database and check if you can query the table:

SQL> select * from new_table;

select * from new_table

*

ERROR at line 1:

ORA-01157: cannot identify/lock data file 8 – see DBWR trace file

ORA-01110: data file 8: '+FLASH/rac/datafile/new_tbs.257.837080939'

SQL>

 

 

Now shut down the ASM instance, change the owner back and check V$ASM_DISKGROUP again:

[root@node1 disks]# chown -R oracle:dba DISK6
(screenshot of the query output)

 

The FLASH diskgroup appeared, even though it is not specified in the ASM_DISKGROUPS parameter. Now mount the diskgroup and query the table again:

SQL> alter diskgroup flash mount;

Diskgroup altered.

SQL>

SQL> select * from new_table;

ID
———-
1

SQL>

 

This means that if you want to move the ASM instance (and its disks) to another host, it is enough to set the ASM_DISKSTRING parameter correctly; V$ASM_DISKGROUP will discover all the diskgroups.
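In other words, on the new host something like the following sketch is enough for discovery (the diskstring value is the one used in this environment; adjust it to where your disks are presented):

SQL> alter system set asm_diskstring='/dev/oracleasm/disks' scope=memory;
SQL> select name, state from v$asm_diskgroup;
SQL> alter diskgroup data mount;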

Posted in Administration, RAC issues | 2 Comments »