RAC Attack - Oracle Cluster Database at Home/Clusterware Callouts


The goal of this lab is to demonstrate Oracle Fast Application Notification (FAN) Callouts. In versions prior to 11g, these were also known as Oracle Clusterware Callouts.

This is a relatively little-known capability of Oracle Clusterware: it can fire a script (or a whole directory full of them) to perform whatever tasks you want performed whenever a cluster-wide event happens.

For more information, consult the documentation here: http://download.oracle.com/docs/cd/B28359_01/rac.111/b28254/hafeats.htm#BABGCEBF
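Concretely, Clusterware runs every executable it finds in the Grid home's racg/usrco directory when a FAN event fires, and it hands the event description to each script as command-line arguments in name=value form (you will see real examples of these strings in the log output later in this lab). The fragment below is a minimal sketch, not part of the lab itself, showing one way such a script could pull individual fields out of the event it receives; the variable names and the /tmp/`hostname`_fan_fields.log location are made up for illustration.

     #!/bin/ksh
     # Sketch only: pick fields out of the FAN event passed on the command line.
     # A node event arrives looking roughly like:
     #   NODE VERSION=1.0 host=collabn2 status=nodedown reason=member_leave timestamp=...
     EVENT_TYPE=$1                       # NODE, INSTANCE, SERVICE, ...
     shift
     for ARG in "$@"; do
       case $ARG in
         host=*)   EVENT_HOST=${ARG#host=}     ;;
         status=*) EVENT_STATUS=${ARG#status=} ;;
         reason=*) EVENT_REASON=${ARG#reason=} ;;
       esac
     done
     echo "`date`: $EVENT_TYPE on $EVENT_HOST status=$EVENT_STATUS reason=$EVENT_REASON" \
       >> /tmp/`hostname`_fan_fields.log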

For this exercise, we'll configure a FAN callout script on each node and then trigger various cluster events to see how each event fires the callout script.



  1. Start with a normal cluster with both nodes up and running.
  2. From a shell prompt (logged in as oracle) on each server, navigate to /u01/grid/oracle/product/11.2.0/grid_1/racg/usrco. Create a file there called callout1.sh using vi (or your favorite editor). The contents of the file should be this:
     #!/bin/ksh
     umask 022
     FAN_LOGFILE=/tmp/`hostname`_uptime.log
     echo $* "reported="`date` >> $FAN_LOGFILE &


  3. Make sure that the permissions on the file are set to 755 using the following command:
     [oracle@<node_name> ~]$ chmod 755 \
     > /u01/grid/oracle/product/11.2.0/grid_1/racg/usrco/callout1.sh
  4. Monitor the logfiles for clusterware on each node. On each node, start a new window and run the following command:
     [oracle@<node_name> ~]$ tail -f \
     > /u01/grid/oracle/product/11.2.0/grid_1/log/`hostname -s`/crsd/crsd.log
  5. Next, we need to trigger an event that will cause the callout to fire. One such event is node shutdown. Shut down the clusterware on node collabn2:
     [root@collabn2 ~]# crsctl stop crs
     Stopping resources. This could take several minutes.
     Successfully stopped Oracle Clusterware resources
     Stopping Cluster Synchronization Services.
     Shutting down the Cluster Synchronization Services daemon.
     Shutdown request successfully issued.
  6. Following this command, watch the logfiles you began monitoring in step 4 above. Because we set long timeouts on our test cluster, you might have to wait a few minutes before you see anything.
    • You should eventually observe entries noting that the node has failed, and shortly after that an entry should appear in the /tmp/<hostname>_uptime.log file indicating that the node is down.
    • Note which members run the clusterware callout script. (A surviving member could run commands to notify clients and/or application servers that one of the cluster nodes has died; a sketch of a callout along those lines follows these steps.)
    You should see these messages in the /tmp/*.log files:
      NODE VERSION=1.0 host=collabn2 incarn=0 status=nodedown reason=public_nw_down timestamp=30-Aug-2009 01:56:12 reported=Sun Aug 30 01:56:13 CDT 2009
      NODE VERSION=1.0 host=collabn2 incarn=147028525 status=nodedown reason=member_leave timestamp=30-Aug-2009 01:57:19 reported=Sun Aug 30 01:57:20 CDT 2009
  7. Restart the clusterware. Is there a node up event?
     [root@collabn2 bin]# crsctl start crs
  8. Try powering off one of the virtual machines – is there any difference from the previous test? What if you disable a Linux network interface or VMware network card?
  9. You may conduct more testing if you wish. Another interesting event is a database instance going down unexpectedly. Come back to this lab after installing a database to test that situation.
     [oracle@collabn2 ~]$ sqlplus "/ as sysdba"

     SQL*Plus: Release 11.1.0.6.0 - Production on Fri Aug 1 14:49:29 2008

     Copyright (c) 1982, 2007, Oracle.  All rights reserved.

     Connected to:
     Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
     With the Partitioning, Real Application Clusters, OLAP, Data Mining
     and Real Application Testing options

     SQL> shutdown abort;
     ORACLE instance shut down.
     SQL>

     The callout log in /tmp should then record the instance-down event:
     INSTANCE VERSION=1.0 service=RAC.vm.ardentperf.com database=RAC instance=RAC2 host=collabn2 status=down reason=user timestamp=01-Aug-2008 12:34:02 reported=Fri Aug 1 12:34:03 CDT 2008
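Once you have seen the node-down and instance-down events land in the uptime log, a natural next step is the idea mentioned in step 6: a callout that reacts to specific events instead of just logging everything. The script below is a minimal sketch along those lines, assuming the same racg/usrco directory from step 2; the callout2.sh name and the /u01/app/scripts/notify_dba.sh helper are hypothetical and stand in for whatever alerting mechanism you actually use.

     #!/bin/ksh
     # callout2.sh -- sketch: react only to "down" events rather than logging everything.
     # Place next to callout1.sh in the racg/usrco directory and chmod 755 it.
     # NOTE: /u01/app/scripts/notify_dba.sh is a hypothetical helper, not part of the lab.
     umask 022
     FAN_LOGFILE=/tmp/`hostname`_alerts.log

     case "$*" in
       NODE*status=nodedown*|INSTANCE*status=down*)
         echo $* "reported="`date` >> $FAN_LOGFILE
         # /u01/app/scripts/notify_dba.sh "$*" &
         ;;
     esac

Keep callouts quick: Clusterware fires every script in usrco for every event, which is why the lab's callout1.sh pushes its logging into the background with a trailing &; anything slower, such as actually sending a notification, should be backgrounded the same way.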