Wednesday, October 28, 2009

Creating a new RAC environment on Linux using Rapid Clone

Hi,
Here I try to explain, at a high level, the steps involved in creating a new R12 environment with a RAC database on the Linux platform. We used a combination of manual and Rapid Clone steps to set up the environment. The source was production and the target was a stage environment. We have a shared APPL_TOP architecture implemented in this environment. The Oracle Apps version is R12 (12.0.4), running a 10.2.0.3 RAC database on block devices. Production was not taken down for cloning purposes; instead, we used hot backups from production to bring up the target database.
At the top level, following are the steps:
1. Set up the RDBMS code tree on target server
2. Make Hot Backup Consistent
3. Copy the source block devices from backup server to target server
4. Bring up the target database
5. Apps pre-clone steps on source
6. Apps post clone steps on target
Source DB Name: Source
Source DB Instance 1: Source1
Source DB Instance 2 : Source2
Target DB Name: Target
Target DB Instance 1: Target1
Target DB Instance 2: Target2
Source db Hostnames: isprd-db-01 / isprd-db-02
Source Apps Hostnames: isprd-01/isprd-02/isprd-03/isprd-04
Target DB Hostnames: isstg-db-01 / isstg-db-02
Target Apps Hostnames: isstg-01/isstg-02/isstg-03/isstg-04
Load Balancer: target.domain.com
1. Setup the RDBMS code tree on Target Server:
- This requires that we set up the RAC code tree. However, the database needs to be brought up as a single instance first and then converted to RAC.
- Run adpreclone.pl dbTier on the source database host. Since the code tree copy would take a lot of time over the network, we copied the code tree from another environment running on the target servers.
- Set up /etc/oraInst.loc to point to the correct inventory locations.
- Copy only the appsutil folder from production, where you have run adpreclone.pl.
- Ensure that $ORACLE_HOME, $ORACLE_SID and $LD_LIBRARY_PATH are all set.
- Run the following commands on the first node to complete the cloning of the code tree on the target server:
$ cd $ORACLE_HOME/clone/bin
$ perl clone.pl ORACLE_HOME=/oracle/product/10.2.0.3/TARGET ORACLE_HOME_NAME=ORACLE_TARGET '-O"LOCAL_NODE=isstg-db-01"' '-O"CLUSTER_NODES={isstg-db-01,isstg-db-02}"'
- Run root.sh, which is present under $ORACLE_HOME, as the root user.
- Run the following command on the second db node to complete the cloning of code tree on target server:
$ cd $ORACLE_HOME/clone/bin/
perl clone.pl ORACLE_HOME=/oracle/product/10.2.0.3/TARGET ORACLE_HOME_NAME=ORACLE_TARGET '-O"CLUSTER_NODES={isstg-db-01,isstg-db-02}"' '-O"LOCAL_NODE=isstg-db-02"' '-O-noConfig'
$ cd $ORACLE_HOME/oui/bin
$ ./runInstaller -silent -updateNodeList ORACLE_HOME=/oracle/product/10.2.0.3/TARGET "CLUSTER_NODES={isstg-db-01,isstg-db-02}"
- run root.sh from $ORACLE_HOME
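For reference, the /etc/oraInst.loc mentioned above is a simple two-line file pointing OUI at the central inventory; the path and group name here are illustrative assumptions for this environment:

```
inventory_loc=/oracle/oraInventory
inst_group=dba
```

Make sure it points at the inventory you intend the cloned home to register into before running clone.pl.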
2. Make Hot Backup Consistent:
This step is required because we are not taking the copy from a cold backup. In our environment, we have a backup server where the hot backup (datafiles) is stored. We also have the code tree there, and using this code tree we can make the hot backup consistent.
On the backup server:
SQL> startup nomount
SQL> alter database mount;
SQL> recover database until cancel using backup controlfile;
Apply a couple of archive logs as requested by the recovery process and then cancel:
SQL> cancel
SQL> alter database open read only;
SQL> shutdown immediate;
The above steps make hot backup consistent.
- Take a backup of the control file on source:
sql> alter database backup controlfile to trace;
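The trace file produced by the command above lands in user_dump_dest with a lot of surrounding noise. A small helper (a sketch; trace file location varies per environment) can pull out just the CREATE CONTROLFILE statement for editing:

```shell
#!/bin/sh
# extract_create_controlfile prints the CREATE CONTROLFILE statement(s)
# embedded in a "backup controlfile to trace" trace file.
# sed prints from each CREATE CONTROLFILE line through the line that
# carries the closing semicolon; the trace typically contains both a
# NORESETLOGS and a RESETLOGS variant, and this prints each of them.
extract_create_controlfile() {
  sed -n '/CREATE CONTROLFILE/,/;/p' "$1"
}
```

Keep the RESETLOGS variant and edit it for the target (SET DATABASE "TARGET" RESETLOGS, target device paths) before using it in the next step.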
- Create dd scripts to copy the block devices from source to target. Sample command for one file would look like:
dd if=/dev/mapper/oraSOURCE_0213_03Z7_4400p1 bs=256k | gzip --fast | ssh isstg-db-01 "gunzip | dd of=/dev/mapper/oraTARGET_4669_1DXH_4400p1 bs=256k"
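With dozens of block devices to move, generating each pipeline by hand is error-prone. A minimal helper along these lines (a sketch; the device names and host are just the example pair from above) builds the command string for one source/target pair, which you can then launch with nohup in the background:

```shell
#!/bin/sh
# make_dd_cmd prints the compressed dd-over-ssh pipeline for one
# source device / target device / target host triple. Run the printed
# command with: nohup sh -c "<cmd>" >> nohup.log 2>&1 &
make_dd_cmd() {
  src=$1; dst=$2; host=$3
  printf 'dd if=%s bs=256k | gzip --fast | ssh %s "gunzip | dd of=%s bs=256k"\n' \
    "$src" "$host" "$dst"
}

# Example: generate the pipeline for the device pair shown above
make_dd_cmd /dev/mapper/oraSOURCE_0213_03Z7_4400p1 \
            /dev/mapper/oraTARGET_4669_1DXH_4400p1 isstg-db-01
```

Feeding a list of source/target pairs through a while-read loop over this function keeps the bs and compression options consistent across all the copies.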
- You can run the above dd commands in the background and check nohup.log for progress.
- While the copy is going on, you can create the control file script for the target. The first lines in our case look as follows:
STARTUP NOMOUNT
CREATE CONTROLFILE SET DATABASE "TARGET" RESETLOGS NOARCHIVELOG
- Make sure all references are to the target.
- If you are planning to have 16 groups of redo logs, provide only 8 at this time. You can add the second thread later when converting from single instance to RAC.
- Prepare the init.ora for the target instance. Update the correct values for the control files which will be created in the next section. Have the list of redo log volumes handy.
- Set _no_recovery_through_resetlogs=TRUE in the target, as the source for this target is RAC.
- Take care to rename the tnsnames.ora, ldap.ora and sqlnet.ora in the target database code tree ($TNS_ADMIN) location so that no damage is done to any remote production DBs through database links that got carried over.
- Set the AQ and job queue processes to 0 so that no inadvertent damage is done.
- Once the copy is over, bring up the target database using the control file script.
- Take a backup of the sys.link$ table and truncate it.
- Run exec fnd_conc_clone.setup_clean as the apps user.
- Once the DB comes up cleanly, convert it to RAC.
- Ensure that you have set the undo_tablespace parameter correctly.
- Ensure that you have the RAC init parameters set up, for example CLUSTER_DATABASE, CLUSTER_INTERCONNECTS, REMOTE_LISTENER, LOCAL_LISTENER and so on.
- Once you have finished the above steps, bring up the single-node instance with the SID pointing to the target.
- Convert the single node to RAC by adding additional log groups:
ALTER DATABASE ADD LOGFILE THREAD 2
  GROUP 9  ('/dev/mapper/oraTarget_4669_2B5A_log2_1000p1', '/dev/mapper/oraTarget_4669_2B7A_log2_mirror_1000p1') SIZE 512M,
  GROUP 10 ('/dev/mapper/oraTarget_4669_2B5A_log2_1000p2', '/dev/mapper/oraTarget_4669_2B7A_log2_mirror_1000p2') SIZE 512M,
  GROUP 11 ('/dev/mapper/oraTarget_4669_2B5A_log2_1000p3', '/dev/mapper/oraTarget_4669_2B7A_log2_mirror_1000p3') SIZE 512M,
  GROUP 12 ('/dev/mapper/oraTarget_4669_2B5A_log2_1000p5', '/dev/mapper/oraTarget_4669_2B7A_log2_mirror_1000p5') SIZE 512M,
  GROUP 13 ('/dev/mapper/oraTarget_4669_2B5A_log2_1000p6', '/dev/mapper/oraTarget_4669_2B7A_log2_mirror_1000p6') SIZE 512M,
  GROUP 14 ('/dev/mapper/oraTarget_4669_2B5A_log2_1000p7', '/dev/mapper/oraTarget_4669_2B7A_log2_mirror_1000p7') SIZE 512M;

alter database enable public thread 2;
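To make the RAC init parameters mentioned earlier concrete, the relevant entries in the target init.ora files might look like the following. This is an illustrative sketch only: the undo tablespace names and the listener TNS aliases are assumptions, not values from this environment.

```
# Illustrative RAC settings for the Target init.ora files
*.cluster_database=TRUE
*.cluster_database_instances=2
Target1.instance_number=1
Target2.instance_number=2
Target1.thread=1
Target2.thread=2
Target1.undo_tablespace=UNDOTBS1        # assumed name
Target2.undo_tablespace=UNDOTBS2        # assumed name
*.remote_listener=TARGET_REMOTE         # hypothetical TNS alias
Target1.local_listener=TARGET1_LOCAL    # hypothetical TNS alias
Target2.local_listener=TARGET2_LOCAL    # hypothetical TNS alias
```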
- Start up the second node after ensuring the RAC parameters have been defined in the node 2 init file.
- Verify the alert log for any errors.
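Once both instances are up, a quick sanity check against the standard dynamic performance views confirms that both redo threads are enabled and both instances are open:

```
SQL> select thread#, status, enabled from v$thread;
SQL> select inst_id, instance_name, status from gv$instance;
```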
Create the xml file for node 1 and node 2 of the RAC db respectively. For node 1, use the following syntax:
[oracle@isstg-db-01 bin]$ cd $ORACLE_HOME/appsutil/clone/bin
[oracle@isstg-db-01 bin]$ perl adclonectx.pl contextfile=/oracle/product/10.2.0.3/TARGET/appsutil/clone/context/db/SOURCE_isprd-db-01.xml template=$ORACLE_HOME/appsutil/template/adxdbctx.tmp initialnode
Respond to the prompts and the above command will create the xml file for you. Copy the context file from source server to target server to use as per the contextfile variable in the above command.
For the second node, use the following syntax:
cd $ORACLE_HOME/appsutil/clone/bin
perl adclonectx.pl contextfile=<> template=$ORACLE_HOME/appsutil/template/adxdbctx.tmp pairsfile=$ORACLE_HOME/appsutil/clone/pairsfile.txt addnode
In the contextfile variable above, provide the location of the newly created xml (the one which we created in preceding step) file copied to the secondary node from the primary node.
Run AutoConfig on both the DB nodes.
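On the database tier, AutoConfig is typically run from the script generated under the Oracle home; the context directory below assumes the usual <SID>_<hostname> naming for this environment:

```
$ cd $ORACLE_HOME/appsutil/scripts/Target1_isstg-db-01
$ ./adautocfg.sh
```

Repeat the equivalent on the second node (Target2_isstg-db-02) and review the AutoConfig log for errors before moving on.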
This completes the db part. Next move on to apps part.
5. Apps pre-clone steps on source
This portion is pretty much standard: we run adpreclone.pl on the source node. In the case of a shared APPL_TOP, running it on any one node will do.
perl adpreclone.pl appsTier
Then copy the apps file system to target.
6. Apps post clone steps on target
Once the copy is complete, login to target system and run the following command:
perl adcfgclone.pl appsTier
On the other nodes, run the following command to create the context and configure the shared APPL_TOP:
perl adclonectx.pl addnode contextfile=/apps/target/TARGET_isstg-04.xml
In the above syntax, contextfile is the newly created xml file from node 1, copied over to the additional nodes.
This pretty much sums up the steps at a macro level. I have not gone down to the micro level and the finishing tasks, as that would result in a humongous post. I have put forth the steps which I thought were critical and which provide direction for a similar effort.
- Aravind Kamath Posral
