Showing posts from April, 2011

Setting guaranteed restore point with/without enabling flashback logging

Completing a full restore from an RMAN backup or cold backup should now be a "thing of the past" thanks to flashback technology.  In a 10g/11g Oracle database, it is worth creating a guaranteed restore point (GRP) without enabling flashback logging when you only need the ability to flash the database back to the point at which the GRP was created and are not concerned with any point in time AFTER that.  If you do want to be able to flash back the database to any point in time AFTER the GRP was created, you WOULD need to enable flashback logging. A GRP without flashback logging still generates flashback logs, but far fewer and far smaller ones than when flashback logging is enabled. Starting with Oracle 11gR2, you can enable/disable flashback logging and create/drop GRPs at will WITHOUT requiring a database outage. I will go thr
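As a minimal sketch of the syntax involved (the restore point name before_upgrade is just an illustrative placeholder, and db_recovery_file_dest/db_recovery_file_dest_size must already be set):

-- Optional check; under 11gR2 a GRP can be created whether or not this returns YES
SELECT flashback_on FROM v$database;

-- Create the guaranteed restore point
CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

-- Flash back to it later (requires MOUNT state), then drop it when no longer needed
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
ALTER DATABASE OPEN RESETLOGS;
DROP RESTORE POINT before_upgrade;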

Clone database from 11g physical standby

I found a great blog entry on cloning a database from an 11g physical standby and decided to work through the steps myself. The method basically entails turning your physical standby into a snapshot standby database via dgmgrl (11g only). For 11gR2, flashback logging is not a requirement.  Otherwise, flashback logging must be enabled, because converting a physical standby into a snapshot standby requires a guaranteed restore point to be created; this is what allows the snapshot standby to be converted back into a physical standby. The only caveat is that db_recovery_file_dest and db_recovery_file_dest_size must be configured. 1. Start the clone instance with startup nomount. SQL*Plus: Release 11.2.0.1.0 Production on Wed May 11 13:06:12 2011 Copyright (c) 1982, 2009, Oracle.  All rights reserved. Connected to an idle instance. SQL> startup nomount ORACLE instance started. Total System Global Area  217157632 bytes Fixed Size                  2211928 bytes Variable Size      
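For reference, the broker conversion itself looks roughly like this (standby_db is a placeholder for the database name in your Data Guard broker configuration):

DGMGRL> CONVERT DATABASE 'standby_db' TO SNAPSHOT STANDBY;
(run the clone against the now read/write snapshot standby)
DGMGRL> CONVERT DATABASE 'standby_db' TO PHYSICAL STANDBY;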

Move controlfile into ASM

To move your database controlfile into ASM, do the following:
1. Run show parameter control_files to see the current location(s).
2. Set the control_files parameter in the database to the desired new location(s).
    alter system set control_files='+DG01','+FRA01' scope=spfile;
3. Shut down the database and start it nomount.
    shutdown immediate;
    startup nomount;
4. Restore the controlfile from the old control_files location via RMAN.
    RMAN> restore controlfile from '+DG01/esridevw_stby/controlfile/current.9361.748436947';
5. Mount and open the database.
    alter database mount;
    alter database open;
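As a quick sanity check after step 5, something like the following should show the controlfiles in their new locations:

SQL> show parameter control_files
SQL> select name from v$controlfile;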

11.2 RAC issue: 2nd instance won't start via srvctl

I encountered a very weird issue on Oracle 11.2 RAC that I couldn't find any reference to on Metalink.  The issue occurred after I created a RAC database, registered it with CRS manually, and started it up via srvctl. Everything was working fine up to this point.  The problem arose when I stopped and restarted the database with srvctl: only the first instance would start successfully.  The second instance wouldn't start due to this error: CRS-2664: Resource 'ora.asm' is already running on 'node_name' This was very strange because the instance would start just fine if I used sqlplus.  After further research with my colleagues, we found an uncommented line in the /etc/hosts file that was causing the issue.  The line was: ::1   localhost6.localdomain6 localhost6 After having the SA comment this line out and bounce the node that was having the problem, the second instance was started and restarted
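In other words, the fix was simply to comment out the IPv6 localhost entry in /etc/hosts on the affected node (and bounce it):

# ::1   localhost6.localdomain6 localhost6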

Oracle Data Pump (IMPDP) via Network link

Whenever I need to use Data Pump, I use Oracle's Data Pump via network link method instead of taking an export, copying over the dump files, and then doing an import.  The Data Pump via network link method is much simpler.
- Create a user on the SOURCE db to use for the network link from the TARGET database.
  CREATE USER ARTISDBA IDENTIFIED BY "Password123$";
  GRANT DBA TO ARTISDBA;
- Create a database link on the TARGET.
  CREATE DATABASE LINK SERVICE_NAME CONNECT TO ARTISDBA IDENTIFIED BY "Password123$" USING 'SERVICE_NAME';
- Copy the TNSNAMES entry for the source database into the target's tnsnames.ora file.
- Create a directory in the TARGET to use for your logfiles.
  CREATE DIRECTORY dumpdir AS '/orabkup/expdp';
  GRANT READ,WRITE ON DIRECTORY dumpdir TO PUBLIC;
- Run the import job on the TARGET.  You can parallelize the job to make it faster.
  impdp parfile=impdp.par
  PARFILE contents:
  DIRECTORY=dumpdir
  NETWORK_LINK=SERVICE_NAME
  SCHEMAS='TEST'
  PARALLEL=4
  LOGFILE=impdp_fr
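Before kicking off the import, a quick way to confirm the database link actually resolves from the TARGET (using the link name SERVICE_NAME created above):

SQL> SELECT * FROM dual@SERVICE_NAME;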