http://www.beyondoracle.com/2009/01/07/move-clone-copy-database-with-rman/
Cloning a database from one server to another, with different directory structures, can be easily accomplished with RMAN. Imagine that you have a database on one node and you want to copy it to another node, without shutting down your database, and move the datafiles to a different directory structure. This will be demonstrated here using RMAN.
ASSUMPTIONS
Source Database
10.2.0.4 database online (sid neo) at server1 (app)
archivelog mode is enabled
db datafiles are in the directory /opt/oracle/oradata/neo2
database will be backed up online with RMAN to /u01/backup
Destination Database
10.2.0.4 Oracle Home installed without any database running at server2 (mynode2.com)
db datafiles must be created / moved to different directory: /opt/oracle/oradata/neo
only the manual backup created at server1 will be moved to server2
AT SERVER1
Log on as the oracle software owner at server1 and set your environment variables. Then open RMAN and back up the source database we want to copy / move / clone.
[oracle@neoface oracle]$ export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
[oracle@neoface oracle]$ export ORACLE_SID=neo
[oracle@neoface oracle]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@neoface oracle]$ sqlplus "/ as sysdba"
SQL> alter system switch logfile;
SQL> exit;
[oracle@neoface oracle]$ rman target /
RMAN> backup database plus archivelog;
The RMAN backup created the following files at /u01/backup:
cf_NEO_c-1689570411-20090106-00 (control file backup)
back_NEO_675389594_736_1
back_NEO_675389780_737_1
back_NEO_675390018_738_1
back_NEO_675390293_739_1
Copy those five backup files (the four backup pieces plus the control file autobackup) to server2:
[oracle@neoface oracle]$ scp /u01/backup/back_NEO* /u01/backup/cf_NEO* root@mynode2.com:/u01/backup/
Create an initialization parameter file (pfile) from the current spfile, then copy it to server2.
[oracle@neoface oracle]$ sqlplus "/ as sysdba"
SQL> create pfile from spfile;
SQL> exit;
[oracle@neoface oracle]$ scp /opt/oracle/product/10.2.0/db_1/dbs/initneo.ora oracle@mynode2.com:/opt/oracle/product/10.2.0/db_1/dbs/initneo.ora
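Before copying, it can help to confirm exactly which backup pieces and which control file autobackup were produced and where they live. A minimal check (names and paths will differ in your environment):
[oracle@neoface oracle]$ rman target /
RMAN> list backup summary;
RMAN> list backup of controlfile;
RMAN> exit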
AT SERVER2
Logon at server2 to do the following steps:
create the OS directories to hold the datafiles and the admin log files and pfile:
edit the pfile to update the directory paths in parameters such as bdump, udump and control_files
change the ownership of the pfile so it belongs to the oracle user
connect to RMAN and startup the database in nomount mode
restore the control file from the backup
mount the database
validate the catalog by crosschecking, and catalog the 4 backup pieces we copied
rename the redo log files, then restore the database (renaming the datafiles) and recover it
Switch to the oracle user and create the datafile directories:
[root@mynode2 root] su - oracle
[oracle@mynode2 oracle]$ mkdir /opt/oracle/admin/neo -p
[oracle@mynode2 oracle]$ cd /opt/oracle/admin/neo
[oracle@mynode2 oracle]$ mkdir cdump udump bdump pfile
[oracle@mynode2 oracle]$ mkdir /opt/oracle/oradata/neo -p
Edit your pfile according to your new directory structure:
[oracle@mynode2 oracle]$ vi /opt/oracle/product/10.2.0/db_1/dbs/initneo.ora
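For illustration only, the edited entries in initneo.ora might end up looking like the lines below; the exact parameters depend on your source pfile, and the control file names are assumptions for this example:
*.background_dump_dest='/opt/oracle/admin/neo/bdump'
*.core_dump_dest='/opt/oracle/admin/neo/cdump'
*.user_dump_dest='/opt/oracle/admin/neo/udump'
*.control_files='/opt/oracle/oradata/neo/control01.ctl','/opt/oracle/oradata/neo/control02.ctl','/opt/oracle/oradata/neo/control03.ctl'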
Set environment variables and start working on RMAN:
[oracle@mynode2 oracle]$ export ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
[oracle@mynode2 oracle]$ export ORACLE_SID=neo
[oracle@mynode2 oracle]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@mynode2 oracle]$ rman target /
RMAN> startup nomount
RMAN> restore controlfile from '/u01/backup/cf_NEO_c-1689570411-20090106-00';
RMAN> alter database mount ;
RMAN> exit
Now that the database is mounted, we'll determine the SCN from the current log, which we'll use later to recover the database. Take note of your current SCN (the FIRST_CHANGE# of the CURRENT log group).
[oracle@mynode2 oracle]$ sqlplus "/ as sysdba"
SQL> select group#, first_change#, status, archived from v$log;
GROUP# FIRST_CHANGE# STATUS ARC
---------- ------------- ---------------- ---
1 336565140 ACTIVE YES
2 336415067 CURRENT NO
3 336523814 INACTIVE YES
SQL> exit;
[oracle@mynode2 oracle]$ rman target /
As we only copied to this server the backup created at the beginning, and not all the backups that exist on server1, we must crosscheck the catalog against the files present on the OS. Run the following commands at the RMAN prompt:
RMAN> CROSSCHECK backup;
RMAN> CROSSCHECK copy;
RMAN> CROSSCHECK backup of database;
RMAN> CROSSCHECK backup of controlfile;
RMAN> CROSSCHECK archivelog all;
Now let's catalog the 4 backup pieces that we copied to server2:
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675389594_736_1';
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675389780_737_1';
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675390018_738_1';
RMAN> CATALOG backuppiece '/u01/backup/back_NEO_675390293_739_1';
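To confirm that the copied pieces are now registered in the restored control file, a quick optional check is:
RMAN> list backup summary;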
Next, as we changed the directory of our datafiles, we must also rename the redo logs:
RMAN> ALTER DATABASE rename file '/opt/oracle/oradata/neo2/redo01.log' to '/opt/oracle/oradata/neo/redo01.log';
RMAN> ALTER DATABASE rename file '/opt/oracle/oradata/neo2/redo02.log' to '/opt/oracle/oradata/neo/redo02.log';
RMAN> ALTER DATABASE rename file '/opt/oracle/oradata/neo2/redo03.log' to '/opt/oracle/oradata/neo/redo03.log';
If you use BLOCK CHANGE TRACKING to allow fast incremental backups, and you want to move the datafiles to a different directory, you must disable this feature and re-enable it specifying the new directory:
RMAN> ALTER DATABASE disable block change tracking;
RMAN> ALTER DATABASE enable block change tracking using file '/opt/oracle/oradata/neo/block_change_tracking.f';
This avoids errors like ORA-19751 and ORA-19750.
Now let's run the script that will restore the database, renaming the datafiles and recovering up to SCN 336415067, the first change of the current log.
RMAN> run {
set newname for datafile 1 to "/opt/oracle/oradata/neo/system01.dbf";
set newname for datafile 2 to "/opt/oracle/oradata/neo/undotbs01.dbf";
set newname for datafile 3 to "/opt/oracle/oradata/neo/sysaux01.dbf";
set newname for datafile 4 to "/opt/oracle/oradata/neo/data01.dbf";
set newname for datafile 5 to "/opt/oracle/oradata/neo/index01.dbf";
set newname for datafile 6 to "/opt/oracle/oradata/neo/users01.dbf";
set newname for datafile 7 to "/opt/oracle/oradata/neo/streams.dbf";
set newname for datafile 8 to "/opt/oracle/oradata/neo/data01brig.dbf";
set newname for datafile 9 to "/opt/oracle/oradata/neo/index02.dbf";
restore database;
switch datafile all;
recover database until scn 336415067;
}
RMAN> ALTER DATABASE open resetlogs;
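Once the database is open, a quick sanity check (a sketch, using the same session conventions as above) is to confirm that the datafiles and redo logs now point to the new directory, and optionally recreate the spfile:
[oracle@mynode2 oracle]$ sqlplus "/ as sysdba"
SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> create spfile from pfile;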
Using the "until" clause directly in the "recover database" statement, rather than a "set until" as the first instruction after the run command (as most people do), was the only way I managed to avoid errors like ORA-01110 and ORA-01180 in RMAN.
Later I'll upload a video so you can easily see it running flawlessly.
Btw, HAPPY NEW YEAR!!!!
PS: to provide a better look and offer my readers more resources, my site is now supported by advertising. If you feel it is too annoying, please give me some feedback. If you think it deserves to be clicked… just do it
(2011/07/20) updated to issue a log switch before the database backup, to avoid the errors "RMAN-06026: some targets not found – aborting restore" and "RMAN-06023: no backup or copy of datafile 1 found to restore"
Thursday, October 29, 2015
Tuesday, October 27, 2015
A concurrent request proceeds through a life cycle of phases:
Table 1-1 Concurrent Request Life Cycle
Phase Activity
Pending The request is waiting to be run
Running The request is running
Completed The request has finished
Inactive The request cannot be run
A concurrent program library contains concurrent programs that can be called by a concurrent manager. An important example is the Oracle Application Object Library program library (FNDLIBR), which contains Oracle E-Business Suite immediate concurrent programs, and is assigned to the standard concurrent manager. Although each concurrent manager can only run immediate concurrent programs from its own concurrent program library, it can also run spawned or Oracle tool concurrent programs.
Various database tables are employed by the concurrent processing architecture:
Table 1-2 Concurrent Processing Database Tables
Table Content
FND_CONCURRENT_REQUESTS Details of user requests, including status, start date, and completion date
FND_CONCURRENT_PROGRAMS Details of concurrent programs, including execution method, whether the program is constrained, and whether it must be run alone.
FND_CONCURRENT_PROCESSES Cross-references between concurrent requests and queues, and a history of concurrent manager processes
FND_CONCURRENT_QUEUES Information about each of the concurrent manager queues
Caution: Do not update these tables manually. You can (subject to your organization's archiving requirements) periodically run the "Purge Concurrent Request and/or Manager Data" program to keep them from growing too large.
All About Oracle Parallel Concurrent Processing (PCP)
Refer to the link below:
http://www.dbatutor.com/2014/04/all-about-oracle-parallel-concurrent.html
All About Oracle Parallel Concurrent Processing (PCP)
1) What is PCP
- Parallel Concurrent Processing (PCP) is an extension of the Concurrent Processing architecture.
- PCP allows concurrent processing activities to be distributed across multiple nodes, maximizing throughput and providing resilience to node failure.
2) How to Configure Parallel Concurrent Processing (PCP)
Below are steps to configure the PCP in Oracle Applications.
A) Set Up PCP
- Edit the applications context file via Oracle Applications Manager, and set the value of the variable APPLDCP to ON.
- Execute AutoConfig by running the following command on all concurrent processing nodes:
- $ $INST_TOP/admin/scripts/adautocfg.sh
- Source the Applications environment.
- Check the tnsnames.ora and listener.ora configuration files, located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent nodes.
- Restart the Applications listener processes on each application tier node.
- Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator Responsibility. Navigate to Install > Nodes screen, and ensure that each node in the cluster is registered.
- Verify that the Internal Monitor for each node is defined properly, with correct primary node specification, and work shift details. For example, Internal Monitor: Host1 must have primary node as host1. Also ensure that the Internal Monitor manager is activated: this can be done from Concurrent > Manager > Administrator.
- Set the $APPLCSF environment variable on all the Concurrent Processing nodes to point to a log directory on a shared file system.
- Set the $APPLPTMP environment variable on all the CP nodes to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. (This value should be pointing to a directory on a shared file system.)
- Set the profile option 'Concurrent: PCP Instance Check' to OFF if database instance-sensitive failover is not required (i.e. for a non-RAC database). When it is set to ON, a concurrent manager fails over to a secondary application tier node if the database instance to which it is connected becomes unavailable. (A quick sanity check of the PCP setup follows this list.)
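A quick sanity check of the basic PCP settings after AutoConfig might look like the following; this is only a sketch, and it assumes the standard $CONTEXT_FILE, $APPLCSF and $APPLPTMP variables are set by the applications environment file:
$ grep -i s_appldcp $CONTEXT_FILE
$ echo $APPLCSF
$ echo $APPLPTMP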
B) Set Up Transaction Managers (Only R12)
If you are already using transaction managers and you wish to have them fail over, perform the steps below:
- Shut down the application services (servers) on all nodes
- Shut down all the database instances cleanly in the Oracle RAC environment, using the command:
- SQL>shutdown immediate;
- Edit $ORACLE_HOME/dbs/<SID>_ifile.ora and add the following parameters:
_lm_global_posts=TRUE
_immediate_commit_propagation=TRUE
- Start the instances on all database nodes.
- Start up the application services (servers) on all nodes.
- Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction manager works across the Oracle RAC instance.
- Navigate to Concurrent > Manager > Define screen, and set up the primary and secondary node names for transaction managers.
- Restart the concurrent managers.
- If any of the transaction managers are in a deactivated status, activate them from Concurrent > Manager > Administrator.
C) Set Up Load Balancing on Concurrent Processing Nodes (Only Applicable in case of RAC)
If you wish PCP to use the RAC load balancing capability, perform the steps below. Connections will then be load balanced through the <SID>_BALANCE alias and will connect to all the RAC nodes (an example alias is sketched after these steps).
- Edit the applications context file through the Oracle Applications Manager interface, and set the value of Concurrent Manager TWO_TASK (s_cp_twotask) to the load balancing alias (<SID>_balance).
- Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on all concurrent nodes.
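For reference, a load-balancing alias of the kind s_cp_twotask points to typically looks like the sketch below; PROD, racnode1-vip, racnode2-vip and the service name are placeholders, not values from this article:
PROD_BALANCE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = YES)
      (FAILOVER = YES)
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = PROD)
    )
  )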
3) Is RAC Mandatory to Implement PCP?
- No, RAC is not mandatory for PCP. If you have two or more application nodes, you can enable PCP, but PCP works better in conjunction with RAC to handle all the failover scenarios.
4) How PCP Works with RAC?
- In a RAC-enabled environment, PCP uses the cp_two_task environment variable to connect to a database RAC node. This can be set so that each CM node connects to one RAC node, or so that it connects to all the RAC nodes in the cluster.
5) What happens when one of the RAC node goes down when PCP enabled?
- When 'Concurrent: PCP Instance Check' is set to ON and cp_two_task is set to a single SID (i.e. each CM node always connects to one RAC node), then if that database node goes down, PCP identifies the failure and shifts all the concurrent managers to another application node where the database is available.
6) What happens when one of the PCP nodes goes down?
- IMON identifies the failure and, through FNDSM (the Service Manager), initiates ICM startup on a surviving node (if ICM was running on the failed node). ICM then starts all the managers.
7) What are the primary and secondary nodes in PCP?
- It is a requirement to define primary and secondary nodes in order to distribute load across the servers. If this is not defined, all the managers will start by default on the node where ICM is running.
8) How does failback happen in PCP?
- Once the failed node comes back online, IMON detects it and ICM fails back all the managers defined on that node.
9) What happens to requests running during failover in PCP?
- It is important to note that failover does not preserve in-flight DML, and that TAF and FAN are not supported with E-Business Suite.
- When a request is running and the CM goes down, the request keeps a status of Running/Normal but no longer has an associated process ID. When ICM starts on the other node, it checks all Running/Normal requests and verifies their OS process IDs; if it does not find the process, it resubmits the request.
- This behavior is normal even in a non-PCP environment.
- The Internal Concurrent Manager (ICM) will only restart a request if the following conditions are met:
The ICM got the manager's database lock for the manager that was running the request
The phase of the request is "running" (phase_code = 'R')
The program for this request is set to "restart on failure"
All of the above requirements have been met AND at least one of the following:
a. The ICM is just starting up (i.e. it has just spawned on a given node and is going through initialization code before the main loop)
b. The node of the concurrent manager for which we got the lock is down
c. The database instance (TWO_TASK) defined for the node of that concurrent manager is down (this is not applicable if that node uses a load-balancing ("balance") TWO_TASK alias)
10) How PCP identifies when node goes down?
- There are two types of failures that PCP recognizes.
a.) Is the node pingable ?
An operating system ping is issued against the machine name; the result is either timeout or available.
b.) Is the database available?
A query on V$THREAD and V$INSTANCE for an open or closed value (an illustrative query is sketched below).
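The kind of check involved can be illustrated with the following queries; the ICM performs its own internal checks, so this is only a sketch of the idea:
SQL> select thread#, status, enabled from v$thread;
SQL> select instance_name, status from v$instance;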
- When either of the two failures above occurs, the following example illustrates the failover and failback of managers.
Primary node = HOST1 - Managers assigned to the primary node are ICM (FNDLIBR-cpmgr) and FNDCRM
Secondary node = HOST2 - Manager assigned to the secondary node is the Standard Manager (FNDLIBR)
When HOST1 becomes unavailable, both ICM and FNDCRM are migrated over to HOST2.
This is viewable from Administer Concurrent Manager form in System Administrator Responsibility.
The $APPLCSF/log/<SID>.mgr logfile will also reflect that HOST1 is being added to the unavailable list.
On HOST2, after pmon cycle, FNDICM, FNDCRM, and FNDLIBR are now migrated and running.
(Note: FNDIMON and FNDSM run independently on each concurrent processing node. FNDSM
is not a persistent process, and FNDIMON is a persistent process local to each node)
Once HOST1 becomes available, FNDICM and FNDCRM are migrated back to the original primary
node for successful failback.
In summary, in a successful failover and failback scenario, all managers should fail over to their secondary node, and once the node or instance becomes available again, all managers should fail back to their primary node.
Thursday, October 22, 2015
Oracle DB 10g Clone using RMAN
http://www.dba-oracle.com/t_rman_clone_copy_database.htm
https://oracle-base.com/articles/11g/duplicate-database-using-rman-11gr2
https://www.youtube.com/watch?v=33f366qRlvE
Use RMAN to copy/clone a database
Oracle Tips by Burleson Consulting
January 1, 2015
Question: I want to use RMAN to copy a database to a different server, in a different disk directory. Can I use RMAN to restore a database to another server?
Answer: Cloning the database to a remote host requires two servers, both with the Oracle software installed, and the operating system must be the same on both. On the first server we have the database that will be duplicated to the second host. The first server will be referred to as the "production server" and the database on it as the "production database" (SID db1). The second server will be referred to as the "auxiliary server" and the database that will be created on it as the "auxiliary database" (SID aux). The server name of the production database is PROD_SERVER and the server name of the auxiliary database is DUP_SERVER.
Also see: RMAN cloning on a local host
Before cloning the production database, we should configure some initial actions as follows:
Step 1 - Create a password file for the auxiliary instance
It's possible to create the password file on the production server or directly on the auxiliary server. The target and auxiliary password files do not need to use the same password; in fact, reusing the same password is a security risk. (Note, however, that when creating a standby database through RMAN the password file must be the same. Since we are not creating a standby database here, a new password file can be created.) Create the password file using the ORAPWD utility as follows:
$ orapwd file=$ORACLE_HOME/dbs/orapwaux password=test entries=3;
Step 2 - Configure the auxiliary instance listener and network configuration
To clone the production database to the auxiliary server, connectivity must be configured between the two servers, i.e. you must be able to access the production database from the auxiliary server and vice versa. Thus, you need to make changes to the tnsnames.ora and listener.ora files. If you don't have these files in the $ORACLE_HOME/network/admin directory, create them using the netca utility. First of all, change the LISTENER.ORA file on the auxiliary server and add the following lines:
LISTENER.ORA # auxiliary instance
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = aux)
      (ORACLE_HOME = /u01/oracle/product/10.2.0/db_1)
    )
  )
We changed the listener.ora file because the auxiliary server must listen for connections coming from the production database: while duplicating, we connect to the auxiliary instance from the production server, so the auxiliary listener must accept those connections. Without stopping and starting the listener, apply the new listener configuration by running the following command:
$ lsnrctl reload
Now, edit the TNSNAMES.ORA file at the production database and add the entry about auxiliary database to it as follows:
AUX =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS=(PROTOCOL=TCP)(HOST= DUP_SERVER)(PORT=1521))
)
(CONNECT_DATA=
(SID=aux)
)
)
After changing the file, make sure you can connect to auxiliary instance by using tnsping utility
$ tnsping aux
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS=(PROTOCOL=TCP)(HOST= DUP_SERVER)(PORT=1521))) (CONNECT_DATA= (SID=aux)))
OK (40 msec)
This means that we have access to the auxiliary server from the production server and have successfully configured the connection between the two servers.
Step 3 - Create a parameter file for the auxiliary database
We'll use this parameter file to start auxiliary instance in NOMOUNT mode. You can create pfile from spfile of production database and copy it to the auxiliary server. You can also create a parameter file which consists only of the following few parameters:
DB_NAME: As we create the duplicate database on a remote host, we can keep the database name the same as the production database name. However, this parameter must be different when we duplicate the database on the local host, because it's impossible to have two databases with the same name on one host.
CONTROL_FILES: This parameter defines the name of Control Files which will be restored to auxiliary instance
DB_BLOCK_SIZE: This parameter must be the same as in the target instance
COMPATIBLE: If production database uses specific compatible parameter, add this parameter to auxiliary parameter file with the same value
SGA_TARGET: This parameter specifies the total size of all SGA components. To automatically size these components, make it the same as in the production database. However, it is not mandatory that this be the same size as in the production database; the cloning may be done to create a test/development environment on a smaller server with less RAM.
You need also to create the following directories on the auxiliary server and add them to the parameter file:
adump
bdump
udump
cdump
Flash Recovery Area
$ cd $ORACLE_HOME
$ mkdir admin
$ cd admin/
$ mkdir aux
$ cd aux/
$ mkdir adump bdump cdump udump
$ cd ..
$ mkdir flash_recovery_area
After the changes are made, the parameter file will look as follows:
DB_NAME=aux
CONTROL_FILES=('/u02/oradata/aux/control01.ctl','/u02/oradata/aux/control02.ctl','/u02/oradata/aux/control03.ctl')
DB_BLOCK_SIZE=8192
COMPATIBLE='10.2.0.1.0'
SGA_TARGET=285212672
audit_file_dest='/u01/oracle/product/10.2.0/db_1/admin/aux/adump'
background_dump_dest='/u01/oracle/product/10.2.0/db_1/admin/aux/bdump'
core_dump_dest='/u01/oracle/product/10.2.0/db_1/admin/aux/cdump'
user_dump_dest='/u01/oracle/product/10.2.0/db_1/admin/aux/udump'
db_recovery_file_dest='/u01/oracle/product/10.2.0/db_1/flash_recovery_area'
db_recovery_file_dest_size=2147483648
Now, to make Oracle use this parameter file automatically each time, create an spfile from it as follows:
$ export ORACLE_SID=aux
$ export ORACLE_HOME=/u01/oracle/product/10.2.0/db_1/
$ sqlplus "/ as sysdba"
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Sep 29 00:48:57 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL> CREATE SPFILE FROM PFILE='/u01/oracle/product/10.2.0/db_1/dbs/pfile.ora';
File created
Step 4 - Startup auxiliary instance in NOMOUNT mode
SQL> STARTUP NOMOUNT
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
Step 5 - Backup the production database and copy all backup files to auxiliary server
As the auxiliary instance is ready, you need to backup the production database and copy backup files to auxiliary server. As the backup will be recovered on the auxiliary server, the production database should be in ARCHIVELOG mode
Connect to target instance and take the compressed backup of the database using nocatalog. The backup of the database can be taken by connecting to the RMAN catalog database as well
$ export ORACLE_SID=db1
$ rman target /
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;
Step 6 - Create datafile locations on an auxiliary server
If you want to create the datafiles in the same directory as in the production database, create the folder of datafiles on the auxiliary server
# mkdir u02
# cd u02/
# mkdir oradata
# cd oradata/
# mkdir db1
# cd db1/
# chown -R oracle:oinstall /u02/
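The same structure can be created in a single step; this is just an equivalent shorthand for the commands above:
# mkdir -p /u02/oradata/db1
# chown -R oracle:oinstall /u02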
Step 7 - Copy the backup of the production database to the same directory residing on the auxiliary database server
As we configured the database to use Flash Recovery Area, all RMAN backups will be stored under the flash recovery area folder. So the same folder should be created at the auxiliary server and all backup files should be copied to that folder. When you issue the DUPLICATE DATABASE command, RMAN searches the auxiliary server for the backup files which should reside at the same folder where they were at the production server.
Identify the location of RMAN backups on the production server and create the same folders on the auxiliary server. Then copy all backup files from the production server to the auxiliary server
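For example, if the production flash recovery area lives under $ORACLE_HOME/flash_recovery_area as in the auxiliary pfile above (the paths and the DB1 directory name are assumptions; adjust to your environment), the copy might look like:
$ scp -r /u01/oracle/product/10.2.0/db_1/flash_recovery_area/DB1 oracle@DUP_SERVER:/u01/oracle/product/10.2.0/db_1/flash_recovery_area/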
Step 8 - Duplicate the Database
After all above steps are done correctly, we're ready to create the clone database. For this, we connect to both production and auxiliary database from RMAN and issue the command "DUPLICATE TARGET DATABASE TO aux".
$ rman target sys/my_pass auxiliary sys/test@aux
connected to target database: DB1 (DBID=1298725119)
connected to auxiliary database: AUX (not mounted)
RMAN> DUPLICATE TARGET DATABASE TO aux NOFILENAMECHECK;
Starting Duplicate Db at 06-DEC-09
=================
database opened
Finished Duplicate Db at 06-DEC-09
After performing all steps, we have successfully duplicated the production database to the remote host (auxiliary server). RMAN performed the following steps automatically to duplicate the database :
Allocates automatic auxiliary channel
Creates a controlfile for the clone database
Performs an incomplete recovery of the clone database using incremental backups and archived redo log files up to the last backed up archived redo log file.
Shuts down the database and opens it with the RESETLOGS option.
Generates a new unique DBID for the clone database
But remember, all file locations were the same as in the production database. However, if we need to change the clone database's directory structure, we can do it in different ways (one approach is sketched below).
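One common way, sketched here under the assumption that the production datafiles and redo logs live under /u02/oradata/db1, is to add the conversion parameters to the auxiliary pfile before starting it in NOMOUNT, so that DUPLICATE writes the files to the new location:
db_file_name_convert=('/u02/oradata/db1','/u02/oradata/aux')
log_file_name_convert=('/u02/oradata/db1','/u02/oradata/aux')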
Answer on RMAN cloning by Sorabh Harit: Applies to: Oracle Server - Enterprise Edition - Version: 10.2.0.0.0
The goal is to restore a database using RMAN on a different node, with different backup directory structures and different database directory structures.
You have a database backed up on NODE 1.
You need to restore the database on NODE 2.
The directory structure is different on NODE 2.
You need to put the backups in a new directory structure on NODE 2, different from where they were on NODE 1.
You need to restore the database files into a new directory structure on NODE 2, different from where they were on NODE 1.
Solution
Below is the procedure with an example of using RMAN to copy a database to another directory:
Connect to the target database using rman and backup the database ---> NODE 1
$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Tue Feb 13 00:29:33 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database: ORA10G (DBID=3932056136)
RMAN> backup database plus archivelog;
Starting backup at 13-FEB-07
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=143 recid=109 stamp=614392105
channel ORA_DISK_1: starting piece 1 at 13-FEB-07
channel ORA_DISK_1: finished piece 1 at 13-FEB-07
piece handle=/u01/oracle/product/oradata/ora10g/fra/ORA10G/backupset/2007_02_13/o1_mf_annnn_TAG20070213T002825_2x21kbds_.bkp tag=TAG20070213T002825 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 13-FEB-07
Starting backup at 13-FEB-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00003 name=/u01/oracle/product/oradata/ora10g/data/sysaux01.dbf
input datafile fno=00001 name=/u01/oracle/product/oradata/ora10g/data/system01.dbf
input datafile fno=00002 name=/u01/oracle/product/oradata/ora10g/data/undotbs01.dbf
input datafile fno=00004 name=/u01/oracle/product/oradata/ora10g/data/users01.dbf
input datafile fno=00005 name=/home/oracle/1.dbf
input datafile fno=00006 name=/u01/oracle/product/oradata/ora10g/data/sysaux02.dbf
input datafile fno=00007 name=/u01/oracle/product/oradata/ora10g/data/undotbs02.dbf
channel ORA_DISK_1: starting piece 1 at 13-FEB-07
channel ORA_DISK_1: finished piece 1 at 13-FEB-07
piece handle=/u01/oracle/product/oradata/ora10g/fra/ORA10G/backupset/2007_02_13/o1_mf_nnndf_TAG20070213T002827_2x21kd12_.bkp tag=TAG20070213T002827 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
Finished backup at 13-FEB-07
Starting backup at 13-FEB-07
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=144 recid=110 stamp=614392165
channel ORA_DISK_1: starting piece 1 at 13-FEB-07
channel ORA_DISK_1: finished piece 1 at 13-FEB-07
piece handle=/u01/oracle/product/oradata/ora10g/fra/ORA10G/backupset/2007_02_13/o1_mf_annnn_TAG20070213T002925_2x21m6ty_.bkp tag=TAG20070213T002925 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
Finished backup at 13-FEB-07
Starting Control File and SPFILE Autobackup at 13-FEB-07
piece handle=/u01/oracle/product/ora10g/dbs/c-3932056136-20070213-02 comment=NONE
Finished Control File and SPFILE Autobackup at 13-FEB-07
RMAN> exit
Move the following files to the NODE 2 :
+ The database backup pieces
+ Controlfile backup piece
+ The parameter file i.e init.ora file
Edit the PFILE on NODE 2 to change the environment-specific parameters, such as:
user_dump_dest =
background_dump_dest =
control_files =
Once the PFILE is suitably modified, invoke RMAN on NODE 2 after setting the Oracle environment variables, and start the database in nomount mode:
[oracle@test-br test]$ export ORACLE_HOME=/u01/oracle/product/ora10g
[oracle@test-br test]$ export ORACLE_SID=ora10g
[oracle@test-br test]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@test-br test]$ rman target /
Recovery Manager: Release 10.2.0.1.0 - Production on Tue Feb 13 00:36:55 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
connected to target database (not started)
RMAN> startup nomount
Oracle instance started
Total System Global Area 205520896 bytes
Fixed Size 1218508 bytes
Variable Size 75499572 bytes
Database Buffers 121634816 bytes
Redo Buffers 7168000 bytes
Restore the controlfile from the backup piece.
RMAN> restore controlfile from '/u01/oracle/product/ora10g/dbs/c-3932056136-20070213-02';
Starting restore at 13-FEB-07
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=155 devtype=DISK
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete,
elapsed time: 00:00:02
output filename=/u01/oracle/product/oradata/ora10g/cntrl/control01.ctl
Finished restore at 13-FEB-07
Mount the database.
RMAN> alter database mount ;
Now catalog the backup pieces that were shipped from NODE 1.
RMAN> catalog backuppiece '/home/oracle/test/backup/o1_mf_annnn_TAG20070213T002925_2x21m6ty_.bkp';
Starting implicit crosscheck backup at 13-FEB-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=155 devtype=DISK
Crosschecked 3 objects
Finished implicit crosscheck backup at 13-FEB-07
Starting implicit crosscheck copy at 13-FEB-07
using channel ORA_DISK_1
Finished implicit crosscheck copy at 13-FEB-07
searching for all files in the recovery area
cataloging files...
no files cataloged
cataloged backuppiece
backup piece handle=/home/oracle/test/backup/o1_mf_annnn_TAG20070213T002925_2x21m6ty_.bkp recid=41 stamp=614393265
RMAN> catalog backuppiece '/home/oracle/test/backup/o1_mf_annnn_TAG20070213T002825_2x21kbds_.bkp';
cataloged backuppiece
backup piece handle=/home/oracle/test/backup/o1_mf_annnn_TAG20070213T002825_2x21kbds_.bkp recid=42 stamp=614393292
RMAN> catalog backuppiece '/home/oracle/test/backup/o1_mf_nnndf_TAG20070213T002827_2x21kd12_.bkp';
cataloged backuppiece
backup piece handle=/home/oracle/test/backup/o1_mf_nnndf_TAG20070213T002827_2x21kd12_.bkp recid=43 stamp=614393310
Find the last sequence available in the archivelog backups using the following command:
RMAN> list backup of archivelog all;
Rename the redo log files so that they can be created in their new locations when the database is opened with RESETLOGS.
SQL> alter database rename file '/u01/oracle/product/oradata/ora10g/log/redo01.log' to '/home/oracle/test/log/redo01.log';
(repeat for each remaining redo log file)
Now restore the datafiles to new locations and recover.
RMAN> run {
set until sequence <sequence#> thread 1;
set newname for datafile 1 to '/home/oracle/test/data/sys01.dbf';
set newname for datafile 2 to '/home/oracle/test/data/undotbs01.dbf';
set newname for datafile 3 to '/home/oracle/test/data/sysaux01.dbf';
set newname for datafile 4 to '/home/oracle/test/data/users01.dbf';
set newname for datafile 5 to '/home/oracle/test/data/1.dbf';
set newname for datafile 6 to '/home/oracle/test/data/sysaux02.dbf';
set newname for datafile 7 to '/home/oracle/test/data/undotbs02.dbf';
restore database;
switch datafile all;
recover database;
alter database open resetlogs;
}
Here, Rampant TechPress author Kamran Agayev describes how to copy or clone a database using RMAN.
Steps to clone a database using RMAN:
1: Create a password file on the destination server
2: Establish connectivity between the target and destination server (tnsnames.ora, sqlnet.ora)
3: Create the directories for the database files
4: Take the RMAN backup from the target server and copy it to the destination server.
5: Startup the destination database in nomount mode
6: Run the RMAN duplicate database command
Wednesday, October 21, 2015
Steps To Shutdown(stop)/Startup(start) CRS, OHAS, ASM, RDBMS & ACFS Services on a RAC Cluster 11.2 & 12.1 Configuration (Doc ID 1355977.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 11.2.0.1 to 12.1.0.2 [Release 11.2 to 12.1]
Information in this document applies to any platform.
***Checked for relevance on 21-Jan-2014***
GOAL
The present document provides in detail the steps to shutdown/startup The CRS, OHAS, ASM & RDBMS Services on a RAC Cluster 11.2 & 12.1 configuration:
SOLUTION
1) Connect to node #1, then check whether the CRS/OHAS services are enabled to autostart, as follows (repeat this step on each node):
# $GRID_ORACLE_HOME/bin/crsctl config crs
2) If not, then you can enable it as follows (repeat this step on each node):
# $GRID_ORACLE_HOME/bin/crsctl enable crs
3) Shut down the services on each node as follows:
# $GRID_ORACLE_HOME/bin/crsctl stop crs
4) Verify the services were/are down (repeat this step on each node):
# $GRID_ORACLE_HOME/bin/crsctl status resource -t
5) Then start the services on node #1 as follows (only on the first node):
# $GRID_ORACLE_HOME/bin/crsctl start crs
6) Wait 1 minute, then validate the services started & diskgroups were mounted (only on first node):
# $GRID_ORACLE_HOME/bin/crsctl status resource -t
7) Then start the services on node #2 as follows:
# $GRID_ORACLE_HOME/bin/crsctl start crs
8) Wait 1 minute, then validate the services started & diskgroups were mounted on node #2:
# $GRID_ORACLE_HOME/bin/crsctl status resource -t
9) If there are more nodes in the RAC, then repeat the same steps (7-8).
10) Then check the status of the clusterware globally as follows:
# crsctl check cluster -all
Sample output:
[root@asmgrid1 ~]# crsctl check cluster -all
**************************************************************
asmgrid1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
asmgrid2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Note 1: Alternatively, you can stop and start the cluster globally as follows:
Stop:
[root@asmgrid1 ~]# crsctl stop cluster -all [-f]
Start:
[root@asmgrid1 ~]# crsctl start cluster -all
Note 2: For RAC ACFS configurations please check the next information:
http://docs.oracle.com/cd/E11882_01/readmes.112/e22488/toc.htm#CACCIEID
2.19.3 Oracle ACFS and Oracle Clusterware Stack Shut Down
When attempting to shut down Oracle Clusterware, the Oracle Clusterware stack may report that it did not successfully stop on selected nodes (reference Bug 8651848). If the database home is on Oracle ACFS, then you may receive the following error:
CRS-5014: Agent orarootagent.bin timed out starting process acfsmount for action
This error can be ignored.
Alternatively, the Oracle Clusterware stack may report that it did not successfully stop on selected nodes due to the inability to shut down the Oracle ACFS resources. If this occurs, take the following steps:
Ensure that all file system activity to Oracle ACFS mount points is quiesced by shutting down programs or processes and retry the shutdown.
If the ora.registry.acfs resource check function times out, or the resource exhibits a state of UNKNOWN or INTERMEDIATE, then this may indicate an inability to access the Oracle Cluster Registry (OCR). The most common cause of this is a network failure. The commands "acfsutil registry" and "ocrcheck" may give you a better indicator of the specific error. Clear this error and attempt to stop Oracle Clusterware again.
Additional Steps Required When ACFS filesystem are involved:
Note 3: If ACFS filesystems are associated with this ASM cluster configuration, then the ACFS filesystems need to be dismounted first. If you try to dismount an ACFS filesystem (like any other regular Unix/Linux filesystem) while it is being used or accessed, you will get a "Resource Busy" error, and therefore the "crsctl stop crs" statement will fail as expected.
You will need to perform the following additional steps instead:
A) Dismount all the ACFS filesystems running in the cluster as follows (as root user):
# srvctl stop filesystem -d volume_device_name [-n node_name] [-f]
Where:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-d :The Oracle ACFS volume device name
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-n : The name of a node
If you do not specify this option, then the utility stops the volume resource on all active nodes in the cluster.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-f : This option stops the file system and also stops any databases or other resources that depend on this file system.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Example:
[root@asmgrid1 ~]# df -k | grep asm
/dev/asm/volnew1-347 14680064 159700 14520364 2% /u01/app/grid/acfsmounts/acfsdgnew_volnew1
/dev/asm/vol_oh1-115 35651584 4590104 31061480 13% /u04acfs
# srvctl stop filesystem -d /dev/asm/volnew1-347 -f
# srvctl stop filesystem -d /dev/asm/vol_oh1-115 -f
B) Make sure the ACFS filesystem(s) was/were dismounted on all the nodes:
# df -k | grep asm
C) Then perform the steps described in the following document:
Steps To Shutdown/Startup The CRS, OHAS, ASM & RDBMS Services on RAC 11.2 Configuration (Doc ID 1355977.1)
Note: To manually remount the ACFS filesystem back (CRS managed) the next statement needs to be executed as root user:
# srvctl start filesystem -d volume_device_name [-n node_name]
=============================================================
https://www.youtube.com/watch?v=0iNWl4r8hmo
===============
http://www.dbas-oracle.com/2013/07/Shutdown-and-Start-Oracle-Real-Application-Clusters-RAC-Database.html
Shutting down and starting an Oracle Real Application Clusters database is an activity a DBA performs when applying a CRS patch, doing scheduled maintenance, or adding more products to the Real Application Clusters environment. Since RAC involves many components, such as database instances, ASM instances, node applications and CRS services, starting or shutting down an Oracle Real Application Clusters database involves a series of steps.
An important factor for the DBA to consider while starting or shutting down an Oracle RAC database is that the steps must be performed in the specific suggested order, otherwise the database can become inconsistent. In this post, I will explain each step one by one, and after performing each step we will also cross-check the result. Let's start.
How to Shutdown Oracle Real Application Clusters Database ?
Here, we will start by stopping resources, from Enterprise Manager Grid Control down to the Oracle Clusterware (CRS) processes, and will end by stopping the CRS services. I am using a two-node RAC database as the example for all operations. The steps are as follows:
1. Shutdown Oracle Home process accessing database.
2. Shutdown RAC Database Instances on all nodes.
3. Shutdown All ASM instances from all nodes.
4. Shutdown Node applications running on nodes.
5. Shut down the Oracle Cluster ware or CRS.
1. Shutdown Oracle Home process accessing database: There can be Oracle and non-Oracle applications that access the database, so the first step is to stop all of them; the DBA should inform the application owners so they can stop all applications accessing the database. The DBA should also stop Oracle components such as Enterprise Manager Grid Control and the database listener, which access the database for monitoring and database connections.
[oracle@database ~]$ emctl stop dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.2.0
Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
https://database.example.com:5500/em/console/aboutApplication
Stopping Oracle Enterprise Manager 11g Database Control ...
... Stopped.
[grid@node1 bin]$ srvctl stop listener -n node1
[grid@node1 bin]$ srvctl status listener -n node1
Listener LISTENER is enabled on node(s): node1
Listener LISTENER is not running on node(s): node1
2. Shutdown RAC Database Instances on all nodes: Suppose the DBA has a two-node RAC database; he has to stop the instances on all DB nodes. Here I am taking a two-node RAC as the example: first I check on which servers the database is running, then stop it and verify.
Syntax: srvctl stop database -d {databasename}
[oracle@node2 ~]$ srvctl status database -d oradb
Instance oradb1 is running on node node1
Instance oradb2 is running on node node2
[oracle@node2 ~]$ srvctl stop database -d oradb
[oracle@node2 ~]$ srvctl status database -d oradb
Instance oradb1 is not running on node node1
Instance oradb2 is not running on node node2
We only need to execute one command, from any one of the servers hosting the database, and it will stop all database instances on all servers. If you have more than one database configured on the nodes, the database administrator has to execute this command for each database.
3. Shutdown All ASM instances from all nodes: Next, the DBA has to shut down the ASM instances used to access the database. Enter the following command, where node is the name of the node where the ASM instance is running.
Syntax: srvctl stop asm -n {node}
[grid@node2 oracle]# srvctl stop asm -n node1 -f
[grid@node2 oracle]# srvctl stop asm -n node2 -f
[grid@node2 oracle]# srvctl status asm -n node1
ASM is not running on node1
[grid@node2 oracle]# srvctl status asm -n node2
ASM is not running on node2
Sometimes the database administrator faces issues stopping an ASM instance; in that case, use the "-f" option to forcefully shut down the ASM instances.
4. Shutdown Node applications running on nodes: To stop the node applications running on a node, enter the following command, where node is the name of the node where the applications are running:
[grid@node2 oracle]# srvctl stop nodeapps -n node1 -f
[grid@node2 oracle]# srvctl status nodeapps -n node1
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
Network is enabled
Network is running on node: node1
GSD is disabled
GSD is not running on node: node1
ONS is enabled
ONS daemon is running on node: node1
Repeat the same command for all nodes, one by one. If you face any issue stopping the node applications, use the "-f" (force) option.
5. Shut down the Oracle Clusterware or CRS: This command stops all CRS-related processes. It is the only command which needs to be executed by the "root" user, and it must be run on all database nodes.
[root@node1 bin]# crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
*************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@node1 bin]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.crsd' on 'node1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN3.lsnr' on 'node1'
CRS-2673: Attempting to stop 'ora.node2.vip' on 'node1'
-------------------------------------------------
-------------------------------------------------
-------------------------------------------------
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node1'
CRS-2677: Stop of 'ora.gipcd' on 'node1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node1'
CRS-2677: Stop of 'ora.gpnpd' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node1 bin]# crsctl check cluster -all
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Check failed, or completed with errors.
The DBA can see that RAC is now completely down, and can move ahead with patching or maintenance activity. Next, let's see how to start an Oracle RAC cluster database.
How to Start Oracle Real Application Clusters Database ?
When starting a Real Application Clusters database, we also follow a set sequence; the order is just the opposite of shutting down the Real Application Clusters database.
1. Start Oracle Clusterware or CRS.
2. Start Node applications running on nodes.
3. Start All ASM instances from all nodes.
4. Start RAC Database Instances on all nodes.
5. Start Oracle Home process accessing database.
1. Start Oracle Clusterware or CRS: CRS starts automatically when you start or restart the server, but here the DBA has manually shut down CRS, so he has to start it manually. This is the only command which needs to be executed by the "root" user, and the database administrator should execute it on all nodes.
[root@node1 bin]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started
[root@node2 bin]# crsctl check cluster -all
**************************************************************
node1:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Here, the DBA may see "CRS-4639: Could not contact Oracle High Availability Services" or "CRS-4535: Cannot communicate with Cluster Ready Services" messages. Wait 5 minutes and then check again with the "crsctl check cluster -all" command; this time the database administrator should get "CRS-4537: Cluster Ready Services is online". If the issue persists, the DBA can start the ora.crsd resource to resolve it. Below is the command:
[root@node1 bin]# crsctl start res ora.crsd -init
[root@node1 bin]# crsctl check cluster -all
**************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Now, Cluster is up and running. DBA can also use "ps -ef | grep d.bin" to check cluster status.
2. Start Node applications running on nodes: To start the node applications on a node, enter the following command, where node is the name of the node where the applications should run:
[grid@node1 bin]$ srvctl start nodeapps -n node1
[grid@node1 bin]$ srvctl status nodeapps -n node1
VIP node1-vip is enabled
VIP node1-vip is running on node: node1
Network is enabled
Network is running on node: node1
GSD is disabled
GSD is not running on node: node1
ONS is enabled
ONS daemon is running on node: node1
The DBA has to execute this command for each node of the Real Application Clusters database.
3. Start All ASM instances from all nodes: Next, the DBA has to start all ASM instances used to access the database. Enter the following command, where node is the name of the node where the ASM instance has to start.
[grid@node1 bin]$ srvctl start asm -n node1
[grid@node1 bin]$ srvctl status asm -n node1
ASM is running on node1
DBA has to start ASM instance on all database nodes.
4. Start RAC Database Instances on all nodes: Now we will start the database instances on the database nodes to access the data.
[grid@node1 bin]$ srvctl start database -d oradb
[grid@node1 bin]$ srvctl status database -d oradb
Instance oradb1 is running on node node1
Instance oradb2 is running on node node2
Now database is up and running on both DB nodes.
5. Start Oracle Home process accessing database: Let's start the listener and OEM to access the database and allow users to connect.
[grid@node1 bin]$ srvctl start listener -n node1
[grid@node1 bin]$ srvctl status listener -n node1
Listener LISTENER is enabled on node(s): node1
Listener LISTENER is running on node(s): node1
[oracle@database ~]$ emctl start dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.2.0
Copyright (c) 1996, 2010 Oracle Corporation. All rights reserved.
https://database.example.com:5500/em/console/aboutApplication
Starting Oracle Enterprise Manager 11g Database Control ..........................
. started.
------------------------------------------------------------------
Logs are generated in directory /etc/oracle/oracle/database.example.com_orcl/sysman/log
Now the database is ready for users. Since shutting down and starting up a RAC database is a series of steps, I tried to make a one-place repository for shutting down and starting an Oracle Real Application Clusters database. If you want to shut down or restart an Oracle RAC database server, see How to Shutdown or Restart Oracle RAC Database Server in 6 Steps.
Encountered RMAN-03002 and RMAN-06091 when Deleting Obsolete Backups (Doc ID 567555.1)
APPLIES TO:
Oracle Database - Enterprise Edition - Version 9.2.0.1 to 11.2.0.2 [Release 9.2 to 11.2]
Information in this document applies to any platform.
***Checked for relevance on 22-Sep-2014***
SYMPTOMS
When attempting to delete obsolete backups from RMAN using the following command:
RMAN> delete obsolete;
the following error occurs:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of delete command at 05/07/2008 22:04:21
RMAN-06091: no channel allocated for maintenance (of an appropriate type)
.
CAUSE
A tape channel had not been allocated when attempting to delete obsolete backups on tape.
Use the following command to verify that there are backup sets on tape:
RMAN> list backup;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Incr 0 113.25M SBT_TAPE 00:08:35 01-MAR-08
BP Key: 1 Status: AVAILABLE Compressed: NO Tag: HOT_DB_BK_LEVEL0
Handle: bk_4_1_648250152 Media:
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
5 0 Incr 1342657 01-MAR-08 /u01/oracle/app/oracle/oradata/test/undotbs2
75 0 Incr 1342657 01-MAR-08 /u01/oracle/app/oracle/oradata/test/USER_DAT_08_vg3_002
91 0 Incr 1342657 01-MAR-08 /u01/oracle/app/oracle/oradata/test/USER_DAT_12_vg3_002
==>The Device Type is SBT_TAPE.
SOLUTION
To implement the solution, please execute the following steps:
Please run the following commands to delete obsolete backup sets on both disk and tape:
RMAN> allocate channel for maintenance type disk;
RMAN> allocate channel for maintenance device type 'sbt_tape' PARMS '...';
==> Please contact your MML (Media Management Layer) vendor to get the actual tape parameters and replace the '...'.
RMAN> delete obsolete;
If you want to delete obsolete backup sets on disk, you can use the following commands:
RMAN> allocate channel for maintenance type disk;
RMAN> delete obsolete device type disk;
Note: If the above solution does not fix your issue, contact Oracle Support for known bugs.
Thursday, October 15, 2015
ssh passwordless entry
ssh-keygen -t rsa
ssh oracle@10.100.208.67 mkdir -p .ssh
cat .ssh/id_rsa.pub | ssh oracle@10.100.208.67 'cat >> .ssh/authorized_keys'
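If the login still prompts for a password, a common cause is permissions on the remote .ssh directory; a typical fix, using the same host as above, is:
ssh oracle@10.100.208.67 'chmod 700 .ssh; chmod 600 .ssh/authorized_keys'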
Data Guard NOT supported in Oracle Standard Edition (Doc ID 305360.1)
Data Guard and Oracle Standard Edition (Doc ID 305360.1)
PURPOSE
-------
This note outlines the usage of standby features with Standard Edition
releases of the Oracle Database software from Oracle9i Release 1 and
up.
SCOPE & APPLICATION
-------------------
This note is intended for DBAs and Support Engineers.
Data Guard and Oracle Standard Edition
----------------------------------------
Oracle Data Guard is available only as a feature of Oracle Database
Enterprise Edition. It is not available with Standard Edition.
The following limitations exist using standby databases under Standard
Edition:
1. Remote archival can not be configured. Setting the parameter
LOG_ARCHIVE_DEST_n to any value will fail.
SQL> alter system set log_archive_dest_1='SERVICE=stby';
*
ERROR at line 1:
ORA-00439: Feature not enabled: Managed standby
Note: LOG_ARCHIVE_DEST can be used for local only archival.
2. While you can specify the MANAGED keyword when starting standby
recovery:
SQL> recover managed standby database disconnect;
The managed recovery process (MRP) will not progress once started.
It relies on the standby controlfile being updated with the latest
available archive log during remote archival from the primary.
The archives can be brought over manually and registered into the
controlfile but the register attempt will fail instead with the
following errors:
SQL> alter database register logfile '/stby_arch/arc0055.dbf';
*
ERROR at line 1:
ORA-00439: Feature not enabled: Managed Standby
Manual recovery with 'RECOVER STANDBY DATABASE' SQL will work. The
archive log location will be determined based on the setting of the
LOG_ARCHIVE_FORMAT and LOG_ARCHIVE_DEST parameters on the standby
instance.
3. Switchover will not be possible.
4. Failover must be done with 'ALTER DATABASE ACTIVATE STANDBY DATABASE'
5. The only standby type possible is a physical standby, not logical.
6. The standby environment cannot be managed using the Data Guard Broker
or its interfaces, DGMGRL and the Data Guard GUI from Grid Control.
7. The STANDBY_FILE_MANAGEMENT parameter will not work.
8. RMAN 'DUPLICATE FOR STANDBY' cannot be used.
Overall it is possible to create and maintain a physical standby with
Standard Edition releases. However, it is left up to the user to handle
the archive log transfer to the standby site and the apply of the logs
using the basic 'RECOVER STANDBY DATABASE' SQL syntax. Data loss is
possible if the primary instance, node, or site goes down, and features
such as rolling upgrades cannot be used. A sketch of the manual workflow
is shown below.
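A minimal sketch of that manual workflow, assuming the archived logs are produced under /arch on the primary and shipped to /stby_arch on the standby (the paths, file name, and host name are illustrative only):
On the primary:
SQL> alter system archive log current;
$ scp /arch/1_123_654321.arc standby_host:/stby_arch/
On the standby:
SQL> recover standby database;
(answer AUTO, or supply the archive log file names, when SQL*Plus prompts for them)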
Oracle 10g Data Guard implementation steps
http://www.orafaq.com/node/2030
https://pavandba.files.wordpress.com/2009/11/dataguardphysicalstandbystep.pdf
http://gjilevski.com/2010/03/05/creating-physical-standby-in-oracle-10g/
http://www.idevelopment.info/data/Oracle/DBA_tips/Data_Guard/DG_40.shtml
http://oracleinstance.blogspot.qa/2010/01/step-by-step-document-for-creating.html
http://www.oracle-class.com/?p=276
10g Data Guard, Physical Standby Creation, step by step
Alejandro Vargas
Principal Support Consultant
Oracle Advanced Support Services
Creating a Data Guard Physical Standby environment, General Review
The Environment
Implementation notes
Step by Step Implementation of a Physical Standby Environment
Primary Database Steps
Primary Database General View
Enable Forced Logging
Create a Password File
Configure a Standby Redo Log
Enable Archiving
Set Primary Database Initialization Parameters
Standby Database Steps
Create a Control File for the Standby Database
Backup the Primary Database and transfer a copy to the Standby node
Prepare an Initialization Parameter File for the Standby Database
Configure the listener and tnsnames to support the database on both nodes
Set Up the Environment to Support the Standby Database on the standby node
Start the Primary Database
Verify the Physical Standby Database Is Performing Properly
Reference
Creating a Data Guard Physical Standby environment, General Review
Manually setting up a Physical Standby database is a simple task when all prerequisites and setup steps are carefully met and executed.
In this example I used two hosts that host a RAC database, so all RAC pre-installation prerequisites were already in place and no additional configuration was
necessary to implement Data Guard Physical Standby manually. Note that with Enterprise Manager Grid Control, a Data Guard Physical Standby
can be implemented easily from the Grid Control console. Still, this exercise provides a degree of familiarity with Data Guard.
The Environment
2 Linux servers, Oracle Distribution 2.6.9-55 EL i686 i386 GNU/Linux
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
ssh is configured for user oracle on both nodes
Oracle Home is on identical path on both nodes
Implementation notes:
Once you have your primary database up and running these are the steps to follow:
1. Enable Forced Logging
2. Create a Password File
3. Configure a Standby Redo Log
4. Enable Archiving
5. Set Primary Database Initialization Parameters
After completing these steps on the primary, follow these steps to implement the Physical Standby:
1. Create a Control File for the Standby Database
2. Backup the Primary Database and transfer a copy to the Standby node.
3. Prepare an Initialization Parameter File for the Standby Database
4. Configure the listener and tnsnames to support the database on both nodes
5. Set Up the Environment to Support the Standby Database on the standby node.
6. Start the Physical Standby Database
7. Verify the Physical Standby Database Is Performing Properly
Step by Step Implementation of a Physical Standby Environment
Primary Database Steps
Primary Database General View
SQL> select name from v$database;
NAME
---------
WHITEOWL
SQL> select file_name from dba_data_files;
FILE_NAME
--------------------------------------------------------------------------------
/vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_users_310mzml9_.dbf
/vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_sysaux_310mzm34_.dbf
/vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_undotbs1_310mzmk2_.dbf
/vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_system_310mzm27_.dbf
/vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_test2_3117h15v_.dbf
/vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_test3_3117h8nv_.dbf
/vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_test4_3117hk7d_.dbf
7 rows selected.
SQL> show parameters unique
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string whiteowl
Enable Forced Logging
In order to implement a Standby Database we enable 'Forced Logging'.
This option ensures that even if a 'nologging' operation is performed, force logging takes precedence and all operations are logged
into the redo logs.
SQL> ALTER DATABASE FORCE LOGGING;
Database altered.
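To confirm that forced logging is active you can query v$database; a minimal check (it should return YES):
SQL> select force_logging from v$database;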
Create a Password File
A password file must be created on the Primary and copied over to the Standby site. The sys password must be identical on both sites. This is
a key prerequisite in order to be able to ship and apply archived logs from Primary to Standby.
cd $ORACLE_HOME/dbs
[vmractest1] > orapwd file=orapwwhiteowl password=oracle force=y
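As a quick sanity check, you can list the users recorded in the password file; assuming the default password file location, v$pwfile_users should show SYS with the SYSDBA privilege:
SQL> select username, sysdba, sysoper from v$pwfile_users;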
Configure a Standby Redo Log
A Standby Redo log is added to enable Data Guard Maximum Availability and Maximum Protection modes. It is important to configure the
Standby Redo Logs (SRL) with the same size as the online redo logs.
In this example I'm using Oracle Managed Files (OMF), so I don't need to provide the SRL path and file name. If you are not using OMF
you must pass the fully qualified file name, as in the sketch below.
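The equivalent non-OMF form would look like the following, with an illustrative path and file name (the size should match the online redo logs):
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/whiteowl/srl_04.log') SIZE 50M;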
SQL> select * from v$logfile;
GROUP# STATUS TYPE MEMBER IS_
---------- ------- ------- ---------------------------------------------------------------------- ---
3 ONLINE /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_3_310n22jj_.log NO
2 ONLINE /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_2_310n21sx_.log NO
1 ONLINE /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_1_310n215q_.log NO
SQL> select bytes from v$log;
BYTES
----------
52428800
52428800
52428800
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
Database altered.
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
Database altered.
SQL> select * from v$logfile
2 /
GROUP# STATUS TYPE MEMBER IS_
---------- ------- ------- ---------------------------------------------------------------------- ---
3 ONLINE /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_3_310n22jj_.log NO
2 ONLINE /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_2_310n21sx_.log NO
1 ONLINE /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_1_310n215q_.log NO
4 STANDBY /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_4_3gznjc9v_.log NO
5 STANDBY /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_5_3gznnrh0_.log NO
6 STANDBY /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_6_3gznrwd7_.log NO
6 rows selected.
Enable Archiving
On 10g you can enable archivelog mode by mounting the database and executing the ALTER DATABASE ARCHIVELOG command:
SQL> startup mount;
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 75499088 bytes
Database Buffers 205520896 bytes
Redo Buffers 2973696 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /vmasmtest/whiteowl/archdest/arch
Oldest online log sequence 92
Next log sequence to archive 94
Current log sequence 94
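To double-check that the database is now in archivelog mode you can also query v$database; it should report ARCHIVELOG:
SQL> select log_mode from v$database;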
Set Primary Database Initialization Parameters
Data Guard must use an spfile. To configure it, we create and set the standby parameters in a regular pfile and, once it is ready, we
convert it to an spfile.
Several init.ora parameters control the behavior of a Data Guard environment. In this example the Primary database init.ora is configured so
that it can hold both roles, as Primary or Standby.
SQL> create pfile='/oradisk/app01/oracle/product/10gDB/dbs/pfilewhiteowl.ora' from spfile;
File created.
Edit the pfile to add the standby parameters shown below:
db_name='whiteowl'
db_unique_name='whiteowl'
LOG_ARCHIVE_CONFIG='DG_CONFIG=(whiteowl,blackowl)'
control_files='/vmasmtest/od01/WHITEOWL/WHITEOWL/controlfile/o1_mf_310n1xf0_.ctl'
LOG_ARCHIVE_DEST_1=
'LOCATION=/vmasmtest/whiteowl/archdest/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=whiteowl'
LOG_ARCHIVE_DEST_2=
'SERVICE=blackowl LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=blackowl'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
# Standby role parameters --------------------------------------------------------------------
*.fal_server=blackowl
*.fal_client=whiteowl
*.standby_file_management=auto
*.db_file_name_convert='BLACKOWL/BLACKOWL','WHITEOWL/WHITEOWL'
*.log_file_name_convert='/vmasmtest/od01/BLACKOWL/BLACKOWL/','/vmasmtest/od01/WHITEOWL/WHITEOWL/'
# ---------------------------------------------------------------------------------------------
audit_file_dest='/oradisk/app01/oracle/admin/whiteowl/adump'
background_dump_dest='/oradisk/app01/oracle/admin/whiteowl/bdump'
core_dump_dest='/oradisk/app01/oracle/admin/whiteowl/cdump'
user_dump_dest='/oradisk/app01/oracle/admin/whiteowl/udump'
compatible='10.2.0.1.0'
db_block_size=8192
db_create_file_dest='/vmasmtest/od01/WHITEOWL'
db_domain=''
db_file_multiblock_read_count=16
job_queue_processes=10
open_cursors=300
pga_aggregate_target=94371840
processes=150
remote_login_passwordfile='EXCLUSIVE'
sga_target=283115520
undo_management='AUTO'
undo_tablespace='UNDOTBS1'
Once the new parameter file is ready we create the spfile from it:
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount pfile='/oradisk/app01/oracle/product/10gDB/dbs/pfilewhiteowl.ora';
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
SQL> create spfile from pfile='/oradisk/app01/oracle/product/10gDB/dbs/pfilewhiteowl.ora';
File created.
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
Database mounted.
Database opened.
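Once the instance is running on the spfile, it can help to confirm that the standby-related parameters were picked up. A quick sanity check (not a required step):
SQL> show parameter log_archive_config
SQL> show parameter log_archive_dest_2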
Standby Database Steps
Create a Control File for the Standby Database
The standby database will use a control file that is generated on the primary database.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
Database mounted.
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS
'/oradisk/app01/oracle/product/10gDB/dbs/blackowl.ctl';
Database altered.
SQL> ALTER DATABASE OPEN;
Database altered.
Backup the Primary Database and transfer a copy to the Standby node.
Generate a script to copy datafiles
SQL> set pages 50000 lines 120 head off veri off flush off ti off
SQL> spool cpfiles
SQL> select 'scp -p '||file_name||' $v_dest' from dba_data_files;
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_users_310mzml9_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_sysaux_310mzm34_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_undotbs1_310mzmk2_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_system_310mzm27_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_test2_3117h15v_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_test3_3117h8nv_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_test4_3117hk7d_.dbf $v_dest
7 rows selected.
SQL> select 'scp -p '||file_name||' $v_dest' from dba_temp_files;
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_temp_310n2bnj_.tmp $v_dest
SQL> select 'scp -p '||member||' $v_dest' from v$logfile;
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_3_310n22jj_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_2_310n21sx_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_1_310n215q_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_4_3gznjc9v_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_5_3gznnrh0_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_6_3gznrwd7_.log $v_dest
6 rows selected.
SQL> spool off
Shut down the database, edit the script to add the v_dest location, and execute it.
[vmractest1] > mv cpfiles.lst cpfiles
#!/bin/ksh
v_dest=vmractest2:/oradisk/od01/BLACKOWL/datafile/
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_users_310mzml9_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_sysaux_310mzm34_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_undotbs1_310mzmk2_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_system_310mzm27_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_test2_3117h15v_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_test3_3117h8nv_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_test4_3117hk7d_.dbf $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/o1_mf_temp_310n2bnj_.tmp $v_dest
v_dest=vmractest2:/oradisk/od01/BLACKOWL/onlinelog/
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_3_310n22jj_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_2_310n21sx_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_1_310n215q_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_4_3gznjc9v_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_5_3gznnrh0_.log $v_dest
scp -p /vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_6_3gznrwd7_.log $v_dest
On the standby node create the required directories to get the datafiles
[root@vmractest2 root]# mkdir -p /oradisk/od01/BLACKOWL/datafile/
[root@vmractest2 root]# mkdir -p /oradisk/od01/BLACKOWL/onlinelog/
[root@vmractest2 root]# chown -R oracle:dba /oradisk/od01
On the primary node execute the script to copy the database while the main database is down (or in backup mode)
[vmractest1] > chmod 700 cpfiles
[vmractest1] > ./cpfiles
o1_mf_users_310mzml9_.dbf 100% 2801MB 7.0MB/s 06:37
o1_mf_sysaux_310mzm34_.dbf 100% 340MB 7.4MB/s 00:45
o1_mf_undotbs1_310mzmk2_.dbf 100% 100MB 8.0MB/s 00:12
o1_mf_system_310mzm27_.dbf 100% 490MB 7.0MB/s 01:09
o1_mf_test2_3117h15v_.dbf 100% 100MB 6.5MB/s 00:15
o1_mf_test3_3117h8nv_.dbf 100% 100MB 6.0MB/s 00:16
o1_mf_test4_3117hk7d_.dbf 100% 100MB 6.4MB/s 00:15
o1_mf_temp_310n2bnj_.tmp 100% 433MB 5.8MB/s 01:14
o1_mf_3_310n22jj_.log 100% 50MB 7.5MB/s 00:06
o1_mf_2_310n21sx_.log 100% 50MB 8.4MB/s 00:05
o1_mf_1_310n215q_.log 100% 50MB 8.8MB/s 00:05
o1_mf_4_3gznjc9v_.log 100% 50MB 7.7MB/s 00:06
o1_mf_5_3gznnrh0_.log 100% 50MB 8.2MB/s 00:06
o1_mf_6_3gznrwd7_.log 100% 50MB 4.9MB/s 00:10
Prepare an Initialization Parameter File for the Standby Database
Copy and edit the primary init.ora to set it up for the standby role
*.db_name='whiteowl'
*.db_unique_name='blackowl'
*.audit_file_dest='/oradisk/app01/oracle/admin/blackowl/adump'
*.background_dump_dest='/oradisk/app01/oracle/admin/blackowl/bdump'
*.core_dump_dest='/oradisk/app01/oracle/admin/blackowl/cdump'
*.user_dump_dest='/oradisk/app01/oracle/admin/blackowl/udump'
*.compatible='10.2.0.1.0'
*.control_files='/oradisk/od01/BLACKOWL/controlfile/blackowl_01.ctl','/oradisk/od01/BLACKOWL/controlfile/blackowl_02.ctl'
*.db_block_size=8192
*.db_create_file_dest='/oradisk/od01/BLACKOWL'
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_file_name_convert='/oradisk/od01/BLACKOWL/datafile/','/vmasmtest/od01/WHITEOWL/WHITEOWL/datafile/'
*.log_file_name_convert='/oradisk/od01/BLACKOWL/onlinelog/','/vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/'
*.fal_server='whiteowl'
*.fal_client='blackowl'
*.job_queue_processes=10
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(whiteowl,blackowl)'
*.LOG_ARCHIVE_DEST_1='LOCATION=/oradisk/od01/BLACKOWL/archives/
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=blackowl'
*.LOG_ARCHIVE_DEST_2='SERVICE=whiteowl LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=whiteowl'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.open_cursors=300
*.pga_aggregate_target=94371840
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=283115520
*.standby_file_management='auto'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
Create all required directories for dump directories and archived log destination
[vmractest2] > mkdir -p /oradisk/app01/oracle/admin/blackowl/adump
{oracle} /oradisk/app01/oracle/product/10gDB/dbs [vmractest2] > mkdir -p /oradisk/app01/oracle/admin/blackowl/bdump
{oracle} /oradisk/app01/oracle/product/10gDB/dbs [vmractest2] > mkdir -p /oradisk/app01/oracle/admin/blackowl/cdump
{oracle} /oradisk/app01/oracle/product/10gDB/dbs [vmractest2] > mkdir -p /oradisk/app01/oracle/admin/blackowl/udump
{oracle} /oradisk/app01/oracle/product/10gDB/dbs [vmractest2] > mkdir -p /oradisk/od01/BLACKOWL/archives/
Copy the standby controlfile from the primary to its destination
[vmractest1] > scp -p blackowl.ctl vmractest2:/oradisk/od01/BLACKOWL/controlfile/blackowl_02.ctl
blackowl.ctl 100% 6992KB 7.2MB/s 00:00
[vmractest1] > scp -p blackowl.ctl vmractest2:/oradisk/od01/BLACKOWL/controlfile/blackowl_01.ctl
blackowl.ctl 100% 6992KB 6.9MB/s 00:00
Configure the listener and tnsnames to support the database on both nodes
Configure listener.ora on both servers to hold entries for both databases
# ON VMRACTEST1
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vmractest1)(PORT = 1522)(IP = FIRST))
)
)
SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = whiteowl)
(ORACLE_HOME = /oradisk/app01/oracle/product/10gDB )
(SID_NAME = whiteowl)
)
)
# ON VMRACTEST2
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vmractest2)(PORT = 1522)(IP = FIRST))
)
)
SID_LIST_LISTENER_VMRACTEST =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = blackowl)
(ORACLE_HOME = /oradisk/app01/oracle/product/10gDB )
(SID_NAME = blackowl)
)
)
Configure tnsnames.ora on both servers to hold entries for both databases
# ON VMRACTEST1
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vmractest1)(PORT = 1522)(IP = FIRST))
)
)
WHITEOWL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vmractest1)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = whiteowl)
)
)
BLACKOWL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vmractest2)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = blackowl)
)
)
# ON VMRACTEST2
LISTENER_VMRACTEST =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vmractest2)(PORT = 1522)(IP = FIRST))
)
)
BLACKOWL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vmractest2)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = blackowl)
)
)
WHITEOWL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = vmractest1)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = whiteowl)
)
)
Start the listener and check tnsping on both nodes to both services
[vmractest1.partnergsm.co.il] > tnsping whiteowl
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2007 15:10:00
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
/oradisk/app01/oracle/product/10gDB/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =
vmractest1)(PORT = 1522)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME =
whiteowl)))
OK (10 msec)
{oracle} /oradisk/app01/oracle/product/10gDB/network/admin
[vmractest1.partnergsm.co.il] > tnsping blackowl
TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 19-SEP-2007 15:10:09
Copyright (c) 1997, 2005, Oracle. All rights reserved.
Used parameter files:
/oradisk/app01/oracle/product/10gDB/network/admin/sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =
vmractest2)(PORT = 1522)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME =
blackowl)))
OK (10 msec)
Set Up the Environment to Support the Standby Database on the standby node.
Copy the password file from Primary to Standby; the sys password must be identical
[vmractest1]> scp orapwwhiteowl
vmractest2:/oradisk/app01/oracle/product/10gDB/dbs/orapwblackowl
orapwwhiteowl 100% 1536 4.0MB/s 00:00
Set up the environment variables to point to the Standby database
export ORACLE_HOME=/oradisk/app01/oracle/product/10gDB
export ORACLE_SID=blackowl
Startup nomount the Standby database and generate an spfile
{oracle} /oradisk/app01/oracle/product/10gDB/dbs [vmractest2] > sqlplus / as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Wed Sep 19 16:17:18 2007
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='/oradisk/app01/oracle/product/10gDB/dbs/initblackowl.ora'
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
SQL> create spfile from pfile='/oradisk/app01/oracle/product/10gDB/dbs/initblackowl.ora';
File created.
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down.
Startup mount the Standby database and perform recovery
SQL> startup mount
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1218992 bytes
Variable Size 92276304 bytes
Database Buffers 188743680 bytes
Redo Buffers 2973696 bytes
Database mounted.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
Database altered.
The alert log of the standby will show the operations taking place
…
…
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Wed Sep 19 16:46:26 2007
Attempt to start background Managed Standby Recovery process (blackowl)
MRP0 started with pid=47, OS id=12498
Wed Sep 19 16:46:26 2007
MRP0: Background Managed Standby Recovery process started (blackowl)
Managed Standby Recovery not using Real Time Apply
Clearing online redo logfile 1 /oradisk/od01/BLACKOWL/onlinelog/o1_mf_1_310n215q_.log
Clearing online log 1 of thread 1 sequence number 95
Deleted Oracle managed file /oradisk/od01/BLACKOWL/onlinelog/o1_mf_1_310n215q_.log
Wed Sep 19 16:46:32 2007
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
Wed Sep 19 16:46:33 2007
Clearing online redo logfile 1 complete
Clearing online redo logfile 2 /oradisk/od01/BLACKOWL/onlinelog/o1_mf_2_310n21sx_.log
Clearing online log 2 of thread 1 sequence number 96
Deleted Oracle managed file /oradisk/od01/BLACKOWL/onlinelog/o1_mf_2_310n21sx_.log
Clearing online redo logfile 2 complete
Clearing online redo logfile 3 /oradisk/od01/BLACKOWL/onlinelog/o1_mf_3_310n22jj_.log
Clearing online log 3 of thread 1 sequence number 94
Deleted Oracle managed file /oradisk/od01/BLACKOWL/onlinelog/o1_mf_3_310n22jj_.log
Clearing online redo logfile 3 complete
Media Recovery Waiting for thread 1 sequence 96
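At this point you can also check from the standby instance that the managed recovery process is running and which sequence it is waiting for; a minimal query:
SQL> select process, status, sequence# from v$managed_standby;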
Start the Primary Database
The alert log of the primary will show how it recognizes the standby and starts shipping archived logs
******************************************************************
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
******************************************************************
Wed Sep 19 16:01:07 2007
LNS: Standby redo logfile selected for thread 1 sequence 100 for destination
LOG_ARCHIVE_DEST_2
Wed Sep 19 16:01:07 2007
Successfully onlined Undo Tablespace 1.
Wed Sep 19 16:01:07 2007
SMON: enabling tx recovery
Wed Sep 19 16:01:09 2007
Database Characterset is AL32UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=21, OS id=13864
Wed Sep 19 16:01:12 2007
Completed: ALTER DATABASE OPEN
Wed Sep 19 16:01:13 2007
ARCq: Standby redo logfile selected for thread 1 sequence 99 for destination
LOG_ARCHIVE_DEST_2
Wed Sep 19 16:05:05 2007
Thread 1 advanced to log sequence 101
Current log# 1 seq# 101 mem# 0:
/vmasmtest/od01/WHITEOWL/WHITEOWL/onlinelog/o1_mf_1_310n215q_.log
Wed Sep 19 16:05:06 2007
LNS: Standby redo logfile selected for thread 1 sequence 101 for destination
LOG_ARCHIVE_DEST_2
Verify the Physical Standby Database Is Performing Properly
Check archived redo log on Standby
SQL> show parameters db_unique_name;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string blackowl
SQL> l
1* SELECT NAME FROM V$DATABASE
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# FIRST_TIM NEXT_TIME
---------- --------- ---------
96 19-SEP-07 19-SEP-07
97 19-SEP-07 19-SEP-07
98 19-SEP-07 19-SEP-07
99 19-SEP-07 19-SEP-07
100 19-SEP-07 19-SEP-07
Switch logfiles on Primary
SQL> alter system switch logfile;
System altered.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /vmasmtest/whiteowl/archdest/
Oldest online log sequence 100
Next log sequence to archive 102
Current log sequence 102
SQL> alter system switch logfile;
System altered.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /vmasmtest/whiteowl/archdest/
Oldest online log sequence 101
Next log sequence to archive 103
Current log sequence 103
Check archived redo log on Standby
SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# FIRST_TIME NEXT_TIME
---------- -------------- --------------
96 19/09/07 09:35 19/09/07 09:45
97 19/09/07 09:45 19/09/07 15:20
98 19/09/07 15:20 19/09/07 15:48
99 19/09/07 15:48 19/09/07 16:00
100 19/09/07 16:00 19/09/07 16:05
101 19/09/07 16:05 19/09/07 16:08
102 19/09/07 16:08 19/09/07 16:08
7 rows selected.
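To confirm that the shipped logs are actually being applied on the standby, and not just received, you can also check the APPLIED column of v$archived_log on the standby; a minimal query:
SQL> select sequence#, applied from v$archived_log order by sequence#;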
Reference:
Oracle® Data Guard Concepts and Administration
10g Release 2 (10.2)
Part Number B14239-04
http://download.oracle.com/docs/cd/B19306_01/server.102/b14239/create_ps.htm#SBYDB00210
Wednesday, October 14, 2015
Oracle DB Architecture.
ORACLE_SID: you can have different versions of the Oracle database on one server; the SID identifies the instance you are working with.
sqlplus / as sysdba
startup
The startup program begins by reading a file called spfile<ORACLE_SID>.ora, which contains the parameters for the instance identified by the Oracle SID.
It is a binary file.
The init.ora file is a text file that historically predates the binary spfile.
A database parameter gives the size of the shared memory area, which is called the SGA (System Global Area).
A number of utility (background) processes are started; on Windows they run as threads.
Once memory is allocated and the processes are started, SQL*Plus displays the total SGA size,
as well as the size of each part: fixed size, variable size, database buffers, and redo buffers.
At this point Oracle opens and reads a file that is so critical that it is always duplicated: the control file.
It is found at the location specified in the parameter file.
This is the file where Oracle finds the names of all the data files, temp files, and log files that together make up the database.
The control file also allows Oracle to check global consistency: whether something needs to be restored, whether the database was properly shut down, and whether an unfinished
transaction has to be rolled back.
Without the control file your database is dead.
After passing this stage SQL*Plus displays that the database is mounted,
which mostly means that all files have been identified.
The only step that remains is opening the various files for read and write operations.
When the files are opened the database is open, and users who are not DBAs can connect.
That is, they can connect if they are allowed to connect to the machine hosting the database.
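The same stages can be walked through explicitly from SQL*Plus; a minimal sketch of the sequence described above:
SQL> startup nomount          -- read the parameter file, allocate the SGA, start background processes
SQL> alter database mount;    -- read the control file and identify all database files
SQL> alter database open;     -- open the data, temp, and log files; non-DBA users can now connect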
If you want to enable users or an application server to connect from another machine, which is the most common case, you have to run the lsnrctl start command. This starts another program called tnslsnr, the "listener", which waits for incoming connections; after displaying a lot of detail about its configuration, found
in the listener.ora file, the program runs in the background and everything is in place.
Let's now consider a client program that runs on another machine. If it wants to access the database, it needs to provide three pieces of information:
1. the name of the host on which Oracle runs
2. the port on which tnslsnr is listening
3. the service, the identifier of the database you want to connect to
There are several ways to provide this connection information.
You can provide everything directly; this is for instance what you do with JDBC and a pure Java driver.
If your client is using the Oracle client libraries, you can provide an alias, and the client will automatically fetch the associated information from a file called tnsnames.ora.
These two methods are by far the most common ones, but there are other possibilities, such as using an LDAP directory or system utilities such as Network Information
Services as a substitute for the tnsnames.ora file.
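As an illustration, both styles are shown below; the host name, port, service name, and credentials are hypothetical:
sqlplus scott/tiger@//dbhost:1521/orcl     # everything provided directly (easy connect: host, port, service)
sqlplus scott/tiger@ORCL                   # ORCL is an alias resolved through tnsnames.ora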
Obviously Oracle will not give uncontrolled access; you must sign on so that you are identified and the database knows exactly what you are authorized to do.
Usually you will provide a username and password; authentication through an LDAP directory is also sometimes possible.
In practice, your client issues a connection request that is handled by the listener. The listener will either spawn a sub-process that runs the Oracle code and becomes the
client's dedicated server, or it redirects the connection to an already existing shared server, which is a much less common configuration.
The server process is, so to speak, the client's proxy on the Oracle side. It can access the SGA, which is shared between all processes. It will also allocate a private memory area called the PGA, where it stores what is private to the process and to the session.
Let's take a closer look at the various parts of the SGA. There isn't much to say about the fixed system area.
The variable area contains the shared pool, which holds compiled versions of queries, as well as various buffer pools that contain, for instance, data dictionary information.
The data buffer contains data from the tables but also index structures and various work areas.
Everything is split into pages or blocks; the block is Oracle's basic storage unit and a typical block size is 8 KB. Whenever you query or change the content of a table, you operate in memory on blocks that are in this cache.
Finally, a fourth area is used to store information before it is written to the log files, to ensure transactional consistency even in the event of a crash.
The parts that really matter for understanding how Oracle works are the shared pool, the data buffer, and the log buffer. This is where the dedicated server process operates, as well as the main utility processes started with the database.
Let's see what happens when the client issues an SQL query:
select empno, ename, job from emp where ename like 'S%'
The query arrives as text at the server process, which starts by computing a hash value that is used as a checksum.
Hash Value = 3046656596
Then the server checks whether this query has already been executed recently by inspecting the shared pool.
If the query is found, the analysis stops here; this is called soft parsing, and the server process can execute the query.
If a prior query with exactly the same text cannot be found, then we have hard parsing, which is CPU intensive and locks some shared resources.
The SQL syntax must be analyzed, the objects referred to in the query must be identified, some synonyms may have to be translated, and the best execution plan must be
determined among what is sometimes a very large number of possible execution plans.
This may cause recursive queries on the data dictionary and input/output operations. This is why you shouldn't concatenate identifiers that change with every call into
the text of a query; you should pass them as parameters, as if the query were a function, as in the sketch below.
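A minimal SQL*Plus sketch of the difference, reusing the emp example from above (the bind variable name is arbitrary); the parameterized form keeps the query text constant, so repeated executions are soft parsed:
SQL> variable pattern varchar2(10)
SQL> exec :pattern := 'S%'
SQL> select empno, ename, job from emp where ename like :pattern;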
Once the query is analyzed and compiled, it is loaded into the shared pool and executed.
It will then reference pieces of data that may or may not be in the data buffer.
If data blocks are not in memory, the server process fetches them from the data files and loads them.
Blocks that have been modified are written asynchronously to the data files by one or several database writer (DBWR) processes.
When you update data, the mechanisms for analysing the statement and loading data blocks into memory are the same ones.
But before data is modified, the initial values are copied to a work area called the "undo segment".
Another process that wants to read the data being modified is not blocked; it reads the data from the undo segment instead of reading it from the table block.
The initial and new values are also copied to the redo buffer.
If the redo buffer fills up, the utility process LGWR, or log writer, dumps its content to a log file.
When you commit the transaction, Oracle records that the initial value is no longer required for rolling back the change, it records that the change is validated, and
the LGWR process writes the transaction to the log file.
This time the operation is synchronous, and the server process waits for the go-ahead before the call returns to the client.
That means that if you commit every single update in a loop, you will waste a lot of time just waiting for acknowledgments.
Tuesday, October 13, 2015
SQL commands to check the complete details of an Oracle database
spool D:\central\configuration.txt
select name,log_mode,open_mode from v$database;
select name from v$controlfile;
select value from NLS_DATABASE_PARAMETERS where parameter = 'NLS_CHARACTERSET';
select * from v$tablespace;
select BLOCK_SIZE from dba_tablespaces where tablespace_name in ('SYSTEM','UNDOTBS1','SYSAUX','TEMP','USERS','USER_DATA','INDEX_DATA');
select BLOCK_SIZE from dba_tablespaces where tablespace_name = 'UNDOTBS1';
select BLOCK_SIZE from dba_tablespaces where tablespace_name = 'SYSAUX';
select BLOCK_SIZE from dba_tablespaces where tablespace_name = 'TEMP';
select BLOCK_SIZE from dba_tablespaces where tablespace_name = 'USERS';
select BLOCK_SIZE from dba_tablespaces where tablespace_name = 'USER_DATA';
select BLOCK_SIZE from dba_tablespaces where tablespace_name = 'INDEX_DATA';
select PROPERTY_VALUE from database_properties where property_name = 'DEFAULT_PERMANENT_TABLESPACE';
select value from v$parameter where name = 'db_block_size';
select value from v$parameter where name = 'undo_management';
select value from v$parameter where name = 'undo_tablespace';
select BLOCK_SIZE from dba_tablespaces where tablespace_name = 'UNDOTBS1';
select FILE_NAME, BYTES/1024/1024, AUTOEXTENSIBLE, MAXBYTES,INCREMENT_BY from dba_data_files where tablespace_name='UNDOTBS1';
select TABLESPACE_NAME, INITIAL_EXTENT, NEXT_EXTENT,
MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE, MIN_EXTLEN, STATUS, CONTENTS, LOGGING,
EXTENT_MANAGEMENT, ALLOCATION_TYPE, BLOCK_SIZE, FORCE_LOGGING, SEGMENT_SPACE_MANAGEMENT
from dba_tablespaces;
select USERNAME, USER_ID, PASSWORD, ACCOUNT_STATUS,
LOCK_DATE, EXPIRY_DATE, DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE, CREATED, PROFILE,
INITIAL_RSRC_CONSUMER_GROUP, EXTERNAL_NAME from dba_users;
select PROFILE from dba_profiles group by profile;
select RESOURCE_NAME, RESOURCE_TYPE, LIMIT from dba_profiles where profile='DEFAULT';
spool off
SQL command to check tablespace usage
SELECT
/* + RULE */
df.tablespace_name "Tablespace",
df.bytes / (1024 * 1024) "Size (MB)",
SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
NVL(ROUND(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
ROUND((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
FROM dba_free_space fs,
(SELECT tablespace_name,
SUM(bytes) bytes
FROM dba_data_files
GROUP BY tablespace_name
) df
WHERE fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name,
df.bytes
UNION ALL
SELECT
/* + RULE */
df.tablespace_name tspace,
fs.bytes / (1024 * 1024),
SUM(df.bytes_free) / (1024 * 1024),
NVL(ROUND((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
ROUND((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
FROM dba_temp_files fs,
(SELECT tablespace_name,
bytes_free,
bytes_used
FROM v$temp_space_header
GROUP BY tablespace_name,
bytes_free,
bytes_used
) df
WHERE fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name,
fs.bytes,
df.bytes_free,
df.bytes_used
ORDER BY 4 ASC;
SQL command to check the size of the database (result is in GB)
select a.data_size+b.temp_size+c.redo_size "total_size"
from ( select sum(bytes)/1024/1024/1024 data_size
from dba_data_files ) a,
( select nvl(sum(bytes)/1024/1024/1024,0) temp_size
from dba_temp_files ) b,
( select sum(bytes)/1024/1024/1024 redo_size
from sys.v_$log ) c;