Monday, June 15, 2015

Migrate Oracle E-Business Suite Release 12.2 on a Single Database Instance to a RAC Database (Doc ID 1453213.1)

Using Oracle 11g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12.2 (Doc ID 1453213.1)

Oracle E-Business Suite Release 12.2 has numerous configuration options that can be chosen to suit particular business scenarios, uptime requirements, hardware capability, and availability requirements. This document describes how to migrate Oracle E-Business Suite Release 12.2 to an Oracle Real Application Clusters (Oracle RAC) environment running Oracle Database 11g Release 2 (11.2.0.3 or higher). It also describes how to use Rapid Install to install an Oracle RAC-configured Oracle E-Business Suite Release 12.2 system.

Note: At present, this document applies to UNIX and Linux platforms only. If you are using Windows and want to migrate to Oracle RAC or ASM, you must follow the procedures described in the Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2) and the Oracle Database Administrator's Guide 11g Release 2 (11.2).

The most current version of this document can be obtained in My Oracle Support (MOS) Knowledge Document 1453213.1. There is a change log at the end of this document.

Note: Most documentation links are to the generic Oracle 11gR2 documentation. As installation documentation is platform-specific, the links provided are for Linux. You should refer to the installation documentation for your platform.

A number of conventions are used in describing the Oracle E-Business Suite architecture:

Application tier - Machines (nodes) running Forms, Web, and other services (servers). Sometimes called the middle tier.
Database tier - Machines (nodes) running the Oracle E-Business Suite database.
oracle - User account that owns the database file system (database ORACLE_HOME and files).
CONTEXT_NAME - The name of the Applications context used by AutoConfig. The default is <SID>_<hostname>.
CONTEXT_FILE - Full path to the Applications context file on the application tier or database tier. The default locations are: application tier context file, <INST_TOP>/appl/admin/<CONTEXT_NAME>.xml; database tier context file, <RDBMS_ORACLE_HOME>/appsutil/<CONTEXT_NAME>.xml.
APPSpwd - Oracle E-Business Suite database user password.
Monospace text - Represents command line text. Type such a command exactly as shown.
< > - Text enclosed in angle brackets represents a variable. Substitute a value for the variable text. Do not type the angle brackets.
\ - On UNIX or Linux, the backslash character can be entered to indicate continuation of the command line on the next screen line.

This document is divided into the following sections:

Section 1: Overview
Section 2: Environment
Section 3: Install Oracle Grid Infrastructure 11gR2
Section 4: Migrate Oracle E-Business Suite Release 12.2 on a Single Database Instance to a RAC Database
Section 5: Use Rapid Install to Install a RAC Configured Oracle E-Business Suite Release 12.2 System
Section 6: References
Appendices

Section 1: Overview

As of Oracle E-Business Suite Release 12.2, you have two options for deploying Oracle E-Business Suite in an Oracle RAC environment:

Traditional tools (dbca) - can be used to migrate an existing Oracle E-Business Suite Release 12.2 system to Oracle RAC.
Rapid Install - can be used to configure an Oracle RAC system for use with a new Oracle E-Business Suite Release 12.2 system.

Both methods include a number of common steps, such as configuring Oracle Grid Infrastructure and setting up shared storage. When planning to set up Oracle Real Application Clusters and shared devices, you should be familiar with Oracle Database 11gR2, and have a good knowledge of Oracle Real Application Clusters (RAC). For further information, refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2).
Cluster Terminology

You should understand the terminology used in a cluster environment. Key terms include the following.

Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional datafile when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.

Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The crs process manages designated cluster resources, such as databases, instances, services, and listeners.

Parallel Concurrent Processing (PCP) is an extension of the Concurrent Processing architecture. PCP allows concurrent processing activities to be distributed across multiple nodes in an Oracle RAC environment, maximizing throughput and providing resilience to node failure.

Real Application Clusters (RAC) is an Oracle database technology that allows multiple machines to work on the same data in parallel, reducing processing time significantly. An Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.

Oracle Grid Infrastructure is the unified ORACLE_HOME for both ASM and CRS; in Oracle Database 11gR2, the Grid Infrastructure install replaces the Clusterware install. For further information, refer to Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

Section 2: Environment

2.1 Software and Hardware Configuration

Refer to the relevant platform installation guides for supported hardware configurations.
For example, Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux and Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX.

The minimum software versions are as follows:

Oracle E-Business Suite Release 12 - 12.2.x or higher
Oracle Database - 11.2.0.3 or higher
Oracle Cluster Ready Services - 11.2.0.3 or higher

You can obtain the latest Oracle Database 11gR2 software from:
http://www.oracle.com/technology/software/products/database/index.html

Note: Oracle Cluster Ready Services must be at a release level equal to, or greater than, the Oracle Database version.

2.2 ORACLE_HOME Nomenclature

This document refers to various ORACLE_HOMEs, as follows:

SOURCE_ORACLE_HOME - Database ORACLE_HOME used by Oracle E-Business Suite Release 12.2. It can be any supported version.
11gR2 ORACLE_HOME - Database ORACLE_HOME installed for the Oracle 11gR2 RAC database.
11gR2 CRS ORACLE_HOME - ORACLE_HOME installed for the Oracle Database 11gR2 Cluster Ready Services (Grid Infrastructure home).

Section 3: Install Oracle Grid Infrastructure 11gR2

Installation of Oracle Grid Infrastructure 11g Release 2 is now part of the Infrastructure install, which requires an understanding of the specific type of cluster and infrastructure to be deployed; the selection of these is outside the scope of this document. For convenience, the general steps are outlined below, but you should use the Grid Infrastructure documentation set as the primary reference.

Note: This section should be followed for both configuration methods: manual migration and Rapid Install.

Note: Refer to Appendix H, Higher Version of Grid Infrastructure, if you are planning to install Oracle E-Business Suite 12.2 RAC using Rapid Install on Oracle Database 11.2.0.4.0 or 12.1.0.1 with Grid Infrastructure.
3.1 Check Network Requirements

In Oracle Database 11gR2, the Grid Infrastructure install can be configured to specify address management via node addresses and names (as in older releases), or via Grid Naming Service. Regardless of the choice, nodes must satisfy the following requirements:

Each node must have at least two network adapters: one for the public network interface and one for the private network interface (interconnect).
For the public network, each network adapter must support the TCP/IP protocol.
For the private network, the interconnect must support the user datagram protocol (UDP) using high-speed network adapters, and switches that support TCP/IP (Gigabit Ethernet or better is recommended).
To improve availability, backup public and private network adapters can be configured for each node.
The interface names associated with the network adapter(s) for each network must be the same on all nodes.

If Grid Naming Service is not used, the following addresses must also be set up:

An IP address and associated host name for each public network interface, registered in the DNS.
One unused virtual IP address (VIP) and associated virtual host name, registered in DNS or resolved in the hosts file (or both), which will be configured for the primary public network interface. The virtual IP address must be in the same subnet as the associated public interface. After installation, clients can be configured to use either the virtual host name or the virtual IP address. If a node fails, its virtual IP address fails over to another node.
A private IP address (and optionally a host name) for each private interface. Oracle recommends that you use private network IP addresses for these interfaces.
An additional virtual IP address (VIP) and associated virtual host name for the SCAN listener, registered in DNS.

For further information, refer to the pre-installation requirements in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.
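As an illustrative pre-flight sketch (not part of the Oracle tooling), the name-resolution side of these requirements can be checked with getent, which consults /etc/hosts and DNS much as the installer does. The host names in the loop are placeholders; substitute your own public, VIP, and SCAN names.

```shell
#!/bin/sh
# Hypothetical helper: report whether a given host name resolves.
check_host() {
    if getent hosts "$1" > /dev/null 2>&1; then
        echo "OK:   $1 resolves"
    else
        echo "FAIL: $1 does not resolve"
        return 1
    fi
}

# Placeholder list -- replace with your public, VIP and SCAN host names.
# localhost is used here only so the sketch runs anywhere.
for h in localhost; do
    check_host "$h" || true
done
```

Running this for every public name, VIP, and the SCAN name before starting the installer catches DNS and hosts-file omissions early, when they are cheapest to fix.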
Note: A common mistake is failing to set up ntpd correctly. Refer to the Setting Network Time Protocol for Cluster Time Synchronization section in Oracle Grid Infrastructure Installation Guide.

3.2 Verify Kernel Parameters

As part of the Grid Infrastructure install, the pre-installation process checks the kernel parameters and, if necessary, creates a "fixup" script that corrects most of the common kernel parameter issues. Follow the installation instructions for running this script. Detailed hardware and OS requirements are given in the Oracle Grid Infrastructure for a Cluster Pre-installation Tasks section of Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

3.3 Set Up Shared Storage

The available shared storage options are ASM or a shared file system (clustered or NFS). Use of raw disk devices is only supported for upgrades. These storage options are described in the Configuring Storage for Oracle Grid Infrastructure for a Cluster and Oracle RAC section of Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

3.3.1 ASM Configuration

With Oracle Database 11g Release 2 (11.2), Oracle ASM is part of an Oracle Grid Infrastructure installation, and the ASM binaries will be installed with it. Key points are:

Ensure the ASM disks are configured correctly, using oracleasm or another method. Refer to Oracle Automatic Storage Management Administrator's Guide 11g Release 2 (11.2) and Oracle Database 2 Day + Real Application Clusters Guide 11g Release 2 (11.2).
Be aware that the Grid Infrastructure install creates a single disk group, and that Rapid Install supports only a single disk group.

3.3.2 Shared File System

Ensure that the database directory is mounted with the required mount options as per MOS Knowledge Document 359515.1, Mount Options for Oracle files when used with NFS on NAS devices.
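A rough sketch of how the NFS mount-option check could be automated is shown below. The function and the option list are illustrative only; take the authoritative option list for your platform and NAS device from MOS Document 359515.1.

```shell
#!/bin/sh
# Hypothetical helper: verify that a mounted directory shows the expected
# mount options in /proc/mounts.
check_mount_opts() {
    mnt=$1; shift
    # /proc/mounts fields: device, mount point, fstype, options, ...
    line=$(grep " $mnt " /proc/mounts | head -1)
    if [ -z "$line" ]; then
        echo "no mount found for $mnt"
        return 1
    fi
    for opt in "$@"; do
        case "$line" in
            *"$opt"*) echo "OK:   $opt present" ;;
            *)        echo "MISS: $opt absent" ;;
        esac
    done
}

# Demonstration against the root filesystem; for a database directory you
# would call e.g.:  check_mount_opts /u01/oradata rw hard nointr rsize=32768
check_mount_opts / rw
```

Any "MISS" line indicates a mount that should be corrected in /etc/fstab and remounted before the database files are placed on it.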
3.4 Check Account Setup

Configure the oracle account's environment for Oracle Clusterware and Oracle Database 11gR2, as per the Creating Groups, Users and Paths for Oracle Grid Infrastructure section of Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

3.5 Configure Secure Shell on All Cluster Nodes

Secure Shell configuration is covered in detail in both the Oracle Real Application Clusters Installation Guide and the Oracle Grid Infrastructure Installation Guide. The Oracle Database 11gR2 installer provides the option to automatically set up passwordless ssh connectivity, so unlike previous releases, manual setup of Secure Shell is not necessary. For further details on manual setup of passwordless ssh, refer to Appendix E: How to Complete Installation Prerequisite Tasks Manually of Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

3.6 Run the Cluster Verification Utility (CVU)

The installer will automatically run the Cluster Verification tool and provide fixup scripts for any OS issues. However, to check for potential issues you can also run CVU prior to installation. Install the cvuqdisk package as described in the Installing the cvuqdisk RPM for Linux section in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

Use the following command to determine which pre-installation steps have been completed, and which still need to be performed:

$ <11g Grid Software Stage>/runcluvfy.sh stage -pre crsinst -n <node_list>

Substitute <11g Grid Software Stage> with the stage location on your system. Substitute <node_list> with the names of the nodes in your cluster, separated by commas. To identify and resolve issues at this stage (rather than during install), consider adding the -fixup and -verbose options to the above command.
Use the following command to check networking setup with CVU:

$ <11g Grid Software Stage>/runcluvfy.sh comp nodecon -n <node_list> [-verbose]

Use the following command to check operating system requirements with CVU:

$ <11g Grid Software Stage>/runcluvfy.sh comp sys -n <node_list> -p {crs|database} \
-osdba osdba_group -orainv orainv_group -verbose

In both commands, substitute <11g Grid Software Stage> with the stage location on your system, and <node_list> with a comma-separated list of the names of the nodes in your cluster.

3.7 Install Oracle Grid Infrastructure 11g Release 2

Use the same oraInventory location that was created during the installation of Oracle E-Business Suite Release 12, and make a backup of oraInventory before installation. Start runInstaller from the Oracle Grid Infrastructure 11g Release 2 staging area, and install as per your requirements. For further information, refer to the Installing Oracle Grid Infrastructure for a Cluster section of Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

Note: Customers who have an existing Grid Infrastructure install tailored to their requirements can skip this step. Those who do not, who require further information, or who are perhaps doing a test install, should refer to Appendix C for an example walkthrough.

Confirming Oracle Clusterware operation: after installation, log in as root and use the following command to confirm that your Oracle Clusterware installation is running correctly:

$ <CRS_HOME>/bin/crs_stat -t -v

Successful Oracle Clusterware operation can also be verified using the following command:

$ <CRS_HOME>/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Section 4: Migrate Oracle E-Business Suite Release 12.2 on a Single Database Instance to a RAC Database
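The "all components online" check above is easy to script. In this sketch the crsctl output is simulated so the filter can be demonstrated without a cluster; on a real node you would capture the live output of crsctl check crs instead.

```shell
#!/bin/sh
# Sketch: fail unless every CRS component reports "is online".
# Simulated output; replace with: crsctl_output=$($CRS_HOME/bin/crsctl check crs)
crsctl_output='CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online'

# Count lines that do NOT report "is online"; zero means healthy.
offline=$(printf '%s\n' "$crsctl_output" | grep -cv 'is online' || true)
if [ "$offline" -eq 0 ]; then
    echo "clusterware healthy"
else
    echo "clusterware degraded: $offline component(s) not online"
fi
```

A check like this is handy in post-reboot scripts, so that database startup attempts are not made against a cluster stack that is only partially up.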
This section explains how to migrate Oracle E-Business Suite Release 12.2 running on a single database instance to an Oracle Real Application Clusters environment. Refer to Section 5 if you are using Rapid Install to build a new Oracle E-Business Suite Release 12.2 system.

This section is divided into the following subsections:

4.1: Configuration Prerequisites
4.2: Install the Oracle Database 11g Release 2 (11gR2) Software
4.3: Configure Shared Storage
4.4: Listener Configuration in 11gR2
4.5: Convert the Single Instance Database to RAC
4.6: Enable AutoConfig on All Nodes in the Cluster
4.7: Enable AutoConfig on the Application Tier

4.1: Configuration Prerequisites

The prerequisites for migrating a database to Oracle RAC are as follows:

An existing Oracle E-Business Suite Release 12.2 non-RAC system.
Your datafiles reside on shared storage. If your datafiles are on local storage, move them to shared storage and recreate the control files.
You have completed Section 3 of this document.
You have applied all relevant patches, as detailed in the Interoperability Notes for Release 12.2, MOS Knowledge Document 1623879.1.

4.2: Install the Oracle Database 11gR2 Software and Upgrade the Oracle E-Business Suite Database to 11gR2

Note: If you want to use an Oracle Database 11.2.0.4.0 database, go to Step 4.2.2.

4.2.1 Install Oracle Database Software 11g Release 2

Take a full backup of the oraInventory directory before starting this stage, in which you will run the Oracle Universal Installer (runInstaller) to carry out an Oracle Database installation with Oracle RAC. In the Cluster Nodes window, verify the cluster nodes shown for the installation, and select all the nodes included in your Oracle RAC cluster.

4.2.1.1 Install Oracle Database Software 11g Release 2 (11.2.0.3.0)

To install the Oracle Database 11g Release 2 software, select all the nodes in the cluster. Ensure that the database software is installed on all nodes in the cluster.
4.2.1.2 Apply All Required Database Patches

Ensure that all the required database patches are applied. Refer to Part B of the Apply Database Patches section of MOS Knowledge Document 1349240.1, Database Preparation Guidelines for an Oracle E-Business Suite Release 12.2 Upgrade.

4.2.2 Install Oracle Database Software 11gR2 and Upgrade the Oracle E-Business Suite Database to 11gR2

Note: If you are using Oracle Database 11g Release 2 (11.2.0.4.0), then apply Patch 17429475 using the OPatch utility.

To install the Oracle Database 11gR2 software and upgrade an existing database to 11gR2, refer to the interoperability note, MOS Knowledge Document 1623879.1. Follow all the instructions and steps listed there except for the following:

Start the new database listener (conditional)
Implement and run AutoConfig
Restart Applications server processes (conditional)

Note: At this point, ensure that the Applications patching cycle has completed. If it has not, as the owner of the source administration server, run the following command to finish any in-progress adop session:

$ adop phase=prepare,cutover

Note: Ensure the database software is installed on all nodes in the cluster.

4.3: Configure Shared Storage

This document does not discuss the setup of shared storage in detail, as there are no Oracle E-Business Suite-specific tasks in setting up ASM, NFS (NAS), or clustered storage. For ASM, refer to Oracle Database Storage Administrator's Guide 11g Release 2 (11.2). For configuring shared storage, refer to the Configuring Storage for Oracle Grid Infrastructure for a Cluster and Oracle RAC section of Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

4.4: Listener Configuration in 11gR2

Listener configuration requires careful attention when converting an Oracle database to Oracle RAC. There are two types of listener in Oracle 11gR2 Clusterware: the SCAN listener and general database listeners.
The SCAN listener provides a single named access point for clients, and replaces the use of virtual IP addresses (VIPs) in client connection requests (tnsnames.ora aliases). However, connection requests can still be routed via the VIP name, as both access methods are fully supported.

Note: Starting with Oracle E-Business Suite 12.2, the recommended approach is to always use the SCAN listener for connecting to the database. This is the default mode for Rapid Install.

To start or stop a listener from srvctl, the following three configuration components are required:

An Oracle Home from which to run lsnrctl.
The listener.ora file under the TNS_ADMIN network directory.
The listener name (defined in the listener.ora) to start and stop the service.

The Oracle Home can be either the Grid Infrastructure home or a database home. The TNS_ADMIN directory can be any accessible directory. The listener name must be unique within the listener.ora file. For further information, refer to Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2).

Three issues must be considered:

Listener configuration in 11gR2 Clusterware
Listener requirements for converting to Oracle RAC
Listener requirements for AutoConfig

For a more detailed explanation of how instances interact with listeners, refer to Appendix E.

4.4.1 Listener Requirements for Converting to Oracle RAC

Tools such as rconfig, dbca, and dbua impose additional restrictions on the choice of listener. The listener must be the default Grid listener, and it must run from the Oracle Grid Infrastructure home. So if the default listener is not set up for rconfig, you will need to modify it using the following command:

$ srvctl modify listener -l LISTENER -p <port>   [ if the default LISTENER exists ]

An alternative way to do this is:

$ srvctl add listener -p <port>

After conversion, you can reconfigure the listener as required.
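To illustrate the two access methods described above, here is a hedged sketch of tnsnames.ora entries for a SCAN-based and a VIP-based connection. The host names, service name, and port are placeholders, not values from this document:

```text
# SCAN-based alias -- one name, resolved by DNS to the SCAN addresses:
VIS_SCAN =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = ebs-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = VIS)))

# VIP-based alias -- one address per node, still fully supported:
VIS_VIP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip.example.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip.example.com)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = VIS)))
```

The SCAN alias stays the same when nodes are added or removed, which is one reason it is the recommended approach in Release 12.2; the VIP alias must list every node explicitly.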
4.4.2 Listener Requirements for AutoConfig

AutoConfig supports the use of either a named database listener or the SCAN listener.

4.5: Convert the Oracle 11g Database to RAC

There are three options for converting to Oracle RAC, which are detailed in the Converting to Oracle RAC and Oracle RAC One Node from Single-Instance Oracle Databases section of Oracle Real Application Clusters Installation Guide 11g Release 2 (11.2) for Linux and UNIX:

DBCA
rconfig
Enterprise Manager

All of these will convert a database to Oracle RAC, so you may choose the one you are most familiar with. The prerequisites for conversion are as follows:

A clustered Grid Infrastructure install with at least one SCAN listener address.
The default listener, running from the Grid Infrastructure home. The default port can be used, or another port specified during the Grid Infrastructure install.
An 11gR2 ORACLE_HOME installed on all nodes in the cluster.
Shared storage: the database files can already be on shared storage (CFS or ASM), or can be moved to ASM as part of the conversion as in Section 3.3: Set Up Shared Storage.

Note: If you are using rconfig for the RAC conversion, ensure you have applied Patch 17429475 and that the patching cycle has completed.

As an example, the steps involved in an Admin Managed rconfig conversion are as follows:

As the oracle user, navigate to the directory $11gR2_ORACLE_HOME/assistants/rconfig/sampleXMLs, and open the sample file ConvertToRAC_AdminManaged.xml using a text editor such as vi. This XML sample file contains comment lines that provide instructions on how to edit the file for your specific configuration.
Make a copy of the sample ConvertToRAC_AdminManaged.xml file, and modify the parameters as necessary. Keep a note of the name of your modified copy.

Note: Study the example file (and associated notes) in Appendix A before you edit your own file and run rconfig.

Run rconfig with the Convert verify="ONLY" option set in the XML file prior to performing the actual conversion.
Although this verification step is optional, it is highly recommended, as the test validates the parameters and identifies any issues that need to be corrected before the conversion takes place.

Note: Specify the SourceDBHome variable in ConvertToRAC_AdminManaged.xml as the non-RAC Oracle Home. If you wish to specify the new Oracle Home instead, start the database from the new Oracle Home.

Navigate to $11gR2_ORACLE_HOME/bin, and run rconfig as follows:

Note: Before running rconfig, ensure that the local_listener initialization parameter is set to NULL.

$ ./rconfig <your modified ConvertToRAC_AdminManaged.xml>

The rconfig command will perform the following tasks:

Migrate the database to ASM storage (if ASM is specified as the storage option in the configuration XML file).
Create database instances on all nodes in the cluster.
Configure the listener and Net Service entries.
Configure and register the CRS resources.
Start the instances on all nodes in the cluster.

4.5.1 Post-Migration Steps

4.5.1.1 Database in Archivelog Mode

The conversion tools may change some configuration options. Most notably, your database will now be in archivelog mode, regardless of whether it was prior to the conversion. If you do not want to use archivelog mode, perform the following steps:

Mount but do not open the database, using the startup mount command.
Use the command alter database noarchivelog to disable archiving.
Shut down the database using the shutdown immediate command.
Start up the database using the startup command.

For further details of how to control archiving, refer to Oracle Database Administrator's Guide 11g Release 2 (11.2).

4.5.1.2 Listener Configuration

When converting to Oracle RAC, whichever tool was used will have employed the Grid local listener (LISTENER) for the actual conversion. It is recommended to use Oracle E-Business Suite local listeners along with the SCAN listener. Configure the Oracle E-Business Suite local listeners on all nodes using the same port as was used for the single instance.
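As a sketch of what such a local listener definition might look like on the first node, the fragment below shows a minimal listener.ora entry. The listener name, host, port, ORACLE_HOME, and SID are all illustrative placeholders, not values prescribed by this document:

```text
# listener.ora fragment on node 1 (all names are placeholders)
VIS1_LOCAL =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip.example.com)(PORT = 1521))
    )
  )

SID_LIST_VIS1_LOCAL =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
      (SID_NAME = VIS1)
    )
  )
```

The listener listens on the node's VIP, using the same port as the original single-instance listener; each node gets a corresponding entry for its own instance.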
4.5.1.2.1 Configure the Oracle E-Business Suite Local Listener

Create a directory under <11gR2_ORACLE_HOME>/network/admin, using the new instance name. For example, if your database name is VISRAC and you want to use "vis" as the instance prefix, create the directory as vis1_<hostname>. Copy the listener.ora and tnsnames.ora from <SOURCE_ORACLE_HOME>/network/admin to $ORACLE_HOME/network/admin/<context_name>. Modify the listener.ora and tnsnames.ora to point to the new Oracle Home and SID. Ensure that the <instance_name>_LOCAL alias exists in the tnsnames.ora; otherwise, create the alias.

Start the listeners on all nodes. Log on to the database and set the local_listener parameter to <instance_name>_local, repeating the following on all nodes for each instance:

$ sqlplus / as sysdba
SQL> alter system set local_listener='<instance_name>_LOCAL' scope=both sid='<instance_name>';

4.6. Enable AutoConfig on All Database Nodes

4.6.1 Steps to Perform on All Oracle RAC Nodes

Ensure that you have applied the Oracle E-Business Suite patches listed in the prerequisites section.
Execute $AD_TOP/bin/admkappsutil.pl on the applications tier to generate an appsutil.zip file for the database tier.
Copy (e.g. via ftp) the appsutil.zip file to the database tier <11gR2_ORACLE_HOME>.
Unzip the appsutil.zip file to create the appsutil directory in the <11gR2_ORACLE_HOME>.
Copy the jre directory from <SOURCE_ORACLE_HOME>/appsutil to <11gR2_ORACLE_HOME>/appsutil.
Set the following environment variables:

ORACLE_HOME = <11gR2_ORACLE_HOME>
LD_LIBRARY_PATH = <11gR2_ORACLE_HOME>/lib:<11gR2_ORACLE_HOME>/ctx/lib
ORACLE_SID = <instance name>
PATH = $PATH:$ORACLE_HOME/bin
TNS_ADMIN = $ORACLE_HOME/network/admin/<context_name>

As the APPS user, run the following command on the primary node to de-register the current configuration:

SQL> exec fnd_conc_clone.setup_clean;

From the <11gR2_ORACLE_HOME>/appsutil/bin directory, create an instance-specific XML context file by executing the command:

$ adbldxml.pl appsuser=<APPS user> appspass=<APPS password>

Provide the SCAN name and SCAN port when prompted.
Set the value of s_virtual_hostname to point to the virtual hostname for the database host, by editing the database context file $ORACLE_HOME/appsutil/<SID>_<hostname>.xml. From the <11gR2_ORACLE_HOME>/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script. Rerun AutoConfig on all nodes.

4.6.2 Shut Down Instances and Listeners

Use the following commands:

$ srvctl stop listener
$ srvctl stop database -d <database_name>

4.6.3 Update the Server Parameter File Settings

After the Oracle RAC conversion, you will have a central server parameter file (spfile). It is important to understand the Oracle RAC-specific changes that were introduced by AutoConfig, and to ensure that the context file is in sync with the database initialization parameters. The affected parameters are listed in the RAC template under <11gR2_ORACLE_HOME>/appsutil/template/afinit_db112RAC.ora, and are also listed below. Many will have been set by the conversion, and others are likely to be set by customers for non-RAC related reasons.

service_names - Oracle E-Business Suite customers may well have a variety of services already set. You must ensure that service_names includes %s_dbService% (the database name) across all instances.
local_listener - If you are using SRVCTL to manage your database, the installation guide recommends leaving this unset, as it is dynamically set during instance startup. However, using the AutoConfig <instance_name>_local alias will also work. If you are using a non-default listener, then this parameter must be set to <instance_name>_local.
remote_listener - If you are using AutoConfig to manage your connections, then the remote_listener must be set to the <database_name>_remote AutoConfig alias.

These parameters will all have been set as part of the conversion.
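As a hedged illustration of how the parameters just described might appear in the spfile for a two-node database called VIS (instance names, alias names, and values are placeholders, not output from an actual conversion):

```text
*.service_names='VIS'
VIS1.local_listener='VIS1_LOCAL'
VIS2.local_listener='VIS2_LOCAL'
*.remote_listener='VIS_REMOTE'
```

Per-instance settings use the SID prefix (VIS1., VIS2.), while the *. prefix applies a value to all instances; this is the standard spfile convention for RAC-wide versus instance-specific parameters.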
The following context variables should be updated to be in sync with the database:

cluster_database
cluster_database_instances
undo_tablespace
instance_name
instance_number
thread

4.6.4 Update the New listener.ora

If you intend to manage an Oracle E-Business Suite database with SRVCTL, you must perform the following additional steps:

Note: If you are using a shared Oracle Home, then TNS_ADMIN cannot be shared, as the directory path must be the same on all nodes. See Appendix F for an example of how to use SRVCTL to manage listeners in a shared Oracle Home.

If you wish to use the port allocated to the default listener, stop and remove the default listener.
Add an Oracle E-Business Suite listener using the following commands:

$ srvctl add listener -l <listener_name> -o <11gR2_ORACLE_HOME> -p <port>
$ srvctl setenv listener -l <listener_name> -T TNS_ADMIN=$ORACLE_HOME/network/admin

Note: If registering the listener with Cluster Services fails with a CRS-0254 authorization failure error, refer to the Known Issues section.

On each node, add the AutoConfig listener.ora as an ifile in the $ORACLE_HOME/network/admin/listener.ora.
On each node, add the AutoConfig tnsnames.ora as an ifile in the $ORACLE_HOME/network/admin/tnsnames.ora.
On each node, add the AutoConfig sqlnet.ora as an ifile in the $ORACLE_HOME/network/admin/sqlnet.ora.
Add TNS_ADMIN to the database using the following command:

$ srvctl setenv database -d <database_name> -T TNS_ADMIN=$ORACLE_HOME/network/admin

Start up the database instances and listeners on all nodes. The database can now be managed using SRVCTL.

4.7. Establish the Oracle E-Business Suite Environment for Oracle RAC

4.7.1 Preparatory Steps

Note: At this point, ensure that the Applications patching cycle has completed. If it has not, as the owner of the source administration server, run the following command to finish any in-progress adop session:

$ adop phase=prepare,cutover

Perform the following steps on all application tier nodes:

Source the Oracle E-Business Suite environment.
On both the Run and Patch file systems, edit INSTANCE_NAME=<instance_name> and PORT (if changed) in $TNS_ADMIN/tnsnames.ora to set up a connection to one of the instances in the Oracle RAC environment. Repeat this for all aliases, including the <sid>_patch alias.
Confirm that you are able to connect to one of the instances in the Oracle RAC environment from the Run file system.
On both the Run and Patch file systems, edit the context variable s_apps_jdbc_patch_connect_descriptor, and modify INSTANCE_NAME in the CONNECT_DATA parameter value to be one of the RAC instances. For example:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=<host>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=ebs_patch)(INSTANCE_NAME=<instance_name>)))

Run AutoConfig on both the Run and Patch file systems using the command:

$ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/<context_file>.xml

Note: AutoConfig will fail because adgentns.pl requires a Patch Edition, which does not exist yet. Ignore this error. For more information on AutoConfig, refer to the Technical Configuration section of the Oracle E-Business Suite Setup Guide Release 12.2.

Edit all aliases (two_task and patch) in the tnsnames.ora on the Patch file system to set up a connection to one of the RAC instances.
Execute the following command to sync the Run and Patch file systems:

$ adop phase=prepare,cutover

Check the $INST_TOP/admin/log/ AutoConfig log files for errors. Source the environment using the latest environment file generated.
Verify the tnsnames.ora and listener.ora files in the $INST_TOP/ora/10.1.2/network/admin directory (on both the Run and Patch file systems), and the JDBC URL in $FMW_HOME/user_projects/domains/EBS_domain_<sid>/config/jdbc/EBSDataSource--jdbc.xml. Ensure that the correct TNS aliases have been generated for load balancing and failover in each of these files, and that all the aliases are defined using the virtual hostnames.
11. In the dbc file located under $FND_SECURE, ensure that the parameter APPS_JDBC_URL is configured with all of the instances in the environment, and that load_balance is set to YES.

Note: To set up Parallel Concurrent Processing, follow Appendix I.

4.7.2 Set Up Load Balancing

The steps in this section describe how to implement load balancing for the Oracle E-Business Suite database connections.

Note: As in other steps, ensure that the Applications patching cycle is complete.

Note: After setting TWO_TASK to the load balancing alias, the adop cleanup and cutover phases will fail. To work around this problem for adop cleanup, copy the adalldefaults.txt file from $APPL_TOP/admin// to the $APPL_TOP/admin/ directory on the Run file system, then rerun the adop cutover phase.

Implement load balancing across the Oracle E-Business Suite database connections as follows:

1. Using the Context Editor (via the Oracle Applications Manager interface), modify the variables as follows:
- To load-balance the Oracle Forms database connections, set the value of "Tools OH TWO_TASK" (s_tools_twotask) to point to the _balance alias generated in the tnsnames.ora file.
- To load-balance the Self-Service (HTML-based) database connections, set the values of "iAS OH TWO_TASK" (s_weboh_twotask) and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias) to point to the _balance alias generated in the tnsnames.ora file.
2. Execute AutoConfig by running the command:
$ $AD_TOP/bin/adconfig.sh contextfile=$INST_TOP/appl/admin/
3. Restart the Oracle E-Business Suite processes using the new scripts generated by AutoConfig.
4. Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.

Note: If you are adding a new node to the application tier, repeat all the above steps to set up load balancing on the new application tier node.
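For illustration, a load-balancing alias of the kind AutoConfig generates (and that s_tools_twotask and s_weboh_twotask point to) typically has the following shape. The service name EBSDB, the hostnames, and the port below are hypothetical placeholders, not values from this document; the alias AutoConfig actually generates in your tnsnames.ora is authoritative.

```text
EBSDB_BALANCE=
    (DESCRIPTION=
        (ADDRESS_LIST=
            (LOAD_BALANCE=YES)
            (FAILOVER=YES)
            (ADDRESS=(PROTOCOL=tcp)(HOST=node1-vip.example.com)(PORT=1531))
            (ADDRESS=(PROTOCOL=tcp)(HOST=node2-vip.example.com)(PORT=1531))
        )
        (CONNECT_DATA=
            (SERVICE_NAME=EBSDB)
        )
    )
```

With LOAD_BALANCE=YES, Oracle Net picks an address from the list at random for each new connection; with FAILOVER=YES, it tries the next address if the chosen one is unreachable.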
Section 5: Use Rapid Install to Install a RAC-Configured Oracle E-Business Suite Release 12.2 System

Oracle E-Business Suite Release 12.2 introduces Oracle Real Application Clusters database installation using Rapid Install. This allows a number of different configuration options, including shared Oracle Home and both ASM and shared file system database storage. This part of the document is divided into the following sections:

5.1: Configuration Prerequisites
5.2: Install Oracle E-Business Suite Release 12.2 with Oracle Database 11gR2 on Cluster Nodes

5.1: Configuration Prerequisites

Oracle E-Business Suite Release 12.2 with an Oracle Real Application Clusters (RAC) database supports various different configurations. The prerequisites for these are divided as follows:

5.1.1 Cluster Prerequisites
5.1.2 Shared File System Prerequisites
5.1.3 Shared Oracle Home Prerequisites
5.1.4 Database Software Install Prerequisites

5.1.1 Cluster Prerequisites

NOTE: If you have installed Oracle Grid Infrastructure 11.2.0.4 or higher, refer to Appendix H for known issues and workarounds.

Ensure that you have already installed the Oracle Grid Infrastructure as per Section 3. Ensure that cluster services are up and running, and in particular the SCAN_LISTENER and LISTENER:

$ $CRS_HOME/bin/crs_stat

5.1.2 Shared File System Prerequisites

Shared storage configuration was discussed in Section 3. For ASM, verify that the disk group is created and sized appropriately for the Oracle E-Business Suite database install. The value shown for Free_MB in the following output should be greater than, or equal to, your required size.

$ $CRS_HOME/bin/asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576    244995   244547                0          244547              0             Y  DATA

5.1.3 Shared Oracle Home Prerequisites

Rapid Install supports a shared Oracle Home install. The Oracle Home directory should be mounted on all nodes.
Rapid Install checks the availability of the directory. If you plan to use a shared Oracle Home, ensure that the database directory is mounted with the required mount options and shared across all of the cluster nodes as per Knowledge Document 359515.1, Mount Options for Oracle files when used with NFS on NAS devices. For example, the mount options for Linux x86-64 are as follows:

rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0

Note: Rapid Install does not currently support installing the Oracle Home into an ACFS (ASM-based) file system.

5.1.4 Database Software Install Prerequisites

Note: If you are planning to install Oracle E-Business Suite with a different user (other than the grid user), ensure that the asmadmin and asmdba groups are assigned to the grid user. Refer to the Oracle ASM Groups for Job Role Separation Installations section of Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux. Refer to Appendix G for an example of creating users and groups.

If you are planning to use a different user for the database software, the following additional steps are required. For further information, refer to Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

If you are using ASM, use the following command to add asmdba to the oracle user's groups:

$ /usr/bin/usermod -G dba,asmdba,asmadmin oracle

Rapid Install uses asmcmd, a command-line utility that you can use to manage Oracle ASM instances, disk groups, file access control for disk groups, and so on. The default permissions on the files and directories in the Oracle Grid Home will cause asmcmd, and therefore Rapid Install, to fail for non-Grid users. To avoid this problem, temporarily change permissions as shown in the example below. For further information, refer to Knowledge Document 1295851.1, ASMCMD Fails From Non-grid User Even if it Belongs To Same Groups. If you use the workaround, reset the permissions once Rapid Install has completed.
$ cd /perl
$ chmod -R 751 bin lib man    (original permissions are 700)
$ cd /lib
$ chmod 660 libexpat.so.1     (original permissions are 600)
$ cd /log
$ chmod 770

Note: You should use the same inventory as used by the Oracle Grid install.

5.1.5 Other Considerations

Run cluvfy manually from the Grid Home before starting the installation:

$ cluvfy stage -pre dbinst -n 

Check that the SCAN listeners and Grid local listeners are up and running, as they are used by Rapid Install to convert the database to Oracle RAC. To avoid listener port conflicts, always select a different port pool for the Oracle E-Business Suite local listener and the Grid local listener. The Grid local listener uses port 1521 by default, so do not use that port for Oracle E-Business Suite unless you changed it during the Grid installation.

In a split-tier configuration, keep the SCAN and VIP host details in the /etc/hosts file on the application tier server. Verify that the SCAN and VIP hosts can be pinged from the server where you are going to install the application tier.

5.2: Install Oracle E-Business Suite Release 12.2 with Oracle Database 11gR2 on Cluster Nodes

Rapid Install allows several different configuration options, including, for example, a shared Oracle Home installed on either ASM or a shared file system.

5.2.1 The Rapid Install

This section describes the actions the Oracle E-Business Suite Release 12.2 Rapid Install performs.

5.2.1.1 Installation Details

The installation phase carries out the following tasks:

1. Installs the database software on all selected nodes. If a shared Oracle Home is selected, the installer identifies the type of Oracle Home and performs the installation.
2. Uses RMAN to restore the database on to the nominated shared storage (either ASM or shared file system).
3. Configures the Oracle database for Oracle E-Business Suite.
4. Uses rconfig to convert the database to Oracle RAC.
5. Runs AutoConfig on all nodes.

Figure 1 shows the Oracle E-Business Suite Release 12.2 Rapid Install Database Node page, which has the following Oracle RAC options:

RAC Enabled
Storage Type
Shared Oracle Home
RAC Nodes
Instance Prefix

When you select the Oracle RAC option, ASM storage is used by default. If you are not using ASM, you must specify file system storage. Select the appropriate check box if using a shared Oracle Home. When you click the RAC Nodes button, the nodes list opens so you can select the required nodes for installation. There is also an option to change the instance prefix. After you have selected the relevant options, the installer checks that the prerequisites have been met. This may take a few minutes.

Validation Checks

Rapid Install performs its standard validation checks for temporary space, swap space, and so on, and in an Oracle RAC installation also performs cluster verification. If there are any prerequisite failures, it will create an output file cluvfy_output_.lst under the temporary directory. It is essential that you review this file and resolve any problems. If there are any issues that can be fixed automatically, Rapid Install will create a fixup.sh script, also in the temporary directory.

While converting the Oracle database to Oracle RAC, rconfig uses the Oracle Grid local listener, but once the AutoConfig configuration completes, the system will use the SCAN listener and the Oracle E-Business Suite database listener.

5.2.2 Post Install Steps

Update SRVCTL for the New listener.ora

If you intend to manage an Oracle E-Business Suite database with SRVCTL, you must perform the following additional steps:

Note: If you are using a shared Oracle Home, TNS_ADMIN cannot be shared, as the directory path must be the same on all nodes. See Appendix F for an example of how to use SRVCTL to manage listeners in a shared Oracle Home.
1. If you wish to use the port allocated to the default listener, stop and remove the default listener.
2. Add the Oracle E-Business Suite listener as follows:
$ srvctl add listener -l <listener_name> -o <11gR2 ORACLE_HOME> -p <port>
$ srvctl setenv listener -l <listener_name> -T TNS_ADMIN=$ORACLE_HOME/network/admin
3. On each node, add the AutoConfig listener.ora as an ifile in $ORACLE_HOME/network/admin/listener.ora. The contents of the listener.ora file would be:
ifile=<11gR2 ORACLE_HOME>/network/admin//listener.ora
4. On each node, add the AutoConfig tnsnames.ora as an ifile in $ORACLE_HOME/network/admin/tnsnames.ora.
5. On each node, add the AutoConfig sqlnet.ora as an ifile in $ORACLE_HOME/network/admin/sqlnet.ora.
6. Add TNS_ADMIN to the database as follows:
$ srvctl setenv database -d <database_name> -T TNS_ADMIN=$ORACLE_HOME/network/admin
7. Start up the database instances and listeners on all nodes. The database can now be managed using SRVCTL.

5.2.3 Application Tier Install

Typically, the application tier is not on the database machine. In such a case, you need to copy the configuration file to the application tier node(s) and install there. If you prefer to load the configuration from the database instead, use the Applications database listener port and the database name for the SID. For example: <host>:<port>:<dbname>.

Rapid Install creates the application tier with the Patch and Run file systems, and the non-editioned file system. By default, all the database connections go through the SCAN listener. Verify the SCAN host and SCAN port in the following:

$TNS_ADMIN/tnsnames.ora
Context file: s_apps_jdbc_connect_descriptor and s_apps_jdbc_patch_connect_descriptor variables
$FND_SECURE/.dbc
The port and WebLogic Server data source: /user_projects/domains/EBS_domain_/config/jdbc/EBSDataSource--jdbc.xml

Note: It is highly recommended that you deploy a split-tier configuration and separate the application and database tiers.

5.2.4 Set Up Load Balancing

Follow the instructions given in Step 4.7.2 of Section 4.7.
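As an illustration of the ifile mechanism described above, each of the three files under $ORACLE_HOME/network/admin on a node contains a single pointer to the corresponding AutoConfig-generated file in that node's network directory. The directory name PROD1_node1 below is a hypothetical example of an AutoConfig context directory, not a value from this document:

```text
# $ORACLE_HOME/network/admin/listener.ora on node 1
ifile=/u01/oracle/11.2.0/network/admin/PROD1_node1/listener.ora

# $ORACLE_HOME/network/admin/tnsnames.ora on node 1
ifile=/u01/oracle/11.2.0/network/admin/PROD1_node1/tnsnames.ora

# $ORACLE_HOME/network/admin/sqlnet.ora on node 1
ifile=/u01/oracle/11.2.0/network/admin/PROD1_node1/sqlnet.ora
```

Because the ifile pointer differs per node, regenerating the AutoConfig files does not require touching the files that SRVCTL and the listener read directly.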
5.3 Known Issues

First, check that all the prerequisites have been met. If you see any errors, check the Rapid Install log. For the Oracle database tier: ORACLE_HOME/appsutil/log//.log. For the application tier(s): $INST_TOP/apps//logs/.log. Finally, review the OUI inventory log /logs/install.log.

If the error reported is PRVF-5640, check that /etc/resolv.conf does not have multiple domains defined.

When Rapid Install completes, the remote_listener parameter is set to :. Update the remote_listener parameter to _remote on all instances, using the following command:

SQL> alter system set remote_listener=_remote scope=both sid='';

Also change the s_instRemoteListener context variable in the context file and run AutoConfig.

During the E-Business Suite 12.2.x RAC installation, you may encounter the following error during prerequisite checking:

PRVF-7617 : Node connectivity between " : " and " : " failed
INFO: Cause: Node connectivity between the two interfaces identified ( : ) could not be verified.

If so, modify /TechInstallMedia/database/client/stage/cvu/cv/admin/cvu_config, change the file as follows, and then restart the installation:

#CV_ASSUME_CL_VERSION=10.2
CV_ASSUME_CL_VERSION=11.2

Section 6: References

Knowledge Document 745759.1, Oracle E-Business Suite and Oracle Real Application Clusters Documentation Roadmap
Knowledge Document 384248.1, Sharing The Application Tier File System in Oracle E-Business Suite Release 12
Knowledge Document 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone
Knowledge Document 240575.1, RAC on Linux Best Practices
Knowledge Document 265633.1, Automatic Storage Management Technical Best Practices
Knowledge Document 881506.1, Oracle Applications Release R12 with Oracle 11g Release 2
Oracle E-Business Suite Installation Guide: Using Rapid Install, Release 12.2, Part No. E22950
Oracle E-Business Suite Setup Guide, Release 12.2, Part No.
E22953

Appendices

The appendices are divided as follows:

Appendices A and B are intended to help with migrating a non-Oracle RAC system to Oracle RAC.
Appendices C, D, E, and F are intended to help with Rapid Install usage in setting up Oracle RAC.
Appendix G describes role separation.
Appendix H lists Grid Infrastructure known issues.
Appendix I describes how to set up Parallel Concurrent Processing (PCP).

Migrating a non-RAC system to RAC

Appendix A: Sample Config XML File
Appendix B: Database Conversion - Known Issues

Appendix A: Sample Config XML File

This appendix shows example contents of an rconfig XML input file. Comments have been added to the code, and notes have been inserted between sections of code. In the example, the source and target Oracle Homes are /oracle/product/11.1.0/db_1, the database SID is sales, the database credentials are sys/oracle as sysdba, and the ASM credentials are sys/welcome as sysdba.

Note: The Convert verify option in the ConvertToRAC.xml file can take one of three values: YES, NO, or ONLY.

1. YES: rconfig performs prerequisite checks and then starts the conversion.
2. NO: rconfig does not perform prerequisite checks prior to starting the conversion.
3. ONLY: rconfig only performs the prerequisite checks and does not start the conversion.

In order to validate and test the settings specified for converting to RAC with rconfig, it is advisable to execute rconfig using Convert verify="ONLY" prior to carrying out the actual conversion.

Note: rconfig can also migrate a single instance database to ASM storage. If you want to use this option, specify the ASM parameters as per your environment in the above XML file. The ASM instance name specified above is only the current node's ASM instance. Ensure that the ASM instances on all the nodes are running and that the required disk groups are mounted on each of them.
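Since the XML markup itself does not survive in this copy of the appendix, the following is a sketch of the typical shape of a ConvertToRAC.xml input file, assembled from the example values mentioned in this appendix (Oracle Homes, SID sales, sys/oracle and sys/welcome credentials, +ASMDG). The node names node1/node2 and the +ASM1 instance name are illustrative placeholders; verify every element against the sample XML files shipped with rconfig in your Oracle Home before use.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<n:RConfig xmlns:n="http://www.oracle.com/rconfig"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <!-- verify can be YES, NO, or ONLY, as described in the note above -->
    <n:Convert verify="ONLY">
      <n:SourceDBHome>/oracle/product/11.1.0/db_1</n:SourceDBHome>
      <n:TargetDBHome>/oracle/product/11.1.0/db_1</n:TargetDBHome>
      <n:SourceDBInfo SID="sales">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>oracle</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <!-- ASMInfo is needed only when migrating the database to ASM storage -->
      <n:ASMInfo SID="+ASM1">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>welcome</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:ASMInfo>
      <n:NodeList>
        <n:Node name="node1"/>
        <n:Node name="node2"/>
      </n:NodeList>
      <n:InstancePrefix>sales</n:InstancePrefix>
      <n:SharedStorage type="ASM">
        <n:TargetDatabaseArea>+ASMDG</n:TargetDatabaseArea>
        <n:TargetFlashRecoveryArea>+ASMDG</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>
```

Running rconfig first with verify="ONLY" checks the prerequisites without changing anything, which is the validation approach this appendix recommends.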
The ASM disk groups can be identified by issuing the following statement when connected to the ASM instance:

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;

Note: rconfig can also migrate a single instance database to ASM storage. If you want to use this path, specify the ASM parameters as per your environment in the above XML file. If you are using CFS for your current database files, specify "NULL" to use the same location, unless you want to switch to another CFS location. If you specify a path for the TargetDatabaseArea, rconfig will convert the files to the Oracle Managed Files nomenclature.

Appendix B: Database Conversion - Known Issues

Database Upgrade Assistant (DBUA)

If DBUA is used to upgrade an existing AutoConfig-enabled Oracle RAC database, you may encounter an error about a pre-11gR2 listener existing in CRS. In such a case, copy the AutoConfig listener.ora to the <11gR2_ORACLE_HOME>/network/admin directory, and merge its contents with the existing listener.ora file.

Migration to RAC and RAC installation

Appendix C: Example Grid Installation
Appendix D: Enabling/Disabling SCAN Listener Support in AutoConfig
Appendix E: Instance and Listener Interaction
Appendix F: Shared ORACLE_HOME and TNS_ADMIN

Appendix C: Example Grid Installation

The following assumes a fresh Grid install and is intended for those less experienced with Clusterware, or who may be doing a test install.

1. Start the Installer.
2. Choose "Install and Configure Grid Infrastructure for a Cluster". Click "Next".
3. Choose "Advanced Configuration". This is needed when specifying a SCAN name that is different from the cluster name. Click "Next".
4. Choose Languages. Click "Next".
5. Uncheck "Configure GNS" - this is for experienced users only. Enter the cluster name, SCAN name, and SCAN port. Click "Next".
6. Add the hostnames and virtual IP names for the nodes in the cluster. Click "SSH Connectivity". Click "Test".
7. If SSH is not established, enter the OS user and password and let the installer set up passwordless connectivity. Click "Test" again, and if successful click "Next".
8. Choose one interface as public and one as private. eth0 is usually set up as public, while eth1 is usually set up as private. Click "Next".
9. On the "Grid Infrastructure Management" page, uncheck the configuration repository option. Choose Shared File System. Click "Next".
10. Choose the required level of redundancy, and enter the location for the OCR disk. This must be located on shared storage. Click "Next".
11. Choose the required level of redundancy, and enter the location for the voting disk. This must be located on shared storage. Click "Next".
12. Choose the default of "Do not use" for IPMI. Click "Next".
13. Select an operating system group for the operator and dba accounts. For the purposes of this example installation, choose the same group, such as "dba", for both. Click "Yes" in the popup window that asks you to confirm that the same group should be used for both, then click "Next".
14. Enter the Oracle Base and Oracle Home. The Oracle Home should not be located under Oracle Base. Click "Next".
15. Enter the inventory location. Click "Next".
16. In the "Root Script Execution" page, either select or unselect the "Automatically run configuration scripts" option, as you prefer.
17. System checks are now performed. Fix any errors by clicking "Fix and Check Again", or check "Ignore All" and click "Next". If you are not familiar with the possible effects of ignoring errors, it is advisable to fix them.
18. Save the response file for possible future use, then click "Finish" to start the install.
19. You will be required to run various scripts as root during the install. Follow the relevant on-screen instructions.

Appendix D: Enabling/Disabling SCAN Listener Support in AutoConfig

Managing the SCAN listener is handled on the database server. All that is required for the middle tier is for AutoConfig to be re-run, to pick up the updated connection strings.
Switching from SCAN to non-SCAN

1. Set s_scan_name=null, s_scan_port=null, and s_update_scan=TRUE.
2. local_listener should be _local and remote_listener _remote (to allow failover aliases).
3. Run AutoConfig; it creates non-SCAN aliases in tnsnames.ora.
4. Run AutoConfig on the middle tier; it creates non-SCAN aliases in tnsnames.ora.

Re-enabling SCAN

1. Set s_scan_name=, s_scan_port=, and s_update_scan=TRUE.
2. Modify the remote_listener to : for all instances, using the SQL command:
SQL> alter system set remote_listener='...';
3. Run AutoConfig; it creates SCAN aliases in tnsnames.ora.
4. Run AutoConfig on the middle tier; it creates SCAN aliases in tnsnames.ora.

Appendix E: Instance and Listener Interaction

Understanding how instances and listeners interact is best done with a worked example. Consider a two-node Oracle RAC cluster, with nodes C1 and C2. In this example, two local listeners are used: the default listener and a listener called the "EBS" listener. There is nothing special about the latter name: it could equally well have been called the "ABC" listener, for example.

Listener Configuration

Listener Type    | Node            | SCAN Name | Host Name | VIP Name | Listener Host   | Listener Port | Listener Address
EBS listener     | C1              | N/A       | C1        | C1-VIP   | C1              | 1531          | C1 and C1-VIP
EBS listener     | C2              | N/A       | C2        | C2-VIP   | C2              | 1531          | C2 and C2-VIP
Default listener | C1              | N/A       | C1        | C1-VIP   | C1              | 1521          | C1 and C1-VIP
Default listener | C2              | N/A       | C2        | C2-VIP   | C2              | 1521          | C2 and C2-VIP
SCAN             | Either C1 or C2 | C-SCAN    | N/A       | N/A      | Either C1 or C2 | 1521          | C-SCAN

Note the following:

The SCAN and local listeners can listen on the same port, as they listen on different addresses.
The SCAN listener can run on either C1 or C2.
Listeners have no built-in relationship with instances.
SRVCTL Configuration

Listener Type    | Listener Name                                 | Listener Port | Listener Host   | Listener Address
General [local]  | listener                                      | 1521          | C1              | C1 and C1-VIP
General [local]  | listener                                      | 1521          | C2              | C2 and C2-VIP
General [local]  | ebs_listener                                  | 1531          | C1              | C1 and C1-VIP
General [local]  | ebs_listener                                  | 1531          | C2              | C2 and C2-VIP
SCAN             | SCAN [name doesn't matter and can be default] | 1521          | Either C1 or C2 | C-SCAN

Instance to Listener Assignment

The relationship between instances and listeners is established by the local_listener and remote_listener init.ora parameters.

local_listener: The instance broadcasts to the address list, informing the listeners that the instance is now available. The local listener must be running on the same node as the instance, as the listener spawns the oracle processes. The default value comes from the cluster.

remote_listener: The instance broadcasts to the address list, informing the listeners that the instance is now available for accepting requests, and that the requests are to be handled by the local_listener address. The remote hosts can be on any machine. There is no default value for this parameter.

For database D1 with instances I1 (on node C1) and I2 (on node C2), the listener status varies with the parameter settings as follows:

Instance I1 on C1:
- local_listener set to C1 and C1-VIP on 1531; remote_listener C-SCAN/1521: I1 is unavailable via the default listener; I1 is available via the EBS listener; I1 is available via the SCAN listener by redirect to the EBS listener for C1.
- local_listener set to C1 and C1-VIP on 1531; remote_listener C1/C1-VIP on 1531 and C2/C2-VIP on 1531: I1 and I2 are unavailable via the default listener; via the EBS listener, I1 is available and I2 is available by redirect to the EBS listener for C2; I1 is not available via the SCAN listener.
- local_listener not set (instance uses the cluster default listener, i.e. C1 and C1-VIP on 1521); remote_listener C-SCAN/1521: I1 is available via the default listener; I1 is unavailable via the EBS listener; I1 is available via the SCAN listener by redirect to the default listener for C1.

Instance I2 on C2:
- local_listener set to C2 and C2-VIP on 1531; remote_listener C-SCAN/1521: I2 is unavailable via the default listener; I2 is available via the EBS listener; I2 is available via the SCAN listener by redirect to the EBS listener for C2.
- local_listener set to C2 and C2-VIP on 1531; remote_listener C1/C1-VIP on 1531 and C2/C2-VIP on 1531: I2 and I1 are unavailable via the default listener; via the EBS listener, I2 is available and I1 is available by redirect to the EBS listener for C1; I2 is not available via the SCAN listener.
- local_listener not set (instance uses the cluster default listener, i.e. C2 and C2-VIP on 1521); remote_listener C-SCAN/1521: I2 is available via the default listener; I2 is unavailable via the EBS listener; I2 is available via the SCAN listener by redirect to the default listener for C2.

Appendix F: Shared ORACLE_HOME and TNS_ADMIN

In Oracle 11gR2, listeners are configured at the cluster level, and all nodes inherit the port and environment settings. This means that the TNS_ADMIN directory path will be the same on all nodes. In a shared ORACLE_HOME configuration, the TNS_ADMIN directory must therefore be a local, non-shared directory in order to be able to use the AutoConfig-generated network files. These network files are included as ifiles.

The following is an example of setting up TNS_ADMIN for a shared ORACLE_HOME in a two-node cluster, C1 and C2, with respective instances I1 and I2.

1. Run AutoConfig on both nodes. This will create listener.ora and tnsnames.ora under the node network directories, i.e. /network/admin/ and .
2. Edit the AutoConfig listener.ora files and change LISTENER_ to the listener common name. Skip this step if you have applied the db listener patch.
3. Create a local (non-shared) TNS_ADMIN directory, e.g. /etc/local/network_admin.
4. Create a listener.ora under the TNS_ADMIN directory on each node, containing an ifile pointer to that node's AutoConfig listener.ora:
node C1: ifile=/network/admin//listener.ora
node C2: ifile=/network/admin//listener.ora
5. Create a tnsnames.ora under the TNS_ADMIN directory on each node:
node C1: ifile=/tnsnames.ora
node C2: ifile=/tnsnames.ora
6. Add the common listener name to the cluster and set TNS_ADMIN to the non-shared directory:
srvctl add listener -l <listener_name> -o <ORACLE_HOME> -p <port>
srvctl setenv listener -l <listener_name> -t TNS_ADMIN=<non-shared TNS_ADMIN directory>

Appendix G: Role Separation

Create Job Role Separation Operating System Privilege Groups, Users, and Directories

This section provides instructions on how to create the operating system users and groups needed to install all Oracle software using a Job Role Separation configuration. Job Role Separation is an operating system configuration that divides the administration privileges among different groups.
In this section, as an example, grid is the owner of the Grid Infrastructure software and the Oracle Automatic Storage Management binaries, and oracle is the owner of the Oracle RAC software binaries. Both users must have the Oracle Inventory group as their primary group (for example, oinstall). Several operating system groups can be created in order to divide the administration privileges, as explained in the following table.

Description                               | OS Group Name | OS Users Assigned to this Group | Oracle Privilege | Oracle Group Name
Oracle Inventory and Software Owner       | oinstall      | grid, oracle                    | -                | -
Oracle Automatic Storage Management Group | asmadmin      | grid                            | SYSASM           | OSASM
ASM Database Administrator Group          | asmdba        | grid, oracle                    | SYSDBA for ASM   | OSDBA for ASM
ASM Operator Group                        | asmoper       | grid                            | SYSOPER for ASM  | OSOPER for ASM
Database Administrator                    | dba           | oracle                          | SYSDBA           | OSDBA
Database Operator                         | oper          | oracle                          | SYSOPER          | OSOPER

Example: Creating the groups and users

Create the oinstall, asmadmin, asmdba, and asmoper (optional) groups. Perform the following as the root user:

# groupadd -g 9999 oinstall
# groupadd -g 8888 asmadmin
# groupadd -g 7777 asmdba
# groupadd -g 6666 asmoper

The following command creates a user named grid (which owns the Grid Infrastructure) and assigns the necessary groups to that user. Remember to set the grid user's password.

# useradd -g oinstall -G asmadmin,asmdba,asmoper grid

Create the groups for the Oracle software:

# groupadd -g 1010 dba
# groupadd -g 1020 oper

The following command creates a user named oracle (which will own the Oracle RAC software) and assigns the asmadmin and asmdba groups, which is necessary when using different users for Grid and Oracle:

# useradd -g oinstall -G dba,oper,asmadmin,asmdba oracle

Remember to set the resource limits for the Oracle software installation users as per the documentation.
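As a quick sanity check after creating the users, you can compare each user's secondary groups against the list required for its role. The helper below is a generic sketch, not part of the original note; the check_groups function and the group lists are illustrative, and in a real environment you would pass "$(id -Gn oracle)" as the first argument.

```shell
#!/bin/sh
# Groups required for the oracle user in the role-separation example above.
required_for_oracle="oinstall dba oper asmadmin asmdba"

# check_groups <actual groups> <required groups>
# Prints "ok" if every required group appears in the actual list,
# otherwise prints the first missing group and returns non-zero.
check_groups() {
    user_groups="$1"
    required="$2"
    for g in $required; do
        case " $user_groups " in
            *" $g "*) ;;                        # group present, keep going
            *) echo "missing: $g"; return 1 ;;  # group absent
        esac
    done
    echo "ok"
}

# Illustrative call with a hard-coded group list instead of id -Gn output.
check_groups "oinstall dba oper asmadmin asmdba extra" "$required_for_oracle"
```

Running the script as shown prints "ok", because every required group appears in the supplied list; a missing group would instead be reported by name.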
Appendix H: Higher Version of Grid Infrastructure - Known Issues

Using Grid Infrastructure 11.2.0.4.0

Installing Oracle E-Business Suite 12.2/11.2.0.3 RAC using Rapid Install on an Oracle 11.2.0.4.0 cluster fails while running the adRacAutoConfig script. To fix this issue:

1. Download and apply Patch 18848525 to the 11.2.0.3.0 Oracle Home. The patch Readme is very large, and you only need to run the following commands:
$ /18848525/custom/server/18848525/custom/scripts/prepatch.sh -dbhome 
$ /OPatch/opatch napply -oh  -local /18848525/custom/server/18848525
$ /18848525/custom/server/18848525/custom/scripts/postpatch.sh -dbhome 
2. Ensure that you have the old (single instance) context file present in $ORACLE_HOME/appsutil/. If you don't, copy the newly created RAC-specific context file to the old non-RAC context file name - for example: cp prac1_.xml prac_.xml.
3. Rerun the $ORACLE_HOME/temp//adRacAutoConfig.sh script.
4. Upload the configuration file to complete the installation:
java -classpath /jdbc/lib/ojdbc6.jar:/appsutil/java/xmlparserv2.jar:/appsutil/java oracle.apps.ad.autoconfig.oam.CtxSynchronizer contextfile=$CONTEXT_FILE action=upload upload=$ORACLE_HOME/appsutil/conf_.txt

Grid Infrastructure 12.1.0.1

Installing Oracle E-Business Suite 12.2/11.2.0.3 RAC using Rapid Install on an Oracle Database 12c cluster fails while running the adRacAutoConfig script, and rconfig fails to register the database in the cluster. To fix this issue:

1. Download and apply Patch 18848525 to the 11.2.0.3.0 Oracle Home. Follow the instructions in the patch Readme.
2. Register the database in the cluster using the following commands:
$ srvctl add database -d <database_name> -o $ORACLE_HOME
$ srvctl add instance -d <database_name> -i <instance_name> -n <node>
Note: Repeat the srvctl add instance command to add all the instances on their respective nodes.
3. When you rerun adRacAutoConfig, it will fail to start the listener due to Bug 18892986. The problem is that the context variable s_virtual_hostname has been updated with both the hostname and domain instead of just the hostname.
To work around this problem, the simplest solution is to remove the domain part of the name from s_virtual_hostname. The domain name is also appended twice in the listener.ora; you need to remove one of the domain name entries from the listener.ora. Note that this needs to be performed on each node. Then restart the listener and run AutoConfig on all nodes:

$ lsnrctl start 
$ $ORACLE_HOME/appsutil/scripts//adautoconfig.sh

Upload the configuration file to complete the installation:

java -classpath /jdbc/lib/ojdbc6.jar:/appsutil/java/xmlparserv2.jar:/appsutil/java oracle.apps.ad.autoconfig.oam.CtxSynchronizer contextfile=$CONTEXT_FILE action=upload upload=$ORACLE_HOME/appsutil/conf_.txt

Appendix I: Configure Parallel Concurrent Processing

Check prerequisites for setting up Parallel Concurrent Processing

Parallel Concurrent Processing (PCP) spans two or more nodes. If you need to add nodes, follow the relevant instructions in My Oracle Support Knowledge Document 1383621.1, Cloning Oracle Applications Release 12 with Rapid Clone.

Note: If you are planning to implement a shared application tier file system, refer to My Oracle Support Knowledge Document 1375769.1, Sharing the Application Tier File System in Oracle E-Business Suite Release 12, for configuration steps.

If you are adding a new concurrent processing node to the application tier, you will need to set up load balancing on the new application tier node by repeating the steps in Section 4.7.2.

Set Up PCP

1. Edit the applications context file via Oracle Applications Manager, and set the value of the variable APPLDCP to ON.
2. Source the Applications environment.
3. Execute AutoConfig by running the following command on all concurrent processing nodes:
$ $INST_TOP/admin/scripts/adautocfg.sh
4. Check the tnsnames.ora and listener.ora configuration files, located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent processing nodes.
5. Restart the Applications listener processes on each application tier node.
6. Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to the Install > Nodes screen, and ensure that each node in the cluster is registered.
7. Verify that the Internal Monitor for each node is defined properly, with the correct primary node specification and work shift details. For example, Internal Monitor: Host1 must have host1 as its primary node. Also ensure that the Internal Monitor manager is activated: this can be done from Concurrent > Manager > Administer.
8. Set the $APPLCSF environment variable on all the concurrent processing nodes to point to a log directory on a shared file system.
9. Set the $APPLPTMP environment variable on all the concurrent processing nodes to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. (This value should point to a directory on a shared file system.)
10. Set the profile option 'Concurrent: PCP Instance Check' to OFF if database instance-sensitive failover is not required. By setting it to 'ON', a concurrent manager will fail over to a secondary application tier node if the database instance to which it is connected becomes unavailable for some reason.

Set Up Transaction Managers

1. Shut down the application services (servers) on all nodes.
2. Shut down all the database instances cleanly in the Oracle RAC environment, using the command:
SQL> shutdown immediate;
3. Edit $ORACLE_HOME/dbs/_ifile.ora and add the following parameters:
_lm_global_posts=TRUE
_immediate_commit_propagation=TRUE
4. Start the instances on all database nodes, one by one.
5. Start up the application services (servers) on all nodes.
6. Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction manager works across the Oracle RAC instances.
Navigate to the Concurrent > Manager > Define screen, and set up the primary and secondary node names for the transaction managers. Restart the concurrent managers. If any of the transaction managers are in deactivated status, activate them from Concurrent > Manager > Administer.

Set Up Load Balancing on Concurrent Processing Nodes

1. Edit the applications context file through the Oracle Applications Manager interface, and set the value of Concurrent Manager TWO_TASK (s_cp_twotask) to the load balancing alias (<service_name>_balance).
   Note: Windows users must set the value of "Concurrent Manager TWO_TASK" (s_cp_twotask context variable) to the instance alias.
2. Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on all concurrent processing nodes.

Note: For further details on Concurrent Processing, refer to the Product Information Center (PIC) (Doc ID 1304305.1).

Change Log

22-Jan-2015  Added Appendix I for PCP configuration.
22-Jul-2014  Editorial review; updated for 11.2.0.4 and 12c cluster.
25-Feb-2014  Updated for 11.2.0.4 database.
15-Apr-2013  Implemented remarks (Section 4.7.2: s_tools_two_task to s_tools_twotask).
06-Jul-2012  Updated Known Issues section; added remote_listener steps.
03-Dec-2012  Initial draft.

My Oracle Support Knowledge Document 1453213.1 by Oracle E-Business Suite Development

Documentation Notices

Copyright © 2012, 2014, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc.
AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support: Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
