What is oraInventory?
oraInventory is the location for the OUI's (Oracle Universal Installer's) bookkeeping.
The inventory stores information about:
All the Oracle software products installed in all the ORACLE_HOMEs on a machine
Other non-Oracle products, such as the Java Runtime Environment (JRE)
Binary OraInventory
Before OUI 2.x (11.5.7 or earlier), the inventory was binary: the binary oraInventory maintains the inventory in binary format.
XML Inventory
Starting with OUI 2.x and 11.5.8, information in the inventory is stored in Extensible Markup Language (XML) format.
The XML format allows easier diagnosis of problems and faster loading of data.
The XML inventory is divided into two components:
Global Inventory
The Global Inventory holds information about the Oracle products on a machine. It contains the high-level list of all Oracle products installed on the machine, such as
ORACLE_HOMEs and JREs.
It does not hold any details of the patches applied to each ORACLE_HOME.
There should be only one Global Inventory per machine. Its location is defined in oraInst.loc in /etc (on Linux) or /var/opt/oracle (on Solaris).
Local Inventory
There is one local inventory per ORACLE_HOME.
The inventory inside each Oracle home is called the local inventory, or ORACLE_HOME inventory. This inventory holds information pertaining to that ORACLE_HOME only.
Can I have multiple Global Inventories on a machine?
Yes, you can have multiple Global Inventories on a machine, but when upgrading or applying a patch you must point the inventory pointer file
oraInst.loc to the respective location.
If you are using a single Global Inventory and you wish to uninstall any software, remove it from the Global Inventory as well.
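As a sketch (the alternate oraInst.loc path shown here is hypothetical), you can also point OUI or OPatch at a specific inventory with the -invPtrLoc flag instead of editing the central pointer file:
./runInstaller -invPtrLoc /u01/app/oracle/oraInst.loc
$ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc /u01/app/oracle/oraInst.loc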
What to do if my Global Inventory is corrupted?
If your Global Inventory is corrupted, you can recreate it on the machine using the Universal Installer and attach the already installed Oracle homes with the
-attachHome option:
./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc ORACLE_HOME=Oracle_Home_Location ORACLE_HOME_NAME=Oracle_Home_Name CLUSTER_NODES={}
Do I need to worry about oraInventory during Oracle Apps 11i cloning?
No. Rapid Clone updates both the Global and Local Inventories with the required information; you do not have to worry about the inventory during Oracle Apps 11i cloning.
How do I move oraInventory from one location to another?
Find the current location of the central inventory (normally $ORACLE_BASE/oraInventory):
Open the oraInst.loc file in /etc and check the value of inventory_loc:
cat /etc/oraInst.loc
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
Remark: The oraInst.loc file is simply a pointer to the location of the central inventory (oraInventory)
Copy the oraInventory directory to the destination directory
cp -Rp /u01/app/oracle/oraInventory /u02/app/oracle/oraInventory
Edit the oraInst.loc file to point to the new location
vi /etc/oraInst.loc
inventory_loc=/u02/app/oracle/oraInventory
inst_group=oinstall
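To confirm the moved inventory is readable, a quick check (assuming OPatch is available in one of the Oracle homes on the machine) is:
$ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc /etc/oraInst.loc
If the installed products and patches list correctly, the relocation worked.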
Tuesday, December 24, 2013
Oracle Data Integrator 11g SUPERVISOR user password reset
When you try to create a Master Repository connection from the Oracle Data Integrator 11g console, you might face an issue with the SUPERVISOR user's password.
Issue/Error:
oracle.odi.core.security.BadCredentialsException: Incorrect ODI username or password
at oracle.odi.core.security.SecurityManager.doODIInternalAuthentication(SecurityManager.java:392)
at oracle.odi.core.security.SecurityManager.createAuthentication(SecurityManager.java:260)
at oracle.odi.ui.docking.panes.OdiCnxFactory$1.run(OdiCnxFactory.java:214)
at oracle.ide.dialogs.ProgressBar.run(ProgressBar.java:655)
at java.lang.Thread.run(Thread.java:662)
Before resetting the password of the SUPERVISOR user, try the steps below.
1. SQL> drop user snpm1 cascade;
User dropped.
SQL> create user snpm1 identified by welcome123 default tablespace users temporary tablespace temp;
User created.
SQL>
SQL> grant connect, resource to snpm1;
Grant succeeded.
2. Recreate the connection in ODI Studio.
2.1 In the Master Repository Creation Wizard, click the browse icon for the JDBC Driver and select Oracle JDBC Driver. Click OK. Edit the JDBC URL to read:
jdbc:oracle:thin:@localhost:1521:orcl
Enter the User as snpm1 and the Password as welcome123 (the password used when creating the user above). Click the Test Connection button and verify a successful connection. Click OK. Click Next on the Master Repository Creation Wizard screen.
2.2 In the Authentication window, enter Supervisor Password as SUNOPSIS. Enter SUNOPSIS again to confirm the password. Click Next.
Note: User names and passwords are case-sensitive in ODI.
2.3 In the Password Storage window, select Internal password Storage, and then click Finish. When Master Repository is successfully created, you will see the Oracle Data Integrator Information message. Click OK. The ODI Master repository is now created.
The connection should be successful when you click Test.
3. You connect to the ODI Master repository by creating a new ODI Master Login. Open the New Gallery by choosing File > New. In the New Gallery, in the Categories tree, select ODI. From the Items list select Create a New ODI Repository login.
3.1 Configure Repository Connections with the parameters from the tables provided below. To enter the JDBC URL, click the button next to the JDBC URL field and select jdbc:oracle:thin:@<host>:<port>:<sid>, then edit the URL. Select the Master Repository only button. Click the Test button. Verify a successful connection and click OK. Click OK to save the connection.
4. Click Connect to Repository. Select the newly created repository connection, Master Repository, from the drop-down list. Click OK. The ODI Topology Manager starts. You are now successfully logged in to the ODI Topology Manager.
Follow the steps below to reset the SUPERVISOR user's password.
1. Log in to the database hosting the master repository (MREP) schema as the SYS user.
SQL> CONN SYS AS SYSDBA
Enter Password:
Connected.
2. Select the SNP_USER table to verify the username.
SQL> SELECT WUSER_NAME FROM SNPM1.SNP_USER;
WUSER_NAME
-----------------------------------------------------------------------------
SUPERVISOR
3. Using encode.sh, create an encrypted password:
C:\Oracle\Middleware\Oracle_ODI1\oracledi\agent\bin>encode SUNOPSIS
fJyHhR,tqjrxeWsfL,nicqy
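On Unix, the equivalent call (assuming a standard ODI 11g install layout; $ODI_HOME is a placeholder for your install directory) would be:
cd $ODI_HOME/oracledi/agent/bin
./encode.sh SUNOPSIS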
4. Update the SNP_USER table with the password generated by encode.sh for the SUPERVISOR user.
SQL> show user
USER is "SYS"
SQL>
SQL> update DEV_ODI_REPO.snp_user set PASS='fJyHhR,tqjrxeWsfL,nicqy' where WUSER_NAME='SUPERVISOR';
1 row updated.
or run the same update against your repository schema:
SQL> update SNPM1.snp_user set PASS='fJyHhR,tqjrxeWsfL,nicqy' where WUSER_NAME='SUPERVISOR';
1 row updated.
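These are plain DML statements, so commit before trying to log in:
SQL> commit;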
5. Now, log in to the ODI GUI.
ALTER USER COMMAND IN 11g
alter user identified by values in 11g
When executing ALTER USER commands in 11g, it is important to understand the mechanism: Oracle 11g supports both case-sensitive and case-insensitive passwords.
When issuing a CREATE/ALTER USER ... IDENTIFIED BY password, both the case-insensitive (10g) and the case-sensitive (11g) hashes are saved.
SQL> create user u identified by u;
User created.
SQL> grant create session to u;
Grant succeeded.
SQL> connect u/U
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> connect u/u
Connected.
By default, only the correct case works:
SQL> alter system set sec_case_sensitive_logon=false;
System altered.
SQL> connect u/U
Connected.
SQL> conn u/u
Connected.
When sec_case_sensitive_logon=false, both uppercase and lowercase passwords work (10g behavior).
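You can verify the current setting before and after the change with:
SQL> show parameter sec_case_sensitive_logon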
When issuing a CREATE/ALTER USER ... IDENTIFIED BY VALUES, you must choose whether you want both hashes, only the case-insensitive one, or only the case-sensitive one.
SQL> select password,spare4 from user$ where name='U';
PASSWORD
------------------------------
SPARE4
--------------------------------------------------------------
18FE58AECB6217DB
S:8B1765172812D9F6B62C2A2B1E5FEF203200A44B4B87F9D934DABBB809A4
The hashes are in USER$.
SQL> alter user u identified by values '18FE58AECB6217DB';
User altered.
SQL> alter system set sec_case_sensitive_logon=true;
System altered.
SQL> conn u/u
Connected.
SQL> conn u/U
Connected.
When only the 10g oracle hash is used as a value, the password is case insensitive whatever the setting of sec_case_sensitive_logon is.
SQL> alter user u identified by values 'S:8B1765172812D9F6B62C2A2B1E5FEF203200A44B4B87F9D934DABBB809A4';
User altered.
SQL> alter system set sec_case_sensitive_logon=false;
System altered.
SQL> conn u/u
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> conn u/U
ERROR:
ORA-01017: invalid username/password; logon denied
When only the 11g hash is used as the value, the password is case sensitive, and if sec_case_sensitive_logon is set to false the login fails because there is no 10g hash. This is probably the most secure setting, as the 10g hash is not saved in USER$.
SQL> alter user u identified by values 'S:8B1765172812D9F6B62C2A2B1E5FEF203200A44B4B87F9D934DABBB809A4;18FE58AECB6217DB';
User altered.
SQL> alter system set sec_case_sensitive_logon=true;
System altered.
SQL> conn u/u
Connected.
SQL> conn u/U
ERROR:
ORA-01017: invalid username/password; logon denied
Warning: You are no longer connected to ORACLE.
SQL> conn / as sysdba
Connected.
SQL> alter system set sec_case_sensitive_logon=false;
System altered.
SQL> conn u/u
Connected.
SQL> conn u/U
Connected.
When both hashes are present, the behavior follows sec_case_sensitive_logon: logins are case sensitive when it is true and case insensitive when it is false.
Sunday, December 22, 2013
VMWare download
https://download3.vmware.com/software/player/file/VMware-player-6.0.1-1379776.exe?HashKey=464345cd95ef4af59fa43693046ffb67&ext=.exe&params=%7B%22sourcefilesize%22%3A%2294M%22%2C%22dlgcode%22%3A%22PLAYER-601%22%2C%22languagecode%22%3A%22en%22%2C%22source%22%3A%22DOWNLOADS%22%2C%22downloadtype%22%3A%22manual%22%2C%22eula%22%3A%22N%22%2C%22downloaduuid%22%3A%22e43a4bc9-decd-418d-bb76-c96b625379f9%22%2C%22purchased%22%3A%22N%22%2C%22dlgtype%22%3A%22Product+Binaries%22%2C%22productversion%22%3A%226.0.1%22%2C%22productfamily%22%3A%22VMware+Player%22%7D&AuthKey=1387776536_577c10be09dc2fb141a05e087eb17f6a&ext=.exe
Install SOA 11g in SPARC 64 bit
http://jianmingli.com/wp/?p=1969
Install Oracle SOA 11g (11.1.1.4.0) on SPARC 64
Contents
1. Downloads
2. Check Versions
3. Install
1. Setup Env
2. Install WebLogic Server
3. Create Database Schema for BAM and SOA Servers
4. Install SOA
1. Unzip zip files
2. Run installer
3. Create Domains
5. Create boot.properties file
6. Include WebLogic Native Library
7. Adjust Memory Settings
8. Start/Stop Servers
1. Console URLs
4. Admin Console
1. Disable on-demand deployment of internal applications
5. Deinstall
1. Remove Domain
2. Deinstall SOA
1. Stop all WebLogic Servers
2. Start deinstaller
3. Remove user project domain directories
3. Drop Schema
4. Uninstall WebLogic
6. Issues
1. Unable to load performance pack. Using Java I/O instead
2. Native Library(terminalio) is not found
3. Numerous database connection exceptions in stdout
4. Could not obtain an exclusive lock to the embedded LDAP data files directory
5. PersistentStoreException
6. java.lang.OutOfMemoryError: PermGen space
7. References
Downloads
* Go to Oracle edelivery site and search for Fusion Middleware 11g Media Pack v23 for Oracle Solaris on SPARC (64-bit).
* Download
- Oracle WebLogic Server 11gR1 (10.3.4) Generic and Coherence: V24338-01.zip
- Oracle SOA Suite 11g Patch Set 3 (11.1.1.4.0) (Part 1 of 2): V24313-01_1of2.zip
- Oracle SOA Suite 11g Patch Set 3 (11.1.1.4.0) (Part 2 of 2): V24313-01_2of2.zip
- Oracle Fusion Middleware Repository Creation Utility 11g (11.1.1.4.0) for Microsoft Windows (32-bit): V24312-01.zip
Check Versions
* Check the Oracle database version. 10.2.0.4 or above is required, along with AL32UTF8 character set support.
SQL> SELECT * FROM v$version;

BANNER
----------------------------------------------------------------
Oracle DATABASE 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS FOR Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
* Also need AL32UTF8 character set support
SQL> SELECT parameter, value
FROM v$nls_parameters
WHERE parameter = 'NLS_CHARACTERSET';

PARAMETER VALUE
-------------------- --------------------
NLS_CHARACTERSET AL32UTF8
Install
Setup Env
* Login as user oracle.
# Set JAVA_HOME to jdk 1.6
export JAVA_HOME=/opt/oracle/jdk1.6.0_21
export PATH=${JAVA_HOME}/bin/sparcv9:${PATH}
export LD_LIBRARY_PATH_64=${LD_LIBRARY_PATH_64}:${JAVA_HOME}/jre/lib/sparcv9/server

# Used for X Windows
export PATH=/usr/openwin/bin:$PATH

# Set locale to UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
Install WebLogic Server
* Unzip V24338-01.zip to obtain wls1034_generic.jar (e.g. in /opt/oracle/sw directory)
* Start installer
java -d64 -Djava.security.egd=file:/dev/./urandom -jar wls1034_generic.jar
* Click Next on the Welcome screen.
* Enter Middleware Home directory
/opt/oracle/Middleware/home_11gr1
* Bypass Register for Security Updates for now.
* Select Typical install
* Check Local JDK/Sun SDK 1.6.0_21 (/opt/oracle/jdk1.6.0_21)
* Accept default installation directories for WebLogic Server and Oracle Coherence server.
/opt/oracle/Middleware/home_11gr1/wlserver_10.3
/opt/oracle/Middleware/home_11gr1/coherence_3.6
* Review installation summary and click Next
* On the Installation Complete screen, uncheck Run Quickstart, then click Done.
Create Database Schema for BAM and SOA Servers
* See this post to create database schema for BAM and SOA servers.
Install SOA
Unzip zip files
unzip V24313-01_1of2.zip
unzip V24313-01_2of2.zip
(e.g. in /opt/oracle/sw directory)
Run installer
* Start installer GUI
cd /opt/oracle/sw/soa11.1.1.4/Disk1
./runInstaller -jreLoc /opt/oracle/jdk1.6.0_21/jre
* Click Next on welcome screen
* Select Skip Software Updates
* Click Next on Prerequisite Checks
* Enter
Oracle Middleware Home: /opt/oracle/Middleware/home_11gr1
Oracle Home Directory: Oracle_SOA1
* Select WebLogic Server
* Review Installation Summary screen and
- Click Save button to save a response file
- Click Install to start install
* On Installation Complete screen,
- Click Save to save installation details
- Click Finish to end installation
Create Domains
* Start Configuration Wizard
cd /opt/oracle/Middleware/home_11gr1/Oracle_SOA1/common/bin
./config.sh -log=soa_domain.log
* Select Create a new WebLogic domain and click Next
* On Select Domain Source screen, select
Oracle BPM Suite - 11.1.1.0 [Oracle_SOA1]
Oracle SOA Suite - 11.1.1.0 [Oracle_SOA1]
Oracle Enterprise Manager - 11.1.1.0 [oracle_common]
Oracle Business Activity Monitoring - 11.1.1.0 [Oracle_SOA1]
Oracle WSM Policy Manager - 11.1.1.0 [oracle_common]
Oracle JRF - 11.1.1.0 [oracle_common]
* Enter domain name and location
Domain name: soa_domain
Domain location: /opt/oracle/Middleware/home_11gr1/user_projects/domains
Application location: /opt/oracle/Middleware/home_11gr1/user_projects/applications
* Enter the Administrator user name and password
Name: weblogic
User password: welcome1
* Select Development Mode and Sun SDK 1.6.0_21
* On Configure JDBC Component Schema screen, select ALL component schema checkboxes and enter:
Vendor: Oracle
Driver: Oracle's Driver (Thin) for Service connections; Versions: 9.0.1 and later
DBMS/Service: orcl.world # Since we select the Service connections driver, a SID won't work
Host Name: localhost
Port: 1521
Schema Password: welcome1
* Click Next after successfully testing the JDBC component schemas
* Leave all optional configurations unchecked.
* Click Create on Configuration Summary screen.
* Click Done.
Create boot.properties file
* Create a boot.properties file for the managed servers so you don't have to type in the user name and password during managed server startups.
* If installed in Development mode, as we did here, the boot.properties file is necessary; otherwise the managed servers (soa_server1 and bam_server1) won't start and will give the following error:
Server is Running in Development Mode and Native Library(terminalio) to read the password securely from commandline is not found.
* Create boot.properties file
# create boot.properties file for soa_server1
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain/servers
mkdir soa_server1
cd soa_server1
mkdir security
cd security
vi boot.properties
# insert content as shown below

# create boot.properties file for bam_server1
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain/servers
mkdir bam_server1
cd bam_server1
mkdir security
cd security
vi boot.properties
# insert content as shown below
* boot.properties file content:
username=weblogic
password=welcome1
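As a compact alternative to the manual steps above (a sketch using the same paths and credentials), both files can be created in one pass:

for s in soa_server1 bam_server1; do
  d=/opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain/servers/${s}/security
  mkdir -p ${d}                                     # create servers/<name>/security
  printf 'username=weblogic\npassword=welcome1\n' > ${d}/boot.properties
done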
Include WebLogic Native Library
* Set LD_LIBRARY_PATH_64 to include WebLogic native library before starting WebLogic server. Otherwise, you'll get "Unable to load performance pack. Using Java I/O instead" warning when starting up.
wls_home=/opt/oracle/Middleware/home_11gr1/wlserver_10.3
export LD_LIBRARY_PATH_64=${LD_LIBRARY_PATH_64}:${wls_home}/server/native/solaris/sparc64
Adjust Memory Settings
* Adjust Java memory settings in setSOADomainEnv.sh file as needed:
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain/bin
vi setSOADomainEnv.sh
# Adjust memory on the following lines
DEFAULT_MEM_ARGS="-Xms512m -Xmx1024m"
PORT_MEM_ARGS="-Xms512m -Xmx1024m"
* I found this setting worked for me on an 8 GB Solaris box.
DEFAULT_MEM_ARGS="-Xmx2048m -Xms1024m -XX:NewSize=448m -XX:MaxNewSize=448m -XX:SurvivorRatio=6 -XX:PermSize=256m -XX:MaxPermSize=256m"
PORT_MEM_ARGS="-Xmx2048m -Xms1024m -XX:NewSize=448m -XX:MaxNewSize=448m -XX:SurvivorRatio=6 -XX:PermSize=256m -XX:MaxPermSize=256m"
Start/Stop Servers
* Start WebLogic servers
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain/bin
nohup ./startWebLogic.sh >> AdminServer.out 2>> AdminServer.err < /dev/null &
nohup ./startManagedWebLogic.sh soa_server1 >> soa_server1.out 2>> soa_server1.err < /dev/null &
nohup ./startManagedWebLogic.sh bam_server1 >> bam_server1.out 2>> bam_server1.err < /dev/null &
* Stop WebLogic servers
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain/bin
./stopManagedWebLogic.sh bam_server1 >> bam_server1.out 2>> bam_server1.err < /dev/null &
./stopManagedWebLogic.sh soa_server1 >> soa_server1.out 2>> soa_server1.err < /dev/null &
./stopWebLogic.sh >> AdminServer.out 2>> AdminServer.err < /dev/null &
* Create aliases to start/stop soa servers
soa_domain_bin_dir=/opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain/bin

alias startweblogic="nohup ${soa_domain_bin_dir}/startWebLogic.sh >> ${soa_domain_bin_dir}/AdminServer.out 2>> ${soa_domain_bin_dir}/AdminServer.err < /dev/null &"
alias startsoa11g="nohup ${soa_domain_bin_dir}/startManagedWebLogic.sh soa_server1 >> ${soa_domain_bin_dir}/soa_server1.out 2>> ${soa_domain_bin_dir}/soa_server1.err < /dev/null &"
alias startbam11g="nohup ${soa_domain_bin_dir}/startManagedWebLogic.sh bam_server1 >> ${soa_domain_bin_dir}/bam_server1.out 2>> ${soa_domain_bin_dir}/bam_server1.err < /dev/null &"

alias stopbam11g="${soa_domain_bin_dir}/stopManagedWebLogic.sh bam_server1 >> ${soa_domain_bin_dir}/bam_server1.out 2>> ${soa_domain_bin_dir}/bam_server1.err < /dev/null &"
alias stopsoa11g="${soa_domain_bin_dir}/stopManagedWebLogic.sh soa_server1 >> ${soa_domain_bin_dir}/soa_server1.out 2>> ${soa_domain_bin_dir}/soa_server1.err < /dev/null &"
alias stopweblogic="${soa_domain_bin_dir}/stopWebLogic.sh >> ${soa_domain_bin_dir}/AdminServer.out 2>> ${soa_domain_bin_dir}/AdminServer.err < /dev/null &"

alias tailweblogiclog="tail -f ${soa_domain_bin_dir}/AdminServer.out"
alias tailsoa11glog="tail -f ${soa_domain_bin_dir}/soa_server1.out"
alias tailbam11glog="tail -f ${soa_domain_bin_dir}/bam_server1.out"
Console URLs
* See this post for various console URLs
Admin Console
Disable on-demand deployment of internal applications
* Login Admin Console
* Click on soa_domain on the left panel
* Click General tab
* Un-check Enable on-demand deployment of internal applications
* Click Save button
Deinstall
Remove Domain
* Stop all processes associated with the domain.
* Remove the relevant domain entry from the "$MW_HOME/domain-registry.xml" file.
The entry will look similar to this:
<domain location="/u01/app/oracle/middleware/user_projects/domains/soa_domain"/>
* Remove the relevant domain entry from the "$WLS_HOME/common/nodemanager/nodemanager.domains" file.
#Domains and directories created by Configuration Wizard
#Thu Aug 23 22:53:14 BST 2012
soa_domain=/u01/app/oracle/middleware/user_projects/domains/soa_domain
* Delete the "soa_domain" application and domain directories.
$ rm -Rf $MW_HOME/user_projects/applications/soa_domain
$ rm -Rf $MW_HOME/user_projects/domains/soa_domain
Deinstall SOA
Stop all WebLogic Servers
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain/bin
./stopManagedWebLogic.sh bam_server1
./stopManagedWebLogic.sh soa_server1
./stopWebLogic.sh
Start deinstaller
cd /opt/oracle/Middleware/home_11gr1/Oracle_SOA1/oui/bin
./runInstaller -deinstall -jreLoc /opt/oracle/jdk1.6.0_21
* Click Next on Welcome screen
* Click Deinstall
* When warned that /opt/oracle/Middleware/home_11gr1/Oracle_SOA1 will be deleted after deinstall,
- Click Yes if you want the Oracle_SOA1 directory removed
- Click No if you want to keep the files in the directory
* Click Finish when done.
Remove user project domain directories
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains
rm -rf soa_domain

cd /opt/oracle/Middleware/home_11gr1/user_projects/applications
rm -rf soa_domain
Drop Schema
* See this post to remove database schema for BAM and SOA servers.
Uninstall WebLogic
* Run uninstaller
cd /opt/oracle/Middleware/home_11gr1/utils/uninstall
./uninstall.sh
* Remove install directory
rm -rf /opt/oracle/Middleware/home_11gr1
Issues
Unable to load performance pack. Using Java I/O instead
* Error message:
Unable to load performance pack. Using Java I/O instead. Please ensure that a native performance library is in: '/opt/oracle/jdk1.6.0_21/jre/lib/sparcv9/server:/opt/oracle/jdk1.6.0_21/jre/lib/sparcv9:/opt/oracle/jdk1.6.0_21/jre/../lib/sparcv9:/opt/oracle/product/10.2/lib:/opt/oracle/jdk1.6.0_21/jre/lib/sparcv9/server:/usr/jdk/packages/lib/sparcv9:/lib/64:/usr/lib/64'
* Cause:
Solaris Sparc 64 native library path not included in LD_LIBRARY_PATH_64
* Fix:
Set LD_LIBRARY_PATH_64 to include native library before starting WebLogic server.
wls_home=/opt/oracle/Middleware/home_11gr1/wlserver_10.3
export LD_LIBRARY_PATH_64=${LD_LIBRARY_PATH_64}:${wls_home}/server/native/solaris/sparc64
* Note:
Setting LD_LIBRARY_PATH to include the native library path didn't work; set LD_LIBRARY_PATH_64 instead.
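A quick sanity check (using the wls_home variable set above) is to confirm the native library directory actually exists:
ls ${wls_home}/server/native/solaris/sparc64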
Native Library(terminalio) is not found
* Error message:
Server is Running in Development Mode and Native Library(terminalio) to read the password securely from commandline is not found.
* Cause:
boot.properties file not found in security folder.
* Resolution 1:
Create boot.properties file in the security folder.
* Resolution 2:
Set
JAVA_OPTIONS=-Dweblogic.management.allowPasswordEcho=true
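For example (a sketch; export the option before starting the managed server):
export JAVA_OPTIONS="-Dweblogic.management.allowPasswordEcho=true"
./startManagedWebLogic.sh soa_server1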
Numerous database connection exceptions in stdout
* Possible cause
- This could be caused by the database not using the AL32UTF8 character set.
* Possible resolution
- Convert database to AL32UTF8 character set.
Could not obtain an exclusive lock to the embedded LDAP data files directory
* Error message: "Could not obtain an exclusive lock to the embedded LDAP data files directory"
* Possible cause
- WebLogic or SOA managed servers were not shut down cleanly
* Possible fix
- Remove EmbeddedLDAP.lok files
- Example
# make sure no soa processes are running
ps -ef|grep -i "soa_domain"|grep -v grep

# cd to soa_domain
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain

# find all lok files
find . -name "*.lok" -print
./config/config.lok
./edit.lok
./servers/AdminServer/tmp/AdminServer.lok
./servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok
./servers/soa_server1/tmp/soa_server1.lok
./servers/soa_server1/data/ldap/ldapfiles/EmbeddedLDAP.lok
./servers/bam_server1/tmp/bam_server1.lok
./servers/bam_server1/data/ldap/ldapfiles/EmbeddedLDAP.lok

# remove all lok files
find . -name "*.lok" -print -exec rm {} \;
./config/config.lok
./edit.lok
./servers/AdminServer/tmp/AdminServer.lok
./servers/AdminServer/data/ldap/ldapfiles/EmbeddedLDAP.lok
./servers/soa_server1/tmp/soa_server1.lok
./servers/soa_server1/data/ldap/ldapfiles/EmbeddedLDAP.lok
./servers/bam_server1/tmp/bam_server1.lok
./servers/bam_server1/data/ldap/ldapfiles/EmbeddedLDAP.lok

# start as usual
PersistentStoreException
* Error message
The persistent store "_WLS_soa_server1" could not be deployed: weblogic.store.PersistentStoreException: [Store:280105]The persistent file store "_WLS_soa_server1" cannot open file _WLS_SOA_SERVER1000000.DAT.
weblogic.store.PersistentStoreException: [Store:280105]The persistent file store "_WLS_soa_server1" cannot open file _WLS_SOA_SERVER1000000.DAT.
at weblogic.store.io.file.Heap.open(Heap.java:325)
at weblogic.store.io.file.FileStoreIO.open(FileStoreIO.java:104)
* Cause
- WebLogic or SOA managed servers were not shut down cleanly
* Fix
- Stop all server instances.
- Remove all .lok files following previous section.
- Then remove all .DAT files:
# cd to soa_domain
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain
# Find all .DAT files
find . -name "*.DAT" -print
./servers/AdminServer/data/store/default/_WLS_ADMINSERVER000000.DAT
./servers/AdminServer/data/store/diagnostics/WLS_DIAGNOSTICS000000.DAT
./servers/soa_server1/data/store/default/_WLS_SOA_SERVER1000000.DAT
./servers/soa_server1/data/store/diagnostics/WLS_DIAGNOSTICS000000.DAT
./servers/bam_server1/data/store/default/_WLS_BAM_SERVER1000000.DAT
./servers/bam_server1/data/store/diagnostics/WLS_DIAGNOSTICS000000.DAT
./BPMJMSFileStore/BPMJMSFILESTORE000000.DAT
./SOAJMSFileStore/SOAJMSFILESTORE000000.DAT
./UMSJMSFileStore_auto_1/UMSJMSFILESTORE_AUTO_1000000.DAT
./UMSJMSFileStore_auto_2/UMSJMSFILESTORE_AUTO_2000000.DAT
oracle@windu:/opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain
# remove all .DAT files
find . -name "*.DAT" -print -exec rm {} \;
# start as usual
java.lang.OutOfMemoryError: PermGen space
* Error message
java.lang.OutOfMemoryError: PermGen space
* See this post for JVM settings
* Possible fix: set MaxPermSize to higher value. For example,
cd /opt/oracle/Middleware/home_11gr1/user_projects/domains/soa_domain/bin
vi setSOADomainEnv.sh
# need to tune
DEFAULT_MEM_ARGS="-Xms1024m -Xmx2048m -XX:MaxPermSize=256m"
PORT_MEM_ARGS="-Xms1024m -Xmx2048m -XX:MaxPermSize=256m"
Weblogic 10.3.5 to 10.3.6 upgrade
Follow the steps below to upgrade WebLogic from 10.3.5 to 10.3.6.
http://docs.oracle.com/cd/E24902_01/doc.91/e18840/upgrade_1036.htm
Step 1: Before You Begin
Before you begin upgrading your existing WebLogic Server version to WebLogic 10.3.6.0:
•Shut down all WebLogic processes you want to upgrade such as:
◦Node Manager
◦Admin Server
◦All Managed Servers
•Download a new version of JDK 1.7+.
Note:
A plus sign '+' after the version number indicates that this and its subsequent versions are supported.
•Download this patch set for Oracle WebLogic 10.3.6.0: Patch 13529623 (p13529623_1036_Generic.zip)
Unzip the file in a temporary location and confirm the extracted contents include this file:
wls1036_upgrade_generic.jar
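For example (the temporary directory is illustrative):
unzip p13529623_1036_Generic.zip -d /tmp/wls1036_upgrade
ls /tmp/wls1036_upgrade/wls1036_upgrade_generic.jar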
Step 2:
Installing and Verifying the JDK Version
To meet the minimum technical requirements (MTRs) for the requisite JDK: if you need to install a new JDK, install it to a different location; alternatively, completely uninstall the existing JDK and replace it with the newer version.
To install and verify the level of an existing JDK, refer to the section of this guide entitled: Section 5.4, "Installing and Verifying the JDK Version."
Step 3:
Running OUI to Upgrade an Existing WebLogic Server to 10.3.6
This section describes running the Oracle Universal Installer (OUI) to upgrade an existing WebLogic Server to 10.3.6.
1.Locate this patchset from the temporary location where you downloaded it in Section 7.1, "Before You Begin":
wls1036_upgrade_generic.jar
2.Open a Command window with Run as Administrator option and run this command from the prompt:
>java -jar wls1036_upgrade_generic.jar
Upon execution, the installer starts preparing the OUI install program.
3.On Choose Middleware Home Directory, select the existing Middleware home you wish to upgrade.
4.Click the Next button.
5.On Register for Security Updates, complete the Email address and/or the My Oracle Support Password fields as applicable.
6.Click the Next button.
7.On Choose Products and Components, verify the components.
Note:
The OUI installer automatically selects the Oracle Coherence component. You can choose to select or deselect this component, keeping in mind that this server type has not yet been verified with Oracle JD Edwards EnterpriseOne.
8.Click the Next button.
9.On Choose Product Installation Directories, verify the directory locations for the previously selected products and components.
Note:
A new version of Oracle Coherence_3.7 will be installed.
10.Click the Next button.
OUI begins copying the files and performs the upgrade.
11.On Installation Complete, click the check box for Run Quickstart to continue with the upgrade of the Oracle WebLogic domains.
12.Click the Done button to complete the installation and exit OUI.
The Quickstart configuration screen appears.
13.On the QuickStart links panel, select this link:
Upgrade domains to version 10.3.6
An Upgrade Wizard is launched.
14.On the Welcome panel of the Upgrade Wizard, review and complete the tasks listed in the Prerequisites section of the above screen.
15.When the Prerequisite tasks are complete, click the Next button.
16.On Select WebLogic Version, select this radio button:
9.0 or higher
17.Click the Next button.
18.On Select a Domain to Upgrade, drill down through the Oracle\Middleware\user_projects\domains directory structure and select the Oracle JD Edwards domain. For example:
E1_Apps
19.Click the Next button.
20.On Inspect Domain, review the upgrade configuration selections.
21.Click the Next button.
22.On Select Upgrade Options, select this check box:
Back up current domain (recommended)
Caution:
The wizard advises you that if you choose the check box Add log files to backup zip, the resultant zip file can be extremely large.
23.On Domain Backup, review the message.
24.Click the Next button.
25.On Select Directory for Domain Backup, you can accept or change location and filename of the backup zip file.
26.The wizard shows the progress of the domain backup.
27.When the backup is complete, click the Next button.
28.On Finalize Domain Upgrade, review the message.
29.Click the Next button to begin the Upgrade.
30.On Upgrade Complete, click the Done button to exit OUI.
Note:
As a result of this domain upgrade, you do not need to individually upgrade any Managed Server.
31.Start the WebLogic NodeManager.
32.Start the WebLogic Administration Console.
33.Start the existing Managed Server such as the Oracle JD Edwards EnterpriseOne HTML server.
34.Test and verify the upgrade.
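One way to verify the new version (a sketch; adjust MW_HOME to your Middleware home) is to source the WebLogic environment and print the server version:
cd $MW_HOME/wlserver_10.3/server/bin
. ./setWLSEnv.sh
java weblogic.version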
================================================================================================
Upgrading WLS from 10.3.5 to 10.3.6
Download the software
You can download the WebLogic Server software from Oracle Support. It isn't easy but it has to be done.
Once you have logged in to Oracle Support do the following:
1.Click on the "Patches & Updates" tab
2.In the "Patch Search" box select the following
1.Product - Oracle WebLogic Server
2.Release - 10.3.6 (or whatever release you are looking for)
3.Platform - Whatever platform you are looking for (10.3.6 for Windows 32-bit: Patch 13529639)
4.Description - upgrade
5.Click Search
3.Click the Patch Name/Number
4.Click the Download button.
This should get you the upgrade installer you need.
Backup your software and domain folders
Just to be on the safe side it is a good idea to backup your software and domain folders. The easiest way to do that is to Zip the folders and copy the zip files to a safe place.
Run the installer
To run the installer, simply run the downloaded file. On 64-bit architectures that is most likely:
java -jar wls1036_upgrade_generic.jar
Once the installer starts, simply step through the install wizard.
Thursday, December 19, 2013
Oracle Data Integrator 11g Product Overview and Architecture
The purpose of ETL (Extract, Transform, Load) tools is to help with the consolidation
of data that is dispersed throughout the information system. Data is stored in disparate
applications, databases, files, operating systems, and in incompatible formats. The
consequences of such a dispersal of the information can be dire, for example, different
business units operating on different data will show conflicting results and information
cannot be shared across different entities of the same business.
Imagine the marketing department reporting on the success of their latest campaign
while the finance department complains about its lack of efficiency. Both have
numbers to back up their assertions, but the numbers do not match!
What could be worse than a shipping department that struggles to understand
customer orders, or a support department that cannot confirm whether a customer
is current with his/her payment and should indeed receive support? The examples
are endless.
The only way to have a centralized view of the information is to consolidate the
data—whether it is in a data warehouse, a series of data marts, or by normalizing
the data across applications with master data management (MDM) solutions. ETL
tools usually come into play when a large volume of data has to be exchanged (as
opposed to Service-Oriented Architecture infrastructures for instance, which would
be more transaction based).
In the early days of ETL, databases had very weak transformation functions. Apart
from using an insert or a select statement, SQL was a relatively limited language. To
perform heavy duty, complex transformations, vendors put together transformation
platforms—the ETL tools.
Over time, the SQL language has evolved to include more and more transformation
capabilities. You can now go as far as handling hierarchies, manipulating XML
formats, using analytical functions, and so on. It is not by chance that 50 percent of
the ETL implementations in existence today are done in plain SQL scripts—SQL
makes it possible.
This is where the ODI ELT architecture (Extract-Load-Transform—the inversion
in the acronym is not a mistake) comes into play. The concept with ELT is that
instead of extracting the data from a source, transforming it with a dedicated
platform, and then loading into the target database, you will extract from the
source, load into the target, then transform into the target database, leveraging
SQL for the transformations.
To some extent, ETL and ELT are marketing acronyms. When you look at ODI
for instance, it can perform transformations on the source side as well as on the
target side. You can also dedicate some database or schema for the staging and
transformation of your data, and can have something more similar to an ETL
architecture. Similarly, some ETL tools also have the ability to generate SQL code
and to push some transformations down to the database level.
The key differences then for a true ELT architecture are as follows:
• The ability to dynamically manage a staging area (location, content,
automatic management of table alterations)
• The ability to generate code on source and target systems alike, in the
same transformation
• The ability to generate native SQL for any database on the market—most
ETL tools will generate code for their own engines, and then translate that
code for the databases—hence limiting their generation capacities to their
ability to convert proprietary concepts
• The ability to generate DML and DDL, and to orchestrate sequences of
operations on the heterogeneous systems
In a way, the purpose of an ELT tool is to provide the comfort of a graphical interface
with all the functionality of traditional ETL tools, while keeping the efficiency of SQL
coding with set-based processing of data in the database and limiting the overhead
of moving data from place to place.
In this chapter we will focus on the architecture of Oracle Data Integrator 11g, as
well as the key concepts of the product. The topics we will cover are as follows:
• The elements of the architecture, namely, the repository, the Studio, the
Agents, the Console, and integration into Oracle Enterprise Manager
• An introduction to key concepts, namely, Execution Contexts, Knowledge
Modules, Models, Interfaces, Packages, Scenarios, and Load Plans
ODI product architecture
Since ODI is an ELT tool, it requires no other platform than the source and target
systems. But there still are ODI components to be deployed: we will see in this
section what these components are and where they should be installed.
The components of the ODI architecture are as follows:
• Repository: This is where all the information handled by ODI is stored,
namely, connectivity details, metadata, transformation rules and scenarios,
generated code, execution logs, and statistics.
• Studio: The Studio is the graphical interface of ODI. It is used by
administrators, developers, and operators.
• Agents: The Agents can be seen as orchestrators for the data movement and
transformations. They are very lightweight java components that do not
require their own server—we will see in detail where they can be installed.
• Console: The Console is a web tool that lets users browse the ODI
repository, but it is not a tool used to develop new transformations. It can
be used by operators though to review code execution, and start or restart
processes as needed.
• The Oracle Enterprise Manager plugin for ODI integrates the monitoring of
ODI components directly into OEM so that administrators can consolidate
the monitoring of all their Oracle products in one single graphical interface.
At a high level, here is how the different components of the architecture
interact with one another. The administrators, developers, and operators
typically work with the ODI Studio on their machine (operators also have the
ability to use the Console for a more lightweight environment). All Studios
typically connect to a shared repository where all the metadata is stored. At
run time, the ODI Agent receives execution orders (from the Studio, or any
external scheduler, or via a Web Service call). At this point it connects to the
repository, retrieves the code to execute, adds last minute parameters where
needed (elements like connection strings, schema names where the data
resides, and so on), and sends the code to the databases for execution. Once the
databases have executed the code, the agent updates the repository with the
status of the execution (successful or not, along with any related error message)
and the relevant statistics (number of rows, time to process, and so on).
ODI repository
To store all its information, ODI requires a repository. The repository is by default a
pair of schemas (called Master and Work repositories) stored in a database. Unless
ODI is running in a near real time fashion, continuously generating SQL code for
the databases to execute the code, there is no need to dedicate a database for the
ODI repository. Most customers leverage existing database installations, even if
they create a dedicated tablespace for ODI.
Repository overview
The only element you will never find in the repository is the actual data processed
by ODI. The data will be in the source and target systems, and will be moved
directly from source to target. This is a key element of the ELT architecture. All other
elements that are handled through ODI are stored into the repository. An easy way
to remember this is that everything that is visible in the ODI Studio is stored in the
repository (except, of course, for the actual data), and everything that is saved in the
ODI Studio is actually saved into the repository (again, except for the actual data).
The repository is made of two entities which can be separated into two separate
database schemas, namely, the Master repository and the Work repository.
We will look at each one of these in more detail later, but for now you can consider
that the Master repository will host sensitive data whereas the Work repository will
host project-related data. A limited version of the Work repository can be used in
production environments, where the source code is not needed for execution.
Repository location
Before going into the details of the Master and Work repositories, let's first look into
where to install the repository.
The repository is usually installed in an existing database, often in a separate
tablespace. Even though ODI is an Oracle product, the repository does not have to
be stored in an Oracle database (but who would not use the best database in the
world?). Generally speaking, the databases supported for the ODI repository are
Oracle, Microsoft SQL Server, IBM/DB2 (LUW and iSeries), Hypersonic SQL, and
Sybase ASE. Specific versions and platforms for each database are published by
Oracle and are available at:
http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html
It is usual to see the repository share the same system as the target database.
We will now look into the specifics of Master and Work repositories.
Master repository
As stated earlier, the Master repository is where the sensitive data will be stored.
This information is of the following types:
• All the information that pertains to ODI users' privileges will be saved
here. This information is controlled by administrators through the Security
Navigator of the ODI Studio. We will learn more about this navigator when
we look into the details of the Studio.
• All the information that pertains to connectivity to the different systems
(sources and targets), and in particular the requisite usernames and
passwords, will be stored here. This information will be managed by
administrators through the Topology Navigator.
• In addition, whenever a developer creates several versions of the same object,
the subsequent versions of the objects are stored in the Master repository.
Versioning is typically accessed from the Designer Navigator.
Work repository
Work repositories will store all the data that is required for the developers to design
their data transformations. All the information stored in the Work repository is
managed through the Designer Navigator and the Operator Navigator. The Work
repository contains the following components:
• The Metadata that represents the source and target tables, files, applications,
message buses. These will be organized in Models in the Designer Navigator.
• The transformation rules and data movement rules. These will be organized
in Interfaces in the Designer Navigator.
• The workflows designed to orchestrate the transformations and data
movement. These are organized in Packages and Load Plans in the
Designer Navigator.
• The job schedules, if the ODI Agent is used as the scheduler for the
integration tasks. These can be defined either in the Designer Navigator
or in the Operator Navigator.
• The logs generated by ODI, where the generated code can be reviewed,
along with execution statistics and statuses of the different executions
(running, done successfully or in error, queued, and so on). The logs
are accessed from the Operator Navigator.
Execution repository
In a production environment, most customers do not need to expose the source
code for the processes that are running. Modifications to the processes that run
in production will have to go through a testing cycle anyway, so why store the
source code where one would never access it? For that purpose, ODI proposes an
execution repository that only stores the operational metadata, namely, generated
code, execution results, and statistics. The type of Work repository (execution or
development) is selected at installation time. A Work repository cannot be converted
from development to execution or execution to development—a new installation will
be required if a conversion is needed.
Studio
The ODI Studio is the graphical interface provided to all users to interact with ODI.
People who need to use the Studio usually install the software on their own
machine and connect to a shared repository. The only exception would be when
the repository is not on the same LAN as the Studio. In that case, most customers
use Remote Terminal Service technologies to ensure that the Studio is local to the
repository (same LAN). Only the actual display is then sent over the WAN.
Agent
The ODI Agent is the component that will orchestrate all the operations. If SQL code
must be executed by a database (source or target), the agent will connect to that
database and will send the code (DDL and DML, as needed) for that database to
perform the transformations. If utilities must be used as part of the transformations
(or, more likely, as part of the data transfer) then the agent will generate whatever
configuration files or parameter files are required for the utility, and will invoke this
utility with the appropriate parameters—SQL Loader, BCP, Multiload, and NZload
are just a small list of such utilities.
There are two types of ODI Agent, namely, the standalone agent (available in all
releases of ODI) and the JEE agent (available with ODI 11g and after) that runs on
top of WebLogic Server. Each type has its own benefits, and both types of agents
can co-exist in the same environment:
• The JEE agent will take advantage of WebLogic in terms of high availability
and pooling of the connections
• The standalone agents are very lightweight and can easily be installed on any
platform. They are small Java applications that do not require a server.
A common configuration is to use the JEE agent as a "Master" agent, whose sole
purpose is to distribute execution requests across several child agents. These
children can very well be standalone agents. The master agent will know at all
times which children are up or down. The master agent will also balance the
load across all child agents.
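As an illustration (a sketch: the agent name, port, and install path are assumptions, and the agent must already be declared in the Topology Navigator), a standalone agent is typically started from its install directory:
cd /opt/oracle/odi/oracledi/agent/bin
./agent.sh -NAME=OracleDIAgent1 -PORT=20910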
Tuesday, December 17, 2013
Oracle BI Applications 11.1.1.7.1
Understanding the Oracle BI Applications Architecture
Previous releases of the Oracle BI Applications used Informatica PowerCenter as the embedded data integration engine, with individual data loading tasks being orchestrated into execution plans using another tool called the Data Warehouse Administration Console (DAC). Oracle BI Applications 11.1.1.7.1 instead uses Oracle Data Integrator 11g (11.1.1.7) to perform data loads, along with a number of Java-based applications that are deployed into managed servers within the Oracle Business Intelligence WebLogic domain. Figure 1 below shows the Oracle BI Applications logical product architecture: the usual WebLogic Server managed server deployed as part of an Oracle Business Intelligence BI domain is extended to include new Oracle BI Applications-related Java applications, and a new managed server is added that contains the Oracle Data Integrator-related Java applications.
Oracle BI Applications 11.1.1.7.1 is a set of packaged data extraction and loading routines designed to load a pre-built Oracle data warehouse, together with a set of dashboards, reports and other business metadata objects designed to provide customers with a quick-to-deploy, best-practice BI environment for Oracle's ERP and CRM applications. Whilst a full installation of Oracle Data Integrator is provided with Oracle BI Applications 11.1.1.7.1, its role is that of an "embedded" data integration engine, with administrators mainly interacting with it using web-based administration and configuration tools.
Install and Configure ORACLE BI APPS 11.1.1.7.1 along with OBIEE 11.1.1.7.0
http://www.oracle.com/technetwork/articles/bi/mcginley-bi-apps-1993643.html
http://sunnyobi.blogspot.com/2013/07/installation-and-configuration-of-bi.html
Hello all, I am writing this post to provide the BI Apps 11.1.1.7.1 installation steps along with OBIEE 11.1.1.7.0.
If you haven't installed OBIEE 11.1.1.7.0, follow all the steps from the beginning; otherwise, skip the first 4 steps and continue, but make sure you have WebLogic 10.3.6 or upgrade to it, which is mandatory for the ODI and BI Apps 11.1.1.7.1 products. The current BI Apps version is certified with only Oracle as the target database, and the Oracle database version should be 11.2.0.3+. ODI uses JDBC drivers to connect to Oracle and opens multiple JDBC connections during the load, and database versions prior to 11.2.0.3 do not support multiple JDBC connections at a time, so Oracle 11.2.0.3+ is mandatory for successful ETL loads.
•Install JDK 1.6.X (JDK 1.7 is not certified by ODI 11.1.1.7)
•Install weblogic 10.3.6 (ODI 11.1.1.7 must be installed on WL 10.3.6)
•Run OBIEE 11.1.1.7.0 RCU utility to create BIPLATFORM and MDS schemas.
•Install and configure OBIEE 11.1.1.7
•Install ODI with All components and “Skip Repository Configuration” option.
•Run OBI Apps RCU to create schemas for:
◦ODI Master and Work Repositories (DEV_BIA_ODIREPO)
◦Oracle Business Applications Components (DEV_BIACOMP)
◦Oracle Business Analytics Data Warehouse (DEV_DW)
◦Updates to the MDS schema.
•Install BI Apps 11.1.1.7.1
•Apply FMW Platform Patches: This is an important step to avoid BI Apps configuration failure, irrespective of which type of OBIEE installation (software-only or Enterprise) you performed. Download "Oracle Fusion Middleware Platform Patches for Oracle Business Analytics Applications Suite" from the Oracle Business Intelligence Applications 11.1.1.7.1 media pack on the Oracle Software Delivery Cloud.
◦D:\Middleware\Oracle_BI1\perl\bin> perl D:\Middleware\Oracle_BI1\biapps\tools\bin\APPLY_PATCHES.pl D:\Middleware\Oracle_BI1\biapps\tools\bin\apply_patches_import.txt
•Update BIACOMP Schema with ATGLite patch scripts:
◦sqlplus DEV_BIACOMP/
◦SQL> @D:\Middleware\Oracle_BI1\sdf\DW\ATGPF\sql\fndtbs_11170_upg.sql
•Configure BI Apps 11.1.1.7.1. Basically, the BI Apps configuration is a process of extending the existing WebLogic domain (bifoundation_domain) by deploying new applications such as biacm, odiconsole, etc.
•Integrate ODI security with OPSS:
◦D:\Middleware\user_projects\domains\bifoundation_domain\bin>setDomainEnv.cmd
◦D:\Middleware\user_projects\domains\bifoundation_domain>java weblogic.WLST D:\Middleware\Oracle_BI1\bifoundation\install\createJPSArtifactsODI.py embedded -ADMIN_USER_NAME weblogic -DOMAIN_HOSTNAME localhost -DOMAIN_PORT 7001 -DOMAIN_HOME_PATH D:\Middleware\user_projects\domains\bifoundation_domain
•Setup ODI Studio to connect to BI Apps repository:
◦Open ODI Studio from the Windows Start menu: Start > All Programs > Oracle > ODI Studio
◦In ODI Studio window, click on connect to Repository.
◦In Oracle Data Integrator Login window, Click on + icon to add a login credentials.
◦Provide all the required login details.
◦Click Ok, and now you will see the Oracle Data Integrator Login window.
◦Login with the BI apps Admin username and password.
◦Upon successful authentication, we are now connected to the BI Apps 11.1.1.7.1 repository.
◦In ODI Studio, in the left-hand window, you can see the Designer, Operator, Topology and Security tab panes for easy navigation to perform the required tasks.
◦Click on Designer pane, and expand the Project list. Here you will see the BI Apps project folder and its mappings directories.
Sunday, December 15, 2013
Informatica System Architecture
Informatica Software Architecture illustrated
Informatica's ETL product, known as Informatica PowerCenter, consists of 3 main components.
1. Informatica PowerCenter Client Tools:
These are the development tools installed at developer end. These tools enable a developer to
•Define transformation process, known as mapping. (Designer)
•Define run-time properties for a mapping, known as sessions (Workflow Manager)
•Monitor execution of sessions (Workflow Monitor)
•Manage repository, useful for administrators (Repository Manager)
•Report Metadata (Metadata Reporter)
2. Informatica PowerCenter Repository:
The Repository is the heart of the Informatica tools: a kind of data inventory where all the data related to mappings, sources, targets, etc. is kept. This is the place where all the metadata for your application is stored. All the client tools and the Informatica Server fetch data from the Repository. The Informatica client and server without the repository are like a PC without memory/hard disk: able to process data, but with no data to process. The repository can be treated as the backend of Informatica.
3. Informatica PowerCenter Server:
Server is the place, where all the executions take place. Server makes physical connections to sources/targets, fetches data, applies the transformations mentioned in the mapping and loads the data in the target system.
Integrating OBIEE 11g 11.1.1.7.0 with Informatica PowerCenter 9.5.0
http://docs.oracle.com/cd/E50736_01/doc.30/e50445/before.htm
Thursday, December 12, 2013
RAC setup good Link
www.oracledba.org/11gR2/Pre_Install_11gR2.htm Step By Step: Install and setup Oracle 11g R2 RAC on Oracle Enterprise Linux 5.5 (32 bit) Platform. By Bhavin Hingu <> NEXT>> This Document shows the step by step of installing and setting up 3-Node 11gR2 RAC cluster. This setup uses IP Based iSCSI Openfiler SAN as a shared storage subsystem. This setup does not have IPMI and Grid Naming Service (GNS) configured. The SCAN is resolved through DNS. Hardware Used in setting up 3-node 11g R2 RAC using iSCSI SAN (Openfiler): · Total Machines: 5 (3 for RAC nodes + 1 for NAS + 1 for DNS) · Network Switches: 3 (for Public, Private and Shared Storage) · Extra Network Adaptors: 7 (6 for RAC nodes (2 for each node) and one for Storage Server) · Network cables: 11 (9 for RAC nodes (3 for each node), one for Shared Storage and 1 for DNS server) · External USB HD: 1 (1 TB) Machines Specifications: DELL OPTIPLEX GX620 CPU: Intel 3800MHz RAM: 4084MB HD: 250GB DVD, 10/100 NIC, 8 MB VRAM Network Adaptor Specifications: Linksys EG1032 Instant Gigabit Network Adapter Network Switch Specifications: D-Link 24-Port Rackmountable Gigabit Switch Network Cables Specifications: 25-Foot Cat6 Snagless Patch Cable – (Blue, Black and Grey) Software Used for the 3-node RAC Setup using NAS (Openfiler): · NAS Storage Solution: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686) · Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE) · Clusterware: Oracle 11g R2 Grid Infrastructure (11.2.0.1) · Oracle RAC: Oracle RDBMS 11g R2 (11.2.0.1) 3-Node RAC Setup Operating System: Oracle Enterprise Linux 5.5 (2.6.18-194.el5PAE): Server: All the RAC Nodes + DNS server Grid Infrastructure Software (Clusterware + ASM 11.2.0.1): Server: All the RAC Nodes ORACLE_BASE: /u01/app/grid ORACLE_HOME: /u01/app/grid11201 Owner: grid (Primary Group: oinstall, Secondary Group: asmadmin, asmdba) Permissions: 755 OCR/Voting Disk Storage Type: ASM Oracle Inventory Location: /u01/app/oraInventory Oracle Database Software (RAC 11.2.0.1): Server: All the RAC Nodes ORACLE_BASE: /u01/app/oracle ORACLE_HOME: /u01/app/oracle/db11201 Owner: oracle (Primary Group: oinstall, Secondary Group: asmdba, dba) Permissions: 755 Oracle Inventory Location: /u01/app/oraInventory Database Name: labdb Listener: LAB_LISTENER (TCP:1525) Openfiler 2.3: Server: single dedicated server acting as NAS. OS: Openfiler 2.3 (2.6.26.8-1.0.11.smp.pae.gcc3.4.x86.i686). 3-Node RAC Architecture: Machine Public Name Private Name VIP Name RAC Node1 node1.hingu.net node1-prv node1-vip.hingu.net RAC Node2 node2.hingu.net node2-prv node2-vip.hingu.net RAC Node3 node3.hingu.net node3-prv node3-vip.hingu.net Storage nas-server N/A N/A DNS server lab-dns N/A N/A SCAN IPs: 192.168.2.151 192.168.2.152 192.168.2.153 SCAN: lab-scan.hingu.net Cluster Name: lab Public Network: 192.168.2.0/eth2 Private network (cluster Interconnect): 192.168.0.0/eth0 Private network (Storage Network): 192.168.1.0/eth1 Machine Public IP Private IP VIP Storage IP RAC Node1 192.168.2.1 192.168.0.1 192.168.2.51 192.168.1.1 RAC Node2 192.168.2.2 192.168.0.2 192.168.2.52 192.168.1.2 RAC Node3 192.168.2.3 192.168.0.3 192.168.2.53 192.168.1.3 Storage N/A N/A N/A 192.168.1.101 DNS server 192.168.2.200 N/A N/A N/A The Installation is divided into 3 main categories: · Pre-installation task. · Installation of Oracle 11g R2 Grid Infrastructure (11.2.0.1). · Installation of Oracle 11g R2 Real Application Cluster (RAC 11.2.0.1). 
Pre-installation tasks:
· Server hardware requirements
· Hardware used in this exercise to set up the 3-node RAC
· Software requirements
· 3-Node 11g R2 RAC architecture/setup
· Installation of Oracle Enterprise Linux 5
· Installation of Openfiler 2.3
· Linux package requirements
· Network setup
· DNS setup for SCAN resolution
· Creating Oracle software owners/groups/permissions/HOMEs
· Installation of the cvuqdisk package
· Setup of Network Time Protocol
· Setting up the Oracle software owners' environment
· Setting up SSH equivalency for the Oracle software owners
· Configuring shared-storage iSCSI disks using Openfiler
· Configuring the iSCSI disk devices for Oracle ASM with ASMLib

Server Hardware Requirements:
· Each node in the cluster must meet the requirements below.
· At least 1024 x 768 display resolution, so that OUI displays correctly.
· 1 GB of space in the /tmp directory.
· 5.5 GB of space for the Oracle Grid Infrastructure Home.
· At least 2.5 GB of RAM and equivalent swap space (for a 32-bit installation, as in my case).
· All the RAC nodes must share the same Instruction Set Architecture. For a testing RAC setup, it is possible to install RAC on servers with a mixture of Intel 32 and AMD 32 CPUs, with differences in memory size and CPU speed.

Installation of OEL 5.5 (on all the RAC nodes and the DNS host):

The selections below were made during the installation of OEL5 on Node 1 (node1.hingu.net). The same process was followed to install OEL 5.5 on all the remaining RAC nodes and the DNS host (lab-dns). The hostname/IP information was chosen appropriately for each node from the architecture diagram.

Insert Installation Media #1:
Testing the CD Media: Skip
Language: English
Keyboard: U.S. English
Partition Option: "Remove all partitions on selected drives and create default layout"
Boot Loader: "The GRUB boot loader will be installed on /dev/sda"
Network Devices:
Active on Boot   Device   IPv4/Netmask                  IPv6/Prefix
Yes              eth0     192.168.0.1/255.255.255.0     Auto
Yes              eth1     192.168.1.1/255.255.255.0     Auto
Yes              eth2     192.168.2.1/255.255.255.0     Auto
Hostname → Manually → node1.hingu.net
Ignore both the warning messages at this point.
Region: America/New York
System Clock Uses UTC (checked)
Root Password → enter the root password
Additional tasks on top of the default installation: checked "Software Development" and "Web Server"
Customize Now (selected) -- below is the extra selection on top of the default selected packages:
Applications → Authoring and Publishing (checked)
Development → Development Libraries → libstdc++44-devel
Development → Java Development
Development → Legacy Software Development
Servers → checked all the servers
Servers → Legacy Network Server → bootparamd, rsh-server, rusers, rusers-server, telnet-server
Servers → Network Servers → dhcp, dhcpv6, dnsmasq, ypserv
Servers → Server Configuration Tools → checked all
Base System → Administration Tools → checked all
Base System → Base → device-mapper-multipath, iscsi-initiator-utils
Base System → Legacy Software Support → openmotif22
Base System → System Tools → OpenIPMI-gui, lsscsi, oracle*, sysstat, tsclient

Post-installation steps:
(1) Yes to License Agreement.
(2) Disable the firewall.
(3) Disable SELinux.
(4) Disable kdump.
(5) Set the clock.
(6) Finish.

Installation of Openfiler 2.3:

Version: Openfiler V 2.3 (downloaded from here)
This install guide was followed to install Openfiler with the below values of hostname and IP.
HOSTNAME: nas-server
Network: NAS
IP: 192.168.1.101
NETMASK: 255.255.255.0

Post-installation steps:
· Disabled the firewall using system-config-securitylevel-tui.
· Changed the password of the openfiler user (the default is "password").
· Connected to the nas-server using the https://192.168.1.101:446/ link.
· Registered the cluster nodes in the "Network Access Configuration" under the "System" tab.
· Enabled all the services shown under the "Services" tab.

[Screenshot: System Setup Screen]

Minimum Required RPMs for OEL 5.5 (all the 3 RAC nodes):

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
elfutils-libelf-devel-static-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-common-2.5
glibc-devel-2.5
glibc-headers-2.5
kernel-headers-2.6.18
ksh-20060214
libaio-0.3.106
libaio-devel-0.3.106
libgcc-4.1.2
libgomp-4.1.2
libstdc++-4.1.2
libstdc++-devel-4.1.2
make-3.81
numactl-devel-0.9.8.i386
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-devel-2.2.11

The below command verifies whether the specified RPMs are installed or not. Any missing RPMs can be installed from the OEL Media Pack.

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel elfutils-libelf-devel-static \
gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers kernel-headers ksh libaio libaio-devel \
libgcc libgomp libstdc++ libstdc++-devel make numactl-devel sysstat unixODBC unixODBC-devel

I had to install the below extra RPMs:
numactl-devel → located on the 3rd CD of the OEL 5.5 Media Pack
oracleasmlib → available here (the one for RHEL-compatible systems)
cvuqdisk → available on the Grid Infrastructure media (under the rpm folder)

[root@node1 ~]# rpm -ivh numactl-devel-0.9.8-11.el5.i386.rpm
warning: numactl-devel-0.9.8-11.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:numactl-devel          ########################################### [100%]
[root@node1 ~]#

[root@node1 rpms]# rpm -ivh oracleasmlib-2.0.4-1.el5.i386.rpm
warning: oracleasmlib-2.0.4-1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:oracleasmlib           ########################################### [100%]

Network Configuration for the RAC Nodes/NAS Server/DNS Host:

Public IPs, VIPs and SCAN VIPs are resolved by DNS. The private IPs for the cluster interconnect are resolved through /etc/hosts. The hostname, along with the public/private and NAS networks, is configured at the time of the OEL network installation. The final network configuration files are listed here.
(a) Hostname:

For node node1:
[root@node1 ~]# hostname
node1.hingu.net

node1.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node1.hingu.net

For node node2:
[root@node2 ~]# hostname
node2.hingu.net

node2.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node2.hingu.net

For node node3:
[root@node3 ~]# hostname
node3.hingu.net

node3.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=node3.hingu.net

For node nas-server:
[root@nas-server ~]# hostname
nas-server

nas-server: /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=nas-server

For node lab-dns:
[root@lab-dns ~]# hostname
lab-dns

lab-dns.hingu.net: /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=lab-dns.hingu.net

(b) Private Network for the Cluster Interconnect:

node1.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.0.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes

node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:4B
IPADDR=192.168.0.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes

node3.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Linksys Gigabit Network Adapter
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.0.255
HWADDR=00:22:6B:BF:4E:49
IPADDR=192.168.0.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.0.0
ONBOOT=yes

(c) Public Network:

node1.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:04:6A:62
IPADDR=192.168.2.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes

node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:18:8B:24:F8:58
IPADDR=192.168.2.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes

node3.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth2
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth2
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:19:B9:0C:E6:EF
IPADDR=192.168.2.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes

lab-dns.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth0
# Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express
DEVICE=eth0
BOOTPROTO=static
BROADCAST=192.168.2.255
HWADDR=00:13:72:A1:E9:1B
IPADDR=192.168.2.200
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes

(d) Private Network for Shared Storage:

node1.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:4E:60
IPADDR=192.168.1.1
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes

node2.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:45:13
IPADDR=192.168.1.2
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes

node3.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
# Linksys Gigabit Network Adapter
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:4E:48
IPADDR=192.168.1.3
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes

nas-server.hingu.net: /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:43:D6
IPADDR=192.168.1.101
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
TYPE=Ethernet

(e) /etc/hosts files:

node1.hingu.net: /etc/hosts
#
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 node1.hingu.net node1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv
##=======================================
##=======================================

node2.hingu.net: /etc/hosts
#
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 node2.hingu.net node2 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv
##=======================================
##=======================================

node3.hingu.net: /etc/hosts
#
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 node3.hingu.net node3 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
##=======================================
# Private Network for Cluster Interconnect
##=======================================
192.168.0.1 node1-prv
192.168.0.2 node2-prv
192.168.0.3 node3-prv
##=======================================
##=======================================

lab-dns.hingu.net: /etc/hosts
#
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 lab-dns.hingu.net lab-dns localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

Configure the DNS Server for SCAN VIPs, Node VIPs and Node IPs:

DNS Server: lab-dns.hingu.net

RPMs required in setting up the DNS server:

ypbind-1.19-12.el5
bind-utils-9.3.6-4.P1.el5_4.2
bind-9.3.6-4.P1.el5_4.2
system-config-bind-4.0.3-4.0.1.el5
bind-libs-9.3.6-4.P1.el5_4.2
bind-chroot-9.3.6-4.P1.el5_4.2

Configuration files modified/created to set up DNS:

On lab-dns.hingu.net:
/var/named/chroot/etc/named.conf (modified)
/var/named/chroot/var/named/hingu.net.zone (created)
/var/named/chroot/var/named/2.168.192.in-addr.arpa.zone (created)
/var/named/chroot/var/named/1.168.192.in-addr.arpa.zone (created)

On node1, node2 and node3:
/etc/resolv.conf (modified)

/var/named/chroot/etc/named.conf

// Enterprise Linux BIND Configuration Tool
//
// Default initial "Caching Only" name server configuration
//
options {
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
};

// Zone for this RAC configuration is hingu.net
zone "hingu.net" in {
type master;
file "hingu.net.zone";
allow-update { none; };
};

// For reverse lookups
zone "2.168.192.in-addr.arpa" in {
type master;
file "2.168.192.in-addr.arpa.zone";
allow-update { none; };
};

zone "1.168.192.in-addr.arpa" in {
type master;
file "1.168.192.in-addr.arpa.zone";
allow-update { none; };
};

include "/etc/rndc.key";

/var/named/chroot/var/named/hingu.net.zone

$TTL 1d
hingu.net. IN SOA lab-dns.hingu.net. root.hingu.net. (
100 ; se = serial number
8h  ; ref = refresh
5m  ; ret = update retry
3w  ; ex = expiry
3h  ; min = minimum
)
IN NS lab-dns.hingu.net.
; DNS server
lab-dns IN A 192.168.2.200
; RAC Nodes Public names
node1 IN A 192.168.2.1
node2 IN A 192.168.2.2
node3 IN A 192.168.2.3
; RAC Nodes Public VIPs
node1-vip IN A 192.168.2.51
node2-vip IN A 192.168.2.52
node3-vip IN A 192.168.2.53
; 3 SCAN VIPs
lab-scan IN A 192.168.2.151
lab-scan IN A 192.168.2.152
lab-scan IN A 192.168.2.153
; Storage Network
nas-server IN A 192.168.1.101
node1-nas IN A 192.168.1.1
node2-nas IN A 192.168.1.2
node3-nas IN A 192.168.1.3

/var/named/chroot/var/named/2.168.192.in-addr.arpa.zone

$TTL 1d
@ IN SOA lab-dns.hingu.net. root.hingu.net. (
100 ; se = serial number
8h  ; ref = refresh
5m  ; ret = update retry
3w  ; ex = expiry
3h  ; min = minimum
)
IN NS lab-dns.hingu.net.
; DNS machine name in reverse
200 IN PTR lab-dns.hingu.net.
; RAC Nodes Public Names in reverse
1 IN PTR node1.hingu.net.
2 IN PTR node2.hingu.net.
3 IN PTR node3.hingu.net.
; RAC Nodes Public VIPs in reverse
51 IN PTR node1-vip.hingu.net.
52 IN PTR node2-vip.hingu.net.
53 IN PTR node3-vip.hingu.net.
; RAC Nodes SCAN VIPs in reverse
151 IN PTR lab-scan.hingu.net.
152 IN PTR lab-scan.hingu.net.
153 IN PTR lab-scan.hingu.net.

/var/named/chroot/var/named/1.168.192.in-addr.arpa.zone

$TTL 1d
@ IN SOA lab-dns.hingu.net. root.hingu.net. (
100 ; se = serial number
8h  ; ref = refresh
5m  ; ret = update retry
3w  ; ex = expiry
3h  ; min = minimum
)
IN NS lab-dns.hingu.net.
; Storage Network reverse lookup
101 IN PTR nas-server.hingu.net.
1 IN PTR node1-nas.hingu.net.
2 IN PTR node2-nas.hingu.net.
3 IN PTR node3-nas.hingu.net.

/etc/resolv.conf (on the RAC nodes):

search hingu.net
nameserver 192.168.2.200

Start the DNS service (named):

service named start
chkconfig --level 35 named on

Verify the DNS setup (a quick sketch follows):
NOTE: nslookup for lab-scan should return the names in random order every time.
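A minimal verification sketch, assuming the DNS server at 192.168.2.200 is up and the RAC nodes point to it via the /etc/resolv.conf above:

nslookup lab-scan.hingu.net     # run it a few times; the 3 SCAN IPs should come back in changing order
nslookup node1-vip.hingu.net    # should resolve to 192.168.2.51
nslookup 192.168.2.151          # reverse lookup; should return lab-scan.hingu.net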
Enable the Name Service Cache Daemon, nscd (on all the RAC nodes):

chkconfig --level 35 nscd on
service nscd start

Creating Oracle Users/Groups/Permissions and Installation Paths (on all the RAC nodes):

userdel oracle
groupdel oinstall
groupdel dba
groupadd -g 1000 oinstall
groupadd -g 1020 asmadmin
groupadd -g 1021 asmdba
groupadd -g 1031 dba
useradd -u 1100 -g oinstall -G asmadmin,asmdba grid
useradd -u 1101 -g oinstall -G dba,asmdba oracle
mkdir -p /u01/app/grid11201
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
passwd grid
passwd oracle

Install the cvuqdisk Package (on all the RAC nodes):

This package is located in the rpm directory on the Grid Infrastructure media and needs to be installed after the group oinstall is created. In my case, as this was a fresh install of 11g R2 on new hardware, no old version of cvuqdisk was present. If one is present, the older version needs to be removed first.

export CVUQDISK_GRP=oinstall
echo $CVUQDISK_GRP
rpm -ivh cvuqdisk-1.0.7-1.rpm

[root@node1 rpm]# pwd
/home/grid/11gR2_for_OEL5/grid11201/grid/rpm
[root@node1 rpm]# export CVUQDISK_GRP=oinstall
[root@node1 rpm]# echo $CVUQDISK_GRP
oinstall
[root@node1 rpm]# rpm -ivh cvuqdisk-1.0.7-1.rpm
Preparing...                ########################################### [100%]
   1:cvuqdisk               ########################################### [100%]
[root@node1 rpm]# rpm -qa | grep cvuqdisk
cvuqdisk-1.0.7-1
[root@node1 rpm]#

Network Time Protocol Setting (on all the RAC nodes):

In this installation, the Oracle Time Synchronization Service is used instead of the Linux-provided ntpd. So ntpd needs to be deactivated and deinstalled to avoid any possibility of conflict with Oracle's Cluster Time Sync Service (ctss).

# /sbin/service ntpd stop
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.org

Also remove the following file: /var/run/ntpd.pid

Configure the Grid Infrastructure and Oracle RAC Owners' User Environment (grid and oracle):

(a) Set the umask to 022 by putting the below line into these users' (grid and oracle) .bash_profile files:

umask 022

Then execute the .bash_profile and verify that the correct value of umask is displayed:

[grid@node1 ~]$ . .bash_profile
[grid@node1 ~]$ umask

(b) Setting up X11 forwarding:

Created the file ~/.ssh/config to disable X11 forwarding by placing the below lines in it:

Host *
ForwardX11 no

(c) Suppressed the terminal output on STDOUT and STDERR to prevent installation errors:

Modified the file ~/.bashrc (or .cshrc for the C shell) with the below entry.
Bourne, Bash, or Korn shell:

if [ -t 0 ]; then
stty intr ^C
fi

C shell:

test -t 0
if ($status == 0) then
stty intr ^C
endif

(d) Increased the Shell Limits:

Recommended:
Resource                Soft Limit   Hard Limit
Processes               2047         16384
Open File Descriptors   1024         65536
Stack                   10240        10240 - 32768

Set:
Resource                Soft Limit   Hard Limit
Processes               131072       131072
Open File Descriptors   131072       131072
Stack                   32768        32768

Added the following lines to the /etc/security/limits.conf file:

oracle soft nofile 131072
oracle hard nofile 131072
oracle soft nproc 131072
oracle hard nproc 131072
oracle soft core unlimited
oracle hard core unlimited
oracle soft memlock 3500000
oracle hard memlock 3500000
# Recommended stack hard limit 32MB for oracle installations
# oracle hard stack 32768
grid soft nofile 131072
grid hard nofile 131072
grid soft nproc 131072
grid hard nproc 131072
grid soft core unlimited
grid hard core unlimited
grid soft memlock 3500000
grid hard memlock 3500000
# Recommended stack hard limit 32MB for grid installations
# grid hard stack 32768

Added the following line to the /etc/pam.d/login file, if it does not already exist:

session required /lib/security/pam_limits.so

For the Bourne, Bash, or Korn shell, add the following lines to /etc/profile:

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 131072
    ulimit -n 131072
  else
    ulimit -u 131072 -n 131072
  fi
fi
if [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 131072
    ulimit -n 131072
  else
    ulimit -u 131072 -n 131072
  fi
fi

For the C shell (csh or tcsh), add the following lines to /etc/csh.login:

if ( $USER == "oracle" ) then
  limit maxproc 131072
  limit descriptors 131072
endif
if ( $USER == "grid" ) then
  limit maxproc 131072
  limit descriptors 131072
endif

(e) Set the below kernel parameters with the recommended ranges in /etc/sysctl.conf.
This was already set by the installation of the oracle-validated package.

/etc/sysctl.conf

# Kernel sysctl configuration file for Oracle Enterprise Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.
# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 1

# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536

# Controls the default maximum size of a message queue
kernel.msgmax = 8192

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 1073741824

# For 11g, recommended value for file-max is 6815744
fs.file-max = 6815744
# For 10g, uncomment 'fs.file-max 327679', comment other entries for this parameter and re-run sysctl -p
# fs.file-max:327679
kernel.msgmni = 2878
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
net.core.rmem_default = 262144
# For 11g, recommended value for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# For 10g, uncomment 'net.core.rmem_max 2097152', comment other entries for this parameter and re-run sysctl -p
# net.core.rmem_max=2097152
net.core.wmem_default = 262144
# For 11g, recommended value for wmem_max is 1048576
net.core.wmem_max = 1048576
# For 10g, uncomment 'net.core.wmem_max 262144', comment other entries for this parameter and re-run sysctl -p
# net.core.wmem_max:262144
fs.aio-max-nr = 3145728
# For 11g, recommended value for ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
# For 10g, uncomment 'net.ipv4.ip_local_port_range 1024 65000', comment other entries for this parameter and re-run sysctl -p
# net.ipv4.ip_local_port_range:1024 65000
# Added min_free_kbytes 50MB to avoid OOM killer on EL4/EL5
vm.min_free_kbytes = 51200

(f) Repeated this process for all the remaining nodes in the cluster.

SSH User Equivalency Configuration (grid and oracle):

On all the cluster nodes:

su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh

Generate the RSA and DSA keys:

/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa

On node1:

touch ~/.ssh/authorized_keys
cd ~/.ssh
(a) Add these keys to the authorized_keys file.
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
(b) Send this file to node2.
scp authorized_keys node2:.ssh/

On node2:

cd ~/.ssh
(a) Add these keys to the authorized_keys file.
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
(b) Send this file to node3.
scp authorized_keys node3:.ssh/

On node3:

cd ~/.ssh
(a) Add these keys to the authorized_keys file.
cat id_rsa.pub >> authorized_keys
cat id_dsa.pub >> authorized_keys
(b) Send this file to node1 and node2.
scp authorized_keys node1:.ssh/
scp authorized_keys node2:.ssh/

On all the nodes:

chmod 600 ~/.ssh/authorized_keys
ssh node1 date
ssh node2 date
ssh node3 date
ssh node1.hingu.net date
ssh node2.hingu.net date
ssh node3.hingu.net date
ssh node1-prv date
ssh node2-prv date
ssh node3-prv date

Entered 'yes' and continued when prompted. Repeat the above process for the user grid.

Configure the Shared Storage for 11g R2 Grid Infrastructure and RAC Database:

Volume Group: grid
Physical Volume: /dev/sda5 (1st extended partition on the last physical partition of the local disk /dev/sda on nas-server)
Logical Volumes: asmdisk01, asmdisk02, asmdisk03

(a) Connect to the nas-server using https://192.168.1.101:446 (the Openfiler console).
(b) Create the volume group "grid" and the two logical volumes asmdisk01 and asmdisk02 for ASM.
(c) Assign iSCSI targets to these LUNs so that they can be discovered by the clients (cluster nodes node1, node2 and node3).

Here is the process I followed to create the 3rd logical volume, asmdisk03, of size 25 GB:

(1) Clicked the "Add Volumes" link under the "Volumes" tab.
(2) Filled in the appropriate values and pressed "Create".
(3) The 3rd volume asmdisk03 was created.
(4) Assigned the iSCSI target to this newly created volume:
   (a) Clicked the "iSCSI Targets" link under the "Volumes" tab.
   (b) Under the "Target Configuration" sub-tab, entered the value asmdisk03 in the "Target IQN" box and then clicked "Add" as shown in the screen.
   (c) Clicked "Update" on the same screen with all the default values selected.
   (d) Went to the "LUN Mapping" sub-tab, where the iSCSI target is assigned to the newly created logical volume (asmdisk03).
   (e) Clicked "Map" for the volume asmdisk03.
   (f) Went to the "Network ACL" tab and allowed all the 3 RAC nodes to have access to this iSCSI target.
(5) Restarted the iscsi-target service on the NAS (service iscsi-target restart).
(6) Restarted the iscsi service and made it start automatically at system startup (on all the RAC nodes):

chkconfig --level 35 iscsi on
service iscsi restart

(7) Manually discovered the new LUN and made it discoverable automatically at every startup of iscsi. This set of commands is required for every LUN to be discovered on the RAC nodes; it is shown only for asmdisk03 here (on all the RAC nodes):

iscsiadm -m discovery -t sendtargets -p 192.168.1.101
iscsiadm -m node -T iqn.2006-01.com.openfiler:asmdisk03 -p 192.168.1.101 -l
iscsiadm -m node -T iqn.2006-01.com.openfiler:asmdisk03 -p 192.168.1.101 --op update -n node.startup -v automatic

Configuration Files:

/etc/sysconfig/network (nas-server)
NETWORKING=yes
HOSTNAME=nas-server

/etc/sysconfig/network-scripts/ifcfg-eth1 (nas-server):
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:22:6B:BF:43:D6
IPADDR=192.168.1.101
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
TYPE=Ethernet

/etc/rc.local (nas-server)
vgchange -ay
service iscsi-target restart

[Screenshots of the above process: the 3rd volume created; assigning the iSCSI target IQN to LUN asmdisk03.]

Setting up Device Name Persistency (on all the RAC nodes):

Because the OCR and Voting disks reside on ASM, this setup is no longer required unless these files are stored outside of ASM. In this installation, the OCR and Voting files are stored on ASM.
Configure the iSCSI Disk Devices for Oracle ASM with ASMLib:

(a) Partition the disk devices (only from one node):

Format these disks to contain a single primary partition each, to represent the disk at the time of creating the ASM disk using oracleasm.

[root@node1 ~]# fdisk /dev/sdb

The number of cylinders for this disk is set to 24992.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-24992, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-24992, default 24992):
Using default value 24992

Command (m for help): p

Disk /dev/sdb: 26.2 GB, 26206011392 bytes
64 heads, 32 sectors/track, 24992 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1       24992    25591792   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

[root@node1 ~]# fdisk /dev/sdc

The number of cylinders for this disk is set to 25024.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-25024, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-25024, default 25024):
Using default value 25024

Command (m for help): p

Disk /dev/sdc: 26.2 GB, 26239565824 bytes
64 heads, 32 sectors/track, 25024 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       25024    25624560   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.

[root@node1 ~]# fdisk /dev/sdd

The number of cylinders for this disk is set to 25248.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-25248, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-25248, default 25248):
Using default value 25248

Command (m for help): p

Disk /dev/sdd: 26.4 GB, 26474446848 bytes
64 heads, 32 sectors/track, 25248 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       25248    25853936   83  Linux

Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[root@node1 ~]#

(b) Refreshed the kernel on the remaining nodes with the latest partition table using partprobe:

[root@node1 ~]# partprobe

(c) Verified that the below RPMs are installed before configuring the ASM driver.
oracleasm-2.6.18-194.el5-2.0.5-1.el5
oracleasm-support-2.1.3-1.el5
oracleasm-2.6.18-194.el5xen-2.0.5-1.el5
oracleasm-2.6.18-194.el5PAE-2.0.5-1.el5
oracleasm-2.6.18-194.el5debug-2.0.5-1.el5
oracleasmlib-2.0.4-1.el5

(d) Configured ASMLib (all the RAC nodes):

[root@node1 ~]# oracleasm configure -i

(e) Loaded the ASMLib module (all the RAC nodes):

[root@node1 ~]# oracleasm init

(f) Created the ASM disks using oracleasm (ONLY from one of the RAC nodes):

oracleasm createdisk DSK01 /dev/sdb1
oracleasm createdisk DSK02 /dev/sdc1
oracleasm createdisk DSK03 /dev/sdd1
oracleasm scandisks
oracleasm listdisks

(g) On the remaining RAC nodes, simply scanned the ASM disks to instantiate these newly created disks:

oracleasm scandisks
oracleasm listdisks

(h) Verified that these ASM disks can be discovered by the ASM libraries (oracleasmlib) on all the RAC nodes during the installation of Grid Infrastructure:

/usr/sbin/oracleasm-discover 'ORCL:*'

[grid@node1 ~]$ /usr/sbin/oracleasm-discover 'ORCL:*'
Using ASMLib from /opt/oracle/extapi/32/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:DSK01 [51183584 blocks (26205995008 bytes), maxio 512]
Discovered disk: ORCL:DSK02 [51249120 blocks (26239549440 bytes), maxio 512]
Discovered disk: ORCL:DSK03 [51707872 blocks (26474430464 bytes), maxio 512]
[grid@node1 ~]$

With this, the pre-installation steps are completed successfully, and we are ready to install the 11g R2 Grid Infrastructure software next.
RAC Setup Instructions
• Pre-requisite checks to make sure the cluster is set up OK (see the cluvfy sketch after this list).
• Stage all the software on one node, typically Node1
• Prepare the Shared Disk
• Install the Oracle Clusterware (using the push mechanism to install on the
other nodes in the cluster)
• Patch the Oracle Clusterware layer to 10.2.0.3
• Install Oracle ASM Software only Home
• Patch the Oracle ASM Software Home to 10.2.0.3
• Create Node Specific Network Listeners
• Create ASM Instances and initial ASM disk group
• Install Oracle RAC Database Software only Home
• Patch the Oracle RAC Database Software Home to 10.2.0.3
• Create RAC database
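For the pre-requisite checks in the first bullet, the Cluster Verification Utility shipped on the 10.2 Clusterware media can be used. A minimal sketch, assuming the media is staged under /stage/clusterware (the staging path and node names are placeholders):

# run the pre-Clusterware-install checks against the cluster nodes
cd /stage/clusterware
./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose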
Monday, December 9, 2013
The /etc/sysconfig/network-scripts/ifcfg-ethN files
Configuration files for each network device you have, or want to add, on your system are located in the /etc/sysconfig/network-scripts/ directory with Red Hat Linux 6.1 or 6.2, and are named ifcfg-eth0 for the first interface, ifcfg-eth1 for the second, and so on. Following is an example /etc/sysconfig/network-scripts/ifcfg-eth0 file:
DEVICE=eth0
IPADDR=208.164.186.1
NETMASK=255.255.255.0
NETWORK=208.164.186.0
BROADCAST=208.164.186.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
If you want to modify your network address manually, or add a new network on a new interface, edit this file (ifcfg-ethN), or create a new one and make the appropriate changes, then re-initialize the interface (see the sketch after the directive list).
DEVICE=devicename, where devicename is the name of the physical network device.
IPADDR=ipaddr, where ipaddr is the IP address.
NETMASK=netmask, where netmask is the netmask IP value.
NETWORK=network, where network is the network IP address.
BROADCAST=broadcast, where broadcast is the broadcast IP address.
ONBOOT=answer, where answer is yes or no: whether the interface should be activated at boot time.
BOOTPROTO=proto, where proto is one of the following :
none - No boot-time protocol should be used.
bootp - The bootp protocol (now handled by the pump daemon) should be used.
dhcp - The dhcp protocol should be used.
USERCTL=answer, where answer is one of the following:
yes - Non-root users are allowed to control this device.
no - Only the super-user root is allowed to control this device.
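After editing an ifcfg-ethN file, the change takes effect only when the interface is re-initialized; a minimal sketch using the standard Red Hat network scripts:

# re-read the configuration for all interfaces
/etc/rc.d/init.d/network restart
# or bounce just the one device you changed
ifdown eth0 && ifup eth0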
Saturday, December 7, 2013
RCU:6107 DB Init Param Error
While running the Repository Creation Utility (RCU), the following error occurs:
RCU:6107 DB Init Param Error
This can be resolved simply by the following steps (a consolidated sqlplus sketch appears after the steps):
1. Log in to your database as the SYSTEM user.
2. Write > show parameters processes (which will show the current value of processes).
3. If its value is less than 500 then write the following command:
ALTER SYSTEM SET PROCESSES=500 SCOPE=SPFILE;
4. Write > show parameters open_cursors (which will show the current value of open_cursors).
5. If its value is less than 500 then write the following command:
ALTER SYSTEM SET OPEN_CURSORS=500 SCOPE=SPFILE;
6. Restart your DB or system.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 599785472 bytes
Fixed Size 1288820 bytes
Variable Size 167773580 bytes
Database Buffers 427819008 bytes
Redo Buffers 2904064 bytes
Database mounted.
Database opened.
SQL> show parameters processes;
NAME                          TYPE        VALUE
----------------------------- ----------- ------------------------------
aq_tm_processes               integer     0
db_writer_processes           integer     1
gcs_server_processes          integer     0
job_queue_processes           integer     4
log_archive_max_processes     integer     2
processes                     integer     500
7. Start the installation now; the error should no longer appear.
Courtesy: http://mhabib.wordpress.com/2010/07/20/rcu6107-db-init-param-error/
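For convenience, steps 1-6 above can be run in one pass from the shell; a minimal sketch, assuming you can connect as SYSDBA on the database server:

sqlplus / as sysdba <<'EOF'
show parameters processes
show parameters open_cursors
ALTER SYSTEM SET PROCESSES=500 SCOPE=SPFILE;
ALTER SYSTEM SET OPEN_CURSORS=500 SCOPE=SPFILE;
shutdown immediate
startup
EOF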
Oracle Database 11g 11.2.0.1.0
www.tecmint.com/oracle-database-11g-release-2-installation-in-linux/
kamranagayev.com/2011/03/21/step-by-step-installing-oracle11g-on-linux/
Wednesday, December 4, 2013
DSA DSE X.500 in OID 11g
DSE = directory-specific entry
DSA = directory system agent

directory-specific entry (DSE): an entry with contents specific to the DSA holding it. Different DSAs may hold the same DIT name, but with different contents; that is, the contents can be specific to the DSA holding the entry.

directory system agent (DSA): the X.500 term for a directory server.
X.500 Overview
The X.500 directory service is a global directory service. Its components cooperate to manage information about objects such as countries, organizations, people, machines, and so on in a worldwide scope. It provides the capability to look up information by name (a white-pages service) and to browse and search for information (a yellow-pages service).
The information is held in a directory information base (DIB). Entries in the DIB are arranged in a tree structure called the directory information tree (DIT). Each entry is a named object and consists of a set of attributes. Each attribute has a defined attribute type and one or more values. The directory schema defines the mandatory and optional attributes for each class of object (called the object class). Each named object may have one or more object classes associated with it.
The X.500 namespace is hierarchical. An entry is unambiguously identified by a distinguished name (DN). A distinguished name is the concatenation of selected attributes from each entry, called the relative distinguished name (RDN), in the tree along a path leading from the root down to the named entry. For example, the DN cn=John Doe,ou=Sales,o=Example,c=US is built from the RDNs of each entry on the path from the country entry down to the person entry.
Users of the X.500 directory may (subject to access control) interrogate and modify the entries and attributes in the DIB.
Protocols
The X.500 standard defines a protocol (among others) for a client application to access the X.500 directory. Called the Directory Access Protocol (DAP), it is layered on top of the Open Systems Interconnection (OSI) protocol stack.
Brief History of LDAP
Once upon a time, in the dim and distant past (the late 70's - early 80's) the ITU (International Telecommunication Union) started work on the X.400 series of email standards. This email standard required a directory of names (and other information) that could be accessed across networks in a hierarchical fashion not dissimilar to DNS for those familiar with its architecture.
This need for a global network based directory led the ITU to develop the X.500 series of standards and specifically X.519, which defined DAP (Directory Access Protocol), the protocol for accessing a networked directory service.
The X.400 and X.500 series of standards came bundled with the whole OSI stack and were big, fat and consumed serious resources. Standard ITU stuff in fact.
Fast forward to the early 90's and the IETF saw the need for access to global directory services (originally for many of the same email based reasons as the ITU) but without picking up all the gruesome protocol (OSI) overheads and started work on a Lightweight Directory Access Protocol (LDAP). LDAP was designed to provide almost as much functionality as the original X.519 standard but using the TCP/IP protocol - while still allowing inter-working with X.500 based directories. Indeed, X.500 (DAP) inter-working and mapping is still part of the IETF LDAP series of RFCs.
A number of the more serious angst issues in the LDAP specs, most notably the directory root naming convention, can be traced back to X.500 inter-working and the need for global directories.
LDAP - broadly - differs from DAP in the following respects (a small ldapsearch example follows the list):
1. TCP/IP is used in LDAP - DAP uses OSI as the transport/network layers
2. Some reduction in functionality - obscure, duplicate and rarely used features (an ITU speciality) in X.519 were quietly and mercifully dropped.
3. Replacement of some of the ASN.1 (X.519) with a text representation in LDAP (LDAP URLs and search filters). For this point alone the IETF incurs our undying gratitude. Regrettably much ASN.1 notation still remains.
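To illustrate the text representation mentioned in point 3, here is the same query written as an ldapsearch call and as an LDAP URL (OpenLDAP client syntax; the host and base DN are made-up examples):

# search the subtree for people whose cn starts with J, returning cn and mail
ldapsearch -H ldap://ldap.example.com:389 -b "dc=example,dc=com" -s sub "(&(objectClass=person)(cn=J*))" cn mail
# the equivalent LDAP URL:
# ldap://ldap.example.com:389/dc=example,dc=com?cn,mail?sub?(&(objectClass=person)(cn=J*))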
EBS (E-Business Suite) R12 Integration with Oracle Access Manager 11g R2
iamonlinewiki.blogspot.com/2012/10/ebs-e-buisness-suite-r12-integration.html
Note: This post is not a new invention but simplifies the steps that are given in the integration document provided by Oracle. I have successfully integrated using the steps below.
Pre-Requisites:
· Oracle EBS 12.0.6 or 12.1.1 or later installed and configured
  o Patch 10220779 needs to be applied for 12.0.6
  o Patch 8919489 needs to be applied for 12.1.1
  Note: for other versions no patch is required
· OID 11g R1 installed and configured
· OAM 11g R1 or OAM 11g R2 installed and configured
· OHS & Webgate (11g) installed and configured
The High-Level Steps & Components Involved in this Integration
1. EBS R12:
•Depending on the version of EBS, a patch is required before the integration. Refer to the pre-requisites section above for the required patches.
• FND patch (as listed below) needs to be applied
•Site Profiles need to be modified as explained in the sections below
•Register the EBS instance & Home with OID
2. EBS AccessGate:
•The integration uses EBS AccessGate, so EBS AccessGate needs to be installed in either its own domain or an existing domain (not the IAM domain) as a managed server. AccessGate will be deployed in this managed or admin server.
3. OID 11g R1:
•EBS R12 SSO using OAM 11g has a mandatory requirement of OID. The EBS instance & Home will be registered with OID. Also OID is the identity store for user authentication.
•Configure OID to return operational attributes for lookup requests
4. OAM 11g R2/R1:
•Requires Resources, Policies (AuthN & AuthZ) to be created in a Policy Domain.
5. OHS & Webgate:
•Update the OHS proxy configuration so the requests proxy to protected resources from OHS
Detailed Steps for the Integration:
1.Make sure the listed components above are installed before the integration
2. Configure OID to return the operational attributes for lookup requests
•Make LDIF file with below contents
dn: cn=dsaconfig, cn=configsets,cn=oracle internet directory
changetype: modify
add: orclallattrstodn
orclallattrstodn: cn=orcladmin
where the value of orclallattrstodn (cn=orcladmin) is the bind DN used to connect to OID
•Run the ldapmodify command to add the above entry (a sketch follows); the entry can also be modified using the ODSM console
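A minimal sketch of the ldapmodify call, assuming the LDIF above is saved as dsaconfig.ldif and OID listens on the (assumed) default LDAP port 3060; the host and password are placeholders:

ldapmodify -h oid-host.example.com -p 3060 -D "cn=orcladmin" -w <orcladmin_password> -f dsaconfig.ldif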
3. Install weblogic instance or domain for EBS AccessGate
4. Download the EBS AccessGate (Patch 12796012) from oracle website
• Create a folder for EBS AccessGate
•Unzip the downloaded patch
•Create another directory "plan" in patch directory location
•Copy the fndext.jar file from the downloaded location to the 'lib' location of the EBS AccessGate WebLogic domain ($MW_HOME/user_projects/domains/<domain_name>/lib/)
•Restart the EBS AccessGate domain & managed servers
5. Create a directory "public" on the OHS server at OHS_HOME/instances/<instance_name>/config/OHS/ohs1/htdocs/
6. Copy the samplecleanup.html file from the above EBS AccessGate patch location to the OHS server at the OHS_HOME/instances/<instance_name>/config/OHS/ohs1/htdocs/public location
7. Rename the file as "oacleanup.html". Make sure there is no other file with this name. This is the centralized log-out script, which cleans up the cookies and logs out the user.
8. Get DBC file from EBS server. It is required for creating the datasource connection from EBS AccessGate to EBS Database server.
•Login to EBS DB Server.
•Run the environment variables script (Ex: . VIS_<hostname>.env)
•java oracle.apps.fnd.security.AdminDesktop <apps_user>/<apps_password> CREATE NODE_NAME=<accessgate_host> IP_ADDRESS=<accessgate_ip> DBC=<dbc_file_name> (a hypothetical filled-in example appears after these bullets)
•Copy the generated DBC file back to EBS AccessGate patch directory
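A hypothetical filled-in example of the AdminDesktop command above; run it after sourcing the EBS environment script so the Java classpath is set, and note that every value shown is a placeholder:

java oracle.apps.fnd.security.AdminDesktop apps/<apps_password> CREATE NODE_NAME=accessgate.example.com IP_ADDRESS=192.168.2.10 DBC=ebs_vis.dbc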
9. Deploy the AccessGate Application and create the datasource connection to EBS DB server
•cd $MW_HOME/wlserver_10.3/server/bin/
•. setWLSEnv.sh
•Set the DOMAIN_HOME variable (ex: export DOMAIN_HOME=$MW_HOME/user_projects/domains/<domain_name>)
•Go to EBS AccessGate Patch directory and look for the script "txkEBSAuth.xml"
•run the command as shown below
•ant -f txkEBSAuth.xml
•Prompts for the WebLogic user id, password, EBS application schema, deployment plan file location, DBC file location, etc.
•The above command will deploy the AccessGate application into EBS AccessGate server and create datasource connections
Note: The above command works only for a single-instance database. If using Oracle RAC, the datasource connections need to be created manually.
• Restart EBS AccessGate admin and managed servers
10. Assuming webgate is already installed and registered with OAM 11g server, create the required application domain, resources, policies etc... in OAM console
•Create an application Domain like "EBS App Domain" in OAM Console
•Create an EBS identity store (data store) under the System Configuration tab. This is the OID user data store.
•Create an EBS Authentication Module (LDAP) and assign above data store as "identity store"
•Create EBS Authentication Scheme with below parameters
•Authentication Level: 1
•Challenge Method: FORM
•Challenge Redirect URL: http://<oam_host>:14100/oam/server
•Authentication Module: EBS Authentication Module
•Challenge URL: http://<webgate_host>:<webgate_port>/<ebs_context>/OAMLogin.jsp
•Context Type: external
•Create Protected Resource & Public Resource containers under Authentication Policies
•Create Protected Resource & Public Resource containers under Authorization Policies
•Create the below resources for the above application domain
•Protected Resources Policies
•/ebsauth_app (This is the EBS application context)
•/ebsauth_appsp1/…/*
•Public Resource Policies
•/ebsauth_appsp1/OAMLogin.jsp
•/ebsauth_appsp1/ssologout.do
•/ebsauth_appsp1/ssologout_callback
•/ebsauth_appsp1/style
•Un-Protected (Excluded) Resource Policies
•/exclude/index.html
•/public/oacleanup.html
•Open the Protected Resource Policies under EBS Application Domain Authentication Policies
•Assign "EBS Authentication Scheme" as authentication
•Enter "http://<webgate_host>:<webgate_port>/<ebs_context>/OAMLogin.jsp" in the failure URL
•In response tab enter below details
•USER_NAME Header $user.userid
•USER_ORCLGUID Header $user.attr.orclguid
•Open Protected Resource Policies under EBS Application Domain Authorization Policies
•Enter "http://<webgate_host>:<webgate_port>/<ebs_context>/OAMLogin.jsp" in the failure URL
•Check the "Use Implied Constraints" checkbox
•In response tab enter below details
•USER_NAME Header $user.userid
•USER_ORCLGUID Header $user.attr.orclguid
11. Configure redirection configuration in OHS server
•Login to OHS server
•Go to /instances/instance_name/config/OHS/ohs1/
•Update the mod_wl_ohs.conf file with the below contents (the host and port values are placeholders):
<Location /ebsauth_app>
SetHandler weblogic-handler
WebLogicHost <accessgate_host>
WebLogicPort <accessgate_managed_server_port>
</Location>
•Restart the OHS server
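One way to bounce OHS from the command line; a sketch assuming an 11g OHS instance whose component is named ohs1:

$ORACLE_INSTANCE/bin/opmnctl restartproc ias-component=ohs1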
12. Apply the FND patch on EBS Server
•12408040 for 12.0.6 version
•12408233 for 12.1.1 version
•12387976 for 12.1.2 & 12.1.3
13. Restart EBS Application
14. Configure EBS Site Policies as listed below
•Application Authenticate Agent = http://<accessgate_host>:<port>/<ebs_context>/
•AutoLink SSO User = True
•OID Synchronization = True
•Application SSO Type = SSWA w/SSO
15. Restart EBS Application
16. Configure the centralized log-out page
•open "oacleanup.html" file in OHS server
•Update the below lines:
•In the doLoad() function section, add the below lines (the hosts, contexts and domain shown are examples/placeholders):
logoutHandler.addCallback('/ebsauth_fin02/ssologout_callback');
logoutHandler.addCallback('http://webgatehost2.example.com:7780/ebsauth_test/ssologout_callback');
logoutHandler.addCookie('ObSSOCookie','domain=.<yourdomain.com>');
•Restart the OHS Server
17. Test the integration by accessing the EBS application URL (ex: http://<ebs_server>:8000/OA_HTML/AppsLogin). It prompts with the EBS AccessGate login page. Enter user credentials and verify that you are able to access the EBS application.