Essbase Server Clustering
Reference: Posted by Jeff Henkel, Jan 23, 2012, http://blog.checkpointllc.com/essbase-server-clustering
At long last, after many years of customer requests and many unsupported, creative workarounds, Oracle now has an officially supported Essbase clustering method. It is a software-based, active-passive cluster that uses Oracle's OPMN (Oracle Process Manager and Notification Server). Because the Essbase agent needs exclusive locks on the files associated with its applications and databases, only one agent can be active at any given time. What OPMN provides is automatic failover to the passive Essbase agent when the active agent fails, giving you high availability with write-back. The only capability missing is load balancing.
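To make the active-passive idea concrete, here is a minimal sketch of the kind of monitoring loop OPMN performs conceptually: watch the active agent, and when it stops responding, bring up the agent on the passive node. This is not OPMN's actual implementation; the hostnames are hypothetical, port 1423 is the default Essbase agent port, and the ssh-based restart is a stand-in for what OPMN does internally.

```python
import socket
import subprocess
import time

# Hypothetical cluster layout -- hostnames and the start command are
# placeholders, not the real OPMN configuration.
NODES = ["essbase-node1.example.com", "essbase-node2.example.com"]
AGENT_PORT = 1423          # default Essbase agent port
CHECK_INTERVAL_SECS = 10

def agent_is_alive(host: str) -> bool:
    """Return True if something accepts connections on the agent port."""
    try:
        with socket.create_connection((host, AGENT_PORT), timeout=5):
            return True
    except OSError:
        return False

def start_agent_on(host: str) -> None:
    """Stand-in for the failover action; OPMN performs this itself."""
    subprocess.run(["ssh", host, "opmnctl", "startproc",
                    "ias-component=Essbase1"])

active, passive = NODES
while True:
    if not agent_is_alive(active):
        print(f"Agent on {active} is down; failing over to {passive}")
        start_agent_on(passive)
        active, passive = passive, active
    time.sleep(CHECK_INTERVAL_SECS)
```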
This functionality was first introduced with EPM System 11.1.2, but that first release had many issues, so Oracle recommends implementing Essbase clustering on EPM System 11.1.2.1. In addition, you need to apply OPMN patch 11744008, which resolves some known OPMN issues. What Essbase clustering still doesn't give you is live backups, though Oracle is reportedly working on making that a feature of a future release.
An active-passive Essbase cluster can contain two Essbase servers. To add the second server, you install an additional instance of Essbase, either on the same machine (not recommended, since the physical hardware remains a single point of failure) or on another physical server (recommended). The applications must reside on a shared drive, and the cluster name must be unique within the deployment environment.
These types of shared drives are supported:
- A SAN storage device with a shared disk file system supported on the installation platform, such as OCFS.
- A NAS device over a supported network protocol.
Note: Any networked file system that can communicate with a NAS storage device is supported, but the cluster nodes must be able to access the same shared disk over that file system. A SAN or a fast NAS device is recommended because of their shorter I/O latency and failover times.
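Whichever storage you choose, both nodes must see the same files. One hypothetical sanity check is to run a small script like the following on each node; the shared path below is an assumption and should match your actual application mount point.

```python
import socket
import time
from pathlib import Path

# Assumed shared application directory (your ARBORPATH mount); adjust as needed.
SHARED_APP_DIR = Path("/shared/essbase/app")

marker = SHARED_APP_DIR / "cluster_disk_check.txt"

# Append this host's name and a timestamp to a marker file on the shared disk.
with marker.open("a") as f:
    f.write(f"{socket.gethostname()} {time.ctime()}\n")

# After running this on both nodes, entries from both hosts should appear,
# proving that both nodes read and write the same physical storage.
print(marker.read_text())
```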
Initial Essbase cluster setup occurs on the first instance of Essbase, where you define the Essbase cluster name, the local Essbase instance name, and the instance location, using the EPM System Configurator. This version of Essbase still uses the old ARBORPATH variable name, but the variable now defines the location of the application files rather than the location of the Essbase system files, as it did in previous versions of Essbase.
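A quick, purely illustrative way to see this split on a given instance is to print the relevant environment variables; in 11.1.2 the Essbase software location is carried by ESSBASEPATH, while ARBORPATH points only at the (shared) application files.

```python
import os

# In 11.1.2, ARBORPATH holds the application (data) location while
# ESSBASEPATH holds the Essbase software location; earlier releases
# used ARBORPATH for both.
for var in ("ARBORPATH", "ESSBASEPATH"):
    print(f"{var} = {os.environ.get(var, '<not set>')}")
```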
All of this information is stored in the EPM System Registry, which lives in the Shared Services database. When you set up each instance (not only for Essbase, but for the entire system), you connect to the Shared Services database so that the same EPM System Registry is used across the whole deployment. OPMN also reads the Essbase cluster information from the EPM System Registry and tracks the active node there.
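If you want to see what the configurator recorded, the registry can be dumped with the epmsys_registry utility that ships with EPM System. The wrapper below is only a sketch; the instance path is an assumption for a default install, and you would normally just run the utility directly from a shell.

```python
import subprocess

# Assumed EPM instance location; adjust for your deployment.
EPM_INSTANCE_BIN = "/Oracle/Middleware/user_projects/epmsystem1/bin"

# "report deployment" writes a summary of everything registered in the
# EPM System Registry, including configured Essbase clusters.
subprocess.run(
    [f"{EPM_INSTANCE_BIN}/epmsys_registry.sh", "report", "deployment"],
    check=True,
)
```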
When you set up the second instance of Essbase and connect to the same EPM System Registry, you are presented with an option to join the cluster that was configured on the first instance. All information about the previously configured cluster is populated automatically and grayed out. Even after you complete the setup with the EPM System Configurator, there are still quite a few manual steps required to update the OPMN configuration files on each Essbase instance. Consult the Oracle EPM System High Availability Guide and the Oracle EPM System Installation and Configuration Guide for detailed information on the manual changes required to complete the setup.
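After making the documented opmn.xml edits, it is worth confirming that the two nodes agree. The sketch below (the opmn.xml path assumes a default 11.1.2 instance layout; adjust for yours) prints each OPMN discovery list so you can compare the output from both nodes; the guides remain the authority on what the entries should contain.

```python
import xml.etree.ElementTree as ET

# Assumed location of opmn.xml in a default 11.1.2 instance; adjust as needed.
OPMN_XML = "/Oracle/Middleware/user_projects/epmsystem1/config/OPMN/opmn/opmn.xml"

tree = ET.parse(OPMN_XML)

# opmn.xml may carry an XML namespace, so match on the local tag name only.
for elem in tree.getroot().iter():
    if elem.tag.rpartition("}")[2] == "discover":
        print("discover list:", elem.get("list"))
```

Run it on each Essbase node; mismatched discovery lists are a common reason the two OPMN instances never find each other. Happy Clustering!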