Thursday, March 31, 2011

OID Commands

1. ./ldapbind -h <hostname> -p <port> -D "cn=orcladmin" -w <password>

This command is used to check connectivity to the OID server from another application.
It can also be used to verify the orcladmin user's password, the port number and the hostname.

2. ./ldapsearch -p <port> -h <hostname> -D "<service account DN>" -w "<service account password>" -b "" -s base "objectclass=*" highestCommittedUSN

This command is used to get the Last Applied Change Number (LACN). It is mainly used when OID-AD synchronization is implemented: whenever there is a change in AD, the LACN changes. If OID-AD synchronization fails and a new user is not getting synchronized to OID, you can compare the LACN in OID (which you can get from ODM) with the result of the ldapsearch command above.

3. ./dipassistant bootstrap -host <hostname> -port <port> -profile ActiveChgImp2 -dn cn=orcladmin

This command is used to bootstrap OID. It is also used when OID-AD synchronization is implemented: it loads users from AD into OID, based on the container you specify in the domain rule.

4. ldapsearch command to find the number of users in OID (a sketch follows).
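A minimal sketch of such a command (the container cn=Users,dc=example,dc=com and the credentials are hypothetical; each matching entry prints its DN, so the line count approximates the user count, though the exact output format varies between ldapsearch implementations):

# Count person entries under a hypothetical users container
./ldapsearch -h <hostname> -p <port> -D "cn=orcladmin" -w <password> \
  -b "cn=Users,dc=example,dc=com" -s sub "objectclass=person" dn | wc -l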

5. ./dipassistant mp -host <hostname> -p <port> -dn "cn=orcladmin" -profile ActiveChgImp1 odip.profile.mapfile=<location of activechg_emc.map2.master>

This command is used to update activechg_emc.map2.master after adding a domain rule.


6. ldapsearch -p <port> -h <hostname> -b "<container to search from, e.g. ou=internal users,dc=corp,dc=oracle,dc=com>" -s sub "cn=perftest001" createtimestamp orclguid

This command searches OID for the user perftest001 and displays its createtimestamp and orclguid.

Analyze Access Log

Access Log

Related Modules: mod_log_config, mod_setenvif
Related Directives: CustomLog, LogFormat, SetEnvIf


The server access log records all requests processed by the server. The location and content of the access log are controlled by the CustomLog directive. The LogFormat directive can be used to simplify the selection of the contents of the logs. This section describes how to configure the server to record information in the access log.

Of course, storing the information in the access log is only the start of log management. The next step is to analyze this information to produce useful statistics. Log analysis in general is beyond the scope of this document, and not really part of the job of the web server itself. For more information about this topic, and for applications which perform log analysis, check the Open Directory or Yahoo.

Various versions of Apache httpd have used other modules and directives to control access logging, including mod_log_referer, mod_log_agent, and the TransferLog directive. The CustomLog directive now subsumes the functionality of all the older directives.

The format of the access log is highly configurable. The format is specified using a format string that looks much like a C-style printf(1) format string. Some examples are presented in the next sections. For a complete list of the possible contents of the format string, see the mod_log_config format strings.

Common Log Format
A typical configuration for the access log might look as follows.

LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog logs/access_log common

This defines the nickname common and associates it with a particular log format string. The format string consists of percent directives, each of which tells the server to log a particular piece of information. Literal characters may also be placed in the format string and will be copied directly into the log output. The quote character (") must be escaped by placing a backslash before it to prevent it from being interpreted as the end of the format string. The format string may also contain the special control characters "\n" for new-line and "\t" for tab.

The CustomLog directive sets up a new log file using the defined nickname. The filename for the access log is relative to the ServerRoot unless it begins with a slash.

The above configuration will write log entries in a format known as the Common Log Format (CLF). This standard format can be produced by many different web servers and read by many log analysis programs. The log file entries produced in CLF will look something like this:

127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326

Each part of this log entry is described below.

127.0.0.1 (%h)
This is the IP address of the client (remote host) which made the request to the server. If HostnameLookups is set to On, then the server will try to determine the hostname and log it in place of the IP address. However, this configuration is not recommended since it can significantly slow the server. Instead, it is best to use a log post-processor such as logresolve to determine the hostnames. The IP address reported here is not necessarily the address of the machine at which the user is sitting. If a proxy server exists between the user and the server, this address will be the address of the proxy, rather than the originating machine.
- (%l)
The "hyphen" in the output indicates that the requested piece of information is not available. In this case, the information that is not available is the RFC 1413 identity of the client determined by identd on the clients machine. This information is highly unreliable and should almost never be used except on tightly controlled internal networks. Apache httpd will not even attempt to determine this information unless IdentityCheck is set to On.
frank (%u)
This is the userid of the person requesting the document as determined by HTTP authentication. The same value is typically provided to CGI scripts in the REMOTE_USER environment variable. If the status code for the request (see below) is 401, then this value should not be trusted because the user is not yet authenticated. If the document is not password protected, this part will be "-" just like the previous one.
[10/Oct/2000:13:55:36 -0700] (%t)
The time that the request was received. The format is:
[day/month/year:hour:minute:second zone]
day = 2*digit
month = 3*letter
year = 4*digit
hour = 2*digit
minute = 2*digit
second = 2*digit
zone = ('+' | '-') 4*digit

It is possible to have the time displayed in another format by specifying %{format}t in the log format string, where format is as in strftime(3) from the C standard library.
"GET /apache_pb.gif HTTP/1.0" (\"%r\")
The request line from the client is given in double quotes. The request line contains a great deal of useful information. First, the method used by the client is GET. Second, the client requested the resource /apache_pb.gif, and third, the client used the protocol HTTP/1.0. It is also possible to log one or more parts of the request line independently. For example, the format string "%m %U%q %H" will log the method, path, query-string, and protocol, resulting in exactly the same output as "%r".
200 (%>s)
This is the status code that the server sends back to the client. This information is very valuable, because it reveals whether the request resulted in a successful response (codes beginning in 2), a redirection (codes beginning in 3), an error caused by the client (codes beginning in 4), or an error in the server (codes beginning in 5). The full list of possible status codes can be found in the HTTP specification (RFC2616 section 10).
2326 (%b)
The last part indicates the size of the object returned to the client, not including the response headers. If no content was returned to the client, this value will be "-". To log "0" for no content, use %B instead.
Combined Log Format
Another commonly used format string is called the Combined Log Format. It can be used as follows.

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined
CustomLog log/access_log combined

This format is exactly the same as the Common Log Format, with the addition of two more fields. Each of the additional fields uses the percent-directive %{header}i, where header can be any HTTP request header. The access log under this format will look like:

127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98; I ;Nav)"

The additional fields are:

"http://www.example.com/start.html" (\"%{Referer}i\")
The "Referer" (sic) HTTP request header. This gives the site that the client reports having been referred from. (This should be the page that links to or includes /apache_pb.gif).
"Mozilla/4.08 [en] (Win98; I ;Nav)" (\"%{User-agent}i\")
The User-Agent HTTP request header. This is the identifying information that the client browser reports about itself.
Multiple Access Logs
Multiple access logs can be created simply by specifying multiple CustomLog directives in the configuration file. For example, the following directives will create three access logs. The first contains the basic CLF information, while the second and third contain referer and browser information. The last two CustomLog lines show how to mimic the effects of the RefererLog and AgentLog directives.

LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog logs/access_log common
CustomLog logs/referer_log "%{Referer}i -> %U"
CustomLog logs/agent_log "%{User-agent}i"

This example also shows that it is not necessary to define a nickname with the LogFormat directive. Instead, the log format can be specified directly in the CustomLog directive.

Conditional Logs
There are times when it is convenient to exclude certain entries from the access logs based on characteristics of the client request. This is easily accomplished with the help of environment variables. First, an environment variable must be set to indicate that the request meets certain conditions. This is usually accomplished with SetEnvIf. Then the env= clause of the CustomLog directive is used to include or exclude requests where the environment variable is set. Some examples:

# Mark requests from the loop-back interface
SetEnvIf Remote_Addr "127\.0\.0\.1" dontlog
# Mark requests for the robots.txt file
SetEnvIf Request_URI "^/robots\.txt$" dontlog
# Log what remains
CustomLog logs/access_log common env=!dontlog

As another example, consider logging requests from english-speakers to one log file, and non-english speakers to a different log file.

SetEnvIf Accept-Language "en" english
CustomLog logs/english_log common env=english
CustomLog logs/non_english_log common env=!english

Although we have just shown that conditional logging is very powerful and flexible, it is not the only way to control the contents of the logs. Log files are more useful when they contain a complete record of server activity. It is often easier to simply post-process the log files to remove requests that you do not want to consider.

Log Rotation
On even a moderately busy server, the quantity of information stored in the log files is very large. The access log file typically grows 1 MB or more per 10,000 requests. It will consequently be necessary to periodically rotate the log files by moving or deleting the existing logs. This cannot be done while the server is running, because Apache will continue writing to the old log file as long as it holds the file open. Instead, the server must be restarted after the log files are moved or deleted so that it will open new log files.

By using a graceful restart, the server can be instructed to open new log files without losing any existing or pending connections from clients. However, in order to accomplish this, the server must continue to write to the old log files while it finishes serving old requests. It is therefore necessary to wait for some time after the restart before doing any processing on the log files. A typical scenario that simply rotates the logs and compresses the old logs to save space is:

mv access_log access_log.old
mv error_log error_log.old
apachectl graceful
sleep 600
gzip access_log.old error_log.old

Another way to perform log rotation is using piped logs as discussed in the next section.

Piped Logs
Apache httpd is capable of writing error and access log files through a pipe to another process, rather than directly to a file. This capability dramatically increases the flexibility of logging, without adding code to the main server. In order to write logs to a pipe, simply replace the filename with the pipe character "|", followed by the name of the executable which should accept log entries on its standard input. Apache will start the piped-log process when the server starts, and will restart it if it crashes while the server is running. (This last feature is why we can refer to this technique as "reliable piped logging".)

Piped log processes are spawned by the parent Apache httpd process, and inherit the userid of that process. This means that piped log programs usually run as root. It is therefore very important to keep the programs simple and secure.

One important use of piped logs is to allow log rotation without having to restart the server. The Apache HTTP Server includes a simple program called rotatelogs for this purpose. For example, to rotate the logs every 24 hours, you can use:

CustomLog "|/usr/local/apache/bin/rotatelogs /var/log/access_log 86400" common

Notice that quotes are used to enclose the entire command that will be called for the pipe. Although these examples are for the access log, the same technique can be used for the error log.

A similar but much more flexible log rotation program called cronolog is available at an external site.

As with conditional logging, piped logs are a very powerful tool, but they should not be used where a simpler solution like off-line post-processing is available.

Virtual Hosts
When running a server with many virtual hosts, there are several options for dealing with log files. First, it is possible to use logs exactly as in a single-host server. Simply by placing the logging directives outside the <VirtualHost> sections in the main server context, it is possible to log all requests in the same access log and error log. This technique does not allow for easy collection of statistics on individual virtual hosts.

If CustomLog or ErrorLog directives are placed inside a <VirtualHost> section, all requests or errors for that virtual host will be logged only to the specified file. Any virtual host which does not have logging directives will still have its requests sent to the main server logs. This technique is very useful for a small number of virtual hosts, but if the number of hosts is very large, it can be complicated to manage. In addition, it can often create problems with insufficient file descriptors.

For the access log, there is a very good compromise. By adding information on the virtual host to the log format string, it is possible to log all hosts to the same log, and later split the log into individual files. For example, consider the following directives.

LogFormat "%v %l %u %t \"%r\" %>s %b" comonvhost
CustomLog logs/access_log comonvhost

The %v is used to log the name of the virtual host that is serving the request. Then a program like split-logfile can be used to post-process the access log in order to split it into one file per virtual host.
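For example, a sketch assuming the split-logfile support script shipped with Apache is on the path:

# Reads a log whose first field is %v on stdin; writes one <virtualhost>.log per host
split-logfile < access_log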

How to enable and collect debug for HTTP, OC4J and OPMN

For troubleshooting, perform the following steps to enable and collect debug information from the HTTP, OC4J and OPMN services.

Run the following script on all middle tiers as the application file system owner. A file named after the server name and date will be created on each server under the /tmp directory. Upload the resulting zip file from every server to your service request on Metalink.
zip -r /tmp/`uname -n`_`date +%m%d%y.%H%M`_iAS_CONFIG.zip \
$ORA_CONFIG_HOME/10.1.3/Apache/Apache/conf \
$ORA_CONFIG_HOME/10.1.3/config \
$INST_TOP/pids/10.1.3/Apache \
$ORA_CONFIG_HOME/10.1.3/j2ee/ \
$ORA_CONFIG_HOME/10.1.3/javacache/admin \
$ORA_CONFIG_HOME/10.1.3/network/admin \
$ORA_CONFIG_HOME/10.1.3/opmn
First, shut down the HTTP server, OC4J and OPMN services; a sketch of the commands follows.
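A minimal sketch, assuming an E-Business Suite R12 middle tier where the service control scripts live under $ADMIN_SCRIPTS_HOME (a standalone installation would use opmnctl directly):

# Stop Apache, then OPMN and all OPMN-managed processes (including OC4J)
$ADMIN_SCRIPTS_HOME/adapcctl.sh stop
$ADMIN_SCRIPTS_HOME/adopmnctl.sh stopall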

Then back up the existing log files for safekeeping:
$LOG_HOME/ora/10.1.3/Apache
$LOG_HOME/ora/10.1.3/j2ee
$LOG_HOME/ora/10.1.3/opmn

1. Enable HTTP ODL logging

Edit the httpd.conf file, adding the following to the end of $ORA_CONFIG_HOME/10.1.3/Apache/Apache/conf/httpd.conf:

OraLogMode oracle
OraLogSeverity TRACE:32
OraLogDir $LOG_HOME/ora/10.1.3/Apache/oracle
Please use the full path to $LOG_HOME, e.g.
OraLogDir /u01/inst/apps/JCB_atg/logs/ora/10.1.3/Apache/oracle

Warning: the log.xml file created by HTTP ODL logging can get very large.
Monitor this file and keep its size under the 2 GB limit; exceeding it can cause login issues.

2. Increase OC4J logging for a Java application

Edit j2ee-logging.xml, adjusting the following in the file:
$ORA_CONFIG_HOME/10.1.3/j2ee/<oc4j group, e.g. oacore>/config/j2ee-logging.xml


Message type: INTERNAL_ERROR, ERROR, WARNING, NOTIFICATION & TRACE
Message level: 1-32 (1 most severe, 32 least)
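A hedged sketch of the resulting logger entry (the logger and handler names are taken from a typical default j2ee-logging.xml and may differ in yours):

<!-- j2ee-logging.xml: raise the oracle logger to the most verbose level -->
<logger name="oracle" level="TRACE:32" useParentHandlers="false">
    <handler name="oc4j-handler"/>
</logger>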




3. Edit orion-web.xml, adjusting the following in the file:
$ORA_CONFIG_HOME/10.1.3/j2ee/oacore/application-deployments/oacore/html/orion-web.xml

Set the debug_mode parameter to true, as in the sketch below.
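A hedged sketch, assuming the parameter is carried as a standard init-param (the enclosing element depends on your descriptor):

<init-param>
    <param-name>debug_mode</param-name>
    <param-value>true</param-value>
</init-param>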

4. Increase OPMN logging

Edit opmn.xml, adjusting the following in the file:
$ORA_CONFIG_HOME/10.1.3/opmn/conf/opmn.xml
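A hedged sketch of the kind of change involved: in a 10.1.3 opmn.xml the verbosity is controlled by the level attribute of the <log-file> elements under <notification-server> and <process-manager> (the path, rotation size and chosen level here are assumptions):

<!-- opmn.xml: raise OPMN log verbosity from the usual default of 4 -->
<log-file path="$ORACLE_HOME/opmn/logs/opmn.log" level="9" rotation-size="1500000"/>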



Start up the HTTP server, OC4J and OPMN.

Reproduce the issue.

Tuesday, March 29, 2011

Dealing with Transaction Time-outs in Oracle BPEL & ESB Processes

This post covers the various transaction timeout properties and configurations of Oracle BPEL/ESB projects that can help mitigate runtime timeout issues arising from performance problems.

We had a synchronous BPEL process that would calculate, extract and generate generic XML content for use by other child BPEL processes, which were designated for various operations. The synchronous process was called by a main BPEL process that managed the entire workflow. We had no issues with this process as long as the XML content was generated quickly, within the transaction time-out limits.

For transactions involving complex logic calculations and huge data extractions, however, the synchronous BPEL process took so long to respond that transactions timed out and the BPEL processes never ran to completion.


To overcome this issue, we increased the transaction time-out parameters that can be configured in the Oracle SOA Suite. Please note that the following settings apply to an Oracle SOA Suite 10.1.3.3 advanced installation.

Config 1:

When a receive activity in a BPEL process is awaiting a response from an asynchronous BPEL process after an invoke, and the transaction times out, configure the following setting in the BPEL console to increase the time-out duration.

This is the maximum time the process receiver will wait for a result before returning. Results from asynchronous BPEL processes are retrieved synchronously via a receiver that will wait for a result from the container.

•Log in to the BPEL Console
•Click on Manage BPEL Domain
•Update the syncMaxWaitTime property to a higher value (the default is 45 seconds) depending on the requirement
Config 2:

Modify the transaction-timeout property in the orion-ejb-jar.xml file available at the following location:
$SOA_Home\j2ee\oc4j_soa\application-deployments\orabpel\ejb_ob_engine\orion-ejb-jar.xml

There are several session beans in this config file; all of them should be configured with the same value for transaction-timeout, as in the sketch below.
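A hedged sketch of the attribute on one bean (the bean name and the 7200-second value are illustrative only):

<!-- orion-ejb-jar.xml: set the same value on every session-deployment -->
<session-deployment name="CubeEngineBean" transaction-timeout="7200">
    <!-- existing bean configuration unchanged -->
</session-deployment>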

Config 3:

Modify the transaction-timeout property in the transaction-manager.xml file available at the following location:
$SOA_Home\j2ee\oc4j_soa\config\transaction-manager.xml

Please note that this timeout value should be greater than the values configured in Configs 1 and 2. In essence, it should be larger than syncMaxWaitTime and the transaction-timeout configured in the orion-ejb-jar.xml file [Config 2]; a sketch of the setting follows.
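A hedged sketch of the corresponding attribute (the value is illustrative; keep it larger than the Config 1 and Config 2 values):

<!-- transaction-manager.xml: global OC4J transaction timeout, in seconds -->
<transaction-manager transaction-timeout="7200">
    <!-- remaining configuration unchanged -->
</transaction-manager>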

Config 4:

While using Oracle Enterprise Service Bus (ESB), there can be transaction timeouts while transacting with other BPEL processes or during deployment of ESB projects via Oracle JDeveloper. For the former, Config 3 suffices; for the latter, configure the xa_timeout parameter in esb_config.ini, located in the SOA Suite at the following location:
$SOA_Home\integration\esb\esb_config.ini

Note that whenever ESB initiates a transaction, the timeout specified in esb_config.ini takes precedence; a sketch of the entry follows.
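A hedged sketch of the entry (the value is illustrative):

# esb_config.ini
xa_timeout = 7200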

The above specs are gathered from the following resources:
http://download.oracle.com/docs/cd/B31018_01/relnotes.1013/relnotes/esb.htm
http://download-west.oracle.com/docs/cd/B31017_01/integrate.1013/b28981/app_trblshoot.htm#sthref3957

http://sathyam-soa.blogspot.com/2008/12/dealing-transaction-time-outs-in-oracle.html

Adapters used in ESB

1. Technology adapters (File, FTP, MQSeries, Database, AQ and JMS)
2. Oracle AS Adapters for Oracle Applications
3. Third-party adapters:
   Applications (JD Edwards OneWorld, SAP, PeopleSoft, Siebel)
   Legacy (CICS, IMS/DB, IMS/TM, Tuxedo and VSAM)


ADAPTER LOCATION IN ESB

cd $SOA_HOME/j2ee/oc4j_soa/application-deployments/default
ls
AppsAdapter datasources defaultWebApp FtpAdapter jmsrouter_ejb MQSeriesAdapter OracleOJMS
AqAdapter DbAdapter FileAdapter JmsAdapter jmsrouter_web OracleASjms


$SOA_HOME/j2ee/oc4j_esbdt/connectors/DbAdapter/DbAdapter/META-INF/oc4j-ra.xml
$SOA_HOME/j2ee/oc4j_esbdt/connectors/datasources/datasources/META-INF/oc4j-ra.xml
$SOA_HOME/j2ee/oc4j_esbdt/connectors/AppsAdapter/AppsAdapter/META-INF/oc4j-ra.xml
$SOA_HOME/j2ee/oc4j_esbdt/connectors/MQSeriesAdapter/MQSeriesAdapter/META-INF/oc4j-ra.xml
$SOA_HOME/j2ee/oc4j_esbdt/connectors/FileAdapter/FileAdapter/META-INF/oc4j-ra.xml
$SOA_HOME/j2ee/oc4j_esbdt/connectors/FtpAdapter/FtpAdapter/META-INF/oc4j-ra.xml
$SOA_HOME/j2ee/oc4j_esbdt/connectors/OracleASjms/OracleASjms/META-INF/oc4j-ra.xml
$SOA_HOME/j2ee/oc4j_esbdt/connectors/AqAdapter/AqAdapter/META-INF/oc4j-ra.xml

oc4j-ra.xml (Adapter Integration with OAS Components)

oc4j-ra.xml is used to implement adapter integration with OAS components.

•Deployed as J2CA resource adapters during installation
•A J2CA resource adapter is packaged into a RAR file in JAR format
•The deployment descriptor for the RAR is ra.xml
•OC4J generates the oc4j-ra.xml deployment descriptor during deployment
•oc4j-ra.xml contains connection information, the JNDI name, etc.

oc4j-ra.xml is the OC4J-specific deployment descriptor for a resource adapter; hence the name oc4j-ra: Oracle Containers for Java - Resource Adapter.

It contains deployment configurations for deploying resource adapters to OC4J, including EIS connection information as specified in the resource adapter's deployment descriptor, the JNDI name to be used, connection pooling parameters, and the resource principal mapping mechanism and configuration; a sketch of a typical entry follows.
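A hedged sketch of a typical DbAdapter connector-factory entry in oc4j-ra.xml (the JNDI location and data source names are illustrative):

<connector-factory location="eis/DB/myDataSource" connector-name="Database Adapter">
    <config-property name="xADataSourceName" value="jdbc/myXADataSource"/>
    <config-property name="dataSourceName" value=""/>
</connector-factory>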


The location of oc4j-ra.xml is:

ORACLE_HOME/j2ee/<oc4j instance>/application-deployments/default/<adapter>/oc4j-ra.xml
ORACLE_HOME/j2ee/<oc4j instance>/connectors/datasources/datasources/META-INF/oc4j-ra.xml

and in the same location we can find ra.xml as well.

How to increase the performance of SOA Server

Change the default settings in oc4j-ra.xml to increase the performance of the SOA server.

We have to change it in two locations:
1. SOA_HOME/j2ee/oc4j_soa/application-deployments/default/DbAdapter/oc4j-ra.xml

2. SOA_HOME/j2ee/oc4j_soa/application-deployments/default/AqAdapter/oc4j-ra.xml

Follow the steps below to make the change in the DbAdapter oc4j-ra.xml, then follow the same steps for the AqAdapter.

1) Location SOA_HOME/j2ee/oc4j_soa/application-deployments/default/DbAdapter

2) Open the file oc4j-ra.xml

Default Value:

New Value:
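As a hedged illustration of the kind of change this tuning typically involves (the element names follow OC4J's connection-pooling schema, but all values are assumptions; confirm the exact before/after values against the relevant Oracle note):

<!-- Before (default): no connection pooling -->
<connection-pooling use="none"></connection-pooling>

<!-- After: private fixed-size pool (illustrative values) -->
<connection-pooling use="private">
    <property name="waitTimeout" value="300"/>
    <property name="scheme" value="fixed_wait"/>
    <property name="maxConnections" value="50"/>
    <property name="minConnections" value="5"/>
</connection-pooling>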

3) Similarly, open the file oc4j-ra.xml present under location:
SOA_HOME/j2ee/oc4j_soa/application-deployments/default/AqAdapter

4) Change the default value to the new value, as shown above for the DbAdapter.

5) Bounce the SOA Server.

Friday, March 25, 2011

Developing Web Service

You've heard the hype, and your head is probably dizzy from all the acronyms. So just what are Web Services, and how can you use them? This series of articles is intended to demystify Web Services and show, step-by-step, how to build, deploy, use, and find them.

Basic Web Services aren't very difficult to create. To prove this point, we'll show you, in this first article, how to construct a Web Service in about 30 minutes. In subsequent articles we'll delve deeper into Web Services and explain the following topics in detail:

•SOAP messaging
•WSDL definitions and their relationship to code
•Publishing services to a UDDI directory
•Exposing legacy applications as Web Services
•Advanced topics such as security
In this introductory article, we begin with a programmatic definition of Web Services, then quickly move on to show a simple Java class that calls and executes a Web Service. All of our examples will be in Java. We've created our examples using a free set of tools and a runtime environment from Systinet (details on how to access and download this software are in the Installing software chapter). You don't have to use these products to understand the examples, but we strongly recommend it. The concepts we introduce and the code we create are generally applicable and relatively independent of the tools used. We assume some knowledge of XML, but none of Web Services.

We believe that J2EE is the most mature architecture for business logic implementation, and our goal is to introduce Web Services as the natural extension to the existing J2EE component model, providing an industry standard XML-based protocol along with unified component description and discovery. This gives existing J2EE-based systems a much broader reach than ever before and makes J2EE a much better option for implementation of core business logic within the typically heterogeneous environment of corporate information systems.


The Web Service - a programmatic definition
A Web Service is a software component with the following features:

•It is accessible through a SOAP (Simple Object Access Protocol) interface.
•Its interface is described in a WSDL (Web Service Description Language) document.
SOAP is an extensible XML messaging protocol that forms the foundation for Web Services. SOAP provides a simple and consistent mechanism that allows one application to send an XML message to another application. A SOAP message is a one-way transmission from a SOAP sender to a SOAP receiver, and any application can participate in an exchange as either sender or receiver. SOAP messages may be combined to support many communication behaviors, including request/response, solicit response, one-way asynchronous messaging, or event notification. SOAP is a high-level protocol that defines only the message structure and a few rules for message processing. It is completely independent of the underlying transport protocol, so SOAP messages can be exchanged over HTTP, JMS, or mail transport protocols. Currently the HTTP protocol is the most frequently used transport for SOAP messages. We'll show some sample SOAP messages later in this article.

WSDL is an XML document that contains a set of definitions that describe a Web Service. It provides all the information needed to access and use a Web Service. A WSDL document describes what the Web Service does, how it communicates, and where it resides. You use the WSDL document at development time to create your service interfaces. Some SOAP implementations, including Systinet WASP, also use WSDL at runtime to support dynamic communications.


Installing the software
REQUIREMENTS:We assume that you have a Java 1.3.x SDK and a standard HTTP browser installed on your system. The JAVA_HOME environment variable should point to your Java 1.3.x SDK installation directory.

If you want to follow along with the demo, you'll need to download WASP Advanced from Systinet. Unpack the downloaded package to a local disk (preferably c:) and run the install script from the bin subdirectory of the WASP Advanced installation.

In our examples we assume that we have unpacked WASP to the c:\wasp-advanced directory. You'll also need to download the demo sources and unpack them into the c:\wasp_demo directory. If you choose different directory names, please update the env.bat script appropriately (change the WASP_HOME and WASP_DEMO environment variables to point to the WASP installation directory and demo directory respectively).


Implementing a simple Web Service
We'll follow these steps to create our simple Web Service:

•Create the Web Service business logic. First we need to write a Java class that implements the Web Service business logic. In this case, our business logic will be a simple Java class that simulates a stock quote service.


•Deploy the Java class to the SOAP server. Next we need to turn the Java class into a Web Service. We'll show how to deploy the Java class to a SOAP server using the WASP deployment tool.


•Generate client access classes. A client application uses a proxy object to access a Web Service. At request time, the proxy accepts a Java method call from the application and translates it into an XML message. At response time, the proxy receives the SOAP reply message, translates it into Java objects, and returns the results to the client application.


•Client application development. The client application treats the proxy as a standard Java object that facilitates the communication with a Web Service.
NOTE: We're using MS Windows notation for our commandline commands. If you have a Unix-based environment, please make appropriate adjustments to these scripts.

So let's start with a simple Java class that implements a stock quote lookup function. Please look at the Java code below:

NOTE: All Java sources mentioned in this example can be found in the src subdirectory of the unpacked demo sources archive. All of them reside in the com.systinet.demos.stock package.



/*
* StockQuoteService.java
*
* Created on Sat 13th Oct 2001, 15:25
*/

package com.systinet.demos.stock;

/**
* Simple stock quote service
* @author zdenek
* @version 1.0
*/
public class StockQuoteService {

    public double getQuote(String symbol) {
        if (symbol != null && symbol.equalsIgnoreCase("SUNW"))
            return 10;
        if (symbol != null && symbol.equalsIgnoreCase("MSFT"))
            return 50;
        if (symbol != null && symbol.equalsIgnoreCase("BEAS"))
            return 11;
        return 0;
    }

    public java.util.LinkedList getAvailableStocks() {
        java.util.LinkedList list = new java.util.LinkedList();
        list.add("SUNW");
        list.add("MSFT");
        list.add("BEAS");
        return list;
    }

}



Figure 1: Web Service code (StockQuoteService.java)
Our example is yet another simple stock quote system (we've seen so many of these, developers should be registered traders by now), but it illustrates how easily Web Services can be created and deployed. In our example, we're going to retrieve the price of three stocks (BEAS, MSFT, and SUNW).

The easiest way to turn our class into a Web Service is to compile our Java classes and then use the deployment tool to deploy them to the Web Services runtime.

NOTE: You'll find all scripts in the bin subdirectory of the unpacked demo sources archive.

NOTE: Before running the demo you need to install the Systinet SOAP framework. Please see the Installation chapter of this document for step-by-step installation.

First we start the Web Service runtime server with the startserver.bat script. Then we compile StockQuoteService.java and deploy the compiled class to the SOAP server using the deploy.bat commandline script.

Next we will make sure that everything worked properly by opening the administration console in the HTTP browser. Click on the Refresh button to show a list of all the packages deployed on the server. We should see the StockService package with one StockQuoteService deployed on the server. Notice that the Web Service runtime automatically generated the WSDL file and made it publicly available at http://localhost:6060/StockQuoteService/.


[Generated WSDL (summary): the file declares targetNamespace='http://idoox.com/wasp/tools/java2wsdl/output/com/systinet/demos/stock/' along with the standard wsdl, soap, http, mime, xsd, xsi and SOAP-ENC namespaces and an ns0 prefix for http://idoox.com/containers, and then defines the messages, portType, SOAP binding and service described below.]

Figure 2: Generated WSDL file (StockQuoteService.wsdl)
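Since the walkthrough below refers to specific elements, here is a minimal WSDL sketch consistent with that description (the element layout and binding names are assumptions; the real generated file also covers getAvailableStocks and its LinkedList return type):

<definitions name='StockQuoteService'
    targetNamespace='http://idoox.com/wasp/tools/java2wsdl/output/com/systinet/demos/stock/'
    xmlns='http://schemas.xmlsoap.org/wsdl/'
    xmlns:tns='http://idoox.com/wasp/tools/java2wsdl/output/com/systinet/demos/stock/'
    xmlns:xsd='http://www.w3.org/2001/XMLSchema'
    xmlns:soap='http://schemas.xmlsoap.org/wsdl/soap/'>

  <message name='StockQuote_getQuote_Request'>
    <part name='p0' type='xsd:string'/>
  </message>
  <message name='StockQuote_getQuote_Response'>
    <part name='response' type='xsd:double'/>
  </message>

  <portType name='StockQuoteService'>
    <operation name='getQuote'>
      <input message='tns:StockQuote_getQuote_Request'/>
      <output message='tns:StockQuote_getQuote_Response'/>
    </operation>
    <!-- getAvailableStocks is declared the same way -->
  </portType>

  <binding name='StockQuoteServiceBinding' type='tns:StockQuoteService'>
    <soap:binding style='rpc' transport='http://schemas.xmlsoap.org/soap/http'/>
    <operation name='getQuote'>
      <soap:operation soapAction=''/>
      <input>
        <soap:body use='encoded'
            encodingStyle='http://schemas.xmlsoap.org/soap/encoding/'
            namespace='http://idoox.com/wasp/tools/java2wsdl/output/com/systinet/demos/stock/'/>
      </input>
      <output>
        <soap:body use='encoded'
            encodingStyle='http://schemas.xmlsoap.org/soap/encoding/'
            namespace='http://idoox.com/wasp/tools/java2wsdl/output/com/systinet/demos/stock/'/>
      </output>
    </operation>
  </binding>

  <service name='StockQuoteService'>
    <port name='StockQuoteService' binding='tns:StockQuoteServiceBinding'>
      <soap:address location='http://localhost:6060/StockQuoteService/'/>
    </port>
  </service>
</definitions>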
The WSDL file contains a full description of the deployed Web Service. Basically there are three parts in a WSDL file:

•The WHAT part, consisting of the types, message, and portType elements, defines the messages and data types exchanged between client and server. A message is the basic communication element of SOAP. A message can consist of one or more parts, each part representing a typed parameter. There are two messages (input and output) for each method of our stock quote Java class. Since we don't use any complex or compound types in our example, there are no compound type definitions in this WSDL (don't worry, we'll see many of them in future examples). All messages are grouped into operations in an entity called a portType. A portType represents the interface -- a concrete set of operations supported by the Web Service. A Web Service can have multiple interfaces represented by different portTypes. Look at the StockQuoteService portType in the sample WSDL file. It includes two operations: getAvailableStocks and getQuote. To invoke the getQuote method, the client sends a StockQuote_getQuote_Request message. (You'll find this message defined earlier in the file.) Notice that the StockQuote_getQuote_Request message consists of one part (the input parameter) called p0, which is defined as an XML Schema string type (xsd:string). The Web Service is supposed to reply with the StockQuote_getQuote_Response message, which contains one part (the return value) called response, which is an XML Schema double type (xsd:double).



•The HOW part, consisting of the binding elements, describes the technical implementation details of our Web Service. The binding binds a portType to a specific communication protocol (in this case, SOAP over HTTP). Since we're using SOAP, we use a number of WSDL extensibility elements for SOAP to define the specifics of our SOAP binding. (Notice that many of the elements in this section use the soap: namespace prefix. These elements are SOAP extensions to WSDL.) The soapAction attribute in the soap:operation element is an HTTP-specific attribute that can be used to specify the intent of the SOAP message. It can contain a message routing parameter or value that helps the SOAP runtime determine which application or method should be executed. The value specified in this attribute must also be specified in the SOAPAction: attribute in the HTTP header of the SOAP request message. In our case this attribute contains no value. A SOAP binding requires that we specify the communication style used for each operation in the portType. SOAP supports two possible communication styles: RPC and Document. The RPC style supports automatic marshalling and demarshalling of messages, permitting developers to express a request as a method call with a set of parameters, which returns a response containing a return value. The Document style does not support automatic marshalling and demarshalling of messages. It assumes that the contents of the SOAP message are well-formed XML data. A SOAP binding also requires that we specify how our messages are expressed in XML. We can use either literal values or encoded data types. The use='literal' attribute indicates that the SOAP runtime should send the XML as provided. The use='encoded' attribute indicates that the SOAP runtime should serialize the data for us using a particular encoding style. An encoding style defines a set of rules for expressing programming language types in XML. In this case we use the encoding style defined by SOAP in section 5 of the SOAP specification. Other encoding styles can also be used.



•Finally the WHERE part, consisting of the service element, pulls together the port type, the binding, and the actual location (a URI) of the Web Service. Check out the service element at the very end of the WSDL document.
As you can see, a WSDL file completely describes a Web Service. Given this WSDL file, we have all the information needed to create a client application that can access our stock quote Web Service.


Implementing a Java web service client
A client binds to a remote Web Service using a proxy Java component. When using Systinet WASP, this proxy is generated at runtime from the WSDL file. We need a Java interface that can keep a reference to this dynamically created object. We can either create the interface ourselves, or we can use WASP's WSDLCompiler to generate one for us. The interface creation is easy since the only requirement is that the interface methods must be a subset of methods of the Web Service business logic Java class. Let's look at the code below. First, the client creates a WebServiceLookup object. This object is then used to create the Web Service proxy by invoking the lookup method. The lookup method requires two parameters: a reference to a WSDL file and the class of the Java interface that will reference the proxy instance. The lookup method returns the proxy that is used to invoke the Web Service.

/**
* Stock Client
*
* @created July 17, 2001
* @author zdenek
*/

package com.systinet.demos.stock;

import org.idoox.wasp.Context;
import org.idoox.webservice.client.WebServiceLookup;

public class StockClient {

    /**
     * Web service client main method.
     * Finds the web service and invokes it.
     * @param args not used.
     */
    public static void main(String[] args) throws Exception {

        // lookup service
        WebServiceLookup lookup = (WebServiceLookup) Context.getInstance(Context.WEBSERVICE_LOOKUP);
        // bind to StockQuoteService
        StockQuoteServiceProxy quoteService = (StockQuoteServiceProxy) lookup.lookup(
            "http://localhost:6060/StockQuoteService/",
            StockQuoteServiceProxy.class
        );

        // use StockQuoteService
        System.out.println("Getting available stocks");
        System.out.println("------------------------");
        java.util.LinkedList list = quoteService.getAvailableStocks();
        java.util.Iterator iter = list.iterator();
        while (iter.hasNext()) {
            System.out.println(iter.next());
        }
        System.out.println("");

        System.out.println("Getting SUNW quote");
        System.out.println("------------------------");
        System.out.println("SUNW " + quoteService.getQuote("SUNW"));
        System.out.println("");

    }

}


Figure 3: Web Service client code (StockClient.java)
Run the runJavaclient.bat script. This script runs WSDLCompiler to generate the Java interface, then compiles and runs the client application. You should see the output from the getAvailableStocks and getQuote methods on the console.


Developing and running the JavaScript Web Service client
NOTE: The JavaScript Web Service client currently requires Microsoft Internet Explorer 6.0, or Microsoft Internet Explorer 5.0 with Microsoft XML Parser 3.0 SP2 installed.

We can generate a browser-based JavaScript client using the runJScriptClient.bat script. This script will open an IE browser with a generated HTML page. You can then invoke all the Web Service methods from this page.


SOAP messages at a glance
Now we can use the WASP Administration console to view the SOAP messages that are exchanged between client and server. First, we need to open the administration console in the browser. Then click on the Refresh button to see all deployed packages. We should see our StockQuoteService Web Service deployed on the server. Enable debugging of all SOAP requests by clicking on the "enable" link (near the "Debug is OFF:" label in the StockQuoteService section of the administration console). Then re-run the Java Web Service client runJavaclient.bat script and click on the show SOAP conversation link in the admin console. This should open a browser window that displays two pairs of input and output SOAP messages.


[SOAP conversation (summary): two request/response pairs captured against http://localhost:6060/StockQuoteService/. The first pair invokes getAvailableStocks with no parameters and returns the list SUNW, MSFT, BEAS; the second invokes getQuote with the string parameter SUNW and returns the double 10.0. Every message is a SOAP envelope using encodingStyle http://schemas.xmlsoap.org/soap/encoding/ together with the xsd, xsi and SOAP-ENC namespaces.]


Figure 4: SOAP messages
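As a hedged reconstruction of the getQuote exchange summarized above (the body element names follow the RPC convention of wrapping the method name, and the ns0 prefix binding is an assumption):

<!-- Request -->
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <SOAP-ENV:Body>
    <ns0:getQuote xmlns:ns0="http://idoox.com/wasp/tools/java2wsdl/output/com/systinet/demos/stock/">
      <p0 xsi:type="xsd:string">SUNW</p0>
    </ns0:getQuote>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>

<!-- Response -->
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
    SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <SOAP-ENV:Body>
    <ns0:getQuoteResponse xmlns:ns0="http://idoox.com/wasp/tools/java2wsdl/output/com/systinet/demos/stock/">
      <response xsi:type="xsd:double">10.0</response>
    </ns0:getQuoteResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>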
SOAP messages follow the following basic structure:

<ENVELOPE attrs>
  <HEADER attrs>
    directives
  </HEADER>
  <BODY attrs>
    payload
  </BODY>
  <FAULT attrs>
    errors
  </FAULT>
</ENVELOPE>
The message content is enclosed in the ENVELOPE. In this simple case, our SOAP messages contain only a BODY section. There can also be two other sections, namely the HEADER and FAULT sections. The HEADER section is usually used for propagation of various context information (e.g. payment details, transaction context, security credentials etc.). If an error occurs, the FAULT section should carry information about the nature of the fault. The BODY section carries the main information payload (in our example, the stock value and related data).

Generally, SOAP doesn't mandate any rules for the BODY section. We already mentioned two possible styles for the BODY section: Document and RPC. The Document style has no rigid formatting requirements beyond standard XML rules, while the RPC style defines rules for marking up the method call with all its parameters. The SOAP specification recommends but doesn't mandate an encoding style, the basic framework for expressing typed values in a SOAP message. The SOAP encoding style is based on the data types defined in the XML Schema, Part 2 Recommendation, which includes primitive programming language types such as int, float, double or string. SOAP Encoding also defines rules for building complex types (e.g. arrays, structures etc.) on top of these primitives.

In our case (we are using the RPC style with SOAP encoding), the BODY section of the input message contains the invoked method name with encoded parameters:


<ns0:getQuote>
    <p0 xsi:type="xsd:string">SUNW</p0>
</ns0:getQuote>

The output message contains the result of the method call:


<ns0:getQuoteResponse>
    <response xsi:type="xsd:double">10.0</response>
</ns0:getQuoteResponse>


Cleanup
At the end we should undeploy our simple service from the server by running the undeploy.bat script.


Review
In this first article we've hopefully demonstrated that creating a simple Web Service isn't difficult - in fact, one of the benefits of using Web Services is that they're relatively easy to make and deploy. In the process of creating our stock trading system we've introduced some fundamental concepts, including SOAP and WSDL. We've shown how SOAP messages are constructed, and we've created and analysed a WSDL file that provides the instructions on how to interact with a Web Service. In the next article we'll build on this and examine how SOAP handles complex types, error messaging and also remote references.

Wednesday, March 23, 2011

WSDL (Web Services Description Language)


WSDL (Web Services Description Language) is an XML-based language for describing Web services and how to access them.

What is WSDL?
•WSDL stands for Web Services Description Language
•WSDL is written in XML
•WSDL is an XML document
•WSDL is used to describe Web services
•WSDL is also used to locate Web services
•WSDL is a W3C recommendation

WSDL Describes Web Services
WSDL stands for Web Services Description Language.

WSDL is a document written in XML. The document describes a Web service. It specifies the location of the service and the operations (or methods) the service exposes.


WSDL is a W3C Recommendation
WSDL became a W3C Recommendation on 26 June 2007.


A WSDL document is just a simple XML document.

It contains a set of definitions that describe a web service.


--------------------------------------------------------------------------------

The WSDL Document Structure
A WSDL document describes a web service using these major elements:

Element      Defines
<types>      The data types used by the web service
<message>    The messages used by the web service
<portType>   The operations performed by the web service
<binding>    The communication protocols used by the web service

The main structure of a WSDL document looks like this:




<definitions>

<types>
   definition of types........
</types>

<message>
   definition of a message....
</message>

<portType>
   definition of a port.......
</portType>

<binding>
   definition of a binding....
</binding>

</definitions>

A WSDL document can also contain other elements, like extension elements, and a service element that makes it possible to group together the definitions of several web services in one single WSDL document.


--------------------------------------------------------------------------------
WSDL Ports
The <portType> element is the most important WSDL element.

It describes a web service, the operations that can be performed, and the messages that are involved.

The <portType> element can be compared to a function library (or a module, or a class) in a traditional programming language.


--------------------------------------------------------------------------------

WSDL Messages
The <message> element defines the data elements of an operation.

Each message can consist of one or more parts. The parts can be compared to the parameters of a function call in a traditional programming language.


--------------------------------------------------------------------------------

WSDL Types
The <types> element defines the data types that are used by the web service.

For maximum platform neutrality, WSDL uses XML Schema syntax to define data types.


--------------------------------------------------------------------------------

WSDL Bindings
The <binding> element defines the message format and protocol details for each port.


--------------------------------------------------------------------------------

WSDL Example
This is a simplified fraction of a WSDL document:

<message name="getTermRequest">
  <part name="term" type="xs:string"/>
</message>

<message name="getTermResponse">
  <part name="value" type="xs:string"/>
</message>

<portType name="glossaryTerms">
  <operation name="getTerm">
    <input message="getTermRequest"/>
    <output message="getTermResponse"/>
  </operation>
</portType>
In this example the <portType> element defines "glossaryTerms" as the name of a port, and "getTerm" as the name of an operation.

The "getTerm" operation has an input message called "getTermRequest" and an output message called "getTermResponse".

The <message> elements define the parts of each message and the associated data types.

Compared to traditional programming, glossaryTerms is a function library, "getTerm" is a function with "getTermRequest" as the input parameter, and getTermResponse as the return parameter.


A WSDL port describes the interfaces (legal operations) exposed by a web service.


--------------------------------------------------------------------------------

WSDL Ports
The <portType> element is the most important WSDL element.

It defines a web service, the operations that can be performed, and the messages that are involved.

The port defines the connection point to a web service. It can be compared to a function library (or a module, or a class) in a traditional programming language. Each operation can be compared to a function in a traditional programming language.


--------------------------------------------------------------------------------

Operation Types
The request-response type is the most common operation type, but WSDL defines four types:

Type Definition
One-way The operation can receive a message but will not return a response
Request-response The operation can receive a request and will return a response
Solicit-response The operation can send a request and will wait for a response
Notification The operation can send a message but will not wait for a response


--------------------------------------------------------------------------------

One-Way Operation
A one-way operation example:

<message name="newTermValues">
  <part name="term" type="xs:string"/>
  <part name="value" type="xs:string"/>
</message>

<portType name="glossaryTerms">
  <operation name="setTerm">
    <input name="newTerm" message="newTermValues"/>
  </operation>
</portType>
In the example above, the port "glossaryTerms" defines a one-way operation called "setTerm".

The "setTerm" operation allows input of new glossary terms messages using a "newTermValues" message with the input parameters "term" and "value". However, no output is defined for the operation.


--------------------------------------------------------------------------------

Request-Response Operation
A request-response operation example:

<message name="getTermRequest">
  <part name="term" type="xs:string"/>
</message>

<message name="getTermResponse">
  <part name="value" type="xs:string"/>
</message>

<portType name="glossaryTerms">
  <operation name="getTerm">
    <input message="getTermRequest"/>
    <output message="getTermResponse"/>
  </operation>
</portType>
In the example above, the port "glossaryTerms" defines a request-response operation called "getTerm".

The "getTerm" operation requires an input message called "getTermRequest" with a parameter called "term", and will return an output message called "getTermResponse" with a parameter called "value".




Wednesday, March 9, 2011

BOOMI

THE QUEST FOR A CLOUD INTEGRATION STRATEGY

ENTERPRISE INTEGRATION
Historically, enterprise-wide integration and its countless business benefits have only been available to large companies due to the high costs of purchasing and implementing integration solutions, which until recently amounted to hundreds of thousands of dollars. Those who could afford to purchase and implement enterprise integration solutions realized cost efficiencies from automated business processes spanning multiple systems, the elimination of costly and error-prone manual data entry, and faster responsiveness to changing business needs.

Small and medium sized organizations, unable to afford comprehensive solutions, generally had no other alternative but to piece together a combination of 1-to-1 integration solutions with custom code, or face the burden of dealing with siloed business applications. Fortunately, the development of Software as a Service applications has leveled the playing field in terms of affordability, but it has also changed the integration game significantly, due to the unique characteristics of the model itself.
ADVENT OF SOFTWARE AS A SERVICE (SAAS)
The Software as a Service (SaaS) model of software deployment has revolutionized the industry and opened the door for businesses of all sizes to gain access to enterprise-grade applications with affordable, pay-as-you-go pricing. According to IDC (2005), the key characteristics of SaaS applications are:

•Network-based access to and management of commercially available software
•Multi-tenancy architecture which enables multiple customers or users to access the same data model
•Centralized feature updating, which eliminates the need to download patches and upgrades
•Activities that are managed from central locations rather than at each customer's site, enabling customers to access applications remotely via a web browser

SAAS IMPLICATIONS FOR INTEGRATION
While SaaS applications offer outstanding value in terms of features and capabilities relative to cost, they have introduced several challenges specific to integration. The first issue is that the majority of SaaS applications are point solutions and only service one line of business. As a result, companies without a method of synchronizing data between multiple lines of business are at a serious disadvantage in terms of maintaining accurate data, forecasting, and automating key business processes. According to Ray Wang (2008), Analyst at Forrester Research, “The successful adoption of SaaS solutions will transform usage from purpose built point solutions to integration into mission critical processes.”

APIS ARE INSUFFICIENT
Many SaaS providers have responded to the integration challenge by developing application programming interfaces (APIs). Unfortunately, accessing and managing data via an API requires a significant amount of coding as well as ongoing maintenance due to frequent modifications and updates. Furthermore, despite the advent of web services, there is little to no standardization or consensus on the structure or format of SaaS APIs. As a result, the end users' IT department expends an excessive amount of time and resources developing and maintaining a unique method of communication for the API of each SaaS application deployed within their organization.
DATA TRANSMISSION SECURITY
SaaS providers go to great lengths to ensure that customer data is secure within the hosted environment. However, the need to exchange data between on-premise systems or applications behind the firewall and SaaS applications hosted outside of the client's data center poses new challenges that need to be addressed by the integration solution of choice. It is critical that the integration solution is able to synchronize data bi-directionally from SaaS to on-premise without opening the firewall. Best-of-breed integration providers can offer the ability to do so by utilizing the same security as when a user is manually typing data into a web browser behind the firewall.
INTEGRATION ON DEMAND?
Forward-thinking companies that have realized the outstanding value proposition of the SaaS model are looking for IT infrastructure and support that offers the same. End users research, try, and purchase SaaS applications in a self-service manner without ever leaving their web browser. Following purchase, maintenance is low as there are no servers to install or maintain, and updates are handled centrally by the SaaS provider. Savvy businesses are seeking integration solutions built from the ground up as pure SaaS, which also offer the ability to build, deploy, and manage integration processes from a web browser.

INTEGRATION STRATEGIES
One critical integration challenge for companies is deciding just what kind of SaaS integration provider they're going to use. In addition to SaaS, many businesses are supported by a complex ecosystem consisting of a combination of on-premise, platform-as-a-service (PaaS), e-commerce, and cloud-based applications. Rather than look at each integration project in a silo, forward-thinking companies select an integration strategy that will support all of the above in a single, seamless solution.

The four primary choices businesses currently have for SaaS integration include: building a custom-code solution based on a SaaS vendor's application programming interfaces (APIs); purchasing conventional integration software; subscribing to an integration-as-a-service (IaaS) solution; and engaging professional services or a system integrator. According to Benoit Lheureux, an analyst for Gartner, “The challenge for customers is to know when to choose one approach over another. The answer depends heavily on each customer's particular situation, including factors such as internal integration skills and overall B2B strategies.”
The following factors should also be considered when evaluating the four integration options:
SCALABILITY
Ensure that the integration solution chosen is able to grow with your business. Consider whether it will scale across multiple geographic locations and, if yes, whether the IT staff will be able to monitor all integration activity from one central location. Also, many SaaS applications have very particular usage restrictions on how much data can be sent through their API in a given time window. It is critical that, as data volumes increase, the solution is aware of and adequately handles those restrictions.
RESOURCES (IMPLEMENTATION, MAINTENANCE)
The amount of resources required varies greatly among the integration strategies above. Companies that choose to build or buy integration solutions should be prepared to allot significant amounts of IT time and budget for the installation and ongoing maintenance of servers, software, and code. For businesses with limited IT resources, outsourcing to an Integration as a Service (IaaS) provider is highly recommended. However, even with hosted solutions, it is important to ask each provider what resources are required to build and maintain integrations.
COST
Cost is a critical factor in the decision to build, buy, or partner. Building custom integrations is often a major drain on internal IT resources. The cost of an integration solution to support SaaS and cloud-based applications should be affordable and comparable to a SaaS pay-as-you-go model.

COMPATIBLE APPLICATIONS & SYSTEMS
Integration solutions are not always “one size fits all”: many are built to accommodate only specific applications, limiting a company's ability to extend integration throughout the enterprise. Furthermore, if additional applications are purchased in the future, the solution should be able to extend to accommodate them and migrate data if need be.
WORKFLOW
Best-of-breed integration solutions offer more than just the transformation of data between two different formats. Workflow, which encompasses the end-to-end series of steps needed to automate a business process, is mandatory to ensure the complete automation of complex business processes. Secure and reliable communication, content-based routing, business logic rules handling, data transformation, cleansing, and validation are examples of real-world requirements that address otherwise manual tasks.
ON-PREMISE SOFTWARE
Conventional integration software has been around significantly longer than most Integration as a Service (IaaS) providers and was therefore a popular alternative to coding custom integrations. However, packaged software is costly to purchase, install, and maintain on-site, and is unlikely to extend to the cloud. In addition, these solutions were not developed to meet the unique requirements of SaaS applications, such as multi-tenancy.
For businesses that have already purchased software, it may be possible to leverage the current investment if the provider offers a SaaS connector strategy. Unfortunately, like many on-premise software companies, the integration software providers are probably still puzzling over what their SaaS strategy should be. To be optimized for SaaS, their current technology will need to be re-engineered from the ground up, which will completely disrupt their on-premise solution.
INTEGRATION AS A SERVICE
The recent development of Integration as a Service is a natural outcome of the convergence of Service-Oriented Architecture (SOA) and SaaS. As businesses of all sizes migrate to SaaS and cloud-based applications, the need for solutions that allow them to interoperate and exchange data is obvious, as is the call for such solutions to be deployed and purchased in the same on-demand model.
When developed and built from the ground up in a pure SaaS model*, IaaS solutions are cost-effective, scalable, and flexible. Businesses minimize the use of internal IT resources because the service is typically made available in a completely self-service model, and it can be configured, deployed, and managed right from the web browser without having to write code or install any software or hardware on-premise.

For organizations with multiple business units, these solutions can be deployed to multiple geographic locations from a centrally managed, web-based dashboard. CIOs can also use this same dashboard to gain a comprehensive view of all integration processes within their organization. This is a growing challenge, as individual departments often purchase SaaS application subscriptions independently of one another.
Best-of-breed IaaS providers offer a single, seamless solution for a business's entire application portfolio, including on-premise, cloud-based, and SaaS environments. Many providers offer pre-built connectors to leading applications as well as the ability to develop custom connectors quickly and easily via visual, drag-and-drop technology. As mentioned previously, IaaS providers should offer a secure method of data transmission both in the cloud and behind the company's firewall for on-premise systems.
* Buyer beware of on-premise software vendors who market themselves as IaaS by hosting their packaged software product in a data center. That is an ASP model, not SaaS, and it will not scale over time, forcing their customers to absorb the increasing costs.
BUILDING CUSTOM INTEGRATION
For those businesses considering building custom integrations, there are several key pieces that must be included in order to develop a complete integration solution: the integration process itself, including learning the proprietary APIs of the applications being integrated in order to build “connectors” to extract and load data; monitoring tools that provide logging to simplify error resolution; redundancy mechanisms to automatically handle scenarios where the applications being integrated become unavailable; and resiliency to support the frequent release cycles of SaaS applications and their corresponding APIs. Most people account only for the integration process itself, overlook the other critical functionality, and wind up spending an exorbitant amount of time maintaining their custom code.
Custom-coding integration processes is generally considered to be expensive, time-consuming, and a drain on internal resources. Some find it a desirable option when a business needs to quickly connect no more than two applications, the data value is low, and the applications are not expected to change. However, given the dynamic nature of SaaS and cloud-based applications and the high value of the data exchanged between them, that scenario is rare.
MANAGED SERVICES/ SYSTEM INTEGRATORS
Through the introduction of on-demand Integration as a Service (IaaS) offerings, many growing businesses now have access to low-cost, low-maintenance integration solutions. However, companies with extremely limited or no IT resources also have the option of outsourcing their integration projects to either a Systems Integrator (SI) or electing managed services from their integration provider.


Prior to selecting a Systems Integrator, be sure to consider the solution(s) they are advocating for the integration project at hand, taking into account the five factors described above: scalability, cost, resources, compatibility of applications and systems, and workflow. Beware of SIs that promote antiquated integration solutions not optimized for SaaS and cloud environments, as they will quickly defeat the SaaS value proposition and will not scale. Fortunately, many SIs recognize the value of on-demand Integration as a Service solutions and add value through their domain expertise and willingness to solve the integration challenge on behalf of their clients.
For those businesses that prefer not to engage an SI but need initial and ongoing assistance with integration, many IaaS providers offer managed services to take on both the initial setup of the integration and the ongoing maintenance requirements. The main benefit of this alternative, in addition to reducing the strain on internal resources, is the ability to outsource integration projects to the experts in that area. As always, it is best to thoroughly investigate each provider and its products, vertical expertise, and services to ensure that they are developed and deployed in the cloud and compatible with like applications and systems.
CONCLUSION
In summary, the advent of Software as a Service and Cloud Computing has revolutionized the software industry by providing access to enterprise-grade software and services via the web to businesses of all sizes. SaaS and cloud environments are characterized by web-based delivery, multi-tenancy, and centralized management and updates, completely unlike traditional software. As a result, new infrastructure and supporting services, such as integration, are crucial to the success of this model.
In choosing an integration strategy, businesses must be acutely aware of the repercussions of the path chosen, as a poor choice could result in an ongoing drain of valuable IT resources and escalating costs to the organization. It is equally critical that businesses consider the need for scalability, both in terms of the growth of the customer base and the expansion of back-office solutions to include future purchases of SaaS, PaaS, and cloud computing applications. Best-of-breed integration solutions will mirror the SaaS value proposition and allow for scalability and expansion as businesses grow and change over time.
Boomi is the market-leading provider of on-demand integration technology and the creator of AtomSphere, the industry's first integration platform-as-a-service. AtomSphere connects providers and consumers of SaaS and on-premise applications via a pure SaaS integration platform that does not require software or appliances. ISVs and businesses alike benefit by connecting to the industry's largest network of SaaS, PaaS, on-premise, and cloud computing environments in a seamless and fully self-service model. Leading SaaS players rely on AtomSphere to accelerate time to market, increase sales, and eliminate the headaches associated with integration.

Building an Integration Process | BOOMI

Once a company logs in to its account on the Boomi AtomSphere web site, integration processes can be designed and built using a visual designer that includes access to a library of pre-built connectors and process maps. Using familiar point-and-click, drag-and-drop techniques, users can build anything from very simple to very sophisticated integration processes with exceptional speed. No coding is required.


Since its inception, Boomi has focused on simplifying the creation of integration processes for application integration, data integration, and B2B integration. By identifying the common steps needed to automate complex integration scenarios, a series of common integration components have been created and are available to all Boomi users. When developing an integration process, these components are connected to create an end-to-end integration workflow.

Standard Boomi Integration Components include:
Connector
connect to any application or data source
Always the first and last steps of an integration workflow, the Connector enables access to another application or data source. The connector sends/receives data and converts it into a normalized XML format. A Connector’s primary role is to "operationalize an API" by abstracting the technical details of an API and providing a wizard-based approach to configuring access to the associated application. Connectors are also configurable to capture only new or changed data, based on the last successful run date of an integration process.
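In AtomSphere this option is configured in the Connector wizard rather than coded, but conceptually it behaves like the following Python sketch; the record layout and the last_modified field name are assumptions for illustration.

from datetime import datetime, timezone

def fetch_changed_records(records, last_run):
    # Keep only records modified since the last successful run.
    return [r for r in records if r["last_modified"] > last_run]

# Timestamp persisted after the previous successful run (illustrative value).
last_run = datetime(2011, 3, 1, tzinfo=timezone.utc)

records = [
    {"id": 1, "last_modified": datetime(2011, 2, 28, tzinfo=timezone.utc)},
    {"id": 2, "last_modified": datetime(2011, 3, 5, tzinfo=timezone.utc)},
]

print(fetch_changed_records(records, last_run))  # only record 2 is new enough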
Data Transformation
transform data from one format to another
While core to any integration, the data stored in various applications is rarely, if ever, semantically consistent. For example, a Customer represented in one application will have different fields and formats from that of another application. Using Boomi's Data Transformation components, users can map data from one format to another.

Any structured data format is supported, including XML, EDI, flat file, and database formats. While transforming data, the user can also invoke a variety of field-level functions to transform, augment, or compute data fields. Over 50 standard functions are provided, and users can also create their own functions and re-use them in subsequent projects.
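As a rough illustration of what a field map plus field-level functions amounts to, here is a Python sketch; the field names and the single to_upper function are invented for the example, and in AtomSphere this is all configured visually rather than coded.

def to_upper(value):
    return value.upper()

# Destination field -> (source field, optional field-level function).
FIELD_MAP = {
    "CustomerName": ("cust_name", to_upper),
    "Phone": ("phone_number", None),
}

def transform(source_record):
    # Apply the field map to one source record.
    dest = {}
    for dest_field, (src_field, func) in FIELD_MAP.items():
        value = source_record[src_field]
        dest[dest_field] = func(value) if func else value
    return dest

print(transform({"cust_name": "Acme Inc.", "phone_number": "555-0100"}))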
Decision
execute business logic and data integrity checks
Boomi’s Decision components enable true/false data checks that let users explicitly handle each result based on the configured logic. For example, an order can be checked against the target system to see if it has already been processed; based on the outcome of the check, the request is routed down either the 'true' or 'false' path. Another example is checking the products referenced in an invoice to ensure they exist before processing the invoice.
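Conceptually, a Decision step is just a predicate with two exit paths, as in this hedged Python sketch; the order IDs and the in-memory set standing in for the target system are assumptions for illustration.

def already_processed(order_id, processed_ids):
    # Hypothetical data check against the target system.
    return order_id in processed_ids

processed_ids = {"ORD-1001", "ORD-1002"}  # stands in for a lookup in the target system

for order_id in ["ORD-1002", "ORD-1003"]:
    if already_processed(order_id, processed_ids):
        print(order_id, "-> 'true' path: already processed, skip or update")
    else:
        print(order_id, "-> 'false' path: process as a new order")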
Cleanse
data cleansing and validation
Integrations are only as successful as the quality of data that gets exchanged. Boomi’s Cleanse components allow users to validate and "clean" data on a field-by-field, row-by-row basis to ensure that fields are the right data type, the right length, and the right numeric format (e.g. currency). Users have an option of specifying whether they wish to attempt to auto-repair bad data or simply reject rows that are “dirty”. All validation results are routed through a “clean” or “rejected” path which allows users to explicitly handle either scenario.
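The idea of per-field validation with an auto-repair option can be sketched in Python as follows; the specific rules (a 40-character name limit, a numeric currency field) are invented for illustration.

def cleanse(row, auto_repair=True):
    # Validate one row; return (clean_row, None) or (None, rejection_reason).
    name = row.get("name", "").strip()
    if len(name) > 40:
        if not auto_repair:
            return None, "name too long"
        name = name[:40]  # auto-repair by truncating
    try:
        amount = round(float(row["amount"]), 2)  # enforce a numeric currency format
    except (KeyError, ValueError):
        return None, "amount is not numeric"
    return {"name": name, "amount": amount}, None

clean, reason = cleanse({"name": "  ABC Inc. ", "amount": "19.999"})
print(clean, reason)  # a clean row goes down the "clean" path; otherwise "rejected"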
Message
user-defined dynamic notifications
For any step of the integration workflow, Message components can be used to create dynamic notifications that make use of content from the actual data being integrated. This allows the creation of messages like "Invoice 1234 was successfully processed for customer ABC Inc." Connectors are then used to deliver the message to the appropriate end point.
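A dynamic notification is essentially a template filled in from the data being integrated, as in this small Python sketch; the field names are assumptions for illustration.

MESSAGE_TEMPLATE = "Invoice {invoice_id} was successfully processed for customer {customer}."

record = {"invoice_id": "1234", "customer": "ABC Inc."}
print(MESSAGE_TEMPLATE.format(**record))  # a Connector would deliver this to an end point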
Route
dynamic, content-based routing
Route components examine any of the content in the actual data being processed or use numerous other properties available to the user (such as directory, file name, etc.) and route the extracted data down specific paths of execution.
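Content-based routing reduces to inspecting a document's content or properties and picking a path, as in this Python sketch; the document fields and path names are invented for illustration.

def route(document):
    # Choose an execution path from the document's content or properties.
    if document.get("type") == "invoice":
        return "invoice_path"
    if document.get("file_name", "").endswith(".csv"):
        return "flat_file_path"
    return "default_path"

print(route({"type": "invoice"}))           # invoice_path
print(route({"file_name": "orders.csv"}))   # flat_file_path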
Split
intelligent data splitting and aggregation
Split components re-organize data into logical representations, such as business transactions. For example, you can take incoming invoice headers and details, and create logical invoice documents to ensure the applications being integrated process or reject the entire invoice vs. discrete pieces.
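The grouping behind such a split can be sketched in Python like this; the invoice_no key and the record layout are assumptions for illustration.

from collections import defaultdict

def split_invoices(lines):
    # Group flat invoice lines into one logical document per invoice number.
    docs = defaultdict(list)
    for line in lines:
        docs[line["invoice_no"]].append(line)
    return dict(docs)

lines = [
    {"invoice_no": "A1", "item": "widget"},
    {"invoice_no": "A1", "item": "gadget"},
    {"invoice_no": "B2", "item": "sprocket"},
]
print(split_invoices(lines))  # each invoice can now be processed or rejected whole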
Testing Integrations
Real-time
To further simplify and shorten the integration development cycle, users can test their integrations directly in the Build environment. Integration testing includes watching data move through the integration process, viewing the actual data, seeing exactly where the integration fails, and clicking through to the failed step to examine error messages.

Configuring Integration Automation
Integration processes can be configured for “hands-off” execution. AtomSphere provides both real-time and batch-style automation options, depending on the requirements of the integration, as described below:

Event-based Invocation - direct Atom invocation.

Included in every Boomi Atom is a lightweight HTTP Server. Data can be HTTP-posted to a specific Atom, and that data will be processed in real-time.
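As a hedged illustration, posting data to an Atom's HTTP listener could look like the following Python sketch; the URL, port, and payload are hypothetical, and the real endpoint and any authentication depend on how the Atom is configured.

import urllib.request

URL = "http://localhost:9090/atom/process"  # hypothetical Atom endpoint
PAYLOAD = b"<order><id>1001</id></order>"   # illustrative XML document

request = urllib.request.Request(
    URL, data=PAYLOAD, headers={"Content-Type": "application/xml"}
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read())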
Event-based Invocation - remote Atom invocation.

Boomi also provides a Trigger API, which allows you to securely invoke an integration process that is running inside an Atom, regardless of where that Atom may be running, without opening any holes in the firewall. This is a very powerful option when you wish to provide external access to integration processes or trigger an integration based on some event in another application.
Schedule-based Invocation

Included in every Boomi Atom is a schedule manager, capable of invoking integration processes based on a schedule configured by the user. Invocations can be scheduled to run as frequently as every minute. The schedule-based invocation option requires no changes to external applications.
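This is not how the Atom's scheduler is implemented, but the concept of a recurring invocation at a minimum one-minute interval can be sketched in Python as a simple loop:

import time

INTERVAL_SECONDS = 60  # the most frequent schedule supported is every minute

def run_integration():
    print("integration process invoked")  # placeholder for the real process

while True:
    run_integration()
    time.sleep(INTERVAL_SECONDS)  # wait until the next scheduled invocation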


Boomi AtomSphere Frequently Asked Questions
Q: How do I access AtomSphere?
A: Because AtomSphere is an online service, there is no appliance or software to buy, install, or maintain. Just point your browser to the login page at www.boomi.com and log in.

Q: How do I sign up for AtomSphere? Can I download a demo?
A: You can sign up for a free trial under the '30 Day Free Trial' section of the Boomi website.

Q: What applications can I integrate using AtomSphere?
A: An up-to-date list of supported applications can be found on our website.

Q: Is any training required to learn to use AtomSphere?
A: AtomSphere is designed to be user-friendly, and anyone with basic IT skills and knowledge of the applications they plan to integrate should be able to build integration processes easily. Our customers have reported that using AtomSphere is similar to using other web-based software. However, Boomi's Support Team offers weekly training sessions via webinar.

Q: Is there any training involved/included?
A: We hold various training/webinar series throughout each month, including a Boomi Basics Course every Monday at 12:30pm EST. You can find full details on all upcoming webinars at www.boomi.com/news_and_events

Q: What Support is available?
A: We have many support options to give you the help you need:

Free 30 Day Trial! Go to www.boomi.com
Submit support tickets to support@boomi.com
Live Chat (8am-9pm ET), embedded into the application
Forums
Help Wiki Documentation
Premier Support
Boomi Basics Training Webinar Series - Every Monday @ 12:30pm EST
Q: How do I contact Customer Support?
A: All of the available options for support are listed in the above question. Your easiest and quickest path to access support is through the Live Chat embedded into Boomi AtomSphere.

Q: What sort of skill set is required to configure AtomSphere?
A: We aim for our service to be a visual, configuration-based approach to integration. You do not need to be a developer to utilize the service; you simply need to understand where the data resides in the source system and where the data needs to be integrated in the destination system. The typical roles that utilize AtomSphere include Systems Analyst, Application Administrator, or Business Process Engineer.

Q: What platforms do I need to have in order to run AtomSphere?
A: Since Boomi hosts the application, all you need is a computer or an alternative device that can run a Web browser. It doesn’t matter what type of hardware or operating system you’re running.

Q: What involvement is required from my company’s IT department to set up my integration processes?
A: Very minimal involvement from your IT department is typically needed. Typical involvement would include allowing you access to the source/destination applications or allowing you to install a Boomi Atom to gain access to your on-premise application.

Q: Can Boomi’s customer support team help me set up my integrations?
A: We have designed AtomSphere to be largely self-service, and our website contains a number of free resources to help you, including documentation, videos, webinars, and training courses. You also have access to Boomi forums and “chat” support from within AtomSphere. Your support level will determine the availability of these services and specific response times. Consulting services are also available from our professional services team for a fee.

Boomi Basics
Q: What’s an Atom?
A: An Atom™ is a lightweight, dynamic runtime engine created with patent-pending technology. Boomi Atoms contain all the components required to execute an integration process. A full-featured dashboard monitors the status and health of all Atoms and integration processes, whether they are deployed in the cloud or on-premise.

Q: Where are Atoms hosted?
A: Boomi Atoms are completely self-contained and autonomous and can be run on virtually any server. They can be deployed “in the cloud” for SaaS-to-SaaS integration (e.g., Boomi's data center, an ISV's data center, or a third-party data center such as Amazon) or behind a company's firewall for SaaS-to-on-premise integration.

Q: What is an Integration Process?
A: The main component in a Boomi integration is the Process. A Process represents a business process- or transaction-level interface between two or more systems. Examples of a Process might be “Salesforce Account Synchronization to accounting system” or “Sales Orders from Company ABC to QuickBooks.” Processes contain a left-to-right series of Shapes connected together like a flow chart to illustrate the steps required to transform, route, and otherwise manipulate the data from source to destination.

Q: What is a Connector?
A: Connectors get and send data in and out of Processes. They enable communication with the applications or data sources between which data needs to move or, in other words, the “end points” of the Process. Those applications and data sources can range from traditional on-premise applications like SAP and QuickBooks to Web-based applications like Salesforce.com and NetSuite to data repositories like an FTP directory, a commercial database, or even an email server.

Q: How does Boomi differ from an Application Programming Interface (API)?
A: An API opens up secure access to data in an application, but it does not accomplish the integration itself. An API is like an electrical socket: until something is plugged into it, it just sits there. Boomi integration Connectors are like “plugs.” Boomi Connectors plug into an API and abstract the technical details of the API and the transport protocols used to communicate with various applications and data sources, allowing you to focus on the business data and logic of the integration. A Connector is really a combination of two Components: a Connection and an Operation. Think of the Connection as the where and the Operation as the how. These two components determine the type of data source, how to physically connect to it, and what type of data records to exchange.
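One way to picture the where/how split, purely as an illustration and not Boomi's internal model, is a pair of small Python records:

from dataclasses import dataclass

@dataclass
class Connection:
    # The "where": the type of data source and how to physically reach it.
    kind: str         # e.g. "salesforce", "ftp", "database"
    endpoint: str
    credentials: str  # stand-in for whatever authentication the source needs

@dataclass
class Operation:
    # The "how": what kind of records to exchange, and in which direction.
    action: str       # e.g. "query", "create", "update"
    record_type: str  # e.g. "Account", "Invoice"

connector = (Connection("salesforce", "https://login.salesforce.com", "token"),
             Operation("query", "Account"))
print(connector)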

Q: Are there any limitations to the kind/amount of information being integrated?
A: No, we have benchmarked the Boomi Atom to be able to handle very large volumes, upwards of 1,000,000 records an hour.

Q: How often would we need to run the integration? How close to real time information can I get?
A: We support both real-time event-based and schedule-driven executions. A scheduler is built into Boomi AtomSphere: you can schedule an integration to run at intervals you define (as often as every minute) or on a more flexible advanced schedule. We also provide an external API that allows you to invoke an integration in real time from an external source or application.

Q: Are the integrations manageable by either event OR specific dates?
A: Yes, our system allows you to schedule your integration process to run at specific dates and intervals, as often as every minute. We also provide an API that allows you to include event-driven integration in your integration process.

Q: Does AtomSphere integrate with shopping carts & e-commerce functionality?
A: Yes, please refer to our website for a full list of supported applications.

Q: If Boomi’s platform is hosted in “the cloud”, how can I integrate my on-premise data and legacy applications?
A: We offer the ability to deploy a Boomi Atom behind your firewall. This Boomi Atom is the run time engine that gives you secure access to your on premise application without having to make any changes to your firewall.

Using Boomi
Q: How do you ensure the data is secure during the integration process?
A: Boomi AtomSphere Connectors go through application-specific security reviews where applicable. All data passed between the on-site Boomi Atom and our data center is sent over a secure HTTPS channel with 128-bit encryption. You can learn more about AtomSphere's security on our website.

Q: How is error handling managed?
A: Error handling is administered via our 'manage' tab, where users can see the integration process, its executions, and all associated log and status notifications. Boomi AtomSphere also includes retry capabilities to ensure that messages that erred during transit are delivered, and an Atom tracks its state to ensure that only unique data is processed. Finally, decision logic can be configured to query destination applications to ensure duplicate data is not sent to the application.
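As a rough sketch of the retry idea (not Boomi's actual mechanism), the following Python function retries a delivery a few times before surfacing the error; the attempt count and delay are illustrative.

import time

def deliver_with_retry(send, payload, attempts=3, delay_seconds=5):
    # Try the delivery up to 'attempts' times, pausing between failures.
    for attempt in range(1, attempts + 1):
        try:
            return send(payload)
        except IOError:
            if attempt == attempts:
                raise  # let the failure show up in the process log
            time.sleep(delay_seconds)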

Q: If I have on-premise sources, how do I test my integration process in the hosted environment? Do I have to deploy an atom to do my testing?
A: Yes, the Boomi Atom would reside onsite, allowing you access to the on-premise application through Boomi AtomSphere.

Q: Does the internet and/or AtomSphere need to be up for my Atom to run?
A: Yes. Because the Boomi Atom that resides onsite has no GUI, it must be in fairly constant contact with the data center. One important design aspect of AtomSphere is that, much like the Internet itself, it has a distributed architecture, eliminating single points of failure. Note that even during planned maintenance of the platform, deployed Atoms continue to run and process normally.

Q: Do you have rollbacks for changes to an integration process?
A: Yes, we offer version control for integration processes, allowing you to roll back to a previous version should the need arise.

Q: Is Test Mode an actual test of the process flow of the integration and is the destination getting updated/changed?
A: Yes, test mode actually executes the integration process as designed, so the source and destination will be updated. Boomi AtomSphere provides the concept of 'Environments' for those who wish to point the same integration process at different locations (i.e., Test, QA, Production).

Sunday, March 6, 2011

OID

Oracle Internet Directory runs as an application on an Oracle Database. It communicates with the database by using Oracle Net Services, Oracle's operating system-independent database connectivity solution. The database may or may not be on the same host.


Setting the JVM Heap Size for OC4J Processes

If you have sufficient memory available on your system and your application is memory intensive, you can improve your application performance by increasing the JVM heap size from the default value. While the amount of heap size required varies based on the application and on the amount of memory available, for most OC4J server applications, a heap size of at least 256 Megabytes is advised. If you have sufficient memory, using a heap size of 512 Megabytes or larger is preferable.

To change the size of the heap allocated to the OC4J processes in an OC4J instance, use the procedures outlined in "Using Application Server Control Console to Change JVM Command Line Options", and specify the following Java options:

-Xms<size>m -Xmx<size>m

where <size> is the desired Java heap size in megabytes.

If you know that your application will consistently require a larger amount of heap, you can improve performance by setting the minimum heap size equal to the maximum heap size, by setting the JVM -Xms size to be the same as the -Xmx size.

For example, to specify a heap size of 512 megabytes, specify the following:

-Xms512m -Xmx512m

You should set your maximum Java heap size so that the total memory consumed by all of the JVMs running on the system does not exceed the memory capacity of your system. For example, four OC4J processes each started with -Xmx512m can reserve up to 2 gigabytes of heap among them, before counting any other processes. If you select a value for the Java heap size that is too large for your hardware configuration, one or more of the OC4J processes within the OC4J instance may not start, and Oracle Enterprise Manager 10g Application Server Control Console reports an error. Review the log files for the OC4J instance in the directory $ORACLE_HOME/opmn/logs to find the error report:

Could not reserve enough space for object heap
Error occurred during initialization of VM

If you select a value for the JVM heap size that is too small, none of the OC4J processes will be able to start, and, after a timeout while attempting to make the change, Application Server Control Console reports an error, "An error occurred while restarting..." In this case, if you review the log files for the OC4J instance in the directory $ORACLE_HOME/opmn/logs, you may find errors similar to the following:

java.lang.OutOfMemoryError

Oracle SOA Suite

Oracle SOA Suite is a comprehensive, hot-pluggable software suite to build, deploy and manage Service-Oriented Architectures (SOA). The components of the suite benefit from common capabilities including consistent tooling, a single deployment and management model, end-to-end security and unified metadata management.

Oracle SOA Suite's hot-pluggable architecture helps businesses lower upfront costs by allowing maximum re-use of existing IT investments and assets, regardless of the environment (OS, application server, etc.) they run in, or the technology they were built upon. Its easy-to-use, re-use-focused, unified application development tooling and end-to-end lifecycle management support further reduce development and maintenance cost and complexity.

The products contained in this suite are listed in the Component Index below. You can use that list to navigate to the individual product pages.