Channel: Oracle Service Bus – AMIS Oracle and Java Blog

Automatic testing Oracle Service Bus using Hudson, maven and SoapUI

A lot of current projects implement some form of service-based architecture, and testing in such an architecture becomes more complex. When implementing an OSB project with Scrum, test automation is imperative: Scrum requires frequent testing of your system, and this is only feasible (in time and money) when you automate as much as possible.
Using soapUI you can visually create SOAP tests for your OSB implementation and run them against the defined infrastructure (development, test, acceptance). SoapUI provides easy tools to implement verification and validation of the responses of your OSB implementation. When running the tests you can also set SLA limits on the response times of all the calls. This way you can monitor performance degradation in older parts of your OSB implementation when adding new services.
You can easily record your SOAP tests with the soapUI interface and edit them later. When you maven-enable your project, running your tests is quite easy once you configure the “maven-soapui-plugin” (see my other posting http://technology.amis.nl/blog/3061/automated-soap-testing-with-maven). In the meantime version 3.0 of this plugin has been released.
When implementing this with Hudson you do not have to convert the results.xml into a Surefire report; Hudson will manage this for you. Hudson also gives you a historical overview of all your test results.
Note that the load test does not generate a JUnit-formatted log; this is only available in the paid version of soapUI.


<plugin>
  <groupId>eviware</groupId>
  <artifactId>maven-soapui-plugin</artifactId>
  <version>3.0</version>
  <executions>
    <execution>
      <id>soapui-tests</id>
      <phase>verify</phase>
      <configuration>
        <projectFile>${basedir}/src/test/soapui/CountryInfoService-soapui-project.xml</projectFile>
        <outputFolder>${basedir}/target/soapui</outputFolder>
        <junitReport>true</junitReport>
        <exportAll>true</exportAll>
        <printReport>true</printReport>
        <settingsFile>soapui-settings.xml</settingsFile>
      </configuration>
      <goals>
        <goal>test</goal>
      </goals>
    </execution>
  </executions>
</plugin>


Connect soapUI to Hudson

In Hudson you run the goal clean verify and place a reference to the soapUI result log. Please note: this option is only available in a Freestyle Hudson project, so you manually have to add Maven as a build step. See the screenshot below.


And after running the tests a few times Hudson will provide you with a nice history graph.


The post Automatic testing Oracle Service Bus using Hudson, maven and SoapUI appeared first on AMIS Technology Blog.


Using Split-Joins in OSB Services for parallel processing of messages.


The Split-Join can be a very useful tool in your OSB services, yet it seems to be underestimated. When I did some asking around it turned out not many developers use it, even though I can come up with plenty of uses for the Split-Join. The Split-Join’s strength is in numbers, meaning it is most powerful when you need to process a lot of pieces of similar data. For this example I used a simplified version of a project I am working on. In this project mobile devices send data about rainfall to a database. The data is collected at a regular interval, creating a record, and sent to the database per session, which contains a large set of records. Instead of processing these records one at a time I can process them concurrently and save a lot of processing (and waiting) time.

I created the XML Schema files and WSDLs for the two services using JDeveloper and not Eclipse/OEPE, because its design interface for these files is a lot more user-friendly (although this is of course personal preference).

The weather record element

The image shows the Record element. At first I had defined Record as a complex type, using it as the type of the element in the request messages. However, that actually makes implementing your Split-Join smoothly a bit harder. Having this element defined allows me to create variables using this structure and process them without XQuery translations, saving a significant bit of processing time inside your service. Especially when dealing with repeating actions (like the parallel processing of multiple records) you want to be aware of unnecessary overhead and try to avoid it. Or as a colleague put it: ‘for each step, consider if you really need it’.

Message Schema (WeatherServiceMessages.xsd)

As you can see there are Requests and Responses for both ‘InsertRecord’ and ‘InsertRecords’. The first pair is for the service that processes a single record, called storeRecord; this service processes and stores a single record in the database. The second set of messages is for the service that is exposed to the outside world, the endpoint where devices will send their collections of records. As you can see the messages are relatively straightforward and, most importantly, they both use the same Record element.

The Split-Join is a separate file/component in your project, so first we create a new Split-Join. When asked for an operation we use the InsertRecords operation from the exposed WeatherData service. This will automatically create a Split-Join object with a receive and reply action, as well as a request and response variable. Keep in mind that before we can do anything with a variable in the Split-Join it needs to be initialized; in this example we use an assign or copy action to initialize variables. The request variable is initialized for us since it contains the request we will be sending to the Split-Join. The response we initialize by assigning an empty InsertRecordsResponse to it: <InsertRecordsResponse xmlns="http://tu.delft/iot/Services/messages"/>.

I added a storeRecordResponseList because I want to collect all responses from the StoreRecord service, but I don’t want to send them back as a list of tens or hundreds of response messages; instead we will process this into a single, concise InsertRecordsResponse after all the records have been processed. In a Split-Join you can add a variable by right-clicking the Variables listing. Here you pick the structure for your element. Remember how I created a Record element instead of a type? I created an element for this list in the same way, and this is where it comes in handy. Were I to use a type, the variable would automatically use the type’s name as the root element’s name, even if you initialize it with another root element (of the appropriate type). You could solve this by giving your types the names you would use for elements, but that would mess up any naming convention and create a messy schema.

Split-Join Variables

Now that we have set up the basics, the real fun can begin. After the Assign actions, place a for-each component in your Split-Join. This for-each component is where the magic happens: unlike the normal for-each component, this one is able to process multiple items in parallel, and that is the strength of the Split-Join.

For-Each settings

In the for-each we set the counter start value to 1, and the final value to the total number of records: ‘count($request.parameters/weat:Record)’. Give your counter a simple yet clear name, since you will need it later.

The second step is to create a variable for storing the Record we are currently working on. After we create the Record variable we initialize it by copying the Record for this execution of the for-each loop into it. To find the right Record we use the previously set counter.

$request.parameters/weat:Record[$counter]

Copy the Record

Once we have our record safely stored away we’ll go ahead and create the service call-out that processes our Record. In a Split-Join this component is called Invoke Service, and just like a Service Call-Out we configure the service and operation we are invoking, as well as an input and output variable. Remember to create these variables inside your for-each loop. In this case we’ll call them storeRecordRequest and storeRecordResponse. storeRecordResponse will be initialized with the response message of the service we invoke; storeRecordRequest however needs to be initialized by us.
We will initialize it by assigning the value <InsertRecordRequest xmlns="http://tu.delft/iot/Services/messages"/> to it.

Once initialized we can go ahead and put our Record in the request. Because we use the same Record element structure everywhere, we can simply insert it into the storeRecordRequest variable as a child of the InsertRecordRequest element.

Insert the Record into storeRecordRequest

Last but not least we need to do something with the return messages. Remember the storeRecordResponseList variable? It is set to hold one or more storeRecordResponse messages. Again, we do not want to do too much, so with a simple insert we add our local storeRecordResponse to the global list.

storeRecordResponse into storeRecordResponseList

Now that we have processed every Record all we need to do is process the response list into a concise response. We can do this by putting an assign after the for-each but before the reply. This will be executed once all parallel executions of the for-each are finished. With an XQuery resource that takes a ResponseList for input we construct the InsertRecordsResponse message for the Reply (response).

Assigning the InsertRecordsResponse message

To put our newly created Split-Join to use, all we have to do is generate a business service based upon the .flow file that we just created (make sure you save it first). Right-click the file and select Oracle Service Bus > Generate Business Service. We call this business service from a proxy service using a routing that simply passes through the in- and outbound messages; make sure to select that option in your routing. Since the Split-Join and our proxy service use the same WSDL, we don’t have to do anything to these messages. Obviously you could add some functionality to your proxy service, like a validation step, before sending the message off to your Split-Join.

WeatherData proxy service

To test the performance increase I created a mock service for the StoreRecord service using SoapUI. In this service I set a 40ms delay to simulate processing time before it responds. Next I created a similar proxy service using a standard for-each component instead of a Split-Join, following the same steps as in the Split-Join example with as little alteration as necessary. Again using SoapUI I created a request with 10 records and sent it alternately to the Split-Join service and the for-each service. The first averages around 520ms of total response time, while the latter takes all of 3000ms for the complete roundtrip.
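The difference between the two approaches can be illustrated outside the OSB with a small Java sketch (the class and method names are mine, and Thread.sleep stands in for the 40ms mock service): processing n records one at a time costs roughly n × 40ms, while handing them all to a thread pool costs little more than a single call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SplitJoinTiming {

    // Stand-in for the storeRecord mock service: ~40ms of processing per record.
    static String storeRecord(int record) throws InterruptedException {
        Thread.sleep(40);
        return "stored-" + record;
    }

    // Plain for-each: one record at a time.
    static long sequentialMillis(int records) throws Exception {
        long start = System.nanoTime();
        for (int i = 1; i <= records; i++) {
            storeRecord(i);
        }
        return (System.nanoTime() - start) / 1_000_000;
    }

    // Split-Join style: invoke the service for all records in parallel,
    // then collect every response before replying.
    static long parallelMillis(int records) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(records);
        long start = System.nanoTime();
        List<Future<String>> responses = new ArrayList<>();
        for (int i = 1; i <= records; i++) {
            final int record = i;
            responses.add(pool.submit(() -> storeRecord(record)));
        }
        for (Future<String> response : responses) {
            response.get(); // wait until every "record" is stored
        }
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("sequential: " + sequentialMillis(10) + "ms");
        System.out.println("parallel:   " + parallelMillis(10) + "ms");
    }
}
```

With 10 simulated records the sequential loop needs at least 400ms while the parallel version finishes in roughly the time of one call, which mirrors the 3000ms versus 520ms measurement above (the OSB numbers include transport overhead).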

On a last note, I can recommend creating a proxy service using a normal for-each after you’ve created your first proper Split-Join, to see exactly what the differences are.

You can download the project here, import it in Eclipse and play around with it. For ease of use the SoapUI mockservice has been replaced with a storeRecord proxy service that echoes a standard response.

SplitJoinBlogCase

There is now a follow-up on this post about dealing with large amounts of messages that can be read here.


OSB Split-Joins and managing your server load 1: Throttling


As Vlad pointed out in a comment on my previous post about using the Split-Join, there are a few things to keep in mind when using them. If you put a Split-Join in your service and let it take care of any number of service calls in parallel, you might be in for some trouble, bogging down your server with requests and potentially losing some data being two of the biggest concerns. For this blog I will be referring a lot to my previous post and the code example that came with it, for your reference:

So now that we know how to process (parts of) messages in parallel, how do we control this and make sure things do not get out of hand? There are a few ways to do this, and the way to go depends largely on this question: do we know how many messages to expect? Maybe there is a set number of RainRecords in every session. Or if there is not, maybe we can still enforce a fixed amount, or require the records to be organized in sets of a fixed size. In these cases the answer to the previous question is yes, and if it is, we can come a long way towards solving our problem by using the Parallel component in the Split-Join. More about that later though; for now let’s look at our original case, where we do not know exactly how many records we will receive.

When we do not know the number of records we are going to receive, the solution does not lie in the Split-Join; instead we will have to look at the Service Bus Console. We are going to use Throttling to keep the number of service invocations in check. First, navigate to your Service Bus Console; when the server in your OEPE environment is running you can normally find this at http://localhost:7001/sbconsole/. Now navigate to the Project Explorer and go to SplitJoinBlogCase->localServices->storeRecord->business. Select the storeRecordMock business service from the list of resources.

Go to the storeRecordMock business service

When selected you will see a detailed page describing the service, including four tabs. Select the Operation Settings tab, where we can enable Throttling. For those unfamiliar with the console: you might notice that none of the settings can currently be changed. All changes you make here have to be part of a session; you can create a new session by clicking the Create button in the Change Center at the top left of the page. This will make the settings editable.

Editing the throttling settings

After creating the session the Create button will turn green and the text changes to Activate. Now we can edit the Throttling settings. First of all, check the Throttling State checkbox to enable the actual throttling. Now we need to put in some numbers. For now let’s set the Maximum Concurrency to 8; this means no more than 8 calls to this service will be handled at the same time by the server. For the Throttling Queue let’s set 2000: we are expecting a lot of relatively small messages, and they need to go somewhere to wait for their turn to get processed. Basically this is the line a call to this service must wait in until one of the 8 service windows becomes available to handle the request. When you set the queue length to 0 there will be no queue and any excess messages will be discarded. Lastly we need to set the Message Expiration. If we leave it at 0, messages will never expire; this is fine if we do not want to lose any messages, but it also means we let the caller wait for an indefinite amount of time. Instead we set the expiration time to 10000 msecs: if a message has to wait longer than 10 seconds to be processed we will discard it.
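The semantics of these settings can be sketched in plain Java (this is only an illustration of the behaviour, not how the Service Bus implements it; the class name and the 20ms simulated processing time are made up): a Semaphore plays the role of the Maximum Concurrency slots, and tryAcquire with a timeout plays the role of a message waiting in the throttling queue until it expires.

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class ThrottleSketch {

    private final Semaphore slots;        // Maximum Concurrency
    private final long expirationMillis;  // Message Expiration

    public ThrottleSketch(int maxConcurrency, long expirationMillis) {
        this.slots = new Semaphore(maxConcurrency);
        this.expirationMillis = expirationMillis;
    }

    /** Returns the response, or null when the message "expired" waiting for a slot. */
    public String call(String message) throws InterruptedException {
        // Wait at most expirationMillis for one of the concurrency slots.
        if (!slots.tryAcquire(expirationMillis, TimeUnit.MILLISECONDS)) {
            return null; // discarded, like a message expiring in the throttling queue
        }
        try {
            Thread.sleep(20); // simulated processing time of the business service
            return "processed:" + message;
        } finally {
            slots.release(); // free the slot for the next waiting message
        }
    }
}
```

A call that obtains a slot within the expiration time is processed; one that does not is dropped, which corresponds to the fault behaviour discussed below.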

Edit and activate the settings

Once you have entered the settings for your throttling, scroll down and click the Update button. Next click the Activate button to let your new settings take effect. Before the settings are actually applied, the SB Console will ask for a description of the changes you made. Enter a short description of what you did so that others know why the changes were made.

Add the description

After submitting you are done enabling throttling on your business service. It is worth keeping in mind that the throttling value is per domain and not per server, meaning that in the case of a clustered environment the messages will be equally divided among the managed servers. Read more about this in Oracle’s documentation on the subject.

Unfortunately, when messages are discarded your WeatherDataService will return the SOAP fault generated by the concurrent message call that was discarded. This will result in a nice BEA-38001 error overriding your response (even though some messages might already have been properly handled). To prevent this we will have to implement some basic error handling. I will not go into the details of error handling in this post, but in short this is what you do:

  1. Generate a proxy service based on your storeRecord business service.
  2. Add an Error Handler component to the Routing component.
  3. Replace the body of the message (the SOAP fault) with an error message of your own. In my case I merely state that the processing of the record was unsuccessful. Obviously adding a more detailed message will make debugging easier, especially on more complex systems.
  4. Tell the Error Handler to resume. This tells the service not to stop functioning, but instead just return your error message and continue handling messages (as far as possible).
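For step 3, the replacement body could look something like the sketch below. The Result element and its text are made up for this example; the namespace comes from the message schema used earlier in this series.

```xml
<soap-env:Body xmlns:soap-env="http://schemas.xmlsoap.org/soap/envelope/">
  <InsertRecordResponse xmlns="http://tu.delft/iot/Services/messages">
    <Result>Processing of the record was unsuccessful</Result>
  </InsertRecordResponse>
</soap-env:Body>
```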
Add the Error Handler to the storeRecord proxy

Creating the SOAP fault is fairly easy: just set the queue size to 0 and send more records than you allow concurrent calls to the business service. In my next posts we will dive deeper into error handling in the OSB and into using the Parallel component in our Split-Join.


SOA Suite 12c: The demise of the OSB and the glorious birth of the SB


Oracle has too many products. The range of acronyms is not infinite, especially when most of these acronyms start with an O and have three letters. As it turns out, OSB is Oracle-ese for Oracle Secure Backup. Well, and it used to also stand for Oracle Service Bus – but starting with SOA Suite 12c – that is no longer the case. The OSB is no more (or at least: it is no more the acronym for the Service Bus).

Does this mean the end of the Service Bus? Of course not. In fact, the Service Bus has gained in prominence. Or at least, it has moved closer to the SCA engine in the SOA Suite – the engine that runs SOA composite applications with BPEL, Mediator and Business Rule inside. The engine that many of us used to refer to as… the SOA Suite.

As of SOA Suite 12c, both Service Bus projects and SOA composite applications (SCA composites) are developed using JDeveloper. Both are deployed to their respective engines in a WebLogic managed server (which can be the same managed server, or two different ones).

Administration activities on Service Bus components and on SOA composite components – including deployment, configuration and monitoring – are done through the Enterprise Manager Fusion Middleware Control console. Both Service Bus projects and SOA composites can make use of JCA Adapters, XQuery transformations and XSLT maps, Domain Value Maps and XRef cross references, native-to-XML translation and other facilities that make developers’ lives easy.

A single install suffices – JDeveloper with the integrated WLS – to develop and test SOA composites and Service Bus projects. And the skill set required for both is remarkably similar – quite a bit more so than in the past.

So, while OSB as a product name disappears, Service Bus development is on par in almost all ways with SOA composite application construction. It is not a loss, but a gain.

By the way: notice how similar the visual representation of Service Bus projects (proxy service, pipeline, business service) is to that of a SOA composite application (service binding, Mediator, reference binding). The visual similarity is of course no accident: the logical composition and the function of both implementations are very similar as well. The difference between these two service implementations is mainly in the details.

First figure: SOA Composite application – exposing a SOAP WebService binding, implemented by a Mediator that invokes an outbound JMS Adapter:


Second figure: Service Bus project – exposing a SOAP WebService proxy, implemented by a Pipeline that invokes an outbound JMS Adapter:

And of course both the Mediator and the Pipeline do a transformation of the request message. They both use the same XQuery document for this.


Resolving deployment issues with Service Bus 12c – OSB-398016 – Error loading WSDL


I was completely stuck with Service Bus 12c project deployment from JDeveloper to the Service Bus run time. Every deployment met with the same fate: Conflicts found during publish – OSB-398016, Error loading the WSDL from the repository:  The WSDL is not semantically valid: Failed to read wsdl file from url due to — java.net.MalformedURLException: Unknown protocol: servicebus.

I was completely lost and frustrated – not even a simple hello_world could make it to the server.


Then, Google and Daniel Dias from Link Consulting to the rescue: http://middlewarebylink.wordpress.com/2014/07/17/soa-12c-end-to-end-e2e-tutorial-error-deploying-validatepayment/. He had run into the same problem – and he had a fix for it! Extremely hard to find if you ask me, but fairly easy to apply.

It turns out this is a known bug (18856204). The bug description refers to BPM and SB being installed in the same domain.

The resolution:

Open the Administration Console for the WebLogic Domain. From the Services node, select service OSGi Frameworks:


Click on the bac-svnserver-osgi-framework link. Note: if you run in production mode, you will now first have to create an edit session.

Add felix.service.urlhandlers=false in the Init Properties field for the configuration of this service. Then press the Save button.


If you run in Production Mode, you now have to commit the edit session.

Then, in order for this modification to take effect, you have to restart the WebLogic (Admin) Server.

This resolved the issue for me – a weight was lifted off my shoulders. Thanks to Daniel from Link!


Oracle Service Bus: Obtaining a list of exposed SOAP HTTP endpoints


The Oracle Service Bus is often used for service virtualization. Endpoints are exposed on the Service Bus which proxy other services. Using such an abstraction layer can provide benefits such as (among many other things) monitoring/logging, dealing with different versions of services, throttling/error handling and result caching. In this blog I will provide a small (Java) script, which works for SOA Suite 11g and 12c, that determines the exposed endpoints on the Service Bus.

Exposed SOAP HTTP endpoints

How to determine endpoints?

In order to determine endpoints on the Service Bus, the Service Bus MBeans can be accessed. These MBeans can be obtained from within a local context inside the Service Bus or remotely via JMX (when configured, see http://stackoverflow.com/questions/1013916/how-to-enable-jmx-on-weblogic-10-x). In this example I’ll use a remote connection to a WebLogic Server instance which runs on the same machine (JDeveloper IntegratedWeblogicServer). To browse MBeans you can use jvisualvm (http://docs.oracle.com/javase/7/docs/technotes/guides/visualvm/), which is distributed as part of the Oracle JDK. JVisualVM has a plugin to browse MBeans.


When connected, the Service Bus MBeans are located under com.oracle.osb. The proxy services, which define the exposed endpoints, can be recognized by the ProxyService$ prefix. In order to determine the actual endpoint, you can look at the ResourceConfigurationMBean of the proxy service. Under configuration, transport-configuration, you can find a property called url. The script also filters for HTTP SOAP services, since the url field is also used for other transports. A replace of // with / is done on the combination server:port/url, since the url can start with a /. This makes no difference in functioning but provides better readable output. If you want WSDLs, you can add ‘?wsdl’ to the obtained endpoint.
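That combination of prepending a slash and collapsing the double slash can be tried in isolation (the class and method names below are mine, not part of the script):

```java
public class EndpointUrl {

    // Builds a readable endpoint from server:port and the url property of the
    // transport configuration; the url may or may not start with a slash.
    static String endpoint(String serverAndPort, String url) {
        return "http://" + serverAndPort + ("/" + url).replace("//", "/");
    }

    public static void main(String[] args) {
        // Both calls print http://localhost:7101/myProxyService
        System.out.println(endpoint("localhost:7101", "/myProxyService"));
        System.out.println(endpoint("localhost:7101", "myProxyService"));
    }
}
```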

The script requires some libraries. For 11g, you can look at http://techrambler99.blogspot.nl/2013/11/oracle-service-bus-using-management.html. On that blog, more specific API calls are done which have more dependencies; this example requires fewer. For 12c, it was enough to add the WebLogic 12.1 remote client library in the project properties.


The script does not determine the hostname/port of the server the Service Bus is currently running on. This could lead to misinterpretation when for example a load-balancer or another proxy component (such as OHS) is used.

The script

[code]

package nl.amis.smeetsm.sb;

import java.io.IOException;
import java.net.MalformedURLException;
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.Set;

import javax.management.MBeanServer;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;

public class SBEndpointList {

    private static MBeanServerConnection connection;
    private static String HOST = "localhost";
    private static Integer PORT = 7101;
    private static String USERNAME = "weblogic";
    private static String PASSWORD = "Welcome01";

    public SBEndpointList() throws IOException, MalformedURLException, NamingException {
        connection = getMBeanServerConnection();
    }

    private ObjectName[] getObjectNames(ObjectName base, String child) throws Exception {
        return (ObjectName[]) connection.getAttribute(base, child);
    }

    public ArrayList<String> getEndpoints() throws Exception {
        ArrayList<String> result = new ArrayList<String>();

        // All Service Bus resource configuration MBeans
        Set<ObjectName> osbconfigs =
            connection.queryNames(new ObjectName("com.oracle.osb:Type=ResourceConfigurationMBean,*"), null);

        for (ObjectName config : osbconfigs) {
            // Only proxy services define exposed endpoints
            if (config.getKeyProperty("Name").startsWith("ProxyService$")) {
                CompositeDataSupport cds = (CompositeDataSupport) connection.getAttribute(config, "Configuration");
                String servicetype = (String) cds.get("service-type");
                if (servicetype.equals("SOAP")) {
                    CompositeDataSupport pstt = (CompositeDataSupport) cds.get("transport-configuration");
                    String url = (String) pstt.get("url");
                    String tt = (String) pstt.get("transport-type");
                    if (tt.equals("http")) {
                        // Collapse the double slash for readable output
                        result.add("http://SERVER:PORT" + ("/" + url).replace("//", "/"));
                    }
                }
            }
        }
        return result;
    }

    private JMXConnector initRemoteConnection(String hostname, int port, String username,
                                              String password) throws IOException, MalformedURLException {
        JMXServiceURL serviceURL =
            new JMXServiceURL("t3", hostname, port, "/jndi/" + DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME);
        Hashtable<String, String> h = new Hashtable<String, String>();
        h.put(Context.SECURITY_PRINCIPAL, username);
        h.put(Context.SECURITY_CREDENTIALS, password);
        h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
        return JMXConnectorFactory.connect(serviceURL, h);
    }

    private MBeanServerConnection getMBeanServerConnection() throws IOException, MalformedURLException,
                                                                    NamingException {
        try {
            // Inside the server: use the local runtime MBean server
            InitialContext ctx = new InitialContext();
            return (MBeanServer) ctx.lookup("java:comp/env/jmx/runtime");
        } catch (Exception e) {
            // Otherwise fall back to a remote JMX connection
            JMXConnector jmxcon = initRemoteConnection(HOST, PORT, USERNAME, PASSWORD);
            return jmxcon.getMBeanServerConnection();
        }
    }

    public static void main(String[] args) throws Exception {
        SBEndpointList me = new SBEndpointList();

        System.out.println("Endpoints:");
        for (String endpoint : me.getEndpoints()) {
            System.out.println(endpoint);
        }
    }
}

[/code]



Oracle introduces API Manager!


Oracle has introduced a new product: API Manager (you can find the official documentation here). API Manager is an important addition to the already impressive Oracle SOA stack. In this article I’ll explain what this new product does and how it helps in managing your APIs. I will focus on the features and benefits of this product and also elaborate a little on my current experiences with it.


API Manager

What does API Manager do?

API Manager is a product which extends the Service Bus functionality and provides an API Manager Portal to manage APIs and browse analytics. API Manager allows you to save certain metadata as part of a Service Bus proxy service. This metadata is used to control access to an API and provide data on its usage. SOAP and REST APIs (HTTP APIs) are supported.

API settings

As you can see in the screenshot, you can set an API as managed or unmanaged. If an API is managed, you can only call it if you have a registered subscription. A subscription allows you to use an API key (HTTP header X-API-KEY) in order to access the API. Requests to managed APIs which do not specify a correct key are denied.
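What the gate on a managed API does can be mimicked with a few lines of Java (the key, the API name and the class are invented for this sketch; the real check is performed by API Manager inside the Service Bus):

```java
import java.util.Map;
import java.util.Set;

public class ApiKeyGate {

    // Registered subscriptions: the API key of an application -> the managed APIs it may call.
    private final Map<String, Set<String>> subscriptions;

    public ApiKeyGate(Map<String, Set<String>> subscriptions) {
        this.subscriptions = subscriptions;
    }

    /** Returns 200 when the X-API-KEY header carries a subscribed key, else 403. */
    public int handle(String api, String apiKeyHeader) {
        Set<String> allowed = apiKeyHeader == null ? null : subscriptions.get(apiKeyHeader);
        return (allowed != null && allowed.contains(api)) ? 200 : 403;
    }

    public static void main(String[] args) {
        ApiKeyGate gate = new ApiKeyGate(Map.of("key-123", Set.of("WeatherAPI")));
        System.out.println(gate.handle("WeatherAPI", "key-123")); // 200
        System.out.println(gate.handle("WeatherAPI", null));      // 403
    }
}
```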


If you test an API from inside the Service Bus console or Fusion Middleware Control, however, you can still call the service without an API key.

API Manager workflow

API Manager uses several (application) roles.

Developer / Deployer

This role is not specific to API Manager. The API Developer creates a new API. Someone with the group membership Deployer can deploy it to the Service Bus.

API Consumer

The API Consumer can access the API Manager Portal, browse API’s and register as subscriber (generate an API key and use it in requests).

API Curator

The API Curator is able to set service metadata in the Service Bus console. An API Curator can publish a service so it becomes visible in the API Manager Portal, or mark it as deprecated.

API Administrator

The API Administrator can view analytics and can import/export metadata using WLST scripts.

API Manager Portal

A subscription can be created for an API consumer in the new API Manager Portal. This is accessible at http://[host]:[port]/apimanager/. The API Manager Portal is a clean, easy-to-use interface. It uses several application roles which need to be configured before you can access the portal: API Curator, API Administrator and API Consumer. This is described in the installation manual.

Inside the portal you can access 3 tabs: Subscriptions, Analytics and Catalog. On the Catalog and Subscriptions pages you can create subscriptions. You first have to create an application in order to add subscriptions to it. An application has an API key, and all APIs that are part of the application use the same key.


You cannot subscribe to an API which is not published (it is not visible in the portal, and if it is visible because you just updated the state, the subscription is denied). Also, you cannot create new subscriptions for a deprecated API. The API state (published, draft, private) and whether it is deprecated can be set in the Service Bus console.


The configuration done in the API Manager Portal can be imported and exported as a configuration JAR using WLST.

Using API Manager

Publish the service

One of the features API Manager provides is Service Bus proxy service states. When you deploy a service, it is not available externally yet but gets the state ‘Draft’. When you call this service you get an HTTP 403 Forbidden. You have to explicitly publish the service.

Can’t update the API key?

I did not see a mechanism in the API Manager Portal to update an API key. This can probably be done ‘the hard way’ by editing the database. Maybe you should ask yourself why you would want to change this key, though.

Key propagation

When using composite services as API, you will need to propagate the API key in service calls. The Service Bus and BPEL have their own mechanisms for this. Other components will also have their own way of doing this.

Circumventing API Manager

I was curious whether I could circumvent the API Manager API key header check. I set up two Service Bus proxy services: one managed, the other not. The unmanaged service calls the managed service without an API key. The call from the unmanaged service also gets an HTTP 403 message. This is a very good thing! It allows API Manager to manage internal and external APIs. If a service wants to use another (managed) service, it has to be registered as a subscriber. I have not tried using a Java API or direct binding to call the service.

Some other things to mind

Upgrade existing DB schema’s

The API Manager installation patches the Repository Creation Utility. If you create schemas with the patched RCU, you can use API Manager. I have not seen (I could have missed this) a mechanism to upgrade existing database schemas with the functionality required by API Manager.

Service Bus extension

API Manager can be used for Service Bus proxy services. I have not yet seen support for other Oracle SOA components/composites. This is understandable since it is a good practice to use the Service Bus in front of other components. It would be nice though if it were not dependent on a Service Bus implementation.

Installation

I followed the standard installation and created 3 users which were in the groups API Administrator, API Curator and API Consumer, and assigned the application roles as described (I could have made a mistake though). When I tried to access the API Manager Portal, I could only log in with the API Administrator role; the other users were not allowed access. None of the users were allowed access to the Service Bus Console (after login I got HTTP 403 Forbidden messages). The API Administrator user did not have enough permissions either (I could, for example, not create or view applications). In order to write this article, I created a superuser which was assigned all groups. With this user I could access all the required functionality to get everything working. My impression is that more permissions are required to use the described roles. I have not looked into this further.

No analytics?

During the writing of this blog I did not see any analytics data. The Analytics tab showed very little; only the Catalog tab gave me some information. I could not see any information on messages in Fusion Middleware Control either.

NoStats

Probably a specific setting is required to gather data. If you want to use this feature, you should look into this.

Conclusion

API Manager adds important new features to the Oracle Service Bus. It provides a mechanism to secure APIs, provides insight into consumers and allows more active management of the API lifecycle. This product does not work on a harvest of services to allow adding of metadata; it works on the actual service as you can see it in the Service Bus console. This allows true management and does not provide an abstraction which might get out of sync with the actual situation.

In order to use it though, some (minor) code changes are required: you need to supply a specific API Manager HTTP header when you want to access a managed service. This API key can be different per environment and consumers should be able to deal with these differences. Also, if you want to use this, you need to look into the gathering of analytics data on the APIs and into the roles/groups/users (maybe I’ll update this blog at a later time with more details). Using the roles, you can implement a structured workflow which will also benefit your development process.

Because API Manager is not easily circumvented, consumers need to register in order to use an API. A danger here is that everyone starts using the same API key or every environment uses the same API key. This is of course not secure and voids the benefit of additional insight into your consumers. This insight is in my opinion the most important feature of this product. Not only do you know who uses your API (dependencies!), but you can even gather statistics on them. If, for example, requests originating from a certain consumer take a long time to process, you can take action and contact this consumer to maybe optimize their API usage. Also the mechanism of draft and deprecated APIs is very useful to indicate that something shouldn’t be used yet or shouldn’t be used by new consumers. A developer can still test the service using the test console. In summary, this looks like a very useful product. I like it!

The post Oracle introduces API Manager! appeared first on AMIS Oracle and Java Blog.

Searching Oracle Service Bus Pipeline Alert contents


There are several ways to monitor messages passing through the Service Bus; using pipeline alerts is one of them. Pipeline alerts can be searched in the Enterprise Manager based on several parameters, such as the summary or when they have occurred. Usually an important part of the message payload is saved in the content of the alert. This content cannot be searched from the Enterprise Manager. In this post I will provide an example of logging Service Bus request and response messages using pipeline alerts, and a means to search alert contents for a specific occurrence. The example provided has been created in SOA Suite 12.1.3, but the script also works in SOA Suite 11.1.1.6.
titleimage

Service Bus Pipeline Alerts

The Oracle Service Bus provides several monitoring mechanisms. These can be tweaked in the Enterprise Manager.

CaptureDifferentWaysToMonitor

In this example I’m going to use Pipeline Alerts. Where you can find them in the Enterprise Manager has been described in: https://technology.amis.nl/2014/06/27/soa-suite-12c-where-to-find-service-bus-pipeline-alerts-in-enterprise-manager-fusion-middleware-control/. I’ve created a small sample process called HelloWorld. This process can be called with a name and returns ‘Hello name’ as a response. The process has a single AlertDestination and two pipeline alerts: one for the request and one for the response. These pipeline alerts write the content of the header and body variables to the content field of the alert.

CaptureContent

When I call this service with ‘Maarten’ and with ‘John’, I can see the created pipeline alerts in the Enterprise Manager.

CaptureSeeAlerts

Next I want to find the requests done by ‘Maarten’; I’m not interested in ‘John’. I can search on the summary, but this only indicates the location in the pipeline where the alert occurred. I want to search the contents, or description as it is called in the Enterprise Manager. Since clicking on every entry is not very time efficient, I want to use a script for that.

CaptureAlertDetail

Search for pipeline alerts using WLST

At first I thought I could use a method like the one on http://docs.oracle.com/cd/E21764_01/web.1111/e13701/store.htm#CNFGD275 in combination with the location of the file store which is used for the alerts: servers/[servername]/data/store/diagnostics. The dump of this file store, however, was not readable enough for me, and this method required access to the file system of the application server. I decided to walk the WLST path.

The WLST script below lists the pipeline alerts where ‘Maarten’ is in the contents/description. The script works on Service Bus 11.1.1.6 and 12.1.3. You should of course replace the obvious variables like username, password, url, servername and searchfor.

[code] import datetime

#Conditionally import wlstModule only when script is executed with jython
if __name__ == '__main__':
    from wlstModule import * #@UnusedWildImport

print 'starting the script ....'
username = 'weblogic'
password = 'Welcome01'
url = 't3://localhost:7101'
servername = 'DefaultServer'
searchfor = 'Maarten'

connect(username, password, url)

def get_children():
    return ls(returnMap='true')

domainRuntime()
cd('ServerRuntimes')
servers = get_children()

for server in servers:
    #print server
    cd(server)
    if server == servername:
        cd('WLDFRuntime/WLDFRuntime/WLDFAccessRuntime/Accessor/DataAccessRuntimes/CUSTOM/com.bea.wli.monitoring.pipeline.alert')
        end = cmo.getLatestAvailableTimestamp()
        start = cmo.getEarliestAvailableTimestamp()
        cursorname = cmo.openCursor(start, end, "")
        if cmo.hasMoreData(cursorname):
            records = cmo.fetch(cursorname)
            for record in records:
                #print record
                if searchfor in record[9]:
                    print datetime.datetime.fromtimestamp(record[1]/1000).strftime('%Y-%m-%d %H:%M:%S') + ' : ' + record[3] + ' : ' + record[13]
        cmo.closeCursor(cursorname)
    cd('..')
[/code]

The output in my case looks like:

[code] 2015-04-18 12:59:21 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineRequest
2015-04-18 12:59:21 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineResponse
2015-04-18 13:18:39 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineRequest
2015-04-18 13:18:39 : Pipeline$HelloWorld$HelloWorldPipeline : HelloWorldPipelineResponse
[/code]

Now you can extend the script to provide more information or lookup the relevant requests in the Enterprise Manager.





SOA Suite 12c: Collect & Deploy SCA composites & Service Bus artifacts using Maven


An artifact repository has many benefits for collaboration and governance of artifacts. In this blog post I will illustrate how you can fetch SCA composites and Service Bus artifacts from an artifact repository and deploy them. The purpose of this exercise is to show that you do not need loads of custom scripts to do these simple tasks. Why re-invent the wheel when Oracle already provides it?

This example has been created for SOA Suite 12.1.3. It will not work as-is for 11g and earlier, since those versions lack OOTB Maven support for SOA Suite artifacts. In order to start using Maven to do command-line deployments, you need to have some Oracle artifacts in your repository. See http://biemond.blogspot.nl/2014/06/maven-support-for-1213-service-bus-soa.html on how to put them there. I have used two test projects which were already in the repository: a SCA composite called HelloWorld_1.0 and a Service Bus project also called HelloWorld_1.0. In my example, the SCA composite is in the groupId nl.amis.smeetsm.composite and the Service Bus project is in the groupId nl.amis.smeetsm.servicebus. You can find information on how to deploy to an artifact repository (e.g. Nexus) here.

SCA Composite

Quick & dirty with few dependencies

I have described getting your SCA composite out of Nexus and into an environment here. The process described there has very few dependencies. First you manually download your jar file using the repository API and then you deploy it using a Maven command like:

mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=HelloWorld-1.0.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101

In order for this to work, you need to have a (dummy) pom.xml file in the current directory; you cannot use the project pom file for this. The only prerequisites (next to a working Maven installation) are:

  • the sar file
  • serverUrl and credentials of the server you need to deploy to

Notice that you do not even need an Oracle home location for this. In order to build the project from sources however, you do need an Oracle home.

Less quick & dirty using Maven

An alternative to the previously described method is to use a pom which has the artifact you want to deploy as a dependency. This way Maven obtains the artifact from the repository (configured in settings.xml) for you. This is also a very useful method to combine artifacts in a greater context such as for example a release. The Maven assembly plugin (which uses the configuration file unit-assembly.xml in this example) can be used to specify how to treat the downloaded artifacts. The format ‘dir’ specifies that the downloaded artifacts should be put in a specific directory as-is (not zipped or otherwise repackaged). Format ‘zip’ will (surprise!) zip the result so you can for example put it in your repository or somewhere else. The dependencySet directive indicates which dependencies should go to which directory. When combining Service Bus and SOA artifacts in a single pom, you can use this information to determine which artifact should be put in which directory and this can then be used to determine which artifact should be deployed where.

You can for example use a pom.xml file like:

[code language="xml"]
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>nl.amis.smeetsm.unit</groupId>
  <artifactId>HelloWorld_1.0</artifactId>
  <packaging>jar</packaging>
  <version>1.0</version>
  <name>HelloWorld_1.0</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>nl.amis.smeetsm.composite</groupId>
      <artifactId>HelloWorld_1.0</artifactId>
      <version>1.0</version>
      <type>jar</type>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.5.4</version>
        <configuration>
          <descriptors>
            <descriptor>unit-assembly.xml</descriptor>
          </descriptors>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
[/code]

With a unit-assembly.xml file like

[code language="xml"]
<assembly>
  <id>unit</id>
  <formats>
    <format>dir</format>
  </formats>
  <dependencySets>
    <dependencySet>
      <outputDirectory>/unit/composite</outputDirectory>
      <includes>
        <include>nl.amis.smeetsm.composite:*</include>
      </includes>
    </dependencySet>
  </dependencySets>
</assembly>
[/code]

Using this method you also need the following in your settings.xml file so Maven can find the repository. In this example I have used a local Nexus repository.

[code language="xml"]
<mirror>
  <id>nexus</id>
  <name>Internal Nexus Mirror</name>
  <url>http://localhost:8081/nexus/content/groups/public/</url>
  <mirrorOf>*</mirrorOf>
</mirror>
[/code]

And then, in order to obtain the jar from the repository:

mvn assembly:single

Then deploy it the same way as described above, only with a slightly longer location of the sar file.

mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=target/HelloWorld_1.0-1.0-unit/HelloWorld_1.0-1.0/unit/composite/HelloWorld_1.0-1.0.jar -Duser=weblogic -Dpassword=Welcome01 -DserverURL=http://localhost:7101

Thus what you need here (next to a working Maven installation) is:

  • a settings.xml file containing a reference to the repository (you might be able to avoid this by providing it command-line)
  • a specific pom with the artifact you want to deploy specified as dependency
  • serverUrl and credentials of the server you want to deploy to

Service Bus

For the Service Bus in general the methods used to get artifacts in and out of an artifact repository are very similar to the SCA composites.

Getting the Service Bus sbar from an artifact repository to an environment does require the project’s pom file, since you cannot specify an sbar file directly in a deploy command. The command to do the actual deployment also differs from deploying a SCA composite. You do require an Oracle home for this.

mvn pre-integration-test -DoracleHome=/home/maarten/Oracle/Middleware1213/Oracle_Home -DoracleUsername=weblogic -DoraclePassword=Welcome01 -DoracleServerUrl=http://localhost:7101

You can also use a method similar to the one described for the SCA composites. Note though that you also need the project pom file as a dependency.

[code language="xml"]
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>nl.amis.smeetsm.unit</groupId>
  <artifactId>HelloWorld_1.0</artifactId>
  <packaging>jar</packaging>
  <version>1.0</version>
  <name>HelloWorld_1.0</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>nl.amis.smeetsm.servicebus</groupId>
      <artifactId>HelloWorld_1.0</artifactId>
      <version>1.0</version>
      <type>sbar</type>
    </dependency>
    <dependency>
      <groupId>nl.amis.smeetsm.servicebus</groupId>
      <artifactId>HelloWorld_1.0</artifactId>
      <version>1.0</version>
      <type>pom</type>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <version>2.5.4</version>
        <configuration>
          <descriptors>
            <descriptor>unit-assembly.xml</descriptor>
          </descriptors>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
[/code]

And a unit-assembly.xml like;

[code language="xml"]
<assembly>
  <id>unit</id>
  <formats>
    <format>dir</format>
  </formats>
  <dependencySets>
    <dependencySet>
      <outputDirectory>/unit/servicebus</outputDirectory>
      <includes>
        <include>nl.amis.smeetsm.servicebus:*</include>
      </includes>
    </dependencySet>
  </dependencySets>
</assembly>
[/code]

Thus what you need here (next to a working Maven installation) is:

  • an Oracle home location
  • a settings.xml file containing a reference to the repository (you might be able to avoid this by providing it command-line)
  • a specific pom with the artifact specified as dependency (this will fetch the sbar and pom file)
  • serverUrl and credentials of the server you want to deploy to

Deploy many artifacts

In order to obtain large amounts of artifacts from Nexus and deploy them, it is relatively easy to create a shell script, for example the one below. The script uses the structure created by the method described above to deploy artifacts. It first downloads a ZIP, unzips it, then loops through the deployable artifacts and deploys them. The script depends on a ZIP in the artifact repository with the specified structure. In order to put the unit in Nexus, replace ‘dir’ with ‘zip’ in the assembly file and deploy the unit. You are creating a copy of the artifact though, so you should probably use the pom and assembly directly for creating the unit of artifacts and loop over them, without the intermediate step of creating a separate ZIP of the assembly.

The local directory should contain a dummypom.xml for the SCA deployment. The script creates a tmp directory, downloads the artifact, extracts it, loops over its contents, creates a deploy shell script and executes it. Separating assembly (deploy_unit.sh) and actual deployment (deploy_script.sh) is advised: this allows you to rerun the deployment or continue from the point where it might have failed. The assembly can be handed to someone else (operations?) to do the deployment.

dummypom.xml:

[code language="xml"]
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>nl.amis.smeetsm</groupId>
  <artifactId>DummyPom</artifactId>
  <version>1.0</version>
</project>
[/code]

deploy_unit.sh:

The script has a single parameter: the URL of the unit to be installed. This can be a reference to an artifact in a repository (if you have your unit as a separate artifact in the repository). The script is easily updated to use a local file or the structure described above.

[code language="bash"] #!/bin/sh

servicebus_hostname=localhost
servicebus_port=7101
servicebus_username=weblogic
servicebus_password=Welcome01
servicebus_oraclehome=/home/maarten/Oracle/Middleware1213/Oracle_Home/
composite_hostname=localhost
composite_port=7101
composite_username=weblogic
composite_password=Welcome01

if [ -d "tmp" ]; then
  rm -rf tmp
fi
mkdir tmp
cp dummypom.xml tmp/pom.xml
cd tmp

#first fetch the unit ZIP file
wget $1
for f in *.zip
do
  echo "Unzipping $f"
  unzip $f
done

#deploy composites
for D in `find . -type d -name composite`
do
  echo "Processing directory $D"
  for f in `ls $D/*.jar`
  do
    echo "Deploying $f"
    URL="http://$composite_hostname:$composite_port"
    echo "URL: $URL"
    echo mvn com.oracle.soa.plugin:oracle-soa-plugin:deploy -DsarLocation=$f -Duser=$composite_username -Dpassword=$composite_password -DserverURL=$URL >> deploy_script.sh
  done
done

#deploy servicebus
for D in `find . -type d -name servicebus`
do
  echo "Processing directory $D"
  for f in `ls $D/*.pom`
  do
    echo "Deploying $f"
    URL="http://$servicebus_hostname:$servicebus_port"
    echo "URL: $URL"
    echo mvn -f $f pre-integration-test -DoracleHome=$servicebus_oraclehome -DoracleUsername=$servicebus_username -DoraclePassword=$servicebus_password -DoracleServerUrl=$URL >> deploy_script.sh
  done
done

#the generated script is not marked executable, so run it with sh
sh deploy_script.sh

cd ..
rm -rf tmp
[/code]

For this example I created a very basic script. It does require a Maven installation, a settings.xml telling Maven where the repository is, and an Oracle home location (the Service Bus requires it). It also has some liabilities, for example in the commands used to find the deployable artifacts. It does give an idea though of how you can deploy large amounts of composites with relatively little code by leveraging Maven commands. It also illustrates the difference between SCA composite and Service Bus deployments.

Finally

You can easily combine the assembly files and pom files for the SCA composites and the Service Bus to create a release containing both. Deploying them is also easy using a single command. I also illustrated how you can easily loop over several artifacts using a shell script. I have not touched on the usage of configuration plans or how to efficiently group related artifacts in your artifact repository. Those will be the topic of a next blog post.



Doing performance measurements of an OSB Proxy Service by programmatically extracting performance metrics via the ServiceDomainMBean and presenting them as an image via a PowerPoint VBA module


This article explains how the process of doing performance measurements of an OSB proxy service, and presenting them in a “performance analysis document”, was partly automated. After running a SoapUI based test step (sending a request to the service), the service performance metrics were extracted using the ServiceDomainMBean in the public API of the Oracle Service Bus. These service performance metrics can also be seen in the Oracle Service Bus Console via the Service Monitoring Details. Furthermore, this article explains how these metrics are used by a PowerPoint VBA module and a slide with placeholders to generate an image with the injected service performance metric values. This image is used to present the measurements in the “performance analysis document”.


Performance issues

In a web application we had performance issues on a page showing data that was loaded using a web service (deployed on Oracle Service Bus 11gR1). On the page, an application user can fill in some search criteria; when the search button is pressed, data is retrieved (from a database) via the MyProxyService and shown on the page in table format.

Web application

Performance analysis document

Based on knowledge about the data, the business owner of the application put together a number of test cases to be used for the performance measurements, in order to determine whether the performance requirements were met. All in all there were 9 different test cases. For some of these test cases, data was retrieved for a period of 2 weeks, for others for a period of 2 months.

Because it was not certain what caused the lack of performance, besides the front-end also the back-end OSB proxy service was to be investigated, and the performance measurement results were to be documented (in the “performance analysis document”). It was known from the start that once the problem was pinpointed and a solution was chosen and put in place, the performance measurements would have to be carried out again and the results documented again.

The “performance analysis document” is the central document, used by the business owner of the application and a team of specialists, as the basis for choosing solutions for the lack of performance of the web page. It contains an overview of all the measurements that were done (front-end and back-end), the software used, details about the services in question, performance requirements, an overview of the test cases that were used, a summary, etc.

Because a picture says more than a thousand words, in the “performance analysis document” the OSB proxy service was represented as shown below (the real names are left out). For each of the 9 test cases such a picture was used.

Picture used in the performance analysis document

The OSB Proxy Service (for this article renamed to MyProxyService) contains a Request Response Pipeline with several Stages, Pipeline Pairs, a Route and several Service Callouts. For each component a response time is presented.

Service Monitoring Details

In the Oracle Service Bus Console, Pipeline Monitoring was enabled (at Action level or above) via the Operational Settings | Monitoring of the MyProxyService.

Enabled Pipeline Monitoring

Before a test case was started, the statistics of the MyProxyService were reset (by hand) in the Oracle Service Bus Console.

All 9 test cases (requests with different search criteria) were set up in SoapUI, in order to make it easy to repeat them. To get average performance measurements, a total of 5 calls (requests) were executed per test case. For the MyProxyService, the results of these 5 calls were investigated in the Oracle Service Bus Console via the Service Monitoring Details.

Service Monitoring Details

In the example shown above, based on the message count of 5, the overall average response time is 820 msecs. The Service Metrics tab displays the metrics for a proxy service or a business service. The Pipeline Metrics tab (only available for proxy services) gives information on various components of the pipeline of the service. The Action Metrics tab (only available for proxy services) presents information on actions in the pipeline of the service, displayed as a hierarchy of nodes and actions.

At first, the Service Monitoring Details (of the Oracle Service Bus Console) for a particular test case were copied by hand into a PowerPoint slide, and from there a picture was created, which was then copied into the “performance analysis document” at the paragraph for that test case.

Because of the number of measurements that had to be made for the “before situation” and the “after situation” (when the solution was put in place), it was decided to partly automate this process. Also, with future updates of the MyProxyService code in mind, it was anticipated that after each update the performance measurements for the 9 test cases would have to be carried out again.

Overview of the partly automated process

Overview

In the partly automated process, an image is derived from a PowerPoint slide and a customized VBA module. Office applications such as PowerPoint have Visual Basic for Applications (VBA), a programming language that lets you extend those applications. The VBA module reads data from a text file (MyProxyServciceStatisticsForPowerpoint.txt), replaces certain text frames (placeholders, for example CODE_Enrichment_request_elapsed-time) on the slide with data from the text file, and in the end exports the slide to an image (png file). The image can then easily be inserted into the “performance analysis document” at the paragraph for the particular test case.

Text frame with placeholder CODE_Enrichment_request_elapsed-time ==> (injected service performance metric values) ==> Text frame with injected value for placeholder CODE_Enrichment_request_elapsed-time

 

To create the text file with service monitoring details, the JMX Monitoring API was used. For details about this API see:

Java Management Extensions (JMX) Monitoring API in Oracle Service Bus (OSB)

ServiceDomainMBean

I will now explain a little bit more about the ServiceDomainMBean and how it can be used.

The public JMX APIs are modeled by a single instance of ServiceDomainMBean, which has operations to check for monitored services and retrieve data from them. A public set of POJOs provide additional objects and methods that, along with ServiceDomainMbean, provide a complete API for monitoring statistics.

There is also a sample program in the Oracle documentation (mentioned above) that demonstrates how to use the JMX Monitoring API.

Most of the information that is shown in the Service Monitoring Details page can be retrieved via the ServiceDomainMBean. This does not apply to the Action Metrics (unfortunately). The POJO object ResourceType represents all types of resources that are enabled for service monitoring. The four enum constants representing types are shown in the following table:

| Service Monitoring Details tab | ResourceType enum | Description |
|---|---|---|
| Service Metrics | SERVICE | A service is an inbound or outbound endpoint that is configured within Oracle Service Bus. It may have an associated WSDL, security settings, and so on. |
| Pipeline Metrics | FLOW_COMPONENT | Statistics are collected for the following two types of components that can be present in the flow definition of a proxy service: the Pipeline Pair node and the Route node. |
| Action Metrics | – | Not available via the ServiceDomainMBean. |
| Operations | WEBSERVICE_OPERATION | This resource type provides statistical information pertaining to WSDL operations. Statistics are reported for each defined operation. |
| – | URI | This resource type provides statistical information pertaining to the endpoint URI for a business service. Statistics are reported for each defined endpoint URI. |

Overview of extracting performance metrics and using them by a PowerPoint VBA module

Based on the above mentioned sample program, a customized program was created in Oracle JDeveloper to retrieve performance metrics for the MyProxyService, and more specifically for a particular list of components ("Initialization_request", "Enrichment_request", "RouteToADatabaseProcedure", "Enrichment_response", "Initialization_response"). An executable jar file MyProxyServiceStatisticsRetriever.jar was also created via a Deployment Profile. The program creates a text file MyProxyServiceStatistics_2016_01_14.txt with the measurements and another text file MyProxyServciceStatisticsForPowerpoint.txt with specific key-value pairs to be used by the PowerPoint VBA module.

Because the measurements had to be carried out on different WebLogic domains, a batch file MyProxyServiceStatisticsRetriever.bat was created in which the domain specific connection credentials can be passed in as program arguments.

Conclusion

After analyzing the measurements, it became obvious that the lack of performance was mainly caused by the call to the database procedure via RouteToADatabaseProcedure. So a solution was put in place whereby a caching mechanism (of pre-aggregated data) was used.

Keep in mind that, with regard to the Action Metrics, statistics cannot be gathered via the ServiceDomainMBean, and with regard to the Pipeline Metrics only Pipeline Pair node and Route node statistics can be gathered. Luckily, in my case, the main problem was in the Route node, so the ServiceDomainMBean could be used in a meaningful way.

It proved to be a good idea to partly automate the process of doing performance measurements and presenting them, because it saved a lot of time given the number of measurements that had to be made.

MyProxyServiceStatisticsRetriever.bat

[code language="html"] D:\Oracle\Middleware\jdk160_24\bin\java.exe -classpath "MyProxyServiceStatisticsRetriever.jar;D:\Oracle\Middleware\wlserver_10.3\server\lib\weblogic.jar;D:\OSB_DEV\FMW_HOME\Oracle_OSB1\lib\sb-kernel-api.jar;D:\OSB_DEV\FMW_HOME\Oracle_OSB1\lib\sb-kernel-impl.jar;D:\OSB_DEV\FMW_HOME\Oracle_OSB1\modules\com.bea.common.configfwk_1.7.0.0.jar" myproxyservice.monitoring.MyProxyServiceStatisticsRetriever "appserver01" "7001" "weblogic" "weblogic" "C:\temp"
[/code]

MyProxyServiceStatisticsRetriever.java

[code language="java"] package myproxyservice.monitoring;

import com.bea.wli.config.Ref;
import com.bea.wli.monitoring.InvalidServiceRefException;
import com.bea.wli.monitoring.MonitoringException;
import com.bea.wli.monitoring.MonitoringNotEnabledException;
import com.bea.wli.monitoring.ResourceStatistic;
import com.bea.wli.monitoring.ResourceType;
import com.bea.wli.monitoring.ServiceDomainMBean;
import com.bea.wli.monitoring.ServiceResourceStatistic;
import com.bea.wli.monitoring.StatisticType;
import com.bea.wli.monitoring.StatisticValue;

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

import java.net.MalformedURLException;

import java.text.SimpleDateFormat;

import java.util.Arrays;
import java.util.Date;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;

public class MyProxyServiceStatisticsRetriever {
private ServiceDomainMBean serviceDomainMbean = null;
private String serverName = null;
private Ref[] proxyServiceRefs;
private Ref[] filteredProxyServiceRefs;

/**
* Transforms a Long value into the time format that the Service Bus Console also uses (x secs y msecs).
*/
private String formatToTime(Long value) {
Long quotient = value / 1000;
Long remainder = value % 1000;

return Long.toString(quotient) + " secs " + Long.toString(remainder) +
" msecs";
}

/**
* Transforms a Long value into a compact time format (xsecsymsecs) for use in the PowerPoint module.
*/
private String formatToTimeForPowerpoint(Long value) {
Long quotient = value / 1000;
Long remainder = value % 1000;

return Long.toString(quotient) + "secs" + Long.toString(remainder) +
"msecs";
}

/**
* Gets an instance of ServiceDomainMBean from the weblogic server.
*/
private void initServiceDomainMBean(String host, int port, String username,
String password) throws Exception {
InvocationHandler handler =
new ServiceDomainMBeanInvocationHandler(host, port, username,
password);

Object proxy =
Proxy.newProxyInstance(ServiceDomainMBean.class.getClassLoader(),
new Class[] { ServiceDomainMBean.class },
handler);

serviceDomainMbean = (ServiceDomainMBean)proxy;
}

/**
* Invocation handler class for ServiceDomainMBean class.
*/
public static class ServiceDomainMBeanInvocationHandler implements InvocationHandler {
private String jndiURL =
"weblogic.management.mbeanservers.domainruntime";
private String mbeanName = ServiceDomainMBean.NAME;
private String type = ServiceDomainMBean.TYPE;

private String protocol = "t3";
private String hostname = "localhost";
private int port = 7001;
private String jndiRoot = "/jndi/";

private String username = "weblogic";
private String password = "weblogic";

private JMXConnector conn = null;
private Object actualMBean = null;

public ServiceDomainMBeanInvocationHandler(String hostName, int port,
String userName,
String password) {
this.hostname = hostName;
this.port = port;
this.username = userName;
this.password = password;
}

/**
* Gets JMX connection
*/
public JMXConnector initConnection() throws IOException,
MalformedURLException {
JMXServiceURL serviceURL =
new JMXServiceURL(protocol, hostname, port,
jndiRoot + jndiURL);
Hashtable h = new Hashtable();

if (username != null)
h.put(Context.SECURITY_PRINCIPAL, username);
if (password != null)
h.put(Context.SECURITY_CREDENTIALS, password);

h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
"weblogic.management.remote");

return JMXConnectorFactory.connect(serviceURL, h);
}

/**
* Invokes specified method with specified params on specified
* object.
*/
public Object invoke(Object proxy, Method method,
Object[] args) throws Throwable {
if (conn == null)
conn = initConnection();

if (actualMBean == null)
actualMBean =
findServiceDomain(conn.getMBeanServerConnection(),
mbeanName, type, null);

return method.invoke(actualMBean, args);
}

/**
* Finds the specified MBean object
*
* @param connection – A connection to the MBeanServer.
* @param mbeanName – The name of the MBean instance.
* @param mbeanType – The type of the MBean.
* @param parent – The name of the parent Service. Can be NULL.
* @return Object – The MBean or null if the MBean was not found.
*/
public Object findServiceDomain(MBeanServerConnection connection,
String mbeanName, String mbeanType,
String parent) {
try {
ObjectName on = new ObjectName(ServiceDomainMBean.OBJECT_NAME);
return (ServiceDomainMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
on);
} catch (MalformedObjectNameException e) {
e.printStackTrace();
return null;
}
}
}

public MyProxyServiceStatisticsRetriever(HashMap props) {
super();
try {
String comment = null;
String[] arrayResourceNames =
{ "Initialization_request", "Enrichment_request",
"RouteToADatabaseProcedure",
"Enrichment_response",
"Initialization_response" };
List<String> filteredResourceNames =
Arrays.asList(arrayResourceNames);

Properties properties = new Properties();
properties.putAll(props);

initServiceDomainMBean(properties.getProperty("HOSTNAME"),
Integer.parseInt(properties.getProperty("PORT")),
properties.getProperty("USERNAME"),
properties.getProperty("PASSWORD"));

// Save retrieved statistics.
String fileName =
properties.getProperty("DIRECTORY") + "\\" + "MyProxyServiceStatistics" +
"_" +
new SimpleDateFormat("yyyy_MM_dd").format(new Date(System.currentTimeMillis())) +
".txt";
FileWriter out = new FileWriter(new File(fileName));

String fileNameForPowerpoint =
properties.getProperty("DIRECTORY") + "\\" +
"MyProxyServiceStatisticsForPowerpoint" + ".txt";
FileWriter outForPowerpoint =
new FileWriter(new File(fileNameForPowerpoint));

out.write("*********************************************");
out.write("\nThis file contains statistics for a proxy service on WebLogic Server " +
properties.getProperty("HOSTNAME") + ":" +
properties.getProperty("PORT") + " and:");

out.write("\n\tDomainName: " + serviceDomainMbean.getDomainName());
out.write("\n\tClusterName: " +
serviceDomainMbean.getClusterName());
for (int i = 0; i < (serviceDomainMbean.getServerNames()).length;
i++) {
out.write("\n\tServerName: " +
serviceDomainMbean.getServerNames()[i]);
}
out.write("\n***********************************************");

proxyServiceRefs =
serviceDomainMbean.getMonitoredProxyServiceRefs();

if (proxyServiceRefs != null && proxyServiceRefs.length != 0) {

filteredProxyServiceRefs = new Ref[1];
for (int i = 0; i < proxyServiceRefs.length; i++) {
System.out.println("ProxyService fullName: " +
proxyServiceRefs[i].getFullName());
if (proxyServiceRefs[i].getFullName().equalsIgnoreCase("MyProxyService")) {
filteredProxyServiceRefs[0] = proxyServiceRefs[i];
}
}
if (filteredProxyServiceRefs != null &&
filteredProxyServiceRefs.length != 0) {
for (int i = 0; i < filteredProxyServiceRefs.length; i++) {
System.out.println("Filtered proxyService fullName: " +
filteredProxyServiceRefs[i].getFullName());
}
}

System.out.println("Started...");
for (ResourceType resourceType : ResourceType.values()) {
// Only process the following resource types: SERVICE, FLOW_COMPONENT, WEBSERVICE_OPERATION
if (resourceType.name().equalsIgnoreCase("URI")) {
continue;
}
HashMap<Ref, ServiceResourceStatistic> proxyServiceResourceStatisticMap =
serviceDomainMbean.getProxyServiceStatistics(filteredProxyServiceRefs,
resourceType.value(),
null);

for (Map.Entry<Ref, ServiceResourceStatistic> mapEntry :
proxyServiceResourceStatisticMap.entrySet()) {
System.out.println("======= Printing statistics for service: " +
mapEntry.getKey().getFullName() +
" and resourceType: " +
resourceType.toString() +
" =======");

if (resourceType.toString().equalsIgnoreCase("SERVICE")) {
comment =
"(Comparable to Service Bus Console | Service Monitoring Details | Service Metrics)";
} else if (resourceType.toString().equalsIgnoreCase("FLOW_COMPONENT")) {
comment =
"(Comparable to Service Bus Console | Service Monitoring Details | Pipeline Metrics)";
} else if (resourceType.toString().equalsIgnoreCase("WEBSERVICE_OPERATION")) {
comment =
"(Comparable to Service Bus Console | Service Monitoring Details | Operations)";
}
out.write("\n\n======= Printing statistics for service: " +
mapEntry.getKey().getFullName() +
" and resourceType: " +
resourceType.toString() + " " + comment +
" =======");
ServiceResourceStatistic serviceStats =
mapEntry.getValue();

out.write("\nStatistic collection time is - " +
new Date(serviceStats.getCollectionTimestamp()));
try {
ResourceStatistic[] resStatsArray =
serviceStats.getAllResourceStatistics();

for (ResourceStatistic resStats : resStatsArray) {
if (resourceType.toString().equalsIgnoreCase("FLOW_COMPONENT") &&
!filteredResourceNames.contains(resStats.getName())) {
continue;
}
if (resourceType.toString().equalsIgnoreCase("WEBSERVICE_OPERATION") &&
!resStats.getName().equalsIgnoreCase("MyGetDataOperation")) {
continue;
}

// Print resource information
out.write("\nResource name: " +
resStats.getName());
out.write("\n\tResource type: " +
resStats.getResourceType().toString());

// Now get and print statistics for this resource
StatisticValue[] statValues =
resStats.getStatistics();
for (StatisticValue value : statValues) {
if (resourceType.toString().equalsIgnoreCase("SERVICE") &&
!value.getName().equalsIgnoreCase("response-time")) {
continue;
}
if (resourceType.toString().equalsIgnoreCase("FLOW_COMPONENT") &&
!value.getType().toString().equalsIgnoreCase("INTERVAL")) {
continue;
}
if (resourceType.toString().equalsIgnoreCase("WEBSERVICE_OPERATION") &&
!value.getType().toString().equalsIgnoreCase("INTERVAL")) {
continue;
}

out.write("\n\t\tStatistic Name - " +
value.getName());
out.write("\n\t\tStatistic Type - " +
value.getType());

// Determine statistics type
if (value.getType() ==
StatisticType.INTERVAL) {
StatisticValue.IntervalStatistic is =
(StatisticValue.IntervalStatistic)value;

// Print interval statistics values
out.write("\n\t\t\tMessage Count: " +
is.getCount());
out.write("\n\t\t\tMin Response Time: " +
formatToTime(is.getMin()));
out.write("\n\t\t\tMax Response Time: " +
formatToTime(is.getMax()));
/* out.write("\n\t\t\tSum Value - " +
is.getSum()); */
out.write("\n\t\t\tOverall Avg. Response Time: " +
formatToTime(is.getAverage()));

if (resourceType.toString().equalsIgnoreCase("SERVICE")) {
outForPowerpoint.write("CODE_SERVICE_" +
value.getName() +
";" +
formatToTimeForPowerpoint(is.getAverage()));
}
if (resourceType.toString().equalsIgnoreCase("FLOW_COMPONENT")) {
outForPowerpoint.write("\r\nCODE_" +
resStats.getName() +
"_" +
value.getName() +
";" +
formatToTimeForPowerpoint(is.getAverage()));
}
} else if (value.getType() ==
StatisticType.COUNT) {
StatisticValue.CountStatistic cs =
(StatisticValue.CountStatistic)value;

// Print count statistics value
out.write("\n\t\t\t\tCount Value - " +
cs.getCount());
} else if (value.getType() ==
StatisticType.STATUS) {
StatisticValue.StatusStatistic ss =
(StatisticValue.StatusStatistic)value;
// Print status statistics values
out.write("\n\t\t\t\t Initial Status - " +
ss.getInitialStatus());
out.write("\n\t\t\t\t Current Status - " +
ss.getCurrentStatus());
}
}
}

out.write("\n=========================================");

} catch (MonitoringNotEnabledException mnee) {
// Statistics not available
out.write("\nWARNING: Monitoring is not enabled for this service... Do something...");
out.write("\n=====================================");

} catch (InvalidServiceRefException isre) {
// Invalid service
out.write("\nERROR: Invalid Ref. Maybe this service is deleted. Do something...");
out.write("\n======================================");
} catch (MonitoringException me) {
// Statistics not available
out.write("\nERROR: Failed to get statistics for this service... Details: " +
me.getMessage());
me.printStackTrace();
out.write("\n======================================");
}
}
}
System.out.println("Finished");
}
}
// Flush and close file.
out.flush();
out.close();
// Flush and close file.
outForPowerpoint.flush();
outForPowerpoint.close();

} catch (Exception e) {
e.printStackTrace();
}
}

public static void main(String[] args) {
try {
if (args.length < 5) {
System.out.println("Use the following arguments: HOSTNAME, PORT, USERNAME, PASSWORD, DIRECTORY. For example: appserver01 7001 weblogic weblogic C:\\temp");

} else {
HashMap<String, String> map = new HashMap<String, String>();

map.put("HOSTNAME", args[0]);
map.put("PORT", args[1]);
map.put("USERNAME", args[2]);
map.put("PASSWORD", args[3]);
map.put("DIRECTORY", args[4]);
MyProxyServiceStatisticsRetriever myProxyServiceStatisticsRetriever =
new MyProxyServiceStatisticsRetriever(map);
}
} catch (Exception e) {
e.printStackTrace();
}

}
}
[/code]

The VBA module

[code language="vb"] Sub ReadFromFile()

Dim FileNum As Integer
Dim FileName As String
Dim InputBuffer As String
Dim oSld As Slide
Dim oShp As Shape
Dim oTxtRng As TextRange
Dim oTmpRng As TextRange
Dim strWhatReplace As String, strReplaceText As String
Dim property As Variant
Dim key As String
Dim value As String
Dim sImagePath As String
Dim sImageName As String
Dim sPrefix As String
Dim lPixwidth As Long ' size in pixels of exported image
Dim lPixheight As Long

FileName = "C:\temp\MyProxyServiceStatisticsForPowerpoint.txt"
FileNum = FreeFile

On Error GoTo Err_ImageSave

sImagePath = "C:\temp"
sPrefix = "MyProxyServiceStatistics"
lPixwidth = 1024
' Set height proportional to slide height
lPixheight = (lPixwidth * ActivePresentation.PageSetup.SlideHeight) / ActivePresentation.PageSetup.SlideWidth

' A little error checking
If Dir$(FileName) <> "" Then ' the file exists, it's safe to continue
Open FileName For Input As FileNum

While Not EOF(FileNum)
Input #FileNum, InputBuffer
' Do whatever you need to with the contents of InputBuffer
'MsgBox InputBuffer
property = Split(InputBuffer, ";")
For element = 0 To UBound(property)
If element = 0 Then
key = property(element)
End If
If element = 1 Then
value = property(element)
End If
Next element
' MsgBox key
' MsgBox value

' write find text
strWhatReplace = key
' write change text
strReplaceText = value
' MsgBox strWhatReplace

' go through each slide
For Each oSld In ActivePresentation.Slides
' go through each shape and text range
For Each oShp In oSld.Shapes
If oShp.Type = msoTextBox Then

' replace in TextFrame
Set oTxtRng = oShp.TextFrame.TextRange
Set oTmpRng = oTxtRng.Replace( _
FindWhat:=strWhatReplace, _
Replacewhat:=strReplaceText, _
WholeWords:=True)

Do While Not oTmpRng Is Nothing

Set oTxtRng = oTxtRng.Characters _
(oTmpRng.Start + oTmpRng.Length, oTxtRng.Length)
Set oTmpRng = oTxtRng.Replace( _
FindWhat:=strWhatReplace, _
Replacewhat:=strReplaceText, _
WholeWords:=True)
Loop
oShp.TextFrame.WordWrap = False

End If
Next oShp
sImageName = sPrefix & "-" & oSld.SlideIndex & ".png"
oSld.Export sImagePath & "\" & sImageName, "PNG", lPixwidth, lPixheight

Next oSld
Wend

Close FileNum
MsgBox "Done"
Else
' the file isn't there. Don't try to open it.
End If

Err_ImageSave:
If Err <> 0 Then
MsgBox Err.Description
End If

End Sub
[/code]

MyProxyServiceStatistics_2016_01_14.txt

[code language="text"] *********************************************
This file contains statistics for a proxy service on WebLogic Server appserver01:7001 and:
DomainName: DM_OSB_DEV1
ClusterName: CL_OSB_01
ServerName: MS_OSB_01
ServerName: MS_OSB_02
***********************************************

======= Printing statistics for service: MyProxyService and resourceType: SERVICE (Comparable to Service Bus Console | Service Monitoring Details | Service Metrics) =======
Statistic collection time is – Thu Jan 14 11:26:00 CET 2016
Resource name: Transport
Resource type: SERVICE
Statistic Name – response-time
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 552 msecs
Max Response Time: 1 secs 530 msecs
Overall Avg. Response Time: 0 secs 820 msecs
=========================================

======= Printing statistics for service: MyProxyService and resourceType: FLOW_COMPONENT (Comparable to Service Bus Console | Service Monitoring Details | Pipeline Metrics ) =======
Statistic collection time is – Thu Jan 14 11:26:00 CET 2016
Resource name: MyGetDataOperation
Resource type: FLOW_COMPONENT
Statistic Name – Validation_request
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 0 msecs
Max Response Time: 0 secs 0 msecs
Overall Avg. Response Time: 0 secs 0 msecs
Statistic Name – Validation_response
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 0 msecs
Max Response Time: 0 secs 0 msecs
Overall Avg. Response Time: 0 secs 0 msecs
Statistic Name – Authorization_request
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 42 msecs
Max Response Time: 0 secs 62 msecs
Overall Avg. Response Time: 0 secs 52 msecs
Statistic Name – Authorization_response
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 0 msecs
Max Response Time: 0 secs 0 msecs
Overall Avg. Response Time: 0 secs 0 msecs
Resource name: Initialization_request
Resource type: FLOW_COMPONENT
Statistic Name – elapsed-time
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 0 msecs
Max Response Time: 0 secs 1 msecs
Overall Avg. Response Time: 0 secs 0 msecs
Resource name: Enrichment_request
Resource type: FLOW_COMPONENT
Statistic Name – elapsed-time
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 196 msecs
Max Response Time: 0 secs 553 msecs
Overall Avg. Response Time: 0 secs 298 msecs
Resource name: Initialization_response
Resource type: FLOW_COMPONENT
Statistic Name – elapsed-time
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 1 msecs
Max Response Time: 0 secs 3 msecs
Overall Avg. Response Time: 0 secs 2 msecs
Resource name: RouteToADatabaseProcedure
Resource type: FLOW_COMPONENT
Statistic Name – elapsed-time
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 116 msecs
Max Response Time: 0 secs 174 msecs
Overall Avg. Response Time: 0 secs 146 msecs
Resource name: Enrichment_response
Resource type: FLOW_COMPONENT
Statistic Name – elapsed-time
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 119 msecs
Max Response Time: 0 secs 411 msecs
Overall Avg. Response Time: 0 secs 230 msecs
=========================================

======= Printing statistics for service: MyProxyService and resourceType: WEBSERVICE_OPERATION (Comparable to Service Bus Console | Service Monitoring Details | Operations) =======
Statistic collection time is – Thu Jan 14 11:26:00 CET 2016
Resource name: MyGetDataOperation
Resource type: WEBSERVICE_OPERATION
Statistic Name – elapsed-time
Statistic Type – INTERVAL
Message Count: 5
Min Response Time: 0 secs 550 msecs
Max Response Time: 1 secs 95 msecs
Overall Avg. Response Time: 0 secs 731 msecs
=========================================
[/code]

MyProxyServiceStatisticsForPowerpoint.txt

[code language="text"] CODE_SERVICE_response-time;0secs820msecs
CODE_Validation_request;0secs0msecs
CODE_Validation_response;0secs0msecs
CODE_Authorization_request;0secs52msecs
CODE_Authorization_response;0secs0msecs
CODE_Initialization_request_elapsed-time;0secs0msecs
CODE_Enrichment_request_elapsed-time;0secs298msecs
CODE_Initialization_response_elapsed-time;0secs2msecs
CODE_RouteToADatabaseProcedure_elapsed-time;0secs146msecs
CODE_Enrichment_response_elapsed-time;0secs230msecs
[/code]

The post Doing performance measurements of an OSB Proxy Service by programmatically extracting performance metrics via the ServiceDomainMBean and presenting them as an image via a PowerPoint VBA module appeared first on AMIS Oracle and Java Blog.

Asynchronous interaction in Oracle BPEL and BPM. WS-Addressing and Correlation sets


There are different ways to achieve asynchronous interaction in Oracle SOA Suite. In this blog article, I’ll explain some differences between WS-Addressing and using correlation sets (in BPEL but also mostly valid for BPM). I’ll cover topics like how to put the Service Bus between calls, possible integration patterns and technical challenges.

I will also briefly describe recovery options. You can of course rely on the fault management framework. This framework, however, does not catch for example a BPEL Assign activity gone wrong or a failed transformation. Developer-defined error handling can sometimes leave holes if not thoroughly checked. If a process which should have performed a callback terminates for unexpected reasons, you might be able to perform manual recovery actions to achieve the same result as when the process was successful. This usually implies manually executing a callback to a calling service. Depending on your choice of implementation for asynchronous interaction, this callback can be easy or hard to construct.

WS-Addressing

The below part describes a WS-Addressing implementation based on BPEL templates. There are alternatives possible (requiring more manual work) such as using the OWSM WS-Addressing policy and explicitly defining a callback port. This has slightly different characteristics (benefits, drawbacks) which can be abstracted from the below description. BPM has similar characteristics but also slightly different templates.

When creating a BPEL process, you get several choices for templates to base a new process on. The Synchronous BPEL template creates a port which contains a reply (output message) in the WSDL. When you want to reply, you can use the ‘Reply’ activity in your BPEL process. The activity is present when opening your BPEL process after generation by the template, but you can use it in other locations, such as for example in exception handlers to reply with a SOAP fault. If you want to call a synchronous service, you only need a single ‘Invoke’ activity.

The output message is not created in the WSDL when using the One Way or Asynchronous templates. Also when sending an asynchronous ‘reply’, you have to use the Invoke activity in your BPEL process instead of the ‘Reply’ activity. One Way BPEL process and Asynchronous BPEL process templates are quite similar. The Asynchronous template creates a callback port and message. The ‘Invoke’ activity to actually do the asynchronous callback is already present in the BPEL process after it has been generated based on the template. The One Way template does not create a callback port in the WSDL and callback invoke in the BPEL process. If you want to call an Asynchronous service and want to do something with an asynchronous callback, you should first use an ‘Invoke’ activity to call the service and then wait with a ‘Receive’ activity for the callback.
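To make the difference concrete, the invoke-then-receive pattern for calling an asynchronous service could be sketched as below (partner link, port type, and operation names are illustrative, not taken from a real project):

```xml
<!-- Sketch only: partnerLink/portType/operation names are made up -->
<invoke name="InvokeAsyncService"
        partnerLink="AsyncService" portType="ns1:AsyncServicePortType"
        operation="process" inputVariable="requestVar"/>
<!-- The process then waits here until the asynchronous callback arrives -->
<receive name="ReceiveCallback"
         partnerLink="AsyncService" portType="ns1:AsyncServiceCallbackPortType"
         operation="processResponse" variable="callbackVar"/>
```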

For all templates, the bindings and service entry in the WSDL (which make it a concrete WSDL for an exposed service) are generated upon deployment of the process, based on information from the composite.xml file (the <binding.ws> tags).

The callback port of an asynchronous BPEL process is visible in the composite.xml file as follows:

Callback in composite.xml

This callback tag, however, does not expose a service in the normal WSDL, but a WSDL with the callback URL can be obtained. When a request is sent to a service, the WS-Addressing headers contain a callback URL in the wsa:ReplyTo/wsa:Address field. This URL can be appended with ?WSDL to obtain a WSDL which contains the actual exposed services.

Callback WSDL

The SOA infrastructure uses WS-Addressing headers to match the ‘Invoke’ message from the called service with the Receive activity which should be present in the calling service. These headers are not visible by default. If you want to see them, you can use the JDeveloper HTTP Analyzer as a proxy server (the calling process can configure a reference to use a proxy server). Mind though that you have to disable local optimization for the calls; otherwise you won’t see requests coming through the analyzer. You can do that by adding the two properties below to the reference (see here).

<property name="oracle.webservices.local.optimization">false</property>
<property name="oracle.soa.local.optimization.force">false</property>
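For context, such a reference in composite.xml might look roughly like the sketch below. This is illustrative only: the reference name, namespaces, WSDL locations, and the proxy endpoint are all made up; only the two local-optimization properties come from the text above.

```xml
<!-- Sketch: names, namespaces and locations are illustrative -->
<reference name="AsyncService" ui:wsdlLocation="AsyncService.wsdl">
  <interface.wsdl interface="http://example.com/async#wsdl.interface(AsyncServicePortType)"
                  callbackInterface="http://example.com/async#wsdl.interface(AsyncServiceCallbackPortType)"/>
  <binding.ws port="http://example.com/async#wsdl.endpoint(AsyncService/AsyncServicePort)"
              location="http://proxyhost:8099/AsyncService?WSDL"/>
  <!-- Disable local optimization so calls actually pass through the proxy server -->
  <property name="oracle.webservices.local.optimization">false</property>
  <property name="oracle.soa.local.optimization.force">false</property>
</reference>
```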

Even though WS-Addressing is used, the WS-Addressing OWSM policy is not attached to the service. The WS-Addressing headers can look like this, for example:

Complete WS-Addressing headers

Since the callback message sent from the called service is not a reference/service, you cannot tell the SOA infrastructure to send the message through a proxy server (HTTP Analyzer). You can, however, specify in JDeveloper to attach the log_policy to the callback port. This gives you the request and reply of the called service. The messages are stored in DOMAIN_HOME/servers/server_name/logs/owsm/msglogging. Using this log policy also allows you to recover messages later.

Recovery in case the callback does not come

Manual recovery is not straightforward. You need the request headers in order to construct a callback message, and it helps if you have a successful callback message as a sample of how to construct the callback headers. Obtaining the request headers can be done in several ways. You can use the log OWSM policy or a Service Bus Log or Alert action. From within BPEL or BPM it is more difficult to obtain the complete set of headers. You can obtain some headers from properties on the Receive (see here, for example, for something to mind when using the wsa.replytoaddress property), but these properties are header specific (not complete); you can also save specific SOAP header fields to variables at a Receive, but these are specific as well. The method described by Peter van Nes here seems to give you control over the outgoing WS-Addressing headers, and a similar method might allow you to also gather the incoming WS-Addressing headers. Again, however, this method works per header element. You might be allowed to leave some headers out and still have the callback arrive at the correct instance, and you may be able to determine some of the headers from the Enterprise Manager or the soainfra database tables, but I haven’t investigated this further. Once you’ve obtained the required headers and constructed a suitable reply message, you can use your favorite tool to fire the request at the server.

Service Bus between calls

In case your company uses an architecture pattern which requires requests to go through the Service Bus, you might have a challenge with WS-Addressing. The callback is a direct call based on the callback address in the request. If you want it to go through the Service Bus, you have to do some tricks. You can use the SOA-Direct transport (see some tips from Oracle on this here) and use a workaround like the one described here, where the callback data is stored elsewhere in the WS-Addressing headers and later restored and used by the Service Bus on the way back to perform a proper callback. SOA-Direct, however, also has some drawbacks. SOA-Direct calls are blocking RMI calls which do not allow setting a timeout on the request (and can thus, for example, cause stuck threads). SOA-Direct also provides some challenges when working on a cluster, and it causes a direct dependency between caller and called service, requiring measures to achieve a deployment order for your composites (which can cause dependency loops, etc.). It can also cause server start-up issues. I have not looked at recovery options when using SOA-Direct; since it is based on RMI calls, you will most likely have to write Java code to resume a failed process. For plain WS-Addressing correlation I have not found an easy way, without using SOA-Direct, to have the callback go through the Service Bus, so you will most likely end up with the second pattern in the image below.

WS Addressing with the Service Bus in between

WS addressing in summary

WS Addressing benefits

  • Works out of the box; little implementation effort required.
  • Works in the JDeveloper integrated WebLogic server.
  • Is a generally accepted and widely acknowledged standard (see here). It also allows interoperability with several other web service providers.

WS Addressing drawbacks

  • Possible integration patterns are limited. WS-Addressing is SOAP specific and always involves one caller and one called service. Multiple callbacks to the same process are not possible, and integrating calls from other sources (e.g. a file which has been put on a share) is also not straightforward.
  • It can be hard to propagate WS-Addressing headers through a chain of (e.g. composite) services.
  • The callback when using SOAP over HTTP is always direct (the caller sends a callback address with the request in the WS-A headers). It is difficult to put a Service Bus in between unless you use SOA-Direct (which has other drawbacks).
  • Manual recovery in case a callback does not come is possible, but requires determining the request headers. With SOA-Direct this becomes harder.
  • The callback cannot arrive before the process which needs to perform the callback has been called, since the WS-Addressing headers have not been determined yet. If, for example, you want to respond to events which arrived before your process was started, WS-Addressing is not the way to go.
  • Correlation is technical and cannot directly be based on functional data. It is not straightforward (or usual) to let the callback arrive at a different process.

Correlation

Instead of using the out-of-the-box WS-Addressing feature, Oracle SOA Suite offers another mechanism to link messages together in an asynchronous flow: correlation. Correlation can be used in BPM and in BPEL. Correlation sets are not that hard to use and are very powerful. You can for example read the following article to gain a better understanding. I borrowed the below image from that blog because it nicely illustrates how the different parts you need to define for correlation link together.

Correlation

You first create a correlation set. The correlation set contains properties. These properties have property aliases. These aliases usually are XPath expressions on different messages which allow the SOA infrastructure to determine the value of the property for that specific message. If a message arrives which has the same property value as an initialized correlation set in a process instance, the message is delivered to that process instance.

For example, the process gets started with a certain id. This id is defined as a property alias for the id property field in a correlation set. When receiving a message, this same correlation set is used but another alias for the property is used, namely an alias describing how the property can be determined from the received message. Because the property value of the process instance (based on the property alias of the message which started the process) and the property value of the received message (based on the property alias of the received message) evaluate to the same value, the message is delivered to that specific instance.
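As a rough sketch of the artifacts involved, the pieces could look like this in a BPEL process (all names, namespaces, and XPath queries below are made up for illustration; namespace declarations are omitted):

```xml
<!-- Property plus one alias per message type (typically in a WSDL or properties file) -->
<bpws:property name="orderId" type="xsd:string"/>
<bpws:propertyAlias propertyName="tns:orderId" messageType="tns:startRequestMessage"
                    part="payload" query="/ns1:startRequest/ns1:id"/>
<bpws:propertyAlias propertyName="tns:orderId" messageType="tns:callbackMessage"
                    part="payload" query="/ns1:callback/ns1:orderId"/>

<!-- In the BPEL process: the correlation set itself -->
<correlationSets>
  <correlationSet name="OrderIdSet" properties="tns:orderId"/>
</correlationSets>

<!-- The receive that starts the process initializes the set ... -->
<receive name="ReceiveStart" operation="start" variable="startVar" createInstance="yes">
  <correlations>
    <correlation set="OrderIdSet" initiate="yes"/>
  </correlations>
</receive>

<!-- ... and a later receive correlates the callback on the same property value -->
<receive name="ReceiveCallback" operation="callback" variable="callbackVar">
  <correlations>
    <correlation set="OrderIdSet" initiate="no"/>
  </correlations>
</receive>
```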

The power in this mechanism becomes more obvious when multiple events from diverse sources need to be orchestrated in a single process. You can think about making the receiving of a file part of your process or do asynchronous processing of items in a set and monitor/combine the results.

Recovery in case the callback does not come

Recovery in case a callback does not come is relatively easy. Correlation usually works based on a functional id and not on message headers so it is usually easier to construct a callback message. The callback message can be constructed based on the original request which could be determined from the audit logs. You can choose a construction such as with WS-Addressing with the implicit callback port defined in the composite, but it is easier to explicitly expose a callback port. This way you can even from the Enterprise Manager see where the callback should go to and use for example the test console to fire a request (this will most likely have preference with the operations department compared to using a separate tool).

Service Bus between calls

The scope of this part is a pattern in which Composite A calls Service Bus B, calls Composite B, calls Service Bus A, calls Composite A (all using SOAP calls).

There is one challenge when implementing such a pattern: where should the callback go? First, the callback URL can be present in the headers, but when Composite B calls Service Bus A, it overwrites the WS-Addressing headers and you lose this information unless you forward it in another way. If you hard-code the callback URL in Service Bus A, Composite B can only do callbacks to Composite A and you lose re-usability. If you are implementing a thin-layer Service Bus in which the interface of the Service Bus is the same as the interface of the composite, you are not allowed to add extra fields to headers or message (to forward the callback URL in). A solution is to provide a callback URL in the request message and use that callback URL to override the endpoint in the business service in the Service Bus. You can of course also use a custom SOAP header for this, but having it in the message is a bit easier to implement and more visible in audit logging.

Obtaining the callback URL can easily be automated. You can use an XPath expression such as: concat(substring-before(ora:getCompositeURL(),'!'),'/ServiceCallback'). ServiceCallback of course depends on your specific callback URL. I'll explain a bit more about the ora:getCompositeURL function later.
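In plain terms, the expression takes everything before the "!" version marker and appends the callback port path. The same logic can be sketched in a few lines of Python; the composite URL below is invented for illustration:

```python
# Plain-Python rendition of
# concat(substring-before(ora:getCompositeURL(), '!'), '/ServiceCallback').
# The sample URL is made up; real composite URLs end in !<version>.

def callback_url(composite_url, port_path="/ServiceCallback"):
    # substring-before(., '!'): everything up to the version marker
    return composite_url.split("!", 1)[0] + port_path

url = "http://soahost:8001/soa-infra/services/default/MyComposite!1.0"
print(callback_url(url))
# http://soahost:8001/soa-infra/services/default/MyComposite/ServiceCallback
```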

ora:getCompositeURL obtains (as you can probably guess) the composite URL. The substring part strips the version part, so the callback goes to the default version and you do not depend on a specific version. Correlation is process based; it does not depend on the process version to determine which instance a message should go to. The image below shows how such a pattern can work.

https://technology.amis.nl/wp-content/uploads/2016/02/wsaddressing-patterns-2.png

By using the default version, you do not depend on a specific version. You can remove versions and still have working correlation, as long as there is a valid default version capable of receiving the callback.

Correlation in summary

Drawbacks

  • Implementation requires some thought. Where do you correlate and how do you correlate?
  • Tight coupling between caller and called service requires effort to avoid. The called service should not call back to the caller with a hardcoded endpoint, in order to allow re-use of the called service. A solution is to require the calling service to send a callback URL with the request, or to integrate not with HTTP calls but, for example, with JMS.
  • The request and callback are required to contain data which allows correlation.
  • Correlation does not work on the JDeveloper embedded WebLogic server. You'll receive the error below:
    [2015-12-17T07:04:40.500+01:00] [DefaultServer] [ERROR] [] [oracle.integration.platform.blocks.event.jms2.EdnBus12c] [tid: DaemonWorkThread: ‘1’ of WorkManager: ‘wm/SOAWorkManager’] [userId: <anonymous>] [ecid: c370fba3-1fcd-480f-8c27-7c4cd6a7a41e-0000010e,0:19] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] Error occurred when saving event to JMS mapping to database: java.lang.ClassCastException: weblogic.jdbc.wrapper.PoolConnection_org_apache_derby_client_net_NetConnection cannot be cast to oracle.jdbc.OracleConnection
  • Correlation sets can be used on the SOA infrastructure but are not a generally acknowledged standard method for doing correlation (vendor specific).

Benefits

  • Very flexible: multiple correlation sets can be used in a single process, external events using diverse technologies can be correlated to running processes, and complex integration patterns are possible.
  • Correlation does not depend on SOAP headers (technical data) but (most often) on functional data / message content
  • Recovery can be relatively easy (no need to determine request headers, only expected callback message)
  • The correlating events can arrive before the process is listening; they will be processed once the correlation set is initialized
  • A Service Bus between request and response can be used (it does not matter where the callback comes from)

Common

For both WS-Addressing and correlation, messages arrive in the soainfra database schema in the DLV_MESSAGE table (when using oneWayDeliveryPolicy async.persist). You can look at the state to determine if the message has been delivered (see here for a handy blog on states). You can also browse the error hospital in the Enterprise Manager to see and recover these messages. For more information on transaction semantics and asynchronous message processing (WS-A based), you can look here. DLV_SUBSCRIPTION is also an important table storing the open Receive activities. You can do many interesting queries on those tables such as demonstrated here to determine stuck processes.

In 11g undelivered messages are not automatically retried, but you can schedule retries. See here. If this is not scheduled, you are dependent on your operations department for monitoring these messages and taking action. Undelivered messages can go to the Exhausted state; when in this state they are not automatically retried, and you should reset them (to state undelivered) or abort them (whichever is more suitable for the specific case). See here. In 12c retries are scheduled by default.

Clustering

In a clustered environment you have to mind several environment properties to make sure your callback is also load-balanced. The getCompositeURL function and the WS-Addressing headers use several properties to determine the callback URL; some are based on soainfra settings and some on server settings.

  • First the Server URL configuration property value on the SOA Infrastructure Common Properties page is checked.
  • If not specified, the FrontendHost and FrontendHTTPPort (or FrontendHTTPSPort if SSL is enabled) configuration property values from the cluster MBeans are checked.
  • If not specified, the FrontendHost and FrontendHTTPPort (or FrontendHTTPSPort if SSL is enabled) configuration property values from the Oracle WebLogic Server MBeans are checked.
  • If not specified, the DNS-resolved Inet address of localhost is used.
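The lookup order above amounts to "first configured value wins". A minimal sketch of that fallback chain (the argument names are descriptive placeholders, not the actual MBean attribute names; pass None for anything not configured):

```python
# First-non-empty wins, mirroring the documented lookup order for the
# callback base URL. Names are illustrative, not real MBean attributes.

def resolve_frontend(soa_server_url, cluster_frontend, server_frontend,
                     localhost_address):
    for candidate in (soa_server_url, cluster_frontend, server_frontend):
        if candidate:
            return candidate
    return localhost_address  # last resort: DNS-resolved localhost

# The Server URL on the SOA Infrastructure Common Properties page wins:
print(resolve_frontend("http://lb.example.com:80", None, None,
                       "http://127.0.0.1:7001"))  # http://lb.example.com:80
```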

If your operations department has followed the Enterprise Deployment Guide (for 11.1.1.6 see here for 12.1.3 see here), you have nothing to worry about though; these settings should already be correct.

Long running instances

As always, you want to avoid large numbers of long-running processes. They complicate service life-cycle management, can cause performance issues, and manual recovery is only feasible for small numbers unless you automate it. It is recommended to put timers in all long-running instances and not wait forever (manual recovery can be time consuming). Also think about options for restarting your process in a new version when new functionality is added. Making long-running processes short-running and moving to an event-based architecture might also be a suitable solution.


The post Asynchronous interaction in Oracle BPEL and BPM. WS-Addressing and Correlation sets appeared first on AMIS Oracle and Java Blog.

Seamless source “migration” from SOA Suite 12.1.3 to 12.2.1 using WLST and XSLT


When you migrate sources from SOA Suite 12.1.3 to SOA Suite 12.2.1, the only change I’ve seen JDeveloper make to the (SCA and Service Bus) code is updating versions in the pom.xml files from 12.1.3 to 12.2.1 (plus some changes to jws and jpr files). Service Bus 12.2.1 has some build difficulties when using Maven. See Oracle Support: “OSB 12.2.1 Maven plugin error, ‘Could not find artifact com.oracle.servicebus:sbar-project-common:pom’ (Doc ID 2100799.1)”. The workaround, until Oracle fixes this, is updating the pom.xml of the project: change the packaging type from sbar to jar and remove the reference to the parent project.

Both updates to the pom files can easily be automated as part of a build pipeline. This allows you to develop 12.1.3 code and automate the migration to 12.2.1. This can be useful if you want to avoid keeping separate 12.1.3 and 12.2.1 versions of your sources during a gradual migration. You can do bug fixes on the 12.1.3 sources and compile/deploy to production (usually production is the last environment to be upgraded) and use the same pipeline to compile and deploy the same sources (using altered pom files) to a 12.2.1 environment.

In order to achieve this, I’ve created a WLST script that uses XSLT transformations to update the pom files. The transformation needed to get from a 12.1.3 project to a working 12.2.1 project differs slightly between composites and Service Bus projects. You can expand this to also update the groupId for your artifact repository, to keep the 12.1.3 and 12.2.1 code separated there as well. The transform.py file provided in this blog can also be used for other XSLT transformations from WLST.

WLST

The WLST file (transform.py):
Usage: transform.py -parameter=value stylesheetfile [inputfile] [outputfile]

If you do not specify an outputfile, the output is sent to the console. If you do not specify an inputfile, the console is used as input. You can specify XSLT parameters, which will be used in the transformation. I’ve taken the sample code to do the XSLT transformation in WLST from here and expanded it. When using WLST to execute the script and piping the output to a file (not specifying an outputfile), note that you will get “Initializing WebLogic Scripting Tool (WLST) …” and similar lines above your actual script output.

[code]
import sys

from java.io import File, FileReader, FileWriter, PrintWriter
from java.lang import System
from javax.xml.transform import TransformerFactory, Transformer
from javax.xml.transform.stream import StreamSource, StreamResult

def transform(source, stylesheet, result, parameters):
    transformer = TransformerFactory.newInstance().newTransformer(stylesheet)
    for (p, v) in parameters:
        transformer.setParameter(p, v)
    transformer.transform(source, result)

args = sys.argv[1:]
parameters = []
while args and args[0].startswith('-'):
    try:
        i = args[0].index('=')
    except ValueError:
        # parameter without a value, e.g. -flag
        parameters.append((args[0][1:], ""))
    else:
        parameters.append((args[0][1:i], args[0][i+1:]))
    args = args[1:]

if len(args) == 1:
    source = StreamSource(System.in)
elif len(args) >= 2:
    source = StreamSource(FileReader(args[1]))
else:
    raise Exception("Usage: transform.py -parameter=value stylesheetfile [inputfile] [outputfile]")

if len(args) == 3:
    output = args[2]
else:
    output = ""

stylesheet = StreamSource(FileReader(args[0]))
if len(output) == 0:
    result = StreamResult(PrintWriter(System.out))
else:
    result = StreamResult(FileWriter(File(output)))

transform(source, stylesheet, result, parameters)

stylesheet.reader.close()
if source.reader:
    source.reader.close()
result.writer.close()
[/code]

XSLT

The transformation for SCA project pom files (migratesca.xsl):

<xsl:stylesheet
version="1.0"
xmlns:src="http://maven.apache.org/POM/4.0.0"
xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:namespace-alias stylesheet-prefix="src" result-prefix="#default"/>
<xsl:template match="/src:project/src:parent/src:version">
<src:version>12.2.1-0-0</src:version>
</xsl:template>
<xsl:template match="/src:project/src:build/src:plugins/src:plugin/src:version">
<src:version>12.2.1-0-0</src:version>
</xsl:template>
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>

The transformation for Service Bus pom files (migratesb.xsl):

<xsl:stylesheet
version="1.0"
xmlns:src="http://maven.apache.org/POM/4.0.0"
xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:namespace-alias stylesheet-prefix="src" result-prefix="#default"/>
<xsl:template match="/src:project/src:parent"/>
<xsl:template match="/src:project/src:packaging[text()='sbar']">
<src:packaging>jar</src:packaging>
</xsl:template>
<xsl:template match="@*|node()">
<xsl:copy>
<xsl:apply-templates select="@*|node()"/>
</xsl:copy>
</xsl:template>
</xsl:stylesheet>
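As a quick way to check what the Service Bus stylesheet should produce, the same two edits (dropping the parent element and flipping the packaging from sbar to jar) can be sketched with stdlib Python; the sample pom content below is invented:

```python
# Sketch only: mimics migratesb.xsl (remove <parent>, sbar -> jar) with
# xml.etree.ElementTree. The sample pom below is made up for illustration.
import xml.etree.ElementTree as ET

NS = "http://maven.apache.org/POM/4.0.0"
ET.register_namespace("", NS)  # serialize without a prefix, like a real pom

pom = """<project xmlns="http://maven.apache.org/POM/4.0.0">
  <parent>
    <groupId>com.oracle.servicebus</groupId>
    <artifactId>sbar-project-common</artifactId>
    <version>12.1.3-0-0</version>
  </parent>
  <artifactId>MyServiceBusProject</artifactId>
  <packaging>sbar</packaging>
</project>"""

root = ET.fromstring(pom)

# template match="/src:project/src:parent" with an empty body: drop the element
parent = root.find("{%s}parent" % NS)
if parent is not None:
    root.remove(parent)

# template match="/src:project/src:packaging[text()='sbar']": replace the text
packaging = root.find("{%s}packaging" % NS)
if packaging is not None and packaging.text == "sbar":
    packaging.text = "jar"

print(ET.tostring(root).decode())
```

This is only a verification aid; in the build pipeline itself the XSLT stylesheets are applied through transform.py.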

You can download the sources, sample pom files and a sample command-line here: https://github.com/MaartenSmeets/migrate1213to1221


The post Seamless source “migration” from SOA Suite 12.1.3 to 12.2.1 using WLST and XSLT appeared first on AMIS Oracle and Java Blog.

Oracle Service Bus: A quickstart for the Kafka transport


As mentioned in the following blog post by Lucas Jellema, Kafka is going to play a part in several Oracle products. For some use cases it might eventually even replace JMS. In order to allow for easy integration with Kafka, you can use Oracle Service Bus to create a virtualization layer around Kafka. Ricardo Ferreira from Oracle’s A-Team has done some great work on making a custom Kafka Service Bus transport available to us. Read more about this here, here and here. The Kafka transport is not an ‘officially supported’ transport. Quote from the A-Team blog: ‘The Kafka transport is provided for free to use “AS-IS” but without any official support from Oracle. The A-Team reserves the right of help in the best-effort capacity.’ I hope it will become an officially supported part of the Service Bus product in the future.

In this blog I summarize what I have done to get the end to end sample working for SOA Suite 12.2.1.2.0 and Kafka 0.10.1.0 based on the blogs I mentioned. This allows you to quickly start developing against Apache Kafka.

Setting up Apache Kafka

Setting up Apache Kafka for development is easy. You follow the quickstart at https://kafka.apache.org/quickstart. To summarize the quickstart:

  • Download Apache Kafka: https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz
  • Unzip it: tar -xzf kafka_2.11-0.10.1.0.tgz
  • Go to the Kafka directory: cd kafka_2.11-0.10.1.0
  • Start ZooKeeper: bin/zookeeper-server-start.sh config/zookeeper.properties
  • Start a new console
  • Start the Kafka broker: bin/kafka-server-start.sh config/server.properties
  • Create a topic: bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

Setting up the Kafka transport in OSB

Copy the following files to $OSB_DOMAIN/lib:

  • $KAFKA_HOME/libs/slf4j-api-1.7.21.jar
  • $KAFKA_HOME/libs/kafka-clients-0.10.1.0.jar

In my case this was /home/oracle/.jdeveloper/system12.2.1.2.42.161008.1648/DefaultDomain/lib, since I’m using the JDeveloper IntegratedWebLogicServer.

Download the Kafka transport from here: http://www.ateam-oracle.com/wp-content/uploads/2016/10/kafka-transport-0.4.1.zip

Extract the zip file.
Copy kafka-transport.ear and kafka-transport.jar to $MW_HOME/osb/lib/transports.

Start the domain

Execute install.py from the kafka-transport zipfile using wlst.sh; in my case from /home/oracle/Oracle/Middleware12212/Oracle_Home/oracle_common/common/bin/wlst.sh.

Provide the required information. The script will ask for the URL, username and password of your WebLogic server and deploy kafka-transport.jar and kafka-transport.ear to the specified server (AdminServer + cluster targets). If the deployments are already there, they are first undeployed by the script.

Stop the domain

The part below I got from the following blog. It is required to be able to configure the Kafka transport from the web interface.

Locate the following file: $MW_HOME/osb/lib/osbconsoleEar/webapp/WEB-INF/lib/adflib_osb_folder.jar.

Extract this JAR and edit /oracle/soa/osb/console/folder/l10n/FolderBundle.properties.

Add the following entries:

desc.res.gallery.kafka=The Kafka transport allows you to create proxy and business services that communicate with Apache Kafka brokers.
desc.res.gallery.kafka.proxy=The Kafka transport allows you to create proxy services that receive messages from Apache Kafka brokers.
desc.res.gallery.kafka.business=The Kafka transport allows you to create business services that route messages to Apache Kafka brokers.

ZIP up the result as a new adflib_osb_folder.jar

Check the Service Bus console

After the above steps are completed, you can start the domain and use the Kafka transport from the Service Bus console.


Setting up JDeveloper

Copy the JDeveloper plugin descriptor (transport-kafka.xml) to the plugins folder:
$MW_HOME/osb/config/plugins. In my case this is: /home/oracle/Oracle/Middleware12212/Oracle_Home/osb/config/plugins/. Since the Kafka transport is a custom transport, it is not visible in the regular palette. You can however use File, New, Proxy or Business service to create a service that uses the Kafka transport.


You will also not see the possible consumer or producer settings, but you can use the settings from here and here.

Running an end to end sample

Apache Kafka provides shell scripts to test producing and consuming messages:
- Producing: bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
- Consuming: bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

It helps to add a report, log or alert action to your Service Bus pipeline so you can see messages that have passed through. As a report key I used the Kafka offset from $inbound: ./ctx:transport/ctx:request/tp:headers/kafka:offset


And now?

As you can see, several steps need to be performed to install this custom transport, and it is only supported on a best-effort basis by the A-Team. I could not see options for properties in the Service Bus Console as shown in the blog posts mentioned at the start of this post, but that is not a real issue: a fixed set of properties might even become limiting once more options become available in a new version of Kafka. It is a shame custom transports are not visible in the component palette in JDeveloper, but once you know you can use the Kafka transport by creating proxy and business services from File, New, this also becomes a non-issue.

There are of course other solutions to take care of the integration with Kafka, such as using Kafka connectors or creating a custom service to wrap Kafka, but I like the way this custom transport integrates with Service Bus. It allows you to make Kafka available only through this channel, which offers options like easily applying policies, monitoring, alerting, etc. I do expect that in Oracle’s Cloud offering, interaction with Kafka products running in the Oracle Cloud, such as the Event Hub, will be much easier. We’re looking forward to it.


The post Oracle Service Bus: A quickstart for the Kafka transport appeared first on AMIS Oracle and Java Blog.

Using the “Standaard Zaak- en Documentservices 1.1” of Kwaliteitsinstituut Nederlandse Gemeenten (KING), as well as MTOM/XOP, in an integration between various applications (implemented in OSB 11g) around the permit-granting process for an organization in the public sector


For an organization in the public sector, AMIS was asked to implement, using Oracle Service Bus 11g, an integration between various applications involved in the permit-granting process, so that the data required for it could be processed automatically with less effort.

Important preconditions from the Solution Design were:

  • the use of nationally established standards for information exchange, for which StUF-ZKN and the “Standaard Zaak- en Documentservices 1.1” of Kwaliteitsinstituut Nederlandse Gemeenten (KING) were chosen
  • the decision to transfer large files using MTOM/XOP
  • the fact that the document management system (DMS) in use is unavailable for a number of hours each night because of a backup

This article takes a closer look at the “Standaard Zaak- en Documentservices 1.1” of KING and the use of MTOM/XOP within the implemented solution. It also discusses the use of Java Callouts to write files to, and load them from, the WebLogic application server, as well as the use of a WebLogic queue to accommodate the DMS backup window mentioned above.

The integration in question was recently taken into production successfully.

Zaakgericht werken (case-oriented working)

The organization where the integration was implemented uses so-called zaakgericht werken (case-oriented working), and the Wet algemene bepalingen omgevingsrecht (Wabo) plays an important role in the permit-granting process. Case-oriented working is a process-oriented way of working in which, among other things, cases (zaken) and documents play a role. For more information see, for example: http://www.noraonline.nl/wiki/Het_basisconcept_van_Zaakgericht_Werken.

The Wet algemene bepalingen omgevingsrecht (Wabo) specifies for which activities an environmental permit (omgevingsvergunning) is required. The omgevingsvergunning is a single integrated permit covering building, housing, monuments, spatial planning, nature and the environment. It may be required when a company wants to demolish, (re)build, establish or use something at a certain location. Central to the Wabo is the service provided by the government to citizens and businesses.

For the organization in question, the most important change is that the required data can now partly be placed into the case and the document automatically, after which the rest of the process can be started. This means that actions such as creating the case and adding the document data are no longer performed manually by default; cases and documents are only checked and adjusted when necessary.

The image below shows an overview of the applications and OSB services with which the desired integration was implemented.

The following applications play a role in the integration:

  • E-forms (IPROX-Forms)

IPROX-Forms is an application based on IPROX-CMS that makes it possible to define and handle complex interactions with a visitor. In other words: with IPROX-Forms you can build e-forms.

Source: https://www.infoprojects.nl/iprox/iprox-forms/

  • Landelijke Omgevingsloket online (OLO)

With Omgevingsloket online (www.omgevingsloket.nl), applications and notifications for environmental permits and water permits can be submitted digitally. The government can handle the applications with Omgevingsloket online. It can also be used to run a permit check to see whether a permit or notification is required.

Source: http://www.infomil.nl/onderwerpen/integrale/omgevingsloket/

  • Squit XO

Squit XO is a versatile software package for carrying out all kinds of tasks in the field of permits, supervision and enforcement (VTH). Through its support for standards and its ability to integrate with a wide range of other applications, Squit XO has grown into the market leader in the VTH software market.

Source: https://www.roxit.nl/media/715771/factsheet_squit-xo_algemeen_03032015.pdf

  • EDO / eBUS

The EDO application consists of a Document Management System (OpenText eDOCS) and a custom-built part.

OpenText eDOCS solutions offer enterprises an integrated portfolio developed specifically to support business processes, risk management and regulatory compliance throughout the entire content life cycle, from content creation to archiving, and to improve process support and collaboration, while the content is protected and risks and regulatory compliance are managed.

Source: http://www.onefox.nl/Producten/Content/OpenTexteDOCS/Paginas/eDOCS.aspx

Your Document Management System (DMS) plays an essential role in your business processes. Integration with other business applications through an open and transparent SOAP interface with your DMS is therefore crucial. One Fox eBUS is the web service layer on top of OpenText eDOCS DM/RM that considerably simplifies integration with other applications, in accordance with market standards and with your business rules for eDOCS.

Source: http://www.onefox.nl/Producten/Integratie/One%20Fox%20eBUS/Paginas/eBUS.aspx

Standaard Zaak- en Documentservices 1.1

The starting point for the design of the OSB services is the “Standaard Zaak- en Documentservices 1.1”, which builds on established standards (CMIS 1.0, StUF 3.01, StUF-ZKN 3.10, RGBZ 1.0 and ZTC 2.0) and tightens them by making them concrete for the applications involved and the functionality to be supported. This improves the interoperability between the applications involved.

The “Standaard Zaak- en Documentservices 1.1” is a limited elaboration of the StUF-ZKN standard with fewer mandatory fields.

The services are specified according to the StUF standard (StUF 3.01 / StUF-ZKN 3.10). The following messages are supported:

  • Synchronous request/response messages (Lv01/La01)
  • Asynchronous notifications (Lk01)
  • Error messages and acknowledgement messages (Fo0x and Bv03) (Lk01 and Bv01)
  • Free messages (Di02/Du02)

At the time the integration was implemented at the customer, version 1.1.0.2 was the latest version.

Source: 20150707_Specificatie_Zaak-_en_Documentservices_v1.1.02.pdf, http://gemmaonline.nl/index.php/Documentatie_Zaak-_en_Documentservices#Zaak-_en_Documentservices_1.1

The WSDL and XML Schema Definitions belonging to this standard (see Zaak_DocumentServices_1_1_02.zip at the URL above) were used in the integration.

For StUF-ZKN 3.10 a patch was available, named Zkn0310_20151126_patch23 (see http://gemmaonline.nl/index.php/Sectormodellen_Zaken:_StUF-ZKN), which was used in the integration.

For StUF-BG 3.10 a patch was available, named Bg0310_20151126_patch23 (see http://gemmaonline.nl/index.php/Sectormodel_Basisgegevens:_StUF-BG), which was used in the integration.

The “Standaard Zaak- en Documentservices 1.1” assumes a reference architecture, shown below. For each reference component, the reference architecture indicates which group of services it must provide or consume.

The table below indicates which services belong to which group:

All information and sources belonging to the “Standaard Zaak- en Documentservices 1.1” can be found on the web page of the Gemeentelijke Model Architectuur (GEMMA) (http://gemmaonline.nl/index.php/Documentatie_Zaak-_en_Documentservices), the publication and co-creation environment of Kwaliteitsinstituut Nederlandse Gemeenten (KING).

OSB services ZKN ZaakService-2.0 and ZKN ZaakDocumentService-2.0

At the customer, the application services mentioned above (only those that were functionally needed) were included as operations in two OSB services: ZKN ZaakService-2.0 and ZKN ZaakDocumentService-2.0. The generic handling of the messages is implemented by proxy service ZaakService_PS and ZaakDocumentService_PS respectively. Each proxy service contains an “Operational Branch” in which the chosen operation is used to route the incoming message to the correct, message-specific handling part (implemented by a local proxy service).

The operations of ZKN ZaakService-2.0 were also added to ZKN ZaakDocumentService-2.0, because Squit XO (via its system settings) can only be connected to a single service.

For example, for application service “Creëer Zaak” an operation “creeerZaak_Lk01” is present in proxy service ZaakService_PS with the following specification:

<operation name="creeerZaak_Lk01">
  <input message="ZKN:zakLk01"/>
  <output message="StUF:Bv03"/>
  <fault name="fout" message="StUF:Fo03"/>
</operation>

For example, for application service “Voeg Zaakdocument toe” an operation “voegZaakdocumentToe_Lk01” is present in proxy service ZaakDocumentService_PS with the following specification:

<operation name="voegZaakdocumentToe_Lk01">
  <input message="ZKN:edcLk01"/>
  <output message="StUF:Bv03"/>
  <fault name="fout" message="StUF:Fo03"/>
</operation>

To support the use of special characters, the following was configured for the proxy services on the HTTP Transport tab:

Messages, WSDLs and XSDs

The messages used within the OSB service ZKN ZaakDocumentService-2.0 are:

Each message, with its WSDL and accompanying XSD:

  • zakLv01
    WSDL: …\zkn0310\zs-dms\zkn0310_beantwoordVraag_zs-dms.wsdl
    XSD: …\zkn0310\vraagAntwoord\zkn0310_msg_vraagAntwoord.xsd
  • zakLa01
    WSDL: …\zkn0310\zs-dms\zkn0310_beantwoordVraag_zs-dms.wsdl
    XSD: …\zkn0310\vraagAntwoord\zkn0310_msg_vraagAntwoord.xsd
  • Fo02
    WSDL: …\0301\stuf0301_types.wsdl
    XSD: …\0301\stuf0301.xsd
  • edcLv01
    WSDL: …\zkn0310\zs-dms\zkn0310_beantwoordVraag_zs-dms.wsdl
    XSD: …\zkn0310\vraagAntwoord\zkn0310_msg_vraagAntwoord.xsd
  • edcLa01
    WSDL: …\zkn0310\zs-dms\zkn0310_beantwoordVraag_zs-dms.wsdl
    XSD: …\zkn0310\vraagAntwoord\zkn0310_msg_vraagAntwoord.xsd
  • edcLk01
    WSDL: …\zkn0310\zs-dms\zkn0310_ontvangAsynchroon_mutatie_zs-dms.wsdl
    XSD: …\zkn0310\mutatie\zkn0310_msg_mutatie.xsd
  • Bv02
    WSDL: …\0301\stuf0301_types.wsdl
    XSD: …\0301\stuf0301.xsd
  • Bv03
    WSDL: …\0301\stuf0301_types.wsdl
    XSD: …\0301\stuf0301.xsd
  • Fo03
    WSDL: …\0301\stuf0301_types.wsdl
    XSD: …\0301\stuf0301.xsd
  • genereerDocumentIdentificatie_Di02
    WSDL: …\zkn0310\zs-dms\zkn0310_vrijeBerichten_zs-dms.wsdl
    XSD: …\zkn0310\zs-dms\zkn0310_msg_zs-dms.xsd
  • genereerDocumentIdentificatie_Du02
    WSDL: …\zkn0310\zs-dms\zkn0310_vrijeBerichten_zs-dms.wsdl
    XSD: …\zkn0310\zs-dms\zkn0310_msg_zs-dms.xsd
  • geefZaakdocumentbewerken_Di02
    WSDL: …\zkn0310\zs-dms\zkn0310_vrijeBerichten_zs-dms.wsdl
    XSD: …\zkn0310\zs-dms\zkn0310_msg_zs-dms.xsd
  • geefZaakdocumentbewerken_Du02
    WSDL: …\zkn0310\zs-dms\zkn0310_vrijeBerichten_zs-dms.wsdl
    XSD: …\zkn0310\zs-dms\zkn0310_msg_zs-dms.xsd
  • updateZaakdocument_Di02
    WSDL: …\zkn0310\zs-dms\zkn0310_vrijeBerichten_zs-dms.wsdl
    XSD: …\zkn0310\zs-dms\zkn0310_msg_zs-dms.xsd
  • cancelCheckout_Di02
    WSDL: …\zkn0310\zs-dms\zkn0310_vrijeBerichten_zs-dms.wsdl
    XSD: …\zkn0310\zs-dms\zkn0310_msg_zs-dms.xsd

Because the rest of this blog article frequently refers to a file (in the context of the operation “voegZaakdocumentToe_Lk01”), this is the moment to point out that in the edcLk01 message the reference to the file (that is, the physical document) can be found in element edcLk01/object[1]/inhoud (with attributes contentType and bestandsnaam).

For example:

<inhoud b:contentType="application/rtf" a:bestandsnaam="xyz.rtf">
  <con:binary-content ref="cid:-15258baa:15451665d79:-7bc5"/>
</inhoud>

The messages used within the OSB service ZKN ZaakService-2.0 are:

Each message, with its WSDL and accompanying XSD:

  • zakLk01
    WSDL: …\zkn0310\zs-dms\zkn0310_ontvangAsynchroon_mutatie_zs-dms.wsdl
    XSD: …\zkn0310\mutatie\zkn0310_msg_mutatie.xsd
  • Bv03
    WSDL: …\0301\stuf0301_types.wsdl
    XSD: …\0301\stuf0301.xsd
  • Fo03
    WSDL: …\0301\stuf0301_types.wsdl
    XSD: …\0301\stuf0301.xsd
  • genereerZaakIdentificatie_Di02
    WSDL: …\zkn0310\zs-dms\zkn0310_vrijeBerichten_zs-dms.wsdl
    XSD: …\zkn0310\zs-dms\zkn0310_msg_zs-dms.xsd
  • genereerZaakIdentificatie_Du02
    WSDL: …\zkn0310\zs-dms\zkn0310_vrijeBerichten_zs-dms.wsdl
    XSD: …\zkn0310\zs-dms\zkn0310_msg_zs-dms.xsd
  • Fo02
    WSDL: …\0301\stuf0301_types.wsdl
    XSD: …\0301\stuf0301.xsd

Since both the Solution Design (with regard to security) and the “Standaard Zaak- en Documentservices 1.1” propose that messages not supported by a WSDL be removed from its binding and portType, the required WSDLs from the standard (i.e. the CDM project) were used as the basis for the final WSDL (belonging to the proxy service). This WSDL resides in the proxy directory of the OSB service ZKN ZaakDocumentService-2.0 and is composed from WSDLs from the standard.

Because the WSDL contains multiple operations that use the same input message, the “Selection Algorithm” of the proxy services had to be set to “SOAPAction Header” (select this algorithm to specify that operation mapping be done automatically from the WSDL associated with this proxy service).

ExtraElementen

Binnen de standaard kunnen extra elementen worden toegevoegd aan de berichten, zonder dat je het schema hoeft aan te passen. De StUF 3.01 standaard, in het bijzonder het stuf0301.xsd schema, bevat hiervoor het element “extraElementen”. Hiermee worden extra elementen gedefinieerd, die niet gevalideerd worden. De afstemming en validatie moeten dus tussen de applicaties worden geregeld. Door gebruik te maken van de extra elementen kan er dus toch binnen de standaard een stukje “maatwerk” worden gerealiseerd.

<element name="extraElementen" type="StUF:ExtraElementen"/>

<complexType name="ExtraElementen">
  <sequence>
    <element name="extraElement" nillable="true" maxOccurs="unbounded">
      <complexType>
        <simpleContent>
          <extension base="string">
            <attributeGroup ref="StUF:element"/>
            <attribute name="naam" type="string" use="required"/>
            <attribute ref="StUF:indOnvolledigeDatum"/>
          </extension>
        </simpleContent>
      </complexType>
    </element>
  </sequence>
</complexType>
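As an illustration, such an extraElement instance can be built programmatically with the JDK's DOM API. The element name "kleur" and value "rood" are made-up example values, not taken from the standard:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class ExtraElementenDemo {
    static final String STUF_NS = "http://www.egem.nl/StUF/StUF0301";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        // Container for the non-validated, application-specific elements
        Element extraElementen = doc.createElementNS(STUF_NS, "StUF:extraElementen");
        // One extraElement: the required "naam" attribute identifies it,
        // the text content carries its (string) value
        Element extraElement = doc.createElementNS(STUF_NS, "StUF:extraElement");
        extraElement.setAttribute("naam", "kleur");
        extraElement.setTextContent("rood");
        extraElementen.appendChild(extraElement);
        doc.appendChild(extraElementen);

        System.out.println(extraElement.getAttribute("naam")); // kleur
    }
}
```

Because the schema leaves the content unvalidated, both sending and receiving applications must agree on the meaning of each "naam" value.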

Issues while deploying the WSDLs and XSDs belonging to the “Standaard Zaak-en Documentservices 1.1”

Since the WSDLs and XSDs are regarded as a canonical data model within the integration to be realized, they were deployed as part of a separate OSB project to Oracle Service Bus version 11.1.1.7. While deploying them I ran into a number of issues. After making changes to the files, it eventually turned out to be possible to deploy them all successfully.

For example, while deploying .\zkn0310\mutatie\zkn0310_ontvangAsynchroon_mutatie.wsdl the following error occurred:

Attribute not allowed: version in element definitions@http://schemas.xmlsoap.org/wsdl/

In this file, line 2 was changed from:

<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:StUF="http://www.egem.nl/StUF/StUF0301" xmlns:ZKN="http://www.egem.nl/StUF/sector/zkn/0310" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsi="http://ws-i.org/schemas/conformanceClaim/" xmlns:xs="http://www.w3.org/2001/XMLSchema" name="StUF-ZKN0310" targetNamespace="http://www.egem.nl/StUF/sector/zkn/0310" version="031003">

to:

<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" xmlns:StUF="http://www.egem.nl/StUF/StUF0301" xmlns:ZKN="http://www.egem.nl/StUF/sector/zkn/0310" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsi="http://ws-i.org/schemas/conformanceClaim/" xmlns:xs="http://www.w3.org/2001/XMLSchema" name="StUF-ZKN0310" targetNamespace="http://www.egem.nl/StUF/sector/zkn/0310">

This change had meanwhile also been recorded as issue ERR0429 in the maintenance requests for the standard (see http://www.gemmaonline.nl/index.php/Onderhoudsverzoeken.xls).

It is therefore good to be aware that such a list of “problems” and corresponding solutions exists.

MTOM/XOP

One of the preconditions from the Solution Design was the choice to transfer large files using MTOM/XOP.

SOAP Message Transmission Optimization Mechanism (MTOM) is a way to send binary data to and from web services. MTOM uses XML-binary Optimized Packaging (XOP) to transfer the binary data.

Binary data, such as an image in JPEG format, can be exchanged between a client and the service. Usually the binary data is embedded in the XML document as an xsd:base64Binary string. Sending binary data in this format, however, greatly increases the size of the message and is expensive in terms of the required processing space and time.
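The base64 size increase is easy to quantify: every 3 binary bytes become 4 text characters, roughly a 33% growth before any XML/MIME overhead. A quick sketch:

```java
import java.util.Base64;

public class Base64Overhead {
    public static void main(String[] args) {
        // Pretend this is the binary content of a 30,000-byte document
        byte[] binary = new byte[30_000];
        String encoded = Base64.getEncoder().encodeToString(binary);
        // base64 maps every 3 input bytes to 4 output characters,
        // so 30,000 bytes become 40,000 characters
        System.out.println(encoded.length()); // 40000
    }
}
```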

By using MTOM, binary data can be sent as a MIME attachment, which reduces the transmission footprint on the wire. The binary data remains semantically part of the XML document. This is an advantage over SWA (SOAP Messages with Attachments), because it makes it possible to use, for example, message security based on WS-Security.

Using MTOM to transfer binary data as an attachment improves the performance of the web services stack. Performance is not affected if an MTOM-encoded message contains no binary data. For better interoperability, messages that do not contain binary data should not be MTOM-encoded.
Bron: Oracle Fusion Middleware Online Documentation Library, 11g Release 1 (11.1.1.7), Using MTOM Encoded Message Attachments

All applications involved in the integration (such as SquitXO) are MTOM enabled.

For a proxy service (also when using the local protocol), the use of MTOM/XOP can be configured via the Message Handling page.

By choosing “Include Binary Data by Reference” (the default) on the proxy service Message Handling page, all xop:Include elements in the inbound request message are replaced by ctx:binary-content elements when the header and body message-related context variables are populated.

For further information about the configuration, see the “Fusion Middleware Administrator’s Guide for Oracle Service Bus”.
Bron: Oracle Fusion Middleware Online Documentation Library, 11g Release 1 (11.1.1.7), Configuring Proxy Services

Example without MTOM/XOP, obtained by inspecting the body context variable:

<inhoud b:contentType="application/rtf" a:bestandsnaam="xyz.rtf">e1xydGY…</inhoud>

Example with MTOM/XOP, obtained by inspecting the body context variable:

<inhoud b:contentType="application/rtf" a:bestandsnaam="xyz.rtf">
  <con:binary-content ref="cid:-15258baa:15451665d79:-7bc5"/>
</inhoud>

Notes on the use of MTOM/XOP

Below is an example of a message (SoapUI http log) as it is offered to the OSB service ZKN ZaakDocumentService-2.0, operation voegZaakdocumentToe_Lk01.

You can see that a multipart message is used, with on the one hand a part with Content-Type “application/xop+xml” and Content-ID “<rootpart@soapui.org>”, and on the other hand a part with Content-Type “application/rtf” and Content-ID “<http://www.soapui.org/9637768303968>”. The first part refers to the second part via “href=cid:http://www.soapui.org/9637768303968”.

POST /ZaakDocumentService-2.0/ZaakDocumentService-2.0 HTTP/1.1[\r][\n]"
Accept-Encoding: gzip,deflate[\r][\n]"
Content-Type: multipart/related; type="application/xop+xml"; start="<rootpart@soapui.org>"; start-info="text/xml"; boundary="----=_Part_5_132570683.1461663503405"[\r][\n]"
SOAPAction: "http://www.egem.nl/StUF/sector/zkn/0310/voegZaakdocumentToe_Lk01"[\r][\n]"
MIME-Version: 1.0[\r][\n]"
Content-Length: 104249[\r][\n]"
Host: localhost:7001[\r][\n]"
Connection: Keep-Alive[\r][\n]"
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)[\r][\n]"
[\r][\n]"
[\r][\n]"
------=_Part_5_132570683.1461663503405"
[\r][\n]"
Content-Type: application/xop+xml; charset=UTF-8; type="text/xml""
[\r][\n]"
Content-Transfer-Encoding: 8bit"
[\r][\n]"
Content-ID: <rootpart@soapui.org>"
[\r][\n]"
[\r][\n]"
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">[\n]"
<s:Body xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">[\n]"
<edcLk01 xmlns="http://www.egem.nl/StUF/sector/zkn/0310">[\n]"
..
<object a:entiteittype="EDC" a:sleutelVerzendend="123" a:sleutelGegevensbeheer="2016/0000221" a:verwerkingssoort="T" xmlns:a="http://www.egem.nl/StUF/StUF0301">[\n]"
<identificatie>2016/0000221</identificatie>[\n]"
<dct.omschrijving>Een omschrijving</dct.omschrijving>[\n]"
<creatiedatum>20160406</creatiedatum>[\n]"
<ontvangstdatum>20160406</ontvangstdatum>[\n]"
<titel>xyz</titel>[\n]"
<beschrijving>Een beschrijving</beschrijving>[\n]"
<formaat>.rtf</formaat>[\n]"
<taal>NL</taal>[\n]"
<verzenddatum>20160406</verzenddatum>[\n]"
<vertrouwelijkAanduiding>OPENBAAR</vertrouwelijkAanduiding>[\n]"
<auteur>Een auteur</auteur>[\n]"
<inhoud b:contentType="application/rtf" a:bestandsnaam="xyz.rtf" xmlns:b="http://www.w3.org/2005/05/xmlmime"><inc:Include href="cid:http://www.soapui.org/9637768303968" xmlns:inc="http://www.w3.org/2004/08/xop/include"/></inhoud>[\n]"
..
</object>[\n]"
</edcLk01>[\n]"
</s:Body>[\n]"
</s:Envelope>"
[\r][\n]"
------=_Part_5_132570683.1461663503405"
[\r][\n]"
Content-Type: application/rtf"
[\r][\n]"
Content-Transfer-Encoding: binary"
[\r][\n]"
Content-ID:<http://www.soapui.org/9637768303968>"
[\r][\n]"
[\r][\n]"
{\rtf1\adeflang1025\ansi\ansicpg1252\uc1\adeff38\deff0\stshfdbch0...05000000000000}}"
[\r][\n]"
------=_Part_5_132570683.1461663503405--"

By choosing “Include Binary Data by Reference” on the proxy service Message Handling page, all xop:Include elements in the inbound request message are replaced by ctx:binary-content elements when the header and body message-related context variables are populated.

This turns something like:

<inhoud b:contentType="application/rtf" a:bestandsnaam="xyz.rtf" xmlns:b="http://www.w3.org/2005/05/xmlmime">
  <inc:Include href="cid:http://www.soapui.org/9637768303968" xmlns:inc="http://www.w3.org/2004/08/xop/include"/>
</inhoud>

into something like (obtained by inspecting the body context variable):

<inhoud b:contentType="application/rtf" a:bestandsnaam="xyz.rtf">
  <con:binary-content ref="cid:-15258baa:15451665d79:-7bc5"/>
</inhoud>

Note:

The edcLk01 message does not support MTOM by default. That is, the message will not validate against the XSD because of the presence of the element binary-content inside the element inhoud. Below is an example of the SOAP fault (variable $fault) that then occurs in the Service Error Handler:

<con:fault xmlns:con="http://www.bea.com/wli/sb/context">
  <con:errorCode>BEA-382505</con:errorCode>
  <con:reason>OSB Validate action failed validation</con:reason>
  <con:details>
    <con1:ValidationFailureDetail xmlns:con1="http://www.bea.com/wli/sb/stages/transform/config">
      <con1:message>Element not allowed: Include@http://www.w3.org/2004/08/xop/include in element inhoud@http://www.egem.nl/StUF/sector/zkn/0310</con1:message>
      <con1:xmlLocation>
        <xop:Include href="cid:http%3A%2F%2Ftempuri.org%2F1%2F635977930478048534" xmlns:xop="http://www.w3.org/2004/08/xop/include"/>
      </con1:xmlLocation>
    </con1:ValidationFailureDetail>
  </con:details>
  <con:location>
    <con:node>VoegZaakdocumentToe_Lk01PipelinePairNode</con:node>
    <con:pipeline>VoegZaakdocumentToe_Lk01PipelinePairNode_request</con:pipeline>
    <con:stage>ValidationStage</con:stage>
    <con:path>request-pipeline</con:path>
  </con:location>
</con:fault>

Taking the EDO back-up into account

When the integration is used, data ultimately ends up in EDO. In this context it had to be taken into account that EDO is unavailable for a number of hours because of a back-up. Since it must remain possible to submit new applications and additional documents during those hours, a facility had to be created to store this data temporarily, so that it could be stored in EDO after the back-up.

The messages are therefore stored in a queue on the OSB and the documents on a file share.

OSB services JMSProducerStuFZKNMessageService-1.0 and JMSConsumerStuFZKNMessageService-1.0

The OSB service JMSProducerStuFZKNMessageService-1.0 contains the business service JMSProducerStuFZKNMessageService_BS. This business service places a message on the queue StuFZKNMessageQueue and accepts arbitrary SOAP messages. An error queue is also configured for this queue.

The OSB service JMSConsumerStuFZKNMessageService-1.0 contains the proxy service JMSConsumerStuFZKNMessageService_PS. This proxy service retrieves (dequeues) messages from the queue StuFZKNMessageQueue.

OSB service eFormulierenService-1.0

The OSB service eFormulierenService-1.0 contains the proxy service eFormulierenService_PS. It contains an “Operational Branch” in which the chosen operation is used to route the incoming message to the correct, message-specific handling part (implemented by a local proxy service). From that message-specific handling part, the corresponding operation of the OSB service ZKN ZaakDocumentService-2.0 is invoked.

Writing the file to the WebLogic application server

Before an edcLk01 message is placed on a queue (because of the EDO back-up), the document content is written to a directory on the WebLogic application server.

This is done via a “Java Callout”. This “Java Callout” invokes a Java method that receives the following parameters:

  • Location and name of the property file to use
  • The bestandsnaam attribute of the inhoud element in the edcLk01 message
  • A DataSource reference to the “binary-content” of the MTOM attachment
  • Context
  • Logging on or off

The Java method then writes the content of the MTOM attachment to a location specified in the property file. The file name is formed by a UUID plus the supplied file name; the UUID guarantees that the file name is unique.

The file name used for storage, including its location, is returned by the Java method to the proxy. The proxy places this name in the bestandsnaam attribute of the inhoud element in the edcLk01 message, so that it can be referenced later. The binary-content element is also removed, as it is no longer needed.

The inhoud element then ends up looking, for example, like this:

<inhoud b:contentType="application/rtf" a:bestandsnaam="/home/oracle/osb/attachments/squitXO/3690a3d0-6b7d-4d61-aa77-d1727d9c02ab#10_xyz.rtf" xmlns:b="http://www.w3.org/2005/05/xmlmime"/>

Java Callout AttachmentProcessor.processAttachment and Actions used in the Proxy Service Message Flow

Java Callout | Type | Name | Expression
Method = AttachmentProcessor.processAttachment
Parameter | java.lang.String | propertyFileLocation | $attachmentProcessorPropertyFileLocation
Parameter | java.lang.String | bestandsnaam | $body/zkn:edcLk01/zkn:object[1]/zkn:inhoud/@stuf:bestandsnaam
Parameter | javax.activation.DataSource | datasource | $body/zkn:edcLk01/zkn:object[1]/zkn:inhoud/ctx:binary-content
Parameter | java.lang.String | context | $attachmentProcessorContext
Parameter | boolean | enableLogging | $attachmentProcessorEnableLogging
Result | java.lang.String | bestandsnaam

As context (variable attachmentProcessorContext), the message number within the KING Standaard Zaak-en Documentservices 1.1 is used: #10

Below is the specification of the class and method used:

class AttachmentProcessor {

        public static String processAttachment(final String propertyFileLocation,
                                               final String bestandsnaam,
                                               final DataSource datasource,
                                               final String context,
                                               final boolean enableLogging) throws IOException
}

Passing Streaming Content to a Java Callout:

You can pass binary-content as an input argument to a Java callout method in a streaming fashion. Oracle Service Bus handles this by checking the Java type of the input argument. If the argument is of type javax.activation.DataSource, the system creates a wrapper DataSource object and gets the InputStream from the corresponding source by invoking the Source.getInputStream() method. You can call this method as many times as you need in your Java callout code.

Bron: Oracle Fusion Middleware Online Documentation Library, 11g Release 1 (11.1.1.7), Fusion Middleware Administrator’s Guide for Oracle Service Bus
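The point of the DataSource contract described above is that getInputStream() can be called repeatedly, each time yielding a fresh stream over the same content. A minimal sketch of that behavior, using a plain Supplier as a stand-in for javax.activation.DataSource (an assumption made here to stay dependency-free; it is not the OSB API):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.function.Supplier;

public class RereadableSource {
    public static void main(String[] args) throws IOException {
        byte[] content = "attachment bytes".getBytes();
        // Like DataSource.getInputStream(): every call returns a new stream
        // positioned at the start of the same underlying content.
        Supplier<InputStream> source = () -> new ByteArrayInputStream(content);

        // The pipeline may read the content more than once,
        // e.g. to accommodate outbound message retries.
        System.out.println(source.get().readAllBytes().length); // 16
        System.out.println(source.get().readAllBytes().length); // 16
    }
}
```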

Replace Action:

Replace node contents of In Variable body and XPath ./zkn:edcLk01/zkn:object[1]/zkn:inhoud/@stuf:bestandsnaam with outcome of Expression $bestandsnaam.

This replaces the file name with:

<128 bit UUID><context>_<original file name>

Delete Action:

In Variable body and XPath ./zkn:edcLk01/zkn:object[1]/zkn:inhoud/ctx:binary-content

These actions turn something like:

<inhoud b:contentType="application/rtf" a:bestandsnaam="xyz.rtf">
  <con:binary-content ref="cid:-15258baa:15451665d79:-7bc5"/>
</inhoud>

into something like (obtained by inspecting the body context variable):

<inhoud b:contentType="application/rtf" a:bestandsnaam="/home/oracle/osb/attachments/squitXO/3690a3d0-6b7d-4d61-aa77-d1727d9c02ab#10_xyz.rtf"/>
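The storage name scheme, and the later fn:substring-after($…,'_') that recovers the original name, can be sketched in plain Java (the values are illustrative):

```java
import java.util.UUID;

public class AttachmentNaming {
    public static void main(String[] args) {
        String context = "#10";      // message number within the standard
        String original = "xyz.rtf"; // original bestandsnaam

        // <128 bit UUID><context>_<original file name>
        String stored = UUID.randomUUID().toString() + context + "_" + original;

        // A UUID contains no '_', so everything after the first '_' is the
        // original name -- the Java equivalent of fn:substring-after(., '_')
        String recovered = stored.substring(stored.indexOf('_') + 1);

        System.out.println(recovered); // xyz.rtf
    }
}
```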

Loading the file from the WebLogic application server

Before an edcLk01 message including document content is stored in EDO, the document content is first loaded from a directory on the WebLogic application server. Only when loading the file and storing it in EDO has succeeded is the file removed from that directory on the WebLogic application server.

This is done via a “Java Callout”.

Java Callout AttachmentProcessor.loadAttachment and Actions used in the Proxy Service Message Flow

Java Callout | Type | Name | Expression
Method = AttachmentProcessor.loadAttachment
Parameter | java.lang.String | bestandsnaam | $body/zkn:edcLk01/zkn:object[1]/zkn:inhoud/@stuf:bestandsnaam
Parameter | java.lang.String | context | $attachmentProcessorContext
Parameter | boolean | enableLogging | $attachmentProcessorEnableLogging
Result | javax.activation.DataSource | datasource

As context (variable attachmentProcessorContext), the message number within the KING Standaard Zaak-en Documentservices 1.1 is used: #10

Below is the specification of the class and method used:

class AttachmentProcessor {

    public static DataSource loadAttachment(final String bestandsnaam,
                                            final String context,
                                            final boolean enableLogging) throws IOException
}

Streaming Content Results from a Java Callout:

You can get streaming content results from a Java callout method. Oracle Service Bus handles this by checking the Java type of the result and then adding the new source to the source repository, setting the appropriate context variable value to the corresponding ctx:binary-content XML element.

Note:

To return the contents of a file from a Java callout method, you can use an instance of javax.activation.FileDataSource.

Whenever the Oracle Service Bus pipeline needs the binary contents of the source, it looks up the DataSource object corresponding to the ctx:binary-content element in the repository and invokes the DataSource.getInputStream() method to retrieve the binary octets.

The getInputStream() method might be called multiple times during message processing, for example to accommodate outbound message retries in the transport layer.

Bron: Oracle Fusion Middleware Online Documentation Library, 11g Release 1 (11.1.1.7), Fusion Middleware Administrator’s Guide for Oracle Service Bus

Replace Action:

Replace node contents of In Variable body and XPath ./zkn:edcLk01/zkn:object[1]/zkn:inhoud with outcome of Expression $datasource.

Assign Action:

Assign outcome of Expression $body/zkn:edcLk01/zkn:object[1]/zkn:inhoud/@stuf:bestandsnaam to variable processAttachmentBestandsnaam.

Replace Action:

Replace node contents of In Variable body and XPath ./zkn:edcLk01/zkn:object[1]/zkn:inhoud/@stuf:bestandsnaam with outcome of Expression fn:substring-after($processAttachmentBestandsnaam,'_').

These actions turn something like:

<inhoud b:contentType="application/rtf" a:bestandsnaam="/home/oracle/osb/attachments/squitXO/3690a3d0-6b7d-4d61-aa77-d1727d9c02ab#10_xyz.rtf"/>

into something like (obtained by inspecting the body context variable):

<inhoud b:contentType="application/rtf" a:bestandsnaam="xyz.rtf">
  <con:binary-content ref="cid:2265ab65:1549eee2602:-7bbf" xmlns:con="http://www.bea.com/wli/sb/context"/>
</inhoud>

Java Callout AttachmentProcessor.deleteAttachment used in the Proxy Service Message Flow

Java Callout | Type | Name | Expression
Method = AttachmentProcessor.deleteAttachment
Parameter | java.lang.String | bestandsnaam | $processAttachmentBestandsnaam
Parameter | java.lang.String | context | $attachmentProcessorContext
Parameter | boolean | enableLogging | $attachmentProcessorEnableLogging
Result | boolean | deleteAttachmentSuccessfull

As context (variable attachmentProcessorContext), the message number within the KING Standaard Zaak-en Documentservices 1.1 is used: #10

Below is the specification of the class and method used:

class AttachmentProcessor {

    public static boolean deleteAttachment(final String bestandsnaam,
                                           final String context,
                                           final boolean enableLogging) throws IOException
}

AttachmentProcessor.java:

package nl.xyz.osb;


import com.sun.xml.internal.ws.util.ByteArrayDataSource; // JDK-internal class; consider a public DataSource implementation in production code

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;

import java.text.SimpleDateFormat;

import java.util.Date;
import java.util.Properties;
import java.util.UUID;

import javax.activation.DataSource;


public class AttachmentProcessor {

    /**
     * Processes an attachment by saving the binary content (param datasource) to a file with a filename based on param bestandsnaam.
     * @param propertyFileLocation the filename and directory of the property file
     * @param bestandsnaam         the original filename of the attachment
     * @param datasource           the binary content of the attachment
     * @param context              the context for which the attachment is processed
     * @param enableLogging        indicator if logging should be enabled
     * @return                     the filename of the saved attachment
     * @throws IOException
     */
    public static String processAttachment(final String propertyFileLocation,
                                           final String bestandsnaam,
                                           final DataSource datasource,
                                           final String context,
                                           final boolean enableLogging) throws IOException {
        final Properties properties = readProperties(propertyFileLocation);
        final String targetPath = properties.getProperty("target_path");
        if ((targetPath == null) || targetPath.isEmpty()) {
            throw new IOException("Property [target_path] not found or empty!");
        }

        final String filename =
            targetPath + UUID.randomUUID().toString() + context + "_" +
            bestandsnaam;
        FileOutputStream fileOutputStream = null;
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
        Date startDate = new Date();
        try {
            println("Begin of AttachmentProcessor.processAttachment",
                    enableLogging);
            println(sdf.format(startDate), enableLogging);
            println("AttachmentProcessor.processAttachment: bestandsnaam = " +
                    bestandsnaam, enableLogging);
            println("AttachmentProcessor.processAttachment: context = " +
                    context, enableLogging);
            if (datasource != null) {
                final InputStream inputStream = datasource.getInputStream();

                if (inputStream != null) {
                    fileOutputStream = new FileOutputStream(filename);

                    int total = 0;
                    int len = 0;
                    byte[] bytes = new byte[1024];

                    while ((len = inputStream.read(bytes)) != -1) {
                        fileOutputStream.write(bytes, 0, len);
                        total = total + len;
                    }
                    println("AttachmentProcessor.processAttachment: written file length = " +
                            total + " bytes", enableLogging);
                    println("AttachmentProcessor.processAttachment: filename = " +
                            filename, enableLogging);
                } else {
                    throw new IOException("InputStream/binary-content is empty!");
                }

            } else {
                throw new IOException("DataSource/binary-content is empty!");
            }
            println("End of AttachmentProcessor.processAttachment",
                    enableLogging);
        } catch (Exception e) {
            println("Error in AttachmentProcessor.processAttachment",
                    enableLogging);
            e.printStackTrace();
            throw new IOException(e);
        } finally {
            if (fileOutputStream != null) { // may still be null if an earlier step failed
                fileOutputStream.close();
            }
        }

        return filename;
    }

    /**
     * Returns binary content (as datasource) of a saved attachment with a filename equal to param bestandsnaam.
     * @param bestandsnaam  the filename of the attachment
     * @param context       the context for which the attachment is loaded
     * @param enableLogging indicator if logging should be enabled
     * @return              the binary content of the attachment
     * @throws IOException
     */
    public static DataSource loadAttachment(final String bestandsnaam,
                                            final String context,
                                            final boolean enableLogging) throws IOException {
        DataSource datasource = null;
        java.io.FileInputStream fileInputStream = null;
        java.io.ByteArrayOutputStream byteArrayOutputStream = null;
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
        Date startDate = new Date();
        try {
            println("Begin of AttachmentProcessor.loadAttachment",
                    enableLogging);
            println(sdf.format(startDate), enableLogging);
            println("AttachmentProcessor.loadAttachment: bestandsnaam = " +
                    bestandsnaam, enableLogging);
            println("AttachmentProcessor.loadAttachment: context = " + context,
                    enableLogging);
            fileInputStream = new java.io.FileInputStream(bestandsnaam);
            byteArrayOutputStream = new java.io.ByteArrayOutputStream();

            int total = 0;
            int len = 0;
            byte[] bytes = new byte[1024];
            while ((len = fileInputStream.read(bytes)) != -1) {
                byteArrayOutputStream.write(bytes, 0, len);
                total = total + len;
            }

            byte[] data = byteArrayOutputStream.toByteArray();

            datasource =
                    new ByteArrayDataSource(data, "application/octet-stream");
            println("AttachmentProcessor.loadAttachment: read file length = " +
                    total + " bytes", enableLogging);
            println("End of AttachmentProcessor.loadAttachment",
                    enableLogging);
        } catch (Exception e) {
            println("Error in AttachmentProcessor.loadAttachment",
                    enableLogging);
            e.printStackTrace();
            throw new IOException(e);
        } finally {
            if (fileInputStream != null) { // may still be null if opening the file failed
                fileInputStream.close();
            }
        }

        return datasource;
    }

    /**
     * Deletes a saved attachment with a filename equal to param bestandsnaam.
     * @param bestandsnaam  the filename of the attachment
     * @param context       the context for which the attachment is deleted
     * @param enableLogging indicator if logging should be enabled
     * @return              indicator showing if deleting the file was successful
     * @throws IOException
     */
    public static boolean deleteAttachment(final String bestandsnaam,
                                           final String context,
                                           final boolean enableLogging) throws IOException {
        boolean success;
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
        Date startDate = new Date();
        try {
            println("Begin of AttachmentProcessor.deleteAttachment",
                    enableLogging);
            println(sdf.format(startDate), enableLogging);
            println("AttachmentProcessor.deleteAttachment: bestandsnaam = " +
                    bestandsnaam, enableLogging);
            println("AttachmentProcessor.deleteAttachment: context = " +
                    context, enableLogging);
            success = (new File(bestandsnaam)).delete();
            println("End of AttachmentProcessor.deleteAttachment",
                    enableLogging);

        } catch (Exception e) {
            println("Error in AttachmentProcessor.deleteAttachment",
                    enableLogging);
            e.printStackTrace();
            throw new IOException(e);
        }

        return success;
    }

    /**
     * Returns properties of a property file
     * @param propertyFileLocation the filename and directory of the property file
     * @return properties
     * @throws IOException
     */
    private static Properties readProperties(final String propertyFileLocation) throws IOException {
        Properties properties = null;
        FileInputStream fileInputStream = null;

        try {
            File file = new File(propertyFileLocation);
            fileInputStream = new FileInputStream(file);
            properties = new Properties();
            properties.load(fileInputStream);
        } catch (Exception e) {
            e.printStackTrace();
            throw new IOException(e);
        } finally {
            if (fileInputStream != null) {
                fileInputStream.close();
            }
        }

        return properties;
    }

    /**
     * Print a text if enableLogging is true
     * @param s             the text to print
     * @param enableLogging indicator if logging should be enabled
     */
    private static void println(final String s, final boolean enableLogging) {
        if (enableLogging) {
            System.out.println(s);
        }
    }
}

The post Gebruik van de “Standaard Zaak-en Documentservices 1.1” van Kwaliteitsinstituut Nederlandse Gemeenten (KING), almede MTOM/XOP t.b.v. een koppeling tussen diverse applicaties (gerealiseerd binnen OSB 11g) aangaande het proces van vergunningverlening voor een organisatie in de publieke sector appeared first on AMIS Oracle and Java Blog.

Oracle Service Bus: Pipeline alerts in Splunk using SNMP traps


Oracle Service Bus provides a reporting activity called Alert. OSB pipeline alerts use a persistent store, which is file based. Changing the persistent store to JDBC-based does not cause pipeline alerts to be stored in a database instead of on disk. When the persistent store on disk becomes large, opening pipeline alerts in the Enterprise Manager (12c) or Service Bus console (11g) can suffer from poor performance. If you apply an archive setting to pipeline alerts (see here), the disk space used by the persistent store is not reclaimed when alerts are deleted. You can compact the store to reclaim space (see here), but this requires the store to be offline, which might mean shutting down the Service Bus. That is cumbersome to do often and not good for your availability.

If you do not want to use the EM / SB console, or want to avoid these file store issues, there is an alternative. Pipeline alerts can produce SNMP traps, which a WebLogic SNMP Agent can forward to an SNMP Manager. This manager can store the SNMP traps in a file, and Splunk can monitor that file. Splunk makes searching and visualizing alerts easy. In this blog I will describe the steps needed to get a minimal setup with SNMP traps going, and how to see the pipeline alerts in Splunk.

Service Bus

Create an AlertDestination in JDeveloper

Make sure you have Alert Logging and Reporting disabled and SNMP Trap enabled in the Alert Destination you are using in your Service Bus project. For testing purposes you can first keep the Alert Logging on to also see the alerts in the EM or SB Console.

Add the Alert action to a pipeline

In this example I’m logging the entire body of the message. You might also consider logging the (SOAP) header in a more elaborate setup if it contains relevant information. Configure the alert to use the alert destination.

WebLogic Server

Configure an SNMP Manager

On Ubuntu Linux installing an SNMP Manager and running it is as easy as:

sudo apt-get install snmptrapd

Update /etc/snmp/snmptrapd.conf

Uncomment the line: authCommunity log,execute,net public

The community string public must match the Community Prefix set under Community Based Access in the WebLogic SNMP Agent configuration below.

sudo snmptrapd -Lf /var/log/snmp-traps

This runs an SNMP trap daemon on UDP port 162 and puts the output in a file called /var/log/snmp-traps. On my Ubuntu machine, snmptrapd logging ended up in /var/log/syslog.
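
The role snmptrapd plays here can be illustrated with a minimal Python stand-in: listen for UDP datagrams (SNMP traps are plain UDP messages) and append them to a file that Splunk can monitor. The port, file name, and function name below are assumptions for this sketch; the real setup uses snmptrapd on UDP port 162 writing to /var/log/snmp-traps, and a real manager would decode the SNMP PDU instead of logging raw bytes.

```python
import socketserver

TRAP_LOG = "snmp-traps.log"  # stand-in for /var/log/snmp-traps

class TrapHandler(socketserver.BaseRequestHandler):
    """Append each received UDP datagram (an SNMP trap) to the log file."""
    def handle(self):
        # For UDP servers, self.request is a (data, socket) pair.
        data, _sock = self.request
        with open(TRAP_LOG, "a") as f:
            # A real SNMP manager would decode the trap PDU; we log raw bytes.
            f.write(repr(data) + "\n")

def run_manager(host="0.0.0.0", port=1620):
    """Handle traps forever; port 1620 avoids the <1024 privilege restriction."""
    with socketserver.UDPServer((host, port), TrapHandler) as server:
        server.serve_forever()
```

This is only useful for experimenting with the trap flow; snmptrapd remains the proper tool for a real setup.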

Configure the SNMP Agent

Configuring an SNMP Agent on WebLogic Server is straightforward and you do not need to restart the server after you have done this configuration. Go to Diagnostics, SNMP and enable the SNMP Agent for the domain. Do mind the following pieces of configuration though:

On Linux a non-privileged user is not allowed to bind to ports below 1024. I’ve added a zero after the port numbers to avoid the issue of the SNMP Agent not being able to start (see here).

For the Trap Destination specify the host/port where the SNMP Manager (snmptrapd) is running.

Test the setup

If you want to test the configuration of the agent, Service Bus alert and AlertDestination, you can use the following (inspired by this).

First run setDomainEnv.cmd or setDomainEnv.sh; weblogic.jar must be in the CLASSPATH.

java weblogic.diagnostics.snmp.cmdline.Manager SnmpTrapMonitor -p 162

The port is the port given in the trap destination. Use a port above 1024 if you do not have permissions to create a server running on a lower port.

Now if you call your service with the pipeline alert and alert destination configured correctly and you have configured the SNMP Agent in WebLogic Server correctly, you will see the SNMP Manager producing output in the console of the SNMP trap which has been caught. If you do not see any output, check the WebLogic server logs for SNMP related errors. If this is working correctly, you can change the trap destination to point to snmptrapd (which of course needs to be running). If you do not see pipeline alerts from snmptrapd in /var/log/snmp-traps, you might have a connectivity issue to snmptrapd or you have not configured snmptrapd correctly. For example, you forgot to edit /etc/snmp/snmptrapd.conf. Also check /var/log/syslog for snmptrapd messages.

Splunk

It is easy to add a file as a data source in Splunk. Out of the box you get results like below. As you can see, the entire message is present in the log, including additional data such as the pipeline, the location of the alert and the domain.

You can read more about the Splunk setup here.

Some notes

  • Do you want to use pipeline alerts? The Alert activity in Service Bus is blocking; processing of the pipeline will continue after the Alert has been delivered (stored in the persistent store or after having produced an SNMP trap). This can delay service calls (in contrast to Report activities). Also there have been reports of memory leaks. See: ‘OSB Alert Log Activities Generating Memory Leak on WebLogic Server (Doc ID 1536484.1)’ on Oracle support.
  • Use a single alert destination for all your services. This makes changing the alert configuration easier.
  • Think about your alert levels. You do not want alerts for everything all the time since it has a performance impact.
  • Configure logrotate for the SNMP Manager trap file. Otherwise it might become very large and difficult to parse. See here for some examples.
  • Consider running snmptrapd on another host than the WebLogic Server. With large numbers of pipeline alerts, writing traps causes disk IO, potentially more than the regular persistent store because of the plain text format. I have not checked if this causes a delay in Service Bus pipeline processing. My guess is that producing alerts and sending them to the SNMP Agent might happen on the same thread which processes the Service Bus pipeline, but that sending SNMP traps from the SNMP Agent to the SNMP Manager does not, and thus will not delay the Service Bus process. Do some performance tests before making decisions on a local or remote snmptrapd setup.
  • Which SNMP Manager do you want to use? I’m using snmptrapd because it easily produces files which can be read by Splunk, but with this (Service Bus, WebLogic Server) setup you can of course use any other SNMP Manager in combination with Splunk instead of snmptrapd. For example Enterprise Manager Cloud Control (see here).
  • SNMP traps are UDP messages. If sent and not received, they might be lost. As a consequence you might lose pipeline alerts.
  • Pipeline alerts are also visible in the server log. Splunk can monitor the server log. This is an easy alternative.
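
As a sketch of the logrotate suggestion above, a minimal /etc/logrotate.d/snmp-traps could look like this (the rotation frequency and retention are assumptions; copytruncate avoids having to signal snmptrapd to reopen its log file):

```
/var/log/snmp-traps {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```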

The post Oracle Service Bus: Pipeline alerts in Splunk using SNMP traps appeared first on AMIS Oracle and Java Blog.


Oracle Service Bus: Produce messages to a Kafka topic

$
0
0

Oracle Service Bus is a powerful tool that provides features like transformation, throttling and virtualization for messages coming from different sources. There is a (recently open-sourced!) Kafka transport available for Oracle Service Bus (see here). Oracle Service Bus can thus be used to do all kinds of interesting things to messages coming from Kafka topics. You can then produce the altered messages to other Kafka topics and create a decoupled processing chain. In this blog post I provide an example of how to use Oracle Service Bus to produce messages to a Kafka topic.

Messages from Service Bus to Kafka

First perform the steps as described here to setup the Service Bus with the Kafka transport. Also make sure you have a Kafka broker running.

Next create a new Business Service (File, New, Business Service). The Kafka transport is not visible in the component palette since it is a custom transport; select Kafka as the transport for the Business Service.


In the Type screen be sure to select Text as request message and None as response message.


Specify a Kafka bootstrap broker.


The body needs to be of type {http://schemas.xmlsoap.org/soap/envelope/}Body. If you send plain text as the body to the Kafka transport, you will get the below error message:

<Error> <oracle.osb.pipeline.kernel.router> <ubuntu> <DefaultServer> <[STUCK] ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <43b720fd-2b5a-4c93-073-298db3e92689-00000132> <1486368879482> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <OSB-382191> <SBProject/ProxyServicePipeline: Unhandled error caught by system-level error handler: com.bea.wli.sb.pipeline.PipelineException: OSB Assign action failed updating variable "body": [OSB-395105]The TokenIterator does not correspond to a single XmlObject value

If you send XML as the body of the message going to the transport but not an explicit SOAP body, you will get errors in the server log like below:

<Error> <oracle.osb.pipeline.kernel.router> <ubuntu> <DefaultServer> <[STUCK] ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <43b720fd-2b5a-4c93-a073-298db3e92689-00000132> <1486368987002> <[severity-value: 8] [rid: 0] [partition-id: 0] [partition-name: DOMAIN] > <OSB-382191> <SBProject/ProxyServicePipeline: Unhandled error caught by system-level error handler: com.bea.wli.sb.context.BindingLayerException: Failed to set the value of context variable "body". Value must be an instance of {http://schemas.xmlsoap.org/soap/envelope}Body.

As you can see, this causes stuck threads. In order to get a {http://schemas.xmlsoap.org/soap/envelope/}Body you can for example use an Assign activity. In this case I’m replacing text in the input body and assigning the result to the output body, using <ns:Body xmlns:ns='http://schemas.xmlsoap.org/soap/envelope/'>{fn:replace($body,'Trump','Clinton')}</ns:Body>. This replaces Trump with Clinton.
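
Outside OSB, the effect of this Assign can be sketched in plain Python: the same string replacement, with the result wrapped in a SOAP 1.1 Body element. This is only an illustration; build_body is a hypothetical helper, not part of any OSB API.

```python
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_body(payload: str) -> str:
    # Equivalent of fn:replace($body,'Trump','Clinton') in the Assign.
    replaced = payload.replace("Trump", "Clinton")
    # Wrap the result in a {http://schemas.xmlsoap.org/soap/envelope/}Body,
    # the type the Kafka transport expects for the body variable.
    body = ET.Element(f"{{{SOAP_ENV}}}Body")
    body.text = replaced
    return ET.tostring(body, encoding="unicode")
```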


When you check the output with a tool like KafkaTool, you can see that the SOAP Body element itself is not propagated to the Kafka topic.

Finally

Oracle Service Bus processes individual messages. If you want to aggregate data or perform analytics on several messages, you can consider using Oracle Stream Analytics (OSA). It also has pattern recognition and several other interesting features. It is however not very suitable for splitting up messages or performing more complicated actions on individual messages, such as transformations. For such a use case, use Oracle Service Bus.

The post Oracle Service Bus: Produce messages to a Kafka topic appeared first on AMIS Oracle and Java Blog.

Oracle Service Bus : disable / enable a proxy service via WebLogic Server MBeans with JMX

$
0
0

At a public sector organization in the Netherlands an OSB proxy service was (via JMS) reading messages from a WebLogic queue. These messages were then sent to a back-end system. Every evening, during a certain time period, the back-end system was down. Therefore, and also in case of planned maintenance, it was necessary to be able to stop and start the sending of messages from the queue to the back-end system. Hence, a script was needed to disable/enable the OSB proxy service (deployed on OSB 11.1.1.7).

This article will explain how the OSB proxy service can be disabled/enabled via WebLogic Server MBeans with JMX.

A managed bean (MBean) is a Java object that represents a Java Management Extensions (JMX) manageable resource in a distributed environment, such as an application, a service, a component, or a device.

First, a high-level overview of the MBeans is given. For further information see “Fusion Middleware Developing Custom Management Utilities With JMX for Oracle WebLogic Server”, via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13728/toc.htm

Next the structure and use of the System MBean Browser in the Oracle Enterprise Manager Fusion Middleware Control is discussed.

Finally the code to disable/enable the OSB proxy service is shown.

To disable/enable an OSB proxy service, the WebLogic Scripting Tool (WLST) can also be used, but in this case (also because of my Java developer skills) JMX was used. For more information, see for example the AMIS TECHNOLOGY BLOG article “Oracle Service Bus: enable / disable proxy service with WLST”, via url: https://technology.amis.nl/2011/01/10/oracle-service-bus-enable-disable-proxy-service-with-wlst/

The Java Management Extensions (JMX) technology is a standard part of the Java Platform, Standard Edition (Java SE platform). The JMX technology was added to the platform in the Java 2 Platform, Standard Edition (J2SE) 5.0 release.

The JMX technology provides a simple, standard way of managing resources such as applications, devices, and services. Because the JMX technology is dynamic, you can use it to monitor and manage resources as they are created, installed and implemented. You can also use the JMX technology to monitor and manage the Java Virtual Machine (Java VM).

For another example of using MBeans with JMX, I kindly point you to another article (written by me) on the AMIS TECHNOLOGY BLOG: “Doing performance measurements of an OSB Proxy Service by programmatically extracting performance metrics via the ServiceDomainMBean and presenting them as an image via a PowerPoint VBA module”, via url: https://technology.amis.nl/2016/01/30/performance-measurements-of-an-osb-proxy-service-by-using-the-servicedomainmbean/

Basic Organization of a WebLogic Server Domain

As you probably already know a WebLogic Server administration domain is a collection of one or more servers and the applications and resources that are configured to run on the servers. Each domain must include a special server instance that is designated as the Administration Server. The simplest domain contains a single server instance that acts as both Administration Server and host for applications and resources. This domain configuration is commonly used in development environments. Domains for production environments usually contain multiple server instances (Managed Servers) running independently or in groups called clusters. In such environments, the Administration Server does not host production applications.

Separate MBean Types for Monitoring and Configuring

All WebLogic Server MBeans can be organized into one of the following general types based on whether the MBean monitors or configures servers and resources:

  • Runtime MBeans contain information about the run-time state of a server and its resources. They generally contain only data about the current state of a server or resource, and they do not persist this data. When you shut down a server instance, all run-time statistics and metrics from the run-time MBeans are destroyed.
  • Configuration MBeans contain information about the configuration of servers and resources. They represent the information that is stored in the domain’s XML configuration documents.
  • Configuration MBeans for system modules contain information about the configuration of services such as JDBC data sources and JMS topics that have been targeted at the system level. Instead of targeting these services at the system level, you can include services as modules within an application. These application-level resources share the life cycle and scope of the parent application. However, WebLogic Server does not provide MBeans for application modules.

MBean Servers

At the core of any JMX agent is the MBean server, which acts as a container for MBeans.

The JVM for an Administration Server maintains three MBean servers provided by Oracle and optionally maintains the platform MBean server, which is provided by the JDK itself. The JVM for a Managed Server maintains only one Oracle MBean server and the optional platform MBean server.

Each MBean server creates, registers, and provides access to the following:

  • Domain Runtime MBean Server: MBeans for domain-wide services. This MBean server also acts as a single point of access for MBeans that reside on Managed Servers. Only the Administration Server hosts an instance of this MBean server.
  • Runtime MBean Server: MBeans that expose monitoring, run-time control, and the active configuration of a specific WebLogic Server instance. In release 11.1.1.7, the WebLogic Server Runtime MBean Server is configured by default to be the platform MBean server. Each server in the domain hosts an instance of this MBean server.
  • Edit MBean Server: pending configuration MBeans and operations that control the configuration of a WebLogic Server domain. It exposes a ConfigurationManagerMBean for locking, saving, and activating changes. Only the Administration Server hosts an instance of this MBean server.
  • The JVM’s platform MBean server: MBeans provided by the JDK that contain monitoring information for the JVM itself. You can register custom MBeans in this MBean server. In release 11.1.1.7, WebLogic Server uses the JVM’s platform MBean server to contain the WebLogic run-time MBeans by default.

Service MBeans

Within each MBean server, WebLogic Server registers a service MBean under a simple object name. The attributes and operations in this MBean serve as your entry point into the WebLogic Server MBean hierarchies and enable JMX clients to navigate to all WebLogic Server MBeans in an MBean server after supplying only a single object name.

  • The Domain Runtime MBean Server hosts the DomainRuntimeServiceMBean (JMX object name: com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean). It provides access to MBeans for domain-wide services such as application deployment, JMS servers, and JDBC data sources. It is also a single point for accessing the hierarchies of all run-time MBeans and all active configuration MBeans for all servers in the domain.
  • Runtime MBean Servers host the RuntimeServiceMBean (JMX object name: com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean). It provides access to run-time MBeans and active configuration MBeans for the current server.
  • The Edit MBean Server hosts the EditServiceMBean (JMX object name: com.bea:Name=EditService,Type=weblogic.management.mbeanservers.edit.EditServiceMBean). It provides the entry point for managing the configuration of the current WebLogic Server domain.

Choosing an MBean Server

If your client monitors run-time MBeans for multiple servers, or if your client runs in a separate JVM, Oracle recommends that you connect to the Domain Runtime MBean Server on the Administration Server instead of connecting separately to each Runtime MBean Server on each server instance in the domain.

The trade off for directing all JMX requests through the Domain Runtime MBean Server is a slight degradation in performance due to network latency and increased memory usage. However, for most network topologies and performance requirements, the simplified code maintenance and enhanced security that the Domain Runtime MBean Server enables is preferable.

System MBean Browser

Oracle Enterprise Manager Fusion Middleware Control provides the System MBean Browser for managing MBeans that perform specific monitoring and configuration tasks.

Via the Oracle Enterprise Manager Fusion Middleware Control for a certain domain, the System MBean Browser can be opened.

Here the previously mentioned types of MBean’s can be seen: Runtime MBeans and Configuration MBeans:

When navigating to “Configuration MBeans | com.bea”, the previously mentioned EditServiceMBean can be found:

When navigating to “Runtime MBeans | com.bea | Domain: <a domain>”, the previously mentioned DomainRuntimeServiceMBean can be found:

Also the later on in this article mentioned MBeans can be found:

For example for the ProxyServiceConfigurationMbean, the available operations can be found:

When navigating to “Runtime MBeans | com.bea”, within each Server the previously mentioned RuntimeServiceMBean can be found.

 

Code to disable/enable the OSB proxy service

The requirement to be able to stop and start sending messages to the back-end system from the queue was implemented by disabling/enabling the OSB proxy service JMSConsumerStuFZKNMessageService_PS.

Shortly before the back-end system goes down, dequeuing of the queue should be disabled.
Right after the back-end system comes up again, dequeuing of the queue should be enabled.

The state of the OSB Proxy service can be seen in the Oracle Service Bus Administration 11g Console (for example via the Project Explorer) in the tab “Operational Settings” of the proxy service.

For ease of use, two MS-DOS batch files were created, each using MBeans to change the state of a service (proxy service or business service). As stated before, WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

  • Disable_JMSConsumerStuFZKNMessageService_PS.bat

On the server where the back-end system resides, the MS-DOS batch file “Disable_JMSConsumerStuFZKNMessageService_PS.bat” is called.

The content of the batch file is:

java.exe -classpath "OSBServiceState.jar;com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl.xyz.osbservice.osbservicestate.OSBServiceState "xyz" "7001" "weblogic" "xyz" "ProxyService" "JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS" "Disable"

  • Enable_JMSConsumerStuFZKNMessageService_PS.bat

On the server where the back-end system resides, the MS-DOS batch file “Enable_JMSConsumerStuFZKNMessageService_PS.bat” is called.

The content of the batch file is:

java.exe -classpath "OSBServiceState.jar;com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl.xyz.osbservice.osbservicestate.OSBServiceState "xyz" "7001" "weblogic" "xyz" "ProxyService" "JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS" "Enable"

In both MS-DOS batch files, java.exe calls a class named OSBServiceState. The main method of this class expects the following parameters:

  • HOSTNAME: host name of the AdminServer
  • PORT: port of the AdminServer
  • USERNAME: username
  • PASSWORD: password
  • SERVICETYPE: type of resource; possible values are ProxyService and BusinessService
  • SERVICEURI: identifier of the resource; the name begins with the project name, followed by folder names, and ends with the resource name
  • ACTION: the action to be carried out; possible values are Enable and Disable

Every change is carried out in its own session (via the SessionManagementMBean), which is automatically activated with description: OSBServiceState_script_<systemdatetime>

This can be seen via the Change Center | View Changes of the Oracle Service Bus Administration 11g Console:

The response from “Disable_JMSConsumerStuFZKNMessageService_PS.bat” is:

Disabling service JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS has been succesfully completed

In the Oracle Service Bus Administration 11g Console this change can be found as a Task:

The result of changing the state of the OSB Proxy service can be checked in the Oracle Service Bus Administration 11g Console.

The same applies when using “Enable_JMSConsumerStuFZKNMessageService_PS.bat”.

In the sample code below the use of the following MBeans can be seen:

DomainRuntimeServiceMBean: provides a common access point for navigating to all runtime and configuration MBeans in the domain as well as to MBeans that provide domain-wide services (such as controlling and monitoring the life cycles of servers and message-driven EJBs and coordinating the migration of migratable services). [https://docs.oracle.com/middleware/1213/wls/WLAPI/weblogic/management/mbeanservers/domainruntime/DomainRuntimeServiceMBean.html]

The wlfullclient.jar library is not provided by default in a WebLogic install and must be built. How to do this is described in
“Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.

SessionManagementMBean: provides an API to create, activate or discard sessions. [http://docs.oracle.com/cd/E13171_01/alsb/docs26/javadoc/com/bea/wli/sb/management/configuration/SessionManagementMBean.html]

ProxyServiceConfigurationMBean: provides an API to enable/disable services and enable/disable monitoring for a proxy service. [https://docs.oracle.com/cd/E13171_01/alsb/docs26/javadoc/com/bea/wli/sb/management/configuration/ProxyServiceConfigurationMBean.html]

BusinessServiceConfigurationMBean: provides an API for managing business services. [https://docs.oracle.com/cd/E13171_01/alsb/docs25/javadoc/com/bea/wli/sb/management/configuration/BusinessServiceConfigurationMBean.html]

Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the findService method.

Service findService(String name,
                    String type,
                    String location)

This method returns the Service on the specified Server or in the primary MBeanServer if the location is not specified.

In the code example below certain Java fields are used. For reading purposes the field values are shown here:

  • DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME: weblogic.management.mbeanservers.domainruntime
  • DomainRuntimeServiceMBean.OBJECT_NAME: com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
  • SessionManagementMBean.NAME: SessionManagement
  • SessionManagementMBean.TYPE: com.bea.wli.sb.management.configuration.SessionManagementMBean
  • ProxyServiceConfigurationMBean.NAME: ProxyServiceConfiguration
  • ProxyServiceConfigurationMBean.TYPE: com.bea.wli.sb.management.configuration.ProxyServiceConfigurationMBean
  • BusinessServiceConfigurationMBean.NAME: BusinessServiceConfiguration
  • BusinessServiceConfigurationMBean.TYPE: com.bea.wli.sb.management.configuration.BusinessServiceConfigurationMBean

Because of the use of com.bea.wli.config.Ref.class, the following library <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar was needed.

Because of the use of weblogic.management.jmx.MBeanServerInvocationHandler.class, the following library <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar was needed.

When running the code the following error was thrown:

java.lang.RuntimeException: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedSessionManagementMBean
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:621)
	at weblogic.management.jmx.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:418)
	at $Proxy0.findService(Unknown Source)
	at nl.xyz.osbservice.osbservicestate.OSBServiceState.<init>(OSBServiceState.java:66)
	at nl.xyz.osbservice.osbservicestate.OSBServiceState.main(OSBServiceState.java:217)
Caused by: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedSessionManagementMBean
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:619)
	... 4 more
Process exited.

So because of the use of com.bea.wli.sb.management.configuration.DelegatedSessionManagementMBean.class the following library <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar was also needed.

The java code:

package nl.xyz.osbservice.osbservicestate;


import com.bea.wli.config.Ref;
import com.bea.wli.sb.management.configuration.BusinessServiceConfigurationMBean;
import com.bea.wli.sb.management.configuration.ProxyServiceConfigurationMBean;
import com.bea.wli.sb.management.configuration.SessionManagementMBean;

import java.io.IOException;

import java.net.MalformedURLException;

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Properties;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;


public class OSBServiceState {
    private static MBeanServerConnection connection;
    private static JMXConnector connector;

    public OSBServiceState(HashMap props) {
        super();
        SessionManagementMBean sessionManagementMBean = null;
        String sessionName =
            "OSBServiceState_script_" + System.currentTimeMillis();
        String servicetype;
        String serviceURI;
        String action;
        String description = "";


        try {

            Properties properties = new Properties();
            properties.putAll(props);

            initConnection(properties.getProperty("HOSTNAME"),
                           properties.getProperty("PORT"),
                           properties.getProperty("USERNAME"),
                           properties.getProperty("PASSWORD"));

            servicetype = properties.getProperty("SERVICETYPE");
            serviceURI = properties.getProperty("SERVICEURI");
            action = properties.getProperty("ACTION");

            DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);

            // Create a session via SessionManagementMBean.
            sessionManagementMBean =
                    (SessionManagementMBean)domainRuntimeServiceMBean.findService(SessionManagementMBean.NAME,
                                                                                  SessionManagementMBean.TYPE,
                                                                                  null);
            sessionManagementMBean.createSession(sessionName);

            if (servicetype.equalsIgnoreCase("ProxyService")) {

                // A Ref uniquely represents a resource, project or folder that is managed by the Configuration Framework.
                // A Ref object has two components: A typeId that indicates whether it is a project, folder, or a resource, and an array of names of non-zero length.
                // For a resource the array of names start with the project name, followed by folder names, and end with the resource name.
                // For a project, the Ref object simply contains one name component, that is, the project name.
                // A Ref object for a folder contains the project name followed by the names of the folders which it is nested under.
                Ref ref = constructRef("ProxyService", serviceURI);

                ProxyServiceConfigurationMBean proxyServiceConfigurationMBean =
                    (ProxyServiceConfigurationMBean)domainRuntimeServiceMBean.findService(ProxyServiceConfigurationMBean.NAME +
                                                                                          "." +
                                                                                          sessionName,
                                                                                          ProxyServiceConfigurationMBean.TYPE,
                                                                                          null);
                if (action.equalsIgnoreCase("Enable")) {
                    proxyServiceConfigurationMBean.enableService(ref);
                    description = "Enabled the service: " + serviceURI;
                    System.out.print("Enabling service " + serviceURI);
                } else if (action.equalsIgnoreCase("Disable")) {
                    proxyServiceConfigurationMBean.disableService(ref);
                    description = "Disabled the service: " + serviceURI;
                    System.out.print("Disabling service " + serviceURI);
                } else {
                    System.out.println("Unsupported value for ACTION");
                }
            } else if (servicetype.equals("BusinessService")) {
                Ref ref = constructRef("BusinessService", serviceURI);

                BusinessServiceConfigurationMBean businessServiceConfigurationMBean =
                    (BusinessServiceConfigurationMBean)domainRuntimeServiceMBean.findService(BusinessServiceConfigurationMBean.NAME +
                                                                                             "." +
                                                                                             sessionName,
                                                                                             BusinessServiceConfigurationMBean.TYPE,
                                                                                             null);
                if (action.equalsIgnoreCase("Enable")) {
                    businessServiceConfigurationMBean.enableService(ref);
                    description = "Enabled the service: " + serviceURI;
                    System.out.print("Enabling service " + serviceURI);
                } else if (action.equalsIgnoreCase("Disable")) {
                    businessServiceConfigurationMBean.disableService(ref);
                    description = "Disabled the service: " + serviceURI;
                    System.out.print("Disabling service " + serviceURI);
                } else {
                    System.out.println("Unsupported value for ACTION");
                }
            }
            sessionManagementMBean.activateSession(sessionName, description);
            System.out.println(" has been successfully completed");
        } catch (Exception ex) {
            if (sessionManagementMBean != null) {
                try {
                   sessionManagementMBean.discardSession(sessionName);
                    System.out.println(" resulted in an error.");
                } catch (Exception e) {
                    System.out.println("Unable to discard session: " +
                                       sessionName);
                }
            }

            ex.printStackTrace();
        } finally {
            if (connector != null)
                try {
                    connector.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
        }
    }


    /*
       * Initialize connection to the Domain Runtime MBean Server.
       */

    public static void initConnection(String hostname, String portString,
                                      String username,
                                      String password) throws IOException,
                                                              MalformedURLException {

        String protocol = "t3";
        Integer portInteger = Integer.valueOf(portString);
        int port = portInteger.intValue();
        String jndiroot = "/jndi/";
        String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostname, port, jndiroot +
                              mbeanserver);

        Hashtable hashtable = new Hashtable();
        hashtable.put(Context.SECURITY_PRINCIPAL, username);
        hashtable.put(Context.SECURITY_CREDENTIALS, password);
        hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                      "weblogic.management.remote");
        hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));

        connector = JMXConnectorFactory.connect(serviceURL, hashtable);
        connection = connector.getMBeanServerConnection();
    }


    private static Ref constructRef(String refType, String serviceURI) {
        Ref ref = null;
        String[] uriData = serviceURI.split("/");
        ref = new Ref(refType, uriData);
        return ref;
    }


    /**
     * Finds the specified MBean object
     *
     * @param connection - A connection to the MBeanServer.
     * @return Object - The MBean or null if the MBean was not found.
     */
    public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
        try {
            ObjectName objectName =
                new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
            return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                            objectName);
        } catch (MalformedObjectNameException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void main(String[] args) {
        try {
            if (args.length < 7) {
                System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD, SERVICETYPE, SERVICEURI, ACTION.");

            } else {
                HashMap<String, String> map = new HashMap<String, String>();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                map.put("SERVICETYPE", args[4]);
                map.put("SERVICEURI", args[5]);
                map.put("ACTION", args[6]);
                OSBServiceState osbServiceState = new OSBServiceState(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The post Oracle Service Bus : disable / enable a proxy service via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.

Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX


At a public sector organization in the Netherlands there was the need to make an inventory of the deployed OSB services in order to find out the dependencies on certain external web services (which were on a list to become deprecated).

For this, in particular the endpoints of business services were of interest.

Besides that, the dependencies between services and also the Message Flow per proxy service were of interest, in particular the Operational Branch, Route, Java Callout and Service Callout actions.

Therefore, an OSBServiceExplorer tool was developed to explore the services (proxy and business) within the OSB via WebLogic Server MBeans with JMX. For now, this tool was merely used to quickly return the information needed, but in the future it can be the basis for a more comprehensive one.

This article will explain how the OSBServiceExplorer tool uses WebLogic Server MBeans with JMX.

If you are interested in general information about using MBeans with JMX, I kindly point you to another article (written by me) on the AMIS TECHNOLOGY BLOG: “Oracle Service Bus : disable / enable a proxy service via WebLogic Server MBeans with JMX”, via url: https://technology.amis.nl/2017/02/28/oracle-service-bus-disable-enable-a-proxy-service-via-weblogic-server-mbeans-with-jmx/

Remark: Some names in the examples in this article are in Dutch, but don’t let this scare you off.

MBeans

For ease of use, an MS-DOS batch file was created that uses MBeans to explore the services (proxy and business). The WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

On a server, the ms-dos batch file “OSBServiceExplorer.bat” is called.

The content of the ms-dos batch file “OSBServiceExplorer.bat” is:
java.exe -classpath "OSBServiceExplorer.jar;com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer "xyz" "7001" "weblogic" "xyz"

In the ms-dos batch file via java.exe a class named OSBServiceExplorer is being called. The main method of this class expects the following parameters:

Parameter name  Description
HOSTNAME        Host name of the AdminServer
PORT            Port of the AdminServer
USERNAME        Username
PASSWORD        Password

In the sample code shown at the end of this article, the use of the following MBeans can be seen:

DomainRuntimeServiceMBean

Provides a common access point for navigating to all runtime and configuration MBeans in the domain as well as to MBeans that provide domain-wide services (such as controlling and monitoring the life cycles of servers and message-driven EJBs and coordinating the migration of migratable services). [https://docs.oracle.com/middleware/1213/wls/WLAPI/weblogic/management/mbeanservers/domainruntime/DomainRuntimeServiceMBean.html]

This library is not provided by default in a WebLogic installation and must be built. How to do this is described in “Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.

ServerRuntimeMBean

Provides methods for retrieving runtime information about a server instance and for transitioning a server from one state to another. [https://docs.oracle.com/cd/E11035_01/wls100/javadocs_mhome/weblogic/management/runtime/ServerRuntimeMBean.html]

ALSBConfigurationMBean

Provides various APIs to query, export and import resources, obtain validation errors, get and set environment values, and in general manage resources in an ALSB domain. [https://docs.oracle.com/cd/E13171_01/alsb/docs26/javadoc/com/bea/wli/sb/management/configuration/ALSBConfigurationMBean.html]

Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the findService method.

Service findService(String name,
                    String type,
                    String location)

This method returns the Service on the specified Server or in the primary MBeanServer if the location is not specified.

In the sample code shown at the end of this article, certain java fields are used. For reading purposes the field values are shown in the following table:

Field Field value
DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME weblogic.management.mbeanservers.domainruntime
DomainRuntimeServiceMBean.OBJECT_NAME com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
ALSBConfigurationMBean.NAME ALSBConfiguration
ALSBConfigurationMBean.TYPE com.bea.wli.sb.management.configuration.ALSBConfigurationMBean
Ref.DOMAIN <Reference to the domain>

Because of the use of com.bea.wli.config.Ref.class , the following library <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar was needed.

A Ref uniquely represents a resource, project or folder that is managed by the Configuration Framework.

A special Ref DOMAIN refers to the whole domain.
[https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]

Because of the use of weblogic.management.jmx.MBeanServerInvocationHandler.class , the following library <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar was needed.

When running the code the following error was thrown:

java.lang.RuntimeException: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:621)
	at weblogic.management.jmx.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:418)
	at $Proxy0.findService(Unknown Source)
	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.<init>(OSBServiceExplorer.java:174)
	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.main(OSBServiceExplorer.java:445)
Caused by: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:619)
	... 4 more
Process exited.

So because of the use of com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class the following library <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar was also needed.

Runtime information (name and state) of the server instances

The OSBServiceExplorer tool writes its output to a text file called “OSBServiceExplorer.txt”.

First the runtime information (name and state) of the server instances (Administration Server and Managed Servers) of the WebLogic domain are written to file.

Example content fragment of the text file:

Found server runtimes:
- Server name: AdminServer. Server state: RUNNING
- Server name: ManagedServer1. Server state: RUNNING
- Server name: ManagedServer2. Server state: RUNNING

See the code fragment below:

fileWriter.write("Found server runtimes:\n");
int length = serverRuntimes.length;
for (int i = 0; i < length; i++) {
    ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];

    String name = serverRuntimeMBean.getName();
    String state = serverRuntimeMBean.getState();
    fileWriter.write("- Server name: " + name + ". Server state: " +
                     state + "\n");
}
fileWriter.write("" + "\n");

List of Ref objects (projects, folders, or resources)

Next, a list of Ref objects is written to file, including the total number of objects in the list.

Example content fragment of the text file:

Found total of 1132 refs, including the following proxy and business services: 
…
- ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
…
- ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
…
- BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS

See the code fragment below:

Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);

fileWriter.write("Found total of " + refs.size() + " refs, including the following proxy and business services:\n");

for (Ref ref : refs) {
    String typeId = ref.getTypeId();

    if (typeId.equalsIgnoreCase("ProxyService")) {

        fileWriter.write("- ProxyService: " + ref.getFullName() +
                         "\n");
    } else if (typeId.equalsIgnoreCase("BusinessService")) {
        fileWriter.write("- BusinessService: " + ref.getFullName() +
                         "\n");
    } else {
        //fileWriter.write(ref.getFullName());
    }
}

fileWriter.write("" + "\n");

As mentioned before, a Ref object uniquely represents a resource, project or folder. A Ref object has two components:

  • typeId that indicates whether it is a project, folder, or a resource
  • array of names of non-zero length.

For a resource, the array of names starts with the project name, followed by folder names, and ends with the resource name.
For a project, the Ref object simply contains one name component, that is, the project name.
A Ref object for a folder contains the project name followed by the names of the folders in which it is nested.

[https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]
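This naming convention can be illustrated without the WebLogic libraries. Below is a minimal JDK-only sketch (the class RefNameDemo and its helper methods are hypothetical, not part of the OSB API); it mirrors the way constructRef in the earlier article splits a full name on “/”:

```java
// JDK-only sketch (hypothetical helper, not part of the OSB API): the
// full name of a Ref splits into a non-empty array of name components,
// the same convention constructRef uses when it splits on "/".
public class RefNameDemo {

    // Split a full name into its components: project, folders..., resource.
    public static String[] toNames(String fullName) {
        return fullName.split("/");
    }

    // Join components back into a full name, as Ref.getFullName() returns it.
    public static String toFullName(String[] names) {
        return String.join("/", names);
    }

    public static void main(String[] args) {
        String[] names = toNames("ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS");
        System.out.println("project  = " + names[0]);
        System.out.println("resource = " + names[names.length - 1]);
        System.out.println("full     = " + toFullName(names));
    }
}
```

For the example name above, the first component is the project and the last component is the resource, with any folder names in between.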

Below is an example of a Ref object that represents a folder (via JDeveloper Debug):

Below is an example of a Ref object that represents a resource (via JDeveloper Debug):

ResourceConfigurationMBean

In order to be able to determine the actual endpoints of the proxy services and business services, the ResourceConfigurationMBean is used. When connected, the Service Bus MBeans are located under com.oracle.osb. [https://technology.amis.nl/2014/10/20/oracle-service-bus-obtaining-list-exposed-soap-http-endpoints/]

When we look at the java code, as a next step, the names of a set of MBeans specified by pattern matching are put in a list and looped through.

Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the queryNames method.

Set queryNames(ObjectName name,
               QueryExp query)
               throws IOException

Gets the names of MBeans controlled by the MBean server. This method enables any of the following to be obtained: The names of all MBeans, the names of a set of MBeans specified by pattern matching on the ObjectName and/or a Query expression, a specific MBean name (equivalent to testing whether an MBean is registered). When the object name is null or no domain and key properties are specified, all objects are selected (and filtered if a query is specified). It returns the set of ObjectNames for the MBeans selected.
[https://docs.oracle.com/javase/7/docs/api/javax/management/MBeanServerConnection.html]

See the code fragment below:

String domain = "com.oracle.osb";
String objectNamePattern =
    domain + ":" + "Type=ResourceConfigurationMBean,*";

Set<ObjectName> osbResourceConfigurations =
    connection.queryNames(new ObjectName(objectNamePattern), null);

fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
for (ObjectName osbResourceConfiguration :
     osbResourceConfigurations) {
…
    String canonicalName =
        osbResourceConfiguration.getCanonicalName();
    fileWriter.write("- Resource: " + canonicalName + "\n");
…
}

The pattern used is: com.oracle.osb:Type=ResourceConfigurationMBean,*
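Since ObjectName is part of the standard javax.management API, the pattern itself can be examined without a WebLogic connection. A small JDK-only sketch (the class name is made up for illustration):

```java
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// JDK-only sketch (class name made up): the com.oracle.osb pattern is an
// ObjectName property list pattern; the trailing ",*" makes it match every
// MBean whose key properties include Type=ResourceConfigurationMBean.
public class ObjectNamePatternDemo {

    public static ObjectName pattern() throws MalformedObjectNameException {
        return new ObjectName("com.oracle.osb:Type=ResourceConfigurationMBean,*");
    }

    public static void main(String[] args) throws Exception {
        ObjectName p = pattern();
        System.out.println("domain     = " + p.getDomain());
        System.out.println("Type       = " + p.getKeyProperty("Type"));
        System.out.println("is pattern = " + p.isPropertyListPattern());
    }
}
```

Passing such a pattern to queryNames is what selects all ResourceConfigurationMBean instances in the com.oracle.osb domain.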

Example content fragment of the text file:

ResourceConfiguration list of proxy and business services:
…
- Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
…

Below is an example of an ObjectName object (via JDeveloper Debug), found via the queryNames method:

Via the Oracle Enterprise Manager Fusion Middleware Control for a certain domain, the System MBean Browser can be opened. Here the previously mentioned ResourceConfigurationMBean’s can be found.


[Via MBean Browser]

The information on the right is as follows (if we navigate to a particular ResourceConfigurationMBean, for example …$UpdateZaak_Lk01_PS) :


[Via MBean Browser]

Here we can see that the attributes Configuration and Metadata are available:

  • Configuration

[Via MBean Browser]

The Configuration is made available in java by the following code fragment:

CompositeDataSupport configuration = (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,"Configuration");
  • Metadata

[Via MBean Browser]

The Metadata is made available in java by the following code fragment:

CompositeDataSupport metadata = (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,"Metadata");

Diving into attribute Configuration of the ResourceConfigurationMBean

For each found proxy and business service the configuration information (canonicalName, service-type, transport-type, url) is written to file.

See the code fragment below:

String canonicalName =
    osbResourceConfiguration.getCanonicalName();
…
String servicetype =
    (String)configuration.get("service-type");
CompositeDataSupport transportconfiguration =
    (CompositeDataSupport)configuration.get("transport-configuration");
String transporttype =
    (String)transportconfiguration.get("transport-type");
…
fileWriter.write("  Configuration of " + canonicalName +
                 ":" + " service-type=" + servicetype +
                 ", transport-type=" + transporttype +
                 ", url=" + url + "\n");

Proxy service configuration:

Below is an example of a proxy service configuration (content fragment of the text file):

  Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local

The proxy services, which define the exposed endpoints, can be recognized by the ProxyService$ prefix.
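The prefix convention can be sketched with plain string handling; ServiceNameDemo below is a hypothetical helper, not part of the OSB API:

```java
// JDK-only sketch (hypothetical helper): splitting the Name key property
// on '$' yields the service kind (ProxyService or BusinessService) and,
// after replacing the remaining '$' separators, the OSB path of the service.
public class ServiceNameDemo {

    public static String kindOf(String nameProperty) {
        int sep = nameProperty.indexOf('$');
        return sep < 0 ? nameProperty : nameProperty.substring(0, sep);
    }

    public static String pathOf(String nameProperty) {
        return nameProperty.substring(nameProperty.indexOf('$') + 1).replace('$', '/');
    }

    public static void main(String[] args) {
        String name = "ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS";
        System.out.println(kindOf(name)); // ProxyService
        System.out.println(pathOf(name)); // ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
    }
}
```

In this way the Name key property of a ResourceConfigurationMBean can be mapped back to the full names seen in the refs list earlier.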


[Via MBean Browser]

For getting the endpoint, see the code fragment below:

String url = (String)transportconfiguration.get("url");

Business service configuration:

Below is an example of a business service configuration (content fragment of the text file):

  Configuration of com.oracle.osb:Location=AdminServer,Name=BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=http://xyz/eBus/FolderService.svc

The business services, which define the exposed endpoints, can be recognized by the BusinessService$ prefix.


[Via MBean Browser]

For getting the endpoint, see the code fragment below:

CompositeData[] urlconfiguration =
    (CompositeData[])transportconfiguration.get("url-configuration");
String url = (String)urlconfiguration[0].get("url");

So, via the url key found in the business service configuration, the endpoint of a business service can be found (for example: http://xyz/eBus/FolderService.svc). So in that way the dependencies (proxy and/or business services) with certain external web services (having a certain endpoint), could be found.
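The same access pattern can be tried out with the standard javax.management.openmbean classes; the row type below is a simplified, made-up stand-in for the real url-configuration type, which defines more items:

```java
import java.util.Map;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.OpenDataException;
import javax.management.openmbean.OpenType;
import javax.management.openmbean.SimpleType;

// JDK-only sketch: a simplified, made-up row type standing in for one
// url-configuration entry (the real type defines more items), accessed
// with the same urlconfiguration[0].get("url") pattern as above.
public class UrlConfigDemo {

    public static CompositeData urlRow(String url) throws OpenDataException {
        CompositeType rowType = new CompositeType(
            "UrlConfigRow", "one url-configuration entry",
            new String[] { "url" }, new String[] { "endpoint URL" },
            new OpenType<?>[] { SimpleType.STRING });
        return new CompositeDataSupport(rowType, Map.of("url", url));
    }

    public static void main(String[] args) throws Exception {
        CompositeData[] urlconfiguration = { urlRow("http://xyz/eBus/FolderService.svc") };
        String url = (String) urlconfiguration[0].get("url");
        System.out.println(url);
    }
}
```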

Proxy service pipeline, element hierarchy

For a proxy service the elements (nodes) of the pipeline are investigated.

See the code fragment below:

CompositeDataSupport pipeline =
    (CompositeDataSupport)configuration.get("pipeline");
TabularDataSupport nodes =
    (TabularDataSupport)pipeline.get("nodes");


[Via MBean Browser]

Below is an example of a nodes object (via JDeveloper Debug):

If we take a look at the dataMap object, we can see nodes of different types.

Below is an example of a node of type Stage (via JDeveloper Debug):

Below is an example of a node of type Action and label ifThenElse (via JDeveloper Debug):

Below is an example of a node of type Action and label wsCallout (via JDeveloper Debug):

For the examples above the Message Flow part of the UpdateZaak_Lk01_PS proxy service looks like:

The mapping between the node-id and the corresponding element in the Message Flow can be achieved by looking in the .proxy file (in this case: UpdateZaak_Lk01_PS.proxy) for the _ActionId- identification, mentioned as the value for the name key.

<con:stage name="EditFolderZaakStage">
        <con:context>
          …
        </con:context>
        <con:actions>
          <con3:ifThenElse>
            <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7c84</con2:id>
            <con3:case>
              <con3:condition>
                …
              </con3:condition>
              <con3:actions>
                <con3:wsCallout>
                  <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f</con2:id>
                  …

The first node in the dataMap object (via JDeveloper Debug) looks like:

The dataMap object is of type HashMap. A HashMap maintains key and value pairs and is often denoted as HashMap<Key, Value> or HashMap<K, V>. HashMap implements the Map interface.

As can be seen, the key is of type Object and the value of type CompositeData.

In order to know what kind of information is delivered via the CompositeData object, the rowType object can be used.

See the code fragment below:

TabularType tabularType = nodes.getTabularType();
CompositeType rowType = tabularType.getRowType();

Below is an example of a rowType object (via JDeveloper Debug):

From this it is now clear that the CompositeData object for a ProxyServicePipelineElementType contains:

Index key value
0 children Children of this node
1 label Label
2 name Name of the node
3 node-id Id of this node unique within the graph
4 type Pipeline element type

In the code fragment below, an iterator is used to loop through the dataMap object.

Iterator keyIter = nodes.keySet().iterator();

for (int j = 0; keyIter.hasNext(); ++j) {

    Object[] key = ((Collection)keyIter.next()).toArray();

    CompositeData compositeData = nodes.get(key);

    …
}

The key object for the first node in the dataMap object (via JDeveloper Debug) looks like:

The value of this key object is 25, which also is shown as the value for the node-id of the compositeData object, which for the first node in the dataMap object (via JDeveloper Debug) looks like:
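The keySet iteration shown above can be reproduced with the standard open MBean classes; the row type below is a simplified, made-up version of ProxyServicePipelineElementType with only three of its items:

```java
import java.util.Collection;
import java.util.Iterator;
import java.util.Map;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.OpenDataException;
import javax.management.openmbean.OpenType;
import javax.management.openmbean.SimpleType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;

// JDK-only sketch: a simplified, made-up row type (only three of the
// ProxyServicePipelineElementType items) in a table indexed by node-id,
// iterated with the same keySet()/get(key) pattern as above.
public class NodesTableDemo {

    public static TabularDataSupport buildNodes() throws OpenDataException {
        CompositeType rowType = new CompositeType(
            "Node", "simplified pipeline node",
            new String[] { "node-id", "label", "type" },
            new String[] { "id", "label", "type" },
            new OpenType<?>[] { SimpleType.STRING, SimpleType.STRING, SimpleType.STRING });
        TabularType tabularType = new TabularType(
            "Nodes", "pipeline nodes", rowType, new String[] { "node-id" });
        TabularDataSupport nodes = new TabularDataSupport(tabularType);
        nodes.put(new CompositeDataSupport(rowType,
            Map.of("node-id", "25", "label", "stage", "type", "Stage")));
        return nodes;
    }

    public static void main(String[] args) throws Exception {
        TabularDataSupport nodes = buildNodes();
        Iterator<Object> keyIter = nodes.keySet().iterator();
        while (keyIter.hasNext()) {
            // Each key is a Collection of index values (here just node-id).
            Object[] key = ((Collection<?>) keyIter.next()).toArray();
            CompositeData compositeData = nodes.get(key);
            System.out.println("node-id = " + compositeData.get("node-id"));
        }
    }
}
```

This also shows why the key object for the first node carries the value 25: the table is indexed by node-id, so the key collection holds exactly that value.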

It’s obvious that the nodes in the pipeline form a hierarchy. A node can have children, which in turn can also have children, etc. Think for example of a “Stage” having an “If Then” action which in turn contains several “Assign” actions. A proxy service Message Flow can of course contain all kinds of elements (see the Design Palette).
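Walking such a parent/child structure is plain recursion; below is a JDK-only sketch using made-up node-ids (not the actual CompositeData API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// JDK-only sketch of walking a node hierarchy via the children item;
// the node-ids below are made up, loosely modeled on a branch node with
// two branch flows, each leading to a route node.
public class NodeHierarchyDemo {

    public static void print(Map<Integer, int[]> children, int nodeId,
                             int level, StringBuilder out) {
        out.append("  ".repeat(level)).append("node-id=").append(nodeId).append('\n');
        for (int child : children.getOrDefault(nodeId, new int[0])) {
            print(children, child, level + 1, out); // recurse into child nodes
        }
    }

    public static void main(String[] args) {
        Map<Integer, int[]> children = new LinkedHashMap<>();
        children.put(62, new int[] { 42, 46 }); // branch node with two flows
        children.put(42, new int[] { 41 });     // first flow -> route node
        children.put(46, new int[] { 45 });     // second flow -> route node
        StringBuilder out = new StringBuilder();
        print(children, 62, 1, out);
        System.out.print(out);
    }
}
```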

Below is (for another proxy service) an example content fragment of the text file, that reflects the hierarchy:

     Index#76:
       level    = 1
       label    = branch-node
       name     = CheckOperationOperationalBranch
       node-id  = 62
       type     = OperationalBranchNode
       children = [42,46,50,61]
         level    = 2
         node-id  = 42
         children = [41]
           level    = 3
           label    = route-node
           name     = creeerZaak_Lk01RouteNode
           node-id  = 41
           type     = RouteNode
           children = [40]
             level    = 4
             node-id  = 40
             children = [39]
               level    = 5
               label    = route
               name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b99
               node-id  = 39
               type     = Action
               children = []
         level    = 2
         node-id  = 46
         children = [45]
           level    = 3
           label    = route-node
           name     = updateZaak_Lk01RouteNode
           node-id  = 45
           type     = RouteNode
           children = [44]
             level    = 4
             node-id  = 44
             children = [43]
               level    = 5
               label    = route
               name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b77
               node-id  = 43
               type     = Action
               children = []
         …

Because of the interest in only certain kinds of nodes (Route, Java Callout, Service Callout, etc.) some kind of filtering is needed. For this, the label and type keys are used.

See the code fragment below:

String label = (String)compositeData.get("label");
String type = (String)compositeData.get("type");

if (type.equals("Action") &&
    (label.contains("wsCallout") ||
     label.contains("javaCallout") ||
     label.contains("route"))) {

    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
} else if (type.equals("OperationalBranchNode") ||
           type.equals("RouteNode"))
{
    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
}

Example content fragment of the text file:

    Index#72:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
       node-id  = 71
       type     = Action
       children = [66,70]
    Index#98:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
       node-id  = 54
       type     = Action
       children = [48,53]
    Index#106:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
       node-id  = 35
       type     = Action
       children = [30,34]

When we take a closer look at the node of type Action and label wsCallout with index 106, this can also be found in the MBean Browser:


[Via MBean Browser]

The children node-ids are 30 (a node of type Sequence and name requestTransform, also having children) and 34 (a node of type Sequence and name responseTransform, also having children).

Diving into attribute Metadata of the ResourceConfigurationMBean

For each found proxy service the metadata information (dependencies and dependents) is written to file.

See the code fragment below:

fileWriter.write("  Metadata of " + canonicalName + "\n");

String[] dependencies =
    (String[])metadata.get("dependencies");
fileWriter.write("    dependencies:\n");
int size;
size = dependencies.length;
for (int i = 0; i < size; i++) {
    String dependency = dependencies[i];
    if (!dependency.contains("Xquery")) {
        fileWriter.write("      - " + dependency + "\n");
    }
}
fileWriter.write("" + "\n");

String[] dependents = (String[])metadata.get("dependents");
fileWriter.write("    dependents:\n");
size = dependents.length;
for (int i = 0; i < size; i++) {
    String dependent = dependents[i];
    fileWriter.write("      - " + dependent + "\n");
}
fileWriter.write("" + "\n");

Example content fragment of the text file:

  Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
      - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS

    dependents:
      - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
      - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS

As can be seen in the MBean Browser, the metadata for a particular proxy service shows the dependencies on other resources (like business services and XML Schemas) and other services that are dependent on the proxy service.
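Filtering the dependencies array for business services (the entries whose endpoints matter when hunting for external web service dependencies) is straightforward; DependencyFilter below is a hypothetical helper:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// JDK-only sketch (hypothetical helper): keep only the business services
// from the metadata "dependencies" array - the entries whose endpoints
// matter when looking for external web service dependencies.
public class DependencyFilter {

    public static List<String> businessServices(String[] dependencies) {
        return Stream.of(dependencies)
                     .filter(d -> d.startsWith("BusinessService$"))
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        String[] dependencies = {
            "BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS",
            "XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie"
        };
        businessServices(dependencies).forEach(System.out::println);
    }
}
```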


[Via MBean Browser]

By looking at the results in the text file "OSBServiceExplorer.txt", the dependencies between services (proxy and business) and also the dependencies with certain external web services (with a particular endpoint) could be extracted.

Example content of the text file:

Found server runtimes:
- Server name: AdminServer. Server state: RUNNING
- Server name: ManagedServer1. Server state: RUNNING
- Server name: ManagedServer2. Server state: RUNNING

Found total of 1132 refs, including the following proxy and business services: 
…
- ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
…
- ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
…
- BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS
…

ResourceConfiguration list of proxy and business services:
…
- Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local

    Index#72:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
       node-id  = 71
       type     = Action
       children = [66,70]
    Index#98:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
       node-id  = 54
       type     = Action
       children = [48,53]
    Index#106:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
       node-id  = 35
       type     = Action
       children = [30,34]

  Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
      - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS

    dependents:
      - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
      - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS
…

The java code:

package nl.xyz.osbservice.osbserviceexplorer;


import com.bea.wli.config.Ref;
import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;

import java.io.FileWriter;
import java.io.IOException;

import java.net.MalformedURLException;

import java.util.Collection;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Properties;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;


public class OSBServiceExplorer {
    private static MBeanServerConnection connection;
    private static JMXConnector connector;
    private static FileWriter fileWriter;

    /**
     * Indent a string
     * @param indent - The number of indentations to add before a string 
     * @return String - The indented string
     */
    private static String getIndentString(int indent) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < indent; i++) {
            sb.append("  ");
        }
        return sb.toString();
    }


    /**
     * Print composite data (write to file)
     * @param nodes - The list of nodes
     * @param key - The list of keys
     * @param level - The level in the hierarchy of nodes
     */
    private void printCompositeData(TabularDataSupport nodes, Object[] key,
                                    int level) {
        try {
            CompositeData compositeData = nodes.get(key);

            fileWriter.write(getIndentString(level) + "     level    = " +
                             level + "\n");

            String label = (String)compositeData.get("label");
            String name = (String)compositeData.get("name");
            String nodeid = (String)compositeData.get("node-id");
            String type = (String)compositeData.get("type");
            String[] children = (String[])compositeData.get("children");

            if (level == 1 || label.contains("route-node") ||
                label.contains("route")) {
                fileWriter.write(getIndentString(level) + "     label    = " +
                                 label + "\n");

                fileWriter.write(getIndentString(level) + "     name     = " +
                                 name + "\n");

                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     type     = " +
                                 type + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = children.length;
                for (int i = 0; i < size; i++) {
                    fileWriter.write(children[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            } else if (level >= 2) {
                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = children.length;
                for (int i = 0; i < size; i++) {
                    fileWriter.write(children[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            }

            if ((level == 1 && type.equals("OperationalBranchNode")) ||
                level > 1) {
                level++;

                int size = children.length;
                for (int i = 0; i < size; i++) {
                    key[0] = children[i];
                    printCompositeData(nodes, key, level);
                }
            }

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public OSBServiceExplorer(HashMap props) {
        super();


        try {

            Properties properties = new Properties();
            properties.putAll(props);

            initConnection(properties.getProperty("HOSTNAME"),
                           properties.getProperty("PORT"),
                           properties.getProperty("USERNAME"),
                           properties.getProperty("PASSWORD"));


            DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);

            ServerRuntimeMBean[] serverRuntimes =
                domainRuntimeServiceMBean.getServerRuntimes();

            fileWriter = new FileWriter("OSBServiceExplorer.txt", false);


            fileWriter.write("Found server runtimes:\n");
            int length = serverRuntimes.length;
            for (int i = 0; i < length; i++) {
                ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];

                String name = serverRuntimeMBean.getName();
                String state = serverRuntimeMBean.getState();
                fileWriter.write("- Server name: " + name +
                                 ". Server state: " + state + "\n");
            }
            fileWriter.write("" + "\n");

            // Create an mbean instance to perform configuration operations in the created session.
            //
            // There is a separate instance of ALSBConfigurationMBean for each session.
            // There is also one more ALSBConfigurationMBean instance which works on the core data, i.e., the data which ALSB runtime uses.
            // An ALSBConfigurationMBean instance is created whenever a new session is created via the SessionManagementMBean.createSession(String) API.
            // This mbean instance is then used to perform configuration operations in that session.
            // The mbean instance is destroyed when the corresponding session is activated or discarded.
            ALSBConfigurationMBean alsbConfigurationMBean =
                (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                              ALSBConfigurationMBean.TYPE,
                                                                              null);

            Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);


            fileWriter.write("Found total of " + refs.size() +
                             " refs, including the following proxy and business services:\n");

            for (Ref ref : refs) {
                String typeId = ref.getTypeId();

                if (typeId.equalsIgnoreCase("ProxyService")) {

                    fileWriter.write("- ProxyService: " + ref.getFullName() +
                                     "\n");
                } else if (typeId.equalsIgnoreCase("BusinessService")) {
                    fileWriter.write("- BusinessService: " +
                                     ref.getFullName() + "\n");
                } else {
                    //fileWriter.write(ref.getFullName());
                }
            }

            fileWriter.write("" + "\n");

            String domain = "com.oracle.osb";
            String objectNamePattern =
                domain + ":" + "Type=ResourceConfigurationMBean,*";

            Set<ObjectName> osbResourceConfigurations =
                connection.queryNames(new ObjectName(objectNamePattern), null);

            fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
            for (ObjectName osbResourceConfiguration :
                 osbResourceConfigurations) {

                CompositeDataSupport configuration =
                    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                  "Configuration");

                CompositeDataSupport metadata =
                    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                  "Metadata");

                String canonicalName =
                    osbResourceConfiguration.getCanonicalName();
                fileWriter.write("- Resource: " + canonicalName + "\n");
                if (canonicalName.contains("ProxyService")) {
                    String servicetype =
                        (String)configuration.get("service-type");
                    CompositeDataSupport transportconfiguration =
                        (CompositeDataSupport)configuration.get("transport-configuration");
                    String transporttype =
                        (String)transportconfiguration.get("transport-type");
                    String url = (String)transportconfiguration.get("url");
                    
                    fileWriter.write("  Configuration of " + canonicalName +
                                     ":" + " service-type=" + servicetype +
                                     ", transport-type=" + transporttype +
                                     ", url=" + url + "\n");
                } else if (canonicalName.contains("BusinessService")) {
                    String servicetype =
                        (String)configuration.get("service-type");
                    CompositeDataSupport transportconfiguration =
                        (CompositeDataSupport)configuration.get("transport-configuration");
                    String transporttype =
                        (String)transportconfiguration.get("transport-type");
                    CompositeData[] urlconfiguration =
                        (CompositeData[])transportconfiguration.get("url-configuration");
                    String url = (String)urlconfiguration[0].get("url");

                    fileWriter.write("  Configuration of " + canonicalName +
                                     ":" + " service-type=" + servicetype +
                                     ", transport-type=" + transporttype +
                                     ", url=" + url + "\n");
                }

                if (canonicalName.contains("ProxyService")) {

                    fileWriter.write("" + "\n");

                    CompositeDataSupport pipeline =
                        (CompositeDataSupport)configuration.get("pipeline");
                    TabularDataSupport nodes =
                        (TabularDataSupport)pipeline.get("nodes");

                    TabularType tabularType = nodes.getTabularType();
                    CompositeType rowType = tabularType.getRowType();

                    Iterator keyIter = nodes.keySet().iterator();

                    for (int j = 0; keyIter.hasNext(); ++j) {

                        Object[] key = ((Collection)keyIter.next()).toArray();

                        CompositeData compositeData = nodes.get(key);

                        String label = (String)compositeData.get("label");
                        String type = (String)compositeData.get("type");
                        if (type.equals("Action") &&
                            (label.contains("wsCallout") ||
                             label.contains("javaCallout") ||
                             label.contains("route"))) {

                            fileWriter.write("    Index#" + j + ":\n");
                            printCompositeData(nodes, key, 1);
                        } else if (type.equals("OperationalBranchNode") ||
                                   type.equals("RouteNode")) {

                            fileWriter.write("    Index#" + j + ":\n");
                            printCompositeData(nodes, key, 1);
                        }
                    }

                    fileWriter.write("" + "\n");
                    fileWriter.write("  Metadata of " + canonicalName + "\n");

                    String[] dependencies =
                        (String[])metadata.get("dependencies");
                    fileWriter.write("    dependencies:\n");
                    int size;
                    size = dependencies.length;
                    for (int i = 0; i < size; i++) {
                        String dependency = dependencies[i];
                        if (!dependency.contains("Xquery")) {
                            fileWriter.write("      - " + dependency + "\n");
                        }
                    }
                    fileWriter.write("" + "\n");

                    String[] dependents = (String[])metadata.get("dependents");
                    fileWriter.write("    dependents:\n");
                    size = dependents.length;
                    for (int i = 0; i < size; i++) {
                        String dependent = dependents[i];
                        fileWriter.write("      - " + dependent + "\n");
                    }
                    fileWriter.write("" + "\n");

                }

            }
            fileWriter.close();

            System.out.println("Successfully completed");

        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (connector != null)
                try {
                    connector.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
        }
    }


    /*
       * Initialize connection to the Domain Runtime MBean Server.
       */

    public static void initConnection(String hostname, String portString,
                                      String username,
                                      String password) throws IOException,
                                                              MalformedURLException {

        String protocol = "t3";
        Integer portInteger = Integer.valueOf(portString);
        int port = portInteger.intValue();
        String jndiroot = "/jndi/";
        String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostname, port, jndiroot +
                              mbeanserver);

        Hashtable<String, Object> hashtable = new Hashtable<String, Object>();
        hashtable.put(Context.SECURITY_PRINCIPAL, username);
        hashtable.put(Context.SECURITY_CREDENTIALS, password);
        hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                      "weblogic.management.remote");
        hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));

        connector = JMXConnectorFactory.connect(serviceURL, hashtable);
        connection = connector.getMBeanServerConnection();
    }


    private static Ref constructRef(String refType, String serviceURI) {
        Ref ref = null;
        String[] uriData = serviceURI.split("/");
        ref = new Ref(refType, uriData);
        return ref;
    }


    /**
     * Finds the specified MBean object
     *
     * @param connection - A connection to the MBeanServer.
     * @return Object - The MBean or null if the MBean was not found.
     */
    public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
        try {
            ObjectName objectName =
                new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
            return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                            objectName);
        } catch (MalformedObjectNameException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void main(String[] args) {
        try {
            if (args.length <= 0) {
                System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD.");

            } else {
                HashMap<String, String> map = new HashMap<String, String>();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                OSBServiceExplorer osbServiceExplorer =
                    new OSBServiceExplorer(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The post Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.

Oracle SOA Suite and WebLogic: Overview of key and keystore configuration


Keystores and the keys within can be used for security on the transport layer and application layer in Oracle SOA Suite and WebLogic Server. Keystores hold private keys (identity) but also public certificates (trust). This is important when WebLogic / SOA Suite acts as the server but also when it acts as the client. In this blog post I’ll explain the purpose of keystores, the different keystore types available and which configuration is relevant for which keystore purpose.

Why use keys and keystores?

The below image (from here) illustrates the TCP/IP model and how the different layers map to the OSI model. When in the below elaboration, I’m talking about the application and transport layers, I mean the TCP/IP model layers and more specifically for HTTP.

The two main reasons why you might want to employ keystores are that

  • you want to enable security measures on the transport layer
  • you want to enable security measures on the application layer

Almost all of the below mentioned methods/techniques require the use of keys, so you can imagine the correct configuration of these keys within SOA Suite and WebLogic Server is very important. They determine which clients can be trusted, how services can be called and also how outgoing calls identify themselves.

You could think transport layer and application layer security are two completely separate things. In practice, though, they are not that separate. The combination of transport layer and application layer security has some limitations, and often the same products/components are used to configure both.

  • Double encryption is not allowed. See here: ‘U.S. government regulations prohibit double encryption’. Thus you are not allowed to do encryption on the transport layer and the application layer at the same time. Technically you can still do this, but you might encounter product restrictions, since Oracle is a U.S. company.
  • Oracle Web Services Manager (OWSM) allows you to configure policies that check whether transport layer security (HTTPS in this case) is used, and it is also used to configure application level security. It is quite common that a single product covers both transport layer and application layer security; API gateway products such as Oracle API Platform Cloud Service are another example.

Transport layer (TLS)

Cryptography is achieved by using keys from keystores. On the transport layer you can achieve authentication, integrity, confidentiality and reliability, on host level and for the connection as a whole.

You can read more on TLS in SOA Suite here.

Application layer

On the application layer you can achieve similar feats (authentication, integrity, confidentiality, reliability), however often more fine grained, such as on user level or on a specific part of a message, instead of on host level or for the entire connection. Performance is usually not as good as with transport layer security, because the checks which need to be performed can require actual parsing of messages, instead of securing the transport (HTTP) connection as a whole regardless of what passes through. The implementation depends on the application technologies used and is thus quite variable.

  • Authentication by using security tokens such as for example
    • SAML. SAML tokens can be used in WS-Security headers for SOAP and in plain HTTP headers for REST.
    • JSON Web Tokens (JWT) and OAuth are also examples of security tokens
    • Certificate tokens in different flavors can be used which directly use a key in the request to authenticate.
    • Digest authentication can also be considered. Using digest authentication, a username-password token is created which is sent using WS-Security headers.
  • Security and reliability by using message protection. Message protection consists of measures to achieve message confidentiality and integrity. This can be achieved by
    • signing. XML Signature can be used for SOAP messages and is part of the WS Security standard. Signing can be used to achieve message integrity.
    • encrypting. Encrypting can be used to achieve confidentiality.

Types of keystores

There are two types of keystores in use in WebLogic Server / OPSS: JKS keystores and KSS keystores. The main differences are summarized in the sections below.

JKS

JKS keystores are Java keystores which are saved on the filesystem. They can be edited using the keytool command which is part of the JDK. There is no direct support for editing JKS keystores from WLST, the WebLogic Console or Fusion Middleware Control; you can use WLST, however, to configure which JKS file to use. For example:

connect('weblogic','Welcome01','t3://localhost:7001') 
edit()
startEdit()
cd('Servers/myserver')

cmo.setKeyStores('CustomIdentityAndCustomTrust')
cmo.setCustomIdentityKeyStoreFileName('/path/keystores/Identity.jks') 
cmo.setCustomIdentityKeyStorePassPhrase('passphrase') 
cmo.setCustomIdentityKeyStoreType('JKS')
cmo.setCustomTrustKeyStoreFileName('/path/keystores/Trust.jks') 
cmo.setCustomTrustKeyStorePassPhrase('passphrase') 
cmo.setCustomTrustKeyStoreType('JKS')

save()
activate()
disconnect()

Keys in JKS keystores can have passwords as can keystores themselves. If you use JKS keystores in OWSM policies, you are required to configure the key passwords in the credential store framework (CSF). These can be put in the map: oracle.wsm.security and can be called: keystore-csf-key, enc-csf-key, sign-csf-key. Read more here. In a clustered environment you should make sure all the nodes can access the configured keystores/keys by for example putting them on a shared storage.
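As a minimal illustration of the JKS mechanics described above (a file on the filesystem, protected by a store password), the sketch below uses the standard java.security.KeyStore API to create, save and reload a JKS keystore. The file name and password are example values, and this is separate from the OWSM/CSF configuration itself.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.security.KeyStore;

public class JksDemo {
    // Create an empty JKS keystore on disk, then load it again.
    // The store password protects the integrity of the keystore file;
    // individual keys can additionally have their own passwords.
    public static KeyStore createAndReload(File file, char[] storePass) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, storePass); // initialize an empty keystore
        try (OutputStream os = new FileOutputStream(file)) {
            ks.store(os, storePass);
        }
        KeyStore reloaded = KeyStore.getInstance("JKS");
        try (InputStream is = new FileInputStream(file)) {
            reloaded.load(is, storePass); // wrong password would fail here
        }
        return reloaded;
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("Identity", ".jks");
        KeyStore ks = createAndReload(f, "storepass".toCharArray());
        System.out.println("type=" + ks.getType() + " entries=" + ks.size());
        f.delete();
    }
}
```

In a clustered environment, a file created this way would have to be reachable by all nodes, which is why shared storage is suggested above.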

KSS

OPSS also offers KeyStoreService (KSS) keystores. These are saved in a database, in an OPSS schema which is created by running the RCU (Repository Creation Utility) during installation of the domain. KSS keystores are the default keystores since WebLogic Server 12.1.2 (and thus for SOA Suite since 12.1.3). Access to keys in a KSS keystore can be protected either by policies or by passwords. OWSM does not support password-protected KSS keystores (see here: ‘Password protected KSS keystores are not supported in this release’), thus for OWSM the KSS keystore should be configured to use policy based access.

KSS keys cannot be configured to have a password, so using keys from a KSS keystore in OWSM policies does not require you to configure credential store framework (CSF) passwords to access them. KSS keystores can be edited from Fusion Middleware Control, by using WLST scripts or even by using a REST API (here). You can, for example, import a JKS file quite easily into a KSS store with WLST using something like:

connect('weblogic','Welcome01','t3://localhost:7001')
svc = getOpssService(name='KeyStoreService')
svc.importKeyStore(appStripe='mystripe', name='keystore2', password='password',aliases='myOrakey', keypasswords='keypassword1', type='JKS', permission=true, filepath='/tmp/file.jks')

Where and how are keystores / keys configured

As mentioned above, keys within keystores are used to achieve transport security and application security for various purposes. Below, this is translated to Oracle SOA Suite and WebLogic Server.

Transport layer

Incoming

  • Keys are used to achieve TLS connections between different components of the SOA Suite, such as Admin Servers, Managed Servers and Node Managers. The keystore configuration for these can be done from the WebLogic Console for the servers, and manually for the NodeManager. You can configure identity and trust this way, and whether the client needs to present a certificate of its own so the server can verify its identity. See for example here on how to configure this.
  • Keys are used to allow clients to connect to servers via a secure connection (in general, so not specific for communication between WebLogic Server components). This configuration can be done in the same place as above, with the only difference that no manual editing of files on the filesystem is required (since no NodeManager is relevant here).

Outgoing

Composites (BPEL, BPM)

Keys are used to achieve TLS connections from the SOA Suite to different systems. The SOA Suite acts as the client here. The identity keystore can be configured from Fusion Middleware Control by setting the KeystoreLocation MBean. See the below image. Credential store entries need to be added to store the identity keystore password and key password. Storing the key password is not required if it is the same as the keystore password. The credential keys to create for this are SOA/KeystorePassword and SOA/KeyPassword (with the user being the same as the key alias from the keystore to use). In addition, components also need to be configured to use a key to establish identity: in the composite.xml the property oracle.soa.two.way.ssl.enabled can be used to enable outgoing two-way SSL from a composite.

Setting SOA client identity store for 2-way SSL

 

Specifying the SOA client identity keystore and key password in the credential store

You can only specify one keystore/key for all two-way-SSL outgoing composite connections. This is not a setting per process. See here.

Service Bus

The Service Bus configuration for outgoing SSL connections is quite different from the composite configuration. The following blog nicely describes where to configure the keystores and keys. In the WebLogic Server console, you create a PKICredentialMapper which refers to the keystore and also contains the keystore password configuration. From the Service Bus project, a ServiceKeyProvider can be configured which uses the PKICredentialMapper and contains the configuration for the key and key password to use. The ServiceKeyProvider configuration needs to be done from the Service Bus console, since JDeveloper cannot resolve the credential mapper.

To summarize the above:

Overwriting keystore configuration with JVM parameters

You can override the keystores used with JVM system parameters such as javax.net.ssl.trustStore, javax.net.ssl.trustStoreType, javax.net.ssl.trustStorePassword, javax.net.ssl.keyStore, javax.net.ssl.keyStoreType and javax.net.ssl.keyStorePassword in, for example, the setDomainEnv script. These override the WebLogic Server configuration but not the OWSM configuration (application layer security, described below). Thus, if you specify an alternative truststore via the command line, this will not influence the application layer security of calls going from SOA Suite to other systems, even when message protection (using WS-Security), which uses keys and checks trust, has been enabled. It will influence HTTPS connections though. For more detail on the above see here.
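For reference, these parameters are normally passed as -D flags on the JVM command line (e.g. via setDomainEnv). The sketch below simply sets the equivalent system properties programmatically, purely to show which property names are involved; the path and password are placeholders.

```java
public class TrustStoreOverride {
    // Equivalent of passing, e.g. in setDomainEnv:
    //   -Djavax.net.ssl.trustStore=/path/keystores/Trust.jks
    //   -Djavax.net.ssl.trustStoreType=JKS
    //   -Djavax.net.ssl.trustStorePassword=passphrase
    // These affect the default JSSE context (HTTPS/transport layer),
    // not the OWSM (application layer) configuration.
    public static void override(String path, String type, String password) {
        System.setProperty("javax.net.ssl.trustStore", path);
        System.setProperty("javax.net.ssl.trustStoreType", type);
        System.setProperty("javax.net.ssl.trustStorePassword", password);
    }

    public static void main(String[] args) {
        override("/path/keystores/Trust.jks", "JKS", "passphrase");
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```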

Application layer

Keys can be used by OWSM policies to for example achieve message protection on the application layer. This configuration can be done from Fusion Middleware Control.

The OWSM run time does not use the WebLogic Server keystore that is configured using the WebLogic Server Administration Console and used for SSL. The keystore which OWSM uses by default is kss://owsm/keystore since 12.1.2 and can be configured from the OWSM Domain configuration. If you do not use the default keystore name for the KSS keystore, you must grant permission to the wsm-agent-core.jar in OPSS.

OWSM keystore contents and management from FMW Control

OWSM keystore domain config

In order for OWSM to use JKS keystores/keys, credential store framework (CSF) entries need to be created which contain the keystore and key passwords. The OWSM policy configuration determines the key alias to use. For KSS keystores/keys no CSF passwords to access keystores/keys are required since OWSM does not support KSS keystores with password and KSS does not provide a feature to put a password on keys. In this case the OWSM policy parameters such as keystore.sig.csf.key refer to a key alias directly instead of a CSF entry which has the key alias defined as the username.

Identity for outgoing connections (application policy level, e.g. signing and encryption keys) is established by using OWSM policy configuration. Trust for SAML/JWT (secure token service and client) can be configured from the OWSM Domain configuration.

Finally

This is only the tip of the iceberg

There is a lot to tell in the area of security. Zooming in on transport and application layer security, there is also a wide range of options and do’s and don’ts. I have not talked about the different choices you can make when configuring application or transport layer security. The focus of this blog post has been to provide an overview of keystore configuration/usage and thus I have not provided much detail. If you want to learn more on how to achieve good security on your transport layer, read here. To configure 2-way SSL using TLS 1.2 on WebLogic / SOA Suite, read here. Application level security is a different story altogether and can be split up in a wide range of possible implementation choices.

Different layers in the TCP/IP model

If you want to achieve solid security, you should look at all layers of the TCP/IP model and not just at the transport and application layer. It also helps if you use different security zones and divide your network so that your development environment cannot accidentally access your production environment, or the other way around.

Final thoughts on keystore/key configuration in WebLogic/SOA Suite

When diving into the subject, I realized that using and configuring keys and keystores can be quite complex. The reason is that almost every purpose of a key/keystore requires configuration in a different location. It would be manageable if that was all; however, sometimes configuration overlaps, such as the truststore used by WebLogic Server which is also used by SOA Suite. This feels inconsistent, since for outgoing calls composites and Service Bus use entirely different configuration. It would be nice if it could be made a bit more consistent and, as a result, simpler.

The post Oracle SOA Suite and WebLogic: Overview of key and keystore configuration appeared first on AMIS Oracle and Java Blog.

Securing Oracle Service Bus REST services with OAuth2 client credentials flow (without using additional products)


OAuth2 is a popular authorization framework. As a service provider it is thus common to provide support for OAuth2. How can you do this on a plain WebLogic Server / Service Bus without having to install additional products (and possibly pay for licenses)? If you just want to implement and test the code (what), see this installation manual. If you want to know more details about the implementation (how) and the choices made (why), read on!

Introduction

OAuth2 client credentials flow

OAuth2 supports different flows. One of the easiest to use is the client credentials flow. It is recommended to use this flow when the party requiring access can securely store credentials. This is usually the case when there is server to server communication (or SaaS to SaaS).

The OAuth2 client credentials flow consists of an interaction pattern between 3 actors, which each have their own role in the flow.

  • The client. This can be anything which supports the OAuth2 standard. For testing I’ve used Postman
  • The OAuth2 authorization server. In this example I’ve created a custom JAX-RS service which generates and returns JWT tokens based on the authenticated user.
  • A protected service. In this example I’ll use an Oracle Service Bus REST service. The protection consists of validating the token (authentication using standard OWSM policies) and providing role based access (authorization).

When using OAuth2, the authorization server returns a JSON message containing (among other things) a JWT (JSON Web Token).

In our case the client authenticates to a JAX-RS servlet using basic authentication. This uses the HTTP header Authorization, which contains ‘Basic’ followed by the Base64 encoded username:password. Of course, Base64 encoded strings can be decoded easily (e.g. by using sites like these), so never use this over plain HTTP!
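The encoding is trivial in both directions, which is exactly why plain HTTP is unsafe here. A small sketch using the JDK Base64 API (username and password are example values):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuth {
    // Build the value of the "Authorization" HTTP header for basic authentication.
    public static String header(String username, String password) {
        String token = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(token.getBytes(StandardCharsets.UTF_8));
    }

    // Decode it again: anyone who sees the header recovers the credentials.
    public static String decode(String headerValue) {
        String b64 = headerValue.substring("Basic ".length());
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String h = header("user", "pass");
        System.out.println(h);          // Basic dXNlcjpwYXNz
        System.out.println(decode(h));  // user:pass
    }
}
```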

When this token has been obtained, it can be used in the Authorization HTTP header with the Bearer keyword. A service which needs to be protected can be configured with the standard OWSM policies oracle/http_jwt_token_service_policy and oracle/http_jwt_token_over_ssl_service_policy for authentication, and with a custom policy for role based access (authorization).
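The client side of this interaction can be sketched end to end with only JDK classes. Below, a stub "authorization server" (com.sun.net.httpserver) stands in for the JAX-RS servlet and returns a hard-coded dummy token; the /oauth2/token path, client id/secret and token value are all made up for the example and are not part of the actual implementation described in this post.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ClientCredentialsFlowDemo {
    // Stub authorization server: basic authentication in, a (fake) token out.
    public static HttpServer startStub() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/oauth2/token", (HttpExchange ex) -> {
            String auth = ex.getRequestHeaders().getFirst("Authorization");
            String expected = "Basic " + Base64.getEncoder()
                    .encodeToString("client:secret".getBytes(StandardCharsets.UTF_8));
            boolean ok = expected.equals(auth);
            String body = ok
                    ? "{\"access_token\":\"dummy-jwt\",\"token_type\":\"Bearer\"}"
                    : "{\"error\":\"invalid_client\"}";
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(ok ? 200 : 401, bytes.length);
            ex.getResponseBody().write(bytes);
            ex.close();
        });
        server.start();
        return server;
    }

    // Client side: POST with basic authentication, read the JSON response.
    public static String requestToken(String tokenUrl, String clientId,
                                      String clientSecret) throws IOException {
        HttpURLConnection con = (HttpURLConnection) new URL(tokenUrl).openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Authorization", "Basic " + Base64.getEncoder()
                .encodeToString((clientId + ":" + clientSecret)
                        .getBytes(StandardCharsets.UTF_8)));
        con.setDoOutput(true);
        // A real OAuth2 token request also carries this form parameter.
        con.getOutputStream().write(
                "grant_type=client_credentials".getBytes(StandardCharsets.UTF_8));
        try (InputStream in = con.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = startStub();
        try {
            String url = "http://localhost:" + server.getAddress().getPort()
                    + "/oauth2/token";
            // The access_token from this JSON would then be sent to the
            // protected service as: Authorization: Bearer <token>
            System.out.println(requestToken(url, "client", "secret"));
        } finally {
            server.stop(0);
        }
    }
}
```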

JWT

JSON Web Tokens (JWT) can look something like:

View the code on Gist.

This is not very helpful at first sight. When we look a little bit closer, we notice it consists of 3 parts separated by a ‘.’ character. These are the header, body and signature of the token. The first 2 parts can be Base64 decoded.
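Splitting and decoding a compact JWT can be sketched with just the JDK (the token below is constructed in place for illustration; the signature part is a dummy):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtDecodeDemo {
    // Split a compact JWT on '.' and Base64URL-decode the header and body;
    // the third part is the binary signature and is not decoded to text.
    public static String[] decodeHeaderAndBody(String jwt) {
        String[] parts = jwt.split("\\.");
        return new String[] {
            new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8),
            new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8)
        };
    }

    public static void main(String[] args) {
        // Build an illustrative token with a dummy signature part
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String jwt = enc.encodeToString("{\"alg\":\"RS256\"}".getBytes(StandardCharsets.UTF_8))
            + "." + enc.encodeToString("{\"iss\":\"www.oracle.com\"}".getBytes(StandardCharsets.UTF_8))
            + ".c2lnbmF0dXJl";
        for (String part : decodeHeaderAndBody(jwt)) {
            System.out.println(part);
        }
    }
}
```

Note that JWT uses the URL-safe Base64 alphabet without padding, hence `getUrlDecoder` rather than the regular decoder.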

Header

The header typically consists of 2 parts: the type of token and the hashing algorithm (see here for an overview of fields and their meaning). In this case the header is

View the code on Gist.

kid refers to the key id. In this case it provides a hint to the resource server on which key alias to use in its key store to validate the signature.

Body

The JWT body contains so-called claims. In this case the body is

View the code on Gist.

The subject is the subject for which the token was issued. www.oracle.com is the issuer of the token. iat indicates an epoch at which the token was issued and exp indicates until when the token is valid. Tokens are valid only for a limited duration. www.oracle.com is an issuer which is accepted by default so no additional configuration was required.

Signature

The signature contains a hash of the header and body of the token, encrypted with the private key. If the header or body is altered, the signature validation will fail. Tokens are thus signed using a public/private key pair.

Challenges

Implementing the OAuth2 client credentials flow using only a WebLogic server and OWSM can be challenging. Why?

  • Authentication server. Bare WebLogic + Service Bus do not contain an authentication server which can provide JWT tokens.
  • Resource Server. Authentication of tokens. The predefined OWSM policies which provide authentication based on JWT tokens (oracle/http_jwt_token_service_policy and oracle/http_jwt_token_over_ssl_service_policy) are picky about which tokens they accept.
  • Resource Server. Authorization of tokens. OWSM provides a predefined policy to do role based access to resources: oracle/binding_permission_authorization_policy. This policy works for SOAP and REST composites and Service Bus SOAP services, but not for Service Bus REST services.

Custom components

How did I solve these challenges? I created two custom components:

  • Create a simple authentication server to provide tokens which conform to what the predefined OWSM policies expect. By increasing the OWSM logging and checking for errors when sending in tokens, it becomes clear which fields are expected.
  • Create a custom OWSM policy to provide role based access to Service Bus REST resources

Authentication server

The authentication server has several tasks:

  • authenticate the user (client credentials)
    • using the WebLogic security realm
  • validate the client credentials request
    • using Apache HTTP components
  • obtain a public and private key for signing
    • from the OPSS KeyStoreService (KSS)
  • generate a token and sign it

Authentication

User authentication for servlets on WebLogic Server is configured in 2 files.

A web.xml. This file indicates

  • which resources are protected
  • how they are protected (authentication method, TLS or not)
  • who can access the resources (security role)

The weblogic.xml indicates how the security roles map to WebLogic Server roles. In this case any user in the WebLogic security realm group tokenusers (which can be in an external authentication provider such as for example an AD or other LDAP) can access the token service to obtain tokens.
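A sketch of the relevant fragments of both files (the url-pattern and realm name are made up for illustration; the role and group name tokenusers match the setup described above):

```xml
<!-- web.xml: protect the token resource with BASIC authentication -->
<security-constraint>
  <web-resource-collection>
    <web-resource-name>TokenService</web-resource-name>
    <url-pattern>/token</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>tokenusers</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>myrealm</realm-name>
</login-config>
<security-role>
  <role-name>tokenusers</role-name>
</security-role>

<!-- weblogic.xml: map the security role to the WebLogic group tokenusers -->
<security-role-assignment>
  <role-name>tokenusers</role-name>
  <principal-name>tokenusers</principal-name>
</security-role-assignment>
```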

Validate the credentials request

From Postman you can do a request to the token service to obtain a token. Postman can also manage this for you, provided the response of the token service conforms to the OAuth2 standard.

By default certificates are checked. With self-signed certificates / development environments, those checks (such as host name verification) might fail. You can disable the certificate checks in the Postman settings screen.

Also Postman has a console available which allows you to inspect requests and responses in more detail. The request looked like this:

Thus this is what needed to be validated: an HTTP POST request with an application/x-www-form-urlencoded body containing grant_type=client_credentials. I’ve used the Apache HTTP components org.apache.http.client.utils.URLEncodedUtils class for this.
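The post uses Apache's URLEncodedUtils; as a rough stdlib-only equivalent, the form body parsing and grant_type check could look like this (method names are my own, not from the original code):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;
import java.util.LinkedHashMap;
import java.util.Map;

public class FormValidator {
    // Parse an application/x-www-form-urlencoded body into key/value pairs
    public static Map<String, String> parseForm(String body) throws UnsupportedEncodingException {
        Map<String, String> params = new LinkedHashMap<>();
        for (String pair : body.split("&")) {
            String[] kv = pair.split("=", 2);
            String key = URLDecoder.decode(kv[0], "UTF-8");
            String value = kv.length > 1 ? URLDecoder.decode(kv[1], "UTF-8") : "";
            params.put(key, value);
        }
        return params;
    }

    // The token service only accepts the client credentials grant
    public static boolean isClientCredentialsRequest(String body) throws UnsupportedEncodingException {
        return "client_credentials".equals(parseForm(body).get("grant_type"));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isClientCredentialsRequest("grant_type=client_credentials")); // true
        System.out.println(isClientCredentialsRequest("grant_type=password"));           // false
    }
}
```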

After deployment I of course needed to test the token service. Postman worked great for this but I could also have used Curl commands like:

View the code on Gist.

Accessing the OPSS keystore

Oracle WebLogic Server provides Oracle Platform Security Services.

OPSS provides secure storage of credentials and keys. A policy store can be configured to allow secure access to these resources. This policy store can be file based, LDAP based and database based. You can look at your jps-config.xml file to see which one is in use in your case:

You can also look this up from the EM:

In this case the file based policy store system-jazn-data.xml is used. Presence of the file on the filesystem does not mean it is actually used! If there are multiple policy stores defined, for example a file based and an LDAP based one, the last one appears to be used.

The policy store can be edited from the EM

You can create a new permission:


Codebase: file:${domain.home}/servers/${weblogic.Name}/tmp/_WL_user/oauth2/-
Permission class: oracle.security.jps.service.keystore.KeyStoreAccessPermission
Resource name: stripeName=owsm,keystoreName=keystore,alias=*
Actions: read

The codebase indicates the location of the deployment of the authentication server (Java WAR) on WebLogic Server.

Or when file-based, you can edit the (usually system-jazn-data.xml) file directly

In this case add:


<grant>
  <grantee>
    <codesource>
      <url>file:${domain.home}/servers/${weblogic.Name}/tmp/_WL_user/oauth2/-</url>
    </codesource>
  </grantee>
  <permissions>
    <permission>
      <class>oracle.security.jps.service.keystore.KeyStoreAccessPermission</class>
      <name>stripeName=owsm,keystoreName=keystore,alias=*</name>
      <actions>*</actions>
    </permission>
  </permissions>
</grant>

At the location shown below

Now if you create a stripe owsm with a policy based keystore called keystore, the authentication server is allowed to access it!

 

The name of the stripe and the name of the keystore are the defaults used by the predefined OWSM policies. Thus when using these, no additional configuration (WSM domain config, policy config) is required. OWSM only supports policy based KSS keystores. When using JKS keystores instead, you need to define credentials in the credential store framework and update the policy configuration to point to the credential store entries for the keystore password, key alias and key password. The code provided for accessing the keystore / keypair is KSS based. Inside the keystore you can import or generate a keypair. The current Java code of the authentication server expects a keypair with alias oauth2keypair to be present in the keystore.

Accessing the keystore and key from Java

I defined a property file with some parameters. The file contained (among some other things relevant for token generation):


keystorestripe=owsm
keystorename=keystore
keyalias=oauth2keypair

Accessing the keystore can be done as is shown below.

View the code on Gist.

When you have the keystore, accessing keys is easy

View the code on Gist.

(my key didn’t have a password but this still worked)
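The gists go through the OPSS KeyStoreService, but the retrieval itself uses the standard JCA KeyStore API. As a self-contained illustration of that API (using a throwaway in-memory JCEKS store and an AES key instead of the KSS store and RSA keypair from the post):

```java
import java.security.Key;
import java.security.KeyStore;
import javax.crypto.KeyGenerator;

public class KeyStoreDemo {
    // Store a key under an alias and read it back via the JCA KeyStore API.
    // The post obtains its keypair from the OPSS KSS store (stripe 'owsm',
    // keystore 'keystore'); a throwaway JCEKS store is used here instead.
    public static Key storeAndLoad(String alias, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, null); // create an empty in-memory keystore
        ks.setKeyEntry(alias, KeyGenerator.getInstance("AES").generateKey(), password, null);
        return ks.getKey(alias, password);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(storeAndLoad("oauth2keypair", "changeit".toCharArray()).getAlgorithm());
    }
}
```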

Generating the JWT token

After obtaining the keypair at the keyalias, the JWT token libraries required instances of RSAPrivateKey and RSAPublicKey. That could be done as is shown below

View the code on Gist.
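With the JDK this step is simply a cast, since RSA keys implement the java.security.interfaces types. A sketch (the keypair is generated locally here instead of being fetched from the keystore):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPrivateKey;
import java.security.interfaces.RSAPublicKey;

public class RsaCastDemo {
    // JDK-generated (or keystore-retrieved) RSA keys implement the
    // java.security.interfaces RSA interfaces, so a cast is sufficient.
    public static int modulusBits() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();
        RSAPublicKey publicKey = (RSAPublicKey) kp.getPublic();
        RSAPrivateKey privateKey = (RSAPrivateKey) kp.getPrivate();
        return publicKey.getModulus().bitLength();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(modulusBits());
    }
}
```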

In order to sign the token, an RSAKey instance was required. I could create this from the public and private key using an RSAKey.Builder.

View the code on Gist.

Using the RSAKey, I could create a Signer

View the code on Gist.

Preparations were done! Now only the header and body of the token remained. These were quite easy to create with the provided builder.

Claims:

View the code on Gist.

Generate and sign the token:

View the code on Gist.
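The gists rely on a JWT library's builders. As a self-contained sketch of what generating and signing an RS256 token amounts to using only the JDK (the keypair is generated locally and the claim values are illustrative, not the original code):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.Base64;

public class JwtSignDemo {
    static String b64url(byte[] bytes) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Build and sign a compact JWT, then verify it with the public key
    public static boolean signAndVerify() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        long now = System.currentTimeMillis() / 1000;
        String header = "{\"kid\":\"oauth2keypair\",\"alg\":\"RS256\"}";
        String claims = "{\"sub\":\"weblogic\",\"iss\":\"www.oracle.com\",\"iat\":" + now
                + ",\"exp\":" + (now + 3600) + "}";

        // RS256 is SHA256withRSA over "base64url(header).base64url(claims)"
        String signingInput = b64url(header.getBytes(StandardCharsets.UTF_8)) + "."
                + b64url(claims.getBytes(StandardCharsets.UTF_8));
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(signingInput.getBytes(StandardCharsets.UTF_8));
        String jwt = signingInput + "." + b64url(signer.sign());

        // Validate: recompute the signature check with the public key
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(signingInput.getBytes(StandardCharsets.UTF_8));
        return verifier.verify(Base64.getUrlDecoder().decode(jwt.split("\\.")[2]));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(signAndVerify());
    }
}
```

The verification half is essentially what the OWSM JWT policies do on the resource server side, using the public key found via the kid hint.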

Returning an OAuth2 JSON message could be done with

View the code on Gist.
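Per the OAuth2 specification (RFC 6749), the token endpoint response has roughly this shape (the access_token value is abbreviated and illustrative):

```json
{
  "access_token": "eyJraWQiOiJvYXV0aDJrZXlwYWlyIiwiYWxnIjoiUlMyNTYifQ...",
  "token_type": "Bearer",
  "expires_in": 3600
}
```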

Role based authorization policy

The predefined OWSM policies oracle/http_jwt_token_service_policy and oracle/http_jwt_token_over_ssl_service_policy create a SecurityContext which is available from the $inbound/ctx:security/ctx:transportClient inside Service Bus. Thus you do not need a custom identity asserter for this!

However, the policy does not allow you to configure role based access, and the predefined policy oracle/binding_permission_authorization_policy does not work for Service Bus REST services. Thus we need a custom policy in order to achieve this. Luckily this policy can use the previously set SecurityContext to obtain principals to validate.

Challenges

Providing the correct capabilities in the policy definition was a challenge. The policy should work for Service Bus REST services. The predefined policies provide examples, however they could not be exported from the WSM Policies screen. I did a ‘Create like’ of a predefined policy which provided the correct capabilities and then copied those capability definitions to my custom policy definition file. Good to know: some capabilities require the text ‘rest’ to be part of the policy name.

Also I encountered a bug in 12.2.1.2 which is fixed with the following patch: Patch 24669800: Unable to configure Custom OWSM policy for OSB REST Services. In 12.2.1.3 there were no issues.

An OWSM policy consists of two deployments

A JAR file

  • This JAR contains the Java code of the policy. The Java code uses the parameters defined in the file below.
  • A policy-config.xml file. This file indicates which class is implementing the policy. An important part of this file is the reference to restUserAssertion, which maps to an entry in the file below

A policy description ZIP file

  • This contains a policy description file.

The description ZIP file contains a single XML file which answers questions like:

  • Which parameters can be set for the policy?
  • Of which type are the parameters?
  • What are the default values of the parameters?
  • Is it an authentication or authorization policy?
  • Which bindings are supported by the policy?

The policy description file contains an element which maps to the entry in the policy-config.xml file. Also the ZIP file has a structure which is in line with the name and Id of the policy. It looks like this:

Thus the name of the policy is CUSTOM/rest_user_assertion_policy. This name is also part of the contents of the rest_user_assertion_policy file. You can also see there is again a reference to the implementation class, and the restUserAssertion element which is in the policy-config.xml file is also there. The capabilities of the policy are mentioned in the restUserAssertion attributes.

Implementation

As indicated, for more detail see the installation manual here. The installation consists of:

  • Create a stripe, keystore and keypair to use for JWT signing and signature validation
  • Add a system policy so the token service can access the keystore
  • Create a group tokenusers which can access the token service to obtain tokens
  • Deploy the token service
  • Apply Patch 24669800 if you’re not on 12.2.1.3
  • Copy the custom OWSM policy JAR file to the domain lib folder
  • Import the policy description

If you have done the required preparations, adding OAuth2 protection to Service Bus REST services is as easy as adding 2 policies to the service and indicating which principals (users or groups, as a comma separated list) are allowed to access the service.

Finally

As mentioned before, the installation manual and code can be found here. Of course this solution does not provide all the capabilities of products like API Platform Cloud Service, OAM or OES. Often you don’t need all those capabilities and complexity, and a simple token service / policy providing the OAuth2 client credentials flow is enough. In such cases you can consider this alternative. Mind that the entire service is protected by the policy and not specific resources; protecting individual resources would require extending the custom OWSM policy with that functionality. Also, if for example someone tries to log in to the token service with basic authentication and uses a wrong password for the user weblogic, that account may get locked. Because of this, and because of other resources which are available by default on the WebLogic Server / Service Bus, you’ll require some extra protection when exposing this to the internet, such as a firewall, IP whitelisting, SSL offloading, etc.

 

The post Securing Oracle Service Bus REST services with OAuth2 client credentials flow (without using additional products) appeared first on AMIS Oracle and Java Blog.
