Friday, December 20, 2013

Kill processes in Linux

Every time I want to kill some processes, I used to search for them and kill them one by one!

Now I have a powerful killer one-liner :D to kill all the processes that match my pattern!

for KILLPID in `ps ax | grep 'my_pattern' | grep -v grep | awk '{ print $1; }'`; do kill -9 $KILLPID; done

Note the `grep -v grep`, which keeps the grep process itself out of the list. On most systems, `pkill -9 -f 'my_pattern'` does the same thing in a single command.

Monday, November 18, 2013

Drop all user tables in Oracle

A simple anonymous PL/SQL block to clear out your schema! (The object-type list in the WHERE clause is the usual set; trim it to just 'TABLE' if tables are all you want to drop.)

   BEGIN
      FOR cur_rec IN (SELECT object_name, object_type
                        FROM user_objects
                       WHERE object_type IN
                                ('TABLE', 'VIEW', 'PACKAGE', 'PROCEDURE', 'FUNCTION', 'SEQUENCE', 'SYNONYM'))
      LOOP
         BEGIN
            IF cur_rec.object_type = 'TABLE'
            THEN
               EXECUTE IMMEDIATE    'DROP '
                                 || cur_rec.object_type
                                 || ' "'
                                 || cur_rec.object_name
                                 || '" CASCADE CONSTRAINTS';
            ELSE
               EXECUTE IMMEDIATE    'DROP '
                                 || cur_rec.object_type
                                 || ' "'
                                 || cur_rec.object_name
                                 || '"';
            END IF;
         EXCEPTION
            WHEN OTHERS
            THEN
               DBMS_OUTPUT.put_line (   'FAILED: DROP '
                                     || cur_rec.object_type
                                     || ' "'
                                     || cur_rec.object_name
                                     || '"');
         END;
      END LOOP;
   END;
   /

Monday, November 4, 2013

JMS Correlation ID in WSO2 ESB

You can retrieve the original message ID with the $header Synapse XPath expression and set it as the JMS_COORELATION_ID property in your message flow (the property name really is spelled that way in WSO2 ESB). That will be set as the correlation ID of the underlying JMS message.

<property name="JMS_COORELATION_ID" action="set" scope="axis2" expression="$header/wsa:MessageID" xmlns:wsa="http://www.w3.org/2005/08/addressing"/>

You'll need to define the XPath appropriately to make sure it addresses both SOAP 1.1 and 1.2 messages.

Alternatively, you can work around it as follows.

1) In the request sequence, grab the message ID:
 <property name="msgID" expression="get-property('MessageID')" />
2) In the response sequence, set the correlation ID using:
 <property name="JMS_COORELATION_ID" expression="get-property('msgID')" scope="axis2" />

Saturday, November 2, 2013

Upload a file Using JSP

This demonstration will guide you through uploading a file into a folder (you can define the path) using JSP.


Application server - I am using WSO2 Application Server, since it is very user friendly to configure and manage my applications using its cool user interfaces.

Further Readings,

Maven - I am using Maven as the build tool.

Further Readings,

Let's see the code.
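The original embedded listing is not reproduced here, but the core server-side step is small. The sketch below is my own illustration (the class and method names are hypothetical): in a real JSP/servlet you would obtain the input stream from the multipart request, for example via the Servlet 3.0 `request.getPart("file").getInputStream()`, and then copy it into the configured folder.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class UploadHelper {

    // Copy an uploaded stream into the target folder and return the stored file's path.
    // In a servlet, the stream would come from request.getPart("file").getInputStream().
    public static Path saveToFolder(InputStream in, String folder, String fileName) throws IOException {
        Path dir = Paths.get(folder);
        Files.createDirectories(dir);                          // create the upload folder if missing
        Path target = dir.resolve(fileName);
        Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        return target;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for an uploaded file's stream.
        InputStream fake = new ByteArrayInputStream("hello upload".getBytes());
        Path stored = saveToFolder(fake, System.getProperty("java.io.tmpdir"), "demo.txt");
        System.out.println("Stored at: " + stored);
    }
}
```

The folder path is the one configurable piece mentioned above; everything else is plain `java.nio.file` plumbing.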

Friday, November 1, 2013

Multicast-based Clustering with Deployment Synchronization

In today's competitive enterprise environment, integration plays a very important role. Simply using integration systems will not improve the environment unless you have the right patterns and solutions. That said, making the integration platform highly available makes it far more meaningful when scaling your enterprise.

In this blog I explain how to run Carbon 4.2.0 based products in a highly available manner and how to synchronize deployments across identical products. In this example I have taken WSO2 DSS as the product to experiment with this feature. You can download it from the official WSO2 website free of charge!

Let's have a look at the illustrated deployment pattern. I have deployed DSS on 3 different machines (nodes). It is important to note that I have not created a master/slave setup; every node participates as a peer. All the servers are load balanced by an external hardware F5 load balancer and sit inside the demilitarized zone (DMZ).

Deployment Diagram

Let's step into the configuration to understand how to implement the above scenario!

Step 1 - Download the product onto any one of the boxes, then start and shut it down for testing purposes.

Step 2 - Since I have 3 different machines to configure and I want to use my domain name instead of localhost, I used a simple script to replace the values within the "conf" directory.
example - find ./ -type f -exec sed -i 's/localhost/node1.example.com/g' {} \; (use your own hostname in place of node1.example.com)

Step 3 - Configure deployment synchronization in <PRODUCT_HOME>/repository/conf/carbon.xml.
 To understand more about SVN-based deployment synchronization, refer to [1].

Step 4 - Configure the registry to maintain the metadata across all three nodes. Refer to [2] for further details.

Step 5 - Configure the user store to point to an external user store. Refer to [3] for further details.

Step 6 - Configure <PRODUCT_HOME>/repository/conf/axis2/axis2.xml to cluster the system.
This step is very important to understand! We are enabling clustering in multicast mode! Let's see the config.
  • In line 2, I have enabled the clustering.
  • In line 5, I have mentioned "membershipScheme" as multicast.
  • In line 10, I have mentioned "localMemberHost" as the name of the current machine.
  • And more importantly, I have mentioned the other members' details (line 20).
I have removed the comments in the configuration to keep the config file compact!
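For reference, the multicast clustering section of axis2.xml looks roughly like the sketch below. The hostnames, ports, and domain name are placeholders of mine, and this is from memory, so verify the element and parameter names against the axis2.xml that ships with your Carbon 4.2.0 product:

```xml
<clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <parameter name="membershipScheme">multicast</parameter>
    <parameter name="domain">wso2.carbon.domain</parameter>
    <parameter name="mcastAddress">228.0.0.4</parameter>
    <parameter name="mcastPort">45564</parameter>
    <parameter name="localMemberHost">node1.example.com</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <members>
        <member>
            <hostName>node2.example.com</hostName>
            <port>4000</port>
        </member>
        <member>
            <hostName>node3.example.com</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>
```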

Step 7 - Start the server and observe whether there are any exceptions or errors due to the configuration, and resolve any issues you find. If there is no misconfiguration, you will see a carbon log like the above.
 The server now waits for the other nodes to start before forming the cluster.
So stop this server for a while and move to step 8.

Step 8 - Take a copy of this server, copy it to the other servers, and run the find-and-replace script again to change the hostname on each node.
Example:- find ./ -type f -exec sed -i 's/node1.example.com/node2.example.com/g' {} \; (substitute the appropriate hostnames)

Step 9 - Go to <PRODUCT_HOME>/repository/conf/axis2/axis2.xml on each and every node, make sure step 6 has been followed, and make sure the members are defined properly!

Step 10 - Time to start the servers and enjoy your highly available WSO2 Carbon 4.2.0 based cluster.




Wednesday, October 30, 2013

Countdown Starts Now - HTML/JS

Being ALONE in Tulsa (Oklahoma, USA) makes me so bored! I have just completed my first month over here!! Hmm, one more month to stay! It is really hard to stay away from my beautiful wife and charming two-month-old kid in Sri Lanka. So why did I take this wise decision? It will not be as easy when my kid turns 2 or 3 years old; I have to be with him for his growth at that age.

Let's see the fun part of it! I thought I would create a simple HTML and JavaScript based countdown page! Yeah, I hear you: there are plenty of systems/plugins available in the market already! So why the hell am I doing this? Answer: "Read my first paragraph :P"

So let's get into the code and see what I have done with it!

If you look at the code carefully, I have set up "TargetDate" as the date to count down to, and used the "Math" functions to do the calculation! The code is pretty simple to understand, so go and grab it ;)
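The embedded script itself is not reproduced here, but the heart of it is just this kind of arithmetic. This is a minimal sketch of my own; "TargetDate" follows the description above, while the helper name and the target date value are placeholders:

```javascript
// Break a millisecond difference into days/hours/minutes/seconds using Math.floor.
function msToParts(ms) {
    var secs = Math.floor(ms / 1000);
    return {
        days:    Math.floor(secs / 86400),
        hours:   Math.floor((secs % 86400) / 3600),
        minutes: Math.floor((secs % 3600) / 60),
        seconds: secs % 60
    };
}

// The date being counted down to (placeholder value).
var TargetDate = new Date("2013-11-30T00:00:00");

// Never show a negative countdown once the date has passed.
var diff = Math.max(0, TargetDate.getTime() - Date.now());
var parts = msToParts(diff);
console.log(parts.days + " days " + parts.hours + " hours " +
            parts.minutes + " minutes " + parts.seconds + " seconds to go!");
```

In the page itself, the `console.log` line becomes a DOM update inside a `setInterval` so the countdown ticks every second.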

The output looks like below!

Monday, October 28, 2013

ZooKeeper - In my point of view

ZooKeeper: Because coordinating distributed systems is a Zoo

What is Apache ZooKeeper?

  • Apache ZooKeeper is a software project of the Apache Software Foundation, providing an open source distributed configuration service, synchronization service, and naming registry for large distributed systems.[1]
  • Apache ZooKeeper is an effort to develop and maintain an open-source server which enables highly reliable distributed coordination. [2]
  • ZooKeeper is a high-performance coordination service for distributed applications. It exposes common services - such as naming, configuration management, synchronization, and group services - in a simple interface so you don't have to write them from scratch. You can use it off-the-shelf to implement consensus, group management, leader election, and presence protocols. And you can build on it for your own, specific needs.[3]

Inherent Problems in Distributed Systems

Little things become complicated because of the distributed nature. For example, when running an application on a local machine, changing the application involves editing a configuration file and restarting the app. However, distributed applications run on different machines and need to see configuration changes and react to them. To make matters worse, machines may be temporarily down or partitioned from the network. Robust distributed applications are also able to incorporate new machines or decommission machines on the fly. This means the configuration of the distributed application must also be dynamic.

Distributed applications need a service that they can simply trust to oversee the distributed environment.
The service needs to be as simple and as easy to understand as possible. A developer should not have trouble integrating the service into their application.

The service needs to have good performance so that applications can use the service extensively. 

ZooKeeper aims to meet above requirements by collecting the essence of these different services into a very simple interface to a centralized coordination service. The service itself is distributed and highly reliable. Consensus, group management, leader election and presence protocols will be implemented by the service so that the applications do not need to implement them on their own. 

ZooKeeper recipes show how this simple service can be used to build much more powerful abstractions.

Originally ZooKeeper was a subproject of Hadoop, but in January 2011 it graduated to become a top-level Apache project. ZooKeeper has Java and C interfaces, and someday the project hopes to add Python, Perl, and REST interfaces for building applications and management interfaces.

ZooKeeper FileSystem / Data Model

ZooKeeper allows distributed processes to coordinate with each other through a shared hierarchical name space of data registers (we call these registers znodes), much like a tree-based file system. Every znode in ZooKeeper's name space is identified by a path: a sequence of path elements separated by slashes ("/"). Every znode has a parent whose path is a prefix of the znode's path with one less element; the exception to this rule is the root ("/"), which has no parent. Also, a znode cannot be deleted if it has any children. There are no renames, no append semantics, and no partial reads or writes.

Data is read and written in its entirety.

The main differences between ZooKeeper and standard file systems are that every znode can have data associated with it (every file can also be a directory and vice versa) and znodes are limited in the amount of data they can have. ZooKeeper was designed to store coordination data: status information, configuration, location information, etc. This kind of meta-information is usually measured in kilobytes, if not bytes. ZooKeeper has a built-in sanity check of 1MB, to prevent it from being used as a large data store.

Znodes maintain a stat structure that includes version numbers for data changes, ACL changes, and timestamps, to allow cache validations and coordinated updates. Each time a znode's data changes, the version number increases. For instance, whenever a client retrieves data it also receives the version of the data.

1. The data stored at each znode in a namespace is read and written atomically.
2. Each node has an Access Control List (ACL) that restricts who can do what.
3. ZooKeeper supports the concept of watches. Clients can set a watch on a znode. A watch is triggered and removed when the znode changes, and when that happens the client receives a packet saying that the znode has changed. If the connection between the client and one of the ZooKeeper servers is broken, the client receives a local notification.

ZooKeeper Distributed Architecture

Main Facts
  • All servers store a copy of the data in memory 
  • The leader is elected at startup 
  • Followers respond to clients 
  • All updates go through the leader 
  • Responses are sent when a majority of servers have persisted the change

The ZooKeeper service itself is replicated over a set of machines that comprise the service. These machines maintain an in-memory image of the data tree along with transaction logs and snapshots in a persistent store. Because the data is kept in memory, ZooKeeper is able to achieve very high throughput and low latency numbers. The downside of an in-memory database is that the size of the database ZooKeeper can manage is limited by memory. This limitation is further reason to keep the amount of data stored in znodes small.

The servers that make up the ZooKeeper service must all know about each other. As long as a majority of the servers are available the ZooKeeper service will be available. Clients must also know the list of servers. The clients create a handle to the ZooKeeper service using this list of servers.

Clients only connect to a single ZooKeeper server. The client maintains a TCP connection through which it sends requests, gets responses, gets watch events, and sends heart beats. If the TCP connection to the server breaks, the client will connect to a different server. When a client first connects to the ZooKeeper service, the first ZooKeeper server will setup a session for the client. If the client needs to connect to another server, this session will get reestablished with the new server.

Given a cluster of ZooKeeper servers, only one acts as the leader, whose role is to accept and coordinate all writes (via a quorum). All other servers are called followers, which are read-only replicas of the leader. Read requests sent by a ZooKeeper client are processed locally at the ZooKeeper server to which the client is connected. If a read request registers a watch on a znode, that watch is also tracked locally at that ZooKeeper server. Write requests are forwarded to the leader and go through consensus before a response is generated. The rest of the ZooKeeper servers (followers) receive message proposals from the leader and agree upon message delivery. Since followers are replicas of the leader, if the leader goes down, any other server can pick up the slack and immediately continue serving requests.

The messaging layer takes care of replacing leaders on failures and syncing followers with leaders. Sync requests are forwarded to the leader as well, but do not actually go through consensus. Thus, the throughput of read requests scales with the number of servers, while the throughput of write requests decreases with the number of servers.

Order is very important to ZooKeeper. (They tend to be a bit obsessive compulsive.) All updates are totally ordered. ZooKeeper actually stamps each update with a number that reflects this order. We call this number the zxid (ZooKeeper Transaction Id). Each update will have a unique zxid. Reads (and watches) are ordered with respect to updates. Read responses will be stamped with the last zxid processed by the server that services the read.

ZooKeeper uses a custom atomic messaging protocol. Since the messaging layer is atomic, ZooKeeper can guarantee that the local replicas never diverge. When the leader receives a write request, it calculates what the state of the system is when the write is to be applied and transforms this into a transaction that captures this new state. 

And last but not least, what if you wanted to create a node that only exists for the lifetime of your connection to ZooKeeper? That's what "ephemeral nodes" are for. Now, put all of these things together and you have a powerful toolkit to solve many problems in distributed computing. ZooKeeper guarantees totally ordered updates, data versioning, and conditional updates (CAS), as well as advanced features such as "ephemeral nodes", "generated names", and an async notification ("watch") API.


ZooKeeper is simple, replicated, ordered, and fast. ZooKeeper provides its clients high-throughput, low-latency, highly available, strictly ordered access to znodes.

  1. High availability: ZooKeeper's architecture supports high availability through redundant services. Clients can ask another ZooKeeper server if the first fails to answer. ZooKeeper nodes store their data in a hierarchical name space, much like a file system or a trie data structure. Clients can read from and write to the nodes and in this way have a shared configuration service. Updates are totally ordered.
  2. Performance: the performance aspects of ZooKeeper allow it to be used in large distributed systems.
  3. Reliability: the reliability aspects keep it from becoming the single point of failure in big systems, and its strict ordering allows sophisticated synchronization primitives to be implemented at the client.


ZooKeeper is used by companies including WSO2, Rackspace, and Yahoo!, as well as open source enterprise search systems like Solr. Cloudera Inc. and Hortonworks Inc. are some other organizations that use ZooKeeper. The Katta project describes itself as "Lucene in the cloud": a scalable, fault-tolerant, distributed indexing system capable of serving large replicated Lucene indexes at high loads.[8]


Wednesday, October 23, 2013

A simple example of using the Distance Matrix API

At the start of this year I was playing around with the Google Distance Matrix API and wrote an HTML-based application as a proof of concept. I thought I would extract a small part from that and write a blog post to demonstrate the Google Distance Matrix API.

As always, I have uploaded my simple code base to my GitHub.
In this sample, I have configured the origin as Colombo and the destination as Stockholm to calculate the total distance between the two locations.

I instantiated a new "DistanceMatrixService" and called "getDistanceMatrix" with the parameters!

So go ahead, change the code, and play with it!

Friday, October 11, 2013

Numerology Name Calculator

I spent quite some time selecting a name for my newborn son. It took more than a week to choose a name and finalize it. As per my family tradition, they kept pinging me to check the numerological value of each and every name on the list, for both myself and my wife!

So I spent a few minutes creating a simple HTML and JS based site to calculate the numerological value of a name. The numerological values are based on Wikipedia [1]. I cross-checked the values with some other references too.

If anyone is interested, please feel free to use it :)
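For the curious, the calculation itself is tiny. The sketch below is my own illustration, assuming the common Pythagorean letter-to-number mapping (a=1 ... i=9, then repeating: j=1, k=2, ...) with the total reduced to a single digit; double-check it against the table you trust before naming anyone with it :)

```java
public class NameNumerology {

    // Pythagorean mapping: a=1..i=9, j=1..r=9, s=1..z=8.
    static int letterValue(char c) {
        return ((Character.toLowerCase(c) - 'a') % 9) + 1;
    }

    // Sum the letter values, then reduce the total to a single digit.
    public static int nameValue(String name) {
        int sum = 0;
        for (char c : name.toCharArray()) {
            if (Character.isLetter(c)) {
                sum += letterValue(c);
            }
        }
        while (sum > 9) {                 // digit-sum reduction (ignores "master numbers")
            int digits = 0;
            while (sum > 0) {
                digits += sum % 10;
                sum /= 10;
            }
            sum = digits;
        }
        return sum;
    }

    public static void main(String[] args) {
        // j=1, o=6, h=8, n=5 -> 20 -> 2
        System.out.println("john -> " + nameValue("john"));
    }
}
```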


Monday, October 7, 2013

Find and Replace the String

For a long time I was searching for a single terminal grep/sed/find command that could find and replace a string value everywhere in a document!

Finally I found the command!

find /path -type f -exec sed -i 's/OldString/NewString/g' {} \;

This command will replace OldString with NewString in every file under /path. If you want to search within the current folder, your path should be "./".

Saturday, August 17, 2013

TCP- Close wait simulation - Part 3

Let's try some quick-and-dirty coding to produce CLOSE_WAIT!

Check out my server code! I am opening up a new socket and waiting for a request from a client.
Once a request is received, it writes out a message to the client and closes the connection then and there.
Server Code

In the client code I am creating a new connection to the server and putting the client into sleep mode without closing the connection.
Client Code
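For reference, a self-contained sketch of the same idea is below (class, method names, and the hold duration are mine, not the original gists). While the client is sleeping, run something like `netstat -an | grep CLOSE_WAIT` in another terminal: the client side of the connection sits in CLOSE_WAIT because the server has sent its FIN but the client never calls close().

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseWaitDemo {

    // Server: accept one connection, write a message, and close immediately.
    public static int startServer() throws Exception {
        ServerSocket server = new ServerSocket(0);   // bind to any free port
        Thread t = new Thread(() -> {
            try (Socket client = server.accept();
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println("Hello from server");
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        t.setDaemon(true);
        t.start();
        return server.getLocalPort();
    }

    // Client: connect, read the message, then hold the socket open WITHOUT closing it.
    public static String runClient(int port, long holdMillis) throws Exception {
        Socket socket = new Socket("localhost", port);
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        String line = in.readLine();
        Thread.sleep(holdMillis);   // connection now sits in CLOSE_WAIT on the client side
        return line;                // socket deliberately left unclosed
    }

    public static void main(String[] args) throws Exception {
        int port = startServer();
        // Increase the hold time to give yourself a chance to run netstat.
        System.out.println("Received: " + runClient(port, 5_000));
    }
}
```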

This gives the results below!

Tuesday, August 13, 2013

TCP- Close wait simulation - Part 2

In my last blog I gave a brief idea about TCP and TCP states. Let's check how it works in a real network by simulating TCP connections using the NS2 simulator.

So what is NS2?

NS2 is a discrete event network simulator. It is aimed at network research, and supports simulation of various facets of networking, including TCP, multicast, and routing protocols.

You can easily install NS2 on Ubuntu using the "Ubuntu Software Center", or as usual make use of your terminal: sudo apt-get install ns2 nam xgraph.

The NS2 script below contains a TCP-TCP comparison.

The NS2 script below contains a TCP-UDP comparison.

You can run these scripts using ns <script>.tcl

The above NS2 scripts help you simulate how TCP functions in a real network.
Let's see what happened!

As shown in Figure 1, the network model was configured with two TCP (Transmission Control Protocol) flows for two milliseconds, and the observations were recorded in trace files.

Figure 1
The trace files were plotted using the "xgraph" utility, as shown in figure 2.
Figure 2
According to the results given by the experiment, TCP flows fairly shared the network bandwidth.

As shown in Figure 3, an additional experiment was conducted to model a TCP flow and a UDP (User Datagram Protocol) flow in a shared network environment; this experiment was also observed for two milliseconds. The observations were recorded in trace files as in the previous experiment.
Figure 3 
The second experiment's trace files were plotted using the "xgraph" utility, as shown in Figure 4.

In accordance with the second experiment's results, given in Figure 4, the UDP flow took over the shared network resources from the TCP flow and did not share the bandwidth fairly with it.

Figure 4

According to Figure 5, in the first experiment the first TCP flow shared the bandwidth with the second TCP flow, while in the second experiment the TCP flow was trampled by the UDP flow. Hence, as a conclusion of these two experiments, TCP shares network resources fairly, whereas UDP does not.

Even though this is slightly off topic, it helps us understand the behavior of TCP connections.

Sunday, August 11, 2013

TCP - Close wait simulation - Part 1

In this blog series I will try to explain how to simulate the CLOSE_WAIT state of TCP in Java. As a prerequisite, you have to understand TCP itself; therefore I refer to "Computer Networks" by Andrew Tanenbaum [1] in this first part of the series. In my next blog post I will simulate the behavior of TCP using the NS2 network simulation tool. Thereafter, I will show a simple code snippet to demonstrate CLOSE_WAIT.

So let's start with TCP...


Transmission Control Protocol (TCP) is designed to be a bidirectional, ordered, and reliable data transmission protocol between two end points (programs). In this context, the term reliable means that it will retransmit packets if they get lost in the middle. TCP guarantees reliability by sending back Acknowledgment (ACK) packets for a single packet or a range of packets received from the peer.

The same goes for control signals such as termination request/response. RFC 793 defines the TIME-WAIT state as follows:

TIME-WAIT - represents waiting for enough time to pass to be sure the remote TCP received the acknowledgment of its connection termination request.

See the following TCP state diagram: 


TCP is a bidirectional communication protocol, so when the connection is established, there is no difference between the client and the server. Either one can call it quits, and both peers need to agree on closing to fully close an established TCP connection.

Let's call the first one to call it quits the active closer, and the other peer the passive closer. When the active closer sends FIN, its state goes to FIN-WAIT-1. Then it receives an ACK for the sent FIN and its state goes to FIN-WAIT-2. Once it also receives a FIN from the passive closer, the active closer sends an ACK for that FIN and its state goes to TIME-WAIT. In case the passive closer did not receive the ACK to its FIN, it will retransmit the FIN packet.

RFC 793 sets the timeout to be twice the Maximum Segment Lifetime, or 2MSL. Since the MSL, the maximum time a packet can wander around the Internet, is set to 2 minutes, 2MSL is 4 minutes. Since there is no ACK to an ACK, the active closer can't do anything but wait 4 minutes if it adheres to the TCP/IP protocol correctly, just in case the passive closer has not received the ACK to its FIN (theoretically).


[1]Tanenbaum, A. 1996. Computer networks. Upper Saddle River, N.J.: Prentice Hall PTR.

Saturday, August 10, 2013

Find your current location using your Browser - HTML 5

Since I have posted several WSO2-related posts recently, I thought of writing something in a different context. As most of you know, HTML5 supports geolocation.

This blog can be an answer to several questions, such as:
[1]How To Use HTML5 GeoLocation API With Google Maps
[2] How to locate my location using html5 compatible browser
[3] Locate me using html 5

So let's make use of it and develop a small HTML web site to identify where we are ;)

As a prerequisite you have to obtain a key from Google in order to activate the Google API.
Please refer to the Google Maps JavaScript API v3 documentation for more information.

Let's step into the code and see what is happening underneath...
In line 12 of the code snippet, I have configured the Google API with a key; in your case you have to replace it with the token key you obtained from Google.

Immediately after the page is loaded, I call "getLocation()" using the body onload event. In the getLocation method (line 17) I make use of the HTML5 Geolocation API to identify my current location. I set a google.maps.Marker with the detected latitude and longitude to configure the Google map that is displayed below.

Friday, August 9, 2013

JMS Message Store and JMS Message Processor Behavior - WSO2 ESB - Part 3

So why did we meet the EVIL in my last blog post? Why? That is the main reason behind this blog series!!

Now let's take a deep breath before we dive into WSO2 ESB and see why this happened!!

If you carefully study the illustration shown in Figure 1, you will understand what goes wrong ;)
Figure 1
Anyway, let me take some more time to explain a little further.
The client sends a message to the proxy. The "Message Store" persists the message in the message store (a JMS queue). The "Message Store" does not persist the message as-is in the JMS queue: it serializes the message and other information into a Java serialized object and puts that into the JMS queue. When the "Message Processor" processes the message, it pulls messages from the JMS queue and deserializes the Java serialized object for further processing.

So why did we encounter exceptions and other problems in issue 1?
When the Message Processor pulls the message from the JMS queue and tries to deserialize it, the deserialization process fails, since the fetched message was not serialized by the Message Store.

So is it possible to put a message directly into the Message Store's "JMS queue"? NO, it is not possible!!!

So what's wrong with issue 2?
A different JMS listener does not know how to deserialize the messages it fetches from the JMS queue (message store)!


Messages that are put into the "JMS queue" by the "Message Store" can be read only by the "Message Processor", and the "Message Processor" is only capable of understanding messages that were put there by the "Message Store".


Thursday, August 8, 2013

JMS Message Store and JMS Message Processor Behavior - WSO2 ESB - Part 2

I believe you have read my last post, which elaborated a simple use case of the "Message Store" and "Message Processor" of WSO2 ESB.

Several issues can occur in real time, alongside mistakes made by users.
Let's take them one by one!

Issue #1 - Real Time

There can be a network loss between the backend and the ESB, or the backend can fail in a real-world situation. These situations are handled by several Enterprise Integration Patterns, and can also be overcome with the use of a Message Store and Message Processor, as I mentioned in my last post.

Issue #2 - User Error

User errors mainly occur due to a lack of knowledge of the "Message Store" and "Message Processor" concepts in WSO2 ESB. As I have noticed several times, users make the two main mistakes mentioned below.

Inserting a message manually into the JMS queue - Figure 1
As Figure 1 illustrates, the "Message Store" and "Message Processor" are configured to persist the message and send it to the backend. However, the user or client tries to manually add a message into the given JMS queue!!

So lets experiment this and see!

Step 1 - Deactivate the Message Processor and shut down the backend.
Step 2 - Go to your message broker (I am using ActiveMQ in this scenario) and try to insert a message manually.

Click on "Send To" to insert message manually in ActiveMQ - Figure 3
Add your message into Message body and Click on Send in ActiveMQ - Figure 4
As a result of step 2, you can notice that the "JMSMessageStore_Queue" now contains the message we just inserted!!! Hmmmm, so far so good!!!!

Step 3 - Start up the backend, reactivate the "Message Processor" in WSO2 ESB, and have a look at the carbon log!

What?? Something has gone wrong!!!!

Step 4 - Send some messages from client and observe what is happening!!

What??? Nothing is happening!!!! Hmmmm... until I explain the next situation... keep on thinking.

Note - you will see the error log in WSO2 ESB 4.6.0. WSO2 ESB 4.7.0 does not show any error log until you enable debug logging. Either way, at the end of the day nothing will happen!!!

Fetching the message from the JMS queue using a JMS listener - Figure 2

The other scenario is also quite interesting!! As you see in Figure 2, a different JMS listener is trying to access the persisted messages in our JMS queue. So what are you going to do with the message you got from the JMS queue???? Nothing can be done!!!

Why????? Until I write my next blog, keep on thinking!!!!!

JMS Message Store and JMS Message Processor Behavior - WSO2 ESB - Part 1

I have seen a lot of WSO2 ESB users misunderstand the usage of "Message Stores" and "Message Processors" in WSO2 ESB. In this blog series I will explain the "under the hood" functionality of "Message Stores" and "Message Processors" to clear up the doubts and myths behind them.

Let's take the very simple use case I have illustrated below.

At the end of the day, the client is sending a message to the backend in this scenario. However, under the hood we persist the message in a JMS queue using a "Message Store" and send it to the backend using a "Message Processor". This makes sure we never lose any transaction, even if the backend fails.
When the backend is up and running, the "Message Processor" sends the persisted messages to the backend.

Let's try this concept using WSO2 ESB and ActiveMQ as the message broker. Please refer to [1] to understand how to configure WSO2 ESB with ActiveMQ.

As the backend and client, I am using the WSO2 ESB SimpleStockQuoteService and the Axis2 client. Please note that these are packed with WSO2 ESB by default. You can find more details in [2].

Please refer to the Synapse configuration given below, which demonstrates the concept above.
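In outline, the configuration follows this general shape. This is a sketch from memory of the ESB 4.x-era samples: the store/processor class names and ActiveMQ parameters below vary between ESB versions, so verify them against your version's documentation before use.

```xml
<proxy name="MessageReciveProxy" transports="http https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- reply 202 Accepted to the client and hand the message to the store -->
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
            <property name="OUT_ONLY" value="true"/>
            <store messageStore="JMSMessageStore"/>
        </inSequence>
    </target>
</proxy>

<messageStore name="JMSMessageStore"
              class="org.apache.synapse.message.store.impl.jms.JmsStore">
    <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
    <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
    <parameter name="store.jms.destination">JMSMessageStore_Queue</parameter>
</messageStore>

<messageProcessor name="SampleMessageForwardingProcessor"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  messageStore="JMSMessageStore">
    <!-- point the processor at the backend endpoint as required by your ESB version -->
    <parameter name="interval">1000</parameter>
</messageProcessor>
```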

Use the ant command given below to invoke the proxy from the client.

ant stockquote -Daddurl=http://localhost:9000/services/SimpleStockQuoteService -Dtrpurl=http://localhost:8280/services/MessageReciveProxy -Dmode=placeorder

As a result, you can notice in the backend that a transaction has happened. Underneath, our "MessageReciveProxy" proxy received the message and persisted it in the "JMSMessageStore" message store. Thereafter, the "SampleMessageForwardingProcessor" message processor took the message from "JMSMessageStore" and sent it to the backend for further processing.

So far so good! Hmmmm

In my next post I will introduce the evil ;)

Evil Mirror wallpaper from Evil wallpapers


Tuesday, July 30, 2013

Retry Configuration for Error Handling in Endpoint

The "retryConfig" element in WSO2 ESB enables developers to retry an endpoint on failure for known error codes. "retryConfig" has two opposite flavours, "disabledErrorCodes" and "enabledErrorCodes", which let users manage known error codes on endpoint failures.

Let's take an example based on WSO2 ESB sample 52 [1].

The image above illustrates a simple use case: load balancing among a set of endpoints.

The Synapse configuration given below meets the above objective.

Nevertheless, a system admin or developer might already know about the error codes [2] that occur during an endpoint failure. To handle such cases, it is possible to use “retryConfig”.

When a known error occurs, “disabledErrorCodes” disables the retry only for the error codes defined in the configuration, whereas “enabledErrorCodes” enables the retry only for the defined error codes. It is not possible to have both keywords at the same time for a given endpoint; if both are present, WSO2 ESB accepts only the “disabledErrorCodes” keyword.
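As a hedged sketch of these two flavours, the load-balance endpoint from sample 52 could be configured as follows. The endpoint names and ports follow sample 52, and error code 101503 is the Synapse "connection failed" code; the exact codes you disable or enable depend on your scenario.

```xml
<endpoint xmlns="http://ws.apache.org/ns/synapse" name="LoadBalanceEp">
   <loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
      <!-- server 1: retry is disabled for connection failures -->
      <endpoint>
         <address uri="http://localhost:9001/services/LBService1">
            <retryConfig>
               <disabledErrorCodes>101503</disabledErrorCodes>
            </retryConfig>
         </address>
      </endpoint>
      <!-- server 2: retry is enabled only for connection failures -->
      <endpoint>
         <address uri="http://localhost:9002/services/LBService1">
            <retryConfig>
               <enabledErrorCodes>101503</enabledErrorCodes>
            </retryConfig>
         </address>
      </endpoint>
   </loadbalance>
</endpoint>
```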

The Synapse configuration above demonstrates the basic usage of “retryConfig”.

Follow WSO2 ESB sample 52 [1] and configure the axis2server instances. As per the given Synapse configuration, the load balancer won't retry on “server 1”'s connection failure and will print “COULDN'T SEND THE MESSAGE TO THE SERVER” on the client side, whereas on “server 2”'s connection failure it will retry the other endpoints.


Monday, July 29, 2013

JSON to XML conversion using WSO2 ESB

I am using WSO2 ESB 4.6.0 to demonstrate this sample; simple yet powerful.

You do not need to write your own code or rely on another library to convert JSON to XML. A simple WSO2 ESB proxy can do the conversion for you, so we will try that.

Let's say the requirement is to use a parameter in the request URL to control whether JSON or XML is returned.

The Synapse configuration given below is the very basic configuration needed to solve this requirement.

The "JsonToXmlApi" REST API has two resources in the "Test" context. The "/xml/" resource converts the incoming JSON payload into XML format and sends it back to the client, whereas the "/json/" resource echoes the payload back to the client without converting it.
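The original API definition is not reproduced here, so below is a minimal sketch of what it could look like. The API name, context, and resource paths come from the post; the mediation inside each resource (switching the messageType to force XML serialization, then sending the message back) is my assumption about how the conversion was done.

```xml
<api xmlns="http://ws.apache.org/ns/synapse" name="JsonToXmlApi" context="/Test">
   <resource methods="POST" url-mapping="/xml/*">
      <inSequence>
         <!-- switch the message formatter so the JSON payload is serialized as XML -->
         <property name="messageType" value="application/xml" scope="axis2"/>
         <header name="To" action="remove"/>
         <property name="RESPONSE" value="true"/>
         <send/>
      </inSequence>
   </resource>
   <resource methods="POST" url-mapping="/json/*">
      <inSequence>
         <!-- echo the JSON payload back unchanged -->
         <header name="To" action="remove"/>
         <property name="RESPONSE" value="true"/>
         <send/>
      </inSequence>
   </resource>
</api>
```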

You can test this using the curl commands given below.
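The original curl commands were not preserved, so the following hedged examples assume the /Test context described above, the default ESB HTTP port 8280, and an arbitrary JSON payload:

```shell
# send a JSON payload to the /xml/ resource; the response should come back as XML
curl -v -X POST -H "Content-Type: application/json" \
     -d '{"stock":{"symbol":"IBM","price":99.9}}' \
     http://localhost:8280/Test/xml/

# send the same payload to the /json/ resource; the JSON is echoed back unchanged
curl -v -X POST -H "Content-Type: application/json" \
     -d '{"stock":{"symbol":"IBM","price":99.9}}' \
     http://localhost:8280/Test/json/
```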

Hello JAX-RS

In my last post I gave a brief introduction to JAX-RS. Before we look into this further, it would be great to do a hands-on session to understand the usage and development procedure.

For a change, I will start with “Hello JAX-RS” instead of “Hello World”!

In order to do this I am using Maven as the build tool, Tomcat as the application server, and IntelliJ IDEA as the IDE. Most importantly, I am using Jersey 1.17.1 as the JAX-RS implementation.

First of all, we have to create a web application project. This task can be done with Maven.

mvn archetype:generate -DartifactId=RESTfulExample -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

After the project is created by Maven, the folder structure looks like this. If you look carefully, the "java" folder is not in the folder structure.

Folder Structure
Thus I manually added a folder under src/main/java and ran mvn idea:idea (if you are using Eclipse, use mvn eclipse:eclipse).

I have shared the complete project here.

In the given pom.xml I have added the dependency for Jersey and set the source level to Java 1.6. HelloJaxRsService is the implementation class for the "Hello JAX-RS" demonstration.
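The pom.xml itself is not shown here, so this is an illustrative fragment under the stated assumptions: Jersey 1.17.1 (the com.sun.jersey group used by the 1.x line) and compiler source/target 1.6. The exact artifact list may differ from the original project.

```xml
<dependencies>
  <!-- Jersey 1.x server and servlet integration -->
  <dependency>
    <groupId>com.sun.jersey</groupId>
    <artifactId>jersey-server</artifactId>
    <version>1.17.1</version>
  </dependency>
  <dependency>
    <groupId>com.sun.jersey</groupId>
    <artifactId>jersey-servlet</artifactId>
    <version>1.17.1</version>
  </dependency>
</dependencies>
<build>
  <plugins>
    <!-- compile for Java 1.6 as stated in the post -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
  </plugins>
</build>
```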

If you carefully study the code I have given, you can see that the @Path("/hello") annotation indicates the context of the HelloJaxRsService class, @GET classifies a method as handling GET requests (in HTTP request-operation vocabulary), and the @Path("world/{param}") annotation defines a resource in the "hello" context that takes a parameter.
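Since the class itself did not survive here, below is a minimal sketch consistent with the annotations and URLs described in this post. Only HelloJaxRsService, the /hello context, and the world/{param} and home/{param} resources come from the post; the package, method names, and return strings are illustrative assumptions. The javax.ws.rs annotations come from the Jersey 1.x dependency.

```java
package com.example.rest; // hypothetical package name

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

@Path("/hello")                        // class-level context: /hello
public class HelloJaxRsService {

    @GET                               // handles HTTP GET requests
    @Path("world/{param}")             // resource: /hello/world/{param}
    public String helloWorld(@PathParam("param") String name) {
        return "Hello JAX-RS, " + name; // illustrative response body
    }

    @GET
    @Path("home/{param}")              // resource: /hello/home/{param}
    public String helloHome(@PathParam("param") String name) {
        return "Welcome home, " + name; // illustrative response body
    }
}
```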

Okay, let's get this code up and running.

First of all, compile the code using Maven to get the WAR file, then deploy it to the Apache Tomcat server.

In this example, a web request to “projectURL/rest/hello/” will be matched to “HelloJaxRsService”, via @Path("/hello").

http://localhost:9000/RESTfulExample/rest/hello/world/human request hits the hello context and world resource.

http://localhost:9000/RESTfulExample/rest/hello/home/Vanjikumaran request hits the hello context and home resource.

Thursday, July 11, 2013

Java API for RESTful Services

Java API for RESTful Services (JAX-RS)

JAX-RS, the Java API for RESTful Web Services, is a Java API that provides support for creating web services according to the Representational State Transfer (REST) architectural pattern. Furthermore, JAX-RS uses annotations to simplify the development and deployment of web service clients and endpoints.

To understand more about JAX-RS, refer to the document provided by Oracle [1], which contains comprehensive information about JAX-RS.

Various implementations are available in industry; the well-known ones are:

1) Apache CXF - an open source Web service framework provided by Apache Software Foundation.
2) Jersey - an implementation from Oracle.
3) RESTEasy - an implementation from JBoss.
4) Apache Wink - an Apache Software Foundation Incubator project whose server module implements JAX-RS.

I will provide a sample implementation of a RESTful web service using JAX-RS in an upcoming blog post.
Until then, please keep on reading :)


Wednesday, July 3, 2013

Debugging techniques In Idea intellij :- IDEA 1

Recently I was in a situation where I had to debug a couple of components that are not under the same pom.xml, and it was not practical to open separate windows and debug each one!

After a few minutes of searching on Google, I found a couple of cool techniques that allow us to debug different components in the same place. Let's go through it step by step (here I will refer to IntelliJ IDEA as the IDE).

Step 1

You can see the "Project Structure" button on the right-hand side (circled) of your currently opened project.

Step 2
You can see "Modules" on the left-hand side of the Project Structure dialog. Once you click it, you can see a "Sources" tab under the name of the project.

Once you get there, you can see a button named "Add Content Root".

Step 3
Then select the content root directory that you want to add into the current project structure.

Step 4
Finally, you can see that you have added a content root to your project structure. If you want to add more content, repeat Steps 2 and 3.

Click OK, and you can see the changes in the project structure with the new content in it!

Now you can debug your projects in the normal way!

Mount WSO2 Governance Registry to WSO2 ESB with Read only Mode

As most of you know, we have a product called WSO2 Governance Registry, known as GReg, which provides the right level of structure straight out of the box to support SOA governance, configuration governance, development process governance, design- and run-time governance, life-cycle management, and team collaboration.

So in this example I am going to use GReg as the registry for WSO2 ESB's resources.

Furthermore, the main intention of this demonstration is to showcase how to configure WSO2 ESB in read-only/read-write mode against WSO2 GReg. The following illustration shows the conceptual deployment architecture, where we have two WSO2 ESBs: the first in READ and WRITE mode (offset 1) and the latter in READ-only mode (offset 2).

Before we dirty our hands, we have some prerequisite tasks! First of all, download the products from the WSO2 product site.

That is not enough, though; we have to set up the underlying storage for WSO2 GReg to demonstrate this sample. We will use MySQL as the database.

So lets take a step ahead and do the work,

Step 1
Create a database in MySQL (let's name the database “gregMount”).

Step 2
Grant the permission to the created database.

GRANT ALL ON gregMount.* TO 'regadmin'@'localhost' IDENTIFIED BY 'regadmin';

Alright, We have just finished the very basic step of this example!

Step 3
Both ESB and GReg need the MySQL JDBC driver to talk to the MySQL server, so put the MySQL connector JAR into <home>/repository/components/lib/ of each product.

Step 4
Now we will step into the GReg configuration changes! Open up <home>/repository/conf/datasources/master-datasources.xml.

Add a datasource that looks like the following configuration:

            <description>The datasource used for registry and user manager</description>
            <definition type="RDBMS">
                    <validationQuery>SELECT 1</validationQuery>
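The lines above are only fragments of the datasource entry. A complete entry, assuming it targets the gregMount database and regadmin user created in Steps 1 and 2 (the datasource and JNDI names below are chosen to match the jdbc/WSO2CarbonDB_GREG reference used later in registry.xml), could look like this:

```xml
<datasource>
    <name>WSO2_CARBON_DB_GREG</name>
    <description>The datasource used for registry and user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2CarbonDB_GREG</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- points at the gregMount database created in Step 1 -->
            <url>jdbc:mysql://localhost:3306/gregMount</url>
            <username>regadmin</username>
            <password>regadmin</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>80</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>
```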

Step 5
Start GReg for the first time with sh <home>/bin/wso2server.sh -Dsetup

Let's step into the WSO2 ESB and do the read-and-write mode configuration!

Step 6
Since we have used WSO2 GReg with default offset, we have to change the offset of the ESB (ReadWrite mode ESB) to <Offset>1</Offset>

In order to do this you have to change the default value of  Offset of  <home>/repository/conf/carbon.xml

Step 7

As shown below, add the same datasource configuration into the ESB's <home>/repository/conf/datasources/master-datasources.xml

            <description>The datasource used for registry and user manager</description>
            <definition type="RDBMS">
                    <validationQuery>SELECT 1</validationQuery>

Step 8
You need to make the following changes in the registry.xml that can be found at <home>/repository/conf/registry.xml.

Adding database configuration.

<dbConfig name="wso2registryRemort"> <dataSource>jdbc/WSO2CarbonDB_GREG</dataSource> </dbConfig>

Add a remote instance that points to WSO2 GReg:

  <remoteInstance url="https://localhost:9443/registry">
     <id>instanceid</id>
     <dbConfig>wso2registryRemort</dbConfig>
     <readOnly>false</readOnly>
  </remoteInstance>

Mount the config and governance collections:

  <mount path="/_system/config" overwrite="true">
     <instanceId>instanceid</instanceId>
     <targetPath>/_system/esbConfig</targetPath>
  </mount>
  <mount path="/_system/governance" overwrite="true">
     <instanceId>instanceid</instanceId>
     <targetPath>/_system/governance</targetPath>
  </mount>

If you look at the remoteInstance configuration carefully, we have configured this ESB with readOnly set to false, which means it can both read from and write to GReg.

Perfect! Now we have completed almost everything needed for this demonstration. Let's see how to configure the read-only ESB.

Step 9
Follow Steps 6 to 8 above for the second ESB instance, but set the offset to <Offset>2</Offset> and readOnly to true.

Now you have GReg and two ESBs to test the read-only mode.

Test case: try to add an endpoint to the registry using the read-only ESB; since the mount is read-only, the write should be rejected.