Wednesday, February 12, 2020

Cloud vs. Cloud Native

Introduction

These days everyone is moving “on cloud”. With many cloud vendors making lucrative offers of TCO reduction, does deploying your application in the cloud make you cloud-ready?

“Cloud Native” is more than just moving part or all of your applications to a cloud vendor. It requires a completely different approach to application architecture, application development, team alignment, infrastructure and more. Let’s take a deep dive to understand what “Cloud Native Architecture” actually means and why it is important.

By definition: “Cloud Native” is an approach to the architecture, development and deployment of applications that takes the characteristics and nature of the cloud into account - resulting in processes and workflows that fully take advantage of the platform.

So what does fully taking advantage of the cloud mean?


Cloud Brings Microservices

Traditional on-premise infrastructure used to be a centralized system: to all intents and purposes, everything was in one place. In the cloud, however, servers, databases and other resources are distributed. In this context, simply migrating your existing application from a traditional on-premise server to the cloud won’t give you much ROI from your migration; it is roughly equivalent to changing your hardware on-premise (or perhaps worse, in the sense that the pieces are no longer one centralized system). This is because applications hosted on on-premise servers were built as monoliths: every feature and service was included in a single, big lump of code. Today, with microservice architecture, applications are built as a collection of distributed services, which complements the distributed nature of the cloud perfectly. This approach lets individual microservices be automated in every possible way to maximize efficiency and minimize cost and effort:

·        Auto Provisioning: Automatically provisioning environments as code
·        Auto Scaling: Tracking utilization of the various modules/services within your application and scaling resources up/down automatically whenever & wherever required
·        Auto Redundancy: Cloud Native applications are resilient to failure, by nature. In the event of a service degradation, application instances move to another server or data center automatically and seamlessly.

So, cloud with microservices provides a more granular means of deploying the resources required for maintaining performance & reliability (SLA). While it’s possible to just migrate your existing monolithic application with its legacy codebase to a cloud platform, you won’t get any of the benefits of being truly “Cloud Native”.


Microservices Bring Containers

The main driving factor for the rise of the underlying container ecosystem (e.g. Docker, CoreOS rkt, Mesos, LXC etc.) is the microservices architecture. Managing your applications as a collection of individual (micro) services has implications on infrastructure: since every service in a microservices application needs to be a self-contained unit, each needs its own allocation of computing, memory, and network resources. From a cost and management point of view, it is not feasible to multiply the number of VMs by 10 or 100 to host the individual (micro) services of your application as you move to the cloud. This is where containers come into existence. Containers are lightweight, standalone, executable packages of software that include everything (in the right amount) needed to run an application: code, runtime, system tools, system libraries and settings. They make a great alternative to VMs for packaging microservices, enabling the benefits mentioned above.


Containers Bring Service Mesh Capabilities

With the introduction of microservices and containers, services have become more agile and location transparent. So the “Cloud Native” architecture demands more mesh, or collaboration, capabilities from services.

Let’s imagine that you are running an application that invokes other services, or that within your Cloud Native application different modules / (micro) services interact with each other over a REST API or Thrift API. In order to make a request, your consumer needs to know the location of the provider service instance. In a traditional application running on a few hardware servers in a data center, service locations are relatively static; for example, your code can read the network locations from a configuration file that is occasionally updated. In a modern, “Cloud Native”, microservice-driven application, where the main ideology is to run anywhere transparently, this is a much more difficult problem to solve, because service instances have dynamically assigned network locations. Moreover, the set of service instances changes dynamically due to auto scaling, failures, and upgrades. Consequently, your client code needs to use a more elaborate service discovery mechanism.

This brings us to re-imagining dynamic discovery patterns (e.g. Client-Side Discovery Pattern, Server-Side Discovery Pattern etc.), the service registry, circuit breakers, the API Gateway and so on; the sketch below illustrates the client-side variant.
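
To make the client-side discovery pattern concrete, here is a minimal Java sketch. The ServiceRegistry interface and the "price-service" name are hypothetical stand-ins for a real registry client (Eureka, Consul etc.); the point is that the consumer resolves a live instance at call time instead of reading a static configuration file:

import java.util.List;
import java.util.Random;

// Hypothetical registry client; in practice this would wrap Eureka, Consul, etc.
interface ServiceRegistry {
    List<String> lookup(String serviceName); // live base URLs, e.g. "http://10.0.3.7:8080"
}

class PriceClient {
    private final ServiceRegistry registry;
    private final Random random = new Random();

    PriceClient(ServiceRegistry registry) {
        this.registry = registry;
    }

    String priceServiceUrl() {
        // Resolve at request time: instances may have scaled, failed or moved since the last call
        List<String> instances = registry.lookup("price-service");
        if (instances.isEmpty()) {
            throw new IllegalStateException("no live instances of price-service");
        }
        // Naive client-side load balancing; libraries like Ribbon do this more intelligently
        return instances.get(random.nextInt(instances.size()));
    }
}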

There are a bunch of open source projects that help you design better mesh or collaboration capabilities for services, e.g. Zuul, Eureka, Hystrix, Ribbon, Gravitee etc.


Cloud, Microservices, Containers and Service Mesh Together Bring “Cloud Native” Tools

The fundamental change in infrastructure with the introduction of the cloud demands a change in tools as well. Legacy tools that were built to manage tasks across a few hardware servers in a data center can no longer sustain the complexity of microservices running in containers across the cloud. Cutting across a distributed, containerized, microservice-based application, even simple things like latency optimization, root cause analysis (RCA) of the backend, and end-to-end (E2E) monitoring can be complex. The resource consumption of each microservice needs to be metered to ensure that a degrading SLA for one service doesn’t affect others.

There are a bunch of open source projects that help run microservice applications, e.g. Kubernetes, Containerd, Fluentd, Prometheus, Envoyproxy, CoreDNS, Jaegertracing, Vitess etc.


Cloud, Microservices & Containers Together Bring DevOps

Traditional applications require different teams for each stage of the application’s lifecycle, e.g. Development, DBA, QA, System Admin, RM, PM etc., with each team having its own goals and priorities. Those priorities often clash, and that takes a toll on overall application delivery & maintenance.

With introduction of cloud, microservices & containers, team structures can be realigned to reflect the change in application development, deployment & maintenance architecture.

Modern software delivery teams are small and cross-functional, fitting perfectly into the “two-pizza rule”. Teams are structured and organized by the services they support. This keeps teams agile, improving collaboration and decision-making. It also eliminates accountability issues, whereby a team can renounce responsibility for a feature after it moves on from their team but before it is released.


Cloud Native Release

With the right architecture to build applications, the right tools to manage them, and the right team structure to bring it all together, the cumulative effect is that Cloud Native applications have become more agile & frequent in terms of their releases. In essence, there are no longer any major releases, since every release affects a single (micro) service, or a couple of (micro) services at most. Due to the limited scope of every release, finding and fixing problems is easier. With automated rollback tooling, it is also much easier to revert to the previous stable release when a new release fails.

Autonomous, independent teams can ship updates to the services they own without any conflict with other teams. With multiple teams simultaneously and frequently releasing updates to services, the pace of deployment is sky high: Netflix engineers deploy code thousands of times per day.

The biggest advantage of a “Cloud Native” architecture is that it takes a concept to rollout (time to market) in the shortest possible time.


Conclusion

“Cloud Native” architecture is more than just migrating your existing application to a cloud instance. It goes much further, changing the way an application is architected, planned & provisioned into the infrastructure, and how it is monitored and maintained.

You can start your modernization journey (if not yet started) with an application modernization assessment and then a single pilot project, allowing you to become familiar with this approach before growing your cloud adoption further over time.

Next time you hear someone talking about cloud, stop and think whether they really mean “Cloud Native”.

Wednesday, August 30, 2017

Integrating business process automation engine with existing TIBCO EMS

In today’s world of business process automation, many of us use WebSphere Process Server as our business process automation engine. However, the need often arises to integrate with existing messaging infrastructure. Today we will see how to integrate TIBCO Enterprise Message Service with WebSphere Process Server.

WebSphere Process Server is a business process integration server that helps you solve complex business flows and a platform that you can use to connect to various technologies. Businesses seeking to use WebSphere Process Server to implement these complex business processes have to be connected to other back-end or front-end systems in order for these business processes to work properly. TIBCO Enterprise Message Service (EMS) is a common messaging resource used by businesses. Since EMS supports JMS, we will use the JMS support of WebSphere Process Server to bind to EMS.



Presuming our environment is running the following versions of WID, EMS & WPS on 32-bit Windows:

WebSphere Integration Developer V6.0.x.
TIBCO EMS server.
WebSphere Process Server V6

Steps to follow:

A. Configuring EMS and WebSphere Process Server
A.1. Start the EMS server
A.2. Create the EMS administered objects
A.3. Configure WebSphere Process Server for the EMS
A.4. Add EMS as a JMS provider to WebSphere Process Server
A.5. Configure the JNDI bindings for the EMS Connection Factories for WebSphere Application Server
A.6. Configure the JNDI bindings for TIBCO Enterprise Message Service Destinations for WebSphere Application Server
A.7. Create the Listener Ports for EMS
A.8. Create the Connection Factories and Destinations for WebSphere Default Messaging
B. Creating a business process
C. Creating the TIBCO EMS Message Producer
D. Configuring the Resource references
E. Creating the TIBCO Message Receiver Message Drive Bean (MDB)
F. Building and running the sample


A. Configuring EMS and WebSphere Process Server


A.1. Start the EMS server

Install the EMS
Choose Start => All Programs => TIBCO => TIBCO EMS Evaluation Version => Start EMS Server
Choose Start => All Programs => TIBCO => TIBCO EMS Evaluation Version => Start EMS Server Administration Tool
Type in connect and press Enter
Press Enter for login and password (no security is configured), and you will see a message saying connected to tcp://localhost:7222, which also appears as the command prompt

A.2. Create the EMS administered objects

Start the admin tool and enter the following commands:
create factory sample.XAQCF xaqueue
create queue sample.Q1

A.3. Configure WebSphere Process Server for the EMS JNDI provider

Create a text file called jndi.properties in the directory: \AppServer\lib\ext
Add the following line into the file:
java.naming.factory.url.pkgs=com.tibco.tibjms.naming
Save the jndi.properties file
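
As a quick sanity check of the EMS JNDI setup (a standalone test of my own, not part of the original steps; it assumes tibjms.jar and the JMS client jars are on the classpath), you can look up the administered objects created in A.2 directly:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class EmsJndiCheck {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.tibco.tibjms.naming.TibjmsInitialContextFactory");
        env.put(Context.PROVIDER_URL, "tibjmsnaming://localhost:7222");

        Context ctx = new InitialContext(env);
        // These are the names created in step A.2
        System.out.println("Found factory: " + ctx.lookup("sample.XAQCF"));
        System.out.println("Found queue:   " + ctx.lookup("sample.Q1"));
    }
}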

A.4. Add EMS as a JMS provider to WebSphere Process Server

Start Process Server
Start the Process Server Administrative Console
Expand Resources and choose Generic JMS Providers
Click New
Enter the following values for the given properties:
Name= TIBCO
Description= TIBCO Enterprise Message Service
Classpath= (location of tibjms.jar)
External Initial Context Factory= com.tibco.tibjms.naming.TibjmsInitialContextFactory
External Provider URL= tibjmsnaming://localhost:7222
Click OK
Click Save on the task bar at the top of the console window
Stop and restart WebSphere Application Server

A.5. Configure the JNDI bindings for the EMS Connection Factories for WebSphere Application Server

Expand Resources and choose Generic JMS Providers
In the content pane, choose TIBCO
Scroll down and choose JMS Connection Factories
Click New
Enter the following values for the given properties:
Name= TIBCOConnectionFactory
Type= QUEUE
JNDI Name= jms/ConnectionFactory
Description= Sample Queue ConnectionFactory
External JNDI Name= sample.XAQCF
Click OK
Click Save on the task bar of the Administrative Console (and Save again to confirm)
Stop and restart the WebSphere Application Server

A.6. Configure the JNDI bindings for TIBCO Enterprise Message Service destinations for the WebSphere Application Server

Expand Resources and choose Generic JMS Providers
In the content pane, choose TIBCO
Scroll down and choose JMS Destinations
Click New
Enter the following values for the given properties:
Name= Q1
Type= QUEUE
JNDI Name= jms/Q1
Description= Sample Listen Queue
External JNDI Name= sample.Q1
Click OK
Click Save on the task bar of the Administrative Console (and Save again to confirm)
Stop and restart the WebSphere Application Server

A.7. Create Listener Ports for EMS

Expand Servers and choose Application Servers
In the content pane, choose the name of the application server
In the Additional Properties Table, select Message Listener Service
In the content pane, select Listener Ports
In the content pane, click New
Enter the following values for the given listener port properties:
Name= TIBCOPtoPListenerPort
Initial State= Started
Description= Listener Port for TIBCO Point to Point
ConnectionFactory JNDI Name= jms/ConnectionFactory
Destination JNDI Name= jms/Q1
Click OK
Click Save on the task bar of the Administrative Console (and Save again to confirm)
Stop and restart the application server to have the changes take effect


Verify that the new listener ports are in their proper initial state
Expand Servers => Application Servers => <your server> => Message Listener Service => Listener Ports
The TIBCO listener ports should have a solid green arrow, indicating they are started


A.8. Create Connection Factories and Destinations for WebSphere Default Messaging

Create a Service Integration Bus
name= MyBus
Destination names= TIBCO_WPS_Queue
Create a JMS Queue Connection Factory
Name= MyQueueConnectionFactory
JNDI Name= jms/MyConnectionFactory
Bus= MyBus
Create JMS Connection Factory
Name= TibcoConnectionFactory
JNDI Name= jms/TibcoConnectionFactory
Bus= MyBus
Create a JMS Queue
Name= TIBCO_WPS_JMS_Queue
JNDI Name= jms/tibcowpsq
Queue Name= TIBCO_WPS_Queue
Bus= MyBus
Create a JMS Activation Spec
Name= TIBCO_WPS_AS
JNDI Name= eis/tibcowpsas
Destination Type= Queue
Destination JNDI name= jms/tibcowpsq
Bus= MyBus




To Be Continued  ...

(Sorry didn't get time to finish. But I promise to finish some day ;))
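
In the meantime, as a rough preview of the message producer planned for section C (my own illustration, not from the original post), a minimal JMS client could use the jms/ConnectionFactory and jms/Q1 bindings configured in A.5 and A.6:

import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class EmsProducer {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // server-side JNDI on Process Server
        QueueConnectionFactory qcf =
                (QueueConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/Q1");

        QueueConnection connection = qcf.createQueueConnection();
        try {
            QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);
            TextMessage message = session.createTextMessage("Hello from WPS");
            sender.send(message);
        } finally {
            connection.close();
        }
    }
}
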
Note: For WPS & WID tutorials you may refer to http://www.webagesolutions.com/knowledgebase/waskb/waskb019/

Tuesday, April 10, 2012

Using Hudson as CI tool on WLS

One of the key concepts of Service-Oriented Architecture is that services are no longer built to last, but rather built to change.

In software engineering, continuous integration (CI) implements continuous processes of applying quality control — small pieces of effort, applied frequently. Continuous integration aims to improve the quality of software, and to reduce the time taken to deliver it, by replacing the traditional practice of applying quality control after completing all development.

Now, if we follow SOA principles, the design and implementation should be optimized. However, what SOA doesn't address is how we build, deploy, test and finally release that code into production.

This is where a CI tool (details of CI can be found here) comes very useful.

Hudson is one such continuous integration (CI) tool. Written in Java, it runs in a servlet container such as Apache Tomcat or the WebLogic application server. It supports SCM tools including CVS, Subversion, Git and ClearCase, and can execute Apache Ant and Apache Maven based projects as well as arbitrary shell scripts and Windows batch commands.

In the following steps we will see how to configure Hudson on WebLogic. (Due to WebLogic's class loading structure, if hudson.war is deployed directly, the application will fail to start up because there are jar conflicts between Hudson and WebLogic. To get around the issue, we use the FilteringClassLoader mechanism so that specific Hudson jars get priority over WebLogic's jars in the classpath.)

Versions
-----------
Hudson: 2.2.0
WebLogic: 10.3.1 (11g R1)

Steps:
-----------
1> Download the Hudson binary (2.2.0) from here
2> Unzip downloaded hudson.zip
3> Go inside the unzipped location and make a .war file from the content (jar -cvf hudson.war .)
4> Copy the hudson.war to a directory (e.g. E:\installers\hudson) and create another folder (META-INF) in it.
5> Add application.xml and weblogic-application.xml in the META-INF directory

application.xml
===============
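
A minimal descriptor along these lines should work; the display name is illustrative, and the context root matches the /hudson URL used in step 7:

<?xml version="1.0" encoding="UTF-8"?>
<application xmlns="http://java.sun.com/xml/ns/javaee" version="5">
  <display-name>hudson</display-name>
  <module>
    <web>
      <web-uri>hudson.war</web-uri>
      <context-root>hudson</context-root>
    </web>
  </module>
</application>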


weblogic-application.xml
========================
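
This is where the FilteringClassLoader mentioned above is declared. The package list below is illustrative (schema declarations omitted); include whichever packages actually conflict between your Hudson version and WebLogic:

<weblogic-application>
  <!-- Give the Hudson-bundled copies of these packages priority over WebLogic's -->
  <prefer-application-packages>
    <package-name>org.apache.ant.*</package-name>
    <package-name>org.apache.commons.*</package-name>
    <package-name>org.apache.xerces.*</package-name>
  </prefer-application-packages>
</weblogic-application>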


6> Repackage the complete content into hudson.ear and deploy it in WebLogic
7> Access the Hudson on http://<server_ip>:<port>/hudson

Monday, April 9, 2012

Enabling OSB to support X.509 token identity

In cryptography, X.509 is an ITU-T standard for a public key infrastructure (PKI) and Privilege Management Infrastructure (PMI). X.509 specifies, amongst other things, standard formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm. Further detail is available here

With OSB supporting X.509 certificates, it can guarantee that the hosts OSB communicates with are the ones that it expects.

Adding X.509 Support to the WebLogic Default Identity Asserter

Since OSB uses the underlying framework that is supplied by WLS, by setting the default identity asserter within WLS to accept X.509 certificates, we can allow users and/or processes to present X.509 certificates to identify themselves. To do this, we need to access the WebLogic Server console and open the Security Realm -> Providers -> Authentication tab.



Within the Default Identity Asserter, under the Common tab, move the X.509 token type from the available list to the chosen list.



Then, under the Provider Specific tab, enable the Default User Name Mapper. This maps an attribute of the presented certificate (such as the CN) to a WebLogic user.



Adding WebLogic PKI Credential Mapping Provider

A Credential Mapping provider allows WebLogic Server to log into a remote system on behalf of a subject that has already been authenticated. You must have one Credential Mapping provider in a security realm, and you can configure multiple Credential Mapping providers in a security realm. It is only one of the types of credential mapper that are available. For this example we will need to have a keystore available. To do this you might need to use the keytool command.

The basic steps to create keystores (JKS) using keytool are covered in the JDK documentation; for reference, the following four easily usable commands create identity and trust stores along with a (self-signed) certificate:

Generate a key pair in the identity store:
keytool -genkey -alias test -keyalg RSA -keysize 1024 -dname "cn=L-0219016978.XXX.com,ou=Consulting,o=XXX" -keypass password -keystore identity.jks -storepass password
Self-sign the certificate:
keytool -selfcert -v -alias test -keypass password -keystore identity.jks -storepass password -storetype JKS
Export the certificate to a file:
keytool -export -v -alias test -file rootCA.der -keystore identity.jks -storepass password
Import the certificate into a new trust store:
keytool -import -v -trustcacerts -alias test -file rootCA.der -keystore trust.jks -storepass password
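
As an optional sanity check (my own addition, not part of the original steps), a few lines of Java can confirm what the generated stores contain:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class KeystoreCheck {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("identity.jks")) {
            ks.load(in, "password".toCharArray()); // storepass used in the keytool commands
        }
        for (String alias : Collections.list(ks.aliases())) {
            System.out.println(alias + " -> " + ks.getCertificate(alias).getType()); // expect X.509
        }
    }
}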

To configure a Credential Mapping Provider, we need to add a new provider in the security realm, under Providers -> Credential Mapping, by clicking “New”





Once, the PKI Credential Mapper is created, configure it as below.



The keystore type will be JKS, and the keystore filename/password will be the ones given when using keytool to create the keystore.

Configuring OSB Inbound WebServices



Within the web services security list, the default_x509_handler is the one that we need to change (to use the newly created keystore).



Now we need to set the Token Handler property UseX509ForIdentity to true.



Configuring WebLogic for SSL

The next step is to configure WebLogic Server for SSL, using the same keystore.





The next step is to create a user that is contained within the certificate from the keystore. After these changes, the server needs to be restarted.

Once we complete the steps mentioned above, our OSB system is ready to support the X.509 token identity.

Further reference is available in the Oracle WebLogic documentation:

http://docs.oracle.com/cd/E13222_01/wls/docs103/ConsoleHelp/taskhelp/webservices/webservicesecurity/CreateDefaultWSSConfig.html
http://docs.oracle.com/cd/E13222_01/wls/docs103/ConsoleHelp/taskhelp/webservices/webservicesecurity/UseX509ForIdentity.html

Tuesday, October 11, 2011

Enterprise Integration using Web Services

For the last couple of days, while working on an integration activity, I have been going through some of the Enterprise Integration Patterns (EIP) and having a couple of rounds of discussion with my clients. During this time I got the idea of writing some notes on this topic, so that anyone already working on it can get some help from the post (read "can help me by giving me back some new ideas").

EI has numerous patterns; in this post we will look most importantly at the web service based models.

There are two principal architectures for Web service interfaces:

A> Synchronous Web services
B> Asynchronous Web services


These two architectures are distinguished by their request/response handling patterns. With synchronous services, clients invoke a request on a service and then suspend activity to wait for a response. With asynchronous services, clients initiate a request to a service and then resume their processing without waiting for a response. The service handles the client request and returns a response when it is available later. Upon availability of the response, the client retrieves it for further processing.

However, a Web service may combine synchronous and asynchronous architectures depending upon the type of work the service performs and the available technologies.

A> Synchronous Web Services

Synchronous services are characterized by the client invoking a service and then waiting for a response to the request. Since the client suspends its own processing after making the request, this approach is best when the service can process the request in a small amount of time. Synchronous services are also best when applications require a more immediate response to a request. Web services that rely on synchronous communication are usually RPC-oriented; generally, consider using an RPC-oriented approach for synchronous Web services.

A credit card service, used in an e-commerce application, is a good example of a synchronous service. Typically, a client (the e-commerce application) invokes the credit card service with the credit card details and then waits for the approval or denial of the credit card transaction. The client cannot continue its processing until the transaction completes, and obtaining credit approval is a prerequisite to completing the transaction.

A stock quote Web service is another example of a synchronous service. A client invokes the quote service with a particular stock symbol and waits for the stock price response.



Pic: synchronous webservice model


Synchronous Web services mostly leverage a JAX-RPC servlet endpoint. The client makes the request to the servlet endpoint, and the servlet delegates the request to the service's appropriate business logic, which may reside in the service's Web tier or EJB tier. The service's business logic processes the request (accessing business data through a persistence mechanism, if required) and, when it completes its processing, formulates a response which is then returned to the client through the JAX-RPC servlet endpoint.


Pic: Synchronous webservice with JAX-RPC
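
To make the stock quote example concrete, a JAX-RPC service endpoint interface is an ordinary RMI-style Java interface; the names below are illustrative, not from any particular product:

import java.rmi.Remote;
import java.rmi.RemoteException;

// Service endpoint interface: each method becomes an operation of the Web service
public interface StockQuoteService extends Remote {
    double getQuote(String tickerSymbol) throws RemoteException;
}

// Implementation class (in its own source file); the JAX-RPC runtime dispatches
// requests arriving at the servlet endpoint to this business logic
class StockQuoteServiceImpl implements StockQuoteService {
    public double getQuote(String tickerSymbol) {
        return lookupLatestPrice(tickerSymbol);
    }
    private double lookupLatestPrice(String tickerSymbol) {
        return 42.0d; // placeholder for a real persistence-layer lookup
    }
}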


A synchronous architecture like this also makes it possible to expose a Web service interface on an existing J2EE application that may already have a browser/wireless/rich client interface.

B> Asynchronous Web Services

With asynchronous services, the client generates the request but doesn't wait for the response. Often the client does not want to wait because it may take a significant amount of time for the service to process the request, and the next operation is not directly dependent on the response (unlike the credit card response for the e-commerce application during checkout).

Generally, an asynchronous class of services is used for document-oriented approaches.

A travel desk service is a good example of a document-oriented service that might benefit from asynchronous communication: the client sends a document to the travel service (perhaps requesting arrangements for a particular trip, or reimbursement for one). Based on the document's content and/or the workflow defined within the business logic of the service, processing starts. Since the travel service might perform time-consuming steps in its normal workflow, the client cannot afford to pause and wait for these steps to complete.

So far so good. The synchronous calls are easy enough, since we do not need to think about tracking a delayed response. In the case of asynchronous calls, the service may make the result of the client's request available in one of two ways:

1> The client that invoked the service periodically checks the status of the request using the ID that was provided at the time the request was submitted (this is also known as polling).
2> If the client itself is a Web service peer, the service calls back the client's service with the result.


While designing the architecture of an asynchronous web-service-based integration, we can consider multiple design patterns. The architecture pictured above shows one of the recommended approaches to achieve asynchronous communication. In this architecture, the client sends a request to the JAX-RPC servlet endpoint on the web container. The servlet endpoint delegates the client's request to the appropriate business logic of the service by sending the request as a JMS message to a designated queue or topic. The JMS layer (along with message-driven beans) makes asynchronous communication possible; a minimal sketch of the receiving side follows.
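
Here is a minimal sketch of that receiving side, as a J2EE-era message-driven bean (the class name is illustrative and the deployment descriptor is omitted):

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class RequestProcessorBean implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext context;

    public void setMessageDrivenContext(MessageDrivenContext ctx) { this.context = ctx; }
    public void ejbCreate() { }
    public void ejbRemove() { }

    // Invoked by the container for each request the servlet endpoint puts on the queue
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String request = ((TextMessage) message).getText();
                // delegate to the service's business logic here; the result is later
                // made available via polling or a callback, as described above
            }
        } catch (JMSException e) {
            context.setRollbackOnly(); // let the container redeliver the message
        }
    }
}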

A second variation covers the case where the client itself is a Web service peer. The client application (which could be a direct client or another web container) generates a request and sends it via JMS to the processing center (Service A). Service A has a Web service invoker that sends the request to the destination application's Web service endpoint (Service B). The endpoint receives the request and acknowledges receipt by sending an ID back to the service invoker. Later, when the supplier application fulfills the request, its Web service invoker sends the result to Service A's endpoint.



As I mentioned earlier, there could be numerous approaches to achieving this pattern, and the above are only two of them. In case you are interested in adding anything, please feel free to write a note on the post.

Sunday, September 4, 2011

SOA concept and role of an ESB in it

This post was influenced by one of my recent discussions about service-oriented IT trends with a couple of my friends. Since SOA is a recent buzzword, and with a couple of very good implementations of it present in the current market from some of the software giants, it has become an absolute necessity to understand the concept first.

Service-Oriented Architecture

Service-Oriented Architecture (SOA) has emerged as the leading IT agenda for infrastructure reformation, to optimize service delivery and ensure efficient business process management. Part of the paradigm shift of SOA is a fundamental change in the way IT infrastructure is designed: moving away from an application infrastructure to a converged service infrastructure. Service-Oriented Architecture enables discrete functions contained in enterprise applications to be organized as layers of interoperable, standards-based shared “services” that can be combined and reused in composite applications and processes.

In addition, this architectural approach also allows the incorporation of services offered by external service providers into the enterprise IT architecture. As a result, enterprises are able to unlock key business information in disparate silos, in a cost-effective manner. By organizing enterprise IT around services instead of around applications, SOA helps companies achieve faster time-to-service and respond more flexibly to fast-paced changes in business requirements.

In recent years, many enterprises have evolved from exploring pilot projects using ad-hoc adoption of SOA to a defined, repeatable approach for optimized enterprise-wide SOA deployments. All layers of an SOA architecture have become service-enabled, comprising presentation services, business processes, business services, data services, and shared services.







Fig: SOA Conceptual Architecture


Service Mediation Challenges

A major challenge for SOA initiatives is attributed to the inherently heterogeneous multi-vendor IT landscape in many enterprises, and the resultant individual silos of business information. Rather than incur the cost and complexity of replacing disparate components of legacy infrastructure, enterprises often choose to extend existing business applications as services for use in other business processes and applications.

The influx of Web service interfaces to functionality within existing packaged applications often introduces services that do not adhere to established service and compliance guidelines. This is especially true if the services are published from core enterprise systems such as CRMs, Data Warehouses, and ERPs.

In the absence of robust and comprehensive service infrastructure solutions, developers have used a variety of “middleware” technologies to support program-to-program communication, such as object request brokers (ORBs), message-oriented middleware (MOM), and remote procedure calls (RPCs). More recently, IT infrastructure developers have hard-coded complex integration logic as point-to-point connections to web services in order to integrate disparate applications and processes. This inevitably resulted in complex service sprawl within enterprise IT environments. The following figure illustrates a typical static service integration scenario.


Fig: Service Sprawl Challenge


The following are other service related challenges attributed to heterogeneous IT architectures:



  • Tightly-coupled business services integration due to complex and rigid hard-wired connections

  • Difficulty managing deployed services due to disparate protocols and applications involved

  • High total cost of ownership for the enterprise

  • Impaired ability to reuse services

  • Inherent replication of transport, transformation, security, and routing details

  • Exponential redevelopment and redeployment efforts when service end-point interfaces change

  • Inevitable service disruption that significantly impacts service consumers

Enterprise architects and web service modelers who aim to streamline IT infrastructure now require enterprise service capabilities that address the following IT needs:




  • Simplify access and updates to data residing in different sources

  • Reuse services developed across the enterprise and effectively manage their lifecycle

  • Provide dynamic configuration of complex integration logic and message routing behavior

  • Enable run-time configuration capabilities into the service infrastructure

  • Ensure consistent use of the enterprise services

  • Ensure enterprise services are secure and comply with IT policies

  • Monitor and audit service usage and manage system outages

Composite Applications and Service Layering

In an SOA initiative, composition is an integral part of achieving business flexibility through the ability to leverage existing assets in higher-order functions. Within a mature SOA environment, complete business applications are composed from existing services to quickly meet business needs. Flexibility in the service provisioning process is achieved by avoiding coding logic in service implementations.

Many organizations develop services at very granular levels, and the resulting proliferation of many small, specific services is difficult to compose into broader logical services. Layering of services is a way of breaking out of the limitations of monolithic applications and shortening development, release and test cycles. By defining a layered approach to service definition and construction, the service infrastructure team can achieve the right mix of granular and coarse-grained services required to meet their current and future business demands. Service layers typically comprise the following services:




  • Physical Services: that may represent functions that retrieve data in its raw form

  • Canonical Services: that may define a standard view of information for the organization, leveraging industry-standard formats and supporting a very wide data footprint

  • Logical Services: that provide a more client-specific granular view of information, generated at compile time using highly-optimized queries

  • Application Services: that are consumed directly by applications in a line-of-business dependent fashion and may be exposed through presentation services

Service Bus Component of SOA

The core of SOA success depends on an Enterprise Service Bus (ESB) that supports dynamic synergy and alignment of business process interactions, continual evolution of existing services and rapid addition of new ones. To realize the benefits of SOA, it is imperative that IT organizations include a robust and intelligent service intermediary that provides a layer of abstraction to mask the complexities of service integration in heterogeneous IT environments, typical in today’s enterprises. While an intermediary layer of abstraction previously implied a platform for customizing enterprise applications, today it implies toolkits for service customization and scalable infrastructures that support loosely coupled service interactions with a focus on service mediation.




Fig: Enterprise Service Bus

ESBs have been instrumental in the evolution of integrated middleware infrastructure technology by combining features from previous technologies with new services, such as message validation, transformation, content-based routing, security and load balancing. ESBs use industry standards for most of the services they provide, thus facilitating cross-platform interoperability and becoming the logical choice for companies looking to implement SOA.

An ESB provides an efficient way to build and deploy enterprise SOA. ESB is a concept that has gained the attention of architects and developers, as it provides an effective approach to solving common SOA hurdles associated with service orchestration, application data synchronization, and business activity monitoring. In its most basic form, an ESB offers the following key features:




  • Web services: support for SOAP, WSDL and UDDI, as well as emerging standards such as WS-Reliable Messaging and WS-Security

  • Messaging: asynchronous store-and-forward delivery with multiple qualities of service

  • Data transformation: XML to XML

  • Content-based routing: publish and subscribe routing across multiple types of sources and destinations

  • Platform-neutral: connect to any technology in the enterprise, e.g. Java, .Net, mainframes, and databases


Fig: ESB Architecture



A robust SOA suite offers:




  • Adapters, to enable connectivity into packaged and custom enterprise applications, as well as leading technologies.

  • Distributed query engine, for easily enabling the creation of data services out of heterogeneous data sources

  • Service orchestration engine, for both long-running (stateful) and short-running (stateless) processes

  • Application development tools, to enable the rapid creation of user-facing applications

  • Presentation services, to enable the creation of personalized portals that aggregate services from multiple sources

Using ESBs offers greater flexibility for enterprises to connect heterogeneous resources by eliminating the need for brittle, high-maintenance point-to-point connections. Adding an ESB intermediary between service consumers and service providers shields them from the implementation details of underlying service end-point interfaces, reducing or eliminating the redevelopment and redeployment impacts at the service-consumer level.

Best-in-class enterprises have achieved SOA success by harnessing high-speed, enterprise-ready ESB intermediaries that strategically integrate service mediation capabilities and business process management functionality. Recognizing operational service management as a critical SOA success factor, they have implemented solutions that provide enterprise-class service scalability, reliability, customization and security. By adopting such solutions, built specifically for management and governance of the SOA service lifecycle, these enterprises have obtained the following business benefits:




  • Minimized costs by accelerating SOA deployment initiatives

  • Ensured customer satisfaction by assurance of continuous service availability

  • Insulated service consumers from changes in service infrastructure by virtualizing service end points

  • Maximized ROI by leveraging shared services infrastructure and using consistent modeling methodologies

  • Reduced integration burden by simplifying service interactions

  • Improved effectiveness of SOA initiatives through accurate run-time governance of shared services

  • Justified SOA spending through inventory and tracking of run-time services

  • Enabled accurate cost-benefit decisions by measuring the benefit or cost avoidance obtained through SOA

Fig: Enterprise Integration for SOA



Hope this helps in understanding the concept and drive of the SOA initiative. Many thanks to a leading SOA implementation vendor's documentation, which helped me put this post together.

Tuesday, September 28, 2010

Browsing your destinations using Hermes

When we talk about messaging middleware, administering the destinations is a must-have feature. A lot of queries come up along the lines of 'How do I browse, search, copy, and delete messages from destinations?'

The WebLogic Server Administration Console has had the feature to monitor and view JMS messages since WebLogic 9.x. However, it is a web-based (console) tool that is optimized for configuration, not monitoring and development testing; it cannot do some advanced JMS activities, like copying messages from one queue to another, very easily. HermesJMS is an extensible console that helps you interact with JMS providers, making it easy to browse or search queues and topics, copy messages around and delete them. It fully integrates with JNDI, letting you discover stored administered objects, create JMS sessions from the connection factories and use any destinations found. Many providers include a plug-in that uses the native API to do non-JMS things like getting queue depths (and other statistics) or finding queue and topic names.

This little but powerful tool works with many of the popular JMS providers such as WebLogic JMS, Tibco EMS, JBoss MQ, Sonic MQ, WebSphere MQ, OpenJMS, SAP etc.

In this post we will see how to get Hermes installed and configured for use with WebLogic JMS.

1> Download the installer from HERE
2> Run “java -jar hermes-installer-1.13.jar” (at the time of writing this post the version available is 1.13; you may have a different JAR file name) in the directory where you have placed the installer jar.
3> Edit the /bin/hermes.bat or hermes.sh and set JAVA_HOME (if you do not have it set at the System level)
4> Create a JMS module > Connection Factory > Destination (if you already have an existing Destination on WLS, you may skip this step).
5> Launch Hermes by calling the hermes.bat
6> Right click on “sessions” and select “new -> New session”
7> Go to the “providers” tab at the bottom and add a “group” called “weblogic10” and a “library” specifying your path to weblogic.jar. Select “Don’t Scan” when prompted. (I am using WLS 10.0 GA for this posting)



8> Save the configuration.
9> Click on the “Sessions” tab. Enter a name for the session (in this case “examplesQCF”). Select “BEA WebLogic” for the “Plug In” section from the dropdown. In the “Connection Factory”, select “weblogic10” for the loader. For the properties values refer the below image (binding, URL and credentials would be specific to your WLS settings).



10> Now right click on the newly created “examplesQCF” session and select Discover. It will list all the available destinations under the examplesQCF session.



11> Now right click on a single entry under the session and select “Browse”; you should be able to see the contents of the queue.



If you see something very similar to the snapshot above, that means you have configured Hermes properly.
