Sunday, September 27, 2009

WS-Policy v/s WS-Security

Posting an article, written in Sept 2008, comparing WS-Security and WS-Policy; revisited after going through Policy Driven SOA by Sreedhar Kajeepeta.


WS-Policy - Describes the capabilities and constraints of the security (and other business) policies on intermediaries and endpoints (e.g. required security tokens, supported encryption algorithms, privacy rules).

WS-Security- Describes how to attach signature and encryption headers to SOAP messages. In addition, it describes how to attach security tokens, including binary security tokens such as X.509 certificates and Kerberos tickets, to messages.

WS-Policy - Describes how senders and receivers can specify their requirements and capabilities. WS-Policy is fully extensible and does not place limits on the types of requirements and capabilities that may be described; however, the specification identifies several basic service attributes, including privacy attributes, encoding formats, security token requirements, and supported algorithms. The specification defines a generic SOAP policy format, which can support more than just security policies, and also defines a mechanism for attaching service policies to SOAP messages.
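
As a hedged illustration of that generic policy format, a WS-Policy document might look like the following sketch. The operator elements are from the WS-Policy framework; the assertion shown is borrowed from WS-SecurityPolicy, and the specific requirement (HTTPS transport) is a hypothetical choice:

```xml
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
            xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <wsp:ExactlyOne>
    <wsp:All>
      <!-- one acceptable alternative: messages must travel over HTTPS -->
      <sp:TransportBinding>
        <wsp:Policy>
          <sp:TransportToken>
            <wsp:Policy>
              <sp:HttpsToken/>
            </wsp:Policy>
          </sp:TransportToken>
        </wsp:Policy>
      </sp:TransportBinding>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
```

ExactlyOne/All express alternatives: a client may satisfy any one of the listed All groups.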

WS-Security - WS-Security describes enhancements to SOAP messaging to provide quality of protection through message integrity and message confidentiality. Message integrity is provided by leveraging XML Signature in conjunction with security tokens (which may contain or imply key data) to ensure that messages are transmitted without modifications. Similarly, message confidentiality is provided by leveraging XML Encryption in conjunction with security tokens to keep portions of SOAP messages confidential. Finally, WS-Security describes a mechanism for encoding binary security tokens.
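
A sketch of what this looks like on the wire: a wsse:Security SOAP header carrying a binary security token and an XML Signature. Element content is abbreviated, and the token value is a placeholder:

```xml
<soap:Header>
  <wsse:Security
      xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
      soap:mustUnderstand="1">
    <!-- an X.509 certificate carried as a binary security token (value elided) -->
    <wsse:BinarySecurityToken ValueType="...#X509v3"
                              EncodingType="...#Base64Binary">MIIB...</wsse:BinarySecurityToken>
    <!-- XML Signature over the SOAP body, keyed by the token above -->
    <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <!-- SignedInfo, SignatureValue, KeyInfo elided -->
    </ds:Signature>
  </wsse:Security>
</soap:Header>
```

The header travels with the message itself, which is why no external enforcement tool is needed: any SOAP node in the chain can process it.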

3. Defining a Policy

WS-Policy - Policies are formulated using the elements and document-level subjects provided by the various specifications under the WS-Policy framework.

WS-Security - Coded as a message handler using the SAAJ APIs.

4. Integrating policies with services

WS-Policy - Policies may be integrated with services by adding metadata, either directly through WS-PolicyAttachment or indirectly by adding reusable policy definitions to a registry/repository and then referring to them through registry key references in the business service definition.
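
Direct attachment typically appears in the WSDL as a policy reference. A hedged sketch (binding, port type, and policy names are hypothetical):

```xml
<wsdl:binding name="QuoteSoapBinding" type="tns:QuotePortType">
  <!-- attach a policy defined elsewhere in this WSDL by its Id -->
  <wsp:PolicyReference URI="#SecureMessagePolicy"
      xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"/>
  <soap:binding transport="http://schemas.xmlsoap.org/soap/http"/>
</wsdl:binding>
```

A policy-aware client reading this WSDL can discover the requirement without any out-of-band agreement.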

WS-Security - Message handlers handling security are configured in the SOAP message chain using webservices.xml (the Web Services deployment descriptor).
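
An abridged sketch of registering such a handler in webservices.xml (J2EE 1.4-style descriptor; several required elements are elided, and the handler class and names are hypothetical):

```xml
<webservices xmlns="http://java.sun.com/xml/ns/j2ee" version="1.1">
  <webservice-description>
    <webservice-description-name>QuoteService</webservice-description-name>
    <port-component>
      <port-component-name>QuotePort</port-component-name>
      <!-- SAAJ-based handler that signs/encrypts the SOAP message -->
      <handler>
        <handler-name>SecurityHandler</handler-name>
        <handler-class>com.example.ws.SecurityHandler</handler-class>
      </handler>
    </port-component>
  </webservice-description>
</webservices>
```

This is what the comparison means by the security contract living in the deployment descriptor rather than in WSDL or a registry.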

5. Policy Enforcement Points (PEP)

WS-Policy - A policy enforcement tool references the registry/repository to determine which policies should be enforced for a given service. There are two ways to enforce policies: using agents and using a gateway.

WS-Security - The SOAP message chain enforces the security policy using SOAP headers. No additional tool is required to enforce the policy.

6. Policy-aware clients

WS-Policy - Yes; they can retrieve information about the policies through WS-MetadataExchange and perform dynamic bindings with the endpoints that satisfy the given criteria.

WS-Security - No. Once the contract is defined between service provider and service consumer, SAAJ message handlers need to be coded to enforce the WS-Security specifications, but that contract is not defined in WSDL or in a repository.

7. Overhead of Frameworks

WS-Policy - Yes. One needs to know policy expressions and policy assertions while defining a policy, WS-PolicyAttachment to integrate the policy into WSDL, and WS-MetadataExchange to get information about the policy. Policy enforcement tools are required to enforce policies.

WS-Security - Only the SAAJ APIs are required to implement the message handler.

8. Can policies be centrally managed?

WS-Policy - Yes, and this can be implemented using XML network infrastructure (e.g. XML gateways).

WS-Security - No

9. Policy registry/repository

WS-Policy - Policies can be centrally stored in a registry/repository, which can also be used by policy-aware clients to gather information about the policy.

WS-Security - No. Implemented as part of the web services deployment descriptor.

10. Centralized Management

WS-Policy - Yes, possible using policy management tools.

11. Monitoring and Alerting

WS-Policy - Yes, possible using monitoring tools.

12. Message Validation and Compliance

WS-Policy - Yes; this can be done by an XML gateway (hardware) reading the policies.

WS-Security - A message interceptor gateway can be coded using the message handler.

13. Access protection

WS-Policy - As part of web services infrastructure security, direct access to all service endpoints can be disabled by using an XML firewall or web-proxy infrastructure that masks all the underlying service endpoints and communicates through network address translation (NAT) or URL-rewriting mechanisms.

WS-Security - No

--Amit G --

Wednesday, September 23, 2009

First Shot at Service Component Architecture


Looking at the recent buzzwords and key technologies in the IT industry - web services, Service Oriented Architecture, Software as a Service, Platform as a Service, and so on - one thing stands out: "services" is the common factor in all of them, and going forward the IT industry could well be renamed the Services Industry.
In this post I am trying to write about the Service Component Architecture (SCA) specifications. I heard about this specification from my SOA Knowledge Bank colleague Ravi Venkidapathy. His first impression of the specification: SCA attempts to simplify the building of Service Oriented Architectures by focusing on the composition, assembly, and secure deployment aspects of SOA. Whether I agree or disagree with this definition will be presented in the latter half of this blog.

High Level View of Specifications

The SCA specifications define how to create components and how to combine those components into complete applications. These specifications work on the primary concept of breaking an application/process down into components and defining how those components work together. SCA introduces terms like Service, Component, Composite, Domain, and Contribution, along with a new configuration XML file called SCDL (Service Component Definition Language). Diagram 1 shows these relationships.

Within an SCA application, components can be written in a language-independent way, and the application can even be accessed by the non-SCA world. Although SCA defines the components and is not primarily designed for UI and data services, it provides good integration APIs for the UI and data tiers.
Defining components as remotable or local is reminiscent of remote and local EJBs, which exist to avoid network latency. This information goes in the component itself but not in the SCDL. This can be considered a candidate enhancement to the specifications, as remotable/local component configuration should arguably be set in the SCDL, where composites are configured.
To allow cross-vendor interoperability, all remotable interfaces must be expressible in WSDL, which means they can potentially be accessed via SOAP.
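
For example, marking an interface remotable in the SCA Java programming model looks like the following sketch (the annotation is from the SCA 1.0 Java API; the interface itself is hypothetical):

```java
import org.osoa.sca.annotations.Remotable;

// A remotable interface: parameters are passed by value, and the
// interface must be expressible in WSDL so it can be reached via SOAP.
@Remotable
public interface QuoteService {
    double getQuote(String symbol);
}
```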

A composite can also expose one or more services, where these services are implemented by components within the composite - known as promotion of services.

Shot at SCDL

SCDL takes its name by analogy with WSDL and its contents from Spring's bean configuration. SCDL defines references to other services and the properties defined for a component. Spring-style dependency injection is used to set values into a component via constructor-level injection, setter-method injection, and property-level injection. SCDL also defines the composites, the components (their names and implementation classes), and the bindings for the services defined. Bindings can be assigned to services and to references, and each one specifies a particular protocol.
Bindings separate how a component communicates from what it does; they let the component's business logic be largely divorced from the details of communication. A single service or reference can have multiple bindings, allowing different remote software to communicate with it in different ways, so separating these independent concerns can make life simpler for application designers and developers.
Bindings need not always be declared explicitly: the bindings a service or reference relies on are either chosen by the runtime, for intra-domain communication, or set explicitly in a component's SCDL configuration file (with URI information).
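
Pulling these pieces together, a composite file might look like the following hedged sketch (component names, implementation classes, and URIs are hypothetical; the namespace is from the SCA 1.0 specifications):

```xml
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="LoanComposite">
  <!-- promote the component's service so the composite exposes it -->
  <service name="LoanApproval" promote="LoanComponent"/>

  <component name="LoanComponent">
    <implementation.java class="com.example.loan.LoanServiceImpl"/>
    <!-- property value injected into the implementation -->
    <property name="currency">USD</property>
    <!-- reference wired to another component in the same composite -->
    <reference name="creditCheck" target="CreditComponent"/>
    <!-- bind the service to SOAP/HTTP with an explicit URI -->
    <service name="LoanService">
      <binding.ws uri="http://localhost:8080/loan"/>
    </service>
  </component>

  <component name="CreditComponent">
    <implementation.java class="com.example.loan.CreditCheckImpl"/>
  </component>
</composite>
```

Swapping implementation.java for implementation.bpel (point 2 in the bottom line below is about exactly this) changes how the runtime executes the component without touching the rest of the assembly.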

Domain-Runtime dependency

The runtime wires the components together (when references are defined) by creating proxy objects for a component's references. It's up to the SCA runtime to generate WSDL interfaces from the Java interfaces (for services defined with a WS binding), fix up the service to be callable via SOAP, and do everything else required to let the component communicate via web services.
The concept of wiring is similar to the one introduced by Spring: "A wire is an abstract representation of the relationship between a reference and some service that meets the needs of that reference. Exactly what kind of communication a wire provides can vary: it depends on the specific runtime that's used, what bindings are specified (if any), and other things."

Even though an SCA composite runs in a single-vendor environment, it can still communicate with applications outside its own domain. All of the communication between components and composites within each domain is done in a vendor-specific way. An SCA application communicating with another SCA application in a different domain sees that application just like a non-SCA application; its use of SCA isn't visible outside its domain.

Every SCA runtime also provides an SCA binding. Unlike the other bindings, the SCA binding is only used when a service and its client are both running in the same domain.

SCA defines a policy association framework that allows policies and policy subjects specified using WS-Policy and WS-PolicyAttachment, as well as other policy languages (like WS-ReliableMessaging), to be associated with SCA components. These policies can be interaction policies (affecting the interaction at runtime) or implementation policies (affecting how components behave at runtime). A policy set can be attached to a component or composite in SCDL and is enforced by the SCA runtime.

Diagram 2 shows the SCA runtime's relationship with other containers.

Bottom Line -

1. A primary goal of SCA composites is to provide a consistent way to assemble components built with different technologies - Spring, BPEL, Java - into coherent applications, and so make this diversity more manageable. However, Spring and JEE (EJB, JAX-WS) have long provided many of these features and are widely used by industry, and the few runtimes available for SCA (Tuscany, Fabric3) mean there is a long road ahead for it. Even though SCA can be used as one of the frameworks to build an SOA, that is not the intent of SCA (from the specifications), and I don't agree with Ravi's view on the same.
2. If a component's implementation is changed from Java to BPEL, then with small changes in the SCDL configuration file the runtime will behave differently to execute the BPEL component (without changing the component definition).
3. Integration with the policy framework for components is done in SCDL.
4. Presently one can't create a component that spans multiple domains, and hence vendor lock-in exists for the SCA runtime hosting a component.
5. SCA runtimes are trying to add extensions using OSGi.

So it looks like the SCA specifications have taken a lot of ideas from existing technologies (Spring, EJB, JAX-WS) and applied configurable policies and bindings on top.

So wait for next post for more details on SCA.

--Amit G Piplani--

Reference -
SCA v/s JBI Article by Ravi Venkidapathy
Introducing SCA by David Chappell

Attachments in SOAP Messages

Web services rely on SOAP, an XML-based communication protocol for exchanging messages between computers regardless of their operating systems or programming environments. SOAP is the de facto standard messaging protocol used by web services and codifies the use of XML as an encoding scheme for request and response parameters, using HTTP as a means of transport. To reduce memory requirements, message size, and processing time for SOAP messages, attachments are used: they prevent large volumes of data from being sent inline in the SOAP message, and they allow non-XML data (e.g. media files) to accompany a SOAP message. The following approaches have been used to send attachments with SOAP messages:
1. WS-Attachments over DIME(Direct Internet Message Encapsulation)
DIME is a packaging mechanism that allows multiple records of arbitrarily formatted data to be streamed together. Records are serialized into the stream one after the other and are delineated with an efficient binary header. For large records, or records where the size of the data is not initially known, DIME defines a "record chunk". WS-Attachments indicates that the primary SOAP message part (the main message) must be contained in the first record of a DIME message, and defines the use of the HREF attribute for making a reference to an attachment. For the most part this is similar to simply sending the primary SOAP message part on its own, except that the HTTP Content-Type header must be set to "application/dime" and the body of the HTTP request is the DIME message instead of the SOAP message.
2. SOAP with Attachments ( SwA)
SwA defines a way of binding attachments to a SOAP envelope using the multipart/related MIME type. However, MIME content cannot be represented as an XML Infoset - this effectively breaks the web services model, since the attachments cannot be secured using WS-Security.
3. Message Transmission and Optimization Mechanism(MTOM)
MTOM is based on MIME and includes attachments as part of the Infoset (since SOAP 1.2 is built around the Infoset), thus making the SOAP 1.2 processing model applicable to the attachments as well. MTOM combines the composability of Base64 encoding with the transport efficiency of SOAP with Attachments: non-XML data is processed just as it is with SwA - the data is simply streamed as binary data in one of the MIME message parts.
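
The transport-efficiency point can be made concrete with a small JDK-only calculation: Base64 inflates binary content by roughly a third, which is exactly the overhead MTOM avoids by streaming raw bytes in a MIME part (the payload size below is arbitrary):

```java
import java.util.Base64;

public class Base64Overhead {
    public static void main(String[] args) {
        byte[] binary = new byte[3000]; // stand-in for a small binary payload
        String encoded = Base64.getEncoder().encodeToString(binary);
        // every 3 input bytes become 4 output characters (~33% larger)
        System.out.println("raw=" + binary.length);
        System.out.println("base64=" + encoded.length());
    }
}
```

For a 3000-byte payload the encoded form is 4000 characters, before any XML escaping or parsing cost is even counted.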
MTOM is composed of three distinct specifications:
• MTOM CORE describes an abstract feature for optimizing the transmission and/or wire format of a SOAP 1.2 message by selectively encoding portions of the message, while still presenting an XML Infoset to the SOAP application.
• XOP (XML-binary Optimization Packaging) specifies the method for serializing XML Infosets with non-XML content into MIME packages.
• The Resource Representation SOAP Header Block specification defines a SOAP header block that can carry resource representations within SOAP messages.
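
On the wire, XOP replaces the Base64 element content with a reference to a MIME part. A hedged sketch of the optimized SOAP body (element names and the content ID are hypothetical; the xop namespace is from the XOP recommendation):

```xml
<soap:Body>
  <uploadPhoto>
    <photo>
      <!-- the raw bytes travel in a separate MIME part
           whose Content-ID matches the href below -->
      <xop:Include xmlns:xop="http://www.w3.org/2004/08/xop/include"
                   href="cid:image@example.org"/>
    </photo>
  </uploadPhoto>
</soap:Body>
```

To the SOAP 1.2 processing model, the photo element still logically contains the binary data, so it can be signed or encrypted like any other Infoset content.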
MTOM attachments are streamed as binary data within a MIME message part, making it fairly easy to pass MTOM attachments to an SwA implementation or to receive SwA attachments into an MTOM implementation. Image 1 describes an attachment using MTOM.

The MTOM specification is in fact part of the Messaging pillar (which includes SOAP, WS-Addressing, and MTOM) of WSIT and of the Web Services Enhancements (WSE) specifications for Microsoft .NET. Image 2 compares the different attachment approaches in SOAP messages.

MTOM support in the Metro distribution is fully interoperable with .NET clients and servers, and MTOM is currently the best option for sending attachments in SOAP messages.

Hibernate Search ---- Bridging Lucene and Hibernate

Hibernate Search is a project that complements Hibernate Core by providing the ability to run full-text search queries on persistent domain models. This article introduces Lucene and Hibernate, the difficulties of integrating a full-text search engine with a domain model, and how Hibernate Search overcomes those problems.

Lucene is a powerful full-text search engine library hosted at the Apache Software Foundation. It has rapidly become the de facto standard for implementing full-text search solutions in Java. Lucene consists of core APIs that allow indexing and searching of text.

Hibernate Core is probably the most famous and most used ORM tool in the Java industry. An ORM lets you express your domain model in a pure object-oriented paradigm, and it persists this model to a relational database transparently for you. Hibernate Core lets you express queries in an object-oriented way through the use of its own portable SQL extension (HQL), an object-oriented criteria API, or a plain native SQL query. Typically, ORMs such as Hibernate Core apply optimization techniques that a handcoded SQL solution would not: transactional write-behind, batch processing, and first- and second-level caching.

Many Web 2.0 applications provide extensive text-based search functionality to their end users. A simple text search on a column can be implemented using Hibernate criteria queries and HQL. But as search gets more complicated - involving multiple column values, ranking of search results, and so on - applications turn to Lucene as a full-text search engine over the domain model. The difficulties of integrating Lucene into a Java application that is centered on a domain model and uses Hibernate or Java Persistence to persist data are:

• Structural mismatch - how to convert the object domain into the text-only index; how to deal with relations between objects in the index; how to manage the type conversion to String, which is the form Lucene uses to store the index.
• Synchronization mismatch—How to keep the database and the index synchronized all the time.
• Retrieval mismatch—How to get a seamless integration between the domain model-centric data-retrieval methods and full-text search.

Hibernate Search leverages the Hibernate ORM and Apache Lucene (full-text search engine) technologies to address these mismatches. Hibernate Search is a bridge that brings Lucene features to the Hibernate world. Hibernate Search hides the low-level and sometimes complex Lucene API usage, applies the necessary options under the hood, and lets you index and retrieve the Hibernate persistent domain model
with minimal work.

So let's see how Hibernate Search works and what needs to be done on top of the Hibernate configuration to achieve indexing on the domain model.
First, Hibernate Search works on top of both Hibernate and JPA. To start, hibernate-search.jar and lucene-core.jar need to be added to the classpath, and the index-directory property in the Hibernate configuration must be set to indicate where the index files are stored on the file system. If the application is using only Hibernate Core (not Hibernate Annotations), then additional configuration is needed to register the Hibernate event listeners that fire whenever an update, insert, or delete happens on an entity. The index then stays synchronized with the database state automatically and transparently for the application. This feature overcomes the synchronization mismatch.
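
A hedged sketch of that configuration as it looked in the Hibernate Search 3.x era (property names are assumed from that version's documentation; the index path is arbitrary):

```xml
<!-- hibernate.cfg.xml excerpt -->
<property name="hibernate.search.default.directory_provider">
    org.hibernate.search.store.FSDirectoryProvider
</property>
<property name="hibernate.search.default.indexBase">
    /var/lucene/indexes
</property>
```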

Once configuration is done, we map the object model to the index model. First, searchable entities should be annotated with @Indexed; Hibernate Search gathers the list of indexed entities from the persistent entities marked with the @Indexed annotation and stores their indexes in the directory configured in the configuration file. The second thing to do is to add @DocumentId on the entity's identity property. Hibernate Search uses this property to make the link between a database entry and an index entry; this document ID allows the document (in the index) to be updated when the entity object is updated. To index a property, we use the @Field annotation, which tells Hibernate Search that the property needs to be indexed in the Lucene document. This overcomes the structural mismatch problem.
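
Put together, a mapped entity might look like the following sketch (the class and its fields are hypothetical; the annotations are from the Hibernate Search API and JPA):

```java
@Entity
@Indexed
public class Employee {
    @Id
    @GeneratedValue
    @DocumentId            // links the Lucene document to this database row
    private Long id;

    @Field                 // tokenized and stored in the Lucene document
    private String title;

    @Field
    private String speciality;

    // getters and setters elided
}
```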
Once the domain model has been set up for indexing, let's dive into the core features of indexing and searching. Hibernate Search extends the Hibernate Core main API to provide access to Lucene capabilities. A FullTextSession is a subinterface of Session; similarly, a FullTextEntityManager is a subinterface of EntityManager. These two subinterfaces have the ability to manually index an object. Hibernate Search provides a helper class, Search, to retrieve a FullTextEntityManager from an EntityManager, and a similarly named helper to retrieve a FullTextSession from a Session. The following snippet of code indexes a list of Employee objects:
FullTextEntityManager ftem = Search.getFullTextEntityManager(em);
List<Employee> emps = em.createQuery("select e from Employee e").getResultList();
for (Employee emp : emps) {
    ftem.index(emp); // add each entity to the Lucene index
}

The Lucene index will then contain the information necessary to execute full-text queries matching these employees. Hibernate Search's query facility integrates directly into the Hibernate query API and returns Hibernate-managed objects out of the persistence context after running the search. The following snippet of code builds a Hibernate Search query on top of a Lucene query and runs it:
String searchQuery = "title:\"Software Engineer\" OR speciality:Java";
QueryParser parser = new QueryParser("title", new StandardAnalyzer());
org.apache.lucene.search.Query luceneQuery = parser.parse(searchQuery);

FullTextSession ftSession = Search.getFullTextSession(session);
org.hibernate.Query query = ftSession.createFullTextQuery(luceneQuery, Employee.class);
List results = query.list();
//iterates the searched employees from list.
The query objects are respectively of type org.hibernate.Query or javax.persistence.Query, but the returned result is composed of objects from the domain model, not Document instances from the Lucene API.

By focusing on ease of use and smooth integration with existing Hibernate applications, Hibernate Search makes the full-text search benefits of Lucene easy and affordable in any web application.

--Amit G Piplani--

Complex Event Processing (CEP) – Part 1

Both Service Oriented Architecture (SOA) and Event Driven Architecture (EDA) are architectural styles that promote the concept of loose coupling through distributed computing. Although SOA helps deliver a loosely coupled solution, the resulting solution is generally synchronous in nature. In contrast, EDA provides loose coupling using the asynchronous publish-and-subscribe pattern. Having said that, SOA and EDA are not mutually exclusive, and they bring complementary features to the table: event-driven architecture can complement SOA because services can be activated by triggers fired on incoming events.

So, what is an event? How is it different from the existing units of work in other architectures? What differentiates event processing from the other architectural paradigms? Our goal is to shed some light on these questions in this article and, hopefully, to go further in the forthcoming version. An event can be defined as a significant change in the state of a system, and a complex event is an abstraction of other events. What differentiates EDA from other paradigms is that the system is described as a succession of events and the subsequent processing/handling of these events. At a minimum, we are then looking at "events" and "event handlers". There are three well-defined event-processing styles in EDA:
• Simple Event Processing (SEP) - concerned with simple events that are directly related to specific, measurable changes of condition. The common example of SEP is the typical pub-sub pattern used in industry.
• Event Stream Processing (ESP) - deals with processing multiple streams of event data with the goal of identifying the meaningful events within those streams.
• Complex Event Processing (CEP) - deals with processing multiple events with the goal of identifying the meaningful events within an event cloud (a range of events generated from multiple systems).
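
To make the stream-processing idea concrete, here is a sketch of a continuous query in Esper's EPL, the query language of the open-source engine mentioned below (the event type and its properties are hypothetical):

```sql
select symbol, avg(price)
from StockTick.win:time(30 sec)
group by symbol
```

Unlike a SQL query run once against stored rows, this statement stays registered with the engine and emits an updated average per symbol continuously, over a 30-second sliding window, as StockTick events arrive.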
Then comes the perennial question - how is ESP different from CEP? We will strive to bring out some of the differences between ESP and CEP through the following illustration:

So, how is CEP changing the development/tools landscape? SOA middleware vendors have expanded their CEP capabilities in order to offer event-driven architecture as an alternative or supplement to SOA. More vendors are working to "CEP-enable" their BPM environments and BAM tools to support split-second responses to changing business conditions. Moreover, enterprise service bus (ESB) vendors are investing in CEP to provide a user-friendly event aggregation, correlation, and visualization overlay to their publish-and-subscribe environments. Aleri, Apama (Progress), AptSoft (IBM), Coral8, StreamBase, TIBCO BusinessEvents, BEA Event Server, and IBM's InfoSphere Streams are a few of the commercial ESP/CEP engine providers, whereas Esper is an open-source ESP/CEP engine.

An EDA is a core requirement for most CEP applications. When an organization has implemented an EDA and event-enabled its business-sensory information, it can consider deploying CEP functionality in the form of high-speed rules engines, neural networks, Bayesian networks, and other analytical models. In upcoming articles, we will concentrate on a functional reference architecture for complex event processing and an event stream processing engine for event-driven architectures.

--Amit G Piplani--

Multi-Tenancy Security Approach

Security is the key factor for any application hosted as a “Software as a Service” (SaaS) model. Data is the most important asset for most business applications - data about employees, products, customers, suppliers, and more. There are three main approaches for data architecture for SaaS applications –
1. Totally isolated: Separate databases per tenant
2. Partially shared: Shared database, separate schema
3. Totally shared: Same database, same schema
We will be focusing on the data security of multi-tenant SaaS applications. The data isolation for security conscious tenants is more enhanced with approaches #1 or #2 above for data architecture. But implementing the totally shared approach for data architecture requires more additional effort for data security with well-defined security mechanisms. So here we will cover data security by explaining the Encryption and Permissions based Shared Data mechanisms for the “totally shared database” approach.
Sharing approach for data security: The isolated and partially shared approaches to data architecture secure the database/schema and even the tables by granting access per tenant, making it easier to isolate data at the tenant level. The diagram below shows an approach to achieving the same in totally shared database tables (which need to be secured).
In the above approach, horizontal partitioning is done on the master table using a tenant ID, and access to the partitioned tables (one per tenant) is granted by tenant ID, thereby securing database table access by tenant. The SaaS application can be coded to always perform create/update/delete operations on the master table, with a trigger on the master table updating the appropriate tenant-specific table, while read operations are performed on the partitioned table for better performance. With this approach, we make sure that each tenant accesses only the data/tables to which it has been granted permission.
Note that the above approach requires partitioning at the database level and application changes to read from the appropriate tables. Another way to achieve shared security is to use a SQL view per tenant: the application uses the SQL view to access the data in a given table based on the tenant ID, which restricts the sharing of data with other tenants. This approach does not separate the data in the database by tenant, however, and does not simplify per-tenant backups.
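
The per-tenant view approach can be sketched as follows (the table, columns, tenant ID, and account names are all hypothetical):

```sql
-- one view per tenant over the shared table, filtered by tenant ID
CREATE VIEW orders_tenant42 AS
  SELECT order_id, product, amount
  FROM orders
  WHERE tenant_id = 42;

-- the tenant's application account can only read through its view
GRANT SELECT ON orders_tenant42 TO tenant42_app;
```

The application account never receives a grant on the underlying orders table, so the WHERE clause becomes the isolation boundary.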
Encryption for data security: Irrespective of which data-sharing approach has been followed for a multi-tenant database, the data needs to be encrypted before being persisted in a cloud or any multi-tenant environment. Encryption is especially important in situations involving high-value data or privacy concerns, or when multiple tenants share the same set of database tables. The ideal approach is to use shared keys for encrypting data while communicating with SaaS applications: a symmetric key and asymmetric keys. Symmetric keys are used for encrypting and decrypting the data in storage; asymmetric keys are used for encrypting and decrypting the symmetric keys during data transit. During the tenant provisioning process, the SaaS application provider gives a private key to the tenant and keeps each tenant's public key in its key store, so the data is always stored encrypted (with symmetric keys) in the database. The SaaS provider can keep a different symmetric key for each tenant ID at its end. In short, the key to privacy in the cloud is the strict separation of sensitive data from non-sensitive data, followed by encryption of the sensitive elements.
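
A minimal JDK-only sketch of the envelope pattern described above: AES (symmetric) protects the stored record, and the tenant's RSA key pair (asymmetric) protects the AES key itself. The key sizes and the default ECB cipher mode are chosen for brevity, not for production use:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class TenantEnvelopeEncryption {
    public static void main(String[] args) throws Exception {
        // Symmetric key: encrypts the tenant's data at rest
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey dataKey = kg.generateKey();

        // Asymmetric pair: the provider keeps the public key,
        // the tenant holds the private key
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair tenantPair = kpg.generateKeyPair();

        // Encrypt a record with the symmetric key before storing it
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, dataKey);
        byte[] stored = aes.doFinal("sensitive tenant record".getBytes(StandardCharsets.UTF_8));

        // Wrap (encrypt) the symmetric key with the tenant's public key for transit
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.WRAP_MODE, tenantPair.getPublic());
        byte[] wrappedKey = rsa.wrap(dataKey);

        // Tenant side: unwrap the symmetric key, then decrypt the record
        rsa.init(Cipher.UNWRAP_MODE, tenantPair.getPrivate());
        SecretKey unwrapped = (SecretKey) rsa.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);
        aes.init(Cipher.DECRYPT_MODE, unwrapped);
        byte[] plain = aes.doFinal(stored);

        System.out.println("decrypted: " + new String(plain, StandardCharsets.UTF_8));
    }
}
```

Only the wrapped symmetric key ever crosses the wire; the private key needed to unwrap it stays with the tenant.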
In addition to this, a tenant should perform off-site backups (possibly encrypt it as well) to make sure that any current and historical data can be recovered even if a SaaS application provider goes out of business. Organizations have expressed concerns over hosting their sensitive data on the cloud and these need to be assessed for organization specific needs, standards and regulatory requirements.
In addition to data security, a SaaS application provider should secure their infrastructure with network and host security. With the above mentioned approach, multi-tenant data security looks much better than with traditional internal data center. We will cover multi-tenant UI security with respect to authentication and authorization in an upcoming article.

Multi-Tenant Database Comparison

The three approaches compared below: Separate Databases; Shared Database, Separate Schema; Same Database, Same Schema.
1. Definition

Separate Databases - Each tenant has its own set of data that remains logically isolated from data that belongs to all other tenants.
Shared Database, Separate Schema - Each tenant has its own set of tables, grouped into a schema created specifically for that tenant. This approach accommodates more tenants per server than the separate-databases approach.
Same Database, Same Schema - Uses the same database and the same set of tables to host multiple tenants' data, and allows serving the largest number of tenants per database server.

2. Security

Separate Databases - Database security prevents any tenant from accidentally or maliciously accessing other tenants' data. Easy to implement.
Shared Database, Separate Schema - Moderate degree of logical data isolation for security-conscious tenants, but less than the separate-databases approach.
Same Database, Same Schema - This approach may incur additional development effort in the area of security, to ensure that tenants can never access other tenants' data, and requires strong data safety.

3. Extensibility

Separate Databases - Easy to extend the data model per tenant need.
Shared Database, Separate Schema - Also easy to extend the data model per tenant need.
Same Database, Same Schema - Customizations happen through additional columns/extra tables, so the model is harder to understand and maintain.

4. Data recovery
Separate Databases -Easy to implement.
Shared Database, Separate Schema - Harder to restore the data.
Same Database, Same Schema - Very hard to restore the data.

5. Cost

Separate Databases - Very costly.
Shared Database, Separate Schema - Less costly than separate databases.
Same Database, Same Schema - Lowest hardware and backup costs. Applications optimized for a shared approach tend to require a larger development effort but lower operational costs.

6. When to use which approach

Separate Databases - Appropriate for customers that are willing to pay extra for added security and customizability; when some or all tenants need to store very large amounts of data; when a large number of users per average tenant must be supported; or when per-tenant value-added services (e.g. per-tenant backup and restore) are offered.

Shared Database, Separate Schema - Appropriate for applications that use a relatively small number of database tables, and when customers accept having their data co-located with that of other tenants.

Same Database, Same Schema - Appropriate when it is important that the application be capable of serving a large number of tenants with a small number of servers, and prospective customers are willing to surrender data isolation in exchange for lower costs. If there are a very large number of rows in the affected tables, performance can suffer noticeably for all the tenants the database serves; this can be addressed by scaling out the shared database through horizontal (row-based) partitioning based on tenant ID.

--Amit G Piplani--