Monthly Archives: December 2011

Applied SOA: Part 6–Service Composition

This is part of a series of blog posts about Applied SOA. The past blog entries are:

In this article, I’ll quickly cover service composition.

Service Composition is often stated as a goal of SOA.  I think it's rather a consequence of a Service Oriented Architecture.  You first deploy building-block services.  Once those are running, you can start deploying higher-order services that reuse those building blocks.

As we've seen in the Service Taxonomy blog entry, a good service taxonomy helps you structure service composition.  The one I presented has composition built in.  Typically, a business process composes many services.  An activity within a process might also compose a few more agnostic services.

Composition is good; it is reuse incarnate, so we should want it everywhere, shouldn't we?  Well, as with everything sweet, there's a tension related to its usage.  In the case of composition, that tension is with isolation.  As we've seen in different SOA definitions, isolation is important.  Composition compromises isolation in many ways:

  • It couples SLAs:  the composing service is only as strong as its weakest link at any moment in time.
  • It couples services time-wise:  composition increases latency and also requires the composed services to be up and running.
  • It often couples service contracts as well, if the composing service exposes data from the composed services.  One way or another, changes in a composed service will affect the composing service.

This tension is the reason why you won't want to stack your services into many layers.  Doing so would result in weak SLAs, long response times and high maintenance costs.  If you do not layer at all, you'll end up with duplicated logic.  As usual, patterns and judgement should help you balance your solution!
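The SLA coupling above can be made concrete with a bit of arithmetic.  This is a minimal sketch (the numbers are purely illustrative) of how availability degrades and latency accumulates when a service composes others sequentially:

```python
# Sketch: how composition couples SLAs and latency (illustrative numbers).
# Assuming sequential, independent calls, a composing service's availability
# is roughly the product of its dependencies' availabilities, and its added
# latency is the sum of theirs.

def composite_availability(availabilities):
    """Availability of a service that needs every composed call to succeed."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def composite_latency(latencies_ms):
    """Added latency when composed services are called one after the other."""
    return sum(latencies_ms)

# Three composed services, each with a 99.9% SLA and 50 ms latency:
availability = composite_availability([0.999, 0.999, 0.999])
latency = composite_latency([50, 50, 50])

print(round(availability, 6))  # 0.997003 -- already below any single 99.9% SLA
print(latency)                 # 150
```

With only three layers of composition, the composing service can no longer promise the 99.9% its building blocks each offer, which is exactly the weakest-link effect described above.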


Integration in the Cloud

I ran across a series of good blog posts I wanted to share with you, dear readers.

From Richard Seroter, the series is about integration patterns in the cloud, looking at cloud-to-cloud & cloud-to-on-premise scenarios.

Richard looks at three main patterns:

For each pattern, Richard describes the pattern, the general challenges, the challenges specific to the cloud and, finally, he demonstrates the pattern using different cloud providers (e.g. Google, Amazon, Azure).

His text is very thorough yet straight to the point.  He puts his finger on the right issues.  Great work Richard!

Departmental Application Migration to Azure Blog Series

Back in summer 2010, I did a proof of concept around Windows Azure & ADFS.  The challenge was:

How can we deploy a departmental application in the cloud and have employees connecting on it using their corporate account?

Basically, how to project a corporate account in the cloud?

This was before Azure AppFabric Access Control.  The solution had a web application using Active Directory Federation Services (ADFS) directly.

I blogged the experience of that POC over here:

Applied SOA: Part 5 – Interoperability

This is part of a series of blog posts about Applied SOA. The past blog entries are:

In this article, I'll cover Interoperability.  SOA's aim is to expose capabilities through services in order to let different agents leverage those capabilities.  A common theme is therefore interoperability.  Given the universal reach of services (as opposed to objects or components, for instance), the interoperability bar tends to be quite high.

Technical Interoperability

Now, the first thing that pops into technologists' minds when interoperability is mentioned is technical interoperability:  the ability for two systems to communicate with each other despite differences in their underlying technology stacks.  This is no small challenge.

Indeed, we would think that with SOAP & WS-*, all those interoperability problems are behind us, all taken care of by standardisation bodies, right?  Well, not quite.  Although those standards enable basic interoperability in an unprecedented way, they still do not guarantee it.  The core of the problem is the way the SOAP specs can be interpreted, and how differently they were interpreted by different vendors.

As a result, it isn't trivial to achieve interoperability between heterogeneous systems.  For instance, Microsoft published, four months ago, a package of bindings to use with different vendors.

Two opposing forces are at play when we consider technical interoperability:

  • The richness of the protocol used in the services
  • How interoperable we want it to be

For instance, consider the following list of service protocols:

  • WS-* (e.g. WS-Atomic, WS-Security, etc.)
  • WS-I Basic Profile
  • REST / OData
  • SQL Views
  • File share

As we go down the list, the protocols get simpler and simpler.  The first ones allow rich scenarios (e.g. distributing a transaction, sharing a claims-based security context, etc.) while the latter ones do not.  The number of potential clients grows as we go down, though:  who can't drop a file on a file share?

Now, once you've decided how much interoperability you can afford given the protocols your solution needs, you have many tactics to make it happen.  For web services, this typically involves staying closer to the protocol standards and not letting your development platform do as much as it otherwise would.

Indeed, many web service platforms offer a great story for quickly authoring web services by hiding a lot of the gory details of SOAP.  Different development platforms (e.g. .NET vs Java) take different decisions on those details, though, which makes them incompatible.  Pesky details such as field ordering, the mapping of native types to SOAP types, how lists of objects are mapped to XML schema, etc., can all introduce incompatibilities between a service and its consumers if they are written on different platforms.
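Field ordering is the easiest of those pesky details to picture.  Here is a minimal sketch (the payloads and the contract order are hypothetical) of how two stacks can serialize the same logical data yet be incompatible on the wire, because a strict consumer validates children against an xs:sequence:

```python
# Sketch: why field ordering matters between platforms (hypothetical payloads).
# Two stacks serialize the same logical Employee data but emit child elements
# in a different order; a consumer validating against an <xs:sequence> in the
# contract accepts one and rejects the other.
import xml.etree.ElementTree as ET

def serialize(order):
    """Build the same Employee payload with a platform-specific field order."""
    root = ET.Element("Employee")
    fields = {"FirstName": "Jane", "LastName": "Doe"}
    for name in order:
        ET.SubElement(root, name).text = fields[name]
    return ET.tostring(root, encoding="unicode")

platform_a = serialize(["FirstName", "LastName"])  # e.g. contract order
platform_b = serialize(["LastName", "FirstName"])  # e.g. alphabetical order

expected_sequence = ["FirstName", "LastName"]      # the xs:sequence in the WSDL

def validates(xml_text):
    """Strict consumer: element order must match the contract exactly."""
    return [child.tag for child in ET.fromstring(xml_text)] == expected_sequence

print(validates(platform_a))  # True
print(validates(platform_b))  # False -- same data, incompatible on the wire
```

Both payloads carry identical information; only the serialization decision differs, which is precisely why leaving those decisions entirely to the platform is risky.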

All that being said, those problems are known and great guidelines exist.  For the .NET platform, an excellent tool for authoring interoperable web services is WSCF (Web Service Contract First) by thinktecture.  Such tools accelerate protocol-centric development instead of masking its details.

The key here is to test with the different target platforms.  It's quite similar to multi-browser web development, really.

Business Interoperability

Another aspect of interoperability is business interoperability.  Let's say we've solved the technical interoperability problem and have a great recipe for building web services that interoperate with our target consumers, but we do not speak the same business language.  How interoperable are we, really?

An example I like to give concerns two hypothetical services within a consulting company (yeah, I love consulting company examples!).  The first service returns the list of employees who didn't fill in their weekly timesheets, while a second service sends an email to an employee.  We could easily combine those two services to warn offenders that they haven't filled in their timesheets yet, couldn't we?  What if the first service's signature looks something like this:

EmployeeData[] GetOffenders()

where the EmployeeData data contract has the following shape:

EmployeeData :
  EmployeeNumber : integer,
  FirstName : string,
  LastName : string

and the second service looks like this:

SendEmail(emailAddress : string, mailBody : string)

Well…  we've got ourselves a typical data mismatch problem.  The first service identifies employees by employee number while the second one expects their email address.

A simplistic example, but quite a common one in large enterprises, especially those where acquisitions have created a patchwork of different semantics (e.g. here a project is something with a billing code and people staffed on it, there it's an aggregation of teams with no budget semantics).
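In practice, bridging the timesheet mismatch above means inserting a translation step between the two services.  This is a minimal sketch; the two service stand-ins and the employee-number-to-email directory are hypothetical, as in a real solution the lookup would itself be another service or a master data store:

```python
# Sketch of an adapter bridging the employee-number / email mismatch.
# The two services are stubbed; the DIRECTORY lookup is hypothetical.

def get_offenders():
    """Stand-in for the first service: returns EmployeeData-like records."""
    return [{"EmployeeNumber": 1042, "FirstName": "Jane", "LastName": "Doe"}]

DIRECTORY = {1042: "jane.doe@example.com"}  # hypothetical reference data

sent = []
def send_email(email_address, mail_body):
    """Stand-in for the second service: records what would be emailed."""
    sent.append((email_address, mail_body))

def warn_offenders():
    """Compose the two services, translating identifiers along the way."""
    for employee in get_offenders():
        email = DIRECTORY[employee["EmployeeNumber"]]  # the translation step
        send_email(email, "Please fill in your weekly timesheet.")

warn_offenders()
print(sent[0][0])  # jane.doe@example.com
```

The adapter itself is trivial; the hard part, in a large enterprise, is owning and maintaining the reference data the translation step depends on.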

Again, this is a problem with solid existing solutions, but it is still something to be mindful of when elaborating a Service Oriented Architecture.  Two typical solutions to this problem are:

  • Schema rationalisation (or Canonical Data Model, CDM), where services expose a canonical model in which each entity has only one meaning and representation.
  • Entity Aggregation (e.g. Master Data Management), where different semantics and representations are allowed to exist but a central authority performs reconciliation.

In conclusion, it is important to define what type of interoperability (both technical & business) we target and to take the necessary steps to achieve it given the challenges of the solution’s context.

Applied SOA: Part 4 – Service Taxonomy

This is part of a series of blog posts about Applied SOA. The past blog entries are:

In this article, I’ll cover the Service Taxonomy.  On an SOA project, we’re going to have services.  How should we classify them?

Why would we want to categorise them?

First, to give structure to the solution.  Without structure, it's hard to govern a large number of elements, especially if those elements are services.

Second, at the architecture level, we can state rules for categories as opposed to individual services.  Again, this is structure:  we get to manage fewer pieces.

Third, at the design level, the taxonomy will help us clarify the boundaries of a service.  At the end of the day, there are two extremes:  either you have one service with all the operations of your company in it, or a lot of services with one operation each.  A service is just a container, and unless you have a clear taxonomy, which operation belongs to which service can be a matter of taste and individual preference, which seldom yields strong design decisions.

Finally, it gives us the ability to reason about the structure at a higher level without getting caught up in the intricacies of specific services.

There are many ways to categorise services.  A popular one was defined by Thomas Erl in his book SOA Design Patterns.  That taxonomy is based on a few patterns exposed in the book.  Each pattern invokes separation of concerns in order to define a service layer.  Two axes are used for the separation of concerns:

  • Agnosticism to business context
  • Business Logic

A component is said to be agnostic to the business context if it can be invoked from any business process and behave identically.  A component is said to be business logic specific if it contains business logic.

This taxonomy is quite useful.  It defines three layers:

  • Utility Layer, agnostic of both business context and business logic.  In this layer live services that manipulate information without applying business logic to it or varying depending on the business context.  Typically those services offer communication capabilities:  email, pub-sub, mapping, etc.
  • Entity Layer, agnostic of the business context but implementing business logic.  Services in this layer are typically data-centric.
  • Task Layer, business-context sensitive and implementing business logic.  Services in this layer typically participate in the activities of a specific business process.

Reusability is maximal at the Utility Layer and minimal at the Task Layer.  At the Task Layer, a service is specific to a business process (context) and can’t be reused elsewhere.

Business Logic increases as we go to the Task Layer.

Composition flows from the Task Layer to the Entity Layer and on to the Utility Layer.  By composition, I mean that a service from one layer can call a service from another one.  This is probably the strongest property of this taxonomy:  the composition rule is deduced from the definition of the taxonomy as opposed to being imposed as another principle.

For example, a service in the Entity Layer can, by definition, be called from any business process.  It cannot, therefore, call a service in the Task Layer, which is specific to only one business process.
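The deduced composition rule is simple enough to express as a check, which is handy during design reviews.  This is a minimal sketch under one assumption of mine (that calls within the same layer are allowed; the taxonomy itself only forbids calling back up into a less agnostic layer):

```python
# Sketch: the composition rule deduced from the taxonomy, as a simple check.
# Assumption: a service may call services in its own layer or any more
# agnostic layer, never a less agnostic one.

LAYER_RANK = {"Task": 0, "Entity": 1, "Utility": 2}  # higher = more agnostic

def may_call(caller_layer, callee_layer):
    """Task may call Entity or Utility; Entity may call Utility;
    nothing may call back up into a less agnostic layer."""
    return LAYER_RANK[callee_layer] >= LAYER_RANK[caller_layer]

print(may_call("Task", "Entity"))   # True
print(may_call("Entity", "Task"))   # False -- Entity is business-context agnostic
```

A check like this could run against a service registry or a dependency graph to flag compositions that violate the taxonomy before they ship.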

I've used this taxonomy in the architecture of a sizable SOA project.  It worked well in terms of giving structure and was quite resilient to changes over the course of the project (e.g. new security requirements, multiple consumers, etc.).

Something we did to enrich the taxonomy was to further subdivide each layer per domain, a domain being an information / process breakdown.  Given those subdivisions, each service could only be within one layer, serving one domain within that layer.  That gave a very strong & clear service boundary definition.

Now, the weakness of the taxonomy, we found, is the vagueness of the definitions of agnosticism to business context & business logic.  It sounds great on paper, but my service designer team, composed of brilliant and seasoned designers, spent the project having philosophical discussions about what is and what isn't business-context agnostic.  It boils down to the fact that whether an operation is aware of the business context isn't always clear.

On smaller projects, I've seen a taxonomy based only on data domains, hence having only an Entity Layer.  This simplifies the clustering of services, as data domains are typically quite clear.  It does mix the concerns of process-centric & data-centric capabilities, though.