Monthly Archives: November 2010

Gartner on Windows Azure AppFabric: A Strategic Core of Microsoft’s Cloud Platform

Yesterday, I talked about the Forrester report on SQL Azure.  Today, I’m going to talk about a Gartner report on Windows Azure AppFabric.  The report can be read on:

Basically, here are the key elements:

  • They acknowledge that cloud computing is a strategic investment for Microsoft, comparing it to the web in the 1990’s.
  • Microsoft is targeting the traditional three layers of cloud computing (IaaS, PaaS & SaaS) and intends to be a leader in all three.
  • Windows Azure AppFabric has emerged as a core platform technology in Microsoft’s Cloud vision.
  • Many teams are involved in cloud computing at Microsoft and, as a result, the lack of synchronization between the different deliveries creates some confusion.

As for their recommendations:

  • Microsoft should deliver a competitive and complete cloud platform within 2 or 3 years.  Gartner estimates that the current state of the platform is incomplete & unproven.  Early adopters should expect a bumpy ride.
  • IT planners should consider Microsoft as a candidate for cloud computing but, again, in the short term the offering isn’t complete.

The paper gives a good overview of the platform as it is today.  That alone gives this paper a lot of value, since this information is otherwise scattered across Microsoft’s sites.

I found their recommendations a bit harsh though.  Calling Windows Azure unproven is a bit extreme given the number of sites running on it today and the fact that Microsoft is slowly moving its own assets onto it as well.

There is no Gartner curve or quadrant, so I’ll have to rely on my natural verbosity at the next cocktail party!


SQL Azure & ACID Transactions: back to 2001

I’ve been meaning to write about this since I read about it back in July; today is the day.

You know I love Microsoft SQL Azure.

The technology impressed me when it was released.  Until then, Azure contained only Azure Storage.  Azure Storage is a great technology if you plan to be the next eBay on the block and shove billions of transactions a day at your back-end.  If you’re interested in migrating your enterprise application or hosting your mom & pop transactional web site in the cloud, it’s both overkill in terms of scalability and a complete paradigm shift.  The latter frustrated a lot of early adopters.  A few months later, Microsoft replied by releasing SQL Azure.  I was impressed.

Not only did they listen to feedback, but they worked around the clock to release quite a nice product.  SQL Azure isn’t just a SQL Server hosted in the cloud.  It’s totally self-managed.  SQL Azure keeps three redundant copies of your data, so it’s totally resilient to hardware failures and maintenance:  like the rest of Azure, it’s built with failure in mind, as something that is part of life and dealt with by the platform.  Also, SQL Azure is trivial to provision:  just log in to the Windows Azure portal and click New SQL Azure…

This enables a lot of very interesting scenarios.  For instance, if you need to stage data once a month and don’t have the capacity in-house, go for it; you’re going to pay only for the time the DB is online.  You can easily sync it with other SQL Azure DBs, and soon you’ll be able to run reporting in the cloud with it.  It’s a one-stop shop and you pay per use:  you don’t need to buy a license for SQL Server, nor for the Windows Server running underneath.

Now that is all nice and you might think:  let’s move everything there!  Ok, it’s currently limited to 50 GB, which is a showstopper for some enterprise applications and certainly a lot of e-commerce applications, but that still leaves a lot of scenarios it can address.

A little caveat I wanted to talk to you about today is…  its lack of distributed transaction support.

Of course, that makes sense.  A SQL Azure DB is a virtual service.  You can imagine that bogging down those services with locks wouldn’t scale very well.  Plus, two SQL Azure databases residing in your account doesn’t mean they reside on the same server.  So supporting distributed transactions would lead to quite a few issues.

Now most of you are probably saying to yourselves:  “who cares, I hate those MSDTC transactions requiring an extra port to be open anyway and I never use them”.  You might not use those directly, but you might have become accustomed to the .NET Framework (2.0 and above) class System.Transactions.TransactionScope.  This wonderful component allows you to write code with the following elegant pattern:

using (var scope = new TransactionScope())
{
    //  Do DB operations

    scope.Complete();
}

This pattern allows you to declaratively manage your transactions:  they are committed when the scope completes and rolled back if an exception is thrown.

Now…  that isn’t supported in SQL Azure!  How come?  Well, yes, you’ve been using it with SQL Server 2005 & 2008 without ever needing the Microsoft Distributed Transaction Coordinator (MSDTC), but maybe you didn’t notice that you were actually using a feature introduced in SQL Server 2005:  promotable transactions.  This allows SQL Server 2005 to start a transaction as a lightweight transaction on one DB and, if need be, promote it over time to a distributed transaction spanning more than one transactional resource (e.g. another SQL Server DB, an MSMQ queue or what have you).
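To see what this upgrade (or “promotion”, in SQL Server 2005 parlance) looks like in practice, here is a minimal sketch; the connection strings are hypothetical, and the promotion point is marked in the comments:

```csharp
using System.Data.SqlClient;
using System.Transactions;

class PromotionSketch
{
    static void Transfer(string firstDbConnectionString, string secondDbConnectionString)
    {
        using (var scope = new TransactionScope())
        {
            using (var source = new SqlConnection(firstDbConnectionString))
            {
                source.Open();  // so far, a lightweight transaction on the first DB only
                // ... commands against the first DB ...
            }

            using (var target = new SqlConnection(secondDbConnectionString))
            {
                // A second durable resource enlists:  the lightweight transaction
                // is promoted to a distributed (MSDTC) transaction right here.
                target.Open();
                // ... commands against the second DB ...
            }

            scope.Complete();  // commit; rollback happens instead if an exception escapes
        }
    }
}
```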

When your server doesn’t support promotable transactions (e.g. SQL Server 2000), System.Transactions.TransactionScope opens a distributed transaction right away.

Well, SQL Azure doesn’t support promotable transactions (presumably because there is nothing to promote to), so when your code runs, it will try to open a distributed transaction and will blow up.

Microsoft’s recommendation?  Use lightweight transactions and manage them manually, using BeginTransaction and then Commit & Rollback on the returned SqlTransaction object.  Hence the title:  back to 2001.
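Concretely, that recommended pattern looks like this; the connection string and table are hypothetical, and the error handling is reduced to the essentials:

```csharp
using System.Data.SqlClient;

static class ManualTransactionSketch
{
    static void DebitAccount(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // BeginTransaction starts a lightweight, single-database transaction.
            SqlTransaction transaction = connection.BeginTransaction();

            try
            {
                using (var command = connection.CreateCommand())
                {
                    command.Transaction = transaction;  // must be assigned explicitly
                    command.CommandText =
                        "UPDATE Accounts SET Balance = Balance - 10 WHERE ID = 1";
                    command.ExecuteNonQuery();
                }

                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }
}
```

Compare this to the TransactionScope pattern above:  the commit and rollback are now your problem again.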

Now, it depends what you do.  If you’re like the vast majority of developers (and even some architects) and you think ACID transactions are something related to LSD, then you probably never manage transactions in your code at all, so this news won’t affect you too much.  If you’re aware of transactions and, like me, embraced System.Transactions.TransactionScope and sprinkled it over your code as if it were paprika on a Hungarian dish, then you might think that migrating to SQL Azure will take a little longer than an afternoon.

Now, it all varies.  If you wrapped your SqlConnection creation in a factory, you might be able to pull something off a little faster.
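Here is a sketch of what I mean by a factory; the class and method names are mine, not an established API.  If all your data-access code obtains its connections (and now its transactions) from one place, switching strategies becomes a one-file change:

```csharp
using System.Data.SqlClient;

// Hypothetical factory:  centralizes connection creation so the move from
// ambient TransactionScope to manual SqlTransaction touches only this class.
public class SqlConnectionFactory
{
    private readonly string connectionString;

    public SqlConnectionFactory(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Returns an open connection with a local transaction already started,
    // instead of relying on an ambient TransactionScope being present.
    public SqlConnection CreateOpenConnection(out SqlTransaction transaction)
    {
        var connection = new SqlConnection(connectionString);

        connection.Open();
        transaction = connection.BeginTransaction();

        return connection;
    }
}
```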

Anyhow, I found that limitation quite disappointing.  A lot of people use SQL Server lightweight transactions, and those would be (I think) relatively easy to support.  The API could simply throw when you try to promote the transaction.  I suppose this would be a little complicated since it would require a new provider for SQL Azure.  This is what I proposed today at:

So please go and vote for it!

Forrester: SQL Azure Raises The Bar On Cloud Databases

On November 2nd 2010, Forrester Research published a report on Microsoft SQL Azure.  The report can be found on Microsoft’s web site:

Basically, they interviewed 26 companies using the technology and concluded that:

  • SQL Azure is reliable
  • It delivers for small to medium scenarios
  • What seems to differentiate it from other cloud or DB vendors:
    • Multitenant architecture, which delivers better pricing
    • Easier to use

Currently, the top size of a SQL Azure DB is 50 GB.  So “medium scenario” here might mean big or small for you, depending on where you are coming from.

Forrester positions Microsoft SQL Azure as a leader in its domain.  They do not have those fancy Gartner quadrants and curves that go over so well at cocktail parties, but the report does deliver the goods:  SQL Azure rocks!

Now, just to show that I’m not only a zealot, I’m going to deliver a critique of one technical capability of SQL Azure in the next blog post 😉

Internet Explorer 9 – Beta Update

An update to Internet Explorer 9 Beta is available from Microsoft as of yesterday (November 23rd 2010).

This is an update to the full browser, as opposed to the developer preview build, which isn’t the full Internet Explorer, although the preview build does work side-by-side with any other version of IE.

Not much is mentioned about what the update brings.  Rumours circulate that a beta 2 will see the light of day before the release candidates.  Stay tuned.

Sharing Data Contracts between clients & servers with WCF Data Services

I’ve been blogging a bit about the OData protocol put forward by Microsoft and even wrote an article about it on Code Project.  That article is supposed to be followed by others about WCF Data Services, the .NET implementation of OData, so…  stay tuned!

I love that OData protocol.  For me, it finally delivers on the vision of web services replacing a database for data access and business logic.  What was missing from SOAP web services was the ability to query.  So yes, you could expose your data on the web, but you had to know in advance what type of queries your clients would need.  If you were not sure, you would pretty much end up with the antics of GetCustomerByID, GetCustomers, GetCustomersByContractID, GetCustomersByFirstName, SearchCustomerByName, GetCustomersWhoWasInTheLobbyWithAPipeWrench and the like.  All those web services were doing only one thing:  being a thin adapter to your back-end store.  Each time I saw them, it reminded me that web services were a young technology and SQL a much more mature one.

With OData, that changed a little, since you can now query your web services, so one web service implementation should satisfy most of your clients’ needs for read operations.  Now, if like me you thought SOAP web services were a young and immature technology, wait until you meet OData and its .NET implementation, WCF Data Services.
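To make “query your web services” concrete, here is the general shape of the OData query options, shown against a hypothetical Customers entity set:

```
Customers?$filter=startswith(Name,'Sm')   only customers whose name starts with "Sm"
Customers?$orderby=Name desc&$top=10      the first ten customers, sorted
Customers?$select=Name,City               a projection on two properties
Customers/$count                          the count of the entity set
```

One read-only endpoint, and the client decides the filtering, sorting and paging.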

WCF Data Services has very little to do with WCF besides being an API for services exposed on the web.  Most of the WCF pipeline is absent, the ABC (Address, Binding & Contract) of WCF is nowhere to be found, and when it doesn’t work, you get a nice “resource not found” message in your browser as the only troubleshooting information.

Nevertheless, it does get the job done and exposes OData endpoints, which are quite great and versatile.  With .NET 4.0, WCF Data Services got some improvements too:  greater querying capabilities (e.g. the ability to count an entity set), interceptors and…  service operations.

Service operations really fill the gap between a SOAP web service with parameters and plain OData where you simply expose an entity set.  A service operation allows you to define an operation with parameters, but one whose result can be further queried (and filtered) by the client.

Now, the client-side story isn’t as neat as WCF’s in general.  With WCF, you can share your service and data contracts between your server and your client.  This isn’t always what you would like to do, but it’s a very useful scenario:  it allows you to share entities between the client and the server and enables you to share components dealing with those entities.  The key API there is ChannelFactory;  instead of using Visual Studio or a command-line tool to generate a proxy, you let that class generate one from your service contract.  It works very well.
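For reference, the WCF pattern I’m describing looks roughly like this; ICustomerService is a hypothetical contract assumed to live in an assembly shared by client and server:

```csharp
using System;
using System.ServiceModel;

// The shared contract (would normally sit in a common assembly).
[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    string GetCustomerName(int id);
}

public static class CustomerServiceClient
{
    public static ICustomerService Create(Uri address)
    {
        // ChannelFactory builds the proxy directly from the shared contract;
        // no Visual Studio-generated proxy code is involved.
        var factory = new ChannelFactory<ICustomerService>(
            new BasicHttpBinding(),
            new EndpointAddress(address));

        return factory.CreateChannel();
    }
}
```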

This doesn’t exist with WCF Data Services.  You’re sort of forced to generate proxies with Visual Studio or command-line tools.  This duplicates your types between your server and client and doesn’t allow you to have components (e.g. business logic) shared between the server and client using data entities.  On top of that, the generation tools don’t support service operations at all.

In order to use service operations on the client side, you need to amend the generated proxy.  Shayne Burgess has a very good blog entry explaining how to do it.  Actually, that blog entry inspired me to write client proxies by hand, allowing us to share data entities across tiers.  Here is how to do it.

First, you need a service operation.  You can learn how to do that here.  The twist we’re going to give here is to define an interface for the model, another one for the service operations and one containing both.  The reason to split those interfaces is that the service operations are implemented in the data service directly, while the non-service operations are in the data model itself.

public interface IPocQueryService : IPocQueryModel, IPocQueryOperations
{
}

public interface IPocQueryModel
{
    IQueryable<FileInfoData> Files { get; }
}

public interface IPocQueryOperations
{
    IQueryable<FileInfoData> GetFilesWithParties(int minPartyCount);
}
Now we can define our client-proxy.

public class PocQueryServiceProxy : DataServiceContext, IPocQueryService
{
    #region Constructors
    public static IPocQueryService CreateProxy(Uri serviceRoot)
    {
        return new PocQueryServiceProxy(serviceRoot);
    }

    private PocQueryServiceProxy(Uri serviceRoot)
        : base(serviceRoot)
    {
    }
    #endregion

    IQueryable<FileInfoData> IPocQueryModel.Files
    {
        get { return CreateQuery<FileInfoData>("Files"); }
    }

    IQueryable<FileInfoData> IPocQueryOperations.GetFilesWithParties(int minPartyCount)
    {
        return CreateQuery<FileInfoData>("GetFilesWithParties")
            .AddQueryOption("minPartyCount", minPartyCount);
    }
}
On the client side, we can use the proxy as if we were talking directly to the DB.

var service = PocQueryServiceProxy.CreateProxy(builder.Uri);
var files = from m in service.Files
            where m.ID.Contains("M")
            orderby m.ID descending
            select m;
var files2 = service.GetFilesWithParties(2);

I’m not a huge fan of having so many interfaces for such a simple solution.  We could have only one and use it for the proxy only.  But then we would be loosely coupled with the server, which is exactly what I am trying to avoid by sharing the interfaces between the client & server.

Windows 8: Desktop as a Service?

In the wild country of rumours about Windows 8, there’s a new entry:  Desktop as a Service (thanks to Mary-Jo Foley for the heads-up).  Some slides have indeed leaked from the Microsoft architectural summit in London in April 2010, showing Microsoft’s vision of the next step for Windows virtualization.

The virtualization of applications was done with App-V in Windows 7, while virtualization of the OS is meant to mean native VHD booting.  That is already possible in Windows 7, although it requires a bit of tweaking.
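For the curious, that “bit of tweaking” boils down to a few bcdedit commands; the VHD path below is hypothetical, and {guid} stands for the identifier that the copy command prints:

```shell
rem Create a new boot entry by copying the current one
bcdedit /copy {current} /d "Windows 7 - VHD boot"

rem Point the new entry (using the GUID printed by the copy) at the VHD
bcdedit /set {guid} device vhd=[C:]\VHDs\win7.vhd
bcdedit /set {guid} osdevice vhd=[C:]\VHDs\win7.vhd

rem Let Windows redetect the hardware abstraction layer at boot
bcdedit /set {guid} detecthal on
```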

So we are left to speculate about Desktop as a Service (DaaS), although one of the slides gives some hints:

The desktop should not be associated with the device. (T)he desktop can be thought of as a portal which surfaces the users apps, data, user state and authorisation and access.

Now that is interesting.  With Office 365 (aka BPOS) for the server side of the apps, maybe Microsoft will eventually provide all the client apps as a service as well.

This would go a long way towards resolving enterprise IT’s headaches, where the migration from Windows XP is a major issue and the benefits rarely outweigh the costs.  With a more lightweight OS and DaaS, a migration would be a better value proposition.  It wouldn’t remove one of the big costs of migration though:  training.