Monthly Archives: August 2010

Live Mesh is back!

Live Mesh is back baby!

The service, known as Windows Live Mesh, had been in beta for around two years and was packaged in Live Essentials 2011 as Live Sync.

Microsoft has now come back to the original name:  Windows Live Mesh.

Thanks to Mary-Jo Foley for digging up that information!


I’ve been using Live Mesh since day one and I think it’s a fantastic service.  For those of you who don’t know what it is, it’s a Microsoft service with a cloud component and a local (desktop or mobile) component, and it basically syncs stuff from your machine to the cloud.  The original service syncs folders, so you can easily set up a few folders on your machine to sync with folders in the cloud.

The UI is easy, everything syncs seamlessly, it’s great.  On top of that, you can use a few tricks:  for instance, you can sync your favorites (which are files in a My Documents folder), so they follow you from one device to another.  You can also share folders with other users, which I’ve never tried.

Now the real power of this technology comes from its architecture and the possibility of building apps on top of it.  The model is that there’s a local component which mirrors Live Mesh in the cloud; when you need a resource or want to update one, you do it locally and Live Mesh synchronizes the resources for you.  This yields a very simple architecture where Live Mesh abstracts away the synchronization logic.

There was a Live Mesh SDK allowing you to develop applications by talking to this local version of Live Mesh.  There were even talks at PDC about deploying applications using Live Mesh.  I don’t know how much of this is moving forward with Live Essentials 2011.

An issue I see with this service is that it relies on Windows Live.  I therefore cannot use Live Mesh to build an enterprise application.  I don’t know if we’ll see a more Azure-flavored version of Live Mesh in the near future…


Windows Azure Customer Feedback

Microsoft set up a customer feedback site for Windows Azure a few months ago.  It has a voting feature where you can vote for the new features you would like to see in Windows Azure.

Here are the top 9 ideas I liked:

And here is the suggestion I made:  an event bus with events for storage manipulation (e.g. queue insert) in order to avoid polling.  Of course, this is quite related to the suggestion I made in a blog entry a few weeks ago.

So you can go and vote!

Departmental Application Migration to Azure – Part 4 – ADFS with Azure web app

This is part of a series of blogs.  See the preceding blog entries:

As mentioned before in this blog series, authentication is for me the major challenge of this proof of concept.  I need this web application to support enterprise single sign-on through Active Directory.  I’ve decided to use ADFS 2.0 and Windows Identity Foundation (WIF).  In the last article, I showed how to get an on-premise web application using ADFS for authentication.  In this blog entry, I’m going to the cloud, so let’s get started!


You need to have ADFS 2.0 installed.  This was covered in a previous blog entry from this series.

You’ll also need the Windows Identity Foundation (WIF) SDK.  You can download it at

You’ll also need the Windows Azure SDK.  I’m using the one from June 2010, you can download it at

To top it all, you can install the Windows Azure Tools for Microsoft Visual Studio 1.2 (also from June 2010).  You can download it at

I’ll use Visual Studio 2010.

Creating a web role in Dev Fabric

Let’s start by creating a vanilla web role in the Dev Fabric.  The Dev Fabric is the development environment provided with the Windows Azure SDK:  basically, the developer workstation.

I start by creating an empty solution which I name AdfsAuthenticatedAzure.

I then create a Windows Azure Cloud Service named MyCloudFacade.


A Cloud service is a unit of deployment.  A service contains different roles and a configuration for the service.  The roles are all deployed together in one package.

I create one role within that service, a web role named AdfsAuthenticatedWebRole.


Like the on-premise solution I did in the last blog entry, I’m trimming the boiler-plate web project:

  • Delete the Account folder since ADFS will perform those functions.
  • In web.config:
    • Delete the connection string.
    • Delete the authentication xml node.
    • Delete the membership xml node.
    • Delete the profile xml node.
    • Delete the roleManager xml node.
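
For reference, here’s a sketch of the web.config nodes being deleted, as they appear in the default ASP.NET web role template (contents elided):

```xml
<!-- Nodes removed from web.config:  ADFS / WIF will take over these functions -->
<authentication mode="Forms"> … </authentication>
<membership> … </membership>
<profile> … </profile>
<roleManager enabled="false"> … </roleManager>
```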

I can now hit F5 and look at my wonderful generic web site!

Deploying to Azure

To integrate with ADFS, I’ll need my web site to be using SSL.  I’ll attach a certificate to my service and enable an https input endpoint.  I’ll start at the properties of my web role.


At the certificates tab:


I then add a certificate and select one of my local (self-issued) certificates.


I then go to the Endpoints tab, check the HTTPS check box and select the SSL certificate name I just added.
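
Under the hood, those property pages simply edit the service model files.  A rough sketch of what ends up in ServiceDefinition.csdef (names are from my project; the certificate name is whatever you typed in the Certificates tab, and Visual Studio stores the matching thumbprint in ServiceConfiguration.cscfg):

```xml
<WebRole name="AdfsAuthenticatedWebRole">
  <InputEndpoints>
    <!-- HTTPS input endpoint bound to the SSL certificate added above -->
    <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCert" />
  </InputEndpoints>
  <Certificates>
    <!-- The certificate is looked up by thumbprint in the role instance's store -->
    <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
  </Certificates>
</WebRole>
```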


The Windows Azure Tools for Microsoft Visual Studio 1.2 allow you to deploy to Azure directly, instead of simply packaging (in which case you have to deploy the package yourself using the web console).  There are plenty of web resources showing how to deploy an Azure service to the cloud.  If you have questions, please ask.

While it’s deploying, I’ll add the same certificate to my service in the cloud.  In my Azure Service Web console:


I need a PFX file to upload.  I use IIS Server Certificates module and export the same self-issued certificate I used in Visual Studio.


I can then save the certificate locally and protect it with a password.  I can then import it in the Azure web console, using the same password.

I first deploy to staging, then to production.  As I mentioned in a previous blog entry, staging isn’t a real staging environment, as Microsoft recommends having a separate subscription altogether for staging vs production.  This is important here, since staging would be harder to integrate with ADFS given its unpredictable URL (Azure generates staging URLs using a GUID).

Of course, I get warnings because my certificate is self-issued and hence not recognized by IE as trustworthy.

Integrating ADFS in Dev Fabric

The steps here are a repetition of what I covered in the previous blog entry of this series, so I won’t go into much detail.

First, I choose Add STS Reference and set the application URI to https://localhost:444.


(The HTTPS input endpoint port is set to 443, which is the default HTTPS port, but since that port is already used by IIS on my machine, the web role responds on port 444 in the dev environment.)

For the rest of the wizard, it’s the same steps I used in the previous entry of this blog series.

I then run the modified web role in my dev environment.  I get an error from the ADFS site since my application isn’t recognized there yet.  So in the ADFS console, I create a relying party trust named Adfs Authenticated Dev.  The reason I append Dev to the name is that we’ll need to create an entry for the production endpoint as well, since it has a different URL.  In the claims rules, I make sure ADFS will send me a few claims (e.g. Windows name, email, title, etc.).

I can now use my local web role, after doing the form validation fix I spoke about last time.  I also remove the HTTP input endpoint, since ADFS will only trust me if I come from HTTPS.  Finally, I add a grid view showing all the user claims, like I did last time.

Integrating ADFS with Azure

I then run Add STS Reference in Visual Studio again.  This time I use my production Windows Azure URL (https://****).  I then go into the federation metadata file and remove the endpoint references to localhost, which look like this:

<EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
  <Address>https://localhost:444/</Address>
</EndpointReference>

After this, my metadata isn’t usable for dev anymore, but I only need to maintain it for production.

I run the ADFS relying party trust wizard again, still pointing to that metadata file.  I call this party Adfs Authenticated Prod.

The problem I’ll run into now is that the wreply query string will always forward me to my production web role.  This is related to how WIF creates the URL to the STS:  it uses a URL specified in the configuration file.  The fix for this is documented in the Windows Azure training kit:  override the query string in the WS-Federation authentication module by adding the following code in the Global.asax.

/// <summary>
/// Retrieves the address that was used in the browser for accessing
/// the web application, and injects it as WREPLY parameter in the
/// request to the STS
/// </summary>
void WSFederationAuthenticationModule_RedirectingToIdentityProvider(object sender, RedirectingToIdentityProviderEventArgs e)
{
    // In the Windows Azure environment, build a wreply parameter for the SignIn request
    // that reflects the real address of the application.
    HttpRequest request = HttpContext.Current.Request;
    Uri requestUrl = request.Url;
    StringBuilder wreply = new StringBuilder();

    wreply.Append(requestUrl.Scheme);     // e.g. "http" or "https"
    wreply.Append("://");
    wreply.Append(request.Headers["Host"] ?? requestUrl.Authority);
    wreply.Append(request.ApplicationPath);

    if (!request.ApplicationPath.EndsWith("/"))
    {
        wreply.Append("/");
    }

    e.SignInRequestMessage.Reply = wreply.ToString();
    e.SignInRequestMessage.Realm = wreply.ToString();
}
We also need to add the production audience URI in the web.config (if, like me, you registered with the STS as localhost, only the localhost URI is listed):

<audienceUris>
  <add value="https://localhost:444/" />
  <add value="https://***" />
</audienceUris>

I am now prepared to deploy my application in Windows Azure.

Copy Local

Now here is an issue that took me a lot of time to resolve.  Every time you reference DLLs that aren’t part of the .NET Framework or of your own projects, you have to set Copy Local to true in the assembly’s properties.


This is the case for the WIF assemblies.  This setting forces the WIF assembly to be packaged and sent to the cloud with the rest of the custom code.

Deploying ADFS integration to the cloud

I deploy the application to the staging Windows Azure environment.  As mentioned before, this environment won’t fully work since the URL is not constant.

Once it’s in staging I flick it to production.

Et voila!

So, not too trivial to setup, but in the range of the feasible 😉

Doing a fetch-attributes on a container using SAS

I’ve bumped into a funny Windows Azure Storage API feature lately.

I was trying to read / write the meta data of a blob-container using a Shared Access Signature (SAS) and got a 404 Resource Not Found.

Well, after flipping my algorithm upside down quite a few times, I considered that it was by design.

It is confirmed, it is by design!

You cannot use a Shared Access Signature to create or delete a container, or to read or write container properties or metadata.


Thanks to Neil Mackenzie for providing the reference!

Of course, this means that if you really need to do this, you have to use your primary or secondary key and not a SAS.  Sad…
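
To illustrate, here is a sketch of the difference using the (old) Microsoft.WindowsAzure.StorageClient library; the account name, key and metadata key are of course hypothetical placeholders:

```csharp
// With the account key, reading container metadata works…
var account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=…");
var container = account
    .CreateCloudBlobClient()
    .GetContainerReference("content");

container.FetchAttributes();                     // OK with the account key
Console.WriteLine(container.Metadata["owner"]);  // hypothetical metadata key

// …while the same FetchAttributes call made through a client built with
// StorageCredentialsSharedAccessSignature fails with 404 Resource Not Found.
```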

Shared Access Signature

I’ve been spinning my head around trying to understand how to use this Azure Storage concept for quite a while, so I’ve decided to share my findings.

The most useful web resource I found was this blog entry.  Here I’m going to give a less API-driven approach, which is faster as long as you know in advance which containers / blobs you want to share.

What is shared access signature good for?

The primary way to authenticate is to use the primary or secondary access keys.  This method works fine, but it has three major drawbacks:

  1. It gives full access (read & write).
  2. It gives access to the entire storage account (tables, queues & blobs).
  3. The only way to revoke access is to regenerate the key, in which case everybody using it will be denied access.

Basically, it’s not very granular.  On top of that, the key must be passed in an HTTP header, which isn’t browser friendly.

Shared Access Signature (SAS) is an alternative way to authenticate against Azure Storage.  As far as I know, it’s only useful for blobs.  It’s granular, since permissions can be granted down to the blob level, the permissions themselves are more granular (read, write, list, delete), and revocation can be automatic (an expiry time) or done manually on each shared access policy.

How can I use it?

The easiest way to quickly get the hang of it is to go to  Now this site looks pretty dodgy, but it actually belongs to the Windows Azure team, so you can feel free to enter your credentials in it.

First thing, you enter the storage account name and the primary key (actually, the secondary works as well).

You can then go to the blobs tab.  If you don’t already have a container, you can create one there.  What I’m going to do here is to give read access to the entire container named content.  I select the Manage Policies menu option on the container menu.


A dialog window pops up.  I’m going to create a new policy for that container, so I click Add Policy:


I then fill in the policy as follows:


Basically, I’m creating a policy named “Read”, starting today and ending in 2020, giving read permission.  I click OK.  I then select the Share menu option on the container menu.


Another dialog window pops up.  I select the policy I just created (Read) and click the Get URL button:


The URL looks something like this:

http://<account name>.blob.core.windows.net/content?sr=c&si=Read&sig=j4Rl1%2BPiwm3eUQfFQeIopULLs5SWeIYsXwqx%2FydFXAE%3D

If we concentrate on the query string, it contains three components:

  • sr, the resource:  here c, which stands for container.  If we shared a blob, it would have been b.
  • si, the policy.
  • sig, the signature.

I can then use the following code to access the blob container:

var credentials = new StorageCredentialsSharedAccessSignature(
    "?sr=c&si=Read&sig=j4Rl1%2BPiwm3eUQfFQeIopULLs5SWeIYsXwqx%2FydFXAE%3D");
CloudBlobClient blobClient = new CloudBlobClient(
    new Uri("http://<account name>.blob.core.windows.net"),
    credentials);

var container = blobClient.GetContainerReference("content");
var list = container.ListBlobs();

foreach (CloudBlob blob in list)
{
    Console.WriteLine(blob.Uri);    // e.g. display each blob's URI
}
I can revoke access to the container by changing the policy.

How does that work?

Basically, when you click the Get URL button, Azure spits out a signature and remembers it.  So this URL is good forever, although the policy underneath can change.

We can also share the container directly, without using a policy, but then the generated URL contains the policy itself, i.e. the start time, end time, etc.  It is therefore less secure:  if this URL becomes compromised, there is no way to revoke access.  For this reason, the API limits the exposure time:  we can’t give access for more than an hour this way.
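
If you prefer doing this through the API instead of the web tool, the policy-based share looks roughly like this with the old Microsoft.WindowsAzure.StorageClient library (assuming container is a CloudBlobContainer obtained with the account key):

```csharp
// Register a container-level shared access policy named "Read"…
var permissions = container.GetPermissions();

permissions.SharedAccessPolicies.Add(
    "Read",
    new SharedAccessPolicy
    {
        Permissions = SharedAccessPermissions.Read,
        SharedAccessExpiryTime = new DateTime(2020, 1, 1)
    });
container.SetPermissions(permissions);

// …then generate a signature that only references the policy.  Since the
// actual permissions live on the container, they can be changed or revoked
// later without touching the URL itself.
string sasQueryString = container.GetSharedAccessSignature(
    new SharedAccessPolicy(),   // empty:  everything comes from the stored policy
    "Read");
```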


Hope that shed some light on shared access signatures, have fun!

Dev, Staging & Production in Windows Azure

I finally got an answer about the different environments in Windows Azure on the Cloud Cover Channel 9 show.  Apparently my Fabio avatar was quite popular and helped my question get selected!

I thought I would blog their nuggets of wisdom, as it’s quite good guidance and I haven’t seen it anywhere else on the web.

Basically, my question was about managing configurations between the dev, staging & production environments.  For instance, if you’re pointing to an Azure storage account or a SQL Azure instance, you’ll probably want to have different data in dev, staging & production.

Well, the guidance is twofold.

1- Use configuration settings

Using the configuration settings within the Visual Studio Azure service project is the way to handle configuration values that change from one environment to the other.


That’s because those configuration settings can be changed after deployment.  You can do that in the web UI or even automate it through the Azure API.
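
Concretely, those settings live in ServiceConfiguration.cscfg, so each environment gets its own copy of the file.  A sketch (the setting name and connection string are hypothetical):

```xml
<!-- ServiceConfiguration.cscfg:  one per environment (dev, staging, production) -->
<Role name="AdfsAuthenticatedWebRole">
  <Instances count="1" />
  <ConfigurationSettings>
    <!-- Point each environment at its own storage account -->
    <Setting name="DataConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=mystagingaccount;AccountKey=…" />
  </ConfigurationSettings>
</Role>
```

At runtime, the role reads the value with RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"), and a value changed through the web UI is picked up without redeploying the package.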

2- Use different subscriptions

In any case, it isn’t recommended to use one Azure subscription for both staging (and/or QA and/or UAT) and production.

The reason for that is that a subscription corresponds to one Live ID and at most 5 API certificates.  Therefore the staff having access to your staging environment would also have access to your production environment.

It is therefore recommended to run your environments on different subscriptions and use the staging facility only to swap a deployment into production quickly.