Robust Non-ACID Transaction pattern in Azure


I’ve been working some more on the detailed design of the Azure Pub-Sub CodePlex project.  This project aims at creating a pub-sub messaging infrastructure running on Windows Azure and using Azure Storage.

Azure Storage doesn’t support ACID transactions in general.  Each operation is a transaction within a given partition, but you can’t span a transaction across multiple operations.  Now there is a very good reason for that:  ACID transactions don’t scale well.  They are typically implemented using locks, which create contention and also require trust between parties (should I trust you holding a lock on my resources?).  They also create challenges in terms of high availability:  what do you do if one of the parties fails during a transaction, should the transaction be shared with fail-over servers?  Azure is built with massive scale in mind, and therefore it doesn’t support transactions spanning multiple operations.  SQL Azure does support ACID transactions within one database.  But SQL Azure isn’t Azure Storage, despite what the names of the technologies would lead you to believe.  Azure Storage is basically Tables, Queues & Blobs.

The challenge for me is that I’m not used to designing in an environment that doesn’t support transactions.  I eat transactions for breakfast and have architected many systems that rely on them to guarantee integrity.  At the same time, this is the whole point of this CodePlex project:  to meet those challenges and learn from them.  So let’s do that!

The main scenario I was trying to secure was the pushing of messages into a queue.  This scenario goes as follows:

  • An agent (Azure Worker Role) takes a message from an Azure Queue
  • The agent processes the message and determines which subscribers it should send it to
  • The agent persists a bunch of metadata in tables & blobs
  • The agent persists an ID in another Azure Queue
  • The agent sends a notification via TCP or HTTP

I wanted this scenario to be fault-tolerant:  if the agent fails at any of those points, the message will still, in the end, be sent once-and-only-once to the subscribers.

You can’t secure this scenario with a wrapping transaction.  So what can you do?

Well, I won’t go through the trial and error that went through my head.  Instead I’ll go directly to the solution.  First let’s formalize the operations happening:

  1. Read the input Azure Queue (making the message invisible)
  2. Process the message, find the subscribers
  3. Persist metadata in tables
  4. Persist data in blobs
  5. For each targeted subscriber:
     1. Check the confirmation table to see if the push was already confirmed for this subscriber; if so, skip to the next subscriber
     2. Push an ID into the Azure Queue associated with the subscriber
     3. Write the subscriber queue-message ID into the confirmation table
     4. Send a notification (TCP)
  6. Remove the input Azure Queue message (it was invisible, now we permanently delete it)
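
To make the flow concrete, here is a minimal sketch of the agent loop in Python.  It uses today’s azure-storage-queue and azure-data-tables SDKs (which didn’t exist when this project was written) plus a few hypothetical helpers (find_subscribers, persist_metadata, persist_blobs, notify_subscriber), so treat it as an illustration of the step ordering, not the actual CodePlex implementation:

```python
from azure.core.exceptions import ResourceExistsError, ResourceNotFoundError
from azure.data.tables import TableClient
from azure.storage.queue import QueueClient

CONN = "<storage-connection-string>"  # placeholder

input_queue = QueueClient.from_connection_string(CONN, "input")
confirmations = TableClient.from_connection_string(CONN, "confirmations")

# --- hypothetical helpers the real project would flesh out ---
def find_subscribers(content): return ["alice", "bob"]   # step 2 routing
def persist_metadata(msg): pass                          # step 3 (idempotent)
def persist_blobs(msg): pass                             # step 4 (idempotent)
def notify_subscriber(sub): pass                         # step 5.4 (TCP/HTTP)

def process_one():
    # Step 1: read the input queue; the message turns invisible for 5 minutes.
    msg = next(input_queue.receive_messages(visibility_timeout=300), None)
    if msg is None:
        return

    subscribers = find_subscribers(msg.content)   # step 2
    persist_metadata(msg)                         # step 3
    persist_blobs(msg)                            # step 4

    for sub in subscribers:                       # step 5
        try:
            # Step 5.1: the confirmation row is the gateway; if it exists,
            # this subscriber was fully handled on a previous run.
            confirmations.get_entity(partition_key=msg.id, row_key=sub)
            continue
        except ResourceNotFoundError:
            pass

        # Step 5.2: push the message ID into the subscriber's queue.
        sub_queue = QueueClient.from_connection_string(CONN, f"sub-{sub}")
        pushed = sub_queue.send_message(msg.id)

        # Step 5.3: confirm the push by recording the queue-message ID.
        try:
            confirmations.create_entity({
                "PartitionKey": msg.id,
                "RowKey": sub,
                "QueueMessageId": pushed.id,
            })
        except ResourceExistsError:
            pass  # a prior run confirmed first; its copy is the valid one

        notify_subscriber(sub)                    # step 5.4

    # Step 6: only now do we permanently delete the input message.
    input_queue.delete_message(msg)
```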
That’s about the solution, once you add some checking code in there.  Basically, if we fail before step 6, the input message will eventually become visible again and will be reprocessed by the agent.  The only thing we have to make sure of is that the agent is fault-tolerant when it finds half-done work on that message.

For instance, when we’re writing metadata to the tables or blobs, we have to make sure that the write either overwrites previous writes, or fails in a way we can detect, in which case we check that what is already written is correct and continue.
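
As a sketch of what that can look like for the table writes, azure-data-tables offers upsert_entity with a replace mode, which makes the metadata write naturally idempotent (reusing the CONN string and message shape assumed in the sketch above):

```python
from azure.data.tables import TableClient, UpdateMode

meta = TableClient.from_connection_string(CONN, "metadata")

def persist_metadata(msg):
    # UpdateMode.REPLACE overwrites any row left behind by a previous,
    # failed run, so re-executing this step is harmless.
    meta.upsert_entity(
        {"PartitionKey": msg.id, "RowKey": "meta", "Body": msg.content},
        mode=UpdateMode.REPLACE,
    )
```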

The key point is step 5.3.  Basically, we confirm the message-push operation.  This ensures that the message is sent once-and-only-once.  If we fail at step 5.3, the process will restart and the message will be pushed twice onto the subscriber queue, BUT the agent READING it will be able to discard the earlier copy because its queue-ID isn’t in the confirmation table.
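
Here is what that discard logic might look like on the reading side, again as an assumed sketch (handle is a hypothetical processing callback): the reader treats the confirmation table as the source of truth and only processes a queue message whose ID matches the confirmed one.

```python
def drain_subscriber_queue(sub, handle):
    sub_queue = QueueClient.from_connection_string(CONN, f"sub-{sub}")
    for qmsg in sub_queue.receive_messages():
        try:
            row = confirmations.get_entity(partition_key=qmsg.content,
                                           row_key=sub)
        except ResourceNotFoundError:
            # Pushed but not yet confirmed (failure right before 5.3);
            # leave it alone, it will be confirmed or superseded later.
            continue
        if row["QueueMessageId"] != qmsg.id:
            # Orphan from a run that died between 5.2 and 5.3: the
            # confirmed copy carries a different queue-ID, so drop this one.
            sub_queue.delete_message(qmsg)
            continue
        handle(qmsg.content)              # the real processing
        sub_queue.delete_message(qmsg)
```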

I find this solution pretty elegant because it actually supports another, more complex scenario:  bulk messaging.  I need to be able to process a message containing a sequence of messages, each of which should be processed and sent to its respective subscribers.  With this pattern, I can do it.

The main axis of this pattern is to have ONE gateway for a data operation, even if the data is spread around.  Basically, you can write data everywhere, but your transaction isn’t considered complete until this gateway data operation has completed.  This way you can restart as often as you like, as long as you can handle already-written data.

The major drawback of this pattern is that it can leave incomplete data-operation junk in storage.  This can be taken care of by a purging mechanism.
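
A sketch of such a purge, under the assumption that any metadata row past a certain age with no confirmation entries at all belongs to a run that died before its gateway write:

```python
from datetime import datetime, timedelta, timezone

def purge_incomplete(max_age_hours=24):
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    for row in meta.list_entities():
        # The service timestamp rides along in the entity's metadata.
        if row.metadata["timestamp"] >= cutoff:
            continue  # too recent; a run may still be in flight
        confirmed = list(confirmations.query_entities(
            f"PartitionKey eq '{row['PartitionKey']}'"))
        if not confirmed:
            # No gateway write ever happened: junk, safe to remove.
            meta.delete_entity(row["PartitionKey"], row["RowKey"])
```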

This pattern doesn’t cover every transaction scenario, but it is quite generic, so I’ll be able to leverage it in a bunch of places.

Did you ever face a similar challenge?  What was the solution you used?  Does this pattern seem optimal to you?  If not, why not?  I would love to hear from you!
