Analysing Application Logs with DocumentDB

Azure DocumentDB is Microsoft’s document-centric NoSQL offering in the cloud.

I’ve been working with it since September 2014 and I wanted to share a use case I found it really good at:  log analysis.

Now that takes some context.


I have been working for a customer using Microsoft Azure as a development platform.  The applications were C# Web Apps and logging was done with standard .NET tracing, captured by Azure to blob storage.

As with most applications, logging took a backseat while the applications were developed, so developers would sprinkle lines such as:

System.Diagnostics.Trace.TraceError("Service X was not available");

Mind you, that’s better than my contribution, which typically has no logs at all except for a catch-all in the outer-most scope, so you can at least be informed that there has been a null reference exception somewhere!

I’ve architected quite a few systems and often end up troubleshooting issues in production environments, either functional or performance-related.  Without good logs, that is impossible.

The problem I always found with text logs is that they are so difficult to exploit.  Once you have a few megs of one-liners, they end up being useless.  They lack two things:

Standard information.  Examples?  Event name, correlation ID, duration, exception, etc.  Because they are standard throughout your logs, they are easier to search.

Structure.  Structure also makes the logs easier to search.  When one developer splits information blocks with commas, another with pipes, etc., it doesn’t simplify the consumption of logs.


More robust logging solutions do implement those two aspects.  For instance, Enterprise Library Semantic Logging implements custom EventSources that are semantic, i.e. strongly typed.  This is borrowed from Windows ETW tracing.
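As a rough sketch, a semantic (strongly typed) event source in the ETW style looks something like this; the event source name and the event itself are illustrative, not taken from Enterprise Library:

```csharp
using System.Diagnostics.Tracing;

// A semantic event source:  each event is a strongly typed method,
// not a free-form string, so consumers get structured data.
[EventSource(Name = "MyCompany-MyApp")]
public sealed class AppEventSource : EventSource
{
    public static readonly AppEventSource Log = new AppEventSource();

    [Event(1, Level = EventLevel.Error)]
    public void ServiceUnavailable(string serviceName) =>
        WriteEvent(1, serviceName);
}

// Usage:  AppEventSource.Log.ServiceUnavailable("Service X");
```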

Now, the constraint I had was to use basic .NET tracing and to log to blob storage.


What I did was separate the problem of logging from the problem of analysing the logs:  I logged strongly typed events, serialized as JSON strings, into the .NET trace.

.NET only saw strings and was happy.

But I had standard, structured information in JSON objects.  Better yet, I didn’t have the same type of JSON object everywhere.  This allowed me, for instance, to log controller interception with one information subset, method calls with another and errors with yet another.
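A minimal sketch of that idea, assuming Json.NET (`JsonConvert`) for serialization; the event type and its field names are illustrative:

```csharp
using System;
using System.Diagnostics;
using Newtonsoft.Json;

// Illustrative event shape for controller interception;
// the field names are hypothetical.
public class ControllerEvent
{
    public string EventName { get; set; }
    public Guid CorrelationId { get; set; }
    public string Controller { get; set; }
    public string Action { get; set; }
    public double DurationMs { get; set; }
}

public static class JsonTrace
{
    // .NET tracing only ever sees a string;
    // the structure survives inside the JSON.
    public static void Information(object evt) =>
        Trace.TraceInformation(JsonConvert.SerializeObject(evt));
}
```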

You must see me coming by now…  DocumentDB was the perfect tool to analyse that data.  All I had to do was write some code to load the CSV files from blob storage into a DocumentDB collection.

The way I did that was to take each row of the CSV files and consider it as a JSON object, with the ‘message’ column being a complex field of that object (i.e. yet another JSON object).
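That step could look something like the following sketch, using the DocumentDB .NET SDK (`DocumentClient`); the CSV parsing is assumed to have happened upstream, and the assumption is that the ‘message’ column holds the serialized JSON event:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;
using Newtonsoft.Json.Linq;

public class LogImporter
{
    private readonly DocumentClient _client;
    private readonly Uri _collectionUri;

    public LogImporter(DocumentClient client, string databaseId, string collectionId)
    {
        _client = client;
        _collectionUri = UriFactory.CreateDocumentCollectionUri(databaseId, collectionId);
    }

    // Turns one CSV row into a document:  every column becomes a field,
    // but 'message' is parsed into a nested JSON object instead of
    // staying a flat string.
    public Task ImportRowAsync(string[] headers, string[] fields)
    {
        var doc = new JObject();

        for (var i = 0; i < headers.Length; i++)
        {
            doc[headers[i]] = headers[i] == "message"
                ? (JToken)JObject.Parse(fields[i])   // nested event object
                : fields[i];
        }

        return _client.CreateDocumentAsync(_collectionUri, doc);
    }
}
```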

Then I could analyse the logs simply by writing DocumentDB SQL queries.  Since the documents were already fully indexed, the queries were instantaneous!
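For instance, a query diving into the nested ‘message’ object might look like this sketch; the database / collection IDs and the field names (`eventName`, `durationMs`) are illustrative:

```csharp
using System.Linq;
using Microsoft.Azure.Documents.Client;

public static class LogQueries
{
    // Hypothetical query:  calls slower than one second,
    // reaching inside the nested 'message' object.
    public static IQueryable<dynamic> SlowCalls(DocumentClient client)
    {
        return client.CreateDocumentQuery<dynamic>(
            UriFactory.CreateDocumentCollectionUri("logs", "traces"),
            "SELECT c.message.eventName, c.message.durationMs " +
            "FROM c WHERE c.message.durationMs > 1000");
    }
}
```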


This was a life saver on the different projects where I used that approach.  I could dive into massive amounts of logs easily, get information, compile statistics, detect outliers, etc.

I actually bundled all that into a web solution that could import data from a blob container, run interactive queries and export query results to CSV files so I could analyse them further in Excel.


In a way, this borrows a lot of patterns from Big Data.


This allowed me to learn a lot more about DocumentDB.  I hit the ingestion limits of the S1 tier pretty quickly, even when bundling records together using a stored procedure, and had to implement back-offs.  When I got too aggressive creating / deleting collections, I got throttled and the service refused to serve me.
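A back-off along those lines can be sketched as follows:  when DocumentDB throttles a request it answers HTTP 429 via a `DocumentClientException` carrying a suggested `RetryAfter` interval, so the sketch simply waits that long and retries (the retry count is an arbitrary choice here):

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;

public static class BackOff
{
    // Retries an operation when DocumentDB answers 429
    // (request rate too large), waiting the interval the
    // service itself suggests before each retry.
    public static async Task<T> WithBackOffAsync<T>(
        Func<Task<T>> operation, int maxRetries = 10)
    {
        for (var attempt = 0; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (DocumentClientException ex)
                when (ex.StatusCode == (HttpStatusCode)429 && attempt < maxRetries)
            {
                await Task.Delay(ex.RetryAfter);
            }
        }
    }
}
```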

But otherwise, the query speed was really excellent.  Being able to dive into JSON objects in a query is a huge enabler.

7 responses

  1. stefflocke 2016-08-02 at 04:28

    Hiya, I don’t suppose you have a github/codeplex repo for this do you? I have a very similar task and it sounds like you already did a fab job!

  2. Vincent-Philippe Lauzon 2016-08-02 at 06:42

    Hi Steff,

    Not on GIT but on a private TFS online… I could give you a copy of the project I used AS IS though if you have a location where I could drop it.

    If you’re interested… my Twitter account is @vplauzon, you should be able to send me a private message from there.

  3. stefflocke 2016-08-02 at 07:28

    Hiya Vincent, that’d be great! Thank you so much. I followed you on twitter earlier but I think you have to follow me back for me to DM you (@stefflocke). Cheers!

  4. Vincent-Philippe Lauzon 2016-08-02 at 07:53


  5. Mpho 2017-01-27 at 00:54

    Good day Vincent, i have read your blog and very informative, i would like to get in touch with you, reading what you done regarding logs for your application, i have resorted in using log analytics for my logs, but how you have done it, i would like to do it like that. Maybe bit of explaining what will be a good approach for me.

    regards, Mpho

  6. Vincent-Philippe Lauzon 2017-01-29 at 09:28


    Please reach out over LinkedIn.

  7. Anonymous 2017-01-29 at 22:56

    thanks will do
