Nuget WordPress REST API – Demo App

I’ve had a few requests to explain how to use the Nuget WordPress REST API beyond authentication.

In order to do this, I added a Demo App under source control.

The Nuget package source code is available online; if you download the code, you’ll see there are 3 projects:

  • WordPressRestApi:  essentially the Nuget package
  • ApiClientTest:  Unit tests on the previous project
  • WordPressDemo:  The new Web App project

The demo project shows you a way to use the package.


Last year I posted an article on WordPress authentication, explaining how it works.  I would recommend reading that article first.

When you start the demo, it will take you into a sort of Authentication Wizard when you hit the web app root (e.g. http://localhost:58373/).

You need to go to the WordPress.com developer site and create yourself a WordPress App.  Give it a name & a description.  This is purely to run the demo on your laptop / desktop.  The redirect URL should be of the shape http://localhost:58373/SignIn, but with 58373 replaced by whatever port number the web app runs on on your machine.

Once you’ve created the app, at the bottom of the screen you should have access to the two pieces of information you’ll need to run the demo.


On the demo web site you should be prompted to enter the client ID.


Usually we would keep the Client ID and other app information in the web.config.  But to simplify the demo setup, I have the user type them in.

Once you’ve entered the client ID, click submit.  The app should give you a link to sign in to WordPress.  Click it.

Your browser will navigate to the WordPress.com authorization page.  From there, WordPress will request your consent.  Click Approve.

FYI, the demo App only does read operations:  it won’t modify or delete anything.

From there you should be brought back to the redirect URL you configured.


You need to enter the client ID again.  You also need to enter the client secret.

You can then click on the Fetch Token button.  This will allow the web app to fetch a token for the WordPress App.
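Under the hood, the Fetch Token step is a standard OAuth2 authorization-code exchange against WordPress.com.  Here is a minimal Python sketch of the request it performs (the endpoint is WordPress.com’s documented OAuth2 token URL; the function name and client values are mine, for illustration):

```python
from urllib.parse import urlencode

# WordPress.com OAuth2 token endpoint (per the WordPress.com developer docs).
TOKEN_URL = "https://public-api.wordpress.com/oauth2/token"

def build_token_request(client_id, client_secret, code, redirect_uri):
    """Build the form body for the authorization-code -> token exchange."""
    payload = {
        "client_id": client_id,          # from your WordPress App
        "client_secret": client_secret,  # from your WordPress App
        "code": code,                    # returned on the redirect URL
        "redirect_uri": redirect_uri,    # must match the configured one
        "grant_type": "authorization_code",
    }
    return TOKEN_URL, urlencode(payload).encode()

# The demo app POSTs this body; with urllib it would roughly be
#   urllib.request.urlopen(url, data) -> JSON containing "access_token".
url, data = build_token_request("my-id", "my-secret", "my-code",
                                "http://localhost:58373/SignIn")
```

The access token returned by that POST is what the package then passes on every subsequent API call.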

From there you should land on http://localhost:58373/DemoRead.

You can follow the authentication code from HomeController & SignInController classes (and models).

Read Operations

The class DemoReadController does the read operations.  Here we demo a query on posts & one on tags.

Everything flows from the WordPressClient.

We abandoned the attempt at building LINQ queries around WordPress and instead went for a thin interface on top of its REST API.

The surface of the REST API exposed is quite limited at this point:  posts and tags.

A particularity of the interface is the use of IAsyncEnumerable<T>, a custom interface allowing us to add a filter (where clause) and / or a projection (select).  Those aren’t sent to the API, à la LINQ to SQL, but they are at least processed as the objects are hydrated from the requests.  The interface also respects async semantics, hence allowing us to build more scalable applications on top of it.
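To make that idea concrete, here is a rough Python analogue (the names are mine, not the package’s): the filter and projection are composed up front but only applied as each page of results is hydrated, and iteration stays asynchronous throughout.

```python
import asyncio

class AsyncQuery:
    """Rough analogue of the package's async enumerable: a where clause and
    a select projection, applied client-side while results are hydrated."""

    def __init__(self, fetch_pages, where=None, select=None):
        self._fetch_pages = fetch_pages  # async generator of result pages
        self._where = where
        self._select = select

    def where(self, predicate):
        return AsyncQuery(self._fetch_pages, predicate, self._select)

    def select(self, projection):
        return AsyncQuery(self._fetch_pages, self._where, projection)

    async def __aiter__(self):
        async for page in self._fetch_pages():
            for item in page:  # hydrate objects page by page
                if self._where is None or self._where(item):
                    yield item if self._select is None else self._select(item)

async def fake_post_pages():
    # Stand-in for paged calls to the WordPress REST API.
    yield [{"id": 1, "title": "Azure"}, {"id": 2, "title": "WordPress"}]
    yield [{"id": 3, "title": "Azure Monitor"}]

async def main():
    query = (AsyncQuery(fake_post_pages)
             .where(lambda p: "Azure" in p["title"])
             .select(lambda p: p["id"]))
    return [i async for i in query]

ids = asyncio.run(main())  # [1, 3]
```

The point of the design is that nothing is filtered server-side, but nothing is buffered either: each item flows through the where/select pipeline as it arrives.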


This demo app is by no means a base for your WordPress applications.  Rather, it illustrates how to use the Nuget package.

I hope this gives you a better idea on how to use it.

Azure Monitoring – SIEM integration

Quite a few of my customers have a Security Information and Event Management (SIEM) system on premise.  They spent thousands of dollars fine-tuning it to detect and correlate security events, it’s integrated with their ticketing system, but mostly…  it has been sanctioned as the corporate security tool.

So the next question is:  how do I integrate that with Azure resources?

Azure Monitoring

In the last post, I talked about Azure Monitor, the consolidated Azure monitoring platform.  If you haven’t read it and do not know what Azure Monitor is, go read it, I’ll wait for you here.

Either by itself, for medium deployments, or by upgrading to Log Analytics (part of OMS) for more complex or bigger deployments, we can monitor a solution or a set of solutions.

By monitoring, I mean that we can have insights about what’s going on within our solutions, for instance:

  • Have any failures occurred or are currently occurring?
  • What is the current and past load?
  • Are resources well utilized or do we have under / over provisioning?

…  and so on.  This doesn’t address a lot of security requirements, such as threat detection, though.

Security Center

For customers who do not have a legacy SIEM in place, I would recommend taking a look at Security Center.  It is a solid base for security monitoring and takes minutes to set up.

Security Center does threat detection and correlation between security events.

Microsoft Azure Log Integration

Back to my legacy SIEM scenario.

In July 2016, Microsoft released Microsoft Azure Log Integration.  As of November 2016, it is still in Preview.

The tool is available for download.  It is meant to be installed on a VM alongside a standard SIEM connector agent (e.g. Splunk Universal Forwarder, ArcSight Windows Event Collector agent or QRadar WinCollect).  The VM is meant to act as a log gateway:  it collects logs from Storage Accounts and sends them to the SIEM via the standard SIEM connector.


A complete guide to set this up is available here.

Of course, this setup assumes we are exporting all the logs to Azure Storage.

Networking considerations

Now that we’ve looked at the general solution, let’s consider where the different components could reside since, thanks to a pesky thing called physics, latency matters.

Following the diagram from the previous section from left to right, the Azure resources & Azure Storage are, of course, in Azure.  The VM running the Log Integration could be either in Azure or on premise, as could our SIEM.  So let’s consider the different possibilities.

Log Integration & SIEM on premise

In many ways, this might be the easiest way to install the Log Integration, but it is also likely the least efficient.

It is easy because you simply need a VM on premise with access to the SIEM and the internet, in order to access Azure Storage.

It is likely inefficient because of the latency between the on-premise site and Azure.  The Log Integration tool will probe different parts of Azure Storage at regular intervals, and the latency will build up.  Network bandwidth, which does not come for free on premise, will also add up.

For many Azure resources with verbose logs, the network bandwidth requirement might be so demanding that connecting over the internet is too slow or unreliable, in which case we would consider Azure Express Route.

Log Integration in Azure & SIEM on premise

This setup is slightly less straightforward because of hybrid connectivity.  Indeed, the Log Integration VM, being in Azure, needs some hybrid connectivity in order to communicate with the on premise SIEM.

This can easily be achieved with a site-to-site connection through an Azure VPN Gateway.


This setup is likely going to be more efficient network-wise, since all the Azure Storage probing occurs in Azure, consuming no on-premise bandwidth.  Only the gathered logs shipped on premise to the SIEM will incur bandwidth.

This can still be a lot if we have quite a few resources in Azure gathering verbose logs.  Again, Express Route could be considered.

Log Integration & SIEM in Azure

In this scenario, we migrate the SIEM in Azure.  Bam!  No more latency.

Unless we also move all on-premise workloads to Azure, we now have the reverse problem:  we need to push on-premise logs to Azure.

A variant on that scenario actually brings the best of both worlds.

Azure SIEM to generate security events

In this scenario, we install a SIEM in Azure while also keeping the one we have on premise.


The one in Azure acts as an aggregator of logs and generates security events.  Only those security events are shipped to the SIEM on premise.  This reduces the network pipe requirement considerably but still achieves the same goal, i.e. for the “main SIEM” to have enough information to correlate security events from different sources in order to detect attack patterns.

A blocker for this approach would obviously be the license costs to install a SIEM in Azure.  I wouldn’t recommend this for a small deployment.

If workload migration is happening from on premise to Azure, as a corporate strategy for instance, we could consider inverting the SIEM roles at some point, i.e. having the main SIEM in Azure and an aggregator on premise.  This could make sense early on since storage is cheaper in the cloud than on premise, and SIEMs tend to consume lots of it.  Also, a SIEM being a critical system, it tends to be easier / cheaper to have a high SLA in Azure than on premise.


Azure Log Integration with an on-premise SIEM is often a corporate requirement.  In this article we went through the solution and some variants regarding networking considerations.

Primer on Azure Monitor


Azure Monitor is the latest evolution of a set of technologies allowing the monitoring of Azure resources.

I’ve written about going the extra mile to be able to analyze logs in the past.

The thing is that once our stuff is in production with tons of users hitting it, it might very well start behaving in unpredictable ways.  If we do not have a monitoring strategy, we’re going to be blind to problems and only see unrelated symptoms.

Azure Monitor is a great set of tools.  It doesn’t try to be the end-all solution.  On the contrary, although it offers analytics out of the box, it lets us export the logs wherever we want to go further.

I found the documentation of Azure Monitor (as of November 2016) a tad confusing, so I thought I would give a summary overview here.  Hopefully it will get you started.

Three types of sources

The first thing we come across in Azure Monitor’s literature is the three types of sources:  Activity Logs, Diagnostic Logs & Metrics.

There is a bit of confusion between Diagnostic Logs & Metrics, with some references hinting that metrics are generated by Azure while diagnostics are generated by the resource itself.  That is confusing & beside the point.  Let’s review those sources here.

Activity logs capture all operations performed on Azure resources.  They used to be called Audit Logs & Operational Logs.  They come directly from the Azure APIs:  any operation done on an Azure API (except HTTP GET operations) generates an activity log entry.  Activity logs are in JSON and contain the following information:  action, caller, status & time stamp.  We’ll want to keep track of those to understand changes done in our Azure environments.

Metrics are emitted by most Azure resources.  They are akin to performance counters, something that has a value (e.g. % CPU, IOPS, # messages in a queue, etc.) over time; hence Azure Monitor, in the portal, allows us to plot them against time.  Metrics typically come in JSON and tend to be emitted at regular intervals (e.g. every minute); see this article for available metrics.  We’ll want to check those to make sure our resources operate within expected bounds.

Diagnostic logs are logs emitted by a resource that provide detailed data about the operation of that particular resource.  Their content is specific to the resource:  each resource will have different logs.  The format will also vary (e.g. JSON, CSV, etc.); see this article for the different schemas.  They also tend to be much more voluminous for an active resource.

That’s it.  That’s all there is to it.  Avoid the confusion and re-read the last three paragraphs.  It’s a time saver.  Promise.

We’ll discuss the export mechanisms & alerts below, but for now, here’s a summary of the capacity (as of November 2016) of each source:

Source            | Export to                                  | Supports Alerts
Activity Logs     | Storage Account & Event Hub                | Yes
Metrics           | Storage Account, Event Hub & Log Analytics | Yes
Diagnostics Logs  | Storage Account, Event Hub & Log Analytics | No

Activity Log example

We can see the activity log of our favorite subscription by opening the Monitor blade, which should be on the left-hand side of the portal.


If you do not find it there, hit the More Services and search for Monitor.

Selecting the Activity Logs, we should have a search form and some results.


ListKeys is a popular one.  Despite being conceptually a read operation, the List Keys action, on a storage account, is done through a POST in the Azure REST API, specifically so that it leaves an audit trail.

We can select one of those ListKeys and, in the tray below, select the JSON format:

"relatedEvents": [],
"authorization": {
"action": "Microsoft.Storage/storageAccounts/listKeys/action",
"condition": null,
"role": null,
"scope": "/subscriptions/<MY SUB GUID>/resourceGroups/securitydata/providers/Microsoft.Storage/storageAccounts/a92430canadaeast"
"caller": null,
"category": {
"localizedValue": "Administrative",
"value": "Administrative"
"claims": {},
"correlationId": "6c619af4-453e-4b24-8a4c-508af47f2b26",
"description": "",
"eventChannels": 2,
"eventDataId": "09d35196-1cae-4eca-903d-6e9b1fc71a78",
"eventName": {
"localizedValue": "End request",
"value": "EndRequest"
"eventTimestamp": "2016-11-26T21:07:41.5355248Z",
"httpRequest": {
"clientIpAddress": "",
"clientRequestId": "ba51469e-9339-4329-b957-de5d3071d719",
"method": "POST",
"uri": null

I truncated the JSON here.  Basically, it is an activity event with all the details.
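Once exported, these JSON events are easy to post-process.  As a small sketch (using only the fields shown in the sample above; the helper name is mine), here is how we could keep just the ListKeys operations:

```python
import json

# Trimmed-down activity events, using fields from the sample above.
events_json = """[
  {"authorization": {"action": "Microsoft.Storage/storageAccounts/listKeys/action"},
   "httpRequest": {"method": "POST"},
   "eventTimestamp": "2016-11-26T21:07:41.5355248Z"},
  {"authorization": {"action": "Microsoft.Compute/virtualMachines/start/action"},
   "httpRequest": {"method": "POST"},
   "eventTimestamp": "2016-11-26T21:09:12.0000000Z"}
]"""

def list_keys_events(events):
    """Filter activity events down to ListKeys operations."""
    return [e for e in events
            if e["authorization"]["action"].endswith("/listKeys/action")]

events = json.loads(events_json)
suspicious = list_keys_events(events)
# -> one event, the ListKeys POST at 21:07:41
```

The same pattern applies to any other action we want to track, since the `authorization.action` string always names the resource provider operation.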

Metrics example

Metrics can be accessed from the “global” Monitor blade or from any Azure resource’s monitor blade.

Here I look at the CPU usage of an Azure Data Warehouse resource (which hasn’t run for months, hence the flat line).


Diagnostic Logs example

For diagnostics, let’s create a storage account and activate diagnostics on it.  For this, under the Monitoring section, let’s select Diagnostics, make sure the status is On and then select Blob logs.


We’ll notice that all metrics are already selected.  We also notice that the retention is controlled there, in this case 7 days.

Let’s create a blob container, copy a file into it and try to access it via its URL.  Then let’s wait a few minutes for the diagnostics to be published.

We should see a special $logs container in the storage account.  This container will contain log files, stored by date & time.  For instance for the first file, just taking the first couple of lines:

1.0;2016-11-26T20:48:00.5433672Z;GetContainerACL;Success;200;3;3;authenticated;monitorvpl;monitorvpl;blob;"$logs?restype=container&comp=acl";"/monitorvpl/$logs";295a75a6-0001-0021-7b26-48c117000000;0;;2015-12-11;537;0;217;62;0;;;""0x8D4163D73154695"";Saturday, 26-Nov-16 20:47:34 GMT;;"Microsoft Azure Storage Explorer, 0.8.5, win32, Azure-Storage/1.2.0 (NODE-VERSION v4.1.1; Windows_NT 10.0.14393)";;"9e78fc90-b419-11e6-a392-8b41713d952c"
1.0;2016-11-26T20:48:01.0383516Z;GetContainerACL;Success;200;3;3;authenticated;monitorvpl;monitorvpl;blob;"$logs?restype=container&comp=acl";"/monitorvpl/$logs";06be52d9-0001-0093-7426-483a6d000000;0;;2015-12-11;537;0;217;62;0;;;""0x8D4163D73154695"";Saturday, 26-Nov-16 20:47:34 GMT;;"Microsoft Azure Storage Explorer, 0.8.5, win32, Azure-Storage/1.2.0 (NODE-VERSION v4.1.1; Windows_NT 10.0.14393)";;"9e9c6311-b419-11e6-a392-8b41713d952c"
1.0;2016-11-26T20:48:33.4973667Z;PutBlob;Success;201;6;6;authenticated;monitorvpl;monitorvpl;blob;"";"/monitorvpl/sample/A.txt";965cb819-0001-0000-2a26-48ac26000000;0;;2015-12-11;655;7;258;0;7;"Tj4nPz2/Vt7I1KEM2G8o4A==";"Tj4nPz2/Vt7I1KEM2G8o4A==";""0x8D4163D961A76BE"";Saturday, 26-Nov-16 20:48:33 GMT;;"Microsoft Azure Storage Explorer, 0.8.5, win32, Azure-Storage/1.2.0 (NODE-VERSION v4.1.1; Windows_NT 10.0.14393)";;"b2006050-b419-11e6-a392-8b41713d952c"

Storage Account diagnostics are logged as semicolon-delimited values (a variant of CSV), which isn’t trivial to read the way I pasted it here.  But basically we can see the logs contain details:  each operation done on the blobs is logged, with lots of detail.
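Since some fields can themselves contain the delimiter inside quotes (the user-agent field above has a semicolon in it), a CSV reader configured with a semicolon delimiter is the easiest way to pick the lines apart.  Here is a quick Python sketch on an abbreviated version of a line above; the leading field positions (version, request start time, operation type, request status, HTTP status code, …) follow the Storage Analytics log format:

```python
import csv

# Abbreviated version of a $logs line (quoted fields may contain ';').
line = ('1.0;2016-11-26T20:48:33.4973667Z;PutBlob;Success;201;6;6;'
        'authenticated;monitorvpl;monitorvpl;blob;"";"/monitorvpl/sample/A.txt";'
        '"Microsoft Azure Storage Explorer, 0.8.5, win32, Azure-Storage/1.2.0 '
        '(NODE-VERSION v4.1.1; Windows_NT 10.0.14393)"')

# quotechar lets the reader skip over delimiters inside quoted fields.
fields = next(csv.reader([line], delimiter=';', quotechar='"'))
version, started_at, operation, status, http_status = fields[:5]
# operation == 'PutBlob', http_status == '201'
```

From there, it is a short step to loading a whole `$logs` file into whatever analysis tool we prefer.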


As seen in the examples, Azure Monitor allows us to query the logs.  This can be done in the portal but also using the Azure Monitor REST API, cross-platform Command-Line Interface (CLI) commands, PowerShell cmdlets or the .NET SDK.


We can export the sources to a Storage Account and specify a retention period in days.  We can also export them to Azure Event Hubs & Azure Log Analytics.  As specified in the table above, Activity logs can’t be sent to Log Analytics.  Also, Activity logs can be analyzed using Power BI.

There are a few reasons why we would export the logs:

  • Archiving scenario:  Azure Monitor keeps content for 30 days.  If we need longer retention, we need to archive it ourselves.  We can do that by exporting the content to a storage account; this also enables big data scenarios where we keep the logs for future data mining.
  • Analytics:  Log Analytics offers more capabilities for analyzing content.  It also offers 30 days of retention by default, but that can be extended to one year.  Basically, this upgrades us to Log Analytics.
  • Alternatively, we could export the logs to a storage account where they could be ingested by another SIEM (e.g. HP ArcSight).  See this article for details about SIEM integration.
  • Near-real-time analysis:  Azure Event Hubs allow us to send the content to many different places, and we could also analyze it on the fly using Azure Stream Analytics.

Alerts (Notifications)

Both Activity Logs & Metrics can trigger alerts.  Currently (as of November 2016), only Metric alerts can be set in the portal; Activity Log alerts must be set via PowerShell, CLI or the REST API.

Alerts are a powerful way to react automatically to the behavior of our Azure resources; when certain conditions are met (e.g. for a metric, when a value exceeds a threshold for a given period of time), the alert can send an email to a specified list of addresses, but it can also invoke a Web Hook.

Again, the ability to invoke a web hook opens up the platform.  We could, for instance, expose an Azure Automation runbook as a Web Hook ; it therefore means an alert could trigger whatever a runbook is able to do.
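As a sketch of what that opens up: the web hook receives a JSON payload describing the alert, and our endpoint can route it however we want.  Note that the payload shape below (a status plus a context with the alert details) is an assumption for illustration; check the alert documentation for the exact schema.

```python
import json

def handle_alert(body: str) -> str:
    """Turn an alert web hook payload into an action.  The field names used
    here (status, context, name) are illustrative assumptions."""
    alert = json.loads(body)
    name = alert["context"]["name"]
    if alert["status"] == "Activated":
        # Here we could call a runbook, open a ticket, scale a resource...
        return f"firing: {name}"
    return f"resolved: {name}"

sample = json.dumps({
    "status": "Activated",
    "context": {"name": "cpu-over-80", "metricName": "Percentage CPU"},
})
print(handle_alert(sample))  # firing: cpu-over-80
```

In a real deployment this function would sit behind an HTTPS endpoint (e.g. an Azure Automation runbook or a small web app) registered as the alert’s web hook.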


There are two RBAC roles around monitoring:  Reader & Contributor.

There are also some security considerations around monitoring:

  • Use a dedicated storage account (or multiple dedicated storage accounts) for monitoring data.  Basically, avoid mixing monitoring and “other” data, so that people do not gain access to monitoring data inadvertently and, vice versa, that people needing access to monitoring data do not gain access to “other” data (e.g. sensitive business data).
  • For the same reasons, use a dedicated namespace with Event Hubs
  • Limit access to monitoring data by using RBAC, e.g. by putting them in a separate resource group
  • Never grant ListKeys permission across a subscription as users could then gain access to reading monitoring data
  • If you need to give access to monitoring data, consider using a SAS token (for either Storage Account or Event Hubs)


Azure Monitor brings together a suite of tools to monitor our Azure resources.  It is an open platform in the sense it integrates easily with solutions that can complement it.

Single VM SLA

By now you’ve probably heard the news:  Azure became the first public cloud to offer an SLA on a single VM.

This was announced on Monday, November 21st.

In this article, I’ll quickly explore what that means.

Multi-VMs SLA

Before that announcement, in order to have SLA on connectivity to compute, we needed to have 2 or more VMs in an Availability Set.

This was and still is the High Availability solution.  It gives an SLA of 99.95% availability, measured monthly.

There is no constraint on the storage used (Standard or Premium) and the SLA includes planned maintenance and any failures.  So basically, we put 2+ VMs in an availability set and we’re good all the time.

Single-VM SLA

The new SLA has a few constraints.

  • The SLA isn’t the same.  Single-VM SLA is only 99.9% (as opposed to 99.95%).
  • VMs must use Premium storage for both OS & data disks.  Presumably, Premium Storage has better reliability.  This is interesting since, in terms of SLA, there is no distinction between Premium & Standard.
  • Single-VM SLA doesn’t include planned maintenance.  This is important.  It means we are covered with 99.9% availability as long as there is no planned maintenance.  More on that below.
  • The SLA is calculated on a monthly basis, as if the VM was up the entire month…  this means that if our VM is up the entire month, it has an SLA of 99.9%.  If, on the contrary, we turn it off 12 hours a day, we won’t have a 99.9% SLA on the 12 hours a day we are using it.
  • Since this was announced on November 21st, we can expect it to take effect 30 days later; to be on the safe side, I tell customers January 1st, 2017.

So, it is quite important to state that this isn’t a simple extension of the existing SLA to a single VM.  But it is very useful nonetheless.

Planned maintenance

I just wanted to expand a bit on planned maintenance.

What is planned maintenance?  Once in a while Azure needs to do maintenance that requires a shutdown of hosts.  Either the host itself gets updated (software / hardware) or it gets decommissioned altogether.  In those cases, the underlying VMs are shut down, the host is rebooted (or decommissioned, in which case the VMs get relocated) and then the VMs are booted back up.

This is a downtime for a VM.

With a Highly Available configuration, i.e. 2+ VMs in an Availability Set, the downtime of one VM doesn’t affect the availability of the availability set, since there is a guarantee that there will always be one VM available.

Without a Highly Available configuration, there is no such guarantee.  For that reason, I suppose, this downtime isn’t covered by the SLA.  Remember, 99.9% on a monthly basis means 43 minutes of downtime per month.  A planned maintenance could easily take a few minutes of downtime:  taking into account the VM shutdowns (all of the VMs on the host), the host restart and the VM boot.  That isn’t negligible compared to the 43 minutes of margin the SLA gives.

This would leave very little room to maneuver for potential hardware / software failures during the month.
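The 43-minute figure comes straight from the SLA arithmetic; as a quick sketch (assuming a 30-day month):

```python
# Monthly downtime budget implied by an availability SLA (30-day month).
def monthly_downtime_minutes(sla: float, days: int = 30) -> float:
    return days * 24 * 60 * (1 - sla)

single_vm = monthly_downtime_minutes(0.999)          # ~43.2 minutes
availability_set = monthly_downtime_minutes(0.9995)  # ~21.6 minutes
```

A single planned-maintenance reboot of a few minutes therefore consumes a sizable fraction of the single-VM budget, which is presumably why it is excluded from the SLA.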

Now, that isn’t the end of the world.  For quite a few months now we have had the redeploy feature in Azure.  This feature redeploys a VM to a new host.  If planned maintenance is underway in the data center, the new host should already be an updated one, in which case our VM won’t need a reboot anymore.

Planned maintenance follows a workflow where a notification is sent to the subscription owner a week in advance.  We can then trigger a redeploy at our earliest convenience (maintenance window).

Alternatively, we can trigger a redeploy every week, during a maintenance window, and ignore the notification emails.

High Availability

The previous section should have convinced you that the Single-VM SLA isn’t a replacement for a Highly Available (HA) configuration.

On top of Azure planned maintenance being outside the SLA, our own solution maintenance will impact the SLA of the solution.

In an HA configuration, we can take an instance down, update it, put it back, then upgrade the next one.

With a single VM we cannot do that:  solution maintenance will incur downtime and should therefore be done inside a maintenance window (of the solution).

For those reasons, I still recommend that customers use an HA configuration if HA is a requirement.

Enabled scenarios

What the Single-VM SLA brings isn’t a cheaper HA configuration.  Instead, it enables non-HA configurations with an SLA in Azure.

Until now, there were two modes.  Either we took the HA route or we lived without an SLA.

Often no SLA is OK.  For dev & test scenarios, for instance, an SLA is rarely required.

Often HA is required.  For most production scenarios I deal with, in the enterprise space & consumer facing space anyway, HA is a requirement.

Sometimes, though, HA doesn’t make business sense and no SLA isn’t acceptable either.  HA might not make business sense when:

  • The solution doesn’t support HA; this is sadly the case for a lot of legacy applications
  • The solution supports HA with a premium license, which itself doesn’t make business sense
  • Having HA increases the number of VMs to a point where the management of the solution would be cost prohibitive

For those scenarios, the single-VM SLA might hit the sweet spot.

Virtual Machine with 2 NICs

In Azure Resource Manager (ARM), Network Interface Cards (NICs) are a first-class resource.  You can define them without a Virtual Machine.

UPDATE:  As a reader kindly pointed out, NIC means Network Interface Controller, not Network Interface Card as I initially wrote.  Don’t be fooled by the Azure logo ;)

Let’s take a step back and look at how the different Azure Lego blocks snap together to get a VM exposed on the web.  ARM decoupled a lot of infrastructure components, so each of them is much simpler (compared to the old ASM Cloud Service), but there are many of them.

Related Resources

Here’s a diagram that can help:


Let’s look at the different components:

  • Availability Set:  contains a set of (or only one) VMs ; see Azure basics: Availability sets for details
  • Storage Account:  VM hard drives are page blobs located in one or many storage accounts
  • NIC:  A VM has one or many NICs
  • Virtual Network:  a NIC is part of a subnet, where it gets its private IP address
  • Load Balancer:  a load balancer exposes the port of a NIC (or a pool of NICs) through a public IP address

The important point for us here:  the NIC is the one being part of a subnet, not the VM.  That means a VM can have multiple NICs in different subnets.

Also, something not shown on the diagram above, a Network Security Group (NSG) can be associated with each NIC of a VM.

One VM, many NICs

Not all VMs can have multiple NICs.  For instance, in the standard A series, the following SKUs can have only one NIC:  A0, A1, A2 & A5.

You can take a look at the VM sizes documentation to see how many NICs a given SKU supports.

Why would you want to have multiple NICs?

Typically, this is a requirement for Network Appliances and for VMs passing traffic from one subnet to another.

Having multiple NICs enables more control, such as better traffic isolation.

Another requirement I’ve seen, typically with customers with high security requirements, is to isolate management traffic from transactional traffic.

For instance, let’s say you have a SQL VM with its port 1433 open to another VM (a web server).  That VM needs to open its RDP port for maintenance (i.e. for sys admins to log in and do maintenance).  But if both ports are opened on the same NIC, then a sys admin with RDP access could also have access to port 1433.  For some customers, that’s unacceptable.

So the way around that is to have 2 NICs.  One NIC will be used for port 1433 (SQL) and the other for RDP (maintenance).  Then you can put each NIC in a different subnet.  The SQL NIC will be in a subnet with an NSG allowing the web server to access it, while the RDP NIC will be in a subnet accessible only from the VPN Gateway, by maintenance people.


You will find here an ARM template (embedded in a Word document due to a limitation of the blog platform I’m using) deploying 2 VMs, each with 2 NICs:  a web NIC & a maintenance NIC.  The web NICs are in the web subnet and are publicly load balanced through a public IP, while the maintenance NICs are in a maintenance subnet and accessible only via private IPs.  The maintenance subnet lets RDP in, via its NSG.

The template will take a little while to deploy, due to the fact that it contains VMs.  You can see most of the resources deployed quite fast though.

If you’ve done VMs with ARM before, it is pretty much the same thing, except with two NIC references in the VM.  The only thing to watch for is that you have to specify which NIC is primary.  You do this with the primary property:

"networkProfile": {
  "networkInterfaces": [
      "id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('Web NIC Prefix'), '-', copyIndex()))]",
      "properties": {
        "primary": true
      "id": "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('Maintenance NIC Prefix'), '-', copyIndex()))]",
      "properties": {
        "primary": false

If you want to push the example further and test it with a VPN gateway, consult the point-to-site VPN documentation to set up a connection with your PC.


Although somewhat of a special case, a VM with 2 NICs helps you understand a lot of the design choices in ARM:  for instance, why NICs are stand-alone resources, why they are the ones belonging to a subnet and why NSGs are associated with them (not the VM).

To learn more, see the Azure virtual network documentation.

Refactoring Tags in WordPress Blog

I refactored the tags of my blog this weekend!

I display the tags (as a word cloud) on the right-hand side of my pages.  The tags grew organically since I started blogging in 2010.

As with many things that grow organically, it got out of hand over time.  The technological space I covered in 2010 isn’t the same today, but also, I tended to add very specific tags (e.g. DocumentDB), and with time I ended up with over 60 tags.

With hundreds of posts and 60 tags, the WordPress portal (which I use for my blog) wasn’t ideal for making changes.

So I got my WordPress API .NET SDK out of the mothballs ;)  I also updated my “portal” (deployed on Azure App Services) to that SDK.  It is probably one of the worst examples of an ASP.NET MVC application and surely the ugliest site out there.  It’s functional, that’s all I can say ;)

In case you want to do something similar on your blog, here is how I did it.


First I defined target tags.  I did that by looking at the current list of tags, but mostly I just gave myself a direction, i.e. the level of generality of tags I wanted.

I then iterated on the following:

  • Looked at the tags in the WordPress portal and found targets for renaming
  • Used the “Change Post Tags” functionality to change tags “en masse” on all the posts
  • Alternatively, used the “Edit Post Tags” functionality to edit the tags in Excel
  • Then used the “Clean up Tags” functionality to remove tags that no longer had any posts associated with them

BTW, each of those functionalities tells you what it is going to do to your posts and asks for your consent before doing it.

After a couple of iterations I got the bulk of my tags cleaned up.

The last thing I did was to define the tags, i.e. add descriptions.  I did that within the WordPress portal.

Hopefully that should make my posts easier to search!

Azure Active Directory Labs Series – Multi-Factor Authentication

Back in June I had the pleasure of delivering training on Azure Active Directory to two customer crowds.  I say pleasure because not only do I love to share knowledge, but the preparation of the training also forces me to go deep on some aspects of what I’m going to teach.

That training had 8 labs and I thought it would be great to share them with the general public.  The labs follow and build on each other.

You can find the exhaustive list on the Cloud Identity & Azure Active Directory page.  This is the eighth and last lab.

In the current lab we configure AAD to provide multi-factor authentication.

Create MFA provider

  1. Go to the legacy portal @
  2. Scroll down the left menu to the bottom and select Active Directory
  3. You should see the following screen
  4. Select Multi-Factor Auth Providers
  5. Select Create a new multi-factor authentication provider
  6. Fill in the form
    • Name: DemoProvider
    • Usage Model: Leave it as it is for the demo
    • Subscription: Select the subscription you are using
    • Directory: Select the directory you have created in a previous lab
  7. Click Create button
  8. You should see the following screen
  9. In the screen bottom, click Manage
  10. This will open a new web page
  11. Click the Configure link (next to the gear icon)
  12. Here you could setup different policies on the MFA of your users
  13. On the left hand menu, select Caching
  14. Here you could define different caches to streamline the authentication process, i.e. not requiring MFA again for a period of time once the user has authenticated with MFA
  15. On the left hand menu, select Voice Messages
  16. Here you could configure personalized voice messages
  17. Close the browser page

Enable users for MFA

  1. Go to the legacy portal @
  2. Scroll down the left menu to the bottom and select Active Directory
  3. You should see the following screen
  4. Select a tenant you created for this lab & enter it
  5. Select the Users menu
  6. At the screen bottom, click the Manage Multi-Factor Auth button
  7. This will open a new web page
  8. Select the first user, i.e. Alan Scott
  9. In the right column, click the enable link
  10. In the dialog box, click the Enable multi-factor auth button
  11. Select the Service Settings tab at the top
  12. Scroll down to the verification options
  13. Select only text message to phone
  14. Click the Save button
  15. Close the web page
  16. Back to the user list in the portal, select the first user (the one we just enabled) and enter it
  17. Select the Work Info tab
  18. Under Contact Info & Mobile Phone, select Canada (+1) as region
  19. Enter your own mobile phone number
  20. Click the Save button

Test MFA

  1. Open an InPrivate / incognito web browser session
  2. Navigate to
  3. Enter credentials
    • For the email, enter the full user name of the user we just enabled; this can be found in the Users list (user name column)
    • Enter the password of the user
  4. You will be prompted to setup MFA, click the Set it up now button
  5. You should see the following screen (with your mobile phone instead of the orange rectangle)
  6. Click the Contact Me button
  7. You should receive a text message on your mobile phone with a 6-digit number
  8. Enter that number in the web page
  9. Click the Verify button
  10. It should tell you verification successful
  11. Click the Done button
  12. You should proceed to the portal as an authenticated user

Post Lab

You can go back to the admin portal for MFA and try different configurations.