Tag Archives: PowerShell

PowerShell with a strong focus on Azure SDK / Automation.

Moving from Standard to Premium disks and back

Azure Managed Disks (introduced in February 2017) simplified the way Virtual Machine disks are managed in Azure.

A little known advantage of that resource is that it exposes its storage type, i.e. Standard vs Premium, as a simple property that can easily be changed.

Why would we do that?  Typically we’ll move from Standard to Premium storage to improve disk latency, but also resilience (for instance, only VMs with Premium disks can have a single-VM SLA).  We might want to move from Premium to Standard in order to drive the cost of the solution down, although storage rarely dominates the cost of a solution.

In general, it can be interesting to test performance on both.  As with many things in Azure, you can quickly do it, so why not?

Managed Disks

For this procedure to work, we need managed disks.

If our Virtual Machines have unmanaged disks (i.e. .vhd files in a blob storage container), we need to convert them to managed disks first.

Fortunately, there is a simple procedure to migrate to managed disks.
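As a sketch, assuming the AzureRM module, the conversion can be done with ConvertTo-AzureRmVMManagedDisk (the resource group and VM names below are placeholders):

```powershell
# The VM must be deallocated before conversion.
Stop-AzureRmVM -ResourceGroupName "Docker" -Name "myVM" -Force

# Convert all of the VM's disks (OS and data) to managed disks.
ConvertTo-AzureRmVMManagedDisk -ResourceGroupName "Docker" -VMName "myVM"
```

The cmdlet converts every disk attached to the VM; there is no need to convert them one by one.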

Portal Experience

Let’s start with the portal experience.

First, let’s open a Resource Group where I know I do have some VMs.


There are two resources that should interest us in there.

The first one is a Virtual Machine.  We’ll need to make sure Virtual Machines are shut down from the portal’s perspective, i.e. they aren’t provisioned anymore (as opposed to doing a shutdown from within the VMs).

The second resource is a disk.  Let’s click on that one.


Changing the account type is right on the overview tab of the disk resource.  We can simply change it, hit save, and within seconds the disk is marked as changed.

What really happens is that a copy is triggered in the background.  The disk can be used right away thanks to a mechanism called “copy on read”:  if the VM tries to read a page of the disk which hasn’t been copied yet, that page is copied first, before the read can occur.

For this reason we might experience a little more latency at first, so for performance tests it is better to wait.  There is no mechanism to know when the background copy is completed, so it is best to assume the worst for performance tests.

PowerShell Script

The Portal Experience is quite straightforward, but as usual, automation via PowerShell scripts is often desirable if we have more than a handful of migrations to do, for instance, 10 disks.

As with the Portal Experience, we need to shut down the impacted Virtual Machines first.  This can also be done using a PowerShell script, but I won’t cover it here.
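For completeness, a minimal sketch of that shutdown (assuming the AzureRM module; the resource group name is a placeholder):

```powershell
$rg = "Docker"

# Deallocate every VM in the resource group so their disks can be updated.
Get-AzureRmVM -ResourceGroupName $rg | foreach {
    Stop-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force
}
```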

The main cmdlets to know here are Get-AzureRmDisk & Update-AzureRmDisk.

We first do a GET in order to get the disk meta-data object, we then change the AccountType property and do an UPDATE to push back the change.

In the following example, I zoom in to a Resource Group and convert all the disks to Premium storage:

$rg = "Docker"

Get-AzureRmDisk -ResourceGroupName $rg | foreach {
    $disk = $_
    $disk.AccountType = "PremiumLRS"
    Update-AzureRmDisk -ResourceGroupName $disk.ResourceGroupName -DiskName $disk.Name -Disk $disk
}

The property AccountType can take the following values:

  • StandardLRS
  • PremiumLRS
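Moving back to Standard is the same loop with the other value (a sketch; the resource group name is a placeholder):

```powershell
$rg = "Docker"

# Downgrade every disk in the resource group to Standard storage.
Get-AzureRmDisk -ResourceGroupName $rg | foreach {
    $disk = $_
    $disk.AccountType = "StandardLRS"
    Update-AzureRmDisk -ResourceGroupName $disk.ResourceGroupName -DiskName $disk.Name -Disk $disk
}
```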


We’ve seen how to easily migrate from one type of storage to another with Azure Virtual Machine Managed Disks.

This allows us to quickly change the property of an environment either permanently or in order to test those parameters (e.g. performance, stability, etc.).


How to know where a Service is Available in Azure

Azure has a global footprint of 40 regions at the time of this writing (mid-September 2017).

Not all services are available in every region.  Most aren’t, in fact.  Only foundational services (e.g. storage) are available everywhere.

In order to know where a service is available, we can look it up in the Azure documentation.


This is handy when we’re building an architecture or a quote.

What if we want to build some automation around the availability of a service or simply check it via PowerShell because opening a browser is too hard today?

There are really two ways to get there.  Either we look at a specific region and query which services are available in it, or we look at a service and query where it’s available.

Provider Model

Services aren’t “first class citizens” in Azure.  Resource Providers are.

Each resource provider offers a set of resources and operations for working with an Azure service.

Where is my service available?

Let’s start by finding the regions where a given service is available.

The key PowerShell cmdlet is Get-AzureRmResourceProvider.

Let’s start by finding the service we’re interested in.

Get-AzureRmResourceProvider | select ProviderNamespace

This returns the name of all the Azure provider namespaces (around 40 at the time of this writing).

Let’s say we are interested in Microsoft.DataLakeStore.

Get-AzureRmResourceProvider -ProviderNamespace Microsoft.DataLakeStore

This returns the resource providers associated with the given namespace.

We now need to pick the one with the resource types interesting us.  In this case, let’s say, we are interested in Azure Data Lake Store accounts (the core resource for the service).  We can see it’s available in three regions:

ProviderNamespace : Microsoft.DataLakeStore
RegistrationState : Registered
ResourceTypes     : {accounts}
Locations         : {East US 2, North Europe, Central US}
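Assuming the AzureRM module, the list of locations for a given resource type can be extracted directly (a sketch):

```powershell
# Grab the "accounts" resource type and expand its locations.
(Get-AzureRmResourceProvider -ProviderNamespace Microsoft.DataLakeStore).ResourceTypes |
    where {$_.ResourceTypeName -eq "accounts"} |
    select -ExpandProperty Locations
```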

Which services are available in my region?

Now, let’s take the opposite approach.  Let’s start with a region and see what services are available in there.

Here the key cmdlet is Get-AzureRmLocation

Get-AzureRmLocation | select Location

This lists the regions we have access to.  A user rarely has access to all regions, which is why the list you see is likely smaller than 40 items at the time of this writing.

Let’s look at what’s available close to my place, canadaeast.

Get-AzureRmLocation | where {$_.Location -eq "canadaeast"} | select -ExpandProperty Providers

This gives us a quick view of what’s available in a region.


We saw how to query Azure REST API using PowerShell in order to know where a service is available or what services are available in a region.

This could be especially useful if we want to automate such a check or do more sophisticated queries, e.g. which regions have both service X and service Y available?
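Such a query can be sketched by intersecting the location lists of two resource types (the namespaces and resource type names below are examples):

```powershell
# Locations of Azure Data Lake Store accounts.
$x = (Get-AzureRmResourceProvider -ProviderNamespace Microsoft.DataLakeStore).ResourceTypes |
    where {$_.ResourceTypeName -eq "accounts"} |
    select -ExpandProperty Locations

# Locations of Azure SQL servers.
$y = (Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Sql).ResourceTypes |
    where {$_.ResourceTypeName -eq "servers"} |
    select -ExpandProperty Locations

# Regions where both are available.
$x | where {$y -contains $_}
```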

Azure SQL Elastic Pool – Moving databases across pools using PowerShell


I’ve written a bit about Azure SQL Elastic Pool lately:  an overview, about ARM templates and about database size.

One of the many great features of Azure SQL Elastic Pool is that like Azure SQL Database (standalone), we can change the eDTU capacity of the pool “on the fly”, without downtime.

Unlike its standalone cousin though, we can’t change the edition of the pool.  The edition is either Basic, Standard or Premium.  It is set at creation and is immutable after that.

If we want to change the edition of a pool, the obvious way is to create another pool, move the databases there, delete the original, recreate it with a different edition and move the databases back.

This article shows how to do that using PowerShell.

You might want to move databases around for other reasons, typically to optimize the density and performance of pools.  You would then use a very similar script.

Look at the pool

Let’s start with the pools we established with the sample ARM template of a previous article.

From there we can look at the pool Pool-A using the following PowerShell command:

$old = Get-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName Pool-A -ServerName pooldemoserver


We can see the pool’s current edition is Standard while its Database Transaction Unit (DTU) count is 200.


Create a temporary pool

We’ll create a temporary pool, aptly named temporary, attached to the same server:

$temp = New-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName Temporary -ServerName pooldemoserver -Edition $old.Edition -Dtu $old.Dtu


It’s important to create a pool that the databases can be moved into.  The maximum size of a database depends on the edition and the number of DTUs of the elastic pool.  The easiest way is to create a pool with the same edition / DTU count, and this is what we do here by referencing the $old variable.

Move databases across

First, let’s grab the databases in the original pool:

$dbs = Get-AzureRmSqlDatabase -ResourceGroupName DBs -ServerName pooldemoserver | where {$_.ElasticPoolName -eq $old.ElasticPoolName}

$dbs | select DatabaseName

ElasticPoolName is a property of a database.  We’ll simply change it by setting each database:

$dbs | foreach {Set-AzureRmSqlDatabase -ResourceGroupName DBs -ServerName pooldemoserver -DatabaseName $_.DatabaseName -ElasticPoolName $temp.ElasticPoolName}

That command takes longer to run as the databases have to move from one compute to another.

Delete / Recreate pool

We can now delete the original pool.  It’s important to note that we wouldn’t have been able to delete a pool with databases in it.

Remove-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName $old.ElasticPoolName -ServerName pooldemoserver

$new = New-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName $old.ElasticPoolName -ServerName pooldemoserver -Edition Premium -Dtu 250

The second line recreates it with Premium edition.  We could keep the original DTU, but it’s not always possible since different editions support different DTU values.  In this case, for instance, it wasn’t possible since 200 DTUs isn’t supported for Premium pools.

If you execute those two commands without pausing in between, you will likely receive an error.  It is one of those cases where the Azure REST API returns, and the resource you asked to be removed seems to be removed, but you can’t recreate it yet.  An easy workaround consists of pausing or retrying.
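A minimal retry sketch (the delay and retry count are arbitrary choices):

```powershell
# Retry the creation until the old pool is fully gone (give up after 10 attempts).
$created = $false
for ($i = 0; $i -lt 10 -and -not $created; $i++) {
    try {
        $new = New-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName $old.ElasticPoolName `
            -ServerName pooldemoserver -Edition Premium -Dtu 250
        $created = $true
    }
    catch {
        Start-Sleep -Seconds 15
    }
}
```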

Move databases back

We can then move the databases back to the new pool:

$dbs | foreach {Set-AzureRmSqlDatabase -ResourceGroupName DBs -ServerName pooldemoserver -DatabaseName $_.DatabaseName -ElasticPoolName $new.ElasticPoolName}

Remove-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName $temp.ElasticPoolName -ServerName pooldemoserver

In the second line we delete the temporary pool.  Again, this takes a little longer to execute since databases must be moved from one compute to another.


We showed how to move databases from a pool to another.

The pretext was a change in elastic pool edition but we might want to move databases around for other reasons.

In practice you might not want to move your databases twice to avoid the duration of the operation and might be happy to have a different pool name.  In the demo we did, the move took less than a minute because we had two empty databases.  With many databases totaling a lot of storage it would take much more time to move those.

Finding ARM template ApiVersion

Writing an Azure ARM template for a Resource Group is getting easier every day but it remains a sport for the initiated.

Here I want to give a tip about something I often find hard:  how to get the API version of a resource in an ARM template?

As everything hardcore in the platform, we’ll use PowerShell!

The example I’ll use today is creating a backup vault using ARM.


As usual, please log in from the PowerShell Integrated Scripting Environment (ISE) with the usual Login-AzureRmAccount command.

Find your provider

First you need to find the provider for the resource you want to create.

For that, use Get-AzureRmResourceProvider.  This will return the list of all available providers.


For me, since I’m looking for the backup vault, the Microsoft.Backup provider seems promising.

The following will give you all the resource types under the provider (in this case only one):

(Get-AzureRmResourceProvider -ProviderNamespace "Microsoft.Backup").ResourceTypes


You can see already that we get very valuable information here:  the API versions I was looking for, but also the Azure regions where the resources are available.
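To pull just the API versions out (a sketch, assuming the AzureRM module):

```powershell
# Expand the supported API versions of the BackupVault resource type.
((Get-AzureRmResourceProvider -ProviderNamespace "Microsoft.Backup").ResourceTypes |
    where {$_.ResourceTypeName -eq "BackupVault"}).ApiVersions
```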

ARM Template

Once you know the API version, it is much easier to create the ARM template.  In the case of my backup vault:

    {
      "name": "AdvVault-cp",
      "type": "Microsoft.Backup/BackupVault",
      "apiVersion": "2015-03-15",
      "location": "[resourceGroup().location]",
      "tags": { },
      "properties": {
        "sku": {
          "name": "[parameters('skuName')]"
        }
      }
    }


There really is a wealth of information you can dig up by using just a few Azure PowerShell cmdlets.

One of the tricky parts when you reverse engineer an ARM template, starting from the Resource Explorer in the portal, is to find the API version, which you can get with a few cmdlets as demonstrated here.

Listing Resources under Resource Group with Azure PowerShell

Simple task:  I want to list resources under a single resource group within one of my subscriptions.

This is an excuse to show how to login on different subscription and play a bit with the PowerShell Azure SDK.

First thing, start PowerShell Integrated Script Environment (ISE) with the Azure SDK loaded.


Login to your account using Add-AzureRmAccount.


This will prompt you to enter an ID + credentials.  Once you enter them, you should be in your subscription.

…  if you have more than one subscription?  Ok, that’s a little more complicated.

You need to find the subscription ID you are interested in.  Either go to https://account.windowsazure.com/Subscriptions to find it in the UI or…  use more PowerShell scripts:
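A quick way to list them (a sketch using the AzureRM cmdlet; the property names vary across AzureRM versions):

```powershell
# List the subscriptions the logged-in account can see.
Get-AzureRmSubscription | select SubscriptionName, SubscriptionId
```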


Once you have your subscription ID, simply grab the ID and pass it to:

Add-AzureRmAccount -SubscriptionId <your subscription ID here>

This should prompt you again and afterwards, you’ll be in the context of the right subscription.

List resources under resource group

Now that you’re within the right subscription, let’s list the resource groups within that subscription:
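Listing the resource groups can be sketched as:

```powershell
# List the resource groups in the current subscription.
Get-AzureRmResourceGroup | select ResourceGroupName
```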


This will give you the list of resource groups in the subscription.  Grab the resource group name and then you can list the resources underneath:

$res = Get-AzureRmResource | Where-Object {$_.ResourceGroupName -eq "<your resource group name>"}


That’s it!  I just wanted to get this ceremony out of the way.

You can build on that and use all different kind of cmdlets to query and manipulate your subscriptions.

Move Azure Resources between Resource Groups using Powershell

Ouf…  I’ve been using Azure for quite a while in the old (current actually) portal.  Now I look into my resources in the new (preview) portal and…  what a mess of a resource group mosaic!

Unfortunately, at the time of this writing, you can’t move resources from a Resource Group to another via the portal…


If you’ve been there, hang on, I have the remedy and it involves Powershell!

I’ll assume you’ve installed the latest Azure PowerShell cmdlets.  Fire up Powershell ISE or your favorite interface.

First things first, don’t forget to tell Powershell to switch to Resource Manager SDK:

Switch-AzureMode AzureResourceManager

Then, you’ll need to know the resource ID of your resources.  You can use the cmdlet Get-AzureResource.

In my case, I want to move everything related to an app named “Readings” into a new resource group I’ve created (in the portal).  So I can grab all those resources:

Get-AzureResource | Where-Object {$_.Name.Contains("eading")}

Then I can move my resources:

Get-AzureResource | Where-Object {$_.Name.Contains("eading")} `
| foreach {Move-AzureResource -DestinationResourceGroupName "Reading" -ResourceId $_.ResourceId -Force}

Unfortunately, not every resource will accept to be moved like this.  I had issues with both Insights objects and Job Collections (Scheduler).  The latter is managed only in the old (current) portal, so I would think it isn’t under the Azure Resource Manager yet.  For Insights, a similar story probably applies.