Tag Archives: Automation

Azure Automation:  automation of tasks, either scheduled or manually triggered.

How to know where a Service is Available in Azure

Azure has a global footprint of 40 regions at the time of this writing (mid-September 2017).

Not all services are available in every region.  Most aren’t, in fact.  Only foundational services (e.g. storage) are available everywhere.

In order to know where a service is available, we can look at:

https://azure.microsoft.com/en-us/regions/services/

This is handy when we’re building an architecture or a quote.

What if we want to build some automation around the availability of a service or simply check it via PowerShell because opening a browser is too hard today?

There are really two ways to get there.  Either we look at a specific region and query which services are available there, or we look at a service and query where it’s available.

Provider Model

Services aren’t “first class citizens” in Azure.  Resource Providers are.

Each resource provider offers a set of resources and operations for working with an Azure service.

Where is my service available?

Let’s start by finding the regions where a given service is available.

The key PowerShell cmdlet is Get-AzureRmResourceProvider.

Let’s start by finding the service we’re interested in.


Get-AzureRmResourceProvider | select ProviderNamespace

This returns the name of all the Azure provider namespaces (around 40 at the time of this writing).

Let’s say we are interested in Microsoft.DataLakeStore.


Get-AzureRmResourceProvider -ProviderNamespace Microsoft.DataLakeStore

This returns the resource providers associated with the given namespace.

We now need to pick the one with the resource types that interest us.  In this case, let’s say we are interested in Azure Data Lake Store accounts (the core resource type of the service).  We can see it’s available in three regions:


ProviderNamespace : Microsoft.DataLakeStore
RegistrationState : Registered
ResourceTypes     : {accounts}
Locations         : {East US 2, North Europe, Central US}
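If we want to consume that list in a script, we can expand it from the resource type.  Here is a minimal sketch (the variable names are mine):


$provider = Get-AzureRmResourceProvider -ProviderNamespace Microsoft.DataLakeStore
#  Isolate the 'accounts' resource type and list the regions where it is available
$accounts = $provider.ResourceTypes | Where-Object { $_.ResourceTypeName -eq "accounts" }
$accounts.Locations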

Which services are available in my region?

Now, let’s take the opposite approach.  Let’s start with a region and see what services are available in there.

Here the key cmdlet is Get-AzureRmLocation


Get-AzureRmLocation | select Location

This lists the regions we have access to.  A user rarely has access to all regions, which is why the list you see is likely smaller than 40 items at the time of this writing.

Let’s look at what’s available close to my place, canadaeast.


Get-AzureRmLocation | where {$_.Location -eq "canadaeast"} | select -ExpandProperty Providers

This gives us a quick view of what’s available in a region.

Summary

We saw how to query the Azure REST API using PowerShell in order to know where a service is available or which services are available in a given region.

This could be especially useful if we want to automate such a check or do more sophisticated queries, e.g. which regions have both services X & Y available?  The sketch below shows one way to do it.
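Here is a sketch of such a query, assuming service X is Data Lake Store accounts and service Y is SQL servers (Microsoft.Sql / servers is simply my pick for the example):


#  Locations of Data Lake Store accounts
$x = (Get-AzureRmResourceProvider -ProviderNamespace Microsoft.DataLakeStore).ResourceTypes |
    Where-Object { $_.ResourceTypeName -eq "accounts" }
#  Locations of SQL servers
$y = (Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Sql).ResourceTypes |
    Where-Object { $_.ResourceTypeName -eq "servers" }
#  Regions appearing in both location lists
$x.Locations | Where-Object { $y.Locations -contains $_ }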


Creating an image with 2 Managed Disks for VM Scale Set

UPDATE (23-06-2017):  Fabio Hara, a colleague of mine from Brazil, has published the ARM template on his GitHub.  This makes it much easier to try the content of this article.  Thank you Fabio!

We talked about Managed Disks, now let’s use them.

Let’s create an image from an OS + Data disk & create a Scale Set with that image.

Deploy ARM Template

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "VM Admin User Name": {
      "defaultValue": "myadmin",
      "type": "string"
    },
    "VM Admin Password": {
      "defaultValue": null,
      "type": "securestring"
    },
    "VM Size": {
      "defaultValue": "Standard_DS4",
      "type": "string",
      "allowedValues": [
        "Standard_DS1",
        "Standard_DS2",
        "Standard_DS3",
        "Standard_DS4",
        "Standard_DS5"
      ],
      "metadata": {
        "description": "SKU of the VM."
      }
    },
    "Public Domain Label": {
      "type": "string"
    }
  },
  "variables": {
    "Vhds Container Name": "vhds",
    "frontIpRange": "10.0.1.0/24",
    "Public IP Name": "MyPublicIP",
    "Public LB Name": "PublicLB",
    "Front Address Pool Name": "frontPool",
    "Front NIC": "frontNic",
    "Front VM": "Demo-VM",
    "Front Availability Set Name": "frontAvailSet",
    "Private LB Name": "PrivateLB",
    "VNET Name": "Demo-VNet"
  },
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "name": "[variables('Public IP Name')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "tags": {
        "displayName": "Public IP"
      },
      "properties": {
        "publicIPAllocationMethod": "Dynamic",
        "idleTimeoutInMinutes": 4,
        "dnsSettings": {
          "domainNameLabel": "[parameters('Public Domain Label')]"
        }
      }
    },
    {
      "type": "Microsoft.Network/virtualNetworks",
      "name": "[variables('VNet Name')]",
      "apiVersion": "2016-03-30",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "10.0.0.0/16"
          ]
        },
        "subnets": [
          {
            "name": "front",
            "properties": {
              "addressPrefix": "[variables('frontIpRange')]",
              "networkSecurityGroup": {
                "id": "[resourceId('Microsoft.Network/networkSecurityGroups', 'frontNsg')]"
              }
            }
          }
        ]
      },
      "resources": [],
      "dependsOn": [
        "[resourceId('Microsoft.Network/networkSecurityGroups', 'frontNsg')]"
      ]
    },
    {
      "type": "Microsoft.Network/loadBalancers",
      "name": "[variables('Public LB Name')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "tags": {
        "displayName": "Public Load Balancer"
      },
      "properties": {
        "frontendIPConfigurations": [
          {
            "name": "LoadBalancerFrontEnd",
            "comments": "Front end of LB:  the IP address",
            "properties": {
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses/', variables('Public IP Name'))]"
              }
            }
          }
        ],
        "backendAddressPools": [
          {
            "name": "[variables('Front Address Pool Name')]"
          }
        ],
        "loadBalancingRules": [
          {
            "name": "Http",
            "properties": {
              "frontendIPConfiguration": {
                "id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('Public LB Name')), '/frontendIPConfigurations/LoadBalancerFrontEnd')]"
              },
              "frontendPort": 80,
              "backendPort": 80,
              "enableFloatingIP": false,
              "idleTimeoutInMinutes": 4,
              "protocol": "Tcp",
              "loadDistribution": "Default",
              "backendAddressPool": {
                "id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('Public LB Name')), '/backendAddressPools/', variables('Front Address Pool Name'))]"
              },
              "probe": {
                "id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('Public LB Name')), '/probes/TCP-Probe')]"
              }
            }
          }
        ],
        "probes": [
          {
            "name": "TCP-Probe",
            "properties": {
              "protocol": "Tcp",
              "port": 80,
              "intervalInSeconds": 5,
              "numberOfProbes": 2
            }
          }
        ],
        "inboundNatRules": [
          {
            "name": "SSH-2-Primary",
            "properties": {
              "frontendIPConfiguration": {
                "id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('Public LB Name')), '/frontendIPConfigurations/LoadBalancerFrontEnd')]"
              },
              "frontendPort": 22,
              "backendPort": 22,
              "protocol": "Tcp"
            }
          }
        ],
        "outboundNatRules": [],
        "inboundNatPools": []
      },
      "dependsOn": [
        "[resourceId('Microsoft.Network/publicIPAddresses', variables('Public IP Name'))]"
      ]
    },
    {
      "apiVersion": "2015-06-15",
      "name": "frontNsg",
      "type": "Microsoft.Network/networkSecurityGroups",
      "location": "[resourceGroup().location]",
      "tags": {},
      "properties": {
        "securityRules": [
          {
            "name": "Allow-SSH-From-Everywhere",
            "properties": {
              "protocol": "Tcp",
              "sourcePortRange": "*",
              "destinationPortRange": "22",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "*",
              "access": "Allow",
              "priority": 100,
              "direction": "Inbound"
            }
          },
          {
            "name": "Allow-Health-Monitoring",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "*",
              "sourceAddressPrefix": "AzureLoadBalancer",
              "destinationAddressPrefix": "*",
              "access": "Allow",
              "priority": 200,
              "direction": "Inbound"
            }
          },
          {
            "name": "Disallow-everything-else-Inbound",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "*",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "*",
              "access": "Deny",
              "priority": 300,
              "direction": "Inbound"
            }
          },
          {
            "name": "Allow-to-VNet",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "*",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "VirtualNetwork",
              "access": "Allow",
              "priority": 100,
              "direction": "Outbound"
            }
          },
          {
            "name": "Allow-to-8443",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "8443",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "Internet",
              "access": "Allow",
              "priority": 200,
              "direction": "Outbound"
            }
          },
          {
            "name": "Disallow-everything-else-Outbound",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "*",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "*",
              "access": "Deny",
              "priority": 300,
              "direction": "Outbound"
            }
          }
        ],
        "subnets": []
      }
    },
    {
      "type": "Microsoft.Network/networkInterfaces",
      "name": "[variables('Front NIC')]",
      "tags": {
        "displayName": "Front NICs"
      },
      "apiVersion": "2016-03-30",
      "location": "[resourceGroup().location]",
      "properties": {
        "ipConfigurations": [
          {
            "name": "ipconfig",
            "properties": {
              "privateIPAllocationMethod": "Dynamic",
              "subnet": {
                "id": "[concat(resourceId('Microsoft.Network/virtualNetworks', variables('VNet Name')), '/subnets/front')]"
              },
              "loadBalancerBackendAddressPools": [
                {
                  "id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('Public LB Name')), '/backendAddressPools/', variables('Front Address Pool Name'))]"
                }
              ],
              "loadBalancerInboundNatRules": [
                {
                  "id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('Public LB Name')), '/inboundNatRules/SSH-2-Primary')]"
                }
              ]
            }
          }
        ],
        "dnsSettings": {
          "dnsServers": []
        },
        "enableIPForwarding": false
      },
      "resources": [],
      "dependsOn": [
        "[resourceId('Microsoft.Network/virtualNetworks', variables('VNet Name'))]",
        "[resourceId('Microsoft.Network/loadBalancers', variables('Public LB Name'))]"
      ]
    },
    {
      "type": "Microsoft.Compute/disks",
      "name": "[concat(variables('Front VM'), '-data')]",
      "apiVersion": "2016-04-30-preview",
      "location": "[resourceGroup().location]",
      "properties": {
        "creationData": {
          "createOption": "Empty"
        },
        "accountType": "Premium_LRS",
        "diskSizeGB": 32
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachines",
      "name": "[variables('Front VM')]",
      "tags": {
        "displayName": "Front VMs"
      },
      "apiVersion": "2016-04-30-preview",
      "location": "[resourceGroup().location]",
      "properties": {
        "availabilitySet": {
          "id": "[resourceId('Microsoft.Compute/availabilitySets', variables('Front Availability Set Name'))]"
        },
        "hardwareProfile": {
          "vmSize": "[parameters('VM Size')]"
        },
        "storageProfile": {
          "imageReference": {
            "publisher": "OpenLogic",
            "offer": "CentOS",
            "sku": "7.3",
            "version": "latest"
          },
          "osDisk": {
            "name": "[variables('Front VM')]",
            "createOption": "FromImage",
            "caching": "ReadWrite"
          },
          "dataDisks": [
            {
              "lun": 2,
              "name": "[concat(variables('Front VM'), '-data')]",
              "createOption": "attach",
              "managedDisk": {
                "id": "[resourceId('Microsoft.Compute/disks', concat(variables('Front VM'), '-data'))]"
              },
              "caching": "Readonly"
            }
          ]
        },
        "osProfile": {
          "computerName": "[variables('Front VM')]",
          "adminUsername": "[parameters('VM Admin User Name')]",
          "adminPassword": "[parameters('VM Admin Password')]"
        },
        "networkProfile": {
          "networkInterfaces": [
            {
              "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('Front NIC'))]"
            }
          ]
        }
      },
      "resources": [],
      "dependsOn": [
        "[resourceId('Microsoft.Network/networkInterfaces', variables('Front NIC'))]",
        "[resourceId('Microsoft.Compute/availabilitySets', variables('Front Availability Set Name'))]",
        "[resourceId('Microsoft.Compute/disks', concat(variables('Front VM'), '-data'))]"
      ]
    },
    {
      "name": "[variables('Front Availability Set Name')]",
      "type": "Microsoft.Compute/availabilitySets",
      "location": "[resourceGroup().location]",
      "apiVersion": "2016-04-30-preview",
      "tags": {
        "displayName": "FrontAvailabilitySet"
      },
      "properties": {
        "platformUpdateDomainCount": 5,
        "platformFaultDomainCount": 3,
        "managed": true
      },
      "dependsOn": []
    }
  ]
}
  1. We use the resource group named md-demo-image
  2. This deploys a single Linux VM into a managed availability set using a premium managed disk
  3. The VM has both OS & a data disk
  4. The deployment takes a few minutes
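To run the deployment from PowerShell instead of the portal, a sketch along these lines should do (the template file name and the region are examples):


New-AzureRmResourceGroup -Name md-demo-image -Location "East US 2"
New-AzureRmResourceGroupDeployment -ResourceGroupName md-demo-image -TemplateFile .\image-demo.json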

Customize VM

  1. Log in to the VM
    • We suggest using the PuTTY tool with SSH (the SSH port is opened on the NSG)
    • Look at MyPublicIP to find the DNS name of the public IP in order to SSH to it
  2. Format & mount the data disk
    • The data disk is LUN-2 (it should be /dev/sdc)
    • We will mount it to /data
    • Write the mount point permanently in /etc/fstab
  3. In the bash shell, type
    cd /data
    sudo touch mydata
    ls
  4. We just created a file on the data disk

Log into ISE

  1. Open up PowerShell ISE
  2. Type Add-AzureRmAccount
  3. Enter your credentials; those credentials should be the same ones you use to log into the Azure Portal
  4. If you have more than one subscription:
    1. Type Get-AzureRmSubscription
    2. This should list all the subscriptions you have access (even partial) to
    3. Select the SubscriptionId (a GUID) of the subscription you want to use
    4. Type Select-AzureRmSubscription -SubscriptionId <SubscriptionId>
      where <SubscriptionId> is the value you just selected
    5. This will select the specified subscription as the “current one”, i.e. future queries will be done against that subscription
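Put together, the login sequence looks like this (the subscription ID below is a placeholder):


Add-AzureRmAccount
Get-AzureRmSubscription
Select-AzureRmSubscription -SubscriptionId "00000000-0000-0000-0000-000000000000"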

Create Image

You can read about details of this procedure at https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-linux-capture-image & https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-capture-image-resource.

  1. In bash shell, type
    sudo waagent -deprovision+user -force
  2. This de-provisions the VM itself
  3. In PowerShell, type
    $rgName = "md-demo-image"
    $imageName = "Demo-VM-Image"
    $vm = Get-AzureRmVM -ResourceGroupName $rgName
    Stop-AzureRmVM -ResourceGroupName $rgName -Name $vm.Name -Force
  4. This will stop the VM
  5. In PowerShell, type
    Set-AzureRmVm -ResourceGroupName $rgName -Name $vm.Name -Generalized
  6. This generalizes the VM
  7. In PowerShell, type
    $imageConfig = New-AzureRmImageConfig -Location $vm.Location -SourceVirtualMachineId $vm.Id
    New-AzureRmImage -ImageName $imageName -ResourceGroupName $rgName -Image $imageConfig
  8. This creates an image resource containing both the OS & data disks
  9. We can see the image in the portal and validate it has two disks in it
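As a quick check from PowerShell (a sketch), we can fetch the image and inspect its storage profile; it should show one OS disk and one data disk:


$image = Get-AzureRmImage -ResourceGroupName $rgName -ImageName $imageName
#  One OS disk & one data disk are expected
$image.StorageProfile.OsDisk
$image.StorageProfile.DataDisks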

Clean up VM

In order to install a Scale Set in the same availability set, we need to remove the VM.

  1. In PowerShell, type
    Remove-AzureRmVM -ResourceGroupName $rgName -Name $vm.Name -Force
    Remove-AzureRmNetworkInterface -ResourceGroupName $rgName -Name frontNic -Force
    Remove-AzureRmAvailabilitySet -ResourceGroupName $rgName -Name frontAvailSet -Force
  2. Optionally, we can remove the disks
    Remove-AzureRmDisk -ResourceGroupName $rgName -DiskName Demo-VM -Force
    Remove-AzureRmDisk -ResourceGroupName $rgName -DiskName Demo-VM-data -Force
  3. We also remove the public load balancer
    Remove-AzureRmLoadBalancer -ResourceGroupName $rgName -Name PublicLB -Force

Deploy Scale Set

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "VM Admin User Name": {
      "defaultValue": "myadmin",
      "type": "string"
    },
    "VM Admin Password": {
      "defaultValue": null,
      "type": "securestring"
    },
    "Instance Count": {
      "defaultValue": 3,
      "type": "int"
    },
    "VM Size": {
      "defaultValue": "Standard_DS4",
      "type": "string",
      "allowedValues": [
        "Standard_DS1",
        "Standard_DS2",
        "Standard_DS3",
        "Standard_DS4",
        "Standard_DS5"
      ],
      "metadata": {
        "description": "SKU of the VM."
      }
    },
    "Public Domain Label": {
      "type": "string"
    }
  },
  "variables": {
    "frontIpRange": "10.0.1.0/24",
    "Public IP Name": "MyPublicIP",
    "Public LB Name": "PublicLB",
    "Front Address Pool Name": "frontPool",
    "Front Nat Pool Name": "frontNatPool",
    "VNET Name": "Demo-VNet",
    "NIC Prefix": "Nic",
    "Scale Set Name": "Demo-ScaleSet",
    "Image Name": "Demo-VM-Image",
    "VM Prefix": "Demo-VM",
    "IP Config Name": "ipConfig"
  },
  "resources": [
    {
      "type": "Microsoft.Network/publicIPAddresses",
      "name": "[variables('Public IP Name')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "tags": {
        "displayName": "Public IP"
      },
      "properties": {
        "publicIPAllocationMethod": "Dynamic",
        "idleTimeoutInMinutes": 4,
        "dnsSettings": {
          "domainNameLabel": "[parameters('Public Domain Label')]"
        }
      }
    },
    {
      "type": "Microsoft.Network/virtualNetworks",
      "name": "[variables('VNet Name')]",
      "apiVersion": "2016-03-30",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": {
          "addressPrefixes": [
            "10.0.0.0/16"
          ]
        },
        "subnets": [
          {
            "name": "front",
            "properties": {
              "addressPrefix": "[variables('frontIpRange')]",
              "networkSecurityGroup": {
                "id": "[resourceId('Microsoft.Network/networkSecurityGroups', 'frontNsg')]"
              }
            }
          }
        ]
      },
      "resources": [],
      "dependsOn": [
        "[resourceId('Microsoft.Network/networkSecurityGroups', 'frontNsg')]"
      ]
    },
    {
      "type": "Microsoft.Network/loadBalancers",
      "name": "[variables('Public LB Name')]",
      "apiVersion": "2015-06-15",
      "location": "[resourceGroup().location]",
      "tags": {
        "displayName": "Public Load Balancer"
      },
      "properties": {
        "frontendIPConfigurations": [
          {
            "name": "LoadBalancerFrontEnd",
            "comments": "Front end of LB:  the IP address",
            "properties": {
              "publicIPAddress": {
                "id": "[resourceId('Microsoft.Network/publicIPAddresses/', variables('Public IP Name'))]"
              }
            }
          }
        ],
        "backendAddressPools": [
          {
            "name": "[variables('Front Address Pool Name')]"
          }
        ],
        "loadBalancingRules": [],
        "probes": [
          {
            "name": "TCP-Probe",
            "properties": {
              "protocol": "Tcp",
              "port": 80,
              "intervalInSeconds": 5,
              "numberOfProbes": 2
            }
          }
        ],
        "inboundNatPools": [
          {
            "name": "[variables('Front Nat Pool Name')]",
            "properties": {
              "frontendIPConfiguration": {
                "id": "[concat(resourceId('Microsoft.Network/loadBalancers', variables('Public LB Name')), '/frontendIPConfigurations/loadBalancerFrontEnd')]"
              },
              "protocol": "tcp",
              "frontendPortRangeStart": 5000,
              "frontendPortRangeEnd": 5200,
              "backendPort": 22
            }
          }
        ]
      },
      "dependsOn": [
        "[resourceId('Microsoft.Network/publicIPAddresses', variables('Public IP Name'))]"
      ]
    },
    {
      "apiVersion": "2015-06-15",
      "name": "frontNsg",
      "type": "Microsoft.Network/networkSecurityGroups",
      "location": "[resourceGroup().location]",
      "tags": {},
      "properties": {
        "securityRules": [
          {
            "name": "Allow-SSH-From-Everywhere",
            "properties": {
              "protocol": "Tcp",
              "sourcePortRange": "*",
              "destinationPortRange": "22",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "*",
              "access": "Allow",
              "priority": 100,
              "direction": "Inbound"
            }
          },
          {
            "name": "Allow-Health-Monitoring",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "*",
              "sourceAddressPrefix": "AzureLoadBalancer",
              "destinationAddressPrefix": "*",
              "access": "Allow",
              "priority": 200,
              "direction": "Inbound"
            }
          },
          {
            "name": "Disallow-everything-else-Inbound",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "*",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "*",
              "access": "Deny",
              "priority": 300,
              "direction": "Inbound"
            }
          },
          {
            "name": "Allow-to-VNet",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "*",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "VirtualNetwork",
              "access": "Allow",
              "priority": 100,
              "direction": "Outbound"
            }
          },
          {
            "name": "Allow-to-8443",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "8443",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "Internet",
              "access": "Allow",
              "priority": 200,
              "direction": "Outbound"
            }
          },
          {
            "name": "Disallow-everything-else-Outbound",
            "properties": {
              "protocol": "*",
              "sourcePortRange": "*",
              "destinationPortRange": "*",
              "sourceAddressPrefix": "*",
              "destinationAddressPrefix": "*",
              "access": "Deny",
              "priority": 300,
              "direction": "Outbound"
            }
          }
        ],
        "subnets": []
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "name": "[variables('Scale Set Name')]",
      "location": "[resourceGroup().location]",
      "apiVersion": "2016-04-30-preview",
      "dependsOn": [
        "[concat('Microsoft.Network/loadBalancers/', variables('Public LB Name'))]",
        "[concat('Microsoft.Network/virtualNetworks/', variables('VNET Name'))]"
      ],
      "sku": {
        "name": "[parameters('VM Size')]",
        "tier": "Standard",
        "capacity": "[parameters('Instance Count')]"
      },
      "properties": {
        "overprovision": "true",
        "upgradePolicy": {
          "mode": "Manual"
        },
        "virtualMachineProfile": {
          "storageProfile": {
            "osDisk": {
              "createOption": "FromImage",
              "managedDisk": {
                "storageAccountType": "Premium_LRS"
              }
            },
            "imageReference": {
              "id": "[resourceId('Microsoft.Compute/images', variables('Image Name'))]"
            },
            "dataDisks": [
              {
                "createOption": "FromImage",
                "lun": "2",
                "managedDisk": {
                  "storageAccountType": "Premium_LRS"
                }
              }
            ]
          },
          "osProfile": {
            "computerNamePrefix": "[variables('VM Prefix')]",
            "adminUsername": "[parameters('VM Admin User Name')]",
            "adminPassword": "[parameters('VM Admin Password')]"
          },
          "networkProfile": {
            "networkInterfaceConfigurations": [
              {
                "name": "[variables('NIC Prefix')]",
                "properties": {
                  "primary": "true",
                  "ipConfigurations": [
                    {
                      "name": "[variables('IP Config Name')]",
                      "properties": {
                        "subnet": {
                          "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/virtualNetworks/', variables('VNET Name'), '/subnets/front')]"
                        },
                        "loadBalancerBackendAddressPools": [
                          {
                            "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('Public LB Name'), '/backendAddressPools/', variables('Front Address Pool Name'))]"
                          }
                        ],
                        "loadBalancerInboundNatPools": [
                          {
                            "id": "[concat('/subscriptions/', subscription().subscriptionId,'/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('Public LB Name'), '/inboundNatPools/', variables('Front Nat Pool Name'))]"
                          }
                        ]
                      }
                    }
                  ]
                }
              }
            ]
          }
        }
      }
    }
  ]
}

You can choose the number of instances; by default there are 3.
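If we later want to change the capacity without redeploying the template, here is a sketch using the scale set cmdlets (the names match the template’s variables):


$vmss = Get-AzureRmVmss -ResourceGroupName md-demo-image -VMScaleSetName Demo-ScaleSet
#  Bump the capacity to 5 instances
$vmss.Sku.Capacity = 5
Update-AzureRmVmss -ResourceGroupName md-demo-image -VMScaleSetName Demo-ScaleSet -VirtualMachineScaleSet $vmss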

Validate Instance

  1. Connect to the first instance available using SSH on port 5000 of the public IP
  2. SSH ports are NATed from port 5000 up to back-end port 22
  3. In the bash shell type
    ls /data
  4. You should see “mydata”, proving the image carried both the OS & data disks

Clean up

We won’t be using the resource groups we have created, so we can delete them.

In ISE, type Remove-AzureRmResourceGroup -Name md-demo-image -Force

Automating Azure AD


In the previous article, we explored how to interact (read / write) to an Azure AD tenant using Microsoft Graph API.

In the article before that, we looked at how to authenticate a user without using Azure AD web flow.

Those were motivated by a specific scenario:  replacing a LDAP server by Azure AD while migrating a SaaS application to Azure.

Now a SaaS application will typically have multiple tenants or instances.  Configuring Azure AD by hand, like any other Azure service, can be tedious and error prone.  Furthermore, once we’ve onboarded, say, 20 tenants, making a configuration change will be even more tedious and error prone.

This is why we’ll look at automation in this article.

We’ll look at how to automate the creation of Azure AD applications we used in the last two articles.  From there it’s pretty easy to generalize (aka exercise to the reader!).

Azure AD Tenant creation

From the get-go, bad news:  we can’t create the tenant through automation.

No API is exposed for that; we need to go through the portal.

Sorry.

Which PowerShell commands to use?

Automating Azure AD is a little confusing.  Having too many options is like not having enough.

The first approach should probably be to use the Azure PowerShell package like the rest of Azure services.  For instance, to create an application, we would use New-AzureRmADApplication.

The problem with that package for our scenario is that the Azure AD tenant isn’t attached to a subscription.  This is typical for a SaaS model:  we have an Azure AD tenant to manage internal users on all subscriptions and then different tenants to manage external users.  Unfortunately, at the time of this writing, the Azure PowerShell package is built around the Add-AzureRmAccount command to authenticate the user; that command binds to a subscription (directly or via Select-AzureRmSubscription).  But in our case we do not have a subscription:  our Azure AD tenant isn’t managing one.

The second approach would then be to use the MSOnline module.  That’s centered around Azure AD, but it is slowly being deprecated in favour of…

The third approach is the Azure Active Directory V2 PowerShell module.  This is what we’re going to use.

I want to give a big shout-out to Chris Dituri for blazing the trail here.  His article Azure Active Directory: Creating Applications and SPNs with Powershell was instrumental in writing this article.  As we’ll see, there are a bunch of intricacies about application permissions that aren’t documented and that Chris unraveled.

The first thing we’ll need to do is to install the PowerShell package.  Easy:

Install-Module AzureADPreview

If you read this from the future, this might have changed, so check out the documentation page for install instructions.

Connect-AzureAD

We need to connect to our tenant:

connect-azuread -TenantId bc7d0032…

You can see the documentation on the Connect-AzureAD command here.

Where do we get our tenant ID?  In the portal, it is the Directory ID shown in the Azure Active Directory blade’s properties.

[Screenshot: the Directory ID in the Azure Active Directory properties blade]

Now we can go and create applications.

Service-Client

Here we’ll replicate the applications we built by hand in the Authenticating to Azure AD non-interactively article.

Remember, those are two applications:  a service one and a client one.  The client one has permission to access the service one & to let users sign in to it.  As we’ll see, granting those permissions is a little tricky.

Let’s start with the final PowerShell code:

#  Grab the Azure AD Service principal
$aad = (Get-AzureADServicePrincipal | `
    where {$_.ServicePrincipalNames.Contains("https://graph.windows.net")})[0]
#  Grab the User.Read permission
$userRead = $aad.Oauth2Permissions | ? {$_.Value -eq "User.Read"}

#  Resource Access User.Read + Sign in
$readUserAccess = [Microsoft.Open.AzureAD.Model.RequiredResourceAccess]@{
  ResourceAppId=$aad.AppId ;
  ResourceAccess=[Microsoft.Open.AzureAD.Model.ResourceAccess]@{
    Id = $userRead.Id ;
    Type = "Scope"}}

#  Create Service App
$svc = New-AzureADApplication -DisplayName "MyLegacyService" `
    -IdentifierUris "uri://mylegacyservice.com"
# Associate a Service Principal to the service Application 
$spSvc = New-AzureADServicePrincipal -AppId $svc.AppId
#  Grab the user-impersonation permission
$svcUserImpersonation = $spSvc.Oauth2Permissions | `
    ?{$_.Value -eq "user_impersonation"}
 
#  Resource Access 'Access' a service
$accessAccess = [Microsoft.Open.AzureAD.Model.RequiredResourceAccess]@{
  ResourceAppId=$svc.AppId ;
  ResourceAccess=[Microsoft.Open.AzureAD.Model.ResourceAccess]@{
    Id = $svcUserImpersonation.Id ;
    Type = "Scope"}}
#  Create Required Access 
$client = New-AzureADApplication -DisplayName "MyLegacyClient" `
  -PublicClient $true `
  -RequiredResourceAccess $readUserAccess, $accessAccess

As promised, there is an ample amount of ceremony.  Let’s go through it.

  • Line 1:  we grab the service principal that has https://graph.windows.net for a name; you can check all the service principals living in your tenant with Get-AzureADServicePrincipal; I have 15 in a clean tenant.  We’ll need the Graph one since we need to give access to it.
  • Line 5:  we grab the specific User.Read permission inside that service principal’s Oauth2Permissions collection.  Basically, service principals expose the permissions that other apps can request.  We’re going to need the ID.  Lots of GUIDs in Azure AD.
  • Line 8:  we then construct a user-read RequiredResourceAccess object with that permission
  • Line 15:  we create our Legacy service app
  • Line 16:  we associate a service principal to that app
  • Line 20:  we grab the user_impersonation permission of that service principal.  Same mechanism we used for the Graph API, just a different permission.
  • Line 24:  we build another RequiredResourceAccess object around that user_impersonation permission.
  • Line 30:  we create our Legacy client app; we attach both the User.Read & user_impersonation permissions to it.

Grant Permissions

If we try to run the authentication piece of code we had in that article, we’ll first need to change the “clientID” value to $client.AppId (and make sure serviceUri has the value “uri://mylegacyservice.com”).

Now if we run that, we’ll get an error along the lines of:

The user or administrator has not consented to use the application with ID ‘…’. Send an interactive authorization request for this user and resource.

What is that?

There is one manual step we have to take:  granting the permissions to the application.  In Azure AD, this must be performed by an admin and there is no API exposed for it.

We could sort of automate it via an interactive authorization workflow (which is what the error message suggests), but I won’t do that here.

Basically, an administrator (of the Azure AD tenant) needs to approve the use of the app.

We can also do it, still manually, via the portal as we did in that article.  But first, let’s run the following command:

Get-AzureADOAuth2PermissionGrant

On an empty tenant, there should be nothing returned.  Unfortunately, there is no Add-/New-AzureADOAuth2PermissionGrant at the time of this writing (this might have changed if you are from the future, so make sure you check out the available commands).

So the manual step is, in the portal, to go into the MyLegacyClient app, select Required Permissions, then click the Grant Permissions button.

[Screenshot: the Grant Permissions button under Required Permissions]

Once we’ve done this we can run the same PowerShell command, i.e.

Get-AzureADOAuth2PermissionGrant

and now get two entries.

[Screenshot: the two OAuth2 permission grants]

We see the two permissions we attached to MyLegacyClient.

We should now be able to run the authentication code.

Graph API App

Here we’ll replicate the application we created by hand in the Using Microsoft Graph API to interact with Azure AD article.

This is going to be quite similar, except we’re going to attach a client secret on the application so that we can authenticate against it.

#  Grab the Azure AD Service principal
$aad = (Get-AzureADServicePrincipal | `
    where {$_.ServicePrincipalNames.Contains("https://graph.windows.net")})[0]
#  Grab the User.Read permission
$userRead = $aad.Oauth2Permissions | ? {$_.Value -eq "User.Read"}
#  Grab the Directory.ReadWrite.All permission
$directoryWrite = $aad.Oauth2Permissions | `
  ? {$_.Value -eq "Directory.ReadWrite.All"}


#  Resource Access User.Read + Sign in & Directory.ReadWrite.All
$readWriteAccess = [Microsoft.Open.AzureAD.Model.RequiredResourceAccess]@{
  ResourceAppId=$aad.AppId ;
  ResourceAccess=[Microsoft.Open.AzureAD.Model.ResourceAccess]@{
    Id = $userRead.Id ;
    Type = "Scope"}, [Microsoft.Open.AzureAD.Model.ResourceAccess]@{
    Id = $directoryWrite.Id ;
    Type = "Role"}}

#  Create querying App
$queryApp = New-AzureADApplication -DisplayName "QueryingApp" `
    -IdentifierUris "uri://myqueryingapp.com" `
    -RequiredResourceAccess $readWriteAccess

#  Associate a Service Principal so it can login
$spQuery = New-AzureADServicePrincipal -AppId $queryApp.AppId

#  Create a key credential for the app valid from now
#  (-1 day, to accommodate client / service time difference)
#  till three months from now
$startDate = (Get-Date).AddDays(-1)
$endDate = $startDate.AddMonths(3)

$pwd = New-AzureADApplicationPasswordCredential -ObjectId $queryApp.ObjectId `
  -StartDate $startDate -EndDate $endDate `
  -CustomKeyIdentifier "MyCredentials"

You need to “grant permissions” for the new application before trying to authenticate against it.

Two big remarks on tiny bugs; they might be fixed by the time you read this, and they aren’t critical as they both have easy workarounds:

  1. The last command in the script, i.e. the password section, will fail with a “stream property was found in a JSON Light request payload.  Stream properties are only supported in responses” error if you execute the entire script in one go.  If you execute it separately, it doesn’t.  Beats me.
  2. This one took me 3 hours to realize, so benefit from my experience:  DO NOT TRY TO AUTHENTICATE THE APP BEFORE GRANTING PERMISSIONS.  There seems to be some caching on the authentication service:  if you try to authenticate while you don’t have the permissions, you’ll keep receiving the same claims afterwards, even if you did grant the permissions.  Annoying, but easy to avoid once you know about it.

An interesting aspect of the automation is that we get much finer-grained control over the duration of the password than in the portal (1 year, 2 years, infinite).  That allows us to implement a more aggressive rotation of secrets.
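For instance, here is a hedged sketch of a rotation:  we add a fresh secret, then retire the old one once clients have flipped over ($oldKeyId is hypothetical; it would come from the listing):


#  List the existing credentials to find the KeyId to retire
Get-AzureADApplicationPasswordCredential -ObjectId $queryApp.ObjectId
#  Add the new secret
$newPwd = New-AzureADApplicationPasswordCredential -ObjectId $queryApp.ObjectId `
  -StartDate (Get-Date).AddDays(-1) -EndDate (Get-Date).AddMonths(3) `
  -CustomKeyIdentifier "MyCredentials2"
#  Remove the old one ($oldKeyId is a placeholder)
Remove-AzureADApplicationPasswordCredential -ObjectId $queryApp.ObjectId -KeyId $oldKeyId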

Summary

Automation with Azure AD, as with other services, helps reduce the provisioning effort and human errors.

There are two big manual steps that can’t be automated in Azure AD:

  • Azure AD tenant creation
  • Granting permissions on an application

That might change in the future but, for now, it limits the amount of automation we can achieve without human interaction.

Azure SQL Elastic Pool – Moving databases across pools using PowerShell


I’ve written a bit about Azure SQL Elastic Pool lately:  an overview, about ARM templates and about database size.

One of the many great features of Azure SQL Elastic Pool is that like Azure SQL Database (standalone), we can change the eDTU capacity of the pool “on the fly”, without downtime.

Unlike its standalone cousin though, we can’t change the edition of the pool.  The edition is either Basic, Standard or Premium.  It is set at creation and is immutable after that.

If we want to change the edition of a pool, the obvious way is to create another pool, move the databases there, delete the original, recreate it with a different edition and move the databases back.

This article shows how to do that using PowerShell.

You might want to move databases around for other reasons, typically to optimize the density and performance of pools.  You would then use a very similar script.

Look at the pool

Let’s start with the pools we established with the sample ARM template of a previous article.

From there we can look at the pool Pool-A using the following PowerShell command:


$old = Get-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName Pool-A -ServerName pooldemoserver

$old

We can see the pool’s current edition is Standard while its Database Transaction Unit (DTU) count is 200.

[Screenshot: Pool-A showing the Standard edition and 200 eDTUs]

Create a temporary pool

We’ll create a temporary pool, aptly named temporary, attached to the same server:


$temp = New-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName Temporary -ServerName pooldemoserver -Edition $old.Edition -Dtu $old.Dtu

$temp

It’s important to create a pool that the databases can be moved into.  The maximum size of a database depends on the edition and the number of DTUs of the elastic pool.  The easiest way is to create a pool with the same edition / DTU, and this is what we do here by referencing the $old variable.

Move databases across

First, let’s grab the databases in the original pool:


$dbs = Get-AzureRmSqlDatabase -ResourceGroupName DBs -ServerName pooldemoserver | where {$_.ElasticPoolName -eq $old.ElasticPoolName}

$dbs | select DatabaseName

ElasticPoolName is a property of a database.  We’ll simply change it by setting each database:


$dbs | foreach {Set-AzureRmSqlDatabase -ResourceGroupName DBs -ServerName pooldemoserver -DatabaseName $_.DatabaseName -ElasticPoolName $temp.ElasticPoolName}

That command takes longer to run as the databases have to move from one compute to another.

Delete / Recreate pool

We can now delete the original pool.  It’s important to note that we wouldn’t have been able to delete a pool with databases in it.


Remove-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName $old.ElasticPoolName -ServerName pooldemoserver

$new = New-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName $old.ElasticPoolName -ServerName pooldemoserver -Edition Premium -Dtu 250

The second line recreates it with Premium edition.  We could keep the original DTU, but it’s not always possible since different editions support different DTU values.  In this case, for instance, it wasn’t possible since 200 DTUs isn’t supported for Premium pools.

If you execute those two commands without pausing in between, you will likely receive an error.  It is one of those cases where the Azure REST API returns, and the resource you asked to be removed appears to be removed, but you can’t recreate it quite yet.  An easy workaround consists of pausing or retrying.
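A sketch of that workaround as a retry loop (the delay is arbitrary):


$new = $null
while ($new -eq $null) {
  try {
    $new = New-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName $old.ElasticPoolName `
      -ServerName pooldemoserver -Edition Premium -Dtu 250 -ErrorAction Stop
  }
  catch {
    #  The server is likely still busy removing the old pool; wait & retry
    Start-Sleep -Seconds 15
  }
}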

Move databases back

We can then move the databases back to the new pool:


$dbs | foreach {Set-AzureRmSqlDatabase -ResourceGroupName DBs -ServerName pooldemoserver -DatabaseName $_.DatabaseName -ElasticPoolName $new.ElasticPoolName}

Remove-AzureRmSqlElasticPool -ResourceGroupName DBs -ElasticPoolName $temp.ElasticPoolName -ServerName pooldemoserver

In the second line we delete the temporary pool.  Again, this takes a little longer to execute since databases must be moved from one compute to another.

Summary

We showed how to move databases from a pool to another.

The pretext was a change in elastic pool edition but we might want to move databases around for other reasons.

In practice you might not want to move your databases twice (to avoid the duration of the operation) and might be happy to end up with a different pool name.  In the demo we did, the move took less than a minute because we had two empty databases.  With many databases totalling a lot of storage, it would take much more time.

Azure SQL Elastic Pool – Database Size

I mentioned in a past article, regarding database sizes within an elastic pool:

“No policies limit an individual database to take more storage although a database maximum size can be set on a per-database basis.”

I’m going to focus on that in this article.

An Azure SQL Database resource has a MaxSizeInBytes property.  We can set it either in an ARM template (see this ARM template and the property maxSizeBytes) or in PowerShell.

An interesting aspect of that property is that:

  • It takes only specific values
  • Not all values are permitted, depending on the elastic pool edition (i.e. Basic, Standard or Premium)

Valid values

One way to find the valid values is to navigate to the ARM schema.  That documented schema is likely slightly out of date since, as of December 2016, the largest value it lists is 500 GB, which isn’t the largest possible database size (1 TB for a P15).

The online documentation of Set-AzureRmSqlDatabase isn’t faring much better, as the documentation for the MaxSizeBytes parameter refers to a MaxSizeGB parameter to learn about the acceptable values.  Problem is, the MaxSizeGB parameter doesn’t exist.

But let’s start with the documented schema as it probably only lacks the most recent DB sizes.

Using that schema’s list of possible values and comparing it with the standalone database sizes for given editions, we can conclude (after testing with ARM templates, of course) that a Basic pool can have databases up to 2 GB, a Standard pool up to 250 GB, and a Premium pool can take all values.

It is important to notice that the pool itself can have more storage than that.  For instance, even the smallest Basic pool, with 50 eDTUs, can have a maximum storage of 5 GB.  But each DB within that pool can only grow up to 2 GB.

That gives us the following landscape:

Maximum Size (in bytes) Maximum Size (in GB) Available for (edition)
104857600 0.1 Premium, Standard, Basic
524288000 0.5 Premium, Standard, Basic
1073741824 1 Premium, Standard, Basic
2147483648 2 Premium, Standard, Basic
5368709120 5 Premium, Standard
10737418240 10 Premium, Standard
21474836480 20 Premium, Standard
32212254720 30 Premium, Standard
42949672960 40 Premium, Standard
53687091200 50 Premium, Standard
107374182400 100 Premium, Standard
161061273600 150 Premium, Standard
214748364800 200 Premium, Standard
268435456000 250 Premium, Standard
322122547200 300 Premium
429496729600 400 Premium
536870912000 500 Premium

Storage Policies

We can now use this maximum database size as a storage policy, i.e. a way to make sure a single database doesn’t take all the storage available in a pool.

Now, this isn’t as trivially useful as the eDTU min / max we’ve seen in a pool.  In the eDTU case, that was controlling how much compute was given to a database at all times.  In the case of a database maximum size, once the database reaches that size, it becomes read only.  That will likely break the applications running on top of it unless we planned for it.

A better approach would be to monitor the different databases and react to size changes, for instance by moving the database to another pool.

The maximum size could be a safeguard though.  For instance, let’s imagine we want each database in a pool to stay below 50 GB; we’ll monitor for that and raise alerts in case that threshold is reached (see Azure Monitor for monitoring and alerts).  Now, we might still put a maximum size of 100 GB on the databases.  This would act as a safeguard:  if we do not do anything about a database outgrowing its target 50 GB, it won’t be able to grow indefinitely, which could top the pool maximum size and make the entire pool read only, affecting ALL the databases in the pool.

In that sense, the maximum size still acts as a resource governor, preventing a noisy neighbour effect.
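As a sketch, here is how we could apply that 100 GB safeguard to every database in a pool in one shot (same resource group / server / pool names as in the example below):


Get-AzureRmSqlDatabase -ResourceGroupName DBs -ServerName pooldemoserver |
  where {$_.ElasticPoolName -eq "Pool-A"} |
  foreach {Set-AzureRmSqlDatabase -ResourceGroupName DBs -ServerName pooldemoserver -DatabaseName $_.DatabaseName -MaxSizeBytes 107374182400}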

PowerShell example

We can’t change a database maximum size in the portal (as of December 2016).

Using an ARM template, it is easy to change the parameter.  Here, let’s simply show how we would change it for an existing database.

Building on the example we gave in a previous article, we can easily grab the Pool-A-Db0 database in resource group DBs and server pooldemoserver:


Get-AzureRmSqlDatabase -ServerName pooldemoserver -ResourceGroupName DBs -DatabaseName Pool-A-Db0

[Screenshot: the Pool-A-Db0 database properties]

We can see the size is the one that was specified in the ARM template (the ARM parameter DB Max Size default value), i.e. 10 GB.  We can bump it to 50 GB, i.e. 53687091200 bytes:


Set-AzureRmSqlDatabase -ServerName pooldemoserver -ResourceGroupName DBs -DatabaseName Pool-A-Db0 -MaxSizeBytes 53687091200

We can confirm the change in the portal by looking at the properties.

[Screenshot: the database properties showing the new maximum size]

Default Behaviour

If the MaxSizeBytes property is omitted, either in an ARM template or in the New-AzureRmSqlDatabase PowerShell cmdlet, the default behaviour is for the database to have the maximum capacity for its edition (e.g. 250 GB for Standard).

After creation, we can’t set the property value to null to obtain the same effect.  Omitting the parameter simply keeps the previously set value.

Summary

We’ve looked at the maximum size property of a database.

It can be used to control the growth of a database inside a pool and to prevent one database’s growth from affecting the others.

Azure SQL Elastic Pool – ARM Templates

In my last article, I covered Azure SQL Elastic Pool.  In this one, I cover how to provision it using ARM templates.

As of today (December 2016), the documentation about Azure SQL Elastic Pool provisioning via ARM templates is…  not existing.

Searching for it, I was able to gather hints from a few colleagues’ GitHub repos, but there are no examples in the ARM quickstart templates, nor is the elastic pool resource schema documented.  Also, the Automation Script feature in the portal doesn’t reverse engineer an ARM template for the elastic pool.

So I hope this article fills that gap and is easy to search for & consume.

ARM Template

Here we’re going to provision a server with two pools, Pool-A & Pool-B (yeah, sounds a bit like Thing 1 & Thing 2), each holding a configurable number of databases.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "Server Name": {
      "defaultValue": "pooldemoserver",
      "type": "string",
      "metadata": {
        "description": "Name of the SQL:  needs to be unique among all servers in Azure"
      }
    },
    "Admin Login": {
      "defaultValue": "myadmin",
      "type": "string",
      "metadata": {
        "description": "SQL Server Admin login name"
      }
    },
    "Admin Password": {
      "type": "securestring",
      "metadata": {
        "description": "SQL Server Admin login password"
      }
    },
    "Pool A Edition": {
      "defaultValue": "Standard",
      "type": "string",
      "allowedValues": [
        "Basic",
        "Standard",
        "Premium"
      ],
      "metadata": {
        "description": "Pool A Edition"
      }
    },
    "Pool B Edition": {
      "defaultValue": "Standard",
      "type": "string",
      "allowedValues": [
        "Basic",
        "Standard",
        "Premium"
      ],
      "metadata": {
        "description": "Pool B Edition"
      }
    },
    "DB Max Size": {
      "defaultValue": "10737418240",
      "type": "string",
      "allowedValues": [
        "104857600",
        "524288000",
        "1073741824",
        "2147483648",
        "5368709120",
        "10737418240",
        "21474836480",
        "32212254720",
        "42949672960",
        "53687091200",
        "107374182400",
        "161061273600",
        "214748364800",
        "268435456000",
        "322122547200",
        "429496729600",
        "536870912000"
      ],
      "metadata": {
        "description": "DB Max Size, in bytes"
      }
    }
  },
  "variables": {
    "Pool A": "Pool-A",
    "Pool B": "Pool-B",
    "DB A Prefix": "Pool-A-Db",
    "DB B Prefix": "Pool-B-Db",
    "Count A": 2,
    "Count B": 4
  },
  "resources": [
    {
      "name": "[parameters('Server Name')]",
      "type": "Microsoft.Sql/servers",
      "apiVersion": "2014-04-01-preview",
      "location": "[resourceGroup().location]",
      "dependsOn": [],
      "properties": {
        "administratorLogin": "[parameters('Admin Login')]",
        "administratorLoginPassword": "[parameters('Admin Password')]",
        "version": "12.0"
      },
      "resources": [
        {
          "type": "firewallRules",
          "kind": "v12.0",
          "name": "AllowAllAzureIps",
          "apiVersion": "2014-04-01-preview",
          "location": "[resourceGroup().location]",
          "dependsOn": [
            "[resourceId('Microsoft.Sql/servers', parameters('Server Name'))]"
          ],
          "properties": {
            "startIpAddress": "0.0.0.0",
            "endIpAddress": "0.0.0.0"
          }
        },
        {
          "type": "elasticpools",
          "name": "[variables('Pool A')]",
          "apiVersion": "2014-04-01-preview",
          "location": "[resourceGroup().location]",
          "dependsOn": [
            "[resourceId('Microsoft.Sql/servers', parameters('Server Name'))]"
          ],
          "properties": {
            "edition": "[parameters('Pool A Edition')]",
            "dtu": "200",
            "databaseDtuMin": "10",
            "databaseDtuMax": "50"
          }
        },
        {
          "type": "elasticpools",
          "name": "[variables('Pool B')]",
          "apiVersion": "2014-04-01-preview",
          "location": "[resourceGroup().location]",
          "dependsOn": [
            "[resourceId('Microsoft.Sql/servers', parameters('Server Name'))]"
          ],
          "properties": {
            "edition": "[parameters('Pool B Edition')]",
            "dtu": "400",
            "databaseDtuMin": "0",
            "databaseDtuMax": null
          }
        }
      ]
    },
    {
      "type": "Microsoft.Sql/servers/databases",
      "copy": {
        "name": "DBs-A",
        "count": "[variables('Count A')]"
      },
      "name": "[concat(parameters('Server Name'), '/', variables('DB A Prefix'), copyIndex())]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Sql/servers', parameters('Server Name'))]",
        "[resourceId('Microsoft.Sql/servers/elasticpools', parameters('Server Name'), variables('Pool A'))]"
      ],
      "tags": {
        "displayName": "Pool-A DBs"
      },
      "apiVersion": "2014-04-01-preview",
      "properties": {
        "collation": "SQL_Latin1_General_CP1_CI_AS",
        "maxSizeBytes": "[parameters('DB Max Size')]",
        "requestedServiceObjectiveName": "ElasticPool",
        "elasticPoolName": "[variables('Pool A')]"
      }
    },
    {
      "type": "Microsoft.Sql/servers/databases",
      "copy": {
        "name": "DBs-B",
        "count": "[variables('Count B')]"
      },
      "name": "[concat(parameters('Server Name'), '/', variables('DB B Prefix'), copyIndex())]",
      "location": "[resourceGroup().location]",
      "dependsOn": [
        "[resourceId('Microsoft.Sql/servers', parameters('Server Name'))]",
        "[resourceId('Microsoft.Sql/servers/elasticpools', parameters('Server Name'), variables('Pool B'))]"
      ],
      "tags": {
        "displayName": "Pool-B DBs"
      },
      "apiVersion": "2014-04-01-preview",
      "properties": {
        "edition": "[parameters('Pool B Edition')]",
        "collation": "SQL_Latin1_General_CP1_CI_AS",
        "maxSizeBytes": "[parameters('DB Max Size')]",
        "requestedServiceObjectiveName": "ElasticPool",
        "elasticPoolName": "[variables('Pool B')]"
      }
    }
  ]
}

We can deploy the template as is.  We’ll need to enter at least an Admin password (for the Azure SQL server).

The “Server Name” parameter must be unique throughout Azure (not just your subscription).  So if it happens to be taken when you try to deploy the template (in which case you would receive an error message along the lines of Server ‘pooldemoserver’ is busy with another operation), try a new, more original name.

Each parameter is documented in the metadata description.
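
For instance, here is how we could deploy the template with PowerShell.  This is a minimal sketch:  the resource group, the template file name and the admin password parameter name are assumptions to adapt to your copy of the template.

# All names below are hypothetical; "Server Name" must be globally unique.
$adminPassword = ConvertTo-SecureString "<your strong password>" -AsPlainText -Force

New-AzureRmResourceGroupDeployment `
  -ResourceGroupName "pool-demo-rg" `
  -TemplateFile ".\pool-demo.json" `
  -TemplateParameterObject @{
    "Server Name" = "myuniquepoolserver";
    "Admin Password" = $adminPassword
  }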

Results

Let’s look at the result.  Let’s first go to the resource group where we deployed the template.

In the resource list we should see the following:

[Screenshot:  resource list showing the server, the two elastic pools and the databases]

We first have our server, with the default name pooldemoserver, our two pools, Pool-A & Pool-B, and six databases.

Let’s select Pool-A.

[Screenshot:  Pool-A overview blade]

We can see the pool is of Standard edition and has 200 eDTUs, with a minimum of 10 and a maximum of 50 eDTUs per database, which is faithful to its ARM definition (quoted below):

        {
          "type": "elasticpools",
          "name": "[variables('Pool A')]",
          "apiVersion": "2014-04-01-preview",
          "location": "[resourceGroup().location]",
          "dependsOn": [
            "[resourceId('Microsoft.Sql/servers', parameters('Server Name'))]"
          ],
          "properties": {
            "edition": "[parameters('Pool A Edition')]",
            "dtu": "200",
            "databaseDtuMin": "10",
            "databaseDtuMax": "50"
          }
        }

Similarly, Pool-B has a minimum of 0 and a maximum of 100 eDTUs per database.  The per-database maximum was set to null in the template and hence defaults to the highest value allowed for a Standard pool of 400 eDTUs, i.e. 100.

Let’s select the databases in Pool-B.  Alternatively, we can select the Configure pool toolbar option.

[Screenshot:  Configure pool pane]

The following pane shows us the eDTUs consumed in the last 14 days.  It also allows us to change the eDTUs assigned to the pool.

It is in this pane that we can add / remove databases from the pool.

[Screenshot:  adding & removing databases from the pool]

In order to remove databases from the pool, they must first be selected in the lower-right corner of the pane.  We then have to choose a standalone pricing tier for each database and hit save.  As of today (December 2016), there is no way in the portal to move databases directly from one pool to another:  they must first be converted to standalone databases.  It is possible to move a database from one pool to another in a single step using PowerShell though, as I’ll demonstrate in a future article.
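
In the meantime, here is a minimal sketch of such a move, assuming the hypothetical resource group & database names below; Set-AzureRmSqlDatabase re-assigns the target elastic pool directly:

# Move the database Pool-A-Db0 (hypothetical name) into Pool-B:
Set-AzureRmSqlDatabase `
  -ResourceGroupName "pool-demo-rg" `
  -ServerName "pooldemoserver" `
  -DatabaseName "Pool-A-Db0" `
  -ElasticPoolName "Pool-B"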

If we go back to the resource group and select any of the databases, we have a link to its parent pool.

[Screenshot:  database blade with a link to its parent pool]

Summary

Despite the current lack of documentation around it (as of December 2016), it is quite possible to create databases within an elastic pool using ARM templates, as we’ve demonstrated here.

Primer on Azure Monitor


Azure Monitor is the latest evolution of a set of technologies allowing Azure resources monitoring.

I’ve written about going the extra mile to be able to analyze logs in the past.

The thing is that once our stuff is in production with tons of users hitting it, it might very well start behaving in unpredictable ways.  If we do not have a monitoring strategy, we’re going to be blind to problems and only see unrelated symptoms.

Azure Monitor is a great set of tools.  It doesn’t try to be the be-all and end-all solution.  On the contrary, although it offers analytics out of the box, it lets us export the logs wherever we want to go further.

I found the documentation of Azure Monitor (as of November 2016) a tad confusing, so I thought I would give a summary overview here.  Hopefully it will get you started.

Three types of sources

First thing we come across in Azure Monitor’s literature is the three types of sources:  Activity Logs, Diagnostic Logs & Metrics.

There is a bit of confusion between Diagnostic Logs & Metrics, some references hinting towards the fact that metrics are generated by Azure while diagnostics are generated by the resource itself.  That is confusing & beside the point.  Let’s review those sources here.

Activity logs capture all operations performed on Azure resources.  They used to be called Audit Logs & Operational Logs.  Those come directly from the Azure APIs:  any operation done through an Azure API (except HTTP GET operations) traces an activity log.  Activity logs are in JSON and contain the following information:  action, caller, status & time stamp.  We’ll want to keep track of those to understand changes done in our Azure environments.

Metrics are emitted by most Azure resources.  They are akin to performance counters, something that has a value (e.g. % CPU, IOPS, # messages in a queue, etc.) over time; hence Azure Monitor, in the portal, allows us to plot those against time.  Metrics typically come in JSON and tend to be emitted at a regular interval (e.g. every minute); see this article for available metrics.  We’ll want to check those to make sure our resources operate within expected bounds.

Diagnostic logs are logs emitted by a resource that provide detailed data about the operation of that particular resource.  Their content is specific to the resource:  each resource type has different logs.  Their format also varies (e.g. JSON, CSV, etc.); see this article for the different schemas.  They also tend to be much more voluminous for an active resource.

That’s it.  That’s all there is to it.  Avoid the confusion and re-read the last three paragraphs.  It’s a time saver.  Promised.

We’ll discuss the export mechanisms & alerts below, but for now, here’s a summary of the capacity (as of November 2016) of each source:

Source | Export to | Supports Alerts
Activity Logs | Storage Account & Event Hub | Yes
Metrics | Storage Account, Event Hub & Log Analytics | Yes
Diagnostic Logs | Storage Account, Event Hub & Log Analytics | No

Activity Log example

We can see the activity log of our favorite subscription by opening the Monitor blade, which should be on the left-hand side of https://portal.azure.com.

[Screenshot:  Monitor blade in the portal’s left-hand menu]

If you do not find it there, hit More Services and search for Monitor.

Selecting Activity Logs, we should see a search form and some results.

[Screenshot:  Activity Logs search form & results]

ListKeys is a popular one.  Despite being conceptually a read operation, the ListKeys action on a storage account is done through a POST in the Azure REST API, specifically so that it leaves an audit trail.

We can select one of those ListKeys events and, in the tray below, select the JSON format:


{
  "relatedEvents": [],
  "authorization": {
    "action": "Microsoft.Storage/storageAccounts/listKeys/action",
    "condition": null,
    "role": null,
    "scope": "/subscriptions/<MY SUB GUID>/resourceGroups/securitydata/providers/Microsoft.Storage/storageAccounts/a92430canadaeast"
  },
  "caller": null,
  "category": {
    "localizedValue": "Administrative",
    "value": "Administrative"
  },
  "claims": {},
  "correlationId": "6c619af4-453e-4b24-8a4c-508af47f2b26",
  "description": "",
  "eventChannels": 2,
  "eventDataId": "09d35196-1cae-4eca-903d-6e9b1fc71a78",
  "eventName": {
    "localizedValue": "End request",
    "value": "EndRequest"
  },
  "eventTimestamp": "2016-11-26T21:07:41.5355248Z",
  "httpRequest": {
    "clientIpAddress": "104.208.33.166",
    "clientRequestId": "ba51469e-9339-4329-b957-de5d3071d719",
    "method": "POST",
    "uri": null
  },

I truncated the JSON here.  Basically, it is an activity event with all the details.
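
The same events can be queried from PowerShell.  Here is a minimal sketch using the AzureRM module (the time window & filter are arbitrary):

# Retrieve the last day of activity logs & keep only ListKeys operations:
Get-AzureRmLog -StartTime (Get-Date).AddDays(-1) |
  Where-Object { $_.Authorization.Action -like "*listKeys*" } |
  Select-Object EventTimestamp, Caller, ResourceId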

Metrics example

Metrics can be accessed from the “global” Monitor blade or from any Azure resource’s monitor blade.

Here I look at the CPU usage of an Azure Data Warehouse resource (which hasn’t run for months, hence the flat line).

[Screenshot:  CPU percentage metric plotted over time]
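
Metrics can also be queried from PowerShell.  Here is a hedged sketch (the resource ID is hypothetical and parameter names may vary slightly between module versions):  Get-AzureRmMetricDefinition lists the metrics a resource exposes, while Get-AzureRmMetric fetches the values.

# The resource ID below is hypothetical; grab yours with Get-AzureRmResource.
$resourceId = "/subscriptions/<MY SUB GUID>/resourceGroups/demo/providers/Microsoft.Sql/servers/pooldemoserver/databases/mydb"

# Which metrics are available on that resource?
Get-AzureRmMetricDefinition -ResourceId $resourceId

# Fetch the metric values for the last hour:
Get-AzureRmMetric -ResourceId $resourceId -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date)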

Diagnostic Logs example

For diagnostics, let’s create a storage account and activate diagnostics on it.  For this, under the Monitoring section, let’s select Diagnostics, make sure the status is On and then select Blob logs.

[Screenshot:  storage account Diagnostics pane]

We’ll notice that all metrics were already selected.  We also notice that the retention is controlled there, in this case 7 days.

Let’s create a blob container, copy a file into it and try to access it via its URL.  Then let’s wait a few minutes for the diagnostics to be published.

We should see a special $logs container in the storage account.  This container contains the log files, stored by date & time.  For instance, here are the first couple of lines of the first file:


1.0;2016-11-26T20:48:00.5433672Z;GetContainerACL;Success;200;3;3;authenticated;monitorvpl;monitorvpl;blob;"https://monitorvpl.blob.core.windows.net:443/$logs?restype=container&comp=acl";"/monitorvpl/$logs";295a75a6-0001-0021-7b26-48c117000000;0;184.161.153.48:51484;2015-12-11;537;0;217;62;0;;;""0x8D4163D73154695"";Saturday, 26-Nov-16 20:47:34 GMT;;"Microsoft Azure Storage Explorer, 0.8.5, win32, Azure-Storage/1.2.0 (NODE-VERSION v4.1.1; Windows_NT 10.0.14393)";;"9e78fc90-b419-11e6-a392-8b41713d952c"
1.0;2016-11-26T20:48:01.0383516Z;GetContainerACL;Success;200;3;3;authenticated;monitorvpl;monitorvpl;blob;"https://monitorvpl.blob.core.windows.net:443/$logs?restype=container&comp=acl";"/monitorvpl/$logs";06be52d9-0001-0093-7426-483a6d000000;0;184.161.153.48:51488;2015-12-11;537;0;217;62;0;;;""0x8D4163D73154695"";Saturday, 26-Nov-16 20:47:34 GMT;;"Microsoft Azure Storage Explorer, 0.8.5, win32, Azure-Storage/1.2.0 (NODE-VERSION v4.1.1; Windows_NT 10.0.14393)";;"9e9c6311-b419-11e6-a392-8b41713d952c"
1.0;2016-11-26T20:48:33.4973667Z;PutBlob;Success;201;6;6;authenticated;monitorvpl;monitorvpl;blob;"https://monitorvpl.blob.core.windows.net:443/sample/A.txt";"/monitorvpl/sample/A.txt";965cb819-0001-0000-2a26-48ac26000000;0;184.161.153.48:51622;2015-12-11;655;7;258;0;7;"Tj4nPz2/Vt7I1KEM2G8o4A==";"Tj4nPz2/Vt7I1KEM2G8o4A==";""0x8D4163D961A76BE"";Saturday, 26-Nov-16 20:48:33 GMT;;"Microsoft Azure Storage Explorer, 0.8.5, win32, Azure-Storage/1.2.0 (NODE-VERSION v4.1.1; Windows_NT 10.0.14393)";;"b2006050-b419-11e6-a392-8b41713d952c"

Storage Account diagnostics obviously log in semicolon-delimited values (a variant of CSV), which isn’t trivial to read the way I pasted it here.  But basically we can see that each operation done around the blobs is logged with lots of detail.
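
We can grab those files programmatically by listing the $logs container.  A minimal sketch, assuming we have the storage account key at hand (the key below is a placeholder):

# Build a storage context & list the log blobs; note the single quotes
# around '$logs' so PowerShell doesn't expand it as a variable.
$ctx = New-AzureStorageContext -StorageAccountName "monitorvpl" -StorageAccountKey "<account key>"
Get-AzureStorageBlob -Container '$logs' -Context $ctx | Select-Object Name, LastModified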

Querying

As seen in the examples, Azure Monitor allows us to query the logs.  This can be done in the portal but also using the Azure Monitor REST API, cross-platform Command-Line Interface (CLI) commands, PowerShell cmdlets or the .NET SDK.

Export

We can export the sources to a Storage Account and specify a retention period in days.  We can also export them to Azure Event Hubs & Azure Log Analytics.  As specified in the table above, Activity logs can’t be sent to Log Analytics.  Also, Activity logs can be analyzed using Power BI.  A PowerShell sketch of configuring an export follows the list below.

There are a few reasons why we would export the logs:

  • Archiving scenario:  Azure Monitor keeps content for 30 days.  If we need longer retention, we need to archive it ourselves.  We can do that by exporting the content to a storage account; this also enables big data scenarios where we keep the logs for future data mining.
  • Analytics:  Log Analytics offers more capacity for analyzing content.  It also offers 30 days of retention by default but that can be extended to one year.  Basically, this would upgrade us to Log Analytics.
  • Alternatively, we could export the logs to a storage account where they could be ingested by another SIEM (e.g. HP Arcsight).  See this article for details about SIEM integration.
  • Near-real-time analysis:  Azure Event Hubs allow us to send the content to many different places, but we could also analyze it on the fly using Azure Stream Analytics.
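
Here is a minimal sketch of configuring such an export with PowerShell (both resource IDs are hypothetical); Set-AzureRmDiagnosticSetting wires a resource’s metrics & diagnostic logs to a storage account:

# Send a resource's metrics & diagnostic logs to a storage account,
# keeping 30 days of retention:
Set-AzureRmDiagnosticSetting `
  -ResourceId "/subscriptions/<MY SUB GUID>/resourceGroups/demo/providers/Microsoft.Sql/servers/pooldemoserver/databases/mydb" `
  -StorageAccountId "/subscriptions/<MY SUB GUID>/resourceGroups/monitoring/providers/Microsoft.Storage/storageAccounts/monitorvpl" `
  -Enabled $true `
  -RetentionEnabled $true `
  -RetentionInDays 30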

Alerts (Notifications)

Both Activity Logs & Metrics can trigger alerts.  Currently (as of November 2016), only Metric alerts can be set in the portal; Activity Log alerts must be set via PowerShell, CLI or REST API.

Alerts are a powerful way to automatically react to the behavior of our Azure resources; when certain conditions are met (e.g. for a metric, when a value exceeds a threshold for a given period of time), the alert can send an email to a specified list of email addresses, but it can also invoke a Web Hook.

Again, the ability to invoke a web hook opens up the platform.  We could, for instance, expose an Azure Automation runbook as a Web Hook; an alert could therefore trigger whatever a runbook is able to do.
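
As a hedged sketch with the classic AzureRM cmdlets (the resource names are hypothetical and parameter names may vary slightly between module versions), here is a metric alert that emails us & calls a web hook when CPU stays above 80% for five minutes:

# Alert actions:  an email & a web hook (both values are placeholders):
$email = New-AzureRmAlertRuleEmail -CustomEmail "ops@example.com"
$webhook = New-AzureRmAlertRuleWebhook -ServiceUri "https://example.com/alert-hook"

Add-AzureRmMetricAlertRule `
  -Name "high-cpu" `
  -Location "East US" `
  -ResourceGroup "demo" `
  -TargetResourceId "/subscriptions/<MY SUB GUID>/resourceGroups/demo/providers/Microsoft.Compute/virtualMachines/myvm" `
  -MetricName "Percentage CPU" `
  -Operator GreaterThan `
  -Threshold 80 `
  -WindowSize 00:05:00 `
  -TimeAggregationOperator Average `
  -Actions $email, $webhook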

Security

There are two RBAC roles around monitoring:  Monitoring Reader & Monitoring Contributor.

There are also some security considerations around monitoring:

  • Use a dedicated storage account (or multiple dedicated storage accounts) for monitoring data.  Basically, avoid mixing monitoring and “other” data, so that people do not gain access to monitoring data inadvertently and, vice versa, that people needing access to monitoring data do not gain access to “other” data (e.g. sensitive business data).
  • For the same reasons, use a dedicated namespace with Event Hubs
  • Limit access to monitoring data by using RBAC, e.g. by putting them in a separate resource group
  • Never grant ListKeys permission across a subscription, as users could then gain access to reading monitoring data
  • If you need to give access to monitoring data, consider using a SAS token (for either Storage Account or Event Hubs), as sketched below
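
For instance, here is a minimal sketch of generating a read-only, time-limited SAS token on the $logs container (re-using the storage context from the earlier snippet; the expiry is arbitrary):

# Read-only access to the logs container, expiring in 8 hours:
New-AzureStorageContainerSASToken `
  -Name '$logs' `
  -Permission r `
  -ExpiryTime (Get-Date).AddHours(8) `
  -Context $ctx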

Summary

Azure Monitor brings together a suite of tools to monitor our Azure resources.  It is an open platform in the sense that it integrates easily with solutions that can complement it.