Deploying AKS with ARM Template – Network integration

In a past article, we looked at how Azure Kubernetes Services (AKS) integrated with Azure Networking.

AKS is a managed Kubernetes service in Azure.

In this article, we are going to do two things:

  1. Deploy an AKS cluster with Advanced Networking using an Azure ARM Template.
  2. Deploy a service on the cluster and validate the networking view we formed in the last article.

As usual, the code is in GitHub.

In AKS, Advanced Networking means the cluster gets deployed into an existing Azure Virtual Network.

In the end, we should be able to experience the following:


We will only omit the port mapping and run everything on port 80.

We’ll see all of that in action below.

Deploying AKS

Let’s deploy AKS:

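The deployment command itself isn’t reproduced here. As a sketch, assuming the template and parameter files are saved locally as template.json and parameters.json (placeholder names, not from the article), the deployment can be launched with the Azure CLI:

```shell
# Deploy the ARM template into an existing resource group
# (template.json / parameters.json are placeholder file names)
az group deployment create \
  --resource-group <Resource Group> \
  --template-file template.json \
  --parameters @parameters.json
```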

The first parameter is the DNS Prefix, which is used for the domain name of the cluster (i.e. the name used to reach the API server).

AKS needs a service principal as we’ve explored in a past article. The ARM template hence requires three parameters related to the service principal:

  1. Service Principal App ID
  2. Service Principal Object ID
  3. Service Principal Secret

We recently wrote an article showing how to create a Service Principal. We recommend creating a dedicated principal to use with the template.

The template doesn’t expose many options as parameters. We “hard coded” a lot of things in ARM variables that would typically be parameters.

We did that for simplicity. It is quite easy to modify the template to expose options.

In order to create an SSH public key, we ran the following command on Linux: ssh-keygen -o -f key -P "". This creates a key pair locally in the files key (private) and key.pub (public). We can then cat key.pub to get the content.
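As a copy-paste convenience, the full key-generation sequence looks like this (the temporary directory is only there to avoid clobbering any existing keys):

```shell
# Work in a throw-away directory so we don't overwrite existing keys
cd "$(mktemp -d)"
# Generate a key pair with an empty passphrase in files "key" and "key.pub"
ssh-keygen -o -f key -P ""
# Print the public key, to paste into the template's SSH key parameter
cat key.pub
```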

Let’s now look at the deployment.


By looking at the cluster overview, we should see it runs Kubernetes version 1.11.1. This was the latest version at the time of this writing (mid August 2018).

The total cores match the B2 VM size (times 3 nodes).

Virtual Network

The Virtual Network is segregated in two subnets as planned:


Both subnets are /20, hence accommodating 4096 IPs each (see CIDR notation). Of those, 5 are used by Azure, as stated in the FAQ:

Azure reserves some IP addresses within each subnet. The first and last IP addresses of each subnet are reserved for protocol conformance, along with the x.x.x.1-x.x.x.3 addresses of each subnet, which are used for Azure services.

This explains why the services subnet has only 4091 IPs left (4096 - 5). The aks subnet has an extra 93 IPs taken away. There are 3 nodes in the cluster, which means each node takes 31 IPs: one IP for the VM and 30 IPs for the pods on the VM.
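The arithmetic can be checked with nothing more than shell expansion:

```shell
# A /20 leaves 32 - 20 = 12 host bits, i.e. 4096 addresses
echo $((2 ** (32 - 20)))    # 4096
# Azure reserves 5 addresses per subnet
echo $((4096 - 5))          # 4091
# Each node holds 1 VM IP + 30 pod IPs; 3 nodes in the cluster
echo $((3 * (1 + 30)))      # 93
# IPs remaining in the aks subnet
echo $((4091 - 93))         # 3998
```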

The maximum number of pods defaults to 30 but can be modified in the ARM template with the maxPods property.
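In the managed cluster resource, that property sits on each agent pool profile. A sketch of the relevant fragment follows; the variable name and pool name here are illustrative, not taken from the actual template:

```json
"agentPoolProfiles": [
    {
        "name": "agentpool",
        "count": 3,
        "vmSize": "Standard_B2s",
        "maxPods": 30,
        "vnetSubnetID": "[variables('AKS Subnet ID')]"
    }
]
```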

The VNET IP addresses need to be compatible with RFC 1918. This means the IP ranges must not clash with public IPs, i.e. they should come from the private ranges (e.g. 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16).

Virtual Network's access

Let’s look at the access control on the virtual network.

Access Control (IAM)

We notice the Service Principal we provided as input has the Network Contributor role. This is a requirement for AKS in Advanced Networking mode.

This was accomplished using a role assignment resource. We covered that in a previous article:

    "type": "Microsoft.Network/virtualNetworks/providers/roleAssignments",
    "apiVersion": "2017-05-01",
    "name": "[variables('Role Assignment Name')]",
    "dependsOn": [
        "[resourceId('Microsoft.Network/virtualNetworks', variables('VNET Name'))]"
    ],
    "properties": {
        "roleDefinitionId": "[variables('Network Contributor Role')]",
        "principalId": "[parameters('Service Principal Object ID')]"
    }
This is convenient as it makes the ARM template self-sufficient.

Buddy Resource Group

The AKS cluster appears as one resource in our resource group. The underlying resources, the VMs constituting the cluster, are actually accessible to us.

They are in a resource group named MC_&lt;resource group&gt;_&lt;cluster name&gt;_&lt;region&gt;.

Buddy group

We can see all resources we would expect from a cluster.

If we look at one of the NICs and its IP configuration, we can see the following:

The NIC has multiple IPs. The primary one is the node’s IP, while the secondary ones are a pool of IPs allocated to pods.

Deploy a service

Let’s deploy a service to Kubernetes.

We use a trivial image exposing a one-page web site that reports the name of the pod serving the request.

Login to Kubernetes

First, let’s login to Kubernetes with the following commands:

az aks install-cli
az aks get-credentials --resource-group <Resource Group> --name cluster

The first command only needs to be done once in an environment.

The second actually logs us in:

Deploying the service

Let’s deploy our service on the cluster.

The yaml file is on GitHub and has the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-get-started
  template:
    metadata:
      labels:
        app: web-get-started
    spec:
      containers:
      - name: myapp
        image: vplauzon/get-started:part2-no-redis
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations: "true" "services"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: web-get-started

The first part is a deployment of a replica-set of pods. There are 3 replicas and the container image is vplauzon/get-started:part2-no-redis. The code for that container is on GitHub.

The second part is a service. It does a selection on the labels we defined for the replica set, hence it will load-balance across the pods we deployed. We also pass some annotations. The first one states that we want to use an Azure internal load balancer, i.e. a private IP for the load balancer. The second one states in which subnet we want the load balancer to be.

We create this deployment with the command:

kubectl create -f service.yaml

Let’s look at how that impacts our Azure resources.

Buddy Resource Group

Let’s go back to our buddy resource group:

Buddy group containing a load balancer

We see a load balancer that wasn’t there before.

Virtual Network

If we go back to our Virtual Network and scroll down we should find the load balancer:

Load balancer IP

We can see it belongs to the services subnet and has an IP in that subnet range.

In Kubernetes

Let’s look at how Kubernetes interprets those resources.

Let’s type:

kubectl get pods -o wide

This gives us a result similar to:

NAME                   READY     STATUS    RESTARTS   AGE       IP            NODE
web-54b885b89b-9q9cr   1/1       Running   0          15m   aks-agentpool-15447536-0
web-54b885b89b-b25rz   1/1       Running   0          15m   aks-agentpool-15447536-2
web-54b885b89b-khklp   1/1       Running   0          15m   aks-agentpool-15447536-1

We see 3 pods, corresponding to the replica set. They are each deployed in a different node. They each have an IP address belonging to the aks subnet.

Now if we type:

kubectl get services -o wide

We should see:

NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE       SELECTOR
kubernetes    ClusterIP       <none>        443/TCP        1d        <none>
web-service   LoadBalancer   80:30981/TCP   17m       app=web-get-started

Let’s ignore the kubernetes service, as this is an internal service. Let’s concentrate on the web-service service.

We see it is of type LoadBalancer as we requested. It has a cluster-ip and an external-ip. The external-ip is the Azure Load Balancer’s private IP. The cluster-ip is an IP accessible only from within the cluster. The service is also exposed on a port (30981 here) on each node.


In order to test, we need to have connectivity with the subnets.

The easiest way to do that is to create a VM and peer its network to the AKS network. We can then ssh (or RDP) to that VM and be “local” to the AKS cluster. This simulates having an Express Route (or VPN) connection to the AKS virtual network.

We recommend using our Docker VM for that.

From that VM we can curl the first pod, using the pod IP we found above:

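The exact pod IP isn’t reproduced here; substituting the address reported by kubectl get pods -o wide, the call is simply:

```shell
# <pod-ip> is the address from "kubectl get pods -o wide"
curl http://<pod-ip>
```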

and receive the following:

<h3>Hello World!</h3><b>Hostname:</b> web-54b885b89b-9q9cr<br/><b>Visits:</b> undefined

We can note that web-54b885b89b-9q9cr is the name of the pod.

We can do that for each pod.

We can also curl the service:

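Substituting the load balancer’s private IP (the EXTERNAL-IP column of kubectl get services):

```shell
# <external-ip> is the Azure internal load balancer's private IP
curl http://<external-ip>
```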

We should receive a similar response. If we keep running this command, we should see it round-robin through the pods.

We could also try the node port (in our case 30981). For this we need to learn the nodes’ IPs:

kubectl get nodes -o wide

The returned internal-ip is the node’s IP:

aks-agentpool-15447536-0   Ready     agent     1d        v1.11.1   <none>        Ubuntu 16.04.5 LTS   4.15.0-1018-azure   docker://1.13.1
aks-agentpool-15447536-1   Ready     agent     1d        v1.11.1   <none>        Ubuntu 16.04.5 LTS   4.15.0-1018-azure   docker://1.13.1
aks-agentpool-15447536-2   Ready     agent     1d        v1.11.1    <none>        Ubuntu 16.04.5 LTS   4.15.0-1018-azure   docker://1.13.1

So now if we do:

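Using any node’s IP with the node port we saw earlier:

```shell
# 30981 is the node port assigned to web-service; any node IP works
curl http://<node-ip>:30981
```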

We will again round robin through the pods. The same result would occur with the IP of any node.

In order to test the internal IPs, we need to be on the cluster itself. This can easily be done by running a container in interactive mode:

kubectl run -i --tty console --image=appropriate/curl -- sh

Here we apply the classic Kubernetes trick of running an interactive pod, but using the appropriate/curl image. That image has curl installed on it, which we can then use:

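From inside that container, substituting the service’s cluster-ip:

```shell
# <cluster-ip> is the CLUSTER-IP of web-service
curl http://<cluster-ip>
```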

or whatever the cluster-ip of the service is. This will again round robin through the pods.

Since Kubernetes resolves service names through its internal DNS, we could actually use the service name:

curl web-service

which would work the same way.

Network Profile

Let’s finally look at the Network profile in the ARM template:

"networkProfile": {
    "networkPlugin": "azure",
    "serviceCidr": "[variables('Service Cidr')]",
    "dnsServiceIP": "[variables('Dns Service IP')]",
    "dockerBridgeCidr": "[variables('Docker Bridge Cidr')]"
}

With the variables:

"Service Cidr": "",
"Dns Service IP": "",
"Docker Bridge Cidr": ""

Those are explained in the AKS documentation. They are also explained in the ARM Template documentation.

The Service Cidr is where the cluster-ips for services are taken from. Those IPs are accessible only from within the cluster. The important thing to remember is to choose a range that won’t be used by our applications. For instance, if one of our apps uses an internal SAP server, we do not want the address of that server to be in the service cidr range: Kubernetes would intercept traffic to those IPs and we’d never reach our SAP server.


We’ve basically dived into the view we formed in our past article:


The important things to remember are that the cluster’s nodes, pods and internal load balancers all acquire private IPs from the virtual network’s subnets. We can also control their private IPs (which we didn’t do here).

We wanted to demonstrate that, as with all things, there is no magic. AKS simply integrates with Azure Networking and creates resources on the fly.
