AKS with Kubenet vs Azure Networking plug-in


I’ve been diving into Kubernetes / AKS networking lately. I thought I would share some of the insights I stumbled upon.

We know AKS has two types of networking, basic & advanced, right?

  • Basic provisions its own VNET and exposes only public IPs
  • Advanced uses an existing VNET and exposes private IPs

For the latter, check our last article where we deploy advanced networking using an ARM template.

It turns out it’s more subtle than that.

AKS’ Basic & Advanced networking are aggregations of Kubernetes concepts.

We can even have an “in between” of the two configurations: the AKS cluster is deployed in an existing VNET but uses cluster-IPs instead of VNET IPs for pods.

In this article we explore the two network plugins:

  • Kubenet network plugin (basic)
  • Azure network plugin (advanced)

As usual, the code used here is available on GitHub.

To simplify the discussion, we assume we deploy services using an internal load balancer.

Azure plugin

We’ll start with the Azure plugin, as it is the one behind the Advanced Networking setup.

It is commonly known as the Azure VNET CNI Plugins and is an implementation of the Container Network Interface (CNI) specification.

The plugin assigns IPs to Kubernetes’ components.

As we’ve seen in a past article, pods are assigned private IPs from an Azure Virtual Network. Those IPs belong to the NICs of the VMs where those pods run. They are secondary IPs on those NICs.

We end up with a network picture as follows:

Advanced Networking

Basically, both pods and services get a private IP. Services also get a cluster-IP (accessible only from within the cluster).
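We can see those secondary IPs directly on a node’s NIC with the Azure CLI. This is only a sketch: the buddy resource group and NIC names below are examples and will differ in your deployment.

az network nic ip-config list \
  --resource-group MC_aks-demo_aks-cluster_eastus \
  --nic-name aks-agentpool-40932894-nic-2 \
  --output table

Besides the node’s primary IP configuration, each secondary IP configuration listed corresponds to an IP reserved for pods on that node.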

In order to test this configuration, we can deploy the following ARM Template:

Deploy button

The first three parameters are related to the service principal we need to create.

The fourth allows us to choose between Kubenet and Azure plugin. Let’s select Azure to test this section.

Once deployed we can connect to the cluster and deploy our service.yaml file:

kubectl apply -f service.yaml
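For reference, here is a minimal sketch of what such a service.yaml could look like. The actual file is in the GitHub repository; the container image below is a placeholder and the subnet name comes from this article’s template.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx   # placeholder image; the repository's file may use a different one
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
  annotations:
    # Ask AKS for an internal (private) load balancer instead of a public one
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    # Pin the load balancer's private IP to a given subnet; "services" matches this article's template
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: services
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80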

This file deploys 3 pods in a deployment and a service to load balance them. Let’s look at the service:

kubectl get services

We should see something like this:

NAME          TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP      10.0.0.1      <none>        443/TCP        6d
web-service   LoadBalancer   10.0.218.59   172.16.16.4   80:31917/TCP   1m

Our service is the one named web-service. Its external IP belongs to the virtual network. The cluster-IP is something that can only be resolved within the cluster.

Service IPs aren’t public because of the service.beta.kubernetes.io/azure-load-balancer-internal annotation in service.yaml.

If we look at pods:

kubectl get pods -o wide

We can see that the pods’ IPs are also in the virtual network.

NAME                   READY     STATUS    RESTARTS   AGE       IP            NODE
web-54b885b89b-cwd7d   1/1       Running   0          3m        172.16.0.10   aks-agentpool-40932894-2
web-54b885b89b-j2xcc   1/1       Running   0          3m        172.16.0.73   aks-agentpool-40932894-1
web-54b885b89b-kwvxd   1/1       Running   0          3m        172.16.0.46   aks-agentpool-40932894-0

The Azure plugin gives private IPs to pods.

Kubenet plugin

The Kubenet plugin is related to basic networking. This is used in the online documentation, for instance in the quickstart.

Here, pods’ IPs are cluster-IPs. That is, they do not belong to the Azure Virtual Network but to the Kubernetes virtual network. They are therefore routable only from within the cluster.

Here we are going to do something different from basic networking. We are going to deploy AKS inside an existing Virtual Network.

In order to test this configuration, we can deploy the following ARM Template:

Deploy button

Again, the first three parameters are related to the service principal we need to create.

The fourth allows us to choose between Kubenet and Azure plugin. Let’s select Kubenet to test this time around.

First, let’s look at the resources in the portal.

If we look at the virtual network, we can see the connected devices:

Network devices

We only see the cluster’s VMs. When we used the Azure plugin, the pods got their own private IPs, which were secondary IPs on the VMs’ NICs. Here pods aren’t exposed in the Virtual Network, so there are no such private IPs. For that reason, the kubenet plugin consumes far fewer private IPs.
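To put a rough number on it: assuming the Azure plugin’s default of 30 pods per node, a 3-node cluster reserves on the order of 3 × (1 + 30) = 93 private IPs from the subnet up front, while the same cluster on kubenet only consumes 3, one per node NIC.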

As usual with AKS, we’ll have a buddy resource group named MC___. Let’s open it.

Kubenet resources

We see the typical underlying resources of an AKS cluster. One we do not see in an advanced networking cluster is the Route Table. Let’s open it.

Routes

We should see three routes. That configuration routes three cluster-IP ranges, one per node, to the primary IPs of the three nodes.
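If we prefer the CLI to the portal, something along these lines lists the same routes; the resource group and route table names below are examples and will differ in your deployment:

az network route-table route list \
  --resource-group MC_aks-demo_aks-cluster_eastus \
  --route-table-name aks-agentpool-37067697-routetable \
  --output table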

This routing is necessary to reach pods on other nodes. Let’s imagine a pod on the first node wants to contact a pod on the second node. The destination is a cluster-IP the Azure network knows nothing about, so the request leaves the first node and the route table forwards it to the second node, which delivers it to the pod.
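We can cross-check this from the cluster side. With kubenet, each node’s spec.podCIDR holds the cluster-IP range assigned to it, and it should match one of the address prefixes in the route table:

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'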

If we look at the subnets the route table is attached to, we see it isn’t attached to any. This is a known bug as of this writing (early September 2018) and is actually documented. We need to attach the route table to our VNET manually; it only needs to be attached to the aks subnet.
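Here is a sketch of that manual step with the Azure CLI, assuming the VNET lives in a resource group named aks-demo with a subnet named aks; adjust the names to your own deployment:

# Grab the ID of the route table AKS created in the buddy resource group
ROUTE_TABLE_ID=$(az network route-table list \
  --resource-group MC_aks-demo_aks-cluster_eastus \
  --query "[0].id" --output tsv)

# Attach it to the AKS subnet of the existing VNET
az network vnet subnet update \
  --resource-group aks-demo \
  --vnet-name aks-vnet \
  --name aks \
  --route-table $ROUTE_TABLE_ID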

Let’s connect to the cluster and deploy our service.yaml file:

kubectl apply -f service.yaml

It is the same file as in the previous section. It deploys 3 pods in a deployment and a service to load balance them. Let’s look at the service:

kubectl get services

We should see something like this:

NAME          TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes    ClusterIP      10.0.0.1      <none>        443/TCP        2h
web-service   LoadBalancer   10.0.207.13   172.16.16.4   80:30431/TCP   1m

web-service has an external IP belonging to the virtual network. The cluster-IP is something that can only be resolved within the cluster.

AKS respected the service.beta.kubernetes.io/azure-load-balancer-internal annotation, since the IP is private. It also respected the service.beta.kubernetes.io/azure-load-balancer-internal-subnet annotation since the private IP belongs to the services subnet.

Now if we look at pods:

kubectl get pods -o wide

We can see the pods’ IPs do not belong to the virtual network.

NAME                   READY     STATUS    RESTARTS   AGE       IP            NODE
web-54b885b89b-b8446   1/1       Running   0          6m        10.16.0.200   aks-agentpool-37067697-0
web-54b885b89b-ml4tv   1/1       Running   0          6m        10.16.1.201   aks-agentpool-37067697-1
web-54b885b89b-pmklk   1/1       Running   0          6m        10.16.2.200   aks-agentpool-37067697-2

The kubenet plugin gives cluster IPs to pods.

When to choose kubenet vs Azure?

Now that we’ve seen both plugins, the natural question is when to use which?

We’ll address that by looking at scenarios:

  • Only public IPs (prefer kubenet): For deploying only public IPs, the basic networking configuration is just simpler. There is no VNET to manage, as AKS manages its own VNET.
  • Private IPs, with outside access to pods (prefer Azure): The only way to reach pods from outside the cluster is to assign them a private IP address.
  • Private IPs, no outside access to pods (prefer kubenet): The Azure plugin would also work here. Basic networking wouldn’t, though; we need the intermediate configuration demonstrated in this article, where we deploy AKS in an existing VNET.
  • Limited private IPs (prefer kubenet): For companies linking their on-premises network to Azure (either via ExpressRoute or a simple VPN), there might be a concern about consuming large private IP ranges. The kubenet plugin doesn’t allocate private IPs to pods, which reduces the number of private IPs required.

Summary

We’ve looked at kubenet and Azure network plugins for Kubernetes.

The former is associated with basic networking. It manages pod IPs by allocating cluster IPs which are routable only from within the cluster.

The latter is associated with advanced networking. It manages pod IPs by allocating a VNET private IP. This is routable from anywhere within the VNET (or anything peered to that VNET).

We looked at a third configuration where we use the kubenet plugin but deploy AKS on an existing VNET. This still allows us to deploy services within the existing VNET. But it reduces the number of private IPs required.


3 thoughts on “AKS with Kubenet vs Azure Networking plug-in”

  1. First of all, great post and very nice presentation of a point where many would profit from.

    One small remark on the documentation side:

    The Application ID is clear: it is the one in the portal. However, the Object ID shown in the portal is the Object ID of the application, not the Object ID of the service principal.

    For the ARM Template to work, we need the Object ID of the service principal.

    Using PowerShell: Get-AzureRMADServicePrincipal -SearchString "AppName"

    To get the Application ID: Get-AzureRmADApplication -DisplayNameStartWith "AppName"

    Again great work, many thanks!

    Mo Ghaleb
