Understanding simple HTTP Ingress in AKS
We previously looked at Kubernetes Ingress conceptually and covered different use cases: URL based routing and multiple domains.
We also looked at how ingress is implemented from an AKS perspective, i.e. how traffic gets routed to the nodes.
In this article, I wanted to get hands-on. I figured we could start slowly with a simple configuration: public internet endpoints and no TLS / certificates.
I found ingress thinly documented. Different online tutorials leave a lot of details unexplained, which gave me a sense of “magic” around Ingress.
I love magic with hats and bunnies, not with computer technologies. So, let’s have a look under the hood to dissipate all that smoke.
Scripts used in this article are on GitHub for convenience.
Cluster Creation
Let’s first create a cluster.
We won’t do anything fancy around AKS network plugins as we did in a past article. Instead, we’ll go the easiest route based on the AKS online quick start documentation.
So in a shell, using the Azure CLI, let’s do:
az group create --name aks-group --location eastus2
az aks create --resource-group aks-group --name aks-cluster --node-count 3 --generate-ssh-keys -s Standard_B2ms --disable-rbac
az aks get-credentials -g aks-group -n aks-cluster
This script creates a cluster named aks-cluster in the resource group aks-group, in the East US 2 region.
The first line creates the resource group. The second creates the cluster. The third downloads the cluster credentials so we can connect with kubectl.
The cluster has 3 nodes of the Standard_B2ms SKU. B-series VMs are burstable, the cheapest we can use with AKS.
The cluster has RBAC disabled, which simplifies the configuration we are about to do.
Helm
We’ll need Helm. We discussed Helm authoring before; here we are just going to use its package management capabilities as a consumer.
So we need to install the Helm client. We recommend looking at the online Helm documentation for instructions on installing it.
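For instance, on macOS with Homebrew installed, the client can be obtained with the kubernetes-helm formula; other platforms have their own package managers or binary downloads, as the Helm documentation explains:

# Install the Helm client via Homebrew (assumes Homebrew is present)
brew install kubernetes-helm

# Verify the client is available
helm version --client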
Then we can install Helm’s server-side component, Tiller:
helm init
Warning: this installs Tiller in a non-secured manner and isn’t recommended for production scenarios. To secure Tiller, see the Helm online documentation.
Installing Nginx Ingress Controller
As discussed in our conceptual survey of Ingress in AKS, the Ingress Controller is the component picking up web requests.
We’ll install it using Helm:
helm install stable/nginx-ingress --namespace kube-system --set controller.replicaCount=2 --set rbac.create=false
This command installs the nginx-ingress chart found in the stable repository. The stable repository is a public repo installed by default. We can look at the installed repos by typing helm repo list.
We install the chart in the kube-system namespace where other cluster-wide components live. This isn’t a requirement but is a recommended practice. We override two configuration values (an equivalent values-file approach is sketched after the list):
- The number of replicas: we specify we want the controller to have 2 replicas for High Availability
- RBAC: we specify we do not use RBAC in our cluster
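For reference, here is a minimal sketch of the same install driven by a values file instead of --set flags; controller-values.yaml is a hypothetical file name:

# Hypothetical values file, equivalent to the two --set flags above
cat > controller-values.yaml << EOF
controller:
  replicaCount: 2
rbac:
  create: false
EOF

helm install stable/nginx-ingress --namespace kube-system -f controller-values.yaml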
Nginx Deployments
Let’s look at how the controller got deployed:
$ kubectl get deploy --namespace kube-system -l app=nginx-ingress
NAME                                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
stultified-puffin-nginx-ingress-controller        2         2         2            2           9h
stultified-puffin-nginx-ingress-default-backend   1         1         1            1           9h
We see two deployments related to nginx:
- Ingress Controller (stultified-puffin-nginx-ingress-controller)
- Default Back end (stultified-puffin-nginx-ingress-default-backend)
The prefix of the deployment, in our case stultified-puffin, is randomly generated by Helm. It is the name of the Helm release as we can see with:
$ helm list
NAME                REVISION   UPDATED                   STATUS     CHART                  APP VERSION   NAMESPACE
stultified-puffin   1          Sun Nov 4 07:58:39 2018   DEPLOYED   nginx-ingress-0.28.2   0.19.0        kube-system
The Ingress controller has a desired replica count of 2. This is because we specified 2 for the controller.replicaCount value in the helm install.
In general, we can see all the values we can set in a Helm chart by inspecting it. For instance, helm inspect stable/nginx-ingress returns all the configurations we can override, while helm inspect stable/nginx-ingress | grep replica narrows it down to the replica-related ones.
It is good practice to have 2 replicas of the controller for high availability. It would be silly to have the ingress controller be less available than the services it fronts.
The default back end hosts the page returned when a request doesn’t match any ingress rule. It is the catch-all pod. It has only one replica: it shouldn’t be hit much in practice, so there is no need to burn compute on it.
Nginx Services
Let’s look at the corresponding services:
$ kubectl get services --namespace kube-system -l app=nginx-ingress
NAME                                              TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
stultified-puffin-nginx-ingress-controller        LoadBalancer   10.0.159.241   104.209.156.3   80:30737/TCP,443:32580/TCP   28m
stultified-puffin-nginx-ingress-default-backend   ClusterIP      10.0.133.159   <none>          80/TCP                       28m
As we discussed in the conceptual article, the ingress controller is itself a service that fronts other services.
The controller is a load balanced service exposed with a public IP. In our case this is 104.209.156.3. The public IP will be different for every deployment. This is configurable as we’ll see in future articles. For instance, the ingress controller can be exposed through a private IP.
The default backend has a ClusterIP only. It is common practice not to expose services externally if we expose them through an ingress.
The public IP is in the node resource group. This is the resource group where the underlying AKS resources (e.g. VMs) are deployed. Typically, its name has the shape MC_<resource group>_<cluster name>_<region>. The name of the group is actually a property of the cluster resource:
$ az aks show -g aks-group -n aks-cluster --query nodeResourceGroup -o tsv
MC_aks-group_aks-cluster_eastus2
$ az network public-ip list -g MC_aks-group_aks-cluster_eastus2 --query "[*].ipAddress"
[
"104.209.156.3"
]
Now, if we browse to that IP:
$ curl 104.209.156.3
default backend - 404
We get the default back end service since no ingress is configured.
URL based routing
Now let’s see one of the patterns we discussed in a previous article: URL based routing.
This is called Simple fanout in the Kubernetes documentation.
Let’s add a domain name on the IP address. This isn’t mandatory, but it will make the sample clearer.
Let’s go to the public IP address in the node resource group, using the portal. In the Configuration tab, let’s specify the DNS name label as vincentpizza.
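The same can be done from the CLI with az network public-ip update; note that <public-ip-name> below is a placeholder for the actual public IP resource name in the node resource group:

# <public-ip-name> is hypothetical: replace it with the actual resource name
# listed by "az network public-ip list -g MC_aks-group_aks-cluster_eastus2"
az network public-ip update -g MC_aks-group_aks-cluster_eastus2 -n <public-ip-name> --dns-name vincentpizza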
Let’s deploy url-based-routing.yaml:
kubectl apply -f url-based-routing.yaml
Let’s look at the file. It contains the deployment & service for pizza-offers, deployment & service for pizza-menu and the ingress combining both.
We leverage a simple container we built a while back. It takes an environment variable NAME and outputs that value on every request.
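As a quick aside, assuming Docker is installed locally, we can see that behaviour outside the cluster (the exact response format depends on the container):

# Run the container locally with a sample NAME value (assumes Docker)
docker run -d -p 8080:80 -e NAME=pizza-offers vplauzon/get-started:part2-no-redis

# The response should echo the NAME value back
curl http://localhost:8080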
# Offers deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pizza-offers-deploy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: pizza-offers
  template:
    metadata:
      labels:
        app: pizza-offers
    spec:
      containers:
      - name: myapp
        image: vplauzon/get-started:part2-no-redis
        env:
        - name: NAME
          value: pizza-offers
        ports:
        - containerPort: 80
---
# Offers Service
apiVersion: v1
kind: Service
metadata:
  name: pizza-offers-svc
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: pizza-offers
Nothing fancy here. Again, services aren’t using private or public IPs in Azure since we are going to expose them through ingress. Instead they use a ClusterIP.
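We can double check that neither service received an external IP; both should report a TYPE of ClusterIP and an EXTERNAL-IP of <none>:

# Both services should report TYPE ClusterIP and EXTERNAL-IP <none>
kubectl get services pizza-offers-svc pizza-menu-svc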
The ingress resource is interesting:
# Url Based Routing Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: url-routing-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: vincentpizza.eastus2.cloudapp.azure.com
    http:
      paths:
      - path: /pizza-offers
        backend:
          serviceName: pizza-offers-svc
          servicePort: 80
      - path: /menu
        backend:
          serviceName: pizza-menu-svc
          servicePort: 80
In the rules, we route requests based on their path.
We can then browse to both paths on our public IP and see the ingress in action.
The ingress is a service reverse-proxying other services. Under the same domain name, we have two applications running in different pods (processes), yet they appear to be a single application thanks to the ingress.
It is interesting to notice that if we browse to the root, i.e. http://vincentpizza.eastus2.cloudapp.azure.com/, we fall back to the default back end.
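For instance, with curl (the body returned by the first two requests depends on the container, while the last one should return the familiar default backend - 404 message):

# Routed to the pizza-offers and pizza-menu services respectively
curl http://vincentpizza.eastus2.cloudapp.azure.com/pizza-offers
curl http://vincentpizza.eastus2.cloudapp.azure.com/menu

# No rule matches the root path, so the default back end answers
curl http://vincentpizza.eastus2.cloudapp.azure.com/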
Domain name overload
To properly demonstrate the domain name overload, we need multiple domain names. This is called name based virtual hosting in the Kubernetes documentation. Here we’ll simulate it by changing the DNS name label of our public IP between tests.
But first, let’s deploy domain-name-overload.yaml:
kubectl apply -f domain-name-overload.yaml
The file contains services and deployments similar to the previous example. Instead of offers and menus, we have bikes and cars.
The ingress is different:
# Domain name overload Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: domain-name-overload-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: bikes.eastus2.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: bikes-svc
          servicePort: 80
  - host: cars.eastus2.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: cars-svc
          servicePort: 80
Here, instead of having multiple paths for one host, we have multiple hosts with no paths.
In order to test this, we must change the DNS name label on our public IP: first to bikes, then to cars, browsing to the respective host name each time.
We could use Azure DNS service to simulate this multi-host properly, but this would lengthen an already long blog post.
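As a lighter-weight alternative, since the ingress rules only look at the Host header, we can fake the host names from the client side with curl; the IP below is the public IP we saw earlier and will differ in every deployment:

# Pretend to be bikes.eastus2.cloudapp.azure.com without touching DNS
curl -H "Host: bikes.eastus2.cloudapp.azure.com" http://104.209.156.3/

# Same trick for the cars host
curl -H "Host: cars.eastus2.cloudapp.azure.com" http://104.209.156.3/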
Validating communication
In our conceptual article, we established that ingress communications look a bit like this: external traffic reaches the ingress controller pods, which then forward requests to the back-end pods, potentially on other nodes.
Let’s validate this by looking at the Ingress Controller’s pods and the pizza-offers pods:
$ kubectl get pods --namespace kube-system -l app=nginx-ingress,component=controller -o wide
NAME                                                         READY   STATUS    RESTARTS   AGE   IP           NODE                       NOMINATED NODE
stultified-puffin-nginx-ingress-controller-584cfc6b8-vczgm   1/1     Running   0          9h    10.244.1.5   aks-nodepool1-10135362-1   <none>
stultified-puffin-nginx-ingress-controller-584cfc6b8-wrqgs   1/1     Running   0          9h    10.244.2.3   aks-nodepool1-10135362-2   <none>
$ kubectl get pods -l app=pizza-offers -o wide
NAME                                   READY   STATUS    RESTARTS   AGE   IP           NODE                       NOMINATED NODE
pizza-offers-deploy-78c8f6d797-fsb7c   1/1     Running   0          52m   10.244.0.6   aks-nodepool1-10135362-0   <none>
pizza-offers-deploy-78c8f6d797-lffrt   1/1     Running   0          52m   10.244.1.7   aks-nodepool1-10135362-1   <none>
We see the ingress controller is running on nodes 1 and 2 while the pizza-offers pods run on nodes 0 and 1, which means the controller routes some requests to pods running on a different node than its own.
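To see which addresses the controller ultimately routes to, we can look at the endpoints of the pizza-offers service; they should list the two pod IPs shown above (10.244.0.6 and 10.244.1.7):

# Endpoints list the individual pod IPs behind the service
kubectl get endpoints pizza-offers-svc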
Summary
We’ve taken an in-depth look at how Ingress Controllers are deployed in an AKS cluster and how they work.
I hope this achieves the goal of removing the smoke effect ingress often has due to high-level documentation.