Provision Nginx Ingress to expose applications outside the cluster

TL;DR:

  • The Nginx ingress controller dynamically configures Nginx to proxy requests to Kubernetes services
  • Provision the ingress controller and a cloud load balancer, and set up DNS to make Kubernetes services available outside the cluster

Introduction

Nginx is one of the most widely used reverse proxies. The Nginx ingress controller integrates Nginx with Kubernetes and dynamically configures Nginx to proxy requests to applications on Kubernetes.

Following this guide you will:

  1. provision the Nginx ingress controller,
  2. expose it outside the cluster using a cloud load balancer, and
  3. configure the DNS zones provisioned by the Kubestack cluster modules to resolve to the load balancer and your ingress controller.

This guide assumes that you have set up DNS following the DNS setup guide.

Before we can provision the Nginx ingress controller, we need a Kubestack repository. If you do not have one yet, follow the Kubestack tutorial first. While the catalog modules can be used with any Terraform configuration, this guide assumes a Kubestack framework repository.

Nginx Ingress Installation

This first step installs the Nginx ingress controller on your cluster(s) using the Nginx ingress module from the Kubestack catalog.

module "eks_kbst_eu-west-1_service_nginx" {
providers = {
kustomization = kustomization.eks_kbst_eu-west-1
}
source = "kbst.xyz/catalog/nginx/kustomization"
version = "1.2.1-kbst.0"
# configuration_base_key = "apps-prod"
configuration = {
# apps-prod = {}
apps = {}
ops = {}
}
}
module "aks_kbst_westeurope_service_nginx" {
providers = {
kustomization = kustomization.aks_kbst_westeurope
}
source = "kbst.xyz/catalog/nginx/kustomization"
version = "1.2.1-kbst.0"
# configuration_base_key = "apps-prod"
configuration = {
# apps-prod = {}
apps = {}
ops = {}
}
}
module "gke_kbst_europe-west1_service_nginx" {
providers = {
kustomization = kustomization.gke_kbst_europe-west1
}
source = "kbst.xyz/catalog/nginx/kustomization"
version = "1.2.1-kbst.0"
# configuration_base_key = "apps-prod"
configuration = {
# apps-prod = {}
apps = {}
ops = {}
}
}
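
Once these changes have been applied (see Apply Changes below), you can verify the controller is running. A quick check with kubectl, assuming your kubeconfig points at the respective cluster and the upstream default ingress-nginx namespace:

# list the ingress controller pods
kubectl -n ingress-nginx get pods

# check the service of type LoadBalancer and its external IP/CNAME
kubectl -n ingress-nginx get service ingress-nginx-controller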

Patch load balancer IP/CNAME

The upstream Nginx ingress manifests include a service of type LoadBalancer, for which Kubernetes provisions a cloud load balancer. To make DNS resolve to that load balancer, we have to patch the setup per cloud provider. For AWS, this means querying the CNAME of the provisioned ELB and adding it to the DNS zone the Kubestack cluster module provisioned. For AKS and GKE, the cluster modules already reserve an IP address and add it to the DNS zone. Here, the Nginx ingress module therefore needs to be patched so the cloud load balancer uses the reserved IP address.

For EKS, the provisioned load balancer returns a CNAME. We have to add this CNAME to the DNS zone provisioned by the Kubestack cluster module.

module "eks_kbst_eu-west-1_service_nginx" {
providers = {
kustomization = kustomization.eks_kbst_eu-west-1
}
source = "kbst.xyz/catalog/nginx/kustomization"
version = "1.2.1-kbst.0"
# configuration_base_key = "apps-prod"
configuration = {
# apps-prod = {}
apps = {}
ops = {}
}
}

The Nginx module above creates the service of type LoadBalancer. The DNS module below reads the load balancer's CNAME from the Kubernetes API and creates entries in the DNS zone. But for the data source in the DNS module to work, the service has to exist first. There are two ways to achieve this:

  • Either, run terraform apply --target module.eks_kbst_eu-west-1_service_nginx to deploy the Nginx manifests (see the sketch below) and then let the pipeline handle the rest.
  • Or, if you prefer to avoid the manual apply, split this change into two commits that you merge into main individually.
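
A minimal sketch of the first option, assuming you run it from the environment where this Terraform configuration is initialized:

# deploy only the Nginx manifests, so the service of type LoadBalancer exists
terraform apply --target module.eks_kbst_eu-west-1_service_nginx

# afterwards a full plan/apply, e.g. by the pipeline, can read the CNAME
terraform plan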

A note about depends_on on the DNS zone module: generally, you could make the dependency explicit by using depends_on = [module.eks_kbst_eu-west-1_service_nginx]. This works initially, but occasionally pushes the data source into the apply phase, which recreates the DNS entries. To avoid this recreation, and the DNS entries temporarily not resolving as a result, this guide does not set depends_on and instead offers the two alternatives above.

module "eks_kbst_eu-west-1_dns_zone" {
providers = {
aws = aws.eks_kbst_eu-west-1
kubernetes = kubernetes.eks_kbst_eu-west-1
}
# make sure to match your cluster module's version
source = "github.com/kbst/terraform-kubestack//aws/cluster/elb-dns?ref=v0.17.0-beta.0"
ingress_service_name = "ingress-nginx-controller"
ingress_service_namespace = "ingress-nginx"
metadata_fqdn = module.eks_kbst_eu-west-1.current_metadata["fqdn"]
}
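
If you want to inspect the CNAME the DNS module's data source reads, you can query the service status directly. A minimal check, assuming kubectl access to the EKS cluster:

# print the ELB CNAME from the service's load balancer status
kubectl -n ingress-nginx get service ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'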

For AKS, to ensure the load balancer provisioned by Kubernetes uses the IP address the Kubestack cluster module reserved and added to DNS, we have to patch the Kubernetes service. The patch extends the service of type LoadBalancer coming from upstream Nginx to set spec.loadBalancerIP to the reserved IP address.

module "aks_kbst_westeurope_service_nginx" {
providers = {
kustomization = kustomization.aks_kbst_westeurope
}
source = "kbst.xyz/catalog/nginx/kustomization"
version = "1.2.1-kbst.0"
# configuration_base_key = "apps-prod"
configuration = {
# apps-prod = {}
apps = {
patches = [{
patch = <<-EOF
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
loadBalancerIP: ${module.aks_kbst_westeurope.default_ingress_ip}
EOF
}]
}
ops = {}
}
}

For GKE, to ensure the load balancer provisioned by Kubernetes uses the IP address the Kubestack cluster module reserved and added to DNS, we have to patch the Kubernetes service. The patch extends the service of type LoadBalancer coming from upstream Nginx to set spec.loadBalancerIP to the reserved IP address.

module "gke_kbst_europe-west1_service_nginx" {
providers = {
kustomization = kustomization.gke_kbst_europe-west1
}
source = "kbst.xyz/catalog/nginx/kustomization"
version = "1.2.1-kbst.0"
# configuration_base_key = "apps-prod"
configuration = {
# apps-prod = {}
apps = {
patches = [{
patch = <<-EOF
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
loadBalancerIP: ${module.gke_kbst_europe-west1.default_ingress_ip}
EOF
}]
}
ops = {}
}
}
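
For both AKS and GKE, you can confirm the patch took effect by checking that the service's external IP matches the reserved one. A quick check, assuming kubectl access to the respective cluster:

# print the external IP assigned to the ingress service
kubectl -n ingress-nginx get service ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'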

Apply Changes

As with every change, it's time to commit, push and review. Merge when the plan looks good. And finally, promote the changes once they have been validated in ops.

The full workflow is documented on the GitOps process page.

But here's a short summary for convenience:

# create a new feature branch
git checkout -b add-nginx-ingress-controller
# add the changes and commit them
git add .
git commit -m "Install nginx ingress controller and configure DNS"
# push the changes to trigger the pipeline
git push origin add-nginx-ingress-controller

Then follow the link in the output to create a new pull request. Review the pipeline run and merge the pull request when everything is green.

Last but not least, promote the changes once you have validated them in ops by setting a tag.

# make sure you're on the merge commit
git checkout main
git pull
# then tag the commit
git tag apps-deploy-$(date -I)-0
# finally push the tag, to trigger the pipeline to promote
git push origin apps-deploy-$(date -I)-0
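
Once the pipeline has applied the promoted changes, you can verify end to end that DNS resolves and Nginx responds. A sketch, with a hostname from your DNS zone substituted for the placeholder:

# resolve the ingress hostname (replace the placeholder with a record from your zone)
dig +short <your-ingress-hostname>

# request it over HTTP; a 404 from the Nginx default backend is
# expected until you create Ingress resources for your applications
curl -i http://<your-ingress-hostname>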

Next Steps

With the Nginx ingress controller deployed, a common next step is to deploy Cert-Manager and configure it to issue Let's Encrypt certificates.

The following guide walks you through setting this up: Cert-Manager and Let's Encrypt

If you haven't yet, make sure to set up DNS. It is required both for applications to be reachable from the internet and for Let's Encrypt to be able to issue certificates.