Set up nameserver records for your clusters in DNS
TL;DR:
- Kubestack cluster modules provision DNS zones per cluster and environment
- Zone names are based on the `name_prefix`, `workspace`, `region` and `base_domain`
- Set NS records on the base domain to make DNS names resolvable
Introduction
Exposing workloads on Kubernetes to the internet can be achieved in many ways. A common way, and the one the Kubestack framework provides by default, is to:
1. Set up DNS to resolve a hostname to a cloud loadbalancer
2. Use a cloud loadbalancer to route traffic from the internet to the cluster nodes
3. Use an ingress controller to route requests to the service inside the cluster
This guide explains how to set up the nameserver records for the DNS zones Kubestack provisions in your base domain's DNS. Requirements 2 and 3 are covered in the Nginx Ingress controller guide. The Nginx Ingress guide explains how to deploy the ingress controller inside the cluster and how to expose it using a Kubernetes LoadBalancer type service. The DNS setup proposed here works for other ingress controllers or the Istio ingress gateway too.
If you prefer to provision your own DNS setup, you can disable the Kubestack provisioned zones by setting `disable_default_ingress = true` on the cluster module(s).
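The following is only a rough sketch of what that could look like for an EKS cluster module. The module source, version placeholder and the per-environment `configuration` map structure are assumptions here, check your cluster module's documentation for the exact attributes:

```hcl
# hypothetical sketch, only disable_default_ingress is taken from this guide,
# module source, version and the configuration structure are placeholders
module "eks_kbst_eu-west-1" {
  source = "github.com/kbst/terraform-kubestack//aws/cluster?ref=vX.Y.Z"

  configuration = {
    apps = {
      # other required configuration attributes omitted for brevity

      # opt out of the Kubestack provisioned DNS zones
      disable_default_ingress = true
    }
    ops = {}
  }
}
```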
Query Name Servers
Kubestack sets up DNS zones per cluster and per environment to ensure changes to the DNS can be previewed and validated like any other change.
The zones are based on the `name_prefix`, `workspace`, `region` and `base_domain` variables.
To make DNS names in those zones resolvable, we have to complete two steps:
- Get the zone's name servers for each cluster and each environment
- Set nameserver (`NS`) records in the `base_domain`'s DNS for each of them
List workspaces
As the first step, we need to list the workspaces, so that we know how many environments we have to query name servers for.
```
terraform workspace list
```
This command will return either `default`, `ops` and `apps`, or `default`, `ops`, `apps` and `apps-prod`, depending on how many infrastructure environments you configured for your platform stack. If you customised the environment names, you will see those names here instead.
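For illustration, assuming the default two-environment setup with `ops` and `apps`, the output looks roughly like this, with the `*` marking the currently selected workspace:

```
$ terraform workspace list
* default
  ops
  apps
```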
With the environment names, you can now query the name servers for each cluster and environment. Repeat the following steps for each cluster in every environment.
DNS zones
The DNS zones are provisioned on the same cloud provider as the cluster. To query them, we use the Terraform state.
- First, we select one of the workspaces.
- Next, we list all the zones we're tracking in Terraform state.
- Finally, we query each zone's name servers.
You have to repeat these steps for every cluster in every environment. If you have clusters on more than one cloud provider, follow the instructions for each cloud provider.
Select workspace
```
# we use ops as the example here
# but you need to do this for every workspace except default
terraform workspace select ops
```
Query name servers
Select the cloud provider for each cluster. Follow the instructions once per cluster and infrastructure environment.
For every cluster and environment, note down the `name_servers` from the output below. You need to add `NS` records for each of them in the following step.
```
terraform state list | grep aws_route53_zone

module.eks_kbst_eu-west-1.module.cluster.aws_route53_zone.current[0]
```

```
terraform state show module.eks_kbst_eu-west-1.module.cluster.aws_route53_zone.current[0]

# module.eks_kbst_eu-west-1.module.cluster.aws_route53_zone.current[0]:
resource "aws_route53_zone" "current" {
    # [...]
    name         = "kbst-ops-eu-west-1.aws.kubestack.example.com"
    name_servers = [
        "ns-1153.awsdns-16.org",
        "ns-1777.awsdns-30.co.uk",
        "ns-263.awsdns-32.com",
        "ns-916.awsdns-50.net",
    ]
    # [...]
}
```
```
terraform state list | grep azurerm_dns_zone

module.aks_kbst_westeurope.module.cluster.azurerm_dns_zone.current[0]
```

```
terraform state show module.aks_kbst_westeurope.module.cluster.azurerm_dns_zone.current[0]

# module.aks_kbst_westeurope.module.cluster.azurerm_dns_zone.current[0]:
resource "azurerm_dns_zone" "current" {
    # [...]
    name         = "kbst-ops-westeurope.azure.kubestack.example.com"
    name_servers = [
        "ns1-07.azure-dns.com.",
        "ns2-07.azure-dns.net.",
        "ns3-07.azure-dns.org.",
        "ns4-07.azure-dns.info.",
    ]
    # [...]
}
```
```
terraform state list | grep google_dns_managed_zone

module.gke_kbst_europe-west1.module.cluster.google_dns_managed_zone.current[0]
```

```
terraform state show module.gke_kbst_europe-west1.module.cluster.google_dns_managed_zone.current[0]

# module.gke_kbst_europe-west1.module.cluster.google_dns_managed_zone.current[0]:
resource "google_dns_managed_zone" "current" {
    # [...]
    dns_name     = "kbst-ops-europe-west1.gcp.kubestack.example.com."
    name_servers = [
        "ns-cloud-d1.googledomains.com.",
        "ns-cloud-d2.googledomains.com.",
        "ns-cloud-d3.googledomains.com.",
        "ns-cloud-d4.googledomains.com.",
    ]
    # [...]
}
```
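If you have many clusters and environments, the repetition can optionally be scripted. The loop below is only a convenience sketch, assuming a Unix shell, the default `ops` and `apps` workspace names and the resource types shown above; adjust it to your setup:

```bash
#!/usr/bin/env bash
# hypothetical helper, prints the name servers of every Kubestack
# provisioned DNS zone in the ops and apps workspaces
set -euo pipefail

for ws in ops apps; do
  terraform workspace select "$ws"

  terraform state list \
    | grep -E 'aws_route53_zone|azurerm_dns_zone|google_dns_managed_zone' \
    | while read -r zone; do
        echo "### ${ws}: ${zone}"
        # print the name_servers attribute and the lines following it
        terraform state show "$zone" | grep -A 6 'name_servers'
      done
done
```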
Configure NS entries
The name servers we just queried now have to be added to the base domain's DNS as NS records for the cluster's fully qualified domain name (FQDN).
How to do this differs slightly for every DNS provider, but in general you have to:
- Log in to your DNS provider's management interface
- Select the DNS zone for your base domain
- Add NS entries for each FQDN of your clusters
- Set the name servers that you just queried as the value, as shown in the examples below
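If the base domain's zone is itself managed with Terraform, the NS entries can also be added as regular resources instead of through the management interface. This is only a hedged sketch, assuming the base domain is hosted in a Route 53 zone; the record name and name servers are taken from the AWS example above and need to be replaced with your own values:

```hcl
# hypothetical sketch, assumes the base domain's zone is hosted on Route 53
# and managed in a separate Terraform configuration
data "aws_route53_zone" "base_domain" {
  name = "kubestack.example.com."
}

resource "aws_route53_record" "kbst_ops_eu_west_1_ns" {
  zone_id = data.aws_route53_zone.base_domain.zone_id
  name    = "kbst-ops-eu-west-1.aws.kubestack.example.com"
  type    = "NS"
  ttl     = 86400

  # name servers queried from the cluster's zone in the previous step
  records = [
    "ns-1153.awsdns-16.org.",
    "ns-1777.awsdns-30.co.uk.",
    "ns-263.awsdns-32.com.",
    "ns-916.awsdns-50.net.",
  ]
}
```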
Example zone file
The example zone files below show the entries for two clusters per provider, one in the ops and one in the apps environment.
```
$ORIGIN kubestack.example.com.

; Amazon example for ops environment
kbst-ops-eu-west-1.aws    IN    NS    ns-1153.awsdns-16.org.
kbst-ops-eu-west-1.aws    IN    NS    ns-1777.awsdns-30.co.uk.
kbst-ops-eu-west-1.aws    IN    NS    ns-263.awsdns-32.com.
kbst-ops-eu-west-1.aws    IN    NS    ns-916.awsdns-50.net.

; Amazon example for apps environment
kbst-apps-eu-west-1.aws   IN    NS    ns-1252.awsdns-23.org.
kbst-apps-eu-west-1.aws   IN    NS    ns-1971.awsdns-48.co.uk.
kbst-apps-eu-west-1.aws   IN    NS    ns-252.awsdns-12.com.
kbst-apps-eu-west-1.aws   IN    NS    ns-186.awsdns-38.net.
```

```
$ORIGIN kubestack.example.com.

; Azure example for ops environment
kbst-ops-westeurope.azure    IN    NS    ns1-07.azure-dns.com.
kbst-ops-westeurope.azure    IN    NS    ns2-07.azure-dns.net.
kbst-ops-westeurope.azure    IN    NS    ns3-07.azure-dns.org.
kbst-ops-westeurope.azure    IN    NS    ns4-07.azure-dns.info.

; Azure example for apps environment
kbst-apps-westeurope.azure   IN    NS    ns1-08.azure-dns.com.
kbst-apps-westeurope.azure   IN    NS    ns2-08.azure-dns.net.
kbst-apps-westeurope.azure   IN    NS    ns3-08.azure-dns.org.
kbst-apps-westeurope.azure   IN    NS    ns4-08.azure-dns.info.
```

```
$ORIGIN kubestack.example.com.

; Google example for ops environment
kbst-ops-europe-west1.gcp    IN    NS    ns-cloud-d1.googledomains.com.
kbst-ops-europe-west1.gcp    IN    NS    ns-cloud-d2.googledomains.com.
kbst-ops-europe-west1.gcp    IN    NS    ns-cloud-d3.googledomains.com.
kbst-ops-europe-west1.gcp    IN    NS    ns-cloud-d4.googledomains.com.

; Google example for apps environment
kbst-apps-europe-west1.gcp   IN    NS    ns-cloud-c1.googledomains.com.
kbst-apps-europe-west1.gcp   IN    NS    ns-cloud-c2.googledomains.com.
kbst-apps-europe-west1.gcp   IN    NS    ns-cloud-c3.googledomains.com.
kbst-apps-europe-west1.gcp   IN    NS    ns-cloud-c4.googledomains.com.
```
Example DNS query
The goal is for a DNS query for the `NS` records of each FQDN to return the respective name servers in the `ANSWER SECTION`, like below.
Here is an example query for the GKE cluster in the ops environment that corresponds to the Terraform state and zone file examples above.
```
dig NS kbst-ops-europe-west1.gcp.kubestack.example.com
```
Example DNS response
The DNS query should return output similar to the one below. You want all four name servers to be returned.
```
[...]

;; ANSWER SECTION:
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d1.googledomains.com.
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d2.googledomains.com.
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d3.googledomains.com.
kbst-ops-europe-west1.gcp.kubestack.example.com. 21599 IN NS ns-cloud-d4.googledomains.com.
```
Next Steps
With the DNS setup in place, a common next step is to deploy an ingress controller to allow exposing services outside the Kubernetes cluster.
The DNS setup shown here can be used for any ingress controller and also for the Istio ingress gateway. Kubestack provides a guide for the Nginx ingress controller, but you can also use it as an example of how to deploy other ingress controllers. Additionally, a guide on how to set up Cert-Manager to automate certificate provisioning with Let's Encrypt is available as well.