Inheritance Model

TL;DR:

  • The desired configuration is set for the mission-critical, external apps or apps-prod environment.
  • All other environments inherit configuration from apps or apps-prod.
  • The inherited configuration can be overwritten per environment.

Why inheritance

Inheritance from apps to ops, or from apps-prod to apps-stage and ops, is a cornerstone of Kubestack's reliable GitOps automation. The ops environment exists to validate changes before they are promoted to apps or apps-prod. Configuration drift risks rendering this protection ineffective.

Inheritance makes differences explicit. By default, everything is inherited, but individual attributes of the inherited configuration can be overwritten where necessary. Explicit differences do not prevent configuration drift, but they make it easier to spot.

Kubestack implements inheritance to reduce the risk of configuration drift and increase automation reliability. The fewer differences there are between the environment a change was validated against and the environment it is promoted to, the less likely the promotion is to fail.

Implementation

Kubestack implements the inheritance model for all its module types.

All Kubestack modules accept two input variables:

  1. configuration
  2. configuration_base_key

The configuration variable expects a map where the keys are the names of the Terraform workspaces and the values are the per-workspace configurations. The configuration_base_key variable defaults to apps and controls which environment all others inherit their configuration from.

Default environments

Given Kubestack's default environment names, ops and apps, this is the basic structure of the configuration map:

configuration = {
  apps = {}
  ops  = {}
}

Custom environments

For custom environments, consider this example with one internal ops environment and two external environments, apps-prod and apps-stage, for the application production and staging environments.

configuration_base_key = "apps-prod"

configuration = {
  apps-prod  = {}
  apps-stage = {}
  ops        = {}
}

Inheritance rules

The inheritance is implemented in the common/configuration module that all other Kubestack modules use internally. The module loops through all keys in the base environment (apps by default) and the current environment (determined by the value of terraform.workspace, e.g. ops). If a key exists in the current environment, the value from the current environment is used. If it does not, the value from the base environment is used. This results in the following inheritance behaviour, assuming the default environment names (see the sketch after the list):

  1. ops inherits everything from apps
  2. any attribute that is set in ops overwrites the inherited value from apps
  3. ops can add attributes that are not set in apps
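
Expressed as standalone Terraform, a minimal sketch of this behaviour could look like the following. This is an illustration, not the actual common/configuration implementation; it uses Terraform's built-in merge function, which gives later arguments precedence:

variable "configuration_base_key" {
  type    = string
  default = "apps"
}

variable "configuration" {
  type = any
}

locals {
  # the environment all others inherit from, apps by default
  base = var.configuration[var.configuration_base_key]

  # the currently selected environment, e.g. ops
  current = var.configuration[terraform.workspace]

  # merge() gives later maps precedence: keys only set in the base
  # environment are inherited, keys set in the current environment
  # overwrite or add to the inherited values
  merged = merge(local.base, local.current)
}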

To explain this, take a look at the following examples:

  1. A hypothetical example showing the inheritance rules
  2. A practical example for scaling cluster and platform service modules

Hypothetical example

Consider the following fictitious configuration:

module "configuration" {
source = "github.com/kbst/terraform-kubestack//common/configuration"
configuration_base_key = "apps"
configuration = {
apps = {
apps_key1 = "from_apps"
apps_key2 = "from_apps"
}
ops = {
ops_key = "from_ops"
apps_key1 = "from_ops"
}
}
}
output "apps_merged" {
value = module.configuration.merged["apps"]
}
output "ops_merged" {
value = module.configuration.merged["ops"]
}

This will result in the following outputs:

Outputs:

apps_merged = {
  "apps_key1" = "from_apps"
  "apps_key2" = "from_apps"
}

# ops overwrites apps_key1
# ops inherits apps_key2 unchanged
# ops adds ops_key
ops_merged = {
  "apps_key1" = "from_ops"
  "apps_key2" = "from_apps"
  "ops_key"   = "from_ops"
}

It is not possible to overwrite an inherited value with null to remove the attribute from the configuration.

Practical examples

The practical examples below show how to use inheritance to scale cluster modules and platform service modules per environment.

Cluster modules

The scaling configuration for a cluster is specified in the apps hash map. Half the compute resources would be wasted if the ops environment used the exact same scaling settings.

Avoiding configuration differences that break the automation is important, and money spent on making the automation more reliable is an investment in sustainability for any team.

But a configuration that works for a three node cluster is unlikely to fail for a 30 node cluster. And, within reason, the same holds if ops uses burstable or shared-core nodes with fewer vCPUs and less memory.

The example below for Amazon (EKS) auto-scales apps between 3 and 18 nodes (4 vCPUs, 16 GB memory) and ops between 3 and 6 nodes (2 vCPUs burstable, 4 GB memory). The same approach applies to AKS and GKE with the respective module's attribute names.

Amazon (EKS):
configuration = {
  apps = {
    # abbreviated example configuration
    # ...
    cluster_instance_type = "m5a.xlarge"
    cluster_min_size      = 3
    cluster_max_size      = 18
  }

  ops = {
    # smaller, cheaper instance type
    cluster_instance_type = "t3a.medium"

    # lower autoscaling min/max
    cluster_min_size = 3
    cluster_max_size = 6
  }
}
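
For Google (GKE), a comparable configuration might look like the sketch below, using e2-standard-4 (4 vCPUs, 16 GB memory) for apps and shared-core e2-medium (2 vCPUs, 4 GB memory) for ops. The attribute names are assumptions modeled on the EKS example above, check the GKE cluster module documentation for the exact input names:

Google (GKE):

configuration = {
  apps = {
    # abbreviated example configuration
    # ...
    # attribute names are assumed, see the GKE module documentation
    cluster_machine_type   = "e2-standard-4"
    cluster_min_node_count = 3
    cluster_max_node_count = 18
  }

  ops = {
    # smaller, shared-core machine type
    cluster_machine_type = "e2-medium"

    # lower autoscaling min/max
    cluster_min_node_count = 3
    cluster_max_node_count = 6
  }
}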

Platform service modules

Similar to how it makes sense to scale the cluster nodes differently for apps and ops, it also makes sense to scale platform services differently. Since cluster modules and platform service modules share the configuration inheritance, this works very similarly. Only the configuration attributes are now Kubernetes/Kustomize specific and not AKS, EKS or GKE specific anymore.

The example below scales the Nginx ingress controller up to three replicas on apps and down to one replica on ops. The module defaults to two replicas.

configuration = {
  apps = {
    replicas = [{
      name  = "ingress-nginx-controller"
      count = 3
    }]
  }

  ops = {
    replicas = [{
      name  = "ingress-nginx-controller"
      count = 1
    }]
  }
}
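
For context, this configuration map is passed to the platform service module like any other input. Below is a minimal sketch of such a module call, assuming the Nginx module from the Kubestack catalog; the version and provider wiring are placeholders, check the catalog documentation for your cluster:

module "ingress_nginx" {
  providers = {
    kustomization = kustomization
  }

  # version is a placeholder, check the catalog documentation
  source  = "kbst.xyz/catalog/nginx/kustomization"
  version = "x.y.z"

  configuration_base_key = "apps"

  configuration = {
    # the per environment replicas shown above
    apps = { replicas = [{ name = "ingress-nginx-controller", count = 3 }] }
    ops  = { replicas = [{ name = "ingress-nginx-controller", count = 1 }] }
  }
}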