Cluster Manifests

TL;DR:

  • Use cluster manifests for anything that is required before you can deploy your own applications on a cluster.
  • Do not break infrastructure and application separation by including application workloads.
  • Cluster manifests are fully integrated into the Terraform state using the Kustomize provider.

What are cluster manifests?

Not all Kubernetes manifests should be part of the infrastructure automation repository. Include configuration or services that are shared between applications and environments and that are required before the cluster is ready to run workloads. Exclude configuration or services that are not shared between applications and environments and that merely define individual workloads.

Examples of cluster manifests:

  • Ingress controllers
  • RBAC
  • Namespaces
  • Quotas
  • Operators
  • Policy enforcement
  • Service meshes
  • Monitoring exporters/agents
  • Log forwarders

Services that do not meet the above criteria should be maintained outside of the infrastructure automation.

Deploying application workloads as part of the cluster manifests is discouraged because it breaks the separation of infrastructure and application environments.

It is important to prevent interdependence between infrastructure and application automation. Interdependence can make continuous deployment impossible. Without continuous deployment, changes grow bigger, and with that, riskier. This endangers the reliability and even the feasibility of your GitOps workflow.

Lifecycle integration

Because cluster manifests are shared, changes to them, just like changes to the infrastructure itself, need to be validated before they can affect applications.

Kubestack fully integrates Kustomize resources into the Terraform lifecycle using the purpose-built Kustomize provider.

This ensures that cluster manifests are part of Kubestack's GitOps process and are validated in the ops environment before they are applied to the apps environment.
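
As a minimal sketch, assuming the Kustomize provider's kustomization_build data source and a vendored overlay under manifest/overlays/apps (the path and resource names here are illustrative, not Kubestack's exact module internals), the integration looks roughly like this:

    # Minimal sketch: build the vendored cluster manifests with the
    # Kustomize provider; the overlay path is an assumption.
    data "kustomization_build" "current" {
      path = "manifest/overlays/apps"
    }

    # One Terraform resource per Kubernetes resource in the overlay,
    # so every change to cluster manifests shows up in plan and apply
    # like any other infrastructure change.
    resource "kustomization_resource" "current" {
      for_each = data.kustomization_build.current.ids

      manifest = data.kustomization_build.current.manifests[each.value]
    }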

Working with cluster manifests

Cluster manifests can be used in two ways:

  1. Vendoring bases from the Kubestack catalog
  2. Writing your own bespoke bases

Vendoring from the catalog

All entries of the Kubestack catalog include documentation on how to vendor, add, update and remove the base. Vendoring bases ensures that downtime of third-party services cannot disrupt your infrastructure automation.

Inheritance between bases and overlays enables customising the resulting configuration without making changes to the base itself. This makes updates seamless: an old base can simply be replaced with a new version, and the modifications in any overlay inheriting from that base are still applied on top of the new base.
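
As an illustration, an overlay's kustomization.yaml could reference a vendored base and layer a patch on top; the directory and file names below are hypothetical:

    # kustomization.yaml of a hypothetical overlay
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization

    resources:
      # the vendored base; swapping this directory for a new version
      # keeps the patch below applied on top of it
      - ../../bases/example-service

    patches:
      # overlay-local customisation, the base itself stays untouched
      - path: set-replicas.yaml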

Customising catalog bases

The instructions on the catalog pages suggest inheriting in the apps overlay. This is useful when consuming the configuration from the catalog as-is. If modifications are required, you have two options:

  1. Customising in the environment overlay.

    The inheritance model documentation includes an example that reduces the ops environment's ingress controller replica count to one. This modification is environment specific and fits well into the ops overlay; see the first sketch after this list.

    Similarly, to handle high traffic, you could also increase the default replica count for the apps overlay.

  2. Customising in an intermediate overlay.

    Keeping non-environment-specific modifications out of the environment overlays improves maintainability. Certain modifications, like using Kustomize's namespace field, even require an intermediate overlay, because they affect every resource in the overlay.

    Using an intermediate overlay means that instead of consuming the catalog entry in the apps overlay, you consume it in the intermediate overlay, make the required modifications there, and then inherit from the intermediate overlay in the apps overlay; see the second sketch after this list.

    Consider including the intermediate overlay under manifest/overlays and naming it after the base and the modification, e.g. manifest/overlays/nginx-example.
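
A sketch of the first option, assuming an ingress controller whose Deployment is named ingress-nginx-controller in the ingress-nginx namespace (both names are assumptions) and an ops overlay that inherits from the apps overlay:

    # manifest/overlays/ops/kustomization.yaml (sketch)
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization

    resources:
      - ../apps

    patches:
      # environment specific: one replica is enough for ops
      - patch: |-
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: ingress-nginx-controller
            namespace: ingress-nginx
          spec:
            replicas: 1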
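
And a sketch of the second option, using the hypothetical manifest/overlays/nginx-example intermediate overlay to apply Kustomize's namespace field to every resource of a catalog base:

    # manifest/overlays/nginx-example/kustomization.yaml (sketch)
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization

    # applies to every resource inherited from the base, which is
    # why this modification needs its own intermediate overlay
    namespace: nginx-example

    resources:
      - ../../bases/nginx

The apps overlay then consumes the intermediate overlay instead of the base:

    # manifest/overlays/apps/kustomization.yaml (sketch)
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization

    resources:
      - ../nginx-example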

Bespoke bases

Bespoke bases are useful for deploying cluster services that are not available from the catalog, or for managing cluster manifests that are closely related to the cluster configuration and therefore make sense to handle in the same repository. Namespace quotas are a good example: the benefit of configuring them in the cluster repository is that adjustments to a quota should also be reflected in the cluster capacity, and tracking both in the same repository makes it easier for teams to handle this jointly during reviews.
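
As a sketch, a bespoke base for namespace quotas could be as small as one manifest plus a kustomization; all names and values below are illustrative:

    # manifest/bases/team-quotas/quota.yaml (sketch)
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: compute-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "4"
        requests.memory: 8Gi
        limits.cpu: "8"
        limits.memory: 16Gi

    # manifest/bases/team-quotas/kustomization.yaml (sketch)
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization

    resources:
      - quota.yaml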

Consider including bespoke bases under manifest/bases and making them reusable between your environments. You can contribute bespoke bases that add a service not yet available from the catalog to the catalog repository on GitHub.