PostgreSQL Operator

Terraform module for Kubernetes platforms

The PostgreSQL operator provisions and configures a PostgreSQL database cluster on top of Kubernetes. PostgreSQL is a powerful, open-source object-relational database system.

This Terraform module helps platform engineering teams provision PostgreSQL Operator on Kubernetes. It fully integrates the upstream Kubernetes resources into the Terraform plan/apply lifecycle and allows configuring PostgreSQL Operator using native Terraform syntax.

The PostgreSQL Operator module is continuously updated and tested when new upstream versions are released.



  • Use kbst add service postgresql to add PostgreSQL Operator to your platform
  • The kbst CLI scaffolds the Terraform module boilerplate for you
  • Kubestack platform service modules bundle upstream manifests and are fully customizable

Use the module

The kbst CLI helps you scaffold the Terraform code to provision PostgreSQL Operator on your platform. It takes care of calling the module once per cluster, sets the correct source and latest version for the module, and makes sure the module's configuration and configuration_base_key match your platform.

# add PostgreSQL Operator service to all platform clusters
kbst add service postgresql
# or optionally only add PostgreSQL Operator to a single cluster
# 1. list existing platform modules
kbst list
# 2. add PostgreSQL Operator to a single cluster
kbst add service postgresql --cluster-name aks_gc0_westeurope

Scaffolding the boilerplate is convenient, but platform service modules are fully documented, standard Terraform modules. They can also be used standalone without the Kubestack framework.

Customize resources

All Kubestack platform service modules support the same module attributes and configuration as other Kubestack modules. The module configuration is a Kustomization set in the per-environment configuration map, following Kubestack's inheritance model.

The example below shows some options to customize the resources provisioned by the PostgreSQL Operator module.

module "example_postgresql" {
  providers = {
    kustomization = kustomization.example
  }

  source  = ""
  version = "1.11.0-kbst.0"

  configuration = {
    apps = {
      # change the namespace of all resources
      namespace = var.example_postgresql_namespace

      # or add an annotation
      common_annotations = {
        "terraform-workspace" = terraform.workspace
      }

      # use images to pull from an internal proxy
      # and avoid being rate limited
      images = [{
        # refers to the '' to modify the 'image' attribute of
        name = "container-name"

        # customize the 'registry/name' part of the image
        new_name = ""
      }]
    }

    ops = {
      # scale down replicas in ops
      replicas = [{
        # refers to the '' of the resource to scale
        name = "example"

        # sets the desired number of replicas
        count = 1
      }]
    }
  }
}

In addition to the example attributes shown above, modules also support secret_generator, config_map_generator, patches and many other Kustomization attributes.

Full documentation on how to customize a module's Kubernetes resources is available in the platform service module configuration section of the framework documentation.
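As one illustration, a patches entry can apply a strategic merge patch to any of the bundled resources. The sketch below assumes the operator's Deployment and container are both named postgres-operator; check the bundled manifests for the actual names before using it.

```hcl
  configuration = {
    apps = {
      # apply a strategic merge patch to one of the bundled resources
      patches = [{
        # 'postgres-operator' is an assumed name, verify it
        # against the Deployment in the bundled manifests
        patch = <<-EOF
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: postgres-operator
          spec:
            template:
              spec:
                containers:
                - name: postgres-operator
                  resources:
                    requests:
                      cpu: 100m
                      memory: 250Mi
        EOF
      }]
    }

    ops = {}
  }
```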


Once the operator has been deployed to the Kubernetes cluster, you can use it to provision and operate one or more database clusters by creating a custom object of the operator's custom resource.

PostgreSQL Custom Object

Below is an example of a minimal PostgreSQL custom object to instruct the operator to provision a database cluster.

To get started, put the example below into a file called postgresql.yaml and add it to your application's manifests. Then apply the manifests including the postgresql.yaml as usual.

apiVersion: ""
kind: postgresql
metadata:
  name: acid-minimal-cluster
  namespace: default
spec:
  teamId: "ACID"
  postgresql:
    version: "10"
  volume:
    size: 1Gi
  numberOfInstances: 2
  users:
    # admin user
    admin_user:
    - superuser
    # application user
    app_user: []
  databases:
    # db_name: user_name
    app_db: app_user
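With the manifest saved as postgresql.yaml, applying it directly with kubectl could look like the sketch below; listing works because the operator registers a postgresql custom resource definition in the cluster.

```shell
# create or update the custom object in the cluster
kubectl apply -f postgresql.yaml

# list postgresql custom objects and check their status
kubectl get postgresql -n default
```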

Configuring your cluster

Make sure to configure the name and namespace, as well as the users and databases, according to your application's requirements.
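As a sketch, the users and databases maps in the custom object's spec could be extended like this; the role and database names here are placeholders for your application's own:

```yaml
spec:
  users:
    # admin role with extra privileges (placeholder name)
    team_admin:
    - superuser
    - createdb
    # login role for the application, no extra flags
    orders_user: []
  databases:
    # db_name: owner_name
    orders: orders_user
```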

You can find additional information on these parameters and additional options in the upstream project's documentation.