Provision Infrastructure

Overview

In this second part of the Kubestack tutorial you will:

  1. Set up authentication.
  2. Provision the remote state.
  3. Bootstrap your infrastructure.
  4. Set up DNS.

To make it easier to provide all prerequisites, like the cloud provider command line utilities, Terraform, Kustomize and more, we provide a container image. We use it for bootstrapping now and also for CI/CD later.

# Build the bootstrap container
docker build -t kbst-infra-automation:bootstrap .
# Exec into the bootstrap container
docker run --rm -ti \
  -v `pwd`:/infra \
  kbst-infra-automation:bootstrap

Set up authentication

Follow the instructions to authenticate with your chosen cloud provider. If you chose more than one, follow the instructions for each of them.

AWS

Use your personal user to create the automation user.

  1. Login with your personal user.

    # run aws configure and follow the instructions
    # make sure to set a default region
    aws configure
  2. Create the automation user and attach the AWS managed AdministratorAccess policy to it.

    # Create the user
    aws iam create-user --user-name kubestack-automation
    # Attach the managed admin policy to the user
    aws iam attach-user-policy \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess \
    --user-name kubestack-automation
  3. Replace your personal credentials.

    # Create keys for the user
    ACCESS_KEY_JSON=$(aws iam create-access-key --user-name kubestack-automation)
    # Set the access key
    aws configure set aws_access_key_id $(echo $ACCESS_KEY_JSON | jq -r .AccessKey.AccessKeyId)
    # Set the secret access key
    aws configure set aws_secret_access_key $(echo $ACCESS_KEY_JSON | jq -r .AccessKey.SecretAccessKey)
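
The two `jq` expressions above extract the new key from the JSON that `aws iam create-access-key` returns. As a sketch, the same expressions can be checked against a mock payload of that shape (values are fake, no AWS call is made):

```shell
# Mock payload in the shape of `aws iam create-access-key` output (fake values)
ACCESS_KEY_JSON='{"AccessKey":{"UserName":"kubestack-automation","AccessKeyId":"AKIAFAKEKEYID","SecretAccessKey":"fakeSecretKey","Status":"Active"}}'
# Extract the access key ID
echo "$ACCESS_KEY_JSON" | jq -r .AccessKey.AccessKeyId
# Extract the secret access key
echo "$ACCESS_KEY_JSON" | jq -r .AccessKey.SecretAccessKey
```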

Azure

Use your personal account to create the service principal.

  1. Login with your personal account.

    # run az login and follow the instructions
    az login
  2. Select the subscription to use.

    # List your subscriptions
    az account list --query "[].{name:name,subscription_id:id}" --output table
    # Select your subscription by ID
    read -p "Subscription ID: " SUBSCRIPTION_ID
  3. Create a service principal and configure role and permissions.

    # Create service principal and assign the contributor role
    AZ_SP_JSON=$(az ad sp create-for-rbac \
    --name="kubestack-automation" \
    --role="Contributor" \
    --scopes="/subscriptions/${SUBSCRIPTION_ID}" \
    --output="json")
    # Also add and grant active directory permissions
    az ad app permission add \
    --id $(echo $AZ_SP_JSON | jq -r .appId) \
    --api 00000002-0000-0000-c000-000000000000 \
    --api-permissions 1cda74f2-2616-4834-b122-5cb1b07f8a59=Role
    az ad app permission grant \
    --id $(echo $AZ_SP_JSON | jq -r .appId) \
    --api 00000002-0000-0000-c000-000000000000 \
    --scope user_impersonation
    az ad app permission admin-consent \
    --id $(echo $AZ_SP_JSON | jq -r .appId)
  4. Authenticate as the service principal.

    # For the CLI
    az login --service-principal \
    --username $(echo $AZ_SP_JSON | jq -r .appId) \
    --password $(echo $AZ_SP_JSON | jq -r .password) \
    --tenant $(echo $AZ_SP_JSON | jq -r .tenant)
    # For Terraform
    export ARM_CLIENT_ID=$(echo $AZ_SP_JSON | jq -r .appId)
    export ARM_CLIENT_SECRET=$(echo $AZ_SP_JSON | jq -r .password)
    export ARM_SUBSCRIPTION_ID="${SUBSCRIPTION_ID}"
    export ARM_TENANT_ID=$(echo $AZ_SP_JSON | jq -r .tenant)
  5. Get the storage account access key (the storage account must already exist).

    # Select the resource group
    az group list --query "[].{name:name}" --output table
    read -p "Resource group: " RESOURCE_GROUP
    # Select the storage account name
    az storage account list --query "[].{name:name}" --output table
    read -p "Storage account name: " STORAGE_ACCOUNT
    export ARM_ACCESS_KEY=$(az storage account keys list --account-name ${STORAGE_ACCOUNT} --resource-group ${RESOURCE_GROUP} | jq -r '.[0].value')
  6. Save the ARM_* environment variables for CI/CD.

    env | grep ARM_ > ~/.azure/KBST_AUTH_AZ
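
The `env | grep ARM_` pattern above writes every exported `ARM_*` variable as a `KEY=value` line. A minimal illustration of the pattern, using fake values and a temporary file instead of `~/.azure/KBST_AUTH_AZ`:

```shell
# Fake values, shaped like the exports above
export ARM_CLIENT_ID="11111111-1111-1111-1111-111111111111"
export ARM_CLIENT_SECRET="fake-secret"
# Persist all ARM_* variables, one KEY=value line each
env | grep ARM_ > /tmp/KBST_AUTH_AZ
# The file can later be sourced or mapped into CI/CD secrets
grep ARM_CLIENT_ID /tmp/KBST_AUTH_AZ
```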

Google Cloud

Use your personal account to create the service account.

  1. Login with your personal account.

    # run gcloud init and follow the instructions
    gcloud init
  2. Create the service account, assign a role and create keys.

    # Use the current project
    PROJECT=$(gcloud config get-value project)
    # Create the service account
    gcloud iam service-accounts create kubestack-automation \
    --description "SA used for Kubestack GitHub Actions" \
    --display-name "kubestack-automation"
    # Assign the owner role to the service account
    gcloud projects add-iam-policy-binding ${PROJECT} \
    --member serviceAccount:kubestack-automation@${PROJECT}.iam.gserviceaccount.com \
    --role roles/owner
    # Create service account keys
    gcloud iam service-accounts keys create \
    ~/.config/gcloud/application_default_credentials.json \
    --iam-account kubestack-automation@${PROJECT}.iam.gserviceaccount.com
  3. Activate the service account and stop using your personal account.

    gcloud auth activate-service-account --key-file ~/.config/gcloud/application_default_credentials.json
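
The key file written above is a plain JSON document; its `client_email` field names the service account the key belongs to, which is handy to verify before activating it. A quick inspection, shown here against a mock key file with fake values:

```shell
# Mock key file with the documented shape (fake values)
cat > /tmp/sa-key.json <<'EOF'
{"type":"service_account","project_id":"my-project","client_email":"kubestack-automation@my-project.iam.gserviceaccount.com"}
EOF
# Print which service account the key belongs to
jq -r .client_email /tmp/sa-key.json
```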

Provision the remote state

Terraform requires remote state to run in CI/CD. You are free to use any supported remote state storage. The instructions below use your cloud provider's object storage. If you're provisioning multi-cloud infrastructure, pick one of your providers to store the state.

Object storage bucket names have to be globally unique. The instructions below append the short Git hash of your initial commit to achieve this.
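
The naming scheme can be sketched as follows; the placeholder hash is only a fallback for running the snippet outside a Git checkout:

```shell
# Short hash of the current commit; placeholder when outside a repository
SUFFIX=$(git rev-parse --short HEAD 2>/dev/null || echo abc1234)
BUCKET_NAME="terraform-state-kubestack-${SUFFIX}"
echo "$BUCKET_NAME"
```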

Run the following commands to create a bucket in AWS S3 and change the state.tf file to configure Terraform to use the created bucket for remote state.

# Create bucket and configure remote state
BUCKET_NAME=terraform-state-kubestack-`git rev-parse --short HEAD`
REGION=`aws configure get region`
# Note: in us-east-1, omit the --create-bucket-configuration option
aws s3api create-bucket --bucket $BUCKET_NAME --region $REGION --create-bucket-configuration LocationConstraint=$REGION --acl private
aws s3api put-bucket-versioning --bucket $BUCKET_NAME --region $REGION --versioning-configuration Status=Enabled
aws s3api put-bucket-encryption --bucket $BUCKET_NAME --region $REGION --server-side-encryption-configuration '{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "AES256"
      }
    }
  ]
}'
cat > state.tf <<EOF
terraform {
  backend "s3" {
    bucket = "${BUCKET_NAME}"
    region = "${REGION}"
    key    = "tfstate"
  }
}
EOF
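
Because the heredoc delimiter `EOF` is unquoted, the shell expands `${BUCKET_NAME}` and `${REGION}` while writing the file, so state.tf ends up with the literal values, not the variable names. A minimal illustration with a fake value and a temporary file:

```shell
# Fake value; expansion happens at write time because EOF is unquoted
BUCKET_NAME=terraform-state-kubestack-abc1234
cat > /tmp/state.tf <<EOF
terraform {
  backend "s3" {
    bucket = "${BUCKET_NAME}"
  }
}
EOF
# The generated file contains the expanded bucket name
grep 'terraform-state-kubestack-abc1234' /tmp/state.tf
```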

Run the following commands to create a storage container in Azure blob storage and change the state.tf file to configure Terraform to use the created storage container for remote state.

# Create storage container and configure remote state
STORAGE_CONTAINER=terraform-state-kubestack-`git rev-parse --short HEAD`
az storage container create --name $STORAGE_CONTAINER --account-name $STORAGE_ACCOUNT
cat > state.tf <<EOF
terraform {
  backend "azurerm" {
    storage_account_name = "${STORAGE_ACCOUNT}"
    container_name       = "${STORAGE_CONTAINER}"
    key                  = "tfstate"
  }
}
EOF

Run the following commands to create a bucket in Google cloud storage and change the state.tf file to configure Terraform to use the created bucket for remote state.

# Set the location of your multi-regional bucket
# valid values are `asia`, `eu` or `us`
read -p "Bucket location: " LOCATION
# Create bucket and configure remote state
BUCKET_NAME=terraform-state-kubestack-`git rev-parse --short HEAD`
gsutil mb -l $LOCATION gs://$BUCKET_NAME
gsutil versioning set on gs://$BUCKET_NAME
cat > state.tf <<EOF
terraform {
  backend "gcs" {
    bucket = "${BUCKET_NAME}"
  }
}
EOF

Bootstrap your infrastructure

With the remote state set up, initialize Terraform.

# Initialize Terraform
terraform init

Terraform workspaces

Then create Terraform workspaces to match your infrastructure environments. How many and which workspaces you need to create depends on your choices in the first step:

If you used the UI to design your platform, you were able to choose between shared workload clusters and separate workload clusters. The CLI only scaffolds shared workload clusters.

# if using shared workload clusters, with the default names ops and apps
terraform workspace new apps
terraform workspace new ops

# if using separate workload clusters, with the default names ops, apps and apps-prod
terraform workspace new apps-prod
terraform workspace new apps
terraform workspace new ops

Terraform apply

Now, with the workspaces created, provision the environments. Provision two or three environments, depending on your platform design. If you changed the workspace names, adapt the commands accordingly.

# if using shared workload clusters
# Bootstrap the ops environment
terraform workspace select ops
terraform apply --auto-approve
# Bootstrap the apps environment
terraform workspace select apps
terraform apply --auto-approve

# if using separate workload clusters
# Bootstrap the ops environment
terraform workspace select ops
terraform apply --auto-approve
# Bootstrap the apps environment
terraform workspace select apps
terraform apply --auto-approve
# Bootstrap the apps-prod environment
terraform workspace select apps-prod
terraform apply --auto-approve

Commit changes

# Exit the bootstrap container
exit
# Commit the configuration
git add .
git commit -m "Add remote state configuration"

Recap

You now have the GitOps repository in place. You provisioned prerequisites like authentication credentials and the remote state. Finally, you also bootstrapped both the ops and apps infrastructure environments and configured DNS for each.

Continue with the next part of the tutorial to add the automation.