Set up Automation

Overview

To set up the automation pipeline we need to:

  1. Push your repository.
  2. Set up pipeline credentials.
  3. Add the pipeline file.
  4. Follow the GitOps process.

The step-by-step instructions use GitHub. But you can use other Git hosting and CI/CD providers too; the required steps are similar.

Example pipelines for other CI/CD systems are available on GitHub.

Push your repository

  1. Create the remote repository

    Create a new repository and give it a descriptive name, for example infrastructure-automation.

    Screenshot: GitHub create new repository

  2. Push the repository

    Follow GitHub's instructions for the second option to push an existing repository from the command line, as sketched below.

    Screenshot: GitHub push an existing repository
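
    For reference, the commands behind that second option typically look like the sketch below. The repository path is a placeholder; GitHub shows the exact commands for your repository.

    # add the new GitHub repository as the origin remote (placeholder path)
    git remote add origin git@github.com:<your-account>/infrastructure-automation.git
    # push the local repository and set the upstream for the main branch
    git push -u origin main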

Set up pipeline credentials

We need to provide the credentials for the service accounts we created in the previous step to our pipeline. Again, follow the steps for each cloud provider you're using.

Amazon AWS (the steps for Azure and Google are equivalent, using the KBST_AUTH_AZ and KBST_AUTH_GCLOUD secrets)
  1. Encode the AWS credentials file using base64.

    cat ~/.aws/credentials | base64 -w 0 && echo
  2. Find secrets under your repository settings and add a secret named KBST_AUTH_AWS with the base64-encoded credentials from the previous step as the value, or use the GitHub CLI as sketched below.

    Screenshot: GitHub add action secret
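
    If you prefer the command line over the web UI, the GitHub CLI can set the secret directly. This is a minimal sketch, assuming gh is installed and authenticated for your repository:

    # base64 encode the AWS credentials file and store it
    # as the KBST_AUTH_AWS repository secret
    gh secret set KBST_AUTH_AWS --body "$(base64 -w 0 < ~/.aws/credentials)"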

Add the pipeline file

  1. First create a new branch to work in.

    git checkout -b ghactions
  2. Add the pipeline as .github/workflows/main.yaml.

    mkdir -p .github/workflows
    cat > .github/workflows/main.yaml <<'EOF'
    name: deploy
    on:
      push:
        branches:
        - "**" # run for branches
        tags:
        - "*" # run for tags
    jobs:
      deploy:
        runs-on: ubuntu-latest
        strategy:
          fail-fast: false
          matrix:
            environment:
            - ops
            - apps
            # - apps-prod # uncomment if you have three environments
        concurrency:
          group: terraform-${{ matrix.environment }}
          cancel-in-progress: false
        env:
          KBST_DOCKER_ARGS: --rm -v ${{ github.workspace }}:/infra -e AWS_EC2_METADATA_DISABLED=true -e TF_IN_AUTOMATION=true
          KBST_DOCKER_IMAGE: kbst:${{ github.sha }}
        steps:
        - uses: actions/checkout@v3
        #
        #
        # Build image
        - name: Build image
          env:
            DOCKER_BUILDKIT: 1
          run: docker build -t $KBST_DOCKER_IMAGE .
        #
        #
        # Terraform init
        - name: Terraform init
          env:
            KBST_AUTH_AWS: ${{ secrets.KBST_AUTH_AWS }}
            KBST_AUTH_AZ: ${{ secrets.KBST_AUTH_AZ }}
            KBST_AUTH_GCLOUD: ${{ secrets.KBST_AUTH_GCLOUD }}
          run: |
            docker run \
              $KBST_DOCKER_ARGS \
              -e KBST_AUTH_AWS \
              -e KBST_AUTH_AZ \
              -e KBST_AUTH_GCLOUD \
              $KBST_DOCKER_IMAGE \
              terraform init
        #
        #
        # Select workspace based on matrix environment
        - name: Select ${{ matrix.environment }} workspace
          run: |
            docker run \
              $KBST_DOCKER_ARGS \
              $KBST_DOCKER_IMAGE \
              terraform workspace select ${{ matrix.environment }}
        #
        #
        # Terraform plan against current workspace
        - name: Terraform plan
          run: |
            docker run \
              $KBST_DOCKER_ARGS \
              $KBST_DOCKER_IMAGE \
              terraform plan --out=tfplan --input=false
        #
        #
        # Terraform apply against current workspace
        # if trigger matches environment
        - name: Terraform apply
          if: |
            (github.ref == 'refs/heads/main' && matrix.environment == 'ops') ||
            (startsWith(github.ref, 'refs/tags/apps-deploy-') && matrix.environment == 'apps') ||
            (startsWith(github.ref, 'refs/tags/apps-prod-deploy-') && matrix.environment == 'apps-prod')
          run: |
            docker run \
              $KBST_DOCKER_ARGS \
              $KBST_DOCKER_IMAGE \
              terraform apply --input=false tfplan
    EOF

You may have noticed that the example pipeline only handles the ops and apps environments. If you also have the apps-prod environment, you must uncomment the respective line under jobs.deploy.strategy.matrix.environment. And if you changed the environment names, now is the time to adjust the pipeline to match.
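
With all three environments, the matrix section under jobs.deploy would look like this sketch:

        strategy:
          fail-fast: false
          matrix:
            environment:
            - ops
            - apps
            - apps-prod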

Follow the GitOps process

  1. Add, commit and push the pipeline.

    git add .
    git commit -m "Add Github Actions pipeline"
    git push origin ghactions
  2. Open a pull request.

    The git push to GitHub will return a convenient link to create a new pull request. Go ahead and open a pull request.

    # [...]
    remote:
    remote: Create a pull request for 'ghactions' on GitHub by visiting:
    remote: https://github.com/pst/infrastructure-automation/pull/new/ghactions
    remote:
    To github.com:pst/infrastructure-automation.git
    * [new branch] ghactions -> ghactions
  3. Check the pipeline run.

    The pipeline run for the ghactions branch does not apply changes. It only provides the output of terraform plan against the ops workspace to determine what changes will be applied once merged.

    Screenshot: GitHub feature branch pipeline run

    Since we already bootstrapped the clusters, the pipeline at this point has no planned changes.

  4. Merge the pull request to apply changes to ops.

    Merge the pull request into main. The pipeline applies changes against the ops workspace on every commit to main.

    Screenshot: GitHub main branch pipeline run

  5. Finally, set a tag to apply changes to apps.

    # checkout the main branch
    git checkout main
    # pull changes from origin
    git pull
    # tag the merge commit
    git tag apps-deploy-0
    # push the tag to origin to trigger the pipeline
    git push origin apps-deploy-0

    Screenshot: GitHub tag pipeline run
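
    If you also use the apps-prod environment, the same tag based flow promotes changes there. A minimal sketch, assuming the apps-prod matrix entry is uncommented:

    # tag the commit that should be applied to apps-prod
    git tag apps-prod-deploy-0
    # push the tag to origin to trigger the pipeline
    git push origin apps-prod-deploy-0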

Compare the output of the three pipeline runs. You will see how the pipeline behaves differently when triggered from a feature branch, from the main branch, or from a tag.
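
If you prefer the terminal for this, the GitHub CLI can list and watch runs. A minimal sketch, assuming gh is installed and authenticated:

    # list recent runs of the deploy workflow
    gh run list --workflow deploy
    # interactively select a run and follow it until it completes
    gh run watch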

For more details refer to the GitOps Flow making changes section.

Recap

To recap:

  • You bootstrapped a local repository.
  • You created prerequisites like the Terraform remote state and the identity for the automation runs.
  • You provisioned the ops, apps and potentially apps-prod infrastructure environments.
  • You linked your repository to trigger automated pipeline runs.

Congratulations, you now have fully GitOps automated Kubernetes infrastructure.

Next steps

With your clusters provisioned, it's time to start building out your platform features. Kubestack has a number of step-by-step guides for common tasks using Kubestack's integrated platform service modules.

Community Help: If you have any questions while following these guides, join the #kubestack channel in the Kubernetes community Slack. To create an account, request an invitation.

Kubestack Guides

Combined, the DNS, Nginx ingress and Cert-Manager guides allow you to expose applications outside the cluster with certificates automatically issued by Let's Encrypt. They also show how cluster infrastructure and platform service modules integrate with each other, allowing you to maintain your entire platform stack from a unified Terraform code base.

  1. Set up DNS
  2. Provision Nginx Ingress controller
  3. Install Cert-Manager and configure Let's Encrypt