Automatic PostgreSQL backups to AWS S3 (Kubernetes CronJob)

July 1, 2024

I am not going to spend too much time explaining how important database backups are; I assume we are past that point since you are already here, reading this. But let's start with the saying:

Your data is only as good as your last backup, and your backup is only as good as your ability to restore it.

Here at WebGazer, we work hard to squeeze the most valuable reports out of the data we collect. In addition to downtime notifications, we try to provide actionable insights that help our customers improve their infrastructure proactively. That's why we pay great attention to the integrity of our database. But every once in a while something goes south: a data migration causes some data loss, or an unforeseen edge case in an SQL query breaks the data. I am not going to say those times are easy. I still get anxious doing anything manual on the production database, but backups, at least, ease the pain.

Anyway, let's get to the real stuff. In this tutorial, we are going to back up our PostgreSQL database, and upload the backup to AWS S3 every morning at 07:00. We are going to need:

  • A PostgreSQL DBMS running on a Kubernetes cluster (duh!)
  • An AWS S3 bucket (or something compatible)

Kubernetes CronJob

"cron" is originally a command line job scheduling utility developed in 1975 by AT&T Bell Laboratories. But ever since, "cronjob" is the common name for scheduled, periodic jobs in computer science terminology. Kubernetes has a workload type called CronJob that serves the very same purpose, and it is generally available since v1.21 (April 2021). That is what we are going to use.

postgres-s3-backup docker image

I created a docker image that uses pg_dump to create a backup of a PostgreSQL database and uploads it to S3 using rclone. The source code for this image is publicly available, and the docker image is on Docker Hub as th0th/postgres-s3-backup.
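To make what the image does more concrete, here is a simplified sketch of the backup step, not the image's exact script. The file name pattern, the rclone remote name, and the bucket placeholder are illustrative assumptions:

```shell
# Simplified sketch of what the backup image does (illustrative, not its exact script).
# Assumes the POSTGRES_* environment variables and an rclone S3 remote are configured.
BACKUP_FILE="postgres_$(date +%Y-%m-%d_%H-%M-%S).sql.gz"

# Dump the database and compress it on the fly.
pg_dump --host "$POSTGRES_HOST" --username "$POSTGRES_USER" "$POSTGRES_DB" \
  | gzip > "$BACKUP_FILE"

# Upload the compressed dump to S3 (remote and bucket names are placeholders).
rclone copyto "$BACKUP_FILE" "s3:<BUCKET>/backups/$BACKUP_FILE"
```

The compression step keeps dumps small in transit and in the bucket; gzipped SQL dumps typically shrink considerably.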

This image uses environment variables for configuration; you can see all configuration options in the README. Here is an example Kubernetes CronJob resource definition that uses this image (you need to replace the placeholders indicated between < >):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup
  namespace: <NAMESPACE>
spec:
  schedule: "0 7 * * *" # runs at 07:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - env:
                - name: AWS_ACCESS_KEY_ID
                  value: <AWS_ACCESS_KEY_ID>
                - name: AWS_REGION
                  value: <AWS_REGION>
                - name: AWS_S3_ENDPOINT
                  value: <AWS_S3_ENDPOINT> # this is the path in the bucket (e.g. backups/postgres)
                - name: AWS_SECRET_ACCESS_KEY
                  value: <AWS_SECRET_ACCESS_KEY>
                - name: POSTGRES_DB
                  value: <POSTGRES_DB>
                - name: POSTGRES_HOST
                  value: <POSTGRES_HOST> # hostname of the PostgreSQL service. Defaults to "postgres" if not set explicitly.
                - name: POSTGRES_PASSWORD
                  value: <POSTGRES_PASSWORD>
                - name: POSTGRES_PORT
                  value: <POSTGRES_PORT> # defaults to "5432" if not set explicitly
                - name: POSTGRES_USER
                  value: <POSTGRES_USER> # defaults to "postgres" if not set explicitly
                - name: POSTGRES_VERSION
                  value: <POSTGRES_VERSION> # version of the PostgreSQL server: "14", "15" or "16". Defaults to "16".
                # - name: WEBGAZER_HEARTBEAT_URL
                #   value: <WEBGAZER_HEARTBEAT_MONITOR_URL> # we will talk about this in the BONUS section below
              image: th0th/postgres-s3-backup:0.2
              name: postgres-backup
          restartPolicy: OnFailure # Job pod templates must set OnFailure or Never

BONUS: Monitoring with WebGazer Heartbeat Monitoring

WebGazer offers heartbeat monitoring for making sure cron jobs like this run as they are supposed to. It works like this: you set up a heartbeat monitor on WebGazer and have the cron job send an HTTP request to that monitor's URL every time it runs. If the monitor doesn't receive a request in time, it alerts you, so you know when a database backup hasn't completed by the time it should have.

The docker image I mentioned has this built in: just uncomment the two lines near the end of the manifest, set WEBGAZER_HEARTBEAT_URL to the heartbeat monitor URL you get from WebGazer, and you are good to go.


Additionally, when WEBGAZER_HEARTBEAT_URL is set, the docker image reports a heartbeat parameter called seconds: the total number of seconds it took to dump the database and upload the backup to S3. You can add a rule based on that parameter to the heartbeat monitor, too.
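Conceptually, the heartbeat reporting boils down to something like the sketch below. The exact mechanism is the image's, not mine; in particular, passing seconds as a query-string parameter is an assumption on my part, so check the image's README for the real behavior:

```shell
# Sketch of heartbeat reporting around the backup (mechanism assumed, see README).
START=$(date +%s)

# ... pg_dump and the S3 upload happen here ...

# Report success and the elapsed time; passing "seconds" as a query
# parameter is an assumption about the heartbeat URL's format.
ELAPSED=$(( $(date +%s) - START ))
curl -fsS "${WEBGAZER_HEARTBEAT_URL}?seconds=${ELAPSED}"
```

Because the request is sent only after the upload finishes, a hung or failed backup never pings the monitor, which is exactly what triggers the alert.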