Automating Docker PostgreSQL Backups with Cron: Complete Guide
Eliminate manual backups with automated cron jobs. Learn scheduling strategies, environment setup, monitoring, alerting, troubleshooting, and complete working examples for production-grade PostgreSQL backup automation.
I wrote a backup script and thought I was done. For three months, I felt secure—backups automated, retention policies in place, checksums verified. Then I realized: I had never actually run the backup. The script was perfect, but cron had never executed it. A syntax error in the crontab meant zero backups were happening. That's when I learned that automation means nothing without verification.
This guide covers the complete automation strategy I use for PostgreSQL backups in Docker. It's not just about scheduling—it's about monitoring, alerting, troubleshooting, and proving that backups are actually running. This is infrastructure you can trust.
Why Automate?
Manual backups fail for obvious reasons:
- You forget to run them
- You run them inconsistently
- You don't verify they worked
- No audit trail of when backups ran
Automation solves these problems:
- ✓ Consistent schedule (daily, hourly, or custom intervals)
- ✓ Hands-free operation (backups run while you sleep)
- ✓ Compliance auditing (proof of regular backups)
- ✓ Alerting on failure (know immediately when something breaks)
- ✓ Monitoring integration (track backup metrics)
Production pattern: 6 backups/day × 2-day retention = 12 backups on disk at any time, with automatic cleanup.
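The retention arithmetic above can be sketched as a cleanup pass. A minimal sketch, assuming one dated YYYY-MM-DD directory per backup run under /var/backups/postgresql (the path, layout, and `prune_old_backups` helper are illustrative, not part of the backup script itself):

```shell
#!/usr/bin/env bash
# Sketch: delete dated backup directories older than a retention window.
# Note: find's -mtime +N matches entries strictly older than N+1 whole days.
prune_old_backups() {
  local root="$1" days="$2"
  [ -d "$root" ] || return 0   # nothing to do if the backup root is absent
  find "$root" -maxdepth 1 -type d -name "????-??-??" \
    -mtime +"$days" -exec rm -rf {} +
}

# Example: keep roughly 2 days of backups
prune_old_backups /var/backups/postgresql 2
```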
Prerequisites
Before automating backups:
- ✓ Backup script is working (./backup.sh my-postgres runs successfully)
- ✓ PostgreSQL container is running and stable
- ✓ You have sudo or root access (required for /etc/cron.d/)
- ✓ You know your container name
- ✓ Disk space is sufficient for retention period
- ✓ Docker socket permissions allow the backup user to run docker exec
Cron Basics: The 5-Field Schedule
If you're new to cron, here's the essential format:
┌───────────── minute (0–59)
│ ┌───────────── hour (0–23)
│ │ ┌───────────── day of month (1–31)
│ │ │ ┌───────────── month (1–12)
│ │ │ │ ┌───────────── day of week (0–6, Sunday=0)
│ │ │ │ │
* * * * * command
Common schedules:
- 0 * * * * — Every hour
- 0 2 * * * — Daily at 2 AM
- 0 */4 * * * — Every 4 hours
- 0 4 * * 0 — Weekly on Sunday at 4 AM
- 0 3 * * 1-5 — Weekdays at 3 AM
Why /etc/cron.d/ Over User Crontab
I use /etc/cron.d/ for infrastructure automation instead of user crontab:
| Aspect | /etc/cron.d/ | User Crontab |
|---|---|---|
| Persistence | Survives daemon restarts | May be affected by updates |
| File Management | One file per service | Single user file for all jobs |
| Version Control | Easy to track in git | Difficult to manage |
| Permissions | Standard file permissions | Managed by crontab command |
| Running as | Any user (specified in file) | Current user only |
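One caveat with /etc/cron.d/: on Debian-style cron, files there must be owned by root and must not be writable by group or other, or cron skips them. A quick check (`check_crond_file` is a helper written for this guide, not a system tool):

```shell
#!/usr/bin/env bash
# Sanity-check ownership and permissions of an /etc/cron.d/ file.
# Debian-style cron ignores files that are group/other-writable or not root-owned.
check_crond_file() {
  local f="$1" owner perms
  [ -f "$f" ] || { echo "skip: $f not found"; return 0; }
  owner=$(stat -c '%U' "$f")
  perms=$(stat -c '%a' "$f")
  [ "$owner" = root ] || echo "WARN: $f owned by $owner, not root"
  case "$perms" in
    600|640|644) echo "OK: permissions $perms" ;;
    *) echo "WARN: permissions $perms (expected 644 or stricter)" ;;
  esac
}

check_crond_file /etc/cron.d/docker-postgresql-backup
```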
Step 1: Basic Cron Setup
Create a cron job file in /etc/cron.d/ for your backup schedule.
# Create backup cron job
sudo tee /etc/cron.d/docker-postgresql-backup << 'EOF'
# PostgreSQL Docker Backup Schedule
# Backup every 4 hours (00:00, 04:00, 08:00, 12:00, 16:00, 20:00 UTC)
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/root
CRON_TZ=UTC
# Replace:
# - CONTAINER_NAME with your PostgreSQL container name
# - /path/to/scripts with your backup script location
0 */4 * * * root /path/to/scripts/backup.sh CONTAINER_NAME >> /var/log/postgresql-backup.log 2>&1
EOF
# Verify file was created
cat /etc/cron.d/docker-postgresql-backup
# Test cron daemon recognizes the file
sudo systemctl restart cron

Critical details:
- SHELL=/bin/bash - Use bash, not /bin/sh
- PATH - Explicit full path (cron uses minimal PATH)
- HOME=/root - Set home directory
- CRON_TZ=UTC - Ensure consistent timezone
- >> /var/log/postgresql-backup.log 2>&1 - Capture both stdout and stderr
- Trailing newline - File must end with newline or last line is ignored
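The trailing-newline pitfall is easy to test for: command substitution strips a final \n, so an empty `tail -c1` result means the file ends correctly (`check_trailing_newline` is a helper written for illustration):

```shell
#!/usr/bin/env bash
# Warn if a cron file lacks a terminating newline; cron silently
# ignores a final line that is not newline-terminated.
check_trailing_newline() {
  local f="$1"
  [ -f "$f" ] || { echo "skip: $f not found"; return 0; }
  if [ -z "$(tail -c1 "$f")" ]; then
    echo "OK: $f ends with a newline"
  else
    echo "MISSING trailing newline: $f"
    return 1
  fi
}

check_trailing_newline /etc/cron.d/docker-postgresql-backup
```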
Step 2: Environment Configuration
Cron runs with minimal environment. Configure variables explicitly.
# Complete environment setup for /etc/cron.d/
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/root
LOGNAME=root
USER=root
CRON_TZ=UTC
# Database configuration (if needed)
# Uncomment and adjust to your setup:
# PG_USER=postgres
# PG_DEFAULT_DB=postgres
# BACKUP_RETENTION_DAYS=2

Docker-specific concerns:
If your backup script uses Docker, ensure cron user can access the Docker socket:
# For root cron jobs (automatic - root already has access)
# For non-root users, add to docker group:
sudo usermod -aG docker username
# Verify Docker is accessible
docker ps
# Test in cron-like environment
env -i HOME=/root PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin \
  SHELL=/bin/bash docker ps

Step 3: Logging with Timestamps
Simple logging captures backup output, but adding timestamps makes it easier to trace issues.
Simple logging:
0 */4 * * * root /path/to/scripts/backup.sh my-postgres >> /var/log/postgresql-backup.log 2>&1

With timestamps:
0 */4 * * * root /path/to/scripts/backup.sh my-postgres 2>&1 | awk '{print strftime("%Y-%m-%d %H:%M:%S") " " $0}' >> /var/log/postgresql-backup.log

With log rotation built-in:
Create a wrapper script that handles log rotation:
#!/usr/bin/env bash
# scripts/backup/backup-wrapper.sh
#
# Wrapper for backup script with log rotation and timestamped output
LOG_FILE="/var/log/postgresql-backup.log"
MAX_SIZE=10485760 # 10MB
# Rotate log if it exceeds max size
if [[ -f "$LOG_FILE" ]] && [[ $(stat -c%s "$LOG_FILE") -gt $MAX_SIZE ]]; then
    mv "$LOG_FILE" "${LOG_FILE}.$(date +%Y%m%d-%H%M%S)"
    touch "$LOG_FILE"
fi
# Timestamp every line of output (stdout and stderr) and append to the log
{
    echo "Starting backup..."
    /path/to/scripts/backup.sh "$@"
    echo "Backup completed"
} 2>&1 | awk '{print "[" strftime("%Y-%m-%d %H:%M:%S") "] " $0}' >> "$LOG_FILE"

Make it executable:
chmod +x scripts/backup/backup-wrapper.sh

Step 4: Complete Cron Configuration
Here's a comprehensive cron setup with multiple backup schedules:
# /etc/cron.d/docker-postgresql-backup
# PostgreSQL Docker Backup Automation
# Configure container names and paths for your environment
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/root
LOGNAME=root
USER=root
CRON_TZ=UTC
# Replace:
# - CONTAINER_NAME with your actual PostgreSQL container name
# - /path/to/scripts with your backup script location
# Backup every 4 hours (00:00, 04:00, 08:00, 12:00, 16:00, 20:00 UTC)
0 */4 * * * root /path/to/scripts/backup.sh CONTAINER_NAME >> /var/log/postgresql-backup.log 2>&1
# Weekly verification on Sunday at 4:00 AM UTC
0 4 * * 0 root /path/to/scripts/backup.sh CONTAINER_NAME --verify-only >> /var/log/postgresql-backup-verify.log 2>&1
# Weekly list of available backups (for monitoring)
0 5 * * 0 root /path/to/scripts/backup.sh CONTAINER_NAME --list >> /var/log/postgresql-backup-list.log 2>&1

Apply the configuration:
sudo systemctl restart cron
sudo systemctl status cron

Step 5: Monitoring Backup Freshness
Verify that backups are actually running and recent.
#!/usr/bin/env bash
# scripts/monitoring/check-backup-freshness.sh
#
# Monitor backup freshness and alert if no recent backups
BACKUP_ROOT="/var/backups/postgresql"
THRESHOLD_HOURS=6
# Find backups created in the last N hours
find_recent_backups() {
local threshold_minutes=$((THRESHOLD_HOURS * 60))
find "$BACKUP_ROOT" -maxdepth 1 -type d -name "????-??-??" \
-mmin -"$threshold_minutes" 2>/dev/null | wc -l
}
# Check backup freshness
check_backup_freshness() {
local recent=$(find_recent_backups)
if [[ $recent -gt 0 ]]; then
echo "✓ Recent backups found: $recent"
return 0
else
echo "✗ ALERT: No backups within last $THRESHOLD_HOURS hours"
return 1
fi
}
# Count total backups
count_total_backups() {
find "$BACKUP_ROOT" -maxdepth 1 -type d -name "????-??-??" 2>/dev/null | wc -l
}
# Main
echo "[$(date '+%Y-%m-%d %H:%M:%S')] Backup freshness check"
echo "Threshold: $THRESHOLD_HOURS hours"
if check_backup_freshness; then
total=$(count_total_backups)
echo "Total backups on disk: $total"
else
total=$(count_total_backups)
echo "Total backups on disk: $total (may be old)"
exit 1
fi

Add to cron for daily verification:
# Add to /etc/cron.d/docker-postgresql-backup
30 2 * * * root /path/to/scripts/monitoring/check-backup-freshness.sh

Step 6: Alerting on Backup Failures
Configure alerts when backups fail.
Simple email alert on failure:
# Note: cron does not support backslash line continuation -- keep the entry on one line
0 */4 * * * root /path/to/scripts/backup.sh my-postgres >> /var/log/postgresql-backup.log 2>&1 || echo "PostgreSQL backup failed on $(hostname) at $(date)" | mail -s "ALERT: Backup Failed" admin@example.com

Comprehensive alerting script:
#!/usr/bin/env bash
# scripts/alerting/alert-backup-failure.sh
ALERT_WEBHOOK="${SLACK_WEBHOOK_URL:-}"
ALERT_EMAIL="${ALERT_EMAIL:-admin@example.com}"
send_alert() {
local severity="$1"
local message="$2"
# Log to syslog (always)
logger -t "pg-backup-alert" -p "cron.${severity}" "$message"
# Email alert
if [[ -n "$ALERT_EMAIL" ]]; then
echo "$message" | mail -s "[$severity] PostgreSQL Backup Alert" "$ALERT_EMAIL"
fi
# Slack webhook
if [[ -n "$ALERT_WEBHOOK" ]]; then
curl -X POST "$ALERT_WEBHOOK" \
-H 'Content-Type: application/json' \
-d "{\"text\": \"[$severity] $message\"}" 2>/dev/null || true
fi
}
# Check last backup
BACKUP_ROOT="/var/backups/postgresql"
LATEST_BACKUP=$(find "$BACKUP_ROOT" -maxdepth 1 -type d -name "????-??-??" | sort -r | head -1)
if [[ -z "$LATEST_BACKUP" ]]; then
send_alert "critical" "No backups found in $BACKUP_ROOT"
exit 1
fi
# Check backup age
BACKUP_TIME=$(stat -c %Y "$LATEST_BACKUP")
CURRENT_TIME=$(date +%s)
BACKUP_AGE_HOURS=$(((CURRENT_TIME - BACKUP_TIME) / 3600))
if [[ $BACKUP_AGE_HOURS -gt 24 ]]; then
send_alert "warning" "Last backup is older than 24 hours ($BACKUP_AGE_HOURS hours)"
fi
if [[ $BACKUP_AGE_HOURS -gt 48 ]]; then
send_alert "critical" "Last backup is older than 48 hours ($BACKUP_AGE_HOURS hours)"
fi
exit 0

Add to cron:
# Check backup health every 6 hours
0 */6 * * * root /path/to/scripts/alerting/alert-backup-failure.sh

Step 7: Timezone Handling
Correct timezone handling ensures backups run when you expect.
UTC-based (recommended for distributed systems):
CRON_TZ=UTC
# This runs at these UTC times: 00:00, 04:00, 08:00, 12:00, 16:00, 20:00
0 */4 * * * root /path/to/scripts/backup.sh my-postgres >> /var/log/postgresql-backup.log 2>&1

Local timezone (for business-hour alignment):
CRON_TZ=America/New_York
# This runs at these New York times: midnight, 4 AM, 8 AM, 12 PM, 4 PM, 8 PM
0 */4 * * * root /path/to/scripts/backup.sh my-postgres >> /var/log/postgresql-backup.log 2>&1

Always use UTC for backup timestamps:
# In your backup script, always generate UTC timestamps
TIMESTAMP=$(date -u +%Y-%m-%dT%H%M%S)

Step 8: Testing and Validation
Before trusting automation, test it thoroughly.
Run in cron-like environment:
#!/usr/bin/env bash
# Test script execution in minimal cron environment
env -i \
HOME=/root \
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin \
SHELL=/bin/bash \
/path/to/scripts/backup.sh my-postgres --dry-run
echo "Exit code: $?"

Verify the cron file:
# Cron has no built-in syntax checker for /etc/cron.d/ files;
# review the file manually and watch the logs below after a scheduled run
sudo cat /etc/cron.d/docker-postgresql-backup
# Check cron daemon status
sudo systemctl status cron
# View cron logs
sudo journalctl -u cron -n 50 --no-pager
# Alternative: check syslog
sudo grep CRON /var/log/syslog | tail -20

Manual test of scheduled job:
# Run the exact command from cron manually
/path/to/scripts/backup.sh my-postgres >> /var/log/postgresql-backup.log 2>&1
# Check exit code
echo "Exit code: $?"
# Verify log output
tail -50 /var/log/postgresql-backup.log

Troubleshooting Common Issues
| Issue | Cause | Solution |
|---|---|---|
| "Command not found" | PATH missing required binary | Add full path to command or expand PATH in cron |
| Docker exec fails | Socket permission denied | Verify root or add user to docker group |
| Env vars not set | Not defined in /etc/cron.d/ | Define explicitly in cron file header section |
| No log output | Redirect misconfigured | Use >> /path 2>&1 for both stdout and stderr |
| Works manually, fails in cron | Environment difference | Test with env -i simulation |
| Overlapping backups | Previous backup slow | Add lock file at script start; fail if lock exists |
| Cron daemon not running | Service stopped | sudo systemctl restart cron |
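For the overlapping-backups row, flock(1) from util-linux is simpler than a hand-rolled lock file. A sketch, with `run_locked` as an illustrative helper and the lock path as a placeholder:

```shell
#!/usr/bin/env bash
# Run a command under an exclusive lock; if a previous run still holds
# the lock, skip instead of starting a second overlapping backup.
run_locked() {
  local lock="$1"; shift
  (
    # Open the lock file on fd 9 inside a subshell; the lock is
    # released automatically when the subshell exits.
    exec 9>"$lock"
    flock -n 9 || { echo "lock busy: $lock" >&2; exit 1; }
    "$@"
  )
}

run_locked /tmp/pg-backup-demo.lock echo "backup would run here"
```

In a cron entry you can also call flock directly, e.g. `flock -n /var/lock/pg-backup.lock /path/to/scripts/backup.sh CONTAINER_NAME`.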
Emergency debugging:
# View all cron logs
sudo journalctl -u cron -n 100
# Check system cron logs
sudo grep cron /var/log/syslog | tail -20
# Run with debug output
sudo bash -x /path/to/scripts/backup.sh my-postgres 2>&1 | tee /tmp/debug.log
# Check cron job actually exists
sudo cat /etc/cron.d/docker-postgresql-backup
# Check that the cron daemon process is running
ps aux | grep '[c]ron'

Production Monitoring Dashboard
Create a simple monitoring script that reports backup status:
#!/usr/bin/env bash
# scripts/monitoring/backup-dashboard.sh
#
# Simple backup monitoring dashboard
BACKUP_ROOT="/var/backups/postgresql"
echo "========================================="
echo "PostgreSQL Backup Status"
echo "========================================="
echo ""
# Count backups
TOTAL=$(find "$BACKUP_ROOT" -maxdepth 1 -type d -name "????-??-??" 2>/dev/null | wc -l)
echo "Total backups: $TOTAL"
echo ""
# Show recent backups
echo "Recent backups:"
find "$BACKUP_ROOT" -maxdepth 1 -type d -name "????-??-??" -printf '%T@ %p\n' 2>/dev/null | \
sort -rn | head -10 | while read -r time path; do
age=$(($(date +%s) - ${time%.*}))
age_hours=$((age / 3600))
age_days=$((age / 86400))
if [[ $age_days -gt 0 ]]; then
time_display="${age_days}d ago"
else
time_display="${age_hours}h ago"
fi
db_count=$(find "$path" -maxdepth 1 -type f -name "*.sql.gz" 2>/dev/null | wc -l)
total_size=$(du -sh "$path" 2>/dev/null | cut -f1)
echo " $(basename "$path") - $db_count databases, $total_size ($time_display)"
done
echo ""
echo "Latest backup status:"
LATEST=$(find "$BACKUP_ROOT" -maxdepth 1 -type d -name "????-??-??" | sort -r | head -1)
if [[ -n "$LATEST" ]]; then
LATEST_TIME=$(stat -c %Y "$LATEST")
CURRENT_TIME=$(date +%s)
AGE_HOURS=$(((CURRENT_TIME - LATEST_TIME) / 3600))
if [[ $AGE_HOURS -lt 6 ]]; then
echo "✓ Recent (${AGE_HOURS}h ago)"
elif [[ $AGE_HOURS -lt 24 ]]; then
echo "⚠ Aging (${AGE_HOURS}h ago)"
else
echo "✗ Old (${AGE_HOURS}h ago)"
fi
else
echo "✗ No backups found"
fi
echo ""
echo "========================================="

Run as a monitoring check:
/path/to/scripts/monitoring/backup-dashboard.sh

Complete Production Setup
Combine all elements into a production-ready automation setup:
# /etc/cron.d/docker-postgresql-backup
# Production PostgreSQL Docker Backup Automation
# Requires: /path/to/scripts/backup.sh (with --verify-only, --list options)
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/root
LOGNAME=root
USER=root
CRON_TZ=UTC
# Primary backup: every 4 hours
# Change CONTAINER_NAME and /path/to/scripts to your values
0 */4 * * * root /path/to/scripts/backup.sh CONTAINER_NAME >> /var/log/postgresql-backup.log 2>&1
# Verify last backup on Sundays
0 4 * * 0 root /path/to/scripts/backup.sh CONTAINER_NAME --verify-only >> /var/log/postgresql-backup-verify.log 2>&1
# List backups (for monitoring) on Sundays
0 5 * * 0 root /path/to/scripts/backup.sh CONTAINER_NAME --list >> /var/log/postgresql-backup-list.log 2>&1
# Check backup freshness every 6 hours
0 */6 * * * root /path/to/scripts/monitoring/check-backup-freshness.sh
# Dashboard check daily at 6 AM
0 6 * * * root /path/to/scripts/monitoring/backup-dashboard.sh >> /var/log/postgresql-backup-dashboard.log 2>&1

Key Takeaways
- Use /etc/cron.d/ for infrastructure - More reliable than user crontab
- Explicit environment - Cron has minimal environment; define everything
- Test before production - Run in cron-like environment first
- Monitor actively - Verify backups are actually running
- Alert on failure - Know immediately when backups stop working
- Verify periodically - Run checksums weekly to confirm backup integrity
- Timezone consistency - Use UTC for timestamps, explicit timezone for cron
Next Steps
- Read Part 1: Docker PostgreSQL Backup Strategies
- Read Part 2: Restoring Docker PostgreSQL Safely
- Deploy to production: Copy the complete cron configuration to your /etc/cron.d/
- Set up monitoring: Get alerts when backups fail
- Test a restore: Verify backups actually restore before you need them
Your backup system is now fully automated. Databases back themselves up, failed backups alert you immediately, and you can restore with confidence. That's infrastructure you can trust.