39 Commits
0.3.1 ... 0.4.0

Author SHA1 Message Date
Einar Forselv  151f3cfeeb  Update README.md  2019-12-08 15:13:06 +01:00
Einar Forselv  2848738789  Update README.md  2019-12-08 15:10:06 +01:00
Einar Forselv  cf402d77ed  Simplify README  2019-12-08 14:57:17 +01:00
Einar Forselv  ae835f30d3  Make crontab configurable  2019-12-08 06:38:56 +01:00
Einar Forselv  1e21ff422f  Detect if a repository should be initialized  2019-12-08 05:09:05 +01:00
    This is better than trying to initialize it every time
Einar Forselv  6347529701  Alert when nothing is found to back up + check repo after backup  2019-12-08 04:52:30 +01:00
Einar Forselv  3dacc0bfab  Update containers.py  2019-12-08 04:51:07 +01:00
Einar Forselv  fa14880742  Indicate we might want to run check with --with-cache  2019-12-08 04:50:33 +01:00
Einar Forselv  5eb773eb34  Fx broken backup process container test  2019-12-08 04:50:09 +01:00
Einar Forselv  e8123922df  Backup process label is now unique for each project  2019-12-08 03:51:57 +01:00
Einar Forselv  be74715595  Already running backup process container is an error message  2019-12-08 03:33:11 +01:00
Einar Forselv  515702ae78  Truncate field sizes in discord webhook  2019-12-08 03:32:42 +01:00
Einar Forselv  ff49d9c018  Bump version  2019-12-08 03:32:14 +01:00
Einar Forselv  187787425a  Sane default buffer size in backup_from_stdin  2019-12-08 01:16:10 +01:00
Einar Forselv  fa1c982bf5  pep8  2019-12-08 01:14:57 +01:00
Einar Forselv  2bbd329047  Alert when backup process is already running  2019-12-08 01:05:26 +01:00
Einar Forselv  8097ac79af  Remove stale backup process containers  2019-12-08 00:57:23 +01:00
Einar Forselv  5082244949  Gather stale backup process containers  2019-12-08 00:32:39 +01:00
Einar Forselv  d9e5a62458  List all containers including non-running ones  2019-12-08 00:26:02 +01:00
Einar Forselv  6085f5fc03  README: Docker version note  2019-12-08 00:10:40 +01:00
Einar Forselv  d89ed781ef  Comment on log streaming for docker ce 17 and 18  2019-12-08 00:08:41 +01:00
Einar Forselv  e2dec9ffa0  Move all labels into enums module  2019-12-07 23:59:27 +01:00
    This way we have more control over them
Einar Forselv  2b3a702f21  Mising docstrings  2019-12-07 09:19:16 +01:00
Einar Forselv  3456e1a899  Update README.md  2019-12-07 09:15:34 +01:00
Einar Forselv  105cdbb65e  Incorrect pip install in travis.yml  2019-12-07 09:12:01 +01:00
Einar Forselv  d671ffb626  Fix broken travis config  2019-12-07 09:05:48 +01:00
Einar Forselv  f988b42881  Create .travis.yml  2019-12-07 08:43:07 +01:00
Einar Forselv  5d653c2c3c  Clean up dockerignore  2019-12-07 08:39:05 +01:00
Einar Forselv  c80b2774d4  Better handling of stdout/stderr logging  2019-12-06 17:19:51 +01:00
Einar Forselv  12da998538  Incorrect requirements dir after moving test package  2019-12-06 17:19:06 +01:00
Einar Forselv  d17d776339  More comments in dev compose file  2019-12-06 10:53:51 +01:00
Einar Forselv  758c3075f1  Bump version  2019-12-06 09:09:05 +01:00
Einar Forselv  9cad6a5c71  Update cli.py  2019-12-06 09:08:48 +01:00
Einar Forselv  4ebe16af14  Properly get the container exit code  2019-12-06 08:21:21 +01:00
Einar Forselv  fd87ddc388  More logging during backup  2019-12-06 08:21:06 +01:00
Einar Forselv  2cbc5aa6fa  bump version  2019-12-06 07:36:30 +01:00
Einar Forselv  ffa2dfc119  rcb version  2019-12-06 07:35:14 +01:00
Einar Forselv  cfc92b2284  Bug: The log stream from docker can be str or bytes  2019-12-06 07:30:39 +01:00
    We don't know why this is the case..
Einar Forselv  216202dec7  Bump version  2019-12-06 06:07:03 +01:00
24 changed files with 419 additions and 200 deletions

3
.gitignore vendored
View File

@@ -21,9 +21,10 @@ restic_data/
restic_cache/ restic_cache/
alerts.env alerts.env
# docs # build
build/ build/
docs/_build docs/_build
dist
# tests # tests
.tox .tox

17
.travis.yml Normal file
View File

@@ -0,0 +1,17 @@
language: python
sudo: false
matrix:
include:
python: 3.7
dist: bionic
sudo: true
install:
- pip install -U setuptools pip wheel
- pip install -r src/tests/requirements.txt
- pip install ./src
script:
- tox

198
README.md
View File

@@ -3,7 +3,8 @@
![docs](https://readthedocs.org/projects/restic-compose-backup/badge/?version=latest) ![docs](https://readthedocs.org/projects/restic-compose-backup/badge/?version=latest)
Backup using https://restic.net/ for a docker-compose setup. Backup using [restic] for a docker-compose setup.
Currently tested with docker-ce 17, 18 and 19.
* [restic-compose-backup Documentation](https://restic-compose-backup.readthedocs.io) * [restic-compose-backup Documentation](https://restic-compose-backup.readthedocs.io)
* [restic-compose-backup on Github](https://github.com/ZettaIO/restic-compose-backup) * [restic-compose-backup on Github](https://github.com/ZettaIO/restic-compose-backup)
@@ -11,175 +12,145 @@ Backup using https://restic.net/ for a docker-compose setup.
Features: Features:
* Back up docker volumes or host binds * Backs up docker volumes or host binds
* Back up mariadb postgres * Backs up postgres, mariadb and mysql databases
* Back up mariadb databases * Notifications over mail/smtp or Discord webhooks
* Back up mysql databases
* Notifications over mail/smtp
* Notifications to Discord through webhooks
Please report issues on [github](https://github.com/ZettaIO/restic-compose-backup/issues). Please report issues on [github](https://github.com/ZettaIO/restic-compose-backup/issues).
Automatically detects and backs up volumes, mysql, mariadb and postgres databases in a docker-compose setup.
* Each service in the compose setup is configured with a label
to enable backup of volumes or databases
* When a backup starts, a new instance of the container is created,
mapping in all the needed volumes. It will copy networks etc.
to ensure databases can be reached
* Volumes are mounted to `/volumes/<service_name>/<path>`
in the backup process container. `/volumes` is pushed into restic
* Databases are backed up from stdin / dumps into restic using path `/databases/<service_name>/dump.sql`
* Cron triggers backup at 2AM every day
## Install ## Install
```bash ```bash
docker pull zettaio/restic-compose-backup docker pull zettaio/restic-compose-backup
``` ```
## Configuration ## Configuration (env vars)
Required env variables for restic: Minimum configuration
```bash ```bash
RESTIC_REPOSITORY RESTIC_REPOSITORY
RESTIC_PASSWORD RESTIC_PASSWORD
``` ```
Backend specific env vars : https://restic.readthedocs.io/en/stable/040_backup.html#environment-variables More config options can be found in the [documentation].
Additional env vars: Restic backend specific env vars : https://restic.readthedocs.io/en/stable/040_backup.html#environment-variables
## Compose Example
We simply control what should be backed up by adding
labels to our containers. More details are covered
in the [documentation].
restic-backup.env
```bash ```bash
# Prune rules RESTIC_REPOSITORY=<whatever backend restic supports>
RESTIC_PASSWORD=hopefullyasecturepw
# snapshot prune rules
RESTIC_KEEP_DAILY=7 RESTIC_KEEP_DAILY=7
RESTIC_KEEP_WEEKLY=4 RESTIC_KEEP_WEEKLY=4
RESTIC_KEEP_MONTHLY=12 RESTIC_KEEP_MONTHLY=12
RESTIC_KEEP_YEARLY=3 RESTIC_KEEP_YEARLY=3
# Cron schedule. Run every day at 1am
# Logging level (debug,info,warning,error) CRON_SCHEDULE="0 1 * * *"
LOG_LEVEL=info
# SMTP alerts
EMAIL_HOST=my.mail.host
EMAIL_PORT=465
EMAIL_HOST_USER=johndoe
EMAIL_HOST_PASSWORD=s3cr3tpassw0rd
EMAIL_SEND_TO=johndoe@gmail.com
# Discord webhook
DISCORD_WEBHOOK=https://discordapp.com/api/webhooks/...
``` ```
### Volumes docker-compose.yaml
```yaml ```yaml
version: '3' version: '3'
services: services:
# The backup service # The backup service
backup: backup:
image: zettaio/restic-compose-backup image: zettaio/restic-compose-backup:<version>
environment:
- RESTIC_REPOSITORY=<whatever restic supports>
- RESTIC_PASSWORD=hopefullyasecturepw
- RESTIC_KEEP_DAILY=7
- RESTIC_KEEP_WEEKLY=4
- RESTIC_KEEP_MONTHLY=12
- RESTIC_KEEP_YEARLY=3
env_file: env_file:
- some_other_vars.env - restic-backup.env
volumes: volumes:
# We need to communicate with docker
- /var/run/docker.sock:/tmp/docker.sock:ro - /var/run/docker.sock:/tmp/docker.sock:ro
# Persistent storage of restic cache (greatly speeds up all restic operations)
example: - cache:/cache
web:
image: some_image image: some_image
# Enable volume backup with label
labels: labels:
# Enables backup of the volumes below
restic-compose-backup.volumes: true restic-compose-backup.volumes: true
# These volumes will be backed up
volumes: volumes:
# Docker volume
- media:/srv/media - media:/srv/media
# Host map
- /srv/files:/srv/files - /srv/files:/srv/files
volumes:
media:
```
A simple `include` and `exclude` filter is also available.
```yaml
example:
image: some_image
labels:
restic-compose-backup.volumes: true
restic-compose-backup.volumes.include: "files,data"
volumes:
# Source doesn't match the include filter. No backup.
- media:/srv/media
# Matches include filter
- files:/srv/files
- /srv/data:/srv/data
volumes:
media:
files:
```
Exclude
```yaml
example:
image: some_image
labels:
restic-compose-backup.volumes: true
restic-compose-backup.volumes.exclude: "media"
volumes:
# Excluded by filter
- media:/srv/media
# Backed up
- files:/srv/files
- /srv/data:/srv/data
volumes:
media:
files:
```
### Databases
Will dump databases directly into restic through stdin.
They will appear in restic as a separate snapshot with
path `/databases/<service_name>/dump.sql` or similar.
```yaml
mariadb: mariadb:
image: mariadb:10 image: mariadb:10
labels: labels:
# Enables backup of this database
restic-compose-backup.mariadb: true restic-compose-backup.mariadb: true
``` env_file:
mariadb-credentials.env
```yaml volumes:
- mysqldata:/var/lib/mysql
mysql: mysql:
image: mysql:5 image: mysql:5
labels: labels:
# Enables backup of this database
restic-compose-backup.mysql: true restic-compose-backup.mysql: true
``` env_file:
mysql-credentials.env
volumes:
- mysqldata:/var/lib/mysql
```yaml
postgres: postgres:
image: postgres image: postgres
labels: labels:
# Enables backup of this database
restic-compose-backup.postgres: true restic-compose-backup.postgres: true
env_file:
postgres-credentials.env
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
media:
mysqldata:
mariadbdata:
pgdata:
cache:
``` ```
## The `rcb` command
Everything is controlled using the `rcb` command.
After configuring backup with labels and restarting
the affected services, we can quickly view the
result using the `status` subcommand.
```bash
$ docker-compose run --rm backup rcb status
INFO: Status for compose project 'myproject'
INFO: Repository: '<restic repository>'
INFO: Backup currently running?: False
INFO: --------------- Detected Config ---------------
INFO: service: mysql
INFO: - mysql (is_ready=True)
INFO: service: mariadb
INFO: - mariadb (is_ready=True)
INFO: service: postgres
INFO: - postgres (is_ready=True)
INFO: service: web
INFO: - volume: media
INFO: - volume: /srv/files
```
The `status` subcommand lists what will be backed up and
even pings the database services checking their availability.
The `restic` command can also be used directly in the container.
More `rcb` commands can be found in the [documentation].
## Running Tests ## Running Tests
```bash ```bash
pip install -e src/ pip install -e ./src/
pip install -r src/tests/requirements.txt pip install -r src/tests/requirements.txt
tox tox
``` ```
@@ -194,3 +165,6 @@ python src/setup.py build_sphinx
## Contributing ## Contributing
Contributions are welcome regardless of experience level. Don't hesitate submitting issues, opening partial or completed pull requests. Contributions are welcome regardless of experience level. Don't hesitate submitting issues, opening partial or completed pull requests.
[restic]: https://restic.net/
[documentation]: https://restic-compose-backup.readthedocs.io
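Beyond the `status` example above, the other `rcb` subcommands can be invoked the same way; a hedged sketch assuming the compose service is named `backup` as in the README example:

```bash
docker-compose run --rm backup rcb status      # show detected backup config
docker-compose run --rm backup rcb backup      # trigger a backup outside of cron
docker-compose run --rm backup rcb snapshots   # list snapshots in the repository
```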

View File

@@ -8,8 +8,9 @@ services:
volumes: volumes:
# Map in docker socket # Map in docker socket
- /var/run/docker.sock:/tmp/docker.sock:ro - /var/run/docker.sock:/tmp/docker.sock:ro
# Map backup database locally # Map local restic repository for dev
- ./restic_data:/restic_data - ./restic_data:/restic_data
# Map restic cache
- ./restic_cache:/cache - ./restic_cache:/cache
# Map in project source in dev # Map in project source in dev
- ./src:/restic-compose-backup - ./src:/restic-compose-backup

View File

@@ -22,8 +22,7 @@ copyright = '2019, Zetta.IO Technology AS'
author = 'Zetta.IO Technology AS' author = 'Zetta.IO Technology AS'
# The full version, including alpha/beta/rc tags # The full version, including alpha/beta/rc tags
release = '0.3.0' release = '0.4.0'
# -- General configuration --------------------------------------------------- # -- General configuration ---------------------------------------------------

View File

@@ -2,6 +2,7 @@
- Update version in `setup.py` - Update version in `setup.py`
- Update version in `docs/conf.py` - Update version in `docs/conf.py`
- Update version in `restic_compose_backup/__init__.py`
- Build and tag image - Build and tag image
- push: `docker push zettaio/restic-compose-backup:<version>` - push: `docker push zettaio/restic-compose-backup:<version>`
- Ensure RTD has new docs published - Ensure RTD has new docs published
@@ -12,9 +13,9 @@ When releasing a bugfix version we need to update the
main image as well. main image as well.
```bash ```bash
docker build . --tag zettaio/restic-compose-backup:0.3 docker build src --tag zettaio/restic-compose-backup:0.3
docker build . --tag zettaio/restic-compose-backup:0.3.1 docker build src --tag zettaio/restic-compose-backup:0.3.3
docker push zettaio/restic-compose-backup:0.3 docker push zettaio/restic-compose-backup:0.3
docker push zettaio/restic-compose-backup:0.3.1 docker push zettaio/restic-compose-backup:0.3.3
``` ```

View File

@@ -10,6 +10,7 @@ RESTIC_KEEP_MONTHLY=12
RESTIC_KEEP_YEARLY=3 RESTIC_KEEP_YEARLY=3
LOG_LEVEL=info LOG_LEVEL=info
CRON_SCHEDULE=10 2 * * *
# EMAIL_HOST= # EMAIL_HOST=
# EMAIL_PORT= # EMAIL_PORT=

View File

@@ -1,19 +1,3 @@
.venv/
.vscode/
extras/
restic_cache/
restic_data/
tests/ tests/
.gitignore
*.env
*.log
docker-compose.yaml
*.ini
*.egg-info
__pycache__ __pycache__
.DS_Store .DS_Store
.git
.pytest_cache
.dockerignore
build/
docs/

View File

@@ -1 +1,2 @@
0 2 * * * source /env.sh && rcb backup > /proc/1/fd/1 10 2 * * * source /env.sh && rcb backup > /proc/1/fd/1

View File

@@ -3,6 +3,9 @@
# Dump all env vars so we can source them in cron jobs # Dump all env vars so we can source them in cron jobs
printenv | sed 's/^\(.*\)$/export \1/g' > /env.sh printenv | sed 's/^\(.*\)$/export \1/g' > /env.sh
# Write crontab
rcb crontab > crontab
# start cron in the foreground # start cron in the foreground
crontab crontab crontab crontab
crond -f crond -f
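Putting the hunk above together, the entrypoint after this change reads roughly as follows (a sketch reconstructed from the diff; the shebang is an assumption):

```bash
#!/bin/sh
# Dump all env vars so we can source them in cron jobs
printenv | sed 's/^\(.*\)$/export \1/g' > /env.sh
# Write the crontab generated from CRON_SCHEDULE / CRON_COMMAND
rcb crontab > crontab
# Install it and start cron in the foreground
crontab crontab
crond -f
```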

View File

@@ -0,0 +1 @@
__version__ = '0.4.0'

View File

@@ -30,11 +30,14 @@ class DiscordWebhookAlert(BaseAlert):
def send(self, subject: str = None, body: str = None, alert_type: str = None): def send(self, subject: str = None, body: str = None, alert_type: str = None):
"""Send basic webhook request. Max embed size is 6000""" """Send basic webhook request. Max embed size is 6000"""
logger.info("Triggering discord webhook") logger.info("Triggering discord webhook")
# NOTE: The max title size is 256
# The max description size is 2048
# Total embed size limit is 6000 characters (per embed)
data = { data = {
'embeds': [ 'embeds': [
{ {
'title': subject, 'title': subject[-256:],
'description': body[:5000], 'description': body[-2048:] if body else "",
}, },
] ]
} }
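For reference, a minimal standalone sketch of the same truncation approach (not the project's `send()` implementation; it assumes the `requests` package and the `DISCORD_WEBHOOK` URL from the alerts env file):

```python
import requests

DISCORD_WEBHOOK = "https://discordapp.com/api/webhooks/..."  # placeholder

def send_alert(subject: str, body: str) -> None:
    # Discord embed limits: title 256, description 2048, ~6000 chars per embed total
    data = {
        "embeds": [
            {
                "title": (subject or "")[-256:],      # keep the tail, as in the change above
                "description": (body or "")[-2048:],
            },
        ],
    }
    response = requests.post(DISCORD_WEBHOOK, json=data, timeout=10)
    response.raise_for_status()
```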

View File

@@ -1,8 +1,7 @@
import logging import logging
import os import os
import docker
from restic_compose_backup.config import Config from restic_compose_backup import utils
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -10,8 +9,7 @@ logger = logging.getLogger(__name__)
def run(image: str = None, command: str = None, volumes: dict = None, def run(image: str = None, command: str = None, volumes: dict = None,
environment: dict = None, labels: dict = None, source_container_id: str = None): environment: dict = None, labels: dict = None, source_container_id: str = None):
logger.info("Starting backup container") logger.info("Starting backup container")
config = Config() client = utils.docker_client()
client = docker.DockerClient(base_url=config.docker_base_url)
container = client.containers.run( container = client.containers.run(
image, image,
@@ -35,7 +33,13 @@ def run(image: str = None, command: str = None, volumes: dict = None,
line = "" line = ""
while True: while True:
try: try:
line += next(stream).decode() # Make log streaming work for docker ce 17 and 18.
# For some reason strings are returned instead of bytes.
data = next(stream)
if isinstance(data, bytes):
line += data.decode()
elif isinstance(data, str):
line += data
if line.endswith('\n'): if line.endswith('\n'):
break break
except StopIteration: except StopIteration:
@@ -51,9 +55,9 @@ def run(image: str = None, command: str = None, volumes: dict = None,
fd.write('\n') fd.write('\n')
logger.info(line) logger.info(line)
container.wait()
container.reload() container.reload()
logger.debug("Container ExitCode %s", container.attrs['State']['ExitCode']) logger.debug("Container ExitCode %s", container.attrs['State']['ExitCode'])
container.stop()
container.remove() container.remove()
return container.attrs['State']['ExitCode'] return container.attrs['State']['ExitCode']
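The str/bytes guard exists because docker-ce 17 and 18 sometimes hand back decoded strings from the log stream; a standalone sketch of the same defensive decode, assuming the `docker` SDK and any image that prints to stdout:

```python
import docker

client = docker.from_env()
container = client.containers.run("alpine", "echo hello", detach=True)

for chunk in container.logs(stream=True, follow=True):
    # Older docker-ce versions may yield str instead of bytes, so handle both
    text = chunk.decode() if isinstance(chunk, bytes) else chunk
    print(text, end="")

container.wait()
container.remove()
```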

View File

@@ -1,4 +1,5 @@
import argparse import argparse
import os
import logging import logging
from restic_compose_backup import ( from restic_compose_backup import (
@@ -9,6 +10,7 @@ from restic_compose_backup import (
) )
from restic_compose_backup.config import Config from restic_compose_backup.config import Config
from restic_compose_backup.containers import RunningContainers from restic_compose_backup.containers import RunningContainers
from restic_compose_backup import cron, utils
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -42,17 +44,35 @@ def main():
elif args.action == 'alert': elif args.action == 'alert':
alert(config, containers) alert(config, containers)
elif args.action == 'version':
import restic_compose_backup
print(restic_compose_backup.__version__)
elif args.action == "crontab":
crontab(config)
def status(config, containers): def status(config, containers):
"""Outputs the backup config for the compose setup""" """Outputs the backup config for the compose setup"""
logger.info("Status for compose project '%s'", containers.project_name) logger.info("Status for compose project '%s'", containers.project_name)
logger.info("Repository: '%s'", config.repository) logger.info("Repository: '%s'", config.repository)
logger.info("Backup currently running?: %s", containers.backup_process_running) logger.info("Backup currently running?: %s", containers.backup_process_running)
if containers.stale_backup_process_containers:
utils.remove_containers(containers.stale_backup_process_containers)
# Check if repository is initialized with restic snapshots
if not restic.is_initialized(config.repository):
logger.info("Could not get repository info. Attempting to initialize it.")
result = restic.init_repo(config.repository)
if result == 0:
logger.info("Successfully initialized repository: %s", config.repository)
else:
logger.error("Failed to initialize repository")
logger.info("%s Detected Config %s", "-" * 25, "-" * 25) logger.info("%s Detected Config %s", "-" * 25, "-" * 25)
logger.info("Initializing repository (may fail if already initalized)") # Start making snapshots
restic.init_repo(config.repository)
backup_containers = containers.containers_for_backup() backup_containers = containers.containers_for_backup()
for container in backup_containers: for container in backup_containers:
logger.info('service: %s', container.service_name) logger.info('service: %s', container.service_name)
@@ -79,7 +99,16 @@ def backup(config, containers):
"""Request a backup to start""" """Request a backup to start"""
# Make sure we don't spawn multiple backup processes # Make sure we don't spawn multiple backup processes
if containers.backup_process_running: if containers.backup_process_running:
raise ValueError("Backup process already running") alerts.send(
subject="Backup process container already running",
body=(
"A backup process container is already running. \n"
f"Id: {containers.backup_process_container.id}\n"
f"Name: {containers.backup_process_container.name}\n"
),
alert_type='ERROR',
)
raise RuntimeError("Backup process already running")
# Map all volumes from the backup container into the backup process container # Map all volumes from the backup container into the backup process container
volumes = containers.this_container.volumes volumes = containers.this_container.volumes
@@ -96,7 +125,7 @@ def backup(config, containers):
environment=containers.this_container.environment, environment=containers.this_container.environment,
source_container_id=containers.this_container.id, source_container_id=containers.this_container.id,
labels={ labels={
"restic-compose-backup.backup_process": 'True', containers.backup_process_label: 'True',
"com.docker.compose.project": containers.project_name, "com.docker.compose.project": containers.project_name,
}, },
) )
@@ -128,24 +157,38 @@ def start_backup_process(config, containers):
"Cannot run backup process in this container. Use backup command instead. " "Cannot run backup process in this container. Use backup command instead. "
"This will spawn a new container with the necessary mounts." "This will spawn a new container with the necessary mounts."
) )
return exit(1)
status(config, containers) status(config, containers)
errors = False errors = False
# Back up volumes # Did we actually get any volumes mounted?
try: try:
logger.info('Backing up volumes') has_volumes = os.stat('/volumes') is not None
vol_result = restic.backup_files(config.repository, source='/volumes') except FileNotFoundError:
logger.debug('Volume backup exit code: %s', vol_result) logger.warning("Found no volumes to back up")
if vol_result != 0: has_volumes = False
logger.error('Backup command exited with non-zero code: %s', vol_result)
# Warn if there is nothing to do
if len(containers.containers_for_backup()) == 0 and not has_volumes:
logger.error("No containers for backup found")
exit(1)
if has_volumes:
try:
logger.info('Backing up volumes')
vol_result = restic.backup_files(config.repository, source='/volumes')
logger.debug('Volume backup exit code: %s', vol_result)
if vol_result != 0:
logger.error('Volume backup exited with non-zero code: %s', vol_result)
errors = True
except Exception as ex:
logger.error('Exception raised during volume backup')
logger.exception(ex)
errors = True errors = True
except Exception as ex:
logger.exception(ex)
errors = True
# back up databases # back up databases
logger.info('Backing up databases')
for container in containers.containers_for_backup(): for container in containers.containers_for_backup():
if container.database_backup_enabled: if container.database_backup_enabled:
try: try:
@@ -161,14 +204,25 @@ def start_backup_process(config, containers):
errors = True errors = True
if errors: if errors:
logger.error('Exit code: %s', errors)
exit(1) exit(1)
# Only run cleanup if backup was successful # Only run cleanup if backup was successful
result = cleanup(config, container) result = cleanup(config, container)
logger.debug('cleanup exit code: %s', errors) logger.debug('cleanup exit code: %s', result)
if result != 0: if result != 0:
logger.error('cleanup exit code: %s', result)
exit(1) exit(1)
# Test the repository for errors
logger.info("Checking the repository for errors")
result = restic.check(config.repository)
if result != 0:
logger.error('Check exit code: %s', result)
exit(1)
logger.info('Backup completed')
def cleanup(config, containers): def cleanup(config, containers):
"""Run forget / prune to minimize storage space""" """Run forget / prune to minimize storage space"""
@@ -201,11 +255,25 @@ def alert(config, containers):
) )
def crontab(config):
"""Generate the crontab"""
print(cron.generate_crontab(config))
def parse_args(): def parse_args():
parser = argparse.ArgumentParser(prog='restic_compose_backup') parser = argparse.ArgumentParser(prog='restic_compose_backup')
parser.add_argument( parser.add_argument(
'action', 'action',
choices=['status', 'snapshots', 'backup', 'start-backup-process', 'alert', 'cleanup'], choices=[
'status',
'snapshots',
'backup',
'start-backup-process',
'alert',
'cleanup',
'version',
'crontab',
],
) )
parser.add_argument( parser.add_argument(
'--log-level', '--log-level',
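The two new subcommands are useful on their own; hedged usage from inside the backup container (or prefixed with `docker-compose run --rm backup` as in the README):

```bash
rcb version    # prints restic_compose_backup.__version__, e.g. 0.4.0
rcb crontab    # prints the crontab entry generated from CRON_SCHEDULE / CRON_COMMAND
```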

View File

@@ -53,14 +53,12 @@ def run(cmd: List[str]) -> int:
child = Popen(cmd, stdout=PIPE, stderr=PIPE) child = Popen(cmd, stdout=PIPE, stderr=PIPE)
stdoutdata, stderrdata = child.communicate() stdoutdata, stderrdata = child.communicate()
if stdoutdata: if stdoutdata.strip():
logger.debug(stdoutdata.decode().strip()) log_std('stdout', stdoutdata.decode(),
logger.debug('-' * 28) logging.DEBUG if child.returncode == 0 else logging.ERROR)
if stderrdata: if stderrdata.strip():
logger.error('%s STDERR %s', '-' * 10, '-' * 10) log_std('stderr', stderrdata.decode(), logging.ERROR)
logger.error(stderrdata.decode().strip())
logger.error('-' * 28)
logger.debug("returncode %s", child.returncode) logger.debug("returncode %s", child.returncode)
return child.returncode return child.returncode
@@ -71,3 +69,23 @@ def run_capture_std(cmd: List[str]) -> Tuple[str, str]:
logger.debug('cmd: %s', ' '.join(cmd)) logger.debug('cmd: %s', ' '.join(cmd))
child = Popen(cmd, stdout=PIPE, stderr=PIPE) child = Popen(cmd, stdout=PIPE, stderr=PIPE)
return child.communicate() return child.communicate()
def log_std(source: str, data: str, level: int):
if isinstance(data, bytes):
data = data.decode()
if not data.strip():
return
log_func = logger.debug if level == logging.DEBUG else logger.error
log_func('%s %s %s', '-' * 10, source, '-' * 10)
lines = data.split('\n')
if lines[-1] == '':
lines.pop()
for line in lines:
log_func(line)
log_func('-' * 28)

View File

@@ -2,12 +2,17 @@ import os
class Config: class Config:
default_backup_command = "source /env.sh && rcb backup > /proc/1/fd/1"
default_crontab_schedule = "0 2 * * *"
"""Bag for config values""" """Bag for config values"""
def __init__(self, check=True): def __init__(self, check=True):
# Mandatory values # Mandatory values
self.repository = os.environ.get('RESTIC_REPOSITORY') self.repository = os.environ.get('RESTIC_REPOSITORY')
self.password = os.environ.get('RESTIC_REPOSITORY') self.password = os.environ.get('RESTIC_REPOSITORY')
self.docker_base_url = os.environ.get('DOCKER_BASE_URL') or "unix://tmp/docker.sock" self.docker_base_url = os.environ.get('DOCKER_BASE_URL') or "unix://tmp/docker.sock"
self.cron_schedule = os.environ.get('CRON_SCHEDULE') or self.default_crontab_schedule
self.cron_command = os.environ.get('CRON_COMMAND') or self.default_backup_command
# Log # Log
self.log_level = os.environ.get('LOG_LEVEL') self.log_level = os.environ.get('LOG_LEVEL')

View File

@@ -2,7 +2,8 @@ import os
from pathlib import Path from pathlib import Path
from typing import List from typing import List
from restic_compose_backup import utils from restic_compose_backup import enums, utils
VOLUME_TYPE_BIND = "bind" VOLUME_TYPE_BIND = "bind"
VOLUME_TYPE_VOLUME = "volume" VOLUME_TYPE_VOLUME = "volume"
@@ -27,8 +28,8 @@ class Container:
if self._labels is None: if self._labels is None:
raise ValueError('Container meta missing Config->Labels') raise ValueError('Container meta missing Config->Labels')
self._include = self._parse_pattern(self.get_label('restic-compose-backup.volumes.include')) self._include = self._parse_pattern(self.get_label(enums.LABEL_VOLUMES_INCLUDE))
self._exclude = self._parse_pattern(self.get_label('restic-compose-backup.volumes.exclude')) self._exclude = self._parse_pattern(self.get_label(enums.LABEL_VOLUMES_EXCLUDE))
@property @property
def instance(self) -> 'Container': def instance(self) -> 'Container':
@@ -65,6 +66,9 @@ class Container:
"""All configured env vars for the container as a list""" """All configured env vars for the container as a list"""
return self.get_config('Env') return self.get_config('Env')
def remove(self):
self._data.remove()
def get_config_env(self, name) -> str: def get_config_env(self, name) -> str:
"""Get a config environment variable by name""" """Get a config environment variable by name"""
# convert to dict and fetch env var by name # convert to dict and fetch env var by name
@@ -107,7 +111,8 @@ class Container:
@property @property
def volume_backup_enabled(self) -> bool: def volume_backup_enabled(self) -> bool:
return utils.is_true(self.get_label('restic-compose-backup.volumes')) """bool: If the ``restic-compose-backup.volumes`` label is set"""
return utils.is_true(self.get_label(enums.LABEL_VOLUMES_ENABLED))
@property @property
def database_backup_enabled(self) -> bool: def database_backup_enabled(self) -> bool:
@@ -120,24 +125,27 @@ class Container:
@property @property
def mysql_backup_enabled(self) -> bool: def mysql_backup_enabled(self) -> bool:
return utils.is_true(self.get_label('restic-compose-backup.mysql')) """bool: If the ``restic-compose-backup.mysql`` label is set"""
return utils.is_true(self.get_label(enums.LABEL_MYSQL_ENABLED))
@property @property
def mariadb_backup_enabled(self) -> bool: def mariadb_backup_enabled(self) -> bool:
return utils.is_true(self.get_label('restic-compose-backup.mariadb')) """bool: If the ``restic-compose-backup.mariadb`` label is set"""
return utils.is_true(self.get_label(enums.LABEL_MARIADB_ENABLED))
@property @property
def postgresql_backup_enabled(self) -> bool: def postgresql_backup_enabled(self) -> bool:
return utils.is_true(self.get_label('restic-compose-backup.postgres')) """bool: If the ``restic-compose-backup.postgres`` label is set"""
return utils.is_true(self.get_label(enums.LABEL_POSTGRES_ENABLED))
@property @property
def is_backup_process_container(self) -> bool: def is_backup_process_container(self) -> bool:
"""Is this container the running backup process?""" """Is this container the running backup process?"""
return self.get_label('restic-compose-backup.backup_process') == 'True' return self.get_label(self.backup_process_label) == 'True'
@property @property
def is_running(self) -> bool: def is_running(self) -> bool:
"""Is the container running?""" """bool: Is the container running?"""
return self._state.get('Running', False) return self._state.get('Running', False)
@property @property
@@ -150,6 +158,11 @@ class Container:
"""Name of the container/service""" """Name of the container/service"""
return self.get_label('com.docker.compose.service', default='') return self.get_label('com.docker.compose.service', default='')
@property
def backup_process_label(self) -> str:
"""str: The unique backup process label for this project"""
return f"{enums.LABEL_BACKUP_PROCESS}-{self.project_name}"
@property @property
def project_name(self) -> str: def project_name(self) -> str:
"""Name of the compose setup""" """Name of the compose setup"""
@@ -310,6 +323,7 @@ class RunningContainers:
self.containers = [] self.containers = []
self.this_container = None self.this_container = None
self.backup_process_container = None self.backup_process_container = None
self.stale_backup_process_containers = []
# Find the container we are running in. # Find the container we are running in.
# If we don't have this information we cannot continue # If we don't have this information we cannot continue
@@ -320,10 +334,20 @@ class RunningContainers:
if not self.this_container: if not self.this_container:
raise ValueError("Cannot find metadata for backup container") raise ValueError("Cannot find metadata for backup container")
# Gather all containers in the current compose setup # Gather all running containers in the current compose setup
for container_data in all_containers: for container_data in all_containers:
container = Container(container_data) container = Container(container_data)
# Gather stale backup process containers
if (self.this_container.image == container.image
and not container.is_running
and container.is_backup_process_container):
self.stale_backup_process_containers.append(container)
# We only care about running containers after this point
if not container.is_running:
continue
# Detect running backup process container # Detect running backup process container
if container.is_backup_process_container: if container.is_backup_process_container:
self.backup_process_container = container self.backup_process_container = container
@@ -339,6 +363,11 @@ class RunningContainers:
"""str: Name of the compose project""" """str: Name of the compose project"""
return self.this_container.project_name return self.this_container.project_name
@property
def backup_process_label(self) -> str:
"""str: The backup process label for this project"""
return self.this_container.backup_process_label
@property @property
def backup_process_running(self) -> bool: def backup_process_running(self) -> bool:
"""Is the backup process container running?""" """Is the backup process container running?"""
@@ -358,6 +387,7 @@ class RunningContainers:
return mounts return mounts
def get_service(self, name) -> Container: def get_service(self, name) -> Container:
"""Container: Get a service by name"""
for container in self.containers: for container in self.containers:
if container.service_name == name: if container.service_name == name:
return container return container
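The per-project backup process label used above is just the base label from the enums module with the compose project name appended; a small illustration with an assumed project name:

```python
LABEL_BACKUP_PROCESS = "restic-compose-backup.process"  # from the new enums module

project_name = "default"  # value of the com.docker.compose.project label
backup_process_label = f"{LABEL_BACKUP_PROCESS}-{project_name}"
print(backup_process_label)  # -> restic-compose-backup.process-default
```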

View File

@@ -0,0 +1,69 @@
"""
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
# │ │ │ │ │ 7 is also Sunday on some systems)
# │ │ │ │ │
# │ │ │ │ │
# * * * * * command to execute
"""
QUOTE_CHARS = ['"', "'"]
def generate_crontab(config):
"""Generate a crontab entry for running backup job"""
command = config.cron_command.strip()
schedule = config.cron_schedule
if schedule:
schedule = schedule.strip()
schedule = strip_quotes(schedule)
if not validate_schedule(schedule):
schedule = config.default_crontab_schedule
else:
schedule = config.default_crontab_schedule
return f'{schedule} {command}\n'
def validate_schedule(schedule: str):
"""Validate crontab format"""
parts = schedule.split()
if len(parts) != 5:
return False
for p in parts:
if p != '*' and not p.isdigit():
return False
minute, hour, day, month, weekday = parts
try:
validate_field(minute, 0, 59)
validate_field(hour, 0, 23)
validate_field(day, 1, 31)
validate_field(month, 1, 12)
validate_field(weekday, 0, 6)
except ValueError:
return False
return True
def validate_field(value, min, max):
if value == '*':
return
i = int(value)
return min <= i <= max
def strip_quotes(value: str):
"""Strip enclosing single or double quotes if present"""
if value[0] in QUOTE_CHARS:
value = value[1:]
if value[-1] in QUOTE_CHARS:
value = value[:-1]
return value
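A quick way to see what the generator produces; this sketch fakes the config object instead of reading environment variables (field names match the `Config` additions above):

```python
from types import SimpleNamespace

from restic_compose_backup import cron

config = SimpleNamespace(
    cron_command="source /env.sh && rcb backup > /proc/1/fd/1",
    cron_schedule='"30 3 * * *"',           # surrounding quotes are stripped
    default_crontab_schedule="0 2 * * *",   # fallback for invalid schedules
)
print(cron.generate_crontab(config), end="")
# -> 30 3 * * * source /env.sh && rcb backup > /proc/1/fd/1
```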

View File

@@ -0,0 +1,11 @@
# Labels
LABEL_VOLUMES_ENABLED = 'restic-compose-backup.volumes'
LABEL_VOLUMES_INCLUDE = 'restic-compose-backup.volumes.include'
LABEL_VOLUMES_EXCLUDE = 'restic-compose-backup.volumes.exclude'
LABEL_MYSQL_ENABLED = 'restic-compose-backup.mysql'
LABEL_POSTGRES_ENABLED = 'restic-compose-backup.postgres'
LABEL_MARIADB_ENABLED = 'restic-compose-backup.mariadb'
LABEL_BACKUP_PROCESS = 'restic-compose-backup.process'

View File

@@ -40,33 +40,41 @@ def backup_from_stdin(repository: str, filename: str, source_command: List[str])
]) ])
# pipe source command into dest command # pipe source command into dest command
# NOTE: Using the default buffer size: io.DEFAULT_BUFFER_SIZE = 8192 source_process = Popen(source_command, stdout=PIPE, bufsize=65536)
# We might want to tweak that to speed up large dumps. dest_process = Popen(dest_command, stdin=source_process.stdout, stdout=PIPE, stderr=PIPE, bufsize=65536)
# Actual tests tests must be done.
source_process = Popen(source_command, stdout=PIPE)
dest_process = Popen(dest_command, stdin=source_process.stdout, stdout=PIPE, stderr=PIPE)
stdout, stderr = dest_process.communicate() stdout, stderr = dest_process.communicate()
if stdout:
for line in stdout.decode().split('\n'):
logger.debug(line)
if stderr:
for line in stderr.decode().split('\n'):
logger.error(line)
# Ensure both processes exited with code 0 # Ensure both processes exited with code 0
source_exit, dest_exit = source_process.poll(), dest_process.poll() source_exit, dest_exit = source_process.poll(), dest_process.poll()
return 0 if (source_exit == 0 and dest_exit == 0) else 1 exit_code = 0 if (source_exit == 0 and dest_exit == 0) else 1
if stdout:
commands.log_std('stdout', stdout, logging.DEBUG if exit_code == 0 else logging.ERROR)
if stderr:
commands.log_std('stderr', stderr, logging.ERROR)
return exit_code
def snapshots(repository: str, last=True) -> Tuple[str, str]: def snapshots(repository: str, last=True) -> Tuple[str, str]:
"""Returns the stdout and stderr info"""
args = ["snapshots"] args = ["snapshots"]
if last: if last:
args.append('--last') args.append('--last')
return commands.run_capture_std(restic(repository, args)) return commands.run_capture_std(restic(repository, args))
def is_initialized(repository: str) -> bool:
"""
Checks if a repository is initialized using the snapshots command.
Note that this cannot distinguish between an uninitialized repo
and other errors, but this method is recommended by the restic
community.
"""
return commands.run(restic(repository, ["snapshots", '--last'])) == 0
def forget(repository: str, daily: str, weekly: str, monthly: str, yearly: str): def forget(repository: str, daily: str, weekly: str, monthly: str, yearly: str):
return commands.run(restic(repository, [ return commands.run(restic(repository, [
'forget', 'forget',
@@ -90,6 +98,7 @@ def prune(repository: str):
def check(repository: str): def check(repository: str):
return commands.run(restic(repository, [ return commands.run(restic(repository, [
"check", "check",
# "--with-cache",
])) ]))
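What `backup_from_stdin` does is equivalent to a shell pipeline along these lines (a sketch; the concrete dump command, host and credentials depend on the database service):

```bash
# RESTIC_REPOSITORY and RESTIC_PASSWORD must be set in the environment
mysqldump --all-databases \
  | restic backup --stdin --stdin-filename /databases/mysql/dump.sql
```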

View File

@@ -1,26 +1,45 @@
import os import os
import logging
from typing import List
from contextlib import contextmanager from contextlib import contextmanager
import docker import docker
from restic_compose_backup.config import Config from restic_compose_backup.config import Config
logger = logging.getLogger(__name__)
TRUE_VALUES = ['1', 'true', 'True', True, 1] TRUE_VALUES = ['1', 'true', 'True', True, 1]
def list_containers(): def docker_client():
config = Config()
return docker.DockerClient(base_url=config.docker_base_url)
def list_containers() -> List[dict]:
""" """
List all containers. List all containers.
Returns: Returns:
List of raw container json data from the api List of raw container json data from the api
""" """
config = Config() client = docker_client()
client = docker.DockerClient(base_url=config.docker_base_url) all_containers = client.containers.list(all=True)
all_containers = client.containers.list()
client.close() client.close()
return [c.attrs for c in all_containers] return [c.attrs for c in all_containers]
def remove_containers(containers: List['Container']):
client = docker_client()
logger.info('Attempting to delete stale backup process containers')
for container in containers:
logger.info(' -> deleting %s', container.name)
try:
c = client.containers.get(container.name)
c.remove()
except Exception as ex:
logger.exception(ex)
def is_true(value): def is_true(value):
""" """
Evaluates the truthfulness of a bool value in container labels Evaluates the truthfulness of a bool value in container labels
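A short hedged example of the reworked helper in isolation (it now lists stopped containers too):

```python
from restic_compose_backup import utils

# Raw attrs for every container, running or not (list(all=True) above)
for attrs in utils.list_containers():
    print(attrs["Name"], attrs["State"]["Running"])
```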

View File

@@ -3,7 +3,7 @@ from setuptools import setup, find_namespace_packages
setup( setup(
name="restic-compose-backup", name="restic-compose-backup",
url="https://github.com/ZettaIO/restic-compose-backup", url="https://github.com/ZettaIO/restic-compose-backup",
version="0.3.0", version="0.4.0",
author="Einar Forselv", author="Einar Forselv",
author_email="eforselv@gmail.com", author_email="eforselv@gmail.com",
packages=find_namespace_packages(include=['restic_compose_backup']), packages=find_namespace_packages(include=['restic_compose_backup']),

View File

@@ -191,7 +191,7 @@ class ResticBackupTests(unittest.TestCase):
{ {
'service': 'backup_runner', 'service': 'backup_runner',
'labels': { 'labels': {
'restic-compose-backup.backup_process': 'True', 'restic-compose-backup.process-default': 'True',
}, },
}, },
] ]

View File

@@ -14,7 +14,7 @@ basepython =
py37: python3.7 py37: python3.7
deps = deps =
-r{toxinidir}/tests/requirements.txt -r{toxinidir}/src//tests/requirements.txt
commands = commands =
; coverage run --source=restic_compose_backup -m pytest tests/ ; coverage run --source=restic_compose_backup -m pytest tests/
; coverage report ; coverage report