Compare commits

master..0.3.1

No commits in common. "master" and "0.3.1" have entirely different histories.

40 changed files with 275 additions and 2008 deletions

BIN .github/logo.png (vendored)

Binary file not shown. (Before: 6.1 KiB)

3 .gitignore (vendored)

@@ -21,10 +21,9 @@ restic_data/
 restic_cache/
 alerts.env
-# build
+# docs
 build/
 docs/_build
-dist
 # tests
 .tox

24 .readthedocs.yaml

@@ -1,24 +0,0 @@
# Read the Docs configuration file for Sphinx projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.10"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
#   - pdf
#   - epub

python:
  install:
    - requirements: docs/requirements.txt
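
For a rough local equivalent of what this Read the Docs config builds (a sketch; `sphinx-build` is the standard Sphinx CLI, and the paths follow the config above):

```bash
pip install -r docs/requirements.txt
# docs/conf.py is the sphinx configuration named above
sphinx-build -b html docs docs/_build/html
```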

17 .travis.yml

@@ -1,17 +0,0 @@
language: python
sudo: false
matrix:
  include:
    - python: 3.7
      dist: bionic
      sudo: true
install:
  - pip install -U setuptools pip wheel
  - pip install -r src/tests/requirements.txt
  - pip install ./src
script:
  - tox

244 README.md

@@ -3,8 +3,7 @@
 ![docs](https://readthedocs.org/projects/restic-compose-backup/badge/?version=latest)
-Backup using [restic] for a docker-compose setup.
-Currently tested with docker-ce 17, 18 and 19.
+Backup using https://restic.net/ for a docker-compose setup.
 * [restic-compose-backup Documentation](https://restic-compose-backup.readthedocs.io)
 * [restic-compose-backup on Github](https://github.com/ZettaIO/restic-compose-backup)
@@ -12,145 +11,175 @@ Currently tested with docker-ce 17, 18 and 19.
 Features:
-* Backs up docker volumes or host binds
-* Backs up postgres, mariadb and mysql databases
-* Notifications over mail/smtp or Discord webhooks
+* Back up docker volumes or host binds
+* Back up postgres databases
+* Back up mariadb databases
+* Back up mysql databases
+* Notifications over mail/smtp
+* Notifications to Discord through webhooks
 Please report issues on [github](https://github.com/ZettaIO/restic-compose-backup/issues).
-Automatically detects and backs up volumes, mysql, mariadb and postgres databases in a docker-compose setup.
-* Each service in the compose setup is configured with a label
-  to enable backup of volumes or databases
-* When backup starts a new instance of the container is created
-  mapping in all the needed volumes. It will copy networks etc.
-  to ensure databases can be reached
-* Volumes are mounted to `/volumes/<service_name>/<path>`
-  in the backup process container. `/volumes` is pushed into restic
-* Databases are backed up from stdin / dumps into restic using path `/databases/<service_name>/dump.sql`
-* Cron triggers backup at 2AM every day
 ## Install

 ```bash
 docker pull zettaio/restic-compose-backup
 ```
-## Configuration (env vars)
+## Configuration
-Minimum configuration
+Required env variables for restic:
 ```bash
 RESTIC_REPOSITORY
 RESTIC_PASSWORD
 ```
-More config options can be found in the [documentation].
-Restic backend specific env vars : https://restic.readthedocs.io/en/stable/040_backup.html#environment-variables
+Backend specific env vars : https://restic.readthedocs.io/en/stable/040_backup.html#environment-variables
+Additional env vars:
-## Compose Example
-We simply control what should be backed up by adding
-labels to our containers. More details are covered
-in the [documentation].
-restic-backup.env
 ```bash
-RESTIC_REPOSITORY=<whatever backend restic supports>
-RESTIC_PASSWORD=hopefullyasecturepw
-# snapshot prune rules
+# Prune rules
 RESTIC_KEEP_DAILY=7
 RESTIC_KEEP_WEEKLY=4
 RESTIC_KEEP_MONTHLY=12
 RESTIC_KEEP_YEARLY=3
-# Cron schedule. Run every day at 1am
-CRON_SCHEDULE="0 1 * * *"
+# Logging level (debug,info,warning,error)
+LOG_LEVEL=info
+# SMTP alerts
+EMAIL_HOST=my.mail.host
+EMAIL_PORT=465
+EMAIL_HOST_USER=johndoe
+EMAIL_HOST_PASSWORD=s3cr3tpassw0rd
+EMAIL_SEND_TO=johndoe@gmail.com
+# Discord webhook
+DISCORD_WEBHOOK=https://discordapp.com/api/webhooks/...
 ```
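
For context, `RESTIC_REPOSITORY` accepts any restic backend URI, not just a local path. A hedged example for an S3-compatible backend (bucket name and credentials are placeholders):

```bash
# local directory repository
RESTIC_REPOSITORY=/restic_data
# or an S3 bucket, using restic's standard backend syntax
RESTIC_REPOSITORY=s3:s3.amazonaws.com/my-backup-bucket
AWS_ACCESS_KEY_ID=<key id>
AWS_SECRET_ACCESS_KEY=<secret>
```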
-docker-compose.yaml
+### Volumes
 ```yaml
 version: '3'
 services:
   # The backup service
   backup:
-    image: zettaio/restic-compose-backup:<version>
+    image: zettaio/restic-compose-backup
-    environment:
-      - RESTIC_REPOSITORY=<whatever restic supports>
-      - RESTIC_PASSWORD=hopefullyasecturepw
-      - RESTIC_KEEP_DAILY=7
-      - RESTIC_KEEP_WEEKLY=4
-      - RESTIC_KEEP_MONTHLY=12
-      - RESTIC_KEEP_YEARLY=3
     env_file:
-      - restic-backup.env
+      - some_other_vars.env
     volumes:
-      # We need to communicate with docker
       - /var/run/docker.sock:/tmp/docker.sock:ro
-      # Persistent storage of restic cache (greatly speeds up all restic operations)
-      - cache:/cache
-  web:
-    image: some_image
-    labels:
-      # Enables backup of the volumes below
-      restic-compose-backup.volumes: true
-    volumes:
-      - media:/srv/media
-      - /srv/files:/srv/files
-  mariadb:
-    image: mariadb:10
-    labels:
-      # Enables backup of this database
-      restic-compose-backup.mariadb: true
-    env_file:
-      mariadb-credentials.env
-    volumes:
-      - mysqldata:/var/lib/mysql
-  mysql:
-    image: mysql:5
-    labels:
-      # Enables backup of this database
-      restic-compose-backup.mysql: true
-    env_file:
-      mysql-credentials.env
-    volumes:
-      - mysqldata:/var/lib/mysql
-  postgres:
-    image: postgres
-    labels:
-      # Enables backup of this database
-      restic-compose-backup.postgres: true
-    env_file:
-      postgres-credentials.env
-    volumes:
-      - pgdata:/var/lib/postgresql/data
+  example:
+    image: some_image
+    # Enable volume backup with label
+    labels:
+      restic-compose-backup.volumes: true
+    # These volumes will be backed up
+    volumes:
+      # Docker volume
+      - media:/srv/media
+      # Host map
+      - /srv/files:/srv/files
 volumes:
   media:
-  mysqldata:
-  mariadbdata:
-  pgdata:
-  cache:
 ```
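
Once a service is labeled and a backup has run, the result can be verified with plain restic from the backup service (a sketch; `--path` is restic's standard snapshot filter):

```bash
# list snapshots containing the backed-up volumes
docker-compose exec backup restic snapshots --path /volumes
```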
-## The `rcb` command
-Everything is controlled using the `rcb` command.
-After configuring backup with labels and restarting
-the affected services we can quickly view the
-result using the `status` subcommand.
-```bash
-$ docker-compose run --rm backup rcb status
-INFO: Status for compose project 'myproject'
-INFO: Repository: '<restic repository>'
-INFO: Backup currently running?: False
-INFO: --------------- Detected Config ---------------
-INFO: service: mysql
-INFO:  - mysql (is_ready=True)
-INFO: service: mariadb
-INFO:  - mariadb (is_ready=True)
-INFO: service: postgres
-INFO:  - postgres (is_ready=True)
-INFO: service: web
-INFO:  - volume: media
-INFO:  - volume: /srv/files
+A simple `include` and `exclude` filter is also available.
+```yaml
+example:
+  image: some_image
+  labels:
+    restic-compose-backup.volumes: true
+    restic-compose-backup.volumes.include: "files,data"
+  volumes:
+    # Source doesn't match include filter. No backup.
+    - media:/srv/media
+    # Matches include filter
+    - files:/srv/files
+    - /srv/data:/srv/data
+volumes:
+  media:
+  files:
 ```
-The `status` subcommand lists what will be backed up and
-even pings the database services checking their availability.
-The `restic` command can also be used directly in the container.
-More `rcb` commands can be found in the [documentation].
+Exclude
+```yaml
+example:
+  image: some_image
+  labels:
+    restic-compose-backup.volumes: true
+    restic-compose-backup.volumes.exclude: "media"
+  volumes:
+    # Excluded by filter
+    - media:/srv/media
+    # Backed up
+    - files:/srv/files
+    - /srv/data:/srv/data
+volumes:
+  media:
+  files:
+```
+### Databases
+Will dump databases directly into restic through stdin.
+They will appear in restic as a separate snapshot with
+path `/databases/<service_name>/dump.sql` or similar.
+```yaml
+mariadb:
+  image: mariadb:10
+  labels:
+    restic-compose-backup.mariadb: true
+```
+```yaml
+mysql:
+  image: mysql:5
+  labels:
+    restic-compose-backup.mysql: true
+```
+```yaml
+postgres:
+  image: postgres
+  labels:
+    restic-compose-backup.postgres: true
+```
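
Getting a dump back out is plain restic as well; `restic dump` streams a single file from a snapshot to stdout. A sketch for the mariadb service above (the exact file name shows up in `restic snapshots`):

```bash
docker-compose exec backup \
  restic dump latest /databases/mariadb/dump.sql > dump.sql
# replay into a running server, e.g.:
# mysql -u root -p < dump.sql
```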
 ## Running Tests

 ```bash
-pip install -e ./src/
+pip install -e src/
 pip install -r src/tests/requirements.txt
 tox
 ```

@@ -162,39 +191,6 @@ pip install -r docs/requirements.txt
 python src/setup.py build_sphinx
 ```
-# Local dev setup
-The git repository contains a simple local setup for development
-```bash
-# Create an overlay network to link the compose project and stack
-docker network create --driver overlay --attachable global
-# Start the compose project
-docker-compose up -d
-# Deploy the stack
-docker stack deploy -c swarm-stack.yml test
-```
-In dev we should ideally start the backup container manually
-```bash
-docker-compose run --rm backup sh
-# pip install the package in the container in editable mode to auto sync changes from host source
-pip3 install -e .
-```
-Remember to enable swarm mode with `docker swarm init/join` and disable swarm
-mode with `docker swarm leave --force` when needed in development (single node setup).
 ## Contributing
-Contributions are welcome regardless of experience level.
-Don't hesitate submitting issues, opening partial or completed pull requests.
+Contributions are welcome regardless of experience level. Don't hesitate submitting issues, opening partial or completed pull requests.
-[restic]: https://restic.net/
-[documentation]: https://restic-compose-backup.readthedocs.io
 ---
 This project is sponsored by [zetta.io](https://www.zetta.io)
 [![Zetta.IO](https://raw.githubusercontent.com/ZettaIO/restic-compose-backup/master/.github/logo.png)](https://www.zetta.io)

docker-compose.yaml

@@ -1,22 +1,15 @@
-version: '3.7'
+version: '3'
 services:
   backup:
     build: ./src
     env_file:
       - restic_compose_backup.env
-      # - alerts.env
+      - alerts.env
-    labels:
-      restic-compose-backup.volumes: true
-      restic-compose-backup.volumes.include: 'src'
-    networks:
-      - default
-      - global
     volumes:
       # Map in docker socket
       - /var/run/docker.sock:/tmp/docker.sock:ro
-      # Map local restic repository for dev
+      # Map backup database locally
       - ./restic_data:/restic_data
-      # Map restic cache
       - ./restic_cache:/cache
       # Map in project source in dev
       - ./src:/restic-compose-backup

@@ -26,13 +19,13 @@ services:
       restic-compose-backup.volumes: true
       restic-compose-backup.volumes.include: "/tests"
     volumes:
-      - ./src/tests:/srv/tests
+      - ./tests:/srv/tests
       - ./.vscode:/srv/code
     environment:
       - SOME_VALUE=test
       - ANOTHER_VALUE=1
-  mysql5:
+  mysql:
     image: mysql:5
     labels:
       restic-compose-backup.mysql: true

@@ -42,19 +35,7 @@ services:
       - MYSQL_USER=myuser
       - MYSQL_PASSWORD=mypassword
     volumes:
-      - mysqldata5:/var/lib/mysql
+      - mysqldata:/var/lib/mysql
-  mysql8:
-    image: mysql:8
-    labels:
-      restic-compose-backup.mysql: true
-    environment:
-      - MYSQL_ROOT_PASSWORD=my-secret-pw
-      - MYSQL_DATABASE=mydb
-      - MYSQL_USER=myuser
-      - MYSQL_PASSWORD=mypassword
-    volumes:
-      - mysqldata8:/var/lib/mysql
   mariadb:
     image: mariadb:10

@@ -80,11 +61,6 @@ services:
       - pgdata:/var/lib/postgresql/data
 volumes:
-  mysqldata5:
+  mysqldata:
-  mysqldata8:
   mariadbdata:
   pgdata:
-networks:
-  global:
-    external: true

docs/conf.py

@@ -22,7 +22,8 @@ copyright = '2019, Zetta.IO Technology AS'
 author = 'Zetta.IO Technology AS'

 # The full version, including alpha/beta/rc tags
-release = '0.6.0'
+release = '0.3.0'

 # -- General configuration ---------------------------------------------------

docs/guide/advanced.rst

@@ -1,30 +0,0 @@
Advanced
--------
Currently work in progress. These are only notes :D
Temp Notes
~~~~~~~~~~
* Quick setup guide from start to end
* we group snapshots by path when forgetting
* explain rcb commands
* examples of using restic directly
* Explain what happens during backup process
* Explain the backup process container
* cache directory
* Not displaying passwords in logs
Inner workings
~~~~~~~~~~~~~~
* Each service in the compose setup is configured with a label
to enable backup of volumes or databases
* When backup starts a new instance of the container is created
mapping in all the needed volumes. It will copy networks etc
to ensure databases can be reached
* Volumes are mounted to `/volumes/<service_name>/<path>`
in the backup process container. `/volumes` is pushed into restic
* Databases are backed up from stdin / dumps into restic using path
`/databases/<service_name>/dump.sql`
* Cron triggers backup at 2AM every day

docs/guide/configuration.rst

@@ -1,447 +0,0 @@
Configuration
=============
Environment Variables
---------------------
RESTIC_REPOSITORY
~~~~~~~~~~~~~~~~~
Sets the restic repository path.
This is a standard environment variable
the ``restic`` command will read, making it simple for
us to enter the container and use the restic command directly.
More about this value and supported backends:
https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html
RESTIC_PASSWORD
~~~~~~~~~~~~~~~
Sets the password used to encrypt/decrypt data.
Losing this password will make recovery impossible.

This is a standard environment variable the ``restic``
command will read, making it simple for us to enter the
container and run the command directly.
RESTIC_KEEP_DAILY
~~~~~~~~~~~~~~~~~
**Default value**: ``7``
How many daily snapshots (grouped by path) back in time we
want to keep. This is passed to restic in the
``forget --keep-daily`` option.
RESTIC_KEEP_WEEKLY
~~~~~~~~~~~~~~~~~~
**Default value**: ``4``
How many weeks back we should keep at least one snapshot
(grouped by path). This is passed to restic in the
``forget --keep-weekly`` option.
RESTIC_KEEP_MONTHLY
~~~~~~~~~~~~~~~~~~~
**Default value**: ``12``
How many months back we should keep at least one snapshot
(grouped by path). This is passed to restic in the
``forget --keep-monthly`` option.
RESTIC_KEEP_YEARLY
~~~~~~~~~~~~~~~~~~
**Default value**: ``3``
How many years back we should keep at least one snapshot
(grouped by path). This is passed to restic in the
``forget --keep-yearly`` option.
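
Taken together the four keep options translate into restic's standard
``forget`` flags. A sketch of the equivalent manual command with the
default values (the exact invocation is an internal detail of ``rcb``):

.. code:: bash

    restic forget --group-by paths \
        --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 3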
CRON_SCHEDULE
~~~~~~~~~~~~~
**Default value**: ``0 2 * * *`` (daily at 02:00)
The cron schedule parameters. The crontab is generated when the
container starts from the ``CRON_SCHEDULE`` and ``CRON_COMMAND``
env variables. The schedule parameters only accept numeric values
and are validated when the container starts; providing values cron
does not understand will stall all backups.
.. code::
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of the month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
* * * * * command to execute
CRON_COMMAND
~~~~~~~~~~~~
**Default value**: ``source /env.sh && rcb backup > /proc/1/fd/1``
The command executed in the crontab. A single line is generated when
the container starts from the ``CRON_SCHEDULE`` and ``CRON_COMMAND``
environment variables.
The default command sources a dump of all env vars, runs the
backup command and directs output to pid 1 so it appears in
docker logs.
By default the crontab will look like this::
0 2 * * * source /env.sh && rcb backup > /proc/1/fd/1
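
For example, overriding only the schedule (the time here is illustrative)
keeps the same command part of the generated line:

.. code:: bash

    CRON_SCHEDULE="30 4 * * 0"
    # generated crontab line:
    # 30 4 * * 0 source /env.sh && rcb backup > /proc/1/fd/1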
LOG_LEVEL
~~~~~~~~~
**Default value**: ``info``
Log level for the ``rcb`` command. Valid values are
``debug``, ``info``, ``warning``, ``error``.
EMAIL_HOST
~~~~~~~~~~
The email host to use.
Alerts can be tested using the ``rcb alert`` command.
This will send a test message to all configured alert
backends.
EMAIL_PORT
~~~~~~~~~~
The port to connect to.
Alerts can be tested using the ``rcb alert`` command.
This will send a test message to all configured alert
backends.
EMAIL_HOST_USER
~~~~~~~~~~~~~~~
The user of the sender account.
Alerts can be tested using the ``rcb alert`` command.
This will send a test message to all configured alert
backends.
EMAIL_HOST_PASSWORD
~~~~~~~~~~~~~~~~~~~
The password for the sender account.
Alerts can be tested using the ``rcb alert`` command.
This will send a test message to all configured alert
backends.
EMAIL_SEND_TO
~~~~~~~~~~~~~
The email address to send alerts to.
Alerts can be tested using the ``rcb alert`` command.
This will send a test message to all configured alert
backends.
DISCORD_WEBHOOK
~~~~~~~~~~~~~~~
The discord webhook url. An administrator can quickly set this up
by going to server settings in the discord client and creating
a webhook that will post embedded messages to a specific channel.
The url usually looks like this: ``https://discordapp.com/api/webhooks/...``
DOCKER_HOST
~~~~~~~~~~~
**Default value**: ``unix://tmp/docker.sock``
The socket or host of the docker service.
DOCKER_TLS_VERIFY
~~~~~~~~~~~~~~~~~
If defined verify the host against a CA certificate.
Path to certs is defined in ``DOCKER_CERT_PATH``
and can be copied or mapped into this backup container.
DOCKER_CERT_PATH
~~~~~~~~~~~~~~~~
A path to a directory containing TLS certificates to use when
connecting to the Docker host. Combined with ``DOCKER_TLS_VERIFY``
this can be used to talk to docker through TLS in cases
where we cannot map in the docker socket.
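
A hedged example of the three variables used together (host and
certificate path are placeholders):

.. code:: bash

    DOCKER_HOST=tcp://docker.example.com:2376
    DOCKER_TLS_VERIFY=1
    DOCKER_CERT_PATH=/certs  # expects ca.pem, cert.pem and key.pem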
INCLUDE_PROJECT_NAME
~~~~~~~~~~~~~~~~~~~~
Define this environment variable if your backup destination
paths need the project name as a prefix. This is useful
when running multiple projects.
EXCLUDE_BIND_MOUNTS
~~~~~~~~~~~~~~~~~~~
Docker has two volume types: binds and volumes.
Volumes are docker volumes (``docker volume list``).
Binds are paths mapped into the container from
the host, for example in the ``volumes`` section
of a service.
If defined, all host binds will be ignored globally.
This is useful when you only care about actual
docker volumes. Often host binds are only used
for mapping in configuration. This saves the user
from manually excluding these bind volumes.
SWARM_MODE
~~~~~~~~~~
If defined, containers in swarm stacks are also evaluated.
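
A sketch of these toggles in an env file (values are illustrative):

.. code:: bash

    SWARM_MODE=true            # also evaluate swarm stack containers
    INCLUDE_PROJECT_NAME=true  # prefix backup paths with the project name
    EXCLUDE_BIND_MOUNTS=true   # ignore host binds, keep docker volumes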
Compose Labels
--------------
What is backed up is controlled by simple labels in the compose
yaml file. At any point we can verify this configuration
by running the ``rcb status`` command.
.. code::

    $ docker-compose run --rm backup rcb status
    INFO: Status for compose project 'myproject'
    INFO: Repository: '<restic repository>'
    INFO: Backup currently running?: False
    INFO: --------------- Detected Config ---------------
    INFO: service: mysql
    INFO:  - mysql (is_ready=True)
    INFO: service: mariadb
    INFO:  - mariadb (is_ready=True)
    INFO: service: postgres
    INFO:  - postgres (is_ready=True)
    INFO: service: web
    INFO:  - volume: media
    INFO:  - volume: /srv/files
Here we can see what volumes and databases are detected for backup.
Volumes
~~~~~~~
To enable volume backup for a service we simply add the
``restic-compose-backup.volumes: true`` label. The value
must be ``true``.
Example:
.. code:: yaml
    myservice:
      image: some_image
      labels:
        restic-compose-backup.volumes: true
      volumes:
        - uploaded_media:/srv/media
        - uploaded_files:/srv/files
        - /srv/data:/srv/data

    volumes:
      media:
      files:
This will back up the three volumes mounted to this service.
Their paths in restic will be:
- /volumes/myservice/srv/media
- /volumes/myservice/srv/files
- /volumes/myservice/srv/data
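
Restoring one of these paths later is plain restic. A sketch, assuming
the repository env vars are set and an arbitrary target directory:

.. code:: bash

    restic restore latest --path /volumes --target /tmp/restore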
A simple ``include`` and ``exclude`` filter for what volumes
should be backed up is also available. Note that this
includes or excludes entire volumes; it is not an include/exclude
pattern for files in the volumes.
.. note:: The ``exclude`` and ``include`` filtering is applied on
the source path, not the destination.
Include example that backs up two volumes only:
.. code:: yaml
    myservice:
      image: some_image
      labels:
        restic-compose-backup.volumes: true
        restic-compose-backup.volumes.include: "uploaded_media,uploaded_files"
      volumes:
        - uploaded_media:/srv/media
        - uploaded_files:/srv/files
        - /srv/data:/srv/data

    volumes:
      media:
      files:
Exclude example achieving the same result as the example above.
.. code:: yaml
    example:
      image: some_image
      labels:
        restic-compose-backup.volumes: true
        restic-compose-backup.volumes.exclude: "data"
      volumes:
        # Backed up
        - media:/srv/media
        - files:/srv/files
        # Excluded by filter
        - /srv/data:/srv/data

    volumes:
      media:
      files:
The ``exclude`` and ``include`` tags can be used together
in more complex situations.
mariadb
~~~~~~~
To enable backup of mariadb simply add the
``restic-compose-backup.mariadb: true`` label.
Credentials are fetched from the following environment
variables in the mariadb service. This is the standard
when using the official mariadb_ image.
.. code::
MYSQL_USER
MYSQL_PASSWORD
Backups are done by dumping all databases directly into
restic through stdin using ``mysqldump``. It will appear
in restic as a separate snapshot with path
``/databases/<service_name>/all_databases.sql``.
.. warning:: This will only back up the databases the
   ``MYSQL_USER`` has access to. If you have multiple
   databases this must be taken into consideration.
Example:
.. code:: yaml
mariadb:
image: mariadb:10
labels:
restic-compose-backup.mariadb: true
env_file:
mariadb-credentials.env
volumes:
- mariadb:/var/lib/mysql
volumes:
mariadb:
mysql
~~~~~
To enable backup of mysql simply add the
``restic-compose-backup.mysql: true`` label.
Credentials are fetched from the following environment
variables in the mysql service. This is the standard
when using the official mysql_ image.
.. code::
MYSQL_USER
MYSQL_PASSWORD
Backups are done by dumping all databases directly into
restic through stdin using ``mysqldump``. It will appear
in restic as a separate snapshot with path
``/databases/<service_name>/all_databases.sql``.
.. warning:: This will only back up the databases the
   ``MYSQL_USER`` has access to. If you have multiple
   databases this must be taken into consideration.
Example:
.. code:: yaml
mysql:
image: mysql:5
labels:
restic-compose-backup.mysql: true
env_file:
mysql-credentials.env
volumes:
- mysql:/var/lib/mysql
volumes:
mysql:
postgres
~~~~~~~~
To enable backup of postgres simply add the
``restic-compose-backup.postgres: true`` label.
Credentials are fetched from the following environment
variables in the postgres service. This is the standard
when using the official postgres_ image.
.. code::
POSTGRES_USER
POSTGRES_PASSWORD
POSTGRES_DB
Backups are done by dumping the ``POSTGRES_DB`` directly into
restic through stdin using ``pg_dump``. It will appear
in restic as a separate snapshot with path
``/databases/<service_name>/<POSTGRES_DB>.sql``.
.. warning:: Currently only the ``POSTGRES_DB`` database
is dumped.
Example:
.. code:: yaml
postgres:
image: postgres:11
labels:
# Enables backup of this database
restic-compose-backup.postgres: true
env_file:
postgres-credentials.env
volumes:
- pgdata:/var/lib/postgresql/data
volumes:
pgdata:
.. _mariadb: https://hub.docker.com/_/mariadb
.. _mysql: https://hub.docker.com/_/mysql
.. _postgres: https://hub.docker.com/_/postgres

docs/guide/install.rst

@@ -1,53 +0,0 @@
Introduction
============
Install
-------
restic-compose-backup is available on `docker hub`_.
.. code::
docker pull zettaio/restic-compose-backup
Optionally it can be built from source using the github_ repository.
.. code:: bash
git clone https://github.com/ZettaIO/restic-compose-backup.git
cd restic-compose-backup
# Build and tag the image locally
docker build src/ --tag restic-compose-backup
Bug reports and issues
----------------------
Please report bugs and issues on github_.
Development setup
-----------------
Getting started with local development is fairly simple.
The github_ repository contains a simple ``docker-compose.yaml``
.. code:: bash
docker-compose up -d
# Enter the container in sh
docker-compose run --rm backup sh
The dev compose setup maps in the source from the host
and the spawned backup container will inherit all
the volumes from the backup service, ensuring code changes
propagate during development.
Set up a local venv and install the package in development mode::
python -m venv .venv
. .venv/bin/activate
pip install -e ./src
.. _docker hub: https://hub.docker.com/r/zettaio/restic-compose-backup
.. _github: https://github.com/ZettaIO/restic-compose-backup

docs/guide/rcb.rst

@@ -1,199 +0,0 @@
The `rcb` command
-----------------
The ``rcb`` command is basically what this entire project is.
It provides useful commands for interacting with the compose setup
and restic.
The command can be executed inside the container or through ``run``.
.. code:: bash
    # Get the current status using run
    $ docker-compose run --rm backup rcb status

    # by entering the container
    $ docker-compose exec backup sh
    /restic-compose-backup # rcb status
Log level can be overridden by using the ``--log-level``
flag. This can help you better understand what is going on,
for example by using ``--log-level debug``.
version
~~~~~~~
Displays the version.
Example output::
/restic-compose-backup # rcb version
0.4.0
status
~~~~~~
Shows the general status of our setup. The command performs
the following operations:
- Displays the name of the compose setup
- Displays the repository path
- Tells us if a backup is currently running
- Removes stale backup process containers if they exist
- Checks if the repository is initialized
- Initializes the repository if this is not already done
- Displays what volumes and databases are flagged for backup
Example output::
INFO: Status for compose project 'myproject'
INFO: Repository: '<restic repository>'
INFO: Backup currently running?: False
INFO: --------------- Detected Config ---------------
INFO: service: mysql
INFO: - mysql (is_ready=True)
INFO: service: mariadb
INFO: - mariadb (is_ready=True)
INFO: service: postgres
INFO: - postgres (is_ready=True)
INFO: service: web
INFO: - volume: media
INFO: - volume: /srv/files
alert
~~~~~
Sends a test message to all configured alert backends
and is there for you to verify that alerts are in
fact working and configured correctly.
The format of this message::
subject: myproject: Test Alert
body: Test message
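
Triggering a test alert uses the same ``run`` pattern as the other
subcommands:

.. code:: bash

    docker-compose run --rm backup rcb alert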
snapshots
~~~~~~~~~
Displays the latest snapshots in restic. This can also
be done with ``restic snapshots``.
Example output::
/restic-compose-backup # rcb snapshots
repository f325264e opened successfully, password is correct
ID Time Host Tags Paths
---------------------------------------------------------------------------------------------
19928e1c 2019-12-09 02:07:44 b3038db04ec1 /volumes
7a642f37 2019-12-09 02:07:45 b3038db04ec1 /databases/mysql/all_databases.sql
883dada4 2019-12-09 02:07:46 b3038db04ec1 /databases/mariadb/all_databases.sql
76ef2457 2019-12-09 02:07:47 b3038db04ec1 /databases/postgres/test.sql
---------------------------------------------------------------------------------------------
4 snapshots
backup
~~~~~~
Starts a backup process by spawning a new docker container.
The network stack, mounted volumes, env vars etc. from the
backup service are copied to this container.
We attach to this container, stream the logs, and delete
the container when the backup process is completed. If the
container for any reason is not deleted, it will be removed
in the next backup run, as these containers are tagged with
a unique label and detected.
If anything goes wrong the exit status of the container
is non-zero and the logs from this backup run will be sent
to the user through the configured alerts.
This command is by default called by cron every
day at 02:00 unless configured otherwise. We can also run it
manually if needed.
Running this command will do the following:
* Checks if a backup process is already running.
If so, we alert the user and abort
* Gathers all the volumes configured for backup and starts
the backup process with these volumes mounted into ``/volumes``
* Checks the status of the process and reports to the user
if anything failed
The backup process does the following:
* ``status`` is first called to ensure everything is ok
* Backs up ``/volumes`` if any volumes were mounted
* Backs up each configured database
* Runs ``cleanup`` purging snapshots based on the configured policy
* Checks the health of the repository
Example::
$ docker-compose exec backup sh
/restic-compose-backup # rcb backup
INFO: Starting backup container
INFO: Backup process container: loving_jepsen
INFO: 2019-12-09 04:50:22,817 - INFO: Status for compose project 'restic-compose-backup'
INFO: 2019-12-09 04:50:22,817 - INFO: Repository: '/restic_data'
INFO: 2019-12-09 04:50:22,817 - INFO: Backup currently running?: True
INFO: 2019-12-09 04:50:23,701 - INFO: ------------------------- Detected Config -------------------------
INFO: 2019-12-09 04:50:23,701 - INFO: service: mysql
INFO: 2019-12-09 04:50:23,718 - INFO: - mysql (is_ready=True)
INFO: 2019-12-09 04:50:23,718 - INFO: service: mariadb
INFO: 2019-12-09 04:50:23,726 - INFO: - mariadb (is_ready=True)
INFO: 2019-12-09 04:50:23,727 - INFO: service: postgres
INFO: 2019-12-09 04:50:23,734 - INFO: - postgres (is_ready=True)
INFO: 2019-12-09 04:50:23,735 - INFO: service: web
INFO: 2019-12-09 04:50:23,736 - INFO: - volume: /some/volume
INFO: 2019-12-09 04:50:23,736 - INFO: -------------------------------------------------------------------
INFO: 2019-12-09 04:50:23,736 - INFO: Backing up volumes
INFO: 2019-12-09 04:50:24,661 - INFO: Backing up databases
INFO: 2019-12-09 04:50:24,661 - INFO: Backing up mysql in service mysql
INFO: 2019-12-09 04:50:25,643 - INFO: Backing up mariadb in service mariadb
INFO: 2019-12-09 04:50:26,580 - INFO: Backing up postgres in service postgres
INFO: 2019-12-09 04:50:27,555 - INFO: Forget outdated snapshots
INFO: 2019-12-09 04:50:28,457 - INFO: Prune stale data freeing storage space
INFO: 2019-12-09 04:50:31,547 - INFO: Checking the repository for errors
INFO: 2019-12-09 04:50:32,869 - INFO: Backup completed
INFO: Backup container exit code: 0
crontab
~~~~~~~
Generates and verifies the crontab. This is done automatically when
the container starts. It can be used to verify the configuration.
Example output::
/restic-compose-backup # rcb crontab
10 2 * * * source /env.sh && rcb backup > /proc/1/fd/1
cleanup
~~~~~~~
Purges outdated snapshots based on the configured policy (``RESTIC_KEEP_*``
env variables). It runs ``restic forget`` and ``restic prune``.
Example output::
/restic-compose-backup # rcb cleanup
2019-12-09 05:09:52,892 - INFO: Forget outdated snapshots
2019-12-09 05:09:53,776 - INFO: Prune stale data freeing storage space
start-backup-process
~~~~~~~~~~~~~~~~~~~~
This can only be executed by the backup process container.
Attempting to run this command in the backup service
will simply tell you it's not possible.
The backup process is doing the following:
* ``status`` is first called to ensure everything is ok
* Backs up ``/volumes`` if any volumes were mounted
* Backs up each configured database
* Runs ``cleanup`` purging snapshots based on the configured policy
* Checks the health of the repository

docs/index.rst

@@ -6,16 +6,12 @@
 Welcome to restic-compose-backup's documentation!
 =================================================

-Simple backup with restic_ for docker-compose setups.
+Simple backup with restic for small to medium docker-compose setups.

 .. toctree::
-   :maxdepth: 3
+   :maxdepth: 2
    :caption: Contents:

-   guide/install
-   guide/configuration
-   guide/rcb
-   guide/advanced

 Indices and tables
 ==================

@@ -23,5 +19,3 @@ Indices and tables
 * :ref:`genindex`
 * :ref:`modindex`
 * :ref:`search`

-.. _restic: https://restic.net/

@@ -1,196 +0,0 @@
{
"Id": "efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd",
"Created": "2020-03-05T21:07:34.88927951Z",
"Path": "docker-entrypoint.sh",
"Args": ["mysqld"],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 4887,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-03-06T01:31:39.728842925Z",
"FinishedAt": "2020-03-06T01:31:33.847583199Z"
},
"Image": "sha256:1fd0e719c4952e22a99e30662fdd7daad53e7e53fbe135d543cc6b82be213951",
"ResolvConfPath": "/var/lib/docker/containers/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd/hostname",
"HostsPath": "/var/lib/docker/containers/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd/hosts",
"LogPath": "/var/lib/docker/containers/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd-json.log",
"Name": "/restic-compose-backup_mariadb_1",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": ["restic-compose-backup_mariadbdata:/var/lib/mysql:rw"],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "restic-compose-backup_default",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [0, 0],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": ["/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware"],
"ReadonlyPaths": ["/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger"]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/96e51e6162c0cb4385248375192ec777dd42b3ae7973e402de351f3932c502d0-init/diff:/var/lib/docker/overlay2/38780a41f93b7a20de03f1d76febb885f9213906fb30bad17cb3ad231fb7ce43/diff:/var/lib/docker/overlay2/a2abce521690b1baf6aa61e109a4659cb4272936871bc1afa73271eb8e453449/diff:/var/lib/docker/overlay2/a696286588d1d33b994b7f6e31c176c5f7e67c4f757d730323a7b6591d55f786/diff:/var/lib/docker/overlay2/c4bd8133c0d9547945d38a9998439082ce7b53df7e64737add5a5c824e6f67f2/diff:/var/lib/docker/overlay2/110e275ef21b8c9cc2cd0cce312fed5aabceb056460f637b958dfee56b7b3be8/diff:/var/lib/docker/overlay2/831c8a624e424f298766028e76a8ac08df0c5cf4564f63cae61330a8bce0cf63/diff:/var/lib/docker/overlay2/7ad8ae774951ec40c68b0993ef07ef3d70aa8aed44ea9f1e4d943ca5404cc717/diff:/var/lib/docker/overlay2/19bca9fb61ef1156f8a97313c126a6c06d7fe44a6c49e3affe16f50f2d5e56ff/diff:/var/lib/docker/overlay2/dcd4dda04d06b0a0c7e78517c6209fd67735b3027afda2c85a92de37ff7297d1/diff:/var/lib/docker/overlay2/babf41f5fe1f7b88c17cfce27214a4ad9473b0f8e0f118db948d2acddf4d4798/diff:/var/lib/docker/overlay2/b5f97865010acd5b04b4031d6223cd0b34fab89267891d61256ea16936be52f8/diff:/var/lib/docker/overlay2/6aba0159141ebb6d6783181d154c65046447b7d2bebce65d44c4939ba7943cca/diff:/var/lib/docker/overlay2/c71c34fe0e7e95409a9fc18698f0aee505940fd96aa3718836e2d89f3cfb2d49/diff:/var/lib/docker/overlay2/3be993436e2a6764a6c3c57a2e948f7a57e45ed0ec26cdd3366f4c1106c69869/diff",
"MergedDir": "/var/lib/docker/overlay2/96e51e6162c0cb4385248375192ec777dd42b3ae7973e402de351f3932c502d0/merged",
"UpperDir": "/var/lib/docker/overlay2/96e51e6162c0cb4385248375192ec777dd42b3ae7973e402de351f3932c502d0/diff",
"WorkDir": "/var/lib/docker/overlay2/96e51e6162c0cb4385248375192ec777dd42b3ae7973e402de351f3932c502d0/work"
},
"Name": "overlay2"
},
"Mounts": [{
"Type": "volume",
"Name": "restic-compose-backup_mariadbdata",
"Source": "/var/lib/docker/volumes/restic-compose-backup_mariadbdata/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}],
"Config": {
"Hostname": "efa5196b4959",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"3306/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": ["MYSQL_ROOT_PASSWORD=my-secret-pw", "MYSQL_DATABASE=mydb", "MYSQL_USER=myuser", "MYSQL_PASSWORD=mypassword", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "GOSU_VERSION=1.10", "GPG_KEYS=177F4010FE56CA3336300305F1656F24C74CD1D8", "MARIADB_MAJOR=10.4", "MARIADB_VERSION=1:10.4.12+maria~bionic"],
"Cmd": ["mysqld"],
"Image": "mariadb:10",
"Volumes": {
"/var/lib/mysql": {}
},
"WorkingDir": "",
"Entrypoint": ["docker-entrypoint.sh"],
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "c6ecde85ad111d324a4c97cde3a03898074b026c68ecffc0f7020e5eca9a71d7",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "restic-compose-backup",
"com.docker.compose.project.config_files": "docker-compose.yaml",
"com.docker.compose.project.working_dir": "C:\\Users\\efors\\projects\\zetta.io\\projects\\restic-compose-backup",
"com.docker.compose.service": "mariadb",
"com.docker.compose.version": "1.25.4",
"restic-compose-backup.mariadb": "True"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "d462bb5dfdd26aba12b8a395ac90262ab00d65408bf60dfa1ade0ab6a1851c70",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"3306/tcp": null
},
"SandboxKey": "/var/run/docker/netns/d462bb5dfdd2",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"restic-compose-backup_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": ["efa5196b4959", "mariadb"],
"NetworkID": "8f3349b0debec88f9f48fff02d84cda3feae0a0e8c516e8b42e5777bb03db1cb",
"EndpointID": "0b75d3f00aa077fe95156bc80463d33fb21d241a287b33c06769047855c38400",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:03",
"DriverOpts": null
}
}
}
}

@@ -1,207 +0,0 @@
{
"Id": "56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76",
"Created": "2020-03-06T22:36:17.266061631Z",
"Path": "docker-entrypoint.sh",
"Args": ["mysqld"],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 35967,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-03-06T22:36:17.636265528Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1fd0e719c4952e22a99e30662fdd7daad53e7e53fbe135d543cc6b82be213951",
"ResolvConfPath": "/var/lib/docker/containers/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76/hostname",
"HostsPath": "/var/lib/docker/containers/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76/hosts",
"LogPath": "/var/lib/docker/containers/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76-json.log",
"Name": "/test_mariadb.1.q4uji32qvw4tkuvwx3pbnbgqq",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [0, 0],
"Isolation": "default",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"Mounts": [{
"Type": "volume",
"Source": "test_mariadbdata",
"Target": "/var/lib/mysql",
"VolumeOptions": {
"Labels": {
"com.docker.stack.namespace": "test"
}
}
}],
"MaskedPaths": ["/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware"],
"ReadonlyPaths": ["/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger"]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/ba8a39bdb1d2e25d373b6b00c764be3d37353e57cf03981c4c3e5a20ae6a602b-init/diff:/var/lib/docker/overlay2/38780a41f93b7a20de03f1d76febb885f9213906fb30bad17cb3ad231fb7ce43/diff:/var/lib/docker/overlay2/a2abce521690b1baf6aa61e109a4659cb4272936871bc1afa73271eb8e453449/diff:/var/lib/docker/overlay2/a696286588d1d33b994b7f6e31c176c5f7e67c4f757d730323a7b6591d55f786/diff:/var/lib/docker/overlay2/c4bd8133c0d9547945d38a9998439082ce7b53df7e64737add5a5c824e6f67f2/diff:/var/lib/docker/overlay2/110e275ef21b8c9cc2cd0cce312fed5aabceb056460f637b958dfee56b7b3be8/diff:/var/lib/docker/overlay2/831c8a624e424f298766028e76a8ac08df0c5cf4564f63cae61330a8bce0cf63/diff:/var/lib/docker/overlay2/7ad8ae774951ec40c68b0993ef07ef3d70aa8aed44ea9f1e4d943ca5404cc717/diff:/var/lib/docker/overlay2/19bca9fb61ef1156f8a97313c126a6c06d7fe44a6c49e3affe16f50f2d5e56ff/diff:/var/lib/docker/overlay2/dcd4dda04d06b0a0c7e78517c6209fd67735b3027afda2c85a92de37ff7297d1/diff:/var/lib/docker/overlay2/babf41f5fe1f7b88c17cfce27214a4ad9473b0f8e0f118db948d2acddf4d4798/diff:/var/lib/docker/overlay2/b5f97865010acd5b04b4031d6223cd0b34fab89267891d61256ea16936be52f8/diff:/var/lib/docker/overlay2/6aba0159141ebb6d6783181d154c65046447b7d2bebce65d44c4939ba7943cca/diff:/var/lib/docker/overlay2/c71c34fe0e7e95409a9fc18698f0aee505940fd96aa3718836e2d89f3cfb2d49/diff:/var/lib/docker/overlay2/3be993436e2a6764a6c3c57a2e948f7a57e45ed0ec26cdd3366f4c1106c69869/diff",
"MergedDir": "/var/lib/docker/overlay2/ba8a39bdb1d2e25d373b6b00c764be3d37353e57cf03981c4c3e5a20ae6a602b/merged",
"UpperDir": "/var/lib/docker/overlay2/ba8a39bdb1d2e25d373b6b00c764be3d37353e57cf03981c4c3e5a20ae6a602b/diff",
"WorkDir": "/var/lib/docker/overlay2/ba8a39bdb1d2e25d373b6b00c764be3d37353e57cf03981c4c3e5a20ae6a602b/work"
},
"Name": "overlay2"
},
"Mounts": [{
"Type": "volume",
"Name": "test_mariadbdata",
"Source": "/var/lib/docker/volumes/test_mariadbdata/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}],
"Config": {
"Hostname": "56c57903b6da",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"3306/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": ["MYSQL_DATABASE=mydb", "MYSQL_PASSWORD=mypassword", "MYSQL_ROOT_PASSWORD=my-secret-pw", "MYSQL_USER=myuser", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "GOSU_VERSION=1.10", "GPG_KEYS=177F4010FE56CA3336300305F1656F24C74CD1D8", "MARIADB_MAJOR=10.4", "MARIADB_VERSION=1:10.4.12+maria~bionic"],
"Cmd": ["mysqld"],
"Image": "mariadb:10@sha256:d1ceee944c90ee3b596266de1b0ac25d2f34adbe9c35156b75bcb9a7047c7545",
"Volumes": {
"/var/lib/mysql": {}
},
"WorkingDir": "",
"Entrypoint": ["docker-entrypoint.sh"],
"OnBuild": null,
"Labels": {
"com.docker.stack.namespace": "test",
"com.docker.swarm.node.id": "gj73oe0vgmldlv2pdcj243231",
"com.docker.swarm.service.id": "jewh88xvythjkga24wy1thxc2",
"com.docker.swarm.service.name": "test_mariadb",
"com.docker.swarm.task": "",
"com.docker.swarm.task.id": "q4uji32qvw4tkuvwx3pbnbgqq",
"com.docker.swarm.task.name": "test_mariadb.1.q4uji32qvw4tkuvwx3pbnbgqq",
"restic-compose-backup.mariadb": "true"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5aa81b0859dfd6f6be629eb966ce365f22dc86620359cce3e3d25d5291b539db",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"3306/tcp": null
},
"SandboxKey": "/var/run/docker/netns/5aa81b0859df",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"test_default": {
"IPAMConfig": {
"IPv4Address": "10.0.1.3"
},
"Links": null,
"Aliases": ["56c57903b6da"],
"NetworkID": "8aweh54u31eq3i47vqdr2aonc",
"EndpointID": "5369b4c82a479a3e9dfb3547cb7ac3a0fab888e38ad5c1d0ad02b0e9a9523a64",
"Gateway": "",
"IPAddress": "10.0.1.3",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:0a:00:01:03",
"DriverOpts": null
}
}
}
}

@@ -1,93 +0,0 @@
{
"ID": "k5427pk4t7ss4d7ylacumeavz",
"Version": {
"Index": 30
},
"CreatedAt": "2020-03-08T17:25:59.451947759Z",
"UpdatedAt": "2020-03-08T17:26:38.552002711Z",
"Spec": {
"Labels": {},
"Role": "manager",
"Availability": "active"
},
"Description": {
"Hostname": "docker-desktop",
"Platform": {
"Architecture": "x86_64",
"OS": "linux"
},
"Resources": {
"NanoCPUs": 4000000000,
"MemoryBytes": 2085535744
},
"Engine": {
"EngineVersion": "19.03.5",
"Plugins": [{
"Type": "Log",
"Name": "awslogs"
}, {
"Type": "Log",
"Name": "fluentd"
}, {
"Type": "Log",
"Name": "gcplogs"
}, {
"Type": "Log",
"Name": "gelf"
}, {
"Type": "Log",
"Name": "journald"
}, {
"Type": "Log",
"Name": "json-file"
}, {
"Type": "Log",
"Name": "local"
}, {
"Type": "Log",
"Name": "logentries"
}, {
"Type": "Log",
"Name": "splunk"
}, {
"Type": "Log",
"Name": "syslog"
}, {
"Type": "Network",
"Name": "bridge"
}, {
"Type": "Network",
"Name": "host"
}, {
"Type": "Network",
"Name": "ipvlan"
}, {
"Type": "Network",
"Name": "macvlan"
}, {
"Type": "Network",
"Name": "null"
}, {
"Type": "Network",
"Name": "overlay"
}, {
"Type": "Volume",
"Name": "local"
}]
},
"TLSInfo": {
"TrustRoot": "-----BEGIN CERTIFICATE-----\nMIIBazCCARCgAwIBAgIUfx7TP8c4SHCrwPPxjSFJQcfTP5QwCgYIKoZIzj0EAwIw\nEzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjAwMzA4MTcyMTAwWhcNNDAwMzAzMTcy\nMTAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABGOa/9Rdd6qNc24wvuL/I9t5Vt3MJzlwC+WN0R6HrA4Ik1h2dmSRZTQqnCI7\nWh16y+PLaFwIfN0JkN4FrpnUBsyjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBTAPDjHRwjQhNGUxqE3COHCOQrOkjAKBggqhkjO\nPQQDAgNJADBGAiEAxd/lPEKy3gt3nfZ8DX7kDaaNH8jSPgCBx3ejUs3SoaUCIQD3\nZ8dVxNvG4+Gvn28mDjWhTNLCn0BYW6JFE8eTI0xv4A==\n-----END CERTIFICATE-----\n",
"CertIssuerSubject": "MBMxETAPBgNVBAMTCHN3YXJtLWNh",
"CertIssuerPublicKey": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEY5r/1F13qo1zbjC+4v8j23lW3cwnOXAL5Y3RHoesDgiTWHZ2ZJFlNCqcIjtaHXrL48toXAh83QmQ3gWumdQGzA=="
}
},
"Status": {
"State": "ready",
"Addr": "192.168.65.3"
},
"ManagerStatus": {
"Leader": true,
"Reachability": "reachable",
"Addr": "192.168.65.3:2377"
}
}

@@ -2,7 +2,6 @@
 - Update version in `setup.py`
 - Update version in `docs/conf.py`
-- Update version in `restic_compose_backup/__init__.py`
 - Build and tag image
 - push: `docker push zettaio/restic-compose-backup:<version>`
 - Ensure RTD has new docs published

@@ -13,9 +12,9 @@ When releasing a bugfix version we need to update the
 main image as well.

 ```bash
-docker build src --tag zettaio/restic-compose-backup:0.6
-docker build src --tag zettaio/restic-compose-backup:0.6.0
-docker push zettaio/restic-compose-backup:0.5
-docker push zettaio/restic-compose-backup:0.5.0
+docker build . --tag zettaio/restic-compose-backup:0.3
+docker build . --tag zettaio/restic-compose-backup:0.3.1
+docker push zettaio/restic-compose-backup:0.3
+docker push zettaio/restic-compose-backup:0.3.1
 ```

restic_compose_backup.env

@@ -1,13 +1,6 @@
 # DON'T COMMIT THIS FILE IF YOU MODIFY IN DEV
-# DOCKER_HOST=unix://tmp/docker.sock
-# DOCKER_TLS_VERIFY=1
-# DOCKER_CERT_PATH=''
+DOCKER_BASE_URL=unix://tmp/docker.sock
-SWARM_MODE=true
-INCLUDE_PROJECT_NAME=false
-EXCLUDE_BIND_MOUNTS=false
 RESTIC_REPOSITORY=/restic_data
 RESTIC_PASSWORD=password

@@ -17,7 +10,6 @@ RESTIC_KEEP_MONTHLY=12
 RESTIC_KEEP_YEARLY=3
 LOG_LEVEL=info
-CRON_SCHEDULE=10 2 * * *

 # EMAIL_HOST=
 # EMAIL_PORT=

src/.dockerignore

@@ -1,3 +1,19 @@
+.venv/
+.vscode/
+extras/
+restic_cache/
+restic_data/
 tests/
+.gitignore
+*.env
+*.log
+docker-compose.yaml
+*.ini
+*.egg-info
 __pycache__
 .DS_Store
+.git
+.pytest_cache
+.dockerignore
+build/
+docs/

src/Dockerfile

@@ -1,14 +1,10 @@
 FROM restic/restic:0.9.6

-RUN apk update && apk add python3 \
-    dcron \
-    mariadb-client \
-    postgresql-client \
-    mariadb-connector-c-dev
+RUN apk update && apk add python3 dcron mariadb-client postgresql-client

 ADD . /restic-compose-backup
 WORKDIR /restic-compose-backup
-RUN pip3 install -U pip setuptools wheel && pip3 install -e .
+RUN pip3 install -U pip setuptools && pip3 install -e .

 ENV XDG_CACHE_HOME=/cache
 ENTRYPOINT []

src/crontab

@@ -1,2 +1 @@
-10 2 * * * source /env.sh && rcb backup > /proc/1/fd/1
+0 2 * * * source /env.sh && rcb backup > /proc/1/fd/1

src/entrypoint.sh

@@ -1,10 +1,7 @@
 #!/bin/sh

 # Dump all env vars so we can source them in cron jobs
-rcb dump-env > /env.sh
+printenv | sed 's/^\(.*\)$/export \1/g' > /env.sh

-# Write crontab
-rcb crontab > crontab

 # start cron in the foreground
 crontab crontab
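
Context for this change: cron starts jobs with a nearly empty environment, so both variants persist the container's env vars to a file that each job re-imports. A minimal sketch of the mechanism:

```bash
# dump the current environment as export statements (the 0.3.1 variant)
printenv | sed 's/^\(.*\)$/export \1/g' > /env.sh
# each cron job then restores it before running:
# source /env.sh && rcb backup
```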

src/restic_compose_backup/__init__.py

@@ -1 +0,0 @@
__version__ = '0.7.1'

src/restic_compose_backup/alerts/discord.py

@@ -30,14 +30,11 @@ class DiscordWebhookAlert(BaseAlert):
     def send(self, subject: str = None, body: str = None, alert_type: str = None):
         """Send basic webhook request. Max embed size is 6000"""
         logger.info("Triggering discord webhook")
-        # NOTE: The title size is 2048
-        #       The max description size is 2048
-        #       Total embed size limit is 6000 characters (per embed)
         data = {
             'embeds': [
                 {
-                    'title': subject[-256:],
-                    'description': body[-2048:] if body else "",
+                    'title': subject,
+                    'description': body[:5000],
                 },
             ]
         }

src/restic_compose_backup/alerts/smtp.py

@@ -15,7 +15,7 @@ class SMTPAlert(BaseAlert):
         self.host = host
         self.port = port
         self.user = user
-        self.password = password or ""
+        self.password = password
         self.to = to

     @classmethod

@@ -34,7 +34,7 @@ class SMTPAlert(BaseAlert):
     @property
     def properly_configured(self) -> bool:
-        return self.host and self.port and self.user and len(self.to) > 0
+        return self.host and self.port and self.user and self.password and len(self.to) > 0

     def send(self, subject: str = None, body: str = None, alert_type: str = 'INFO'):
         # send_mail("Hello world!")

src/restic_compose_backup/backup_runner.py

@@ -1,7 +1,8 @@
 import logging
 import os
+import docker

-from restic_compose_backup import utils
+from restic_compose_backup.config import Config

 logger = logging.getLogger(__name__)

@@ -9,7 +10,8 @@ logger = logging.getLogger(__name__)
 def run(image: str = None, command: str = None, volumes: dict = None,
         environment: dict = None, labels: dict = None, source_container_id: str = None):
     logger.info("Starting backup container")
-    client = utils.docker_client()
+    config = Config()
+    client = docker.DockerClient(base_url=config.docker_base_url)

     container = client.containers.run(
         image,

@@ -17,7 +19,7 @@ def run(image: str = None, command: str = None, volumes: dict = None,
         labels=labels,
         # auto_remove=True,  # We remove the container further down
         detach=True,
-        environment=environment + ['BACKUP_PROCESS_CONTAINER=true'],
+        environment=environment,
         volumes=volumes,
         network_mode=f'container:{source_container_id}',  # Reuse original container's network stack.
         working_dir=os.getcwd(),

@@ -33,13 +35,7 @@ def run(image: str = None, command: str = None, volumes: dict = None,
     line = ""
     while True:
         try:
-            # Make log streaming work for docker ce 17 and 18.
-            # For some reason strings are returned instead of bytes.
-            data = next(stream)
-            if isinstance(data, bytes):
-                line += data.decode()
-            elif isinstance(data, str):
-                line += data
+            line += next(stream).decode()
             if line.endswith('\n'):
                 break
         except StopIteration:

@@ -55,9 +51,9 @@ def run(image: str = None, command: str = None, volumes: dict = None,
             fd.write('\n')
         logger.info(line)

-    container.wait()
     container.reload()
     logger.debug("Container ExitCode %s", container.attrs['State']['ExitCode'])
+    container.stop()
     container.remove()

     return container.attrs['State']['ExitCode']
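
The `run()` call above maps roughly onto this docker CLI sketch; `--network container:<id>` is the documented way to reuse another container's network stack (image, command, and the id variable are placeholders):

```bash
docker run --detach \
  --network "container:$SOURCE_CONTAINER_ID" \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  zettaio/restic-compose-backup rcb start-backup-process
```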

src/restic_compose_backup/cli.py

@@ -1,5 +1,4 @@
 import argparse
-import os
 import logging

 from restic_compose_backup import (

@@ -10,7 +9,6 @@ from restic_compose_backup import (
 )
 from restic_compose_backup.config import Config
 from restic_compose_backup.containers import RunningContainers
-from restic_compose_backup import cron, utils

 logger = logging.getLogger(__name__)

@@ -44,73 +42,29 @@ def main():
     elif args.action == 'alert':
         alert(config, containers)

-    elif args.action == 'version':
-        import restic_compose_backup
-        print(restic_compose_backup.__version__)

-    elif args.action == "crontab":
-        crontab(config)

-    elif args.action == "dump-env":
-        dump_env()

-    # Random test stuff here
-    elif args.action == "test":
-        nodes = utils.get_swarm_nodes()
-        print("Swarm nodes:")
-        for node in nodes:
-            addr = node.attrs['Status']['Addr']
-            state = node.attrs['Status']['State']
-            print(' - {} {} {}'.format(node.id, addr, state))

 def status(config, containers):
     """Outputs the backup config for the compose setup"""
     logger.info("Status for compose project '%s'", containers.project_name)
     logger.info("Repository: '%s'", config.repository)
     logger.info("Backup currently running?: %s", containers.backup_process_running)
-    logger.info("Include project name in backup path?: %s", utils.is_true(config.include_project_name))
-    logger.debug("Exclude bind mounts from backups?: %s", utils.is_true(config.exclude_bind_mounts))

-    logger.info("Checking docker availability")
-    utils.list_containers()

-    if containers.stale_backup_process_containers:
-        utils.remove_containers(containers.stale_backup_process_containers)

-    # Check if repository is initialized with restic snapshots
-    if not restic.is_initialized(config.repository):
-        logger.info("Could not get repository info. Attempting to initialize it.")
-        result = restic.init_repo(config.repository)
-        if result == 0:
-            logger.info("Successfully initialized repository: %s", config.repository)
-        else:
-            logger.error("Failed to initialize repository")

     logger.info("%s Detected Config %s", "-" * 25, "-" * 25)
-    # Start making snapshots
+    logger.info("Initializing repository (may fail if already initialized)")
+    restic.init_repo(config.repository)

     backup_containers = containers.containers_for_backup()
     for container in backup_containers:
         logger.info('service: %s', container.service_name)

         if container.volume_backup_enabled:
             for mount in container.filter_mounts():
-                logger.info(
-                    ' - volume: %s -> %s',
-                    mount.source,
-                    container.get_volume_backup_destination(mount, '/volumes'),
-                )
+                logger.info(' - volume: %s', mount.source)

         if container.database_backup_enabled:
             instance = container.instance
             ping = instance.ping()
-            logger.info(
-                ' - %s (is_ready=%s) -> %s',
-                instance.container_type,
-                ping == 0,
-                instance.backup_destination_path(),
-            )
+            logger.info(' - %s (is_ready=%s)', instance.container_type, ping == 0)
             if ping != 0:
                 logger.error("Database '%s' in service %s cannot be reached",
                              instance.container_type, container.service_name)

@@ -125,16 +79,7 @@
     """Request a backup to start"""
     # Make sure we don't spawn multiple backup processes
     if containers.backup_process_running:
-        alerts.send(
-            subject="Backup process container already running",
-            body=(
-                "A backup process container is already running. \n"
-                f"Id: {containers.backup_process_container.id}\n"
-                f"Name: {containers.backup_process_container.name}\n"
-            ),
-            alert_type='ERROR',
-        )
-        raise RuntimeError("Backup process already running")
+        raise ValueError("Backup process already running")
# Map all volumes from the backup container into the backup process container # Map all volumes from the backup container into the backup process container
volumes = containers.this_container.volumes volumes = containers.this_container.volumes
@ -143,7 +88,6 @@ def backup(config, containers):
mounts = containers.generate_backup_mounts('/volumes') mounts = containers.generate_backup_mounts('/volumes')
volumes.update(mounts) volumes.update(mounts)
logger.debug('Starting backup container with image %s', containers.this_container.image)
try: try:
result = backup_runner.run( result = backup_runner.run(
image=containers.this_container.image, image=containers.this_container.image,
@ -152,7 +96,7 @@ def backup(config, containers):
environment=containers.this_container.environment, environment=containers.this_container.environment,
source_container_id=containers.this_container.id, source_container_id=containers.this_container.id,
labels={ labels={
containers.backup_process_label: 'True', "restic-compose-backup.backup_process": 'True',
"com.docker.compose.project": containers.project_name, "com.docker.compose.project": containers.project_name,
}, },
) )
@ -178,50 +122,30 @@ def backup(config, containers):
def start_backup_process(config, containers): def start_backup_process(config, containers):
"""The actual backup process running inside the spawned container""" """The actual backup process running inside the spawned container"""
if not utils.is_true(os.environ.get('BACKUP_PROCESS_CONTAINER')): if (not containers.backup_process_container
or containers.this_container == containers.backup_process_container is False):
logger.error( logger.error(
"Cannot run backup process in this container. Use backup command instead. " "Cannot run backup process in this container. Use backup command instead. "
"This will spawn a new container with the necessary mounts." "This will spawn a new container with the necessary mounts."
) )
alerts.send( return
subject="Cannot run backup process in this container",
body=(
"Cannot run backup process in this container. Use backup command instead. "
"This will spawn a new container with the necessary mounts."
)
)
exit(1)
status(config, containers) status(config, containers)
errors = False errors = False
# Did we actually get any volumes mounted? # Back up volumes
try: try:
has_volumes = os.stat('/volumes') is not None logger.info('Backing up volumes')
except FileNotFoundError: vol_result = restic.backup_files(config.repository, source='/volumes')
logger.warning("Found no volumes to back up") logger.debug('Volume backup exit code: %s', vol_result)
has_volumes = False if vol_result != 0:
logger.error('Backup command exited with non-zero code: %s', vol_result)
# Warn if there is nothing to do
if len(containers.containers_for_backup()) == 0 and not has_volumes:
logger.error("No containers for backup found")
exit(1)
if has_volumes:
try:
logger.info('Backing up volumes')
vol_result = restic.backup_files(config.repository, source='/volumes')
logger.debug('Volume backup exit code: %s', vol_result)
if vol_result != 0:
logger.error('Volume backup exited with non-zero code: %s', vol_result)
errors = True
except Exception as ex:
logger.error('Exception raised during volume backup')
logger.exception(ex)
errors = True errors = True
except Exception as ex:
logger.exception(ex)
errors = True
# back up databases # back up databases
logger.info('Backing up databases')
for container in containers.containers_for_backup(): for container in containers.containers_for_backup():
if container.database_backup_enabled: if container.database_backup_enabled:
try: try:
@ -237,25 +161,14 @@ def start_backup_process(config, containers):
errors = True errors = True
if errors: if errors:
logger.error('Exit code: %s', errors)
exit(1) exit(1)
# Only run cleanup if backup was successful # Only run cleanup if backup was successful
result = cleanup(config, container) result = cleanup(config, container)
logger.debug('cleanup exit code: %s', result) logger.debug('cleanup exit code: %s', errors)
if result != 0: if result != 0:
logger.error('cleanup exit code: %s', result)
exit(1) exit(1)
# Test the repository for errors
logger.info("Checking the repository for errors")
result = restic.check(config.repository)
if result != 0:
logger.error('Check exit code: %s', result)
exit(1)
logger.info('Backup completed')
def cleanup(config, containers): def cleanup(config, containers):
"""Run forget / prune to minimize storage space""" """Run forget / prune to minimize storage space"""
@ -288,35 +201,11 @@ def alert(config, containers):
) )
def crontab(config):
"""Generate the crontab"""
print(cron.generate_crontab(config))
def dump_env():
"""Dump all environment variables to a script that can be sourced from cron"""
print("#!/bin/bash")
print("# This file was generated by restic-compose-backup")
for key, value in os.environ.items():
print("export {}='{}'".format(key, value))
def parse_args(): def parse_args():
parser = argparse.ArgumentParser(prog='restic_compose_backup') parser = argparse.ArgumentParser(prog='restic_compose_backup')
parser.add_argument( parser.add_argument(
'action', 'action',
choices=[ choices=['status', 'snapshots', 'backup', 'start-backup-process', 'alert', 'cleanup'],
'status',
'snapshots',
'backup',
'start-backup-process',
'alert',
'cleanup',
'version',
'crontab',
'dump-env',
'test',
],
) )
parser.add_argument( parser.add_argument(
'--log-level', '--log-level',
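One behavioral difference worth calling out: master probes the repository with `snapshots --last` and only runs `init` when the probe fails, while 0.3.1 calls `init` unconditionally and tolerates the error. A rough standalone sketch of the probe-then-init pattern (repository path is a placeholder, and RESTIC_PASSWORD is assumed to be exported; this is not the project's code verbatim):

```python
import subprocess

def is_initialized(repository: str) -> bool:
    """Probe the repo: restic exits non-zero if it cannot read snapshots."""
    result = subprocess.run(
        ["restic", "-r", repository, "snapshots", "--last"],
        capture_output=True,
    )
    return result.returncode == 0

repo = "/tmp/backup"  # example path
if not is_initialized(repo):
    subprocess.run(["restic", "-r", repo, "init"])
```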

View File

@@ -53,12 +53,14 @@ def run(cmd: List[str]) -> int:
     child = Popen(cmd, stdout=PIPE, stderr=PIPE)
     stdoutdata, stderrdata = child.communicate()

-    if stdoutdata.strip():
-        log_std('stdout', stdoutdata.decode(),
-                logging.DEBUG if child.returncode == 0 else logging.ERROR)
+    if stdoutdata:
+        logger.debug(stdoutdata.decode().strip())
+        logger.debug('-' * 28)

-    if stderrdata.strip():
-        log_std('stderr', stderrdata.decode(), logging.ERROR)
+    if stderrdata:
+        logger.error('%s STDERR %s', '-' * 10, '-' * 10)
+        logger.error(stderrdata.decode().strip())
+        logger.error('-' * 28)

     logger.debug("returncode %s", child.returncode)
     return child.returncode
@@ -69,23 +71,3 @@ def run_capture_std(cmd: List[str]) -> Tuple[str, str]:
     logger.debug('cmd: %s', ' '.join(cmd))
     child = Popen(cmd, stdout=PIPE, stderr=PIPE)
     return child.communicate()
-
-
-def log_std(source: str, data: str, level: int):
-    if isinstance(data, bytes):
-        data = data.decode()
-
-    if not data.strip():
-        return
-
-    log_func = logger.debug if level == logging.DEBUG else logger.error
-    log_func('%s %s %s', '-' * 10, source, '-' * 10)
-
-    lines = data.split('\n')
-    if lines[-1] == '':
-        lines.pop()
-
-    for line in lines:
-        log_func(line)
-
-    log_func('-' * 28)
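For context, the behavior `log_std` centralizes on master: command output is logged at DEBUG when the command succeeds and at ERROR when it fails. A condensed, self-contained sketch of that pattern (illustrative only, not the module's actual code):

```python
import logging
from subprocess import Popen, PIPE

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s: %(message)s")
logger = logging.getLogger("commands")

def run(cmd):
    """Run a command; log stdout at DEBUG on success, everything at ERROR on failure."""
    child = Popen(cmd, stdout=PIPE, stderr=PIPE)
    stdout, stderr = child.communicate()
    level = logging.DEBUG if child.returncode == 0 else logging.ERROR
    for line in stdout.decode().splitlines():
        logger.log(level, line)
    for line in stderr.decode().splitlines():
        logger.error(line)
    return child.returncode

print(run(["echo", "hello"]))  # logs "hello" at DEBUG, prints 0
```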

View File

@@ -2,19 +2,12 @@ import os

 class Config:
-    default_backup_command = "source /env.sh && rcb backup > /proc/1/fd/1"
-    default_crontab_schedule = "0 2 * * *"
-
     """Bag for config values"""
     def __init__(self, check=True):
         # Mandatory values
         self.repository = os.environ.get('RESTIC_REPOSITORY')
         self.password = os.environ.get('RESTIC_REPOSITORY')
-        self.cron_schedule = os.environ.get('CRON_SCHEDULE') or self.default_crontab_schedule
-        self.cron_command = os.environ.get('CRON_COMMAND') or self.default_backup_command
-        self.swarm_mode = os.environ.get('SWARM_MODE') or False
-        self.include_project_name = os.environ.get('INCLUDE_PROJECT_NAME') or False
-        self.exclude_bind_mounts = os.environ.get('EXCLUDE_BIND_MOUNTS') or False
+        self.docker_base_url = os.environ.get('DOCKER_BASE_URL') or "unix://tmp/docker.sock"

         # Log
         self.log_level = os.environ.get('LOG_LEVEL')
@@ -34,6 +27,3 @@ class Config:
         if not self.password:
             raise ValueError("RESTIC_REPOSITORY env var not set")
-
-
-config = Config()
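Two observations on this hunk. First, both sides read `self.password` from `RESTIC_REPOSITORY`, which looks like a typo carried across versions. Second, the `os.environ.get(...) or default` pattern treats an explicitly set empty string as unset, and values like `SWARM_MODE=false` come back as the truthy string `'false'` (which is why the label helpers route through `utils.is_true`). A quick illustration (variable names chosen for the example, not copied from the module):

```python
import os

os.environ["CRON_SCHEDULE"] = ""      # explicitly set, but empty
os.environ["SWARM_MODE"] = "false"    # set to a "false" string

cron_schedule = os.environ.get("CRON_SCHEDULE") or "0 2 * * *"
swarm_mode = os.environ.get("SWARM_MODE") or False

print(cron_schedule)     # "0 2 * * *"  (empty string fell back to default)
print(bool(swarm_mode))  # True         ("false" is a non-empty string)
```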

View File

@@ -1,12 +1,8 @@
 import os
-import logging
 from pathlib import Path
 from typing import List

-from restic_compose_backup import enums, utils
-from restic_compose_backup.config import config
-
-logger = logging.getLogger(__name__)
+from restic_compose_backup import utils

 VOLUME_TYPE_BIND = "bind"
 VOLUME_TYPE_VOLUME = "volume"
@@ -31,8 +27,8 @@ class Container:
         if self._labels is None:
             raise ValueError('Container meta missing Config->Labels')

-        self._include = self._parse_pattern(self.get_label(enums.LABEL_VOLUMES_INCLUDE))
-        self._exclude = self._parse_pattern(self.get_label(enums.LABEL_VOLUMES_EXCLUDE))
+        self._include = self._parse_pattern(self.get_label('restic-compose-backup.volumes.include'))
+        self._exclude = self._parse_pattern(self.get_label('restic-compose-backup.volumes.exclude'))

     @property
     def instance(self) -> 'Container':
@@ -64,45 +60,11 @@ class Container:
         """Image name"""
         return self.get_config('Image')

-    @property
-    def name(self) -> str:
-        """Container name"""
-        return self._data['Name'].replace('/', '')
-
-    @property
-    def service_name(self) -> str:
-        """Name of the container/service"""
-        return self.get_label('com.docker.compose.service', default='') or \
-            self.get_label('com.docker.swarm.service.name', default='')
-
-    @property
-    def backup_process_label(self) -> str:
-        """str: The unique backup process label for this project"""
-        return f"{enums.LABEL_BACKUP_PROCESS}-{self.project_name}"
-
-    @property
-    def project_name(self) -> str:
-        """str: Name of the compose setup"""
-        return self.get_label('com.docker.compose.project', default='')
-
-    @property
-    def stack_name(self) -> str:
-        """str: Name of the stack is present"""
-        return self.get_label("com.docker.stack.namespace")
-
-    @property
-    def is_oneoff(self) -> bool:
-        """Was this container started with run command?"""
-        return self.get_label('com.docker.compose.oneoff', default='False') == 'True'
-
     @property
     def environment(self) -> list:
         """All configured env vars for the container as a list"""
         return self.get_config('Env')

-    def remove(self):
-        self._data.remove()
-
     def get_config_env(self, name) -> str:
         """Get a config environment variable by name"""
         # convert to dict and fetch env var by name
@@ -145,8 +107,7 @@ class Container:
     @property
     def volume_backup_enabled(self) -> bool:
-        """bool: If the ``restic-compose-backup.volumes`` label is set"""
-        return utils.is_true(self.get_label(enums.LABEL_VOLUMES_ENABLED))
+        return utils.is_true(self.get_label('restic-compose-backup.volumes'))

     @property
     def database_backup_enabled(self) -> bool:
@@ -159,29 +120,46 @@ class Container:
     @property
     def mysql_backup_enabled(self) -> bool:
-        """bool: If the ``restic-compose-backup.mysql`` label is set"""
-        return utils.is_true(self.get_label(enums.LABEL_MYSQL_ENABLED))
+        return utils.is_true(self.get_label('restic-compose-backup.mysql'))

     @property
     def mariadb_backup_enabled(self) -> bool:
-        """bool: If the ``restic-compose-backup.mariadb`` label is set"""
-        return utils.is_true(self.get_label(enums.LABEL_MARIADB_ENABLED))
+        return utils.is_true(self.get_label('restic-compose-backup.mariadb'))

     @property
     def postgresql_backup_enabled(self) -> bool:
-        """bool: If the ``restic-compose-backup.postgres`` label is set"""
-        return utils.is_true(self.get_label(enums.LABEL_POSTGRES_ENABLED))
+        return utils.is_true(self.get_label('restic-compose-backup.postgres'))

     @property
     def is_backup_process_container(self) -> bool:
         """Is this container the running backup process?"""
-        return self.get_label(self.backup_process_label) == 'True'
+        return self.get_label('restic-compose-backup.backup_process') == 'True'

     @property
     def is_running(self) -> bool:
-        """bool: Is the container running?"""
+        """Is the container running?"""
         return self._state.get('Running', False)

+    @property
+    def name(self) -> str:
+        """Container name"""
+        return self._data['Name'].replace('/', '')
+
+    @property
+    def service_name(self) -> str:
+        """Name of the container/service"""
+        return self.get_label('com.docker.compose.service', default='')
+
+    @property
+    def project_name(self) -> str:
+        """Name of the compose setup"""
+        return self.get_label('com.docker.compose.project', default='')
+
+    @property
+    def is_oneoff(self) -> bool:
+        """Was this container started with run command?"""
+        return self.get_label('com.docker.compose.oneoff', default='False') == 'True'
+
     def get_config(self, name, default=None):
         """Get value from config dict"""
         return self._config.get(name, default)
@@ -194,15 +172,11 @@ class Container:
         """Get all mounts for this container matching include/exclude filters"""
         filtered = []

-        # If exclude_bind_mounts is true, only volume mounts are kept in the list of mounts
-        exclude_bind_mounts = utils.is_true(config.exclude_bind_mounts)
-        mounts = list(filter(lambda m: not exclude_bind_mounts or m.type == "volume", self._mounts))
-
         if not self.volume_backup_enabled:
             return filtered

         if self._include:
-            for mount in mounts:
+            for mount in self._mounts:
                 for pattern in self._include:
                     if pattern in mount.source:
                         break
@@ -212,14 +186,14 @@ class Container:
                 filtered.append(mount)

         elif self._exclude:
-            for mount in mounts:
+            for mount in self._mounts:
                 for pattern in self._exclude:
                     if pattern in mount.source:
                         break
                 else:
                     filtered.append(mount)
         else:
-            return mounts
+            return self._mounts

         return filtered
@@ -229,26 +203,12 @@ class Container:
         volumes = {}
         for mount in mounts:
             volumes[mount.source] = {
-                'bind': self.get_volume_backup_destination(mount, source_prefix),
+                'bind': str(Path(source_prefix) / self.service_name / Path(utils.strip_root(mount.destination))),
                 'mode': mode,
             }

         return volumes

-    def get_volume_backup_destination(self, mount, source_prefix) -> str:
-        """Get the destination path for backups of the given mount"""
-        destination = Path(source_prefix)
-
-        if utils.is_true(config.include_project_name):
-            project_name = self.project_name
-            if project_name != '':
-                destination /= project_name
-
-        destination /= self.service_name
-        destination /= Path(utils.strip_root(mount.destination))
-
-        return str(destination)
-
     def get_credentials(self) -> dict:
         """dict: get credentials for the service"""
         raise NotImplementedError("Base container class don't implement this")
@@ -261,10 +221,6 @@ class Container:
         """Back up this service"""
         raise NotImplementedError("Base container class don't implement this")

-    def backup_destination_path(self) -> str:
-        """Return the path backups will be saved at"""
-        raise NotImplementedError("Base container class don't implement this")
-
     def dump_command(self) -> list:
         """list: create a dump command restic and use to send data through stdin"""
         raise NotImplementedError("Base container class don't implement this")
@@ -354,7 +310,6 @@ class RunningContainers:
         self.containers = []
         self.this_container = None
         self.backup_process_container = None
-        self.stale_backup_process_containers = []

         # Find the container we are running in.
         # If we don't have this information we cannot continue
@@ -365,51 +320,25 @@ class RunningContainers:
         if not self.this_container:
             raise ValueError("Cannot find metadata for backup container")

-        # Gather all running containers in the current compose setup
+        # Gather all containers in the current compose setup
         for container_data in all_containers:
             container = Container(container_data)

-            # Gather stale backup process containers
-            if (self.this_container.image == container.image
-                    and not container.is_running
-                    and container.is_backup_process_container):
-                self.stale_backup_process_containers.append(container)
-
-            # We only care about running containers after this point
-            if not container.is_running:
-                continue
-
             # Detect running backup process container
             if container.is_backup_process_container:
                 self.backup_process_container = container

-            # --- Determine what containers should be evaludated
-
-            # If not swarm mode we need to filter in compose project
-            if not config.swarm_mode:
-                if container.project_name != self.this_container.project_name:
-                    continue
-
-            # Containers started manually are not included
-            if container.is_oneoff:
-                continue
-
-            # Do not include the backup process container
-            if container == self.backup_process_container:
-                continue
-
-            self.containers.append(container)
+            # Detect containers belonging to the current compose setup
+            if (container.project_name == self.this_container.project_name
+                    and not container.is_oneoff):
+                if container.id != self.this_container.id:
+                    self.containers.append(container)

     @property
     def project_name(self) -> str:
         """str: Name of the compose project"""
         return self.this_container.project_name

-    @property
-    def backup_process_label(self) -> str:
-        """str: The backup process label for this project"""
-        return self.this_container.backup_process_label
-
     @property
     def backup_process_running(self) -> bool:
         """Is the backup process container running?"""
@@ -429,7 +358,6 @@ class RunningContainers:
         return mounts

     def get_service(self, name) -> Container:
-        """Container: Get a service by name"""
         for container in self.containers:
             if container.service_name == name:
                 return container
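Since the include/exclude logic relies on Python's for/else, it is easy to misread: a mount is kept when any include pattern is a substring of its source, or when no exclude pattern matches. A standalone sketch of the same behavior on plain strings (illustrative only, assuming mounts are represented by their source paths):

```python
def filter_mounts(sources, include=None, exclude=None):
    """Keep sources matching any include pattern, or matching no exclude pattern."""
    if include:
        return [s for s in sources if any(p in s for p in include)]
    if exclude:
        return [s for s in sources if not any(p in s for p in exclude)]
    return list(sources)

sources = ["/srv/files", "/var/lib/mysql", "/etc/nginx"]
print(filter_mounts(sources, include=["files"]))  # ['/srv/files']
print(filter_mounts(sources, exclude=["mysql"]))  # ['/srv/files', '/etc/nginx']
```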

View File

@@ -1,7 +1,5 @@
-from pathlib import Path
-
 from restic_compose_backup.containers import Container
-from restic_compose_backup.config import config, Config
+from restic_compose_backup.config import Config
 from restic_compose_backup import (
     commands,
     restic,
@@ -41,7 +39,6 @@ class MariadbContainer(Container):
             f"--port={creds['port']}",
             f"--user={creds['username']}",
             "--all-databases",
-            "--no-tablespaces",
         ]

     def backup(self):
@@ -51,23 +48,10 @@ class MariadbContainer(Container):
         with utils.environment('MYSQL_PWD', creds['password']):
             return restic.backup_from_stdin(
                 config.repository,
-                self.backup_destination_path(),
+                f'/databases/{self.service_name}/all_databases.sql',
                 self.dump_command(),
             )

-    def backup_destination_path(self) -> str:
-        destination = Path("/databases")
-
-        if utils.is_true(config.include_project_name):
-            project_name = self.project_name
-            if project_name != "":
-                destination /= project_name
-
-        destination /= self.service_name
-        destination /= "all_databases.sql"
-
-        return destination
-

 class MysqlContainer(Container):
     container_type = 'mysql'
@@ -101,7 +85,6 @@ class MysqlContainer(Container):
             f"--port={creds['port']}",
             f"--user={creds['username']}",
             "--all-databases",
-            "--no-tablespaces",
         ]

     def backup(self):
@@ -111,23 +94,10 @@ class MysqlContainer(Container):
         with utils.environment('MYSQL_PWD', creds['password']):
             return restic.backup_from_stdin(
                 config.repository,
-                self.backup_destination_path(),
+                f'/databases/{self.service_name}/all_databases.sql',
                 self.dump_command(),
             )

-    def backup_destination_path(self) -> str:
-        destination = Path("/databases")
-
-        if utils.is_true(config.include_project_name):
-            project_name = self.project_name
-            if project_name != "":
-                destination /= project_name
-
-        destination /= self.service_name
-        destination /= "all_databases.sql"
-
-        return destination
-

 class PostgresContainer(Container):
     container_type = 'postgres'
@@ -171,19 +141,6 @@ class PostgresContainer(Container):
         with utils.environment('PGPASSWORD', creds['password']):
             return restic.backup_from_stdin(
                 config.repository,
-                self.backup_destination_path(),
+                f"/databases/{self.service_name}/{creds['database']}.sql",
                 self.dump_command(),
             )

-    def backup_destination_path(self) -> str:
-        destination = Path("/databases")
-
-        if utils.is_true(config.include_project_name):
-            project_name = self.project_name
-            if project_name != "":
-                destination /= project_name
-
-        destination /= self.service_name
-        destination /= f"{self.get_credentials()['database']}.sql"
-
-        return destination
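All three database classes follow the same scheme: the dump command streams SQL to stdout, and restic ingests it via `--stdin` under a stable snapshot path. A bare-bones sketch of that pipe (host and flags are placeholders; RESTIC_REPOSITORY and RESTIC_PASSWORD are assumed to be exported):

```python
from subprocess import Popen, PIPE

dump_cmd = ["mysqldump", "--host=mariadb", "--user=root", "--all-databases"]
restic_cmd = [
    "restic", "backup",
    "--stdin",
    "--stdin-filename", "/databases/mariadb/all_databases.sql",
]

source = Popen(dump_cmd, stdout=PIPE)              # dump writes to stdout
dest = Popen(restic_cmd, stdin=source.stdout)      # restic reads from stdin
dest.communicate()
print("exit codes:", source.wait(), dest.returncode)
```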

View File

@@ -1,69 +0,0 @@
-"""
-# ┌───────────── minute (0 - 59)
-# │ ┌───────────── hour (0 - 23)
-# │ │ ┌───────────── day of the month (1 - 31)
-# │ │ │ ┌───────────── month (1 - 12)
-# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday;
-# │ │ │ │ │                               7 is also Sunday on some systems)
-# │ │ │ │ │
-# │ │ │ │ │
-# * * * * * command to execute
-"""
-QUOTE_CHARS = ['"', "'"]
-
-
-def generate_crontab(config):
-    """Generate a crontab entry for running backup job"""
-    command = config.cron_command.strip()
-
-    schedule = config.cron_schedule
-    if schedule:
-        schedule = schedule.strip()
-        schedule = strip_quotes(schedule)
-        if not validate_schedule(schedule):
-            schedule = config.default_crontab_schedule
-    else:
-        schedule = config.default_crontab_schedule
-
-    return f'{schedule} {command}\n'
-
-
-def validate_schedule(schedule: str):
-    """Validate crontab format"""
-    parts = schedule.split()
-    if len(parts) != 5:
-        return False
-
-    for p in parts:
-        if p != '*' and not p.isdigit():
-            return False
-
-    minute, hour, day, month, weekday = parts
-    try:
-        validate_field(minute, 0, 59)
-        validate_field(hour, 0, 23)
-        validate_field(day, 1, 31)
-        validate_field(month, 1, 12)
-        validate_field(weekday, 0, 6)
-    except ValueError:
-        return False
-
-    return True
-
-
-def validate_field(value, min, max):
-    if value == '*':
-        return
-
-    i = int(value)
-    return min <= i <= max
-
-
-def strip_quotes(value: str):
-    """Strip enclosing single or double quotes if present"""
-    if value[0] in QUOTE_CHARS:
-        value = value[1:]
-
-    if value[-1] in QUOTE_CHARS:
-        value = value[:-1]
-
-    return value
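A note for anyone porting this validator: because the loop above already rejects anything that is neither `*` nor all digits, `validate_field` can never raise `ValueError`, and its boolean range-check result is discarded by the caller. Out-of-range values such as `99 2 * * *` therefore pass validation on master. A quick demonstration, assuming the module is importable as `restic_compose_backup.cron` on a master checkout:

```python
from restic_compose_backup.cron import validate_schedule

print(validate_schedule("0 2 * * *"))   # True
print(validate_schedule("every day"))   # False (wrong field count)
print(validate_schedule("99 2 * * *"))  # True, although 99 is not a valid minute
```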

View File

@@ -1,11 +0,0 @@
-# Labels
-LABEL_VOLUMES_ENABLED = 'restic-compose-backup.volumes'
-LABEL_VOLUMES_INCLUDE = 'restic-compose-backup.volumes.include'
-LABEL_VOLUMES_EXCLUDE = 'restic-compose-backup.volumes.exclude'
-
-LABEL_MYSQL_ENABLED = 'restic-compose-backup.mysql'
-LABEL_POSTGRES_ENABLED = 'restic-compose-backup.postgres'
-LABEL_MARIADB_ENABLED = 'restic-compose-backup.mariadb'
-
-LABEL_BACKUP_PROCESS = 'restic-compose-backup.process'

View File

@@ -22,7 +22,7 @@ def setup(level: str = 'warning'):
     ch = logging.StreamHandler(stream=sys.stdout)
     ch.setLevel(level)
-    # ch.setFormatter(logging.Formatter('%(asctime)s - {HOSTNAME} - %(name)s - %(levelname)s - %(message)s'))
-    # ch.setFormatter(logging.Formatter('%(asctime)s - {HOSTNAME} - %(levelname)s - %(message)s'))
-    ch.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s: %(message)s'))
+    # ch.setFormatter(logging.Formatter(f'%(asctime)s - {HOSTNAME} - %(name)s - %(levelname)s - %(message)s'))
+    # ch.setFormatter(logging.Formatter(f'%(asctime)s - {HOSTNAME} - %(levelname)s - %(message)s'))
+    ch.setFormatter(logging.Formatter(f'%(asctime)s - %(levelname)s: %(message)s'))
     logger.addHandler(ch)

View File

@@ -40,46 +40,36 @@ def backup_from_stdin(repository: str, filename: str, source_command: List[str]):
     ])

     # pipe source command into dest command
-    source_process = Popen(source_command, stdout=PIPE, bufsize=65536)
-    dest_process = Popen(dest_command, stdin=source_process.stdout, stdout=PIPE, stderr=PIPE, bufsize=65536)
+    # NOTE: Using the default buffer size: io.DEFAULT_BUFFER_SIZE = 8192
+    #       We might want to tweak that to speed up large dumps.
+    #       Actual tests tests must be done.
+    source_process = Popen(source_command, stdout=PIPE)
+    dest_process = Popen(dest_command, stdin=source_process.stdout, stdout=PIPE, stderr=PIPE)
     stdout, stderr = dest_process.communicate()

+    if stdout:
+        for line in stdout.decode().split('\n'):
+            logger.debug(line)
+
+    if stderr:
+        for line in stderr.decode().split('\n'):
+            logger.error(line)
+
     # Ensure both processes exited with code 0
     source_exit, dest_exit = source_process.poll(), dest_process.poll()
-    exit_code = 0 if (source_exit == 0 and dest_exit == 0) else 1
-
-    if stdout:
-        commands.log_std('stdout', stdout, logging.DEBUG if exit_code == 0 else logging.ERROR)
-
-    if stderr:
-        commands.log_std('stderr', stderr, logging.ERROR)
-
-    return exit_code
+    return 0 if (source_exit == 0 and dest_exit == 0) else 1


 def snapshots(repository: str, last=True) -> Tuple[str, str]:
-    """Returns the stdout and stderr info"""
     args = ["snapshots"]
     if last:
         args.append('--last')
     return commands.run_capture_std(restic(repository, args))


-def is_initialized(repository: str) -> bool:
-    """
-    Checks if a repository is initialized using snapshots command.
-    Note that this cannot separate between uninitalized repo
-    and other errors, but this method is reccomended by the restic
-    community.
-    """
-    return commands.run(restic(repository, ["snapshots", '--last'])) == 0
-
-
 def forget(repository: str, daily: str, weekly: str, monthly: str, yearly: str):
     return commands.run(restic(repository, [
         'forget',
-        '--group-by',
-        'paths',
         '--keep-daily',
         daily,
         '--keep-weekly',
@@ -100,7 +90,6 @@ def prune(repository: str):
 def check(repository: str):
     return commands.run(restic(repository, [
         "check",
-        # "--with-cache",
     ]))
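The retention change is easy to miss in the noise: master passes `--group-by paths` to `restic forget`, so volume snapshots and each database dump are pruned as separate groups rather than being lumped together per host. The assembled command looks roughly like this (the keep-counts and repository path are example values, not the project's defaults):

```python
repository = "/restic_data"  # example value
args = [
    "restic", "-r", repository, "forget",
    "--group-by", "paths",
    "--keep-daily", "7",
    "--keep-weekly", "4",
    "--keep-monthly", "12",
    "--keep-yearly", "3",
]
print(" ".join(args))
```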

View File

@@ -1,69 +1,26 @@
 import os
-import logging
-from typing import List, TYPE_CHECKING
 from contextlib import contextmanager
 import docker

-if TYPE_CHECKING:
-    from restic_compose_backup.containers import Container
-
-logger = logging.getLogger(__name__)
+from restic_compose_backup.config import Config

 TRUE_VALUES = ['1', 'true', 'True', True, 1]


-def docker_client():
-    """
-    Create a docker client from the following environment variables::
-
-        DOCKER_HOST=unix://tmp/docker.sock
-        DOCKER_TLS_VERIFY=1
-        DOCKER_CERT_PATH=''
-    """
-    # NOTE: Remove this fallback in 1.0
-    if not os.environ.get('DOCKER_HOST'):
-        os.environ['DOCKER_HOST'] = 'unix://tmp/docker.sock'
-
-    return docker.from_env()
-
-
-def list_containers() -> List[dict]:
+def list_containers():
     """
     List all containers.

     Returns:
         List of raw container json data from the api
     """
-    client = docker_client()
-    all_containers = client.containers.list(all=True)
+    config = Config()
+    client = docker.DockerClient(base_url=config.docker_base_url)
+    all_containers = client.containers.list()
     client.close()
     return [c.attrs for c in all_containers]


-def get_swarm_nodes():
-    client = docker_client()
-    # NOTE: If not a swarm node docker.errors.APIError is raised
-    #       503 Server Error: Service Unavailable
-    #       ("This node is not a swarm manager. Use "docker swarm init" or
-    #       "docker swarm join" to connect this node to swarm and try again.")
-    try:
-        return client.nodes.list()
-    except docker.errors.APIError:
-        return []
-
-
-def remove_containers(containers: List['Container']):
-    client = docker_client()
-    logger.info('Attempting to delete stale backup process containers')
-    for container in containers:
-        logger.info(' -> deleting %s', container.name)
-        try:
-            c = client.containers.get(container.name)
-            c.remove()
-        except Exception as ex:
-            logger.exception(ex)
-
-
 def is_true(value):
     """
     Evaluates the truthfullness of a bool value in container labels
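Finally, the database classes lean on a `utils.environment` context manager to expose credentials (MYSQL_PWD, PGPASSWORD) only for the duration of the dump. A minimal sketch of such a helper, written from its call sites rather than copied from the module:

```python
import os
from contextlib import contextmanager

@contextmanager
def environment(name, value):
    """Temporarily set an environment variable, restoring the old state on exit."""
    old = os.environ.get(name)
    os.environ[name] = value
    try:
        yield
    finally:
        if old is None:
            del os.environ[name]
        else:
            os.environ[name] = old

with environment("MYSQL_PWD", "secret"):
    print("inside:", os.environ["MYSQL_PWD"])
print("outside:", os.environ.get("MYSQL_PWD"))  # restored to None
```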

View File

@@ -3,15 +3,12 @@ from setuptools import setup, find_namespace_packages
 setup(
     name="restic-compose-backup",
     url="https://github.com/ZettaIO/restic-compose-backup",
-    version="0.7.1",
+    version="0.3.0",
     author="Einar Forselv",
     author_email="eforselv@gmail.com",
-    packages=find_namespace_packages(include=[
-        'restic_compose_backup',
-        'restic_compose_backup.*',
-    ]),
+    packages=find_namespace_packages(include=['restic_compose_backup']),
     install_requires=[
-        'docker~=6.1.3',
+        'docker==4.1.*',
     ],
     entry_points={'console_scripts': [
         'restic-compose-backup = restic_compose_backup.cli:main',

View File

@@ -160,7 +160,7 @@
             "OpenStdin": true,
             "StdinOnce": true,
             "Env": [
-                "DOCKER_HOST=unix://tmp/docker.sock",
+                "DOCKER_BASE_URL=unix://tmp/docker.sock",
                 "RESTIC_REPOSITORY=/tmp/backup",
                 "RESTIC_PASSWORD=password",
                 "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

View File

@@ -3,9 +3,6 @@ import os
 import unittest
 from unittest import mock

-os.environ['RESTIC_REPOSITORY'] = "test"
-os.environ['RESTIC_PASSWORD'] = "password"
-
 from restic_compose_backup import utils
 from restic_compose_backup.containers import RunningContainers
 import fixtures
@@ -18,8 +15,8 @@ class ResticBackupTests(unittest.TestCase):
     @classmethod
     def setUpClass(cls):
         """Set up basic environment variables"""
-        # os.environ['RESTIC_REPOSITORY'] = "test"
-        # os.environ['RESTIC_PASSWORD'] = "password"
+        os.environ['RESTIC_REPOSITORY'] = "test"
+        os.environ['RESTIC_PASSWORD'] = "password"

     def createContainers(self):
         backup_hash = fixtures.generate_sha256()
@@ -80,7 +77,7 @@ class ResticBackupTests(unittest.TestCase):
         ]
         with mock.patch(list_containers_func, fixtures.containers(containers=containers)):
             result = RunningContainers()
-            self.assertEqual(len(result.containers), 4, msg="Three containers expected")
+            self.assertEqual(len(result.containers), 3, msg="Three containers expected")
             self.assertNotEqual(result.this_container, None, msg="No backup container found")
             web_service = result.get_service('web')
             self.assertNotEqual(web_service, None)
@@ -194,7 +191,7 @@ class ResticBackupTests(unittest.TestCase):
             {
                 'service': 'backup_runner',
                 'labels': {
-                    'restic-compose-backup.process-default': 'True',
+                    'restic-compose-backup.backup_process': 'True',
                },
            },
        ]

View File

@@ -1,30 +0,0 @@
-version: '3.7'
-services:
-  mariadb:
-    image: mariadb:10
-    labels:
-      restic-compose-backup.mariadb: "true"
-    environment:
-      - MYSQL_ROOT_PASSWORD=my-secret-pw
-      - MYSQL_DATABASE=mydb
-      - MYSQL_USER=myuser
-      - MYSQL_PASSWORD=mypassword
-    networks:
-      - global
-    volumes:
-      - mariadbdata:/var/lib/mysql
-
-  files:
-    image: nginx:1.17-alpine
-    labels:
-      restic-compose-backup.volumes: "true"
-    volumes:
-      - files:/srv/files
-
-volumes:
-  mariadbdata:
-  files:
-
-networks:
-  global:
-    external: true

tox.ini
View File

@@ -5,16 +5,16 @@
 skipsdist = True
 setupdir={toxinidir}/src
 envlist =
-    py38
+    py37
     pep8

 [testenv]
 usedevelop = True
 basepython =
-    py38: python3.8
+    py37: python3.7
 deps =
-    -r{toxinidir}/src//tests/requirements.txt
+    -r{toxinidir}/tests/requirements.txt
 commands =
     ; coverage run --source=restic_compose_backup -m pytest tests/
     ; coverage report
@@ -23,7 +23,7 @@ commands =
 [testenv:pep8]
 usedevelop = false
 deps = flake8
-basepython = python3.8
+basepython = python3.7
 commands = flake8

 [pytest]
@@ -53,4 +53,4 @@ norecursedirs = tests/* .venv/* .tox/* build/ docs/
 ignore = H405,D100,D101,D102,D103,D104,D105,D200,D202,D203,D204,D205,D211,D301,D400,D401,W503
 show-source = True
 max-line-length = 120
-exclude = .tox,env,tests,build,conf.py
+exclude = .tox,.venv*,tests,build,conf.py