Compare commits


49 Commits

Author SHA1 Message Date
einarf e6ca4aa9ca Add rtd config file 2024-01-05 04:13:20 +01:00
einarf 093dab93ca Make rcb dump env vars to properly escape them 2024-01-05 03:50:13 +01:00
einarf 405bd4af15 Various tweaks
* Use --no-tablespaces in mysqldump
* Dump to 0.7.0
* pin docker version
* Include missing packages in setup.py
2023-11-10 22:24:29 +01:00
dreadper 28dda6b09d fix TypeError: request() got an unexpected keyword argument 'chunked' by upgrading pip package docker (from 4.1.* to 5.1.*) 2023-10-28 14:54:20 +02:00
dreadper b400138b73 fix [Plugin caching_sha2_password could not be loaded](https://github.com/arey/mysql-client/issues/5) 2023-10-28 14:20:40 +02:00
Einar Forselv b52655a23b
Merge pull request #37 from wehrstedt/email-allow-empty-password
Allow empty password for smpt
2021-06-18 09:54:06 +02:00
Maximilian Wehrstedt 323e299b7e Allow empty password for smpt 2021-06-17 15:47:39 +02:00
einarf 5c33ccf0b1
Bump version 2020-05-28 01:27:06 +02:00
einarf 764aac6744
Docs for EXCLUDE_BIND_MOUNTS and INCLUDE_PROJECT_NAME 2020-05-28 01:18:21 +02:00
einarf bbe57dfd69
Broken test intialization 2020-05-28 00:18:04 +02:00
einarf 4517880846
pep8 2020-05-28 00:13:27 +02:00
einarf 1fefd63c72
Use py38 in tests 2020-05-28 00:13:04 +02:00
Einar Forselv 4e1af219e2
Merge pull request #27 from jannikw/exclude-bind-mounts
Allow excluding all bind mounts from backups
2020-05-28 00:03:35 +02:00
Einar Forselv 93f080d5b3
Merge pull request #26 from jannikw/project-names
Allow grouping of backup directories by project name
2020-05-27 23:59:27 +02:00
Jannik 0fc620bb1f Allow excluding all bind mounts from backups 2020-05-26 15:30:54 +02:00
Jannik f7958d7db9 Add docstring for get_volume_backup_destination 2020-05-26 14:54:36 +02:00
Jannik 18ddb173ac Allow inclusion of project name in backup path 2020-05-26 14:30:59 +02:00
Jannik f59a046bbc Output backup destination path with output command 2020-05-26 10:42:29 +02:00
Einar Forselv 8b934dc12f Update README.md 2020-05-15 23:37:44 +02:00
Einar Forselv 8e9105fed5 Install wheel 2020-05-15 23:37:39 +02:00
Einar Forselv d7492e51f6 Test command for listing swarm nodes 2020-03-08 18:36:29 +01:00
Einar Forselv 07a19f7f42 Get swarm nodes + robust fallback 2020-03-08 18:36:07 +01:00
Einar Forselv 13d8e07a33 Add swarm node json example 2020-03-08 18:35:37 +01:00
Einar Forselv 25b39b9908 Missing label in swarm test file 2020-03-08 17:38:01 +01:00
Einar Forselv d0fdf2d1d3 README: dev note about joining and leaving swarm 2020-03-08 17:37:40 +01:00
Einar Forselv 3aa0704045 Add service with volume in local dev stack sample 2020-03-08 14:51:15 +01:00
Einar Forselv cf668e2153 Bump version 2020-03-07 03:10:48 +01:00
Einar Forselv d4c77cf43d Bug: Properly resolve conainer service name 2020-03-07 03:05:40 +01:00
Einar Forselv cecc647a10 Include swarm containers when SWARM_MODE is set 2020-03-07 02:56:36 +01:00
Einar Forselv 61ec487e24 Support SWARM_MODE 2020-03-07 02:55:22 +01:00
Einar Forselv 0bab85f5cf Update local dev compose and stack setup 2020-03-07 02:54:56 +01:00
Einar Forselv 1a100d73ab Update README.md 2020-03-07 02:28:46 +01:00
Einar Forselv 270137d931 README: Local dev setup 2020-03-07 02:24:22 +01:00
Einar Forselv e4263822bf Container: Stack name + reorganize 2020-03-07 01:27:24 +01:00
Einar Forselv 311bedb5ab Store stack and compose project sample json 2020-03-07 00:14:58 +01:00
Einar Forselv 88cf894689 Simple swarm stack 2020-03-07 00:14:11 +01:00
Einar Forselv 6817f0999f Update release.md 2019-12-17 00:32:29 +01:00
Einar Forselv 74c0954e6f pep8 2019-12-16 22:38:52 +01:00
Einar Forselv f6995eb506 Update env var docs including docker config 2019-12-16 22:36:23 +01:00
Einar Forselv ef28baed5e Update fixtures 2019-12-16 22:36:04 +01:00
Einar Forselv 336cace237 Bump version 2019-12-16 22:21:11 +01:00
Einar Forselv cab4676b91 Create docker client from standard env vars 2019-12-16 22:19:57 +01:00
Einar Forselv d002ad9390 Update env example 2019-12-16 22:19:05 +01:00
Einar Forselv 98a10bf994 Mark backup process container with env variable 2019-12-16 21:51:37 +01:00
Einar Forselv 2535ce3421 Add sponsor 2019-12-11 13:17:32 +01:00
Einar Forselv 8858f88ba4 Broken test 2019-12-10 17:28:35 +01:00
Einar Forselv c5b7f11db7 Attempt to partly fix #18 2019-12-10 07:57:37 +01:00
Einar Forselv a099060b2e Bump version 2019-12-09 06:33:31 +01:00
Einar Forselv dd40152fe1 dev compose mapping incorrect tests dir 2019-12-09 06:32:31 +01:00
27 changed files with 903 additions and 84 deletions

BIN
.github/logo.png vendored Normal file (binary file not shown; 6.1 KiB)

24
.readthedocs.yaml Normal file

@@ -0,0 +1,24 @@
# Read the Docs configuration file for Sphinx projects
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version and other tools you might need
build:
  os: ubuntu-22.04
  tools:
    python: "3.10"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
  configuration: docs/conf.py

# Optionally build your docs in additional formats such as PDF and ePub
# formats:
#   - pdf
#   - epub

python:
  install:
    - requirements: docs/requirements.txt


@@ -162,9 +162,39 @@ pip install -r docs/requirements.txt
 python src/setup.py build_sphinx
 ```

+# Local dev setup
+
+The git repository contains a simple local setup for development
+
+```bash
+# Create an overlay network to link the compose project and stack
+docker network create --driver overlay --attachable global
+# Start the compose project
+docker-compose up -d
+# Deploy the stack
+docker stack deploy -c swarm-stack.yml test
+```
+
+In dev we should ideally start the backup container manually
+
+```bash
+docker-compose run --rm backup sh
+# pip install the package in the container in editable mode to auto sync changes from host source
+pip3 install -e .
+```
+
+Remember to enable swarm mode with `docker swarm init/join` and disable swarm
+mode with `docker swarm leave --force` when needed in development (single node setup).
+
 ## Contributing

-Contributions are welcome regardless of experience level. Don't hesitate submitting issues, opening partial or completed pull requests.
+Contributions are welcome regardless of experience level.
+Don't hesitate submitting issues, opening partial or completed pull requests.

 [restic]: https://restic.net/
 [documentation]: https://restic-compose-backup.readthedocs.io
+
+---
+
+This project is sponsored by [zetta.io](https://www.zetta.io)
+
+[![Zetta.IO](https://raw.githubusercontent.com/ZettaIO/restic-compose-backup/master/.github/logo.png)](https://www.zetta.io)


@@ -1,10 +1,16 @@
-version: '3'
+version: '3.7'
 services:
   backup:
     build: ./src
     env_file:
       - restic_compose_backup.env
-      - alerts.env
+      # - alerts.env
+    labels:
+      restic-compose-backup.volumes: true
+      restic-compose-backup.volumes.include: 'src'
+    networks:
+      - default
+      - global
     volumes:
       # Map in docker socket
       - /var/run/docker.sock:/tmp/docker.sock:ro
@@ -20,13 +26,13 @@ services:
       restic-compose-backup.volumes: true
       restic-compose-backup.volumes.include: "/tests"
     volumes:
-      - ./tests:/srv/tests
+      - ./src/tests:/srv/tests
       - ./.vscode:/srv/code
     environment:
       - SOME_VALUE=test
       - ANOTHER_VALUE=1
-  mysql:
+  mysql5:
     image: mysql:5
     labels:
       restic-compose-backup.mysql: true
@@ -36,7 +42,19 @@ services:
       - MYSQL_USER=myuser
       - MYSQL_PASSWORD=mypassword
     volumes:
-      - mysqldata:/var/lib/mysql
+      - mysqldata5:/var/lib/mysql
+  mysql8:
+    image: mysql:8
+    labels:
+      restic-compose-backup.mysql: true
+    environment:
+      - MYSQL_ROOT_PASSWORD=my-secret-pw
+      - MYSQL_DATABASE=mydb
+      - MYSQL_USER=myuser
+      - MYSQL_PASSWORD=mypassword
+    volumes:
+      - mysqldata8:/var/lib/mysql
   mariadb:
     image: mariadb:10
@@ -62,6 +80,11 @@ services:
       - pgdata:/var/lib/postgresql/data

 volumes:
-  mysqldata:
+  mysqldata5:
+  mysqldata8:
   mariadbdata:
   pgdata:
+
+networks:
+  global:
+    external: true


@@ -22,7 +22,7 @@ copyright = '2019, Zetta.IO Technology AS'
 author = 'Zetta.IO Technology AS'

 # The full version, including alpha/beta/rc tags
-release = '0.4.0'
+release = '0.6.0'

 # -- General configuration ---------------------------------------------------


@@ -166,12 +166,54 @@ a webhook that will post embedded messages to a specific channel.
 The url usually looks like this: ``https://discordapp.com/api/webhooks/...``

-DOCKER_BASE_URL
-~~~~~~~~~~~~~~~
+DOCKER_HOST
+~~~~~~~~~~~

 **Default value**: ``unix://tmp/docker.sock``

-The location of the docker socket.
+The socket or host of the docker service.
+
+DOCKER_TLS_VERIFY
+~~~~~~~~~~~~~~~~~
+
+If defined, verify the host against a CA certificate.
+The path to the certs is defined in ``DOCKER_CERT_PATH``
+and can be copied or mapped into this backup container.
+
+DOCKER_CERT_PATH
+~~~~~~~~~~~~~~~~
+
+A path to a directory containing TLS certificates to use when
+connecting to the Docker host. Combined with ``DOCKER_TLS_VERIFY``
+this can be used to talk to docker through TLS in cases
+where we cannot map in the docker socket.
+
+INCLUDE_PROJECT_NAME
+~~~~~~~~~~~~~~~~~~~~
+
+Define this environment variable if your backup destination
+paths need the project name as a prefix. This is useful
+when running multiple projects.
+
+EXCLUDE_BIND_MOUNTS
+~~~~~~~~~~~~~~~~~~~
+
+Docker has two volume types: binds and volumes.
+Volumes are docker volumes (``docker volume list``).
+Binds are paths mapped into the container from
+the host, for example in the ``volumes`` section
+of a service.
+
+If defined, all host binds will be ignored globally.
+This is useful when you only care about actual
+docker volumes. Often host binds are only used
+for mapping in configuration. This saves the user
+from manually excluding these bind volumes.
+
+SWARM_MODE
+~~~~~~~~~~
+
+If defined, containers in swarm stacks are also evaluated.

 Compose Labels
 --------------
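As a sketch, the three docker connection variables documented above could be combined in the backup container's env file to reach a remote daemon over TLS (the host name and cert path here are hypothetical examples, not values from this repository):

```
# Hypothetical values -- point these at your own daemon and certs
DOCKER_HOST=tcp://docker.example.com:2376
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/certs
```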


@@ -0,0 +1,196 @@
{
"Id": "efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd",
"Created": "2020-03-05T21:07:34.88927951Z",
"Path": "docker-entrypoint.sh",
"Args": ["mysqld"],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 4887,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-03-06T01:31:39.728842925Z",
"FinishedAt": "2020-03-06T01:31:33.847583199Z"
},
"Image": "sha256:1fd0e719c4952e22a99e30662fdd7daad53e7e53fbe135d543cc6b82be213951",
"ResolvConfPath": "/var/lib/docker/containers/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd/hostname",
"HostsPath": "/var/lib/docker/containers/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd/hosts",
"LogPath": "/var/lib/docker/containers/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd/efa5196b4959648e3efcf5ae9f24bc4032849c2665805a5b405216f343b4decd-json.log",
"Name": "/restic-compose-backup_mariadb_1",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": ["restic-compose-backup_mariadbdata:/var/lib/mysql:rw"],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "restic-compose-backup_default",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [0, 0],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": ["/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware"],
"ReadonlyPaths": ["/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger"]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/96e51e6162c0cb4385248375192ec777dd42b3ae7973e402de351f3932c502d0-init/diff:/var/lib/docker/overlay2/38780a41f93b7a20de03f1d76febb885f9213906fb30bad17cb3ad231fb7ce43/diff:/var/lib/docker/overlay2/a2abce521690b1baf6aa61e109a4659cb4272936871bc1afa73271eb8e453449/diff:/var/lib/docker/overlay2/a696286588d1d33b994b7f6e31c176c5f7e67c4f757d730323a7b6591d55f786/diff:/var/lib/docker/overlay2/c4bd8133c0d9547945d38a9998439082ce7b53df7e64737add5a5c824e6f67f2/diff:/var/lib/docker/overlay2/110e275ef21b8c9cc2cd0cce312fed5aabceb056460f637b958dfee56b7b3be8/diff:/var/lib/docker/overlay2/831c8a624e424f298766028e76a8ac08df0c5cf4564f63cae61330a8bce0cf63/diff:/var/lib/docker/overlay2/7ad8ae774951ec40c68b0993ef07ef3d70aa8aed44ea9f1e4d943ca5404cc717/diff:/var/lib/docker/overlay2/19bca9fb61ef1156f8a97313c126a6c06d7fe44a6c49e3affe16f50f2d5e56ff/diff:/var/lib/docker/overlay2/dcd4dda04d06b0a0c7e78517c6209fd67735b3027afda2c85a92de37ff7297d1/diff:/var/lib/docker/overlay2/babf41f5fe1f7b88c17cfce27214a4ad9473b0f8e0f118db948d2acddf4d4798/diff:/var/lib/docker/overlay2/b5f97865010acd5b04b4031d6223cd0b34fab89267891d61256ea16936be52f8/diff:/var/lib/docker/overlay2/6aba0159141ebb6d6783181d154c65046447b7d2bebce65d44c4939ba7943cca/diff:/var/lib/docker/overlay2/c71c34fe0e7e95409a9fc18698f0aee505940fd96aa3718836e2d89f3cfb2d49/diff:/var/lib/docker/overlay2/3be993436e2a6764a6c3c57a2e948f7a57e45ed0ec26cdd3366f4c1106c69869/diff",
"MergedDir": "/var/lib/docker/overlay2/96e51e6162c0cb4385248375192ec777dd42b3ae7973e402de351f3932c502d0/merged",
"UpperDir": "/var/lib/docker/overlay2/96e51e6162c0cb4385248375192ec777dd42b3ae7973e402de351f3932c502d0/diff",
"WorkDir": "/var/lib/docker/overlay2/96e51e6162c0cb4385248375192ec777dd42b3ae7973e402de351f3932c502d0/work"
},
"Name": "overlay2"
},
"Mounts": [{
"Type": "volume",
"Name": "restic-compose-backup_mariadbdata",
"Source": "/var/lib/docker/volumes/restic-compose-backup_mariadbdata/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "rw",
"RW": true,
"Propagation": ""
}],
"Config": {
"Hostname": "efa5196b4959",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"3306/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": ["MYSQL_ROOT_PASSWORD=my-secret-pw", "MYSQL_DATABASE=mydb", "MYSQL_USER=myuser", "MYSQL_PASSWORD=mypassword", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "GOSU_VERSION=1.10", "GPG_KEYS=177F4010FE56CA3336300305F1656F24C74CD1D8", "MARIADB_MAJOR=10.4", "MARIADB_VERSION=1:10.4.12+maria~bionic"],
"Cmd": ["mysqld"],
"Image": "mariadb:10",
"Volumes": {
"/var/lib/mysql": {}
},
"WorkingDir": "",
"Entrypoint": ["docker-entrypoint.sh"],
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "c6ecde85ad111d324a4c97cde3a03898074b026c68ecffc0f7020e5eca9a71d7",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "restic-compose-backup",
"com.docker.compose.project.config_files": "docker-compose.yaml",
"com.docker.compose.project.working_dir": "C:\\Users\\efors\\projects\\zetta.io\\projects\\restic-compose-backup",
"com.docker.compose.service": "mariadb",
"com.docker.compose.version": "1.25.4",
"restic-compose-backup.mariadb": "True"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "d462bb5dfdd26aba12b8a395ac90262ab00d65408bf60dfa1ade0ab6a1851c70",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"3306/tcp": null
},
"SandboxKey": "/var/run/docker/netns/d462bb5dfdd2",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"restic-compose-backup_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": ["efa5196b4959", "mariadb"],
"NetworkID": "8f3349b0debec88f9f48fff02d84cda3feae0a0e8c516e8b42e5777bb03db1cb",
"EndpointID": "0b75d3f00aa077fe95156bc80463d33fb21d241a287b33c06769047855c38400",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.3",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:03",
"DriverOpts": null
}
}
}
}


@@ -0,0 +1,207 @@
{
"Id": "56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76",
"Created": "2020-03-06T22:36:17.266061631Z",
"Path": "docker-entrypoint.sh",
"Args": ["mysqld"],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 35967,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-03-06T22:36:17.636265528Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:1fd0e719c4952e22a99e30662fdd7daad53e7e53fbe135d543cc6b82be213951",
"ResolvConfPath": "/var/lib/docker/containers/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76/hostname",
"HostsPath": "/var/lib/docker/containers/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76/hosts",
"LogPath": "/var/lib/docker/containers/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76/56c57903b6da3afd331312b244ddd0324f5b21cbbe5fc30072edf24781d80f76-json.log",
"Name": "/test_mariadb.1.q4uji32qvw4tkuvwx3pbnbgqq",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Capabilities": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [0, 0],
"Isolation": "default",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"Mounts": [{
"Type": "volume",
"Source": "test_mariadbdata",
"Target": "/var/lib/mysql",
"VolumeOptions": {
"Labels": {
"com.docker.stack.namespace": "test"
}
}
}],
"MaskedPaths": ["/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware"],
"ReadonlyPaths": ["/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger"]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/ba8a39bdb1d2e25d373b6b00c764be3d37353e57cf03981c4c3e5a20ae6a602b-init/diff:/var/lib/docker/overlay2/38780a41f93b7a20de03f1d76febb885f9213906fb30bad17cb3ad231fb7ce43/diff:/var/lib/docker/overlay2/a2abce521690b1baf6aa61e109a4659cb4272936871bc1afa73271eb8e453449/diff:/var/lib/docker/overlay2/a696286588d1d33b994b7f6e31c176c5f7e67c4f757d730323a7b6591d55f786/diff:/var/lib/docker/overlay2/c4bd8133c0d9547945d38a9998439082ce7b53df7e64737add5a5c824e6f67f2/diff:/var/lib/docker/overlay2/110e275ef21b8c9cc2cd0cce312fed5aabceb056460f637b958dfee56b7b3be8/diff:/var/lib/docker/overlay2/831c8a624e424f298766028e76a8ac08df0c5cf4564f63cae61330a8bce0cf63/diff:/var/lib/docker/overlay2/7ad8ae774951ec40c68b0993ef07ef3d70aa8aed44ea9f1e4d943ca5404cc717/diff:/var/lib/docker/overlay2/19bca9fb61ef1156f8a97313c126a6c06d7fe44a6c49e3affe16f50f2d5e56ff/diff:/var/lib/docker/overlay2/dcd4dda04d06b0a0c7e78517c6209fd67735b3027afda2c85a92de37ff7297d1/diff:/var/lib/docker/overlay2/babf41f5fe1f7b88c17cfce27214a4ad9473b0f8e0f118db948d2acddf4d4798/diff:/var/lib/docker/overlay2/b5f97865010acd5b04b4031d6223cd0b34fab89267891d61256ea16936be52f8/diff:/var/lib/docker/overlay2/6aba0159141ebb6d6783181d154c65046447b7d2bebce65d44c4939ba7943cca/diff:/var/lib/docker/overlay2/c71c34fe0e7e95409a9fc18698f0aee505940fd96aa3718836e2d89f3cfb2d49/diff:/var/lib/docker/overlay2/3be993436e2a6764a6c3c57a2e948f7a57e45ed0ec26cdd3366f4c1106c69869/diff",
"MergedDir": "/var/lib/docker/overlay2/ba8a39bdb1d2e25d373b6b00c764be3d37353e57cf03981c4c3e5a20ae6a602b/merged",
"UpperDir": "/var/lib/docker/overlay2/ba8a39bdb1d2e25d373b6b00c764be3d37353e57cf03981c4c3e5a20ae6a602b/diff",
"WorkDir": "/var/lib/docker/overlay2/ba8a39bdb1d2e25d373b6b00c764be3d37353e57cf03981c4c3e5a20ae6a602b/work"
},
"Name": "overlay2"
},
"Mounts": [{
"Type": "volume",
"Name": "test_mariadbdata",
"Source": "/var/lib/docker/volumes/test_mariadbdata/_data",
"Destination": "/var/lib/mysql",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}],
"Config": {
"Hostname": "56c57903b6da",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"3306/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": ["MYSQL_DATABASE=mydb", "MYSQL_PASSWORD=mypassword", "MYSQL_ROOT_PASSWORD=my-secret-pw", "MYSQL_USER=myuser", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "GOSU_VERSION=1.10", "GPG_KEYS=177F4010FE56CA3336300305F1656F24C74CD1D8", "MARIADB_MAJOR=10.4", "MARIADB_VERSION=1:10.4.12+maria~bionic"],
"Cmd": ["mysqld"],
"Image": "mariadb:10@sha256:d1ceee944c90ee3b596266de1b0ac25d2f34adbe9c35156b75bcb9a7047c7545",
"Volumes": {
"/var/lib/mysql": {}
},
"WorkingDir": "",
"Entrypoint": ["docker-entrypoint.sh"],
"OnBuild": null,
"Labels": {
"com.docker.stack.namespace": "test",
"com.docker.swarm.node.id": "gj73oe0vgmldlv2pdcj243231",
"com.docker.swarm.service.id": "jewh88xvythjkga24wy1thxc2",
"com.docker.swarm.service.name": "test_mariadb",
"com.docker.swarm.task": "",
"com.docker.swarm.task.id": "q4uji32qvw4tkuvwx3pbnbgqq",
"com.docker.swarm.task.name": "test_mariadb.1.q4uji32qvw4tkuvwx3pbnbgqq",
"restic-compose-backup.mariadb": "true"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5aa81b0859dfd6f6be629eb966ce365f22dc86620359cce3e3d25d5291b539db",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"3306/tcp": null
},
"SandboxKey": "/var/run/docker/netns/5aa81b0859df",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"test_default": {
"IPAMConfig": {
"IPv4Address": "10.0.1.3"
},
"Links": null,
"Aliases": ["56c57903b6da"],
"NetworkID": "8aweh54u31eq3i47vqdr2aonc",
"EndpointID": "5369b4c82a479a3e9dfb3547cb7ac3a0fab888e38ad5c1d0ad02b0e9a9523a64",
"Gateway": "",
"IPAddress": "10.0.1.3",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:0a:00:01:03",
"DriverOpts": null
}
}
}
}


@@ -0,0 +1,93 @@
{
"ID": "k5427pk4t7ss4d7ylacumeavz",
"Version": {
"Index": 30
},
"CreatedAt": "2020-03-08T17:25:59.451947759Z",
"UpdatedAt": "2020-03-08T17:26:38.552002711Z",
"Spec": {
"Labels": {},
"Role": "manager",
"Availability": "active"
},
"Description": {
"Hostname": "docker-desktop",
"Platform": {
"Architecture": "x86_64",
"OS": "linux"
},
"Resources": {
"NanoCPUs": 4000000000,
"MemoryBytes": 2085535744
},
"Engine": {
"EngineVersion": "19.03.5",
"Plugins": [{
"Type": "Log",
"Name": "awslogs"
}, {
"Type": "Log",
"Name": "fluentd"
}, {
"Type": "Log",
"Name": "gcplogs"
}, {
"Type": "Log",
"Name": "gelf"
}, {
"Type": "Log",
"Name": "journald"
}, {
"Type": "Log",
"Name": "json-file"
}, {
"Type": "Log",
"Name": "local"
}, {
"Type": "Log",
"Name": "logentries"
}, {
"Type": "Log",
"Name": "splunk"
}, {
"Type": "Log",
"Name": "syslog"
}, {
"Type": "Network",
"Name": "bridge"
}, {
"Type": "Network",
"Name": "host"
}, {
"Type": "Network",
"Name": "ipvlan"
}, {
"Type": "Network",
"Name": "macvlan"
}, {
"Type": "Network",
"Name": "null"
}, {
"Type": "Network",
"Name": "overlay"
}, {
"Type": "Volume",
"Name": "local"
}]
},
"TLSInfo": {
"TrustRoot": "-----BEGIN CERTIFICATE-----\nMIIBazCCARCgAwIBAgIUfx7TP8c4SHCrwPPxjSFJQcfTP5QwCgYIKoZIzj0EAwIw\nEzERMA8GA1UEAxMIc3dhcm0tY2EwHhcNMjAwMzA4MTcyMTAwWhcNNDAwMzAzMTcy\nMTAwWjATMREwDwYDVQQDEwhzd2FybS1jYTBZMBMGByqGSM49AgEGCCqGSM49AwEH\nA0IABGOa/9Rdd6qNc24wvuL/I9t5Vt3MJzlwC+WN0R6HrA4Ik1h2dmSRZTQqnCI7\nWh16y+PLaFwIfN0JkN4FrpnUBsyjQjBAMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMB\nAf8EBTADAQH/MB0GA1UdDgQWBBTAPDjHRwjQhNGUxqE3COHCOQrOkjAKBggqhkjO\nPQQDAgNJADBGAiEAxd/lPEKy3gt3nfZ8DX7kDaaNH8jSPgCBx3ejUs3SoaUCIQD3\nZ8dVxNvG4+Gvn28mDjWhTNLCn0BYW6JFE8eTI0xv4A==\n-----END CERTIFICATE-----\n",
"CertIssuerSubject": "MBMxETAPBgNVBAMTCHN3YXJtLWNh",
"CertIssuerPublicKey": "MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEY5r/1F13qo1zbjC+4v8j23lW3cwnOXAL5Y3RHoesDgiTWHZ2ZJFlNCqcIjtaHXrL48toXAh83QmQ3gWumdQGzA=="
}
},
"Status": {
"State": "ready",
"Addr": "192.168.65.3"
},
"ManagerStatus": {
"Leader": true,
"Reachability": "reachable",
"Addr": "192.168.65.3:2377"
}
}


@@ -13,9 +13,9 @@ When releasing a bugfix version we need to update the
 main image as well.

 ```bash
-docker build src --tag zettaio/restic-compose-backup:0.3
-docker build src --tag zettaio/restic-compose-backup:0.3.3
-docker push zettaio/restic-compose-backup:0.3
-docker push zettaio/restic-compose-backup:0.3.3
+docker build src --tag zettaio/restic-compose-backup:0.6
+docker build src --tag zettaio/restic-compose-backup:0.6.0
+docker push zettaio/restic-compose-backup:0.5
+docker push zettaio/restic-compose-backup:0.5.0
 ```


@@ -1,6 +1,13 @@
 # DON'T COMMIT THIS FILE IF YOU MODIFY IN DEV
-DOCKER_BASE_URL=unix://tmp/docker.sock
+# DOCKER_HOST=unix://tmp/docker.sock
+# DOCKER_TLS_VERIFY=1
+# DOCKER_CERT_PATH=''
+
+SWARM_MODE=true
+INCLUDE_PROJECT_NAME=false
+EXCLUDE_BIND_MOUNTS=false
+
 RESTIC_REPOSITORY=/restic_data
 RESTIC_PASSWORD=password


@@ -1,10 +1,14 @@
 FROM restic/restic:0.9.6

-RUN apk update && apk add python3 dcron mariadb-client postgresql-client
+RUN apk update && apk add python3 \
+    dcron \
+    mariadb-client \
+    postgresql-client \
+    mariadb-connector-c-dev

 ADD . /restic-compose-backup
 WORKDIR /restic-compose-backup
-RUN pip3 install -U pip setuptools && pip3 install -e .
+RUN pip3 install -U pip setuptools wheel && pip3 install -e .

 ENV XDG_CACHE_HOME=/cache
 ENTRYPOINT []


@@ -1,7 +1,7 @@
 #!/bin/sh

 # Dump all env vars so we can source them in cron jobs
-printenv | sed 's/^\(.*\)$/export \1/g' > /env.sh
+rcb dump-env > /env.sh

 # Write crontab
 rcb crontab > crontab
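The old `printenv | sed` pipeline breaks when values contain quotes or other shell metacharacters; per the commit message, `rcb dump-env` emits properly escaped exports instead. A minimal sketch of that idea (not the project's actual implementation) using `shlex.quote`:

```python
import os
import shlex


def dump_env() -> str:
    """Render the environment as 'export KEY=VALUE' lines with
    shell-safe quoting, suitable for sourcing from cron jobs."""
    return "\n".join(
        "export {}={}".format(key, shlex.quote(value))
        for key, value in sorted(os.environ.items())
    )


# Usage: write dump_env() to /env.sh, then `. /env.sh` in a cron job.
```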


@@ -1 +1 @@
-__version__ = '0.4.0'
+__version__ = '0.7.1'


@@ -15,7 +15,7 @@ class SMTPAlert(BaseAlert):
         self.host = host
         self.port = port
         self.user = user
-        self.password = password
+        self.password = password or ""
         self.to = to

     @classmethod
@@ -34,7 +34,7 @@ class SMTPAlert(BaseAlert):
     @property
     def properly_configured(self) -> bool:
-        return self.host and self.port and self.user and self.password and len(self.to) > 0
+        return self.host and self.port and self.user and len(self.to) > 0

     def send(self, subject: str = None, body: str = None, alert_type: str = 'INFO'):
         # send_mail("Hello world!")


@@ -17,7 +17,7 @@ def run(image: str = None, command: str = None, volumes: dict = None,
        labels=labels,
        # auto_remove=True,  # We remove the container further down
        detach=True,
-       environment=environment,
+       environment=environment + ['BACKUP_PROCESS_CONTAINER=true'],
        volumes=volumes,
        network_mode=f'container:{source_container_id}',  # Reuse original container's network stack.
        working_dir=os.getcwd(),
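The runner above marks the spawned container with `BACKUP_PROCESS_CONTAINER=true`, and the cli later gates on that flag with a truthy-string check. `utils.is_true` itself is not shown in this compare; a plausible minimal helper matching how the flag is used would be:

```python
def is_true(value) -> bool:
    """Interpret typical truthy env var strings; anything else,
    including None, counts as false."""
    return str(value).strip().lower() in ("1", "true", "yes")
```

This is an assumed sketch; the project's real helper may accept additional spellings.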


@@ -51,12 +51,29 @@ def main():
     elif args.action == "crontab":
         crontab(config)
+    elif args.action == "dump-env":
+        dump_env()
+    # Random test stuff here
+    elif args.action == "test":
+        nodes = utils.get_swarm_nodes()
+        print("Swarm nodes:")
+        for node in nodes:
+            addr = node.attrs['Status']['Addr']
+            state = node.attrs['Status']['State']
+            print(' - {} {} {}'.format(node.id, addr, state))


 def status(config, containers):
     """Outputs the backup config for the compose setup"""
     logger.info("Status for compose project '%s'", containers.project_name)
     logger.info("Repository: '%s'", config.repository)
     logger.info("Backup currently running?: %s", containers.backup_process_running)
+    logger.info("Include project name in backup path?: %s", utils.is_true(config.include_project_name))
+    logger.debug("Exclude bind mounts from backups?: %s", utils.is_true(config.exclude_bind_mounts))
+
+    logger.info("Checking docker availability")
+    utils.list_containers()

     if containers.stale_backup_process_containers:
         utils.remove_containers(containers.stale_backup_process_containers)
@@ -79,12 +96,21 @@ def status(config, containers):
         if container.volume_backup_enabled:
             for mount in container.filter_mounts():
-                logger.info(' - volume: %s', mount.source)
+                logger.info(
+                    ' - volume: %s -> %s',
+                    mount.source,
+                    container.get_volume_backup_destination(mount, '/volumes'),
+                )

         if container.database_backup_enabled:
             instance = container.instance
             ping = instance.ping()
-            logger.info(' - %s (is_ready=%s)', instance.container_type, ping == 0)
+            logger.info(
+                ' - %s (is_ready=%s) -> %s',
+                instance.container_type,
+                ping == 0,
+                instance.backup_destination_path(),
+            )
             if ping != 0:
                 logger.error("Database '%s' in service %s cannot be reached",
                              instance.container_type, container.service_name)
@@ -117,6 +143,7 @@ def backup(config, containers):
     mounts = containers.generate_backup_mounts('/volumes')
     volumes.update(mounts)

+    logger.debug('Starting backup container with image %s', containers.this_container.image)
     try:
         result = backup_runner.run(
             image=containers.this_container.image,
@@ -151,12 +178,18 @@ def backup(config, containers):
 def start_backup_process(config, containers):
     """The actual backup process running inside the spawned container"""
-    if (not containers.backup_process_container
-            or containers.this_container == containers.backup_process_container is False):
+    if not utils.is_true(os.environ.get('BACKUP_PROCESS_CONTAINER')):
         logger.error(
             "Cannot run backup process in this container. Use backup command instead. "
             "This will spawn a new container with the necessary mounts."
         )
+        alerts.send(
+            subject="Cannot run backup process in this container",
+            body=(
+                "Cannot run backup process in this container. Use backup command instead. "
+                "This will spawn a new container with the necessary mounts."
+            )
+        )
         exit(1)

     status(config, containers)
@@ -260,6 +293,14 @@ def crontab(config):
     print(cron.generate_crontab(config))


+def dump_env():
+    """Dump all environment variables to a script that can be sourced from cron"""
+    print("#!/bin/bash")
+    print("# This file was generated by restic-compose-backup")
+    for key, value in os.environ.items():
+        print("export {}='{}'".format(key, value))
+
+
 def parse_args():
     parser = argparse.ArgumentParser(prog='restic_compose_backup')
     parser.add_argument(
@@ -273,6 +314,8 @@ def parse_args():
             'cleanup',
             'version',
             'crontab',
+            'dump-env',
+            'test',
         ],
     )
     parser.add_argument(
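The `dump-env` command above writes each variable as `export KEY='value'`, which breaks as soon as a value itself contains a single quote. A minimal sketch of a safer variant using the standard library's `shlex.quote`; `dump_env_lines` is an illustrative name, not a function in this repository:

```python
import os
import shlex


def dump_env_lines(environ=None):
    """Render environment variables as a sourceable bash script.

    shlex.quote() wraps unsafe values in single quotes and escapes any
    embedded quotes, so a value like "it's" survives a `source` intact.
    """
    environ = environ if environ is not None else os.environ
    lines = ["#!/bin/bash", "# This file was generated by restic-compose-backup"]
    for key, value in environ.items():
        lines.append("export {}={}".format(key, shlex.quote(value)))
    return lines


lines = dump_env_lines({"RESTIC_PASSWORD": "it's secret"})
print(lines[-1])  # export RESTIC_PASSWORD='it'"'"'s secret'
```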

View File

@@ -10,9 +10,11 @@ class Config:
         # Mandatory values
         self.repository = os.environ.get('RESTIC_REPOSITORY')
         self.password = os.environ.get('RESTIC_PASSWORD')
-        self.docker_base_url = os.environ.get('DOCKER_BASE_URL') or "unix://tmp/docker.sock"
         self.cron_schedule = os.environ.get('CRON_SCHEDULE') or self.default_crontab_schedule
         self.cron_command = os.environ.get('CRON_COMMAND') or self.default_backup_command
+        self.swarm_mode = os.environ.get('SWARM_MODE') or False
+        self.include_project_name = os.environ.get('INCLUDE_PROJECT_NAME') or False
+        self.exclude_bind_mounts = os.environ.get('EXCLUDE_BIND_MOUNTS') or False

         # Log
         self.log_level = os.environ.get('LOG_LEVEL')
@@ -32,3 +34,6 @@ class Config:
         if not self.password:
             raise ValueError("RESTIC_PASSWORD env var not set")
+
+
+config = Config()
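Note that the new flags above use `os.environ.get(...) or False`, which keeps the raw string (e.g. `"false"`) whenever the variable is set; that is why the values are run through `utils.is_true` before use elsewhere in the diff. A minimal sketch of that pattern, with `TRUE_VALUES` mirroring the list in `utils`:

```python
import os

# Mirrors utils.TRUE_VALUES in this project
TRUE_VALUES = ['1', 'true', 'True', True, 1]


def is_true(value):
    """Loose truthiness check used for env-driven flags."""
    return value in TRUE_VALUES


# `or False` leaves the raw string in place when the variable is set,
# so "false" would be truthy as a plain bool -- is_true() is required.
os.environ['EXCLUDE_BIND_MOUNTS'] = 'false'
flag = os.environ.get('EXCLUDE_BIND_MOUNTS') or False
print(bool(flag), is_true(flag))  # True False
```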

View File

@@ -1,9 +1,12 @@
 import os
+import logging
 from pathlib import Path
 from typing import List

 from restic_compose_backup import enums, utils
+from restic_compose_backup.config import config
+
+logger = logging.getLogger(__name__)

 VOLUME_TYPE_BIND = "bind"
 VOLUME_TYPE_VOLUME = "volume"
@@ -61,6 +64,37 @@ class Container:
         """Image name"""
         return self.get_config('Image')

+    @property
+    def name(self) -> str:
+        """Container name"""
+        return self._data['Name'].replace('/', '')
+
+    @property
+    def service_name(self) -> str:
+        """Name of the container/service"""
+        return self.get_label('com.docker.compose.service', default='') or \
+            self.get_label('com.docker.swarm.service.name', default='')
+
+    @property
+    def backup_process_label(self) -> str:
+        """str: The unique backup process label for this project"""
+        return f"{enums.LABEL_BACKUP_PROCESS}-{self.project_name}"
+
+    @property
+    def project_name(self) -> str:
+        """str: Name of the compose setup"""
+        return self.get_label('com.docker.compose.project', default='')
+
+    @property
+    def stack_name(self) -> str:
+        """str: Name of the stack, if present"""
+        return self.get_label("com.docker.stack.namespace")
+
+    @property
+    def is_oneoff(self) -> bool:
+        """Was this container started with run command?"""
+        return self.get_label('com.docker.compose.oneoff', default='False') == 'True'
+
     @property
     def environment(self) -> list:
         """All configured env vars for the container as a list"""
@@ -148,31 +182,6 @@ class Container:
         """bool: Is the container running?"""
         return self._state.get('Running', False)

-    @property
-    def name(self) -> str:
-        """Container name"""
-        return self._data['Name'].replace('/', '')
-
-    @property
-    def service_name(self) -> str:
-        """Name of the container/service"""
-        return self.get_label('com.docker.compose.service', default='')
-
-    @property
-    def backup_process_label(self) -> str:
-        """str: The unique backup process label for this project"""
-        return f"{enums.LABEL_BACKUP_PROCESS}-{self.project_name}"
-
-    @property
-    def project_name(self) -> str:
-        """Name of the compose setup"""
-        return self.get_label('com.docker.compose.project', default='')
-
-    @property
-    def is_oneoff(self) -> bool:
-        """Was this container started with run command?"""
-        return self.get_label('com.docker.compose.oneoff', default='False') == 'True'
-
     def get_config(self, name, default=None):
         """Get value from config dict"""
         return self._config.get(name, default)
@@ -185,11 +194,15 @@ class Container:
         """Get all mounts for this container matching include/exclude filters"""
         filtered = []

+        # If exclude_bind_mounts is true, only volume mounts are kept in the list of mounts
+        exclude_bind_mounts = utils.is_true(config.exclude_bind_mounts)
+        mounts = list(filter(lambda m: not exclude_bind_mounts or m.type == "volume", self._mounts))
+
         if not self.volume_backup_enabled:
             return filtered

         if self._include:
-            for mount in self._mounts:
+            for mount in mounts:
                 for pattern in self._include:
                     if pattern in mount.source:
                         break
@@ -199,14 +212,14 @@ class Container:
                     filtered.append(mount)

         elif self._exclude:
-            for mount in self._mounts:
+            for mount in mounts:
                 for pattern in self._exclude:
                     if pattern in mount.source:
                         break
                 else:
                     filtered.append(mount)
         else:
-            return self._mounts
+            return mounts

         return filtered
@@ -216,12 +229,26 @@ class Container:
         volumes = {}
         for mount in mounts:
             volumes[mount.source] = {
-                'bind': str(Path(source_prefix) / self.service_name / Path(utils.strip_root(mount.destination))),
+                'bind': self.get_volume_backup_destination(mount, source_prefix),
                 'mode': mode,
             }

         return volumes

+    def get_volume_backup_destination(self, mount, source_prefix) -> str:
+        """Get the destination path for backups of the given mount"""
+        destination = Path(source_prefix)
+
+        if utils.is_true(config.include_project_name):
+            project_name = self.project_name
+            if project_name != '':
+                destination /= project_name
+
+        destination /= self.service_name
+        destination /= Path(utils.strip_root(mount.destination))
+
+        return str(destination)
+
     def get_credentials(self) -> dict:
         """dict: get credentials for the service"""
         raise NotImplementedError("Base container class don't implement this")
@@ -234,6 +261,10 @@ class Container:
         """Back up this service"""
         raise NotImplementedError("Base container class don't implement this")

+    def backup_destination_path(self) -> str:
+        """Return the path backups will be saved at"""
+        raise NotImplementedError("Base container class don't implement this")
+
     def dump_command(self) -> list:
         """list: create a dump command restic can use to send data through stdin"""
         raise NotImplementedError("Base container class don't implement this")
@@ -352,11 +383,22 @@ class RunningContainers:
             if container.is_backup_process_container:
                 self.backup_process_container = container

-            # Detect containers belonging to the current compose setup
-            if (container.project_name == self.this_container.project_name
-                    and not container.is_oneoff):
-                if container.id != self.this_container.id:
-                    self.containers.append(container)
+            # --- Determine what containers should be evaluated
+
+            # If not swarm mode we need to filter in compose project
+            if not config.swarm_mode:
+                if container.project_name != self.this_container.project_name:
+                    continue
+
+            # Containers started manually are not included
+            if container.is_oneoff:
+                continue
+
+            # Do not include the backup process container
+            if container == self.backup_process_container:
+                continue
+
+            self.containers.append(container)

     @property
     def project_name(self) -> str:
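The rewritten container-selection loop above reduces to a single predicate. A hedged, Docker-free sketch of those rules; `should_back_up` and the plain dicts are illustrative stand-ins, not project API:

```python
def should_back_up(container, this_container, backup_process_container, swarm_mode):
    """Decide whether a detected container belongs in the backup set.

    Mirrors the selection rules above: outside swarm mode only containers
    from the same compose project are considered; one-off containers and
    the backup process container itself are always skipped.
    """
    if not swarm_mode and container['project'] != this_container['project']:
        return False
    if container['oneoff']:
        return False
    if container is backup_process_container:
        return False
    return True


this = {'project': 'myproject', 'oneoff': False}
web = {'project': 'myproject', 'oneoff': False}
other = {'project': 'elsewhere', 'oneoff': False}
print(should_back_up(web, this, None, swarm_mode=False))    # True
print(should_back_up(other, this, None, swarm_mode=False))  # False
print(should_back_up(other, this, None, swarm_mode=True))   # True
```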

View File

@@ -1,5 +1,7 @@
+from pathlib import Path
+
 from restic_compose_backup.containers import Container
-from restic_compose_backup.config import Config
+from restic_compose_backup.config import config, Config
 from restic_compose_backup import (
     commands,
     restic,
@@ -39,6 +41,7 @@ class MariadbContainer(Container):
             f"--port={creds['port']}",
             f"--user={creds['username']}",
             "--all-databases",
+            "--no-tablespaces",
         ]

     def backup(self):
@@ -48,10 +51,23 @@ class MariadbContainer(Container):
         with utils.environment('MYSQL_PWD', creds['password']):
             return restic.backup_from_stdin(
                 config.repository,
-                f'/databases/{self.service_name}/all_databases.sql',
+                self.backup_destination_path(),
                 self.dump_command(),
             )

+    def backup_destination_path(self) -> str:
+        destination = Path("/databases")
+
+        if utils.is_true(config.include_project_name):
+            project_name = self.project_name
+            if project_name != "":
+                destination /= project_name
+
+        destination /= self.service_name
+        destination /= "all_databases.sql"
+
+        return str(destination)
+

 class MysqlContainer(Container):
     container_type = 'mysql'
@@ -85,6 +101,7 @@ class MysqlContainer(Container):
             f"--port={creds['port']}",
             f"--user={creds['username']}",
             "--all-databases",
+            "--no-tablespaces",
         ]

     def backup(self):
def backup(self): def backup(self):
@ -94,10 +111,23 @@ class MysqlContainer(Container):
with utils.environment('MYSQL_PWD', creds['password']): with utils.environment('MYSQL_PWD', creds['password']):
return restic.backup_from_stdin( return restic.backup_from_stdin(
config.repository, config.repository,
f'/databases/{self.service_name}/all_databases.sql', self.backup_destination_path(),
self.dump_command(), self.dump_command(),
) )
def backup_destination_path(self) -> str:
destination = Path("/databases")
if utils.is_true(config.include_project_name):
project_name = self.project_name
if project_name != "":
destination /= project_name
destination /= self.service_name
destination /= "all_databases.sql"
return destination
class PostgresContainer(Container): class PostgresContainer(Container):
container_type = 'postgres' container_type = 'postgres'
@@ -141,6 +171,19 @@ class PostgresContainer(Container):
         with utils.environment('PGPASSWORD', creds['password']):
             return restic.backup_from_stdin(
                 config.repository,
-                f"/databases/{self.service_name}/{creds['database']}.sql",
+                self.backup_destination_path(),
                 self.dump_command(),
             )

+    def backup_destination_path(self) -> str:
+        destination = Path("/databases")
+
+        if utils.is_true(config.include_project_name):
+            project_name = self.project_name
+            if project_name != "":
+                destination /= project_name
+
+        destination /= self.service_name
+        destination /= f"{self.get_credentials()['database']}.sql"
+
+        return str(destination)

View File

@@ -22,7 +22,7 @@ def setup(level: str = 'warning'):
     ch = logging.StreamHandler(stream=sys.stdout)
     ch.setLevel(level)
-    # ch.setFormatter(logging.Formatter(f'%(asctime)s - {HOSTNAME} - %(name)s - %(levelname)s - %(message)s'))
-    # ch.setFormatter(logging.Formatter(f'%(asctime)s - {HOSTNAME} - %(levelname)s - %(message)s'))
-    ch.setFormatter(logging.Formatter(f'%(asctime)s - %(levelname)s: %(message)s'))
+    # ch.setFormatter(logging.Formatter('%(asctime)s - {HOSTNAME} - %(name)s - %(levelname)s - %(message)s'))
+    # ch.setFormatter(logging.Formatter('%(asctime)s - {HOSTNAME} - %(levelname)s - %(message)s'))
+    ch.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s: %(message)s'))
     logger.addHandler(ch)

View File

@@ -1,9 +1,11 @@
 import os
 import logging
-from typing import List
+from typing import List, TYPE_CHECKING
 from contextlib import contextmanager

 import docker

-from restic_compose_backup.config import Config
+if TYPE_CHECKING:
+    from restic_compose_backup.containers import Container

 logger = logging.getLogger(__name__)
@@ -11,8 +13,18 @@ TRUE_VALUES = ['1', 'true', 'True', True, 1]


 def docker_client():
-    config = Config()
-    return docker.DockerClient(base_url=config.docker_base_url)
+    """
+    Create a docker client from the following environment variables::
+
+        DOCKER_HOST=unix://tmp/docker.sock
+        DOCKER_TLS_VERIFY=1
+        DOCKER_CERT_PATH=''
+    """
+    # NOTE: Remove this fallback in 1.0
+    if not os.environ.get('DOCKER_HOST'):
+        os.environ['DOCKER_HOST'] = 'unix://tmp/docker.sock'
+
+    return docker.from_env()


 def list_containers() -> List[dict]:
@@ -28,6 +40,18 @@ def list_containers() -> List[dict]:
     return [c.attrs for c in all_containers]


+def get_swarm_nodes():
+    client = docker_client()
+    # NOTE: If not a swarm node docker.errors.APIError is raised
+    #       503 Server Error: Service Unavailable
+    #       ("This node is not a swarm manager. Use "docker swarm init" or
+    #       "docker swarm join" to connect this node to swarm and try again.")
+    try:
+        return client.nodes.list()
+    except docker.errors.APIError:
+        return []
+
+
 def remove_containers(containers: List['Container']):
     client = docker_client()
     logger.info('Attempting to delete stale backup process containers')
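`docker.from_env()` reads `DOCKER_HOST`, `DOCKER_TLS_VERIFY` and `DOCKER_CERT_PATH`; the fallback above only seeds `DOCKER_HOST` before delegating. The defaulting step in isolation, as a minimal sketch (`ensure_docker_host` is an illustrative name):

```python
import os


def ensure_docker_host(default='unix://tmp/docker.sock'):
    """Seed DOCKER_HOST so docker.from_env() finds the mounted socket.

    Mirrors the fallback above, which is slated for removal in 1.0 once
    DOCKER_HOST is always provided. Returns the effective value.
    """
    if not os.environ.get('DOCKER_HOST'):
        os.environ['DOCKER_HOST'] = default
    return os.environ['DOCKER_HOST']


os.environ.pop('DOCKER_HOST', None)  # simulate an unset variable
print(ensure_docker_host())          # unix://tmp/docker.sock
os.environ['DOCKER_HOST'] = 'tcp://dockerhost:2375'
print(ensure_docker_host())          # tcp://dockerhost:2375 (kept as-is)
```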

View File

@@ -3,12 +3,15 @@ from setuptools import setup, find_namespace_packages

 setup(
     name="restic-compose-backup",
     url="https://github.com/ZettaIO/restic-compose-backup",
-    version="0.4.0",
+    version="0.7.1",
     author="Einar Forselv",
     author_email="eforselv@gmail.com",
-    packages=find_namespace_packages(include=['restic_compose_backup']),
+    packages=find_namespace_packages(include=[
+        'restic_compose_backup',
+        'restic_compose_backup.*',
+    ]),
     install_requires=[
-        'docker==4.1.*',
+        'docker~=6.1.3',
     ],
     entry_points={'console_scripts': [
         'restic-compose-backup = restic_compose_backup.cli:main',

View File

@@ -160,7 +160,7 @@
         "OpenStdin": true,
         "StdinOnce": true,
         "Env": [
-            "DOCKER_BASE_URL=unix://tmp/docker.sock",
+            "DOCKER_HOST=unix://tmp/docker.sock",
             "RESTIC_REPOSITORY=/tmp/backup",
             "RESTIC_PASSWORD=password",
             "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

View File

@@ -3,6 +3,9 @@ import os
 import unittest
 from unittest import mock

+os.environ['RESTIC_REPOSITORY'] = "test"
+os.environ['RESTIC_PASSWORD'] = "password"
+
 from restic_compose_backup import utils
 from restic_compose_backup.containers import RunningContainers
 import fixtures
@@ -15,8 +18,8 @@ class ResticBackupTests(unittest.TestCase):
     @classmethod
     def setUpClass(cls):
         """Set up basic environment variables"""
-        os.environ['RESTIC_REPOSITORY'] = "test"
-        os.environ['RESTIC_PASSWORD'] = "password"
+        # os.environ['RESTIC_REPOSITORY'] = "test"
+        # os.environ['RESTIC_PASSWORD'] = "password"

     def createContainers(self):
         backup_hash = fixtures.generate_sha256()
@@ -77,7 +80,7 @@ class ResticBackupTests(unittest.TestCase):
         ]
         with mock.patch(list_containers_func, fixtures.containers(containers=containers)):
             result = RunningContainers()
-            self.assertEqual(len(result.containers), 3, msg="Three containers expected")
+            self.assertEqual(len(result.containers), 4, msg="Four containers expected")
             self.assertNotEqual(result.this_container, None, msg="No backup container found")
             web_service = result.get_service('web')
             self.assertNotEqual(web_service, None)

30
swarm-stack.yml Normal file
View File

@@ -0,0 +1,30 @@
+version: '3.7'
+
+services:
+  mariadb:
+    image: mariadb:10
+    labels:
+      restic-compose-backup.mariadb: "true"
+    environment:
+      - MYSQL_ROOT_PASSWORD=my-secret-pw
+      - MYSQL_DATABASE=mydb
+      - MYSQL_USER=myuser
+      - MYSQL_PASSWORD=mypassword
+    networks:
+      - global
+    volumes:
+      - mariadbdata:/var/lib/mysql
+
+  files:
+    image: nginx:1.17-alpine
+    labels:
+      restic-compose-backup.volumes: "true"
+    volumes:
+      - files:/srv/files
+
+volumes:
+  mariadbdata:
+  files:
+
+networks:
+  global:
+    external: true

View File

@@ -5,13 +5,13 @@
 skipsdist = True
 setupdir = {toxinidir}/src
 envlist =
-    py37
+    py38
     pep8

 [testenv]
 usedevelop = True
 basepython =
-    py37: python3.7
+    py38: python3.8
 deps =
     -r{toxinidir}/src/tests/requirements.txt
@@ -23,7 +23,7 @@ commands =
 [testenv:pep8]
 usedevelop = false
 deps = flake8
-basepython = python3.7
+basepython = python3.8
 commands = flake8

 [pytest]
@@ -53,4 +53,4 @@ norecursedirs = tests/* .venv/* .tox/* build/ docs/
 ignore = H405,D100,D101,D102,D103,D104,D105,D200,D202,D203,D204,D205,D211,D301,D400,D401,W503
 show-source = True
 max-line-length = 120
-exclude = .tox,.venv*,tests,build,conf.py
+exclude = .tox,env,tests,build,conf.py