Compare commits

..

No commits in common. "master" and "2.0.0.dev0" have entirely different histories.

505 changed files with 57409 additions and 69413 deletions

.appveyor.yml Normal file

@@ -0,0 +1,27 @@
version: '{build}'
environment:
matrix:
- PYTHON: C:\Python38-x64
# Disable automatic builds
build: off
# Build artifacts: all wheel and exe files in the dist folder
artifacts:
- path: 'dist\*.whl'
- path: 'dist\*.exe'
install:
- ps: scripts\win-download-openssl.ps1
- ps: |
& $env:PYTHON\python.exe -m venv borg-env
borg-env\Scripts\activate.ps1
python -m pip install -U pip
pip install -r requirements.d/development.txt
pip install wheel pyinstaller
build_script:
- ps: |
borg-env\Scripts\activate.ps1
scripts\win-build.ps1

.coafile Normal file

@@ -0,0 +1,36 @@
[all]
# note: put developer specific settings into ~/.coarc (e.g. editor = ...)
max_line_length = 255
use_spaces = True
ignore = src/borg/(checksums.c|chunker.c|compress.c|hashindex.c|item.c),
src/borg/crypto/low_level.c,
src/borg/platform/*.c
[all.general]
files = src/borg/**/*.(py|pyx|c)
bears = SpaceConsistencyBear, FilenameBear, InvalidLinkBear, LineLengthBear
file_naming_convention = snake
[all.python]
files = src/borg/**/*.py
bears = PEP8Bear, PyDocStyleBear, PyLintBear
pep_ignore = E123,E125,E126,E127,E128,E226,E301,E309,E402,F401,F405,F811,W690
pylint_disable = C0103, C0111, C0112, C0122, C0123, C0301, C0302, C0325, C0330, C0411, C0412, C0413, C1801,
I1101,
W0102, W0104, W0106, W0108, W0120, W0201, W0212, W0221, W0231, W0401, W0404,
W0511, W0603, W0611, W0612, W0613, W0614, W0621, W0622, W0640, W0702, W0703,
W1201, W1202, W1401,
R0101, R0201, R0204, R0901, R0902, R0903, R0904, R0911, R0912, R0913, R0914, R0915,
R0916, R1701, R1704, R1705, R1706, R1710,
E0102, E0202, E0401, E0601, E0611, E0702, E1101, E1102, E1120, E1129, E1130
pydocstyle_ignore = D100, D101, D102, D103, D104, D105, D200, D201, D202, D203, D204, D205, D209, D210,
D212, D213, D300, D301, D400, D401, D402, D403, D404
[all.c]
files = src/borg/**/*.c
bears = CPPCheckBear
[all.html]
files = src/borg/**/*.html
bears = HTMLLintBear
htmllint_ignore = *

.coveragerc Normal file

@@ -0,0 +1,24 @@
[run]
branch = True
disable_warnings = module-not-measured
source = src/borg
omit =
*/borg/__init__.py
*/borg/__main__.py
*/borg/_version.py
*/borg/fuse.py
*/borg/support/*
*/borg/testsuite/*
*/borg/hash_sizes.py
[report]
exclude_lines =
pragma: no cover
pragma: freebsd only
pragma: unknown platform only
def __repr__
raise AssertionError
raise NotImplementedError
if 0:
if __name__ == .__main__.:
ignore_errors = True
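Each entry under exclude_lines is a regular expression that coverage.py matches against source lines to drop them from the report. As a quick illustration of how the INI multi-line value parses, here is a standard-library sketch (the config string is a shortened excerpt of the section above, not the full file):

```python
import configparser

# Sketch only: coverage.py reads .coveragerc as an INI file; every entry in
# exclude_lines is a regex matched against source lines, and matching lines
# are dropped from the coverage report.
config_text = """
[report]
exclude_lines =
    pragma: no cover
    def __repr__
    raise NotImplementedError
"""

parser = configparser.ConfigParser()
parser.read_string(config_text)

# Multi-line INI values keep embedded newlines; split and drop the blanks.
patterns = [line for line in parser["report"]["exclude_lines"].splitlines() if line]
print(patterns)
```

This shows why the patterns can be listed one per continuation line: configparser joins them into a single newline-separated value, which coverage.py then splits into individual regexes.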


@@ -1,2 +0,0 @@
# Migrate code style to Black
7957af562d5ce8266b177039783be4dc8bdd7898

.github/FUNDING.yml vendored

@@ -1,6 +1,5 @@
 # These are supported funding model platforms
-github: borgbackup
-liberapay: borgbackup
-open_collective: borgbackup
+# github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
+# liberapay: borgbackup
 custom: ['https://www.borgbackup.org/support/fund.html']


@@ -1,54 +1,56 @@
 <!--
 Thank you for reporting an issue.
-*IMPORTANT* Before creating a new issue, please look around:
-- BorgBackup documentation: https://borgbackup.readthedocs.io/en/stable/index.html
+*IMPORTANT* - *before* creating a new issue please look around:
+- Borgbackup documentation: http://borgbackup.readthedocs.io/en/stable/index.html
 - FAQ: https://borgbackup.readthedocs.io/en/stable/faq.html
-- Open issues in the GitHub tracker: https://github.com/borgbackup/borg/issues
+and
+- open issues in Github tracker: https://github.com/borgbackup/borg/issues
 If you cannot find a similar problem, then create a new issue.
 Please fill in as much of the template as possible.
 -->
-## Have you checked the BorgBackup docs, FAQ, and open GitHub issues?
+## Have you checked borgbackup docs, FAQ, and open Github issues?
 No
-## Is this a bug/issue report or a question?
+## Is this a BUG / ISSUE report or a QUESTION?
-Bug/Issue/Question
+Invalid
-## System information. For client/server mode, post info for both machines.
+## System information. For client/server mode post info for both machines.
-#### Your Borg version (borg -V).
+#### Your borg version (borg -V).
 #### Operating system (distribution) and version.
-#### Hardware/network configuration and filesystems used.
+#### Hardware / network configuration, and filesystems used.
-#### How much data is handled by Borg?
+#### How much data is handled by borg?
-#### Full Borg command line that led to the problem (leave out excludes and passwords).
+#### Full borg commandline that lead to the problem (leave away excludes and passwords)
 ## Describe the problem you're observing.
 #### Can you reproduce the problem? If so, describe how. If not, describe troubleshooting steps you took before opening the issue.
-#### Include any warnings/errors/backtraces from the system logs
+#### Include any warning/errors/backtraces from the system logs
 <!--
-If this complaint relates to Borg performance, please include CRUD benchmark
+If this complaint relates to borg performance, please include CRUD benchmark
 results and any steps you took to troubleshoot.
-How to run the benchmark: https://borgbackup.readthedocs.io/en/stable/usage/benchmark.html
+How to run benchmark: http://borgbackup.readthedocs.io/en/stable/usage/benchmark.html
-*IMPORTANT* Please mark logs and terminal command output, otherwise GitHub will not display them correctly.
+*IMPORTANT* - Please mark logs and text output from terminal commands
+or else Github will not display them correctly.
 An example is provided below.
 Example:
 ```
-this is an example of how log text should be marked (wrap it with ```)
+this is an example how log text should be marked (wrap it with ```)
 ```
 -->

.github/PULL_REQUEST_TEMPLATE vendored Normal file

@@ -0,0 +1,8 @@
Thank you for contributing code to Borg, your help is appreciated!
Please, before you submit a pull request, make sure it complies with the
guidelines given in our documentation:
https://borgbackup.readthedocs.io/en/latest/development.html#contributions
**Please remove all above text before submitting your pull request.**


@@ -1,18 +0,0 @@
<!--
Thank you for contributing to BorgBackup!
Please make sure your PR complies with our contribution guidelines:
https://borgbackup.readthedocs.io/en/latest/development.html#contributions
-->
## Description
<!-- What does this PR do? Reference any related issues with "fixes #XXXX". -->
## Checklist
- [ ] PR is against `master` (or maintenance branch if only applicable there)
- [ ] New code has tests and docs where appropriate
- [ ] Tests pass (run `tox` or the relevant test subset)
- [ ] Commit messages are clean and reference related issues


@@ -1,24 +0,0 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
groups:
actions:
patterns:
- "*"
- package-ecosystem: "pip"
directory: "/requirements.d"
ignore:
- dependency-name: "black"
schedule:
interval: "weekly"
cooldown:
semver-major-days: 90
semver-minor-days: 30
groups:
pip-dependencies:
patterns:
- "*"


@@ -1,38 +0,0 @@
name: Backport pull request
on:
pull_request_target:
types: [closed]
issue_comment:
types: [created]
permissions:
contents: write # so it can comment
pull-requests: write # so it can create pull requests
jobs:
backport:
name: Backport pull request
runs-on: ubuntu-24.04
timeout-minutes: 5
# Only run when pull request is merged
# or when a comment starting with `/backport` is created by someone other than the
# https://github.com/backport-action bot user (user id: 97796249). Note that if you use your
# own PAT as `github_token`, that you should replace this id with yours.
if: >
(
github.event_name == 'pull_request_target' &&
github.event.pull_request.merged
) || (
github.event_name == 'issue_comment' &&
github.event.issue.pull_request &&
github.event.comment.user.id != 97796249 &&
startsWith(github.event.comment.body, '/backport')
)
steps:
- uses: actions/checkout@v6
- name: Create backport pull requests
uses: korthout/backport-action@v4
with:
label_pattern: '^port/(.+)$'


@@ -1,30 +0,0 @@
# https://black.readthedocs.io/en/stable/integrations/github_actions.html#usage
# See also what we use locally in requirements.d/codestyle.txt — this should be the same version here.
name: Lint
on:
push:
paths:
- '**.py'
- 'pyproject.toml'
- '.github/workflows/black.yaml'
pull_request:
paths:
- '**.py'
- 'pyproject.toml'
- '.github/workflows/black.yaml'
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
jobs:
lint:
runs-on: ubuntu-22.04
timeout-minutes: 5
steps:
- uses: actions/checkout@v6
- uses: psf/black@6305bf1ae645ab7541be4f5028a86239316178eb # 26.1.0
with:
version: "~= 24.0"


@@ -5,8 +5,16 @@ name: CI
 on:
 push:
 branches: [ master ]
-tags:
-- '2.*'
+paths:
+- '**.py'
+- '**.pyx'
+- '**.c'
+- '**.h'
+- '**.yml'
+- '**.cfg'
+- '**.ini'
+- 'requirements.d/*'
+- '!docs/**'
 pull_request:
 branches: [ master ]
 paths:
@@ -15,670 +23,109 @@ on:
 - '**.c'
 - '**.h'
 - '**.yml'
-- '**.toml'
 - '**.cfg'
 - '**.ini'
 - 'requirements.d/*'
 - '!docs/**'
-concurrency:
-group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
-cancel-in-progress: ${{ github.event_name == 'pull_request' }}
-permissions:
-contents: read
 jobs:
 lint:
-runs-on: ubuntu-22.04
+runs-on: ubuntu-latest
-timeout-minutes: 5
+timeout-minutes: 10
 steps:
-- uses: actions/checkout@v6
+- uses: actions/checkout@v2
-- uses: astral-sh/ruff-action@v3
-security:
-runs-on: ubuntu-24.04
-timeout-minutes: 5
-steps:
-- uses: actions/checkout@v6
 - name: Set up Python
-uses: actions/setup-python@v6
+uses: actions/setup-python@v2
 with:
-python-version: '3.10'
+python-version: 3.9
-- name: Install dependencies
+- name: Lint with flake8
 run: |
-python -m pip install --upgrade pip
+pip install flake8
-pip install bandit[toml]
+flake8 src scripts conftest.py
-- name: Run Bandit
-run: |
-bandit -r src/borg -c pyproject.toml
-asan_ubsan:
+pytest:
-runs-on: ubuntu-24.04
+needs: lint
timeout-minutes: 25
needs: [lint]
steps:
- uses: actions/checkout@v6
with:
# Just fetching one commit is not enough for setuptools-scm, so we fetch all.
fetch-depth: 0
fetch-tags: true
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: '3.12'
- name: Install system packages
run: |
sudo apt-get update
sudo apt-get install -y pkg-config build-essential
sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev liblz4-dev
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.d/development.txt
- name: Build Borg with ASan/UBSan
# Build the C/Cython extensions with AddressSanitizer and UndefinedBehaviorSanitizer enabled.
# How this works:
# - The -fsanitize=address,undefined flags inject runtime checks into our native code. If a bug is hit
# (e.g., buffer overflow, use-after-free, out-of-bounds, or undefined behavior), the sanitizer prints
# a detailed error report to stderr, including a stack trace, and forces the process to exit with
# non-zero status. In CI, this will fail the step/job so you will notice.
# - ASAN_OPTIONS/UBSAN_OPTIONS configure the sanitizers' runtime behavior (see below for meanings).
env:
CFLAGS: "-O1 -g -fno-omit-frame-pointer -fsanitize=address,undefined"
CXXFLAGS: "-O1 -g -fno-omit-frame-pointer -fsanitize=address,undefined"
LDFLAGS: "-fsanitize=address,undefined"
# ASAN_OPTIONS controls AddressSanitizer runtime tweaks:
# - detect_leaks=0: Disable LeakSanitizer to avoid false positives with CPython/pymalloc in short-lived tests.
# - strict_string_checks=1: Make invalid string operations (e.g., over-reads) more likely to be detected.
# - check_initialization_order=1: Catch uses that depend on static initialization order (C++).
# - detect_stack_use_after_return=1: Detect stack-use-after-return via stack poisoning (may increase overhead).
ASAN_OPTIONS: "detect_leaks=0:strict_string_checks=1:check_initialization_order=1:detect_stack_use_after_return=1"
# UBSAN_OPTIONS controls UndefinedBehaviorSanitizer runtime:
# - print_stacktrace=1: Include a stack trace for UB reports to ease debugging.
# Note: UBSan is recoverable by default (process may continue after reporting). If you want CI to
# abort immediately and fail on the first UB, add `halt_on_error=1` (e.g., UBSAN_OPTIONS="print_stacktrace=1:halt_on_error=1").
UBSAN_OPTIONS: "print_stacktrace=1"
# PYTHONDEVMODE enables additional Python runtime checks and warnings.
PYTHONDEVMODE: "1"
run: pip install -e .
- name: Run tests under sanitizers
env:
ASAN_OPTIONS: "detect_leaks=0:strict_string_checks=1:check_initialization_order=1:detect_stack_use_after_return=1"
UBSAN_OPTIONS: "print_stacktrace=1"
PYTHONDEVMODE: "1"
# Ensure the ASan runtime is loaded first to avoid "ASan runtime does not come first" warnings.
# We discover libasan/libubsan paths via gcc and preload them for the Python test process.
# the remote tests are slow and likely won't find anything useful
run: |
set -euo pipefail
export LD_PRELOAD="$(gcc -print-file-name=libasan.so):$(gcc -print-file-name=libubsan.so)"
echo "Using LD_PRELOAD=$LD_PRELOAD"
pytest -v --benchmark-skip -k "not remote"
native_tests:
needs: [lint]
permissions:
contents: read
id-token: write
attestations: write
 strategy:
-fail-fast: true
-# noinspection YAMLSchemaValidation
-matrix: >-
-${{ fromJSON(
-github.event_name == 'pull_request' && '{
-"include": [
-{"os": "ubuntu-22.04", "python-version": "3.10", "toxenv": "mypy"},
-{"os": "ubuntu-22.04", "python-version": "3.11", "toxenv": "docs"},
-{"os": "ubuntu-22.04", "python-version": "3.10", "toxenv": "py310-llfuse"},
-{"os": "ubuntu-24.04", "python-version": "3.14", "toxenv": "py314-mfusepy"}
-]
-}' || '{
-"include": [
-{"os": "ubuntu-22.04", "python-version": "3.11", "toxenv": "py311-pyfuse3", "binary": "borg-linux-glibc235-x86_64-gh"},
-{"os": "ubuntu-22.04-arm", "python-version": "3.11", "toxenv": "py311-pyfuse3", "binary": "borg-linux-glibc235-arm64-gh"},
-{"os": "ubuntu-24.04", "python-version": "3.12", "toxenv": "py312-llfuse"},
-{"os": "ubuntu-24.04", "python-version": "3.13", "toxenv": "py313-pyfuse3"},
-{"os": "ubuntu-24.04", "python-version": "3.14", "toxenv": "py314-mfusepy"},
-{"os": "macos-15-intel", "python-version": "3.11", "toxenv": "py311-none", "binary": "borg-macos-15-x86_64-gh"},
-{"os": "macos-15", "python-version": "3.11", "toxenv": "py311-none", "binary": "borg-macos-15-arm64-gh"}
-]
-}'
-) }}
+matrix:
+include:
+- os: ubuntu-20.04
+python-version: '3.9'
+toxenv: py39-fuse2
+- os: ubuntu-20.04
+python-version: '3.10'
+toxenv: py310-fuse3
+- os: macos-10.15 # macos-latest is macos 11.6.2 and hanging at test_fuse, #6099
+python-version: '3.9'
+toxenv: py39-fuse2
 env:
+# Configure pkg-config to use OpenSSL from Homebrew
+PKG_CONFIG_PATH: /usr/local/opt/openssl@1.1/lib/pkgconfig
+BORG_LIBDEFLATE_PREFIX: /usr # on ubuntu 20.04 pkgconfig does not find libdeflate
 TOXENV: ${{ matrix.toxenv }}
 runs-on: ${{ matrix.os }}
-# macOS machines can be slow, if overloaded.
-timeout-minutes: 360
+timeout-minutes: 40
 steps:
-- uses: actions/checkout@v6
+- uses: actions/checkout@v2
 with:
-# Just fetching one commit is not enough for setuptools-scm, so we fetch all.
+# just fetching 1 commit is not enough for setuptools-scm, so we fetch all
 fetch-depth: 0
-fetch-tags: true
 - name: Set up Python ${{ matrix.python-version }}
-uses: actions/setup-python@v6
+uses: actions/setup-python@v2
 with:
 python-version: ${{ matrix.python-version }}
 - name: Cache pip
-uses: actions/cache@v5
+uses: actions/cache@v2
 with:
 path: ~/.cache/pip
-key: ${{ runner.os }}-${{ runner.arch }}-pip-${{ hashFiles('requirements.d/development.txt') }}
+key: ${{ runner.os }}-pip-${{ hashFiles('requirements.d/development.txt') }}
 restore-keys: |
-${{ runner.os }}-${{ runner.arch }}-pip-
-${{ runner.os }}-${{ runner.arch }}-
+${{ runner.os }}-pip-
+${{ runner.os }}-
-- name: Cache tox environments
-uses: actions/cache@v5
-with:
-path: .tox
-key: ${{ runner.os }}-${{ runner.arch }}-tox-${{ matrix.toxenv }}-${{ hashFiles('requirements.d/development.txt', 'pyproject.toml') }}
-restore-keys: |
-${{ runner.os }}-${{ runner.arch }}-tox-${{ matrix.toxenv }}-
-${{ runner.os }}-${{ runner.arch }}-tox-
 - name: Install Linux packages
 if: ${{ runner.os == 'Linux' }}
 run: |
 sudo apt-get update
 sudo apt-get install -y pkg-config build-essential
-sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev liblz4-dev
+sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev libdeflate-dev liblz4-dev libzstd-dev
-sudo apt-get install -y bash zsh fish # for shell completion tests
+sudo apt-get install -y libfuse-dev fuse || true # Required for Python llfuse module
-sudo apt-get install -y rclone openssh-server curl
+sudo apt-get install -y libfuse3-dev fuse3 || true # Required for Python pyfuse3 module
-if [[ "$TOXENV" == *"llfuse"* ]]; then
-sudo apt-get install -y libfuse-dev fuse # Required for Python llfuse module
-elif [[ "$TOXENV" == *"pyfuse3"* || "$TOXENV" == *"mfusepy"* ]]; then
-sudo apt-get install -y libfuse3-dev fuse3 # Required for Python pyfuse3 module
-fi
 - name: Install macOS packages
 if: ${{ runner.os == 'macOS' }}
 run: |
-brew unlink pkg-config@0.29.2 || true
-brew bundle install
+brew install pkg-config || brew upgrade pkg-config
+brew install zstd || brew upgrade zstd
+brew install lz4 || brew upgrade lz4
+brew install libdeflate || brew upgrade libdeflate
+brew install xxhash || brew upgrade xxhash
+brew install openssl@1.1 || brew upgrade openssl@1.1
+brew install --cask macfuse || brew upgrade --cask macfuse # Required for Python llfuse module
-- name: Configure OpenSSH SFTP server (test only)
-if: ${{ runner.os == 'Linux' && !contains(matrix.toxenv, 'mypy') && !contains(matrix.toxenv, 'docs') }}
-run: |
-sudo mkdir -p /run/sshd
-sudo useradd -m -s /bin/bash sftpuser || true
# Create SSH key for the CI user and authorize it for sftpuser
mkdir -p ~/.ssh
chmod 700 ~/.ssh
test -f ~/.ssh/id_ed25519 || ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
sudo mkdir -p /home/sftpuser/.ssh
sudo chmod 700 /home/sftpuser/.ssh
sudo cp ~/.ssh/id_ed25519.pub /home/sftpuser/.ssh/authorized_keys
sudo chown -R sftpuser:sftpuser /home/sftpuser/.ssh
sudo chmod 600 /home/sftpuser/.ssh/authorized_keys
# Allow publickey auth and enable Subsystem sftp
sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PubkeyAuthentication .*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
if ! grep -q '^Subsystem sftp' /etc/ssh/sshd_config; then echo 'Subsystem sftp /usr/lib/openssh/sftp-server' | sudo tee -a /etc/ssh/sshd_config; fi
# Ensure host keys exist to avoid slow generation on first sshd start
sudo ssh-keygen -A
# Start sshd (listen on default 22 inside runner)
sudo /usr/sbin/sshd -D &
# Add host key to known_hosts so paramiko trusts it
ssh-keyscan -H localhost 127.0.0.1 | tee -a ~/.ssh/known_hosts
# Start ssh-agent and add our key so paramiko can use the agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
# Export SFTP test URL for tox via GITHUB_ENV
echo "BORG_TEST_SFTP_REPO=sftp://sftpuser@localhost:22/borg/sftp-repo" >> $GITHUB_ENV
- name: Install and configure MinIO S3 server (test only)
if: ${{ runner.os == 'Linux' && !contains(matrix.toxenv, 'mypy') && !contains(matrix.toxenv, 'docs') }}
run: |
set -e
arch=$(uname -m)
case "$arch" in
x86_64|amd64) srv_url=https://dl.min.io/server/minio/release/linux-amd64/minio; cli_url=https://dl.min.io/client/mc/release/linux-amd64/mc ;;
aarch64|arm64) srv_url=https://dl.min.io/server/minio/release/linux-arm64/minio; cli_url=https://dl.min.io/client/mc/release/linux-arm64/mc ;;
*) echo "Unsupported arch: $arch"; exit 1 ;;
esac
curl -fsSL -o /usr/local/bin/minio "$srv_url"
curl -fsSL -o /usr/local/bin/mc "$cli_url"
sudo chmod +x /usr/local/bin/minio /usr/local/bin/mc
export PATH=/usr/local/bin:$PATH
# Start MinIO on :9000 with default credentials (minioadmin/minioadmin)
MINIO_DIR="$GITHUB_WORKSPACE/.minio-data"
MINIO_LOG="$GITHUB_WORKSPACE/.minio.log"
mkdir -p "$MINIO_DIR"
nohup minio server "$MINIO_DIR" --address ":9000" >"$MINIO_LOG" 2>&1 &
# Wait for MinIO port to be ready
for i in $(seq 1 60); do (echo > /dev/tcp/127.0.0.1/9000) >/dev/null 2>&1 && break; sleep 1; done
# Configure client and create bucket
mc alias set local http://127.0.0.1:9000 minioadmin minioadmin
mc mb --ignore-existing local/borg
# Export S3 test URL for tox via GITHUB_ENV
echo "BORG_TEST_S3_REPO=s3:minioadmin:minioadmin@http://127.0.0.1:9000/borg/s3-repo" >> $GITHUB_ENV
 - name: Install Python requirements
 run: |
 python -m pip install --upgrade pip setuptools wheel
 pip install -r requirements.d/development.txt
 - name: Install borgbackup
 run: |
-if [[ "$TOXENV" == *"llfuse"* ]]; then
-pip install -ve ".[llfuse,cockpit,s3,sftp]"
-elif [[ "$TOXENV" == *"pyfuse3"* ]]; then
-pip install -ve ".[pyfuse3,cockpit,s3,sftp]"
-elif [[ "$TOXENV" == *"mfusepy"* ]]; then
-pip install -ve ".[mfusepy,cockpit,s3,sftp]"
-else
-pip install -ve ".[cockpit,s3,sftp]"
-fi
+# pip install -e .
+python setup.py -v develop
+- name: run pytest via tox
- name: Build Borg fat binaries (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
run: |
pip install -r requirements.d/pyinstaller.txt
mkdir -p dist/binary
pyinstaller --clean --distpath=dist/binary scripts/borg.exe.spec
- name: Smoke-test the built binary (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
run: |
pushd dist/binary
echo "single-file binary"
chmod +x borg.exe
./borg.exe -V
echo "single-directory binary"
chmod +x borg-dir/borg.exe
./borg-dir/borg.exe -V
tar czf borg.tgz borg-dir
popd
# Ensure locally built binary in ./dist/binary/borg-dir is found during tests
export PATH="$GITHUB_WORKSPACE/dist/binary/borg-dir:$PATH"
echo "borg.exe binary in PATH"
borg.exe -V
- name: Prepare binaries (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
run: |
mkdir -p artifacts
if [ -f dist/binary/borg.exe ]; then
cp dist/binary/borg.exe artifacts/${{ matrix.binary }}
fi
if [ -f dist/binary/borg.tgz ]; then
cp dist/binary/borg.tgz artifacts/${{ matrix.binary }}.tgz
fi
echo "binary files"
ls -l artifacts/
- name: Attest binaries provenance (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
uses: actions/attest-build-provenance@v3
with:
subject-path: 'artifacts/*'
- name: Upload binaries (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
uses: actions/upload-artifact@v6
with:
name: ${{ matrix.binary }}
path: artifacts/*
if-no-files-found: error
- name: run tox env
 run: |
 # do not use fakeroot, but run as root. avoids the dreaded EISDIR sporadic failures. see #2482.
 #sudo -E bash -c "tox -e py"
-# Ensure locally built binary in ./dist/binary/borg-dir is found during tests
-export PATH="$GITHUB_WORKSPACE/dist/binary/borg-dir:$PATH"
-tox --skip-missing-interpreters -- --junitxml=test-results.xml
+tox --skip-missing-interpreters
-- name: Upload test results to Codecov
-if: ${{ !cancelled() && !contains(matrix.toxenv, 'mypy') && !contains(matrix.toxenv, 'docs') }}
-uses: codecov/codecov-action@v5
-env:
-OS: ${{ runner.os }}
-python: ${{ matrix.python-version }}
-with:
-token: ${{ secrets.CODECOV_TOKEN }}
-report_type: test_results
-env_vars: OS,python
-files: test-results.xml
 - name: Upload coverage to Codecov
-if: ${{ !cancelled() && !contains(matrix.toxenv, 'mypy') && !contains(matrix.toxenv, 'docs') }}
-uses: codecov/codecov-action@v5
+uses: codecov/codecov-action@v1
 env:
 OS: ${{ runner.os }}
 python: ${{ matrix.python-version }}
 with:
 token: ${{ secrets.CODECOV_TOKEN }}
-report_type: coverage
-env_vars: OS,python
+env_vars: OS, python
vm_tests:
permissions:
contents: read
id-token: write
attestations: write
runs-on: ubuntu-24.04
timeout-minutes: 90
needs: [lint]
continue-on-error: true
strategy:
fail-fast: false
matrix:
include:
- os: freebsd
version: '14.3'
display_name: FreeBSD
# Controls binary build and provenance attestation on tags
do_binaries: true
artifact_prefix: borg-freebsd-14-x86_64-gh
- os: netbsd
version: '10.1'
display_name: NetBSD
do_binaries: false
- os: openbsd
version: '7.7'
display_name: OpenBSD
do_binaries: false
- os: omnios
version: 'r151056'
display_name: OmniOS
do_binaries: false
- os: haiku
version: 'r1beta5'
display_name: Haiku
do_binaries: false
steps:
- name: Check out repository
uses: actions/checkout@v6
with:
fetch-depth: 0
fetch-tags: true
- name: Test on ${{ matrix.display_name }}
id: cross_os
uses: cross-platform-actions/action@v0.32.0
env:
DO_BINARIES: ${{ matrix.do_binaries }}
with:
operating_system: ${{ matrix.os }}
version: ${{ matrix.version }}
shell: bash
run: |
set -euxo pipefail
case "${{ matrix.os }}" in
freebsd)
export IGNORE_OSVERSION=yes
sudo -E pkg update -f
sudo -E pkg install -y xxhash liblz4 pkgconf
sudo -E pkg install -y fusefs-libs
sudo -E kldload fusefs
sudo -E sysctl vfs.usermount=1
sudo -E chmod 666 /dev/fuse
sudo -E pkg install -y rust
sudo -E pkg install -y gmake
sudo -E pkg install -y git
sudo -E pkg install -y python310 py310-sqlite3
sudo -E pkg install -y python311 py311-sqlite3 py311-pip py311-virtualenv
sudo ln -sf /usr/local/bin/python3.11 /usr/local/bin/python3
sudo ln -sf /usr/local/bin/python3.11 /usr/local/bin/python
sudo ln -sf /usr/local/bin/pip3.11 /usr/local/bin/pip3
sudo ln -sf /usr/local/bin/pip3.11 /usr/local/bin/pip
# required for libsodium/pynacl build
export MAKE=gmake
python -m venv .venv
. .venv/bin/activate
python -V
pip -V
python -m pip install --upgrade pip wheel
pip install -r requirements.d/development.txt
pip install -e ".[mfusepy,cockpit,s3,sftp]"
tox -e py311-mfusepy
if [[ "${{ matrix.do_binaries }}" == "true" && "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
python -m pip install -r requirements.d/pyinstaller.txt
mkdir -p dist/binary
pyinstaller --clean --distpath=dist/binary scripts/borg.exe.spec
pushd dist/binary
echo "single-file binary"
chmod +x borg.exe
./borg.exe -V
echo "single-directory binary"
chmod +x borg-dir/borg.exe
./borg-dir/borg.exe -V
tar czf borg.tgz borg-dir
popd
mkdir -p artifacts
if [ -f dist/binary/borg.exe ]; then
cp -v dist/binary/borg.exe artifacts/${{ matrix.artifact_prefix }}
fi
if [ -f dist/binary/borg.tgz ]; then
cp -v dist/binary/borg.tgz artifacts/${{ matrix.artifact_prefix }}.tgz
fi
fi
;;
netbsd)
arch="$(uname -m)"
sudo -E mkdir -p /usr/pkg/etc/pkgin
echo "https://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/${arch}/10.1/All" | sudo tee /usr/pkg/etc/pkgin/repositories.conf > /dev/null
sudo -E pkgin update
sudo -E pkgin -y upgrade
sudo -E pkgin -y install lz4 xxhash git
sudo -E pkgin -y install rust
sudo -E pkgin -y install pkg-config
sudo -E pkgin -y install py311-pip py311-virtualenv py311-tox
sudo -E ln -sf /usr/pkg/bin/python3.11 /usr/pkg/bin/python3
sudo -E ln -sf /usr/pkg/bin/pip3.11 /usr/pkg/bin/pip3
sudo -E ln -sf /usr/pkg/bin/virtualenv-3.11 /usr/pkg/bin/virtualenv3
sudo -E ln -sf /usr/pkg/bin/tox-3.11 /usr/pkg/bin/tox3
# Ensure base system admin tools are on PATH for the non-root shell
export PATH="/sbin:/usr/sbin:$PATH"
echo "--- Preparing an extattr-enabled filesystem ---"
# On many NetBSD setups /tmp is tmpfs without extended attributes.
# Create a FFS image with extended attributes enabled and use it for TMPDIR.
VNDDEV="vnd0"
IMGFILE="/tmp/fs.img"
sudo -E dd if=/dev/zero of=${IMGFILE} bs=1m count=1024
sudo -E vndconfig -c "${VNDDEV}" "${IMGFILE}"
sudo -E newfs -O 2ea /dev/r${VNDDEV}a
MNT="/mnt/eafs"
sudo -E mkdir -p ${MNT}
sudo -E mount -t ffs -o extattr /dev/${VNDDEV}a $MNT
export TMPDIR="${MNT}/tmp"
sudo -E mkdir -p ${TMPDIR}
sudo -E chmod 1777 ${TMPDIR}
touch ${TMPDIR}/testfile
lsextattr user ${TMPDIR}/testfile && echo "[xattr] *** xattrs SUPPORTED on ${TMPDIR}! ***"
tox3 -e py311-none
;;
openbsd)
sudo -E pkg_add xxhash lz4 git
sudo -E pkg_add rust
sudo -E pkg_add openssl%3.4
sudo -E pkg_add py3-pip py3-virtualenv py3-tox
export BORG_OPENSSL_NAME=eopenssl34
tox -e py312-none
;;
omnios)
sudo pkg install gcc14 git pkg-config python-313 gnu-make gnu-coreutils
sudo ln -sf /usr/bin/python3.13 /usr/bin/python3
sudo ln -sf /usr/bin/python3.13-config /usr/bin/python3-config
sudo python3 -m ensurepip
sudo python3 -m pip install virtualenv
# install libxxhash from source
git clone --depth 1 https://github.com/Cyan4973/xxHash.git
cd xxHash
sudo gmake install INSTALL=/usr/gnu/bin/install PREFIX=/usr/local
cd ..
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:${PKG_CONFIG_PATH:-}"
export LD_LIBRARY_PATH="/usr/local/lib:${LD_LIBRARY_PATH:-}"
python3 -m venv .venv
. .venv/bin/activate
python -V
pip -V
python -m pip install --upgrade pip wheel
pip install -r requirements.d/development.txt
# no fuse support on omnios in our tests usually
pip install -e .
tox -e py313-none
;;
haiku)
pkgman refresh
pkgman install -y git pkgconfig lz4 xxhash
pkgman install -y openssl3
pkgman install -y rust_bin
pkgman install -y python3.10
pkgman install -y cffi
pkgman install -y lz4_devel xxhash_devel openssl3_devel libffi_devel
# there is no pkgman package for tox, so we install it into a venv
python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip wheel
python3 -m venv .venv
. .venv/bin/activate
export PKG_CONFIG_PATH="/system/develop/lib/pkgconfig:/system/lib/pkgconfig:${PKG_CONFIG_PATH:-}"
export BORG_LIBLZ4_PREFIX=/system/develop
export BORG_LIBXXHASH_PREFIX=/system/develop
export BORG_OPENSSL_PREFIX=/system/develop
pip install -r requirements.d/development.txt
pip install -e .
# troubles with either tox or pytest xdist, so we run pytest manually:
pytest -v -rs --benchmark-skip -k "not remote and not socket"
;;
esac
- name: Upload artifacts
if: startsWith(github.ref, 'refs/tags/') && matrix.do_binaries
uses: actions/upload-artifact@v6
with:
name: ${{ matrix.artifact_prefix }}
path: artifacts/*
if-no-files-found: ignore
- name: Attest provenance
if: startsWith(github.ref, 'refs/tags/') && matrix.do_binaries
uses: actions/attest-build-provenance@v3
with:
subject-path: 'artifacts/*'
windows_tests:
if: true # can be used to temporarily disable the build
runs-on: windows-latest
timeout-minutes: 90
needs: [lint]
env:
PY_COLORS: 1
defaults:
run:
shell: msys2 {0}
steps:
- uses: actions/checkout@v6
with:
fetch-depth: 0
- uses: msys2/setup-msys2@v2
with:
msystem: UCRT64
update: true
- name: Install system packages
run: ./scripts/msys2-install-deps development
- name: Build python venv
run: |
# building cffi / argon2-cffi in the venv fails, so we try to use the system packages
python -m venv --system-site-packages env
. env/bin/activate
# python -m pip install --upgrade pip
# pip install --upgrade setuptools build wheel
pip install -r requirements.d/pyinstaller.txt
- name: Build
run: |
# build borg.exe
. env/bin/activate
pip install -e ".[cockpit,s3,sftp]"
mkdir -p dist/binary
pyinstaller -y --clean --distpath=dist/binary scripts/borg.exe.spec
# build sdist and wheel in dist/...
python -m build
- uses: actions/upload-artifact@v6
with:
name: borg-windows
path: dist/binary/borg.exe
- name: Run tests
run: |
# Ensure locally built binary in ./dist/binary/borg-dir is found during tests
export PATH="$GITHUB_WORKSPACE/dist/binary/borg-dir:$PATH"
borg.exe -V
. env/bin/activate
python -m pytest -n4 --benchmark-skip -vv -rs -k "not remote" --junitxml=test-results.xml
- name: Upload test results to Codecov
if: ${{ !cancelled() }}
uses: codecov/codecov-action@v5
env:
OS: ${{ runner.os }}
python: '3.11'
with:
token: ${{ secrets.CODECOV_TOKEN }}
report_type: test_results
env_vars: OS,python
files: test-results.xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5
env:
OS: ${{ runner.os }}
python: '3.11'
with:
token: ${{ secrets.CODECOV_TOKEN }}
report_type: coverage
env_vars: OS,python


@@ -5,33 +5,16 @@ name: "CodeQL"
 on:
   push:
     branches: [ master ]
-    paths:
-      - '**.py'
-      - '**.pyx'
-      - '**.c'
-      - '**.h'
-      - '.github/workflows/codeql-analysis.yml'
   pull_request:
     # The branches below must be a subset of the branches above
     branches: [ master ]
-    paths:
-      - '**.py'
-      - '**.pyx'
-      - '**.c'
-      - '**.h'
-      - '.github/workflows/codeql-analysis.yml'
   schedule:
     - cron: '39 2 * * 5'

-concurrency:
-  group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
-  cancel-in-progress: ${{ github.event_name == 'pull_request' }}

 jobs:
   analyze:
     name: Analyze
-    runs-on: ubuntu-24.04
-    timeout-minutes: 20
+    runs-on: ubuntu-latest
     permissions:
       actions: read
       contents: read

@@ -42,20 +25,23 @@ jobs:
       matrix:
         language: [ 'cpp', 'python' ]
         # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
-        # Learn more about CodeQL language support at https://codeql.github.com/docs/codeql-overview/supported-languages-and-frameworks/
+        # Learn more about CodeQL language support at https://git.io/codeql-language-support
+    env:
+      BORG_LIBDEFLATE_PREFIX: /usr  # on ubuntu 20.04 pkgconfig does not find libdeflate
     steps:
     - name: Checkout repository
-      uses: actions/checkout@v6
+      uses: actions/checkout@v2
       with:
-        # Just fetching one commit is not enough for setuptools-scm, so we fetch all.
+        # just fetching 1 commit is not enough for setuptools-scm, so we fetch all
        fetch-depth: 0
     - name: Set up Python
-      uses: actions/setup-python@v6
+      uses: actions/setup-python@v2
       with:
-        python-version: 3.11
+        python-version: 3.9
     - name: Cache pip
-      uses: actions/cache@v5
+      uses: actions/cache@v2
       with:
         path: ~/.cache/pip
         key: ${{ runner.os }}-pip-${{ hashFiles('requirements.d/development.txt') }}

@@ -66,10 +52,10 @@ jobs:
       run: |
         sudo apt-get update
         sudo apt-get install -y pkg-config build-essential
-        sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev liblz4-dev
+        sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev libdeflate-dev liblz4-dev libzstd-dev
     # Initializes the CodeQL tools for scanning.
     - name: Initialize CodeQL
-      uses: github/codeql-action/init@v4
+      uses: github/codeql-action/init@v1
       with:
         languages: ${{ matrix.language }}
         # If you wish to specify custom queries, you can do so here or in a config file.

@@ -81,6 +67,6 @@ jobs:
         python3 -m venv ../borg-env
         source ../borg-env/bin/activate
         pip3 install -r requirements.d/development.txt
-        pip3 install -ve .
+        pip3 install -e .
     - name: Perform CodeQL Analysis
-      uses: github/codeql-action/analyze@v4
+      uses: github/codeql-action/analyze@v1

.gitignore vendored

@@ -2,18 +2,17 @@ MANIFEST
 docs/_build
 build
 dist
-external
-borg-env
 .tox
 src/borg/compress.c
-src/borg/hashindex.c
 src/borg/crypto/low_level.c
+src/borg/hashindex.c
 src/borg/item.c
-src/borg/chunkers/buzhash.c
-src/borg/chunkers/buzhash64.c
-src/borg/chunkers/reader.c
+src/borg/chunker.c
 src/borg/checksums.c
 src/borg/platform/darwin.c
 src/borg/platform/freebsd.c
-src/borg/platform/netbsd.c
 src/borg/platform/linux.c
 src/borg/platform/syncfilerange.c
 src/borg/platform/posix.c

@@ -24,10 +23,12 @@ src/borg/_version.py
 *.pyd
 *.so
 .idea/
-.junie/
+.cache/
 .vscode/
+borg.build/
+borg.dist/
 borg.exe
 .coverage
 .coverage.*
 .vagrant
+.eggs

.pre-commit-config.yaml

@@ -1,9 +0,0 @@
repos:
- repo: https://github.com/psf/black
rev: 24.8.0
hooks:
- id: black
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.15.0
hooks:
- id: ruff

.readthedocs.yaml

@@ -1,32 +0,0 @@
# .readthedocs.yaml - Read the Docs configuration file.
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details.
version: 2
build:
os: ubuntu-22.04
tools:
python: "3.11"
jobs:
post_checkout:
- git fetch --unshallow
apt_packages:
- build-essential
- pkg-config
- libacl1-dev
- libssl-dev
- liblz4-dev
- libxxhash-dev
python:
install:
- requirements: requirements.d/development.lock.txt
- requirements: requirements.d/docs.txt
- method: pip
path: .
sphinx:
configuration: docs/conf.py
formats:
- htmlzip

AUTHORS

@@ -1,5 +1,5 @@
-Email addresses listed here are not intended for support.
-Please see the `support section`_ instead.
+E-mail addresses listed here are not intended for support, please see
+the `support section`_ instead.

 .. _support section: https://borgbackup.readthedocs.io/en/stable/support.html

@@ -44,3 +44,27 @@ Attic Patches and Suggestions
 - Johann Klähn
 - Petros Moisiadis
 - Thomas Waldmann
+
+BLAKE2
+------
+
+Borg includes BLAKE2: Copyright 2012, Samuel Neves <sneves@dei.uc.pt>, licensed under the terms
+of the CC0, the OpenSSL Licence, or the Apache Public License 2.0.
+
+Slicing CRC32
+-------------
+
+Borg includes a fast slice-by-8 implementation of CRC32, Copyright 2011-2015 Stephan Brumme,
+licensed under the terms of a zlib license. See http://create.stephan-brumme.com/crc32/
+
+Folding CRC32
+-------------
+
+Borg includes an extremely fast folding implementation of CRC32, Copyright 2013 Intel Corporation,
+licensed under the terms of the zlib license.
+
+xxHash
+------
+
+XXH64, a fast non-cryptographic hash algorithm. Copyright 2012-2016 Yann Collet,
+licensed under a BSD 2-clause license.

Brewfile

@@ -1,11 +0,0 @@
brew 'pkgconf'
brew 'lz4'
brew 'xxhash'
brew 'openssl@3'
# osxfuse (aka macFUSE) is only required for "borg mount",
# but won't work on GitHub Actions' workers.
# It requires installing a kernel extension, so some users
# may want it and some won't.
#cask 'osxfuse'

LICENSE

@@ -1,4 +1,4 @@
-Copyright (C) 2015-2025 The Borg Collective (see AUTHORS file)
+Copyright (C) 2015-2022 The Borg Collective (see AUTHORS file)
 Copyright (C) 2010-2014 Jonas Borgström <jonas@borgstrom.se>

 All rights reserved.

MANIFEST.in

@@ -1,7 +1,7 @@
-# The files we need to include in the sdist are handled automatically by
+# stuff we need to include into the sdist is handled automatically by
 # setuptools_scm - it includes all git-committed files.
-# But we want to exclude some committed files/directories not needed in the sdist:
-exclude .editorconfig .gitattributes .gitignore .mailmap Vagrantfile
+# but we want to exclude some committed files/dirs not needed in the sdist:
+exclude .coafile .editorconfig .gitattributes .gitignore .mailmap Vagrantfile
 prune .github
 include src/borg/platform/darwin.c src/borg/platform/freebsd.c src/borg/platform/linux.c src/borg/platform/posix.c
 include src/borg/platform/syncfilerange.c

README.rst

@@ -1,23 +1,6 @@
-This is borg2!
---------------
-
-Please note that this is the README for borg2 / master branch.
-For the stable version's docs, please see here:
-
-https://borgbackup.readthedocs.io/en/stable/
-
-Borg2 is currently in beta testing and might get major and/or
-breaking changes between beta releases (and there is no beta to
-next-beta upgrade code, so you will have to delete and re-create repos).
-
-Thus, **DO NOT USE BORG2 FOR YOUR PRODUCTION BACKUPS!** Please help with
-testing it, but set it up *additionally* to your production backups.
-
-TODO: the screencasts need a remake using borg2, see here:
-https://github.com/borgbackup/borg/issues/6303
-
+|screencast_basic|
+
 More screencasts: `installation`_, `advanced usage`_

 What is BorgBackup?
 -------------------

@@ -25,17 +8,17 @@ What is BorgBackup?
 BorgBackup (short: Borg) is a deduplicating backup program.
 Optionally, it supports compression and authenticated encryption.

-The main goal of Borg is to provide an efficient and secure way to back up data.
+The main goal of Borg is to provide an efficient and secure way to backup data.
 The data deduplication technique used makes Borg suitable for daily backups
 since only changes are stored.
-The authenticated encryption technique makes it suitable for backups to targets not
-fully trusted.
+The authenticated encryption technique makes it suitable for backups to not
+fully trusted targets.

 See the `installation manual`_ or, if you have already
 downloaded Borg, ``docs/installation.rst`` to get started with Borg.
 There is also an `offline documentation`_ available, in multiple formats.

-.. _installation manual: https://borgbackup.readthedocs.io/en/master/installation.html
+.. _installation manual: https://borgbackup.readthedocs.org/en/stable/installation.html
 .. _offline documentation: https://readthedocs.org/projects/borgbackup/downloads

 Main features

@@ -69,16 +52,15 @@ Main features
 **Speed**
   * performance-critical code (chunking, compression, encryption) is
     implemented in C/Cython
-  * local caching
+  * local caching of files/chunks index data
   * quick detection of unmodified files

 **Data encryption**
-  All data can be protected client-side using 256-bit authenticated encryption
-  (AES-OCB or chacha20-poly1305), ensuring data confidentiality, integrity and
-  authenticity.
+  All data can be protected using 256-bit AES encryption, data integrity and
+  authenticity is verified using HMAC-SHA256. Data is encrypted clientside.

 **Obfuscation**
-  Optionally, Borg can actively obfuscate, e.g., the size of files/chunks to
+  Optionally, borg can actively obfuscate e.g. the size of files / chunks to
   make fingerprinting attacks more difficult.

 **Compression**

@@ -91,24 +73,24 @@ Main features
   * lzma (low speed, high compression)

 **Off-site backups**
   Borg can store data on any remote host accessible over SSH.  If Borg is
-  installed on the remote host, significant performance gains can be achieved
-  compared to using a network file system (sshfs, NFS, ...).
+  installed on the remote host, big performance gains can be achieved
+  compared to using a network filesystem (sshfs, nfs, ...).

-**Backups mountable as file systems**
-  Backup archives are mountable as user-space file systems for easy interactive
-  backup examination and restores (e.g., by using a regular file manager).
+**Backups mountable as filesystems**
+  Backup archives are mountable as userspace filesystems for easy interactive
+  backup examination and restores (e.g. by using a regular file manager).

 **Easy installation on multiple platforms**
   We offer single-file binaries that do not require installing anything -
   you can just run them on these platforms:

   * Linux
-  * macOS
+  * Mac OS X
   * FreeBSD
   * OpenBSD and NetBSD (no xattrs/ACLs support or binaries yet)
   * Cygwin (experimental, no binaries yet)
-  * Windows Subsystem for Linux (WSL) on Windows 10/11 (experimental)
+  * Linux Subsystem of Windows 10 (experimental)

 **Free and Open Source Software**
   * security and functionality can be audited independently

@@ -118,57 +100,61 @@ Main features

 Easy to use
 ~~~~~~~~~~~

-For ease of use, set the BORG_REPO environment variable::
-
-    $ export BORG_REPO=/path/to/repo
-
-Create a new backup repository (see ``borg repo-create --help`` for encryption options)::
-
-    $ borg repo-create -e repokey-aes-ocb
-
-Create a new backup archive::
-
-    $ borg create Monday1 ~/Documents
-
-Now do another backup, just to show off the great deduplication::
-
-    $ borg create -v --stats Monday2 ~/Documents
-    Repository: /path/to/repo
-    Archive name: Monday2
-    Archive fingerprint: 7714aef97c1a24539cc3dc73f79b060f14af04e2541da33d54c7ee8e81a00089
-    Time (start): Mon, 2022-10-03 19:57:35 +0200
-    Time (end):   Mon, 2022-10-03 19:57:35 +0200
-    Duration: 0.01 seconds
-    Number of files: 24
-    Original size: 29.73 MB
-    Deduplicated size: 520 B
-
-Helping, donations and bounties, becoming a Patron
+Initialize a new backup repository (see ``borg init --help`` for encryption options)::
+
+    $ borg init -e repokey /path/to/repo
+
+Create a backup archive::
+
+    $ borg create /path/to/repo::Saturday1 ~/Documents
+
+Now doing another backup, just to show off the great deduplication::
+
+    $ borg create -v --stats /path/to/repo::Saturday2 ~/Documents
+    -----------------------------------------------------------------------------
+    Archive name: Saturday2
+    Archive fingerprint: 622b7c53c...
+    Time (start): Sat, 2016-02-27 14:48:13
+    Time (end):   Sat, 2016-02-27 14:48:14
+    Duration: 0.88 seconds
+    Number of files: 163
+    -----------------------------------------------------------------------------
+                   Original size      Compressed size    Deduplicated size
+    This archive:            6.85 MB              6.85 MB             30.79 kB  <-- !
+    All archives:           13.69 MB             13.71 MB              6.88 MB
+
+                   Unique chunks         Total chunks
+    Chunk index:                 167                  330
+    -----------------------------------------------------------------------------
+
+For a graphical frontend refer to our complementary project `BorgWeb <https://borgweb.readthedocs.io/>`_.
+
+Helping, Donations and Bounties, becoming a Patron
 --------------------------------------------------

 Your help is always welcome!
 Spread the word, give feedback, help with documentation, testing or development.

-You can also give monetary support to the project, see here for details:
+You can also give monetary support to the project, see there for details:

 https://www.borgbackup.org/support/fund.html

 Links
 -----

-* `Main website <https://borgbackup.readthedocs.io/>`_
+* `Main Web Site <https://borgbackup.readthedocs.org/>`_
 * `Releases <https://github.com/borgbackup/borg/releases>`_,
-  `PyPI packages <https://pypi.org/project/borgbackup/>`_ and
-  `Changelog <https://github.com/borgbackup/borg/blob/master/docs/changes.rst>`_
-* `Offline documentation <https://readthedocs.org/projects/borgbackup/downloads>`_
+  `PyPI packages <https://pypi.python.org/pypi/borgbackup>`_ and
+  `ChangeLog <https://github.com/borgbackup/borg/blob/master/docs/changes.rst>`_
+* `Offline Documentation <https://readthedocs.org/projects/borgbackup/downloads>`_
 * `GitHub <https://github.com/borgbackup/borg>`_ and
-  `Issue tracker <https://github.com/borgbackup/borg/issues>`_.
-* `Web chat (IRC) <https://web.libera.chat/#borgbackup>`_ and
-  `Mailing list <https://mail.python.org/mailman/listinfo/borgbackup>`_
-* `License <https://borgbackup.readthedocs.io/en/master/authors.html#license>`_
-* `Security contact <https://borgbackup.readthedocs.io/en/master/support.html#security-contact>`_
+  `Issue Tracker <https://github.com/borgbackup/borg/issues>`_.
+* `Web-Chat (IRC) <https://web.libera.chat/#borgbackup>`_ and
+  `Mailing List <https://mail.python.org/mailman/listinfo/borgbackup>`_
+* `License <https://borgbackup.readthedocs.org/en/stable/authors.html#license>`_
+* `Security contact <https://borgbackup.readthedocs.io/en/latest/support.html#security-contact>`_

 Compatibility notes
 -------------------

@@ -178,18 +164,22 @@ CHANGES (like when going from 0.x.y to 1.0.0 or from 1.x.y to 2.0.0).

 NOT RELEASED DEVELOPMENT VERSIONS HAVE UNKNOWN COMPATIBILITY PROPERTIES.

-THIS IS SOFTWARE IN DEVELOPMENT, DECIDE FOR YOURSELF WHETHER IT FITS YOUR NEEDS.
+THIS IS SOFTWARE IN DEVELOPMENT, DECIDE YOURSELF WHETHER IT FITS YOUR NEEDS.

 Security issues should be reported to the `Security contact`_ (or
 see ``docs/support.rst`` in the source distribution).

 .. start-badges

-|doc| |build| |coverage| |bestpractices|
+|doc| |build| |coverage| |bestpractices| |bounties|

-.. |doc| image:: https://readthedocs.org/projects/borgbackup/badge/?version=master
+.. |bounties| image:: https://api.bountysource.com/badge/team?team_id=78284&style=bounties_posted
+    :alt: Bounty Source
+    :target: https://www.bountysource.com/teams/borgbackup
+
+.. |doc| image:: https://readthedocs.org/projects/borgbackup/badge/?version=stable
     :alt: Documentation
-    :target: https://borgbackup.readthedocs.io/en/master/
+    :target: https://borgbackup.readthedocs.org/en/stable/

 .. |build| image:: https://github.com/borgbackup/borg/workflows/CI/badge.svg?branch=master
     :alt: Build Status (master)

README_WINDOWS.rst Normal file

@@ -0,0 +1,48 @@
Borg Native on Windows
======================
Running borg natively on Windows is in an early alpha stage. Expect many things to fail.
Do not use the native windows build on any data which you do not want to lose!
Build Requirements
------------------
- VC 14.0 Compiler
- OpenSSL Library v1.1.1c, 64bit (available at https://github.com/python/cpython-bin-deps)
Please use the `win-download-openssl.ps1` script to download and extract the library to
the correct location. See also the OpenSSL section below.
- Patience and a lot of coffee / beer
What's working
--------------
.. note::
The following examples assume that the `BORG_REPO` and `BORG_PASSPHRASE` environment variables are set
if the repo or passphrase is not explicitly given.
- Borg does not crash if called with ``borg``
- ``borg init --encryption repokey-blake2 ./demoRepo`` runs without an error/warning.
Note that absolute paths only work if the protocol is explicitly set to file://
- ``borg create ::backup-{now} D:\DemoData`` works as expected.
- ``borg list`` works as expected.
- ``borg extract --strip-components 1 ::backup-XXXX`` works.
If absolute paths are extracted, it's important to pass ``--strip-components 1`` as
otherwise the data is restored to the original location!
What's NOT working
------------------
- Extracting a backup which was created on a Windows machine will fail on a non-Windows machine.
- And many things more.
OpenSSL, Windows and Python
---------------------------
Windows does not ship OpenSSL by default, so we need to get the library from somewhere else.
However, a default python installation does include `libcrypto` which is required by borg.
The only things which are missing to build borg are the header and `*.lib` files.
Luckily the python developers provide all required files in a separate repository.
The `win-download-openssl.ps1` script can be used to download the package from
https://github.com/python/cpython-bin-deps and extract the files to the correct location.
For Anaconda, the required libraries can be installed with `conda install -c anaconda openssl`.
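A quick, portable way to check which crypto library the Python interpreter in use actually links against (helpful when juggling the OpenSSL variants described above; assumes a `python3` command is on PATH) is to ask the `ssl` module:

```shell
# Print the OpenSSL (or LibreSSL) version Python's ssl module was built against.
python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"
```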

SECURITY.md

@@ -2,18 +2,16 @@

 ## Supported Versions

-These Borg releases are currently supported with security updates.
+These borg releases are currently supported with security updates.

 | Version | Supported          |
 |---------|--------------------|
-| 2.0.x   | :x: (beta)         |
-| 1.4.x   | :white_check_mark: |
-| 1.2.x   | :x: (no new releases, critical fixes may still be backported) |
-| 1.1.x   | :x:                |
+| 1.2.x   | :white_check_mark: |
+| 1.1.x   | :white_check_mark: |
 | < 1.1   | :x:                |

 ## Reporting a Vulnerability

-See here:
+See there:
 https://borgbackup.readthedocs.io/en/latest/support.html#security-contact

Vagrantfile vendored

@ -1,10 +1,10 @@
# -*- mode: ruby -*- # -*- mode: ruby -*-
# vi: set ft=ruby : # vi: set ft=ruby :
# Automated creation of testing environments/binaries on miscellaneous platforms # Automated creation of testing environments / binaries on misc. platforms
$cpus = Integer(ENV.fetch('VMCPUS', '8')) # create VMs with that many cpus $cpus = Integer(ENV.fetch('VMCPUS', '4')) # create VMs with that many cpus
$xdistn = Integer(ENV.fetch('XDISTN', '8')) # dispatch tests to that many pytest workers $xdistn = Integer(ENV.fetch('XDISTN', '4')) # dispatch tests to that many pytest workers
$wmem = $xdistn * 256 # give the VM additional memory for workers [MB] $wmem = $xdistn * 256 # give the VM additional memory for workers [MB]
def packages_debianoid(user) def packages_debianoid(user)
@ -15,8 +15,7 @@ def packages_debianoid(user)
apt-get -y -qq update apt-get -y -qq update
apt-get -y -qq dist-upgrade apt-get -y -qq dist-upgrade
# for building borgbackup and dependencies: # for building borgbackup and dependencies:
apt install -y pkg-config apt install -y libssl-dev libacl1-dev libxxhash-dev libdeflate-dev liblz4-dev libzstd-dev pkg-config
apt install -y libssl-dev libacl1-dev libxxhash-dev liblz4-dev || true
apt install -y libfuse-dev fuse || true apt install -y libfuse-dev fuse || true
apt install -y libfuse3-dev fuse3 || true apt install -y libfuse3-dev fuse3 || true
apt install -y locales || true apt install -y locales || true
@ -28,6 +27,9 @@ def packages_debianoid(user)
apt install -y python3-dev python3-setuptools virtualenv apt install -y python3-dev python3-setuptools virtualenv
# for building python: # for building python:
apt install -y zlib1g-dev libbz2-dev libncurses5-dev libreadline-dev liblzma-dev libsqlite3-dev libffi-dev apt install -y zlib1g-dev libbz2-dev libncurses5-dev libreadline-dev liblzma-dev libsqlite3-dev libffi-dev
# older debian / ubuntu have no .pc file for these, so we need to point at the lib/header location:
echo 'export BORG_LIBXXHASH_PREFIX=/usr' >> ~vagrant/.bash_profile
echo 'export BORG_LIBDEFLATE_PREFIX=/usr' >> ~vagrant/.bash_profile
EOF EOF
end end
@ -38,17 +40,16 @@ def packages_freebsd
# install all the (security and other) updates, base system # install all the (security and other) updates, base system
freebsd-update --not-running-from-cron fetch install freebsd-update --not-running-from-cron fetch install
# for building borgbackup and dependencies: # for building borgbackup and dependencies:
pkg install -y xxhash liblz4 pkgconf pkg install -y xxhash libdeflate liblz4 zstd pkgconf
pkg install -y fusefs-libs || true pkg install -y fusefs-libs || true
pkg install -y fusefs-libs3 || true pkg install -y fusefs-libs3 || true
pkg install -y rust
pkg install -y git bash # fakeroot causes lots of troubles on freebsd pkg install -y git bash # fakeroot causes lots of troubles on freebsd
pkg install -y python310 py310-sqlite3 # for building python (for the tests we use pyenv built pythons):
pkg install -y python311 py311-sqlite3 py311-pip py311-virtualenv pkg install -y python39 py39-sqlite3
# make sure there is a python3/pip3/virtualenv command # make sure there is a python3 command
ln -sf /usr/local/bin/python3.11 /usr/local/bin/python3 ln -sf /usr/local/bin/python3.9 /usr/local/bin/python3
ln -sf /usr/local/bin/pip-3.11 /usr/local/bin/pip3 python3 -m ensurepip
ln -sf /usr/local/bin/virtualenv-3.11 /usr/local/bin/virtualenv pip3 install virtualenv
# make bash default / work: # make bash default / work:
chsh -s bash vagrant chsh -s bash vagrant
mount -t fdescfs fdesc /dev/fd mount -t fdescfs fdesc /dev/fd
@ -65,85 +66,81 @@ def packages_freebsd
pkg update pkg update
yes | pkg upgrade yes | pkg upgrade
echo 'export BORG_OPENSSL_PREFIX=/usr' >> ~vagrant/.bash_profile echo 'export BORG_OPENSSL_PREFIX=/usr' >> ~vagrant/.bash_profile
# (re)mount / with acls
mount -o acls /
EOF EOF
end end
def packages_openbsd def packages_openbsd
return <<-EOF return <<-EOF
hostname "openbsd77.localdomain"
echo "$(hostname)" > /etc/myname
echo "127.0.0.1 localhost" > /etc/hosts
echo "::1 localhost" >> /etc/hosts
echo "127.0.0.1 $(hostname) $(hostname -s)" >> /etc/hosts
echo "https://ftp.eu.openbsd.org/pub/OpenBSD" > /etc/installurl
ftp https://cdn.openbsd.org/pub/OpenBSD/$(uname -r)/$(uname -m)/comp$(uname -r | tr -d .).tgz
tar -C / -xzphf comp$(uname -r | tr -d .).tgz
rm comp$(uname -r | tr -d .).tgz
pkg_add bash pkg_add bash
chsh -s bash vagrant chsh -s bash vagrant
pkg_add xxhash pkg_add xxhash
pkg_add libdeflate
pkg_add lz4 pkg_add lz4
pkg_add zstd
pkg_add git # no fakeroot pkg_add git # no fakeroot
pkg_add rust pkg_add openssl%1.1
pkg_add openssl%3.4
pkg_add py3-pip pkg_add py3-pip
pkg_add py3-virtualenv pkg_add py3-virtualenv
echo 'export BORG_OPENSSL_NAME=eopenssl30' >> ~vagrant/.bash_profile
EOF EOF
end end
def packages_netbsd def packages_netbsd
return <<-EOF return <<-EOF
echo 'https://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/$arch/9.3/All' > /usr/pkg/etc/pkgin/repositories.conf # use the latest stuff, some packages in "9.2" are quite broken
echo 'http://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/$arch/9.0_current/All' > /usr/pkg/etc/pkgin/repositories.conf
pkgin update pkgin update
pkgin -y upgrade pkgin -y upgrade
pkg_add lz4 xxhash git pkg_add zstd lz4 xxhash git
pkg_add rust
pkg_add bash pkg_add bash
chsh -s bash vagrant chsh -s bash vagrant
echo "export PROMPT_COMMAND=" >> ~vagrant/.bash_profile # bug in netbsd 9.3, .bash_profile broken for screen echo "export PROMPT_COMMAND=" >> ~vagrant/.bash_profile # bug in netbsd 9.2, .bash_profile broken for screen
echo "export PROMPT_COMMAND=" >> ~root/.bash_profile # bug in netbsd 9.3, .bash_profile broken for screen echo "export PROMPT_COMMAND=" >> ~root/.bash_profile # bug in netbsd 9.2, .bash_profile broken for screen
pkg_add pkg-config pkg_add pkg-config
# pkg_add fuse # llfuse supports netbsd, but is still buggy. # pkg_add fuse # llfuse supports netbsd, but is still buggy.
# https://bitbucket.org/nikratio/python-llfuse/issues/70/perfuse_open-setsockopt-no-buffer-space # https://bitbucket.org/nikratio/python-llfuse/issues/70/perfuse_open-setsockopt-no-buffer-space
pkg_add py311-sqlite3 py311-pip py311-virtualenv py311-expat pkg_add python39 py39-sqlite3 py39-pip py39-virtualenv py39-expat
ln -s /usr/pkg/bin/python3.11 /usr/pkg/bin/python ln -s /usr/pkg/bin/python3.9 /usr/pkg/bin/python
ln -s /usr/pkg/bin/python3.11 /usr/pkg/bin/python3 ln -s /usr/pkg/bin/python3.9 /usr/pkg/bin/python3
ln -s /usr/pkg/bin/pip3.11 /usr/pkg/bin/pip ln -s /usr/pkg/bin/pip3.9 /usr/pkg/bin/pip
ln -s /usr/pkg/bin/pip3.11 /usr/pkg/bin/pip3 ln -s /usr/pkg/bin/pip3.9 /usr/pkg/bin/pip3
ln -s /usr/pkg/bin/virtualenv-3.11 /usr/pkg/bin/virtualenv ln -s /usr/pkg/bin/virtualenv-3.9 /usr/pkg/bin/virtualenv
ln -s /usr/pkg/bin/virtualenv-3.11 /usr/pkg/bin/virtualenv3 ln -s /usr/pkg/bin/virtualenv-3.9 /usr/pkg/bin/virtualenv3
ln -s /usr/pkg/lib/python3.9/_sysconfigdata_netbsd9.py /usr/pkg/lib/python3.9/_sysconfigdata__netbsd9_.py # bug in netbsd 9.2, expected filename not there.
EOF EOF
end end
def package_update_openindiana def packages_darwin
return <<-EOF return <<-EOF
echo "nameserver 1.1.1.1" > /etc/resolv.conf # install all the (security and other) updates
# needs separate provisioning step + reboot to become effective: sudo softwareupdate --ignore iTunesX
pkg update sudo softwareupdate --ignore iTunes
sudo softwareupdate --ignore Safari
sudo softwareupdate --ignore "Install macOS High Sierra"
sudo softwareupdate --install --all
which brew || CI=1 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
brew update > /dev/null
brew install pkg-config readline openssl@1.1 xxhash libdeflate zstd lz4 xz
brew install --cask macfuse
# brew upgrade # upgrade everything (takes rather long)
echo 'export PKG_CONFIG_PATH=/usr/local/opt/openssl@1.1/lib/pkgconfig' >> ~vagrant/.bash_profile
EOF EOF
end end
def packages_openindiana def packages_openindiana
return <<-EOF return <<-EOF
pkg install gcc-13 git # needs separate provisioning step + reboot:
pkg install pkg-config libxxhash #pkg update
pkg install python-313 #pkg install gcc-7 python-39 setuptools-39
ln -sf /usr/bin/python3.13 /usr/bin/python3 ln -sf /usr/bin/python3.9 /usr/bin/python3
ln -sf /usr/bin/python3.13-config /usr/bin/python3-config
python3 -m ensurepip python3 -m ensurepip
ln -sf /usr/bin/pip3.13 /usr/bin/pip3 ln -sf /usr/bin/pip3.9 /usr/bin/pip3
pip3 install virtualenv pip3 install virtualenv
# let borg's pkg-config find openssl:
pfexec pkg set-mediator -V 3 openssl
EOF EOF
end end
 def install_pyenv(boxname)
   return <<-EOF
-    echo 'export PYTHON_CONFIGURE_OPTS="${PYTHON_CONFIGURE_OPTS} --enable-shared"' >> ~/.bash_profile
+    echo 'export PYTHON_CONFIGURE_OPTS="--enable-shared"' >> ~/.bash_profile
     echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
     echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile
     . ~/.bash_profile
@@ -156,11 +153,17 @@ def install_pyenv(boxname)
   EOF
 end
+def fix_pyenv_darwin(boxname)
+  return <<-EOF
+    echo 'export PYTHON_CONFIGURE_OPTS="--enable-framework"' >> ~/.bash_profile
+  EOF
+end
 def install_pythons(boxname)
   return <<-EOF
     . ~/.bash_profile
-    echo "PYTHON_CONFIGURE_OPTS: ${PYTHON_CONFIGURE_OPTS}"
-    pyenv install 3.13.8
+    pyenv install 3.10.0  # tests, version supporting openssl 1.1
+    pyenv install 3.9.12  # tests, version supporting openssl 1.1, binary build
     pyenv rehash
   EOF
 end
@@ -177,9 +180,9 @@ def build_pyenv_venv(boxname)
   return <<-EOF
     . ~/.bash_profile
     cd /vagrant/borg
-    # use the latest 3.13 release
-    pyenv global 3.13.8
-    pyenv virtualenv 3.13.8 borg-env
+    # use the latest 3.9 release
+    pyenv global 3.9.12
+    pyenv virtualenv 3.9.12 borg-env
     ln -s ~/.pyenv/versions/borg-env .
   EOF
 end
@@ -192,10 +195,8 @@ def install_borg(fuse)
     pip install -U wheel  # upgrade wheel, might be too old
     cd borg
     pip install -r requirements.d/development.lock.txt
-    python3 scripts/make.py clean
-    # install borgstore WITH all options, so it pulls in the needed
-    # requirements, so they will also get into the binaries built. #8574
-    pip install borgstore[sftp,s3]
+    python setup.py clean
+    python setup.py clean2
     pip install -e .[#{fuse}]
   EOF
 end
@@ -205,7 +206,10 @@ def install_pyinstaller()
     . ~/.bash_profile
     cd /vagrant/borg
     . borg-env/bin/activate
-    pip install -r requirements.d/pyinstaller.txt
+    git clone https://github.com/thomaswaldmann/pyinstaller.git
+    cd pyinstaller
+    git checkout v4.7-maint
+    python setup.py install
   EOF
 end
@@ -228,8 +232,8 @@ def run_tests(boxname, skip_env)
     . ../borg-env/bin/activate
     if which pyenv 2> /dev/null; then
       # for testing, use the earliest point releases of the supported python versions:
-      pyenv global 3.13.8
-      pyenv local 3.13.8
+      pyenv global 3.9.12 3.10.0
+      pyenv local 3.9.12 3.10.0
     fi
     # otherwise: just use the system python
     # some OSes can only run specific test envs, e.g. because they miss FUSE support:
@@ -270,95 +274,51 @@ Vagrant.configure(2) do |config|
     v.cpus = $cpus
   end
-  config.vm.define "noble" do |b|
-    b.vm.box = "bento/ubuntu-24.04"
-    b.vm.provider :virtualbox do |v|
-      v.memory = 1024 + $wmem
-    end
-    b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
-    b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
-    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("noble")
-    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
-    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("noble", ".*none.*")
-  end
-  config.vm.define "jammy" do |b|
+  config.vm.define "jammy64" do |b|
     b.vm.box = "ubuntu/jammy64"
     b.vm.provider :virtualbox do |v|
      v.memory = 1024 + $wmem
    end
    b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
    b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
-   b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("jammy")
+   b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("jammy64")
    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
-   b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("jammy", ".*none.*")
+   b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("jammy64", ".*none.*")
   end
-  config.vm.define "trixie" do |b|
-    b.vm.box = "debian/testing64"
-    b.vm.provider :virtualbox do |v|
-      v.memory = 1024 + $wmem
-    end
-    b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
-    b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
-    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("trixie")
-    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("trixie")
-    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("trixie")
-    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
-    b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
-    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("trixie")
-    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("trixie", ".*none.*")
-  end
-  config.vm.define "bookworm32" do |b|
-    b.vm.box = "generic-x32/debian12"
-    b.vm.provider :virtualbox do |v|
-      v.memory = 1024 + $wmem
-    end
-    b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
-    b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
-    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("bookworm32")
-    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("bookworm32")
-    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("bookworm32")
-    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
-    b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
-    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("bookworm32")
-    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("bookworm32", ".*none.*")
-  end
-  config.vm.define "bookworm" do |b|
-    b.vm.box = "debian/bookworm64"
-    b.vm.provider :virtualbox do |v|
-      v.memory = 1024 + $wmem
-    end
-    b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
-    b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
-    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("bookworm")
-    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("bookworm")
-    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("bookworm")
-    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
-    b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
-    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("bookworm")
-    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("bookworm", ".*none.*")
-  end
-  config.vm.define "bullseye" do |b|
+  config.vm.define "bullseye64" do |b|
     b.vm.box = "debian/bullseye64"
     b.vm.provider :virtualbox do |v|
       v.memory = 1024 + $wmem
     end
     b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
     b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
-    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("bullseye")
-    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("bullseye")
-    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("bullseye")
+    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("bullseye64")
+    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("bullseye64")
+    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("bullseye64")
     b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
     b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
-    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("bullseye")
-    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("bullseye", ".*none.*")
+    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("bullseye64")
+    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("bullseye64", ".*none.*")
  end
+  config.vm.define "buster64" do |b|
+    b.vm.box = "debian/buster64"
+    b.vm.provider :virtualbox do |v|
+      v.memory = 1024 + $wmem
+    end
+    b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
+    b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
+    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("buster64")
+    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("buster64")
+    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("buster64")
+    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
+    b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
+    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("buster64")
+    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("buster64", ".*none.*")
+  end
-  config.vm.define "freebsd13" do |b|
+  config.vm.define "freebsd64" do |b|
     b.vm.box = "generic/freebsd13"
     b.vm.provider :virtualbox do |v|
       v.memory = 1024 + $wmem
@@ -366,68 +326,79 @@ Vagrant.configure(2) do |config|
     b.ssh.shell = "sh"
     b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
     b.vm.provision "packages freebsd", :type => :shell, :inline => packages_freebsd
-    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("freebsd13")
-    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("freebsd13")
-    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("freebsd13")
+    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("freebsd64")
+    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("freebsd64")
+    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("freebsd64")
     b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
     b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
-    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("freebsd13")
-    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("freebsd13", ".*(pyfuse3|none).*")
+    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("freebsd64")
+    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("freebsd64", ".*(fuse3|none).*")
   end
-  config.vm.define "freebsd14" do |b|
-    b.vm.box = "generic/freebsd14"
-    b.vm.provider :virtualbox do |v|
-      v.memory = 1024 + $wmem
-    end
-    b.ssh.shell = "sh"
-    b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
-    b.vm.provision "packages freebsd", :type => :shell, :inline => packages_freebsd
-    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("freebsd14")
-    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("freebsd14")
-    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("freebsd14")
-    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
-    b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
-    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("freebsd14")
-    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("freebsd14", ".*(pyfuse3|none).*")
-  end
-  config.vm.define "openbsd7" do |b|
-    b.vm.box = "l3system/openbsd77-amd64"
+  config.vm.define "openbsd64" do |b|
+    b.vm.box = "openbsd71-64"
     b.vm.provider :virtualbox do |v|
      v.memory = 1024 + $wmem
    end
    b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
    b.vm.provision "packages openbsd", :type => :shell, :inline => packages_openbsd
-   b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("openbsd7")
+   b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("openbsd64")
    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("nofuse")
-   b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("openbsd7", ".*fuse.*")
+   b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("openbsd64", ".*fuse.*")
   end
-  config.vm.define "netbsd9" do |b|
+  config.vm.define "netbsd64" do |b|
     b.vm.box = "generic/netbsd9"
     b.vm.provider :virtualbox do |v|
       v.memory = 4096 + $wmem  # need big /tmp tmpfs in RAM!
     end
     b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
     b.vm.provision "packages netbsd", :type => :shell, :inline => packages_netbsd
-    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("netbsd9")
-    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("nofuse")
-    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("netbsd9", ".*fuse.*")
+    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("netbsd64")
+    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg(false)
+    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("netbsd64", ".*fuse.*")
   end
+  config.vm.define "darwin64" do |b|
+    b.vm.box = "macos-sierra"
+    b.vm.provider :virtualbox do |v|
+      v.memory = 4096 + $wmem
+      v.customize ['modifyvm', :id, '--ostype', 'MacOS_64']
+      v.customize ['modifyvm', :id, '--paravirtprovider', 'default']
+      v.customize ['modifyvm', :id, '--nested-hw-virt', 'on']
+      # Adjust CPU settings according to
+      # https://github.com/geerlingguy/macos-virtualbox-vm
+      v.customize ['modifyvm', :id, '--cpuidset',
+                   '00000001', '000306a9', '00020800', '80000201', '178bfbff']
+      # Disable USB variant requiring Virtualbox proprietary extension pack
+      v.customize ["modifyvm", :id, '--usbehci', 'off', '--usbxhci', 'off']
+    end
+    b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
+    b.vm.provision "packages darwin", :type => :shell, :privileged => false, :inline => packages_darwin
+    b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("darwin64")
+    b.vm.provision "fix pyenv", :type => :shell, :privileged => false, :inline => fix_pyenv_darwin("darwin64")
+    b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("darwin64")
+    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("darwin64")
+    b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
+    b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
+    b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("darwin64")
+    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("darwin64", ".*(fuse3|none).*")
+  end
   # rsync on openindiana has troubles, does not set correct owner for /vagrant/borg and thus gives lots of
   # permission errors. can be manually fixed in the VM by: sudo chown -R vagrant /vagrant/borg ; then rsync again.
-  config.vm.define "openindiana" do |b|
-    b.vm.box = "openindiana/hipster"
+  config.vm.define "openindiana64" do |b|
+    b.vm.box = "openindiana"
     b.vm.provider :virtualbox do |v|
       v.memory = 2048 + $wmem
     end
     b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
-    b.vm.provision "package update openindiana", :type => :shell, :inline => package_update_openindiana, :reboot => true
     b.vm.provision "packages openindiana", :type => :shell, :inline => packages_openindiana
-    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("openindiana")
+    b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("openindiana64")
     b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("nofuse")
-    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("openindiana", ".*fuse.*")
+    b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("openindiana64", ".*fuse.*")
   end
+  # TODO: create more VMs with python 3.9+ and openssl 1.1 or 3.0.
+  # See branch 1.1-maint for a better equipped Vagrantfile (but still on py35 and openssl 1.0).
 end

conftest.py (new file, 75 lines added)

@@ -0,0 +1,75 @@
import os
import pytest
# needed to get pretty assertion failures in unit tests:
if hasattr(pytest, 'register_assert_rewrite'):
pytest.register_assert_rewrite('borg.testsuite')
import borg.cache # noqa: E402
from borg.logger import setup_logging # noqa: E402
# Ensure that the loggers exist for all tests
setup_logging()
from borg.testsuite import has_lchflags, has_llfuse, has_pyfuse3 # noqa: E402
from borg.testsuite import are_symlinks_supported, are_hardlinks_supported, is_utime_fully_supported # noqa: E402
from borg.testsuite.platform import fakeroot_detected # noqa: E402
@pytest.fixture(autouse=True)
def clean_env(tmpdir_factory, monkeypatch):
# avoid that we access / modify the user's normal .config / .cache directory:
monkeypatch.setenv('XDG_CONFIG_HOME', str(tmpdir_factory.mktemp('xdg-config-home')))
monkeypatch.setenv('XDG_CACHE_HOME', str(tmpdir_factory.mktemp('xdg-cache-home')))
# also avoid to use anything from the outside environment:
keys = [key for key in os.environ
if key.startswith('BORG_') and key not in ('BORG_FUSE_IMPL', )]
for key in keys:
monkeypatch.delenv(key, raising=False)
# Speed up tests
monkeypatch.setenv("BORG_TESTONLY_WEAKEN_KDF", "1")
def pytest_report_header(config, startdir):
tests = {
"BSD flags": has_lchflags,
"fuse2": has_llfuse,
"fuse3": has_pyfuse3,
"root": not fakeroot_detected(),
"symlinks": are_symlinks_supported(),
"hardlinks": are_hardlinks_supported(),
"atime/mtime": is_utime_fully_supported(),
"modes": "BORG_TESTS_IGNORE_MODES" not in os.environ
}
enabled = []
disabled = []
for test in tests:
if tests[test]:
enabled.append(test)
else:
disabled.append(test)
output = "Tests enabled: " + ", ".join(enabled) + "\n"
output += "Tests disabled: " + ", ".join(disabled)
return output
class DefaultPatches:
def __init__(self, request):
self.org_cache_wipe_cache = borg.cache.LocalCache.wipe_cache
def wipe_should_not_be_called(*a, **kw):
raise AssertionError("Cache wipe was triggered, if this is part of the test add "
"@pytest.mark.allow_cache_wipe")
if 'allow_cache_wipe' not in request.keywords:
borg.cache.LocalCache.wipe_cache = wipe_should_not_be_called
request.addfinalizer(self.undo)
def undo(self):
borg.cache.LocalCache.wipe_cache = self.org_cache_wipe_cache
@pytest.fixture(autouse=True)
def default_patches(request):
return DefaultPatches(request)
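As a hypothetical usage sketch (the `allow_cache_wipe` marker name comes from the `DefaultPatches` helper above; the test name and body are illustrative), a test that legitimately triggers a cache wipe would opt in via a pytest marker:

```python
import pytest

# Sketch: a test that intentionally exercises LocalCache.wipe_cache must
# carry this marker; otherwise the autouse default_patches fixture replaces
# wipe_cache with a stub that raises AssertionError.
@pytest.mark.allow_cache_wipe
def test_cache_wipe_path():
    ...  # test body that ends up calling LocalCache.wipe_cache
```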


@@ -1,5 +1,5 @@
-Here we store third-party documentation, licenses, etc.
+Here we store 3rd party documentation, licenses, etc.
-Please note that all files inside the "borg" package directory (except those
-excluded in setup.py) will be installed, so do not keep docs or licenses
+Please note that all files inside the "borg" package directory (except the
+stuff excluded in setup.py) will be INSTALLED, so don't keep docs or licenses
 there.


@@ -21,7 +21,7 @@ help:
 	@echo "  singlehtml to make a single large HTML file"
 	@echo "  pickle     to make pickle files"
 	@echo "  json       to make JSON files"
-	@echo "  htmlhelp   to make HTML files and an HTML help project"
+	@echo "  htmlhelp   to make HTML files and a HTML help project"
 	@echo "  qthelp     to make HTML files and a qthelp project"
 	@echo "  devhelp    to make HTML files and a Devhelp project"
 	@echo "  epub       to make an epub"


@@ -1,6 +1,6 @@
 <div class="sidebar-block">
   <div class="sidebar-toc">
-    {# Restrict the sidebar ToC depth to two levels while generating command usage pages.
+    {# Restrict the sidebar toc depth to two levels while generating command usage pages.
        This avoids superfluous entries for each "Description" and "Examples" heading. #}
     {% if pagename.startswith("usage/") and pagename not in (
         "usage/general", "usage/help", "usage/debug", "usage/notes",


@@ -1,173 +0,0 @@
{%- extends "basic/layout.html" %}
{# Do this so that Bootstrap is included before the main CSS file. #}
{%- block htmltitle %}
{% set script_files = script_files + ["_static/myscript.js"] %}
<!-- Licensed under the Apache 2.0 License -->
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/fonts/open-sans/stylesheet.css', 1) }}" />
<!-- Licensed under the SIL Open Font License -->
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/fonts/source-serif-pro/source-serif-pro.css', 1) }}" />
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/css/bootstrap.min.css', 1) }}" />
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/css/bootstrap-theme.min.css', 1) }}" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
{{ super() }}
{%- endblock %}
{%- block extrahead %}
{% if theme_touch_icon %}
<link rel="apple-touch-icon" href="{{ pathto('_static/' ~ theme_touch_icon, 1) }}" />
{% endif %}
{{ super() }}
{% endblock %}
{# Displays the URL for the homepage if it's set, or the master_doc if it is not. #}
{% macro homepage() -%}
{%- if theme_homepage %}
{%- if hasdoc(theme_homepage) %}
{{ pathto(theme_homepage) }}
{%- else %}
{{ theme_homepage }}
{%- endif %}
{%- else %}
{{ pathto(master_doc) }}
{%- endif %}
{%- endmacro %}
{# Displays the URL for the tospage if it's set, or falls back to the homepage macro. #}
{% macro tospage() -%}
{%- if theme_tospage %}
{%- if hasdoc(theme_tospage) %}
{{ pathto(theme_tospage) }}
{%- else %}
{{ theme_tospage }}
{%- endif %}
{%- else %}
{{ homepage() }}
{%- endif %}
{%- endmacro %}
{# Displays the URL for the projectpage if it's set, or falls back to the homepage macro. #}
{% macro projectlink() -%}
{%- if theme_projectlink %}
{%- if hasdoc(theme_projectlink) %}
{{ pathto(theme_projectlink) }}
{%- else %}
{{ theme_projectlink }}
{%- endif %}
{%- else %}
{{ homepage() }}
{%- endif %}
{%- endmacro %}
{# Displays the next and previous links both before and after the content. #}
{% macro render_relations() -%}
{% if prev or next %}
<div class="footer-relations">
{% if prev %}
<div class="pull-left">
<a class="btn btn-default" href="{{ prev.link|e }}" title="{{ _('previous chapter')}} (use the left arrow)">{{ prev.title }}</a>
</div>
{% endif %}
{%- if next and next.title != '&lt;no title&gt;' %}
<div class="pull-right">
<a class="btn btn-default" href="{{ next.link|e }}" title="{{ _('next chapter')}} (use the right arrow)">{{ next.title }}</a>
</div>
{%- endif %}
</div>
<div class="clearer"></div>
{% endif %}
{%- endmacro %}
{%- macro guzzle_sidebar() %}
<div id="left-column">
<div class="sphinxsidebar">
{%- if sidebars != None %}
{#- New-style sidebar: explicitly include/exclude templates. #}
{%- for sidebartemplate in sidebars %}
{%- include sidebartemplate %}
{%- endfor %}
{% else %}
{% include "logo-text.html" %}
{% include "globaltoc.html" %}
{% include "searchbox.html" %}
{%- endif %}
</div>
</div>
{%- endmacro %}
{%- block content %}
{%- if pagename == 'index' and theme_index_template %}
{% include theme_index_template %}
{%- else %}
<div class="container-wrapper">
<div id="mobile-toggle">
<a href="#"><span class="glyphicon glyphicon-align-justify" aria-hidden="true"></span></a>
</div>
{%- block sidebar1 %}{{ guzzle_sidebar() }}{% endblock %}
{%- block document_wrapper %}
{%- block document %}
<div id="right-column">
{% block breadcrumbs %}
<div role="navigation" aria-label="breadcrumbs navigation">
<ol class="breadcrumb">
<li><a href="{{ pathto(master_doc) }}">Docs</a></li>
{% for doc in parents %}
<li><a href="{{ doc.link|e }}">{{ doc.title }}</a></li>
{% endfor %}
<li>{{ title }}</li>
</ol>
</div>
{% endblock %}
<div class="document clearer body" role="main">
{% block body %} {% endblock %}
</div>
{%- block bottom_rel_links %}
{{ render_relations() }}
{%- endblock %}
</div>
<div class="clearfix"></div>
{%- endblock %}
{%- endblock %}
{%- block comments -%}
{% if theme_disqus_comments_shortname %}
<div class="container comment-container">
{% include "comments.html" %}
</div>
{% endif %}
{%- endblock %}
</div>
{%- endif %}
{%- endblock %}
{%- block footer %}
<script type="text/javascript">
$("#mobile-toggle a").click(function () {
$("#left-column").toggle();
});
</script>
<script type="text/javascript" src="{{ pathto('_static/js/bootstrap.js', 1)}}"></script>
{%- block footer_wrapper %}
<div class="footer">
&copy; Copyright {{ copyright }}. Created using <a href="http://sphinx.pocoo.org/">Sphinx</a>.
</div>
{%- endblock %}
{%- block ga %}
{%- if theme_google_analytics_account %}
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', '{{ theme_google_analytics_account }}']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
{%- endif %}
{%- endblock %}
{%- endblock %}


@@ -1,117 +0,0 @@
Binary BorgBackup builds
========================
General notes
-------------
The binaries are supposed to work on the specified platform without installing anything else.
There are some limitations, though:
- for Linux, your system must have the same or newer glibc version as the one used for building
- for macOS, you need to have the same or newer macOS version as the one used for building
- for other OSes, there are likely similar limitations
If you don't find something working on your system, check the older borg releases.
*.asc are GnuPG signatures - only provided for locally built binaries.
*.exe (or no extension) is the single-file fat binary.
*.tgz is the single-directory fat binary (extract it once with tar -xzf).
Using the single-directory build is faster and does not require as much space
in the temporary directory as the self-extracting single-file build.
macOS: to avoid issues, download the file via the command line OR remove the
"quarantine" attribute after downloading:
$ xattr -dr com.apple.quarantine borg-macos1012.tgz
Download the correct files
--------------------------
Binaries built on GitHub servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
borg-linux-glibc235-x86_64-gh Linux AMD/Intel (built on Ubuntu 22.04 LTS with glibc 2.35)
borg-linux-glibc235-arm64-gh Linux ARM (built on Ubuntu 22.04 LTS with glibc 2.35)
borg-macos-15-arm64-gh macOS Apple Silicon (built on macOS 15 w/o FUSE support)
borg-macos-15-x86_64-gh macOS Intel (built on macOS 15 w/o FUSE support)
borg-freebsd-14-x86_64-gh FreeBSD AMD/Intel (built on FreeBSD 14)
Binaries built locally
~~~~~~~~~~~~~~~~~~~~~~
borg-linux-glibc231-x86_64 Linux (built on Debian 11 "Bullseye" with glibc 2.31)
Note: if you don't find a specific binary here, check release 1.4.1 or 1.2.9.
Verifying your download
-----------------------
I provide GPG signatures for files which I have built locally on my machines.
To check the GPG signature, download both the file and the corresponding
signature (*.asc file) and then (on the shell) type, for example:
gpg --recv-keys 9F88FB52FAF7B393
gpg --verify borgbackup.tar.gz.asc borgbackup.tar.gz
The files are signed by:
Thomas Waldmann <tw@waldmann-edv.de>
GPG key fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
My fingerprint is also in the footer of all my BorgBackup mailing list posts.
Provenance attestations for GitHub-built binaries
-------------------------------------------------
For binaries built on GitHub (files with a "-gh" suffix in the name), we publish
an artifact provenance attestation that proves the binary was built by our
GitHub Actions workflow from a specific commit or tag. You can verify this using
the GitHub CLI (gh). Install it from https://cli.github.com/ and make sure you
use a recent version that supports "gh attestation".
Practical example (Linux, 2.0.0b20 tag):
curl -LO https://github.com/borgbackup/borg/releases/download/2.0.0b20/borg-linux-glibc235-x86_64-gh
gh attestation verify --repo borgbackup/borg --source-ref refs/tags/2.0.0b20 borg-linux-glibc235-x86_64-gh
If verification succeeds, gh prints a summary stating the subject (your file),
that it was attested by GitHub Actions, and the job/workflow reference.
Installing
----------
It is suggested that you rename or symlink the binary to just "borg".
If you need "borgfs", just also symlink it to the same binary; it will
detect internally under which name it was invoked.
On UNIX-like platforms, /usr/local/bin/ or ~/bin/ is a nice place for it,
but you can invoke it from anywhere by providing the full path to it.
Make sure the file is readable and executable (chmod +rx borg on UNIX-like
platforms).
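The "same binary, two names" trick mentioned above (borg vs. borgfs) comes down to inspecting the program name at startup. This is a minimal Python sketch of that mechanism only, not borg's actual entry-point code:

```python
# Sketch: dispatch on the name the program was invoked under.
# (Illustrative only -- borg's real entry point differs.)
import os
import sys

def dispatch(argv=None):
    """Return which personality to run, based on argv[0]."""
    argv = sys.argv if argv is None else argv
    prog = os.path.basename(argv[0])
    if prog.startswith("borgfs"):
        return "borgfs"  # behave as the FUSE mount helper
    return "borg"        # behave as the normal CLI

if __name__ == "__main__":
    print(dispatch())
```

A symlink named "borgfs" thus selects the borgfs behavior without needing a second copy of the binary.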
Reporting issues
----------------
Please first check the FAQ and whether a GitHub issue already exists.
If you find a NEW issue, please open a ticket on our issue tracker:
https://github.com/borgbackup/borg/issues/
There, please give:
- the version number (it is displayed if you invoke borg -V)
- the sha256sum of the binary
- a good description of what the issue is
- a good description of how to reproduce your issue
- a traceback with system info (if you have one)
- your precise platform (CPU, 32/64-bit?), OS, distribution, release
- your Python and (g)libc versions
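If sha256sum (or shasum -a 256) is not available on your platform, the checksum requested above can be computed with a few lines of Python. The filename in the example is an assumption; point it at your actual download:

```python
# Compute the sha256 of a (possibly large) file without loading it at once.
import hashlib

def sha256sum(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):  # 1 MiB blocks
            h.update(block)
    return h.hexdigest()

# Example (adjust to your actual download):
# print(sha256sum("borg-linux-glibc235-x86_64-gh"))
```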


@ -5,8 +5,8 @@
Borg documentation Borg documentation
================== ==================
.. When you add an element here, do not forget to add it to index.rst. .. when you add an element here, do not forget to add it to index.rst
.. Note: Some things are in appendices (see latex_appendices in conf.py). .. Note: Some things are in appendices (see latex_appendices in conf.py)
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 2


@ -52,7 +52,8 @@ h1 {
} }
.container.experimental, .container.experimental,
#debugging-facilities { #debugging-facilities,
#borg-recreate {
/* don't change text dimensions */ /* don't change text dimensions */
margin: 0 -30px; /* padding below + border width */ margin: 0 -30px; /* padding below + border width */
padding: 0 10px; /* 10 px visual margin between edge of text and the border */ padding: 0 10px; /* 10 px visual margin between edge of text and the border */

File diff suppressed because it is too large


@ -1,807 +0,0 @@
.. _changelog_0x:
Change Log 0.x
==============
Version 0.30.0 (2016-01-23)
---------------------------
Compatibility notes:
- The new default logging level is WARNING. Previously, it was INFO, which was
more verbose. Use -v (or --info) to show log level INFO messages again.
See the "general" section in the usage docs.
- For borg create, you need --list (in addition to -v) to see the long file
list (was needed so you can have e.g. --stats alone without the long list)
- See below about BORG_DELETE_I_KNOW_WHAT_I_AM_DOING (was:
BORG_CHECK_I_KNOW_WHAT_I_AM_DOING)
Bug fixes:
- fix crash when using borg create --dry-run --keep-tag-files, #570
- make sure teardown with cleanup happens for Cache and RepositoryCache,
avoiding leftover locks and TEMP dir contents, #285 (partially), #548
- fix locking KeyError, partial fix for #502
- log stats consistently, #526
- add abbreviated weekday to timestamp format, fixes #496
- strip whitespace when loading exclusions from file
- unset LD_LIBRARY_PATH before invoking ssh, fixes strange OpenSSL library
version warning when using the borg binary, #514
- add some error handling/fallback for C library loading, #494
- added BORG_DELETE_I_KNOW_WHAT_I_AM_DOING for check in "borg delete", #503
- remove unused "repair" rpc method name
New features:
- borg create: implement exclusions using regular expression patterns.
- borg create: implement inclusions using patterns.
- borg extract: support patterns, #361
- support different styles for patterns:
- fnmatch (`fm:` prefix, default when omitted), like borg <= 0.29.
- shell (`sh:` prefix) with `*` not matching directory separators and
`**/` matching 0..n directories
- path prefix (`pp:` prefix, for unifying borg create pp1 pp2 into the
patterns system), semantics like in borg <= 0.29
- regular expression (`re:`), new!
- --progress option for borg upgrade (#291) and borg delete <archive>
- update progress indication more often (e.g. for borg create within big
files or for borg check repo), #500
- finer chunker granularity for items metadata stream, #547, #487
- borg create --list is now used (in addition to -v) to enable the verbose
file list output
- display borg version below tracebacks, #532
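The practical difference between the pattern styles above is mostly in how `*` treats directory separators. The following stdlib sketch illustrates the semantics; it is not borg's matcher, and the `sh:` behavior is emulated here with a hand-written regex:

```python
import fnmatch
import re

path = "home/user/cache/tmp.txt"

# fm: fnmatch-style -- '*' matches anything, including '/' separators
assert fnmatch.fnmatchcase(path, "home/*.txt")

# sh: shell-style -- '*' must not cross '/'; emulated by turning '*'
# into '[^/]*'
assert re.fullmatch(r"home/[^/]*\.txt", path) is None  # cannot span user/cache/

# re: full regular expressions, newly supported
assert re.search(r"\.txt$", path)
```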
Other changes:
- hashtable size (and thus: RAM and disk consumption) follows a growth policy:
grows fast while small, grows slower when getting bigger, #527
- Vagrantfile: use pyinstaller 3.1 to build binaries, freebsd sqlite3 fix,
fixes #569
- no separate binaries for centos6 any more because the generic linux binaries
also work on centos6 (or in general: on systems with a slightly older glibc
  than debian7)
- dev environment: require virtualenv<14.0 so we get a py32 compatible pip
- docs:
- add space-saving chunks.archive.d trick to FAQ
- important: clarify -v and log levels in usage -> general, please read!
- sphinx configuration: create a simple man page from usage docs
- add a repo server setup example
- disable unneeded SSH features in authorized_keys examples for security.
- borg prune only knows "--keep-within" and not "--within"
- add gource video to resources docs, #507
- add netbsd install instructions
- authors: make it more clear what refers to borg and what to attic
- document standalone binary requirements, #499
- rephrase the mailing list section
- development docs: run build_api and build_usage before tagging release
- internals docs: hash table max. load factor is 0.75 now
- markup, typo, grammar, phrasing, clarifications and other fixes.
- add gcc gcc-c++ to redhat/fedora/corora install docs, fixes #583
Version 0.29.0 (2015-12-13)
---------------------------
Compatibility notes:
- When upgrading to 0.29.0, you need to upgrade client as well as server
installations due to the locking and command-line interface changes; otherwise
you'll get an error message about an RPC protocol mismatch or a wrong command-line
option.
If you run a server that needs to support both old and new clients, it is
suggested that you have a "borg-0.28.2" and a "borg-0.29.0" command.
Clients then can choose via e.g. "borg --remote-path=borg-0.29.0 ...".
- The default waiting time for a lock changed from infinity to 1 second for a
better interactive user experience. If the repo you want to access is
currently locked, borg will now terminate after 1s with an error message.
If you have scripts that should wait for the lock for a longer time, use
--lock-wait N (with N being the maximum wait time in seconds).
Bug fixes:
- hash table tuning (better chosen hashtable load factor 0.75 and prime initial
size of 1031 gave ~1000x speedup in some scenarios)
- avoid creation of an orphan lock for one case, #285
- --keep-tag-files: fix file mode and multiple tag files in one directory, #432
- fixes for "borg upgrade" (attic repo converter), #466
- remove --progress isatty magic (and also --no-progress option) again, #476
- borg init: display proper repo URL
- fix format of umask in help pages, #463
New features:
- implement --lock-wait, support timeout for UpgradableLock, #210
- implement borg break-lock command, #157
- include system info below traceback, #324
- sane remote logging, remote stderr, #461:
- remote log output: intercept it and log it via local logging system,
with "Remote: " prefixed to message. log remote tracebacks.
- remote stderr: output it to local stderr with "Remote: " prefixed.
- add --debug and --info (same as --verbose) to set the log level of the
builtin logging configuration (which otherwise defaults to warning), #426
note: there are few messages emitted at DEBUG level currently.
- optionally configure logging via env var BORG_LOGGING_CONF
- add --filter option for status characters: e.g. to show only the added
or modified files (and also errors), use "borg create -v --filter=AME ...".
- more progress indicators, #394
- use ISO-8601 date and time format, #375
- "borg check --prefix" to restrict archive checking to that name prefix, #206
Other changes:
- hashindex_add C implementation (speed up cache re-sync for new archives)
- increase FUSE read_size to 1024 (speed up metadata operations)
- check/delete/prune --save-space: free unused segments quickly, #239
- increase rpc protocol version to 2 (see also Compatibility notes), #458
- silence borg by default (via default log level WARNING)
- get rid of C compiler warnings, #391
- upgrade OS X FUSE to 3.0.9 on the OS X binary build system
- use python 3.5.1 to build binaries
- docs:
- new mailing list borgbackup@python.org, #468
- readthedocs: color and logo improvements
- load coverage icons over SSL (avoids mixed content)
- more precise binary installation steps
- update release procedure docs about OS X FUSE
- FAQ entry about unexpected 'A' status for unchanged file(s), #403
- add docs about 'E' file status
- add "borg upgrade" docs, #464
- add developer docs about output and logging
- clarify encryption, add note about client-side encryption
- add resources section, with videos, talks, presentations, #149
- Borg moved to Arch Linux [community]
- fix wrong installation instructions for archlinux
Version 0.28.2 (2015-11-15)
---------------------------
New features:
- borg create --exclude-if-present TAGFILE - exclude directories that have the
given file from the backup. You can additionally give --keep-tag-files to
preserve just the directory roots and the tag-files (but not back up other
directory contents), #395, attic #128, attic #142
Other changes:
- do not create docs sources at build time (just have them in the repo),
completely remove have_cython() hack, do not use the "mock" library at build
time, #384
- avoid hidden import, make it easier for PyInstaller, easier fix for #218
- docs:
- add description of item flags / status output, fixes #402
- explain how to regenerate usage and API files (build_api or
build_usage) and when to commit usage files directly into git, #384
- minor install docs improvements
Version 0.28.1 (2015-11-08)
---------------------------
Bug fixes:
- do not try to build api / usage docs for production install,
fixes unexpected "mock" build dependency, #384
Other changes:
- avoid using msgpack.packb at import time
- fix formatting issue in changes.rst
- fix build on readthedocs
Version 0.28.0 (2015-11-08)
---------------------------
Compatibility notes:
- changed return codes (exit codes), see docs. in short:
old: 0 = ok, 1 = error. now: 0 = ok, 1 = warning, 2 = error
New features:
- refactor return codes (exit codes), fixes #61
- add --show-rc option to enable "terminating with X status, rc N" output, fixes #58, #351
- borg create backups atime and ctime additionally to mtime, fixes #317
- extract: support atime additionally to mtime
- FUSE: support ctime and atime additionally to mtime
- support borg --version
- emit a warning if we have a slow msgpack installed
- borg list --prefix=thishostname- REPO, fixes #205
- Debug commands (do not use except if you know what you do: debug-get-obj,
debug-put-obj, debug-delete-obj, debug-dump-archive-items.
Bug fixes:
- setup.py: fix bug related to BORG_LZ4_PREFIX processing
- fix "check" for repos that have incomplete chunks, fixes #364
- borg mount: fix unlocking of repository at umount time, fixes #331
- fix reading files without touching their atime, #334
- non-ascii ACL fixes for Linux, FreeBSD and OS X, #277
- fix acl_use_local_uid_gid() and add a test for it, attic #359
- borg upgrade: do not upgrade repositories in place by default, #299
- fix cascading failure with the index conversion code, #269
- borg check: implement 'cmdline' archive metadata value decoding, #311
- fix RobustUnpacker, it missed some metadata keys (new atime and ctime keys
were missing, but also bsdflags). add check for unknown metadata keys.
- create from stdin: also save atime, ctime (cosmetic)
- use default_notty=False for confirmations, fixes #345
- vagrant: fix msgpack installation on centos, fixes #342
- deal with unicode errors for symlinks in same way as for regular files and
have a helpful warning message about how to fix wrong locale setup, fixes #382
- add ACL keys the RobustUnpacker must know about
Other changes:
- improve file size displays, more flexible size formatters
- explicitly commit to the units standard, #289
- archiver: add E status (means that an error occurred when processing this
  (single) item)
- do binary releases via "github releases", closes #214
- create: use -x and --one-file-system (was: --do-not-cross-mountpoints), #296
- a lot of changes related to using "logging" module and screen output, #233
- show progress display if on a tty, output more progress information, #303
- factor out status output so it is consistent, fix surrogates removal,
maybe fixes #309
- move away from RawConfigParser to ConfigParser
- archive checker: better error logging, give chunk_id and sequence numbers
(can be used together with borg debug-dump-archive-items).
- do not mention the deprecated passphrase mode
- emit a deprecation warning for --compression N (giving a just a number)
- misc .coveragerc fixes (and coverage measurement improvements), fixes #319
- refactor confirmation code, reduce code duplication, add tests
- prettier error messages, fixes #307, #57
- tests:
- add a test to find disk-full issues, #327
- travis: also run tests on Python 3.5
- travis: use tox -r so it rebuilds the tox environments
- test the generated pyinstaller-based binary by archiver unit tests, #215
- vagrant: tests: announce whether fakeroot is used or not
- vagrant: add vagrant user to fuse group for debianoid systems also
- vagrant: llfuse install on darwin needs pkgconfig installed
- vagrant: use pyinstaller from develop branch, fixes #336
- benchmarks: test create, extract, list, delete, info, check, help, fixes #146
- benchmarks: test with both the binary and the python code
- archiver tests: test with both the binary and the python code, fixes #215
- make basic test more robust
- docs:
- moved docs to borgbackup.readthedocs.org, #155
- a lot of fixes and improvements, use mobile-friendly RTD standard theme
- use zlib,6 compression in some examples, fixes #275
- add missing rename usage to docs, closes #279
- include the help offered by borg help <topic> in the usage docs, fixes #293
- include a list of major changes compared to attic into README, fixes #224
- add OS X install instructions, #197
- more details about the release process, #260
- fix linux glibc requirement (binaries built on debian7 now)
- build: move usage and API generation to setup.py
- update docs about return codes, #61
- remove api docs (too much breakage on rtd)
- borgbackup install + basics presentation (asciinema)
- describe the current style guide in documentation
- add section about debug commands
- warn about not running out of space
- add example for rename
- improve chunker params docs, fixes #362
- minor development docs update
Version 0.27.0 (2015-10-07)
---------------------------
New features:
- "borg upgrade" command - attic -> borg one time converter / migration, #21
- temporary hack to avoid using lots of disk space for chunks.archive.d, #235:
To use it: rm -rf chunks.archive.d ; touch chunks.archive.d
- respect XDG_CACHE_HOME, attic #181
- add support for arbitrary SSH commands, attic #99
- borg delete --cache-only REPO (only delete cache, not REPO), attic #123
Bug fixes:
- use Debian 7 (wheezy) to build pyinstaller borgbackup binaries, fixes slow
down observed when running the Centos6-built binary on Ubuntu, #222
- do not crash on empty lock.roster, fixes #232
- fix multiple issues with the cache config version check, #234
- fix segment entry header size check, attic #352
plus other error handling improvements / code deduplication there.
- always give segment and offset in repo IntegrityErrors
Other changes:
- stop producing binary wheels, remove docs about it, #147
- docs:
- add warning about prune
- generate usage include files only as needed
- development docs: add Vagrant section
- update / improve / reformat FAQ
- hint to single-file pyinstaller binaries from README
Version 0.26.1 (2015-09-28)
---------------------------
This is a minor update, just docs and new pyinstaller binaries.
- docs update about python and binary requirements
- better docs for --read-special, fix #220
- re-built the binaries, fix #218 and #213 (glibc version issue)
- update web site about single-file pyinstaller binaries
Note: if you did a python-based installation, there is no need to upgrade.
Version 0.26.0 (2015-09-19)
---------------------------
New features:
- Faster cache sync (do all in one pass, remove tar/compression stuff), #163
- BORG_REPO env var to specify the default repo, #168
- read special files as if they were regular files, #79
- implement borg create --dry-run, attic issue #267
- Normalize paths before pattern matching on OS X, #143
- support OpenBSD and NetBSD (except xattrs/ACLs)
- support / run tests on Python 3.5
Bug fixes:
- borg mount repo: use absolute path, attic #200, attic #137
- chunker: use off_t to get 64bit on 32bit platform, #178
- initialize chunker fd to -1, so it's not equal to STDIN_FILENO (0)
- fix reaction to "no" answer at delete repo prompt, #182
- setup.py: detect lz4.h header file location
- to support python < 3.2.4, add less buggy argparse lib from 3.2.6 (#194)
- fix for obtaining ``char *`` from temporary Python value (old code causes
a compile error on Mint 17.2)
- llfuse 0.41 install troubles on some platforms, require < 0.41
(UnicodeDecodeError exception due to non-ascii llfuse setup.py)
- cython code: add some int types to get rid of unspecific python add /
subtract operations (avoid ``undefined symbol FPE_``... error on some platforms)
- fix verbose mode display of stdin backup
- extract: warn if an include pattern never matched, fixes #209,
implement counters for Include/ExcludePatterns
- archive names with slashes are invalid, attic issue #180
- chunker: add a check whether the POSIX_FADV_DONTNEED constant is defined -
fixes building on OpenBSD.
Other changes:
- detect inconsistency / corruption / hash collision, #170
- replace versioneer with setuptools_scm, #106
- docs:
- pkg-config is needed for llfuse installation
- be more clear about pruning, attic issue #132
- unit tests:
- xattr: ignore security.selinux attribute showing up
- ext3 seems to need a bit more space for a sparse file
- do not test lzma level 9 compression (avoid MemoryError)
- work around strange mtime granularity issue on netbsd, fixes #204
- ignore st_rdev if file is not a block/char device, fixes #203
- stay away from the setgid and sticky mode bits
- use Vagrant to do easy cross-platform testing (#196), currently:
- Debian 7 "wheezy" 32bit, Debian 8 "jessie" 64bit
- Ubuntu 12.04 32bit, Ubuntu 14.04 64bit
- Centos 7 64bit
- FreeBSD 10.2 64bit
- OpenBSD 5.7 64bit
- NetBSD 6.1.5 64bit
- Darwin (OS X Yosemite)
Version 0.25.0 (2015-08-29)
---------------------------
Compatibility notes:
- lz4 compression library (liblz4) is a new requirement (#156)
- the new compression code is very compatible: as long as you stay with zlib
compression, older borg releases will still be able to read data from a
repo/archive made with the new code (note: this is not the case for the
default "none" compression, use "zlib,0" if you want a "no compression" mode
that can be read by older borg). Also the new code is able to read repos and
archives made with older borg versions (for all zlib levels 0..9).
Deprecations:
- --compression N (with N being a number, as in 0.24) is deprecated.
We keep the --compression 0..9 for now not to break scripts, but it is
deprecated and will be removed later, so better fix your scripts now:
--compression 0 (as in 0.24) is the same as --compression zlib,0 (now).
BUT: if you do not want compression, use --compression none
(which is the default).
--compression 1 (in 0.24) is the same as --compression zlib,1 (now)
--compression 9 (in 0.24) is the same as --compression zlib,9 (now)
New features:
- create --compression none (default, means: do not compress, just pass through
data "as is". this is more efficient than zlib level 0 as used in borg 0.24)
- create --compression lz4 (super-fast, but not very high compression)
- create --compression zlib,N (slower, higher compression, default for N is 6)
- create --compression lzma,N (slowest, highest compression, default N is 6)
- honor the nodump flag (UF_NODUMP) and do not back up such items
- list --short just outputs a simple list of the files/directories in an archive
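The note that --compression none is more efficient than zlib level 0 follows from zlib still framing the data (stored-block headers plus a checksum) even when it does not compress. A stdlib illustration of that overhead, not borg's code:

```python
import zlib

data = b"x" * 100_000

# zlib level 0 does not compress, but still adds framing overhead:
stored = zlib.compress(data, 0)
assert len(stored) > len(data)

# real compression levels trade speed for size:
fast = zlib.compress(data, 1)
best = zlib.compress(data, 9)
assert len(fast) < len(data) and len(best) < len(data)

# "--compression none" would simply pass `data` through unchanged.
```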
Bug fixes:
- fixed --chunker-params parameter order confusion / malfunction, fixes #154
- close fds of segments we delete (during compaction)
- close files which fell out the lrucache
- fadvise DONTNEED now is only called for the byte range actually read, not for
the whole file, fixes #158.
- fix issue with negative "all archives" size, fixes #165
- restore_xattrs: ignore if setxattr fails with EACCES, fixes #162
Other changes:
- remove fakeroot requirement for tests, tests run faster without fakeroot
(test setup does not fail any more without fakeroot, so you can run with or
without fakeroot), fixes #151 and #91.
- more tests for archiver
- recover_segment(): don't assume we have an fd for segment
- lrucache refactoring / cleanup, add dispose function, py.test tests
- generalize hashindex code for any key length (less hardcoding)
- lock roster: catch file not found in remove() method and ignore it
- travis CI: use requirements file
- improved docs:
- replace hack for llfuse with proper solution (install libfuse-dev)
- update docs about compression
- update development docs about fakeroot
- internals: add some words about lock files / locking system
- support: mention BountySource and for what it can be used
- theme: use a lighter green
- add pypi, wheel, dist package based install docs
- split install docs into system-specific preparations and generic instructions
Version 0.24.0 (2015-08-09)
---------------------------
Incompatible changes (compared to 0.23):
- borg now always issues --umask NNN option when invoking another borg via ssh
on the repository server. By that, it's making sure it uses the same umask
for remote repos as for local ones. Because of this, you must upgrade both
server and client(s) to 0.24.
- the default umask is 077 now (if you do not specify via --umask) which might
  be different from the one you used previously. The default umask prevents
  accidentally giving group and/or others access permissions to files
  created by borg (e.g. the repository).
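The effect of the new default umask 077 can be seen in a short stdlib experiment on a POSIX system (illustration only, not borg's code): permission bits set in the umask are stripped from newly created files, so group and others get no access.

```python
import os
import stat
import tempfile

old = os.umask(0o077)          # borg's new default
try:
    path = os.path.join(tempfile.mkdtemp(), "repo-file")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)  # request rw for all
    os.close(fd)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    assert mode == 0o600       # umask 077 removed group/other bits
finally:
    os.umask(old)              # restore the previous umask
```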
Deprecations:
- "--encryption passphrase" mode is deprecated, see #85 and #97.
See the new "--encryption repokey" mode for a replacement.
New features:
- borg create --chunker-params ... to configure the chunker, fixes #16
(attic #302, attic #300, and somehow also #41).
This can be used to reduce memory usage caused by chunk management overhead,
so borg does not create a huge chunks index/repo index and eats all your RAM
if you back up lots of data in huge files (like VM disk images).
See docs/misc/create_chunker-params.txt for more information.
- borg info now reports chunk counts in the chunk index.
- borg create --compression 0..9 to select zlib compression level, fixes #66
(attic #295).
- borg init --encryption repokey (to store the encryption key into the repo),
fixes #85
- improve at-end error logging, always log exceptions and set exit_code=1
- LoggedIO: better error checks / exceptions / exception handling
- implement --remote-path to allow non-default-path borg locations, #125
- implement --umask M and use 077 as default umask for better security, #117
- borg check: give a named single archive to it, fixes #139
- cache sync: show progress indication
- cache sync: reimplement the chunk index merging in C
Bug fixes:
- fix segfault that happened for unreadable files (chunker: n needs to be a
signed size_t), #116
- fix the repair mode, #144
- repo delete: add destroy to allowed rpc methods, fixes issue #114
- more compatible repository locking code (based on mkdir), maybe fixes #92
(attic #317, attic #201).
- better Exception msg if no Borg is installed on the remote repo server, #56
- create a RepositoryCache implementation that can cope with >2GiB,
fixes attic #326.
- fix Traceback when running check --repair, attic #232
- clarify help text, fixes #73.
- add help string for --no-files-cache, fixes #140
Other changes:
- improved docs:
- added docs/misc directory for misc. writeups that won't be included
"as is" into the html docs.
- document environment variables and return codes (attic #324, attic #52)
- web site: add related projects, fix web site url, IRC #borgbackup
- Fedora/Fedora-based install instructions added to docs
- Cygwin-based install instructions added to docs
- updated AUTHORS
- add FAQ entries about redundancy / integrity
- clarify that borg extract uses the cwd as extraction target
- update internals doc about chunker params, memory usage and compression
- added docs about development
- add some words about resource usage in general
- document how to back up a raw disk
- add note about how to run borg from virtual env
- add solutions for (ll)fuse installation problems
- document what borg check does, fixes #138
- reorganize borgbackup.github.io sidebar, prev/next at top
- deduplicate and refactor the docs / README.rst
- use borg-tmp as prefix for temporary files / directories
- short prune options without "keep-" are deprecated, do not suggest them
- improved tox configuration
- remove usage of unittest.mock, always use mock from pypi
- use entrypoints instead of scripts, for better use of the wheel format and
modern installs
- add requirements.d/development.txt and modify tox.ini
- use travis-ci for testing based on Linux and (new) OS X
- use coverage.py, pytest-cov and codecov.io for test coverage support
I forgot to list some things already implemented in 0.23.0; here they are:
New features:
- efficient archive list from manifest, meaning a big speedup for slow
repo connections and "list <repo>", "delete <repo>", "prune" (attic #242,
attic #167)
- big speedup for chunks cache sync (esp. for slow repo connections), fixes #18
- hashindex: improve error messages
Other changes:
- explicitly specify binary mode to open binary files
- some easy micro optimizations
Version 0.23.0 (2015-06-11)
---------------------------
Incompatible changes (compared to attic, fork related):
- changed sw name and cli command to "borg", updated docs
- package name (and name in urls) uses "borgbackup" to have fewer collisions
- changed repo / cache internal magic strings from ATTIC* to BORG*,
changed cache location to .cache/borg/ - this means that it currently won't
accept attic repos (see issue #21 about improving that)
Bug fixes:
- avoid defect python-msgpack releases, fixes attic #171, fixes attic #185
- fix traceback when trying to do unsupported passphrase change, fixes attic #189
- datetime does not like the year 10.000, fixes attic #139
- fix "info" all archives stats, fixes attic #183
- fix parsing with missing microseconds, fixes attic #282
- fix misleading hint the fuse ImportError handler gave, fixes attic #237
- check unpacked data from RPC for tuple type and correct length, fixes attic #127
- fix Repository._active_txn state when lock upgrade fails
- give specific path to xattr.is_enabled(), disable symlink setattr call that
always fails
- fix test setup for 32bit platforms, partial fix for attic #196
- upgraded versioneer, PEP440 compliance, fixes attic #257
New features:
- less memory usage: add global option --no-cache-files
- check --last N (only check the last N archives)
- check: sort archives in reverse time order
- rename repo::oldname newname (rename repository)
- create -v output more informative
- create --progress (backup progress indicator)
- create --timestamp (utc string or reference file/dir)
- create: if "-" is given as path, read binary from stdin
- extract: if --stdout is given, write all extracted binary data to stdout
- extract --sparse (simple sparse file support)
- extra debug information for 'fread failed'
- delete <repo> (deletes whole repo + local cache)
- FUSE: reflect deduplication in allocated blocks
- only allow whitelisted RPC calls in server mode
- normalize source/exclude paths before matching
- use posix_fadvise not to spoil the OS cache, fixes attic #252
- toplevel error handler: show tracebacks for better error analysis
- sigusr1 / sigint handler to print current file infos - attic PR #286
- RPCError: include the exception args we get from remote
Other changes:
- source: misc. cleanups, pep8, style
- docs and faq improvements, fixes, updates
- cleanup crypto.pyx, make it easier to adapt to other AES modes
- do os.fsync like recommended in the python docs
- source: Let chunker optionally work with os-level file descriptor.
- source: Linux: remove duplicate os.fsencode calls
- source: refactor _open_rb code a bit, so it is more consistent / regular
- source: refactor indicator (status) and item processing
- source: use py.test for better testing, flake8 for code style checks
- source: fix tox >=2.0 compatibility (test runner)
- pypi package: add python version classifiers, add FreeBSD to platforms
Attic Changelog
---------------
Here you can see the full list of changes between each Attic release until Borg
forked from Attic:
Version 0.17
~~~~~~~~~~~~
(bugfix release, released on X)
- Fix hashindex ARM memory alignment issue (#309)
- Improve hashindex error messages (#298)
Version 0.16
~~~~~~~~~~~~
(bugfix release, released on May 16, 2015)
- Fix typo preventing the security confirmation prompt from working (#303)
- Improve handling of systems with improperly configured file system encoding (#289)
- Fix "All archives" output for attic info. (#183)
- More user friendly error message when repository key file is not found (#236)
- Fix parsing of iso 8601 timestamps with zero microseconds (#282)
Version 0.15
~~~~~~~~~~~~
(bugfix release, released on Apr 15, 2015)
- xattr: Be less strict about unknown/unsupported platforms (#239)
- Reduce repository listing memory usage (#163).
- Fix BrokenPipeError for remote repositories (#233)
- Fix incorrect behavior with two character directory names (#265, #268)
- Require approval before accessing relocated/moved repository (#271)
- Require approval before accessing previously unknown unencrypted repositories (#271)
- Fix issue with hash index files larger than 2GB.
- Fix Python 3.2 compatibility issue with noatime open() (#164)
- Include missing pyx files in dist files (#168)
Version 0.14
~~~~~~~~~~~~
(feature release, released on Dec 17, 2014)
- Added support for stripping leading path segments (#95)
"attic extract --strip-segments X"
- Add workaround for old Linux systems without acl_extended_file_no_follow (#96)
- Add MacPorts' path to the default openssl search path (#101)
- HashIndex improvements, eliminates unnecessary IO on low memory systems.
- Fix "Number of files" output for attic info. (#124)
- limit create file permissions so files aren't read while restoring
- Fix issue with empty xattr values (#106)
Version 0.13
~~~~~~~~~~~~
(feature release, released on Jun 29, 2014)
- Fix sporadic "Resource temporarily unavailable" when using remote repositories
- Reduce file cache memory usage (#90)
- Faster AES encryption (utilizing AES-NI when available)
- Experimental Linux, OS X and FreeBSD ACL support (#66)
- Added support for backup and restore of BSDFlags (OSX, FreeBSD) (#56)
- Fix bug where xattrs on symlinks were not correctly restored
- Added cachedir support. CACHEDIR.TAG compatible cache directories
can now be excluded using ``--exclude-caches`` (#74)
- Fix crash on extreme mtime timestamps (year 2400+) (#81)
- Fix Python 3.2 specific lockf issue (EDEADLK)
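The ``--exclude-caches`` entry above relies on the Cache Directory Tagging Standard: a directory is skipped when it contains a ``CACHEDIR.TAG`` file starting with a fixed signature. A minimal detection sketch (not Attic's actual code):

```python
import os

# Fixed signature required at the start of a CACHEDIR.TAG file,
# per the Cache Directory Tagging Standard.
CACHEDIR_TAG_SIGNATURE = b"Signature: 8a477f597d28d172789f06886806bc55"

def is_cachedir(path):
    """Return True if path contains a valid CACHEDIR.TAG marker file."""
    tag = os.path.join(path, "CACHEDIR.TAG")
    try:
        with open(tag, "rb") as f:
            return f.read(len(CACHEDIR_TAG_SIGNATURE)) == CACHEDIR_TAG_SIGNATURE
    except OSError:
        return False
```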
Version 0.12
~~~~~~~~~~~~
(feature release, released on April 7, 2014)
- Python 3.4 support (#62)
- Various documentation improvements and a new style
- ``attic mount`` now supports mounting an entire repository, not only
individual archives (#59)
- Added option to restrict remote repository access to specific path(s):
``attic serve --restrict-to-path X`` (#51)
- Include "all archives" size information in "--stats" output. (#54)
- Added ``--stats`` option to ``attic delete`` and ``attic prune``
- Fixed bug where ``attic prune`` used UTC instead of the local time zone
when determining which archives to keep.
- Switch to SI units (powers of 1000 instead of 1024) when printing file sizes
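For illustration, the difference between SI and binary units can be sketched as follows (a hypothetical helper, not Attic's implementation):

```python
def format_size(n, si=True):
    """Format a byte count using SI units (powers of 1000) or binary units (powers of 1024)."""
    base = 1000 if si else 1024
    units = ["B", "kB", "MB", "GB", "TB"] if si else ["B", "KiB", "MiB", "GiB", "TiB"]
    size = float(n)
    for unit in units[:-1]:
        if size < base:
            return f"{size:.2f} {unit}"
        size /= base
    return f"{size:.2f} {units[-1]}"

print(format_size(2_500_000))            # SI: 2.50 MB
print(format_size(2_500_000, si=False))  # binary: 2.38 MiB
```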
Version 0.11
~~~~~~~~~~~~
(feature release, released on March 7, 2014)
- New "check" command for repository consistency checking (#24)
- Documentation improvements
- Fix exception during "attic create" with repeated files (#39)
- New "--exclude-from" option for attic create/extract/verify.
- Improved archive metadata deduplication.
- "attic verify" has been deprecated. Use "attic extract --dry-run" instead.
- "attic prune --hourly|daily|..." has been deprecated.
Use "attic prune --keep-hourly|daily|..." instead.
- Ignore xattr errors during "extract" if not supported by the filesystem. (#46)
Version 0.10
~~~~~~~~~~~~
(bugfix release, released on Jan 30, 2014)
- Fix deadlock when extracting 0 sized files from remote repositories
- "--exclude" wildcard patterns are now properly applied to the full path
not just the file name part (#5).
- Make source code endianness agnostic (#1)
Version 0.9
~~~~~~~~~~~
(feature release, released on Jan 23, 2014)
- Remote repository speed and reliability improvements.
- Fix sorting of segment names to ignore NFS left over files. (#17)
- Fix incorrect display of time (#13)
- Improved error handling / reporting. (#12)
- Use fcntl() instead of flock() when locking repository/cache. (#15)
- Let ssh figure out port/user if not specified so we don't override .ssh/config (#9)
- Improved libcrypto path detection (#23).
Version 0.8.1
~~~~~~~~~~~~~
(bugfix release, released on Oct 4, 2013)
- Fix segmentation fault issue.
Version 0.8
~~~~~~~~~~~
(feature release, released on Oct 3, 2013)
- Fix xattr issue when backing up sshfs filesystems (#4)
- Fix issue with excessive index file size (#6)
- Support access of read only repositories.
- New syntax to enable repository encryption:
attic init --encryption="none|passphrase|keyfile".
- Detect and abort if repository is older than the cache.
Version 0.7
~~~~~~~~~~~
(feature release, released on Aug 5, 2013)
- Ported to FreeBSD
- Improved documentation
- Experimental: Archives mountable as FUSE filesystems.
- The "user." prefix is no longer stripped from xattrs on Linux
Version 0.6.1
~~~~~~~~~~~~~
(bugfix release, released on July 19, 2013)
- Fixed an issue where mtime was not always correctly restored.
Version 0.6
~~~~~~~~~~~
First public release on July 9, 2013
@@ -1,7 +1,7 @@
-# Documentation build configuration file, created by
+# documentation build configuration file, created by
 # sphinx-quickstart on Sat Sep 10 18:18:25 2011.
 #
-# This file is execfile()d with the current directory set to its containing directory.
+# This file is execfile()d with the current directory set to its containing dir.
 #
 # Note that not all possible configuration values are present in this
 # autogenerated file.
@@ -12,164 +12,167 @@
 # If extensions (or modules to document with autodoc) are in another directory,
 # add these directories to sys.path here. If the directory is relative to the
 # documentation root, use os.path.abspath to make it absolute, like shown here.
-import sys
-import os
-sys.path.insert(0, os.path.abspath("../src"))
+import sys, os
+sys.path.insert(0, os.path.abspath('../src'))
 from borg import __version__ as sw_version
 # -- General configuration -----------------------------------------------------
 # If your documentation needs a minimal Sphinx version, state it here.
-# needs_sphinx = '1.0'
+#needs_sphinx = '1.0'
 # Add any Sphinx extension module names here, as strings. They can be extensions
 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = []
 # Add any paths that contain templates here, relative to this directory.
-templates_path = ["_templates"]
+templates_path = ['_templates']
 # The suffix of source filenames.
-source_suffix = ".rst"
+source_suffix = '.rst'
 # The encoding of source files.
-# source_encoding = 'utf-8-sig'
+#source_encoding = 'utf-8-sig'
 # The master toctree document.
-master_doc = "index"
+master_doc = 'index'
 # General information about the project.
-project = "Borg - Deduplicating Archiver"
-copyright = "2010-2014 Jonas Borgström, 2015-2025 The Borg Collective (see AUTHORS file)"
+project = 'Borg - Deduplicating Archiver'
+copyright = '2010-2014 Jonas Borgström, 2015-2022 The Borg Collective (see AUTHORS file)'
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
 # built documents.
 #
 # The short X.Y version.
-split_char = "+" if "+" in sw_version else "-"
+split_char = '+' if '+' in sw_version else '-'
 version = sw_version.split(split_char)[0]
 # The full version, including alpha/beta/rc tags.
 release = version
-suppress_warnings = ["image.nonlocal_uri"]
+suppress_warnings = ['image.nonlocal_uri']
 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
-# language = None
+#language = None
 # There are two options for replacing |today|: either, you set today to some
 # non-false value, then it is used:
-# today = ''
+#today = ''
 # Else, today_fmt is used as the format for a strftime call.
-today_fmt = "%Y-%m-%d"
+today_fmt = '%Y-%m-%d'
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
-exclude_patterns = ["_build"]
+exclude_patterns = ['_build']
 # The reST default role (used for this markup: `text`) to use for all documents.
-# default_role = None
+#default_role = None
 # The Borg docs contain no or very little Python docs.
-# Thus, the primary domain is RST.
-primary_domain = "rst"
+# Thus, the primary domain is rst.
+primary_domain = 'rst'
 # If true, '()' will be appended to :func: etc. cross-reference text.
-# add_function_parentheses = True
+#add_function_parentheses = True
 # If true, the current module name will be prepended to all description
 # unit titles (such as .. function::).
-# add_module_names = True
+#add_module_names = True
 # If true, sectionauthor and moduleauthor directives will be shown in the
 # output. They are ignored by default.
-# show_authors = False
+#show_authors = False
 # The name of the Pygments (syntax highlighting) style to use.
-pygments_style = "sphinx"
+pygments_style = 'sphinx'
 # A list of ignored prefixes for module index sorting.
-# modindex_common_prefix = []
+#modindex_common_prefix = []
 # -- Options for HTML output ---------------------------------------------------
 # The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of built-in themes.
+# a list of builtin themes.
 import guzzle_sphinx_theme
 html_theme_path = guzzle_sphinx_theme.html_theme_path()
-html_theme = "guzzle_sphinx_theme"
+html_theme = 'guzzle_sphinx_theme'
 def set_rst_settings(app):
-    app.env.settings.update({"field_name_limit": 0, "option_limit": 0})
+    app.env.settings.update({
+        'field_name_limit': 0,
+        'option_limit': 0,
+    })
 def setup(app):
-    app.setup_extension("sphinxcontrib.jquery")
-    app.add_css_file("css/borg.css")
-    app.connect("builder-inited", set_rst_settings)
+    app.add_css_file('css/borg.css')
+    app.connect('builder-inited', set_rst_settings)
 # Theme options are theme-specific and customize the look and feel of a theme
 # further. For a list of options available for each theme, see the
 # documentation.
-html_theme_options = {"project_nav_name": "Borg %s" % version}
+html_theme_options = {
+    'project_nav_name': 'Borg %s' % version,
+}
 # Add any paths that contain custom themes here, relative to this directory.
-# html_theme_path = ['_themes']
+#html_theme_path = ['_themes']
 # The name for this set of Sphinx documents. If None, it defaults to
 # "<project> v<release> documentation".
-# html_title = None
+#html_title = None
 # A shorter title for the navigation bar. Default is the same as html_title.
-# html_short_title = None
+#html_short_title = None
 # The name of an image file (relative to this directory) to place at the top
 # of the sidebar.
-html_logo = "_static/logo.svg"
+html_logo = '_static/logo.svg'
 # The name of an image file (within the static path) to use as favicon of the
 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
 # pixels large.
-html_favicon = "_static/favicon.ico"
+html_favicon = '_static/favicon.ico'
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ["borg_theme"]
-html_extra_path = ["../src/borg/paperkey.html"]
+html_static_path = ['borg_theme']
+html_extra_path = ['../src/borg/paperkey.html']
 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
 # using the given strftime format.
-html_last_updated_fmt = "%Y-%m-%d"
+html_last_updated_fmt = '%Y-%m-%d'
 # If true, SmartyPants will be used to convert quotes and dashes to
 # typographically correct entities.
 html_use_smartypants = True
-smartquotes_action = "qe"  # no D in there means "do not transform -- and ---"
+smartquotes_action = 'qe'  # no D in there means "do not transform -- and ---"
 # Custom sidebar templates, maps document names to template names.
-html_sidebars = {"**": ["logo-text.html", "searchbox.html", "globaltoc.html"]}
+html_sidebars = {
+    '**': ['logo-text.html', 'searchbox.html', 'globaltoc.html'],
+}
 # Additional templates that should be rendered to pages, maps page names to
 # template names.
-# html_additional_pages = {}
+#html_additional_pages = {}
 # If false, no module index is generated.
-# html_domain_indices = True
+#html_domain_indices = True
 # If false, no index is generated.
 html_use_index = False
 # If true, the index is split into individual pages for each letter.
-# html_split_index = False
+#html_split_index = False
 # If true, links to the reST sources are added to the pages.
 html_show_sourcelink = False
@@ -183,45 +186,57 @@ html_show_copyright = False
 # If true, an OpenSearch description file will be output, and all pages will
 # contain a <link> tag referring to it. The value of this option must be the
 # base URL from which the finished HTML is served.
-# html_use_opensearch = ''
+#html_use_opensearch = ''
 # This is the file name suffix for HTML files (e.g. ".xhtml").
-# html_file_suffix = None
+#html_file_suffix = None
 # Output file base name for HTML help builder.
-htmlhelp_basename = "borgdoc"
+htmlhelp_basename = 'borgdoc'
 # -- Options for LaTeX output --------------------------------------------------
 # Grouping the document tree into LaTeX files. List of tuples
 # (source start file, target name, title, author, documentclass [howto/manual]).
-latex_documents = [("book", "Borg.tex", "Borg Documentation", "The Borg Collective", "manual")]
+latex_documents = [
+    ('book', 'Borg.tex', 'Borg Documentation',
+     'The Borg Collective', 'manual'),
+]
 # The name of an image file (relative to this directory) to place at the top of
 # the title page.
-latex_logo = "_static/logo.pdf"
-latex_elements = {"papersize": "a4paper", "pointsize": "10pt", "figure_align": "H"}
+latex_logo = '_static/logo.pdf'
+latex_elements = {
+    'papersize': 'a4paper',
+    'pointsize': '10pt',
+    'figure_align': 'H',
+}
 # For "manual" documents, if this is true, then toplevel headings are parts,
 # not chapters.
-# latex_use_parts = False
+#latex_use_parts = False
 # If true, show page references after internal links.
-# latex_show_pagerefs = False
+#latex_show_pagerefs = False
 # If true, show URL addresses after external links.
-latex_show_urls = "footnote"
+latex_show_urls = 'footnote'
 # Additional stuff for the LaTeX preamble.
-# latex_preamble = ''
+#latex_preamble = ''
 # Documents to append as an appendix to all manuals.
-latex_appendices = ["support", "resources", "changes", "authors"]
+latex_appendices = [
+    'support',
+    'resources',
+    'changes',
+    'authors',
+]
 # If false, no module index is generated.
-# latex_domain_indices = True
+#latex_domain_indices = True
 # -- Options for manual page output --------------------------------------------
@@ -229,23 +244,21 @@ latex_appendices = ["support", "resources", "changes", "authors"]
 # One entry per manual page. List of tuples
 # (source start file, name, description, authors, manual section).
 man_pages = [
-    (
-        "usage",
-        "borg",
-        "BorgBackup is a deduplicating backup program with optional compression and authenticated encryption.",
-        ["The Borg Collective (see AUTHORS file)"],
-        1,
-    )
+    ('usage', 'borg',
+     'BorgBackup is a deduplicating backup program with optional compression and authenticated encryption.',
+     ['The Borg Collective (see AUTHORS file)'],
+     1),
 ]
 extensions = [
-    "sphinx.ext.extlinks",
-    "sphinx.ext.autodoc",
-    "sphinx.ext.todo",
-    "sphinx.ext.coverage",
-    "sphinx.ext.viewcode",
-    "sphinxcontrib.jquery",  # jquery is not included anymore by default
-    "guzzle_sphinx_theme",  # register the theme as an extension to generate a sitemap.xml
+    'sphinx.ext.extlinks',
+    'sphinx.ext.autodoc',
+    'sphinx.ext.todo',
+    'sphinx.ext.coverage',
+    'sphinx.ext.viewcode',
 ]
-extlinks = {"issue": ("https://github.com/borgbackup/borg/issues/%s", "#%s")}
+extlinks = {
+    'issue': ('https://github.com/borgbackup/borg/issues/%s', '#'),
+    'targz_url': ('https://pypi.python.org/packages/source/b/borgbackup/%%s-%s.tar.gz' % version, None),
+}
@@ -14,4 +14,3 @@ This chapter details deployment strategies for the following scenarios.
    deployment/automated-local
    deployment/image-backup
    deployment/pull-backup
-   deployment/non-root-user
@@ -14,13 +14,13 @@ systemd and udev.
 Overview
 --------
-A udev rule is created to trigger on the addition of block devices. The rule contains a tag
-that triggers systemd to start a one-shot service. The one-shot service executes a script in
+An udev rule is created to trigger on the addition of block devices. The rule contains a tag
+that triggers systemd to start a oneshot service. The oneshot service executes a script in
 the standard systemd service environment, which automatically captures stdout/stderr and
 logs it to the journal.
-The script mounts the added block device if it is a registered backup drive and creates
-backups on it. When done, it optionally unmounts the filesystem and spins the drive down,
+The script mounts the added block device, if it is a registered backup drive, and creates
+backups on it. When done, it optionally unmounts the file system and spins the drive down,
 so that it may be physically disconnected.
 Configuring the system
@@ -29,13 +29,26 @@ Configuring the system
 First, create the ``/etc/backups`` directory (as root).
 All configuration goes into this directory.
-Find out the ID of the partition table of your backup disk (here assumed to be /dev/sdz)::
-
-    lsblk --fs -o +PTUUID /dev/sdz
-
-Then, create ``/etc/backups/80-backup.rules`` with the following content (all on one line)::
-
-    ACTION=="add", SUBSYSTEM=="block", ENV{ID_PART_TABLE_UUID}=="<the PTUUID you just noted>", TAG+="systemd", ENV{SYSTEMD_WANTS}+="automatic-backup.service"
+Then, create ``/etc/backups/40-backup.rules`` with the following content (all on one line)::
+
+    ACTION=="add", SUBSYSTEM=="bdi", DEVPATH=="/devices/virtual/bdi/*",
+    TAG+="systemd", ENV{SYSTEMD_WANTS}="automatic-backup.service"
+
+.. topic:: Finding a more precise udev rule
+
+    If you always connect the drive(s) to the same physical hardware path, e.g. the same
+    eSATA port, then you can make a more precise udev rule.
+
+    Execute ``udevadm monitor`` and connect a drive to the port you intend to use.
+    You should see a flurry of events, find those regarding the `block` subsystem.
+    Pick the event whose device path ends in something similar to a device file name,
+    typically `sdX/sdXY`. Use the event's device path and replace `sdX/sdXY` after the
+    `/block/` part in the path with a star (\*). For example:
+    `DEVPATH=="/devices/pci0000:00/0000:00:11.0/ata3/host2/target2:0:0/2:0:0:0/block/*"`.
+
+    Reboot a few times to ensure that the hardware path does not change: on some motherboards
+    components of it can be random. In these cases you cannot use a more accurate rule,
+    or need to insert additional stars for matching the path.
 The "systemd" tag in conjunction with the SYSTEMD_WANTS environment variable has systemd
 launch the "automatic-backup" service, which we will create next, as the
@@ -47,8 +60,8 @@ launch the "automatic-backup" service, which we will create next, as the
 Type=oneshot
 ExecStart=/etc/backups/run.sh
-Now, create the main backup script, ``/etc/backups/run.sh``. Below is a template;
-modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
+Now, create the main backup script, ``/etc/backups/run.sh``. Below is a template,
+modify it to suit your needs (e.g. more backup sets, dumping databases etc.).
 .. code-block:: bash
@@ -94,10 +107,10 @@ modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
    echo "Disk $uuid is a backup disk"
    partition_path=/dev/disk/by-uuid/$uuid
-   # Mount filesystem if not already done. This assumes that if something is already
-   # mounted at $MOUNTPOINT, it is the backup drive. It will not find the drive if
+   # Mount file system if not already done. This assumes that if something is already
+   # mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
    # it was mounted somewhere else.
-   findmnt $MOUNTPOINT >/dev/null || mount $partition_path $MOUNTPOINT
+   (mount | grep $MOUNTPOINT) || mount $partition_path $MOUNTPOINT
    drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)
    echo "Drive path: $drive"
@@ -106,13 +119,13 @@ modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
    #
    # Options for borg create
-   BORG_OPTS="--stats --one-file-system --compression lz4"
+   BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"
    # Set BORG_PASSPHRASE or BORG_PASSCOMMAND somewhere around here, using export,
    # if encryption is used.
-   # Because no one can answer these questions non-interactively, it is better to
-   # fail quickly instead of hanging.
+   # No one can answer if Borg asks these questions, it is better to just fail quickly
+   # instead of hanging.
    export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
    export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no
@@ -123,16 +136,16 @@ modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
    # This is just an example, change it however you see fit
    borg create $BORG_OPTS \
-     --exclude root/.cache \
-     --exclude var/lib/docker/devicemapper \
+     --exclude /root/.cache \
+     --exclude /var/lib/docker/devicemapper \
      $TARGET::$DATE-$$-system \
      / /boot
-   # /home is often a separate partition/filesystem.
-   # Even if it is not (add --exclude /home above), it probably makes sense
+   # /home is often a separate partition / file system.
+   # Even if it isn't (add --exclude /home above), it probably makes sense
    # to have /home in a separate archive.
    borg create $BORG_OPTS \
-     --exclude 'sh:home/*/.cache' \
+     --exclude 'sh:/home/*/.cache' \
      $TARGET::$DATE-$$-home \
      /home/
@@ -151,20 +164,21 @@ modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
    fi
 Create the ``/etc/backups/autoeject`` file to have the script automatically eject the drive
-after creating the backup. Rename the file to something else (e.g., ``/etc/backups/autoeject-no``)
-when you want to do something with the drive after creating backups (e.g., running checks).
+after creating the backup. Rename the file to something else (e.g. ``/etc/backup/autoeject-no``)
+when you want to do something with the drive after creating backups (e.g running check).
 Create the ``/etc/backups/backup-suspend`` file if the machine should suspend after completing
-the backup. Don't forget to disconnect the device physically before resuming,
+the backup. Don't forget to physically disconnect the device before resuming,
 otherwise you'll enter a cycle. You can also add an option to power down instead.
-Create an empty ``/etc/backups/backup.disks`` file, in which you will register your backup drives.
+Create an empty ``/etc/backups/backup.disks`` file, you'll register your backup drives
+there.
-Finally, enable the udev rules and services:
+The last part is to actually enable the udev rules and services:
 .. code-block:: bash
-   ln -s /etc/backups/80-backup.rules /etc/udev/rules.d/80-backup.rules
+   ln -s /etc/backups/40-backup.rules /etc/udev/rules.d/40-backup.rules
    ln -s /etc/backups/automatic-backup.service /etc/systemd/system/automatic-backup.service
    systemctl daemon-reload
    udevadm control --reload
@@ -173,13 +187,13 @@ Adding backup hard drives
 -------------------------
 Connect your backup hard drive. Format it, if not done already.
-Find the UUID of the filesystem on which backups should be stored::
+Find the UUID of the file system that backups should be stored on::
    lsblk -o+uuid,label
-Record the UUID in the ``/etc/backups/backup.disks`` file.
-Mount the drive at /mnt/backup.
+Note the UUID into the ``/etc/backup/backup.disks`` file.
+Mount the drive to /mnt/backup.
 Initialize a Borg repository at the location indicated by ``TARGET``::
@@ -197,14 +211,14 @@ See backup logs using journalctl::
 Security considerations
 -----------------------
-The script as shown above will mount any filesystem with a UUID listed in
-``/etc/backups/backup.disks``. The UUID check is a safety/annoyance-reduction
+The script as shown above will mount any file system with an UUID listed in
+``/etc/backup/backup.disks``. The UUID check is a safety / annoyance-reduction
 mechanism to keep the script from blowing up whenever a random USB thumb drive is connected.
-It is not meant as a security mechanism. Mounting filesystems and reading repository
-data exposes additional attack surfaces (kernel filesystem drivers,
-possibly userspace services, and Borg itself). On the other hand, someone
+It is not meant as a security mechanism. Mounting file systems and reading repository
+data exposes additional attack surfaces (kernel file system drivers,
+possibly user space services and Borg itself). On the other hand, someone
 standing right next to your computer can attempt a lot of attacks, most of which
-are easier to do than, e.g., exploiting filesystems (installing a physical keylogger,
+are easier to do than e.g. exploiting file systems (installing a physical key logger,
 DMA attacks, stealing the machine, ...).
 Borg ensures that backups are not created on random drives that "just happen"
@ -1,48 +1,47 @@
.. include:: ../global.rst.inc
.. highlight:: none

.. _central-backup-server:

Central repository server with Ansible or Salt
==============================================

This section gives an example of how to set up a Borg repository server for multiple
clients.

Machines
--------

This section uses multiple machines, referred to by their
respective fully qualified domain names (FQDNs).

* The backup server: `backup01.srv.local`
* The clients:

  - John Doe's desktop: `johndoe.clnt.local`
  - Web server 01: `web01.srv.local`
  - Application server 01: `app01.srv.local`

User and group
--------------

The repository server should have a single UNIX user for all the clients.
Recommended user and group with additional settings:

* User: `backup`
* Group: `backup`
* Shell: `/bin/bash` (or another shell capable of running the `borg serve` command)
* Home: `/home/backup`

Most clients should initiate a backup as the root user to capture all
users, groups, and permissions (e.g., when backing up `/home`).
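
The user and group above can be created with standard tools. A minimal sketch
(user name, group name, home directory and shell as assumed above; to be run
once as root on the repository server)::

    groupadd backup
    useradd --create-home --home-dir /home/backup --gid backup --shell /bin/bash backup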
Folders
-------

The following directory layout is suggested on the repository server:

* User home directory, /home/backup
* Repositories path (storage pool): /home/backup/repos
* Clients restricted paths (`/home/backup/repos/<client fqdn>`):

  - johndoe.clnt.local: `/home/backup/repos/johndoe.clnt.local`
  - web01.srv.local: `/home/backup/repos/web01.srv.local`
but no other directories. You can allow a client to access several separate directories,
which could make sense if multiple machines belong to one person, who should then have access to all the
backups of their machines.

Only one SSH key per client is allowed. Keys are added for ``johndoe.clnt.local``, ``web01.srv.local`` and
``app01.srv.local``. They will access the backup under a single UNIX user account as
``backup@backup01.srv.local``. Every key in ``$HOME/.ssh/authorized_keys`` has a
forced command and restrictions applied, as shown below:

::
.. note:: The text shown above needs to be written on a single line!

The options added to the key perform the following:

1. Change working directory
2. Run ``borg serve`` restricted to the client base path
3. Restrict SSH and do not allow anything that imposes a security risk

Because of the ``cd`` command, the server automatically changes the current
working directory. The client then does not need to know the absolute
or relative remote repository path and can directly access the repositories at
``ssh://<user>@<host>/./<repo>``.

.. note:: The setup above ignores all client-given command line parameters
   that are normally appended to the `borg serve` command.
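
As an illustration, such a key entry could look like the following sketch
(written on a single line in the real file; the key material and comment are
shortened placeholders, and the paths follow the layout assumed above)::

    command="cd /home/backup/repos/johndoe.clnt.local; borg serve --restrict-to-path /home/backup/repos/johndoe.clnt.local",restrict ssh-rsa AAAA... johndoe@johndoe.clnt.local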
Client
------
The client needs to initialize the `pictures` repository like this::

    borg init ssh://backup@backup01.srv.local/./pictures

Or with the full path (this should not be used in practice; it is only for demonstration purposes).
The server automatically changes the current working directory to the `<client fqdn>` directory.

::

    borg init ssh://backup@backup01.srv.local/home/backup/repos/johndoe.clnt.local/pictures

When `johndoe.clnt.local` tries to access a path outside its restriction, the following error is raised.
John Doe tries to back up into the web01 path:

::
Salt running on a Debian system.
Enhancements
------------

As this section only describes a simple and effective setup, it could be further
enhanced to support (a limited set of) client-supplied commands. A wrapper
for starting `borg serve` could be written. Or borg itself could be enhanced to
autodetect it runs under SSH by checking the `SSH_ORIGINAL_COMMAND` environment

Hosting repositories
====================

This section shows how to provide repository storage securely for users.
Optionally, each user can have a storage quota.

Repositories are accessed through SSH. Each user of the service should
have their own login, which is only able to access that user's files.

Technically, it is possible to have multiple users share one login;
however, separating them is better. Separate logins increase isolation
and provide an additional layer of security and safety for both the
provider and the users.

For example, if a user manages to breach ``borg serve``, they can
only damage their own data (assuming that the system does not have further
vulnerabilities).

Use the standard directory structure of the operating system. Each user
is assigned a home directory, and that user's repositories reside in their
home directory.

The following ``~user/.ssh/authorized_keys`` file is the most important
piece for a correct deployment. It allows the user to log in via
their public key (which must be provided by the user), and restricts
SSH access to safe operations only.
.. warning::

   If this file should be automatically updated (e.g. by a web console),
   pay **utmost attention** to sanitizing user input. Strip all whitespace
   around the user-supplied key, ensure that it **only** contains ASCII
   with no control characters and that it consists of three parts separated
   by a single space. Ensure that no newlines are contained within the key.
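
As a sketch of such sanitization (the function name and exact policy are
illustrative, not part of Borg), a small shell helper could reject anything
that does not look like a plain three-field public key line:

.. code-block:: bash

   # Illustrative sanitizer for a user-supplied public key line.
   # Accepts exactly three single-space-separated fields of printable ASCII.
   validate_pubkey() {
       key=$(printf '%s' "$1" | tr -d '\r\n')   # remove any embedded newlines
       # strip leading/trailing whitespace
       key=$(printf '%s' "$key" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
       # reject non-ASCII and control characters (anything outside space..tilde)
       printf '%s' "$key" | LC_ALL=C grep -q '[^ -~]' && return 1
       # require exactly three fields: key type, base64 blob, comment
       printf '%s' "$key" | grep -Eq '^[A-Za-z0-9@._-]+ [A-Za-z0-9+/=]+ [^ ]+$' || return 1
       printf '%s\n' "$key"
   }

Only a line that passes this check would then be appended to
``authorized_keys`` together with the forced command shown below.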
The ``restrict`` keyword enables all restrictions, i.e. disables port, agent
and X11 forwarding, as well as disabling PTY allocation and execution of ~/.ssh/rc.
If any future restriction capabilities are added to authorized_keys
files they will be included in this set.

The ``command`` keyword forces execution of the specified command
upon login. This must be ``borg serve``. The ``--restrict-to-repository``
option permits access to exactly **one** repository. It can be given
multiple times to permit access to more than one repository.
The repository may not exist yet; it can be initialized by the user,
which allows for encryption.
**Storage quotas** can be enabled by adding the ``--storage-quota`` option
to the ``borg serve`` command line::

    restrict,command="borg serve --storage-quota 20G ..." ...

The storage quotas of repositories are completely independent. If a
client is able to access multiple repositories, each repository
can be filled to the specified quota.

If storage quotas are used, ensure that all deployed Borg releases
support storage quotas.

Refer to :ref:`internals_storage_quota` for more details on storage quotas.
**Special case: append-only repositories**

Running ``borg init`` via a ``borg serve --append-only`` server will **not**
create a repository that is configured to be append-only by its repository
config.
However, ``--append-only`` arguments in ``authorized_keys`` will override the
repository config, so append-only mode can be enabled on a key-by-key
basis.
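
For example, append-only access for a single client's key might be granted
with an entry like this sketch (the key material, comment and repository path
are placeholders)::

    command="borg serve --append-only --restrict-to-repository /home/backup/repos/johndoe.clnt.local/pictures",restrict ssh-rsa AAAA... johndoe@johndoe.clnt.local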
Refer to the `sshd(8) <https://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/sshd.8>`_
man page for more details on SSH options.

See also :ref:`borg_serve`

Backing up entire disk images
=============================

Backing up disk images can still be efficient with Borg because its `deduplication`_
technique makes sure only the modified parts of the file are stored. Borg also has
optional simple sparse file support for extraction.
It is of utmost importance to pin down the disk you want to back up.
Use the disk's SERIAL for that.

.. code-block:: bash
   # You can find the short disk serial by:
   # udevadm info --query=property --name=nvme1n1 | grep ID_SERIAL_SHORT | cut -d '=' -f 2

   export BORG_REPO=/path/to/repo
   DISK_SERIAL="7VS0224F"
   DISK_ID=$(readlink -f /dev/disk/by-id/*"${DISK_SERIAL}")  # Returns /dev/nvme1n1
   mapfile -t PARTITIONS < <(lsblk -o NAME,TYPE -p -n -l "$DISK_ID" | awk '$2 == "part" {print $1}')

   echo "Partitions of $DISK_ID:"
   echo "${PARTITIONS[@]}"
   echo "Disk Identifier: $DISK_ID"

   # Use the following line to perform a Borg backup for the full disk:
   # borg create --read-special disk-backup "$DISK_ID"
   # Use the following to perform a Borg backup for all partitions of the disk
   # borg create --read-special partitions-backup "${PARTITIONS[@]}"

   # Example output:
   # Partitions of /dev/nvme1n1:
   # /dev/nvme1n1p1
   # /dev/nvme1n1p2
   # /dev/nvme1n1p3
   # Disk Identifier: /dev/nvme1n1
   # borg create --read-special disk-backup /dev/nvme1n1
   # borg create --read-special partitions-backup /dev/nvme1n1p1 /dev/nvme1n1p2 /dev/nvme1n1p3
Decreasing the size of image backups
------------------------------------

Disk images are as large as the full disk when uncompressed and might not get much
smaller post-deduplication after heavy use because virtually all filesystems do not
actually delete file data on disk but instead delete the filesystem entries referencing
the data. Therefore, if a disk nears capacity and files are deleted again, the change
will barely decrease the space it takes up when compressed and deduplicated. Depending
deduplicating. For backup, save the disk header and the contents of each partition::

    HEADER_SIZE=$(sfdisk -lo Start $DISK | grep -A1 -P 'Start$' | tail -n1 | xargs echo)
    PARTITIONS=$(sfdisk -lo Device,Type $DISK | sed -e '1,/Device\s*Type/d')
    dd if=$DISK count=$HEADER_SIZE | borg create --repo repo hostname-partinfo -
    echo "$PARTITIONS" | grep NTFS | cut -d' ' -f1 | while read x; do
        PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
        ntfsclone -so - $x | borg create --repo repo hostname-part$PARTNUM -
    done
    # to back up non-NTFS partitions as well:
    echo "$PARTITIONS" | grep -v NTFS | cut -d' ' -f1 | while read x; do
        PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
        borg create --read-special --repo repo hostname-part$PARTNUM $x
    done
Restoration is a similar process::

    borg extract --stdout --repo repo hostname-partinfo | dd of=$DISK && partprobe
    PARTITIONS=$(sfdisk -lo Device,Type $DISK | sed -e '1,/Device\s*Type/d')
    borg list --format {archive}{NL} repo | grep 'part[0-9]*$' | while read x; do
        PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
        PARTITION=$(echo "$PARTITIONS" | grep -E "$DISKp?$PARTNUM" | head -n1)
        if echo "$PARTITION" | cut -d' ' -f2- | grep -q NTFS; then
            borg extract --stdout --repo repo $x | ntfsclone -rO $(echo "$PARTITION" | cut -d' ' -f1) -
        else
            borg extract --stdout --repo repo $x | dd of=$(echo "$PARTITION" | cut -d' ' -f1)
        fi
    done
except it works in place, zeroing the original partition. This makes the backup
a bit simpler::

    sfdisk -lo Device,Type $DISK | sed -e '1,/Device\s*Type/d' | grep Linux | cut -d' ' -f1 | xargs -n1 zerofree
    borg create --read-special --repo repo hostname-disk $DISK

Because the partitions were zeroed in place, restoration is only one command::

    borg extract --stdout --repo repo hostname-disk | dd of=$DISK

.. note:: The "traditional" way to zero out space on a partition, especially one already
   mounted, is simply to ``dd`` from ``/dev/zero`` to a temporary file and delete
   it. This is ill-advised for the reasons mentioned in the ``zerofree`` man page:

   - it is slow.
   - it makes the disk image (temporarily) grow to its maximal extent.
   - it (temporarily) uses all free space on the disk, so other concurrent write actions may fail.

Virtual machines
----------------
regular file to Borg with the same issues as regular files when it comes to concurrent
reading and writing from the same file.

For backing up live VMs use filesystem snapshots on the VM host, which establishes
crash-consistency for the VM images. This means that with most filesystems (that
are journaling) the FS will always be fine in the backup (but may need a journal
replay to become accessible).
to reach application-consistency; it's a broad and complex issue that cannot be covered
in entirety here.
Hypervisor snapshots capturing most of the VM's state can also be used for backups and
can be a better alternative to pure filesystem-based snapshots of the VM's disk, since
no state is lost. Depending on the application this can be the easiest and most reliable
way to create application-consistent backups.

Borg does not intend to address these issues due to their huge complexity and
platform/software dependency. Combining Borg with the mechanisms provided by the platform
(snapshots, hypervisor features) will be the best approach to start tackling them.

.. include:: ../global.rst.inc
.. highlight:: none

.. _non_root_user:

================================
Backing up using a non-root user
================================

This section describes how to run Borg as a non-root user and still be able to
back up every file on the system.

Normally, Borg is run as the root user to bypass all filesystem permissions and
be able to read all files. However, in theory this also allows Borg to modify or
delete files on your system (for example, in case of a bug).

To eliminate this possibility, we can run Borg as a non-root user and give it read-only
permissions to all files on the system.

Using Linux capabilities inside a systemd service
=================================================

One way to do so is to use Linux `capabilities
<https://man7.org/linux/man-pages/man7/capabilities.7.html>`_ within a systemd
service.

Linux capabilities allow us to grant parts of the root user's privileges to
a non-root user. This works on a per-thread level and does not grant permissions
to the non-root user as a whole.

For this, we need to run the backup script from a systemd service and use the `AmbientCapabilities
<https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#AmbientCapabilities=>`_
option added in systemd 229.

A very basic unit file would look like this:

::
   [Unit]
   Description=Borg Backup

   [Service]
   Type=oneshot
   User=borg
   ExecStart=/usr/local/sbin/backup.sh
   AmbientCapabilities=CAP_DAC_READ_SEARCH
The ``CAP_DAC_READ_SEARCH`` capability gives Borg read-only access to all files and directories on the system.
This service can then be started manually using ``systemctl start``, a systemd timer or other methods.
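
For example, a matching timer unit could trigger the service daily (the unit
names and schedule here are assumptions, not requirements)::

   [Unit]
   Description=Daily Borg Backup

   [Timer]
   OnCalendar=daily
   Persistent=true

   [Install]
   WantedBy=timers.target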
Restore considerations
======================

Use the root user when restoring files. If you use the non-root user, ``borg extract`` will
change ownership of all restored files to the non-root user. Using ``borg mount`` will not allow the
non-root user to access files it would not be able to access on the system itself.

Other than that, you can use the same restore process you would use when running the backup as root.

.. warning::

   When using a local repository and running Borg commands as root, make sure to use only commands that do not
   modify the repository itself, such as extract or mount. Modifying the repository as root will break it for the
   non-root user, since some files inside the repository will then be owned by root.

Backing up in pull mode
=======================

Typically the Borg client connects to a backup server using SSH as a transport
when initiating a backup. This is referred to as push mode.

However, if you require the backup server to initiate the connection, or prefer
it to initiate the backup run, one of the following workarounds is required to
allow such a pull-mode setup.

A common use case for pull mode is to back up a remote server to a local personal
computer.

SSHFS
=====

Assume you have a pull backup system set up with Borg, where a backup server
pulls data from the target via SSHFS. In this mode, the backup client's filesystem
is mounted remotely on the backup server. Pull mode is even possible if
the SSH connection must be established by the client via a remote tunnel. Other
network file systems like NFS or SMB could be used as well, but SSHFS is very
simple to set up and probably the most secure one.
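
Such a mount could look like this sketch (host name and mount point are
placeholders); ``-o idmap=user`` translates the remote user's UID/GID to the
local user's::

    mkdir -p /mnt/sshfs
    sshfs -o idmap=user root@host:/ /mnt/sshfs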
There are some restrictions caused by SSHFS. For example, unless you define UID
and GID mappings when mounting via ``sshfs``, owners and groups of the mounted
filesystem will probably change, and you may not have access to those files if
Borg is not run with root privileges.

SSHFS is a FUSE filesystem and uses the SFTP protocol, so there may also be
unsupported features that the actual implementations of SSHFS, libfuse, and
SFTP on the backup server do not support, like filename encodings, ACLs, xattrs,
or flags. Therefore, there is no guarantee that you can restore a system
completely in every aspect from such a backup.
.. warning::

   To mount the client's root filesystem you will need root access to the
   client. This contradicts the usual threat model of Borg, where
   clients do not need to trust the backup server (data is encrypted). In pull
   mode the server (when logged in as root) could cause unlimited damage to the
   client. Therefore, pull mode should be used only with servers you fully
   trust!

.. warning::

   Additionally, while chrooted into the client's root filesystem,
   code from the client will be executed. Therefore, you should do this only when
   you fully trust the client.

.. warning::
create the backup, retaining the original paths, excluding the repository:

::

    borg create --exclude borgrepo --files-cache ctime,size --repo /borgrepo archive /
For the sake of simplicity only ``borgrepo`` is excluded here. You may want to
set up an exclude file with additional files and folders to be excluded. Also
note that we have to modify Borg's file change detection behaviour – SSHFS
cannot guarantee stable inode numbers, so we have to supply the
Now we can run

::

    borg extract --repo /borgrepo archive PATH

to partially restore whatever we like. Finally, do the clean-up:

::
and extract a backup, utilizing the ``--numeric-ids`` option:

::

    sshfs root@host:/ /mnt/sshfs
    cd /mnt/sshfs
    borg extract --numeric-ids --repo /path/to/repo archive
    cd ~
    umount /mnt/sshfs
directly extract it without the need of mounting with SSHFS:

::

    borg export-tar --repo /path/to/repo archive - | ssh root@host 'tar -C / -x'

Note that in this scenario the tar format is the limiting factor – it cannot
restore all the advanced features that BorgBackup supports. See
socat
=====

In this setup an SSH connection from the backup server to the client is
established that uses SSH reverse port forwarding to tunnel data
transparently between UNIX domain sockets on the client and server and the socat
tool to connect these with the borg client and server processes, respectively.

The program socat has to be available on the backup server and on the client
to *borg-client* has to have read and write permissions on ``/run/borg``::

On *borg-server*, we have to start the command ``borg serve`` and make its
standard input and output available to a unix socket::

    borg-server:~$ socat UNIX-LISTEN:/run/borg/reponame.sock,fork EXEC:"borg serve --restrict-to-path /path/to/repo"

Socat will wait until a connection is opened. Then socat will execute the
command given, redirecting Standard Input and Output to the unix socket. The
forwarding can do this for us::

    Warning: remote port forwarding failed for listen path /run/borg/reponame.sock

When you are done, you have to remove the socket file manually, otherwise
you may see an error like this when trying to execute borg commands::

    Remote: YYYY/MM/DD HH:MM:SS socat[XXX] E connect(5, AF=1 "/run/borg/reponame.sock", 13): Connection refused
ignore all arguments intended for the SSH command.

All Borg commands can now be executed on *borg-client*. For example to create a
backup execute the ``borg create`` command::

    borg-client:~$ borg create --repo ssh://borg-server/path/to/repo archive /path_to_backup

When automating backup creation, the
interactive ssh session may seem inappropriate. An alternative way of creating
@ -312,7 +312,7 @@ a backup may be the following command::
      borgc@borg-client \
      borg create \
      --rsh "sh -c 'exec socat STDIO UNIX-CONNECT:/run/borg/reponame.sock'" \
      --repo ssh://borg-server/path/to/repo archive /path_to_backup \
      ';' rm /run/borg/reponame.sock

This command also automatically removes the socket file after the ``borg
@ -350,7 +350,7 @@ dedicated ssh key:
    borgs@borg-server$ install -m 700 -d ~/.ssh/
    borgs@borg-server$ ssh-keygen -N '' -t rsa -f ~/.ssh/borg-client_key
    borgs@borg-server$ { echo -n 'command="borg serve --restrict-to-repo ~/repo",restrict '; cat ~/.ssh/borg-client_key.pub; } >> ~/.ssh/authorized_keys
    borgs@borg-server$ chmod 600 ~/.ssh/authorized_keys

``install -m 700 -d ~/.ssh/``
@ -365,10 +365,12 @@ dedicated ssh key:
Another more complex approach is using a unique ssh key for each pull operation.
This is more secure as it guarantees that the key will not be used for other purposes.

``{ echo -n 'command="borg serve --restrict-to-repo ~/repo",restrict '; cat ~/.ssh/borg-client_key.pub; } >> ~/.ssh/authorized_keys``
    Add borg-client's ssh public key to ~/.ssh/authorized_keys with forced command and restricted mode.
    The borg client is restricted to use one repo at the specified path.
    Commands like *delete*, *prune* and *compact* have to be executed another way, for example directly on *borg-server*
    side or from a privileged, less restricted client (using another authorized_keys entry).

``chmod 600 ~/.ssh/authorized_keys``
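A hypothetical sketch of how ``~/.ssh/authorized_keys`` on *borg-server* could combine such a restricted entry with a separate, less restricted maintenance entry (the key material and key names shown are placeholders, not from this guide)::

    # key used by borg-client for routine pull backups, restricted to one repo:
    command="borg serve --restrict-to-repo ~/repo",restrict ssh-rsa AAAA...client borg-client
    # separate key for maintenance tasks (prune, compact, ...), normally kept offline:
    command="borg serve",restrict ssh-rsa AAAA...admin admin

With two entries, the routine key can stay on the client while the privileged key is only used when maintenance is actually needed.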
@ -415,88 +417,8 @@ Parentheses are not needed when using a dedicated bash process.
    *ssh://borgs@borg-server/~/repo* refers to the repository *repo* within borgs's home directory on *borg-server*.
    *StrictHostKeyChecking=no* is used to add host keys automatically to *~/.ssh/known_hosts* without user intervention.

``kill "${SSH_AGENT_PID}"``
    Kill the ssh-agent with the loaded keys when it is no longer needed.
Remote forwarding
=================
The standard ssh client can create tunnels to forward local ports to a remote server (local forwarding) and also
to forward remote ports to local ports (remote forwarding).

This remote forwarding can be used to allow remote backup clients to access the backup server even if the backup server
cannot be reached by the backup client.
This can even be used in cases where neither the backup server can reach the backup client nor the backup client can
reach the backup server, but some intermediate host can access both.
A schematic approach is as follows
::
    Backup Server (backup@mybackup)          Intermediate Machine (john@myinter)     Backup Client (bob@myclient)

    1. Establish SSH remote forwarding  ----------->  SSH listen on local port
                                                      2. Starting ``borg create`` establishes
    3. SSH forwards to intermediate machine  <------- SSH connection to the local port
    4. Receives backup connection  <-------           and further on to backup server
       via SSH
From the backup client's point of view, the backup is done via SSH to a local port; from the backup server's point of
view, a normal backup is performed via ssh.
In order to achieve this, the following commands can be used to create the remote port forwarding:
1. On machine ``myinter``
``ssh bob@myclient -v -C -R 8022:mybackup:22 -N``
This will listen for ssh-connections on port ``8022`` on ``myclient`` and forward connections to port 22 on ``mybackup``.
You can also remove the need for machine ``myinter`` and create the port forwarding on the backup server directly by
using ``localhost`` instead of ``mybackup``.
2. On machine ``myclient``
``borg create -v --progress --stats ssh://backup@localhost:8022/home/backup/repos/myclient /``
Make sure to use port ``8022`` and ``localhost`` for the repository as this instructs borg on ``myclient`` to use the
remote forwarded ssh connection.
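If you use this setup regularly, the forwarding from step 1 can also be kept in ``~/.ssh/config`` on ``myinter``
instead of retyping the full command each time. A sketch using the example host names from above (the ``Host`` alias
``borg-tunnel`` is made up; the options are standard OpenSSH client options)::

    # ~/.ssh/config on myinter: persistent remote forward towards myclient
    Host borg-tunnel
        HostName myclient
        User bob
        Compression yes
        RemoteForward 8022 mybackup:22

Running ``ssh -N borg-tunnel`` on ``myinter`` then establishes the same remote forwarding as the explicit
``ssh bob@myclient -v -C -R 8022:mybackup:22 -N`` command above.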
SSH Keys
--------
If you want to automate backups when using this method, the ssh ``known_hosts`` and ``authorized_keys`` need to be set up
to allow connections.
Security Considerations
-----------------------
Opening up SSH access this way can pose a security risk as it effectively opens remote access to your
backup server on the client even if it is located outside of your company network.
To reduce the chances of compromise, you should configure a forced command in ``authorized_keys`` to prevent
anyone from performing any other action on the backup server.
This can be done e.g. by adding the following in ``$HOME/.ssh/authorized_keys`` on ``mybackup`` with proper
path and client-fqdn:
::
    command="cd /home/backup/repos/<client fqdn>;borg serve --restrict-to-path /home/backup/repos/<client fqdn>"
All the additional security considerations for borg should be applied, see :ref:`central-backup-server` for some additional
hints.
More information
----------------
See `remote forwarding`_ and the `ssh man page`_ for more information about remote forwarding.
.. _remote forwarding: https://linuxize.com/post/how-to-setup-ssh-tunneling/
.. _ssh man page: https://manpages.debian.org/testing/manpages-de/ssh.1.de.html
@ -8,7 +8,7 @@ Development
This chapter will get you started with Borg development.

Borg is written in Python (with a little bit of Cython and C for
the performance-critical parts).
Contributions
-------------
@ -19,7 +19,7 @@ Some guidance for contributors:
- Discuss changes on the GitHub issue tracker, on IRC or on the mailing list.
- Make your PRs on the ``master`` branch (see `Branching Model`_ for details and exceptions).
- Do clean changesets:
@ -52,14 +52,14 @@ Borg development happens on the ``master`` branch and uses GitHub pull
requests (if you don't have GitHub or don't want to use it you can
send smaller patches via the borgbackup mailing list to the maintainers).

Stable releases are maintained on maintenance branches named ``x.y-maint``, e.g.,
the maintenance branch of the 1.4.x series is ``1.4-maint``.

Most PRs should be filed against the ``master`` branch. Only if an
issue affects **only** a particular maintenance branch a PR should be
filed against it directly.

While discussing/reviewing a PR it will be decided whether the
change should be applied to maintenance branches. Each maintenance
branch has a corresponding *backport/x.y-maint* label, which will then
be applied.
@ -105,110 +105,11 @@ were collected:
Previously (until release 1.0.10) Borg used a `"merge upwards"
<https://git-scm.com/docs/gitworkflows#_merging_upwards>`_ model where
most minor changes and fixes were committed to a maintenance branch
(e.g. 1.0-maint), and the maintenance branch(es) were regularly merged
back into the main development branch. This became more and more
troublesome due to merges growing more conflict-heavy and error-prone.
How to submit a pull request
----------------------------
In order to contribute to Borg, you will need to fork the ``borgbackup/borg``
main repository to your own Github repository. Then clone your Github repository
to your local machine. The instructions for forking and cloning a repository
can be found here:
`<https://docs.github.com/en/get-started/quickstart/fork-a-repo>`_.
Make sure you also fetched the git tags, because without them, ``setuptools-scm``
will run into issues determining the correct borg version. Check if ``git tag``
shows a lot of release tags (version numbers).
If it does not, use ``git fetch --tags`` to fetch them.
To work on your contribution, you first need to decide which branch your pull
request should be against. Often this will be the ``master`` branch (esp. for big /
risky contributions), but it could also be a maintenance branch like e.g.
``1.4-maint`` (esp. for small fixes that should go into the next maintenance
release, e.g. 1.4.x).
Start by checking out the appropriate branch:
::
    git checkout master
It is best practice for a developer to keep the local ``master`` branch as an
up-to-date copy of the upstream ``master`` branch and to always do their own work
in a separate feature or bugfix branch.
This is useful to be able to rebase own branches onto the upstream branches
they were branched from, if necessary.
This also applies to other upstream branches (like e.g. ``1.4-maint``), not
only to ``master``.
Thus, create a new branch now:
::
    git checkout -b MYCONTRIB-master  # choose an appropriate own branch name
Now, work on your contribution in that branch. Use these git commands:
::
    git status   # is there anything that needs to be added?
    git add ...  # if so, add it
    git commit   # finally, commit it. use a descriptive comment.
Then push the changes to your Github repository:
::
    git push --set-upstream origin MYCONTRIB-master
Finally, make a pull request on ``borgbackup/borg`` Github repository against
the appropriate branch (e.g. ``master``) so that your changes can be reviewed.
What to do if work was accidentally started in wrong branch
-----------------------------------------------------------
If you accidentally worked in the ``master`` branch, check out the ``master``
branch and make sure there are no uncommitted changes. Then, create a feature
branch from that, so that your contribution is in a feature branch.
::
    git checkout master
    git checkout -b MYCONTRIB-master
Next, check out the ``master`` branch again. Find the commit hash of the last
commit that was made before you started working on your contribution and perform
a hard reset.
::
    git checkout master
    git log
    git reset --hard THATHASH
Then, update the local ``master`` branch with changes made in the upstream
repository.
::
    git pull borg master
Rebase feature branch onto updated master branch
------------------------------------------------
After updating the local ``master`` branch from upstream, the feature branch
can be checked out and rebased onto (the now up-to-date) ``master`` branch.
::
    git checkout MYCONTRIB-master
    git rebase -i master
Next, check if there are any commits that exist in the feature branch
but not in the ``master`` branch and vice versa. If there are no
conflicts or after resolving them, push your changes to your Github repository.
::
    git log
    git diff master
    git push -f
Code and issues
---------------
@ -218,35 +119,24 @@ Code is stored on GitHub, in the `Borgbackup organization
<https://github.com/borgbackup/borg/pulls>`_ should be sent there as
well. See also the :ref:`support` section for more details.
Style guide / Automated Code Formatting
---------------------------------------

We use `black`_ for automatically formatting the code.

If you work on the code, it is recommended that you run black **before each commit**
(so that new code is always using the desired formatting and no additional commits
are required to fix the formatting).

::

    pip install -r requirements.d/codestyle.txt  # everybody use same black version
    black --check .  # only check, don't change
    black .  # reformat the code

The CI workflows will check the code formatting and will fail if it is not formatted correctly.

When (mass-)reformatting existing code, we need to avoid ruining `git blame`, so please
follow their `guide about avoiding ruining git blame`_:

.. _black: https://black.readthedocs.io/
.. _guide about avoiding ruining git blame: https://black.readthedocs.io/en/stable/guides/introducing_black_to_your_project.html#avoiding-ruining-git-blame
Continuous Integration
----------------------
All pull requests go through `GitHub Actions`_, which runs the tests on misc.
Python versions and on misc. platforms as well as some additional checks.

.. _GitHub Actions: https://github.com/borgbackup/borg/actions
Output and Logging
@ -274,12 +164,6 @@ virtual env and run::
    pip install -r requirements.d/development.txt
This project utilizes pre-commit to format and lint code before it is committed.
Although pre-commit is installed when running the command above, the pre-commit hooks
will have to be installed separately. Run this command to install the pre-commit hooks::
    pre-commit install
Running the tests
-----------------
@ -287,7 +171,7 @@ The tests are in the borg/testsuite package.
To run all the tests, you need to have fakeroot installed. If you do not have
fakeroot, you still will be able to run most tests, just leave away the
``fakeroot -u`` from the given command lines.

To run the test suite use the following command::
@ -298,7 +182,7 @@ Some more advanced examples::
    # verify a changed tox.ini (run this after any change to tox.ini):
    fakeroot -u tox --recreate

    fakeroot -u tox -e py313  # run all tests, but only on python 3.13

    fakeroot -u tox borg.testsuite.locking  # only run 1 test module
@ -310,24 +194,26 @@ Important notes:
- When using ``--`` to give options to py.test, you MUST also give ``borg.testsuite[.module]``.
Running the tests (using the pypi package)
------------------------------------------

Since borg 1.4, it is also possible to run the tests without a development
environment, using the borgbackup dist package (downloaded from pypi.org or
github releases page):

::

    # optional: create and use a virtual env:
    python3 -m venv env
    . env/bin/activate

    # install packages
    pip install borgbackup
    pip install pytest pytest-benchmark

    # run the tests
    pytest -v -rs --benchmark-skip --pyargs borg.testsuite
Adding a compression algorithm
------------------------------
@ -350,8 +236,8 @@ for easier use by packagers downstream.
When a command is added, a command line flag changed, added or removed,
the usage docs need to be rebuilt as well::

    python scripts/make.py build_usage
    python scripts/make.py build_man
However, we prefer to do this as part of our :ref:`releasing`
preparations, so it is generally not necessary to update these when
@ -405,49 +291,6 @@ Usage::
    # To copy files from the VM (in this case, the generated binary):
    vagrant scp OS:/vagrant/borg/borg.exe .
Using Podman
------------
macOS-based developers (and others who prefer containers) can run the Linux test suite locally using Podman.
Prerequisites:
- Install Podman (e.g., ``brew install podman``).
- Initialize the Podman machine, only once: ``podman machine init``.
- Start the Podman machine, before using it: ``podman machine start``.
Usage::
    # Open an interactive shell in the container (default if no command given):
    ./scripts/linux-run

    # Run the default tox environment:
    ./scripts/linux-run tox

    # Run a specific tox environment:
    ./scripts/linux-run tox -e py311-pyfuse3

    # Pass arguments to pytest (e.g., run specific tests):
    ./scripts/linux-run tox -e py313-pyfuse3 -- -k mount

    # Switch base image (temporarily):
    ./scripts/linux-run --image python:3.11-bookworm tox
Resource Usage
~~~~~~~~~~~~~~
The default Podman VM uses 2GB RAM and half your CPUs.
For heavy tests (parallel execution), this might be tight.
- **Check usage:** Run ``podman stats`` in another terminal while tests are running.
- **Increase resources:**
::
    podman machine stop
    podman machine set --cpus 6 --memory 4096
    podman machine start
Creating standalone binaries
----------------------------
@ -467,6 +310,7 @@ If you encounter issues, see also our `Vagrantfile` for details.
work on same OS, same architecture (x86 32bit, amd64 64bit)
without external dependencies.

.. _releasing:

Creating a new release
@ -482,18 +326,12 @@ Checklist:
- Update ``CHANGES.rst``, based on ``git log $PREVIOUS_RELEASE..``.
- Check version number of upcoming release in ``CHANGES.rst``.
- Render ``CHANGES.rst`` via ``make html`` and check for markup errors.
- Verify that ``MANIFEST.in``, ``pyproject.toml`` and ``setup.py`` are complete.
- Run these commands, check git status for files that might need to be added, and commit::
    python scripts/make.py build_usage
    python scripts/make.py build_man
- Tag the release::

    git tag -s -m "tagged/signed release X.Y.Z" X.Y.Z
- Push the release PR branch to GitHub, make a pull request.
- Also push the release tag.
- Create a clean repo and use it for the following steps::

    git clone borg borg-clean
@ -502,9 +340,8 @@ Checklist:
  It will also reveal uncommitted required files.
  Moreover, it makes sure the vagrant machines only get committed files and
  do a fresh start based on that.

- Optional: run tox and/or binary builds on all supported platforms via vagrant,
  check for test failures. This is now optional as we do platform testing and
  binary building on GitHub.
- Create sdist, sign it, upload release to (test) PyPi:

  ::
@ -512,32 +349,26 @@ Checklist:
    scripts/sdist-sign X.Y.Z
    scripts/upload-pypi X.Y.Z test
    scripts/upload-pypi X.Y.Z
  Note: the signature is not uploaded to PyPi any more, but we upload it to
  github releases.

- When GitHub CI looks good on the release PR, merge it and then check "Actions":
  GitHub will create binary assets after the release PR is merged within the
  CI testing of the merge. Check the "Upload binaries" step on Ubuntu (AMD/Intel
  and ARM64) and macOS (Intel and ARM64), fetch the ZIPs with the binaries.
- Unpack the ZIPs and test the binaries, upload the binaries to the GitHub
  release page (borg-OS-SPEC-ARCH-gh and borg-OS-SPEC-ARCH-gh.tgz).
- Close the release milestone on GitHub.
- `Update borgbackup.org
  <https://github.com/borgbackup/borgbackup.github.io/pull/53/files>`_ with the
  new version number and release date.
- Announce on:

  - Mailing list.
  - Mastodon / BlueSky / X (aka Twitter).
  - IRC channel (change ``/topic``).

- Create a GitHub release, include:
  - pypi dist package and signature
  - Standalone binaries (see above for how to create them).
  - For macOS binaries **with** FUSE support, document the macFUSE version
    in the README of the binaries. macFUSE uses a kernel extension that needs
    to be compatible with the code contained in the binary.
  - A link to ``CHANGES.rst``.
@ -1,7 +1,7 @@
.. highlight:: bash
.. |package_dirname| replace:: borgbackup-|version|
.. |package_filename| replace:: |package_dirname|.tar.gz
.. |package_url| replace:: https://pypi.org/project/borgbackup/#files
.. |git_url| replace:: https://github.com/borgbackup/borg.git
.. _github: https://github.com/borgbackup/borg
.. _issue tracker: https://github.com/borgbackup/borg/issues
@ -10,21 +10,20 @@
.. _HMAC-SHA256: https://en.wikipedia.org/wiki/HMAC
.. _SHA256: https://en.wikipedia.org/wiki/SHA-256
.. _PBKDF2: https://en.wikipedia.org/wiki/PBKDF2
.. _argon2: https://en.wikipedia.org/wiki/Argon2
.. _ACL: https://en.wikipedia.org/wiki/Access_control_list
.. _libacl: https://savannah.nongnu.org/projects/acl/
.. _libattr: https://savannah.nongnu.org/projects/attr/
.. _libxxhash: https://github.com/Cyan4973/xxHash
.. _liblz4: https://github.com/Cyan4973/lz4
.. _libzstd: https://github.com/facebook/zstd
.. _libb2: https://github.com/BLAKE2/libb2
.. _OpenSSL: https://www.openssl.org/
.. _`Python 3`: https://www.python.org/
.. _Buzhash: https://en.wikipedia.org/wiki/Buzhash
.. _msgpack: https://msgpack.org/
.. _`msgpack-python`: https://pypi.org/project/msgpack-python/
.. _llfuse: https://pypi.org/project/llfuse/
.. _mfusepy: https://pypi.org/project/mfusepy/
.. _pyfuse3: https://pypi.org/project/pyfuse3/
.. _userspace filesystems: https://en.wikipedia.org/wiki/Filesystem_in_Userspace
.. _Cython: https://cython.org/
.. _virtualenv: https://pypi.org/project/virtualenv/
@ -6,7 +6,7 @@ Borg Documentation
.. include:: ../README.rst

.. When you add an element here, do not forget to add it to book.rst.

.. toctree::
   :maxdepth: 2
@ -18,8 +18,6 @@ Borg Documentation
   faq
   support
   changes
   changes_1.x
   changes_0.x
   internals
   development
   authors
@ -13,7 +13,6 @@ There are different ways to install Borg:
  that comes bundled with all dependencies.

- :ref:`source-install`, either:

  - :ref:`windows-binary` - builds a binary file for Windows using MSYS2.
  - :ref:`pip-installation` - installing a source package with pip needs
    more installation steps and requires all dependencies with
    development headers and a compiler.
@ -43,7 +42,7 @@ package which can be installed with the package manager.
Distribution Source                                        Command
============ ============================================= =======
Alpine Linux `Alpine repository`_                          ``apk add borgbackup``
Arch Linux   `[extra]`_                                    ``pacman -S borg``
Debian       `Debian packages`_                            ``apt install borgbackup``
Gentoo       `ebuild`_                                     ``emerge borgbackup``
GNU Guix     `GNU Guix`_                                   ``guix package --install borg``
@ -64,14 +63,14 @@ Ubuntu `Ubuntu packages`_, `Ubuntu PPA`_ ``apt install borgbac
============ ============================================= =======
.. _Alpine repository: https://pkgs.alpinelinux.org/packages?name=borgbackup
.. _[extra]: https://www.archlinux.org/packages/?name=borg
.. _Debian packages: https://packages.debian.org/search?keywords=borgbackup&searchon=names&exact=1&suite=all&section=all
.. _Fedora official repository: https://packages.fedoraproject.org/pkgs/borgbackup/borgbackup/
.. _FreeBSD ports: https://www.freshports.org/archivers/py-borgbackup/
.. _ebuild: https://packages.gentoo.org/packages/app-backup/borgbackup
.. _GNU Guix: https://www.gnu.org/software/guix/package-list.html#borg
.. _pkgsrc: https://pkgsrc.se/sysutils/py-borgbackup
.. _cauldron: https://madb.mageia.org/package/show/application/0/release/cauldron/name/borgbackup
.. _.nix file: https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/backup/borgbackup/default.nix
.. _OpenBSD ports: https://cvsweb.openbsd.org/cgi-bin/cvsweb/ports/sysutils/borgbackup/
.. _OpenIndiana hipster repository: https://pkg.openindiana.org/hipster/en/search.shtml?token=borg&action=Search
@ -82,9 +81,9 @@ Ubuntu `Ubuntu packages`_, `Ubuntu PPA`_ ``apt install borgbac
.. _Ubuntu packages: https://launchpad.net/ubuntu/+source/borgbackup
.. _Ubuntu PPA: https://launchpad.net/~costamagnagianfranco/+archive/ubuntu/borgbackup

Please ask package maintainers to build a package or, if you can package/
submit it yourself, please help us with that! See :issue:`105` on
GitHub to follow up on packaging efforts.
**Current status of package in the repositories**
Standalone Binary
~~~~~~~~~~~~~~~~~

.. note:: Releases are signed with an OpenPGP key, see
          :ref:`security-contact` for more instructions.

Borg x86/x64 AMD/Intel compatible binaries (generated with `pyinstaller`_)
are available on the releases_ page for the following platforms:

* **Linux**: glibc >= 2.28 (ok for most supported Linux releases).
  Older glibc releases are untested and may not work.
* **macOS**: 10.12 or newer (To avoid signing issues, download the file via
  command line **or** remove the ``quarantine`` attribute after downloading:
  ``$ xattr -dr com.apple.quarantine borg-macosx64.tgz``)
* **FreeBSD**: 12.1 (unknown whether it works for older releases)
fail if /tmp has not enough free space or is mounted with the ``noexec``
option. You can change the temporary directory by setting the ``TEMP``
environment variable before running Borg.

If a new version is released, you will have to download it manually and replace
the old version using the same steps as shown above.

.. _pyinstaller: https://www.pyinstaller.org
.. _releases: https://github.com/borgbackup/borg/releases

.. _source-install:
Dependencies
~~~~~~~~~~~~

To install Borg from a source package (including pip), you have to install the
following dependencies first. For the libraries you will also need their
development header files (sometimes in a separate `-dev` or `-devel` package).

* `Python 3`_ >= 3.10.0
* OpenSSL_ >= 1.1.1 (LibreSSL will not work)
* libacl_ (which depends on libattr_)
* libxxhash_ >= 0.8.1
* liblz4_ >= 1.7.0 (r129)
* libffi (required for argon2-cffi-bindings)
* pkg-config (cli tool) - Borg uses this to discover header and library
  locations automatically. Alternatively, you can also point to them via some
  environment variables, see setup.py.
* Some other Python dependencies, pip will automatically install them for you.
* Optionally, if you wish to mount an archive as a FUSE filesystem, you need
  a FUSE implementation for Python:

  - mfusepy_ >= 3.1.0 (for fuse 2 and fuse 3, use `pip install borgbackup[mfusepy]`), or
  - pyfuse3_ >= 3.1.1 (for fuse 3, use `pip install borgbackup[pyfuse3]`), or
  - llfuse_ >= 1.3.8 (for fuse 2, use `pip install borgbackup[llfuse]`).
  - Additionally, your OS will need to have FUSE support installed
    (e.g. a package `fuse` for fuse 2 or a package `fuse3` for fuse 3 support).

* Optionally, if you wish to use the S3/B2 backend:

  - borgstore[s3] ~= 0.3.0 (use `pip install borgbackup[s3]`)

* Optionally, if you wish to use the SFTP backend:

  - borgstore[sftp] ~= 0.3.0 (use `pip install borgbackup[sftp]`)
If you have troubles finding the right package names, have a look at the
distribution specific sections below or the Vagrantfile in the git repository,
which contains installation scripts for a number of operating systems.

In the following, the steps needed to install the dependencies are listed for a
selection of platforms. If your distribution is not covered by these
instructions, try to use your package manager to install the dependencies.

After you have installed the dependencies, you can proceed with steps outlined
under :ref:`pip-installation`.
Arch Linux
++++++++++

Install the runtime and build dependencies::

    pacman -S python python-pip python-virtualenv openssl acl xxhash lz4 base-devel
    pacman -S fuse2  # needed for llfuse
    pacman -S fuse3  # needed for pyfuse3

Note that Arch Linux specifically doesn't support
`partial upgrades <https://wiki.archlinux.org/title/Partial_upgrade>`__,
so in case some packages cannot be retrieved from the repo, run with ``pacman -Syu``.
Debian / Ubuntu
+++++++++++++++

Install the dependencies with development headers::

    sudo apt-get install python3 python3-dev python3-pip python3-virtualenv \
        libacl1-dev \
        libssl-dev \
        liblz4-dev libxxhash-dev \
        libffi-dev \
        build-essential pkg-config
    sudo apt-get install libfuse-dev fuse    # needed for llfuse
    sudo apt-get install libfuse3-dev fuse3  # needed for pyfuse3
Fedora
++++++

Install the dependencies with development headers::

    sudo dnf install python3 python3-devel python3-pip python3-virtualenv \
        libacl-devel \
        openssl-devel \
        lz4-devel xxhash-devel \
        libffi-devel \
        pkgconf
    sudo dnf install gcc gcc-c++ redhat-rpm-config
    sudo dnf install fuse-devel fuse    # needed for llfuse
    sudo dnf install fuse3-devel fuse3  # needed for pyfuse3
openSUSE
++++++++

Install the dependencies automatically using zypper, or alternatively enumerate
all build dependencies in the command line::

    sudo zypper install python3 python3-devel \
        libacl-devel openssl-devel xxhash-devel liblz4-devel \
        libffi-devel \
        python3-Cython python3-Sphinx python3-msgpack-python python3-pkgconfig pkgconf \
        python3-pytest python3-setuptools python3-setuptools_scm \
        python3-sphinx_rtd_theme gcc gcc-c++
macOS
+++++

When installing borgbackup via Homebrew_, the basic dependencies are installed automatically.

For FUSE support to mount the backup archives, you need macFUSE, which is available
via `github <https://github.com/osxfuse/osxfuse/releases/latest>`__, or Homebrew::

    brew install --cask macfuse
the installed ``openssl`` formula, point pkg-config to the correct path::

    PKG_CONFIG_PATH="/usr/local/opt/openssl@1.1/lib/pkgconfig" pip install borgbackup[llfuse]

When working from a borg git repo workdir, you can install dependencies using the
Brewfile::

    brew install python@3.11  # can be any supported python3 version
    brew bundle install  # install requirements from borg repo's ./Brewfile
    pip3 install virtualenv

Be aware that for all recent macOS releases you must authorize full disk access.
It is no longer sufficient to run borg backups as root. If you have not yet
granted full disk access, and you run Borg backup from cron, you will see
messages such as::
and commands to make FUSE work for using the mount command.

::

    pkg install -y python3 pkgconf
    pkg install openssl
    pkg install liblz4 xxhash
    pkg install fusefs-libs  # needed for llfuse
    pkg install -y git
    python3 -m ensurepip  # to install pip for Python3

    kldload fuse
    sysctl vfs.usermount=1
.. _windows_deps:

Windows
+++++++

.. note::
    Running under Windows is experimental.

.. warning::
    This script needs to be run in the UCRT64 environment in MSYS2.

Install the dependencies with the provided script::

    ./scripts/msys2-install-deps

Windows 10's Linux Subsystem
++++++++++++++++++++++++++++

Cygwin
++++++

Use the Cygwin installer to install the dependencies::

    python39 python39-devel
    python39-setuptools python39-pip python39-wheel python39-virtualenv
    libssl-devel libxxhash-devel liblz4-devel
    binutils gcc-g++ git make openssh

Make sure to use a virtual environment to avoid confusions with any Python installed on Windows.
.. _windows-binary:

Building a binary on Windows
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::
    This is experimental.

.. warning::
    This needs to be run in the UCRT64 environment in MSYS2.

Ensure to install the dependencies as described within :ref:`Dependencies: Windows <windows_deps>`.

::

    # Needed for setuptools < 70.2.0 to work - https://www.msys2.org/docs/python/#known-issues
    # export SETUPTOOLS_USE_DISTUTILS=stdlib
    pip install -e .
    pyinstaller -y scripts/borg.exe.spec

A standalone executable will be created in ``dist/borg.exe``.

.. _pip-installation:
Virtualenv_ can be used to build and install Borg without affecting
the system Python or requiring root access. Using a virtual environment is
optional, but recommended except for the most simple use cases.

Ensure to install the dependencies as described within :ref:`source-install`.

.. note::
    If you install into a virtual environment, you need to **activate** it
    first (``source borg-env/bin/activate``), before running ``borg``.
    Alternatively, symlink ``borg-env/bin/borg`` into some directory that is in
    your ``PATH`` so you can run ``borg``.

This will use ``pip`` to install the latest release from PyPi::

    # might be required if your tools are outdated
    pip install -U pip setuptools wheel

    # install Borg + Python dependencies into virtualenv
    pip install borgbackup
    # or alternatively (if you want FUSE support):
activating your virtual environment::

    pip install -U borgbackup  # or ... borgbackup[llfuse/pyfuse3]

When doing manual pip installation, man pages are not automatically
installed. You can run these commands to install the man pages
locally::

    # get borg from github
    git clone https://github.com/borgbackup/borg.git borg

    # Install the files with proper permissions
    install -D -m 0644 borg/docs/man/borg*.1* $HOME/.local/share/man/man1/borg.1

    # Update the man page cache
    mandb
.. _git-installation:

Using git
~~~~~~~~~

This uses latest, unreleased development code from git.
While we try not to break master, there are no guarantees on anything.

Ensure to install the dependencies as described within :ref:`source-install`.

Version metadata is obtained dynamically at install time using ``setuptools-scm``.
Please ensure that your git repo either has correct tags, or provide the version
manually using the ``SETUPTOOLS_SCM_PRETEND_VERSION`` environment variable.

::

    # get borg from github
    git clone https://github.com/borgbackup/borg.git

    # create a virtual environment
    virtualenv --python=$(which python3) borg-env
    source borg-env/bin/activate  # always before using!

    # install borg dependencies into virtualenv
    cd borg
    pip install -r requirements.d/development.txt
    pip install -r requirements.d/docs.txt  # optional, to build the docs

    # set a borg version if setuptools-scm fails to do so automatically
    export SETUPTOOLS_SCM_PRETEND_VERSION=

    # install borg into virtualenv
    pip install -e .  # in-place editable mode

or

::

    pip install -e .[pyfuse3]  # in-place editable mode, use pyfuse3
If you need to use a different version of Python you can install this using ``pyenv``:

::

    ...
    # create a virtual environment
    pyenv install 3.10.0  # minimum, preferably use something more recent!
    pyenv global 3.10.0
    pyenv local 3.10.0
    virtualenv --python=$(pyenv which python) borg-env
    source borg-env/bin/activate  # always before using!
    ...

.. note:: As a developer or power user, you should always use a virtual environment.
Internals
=========

The internals chapter describes and analyzes most of the inner workings
of Borg.
Borg uses a low-level, key-value store, the :ref:`repository`, and

specified when the backup was performed.

Deduplication is performed globally across all data in the repository
(multiple backups and even multiple hosts), both on data and file
metadata, using :ref:`chunks` created by the chunker using the
Buzhash_ algorithm ("buzhash" and "buzhash64" chunker) or a simpler
fixed block size algorithm ("fixed" chunker).

To perform the repository-wide deduplication, a hash of each
chunk is checked against the :ref:`chunks cache <cache>`, which is a
hash table of all chunks that already exist.
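The lookup described above can be sketched in a few lines of Python. This is a simplified illustration, not Borg's actual code: Borg uses a keyed MAC (not plain SHA-256) as chunk ID and an optimized C hash table, and the names ``store_chunks``, ``repo`` and ``chunk_index`` are made up here.

```python
# Simplified sketch of repository-wide deduplication: hash each chunk,
# look the hash up in an index of already stored chunks, and only write
# chunks that were never seen before.
import hashlib

def store_chunks(chunks, repo, chunk_index):
    for chunk in chunks:
        chunk_id = hashlib.sha256(chunk).digest()  # Borg really uses a keyed MAC
        if chunk_id in chunk_index:
            chunk_index[chunk_id] += 1             # duplicate: bump refcount only
        else:
            repo[chunk_id] = chunk                 # new chunk: store the data
            chunk_index[chunk_id] = 1

repo, index = {}, {}
store_chunks([b"aaa", b"bbb", b"aaa"], repo, index)
# only two unique chunks end up in the repository, the third is deduplicated
```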
.. figure:: internals/structure.png
    :figwidth: 100%
    :width: 100%

    Layers in Borg. At the very top, commands are implemented, using
    a data access layer provided by the Archive and Item classes.
    The "key" object provides both compression and authenticated
    encryption used by the data access layer. The "key" object represents
but does mean that there are no release-to-release guarantees on what you might
even for point releases (1.1.x), and there is no documentation beyond the code and the internals documents.

Borg does on the other hand provide an API on a command-line level. In other words, a frontend should
(for example) create a backup archive by invoking :ref:`borg_create`, provide command-line parameters/options
as needed, and parse JSON output from Borg.

Important: JSON output is expected to be UTF-8, but currently borg depends on the locale being configured
for that (must be a UTF-8 locale and *not* "C" or "ascii"), so that Python will choose to encode to UTF-8.
The same applies to any inputs read by borg, they are expected to be UTF-8 encoded also.
On POSIX systems, you can usually set environment vars to choose a UTF-8 locale:

::

    export LC_CTYPE=en_US.UTF-8

Another way to get Python's stdin/stdout/stderr streams to use UTF-8 encoding (without having
a UTF-8 locale / LANG / LC_CTYPE) is:

::

    export PYTHONIOENCODING=utf-8

See :issue:`2273` for more details.
Dealing with non-unicode byte sequences and JSON limitations
------------------------------------------------------------
Paths on POSIX systems can have arbitrary bytes in them (except 0x00 which is used as string terminator in C).
Nowadays, UTF-8 encoded paths (which decode to valid unicode) are the usual thing, but a lot of systems
still have paths from the past, when other, non-unicode codings were used. Especially old Samba shares often
have wild mixtures of misc. encodings, sometimes even very broken stuff.
borg deals with such non-unicode paths ("with funny/broken characters") by decoding such byte sequences using
UTF-8 coding and "surrogateescape" error handling mode, which maps invalid bytes to special unicode code points
(surrogate escapes). When encoding such a unicode string back to a byte sequence, the original byte sequence
will be reproduced exactly.
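A minimal Python round-trip demonstrating the mechanism described above (the byte string is an illustrative latin-1 name, not taken from any real filesystem):

```python
# A filename that is valid latin-1 but NOT valid UTF-8 (0xe9 = "é" in latin-1):
raw = b"fil\xe9.txt"

# Decoding with "surrogateescape" maps the invalid byte to a surrogate
# code point instead of raising UnicodeDecodeError:
text = raw.decode("utf-8", errors="surrogateescape")

# Encoding the same way reproduces the original byte sequence exactly:
assert text.encode("utf-8", errors="surrogateescape") == raw
```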
JSON should only contain valid unicode text without any surrogate escapes, so we can't just directly have a
surrogate-escaped path in JSON ("path" is only one example, this also affects other text-like content).
Borg deals with this situation like this (since borg 2.0):
For a valid unicode path (no surrogate escapes), the JSON will only have "path": path.
For a non-unicode path (with surrogate escapes), the JSON will have 2 entries:
- "path": path_approximation (pure valid unicode, all invalid bytes will show up as "?")
- "path_b64": path_bytes_base64_encoded (if you decode the base64, you get the original path byte string)
JSON users need to pick whatever suits their needs best. The suggested procedure (shown for "path") is:
- check if there is a "path_b64" key.
- if it is there, you will know that the original bytes path did not cleanly UTF-8-decode into unicode (has
some invalid bytes) and that the string given by the "path" key is only an approximation, but not the precise
path. if you need precision, you must base64-decode the value of "path_b64" and deal with the arbitrary byte
string you'll get. if an approximation is fine, use the value of the "path" key.
- if it is not there, the value of the "path" key is all you need (the original bytes path is its UTF-8 encoding).
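The suggested procedure can be implemented in a few lines of Python. Note that ``item_path_bytes`` is a hypothetical helper name and the sample JSON objects are constructed for illustration:

```python
import base64
import json

def item_path_bytes(item):
    """Return the precise path as a byte string, following the procedure above."""
    if "path_b64" in item:
        return base64.b64decode(item["path_b64"])  # exact original byte string
    return item["path"].encode("utf-8")            # path was valid unicode

# non-unicode path: "path" is only an approximation, "path_b64" is exact
item = json.loads('{"path": "fil?.txt", "path_b64": "ZmlsyS50eHQ="}')
assert item_path_bytes(item) == b"fil\xc9.txt"

# valid unicode path: "path_b64" is absent, "path" is all you need
assert item_path_bytes({"path": "plain.txt"}) == b"plain.txt"
```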
Logging
-------

where each line is a JSON object. The *type* key of the object determines its other contents.

parsing error will be printed in plain text, because logging set-up happens after all arguments are
parsed.

The following types are in use. Progress information is governed by the usual rules for progress information,
it is not produced unless ``--progress`` is specified.

archive_progress
++++++++++++++++
The following keys exist, each represents the current progress.

original_size
    Original size of data processed so far (before compression and deduplication, may be empty/absent)
compressed_size
    Compressed size (may be empty/absent)
deduplicated_size
    Deduplicated size (may be empty/absent)
nfiles
    Number of (regular) files processed so far (may be empty/absent)
path
    Current path (may be empty/absent)
time
    Unix timestamp (float)
finished
    boolean indicating whether the operation has finished, only the last object for an *operation*
    can have this property set to *true*.
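A frontend might consume such ``archive_progress`` lines like this. This is only a sketch: the sample lines are constructed here, and ``format_archive_progress`` is a made-up helper name.

```python
import json

def format_archive_progress(line):
    """Render one archive_progress log line, honoring possibly absent keys."""
    msg = json.loads(line)
    if msg.get("type") != "archive_progress":
        return None
    if msg.get("finished"):
        return "done"  # last object of the operation
    return "{} files, {} bytes".format(msg.get("nfiles", 0), msg.get("original_size", 0))

line = '{"type": "archive_progress", "original_size": 1024, "nfiles": 3, "time": 1.0}'
assert format_archive_progress(line) == "3 files, 1024 bytes"
assert format_archive_progress('{"type": "archive_progress", "finished": true, "time": 2.0}') == "done"
```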
progress_message
++++++++++++++++

A message-based progress information with no concrete progress information, just a message
progress_percent
++++++++++++++++

can have this property set to *true*.

message
    A formatted progress message, this will include the percentage and perhaps other information
    (absent for finished == true)
current
    Current value (always less-or-equal to *total*, absent for finished == true)
info
    Array that describes the current item, may be *null*, contents depend on *msgid*
    (absent for finished == true)
total
    Total value (absent for finished == true)
time
    Unix timestamp (float)
Passphrase prompts should be handled differently. Use the environment variables *BORG_PASSPHRASE*
and *BORG_NEW_PASSPHRASE* (see :ref:`env_vars` for reference) to pass passphrases to Borg, don't
use the interactive passphrase prompts.

When setting a new passphrase (:ref:`borg_repo-create`, :ref:`borg_key_change-passphrase`) normally
Borg prompts whether it should display the passphrase. This can be suppressed by setting
the environment variable *BORG_DISPLAY_PASSPHRASE* to *no*.
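For example, from Python a frontend can pass the passphrase via the subprocess environment. This is a sketch: the repository path is a placeholder and the actual ``borg`` invocation is left commented out; ``borg_env`` is a made-up helper name.

```python
import os
import subprocess

def borg_env(passphrase):
    """Environment for a borg subprocess: passphrase passed non-interactively."""
    env = dict(os.environ)
    env["BORG_PASSPHRASE"] = passphrase
    env["BORG_DISPLAY_PASSPHRASE"] = "no"  # never prompt to display it
    return env

env = borg_env("s3cr3t")
# subprocess.run(["borg", "list", "--json", "/path/to/repo"], env=env)
assert env["BORG_PASSPHRASE"] == "s3cr3t"
```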
and :ref:`borg_list` implement a ``--json`` option which turns their regular output into a single JSON object.

Some commands, like :ref:`borg_list` and :ref:`borg_diff`, can produce *a lot* of JSON. Since many JSON implementations
don't support a streaming mode of operation, which is pretty much required to deal with this amount of JSON, these
commands implement a ``--json-lines`` option which generates output in the `JSON lines <https://jsonlines.org/>`_ format,
which is simply a number of JSON objects separated by new lines.

Dates are formatted according to ISO 8601 in local time. No explicit time zone is specified *at this time*
last_modified

The *encryption* key, if present, contains:

mode
    Textual encryption mode name (same as :ref:`borg_repo-create` ``--encryption`` names)
keyfile
    Path to the local key file used for access. Depending on *mode* this key may be absent.
stats

    Number of unique chunks
total_size
    Total uncompressed size of all chunks multiplied with their reference counts
unique_size
    Uncompressed size of all chunks

.. highlight: json
Example *borg info* output::

        "path": "/home/user/.cache/borg/0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
        "stats": {
            "total_chunks": 511533,
            "total_size": 22635749792,
            "total_unique_chunks": 54892,
            "unique_size": 2449675468
        }
    },
stats

    Deduplicated size (against the current repository, not when the archive was created)
nfiles
    Number of regular files in the archive
command_line
    Array of strings of the command line that created the archive
The same archive with more information (``borg info --last 1 --json``)::

            "end": "2017-02-27T12:27:20.789123",
            "hostname": "host",
            "id": "80cd07219ad725b3c5f665c1dcf119435c4dee1647a560ecac30f8d40221a46a",
            "name": "host-system-backup-2017-02-27",
            "start": "2017-02-27T12:27:20.789123",
            "stats": {
@@ -457,8 +424,10 @@ The same archive with more information (``borg info --last 1 --json``)::

        "path": "/home/user/.cache/borg/0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
        "stats": {
            "total_chunks": 511533,
            "total_csize": 17948017540,
            "total_size": 22635749792,
            "total_unique_chunks": 54892,
            "unique_csize": 1920405405,
            "unique_size": 2449675468
        }
    },
@@ -480,15 +449,14 @@ Refer to the *borg list* documentation for the available keys and their meaning.

Example (excerpt) of ``borg list --json-lines``::

    {"type": "d", "mode": "drwxr-xr-x", "user": "user", "group": "user", "uid": 1000, "gid": 1000, "path": "linux", "target": "", "flags": null, "mtime": "2017-02-27T12:27:20.023407", "size": 0}
    {"type": "d", "mode": "drwxr-xr-x", "user": "user", "group": "user", "uid": 1000, "gid": 1000, "path": "linux/baz", "target": "", "flags": null, "mtime": "2017-02-27T12:27:20.585407", "size": 0}
Archive Differencing
++++++++++++++++++++

Each archive difference item (file contents, user/group/mode) output by :ref:`borg_diff` is represented by an *ItemDiff* object.
The properties of an *ItemDiff* object are:

path:
    The filename/path of the *Item* (file, directory, symlink).
@@ -527,26 +495,26 @@ added:

removed:
    See **added** property.

old_mode:
    If **type** == '*mode*', then **old_mode** and **new_mode** provide the mode and permissions changes.

new_mode:
    See **old_mode** property.

old_user:
    If **type** == '*owner*', then **old_user**, **new_user**, **old_group** and **new_group** provide the user
    and group ownership changes.

old_group:
    See **old_user** property.

new_user:
    See **old_user** property.

new_group:
    See **old_user** property.

Example (excerpt) of ``borg diff --json-lines``::
@@ -565,171 +533,92 @@ Message IDs are strings that essentially give a log message or operation a name,

full text, since texts change more frequently. Message IDs are unambiguous and reduce the need to parse
log messages.

Assigned message IDs and related error RCs (exit codes) are:

.. See scripts/errorlist.py; this is slightly edited.

Errors
    Error rc: 2 traceback: no
        Error: {}
    ErrorWithTraceback rc: 2 traceback: yes
        Error: {}

    Buffer.MemoryLimitExceeded rc: 2 traceback: no
        Requested buffer size {} is above the limit of {}.
    EfficientCollectionQueue.SizeUnderflow rc: 2 traceback: no
        Could not pop_front first {} elements, collection only has {} elements..
    RTError rc: 2 traceback: no
        Runtime Error: {}

    CancelledByUser rc: 3 traceback: no
        Cancelled by user.
    CommandError rc: 4 traceback: no
        Command Error: {}
    PlaceholderError rc: 5 traceback: no
        Formatting Error: "{}".format({}): {}({})
    InvalidPlaceholder rc: 6 traceback: no
        Invalid placeholder "{}" in string: {}

    Repository.AlreadyExists rc: 10 traceback: no
        A repository already exists at {}.
    Repository.CheckNeeded rc: 12 traceback: yes
        Inconsistency detected. Please run "borg check {}".
    Repository.DoesNotExist rc: 13 traceback: no
        Repository {} does not exist.
    Repository.InsufficientFreeSpaceError rc: 14 traceback: no
        Insufficient free space to complete transaction (required: {}, available: {}).
    Repository.InvalidRepository rc: 15 traceback: no
        {} is not a valid repository. Check repo config.
    Repository.InvalidRepositoryConfig rc: 16 traceback: no
        {} does not have a valid configuration. Check repo config [{}].
    Repository.ObjectNotFound rc: 17 traceback: yes
        Object with key {} not found in repository {}.
    Repository.ParentPathDoesNotExist rc: 18 traceback: no
        The parent path of the repo directory [{}] does not exist.
    Repository.PathAlreadyExists rc: 19 traceback: no
        There is already something at {}.
    Repository.PathPermissionDenied rc: 21 traceback: no
        Permission denied to {}.

    MandatoryFeatureUnsupported rc: 25 traceback: no
        Unsupported repository feature(s) {}. A newer version of borg is required to access this repository.
    NoManifestError rc: 26 traceback: no
        Repository has no manifest.
    UnsupportedManifestError rc: 27 traceback: no
        Unsupported manifest envelope. A newer version is required to access this repository.

    Archive.AlreadyExists rc: 30 traceback: no
        Archive {} already exists
    Archive.DoesNotExist rc: 31 traceback: no
        Archive {} does not exist
    Archive.IncompatibleFilesystemEncodingError rc: 32 traceback: no
        Failed to encode filename "{}" into file system encoding "{}". Consider configuring the LANG environment variable.

    KeyfileInvalidError rc: 40 traceback: no
        Invalid key data for repository {} found in {}.
    KeyfileMismatchError rc: 41 traceback: no
        Mismatch between repository {} and key file {}.
    KeyfileNotFoundError rc: 42 traceback: no
        No key file for repository {} found in {}.
    NotABorgKeyFile rc: 43 traceback: no
        This file is not a borg key backup, aborting.
    RepoKeyNotFoundError rc: 44 traceback: no
        No key entry found in the config of repository {}.
    RepoIdMismatch rc: 45 traceback: no
        This key backup seems to be for a different backup repository, aborting.
    UnencryptedRepo rc: 46 traceback: no
        Key management not available for unencrypted repositories.
    UnknownKeyType rc: 47 traceback: no
        Key type {0} is unknown.
    UnsupportedPayloadError rc: 48 traceback: no
        Unsupported payload type {}. A newer version is required to access this repository.
    UnsupportedKeyFormatError rc: 49 traceback: no
        Your borg key is stored in an unsupported format. Try using a newer version of borg.

    NoPassphraseFailure rc: 50 traceback: no
        can not acquire a passphrase: {}
    PasscommandFailure rc: 51 traceback: no
        passcommand supplied in BORG_PASSCOMMAND failed: {}
    PassphraseWrong rc: 52 traceback: no
        passphrase supplied in BORG_PASSPHRASE, by BORG_PASSCOMMAND or via BORG_PASSPHRASE_FD is incorrect.
    PasswordRetriesExceeded rc: 53 traceback: no
        exceeded the maximum password retries

    Cache.CacheInitAbortedError rc: 60 traceback: no
        Cache initialization aborted
    Cache.EncryptionMethodMismatch rc: 61 traceback: no
        Repository encryption method changed since last access, refusing to continue
    Cache.RepositoryAccessAborted rc: 62 traceback: no
        Repository access aborted
    Cache.RepositoryIDNotUnique rc: 63 traceback: no
        Cache is newer than repository - do you have multiple, independently updated repos with same ID?
    Cache.RepositoryReplay rc: 64 traceback: no
        Cache, or information obtained from the security directory is newer than repository - this is either an attack or unsafe (multiple repos with same ID)

    LockError rc: 70 traceback: no
        Failed to acquire the lock {}.
    LockErrorT rc: 71 traceback: yes
        Failed to acquire the lock {}.
    LockFailed rc: 72 traceback: yes
        Failed to create/acquire the lock {} ({}).
    LockTimeout rc: 73 traceback: no
        Failed to create/acquire the lock {} (timeout).
    NotLocked rc: 74 traceback: yes
        Failed to release the lock {} (was not locked).
    NotMyLock rc: 75 traceback: yes
        Failed to release the lock {} (was/is locked, but not by me).

    ConnectionClosed rc: 80 traceback: no
        Connection closed by remote host
    ConnectionClosedWithHint rc: 81 traceback: no
        Connection closed by remote host. {}
    InvalidRPCMethod rc: 82 traceback: no
        RPC method {} is not valid
    PathNotAllowed rc: 83 traceback: no
        Repository path not allowed: {}
    RemoteRepository.RPCServerOutdated rc: 84 traceback: no
        Borg server is too old for {}. Required version {}
    UnexpectedRPCDataFormatFromClient rc: 85 traceback: no
        Borg {}: Got unexpected RPC data format from client.
    UnexpectedRPCDataFormatFromServer rc: 86 traceback: no
        Got unexpected RPC data format from server:
        {}
    ConnectionBrokenWithHint rc: 87 traceback: no
        Connection to remote host is broken. {}

    IntegrityError rc: 90 traceback: yes
        Data integrity error: {}
    FileIntegrityError rc: 91 traceback: yes
        File failed integrity check: {}
    DecompressionError rc: 92 traceback: yes
        Decompression error: {}

Warnings
    BorgWarning rc: 1
        Warning: {}
    BackupWarning rc: 1
        {}: {}
    FileChangedWarning rc: 100
        {}: file changed while we backed it up
    IncludePatternNeverMatchedWarning rc: 101
        Include pattern '{}' never matched.
    BackupError rc: 102
        {}: backup error
    BackupRaceConditionError rc: 103
        {}: file type or inode changed while we backed it up (race condition, skipped file)
    BackupOSError rc: 104
        {}: {}
    BackupPermissionError rc: 105
        {}: {}
    BackupIOError rc: 106
        {}: {}
    BackupFileNotFoundError rc: 107
        {}: {}
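A caller running borg as a subprocess can map the exit code ranges in the list above to a coarse outcome; a hypothetical helper (the ranges follow the list: 0 success, 1 generic warning, 2 and the 3..99 range errors, 100 and above per-file warnings):

```python
def classify_rc(rc: int) -> str:
    """Coarse interpretation of borg exit codes per the message ID list:
    0 = success, 1 = generic warning, 2..99 = (specific) errors,
    100+ = specific (e.g. per-file) warnings."""
    if rc == 0:
        return "success"
    if rc == 1 or rc >= 100:
        return "warning"
    return "error"

# e.g. rc 30 is Archive.AlreadyExists, rc 104 is BackupOSError (a warning)
for rc in (0, 1, 30, 104):
    print(rc, classify_rc(rc))
```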
Operations

- cache.begin_transaction
@@ -743,7 +632,6 @@ Operations

- repository.check
- check.verify_data
- check.rebuild_manifest
- check.rebuild_refcounts
- extract

  *info* is one string element, the name of the path currently extracted.
@@ -761,4 +649,4 @@ Prompts

BORG_CHECK_I_KNOW_WHAT_I_AM_DOING
    For "This is a potentially dangerous function..." (check --repair)
BORG_DELETE_I_KNOW_WHAT_I_AM_DOING
    For "You requested to DELETE the repository completely *including* all archives it contains:"


@@ -1,5 +1,3 @@

.. include:: ../global.rst.inc

.. somewhat surprisingly the "bash" highlighter gives nice results with
   the pseudo-code notation used in the "Encryption" section.
@@ -24,22 +22,25 @@ The attack model of Borg is that the environment of the client process

attacker has any and all access to the repository, including interactive
manipulation (man-in-the-middle) for remote repositories.

Furthermore, the client environment is assumed to be persistent across
attacks (practically this means that the security database cannot be
deleted between attacks).

Under these circumstances Borg guarantees that the attacker cannot

1. modify the data of any archive without the client detecting the change
2. rename or add an archive without the client detecting the change
3. recover plain-text data
4. recover definite (heuristics based on access patterns are possible)
   structural information such as the object graph (which archives
   refer to what chunks)

The attacker can always impose a denial of service by definition (they could
block connections to the repository, or delete it partly or entirely).

When the above attack model is extended to include multiple clients
independently updating the same repository, then Borg fails to provide
confidentiality (i.e. guarantees 3) and 4) do not apply any more).
.. _security_structural_auth:

@@ -47,12 +48,12 @@ Structural Authentication

-------------------------

Borg is fundamentally based on an object graph structure (see :ref:`internals`),
where the root objects are the archives.

Borg follows the `Horton principle`_, which states that
not only the message must be authenticated, but also its meaning (often
expressed through context), because every object used is referenced by a
parent object through its object ID up to the archive list entry. The object ID in
Borg is a MAC of the object's plaintext, therefore this ensures that
an attacker cannot change the context of an object without forging the MAC.

@@ -60,45 +61,50 @@ In other words, the object ID itself only authenticates the plaintext of the

object and not its context or meaning. The latter is established by a different
object referring to an object ID, thereby assigning a particular meaning to
an object. For example, an archive item contains a list of object IDs that
represent packed file metadata. On their own, it's not clear that these objects
would represent what they do, but by the archive item referring to them
in a particular part of its own data structure assigns this meaning.

This results in a directed acyclic graph of authentication from the archive
list entry to the data chunks of individual files.
Above used to be all for borg 1.x and was the reason why it needed the
tertiary authentication mechanism (TAM) for manifest and archives.

borg 2 now stores the ro_type ("meaning") of a repo object's data into that
object's metadata (like e.g.: manifest vs. archive vs. user file content data).
When loading data from the repo, borg verifies that the type of object it got
matches the type it wanted. borg 2 does not use TAMs any more.

As both the object's metadata and data are AEAD encrypted and also bound to
the object ID (via giving the ID as AAD), there is no way an attacker (without
access to the borg key) could change the type of the object or move content
to a different object ID.

This effectively 'anchors' each archive to the key, which is controlled by the
client, thereby anchoring the DAG starting from the archives list entry,
making it impossible for an attacker to add or modify any part of the
DAG without Borg being able to detect the tampering.

Please note that removing an archive by removing an entry from archives/*
is possible and is done by ``borg delete`` and ``borg prune`` within their
normal operation. An attacker could also remove some entries there, but, due to
encryption, would not know what exactly they are removing. An attacker with
repository access could also remove other parts of the repository or the whole
repository, so there is not much point in protecting against archive removal.

The borg 1.x way of having the archives list within the manifest chunk was
problematic as it required a read-modify-write operation on the manifest,
requiring a lock on the repository. We want to try less locking and more
parallelism in future.

Passphrase notes
----------------

Note that when using BORG_PASSPHRASE the attacker cannot swap the *entire*
repository against a new repository with e.g. repokey mode and no passphrase,
@@ -108,6 +114,11 @@ However, interactively a user might not notice this kind of attack

immediately, if she assumes that the reason for the absent passphrase
prompt is a set BORG_PASSPHRASE. See issue :issue:`2169` for details.

.. _security_encryption:

Encryption
@@ -118,12 +129,12 @@ AEAD modes

Modes: --encryption (repokey|keyfile)-[blake2-](aes-ocb|chacha20-poly1305)

Supported: borg 2.0+

Encryption with these modes is based on AEAD ciphers (authenticated encryption
with associated data) and session keys.

Depending on the chosen mode (see :ref:`borg_repo-create`) different AEAD ciphers are used:

- AES-256-OCB - super fast, single-pass algorithm IF you have hw accelerated AES.
- chacha20-poly1305 - very fast, purely software based AEAD cipher.

@@ -136,8 +147,7 @@ The chunk ID is derived via a MAC over the plaintext (mac key taken from borg ke

For each borg invocation, a new session id is generated by `os.urandom`_.
From that session id, the initial key material (ikm, taken from the borg key)
and an application and cipher specific salt, borg derives a session key using a
"one-step KDF" based on just sha256.

For each session key, IVs (nonces) are generated by a counter which increments for
each encrypted message.

Session::

    sessionid = os.urandom(24)
    domain = "borg-session-key-CIPHERNAME"
    sessionkey = sha256(crypt_key + sessionid + domain)
    message_iv = 0

Encryption::

@@ -167,13 +178,13 @@ Decryption::

    ASSERT(type-byte is correct)
    domain = "borg-session-key-CIPHERNAME"
    past_key = sha256(crypt_key + past_sessionid + domain)
    decrypted = AEAD_decrypt(past_key, past_message_iv, authenticated)
    decompressed = decompress(decrypted)
    ASSERT( CONSTANT-TIME-COMPARISON( chunk-id, MAC(id_key, decompressed) ) )

Notable:

- More modern and often faster AEAD ciphers instead of self-assembled stuff.
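The one-step session-key derivation shown in the pseudo-code can be sketched in Python; ``crypt_key`` here is a stand-in name for the key material taken from the borg key:

```python
import hashlib
import os

def derive_session_key(crypt_key: bytes, sessionid: bytes, ciphername: str) -> bytes:
    # sessionkey = sha256(crypt_key + sessionid + domain), as in the pseudo-code above
    domain = b"borg-session-key-" + ciphername.encode()
    return hashlib.sha256(crypt_key + sessionid + domain).digest()

# Each borg invocation generates a fresh session id, hence a fresh session key.
crypt_key = b"\x00" * 32          # stand-in for key material from the borg key
key_a = derive_session_key(crypt_key, os.urandom(24), "AES-OCB")
key_b = derive_session_key(crypt_key, os.urandom(24), "AES-OCB")
assert key_a != key_b and len(key_a) == 32
```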
@@ -190,13 +201,106 @@ Legacy modes

Modes: --encryption (repokey|keyfile)-[blake2]

Supported: borg < 2.0

These were the AES-CTR based modes in previous borg versions. DEPRECATED.
We strongly suggest you use the safer AEAD modes, see above.
borg 2.0 does not support creating new repos using these modes,
but ``borg transfer`` can still read such existing repos.

Encryption with these modes is based on the Encrypt-then-MAC construction,
which is generally seen as the most robust way to create an authenticated
encryption scheme from encryption and message authentication primitives.

Every operation (encryption, MAC / authentication, chunk ID derivation)
uses independent, random keys generated by `os.urandom`_.

Borg does not support unauthenticated encryption -- only authenticated encryption
schemes are supported. No unauthenticated encryption schemes will be added
in the future.

Depending on the chosen mode (see :ref:`borg_init`) different primitives are used:

- Legacy encryption modes use AES-256 in CTR mode. The
  counter is added in plaintext, since it is needed for decryption,
  and is also tracked locally on the client to avoid counter reuse.

- The authentication primitive is either HMAC-SHA-256 or BLAKE2b-256
  in a keyed mode.

  Both HMAC-SHA-256 and BLAKE2b have undergone extensive cryptanalysis
  and have proven secure against known attacks. The known vulnerability
  of SHA-256 against length extension attacks does not apply to HMAC-SHA-256.

  The authentication primitive should be chosen based upon SHA hardware support.
  With SHA hardware support, hmac-sha256 is likely to be much faster.
  If no hardware support is provided, BLAKE2b-256 will outperform hmac-sha256.

  To find out if you have SHA hardware support, use::

      $ borg benchmark cpu

  The output will include an evaluation of cryptographic hashes/MACs like::

      Cryptographic hashes / MACs
      ====================================
      hmac-sha256     1GB     0.436s
      blake2b-256     1GB     1.579s

  Based upon your output, choose the primitive that is faster (in the above
  example, hmac-sha256 is much faster, which indicates SHA hardware support).

- The primitive used for authentication is always the same primitive
  that is used for deriving the chunk ID, but they are always
  used with independent keys.

Encryption::

    id = AUTHENTICATOR(id_key, data)
    compressed = compress(data)
    iv = reserve_iv()
    encrypted = AES-256-CTR(enc_key, 8-null-bytes || iv, compressed)
    authenticated = type-byte || AUTHENTICATOR(enc_hmac_key, encrypted) || iv || encrypted

Decryption::

    # Given: input *authenticated* data, possibly a *chunk-id* to assert
    type-byte, mac, iv, encrypted = SPLIT(authenticated)

    ASSERT(type-byte is correct)
    ASSERT( CONSTANT-TIME-COMPARISON( mac, AUTHENTICATOR(enc_hmac_key, encrypted) ) )
    decrypted = AES-256-CTR(enc_key, 8-null-bytes || iv, encrypted)
    decompressed = decompress(decrypted)
    ASSERT( CONSTANT-TIME-COMPARISON( chunk-id, AUTHENTICATOR(id_key, decompressed) ) )
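The authenticate-before-decrypt order of this Encrypt-then-MAC scheme can be sketched with Python's stdlib ``hmac`` module; the AES-CTR step itself is omitted here, and ``enc_hmac_key`` is the stand-in name from the pseudo-code, with HMAC-SHA-256 as the authenticator:

```python
import hashlib
import hmac

def authenticate(enc_hmac_key: bytes, type_byte: bytes, iv: bytes, encrypted: bytes) -> bytes:
    # authenticated = type-byte || MAC(enc_hmac_key, encrypted) || iv || encrypted
    mac = hmac.new(enc_hmac_key, encrypted, hashlib.sha256).digest()
    return type_byte + mac + iv + encrypted

def verify(enc_hmac_key: bytes, authenticated: bytes) -> bytes:
    # SPLIT(authenticated): 1 type byte, 32 MAC bytes, 8 IV bytes, rest ciphertext
    mac, encrypted = authenticated[1:33], authenticated[41:]
    expected = hmac.new(enc_hmac_key, encrypted, hashlib.sha256).digest()
    # Constant-time comparison; the MAC is checked *before* any decryption happens.
    if not hmac.compare_digest(mac, expected):
        raise ValueError("MAC verification failed")
    return encrypted  # would now be fed to AES-256-CTR decryption

blob = authenticate(b"\x01" * 32, b"\x02", b"\x00" * 8, b"ciphertext")
assert verify(b"\x01" * 32, blob) == b"ciphertext"
```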
The client needs to track which counter values have been used, since
encrypting a chunk requires a starting counter value and no two chunks
may have overlapping counter ranges (otherwise the bitwise XOR of the
overlapping plaintexts is revealed).
The client does not directly track the counter value, because it
changes often (with each encrypted chunk), instead it commits a
"reservation" to the security database and the repository by taking
the current counter value and adding 4 GiB / 16 bytes (the block size)
to the counter. Thus the client only needs to commit a new reservation
every few gigabytes of encrypted data.
This mechanism also avoids reusing counter values in case the client
crashes or the connection to the repository is severed, since any
reservation would have been committed to both the security database
and the repository before any data is encrypted. Borg uses its
standard mechanism (SaveFile) to ensure that reservations are durable
(on most hardware / storage systems), therefore a crash of the
client's host would not impact tracking of reservations.
However, this design is not infallible, and requires synchronization
between clients, which is handled through the repository. Therefore in
a multiple-client scenario a repository can trick a client into
reusing counter values by ignoring counter reservations and replaying
the manifest (which will fail if the client has seen a more recent
manifest or has a more recent nonce reservation). If the repository is
untrusted, but a trusted synchronization channel exists between
clients, the security database could be synchronized between them over
said trusted channel. This is not part of Borg's functionality.
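The reservation arithmetic described above can be sketched as follows; this is a simplification, since the real implementation persists the committed value in the security database and the repository via SaveFile:

```python
AES_BLOCK_SIZE = 16
RESERVATION_BYTES = 4 * 2**30                        # 4 GiB of ciphertext per reservation
RESERVATION_BLOCKS = RESERVATION_BYTES // AES_BLOCK_SIZE  # CTR counts AES blocks

def next_reservation(counter: int) -> int:
    """Counter value to commit durably *before* encrypting; every value in
    [counter, next_reservation(counter)) may then be used without another commit."""
    return counter + RESERVATION_BLOCKS

print(next_reservation(0))  # 268435456 counter values, i.e. 4 GiB worth of blocks
```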
.. _key_encryption:

@@ -210,23 +314,32 @@ For offline storage of the encryption keys they are encrypted with a
user-chosen passphrase.

-A 256 bit key encryption key (KEK) is derived from the passphrase
-using argon2_ with a random 256 bit salt. The KEK is then used
-to Encrypt-*then*-MAC a packed representation of the keys using the
-chacha20-poly1305 AEAD cipher and a constant IV == 0.
-The ciphertext is then converted to base64.
+A 256 bit key encryption key (KEK) is derived from the passphrase
+using PBKDF2-HMAC-SHA256 with a random 256 bit salt which is then used
+to Encrypt-*and*-MAC (unlike the Encrypt-*then*-MAC approach used
+otherwise) a packed representation of the keys with AES-256-CTR with a
+constant initialization vector of 0. A HMAC-SHA256 of the plaintext is
+generated using the same KEK and is stored alongside the ciphertext,
+which is converted to base64 in its entirety.

This base64 blob (commonly referred to as *keyblob*) is then stored in
the key file or in the repository config (keyfile and repokey modes
respectively).

-The use of a constant IV is secure because an identical passphrase will
-result in a different derived KEK for every key encryption due to the salt.
+This scheme, and specifically the use of a constant IV with the CTR
+mode, is secure because an identical passphrase will result in a
+different derived KEK for every key encryption due to the salt.
+
+The use of Encrypt-and-MAC instead of Encrypt-then-MAC is seen as
+uncritical (but not ideal) here, since it is combined with AES-CTR mode,
+which is not vulnerable to padding attacks.

.. seealso::

    Refer to the :ref:`key_files` section for details on the format.
+   Refer to issue :issue:`747` for suggested improvements of the encryption
+   scheme and password-based key derivation.
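The PBKDF2-based KEK derivation described on the 2.0.0.dev0 side can be sketched with Python's standard library. This is an illustration only: the iteration count and inputs below are made up, not Borg's actual parameters.

```python
import hashlib
import os

def derive_kek(passphrase: bytes, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 256-bit key encryption key (KEK) from a passphrase.

    PBKDF2-HMAC-SHA256 with a random 256-bit salt, as described above.
    The iteration count here is illustrative, not Borg's actual value.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations, dklen=32)

# The same passphrase yields a different KEK for every fresh random salt,
# which is why a constant IV for the key encryption step is acceptable.
salt_a = os.urandom(32)
salt_b = os.urandom(32)
kek_a = derive_kek(b"hunter2", salt_a)
kek_b = derive_kek(b"hunter2", salt_b)
assert kek_a != kek_b                            # fresh salt -> fresh KEK
assert kek_a == derive_kek(b"hunter2", salt_a)   # deterministic given the salt
```

Note the salt must be stored alongside the ciphertext (it is part of the keyblob), otherwise the KEK cannot be re-derived for decryption.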
Implementations used
--------------------

@@ -234,16 +347,30 @@ Implementations used
We do not implement cryptographic primitives ourselves, but rely
on widely used libraries providing them:

-- AES-OCB and CHACHA20-POLY1305 from OpenSSL 1.1 are used,
+- AES-CTR, AES-OCB, CHACHA20-POLY1305 and HMAC-SHA-256 from OpenSSL 1.1 are used,
  which is also linked into the static binaries we provide.
  We think this is not an additional risk, since we don't ever
  use OpenSSL's networking, TLS or X.509 code, but only their
  primitives implemented in libcrypto.
- SHA-256, SHA-512 and BLAKE2b from Python's hashlib_ standard library module are used.
+  Borg requires a Python built with OpenSSL support (due to PBKDF2), therefore
+  these functions are delegated to OpenSSL by Python.
-- HMAC and a constant-time comparison from Python's hmac_ standard library module are used.
-- argon2 is used via argon2-cffi.
+- HMAC, PBKDF2 and a constant-time comparison from Python's hmac_ standard
+  library module is used. While the HMAC implementation is written in Python,
+  the PBKDF2 implementation is provided by OpenSSL. The constant-time comparison
+  (``compare_digest``) is written in C and part of Python.

Implemented cryptographic constructions are:

+- AEAD modes: AES-OCB and CHACHA20-POLY1305 are straight from OpenSSL.
+- Legacy modes: Encrypt-then-MAC based on AES-256-CTR and either HMAC-SHA-256
+  or keyed BLAKE2b256 as described above under Encryption_.
+- Encrypt-and-MAC based on AES-256-CTR and HMAC-SHA-256
+  as described above under `Offline key security`_.
+- HKDF_-SHA-512

.. _Horton principle: https://en.wikipedia.org/wiki/Horton_Principle
+.. _HKDF: https://tools.ietf.org/html/rfc5869
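The HKDF construction listed above is the generic extract-then-expand KDF from RFC 5869. A stdlib-only sketch (a generic illustration, not Borg's actual code, which defaults here to SHA-512 only because the list names HKDF-SHA-512):

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes, hash_name: str = "sha512") -> bytes:
    """HKDF-Extract (RFC 5869): PRK = HMAC-Hash(salt, IKM)."""
    if not salt:
        # RFC 5869: an empty salt defaults to a string of HashLen zero bytes.
        salt = b"\x00" * hashlib.new(hash_name).digest_size
    return hmac.new(salt, ikm, hash_name).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int, hash_name: str = "sha512") -> bytes:
    """HKDF-Expand (RFC 5869): stretch PRK into `length` bytes of output."""
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        # T(n) = HMAC-Hash(PRK, T(n-1) | info | n)
        t = hmac.new(prk, t + info + bytes([counter]), hash_name).digest()
        okm += t
        counter += 1
    return okm[:length]

# Example: derive 64 bytes of keying material, bound to a context string.
prk = hkdf_extract(b"some-salt", b"input keying material")
okm = hkdf_expand(prk, b"borg-example-context", 64)
assert len(okm) == 64
```

Binding the `info` parameter to a purpose string is what lets one master secret safely yield independent sub-keys.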
.. _length extension: https://en.wikipedia.org/wiki/Length_extension_attack
.. _hashlib: https://docs.python.org/3/library/hashlib.html
.. _hmac: https://docs.python.org/3/library/hmac.html
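The hmac_ module pieces mentioned above (HMAC plus a constant-time comparison) combine into MAC verification roughly like this; key and message are made-up example values:

```python
import hashlib
import hmac

def mac(key: bytes, data: bytes) -> bytes:
    # HMAC-SHA-256 over the data, as provided by Python's hmac module.
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, tag: bytes) -> bool:
    # hmac.compare_digest compares in constant time, so an attacker cannot
    # learn the correct MAC byte by byte from timing differences.
    return hmac.compare_digest(mac(key, data), tag)

key = b"\x01" * 32                      # example key, not a real Borg key
tag = mac(key, b"some chunk data")
assert verify(key, b"some chunk data", tag)
assert not verify(key, b"tampered data", tag)
```

Using `compare_digest` instead of `==` matters because a naive comparison returns early at the first differing byte, leaking timing information.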
@@ -264,7 +391,7 @@ SSH server -- Borg RPC does not contain *any* networking
code. Networking is done by the SSH client running in a separate
process, Borg only communicates over the standard pipes (stdout,
stderr and stdin) with this process. This also means that Borg doesn't
-have to use a SSH client directly (or SSH at all). For example,
+have to directly use a SSH client (or SSH at all). For example,
``sudo`` or ``qrexec`` could be used as an intermediary.

By using the system's SSH client and not implementing a
@@ -341,12 +468,13 @@ Compression and Encryption
Combining encryption with compression can be insecure in some contexts (e.g. online protocols).

-There was some discussion about this in :issue:`1040` and for Borg some developers
+There was some discussion about this in `github issue #1040`_ and for Borg some developers
concluded this is no problem at all, some concluded this is hard and extremely slow to exploit
and thus no problem in practice.

No matter what, there is always the option not to use compression if you are worried about this.

+.. _github issue #1040: https://github.com/borgbackup/borg/issues/1040
Fingerprinting
==============

@@ -361,25 +489,19 @@ The chunks stored in the repo are the (compressed, encrypted and authenticated)
output of the chunker. The sizes of these stored chunks are influenced by the
compression, encryption and authentication.

-buzhash and buzhash64 chunker
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+buzhash chunker
++++++++++++++++

-The buzhash chunkers chunk according to the input data, the chunker's
-parameters and secret key material (which all influence the chunk boundary
+The buzhash chunker chunks according to the input data, the chunker's
+parameters and the secret chunker seed (which all influence the chunk boundary
positions).

-Secret key material:
-
-- "buzhash": chunker seed (32bits), used for XORing the hardcoded buzhash table
-- "buzhash64": bh64_key (256bits) is derived from ID key, used to cryptographically
-  generate the table.

Small files below some specific threshold (default: 512 KiB) result in only one
chunk (identical content / size as the original file), bigger files result in
multiple chunks.
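The content-defined chunking both versions describe can be sketched as a toy buzhash-style chunker: a rolling hash over a sliding window, with a chunk boundary wherever the hash matches a bit mask. The window size, mask and byte table below are illustrative values, not Borg's actual parameters, and Borg additionally enforces minimum/maximum chunk sizes and seeds or keys its table with secret material.

```python
WINDOW = 16
MASK = (1 << 11) - 1  # on random data: a boundary every ~2 KiB on average

# Stand-in byte table; Borg derives/XORs its table from secret key material.
TABLE = [(i * 2654435761) & 0xFFFFFFFF for i in range(256)]

def rol32(x: int, n: int) -> int:
    n %= 32
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def chunk(data: bytes) -> list:
    """Split data into content-defined chunks (lossless: parts rejoin to data)."""
    chunks, start, h = [], 0, 0
    for i in range(len(data)):
        h = rol32(h, 1) ^ TABLE[data[i]]                 # new byte enters window
        if i - start >= WINDOW:
            h ^= rol32(TABLE[data[i - WINDOW]], WINDOW)  # oldest byte leaves
        if i - start + 1 >= WINDOW and (h & MASK) == 0:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0                          # restart after a cut
    if start < len(data):
        chunks.append(data[start:])
    return chunks

data = b"hello world " * 1000
parts = chunk(data)
assert b"".join(parts) == data   # chunking is lossless
assert parts == chunk(data)      # and deterministic
```

Because boundaries depend only on the window contents (and the secret table/seed), identical data produces identical chunks, which is what enables deduplication, and also why the resulting chunk sizes can fingerprint content as this section discusses.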
fixed chunker
-~~~~~~~~~~~~~
++++++++++++++

This chunker yields fixed sized chunks, with optional support of a differently
sized header chunk. The last chunk is not required to have the full block size

@@ -1,8 +1,8 @@
Introduction
============

-.. This shim is here to fix the structure in the PDF
-   rendering. Without this stub, the elements in the toctree of
-   index.rst show up a level below the README file included.
+.. this shim is here to fix the structure in the PDF
+   rendering. without this stub, the elements in the toctree of
+   index.rst show up a level below the README file included

.. include:: ../README.rst

@@ -1,91 +0,0 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-ANALYZE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-analyze \- Analyzes archives.
.SH SYNOPSIS
.sp
borg [common options] analyze [options]
.SH DESCRIPTION
.sp
Analyze archives to find \(dqhot spots\(dq.
.sp
\fBborg analyze\fP relies on the usual archive matching options to select the
archives that should be considered for analysis (e.g. \fB\-a series_name\fP).
Then it iterates over all matching archives, over all contained files, and
collects information about chunks stored in all directories it encounters.
.sp
It considers chunk IDs and their plaintext sizes (we do not have the compressed
size in the repository easily available) and adds up the sizes of added and removed
chunks per direct parent directory, and outputs a list of \(dqdirectory: size\(dq.
.sp
You can use that list to find directories with a lot of \(dqactivity\(dq — maybe
some of these are temporary or cache directories you forgot to exclude.
.sp
To avoid including these unwanted directories in your backups, you can carefully
exclude them in \fBborg create\fP (for future backups) or use \fBborg recreate\fP
to recreate existing archives without them.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.

@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
+.TH BORG-BENCHMARK-CPU 1 "2022-04-14" "" "borg backup tool"
+.SH NAME
+borg-benchmark-cpu \- Benchmark CPU bound operations.
.
.nr rst2man-indent-level 0
.
@@ -27,24 +30,17 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
-.TH "BORG-BENCHMARK-CPU" "1" "2025-12-23" "" "borg backup tool"
-.SH NAME
-borg-benchmark-cpu \- Benchmark CPU-bound operations.
.SH SYNOPSIS
.sp
borg [common options] benchmark cpu [options]
.SH DESCRIPTION
.sp
-This command benchmarks miscellaneous CPU\-bound Borg operations.
+This command benchmarks misc. CPU bound borg operations.
.sp
It creates input data in memory, runs the operation and then displays throughput.
To reduce outside influence on the timings, please make sure to run this with:
-.INDENT 0.0
-.IP \(bu 2
-an otherwise as idle as possible machine
-.IP \(bu 2
-enough free memory so there will be no slow down due to paging activity
-.UNINDENT
+\- an otherwise as idle as possible machine
+\- enough free memory so there will be no slow down due to paging activity
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.

@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
+.TH BORG-BENCHMARK-CRUD 1 "2022-04-14" "" "borg backup tool"
+.SH NAME
+borg-benchmark-crud \- Benchmark Create, Read, Update, Delete for archives.
.
.nr rst2man-indent-level 0
.
@@ -27,29 +30,28 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
-.TH "BORG-BENCHMARK-CRUD" "1" "2025-12-23" "" "borg backup tool"
-.SH NAME
-borg-benchmark-crud \- Benchmark Create, Read, Update, Delete for archives.
.SH SYNOPSIS
.sp
-borg [common options] benchmark crud [options] PATH
+borg [common options] benchmark crud [options] REPOSITORY PATH
.SH DESCRIPTION
.sp
This command benchmarks borg CRUD (create, read, update, delete) operations.
.sp
-It creates input data below the given PATH and backs up this data into the given REPO.
+It creates input data below the given PATH and backups this data into the given REPO.
The REPO must already exist (it could be a fresh empty repo or an existing repo, the
command will create / read / update / delete some archives named borg\-benchmark\-crud* there.
.sp
-Make sure you have free space there; you will need about 1 GB each (+ overhead).
+Make sure you have free space there, you\(aqll need about 1GB each (+ overhead).
.sp
If your repository is encrypted and borg needs a passphrase to unlock the key, use:
.INDENT 0.0
.INDENT 3.5
.sp
-.EX
+.nf
+.ft C
BORG_PASSPHRASE=mysecret borg benchmark crud REPO PATH
-.EE
+.ft P
+.fi
.UNINDENT
.UNINDENT
.sp
@@ -86,8 +88,11 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
+.B REPOSITORY
+repository to use for benchmark (must exist)
+.TP
.B PATH
-path where to create benchmark input data
+path were to create benchmark input data
.UNINDENT
.SH SEE ALSO
.sp

@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
+.TH BORG-BENCHMARK 1 "2022-04-14" "" "borg backup tool"
+.SH NAME
+borg-benchmark \- benchmark command
.
.nr rst2man-indent-level 0
.
@@ -27,9 +30,6 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
-.TH "BORG-BENCHMARK" "1" "2025-12-23" "" "borg backup tool"
-.SH NAME
-borg-benchmark \- benchmark command
.SH SYNOPSIS
.nf
borg [common options] benchmark crud ...

@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
+.TH BORG-BREAK-LOCK 1 "2022-04-14" "" "borg backup tool"
+.SH NAME
+borg-break-lock \- Break the repository lock (e.g. in case it was left by a dead borg.
.
.nr rst2man-indent-level 0
.
@@ -27,20 +30,23 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
-.TH "BORG-BREAK-LOCK" "1" "2025-12-23" "" "borg backup tool"
-.SH NAME
-borg-break-lock \- Breaks the repository lock (for example, if it was left by a dead Borg process).
.SH SYNOPSIS
.sp
-borg [common options] break\-lock [options]
+borg [common options] break\-lock [options] [REPOSITORY]
.SH DESCRIPTION
.sp
This command breaks the repository and cache locks.
-Use with care and only when no Borg process (on any machine) is
-trying to access the cache or the repository.
+Please use carefully and only while no borg process (on any machine) is
+trying to access the Cache or the Repository.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
+.SS arguments
+.INDENT 0.0
+.TP
+.B REPOSITORY
+repository for which to break the locks
+.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP

@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
+.TH BORG-CHANGE-PASSPHRASE 1 "2017-11-25" "" "borg backup tool"
+.SH NAME
+borg-change-passphrase \- Change repository key file passphrase
.
.nr rst2man-indent-level 0
.
@@ -27,42 +30,24 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
-.TH "BORG-REPO-INFO" "1" "2025-12-23" "" "borg backup tool"
-.SH NAME
-borg-repo-info \- Show repository information.
.SH SYNOPSIS
.sp
-borg [common options] repo\-info [options]
+borg [common options] change\-passphrase [options] [REPOSITORY]
.SH DESCRIPTION
.sp
-This command displays detailed information about the repository.
+The key files used for repository encryption are optionally passphrase
+protected. This command can be used to change this passphrase.
+.sp
+Please note that this command only changes the passphrase, but not any
+secret protected by it (like e.g. encryption/MAC keys or chunker seed).
+Thus, changing the passphrase after passphrase and borg key got compromised
+does not protect future (nor past) backups to the same repository.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
-.SS options
-.INDENT 0.0
-.TP
-.B \-\-json
-format output as JSON
-.UNINDENT
-.SH EXAMPLES
-.INDENT 0.0
-.INDENT 3.5
+.SS arguments
.sp
-.EX
-$ borg repo\-info
-Repository ID: 0e85a7811022326c067acb2a7181d5b526b7d2f61b34470fb8670c440a67f1a9
-Location: /Users/tw/w/borg/path/to/repo
-Encrypted: Yes (repokey AES\-OCB)
-Cache: /Users/tw/.cache/borg/0e85a7811022326c067acb2a7181d5b526b7d2f61b34470fb8670c440a67f1a9
-Security dir: /Users/tw/.config/borg/security/0e85a7811022326c067acb2a7181d5b526b7d2f61b34470fb8670c440a67f1a9
-Original size: 152.14 MB
-Deduplicated size: 30.38 MB
-Unique chunks: 654
-Total chunks: 3302
-.EE
-.UNINDENT
-.UNINDENT
+REPOSITORY
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText. .\" Man page generated from reStructuredText.
. .
.TH BORG-CHECK 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-check \- Check repository consistency
. .
.nr rst2man-indent-level 0 .nr rst2man-indent-level 0
. .
@ -27,172 +30,146 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
.. ..
.TH "BORG-CHECK" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-check \- Checks repository consistency.
.SH SYNOPSIS .SH SYNOPSIS
.sp .sp
borg [common options] check [options] borg [common options] check [options] [REPOSITORY_OR_ARCHIVE]
.SH DESCRIPTION .SH DESCRIPTION
.sp .sp
The check command verifies the consistency of a repository and its archives. The check command verifies the consistency of a repository and the corresponding archives.
It consists of two major steps:
.INDENT 0.0
.IP 1. 3
Checking the consistency of the repository itself. This includes checking
the file magic headers, and both the metadata and data of all objects in
the repository. The read data is checked by size and hash. Bit rot and other
types of accidental damage can be detected this way. Running the repository
check can be split into multiple partial checks using \fB\-\-max\-duration\fP\&.
When checking an <ssh://> remote repository, please note that the checks run on
the server and do not cause significant network traffic.
.IP 2. 3
Checking consistency and correctness of the archive metadata and optionally
archive data (requires \fB\-\-verify\-data\fP). This includes ensuring that the
repository manifest exists, the archive metadata chunk is present, and that
all chunks referencing files (items) in the archive exist. This requires
reading archive and file metadata, but not data. To scan for archives whose
entries were lost from the archive directory, pass \fB\-\-find\-lost\-archives\fP\&.
It requires reading all data and is hence very time\-consuming.
To additionally cryptographically verify the file (content) data integrity,
pass \fB\-\-verify\-data\fP, which is even more time\-consuming.
.sp .sp
When checking archives of a remote repository, archive checks run on the client check \-\-repair is a potentially dangerous function and might lead to data loss
machine because they require decrypting data and therefore the encryption key. (for kinds of corruption it is not capable of dealing with). BE VERY CAREFUL!
.UNINDENT
.sp
Both steps can also be run independently. Pass \fB\-\-repository\-only\fP to run the
repository checks only, or pass \fB\-\-archives\-only\fP to run the archive checks
only.
.sp
The \fB\-\-max\-duration\fP option can be used to split a long\-running repository
check into multiple partial checks. After the given number of seconds, the check
is interrupted. The next partial check will continue where the previous one
stopped, until the full repository has been checked. Assuming a complete check
would take 7 hours, then running a daily check with \fB\-\-max\-duration=3600\fP
(1 hour) would result in one full repository check per week. Doing a full
repository check aborts any previous partial check; the next partial check will
restart from the beginning. With partial repository checks you can run neither
archive checks, nor enable repair mode. Consequently, if you want to use
\fB\-\-max\-duration\fP you must also pass \fB\-\-repository\-only\fP, and must not pass
\fB\-\-archives\-only\fP, nor \fB\-\-repair\fP\&.
.sp
\fBWarning:\fP Please note that partial repository checks (i.e., running with
\fB\-\-max\-duration\fP) can only perform non\-cryptographic checksum checks on the
repository files. Enabling partial repository checks excludes archive checks
for the same reason. Therefore, partial checks may be useful only with very large
repositories where a full check would take too long.
.sp
The \fB\-\-verify\-data\fP option will perform a full integrity verification (as
opposed to checking just the xxh64) of data, which means reading the
data from the repository, decrypting and decompressing it. It is a complete
cryptographic verification and hence very time\-consuming, but will detect any
accidental and malicious corruption. Tamper\-resistance is only guaranteed for
encrypted repositories against attackers without access to the keys. You cannot
use \fB\-\-verify\-data\fP with \fB\-\-repository\-only\fP\&.
.sp
The \fB\-\-find\-lost\-archives\fP option will also scan the whole repository, but
tells Borg to search for lost archive metadata. If Borg encounters any archive
metadata that does not match an archive directory entry (including
soft\-deleted archives), it means that an entry was lost.
Unless \fBborg compact\fP is called, these archives can be fully restored with
\fB\-\-repair\fP\&. Please note that \fB\-\-find\-lost\-archives\fP must read a lot of
data from the repository and is thus very time\-consuming. You cannot use
\fB\-\-find\-lost\-archives\fP with \fB\-\-repository\-only\fP\&.
.SS About repair mode
.sp
The check command is a read\-only task by default. If any corruption is found,
Borg will report the issue and proceed with checking. To actually repair the
issues found, pass \fB\-\-repair\fP\&.
.sp
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
\fB\-\-repair\fP is a \fBPOTENTIALLY DANGEROUS FEATURE\fP and might lead to data
loss! This does not just include data that was previously lost anyway, but
might include more data for kinds of corruption it is not capable of
dealing with. \fBBE VERY CAREFUL!\fP
.UNINDENT
.UNINDENT
.sp .sp
Pursuant to the previous warning it is also highly recommended to test the Pursuant to the previous warning it is also highly recommended to test the
reliability of the hardware running Borg with stress testing software. This reliability of the hardware running this software with stress testing software
especially includes storage and memory testers. Unreliable hardware might lead such as memory testers. Unreliable hardware can also lead to data loss especially
to additional data loss. when this command is run in repair mode.
.sp .sp
It is highly recommended to create a backup of your repository before running First, the underlying repository data files are checked:
in repair mode (i.e. running it with \fB\-\-repair\fP).
.sp
Repair mode will attempt to fix any corruptions found. Fixing corruptions does
not mean recovering lost data: Borg cannot magically restore data lost due to
e.g. a hardware failure. Repairing a repository means sacrificing some data
for the sake of the repository as a whole and the remaining data. Hence it is,
by definition, a potentially lossy task.
.sp
In practice, repair mode hooks into both the repository and archive checks:
.INDENT 0.0 .INDENT 0.0
.IP 1. 3 .IP \(bu 2
When checking the repository\(aqs consistency, repair mode removes corrupted For all segments, the segment magic header is checked.
objects from the repository after it did a 2nd try to read them correctly. .IP \(bu 2
.IP 2. 3 For all objects stored in the segments, all metadata (e.g. CRC and size) and
When checking the consistency and correctness of archives, repair mode might all data is read. The read data is checked by size and CRC. Bit rot and other
remove whole archives from the manifest if their archive metadata chunk is types of accidental damage can be detected this way.
corrupt or lost. Borg will also report files that reference missing chunks. .IP \(bu 2
In repair mode, if an integrity error is detected in a segment, try to recover
as many objects from the segment as possible.
.IP \(bu 2
In repair mode, make sure that the index is consistent with the data stored in
the segments.
.IP \(bu 2
If checking a remote repo via \fBssh:\fP, the repo check is executed on the server
without causing significant network traffic.
.IP \(bu 2
The repository check can be skipped using the \fB\-\-archives\-only\fP option.
.IP \(bu 2
A repository check can be time consuming. Partial checks are possible with the
\fB\-\-max\-duration\fP option.
.UNINDENT .UNINDENT
.sp .sp
If \fB\-\-repair \-\-find\-lost\-archives\fP is given, previously lost entries will Second, the consistency and correctness of the archive metadata is verified:
be recreated in the archive directory. This is only possible before .INDENT 0.0
\fBborg compact\fP would remove the archives\(aq data completely. .IP \(bu 2
Is the repo manifest present? If not, it is rebuilt from archive metadata
chunks (this requires reading and decrypting of all metadata and data).
.IP \(bu 2
Check if archive metadata chunk is present; if not, remove archive from manifest.
.IP \(bu 2
For all files (items) in the archive, for all chunks referenced by these
files, check if chunk is present. In repair mode, if a chunk is not present,
replace it with a same\-size replacement chunk of zeroes. If a previously lost
chunk reappears (e.g. via a later backup), in repair mode the all\-zero replacement
chunk will be replaced by the correct chunk. This requires reading of archive and
file metadata, but not data.
.IP \(bu 2
In repair mode, when all the archives were checked, orphaned chunks are deleted
from the repo. One cause of orphaned chunks are input file related errors (like
read errors) in the archive creation process.
.IP \(bu 2
In verify\-data mode, a complete cryptographic verification of the archive data
integrity is performed. This conflicts with \fB\-\-repository\-only\fP as this mode
only makes sense if the archive checks are enabled. The full details of this mode
are documented below.
.IP \(bu 2
If checking a remote repo via \fBssh:\fP, the archive check is executed on the
client machine because it requires decryption, and this is always done client\-side
as key access is needed.
.IP \(bu 2
The archive checks can be time consuming; they can be skipped using the
\fB\-\-repository\-only\fP option.
.UNINDENT
.sp
The \fB\-\-max\-duration\fP option can be used to split a long\-running repository check
into multiple partial checks. After the given number of seconds the check is
interrupted. The next partial check will continue where the previous one stopped,
until the complete repository has been checked. Example: Assuming a complete check took 7
hours, then running a daily check with \-\-max\-duration=3600 (1 hour) resulted in one
completed check per week.
.sp
Attention: A partial \-\-repository\-only check can only do way less checking than a full
\-\-repository\-only check: only the non\-cryptographic checksum checks on segment file
entries are done, while a full \-\-repository\-only check would also do a repo index check.
A partial check cannot be combined with the \fB\-\-repair\fP option. Partial checks
may therefore be useful only with very large repositories where a full check would take
too long.
Doing a full repository check aborts a partial check; the next partial check will restart
from the beginning.
.sp
The \fB\-\-verify\-data\fP option will perform a full integrity verification (as opposed to
checking the CRC32 of the segment) of data, which means reading the data from the
repository, decrypting and decompressing it. This is a cryptographic verification,
which will detect (accidental) corruption. For encrypted repositories it is
tamper\-resistant as well, unless the attacker has access to the keys. It is also very
slow.
.SH OPTIONS .SH OPTIONS
.sp .sp
See \fIborg\-common(1)\fP for common options of Borg commands. See \fIborg\-common(1)\fP for common options of Borg commands.
.SS options .SS arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B \-\-repository\-only .B REPOSITORY_OR_ARCHIVE
repository or archive to check consistency of
.UNINDENT
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-repository\-only
only perform repository checks only perform repository checks
.TP .TP
.B \-\-archives\-only .B \-\-archives\-only
only perform archive checks only perform archives checks
.TP .TP
.B \-\-verify\-data .B \-\-verify\-data
perform cryptographic archive data integrity verification (conflicts with \fB\-\-repository\-only\fP) perform cryptographic archive data integrity verification (conflicts with \fB\-\-repository\-only\fP)
.TP .TP
.B \-\-repair .B \-\-repair
attempt to repair any inconsistencies found attempt to repair any inconsistencies found
.TP .TP
.B \-\-find\-lost\-archives .B \-\-save\-space
attempt to find lost archives work slower, but using less space
.TP .TP
.BI \-\-max\-duration \ SECONDS .BI \-\-max\-duration \ SECONDS
perform only a partial repository check for at most SECONDS seconds (default: unlimited) do only a partial repo check for max. SECONDS seconds (Default: unlimited)
.UNINDENT
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN .BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq. only consider archive names starting with this prefix.
.TP
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp Comma\-separated list of sorting keys; valid keys are: timestamp, name, id; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied consider first N archives after other filters were applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied consider last N archives after other filters were applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.UNINDENT
.SH SEE ALSO
.sp
@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-COMMON 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-common \- Common options of Borg commands
.
.nr rst2man-indent-level 0
.
@ -27,74 +30,77 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-COMMON" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-common \- Common options of Borg commands
.SH SYNOPSIS
.INDENT 0.0
.TP
.B \-h\fP,\fB \-\-help
show this help message and exit
.TP
.B \-\-critical
work on log level CRITICAL
.TP
.B \-\-error
work on log level ERROR
.TP
.B \-\-warning
work on log level WARNING (default)
.TP
.B \-\-info\fP,\fB \-v\fP,\fB \-\-verbose
work on log level INFO
.TP
.B \-\-debug
enable debug output, work on log level DEBUG
.TP
.BI \-\-debug\-topic \ TOPIC
enable TOPIC debugging (can be specified multiple times). The logger path is borg.debug.<TOPIC> if TOPIC is not fully qualified.
.TP
.B \-p\fP,\fB \-\-progress
show progress information
.TP
.B \-\-iec
format using IEC units (1KiB = 1024B)
.TP
.B \-\-log\-json
Output one JSON object per log line instead of formatted text.
.TP
.BI \-\-lock\-wait \ SECONDS
wait at most SECONDS for acquiring a repository/cache lock (default: 10). wait at most SECONDS for acquiring a repository/cache lock (default: 1).
.TP
.B \-\-show\-version .B \-\-bypass\-lock
Bypass locking mechanism
.TP
.B \-\-show\-version
show/log the borg version
.TP
.B \-\-show\-rc
show/log the return code (rc)
.TP
.BI \-\-umask \ M
set umask to M (local only, default: 0077)
.TP
.BI \-\-remote\-path \ PATH
use PATH as borg executable on the remote (default: \(dqborg\(dq) use PATH as borg executable on the remote (default: "borg")
.TP
.BI \-\-remote\-ratelimit \ RATE
deprecated, use \fB\-\-upload\-ratelimit\fP instead
.TP
.BI \-\-upload\-ratelimit \ RATE
set network upload rate limit in kiByte/s (default: 0=unlimited)
.TP
.BI \-\-remote\-buffer \ UPLOAD_BUFFER
deprecated, use \fB\-\-upload\-buffer\fP instead
.TP
.BI \-\-upload\-buffer \ UPLOAD_BUFFER
set network upload buffer size in MiB. (default: 0=no buffer)
.TP
.B \-\-consider\-part\-files
treat part files like normal files (e.g. to list/extract them)
.TP
.BI \-\-debug\-profile \ FILE
Write execution profile in Borg format into FILE. For local use a Python\-compatible file can be generated by suffixing FILE with \(dq.pyprof\(dq. Write execution profile in Borg format into FILE. For local use a Python\-compatible file can be generated by suffixing FILE with ".pyprof".
.TP
.BI \-\-rsh \ RSH
Use this command to connect to the \(aqborg serve\(aq process (default: \(aqssh\(aq)
.TP
.BI \-\-socket \ PATH
Use UNIX DOMAIN (IPC) socket at PATH for client/server communication with socket: protocol.
.TP
.BI \-r \ REPO\fR,\fB \ \-\-repo \ REPO
repository to use
.UNINDENT
.SH SEE ALSO
.sp
@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-COMPACT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-compact \- compact segment files in the repository
.
.nr rst2man-indent-level 0
.
@ -27,77 +30,64 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-COMPACT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-compact \- Collects garbage in the repository.
.SH SYNOPSIS
.sp
borg [common options] compact [options] borg [common options] compact [options] [REPOSITORY]
.SH DESCRIPTION
.sp
Free repository space by deleting unused chunks. This command frees repository space by compacting segments.
.sp
\fBborg compact\fP analyzes all existing archives to determine which repository Use this regularly to avoid running out of space \- you do not need to use this
objects are actually used (referenced). It then deletes all unused objects after each borg command though. It is especially useful after deleting archives,
from the repository to free space. because only compaction will really free repository space.
.sp
Unused objects may result from: borg compact does not need a key, so it is possible to invoke it from the
.INDENT 0.0 client or also from the server.
.IP \(bu 2
use of \fBborg delete\fP or \fBborg prune\fP
.IP \(bu 2
interrupted backups (consider retrying the backup before running compact)
.IP \(bu 2
backups of source files that encountered an I/O error mid\-transfer and were skipped
.IP \(bu 2
corruption of the repository (e.g., the archives directory lost entries; see notes below)
.UNINDENT
.sp
You usually do not want to run \fBborg compact\fP after every write operation, but Depending on the amount of segments that need compaction, it may take a while,
either regularly (e.g., once a month, possibly together with \fBborg check\fP) or so consider using the \fB\-\-progress\fP option.
when disk space needs to be freed.
.sp
\fBImportant:\fP A segment is compacted if the amount of saved space is above the percentage value
given by the \fB\-\-threshold\fP option. If omitted, a threshold of 10% is used.
When using \fB\-\-verbose\fP, borg will output an estimate of the freed space.
.sp
After compacting, it is no longer possible to use \fBborg undelete\fP to recover After upgrading borg (server) to 1.2+, you can use \fBborg compact \-\-cleanup\-commits\fP
previously soft\-deleted archives. to clean up the numerous 17byte commit\-only segments that borg 1.1 did not clean up
due to a bug. It is enough to do that once per repository. After cleaning up the
commits, borg will also do a normal compaction.
.sp
\fBborg compact\fP might also delete data from archives that were \(dqlost\(dq due to See \fIseparate_compaction\fP in Additional Notes for more details.
archives directory corruption. Such archives could potentially be restored with
\fBborg check \-\-find\-lost\-archives [\-\-repair]\fP, which is slow. You therefore
might not want to do that unless there are signs of lost archives (e.g., when
seeing fatal errors when creating backups or when archives are missing in
\fBborg repo\-list\fP).
.sp
When using the \fB\-\-stats\fP option, borg will internally list all repository
objects to determine their existence and stored size. It will build a fresh
chunks index from that information and cache it in the repository. For some
types of repositories, this might be very slow. It will tell you the sum of
stored object sizes, before and after compaction.
.sp
Without \fB\-\-stats\fP, borg will rely on the cached chunks index to determine
existing object IDs (but there is no stored size information in the index,
thus it cannot compute before/after compaction size statistics).
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS options .SS arguments
.INDENT 0.0
.TP
.B \-n\fP,\fB \-\-dry\-run .B REPOSITORY
do not change the repository repository to compact
.UNINDENT
.SS optional arguments
.INDENT 0.0
.TP
.B \-s\fP,\fB \-\-stats .B \-\-cleanup\-commits
print statistics (might be much slower) cleanup commit\-only 17\-byte segment files
.TP
.BI \-\-threshold \ PERCENT
set minimum threshold for saved space in PERCENT (Default: 10)
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX .nf
# Compact segments and free repository disk space .ft C
$ borg compact # compact segments and free repo disk space
.EE $ borg compact /path/to/repo
# same as above plus clean up 17byte commit\-only segments
$ borg compact \-\-cleanup\-commits /path/to/repo
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO .SH SEE ALSO
@ -1,76 +0,0 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-COMPLETION" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-completion \- Output shell completion script for the given shell.
.SH SYNOPSIS
.sp
borg [common options] completion [options] SHELL
.SH DESCRIPTION
.sp
This command prints a shell completion script for the given shell.
.sp
Please note that for some dynamic completions (like archive IDs), the shell
completion script will call borg to query the repository. This will work best
if that call can be made without prompting for user input, so you may want to
set BORG_REPO and BORG_PASSPHRASE environment variables.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B SHELL
shell to generate completion for (one of: %(choices)s)
.UNINDENT
.SH EXAMPLES
.sp
To activate completion in your current shell session, evaluate the output
of this command. To enable it persistently, add the corresponding line to
your shell\(aqs startup file.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Bash (in ~/.bashrc)
eval \(dq$(borg completion bash)\(dq
# Zsh (in ~/.zshrc)
eval \(dq$(borg completion zsh)\(dq
.EE
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-COMPRESSION 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-compression \- Details regarding compression
.
.nr rst2man-indent-level 0
.
@ -27,19 +30,16 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-COMPRESSION" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-compression \- Details regarding compression
.SH DESCRIPTION
.sp
It is no problem to mix different compression methods in one repository, It is no problem to mix different compression methods in one repo,
deduplication is done on the source data chunks (not on the compressed
or encrypted data).
.sp
If some specific chunk was once compressed and stored into the repository, creating If some specific chunk was once compressed and stored into the repo, creating
another backup that also uses this chunk will not change the stored chunk.
So if you use different compression specs for the backups, whichever stores a
chunk first determines its compression. See also \fBborg recreate\fP\&. chunk first determines its compression. See also borg recreate.
.sp
Compression is lz4 by default. If you want something else, you have to specify what you want.
.sp
@ -53,19 +53,20 @@ Do not compress.
Use lz4 compression. Very high speed, very low compression. (default)
.TP
.B zstd[,L]
Use zstd (\(dqzstandard\(dq) compression, a modern wide\-range algorithm. Use zstd ("zstandard") compression, a modern wide\-range algorithm.
If you do not explicitly give the compression level L (ranging from 1
to 22), it will use level 3.
Archives compressed with zstd are not compatible with borg < 1.1.4.
.TP
.B zlib[,L]
Use zlib (\(dqgz\(dq) compression. Medium speed, medium compression. Use zlib ("gz") compression. Medium speed, medium compression.
If you do not explicitly give the compression level L (ranging from 0
to 9), it will use level 6.
Giving level 0 (means \(dqno compression\(dq, but still has zlib protocol Giving level 0 (means "no compression", but still has zlib protocol
overhead) is usually pointless, you better use \(dqnone\(dq compression. overhead) is usually pointless, you better use "none" compression.
.TP
.B lzma[,L]
Use lzma (\(dqxz\(dq) compression. Low speed, high compression. Use lzma ("xz") compression. Low speed, high compression.
If you do not explicitly give the compression level L (ranging from 0
to 9), it will use level 6.
Giving levels above 6 is pointless and counterproductive because it does
@ -75,103 +76,55 @@ lots of CPU cycles and RAM.
.B auto,C[,L]
Use a built\-in heuristic to decide per chunk whether to compress or not.
The heuristic tries with lz4 whether the data is compressible.
For incompressible data, it will not use compression (uses \(dqnone\(dq). For incompressible data, it will not use compression (uses "none").
For compressible data, it uses the given C[,L] compression \- with C[,L]
being any valid compression specifier. This can be helpful for media files being any valid compression specifier.
which often cannot be compressed much more.
.TP
.B obfuscate,SPEC,C[,L]
Use compressed\-size obfuscation to make fingerprinting attacks based on
the observable stored chunk size more difficult. Note: the observable stored chunk size more difficult.
.INDENT 7.0 Note:
.IP \(bu 2 \- you must combine this with encryption or it won\(aqt make any sense.
You must combine this with encryption, or it won\(aqt make any sense. \- your repo size will be bigger, of course.
.IP \(bu 2
Your repo size will be bigger, of course.
.IP \(bu 2
A chunk is limited by the constant \fBMAX_DATA_SIZE\fP (cur. ~20MiB).
.UNINDENT
.sp
The SPEC value determines how the size obfuscation works: The SPEC value will determine how the size obfuscation will work:
.sp
\fIRelative random reciprocal size variation\fP (multiplicative)
.sp .sp
Relative random reciprocal size variation:
Size will increase by a factor, relative to the compressed data size.
Smaller factors are used often, larger factors rarely. Smaller factors are often used, larger factors rarely.
1: factor 0.01 .. 100.0
2: factor 0.1 .. 1000.0
3: factor 1.0 .. 10000.0
4: factor 10.0 .. 100000.0
5: factor 100.0 .. 1000000.0
6: factor 1000.0 .. 10000000.0
.sp .sp
Available factors: Add a randomly sized padding up to the given size:
.INDENT 7.0 110: 1kiB
.INDENT 3.5
.sp
.EX
1: 0.01 .. 100
2: 0.1 .. 1,000
3: 1 .. 10,000
4: 10 .. 100,000
5: 100 .. 1,000,000
6: 1,000 .. 10,000,000
.EE
.UNINDENT
.UNINDENT
.sp
Example probabilities for SPEC \fB1\fP:
.INDENT 7.0
.INDENT 3.5
.sp
.EX
90 % 0.01 .. 0.1
9 % 0.1 .. 1
0.9 % 1 .. 10
0.09% 10 .. 100
.EE
.UNINDENT
.UNINDENT
.sp
\fIRandomly sized padding up to the given size\fP (additive)
.INDENT 7.0
.INDENT 3.5
.sp
.EX
110: 1kiB (2 ^ (SPEC \- 100))
\&...
120: 1MiB
\&...
123: 8MiB (max.)
.EE
.UNINDENT
.UNINDENT
.sp
\fIPadmé padding\fP (deterministic)
.INDENT 7.0
.INDENT 3.5
.sp
.EX
250: pads to sums of powers of 2, max 12% overhead
.EE
.UNINDENT
.UNINDENT
.sp
Uses the Padmé algorithm to deterministically pad the compressed size to a sum of
powers of 2, limiting overhead to 12%. See <https://lbarman.ch/blog/padme/> for details.
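The Padmé rounding described above can be sketched numerically. The helper below is a hypothetical illustration (the name padme_size and the sample size are not part of borg) of the published Padmé rule: with E = floor(log2 n) and S = the bit length of E, the compressed size n is rounded up so that its lowest E \- S bits are zero, which bounds the padding overhead at roughly 12%.

```shell
# Hypothetical sketch of Padmé padding, not borg's actual implementation.
padme_size() {
  local n=$1 e=0 t s=0 z mask
  t=$n
  while [ $((t >> 1)) -gt 0 ]; do t=$((t >> 1)); e=$((e + 1)); done  # e = floor(log2 n)
  t=$e
  while [ "$t" -gt 0 ]; do t=$((t >> 1)); s=$((s + 1)); done         # s = bit length of e
  z=$((e - s))                               # number of low bits rounded away
  [ "$z" -lt 0 ] && z=0
  mask=$(( (1 << z) - 1 ))
  echo $(( (n + mask) & ~mask ))             # round n up to the next multiple of 2^z
}
padme_size 1000000   # a 1000000-byte chunk pads to 1015808 bytes (~1.6% overhead)
```

Checking a couple of values against the definition: padme(1000) = 1024 and padme(1000000) = 1015808, both well under the 12% bound.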
.UNINDENT
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX .nf
borg create \-\-compression lz4 \-\-repo REPO ARCHIVE data .ft C
borg create \-\-compression zstd \-\-repo REPO ARCHIVE data borg create \-\-compression lz4 REPO::ARCHIVE data
borg create \-\-compression zstd,10 \-\-repo REPO ARCHIVE data borg create \-\-compression zstd REPO::ARCHIVE data
borg create \-\-compression zlib \-\-repo REPO ARCHIVE data borg create \-\-compression zstd,10 REPO::ARCHIVE data
borg create \-\-compression zlib,1 \-\-repo REPO ARCHIVE data borg create \-\-compression zlib REPO::ARCHIVE data
borg create \-\-compression auto,lzma,6 \-\-repo REPO ARCHIVE data borg create \-\-compression zlib,1 REPO::ARCHIVE data
borg create \-\-compression auto,lzma,6 REPO::ARCHIVE data
borg create \-\-compression auto,lzma ... borg create \-\-compression auto,lzma ...
borg create \-\-compression obfuscate,110,none ... borg create \-\-compression obfuscate,3,none ...
borg create \-\-compression obfuscate,3,auto,zstd,10 ...
borg create \-\-compression obfuscate,2,zstd,6 ...
borg create \-\-compression obfuscate,250,zstd,3 ... .ft P
.EE .fi
.UNINDENT
.UNINDENT
.SH AUTHOR
@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-CONFIG 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-config \- get, set, and delete values in a repository or cache config file
.
.nr rst2man-indent-level 0
.
@ -27,22 +30,19 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-CONFIG" 1 "2024-07-19" "" "borg backup tool"
.SH NAME
borg-config \- get, set, and delete values in a repository or cache config file
.SH SYNOPSIS
.sp
borg [common options] config [options] [NAME] [VALUE] borg [common options] config [options] [REPOSITORY] [NAME] [VALUE]
.SH DESCRIPTION
.sp
This command gets and sets options in a local repository or cache config file.
For security reasons, this command only works on local repositories.
.sp
To delete a config value entirely, use \fB\-\-delete\fP\&. To list the values
of the configuration file or the default values, use \fB\-\-list\fP\&. To get an existing of the configuration file or the default values, use \fB\-\-list\fP\&. To get and existing
key, pass only the key name. To set a key, pass both the key name and
the new value. Keys can be specified in the format \(dqsection.name\(dq or the new value. Keys can be specified in the format "section.name" or
simply \(dqname\(dq; the section will default to \(dqrepository\(dq and \(dqcache\(dq for simply "name"; the section will default to "repository" and "cache" for
the repo and cache configs, respectively.
.sp
By default, borg config manipulates the repository config file. Using \fB\-\-cache\fP
@ -53,6 +53,9 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B REPOSITORY
repository to configure
.TP
.B NAME
name of config key
.TP
@ -62,13 +65,13 @@ new value for key
.SS optional arguments
.INDENT 0.0
.TP
.B \-c\fP,\fB \-\-cache
get and set values from the repo cache
.TP
.B \-d\fP,\fB \-\-delete
delete the key from the config file
.TP
.B \-l\fP,\fB \-\-list
list the configuration of the repo
.UNINDENT
.SH EXAMPLES
@ -87,13 +90,13 @@ making changes!
.nf
.ft C
# find cache directory
$ cd ~/.cache/borg/$(borg config id) $ cd ~/.cache/borg/$(borg config /path/to/repo id)
# reserve some space
$ borg config additional_free_space 2G $ borg config /path/to/repo additional_free_space 2G
# make a repo append\-only
$ borg config append_only 1 $ borg config /path/to/repo append_only 1
.ft P
.fi
.UNINDENT
@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-CREATE 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-create \- Create new archive
.
.nr rst2man-indent-level 0
.
@ -27,41 +30,33 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-CREATE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-create \- Creates a new archive.
.SH SYNOPSIS
.sp
borg [common options] create [options] NAME [PATH...] borg [common options] create [options] ARCHIVE [PATH...]
.SH DESCRIPTION
.sp
This command creates a backup archive containing all files found while recursively
traversing all specified paths. Paths are added to the archive as they are given, traversing all paths specified. Paths are added to the archive as they are given,
which means that if relative paths are desired, the command must be run from the correct that means if relative paths are desired, the command has to be run from the correct
directory.
.sp
The slashdot hack in paths (recursion roots) is triggered by using \fB/./\fP: When giving \(aq\-\(aq as path, borg will read data from standard input and create a
\fB/this/gets/stripped/./this/gets/archived\fP means to process that fs object, but file \(aqstdin\(aq in the created archive from that data. In some cases it\(aqs more
strip the prefix on the left side of \fB\&./\fP from the archived items (in this case, appropriate to use \-\-content\-from\-command, however. See section \fIReading from
\fBthis/gets/archived\fP will be the path in the archived item). stdin\fP below for details.
.sp
When specifying \(aq\-\(aq as a path, borg will read data from standard input and create a
file named \(aqstdin\(aq in the created archive from that data. In some cases, it is more
appropriate to use \-\-content\-from\-command. See the section \fIReading from stdin\fP
below for details.
.sp
The archive will consume almost no disk space for files or parts of files that
have already been stored in other archives.
.sp
The archive name does not need to be unique; you can and should use the same The archive name needs to be unique. It must not end in \(aq.checkpoint\(aq or
name for a series of archives. The unique archive identifier is its ID (hash), \(aq.checkpoint.N\(aq (with N being a number), because these names are used for
and you can abbreviate the ID as long as it is unique. checkpoints and treated in special ways.
.sp
In the archive name, you may use the following placeholders:
{now}, {utcnow}, {fqdn}, {hostname}, {user} and some others.
.sp
Backup speed is increased by not reprocessing files that are already part of
existing archives and were not modified. The detection of unmodified files is existing archives and weren\(aqt modified. The detection of unmodified files is
done by comparing multiple file metadata values with previous values kept in
the files cache.
.sp
@ -96,29 +91,15 @@ ctime vs. mtime: safety vs. speed
.INDENT 0.0
.IP \(bu 2
ctime is a rather safe way to detect changes to a file (metadata and contents)
as it cannot be set from userspace. But a metadata\-only change will already as it can not be set from userspace. But, a metadata\-only change will already
update the ctime, so there might be some unnecessary chunking/hashing even
without content changes. Some filesystems do not support ctime (change time).
E.g. doing a chown or chmod to a file will change its ctime.
.IP \(bu 2
mtime usually works and only updates if file contents were changed. But mtime
can be arbitrarily set from userspace, e.g., to set mtime back to the same value can be arbitrarily set from userspace, e.g. to set mtime back to the same value
it had before a content change happened. This can be used maliciously as well as
well\-meant, but in both cases mtime\-based cache modes can be problematic. well\-meant, but in both cases mtime based cache modes can be problematic.
.UNINDENT
.INDENT 0.0
.TP
.B The \fB\-\-files\-changed\fP option controls how Borg detects if a file has changed during backup:
.INDENT 7.0
.IP \(bu 2
ctime (default): Use ctime to detect changes. This is the safest option.
.IP \(bu 2
mtime: Use mtime to detect changes.
.IP \(bu 2
disabled: Disable the \(dqfile has changed while we backed it up\(dq detection completely.
This is not recommended unless you know what you\(aqre doing, as it could lead to
inconsistent backups if files change during the backup process.
.UNINDENT
.UNINDENT
.sp
The mount points of filesystems or filesystem snapshots should be the same for every
@ -126,13 +107,13 @@ creation of a new archive to ensure fast operation. This is because the file cac
is used to determine changed files quickly uses absolute filenames.
If this is not possible, consider creating a bind mount to a stable location.
.sp
The \fB\-\-progress\fP option shows (from left to right) Original and (uncompressed) The \fB\-\-progress\fP option shows (from left to right) Original, Compressed and Deduplicated
deduplicated size (O and U respectively), then the Number of files (N) processed so far, (O, C and D, respectively), then the Number of files (N) processed so far, followed by
followed by the currently processed path. the currently processed path.
.sp
When using \fB\-\-stats\fP, you will get some statistics about how much data was
added \- the \(dqThis Archive\(dq deduplicated size there is most interesting as that is added \- the "This Archive" deduplicated size there is most interesting as that is
how much your repository will grow. Please note that the \(dqAll archives\(dq stats refer to how much your repository will grow. Please note that the "All archives" stats refer to
the state after creation. Also, the \fB\-\-stats\fP and \fB\-\-dry\-run\fP options are mutually the state after creation. Also, the \fB\-\-stats\fP and \fB\-\-dry\-run\fP options are mutually
exclusive because the data is not actually compressed and deduplicated during a dry run. exclusive because the data is not actually compressed and deduplicated during a dry run.
.sp .sp
@ -145,55 +126,58 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.TP
.B PATH
paths to archive
.UNINDENT
.SS options
.INDENT 0.0
.TP
.B \-n\fP,\fB \-\-dry\-run
do not create a backup archive
.TP
.B \-s\fP,\fB \-\-stats
print statistics for the created archive
.TP
.B \-\-list
output a verbose list of items (files, dirs, ...)
.TP
.BI \-\-filter \ STATUSCHARS
only display items with the given status characters (see description)
.TP
.B \-\-json
output stats as JSON. Implies \fB\-\-stats\fP\&.
.TP
.BI \-\-stdin\-name \ NAME
use NAME in archive for stdin data (default: \(aqstdin\(aq)
.TP
.BI \-\-stdin\-user \ USER
set user USER in archive for stdin data (default: do not store user/uid)
.TP
.BI \-\-stdin\-group \ GROUP
set group GROUP in archive for stdin data (default: do not store group/gid)
.TP
.BI \-\-stdin\-mode \ M
set mode to M in archive for stdin data (default: 0660)
.TP
.B \-\-content\-from\-command
interpret PATH as a command and store its stdout. See also the section \(aqReading from stdin\(aq below.
.TP
.B \-\-paths\-from\-stdin
read DELIM\-separated list of paths to back up from stdin. All control is external: it will back up all files given \- no more, no less.
.TP
.B \-\-paths\-from\-command
interpret PATH as command and treat its output as \fB\-\-paths\-from\-stdin\fP
.TP
.BI \-\-paths\-delimiter \ DELIM
set path delimiter for \fB\-\-paths\-from\-stdin\fP and \fB\-\-paths\-from\-command\fP (default: \fB\en\fP)
.UNINDENT
.SS Include/Exclude options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
include/exclude paths matching PATTERN
.TP
.BI \-\-patterns\-from \ PATTERNFILE
read include/exclude patterns from PATTERNFILE, one per line
.TP
.B \-\-exclude\-caches
exclude directories that contain a CACHEDIR.TAG file ( <http://www.bford.info/cachedir/spec.html> )
.TP
.BI \-\-exclude\-if\-present \ NAME
exclude directories that are tagged by containing a filesystem object with the given NAME
.TP
.B \-\-keep\-exclude\-tags
if tag objects are specified with \fB\-\-exclude\-if\-present\fP, do not omit the tag objects themselves from the backup archive
.TP
.B \-\-exclude\-nodump
exclude files flagged NODUMP
.UNINDENT
.SS Filesystem options
.INDENT 0.0
.TP
.B \-x\fP,\fB \-\-one\-file\-system
stay in the same file system and do not store mount points of other file systems \- this might behave differently from your expectations, see the description below.
.TP
.B \-\-numeric\-ids
only store numeric user and group identifiers
.TP
.B \-\-atime
do store atime into archive
.TP
.B \-\-noctime
do not store ctime into archive
.TP
.B \-\-nobirthtime
do not store birthtime (creation date) into archive
.TP
.B \-\-noflags
do not read and store flags (e.g. NODUMP, IMMUTABLE) into archive
.TP
.B \-\-noacls
do not read and store ACLs into archive
.TP
.B \-\-noxattrs
do not read and store xattrs into archive
.TP
.B \-\-sparse
detect sparse holes in input (supported only by fixed chunker)
.TP
.BI \-\-files\-cache \ MODE
operate files cache in MODE. default: ctime,size,inode
.TP
.BI \-\-files\-changed \ MODE
specify how to detect if a file has changed during backup (ctime, mtime, disabled). default: ctime
.TP
.B \-\-read\-special
open and read block and char device files as well as FIFOs as if they were regular files. Also follows symlinks pointing to these kinds of files.
.UNINDENT
.SS Archive options
.INDENT 0.0
.TP
.BI \-\-comment \ COMMENT
add a comment text to the archive
.TP
.BI \-\-timestamp \ TIMESTAMP
manually specify the archive creation date/time (yyyy\-mm\-ddThh:mm:ss[(+|\-)HH:MM] format, (+|\-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
.TP
.BI \-\-chunker\-params \ PARAMS
specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095
.TP
.BI \-C \ COMPRESSION\fR,\fB \ \-\-compression \ COMPRESSION
select compression algorithm, see the output of the \(dqborg help compression\(dq command for details.
.UNINDENT
.SH EXAMPLES
.sp
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
Archive series and performance: In Borg 2, archives that share the same NAME form an \(dqarchive series\(dq.
The files cache is maintained per series. For best performance on repeated backups, reuse the same
NAME every time you run \fBborg create\fP for the same dataset (e.g. always use \fBmy\-documents\fP).
Frequently changing the NAME (for example by embedding date/time like \fBmy\-documents\-2025\-11\-10\fP)
prevents cache reuse and forces Borg to re\-scan and re\-chunk files, which can make incremental
backups vastly slower. Only vary the NAME if you intentionally want to start a new series.
.sp
If you must vary the archive name but still want cache reuse across names, see the advanced
knobs described in \fIupgradenotes2\fP (\fBBORG_FILES_CACHE_SUFFIX\fP and \fBBORG_FILES_CACHE_TTL\fP),
but the recommended approach is to keep a stable NAME per series.
.UNINDENT
.UNINDENT
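As a shell sketch of the naming advice above (the names are illustrative; the snippet only builds the two kinds of NAME strings and does not call borg):

```shell
# Stable series NAME: every `borg create` run reuses it, so the files cache
# is reused too (recommended).
series_name="my-documents"

# Date-stamped NAME: each day silently starts a new archive series and
# defeats the files cache (avoid unless a new series is intended).
dated_name="my-documents-$(date +%Y-%m-%d)"

echo "$series_name"
echo "$dated_name"
```

With a stable NAME, borg itself keeps the archives of the series apart, so there is no need to encode the date into the name.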
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Backup ~/Documents into an archive named \(dqmy\-documents\(dq
$ borg create my\-documents ~/Documents

# same, but list all files as we process them
$ borg create \-\-list my\-documents ~/Documents

# Backup /mnt/disk/docs, but strip path prefix using the slashdot hack
$ borg create \-\-repo /path/to/repo docs /mnt/disk/./docs

# Backup ~/Documents and ~/src but exclude pyc files
$ borg create my\-files \e
    ~/Documents \e
    ~/src \e
    \-\-exclude \(aq*.pyc\(aq

# Backup home directories excluding image thumbnails (i.e. only
# /home/<one directory>/.thumbnails is excluded, not /home/*/*/.thumbnails etc.)
$ borg create my\-files /home \-\-exclude \(aqsh:home/*/.thumbnails\(aq

# Back up the root filesystem into an archive named \(dqroot\-archive\(dq
# Use zlib compression (good, but slow) \- default is LZ4 (fast, low compression ratio)
$ borg create \-C zlib,6 \-\-one\-file\-system root\-archive /

# Backup into an archive name like FQDN\-root
$ borg create \(aq{fqdn}\-root\(aq /

# Back up a remote host locally (\(dqpull\(dq style) using SSHFS
$ mkdir sshfs\-mount
$ sshfs root@example.com:/ sshfs\-mount
$ cd sshfs\-mount
$ borg create example.com\-root .
$ cd ..
$ fusermount \-u sshfs\-mount

# Make a big effort in fine\-grained deduplication (big chunk management
# overhead, needs a lot of RAM and disk space; see the formula in the internals docs):
$ borg create \-\-chunker\-params buzhash,10,23,16,4095 small /smallstuff

# Backup a raw device (must not be active/in use/mounted at that time)
$ borg create \-\-read\-special \-\-chunker\-params fixed,4194304 my\-sdx /dev/sdX

# Backup a sparse disk image (must not be active/in use/mounted at that time)
$ borg create \-\-sparse \-\-chunker\-params fixed,4194304 my\-disk my\-disk.raw

# No compression (none)
$ borg create \-\-compression none arch ~

# Super fast, low compression (lz4, default)
$ borg create arch ~

# Less fast, higher compression (zlib, N = 0..9)
$ borg create \-\-compression zlib,N arch ~

# Even slower, even higher compression (lzma, N = 0..9)
$ borg create \-\-compression lzma,N arch ~

# Only compress compressible data with lzma,N (N = 0..9)
$ borg create \-\-compression auto,lzma,N arch ~

# Use the short hostname and username as the archive name
$ borg create \(aq{hostname}\-{user}\(aq ~

# Back up relative paths by moving into the correct directory first
$ cd /home/user/Documents
# The root directory of the archive will be \(dqprojectA\(dq
$ borg create \(aqdaily\-projectA\(aq projectA

# Use external command to determine files to archive
# Use \-\-paths\-from\-stdin with find to back up only files less than 1 MB in size
$ find ~ \-size \-1000k | borg create \-\-paths\-from\-stdin small\-files\-only
# Use \-\-paths\-from\-command with find to back up files from only a given user
$ borg create \-\-paths\-from\-command joes\-files \-\- find /srv/samba/shared \-user joe
# Use \-\-paths\-from\-stdin with \-\-paths\-delimiter (for example, for filenames with newlines in them)
$ find ~ \-size \-1000k \-print0 | borg create \e
    \-\-paths\-from\-stdin \e
    \-\-paths\-delimiter \(dq\e0\(dq \e
    smallfiles\-handle\-newline
.EE
.UNINDENT
.UNINDENT
.SH NOTES
through using the \fB\-\-keep\-exclude\-tags\fP option.
The \fB\-x\fP or \fB\-\-one\-file\-system\fP option excludes directories that are mountpoints (and everything in them).
It detects mountpoints by comparing the device number from the output of \fBstat()\fP of the directory and its
parent directory. Specifically, it excludes directories for which \fBstat()\fP reports a device number different
from the device number of their parent.
In general: be aware that there are directories with device number different from their parent, which the kernel
does not consider a mountpoint and also the other way around.
Linux examples for this are bind mounts (possibly same device number, but always a mountpoint) and ALL
subvolumes of a btrfs (different device number from parent but not necessarily a mountpoint).
macOS examples are the apfs mounts of a typical macOS installation.
Therefore, when using \fB\-\-one\-file\-system\fP, you should double\-check that the backup works as intended.
.SS Item flags
.sp
\fB\-\-list\fP outputs a list of all files, directories and other file system items it considered
(no matter whether they had content changes or not).
If you are interested only in a subset of that output, you can give e.g.
\fB\-\-filter=AME\fP and it will only show items with the given status characters (see the \fB\-\-filter\fP option description
below).
.sp
An uppercase character represents the status of a regular file relative to the
\(dqfiles\(dq cache (not relative to the repo \-\- this is an issue if the files cache
is not used). Metadata is stored in any case and for \(aqA\(aq and \(aqM\(aq also new data
chunks are stored. For \(aqU\(aq all data chunks refer to already existing chunks.
.INDENT 0.0
borg usually just stores their metadata:
.IP \(bu 2
\(aqc\(aq = char device
.IP \(bu 2
\(aqh\(aq = regular file, hard link (to already seen inodes)
.IP \(bu 2
\(aqs\(aq = symlink
.IP \(bu 2
\(aqf\(aq = fifo
.UNINDENT
.sp
Other flags used include:
.INDENT 0.0
.IP \(bu 2
\(aq+\(aq = included, item would be backed up (if not in dry\-run mode)
.IP \(bu 2
\(aq\-\(aq = excluded, item would not be / was not backed up
.IP \(bu 2
\(aqi\(aq = backup data was read from standard input (stdin)
.IP \(bu 2
\(aq?\(aq = missing status code (if you see this, please file a bug report!)
.UNINDENT
.SS Reading backup data from stdin
.sp
There are two methods to read from stdin. Either specify \fB\-\fP as path and
pipe directly to borg:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
backup\-vm \-\-id myvm \-\-stdout | borg create \-\-repo REPO ARCHIVE \-
.EE
.UNINDENT
.UNINDENT
.sp
to the command:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
borg create \-\-content\-from\-command \-\-repo REPO ARCHIVE \-\- backup\-vm \-\-id myvm \-\-stdout
.EE
.UNINDENT
.UNINDENT
.sp
By default, the content read from stdin is stored in a file called \(aqstdin\(aq.
Use \fB\-\-stdin\-name\fP to change the name.
.SS Feeding all file paths from externally
.sp
Usually, you give a starting path (recursion root) to borg and then borg
automatically recurses, finds and backs up all fs objects contained in
there (optionally considering include/exclude rules).
.sp
If you need more control and you want to give every single fs object path
to borg (maybe implementing your own recursion or your own rules), you can use
\fB\-\-paths\-from\-stdin\fP or \fB\-\-paths\-from\-command\fP (with the latter, borg will
fail to create an archive should the command fail).
.sp
Borg supports paths with the slashdot hack to strip path prefixes here also.
So, be careful not to unintentionally trigger that.
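For example, a NUL\-delimited path list survives filenames that contain newlines. The sketch below only builds and checks such a list (a list like this would then be piped into borg create with \-\-paths\-from\-stdin and \-\-paths\-delimiter "\0"; it assumes a POSIX shell with find, tr and wc, and runs without borg installed):

```shell
# Build a NUL-delimited path list in a scratch directory; with
# --paths-from-stdin and --paths-delimiter "\0", borg would back up
# exactly these entries - no more, no less.
tmp=$(mktemp -d)
touch "$tmp/plain.txt"
touch "$tmp/with
newline.txt"          # filename containing a real newline
# find -print0 terminates each path with a NUL byte; count the NULs
# (one per path, regardless of embedded newlines).
count=$(find "$tmp" -type f -print0 | tr -dc '\0' | wc -c)
echo "$count"
rm -rf "$tmp"
```

A newline delimiter (the default) would have miscounted here, splitting the second filename into two bogus paths.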
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-delete(1)\fP, \fIborg\-prune(1)\fP, \fIborg\-check(1)\fP, \fIborg\-patterns(1)\fP, \fIborg\-placeholders(1)\fP, \fIborg\-compression(1)\fP, \fIborg\-repo\-create(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-DELETE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-delete \- Deletes archives.
.SH SYNOPSIS
.sp
borg [common options] delete [options] [NAME]
.SH DESCRIPTION
.sp
This command soft\-deletes archives from the repository.
.sp
Important:
.INDENT 0.0
.IP \(bu 2
The delete command will only mark archives for deletion (\(dqsoft\-deletion\(dq),
repository disk space is \fBnot\fP freed until you run \fBborg compact\fP\&.
.IP \(bu 2
You can use \fBborg undelete\fP to undelete archives, but only until
you run \fBborg compact\fP\&.
.UNINDENT
.sp
When in doubt, use \fB\-\-dry\-run \-\-list\fP to see what would be deleted.
.sp
You can delete multiple archives by specifying a match pattern using
the \fB\-\-match\-archives PATTERN\fP option (for more information on these
patterns, see \fIborg_patterns\fP).
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.UNINDENT
.SS options
.INDENT 0.0
.TP
.B \-n\fP,\fB \-\-dry\-run
do not change the repository
.TP
.B \-\-list
output a verbose list of archives
.UNINDENT
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Delete all backup archives named \(dqkenny\-files\(dq:
$ borg delete \-a kenny\-files
# Actually free disk space:
$ borg compact

# Delete a specific backup archive using its unique archive ID prefix
$ borg delete aid:d34db33f

# Delete all archives whose names begin with the machine\(aqs hostname followed by \(dq\-\(dq
$ borg delete \-a \(aqsh:{hostname}\-*\(aq

# Delete all archives whose names contain \(dq\-2012\-\(dq
$ borg delete \-a \(aqsh:*\-2012\-*\(aq

# See what would be deleted if delete was run without \-\-dry\-run
$ borg delete \-\-list \-\-dry\-run \-a \(aqsh:*\-May\-*\(aq
.EE
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-compact(1)\fP, \fIborg\-repo\-delete(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-DIFF" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-diff \- Finds differences between two archives.
.SH SYNOPSIS
.sp
borg [common options] diff [options] ARCHIVE1 ARCHIVE2 [PATH...]
.SH DESCRIPTION
.sp
This command finds differences (file contents, metadata) between ARCHIVE1 and ARCHIVE2.
.sp
For more help on include/exclude patterns, see the output of the \fIborg_patterns\fP command.
.SH OPTIONS .SH OPTIONS
.sp .sp
See \fIborg\-common(1)\fP for common options of Borg commands. See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments .SS arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B ARCHIVE1 .B REPO::ARCHIVE1
ARCHIVE1 name repository location and ARCHIVE1 name
.TP .TP
.B ARCHIVE2 .B ARCHIVE2
ARCHIVE2 name ARCHIVE2 name (no repository location allowed)
.TP .TP
.B PATH .B PATH
paths of items inside the archives to compare; patterns are supported. paths of items inside the archives to compare; patterns are supported
.UNINDENT .UNINDENT
.SS options .SS optional arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B \-\-numeric\-ids .B \-\-numeric\-owner
deprecated, use \fB\-\-numeric\-ids\fP instead
.TP
.B \-\-numeric\-ids
only consider numeric user and group identifiers only consider numeric user and group identifiers
.TP .TP
.B \-\-same\-chunker\-params .B \-\-same\-chunker\-params
override the check of chunker parameters Override check of chunker parameters.
.TP .TP
.BI \-\-format \ FORMAT .B \-\-sort
specify format for differences between archives (default: \(dq{change} {path}{NL}\(dq) Sort the output lines by file path.
.TP .TP
.B \-\-json\-lines .B \-\-json\-lines
Format output as JSON Lines. Format output as JSON Lines.
.TP
.B \-\-sort\-by
Sort output by comma\-separated fields (e.g., \(aq>size_added,path\(aq).
.TP
.B \-\-content\-only
Only compare differences in content (exclude metadata differences)
.UNINDENT .UNINDENT
.SS Include/Exclude options .SS Exclusion options
.INDENT 0.0 .INDENT 0.0
.TP .TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN .BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@ -93,174 +103,50 @@ read include/exclude patterns from PATTERNFILE, one per line
.INDENT 0.0 .INDENT 0.0
.INDENT 3.5 .INDENT 3.5
.sp .sp
.EX .nf
$ borg diff archive1 archive2 .ft C
$ borg init \-e=none testrepo
$ mkdir testdir
$ cd testdir
$ echo asdf > file1
$ dd if=/dev/urandom bs=1M count=4 > file2
$ touch file3
$ borg create ../testrepo::archive1 .
$ chmod a+x file1
$ echo "something" >> file2
$ borg create ../testrepo::archive2 .
$ echo "testing 123" >> file1
$ rm file3
$ touch file4
$ borg create ../testrepo::archive3 .
$ cd ..
$ borg diff testrepo::archive1 archive2
[\-rw\-r\-\-r\-\- \-> \-rwxr\-xr\-x] file1
+135 B \-252 B file2
$ borg diff testrepo::archive2 archive3
+17 B \-5 B file1
added 0 B file4
removed 0 B file3
$ borg diff testrepo::archive1 archive3
+17 B \-5 B [\-rw\-r\-\-r\-\- \-> \-rwxr\-xr\-x] file1 +17 B \-5 B [\-rw\-r\-\-r\-\- \-> \-rwxr\-xr\-x] file1
+135 B \-252 B file2 +135 B \-252 B file2
added 0 B file4 added 0 B file4
removed 0 B file3 removed 0 B file3
$ borg diff archive1 archive2 $ borg diff \-\-json\-lines testrepo::archive1 archive3
{\(dqpath\(dq: \(dqfile1\(dq, \(dqchanges\(dq: [{\(dqtype\(dq: \(dqmodified\(dq, \(dqadded\(dq: 17, \(dqremoved\(dq: 5}, {\(dqtype\(dq: \(dqmode\(dq, \(dqold_mode\(dq: \(dq\-rw\-r\-\-r\-\-\(dq, \(dqnew_mode\(dq: \(dq\-rwxr\-xr\-x\(dq}]} {"path": "file1", "changes": [{"type": "modified", "added": 17, "removed": 5}, {"type": "mode", "old_mode": "\-rw\-r\-\-r\-\-", "new_mode": "\-rwxr\-xr\-x"}]}
{\(dqpath\(dq: \(dqfile2\(dq, \(dqchanges\(dq: [{\(dqtype\(dq: \(dqmodified\(dq, \(dqadded\(dq: 135, \(dqremoved\(dq: 252}]} {"path": "file2", "changes": [{"type": "modified", "added": 135, "removed": 252}]}
{\(dqpath\(dq: \(dqfile4\(dq, \(dqchanges\(dq: [{\(dqtype\(dq: \(dqadded\(dq, \(dqsize\(dq: 0}]} {"path": "file4", "changes": [{"type": "added", "size": 0}]}
{\(dqpath\(dq: \(dqfile3\(dq, \(dqchanges\(dq: [{\(dqtype\(dq: \(dqremoved\(dq, \(dqsize\(dq: 0}]} {"path": "file3", "changes": [{"type": "removed", "size": 0}]}
.ft P
.fi
# Use \-\-sort\-by with a comma\-separated list; sorts apply stably from last to first.
# Here: primary by net size change descending, tie\-breaker by path ascending
$ borg diff \-\-sort\-by=\(dq>size_diff,path\(dq archive1 archive2
+17 B \-5 B [\-rw\-r\-\-r\-\- \-> \-rwxr\-xr\-x] file1
removed 0 B file3
added 0 B file4
+135 B \-252 B file2
.EE
.UNINDENT .UNINDENT
.UNINDENT .UNINDENT
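The JSON Lines output shown above is one JSON object per changed path, which makes it easy to post-process without borg itself. A minimal sketch (the helper name `sum_added` and the sample lines are illustrative, mirroring the documented format):

```shell
# Sum bytes added across all changes in `borg diff --json-lines` output.
# Reads JSON Lines on stdin; uses only the Python standard library.
sum_added() {
    python3 -c '
import json, sys
total = 0
for line in sys.stdin:
    for change in json.loads(line)["changes"]:
        total += change.get("added", 0)   # mode/owner changes have no "added"
print(total)
'
}

# Sample input mirroring the documented output format (hypothetical data):
sum_added <<'EOF'
{"path": "file1", "changes": [{"type": "modified", "added": 17, "removed": 5}]}
{"path": "file2", "changes": [{"type": "modified", "added": 135, "removed": 252}]}
EOF
# → 152
```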
.SH NOTES
.SS The FORMAT specifier syntax
.sp
The \fB\-\-format\fP option uses Python\(aqs format string syntax <https://docs.python.org/3.10/library/string.html#formatstrings>
\&.
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg diff \-\-format \(aq{content:30} {path}{NL}\(aq ArchiveFoo ArchiveBar
modified: +4.1 kB \-1.0 kB file\-diff
\&...
# {VAR:<NUMBER} \- pad to NUMBER columns left\-aligned.
# {VAR:>NUMBER} \- pad to NUMBER columns right\-aligned.
$ borg diff \-\-format \(aq{content:>30} {path}{NL}\(aq ArchiveFoo ArchiveBar
modified: +4.1 kB \-1.0 kB file\-diff
\&...
.EE
.UNINDENT
.UNINDENT
.sp
The following keys are always available:
.INDENT 0.0
.IP \(bu 2
NEWLINE: OS dependent line separator
.IP \(bu 2
NL: alias of NEWLINE
.IP \(bu 2
NUL: NUL character for creating print0 / xargs \-0 like output
.IP \(bu 2
SPACE: space character
.IP \(bu 2
TAB: tab character
.IP \(bu 2
CR: carriage return character
.IP \(bu 2
LF: line feed character
.UNINDENT
.sp
Keys available only when showing differences between archives:
.INDENT 0.0
.IP \(bu 2
path: archived file path
.IP \(bu 2
change: all available changes
.IP \(bu 2
content: file content change
.IP \(bu 2
mode: file mode change
.IP \(bu 2
type: file type change
.IP \(bu 2
owner: file owner (user/group) change
.IP \(bu 2
group: file group change
.IP \(bu 2
user: file user change
.IP \(bu 2
link: file link change
.IP \(bu 2
directory: file directory change
.IP \(bu 2
blkdev: file block device change
.IP \(bu 2
chrdev: file character device change
.IP \(bu 2
fifo: file fifo change
.IP \(bu 2
mtime: file modification time change
.IP \(bu 2
ctime: file change time change
.IP \(bu 2
isomtime: file modification time change (ISO 8601)
.IP \(bu 2
isoctime: file change time change (ISO 8601)
.UNINDENT
.SS What is compared
.sp
For each matching item in both archives, Borg reports:
.INDENT 0.0
.IP \(bu 2
Content changes: total added/removed bytes within files. If chunker parameters are comparable,
Borg compares chunk IDs quickly; otherwise, it compares the content.
.IP \(bu 2
Metadata changes: user, group, mode, and other metadata shown inline, like
\(dq[old_mode \-> new_mode]\(dq for mode changes. Use \fB\-\-content\-only\fP to suppress metadata changes.
.IP \(bu 2
Added/removed items: printed as \(dqadded SIZE path\(dq or \(dqremoved SIZE path\(dq.
.UNINDENT
.SS Output formats
.sp
The default (text) output shows one line per changed path, e.g.:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
+135 B \-252 B [ \-rw\-r\-\-r\-\- \-> \-rwxr\-xr\-x ] path/to/file
.EE
.UNINDENT
.UNINDENT
.sp
JSON Lines output (\fB\-\-json\-lines\fP) prints one JSON object per changed path, e.g.:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
{\(dqpath\(dq: \(dqPATH\(dq, \(dqchanges\(dq: [
{\(dqtype\(dq: \(dqmodified\(dq, \(dqadded\(dq: BYTES, \(dqremoved\(dq: BYTES},
{\(dqtype\(dq: \(dqmode\(dq, \(dqold_mode\(dq: \(dq\-rw\-r\-\-r\-\-\(dq, \(dqnew_mode\(dq: \(dq\-rwxr\-xr\-x\(dq},
{\(dqtype\(dq: \(dqadded\(dq, \(dqsize\(dq: SIZE},
{\(dqtype\(dq: \(dqremoved\(dq, \(dqsize\(dq: SIZE}
]}
.EE
.UNINDENT
.UNINDENT
.SS Sorting
.sp
Use \fB\-\-sort\-by FIELDS\fP where FIELDS is a comma\-separated list of fields.
Sorts are applied stably from last to first in the given list. Prepend \(dq>\(dq for
descending, \(dq<\(dq (or no prefix) for ascending, for example \fB\-\-sort\-by=\(dq>size_added,path\(dq\fP\&.
Supported fields include:
.INDENT 0.0
.IP \(bu 2
path: the item path
.IP \(bu 2
size_added: total bytes added for the item content
.IP \(bu 2
size_removed: total bytes removed for the item content
.IP \(bu 2
size_diff: size_added \- size_removed (net content change)
.IP \(bu 2
size: size of the item as stored in ARCHIVE2 (0 for removed items)
.IP \(bu 2
user, group, uid, gid, ctime, mtime: taken from the item state in ARCHIVE2 when present
.IP \(bu 2
ctime_diff, mtime_diff: timestamp difference (ARCHIVE2 \- ARCHIVE1)
.UNINDENT
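The "stably from last to first" rule is the standard multi-key sort idiom, and can be reproduced with coreutils sort, where \fB\-s\fP keeps an earlier pass's order among ties (the sample data is made up):

```shell
# Emulate --sort-by=">size,path": sort by the LAST key first (path, ascending),
# then run a stable sort (-s) on the FIRST key (size, numeric descending).
# Ties on size keep the path order established by the first pass.
printf '%s\n' '10 b' '20 a' '10 a' \
  | sort -k2,2 \
  | sort -s -k1,1nr
# → 20 a
#   10 a
#   10 b
```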
.SS Performance considerations
.sp
diff automatically detects whether the archives were created with the same chunker
parameters. If so, only chunk IDs are compared, which is very fast.
.SH SEE ALSO .SH SEE ALSO
.sp .sp
\fIborg\-common(1)\fP \fIborg\-common(1)\fP
@ -1,6 +1,8 @@
'\" t
.\" Man page generated from reStructuredText. .\" Man page generated from reStructuredText.
. .
.TH BORG-EXPORT-TAR 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-export-tar \- Export archive contents as a tarball
. .
.nr rst2man-indent-level 0 .nr rst2man-indent-level 0
. .
@ -28,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
.. ..
.TH "BORG-EXPORT-TAR" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-export-tar \- Export archive contents as a tarball
.SH SYNOPSIS .SH SYNOPSIS
.sp .sp
borg [common options] export\-tar [options] NAME FILE [PATH...] borg [common options] export\-tar [options] ARCHIVE FILE [PATH...]
.SH DESCRIPTION .SH DESCRIPTION
.sp .sp
This command creates a tarball from an archive. This command creates a tarball from an archive.
@ -51,7 +50,7 @@ before writing it to FILE:
.IP \(bu 2 .IP \(bu 2
\&.tar.xz or .txz: xz \&.tar.xz or .txz: xz
.IP \(bu 2 .IP \(bu 2
\&.tar.zstd or .tar.zst: zstd \&.tar.zstd: zstd
.IP \(bu 2 .IP \(bu 2
\&.tar.lz4: lz4 \&.tar.lz4: lz4
.UNINDENT .UNINDENT
@ -60,10 +59,11 @@ Alternatively, a \fB\-\-tar\-filter\fP program may be explicitly specified. It s
read the uncompressed tar stream from stdin and write a compressed/filtered read the uncompressed tar stream from stdin and write a compressed/filtered
tar stream to stdout. tar stream to stdout.
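The suffix-based filter selection described above is a plain pattern match on the output filename; a sketch of that mapping as a shell helper (the function name is illustrative, the commands follow the extension list above):

```shell
# Map an output filename to the compression filter export-tar would choose.
tar_filter_for() {
    case "$1" in
        *.tar.gz|*.tgz)       echo gzip  ;;
        *.tar.bz2|*.tbz)      echo bzip2 ;;
        *.tar.xz|*.txz)       echo xz    ;;
        *.tar.zstd|*.tar.zst) echo zstd  ;;
        *.tar.lz4)            echo lz4   ;;
        *)                    echo ""    ;;  # no suffix match: uncompressed tar
    esac
}

tar_filter_for Monday.tar.zst   # → zstd
```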
.sp .sp
Depending on the \fB\-\-tar\-format\fP option, these formats are created: Depending on the \fB\-tar\-format\fP option, these formats are created:
.TS .TS
box center; center;
l|l|l. |l|l|l|.
_
T{ T{
\-\-tar\-format \-\-tar\-format
T} T{ T} T{
@ -86,7 +86,6 @@ T} T{
POSIX.1\-2001 (pax) format POSIX.1\-2001 (pax) format
T} T{ T} T{
GNU + atime/ctime/mtime ns GNU + atime/ctime/mtime ns
+ xattrs
T} T}
_ _
T{ T{
@ -97,6 +96,7 @@ T} T{
mtime s, no atime/ctime, mtime s, no atime/ctime,
no ACLs/xattrs/bsdflags no ACLs/xattrs/bsdflags
T} T}
_
.TE .TE
.sp .sp
A \fB\-\-sparse\fP option (as found in borg extract) is not supported. A \fB\-\-sparse\fP option (as found in borg extract) is not supported.
@ -115,28 +115,28 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments .SS arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B NAME .B ARCHIVE
specify the archive name archive to export
.TP .TP
.B FILE .B FILE
output tar file. \(dq\-\(dq to write to stdout instead. output tar file. "\-" to write to stdout instead.
.TP .TP
.B PATH .B PATH
paths to extract; patterns are supported paths to extract; patterns are supported
.UNINDENT .UNINDENT
.SS options .SS optional arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B \-\-tar\-filter .B \-\-tar\-filter
filter program to pipe data through filter program to pipe data through
.TP .TP
.B \-\-list .B \-\-list
output verbose list of items (files, dirs, ...) output verbose list of items (files, dirs, ...)
.TP .TP
.BI \-\-tar\-format \ FMT .BI \-\-tar\-format \ FMT
select tar format: BORG, PAX or GNU select tar format: BORG, PAX or GNU
.UNINDENT .UNINDENT
.SS Include/Exclude options .SS Exclusion options
.INDENT 0.0 .INDENT 0.0
.TP .TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN .BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText. .\" Man page generated from reStructuredText.
. .
.TH BORG-EXTRACT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-extract \- Extract archive contents
. .
.nr rst2man-indent-level 0 .nr rst2man-indent-level 0
. .
@ -27,28 +30,21 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
.. ..
.TH "BORG-EXTRACT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-extract \- Extracts archive contents.
.SH SYNOPSIS .SH SYNOPSIS
.sp .sp
borg [common options] extract [options] NAME [PATH...] borg [common options] extract [options] ARCHIVE [PATH...]
.SH DESCRIPTION .SH DESCRIPTION
.sp .sp
This command extracts the contents of an archive. This command extracts the contents of an archive. By default the entire
archive is extracted but a subset of files and directories can be selected
by passing a list of \fBPATHs\fP as arguments. The file selection can further
be restricted by using the \fB\-\-exclude\fP option.
.sp .sp
By default, the entire archive is extracted, but a subset of files and directories
can be selected by passing a list of \fBPATH\fP arguments. The default interpretation
for the paths to extract is \fIpp:\fP which is a literal path\-prefix match. If you want
to use e.g. a wildcard, you must select a different pattern style such as \fIsh:\fP or
\fIfm:\fP\&. See \fIborg_patterns\fP for more information.
.sp
The file selection can be further restricted by using the \fB\-\-exclude\fP option.
For more help on include/exclude patterns, see the \fIborg_patterns\fP command output. For more help on include/exclude patterns, see the \fIborg_patterns\fP command output.
.sp .sp
By using \fB\-\-dry\-run\fP, you can do all extraction steps except actually writing the By using \fB\-\-dry\-run\fP, you can do all extraction steps except actually writing the
output data: reading metadata and data chunks from the repository, checking the hash/HMAC, output data: reading metadata and data chunks from the repo, checking the hash/hmac,
decrypting, and decompressing. decrypting, decompressing.
.sp .sp
\fB\-\-progress\fP can be slower than no progress display, since it makes one additional \fB\-\-progress\fP can be slower than no progress display, since it makes one additional
pass over the archive metadata. pass over the archive metadata.
@ -56,12 +52,12 @@ pass over the archive metadata.
\fBNOTE:\fP \fBNOTE:\fP
.INDENT 0.0 .INDENT 0.0
.INDENT 3.5 .INDENT 3.5
Currently, extract always writes into the current working directory (\(dq.\(dq), Currently, extract always writes into the current working directory ("."),
so make sure you \fBcd\fP to the right place before calling \fBborg extract\fP\&. so make sure you \fBcd\fP to the right place before calling \fBborg extract\fP\&.
.sp .sp
When parent directories are not extracted (because of using file/directory selection When parent directories are not extracted (because of using file/directory selection
or any other reason), Borg cannot restore parent directories\(aq metadata, e.g., owner, or any other reason), borg can not restore parent directories\(aq metadata, e.g. owner,
group, permissions, etc. group, permission, etc.
.UNINDENT .UNINDENT
.UNINDENT .UNINDENT
.SH OPTIONS .SH OPTIONS
@ -70,43 +66,46 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments .SS arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B NAME .B ARCHIVE
specify the archive name archive to extract
.TP .TP
.B PATH .B PATH
paths to extract; patterns are supported paths to extract; patterns are supported
.UNINDENT .UNINDENT
.SS options .SS optional arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B \-\-list .B \-\-list
output a verbose list of items (files, dirs, ...) output verbose list of items (files, dirs, ...)
.TP .TP
.B \-n\fP,\fB \-\-dry\-run .B \-n\fP,\fB \-\-dry\-run
do not actually change any files do not actually change any files
.TP .TP
.B \-\-numeric\-ids .B \-\-numeric\-owner
only use numeric user and group identifiers deprecated, use \fB\-\-numeric\-ids\fP instead
.TP .TP
.B \-\-noflags .B \-\-numeric\-ids
only obey numeric user and group identifiers
.TP
.B \-\-nobsdflags
deprecated, use \fB\-\-noflags\fP instead
.TP
.B \-\-noflags
do not extract/set flags (e.g. NODUMP, IMMUTABLE) do not extract/set flags (e.g. NODUMP, IMMUTABLE)
.TP .TP
.B \-\-noacls .B \-\-noacls
do not extract/set ACLs do not extract/set ACLs
.TP .TP
.B \-\-noxattrs .B \-\-noxattrs
do not extract/set xattrs do not extract/set xattrs
.TP .TP
.B \-\-stdout .B \-\-stdout
write all extracted data to stdout write all extracted data to stdout
.TP .TP
.B \-\-sparse .B \-\-sparse
create holes in the output sparse file from all\-zero chunks create holes in output sparse file from all\-zero chunks
.TP
.B \-\-continue
continue a previously interrupted extraction of the same archive
.UNINDENT .UNINDENT
.SS Include/Exclude options .SS Exclusion options
.INDENT 0.0 .INDENT 0.0
.TP .TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN .BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@ -128,28 +127,27 @@ Remove the specified number of leading path elements. Paths with fewer elements
.INDENT 0.0 .INDENT 0.0
.INDENT 3.5 .INDENT 3.5
.sp .sp
.EX .nf
.ft C
# Extract entire archive # Extract entire archive
$ borg extract my\-files $ borg extract /path/to/repo::my\-files
# Extract entire archive and list files while processing # Extract entire archive and list files while processing
$ borg extract \-\-list my\-files $ borg extract \-\-list /path/to/repo::my\-files
# Verify whether an archive could be successfully extracted, but do not write files to disk # Verify whether an archive could be successfully extracted, but do not write files to disk
$ borg extract \-\-dry\-run my\-files $ borg extract \-\-dry\-run /path/to/repo::my\-files
# Extract the \(dqsrc\(dq directory # Extract the "src" directory
$ borg extract my\-files home/USERNAME/src $ borg extract /path/to/repo::my\-files home/USERNAME/src
# Extract the \(dqsrc\(dq directory but exclude object files # Extract the "src" directory but exclude object files
$ borg extract my\-files home/USERNAME/src \-\-exclude \(aq*.o\(aq $ borg extract /path/to/repo::my\-files home/USERNAME/src \-\-exclude \(aq*.o\(aq
# Extract only the C files
$ borg extract my\-files \(aqsh:home/USERNAME/src/*.c\(aq
# Restore a raw device (must not be active/in use/mounted at that time) # Restore a raw device (must not be active/in use/mounted at that time)
$ borg extract \-\-stdout my\-sdx | dd of=/dev/sdx bs=10M $ borg extract \-\-stdout /path/to/repo::my\-sdx | dd of=/dev/sdx bs=10M
.EE .ft P
.fi
.UNINDENT .UNINDENT
.UNINDENT .UNINDENT
.SH SEE ALSO .SH SEE ALSO
@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText. .\" Man page generated from reStructuredText.
. .
.TH BORG-IMPORT-TAR 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-import-tar \- Create a backup archive from a tarball
. .
.nr rst2man-indent-level 0 .nr rst2man-indent-level 0
. .
@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
.. ..
.TH "BORG-IMPORT-TAR" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-import-tar \- Create a backup archive from a tarball
.SH SYNOPSIS .SH SYNOPSIS
.sp .sp
borg [common options] import\-tar [options] NAME TARFILE borg [common options] import\-tar [options] ARCHIVE TARFILE
.SH DESCRIPTION .SH DESCRIPTION
.sp .sp
This command creates a backup archive from a tarball. This command creates a backup archive from a tarball.
@ -49,7 +49,7 @@ based on its file extension and pipe the file through an appropriate filter:
.IP \(bu 2 .IP \(bu 2
\&.tar.xz or .txz: xz \-d \&.tar.xz or .txz: xz \-d
.IP \(bu 2 .IP \(bu 2
\&.tar.zstd or .tar.zst: zstd \-d \&.tar.zstd: zstd \-d
.IP \(bu 2 .IP \(bu 2
\&.tar.lz4: lz4 \-d \&.tar.lz4: lz4 \-d
.UNINDENT .UNINDENT
@ -80,42 +80,35 @@ UNIX V7 tar
.IP \(bu 2 .IP \(bu 2
SunOS tar with extended attributes SunOS tar with extended attributes
.UNINDENT .UNINDENT
.sp
To import multiple tarballs into a single archive, they can be simply
concatenated (e.g. using \(dqcat\(dq) into a single file, and imported with an
\fB\-\-ignore\-zeros\fP option to skip through the stop markers between them.
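The concatenation trick above can be tried with plain GNU tar, which needs the same flag to skip the end-of-archive markers between members (the file names below are scratch data in a temporary directory):

```shell
# Concatenate two tarballs and read them back as a single stream.
# Without --ignore-zeros, tar stops at the first end-of-archive marker.
workdir=$(mktemp -d)
cd "$workdir"
echo one > a.txt
echo two > b.txt
tar -cf a.tar a.txt
tar -cf b.tar b.txt
cat a.tar b.tar > both.tar
tar --ignore-zeros -tf both.tar
# → a.txt
#   b.txt
```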
.SH OPTIONS .SH OPTIONS
.sp .sp
See \fIborg\-common(1)\fP for common options of Borg commands. See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments .SS arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B NAME .B ARCHIVE
specify the archive name name of archive to create (must be also a valid directory name)
.TP .TP
.B TARFILE .B TARFILE
input tar file. \(dq\-\(dq to read from stdin instead. input tar file. "\-" to read from stdin instead.
.UNINDENT .UNINDENT
.SS options .SS optional arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B \-\-tar\-filter .B \-\-tar\-filter
filter program to pipe data through filter program to pipe data through
.TP .TP
.B \-s\fP,\fB \-\-stats .B \-s\fP,\fB \-\-stats
print statistics for the created archive print statistics for the created archive
.TP .TP
.B \-\-list .B \-\-list
output verbose list of items (files, dirs, ...) output verbose list of items (files, dirs, ...)
.TP .TP
.BI \-\-filter \ STATUSCHARS .BI \-\-filter \ STATUSCHARS
only display items with the given status characters only display items with the given status characters
.TP .TP
.B \-\-json .B \-\-json
output stats as JSON (implies \-\-stats) output stats as JSON (implies \-\-stats)
.TP
.B \-\-ignore\-zeros
ignore zero\-filled blocks in the input tarball
.UNINDENT .UNINDENT
.SS Archive options .SS Archive options
.INDENT 0.0 .INDENT 0.0
@ -124,40 +117,45 @@ ignore zero\-filled blocks in the input tarball
add a comment text to the archive add a comment text to the archive
.TP .TP
.BI \-\-timestamp \ TIMESTAMP .BI \-\-timestamp \ TIMESTAMP
manually specify the archive creation date/time (yyyy\-mm\-ddThh:mm:ss[(+|\-)HH:MM] format, (+|\-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory. manually specify the archive creation date/time (UTC, yyyy\-mm\-ddThh:mm:ss format). alternatively, give a reference file/directory.
.TP
.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
write checkpoint every SECONDS seconds (Default: 1800)
.TP .TP
.BI \-\-chunker\-params \ PARAMS .BI \-\-chunker\-params \ PARAMS
specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095 specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095
.TP .TP
.BI \-C \ COMPRESSION\fR,\fB \ \-\-compression \ COMPRESSION .BI \-C \ COMPRESSION\fR,\fB \ \-\-compression \ COMPRESSION
select compression algorithm, see the output of the \(dqborg help compression\(dq command for details. select compression algorithm, see the output of the "borg help compression" command for details.
.UNINDENT .UNINDENT
.SH EXAMPLES .SH EXAMPLES
.INDENT 0.0 .INDENT 0.0
.INDENT 3.5 .INDENT 3.5
.sp .sp
.EX .nf
# Export as an uncompressed tar archive .ft C
$ borg export\-tar Monday Monday.tar # export as uncompressed tar
$ borg export\-tar /path/to/repo::Monday Monday.tar
# Import an uncompressed tar archive # import an uncompressed tar
$ borg import\-tar Monday Monday.tar $ borg import\-tar /path/to/repo::Monday Monday.tar
# Exclude some file types and compress using gzip # exclude some file types, compress using gzip
$ borg export\-tar Monday Monday.tar.gz \-\-exclude \(aq*.so\(aq $ borg export\-tar /path/to/repo::Monday Monday.tar.gz \-\-exclude \(aq*.so\(aq
# Use a higher compression level with gzip # use higher compression level with gzip
$ borg export\-tar \-\-tar\-filter=\(dqgzip \-9\(dq Monday Monday.tar.gz $ borg export\-tar \-\-tar\-filter="gzip \-9" repo::Monday Monday.tar.gz
# Copy an archive from repoA to repoB # copy an archive from repoA to repoB
$ borg \-r repoA export\-tar \-\-tar\-format=BORG archive \- | borg \-r repoB import\-tar archive \- $ borg export\-tar \-\-tar\-format=BORG repoA::archive \- | borg import\-tar repoB::archive \-
# Export a tar, but instead of storing it on disk, upload it to a remote site using curl # export a tar, but instead of storing it on disk, upload it to remote site using curl
$ borg export\-tar Monday \- | curl \-\-data\-binary @\- https://somewhere/to/POST $ borg export\-tar /path/to/repo::Monday \- | curl \-\-data\-binary @\- https://somewhere/to/POST
# Remote extraction via \(aqtarpipe\(aq # remote extraction via "tarpipe"
$ borg export\-tar Monday \- | ssh somewhere \(dqcd extracted; tar x\(dq $ borg export\-tar /path/to/repo::Monday \- | ssh somewhere "cd extracted; tar x"
.EE .ft P
.fi
.UNINDENT .UNINDENT
.UNINDENT .UNINDENT
.SS Archives transfer script .SS Archives transfer script
@ -166,12 +164,14 @@ Outputs a script that copies all archives from repo1 to repo2:
.INDENT 0.0 .INDENT 0.0
.INDENT 3.5 .INDENT 3.5
.sp .sp
.EX .nf
for N I T in \(gaborg list \-\-format=\(aq{archive} {id} {time:%Y\-%m\-%dT%H:%M:%S}{NL}\(aq\(ga .ft C
for A T in \(gaborg list \-\-format=\(aq{archive} {time:%Y\-%m\-%dT%H:%M:%S}{LF}\(aq repo1\(ga
do do
echo \(dqborg \-r repo1 export\-tar \-\-tar\-format=BORG aid:$I \- | borg \-r repo2 import\-tar \-\-timestamp=$T $N \-\(dq echo "borg export\-tar \-\-tar\-format=BORG repo1::$A \- | borg import\-tar \-\-timestamp=$T repo2::$A \-"
done done
.EE .ft P
.fi
.UNINDENT .UNINDENT
.UNINDENT .UNINDENT
.sp .sp
@ -186,7 +186,7 @@ archive contents (all items with metadata and data)
Lost: Lost:
.INDENT 0.0 .INDENT 0.0
.IP \(bu 2 .IP \(bu 2
some archive metadata (like the original command line, execution time, etc.) some archive metadata (like the original commandline, execution time, etc.)
.UNINDENT .UNINDENT
.sp .sp
Please note: Please note:
@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText. .\" Man page generated from reStructuredText.
. .
.TH BORG-INFO 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-info \- Show archive details such as disk space used
. .
.nr rst2man-indent-level 0 .nr rst2man-indent-level 0
. .
@ -27,91 +30,124 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
.. ..
.TH "BORG-INFO" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-info \- Show archive details such as disk space used
.SH SYNOPSIS .SH SYNOPSIS
.sp .sp
borg [common options] info [options] [NAME] borg [common options] info [options] [REPOSITORY_OR_ARCHIVE]
.SH DESCRIPTION .SH DESCRIPTION
.sp .sp
This command displays detailed information about the specified archive. This command displays detailed information about the specified archive or repository.
.sp .sp
Please note that the deduplicated sizes of the individual archives do not add Please note that the deduplicated sizes of the individual archives do not add
up to the deduplicated size of the repository (\(dqall archives\(dq), because the two up to the deduplicated size of the repository ("all archives"), because the two
mean different things: are meaning different things:
.sp .sp
This archive / deduplicated size = amount of data stored ONLY for this archive This archive / deduplicated size = amount of data stored ONLY for this archive
= unique chunks of this archive. = unique chunks of this archive.
All archives / deduplicated size = amount of data stored in the repository All archives / deduplicated size = amount of data stored in the repo
= all chunks in the repository. = all chunks in the repository.
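The distinction between the two deduplicated sizes can be illustrated with a toy chunk-set model (the chunk IDs below are made up):

```shell
# Per-archive deduplicated size counts only chunks unique to that archive,
# so shared chunks appear in neither per-archive figure, and the per-archive
# numbers cannot add up to the repository-wide total.
python3 - <<'EOF'
archive1 = {"c1", "c2", "c3"}      # hypothetical chunk IDs
archive2 = {"c2", "c3", "c4"}
unique1 = archive1 - archive2      # stored ONLY for archive1
unique2 = archive2 - archive1      # stored ONLY for archive2
repo    = archive1 | archive2      # all chunks in the repository
print(len(unique1), len(unique2), len(repo))
EOF
# → 1 1 4   (1 + 1 != 4: the shared chunks c2, c3 are counted only repo-wide)
```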
.sp
Borg archives can only contain a limited amount of file metadata.
The size of an archive relative to this limit depends on a number of factors,
mainly the number of files, the lengths of paths and other metadata stored for files.
This is shown as \fIutilization of maximum supported archive size\fP\&.
.SH OPTIONS .SH OPTIONS
.sp .sp
See \fIborg\-common(1)\fP for common options of Borg commands. See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments .SS arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B NAME .B REPOSITORY_OR_ARCHIVE
specify the archive name repository or archive to display information about
.UNINDENT .UNINDENT
.SS options .SS optional arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B \-\-json .B \-\-json
format output as JSON format output as JSON
.UNINDENT .UNINDENT
.SS Archive filters .SS Archive filters
.INDENT 0.0 .INDENT 0.0
.TP .TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN .BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq. only consider archive names starting with this prefix.
.TP
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.TP .TP
.BI \-\-sort\-by \ KEYS .BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp Comma\-separated list of sorting keys; valid keys are: timestamp, name, id; default is: timestamp
.TP .TP
.BI \-\-first \ N .BI \-\-first \ N
consider the first N archives after other filters are applied consider first N archives after other filters were applied
.TP .TP
.BI \-\-last \ N .BI \-\-last \ N
consider the last N archives after other filters are applied consider last N archives after other filters were applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.UNINDENT .UNINDENT
.SH EXAMPLES .SH EXAMPLES
.INDENT 0.0 .INDENT 0.0
.INDENT 3.5 .INDENT 3.5
.sp .sp
.EX .nf
$ borg info aid:f7dea078 .ft C
Archive name: source\-backup $ borg info /path/to/repo::2017\-06\-29T11:00\-srv
Archive fingerprint: f7dea0788dfc026cc2be1c0f5b94beb4e4084eb3402fc40c38d8719b1bf2d943 Archive name: 2017\-06\-29T11:00\-srv
Archive fingerprint: b2f1beac2bd553b34e06358afa45a3c1689320d39163890c5bbbd49125f00fe5
Comment: Comment:
Hostname: mba2020 Hostname: myhostname
Username: tw Username: root
Time (start): Sat, 2022\-06\-25 20:51:40 Time (start): Thu, 2017\-06\-29 11:03:07
Time (end): Sat, 2022\-06\-25 20:51:40 Time (end): Thu, 2017\-06\-29 11:03:13
Duration: 0.03 seconds Duration: 5.66 seconds
Command line: /usr/bin/borg \-r path/to/repo create source\-backup src Number of files: 17037
Utilization of maximum supported archive size: 0% Command line: /usr/sbin/borg create /path/to/repo::2017\-06\-29T11:00\-srv /srv
Number of files: 244 Utilization of max. archive size: 0%
Original size: 13.80 MB \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
Deduplicated size: 531 B Original size Compressed size Deduplicated size
.EE This archive: 12.53 GB 12.49 GB 1.62 kB
All archives: 121.82 TB 112.41 TB 215.42 GB
Unique chunks Total chunks
Chunk index: 1015213 626934122
$ borg info /path/to/repo \-\-last 1
Archive name: 2017\-06\-29T11:00\-srv
Archive fingerprint: b2f1beac2bd553b34e06358afa45a3c1689320d39163890c5bbbd49125f00fe5
Comment:
Hostname: myhostname
Username: root
Time (start): Thu, 2017\-06\-29 11:03:07
Time (end): Thu, 2017\-06\-29 11:03:13
Duration: 5.66 seconds
Number of files: 17037
Command line: /usr/sbin/borg create /path/to/repo::2017\-06\-29T11:00\-srv /srv
Utilization of max. archive size: 0%
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
Original size Compressed size Deduplicated size
This archive: 12.53 GB 12.49 GB 1.62 kB
All archives: 121.82 TB 112.41 TB 215.42 GB
Unique chunks Total chunks
Chunk index: 1015213 626934122
$ borg info /path/to/repo
Repository ID: d857ce5788c51272c61535062e89eac4e8ef5a884ffbe976e0af9d8765dedfa5
Location: /path/to/repo
Encrypted: Yes (repokey)
Cache: /root/.cache/borg/d857ce5788c51272c61535062e89eac4e8ef5a884ffbe976e0af9d8765dedfa5
Security dir: /root/.config/borg/security/d857ce5788c51272c61535062e89eac4e8ef5a884ffbe976e0af9d8765dedfa5
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
Original size Compressed size Deduplicated size
All archives: 121.82 TB 112.41 TB 215.42 GB
Unique chunks Total chunks
Chunk index: 1015213 626934122
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-list(1)\fP, \fIborg\-diff(1)\fP, \fIborg\-repo\-info(1)\fP
\fIborg\-common(1)\fP, \fIborg\-list(1)\fP, \fIborg\-diff(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.

334
docs/man/borg-init.1 Normal file
View file

@ -0,0 +1,334 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-INIT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-init \- Initialize an empty repository
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.SH SYNOPSIS
.sp
borg [common options] init [options] [REPOSITORY]
.SH DESCRIPTION
.sp
This command initializes an empty repository. A repository is a filesystem
directory containing the deduplicated data from zero or more archives.
.SS Encryption mode TLDR
.sp
The encryption mode can only be configured when creating a new repository \- you can
neither configure it on a per\-archive basis nor change the mode of an existing repository.
This example will likely NOT give optimum performance on your machine (performance
tips will come below):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
borg init \-\-encryption repokey /path/to/repo
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Borg will:
.INDENT 0.0
.IP 1. 3
Ask you to come up with a passphrase.
.IP 2. 3
Create a borg key (which contains some random secrets. See \fIkey_files\fP).
.IP 3. 3
Derive a "key encryption key" from your passphrase.
.IP 4. 3
Encrypt and sign the key with the key encryption key.
.IP 5. 3
Store the encrypted borg key inside the repository directory (in the repo config).
This is why it is essential to use a secure passphrase.
.IP 6. 3
Encrypt and sign your backups to prevent anyone from reading or forging them unless they
have the key and know the passphrase. Make sure to keep a backup of
your key \fBoutside\fP the repository \- do not lock yourself out by
"leaving your keys inside your car" (see \fIborg_key_export\fP).
For remote backups the encryption is done locally \- the remote machine
never sees your passphrase, your unencrypted key or your unencrypted files.
Chunking and id generation are also based on your key to improve
your privacy.
.IP 7. 3
Use the key when extracting files to decrypt them and to verify that the contents of
the backups have not been accidentally or maliciously altered.
.UNINDENT
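Step 6 above notes that chunking and id generation are keyed so that chunk IDs do not leak information about your data. A minimal sketch of a keyed chunk-id hash, using Python's stdlib BLAKE2b keyed mode (this illustrates the idea only; it is not borg's actual implementation):

```python
import hashlib

def chunk_id(chunk: bytes, id_key: bytes) -> bytes:
    # Keyed hash: without id_key, an attacker who sees chunk IDs cannot
    # precompute the ID of known plaintext to test for its presence.
    return hashlib.blake2b(chunk, key=id_key, digest_size=32).digest()

id_key = b"\x01" * 32  # in borg, such secrets live in the (encrypted) key material
print(chunk_id(b"hello", id_key).hex()[:16])
```

The same chunk always yields the same ID under the same key (enabling deduplication), while a different key yields unrelated IDs.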
.SS Picking a passphrase
.sp
Make sure you use a good passphrase. Not too short, not too simple. The real
encryption / decryption key is encrypted with / locked by your passphrase.
If an attacker gets your key, he can\(aqt unlock and use it without knowing the
passphrase.
.sp
Be careful with special or non\-ascii characters in your passphrase:
.INDENT 0.0
.IP \(bu 2
Borg processes the passphrase as unicode (and encodes it as utf\-8),
so it does not have problems dealing with even the strangest characters.
.IP \(bu 2
BUT: that does not necessarily apply to your OS / VM / keyboard configuration.
.UNINDENT
.sp
So it is better to use a long passphrase made from simple ASCII characters than one that
includes non\-ASCII characters or characters that are hard or impossible to enter on
a different keyboard layout.
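One concrete pitfall behind this advice is Unicode normalization: two systems can produce visually identical passphrases with different codepoints, which encode to different UTF\-8 bytes and thus different keys. A stdlib demonstration (illustrative; borg simply UTF\-8\-encodes whatever it receives):

```python
import unicodedata

pw_mac = "pässwörd"                               # composed form (NFC)
pw_other = unicodedata.normalize("NFD", pw_mac)   # same glyphs, decomposed codepoints

print(pw_mac == pw_other)                                   # False
print(pw_mac.encode("utf-8") == pw_other.encode("utf-8"))   # False -> different keys
```

A plain-ASCII passphrase has only one possible byte representation, so this problem cannot occur.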
.sp
You can change your passphrase for existing repos at any time; it won\(aqt affect
the encryption/decryption key or other secrets.
.SS Choosing an encryption mode
.sp
Depending on your hardware, hashing and crypto performance may vary widely.
The easiest way to find out about what\(aqs fastest is to run \fBborg benchmark cpu\fP\&.
.sp
\fIrepokey\fP modes: if you want ease\-of\-use and "passphrase" security is good enough \-
the key will be stored in the repository (in \fBrepo_dir/config\fP).
.sp
\fIkeyfile\fP modes: if you rather want "passphrase and having\-the\-key" security \-
the key will be stored in your home directory (in \fB~/.config/borg/keys\fP).
.sp
The following table is roughly sorted in order of preference, the better ones are
in the upper part of the table, in the lower part is the old and/or unsafe(r) stuff:
.\" nanorst: inline-fill
.
.TS
center;
|l|l|l|l|l|.
_
T{
\fBmode (* = keyfile or repokey)\fP
T} T{
\fBID\-Hash\fP
T} T{
\fBEncryption\fP
T} T{
\fBAuthentication\fP
T} T{
\fBV>=\fP
T}
_
T{
\fB*\-blake2\-chacha20\-poly1305\fP
T} T{
BLAKE2b
T} T{
CHACHA20
T} T{
POLY1305
T} T{
1.3
T}
_
T{
\fB*\-chacha20\-poly1305\fP
T} T{
HMAC\-SHA\-256
T} T{
CHACHA20
T} T{
POLY1305
T} T{
1.3
T}
_
T{
\fB*\-blake2\-aes\-ocb\fP
T} T{
BLAKE2b
T} T{
AES256\-OCB
T} T{
AES256\-OCB
T} T{
1.3
T}
_
T{
\fB*\-aes\-ocb\fP
T} T{
HMAC\-SHA\-256
T} T{
AES256\-OCB
T} T{
AES256\-OCB
T} T{
1.3
T}
_
T{
\fB*\-blake2\fP
T} T{
BLAKE2b
T} T{
AES256\-CTR
T} T{
BLAKE2b
T} T{
1.1
T}
_
T{
\fB*\fP
T} T{
HMAC\-SHA\-256
T} T{
AES256\-CTR
T} T{
HMAC\-SHA256
T} T{
any
T}
_
T{
authenticated\-blake2
T} T{
BLAKE2b
T} T{
none
T} T{
BLAKE2b
T} T{
1.1
T}
_
T{
authenticated
T} T{
HMAC\-SHA\-256
T} T{
none
T} T{
HMAC\-SHA256
T} T{
1.1
T}
_
T{
none
T} T{
SHA\-256
T} T{
none
T} T{
none
T} T{
any
T}
_
.TE
.\" nanorst: inline-replace
.
.sp
\fInone\fP mode uses no encryption and no authentication. You\(aqre advised to NOT use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.
.sp
If you do \fBnot\fP want to encrypt the contents of your backups, but still want to detect
malicious tampering use an \fIauthenticated\fP mode. It\(aqs like \fIrepokey\fP minus encryption.
.SS Key derivation functions
.INDENT 0.0
.IP \(bu 2
\fB\-\-key\-algorithm argon2\fP is the default and is recommended.
The key encryption key is derived from your passphrase via argon2\-id.
Argon2 is considered more modern and secure than pbkdf2.
.IP \(bu 2
You can use \fB\-\-key\-algorithm pbkdf2\fP if you want to access your repo via old versions of borg.
.UNINDENT
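The pbkdf2 side of this choice is available directly in Python's stdlib; argon2id is not (it needs a third\-party package). A sketch of deriving a key encryption key via PBKDF2\-HMAC\-SHA256 (parameters here are illustrative, not the ones borg uses):

```python
import hashlib
import os

salt = os.urandom(16)
# Derive a 32-byte key encryption key from the passphrase.
# borg's default KDF is argon2id; pbkdf2 is kept for backwards compatibility.
kek = hashlib.pbkdf2_hmac("sha256", b"my passphrase", salt, 100_000, dklen=32)
print(len(kek))  # 32
```

The iteration count (and, for argon2, the memory cost) is what makes brute\-forcing a stolen encrypted key expensive.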
.sp
Our implementation of argon2\-based key algorithm follows the cryptographic best practices:
.INDENT 0.0
.IP \(bu 2
It derives two separate keys from your passphrase: one to encrypt your key and another one
to sign it. \fB\-\-key\-algorithm pbkdf2\fP uses the same key for both.
.IP \(bu 2
It uses encrypt\-then\-mac instead of encrypt\-and\-mac used by \fB\-\-key\-algorithm pbkdf2\fP
.UNINDENT
.sp
Neither is inherently linked to the key derivation function, but since we were going
to break backwards compatibility anyway we took the opportunity to fix all 3 issues at once.
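The two properties above, separate encryption and MAC keys, and MAC computed over the ciphertext, can be sketched with the stdlib as follows. The "cipher" here is a toy XOR stand\-in marking where a real cipher would go; this is an illustration of the construction, not borg's code:

```python
import hashlib
import hmac

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Repeating-key XOR: NOT secure, just a placeholder for a real cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def seal(enc_key: bytes, mac_key: bytes, data: bytes) -> bytes:
    ct = toy_encrypt(enc_key, data)
    # encrypt-then-MAC: authenticate the *ciphertext* with a *separate* key
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return tag + ct

def open_(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    tag, ct = blob[:32], blob[32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")  # reject before any decryption happens
    return toy_encrypt(enc_key, ct)

blob = seal(b"k1", b"k2", b"borg key material")
print(open_(b"k1", b"k2", blob))  # b'borg key material'
```

Verifying the MAC before decrypting is what makes encrypt\-then\-MAC preferable: tampered data is rejected without ever being fed to the cipher.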
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B REPOSITORY
repository to create
.UNINDENT
.SS optional arguments
.INDENT 0.0
.TP
.BI \-e \ MODE\fR,\fB \ \-\-encryption \ MODE
select encryption key mode \fB(required)\fP
.TP
.B \-\-append\-only
create an append\-only mode repository. Note that this only affects the low level structure of the repository, and running \fIdelete\fP or \fIprune\fP will still be allowed. See \fIappend_only_mode\fP in Additional Notes for more details.
.TP
.BI \-\-storage\-quota \ QUOTA
Set storage quota of the new repository (e.g. 5G, 1.5T). Default: no quota.
.TP
.B \-\-make\-parent\-dirs
create the parent directories of the repository directory, if they are missing.
.TP
.B \-\-key\-algorithm
the algorithm we use to derive a key encryption key from your passphrase. Default: argon2
.UNINDENT
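Quota values like \fB5G\fP or \fB1.5T\fP are plain size strings with a unit suffix. A hypothetical parser for such strings (binary units are assumed here for illustration; borg's own parser may differ, e.g. by using decimal units):

```python
def parse_quota(text: str) -> int:
    # Hypothetical helper, not borg's implementation.
    units = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4, "P": 1024**5}
    t = text.strip()
    if t and t[-1].upper() in units:
        return int(float(t[:-1]) * units[t[-1].upper()])
    return int(t)  # bare number of bytes

print(parse_quota("5G"))    # 5368709120
print(parse_quota("1.5T"))  # 1649267441664
```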
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# Local repository, recommended repokey AEAD crypto modes
$ borg init \-\-encryption=repokey\-aes\-ocb /path/to/repo
$ borg init \-\-encryption=repokey\-chacha20\-poly1305 /path/to/repo
$ borg init \-\-encryption=repokey\-blake2\-aes\-ocb /path/to/repo
$ borg init \-\-encryption=repokey\-blake2\-chacha20\-poly1305 /path/to/repo
# Local repository (no encryption), not recommended
$ borg init \-\-encryption=none /path/to/repo
# Remote repository (accesses a remote borg via ssh)
# repokey: stores the (encrypted) key into <REPO_DIR>/config
$ borg init \-\-encryption=repokey\-aes\-ocb user@hostname:backup
# Remote repository (accesses a remote borg via ssh)
# keyfile: stores the (encrypted) key into ~/.config/borg/keys/
$ borg init \-\-encryption=keyfile\-aes\-ocb user@hostname:backup
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-create(1)\fP, \fIborg\-delete(1)\fP, \fIborg\-check(1)\fP, \fIborg\-list(1)\fP, \fIborg\-key\-import(1)\fP, \fIborg\-key\-export(1)\fP, \fIborg\-key\-change\-passphrase(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-CHANGE-ALGORITHM 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-change-algorithm \- Change repository key algorithm
.
.nr rst2man-indent-level 0
.
@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-CHANGE-ALGORITHM" 1 "2022-06-26" "" "borg backup tool"
.SH NAME
borg-key-change-algorithm \- Change repository key algorithm
.SH SYNOPSIS
.sp
borg [common options] key change\-algorithm [options] ALGORITHM
borg [common options] key change\-algorithm [options] [REPOSITORY] ALGORITHM
.SH DESCRIPTION
.sp
Change the algorithm we use to encrypt and authenticate the borg key.
@ -77,6 +77,8 @@ borg key change\-algorithm /path/to/repo pbkdf2
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.INDENT 0.0
.TP
.B ALGORITHM

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-CHANGE-LOCATION 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-change-location \- Change repository key location
.
.nr rst2man-indent-level 0
.
@ -27,39 +30,35 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-CHANGE-LOCATION" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key-change-location \- Changes the repository key location.
.SH SYNOPSIS
.sp
borg [common options] key change\-location [options] KEY_LOCATION
borg [common options] key change\-location [options] [REPOSITORY] KEY_LOCATION
.SH DESCRIPTION
.sp
Change the location of a Borg key. The key can be stored at different locations:
.INDENT 0.0
.IP \(bu 2
keyfile: locally, usually in the home directory
.IP \(bu 2
repokey: inside the repository (in the repository config)
.UNINDENT
.sp
Please note:
.sp
This command does NOT change the crypto algorithms, just the key location,
thus you must ONLY give the key location (keyfile or repokey).
Change the location of a borg key. The key can be stored at different locations:
.sp
keyfile: locally, usually in the home directory
repokey: inside the repo (in the repo config)
.INDENT 0.0
.TP
.B Note: this command does NOT change the crypto algorithms, just the key location,
thus you must ONLY give the key location (keyfile or repokey).
.UNINDENT
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.INDENT 0.0
.TP
.B KEY_LOCATION
select key location
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-keep
keep the key also at the current location (default: remove it)
.UNINDENT
.SH SEE ALSO

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-CHANGE-PASSPHRASE 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-change-passphrase \- Change repository key file passphrase
.
.nr rst2man-indent-level 0
.
@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-CHANGE-PASSPHRASE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key-change-passphrase \- Changes the repository key file passphrase.
.SH SYNOPSIS
.sp
borg [common options] key change\-passphrase [options]
borg [common options] key change\-passphrase [options] [REPOSITORY]
.SH DESCRIPTION
.sp
The key files used for repository encryption are optionally passphrase
@ -45,25 +45,29 @@ does not protect future (nor past) backups to the same repository.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Create a key file protected repository
$ borg repo\-create \-\-encryption=keyfile\-aes\-ocb \-v
Initializing repository at \(dq/path/to/repo\(dq
Enter new passphrase:
Enter same passphrase again:
Remember your passphrase. Your data will be inaccessible without it.
Key in \(dq/root/.config/borg/keys/mnt_backup\(dq created.
Keep this key safe. Your data will be inaccessible without it.
Synchronizing chunks cache...
Archives: 0, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 0.
Done.
# Change key file passphrase
$ borg key change\-passphrase \-v
Enter passphrase for key /root/.config/borg/keys/mnt_backup:
Enter new passphrase:
Enter same passphrase again:
.nf
.ft C
# Create a key file protected repository
$ borg init \-\-encryption=keyfile \-v /path/to/repo
Initializing repository at "/path/to/repo"
Enter new passphrase:
Enter same passphrase again:
Remember your passphrase. Your data will be inaccessible without it.
Key in "/root/.config/borg/keys/mnt_backup" created.
Keep this key safe. Your data will be inaccessible without it.
Synchronizing chunks cache...
Archives: 0, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 0.
Done.
# Change key file passphrase
$ borg key change\-passphrase \-v /path/to/repo
Enter passphrase for key /root/.config/borg/keys/mnt_backup:
Enter new passphrase:
Enter same passphrase again:
@ -73,8 +77,9 @@ Key updated
# Import a previously\-exported key into the specified
# key file (creating or overwriting the output key)
# (keyfile repositories only)
$ BORG_KEY_FILE=/path/to/output\-key borg key import /path/to/exported
.EE
$ BORG_KEY_FILE=/path/to/output\-key borg key import /path/to/repo /path/to/exported
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@ -82,12 +87,14 @@ Fully automated using environment variables:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ BORG_NEW_PASSPHRASE=old borg repo\-create \-\-encryption=repokey\-aes\-ocb
# now \(dqold\(dq is the current passphrase.
$ BORG_PASSPHRASE=old BORG_NEW_PASSPHRASE=new borg key change\-passphrase
# now \(dqnew\(dq is the current passphrase.
.EE
.nf
.ft C
$ BORG_NEW_PASSPHRASE=old borg init \-e=repokey repo
# now "old" is the current passphrase.
$ BORG_PASSPHRASE=old BORG_NEW_PASSPHRASE=new borg key change\-passphrase repo
# now "new" is the current passphrase.
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-EXPORT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-export \- Export the repository key for backup
.
.nr rst2man-indent-level 0
.
@ -27,16 +30,13 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-EXPORT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key-export \- Exports the repository key for backup.
.SH SYNOPSIS
.sp
borg [common options] key export [options] [PATH]
borg [common options] key export [options] [REPOSITORY] [PATH]
.SH DESCRIPTION
.sp
If repository encryption is used, the repository is inaccessible
without the key. This command allows one to back up this essential key.
without the key. This command allows one to backup this essential key.
Note that the backup produced does not include the passphrase itself
(i.e. the exported key stays encrypted). In order to regain access to a
repository, one needs both the exported key and the original passphrase.
@ -56,38 +56,43 @@ For repositories using the repokey encryption the key is saved in the
repository in the config file. A backup is thus not strictly needed,
but guards against the repository becoming inaccessible if the file
is damaged for some reason.
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
borg key export /path/to/repo > encrypted\-key\-backup
borg key export \-\-paper /path/to/repo > encrypted\-key\-backup.txt
borg key export \-\-qr\-html /path/to/repo > encrypted\-key\-backup.html
# Or pass the output file as an argument instead of redirecting stdout:
borg key export /path/to/repo encrypted\-key\-backup
borg key export \-\-paper /path/to/repo encrypted\-key\-backup.txt
borg key export \-\-qr\-html /path/to/repo encrypted\-key\-backup.html
.ft P
.fi
.UNINDENT
.UNINDENT
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.INDENT 0.0
.TP
.B PATH
where to store the backup
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-paper
Create an export suitable for printing and later type\-in
.TP
.B \-\-qr\-html
Create an HTML file suitable for printing and later type\-in or QR scan
Create an html file suitable for printing and later type\-in or qr scan
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
borg key export > encrypted\-key\-backup
borg key export \-\-paper > encrypted\-key\-backup.txt
borg key export \-\-qr\-html > encrypted\-key\-backup.html
# Or pass the output file as an argument instead of redirecting stdout:
borg key export encrypted\-key\-backup
borg key export \-\-paper encrypted\-key\-backup.txt
borg key export \-\-qr\-html encrypted\-key\-backup.html
.EE
.UNINDENT
.UNINDENT .UNINDENT
.SH SEE ALSO
.sp

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-IMPORT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-import \- Import the repository key from backup
.
.nr rst2man-indent-level 0
.
@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-IMPORT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key-import \- Imports the repository key from backup.
.SH SYNOPSIS
.sp
borg [common options] key import [options] [PATH]
borg [common options] key import [options] [REPOSITORY] [PATH]
.SH DESCRIPTION
.sp
This command restores a key previously backed up with the export command.
@ -53,15 +53,17 @@ key import\fP creates a new key file in \fB$BORG_KEYS_DIR\fP\&.
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.INDENT 0.0
.TP
.B PATH
path to the backup (\(aq\-\(aq to read from stdin)
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-paper
interactively import from a backup done with \fB\-\-paper\fP
.UNINDENT
.SH SEE ALSO

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key \- Manage a keyfile or repokey of a repository
.
.nr rst2man-indent-level 0
.
@ -27,20 +30,18 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key \- Manage the keyfile or repokey of a repository
.SH SYNOPSIS
.nf
borg [common options] key export ...
borg [common options] key import ...
borg [common options] key change\-passphrase ...
borg [common options] key change\-location ...
borg [common options] key change\-algorithm ...
.fi
.sp
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-key\-export(1)\fP, \fIborg\-key\-import(1)\fP, \fIborg\-key\-change\-passphrase(1)\fP, \fIborg\-key\-change\-location(1)\fP
\fIborg\-common(1)\fP, \fIborg\-key\-export(1)\fP, \fIborg\-key\-import(1)\fP, \fIborg\-key\-change\-passphrase(1)\fP, \fIborg\-key\-change\-location(1)\fP, \fIborg\-key\-change\-algorithm(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-LIST 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-list \- List archive or repository contents
.
.nr rst2man-indent-level 0
.
@ -27,45 +30,63 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-LIST" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-list \- List archive contents.
.SH SYNOPSIS
.sp
borg [common options] list [options] NAME [PATH...]
borg [common options] list [options] [REPOSITORY_OR_ARCHIVE] [PATH...]
.SH DESCRIPTION
.sp
This command lists the contents of an archive.
This command lists the contents of a repository or an archive.
.sp
For more help on include/exclude patterns, see the output of \fIborg_patterns\fP\&.
For more help on include/exclude patterns, see the \fIborg_patterns\fP command output.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
.B REPOSITORY_OR_ARCHIVE
specify the archive name
repository or archive to list contents of
.TP
.B PATH
paths to list; patterns are supported
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-short
.B \-\-consider\-checkpoints
Show checkpoint archives in the repository contents list (default: hidden).
.TP
.B \-\-short
only print file/directory names, nothing else
.TP
.BI \-\-format \ FORMAT
specify format for file listing (default: \(dq{mode} {user:6} {group:6} {size:8} {mtime} {path}{extra}{NL}\(dq)
specify format for file or archive listing (default for files: "{mode} {user:6} {group:6} {size:8} {mtime} {path}{extra}{NL}"; for archives: "{archive:<36} {time} [{id}]{NL}")
.TP
.B \-\-json\-lines
.B \-\-json
Format output as JSON Lines. The form of \fB\-\-format\fP is ignored, but keys used in it are added to the JSON output. Some keys are always present. Note: JSON can only represent text. Only valid for listing repository contents.
Format output as JSON. The form of \fB\-\-format\fP is ignored, but keys used in it are added to the JSON output. Some keys are always present. Note: JSON can only represent text. A "barchive" key is therefore not available.
.TP
.BI \-\-depth \ N
.B \-\-json\-lines
only list files up to the specified directory depth
Only valid for listing archive contents. Format output as JSON Lines. The form of \fB\-\-format\fP is ignored, but keys used in it are added to the JSON output. Some keys are always present. Note: JSON can only represent text. A "bpath" key is therefore not available.
.UNINDENT
.SS Include/Exclude options
.SS Archive filters
.INDENT 0.0
.TP
.BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archive names starting with this prefix.
.TP
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, name, id; default is: timestamp
.TP
.BI \-\-first \ N
consider first N archives after other filters were applied
.TP
.BI \-\-last \ N
consider last N archives after other filters were applied
.UNINDENT
.SS Exclusion options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@ -84,8 +105,16 @@ read include/exclude patterns from PATTERNFILE, one per line
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg list root\-2016\-02\-15
.nf
.ft C
$ borg list /path/to/repo
Monday Mon, 2016\-02\-15 19:15:11
repo Mon, 2016\-02\-15 19:26:54
root\-2016\-02\-15 Mon, 2016\-02\-15 19:36:29
newname Mon, 2016\-02\-15 19:50:19
\&...
$ borg list /path/to/repo::root\-2016\-02\-15
drwxr\-xr\-x root root 0 Mon, 2016\-02\-15 17:44:27 .
drwxrwxr\-x root root 0 Mon, 2016\-02\-15 19:04:49 bin
\-rwxr\-xr\-x root root 1029624 Thu, 2014\-11\-13 00:08:51 bin/bash
@ -93,14 +122,14 @@ lrwxrwxrwx root root 0 Fri, 2015\-03\-27 20:24:26 bin/bzcmp \-> bzdif
\-rwxr\-xr\-x root root 2140 Fri, 2015\-03\-27 20:24:22 bin/bzdiff
\&...
$ borg list root\-2016\-02\-15 \-\-pattern \(dq\- bin/ba*\(dq
$ borg list /path/to/repo::root\-2016\-02\-15 \-\-pattern "\- bin/ba*"
drwxr\-xr\-x root root 0 Mon, 2016\-02\-15 17:44:27 .
drwxrwxr\-x root root 0 Mon, 2016\-02\-15 19:04:49 bin
lrwxrwxrwx root root 0 Fri, 2015\-03\-27 20:24:26 bin/bzcmp \-> bzdiff
\-rwxr\-xr\-x root root 2140 Fri, 2015\-03\-27 20:24:22 bin/bzdiff
\&...
$ borg list archiveA \-\-format=\(dq{mode} {user:6} {group:6} {size:8d} {isomtime} {path}{extra}{NEWLINE}\(dq
$ borg list /path/to/repo::archiveA \-\-format="{mode} {user:6} {group:6} {size:8d} {isomtime} {path}{extra}{NEWLINE}"
drwxrwxr\-x user user 0 Sun, 2015\-02\-01 11:00:00 .
drwxrwxr\-x user user 0 Sun, 2015\-02\-01 11:00:00 code
drwxrwxr\-x user user 0 Sun, 2015\-02\-01 11:00:00 code/myproject
@ -108,38 +137,50 @@ drwxrwxr\-x user user 0 Sun, 2015\-02\-01 11:00:00 code/myproject
\-rw\-rw\-r\-\- user user 1416192 Sun, 2015\-02\-01 11:00:00 code/myproject/file.text
\&...
$ borg list archiveA \-\-pattern \(aq+ re:\e.ext$\(aq \-\-pattern \(aq\- re:^.*$\(aq
$ borg list /path/to/repo/::archiveA \-\-pattern \(aqre:\e.ext$\(aq
\-rw\-rw\-r\-\- user user 1416192 Sun, 2015\-02\-01 11:00:00 code/myproject/file.ext
\&...
$ borg list archiveA \-\-pattern \(aq+ re:.ext$\(aq \-\-pattern \(aq\- re:^.*$\(aq
$ borg list /path/to/repo/::archiveA \-\-pattern \(aqre:.ext$\(aq
\-rw\-rw\-r\-\- user user 1416192 Sun, 2015\-02\-01 11:00:00 code/myproject/file.ext
\-rw\-rw\-r\-\- user user 1416192 Sun, 2015\-02\-01 11:00:00 code/myproject/file.text
\&...
.EE
.ft P
.fi
.UNINDENT
.UNINDENT
.SH NOTES
.SS The FORMAT specifier syntax
.sp
The \fB\-\-format\fP option uses Python\(aqs format string syntax <https://docs.python.org/3.10/library/string.html#formatstrings>
\&.
The \fB\-\-format\fP option uses python\(aqs \fI\%format string syntax\fP\&.
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg list \-\-format \(aq{mode} {user:6} {group:6} {size:8} {mtime} {path}{extra}{NL}\(aq ArchiveFoo
.nf
.ft C
$ borg list \-\-format \(aq{archive}{NL}\(aq /path/to/repo
ArchiveFoo
ArchiveBar
\&...
# {VAR:NUMBER} \- pad to NUMBER columns.
# Strings are left\-aligned, numbers are right\-aligned.
# Note: time columns except \(ga\(gaisomtime\(ga\(ga, \(ga\(gaisoctime\(ga\(ga and \(ga\(gaisoatime\(ga\(ga cannot be padded.
$ borg list \-\-format \(aq{archive:36} {time} [{id}]{NL}\(aq /path/to/repo
ArchiveFoo                           Thu, 2021\-12\-09 10:22:28 [0b8e9a312bef3f2f6e2d0fc110c196827786c15eba0188738e81697a7fa3b274]
$ borg list \-\-format \(aq{mode} {user:6} {group:6} {size:8} {mtime} {path}{extra}{NL}\(aq /path/to/repo::ArchiveFoo
\-rw\-rw\-r\-\- user user 1024 Thu, 2021\-12\-09 10:22:17 file\-foo
\&...
# {VAR:<NUMBER} \- pad to NUMBER columns left\-aligned.
# {VAR:>NUMBER} \- pad to NUMBER columns right\-aligned.
$ borg list \-\-format \(aq{mode} {user:>6} {group:>6} {size:<8} {mtime} {path}{extra}{NL}\(aq ArchiveFoo $ borg list \-\-format \(aq{mode} {user:>6} {group:>6} {size:<8} {mtime} {path}{extra}{NL}\(aq /path/to/repo::ArchiveFoo
\-rw\-rw\-r\-\- user user 1024 Thu, 2021\-12\-09 10:22:17 file\-foo \-rw\-rw\-r\-\- user user 1024 Thu, 2021\-12\-09 10:22:17 file\-foo
\&... \&...
.EE .ft P
.fi
.UNINDENT .UNINDENT
.UNINDENT .UNINDENT
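The `{VAR:NUMBER}` padding rules shown in the examples above are plain Python `str.format` field specifications. A minimal sketch of the same alignment behavior, using made-up field values (not produced by borg itself):

```python
# Padding behavior of format fields as used by borg's --format option.
# Python's str.format left-aligns strings and right-aligns numbers by default.
row = "{mode} {user:6} {group:6} {size:8} {path}"
line = row.format(mode="-rw-rw-r--", user="user", group="user",
                  size=1024, path="file-foo")
print(line)  # 'user' padded right to 6 cols, 1024 padded left to 8 cols

# Explicit alignment, as in the {VAR:<NUMBER} / {VAR:>NUMBER} examples:
print("{size:<8}|".format(size=1024))  # '1024    |'
print("{size:>8}|".format(size=1024))  # '    1024|'
```

The `<` and `>` alignment flags override the per-type defaults, which is why `{size:<8}` makes a number left-aligned.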
.sp .sp
@@ -150,57 +191,93 @@ NEWLINE: OS dependent line separator
.IP \(bu 2 .IP \(bu 2
NL: alias of NEWLINE NL: alias of NEWLINE
.IP \(bu 2 .IP \(bu 2
NUL: NUL character for creating print0 / xargs \-0 like output NUL: NUL character for creating print0 / xargs \-0 like output, see barchive and bpath keys below
.IP \(bu 2 .IP \(bu 2
SPACE: space character SPACE
.IP \(bu 2 .IP \(bu 2
TAB: tab character TAB
.IP \(bu 2 .IP \(bu 2
CR: carriage return character CR
.IP \(bu 2 .IP \(bu 2
LF: line feed character LF
.UNINDENT
.sp
Keys available only when listing archives in a repository:
.INDENT 0.0
.IP \(bu 2
archive: archive name interpreted as text (might be missing non\-text characters, see barchive)
.IP \(bu 2
name: alias of "archive"
.IP \(bu 2
barchive: verbatim archive name, can contain any character except NUL
.IP \(bu 2
comment: archive comment interpreted as text (might be missing non\-text characters, see bcomment)
.IP \(bu 2
bcomment: verbatim archive comment, can contain any character except NUL
.IP \(bu 2
id: internal ID of the archive
.IP \(bu 2
start: time (start) of creation of the archive
.IP \(bu 2
time: alias of "start"
.IP \(bu 2
end: time (end) of creation of the archive
.IP \(bu 2
command_line: command line which was used to create the archive
.IP \(bu 2
hostname: hostname of host on which this archive was created
.IP \(bu 2
username: username of user who created this archive
.UNINDENT .UNINDENT
.sp .sp
Keys available only when listing files in an archive: Keys available only when listing files in an archive:
.INDENT 0.0 .INDENT 0.0
.IP \(bu 2 .IP \(bu 2
type: file type (file, dir, symlink, ...) type
.IP \(bu 2 .IP \(bu 2
mode: file mode (as in stat) mode
.IP \(bu 2 .IP \(bu 2
uid: user id of file owner uid
.IP \(bu 2 .IP \(bu 2
gid: group id of file owner gid
.IP \(bu 2 .IP \(bu 2
user: user name of file owner user
.IP \(bu 2 .IP \(bu 2
group: group name of file owner group
.IP \(bu 2 .IP \(bu 2
path: file path path: path interpreted as text (might be missing non\-text characters, see bpath)
.IP \(bu 2 .IP \(bu 2
target: link target for symlinks bpath: verbatim POSIX path, can contain any character except NUL
.IP \(bu 2 .IP \(bu 2
hlid: hard link identity (same if hardlinking same fs object) source: link target for links (identical to linktarget)
.IP \(bu 2 .IP \(bu 2
inode: inode number linktarget
.IP \(bu 2 .IP \(bu 2
flags: file flags flags
.IP \(bu 2 .IP \(bu 2
size: file size size
.IP \(bu 2
csize: compressed size
.IP \(bu 2
dsize: deduplicated size
.IP \(bu 2
dcsize: deduplicated compressed size
.IP \(bu 2 .IP \(bu 2
num_chunks: number of chunks in this file num_chunks: number of chunks in this file
.IP \(bu 2 .IP \(bu 2
mtime: file modification time unique_chunks: number of unique chunks in this file
.IP \(bu 2 .IP \(bu 2
ctime: file change time mtime
.IP \(bu 2 .IP \(bu 2
atime: file access time ctime
.IP \(bu 2 .IP \(bu 2
isomtime: file modification time (ISO 8601 format) atime
.IP \(bu 2 .IP \(bu 2
isoctime: file change time (ISO 8601 format) isomtime
.IP \(bu 2 .IP \(bu 2
isoatime: file access time (ISO 8601 format) isoctime
.IP \(bu 2
isoatime
.IP \(bu 2 .IP \(bu 2
blake2b blake2b
.IP \(bu 2 .IP \(bu 2
@@ -228,15 +305,17 @@ sha512
.IP \(bu 2 .IP \(bu 2
xxh64: XXH64 checksum of this file (note: this is NOT a cryptographic hash!) xxh64: XXH64 checksum of this file (note: this is NOT a cryptographic hash!)
.IP \(bu 2 .IP \(bu 2
archiveid: internal ID of the archive archiveid
.IP \(bu 2 .IP \(bu 2
archivename: name of the archive archivename
.IP \(bu 2 .IP \(bu 2
extra: prepends {target} with \(dq \-> \(dq for soft links and \(dq link to \(dq for hard links extra: prepends {source} with " \-> " for soft links and " link to " for hard links
.IP \(bu 2
health: either "healthy" (file ok) or "broken" (if file has all\-zero replacement chunks)
.UNINDENT .UNINDENT
.SH SEE ALSO .SH SEE ALSO
.sp .sp
\fIborg\-common(1)\fP, \fIborg\-info(1)\fP, \fIborg\-diff(1)\fP, \fIborg\-prune(1)\fP, \fIborg\-patterns(1)\fP, \fIborg\-repo\-list(1)\fP \fIborg\-common(1)\fP, \fIborg\-info(1)\fP, \fIborg\-diff(1)\fP, \fIborg\-prune(1)\fP, \fIborg\-patterns(1)\fP
.SH AUTHOR .SH AUTHOR
The Borg Collective The Borg Collective
.\" Generated by docutils manpage writer. .\" Generated by docutils manpage writer.
@@ -1,99 +0,0 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-MATCH-ARCHIVES" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-match-archives \- Details regarding match-archives
.SH DESCRIPTION
.sp
The \fB\-\-match\-archives\fP option matches a given pattern against the list of all archives
in the repository. It can be given multiple times.
.sp
The patterns can have a prefix of:
.INDENT 0.0
.IP \(bu 2
name: pattern match on the archive name (default)
.IP \(bu 2
aid: prefix match on the archive id (only one result allowed)
.IP \(bu 2
user: exact match on the username who created the archive
.IP \(bu 2
host: exact match on the hostname where the archive was created
.IP \(bu 2
tags: match on the archive tags
.UNINDENT
.sp
In case of a name pattern match,
it uses pattern styles similar to the ones described by \fBborg help patterns\fP:
.INDENT 0.0
.TP
.B Identical match pattern, selector \fBid:\fP (default)
Simple string match, must fully match exactly as given.
.TP
.B Shell\-style patterns, selector \fBsh:\fP
Match like on the shell, wildcards like \fI*\fP and \fI?\fP work.
.TP
.B Regular expressions <https://docs.python.org/3/library/re.html>
, selector \fBre:\fP
Full regular expression support.
This is very powerful, but can also get rather complicated.
.UNINDENT
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# name match, id: style
borg delete \-\-match\-archives \(aqid:archive\-with\-crap\(aq
borg delete \-a \(aqid:archive\-with\-crap\(aq # same, using short option
borg delete \-a \(aqarchive\-with\-crap\(aq # same, because \(aqid:\(aq is the default
# name match, sh: style
borg delete \-a \(aqsh:home\-kenny\-*\(aq
# name match, re: style
borg delete \-a \(aqre:pc[123]\-home\-(user1|user2)\-2022\-09\-.*\(aq
# archive id prefix match:
borg delete \-a \(aqaid:d34db33f\(aq
# host or user match
borg delete \-a \(aquser:kenny\(aq
borg delete \-a \(aqhost:kenny\-pc\(aq
# tags match
borg delete \-a \(aqtags:TAG1\(aq \-a \(aqtags:TAG2\(aq
.EE
.UNINDENT
.UNINDENT
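The three name-match styles above (identical `id:`, shell-style `sh:`, regex `re:`) can be approximated in a few lines of Python. This is an illustrative sketch only, not borg's actual matcher; the archive names and the `match` helper are made up:

```python
import fnmatch
import re

archives = ["home-kenny-2022-09-01", "pc1-home-user1-2022-09-02",
            "archive-with-crap"]

def match(pattern, names):
    """Rough approximation of borg's archive name-match styles."""
    style, sep, rest = pattern.partition(":")
    if sep and style == "id":   # exact string match
        return [n for n in names if n == rest]
    if sep and style == "sh":   # shell-style wildcards
        return [n for n in names if fnmatch.fnmatchcase(n, rest)]
    if sep and style == "re":   # regular expression
        rx = re.compile(rest)
        return [n for n in names if rx.search(n)]
    return [n for n in names if n == pattern]  # 'id:' is the default style

print(match("sh:home-kenny-*", archives))
print(match("re:pc[123]-home-(user1|user2)-2022-09-.*", archives))
```

Note how a pattern without a style selector falls through to the exact-match default, mirroring the "because 'id:' is the default" example above.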
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.
@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText. .\" Man page generated from reStructuredText.
. .
.TH BORG-MOUNT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-mount \- Mount archive or an entire repository as a FUSE filesystem
. .
.nr rst2man-indent-level 0 .nr rst2man-indent-level 0
. .
@@ -27,37 +30,15 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
.. ..
.TH "BORG-MOUNT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-mount \- Mounts an archive or an entire repository as a FUSE filesystem.
.SH SYNOPSIS .SH SYNOPSIS
.sp .sp
borg [common options] mount [options] MOUNTPOINT [PATH...] borg [common options] mount [options] REPOSITORY_OR_ARCHIVE MOUNTPOINT [PATH...]
.SH DESCRIPTION .SH DESCRIPTION
.sp .sp
This command mounts a repository or an archive as a FUSE filesystem. This command mounts an archive as a FUSE filesystem. This can be useful for
This can be useful for browsing or restoring individual files. browsing an archive or restoring individual files. Unless the \fB\-\-foreground\fP
.sp option is given the command will run in the background until the filesystem
When restoring, take into account that the current FUSE implementation does is \fBumounted\fP\&.
not support special fs flags and ACLs.
.sp
When mounting a repository, the top directories will be named like the
archives and the directory structure below these will be loaded on\-demand from
the repository when entering these directories, so expect some delay.
.sp
Unless the \fB\-\-foreground\fP option is given, the command will run in the
background until the filesystem is \fBunmounted\fP\&.
.sp
Performance tips:
.INDENT 0.0
.IP \(bu 2
When doing a \(dqwhole repository\(dq mount:
do not enter archive directories if not needed; this avoids on\-demand loading.
.IP \(bu 2
Only mount a specific archive, not the whole repository.
.IP \(bu 2
Only mount specific paths in a specific archive, not the complete archive.
.UNINDENT
.sp .sp
The command \fBborgfs\fP provides a wrapper for \fBborg mount\fP\&. This can also be The command \fBborgfs\fP provides a wrapper for \fBborg mount\fP\&. This can also be
used in fstab entries: used in fstab entries:
@@ -69,107 +50,100 @@ To allow a regular user to use fstab entries, add the \fBuser\fP option:
For FUSE configuration and mount options, see the mount.fuse(8) manual page. For FUSE configuration and mount options, see the mount.fuse(8) manual page.
.sp .sp
Borg\(aqs default behavior is to use the archived user and group names of each Borg\(aqs default behavior is to use the archived user and group names of each
file and map them to the system\(aqs respective user and group IDs. file and map them to the system\(aqs respective user and group ids.
Alternatively, using \fBnumeric\-ids\fP will instead use the archived user and Alternatively, using \fBnumeric\-ids\fP will instead use the archived user and
group IDs without any mapping. group ids without any mapping.
.sp .sp
The \fBuid\fP and \fBgid\fP mount options (implemented by Borg) can be used to The \fBuid\fP and \fBgid\fP mount options (implemented by Borg) can be used to
override the user and group IDs of all files (i.e., \fBborg mount \-o override the user and group ids of all files (i.e., \fBborg mount \-o
uid=1000,gid=1000\fP). uid=1000,gid=1000\fP).
.sp .sp
The man page references \fBuser_id\fP and \fBgroup_id\fP mount options The man page references \fBuser_id\fP and \fBgroup_id\fP mount options
(implemented by FUSE) which specify the user and group ID of the mount owner (implemented by fuse) which specify the user and group id of the mount owner
(also known as the user who does the mounting). It is set automatically by libfuse (or (aka, the user who does the mounting). It is set automatically by libfuse (or
the filesystem if libfuse is not used). However, you should not specify these the filesystem if libfuse is not used). However, you should not specify these
manually. Unlike the \fBuid\fP and \fBgid\fP mount options, which affect all files, manually. Unlike the \fBuid\fP and \fBgid\fP mount options which affect all files,
\fBuser_id\fP and \fBgroup_id\fP affect the user and group ID of the mounted \fBuser_id\fP and \fBgroup_id\fP affect the user and group id of the mounted
(base) directory. (base) directory.
.sp .sp
Additional mount options supported by Borg: Additional mount options supported by borg:
.INDENT 0.0 .INDENT 0.0
.IP \(bu 2 .IP \(bu 2
\fBversions\fP: when used with a repository mount, this gives a merged, versioned versions: when used with a repository mount, this gives a merged, versioned
view of the files in the archives. EXPERIMENTAL; layout may change in the future. view of the files in the archives. EXPERIMENTAL, layout may change in future.
.IP \(bu 2 .IP \(bu 2
\fBallow_damaged_files\fP: by default, damaged files (where chunks are missing) allow_damaged_files: by default damaged files (where missing chunks were
will return EIO (I/O error) when trying to read the related parts of the file. replaced with runs of zeros by borg check \fB\-\-repair\fP) are not readable and
Set this option to replace the missing parts with all\-zero bytes. return EIO (I/O error). Set this option to read such files.
.IP \(bu 2 .IP \(bu 2
\fBignore_permissions\fP: for security reasons the \fBdefault_permissions\fP mount ignore_permissions: for security reasons the "default_permissions" mount
option is internally enforced by Borg. \fBignore_permissions\fP can be given to option is internally enforced by borg. "ignore_permissions" can be given to
not enforce \fBdefault_permissions\fP\&. not enforce "default_permissions".
.UNINDENT .UNINDENT
.sp .sp
The BORG_MOUNT_DATA_CACHE_ENTRIES environment variable is intended for advanced users The BORG_MOUNT_DATA_CACHE_ENTRIES environment variable is meant for advanced users
to tweak performance. It sets the number of cached data chunks; additional to tweak the performance. It sets the number of cached data chunks; additional
memory usage can be up to ~8 MiB times this number. The default is the number memory usage can be up to ~8 MiB times this number. The default is the number
of CPU cores. of CPU cores.
.sp .sp
When the daemonized process receives a signal or crashes, it does not unmount. When the daemonized process receives a signal or crashes, it does not unmount.
Unmounting in these cases could cause an active rsync or similar process Unmounting in these cases could cause an active rsync or similar process
to delete data unintentionally. to unintentionally delete data.
.sp .sp
When running in the foreground, ^C/SIGINT cleanly unmounts the filesystem, When running in the foreground ^C/SIGINT unmounts cleanly, but other
but other signals or crashes do not. signals or crashes do not.
.sp
Debugging:
.sp
\fBborg mount\fP usually daemonizes and the daemon process sends stdout/stderr
to /dev/null. Thus, you need to either use \fB\-f / \-\-foreground\fP to make it stay
in the foreground and not daemonize, or use \fBBORG_LOGGING_CONF\fP to reconfigure
the logger to output to a file.
.SH OPTIONS .SH OPTIONS
.sp .sp
See \fIborg\-common(1)\fP for common options of Borg commands. See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments .SS arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B REPOSITORY_OR_ARCHIVE
repository or archive to mount
.TP
.B MOUNTPOINT .B MOUNTPOINT
where to mount the filesystem where to mount filesystem
.TP .TP
.B PATH .B PATH
paths to extract; patterns are supported paths to extract; patterns are supported
.UNINDENT .UNINDENT
.SS options .SS optional arguments
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B \-f\fP,\fB \-\-foreground .B \-\-consider\-checkpoints
Show checkpoint archives in the repository contents list (default: hidden).
.TP
.B \-f\fP,\fB \-\-foreground
stay in foreground, do not daemonize stay in foreground, do not daemonize
.TP .TP
.B \-o .B \-o
extra mount options Extra mount options
.TP .TP
.B \-\-numeric\-ids .B \-\-numeric\-owner
use numeric user and group identifiers from archives deprecated, use \fB\-\-numeric\-ids\fP instead
.TP
.B \-\-numeric\-ids
use numeric user and group identifiers from archive(s)
.UNINDENT .UNINDENT
.SS Archive filters .SS Archive filters
.INDENT 0.0 .INDENT 0.0
.TP .TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN .BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq. only consider archive names starting with this prefix.
.TP
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.TP .TP
.BI \-\-sort\-by \ KEYS .BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp Comma\-separated list of sorting keys; valid keys are: timestamp, name, id; default is: timestamp
.TP .TP
.BI \-\-first \ N .BI \-\-first \ N
consider the first N archives after other filters are applied consider first N archives after other filters were applied
.TP .TP
.BI \-\-last \ N .BI \-\-last \ N
consider the last N archives after other filters are applied consider last N archives after other filters were applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.UNINDENT .UNINDENT
.SS Include/Exclude options .SS Exclusion options
.INDENT 0.0 .INDENT 0.0
.TP .TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN .BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText. .\" Man page generated from reStructuredText.
. .
.TH BORG-PATTERNS 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-patterns \- Details regarding patterns
. .
.nr rst2man-indent-level 0 .nr rst2man-indent-level 0
. .
@@ -27,58 +30,45 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u .in \\n[rst2man-indent\\n[rst2man-indent-level]]u
.. ..
.TH "BORG-PATTERNS" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-patterns \- Details regarding patterns
.SH DESCRIPTION .SH DESCRIPTION
.sp .sp
When specifying one or more file paths in a Borg command that supports The path/filenames used as input for the pattern matching start from the
patterns for the respective option or argument, you can apply the
patterns described here to include only desired files and/or exclude
unwanted ones. Patterns can be used
.INDENT 0.0
.IP \(bu 2
for \fB\-\-exclude\fP option,
.IP \(bu 2
in the file given with \fB\-\-exclude\-from\fP option,
.IP \(bu 2
for \fB\-\-pattern\fP option,
.IP \(bu 2
in the file given with \fB\-\-patterns\-from\fP option and
.IP \(bu 2
for \fBPATH\fP arguments that explicitly support them.
.UNINDENT
.sp
The path/filenames used as input for the pattern matching start with the
currently active recursion root. You usually give the recursion root(s) currently active recursion root. You usually give the recursion root(s)
when invoking borg and these can be either relative or absolute paths. when invoking borg and these can be either relative or absolute paths.
.sp .sp
Be careful, your patterns must match the archived paths: So, when you give \fIrelative/\fP as root, the paths going into the matcher
.INDENT 0.0 will look like \fIrelative/.../file.ext\fP\&. When you give \fI/absolute/\fP as
.IP \(bu 2 root, they will look like \fI/absolute/.../file.ext\fP\&.
Archived paths never start with a leading slash (\(aq/\(aq), nor with \(aq.\(aq, nor with \(aq..\(aq.
.INDENT 2.0
.IP \(bu 2
When you back up absolute paths like \fB/home/user\fP, the archived
paths start with \fBhome/user\fP\&.
.IP \(bu 2
When you back up relative paths like \fB\&./src\fP, the archived paths
start with \fBsrc\fP\&.
.IP \(bu 2
When you back up relative paths like \fB\&../../src\fP, the archived paths
start with \fBsrc\fP\&.
.UNINDENT
.UNINDENT
.sp .sp
Borg supports different pattern styles. To define a non\-default File paths in Borg archives are always stored normalized and relative.
style for a specific pattern, prefix it with two characters followed This means that e.g. \fBborg create /path/to/repo ../some/path\fP will
by a colon \(aq:\(aq (i.e. \fBfm:path/*\fP, \fBsh:path/**\fP). store all files as \fIsome/path/.../file.ext\fP and \fBborg create
/path/to/repo /home/user\fP will store all files as
\fIhome/user/.../file.ext\fP\&.
.sp .sp
The default pattern style for \fB\-\-exclude\fP differs from \fB\-\-pattern\fP, see below. A directory exclusion pattern can end either with or without a slash (\(aq/\(aq).
If it ends with a slash, such as \fIsome/path/\fP, the directory will be
included but not its content. If it does not end with a slash, such as
\fIsome/path\fP, both the directory and content will be excluded.
.sp
File patterns support these styles: fnmatch, shell, regular expressions,
path prefixes and path full\-matches. By default, fnmatch is used for
\fB\-\-exclude\fP patterns and shell\-style is used for the \fB\-\-pattern\fP
option. For commands that support patterns in their \fBPATH\fP argument
like (\fBborg list\fP), the default pattern is path prefix.
.sp
Starting with Borg 1.2, for all but regular expression pattern matching
styles, all paths are treated as relative, meaning that a leading path
separator is removed after normalizing and before matching. This allows
you to use absolute or relative patterns arbitrarily.
.sp
If followed by a colon (\(aq:\(aq) the first two characters of a pattern are
used as a style selector. Explicit style selection is necessary when a
non\-default style is desired or when the desired pattern starts with
two alphanumeric characters followed by a colon (i.e. \fIaa:something/*\fP).
.INDENT 0.0 .INDENT 0.0
.TP .TP
.B Fnmatch <https://docs.python.org/3/library/fnmatch.html> .B \fI\%Fnmatch\fP, selector \fIfm:\fP
, selector \fBfm:\fP
This is the default style for \fB\-\-exclude\fP and \fB\-\-exclude\-from\fP\&. This is the default style for \fB\-\-exclude\fP and \fB\-\-exclude\-from\fP\&.
These patterns use a variant of shell pattern syntax, with \(aq*\(aq matching These patterns use a variant of shell pattern syntax, with \(aq*\(aq matching
any number of characters, \(aq?\(aq matching any single character, \(aq[...]\(aq any number of characters, \(aq?\(aq matching any single character, \(aq[...]\(aq
@@ -86,7 +76,7 @@ matching any single character specified, including ranges, and \(aq[!...]\(aq
matching any character not specified. For the purpose of these patterns, matching any character not specified. For the purpose of these patterns,
the path separator (backslash for Windows and \(aq/\(aq on other systems) is not the path separator (backslash for Windows and \(aq/\(aq on other systems) is not
treated specially. Wrap meta\-characters in brackets for a literal treated specially. Wrap meta\-characters in brackets for a literal
match (i.e. \fB[?]\fP to match the literal character \(aq?\(aq). For a path match (i.e. \fI[?]\fP to match the literal character \fI?\fP). For a path
to match a pattern, the full path must match, or it must match to match a pattern, the full path must match, or it must match
from the start of the full path to just before a path separator. Except from the start of the full path to just before a path separator. Except
for the root path, paths will never end in the path separator when for the root path, paths will never end in the path separator when
@@ -94,33 +84,33 @@ matching is attempted. Thus, if a given pattern ends in a path
separator, a \(aq*\(aq is appended before matching is attempted. A leading separator, a \(aq*\(aq is appended before matching is attempted. A leading
path separator is always removed. path separator is always removed.
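The fm: behavior described above (path separator not treated specially, brackets for literal meta-characters) matches Python's own fnmatch module, which the pattern style is based on. A small sketch with made-up paths:

```python
import fnmatch

# In fm: patterns '*' is not stopped by the path separator,
# so '*.o' can match a file at any depth:
print(fnmatch.fnmatchcase("home/user/file.o", "*.o"))     # True

# '[?]' matches a literal question mark ...
print(fnmatch.fnmatchcase("what?.txt", "what[?].txt"))    # True
# ... while a bare '?' matches any single character:
print(fnmatch.fnmatchcase("whatX.txt", "what?.txt"))      # True
```

Borg additionally accepts a match that ends just before a path separator, which plain `fnmatch` does not do; that part is not shown here.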
.TP .TP
.B Shell\-style patterns, selector \fBsh:\fP .B Shell\-style patterns, selector \fIsh:\fP
This is the default style for \fB\-\-pattern\fP and \fB\-\-patterns\-from\fP\&. This is the default style for \fB\-\-pattern\fP and \fB\-\-patterns\-from\fP\&.
Like fnmatch patterns these are similar to shell patterns. The difference Like fnmatch patterns these are similar to shell patterns. The difference
is that the pattern may include \fB**/\fP for matching zero or more directory is that the pattern may include \fI**/\fP for matching zero or more directory
levels, \fB*\fP for matching zero or more arbitrary characters with the levels, \fI*\fP for matching zero or more arbitrary characters with the
exception of any path separator, \fB{}\fP containing comma\-separated exception of any path separator. A leading path separator is always removed.
alternative patterns. A leading path separator is always removed.
.TP .TP
.B Regular expressions <https://docs.python.org/3/library/re.html> .B Regular expressions, selector \fIre:\fP
, selector \fBre:\fP Regular expressions similar to those found in Perl are supported. Unlike
Unlike shell patterns, regular expressions are not required to match the full shell patterns regular expressions are not required to match the full
path and any substring match is sufficient. It is strongly recommended to path and any substring match is sufficient. It is strongly recommended to
anchor patterns to the start (\(aq^\(aq), to the end (\(aq$\(aq) or both. Path anchor patterns to the start (\(aq^\(aq), to the end (\(aq$\(aq) or both. Path
separators (backslash for Windows and \(aq/\(aq on other systems) in paths are separators (backslash for Windows and \(aq/\(aq on other systems) in paths are
always normalized to a forward slash \(aq/\(aq before applying a pattern. always normalized to a forward slash (\(aq/\(aq) before applying a pattern. The
regular expression syntax is described in the \fI\%Python documentation for
the re module\fP\&.
.TP .TP
.B Path prefix, selector \fBpp:\fP .B Path prefix, selector \fIpp:\fP
This pattern style is useful to match whole subdirectories. The pattern This pattern style is useful to match whole sub\-directories. The pattern
\fBpp:root/somedir\fP matches \fBroot/somedir\fP and everything therein. \fIpp:root/somedir\fP matches \fIroot/somedir\fP and everything therein. A leading
A leading path separator is always removed. path separator is always removed.
.TP .TP
.B Path full\-match, selector \fBpf:\fP .B Path full\-match, selector \fIpf:\fP
This pattern style is (only) useful to match full paths. This pattern style is (only) useful to match full paths.
This is kind of a pseudo pattern as it cannot have any variable or This is kind of a pseudo pattern as it can not have any variable or
unspecified parts \- the full path must be given. \fBpf:root/file.ext\fP unspecified parts \- the full path must be given. \fIpf:root/file.ext\fP matches
matches \fBroot/file.ext\fP only. A leading path separator is always \fIroot/file.ext\fP only. A leading path separator is always removed.
removed.
.sp .sp
Implementation note: this is implemented via very time\-efficient O(1) Implementation note: this is implemented via very time\-efficient O(1)
hashtable lookups (this means you can have huge amounts of such patterns hashtable lookups (this means you can have huge amounts of such patterns
@@ -135,12 +125,12 @@ Same logic applies for exclude.
\fBNOTE:\fP \fBNOTE:\fP
.INDENT 0.0 .INDENT 0.0
.INDENT 3.5 .INDENT 3.5
\fBre:\fP, \fBsh:\fP and \fBfm:\fP patterns are all implemented on top of \fIre:\fP, \fIsh:\fP and \fIfm:\fP patterns are all implemented on top of the Python SRE
the Python SRE engine. It is very easy to formulate patterns for each engine. It is very easy to formulate patterns for each of these types which
of these types which requires an inordinate amount of time to match requires an inordinate amount of time to match paths. If untrusted users
paths. If untrusted users are able to supply patterns, ensure they are able to supply patterns, ensure they cannot supply \fIre:\fP patterns.
cannot supply \fBre:\fP patterns. Further, ensure that \fBsh:\fP and Further, ensure that \fIsh:\fP and \fIfm:\fP patterns only contain a handful of
\fBfm:\fP patterns only contain a handful of wildcards at most. wildcards at most.
.UNINDENT .UNINDENT
.UNINDENT .UNINDENT
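The O(1) hashtable lookup mentioned for the pf: style amounts to a set-membership test, since every pf: pattern is a complete literal path. A sketch of the idea (not borg's actual code; the pattern set is made up):

```python
# pf: patterns are full literal paths, so matching is a set lookup:
pf_patterns = {"root/file.ext", "root/other.ext"}

def pf_match(path):
    # A leading path separator is always removed before matching.
    return path.lstrip("/") in pf_patterns

print(pf_match("/root/file.ext"))   # True
print(pf_match("root/file.extra"))  # False
```

This is why huge numbers of pf: patterns stay cheap, in contrast to the regex-backed styles discussed in the note above.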
.sp .sp
@@ -148,18 +138,9 @@ Exclusions can be passed via the command line option \fB\-\-exclude\fP\&. When u
from within a shell, the patterns should be quoted to protect them from from within a shell, the patterns should be quoted to protect them from
expansion. expansion.
.sp .sp
Patterns matching special characters, e.g. whitespace, within a shell may
require adjustments, such as putting quotation marks around the arguments.
Example:
Using bash, the following command line option would match and exclude \(dqitem name\(dq:
\fB\-\-pattern=\(aq\-path/item name\(aq\fP
Note that when patterns are used within a pattern file directly read by borg,
e.g. when using \fB\-\-exclude\-from\fP or \fB\-\-patterns\-from\fP, there is no shell
involved and thus no quotation marks are required.
.sp
The \fB\-\-exclude\-from\fP option permits loading exclusion patterns from a text The \fB\-\-exclude\-from\fP option permits loading exclusion patterns from a text
file with one pattern per line. Lines empty or starting with the hash sign file with one pattern per line. Lines empty or starting with the number sign
\(aq#\(aq after removing whitespace on both ends are ignored. The optional style (\(aq#\(aq) after removing whitespace on both ends are ignored. The optional style
selector prefix is also supported for patterns loaded from a file. Due to selector prefix is also supported for patterns loaded from a file. Due to
whitespace removal, paths with whitespace at the beginning or end can only be whitespace removal, paths with whitespace at the beginning or end can only be
excluded using regular expressions. excluded using regular expressions.
@@ -171,133 +152,77 @@ Examples:
.INDENT 0.0 .INDENT 0.0
.INDENT 3.5 .INDENT 3.5
.sp .sp
.EX .nf
# Exclude a directory anywhere in the tree named \(ga\(gasteamapps/common\(ga\(ga .ft C
# (and everything below it), regardless of where it appears:
$ borg create \-e \(aqsh:**/steamapps/common/**\(aq archive /
# Exclude the contents of \(ga\(ga/home/user/.cache\(ga\(ga:
$ borg create \-e \(aqsh:home/user/.cache/**\(aq archive /home/user
$ borg create \-e home/user/.cache/ archive /home/user
# The file \(aq/home/user/.cache/important\(aq is *not* backed up:
$ borg create \-e home/user/.cache/ archive / /home/user/.cache/important
# Exclude \(aq/home/user/file.o\(aq but not \(aq/home/user/file.odt\(aq: # Exclude \(aq/home/user/file.o\(aq but not \(aq/home/user/file.odt\(aq:
$ borg create \-e \(aq*.o\(aq archive / $ borg create \-e \(aq*.o\(aq backup /
# Exclude \(aq/home/user/junk\(aq and \(aq/home/user/subdir/junk\(aq but # Exclude \(aq/home/user/junk\(aq and \(aq/home/user/subdir/junk\(aq but
# not \(aq/home/user/importantjunk\(aq or \(aq/etc/junk\(aq: # not \(aq/home/user/importantjunk\(aq or \(aq/etc/junk\(aq:
$ borg create \-e \(aqhome/*/junk\(aq archive / $ borg create \-e \(aq/home/*/junk\(aq backup /
# Exclude the contents of \(aq/home/user/cache\(aq but not the directory itself:
$ borg create \-e home/user/cache/ backup /
# The file \(aq/home/user/cache/important\(aq is *not* backed up:
$ borg create \-e /home/user/cache/ backup / /home/user/cache/important
# The contents of directories in \(aq/home\(aq are not backed up when their name # The contents of directories in \(aq/home\(aq are not backed up when their name
# ends in \(aq.tmp\(aq # ends in \(aq.tmp\(aq
$ borg create \-\-exclude \(aqre:^home/[^/]+\e.tmp/\(aq archive / $ borg create \-\-exclude \(aqre:^/home/[^/]+\e.tmp/\(aq backup /
# Load exclusions from file # Load exclusions from file
$ cat >exclude.txt <<EOF $ cat >exclude.txt <<EOF
# Comment line # Comment line
home/*/junk /home/*/junk
*.tmp *.tmp
fm:aa:something/* fm:aa:something/*
re:^home/[^/]+\e.tmp/ re:^/home/[^/]+\e.tmp/
sh:home/*/.thumbnails sh:/home/*/.thumbnails
# Example with spaces, no need to escape as it is processed by borg # Example with spaces, no need to escape as it is processed by borg
some file with spaces.txt some file with spaces.txt
EOF EOF
$ borg create \-\-exclude\-from exclude.txt archive / $ borg create \-\-exclude\-from exclude.txt backup /
.EE .ft P
.fi
.UNINDENT .UNINDENT
.UNINDENT .UNINDENT
.sp .sp
A more general and easier to use way to define filename matching patterns
exists with the \fB\-\-pattern\fP and \fB\-\-patterns\-from\fP options. Using
these, you may specify the backup roots, default pattern styles and
patterns for inclusion and exclusion.
.INDENT 0.0
.TP
.B Root path prefix \fBR\fP
A recursion root path starts with the prefix \fBR\fP, followed by a path
(a plain path, not a file pattern). Use this prefix to have the root
paths in the patterns file rather than as command line arguments.
.TP
.B Pattern style prefix \fBP\fP (only useful within patterns files)
To change the default pattern style, use the \fBP\fP prefix, followed by
the pattern style abbreviation (\fBfm\fP, \fBpf\fP, \fBpp\fP, \fBre\fP, \fBsh\fP).
All patterns following this line in the same patterns file will use this
style until another style is specified or the end of the file is reached.
When the current patterns file is finished, the default pattern style will
reset.
.TP
.B Exclude pattern prefix \fB\-\fP
Use the prefix \fB\-\fP, followed by a pattern, to define an exclusion.
This has the same effect as the \fB\-\-exclude\fP option.
.TP
.B Exclude no\-recurse pattern prefix \fB!\fP
Use the prefix \fB!\fP, followed by a pattern, to define an exclusion
that does not recurse into subdirectories. This saves time, but
prevents include patterns to match any files in subdirectories.
.TP
.B Include pattern prefix \fB+\fP
Use the prefix \fB+\fP, followed by a pattern, to define inclusions.
This is useful to include paths that are covered in an exclude
pattern and would otherwise not be backed up.
.UNINDENT
.sp
The first matching pattern is used, so if an include pattern matches
before an exclude pattern, the file is backed up. Note that a no\-recurse
exclude stops examination of subdirectories so that potential includes
will not match \- use normal excludes for such use cases.
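The first\-match\-wins evaluation can be sketched with a plain shell case statement. This is an approximation only: case globs are not borg's sh: patterns (a single * crosses path separators here), and \(dqdecide\(dq is a hypothetical helper:

```shell
# Hypothetical sketch: rules are tried top to bottom, the first match decides.
# (case globs only approximate borg's sh: pattern semantics.)
decide() {
    case "$1" in
        home/*/.cache/*) echo "exclude" ;;  # like: - home/*/.cache
        home/*)          echo "include" ;;  # like: + home/**
        *)               echo "exclude" ;;  # like: - **
    esac
}

decide home/susan/.cache/thumbs   # first rule matches: exclude
decide home/susan/notes.txt       # second rule matches: include
```

Swapping the first two rules would flip the outcome for files below a .cache directory, which is why rule order matters in a patterns file.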
.sp
Example:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Define the recursion root
R /
# Exclude all iso files in any directory
\- **/*.iso
# Explicitly include all inside etc and root
+ etc/**
+ root/**
# Exclude a specific directory under each user\(aqs home directories
\- home/*/.cache
# Explicitly include everything in /home
+ home/**
# Explicitly exclude some directories without recursing into them
! re:^(dev|proc|run|sys|tmp)
# Exclude all other files and directories
# that are not specifically included earlier.
\- **
.EE
.UNINDENT
.UNINDENT
.sp
\fBTip: You can easily test your patterns with \-\-dry\-run and \-\-list\fP:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg create \-\-dry\-run \-\-list \-\-patterns\-from patterns.txt archive
.EE
.UNINDENT
.UNINDENT
.sp
This will list the considered files one per line, prefixed with a
character that indicates the action (e.g. \(aqx\(aq for excluding, see
\fBItem flags\fP in \fIborg create\fP usage docs).
.sp
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
It is possible that a subdirectory or file is matched while its parent
directories are not. In that case, parent directories are not backed
up and thus their user, group, permission, etc. cannot be restored.
.UNINDENT
.UNINDENT
.sp
Patterns (\fB\-\-pattern\fP) and excludes (\fB\-\-exclude\fP) from the command line are
considered first (in the order of appearance). Then patterns from \fB\-\-patterns\-from\fP
are added. Exclusion patterns from \fB\-\-exclude\-from\fP files are appended last.
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# back up pics, but not the ones from 2018, except the good ones:
# note: using = is essential to avoid cmdline argument parsing issues.
borg create \-\-pattern=+pics/2018/good \-\-pattern=\-pics/2018 archive pics

# back up only JPG/JPEG files (case insensitive) in all home directories:
borg create \-\-pattern \(aq+ re:\e.jpe?g(?i)$\(aq archive /home

# back up homes, but exclude big downloads (like .ISO files) or hidden files:
borg create \-\-exclude \(aqre:\e.iso(?i)$\(aq \-\-exclude \(aqsh:home/**/.*\(aq archive /home

# use a file with patterns (recursion root \(aq/\(aq via command line):
borg create \-\-patterns\-from patterns.lst archive /
.EE
.UNINDENT
.UNINDENT
.sp
The patterns.lst file could look like that:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# \(dqsh:\(dq pattern style is the default
# exclude caches
\- home/*/.cache
# include susans home
+ home/susan
# also back up this exact file
+ pf:home/bobby/specialfile.txt
# don\(aqt back up the other home directories
\- home/*
# don\(aqt even look in /dev, /proc, /run, /sys, /tmp (note: would exclude files like /device, too)
! re:^(dev|proc|run|sys|tmp)
.EE
.UNINDENT
.UNINDENT
.sp
You can specify recursion roots either on the command line or in a patternfile:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# these two commands do the same thing
borg create \-\-exclude home/bobby/junk archive /home/bobby /home/susan
borg create \-\-patterns\-from patternfile.lst archive
.EE
.UNINDENT
.UNINDENT
.sp
patternfile.lst:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# note that excludes use fm: by default and patternfiles use sh: by default.
# therefore, we need to specify fm: to have the same exact behavior.
P fm
R /home/bobby
R /home/susan
\- home/bobby/junk
.EE
.UNINDENT
.UNINDENT
.sp
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-PLACEHOLDERS" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-placeholders \- Details regarding placeholders
.SH DESCRIPTION
.sp
Repository URLs, \fB\-\-name\fP, \fB\-a\fP / \fB\-\-match\-archives\fP, \fB\-\-comment\fP
and \fB\-\-remote\-path\fP values support these placeholders:
.INDENT 0.0
.TP
The full name of the machine in reverse domain name notation.
.TP
.B {now}
The current local date and time, by default in ISO\-8601 format.
You can also supply your own format string <https://docs.python.org/3.10/library/datetime.html#strftime-and-strptime-behavior>
, e.g. {now:%Y\-%m\-%d_%H:%M:%S}
.TP
.B {utcnow}
The current UTC date and time, by default in ISO\-8601 format.
You can also supply your own format string <https://docs.python.org/3.10/library/datetime.html#strftime-and-strptime-behavior>
, e.g. {utcnow:%Y\-%m\-%d_%H:%M:%S}
.TP
.B {user}
The user name (or UID, if no name is available) of the user running borg.
If literal curly braces need to be used, double them for escaping:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
borg create \-\-repo /path/to/repo {{literal_text}}
.EE
.UNINDENT
.UNINDENT
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
borg create \-\-repo /path/to/repo {hostname}\-{user}\-{utcnow} ...
borg create \-\-repo /path/to/repo {hostname}\-{now:%Y\-%m\-%d_%H:%M:%S%z} ...
borg prune \-a \(aqsh:{hostname}\-*\(aq ...
.EE
.UNINDENT
.UNINDENT
.sp
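The {now} and {utcnow} format strings are standard strftime specifiers, so date(1) can be used to preview what a given placeholder format would expand to. This is a convenience sketch, not a borg feature:

```shell
# Preview roughly what {now:%Y-%m-%d_%H:%M:%S} would expand to,
# using date(1), which understands the same strftime specifiers.
fmt='%Y-%m-%d_%H:%M:%S'
date "+$fmt"
```

This makes it easy to check a format string for shell-unfriendly characters (such as colons in paths) before using it in an archive name.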
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-PRUNE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-prune \- Prune archives according to specified rules.
.SH SYNOPSIS
.sp
borg [common options] prune [options] [NAME]
.SH DESCRIPTION
.sp
The prune command prunes a repository by soft\-deleting all archives not
matching any of the specified retention options.
.sp
Important:
.INDENT 0.0
.IP \(bu 2
The prune command will only mark archives for deletion (\(dqsoft\-deletion\(dq),
repository disk space is \fBnot\fP freed until you run \fBborg compact\fP\&.
.IP \(bu 2
You can use \fBborg undelete\fP to undelete archives, but only until
you run \fBborg compact\fP\&.
.UNINDENT
.sp
This command is normally used by automated backup scripts wanting to keep a
certain number of historic backups. This retention policy is commonly referred to as
GFS <https://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son>
(Grandfather\-father\-son) backup rotation scheme.
.sp
The recommended way to use prune is to give the archive series name to it via the
NAME argument (assuming you have the same name for all archives in a series).
Alternatively, you can also use \-\-match\-archives (\-a), then only archives that
match the pattern are considered for deletion and only those archives count
towards the totals specified by the rules.
Otherwise, \fIall\fP archives in the repository are candidates for deletion!
There is no automatic distinction between archives representing different
contents. These need to be distinguished by specifying matching globs.
.sp
If you have multiple series of archives with different data sets (e.g.
from different machines) in one shared repository, use one prune call per
series.
.sp
The \fB\-\-keep\-within\fP option takes an argument of the form \(dq<int><char>\(dq,
where char is \(dqy\(dq, \(dqm\(dq, \(dqw\(dq, \(dqd\(dq, \(dqH\(dq, \(dqM\(dq, or \(dqS\(dq. For example,
\fB\-\-keep\-within 2d\fP means to keep all archives that were created within
the past 2 days. \(dq1m\(dq is taken to mean \(dq31d\(dq. The archives kept with
this option do not count towards the totals specified by any other options.
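The \(dq<int><char>\(dq interval format can be sketched in shell. \(dqto_seconds\(dq is a hypothetical helper, and the 365\-day year is an assumption of this sketch; only the documented \(dq1m\(dq = \(dq31d\(dq equivalence is taken from the text above:

```shell
# Hypothetical sketch: convert a --keep-within style interval such as
# "2d" into seconds ("m" is taken to mean 31 days, as documented;
# "y" as 365 days is an assumption of this sketch).
to_seconds() {
    num="${1%?}"; unit="${1#"$num"}"
    case "$unit" in
        S) echo "$num" ;;
        M) echo $((num * 60)) ;;
        H) echo $((num * 3600)) ;;
        d) echo $((num * 86400)) ;;
        w) echo $((num * 7 * 86400)) ;;
        m) echo $((num * 31 * 86400)) ;;
        y) echo $((num * 365 * 86400)) ;;
        *) echo "invalid unit: $unit" >&2; return 1 ;;
    esac
}

to_seconds 2d   # 172800, i.e. the past 48 hours
to_seconds 1m   # 2678400, same as 31d
```

Anything whose start time lies within that many seconds of now would be kept by \-\-keep\-within, independent of the counting rules.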
.sp
A good procedure is to thin out more and more the older your backups get.
As an example, \fB\-\-keep\-daily 7\fP means to keep the latest backup on each day,
up to 7 most recent days with backups (days without backups do not count).
The rules are applied from secondly to yearly, and backups selected by previous
rules do not count towards those of later rules. The time that each backup
starts is used for pruning purposes. Dates and times are interpreted in the local
timezone of the system where borg prune runs, and weeks go from Monday to Sunday.
Specifying a negative number of archives to keep means that there is no limit.
.sp
Borg will retain the oldest archive if any of the secondly, minutely, hourly,
daily, weekly, monthly, quarterly, or yearly rules was not otherwise able to
meet its retention target. This enables the first chronological archive to
continue aging until it is replaced by a newer archive that meets the retention
criteria.
.sp
The \fB\-\-keep\-13weekly\fP and \fB\-\-keep\-3monthly\fP rules are two different
strategies for keeping archives every quarter year.
.sp
The \fB\-\-keep\-last N\fP option is doing the same as \fB\-\-keep\-secondly N\fP (and it will
keep the last N archives under the assumption that you do not create more than one
backup archive in the same second).
.sp
You can influence how the \fB\-\-list\fP output is formatted by using the \fB\-\-short\fP
option (less wide output) or by giving a custom format using \fB\-\-format\fP (see
the \fBborg repo\-list\fP description for more details about the format string).
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.UNINDENT
.SS options
.INDENT 0.0
.TP
.B \-n\fP,\fB \-\-dry\-run
do not change the repository
.TP
.B \-\-list
output a verbose list of archives it keeps/prunes
.TP
.B \-\-short
use a less wide archive part format
.TP
.B \-\-list\-pruned
output verbose list of archives it prunes
.TP
.B \-\-list\-kept
output verbose list of archives it keeps
.TP
.BI \-\-format \ FORMAT
specify format for the archive part (default: \(dq{archive:<36} {time} [{id}]\(dq)
.TP
.BI \-\-keep\-within \ INTERVAL
keep all archives within this time interval
.TP
.B \-\-keep\-last\fP,\fB \-\-keep\-secondly
number of secondly archives to keep
.TP
.B \-\-keep\-minutely
number of minutely archives to keep
.TP
.B \-H\fP,\fB \-\-keep\-hourly
number of hourly archives to keep
.TP
.B \-d\fP,\fB \-\-keep\-daily
number of daily archives to keep
.TP
.B \-w\fP,\fB \-\-keep\-weekly
number of weekly archives to keep
.TP
.B \-m\fP,\fB \-\-keep\-monthly
number of monthly archives to keep
.TP
.B \-\-keep\-13weekly
number of quarterly archives to keep (13 week strategy)
.TP
.B \-\-keep\-3monthly
number of quarterly archives to keep (3 month strategy)
.TP
.B \-y\fP,\fB \-\-keep\-yearly
number of yearly archives to keep
.TP
.B \-\-save\-space
work slower, but using less space
.UNINDENT
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.UNINDENT
.SH EXAMPLES
.sp
Be careful: prune is a potentially dangerous command that removes backup
archives.
.sp
By default, prune applies to \fBall archives in the repository\fP unless you
restrict its operation to a subset of the archives.
.sp
The recommended way to name archives (with \fBborg create\fP) is to use the
identical archive name within a series of archives. Then you can simply give
that name to prune as well, so it operates only on that series of archives.
.sp
Alternatively, you can use \fB\-a\fP/\fB\-\-match\-archives\fP to match archive names
and select a subset of them.
When using \fB\-a\fP, be careful to choose a good pattern \- for example, do not use a
prefix \(dqfoo\(dq if you do not also want to match \(dqfoobar\(dq.
.sp
It is strongly recommended to always run \fBprune \-v \-\-list \-\-dry\-run ...\fP
first, so you will see what it would do without it actually doing anything.
.sp
Do not forget to run \fBborg compact \-v\fP after prune to actually free disk space.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Keep 7 end of day and 4 additional end of week archives.
# Do a dry\-run without actually deleting anything.
$ borg prune \-v \-\-list \-\-dry\-run \-\-keep\-daily=7 \-\-keep\-weekly=4
# Similar to the above, but only apply to the archive series named \(aq{hostname}\(aq:
$ borg prune \-v \-\-list \-\-keep\-daily=7 \-\-keep\-weekly=4 \(aq{hostname}\(aq
# Similar to the above, but apply to archive names starting with the hostname
# of the machine followed by a \(aq\-\(aq character:
$ borg prune \-v \-\-list \-\-keep\-daily=7 \-\-keep\-weekly=4 \-a \(aqsh:{hostname}\-*\(aq
# Keep 7 end of day, 4 additional end of week archives,
# and an end of month archive for every month:
$ borg prune \-v \-\-list \-\-keep\-daily=7 \-\-keep\-weekly=4 \-\-keep\-monthly=\-1
# Keep all backups in the last 10 days, 4 additional end of week archives,
# and an end of month archive for every month:
$ borg prune \-v \-\-list \-\-keep\-within=10d \-\-keep\-weekly=4 \-\-keep\-monthly=\-1
.EE
.UNINDENT
.UNINDENT
.sp
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-RECREATE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-recreate \- Recreate archives.
.SH SYNOPSIS
.sp
borg [common options] recreate [options] [PATH...]
.SH DESCRIPTION
.sp
Recreate the contents of existing archives.
.sp
Recreate is a potentially dangerous function and might lead to data loss
(if used wrongly). BE VERY CAREFUL!
.sp
Important: Repository disk space is \fBnot\fP freed until you run \fBborg compact\fP\&.
.sp
\fB\-\-exclude\fP, \fB\-\-exclude\-from\fP, \fB\-\-exclude\-if\-present\fP, \fB\-\-keep\-exclude\-tags\fP
and PATH have the exact same semantics as in \(dqborg create\(dq, but they only check
files in the archives and not in the local filesystem. If paths are specified,
the resulting archives will contain only files from those paths.
.sp
Note that all paths in an archive are relative, therefore absolute patterns/paths
will \fInot\fP match (\fB\-\-exclude\fP, \fB\-\-exclude\-from\fP, PATHs).
.sp
\fB\-\-recompress\fP allows one to change the compression of existing data in archives.
Due to how Borg stores compressed size information this might display
incorrect information for archives that were not recreated at the same time.
There is no risk of data loss by this.
.sp
\fB\-\-chunker\-params\fP will re\-chunk all files in the archive, this can be
used to have upgraded Borg 0.xx archives deduplicate with Borg 1.x archives.
.sp
\fBUSE WITH CAUTION.\fP
Depending on the paths and patterns given, recreate can be used to
delete files from archives permanently.
When in doubt, use \fB\-\-dry\-run \-\-verbose \-\-list\fP to see how patterns/paths are
interpreted. See \fIlist_item_flags\fP in \fBborg create\fP for details.
.sp
The archive being recreated is only removed after the operation completes. The
archive that is built during the operation exists at the same time at
\(dq<ARCHIVE>.recreate\(dq. The new archive will have a different archive ID.
.sp
With \fB\-\-target\fP the original archive is not replaced, instead a new archive is created.
.sp .sp
When rechunking, space usage can be substantial \- expect When rechunking (or recompressing), space usage can be substantial \- expect
at least the entire deduplicated size of the archives using the previous at least the entire deduplicated size of the archives using the previous
chunker params. chunker (or compression) params.
.sp .sp
If your most recent borg check found missing chunks, please first run another If you recently ran borg check \-\-repair and it had to fix lost chunks with all\-zero
backup for the same data, before doing any rechunking. If you are lucky, that replacement chunks, please first run another backup for the same data and re\-run
will recreate the missing chunks. Optionally, do another borg check to see borg check \-\-repair afterwards to heal any archives that had lost chunks which are
if the chunks are still missing. still generated from the input data.
.sp
Important: running borg recreate to re\-chunk will remove the chunks_healthy
metadata of all items with replacement chunks, so healing will not be possible
any more after re\-chunking (it is also unlikely it would ever work: due to the
change of chunking parameters, the missing chunk likely will never be seen again
even if you still have the data that produced it).
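The cautious workflow recommended above (preview with \-\-dry\-run \-\-verbose \-\-list, review the output, then run for real) can be sketched as a short shell session. This is only an illustration of the documented flags: the archive name \(dqmyarchive\(dq and the exclude path are hypothetical, not taken from this page.

```shell
# Preview: show how patterns/paths would be interpreted, changing nothing.
# "myarchive" and the exclude path are hypothetical examples.
borg recreate --dry-run --verbose --list -a myarchive --exclude home/user/tmp

# Only after reviewing the item list, run the same command without --dry-run.
borg recreate --verbose --list -a myarchive --exclude home/user/tmp
```

Keeping the two invocations identical except for \-\-dry\-run avoids surprises between the preview and the real run.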
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B PATH
paths to recreate; patterns are supported
.UNINDENT
.SS options
.INDENT 0.0
.TP
.B \-\-list
output verbose list of items (files, dirs, ...)
.TP
.BI \-\-filter \ STATUSCHARS
only display items with the given status characters (listed in borg create \-\-help)
.TP
.B \-n\fP,\fB \-\-dry\-run
do not change anything
.TP
.B \-s\fP,\fB \-\-stats
print statistics at end
.UNINDENT
.SS Include/Exclude options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@@ -112,86 +127,75 @@ include/exclude paths matching PATTERN
.BI \-\-patterns\-from \ PATTERNFILE
read include/exclude patterns from PATTERNFILE, one per line
.TP
.B \-\-exclude\-caches
exclude directories that contain a CACHEDIR.TAG file ( <http://www.bford.info/cachedir/spec.html> )
.TP
.BI \-\-exclude\-if\-present \ NAME
exclude directories that are tagged by containing a filesystem object with the given NAME
.TP
.B \-\-keep\-exclude\-tags
if tag objects are specified with \fB\-\-exclude\-if\-present\fP, do not omit the tag objects themselves from the backup archive
.UNINDENT
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-target \ TARGET
create a new archive with the name ARCHIVE, do not replace existing archive
.TP
.BI \-\-comment \ COMMENT
add a comment text to the archive
.TP
.BI \-\-timestamp \ TIMESTAMP
manually specify the archive creation date/time (yyyy\-mm\-ddThh:mm:ss[(+|\-)HH:MM] format, (+|\-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
.TP
.BI \-C \ COMPRESSION\fR,\fB \ \-\-compression \ COMPRESSION
select compression algorithm, see the output of the \(dqborg help compression\(dq command for details.
.TP
.BI \-\-recompress \ MODE
recompress data chunks according to \fIMODE\fP and \fB\-\-compression\fP\&. Possible modes are \fIif\-different\fP: recompress if current compression is with a different compression algorithm (the level is not considered); \fIalways\fP: recompress even if current compression is with the same compression algorithm (use this to change the compression level); and \fInever\fP: do not recompress (use this option to explicitly prevent recompression). If no MODE is given, \fIif\-different\fP will be used. Not passing \-\-recompress is equivalent to "\-\-recompress never".
.TP
.BI \-\-chunker\-params \ PARAMS
rechunk using given chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE) or \fIdefault\fP to use the chunker defaults. default: do not rechunk
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Create a backup with fast, low compression
$ borg create archive /some/files \-\-compression lz4
# Then recompress it \- this might take longer, but the backup has already completed,
# so there are no inconsistencies from a long\-running backup job.
$ borg recreate \-a archive \-\-recompress \-\-compression zlib,9
# Remove unwanted files from all archives in a repository.
# Note the relative path for the \-\-exclude option \- archives only contain relative paths.
$ borg recreate \-\-exclude home/icke/Pictures/drunk_photos
# Change the archive comment
$ borg create \-\-comment \(dqThis is a comment\(dq archivename ~
$ borg info \-a archivename
Name: archivename
Fingerprint: ...
Comment: This is a comment
\&...
$ borg recreate \-\-comment \(dqThis is a better comment\(dq \-a archivename
$ borg info \-a archivename
Name: archivename
Fingerprint: ...
Comment: This is a better comment
\&...
.EE
.UNINDENT
.UNINDENT
.SH SEE ALSO
@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
@@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-RENAME" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-rename \- Rename an existing archive.
.SH SYNOPSIS
.sp
borg [common options] rename [options] OLDNAME NEWNAME
.SH DESCRIPTION
.sp
This command renames an archive in the repository.
@@ -44,25 +44,27 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B OLDNAME
specify the current archive name
.TP
.B NEWNAME
specify the new archive name
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg create archivename ~
$ borg repo\-list
archivename Mon, 2016\-02\-15 19:50:19
$ borg rename archivename newname
$ borg repo\-list
newname Mon, 2016\-02\-15 19:50:19
.EE
.UNINDENT
.UNINDENT
.SH SEE ALSO
@@ -1,89 +0,0 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-REPO-COMPRESS" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-repo-compress \- Repository (re-)compression.
.SH SYNOPSIS
.sp
borg [common options] repo\-compress [options]
.SH DESCRIPTION
.sp
Repository (re\-)compression (and/or re\-obfuscation).
.sp
Reads all chunks in the repository and recompresses them if they are not already
using the compression type/level and obfuscation level given via \fB\-\-compression\fP\&.
.sp
If the outcome of the chunk processing indicates a change in compression
type/level or obfuscation level, the processed chunk is written to the repository.
Please note that the outcome might not always be the desired compression
type/level \- if no compression gives a shorter output, that might be chosen.
.sp
Please note that this command cannot work in low (or zero) free disk space
conditions.
.sp
If the \fBborg repo\-compress\fP process receives a SIGINT signal (Ctrl\-C), the repo
will be committed and compacted and borg will terminate cleanly afterwards.
.sp
Both \fB\-\-progress\fP and \fB\-\-stats\fP are recommended when \fBborg repo\-compress\fP
is used interactively.
.sp
You do \fBnot\fP need to run \fBborg compact\fP after \fBborg repo\-compress\fP\&.
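The recommendations above (use \-\-progress and \-\-stats interactively; no separate borg compact run afterwards) amount to a one\-command workflow. As a sketch only: the repository path and the zstd level below are illustrative choices, not taken from this page.

```shell
# Hypothetical repository location; borg reads it from BORG_REPO.
export BORG_REPO=/path/to/repo
# Recompress all chunks, showing progress and final statistics.
borg repo-compress --progress --stats --compression zstd,10
# No "borg compact" run is needed afterwards.
```

If the process is interrupted with Ctrl\-C, the description above states the repo is committed and compacted before borg terminates, so the command can simply be re\-run later.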
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS options
.INDENT 0.0
.TP
.BI \-C \ COMPRESSION\fR,\fB \ \-\-compression \ COMPRESSION
select compression algorithm, see the output of the \(dqborg help compression\(dq command for details.
.TP
.B \-s\fP,\fB \-\-stats
print statistics
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Recompress repository contents
$ borg repo\-compress \-\-progress \-\-compression=zstd,3
# Recompress and obfuscate repository contents
$ borg repo\-compress \-\-progress \-\-compression=obfuscate,1,zstd,3
.EE
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.

Some files were not shown because too many files have changed in this diff.