Compare commits


No commits in common. "master" and "2.0.0.dev0" have entirely different histories.

505 changed files with 57316 additions and 69169 deletions

27
.appveyor.yml Normal file

@ -0,0 +1,27 @@
version: '{build}'
environment:
matrix:
- PYTHON: C:\Python38-x64
# Disable automatic builds
build: off
# Build artifacts: all wheel and exe files in the dist folder
artifacts:
- path: 'dist\*.whl'
- path: 'dist\*.exe'
install:
- ps: scripts\win-download-openssl.ps1
- ps: |
& $env:PYTHON\python.exe -m venv borg-env
borg-env\Scripts\activate.ps1
python -m pip install -U pip
pip install -r requirements.d/development.txt
pip install wheel pyinstaller
build_script:
- ps: |
borg-env\Scripts\activate.ps1
scripts\win-build.ps1

36
.coafile Normal file

@ -0,0 +1,36 @@
[all]
# note: put developer specific settings into ~/.coarc (e.g. editor = ...)
max_line_length = 255
use_spaces = True
ignore = src/borg/(checksums.c|chunker.c|compress.c|hashindex.c|item.c),
src/borg/crypto/low_level.c,
src/borg/platform/*.c
[all.general]
files = src/borg/**/*.(py|pyx|c)
bears = SpaceConsistencyBear, FilenameBear, InvalidLinkBear, LineLengthBear
file_naming_convention = snake
[all.python]
files = src/borg/**/*.py
bears = PEP8Bear, PyDocStyleBear, PyLintBear
pep_ignore = E123,E125,E126,E127,E128,E226,E301,E309,E402,F401,F405,F811,W690
pylint_disable = C0103, C0111, C0112, C0122, C0123, C0301, C0302, C0325, C0330, C0411, C0412, C0413, C1801,
I1101,
W0102, W0104, W0106, W0108, W0120, W0201, W0212, W0221, W0231, W0401, W0404,
W0511, W0603, W0611, W0612, W0613, W0614, W0621, W0622, W0640, W0702, W0703,
W1201, W1202, W1401,
R0101, R0201, R0204, R0901, R0902, R0903, R0904, R0911, R0912, R0913, R0914, R0915,
R0916, R1701, R1704, R1705, R1706, R1710,
E0102, E0202, E0401, E0601, E0611, E0702, E1101, E1102, E1120, E1129, E1130
pydocstyle_ignore = D100, D101, D102, D103, D104, D105, D200, D201, D202, D203, D204, D205, D209, D210,
D212, D213, D300, D301, D400, D401, D402, D403, D404
[all.c]
files = src/borg/**/*.c
bears = CPPCheckBear
[all.html]
files = src/borg/**/*.html
bears = HTMLLintBear
htmllint_ignore = *
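
As a side note on the .coafile shown above: coala picks this file up from the repository root, so the configured bears can in principle be run locally. The following is only a sketch; the exact coala invocation and the set of installed bears are assumptions, not something this diff specifies:

    pip install coala-bears        # assumed: the bears are packaged separately from coala core
    coala --non-interactive        # run all sections defined in .coafile without prompts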

24
.coveragerc Normal file

@ -0,0 +1,24 @@
[run]
branch = True
disable_warnings = module-not-measured
source = src/borg
omit =
*/borg/__init__.py
*/borg/__main__.py
*/borg/_version.py
*/borg/fuse.py
*/borg/support/*
*/borg/testsuite/*
*/borg/hash_sizes.py
[report]
exclude_lines =
pragma: no cover
pragma: freebsd only
pragma: unknown platform only
def __repr__
raise AssertionError
raise NotImplementedError
if 0:
if __name__ == .__main__.:
ignore_errors = True
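
For context, coverage.py reads the .coveragerc above automatically when run from the repository root. A minimal local run could look like this (a sketch, assuming the development requirements from requirements.d/ are already installed):

    coverage run -m pytest         # measure branch coverage of src/borg per the [run] section
    coverage report                # apply the exclude_lines rules from the [report] section
    coverage xml                   # optional: produce an XML report, e.g. for Codecov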


@ -1,2 +0,0 @@
# Migrate code style to Black
7957af562d5ce8266b177039783be4dc8bdd7898

5
.github/FUNDING.yml vendored

@ -1,6 +1,5 @@
# These are supported funding model platforms
github: borgbackup
liberapay: borgbackup
open_collective: borgbackup
# github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
# liberapay: borgbackup
custom: ['https://www.borgbackup.org/support/fund.html']


@ -1,54 +1,56 @@
<!--
Thank you for reporting an issue.
*IMPORTANT* Before creating a new issue, please look around:
- BorgBackup documentation: https://borgbackup.readthedocs.io/en/stable/index.html
*IMPORTANT* - *before* creating a new issue please look around:
- Borgbackup documentation: http://borgbackup.readthedocs.io/en/stable/index.html
- FAQ: https://borgbackup.readthedocs.io/en/stable/faq.html
- Open issues in the GitHub tracker: https://github.com/borgbackup/borg/issues
and
- open issues in Github tracker: https://github.com/borgbackup/borg/issues
If you cannot find a similar problem, then create a new issue.
Please fill in as much of the template as possible.
-->
## Have you checked the BorgBackup docs, FAQ, and open GitHub issues?
## Have you checked borgbackup docs, FAQ, and open Github issues?
No
## Is this a bug/issue report or a question?
## Is this a BUG / ISSUE report or a QUESTION?
Bug/Issue/Question
Invalid
## System information. For client/server mode, post info for both machines.
## System information. For client/server mode post info for both machines.
#### Your Borg version (borg -V).
#### Your borg version (borg -V).
#### Operating system (distribution) and version.
#### Hardware/network configuration and filesystems used.
#### Hardware / network configuration, and filesystems used.
#### How much data is handled by Borg?
#### How much data is handled by borg?
#### Full Borg command line that led to the problem (leave out excludes and passwords).
#### Full borg commandline that lead to the problem (leave away excludes and passwords)
## Describe the problem you're observing.
#### Can you reproduce the problem? If so, describe how. If not, describe troubleshooting steps you took before opening the issue.
#### Include any warnings/errors/backtraces from the system logs
#### Include any warning/errors/backtraces from the system logs
<!--
If this complaint relates to Borg performance, please include CRUD benchmark
If this complaint relates to borg performance, please include CRUD benchmark
results and any steps you took to troubleshoot.
How to run the benchmark: https://borgbackup.readthedocs.io/en/stable/usage/benchmark.html
How to run benchmark: http://borgbackup.readthedocs.io/en/stable/usage/benchmark.html
*IMPORTANT* Please mark logs and terminal command output, otherwise GitHub will not display them correctly.
*IMPORTANT* - Please mark logs and text output from terminal commands
or else Github will not display them correctly.
An example is provided below.
Example:
```
this is an example of how log text should be marked (wrap it with ```)
this is an example how log text should be marked (wrap it with ```)
```
-->

8
.github/PULL_REQUEST_TEMPLATE vendored Normal file

@ -0,0 +1,8 @@
Thank you for contributing code to Borg, your help is appreciated!
Please, before you submit a pull request, make sure it complies with the
guidelines given in our documentation:
https://borgbackup.readthedocs.io/en/latest/development.html#contributions
**Please remove all above text before submitting your pull request.**


@ -1,18 +0,0 @@
<!--
Thank you for contributing to BorgBackup!
Please make sure your PR complies with our contribution guidelines:
https://borgbackup.readthedocs.io/en/latest/development.html#contributions
-->
## Description
<!-- What does this PR do? Reference any related issues with "fixes #XXXX". -->
## Checklist
- [ ] PR is against `master` (or maintenance branch if only applicable there)
- [ ] New code has tests and docs where appropriate
- [ ] Tests pass (run `tox` or the relevant test subset)
- [ ] Commit messages are clean and reference related issues


@ -1,24 +0,0 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"
groups:
actions:
patterns:
- "*"
- package-ecosystem: "pip"
directory: "/requirements.d"
ignore:
- dependency-name: "black"
schedule:
interval: "weekly"
cooldown:
semver-major-days: 90
semver-minor-days: 30
groups:
pip-dependencies:
patterns:
- "*"


@ -1,38 +0,0 @@
name: Backport pull request
on:
pull_request_target:
types: [closed]
issue_comment:
types: [created]
permissions:
contents: write # so it can comment
pull-requests: write # so it can create pull requests
jobs:
backport:
name: Backport pull request
runs-on: ubuntu-24.04
timeout-minutes: 5
# Only run when pull request is merged
# or when a comment starting with `/backport` is created by someone other than the
# https://github.com/backport-action bot user (user id: 97796249). Note that if you use your
# own PAT as `github_token`, that you should replace this id with yours.
if: >
(
github.event_name == 'pull_request_target' &&
github.event.pull_request.merged
) || (
github.event_name == 'issue_comment' &&
github.event.issue.pull_request &&
github.event.comment.user.id != 97796249 &&
startsWith(github.event.comment.body, '/backport')
)
steps:
- uses: actions/checkout@v6
- name: Create backport pull requests
uses: korthout/backport-action@v4
with:
label_pattern: '^port/(.+)$'


@ -1,30 +0,0 @@
# https://black.readthedocs.io/en/stable/integrations/github_actions.html#usage
# See also what we use locally in requirements.d/codestyle.txt — this should be the same version here.
name: Lint
on:
push:
paths:
- '**.py'
- 'pyproject.toml'
- '.github/workflows/black.yaml'
pull_request:
paths:
- '**.py'
- 'pyproject.toml'
- '.github/workflows/black.yaml'
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
jobs:
lint:
runs-on: ubuntu-22.04
timeout-minutes: 5
steps:
- uses: actions/checkout@v6
- uses: psf/black@6305bf1ae645ab7541be4f5028a86239316178eb # 26.1.0
with:
version: "~= 24.0"
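
The Lint workflow in this hunk only runs Black in check mode on CI. The same check can be reproduced locally; this is a sketch, and the authoritative version pin is whatever requirements.d/codestyle.txt specifies, not necessarily the "~= 24.0" shown here:

    pip install "black~=24.0"      # match the version constraint used by the workflow
    black --check --diff .         # report files Black would reformat, without changing them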


@ -5,8 +5,16 @@ name: CI
on:
push:
branches: [ master ]
tags:
- '2.*'
paths:
- '**.py'
- '**.pyx'
- '**.c'
- '**.h'
- '**.yml'
- '**.cfg'
- '**.ini'
- 'requirements.d/*'
- '!docs/**'
pull_request:
branches: [ master ]
paths:
@ -15,671 +23,109 @@ on:
- '**.c'
- '**.h'
- '**.yml'
- '**.toml'
- '**.cfg'
- '**.ini'
- 'requirements.d/*'
- '!docs/**'
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
permissions:
contents: read
jobs:
lint:
runs-on: ubuntu-22.04
timeout-minutes: 5
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v6
- uses: astral-sh/ruff-action@v3
security:
runs-on: ubuntu-24.04
timeout-minutes: 5
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v6
uses: actions/setup-python@v2
with:
python-version: '3.10'
- name: Install dependencies
python-version: 3.9
- name: Lint with flake8
run: |
python -m pip install --upgrade pip
pip install bandit[toml]
- name: Run Bandit
run: |
bandit -r src/borg -c pyproject.toml
pip install flake8
flake8 src scripts conftest.py
asan_ubsan:
pytest:
runs-on: ubuntu-24.04
timeout-minutes: 25
needs: [lint]
steps:
- uses: actions/checkout@v6
with:
# Just fetching one commit is not enough for setuptools-scm, so we fetch all.
fetch-depth: 0
fetch-tags: true
- name: Set up Python
uses: actions/setup-python@v6
with:
python-version: '3.12'
- name: Install system packages
run: |
sudo apt-get update
sudo apt-get install -y pkg-config build-essential
sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev liblz4-dev libzstd-dev
- name: Install Python dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.d/development.txt
- name: Build Borg with ASan/UBSan
# Build the C/Cython extensions with AddressSanitizer and UndefinedBehaviorSanitizer enabled.
# How this works:
# - The -fsanitize=address,undefined flags inject runtime checks into our native code. If a bug is hit
# (e.g., buffer overflow, use-after-free, out-of-bounds, or undefined behavior), the sanitizer prints
# a detailed error report to stderr, including a stack trace, and forces the process to exit with
# non-zero status. In CI, this will fail the step/job so you will notice.
# - ASAN_OPTIONS/UBSAN_OPTIONS configure the sanitizers' runtime behavior (see below for meanings).
env:
CFLAGS: "-O1 -g -fno-omit-frame-pointer -fsanitize=address,undefined"
CXXFLAGS: "-O1 -g -fno-omit-frame-pointer -fsanitize=address,undefined"
LDFLAGS: "-fsanitize=address,undefined"
# ASAN_OPTIONS controls AddressSanitizer runtime tweaks:
# - detect_leaks=0: Disable LeakSanitizer to avoid false positives with CPython/pymalloc in short-lived tests.
# - strict_string_checks=1: Make invalid string operations (e.g., over-reads) more likely to be detected.
# - check_initialization_order=1: Catch uses that depend on static initialization order (C++).
# - detect_stack_use_after_return=1: Detect stack-use-after-return via stack poisoning (may increase overhead).
ASAN_OPTIONS: "detect_leaks=0:strict_string_checks=1:check_initialization_order=1:detect_stack_use_after_return=1"
# UBSAN_OPTIONS controls UndefinedBehaviorSanitizer runtime:
# - print_stacktrace=1: Include a stack trace for UB reports to ease debugging.
# Note: UBSan is recoverable by default (process may continue after reporting). If you want CI to
# abort immediately and fail on the first UB, add `halt_on_error=1` (e.g., UBSAN_OPTIONS="print_stacktrace=1:halt_on_error=1").
UBSAN_OPTIONS: "print_stacktrace=1"
# PYTHONDEVMODE enables additional Python runtime checks and warnings.
PYTHONDEVMODE: "1"
run: pip install -e .
- name: Run tests under sanitizers
env:
ASAN_OPTIONS: "detect_leaks=0:strict_string_checks=1:check_initialization_order=1:detect_stack_use_after_return=1"
UBSAN_OPTIONS: "print_stacktrace=1"
PYTHONDEVMODE: "1"
# Ensure the ASan runtime is loaded first to avoid "ASan runtime does not come first" warnings.
# We discover libasan/libubsan paths via gcc and preload them for the Python test process.
# the remote tests are slow and likely won't find anything useful
run: |
set -euo pipefail
export LD_PRELOAD="$(gcc -print-file-name=libasan.so):$(gcc -print-file-name=libubsan.so)"
echo "Using LD_PRELOAD=$LD_PRELOAD"
pytest -v --benchmark-skip -k "not remote"
native_tests:
needs: [lint]
permissions:
contents: read
id-token: write
attestations: write
needs: lint
strategy:
fail-fast: true
# noinspection YAMLSchemaValidation
matrix: >-
${{ fromJSON(
github.event_name == 'pull_request' && '{
"include": [
{"os": "ubuntu-22.04", "python-version": "3.10", "toxenv": "mypy"},
{"os": "ubuntu-22.04", "python-version": "3.11", "toxenv": "docs"},
{"os": "ubuntu-22.04", "python-version": "3.10", "toxenv": "py310-llfuse"},
{"os": "ubuntu-24.04", "python-version": "3.14", "toxenv": "py314-mfusepy"}
]
}' || '{
"include": [
{"os": "ubuntu-22.04", "python-version": "3.11", "toxenv": "py311-pyfuse3", "binary": "borg-linux-glibc235-x86_64-gh"},
{"os": "ubuntu-22.04-arm", "python-version": "3.11", "toxenv": "py311-pyfuse3", "binary": "borg-linux-glibc235-arm64-gh"},
{"os": "ubuntu-24.04", "python-version": "3.12", "toxenv": "py312-llfuse"},
{"os": "ubuntu-24.04", "python-version": "3.13", "toxenv": "py313-pyfuse3"},
{"os": "ubuntu-24.04", "python-version": "3.14", "toxenv": "py314-mfusepy"},
{"os": "macos-15-intel", "python-version": "3.11", "toxenv": "py311-none", "binary": "borg-macos-15-x86_64-gh"},
{"os": "macos-15", "python-version": "3.11", "toxenv": "py311-none", "binary": "borg-macos-15-arm64-gh"}
]
}'
) }}
matrix:
include:
- os: ubuntu-20.04
python-version: '3.9'
toxenv: py39-fuse2
- os: ubuntu-20.04
python-version: '3.10'
toxenv: py310-fuse3
- os: macos-10.15 # macos-latest is macos 11.6.2 and hanging at test_fuse, #6099
python-version: '3.9'
toxenv: py39-fuse2
env:
# Configure pkg-config to use OpenSSL from Homebrew
PKG_CONFIG_PATH: /usr/local/opt/openssl@1.1/lib/pkgconfig
BORG_LIBDEFLATE_PREFIX: /usr # on ubuntu 20.04 pkgconfig does not find libdeflate
TOXENV: ${{ matrix.toxenv }}
runs-on: ${{ matrix.os }}
# macOS machines can be slow, if overloaded.
timeout-minutes: 360
timeout-minutes: 40
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v2
with:
# Just fetching one commit is not enough for setuptools-scm, so we fetch all.
# just fetching 1 commit is not enough for setuptools-scm, so we fetch all
fetch-depth: 0
fetch-tags: true
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v6
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Cache pip
uses: actions/cache@v5
uses: actions/cache@v2
with:
path: ~/.cache/pip
key: ${{ runner.os }}-${{ runner.arch }}-pip-${{ hashFiles('requirements.d/development.txt') }}
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.d/development.txt') }}
restore-keys: |
${{ runner.os }}-${{ runner.arch }}-pip-
${{ runner.os }}-${{ runner.arch }}-
- name: Cache tox environments
uses: actions/cache@v5
with:
path: .tox
key: ${{ runner.os }}-${{ runner.arch }}-tox-${{ matrix.toxenv }}-${{ hashFiles('requirements.d/development.txt', 'pyproject.toml') }}
restore-keys: |
${{ runner.os }}-${{ runner.arch }}-tox-${{ matrix.toxenv }}-
${{ runner.os }}-${{ runner.arch }}-tox-
${{ runner.os }}-pip-
${{ runner.os }}-
- name: Install Linux packages
if: ${{ runner.os == 'Linux' }}
run: |
sudo apt-get update
sudo apt-get install -y pkg-config build-essential
sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev liblz4-dev libzstd-dev
sudo apt-get install -y bash zsh fish # for shell completion tests
sudo apt-get install -y rclone openssh-server curl
if [[ "$TOXENV" == *"llfuse"* ]]; then
sudo apt-get install -y libfuse-dev fuse # Required for Python llfuse module
elif [[ "$TOXENV" == *"pyfuse3"* || "$TOXENV" == *"mfusepy"* ]]; then
sudo apt-get install -y libfuse3-dev fuse3 # Required for Python pyfuse3 module
fi
sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev libdeflate-dev liblz4-dev libzstd-dev
sudo apt-get install -y libfuse-dev fuse || true # Required for Python llfuse module
sudo apt-get install -y libfuse3-dev fuse3 || true # Required for Python pyfuse3 module
- name: Install macOS packages
if: ${{ runner.os == 'macOS' }}
run: |
brew unlink pkg-config@0.29.2 || true
brew bundle install
- name: Configure OpenSSH SFTP server (test only)
if: ${{ runner.os == 'Linux' && !contains(matrix.toxenv, 'mypy') && !contains(matrix.toxenv, 'docs') }}
run: |
sudo mkdir -p /run/sshd
sudo useradd -m -s /bin/bash sftpuser || true
# Create SSH key for the CI user and authorize it for sftpuser
mkdir -p ~/.ssh
chmod 700 ~/.ssh
test -f ~/.ssh/id_ed25519 || ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
sudo mkdir -p /home/sftpuser/.ssh
sudo chmod 700 /home/sftpuser/.ssh
sudo cp ~/.ssh/id_ed25519.pub /home/sftpuser/.ssh/authorized_keys
sudo chown -R sftpuser:sftpuser /home/sftpuser/.ssh
sudo chmod 600 /home/sftpuser/.ssh/authorized_keys
# Allow publickey auth and enable Subsystem sftp
sudo sed -i 's/^#\?PasswordAuthentication .*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PubkeyAuthentication .*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
if ! grep -q '^Subsystem sftp' /etc/ssh/sshd_config; then echo 'Subsystem sftp /usr/lib/openssh/sftp-server' | sudo tee -a /etc/ssh/sshd_config; fi
# Ensure host keys exist to avoid slow generation on first sshd start
sudo ssh-keygen -A
# Start sshd (listen on default 22 inside runner)
sudo /usr/sbin/sshd -D &
# Add host key to known_hosts so paramiko trusts it
ssh-keyscan -H localhost 127.0.0.1 | tee -a ~/.ssh/known_hosts
# Start ssh-agent and add our key so paramiko can use the agent
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
# Export SFTP test URL for tox via GITHUB_ENV
echo "BORG_TEST_SFTP_REPO=sftp://sftpuser@localhost:22/borg/sftp-repo" >> $GITHUB_ENV
- name: Install and configure MinIO S3 server (test only)
if: ${{ runner.os == 'Linux' && !contains(matrix.toxenv, 'mypy') && !contains(matrix.toxenv, 'docs') }}
run: |
set -e
arch=$(uname -m)
case "$arch" in
x86_64|amd64) srv_url=https://dl.min.io/server/minio/release/linux-amd64/minio; cli_url=https://dl.min.io/client/mc/release/linux-amd64/mc ;;
aarch64|arm64) srv_url=https://dl.min.io/server/minio/release/linux-arm64/minio; cli_url=https://dl.min.io/client/mc/release/linux-arm64/mc ;;
*) echo "Unsupported arch: $arch"; exit 1 ;;
esac
curl -fsSL -o /usr/local/bin/minio "$srv_url"
curl -fsSL -o /usr/local/bin/mc "$cli_url"
sudo chmod +x /usr/local/bin/minio /usr/local/bin/mc
export PATH=/usr/local/bin:$PATH
# Start MinIO on :9000 with default credentials (minioadmin/minioadmin)
MINIO_DIR="$GITHUB_WORKSPACE/.minio-data"
MINIO_LOG="$GITHUB_WORKSPACE/.minio.log"
mkdir -p "$MINIO_DIR"
nohup minio server "$MINIO_DIR" --address ":9000" >"$MINIO_LOG" 2>&1 &
# Wait for MinIO port to be ready
for i in $(seq 1 60); do (echo > /dev/tcp/127.0.0.1/9000) >/dev/null 2>&1 && break; sleep 1; done
# Configure client and create bucket
mc alias set local http://127.0.0.1:9000 minioadmin minioadmin
mc mb --ignore-existing local/borg
# Export S3 test URL for tox via GITHUB_ENV
echo "BORG_TEST_S3_REPO=s3:minioadmin:minioadmin@http://127.0.0.1:9000/borg/s3-repo" >> $GITHUB_ENV
brew install pkg-config || brew upgrade pkg-config
brew install zstd || brew upgrade zstd
brew install lz4 || brew upgrade lz4
brew install libdeflate || brew upgrade libdeflate
brew install xxhash || brew upgrade xxhash
brew install openssl@1.1 || brew upgrade openssl@1.1
brew install --cask macfuse || brew upgrade --cask macfuse # Required for Python llfuse module
- name: Install Python requirements
run: |
python -m pip install --upgrade pip setuptools wheel
pip install -r requirements.d/development.txt
- name: Install borgbackup
run: |
if [[ "$TOXENV" == *"llfuse"* ]]; then
pip install -ve ".[llfuse,cockpit,s3,sftp]"
elif [[ "$TOXENV" == *"pyfuse3"* ]]; then
pip install -ve ".[pyfuse3,cockpit,s3,sftp]"
elif [[ "$TOXENV" == *"mfusepy"* ]]; then
pip install -ve ".[mfusepy,cockpit,s3,sftp]"
else
pip install -ve ".[cockpit,s3,sftp]"
fi
- name: Build Borg fat binaries (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
run: |
pip install -r requirements.d/pyinstaller.txt
mkdir -p dist/binary
pyinstaller --clean --distpath=dist/binary scripts/borg.exe.spec
- name: Smoke-test the built binary (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
run: |
pushd dist/binary
echo "single-file binary"
chmod +x borg.exe
./borg.exe -V
echo "single-directory binary"
chmod +x borg-dir/borg.exe
./borg-dir/borg.exe -V
tar czf borg.tgz borg-dir
popd
# Ensure locally built binary in ./dist/binary/borg-dir is found during tests
export PATH="$GITHUB_WORKSPACE/dist/binary/borg-dir:$PATH"
echo "borg.exe binary in PATH"
borg.exe -V
- name: Prepare binaries (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
run: |
mkdir -p artifacts
if [ -f dist/binary/borg.exe ]; then
cp dist/binary/borg.exe artifacts/${{ matrix.binary }}
fi
if [ -f dist/binary/borg.tgz ]; then
cp dist/binary/borg.tgz artifacts/${{ matrix.binary }}.tgz
fi
echo "binary files"
ls -l artifacts/
- name: Attest binaries provenance (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
uses: actions/attest-build-provenance@v3
with:
subject-path: 'artifacts/*'
- name: Upload binaries (${{ matrix.binary }})
if: ${{ matrix.binary && startsWith(github.ref, 'refs/tags/') }}
uses: actions/upload-artifact@v6
with:
name: ${{ matrix.binary }}
path: artifacts/*
if-no-files-found: error
- name: run tox env
# pip install -e .
python setup.py -v develop
- name: run pytest via tox
run: |
# do not use fakeroot, but run as root. avoids the dreaded EISDIR sporadic failures. see #2482.
#sudo -E bash -c "tox -e py"
# Ensure locally built binary in ./dist/binary/borg-dir is found during tests
export PATH="$GITHUB_WORKSPACE/dist/binary/borg-dir:$PATH"
tox --skip-missing-interpreters -- --junitxml=test-results.xml
- name: Upload test results to Codecov
if: ${{ !cancelled() && !contains(matrix.toxenv, 'mypy') && !contains(matrix.toxenv, 'docs') }}
uses: codecov/codecov-action@v5
env:
OS: ${{ runner.os }}
python: ${{ matrix.python-version }}
with:
token: ${{ secrets.CODECOV_TOKEN }}
report_type: test_results
env_vars: OS,python
files: test-results.xml
tox --skip-missing-interpreters
- name: Upload coverage to Codecov
if: ${{ !cancelled() && !contains(matrix.toxenv, 'mypy') && !contains(matrix.toxenv, 'docs') }}
uses: codecov/codecov-action@v5
uses: codecov/codecov-action@v1
env:
OS: ${{ runner.os }}
python: ${{ matrix.python-version }}
with:
token: ${{ secrets.CODECOV_TOKEN }}
report_type: coverage
env_vars: OS,python
vm_tests:
permissions:
contents: read
id-token: write
attestations: write
runs-on: ubuntu-24.04
timeout-minutes: 90
needs: [lint]
continue-on-error: true
strategy:
fail-fast: false
matrix:
include:
- os: freebsd
version: '14.3'
display_name: FreeBSD
# Controls binary build and provenance attestation on tags
do_binaries: true
artifact_prefix: borg-freebsd-14-x86_64-gh
- os: netbsd
version: '10.1'
display_name: NetBSD
do_binaries: false
- os: openbsd
version: '7.7'
display_name: OpenBSD
do_binaries: false
- os: omnios
version: 'r151056'
display_name: OmniOS
do_binaries: false
- os: haiku
version: 'r1beta5'
display_name: Haiku
do_binaries: false
steps:
- name: Check out repository
uses: actions/checkout@v6
with:
fetch-depth: 0
fetch-tags: true
- name: Test on ${{ matrix.display_name }}
id: cross_os
uses: cross-platform-actions/action@v0.32.0
env:
DO_BINARIES: ${{ matrix.do_binaries }}
with:
operating_system: ${{ matrix.os }}
version: ${{ matrix.version }}
shell: bash
run: |
set -euxo pipefail
case "${{ matrix.os }}" in
freebsd)
export IGNORE_OSVERSION=yes
sudo -E pkg update -f
sudo -E pkg install -y xxhash liblz4 zstd pkgconf
sudo -E pkg install -y fusefs-libs
sudo -E kldload fusefs
sudo -E sysctl vfs.usermount=1
sudo -E chmod 666 /dev/fuse
sudo -E pkg install -y rust
sudo -E pkg install -y gmake
sudo -E pkg install -y git
sudo -E pkg install -y python310 py310-sqlite3
sudo -E pkg install -y python311 py311-sqlite3 py311-pip py311-virtualenv
sudo ln -sf /usr/local/bin/python3.11 /usr/local/bin/python3
sudo ln -sf /usr/local/bin/python3.11 /usr/local/bin/python
sudo ln -sf /usr/local/bin/pip3.11 /usr/local/bin/pip3
sudo ln -sf /usr/local/bin/pip3.11 /usr/local/bin/pip
# required for libsodium/pynacl build
export MAKE=gmake
python -m venv .venv
. .venv/bin/activate
python -V
pip -V
python -m pip install --upgrade pip wheel
pip install -r requirements.d/development.txt
pip install -e ".[mfusepy,cockpit,s3,sftp]"
tox -e py311-mfusepy
if [[ "${{ matrix.do_binaries }}" == "true" && "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
python -m pip install -r requirements.d/pyinstaller.txt
mkdir -p dist/binary
pyinstaller --clean --distpath=dist/binary scripts/borg.exe.spec
pushd dist/binary
echo "single-file binary"
chmod +x borg.exe
./borg.exe -V
echo "single-directory binary"
chmod +x borg-dir/borg.exe
./borg-dir/borg.exe -V
tar czf borg.tgz borg-dir
popd
mkdir -p artifacts
if [ -f dist/binary/borg.exe ]; then
cp -v dist/binary/borg.exe artifacts/${{ matrix.artifact_prefix }}
fi
if [ -f dist/binary/borg.tgz ]; then
cp -v dist/binary/borg.tgz artifacts/${{ matrix.artifact_prefix }}.tgz
fi
fi
;;
netbsd)
arch="$(uname -m)"
sudo -E mkdir -p /usr/pkg/etc/pkgin
echo "https://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/${arch}/10.1/All" | sudo tee /usr/pkg/etc/pkgin/repositories.conf > /dev/null
sudo -E pkgin update
sudo -E pkgin -y upgrade
sudo -E pkgin -y install zstd lz4 xxhash git
sudo -E pkgin -y install rust
sudo -E pkgin -y install pkg-config
sudo -E pkgin -y install py311-pip py311-virtualenv py311-tox
sudo -E ln -sf /usr/pkg/bin/python3.11 /usr/pkg/bin/python3
sudo -E ln -sf /usr/pkg/bin/pip3.11 /usr/pkg/bin/pip3
sudo -E ln -sf /usr/pkg/bin/virtualenv-3.11 /usr/pkg/bin/virtualenv3
sudo -E ln -sf /usr/pkg/bin/tox-3.11 /usr/pkg/bin/tox3
# Ensure base system admin tools are on PATH for the non-root shell
export PATH="/sbin:/usr/sbin:$PATH"
echo "--- Preparing an extattr-enabled filesystem ---"
# On many NetBSD setups /tmp is tmpfs without extended attributes.
# Create a FFS image with extended attributes enabled and use it for TMPDIR.
VNDDEV="vnd0"
IMGFILE="/tmp/fs.img"
sudo -E dd if=/dev/zero of=${IMGFILE} bs=1m count=1024
sudo -E vndconfig -c "${VNDDEV}" "${IMGFILE}"
sudo -E newfs -O 2ea /dev/r${VNDDEV}a
MNT="/mnt/eafs"
sudo -E mkdir -p ${MNT}
sudo -E mount -t ffs -o extattr /dev/${VNDDEV}a $MNT
export TMPDIR="${MNT}/tmp"
sudo -E mkdir -p ${TMPDIR}
sudo -E chmod 1777 ${TMPDIR}
touch ${TMPDIR}/testfile
lsextattr user ${TMPDIR}/testfile && echo "[xattr] *** xattrs SUPPORTED on ${TMPDIR}! ***"
tox3 -e py311-none
;;
openbsd)
sudo -E pkg_add xxhash lz4 zstd git
sudo -E pkg_add rust
sudo -E pkg_add openssl%3.4
sudo -E pkg_add py3-pip py3-virtualenv py3-tox
export BORG_OPENSSL_NAME=eopenssl34
tox -e py312-none
;;
omnios)
sudo pkg install gcc14 git pkg-config python-313 gnu-make gnu-coreutils
sudo ln -sf /usr/bin/python3.13 /usr/bin/python3
sudo ln -sf /usr/bin/python3.13-config /usr/bin/python3-config
sudo python3 -m ensurepip
sudo python3 -m pip install virtualenv
# install libxxhash from source
git clone --depth 1 https://github.com/Cyan4973/xxHash.git
cd xxHash
sudo gmake install INSTALL=/usr/gnu/bin/install PREFIX=/usr/local
cd ..
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:${PKG_CONFIG_PATH:-}"
export LD_LIBRARY_PATH="/usr/local/lib:${LD_LIBRARY_PATH:-}"
python3 -m venv .venv
. .venv/bin/activate
python -V
pip -V
python -m pip install --upgrade pip wheel
pip install -r requirements.d/development.txt
# no fuse support on omnios in our tests usually
pip install -e .
tox -e py313-none
;;
haiku)
pkgman refresh
pkgman install -y git pkgconfig zstd lz4 xxhash
pkgman install -y openssl3
pkgman install -y rust_bin
pkgman install -y python3.10
pkgman install -y cffi
pkgman install -y lz4_devel zstd_devel xxhash_devel openssl3_devel libffi_devel
# there is no pkgman package for tox, so we install it into a venv
python3 -m ensurepip --upgrade
python3 -m pip install --upgrade pip wheel
python3 -m venv .venv
. .venv/bin/activate
export PKG_CONFIG_PATH="/system/develop/lib/pkgconfig:/system/lib/pkgconfig:${PKG_CONFIG_PATH:-}"
export BORG_LIBLZ4_PREFIX=/system/develop
export BORG_LIBZSTD_PREFIX=/system/develop
export BORG_LIBXXHASH_PREFIX=/system/develop
export BORG_OPENSSL_PREFIX=/system/develop
pip install -r requirements.d/development.txt
pip install -e .
# troubles with either tox or pytest xdist, so we run pytest manually:
pytest -v -rs --benchmark-skip -k "not remote and not socket"
;;
esac
- name: Upload artifacts
if: startsWith(github.ref, 'refs/tags/') && matrix.do_binaries
uses: actions/upload-artifact@v6
with:
name: ${{ matrix.artifact_prefix }}
path: artifacts/*
if-no-files-found: ignore
- name: Attest provenance
if: startsWith(github.ref, 'refs/tags/') && matrix.do_binaries
uses: actions/attest-build-provenance@v3
with:
subject-path: 'artifacts/*'
windows_tests:
if: true # can be used to temporarily disable the build
runs-on: windows-latest
timeout-minutes: 90
needs: [lint]
env:
PY_COLORS: 1
defaults:
run:
shell: msys2 {0}
steps:
- uses: actions/checkout@v6
with:
fetch-depth: 0
- uses: msys2/setup-msys2@v2
with:
msystem: UCRT64
update: true
- name: Install system packages
run: ./scripts/msys2-install-deps development
- name: Build python venv
run: |
# building cffi / argon2-cffi in the venv fails, so we try to use the system packages
python -m venv --system-site-packages env
. env/bin/activate
# python -m pip install --upgrade pip
# pip install --upgrade setuptools build wheel
pip install -r requirements.d/pyinstaller.txt
- name: Build
run: |
# build borg.exe
. env/bin/activate
pip install -e ".[cockpit,s3,sftp]"
mkdir -p dist/binary
pyinstaller -y --clean --distpath=dist/binary scripts/borg.exe.spec
# build sdist and wheel in dist/...
python -m build
- uses: actions/upload-artifact@v6
with:
name: borg-windows
path: dist/binary/borg.exe
- name: Run tests
run: |
# Ensure locally built binary in ./dist/binary/borg-dir is found during tests
export PATH="$GITHUB_WORKSPACE/dist/binary/borg-dir:$PATH"
borg.exe -V
. env/bin/activate
python -m pytest -n4 --benchmark-skip -vv -rs -k "not remote" --junitxml=test-results.xml
- name: Upload test results to Codecov
if: ${{ !cancelled() }}
uses: codecov/codecov-action@v5
env:
OS: ${{ runner.os }}
python: '3.11'
with:
token: ${{ secrets.CODECOV_TOKEN }}
report_type: test_results
env_vars: OS,python
files: test-results.xml
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v5
env:
OS: ${{ runner.os }}
python: '3.11'
with:
token: ${{ secrets.CODECOV_TOKEN }}
report_type: coverage
env_vars: OS,python
env_vars: OS, python
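
The asan_ubsan job shown in this hunk can be approximated locally: build the C/Cython extensions with the sanitizer flags, then preload the sanitizer runtimes before running pytest. This is a sketch for a gcc/glibc Linux box using the same flags and options as the workflow; it is not a separately supported invocation:

    export CFLAGS="-O1 -g -fno-omit-frame-pointer -fsanitize=address,undefined"
    export LDFLAGS="-fsanitize=address,undefined"
    pip install -e .                                   # rebuild the extensions with sanitizers enabled
    export ASAN_OPTIONS="detect_leaks=0:strict_string_checks=1:check_initialization_order=1:detect_stack_use_after_return=1"
    export UBSAN_OPTIONS="print_stacktrace=1"
    export LD_PRELOAD="$(gcc -print-file-name=libasan.so):$(gcc -print-file-name=libubsan.so)"
    pytest -v --benchmark-skip -k "not remote"         # skip slow remote tests, as in CI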


@ -5,33 +5,16 @@ name: "CodeQL"
on:
push:
branches: [ master ]
paths:
- '**.py'
- '**.pyx'
- '**.c'
- '**.h'
- '.github/workflows/codeql-analysis.yml'
pull_request:
# The branches below must be a subset of the branches above
branches: [ master ]
paths:
- '**.py'
- '**.pyx'
- '**.c'
- '**.h'
- '.github/workflows/codeql-analysis.yml'
schedule:
- cron: '39 2 * * 5'
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
jobs:
analyze:
name: Analyze
runs-on: ubuntu-22.04
timeout-minutes: 20
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
@ -42,20 +25,23 @@ jobs:
matrix:
language: [ 'cpp', 'python' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
# Learn more about CodeQL language support at https://codeql.github.com/docs/codeql-overview/supported-languages-and-frameworks/
# Learn more about CodeQL language support at https://git.io/codeql-language-support
env:
BORG_LIBDEFLATE_PREFIX: /usr # on ubuntu 20.04 pkgconfig does not find libdeflate
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v2
with:
# Just fetching one commit is not enough for setuptools-scm, so we fetch all.
# just fetching 1 commit is not enough for setuptools-scm, so we fetch all
fetch-depth: 0
- name: Set up Python
uses: actions/setup-python@v6
uses: actions/setup-python@v2
with:
python-version: 3.11
python-version: 3.9
- name: Cache pip
uses: actions/cache@v5
uses: actions/cache@v2
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('requirements.d/development.txt') }}
@ -66,10 +52,10 @@ jobs:
run: |
sudo apt-get update
sudo apt-get install -y pkg-config build-essential
sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev liblz4-dev libzstd-dev
sudo apt-get install -y libssl-dev libacl1-dev libxxhash-dev libdeflate-dev liblz4-dev libzstd-dev
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v4
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
@ -81,6 +67,6 @@ jobs:
python3 -m venv ../borg-env
source ../borg-env/bin/activate
pip3 install -r requirements.d/development.txt
pip3 install -ve .
pip3 install -e .
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v4
uses: github/codeql-action/analyze@v1

12
.gitignore vendored

@ -2,13 +2,14 @@ MANIFEST
docs/_build
build
dist
external
borg-env
.tox
src/borg/compress.c
src/borg/crypto/low_level.c
src/borg/hashindex.c
src/borg/item.c
src/borg/chunkers/buzhash.c
src/borg/chunkers/buzhash64.c
src/borg/chunkers/reader.c
src/borg/chunker.c
src/borg/checksums.c
src/borg/platform/darwin.c
src/borg/platform/freebsd.c
@ -22,9 +23,12 @@ src/borg/_version.py
*.pyd
*.so
.idea/
.junie/
.cache/
.vscode/
borg.build/
borg.dist/
borg.exe
.coverage
.coverage.*
.vagrant
.eggs


@ -1,9 +0,0 @@
repos:
- repo: https://github.com/psf/black
rev: 24.8.0
hooks:
- id: black
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.15.0
hooks:
- id: ruff
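
The .pre-commit-config.yaml removed in this hunk wires Black and Ruff into pre-commit. For reference, enabling such hooks locally works roughly like this (a sketch, assuming pre-commit itself is installed via pip):

    pip install pre-commit
    pre-commit install             # run the configured hooks on every "git commit"
    pre-commit run --all-files     # or check the whole working tree once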


@ -1,33 +0,0 @@
# .readthedocs.yaml - Read the Docs configuration file.
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details.
version: 2
build:
os: ubuntu-22.04
tools:
python: "3.11"
jobs:
post_checkout:
- git fetch --unshallow
apt_packages:
- build-essential
- pkg-config
- libacl1-dev
- libssl-dev
- liblz4-dev
- libzstd-dev
- libxxhash-dev
python:
install:
- requirements: requirements.d/development.lock.txt
- requirements: requirements.d/docs.txt
- method: pip
path: .
sphinx:
configuration: docs/conf.py
formats:
- htmlzip
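
The .readthedocs.yaml in this hunk drives the hosted documentation build. A comparable local build installs the same docs requirements and points Sphinx at docs/conf.py; this is only a sketch, and the output directory is an arbitrary choice:

    pip install -r requirements.d/docs.txt
    sphinx-build -b html docs docs/_build/html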

28
AUTHORS

@ -1,5 +1,5 @@
Email addresses listed here are not intended for support.
Please see the `support section`_ instead.
E-mail addresses listed here are not intended for support, please see
the `support section`_ instead.
.. _support section: https://borgbackup.readthedocs.io/en/stable/support.html
@ -44,3 +44,27 @@ Attic Patches and Suggestions
- Johann Klähn
- Petros Moisiadis
- Thomas Waldmann
BLAKE2
------
Borg includes BLAKE2: Copyright 2012, Samuel Neves <sneves@dei.uc.pt>, licensed under the terms
of the CC0, the OpenSSL Licence, or the Apache Public License 2.0.
Slicing CRC32
-------------
Borg includes a fast slice-by-8 implementation of CRC32, Copyright 2011-2015 Stephan Brumme,
licensed under the terms of a zlib license. See http://create.stephan-brumme.com/crc32/
Folding CRC32
-------------
Borg includes an extremely fast folding implementation of CRC32, Copyright 2013 Intel Corporation,
licensed under the terms of the zlib license.
xxHash
------
XXH64, a fast non-cryptographic hash algorithm. Copyright 2012-2016 Yann Collet,
licensed under a BSD 2-clause license.


@ -1,12 +0,0 @@
brew 'pkgconf'
brew 'zstd'
brew 'lz4'
brew 'xxhash'
brew 'openssl@3'
# osxfuse (aka macFUSE) is only required for "borg mount",
# but won't work on GitHub Actions' workers.
# It requires installing a kernel extension, so some users
# may want it and some won't.
#cask 'osxfuse'


@ -1,4 +1,4 @@
Copyright (C) 2015-2025 The Borg Collective (see AUTHORS file)
Copyright (C) 2015-2022 The Borg Collective (see AUTHORS file)
Copyright (C) 2010-2014 Jonas Borgström <jonas@borgstrom.se>
All rights reserved.


@ -1,7 +1,7 @@
# The files we need to include in the sdist are handled automatically by
# stuff we need to include into the sdist is handled automatically by
# setuptools_scm - it includes all git-committed files.
# But we want to exclude some committed files/directories not needed in the sdist:
exclude .editorconfig .gitattributes .gitignore .mailmap Vagrantfile
# but we want to exclude some committed files/dirs not needed in the sdist:
exclude .coafile .editorconfig .gitattributes .gitignore .mailmap Vagrantfile
prune .github
include src/borg/platform/darwin.c src/borg/platform/freebsd.c src/borg/platform/linux.c src/borg/platform/posix.c
include src/borg/platform/syncfilerange.c


@ -1,23 +1,6 @@
This is borg2!
--------------
Please note that this is the README for borg2 / master branch.
For the stable version's docs, please see here:
https://borgbackup.readthedocs.io/en/stable/
Borg2 is currently in beta testing and might get major and/or
breaking changes between beta releases (and there is no beta to
next-beta upgrade code, so you will have to delete and re-create repos).
Thus, **DO NOT USE BORG2 FOR YOUR PRODUCTION BACKUPS!** Please help with
testing it, but set it up *additionally* to your production backups.
TODO: the screencasts need a remake using borg2, see here:
https://github.com/borgbackup/borg/issues/6303
|screencast_basic|
More screencasts: `installation`_, `advanced usage`_
What is BorgBackup?
-------------------
@ -25,17 +8,17 @@ What is BorgBackup?
BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to back up data.
The main goal of Borg is to provide an efficient and secure way to backup data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
See the `installation manual`_ or, if you have already
downloaded Borg, ``docs/installation.rst`` to get started with Borg.
There is also an `offline documentation`_ available, in multiple formats.
.. _installation manual: https://borgbackup.readthedocs.io/en/master/installation.html
.. _installation manual: https://borgbackup.readthedocs.org/en/stable/installation.html
.. _offline documentation: https://readthedocs.org/projects/borgbackup/downloads
Main features
@ -69,16 +52,15 @@ Main features
**Speed**
* performance-critical code (chunking, compression, encryption) is
implemented in C/Cython
* local caching
* local caching of files/chunks index data
* quick detection of unmodified files
**Data encryption**
All data can be protected client-side using 256-bit authenticated encryption
(AES-OCB or chacha20-poly1305), ensuring data confidentiality, integrity and
authenticity.
All data can be protected using 256-bit AES encryption, data integrity and
authenticity is verified using HMAC-SHA256. Data is encrypted clientside.
**Obfuscation**
Optionally, Borg can actively obfuscate, e.g., the size of files/chunks to
Optionally, borg can actively obfuscate e.g. the size of files / chunks to
make fingerprinting attacks more difficult.
**Compression**
@ -91,24 +73,24 @@ Main features
* lzma (low speed, high compression)
**Off-site backups**
Borg can store data on any remote host accessible over SSH. If Borg is
installed on the remote host, significant performance gains can be achieved
compared to using a network file system (sshfs, NFS, ...).
Borg can store data on any remote host accessible over SSH. If Borg is
installed on the remote host, big performance gains can be achieved
compared to using a network filesystem (sshfs, nfs, ...).
**Backups mountable as file systems**
Backup archives are mountable as user-space file systems for easy interactive
backup examination and restores (e.g., by using a regular file manager).
**Backups mountable as filesystems**
Backup archives are mountable as userspace filesystems for easy interactive
backup examination and restores (e.g. by using a regular file manager).
**Easy installation on multiple platforms**
We offer single-file binaries that do not require installing anything -
you can just run them on these platforms:
* Linux
* macOS
* Mac OS X
* FreeBSD
* OpenBSD and NetBSD (no xattrs/ACLs support or binaries yet)
* Cygwin (experimental, no binaries yet)
* Windows Subsystem for Linux (WSL) on Windows 10/11 (experimental)
* Linux Subsystem of Windows 10 (experimental)
**Free and Open Source Software**
* security and functionality can be audited independently
@ -118,57 +100,61 @@ Main features
Easy to use
~~~~~~~~~~~
For ease of use, set the BORG_REPO environment variable::
Initialize a new backup repository (see ``borg init --help`` for encryption options)::
$ export BORG_REPO=/path/to/repo
$ borg init -e repokey /path/to/repo
Create a new backup repository (see ``borg repo-create --help`` for encryption options)::
Create a backup archive::
$ borg repo-create -e repokey-aes-ocb
$ borg create /path/to/repo::Saturday1 ~/Documents
Create a new backup archive::
Now doing another backup, just to show off the great deduplication::
$ borg create Monday1 ~/Documents
$ borg create -v --stats /path/to/repo::Saturday2 ~/Documents
-----------------------------------------------------------------------------
Archive name: Saturday2
Archive fingerprint: 622b7c53c...
Time (start): Sat, 2016-02-27 14:48:13
Time (end): Sat, 2016-02-27 14:48:14
Duration: 0.88 seconds
Number of files: 163
-----------------------------------------------------------------------------
Original size Compressed size Deduplicated size
This archive: 6.85 MB 6.85 MB 30.79 kB <-- !
All archives: 13.69 MB 13.71 MB 6.88 MB
Now do another backup, just to show off the great deduplication::
$ borg create -v --stats Monday2 ~/Documents
Repository: /path/to/repo
Archive name: Monday2
Archive fingerprint: 7714aef97c1a24539cc3dc73f79b060f14af04e2541da33d54c7ee8e81a00089
Time (start): Mon, 2022-10-03 19:57:35 +0200
Time (end): Mon, 2022-10-03 19:57:35 +0200
Duration: 0.01 seconds
Number of files: 24
Original size: 29.73 MB
Deduplicated size: 520 B
Unique chunks Total chunks
Chunk index: 167 330
-----------------------------------------------------------------------------
Helping, donations and bounties, becoming a Patron
For a graphical frontend refer to our complementary project `BorgWeb <https://borgweb.readthedocs.io/>`_.
Helping, Donations and Bounties, becoming a Patron
--------------------------------------------------
Your help is always welcome!
Spread the word, give feedback, help with documentation, testing or development.
You can also give monetary support to the project, see here for details:
You can also give monetary support to the project, see there for details:
https://www.borgbackup.org/support/fund.html
Links
-----
* `Main website <https://borgbackup.readthedocs.io/>`_
* `Main Web Site <https://borgbackup.readthedocs.org/>`_
* `Releases <https://github.com/borgbackup/borg/releases>`_,
`PyPI packages <https://pypi.org/project/borgbackup/>`_ and
`Changelog <https://github.com/borgbackup/borg/blob/master/docs/changes.rst>`_
* `Offline documentation <https://readthedocs.org/projects/borgbackup/downloads>`_
`PyPI packages <https://pypi.python.org/pypi/borgbackup>`_ and
`ChangeLog <https://github.com/borgbackup/borg/blob/master/docs/changes.rst>`_
* `Offline Documentation <https://readthedocs.org/projects/borgbackup/downloads>`_
* `GitHub <https://github.com/borgbackup/borg>`_ and
`Issue tracker <https://github.com/borgbackup/borg/issues>`_.
* `Web chat (IRC) <https://web.libera.chat/#borgbackup>`_ and
`Mailing list <https://mail.python.org/mailman/listinfo/borgbackup>`_
* `License <https://borgbackup.readthedocs.io/en/master/authors.html#license>`_
* `Security contact <https://borgbackup.readthedocs.io/en/master/support.html#security-contact>`_
`Issue Tracker <https://github.com/borgbackup/borg/issues>`_.
* `Web-Chat (IRC) <https://web.libera.chat/#borgbackup>`_ and
`Mailing List <https://mail.python.org/mailman/listinfo/borgbackup>`_
* `License <https://borgbackup.readthedocs.org/en/stable/authors.html#license>`_
* `Security contact <https://borgbackup.readthedocs.io/en/latest/support.html#security-contact>`_
Compatibility notes
-------------------
@ -178,18 +164,22 @@ CHANGES (like when going from 0.x.y to 1.0.0 or from 1.x.y to 2.0.0).
NOT RELEASED DEVELOPMENT VERSIONS HAVE UNKNOWN COMPATIBILITY PROPERTIES.
THIS IS SOFTWARE IN DEVELOPMENT, DECIDE FOR YOURSELF WHETHER IT FITS YOUR NEEDS.
THIS IS SOFTWARE IN DEVELOPMENT, DECIDE YOURSELF WHETHER IT FITS YOUR NEEDS.
Security issues should be reported to the `Security contact`_ (or
see ``docs/support.rst`` in the source distribution).
.. start-badges
|doc| |build| |coverage| |bestpractices|
|doc| |build| |coverage| |bestpractices| |bounties|
.. |doc| image:: https://readthedocs.org/projects/borgbackup/badge/?version=master
.. |bounties| image:: https://api.bountysource.com/badge/team?team_id=78284&style=bounties_posted
:alt: Bounty Source
:target: https://www.bountysource.com/teams/borgbackup
.. |doc| image:: https://readthedocs.org/projects/borgbackup/badge/?version=stable
:alt: Documentation
:target: https://borgbackup.readthedocs.io/en/master/
:target: https://borgbackup.readthedocs.org/en/stable/
.. |build| image:: https://github.com/borgbackup/borg/workflows/CI/badge.svg?branch=master
:alt: Build Status (master)

48
README_WINDOWS.rst Normal file

@ -0,0 +1,48 @@
Borg Native on Windows
======================
Running borg natively on windows is in a early alpha stage. Expect many things to fail.
Do not use the native windows build on any data which you do not want to lose!
Build Requirements
------------------
- VC 14.0 Compiler
- OpenSSL Library v1.1.1c, 64bit (available at https://github.com/python/cpython-bin-deps)
Please use the `win-download-openssl.ps1` script to download and extract the library to
the correct location. See also the OpenSSL section below.
- Patience and a lot of coffee / beer
What's working
--------------
.. note::
The following examples assume that the `BORG_REPO` and `BORG_PASSPHRASE` environment variables are set
if the repo or passphrase is not explicitly given.
- Borg does not crash if called with ``borg``
- ``borg init --encryption repokey-blake2 ./demoRepo`` runs without an error/warning.
Note that absolute paths only work if the protocol is explicitly set to file://
- ``borg create ::backup-{now} D:\DemoData`` works as expected.
- ``borg list`` works as expected.
- ``borg extract --strip-components 1 ::backup-XXXX`` works.
If absolute paths are extracted, it's important to pass ``--strip-components 1`` as
otherwise the data is restored to the original location!
What's NOT working
------------------
- Extracting a backup which was created on windows machine on a non windows machine will fail.
- And many things more.
OpenSSL, Windows and Python
---------------------------
Windows does not ship OpenSSL by default, so we need to get the library from somewhere else.
However, a default python installation does include `libcrypto` which is required by borg.
The only things which are missing to build borg are the header and `*.lib` files.
Luckily the python developers provide all required files in a separate repository.
The `win-download-openssl.ps1` script can be used to download the package from
https://github.com/python/cpython-bin-deps and extract the files to the correct location.
For Anaconda, the required libraries can be installed with `conda install -c anaconda openssl`.


@ -2,18 +2,16 @@
## Supported Versions
These Borg releases are currently supported with security updates.
These borg releases are currently supported with security updates.
| Version | Supported |
|---------|--------------------|
| 2.0.x | :x: (beta) |
| 1.4.x | :white_check_mark: |
| 1.2.x | :x: (no new releases, critical fixes may still be backported) |
| 1.1.x | :x: |
| 1.2.x | :white_check_mark: |
| 1.1.x | :white_check_mark: |
| < 1.1 | :x: |
## Reporting a Vulnerability
See here:
See there:
https://borgbackup.readthedocs.io/en/latest/support.html#security-contact

306
Vagrantfile vendored

@ -1,10 +1,10 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Automated creation of testing environments/binaries on miscellaneous platforms
# Automated creation of testing environments / binaries on misc. platforms
$cpus = Integer(ENV.fetch('VMCPUS', '8')) # create VMs with that many cpus
$xdistn = Integer(ENV.fetch('XDISTN', '8')) # dispatch tests to that many pytest workers
$cpus = Integer(ENV.fetch('VMCPUS', '4')) # create VMs with that many cpus
$xdistn = Integer(ENV.fetch('XDISTN', '4')) # dispatch tests to that many pytest workers
$wmem = $xdistn * 256 # give the VM additional memory for workers [MB]
def packages_debianoid(user)
@ -15,8 +15,7 @@ def packages_debianoid(user)
apt-get -y -qq update
apt-get -y -qq dist-upgrade
# for building borgbackup and dependencies:
apt install -y pkg-config
apt install -y libssl-dev libacl1-dev libxxhash-dev liblz4-dev libzstd-dev || true
apt install -y libssl-dev libacl1-dev libxxhash-dev libdeflate-dev liblz4-dev libzstd-dev pkg-config
apt install -y libfuse-dev fuse || true
apt install -y libfuse3-dev fuse3 || true
apt install -y locales || true
@ -28,6 +27,9 @@ def packages_debianoid(user)
apt install -y python3-dev python3-setuptools virtualenv
# for building python:
apt install -y zlib1g-dev libbz2-dev libncurses5-dev libreadline-dev liblzma-dev libsqlite3-dev libffi-dev
# older debian / ubuntu have no .pc file for these, so we need to point at the lib/header location:
echo 'export BORG_LIBXXHASH_PREFIX=/usr' >> ~vagrant/.bash_profile
echo 'export BORG_LIBDEFLATE_PREFIX=/usr' >> ~vagrant/.bash_profile
EOF
end
@ -38,17 +40,16 @@ def packages_freebsd
# install all the (security and other) updates, base system
freebsd-update --not-running-from-cron fetch install
# for building borgbackup and dependencies:
pkg install -y xxhash liblz4 zstd pkgconf
pkg install -y xxhash libdeflate liblz4 zstd pkgconf
pkg install -y fusefs-libs || true
pkg install -y fusefs-libs3 || true
pkg install -y rust
pkg install -y git bash # fakeroot causes lots of troubles on freebsd
pkg install -y python310 py310-sqlite3
pkg install -y python311 py311-sqlite3 py311-pip py311-virtualenv
# make sure there is a python3/pip3/virtualenv command
ln -sf /usr/local/bin/python3.11 /usr/local/bin/python3
ln -sf /usr/local/bin/pip-3.11 /usr/local/bin/pip3
ln -sf /usr/local/bin/virtualenv-3.11 /usr/local/bin/virtualenv
# for building python (for the tests we use pyenv built pythons):
pkg install -y python39 py39-sqlite3
# make sure there is a python3 command
ln -sf /usr/local/bin/python3.9 /usr/local/bin/python3
python3 -m ensurepip
pip3 install virtualenv
# make bash default / work:
chsh -s bash vagrant
mount -t fdescfs fdesc /dev/fd
@ -65,86 +66,81 @@ def packages_freebsd
pkg update
yes | pkg upgrade
echo 'export BORG_OPENSSL_PREFIX=/usr' >> ~vagrant/.bash_profile
# (re)mount / with acls
mount -o acls /
EOF
end
def packages_openbsd
return <<-EOF
hostname "openbsd77.localdomain"
echo "$(hostname)" > /etc/myname
echo "127.0.0.1 localhost" > /etc/hosts
echo "::1 localhost" >> /etc/hosts
echo "127.0.0.1 $(hostname) $(hostname -s)" >> /etc/hosts
echo "https://ftp.eu.openbsd.org/pub/OpenBSD" > /etc/installurl
ftp https://cdn.openbsd.org/pub/OpenBSD/$(uname -r)/$(uname -m)/comp$(uname -r | tr -d .).tgz
tar -C / -xzphf comp$(uname -r | tr -d .).tgz
rm comp$(uname -r | tr -d .).tgz
pkg_add bash
chsh -s bash vagrant
pkg_add xxhash
pkg_add libdeflate
pkg_add lz4
pkg_add zstd
pkg_add git # no fakeroot
pkg_add rust
pkg_add openssl%3.4
pkg_add openssl%1.1
pkg_add py3-pip
pkg_add py3-virtualenv
echo 'export BORG_OPENSSL_NAME=eopenssl30' >> ~vagrant/.bash_profile
EOF
end
def packages_netbsd
return <<-EOF
echo 'https://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/$arch/9.3/All' > /usr/pkg/etc/pkgin/repositories.conf
# use the latest stuff, some packages in "9.2" are quite broken
echo 'http://ftp.NetBSD.org/pub/pkgsrc/packages/NetBSD/$arch/9.0_current/All' > /usr/pkg/etc/pkgin/repositories.conf
pkgin update
pkgin -y upgrade
pkg_add zstd lz4 xxhash git
pkg_add rust
pkg_add bash
chsh -s bash vagrant
echo "export PROMPT_COMMAND=" >> ~vagrant/.bash_profile # bug in netbsd 9.3, .bash_profile broken for screen
echo "export PROMPT_COMMAND=" >> ~root/.bash_profile # bug in netbsd 9.3, .bash_profile broken for screen
echo "export PROMPT_COMMAND=" >> ~vagrant/.bash_profile # bug in netbsd 9.2, .bash_profile broken for screen
echo "export PROMPT_COMMAND=" >> ~root/.bash_profile # bug in netbsd 9.2, .bash_profile broken for screen
pkg_add pkg-config
# pkg_add fuse # llfuse supports netbsd, but is still buggy.
# https://bitbucket.org/nikratio/python-llfuse/issues/70/perfuse_open-setsockopt-no-buffer-space
pkg_add py311-sqlite3 py311-pip py311-virtualenv py311-expat
ln -s /usr/pkg/bin/python3.11 /usr/pkg/bin/python
ln -s /usr/pkg/bin/python3.11 /usr/pkg/bin/python3
ln -s /usr/pkg/bin/pip3.11 /usr/pkg/bin/pip
ln -s /usr/pkg/bin/pip3.11 /usr/pkg/bin/pip3
ln -s /usr/pkg/bin/virtualenv-3.11 /usr/pkg/bin/virtualenv
ln -s /usr/pkg/bin/virtualenv-3.11 /usr/pkg/bin/virtualenv3
pkg_add python39 py39-sqlite3 py39-pip py39-virtualenv py39-expat
ln -s /usr/pkg/bin/python3.9 /usr/pkg/bin/python
ln -s /usr/pkg/bin/python3.9 /usr/pkg/bin/python3
ln -s /usr/pkg/bin/pip3.9 /usr/pkg/bin/pip
ln -s /usr/pkg/bin/pip3.9 /usr/pkg/bin/pip3
ln -s /usr/pkg/bin/virtualenv-3.9 /usr/pkg/bin/virtualenv
ln -s /usr/pkg/bin/virtualenv-3.9 /usr/pkg/bin/virtualenv3
ln -s /usr/pkg/lib/python3.9/_sysconfigdata_netbsd9.py /usr/pkg/lib/python3.9/_sysconfigdata__netbsd9_.py # bug in netbsd 9.2, expected filename not there.
EOF
end
def package_update_openindiana
def packages_darwin
return <<-EOF
echo "nameserver 1.1.1.1" > /etc/resolv.conf
# needs separate provisioning step + reboot to become effective:
pkg update
# install all the (security and other) updates
sudo softwareupdate --ignore iTunesX
sudo softwareupdate --ignore iTunes
sudo softwareupdate --ignore Safari
sudo softwareupdate --ignore "Install macOS High Sierra"
sudo softwareupdate --install --all
which brew || CI=1 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
brew update > /dev/null
brew install pkg-config readline openssl@1.1 xxhash libdeflate zstd lz4 xz
brew install --cask macfuse
# brew upgrade # upgrade everything (takes rather long)
echo 'export PKG_CONFIG_PATH=/usr/local/opt/openssl@1.1/lib/pkgconfig' >> ~vagrant/.bash_profile
EOF
end
def packages_openindiana
return <<-EOF
pkg install gcc-13 git
pkg install pkg-config libxxhash
pkg install python-313
ln -sf /usr/bin/python3.13 /usr/bin/python3
ln -sf /usr/bin/python3.13-config /usr/bin/python3-config
# needs separate provisioning step + reboot:
#pkg update
#pkg install gcc-7 python-39 setuptools-39
ln -sf /usr/bin/python3.9 /usr/bin/python3
python3 -m ensurepip
ln -sf /usr/bin/pip3.13 /usr/bin/pip3
ln -sf /usr/bin/pip3.9 /usr/bin/pip3
pip3 install virtualenv
# let borg's pkg-config find openssl:
pfexec pkg set-mediator -V 3 openssl
EOF
end
def install_pyenv(boxname)
return <<-EOF
echo 'export PYTHON_CONFIGURE_OPTS="${PYTHON_CONFIGURE_OPTS} --enable-shared"' >> ~/.bash_profile
echo 'export PYTHON_CONFIGURE_OPTS="--enable-shared"' >> ~/.bash_profile
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bash_profile
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bash_profile
. ~/.bash_profile
@ -157,11 +153,17 @@ def install_pyenv(boxname)
EOF
end
def fix_pyenv_darwin(boxname)
return <<-EOF
echo 'export PYTHON_CONFIGURE_OPTS="--enable-framework"' >> ~/.bash_profile
EOF
end
def install_pythons(boxname)
return <<-EOF
. ~/.bash_profile
echo "PYTHON_CONFIGURE_OPTS: ${PYTHON_CONFIGURE_OPTS}"
pyenv install 3.13.8
pyenv install 3.10.0 # tests, version supporting openssl 1.1
pyenv install 3.9.12 # tests, version supporting openssl 1.1, binary build
pyenv rehash
EOF
end
@ -178,9 +180,9 @@ def build_pyenv_venv(boxname)
return <<-EOF
. ~/.bash_profile
cd /vagrant/borg
# use the latest 3.13 release
pyenv global 3.13.8
pyenv virtualenv 3.13.8 borg-env
# use the latest 3.9 release
pyenv global 3.9.12
pyenv virtualenv 3.9.12 borg-env
ln -s ~/.pyenv/versions/borg-env .
EOF
end
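Taken together, the install_pyenv / install_pythons / build_pyenv_venv provisioners above boil down to roughly this shell sequence inside a VM (a condensed sketch of the steps already shown, not an extra provisioner; the pinned Python version differs between branches):
    . ~/.bash_profile                                  # pick up PYENV_ROOT / PATH set by install_pyenv
    pyenv install 3.13.8                               # version pinned by install_pythons
    pyenv global 3.13.8
    pyenv virtualenv 3.13.8 borg-env                   # requires the pyenv-virtualenv plugin
    ln -s ~/.pyenv/versions/borg-env /vagrant/borg/    # same symlink build_pyenv_venv creates in /vagrant/borg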
@ -193,10 +195,8 @@ def install_borg(fuse)
pip install -U wheel # upgrade wheel, might be too old
cd borg
pip install -r requirements.d/development.lock.txt
python3 scripts/make.py clean
# install borgstore WITH all options, so it pulls in the needed
# requirements, so they will also get into the binaries built. #8574
pip install borgstore[sftp,s3]
python setup.py clean
python setup.py clean2
pip install -e .[#{fuse}]
EOF
end
@ -206,7 +206,10 @@ def install_pyinstaller()
. ~/.bash_profile
cd /vagrant/borg
. borg-env/bin/activate
pip install -r requirements.d/pyinstaller.txt
git clone https://github.com/thomaswaldmann/pyinstaller.git
cd pyinstaller
git checkout v4.7-maint
python setup.py install
EOF
end
@ -229,8 +232,8 @@ def run_tests(boxname, skip_env)
. ../borg-env/bin/activate
if which pyenv 2> /dev/null; then
# for testing, use the earliest point releases of the supported python versions:
pyenv global 3.13.8
pyenv local 3.13.8
pyenv global 3.9.12 3.10.0
pyenv local 3.9.12 3.10.0
fi
# otherwise: just use the system python
# some OSes can only run specific test envs, e.g. because they miss FUSE support:
@ -271,95 +274,51 @@ Vagrant.configure(2) do |config|
v.cpus = $cpus
end
config.vm.define "noble" do |b|
b.vm.box = "bento/ubuntu-24.04"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("noble")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("noble", ".*none.*")
end
config.vm.define "jammy" do |b|
config.vm.define "jammy64" do |b|
b.vm.box = "ubuntu/jammy64"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("jammy")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("jammy64")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("jammy", ".*none.*")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("jammy64", ".*none.*")
end
config.vm.define "trixie" do |b|
b.vm.box = "debian/testing64"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("trixie")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("trixie")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("trixie")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("trixie")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("trixie", ".*none.*")
end
config.vm.define "bookworm32" do |b|
b.vm.box = "generic-x32/debian12"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("bookworm32")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("bookworm32")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("bookworm32")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("bookworm32")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("bookworm32", ".*none.*")
end
config.vm.define "bookworm" do |b|
b.vm.box = "debian/bookworm64"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("bookworm")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("bookworm")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("bookworm")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("bookworm")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("bookworm", ".*none.*")
end
config.vm.define "bullseye" do |b|
config.vm.define "bullseye64" do |b|
b.vm.box = "debian/bullseye64"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("bullseye")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("bullseye")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("bullseye")
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("bullseye64")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("bullseye64")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("bullseye64")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("bullseye")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("bullseye", ".*none.*")
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("bullseye64")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("bullseye64", ".*none.*")
end
config.vm.define "freebsd13" do |b|
config.vm.define "buster64" do |b|
b.vm.box = "debian/buster64"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages debianoid", :type => :shell, :inline => packages_debianoid("vagrant")
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("buster64")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("buster64")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("buster64")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("buster64")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("buster64", ".*none.*")
end
config.vm.define "freebsd64" do |b|
b.vm.box = "generic/freebsd13"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
@ -367,68 +326,79 @@ Vagrant.configure(2) do |config|
b.ssh.shell = "sh"
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages freebsd", :type => :shell, :inline => packages_freebsd
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("freebsd13")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("freebsd13")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("freebsd13")
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("freebsd64")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("freebsd64")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("freebsd64")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("freebsd13")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("freebsd13", ".*(pyfuse3|none).*")
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("freebsd64")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("freebsd64", ".*(fuse3|none).*")
end
config.vm.define "freebsd14" do |b|
b.vm.box = "generic/freebsd14"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
end
b.ssh.shell = "sh"
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages freebsd", :type => :shell, :inline => packages_freebsd
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("freebsd14")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("freebsd14")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("freebsd14")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("freebsd14")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("freebsd14", ".*(pyfuse3|none).*")
end
config.vm.define "openbsd7" do |b|
b.vm.box = "l3system/openbsd77-amd64"
config.vm.define "openbsd64" do |b|
b.vm.box = "openbsd71-64"
b.vm.provider :virtualbox do |v|
v.memory = 1024 + $wmem
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages openbsd", :type => :shell, :inline => packages_openbsd
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("openbsd7")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("openbsd64")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("nofuse")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("openbsd7", ".*fuse.*")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("openbsd64", ".*fuse.*")
end
config.vm.define "netbsd9" do |b|
config.vm.define "netbsd64" do |b|
b.vm.box = "generic/netbsd9"
b.vm.provider :virtualbox do |v|
v.memory = 4096 + $wmem # need big /tmp tmpfs in RAM!
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages netbsd", :type => :shell, :inline => packages_netbsd
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("netbsd9")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("nofuse")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("netbsd9", ".*fuse.*")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("netbsd64")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg(false)
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("netbsd64", ".*fuse.*")
end
config.vm.define "darwin64" do |b|
b.vm.box = "macos-sierra"
b.vm.provider :virtualbox do |v|
v.memory = 4096 + $wmem
v.customize ['modifyvm', :id, '--ostype', 'MacOS_64']
v.customize ['modifyvm', :id, '--paravirtprovider', 'default']
v.customize ['modifyvm', :id, '--nested-hw-virt', 'on']
# Adjust CPU settings according to
# https://github.com/geerlingguy/macos-virtualbox-vm
v.customize ['modifyvm', :id, '--cpuidset',
'00000001', '000306a9', '00020800', '80000201', '178bfbff']
# Disable USB variant requiring Virtualbox proprietary extension pack
v.customize ["modifyvm", :id, '--usbehci', 'off', '--usbxhci', 'off']
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "packages darwin", :type => :shell, :privileged => false, :inline => packages_darwin
b.vm.provision "install pyenv", :type => :shell, :privileged => false, :inline => install_pyenv("darwin64")
b.vm.provision "fix pyenv", :type => :shell, :privileged => false, :inline => fix_pyenv_darwin("darwin64")
b.vm.provision "install pythons", :type => :shell, :privileged => false, :inline => install_pythons("darwin64")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_pyenv_venv("darwin64")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("llfuse")
b.vm.provision "install pyinstaller", :type => :shell, :privileged => false, :inline => install_pyinstaller()
b.vm.provision "build binary with pyinstaller", :type => :shell, :privileged => false, :inline => build_binary_with_pyinstaller("darwin64")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("darwin64", ".*(fuse3|none).*")
end
# rsync on openindiana has trouble: it does not set the correct owner for /vagrant/borg and thus produces lots of
# permission errors. This can be fixed manually in the VM with: sudo chown -R vagrant /vagrant/borg ; then rsync again.
config.vm.define "openindiana" do |b|
b.vm.box = "openindiana/hipster"
config.vm.define "openindiana64" do |b|
b.vm.box = "openindiana"
b.vm.provider :virtualbox do |v|
v.memory = 2048 + $wmem
end
b.vm.provision "fs init", :type => :shell, :inline => fs_init("vagrant")
b.vm.provision "package update openindiana", :type => :shell, :inline => package_update_openindiana, :reboot => true
b.vm.provision "packages openindiana", :type => :shell, :inline => packages_openindiana
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("openindiana")
b.vm.provision "build env", :type => :shell, :privileged => false, :inline => build_sys_venv("openindiana64")
b.vm.provision "install borg", :type => :shell, :privileged => false, :inline => install_borg("nofuse")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("openindiana", ".*fuse.*")
b.vm.provision "run tests", :type => :shell, :privileged => false, :inline => run_tests("openindiana64", ".*fuse.*")
end
# TODO: create more VMs with python 3.9+ and openssl 1.1 or 3.0.
# See branch 1.1-maint for a better equipped Vagrantfile (but still on py35 and openssl 1.0).
end
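For reference, a typical way to use one of the machine definitions above would be (a sketch; the box name is one defined in this Vagrantfile, the rest is standard Vagrant usage):
    vagrant up bookworm                                       # create the VM and run all its provisioners
    vagrant provision bookworm --provision-with "run tests"   # re-run only the named provisioning step
    vagrant ssh bookworm                                      # open a shell; the source tree is synced to /vagrant/borg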

75
conftest.py Normal file
View file

@ -0,0 +1,75 @@
import os
import pytest
# needed to get pretty assertion failures in unit tests:
if hasattr(pytest, 'register_assert_rewrite'):
pytest.register_assert_rewrite('borg.testsuite')
import borg.cache # noqa: E402
from borg.logger import setup_logging # noqa: E402
# Ensure that the loggers exist for all tests
setup_logging()
from borg.testsuite import has_lchflags, has_llfuse, has_pyfuse3 # noqa: E402
from borg.testsuite import are_symlinks_supported, are_hardlinks_supported, is_utime_fully_supported # noqa: E402
from borg.testsuite.platform import fakeroot_detected # noqa: E402
@pytest.fixture(autouse=True)
def clean_env(tmpdir_factory, monkeypatch):
# avoid accessing / modifying the user's normal .config / .cache directory:
monkeypatch.setenv('XDG_CONFIG_HOME', str(tmpdir_factory.mktemp('xdg-config-home')))
monkeypatch.setenv('XDG_CACHE_HOME', str(tmpdir_factory.mktemp('xdg-cache-home')))
# also avoid using anything from the outside environment:
keys = [key for key in os.environ
if key.startswith('BORG_') and key not in ('BORG_FUSE_IMPL', )]
for key in keys:
monkeypatch.delenv(key, raising=False)
# Speed up tests
monkeypatch.setenv("BORG_TESTONLY_WEAKEN_KDF", "1")
def pytest_report_header(config, startdir):
tests = {
"BSD flags": has_lchflags,
"fuse2": has_llfuse,
"fuse3": has_pyfuse3,
"root": not fakeroot_detected(),
"symlinks": are_symlinks_supported(),
"hardlinks": are_hardlinks_supported(),
"atime/mtime": is_utime_fully_supported(),
"modes": "BORG_TESTS_IGNORE_MODES" not in os.environ
}
enabled = []
disabled = []
for test in tests:
if tests[test]:
enabled.append(test)
else:
disabled.append(test)
output = "Tests enabled: " + ", ".join(enabled) + "\n"
output += "Tests disabled: " + ", ".join(disabled)
return output
class DefaultPatches:
def __init__(self, request):
self.org_cache_wipe_cache = borg.cache.LocalCache.wipe_cache
def wipe_should_not_be_called(*a, **kw):
raise AssertionError("Cache wipe was triggered, if this is part of the test add "
"@pytest.mark.allow_cache_wipe")
if 'allow_cache_wipe' not in request.keywords:
borg.cache.LocalCache.wipe_cache = wipe_should_not_be_called
request.addfinalizer(self.undo)
def undo(self):
borg.cache.LocalCache.wipe_cache = self.org_cache_wipe_cache
@pytest.fixture(autouse=True)
def default_patches(request):
return DefaultPatches(request)
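As a usage note for the cache-wipe guard above: a test that legitimately wipes the cache opts out via the marker mentioned in the assertion message. A minimal sketch (hypothetical test module, not part of this diff):
    import pytest
    import borg.cache

    @pytest.mark.allow_cache_wipe      # checked via request.keywords in DefaultPatches
    def test_that_wipes_the_cache():
        # with the marker, LocalCache.wipe_cache keeps its real implementation;
        # without it, any call would raise the AssertionError patched in above.
        assert callable(borg.cache.LocalCache.wipe_cache)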

View file

@ -1,5 +1,5 @@
Here we store third-party documentation, licenses, etc.
Here we store 3rd party documentation, licenses, etc.
Please note that all files inside the "borg" package directory (except those
excluded in setup.py) will be installed, so do not keep docs or licenses
Please note that all files inside the "borg" package directory (except the
stuff excluded in setup.py) will be INSTALLED, so don't keep docs or licenses
there.

View file

@ -21,7 +21,7 @@ help:
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and an HTML help project"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"

View file

@ -1,6 +1,6 @@
<div class="sidebar-block">
<div class="sidebar-toc">
{# Restrict the sidebar ToC depth to two levels while generating command usage pages.
{# Restrict the sidebar toc depth to two levels while generating command usage pages.
This avoids superfluous entries for each "Description" and "Examples" heading. #}
{% if pagename.startswith("usage/") and pagename not in (
"usage/general", "usage/help", "usage/debug", "usage/notes",

View file

@ -1,173 +0,0 @@
{%- extends "basic/layout.html" %}
{# Do this so that Bootstrap is included before the main CSS file. #}
{%- block htmltitle %}
{% set script_files = script_files + ["_static/myscript.js"] %}
<!-- Licensed under the Apache 2.0 License -->
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/fonts/open-sans/stylesheet.css', 1) }}" />
<!-- Licensed under the SIL Open Font License -->
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/fonts/source-serif-pro/source-serif-pro.css', 1) }}" />
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/css/bootstrap.min.css', 1) }}" />
<link rel="stylesheet" type="text/css" href="{{ pathto('_static/css/bootstrap-theme.min.css', 1) }}" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
{{ super() }}
{%- endblock %}
{%- block extrahead %}
{% if theme_touch_icon %}
<link rel="apple-touch-icon" href="{{ pathto('_static/' ~ theme_touch_icon, 1) }}" />
{% endif %}
{{ super() }}
{% endblock %}
{# Displays the URL for the homepage if it's set, or the master_doc if it is not. #}
{% macro homepage() -%}
{%- if theme_homepage %}
{%- if hasdoc(theme_homepage) %}
{{ pathto(theme_homepage) }}
{%- else %}
{{ theme_homepage }}
{%- endif %}
{%- else %}
{{ pathto(master_doc) }}
{%- endif %}
{%- endmacro %}
{# Displays the URL for the tospage if it's set, or falls back to the homepage macro. #}
{% macro tospage() -%}
{%- if theme_tospage %}
{%- if hasdoc(theme_tospage) %}
{{ pathto(theme_tospage) }}
{%- else %}
{{ theme_tospage }}
{%- endif %}
{%- else %}
{{ homepage() }}
{%- endif %}
{%- endmacro %}
{# Displays the URL for the projectpage if it's set, or falls back to the homepage macro. #}
{% macro projectlink() -%}
{%- if theme_projectlink %}
{%- if hasdoc(theme_projectlink) %}
{{ pathto(theme_projectlink) }}
{%- else %}
{{ theme_projectlink }}
{%- endif %}
{%- else %}
{{ homepage() }}
{%- endif %}
{%- endmacro %}
{# Displays the next and previous links both before and after the content. #}
{% macro render_relations() -%}
{% if prev or next %}
<div class="footer-relations">
{% if prev %}
<div class="pull-left">
<a class="btn btn-default" href="{{ prev.link|e }}" title="{{ _('previous chapter')}} (use the left arrow)">{{ prev.title }}</a>
</div>
{% endif %}
{%- if next and next.title != '&lt;no title&gt;' %}
<div class="pull-right">
<a class="btn btn-default" href="{{ next.link|e }}" title="{{ _('next chapter')}} (use the right arrow)">{{ next.title }}</a>
</div>
{%- endif %}
</div>
<div class="clearer"></div>
{% endif %}
{%- endmacro %}
{%- macro guzzle_sidebar() %}
<div id="left-column">
<div class="sphinxsidebar">
{%- if sidebars != None %}
{#- New-style sidebar: explicitly include/exclude templates. #}
{%- for sidebartemplate in sidebars %}
{%- include sidebartemplate %}
{%- endfor %}
{% else %}
{% include "logo-text.html" %}
{% include "globaltoc.html" %}
{% include "searchbox.html" %}
{%- endif %}
</div>
</div>
{%- endmacro %}
{%- block content %}
{%- if pagename == 'index' and theme_index_template %}
{% include theme_index_template %}
{%- else %}
<div class="container-wrapper">
<div id="mobile-toggle">
<a href="#"><span class="glyphicon glyphicon-align-justify" aria-hidden="true"></span></a>
</div>
{%- block sidebar1 %}{{ guzzle_sidebar() }}{% endblock %}
{%- block document_wrapper %}
{%- block document %}
<div id="right-column">
{% block breadcrumbs %}
<div role="navigation" aria-label="breadcrumbs navigation">
<ol class="breadcrumb">
<li><a href="{{ pathto(master_doc) }}">Docs</a></li>
{% for doc in parents %}
<li><a href="{{ doc.link|e }}">{{ doc.title }}</a></li>
{% endfor %}
<li>{{ title }}</li>
</ol>
</div>
{% endblock %}
<div class="document clearer body" role="main">
{% block body %} {% endblock %}
</div>
{%- block bottom_rel_links %}
{{ render_relations() }}
{%- endblock %}
</div>
<div class="clearfix"></div>
{%- endblock %}
{%- endblock %}
{%- block comments -%}
{% if theme_disqus_comments_shortname %}
<div class="container comment-container">
{% include "comments.html" %}
</div>
{% endif %}
{%- endblock %}
</div>
{%- endif %}
{%- endblock %}
{%- block footer %}
<script type="text/javascript">
$("#mobile-toggle a").click(function () {
$("#left-column").toggle();
});
</script>
<script type="text/javascript" src="{{ pathto('_static/js/bootstrap.js', 1)}}"></script>
{%- block footer_wrapper %}
<div class="footer">
&copy; Copyright {{ copyright }}. Created using <a href="http://sphinx.pocoo.org/">Sphinx</a>.
</div>
{%- endblock %}
{%- block ga %}
{%- if theme_google_analytics_account %}
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', '{{ theme_google_analytics_account }}']);
_gaq.push(['_trackPageview']);
(function() {
var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
})();
</script>
{%- endif %}
{%- endblock %}
{%- endblock %}

View file

@ -1,117 +0,0 @@
Binary BorgBackup builds
========================
General notes
-------------
The binaries are supposed to work on the specified platform without installing anything else.
There are some limitations, though:
- for Linux, your system must have the same or newer glibc version as the one used for building
- for macOS, you need to have the same or newer macOS version as the one used for building
- for other OSes, there are likely similar limitations
If you don't find something working on your system, check the older borg releases.
*.asc are GnuPG signatures - only provided for locally built binaries.
*.exe (or no extension) is the single-file fat binary.
*.tgz is the single-directory fat binary (extract it once with tar -xzf).
Using the single-directory build is faster and does not require as much space
in the temporary directory as the self-extracting single-file build.
macOS: to avoid issues, download the file via the command line OR remove the
"quarantine" attribute after downloading:
$ xattr -dr com.apple.quarantine borg-macos1012.tgz
Download the correct files
--------------------------
Binaries built on GitHub servers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
borg-linux-glibc235-x86_64-gh Linux AMD/Intel (built on Ubuntu 22.04 LTS with glibc 2.35)
borg-linux-glibc235-arm64-gh Linux ARM (built on Ubuntu 22.04 LTS with glibc 2.35)
borg-macos-15-arm64-gh macOS Apple Silicon (built on macOS 15 w/o FUSE support)
borg-macos-15-x86_64-gh macOS Intel (built on macOS 15 w/o FUSE support)
borg-freebsd-14-x86_64-gh FreeBSD AMD/Intel (built on FreeBSD 14)
Binaries built locally
~~~~~~~~~~~~~~~~~~~~~~
borg-linux-glibc231-x86_64 Linux (built on Debian 11 "Bullseye" with glibc 2.31)
Note: if you don't find a specific binary here, check release 1.4.1 or 1.2.9.
Verifying your download
-----------------------
I provide GPG signatures for files which I have built locally on my machines.
To check the GPG signature, download both the file and the corresponding
signature (*.asc file) and then (on the shell) type, for example:
gpg --recv-keys 9F88FB52FAF7B393
gpg --verify borgbackup.tar.gz.asc borgbackup.tar.gz
The files are signed by:
Thomas Waldmann <tw@waldmann-edv.de>
GPG key fingerprint: 6D5B EF9A DD20 7580 5747 B70F 9F88 FB52 FAF7 B393
My fingerprint is also in the footer of all my BorgBackup mailing list posts.
Provenance attestations for GitHub-built binaries
-------------------------------------------------
For binaries built on GitHub (files with a "-gh" suffix in the name), we publish
an artifact provenance attestation that proves the binary was built by our
GitHub Actions workflow from a specific commit or tag. You can verify this using
the GitHub CLI (gh). Install it from https://cli.github.com/ and make sure you
use a recent version that supports "gh attestation".
Practical example (Linux, 2.0.0b20 tag):
curl -LO https://github.com/borgbackup/borg/releases/download/2.0.0b20/borg-linux-glibc235-x86_64-gh
gh attestation verify --repo borgbackup/borg --source-ref refs/tags/2.0.0b20 borg-linux-glibc235-x86_64-gh
If verification succeeds, gh prints a summary stating the subject (your file),
that it was attested by GitHub Actions, and the job/workflow reference.
Installing
----------
It is suggested that you rename or symlink the binary to just "borg".
If you need "borgfs", just also symlink it to the same binary; it will
detect internally under which name it was invoked.
On UNIX-like platforms, /usr/local/bin/ or ~/bin/ is a nice place for it,
but you can invoke it from anywhere by providing the full path to it.
Make sure the file is readable and executable (chmod +rx borg on UNIX-like
platforms).
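For example, on a Linux system the rename/symlink step described above might look like this (a sketch; substitute the binary you actually downloaded):
    chmod +rx borg-linux-glibc235-x86_64-gh
    sudo mv borg-linux-glibc235-x86_64-gh /usr/local/bin/borg
    sudo ln -s /usr/local/bin/borg /usr/local/bin/borgfs   # borgfs detects the invocation name internally
    borg -V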
Reporting issues
----------------
Please first check the FAQ and whether a GitHub issue already exists.
If you find a NEW issue, please open a ticket on our issue tracker:
https://github.com/borgbackup/borg/issues/
There, please give:
- the version number (it is displayed if you invoke borg -V)
- the sha256sum of the binary
- a good description of what the issue is
- a good description of how to reproduce your issue
- a traceback with system info (if you have one)
- your precise platform (CPU, 32/64-bit?), OS, distribution, release
- your Python and (g)libc versions
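The version number and sha256sum asked for above can be obtained like this (assuming the binary is installed as "borg"; on macOS use shasum -a 256 instead of sha256sum):
    borg -V
    sha256sum /usr/local/bin/borg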

View file

@ -5,8 +5,8 @@
Borg documentation
==================
.. When you add an element here, do not forget to add it to index.rst.
.. Note: Some things are in appendices (see latex_appendices in conf.py).
.. when you add an element here, do not forget to add it to index.rst
.. Note: Some things are in appendices (see latex_appendices in conf.py)
.. toctree::
:maxdepth: 2

View file

@ -52,7 +52,8 @@ h1 {
}
.container.experimental,
#debugging-facilities {
#debugging-facilities,
#borg-recreate {
/* don't change text dimensions */
margin: 0 -30px; /* padding below + border width */
padding: 0 10px; /* 10 px visual margin between edge of text and the border */

File diff suppressed because it is too large

View file

@ -1,807 +0,0 @@
.. _changelog_0x:
Change Log 0.x
==============
Version 0.30.0 (2016-01-23)
---------------------------
Compatibility notes:
- The new default logging level is WARNING. Previously, it was INFO, which was
more verbose. Use -v (or --info) to show INFO-level log messages again.
See the "general" section in the usage docs.
- For borg create, you need --list (in addition to -v) to see the long file
list (was needed so you can have e.g. --stats alone without the long list)
- See below about BORG_DELETE_I_KNOW_WHAT_I_AM_DOING (was:
BORG_CHECK_I_KNOW_WHAT_I_AM_DOING)
Bug fixes:
- fix crash when using borg create --dry-run --keep-tag-files, #570
- make sure teardown with cleanup happens for Cache and RepositoryCache,
avoiding leftover locks and TEMP dir contents, #285 (partially), #548
- fix locking KeyError, partial fix for #502
- log stats consistently, #526
- add abbreviated weekday to timestamp format, fixes #496
- strip whitespace when loading exclusions from file
- unset LD_LIBRARY_PATH before invoking ssh, fixes strange OpenSSL library
version warning when using the borg binary, #514
- add some error handling/fallback for C library loading, #494
- added BORG_DELETE_I_KNOW_WHAT_I_AM_DOING for check in "borg delete", #503
- remove unused "repair" rpc method name
New features:
- borg create: implement exclusions using regular expression patterns.
- borg create: implement inclusions using patterns.
- borg extract: support patterns, #361
- support different styles for patterns:
- fnmatch (`fm:` prefix, default when omitted), like borg <= 0.29.
- shell (`sh:` prefix) with `*` not matching directory separators and
`**/` matching 0..n directories
- path prefix (`pp:` prefix, for unifying borg create pp1 pp2 into the
patterns system), semantics like in borg <= 0.29
- regular expression (`re:`), new!
- --progress option for borg upgrade (#291) and borg delete <archive>
- update progress indication more often (e.g. for borg create within big
files or for borg check repo), #500
- finer chunker granularity for items metadata stream, #547, #487
- borg create --list is now used (in addition to -v) to enable the verbose
file list output
- display borg version below tracebacks, #532
Other changes:
- hashtable size (and thus: RAM and disk consumption) follows a growth policy:
grows fast while small, grows slower when getting bigger, #527
- Vagrantfile: use pyinstaller 3.1 to build binaries, freebsd sqlite3 fix,
fixes #569
- no separate binaries for centos6 any more because the generic linux binaries
also work on centos6 (or in general: on systems with a slightly older glibc
than debian7)
- dev environment: require virtualenv<14.0 so we get a py32 compatible pip
- docs:
- add space-saving chunks.archive.d trick to FAQ
- important: clarify -v and log levels in usage -> general, please read!
- sphinx configuration: create a simple man page from usage docs
- add a repo server setup example
- disable unneeded SSH features in authorized_keys examples for security.
- borg prune only knows "--keep-within" and not "--within"
- add gource video to resources docs, #507
- add netbsd install instructions
- authors: make it more clear what refers to borg and what to attic
- document standalone binary requirements, #499
- rephrase the mailing list section
- development docs: run build_api and build_usage before tagging release
- internals docs: hash table max. load factor is 0.75 now
- markup, typo, grammar, phrasing, clarifications and other fixes.
- add gcc gcc-c++ to redhat/fedora/corora install docs, fixes #583
Version 0.29.0 (2015-12-13)
---------------------------
Compatibility notes:
- When upgrading to 0.29.0, you need to upgrade client as well as server
installations due to the locking and command-line interface changes; otherwise
you'll get an error message about an RPC protocol mismatch or a wrong command-line
option.
If you run a server that needs to support both old and new clients, it is
suggested that you have a "borg-0.28.2" and a "borg-0.29.0" command.
Clients can then choose via e.g. "borg --remote-path=borg-0.29.0 ...".
- The default waiting time for a lock changed from infinity to 1 second for a
better interactive user experience. If the repo you want to access is
currently locked, borg will now terminate after 1s with an error message.
If you have scripts that should wait for the lock for a longer time, use
--lock-wait N (with N being the maximum wait time in seconds).
Bug fixes:
- hash table tuning (better chosen hashtable load factor 0.75 and prime initial
size of 1031 gave ~1000x speedup in some scenarios)
- avoid creation of an orphan lock for one case, #285
- --keep-tag-files: fix file mode and multiple tag files in one directory, #432
- fixes for "borg upgrade" (attic repo converter), #466
- remove --progress isatty magic (and also --no-progress option) again, #476
- borg init: display proper repo URL
- fix format of umask in help pages, #463
New features:
- implement --lock-wait, support timeout for UpgradableLock, #210
- implement borg break-lock command, #157
- include system info below traceback, #324
- sane remote logging, remote stderr, #461:
- remote log output: intercept it and log it via local logging system,
with "Remote: " prefixed to message. log remote tracebacks.
- remote stderr: output it to local stderr with "Remote: " prefixed.
- add --debug and --info (same as --verbose) to set the log level of the
builtin logging configuration (which otherwise defaults to warning), #426
note: there are few messages emitted at DEBUG level currently.
- optionally configure logging via env var BORG_LOGGING_CONF
- add --filter option for status characters: e.g. to show only the added
or modified files (and also errors), use "borg create -v --filter=AME ...".
- more progress indicators, #394
- use ISO-8601 date and time format, #375
- "borg check --prefix" to restrict archive checking to that name prefix, #206
Other changes:
- hashindex_add C implementation (speed up cache re-sync for new archives)
- increase FUSE read_size to 1024 (speed up metadata operations)
- check/delete/prune --save-space: free unused segments quickly, #239
- increase rpc protocol version to 2 (see also Compatibility notes), #458
- silence borg by default (via default log level WARNING)
- get rid of C compiler warnings, #391
- upgrade OS X FUSE to 3.0.9 on the OS X binary build system
- use python 3.5.1 to build binaries
- docs:
- new mailing list borgbackup@python.org, #468
- readthedocs: color and logo improvements
- load coverage icons over SSL (avoids mixed content)
- more precise binary installation steps
- update release procedure docs about OS X FUSE
- FAQ entry about unexpected 'A' status for unchanged file(s), #403
- add docs about 'E' file status
- add "borg upgrade" docs, #464
- add developer docs about output and logging
- clarify encryption, add note about client-side encryption
- add resources section, with videos, talks, presentations, #149
- Borg moved to Arch Linux [community]
- fix wrong installation instructions for archlinux
Version 0.28.2 (2015-11-15)
---------------------------
New features:
- borg create --exclude-if-present TAGFILE - exclude directories that have the
given file from the backup. You can additionally give --keep-tag-files to
preserve just the directory roots and the tag-files (but not back up other
directory contents), #395, attic #128, attic #142
Other changes:
- do not create docs sources at build time (just have them in the repo),
completely remove have_cython() hack, do not use the "mock" library at build
time, #384
- avoid hidden import, make it easier for PyInstaller, easier fix for #218
- docs:
- add description of item flags / status output, fixes #402
- explain how to regenerate usage and API files (build_api or
build_usage) and when to commit usage files directly into git, #384
- minor install docs improvements
Version 0.28.1 (2015-11-08)
---------------------------
Bug fixes:
- do not try to build api / usage docs for production install,
fixes unexpected "mock" build dependency, #384
Other changes:
- avoid using msgpack.packb at import time
- fix formatting issue in changes.rst
- fix build on readthedocs
Version 0.28.0 (2015-11-08)
---------------------------
Compatibility notes:
- changed return codes (exit codes), see docs. in short:
old: 0 = ok, 1 = error. now: 0 = ok, 1 = warning, 2 = error
New features:
- refactor return codes (exit codes), fixes #61
- add --show-rc option to enable "terminating with X status, rc N" output, fixes #58, #351
- borg create backups atime and ctime additionally to mtime, fixes #317
- extract: support atime additionally to mtime
- FUSE: support ctime and atime additionally to mtime
- support borg --version
- emit a warning if we have a slow msgpack installed
- borg list --prefix=thishostname- REPO, fixes #205
- Debug commands (do not use unless you know what you are doing): debug-get-obj,
debug-put-obj, debug-delete-obj, debug-dump-archive-items.
Bug fixes:
- setup.py: fix bug related to BORG_LZ4_PREFIX processing
- fix "check" for repos that have incomplete chunks, fixes #364
- borg mount: fix unlocking of repository at umount time, fixes #331
- fix reading files without touching their atime, #334
- non-ascii ACL fixes for Linux, FreeBSD and OS X, #277
- fix acl_use_local_uid_gid() and add a test for it, attic #359
- borg upgrade: do not upgrade repositories in place by default, #299
- fix cascading failure with the index conversion code, #269
- borg check: implement 'cmdline' archive metadata value decoding, #311
- fix RobustUnpacker, it missed some metadata keys (new atime and ctime keys
were missing, but also bsdflags). add check for unknown metadata keys.
- create from stdin: also save atime, ctime (cosmetic)
- use default_notty=False for confirmations, fixes #345
- vagrant: fix msgpack installation on centos, fixes #342
- deal with unicode errors for symlinks in same way as for regular files and
have a helpful warning message about how to fix wrong locale setup, fixes #382
- add ACL keys the RobustUnpacker must know about
Other changes:
- improve file size displays, more flexible size formatters
- explicitly commit to the units standard, #289
- archiver: add E status (means that an error occurred when processing this
(single) item)
- do binary releases via "github releases", closes #214
- create: use -x and --one-file-system (was: --do-not-cross-mountpoints), #296
- a lot of changes related to using "logging" module and screen output, #233
- show progress display if on a tty, output more progress information, #303
- factor out status output so it is consistent, fix surrogates removal,
maybe fixes #309
- move away from RawConfigParser to ConfigParser
- archive checker: better error logging, give chunk_id and sequence numbers
(can be used together with borg debug-dump-archive-items).
- do not mention the deprecated passphrase mode
- emit a deprecation warning for --compression N (giving a just a number)
- misc .coveragerc fixes (and coverage measurement improvements), fixes #319
- refactor confirmation code, reduce code duplication, add tests
- prettier error messages, fixes #307, #57
- tests:
- add a test to find disk-full issues, #327
- travis: also run tests on Python 3.5
- travis: use tox -r so it rebuilds the tox environments
- test the generated pyinstaller-based binary by archiver unit tests, #215
- vagrant: tests: announce whether fakeroot is used or not
- vagrant: add vagrant user to fuse group for debianoid systems also
- vagrant: llfuse install on darwin needs pkgconfig installed
- vagrant: use pyinstaller from develop branch, fixes #336
- benchmarks: test create, extract, list, delete, info, check, help, fixes #146
- benchmarks: test with both the binary and the python code
- archiver tests: test with both the binary and the python code, fixes #215
- make basic test more robust
- docs:
- moved docs to borgbackup.readthedocs.org, #155
- a lot of fixes and improvements, use mobile-friendly RTD standard theme
- use zlib,6 compression in some examples, fixes #275
- add missing rename usage to docs, closes #279
- include the help offered by borg help <topic> in the usage docs, fixes #293
- include a list of major changes compared to attic into README, fixes #224
- add OS X install instructions, #197
- more details about the release process, #260
- fix linux glibc requirement (binaries built on debian7 now)
- build: move usage and API generation to setup.py
- update docs about return codes, #61
- remove api docs (too much breakage on rtd)
- borgbackup install + basics presentation (asciinema)
- describe the current style guide in documentation
- add section about debug commands
- warn about not running out of space
- add example for rename
- improve chunker params docs, fixes #362
- minor development docs update
Version 0.27.0 (2015-10-07)
---------------------------
New features:
- "borg upgrade" command - attic -> borg one time converter / migration, #21
- temporary hack to avoid using lots of disk space for chunks.archive.d, #235:
To use it: rm -rf chunks.archive.d ; touch chunks.archive.d
- respect XDG_CACHE_HOME, attic #181
- add support for arbitrary SSH commands, attic #99
- borg delete --cache-only REPO (only delete cache, not REPO), attic #123
Bug fixes:
- use Debian 7 (wheezy) to build pyinstaller borgbackup binaries, fixes the
slowdown observed when running the Centos6-built binary on Ubuntu, #222
- do not crash on empty lock.roster, fixes #232
- fix multiple issues with the cache config version check, #234
- fix segment entry header size check, attic #352
plus other error handling improvements / code deduplication there.
- always give segment and offset in repo IntegrityErrors
Other changes:
- stop producing binary wheels, remove docs about it, #147
- docs:
- add warning about prune
- generate usage include files only as needed
- development docs: add Vagrant section
- update / improve / reformat FAQ
- hint to single-file pyinstaller binaries from README
Version 0.26.1 (2015-09-28)
---------------------------
This is a minor update, just docs and new pyinstaller binaries.
- docs update about python and binary requirements
- better docs for --read-special, fix #220
- re-built the binaries, fix #218 and #213 (glibc version issue)
- update web site about single-file pyinstaller binaries
Note: if you did a python-based installation, there is no need to upgrade.
Version 0.26.0 (2015-09-19)
---------------------------
New features:
- Faster cache sync (do all in one pass, remove tar/compression stuff), #163
- BORG_REPO env var to specify the default repo, #168
- read special files as if they were regular files, #79
- implement borg create --dry-run, attic issue #267
- Normalize paths before pattern matching on OS X, #143
- support OpenBSD and NetBSD (except xattrs/ACLs)
- support / run tests on Python 3.5
Bug fixes:
- borg mount repo: use absolute path, attic #200, attic #137
- chunker: use off_t to get 64bit on 32bit platform, #178
- initialize chunker fd to -1, so it's not equal to STDIN_FILENO (0)
- fix reaction to "no" answer at delete repo prompt, #182
- setup.py: detect lz4.h header file location
- to support python < 3.2.4, add less buggy argparse lib from 3.2.6 (#194)
- fix for obtaining ``char *`` from temporary Python value (old code causes
a compile error on Mint 17.2)
- llfuse 0.41 install troubles on some platforms, require < 0.41
(UnicodeDecodeError exception due to non-ascii llfuse setup.py)
- cython code: add some int types to get rid of unspecific python add /
subtract operations (avoid ``undefined symbol FPE_``... error on some platforms)
- fix verbose mode display of stdin backup
- extract: warn if an include pattern never matched, fixes #209,
implement counters for Include/ExcludePatterns
- archive names with slashes are invalid, attic issue #180
- chunker: add a check whether the POSIX_FADV_DONTNEED constant is defined -
fixes building on OpenBSD.
Other changes:
- detect inconsistency / corruption / hash collision, #170
- replace versioneer with setuptools_scm, #106
- docs:
- pkg-config is needed for llfuse installation
- be more clear about pruning, attic issue #132
- unit tests:
- xattr: ignore security.selinux attribute showing up
- ext3 seems to need a bit more space for a sparse file
- do not test lzma level 9 compression (avoid MemoryError)
- work around strange mtime granularity issue on netbsd, fixes #204
- ignore st_rdev if file is not a block/char device, fixes #203
- stay away from the setgid and sticky mode bits
- use Vagrant to do easy cross-platform testing (#196), currently:
- Debian 7 "wheezy" 32bit, Debian 8 "jessie" 64bit
- Ubuntu 12.04 32bit, Ubuntu 14.04 64bit
- Centos 7 64bit
- FreeBSD 10.2 64bit
- OpenBSD 5.7 64bit
- NetBSD 6.1.5 64bit
- Darwin (OS X Yosemite)
Version 0.25.0 (2015-08-29)
---------------------------
Compatibility notes:
- lz4 compression library (liblz4) is a new requirement (#156)
- the new compression code is very compatible: as long as you stay with zlib
compression, older borg releases will still be able to read data from a
repo/archive made with the new code (note: this is not the case for the
default "none" compression, use "zlib,0" if you want a "no compression" mode
that can be read by older borg). Also the new code is able to read repos and
archives made with older borg versions (for all zlib levels 0..9).
Deprecations:
- --compression N (with N being a number, as in 0.24) is deprecated.
We keep the --compression 0..9 for now not to break scripts, but it is
deprecated and will be removed later, so better fix your scripts now:
--compression 0 (as in 0.24) is the same as --compression zlib,0 (now).
BUT: if you do not want compression, use --compression none
(which is the default).
--compression 1 (in 0.24) is the same as --compression zlib,1 (now)
--compression 9 (in 0.24) is the same as --compression zlib,9 (now)
New features:
- create --compression none (default, means: do not compress, just pass through
data "as is". this is more efficient than zlib level 0 as used in borg 0.24)
- create --compression lz4 (super-fast, but not very high compression)
- create --compression zlib,N (slower, higher compression, default for N is 6)
- create --compression lzma,N (slowest, highest compression, default N is 6)
- honor the nodump flag (UF_NODUMP) and do not back up such items
- list --short just outputs a simple list of the files/directories in an archive
Bug fixes:
- fixed --chunker-params parameter order confusion / malfunction, fixes #154
- close fds of segments we delete (during compaction)
- close files which fell out the lrucache
- fadvise DONTNEED now is only called for the byte range actually read, not for
the whole file, fixes #158.
- fix issue with negative "all archives" size, fixes #165
- restore_xattrs: ignore if setxattr fails with EACCES, fixes #162
Other changes:
- remove fakeroot requirement for tests, tests run faster without fakeroot
(test setup does not fail any more without fakeroot, so you can run with or
without fakeroot), fixes #151 and #91.
- more tests for archiver
- recover_segment(): don't assume we have an fd for segment
- lrucache refactoring / cleanup, add dispose function, py.test tests
- generalize hashindex code for any key length (less hardcoding)
- lock roster: catch file not found in remove() method and ignore it
- travis CI: use requirements file
- improved docs:
- replace hack for llfuse with proper solution (install libfuse-dev)
- update docs about compression
- update development docs about fakeroot
- internals: add some words about lock files / locking system
- support: mention BountySource and for what it can be used
- theme: use a lighter green
- add pypi, wheel, dist package based install docs
- split install docs into system-specific preparations and generic instructions
Version 0.24.0 (2015-08-09)
---------------------------
Incompatible changes (compared to 0.23):
- borg now always issues --umask NNN option when invoking another borg via ssh
on the repository server. By that, it's making sure it uses the same umask
for remote repos as for local ones. Because of this, you must upgrade both
server and client(s) to 0.24.
- the default umask is 077 now (if you do not specify one via --umask), which might
differ from the one you used previously. The default umask prevents you from
accidentally giving group and/or others access permissions to files
created by borg (e.g. the repository).
Deprecations:
- "--encryption passphrase" mode is deprecated, see #85 and #97.
See the new "--encryption repokey" mode for a replacement.
New features:
- borg create --chunker-params ... to configure the chunker, fixes #16
(attic #302, attic #300, and somehow also #41).
This can be used to reduce memory usage caused by chunk management overhead,
so borg does not create a huge chunks index/repo index and eats all your RAM
if you back up lots of data in huge files (like VM disk images).
See docs/misc/create_chunker-params.txt for more information.
- borg info now reports chunk counts in the chunk index.
- borg create --compression 0..9 to select zlib compression level, fixes #66
(attic #295).
- borg init --encryption repokey (to store the encryption key into the repo),
fixes #85
- improve at-end error logging, always log exceptions and set exit_code=1
- LoggedIO: better error checks / exceptions / exception handling
- implement --remote-path to allow non-default-path borg locations, #125
- implement --umask M and use 077 as default umask for better security, #117
- borg check: give a named single archive to it, fixes #139
- cache sync: show progress indication
- cache sync: reimplement the chunk index merging in C
Bug fixes:
- fix segfault that happened for unreadable files (chunker: n needs to be a
signed size_t), #116
- fix the repair mode, #144
- repo delete: add destroy to allowed rpc methods, fixes issue #114
- more compatible repository locking code (based on mkdir), maybe fixes #92
(attic #317, attic #201).
- better Exception msg if no Borg is installed on the remote repo server, #56
- create a RepositoryCache implementation that can cope with >2GiB,
fixes attic #326.
- fix Traceback when running check --repair, attic #232
- clarify help text, fixes #73.
- add help string for --no-files-cache, fixes #140
Other changes:
- improved docs:
- added docs/misc directory for misc. writeups that won't be included
"as is" into the html docs.
- document environment variables and return codes (attic #324, attic #52)
- web site: add related projects, fix web site url, IRC #borgbackup
- Fedora/Fedora-based install instructions added to docs
- Cygwin-based install instructions added to docs
- updated AUTHORS
- add FAQ entries about redundancy / integrity
- clarify that borg extract uses the cwd as extraction target
- update internals doc about chunker params, memory usage and compression
- added docs about development
- add some words about resource usage in general
- document how to back up a raw disk
- add note about how to run borg from virtual env
- add solutions for (ll)fuse installation problems
- document what borg check does, fixes #138
- reorganize borgbackup.github.io sidebar, prev/next at top
- deduplicate and refactor the docs / README.rst
- use borg-tmp as prefix for temporary files / directories
- short prune options without "keep-" are deprecated, do not suggest them
- improved tox configuration
- remove usage of unittest.mock, always use mock from pypi
- use entrypoints instead of scripts, for better use of the wheel format and
modern installs
- add requirements.d/development.txt and modify tox.ini
- use travis-ci for testing based on Linux and (new) OS X
- use coverage.py, pytest-cov and codecov.io for test coverage support
I forgot to list some stuff already implemented in 0.23.0, here they are:
New features:
- efficient archive list from manifest, meaning a big speedup for slow
repo connections and "list <repo>", "delete <repo>", "prune" (attic #242,
attic #167)
- big speedup for chunks cache sync (esp. for slow repo connections), fixes #18
- hashindex: improve error messages
Other changes:
- explicitly specify binary mode to open binary files
- some easy micro optimizations
Version 0.23.0 (2015-06-11)
---------------------------
Incompatible changes (compared to attic, fork related):
- changed sw name and cli command to "borg", updated docs
- package name (and name in urls) uses "borgbackup" to have fewer collisions
- changed repo / cache internal magic strings from ATTIC* to BORG*,
changed cache location to .cache/borg/ - this means that it currently won't
accept attic repos (see issue #21 about improving that)
Bug fixes:
- avoid defect python-msgpack releases, fixes attic #171, fixes attic #185
- fix traceback when trying to do unsupported passphrase change, fixes attic #189
- datetime does not like the year 10.000, fixes attic #139
- fix "info" all archives stats, fixes attic #183
- fix parsing with missing microseconds, fixes attic #282
- fix misleading hint the fuse ImportError handler gave, fixes attic #237
- check unpacked data from RPC for tuple type and correct length, fixes attic #127
- fix Repository._active_txn state when lock upgrade fails
- give specific path to xattr.is_enabled(), disable symlink setattr call that
always fails
- fix test setup for 32bit platforms, partial fix for attic #196
- upgraded versioneer, PEP440 compliance, fixes attic #257
New features:
- less memory usage: add global option --no-cache-files
- check --last N (only check the last N archives)
- check: sort archives in reverse time order
- rename repo::oldname newname (rename repository)
- create -v output more informative
- create --progress (backup progress indicator)
- create --timestamp (utc string or reference file/dir)
- create: if "-" is given as path, read binary from stdin
- extract: if --stdout is given, write all extracted binary data to stdout
- extract --sparse (simple sparse file support)
- extra debug information for 'fread failed'
- delete <repo> (deletes whole repo + local cache)
- FUSE: reflect deduplication in allocated blocks
- only allow whitelisted RPC calls in server mode
- normalize source/exclude paths before matching
- use posix_fadvise not to spoil the OS cache, fixes attic #252
- toplevel error handler: show tracebacks for better error analysis
- sigusr1 / sigint handler to print current file infos - attic PR #286
- RPCError: include the exception args we get from remote
Other changes:
- source: misc. cleanups, pep8, style
- docs and faq improvements, fixes, updates
- cleanup crypto.pyx, make it easier to adapt to other AES modes
- do os.fsync like recommended in the python docs
- source: Let chunker optionally work with os-level file descriptor.
- source: Linux: remove duplicate os.fsencode calls
- source: refactor _open_rb code a bit, so it is more consistent / regular
- source: refactor indicator (status) and item processing
- source: use py.test for better testing, flake8 for code style checks
- source: fix tox >=2.0 compatibility (test runner)
- pypi package: add python version classifiers, add FreeBSD to platforms
Attic Changelog
---------------
Here you can see the full list of changes between each Attic release until Borg
forked from Attic:
Version 0.17
~~~~~~~~~~~~
(bugfix release, released on X)
- Fix hashindex ARM memory alignment issue (#309)
- Improve hashindex error messages (#298)
Version 0.16
~~~~~~~~~~~~
(bugfix release, released on May 16, 2015)
- Fix typo preventing the security confirmation prompt from working (#303)
- Improve handling of systems with improperly configured file system encoding (#289)
- Fix "All archives" output for attic info. (#183)
- More user friendly error message when repository key file is not found (#236)
- Fix parsing of iso 8601 timestamps with zero microseconds (#282)
Version 0.15
~~~~~~~~~~~~
(bugfix release, released on Apr 15, 2015)
- xattr: Be less strict about unknown/unsupported platforms (#239)
- Reduce repository listing memory usage (#163).
- Fix BrokenPipeError for remote repositories (#233)
- Fix incorrect behavior with two character directory names (#265, #268)
- Require approval before accessing relocated/moved repository (#271)
- Require approval before accessing previously unknown unencrypted repositories (#271)
- Fix issue with hash index files larger than 2GB.
- Fix Python 3.2 compatibility issue with noatime open() (#164)
- Include missing pyx files in dist files (#168)
Version 0.14
~~~~~~~~~~~~
(feature release, released on Dec 17, 2014)
- Added support for stripping leading path segments (#95)
"attic extract --strip-segments X"
- Add workaround for old Linux systems without acl_extended_file_no_follow (#96)
- Add MacPorts' path to the default openssl search path (#101)
- HashIndex improvements, eliminates unnecessary IO on low memory systems.
- Fix "Number of files" output for attic info. (#124)
- limit create file permissions so files aren't read while restoring
- Fix issue with empty xattr values (#106)
Version 0.13
~~~~~~~~~~~~
(feature release, released on Jun 29, 2014)
- Fix sporadic "Resource temporarily unavailable" when using remote repositories
- Reduce file cache memory usage (#90)
- Faster AES encryption (utilizing AES-NI when available)
- Experimental Linux, OS X and FreeBSD ACL support (#66)
- Added support for backup and restore of BSDFlags (OSX, FreeBSD) (#56)
- Fix bug where xattrs on symlinks were not correctly restored
- Added cachedir support. CACHEDIR.TAG compatible cache directories
can now be excluded using ``--exclude-caches`` (#74)
- Fix crash on extreme mtime timestamps (year 2400+) (#81)
- Fix Python 3.2 specific lockf issue (EDEADLK)
Version 0.12
~~~~~~~~~~~~
(feature release, released on April 7, 2014)
- Python 3.4 support (#62)
- Various documentation improvements and a new style
- ``attic mount`` now supports mounting an entire repository not only
individual archives (#59)
- Added option to restrict remote repository access to specific path(s):
``attic serve --restrict-to-path X`` (#51)
- Include "all archives" size information in "--stats" output. (#54)
- Added ``--stats`` option to ``attic delete`` and ``attic prune``
- Fixed bug where ``attic prune`` used UTC instead of the local time zone
when determining which archives to keep.
- Switch to SI units (powers of 1000 instead of 1024) when printing file sizes
Version 0.11
~~~~~~~~~~~~
(feature release, released on March 7, 2014)
- New "check" command for repository consistency checking (#24)
- Documentation improvements
- Fix exception during "attic create" with repeated files (#39)
- New "--exclude-from" option for attic create/extract/verify.
- Improved archive metadata deduplication.
- "attic verify" has been deprecated. Use "attic extract --dry-run" instead.
- "attic prune --hourly|daily|..." has been deprecated.
Use "attic prune --keep-hourly|daily|..." instead.
- Ignore xattr errors during "extract" if not supported by the filesystem. (#46)
Version 0.10
~~~~~~~~~~~~
(bugfix release, released on Jan 30, 2014)
- Fix deadlock when extracting 0 sized files from remote repositories
- "--exclude" wildcard patterns are now properly applied to the full path
not just the file name part (#5).
- Make source code endianness agnostic (#1)
Version 0.9
~~~~~~~~~~~
(feature release, released on Jan 23, 2014)
- Remote repository speed and reliability improvements.
- Fix sorting of segment names to ignore NFS left over files. (#17)
- Fix incorrect display of time (#13)
- Improved error handling / reporting. (#12)
- Use fcntl() instead of flock() when locking repository/cache. (#15)
- Let ssh figure out port/user if not specified so we don't override .ssh/config (#9)
- Improved libcrypto path detection (#23).
Version 0.8.1
~~~~~~~~~~~~~
(bugfix release, released on Oct 4, 2013)
- Fix segmentation fault issue.
Version 0.8
~~~~~~~~~~~
(feature release, released on Oct 3, 2013)
- Fix xattr issue when backing up sshfs filesystems (#4)
- Fix issue with excessive index file size (#6)
- Support access of read only repositories.
- New syntax to enable repository encryption:
attic init --encryption="none|passphrase|keyfile".
- Detect and abort if repository is older than the cache.
Version 0.7
~~~~~~~~~~~
(feature release, released on Aug 5, 2013)
- Ported to FreeBSD
- Improved documentation
- Experimental: Archives mountable as FUSE filesystems.
- The "user." prefix is no longer stripped from xattrs on Linux
Version 0.6.1
~~~~~~~~~~~~~
(bugfix release, released on July 19, 2013)
- Fixed an issue where mtime was not always correctly restored.
Version 0.6
~~~~~~~~~~~
First public release on July 9, 2013

File diff suppressed because it is too large

View file

@ -1,7 +1,7 @@
# Documentation build configuration file, created by
# documentation build configuration file, created by
# sphinx-quickstart on Sat Sep 10 18:18:25 2011.
#
# This file is execfile()d with the current directory set to its containing directory.
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
@ -12,164 +12,167 @@
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
import sys
import os
sys.path.insert(0, os.path.abspath("../src"))
import sys, os
sys.path.insert(0, os.path.abspath('../src'))
from borg import __version__ as sw_version
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = ".rst"
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = "index"
master_doc = 'index'
# General information about the project.
project = "Borg - Deduplicating Archiver"
copyright = "2010-2014 Jonas Borgström, 2015-2025 The Borg Collective (see AUTHORS file)"
project = 'Borg - Deduplicating Archiver'
copyright = '2010-2014 Jonas Borgström, 2015-2022 The Borg Collective (see AUTHORS file)'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
split_char = "+" if "+" in sw_version else "-"
split_char = '+' if '+' in sw_version else '-'
version = sw_version.split(split_char)[0]
# The full version, including alpha/beta/rc tags.
release = version
suppress_warnings = ["image.nonlocal_uri"]
suppress_warnings = ['image.nonlocal_uri']
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
#today = ''
# Else, today_fmt is used as the format for a strftime call.
today_fmt = "%Y-%m-%d"
today_fmt = '%Y-%m-%d'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["_build"]
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
# default_role = None
#default_role = None
# The Borg docs contain no or very little Python docs.
# Thus, the primary domain is RST.
primary_domain = "rst"
# Thus, the primary domain is rst.
primary_domain = 'rst'
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of built-in themes.
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
import guzzle_sphinx_theme
html_theme_path = guzzle_sphinx_theme.html_theme_path()
html_theme = "guzzle_sphinx_theme"
html_theme = 'guzzle_sphinx_theme'
def set_rst_settings(app):
app.env.settings.update({"field_name_limit": 0, "option_limit": 0})
app.env.settings.update({
'field_name_limit': 0,
'option_limit': 0,
})
def setup(app):
app.setup_extension("sphinxcontrib.jquery")
app.add_css_file("css/borg.css")
app.connect("builder-inited", set_rst_settings)
app.add_css_file('css/borg.css')
app.connect('builder-inited', set_rst_settings)
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {"project_nav_name": "Borg %s" % version}
html_theme_options = {
'project_nav_name': 'Borg %s' % version,
}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = ['_themes']
#html_theme_path = ['_themes']
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = "_static/logo.svg"
html_logo = '_static/logo.svg'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = "_static/favicon.ico"
html_favicon = '_static/favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["borg_theme"]
html_static_path = ['borg_theme']
html_extra_path = ["../src/borg/paperkey.html"]
html_extra_path = ['../src/borg/paperkey.html']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
html_last_updated_fmt = "%Y-%m-%d"
html_last_updated_fmt = '%Y-%m-%d'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
html_use_smartypants = True
smartquotes_action = "qe" # no D in there means "do not transform -- and ---"
smartquotes_action = 'qe' # no D in there means "do not transform -- and ---"
# Custom sidebar templates, maps document names to template names.
html_sidebars = {"**": ["logo-text.html", "searchbox.html", "globaltoc.html"]}
html_sidebars = {
'**': ['logo-text.html', 'searchbox.html', 'globaltoc.html'],
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
#html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
#html_domain_indices = True
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
# html_split_index = False
#html_split_index = False
# If true, links to the reST sources are added to the pages.
html_show_sourcelink = False
@ -183,45 +186,57 @@ html_show_copyright = False
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = "borgdoc"
htmlhelp_basename = 'borgdoc'
# -- Options for LaTeX output --------------------------------------------------
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [("book", "Borg.tex", "Borg Documentation", "The Borg Collective", "manual")]
latex_documents = [
('book', 'Borg.tex', 'Borg Documentation',
'The Borg Collective', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
latex_logo = "_static/logo.pdf"
latex_logo = '_static/logo.pdf'
latex_elements = {"papersize": "a4paper", "pointsize": "10pt", "figure_align": "H"}
latex_elements = {
'papersize': 'a4paper',
'pointsize': '10pt',
'figure_align': 'H',
}
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
#latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
latex_show_urls = "footnote"
latex_show_urls = 'footnote'
# Additional stuff for the LaTeX preamble.
# latex_preamble = ''
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
latex_appendices = ["support", "resources", "changes", "authors"]
latex_appendices = [
'support',
'resources',
'changes',
'authors',
]
# If false, no module index is generated.
# latex_domain_indices = True
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
@ -229,23 +244,21 @@ latex_appendices = ["support", "resources", "changes", "authors"]
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(
"usage",
"borg",
"BorgBackup is a deduplicating backup program with optional compression and authenticated encryption.",
["The Borg Collective (see AUTHORS file)"],
1,
)
('usage', 'borg',
'BorgBackup is a deduplicating backup program with optional compression and authenticated encryption.',
['The Borg Collective (see AUTHORS file)'],
1),
]
extensions = [
"sphinx.ext.extlinks",
"sphinx.ext.autodoc",
"sphinx.ext.todo",
"sphinx.ext.coverage",
"sphinx.ext.viewcode",
"sphinxcontrib.jquery", # jquery is not included anymore by default
"guzzle_sphinx_theme", # register the theme as an extension to generate a sitemap.xml
'sphinx.ext.extlinks',
'sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.viewcode',
]
extlinks = {"issue": ("https://github.com/borgbackup/borg/issues/%s", "#%s")}
extlinks = {
'issue': ('https://github.com/borgbackup/borg/issues/%s', '#'),
'targz_url': ('https://pypi.python.org/packages/source/b/borgbackup/%%s-%s.tar.gz' % version, None),
}

View file

@ -14,4 +14,3 @@ This chapter details deployment strategies for the following scenarios.
deployment/automated-local
deployment/image-backup
deployment/pull-backup
deployment/non-root-user

View file

@ -14,13 +14,13 @@ systemd and udev.
Overview
--------
A udev rule is created to trigger on the addition of block devices. The rule contains a tag
that triggers systemd to start a one-shot service. The one-shot service executes a script in
An udev rule is created to trigger on the addition of block devices. The rule contains a tag
that triggers systemd to start a oneshot service. The oneshot service executes a script in
the standard systemd service environment, which automatically captures stdout/stderr and
logs it to the journal.
The script mounts the added block device if it is a registered backup drive and creates
backups on it. When done, it optionally unmounts the filesystem and spins the drive down,
The script mounts the added block device, if it is a registered backup drive, and creates
backups on it. When done, it optionally unmounts the file system and spins the drive down,
so that it may be physically disconnected.
Configuring the system
@ -29,13 +29,26 @@ Configuring the system
First, create the ``/etc/backups`` directory (as root).
All configuration goes into this directory.
Find out the ID of the partition table of your backup disk (here assumed to be /dev/sdz)::
Then, create ``/etc/backups/40-backup.rules`` with the following content (all on one line)::
lsblk --fs -o +PTUUID /dev/sdz
ACTION=="add", SUBSYSTEM=="bdi", DEVPATH=="/devices/virtual/bdi/*",
TAG+="systemd", ENV{SYSTEMD_WANTS}="automatic-backup.service"
Then, create ``/etc/backups/80-backup.rules`` with the following content (all on one line)::
.. topic:: Finding a more precise udev rule
ACTION=="add", SUBSYSTEM=="block", ENV{ID_PART_TABLE_UUID}=="<the PTUUID you just noted>", TAG+="systemd", ENV{SYSTEMD_WANTS}+="automatic-backup.service"
If you always connect the drive(s) to the same physical hardware path, e.g. the same
eSATA port, then you can make a more precise udev rule.
Execute ``udevadm monitor`` and connect a drive to the port you intend to use.
You should see a flurry of events, find those regarding the `block` subsystem.
Pick the event whose device path ends in something similar to a device file name,
typically `sdX/sdXY`. Use the event's device path and replace `sdX/sdXY` after the
`/block/` part in the path with a star (\*). For example:
`DEVPATH=="/devices/pci0000:00/0000:00:11.0/ata3/host2/target2:0:0/2:0:0:0/block/*"`.
Reboot a few times to ensure that the hardware path does not change: on some motherboards
components of it can be random. In these cases you cannot use a more accurate rule,
or need to insert additional stars for matching the path.
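Purely as a hedged illustration (the DEVPATH below is just the example path from above and must be replaced with the path observed on your own system; whether you keep the PTUUID match as well is up to you), such a more precise rule could look like this, again all on one line::
ACTION=="add", SUBSYSTEM=="block", DEVPATH=="/devices/pci0000:00/0000:00:11.0/ata3/host2/target2:0:0/2:0:0:0/block/*", ENV{ID_PART_TABLE_UUID}=="<the PTUUID you just noted>", TAG+="systemd", ENV{SYSTEMD_WANTS}+="automatic-backup.service"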
The "systemd" tag in conjunction with the SYSTEMD_WANTS environment variable has systemd
launch the "automatic-backup" service, which we will create next, as the
@ -47,8 +60,8 @@ launch the "automatic-backup" service, which we will create next, as the
Type=oneshot
ExecStart=/etc/backups/run.sh
Now, create the main backup script, ``/etc/backups/run.sh``. Below is a template;
modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
Now, create the main backup script, ``/etc/backups/run.sh``. Below is a template,
modify it to suit your needs (e.g. more backup sets, dumping databases etc.).
.. code-block:: bash
@ -94,10 +107,10 @@ modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
echo "Disk $uuid is a backup disk"
partition_path=/dev/disk/by-uuid/$uuid
# Mount filesystem if not already done. This assumes that if something is already
# mounted at $MOUNTPOINT, it is the backup drive. It will not find the drive if
# Mount file system if not already done. This assumes that if something is already
# mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
# it was mounted somewhere else.
findmnt $MOUNTPOINT >/dev/null || mount $partition_path $MOUNTPOINT
(mount | grep $MOUNTPOINT) || mount $partition_path $MOUNTPOINT
drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)
echo "Drive path: $drive"
@ -106,13 +119,13 @@ modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
#
# Options for borg create
BORG_OPTS="--stats --one-file-system --compression lz4"
BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"
# Set BORG_PASSPHRASE or BORG_PASSCOMMAND somewhere around here, using export,
# if encryption is used.
# Because no one can answer these questions non-interactively, it is better to
# fail quickly instead of hanging.
# No one can answer if Borg asks these questions, it is better to just fail quickly
# instead of hanging.
export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no
@ -123,16 +136,16 @@ modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
# This is just an example, change it however you see fit
borg create $BORG_OPTS \
--exclude root/.cache \
--exclude var/lib/docker/devicemapper \
--exclude /root/.cache \
--exclude /var/lib/docker/devicemapper \
$TARGET::$DATE-$$-system \
/ /boot
# /home is often a separate partition/filesystem.
# Even if it is not (add --exclude /home above), it probably makes sense
# /home is often a separate partition / file system.
# Even if it isn't (add --exclude /home above), it probably makes sense
# to have /home in a separate archive.
borg create $BORG_OPTS \
--exclude 'sh:home/*/.cache' \
--exclude 'sh:/home/*/.cache' \
$TARGET::$DATE-$$-home \
/home/
@ -151,20 +164,21 @@ modify it to suit your needs (e.g., more backup sets, dumping databases, etc.).
fi
Create the ``/etc/backups/autoeject`` file to have the script automatically eject the drive
after creating the backup. Rename the file to something else (e.g., ``/etc/backups/autoeject-no``)
when you want to do something with the drive after creating backups (e.g., running checks).
after creating the backup. Rename the file to something else (e.g. ``/etc/backup/autoeject-no``)
when you want to do something with the drive after creating backups (e.g running check).
Create the ``/etc/backups/backup-suspend`` file if the machine should suspend after completing
the backup. Don't forget to disconnect the device physically before resuming,
the backup. Don't forget to physically disconnect the device before resuming,
otherwise you'll enter a cycle. You can also add an option to power down instead.
Create an empty ``/etc/backups/backup.disks`` file, in which you will register your backup drives.
Create an empty ``/etc/backups/backup.disks`` file, you'll register your backup drives
there.
Finally, enable the udev rules and services:
The last part is to actually enable the udev rules and services:
.. code-block:: bash
ln -s /etc/backups/80-backup.rules /etc/udev/rules.d/80-backup.rules
ln -s /etc/backups/40-backup.rules /etc/udev/rules.d/40-backup.rules
ln -s /etc/backups/automatic-backup.service /etc/systemd/system/automatic-backup.service
systemctl daemon-reload
udevadm control --reload
@ -173,13 +187,13 @@ Adding backup hard drives
-------------------------
Connect your backup hard drive. Format it, if not done already.
Find the UUID of the filesystem on which backups should be stored::
Find the UUID of the file system that backups should be stored on::
lsblk -o+uuid,label
Record the UUID in the ``/etc/backups/backup.disks`` file.
Note the UUID into the ``/etc/backup/backup.disks`` file.
Mount the drive at /mnt/backup.
Mount the drive to /mnt/backup.
Initialize a Borg repository at the location indicated by ``TARGET``::
@ -197,14 +211,14 @@ See backup logs using journalctl::
Security considerations
-----------------------
The script as shown above will mount any filesystem with a UUID listed in
``/etc/backups/backup.disks``. The UUID check is a safety/annoyance-reduction
The script as shown above will mount any file system with an UUID listed in
``/etc/backup/backup.disks``. The UUID check is a safety / annoyance-reduction
mechanism to keep the script from blowing up whenever a random USB thumb drive is connected.
It is not meant as a security mechanism. Mounting filesystems and reading repository
data exposes additional attack surfaces (kernel filesystem drivers,
possibly userspace services, and Borg itself). On the other hand, someone
It is not meant as a security mechanism. Mounting file systems and reading repository
data exposes additional attack surfaces (kernel file system drivers,
possibly user space services and Borg itself). On the other hand, someone
standing right next to your computer can attempt a lot of attacks, most of which
are easier to do than, e.g., exploiting filesystems (installing a physical keylogger,
are easier to do than e.g. exploiting file systems (installing a physical key logger,
DMA attacks, stealing the machine, ...).
Borg ensures that backups are not created on random drives that "just happen"

View file

@ -1,48 +1,47 @@
.. include:: ../global.rst.inc
.. highlight:: none
.. _central-backup-server:
Central repository server with Ansible or Salt
==============================================
This section gives an example of how to set up a Borg repository server for multiple
This section will give an example how to setup a borg repository server for multiple
clients.
Machines
--------
This section uses multiple machines, referred to by their
respective fully qualified domain names (FQDNs).
There are multiple machines used in this section and will further be named by their
respective fully qualified domain name (fqdn).
* The backup server: `backup01.srv.local`
* The clients:
- John Doe's desktop: `johndoe.clnt.local`
- Web server 01: `web01.srv.local`
- Webserver 01: `web01.srv.local`
- Application server 01: `app01.srv.local`
User and group
--------------
The repository server should have a single UNIX user for all the clients.
The repository server needs to have only one UNIX user for all the clients.
Recommended user and group with additional settings:
* User: `backup`
* Group: `backup`
* Shell: `/bin/bash` (or another shell capable of running the `borg serve` command)
* Shell: `/bin/bash` (or other capable to run the `borg serve` command)
* Home: `/home/backup`
Most clients should initiate a backup as the root user to capture all
users, groups, and permissions (e.g., when backing up `/home`).
Most clients shall initiate a backup from the root user to catch all
users, groups and permissions (e.g. when backing up `/home`).
Folders
-------
The following directory layout is suggested on the repository server:
The following folder tree layout is suggested on the repository server:
* User home directory, /home/backup
* Repositories path (storage pool): /home/backup/repos
* Clients restricted paths (`/home/backup/repos/<client fqdn>`):
* Clients restricted paths (`/home/backup/repos/<client fqdn>`):
- johndoe.clnt.local: `/home/backup/repos/johndoe.clnt.local`
- web01.srv.local: `/home/backup/repos/web01.srv.local`
@ -60,10 +59,10 @@ but no other directories. You can allow a client to access several separate dire
which could make sense if multiple machines belong to one person which should then have access to all the
backups of their machines.
Only one SSH key per client is allowed. Keys are added for ``johndoe.clnt.local``, ``web01.srv.local`` and
``app01.srv.local``. They will access the backup under a single UNIX user account as
There is only one ssh key per client allowed. Keys are added for ``johndoe.clnt.local``, ``web01.srv.local`` and
``app01.srv.local``. But they will access the backup under only one UNIX user account as:
``backup@backup01.srv.local``. Every key in ``$HOME/.ssh/authorized_keys`` has a
forced command and restrictions applied, as shown below:
forced command and restrictions applied as shown below:
::
@ -73,19 +72,19 @@ forced command and restrictions applied, as shown below:
.. note:: The text shown above needs to be written on a single line!
The options added to the key perform the following:
The options which are added to the key will perform the following:
1. Change working directory
2. Run ``borg serve`` restricted to the client base path
3. Restrict ssh and do not allow stuff which imposes a security risk
Because of the ``cd`` command, the server automatically changes the current
working directory. The client then does not need to know the absolute
Due to the ``cd`` command we use, the server automatically changes the current
working directory. Then client doesn't need to have knowledge of the absolute
or relative remote repository path and can directly access the repositories at
``ssh://<user>@<host>/./<repo>``.
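Purely as an illustrative sketch (the key material is a placeholder and the paths follow the layout suggested above), one such ``authorized_keys`` entry for ``johndoe.clnt.local`` could look like this, written on a single line::
command="cd /home/backup/repos/johndoe.clnt.local;borg serve --restrict-to-path /home/backup/repos/johndoe.clnt.local",restrict ssh-ed25519 AAAAC3<...> johndoe.clnt.local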
.. note:: The setup above ignores all client-given command line parameters
that are normally appended to the `borg serve` command.
.. note:: The setup above ignores all client given commandline parameters
which are normally appended to the `borg serve` command.
Client
------
@ -96,15 +95,15 @@ The client needs to initialize the `pictures` repository like this:
borg init ssh://backup@backup01.srv.local/./pictures
Or with the full path (this should not be used in practice; it is only for demonstration purposes).
The server automatically changes the current working directory to the `<client fqdn>` directory.
Or with the full path (should actually never be used, as only for demonstrational purposes).
The server should automatically change the current working directory to the `<client fqdn>` folder.
::
borg init ssh://backup@backup01.srv.local/home/backup/repos/johndoe.clnt.local/pictures
When `johndoe.clnt.local` tries to access a path outside its restriction, the following error is raised.
John Doe tries to back up into the web01 path:
When `johndoe.clnt.local` tries to access a not restricted path the following error is raised.
John Doe tries to backup into the Web 01 path:
::
@ -203,7 +202,7 @@ Salt running on a Debian system.
Enhancements
------------
As this section only describes a simple and effective setup, it could be further
As this section only describes a simple and effective setup it could be further
enhanced by supporting (a limited set of) client-supplied commands. A wrapper
for starting `borg serve` could be written. Or borg itself could be enhanced to
autodetect it runs under SSH by checking the `SSH_ORIGINAL_COMMAND` environment

View file

@ -5,25 +5,26 @@
Hosting repositories
====================
This section shows how to provide repository storage securely for users.
This sections shows how to securely provide repository storage for users.
Optionally, each user can have a storage quota.
Repositories are accessed through SSH. Each user of the service should
have their own login, which is only able to access that user's files.
Technically, it is possible to have multiple users share one login;
have her own login which is only able to access the user's files.
Technically it would be possible to have multiple users share one login,
however, separating them is better. Separate logins increase isolation
and provide an additional layer of security and safety for both the
and are thus an additional layer of security and safety for both the
provider and the users.
For example, if a user manages to breach ``borg serve``, they can
only damage their own data (assuming that the system does not have further
For example, if a user manages to breach ``borg serve`` then she can
only damage her own data (assuming that the system does not have further
vulnerabilities).
Use the standard directory structure of the operating system. Each user
is assigned a home directory, and that user's repositories reside in their
is assigned a home directory and repositories of the user reside in her
home directory.
The following ``~user/.ssh/authorized_keys`` file is the most important
piece for a correct deployment. It allows the user to log in via
piece for a correct deployment. It allows the user to login via
their public key (which must be provided by the user), and restricts
SSH access to safe operations only.
@ -36,18 +37,18 @@ SSH access to safe operations only.
.. warning::
If this file should be automatically updated (e.g. by a web console),
pay **utmost attention** to sanitizing user input. Strip all whitespace
around the user-supplied key, ensure that it **only** contains ASCII
with no control characters and that it consists of three parts separated
by a single space. Ensure that no newlines are contained within the key.
If this file should be automatically updated (e.g. by a web console),
pay **utmost attention** to sanitizing user input. Strip all whitespace
around the user-supplied key, ensure that it **only** contains ASCII
with no control characters and that it consists of three parts separated
by a single space. Ensure that no newlines are contained within the key.
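As a non-authoritative sketch of such a check (the accepted key-type and comment character sets are assumptions; adjust them to the key types you actually allow), a small shell helper could validate a submitted key before it is appended to ``authorized_keys``:
.. code-block:: bash

    #!/bin/bash
    # Hypothetical validation of a user-supplied SSH public key (sketch only).
    export LC_ALL=C   # make character ranges behave as plain ASCII
    raw_key="$1"

    # Require exactly three fields separated by single spaces:
    # key type, base64 blob, comment. [!-~] matches printable ASCII without
    # spaces, so control characters, newlines and extra whitespace are rejected.
    re='^[A-Za-z0-9@.-]+ [A-Za-z0-9+/=]+ [!-~]+$'
    if [[ "$raw_key" =~ $re ]]; then
        printf '%s\n' "$raw_key"      # safe to append to authorized_keys
    else
        echo "rejected: unexpected key format" >&2
        exit 1
    fi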
The ``restrict`` keyword enables all restrictions, i.e. disables port, agent
and X11 forwarding, as well as disabling PTY allocation and execution of ~/.ssh/rc.
If any future restriction capabilities are added to authorized_keys
files they will be included in this set.
The ``command`` keyword forces execution of the specified command
The ``command`` keyword forces execution of the specified command line
upon login. This must be ``borg serve``. The ``--restrict-to-repository``
option permits access to exactly **one** repository. It can be given
multiple times to permit access to more than one repository.
@ -55,6 +56,30 @@ multiple times to permit access to more than one repository.
The repository may not exist yet; it can be initialized by the user,
which allows for encryption.
**Storage quotas** can be enabled by adding the ``--storage-quota`` option
to the ``borg serve`` command line::
restrict,command="borg serve --storage-quota 20G ..." ...
The storage quotas of repositories are completely independent. If a
client is able to access multiple repositories, each repository
can be filled to the specified quota.
If storage quotas are used, ensure that all deployed Borg releases
support storage quotas.
Refer to :ref:`internals_storage_quota` for more details on storage quotas.
**Specificities: Append-only repositories**
Running ``borg init`` via a ``borg serve --append-only`` server will **not**
create a repository that is configured to be append-only by its repository
config.
But, ``--append-only`` arguments in ``authorized_keys`` will override the
repository config, therefore append-only mode can be enabled on a key by key
basis.
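For illustration only (a sketch combining options already described above; the key and repository path are placeholders), an entry that enforces both a storage quota and append-only mode could look like::
restrict,command="borg serve --append-only --storage-quota 20G --restrict-to-repository /home/user/repo" ssh-ed25519 AAAAC3<...> user@client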
Refer to the `sshd(8) <https://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man8/sshd.8>`_
man page for more details on SSH options.
See also :ref:`borg_serve`

View file

@ -6,48 +6,13 @@ Backing up entire disk images
Backing up disk images can still be efficient with Borg because its `deduplication`_
technique makes sure only the modified parts of the file are stored. Borg also has
optional simple sparse file support for extraction.
It is of utmost importance to pin down the disk you want to back up.
Use the disk's SERIAL for that.
Use:
.. code-block:: bash
# You can find the short disk serial by:
# udevadm info --query=property --name=nvme1n1 | grep ID_SERIAL_SHORT | cut -d '=' -f 2
export BORG_REPO=/path/to/repo
DISK_SERIAL="7VS0224F"
DISK_ID=$(readlink -f /dev/disk/by-id/*"${DISK_SERIAL}") # Returns /dev/nvme1n1
mapfile -t PARTITIONS < <(lsblk -o NAME,TYPE -p -n -l "$DISK_ID" | awk '$2 == "part" {print $1}')
echo "Partitions of $DISK_ID:"
echo "${PARTITIONS[@]}"
echo "Disk Identifier: $DISK_ID"
# Use the following line to perform a Borg backup for the full disk:
# borg create --read-special disk-backup "$DISK_ID"
# Use the following to perform a Borg backup for all partitions of the disk
# borg create --read-special partitions-backup "${PARTITIONS[@]}"
# Example output:
# Partitions of /dev/nvme1n1:
# /dev/nvme1n1p1
# /dev/nvme1n1p2
# /dev/nvme1n1p3
# Disk Identifier: /dev/nvme1n1
# borg create --read-special disk-backup /dev/nvme1n1
# borg create --read-special partitions-backup /dev/nvme1n1p1 /dev/nvme1n1p2 /dev/nvme1n1p3
optional simple sparse file support for extract.
Decreasing the size of image backups
------------------------------------
Disk images are as large as the full disk when uncompressed and might not get much
smaller post-deduplication after heavy use because virtually all filesystems do not
smaller post-deduplication after heavy use because virtually all file systems don't
actually delete file data on disk but instead delete the filesystem entries referencing
the data. Therefore, if a disk nears capacity and files are deleted again, the change
will barely decrease the space it takes up when compressed and deduplicated. Depending
@ -63,28 +28,28 @@ deduplicating. For backup, save the disk header and the contents of each partiti
HEADER_SIZE=$(sfdisk -lo Start $DISK | grep -A1 -P 'Start$' | tail -n1 | xargs echo)
PARTITIONS=$(sfdisk -lo Device,Type $DISK | sed -e '1,/Device\s*Type/d')
dd if=$DISK count=$HEADER_SIZE | borg create --repo repo hostname-partinfo -
dd if=$DISK count=$HEADER_SIZE | borg create repo::hostname-partinfo -
echo "$PARTITIONS" | grep NTFS | cut -d' ' -f1 | while read x; do
PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
ntfsclone -so - $x | borg create --repo repo hostname-part$PARTNUM -
ntfsclone -so - $x | borg create repo::hostname-part$PARTNUM -
done
# to back up non-NTFS partitions as well:
# to backup non-NTFS partitions as well:
echo "$PARTITIONS" | grep -v NTFS | cut -d' ' -f1 | while read x; do
PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
borg create --read-special --repo repo hostname-part$PARTNUM $x
borg create --read-special repo::hostname-part$PARTNUM $x
done
Restoration is a similar process::
borg extract --stdout --repo repo hostname-partinfo | dd of=$DISK && partprobe
borg extract --stdout repo::hostname-partinfo | dd of=$DISK && partprobe
PARTITIONS=$(sfdisk -lo Device,Type $DISK | sed -e '1,/Device\s*Type/d')
borg list --format {archive}{NL} repo | grep 'part[0-9]*$' | while read x; do
PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
PARTITION=$(echo "$PARTITIONS" | grep -E "$DISKp?$PARTNUM" | head -n1)
if echo "$PARTITION" | cut -d' ' -f2- | grep -q NTFS; then
borg extract --stdout --repo repo $x | ntfsclone -rO $(echo "$PARTITION" | cut -d' ' -f1) -
borg extract --stdout repo::$x | ntfsclone -rO $(echo "$PARTITION" | cut -d' ' -f1) -
else
borg extract --stdout --repo repo $x | dd of=$(echo "$PARTITION" | cut -d' ' -f1)
borg extract --stdout repo::$x | dd of=$(echo "$PARTITION" | cut -d' ' -f1)
fi
done
@ -105,18 +70,18 @@ except it works in place, zeroing the original partition. This makes the backup
a bit simpler::
sfdisk -lo Device,Type $DISK | sed -e '1,/Device\s*Type/d' | grep Linux | cut -d' ' -f1 | xargs -n1 zerofree
borg create --read-special --repo repo hostname-disk $DISK
borg create --read-special repo::hostname-disk $DISK
Because the partitions were zeroed in place, restoration is only one command::
borg extract --stdout --repo repo hostname-disk | dd of=$DISK
borg extract --stdout repo::hostname-disk | dd of=$DISK
.. note:: The "traditional" way to zero out space on a partition, especially one already
mounted, is simply to ``dd`` from ``/dev/zero`` to a temporary file and delete
mounted, is to simply ``dd`` from ``/dev/zero`` to a temporary file and delete
it. This is ill-advised for the reasons mentioned in the ``zerofree`` man page:
- it is slow.
- it makes the disk image (temporarily) grow to its maximal extent.
- it is slow
- it makes the disk image (temporarily) grow to its maximal extent
- it (temporarily) uses all free space on the disk, so other concurrent write actions may fail.
Virtual machines
@ -129,7 +94,7 @@ regular file to Borg with the same issues as regular files when it comes to conc
reading and writing from the same file.
For backing up live VMs use filesystem snapshots on the VM host, which establishes
crash-consistency for the VM images. This means that with most filesystems (that
crash-consistency for the VM images. This means that with most file systems (that
are journaling) the FS will always be fine in the backup (but may need a journal
replay to become accessible).
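As a rough, non-authoritative sketch (assuming an LVM-backed VM disk; the volume group, volume, snapshot and repository names are made up, and the 1.x ``repo::archive`` syntax is used), such a snapshot-based backup could look like:
.. code-block:: bash

    # Create a crash-consistent snapshot of the VM's block device on the host.
    lvcreate --snapshot --size 10G --name vm1-snap /dev/vg0/vm1-disk

    # Back up the snapshot as a block device image.
    borg create --read-special /path/to/repo::vm1-{now} /dev/vg0/vm1-snap

    # Remove the snapshot again once the backup is done.
    lvremove -f /dev/vg0/vm1-snap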
@ -145,10 +110,10 @@ to reach application-consistency; it's a broad and complex issue that cannot be
in entirety here.
Hypervisor snapshots capturing most of the VM's state can also be used for backups and
can be a better alternative to pure filesystem-based snapshots of the VM's disk, since
can be a better alternative to pure file system based snapshots of the VM's disk, since
no state is lost. Depending on the application this can be the easiest and most reliable
way to create application-consistent backups.
Borg does not intend to address these issues due to their huge complexity and
Borg doesn't intend to address these issues due to their huge complexity and
platform/software dependency. Combining Borg with the mechanisms provided by the platform
(snapshots, hypervisor features) will be the best approach to start tackling them.
(snapshots, hypervisor features) will be the best approach to start tackling them.

View file

@ -1,66 +0,0 @@
.. include:: ../global.rst.inc
.. highlight:: none
.. _non_root_user:
================================
Backing up using a non-root user
================================
This section describes how to run Borg as a non-root user and still be able to
back up every file on the system.
Normally, Borg is run as the root user to bypass all filesystem permissions and
be able to read all files. However, in theory this also allows Borg to modify or
delete files on your system (for example, in case of a bug).
To eliminate this possibility, we can run Borg as a non-root user and give it read-only
permissions to all files on the system.
Using Linux capabilities inside a systemd service
=================================================
One way to do so is to use Linux `capabilities
<https://man7.org/linux/man-pages/man7/capabilities.7.html>`_ within a systemd
service.
Linux capabilities allow us to grant parts of the root users privileges to
a non-root user. This works on a per-thread level and does not grant permissions
to the non-root user as a whole.
For this, we need to run the backup script from a systemd service and use the `AmbientCapabilities
<https://www.freedesktop.org/software/systemd/man/latest/systemd.exec.html#AmbientCapabilities=>`_
option added in systemd 229.
A very basic unit file would look like this:
::
[Unit]
Description=Borg Backup
[Service]
Type=oneshot
User=borg
ExecStart=/usr/local/sbin/backup.sh
AmbientCapabilities=CAP_DAC_READ_SEARCH
The ``CAP_DAC_READ_SEARCH`` capability gives Borg read-only access to all files and directories on the system.
This service can then be started manually using ``systemctl start``, a systemd timer or other methods.
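For example (a sketch; the ``borg-backup`` unit names are assumptions and must match the service file you actually install), a matching timer unit could trigger the service daily::
[Unit]
Description=Run Borg backup service daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
It would be enabled with ``systemctl enable --now borg-backup.timer``, assuming the unit above is saved as ``borg-backup.timer`` next to a ``borg-backup.service`` containing the service definition shown earlier.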
Restore considerations
======================
Use the root user when restoring files. If you use the non-root user, ``borg extract`` will
change ownership of all restored files to the non-root user. Using ``borg mount`` will not allow the
non-root user to access files it would not be able to access on the system itself.
Other than that, you can use the same restore process you would use when running the backup as root.
.. warning::
When using a local repository and running Borg commands as root, make sure to use only commands that do not
modify the repository itself, such as extract or mount. Modifying the repository as root will break it for the
non-root user, since some files inside the repository will then be owned by root.

View file

@ -6,51 +6,51 @@
Backing up in pull mode
=======================
Typically the Borg client connects to a backup server using SSH as a transport
Typically the borg client connects to a backup server using SSH as a transport
when initiating a backup. This is referred to as push mode.
However, if you require the backup server to initiate the connection, or prefer
If you however require the backup server to initiate the connection or prefer
it to initiate the backup run, one of the following workarounds is required to
allow such a pull-mode setup.
allow such a pull mode setup.
A common use case for pull mode is to back up a remote server to a local personal
A common use case for pull mode is to backup a remote server to a local personal
computer.
SSHFS
=====
Assume you have a pull backup system set up with Borg, where a backup server
pulls data from the target via SSHFS. In this mode, the backup client's filesystem
is mounted remotely on the backup server. Pull mode is even possible if
Assuming you have a pull backup system set up with borg, where a backup server
pulls the data from the target via SSHFS. In this mode, the backup client's file
system is mounted remotely on the backup server. Pull mode is even possible if
the SSH connection must be established by the client via a remote tunnel. Other
network file systems like NFS or SMB could be used as well, but SSHFS is very
simple to set up and probably the most secure one.
There are some restrictions caused by SSHFS. For example, unless you define UID
and GID mappings when mounting via ``sshfs``, owners and groups of the mounted
filesystem will probably change, and you may not have access to those files if
Borg is not run with root privileges.
file system will probably change, and you may not have access to those files if
BorgBackup is not run with root privileges.
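A hedged example of such a mapping (``uid``/``gid`` are standard FUSE/sshfs mount options, but verify the exact behaviour against your sshfs man page before relying on it):
.. code-block:: bash

    # Present every file in the mounted tree as owned by the local user that
    # runs borg on the backup server, so borg can run without root privileges.
    sshfs -o uid=$(id -u),gid=$(id -g) root@host:/ /mnt/sshfs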
SSHFS is a FUSE filesystem and uses the SFTP protocol, so there may also be
unsupported features that the actual implementations of SSHFS, libfuse, and
SFTP on the backup server do not support, like filename encodings, ACLs, xattrs,
or flags. Therefore, there is no guarantee that you can restore a system
SSHFS is a FUSE file system and uses the SFTP protocol, so there may be also
other unsupported features that the actual implementations of sshfs, libfuse and
sftp on the backup server do not support, like file name encodings, ACLs, xattrs
or flags. So there is no guarantee that you are able to restore a system
completely in every aspect from such a backup.
.. warning::
To mount the client's root filesystem you will need root access to the
client. This contradicts the usual threat model of Borg, where
clients do not need to trust the backup server (data is encrypted). In pull
To mount the client's root file system you will need root access to the
client. This contradicts to the usual threat model of BorgBackup, where
clients don't need to trust the backup server (data is encrypted). In pull
mode the server (when logged in as root) could cause unlimited damage to the
client. Therefore, pull mode should be used only with servers you fully
client. Therefore, pull mode should be used only from servers you do fully
trust!
.. warning::
Additionally, while chrooted into the client's root filesystem,
code from the client will be executed. Therefore, you should do this only when
you fully trust the client.
Additionally, while being chrooted into the client's root file system,
code from the client will be executed. Thus, you should only do that when
fully trusting the client.
.. warning::
@ -98,9 +98,9 @@ create the backup, retaining the original paths, excluding the repository:
::
borg create --exclude borgrepo --files-cache ctime,size --repo /borgrepo archive /
borg create --exclude /borgrepo --files-cache ctime,size /borgrepo::archive /
For the sake of simplicity only ``borgrepo`` is excluded here. You may want to
For the sake of simplicity only ``/borgrepo`` is excluded here. You may want to
set up an exclude file with additional files and folders to be excluded. Also
note that we have to modify Borg's file change detection behaviour: SSHFS
cannot guarantee stable inode numbers, so we have to supply the
@ -159,9 +159,9 @@ Now we can run
::
borg extract --repo /borgrepo archive PATH
borg extract /borgrepo::archive PATH
to restore whatever we like partially. Finally, do the clean-up:
to partially restore whatever we like. Finally, do the clean-up:
::
@ -187,7 +187,7 @@ and extract a backup, utilizing the ``--numeric-ids`` option:
sshfs root@host:/ /mnt/sshfs
cd /mnt/sshfs
borg extract --numeric-ids --repo /path/to/repo archive
borg extract --numeric-ids /path/to/repo::archive
cd ~
umount /mnt/sshfs
@ -199,7 +199,7 @@ directly extract it without the need of mounting with SSHFS:
::
borg export-tar --repo /path/to/repo archive - | ssh root@host 'tar -C / -x'
borg export-tar /path/to/repo::archive - | ssh root@host 'tar -C / -x'
Note that in this scenario the tar format is the limiting factor: it cannot
restore all the advanced features that BorgBackup supports. See
@ -209,8 +209,8 @@ socat
=====
In this setup a SSH connection from the backup server to the client is
established that uses SSH reverse port forwarding to tunnel data
transparently between UNIX domain sockets on the client and server and the socat
established that uses SSH reverse port forwarding to transparently
tunnel data between UNIX domain sockets on the client and server and the socat
tool to connect these with the borg client and server processes, respectively.
The program socat has to be available on the backup server and on the client
@ -247,7 +247,7 @@ to *borg-client* has to have read and write permissions on ``/run/borg``::
On *borg-server*, we have to start the command ``borg serve`` and make its
standard input and output available to a unix socket::
borg-server:~$ socat UNIX-LISTEN:/run/borg/reponame.sock,fork EXEC:"borg serve --restrict-to-path /path/to/repo"
borg-server:~$ socat UNIX-LISTEN:/run/borg/reponame.sock,fork EXEC:"borg serve --append-only --restrict-to-path /path/to/repo"
Socat will wait until a connection is opened. Then socat will execute the
command given, redirecting Standard Input and Output to the unix socket. The
@ -277,7 +277,7 @@ forwarding can do this for us::
Warning: remote port forwarding failed for listen path /run/borg/reponame.sock
When you are done, you have to remove the socket file manually, otherwise
When you are done, you have to manually remove the socket file, otherwise
you may see an error like this when trying to execute borg commands::
Remote: YYYY/MM/DD HH:MM:SS socat[XXX] E connect(5, AF=1 "/run/borg/reponame.sock", 13): Connection refused
@ -301,7 +301,7 @@ ignore all arguments intended for the SSH command.
All Borg commands can now be executed on *borg-client*. For example to create a
backup execute the ``borg create`` command::
borg-client:~$ borg create --repo ssh://borg-server/path/to/repo archive /path_to_backup
borg-client:~$ borg create ssh://borg-server/path/to/repo::archive /path_to_backup
When automating backup creation, the
interactive ssh session may seem inappropriate. An alternative way of creating
@ -312,7 +312,7 @@ a backup may be the following command::
borgc@borg-client \
borg create \
--rsh "sh -c 'exec socat STDIO UNIX-CONNECT:/run/borg/reponame.sock'" \
--repo ssh://borg-server/path/to/repo archive /path_to_backup \
ssh://borg-server/path/to/repo::archive /path_to_backup \
';' rm /run/borg/reponame.sock
This command also automatically removes the socket file after the ``borg
@ -350,7 +350,7 @@ dedicated ssh key:
borgs@borg-server$ install -m 700 -d ~/.ssh/
borgs@borg-server$ ssh-keygen -N '' -t rsa -f ~/.ssh/borg-client_key
borgs@borg-server$ { echo -n 'command="borg serve --restrict-to-repo ~/repo",restrict '; cat ~/.ssh/borg-client_key.pub; } >> ~/.ssh/authorized_keys
borgs@borg-server$ { echo -n 'command="borg serve --append-only --restrict-to-repo ~/repo",restrict '; cat ~/.ssh/borg-client_key.pub; } >> ~/.ssh/authorized_keys
borgs@borg-server$ chmod 600 ~/.ssh/authorized_keys
``install -m 700 -d ~/.ssh/``
@ -365,10 +365,12 @@ dedicated ssh key:
Another more complex approach is using a unique ssh key for each pull operation.
This is more secure as it guarantees that the key will not be used for other purposes.
``{ echo -n 'command="borg serve --restrict-to-repo ~/repo",restrict '; cat ~/.ssh/borg-client_key.pub; } >> ~/.ssh/authorized_keys``
``{ echo -n 'command="borg serve --append-only --restrict-to-repo ~/repo",restrict '; cat ~/.ssh/borg-client_key.pub; } >> ~/.ssh/authorized_keys``
Add borg-client's ssh public key to ~/.ssh/authorized_keys with forced command and restricted mode.
The borg client is restricted to use one repo at the specified path.
The borg client is restricted to use one repo at the specified path and to append-only operation.
Commands like *delete*, *prune* and *compact* have to be executed another way, for example directly on *borg-server*
side or from a privileged, less restricted client (using another authorized_keys entry).
``chmod 600 ~/.ssh/authorized_keys``
@ -415,88 +417,8 @@ Parentheses are not needed when using a dedicated bash process.
*ssh://borgs@borg-server/~/repo* refers to the repository *repo* within borgs's home directory on *borg-server*.
*StrictHostKeyChecking=no* is used to add host keys automatically to *~/.ssh/known_hosts* without user intervention.
*StrictHostKeyChecking=no* is used to automatically add host keys to *~/.ssh/known_hosts* without user intervention.
``kill "${SSH_AGENT_PID}"``
Kill ssh-agent with loaded keys when it is not needed anymore.
Remote forwarding
=================
The standard ssh client can create tunnels to forward local ports to a remote server (local forwarding) and also
allow remote ports to be forwarded to local ports (remote forwarding).
This remote forwarding can be used to let remote backup clients access the backup server even if the backup server
cannot be reached directly by the backup client.
It even works in cases where the backup server cannot reach the backup client and the backup client cannot
reach the backup server, but some intermediate host can access both.
A schematic approach is as follows
::
Backup Server (backup@mybackup)   Intermediate Machine (john@myinter)   Backup Client (bob@myclient)

1. The backup server establishes SSH remote forwarding via the intermediate machine,
   so that SSH listens on a local port of the backup client.
2. Starting ``borg create`` on the backup client establishes
3. an SSH connection to that local port, which is forwarded to the intermediate machine
   and further on to the backup server.
4. The backup server receives the backup connection via SSH.
So for the backup client the backup is done via SSH to a local port, and for the backup server it is a normal backup
performed via SSH.
In order to achieve this, the following commands can be used to create the remote port forwarding:
1. On machine ``myinter``
``ssh bob@myclient -v -C -R 8022:mybackup:22 -N``
This will listen for ssh-connections on port ``8022`` on ``myclient`` and forward connections to port 22 on ``mybackup``.
You can also remove the need for machine ``myinter`` and create the port forwarding on the backup server directly by
using ``localhost`` instead of ``mybackup``.
2. On machine ``myclient``
``borg create -v --progress --stats ssh://backup@localhost:8022/home/backup/repos/myclient /``
Make sure to use port ``8022`` and ``localhost`` for the repository as this instructs borg on ``myclient`` to use the
remote forwarded ssh connection.
SSH Keys
--------
If you want to automate backups when using this method, the ssh ``known_hosts`` and ``authorized_keys`` need to be set up
to allow connections.
Security Considerations
-----------------------
Opening up SSH access this way can pose a security risk, as it effectively gives the client remote access to your
backup server even if the client is located outside of your company network.
To reduce the chances of compromise, you should configure a forced command in ``authorized_keys`` to prevent
anyone from performing any other action on the backup server.
This can be done e.g. by adding the following in ``$HOME/.ssh/authorized_keys`` on ``mybackup`` with proper
path and client-fqdn:
::
command="cd /home/backup/repos/<client fqdn>;borg serve --restrict-to-path /home/backup/repos/<client fqdn>"
All the additional security considerations for borg should be applied, see :ref:`central-backup-server` for some additional
hints.
More information
----------------
See `remote forwarding`_ and the `ssh man page`_ for more information about remote forwarding.
.. _remote forwarding: https://linuxize.com/post/how-to-setup-ssh-tunneling/
.. _ssh man page: https://manpages.debian.org/testing/manpages-de/ssh.1.de.html

View file

@ -8,7 +8,7 @@ Development
This chapter will get you started with Borg development.
Borg is written in Python (with a little bit of Cython and C for
the performance-critical parts).
the performance critical parts).
Contributions
-------------
@ -19,7 +19,7 @@ Some guidance for contributors:
- Discuss changes on the GitHub issue tracker, on IRC or on the mailing list.
- Make your PRs on the ``master`` branch (see `Branching Model`_ for details and exceptions).
- Make your PRs on the ``master`` branch (see `Branching Model`_ for details).
- Do clean changesets:
@ -52,14 +52,14 @@ Borg development happens on the ``master`` branch and uses GitHub pull
requests (if you don't have GitHub or don't want to use it you can
send smaller patches via the borgbackup mailing list to the maintainers).
Stable releases are maintained on maintenance branches named ``x.y-maint``, e.g.,
the maintenance branch of the 1.4.x series is ``1.4-maint``.
Stable releases are maintained on maintenance branches named ``x.y-maint``, eg.
the maintenance branch of the 1.0.x series is ``1.0-maint``.
Most PRs should be filed against the ``master`` branch. Only if an
issue affects **only** a particular maintenance branch a PR should be
filed against it directly.
While discussing/reviewing a PR it will be decided whether the
While discussing / reviewing a PR it will be decided whether the
change should be applied to maintenance branches. Each maintenance
branch has a corresponding *backport/x.y-maint* label, which will then
be applied.
@ -105,110 +105,11 @@ were collected:
Previously (until release 1.0.10) Borg used a `"merge upwards"
<https://git-scm.com/docs/gitworkflows#_merging_upwards>`_ model where
most minor changes and fixes were committed to a maintenance branch
(e.g. 1.0-maint), and the maintenance branch(es) were regularly merged
most minor changes and fixes where committed to a maintenance branch
(eg. 1.0-maint), and the maintenance branch(es) were regularly merged
back into the main development branch. This became more and more
troublesome due to merges growing more conflict-heavy and error-prone.
How to submit a pull request
----------------------------
In order to contribute to Borg, you will need to fork the ``borgbackup/borg``
main repository to your own GitHub repository and then clone that fork to your
local machine. Instructions for forking and cloning a repository can be found
here:
`<https://docs.github.com/en/get-started/quickstart/fork-a-repo>`_ .
Make sure you also fetched the git tags, because without them, ``setuptools-scm``
will run into issues determining the correct borg version. Check if ``git tag``
shows a lot of release tags (version numbers).
If it does not, use ``git fetch --tags`` to fetch them.
To work on your contribution, you first need to decide which branch your pull
request should be against. Often, this might be master branch (esp. for big /
risky contributions), but it could be also a maintenance branch like e.g.
1.4-maint (esp. for small fixes that should go into next maintenance release,
e.g. 1.4.x).
Start by checking out the appropriate branch:
::
git checkout master
It is best practice for a developer to keep the local ``master`` branch as an
up-to-date copy of the upstream ``master`` branch and to always do their own work
in a separate feature or bugfix branch.
This makes it possible to rebase such branches onto the upstream branches they
were branched from, if necessary.
This also applies to other upstream branches (like e.g. ``1.4-maint``), not
only to ``master``.
Thus, create a new branch now:
::
git checkout -b MYCONTRIB-master # choose an appropriate own branch name
Now, work on your contribution in that branch. Use these git commands:
::
git status # is there anything that needs to be added?
git add ... # if so, add it
git commit # finally, commit it. use a descriptive comment.
Then push the changes to your Github repository:
::
git push --set-upstream origin MYCONTRIB-master
Finally, make a pull request on ``borgbackup/borg`` Github repository against
the appropriate branch (e.g. ``master``) so that your changes can be reviewed.
What to do if work was accidentally started in wrong branch
-----------------------------------------------------------
If you accidentally worked in ``master`` branch, check out the ``master``
branch and make sure there are no uncommitted changes. Then, create a feature
branch from that, so that your contribution is in a feature branch.
::
git checkout master
git checkout -b MYCONTRIB-master
Next, check out the ``master`` branch again. Find the commit hash of the last
commit that was made before you started working on your contribution and perform
a hard reset.
::
git checkout master
git log
git reset --hard THATHASH
Then, update the local ``master`` branch with changes made in the upstream
repository.
::
git pull borg master
Rebase feature branch onto updated master branch
------------------------------------------------
After updating the local ``master`` branch from upstream, the feature branch
can be checked out and rebased onto (the now up-to-date) ``master`` branch.
::
git checkout MYCONTRIB-master
git rebase -i master
Next, check if there are any commits that exist in the feature branch
but not in the ``master`` branch and vice versa. If there are no
conflicts or after resolving them, push your changes to your Github repository.
::
git log
git diff master
git push -f
Code and issues
---------------
@ -218,35 +119,24 @@ Code is stored on GitHub, in the `Borgbackup organization
<https://github.com/borgbackup/borg/pulls>`_ should be sent there as
well. See also the :ref:`support` section for more details.
Style guide / Automated Code Formatting
---------------------------------------
Style guide
-----------
We use `black`_ for automatically formatting the code.
If you work on the code, it is recommended that you run black **before each commit**
(so that new code is always using the desired formatting and no additional commits
are required to fix the formatting).
::
pip install -r requirements.d/codestyle.txt # everybody use same black version
black --check . # only check, don't change
black . # reformat the code
The CI workflows will check the code formatting and will fail if it is not formatted correctly.
When (mass-)reformatting existing code, we need to avoid ruining `git blame`, so please
follow their `guide about avoiding ruining git blame`_:
.. _black: https://black.readthedocs.io/
.. _guide about avoiding ruining git blame: https://black.readthedocs.io/en/stable/guides/introducing_black_to_your_project.html#avoiding-ruining-git-blame
We generally follow `pep8
<https://www.python.org/dev/peps/pep-0008/>`_, with 120 columns
instead of 79. We do *not* use form-feed (``^L``) characters to
separate sections either. Compliance is tested automatically when
you run the tests.
Continuous Integration
----------------------
All pull requests go through `GitHub Actions`_, which runs the tests on misc.
Python versions and on misc. platforms as well as some additional checks.
All pull requests go through `GitHub Actions`_, which runs the tests on Linux
and Mac OS X as well as the flake8 style checker. Windows builds run on AppVeyor_,
while additional Unix-like platforms are tested on Golem_.
.. _AppVeyor: https://ci.appveyor.com/project/borgbackup/borg/
.. _Golem: https://golem.enkore.de/view/Borg/
.. _GitHub Actions: https://github.com/borgbackup/borg/actions
Output and Logging
@ -274,12 +164,6 @@ virtual env and run::
pip install -r requirements.d/development.txt
This project utilizes pre-commit to format and lint code before it is committed.
Although pre-commit is installed when running the command above, the pre-commit hooks
will have to be installed separately. Run this command to install the pre-commit hooks::
pre-commit install
Running the tests
-----------------
@ -287,7 +171,7 @@ The tests are in the borg/testsuite package.
To run all the tests, you need to have fakeroot installed. If you do not have
fakeroot, you still will be able to run most tests, just leave away the
``fakeroot -u`` from the given command lines.
`fakeroot -u` from the given command lines.
To run the test suite use the following command::
@ -298,7 +182,7 @@ Some more advanced examples::
# verify a changed tox.ini (run this after any change to tox.ini):
fakeroot -u tox --recreate
fakeroot -u tox -e py313 # run all tests, but only on python 3.13
fakeroot -u tox -e py39 # run all tests, but only on python 3.9
fakeroot -u tox borg.testsuite.locking # only run 1 test module
@ -310,24 +194,26 @@ Important notes:
- When using ``--`` to give options to py.test, you MUST also give ``borg.testsuite[.module]``.
Running the tests (using the pypi package)
------------------------------------------
Since borg 1.4, it is also possible to run the tests without a development
environment, using the borgbackup dist package (downloaded from pypi.org or
github releases page):
Running more checks using coala
-------------------------------
First install coala and some checkers ("bears"):
::
# optional: create and use a virtual env:
python3 -m venv env
. env/bin/activate
pip install -r requirements.d/coala.txt
# install packages
pip install borgbackup
pip install pytest pytest-benchmark
You can now run coala from the toplevel directory; it will read its settings
from ``.coafile`` there:
::
coala
Some bears have additional requirements and they usually tell you about
them in case they are missing.
# run the tests
pytest -v -rs --benchmark-skip --pyargs borg.testsuite
Adding a compression algorithm
------------------------------
@ -350,8 +236,8 @@ for easier use by packagers downstream.
When a command is added, a command line flag changed, added or removed,
the usage docs need to be rebuilt as well::
python scripts/make.py build_usage
python scripts/make.py build_man
python setup.py build_usage
python setup.py build_man
However, we prefer to do this as part of our :ref:`releasing`
preparations, so it is generally not necessary to update these when
@ -405,49 +291,6 @@ Usage::
# To copy files from the VM (in this case, the generated binary):
vagrant scp OS:/vagrant/borg/borg.exe .
Using Podman
------------
macOS-based developers (and others who prefer containers) can run the Linux test suite locally using Podman.
Prerequisites:
- Install Podman (e.g., ``brew install podman``).
- Initialize the Podman machine, only once: ``podman machine init``.
- Start the Podman machine, before using it: ``podman machine start``.
Usage::
# Open an interactive shell in the container (default if no command given):
./scripts/linux-run
# Run the default tox environment:
./scripts/linux-run tox
# Run a specific tox environment:
./scripts/linux-run tox -e py311-pyfuse3
# Pass arguments to pytest (e.g., run specific tests):
./scripts/linux-run tox -e py313-pyfuse3 -- -k mount
# Switch base image (temporarily):
./scripts/linux-run --image python:3.11-bookworm tox
Resource Usage
~~~~~~~~~~~~~~
The default Podman VM uses 2GB RAM and half your CPUs.
For heavy tests (parallel execution), this might be tight.
- **Check usage:** Run ``podman stats`` in another terminal while tests are running.
- **Increase resources:**
::
podman machine stop
podman machine set --cpus 6 --memory 4096
podman machine start
Creating standalone binaries
----------------------------
@ -467,6 +310,7 @@ If you encounter issues, see also our `Vagrantfile` for details.
work on same OS, same architecture (x86 32bit, amd64 64bit)
without external dependencies.
.. _releasing:
Creating a new release
@ -482,18 +326,12 @@ Checklist:
- Update ``CHANGES.rst``, based on ``git log $PREVIOUS_RELEASE..``.
- Check version number of upcoming release in ``CHANGES.rst``.
- Render ``CHANGES.rst`` via ``make html`` and check for markup errors.
- Verify that ``MANIFEST.in``, ``pyproject.toml`` and ``setup.py`` are complete.
- Run these commands, check git status for files that might need to be added, and commit::
python scripts/make.py build_usage
python scripts/make.py build_man
- Verify that ``MANIFEST.in`` and ``setup.py`` are complete.
- ``python setup.py build_usage ; python setup.py build_man`` and commit.
- Tag the release::
git tag -s -m "tagged/signed release X.Y.Z" X.Y.Z
- Push the release PR branch to GitHub, make a pull request.
- Also push the release tag.
- Create a clean repo and use it for the following steps::
git clone borg borg-clean
@ -502,9 +340,8 @@ Checklist:
It will also reveal uncommitted required files.
Moreover, it makes sure the vagrant machines only get committed files and
do a fresh start based on that.
- Optional: run tox and/or binary builds on all supported platforms via vagrant,
check for test failures. This is now optional as we do platform testing and
binary building on GitHub.
- Run tox and/or binary builds on all supported platforms via vagrant,
check for test failures.
- Create sdist, sign it, upload release to (test) PyPi:
::
@ -512,32 +349,26 @@ Checklist:
scripts/sdist-sign X.Y.Z
scripts/upload-pypi X.Y.Z test
scripts/upload-pypi X.Y.Z
- Put binaries into dist/borg-OSNAME and sign them:
Note: the signature is not uploaded to PyPi any more, but we upload it to
github releases.
- When GitHub CI looks good on the release PR, merge it and then check "Actions":
GitHub will create binary assets after the release PR is merged within the
CI testing of the merge. Check the "Upload binaries" step on Ubuntu (AMD/Intel
and ARM64) and macOS (Intel and ARM64), fetch the ZIPs with the binaries.
- Unpack the ZIPs and test the binaries, upload the binaries to the GitHub
release page (borg-OS-SPEC-ARCH-gh and borg-OS-SPEC-ARCH-gh.tgz).
::
scripts/sign-binaries 201912312359
- Close the release milestone on GitHub.
- `Update borgbackup.org
<https://github.com/borgbackup/borgbackup.github.io/pull/53/files>`_ with the
new version number and release date.
- Announce on:
- Mailing list.
- Mastodon / BlueSky / X (aka Twitter).
- IRC channel (change ``/topic``).
- Mailing list.
- Twitter.
- IRC channel (change ``/topic``).
- Create a GitHub release, include:
- pypi dist package and signature
- Standalone binaries (see above for how to create them).
* Standalone binaries (see above for how to create them).
- For macOS binaries **with** FUSE support, document the macFUSE version
in the README of the binaries. macFUSE uses a kernel extension that needs
to be compatible with the code contained in the binary.
- A link to ``CHANGES.rst``.
+ For OS X, document the OS X Fuse version in the README of the binaries.
OS X FUSE uses a kernel extension that needs to be compatible with the
code contained in the binary.
* A link to ``CHANGES.rst``.

File diff suppressed because it is too large

View file

@ -1,7 +1,7 @@
.. highlight:: bash
.. |package_dirname| replace:: borgbackup-|version|
.. |package_filename| replace:: |package_dirname|.tar.gz
.. |package_url| replace:: https://pypi.org/project/borgbackup/#files
.. |package_url| replace:: https://pypi.python.org/packages/source/b/borgbackup/|package_filename|
.. |git_url| replace:: https://github.com/borgbackup/borg.git
.. _github: https://github.com/borgbackup/borg
.. _issue tracker: https://github.com/borgbackup/borg/issues
@ -10,21 +10,20 @@
.. _HMAC-SHA256: https://en.wikipedia.org/wiki/HMAC
.. _SHA256: https://en.wikipedia.org/wiki/SHA-256
.. _PBKDF2: https://en.wikipedia.org/wiki/PBKDF2
.. _argon2: https://en.wikipedia.org/wiki/Argon2
.. _ACL: https://en.wikipedia.org/wiki/Access_control_list
.. _libacl: https://savannah.nongnu.org/projects/acl/
.. _libattr: https://savannah.nongnu.org/projects/attr/
.. _libxxhash: https://github.com/Cyan4973/xxHash
.. _liblz4: https://github.com/Cyan4973/lz4
.. _libzstd: https://github.com/facebook/zstd
.. _libb2: https://github.com/BLAKE2/libb2
.. _OpenSSL: https://www.openssl.org/
.. _`Python 3`: https://www.python.org/
.. _Buzhash: https://en.wikipedia.org/wiki/Buzhash
.. _msgpack: https://msgpack.org/
.. _`msgpack-python`: https://pypi.org/project/msgpack-python/
.. _llfuse: https://pypi.org/project/llfuse/
.. _mfusepy: https://pypi.org/project/mfusepy/
.. _pyfuse3: https://pypi.org/project/pyfuse3/
.. _`msgpack-python`: https://pypi.python.org/pypi/msgpack-python/
.. _llfuse: https://pypi.python.org/pypi/llfuse/
.. _pyfuse3: https://pypi.python.org/pypi/pyfuse3/
.. _userspace filesystems: https://en.wikipedia.org/wiki/Filesystem_in_Userspace
.. _Cython: https://cython.org/
.. _virtualenv: https://pypi.org/project/virtualenv/
.. _Cython: http://cython.org/
.. _virtualenv: https://pypi.python.org/pypi/virtualenv/
.. _mailing list discussion about internals: http://librelist.com/browser/attic/2014/5/6/questions-and-suggestions-about-inner-working-of-attic>

View file

@ -6,7 +6,7 @@ Borg Documentation
.. include:: ../README.rst
.. When you add an element here, do not forget to add it to book.rst.
.. when you add an element here, do not forget to add it to book.rst
.. toctree::
:maxdepth: 2
@ -18,8 +18,6 @@ Borg Documentation
faq
support
changes
changes_1.x
changes_0.x
internals
development
authors

View file

@ -13,7 +13,6 @@ There are different ways to install Borg:
that comes bundled with all dependencies.
- :ref:`source-install`, either:
- :ref:`windows-binary` - builds a binary file for Windows using MSYS2.
- :ref:`pip-installation` - installing a source package with pip needs
more installation steps and requires all dependencies with
development headers and a compiler.
@ -43,7 +42,7 @@ package which can be installed with the package manager.
Distribution Source Command
============ ============================================= =======
Alpine Linux `Alpine repository`_ ``apk add borgbackup``
Arch Linux `[extra]`_ ``pacman -S borg``
Arch Linux `[community]`_ ``pacman -S borg``
Debian `Debian packages`_ ``apt install borgbackup``
Gentoo `ebuild`_ ``emerge borgbackup``
GNU Guix `GNU Guix`_ ``guix package --install borg``
@ -64,14 +63,14 @@ Ubuntu `Ubuntu packages`_, `Ubuntu PPA`_ ``apt install borgbac
============ ============================================= =======
.. _Alpine repository: https://pkgs.alpinelinux.org/packages?name=borgbackup
.. _[extra]: https://www.archlinux.org/packages/?name=borg
.. _[community]: https://www.archlinux.org/packages/?name=borg
.. _Debian packages: https://packages.debian.org/search?keywords=borgbackup&searchon=names&exact=1&suite=all&section=all
.. _Fedora official repository: https://packages.fedoraproject.org/pkgs/borgbackup/borgbackup/
.. _Fedora official repository: https://apps.fedoraproject.org/packages/borgbackup
.. _FreeBSD ports: https://www.freshports.org/archivers/py-borgbackup/
.. _ebuild: https://packages.gentoo.org/packages/app-backup/borgbackup
.. _GNU Guix: https://www.gnu.org/software/guix/package-list.html#borg
.. _pkgsrc: https://pkgsrc.se/sysutils/py-borgbackup
.. _cauldron: https://madb.mageia.org/package/show/application/0/release/cauldron/name/borgbackup
.. _pkgsrc: http://pkgsrc.se/sysutils/py-borgbackup
.. _cauldron: http://madb.mageia.org/package/show/application/0/release/cauldron/name/borgbackup
.. _.nix file: https://github.com/NixOS/nixpkgs/blob/master/pkgs/tools/backup/borgbackup/default.nix
.. _OpenBSD ports: https://cvsweb.openbsd.org/cgi-bin/cvsweb/ports/sysutils/borgbackup/
.. _OpenIndiana hipster repository: https://pkg.openindiana.org/hipster/en/search.shtml?token=borg&action=Search
@ -82,9 +81,9 @@ Ubuntu `Ubuntu packages`_, `Ubuntu PPA`_ ``apt install borgbac
.. _Ubuntu packages: https://launchpad.net/ubuntu/+source/borgbackup
.. _Ubuntu PPA: https://launchpad.net/~costamagnagianfranco/+archive/ubuntu/borgbackup
Please ask package maintainers to build a package or, if you can package/
Please ask package maintainers to build a package or, if you can package /
submit it yourself, please help us with that! See :issue:`105` on
GitHub to follow up on packaging efforts.
github to followup on packaging efforts.
**Current status of package in the repositories**
@ -106,12 +105,12 @@ Standalone Binary
.. note:: Releases are signed with an OpenPGP key, see
:ref:`security-contact` for more instructions.
Borg x86/x64 AMD/Intel compatible binaries (generated with `pyinstaller`_)
Borg x86/x64 amd/intel compatible binaries (generated with `pyinstaller`_)
are available on the releases_ page for the following platforms:
* **Linux**: glibc >= 2.28 (ok for most supported Linux releases).
Older glibc releases are untested and may not work.
* **macOS**: 10.12 or newer (To avoid signing issues, download the file via
* **MacOS**: 10.12 or newer (To avoid signing issues download the file via
command line **or** remove the ``quarantine`` attribute after downloading:
``$ xattr -dr com.apple.quarantine borg-macosx64.tgz``)
* **FreeBSD**: 12.1 (unknown whether it works for older releases)
@ -135,10 +134,10 @@ fail if /tmp has not enough free space or is mounted with the ``noexec``
option. You can change the temporary directory by setting the ``TEMP``
environment variable before running Borg.
If a new version is released, you will have to download it manually and replace
If a new version is released, you will have to manually download it and replace
the old version using the same steps as shown above.
.. _pyinstaller: https://www.pyinstaller.org
.. _pyinstaller: http://www.pyinstaller.org
.. _releases: https://github.com/borgbackup/borg/releases
.. _source-install:
@ -158,32 +157,29 @@ Dependencies
~~~~~~~~~~~~
To install Borg from a source package (including pip), you have to install the
following dependencies first. For the libraries you will also need their
development header files (sometimes in a separate `-dev` or `-devel` package).
following dependencies first:
* `Python 3`_ >= 3.10.0
* OpenSSL_ >= 1.1.1 (LibreSSL will not work)
* libacl_ (which depends on libattr_)
* libxxhash_ >= 0.8.1
* liblz4_ >= 1.7.0 (r129)
* libzstd_ >= 1.3.0
* libffi (required for argon2-cffi-bindings)
* pkg-config (cli tool) - Borg uses this to discover header and library
locations automatically. Alternatively, you can also point to them via some
environment variables, see setup.py.
* Some other Python dependencies, pip will automatically install them for you.
* Optionally, if you wish to mount an archive as a FUSE filesystem, you need
* `Python 3`_ >= 3.9.0, plus development headers.
* Libraries (library plus development headers):
- OpenSSL_ >= 1.1.1 (LibreSSL will not work)
- libacl_ (which depends on libattr_)
- liblz4_ >= 1.7.0 (r129)
- libzstd_ >= 1.3.0
- libxxhash >= 0.8.1 (0.8.0 might work also)
- libdeflate >= 1.5
* pkg-config (cli tool) and pkgconfig python package (borg uses these to
discover header and library location - if it can't import pkgconfig and
is not pointed to header/library locations via env vars [see setup.py],
it will raise a fatal error).
**These must be present before invoking setup.py!**
* some other Python dependencies, pip will automatically install them for you.
* optionally, if you wish to mount an archive as a FUSE filesystem, you need
a FUSE implementation for Python:
- mfusepy_ >= 3.1.0 (for fuse 2 and fuse 3, use `pip install borgbackup[mfusepy]`), or
- pyfuse3_ >= 3.1.1 (for fuse 3, use `pip install borgbackup[pyfuse3]`), or
- llfuse_ >= 1.3.8 (for fuse 2, use `pip install borgbackup[llfuse]`).
- Additionally, your OS will need to have FUSE support installed
(e.g. a package `fuse` for fuse 2 or a package `fuse3` for fuse 3 support).
* Optionally, if you wish to use S3/B2 Backend:
- borgstore[s3] ~= 0.3.0 (use `pip install borgbackup[s3]`)
* Optionally, if you wish to use SFTP Backend:
- borgstore[sftp] ~= 0.3.0 (use `pip install borgbackup[sftp]`)
- Either pyfuse3_ (preferably, newer and maintained) or llfuse_ (older,
unmaintained now). See also the BORG_FUSE_IMPL env variable.
- See setup.py about the version requirements.
If you have trouble finding the right package names, have a look at the
distribution-specific sections below or the Vagrantfile in the git repository,
@ -191,35 +187,24 @@ which contains installation scripts for a number of operating systems.
In the following, the steps needed to install the dependencies are listed for a
selection of platforms. If your distribution is not covered by these
instructions, try to use your package manager to install the dependencies.
instructions, try to use your package manager to install the dependencies. On
FreeBSD, you may need to get a recent enough OpenSSL version from FreeBSD
ports.
After you have installed the dependencies, you can proceed with steps outlined
under :ref:`pip-installation`.
Arch Linux
++++++++++
Install the runtime and build dependencies::
pacman -S python python-pip python-virtualenv openssl acl xxhash lz4 zstd base-devel
pacman -S fuse2 # needed for llfuse
pacman -S fuse3 # needed for pyfuse3
Note that Arch Linux specifically doesn't support
`partial upgrades <https://wiki.archlinux.org/title/Partial_upgrade>`__,
so in case some packages cannot be retrieved from the repo, run with ``pacman -Syu``.
Debian / Ubuntu
+++++++++++++++
Install the dependencies with development headers::
sudo apt-get install python3 python3-dev python3-pip python3-virtualenv \
libacl1-dev \
libacl1-dev libacl1 \
libssl-dev \
liblz4-dev libzstd-dev libxxhash-dev \
libffi-dev \
build-essential pkg-config
liblz4-dev libzstd-dev libxxhash-dev libdeflate-dev \
build-essential \
pkg-config python3-pkgconfig
sudo apt-get install libfuse-dev fuse # needed for llfuse
sudo apt-get install libfuse3-dev fuse3 # needed for pyfuse3
@ -233,11 +218,10 @@ Fedora
Install the dependencies with development headers::
sudo dnf install python3 python3-devel python3-pip python3-virtualenv \
libacl-devel \
libacl-devel libacl \
openssl-devel \
lz4-devel libzstd-devel xxhash-devel \
libffi-devel \
pkgconf
lz4-devel libzstd-devel xxhash-devel libdeflate-devel \
pkgconf python3-pkgconfig
sudo dnf install gcc gcc-c++ redhat-rpm-config
sudo dnf install fuse-devel fuse # needed for llfuse
sudo dnf install fuse3-devel fuse3 # needed for pyfuse3
@ -252,8 +236,8 @@ Install the dependencies automatically using zypper::
Alternatively, you can enumerate all build dependencies in the command line::
sudo zypper install python3 python3-devel \
libacl-devel openssl-devel xxhash-devel libzstd-devel liblz4-devel \
libffi-devel \
libacl-devel openssl-devel \
libxxhash-devel libdeflate-devel \
python3-Cython python3-Sphinx python3-msgpack-python python3-pkgconfig pkgconf \
python3-pytest python3-setuptools python3-setuptools_scm \
python3-sphinx_rtd_theme gcc gcc-c++
@ -262,10 +246,16 @@ Alternatively, you can enumerate all build dependencies in the command line::
macOS
+++++
When installing borgbackup via Homebrew_, the basic dependencies are installed automatically.
When installing via Homebrew_, dependencies are installed automatically. To install
dependencies manually::
For FUSE support to mount the backup archives, you need macFUSE, which is available
via `github <https://github.com/osxfuse/osxfuse/releases/latest>`__, or Homebrew::
brew install python3 openssl zstd lz4 xxhash libdeflate
brew install pkg-config
pip3 install virtualenv pkgconfig
For FUSE support to mount the backup archives, you need at least version 3.0 of
macFUSE, which is available via `github
<https://github.com/osxfuse/osxfuse/releases/latest>`__, or Homebrew::
brew install --cask macfuse
@ -275,14 +265,7 @@ the installed ``openssl`` formula, point pkg-config to the correct path::
PKG_CONFIG_PATH="/usr/local/opt/openssl@1.1/lib/pkgconfig" pip install borgbackup[llfuse]
When working from a borg git repo workdir, you can install dependencies using the
Brewfile::
brew install python@3.11 # can be any supported python3 version
brew bundle install # install requirements from borg repo's ./Brewfile
pip3 install virtualenv
Be aware that for all recent macOS releases you must authorize full disk access.
For OS X Catalina and later, be aware that you must authorize full disk access.
It is no longer sufficient to run borg backups as root. If you have not yet
granted full disk access, and you run Borg backup from cron, you will see
messages such as::
@ -303,7 +286,7 @@ and commands to make FUSE work for using the mount command.
pkg install -y python3 pkgconf
pkg install openssl
pkg install liblz4 zstd xxhash
pkg install liblz4 zstd xxhash libdeflate
pkg install fusefs-libs # needed for llfuse
pkg install -y git
python3 -m ensurepip # to install pip for Python3
@ -313,20 +296,6 @@ and commands to make FUSE work for using the mount command.
kldload fuse
sysctl vfs.usermount=1
.. _windows_deps:
Windows
+++++++
.. note::
Running under Windows is experimental.
.. warning::
This script needs to be run in the UCRT64 environment in MSYS2.
Install the dependencies with the provided script::
./scripts/msys2-install-deps
Windows 10's Linux Subsystem
++++++++++++++++++++++++++++
@ -345,34 +314,11 @@ Cygwin
Use the Cygwin installer to install the dependencies::
python39 python39-devel
python39 python39-devel python39-pkgconfig
python39-setuptools python39-pip python39-wheel python39-virtualenv
libssl-devel libxxhash-devel liblz4-devel libzstd-devel
libssl-devel libxxhash-devel libdeflate-devel liblz4-devel libzstd-devel
binutils gcc-g++ git make openssh
Make sure to use a virtual environment to avoid confusions with any Python installed on Windows.
.. _windows-binary:
Building a binary on Windows
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. note::
This is experimental.
.. warning::
This needs to be run in the UCRT64 environment in MSYS2.
Ensure to install the dependencies as described within :ref:`Dependencies: Windows <windows_deps>`.
::
# Needed for setuptools < 70.2.0 to work - https://www.msys2.org/docs/python/#known-issues
# export SETUPTOOLS_USE_DISTUTILS=stdlib
pip install -e .
pyinstaller -y scripts/borg.exe.spec
A standalone executable will be created in ``dist/borg.exe``.
.. _pip-installation:
@ -383,13 +329,11 @@ Virtualenv_ can be used to build and install Borg without affecting
the system Python or requiring root access. Using a virtual environment is
optional, but recommended except for the most simple use cases.
Ensure to install the dependencies as described within :ref:`source-install`.
.. note::
If you install into a virtual environment, you need to **activate** it
first (``source borg-env/bin/activate``), before running ``borg``.
Alternatively, symlink ``borg-env/bin/borg`` into some directory that is in
your ``PATH`` so you can run ``borg``.
your ``PATH`` so you can just run ``borg``.
This will use ``pip`` to install the latest release from PyPi::
@ -399,6 +343,9 @@ This will use ``pip`` to install the latest release from PyPi::
# might be required if your tools are outdated
pip install -U pip setuptools wheel
# pkgconfig MUST be available before borg is installed!
pip install pkgconfig
# install Borg + Python dependencies into virtualenv
pip install borgbackup
# or alternatively (if you want FUSE support):
@ -410,19 +357,6 @@ activating your virtual environment::
pip install -U borgbackup # or ... borgbackup[llfuse/pyfuse3]
When doing manual pip installation, man pages are not automatically
installed. You can run these commands to install the man pages
locally::
# get borg from github
git clone https://github.com/borgbackup/borg.git borg
# Install the files with proper permissions
install -D -m 0644 borg/docs/man/borg*.1* $HOME/.local/share/man/man1/borg.1
# Update the man page cache
mandb
.. _git-installation:
Using git
@ -431,30 +365,20 @@ Using git
This uses latest, unreleased development code from git.
While we try not to break master, there are no guarantees on anything.
Ensure to install the dependencies as described within :ref:`source-install`.
Version metadata is obtained dynamically at install time using ``setuptools-scm``.
Please ensure that your git repo either has correct tags, or provide the version
manually using the ``SETUPTOOLS_SCM_PRETEND_VERSION`` environment variable.
::
# get borg from github
git clone https://github.com/borgbackup/borg.git
# create a virtual environment
virtualenv --python=$(which python3) borg-env
virtualenv --python=${which python3} borg-env
source borg-env/bin/activate # always before using!
# install borg dependencies into virtualenv
# install borg + dependencies into virtualenv
cd borg
pip install -r requirements.d/development.txt
pip install -r requirements.d/docs.txt # optional, to build the docs
# set a borg version if setuptools-scm fails to do so automatically
export SETUPTOOLS_SCM_PRETEND_VERSION=
# install borg into virtualenv
pip install -e . # in-place editable mode
or
pip install -e .[pyfuse3] # in-place editable mode, use pyfuse3
@ -472,11 +396,11 @@ If you need to use a different version of Python you can install this using ``py
...
# create a virtual environment
pyenv install 3.10.0 # minimum, preferably use something more recent!
pyenv global 3.10.0
pyenv local 3.10.0
pyenv install 3.9.0 # minimum, preferably use something more recent!
pyenv global 3.9.0
pyenv local 3.9.0
virtualenv --python=${pyenv which python} borg-env
source borg-env/bin/activate # always before using!
...
.. note:: As a developer or power user, you should always use a virtual environment.
.. note:: As a developer or power user, you always want to use a virtual environment.

View file

@ -4,7 +4,7 @@
Internals
=========
The internals chapter describes and analyzes most of the inner workings
The internals chapter describes and analyses most of the inner workings
of Borg.
Borg uses a low-level, key-value store, the :ref:`repository`, and
@ -19,18 +19,18 @@ specified when the backup was performed.
Deduplication is performed globally across all data in the repository
(multiple backups and even multiple hosts), both on data and file
metadata, using :ref:`chunks` created by the chunker using the
Buzhash_ algorithm ("buzhash" and "buzhash64" chunker) or a simpler
fixed block size algorithm ("fixed" chunker).
Buzhash_ algorithm ("buzhash" chunker) or a simpler fixed blocksize
algorithm ("fixed" chunker).
To perform the repository-wide deduplication, a hash of each
To actually perform the repository-wide deduplication, a hash of each
chunk is checked against the :ref:`chunks cache <cache>`, which is a
hash table of all chunks that already exist.
hash-table of all chunks that already exist.
.. figure:: internals/structure.png
:figwidth: 100%
:width: 100%
Layers in Borg. At the very top, commands are implemented, using
Layers in Borg. On the very top commands are implemented, using
a data access layer provided by the Archive and Item classes.
The "key" object provides both compression and authenticated
encryption used by the data access layer. The "key" object represents

Binary file not shown.

Binary file not shown.

File diff suppressed because it is too large

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View file

@ -11,13 +11,16 @@ but does mean that there are no release-to-release guarantees on what you might
even for point releases (1.1.x), and there is no documentation beyond the code and the internals documents.
Borg does on the other hand provide an API on a command-line level. In other words, a frontend should
(for example) create a backup archive by invoking :ref:`borg_create`, provide command-line parameters/options
as needed, and parse JSON output from Borg.
(for example) create a backup archive just invoke :ref:`borg_create`, give commandline parameters/options
as needed and parse JSON output from borg.
Important: JSON output is expected to be UTF-8, but currently borg depends on the locale being configured
for that (must be a UTF-8 locale and *not* "C" or "ascii"), so that Python will choose to encode to UTF-8.
The same applies to any inputs read by borg; they are expected to be UTF-8 encoded as well.
We consider this a bug (see :issue:`2273`) and might fix it later, so borg will use UTF-8 independent of
the locale.
On POSIX systems, you can usually set environment vars to choose a UTF-8 locale:
::
@ -26,53 +29,6 @@ On POSIX systems, you can usually set environment vars to choose a UTF-8 locale:
export LC_CTYPE=en_US.UTF-8
Another way to get Python's stdin/stdout/stderr streams to use UTF-8 encoding (without having
a UTF-8 locale / LANG / LC_CTYPE) is:
::
export PYTHONIOENCODING=utf-8
See :issue:`2273` for more details.
Dealing with non-unicode byte sequences and JSON limitations
------------------------------------------------------------
Paths on POSIX systems can have arbitrary bytes in them (except 0x00 which is used as string terminator in C).
Nowadays, UTF-8 encoded paths (which decode to valid unicode) are the usual thing, but a lot of systems
still have paths from the past, when other, non-unicode encodings were used. Old Samba shares in particular often
contain wild mixtures of misc. encodings, sometimes even severely broken byte sequences.
borg deals with such non-unicode paths ("with funny/broken characters") by decoding the byte sequences using
UTF-8 encoding with the "surrogateescape" error handling mode, which maps invalid bytes to special unicode code points
(surrogate escapes). When encoding such a unicode string back to a byte sequence, the original byte sequence
will be reproduced exactly.
JSON should only contain valid unicode text without any surrogate escapes, so we can't just directly have a
surrogate-escaped path in JSON ("path" is only one example, this also affects other text-like content).
Borg deals with this situation like this (since borg 2.0):
For a valid unicode path (no surrogate escapes), the JSON will only have "path": path.
For a non-unicode path (with surrogate escapes), the JSON will have 2 entries:
- "path": path_approximation (pure valid unicode, all invalid bytes will show up as "?")
- "path_b64": path_bytes_base64_encoded (if you decode the base64, you get the original path byte string)
JSON users need to pick whatever suits their needs best. The suggested procedure (shown for "path") is:
- check if there is a "path_b64" key.
- if it is there, you will know that the original bytes path did not cleanly UTF-8-decode into unicode (it has
some invalid bytes) and that the string given by the "path" key is only an approximation, not the precise
path. If you need precision, you must base64-decode the value of "path_b64" and deal with the arbitrary byte
string you'll get. If an approximation is fine, use the value of the "path" key.
- if it is not there, the value of the "path" key is all you need (the original bytes path is its UTF-8 encoding).
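
A minimal sketch of this procedure in Python (the helper is hypothetical, not part of borg; it expects one
already-parsed JSON object, e.g. one line of ``borg list --json-lines`` output):

::

  import base64
  import json

  def item_path(item: dict) -> bytes:
      """Return the precise path of a JSON item as a byte string.

      If "path_b64" is present, the original path did not cleanly decode as
      UTF-8, so the base64 value has to be decoded to get the exact bytes.
      Otherwise, the UTF-8 encoding of "path" is the original path.
      """
      if "path_b64" in item:
          return base64.b64decode(item["path_b64"])
      return item["path"].encode("utf-8")

  line = '{"path": "some/file.txt"}'  # one line of JSON-lines output
  print(item_path(json.loads(line)))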
Logging
-------
@ -84,6 +40,8 @@ where each line is a JSON object. The *type* key of the object determines its ot
parsing error will be printed in plain text, because logging set-up happens after all arguments are
parsed.
Since JSON can only encode text, any string representing a file system path may miss non-text parts.
The following types are in use. Progress information is governed by the usual rules for progress information,
it is not produced unless ``--progress`` is specified.
@ -92,20 +50,17 @@ archive_progress
The following keys exist, each represents the current progress.
original_size
Original size of data processed so far (before compression and deduplication, may be empty/absent)
Original size of data processed so far (before compression and deduplication)
compressed_size
Compressed size (may be empty/absent)
Compressed size
deduplicated_size
Deduplicated size (may be empty/absent)
Deduplicated size
nfiles
Number of (regular) files processed so far (may be empty/absent)
Number of (regular) files processed so far
path
Current path (may be empty/absent)
Current path
time
Unix timestamp (float)
finished
boolean indicating whether the operation has finished, only the last object for an *operation*
can have this property set to *true*.
progress_message
Message-based progress information with no concrete progress data, just a message
@ -135,14 +90,12 @@ progress_percent
can have this property set to *true*.
message
A formatted progress message, this will include the percentage and perhaps other information
(absent for finished == true)
current
Current value (always less-or-equal to *total*, absent for finished == true)
Current value (always less-or-equal to *total*)
info
Array that describes the current item, may be *null*, contents depend on *msgid*
(absent for finished == true)
total
Total value (absent for finished == true)
Total value
time
Unix timestamp (float)
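
For illustration, a sketch of how a frontend might consume such progress lines (the keys are the ones described
above; the surrounding code and the stdin stand-in are hypothetical -- borg writes its log output to stderr):

::

  import json
  import sys

  def handle_log_line(line: str) -> None:
      """Dispatch one ``--log-json`` line by its "type" key (sketch only)."""
      msg = json.loads(line)
      if msg.get("type") == "archive_progress":
          if msg.get("finished"):
              print("archive finished")
          else:
              print(f"{msg.get('nfiles', 0)} files, "
                    f"{msg.get('original_size', 0)} bytes: {msg.get('path', '')}")
      elif msg.get("type") == "progress_percent" and not msg.get("finished"):
          print(msg.get("message", ""))

  # reading from stdin here as a stand-in for the stderr stream of a borg process
  for raw in sys.stdin:
      handle_log_line(raw)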
@ -255,7 +208,7 @@ Passphrase prompts should be handled differently. Use the environment variables
and *BORG_NEW_PASSPHRASE* (see :ref:`env_vars` for reference) to pass passphrases to Borg, don't
use the interactive passphrase prompts.
When setting a new passphrase (:ref:`borg_repo-create`, :ref:`borg_key_change-passphrase`) normally
When setting a new passphrase (:ref:`borg_init`, :ref:`borg_key_change-passphrase`) normally
Borg prompts whether it should display the passphrase. This can be suppressed by setting
the environment variable *BORG_DISPLAY_PASSPHRASE* to *no*.
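
A possible invocation from a frontend could look like this (a sketch; repository, archive and paths are made up,
and the ``repo::archive`` syntax is the borg 1.x style used in this document):

::

  import os
  import subprocess

  env = dict(os.environ)
  env["BORG_PASSPHRASE"] = "s3cr3t"        # passphrase for the repo key, no interactive prompt
  env["BORG_DISPLAY_PASSPHRASE"] = "no"    # suppress the "display the passphrase?" prompt

  # hypothetical invocation -- adjust repository, archive name and paths to your setup
  result = subprocess.run(
      ["borg", "create", "--json", "/path/to/repo::archive-{now}", "/home/user"],
      env=env, capture_output=True, text=True,
  )
  print(result.returncode)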
@ -280,7 +233,7 @@ and :ref:`borg_list` implement a ``--json`` option which turns their regular out
Some commands, like :ref:`borg_list` and :ref:`borg_diff`, can produce *a lot* of JSON. Since many JSON implementations
don't support a streaming mode of operation, which is pretty much required to deal with this amount of JSON, these
commands implement a ``--json-lines`` option which generates output in the `JSON lines <https://jsonlines.org/>`_ format,
commands implement a ``--json-lines`` option which generates output in the `JSON lines <http://jsonlines.org/>`_ format,
which is simply a number of JSON objects separated by new lines.
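
Because of that, a frontend would typically consume such output line by line instead of reading it all at once;
a sketch (repository and archive name are made up):

::

  import json
  import subprocess

  # hypothetical repo/archive; borg writes one JSON object per line to stdout
  proc = subprocess.Popen(
      ["borg", "list", "--json-lines", "/path/to/repo::archive-2017-02-27"],
      stdout=subprocess.PIPE, text=True,
  )
  for line in proc.stdout:
      item = json.loads(line)
      if item.get("type") == "-":      # "-" is the type of a regular file
          print(item["path"], item["size"])
  proc.wait()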
Dates are formatted according to ISO 8601 in local time. No explicit time zone is specified *at this time*
@ -299,7 +252,7 @@ last_modified
The *encryption* key, if present, contains:
mode
Textual encryption mode name (same as :ref:`borg_repo-create` ``--encryption`` names)
Textual encryption mode name (same as :ref:`borg_init` ``--encryption`` names)
keyfile
Path to the local key file used for access. Depending on *mode* this key may be absent.
@ -316,8 +269,12 @@ stats
Number of unique chunks
total_size
Total uncompressed size of all chunks multiplied with their reference counts
total_csize
Total compressed and encrypted size of all chunks multiplied with their reference counts
unique_size
Uncompressed size of all chunks
unique_csize
Compressed and encrypted size of all chunks
.. highlight:: json
@ -328,8 +285,10 @@ Example *borg info* output::
"path": "/home/user/.cache/borg/0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
"stats": {
"total_chunks": 511533,
"total_csize": 17948017540,
"total_size": 22635749792,
"total_unique_chunks": 54892,
"unique_csize": 1920405405,
"unique_size": 2449675468
}
},
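
From such a cache ``stats`` object a frontend could e.g. derive a rough deduplication ratio (simple arithmetic on
the values shown above; whether the compressed sizes are also present depends on the borg version):

::

  # the cache "stats" values from the example output above
  stats = {
      "total_chunks": 511533,
      "total_size": 22635749792,
      "total_unique_chunks": 54892,
      "unique_size": 2449675468,
  }

  dedup_ratio = stats["unique_size"] / stats["total_size"]
  print(f"unique data is {dedup_ratio:.1%} of the total logical size")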
@ -373,6 +332,11 @@ stats
Deduplicated size (against the current repository, not when the archive was created)
nfiles
Number of regular files in the archive
limits
Object describing the utilization of Borg limits
max_archive_size
Float between 0 and 1 describing how large this archive is relative to the maximum size allowed by Borg
command_line
Array of strings of the command line that created the archive
@ -442,6 +406,9 @@ The same archive with more information (``borg info --last 1 --json``)::
"end": "2017-02-27T12:27:20.789123",
"hostname": "host",
"id": "80cd07219ad725b3c5f665c1dcf119435c4dee1647a560ecac30f8d40221a46a",
"limits": {
"max_archive_size": 0.0001330855110409714
},
"name": "host-system-backup-2017-02-27",
"start": "2017-02-27T12:27:20.789123",
"stats": {
@ -457,8 +424,10 @@ The same archive with more information (``borg info --last 1 --json``)::
"path": "/home/user/.cache/borg/0cbe6166b46627fd26b97f8831e2ca97584280a46714ef84d2b668daf8271a23",
"stats": {
"total_chunks": 511533,
"total_csize": 17948017540,
"total_size": 22635749792,
"total_unique_chunks": 54892,
"unique_csize": 1920405405,
"unique_size": 2449675468
}
},
@ -480,15 +449,14 @@ Refer to the *borg list* documentation for the available keys and their meaning.
Example (excerpt) of ``borg list --json-lines``::
{"type": "d", "mode": "drwxr-xr-x", "user": "user", "group": "user", "uid": 1000, "gid": 1000, "path": "linux", "target": "", "flags": null, "mtime": "2017-02-27T12:27:20.023407", "size": 0}
{"type": "d", "mode": "drwxr-xr-x", "user": "user", "group": "user", "uid": 1000, "gid": 1000, "path": "linux/baz", "target": "", "flags": null, "mtime": "2017-02-27T12:27:20.585407", "size": 0}
{"type": "d", "mode": "drwxr-xr-x", "user": "user", "group": "user", "uid": 1000, "gid": 1000, "path": "linux", "healthy": true, "source": "", "linktarget": "", "flags": null, "mtime": "2017-02-27T12:27:20.023407", "size": 0}
{"type": "d", "mode": "drwxr-xr-x", "user": "user", "group": "user", "uid": 1000, "gid": 1000, "path": "linux/baz", "healthy": true, "source": "", "linktarget": "", "flags": null, "mtime": "2017-02-27T12:27:20.585407", "size": 0}
Archive Differencing
++++++++++++++++++++
Each archive difference item (file contents, user/group/mode) output by :ref:`borg_diff` is represented by an *ItemDiff* object.
The properties of an *ItemDiff* object are:
The propertiese of an *ItemDiff* object are:
path:
The filename/path of the *Item* (file, directory, symlink).
@ -527,26 +495,26 @@ added:
removed:
See **added** property.
old_mode:
If **type** == '*mode*', then **old_mode** and **new_mode** provide the mode and permissions changes.
new_mode:
See **old_mode** property.
old_user:
If **type** == '*owner*', then **old_user**, **new_user**, **old_group** and **new_group** provide the user
and group ownership changes.
old_group:
See **old_user** property.
new_user:
See **old_user** property.
new_group:
See **old_user** property.
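
A small sketch of how these properties could be consumed (it assumes the per-item changes arrive as a list under a
``changes`` key, which is not spelled out in this excerpt; everything else follows the property names above):

::

  import json

  def describe(item_diff: dict) -> str:
      """Summarize one ItemDiff object from ``borg diff --json-lines``."""
      parts = []
      for change in item_diff.get("changes", []):
          ctype = change.get("type")
          if ctype == "mode":
              parts.append(f"mode {change['old_mode']} -> {change['new_mode']}")
          elif ctype == "owner":
              parts.append(f"owner {change['old_user']}:{change['old_group']} -> "
                           f"{change['new_user']}:{change['new_group']}")
          else:
              parts.append(str(ctype))
      return f"{item_diff['path']}: {', '.join(parts)}"

  line = '{"path": "file.txt", "changes": [{"type": "mode", "old_mode": "-rw-r--r--", "new_mode": "-rwxr-xr-x"}]}'
  print(describe(json.loads(line)))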
Example (excerpt) of ``borg diff --json-lines``::
@ -565,171 +533,92 @@ Message IDs are strings that essentially give a log message or operation a name,
full text, since texts change more frequently. Message IDs are unambiguous and reduce the need to parse
log messages.
Assigned message IDs and related error RCs (exit codes) are:
Assigned message IDs are:
.. See scripts/errorlist.py; this is slightly edited.
Errors
Error rc: 2 traceback: no
Error: {}
ErrorWithTraceback rc: 2 traceback: yes
Error: {}
Buffer.MemoryLimitExceeded rc: 2 traceback: no
Requested buffer size {} is above the limit of {}.
EfficientCollectionQueue.SizeUnderflow rc: 2 traceback: no
Could not pop_front first {} elements, collection only has {} elements..
RTError rc: 2 traceback: no
Runtime Error: {}
CancelledByUser rc: 3 traceback: no
Cancelled by user.
CommandError rc: 4 traceback: no
Command Error: {}
PlaceholderError rc: 5 traceback: no
Formatting Error: "{}".format({}): {}({})
InvalidPlaceholder rc: 6 traceback: no
Invalid placeholder "{}" in string: {}
Repository.AlreadyExists rc: 10 traceback: no
A repository already exists at {}.
Repository.CheckNeeded rc: 12 traceback: yes
Inconsistency detected. Please run "borg check {}".
Repository.DoesNotExist rc: 13 traceback: no
Repository {} does not exist.
Repository.InsufficientFreeSpaceError rc: 14 traceback: no
Insufficient free space to complete transaction (required: {}, available: {}).
Repository.InvalidRepository rc: 15 traceback: no
{} is not a valid repository. Check repo config.
Repository.InvalidRepositoryConfig rc: 16 traceback: no
{} does not have a valid configuration. Check repo config [{}].
Repository.ObjectNotFound rc: 17 traceback: yes
Object with key {} not found in repository {}.
Repository.ParentPathDoesNotExist rc: 18 traceback: no
The parent path of the repo directory [{}] does not exist.
Repository.PathAlreadyExists rc: 19 traceback: no
There is already something at {}.
Repository.PathPermissionDenied rc: 21 traceback: no
Permission denied to {}.
MandatoryFeatureUnsupported rc: 25 traceback: no
Unsupported repository feature(s) {}. A newer version of borg is required to access this repository.
NoManifestError rc: 26 traceback: no
Repository has no manifest.
UnsupportedManifestError rc: 27 traceback: no
Unsupported manifest envelope. A newer version is required to access this repository.
Archive.AlreadyExists rc: 30 traceback: no
Archive.AlreadyExists
Archive {} already exists
Archive.DoesNotExist rc: 31 traceback: no
Archive.DoesNotExist
Archive {} does not exist
Archive.IncompatibleFilesystemEncodingError rc: 32 traceback: no
Archive.IncompatibleFilesystemEncodingError
Failed to encode filename "{}" into file system encoding "{}". Consider configuring the LANG environment variable.
KeyfileInvalidError rc: 40 traceback: no
Invalid key data for repository {} found in {}.
KeyfileMismatchError rc: 41 traceback: no
Mismatch between repository {} and key file {}.
KeyfileNotFoundError rc: 42 traceback: no
No key file for repository {} found in {}.
NotABorgKeyFile rc: 43 traceback: no
This file is not a borg key backup, aborting.
RepoKeyNotFoundError rc: 44 traceback: no
No key entry found in the config of repository {}.
RepoIdMismatch rc: 45 traceback: no
This key backup seems to be for a different backup repository, aborting.
UnencryptedRepo rc: 46 traceback: no
Key management not available for unencrypted repositories.
UnknownKeyType rc: 47 traceback: no
Key type {0} is unknown.
UnsupportedPayloadError rc: 48 traceback: no
Unsupported payload type {}. A newer version is required to access this repository.
UnsupportedKeyFormatError rc: 49 traceback: no
Your borg key is stored in an unsupported format. Try using a newer version of borg.
NoPassphraseFailure rc: 50 traceback: no
can not acquire a passphrase: {}
PasscommandFailure rc: 51 traceback: no
passcommand supplied in BORG_PASSCOMMAND failed: {}
PassphraseWrong rc: 52 traceback: no
passphrase supplied in BORG_PASSPHRASE, by BORG_PASSCOMMAND or via BORG_PASSPHRASE_FD is incorrect.
PasswordRetriesExceeded rc: 53 traceback: no
exceeded the maximum password retries
Cache.CacheInitAbortedError rc: 60 traceback: no
Cache.CacheInitAbortedError
Cache initialization aborted
Cache.EncryptionMethodMismatch rc: 61 traceback: no
Cache.EncryptionMethodMismatch
Repository encryption method changed since last access, refusing to continue
Cache.RepositoryAccessAborted rc: 62 traceback: no
Cache.RepositoryAccessAborted
Repository access aborted
Cache.RepositoryIDNotUnique rc: 63 traceback: no
Cache.RepositoryIDNotUnique
Cache is newer than repository - do you have multiple, independently updated repos with same ID?
Cache.RepositoryReplay rc: 64 traceback: no
Cache, or information obtained from the security directory is newer than repository - this is either an attack or unsafe (multiple repos with same ID)
LockError rc: 70 traceback: no
Cache.RepositoryReplay
Cache is newer than repository - this is either an attack or unsafe (multiple repos with same ID)
Buffer.MemoryLimitExceeded
Requested buffer size {} is above the limit of {}.
ExtensionModuleError
The Borg binary extension modules do not seem to be properly installed
IntegrityError
Data integrity error: {}
NoManifestError
Repository has no manifest.
PlaceholderError
Formatting Error: "{}".format({}): {}({})
KeyfileInvalidError
Invalid key file for repository {} found in {}.
KeyfileMismatchError
Mismatch between repository {} and key file {}.
KeyfileNotFoundError
No key file for repository {} found in {}.
PassphraseWrong
passphrase supplied in BORG_PASSPHRASE is incorrect
PasswordRetriesExceeded
exceeded the maximum password retries
RepoKeyNotFoundError
No key entry found in the config of repository {}.
UnsupportedManifestError
Unsupported manifest envelope. A newer version is required to access this repository.
UnsupportedPayloadError
Unsupported payload type {}. A newer version is required to access this repository.
NotABorgKeyFile
This file is not a borg key backup, aborting.
RepoIdMismatch
This key backup seems to be for a different backup repository, aborting.
UnencryptedRepo
Keymanagement not available for unencrypted repositories.
UnknownKeyType
Keytype {0} is unknown.
LockError
Failed to acquire the lock {}.
LockErrorT rc: 71 traceback: yes
LockErrorT
Failed to acquire the lock {}.
LockFailed rc: 72 traceback: yes
Failed to create/acquire the lock {} ({}).
LockTimeout rc: 73 traceback: no
Failed to create/acquire the lock {} (timeout).
NotLocked rc: 74 traceback: yes
Failed to release the lock {} (was not locked).
NotMyLock rc: 75 traceback: yes
Failed to release the lock {} (was/is locked, but not by me).
ConnectionClosed rc: 80 traceback: no
ConnectionClosed
Connection closed by remote host
ConnectionClosedWithHint rc: 81 traceback: no
Connection closed by remote host. {}
InvalidRPCMethod rc: 82 traceback: no
InvalidRPCMethod
RPC method {} is not valid
PathNotAllowed rc: 83 traceback: no
Repository path not allowed: {}
RemoteRepository.RPCServerOutdated rc: 84 traceback: no
PathNotAllowed
Repository path not allowed
RemoteRepository.RPCServerOutdated
Borg server is too old for {}. Required version {}
UnexpectedRPCDataFormatFromClient rc: 85 traceback: no
UnexpectedRPCDataFormatFromClient
Borg {}: Got unexpected RPC data format from client.
UnexpectedRPCDataFormatFromServer rc: 86 traceback: no
UnexpectedRPCDataFormatFromServer
Got unexpected RPC data format from server:
{}
ConnectionBrokenWithHint rc: 87 traceback: no
Connection to remote host is broken. {}
IntegrityError rc: 90 traceback: yes
Data integrity error: {}
FileIntegrityError rc: 91 traceback: yes
File failed integrity check: {}
DecompressionError rc: 92 traceback: yes
Decompression error: {}
Warnings
BorgWarning rc: 1
Warning: {}
BackupWarning rc: 1
{}: {}
FileChangedWarning rc: 100
{}: file changed while we backed it up
IncludePatternNeverMatchedWarning rc: 101
Include pattern '{}' never matched.
BackupError rc: 102
{}: backup error
BackupRaceConditionError rc: 103
{}: file type or inode changed while we backed it up (race condition, skipped file)
BackupOSError rc: 104
{}: {}
BackupPermissionError rc: 105
{}: {}
BackupIOError rc: 106
{}: {}
BackupFileNotFoundError rc: 107
{}: {}
Repository.AlreadyExists
Repository {} already exists.
Repository.CheckNeeded
Inconsistency detected. Please run "borg check {}".
Repository.DoesNotExist
Repository {} does not exist.
Repository.InsufficientFreeSpaceError
Insufficient free space to complete transaction (required: {}, available: {}).
Repository.InvalidRepository
{} is not a valid repository. Check repo config.
Repository.AtticRepository
Attic repository detected. Please run "borg upgrade {}".
Repository.ObjectNotFound
Object with key {} not found in repository {}.
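
A frontend can map borg's exit code onto these categories, roughly like this (a sketch based on the rc values
listed above; the classic scheme only uses 0/1/2, specific codes appear in the ranges shown):

::

  def classify_rc(rc: int) -> str:
      """Rough classification of a borg exit code, based on the rc values above."""
      if rc == 0:
          return "success"
      if rc == 1 or 100 <= rc <= 127:
          return "warning"     # e.g. FileChangedWarning (rc 100)
      if 2 <= rc <= 99:
          return "error"       # e.g. CommandError (rc 4), LockTimeout (rc 73)
      return "unknown"

  print(classify_rc(73))   # -> error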
Operations
- cache.begin_transaction
@ -743,7 +632,6 @@ Operations
- repository.check
- check.verify_data
- check.rebuild_manifest
- check.rebuild_refcounts
- extract
*info* is one string element, the name of the path currently extracted.
@ -761,4 +649,4 @@ Prompts
BORG_CHECK_I_KNOW_WHAT_I_AM_DOING
For "This is a potentially dangerous function..." (check --repair)
BORG_DELETE_I_KNOW_WHAT_I_AM_DOING
For "You requested to DELETE the repository completely *including* all archives it contains:"
For "You requested to completely DELETE the repository *including* all archives it contains:"

Binary file not shown.

Binary file not shown.

Binary file not shown.

View file

@ -1,5 +1,3 @@
.. include:: ../global.rst.inc
.. somewhat surprisingly the "bash" highlighter gives nice results with
the pseudo-code notation used in the "Encryption" section.
@ -24,22 +22,25 @@ The attack model of Borg is that the environment of the client process
attacker has any and all access to the repository, including interactive
manipulation (man-in-the-middle) for remote repositories.
Furthermore, the client environment is assumed to be persistent across
Furthermore the client environment is assumed to be persistent across
attacks (practically this means that the security database cannot be
deleted between attacks).
Under these circumstances Borg guarantees that the attacker cannot
1. modify the data of any archive without the client detecting the change
2. rename or add an archive without the client detecting the change
2. rename, remove or add an archive without the client detecting the change
3. recover plain-text data
4. recover definite (heuristics based on access patterns are possible)
structural information such as the object graph (which archives
refer to what chunks)
The attacker can always impose a denial of service by definition (they could
block connections to the repository, or delete it partly or entirely).
The attacker can always impose a denial of service per definition (he could
forbid connections to the repository, or delete it entirely).
When the above attack model is extended to include multiple clients
independently updating the same repository, then Borg fails to provide
confidentiality (i.e. guarantees 3) and 4) do not apply any more).
.. _security_structural_auth:
@ -47,12 +48,12 @@ Structural Authentication
-------------------------
Borg is fundamentally based on an object graph structure (see :ref:`internals`),
where the root objects are the archives.
where the root object is called the manifest.
Borg follows the `Horton principle`_, which states that
not only the message must be authenticated, but also its meaning (often
expressed through context), because every object used is referenced by a
parent object through its object ID up to the archive list entry. The object ID in
parent object through its object ID up to the manifest. The object ID in
Borg is a MAC of the object's plaintext, therefore this ensures that
an attacker cannot change the context of an object without forging the MAC.
@ -60,45 +61,50 @@ In other words, the object ID itself only authenticates the plaintext of the
object and not its context or meaning. The latter is established by a different
object referring to an object ID, thereby assigning a particular meaning to
an object. For example, an archive item contains a list of object IDs that
represent packed file metadata. On their own, it's not clear that these objects
represent packed file metadata. On their own it's not clear that these objects
would represent what they do, but by the archive item referring to them
in a particular part of its own data structure assigns this meaning.
This results in a directed acyclic graph of authentication from the archive
list entry to the data chunks of individual files.
This results in a directed acyclic graph of authentication from the manifest
to the data chunks of individual files.
Above used to be all for borg 1.x and was the reason why it needed the
tertiary authentication mechanism (TAM) for manifest and archives.
.. _tam_description:
borg 2 now stores the ro_type ("meaning") of a repo object's data into that
object's metadata (like e.g.: manifest vs. archive vs. user file content data).
When loading data from the repo, borg verifies that the type of object it got
matches the type it wanted. borg 2 does not use TAMs any more.
.. rubric:: Authenticating the manifest
As both the object's metadata and data are AEAD encrypted and also bound to
the object ID (via giving the ID as AAD), there is no way an attacker (without
access to the borg key) could change the type of the object or move content
to a different object ID.
Since the manifest has a fixed ID (000...000) the aforementioned authentication
does not apply to it, indeed, cannot apply to it; it is impossible to authenticate
the root node of a DAG through its edges, since the root node has no incoming edges.
This effectively 'anchors' each archive to the key, which is controlled by the
client, thereby anchoring the DAG starting from the archives list entry,
making it impossible for an attacker to add or modify any part of the
DAG without Borg being able to detect the tampering.
With the scheme as described so far an attacker could easily replace the manifest,
therefore Borg includes a tertiary authentication mechanism (TAM) that is applied
to the manifest since version 1.0.9 (see :ref:`tam_vuln`).
Please note that removing an archive by removing an entry from archives/*
is possible and is done by ``borg delete`` and ``borg prune`` within their
normal operation. An attacker could also remove some entries there, but, due to
encryption, would not know what exactly they are removing. An attacker with
repository access could also remove other parts of the repository or the whole
repository, so there is not much point in protecting against archive removal.
TAM works by deriving a separate key through HKDF_ from the other encryption and
authentication keys and calculating the HMAC of the metadata to authenticate [#]_::
The borg 1.x way of having the archives list within the manifest chunk was
problematic as it required a read-modify-write operation on the manifest,
requiring a lock on the repository. We want to try less locking and more
parallelism in future.
# RANDOM(n) returns n random bytes
salt = RANDOM(64)
Passphrase notes
----------------
ikm = id_key || enc_key || enc_hmac_key
# *context* depends on the operation, for manifest authentication it is
# the ASCII string "borg-metadata-authentication-manifest".
tam_key = HKDF-SHA-512(ikm, salt, context)
# *data* is a dict-like structure
data[hmac] = zeroes
packed = pack(data)
data[hmac] = HMAC(tam_key, packed)
packed_authenticated = pack(data)
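For illustration, the same construction in Python (key names follow the pseudo-code above; msgpack and the HKDF from the cryptography package stand in for Borg's own packing and KDF code)::
import os, hmac, hashlib
import msgpack
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
def tam_authenticate(data, id_key, enc_key, enc_hmac_key):
    salt = os.urandom(64)                                   # RANDOM(64)
    ikm = id_key + enc_key + enc_hmac_key                   # id_key || enc_key || enc_hmac_key
    context = b"borg-metadata-authentication-manifest"
    tam_key = HKDF(algorithm=hashes.SHA512(), length=64, salt=salt, info=context).derive(ikm)
    data["hmac"] = bytes(64)                                # zeroes while the MAC is computed
    packed = msgpack.packb(data)
    data["hmac"] = hmac.new(tam_key, packed, hashlib.sha512).digest()
    return salt, msgpack.packb(data)                        # the salt is stored too, so a verifier can re-derive tam_key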
Since an attacker cannot gain access to this key and also cannot make the
client authenticate arbitrary data using this mechanism, the attacker is unable
to forge the authentication.
This effectively 'anchors' the manifest to the key, which is controlled by the
client, thereby anchoring the entire DAG, making it impossible for an attacker
to add, remove or modify any part of the DAG without Borg being able to detect
the tampering.
Note that when using BORG_PASSPHRASE the attacker cannot swap the *entire*
repository against a new repository with e.g. repokey mode and no passphrase,
@ -108,6 +114,11 @@ However, interactively a user might not notice this kind of attack
immediately, if she assumes that the reason for the absent passphrase
prompt is a set BORG_PASSPHRASE. See issue :issue:`2169` for details.
.. [#] The reason why the authentication tag is stored in the packed
data itself is that older Borg versions can still read the
manifest this way, while a changed layout would have broken
compatibility.
.. _security_encryption:
Encryption
@ -118,12 +129,12 @@ AEAD modes
Modes: --encryption (repokey|keyfile)-[blake2-](aes-ocb|chacha20-poly1305)
Supported: borg 2.0+
Supported: borg 1.3+
Encryption with these modes is based on AEAD ciphers (authenticated encryption
with associated data) and session keys.
Depending on the chosen mode (see :ref:`borg_repo-create`) different AEAD ciphers are used:
Depending on the chosen mode (see :ref:`borg_init`) different AEAD ciphers are used:
- AES-256-OCB - super fast, single-pass algorithm IF you have hw accelerated AES.
- chacha20-poly1305 - very fast, purely software based AEAD cipher.
@ -136,8 +147,7 @@ The chunk ID is derived via a MAC over the plaintext (mac key taken from borg ke
For each borg invocation, a new session id is generated by `os.urandom`_.
From that session id, the initial key material (ikm, taken from the borg key)
and an application and cipher specific salt, borg derives a session key using a
"one-step KDF" based on just sha256.
and an application and cipher specific salt, borg derives a session key via HKDF.
For each session key, IVs (nonces) are generated by a counter which increments for
each encrypted message.
@ -145,8 +155,9 @@ each encrypted message.
Session::
sessionid = os.urandom(24)
domain = "borg-session-key-CIPHERNAME"
sessionkey = sha256(crypt_key + sessionid + domain)
ikm = enc_key || enc_hmac_key
salt = "borg-session-key-CIPHERNAME"
sessionkey = HKDF(ikm, sessionid, salt)
message_iv = 0
Encryption::
@ -167,13 +178,13 @@ Decryption::
ASSERT(type-byte is correct)
domain = "borg-session-key-CIPHERNAME"
past_key = sha256(crypt_key + past_sessionid + domain)
past_key = HKDF(ikm, past_sessionid, salt)
decrypted = AEAD_decrypt(past_key, past_message_iv, authenticated)
decompressed = decompress(decrypted)
ASSERT( CONSTANT-TIME-COMPARISON( chunk-id, MAC(id_key, decompressed) ) )
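As a concrete sketch of the newer session-key scheme (chacha20-poly1305 via the cryptography package; crypt_key, the domain string and the associated data shown here are placeholders)::
import os
from hashlib import sha256
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
crypt_key = os.urandom(32)                          # placeholder for the key taken from the borg key
sessionid = os.urandom(24)
domain = b"borg-session-key-CHACHA20-POLY1305"      # illustrative CIPHERNAME
sessionkey = sha256(crypt_key + sessionid + domain).digest()
message_iv = 0
nonce = message_iv.to_bytes(12, "big")              # counter-based IV, incremented per message
aad = b"chunk-id-and-header-fields"                 # placeholder: Borg binds the object ID here
encrypted = ChaCha20Poly1305(sessionkey).encrypt(nonce, b"compressed data", aad)
past_key = sha256(crypt_key + sessionid + domain).digest()   # re-derived from the stored session id
assert ChaCha20Poly1305(past_key).decrypt(nonce, encrypted, aad) == b"compressed data"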
Notable:
- More modern and often faster AEAD ciphers instead of self-assembled stuff.
@ -190,13 +201,106 @@ Legacy modes
Modes: --encryption (repokey|keyfile)-[blake2]
Supported: borg < 2.0
Supported: all borg versions, blake2 since 1.1
These were the AES-CTR based modes in previous borg versions.
DEPRECATED. We strongly suggest you use the safer AEAD modes, see above.
borg 2.0 does not support creating new repos using these modes,
but ``borg transfer`` can still read such existing repos.
Encryption with these modes is based on the Encrypt-then-MAC construction,
which is generally seen as the most robust way to create an authenticated
encryption scheme from encryption and message authentication primitives.
Every operation (encryption, MAC / authentication, chunk ID derivation)
uses independent, random keys generated by `os.urandom`_.
Borg does not support unauthenticated encryption -- only authenticated encryption
schemes are supported. No unauthenticated encryption schemes will be added
in the future.
Depending on the chosen mode (see :ref:`borg_init`) different primitives are used:
- Legacy encryption modes use AES-256 in CTR mode. The
counter is added in plaintext, since it is needed for decryption,
and is also tracked locally on the client to avoid counter reuse.
- The authentication primitive is either HMAC-SHA-256 or BLAKE2b-256
in a keyed mode.
Both HMAC-SHA-256 and BLAKE2b have undergone extensive cryptanalysis
and have proven secure against known attacks. The known vulnerability
of SHA-256 against length extension attacks does not apply to HMAC-SHA-256.
The authentication primitive should be chosen based upon SHA hardware support.
With SHA hardware support, hmac-sha256 is likely to be much faster.
If no hardware support is provided, Blake2b-256 will outperform hmac-sha256.
To find out if you have SHA hardware support, use::
$ borg benchmark cpu
The output will include an evaluation of cryptographic hashes/MACs like::
Cryptographic hashes / MACs ====================================
hmac-sha256 1GB 0.436s
blake2b-256 1GB 1.579s
Based upon your output, choose the primitive that is faster (in the above
example, hmac-sha256 is much faster, which indicates SHA hardware support).
- The primitive used for authentication is always the same primitive
that is used for deriving the chunk ID, but they are always
used with independent keys.
Encryption::
id = AUTHENTICATOR(id_key, data)
compressed = compress(data)
iv = reserve_iv()
encrypted = AES-256-CTR(enc_key, 8-null-bytes || iv, compressed)
authenticated = type-byte || AUTHENTICATOR(enc_hmac_key, encrypted) || iv || encrypted
Decryption::
# Given: input *authenticated* data, possibly a *chunk-id* to assert
type-byte, mac, iv, encrypted = SPLIT(authenticated)
ASSERT(type-byte is correct)
ASSERT( CONSTANT-TIME-COMPARISON( mac, AUTHENTICATOR(enc_hmac_key, encrypted) ) )
decrypted = AES-256-CTR(enc_key, 8-null-bytes || iv, encrypted)
decompressed = decompress(decrypted)
ASSERT( CONSTANT-TIME-COMPARISON( chunk-id, AUTHENTICATOR(id_key, decompressed) ) )
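A compact Python sketch of this Encrypt-then-MAC construction (zlib stands in for the configured compressor, the type byte is arbitrary, and the IV bookkeeping is simplified)::
import os, hmac, hashlib, zlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
enc_key, enc_hmac_key, id_key = (os.urandom(32) for _ in range(3))
data = b"one chunk of file data"
chunk_id = hmac.new(id_key, data, hashlib.sha256).digest()     # AUTHENTICATOR(id_key, data)
compressed = zlib.compress(data)
iv = (42).to_bytes(8, "big")                                   # stand-in for reserve_iv()
ctr = bytes(8) + iv                                            # 8-null-bytes || iv
enc = Cipher(algorithms.AES(enc_key), modes.CTR(ctr)).encryptor()
encrypted = enc.update(compressed) + enc.finalize()
mac = hmac.new(enc_hmac_key, encrypted, hashlib.sha256).digest()
authenticated = b"\x02" + mac + iv + encrypted                 # type-byte || MAC || iv || encrypted
assert hmac.compare_digest(mac, hmac.new(enc_hmac_key, encrypted, hashlib.sha256).digest())
dec = Cipher(algorithms.AES(enc_key), modes.CTR(ctr)).decryptor()
decompressed = zlib.decompress(dec.update(encrypted) + dec.finalize())
assert hmac.compare_digest(chunk_id, hmac.new(id_key, decompressed, hashlib.sha256).digest())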
The client needs to track which counter values have been used, since
encrypting a chunk requires a starting counter value and no two chunks
may have overlapping counter ranges (otherwise the bitwise XOR of the
overlapping plaintexts is revealed).
The client does not directly track the counter value, because it
changes often (with each encrypted chunk), instead it commits a
"reservation" to the security database and the repository by taking
the current counter value and adding 4 GiB / 16 bytes (the block size)
to the counter. Thus the client only needs to commit a new reservation
every few gigabytes of encrypted data.
This mechanism also avoids reusing counter values in case the client
crashes or the connection to the repository is severed, since any
reservation would have been committed to both the security database
and the repository before any data is encrypted. Borg uses its
standard mechanism (SaveFile) to ensure that reservations are durable
(on most hardware / storage systems), therefore a crash of the
client's host would not impact tracking of reservations.
However, this design is not infallible, and requires synchronization
between clients, which is handled through the repository. Therefore in
a multiple-client scenario a repository can trick a client into
reusing counter values by ignoring counter reservations and replaying
the manifest (which will fail if the client has seen a more recent
manifest or has a more recent nonce reservation). If the repository is
untrusted, but a trusted synchronization channel exists between
clients, the security database could be synchronized between them over
said trusted channel. This is not part of Borg's functionality.
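The reservation arithmetic can be pictured with a toy model (names and persistence are hypothetical; Borg's real code also reconciles the local value with the nonce stored in the repository)::
AES_BLOCK = 16
RESERVATION = (4 * 1024**3) // AES_BLOCK           # 4 GiB worth of counter values per reservation
def persist(upper_bound):
    pass                                           # stand-in for a durable (SaveFile-like) write
class NonceReservation:
    def __init__(self, persisted_upper_bound=0):
        self.counter = persisted_upper_bound       # after a crash, restart above the last reservation
        self.reserved = persisted_upper_bound
    def reserve(self, nbytes):
        blocks = -(-nbytes // AES_BLOCK)           # AES blocks needed for this message (ceil)
        if self.counter + blocks > self.reserved:
            self.reserved = self.counter + blocks + RESERVATION
            persist(self.reserved)                 # committed *before* any data is encrypted
        start = self.counter
        self.counter += blocks
        return start                               # starting counter value for this chunk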
.. _key_encryption:
@ -210,23 +314,32 @@ For offline storage of the encryption keys they are encrypted with a
user-chosen passphrase.
A 256 bit key encryption key (KEK) is derived from the passphrase
using argon2_ with a random 256 bit salt. The KEK is then used
to Encrypt-*then*-MAC a packed representation of the keys using the
chacha20-poly1305 AEAD cipher and a constant IV == 0.
The ciphertext is then converted to base64.
using PBKDF2-HMAC-SHA256 with a random 256 bit salt which is then used
to Encrypt-*and*-MAC (unlike the Encrypt-*then*-MAC approach used
otherwise) a packed representation of the keys with AES-256-CTR with a
constant initialization vector of 0. A HMAC-SHA256 of the plaintext is
generated using the same KEK and is stored alongside the ciphertext,
which is converted to base64 in its entirety.
This base64 blob (commonly referred to as *keyblob*) is then stored in
the key file or in the repository config (keyfile and repokey modes
respectively).
The use of a constant IV is secure because an identical passphrase will
result in a different derived KEK for every key encryption due to the salt.
This scheme, and specifically the use of a constant IV with the CTR
mode, is secure because an identical passphrase will result in a
different derived KEK for every key encryption due to the salt.
The use of Encrypt-and-MAC instead of Encrypt-then-MAC is seen as
uncritical (but not ideal) here, since it is combined with AES-CTR mode,
which is not vulnerable to padding attacks.
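A sketch of the newer argon2 + chacha20-poly1305 variant in Python (argon2 via argon2-cffi, the AEAD via the cryptography package; the argon2 parameters and the keyblob layout shown are placeholders, not Borg's actual defaults)::
import os, base64
import msgpack
from argon2.low_level import hash_secret_raw, Type
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
def encrypt_keyblob(passphrase, packed_keys):
    salt = os.urandom(32)                                          # random 256 bit salt
    kek = hash_secret_raw(passphrase.encode(), salt, time_cost=3,  # placeholder argon2 parameters
                          memory_cost=65536, parallelism=4, hash_len=32, type=Type.ID)
    iv = bytes(12)                                                 # constant IV == 0 is safe here: fresh KEK per salt
    ciphertext = ChaCha20Poly1305(kek).encrypt(iv, packed_keys, None)
    return base64.b64encode(msgpack.packb({"salt": salt, "data": ciphertext}))
keyblob = encrypt_keyblob("a strong passphrase", msgpack.packb({"enc_key": os.urandom(32)}))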
.. seealso::
Refer to the :ref:`key_files` section for details on the format.
Refer to issue :issue:`747` for suggested improvements of the encryption
scheme and password-based key derivation.
Implementations used
--------------------
@ -234,16 +347,30 @@ Implementations used
We do not implement cryptographic primitives ourselves, but rely
on widely used libraries providing them:
- AES-OCB and CHACHA20-POLY1305 from OpenSSL 1.1 are used,
- AES-CTR, AES-OCB, CHACHA20-POLY1305 and HMAC-SHA-256 from OpenSSL 1.1 are used,
which is also linked into the static binaries we provide.
We think this is not an additional risk, since we don't ever
use OpenSSL's networking, TLS or X.509 code, but only their
primitives implemented in libcrypto.
- SHA-256, SHA-512 and BLAKE2b from Python's hashlib_ standard library module are used.
- HMAC and a constant-time comparison from Python's hmac_ standard library module are used.
- argon2 is used via argon2-cffi.
Borg requires a Python built with OpenSSL support (due to PBKDF2), therefore
these functions are delegated to OpenSSL by Python.
- HMAC, PBKDF2 and a constant-time comparison from Python's hmac_ standard
library module is used. While the HMAC implementation is written in Python,
the PBKDF2 implementation is provided by OpenSSL. The constant-time comparison
(``compare_digest``) is written in C and part of Python.
Implemented cryptographic constructions are:
- AEAD modes: AES-OCB and CHACHA20-POLY1305 are straight from OpenSSL.
- Legacy modes: Encrypt-then-MAC based on AES-256-CTR and either HMAC-SHA-256
or keyed BLAKE2b256 as described above under Encryption_.
- Encrypt-and-MAC based on AES-256-CTR and HMAC-SHA-256
as described above under `Offline key security`_.
- HKDF_-SHA-512
.. _Horton principle: https://en.wikipedia.org/wiki/Horton_Principle
.. _HKDF: https://tools.ietf.org/html/rfc5869
.. _length extension: https://en.wikipedia.org/wiki/Length_extension_attack
.. _hashlib: https://docs.python.org/3/library/hashlib.html
.. _hmac: https://docs.python.org/3/library/hmac.html
@ -264,7 +391,7 @@ SSH server -- Borg RPC does not contain *any* networking
code. Networking is done by the SSH client running in a separate
process, Borg only communicates over the standard pipes (stdout,
stderr and stdin) with this process. This also means that Borg doesn't
have to use a SSH client directly (or SSH at all). For example,
have to directly use a SSH client (or SSH at all). For example,
``sudo`` or ``qrexec`` could be used as an intermediary.
By using the system's SSH client and not implementing a
@ -341,12 +468,13 @@ Compression and Encryption
Combining encryption with compression can be insecure in some contexts (e.g. online protocols).
There was some discussion about this in :issue:`1040` and for Borg some developers
There was some discussion about this in `github issue #1040`_ and for Borg some developers
concluded this is no problem at all, some concluded this is hard and extremely slow to exploit
and thus no problem in practice.
No matter what, there is always the option not to use compression if you are worried about this.
.. _github issue #1040: https://github.com/borgbackup/borg/issues/1040
Fingerprinting
==============
@ -361,25 +489,19 @@ The chunks stored in the repo are the (compressed, encrypted and authenticated)
output of the chunker. The sizes of these stored chunks are influenced by the
compression, encryption and authentication.
buzhash and buzhash64 chunker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
buzhash chunker
+++++++++++++++
The buzhash chunkers chunk according to the input data, the chunker's
parameters and secret key material (which all influence the chunk boundary
The buzhash chunker chunks according to the input data, the chunker's
parameters and the secret chunker seed (which all influence the chunk boundary
positions).
Secret key material:
- "buzhash": chunker seed (32bits), used for XORing the hardcoded buzhash table
- "buzhash64": bh64_key (256bits) is derived from ID key, used to cryptographically
generate the table.
Small files below some specific threshold (default: 512 KiB) result in only one
chunk (identical content / size as the original file), bigger files result in
multiple chunks.
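To illustrate the principle (not Borg's actual buzhash implementation, which uses a rotating hash over a larger sliding window and Borg-specific min/max sizes), a toy content-defined chunker with a keyed rolling hash; the seed plays the role of the secret key material described above::
import os
from collections import deque
def toy_cdc(data, seed, window=64, mask=(1 << 20) - 1, min_size=4096):
    table = [((i * 2654435761) ^ seed) & 0xFFFFFFFF for i in range(256)]   # keyed byte -> hash table
    win, chunks, start, h = deque(maxlen=window), [], 0, 0
    for i, byte in enumerate(data):
        if len(win) == window:
            h -= table[win[0]]                     # drop the byte leaving the sliding window
        win.append(byte)
        h += table[byte]
        if i - start + 1 >= min_size and (h & mask) == 0:
            chunks.append(data[start:i + 1])       # cut point depends only on nearby content
            start, h = i + 1, 0
            win.clear()
    chunks.append(data[start:])                    # remainder; small inputs end up as one chunk
    return chunks
parts = toy_cdc(os.urandom(1 << 20), seed=0x5EED)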
fixed chunker
~~~~~~~~~~~~~
+++++++++++++
This chunker yields fixed sized chunks, with optional support of a differently
sized header chunk. The last chunk is not required to have the full block size

Binary file not shown.

Binary file not shown.

Image diff not shown (before: 151 KiB, after: 197 KiB).

Binary file not shown.

View file

@ -1,8 +1,8 @@
Introduction
============
.. This shim is here to fix the structure in the PDF
rendering. Without this stub, the elements in the toctree of
index.rst show up a level below the README file included.
.. this shim is here to fix the structure in the PDF
rendering. without this stub, the elements in the toctree of
index.rst show up a level below the README file included
.. include:: ../README.rst

View file

@ -1,91 +0,0 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-ANALYZE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-analyze \- Analyzes archives.
.SH SYNOPSIS
.sp
borg [common options] analyze [options]
.SH DESCRIPTION
.sp
Analyze archives to find \(dqhot spots\(dq.
.sp
\fBborg analyze\fP relies on the usual archive matching options to select the
archives that should be considered for analysis (e.g. \fB\-a series_name\fP).
Then it iterates over all matching archives, over all contained files, and
collects information about chunks stored in all directories it encounters.
.sp
It considers chunk IDs and their plaintext sizes (we do not have the compressed
size in the repository easily available) and adds up the sizes of added and removed
chunks per direct parent directory, and outputs a list of \(dqdirectory: size\(dq.
.sp
You can use that list to find directories with a lot of \(dqactivity\(dq — maybe
some of these are temporary or cache directories you forgot to exclude.
.sp
To avoid including these unwanted directories in your backups, you can carefully
exclude them in \fBborg create\fP (for future backups) or use \fBborg recreate\fP
to recreate existing archives without them.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-BENCHMARK-CPU 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-benchmark-cpu \- Benchmark CPU bound operations.
.
.nr rst2man-indent-level 0
.
@ -27,24 +30,17 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-BENCHMARK-CPU" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-benchmark-cpu \- Benchmark CPU-bound operations.
.SH SYNOPSIS
.sp
borg [common options] benchmark cpu [options]
.SH DESCRIPTION
.sp
This command benchmarks miscellaneous CPU\-bound Borg operations.
This command benchmarks misc. CPU bound borg operations.
.sp
It creates input data in memory, runs the operation and then displays throughput.
To reduce outside influence on the timings, please make sure to run this with:
.INDENT 0.0
.IP \(bu 2
an otherwise as idle as possible machine
.IP \(bu 2
enough free memory so there will be no slow down due to paging activity
.UNINDENT
\- an otherwise as idle as possible machine
\- enough free memory so there will be no slow down due to paging activity
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-BENCHMARK-CRUD 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-benchmark-crud \- Benchmark Create, Read, Update, Delete for archives.
.
.nr rst2man-indent-level 0
.
@ -27,29 +30,28 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-BENCHMARK-CRUD" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-benchmark-crud \- Benchmark Create, Read, Update, Delete for archives.
.SH SYNOPSIS
.sp
borg [common options] benchmark crud [options] PATH
borg [common options] benchmark crud [options] REPOSITORY PATH
.SH DESCRIPTION
.sp
This command benchmarks borg CRUD (create, read, update, delete) operations.
.sp
It creates input data below the given PATH and backs up this data into the given REPO.
It creates input data below the given PATH and backups this data into the given REPO.
The REPO must already exist (it could be a fresh empty repo or an existing repo, the
command will create / read / update / delete some archives named borg\-benchmark\-crud* there).
.sp
Make sure you have free space there; you will need about 1 GB each (+ overhead).
Make sure you have free space there, you\(aqll need about 1GB each (+ overhead).
.sp
If your repository is encrypted and borg needs a passphrase to unlock the key, use:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
.nf
.ft C
BORG_PASSPHRASE=mysecret borg benchmark crud REPO PATH
.EE
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@ -86,8 +88,11 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B REPOSITORY
repository to use for benchmark (must exist)
.TP
.B PATH
path where to create benchmark input data
path were to create benchmark input data
.UNINDENT
.SH SEE ALSO
.sp

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-BENCHMARK 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-benchmark \- benchmark command
.
.nr rst2man-indent-level 0
.
@ -27,9 +30,6 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-BENCHMARK" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-benchmark \- benchmark command
.SH SYNOPSIS
.nf
borg [common options] benchmark crud ...

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-BREAK-LOCK 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-break-lock \- Break the repository lock (e.g. in case it was left by a dead borg.
.
.nr rst2man-indent-level 0
.
@ -27,20 +30,23 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-BREAK-LOCK" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-break-lock \- Breaks the repository lock (for example, if it was left by a dead Borg process).
.SH SYNOPSIS
.sp
borg [common options] break\-lock [options]
borg [common options] break\-lock [options] [REPOSITORY]
.SH DESCRIPTION
.sp
This command breaks the repository and cache locks.
Use with care and only when no Borg process (on any machine) is
trying to access the cache or the repository.
Please use carefully and only while no borg process (on any machine) is
trying to access the Cache or the Repository.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B REPOSITORY
repository for which to break the locks
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-CHANGE-PASSPHRASE 1 "2017-11-25" "" "borg backup tool"
.SH NAME
borg-change-passphrase \- Change repository key file passphrase
.
.nr rst2man-indent-level 0
.
@ -27,42 +30,24 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-REPO-INFO" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-repo-info \- Show repository information.
.SH SYNOPSIS
.sp
borg [common options] repo\-info [options]
borg [common options] change\-passphrase [options] [REPOSITORY]
.SH DESCRIPTION
.sp
This command displays detailed information about the repository.
The key files used for repository encryption are optionally passphrase
protected. This command can be used to change this passphrase.
.sp
Please note that this command only changes the passphrase, but not any
secret protected by it (like e.g. encryption/MAC keys or chunker seed).
Thus, changing the passphrase after passphrase and borg key got compromised
does not protect future (nor past) backups to the same repository.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS options
.INDENT 0.0
.TP
.B \-\-json
format output as JSON
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.SS arguments
.sp
.EX
$ borg repo\-info
Repository ID: 0e85a7811022326c067acb2a7181d5b526b7d2f61b34470fb8670c440a67f1a9
Location: /Users/tw/w/borg/path/to/repo
Encrypted: Yes (repokey AES\-OCB)
Cache: /Users/tw/.cache/borg/0e85a7811022326c067acb2a7181d5b526b7d2f61b34470fb8670c440a67f1a9
Security dir: /Users/tw/.config/borg/security/0e85a7811022326c067acb2a7181d5b526b7d2f61b34470fb8670c440a67f1a9
Original size: 152.14 MB
Deduplicated size: 30.38 MB
Unique chunks: 654
Total chunks: 3302
.EE
.UNINDENT
.UNINDENT
REPOSITORY
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-CHECK 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-check \- Check repository consistency
.
.nr rst2man-indent-level 0
.
@ -27,172 +30,146 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-CHECK" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-check \- Checks repository consistency.
.SH SYNOPSIS
.sp
borg [common options] check [options]
borg [common options] check [options] [REPOSITORY_OR_ARCHIVE]
.SH DESCRIPTION
.sp
The check command verifies the consistency of a repository and its archives.
It consists of two major steps:
.INDENT 0.0
.IP 1. 3
Checking the consistency of the repository itself. This includes checking
the file magic headers, and both the metadata and data of all objects in
the repository. The read data is checked by size and hash. Bit rot and other
types of accidental damage can be detected this way. Running the repository
check can be split into multiple partial checks using \fB\-\-max\-duration\fP\&.
When checking an <ssh://> remote repository, please note that the checks run on
the server and do not cause significant network traffic.
.IP 2. 3
Checking consistency and correctness of the archive metadata and optionally
archive data (requires \fB\-\-verify\-data\fP). This includes ensuring that the
repository manifest exists, the archive metadata chunk is present, and that
all chunks referencing files (items) in the archive exist. This requires
reading archive and file metadata, but not data. To scan for archives whose
entries were lost from the archive directory, pass \fB\-\-find\-lost\-archives\fP\&.
It requires reading all data and is hence very time\-consuming.
To additionally cryptographically verify the file (content) data integrity,
pass \fB\-\-verify\-data\fP, which is even more time\-consuming.
The check command verifies the consistency of a repository and the corresponding archives.
.sp
When checking archives of a remote repository, archive checks run on the client
machine because they require decrypting data and therefore the encryption key.
.UNINDENT
.sp
Both steps can also be run independently. Pass \fB\-\-repository\-only\fP to run the
repository checks only, or pass \fB\-\-archives\-only\fP to run the archive checks
only.
.sp
The \fB\-\-max\-duration\fP option can be used to split a long\-running repository
check into multiple partial checks. After the given number of seconds, the check
is interrupted. The next partial check will continue where the previous one
stopped, until the full repository has been checked. Assuming a complete check
would take 7 hours, then running a daily check with \fB\-\-max\-duration=3600\fP
(1 hour) would result in one full repository check per week. Doing a full
repository check aborts any previous partial check; the next partial check will
restart from the beginning. With partial repository checks you can run neither
archive checks, nor enable repair mode. Consequently, if you want to use
\fB\-\-max\-duration\fP you must also pass \fB\-\-repository\-only\fP, and must not pass
\fB\-\-archives\-only\fP, nor \fB\-\-repair\fP\&.
.sp
\fBWarning:\fP Please note that partial repository checks (i.e., running with
\fB\-\-max\-duration\fP) can only perform non\-cryptographic checksum checks on the
repository files. Enabling partial repository checks excludes archive checks
for the same reason. Therefore, partial checks may be useful only with very large
repositories where a full check would take too long.
.sp
The \fB\-\-verify\-data\fP option will perform a full integrity verification (as
opposed to checking just the xxh64) of data, which means reading the
data from the repository, decrypting and decompressing it. It is a complete
cryptographic verification and hence very time\-consuming, but will detect any
accidental and malicious corruption. Tamper\-resistance is only guaranteed for
encrypted repositories against attackers without access to the keys. You cannot
use \fB\-\-verify\-data\fP with \fB\-\-repository\-only\fP\&.
.sp
The \fB\-\-find\-lost\-archives\fP option will also scan the whole repository, but
tells Borg to search for lost archive metadata. If Borg encounters any archive
metadata that does not match an archive directory entry (including
soft\-deleted archives), it means that an entry was lost.
Unless \fBborg compact\fP is called, these archives can be fully restored with
\fB\-\-repair\fP\&. Please note that \fB\-\-find\-lost\-archives\fP must read a lot of
data from the repository and is thus very time\-consuming. You cannot use
\fB\-\-find\-lost\-archives\fP with \fB\-\-repository\-only\fP\&.
.SS About repair mode
.sp
The check command is a read\-only task by default. If any corruption is found,
Borg will report the issue and proceed with checking. To actually repair the
issues found, pass \fB\-\-repair\fP\&.
.sp
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
\fB\-\-repair\fP is a \fBPOTENTIALLY DANGEROUS FEATURE\fP and might lead to data
loss! This does not just include data that was previously lost anyway, but
might include more data for kinds of corruption it is not capable of
dealing with. \fBBE VERY CAREFUL!\fP
.UNINDENT
.UNINDENT
check \-\-repair is a potentially dangerous function and might lead to data loss
(for kinds of corruption it is not capable of dealing with). BE VERY CAREFUL!
.sp
Pursuant to the previous warning it is also highly recommended to test the
reliability of the hardware running Borg with stress testing software. This
especially includes storage and memory testers. Unreliable hardware might lead
to additional data loss.
reliability of the hardware running this software with stress testing software
such as memory testers. Unreliable hardware can also lead to data loss especially
when this command is run in repair mode.
.sp
It is highly recommended to create a backup of your repository before running
in repair mode (i.e. running it with \fB\-\-repair\fP).
.sp
Repair mode will attempt to fix any corruptions found. Fixing corruptions does
not mean recovering lost data: Borg cannot magically restore data lost due to
e.g. a hardware failure. Repairing a repository means sacrificing some data
for the sake of the repository as a whole and the remaining data. Hence it is,
by definition, a potentially lossy task.
.sp
In practice, repair mode hooks into both the repository and archive checks:
First, the underlying repository data files are checked:
.INDENT 0.0
.IP 1. 3
When checking the repository\(aqs consistency, repair mode removes corrupted
objects from the repository after it did a 2nd try to read them correctly.
.IP 2. 3
When checking the consistency and correctness of archives, repair mode might
remove whole archives from the manifest if their archive metadata chunk is
corrupt or lost. Borg will also report files that reference missing chunks.
.IP \(bu 2
For all segments, the segment magic header is checked.
.IP \(bu 2
For all objects stored in the segments, all metadata (e.g. CRC and size) and
all data is read. The read data is checked by size and CRC. Bit rot and other
types of accidental damage can be detected this way.
.IP \(bu 2
In repair mode, if an integrity error is detected in a segment, try to recover
as many objects from the segment as possible.
.IP \(bu 2
In repair mode, make sure that the index is consistent with the data stored in
the segments.
.IP \(bu 2
If checking a remote repo via \fBssh:\fP, the repo check is executed on the server
without causing significant network traffic.
.IP \(bu 2
The repository check can be skipped using the \fB\-\-archives\-only\fP option.
.IP \(bu 2
A repository check can be time consuming. Partial checks are possible with the
\fB\-\-max\-duration\fP option.
.UNINDENT
.sp
If \fB\-\-repair \-\-find\-lost\-archives\fP is given, previously lost entries will
be recreated in the archive directory. This is only possible before
\fBborg compact\fP would remove the archives\(aq data completely.
Second, the consistency and correctness of the archive metadata is verified:
.INDENT 0.0
.IP \(bu 2
Is the repo manifest present? If not, it is rebuilt from archive metadata
chunks (this requires reading and decrypting of all metadata and data).
.IP \(bu 2
Check if archive metadata chunk is present; if not, remove archive from manifest.
.IP \(bu 2
For all files (items) in the archive, for all chunks referenced by these
files, check if chunk is present. In repair mode, if a chunk is not present,
replace it with a same\-size replacement chunk of zeroes. If a previously lost
chunk reappears (e.g. via a later backup), in repair mode the all\-zero replacement
chunk will be replaced by the correct chunk. This requires reading of archive and
file metadata, but not data.
.IP \(bu 2
In repair mode, when all the archives were checked, orphaned chunks are deleted
from the repo. One cause of orphaned chunks are input file related errors (like
read errors) in the archive creation process.
.IP \(bu 2
In verify\-data mode, a complete cryptographic verification of the archive data
integrity is performed. This conflicts with \fB\-\-repository\-only\fP as this mode
only makes sense if the archive checks are enabled. The full details of this mode
are documented below.
.IP \(bu 2
If checking a remote repo via \fBssh:\fP, the archive check is executed on the
client machine because it requires decryption, and this is always done client\-side
as key access is needed.
.IP \(bu 2
The archive checks can be time consuming; they can be skipped using the
\fB\-\-repository\-only\fP option.
.UNINDENT
.sp
The \fB\-\-max\-duration\fP option can be used to split a long\-running repository check
into multiple partial checks. After the given number of seconds the check is
interrupted. The next partial check will continue where the previous one stopped,
until the complete repository has been checked. Example: Assuming a complete check took 7
hours, then running a daily check with \-\-max\-duration=3600 (1 hour) resulted in one
completed check per week.
.sp
Attention: A partial \-\-repository\-only check can only do way less checking than a full
\-\-repository\-only check: only the non\-cryptographic checksum checks on segment file
entries are done, while a full \-\-repository\-only check would also do a repo index check.
A partial check cannot be combined with the \fB\-\-repair\fP option. Partial checks
may therefore be useful only with very large repositories where a full check would take
too long.
Doing a full repository check aborts a partial check; the next partial check will restart
from the beginning.
.sp
The \fB\-\-verify\-data\fP option will perform a full integrity verification (as opposed to
checking the CRC32 of the segment) of data, which means reading the data from the
repository, decrypting and decompressing it. This is a cryptographic verification,
which will detect (accidental) corruption. For encrypted repositories it is
tamper\-resistant as well, unless the attacker has access to the keys. It is also very
slow.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS options
.SS arguments
.INDENT 0.0
.TP
.B \-\-repository\-only
.B REPOSITORY_OR_ARCHIVE
repository or archive to check consistency of
.UNINDENT
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-repository\-only
only perform repository checks
.TP
.B \-\-archives\-only
only perform archive checks
.B \-\-archives\-only
only perform archives checks
.TP
.B \-\-verify\-data
.B \-\-verify\-data
perform cryptographic archive data integrity verification (conflicts with \fB\-\-repository\-only\fP)
.TP
.B \-\-repair
.B \-\-repair
attempt to repair any inconsistencies found
.TP
.B \-\-find\-lost\-archives
attempt to find lost archives
.B \-\-save\-space
work slower, but using less space
.TP
.BI \-\-max\-duration \ SECONDS
perform only a partial repository check for at most SECONDS seconds (default: unlimited)
do only a partial repo check for max. SECONDS seconds (Default: unlimited)
.UNINDENT
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archive names starting with this prefix.
.TP
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp
Comma\-separated list of sorting keys; valid keys are: timestamp, name, id; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied
consider first N archives after other filters were applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
consider last N archives after other filters were applied
.UNINDENT
.SH SEE ALSO
.sp

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-COMMON 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-common \- Common options of Borg commands
.
.nr rst2man-indent-level 0
.
@ -27,74 +30,77 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-COMMON" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-common \- Common options of Borg commands
.SH SYNOPSIS
.INDENT 0.0
.TP
.B \-h\fP,\fB \-\-help
.B \-h\fP,\fB \-\-help
show this help message and exit
.TP
.B \-\-critical
.B \-\-critical
work on log level CRITICAL
.TP
.B \-\-error
.B \-\-error
work on log level ERROR
.TP
.B \-\-warning
.B \-\-warning
work on log level WARNING (default)
.TP
.B \-\-info\fP,\fB \-v\fP,\fB \-\-verbose
.B \-\-info\fP,\fB \-v\fP,\fB \-\-verbose
work on log level INFO
.TP
.B \-\-debug
.B \-\-debug
enable debug output, work on log level DEBUG
.TP
.BI \-\-debug\-topic \ TOPIC
enable TOPIC debugging (can be specified multiple times). The logger path is borg.debug.<TOPIC> if TOPIC is not fully qualified.
.TP
.B \-p\fP,\fB \-\-progress
.B \-p\fP,\fB \-\-progress
show progress information
.TP
.B \-\-iec
.B \-\-iec
format using IEC units (1KiB = 1024B)
.TP
.B \-\-log\-json
.B \-\-log\-json
Output one JSON object per log line instead of formatted text.
.TP
.BI \-\-lock\-wait \ SECONDS
wait at most SECONDS for acquiring a repository/cache lock (default: 10).
wait at most SECONDS for acquiring a repository/cache lock (default: 1).
.TP
.B \-\-show\-version
.B \-\-bypass\-lock
Bypass locking mechanism
.TP
.B \-\-show\-version
show/log the borg version
.TP
.B \-\-show\-rc
.B \-\-show\-rc
show/log the return code (rc)
.TP
.BI \-\-umask \ M
set umask to M (local only, default: 0077)
.TP
.BI \-\-remote\-path \ PATH
use PATH as borg executable on the remote (default: \(dqborg\(dq)
use PATH as borg executable on the remote (default: "borg")
.TP
.BI \-\-remote\-ratelimit \ RATE
deprecated, use \fB\-\-upload\-ratelimit\fP instead
.TP
.BI \-\-upload\-ratelimit \ RATE
set network upload rate limit in kiByte/s (default: 0=unlimited)
.TP
.BI \-\-remote\-buffer \ UPLOAD_BUFFER
deprecated, use \fB\-\-upload\-buffer\fP instead
.TP
.BI \-\-upload\-buffer \ UPLOAD_BUFFER
set network upload buffer size in MiB. (default: 0=no buffer)
.TP
.B \-\-consider\-part\-files
treat part files like normal files (e.g. to list/extract them)
.TP
.BI \-\-debug\-profile \ FILE
Write execution profile in Borg format into FILE. For local use a Python\-compatible file can be generated by suffixing FILE with \(dq.pyprof\(dq.
Write execution profile in Borg format into FILE. For local use a Python\-compatible file can be generated by suffixing FILE with ".pyprof".
.TP
.BI \-\-rsh \ RSH
Use this command to connect to the \(aqborg serve\(aq process (default: \(aqssh\(aq)
.TP
.BI \-\-socket \ PATH
Use UNIX DOMAIN (IPC) socket at PATH for client/server communication with socket: protocol.
.TP
.BI \-r \ REPO\fR,\fB \ \-\-repo \ REPO
repository to use
.UNINDENT
.SH SEE ALSO
.sp

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-COMPACT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-compact \- compact segment files in the repository
.
.nr rst2man-indent-level 0
.
@ -27,77 +30,64 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-COMPACT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-compact \- Collects garbage in the repository.
.SH SYNOPSIS
.sp
borg [common options] compact [options]
borg [common options] compact [options] [REPOSITORY]
.SH DESCRIPTION
.sp
Free repository space by deleting unused chunks.
This command frees repository space by compacting segments.
.sp
\fBborg compact\fP analyzes all existing archives to determine which repository
objects are actually used (referenced). It then deletes all unused objects
from the repository to free space.
Use this regularly to avoid running out of space \- you do not need to use this
after each borg command though. It is especially useful after deleting archives,
because only compaction will really free repository space.
.sp
Unused objects may result from:
.INDENT 0.0
.IP \(bu 2
use of \fBborg delete\fP or \fBborg prune\fP
.IP \(bu 2
interrupted backups (consider retrying the backup before running compact)
.IP \(bu 2
backups of source files that encountered an I/O error mid\-transfer and were skipped
.IP \(bu 2
corruption of the repository (e.g., the archives directory lost entries; see notes below)
.UNINDENT
borg compact does not need a key, so it is possible to invoke it from the
client or also from the server.
.sp
You usually do not want to run \fBborg compact\fP after every write operation, but
either regularly (e.g., once a month, possibly together with \fBborg check\fP) or
when disk space needs to be freed.
Depending on the amount of segments that need compaction, it may take a while,
so consider using the \fB\-\-progress\fP option.
.sp
\fBImportant:\fP
A segment is compacted if the amount of saved space is above the percentage value
given by the \fB\-\-threshold\fP option. If omitted, a threshold of 10% is used.
When using \fB\-\-verbose\fP, borg will output an estimate of the freed space.
.sp
After compacting, it is no longer possible to use \fBborg undelete\fP to recover
previously soft\-deleted archives.
After upgrading borg (server) to 1.2+, you can use \fBborg compact \-\-cleanup\-commits\fP
to clean up the numerous 17byte commit\-only segments that borg 1.1 did not clean up
due to a bug. It is enough to do that once per repository. After cleaning up the
commits, borg will also do a normal compaction.
.sp
\fBborg compact\fP might also delete data from archives that were \(dqlost\(dq due to
archives directory corruption. Such archives could potentially be restored with
\fBborg check \-\-find\-lost\-archives [\-\-repair]\fP, which is slow. You therefore
might not want to do that unless there are signs of lost archives (e.g., when
seeing fatal errors when creating backups or when archives are missing in
\fBborg repo\-list\fP).
.sp
When using the \fB\-\-stats\fP option, borg will internally list all repository
objects to determine their existence and stored size. It will build a fresh
chunks index from that information and cache it in the repository. For some
types of repositories, this might be very slow. It will tell you the sum of
stored object sizes, before and after compaction.
.sp
Without \fB\-\-stats\fP, borg will rely on the cached chunks index to determine
existing object IDs (but there is no stored size information in the index,
thus it cannot compute before/after compaction size statistics).
See \fIseparate_compaction\fP in Additional Notes for more details.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS options
.SS arguments
.INDENT 0.0
.TP
.B \-n\fP,\fB \-\-dry\-run
do not change the repository
.B REPOSITORY
repository to compact
.UNINDENT
.SS optional arguments
.INDENT 0.0
.TP
.B \-s\fP,\fB \-\-stats
print statistics (might be much slower)
.B \-\-cleanup\-commits
cleanup commit\-only 17\-byte segment files
.TP
.BI \-\-threshold \ PERCENT
set minimum threshold for saved space in PERCENT (Default: 10)
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Compact segments and free repository disk space
$ borg compact
.EE
.nf
.ft C
# compact segments and free repo disk space
$ borg compact /path/to/repo
# same as above plus clean up 17byte commit\-only segments
$ borg compact \-\-cleanup\-commits /path/to/repo
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO

View file

@ -1,76 +0,0 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-COMPLETION" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-completion \- Output shell completion script for the given shell.
.SH SYNOPSIS
.sp
borg [common options] completion [options] SHELL
.SH DESCRIPTION
.sp
This command prints a shell completion script for the given shell.
.sp
Please note that for some dynamic completions (like archive IDs), the shell
completion script will call borg to query the repository. This will work best
if that call can be made without prompting for user input, so you may want to
set BORG_REPO and BORG_PASSPHRASE environment variables.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B SHELL
shell to generate completion for (one of: %(choices)s)
.UNINDENT
.SH EXAMPLES
.sp
To activate completion in your current shell session, evaluate the output
of this command. To enable it persistently, add the corresponding line to
your shell\(aqs startup file.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Bash (in ~/.bashrc)
eval \(dq$(borg completion bash)\(dq
# Zsh (in ~/.zshrc)
eval \(dq$(borg completion zsh)\(dq
.EE
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-COMPRESSION 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-compression \- Details regarding compression
.
.nr rst2man-indent-level 0
.
@ -27,19 +30,16 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-COMPRESSION" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-compression \- Details regarding compression
.SH DESCRIPTION
.sp
It is no problem to mix different compression methods in one repository,
It is no problem to mix different compression methods in one repo,
deduplication is done on the source data chunks (not on the compressed
or encrypted data).
.sp
If some specific chunk was once compressed and stored into the repository, creating
If some specific chunk was once compressed and stored into the repo, creating
another backup that also uses this chunk will not change the stored chunk.
So if you use different compression specs for the backups, whichever stores a
chunk first determines its compression. See also \fBborg recreate\fP\&.
chunk first determines its compression. See also borg recreate.
.sp
Compression is lz4 by default. If you want something else, you have to specify what you want.
.sp
@ -53,19 +53,20 @@ Do not compress.
Use lz4 compression. Very high speed, very low compression. (default)
.TP
.B zstd[,L]
Use zstd (\(dqzstandard\(dq) compression, a modern wide\-range algorithm.
Use zstd ("zstandard") compression, a modern wide\-range algorithm.
If you do not explicitly give the compression level L (ranging from 1
to 22), it will use level 3.
Archives compressed with zstd are not compatible with borg < 1.1.4.
.TP
.B zlib[,L]
Use zlib (\(dqgz\(dq) compression. Medium speed, medium compression.
Use zlib ("gz") compression. Medium speed, medium compression.
If you do not explicitly give the compression level L (ranging from 0
to 9), it will use level 6.
Giving level 0 (means \(dqno compression\(dq, but still has zlib protocol
overhead) is usually pointless, you better use \(dqnone\(dq compression.
Giving level 0 (means "no compression", but still has zlib protocol
overhead) is usually pointless, you better use "none" compression.
.TP
.B lzma[,L]
Use lzma (\(dqxz\(dq) compression. Low speed, high compression.
Use lzma ("xz") compression. Low speed, high compression.
If you do not explicitly give the compression level L (ranging from 0
to 9), it will use level 6.
Giving levels above 6 is pointless and counterproductive because it does
@ -75,103 +76,55 @@ lots of CPU cycles and RAM.
.B auto,C[,L]
Use a built\-in heuristic to decide per chunk whether to compress or not.
The heuristic tries with lz4 whether the data is compressible.
For incompressible data, it will not use compression (uses \(dqnone\(dq).
For incompressible data, it will not use compression (uses "none").
For compressible data, it uses the given C[,L] compression \- with C[,L]
being any valid compression specifier. This can be helpful for media files
which often cannot be compressed much more.
being any valid compression specifier.
.TP
.B obfuscate,SPEC,C[,L]
Use compressed\-size obfuscation to make fingerprinting attacks based on
the observable stored chunk size more difficult. Note:
.INDENT 7.0
.IP \(bu 2
You must combine this with encryption, or it won\(aqt make any sense.
.IP \(bu 2
Your repo size will be bigger, of course.
.IP \(bu 2
A chunk is limited by the constant \fBMAX_DATA_SIZE\fP (cur. ~20MiB).
.UNINDENT
the observable stored chunk size more difficult.
Note:
\- you must combine this with encryption or it won\(aqt make any sense.
\- your repo size will be bigger, of course.
.sp
The SPEC value determines how the size obfuscation works:
.sp
\fIRelative random reciprocal size variation\fP (multiplicative)
The SPEC value will determine how the size obfuscation will work:
.sp
Relative random reciprocal size variation:
Size will increase by a factor, relative to the compressed data size.
Smaller factors are used often, larger factors rarely.
Smaller factors are often used, larger factors rarely.
1: factor 0.01 .. 100.0
2: factor 0.1 .. 1000.0
3: factor 1.0 .. 10000.0
4: factor 10.0 .. 100000.0
5: factor 100.0 .. 1000000.0
6: factor 1000.0 .. 10000000.0
.sp
Available factors:
.INDENT 7.0
.INDENT 3.5
.sp
.EX
1: 0.01 .. 100
2: 0.1 .. 1,000
3: 1 .. 10,000
4: 10 .. 100,000
5: 100 .. 1,000,000
6: 1,000 .. 10,000,000
.EE
.UNINDENT
.UNINDENT
.sp
Example probabilities for SPEC \fB1\fP:
.INDENT 7.0
.INDENT 3.5
.sp
.EX
90 % 0.01 .. 0.1
9 % 0.1 .. 1
0.9 % 1 .. 10
0.09% 10 .. 100
.EE
.UNINDENT
.UNINDENT
.sp
\fIRandomly sized padding up to the given size\fP (additive)
.INDENT 7.0
.INDENT 3.5
.sp
.EX
110: 1kiB (2 ^ (SPEC \- 100))
Add a randomly sized padding up to the given size:
110: 1kiB
\&...
120: 1MiB
\&...
123: 8MiB (max.)
.EE
.UNINDENT
.UNINDENT
.sp
\fIPadmé padding\fP (deterministic)
.INDENT 7.0
.INDENT 3.5
.sp
.EX
250: pads to sums of powers of 2, max 12% overhead
.EE
.UNINDENT
.UNINDENT
.sp
Uses the Padmé algorithm to deterministically pad the compressed size to a sum of
powers of 2, limiting overhead to 12%. See <https://lbarman.ch/blog/padme/> for details.
.UNINDENT
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
borg create \-\-compression lz4 \-\-repo REPO ARCHIVE data
borg create \-\-compression zstd \-\-repo REPO ARCHIVE data
borg create \-\-compression zstd,10 \-\-repo REPO ARCHIVE data
borg create \-\-compression zlib \-\-repo REPO ARCHIVE data
borg create \-\-compression zlib,1 \-\-repo REPO ARCHIVE data
borg create \-\-compression auto,lzma,6 \-\-repo REPO ARCHIVE data
.nf
.ft C
borg create \-\-compression lz4 REPO::ARCHIVE data
borg create \-\-compression zstd REPO::ARCHIVE data
borg create \-\-compression zstd,10 REPO::ARCHIVE data
borg create \-\-compression zlib REPO::ARCHIVE data
borg create \-\-compression zlib,1 REPO::ARCHIVE data
borg create \-\-compression auto,lzma,6 REPO::ARCHIVE data
borg create \-\-compression auto,lzma ...
borg create \-\-compression obfuscate,110,none ...
borg create \-\-compression obfuscate,3,none ...
borg create \-\-compression obfuscate,3,auto,zstd,10 ...
borg create \-\-compression obfuscate,2,zstd,6 ...
borg create \-\-compression obfuscate,250,zstd,3 ...
.EE
.ft P
.fi
.UNINDENT
.UNINDENT
.SH AUTHOR


@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-CONFIG 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-config \- get, set, and delete values in a repository or cache config file
.
.nr rst2man-indent-level 0
.
@ -27,22 +30,19 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-CONFIG" 1 "2024-07-19" "" "borg backup tool"
.SH NAME
borg-config \- get, set, and delete values in a repository or cache config file
.SH SYNOPSIS
.sp
borg [common options] config [options] [NAME] [VALUE]
borg [common options] config [options] [REPOSITORY] [NAME] [VALUE]
.SH DESCRIPTION
.sp
This command gets and sets options in a local repository or cache config file.
For security reasons, this command only works on local repositories.
.sp
To delete a config value entirely, use \fB\-\-delete\fP\&. To list the values
of the configuration file or the default values, use \fB\-\-list\fP\&. To get an existing
of the configuration file or the default values, use \fB\-\-list\fP\&. To get and existing
key, pass only the key name. To set a key, pass both the key name and
the new value. Keys can be specified in the format \(dqsection.name\(dq or
simply \(dqname\(dq; the section will default to \(dqrepository\(dq and \(dqcache\(dq for
the new value. Keys can be specified in the format "section.name" or
simply "name"; the section will default to "repository" and "cache" for
the repo and cache configs, respectively.
.sp
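A brief, hedged illustration (assuming the first synopsis form above, without an explicit REPOSITORY argument; the keys shown are just examples):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# get a key, set a key (section.name form), delete a key, list everything
$ borg config append_only
$ borg config repository.append_only 1
$ borg config \-\-delete additional_free_space
$ borg config \-\-list
.EE
.UNINDENT
.UNINDENT
.sp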
By default, borg config manipulates the repository config file. Using \fB\-\-cache\fP
@ -53,6 +53,9 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B REPOSITORY
repository to configure
.TP
.B NAME
name of config key
.TP
@ -62,13 +65,13 @@ new value for key
.SS optional arguments
.INDENT 0.0
.TP
.B \-c\fP,\fB \-\-cache
.B \-c\fP,\fB \-\-cache
get and set values from the repo cache
.TP
.B \-d\fP,\fB \-\-delete
.B \-d\fP,\fB \-\-delete
delete the key from the config file
.TP
.B \-l\fP,\fB \-\-list
.B \-l\fP,\fB \-\-list
list the configuration of the repo
.UNINDENT
.SH EXAMPLES
@ -87,13 +90,13 @@ making changes!
.nf
.ft C
# find cache directory
$ cd ~/.cache/borg/$(borg config id)
$ cd ~/.cache/borg/$(borg config /path/to/repo id)
# reserve some space
$ borg config additional_free_space 2G
$ borg config /path/to/repo additional_free_space 2G
# make a repo append\-only
$ borg config append_only 1
$ borg config /path/to/repo append_only 1
.ft P
.fi
.UNINDENT


@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-CREATE 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-create \- Create new archive
.
.nr rst2man-indent-level 0
.
@ -27,41 +30,33 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-CREATE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-create \- Creates a new archive.
.SH SYNOPSIS
.sp
borg [common options] create [options] NAME [PATH...]
borg [common options] create [options] ARCHIVE [PATH...]
.SH DESCRIPTION
.sp
This command creates a backup archive containing all files found while recursively
traversing all specified paths. Paths are added to the archive as they are given,
which means that if relative paths are desired, the command must be run from the correct
traversing all paths specified. Paths are added to the archive as they are given,
that means if relative paths are desired, the command has to be run from the correct
directory.
.sp
The slashdot hack in paths (recursion roots) is triggered by using \fB/./\fP:
\fB/this/gets/stripped/./this/gets/archived\fP means to process that fs object, but
strip the prefix on the left side of \fB\&./\fP from the archived items (in this case,
\fBthis/gets/archived\fP will be the path in the archived item).
.sp
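A minimal illustration (paths are placeholders; the same idea appears in the EXAMPLES section below):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# everything below /mnt/disk/docs is archived as \(dqdocs/...\(dq
$ borg create \-\-repo /path/to/repo docs /mnt/disk/./docs
.EE
.UNINDENT
.UNINDENT
.sp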
When specifying \(aq\-\(aq as a path, borg will read data from standard input and create a
file named \(aqstdin\(aq in the created archive from that data. In some cases, it is more
appropriate to use \-\-content\-from\-command. See the section \fIReading from stdin\fP
below for details.
When giving \(aq\-\(aq as path, borg will read data from standard input and create a
file \(aqstdin\(aq in the created archive from that data. In some cases it\(aqs more
appropriate to use \-\-content\-from\-command, however. See section \fIReading from
stdin\fP below for details.
.sp
The archive will consume almost no disk space for files or parts of files that
have already been stored in other archives.
.sp
The archive name does not need to be unique; you can and should use the same
name for a series of archives. The unique archive identifier is its ID (hash),
and you can abbreviate the ID as long as it is unique.
The archive name needs to be unique. It must not end in \(aq.checkpoint\(aq or
\(aq.checkpoint.N\(aq (with N being a number), because these names are used for
checkpoints and treated in special ways.
.sp
In the archive name, you may use the following placeholders:
{now}, {utcnow}, {fqdn}, {hostname}, {user} and some others.
.sp
Backup speed is increased by not reprocessing files that are already part of
existing archives and were not modified. The detection of unmodified files is
existing archives and weren\(aqt modified. The detection of unmodified files is
done by comparing multiple file metadata values with previous values kept in
the files cache.
.sp
@ -96,29 +91,15 @@ ctime vs. mtime: safety vs. speed
.INDENT 0.0
.IP \(bu 2
ctime is a rather safe way to detect changes to a file (metadata and contents)
as it cannot be set from userspace. But a metadata\-only change will already
as it can not be set from userspace. But, a metadata\-only change will already
update the ctime, so there might be some unnecessary chunking/hashing even
without content changes. Some filesystems do not support ctime (change time).
E.g. doing a chown or chmod to a file will change its ctime.
.IP \(bu 2
mtime usually works and only updates if file contents were changed. But mtime
can be arbitrarily set from userspace, e.g., to set mtime back to the same value
can be arbitrarily set from userspace, e.g. to set mtime back to the same value
it had before a content change happened. This can be used maliciously as well as
well\-meant, but in both cases mtime\-based cache modes can be problematic.
.UNINDENT
.INDENT 0.0
.TP
.B The \fB\-\-files\-changed\fP option controls how Borg detects if a file has changed during backup:
.INDENT 7.0
.IP \(bu 2
ctime (default): Use ctime to detect changes. This is the safest option.
.IP \(bu 2
mtime: Use mtime to detect changes.
.IP \(bu 2
disabled: Disable the \(dqfile has changed while we backed it up\(dq detection completely.
This is not recommended unless you know what you\(aqre doing, as it could lead to
inconsistent backups if files change during the backup process.
.UNINDENT
well\-meant, but in both cases mtime based cache modes can be problematic.
.UNINDENT
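.sp
A hedged sketch of how these cache/change\-detection modes are selected (archive and path names are placeholders; the options are listed under \(dqFilesystem options\(dq below):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# rely on mtime instead of ctime, both for the files cache and for
# the \(dqfile changed while we backed it up\(dq detection
$ borg create \-\-files\-cache=mtime,size,inode \-\-files\-changed=mtime my\-documents ~/Documents
.EE
.UNINDENT
.UNINDENT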
.sp
The mount points of filesystems or filesystem snapshots should be the same for every
@ -126,13 +107,13 @@ creation of a new archive to ensure fast operation. This is because the file cac
that is used to determine changed files quickly uses absolute filenames.
If this is not possible, consider creating a bind mount to a stable location.
.sp
The \fB\-\-progress\fP option shows (from left to right) Original and (uncompressed)
deduplicated size (O and U respectively), then the Number of files (N) processed so far,
followed by the currently processed path.
The \fB\-\-progress\fP option shows (from left to right) Original, Compressed and Deduplicated
(O, C and D, respectively), then the Number of files (N) processed so far, followed by
the currently processed path.
.sp
When using \fB\-\-stats\fP, you will get some statistics about how much data was
added \- the \(dqThis Archive\(dq deduplicated size there is most interesting as that is
how much your repository will grow. Please note that the \(dqAll archives\(dq stats refer to
added \- the "This Archive" deduplicated size there is most interesting as that is
how much your repository will grow. Please note that the "All archives" stats refer to
the state after creation. Also, the \fB\-\-stats\fP and \fB\-\-dry\-run\fP options are mutually
exclusive because the data is not actually compressed and deduplicated during a dry run.
.sp
@ -145,55 +126,58 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.B ARCHIVE
name of archive to create (must be also a valid directory name)
.TP
.B PATH
paths to archive
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-n\fP,\fB \-\-dry\-run
.B \-n\fP,\fB \-\-dry\-run
do not create a backup archive
.TP
.B \-s\fP,\fB \-\-stats
.B \-s\fP,\fB \-\-stats
print statistics for the created archive
.TP
.B \-\-list
output a verbose list of items (files, dirs, ...)
.B \-\-list
output verbose list of items (files, dirs, ...)
.TP
.BI \-\-filter \ STATUSCHARS
only display items with the given status characters (see description)
.TP
.B \-\-json
.B \-\-json
output stats as JSON. Implies \fB\-\-stats\fP\&.
.TP
.B \-\-no\-cache\-sync
experimental: do not synchronize the cache. Implies not using the files cache.
.TP
.BI \-\-stdin\-name \ NAME
use NAME in archive for stdin data (default: \(aqstdin\(aq)
.TP
.BI \-\-stdin\-user \ USER
set user USER in archive for stdin data (default: do not store user/uid)
set user USER in archive for stdin data (default: \(aqroot\(aq)
.TP
.BI \-\-stdin\-group \ GROUP
set group GROUP in archive for stdin data (default: do not store group/gid)
set group GROUP in archive for stdin data (default: \(aqroot\(aq)
.TP
.BI \-\-stdin\-mode \ M
set mode to M in archive for stdin data (default: 0660)
.TP
.B \-\-content\-from\-command
interpret PATH as a command and store its stdout. See also the section \(aqReading from stdin\(aq below.
.B \-\-content\-from\-command
interpret PATH as command and store its stdout. See also section Reading from stdin below.
.TP
.B \-\-paths\-from\-stdin
read DELIM\-separated list of paths to back up from stdin. All control is external: it will back up all files given \- no more, no less.
.B \-\-paths\-from\-stdin
read DELIM\-separated list of paths to backup from stdin. Will not recurse into directories.
.TP
.B \-\-paths\-from\-command
.B \-\-paths\-from\-command
interpret PATH as command and treat its output as \fB\-\-paths\-from\-stdin\fP
.TP
.BI \-\-paths\-delimiter \ DELIM
set path delimiter for \fB\-\-paths\-from\-stdin\fP and \fB\-\-paths\-from\-command\fP (default: \fB\en\fP)
set path delimiter for \fB\-\-paths\-from\-stdin\fP and \fB\-\-paths\-from\-command\fP (default: n)
.UNINDENT
.SS Include/Exclude options
.SS Exclusion options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@ -208,52 +192,61 @@ include/exclude paths matching PATTERN
.BI \-\-patterns\-from \ PATTERNFILE
read include/exclude patterns from PATTERNFILE, one per line
.TP
.B \-\-exclude\-caches
exclude directories that contain a CACHEDIR.TAG file ( <http://www.bford.info/cachedir/spec.html> )
.B \-\-exclude\-caches
exclude directories that contain a CACHEDIR.TAG file (\fI\%http://www.bford.info/cachedir/spec.html\fP)
.TP
.BI \-\-exclude\-if\-present \ NAME
exclude directories that are tagged by containing a filesystem object with the given NAME
.TP
.B \-\-keep\-exclude\-tags
if tag objects are specified with \fB\-\-exclude\-if\-present\fP, do not omit the tag objects themselves from the backup archive
.B \-\-keep\-exclude\-tags
if tag objects are specified with \fB\-\-exclude\-if\-present\fP, don\(aqt omit the tag objects themselves from the backup archive
.TP
.B \-\-exclude\-nodump
exclude files flagged NODUMP
.UNINDENT
.SS Filesystem options
.INDENT 0.0
.TP
.B \-x\fP,\fB \-\-one\-file\-system
stay in the same file system and do not store mount points of other file systems \- this might behave differently from your expectations, see the description below.
.B \-x\fP,\fB \-\-one\-file\-system
stay in the same file system and do not store mount points of other file systems. This might behave differently from your expectations, see the docs.
.TP
.B \-\-numeric\-ids
.B \-\-numeric\-owner
deprecated, use \fB\-\-numeric\-ids\fP instead
.TP
.B \-\-numeric\-ids
only store numeric user and group identifiers
.TP
.B \-\-atime
.B \-\-noatime
do not store atime into archive
.TP
.B \-\-atime
do store atime into archive
.TP
.B \-\-noctime
.B \-\-noctime
do not store ctime into archive
.TP
.B \-\-nobirthtime
.B \-\-nobirthtime
do not store birthtime (creation date) into archive
.TP
.B \-\-noflags
.B \-\-nobsdflags
deprecated, use \fB\-\-noflags\fP instead
.TP
.B \-\-noflags
do not read and store flags (e.g. NODUMP, IMMUTABLE) into archive
.TP
.B \-\-noacls
.B \-\-noacls
do not read and store ACLs into archive
.TP
.B \-\-noxattrs
.B \-\-noxattrs
do not read and store xattrs into archive
.TP
.B \-\-sparse
.B \-\-sparse
detect sparse holes in input (supported only by fixed chunker)
.TP
.BI \-\-files\-cache \ MODE
operate files cache in MODE. default: ctime,size,inode
.TP
.BI \-\-files\-changed \ MODE
specify how to detect if a file has changed during backup (ctime, mtime, disabled). default: ctime
.TP
.B \-\-read\-special
.B \-\-read\-special
open and read block and char device files as well as FIFOs as if they were regular files. Also follows symlinks pointing to these kinds of files.
.UNINDENT
.SS Archive options
@ -263,113 +256,106 @@ open and read block and char device files as well as FIFOs as if they were regul
add a comment text to the archive
.TP
.BI \-\-timestamp \ TIMESTAMP
manually specify the archive creation date/time (yyyy\-mm\-ddThh:mm:ss[(+|\-)HH:MM] format, (+|\-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
manually specify the archive creation date/time (UTC, yyyy\-mm\-ddThh:mm:ss format). Alternatively, give a reference file/directory.
.TP
.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
write checkpoint every SECONDS seconds (Default: 1800)
.TP
.BI \-\-chunker\-params \ PARAMS
specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095
.TP
.BI \-C \ COMPRESSION\fR,\fB \ \-\-compression \ COMPRESSION
select compression algorithm, see the output of the \(dqborg help compression\(dq command for details.
select compression algorithm, see the output of the "borg help compression" command for details.
.UNINDENT
.SH EXAMPLES
.sp
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
Archive series and performance: In Borg 2, archives that share the same NAME form an \(dqarchive series\(dq.
The files cache is maintained per series. For best performance on repeated backups, reuse the same
NAME every time you run \fBborg create\fP for the same dataset (e.g. always use \fBmy\-documents\fP).
Frequently changing the NAME (for example by embedding date/time like \fBmy\-documents\-2025\-11\-10\fP)
prevents cache reuse and forces Borg to re\-scan and re\-chunk files, which can make incremental
backups vastly slower. Only vary the NAME if you intentionally want to start a new series.
.sp
If you must vary the archive name but still want cache reuse across names, see the advanced
knobs described in \fIupgradenotes2\fP (\fBBORG_FILES_CACHE_SUFFIX\fP and \fBBORG_FILES_CACHE_TTL\fP),
but the recommended approach is to keep a stable NAME per series.
.UNINDENT
.UNINDENT
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Backup ~/Documents into an archive named \(dqmy\-documents\(dq
$ borg create my\-documents ~/Documents
.nf
.ft C
# Backup ~/Documents into an archive named "my\-documents"
$ borg create /path/to/repo::my\-documents ~/Documents
# same, but list all files as we process them
$ borg create \-\-list my\-documents ~/Documents
# Backup /mnt/disk/docs, but strip path prefix using the slashdot hack
$ borg create \-\-repo /path/to/repo docs /mnt/disk/./docs
$ borg create \-\-list /path/to/repo::my\-documents ~/Documents
# Backup ~/Documents and ~/src but exclude pyc files
$ borg create my\-files \e
$ borg create /path/to/repo::my\-files \e
~/Documents \e
~/src \e
\-\-exclude \(aq*.pyc\(aq
# Backup home directories excluding image thumbnails (i.e. only
# /home/<one directory>/.thumbnails is excluded, not /home/*/*/.thumbnails etc.)
$ borg create my\-files /home \-\-exclude \(aqsh:home/*/.thumbnails\(aq
$ borg create /path/to/repo::my\-files /home \e
\-\-exclude \(aqsh:/home/*/.thumbnails\(aq
# Back up the root filesystem into an archive named \(dqroot\-archive\(dq
# Use zlib compression (good, but slow) — default is LZ4 (fast, low compression ratio)
$ borg create \-C zlib,6 \-\-one\-file\-system root\-archive /
# Backup the root filesystem into an archive named "root\-YYYY\-MM\-DD"
# use zlib compression (good, but slow) \- default is lz4 (fast, low compression ratio)
$ borg create \-C zlib,6 \-\-one\-file\-system /path/to/repo::root\-{now:%Y\-%m\-%d} /
# Backup into an archive name like FQDN\-root
$ borg create \(aq{fqdn}\-root\(aq /
# Backup onto a remote host ("push" style) via ssh to port 2222,
# logging in as user "borg" and storing into /path/to/repo
$ borg create ssh://borg@backup.example.org:2222/path/to/repo::{fqdn}\-root\-{now} /
# Back up a remote host locally (\(dqpull\(dq style) using SSHFS
# Backup a remote host locally ("pull" style) using sshfs
$ mkdir sshfs\-mount
$ sshfs root@example.com:/ sshfs\-mount
$ cd sshfs\-mount
$ borg create example.com\-root .
$ borg create /path/to/repo::example.com\-root\-{now:%Y\-%m\-%d} .
$ cd ..
$ fusermount \-u sshfs\-mount
# Make a big effort in fine\-grained deduplication (big chunk management
# overhead, needs a lot of RAM and disk space; see the formula in the internals docs):
$ borg create \-\-chunker\-params buzhash,10,23,16,4095 small /smallstuff
# Make a big effort in fine granular deduplication (big chunk management
# overhead, needs a lot of RAM and disk space, see formula in internals
# docs \- same parameters as borg < 1.0 or attic):
$ borg create \-\-chunker\-params buzhash,10,23,16,4095 /path/to/repo::small /smallstuff
# Backup a raw device (must not be active/in use/mounted at that time)
$ borg create \-\-read\-special \-\-chunker\-params fixed,4194304 my\-sdx /dev/sdX
$ borg create \-\-read\-special \-\-chunker\-params fixed,4194304 /path/to/repo::my\-sdx /dev/sdX
# Backup a sparse disk image (must not be active/in use/mounted at that time)
$ borg create \-\-sparse \-\-chunker\-params fixed,4194304 my\-disk my\-disk.raw
$ borg create \-\-sparse \-\-chunker\-params fixed,4194304 /path/to/repo::my\-disk my\-disk.raw
# No compression (none)
$ borg create \-\-compression none arch ~
$ borg create \-\-compression none /path/to/repo::arch ~
# Super fast, low compression (lz4, default)
$ borg create arch ~
$ borg create /path/to/repo::arch ~
# Less fast, higher compression (zlib, N = 0..9)
$ borg create \-\-compression zlib,N arch ~
$ borg create \-\-compression zlib,N /path/to/repo::arch ~
# Even slower, even higher compression (lzma, N = 0..9)
$ borg create \-\-compression lzma,N arch ~
$ borg create \-\-compression lzma,N /path/to/repo::arch ~
# Only compress compressible data with lzma,N (N = 0..9)
$ borg create \-\-compression auto,lzma,N arch ~
$ borg create \-\-compression auto,lzma,N /path/to/repo::arch ~
# Use the short hostname and username as the archive name
$ borg create \(aq{hostname}\-{user}\(aq ~
# Use short hostname, user name and current time in archive name
$ borg create /path/to/repo::{hostname}\-{user}\-{now} ~
# Similar, use the same datetime format that is default as of borg 1.1
$ borg create /path/to/repo::{hostname}\-{user}\-{now:%Y\-%m\-%dT%H:%M:%S} ~
# As above, but add nanoseconds
$ borg create /path/to/repo::{hostname}\-{user}\-{now:%Y\-%m\-%dT%H:%M:%S.%f} ~
# Back up relative paths by moving into the correct directory first
# Backing up relative paths by moving into the correct directory first
$ cd /home/user/Documents
# The root directory of the archive will be \(dqprojectA\(dq
$ borg create \(aqdaily\-projectA\(aq projectA
# The root directory of the archive will be "projectA"
$ borg create /path/to/repo::daily\-projectA\-{now:%Y\-%m\-%d} projectA
# Use external command to determine files to archive
# Use \-\-paths\-from\-stdin with find to back up only files less than 1 MB in size
$ find ~ \-size \-1000k | borg create \-\-paths\-from\-stdin small\-files\-only
# Use \-\-paths\-from\-command with find to back up files from only a given user
$ borg create \-\-paths\-from\-command joes\-files \-\- find /srv/samba/shared \-user joe
# Use \-\-paths\-from\-stdin with find to only backup files less than 1MB in size
$ find ~ \-size \-1000k | borg create \-\-paths\-from\-stdin /path/to/repo::small\-files\-only
# Use \-\-paths\-from\-command with find to only backup files from a given user
$ borg create \-\-paths\-from\-command /path/to/repo::joes\-files \-\- find /srv/samba/shared \-user joe
# Use \-\-paths\-from\-stdin with \-\-paths\-delimiter (for example, for filenames with newlines in them)
$ find ~ \-size \-1000k \-print0 | borg create \e
\-\-paths\-from\-stdin \e
\-\-paths\-delimiter \(dq\e0\(dq \e
smallfiles\-handle\-newline
.EE
\-\-paths\-delimiter "\e0" \e
/path/to/repo::smallfiles\-handle\-newline
.ft P
.fi
.UNINDENT
.UNINDENT
.SH NOTES
@ -390,13 +376,13 @@ through using the \fB\-\-keep\-exclude\-tags\fP option.
The \fB\-x\fP or \fB\-\-one\-file\-system\fP option excludes directories that are mountpoints (and everything in them).
It detects mountpoints by comparing the device number from the output of \fBstat()\fP of the directory and its
parent directory. Specifically, it excludes directories for which \fBstat()\fP reports a device number different
from the device number of their parent.
In general: be aware that there are directories with device number different from their parent, which the kernel
does not consider a mountpoint and also the other way around.
Linux examples for this are bind mounts (possibly same device number, but always a mountpoint) and ALL
subvolumes of a btrfs (different device number from parent but not necessarily a mountpoint).
macOS examples are the apfs mounts of a typical macOS installation.
Therefore, when using \fB\-\-one\-file\-system\fP, you should double\-check that the backup works as intended.
from the device number of their parent. Be aware that in Linux (and possibly elsewhere) there are directories
with device number different from their parent, which the kernel does not consider a mountpoint and also the
other way around. Examples are bind mounts (possibly same device number, but always a mountpoint) and ALL
subvolumes of a btrfs (different device number from parent but not necessarily a mountpoint). Therefore when
using \fB\-\-one\-file\-system\fP, one should make doubly sure that the backup works as intended especially when using
btrfs. This is even more important, if the btrfs layout was created by someone else, e.g. a distribution
installer.
.SS Item flags
.sp
\fB\-\-list\fP outputs a list of all files, directories and other
@ -409,7 +395,7 @@ If you are interested only in a subset of that output, you can give e.g.
below).
.sp
An uppercase character represents the status of a regular file relative to the
\(dqfiles\(dq cache (not relative to the repo \-\- this is an issue if the files cache
"files" cache (not relative to the repo \-\- this is an issue if the files cache
is not used). Metadata is stored in any case and for \(aqA\(aq and \(aqM\(aq also new data
chunks are stored. For \(aqU\(aq all data chunks refer to already existing chunks.
.INDENT 0.0
@ -435,7 +421,7 @@ borg usually just stores their metadata:
.IP \(bu 2
\(aqc\(aq = char device
.IP \(bu 2
\(aqh\(aq = regular file, hard link (to already seen inodes)
\(aqh\(aq = regular file, hardlink (to already seen inodes)
.IP \(bu 2
\(aqs\(aq = symlink
.IP \(bu 2
@ -445,24 +431,26 @@ borg usually just stores their metadata:
Other flags used include:
.INDENT 0.0
.IP \(bu 2
\(aq+\(aq = included, item would be backed up (if not in dry\-run mode)
.IP \(bu 2
\(aq\-\(aq = excluded, item would not be / was not backed up
.IP \(bu 2
\(aqi\(aq = backup data was read from standard input (stdin)
.IP \(bu 2
\(aq\-\(aq = dry run, item was \fInot\fP backed up
.IP \(bu 2
\(aqx\(aq = excluded, item was \fInot\fP backed up
.IP \(bu 2
\(aq?\(aq = missing status code (if you see this, please file a bug report!)
.UNINDENT
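.sp
For illustration only (archive and path names are placeholders): the status characters above are typically combined with \fB\-\-list\fP and \fB\-\-filter\fP, e.g. to show only regular files with status \(aqA\(aq or \(aqM\(aq:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg create \-\-list \-\-filter=AM my\-documents ~/Documents
.EE
.UNINDENT
.UNINDENT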
.SS Reading backup data from stdin
.SS Reading from stdin
.sp
There are two methods to read from stdin. Either specify \fB\-\fP as path and
pipe directly to borg:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
backup\-vm \-\-id myvm \-\-stdout | borg create \-\-repo REPO ARCHIVE \-
.EE
.nf
.ft C
backup\-vm \-\-id myvm \-\-stdout | borg create REPO::ARCHIVE \-
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@ -473,9 +461,11 @@ to the command:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
borg create \-\-content\-from\-command \-\-repo REPO ARCHIVE \-\- backup\-vm \-\-id myvm \-\-stdout
.EE
.nf
.ft C
borg create \-\-content\-from\-command REPO::ARCHIVE \-\- backup\-vm \-\-id myvm \-\-stdout
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@ -496,22 +486,9 @@ creation a bit.
.sp
By default, the content read from stdin is stored in a file called \(aqstdin\(aq.
Use \fB\-\-stdin\-name\fP to change the name.
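.sp
A hedged example (the backup\-vm command is the same placeholder used above; the name and mode values are only illustrative):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
backup\-vm \-\-id myvm \-\-stdout | borg create \-\-stdin\-name myvm.img \-\-stdin\-mode 0600 \-\-repo REPO ARCHIVE \-
.EE
.UNINDENT
.UNINDENT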
.SS Feeding all file paths from externally
.sp
Usually, you give a starting path (recursion root) to borg and then borg
automatically recurses, finds and backs up all fs objects contained in
there (optionally considering include/exclude rules).
.sp
If you need more control and you want to give every single fs object path
to borg (maybe implementing your own recursion or your own rules), you can use
\fB\-\-paths\-from\-stdin\fP or \fB\-\-paths\-from\-command\fP (with the latter, borg will
fail to create an archive should the command fail).
.sp
Borg supports paths with the slashdot hack to strip path prefixes here also.
So, be careful not to unintentionally trigger that.
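.sp
A hedged sketch (paths and the archive name are placeholders) of feeding an explicit file list, and of how a \fB/./\fP component would still strip a prefix:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# back up exactly the listed files, nothing more and nothing less
$ find /srv/data \-type f \-name \(aq*.conf\(aq | borg create \-\-paths\-from\-stdin conf\-files
# a path like /srv/./data/x.conf would be stored as \(dqdata/x.conf\(dq
.EE
.UNINDENT
.UNINDENT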
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-delete(1)\fP, \fIborg\-prune(1)\fP, \fIborg\-check(1)\fP, \fIborg\-patterns(1)\fP, \fIborg\-placeholders(1)\fP, \fIborg\-compression(1)\fP, \fIborg\-repo\-create(1)\fP
\fIborg\-common(1)\fP, \fIborg\-delete(1)\fP, \fIborg\-prune(1)\fP, \fIborg\-check(1)\fP, \fIborg\-patterns(1)\fP, \fIborg\-placeholders(1)\fP, \fIborg\-compression(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.


@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-DELETE 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-delete \- Delete an existing repository or archives
.
.nr rst2man-indent-level 0
.
@ -27,103 +30,125 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-DELETE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-delete \- Deletes archives.
.SH SYNOPSIS
.sp
borg [common options] delete [options] [NAME]
borg [common options] delete [options] [REPOSITORY_OR_ARCHIVE] [ARCHIVE...]
.SH DESCRIPTION
.sp
This command soft\-deletes archives from the repository.
This command deletes an archive from the repository or the complete repository.
.sp
Important:
.INDENT 0.0
.IP \(bu 2
The delete command will only mark archives for deletion (\(dqsoft\-deletion\(dq),
repository disk space is \fBnot\fP freed until you run \fBborg compact\fP\&.
.IP \(bu 2
You can use \fBborg undelete\fP to undelete archives, but only until
Important: When deleting archives, repository disk space is \fBnot\fP freed until
you run \fBborg compact\fP\&.
.UNINDENT
.sp
When you delete a complete repository, the security info and local cache for it
(if any) are also deleted. Alternatively, you can delete just the local cache
with the \fB\-\-cache\-only\fP option, or keep the security info with the
\fB\-\-keep\-security\-info\fP option.
.sp
When in doubt, use \fB\-\-dry\-run \-\-list\fP to see what would be deleted.
.sp
You can delete multiple archives by specifying a match pattern using
the \fB\-\-match\-archives PATTERN\fP option (for more information on these
patterns, see \fIborg_patterns\fP).
When using \fB\-\-stats\fP, you will get some statistics about how much data was
deleted \- the "Deleted data" deduplicated size there is most interesting as
that is how much your repository will shrink.
Please note that the "All archives" stats refer to the state after deletion.
.sp
You can delete multiple archives by specifying their common prefix, if they
have one, using the \fB\-\-prefix PREFIX\fP option. You can also specify a shell
pattern to match multiple archives using the \fB\-\-glob\-archives GLOB\fP option
(for more info on these patterns, see \fIborg_patterns\fP). Note that these
two options are mutually exclusive.
.sp
To avoid accidentally deleting archives, especially when using glob patterns,
it might be helpful to use the \fB\-\-dry\-run\fP to test out the command without
actually making any changes to the repository.
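.sp
For illustration only (archive names are placeholders; check \fBborg undelete \-\-help\fP for the exact undelete syntax):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# preview, then soft\-delete
$ borg delete \-\-dry\-run \-\-list kenny\-files
$ borg delete kenny\-files
# still recoverable at this point
$ borg undelete kenny\-files
# compacting frees the space and makes the deletion permanent
$ borg compact
.EE
.UNINDENT
.UNINDENT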
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.B REPOSITORY_OR_ARCHIVE
repository or archive to delete
.TP
.B ARCHIVE
archives to delete
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-n\fP,\fB \-\-dry\-run
do not change the repository
.B \-n\fP,\fB \-\-dry\-run
do not change repository
.TP
.B \-\-list
output a verbose list of archives
.B \-\-list
output verbose list of archives
.TP
.B \-s\fP,\fB \-\-stats
print statistics for the deleted archive
.TP
.B \-\-cache\-only
delete only the local cache for the given repository
.TP
.B \-\-force
force deletion of corrupted archives, use \fB\-\-force \-\-force\fP in case \fB\-\-force\fP does not work.
.TP
.B \-\-keep\-security\-info
keep the local security info when deleting a repository
.TP
.B \-\-save\-space
work slower, but using less space
.UNINDENT
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archive names starting with this prefix.
.TP
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp
Comma\-separated list of sorting keys; valid keys are: timestamp, name, id; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied
consider first N archives after other filters were applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
consider last N archives after other filters were applied
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Delete all backup archives named \(dqkenny\-files\(dq:
$ borg delete \-a kenny\-files
# Actually free disk space:
$ borg compact
.nf
.ft C
# delete a single backup archive:
$ borg delete /path/to/repo::Monday
# actually free disk space:
$ borg compact /path/to/repo
# Delete a specific backup archive using its unique archive ID prefix
$ borg delete aid:d34db33f
# delete all archives whose names begin with the machine\(aqs hostname followed by "\-"
$ borg delete \-\-prefix \(aq{hostname}\-\(aq /path/to/repo
# Delete all archives whose names begin with the machine\(aqs hostname followed by \(dq\-\(dq
$ borg delete \-a \(aqsh:{hostname}\-*\(aq
# delete all archives whose names contain "\-2012\-"
$ borg delete \-\-glob\-archives \(aq*\-2012\-*\(aq /path/to/repo
# Delete all archives whose names contain \(dq\-2012\-\(dq
$ borg delete \-a \(aqsh:*\-2012\-*\(aq
# see what would be deleted if delete was run without \-\-dry\-run
$ borg delete \-\-list \-\-dry\-run \-a \(aq*\-May\-*\(aq /path/to/repo
# See what would be deleted if delete was run without \-\-dry\-run
$ borg delete \-\-list \-\-dry\-run \-a \(aqsh:*\-May\-*\(aq
.EE
# delete the whole repository and the related local cache:
$ borg delete /path/to/repo
You requested to completely DELETE the repository *including* all archives it contains:
repo Mon, 2016\-02\-15 19:26:54
root\-2016\-02\-15 Mon, 2016\-02\-15 19:36:29
newname Mon, 2016\-02\-15 19:50:19
Type \(aqYES\(aq if you understand this and want to continue: YES
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-compact(1)\fP, \fIborg\-repo\-delete(1)\fP
\fIborg\-common(1)\fP, \fIborg\-compact(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.


@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-DIFF 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-diff \- Diff contents of two archives
.
.nr rst2man-indent-level 0
.
@ -27,54 +30,61 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-DIFF" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-diff \- Finds differences between two archives.
.SH SYNOPSIS
.sp
borg [common options] diff [options] ARCHIVE1 ARCHIVE2 [PATH...]
borg [common options] diff [options] REPO::ARCHIVE1 ARCHIVE2 [PATH...]
.SH DESCRIPTION
.sp
This command finds differences (file contents, metadata) between ARCHIVE1 and ARCHIVE2.
This command finds differences (file contents, user/group/mode) between archives.
.sp
For more help on include/exclude patterns, see the output of the \fIborg_patterns\fP command.
A repository location and an archive name must be specified for REPO::ARCHIVE1.
ARCHIVE2 is just another archive name in same repository (no repository location
allowed).
.sp
For archives created with Borg 1.1 or newer diff automatically detects whether
the archives are created with the same chunker params. If so, only chunk IDs
are compared, which is very fast.
.sp
For archives prior to Borg 1.1 chunk contents are compared by default.
If you did not create the archives with different chunker params,
pass \fB\-\-same\-chunker\-params\fP\&.
Note that the chunker params changed from Borg 0.xx to 1.0.
.sp
For more help on include/exclude patterns, see the \fIborg_patterns\fP command output.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B ARCHIVE1
ARCHIVE1 name
.B REPO::ARCHIVE1
repository location and ARCHIVE1 name
.TP
.B ARCHIVE2
ARCHIVE2 name
ARCHIVE2 name (no repository location allowed)
.TP
.B PATH
paths of items inside the archives to compare; patterns are supported.
paths of items inside the archives to compare; patterns are supported
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-numeric\-ids
.B \-\-numeric\-owner
deprecated, use \fB\-\-numeric\-ids\fP instead
.TP
.B \-\-numeric\-ids
only consider numeric user and group identifiers
.TP
.B \-\-same\-chunker\-params
override the check of chunker parameters
.B \-\-same\-chunker\-params
Override check of chunker parameters.
.TP
.BI \-\-format \ FORMAT
specify format for differences between archives (default: \(dq{change} {path}{NL}\(dq)
.B \-\-sort
Sort the output lines by file path.
.TP
.B \-\-json\-lines
.B \-\-json\-lines
Format output as JSON Lines.
.TP
.B \-\-sort\-by
Sort output by comma\-separated fields (e.g., \(aq>size_added,path\(aq).
.TP
.B \-\-content\-only
Only compare differences in content (exclude metadata differences)
.UNINDENT
.SS Include/Exclude options
.SS Exclusion options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@ -93,174 +103,50 @@ read include/exclude patterns from PATTERNFILE, one per line
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg diff archive1 archive2
.nf
.ft C
$ borg init \-e=none testrepo
$ mkdir testdir
$ cd testdir
$ echo asdf > file1
$ dd if=/dev/urandom bs=1M count=4 > file2
$ touch file3
$ borg create ../testrepo::archive1 .
$ chmod a+x file1
$ echo "something" >> file2
$ borg create ../testrepo::archive2 .
$ echo "testing 123" >> file1
$ rm file3
$ touch file4
$ borg create ../testrepo::archive3 .
$ cd ..
$ borg diff testrepo::archive1 archive2
[\-rw\-r\-\-r\-\- \-> \-rwxr\-xr\-x] file1
+135 B \-252 B file2
$ borg diff testrepo::archive2 archive3
+17 B \-5 B file1
added 0 B file4
removed 0 B file3
$ borg diff testrepo::archive1 archive3
+17 B \-5 B [\-rw\-r\-\-r\-\- \-> \-rwxr\-xr\-x] file1
+135 B \-252 B file2
added 0 B file4
removed 0 B file3
$ borg diff archive1 archive2
{\(dqpath\(dq: \(dqfile1\(dq, \(dqchanges\(dq: [{\(dqtype\(dq: \(dqmodified\(dq, \(dqadded\(dq: 17, \(dqremoved\(dq: 5}, {\(dqtype\(dq: \(dqmode\(dq, \(dqold_mode\(dq: \(dq\-rw\-r\-\-r\-\-\(dq, \(dqnew_mode\(dq: \(dq\-rwxr\-xr\-x\(dq}]}
{\(dqpath\(dq: \(dqfile2\(dq, \(dqchanges\(dq: [{\(dqtype\(dq: \(dqmodified\(dq, \(dqadded\(dq: 135, \(dqremoved\(dq: 252}]}
{\(dqpath\(dq: \(dqfile4\(dq, \(dqchanges\(dq: [{\(dqtype\(dq: \(dqadded\(dq, \(dqsize\(dq: 0}]}
{\(dqpath\(dq: \(dqfile3\(dq, \(dqchanges\(dq: [{\(dqtype\(dq: \(dqremoved\(dq, \(dqsize\(dq: 0}]}
# Use \-\-sort\-by with a comma\-separated list; sorts apply stably from last to first.
# Here: primary by net size change descending, tie\-breaker by path ascending
$ borg diff \-\-sort\-by=\(dq>size_diff,path\(dq archive1 archive2
+17 B \-5 B [\-rw\-r\-\-r\-\- \-> \-rwxr\-xr\-x] file1
removed 0 B file3
added 0 B file4
+135 B \-252 B file2
.EE
$ borg diff \-\-json\-lines testrepo::archive1 archive3
{"path": "file1", "changes": [{"type": "modified", "added": 17, "removed": 5}, {"type": "mode", "old_mode": "\-rw\-r\-\-r\-\-", "new_mode": "\-rwxr\-xr\-x"}]}
{"path": "file2", "changes": [{"type": "modified", "added": 135, "removed": 252}]}
{"path": "file4", "changes": [{"type": "added", "size": 0}]}
{"path": "file3", "changes": [{"type": "removed", "size": 0}]}
.ft P
.fi
.UNINDENT
.UNINDENT
.SH NOTES
.SS The FORMAT specifier syntax
.sp
The \fB\-\-format\fP option uses Python\(aqs format string syntax <https://docs.python.org/3.10/library/string.html#formatstrings>
\&.
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg diff \-\-format \(aq{content:30} {path}{NL}\(aq ArchiveFoo ArchiveBar
modified: +4.1 kB \-1.0 kB file\-diff
\&...
# {VAR:<NUMBER} \- pad to NUMBER columns left\-aligned.
# {VAR:>NUMBER} \- pad to NUMBER columns right\-aligned.
$ borg diff \-\-format \(aq{content:>30} {path}{NL}\(aq ArchiveFoo ArchiveBar
modified: +4.1 kB \-1.0 kB file\-diff
\&...
.EE
.UNINDENT
.UNINDENT
.sp
The following keys are always available:
.INDENT 0.0
.IP \(bu 2
NEWLINE: OS dependent line separator
.IP \(bu 2
NL: alias of NEWLINE
.IP \(bu 2
NUL: NUL character for creating print0 / xargs \-0 like output
.IP \(bu 2
SPACE: space character
.IP \(bu 2
TAB: tab character
.IP \(bu 2
CR: carriage return character
.IP \(bu 2
LF: line feed character
.UNINDENT
.sp
Keys available only when showing differences between archives:
.INDENT 0.0
.IP \(bu 2
path: archived file path
.IP \(bu 2
change: all available changes
.IP \(bu 2
content: file content change
.IP \(bu 2
mode: file mode change
.IP \(bu 2
type: file type change
.IP \(bu 2
owner: file owner (user/group) change
.IP \(bu 2
group: file group change
.IP \(bu 2
user: file user change
.IP \(bu 2
link: file link change
.IP \(bu 2
directory: file directory change
.IP \(bu 2
blkdev: file block device change
.IP \(bu 2
chrdev: file character device change
.IP \(bu 2
fifo: file fifo change
.IP \(bu 2
mtime: file modification time change
.IP \(bu 2
ctime: file change time change
.IP \(bu 2
isomtime: file modification time change (ISO 8601)
.IP \(bu 2
isoctime: file change time change (ISO 8601)
.UNINDENT
.SS What is compared
.sp
For each matching item in both archives, Borg reports:
.INDENT 0.0
.IP \(bu 2
Content changes: total added/removed bytes within files. If chunker parameters are comparable,
Borg compares chunk IDs quickly; otherwise, it compares the content.
.IP \(bu 2
Metadata changes: user, group, mode, and other metadata shown inline, like
\(dq[old_mode \-> new_mode]\(dq for mode changes. Use \fB\-\-content\-only\fP to suppress metadata changes.
.IP \(bu 2
Added/removed items: printed as \(dqadded SIZE path\(dq or \(dqremoved SIZE path\(dq.
.UNINDENT
.SS Output formats
.sp
The default (text) output shows one line per changed path, e.g.:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
+135 B \-252 B [ \-rw\-r\-\-r\-\- \-> \-rwxr\-xr\-x ] path/to/file
.EE
.UNINDENT
.UNINDENT
.sp
JSON Lines output (\fB\-\-json\-lines\fP) prints one JSON object per changed path, e.g.:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
{\(dqpath\(dq: \(dqPATH\(dq, \(dqchanges\(dq: [
{\(dqtype\(dq: \(dqmodified\(dq, \(dqadded\(dq: BYTES, \(dqremoved\(dq: BYTES},
{\(dqtype\(dq: \(dqmode\(dq, \(dqold_mode\(dq: \(dq\-rw\-r\-\-r\-\-\(dq, \(dqnew_mode\(dq: \(dq\-rwxr\-xr\-x\(dq},
{\(dqtype\(dq: \(dqadded\(dq, \(dqsize\(dq: SIZE},
{\(dqtype\(dq: \(dqremoved\(dq, \(dqsize\(dq: SIZE}
]}
.EE
.UNINDENT
.UNINDENT
.SS Sorting
.sp
Use \fB\-\-sort\-by FIELDS\fP where FIELDS is a comma\-separated list of fields.
Sorts are applied stably from last to first in the given list. Prepend \(dq>\(dq for
descending, \(dq<\(dq (or no prefix) for ascending, for example \fB\-\-sort\-by=\(dq>size_added,path\(dq\fP\&.
Supported fields include:
.INDENT 0.0
.IP \(bu 2
path: the item path
.IP \(bu 2
size_added: total bytes added for the item content
.IP \(bu 2
size_removed: total bytes removed for the item content
.IP \(bu 2
size_diff: size_added \- size_removed (net content change)
.IP \(bu 2
size: size of the item as stored in ARCHIVE2 (0 for removed items)
.IP \(bu 2
user, group, uid, gid, ctime, mtime: taken from the item state in ARCHIVE2 when present
.IP \(bu 2
ctime_diff, mtime_diff: timestamp difference (ARCHIVE2 \- ARCHIVE1)
.UNINDENT
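.sp
A short illustration (archive names are placeholders), combining sorting with \fB\-\-content\-only\fP so that only content changes are listed, largest net growth first:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg diff \-\-content\-only \-\-sort\-by=\(aq>size_diff,path\(aq archive1 archive2
.EE
.UNINDENT
.UNINDENT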
.SS Performance considerations
.sp
diff automatically detects whether the archives were created with the same chunker
parameters. If so, only chunk IDs are compared, which is very fast.
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP


@ -1,6 +1,8 @@
'\" t
.\" Man page generated from reStructuredText.
.
.TH BORG-EXPORT-TAR 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-export-tar \- Export archive contents as a tarball
.
.nr rst2man-indent-level 0
.
@ -28,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-EXPORT-TAR" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-export-tar \- Export archive contents as a tarball
.SH SYNOPSIS
.sp
borg [common options] export\-tar [options] NAME FILE [PATH...]
borg [common options] export\-tar [options] ARCHIVE FILE [PATH...]
.SH DESCRIPTION
.sp
This command creates a tarball from an archive.
@ -51,7 +50,7 @@ before writing it to FILE:
.IP \(bu 2
\&.tar.xz or .txz: xz
.IP \(bu 2
\&.tar.zstd or .tar.zst: zstd
\&.tar.zstd: zstd
.IP \(bu 2
\&.tar.lz4: lz4
.UNINDENT
@ -60,10 +59,11 @@ Alternatively, a \fB\-\-tar\-filter\fP program may be explicitly specified. It s
read the uncompressed tar stream from stdin and write a compressed/filtered
tar stream to stdout.
.sp
Depending on the \fB\-\-tar\-format\fP option, these formats are created:
Depending on the \fB\-tar\-format\fP option, these formats are created:
.TS
box center;
l|l|l.
center;
|l|l|l|.
_
T{
\-\-tar\-format
T} T{
@ -86,7 +86,6 @@ T} T{
POSIX.1\-2001 (pax) format
T} T{
GNU + atime/ctime/mtime ns
+ xattrs
T}
_
T{
@ -97,6 +96,7 @@ T} T{
mtime s, no atime/ctime,
no ACLs/xattrs/bsdflags
T}
_
.TE
.sp
A \fB\-\-sparse\fP option (as found in borg extract) is not supported.
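.sp
As a hedged illustration (archive and file names are placeholders):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# pax format, compressed according to the file extension (.tar.zst \-> zstd)
$ borg export\-tar \-\-tar\-format=PAX Monday Monday.tar.zst
# GNU format through an explicit filter program
$ borg export\-tar \-\-tar\-format=GNU \-\-tar\-filter=\(dqgzip \-9\(dq Monday Monday.tar.gz
.EE
.UNINDENT
.UNINDENT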
@ -115,28 +115,28 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.B ARCHIVE
archive to export
.TP
.B FILE
output tar file. \(dq\-\(dq to write to stdout instead.
output tar file. "\-" to write to stdout instead.
.TP
.B PATH
paths to extract; patterns are supported
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-tar\-filter
.B \-\-tar\-filter
filter program to pipe data through
.TP
.B \-\-list
.B \-\-list
output verbose list of items (files, dirs, ...)
.TP
.BI \-\-tar\-format \ FMT
select tar format: BORG, PAX or GNU
.UNINDENT
.SS Include/Exclude options
.SS Exclusion options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN


@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-EXTRACT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-extract \- Extract archive contents
.
.nr rst2man-indent-level 0
.
@ -27,28 +30,21 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-EXTRACT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-extract \- Extracts archive contents.
.SH SYNOPSIS
.sp
borg [common options] extract [options] NAME [PATH...]
borg [common options] extract [options] ARCHIVE [PATH...]
.SH DESCRIPTION
.sp
This command extracts the contents of an archive.
This command extracts the contents of an archive. By default the entire
archive is extracted but a subset of files and directories can be selected
by passing a list of \fBPATHs\fP as arguments. The file selection can further
be restricted by using the \fB\-\-exclude\fP option.
.sp
By default, the entire archive is extracted, but a subset of files and directories
can be selected by passing a list of \fBPATH\fP arguments. The default interpretation
for the paths to extract is \fIpp:\fP which is a literal path\-prefix match. If you want
to use e.g. a wildcard, you must select a different pattern style such as \fIsh:\fP or
\fIfm:\fP\&. See \fIborg_patterns\fP for more information.
.sp
The file selection can be further restricted by using the \fB\-\-exclude\fP option.
For more help on include/exclude patterns, see the \fIborg_patterns\fP command output.
.sp
By using \fB\-\-dry\-run\fP, you can do all extraction steps except actually writing the
output data: reading metadata and data chunks from the repository, checking the hash/HMAC,
decrypting, and decompressing.
output data: reading metadata and data chunks from the repo, checking the hash/hmac,
decrypting, decompressing.
.sp
\fB\-\-progress\fP can be slower than no progress display, since it makes one additional
pass over the archive metadata.
@ -56,12 +52,12 @@ pass over the archive metadata.
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
Currently, extract always writes into the current working directory (\(dq.\(dq),
Currently, extract always writes into the current working directory ("."),
so make sure you \fBcd\fP to the right place before calling \fBborg extract\fP\&.
.sp
When parent directories are not extracted (because of using file/directory selection
or any other reason), Borg cannot restore parent directories\(aq metadata, e.g., owner,
group, permissions, etc.
or any other reason), borg can not restore parent directories\(aq metadata, e.g. owner,
group, permission, etc.
.UNINDENT
.UNINDENT
.SH OPTIONS
@ -70,43 +66,46 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.B ARCHIVE
archive to extract
.TP
.B PATH
paths to extract; patterns are supported
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-list
output a verbose list of items (files, dirs, ...)
.B \-\-list
output verbose list of items (files, dirs, ...)
.TP
.B \-n\fP,\fB \-\-dry\-run
.B \-n\fP,\fB \-\-dry\-run
do not actually change any files
.TP
.B \-\-numeric\-ids
only use numeric user and group identifiers
.B \-\-numeric\-owner
deprecated, use \fB\-\-numeric\-ids\fP instead
.TP
.B \-\-noflags
.B \-\-numeric\-ids
only obey numeric user and group identifiers
.TP
.B \-\-nobsdflags
deprecated, use \fB\-\-noflags\fP instead
.TP
.B \-\-noflags
do not extract/set flags (e.g. NODUMP, IMMUTABLE)
.TP
.B \-\-noacls
.B \-\-noacls
do not extract/set ACLs
.TP
.B \-\-noxattrs
.B \-\-noxattrs
do not extract/set xattrs
.TP
.B \-\-stdout
.B \-\-stdout
write all extracted data to stdout
.TP
.B \-\-sparse
create holes in the output sparse file from all\-zero chunks
.TP
.B \-\-continue
continue a previously interrupted extraction of the same archive
.B \-\-sparse
create holes in output sparse file from all\-zero chunks
.UNINDENT
.SS Include/Exclude options
.SS Exclusion options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@ -128,28 +127,27 @@ Remove the specified number of leading path elements. Paths with fewer elements
.INDENT 0.0
.INDENT 3.5
.sp
.EX
.nf
.ft C
# Extract entire archive
$ borg extract my\-files
$ borg extract /path/to/repo::my\-files
# Extract entire archive and list files while processing
$ borg extract \-\-list my\-files
$ borg extract \-\-list /path/to/repo::my\-files
# Verify whether an archive could be successfully extracted, but do not write files to disk
$ borg extract \-\-dry\-run my\-files
$ borg extract \-\-dry\-run /path/to/repo::my\-files
# Extract the \(dqsrc\(dq directory
$ borg extract my\-files home/USERNAME/src
# Extract the "src" directory
$ borg extract /path/to/repo::my\-files home/USERNAME/src
# Extract the \(dqsrc\(dq directory but exclude object files
$ borg extract my\-files home/USERNAME/src \-\-exclude \(aq*.o\(aq
# Extract only the C files
$ borg extract my\-files \(aqsh:home/USERNAME/src/*.c\(aq
# Extract the "src" directory but exclude object files
$ borg extract /path/to/repo::my\-files home/USERNAME/src \-\-exclude \(aq*.o\(aq
# Restore a raw device (must not be active/in use/mounted at that time)
$ borg extract \-\-stdout my\-sdx | dd of=/dev/sdx bs=10M
.EE
$ borg extract \-\-stdout /path/to/repo::my\-sdx | dd of=/dev/sdx bs=10M
.ft P
.fi
.UNINDENT
.UNINDENT
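.sp
A hedged addition to the examples above (paths are placeholders), using the \fB\-\-strip\-components\fP option described under the Include/Exclude options as removing leading path elements:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# extract home/USERNAME/src, but drop the leading \(dqhome/USERNAME\(dq components
$ borg extract my\-files home/USERNAME/src \-\-strip\-components 2
.EE
.UNINDENT
.UNINDENT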
.SH SEE ALSO


@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-IMPORT-TAR 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-import-tar \- Create a backup archive from a tarball
.
.nr rst2man-indent-level 0
.
@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-IMPORT-TAR" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-import-tar \- Create a backup archive from a tarball
.SH SYNOPSIS
.sp
borg [common options] import\-tar [options] NAME TARFILE
borg [common options] import\-tar [options] ARCHIVE TARFILE
.SH DESCRIPTION
.sp
This command creates a backup archive from a tarball.
@ -49,7 +49,7 @@ based on its file extension and pipe the file through an appropriate filter:
.IP \(bu 2
\&.tar.xz or .txz: xz \-d
.IP \(bu 2
\&.tar.zstd or .tar.zst: zstd \-d
\&.tar.zstd: zstd \-d
.IP \(bu 2
\&.tar.lz4: lz4 \-d
.UNINDENT
@ -80,42 +80,35 @@ UNIX V7 tar
.IP \(bu 2
SunOS tar with extended attributes
.UNINDENT
.sp
To import multiple tarballs into a single archive, they can be simply
concatenated (e.g. using \(dqcat\(dq) into a single file, and imported with an
\fB\-\-ignore\-zeros\fP option to skip through the stop markers between them.
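.sp
For illustration (file and archive names are placeholders):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# import two concatenated tarballs as one archive, reading from stdin
$ cat part1.tar part2.tar | borg import\-tar \-\-ignore\-zeros combined \-
.EE
.UNINDENT
.UNINDENT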
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.B ARCHIVE
name of archive to create (must be also a valid directory name)
.TP
.B TARFILE
input tar file. \(dq\-\(dq to read from stdin instead.
input tar file. "\-" to read from stdin instead.
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-tar\-filter
.B \-\-tar\-filter
filter program to pipe data through
.TP
.B \-s\fP,\fB \-\-stats
.B \-s\fP,\fB \-\-stats
print statistics for the created archive
.TP
.B \-\-list
.B \-\-list
output verbose list of items (files, dirs, ...)
.TP
.BI \-\-filter \ STATUSCHARS
only display items with the given status characters
.TP
.B \-\-json
.B \-\-json
output stats as JSON (implies \-\-stats)
.TP
.B \-\-ignore\-zeros
ignore zero\-filled blocks in the input tarball
.UNINDENT
.SS Archive options
.INDENT 0.0
@ -124,40 +117,45 @@ ignore zero\-filled blocks in the input tarball
add a comment text to the archive
.TP
.BI \-\-timestamp \ TIMESTAMP
manually specify the archive creation date/time (yyyy\-mm\-ddThh:mm:ss[(+|\-)HH:MM] format, (+|\-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
manually specify the archive creation date/time (UTC, yyyy\-mm\-ddThh:mm:ss format). alternatively, give a reference file/directory.
.TP
.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
write checkpoint every SECONDS seconds (Default: 1800)
.TP
.BI \-\-chunker\-params \ PARAMS
specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE). default: buzhash,19,23,21,4095
.TP
.BI \-C \ COMPRESSION\fR,\fB \ \-\-compression \ COMPRESSION
select compression algorithm, see the output of the \(dqborg help compression\(dq command for details.
select compression algorithm, see the output of the "borg help compression" command for details.
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Export as an uncompressed tar archive
$ borg export\-tar Monday Monday.tar
.nf
.ft C
# export as uncompressed tar
$ borg export\-tar /path/to/repo::Monday Monday.tar
# Import an uncompressed tar archive
$ borg import\-tar Monday Monday.tar
# import an uncompressed tar
$ borg import\-tar /path/to/repo::Monday Monday.tar
# Exclude some file types and compress using gzip
$ borg export\-tar Monday Monday.tar.gz \-\-exclude \(aq*.so\(aq
# exclude some file types, compress using gzip
$ borg export\-tar /path/to/repo::Monday Monday.tar.gz \-\-exclude \(aq*.so\(aq
# Use a higher compression level with gzip
$ borg export\-tar \-\-tar\-filter=\(dqgzip \-9\(dq Monday Monday.tar.gz
# use higher compression level with gzip
$ borg export\-tar \-\-tar\-filter="gzip \-9" repo::Monday Monday.tar.gz
# Copy an archive from repoA to repoB
$ borg \-r repoA export\-tar \-\-tar\-format=BORG archive \- | borg \-r repoB import\-tar archive \-
# copy an archive from repoA to repoB
$ borg export\-tar \-\-tar\-format=BORG repoA::archive \- | borg import\-tar repoB::archive \-
# Export a tar, but instead of storing it on disk, upload it to a remote site using curl
$ borg export\-tar Monday \- | curl \-\-data\-binary @\- https://somewhere/to/POST
# export a tar, but instead of storing it on disk, upload it to remote site using curl
$ borg export\-tar /path/to/repo::Monday \- | curl \-\-data\-binary @\- https://somewhere/to/POST
# Remote extraction via \(aqtarpipe\(aq
$ borg export\-tar Monday \- | ssh somewhere \(dqcd extracted; tar x\(dq
.EE
# remote extraction via "tarpipe"
$ borg export\-tar /path/to/repo::Monday \- | ssh somewhere "cd extracted; tar x"
.ft P
.fi
.UNINDENT
.UNINDENT
.SS Archives transfer script
@ -166,12 +164,14 @@ Outputs a script that copies all archives from repo1 to repo2:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
for N I T in \(gaborg list \-\-format=\(aq{archive} {id} {time:%Y\-%m\-%dT%H:%M:%S}{NL}\(aq\(ga
.nf
.ft C
for A T in \(gaborg list \-\-format=\(aq{archive} {time:%Y\-%m\-%dT%H:%M:%S}{LF}\(aq repo1\(ga
do
echo \(dqborg \-r repo1 export\-tar \-\-tar\-format=BORG aid:$I \- | borg \-r repo2 import\-tar \-\-timestamp=$T $N \-\(dq
echo "borg export\-tar \-\-tar\-format=BORG repo1::$A \- | borg import\-tar \-\-timestamp=$T repo2::$A \-"
done
.EE
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@ -186,7 +186,7 @@ archive contents (all items with metadata and data)
Lost:
.INDENT 0.0
.IP \(bu 2
some archive metadata (like the original command line, execution time, etc.)
some archive metadata (like the original commandline, execution time, etc.)
.UNINDENT
.sp
Please note:

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-INFO 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-info \- Show archive details such as disk space used
.
.nr rst2man-indent-level 0
.
@ -27,91 +30,124 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-INFO" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-info \- Show archive details such as disk space used
.SH SYNOPSIS
.sp
borg [common options] info [options] [NAME]
borg [common options] info [options] [REPOSITORY_OR_ARCHIVE]
.SH DESCRIPTION
.sp
This command displays detailed information about the specified archive.
This command displays detailed information about the specified archive or repository.
.sp
Please note that the deduplicated sizes of the individual archives do not add
up to the deduplicated size of the repository (\(dqall archives\(dq), because the two
mean different things:
up to the deduplicated size of the repository ("all archives"), because the two
are meaning different things:
.sp
This archive / deduplicated size = amount of data stored ONLY for this archive
= unique chunks of this archive.
All archives / deduplicated size = amount of data stored in the repository
All archives / deduplicated size = amount of data stored in the repo
= all chunks in the repository.
.sp
Borg archives can only contain a limited amount of file metadata.
The size of an archive relative to this limit depends on a number of factors,
mainly the number of files, the lengths of paths and other metadata stored for files.
This is shown as \fIutilization of maximum supported archive size\fP\&.
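.sp
For example (a minimal sketch; it relies only on the options documented below):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Show details of the most recently created archive
$ borg info \-\-last 1
# Same information as machine\-readable JSON
$ borg info \-\-last 1 \-\-json
.EE
.UNINDENT
.UNINDENT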
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.B REPOSITORY_OR_ARCHIVE
repository or archive to display information about
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-json
.B \-\-json
format output as JSON
.UNINDENT
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archive names starting with this prefix.
.TP
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp
Comma\-separated list of sorting keys; valid keys are: timestamp, name, id; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied
consider first N archives after other filters were applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
consider last N archives after other filters were applied
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg info aid:f7dea078
Archive name: source\-backup
Archive fingerprint: f7dea0788dfc026cc2be1c0f5b94beb4e4084eb3402fc40c38d8719b1bf2d943
.nf
.ft C
$ borg info /path/to/repo::2017\-06\-29T11:00\-srv
Archive name: 2017\-06\-29T11:00\-srv
Archive fingerprint: b2f1beac2bd553b34e06358afa45a3c1689320d39163890c5bbbd49125f00fe5
Comment:
Hostname: mba2020
Username: tw
Time (start): Sat, 2022\-06\-25 20:51:40
Time (end): Sat, 2022\-06\-25 20:51:40
Duration: 0.03 seconds
Command line: /usr/bin/borg \-r path/to/repo create source\-backup src
Utilization of maximum supported archive size: 0%
Number of files: 244
Original size: 13.80 MB
Deduplicated size: 531 B
.EE
Hostname: myhostname
Username: root
Time (start): Thu, 2017\-06\-29 11:03:07
Time (end): Thu, 2017\-06\-29 11:03:13
Duration: 5.66 seconds
Number of files: 17037
Command line: /usr/sbin/borg create /path/to/repo::2017\-06\-29T11:00\-srv /srv
Utilization of max. archive size: 0%
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
Original size Compressed size Deduplicated size
This archive: 12.53 GB 12.49 GB 1.62 kB
All archives: 121.82 TB 112.41 TB 215.42 GB
Unique chunks Total chunks
Chunk index: 1015213 626934122
$ borg info /path/to/repo \-\-last 1
Archive name: 2017\-06\-29T11:00\-srv
Archive fingerprint: b2f1beac2bd553b34e06358afa45a3c1689320d39163890c5bbbd49125f00fe5
Comment:
Hostname: myhostname
Username: root
Time (start): Thu, 2017\-06\-29 11:03:07
Time (end): Thu, 2017\-06\-29 11:03:13
Duration: 5.66 seconds
Number of files: 17037
Command line: /usr/sbin/borg create /path/to/repo::2017\-06\-29T11:00\-srv /srv
Utilization of max. archive size: 0%
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
Original size Compressed size Deduplicated size
This archive: 12.53 GB 12.49 GB 1.62 kB
All archives: 121.82 TB 112.41 TB 215.42 GB
Unique chunks Total chunks
Chunk index: 1015213 626934122
$ borg info /path/to/repo
Repository ID: d857ce5788c51272c61535062e89eac4e8ef5a884ffbe976e0af9d8765dedfa5
Location: /path/to/repo
Encrypted: Yes (repokey)
Cache: /root/.cache/borg/d857ce5788c51272c61535062e89eac4e8ef5a884ffbe976e0af9d8765dedfa5
Security dir: /root/.config/borg/security/d857ce5788c51272c61535062e89eac4e8ef5a884ffbe976e0af9d8765dedfa5
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
Original size Compressed size Deduplicated size
All archives: 121.82 TB 112.41 TB 215.42 GB
Unique chunks Total chunks
Chunk index: 1015213 626934122
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-list(1)\fP, \fIborg\-diff(1)\fP, \fIborg\-repo\-info(1)\fP
\fIborg\-common(1)\fP, \fIborg\-list(1)\fP, \fIborg\-diff(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.

334
docs/man/borg-init.1 Normal file
View file

@ -0,0 +1,334 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-INIT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-init \- Initialize an empty repository
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.SH SYNOPSIS
.sp
borg [common options] init [options] [REPOSITORY]
.SH DESCRIPTION
.sp
This command initializes an empty repository. A repository is a filesystem
directory containing the deduplicated data from zero or more archives.
.SS Encryption mode TLDR
.sp
The encryption mode can only be configured when creating a new repository \- you can
neither configure it on a per\-archive basis nor change the mode of an existing repository.
This example will likely NOT give optimum performance on your machine (performance
tips will come below):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
borg init \-\-encryption repokey /path/to/repo
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
Borg will:
.INDENT 0.0
.IP 1. 3
Ask you to come up with a passphrase.
.IP 2. 3
Create a borg key (which contains some random secrets. See \fIkey_files\fP).
.IP 3. 3
Derive a "key encryption key" from your passphrase
.IP 4. 3
Encrypt and sign the key with the key encryption key
.IP 5. 3
Store the encrypted borg key inside the repository directory (in the repo config).
This is why it is essential to use a secure passphrase.
.IP 6. 3
Encrypt and sign your backups to prevent anyone from reading or forging them unless they
have the key and know the passphrase. Make sure to keep a backup of
your key \fBoutside\fP the repository \- do not lock yourself out by
"leaving your keys inside your car" (see \fIborg_key_export\fP).
For remote backups the encryption is done locally \- the remote machine
never sees your passphrase, your unencrypted key or your unencrypted files.
Chunking and id generation are also based on your key to improve
your privacy.
.IP 7. 3
Use the key when extracting files to decrypt them and to verify that the contents of
the backups have not been accidentally or maliciously altered.
.UNINDENT
.SS Picking a passphrase
.sp
Make sure you use a good passphrase. Not too short, not too simple. The real
encryption / decryption key is encrypted with / locked by your passphrase.
If an attacker gets your key, he can\(aqt unlock and use it without knowing the
passphrase.
.sp
Be careful with special or non\-ascii characters in your passphrase:
.INDENT 0.0
.IP \(bu 2
Borg processes the passphrase as unicode (and encodes it as utf\-8),
so it does not have problems dealing with even the strangest characters.
.IP \(bu 2
BUT: that does not necessarily apply to your OS / VM / keyboard configuration.
.UNINDENT
.sp
So it is better to use a long passphrase made from simple ascii chars than one
that includes non\-ascii characters or characters that are hard or impossible to
enter on a different keyboard layout.
.sp
You can change your passphrase for existing repos at any time, it won\(aqt affect
the encryption/decryption key or other secrets.
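.sp
For example (a minimal sketch, with a placeholder repository path):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# change only the passphrase; the underlying key material stays the same
$ borg key change\-passphrase /path/to/repo
.ft P
.fi
.UNINDENT
.UNINDENT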
.SS Choosing an encryption mode
.sp
Depending on your hardware, hashing and crypto performance may vary widely.
The easiest way to find out what is fastest is to run \fBborg benchmark cpu\fP\&.
.sp
\fIrepokey\fP modes: if you want ease\-of\-use and "passphrase" security is good enough \-
the key will be stored in the repository (in \fBrepo_dir/config\fP).
.sp
\fIkeyfile\fP modes: if you would rather have "passphrase and having\-the\-key" security \-
the key will be stored in your home directory (in \fB~/.config/borg/keys\fP).
.sp
The following table is roughly sorted in order of preference; the better modes are
in the upper part of the table, while the lower part lists the old and/or unsafe(r) modes:
.\" nanorst: inline-fill
.
.TS
center;
|l|l|l|l|l|.
_
T{
\fBmode (* = keyfile or repokey)\fP
T} T{
\fBID\-Hash\fP
T} T{
\fBEncryption\fP
T} T{
\fBAuthentication\fP
T} T{
\fBV>=\fP
T}
_
T{
\fB*\-blake2\-chacha20\-poly1305\fP
T} T{
BLAKE2b
T} T{
CHACHA20
T} T{
POLY1305
T} T{
1.3
T}
_
T{
\fB*\-chacha20\-poly1305\fP
T} T{
HMAC\-SHA\-256
T} T{
CHACHA20
T} T{
POLY1305
T} T{
1.3
T}
_
T{
\fB*\-blake2\-aes\-ocb\fP
T} T{
BLAKE2b
T} T{
AES256\-OCB
T} T{
AES256\-OCB
T} T{
1.3
T}
_
T{
\fB*\-aes\-ocb\fP
T} T{
HMAC\-SHA\-256
T} T{
AES256\-OCB
T} T{
AES256\-OCB
T} T{
1.3
T}
_
T{
\fB*\-blake2\fP
T} T{
BLAKE2b
T} T{
AES256\-CTR
T} T{
BLAKE2b
T} T{
1.1
T}
_
T{
\fB*\fP
T} T{
HMAC\-SHA\-256
T} T{
AES256\-CTR
T} T{
HMAC\-SHA256
T} T{
any
T}
_
T{
authenticated\-blake2
T} T{
BLAKE2b
T} T{
none
T} T{
BLAKE2b
T} T{
1.1
T}
_
T{
authenticated
T} T{
HMAC\-SHA\-256
T} T{
none
T} T{
HMAC\-SHA256
T} T{
1.1
T}
_
T{
none
T} T{
SHA\-256
T} T{
none
T} T{
none
T} T{
any
T}
_
.TE
.\" nanorst: inline-replace
.
.sp
\fInone\fP mode uses no encryption and no authentication. You\(aqre advised to NOT use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.
.sp
If you do \fBnot\fP want to encrypt the contents of your backups, but still want to detect
malicious tampering use an \fIauthenticated\fP mode. It\(aqs like \fIrepokey\fP minus encryption.
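.sp
For example (a sketch using a mode name from the table above and a placeholder repository path):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# authenticated, but unencrypted repository
$ borg init \-\-encryption=authenticated\-blake2 /path/to/repo
.ft P
.fi
.UNINDENT
.UNINDENT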
.SS Key derivation functions
.INDENT 0.0
.IP \(bu 2
\fB\-\-key\-algorithm argon2\fP is the default and is recommended.
The key encryption key is derived from your passphrase via argon2\-id.
Argon2 is considered more modern and secure than pbkdf2.
.IP \(bu 2
You can use \fB\-\-key\-algorithm pbkdf2\fP if you want to access your repo via old versions of borg.
.UNINDENT
.sp
Our implementation of the argon2\-based key algorithm follows cryptographic best practices:
.INDENT 0.0
.IP \(bu 2
It derives two separate keys from your passphrase: one to encrypt your key and another one
to sign it. \fB\-\-key\-algorithm pbkdf2\fP uses the same key for both.
.IP \(bu 2
It uses encrypt\-then\-mac instead of the encrypt\-and\-mac construction used by \fB\-\-key\-algorithm pbkdf2\fP\&.
.UNINDENT
.sp
Neither is inherently linked to the key derivation function, but since we were going
to break backwards compatibility anyway we took the opportunity to fix all 3 issues at once.
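.sp
For example (a sketch using the \fB\-\-key\-algorithm\fP option documented below and a
placeholder repository path):
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# default key derivation function (argon2)
$ borg init \-\-encryption=repokey /path/to/repo
# use pbkdf2 instead, e.g. for access with old borg versions
$ borg init \-\-encryption=repokey \-\-key\-algorithm pbkdf2 /path/to/repo
.ft P
.fi
.UNINDENT
.UNINDENT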
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B REPOSITORY
repository to create
.UNINDENT
.SS optional arguments
.INDENT 0.0
.TP
.BI \-e \ MODE\fR,\fB \ \-\-encryption \ MODE
select encryption key mode \fB(required)\fP
.TP
.B \-\-append\-only
create an append\-only mode repository. Note that this only affects the low level structure of the repository, and running \fIdelete\fP or \fIprune\fP will still be allowed. See \fIappend_only_mode\fP in Additional Notes for more details.
.TP
.BI \-\-storage\-quota \ QUOTA
Set storage quota of the new repository (e.g. 5G, 1.5T). Default: no quota.
.TP
.B \-\-make\-parent\-dirs
create the parent directories of the repository directory, if they are missing.
.TP
.B \-\-key\-algorithm
the algorithm we use to derive a key encryption key from your passphrase. Default: argon2
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
# Local repository, recommended repokey AEAD crypto modes
$ borg init \-\-encryption=repokey\-aes\-ocb /path/to/repo
$ borg init \-\-encryption=repokey\-chacha20\-poly1305 /path/to/repo
$ borg init \-\-encryption=repokey\-blake2\-aes\-ocb /path/to/repo
$ borg init \-\-encryption=repokey\-blake2\-chacha20\-poly1305 /path/to/repo
# Local repository (no encryption), not recommended
$ borg init \-\-encryption=none /path/to/repo
# Remote repository (accesses a remote borg via ssh)
# repokey: stores the (encrypted) key into <REPO_DIR>/config
$ borg init \-\-encryption=repokey\-aes\-ocb user@hostname:backup
# Remote repository (accesses a remote borg via ssh)
# keyfile: stores the (encrypted) key into ~/.config/borg/keys/
$ borg init \-\-encryption=keyfile\-aes\-ocb user@hostname:backup
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-create(1)\fP, \fIborg\-delete(1)\fP, \fIborg\-check(1)\fP, \fIborg\-list(1)\fP, \fIborg\-key\-import(1)\fP, \fIborg\-key\-export(1)\fP, \fIborg\-key\-change\-passphrase(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-CHANGE-ALGORITHM 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-change-algorithm \- Change repository key algorithm
.
.nr rst2man-indent-level 0
.
@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-CHANGE-ALGORITHM" 1 "2022-06-26" "" "borg backup tool"
.SH NAME
borg-key-change-algorithm \- Change repository key algorithm
.SH SYNOPSIS
.sp
borg [common options] key change\-algorithm [options] ALGORITHM
borg [common options] key change\-algorithm [options] [REPOSITORY] ALGORITHM
.SH DESCRIPTION
.sp
Change the algorithm we use to encrypt and authenticate the borg key.
@ -77,6 +77,8 @@ borg key change\-algorithm /path/to/repo pbkdf2
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.INDENT 0.0
.TP
.B ALGORITHM

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-CHANGE-LOCATION 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-change-location \- Change repository key location
.
.nr rst2man-indent-level 0
.
@ -27,39 +30,35 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-CHANGE-LOCATION" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key-change-location \- Changes the repository key location.
.SH SYNOPSIS
.sp
borg [common options] key change\-location [options] KEY_LOCATION
borg [common options] key change\-location [options] [REPOSITORY] KEY_LOCATION
.SH DESCRIPTION
.sp
Change the location of a Borg key. The key can be stored at different locations:
.INDENT 0.0
.IP \(bu 2
Change the location of a borg key. The key can be stored at different locations:
.sp
keyfile: locally, usually in the home directory
.IP \(bu 2
repokey: inside the repository (in the repository config)
.UNINDENT
.sp
Please note:
.sp
This command does NOT change the crypto algorithms, just the key location,
repokey: inside the repo (in the repo config)
.INDENT 0.0
.TP
.B Note: this command does NOT change the crypto algorithms, just the key location,
thus you must ONLY give the key location (keyfile or repokey).
.UNINDENT
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.INDENT 0.0
.TP
.B KEY_LOCATION
select key location
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-keep
.B \-\-keep
keep the key also at the current location (default: remove it)
.UNINDENT
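.sp
For example (a sketch; it assumes the KEY_LOCATION values described above):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Move the key from a local keyfile into the repository
$ borg key change\-location repokey
# Move it back to a keyfile, keeping a copy at the current location
$ borg key change\-location keyfile \-\-keep
.EE
.UNINDENT
.UNINDENT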
.SH SEE ALSO

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-CHANGE-PASSPHRASE 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-change-passphrase \- Change repository key file passphrase
.
.nr rst2man-indent-level 0
.
@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-CHANGE-PASSPHRASE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key-change-passphrase \- Changes the repository key file passphrase.
.SH SYNOPSIS
.sp
borg [common options] key change\-passphrase [options]
borg [common options] key change\-passphrase [options] [REPOSITORY]
.SH DESCRIPTION
.sp
The key files used for repository encryption are optionally passphrase
@ -45,25 +45,29 @@ does not protect future (nor past) backups to the same repository.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
.nf
.ft C
# Create a key file protected repository
$ borg repo\-create \-\-encryption=keyfile\-aes\-ocb \-v
Initializing repository at \(dq/path/to/repo\(dq
$ borg init \-\-encryption=keyfile \-v /path/to/repo
Initializing repository at "/path/to/repo"
Enter new passphrase:
Enter same passphrase again:
Remember your passphrase. Your data will be inaccessible without it.
Key in \(dq/root/.config/borg/keys/mnt_backup\(dq created.
Key in "/root/.config/borg/keys/mnt_backup" created.
Keep this key safe. Your data will be inaccessible without it.
Synchronizing chunks cache...
Archives: 0, w/ cached Idx: 0, w/ outdated Idx: 0, w/o cached Idx: 0.
Done.
# Change key file passphrase
$ borg key change\-passphrase \-v
$ borg key change\-passphrase \-v /path/to/repo
Enter passphrase for key /root/.config/borg/keys/mnt_backup:
Enter new passphrase:
Enter same passphrase again:
@ -73,8 +77,9 @@ Key updated
# Import a previously\-exported key into the specified
# key file (creating or overwriting the output key)
# (keyfile repositories only)
$ BORG_KEY_FILE=/path/to/output\-key borg key import /path/to/exported
.EE
$ BORG_KEY_FILE=/path/to/output\-key borg key import /path/to/repo /path/to/exported
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@ -82,12 +87,14 @@ Fully automated using environment variables:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ BORG_NEW_PASSPHRASE=old borg repo\-create \-\-encryption=repokey\-aes\-ocb
# now \(dqold\(dq is the current passphrase.
$ BORG_PASSPHRASE=old BORG_NEW_PASSPHRASE=new borg key change\-passphrase
# now \(dqnew\(dq is the current passphrase.
.EE
.nf
.ft C
$ BORG_NEW_PASSPHRASE=old borg init \-e=repokey repo
# now "old" is the current passphrase.
$ BORG_PASSPHRASE=old BORG_NEW_PASSPHRASE=new borg key change\-passphrase repo
# now "new" is the current passphrase.
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-EXPORT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-export \- Export the repository key for backup
.
.nr rst2man-indent-level 0
.
@ -27,16 +30,13 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-EXPORT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key-export \- Exports the repository key for backup.
.SH SYNOPSIS
.sp
borg [common options] key export [options] [PATH]
borg [common options] key export [options] [REPOSITORY] [PATH]
.SH DESCRIPTION
.sp
If repository encryption is used, the repository is inaccessible
without the key. This command allows one to back up this essential key.
without the key. This command allows one to backup this essential key.
Note that the backup produced does not include the passphrase itself
(i.e. the exported key stays encrypted). In order to regain access to a
repository, one needs both the exported key and the original passphrase.
@ -56,38 +56,43 @@ For repositories using the repokey encryption the key is saved in the
repository in the config file. A backup is thus not strictly needed,
but guards against the repository becoming inaccessible if the file
is damaged for some reason.
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
borg key export /path/to/repo > encrypted\-key\-backup
borg key export \-\-paper /path/to/repo > encrypted\-key\-backup.txt
borg key export \-\-qr\-html /path/to/repo > encrypted\-key\-backup.html
# Or pass the output file as an argument instead of redirecting stdout:
borg key export /path/to/repo encrypted\-key\-backup
borg key export \-\-paper /path/to/repo encrypted\-key\-backup.txt
borg key export \-\-qr\-html /path/to/repo encrypted\-key\-backup.html
.ft P
.fi
.UNINDENT
.UNINDENT
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.INDENT 0.0
.TP
.B PATH
where to store the backup
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-paper
.B \-\-paper
Create an export suitable for printing and later type\-in
.TP
.B \-\-qr\-html
Create an HTML file suitable for printing and later type\-in or QR scan
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
borg key export > encrypted\-key\-backup
borg key export \-\-paper > encrypted\-key\-backup.txt
borg key export \-\-qr\-html > encrypted\-key\-backup.html
# Or pass the output file as an argument instead of redirecting stdout:
borg key export encrypted\-key\-backup
borg key export \-\-paper encrypted\-key\-backup.txt
borg key export \-\-qr\-html encrypted\-key\-backup.html
.EE
.UNINDENT
.B \-\-qr\-html
Create an html file suitable for printing and later type\-in or qr scan
.UNINDENT
.SH SEE ALSO
.sp

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY-IMPORT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key-import \- Import the repository key from backup
.
.nr rst2man-indent-level 0
.
@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY-IMPORT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key-import \- Imports the repository key from backup.
.SH SYNOPSIS
.sp
borg [common options] key import [options] [PATH]
borg [common options] key import [options] [REPOSITORY] [PATH]
.SH DESCRIPTION
.sp
This command restores a key previously backed up with the export command.
@ -53,15 +53,17 @@ key import\fP creates a new key file in \fB$BORG_KEYS_DIR\fP\&.
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.sp
REPOSITORY
.INDENT 0.0
.TP
.B PATH
path to the backup (\(aq\-\(aq to read from stdin)
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-paper
.B \-\-paper
interactively import from a backup done with \fB\-\-paper\fP
.UNINDENT
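.sp
For example (a sketch; the path is a placeholder and the syntax follows the synopsis above):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Restore a key previously saved with \(dqborg key export\(dq
$ borg key import encrypted\-key\-backup
# Interactively re\-enter a key that was exported with \-\-paper
$ borg key import \-\-paper
.EE
.UNINDENT
.UNINDENT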
.SH SEE ALSO

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-KEY 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-key \- Manage a keyfile or repokey of a repository
.
.nr rst2man-indent-level 0
.
@ -27,20 +30,18 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-KEY" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-key \- Manage the keyfile or repokey of a repository
.SH SYNOPSIS
.nf
borg [common options] key export ...
borg [common options] key import ...
borg [common options] key change\-passphrase ...
borg [common options] key change\-location ...
borg [common options] key change\-algorithm ...
.fi
.sp
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-key\-export(1)\fP, \fIborg\-key\-import(1)\fP, \fIborg\-key\-change\-passphrase(1)\fP, \fIborg\-key\-change\-location(1)\fP
\fIborg\-common(1)\fP, \fIborg\-key\-export(1)\fP, \fIborg\-key\-import(1)\fP, \fIborg\-key\-change\-passphrase(1)\fP, \fIborg\-key\-change\-location(1)\fP, \fIborg\-key\-change\-algorithm(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-LIST 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-list \- List archive or repository contents
.
.nr rst2man-indent-level 0
.
@ -27,45 +30,63 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-LIST" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-list \- List archive contents.
.SH SYNOPSIS
.sp
borg [common options] list [options] NAME [PATH...]
borg [common options] list [options] [REPOSITORY_OR_ARCHIVE] [PATH...]
.SH DESCRIPTION
.sp
This command lists the contents of an archive.
This command lists the contents of a repository or an archive.
.sp
For more help on include/exclude patterns, see the output of \fIborg_patterns\fP\&.
For more help on include/exclude patterns, see the \fIborg_patterns\fP command output.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.B REPOSITORY_OR_ARCHIVE
repository or archive to list contents of
.TP
.B PATH
paths to list; patterns are supported
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-short
.B \-\-consider\-checkpoints
Show checkpoint archives in the repository contents list (default: hidden).
.TP
.B \-\-short
only print file/directory names, nothing else
.TP
.BI \-\-format \ FORMAT
specify format for file listing (default: \(dq{mode} {user:6} {group:6} {size:8} {mtime} {path}{extra}{NL}\(dq)
specify format for file or archive listing (default for files: "{mode} {user:6} {group:6} {size:8} {mtime} {path}{extra}{NL}"; for archives: "{archive:<36} {time} [{id}]{NL}")
.TP
.B \-\-json\-lines
Format output as JSON Lines. The form of \fB\-\-format\fP is ignored, but keys used in it are added to the JSON output. Some keys are always present. Note: JSON can only represent text.
.B \-\-json
Only valid for listing repository contents. Format output as JSON. The form of \fB\-\-format\fP is ignored, but keys used in it are added to the JSON output. Some keys are always present. Note: JSON can only represent text. A "barchive" key is therefore not available.
.TP
.BI \-\-depth \ N
only list files up to the specified directory depth
.B \-\-json\-lines
Only valid for listing archive contents. Format output as JSON Lines. The form of \fB\-\-format\fP is ignored, but keys used in it are added to the JSON output. Some keys are always present. Note: JSON can only represent text. A "bpath" key is therefore not available.
.UNINDENT
.SS Include/Exclude options
.SS Archive filters
.INDENT 0.0
.TP
.BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archive names starting with this prefix.
.TP
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, name, id; default is: timestamp
.TP
.BI \-\-first \ N
consider first N archives after other filters were applied
.TP
.BI \-\-last \ N
consider last N archives after other filters were applied
.UNINDENT
.SS Exclusion options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@ -84,8 +105,16 @@ read include/exclude patterns from PATTERNFILE, one per line
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg list root\-2016\-02\-15
.nf
.ft C
$ borg list /path/to/repo
Monday Mon, 2016\-02\-15 19:15:11
repo Mon, 2016\-02\-15 19:26:54
root\-2016\-02\-15 Mon, 2016\-02\-15 19:36:29
newname Mon, 2016\-02\-15 19:50:19
\&...
$ borg list /path/to/repo::root\-2016\-02\-15
drwxr\-xr\-x root root 0 Mon, 2016\-02\-15 17:44:27 .
drwxrwxr\-x root root 0 Mon, 2016\-02\-15 19:04:49 bin
\-rwxr\-xr\-x root root 1029624 Thu, 2014\-11\-13 00:08:51 bin/bash
@ -93,14 +122,14 @@ lrwxrwxrwx root root 0 Fri, 2015\-03\-27 20:24:26 bin/bzcmp \-> bzdif
\-rwxr\-xr\-x root root 2140 Fri, 2015\-03\-27 20:24:22 bin/bzdiff
\&...
$ borg list root\-2016\-02\-15 \-\-pattern \(dq\- bin/ba*\(dq
$ borg list /path/to/repo::root\-2016\-02\-15 \-\-pattern "\- bin/ba*"
drwxr\-xr\-x root root 0 Mon, 2016\-02\-15 17:44:27 .
drwxrwxr\-x root root 0 Mon, 2016\-02\-15 19:04:49 bin
lrwxrwxrwx root root 0 Fri, 2015\-03\-27 20:24:26 bin/bzcmp \-> bzdiff
\-rwxr\-xr\-x root root 2140 Fri, 2015\-03\-27 20:24:22 bin/bzdiff
\&...
$ borg list archiveA \-\-format=\(dq{mode} {user:6} {group:6} {size:8d} {isomtime} {path}{extra}{NEWLINE}\(dq
$ borg list /path/to/repo::archiveA \-\-format="{mode} {user:6} {group:6} {size:8d} {isomtime} {path}{extra}{NEWLINE}"
drwxrwxr\-x user user 0 Sun, 2015\-02\-01 11:00:00 .
drwxrwxr\-x user user 0 Sun, 2015\-02\-01 11:00:00 code
drwxrwxr\-x user user 0 Sun, 2015\-02\-01 11:00:00 code/myproject
@ -108,38 +137,50 @@ drwxrwxr\-x user user 0 Sun, 2015\-02\-01 11:00:00 code/myproject
\-rw\-rw\-r\-\- user user 1416192 Sun, 2015\-02\-01 11:00:00 code/myproject/file.text
\&...
$ borg list archiveA \-\-pattern \(aq+ re:\e.ext$\(aq \-\-pattern \(aq\- re:^.*$\(aq
$ borg list /path/to/repo/::archiveA \-\-pattern \(aqre:\e.ext$\(aq
\-rw\-rw\-r\-\- user user 1416192 Sun, 2015\-02\-01 11:00:00 code/myproject/file.ext
\&...
$ borg list archiveA \-\-pattern \(aq+ re:.ext$\(aq \-\-pattern \(aq\- re:^.*$\(aq
$ borg list /path/to/repo/::archiveA \-\-pattern \(aqre:.ext$\(aq
\-rw\-rw\-r\-\- user user 1416192 Sun, 2015\-02\-01 11:00:00 code/myproject/file.ext
\-rw\-rw\-r\-\- user user 1416192 Sun, 2015\-02\-01 11:00:00 code/myproject/file.text
\&...
.EE
.ft P
.fi
.UNINDENT
.UNINDENT
.SH NOTES
.SS The FORMAT specifier syntax
.sp
The \fB\-\-format\fP option uses Python\(aqs format string syntax <https://docs.python.org/3.10/library/string.html#formatstrings>
\&.
The \fB\-\-format\fP option uses python\(aqs \fI\%format string syntax\fP\&.
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg list \-\-format \(aq{mode} {user:6} {group:6} {size:8} {mtime} {path}{extra}{NL}\(aq ArchiveFoo
.nf
.ft C
$ borg list \-\-format \(aq{archive}{NL}\(aq /path/to/repo
ArchiveFoo
ArchiveBar
\&...
# {VAR:NUMBER} \- pad to NUMBER columns.
# Strings are left\-aligned, numbers are right\-aligned.
# Note: time columns except \(ga\(gaisomtime\(ga\(ga, \(ga\(gaisoctime\(ga\(ga and \(ga\(gaisoatime\(ga\(ga cannot be padded.
$ borg list \-\-format \(aq{archive:36} {time} [{id}]{NL}\(aq /path/to/repo
ArchiveFoo Thu, 2021\-12\-09 10:22:28 [0b8e9a312bef3f2f6e2d0fc110c196827786c15eba0188738e81697a7fa3b274]
$ borg list \-\-format \(aq{mode} {user:6} {group:6} {size:8} {mtime} {path}{extra}{NL}\(aq /path/to/repo::ArchiveFoo
\-rw\-rw\-r\-\- user user 1024 Thu, 2021\-12\-09 10:22:17 file\-foo
\&...
# {VAR:<NUMBER} \- pad to NUMBER columns left\-aligned.
# {VAR:>NUMBER} \- pad to NUMBER columns right\-aligned.
$ borg list \-\-format \(aq{mode} {user:>6} {group:>6} {size:<8} {mtime} {path}{extra}{NL}\(aq ArchiveFoo
$ borg list \-\-format \(aq{mode} {user:>6} {group:>6} {size:<8} {mtime} {path}{extra}{NL}\(aq /path/to/repo::ArchiveFoo
\-rw\-rw\-r\-\- user user 1024 Thu, 2021\-12\-09 10:22:17 file\-foo
\&...
.EE
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@ -150,57 +191,93 @@ NEWLINE: OS dependent line separator
.IP \(bu 2
NL: alias of NEWLINE
.IP \(bu 2
NUL: NUL character for creating print0 / xargs \-0 like output
NUL: NUL character for creating print0 / xargs \-0 like output, see barchive and bpath keys below
.IP \(bu 2
SPACE: space character
SPACE
.IP \(bu 2
TAB: tab character
TAB
.IP \(bu 2
CR: carriage return character
CR
.IP \(bu 2
LF: line feed character
LF
.UNINDENT
.sp
Keys available only when listing archives in a repository:
.INDENT 0.0
.IP \(bu 2
archive: archive name interpreted as text (might be missing non\-text characters, see barchive)
.IP \(bu 2
name: alias of "archive"
.IP \(bu 2
barchive: verbatim archive name, can contain any character except NUL
.IP \(bu 2
comment: archive comment interpreted as text (might be missing non\-text characters, see bcomment)
.IP \(bu 2
bcomment: verbatim archive comment, can contain any character except NUL
.IP \(bu 2
id: internal ID of the archive
.IP \(bu 2
start: time (start) of creation of the archive
.IP \(bu 2
time: alias of "start"
.IP \(bu 2
end: time (end) of creation of the archive
.IP \(bu 2
command_line: command line which was used to create the archive
.IP \(bu 2
hostname: hostname of host on which this archive was created
.IP \(bu 2
username: username of user who created this archive
.UNINDENT
.sp
Keys available only when listing files in an archive:
.INDENT 0.0
.IP \(bu 2
type: file type (file, dir, symlink, ...)
type
.IP \(bu 2
mode: file mode (as in stat)
mode
.IP \(bu 2
uid: user id of file owner
uid
.IP \(bu 2
gid: group id of file owner
gid
.IP \(bu 2
user: user name of file owner
user
.IP \(bu 2
group: group name of file owner
group
.IP \(bu 2
path: file path
path: path interpreted as text (might be missing non\-text characters, see bpath)
.IP \(bu 2
target: link target for symlinks
bpath: verbatim POSIX path, can contain any character except NUL
.IP \(bu 2
hlid: hard link identity (same if hardlinking same fs object)
source: link target for links (identical to linktarget)
.IP \(bu 2
inode: inode number
linktarget
.IP \(bu 2
flags: file flags
flags
.IP \(bu 2
size: file size
size
.IP \(bu 2
csize: compressed size
.IP \(bu 2
dsize: deduplicated size
.IP \(bu 2
dcsize: deduplicated compressed size
.IP \(bu 2
num_chunks: number of chunks in this file
.IP \(bu 2
mtime: file modification time
unique_chunks: number of unique chunks in this file
.IP \(bu 2
ctime: file change time
mtime
.IP \(bu 2
atime: file access time
ctime
.IP \(bu 2
isomtime: file modification time (ISO 8601 format)
atime
.IP \(bu 2
isoctime: file change time (ISO 8601 format)
isomtime
.IP \(bu 2
isoatime: file access time (ISO 8601 format)
isoctime
.IP \(bu 2
isoatime
.IP \(bu 2
blake2b
.IP \(bu 2
@ -228,15 +305,17 @@ sha512
.IP \(bu 2
xxh64: XXH64 checksum of this file (note: this is NOT a cryptographic hash!)
.IP \(bu 2
archiveid: internal ID of the archive
archiveid
.IP \(bu 2
archivename: name of the archive
archivename
.IP \(bu 2
extra: prepends {target} with \(dq \-> \(dq for soft links and \(dq link to \(dq for hard links
extra: prepends {source} with " \-> " for soft links and " link to " for hard links
.IP \(bu 2
health: either "healthy" (file ok) or "broken" (if file has all\-zero replacement chunks)
.UNINDENT
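.sp
For example (a sketch; the archive name is a placeholder and only keys listed above are used):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Print an XXH64 checksum and the path for every item
$ borg list \-\-format \(aq{xxh64} {path}{NL}\(aq ArchiveFoo
# Show the health status next to each path
$ borg list \-\-format \(aq{health} {path}{NL}\(aq ArchiveFoo
.EE
.UNINDENT
.UNINDENT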
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP, \fIborg\-info(1)\fP, \fIborg\-diff(1)\fP, \fIborg\-prune(1)\fP, \fIborg\-patterns(1)\fP, \fIborg\-repo\-list(1)\fP
\fIborg\-common(1)\fP, \fIborg\-info(1)\fP, \fIborg\-diff(1)\fP, \fIborg\-prune(1)\fP, \fIborg\-patterns(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.

View file

@ -1,99 +0,0 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-MATCH-ARCHIVES" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-match-archives \- Details regarding match-archives
.SH DESCRIPTION
.sp
The \fB\-\-match\-archives\fP option matches a given pattern against the list of all archives
in the repository. It can be given multiple times.
.sp
The patterns can have a prefix of:
.INDENT 0.0
.IP \(bu 2
name: pattern match on the archive name (default)
.IP \(bu 2
aid: prefix match on the archive id (only one result allowed)
.IP \(bu 2
user: exact match on the username who created the archive
.IP \(bu 2
host: exact match on the hostname where the archive was created
.IP \(bu 2
tags: match on the archive tags
.UNINDENT
.sp
In case of a name pattern match,
it uses pattern styles similar to the ones described by \fBborg help patterns\fP:
.INDENT 0.0
.TP
.B Identical match pattern, selector \fBid:\fP (default)
Simple string match, must fully match exactly as given.
.TP
.B Shell\-style patterns, selector \fBsh:\fP
Match like on the shell, wildcards like \fI*\fP and \fI?\fP work.
.TP
.B Regular expressions <https://docs.python.org/3/library/re.html>
, selector \fBre:\fP
Full regular expression support.
This is very powerful, but can also get rather complicated.
.UNINDENT
.sp
Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# name match, id: style
borg delete \-\-match\-archives \(aqid:archive\-with\-crap\(aq
borg delete \-a \(aqid:archive\-with\-crap\(aq # same, using short option
borg delete \-a \(aqarchive\-with\-crap\(aq # same, because \(aqid:\(aq is the default
# name match, sh: style
borg delete \-a \(aqsh:home\-kenny\-*\(aq
# name match, re: style
borg delete \-a \(aqre:pc[123]\-home\-(user1|user2)\-2022\-09\-.*\(aq
# archive id prefix match:
borg delete \-a \(aqaid:d34db33f\(aq
# host or user match
borg delete \-a \(aquser:kenny\(aq
borg delete \-a \(aqhost:kenny\-pc\(aq
# tags match
borg delete \-a \(aqtags:TAG1\(aq \-a \(aqtags:TAG2\(aq
.EE
.UNINDENT
.UNINDENT
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-MOUNT 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-mount \- Mount archive or an entire repository as a FUSE filesystem
.
.nr rst2man-indent-level 0
.
@ -27,37 +30,15 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-MOUNT" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-mount \- Mounts an archive or an entire repository as a FUSE filesystem.
.SH SYNOPSIS
.sp
borg [common options] mount [options] MOUNTPOINT [PATH...]
borg [common options] mount [options] REPOSITORY_OR_ARCHIVE MOUNTPOINT [PATH...]
.SH DESCRIPTION
.sp
This command mounts a repository or an archive as a FUSE filesystem.
This can be useful for browsing or restoring individual files.
.sp
When restoring, take into account that the current FUSE implementation does
not support special fs flags and ACLs.
.sp
When mounting a repository, the top directories will be named like the
archives and the directory structure below these will be loaded on\-demand from
the repository when entering these directories, so expect some delay.
.sp
Unless the \fB\-\-foreground\fP option is given, the command will run in the
background until the filesystem is \fBunmounted\fP\&.
.sp
Performance tips:
.INDENT 0.0
.IP \(bu 2
When doing a \(dqwhole repository\(dq mount:
do not enter archive directories if not needed; this avoids on\-demand loading.
.IP \(bu 2
Only mount a specific archive, not the whole repository.
.IP \(bu 2
Only mount specific paths in a specific archive, not the complete archive.
.UNINDENT
This command mounts an archive as a FUSE filesystem. This can be useful for
browsing an archive or restoring individual files. Unless the \fB\-\-foreground\fP
option is given the command will run in the background until the filesystem
is \fBumounted\fP\&.
.sp
The command \fBborgfs\fP provides a wrapper for \fBborg mount\fP\&. This can also be
used in fstab entries:
@ -69,107 +50,100 @@ To allow a regular user to use fstab entries, add the \fBuser\fP option:
For FUSE configuration and mount options, see the mount.fuse(8) manual page.
.sp
Borg\(aqs default behavior is to use the archived user and group names of each
file and map them to the system\(aqs respective user and group IDs.
file and map them to the system\(aqs respective user and group ids.
Alternatively, using \fBnumeric\-ids\fP will instead use the archived user and
group IDs without any mapping.
group ids without any mapping.
.sp
The \fBuid\fP and \fBgid\fP mount options (implemented by Borg) can be used to
override the user and group IDs of all files (i.e., \fBborg mount \-o
override the user and group ids of all files (i.e., \fBborg mount \-o
uid=1000,gid=1000\fP).
.sp
The man page references \fBuser_id\fP and \fBgroup_id\fP mount options
(implemented by FUSE) which specify the user and group ID of the mount owner
(also known as the user who does the mounting). It is set automatically by libfuse (or
(implemented by fuse) which specify the user and group id of the mount owner
(aka, the user who does the mounting). It is set automatically by libfuse (or
the filesystem if libfuse is not used). However, you should not specify these
manually. Unlike the \fBuid\fP and \fBgid\fP mount options, which affect all files,
\fBuser_id\fP and \fBgroup_id\fP affect the user and group ID of the mounted
manually. Unlike the \fBuid\fP and \fBgid\fP mount options which affect all files,
\fBuser_id\fP and \fBgroup_id\fP affect the user and group id of the mounted
(base) directory.
.sp
Additional mount options supported by Borg:
Additional mount options supported by borg:
.INDENT 0.0
.IP \(bu 2
\fBversions\fP: when used with a repository mount, this gives a merged, versioned
view of the files in the archives. EXPERIMENTAL; layout may change in the future.
versions: when used with a repository mount, this gives a merged, versioned
view of the files in the archives. EXPERIMENTAL, layout may change in future.
.IP \(bu 2
\fBallow_damaged_files\fP: by default, damaged files (where chunks are missing)
will return EIO (I/O error) when trying to read the related parts of the file.
Set this option to replace the missing parts with all\-zero bytes.
allow_damaged_files: by default damaged files (where missing chunks were
replaced with runs of zeros by borg check \fB\-\-repair\fP) are not readable and
return EIO (I/O error). Set this option to read such files.
.IP \(bu 2
\fBignore_permissions\fP: for security reasons the \fBdefault_permissions\fP mount
option is internally enforced by Borg. \fBignore_permissions\fP can be given to
not enforce \fBdefault_permissions\fP\&.
ignore_permissions: for security reasons the "default_permissions" mount
option is internally enforced by borg. "ignore_permissions" can be given to
not enforce "default_permissions".
.UNINDENT
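.sp
For example (a sketch following the synopsis above; mount point and archive name are placeholders):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Mount the whole repository with a merged \(dqversions\(dq view
$ borg mount \-o versions /mnt/borg
# Mount only one archive, selected via an archive filter
$ borg mount \-a archivename /mnt/borg
# Unmount when done
$ borg umount /mnt/borg
.EE
.UNINDENT
.UNINDENT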
.sp
The BORG_MOUNT_DATA_CACHE_ENTRIES environment variable is intended for advanced users
to tweak performance. It sets the number of cached data chunks; additional
The BORG_MOUNT_DATA_CACHE_ENTRIES environment variable is meant for advanced users
to tweak the performance. It sets the number of cached data chunks; additional
memory usage can be up to ~8 MiB times this number. The default is the number
of CPU cores.
.sp
When the daemonized process receives a signal or crashes, it does not unmount.
Unmounting in these cases could cause an active rsync or similar process
to delete data unintentionally.
to unintentionally delete data.
.sp
When running in the foreground, ^C/SIGINT cleanly unmounts the filesystem,
but other signals or crashes do not.
.sp
Debugging:
.sp
\fBborg mount\fP usually daemonizes and the daemon process sends stdout/stderr
to /dev/null. Thus, you need to either use \fB\-f / \-\-foreground\fP to make it stay
in the foreground and not daemonize, or use \fBBORG_LOGGING_CONF\fP to reconfigure
the logger to output to a file.
When running in the foreground ^C/SIGINT unmounts cleanly, but other
signals or crashes do not.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B REPOSITORY_OR_ARCHIVE
repository or archive to mount
.TP
.B MOUNTPOINT
where to mount the filesystem
where to mount filesystem
.TP
.B PATH
paths to extract; patterns are supported
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-f\fP,\fB \-\-foreground
.B \-\-consider\-checkpoints
Show checkpoint archives in the repository contents list (default: hidden).
.TP
.B \-f\fP,\fB \-\-foreground
stay in foreground, do not daemonize
.TP
.B \-o
extra mount options
.B \-o
Extra mount options
.TP
.B \-\-numeric\-ids
use numeric user and group identifiers from archives
.B \-\-numeric\-owner
deprecated, use \fB\-\-numeric\-ids\fP instead
.TP
.B \-\-numeric\-ids
use numeric user and group identifiers from archive(s)
.UNINDENT
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archive names starting with this prefix.
.TP
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp
Comma\-separated list of sorting keys; valid keys are: timestamp, name, id; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied
consider first N archives after other filters were applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
consider last N archives after other filters were applied
.UNINDENT
.SS Include/Exclude options
.SS Exclusion options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN

View file

@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-PATTERNS 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-patterns \- Details regarding patterns
.
.nr rst2man-indent-level 0
.
@ -27,58 +30,45 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-PATTERNS" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-patterns \- Details regarding patterns
.SH DESCRIPTION
.sp
When specifying one or more file paths in a Borg command that supports
patterns for the respective option or argument, you can apply the
patterns described here to include only desired files and/or exclude
unwanted ones. Patterns can be used
.INDENT 0.0
.IP \(bu 2
for \fB\-\-exclude\fP option,
.IP \(bu 2
in the file given with \fB\-\-exclude\-from\fP option,
.IP \(bu 2
for \fB\-\-pattern\fP option,
.IP \(bu 2
in the file given with \fB\-\-patterns\-from\fP option and
.IP \(bu 2
for \fBPATH\fP arguments that explicitly support them.
.UNINDENT
.sp
The path/filenames used as input for the pattern matching start with the
The path/filenames used as input for the pattern matching start from the
currently active recursion root. You usually give the recursion root(s)
when invoking borg and these can be either relative or absolute paths.
.sp
Be careful, your patterns must match the archived paths:
.INDENT 0.0
.IP \(bu 2
Archived paths never start with a leading slash (\(aq/\(aq), nor with \(aq.\(aq, nor with \(aq..\(aq.
.INDENT 2.0
.IP \(bu 2
When you back up absolute paths like \fB/home/user\fP, the archived
paths start with \fBhome/user\fP\&.
.IP \(bu 2
When you back up relative paths like \fB\&./src\fP, the archived paths
start with \fBsrc\fP\&.
.IP \(bu 2
When you back up relative paths like \fB\&../../src\fP, the archived paths
start with \fBsrc\fP\&.
.UNINDENT
.UNINDENT
So, when you give \fIrelative/\fP as root, the paths going into the matcher
will look like \fIrelative/.../file.ext\fP\&. When you give \fI/absolute/\fP as
root, they will look like \fI/absolute/.../file.ext\fP\&.
.sp
Borg supports different pattern styles. To define a non\-default
style for a specific pattern, prefix it with two characters followed
by a colon \(aq:\(aq (i.e. \fBfm:path/*\fP, \fBsh:path/**\fP).
File paths in Borg archives are always stored normalized and relative.
This means that e.g. \fBborg create /path/to/repo ../some/path\fP will
store all files as \fIsome/path/.../file.ext\fP and \fBborg create
/path/to/repo /home/user\fP will store all files as
\fIhome/user/.../file.ext\fP\&.
.sp
The default pattern style for \fB\-\-exclude\fP differs from \fB\-\-pattern\fP, see below.
A directory exclusion pattern can end either with or without a slash (\(aq/\(aq).
If it ends with a slash, such as \fIsome/path/\fP, the directory will be
included but not its content. If it does not end with a slash, such as
\fIsome/path\fP, both the directory and content will be excluded.
.sp
File patterns support these styles: fnmatch, shell, regular expressions,
path prefixes and path full\-matches. By default, fnmatch is used for
\fB\-\-exclude\fP patterns and shell\-style is used for the \fB\-\-pattern\fP
option. For commands that support patterns in their \fBPATH\fP argument
like (\fBborg list\fP), the default pattern is path prefix.
.sp
Starting with Borg 1.2, for all but regular expression pattern matching
styles, all paths are treated as relative, meaning that a leading path
separator is removed after normalizing and before matching. This allows
you to use absolute or relative patterns arbitrarily.
.sp
If followed by a colon (\(aq:\(aq) the first two characters of a pattern are
used as a style selector. Explicit style selection is necessary when a
non\-default style is desired or when the desired pattern starts with
two alphanumeric characters followed by a colon (i.e. \fIaa:something/*\fP).
.INDENT 0.0
.TP
.B Fnmatch <https://docs.python.org/3/library/fnmatch.html>
, selector \fBfm:\fP
.B \fI\%Fnmatch\fP, selector \fIfm:\fP
This is the default style for \fB\-\-exclude\fP and \fB\-\-exclude\-from\fP\&.
These patterns use a variant of shell pattern syntax, with \(aq*\(aq matching
any number of characters, \(aq?\(aq matching any single character, \(aq[...]\(aq
@ -86,7 +76,7 @@ matching any single character specified, including ranges, and \(aq[!...]\(aq
matching any character not specified. For the purpose of these patterns,
the path separator (backslash for Windows and \(aq/\(aq on other systems) is not
treated specially. Wrap meta\-characters in brackets for a literal
match (i.e. \fB[?]\fP to match the literal character \(aq?\(aq). For a path
match (i.e. \fI[?]\fP to match the literal character \fI?\fP). For a path
to match a pattern, the full path must match, or it must match
from the start of the full path to just before a path separator. Except
for the root path, paths will never end in the path separator when
@ -94,33 +84,33 @@ matching is attempted. Thus, if a given pattern ends in a path
separator, a \(aq*\(aq is appended before matching is attempted. A leading
path separator is always removed.
.TP
.B Shell\-style patterns, selector \fBsh:\fP
.B Shell\-style patterns, selector \fIsh:\fP
This is the default style for \fB\-\-pattern\fP and \fB\-\-patterns\-from\fP\&.
Like fnmatch patterns these are similar to shell patterns. The difference
is that the pattern may include \fB**/\fP for matching zero or more directory
levels, \fB*\fP for matching zero or more arbitrary characters with the
exception of any path separator, \fB{}\fP containing comma\-separated
alternative patterns. A leading path separator is always removed.
is that the pattern may include \fI**/\fP for matching zero or more directory
levels, \fI*\fP for matching zero or more arbitrary characters with the
exception of any path separator. A leading path separator is always removed.
.TP
.B Regular expressions <https://docs.python.org/3/library/re.html>
, selector \fBre:\fP
Unlike shell patterns, regular expressions are not required to match the full
.B Regular expressions, selector \fIre:\fP
Regular expressions similar to those found in Perl are supported. Unlike
shell patterns regular expressions are not required to match the full
path and any substring match is sufficient. It is strongly recommended to
anchor patterns to the start (\(aq^\(aq), to the end (\(aq$\(aq) or both. Path
separators (backslash for Windows and \(aq/\(aq on other systems) in paths are
always normalized to a forward slash \(aq/\(aq before applying a pattern.
always normalized to a forward slash (\(aq/\(aq) before applying a pattern. The
regular expression syntax is described in the \fI\%Python documentation for
the re module\fP\&.
.TP
.B Path prefix, selector \fBpp:\fP
This pattern style is useful to match whole subdirectories. The pattern
\fBpp:root/somedir\fP matches \fBroot/somedir\fP and everything therein.
A leading path separator is always removed.
.B Path prefix, selector \fIpp:\fP
This pattern style is useful to match whole sub\-directories. The pattern
\fIpp:root/somedir\fP matches \fIroot/somedir\fP and everything therein. A leading
path separator is always removed.
.TP
.B Path full\-match, selector \fBpf:\fP
.B Path full\-match, selector \fIpf:\fP
This pattern style is (only) useful to match full paths.
This is kind of a pseudo pattern as it cannot have any variable or
unspecified parts \- the full path must be given. \fBpf:root/file.ext\fP
matches \fBroot/file.ext\fP only. A leading path separator is always
removed.
This is kind of a pseudo pattern as it can not have any variable or
unspecified parts \- the full path must be given. \fIpf:root/file.ext\fP matches
\fIroot/file.ext\fP only. A leading path separator is always removed.
.sp
Implementation note: this is implemented via very time\-efficient O(1)
hashtable lookups (this means you can have huge amounts of such patterns
@@ -135,12 +125,12 @@ Same logic applies for exclude.
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
\fBre:\fP, \fBsh:\fP and \fBfm:\fP patterns are all implemented on top of
the Python SRE engine. It is very easy to formulate patterns for each
of these types which requires an inordinate amount of time to match
paths. If untrusted users are able to supply patterns, ensure they
cannot supply \fBre:\fP patterns. Further, ensure that \fBsh:\fP and
\fBfm:\fP patterns only contain a handful of wildcards at most.
\fIre:\fP, \fIsh:\fP and \fIfm:\fP patterns are all implemented on top of the Python SRE
engine. It is very easy to formulate patterns for each of these types which
requires an inordinate amount of time to match paths. If untrusted users
are able to supply patterns, ensure they cannot supply \fIre:\fP patterns.
Further, ensure that \fIsh:\fP and \fIfm:\fP patterns only contain a handful of
wildcards at most.
.UNINDENT
.UNINDENT
.sp
@@ -148,18 +138,9 @@ Exclusions can be passed via the command line option \fB\-\-exclude\fP\&. When u
from within a shell, the patterns should be quoted to protect them from
expansion.
.sp
Patterns matching special characters, e.g. whitespace, within a shell may
require adjustments, such as putting quotation marks around the arguments.
Example:
Using bash, the following command line option would match and exclude \(dqitem name\(dq:
\fB\-\-pattern=\(aq\-path/item name\(aq\fP
Note that when patterns are used within a pattern file directly read by borg,
e.g. when using \fB\-\-exclude\-from\fP or \fB\-\-patterns\-from\fP, there is no shell
involved and thus no quotation marks are required.
.sp
The \fB\-\-exclude\-from\fP option permits loading exclusion patterns from a text
file with one pattern per line. Lines empty or starting with the hash sign
\(aq#\(aq after removing whitespace on both ends are ignored. The optional style
file with one pattern per line. Lines empty or starting with the number sign
(\(aq#\(aq) after removing whitespace on both ends are ignored. The optional style
selector prefix is also supported for patterns loaded from a file. Due to
whitespace removal, paths with whitespace at the beginning or end can only be
excluded using regular expressions.
@@ -171,133 +152,77 @@ Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Exclude a directory anywhere in the tree named \(ga\(gasteamapps/common\(ga\(ga
# (and everything below it), regardless of where it appears:
$ borg create \-e \(aqsh:**/steamapps/common/**\(aq archive /
# Exclude the contents of \(ga\(ga/home/user/.cache\(ga\(ga:
$ borg create \-e \(aqsh:home/user/.cache/**\(aq archive /home/user
$ borg create \-e home/user/.cache/ archive /home/user
# The file \(aq/home/user/.cache/important\(aq is *not* backed up:
$ borg create \-e home/user/.cache/ archive / /home/user/.cache/important
.nf
.ft C
# Exclude \(aq/home/user/file.o\(aq but not \(aq/home/user/file.odt\(aq:
$ borg create \-e \(aq*.o\(aq archive /
$ borg create \-e \(aq*.o\(aq backup /
# Exclude \(aq/home/user/junk\(aq and \(aq/home/user/subdir/junk\(aq but
# not \(aq/home/user/importantjunk\(aq or \(aq/etc/junk\(aq:
$ borg create \-e \(aqhome/*/junk\(aq archive /
$ borg create \-e \(aq/home/*/junk\(aq backup /
# Exclude the contents of \(aq/home/user/cache\(aq but not the directory itself:
$ borg create \-e home/user/cache/ backup /
# The file \(aq/home/user/cache/important\(aq is *not* backed up:
$ borg create \-e /home/user/cache/ backup / /home/user/cache/important
# The contents of directories in \(aq/home\(aq are not backed up when their name
# ends in \(aq.tmp\(aq
$ borg create \-\-exclude \(aqre:^home/[^/]+\e.tmp/\(aq archive /
$ borg create \-\-exclude \(aqre:^/home/[^/]+\e.tmp/\(aq backup /
# Load exclusions from file
$ cat >exclude.txt <<EOF
# Comment line
home/*/junk
/home/*/junk
*.tmp
fm:aa:something/*
re:^home/[^/]+\e.tmp/
sh:home/*/.thumbnails
re:^/home/[^/]+\e.tmp/
sh:/home/*/.thumbnails
# Example with spaces, no need to escape as it is processed by borg
some file with spaces.txt
EOF
$ borg create \-\-exclude\-from exclude.txt archive /
.EE
$ borg create \-\-exclude\-from exclude.txt backup /
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
A more general and easier to use way to define filename matching patterns
exists with the \fB\-\-pattern\fP and \fB\-\-patterns\-from\fP options. Using
these, you may specify the backup roots, default pattern styles and
patterns for inclusion and exclusion.
.INDENT 0.0
.TP
.B Root path prefix \fBR\fP
A recursion root path starts with the prefix \fBR\fP, followed by a path
(a plain path, not a file pattern). Use this prefix to have the root
paths in the patterns file rather than as command line arguments.
.TP
.B Pattern style prefix \fBP\fP (only useful within patterns files)
To change the default pattern style, use the \fBP\fP prefix, followed by
the pattern style abbreviation (\fBfm\fP, \fBpf\fP, \fBpp\fP, \fBre\fP, \fBsh\fP).
All patterns following this line in the same patterns file will use this
style until another style is specified or the end of the file is reached.
When the current patterns file is finished, the default pattern style will
reset.
.TP
.B Exclude pattern prefix \fB\-\fP
Use the prefix \fB\-\fP, followed by a pattern, to define an exclusion.
This has the same effect as the \fB\-\-exclude\fP option.
.TP
.B Exclude no\-recurse pattern prefix \fB!\fP
Use the prefix \fB!\fP, followed by a pattern, to define an exclusion
that does not recurse into subdirectories. This saves time, but
prevents include patterns from matching any files in subdirectories.
.TP
.B Include pattern prefix \fB+\fP
Use the prefix \fB+\fP, followed by a pattern, to define inclusions.
This is useful to include paths that are covered in an exclude
pattern and would otherwise not be backed up.
.UNINDENT
.sp
The first matching pattern is used, so if an include pattern matches
before an exclude pattern, the file is backed up. Note that a no\-recurse
exclude stops examination of subdirectories so that potential includes
will not match \- use normal excludes for such use cases.
.sp
Example:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Define the recursion root
R /
# Exclude all iso files in any directory
\- **/*.iso
# Explicitly include all inside etc and root
+ etc/**
+ root/**
# Exclude a specific directory under each user\(aqs home directories
\- home/*/.cache
# Explicitly include everything in /home
+ home/**
# Explicitly exclude some directories without recursing into them
! re:^(dev|proc|run|sys|tmp)
# Exclude all other files and directories
# that are not specifically included earlier.
\- **
.EE
.UNINDENT
.UNINDENT
.sp
\fBTip: You can easily test your patterns with \-\-dry\-run and \-\-list\fP:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg create \-\-dry\-run \-\-list \-\-patterns\-from patterns.txt archive
.EE
.UNINDENT
.UNINDENT
.sp
This will list the considered files one per line, prefixed with a
character that indicates the action (e.g. \(aqx\(aq for excluding, see
\fBItem flags\fP in \fIborg create\fP usage docs).
A more general and easier to use way to define filename matching patterns exists
with the \fB\-\-pattern\fP and \fB\-\-patterns\-from\fP options. Using these, you may
specify the backup roots (starting points) and patterns for inclusion/exclusion.
A root path starts with the prefix \fIR\fP, followed by a path (a plain path, not a
file pattern). An include rule starts with the prefix +, an exclude rule starts
with the prefix \-, an exclude\-norecurse rule starts with !, all followed by a pattern.
.sp
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
It is possible that a subdirectory or file is matched while its parent
directories are not. In that case, parent directories are not backed
up and thus their user, group, permission, etc. cannot be restored.
Via \fB\-\-pattern\fP or \fB\-\-patterns\-from\fP you can define BOTH inclusion and exclusion
of files using pattern prefixes \fB+\fP and \fB\-\fP\&. With \fB\-\-exclude\fP and
\fB\-\-exclude\-from\fP ONLY excludes are defined.
.UNINDENT
.UNINDENT
.sp
Inclusion patterns are useful to include paths that are contained in an excluded
path. The first matching pattern is used so if an include pattern matches before
an exclude pattern, the file is backed up. If an exclude\-norecurse pattern matches
a directory, it won\(aqt recurse into it and won\(aqt discover any potential matches for
include rules below that directory.
.sp
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
It\(aqs possible that a sub\-directory/file is matched while parent directories are not.
In that case, parent directories are not backed up thus their user, group, permission,
etc. can not be restored.
.UNINDENT
.UNINDENT
.sp
Note that the default pattern style for \fB\-\-pattern\fP and \fB\-\-patterns\-from\fP is
shell style (\fIsh:\fP), so those patterns behave similar to rsync include/exclude
patterns. The pattern style can be set via the \fIP\fP prefix.
.sp
Patterns (\fB\-\-pattern\fP) and excludes (\fB\-\-exclude\fP) from the command line are
considered first (in the order of appearance). Then patterns from \fB\-\-patterns\-from\fP
are added. Exclusion patterns from \fB\-\-exclude\-from\fP files are appended last.
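.sp
For example, in the following invocation the command line pattern is
considered first, then the entries from patterns.lst, and the entries from
exclude.txt are appended last (an illustrative sketch; the file names are
placeholders):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# evaluation order: \(aq+etc/**\(aq, then patterns.lst, then exclude.txt
$ borg create \-\-pattern=\(aq+etc/**\(aq \-\-patterns\-from patterns.lst \-\-exclude\-from exclude.txt archive /
.EE
.UNINDENT
.UNINDENT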
@@ -306,20 +231,16 @@ Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# back up pics, but not the ones from 2018, except the good ones:
.nf
.ft C
# backup pics, but not the ones from 2018, except the good ones:
# note: using = is essential to avoid cmdline argument parsing issues.
borg create \-\-pattern=+pics/2018/good \-\-pattern=\-pics/2018 archive pics
borg create \-\-pattern=+pics/2018/good \-\-pattern=\-pics/2018 repo::arch pics
# back up only JPG/JPEG files (case insensitive) in all home directories:
borg create \-\-pattern \(aq+ re:\e.jpe?g(?i)$\(aq archive /home
# back up homes, but exclude big downloads (like .ISO files) or hidden files:
borg create \-\-exclude \(aqre:\e.iso(?i)$\(aq \-\-exclude \(aqsh:home/**/.*\(aq archive /home
# use a file with patterns (recursion root \(aq/\(aq via command line):
borg create \-\-patterns\-from patterns.lst archive /
.EE
# use a file with patterns:
borg create \-\-patterns\-from patterns.lst repo::arch
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@@ -327,19 +248,26 @@ The patterns.lst file could look like that:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# \(dqsh:\(dq pattern style is the default
# exclude caches
\- home/*/.cache
.nf
.ft C
# "sh:" pattern style is the default, so the following line is not needed:
P sh
R /
# can be rebuild
\- /home/*/.cache
# they\(aqre downloads for a reason
\- /home/*/Downloads
# susan is a nice person
# include susans home
+ home/susan
+ /home/susan
# also back up this exact file
+ pf:home/bobby/specialfile.txt
# don\(aqt back up the other home directories
\- home/*
# don\(aqt even look in /dev, /proc, /run, /sys, /tmp (note: would exclude files like /device, too)
! re:^(dev|proc|run|sys|tmp)
.EE
+ pf:/home/bobby/specialfile.txt
# don\(aqt backup the other home directories
\- /home/*
# don\(aqt even look in /proc
! /proc
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@@ -347,26 +275,31 @@ You can specify recursion roots either on the command line or in a patternfile:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
.nf
.ft C
# these two commands do the same thing
borg create \-\-exclude home/bobby/junk archive /home/bobby /home/susan
borg create \-\-patterns\-from patternfile.lst archive
.EE
borg create \-\-exclude /home/bobby/junk repo::arch /home/bobby /home/susan
borg create \-\-patterns\-from patternfile.lst repo::arch
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
patternfile.lst:
The patternfile:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
.nf
.ft C
# note that excludes use fm: by default and patternfiles use sh: by default.
# therefore, we need to specify fm: to have the same exact behavior.
P fm
R /home/bobby
R /home/susan
\- home/bobby/junk
.EE
\- /home/bobby/junk
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-PLACEHOLDERS 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-placeholders \- Details regarding placeholders
.
.nr rst2man-indent-level 0
.
@@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-PLACEHOLDERS" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-placeholders \- Details regarding placeholders
.SH DESCRIPTION
.sp
Repository URLs, \fB\-\-name\fP, \fB\-a\fP / \fB\-\-match\-archives\fP, \fB\-\-comment\fP
Repository (or Archive) URLs, \fB\-\-prefix\fP, \fB\-\-glob\-archives\fP, \fB\-\-comment\fP
and \fB\-\-remote\-path\fP values support these placeholders:
.INDENT 0.0
.TP
@@ -47,13 +47,11 @@ The full name of the machine in reverse domain name notation.
.TP
.B {now}
The current local date and time, by default in ISO\-8601 format.
You can also supply your own format string <https://docs.python.org/3.10/library/datetime.html#strftime-and-strptime-behavior>
, e.g. {now:%Y\-%m\-%d_%H:%M:%S}
You can also supply your own \fI\%format string\fP, e.g. {now:%Y\-%m\-%d_%H:%M:%S}
.TP
.B {utcnow}
The current UTC date and time, by default in ISO\-8601 format.
You can also supply your own format string <https://docs.python.org/3.10/library/datetime.html#strftime-and-strptime-behavior>
, e.g. {utcnow:%Y\-%m\-%d_%H:%M:%S}
You can also supply your own \fI\%format string\fP, e.g. {utcnow:%Y\-%m\-%d_%H:%M:%S}
.TP
.B {user}
The user name (or UID, if no name is available) of the user running borg.
@@ -78,9 +76,11 @@ If literal curly braces need to be used, double them for escaping:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
borg create \-\-repo /path/to/repo {{literal_text}}
.EE
.nf
.ft C
borg create /path/to/repo::{{literal_text}}
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@@ -88,11 +88,13 @@ Examples:
.INDENT 0.0
.INDENT 3.5
.sp
.EX
borg create \-\-repo /path/to/repo {hostname}\-{user}\-{utcnow} ...
borg create \-\-repo /path/to/repo {hostname}\-{now:%Y\-%m\-%d_%H:%M:%S%z} ...
borg prune \-a \(aqsh:{hostname}\-*\(aq ...
.EE
.nf
.ft C
borg create /path/to/repo::{hostname}\-{user}\-{utcnow} ...
borg create /path/to/repo::{hostname}\-{now:%Y\-%m\-%d_%H:%M:%S} ...
borg prune \-\-prefix \(aq{hostname}\-\(aq ...
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-PRUNE 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-prune \- Prune repository archives according to specified rules
.
.nr rst2man-indent-level 0
.
@@ -27,199 +30,160 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-PRUNE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-prune \- Prune archives according to specified rules.
.SH SYNOPSIS
.sp
borg [common options] prune [options] [NAME]
borg [common options] prune [options] [REPOSITORY]
.SH DESCRIPTION
.sp
The prune command prunes a repository by soft\-deleting all archives not
matching any of the specified retention options.
The prune command prunes a repository by deleting all archives not matching
any of the specified retention options.
.sp
Important:
.INDENT 0.0
.IP \(bu 2
The prune command will only mark archives for deletion (\(dqsoft\-deletion\(dq),
repository disk space is \fBnot\fP freed until you run \fBborg compact\fP\&.
.IP \(bu 2
You can use \fBborg undelete\fP to undelete archives, but only until
you run \fBborg compact\fP\&.
.UNINDENT
Important: Repository disk space is \fBnot\fP freed until you run \fBborg compact\fP\&.
.sp
This command is normally used by automated backup scripts wanting to keep a
certain number of historic backups. This retention policy is commonly referred to as
GFS <https://en.wikipedia.org/wiki/Backup_rotation_scheme#Grandfather-father-son>
\fI\%GFS\fP
(Grandfather\-father\-son) backup rotation scheme.
.sp
The recommended way to use prune is to give the archive series name to it via the
NAME argument (assuming you have the same name for all archives in a series).
Alternatively, you can also use \-\-match\-archives (\-a), then only archives that
match the pattern are considered for deletion and only those archives count
towards the totals specified by the rules.
Also, prune automatically removes checkpoint archives (incomplete archives left
behind by interrupted backup runs) except if the checkpoint is the latest
archive (and thus still needed). Checkpoint archives are not considered when
comparing archive counts against the retention limits (\fB\-\-keep\-X\fP).
.sp
If a prefix is set with \-P, then only archives that start with the prefix are
considered for deletion and only those archives count towards the totals
specified by the rules.
Otherwise, \fIall\fP archives in the repository are candidates for deletion!
There is no automatic distinction between archives representing different
contents. These need to be distinguished by specifying matching globs.
contents. These need to be distinguished by specifying matching prefixes.
.sp
If you have multiple series of archives with different data sets (e.g.
If you have multiple sequences of archives with different data sets (e.g.
from different machines) in one shared repository, use one prune call per
series.
data set that matches only the respective archives using the \-P option.
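.sp
For example, with archives from two machines in one shared repository, run
prune once per series (an illustrative sketch; \(dqwebserver\(dq and
\(dqworkstation\(dq are placeholder series names, shown with the NAME argument
form):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg prune \-\-keep\-daily=7 \-\-keep\-weekly=4 webserver
$ borg prune \-\-keep\-daily=7 \-\-keep\-weekly=4 workstation
.EE
.UNINDENT
.UNINDENT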
.sp
The \fB\-\-keep\-within\fP option takes an argument of the form \(dq<int><char>\(dq,
where char is \(dqy\(dq, \(dqm\(dq, \(dqw\(dq, \(dqd\(dq, \(dqH\(dq, \(dqM\(dq, or \(dqS\(dq. For example,
\fB\-\-keep\-within 2d\fP means to keep all archives that were created within
the past 2 days. \(dq1m\(dq is taken to mean \(dq31d\(dq. The archives kept with
this option do not count towards the totals specified by any other options.
The \fB\-\-keep\-within\fP option takes an argument of the form "<int><char>",
where char is "H", "d", "w", "m", "y". For example, \fB\-\-keep\-within 2d\fP means
to keep all archives that were created within the past 48 hours.
"1m" is taken to mean "31d". The archives kept with this option do not
count towards the totals specified by any other options.
.sp
A good procedure is to thin out more and more the older your backups get.
As an example, \fB\-\-keep\-daily 7\fP means to keep the latest backup on each day,
up to 7 most recent days with backups (days without backups do not count).
The rules are applied from secondly to yearly, and backups selected by previous
rules do not count towards those of later rules. The time that each backup
starts is used for pruning purposes. Dates and times are interpreted in the local
timezone of the system where borg prune runs, and weeks go from Monday to Sunday.
Specifying a negative number of archives to keep means that there is no limit.
.sp
Borg will retain the oldest archive if any of the secondly, minutely, hourly,
daily, weekly, monthly, quarterly, or yearly rules was not otherwise able to
meet its retention target. This enables the first chronological archive to
continue aging until it is replaced by a newer archive that meets the retention
criteria.
.sp
The \fB\-\-keep\-13weekly\fP and \fB\-\-keep\-3monthly\fP rules are two different
strategies for keeping archives every quarter year.
starts is used for pruning purposes. Dates and times are interpreted in
the local timezone, and weeks go from Monday to Sunday. Specifying a
negative number of archives to keep means that there is no limit. As of borg
1.2.0, borg will retain the oldest archive if any of the secondly, minutely,
hourly, daily, weekly, monthly, or yearly rules was not otherwise able to meet
its retention target. This enables the first chronological archive to continue
aging until it is replaced by a newer archive that meets the retention criteria.
.sp
The \fB\-\-keep\-last N\fP option is doing the same as \fB\-\-keep\-secondly N\fP (and it will
keep the last N archives under the assumption that you do not create more than one
backup archive in the same second).
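.sp
For example (illustrative; \(dqarchivename\(dq is a placeholder series name):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# keep only the 10 most recent archives of this series:
$ borg prune \-\-dry\-run \-\-list \-\-keep\-last 10 archivename
.EE
.UNINDENT
.UNINDENT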
.sp
You can influence how the \fB\-\-list\fP output is formatted by using the \fB\-\-short\fP
option (less wide output) or by giving a custom format using \fB\-\-format\fP (see
the \fBborg repo\-list\fP description for more details about the format string).
When using \fB\-\-stats\fP, you will get some statistics about how much data was
deleted \- the "Deleted data" deduplicated size there is most interesting as
that is how much your repository will shrink.
Please note that the "All archives" stats refer to the state after pruning.
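.sp
For example, to preview the effect of a rule set using the options described
above (an illustrative sketch; \(dqarchivename\(dq is a placeholder):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# narrow output:
$ borg prune \-\-dry\-run \-\-list \-\-short \-\-keep\-daily=7 archivename
# custom archive line format:
$ borg prune \-\-dry\-run \-\-list \-\-format \(aq{archive} {time}\(aq \-\-keep\-daily=7 archivename
.EE
.UNINDENT
.UNINDENT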
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B NAME
specify the archive name
.B REPOSITORY
repository to prune
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-n\fP,\fB \-\-dry\-run
do not change the repository
.B \-n\fP,\fB \-\-dry\-run
do not change repository
.TP
.B \-\-list
output a verbose list of archives it keeps/prunes
.B \-\-force
force pruning of corrupted archives, use \fB\-\-force \-\-force\fP in case \fB\-\-force\fP does not work.
.TP
.B \-\-short
use a less wide archive part format
.B \-s\fP,\fB \-\-stats
print statistics for the deleted archive
.TP
.B \-\-list\-pruned
output verbose list of archives it prunes
.TP
.B \-\-list\-kept
output verbose list of archives it keeps
.TP
.BI \-\-format \ FORMAT
specify format for the archive part (default: \(dq{archive:<36} {time} [{id}]\(dq)
.B \-\-list
output verbose list of archives it keeps/prunes
.TP
.BI \-\-keep\-within \ INTERVAL
keep all archives within this time interval
.TP
.B \-\-keep\-last\fP,\fB \-\-keep\-secondly
.B \-\-keep\-last\fP,\fB \-\-keep\-secondly
number of secondly archives to keep
.TP
.B \-\-keep\-minutely
.B \-\-keep\-minutely
number of minutely archives to keep
.TP
.B \-H\fP,\fB \-\-keep\-hourly
.B \-H\fP,\fB \-\-keep\-hourly
number of hourly archives to keep
.TP
.B \-d\fP,\fB \-\-keep\-daily
.B \-d\fP,\fB \-\-keep\-daily
number of daily archives to keep
.TP
.B \-w\fP,\fB \-\-keep\-weekly
.B \-w\fP,\fB \-\-keep\-weekly
number of weekly archives to keep
.TP
.B \-m\fP,\fB \-\-keep\-monthly
.B \-m\fP,\fB \-\-keep\-monthly
number of monthly archives to keep
.TP
.B \-\-keep\-13weekly
number of quarterly archives to keep (13 week strategy)
.TP
.B \-\-keep\-3monthly
number of quarterly archives to keep (3 month strategy)
.TP
.B \-y\fP,\fB \-\-keep\-yearly
.B \-y\fP,\fB \-\-keep\-yearly
number of yearly archives to keep
.TP
.B \-\-save\-space
work slower, but using less space
.UNINDENT
.SS Archive filters
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.BI \-P \ PREFIX\fR,\fB \ \-\-prefix \ PREFIX
only consider archive names starting with this prefix.
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.BI \-a \ GLOB\fR,\fB \ \-\-glob\-archives \ GLOB
only consider archive names matching the glob. sh: rules apply, see "borg help patterns". \fB\-\-prefix\fP and \fB\-\-glob\-archives\fP are mutually exclusive.
.UNINDENT
.SH EXAMPLES
.sp
Be careful: prune is a potentially dangerous command that removes backup
Be careful, prune is a potentially dangerous command, it will remove backup
archives.
.sp
By default, prune applies to \fBall archives in the repository\fP unless you
restrict its operation to a subset of the archives.
.sp
The recommended way to name archives (with \fBborg create\fP) is to use the
identical archive name within a series of archives. Then you can simply give
that name to prune as well, so it operates only on that series of archives.
.sp
Alternatively, you can use \fB\-a\fP/\fB\-\-match\-archives\fP to match archive names
and select a subset of them.
When using \fB\-a\fP, be careful to choose a good pattern — for example, do not use a
prefix \(dqfoo\(dq if you do not also want to match \(dqfoobar\(dq.
The default of prune is to apply to \fBall archives in the repository\fP unless
you restrict its operation to a subset of the archives using \fB\-\-prefix\fP\&.
When using \fB\-\-prefix\fP, be careful to choose a good prefix \- e.g. do not use a
prefix "foo" if you do not also want to match "foobar".
.sp
It is strongly recommended to always run \fBprune \-v \-\-list \-\-dry\-run ...\fP
first, so you will see what it would do without it actually doing anything.
.sp
Do not forget to run \fBborg compact \-v\fP after prune to actually free disk space.
first so you will see what it would do without it actually doing anything.
.INDENT 0.0
.INDENT 3.5
.sp
.EX
.nf
.ft C
# Keep 7 end of day and 4 additional end of week archives.
# Do a dry\-run without actually deleting anything.
$ borg prune \-v \-\-list \-\-dry\-run \-\-keep\-daily=7 \-\-keep\-weekly=4
$ borg prune \-v \-\-list \-\-dry\-run \-\-keep\-daily=7 \-\-keep\-weekly=4 /path/to/repo
# Similar to the above, but only apply to the archive series named \(aq{hostname}\(aq:
$ borg prune \-v \-\-list \-\-keep\-daily=7 \-\-keep\-weekly=4 \(aq{hostname}\(aq
# Similar to the above, but apply to archive names starting with the hostname
# of the machine followed by a \(aq\-\(aq character:
$ borg prune \-v \-\-list \-\-keep\-daily=7 \-\-keep\-weekly=4 \-a \(aqsh:{hostname}\-*\(aq
# Same as above but only apply to archive names starting with the hostname
# of the machine followed by a "\-" character:
$ borg prune \-v \-\-list \-\-keep\-daily=7 \-\-keep\-weekly=4 \-\-prefix=\(aq{hostname}\-\(aq /path/to/repo
# actually free disk space:
$ borg compact /path/to/repo
# Keep 7 end of day, 4 additional end of week archives,
# and an end of month archive for every month:
$ borg prune \-v \-\-list \-\-keep\-daily=7 \-\-keep\-weekly=4 \-\-keep\-monthly=\-1
$ borg prune \-v \-\-list \-\-keep\-daily=7 \-\-keep\-weekly=4 \-\-keep\-monthly=\-1 /path/to/repo
# Keep all backups in the last 10 days, 4 additional end of week archives,
# and an end of month archive for every month:
$ borg prune \-v \-\-list \-\-keep\-within=10d \-\-keep\-weekly=4 \-\-keep\-monthly=\-1
.EE
$ borg prune \-v \-\-list \-\-keep\-within=10d \-\-keep\-weekly=4 \-\-keep\-monthly=\-1 /path/to/repo
.ft P
.fi
.UNINDENT
.UNINDENT
.sp
@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-RECREATE 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-recreate \- Re-create archives
.
.nr rst2man-indent-level 0
.
@@ -27,77 +30,89 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-RECREATE" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-recreate \- Recreate archives.
.SH SYNOPSIS
.sp
borg [common options] recreate [options] [PATH...]
borg [common options] recreate [options] [REPOSITORY_OR_ARCHIVE] [PATH...]
.SH DESCRIPTION
.sp
Recreate the contents of existing archives.
.sp
Recreate is a potentially dangerous function and might lead to data loss
recreate is a potentially dangerous function and might lead to data loss
(if used wrongly). BE VERY CAREFUL!
.sp
Important: Repository disk space is \fBnot\fP freed until you run \fBborg compact\fP\&.
.sp
\fB\-\-exclude\fP, \fB\-\-exclude\-from\fP, \fB\-\-exclude\-if\-present\fP, \fB\-\-keep\-exclude\-tags\fP
and PATH have the exact same semantics as in \(dqborg create\(dq, but they only check
files in the archives and not in the local filesystem. If paths are specified,
the resulting archives will contain only files from those paths.
and PATH have the exact same semantics as in "borg create", but they only check
for files in the archives and not in the local file system. If PATHs are specified,
the resulting archives will only contain files from these PATHs.
.sp
Note that all paths in an archive are relative, therefore absolute patterns/paths
will \fInot\fP match (\fB\-\-exclude\fP, \fB\-\-exclude\-from\fP, PATHs).
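.sp
For example (illustrative; per the note above, the absolute form does not
match anything because archived paths carry no leading path separator):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# does not match \- absolute pattern:
$ borg recreate \-\-exclude /home/user/junk
# matches as intended \- relative pattern:
$ borg recreate \-\-exclude home/user/junk
.EE
.UNINDENT
.UNINDENT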
.sp
\fB\-\-recompress\fP allows one to change the compression of existing data in archives.
Due to how Borg stores compressed size information this might display
incorrect information for archives that were not recreated at the same time.
There is no risk of data loss by this.
.sp
\fB\-\-chunker\-params\fP will re\-chunk all files in the archive, this can be
used to have upgraded Borg 0.xx archives deduplicate with Borg 1.x archives.
used to have upgraded Borg 0.xx or Attic archives deduplicate with
Borg 1.x archives.
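.sp
For example, to re\-chunk a single archive with the current default chunker
parameters (an illustrative sketch; \(dqarchivename\(dq is a placeholder, and
re\-chunking rewrites all of its data, see the caveats below):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg recreate \-a archivename \-\-chunker\-params default \-\-progress
.EE
.UNINDENT
.UNINDENT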
.sp
\fBUSE WITH CAUTION.\fP
Depending on the paths and patterns given, recreate can be used to
delete files from archives permanently.
When in doubt, use \fB\-\-dry\-run \-\-verbose \-\-list\fP to see how patterns/paths are
Depending on the PATHs and patterns given, recreate can be used to permanently
delete files from archives.
When in doubt, use \fB\-\-dry\-run \-\-verbose \-\-list\fP to see how patterns/PATHS are
interpreted. See \fIlist_item_flags\fP in \fBborg create\fP for details.
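.sp
For example, to preview the effect before changing anything (an illustrative
sketch; the archive name and path are placeholders):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg recreate \-\-dry\-run \-\-verbose \-\-list \-a archivename \-\-exclude home/user/junk
.EE
.UNINDENT
.UNINDENT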
.sp
The archive being recreated is only removed after the operation completes. The
archive that is built during the operation exists at the same time at
\(dq<ARCHIVE>.recreate\(dq. The new archive will have a different archive ID.
"<ARCHIVE>.recreate". The new archive will have a different archive ID.
.sp
With \fB\-\-target\fP the original archive is not replaced, instead a new archive is created.
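.sp
For example, to leave \(dqarchivename\(dq untouched and write the result to a
new archive (an illustrative sketch; both archive names are placeholders):
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg recreate \-a archivename \-\-target archivename\-slim \-\-exclude home/user/junk
.EE
.UNINDENT
.UNINDENT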
.sp
When rechunking, space usage can be substantial \- expect
When rechunking (or recompressing), space usage can be substantial \- expect
at least the entire deduplicated size of the archives using the previous
chunker params.
chunker (or compression) params.
.sp
If your most recent borg check found missing chunks, please first run another
backup for the same data, before doing any rechunking. If you are lucky, that
will recreate the missing chunks. Optionally, do another borg check to see
if the chunks are still missing.
If you recently ran borg check \-\-repair and it had to fix lost chunks with all\-zero
replacement chunks, please first run another backup for the same data and re\-run
borg check \-\-repair afterwards to heal any archives that had lost chunks which are
still generated from the input data.
.sp
Important: running borg recreate to re\-chunk will remove the chunks_healthy
metadata of all items with replacement chunks, so healing will not be possible
any more after re\-chunking (it is also unlikely it would ever work: due to the
change of chunking parameters, the missing chunk likely will never be seen again
even if you still have the data that produced it).
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B REPOSITORY_OR_ARCHIVE
repository or archive to recreate
.TP
.B PATH
paths to recreate; patterns are supported
.UNINDENT
.SS options
.SS optional arguments
.INDENT 0.0
.TP
.B \-\-list
.B \-\-list
output verbose list of items (files, dirs, ...)
.TP
.BI \-\-filter \ STATUSCHARS
only display items with the given status characters (listed in borg create \-\-help)
.TP
.B \-n\fP,\fB \-\-dry\-run
.B \-n\fP,\fB \-\-dry\-run
do not change anything
.TP
.B \-s\fP,\fB \-\-stats
.B \-s\fP,\fB \-\-stats
print statistics at end
.UNINDENT
.SS Include/Exclude options
.SS Exclusion options
.INDENT 0.0
.TP
.BI \-e \ PATTERN\fR,\fB \ \-\-exclude \ PATTERN
@@ -112,86 +127,75 @@ include/exclude paths matching PATTERN
.BI \-\-patterns\-from \ PATTERNFILE
read include/exclude patterns from PATTERNFILE, one per line
.TP
.B \-\-exclude\-caches
exclude directories that contain a CACHEDIR.TAG file ( <http://www.bford.info/cachedir/spec.html> )
.B \-\-exclude\-caches
exclude directories that contain a CACHEDIR.TAG file (\fI\%http://www.bford.info/cachedir/spec.html\fP)
.TP
.BI \-\-exclude\-if\-present \ NAME
exclude directories that are tagged by containing a filesystem object with the given NAME
.TP
.B \-\-keep\-exclude\-tags
if tag objects are specified with \fB\-\-exclude\-if\-present\fP, do not omit the tag objects themselves from the backup archive
.B \-\-keep\-exclude\-tags
if tag objects are specified with \fB\-\-exclude\-if\-present\fP, don\(aqt omit the tag objects themselves from the backup archive
.UNINDENT
.SS Archive filters
.SS Archive options
.INDENT 0.0
.TP
.BI \-a \ PATTERN\fR,\fB \ \-\-match\-archives \ PATTERN
only consider archives matching all patterns. See \(dqborg help match\-archives\(dq.
.TP
.BI \-\-sort\-by \ KEYS
Comma\-separated list of sorting keys; valid keys are: timestamp, archive, name, id, tags, host, user; default is: timestamp
.TP
.BI \-\-first \ N
consider the first N archives after other filters are applied
.TP
.BI \-\-last \ N
consider the last N archives after other filters are applied
.TP
.BI \-\-oldest \ TIMESPAN
consider archives between the oldest archive\(aqs timestamp and (oldest + TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newest \ TIMESPAN
consider archives between the newest archive\(aqs timestamp and (newest \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-older \ TIMESPAN
consider archives older than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-newer \ TIMESPAN
consider archives newer than (now \- TIMESPAN), e.g., 7d or 12m.
.TP
.BI \-\-target \ TARGET
create a new archive with the name ARCHIVE, do not replace existing archive
create a new archive with the name ARCHIVE, do not replace existing archive (only applies for a single archive)
.TP
.BI \-c \ SECONDS\fR,\fB \ \-\-checkpoint\-interval \ SECONDS
write checkpoint every SECONDS seconds (Default: 1800)
.TP
.BI \-\-comment \ COMMENT
add a comment text to the archive
.TP
.BI \-\-timestamp \ TIMESTAMP
manually specify the archive creation date/time (yyyy\-mm\-ddThh:mm:ss[(+|\-)HH:MM] format, (+|\-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
manually specify the archive creation date/time (UTC, yyyy\-mm\-ddThh:mm:ss format). alternatively, give a reference file/directory.
.TP
.BI \-C \ COMPRESSION\fR,\fB \ \-\-compression \ COMPRESSION
select compression algorithm, see the output of the \(dqborg help compression\(dq command for details.
select compression algorithm, see the output of the "borg help compression" command for details.
.TP
.BI \-\-recompress \ MODE
recompress data chunks according to \fIMODE\fP and \fB\-\-compression\fP\&. Possible modes are \fIif\-different\fP: recompress if current compression is with a different compression algorithm (the level is not considered); \fIalways\fP: recompress even if current compression is with the same compression algorithm (use this to change the compression level); and \fInever\fP: do not recompress (use this option to explicitly prevent recompression). If no MODE is given, \fIif\-different\fP will be used. Not passing \-\-recompress is equivalent to "\-\-recompress never".
.TP
.BI \-\-chunker\-params \ PARAMS
rechunk using given chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE) or \fIdefault\fP to use the chunker defaults. default: do not rechunk
specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE) or \fIdefault\fP to use the current defaults. default: buzhash,19,23,21,4095
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Create a backup with fast, low compression
$ borg create archive /some/files \-\-compression lz4
# Then recompress it — this might take longer, but the backup has already completed,
# so there are no inconsistencies from a long\-running backup job.
$ borg recreate \-a archive \-\-recompress \-\-compression zlib,9
.nf
.ft C
# Make old (Attic / Borg 0.xx) archives deduplicate with Borg 1.x archives.
# Archives created with Borg 1.1+ and the default chunker params are skipped
# (archive ID stays the same).
$ borg recreate /mnt/backup \-\-chunker\-params default \-\-progress
# Create a backup with little but fast compression
$ borg create /mnt/backup::archive /some/files \-\-compression lz4
# Then compress it \- this might take longer, but the backup has already completed,
# so no inconsistencies from a long\-running backup job.
$ borg recreate /mnt/backup::archive \-\-recompress \-\-compression zlib,9
# Remove unwanted files from all archives in a repository.
# Note the relative path for the \-\-exclude option — archives only contain relative paths.
$ borg recreate \-\-exclude home/icke/Pictures/drunk_photos
# Note the relative path for the \-\-exclude option \- archives only contain relative paths.
$ borg recreate /mnt/backup \-\-exclude home/icke/Pictures/drunk_photos
# Change the archive comment
$ borg create \-\-comment \(dqThis is a comment\(dq archivename ~
$ borg info \-a archivename
# Change archive comment
$ borg create \-\-comment "This is a comment" /mnt/backup::archivename ~
$ borg info /mnt/backup::archivename
Name: archivename
Fingerprint: ...
Comment: This is a comment
\&...
$ borg recreate \-\-comment \(dqThis is a better comment\(dq \-a archivename
$ borg info \-a archivename
$ borg recreate \-\-comment "This is a better comment" /mnt/backup::archivename
$ borg info /mnt/backup::archivename
Name: archivename
Fingerprint: ...
Comment: This is a better comment
\&...
.EE
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO
@@ -1,5 +1,8 @@
.\" Man page generated from reStructuredText.
.
.TH BORG-RENAME 1 "2022-04-14" "" "borg backup tool"
.SH NAME
borg-rename \- Rename an existing archive
.
.nr rst2man-indent-level 0
.
@@ -27,12 +30,9 @@ level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-RENAME" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-rename \- Rename an existing archive.
.SH SYNOPSIS
.sp
borg [common options] rename [options] OLDNAME NEWNAME
borg [common options] rename [options] ARCHIVE NEWNAME
.SH DESCRIPTION
.sp
This command renames an archive in the repository.
@@ -44,25 +44,27 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.SS arguments
.INDENT 0.0
.TP
.B OLDNAME
specify the current archive name
.B ARCHIVE
archive to rename
.TP
.B NEWNAME
specify the new archive name
the new archive name to use
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
$ borg create archivename ~
$ borg repo\-list
.nf
.ft C
$ borg create /path/to/repo::archivename ~
$ borg list /path/to/repo
archivename Mon, 2016\-02\-15 19:50:19
$ borg rename archivename newname
$ borg repo\-list
$ borg rename /path/to/repo::archivename newname
$ borg list /path/to/repo
newname Mon, 2016\-02\-15 19:50:19
.EE
.ft P
.fi
.UNINDENT
.UNINDENT
.SH SEE ALSO
@@ -1,89 +0,0 @@
.\" Man page generated from reStructuredText.
.
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.TH "BORG-REPO-COMPRESS" "1" "2025-12-23" "" "borg backup tool"
.SH NAME
borg-repo-compress \- Repository (re-)compression.
.SH SYNOPSIS
.sp
borg [common options] repo\-compress [options]
.SH DESCRIPTION
.sp
Repository (re\-)compression (and/or re\-obfuscation).
.sp
Reads all chunks in the repository and recompresses them if they are not already
using the compression type/level and obfuscation level given via \fB\-\-compression\fP\&.
.sp
If the outcome of the chunk processing indicates a change in compression
type/level or obfuscation level, the processed chunk is written to the repository.
Please note that the outcome might not always be the desired compression
type/level \- if no compression gives a shorter output, that might be chosen.
.sp
Please note that this command cannot work in low (or zero) free disk space
conditions.
.sp
If the \fBborg repo\-compress\fP process receives a SIGINT signal (Ctrl\-C), the repo
will be committed and compacted and borg will terminate cleanly afterwards.
.sp
Both \fB\-\-progress\fP and \fB\-\-stats\fP are recommended when \fBborg repo\-compress\fP
is used interactively.
.sp
You do \fBnot\fP need to run \fBborg compact\fP after \fBborg repo\-compress\fP\&.
.SH OPTIONS
.sp
See \fIborg\-common(1)\fP for common options of Borg commands.
.SS options
.INDENT 0.0
.TP
.BI \-C \ COMPRESSION\fR,\fB \ \-\-compression \ COMPRESSION
select compression algorithm, see the output of the \(dqborg help compression\(dq command for details.
.TP
.B \-s\fP,\fB \-\-stats
print statistics
.UNINDENT
.SH EXAMPLES
.INDENT 0.0
.INDENT 3.5
.sp
.EX
# Recompress repository contents
$ borg repo\-compress \-\-progress \-\-compression=zstd,3
# Recompress and obfuscate repository contents
$ borg repo\-compress \-\-progress \-\-compression=obfuscate,1,zstd,3
.EE
.UNINDENT
.UNINDENT
.SH SEE ALSO
.sp
\fIborg\-common(1)\fP
.SH AUTHOR
The Borg Collective
.\" Generated by docutils manpage writer.
.