Merge pull request #7233 from systemcrash/spfix

Docs and comments consistency and readability improvement
This commit is contained in:
TW 2022-12-30 15:13:17 +01:00 committed by GitHub
commit ed6dcbebb1
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
71 changed files with 282 additions and 271 deletions


@@ -22,11 +22,11 @@ What is BorgBackup?
BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to backup data.
The main goal of Borg is to provide an efficient and secure way to back up data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
See the `installation manual`_ or, if you have already
downloaded Borg, ``docs/installation.rst`` to get started with Borg.


@@ -8,8 +8,8 @@ Version 0.30.0 (2016-01-23)
Compatibility notes:
- you may need to use -v (or --info) more often to actually see output emitted
at INFO log level (because it is suppressed at the default WARNING log level).
- The new default logging level is WARNING. Previously, it was INFO, which was
more verbose. Use -v (or --info) to show once again log level INFO messages.
See the "general" section in the usage docs.
- for borg create, you need --list (additionally to -v) to see the long file
list (was needed so you can have e.g. --stats alone without the long list)
@@ -164,7 +164,7 @@ New features:
- borg create --exclude-if-present TAGFILE - exclude directories that have the
given file from the backup. You can additionally give --keep-tag-files to
preserve just the directory roots and the tag-files (but not backup other
preserve just the directory roots and the tag-files (but not back up other
directory contents), #395, attic #128, attic #142
Other changes:
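As a sketch of the tag-file exclusion above (repository path, archive name and tag file name are placeholders; the flags are illustrative for this release):

```shell
# mark a directory so its contents are skipped
touch ~/cache/CACHEDIR.TAG

# exclude such directories; --keep-tag-files preserves the directory
# root and the tag file itself in the backup
borg create --exclude-if-present CACHEDIR.TAG --keep-tag-files \
    /path/to/repo::my-archive ~
```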
@@ -419,10 +419,10 @@ Compatibility notes:
Deprecations:
- --compression N (with N being a number, as in 0.24) is deprecated.
We keep the --compression 0..9 for now to not break scripts, but it is
We keep the --compression 0..9 for now not to break scripts, but it is
deprecated and will be removed later, so better fix your scripts now:
--compression 0 (as in 0.24) is the same as --compression zlib,0 (now).
BUT: if you do not want compression, you rather want --compression none
BUT: if you do not want compression, use --compression none
(which is the default).
--compression 1 (in 0.24) is the same as --compression zlib,1 (now)
--compression 9 (in 0.24) is the same as --compression zlib,9 (now)
@@ -434,7 +434,7 @@ New features:
- create --compression lz4 (super-fast, but not very high compression)
- create --compression zlib,N (slower, higher compression, default for N is 6)
- create --compression lzma,N (slowest, highest compression, default N is 6)
- honor the nodump flag (UF_NODUMP) and do not backup such items
- honor the nodump flag (UF_NODUMP) and do not back up such items
- list --short just outputs a simple list of the files/directories in an archive
Bug fixes:
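A hedged sketch of the compression modes listed above (repository path and source data are placeholders):

```shell
borg create --compression lz4    /path/to/repo::a1 ~/data  # fastest, modest ratio
borg create --compression zlib,6 /path/to/repo::a2 ~/data  # slower, better ratio
borg create --compression lzma,6 /path/to/repo::a3 ~/data  # slowest, best ratio
borg create --compression none   /path/to/repo::a4 ~/data  # store uncompressed
```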
@@ -541,7 +541,7 @@ Other changes:
- update internals doc about chunker params, memory usage and compression
- added docs about development
- add some words about resource usage in general
- document how to backup a raw disk
- document how to back up a raw disk
- add note about how to run borg from virtual env
- add solutions for (ll)fuse installation problems
- document what borg check does, fixes #138
@@ -617,7 +617,7 @@ New features:
- FUSE: reflect deduplication in allocated blocks
- only allow whitelisted RPC calls in server mode
- normalize source/exclude paths before matching
- use posix_fadvise to not spoil the OS cache, fixes attic #252
- use posix_fadvise not to spoil the OS cache, fixes attic #252
- toplevel error handler: show tracebacks for better error analysis
- sigusr1 / sigint handler to print current file infos - attic PR #286
- RPCError: include the exception args we get from remote


@@ -776,7 +776,7 @@ Other changes:
- docs:
- improve description of path variables
- document how to completely delete data, #2929
- document how to delete data completely, #2929
- add FAQ about Borg config dir, #4941
- add docs about errors not printed as JSON, #4073
- update usage_general.rst.inc
@@ -864,7 +864,7 @@ New features:
- ability to use a system-provided version of "xxhash"
- create:
- changed the default behaviour to not store the atime of fs items. atime is
- changed the default behaviour not to store the atime of fs items. atime is
often rather not interesting and fragile - it easily changes even if nothing
else has changed and, if stored into the archive, spoils deduplication of
the archive metadata stream.
@@ -1781,7 +1781,7 @@ Fixes:
- security fix: configure FUSE with "default_permissions", #3903
"default_permissions" is now enforced by borg by default to let the
kernel check uid/gid/mode based permissions.
"ignore_permissions" can be given to not enforce "default_permissions".
"ignore_permissions" can be given not to enforce "default_permissions".
- make "hostname" short, even on misconfigured systems, #4262
- fix free space calculation on macOS (and others?), #4289
- config: quit with error message when no key is provided, #4223
@@ -2149,7 +2149,7 @@ New features:
- mount: added exclusion group options and paths, #2138
Reused some code to support similar options/paths as borg extract offers -
making good use of these to only mount a smaller subset of dirs/files can
making good use of these to mount only a smaller subset of dirs/files can
speed up mounting a lot and also will consume way less memory.
borg mount [options] repo_or_archive mountpoint path [paths...]
@@ -2235,10 +2235,10 @@ Compatibility notes:
- The deprecated --no-files-cache is not a global/common option any more,
but only available for borg create (it is not needed for anything else).
Use --files-cache=disabled instead of --no-files-cache.
- The nodump flag ("do not backup this file") is not honoured any more by
- The nodump flag ("do not back up this file") is not honoured any more by
default because this functionality (esp. if it happened by error or
unexpected) was rather confusing and unexplainable at first to users.
If you want that "do not backup NODUMP-flagged files" behaviour, use:
If you want that "do not back up NODUMP-flagged files" behaviour, use:
borg create --exclude-nodump ...
- If you are on Linux and do not need bsdflags archived, consider using
``--nobsdflags`` with ``borg create`` to avoid additional syscalls and
@@ -3078,7 +3078,7 @@ New features:
which includes the SHA1 and SHA2 family as well as MD5
- borg prune:
- to better visualize the "thinning out", we now list all archives in
- to visualize the "thinning out" better, we now list all archives in
reverse time order. rephrase and reorder help text.
- implement --keep-last N via --keep-secondly N, also --keep-minutely.
assuming that there is not more than 1 backup archive made in 1s,
@@ -3175,7 +3175,7 @@ Bug fixes:
- security fix: configure FUSE with "default_permissions", #3903.
"default_permissions" is now enforced by borg by default to let the
kernel check uid/gid/mode based permissions.
"ignore_permissions" can be given to not enforce "default_permissions".
"ignore_permissions" can be given not to enforce "default_permissions".
- xattrs: fix borg exception handling on ENOSPC error, #3808.
New features:
@@ -3478,7 +3478,7 @@ Other changes:
- docs:
- language clarification - VM backup FAQ
- borg create: document how to backup stdin, #2013
- borg create: document how to back up stdin, #2013
- borg upgrade: fix incorrect title levels
- add CVE numbers for issues fixed in 1.0.9, #2106
- fix typos (taken from Debian package patch)
@@ -3505,8 +3505,18 @@ Security fixes:
CVE-2016-10099 was assigned to this vulnerability.
- borg check: When rebuilding the manifest (which should only be needed very rarely)
duplicate archive names would be handled on a "first come first serve" basis, allowing
an attacker to apparently replace archives.
duplicate archive names would be handled on a "first come first serve" basis,
potentially opening an attack vector to replace archives.
Example: were there 2 archives named "foo" in a repo (which can not happen
under normal circumstances, because borg checks if the name is already used)
and a "borg check" recreated a (previously lost) manifest, the first of the
archives it encountered would be in the manifest. The second archive is also
still in the repo, but not referenced in the manifest, in this case. If the
second archive is the "correct" one (and was previously referenced from the
manifest), it looks like it got replaced by the first one. In the manifest,
it actually got replaced. Both remain in the repo but the "correct" one is no
longer accessible via normal means - the manifest.
CVE-2016-10100 was assigned to this vulnerability.
@@ -3674,7 +3684,7 @@ Bug fixes:
New features:
- add "borg key export" / "borg key import" commands, #1555, so users are able
to backup / restore their encryption keys more easily.
to back up / restore their encryption keys more easily.
Supported formats are the keyfile format used by borg internally and a
special "paper" format with by line checksums for printed backups. For the
@@ -4161,7 +4171,7 @@ Bug fixes:
- do not sleep for >60s while waiting for lock, #773
- unpack file stats before passing to FUSE
- fix build on illumos
- don't try to backup doors or event ports (Solaris and derivatives)
- don't try to back up doors or event ports (Solaris and derivatives)
- remove useless/misleading libc version display, #738
- test suite: reset exit code of persistent archiver, #844
- RemoteRepository: clean up pipe if remote open() fails
@@ -4234,20 +4244,20 @@ Compatibility notes:
changed file and in the worst case (e.g. if your files cache was lost / is
not used) by the size of every file (minus any compression you might use).
in case you want to immediately see a much lower resource usage (RAM / disk)
in case you want to see a much lower resource usage immediately (RAM / disk)
for chunks management, it might be better to start with a new repo than
continuing in the existing repo (with an existing repo, you'ld have to wait
until all archives with small chunks got pruned to see a lower resource
to continue in the existing repo (with an existing repo, you have to wait
until all archives with small chunks get pruned to see a lower resource
usage).
if you used the old --chunker-params default value (or if you did not use
--chunker-params option at all) and you'ld like to continue using small
--chunker-params option at all) and you'd like to continue using small
chunks (and you accept the huge resource usage that comes with that), just
explicitly use borg create --chunker-params=10,23,16,4095.
use explicitly borg create --chunker-params=10,23,16,4095.
- archive timestamps: the 'time' timestamp now refers to archive creation
start time (was: end time), the new 'time_end' timestamp refers to archive
creation end time. This might affect prune if your backups take rather long.
if you give a timestamp via cli this is stored into 'time', therefore it now
creation end time. This might affect prune if your backups take a long time.
if you give a timestamp via cli, this is stored into 'time'. therefore it now
needs to mean archive creation start time.
New features:
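The workaround mentioned in this hunk, spelled out (repository path and source are placeholders):

```shell
# keep the old small-chunk behaviour explicitly, accepting the much
# higher RAM/disk usage for chunk management that this implies
borg create --chunker-params=10,23,16,4095 /path/to/repo::archive ~/data
```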
@@ -4289,8 +4299,8 @@ Bug fixes:
Other changes:
- it is now possible to use "pip install borgbackup[fuse]" to automatically
install the llfuse dependency using the correct version requirement
- it is now possible to use "pip install borgbackup[fuse]" to
install the llfuse dependency automatically, using the correct version requirement
for it. you still need to care about having installed the FUSE / build
related OS package first, though, so that building llfuse can succeed.
- Vagrant: drop Ubuntu Precise (12.04) - does not have Python >= 3.4


@@ -168,13 +168,13 @@ after creating the backup. Rename the file to something else (e.g. ``/etc/backup
when you want to do something with the drive after creating backups (e.g running check).
Create the ``/etc/backups/backup-suspend`` file if the machine should suspend after completing
the backup. Don't forget to physically disconnect the device before resuming,
the backup. Don't forget to disconnect the device physically before resuming,
otherwise you'll enter a cycle. You can also add an option to power down instead.
Create an empty ``/etc/backups/backup.disks`` file, you'll register your backup drives
there.
The last part is to actually enable the udev rules and services:
The last part is actually to enable the udev rules and services:
.. code-block:: bash


@@ -4,7 +4,7 @@
Central repository server with Ansible or Salt
==============================================
This section will give an example how to setup a borg repository server for multiple
This section will give an example how to set up a borg repository server for multiple
clients.
Machines
@@ -103,7 +103,7 @@ The server should automatically change the current working directory to the `<cl
borg init backup@backup01.srv.local:/home/backup/repos/johndoe.clnt.local/pictures
When `johndoe.clnt.local` tries to access a not restricted path the following error is raised.
John Doe tries to backup into the Web 01 path:
John Doe tries to back up into the Web 01 path:
::
@@ -202,7 +202,7 @@ Salt running on a Debian system.
Enhancements
------------
As this section only describes a simple and effective setup it could be further
As this section only describes a simple and effective setup, it could be further
enhanced when supporting (a limited set) of client supplied commands. A wrapper
for starting `borg serve` could be written. Or borg itself could be enhanced to
autodetect it runs under SSH by checking the `SSH_ORIGINAL_COMMAND` environment


@@ -5,7 +5,7 @@
Hosting repositories
====================
This sections shows how to securely provide repository storage for users.
This sections shows how to provide repository storage securely for users.
Optionally, each user can have a storage quota.
Repositories are accessed through SSH. Each user of the service should
@@ -24,7 +24,7 @@ is assigned a home directory and repositories of the user reside in her
home directory.
The following ``~user/.ssh/authorized_keys`` file is the most important
piece for a correct deployment. It allows the user to login via
piece for a correct deployment. It allows the user to log in via
their public key (which must be provided by the user), and restricts
SSH access to safe operations only.
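A restricted entry of the kind described typically looks like this (key, paths and user names are placeholders; it must be a single line in the real file):

```
command="borg serve --restrict-to-path /home/user/repos",no-port-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... user@client
```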


@@ -33,7 +33,7 @@ deduplicating. For backup, save the disk header and the contents of each partiti
PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
ntfsclone -so - $x | borg create repo::hostname-part$PARTNUM -
done
# to backup non-NTFS partitions as well:
# to back up non-NTFS partitions as well:
echo "$PARTITIONS" | grep -v NTFS | cut -d' ' -f1 | while read x; do
PARTNUM=$(echo $x | grep -Eo "[0-9]+$")
borg create --read-special repo::hostname-part$PARTNUM $x
@@ -77,7 +77,7 @@ Because the partitions were zeroed in place, restoration is only one command::
borg extract --stdout repo::hostname-disk | dd of=$DISK
.. note:: The "traditional" way to zero out space on a partition, especially one already
mounted, is to simply ``dd`` from ``/dev/zero`` to a temporary file and delete
mounted, is simply to ``dd`` from ``/dev/zero`` to a temporary file and delete
it. This is ill-advised for the reasons mentioned in the ``zerofree`` man page:
- it is slow


@@ -13,7 +13,7 @@ If you however require the backup server to initiate the connection or prefer
it to initiate the backup run, one of the following workarounds is required to
allow such a pull mode setup.
A common use case for pull mode is to backup a remote server to a local personal
A common use case for pull mode is to back up a remote server to a local personal
computer.
SSHFS
@@ -161,7 +161,7 @@ Now we can run
borg extract /borgrepo::archive PATH
to partially restore whatever we like. Finally, do the clean-up:
to restore whatever we like partially. Finally, do the clean-up:
::
@@ -209,8 +209,8 @@ socat
=====
In this setup a SSH connection from the backup server to the client is
established that uses SSH reverse port forwarding to transparently
tunnel data between UNIX domain sockets on the client and server and the socat
established that uses SSH reverse port forwarding to tunnel data
transparently between UNIX domain sockets on the client and server and the socat
tool to connect these with the borg client and server processes, respectively.
The program socat has to be available on the backup server and on the client
@@ -277,7 +277,7 @@ forwarding can do this for us::
Warning: remote port forwarding failed for listen path /run/borg/reponame.sock
When you are done, you have to manually remove the socket file, otherwise
When you are done, you have to remove the socket file manually, otherwise
you may see an error like this when trying to execute borg commands::
Remote: YYYY/MM/DD HH:MM:SS socat[XXX] E connect(5, AF=1 "/run/borg/reponame.sock", 13): Connection refused
@@ -417,7 +417,7 @@ Parentheses are not needed when using a dedicated bash process.
*ssh://borgs@borg-server/~/repo* refers to the repository *repo* within borgs's home directory on *borg-server*.
*StrictHostKeyChecking=no* is used to automatically add host keys to *~/.ssh/known_hosts* without user intervention.
*StrictHostKeyChecking=no* is used to add host keys automatically to *~/.ssh/known_hosts* without user intervention.
``kill "${SSH_AGENT_PID}"``


@@ -24,7 +24,7 @@ SSHFS, the Borg client only can do file system operations and has no agent
running on the remote side, so *every* operation needs to go over the network,
which is slower.
Can I backup from multiple servers into a single repository?
Can I back up from multiple servers into a single repository?
------------------------------------------------------------
In order for the deduplication used by Borg to work, it
@@ -86,7 +86,7 @@ run into this by yourself by restoring an older copy of your repository.
"attack": maybe an attacker has replaced your repo by an older copy, trying to
trick you into AES counter reuse, trying to break your repo encryption.
If you'ld decide to ignore this and accept unsafe operation for this repository,
If you decide to ignore this and accept unsafe operation for this repository,
you could delete the manifest-timestamp and the local cache:
::
@@ -115,8 +115,8 @@ Which file types, attributes, etc. are *not* preserved?
Are there other known limitations?
----------------------------------
- borg extract only supports restoring into an empty destination. After that,
the destination will exactly have the contents of the extracted archive.
- borg extract supports restoring only into an empty destination. After extraction,
the destination will have exactly the contents of the extracted archive.
If you extract into a non-empty destination, borg will (for example) not
remove files which are in the destination, but not in the archive.
See :issue:`4598` for a workaround and more details.
@@ -128,12 +128,12 @@ If a backup stops mid-way, does the already-backed-up data stay there?
Yes, Borg supports resuming backups.
During a backup a special checkpoint archive named ``<archive-name>.checkpoint``
is saved every checkpoint interval (the default value for this is 30
During a backup, a special checkpoint archive named ``<archive-name>.checkpoint``
is saved at every checkpoint interval (the default value for this is 30
minutes) containing all the data backed-up until that point.
This checkpoint archive is a valid archive,
but it is only a partial backup (not all files that you wanted to backup are
but it is only a partial backup (not all files that you wanted to back up are
contained in it). Having it in the repo until a successful, full backup is
completed is useful because it references all the transmitted chunks up
to the checkpoint. This means that in case of an interruption, you only need to
@@ -159,11 +159,11 @@ so that checkpoints even work while a big file is being processed.
They are named ``<filename>.borg_part_<N>`` and all operations usually ignore
these files, but you can make them considered by giving the option
``--consider-part-files``. You usually only need that option if you are
really desperate (e.g. if you have no completed backup of that file and you'ld
really desperate (e.g. if you have no completed backup of that file and you'd
rather get a partial file extracted than nothing). You do **not** want to give
that option under any normal circumstances.
How can I backup huge file(s) over a unstable connection?
How can I back up huge file(s) over a unstable connection?
---------------------------------------------------------
Yes. For more details, see :ref:`checkpoints_parts`.
@@ -334,7 +334,7 @@ Assuming that all your chunks have a size of :math:`2^{21}` bytes (approximately
and we have a "perfect" hash algorithm, we can think that the probability of collision
would be of :math:`p^2/2^{n+1}` then, using SHA-256 (:math:`n=256`) and for example
we have 1000 million chunks (:math:`p=10^9`) (1000 million chunks would be about 2100TB).
The probability would be around to 0.0000000000000000000000000000000000000000000000000000000000043.
The probability would be around 0.0000000000000000000000000000000000000000000000000000000000043.
A mass-murderer space rock happens about once every 30 million years on average.
This leads to a probability of such an event occurring in the next second to about :math:`10^{-15}`.
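The long decimal quoted above follows from the birthday-bound approximation :math:`p^2/2^{n+1}` stated earlier; a quick check:

```shell
# collision probability for p = 10^9 chunks and an n = 256 bit hash,
# using the birthday-bound approximation p^2 / 2^(n+1)
python3 -c 'p = 10**9; n = 256; print(f"{p * p / 2**(n + 1):.2g}")'
# prints 4.3e-60
```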
@@ -342,9 +342,9 @@ That's **45** orders of magnitude more probable than the SHA-256 collision. Brie
if you find SHA-256 collisions scary then your priorities are wrong. This example was grabbed from
`this SO answer <https://stackoverflow.com/a/4014407/13359375>`_, it's great honestly.
Still, the real question is if Borg tries to not make this happen?
Still, the real question is whether Borg tries not to make this happen?
Well... it used to not check anything but there was a feature added which saves the size
Well... previously it did not check anything until there was a feature added which saves the size
of the chunks too, so the size of the chunks is compared to the size that you got with the
hash and if the check says there is a mismatch it will raise an exception instead of corrupting
the file. This doesn't save us from everything but reduces the chances of corruption.
@@ -364,7 +364,7 @@ How do I configure different prune policies for different directories?
----------------------------------------------------------------------
Say you want to prune ``/var/log`` faster than the rest of
``/``. How do we implement that? The answer is to backup to different
``/``. How do we implement that? The answer is to back up to different
archive *names* and then implement different prune policies for
different prefixes. For example, you could have a script that does::
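Such a script might look like this (archive name prefixes and retention numbers are made up; ``--prefix`` is the borg 1.x option for matching archive names):

```shell
# back up with per-policy archive name prefixes
borg create /path/to/repo::logs-{now} /var/log
borg create /path/to/repo::root-{now} / --one-file-system

# then prune each prefix with its own policy
borg prune --prefix logs- --keep-daily 7 /path/to/repo
borg prune --prefix root- --keep-daily 30 --keep-monthly 12 /path/to/repo
```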
@@ -467,7 +467,7 @@ Setting ``BORG_PASSPHRASE``
user
<https://security.stackexchange.com/questions/14000/environment-variable-accessibility-in-linux/14009#14009>`_.
Using ``BORG_PASSCOMMAND`` with a properly permissioned file
Using ``BORG_PASSCOMMAND`` with a file of proper permissions
Another option is to create a file with a password in it in your home
directory and use permissions to keep anyone else from reading it. For
example, first create a key::
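For example (the file name is a convention, not a requirement):

```shell
# generate a strong random passphrase, readable only by its owner
head -c 32 /dev/urandom | base64 > ~/.borg-passphrase
chmod 400 ~/.borg-passphrase

# have borg read it on demand instead of keeping it in the environment
export BORG_PASSCOMMAND="cat $HOME/.borg-passphrase"
```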
@@ -489,7 +489,7 @@ Using keyfile-based encryption with a blank passphrase
Using ``BORG_PASSCOMMAND`` with macOS Keychain
macOS has a native manager for secrets (such as passphrases) which is safer
than just using a file as it is encrypted at rest and unlocked manually
(fortunately, the login keyring automatically unlocks when you login). With
(fortunately, the login keyring automatically unlocks when you log in). With
the built-in ``security`` command, you can access it from the command line,
making it useful for ``BORG_PASSCOMMAND``.
@@ -524,7 +524,7 @@ Using ``BORG_PASSCOMMAND`` with GNOME Keyring
export BORG_PASSCOMMAND="secret-tool lookup borg-repository repo-name"
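The corresponding one-time setup might be (the label and attribute/value pair are arbitrary; they just have to match the lookup):

```shell
# store the passphrase in GNOME Keyring (prompts for the secret)
secret-tool store --label="borg backup" borg-repository repo-name
```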
.. note:: For this to automatically unlock the keychain it must be run
.. note:: For this to unlock the keychain automatically it must be run
in the ``dbus`` session of an unlocked terminal; for example, running a backup
script as a ``cron`` job might not work unless you also ``export DISPLAY=:0``
so ``secret-tool`` can pick up your open session. `It gets even more complicated`__
@@ -567,13 +567,13 @@ otherwise make unavailable) all your backups.
How can I protect against a hacked backup client?
-------------------------------------------------
Assume you backup your backup client machine C to the backup server S and
Assume you back up your backup client machine C to the backup server S and
C gets hacked. In a simple push setup, the attacker could then use borg on
C to delete all backups residing on S.
These are your options to protect against that:
- Do not allow to permanently delete data from the repo, see :ref:`append_only_mode`.
- Do not allow to delete data permanently from the repo, see :ref:`append_only_mode`.
- Use a pull-mode setup using ``ssh -R``, see :ref:`pull_backup` for more information.
- Mount C's filesystem on another machine and then create a backup of it.
- Do not give C filesystem-level access to S.
@@ -738,13 +738,13 @@ This has some pros and cons, though:
The long term plan to improve this is called "borgception", see :issue:`474`.
Can I backup my root partition (/) with Borg?
Can I back up my root partition (/) with Borg?
---------------------------------------------
Backing up your entire root partition works just fine, but remember to
exclude directories that make no sense to backup, such as /dev, /proc,
exclude directories that make no sense to back up, such as /dev, /proc,
/sys, /tmp and /run, and to use ``--one-file-system`` if you only want to
backup the root partition (and not any mounted devices e.g.).
back up the root partition (and not any mounted devices e.g.).
If it crashes with a UnicodeError, what can I do?
-------------------------------------------------
@@ -853,7 +853,7 @@ Then you do the backup and look at the log output:
The metadata values used in this comparison are determined by the ``--files-cache`` option
and could be e.g. size, ctime and inode number (see the ``borg create`` docs for more
details and potential issues).
You can use the ``stat`` command on files to manually look at fs metadata to debug if
You can use the ``stat`` command on files to look at fs metadata manually to debug if
there is any unexpected change triggering the ``M`` status.
Also, the ``--debug-topic=files_cache`` option of ``borg create`` provides a lot of debug
output helping to analyse why the files cache does not give its expected high performance.
@@ -955,7 +955,7 @@ Another possible reason is that files don't always have the same path, for
example if you mount a filesystem without stable mount points for each backup
or if you are running the backup from a filesystem snapshot whose name is not
stable. If the directory where you mount a filesystem is different every time,
Borg assumes they are different files. This is true even if you backup these
Borg assumes they are different files. This is true even if you back up these
files with relative pathnames - borg uses full pathnames in files cache regardless.
It is possible for some filesystems, such as ``mergerfs`` or network filesystems,
@@ -1006,7 +1006,7 @@ How can I avoid unwanted base directories getting stored into archives?
Possible use cases:
- Another file system is mounted and you want to backup it with original paths.
- Another file system is mounted and you want to back it up with original paths.
- You have created a BTRFS snapshot in a ``/.snapshots`` directory for backup.
To achieve this, run ``borg create`` within the mountpoint/snapshot directory:


@@ -134,7 +134,7 @@ fail if /tmp has not enough free space or is mounted with the ``noexec``
option. You can change the temporary directory by setting the ``TEMP``
environment variable before running Borg.
If a new version is released, you will have to manually download it and replace
If a new version is released, you will have to download it manually and replace
the old version using the same steps as shown above.
.. _pyinstaller: http://www.pyinstaller.org
@@ -331,7 +331,7 @@ optional, but recommended except for the most simple use cases.
If you install into a virtual environment, you need to **activate** it
first (``source borg-env/bin/activate``), before running ``borg``.
Alternatively, symlink ``borg-env/bin/borg`` into some directory that is in
your ``PATH`` so you can just run ``borg``.
your ``PATH`` so you can run ``borg``.
This will use ``pip`` to install the latest release from PyPi::
@@ -414,4 +414,4 @@ If you need to use a different version of Python you can install this using ``py
source borg-env/bin/activate # always before using!
...
.. note:: As a developer or power user, you always want to use a virtual environment.
.. note:: As a developer or power user, you should always use a virtual environment.


@@ -22,7 +22,7 @@ metadata, using :ref:`chunks` created by the chunker using the
Buzhash_ algorithm ("buzhash" chunker) or a simpler fixed blocksize
algorithm ("fixed" chunker).
To actually perform the repository-wide deduplication, a hash of each
To perform the repository-wide deduplication, a hash of each
chunk is checked against the :ref:`chunks cache <cache>`, which is a
hash-table of all chunks that already exist.


@@ -157,11 +157,11 @@ An object (the payload part of a segment file log entry) must be like:
- compressed data (with an optional all-zero-bytes obfuscation trailer)
This new, more complex repo v2 object format was implemented to be able to efficiently
query the metadata without having to read, transfer and decrypt the (usually much bigger)
This new, more complex repo v2 object format was implemented to be able to query the
metadata efficiently without having to read, transfer and decrypt the (usually much bigger)
data part.
The metadata is encrypted to not disclose potentially sensitive information that could be
The metadata is encrypted not to disclose potentially sensitive information that could be
used for e.g. fingerprinting attacks.
The compression `ctype` and `clevel` is explained in :ref:`data-compression`.
@@ -688,7 +688,7 @@ To determine whether a file has not changed, cached values are looked up via
the key in the mapping and compared to the current file attribute values.
If the file's size, timestamp and inode number is still the same, it is
considered to not have changed. In that case, we check that all file content
considered not to have changed. In that case, we check that all file content
chunks are (still) present in the repository (we check that via the chunks
cache).
@@ -818,7 +818,7 @@ bucket is reached.
This particular mode of operation is open addressing with linear probing.
When the hash table is filled to 75%, its size is grown. When it's
emptied to 25%, its size is shrinked. Operations on it have a variable
emptied to 25%, its size is shrunken. Operations on it have a variable
complexity between constant and linear with low factor, and memory overhead
varies between 33% and 300%.
@@ -1013,7 +1013,7 @@ while doing no compression at all (none) is a operation that takes no time, it
likely will need to store more data to the storage compared to using lz4.
The time needed to transfer and store the additional data might be much more
than if you had used lz4 (which is super fast, but still might compress your
data about 2:1). This is assuming your data is compressible (if you backup
data about 2:1). This is assuming your data is compressible (if you back up
already compressed data, trying to compress them at backup time is usually
pointless).
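The tradeoff can be played with using Python's stdlib compressors standing in for Borg's algorithms (zlib at a fast level vs. lzma; the sample data and resulting sizes are illustrative only):

```python
import zlib
import lzma

# Moderately compressible sample data (repeated text).
data = b"The quick brown fox jumps over the lazy dog. " * 2000

fast = zlib.compress(data, level=1)  # cheap CPU-wise, lower ratio
slow = lzma.compress(data)           # expensive, typically higher ratio

ratio_fast = len(data) / len(fast)
ratio_slow = len(data) / len(slow)
```

For incompressible input (already compressed files), both ratios approach 1, which is why compressing such data again at backup time is usually pointless.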

View file

@ -546,7 +546,7 @@ Errors
Buffer.MemoryLimitExceeded
Requested buffer size {} is above the limit of {}.
ExtensionModuleError
The Borg binary extension modules do not seem to be properly installed
The Borg binary extension modules do not seem to be installed properly
IntegrityError
Data integrity error: {}
NoManifestError
@ -638,4 +638,4 @@ Prompts
BORG_CHECK_I_KNOW_WHAT_I_AM_DOING
For "This is a potentially dangerous function..." (check --repair)
BORG_DELETE_I_KNOW_WHAT_I_AM_DOING
For "You requested to completely DELETE the repository *including* all archives it contains:"
For "You requested to DELETE the repository completely *including* all archives it contains:"

View file

@ -60,7 +60,7 @@ In other words, the object ID itself only authenticates the plaintext of the
object and not its context or meaning. The latter is established by a different
object referring to an object ID, thereby assigning a particular meaning to
an object. For example, an archive item contains a list of object IDs that
represent packed file metadata. On their own it's not clear that these objects
represent packed file metadata. On their own, it's not clear that these objects
would represent what they do, but the archive item referring to them
in a particular part of its own data structure assigns this meaning.
@ -277,7 +277,7 @@ SSH server -- Borg RPC does not contain *any* networking
code. Networking is done by the SSH client running in a separate
process, Borg only communicates over the standard pipes (stdout,
stderr and stdin) with this process. This also means that Borg doesn't
have to directly use a SSH client (or SSH at all). For example,
have to use a SSH client directly (or SSH at all). For example,
``sudo`` or ``qrexec`` could be used as an intermediary.
By using the system's SSH client and not implementing a

View file

@ -169,7 +169,7 @@ set mode to M in archive for stdin data (default: 0660)
interpret PATH as command and store its stdout. See also section Reading from stdin below.
.TP
.B \-\-paths\-from\-stdin
read DELIM\-separated list of paths to backup from stdin. Will not recurse into directories.
read DELIM\-separated list of paths to back up from stdin. Will not recurse into directories.
.TP
.B \-\-paths\-from\-command
interpret PATH as command and treat its output as \fB\-\-paths\-from\-stdin\fP
@ -264,30 +264,30 @@ select compression algorithm, see the output of the \(dqborg help compression\(d
.sp
.nf
.ft C
# Backup ~/Documents into an archive named \(dqmy\-documents\(dq
# Back up ~/Documents into an archive named \(dqmy\-documents\(dq
$ borg create my\-documents ~/Documents
# same, but list all files as we process them
$ borg create \-\-list my\-documents ~/Documents
# Backup ~/Documents and ~/src but exclude pyc files
# Back up ~/Documents and ~/src but exclude pyc files
$ borg create my\-files \e
~/Documents \e
~/src \e
\-\-exclude \(aq*.pyc\(aq
# Backup home directories excluding image thumbnails (i.e. only
# Back up home directories excluding image thumbnails (i.e. only
# /home/<one directory>/.thumbnails is excluded, not /home/*/*/.thumbnails etc.)
$ borg create my\-files /home \-\-exclude \(aqsh:home/*/.thumbnails\(aq
# Backup the root filesystem into an archive named \(dqroot\-YYYY\-MM\-DD\(dq
# Back up the root filesystem into an archive named \(dqroot\-YYYY\-MM\-DD\(dq
# use zlib compression (good, but slow) \- default is lz4 (fast, low compression ratio)
$ borg create \-C zlib,6 \-\-one\-file\-system root\-{now:%Y\-%m\-%d} /
# Backup into an archive name like FQDN\-root\-TIMESTAMP
# Back up into an archive name like FQDN\-root\-TIMESTAMP
$ borg create \(aq{fqdn}\-root\-{now}\(aq /
# Backup a remote host locally (\(dqpull\(dq style) using sshfs
# Back up a remote host locally (\(dqpull\(dq style) using sshfs
$ mkdir sshfs\-mount
$ sshfs root@example.com:/ sshfs\-mount
$ cd sshfs\-mount
@ -300,10 +300,10 @@ $ fusermount \-u sshfs\-mount
# docs \- same parameters as borg < 1.0):
$ borg create \-\-chunker\-params buzhash,10,23,16,4095 small /smallstuff
# Backup a raw device (must not be active/in use/mounted at that time)
# Back up a raw device (must not be active/in use/mounted at that time)
$ borg create \-\-read\-special \-\-chunker\-params fixed,4194304 my\-sdx /dev/sdX
# Backup a sparse disk image (must not be active/in use/mounted at that time)
# Back up a sparse disk image (must not be active/in use/mounted at that time)
$ borg create \-\-sparse \-\-chunker\-params fixed,4194304 my\-disk my\-disk.raw
# No compression (none)
@ -334,9 +334,9 @@ $ cd /home/user/Documents
$ borg create \(aqdaily\-projectA\-{now:%Y\-%m\-%d}\(aq projectA
# Use external command to determine files to archive
# Use \-\-paths\-from\-stdin with find to only backup files less than 1MB in size
# Use \-\-paths\-from\-stdin with find to back up only files less than 1MB in size
$ find ~ \-size \-1000k | borg create \-\-paths\-from\-stdin small\-files\-only
# Use \-\-paths\-from\-command with find to only backup files from a given user
# Use \-\-paths\-from\-command with find to back up files from only a given user
$ borg create \-\-paths\-from\-command joes\-files \-\- find /srv/samba/shared \-user joe
# Use \-\-paths\-from\-stdin with \-\-paths\-delimiter (for example, for filenames with newlines in them)
$ find ~ \-size \-1000k \-print0 | borg create \e

View file

@ -36,7 +36,7 @@ borg [common options] key export [options] [PATH]
.SH DESCRIPTION
.sp
If repository encryption is used, the repository is inaccessible
without the key. This command allows one to backup this essential key.
without the key. This command allows one to back up this essential key.
Note that the backup produced does not include the passphrase itself
(i.e. the exported key stays encrypted). In order to regain access to a
repository, one needs both the exported key and the original passphrase.

View file

@ -91,7 +91,7 @@ of CPU cores.
.sp
When the daemonized process receives a signal or crashes, it does not unmount.
Unmounting in these cases could cause an active rsync or similar process
to unintentionally delete data.
to delete data unintentionally.
.sp
When running in the foreground ^C/SIGINT unmounts cleanly, but other
signals or crashes do not.

View file

@ -110,7 +110,7 @@ The easiest way to find out about what\(aqs fastest is to run \fBborg benchmark
\fIrepokey\fP modes: if you want ease\-of\-use and \(dqpassphrase\(dq security is good enough \-
the key will be stored in the repository (in \fBrepo_dir/config\fP).
.sp
\fIkeyfile\fP modes: if you rather want \(dqpassphrase and having\-the\-key\(dq security \-
\fIkeyfile\fP modes: if you want \(dqpassphrase and having\-the\-key\(dq security \-
the key will be stored in your home directory (in \fB~/.config/borg/keys\fP).
.sp
The following table is roughly sorted in order of preference, the better ones are
@ -205,7 +205,7 @@ _
.\" nanorst: inline-replace
.
.sp
\fInone\fP mode uses no encryption and no authentication. You\(aqre advised to NOT use this mode
\fInone\fP mode uses no encryption and no authentication. You\(aqre advised NOT to use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.
.sp

View file

@ -72,7 +72,7 @@ keep the local security info when deleting a repository
.ft C
# delete the whole repository and the related local cache:
$ borg rdelete
You requested to completely DELETE the repository *including* all archives it contains:
You requested to DELETE the repository completely *including* all archives it contains:
repo Mon, 2016\-02\-15 19:26:54
root\-2016\-02\-15 Mon, 2016\-02\-15 19:36:29
newname Mon, 2016\-02\-15 19:50:19

View file

@ -59,8 +59,8 @@ There is no risk of data loss by this.
used to have upgraded Borg 0.xx archives deduplicate with Borg 1.x archives.
.sp
\fBUSE WITH CAUTION.\fP
Depending on the PATHs and patterns given, recreate can be used to permanently
delete files from archives.
Depending on the PATHs and patterns given, recreate can be used to
delete files from archives permanently.
When in doubt, use \fB\-\-dry\-run \-\-verbose \-\-list\fP to see how patterns/PATHS are
interpreted. See \fIlist_item_flags\fP in \fBborg create\fP for details.
.sp
@ -163,7 +163,7 @@ manually specify the archive creation date/time (yyyy\-mm\-ddThh:mm:ss[(+|\-)HH:
select compression algorithm, see the output of the \(dqborg help compression\(dq command for details.
.TP
.BI \-\-recompress \ MODE
recompress data chunks according to \fIMODE\fP and \fB\-\-compression\fP\&. Possible modes are \fIif\-different\fP: recompress if current compression is with a different compression algorithm or different level; \fIalways\fP: recompress unconditionally; and \fInever\fP: do not recompress (use this option to explicitly prevent recompression). If no MODE is given, \fIif\-different\fP will be used. Not passing \-\-recompress is equivalent to \(dq\-\-recompress never\(dq.
recompress data chunks according to \fIMODE\fP and \fB\-\-compression\fP\&. Possible modes are \fIif\-different\fP: recompress if current compression is with a different compression algorithm or different level; \fIalways\fP: recompress unconditionally; and \fInever\fP: do not recompress (use this option explicitly to prevent recompression). If no MODE is given, \fIif\-different\fP will be used. Not passing \-\-recompress is equivalent to \(dq\-\-recompress never\(dq.
.TP
.BI \-\-chunker\-params \ PARAMS
specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE) or \fIdefault\fP to use the current defaults. default: buzhash,19,23,21,4095

View file

@ -43,10 +43,10 @@ See \fIborg\-common(1)\fP for common options of Borg commands.
.INDENT 0.0
.TP
.BI \-\-restrict\-to\-path \ PATH
restrict repository access to PATH. Can be specified multiple times to allow the client access to several directories. Access to all sub\-directories is granted implicitly; PATH doesn\(aqt need to directly point to a repository.
restrict repository access to PATH. Can be specified multiple times to allow the client access to several directories. Access to all sub\-directories is granted implicitly; PATH doesn\(aqt need to point to a repository directly.
.TP
.BI \-\-restrict\-to\-repository \ PATH
restrict repository access. Only the repository located at PATH (no sub\-directories are considered) is accessible. Can be specified multiple times to allow the client access to several repositories. Unlike \fB\-\-restrict\-to\-path\fP sub\-directories are not accessible; PATH needs to directly point at a repository location. PATH may be an empty directory or the last element of PATH may not exist, in which case the client may initialize a repository there.
restrict repository access. Only the repository located at PATH (no sub\-directories are considered) is accessible. Can be specified multiple times to allow the client access to several repositories. Unlike \fB\-\-restrict\-to\-path\fP sub\-directories are not accessible; PATH needs to point directly at a repository location. PATH may be an empty directory or the last element of PATH may not exist, in which case the client may initialize a repository there.
.TP
.B \-\-append\-only
only allow appending to repository segment files. Note that this only affects the low level structure of the repository, and running \fIdelete\fP or \fIprune\fP will still be allowed. See \fIappend_only_mode\fP in Additional Notes for more details.
@ -79,7 +79,7 @@ locations like \fB/etc/environment\fP or in the forced command itself (example b
.sp
.nf
.ft C
# Allow an SSH keypair to only run borg, and only have access to /path/to/repo.
# Allow an SSH keypair to run only borg, and only have access to /path/to/repo.
# Use key options to disable unneeded and potentially dangerous SSH functionality.
# This will help to secure an automated remote backup system.
$ cat ~/.ssh/authorized_keys
@ -100,7 +100,7 @@ The examples above use the \fBrestrict\fP directive. This does automatically
block potentially dangerous ssh features, even when they are added in a future
update. Thus, this option should be preferred.
.sp
If you\(aqre using openssh\-server < 7.2, however, you have to explicitly specify
If you\(aqre using openssh\-server < 7.2, however, you have to specify explicitly
the ssh features to restrict and cannot simply use the restrict option as it
was introduced in v7.2. We recommend using
\fBno\-port\-forwarding,no\-X11\-forwarding,no\-pty,no\-agent\-forwarding,no\-user\-rc\fP

View file

@ -104,7 +104,7 @@ root\-2016\-02\-01 root\-2016\-02\-2015
.INDENT 3.5
\fBborgfs\fP will be automatically provided if you used a distribution
package, \fBpip\fP or \fBsetup.py\fP to install Borg. Users of the
standalone binary will have to manually create a symlink (see
standalone binary will have to create a symlink manually (see
\fIpyinstaller\-binary\fP).
.UNINDENT
.UNINDENT

View file

@ -56,8 +56,8 @@ except when noted otherwise in the changelog
Use \fBborg upgrade \-\-tam REPO\fP to require manifest authentication
introduced with Borg 1.0.9 to address security issues. This means
that modifying the repository after doing this with a version prior
to 1.0.9 will raise a validation error, so only perform this upgrade
after updating all clients using the repository to 1.0.9 or newer.
to 1.0.9 will raise a validation error, so perform this upgrade
only after updating all clients using the repository to 1.0.9 or newer.
.sp
This upgrade should be done on each client for safety reasons.
.sp

View file

@ -40,11 +40,11 @@ borg [common options] <command> [options] [arguments]
BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.
.sp
The main goal of Borg is to provide an efficient and secure way to backup data.
The main goal of Borg is to provide an efficient and secure way to back up data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
.sp
Borg stores a set of files in an \fIarchive\fP\&. A \fIrepository\fP is a collection
of \fIarchives\fP\&. The format of repositories is Borg\-specific. Borg does not
@ -54,7 +54,7 @@ it does not matter when or where archives were created (e.g. different hosts).
.SS A step\-by\-step example
.INDENT 0.0
.IP 1. 3
Before a backup can be made a repository has to be initialized:
Before a backup can be made, a repository has to be initialized:
.INDENT 3.0
.INDENT 3.5
.sp
@ -91,7 +91,7 @@ $ borg \-r /path/to/repo create \-\-stats Tuesday ~/src ~/Documents
.UNINDENT
.UNINDENT
.sp
This backup will be a lot quicker and a lot smaller since only new never
This backup will be a lot quicker and a lot smaller since only new, never
before seen data is stored. The \fB\-\-stats\fP option causes Borg to
output statistics about the newly created archive such as the deduplicated
size (the amount of unique data not shared with other archives):
@ -189,7 +189,7 @@ $ borg \-r /path/to/repo compact
\fBNOTE:\fP
.INDENT 0.0
.INDENT 3.5
Borg is quiet by default (it works on WARNING log level).
Borg is quiet by default (it defaults to WARNING log level).
You can use options like \fB\-\-progress\fP or \fB\-\-list\fP to get specific
reports during command execution. You can also add the \fB\-v\fP (or
\fB\-\-verbose\fP or \fB\-\-info\fP) option to adjust the log level to INFO to
@ -404,7 +404,7 @@ If BORG_PASSPHRASE or BORG_PASSCOMMAND are also set, they take precedence.
When set, use the value to answer the passphrase question when a \fBnew\fP passphrase is asked for.
This variable is checked first. If it is not set, BORG_PASSPHRASE and BORG_PASSCOMMAND will also
be checked.
Main usecase for this is to fully automate \fBborg change\-passphrase\fP\&.
Main use case for this is to automate \fBborg change\-passphrase\fP fully.
.TP
.B BORG_DISPLAY_PASSPHRASE
When set, use the value to answer the \(dqdisplay the passphrase for verification\(dq question when defining a new passphrase for encrypted repositories.
@ -413,7 +413,7 @@ When set, use the value to answer the \(dqdisplay the passphrase for verificatio
Borg usually computes a host id from the FQDN plus the results of \fBuuid.getnode()\fP (which usually returns
a unique id based on the MAC address of the network interface). Except if that MAC happens to be all\-zero \- in
that case it returns a random value, which is not what we want (because it kills automatic stale lock removal).
So, if you have a all\-zero MAC address or other reasons to better externally control the host id, just set this
So, if you have an all\-zero MAC address or other reasons to control the host id externally, just set this
environment variable to a unique value. If all your FQDNs are unique, you can just use the FQDN. If not,
use \fI\%fqdn@uniqueid\fP\&.
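The derivation described above corresponds roughly to this sketch (function name hypothetical; Borg's actual code differs in detail):

```python
import os
import socket
import uuid

def host_id():
    # An explicit override wins, mirroring the BORG_HOST_ID variable.
    override = os.environ.get("BORG_HOST_ID")
    if override:
        return override
    # Otherwise combine the FQDN with uuid.getnode(). Note that getnode()
    # falls back to a *random* value when no usable MAC address is found,
    # which is why pinning the id via the environment can be necessary.
    return "%s@%x" % (socket.getfqdn(), uuid.getnode())

os.environ["BORG_HOST_ID"] = "myhost@0001"
pinned = host_id()                 # the pinned value is returned unchanged
del os.environ["BORG_HOST_ID"]
derived = host_id()                # FQDN plus hex node id
```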
.TP
@ -441,7 +441,7 @@ cache entries for backup sources other than the current sources.
.TP
.B BORG_FILES_CACHE_TTL
When set to a numeric value, this determines the maximum \(dqtime to live\(dq for the files cache
entries (default: 20). The files cache is used to quickly determine whether a file is unchanged.
entries (default: 20). The files cache is used to determine quickly whether a file is unchanged.
The FAQ explains this in more detail in: \fIalways_chunking\fP
.TP
.B BORG_SHOW_SYSINFO
@ -509,7 +509,7 @@ For \(dqWarning: The repository at location ... was previously located at ...\(d
For \(dqThis is a potentially dangerous function...\(dq (check \-\-repair)
.TP
.B BORG_DELETE_I_KNOW_WHAT_I_AM_DOING=NO (or =YES)
For \(dqYou requested to completely DELETE the repository \fIincluding\fP all archives it contains:\(dq
For \(dqYou requested to DELETE the repository completely \fIincluding\fP all archives it contains:\(dq
.UNINDENT
.sp
Note: answers are case sensitive. Setting an invalid answer value might either give the default
@ -737,7 +737,7 @@ If your repository is remote, all deduplicated (and optionally compressed/
encrypted) data of course has to go over the connection (\fBssh://\fP repo url).
If you use a locally mounted network filesystem, additionally some copy
operations used for transaction support also go over the connection. If
you backup multiple sources to one target repository, additional traffic
you back up multiple sources to one target repository, additional traffic
happens for cache resynchronization.
.UNINDENT
.SS Support for file metadata

View file

@ -13,11 +13,11 @@ DESCRIPTION
BorgBackup (short: Borg) is a deduplicating backup program.
Optionally, it supports compression and authenticated encryption.
The main goal of Borg is to provide an efficient and secure way to backup data.
The main goal of Borg is to provide an efficient and secure way to back up data.
The data deduplication technique used makes Borg suitable for daily backups
since only changes are stored.
The authenticated encryption technique makes it suitable for backups to not
fully trusted targets.
The authenticated encryption technique makes it suitable for backups to targets not
fully trusted.
Borg stores a set of files in an *archive*. A *repository* is a collection
of *archives*. The format of repositories is Borg-specific. Borg does not

View file

@ -24,7 +24,7 @@ sudo chmod 755 /usr/local/bin/borg
# Now check it: (possibly needs a terminal restart)
borg -V
# That's it! Check out the other screencasts to see how to actually use borgbackup.
# That's it! Now check out the other screencasts to see how to use borgbackup.
}]
# wget may be slow

View file

@ -45,7 +45,7 @@ to the ratio of the different target chunk sizes.
Note: RAM needs were not a problem in this specific case (37GB data size).
But just imagine, you have 37TB of such data and much less than 42GB RAM,
then you'ld definitely want the "lg" chunker params so you only need
then you should use the "lg" chunker params so you only need
2.6GB RAM. Or even bigger chunks than shown for "lg" (see "xl").
You also see compression works better for larger chunks, as expected.
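The RAM difference follows from a back-of-the-envelope estimate of one chunk-index entry per unique chunk (the ~100 bytes per entry used here is an assumed round number, not Borg's exact entry size):

```python
def chunk_index_ram(data_size, avg_chunk_size, bytes_per_entry=100):
    # Rough estimate: one index entry per unique chunk.
    return data_size // avg_chunk_size * bytes_per_entry

TiB = 1024 ** 4
MiB = 1024 ** 2

# 37 TiB of data: large ~2 MiB chunks ("lg"-style) vs. small 64 KiB chunks.
ram_lg = chunk_index_ram(37 * TiB, 2 * MiB)
ram_sm = chunk_index_ram(37 * TiB, 64 * 1024)

# 32x larger chunks -> 32x fewer index entries -> 32x less index RAM.
```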

View file

@ -21,22 +21,22 @@ mount an archive to restore from a backup.
*Repositories* are filesystem directories acting as self-contained stores of archives.
Repositories can be accessed locally via path or remotely via ssh. Under the hood,
repositories contain data blocks and a manifest tracking which blocks are in each
archive. If some data hasn't changed from one backup to another, Borg can simply
reference an already uploaded data chunk (deduplication).
repositories contain data blocks and a manifest that tracks which blocks are in each
archive. If some data hasn't changed between backups, Borg simply
references an already uploaded data chunk (deduplication).
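The referencing idea can be modeled as a content-addressed store (a toy model: real Borg cuts chunks with a rolling-hash chunker and encrypts them, and chunk ids are MACs rather than plain SHA-256):

```python
import hashlib

class ChunkStore:
    # Content-addressed store: a chunk is stored only once;
    # later archives just reference its id.
    def __init__(self):
        self.chunks = {}

    def add(self, data):
        cid = hashlib.sha256(data).hexdigest()
        if cid not in self.chunks:   # dedup: skip already-stored chunks
            self.chunks[cid] = data
        return cid

store = ChunkStore()
ids_monday  = [store.add(b"block-A"), store.add(b"block-B")]
ids_tuesday = [store.add(b"block-A"), store.add(b"block-C")]  # block-A reused
```

Two archives sharing a chunk store only three chunks in total; the shared block is uploaded once and referenced twice.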
.. _about_free_space:
Important note about free space
-------------------------------
Before you start creating backups, please make sure that there is *always*
a good amount of free space on the filesystem that has your backup repository
Before you start creating backups, ensure that there is *always* plenty
of free space on the destination filesystem that has your backup repository
(and also on ~/.cache). A few GB should suffice for most hard-drive sized
repositories. See also :ref:`cache-memory-usage`.
Borg doesn't use space reserved for root on repository disks (even when run as root),
on file systems which do not support this mechanism (e.g. XFS) we recommend to reserve
Borg doesn't use space reserved for root on repository disks (even when run as root).
On file systems that do not support this mechanism (e.g. XFS), we recommend reserving
some space in Borg itself just to be safe by adjusting the ``additional_free_space``
setting (a good starting point is ``2G``)::
@ -47,7 +47,7 @@ can while aborting the current operation safely, which allows the user to free m
by deleting/pruning archives. This mechanism is not bullet-proof in some
circumstances [1]_.
If you *really* run out of disk space, it can be hard or impossible to free space,
If you do run out of disk space, it can be hard or impossible to free space,
because Borg needs free space to operate - even to delete backup archives.
You can use some monitoring process or just include the free space information
@ -71,16 +71,15 @@ Also helpful:
Important note about permissions
--------------------------------
To avoid permissions issues (in your borg repository or borg cache), **always
To avoid permission issues (in your borg repository or borg cache), **always
access the repository using the same user account**.
If you want to backup files of other users or the operating system, running
borg as root likely will be required (otherwise you'ld get `Permission denied`
If you want to back up files of other users or the operating system, running
borg as root will likely be required (otherwise you will get `Permission denied`
errors).
If you only back up your own files, you neither need nor want to run borg as
root, just run it as your normal user.
If you only back up your own files, run it as your normal user (i.e. not root).
For a local repository just always use the same user to invoke borg.
For a local repository, always use the same user to invoke borg.
For a remote repository: always use e.g. ssh://borg@remote_host. You can use this
from different local users, the remote user running borg and accessing the
@ -116,14 +115,14 @@ common techniques to achieve this.
- Shut down containers before backing up their storage volumes.
For some systems Borg might work well enough without these
For some systems, Borg might work well enough without these
precautions. If you are simply backing up the files on a system that
isn't very active (e.g. in a typical home directory), Borg usually
works well enough without further care for consistency. Log files and
caches might not be in a perfect state, but this is rarely a problem.
For databases, virtual machines, and containers, there are specific
techniques for backing them up that do not simply use Borg to backup
techniques for backing them up that do not simply use Borg to back up
the underlying filesystem. For databases, check your database
documentation for techniques that will save the database state between
transactions. For virtual machines, consider running the backup on
@ -139,8 +138,8 @@ complete operating system) to a repository ``~/backup/main`` on a remote server
Some files which aren't necessarily needed in this backup are excluded. See
:ref:`borg_patterns` on how to add more exclude options.
After the backup this script also uses the :ref:`borg_prune` subcommand to keep
only a certain number of old archives and deletes the others.
After the backup, this script also uses the :ref:`borg_prune` subcommand to keep
a certain number of old archives and delete the others.
Finally, it uses the :ref:`borg_compact` subcommand to remove deleted objects
from the segment files in the repository to free disk space.
@ -152,8 +151,8 @@ by the root user, but not executable or readable by anyone else, i.e. root:root
You can use this script as a starting point and modify it where it's necessary to fit
your setup.
Do not forget to test your created backups to make sure everything you need is being
backed up and that the ``prune`` command is keeping and deleting the correct backups.
Do not forget to test your created backups to make sure everything you need is
backed up and that the ``prune`` command keeps and deletes the correct backups.
::
@ -171,7 +170,7 @@ backed up and that the ``prune`` command is keeping and deleting the correct bac
info "Starting backup"
# Backup the most important directories into an archive named after
# Back up the most important directories into an archive named after
# the machine this script is currently running on:
borg create \
@ -236,7 +235,7 @@ Pitfalls with shell variables and environment variables
-------------------------------------------------------
This applies to all environment variables you want Borg to see, not just
``BORG_PASSPHRASE``. The short explanation is: always ``export`` your variable,
``BORG_PASSPHRASE``. TL;DR: always ``export`` your variable,
and use single quotes if you're unsure of the details of your shell's expansion
behavior. E.g.::
@ -256,7 +255,7 @@ For more information, refer to the sudo(8) man page and ``env_keep`` in
the sudoers(5) man page.
.. Tip::
To debug what your borg process is actually seeing, find its PID
To debug what your borg process sees, find its PID
(``ps aux|grep borg``) and then look into ``/proc/<PID>/environ``.
.. passphrase_notes:
@ -264,29 +263,30 @@ the sudoers(5) man page.
Passphrase notes
----------------
If you use encryption (or authentication), Borg will interactively ask you
If you use encryption (or authentication), Borg will ask you interactively
for a passphrase to encrypt/decrypt the keyfile / repokey.
A passphrase should be a single line of text, a trailing linefeed will be
A passphrase should be a single line of text. Any trailing linefeed will be
stripped.
For your own safety, you maybe want to avoid empty passphrases as well
extremely long passphrase (much more than 256 bits of entropy).
Do not use empty passphrases, as these can be trivially guessed, leaving
your encrypted data insecure.
Also avoid passphrases containing non-ASCII characters.
Borg is technically able to process all unicode text, but you might get into
trouble reproducing the same encoded utf-8 bytes or with keyboard layouts,
so better just avoid non-ASCII stuff.
Avoid passphrases containing non-ASCII characters.
Borg can process any unicode text, but problems may arise at input due to text
encoding or differing keyboard layouts, so it is best to just avoid non-ASCII characters.
If you want to automate, you can alternatively supply the passphrase
directly or indirectly using some environment variables.
See: https://xkcd.com/936/
You can directly give a passphrase::
If you want to automate, you can supply the passphrase
directly or indirectly with the use of environment variables.
Supply a passphrase directly::
# use this passphrase (use safe permissions on the script!):
export BORG_PASSPHRASE='my super secret passphrase'
Or ask an external program to supply the passphrase::
Or delegate to an external program to supply the passphrase::
# use the "pass" password manager to get the passphrase:
export BORG_PASSCOMMAND='pass show backup'
@ -424,22 +424,23 @@ be acceptable for backup usage.
Restoring a backup
------------------
Please note that we are only describing the most basic commands and options
here - please refer to the command reference to see more.
Please note that we describe only the most basic commands and options
here. Refer to the command reference to see more.
For restoring, you usually want to work **on the same machine as the same user**
that was also used to create the backups of the wanted files. Doing it like
that avoids quite some issues:
To restore, work **on the same machine as the same user**
that was used to create the backups of the wanted files. Doing so
avoids issues such as:
- no confusion relating to paths
- same mapping of user/group names to user/group IDs
- no permission issues
- you likely already have a working borg setup there,
- confusion relating to paths
- mismatches in the mapping of user/group names to user/group IDs
- permission problems
- maybe including a environment variable for the key passphrase (for encrypted repos),
- maybe including a keyfile for the repo (not needed for repokey mode),
- maybe including a ssh key for the repo server (not needed for locally mounted repos),
- maybe including a valid borg cache for that repo (quicker than cache rebuild).
You likely already have a working borg setup there, including perhaps:
- an environment variable for the key passphrase (for encrypted repos),
- a keyfile for the repo (not needed for repokey mode),
- a ssh key for the repo server (not needed for locally mounted repos),
- a valid borg cache for that repo (quicker than cache rebuild).
The **user** might be:
@ -461,12 +462,12 @@ The **key** can be located:
- in the repository (**repokey** mode).
Easy, this will usually "just work".
- in the home directory of the user who did the backup (**keyfile** mode).
- in the home directory of the user who made the backup (**keyfile** mode).
This may cause a bit more effort:
- if you have just lost that home directory and you first need to restore the
borg key (e.g. from the separate backup you have made of it or from another
borg key (e.g. from the separate backup you made of it or from another
user or machine accessing the same repository).
- if you first must find out the correct machine / user / home directory
(where the borg client was run to make the backups).
@ -483,19 +484,19 @@ There are **2 ways to restore** files from a borg backup repository:
- **borg mount** - use this if:
- you don't precisely know what files you want to restore
- you don't know exactly which files you want to restore
- you don't know which archive contains the files (in the state) you want
- you need to look into files / directories before deciding what you want
- you need a relatively low volume of data restored
- you don't care for restoring stuff that the FUSE mount is not implementing yet
- you don't care about restoring stuff that the FUSE mount does not implement yet
(like special fs flags, ACLs)
- you have a client with good resources (RAM, CPU, temp. disk space)
- you want to rather use some filemanager to restore (copy) files than borg
- you have a client with good resources (RAM, CPU, temporary disk space)
- you would rather use some file manager to restore (copy) files than borg
extract shell commands
- **borg extract** - use this if:
- you precisely know what you want (repo, archive, path)
- you know precisely what you want (repo, archive, path)
- you need a high volume of files restored (best speed)
- you want a as-complete-as-it-gets reproduction of file metadata
(like special fs flags, ACLs)

View file

@ -1,8 +1,8 @@
1. Before a backup can be made a repository has to be initialized::
1. Before a backup can be made, a repository has to be initialized::
$ borg -r /path/to/repo rcreate --encryption=repokey-aes-ocb
2. Backup the ``~/src`` and ``~/Documents`` directories into an archive called
2. Back up the ``~/src`` and ``~/Documents`` directories into an archive called
*Monday*::
$ borg -r /path/to/repo create Monday ~/src ~/Documents
@ -11,7 +11,7 @@
$ borg -r /path/to/repo create --stats Tuesday ~/src ~/Documents
This backup will be a lot quicker and a lot smaller since only new never
This backup will be a lot quicker and a lot smaller since only new, never
before seen data is stored. The ``--stats`` option causes Borg to
output statistics about the newly created archive such as the deduplicated
size (the amount of unique data not shared with other archives)::
@ -58,7 +58,7 @@
$ borg -r /path/to/repo compact
.. Note::
Borg is quiet by default (it works on WARNING log level).
Borg is quiet by default (it defaults to WARNING log level).
You can use options like ``--progress`` or ``--list`` to get specific
reports during command execution. You can also add the ``-v`` (or
``--verbose`` or ``--info``) option to adjust the log level to INFO to

View file

@ -132,7 +132,7 @@ of CPU cores.
When the daemonized process receives a signal or crashes, it does not unmount.
Unmounting in these cases could cause an active rsync or similar process
to unintentionally delete data.
to delete data unintentionally.
When running in the foreground ^C/SIGINT unmounts cleanly, but other
signals or crashes do not.

View file

@ -74,9 +74,9 @@ Examples
$ borg create 'daily-projectA-{now:%Y-%m-%d}' projectA
# Use external command to determine files to archive
# Use --paths-from-stdin with find to only backup files less than 1MB in size
# Use --paths-from-stdin with find to back up only files less than 1MB in size
$ find ~ -size -1000k | borg create --paths-from-stdin small-files-only
# Use --paths-from-command with find to only backup files from a given user
# Use --paths-from-command with find to back up files from only a given user
$ borg create --paths-from-command joes-files -- find /srv/samba/shared -user joe
# Use --paths-from-stdin with --paths-delimiter (for example, for filenames with newlines in them)
$ find ~ -size -1000k -print0 | borg create \
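The point of combining ``-print0`` with ``--paths-delimiter`` can be sketched without invoking borg at all. Everything below is illustrative (scratch directory, made-up filenames); it only shows why newline-separated path lists break on unusual filenames while NUL-separated ones do not:

```shell
# Sketch: why NUL-delimited paths matter for --paths-from-stdin.
tmp=$(mktemp -d)
printf 'data' > "$tmp/normal.txt"
printf 'data' > "$tmp/$(printf 'weird\nname').txt"   # filename containing a newline

# Newline-separated output splits the odd filename into two bogus "paths":
find "$tmp" -type f | wc -l                          # 3 lines for only 2 files

# NUL-separated output keeps exactly one entry per file:
find "$tmp" -type f -print0 | tr -cd '\0' | wc -c    # 2

rm -rf "$tmp"
```

A consumer reading the NUL-delimited stream (as borg does with ``--paths-delimiter``) therefore sees each path intact, however strange its name.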

View file

@ -43,7 +43,7 @@ borg create
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--content-from-command`` | interpret PATH as command and store its stdout. See also section Reading from stdin below. |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--paths-from-stdin`` | read DELIM-separated list of paths to backup from stdin. Will not recurse into directories. |
| | ``--paths-from-stdin`` | read DELIM-separated list of paths to back up from stdin. Will not recurse into directories. |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--paths-from-command`` | interpret PATH as command and treat its output as ``--paths-from-stdin`` |
+-------------------------------------------------------+---------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
@ -136,7 +136,7 @@ borg create
--stdin-group GROUP set group GROUP in archive for stdin data (default: 'wheel')
--stdin-mode M set mode to M in archive for stdin data (default: 0660)
--content-from-command interpret PATH as command and store its stdout. See also section Reading from stdin below.
--paths-from-stdin read DELIM-separated list of paths to backup from stdin. Will not recurse into directories.
--paths-from-stdin read DELIM-separated list of paths to back up from stdin. Will not recurse into directories.
--paths-from-command interpret PATH as command and treat its output as ``--paths-from-stdin``
--paths-delimiter DELIM set path delimiter for ``--paths-from-stdin`` and ``--paths-from-command`` (default: \n)

View file

@ -33,14 +33,14 @@ General:
When set, use the value to answer the passphrase question when a **new** passphrase is asked for.
This variable is checked first. If it is not set, BORG_PASSPHRASE and BORG_PASSCOMMAND will also
be checked.
Main usecase for this is to fully automate ``borg change-passphrase``.
The main use case for this is to fully automate ``borg change-passphrase``.
BORG_DISPLAY_PASSPHRASE
When set, use the value to answer the "display the passphrase for verification" question when defining a new passphrase for encrypted repositories.
BORG_HOST_ID
Borg usually computes a host id from the FQDN plus the results of ``uuid.getnode()`` (which usually returns
a unique id based on the MAC address of the network interface). If that MAC happens to be all-zero, however,
it returns a random value, which is not what we want (because it kills automatic stale lock removal).
So, if you have a all-zero MAC address or other reasons to better externally control the host id, just set this
So, if you have an all-zero MAC address, or other reasons to control the host id externally, just set this
environment variable to a unique value. If all your FQDNs are unique, you can just use the FQDN. If not,
use fqdn@uniqueid.
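The ``fqdn@uniqueid`` form can be assembled from whatever stable identifier the machine offers. The sketch below uses ``/etc/machine-id``, which is only a Linux convention and just one possible source (an assumption; adapt it to your system):

```shell
# Sketch: build a fqdn@uniqueid value for BORG_HOST_ID.
# /etc/machine-id is a Linux-specific assumption; any stable,
# per-machine unique string works as the second component.
uniqueid=$(cat /etc/machine-id 2>/dev/null || uname -n)
export BORG_HOST_ID="$(hostname -f 2>/dev/null || uname -n)@${uniqueid}"
echo "$BORG_HOST_ID"
```

Set this in the environment of whatever runs the borg client, so the same id is used for every backup from that machine.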
BORG_LOCK_WAIT
@ -62,7 +62,7 @@ General:
cache entries for backup sources other than the current sources.
BORG_FILES_CACHE_TTL
When set to a numeric value, this determines the maximum "time to live" for the files cache
entries (default: 20). The files cache is used to quickly determine whether a file is unchanged.
entries (default: 20). The files cache is used to determine quickly whether a file is unchanged.
The FAQ explains this in more detail: :ref:`always_chunking`
BORG_SHOW_SYSINFO
When set to no (default: yes), system information (like OS, Python version, ...) in
@ -112,7 +112,7 @@ Some automatic "answerers" (if set, they automatically answer confirmation quest
BORG_CHECK_I_KNOW_WHAT_I_AM_DOING=NO (or =YES)
For "This is a potentially dangerous function..." (check --repair)
BORG_DELETE_I_KNOW_WHAT_I_AM_DOING=NO (or =YES)
For "You requested to completely DELETE the repository *including* all archives it contains:"
For "You requested to DELETE the repository completely *including* all archives it contains:"
Note: answers are case sensitive. Setting an invalid answer value might either give the default
answer or ask you interactively, depending on whether retries are allowed (they by default are

View file

@ -91,5 +91,5 @@ Network (only for client/server operation):
encrypted) data of course has to go over the connection (``ssh://`` repo url).
If you use a locally mounted network filesystem, additionally some copy
operations used for transaction support also go over the connection. If
you backup multiple sources to one target repository, additional traffic
you back up multiple sources to one target repository, additional traffic
happens for cache resynchronization.

View file

@ -195,14 +195,14 @@ are added. Exclusion patterns from ``--exclude-from`` files are appended last.
Examples::
# backup pics, but not the ones from 2018, except the good ones:
# back up pics, but not the ones from 2018, except the good ones:
# note: using = is essential to avoid cmdline argument parsing issues.
borg create --pattern=+pics/2018/good --pattern=-pics/2018 archive pics
# backup only JPG/JPEG files (case insensitive) in all home directories:
# back up only JPG/JPEG files (case insensitive) in all home directories:
borg create --pattern '+ re:\.jpe?g(?i)$' archive /home
# backup homes, but exclude big downloads (like .ISO files) or hidden files:
# back up homes, but exclude big downloads (like .ISO files) or hidden files:
borg create --exclude 're:\.iso(?i)$' --exclude 'sh:home/**/.*' archive /home
# use a file with patterns (recursion root '/' via command line):
@ -217,7 +217,7 @@ The patterns.lst file could look like that::
+ home/susan
# also back up this exact file
+ pf:home/bobby/specialfile.txt
# don't backup the other home directories
# don't back up the other home directories
- home/*
# don't even look in /dev, /proc, /run, /sys, /tmp (note: would exclude files like /device, too)
! re:^(dev|proc|run|sys|tmp)
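The final ``! re:...`` line above is a regular expression matched against paths relative to the recursion root (no leading slash). A quick, borg-free way to preview roughly what such a regex hits is plain ``grep -E`` on some sample paths (the paths below are made up, and borg's own matching rules still apply):

```shell
# Preview which relative paths the regex from the patterns file would match.
printf '%s\n' dev/null home/susan/notes proc/1/status tmp/x var/log/syslog \
  | grep -E '^(dev|proc|run|sys|tmp)'
# matches: dev/null, proc/1/status, tmp/x
```

Paths under ``home`` and ``var`` pass through untouched, which is exactly what you want for the exclusion above.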

View file

@ -54,7 +54,7 @@ Description
~~~~~~~~~~~
If repository encryption is used, the repository is inaccessible
without the key. This command allows one to backup this essential key.
without the key. This command allows one to back up this essential key.
Note that the backup produced does not include the passphrase itself
(i.e. the exported key stays encrypted). In order to regain access to a
repository, one needs both the exported key and the original passphrase.

View file

@ -49,5 +49,5 @@ borgfs
``borgfs`` will be automatically provided if you used a distribution
package, ``pip`` or ``setup.py`` to install Borg. Users of the
standalone binary will have to manually create a symlink (see
standalone binary will have to create a symlink manually (see
:ref:`pyinstaller-binary`).

View file

@ -152,7 +152,7 @@ of CPU cores.
When the daemonized process receives a signal or crashes, it does not unmount.
Unmounting in these cases could cause an active rsync or similar process
to unintentionally delete data.
to delete data unintentionally.
When running in the foreground ^C/SIGINT unmounts cleanly, but other
signals or crashes do not.

View file

@ -30,7 +30,7 @@ for block devices (like disks, partitions, LVM LVs) or raw disk image files.
``--chunker-params=fixed,4096,512`` results in fixed 4kiB sized blocks,
but the first header block will only be 512B long. This might be useful to
dedup files with 1 header + N fixed size data blocks. Be careful to not
dedup files with 1 header + N fixed size data blocks. Be careful not to
produce too many chunks (like using a small block size for huge
files).
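For a rough feel of the chunk counts involved, the fixed chunker's arithmetic can be sketched in shell. The 10 MiB file size below is an arbitrary example, not anything borg-specific:

```shell
# Sketch: how many chunks --chunker-params=fixed,4096,512 yields for one file.
size=$((10 * 1024 * 1024))   # hypothetical 10 MiB file
header=512                   # length of the first (header) chunk
block=4096                   # fixed block size for the remainder
body=$(( size - header ))
chunks=$(( 1 + (body + block - 1) / block ))   # 1 header + ceil(body/block)
echo "$chunks"               # 2561
```

Scaling this up shows the caution above: a small block size applied to very large files multiplies the chunk count (and thus index and cache overhead) quickly.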
@ -63,7 +63,7 @@ For more details, see :ref:`chunker_details`.
``--noatime / --noctime``
~~~~~~~~~~~~~~~~~~~~~~~~~
You can use these ``borg create`` options to not store the respective timestamp
You can use these ``borg create`` options to avoid storing the respective timestamp
into the archive, in case you do not really need it.
Besides saving a little space for the omitted timestamp, it might also
@ -74,7 +74,7 @@ won't deduplicate just because of that.
``--nobsdflags / --noflags``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use this to not query and store (or not extract and set) flags - in case
You can use this not to query and store (or not extract and set) flags - in case
you don't need them or if they are broken somehow for your fs.
On Linux, dealing with the flags needs some additional syscalls. Especially when
@ -132,7 +132,7 @@ scale and perform better if you do not work via the FUSE mount.
Example
+++++++
Imagine you have made some snapshots of logical volumes (LVs) you want to backup.
Imagine you have made some snapshots of logical volumes (LVs) you want to back up.
.. note::
@ -309,8 +309,8 @@ operation on an append-only repository to catch accidental or malicious corrupti
# run without append-only mode
borg check --verify-data && borg compact
Aside from checking repository & archive integrity you may want to also manually check
backups to ensure their content seems correct.
Aside from checking repository & archive integrity you may also want to check
backups manually to ensure their content seems correct.
Further considerations
++++++++++++++++++++++

View file

@ -123,7 +123,7 @@ The easiest way to find out about what's fastest is to run ``borg benchmark cpu`
`repokey` modes: if you want ease-of-use and "passphrase" security is good enough -
the key will be stored in the repository (in ``repo_dir/config``).
`keyfile` modes: if you rather want "passphrase and having-the-key" security -
`keyfile` modes: if you want "passphrase and having-the-key" security -
the key will be stored in your home directory (in ``~/.config/borg/keys``).
The following table is roughly sorted in order of preference, the better ones are
@ -151,7 +151,7 @@ in the upper part of the table, in the lower part is the old and/or unsafe(r) st
.. nanorst: inline-replace
`none` mode uses no encryption and no authentication. You're advised to NOT use this mode
`none` mode uses no encryption and no authentication. You're advised NOT to use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.

View file

@ -6,7 +6,7 @@ Examples
# delete the whole repository and the related local cache:
$ borg rdelete
You requested to completely DELETE the repository *including* all archives it contains:
You requested to DELETE the repository completely *including* all archives it contains:
repo Mon, 2016-02-15 19:26:54
root-2016-02-15 Mon, 2016-02-15 19:36:29
newname Mon, 2016-02-15 19:50:19

View file

@ -67,7 +67,7 @@ borg recreate
+-----------------------------------------------------------------------------+---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``-C COMPRESSION``, ``--compression COMPRESSION`` | select compression algorithm, see the output of the "borg help compression" command for details. |
+-----------------------------------------------------------------------------+---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--recompress MODE`` | recompress data chunks according to `MODE` and ``--compression``. Possible modes are `if-different`: recompress if current compression is with a different compression algorithm or different level; `always`: recompress unconditionally; and `never`: do not recompress (use this option to explicitly prevent recompression). If no MODE is given, `if-different` will be used. Not passing --recompress is equivalent to "--recompress never". |
| | ``--recompress MODE`` | recompress data chunks according to `MODE` and ``--compression``. Possible modes are `if-different`: recompress if current compression is with a different compression algorithm or different level; `always`: recompress unconditionally; and `never`: do not recompress (use this option explicitly to prevent recompression). If no MODE is given, `if-different` will be used. Not passing --recompress is equivalent to "--recompress never". |
+-----------------------------------------------------------------------------+---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--chunker-params PARAMS`` | specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE) or `default` to use the current defaults. default: buzhash,19,23,21,4095 |
+-----------------------------------------------------------------------------+---------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
@ -116,7 +116,7 @@ borg recreate
--comment COMMENT add a comment text to the archive
--timestamp TIMESTAMP manually specify the archive creation date/time (yyyy-mm-ddThh:mm:ss[(+|-)HH:MM] format, (+|-)HH:MM is the UTC offset, default: local time zone). Alternatively, give a reference file/directory.
-C COMPRESSION, --compression COMPRESSION select compression algorithm, see the output of the "borg help compression" command for details.
--recompress MODE recompress data chunks according to `MODE` and ``--compression``. Possible modes are `if-different`: recompress if current compression is with a different compression algorithm or different level; `always`: recompress unconditionally; and `never`: do not recompress (use this option to explicitly prevent recompression). If no MODE is given, `if-different` will be used. Not passing --recompress is equivalent to "--recompress never".
--recompress MODE recompress data chunks according to `MODE` and ``--compression``. Possible modes are `if-different`: recompress if current compression is with a different compression algorithm or different level; `always`: recompress unconditionally; and `never`: do not recompress (use this option explicitly to prevent recompression). If no MODE is given, `if-different` will be used. Not passing --recompress is equivalent to "--recompress never".
--chunker-params PARAMS specify the chunker parameters (ALGO, CHUNK_MIN_EXP, CHUNK_MAX_EXP, HASH_MASK_BITS, HASH_WINDOW_SIZE) or `default` to use the current defaults. default: buzhash,19,23,21,4095
@ -147,8 +147,8 @@ There is no risk of data loss by this.
used to have upgraded Borg 0.xx archives deduplicate with Borg 1.x archives.
**USE WITH CAUTION.**
Depending on the PATHs and patterns given, recreate can be used to permanently
delete files from archives.
Depending on the PATHs and patterns given, recreate can be used to
delete files from archives permanently.
When in doubt, use ``--dry-run --verbose --list`` to see how patterns/PATHS are
interpreted. See :ref:`list_item_flags` in ``borg create`` for details.

View file

@ -21,7 +21,7 @@ locations like ``/etc/environment`` or in the forced command itself (example bel
::
# Allow an SSH keypair to only run borg, and only have access to /path/to/repo.
# Allow an SSH keypair to run only borg, and only have access to /path/to/repo.
# Use key options to disable unneeded and potentially dangerous SSH functionality.
# This will help to secure an automated remote backup system.
$ cat ~/.ssh/authorized_keys
@ -36,7 +36,7 @@ locations like ``/etc/environment`` or in the forced command itself (example bel
block potential dangerous ssh features, even when they are added in a future
update. Thus, this option should be preferred.
If you're using openssh-server < 7.2, however, you have to explicitly specify
If you're using openssh-server < 7.2, however, you have to specify the ssh
features to restrict explicitly, and cannot simply use the restrict option, as it
was only introduced in v7.2. We recommend using
``no-port-forwarding,no-X11-forwarding,no-pty,no-agent-forwarding,no-user-rc``
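Put together, a pre-7.2 ``authorized_keys`` entry might look like the sketch below. Key type, key material, and the repository path are placeholders, not values from this repository's docs:

```
command="borg serve --restrict-to-repository /path/to/repo",no-port-forwarding,no-X11-forwarding,no-pty,no-agent-forwarding,no-user-rc ssh-ed25519 AAAA... client@example
```

Everything before the key type must be a single comma-separated options field on one line, or sshd will reject the entry.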

View file

@ -15,9 +15,9 @@ borg serve
+-------------------------------------------------------+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| **options** |
+-------------------------------------------------------+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--restrict-to-path PATH`` | restrict repository access to PATH. Can be specified multiple times to allow the client access to several directories. Access to all sub-directories is granted implicitly; PATH doesn't need to directly point to a repository. |
| | ``--restrict-to-path PATH`` | restrict repository access to PATH. Can be specified multiple times to allow the client access to several directories. Access to all sub-directories is granted implicitly; PATH doesn't need to point directly to a repository. |
+-------------------------------------------------------+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--restrict-to-repository PATH`` | restrict repository access. Only the repository located at PATH (no sub-directories are considered) is accessible. Can be specified multiple times to allow the client access to several repositories. Unlike ``--restrict-to-path`` sub-directories are not accessible; PATH needs to directly point at a repository location. PATH may be an empty directory or the last element of PATH may not exist, in which case the client may initialize a repository there. |
| | ``--restrict-to-repository PATH`` | restrict repository access. Only the repository located at PATH (no sub-directories are considered) is accessible. Can be specified multiple times to allow the client access to several repositories. Unlike ``--restrict-to-path`` sub-directories are not accessible; PATH needs to point directly at a repository location. PATH may be an empty directory or the last element of PATH may not exist, in which case the client may initialize a repository there. |
+-------------------------------------------------------+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | ``--append-only`` | only allow appending to repository segment files. Note that this only affects the low level structure of the repository, and running `delete` or `prune` will still be allowed. See :ref:`append_only_mode` in Additional Notes for more details. |
+-------------------------------------------------------+-----------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
@ -41,8 +41,8 @@ borg serve
options
--restrict-to-path PATH restrict repository access to PATH. Can be specified multiple times to allow the client access to several directories. Access to all sub-directories is granted implicitly; PATH doesn't need to directly point to a repository.
--restrict-to-repository PATH restrict repository access. Only the repository located at PATH (no sub-directories are considered) is accessible. Can be specified multiple times to allow the client access to several repositories. Unlike ``--restrict-to-path`` sub-directories are not accessible; PATH needs to directly point at a repository location. PATH may be an empty directory or the last element of PATH may not exist, in which case the client may initialize a repository there.
--restrict-to-path PATH restrict repository access to PATH. Can be specified multiple times to allow the client access to several directories. Access to all sub-directories is granted implicitly; PATH doesn't need to point directly to a repository.
--restrict-to-repository PATH restrict repository access. Only the repository located at PATH (no sub-directories are considered) is accessible. Can be specified multiple times to allow the client access to several repositories. Unlike ``--restrict-to-path`` sub-directories are not accessible; PATH needs to point directly at a repository location. PATH may be an empty directory or the last element of PATH may not exist, in which case the client may initialize a repository there.
--append-only only allow appending to repository segment files. Note that this only affects the low level structure of the repository, and running `delete` or `prune` will still be allowed. See :ref:`append_only_mode` in Additional Notes for more details.
--storage-quota QUOTA Override storage quota of the repository (e.g. 5G, 1.5T). When a new repository is initialized, sets the storage quota on the new repository as well. Default: no quota.

View file

@ -51,8 +51,8 @@ exe = EXE(pyz,
console=True)
# Build a directory-based binary in addition to a packed
# single file. This allows one to easily look at all included
# files (e.g. without having to strace or halt the built binary
# single file. This allows one to look at all included
# files easily (e.g. without having to strace or halt the built binary
# and introspect /tmp). Also avoids unpacking all libs when
# running the app, which is better for app signing on various OS.
slim_exe = EXE(pyz,

View file

@ -556,7 +556,7 @@ _borg-recreate() {
local -a mods=(
'if-different:recompress if current compression is with a different compression algorithm (the level is not considered)'
'always:recompress even if current compression is with the same compression algorithm (use this to change the compression level)'
'never:do not recompress (use this option to explicitly prevent recompression)'
'never:do not recompress (use this option explicitly to prevent recompression)'
)
mods=( ${(q)mods//\\/\\\\} )
mods=( ${mods//:/\\:} )

View file

@ -40,7 +40,7 @@ cpu_threads = multiprocessing.cpu_count() if multiprocessing and multiprocessing
# Are we building on ReadTheDocs?
on_rtd = os.environ.get("READTHEDOCS")
# Extra cflags for all extensions, usually just warnings we want to explicitly enable
# Extra cflags for all extensions, usually just warnings we want to enable explicitly
cflags = ["-Wall", "-Wextra", "-Wpointer-arith"]
compress_source = "src/borg/compress.pyx"

View file

@ -1824,7 +1824,7 @@ class ArchiveChecker:
# if we kill the defect chunk here, subsequent actions within this "borg check"
# run will find missing chunks and replace them with all-zero replacement
# chunks and flag the files as "repaired".
# if another backup is done later and the missing chunks get backupped again,
# if another backup is done later and the missing chunks get backed up again,
# a "borg check" afterwards can heal all files where this chunk was missing.
logger.warning(
"Found defect chunks. They will be deleted now, so affected files can "

View file

@ -744,7 +744,7 @@ class CreateMixIn:
subparser.add_argument(
"--paths-from-stdin",
action="store_true",
help="read DELIM-separated list of paths to backup from stdin. Will not " "recurse into directories.",
help="read DELIM-separated list of paths to back up from stdin. Will not " "recurse into directories.",
)
subparser.add_argument(
"--paths-from-command",

View file

@ -199,14 +199,14 @@ class HelpMixIn:
Examples::
# backup pics, but not the ones from 2018, except the good ones:
# back up pics, but not the ones from 2018, except the good ones:
# note: using = is essential to avoid cmdline argument parsing issues.
borg create --pattern=+pics/2018/good --pattern=-pics/2018 archive pics
# backup only JPG/JPEG files (case insensitive) in all home directories:
# back up only JPG/JPEG files (case insensitive) in all home directories:
borg create --pattern '+ re:\\.jpe?g(?i)$' archive /home
# backup homes, but exclude big downloads (like .ISO files) or hidden files:
# back up homes, but exclude big downloads (like .ISO files) or hidden files:
borg create --exclude 're:\\.iso(?i)$' --exclude 'sh:home/**/.*' archive /home
# use a file with patterns (recursion root '/' via command line):
@ -221,7 +221,7 @@ class HelpMixIn:
+ home/susan
# also back up this exact file
+ pf:home/bobby/specialfile.txt
# don't backup the other home directories
# don't back up the other home directories
- home/*
# don't even look in /dev, /proc, /run, /sys, /tmp (note: would exclude files like /device, too)
! re:^(dev|proc|run|sys|tmp)

View file

@ -151,7 +151,7 @@ class KeysMixIn:
key_export_epilog = process_epilog(
"""
If repository encryption is used, the repository is inaccessible
without the key. This command allows one to backup this essential key.
without the key. This command allows one to back up this essential key.
Note that the backup produced does not include the passphrase itself
(i.e. the exported key stays encrypted). In order to regain access to a
repository, one needs both the exported key and the original passphrase.

View file

@ -105,7 +105,7 @@ class MountMixIn:
When the daemonized process receives a signal or crashes, it does not unmount.
Unmounting in these cases could cause an active rsync or similar process
to unintentionally delete data.
to delete data unintentionally.
When running in the foreground ^C/SIGINT unmounts cleanly, but other
signals or crashes do not.

View file

@ -118,7 +118,7 @@ class RCreateMixIn:
`repokey` modes: if you want ease-of-use and "passphrase" security is good enough -
the key will be stored in the repository (in ``repo_dir/config``).
`keyfile` modes: if you rather want "passphrase and having-the-key" security -
`keyfile` modes: if you want "passphrase and having-the-key" security -
the key will be stored in your home directory (in ``~/.config/borg/keys``).
The following table is roughly sorted in order of preference, the better ones are
@@ -146,7 +146,7 @@ class RCreateMixIn:
.. nanorst: inline-replace
-`none` mode uses no encryption and no authentication. You're advised to NOT use this mode
+`none` mode uses no encryption and no authentication. You're advised NOT to use this mode
as it would expose you to all sorts of issues (DoS, confidentiality, tampering, ...) in
case of malicious activity in the repository.
@@ -31,13 +31,13 @@ class RDeleteMixIn:
manifest = Manifest.load(repository, Manifest.NO_OPERATION_CHECK)
n_archives = len(manifest.archives)
msg.append(
-f"You requested to completely DELETE the following repository "
+f"You requested to DELETE the following repository completely "
f"*including* {n_archives} archives it contains:"
)
except NoManifestError:
n_archives = None
msg.append(
-"You requested to completely DELETE the following repository "
+"You requested to DELETE the following repository completely "
"*including* all archives it may contain:"
)
@@ -54,7 +54,7 @@ class RDeleteMixIn:
for archive_info in manifest.archives.list(sort_by=["ts"]):
msg.append(format_archive(archive_info))
else:
-msg.append("This repository seems to not have any archives.")
+msg.append("This repository seems not to have any archives.")
else:
msg.append(
"This repository seems to have no manifest, so we can't "
@@ -89,8 +89,8 @@ class RecreateMixIn:
used to have upgraded Borg 0.xx archives deduplicate with Borg 1.x archives.
**USE WITH CAUTION.**
-Depending on the PATHs and patterns given, recreate can be used to permanently
-delete files from archives.
+Depending on the PATHs and patterns given, recreate can be used to
+delete files from archives permanently.
When in doubt, use ``--dry-run --verbose --list`` to see how patterns/PATHS are
interpreted. See :ref:`list_item_flags` in ``borg create`` for details.
@@ -199,7 +199,7 @@ class RecreateMixIn:
"`if-different`: recompress if current compression is with a different "
"compression algorithm or different level; "
"`always`: recompress unconditionally; and "
-"`never`: do not recompress (use this option to explicitly prevent "
+"`never`: do not recompress (use this option explicitly to prevent "
"recompression). "
"If no MODE is given, `if-different` will be used. "
'Not passing --recompress is equivalent to "--recompress never".',
@@ -46,7 +46,7 @@ class ServeMixIn:
action="append",
help="restrict repository access to PATH. "
"Can be specified multiple times to allow the client access to several directories. "
-"Access to all sub-directories is granted implicitly; PATH doesn't need to directly point to a repository.",
+"Access to all sub-directories is granted implicitly; PATH doesn't need to point directly to a repository.",
)
subparser.add_argument(
"--restrict-to-repository",
@@ -57,7 +57,7 @@ class ServeMixIn:
"(no sub-directories are considered) is accessible. "
"Can be specified multiple times to allow the client access to several repositories. "
"Unlike ``--restrict-to-path`` sub-directories are not accessible; "
-"PATH needs to directly point at a repository location. "
+"PATH needs to point directly at a repository location. "
"PATH may be an empty directory or the last element of PATH may not exist, in which case "
"the client may initialize a repository there.",
)
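The contrast between the two restrictions can be sketched with a simple path-prefix check. This is an illustration of the documented semantics only, not borg's actual implementation; the helper names and example paths are made up:

```python
import os

# Sketch of the documented semantics: --restrict-to-path grants everything
# under PATH, while --restrict-to-repository accepts only the exact
# repository directory. Helper names and paths are hypothetical.
def allowed_by_path(restrict, requested):
    restrict = os.path.normpath(restrict)
    requested = os.path.normpath(requested)
    return requested == restrict or requested.startswith(restrict + os.sep)

def allowed_by_repository(restrict, requested):
    return os.path.normpath(requested) == os.path.normpath(restrict)

assert allowed_by_path("/srv/backups", "/srv/backups/alice/repo")
assert not allowed_by_repository("/srv/backups", "/srv/backups/alice/repo")
assert allowed_by_repository("/srv/backups/alice/repo", "/srv/backups/alice/repo/")
```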
@@ -135,9 +135,9 @@ class ChunkerFixed:
It optionally supports:
- a header block of different size
-- using a sparsemap to only read data ranges and seek over hole ranges
+- using a sparsemap to read only data ranges and seek over hole ranges
for sparse files.
-- using an externally given filemap to only read specific ranges from
+- using an externally given filemap to read only specific ranges from
a file.
Note: the last block of a data or hole range may be less than the block size,
@@ -231,7 +231,7 @@ cdef class Chunker:
"""
Content-Defined Chunker, variable chunk sizes.
-This chunker does quite some effort to mostly cut the same-content chunks, even if
+This chunker makes quite some effort to cut mostly chunks of the same-content, even if
the content moves to a different offset inside the file. It uses the buzhash
rolling-hash algorithm to identify the chunk cutting places by looking at the
content inside the moving window and computing the rolling hash value over the
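The idea behind content-defined chunking can be sketched with a toy rolling hash. This is not buzhash and not Borg's parameters — the window, mask, and cut value below are purely illustrative:

```python
def chunk_boundaries(data: bytes, mask=0xFF, magic=0x42, min_skip=32):
    """Cut wherever the low bits of a toy rolling hash equal `magic`.

    The hash keeps only the last 32 bytes of history (older bits are
    shifted out of the 32-bit state), so cut points depend on local
    content and can re-align even when the same content appears at a
    different offset in the file.
    """
    h = 0
    boundaries = []
    for i, byte in enumerate(data):
        h = ((h << 1) & 0xFFFFFFFF) ^ byte  # toy rolling-ish hash
        if i >= min_skip and (h & mask) == magic:
            boundaries.append(i + 1)
    return boundaries

# identical input always yields identical cut points
data = bytes(range(256)) * 8
assert chunk_boundaries(data) == chunk_boundaries(data)
# constant input never reaches the cut value, so no boundaries at all
assert chunk_boundaries(b"\x00" * 1000) == []
```

Because cut points are chosen by content rather than by fixed offsets, inserting bytes near the start of a file only disturbs the first few chunks; later chunks keep their boundaries and deduplicate against the previous backup.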
@@ -261,7 +261,7 @@ def scandir_inorder(*, path, fd=None):
def secure_erase(path, *, avoid_collateral_damage):
-"""Attempt to securely erase a file by writing random data over it before deleting it.
+"""Attempt to erase a file securely by writing random data over it before deleting it.
If avoid_collateral_damage is True, we only secure erase if the total link count is 1,
otherwise we just do a normal "delete" (unlink) without first overwriting it with random.
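The link-count check described in the docstring can be sketched as follows. This is a simplified stand-in, not Borg's `secure_erase` (it also ignores filesystem-level caveats such as copy-on-write, where overwriting in place does not destroy old blocks):

```python
import os

def secure_erase_sketch(path, *, avoid_collateral_damage=True):
    # Only overwrite with random bytes if no other hard link shares the
    # inode (st_nlink == 1); otherwise a plain unlink avoids trashing
    # data that is still reachable via the other links.
    st = os.stat(path, follow_symlinks=False)
    if not avoid_collateral_damage or st.st_nlink == 1:
        with open(path, "r+b") as fd:
            fd.write(os.urandom(st.st_size))
            fd.flush()
            os.fsync(fd.fileno())
    os.unlink(path)
```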
@@ -211,7 +211,7 @@ class SigIntManager:
def action_completed(self):
# this must be called when the action triggered is completed,
-# to avoid that the action is repeatedly triggered.
+# to avoid repeatedly triggering the action.
self._action_triggered = False
self._action_done = True
@@ -242,7 +242,7 @@ def ignore_sigint():
Ctrl-C will send a SIGINT to both the main process (borg) and subprocesses
(e.g. ssh for remote ssh:// repos), but often we do not want the subprocess
-getting killed (e.g. because it is still needed to cleanly shut down borg).
+getting killed (e.g. because it is still needed to shut down borg cleanly).
To avoid that: Popen(..., preexec_fn=ignore_sigint)
"""
@@ -328,8 +328,8 @@ class LockRoster:
def migrate_lock(self, key, old_id, new_id):
"""migrate the lock ownership from old_id to new_id"""
assert self.id == old_id
-# need to temporarily switch off stale lock killing as we want to
-# rather migrate than kill them (at least the one made by old_id).
+# need to switch off stale lock killing temporarily as we want to
+# migrate rather than kill them (at least the one made by old_id).
killing, self.kill_stale_locks = self.kill_stale_locks, False
try:
try:
@@ -152,7 +152,7 @@ class Manifest:
# behaviours are known when introducing new features sometimes this might not match the general descriptions
# below.
-# The READ operation describes which features are needed to safely list and extract the archives in the
+# The READ operation describes which features are needed to list and extract the archives safely in the
# repository.
READ = "read"
# The CHECK operation is for all operations that need either to understand every detail
@@ -36,7 +36,7 @@ def swidth(s):
def process_alive(host, pid, thread):
"""
-Check if the (host, pid, thread_id) combination corresponds to a potentially alive process.
+Check whether the (host, pid, thread_id) combination corresponds to a process potentially alive.
If the process is local, then this will be accurate. If the process is not local, then this
returns always True, since there is no real way to check.
@@ -41,7 +41,7 @@ def getosusername():
def process_alive(host, pid, thread):
"""
-Check if the (host, pid, thread_id) combination corresponds to a potentially alive process.
+Check whether the (host, pid, thread_id) combination corresponds to a process potentially alive.
"""
if host.split('@')[0].lower() != platform.node().lower():
# Not running on the same node, assume running.
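The local half of such a liveness check is typically done with `kill(pid, 0)`, which performs only the existence/permission check without delivering a signal. A sketch of that idea (not Borg's actual `process_alive`):

```python
import os

def local_process_alive(pid):
    # Signal 0 delivers nothing; the kernel only checks whether the pid
    # exists and whether we would be allowed to signal it.
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False      # no such process
    except PermissionError:
        return True       # exists, but owned by another user
    return True

assert local_process_alive(os.getpid())
```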
@@ -669,7 +669,7 @@ This problem will go away as soon as the server has been upgraded to 1.0.7+.
self.shutdown_time = time.monotonic() + 30
self.rollback()
finally:
-# in any case, we want to cleanly close the repo, even if the
+# in any case, we want to close the repo cleanly, even if the
# rollback can not succeed (e.g. because the connection was
# already closed) and raised another exception:
logger.debug(
@@ -56,7 +56,7 @@ class RepoObj:
return hdr + meta_encrypted + data_encrypted
def parse_meta(self, id: bytes, cdata: bytes) -> dict:
-# when calling parse_meta, enough cdata needs to be supplied to completely contain the
+# when calling parse_meta, enough cdata needs to be supplied to contain completely the
# meta_len_hdr and the encrypted, packed metadata. it is allowed to provide more cdata.
assert isinstance(id, bytes)
assert isinstance(cdata, bytes)
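The calling contract can be illustrated with a made-up length-prefixed layout. The real `meta_len_hdr` format and the encryption layer are not shown here; the 4-byte little-endian header is purely hypothetical:

```python
import struct

META_LEN_HDR = struct.Struct("<I")  # hypothetical 4-byte length header

def parse_meta_sketch(cdata: bytes) -> bytes:
    (meta_len,) = META_LEN_HDR.unpack_from(cdata, 0)
    # caller must supply at least header + complete metadata ...
    if len(cdata) < META_LEN_HDR.size + meta_len:
        raise ValueError("incomplete metadata")
    # ... but trailing extra bytes (e.g. the data part) are fine
    return cdata[META_LEN_HDR.size : META_LEN_HDR.size + meta_len]

blob = META_LEN_HDR.pack(5) + b"hello" + b"trailing-data-ignored"
assert parse_meta_sketch(blob) == b"hello"
```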
@@ -129,7 +129,7 @@ class Repository:
this is of course way more complex).
LoggedIO gracefully handles truncate/unlink splits as long as the truncate resulted in
-a zero length file. Zero length segments are considered to not exist, while LoggedIO.cleanup()
+a zero length file. Zero length segments are considered not to exist, while LoggedIO.cleanup()
will still get rid of them.
"""
@@ -328,7 +328,7 @@ class Repository:
if os.path.isfile(config_path):
link_error_msg = (
-"Failed to securely erase old repository config file (hardlinks not supported). "
+"Failed to erase old repository config file securely (hardlinks not supported). "
"Old repokey data, if any, might persist on physical storage."
)
try:
@@ -429,7 +429,7 @@ class Repository:
# valid (committed) state of the repo which we could use.
msg = '%s" - although likely this is "beyond repair' % self.path # dirty hack
raise self.CheckNeeded(msg)
-# Attempt to automatically rebuild index if we crashed between commit
+# Attempt to rebuild index automatically if we crashed between commit
# tag write and index save.
if index_transaction_id != segments_transaction_id:
if index_transaction_id is not None and index_transaction_id > segments_transaction_id:
@@ -719,7 +719,7 @@ class Repository:
self.index = None
def check_free_space(self):
-"""Pre-commit check for sufficient free space to actually perform the commit."""
+"""Pre-commit check for sufficient free space necessary to perform the commit."""
# As a baseline we take four times the current (on-disk) index size.
# At this point the index may only be updated by compaction, which won't resize it.
# We still apply a factor of four so that a later, separate invocation can free space
@@ -734,7 +734,7 @@ class Repository:
# 10 bytes for each segment-refcount pair, 10 bytes for each segment-space pair
# Assume maximum of 5 bytes per integer. Segment numbers will usually be packed more densely (1-3 bytes),
# as will refcounts and free space integers. For 5 MiB segments this estimate is good to ~20 PB repo size.
-# Add 4K to generously account for constant format overhead.
+# Add a generous 4K to account for constant format overhead.
hints_size = len(self.segments) * 10 + len(self.compact) * 10 + 4096
required_free_space += hints_size
@@ -1238,7 +1238,7 @@ class Repository:
# smallest valid seg is <uint32> 0, smallest valid offs is <uint32> 8
start_segment, start_offset, end_segment = state if state is not None else (0, 0, transaction_id)
ids, segment, offset = [], 0, 0
-# we only scan up to end_segment == transaction_id to only scan **committed** chunks,
+# we only scan up to end_segment == transaction_id to scan only **committed** chunks,
# avoiding scanning into newly written chunks.
for segment, filename in self.io.segment_iterator(start_segment, end_segment):
# the start_offset we potentially got from state is only valid for the start_segment we also got
@@ -65,7 +65,7 @@ def exec_cmd(*args, archiver=None, fork=False, exe=None, input=b"", binary_outpu
sys.stdin = StringIO(input.decode())
sys.stdin.buffer = BytesIO(input)
output = BytesIO()
-# Always use utf-8 here, to simply .decode() below
+# Always use utf-8 here, to .decode() below
output_text = sys.stdout = sys.stderr = io.TextIOWrapper(output, encoding="utf-8")
if archiver is None:
archiver = Archiver()
@@ -113,7 +113,7 @@ class ArchiverTestCase(ArchiverTestCaseBase):
pass
else:
with pytest.raises((LockFailed, RemoteRepository.RPCError)) as excinfo:
-# self.fuse_mount always assumes fork=True, so for this test we have to manually set fork=False
+# self.fuse_mount always assumes fork=True, so for this test we have to set fork=False manually
with self.fuse_mount(self.repository_location, fork=False):
pass
if isinstance(excinfo.value, RemoteRepository.RPCError):
@@ -436,7 +436,7 @@ class ArchiverTestCase(ArchiverTestCaseBase):
def test_extract_capabilities(self):
fchown = os.fchown
-# We need to manually patch chown to get the behaviour Linux has, since fakeroot does not
+# We need to patch chown manually to get the behaviour Linux has, since fakeroot does not
# accurately model the interaction of chown(2) and Linux capabilities, i.e. it does not remove them.
def patched_fchown(fd, uid, gid):
xattr.setxattr(fd, b"security.capability", b"", follow_symlinks=False)
@@ -615,17 +615,17 @@ class IndexCorruptionTestCase(BaseTestCase):
idx = NSIndex()
# create lots of colliding entries
-for y in range(700): # stay below max load to not trigger resize
+for y in range(700): # stay below max load not to trigger resize
idx[HH(0, y, 0)] = (0, y, 0)
assert idx.size() == 1024 + 1031 * 48 # header + 1031 buckets
# delete lots of the collisions, creating lots of tombstones
-for y in range(400): # stay above min load to not trigger resize
+for y in range(400): # stay above min load not to trigger resize
del idx[HH(0, y, 0)]
# create lots of colliding entries, within the not yet used part of the hashtable
-for y in range(330): # stay below max load to not trigger resize
+for y in range(330): # stay below max load not to trigger resize
# at y == 259 a resize will happen due to going beyond max EFFECTIVE load
# if the bug is present, that element will be inserted at the wrong place.
# and because it will be at the wrong place, it can not be found again.
# at y == 259 a resize will happen due to going beyond max EFFECTIVE load
# if the bug is present, that element will be inserted at the wrong place.
# and because it will be at the wrong place, it can not be found again.
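The collision/tombstone interplay this test exercises builds on open addressing. A toy table shows why lookups must probe past tombstones rather than stop at them — this is a from-scratch illustration, not borg's NSIndex (no resizing, no load factors):

```python
EMPTY, TOMB = object(), object()  # sentinel slot states

class ToyIndex:
    """Open-addressing hash table with tombstones (linear probing)."""

    def __init__(self, size=8):
        self.slots = [EMPTY] * size

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while True:
            yield i
            i = (i + 1) % len(self.slots)

    def put(self, key, value):
        for i in self._probe(key):
            s = self.slots[i]
            if s is EMPTY or s is TOMB or s[0] == key:
                self.slots[i] = (key, value)
                return

    def get(self, key):
        for i in self._probe(key):
            s = self.slots[i]
            if s is EMPTY:  # only a truly empty slot ends the probe;
                raise KeyError(key)  # a tombstone must NOT stop it
            if s is not TOMB and s[0] == key:
                return s[1]

    def delete(self, key):
        for i in self._probe(key):
            s = self.slots[i]
            if s is EMPTY:
                raise KeyError(key)
            if s is not TOMB and s[0] == key:
                self.slots[i] = TOMB
                return

idx = ToyIndex()
idx.put(0, "a")
idx.put(8, "b")  # hash collision with key 0 (both land in slot 0)
idx.delete(0)    # leaves a tombstone in slot 0
assert idx.get(8) == "b"  # still found: lookup probes past the tombstone
```

If `get` treated the tombstone like an empty slot, key 8 would become unreachable — the same class of "element can not be found again" bug the test above guards against.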
@@ -463,7 +463,7 @@ class RepositoryCommitTestCase(RepositoryTestCaseBase):
put_segment = get_latest_segment()
self.repository.commit(compact=False)
-# We now delete H(1), and force this segment to not be compacted, which can happen
+# We now delete H(1), and force this segment not to be compacted, which can happen
# if it's not sparse enough (symbolized by H(2) here).
self.repository.delete(H(1))
self.repository.put(H(2), fchunk(b"1"))