This should allow us to cleanly prevent older borg versions from doing
operations that are no longer safe because of repository format evolution.
It allows more fine-grained control than just incrementing the manifest
version. For example, a change that still allows new archives to be
created, but would corrupt the repository when an old version tries to
delete an archive or check the repository, would add the new feature to
the check and delete sets but leave it out of the write set.
This is somewhat inspired by ext{2,3,4}, which use feature sets for
compat (everything except fsck), ro-compat (may only be accessed
read-only by older versions) and incompat (refuse all access).
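A minimal sketch of the per-operation feature-set check (the set names, feature names and the check function below are illustrative, not borg's actual manifest format):

```python
# Illustrative per-operation feature sets; names are made up,
# not borg's actual manifest layout.
MANDATORY_FEATURES = {
    # operation -> features a client MUST understand to do it safely
    "read":   frozenset(),
    "write":  frozenset(),
    "check":  frozenset({"new-chunk-layout"}),
    "delete": frozenset({"new-chunk-layout"}),
}

SUPPORTED_FEATURES = frozenset()  # what this (older) client implements

def check_feature_compatibility(operation):
    """Refuse the operation if the repository requires features
    this client does not know about."""
    missing = MANDATORY_FEATURES[operation] - SUPPORTED_FEATURES
    if missing:
        raise RuntimeError("repository requires unknown features %s for %s"
                           % (sorted(missing), operation))
```

With these sets, the old client above could still "write" (create archives), but would refuse "check" and "delete".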
add a make_parent(path) helper to reduce code duplication.
also use it for directories, although makedirs could do that on its own.
bugfix: also create the parent dir for device files, if needed.
(cherry picked from commit d4e27e2952)
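Such a helper can be sketched with plain os calls (borg's actual implementation may differ):

```python
import os

def make_parent(path):
    """Create the parent directory of *path*, if it does not exist yet."""
    parent = os.path.dirname(os.path.normpath(path))
    if parent:
        os.makedirs(parent, exist_ok=True)
```

Calling it before extracting a device file, symlink or regular file makes the "parent dir missing" case impossible.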
* Set warning exit code when xattr is too big
* Warnings for more extended attributes errors (ENOTSUP, EACCES)
* Add tests for all xattr warnings
(cherry picked from commit 63b5cbfc99)
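The error handling can be sketched like this (the warning collection and the set of errnos are simplified; E2BIG is the "xattr too big" case, and the exit-code value is illustrative):

```python
import errno

WARNING_RC = 1  # non-zero "warning" exit code; value illustrative

def try_set_xattr(set_fn, path, name, value, warnings):
    """Apply an xattr via *set_fn*; downgrade expected failures to warnings."""
    try:
        set_fn(path, name, value)
        return True
    except OSError as e:
        if e.errno == errno.E2BIG:
            warnings.append("%s: xattr %s is too big" % (path, name))
        elif e.errno in (errno.ENOTSUP, errno.EACCES):
            warnings.append("%s: cannot set xattr %s: %s" % (path, name, e))
        else:
            raise  # unexpected error: still fatal
        return False
```

Any collected warning would then make the process exit with the warning code instead of 0.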
we do not trust the remote, so we are careful when unpacking its responses.
the remote could return manipulated msgpack data that announces e.g. a huge
array, map or string. the local side would then need to allocate huge amounts
of RAM in expectation of that data (no matter whether that much data actually
arrives or not).
by using limits in the Unpacker, a ValueError is raised if unexpectedly large
amounts of data would get unpacked, avoiding a memory DoS.
# Conflicts:
# borg/archiver.py
# src/borg/archive.py
# src/borg/remote.py
# src/borg/repository.py
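The underlying problem can be shown without msgpack itself: a length announced in a header must be validated against a limit before any buffer is allocated for the payload. A stdlib-only sketch for one msgpack type (the real fix uses the limit parameters of msgpack's Unpacker):

```python
import struct

MAX_STR_LEN = 1 << 20  # 1 MiB; illustrative limit

def announced_str32_len(header):
    """Parse a msgpack str32 header (0xdb + 4-byte big-endian length) and
    validate the announced length BEFORE allocating a buffer for the data."""
    if len(header) != 5 or header[0] != 0xdb:
        raise ValueError("not a str32 header")
    (length,) = struct.unpack(">I", header[1:])
    if length > MAX_STR_LEN:
        raise ValueError("announced length %d exceeds limit" % length)
    return length
```

A malicious header like `\xdb\xff\xff\xff\xff` announces a ~4 GiB string; the check rejects it with ValueError instead of reserving that memory.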
increase the mask (target chunk size) from 14 (16 kiB) to 17 (128 kiB).
this should reduce the number of item metadata chunks an archive has to reference to 1/8.
this does not completely fix #1452, but at least enables an 8x larger item metadata stream.
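The numbers follow directly from the mask being an exponent of two (target chunk size = 2**mask bytes):

```python
# target chunk size is 2**mask bytes
old_mask, new_mask = 14, 17
old_size = 1 << old_mask   # 16384 bytes = 16 kiB
new_size = 1 << new_mask   # 131072 bytes = 128 kiB
factor = new_size // old_size
print(factor)  # 8 -> roughly 1/8 as many item metadata chunks
```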
if we do not lose the original chunk ids list when "repairing" a file (replacing missing
chunks with all-zero chunks), we have a chance to heal the file back into its original
state later, in case the chunks re-appear (e.g. in a fresh backup).
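One possible shape of that mechanism (item layout, key names and the ZERO placeholder below are illustrative, not borg's exact on-disk format):

```python
ZERO = "ZERO"  # placeholder id for an all-zero replacement chunk (illustrative)

def repair(item, missing_ids):
    """Replace missing chunks by placeholders, but keep the original list."""
    if any(cid in missing_ids for cid, _size in item["chunks"]):
        item.setdefault("chunks_healthy", list(item["chunks"]))  # remember originals
        item["chunks"] = [(ZERO if cid in missing_ids else cid, size)
                          for cid, size in item["chunks"]]

def heal(item, present_ids):
    """If all original chunks re-appeared, restore the original chunk list."""
    healthy = item.get("chunks_healthy")
    if healthy and all(cid in present_ids for cid, _size in healthy):
        item["chunks"] = item.pop("chunks_healthy")
```

Because repair only adds the saved list and never discards it, a later check run can undo the damage once the chunks are present again.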
processing depends on symlink target:
- if target is a special file: process the symlink as a regular file
- if target is anything else: process the symlink as a symlink
refactor code a little to avoid duplication.
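The decision above can be sketched as follows (assuming POSIX stat; how dangling symlinks are handled is an assumption here, they are treated as plain symlinks):

```python
import os
import stat

def symlink_mode(path):
    """Decide how to process a symlink, based on what it points at."""
    try:
        st = os.stat(path)  # stat() follows the symlink to its target
    except OSError:
        return "symlink"    # dangling link: process as a symlink
    m = st.st_mode
    if stat.S_ISBLK(m) or stat.S_ISCHR(m) or stat.S_ISFIFO(m):
        return "regular-file"  # special target: process like a regular file
    return "symlink"
```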