I have a ~10GB local repository with ~1.5K archives (backups) of multiple sets of data (multiple distinct source directories), made over a few years and kept in one repository to allow deduplication across data hierarchy changes. All the directories are backed up together (one by one) on a regular basis. As the number of changes in any particular source directory is usually small, a single "create" command takes <1s to process it, but after that there is a ~30s delay on "Saving files cache". Since that happens for every directory, the total backup time is significantly increased compared to the actual "backup time" (i.e. without the post-processing). Sample command: […] When executed with […]
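The actual sample command and its output were not preserved above; purely for illustration, the per-directory runs described here would look something like the following (hypothetical repository path, source directories, and archive names):

```sh
# one repository, one archive per source directory per run (hypothetical names)
REPO=/backup/borg-repo

borg create --stats "$REPO::projects-{now}" /data/projects
borg create --stats "$REPO::photos-{now}"   /data/photos
borg create --stats "$REPO::mail-{now}"     /data/mail

# each run processes the few changed files in <1s, then waits ~30s
# around "Saving files cache" before returning
```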
I performed the update procedure for 1.2.6 (although some backups might already have been created with 1.2.6, because the Borg update came in via the operating system) and there were no "tam:none" entries, so according to the procedure I did not run "borg upgrade". I'm not sure if I did that on 1.0.9. My related questions:
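For context, the 1.2.6 check referred to above goes roughly along these lines (a sketch only; the repository path is a placeholder and the 1.2.6 change log describes the authoritative procedure):

```sh
# list archives together with their TAM status; "tam:verified" is fine,
# "tam:none" is what the upgrade procedure asks about
borg list --format '{name} {time} tam:{tam}{NL}' /backup/borg-repo

# only if archives created by a trusted old borg show "tam:none" does
# "borg upgrade --tam /backup/borg-repo" come into play
```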
If you only have a few changed source files, it is expected that the overhead of loading and saving the files cache, the chunks index, and the repo index is larger than the actual backup work. The archive-TAM-related work is relatively inexpensive, as there is not much data to process, but of course the time needed for it scales with the archive count (btw, other borg operations, like […]). You can ignore the TAM-related debug log messages; they are normal and only for debugging. If your files cache is rather big and takes a lot of time to load/save, and you are backing up different sets of source files using different […]
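The reply is cut off above; if it was heading toward splitting the files cache per source set (my reading, not confirmed by the preserved text), Borg's `BORG_FILES_CACHE_SUFFIX` environment variable allows exactly that, roughly:

```sh
# assumption: a separate files cache per source set, so each run only
# loads/saves the (much smaller) cache for its own directory
BORG_FILES_CACHE_SUFFIX=projects borg create "$REPO::projects-{now}" /data/projects
BORG_FILES_CACHE_SUFFIX=photos   borg create "$REPO::photos-{now}"   /data/photos
```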
As clarified by @ThomasWaldmann, it is not a problem with the cache; rather, `--stats` adds extra time to go through all the existing archives in the repository, which may take a while if you have many of them (here 1.5K+).
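One practical consequence (a sketch, assuming per-archive statistics are only needed occasionally): drop `--stats` from the routine runs and request statistics on demand afterwards, e.g. with `borg info`, which may itself have to go through the archives but only when explicitly invoked:

```sh
# routine run without --stats: no extra pass over the ~1.5K archives
borg create "$REPO::projects-{now}" /data/projects

# request statistics only when actually needed (hypothetical archive name)
borg info "$REPO::projects-2024-01-01T12:00:00"
```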