[Enhancement]: Splitting backups into author/item and database backups with separate backup schedules #2466
Originally created by @ZLoth on GitHub (Oct 2, 2024).
Type of Enhancement
Server Backend
Describe the Feature/Enhancement
Split the backups into two: one being the author/item images backup (metadata-authors and metadata-items, respectively), and the other being the database (aka absdatabase.sqlite), with each having its own backup schedule (e.g. images on a weekly schedule, while the database stays on a daily schedule).
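To make the split concrete, here is a minimal sketch of two independent schedules using the node-cron package; the backup helpers are hypothetical placeholders for illustration, not audiobookshelf's actual scheduler code.

```js
// Sketch only — assumes the node-cron package; the two backup helpers
// are hypothetical placeholders, not audiobookshelf's real functions.
const cron = require('node-cron')

function backupDatabase () {
  // e.g. copy/compress absdatabase.sqlite into /metadata/backups
}

function backupImages () {
  // e.g. archive the metadata-authors and metadata-items folders
}

// Daily at 01:30 — the database is small once compressed and changes
// constantly (progress, users), so it gets the frequent schedule.
cron.schedule('30 1 * * *', backupDatabase)

// Weekly on Sunday at 02:00 — images are large and rarely change.
cron.schedule('0 2 * * 0', backupImages)
```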
Why would this be helpful?
My library consists of over 5,000 titles, so there is a large number of title and author images that don't compress much further, alongside the database, which compresses very well. See the example below, where I show the sizes both uncompressed and compressed:
Now, I fully understand the reason for wanting a daily backup of the database, as it contains current book status as well as the user list. As you can see, the uncompressed database dump is around 630 MB, while the compressed version is just 70.8 MB for my large library.
But the images, once created, rarely change, so they could be backed up on a less frequent schedule (e.g. weekly, with an option for a manual backup), accepting that you might have to run a match to recover the images of the most recent titles. The images are already in compressed JPEG format and take up 1.49 GB. That is a lot of data that doesn't change, and it could benefit from a less frequent backup schedule with change detection, so that if nothing changed, no new backup is created.
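One way to get that "skip if unchanged" behavior is to fingerprint the image folders (relative path, size, and mtime of every file) and compare against the previous run. A minimal Node.js sketch, assuming the metadata-authors/metadata-items folder names from above and a hypothetical state file:

```js
// Sketch: skip the image backup when nothing has changed since the
// last run. Folder names and the state file are assumptions.
const fs = require('fs')
const path = require('path')
const crypto = require('crypto')

function fingerprint (dir) {
  const hash = crypto.createHash('sha256')
  // Sort so the fingerprint is stable across directory-listing order.
  for (const entry of fs.readdirSync(dir, { recursive: true }).sort()) {
    const stat = fs.statSync(path.join(dir, entry))
    if (stat.isFile()) hash.update(`${entry}:${stat.size}:${stat.mtimeMs}`)
  }
  return hash.digest('hex')
}

const current = ['metadata-authors', 'metadata-items'].map(fingerprint).join('|')
const stateFile = 'last-image-backup.fingerprint'
const previous = fs.existsSync(stateFile) ? fs.readFileSync(stateFile, 'utf8') : ''

if (current === previous) {
  console.log('Images unchanged — skipping backup')
} else {
  // ...create the image backup here, then record the new fingerprint...
  fs.writeFileSync(stateFile, current)
}
```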
The reason I mention this is because of how I back up my server. As I am running under a Docker container, I have a mapping from /metadata/backups within the container to the /mnt/pool/backup/Audio BookShelf folder on my server. In turn, I have a nightly job that backs up the on-site /mnt/pool/backup folder (which also includes computer backups) to an offsite backup at Backblaze. A daily backup looks like this online, with two days' retention of deleted files:
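For reference, the container mapping described above is an ordinary bind mount. In docker-compose form it would look roughly like this (all other settings omitted; this is a sketch, not a full compose file):

```yaml
services:
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf
    volumes:
      # Host backup folder mapped over the container's backup path
      - "/mnt/pool/backup/Audio BookShelf:/metadata/backups"
```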
Future Implementation (Screenshot)
For backups:

For restores:

Audiobookshelf Server Version
v2.13.4
Current Implementation (Screenshot)
@ZLoth commented on GitHub (Oct 8, 2024):
Very rough versions of the backup screens uploaded.
@ZLoth commented on GitHub (Oct 9, 2024):
One possibility that was raised in the Discord channel by @nichwall is to store the metadata.json and images along with the books (items), and have just the database be backed up. I'm completely open to that idea, as I back up my books on a regular basis to a large hard drive. Here are my options:
Also, storing the book's metadata as a metadata.json file will help with moving books between libraries, especially if the scanner order is like this:
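As a rough illustration of the idea raised above, a metadata.json stored alongside an item might carry fields along these lines (the field names here are illustrative only, not audiobookshelf's actual schema):

```json
{
  "title": "Example Title",
  "authors": ["Example Author"],
  "narrators": ["Example Narrator"],
  "series": [],
  "genres": ["Science Fiction"],
  "publishedYear": "2020",
  "description": "Short description of the book."
}
```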