mirror of
https://github.com/RandomNinjaAtk/arr-scripts.git
synced 2026-03-02 22:57:35 -05:00
[FEATURE] - Lidarr - Smarter Handling of Missing Albums — Randomization, Rotation, and Retry Backoff #204
Originally created by @funkbox on GitHub (May 13, 2025).
Is your feature request related to a problem? Please describe.
Yes. When running Audio.service.bash against a large Lidarr library with many missing albums (e.g., 60,000+), the script uses the /wanted/missing endpoint, which returns only the most recent 1000 missing releases in descending order.
This causes several issues over time:
• The script loops the same albums repeatedly, especially failed ones
• Older missing albums are never reached, since they fall outside the 1000-item limit
• Albums that can’t be matched/downloaded are retried every cycle, wasting API calls and time
This makes it difficult to backfill long-missing releases or move past stuck entries.
Describe the solution you'd like
I’d like to see the script support a smarter, more dynamic approach to processing missing albums, including:
Randomization of the /wanted/missing list
Support randomizing the album order before processing.
Example: pipe the JSON list through shuf (e.g., jq -c '.[]' | shuf) before looping.
A config flag like shuffleWantedList=true could control this.
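A minimal sketch of what that could look like. `shuffleWantedList` is the hypothetical flag proposed above, and `get_wanted_missing` is a stub standing in for the script's real `/wanted/missing` API call (shown in a comment); neither name exists in the current script.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: optionally shuffle the wanted list before processing.

shuffleWantedList=true   # proposed config flag (illustrative name)

get_wanted_missing() {
  # In the real script this would be something like:
  #   curl -s "$arrUrl/api/v1/wanted/missing?pageSize=1000&apikey=$arrApiKey" | jq '.records'
  # A stub list keeps this sketch self-contained.
  echo '[{"id":1,"title":"Album A"},{"id":2,"title":"Album B"},{"id":3,"title":"Album C"}]'
}

if [ "$shuffleWantedList" = "true" ]; then
  # One compact JSON object per line, in randomized order
  list=$(get_wanted_missing | jq -c '.[]' | shuf)
else
  list=$(get_wanted_missing | jq -c '.[]')
fi

echo "$list"
```

Because each album is emitted as a single `jq -c` line, `shuf` can randomize the order without breaking the JSON, and the downstream processing loop stays unchanged.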
Alternating modes: recent + backfill
Alternate between processing recent releases and older catalog entries.
This could involve:
• Pulling page 1 of /wanted/missing for recents
• Pulling from /artist or additional pages (e.g., page=2–5)
• Merging and shuffling the results
This would ensure both current and long-missing albums are addressed over time.
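The merge-and-shuffle step above could be sketched as follows. `fetch_missing_page` is a stub for a paged `/wanted/missing` call (the real request is shown in a comment); page numbers and counts are illustrative.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: combine page 1 (recents) with deeper backfill pages.

fetch_missing_page() {
  # Stand-in for something like:
  #   curl -s "$arrUrl/api/v1/wanted/missing?page=$1&pageSize=1000&apikey=$arrApiKey" | jq '.records'
  case "$1" in
    1) echo '[{"id":1},{"id":2}]' ;;          # most recent releases
    *) echo "[{\"id\":$(( $1 * 10 ))}]" ;;    # older catalog entries
  esac
}

# Pull the recent page plus a couple of backfill pages, then flatten
merged=$(
  { fetch_missing_page 1
    for page in 2 3; do fetch_missing_page "$page"; done
  } | jq -s 'add'
)

# Shuffle so recent and long-missing albums interleave in one run
echo "$merged" | jq -c '.[]' | shuf
```

`jq -s 'add'` concatenates the per-page arrays into one list, so the same randomize-then-loop logic from the shuffle proposal applies to the merged result.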
Retry backoff for failed albums
Add logic to track albums that fail to download (e.g., not found on TIDAL, Deezer, etc.).
After X failed attempts, pause retrying that album for a set duration (e.g., 7 days).
This could use a simple JSON file to track failures by timestamp.
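A JSON-file-based backoff could look roughly like this. The file path, field names, and thresholds are all illustrative, not part of the existing script.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: per-album retry backoff tracked in a JSON state file.

failureDb="${failureDb:-/tmp/arr-failed-albums.json}"
maxAttempts=3                         # pause retries after this many failures
backoffSeconds=$((7 * 24 * 3600))     # e.g., 7 days

[ -f "$failureDb" ] || echo '{}' > "$failureDb"

record_failure() {
  local albumId="$1" now
  now=$(date +%s)
  # Increment the failure count and stamp the last-failure time
  jq --arg id "$albumId" --argjson now "$now" \
     '.[$id] = {count: ((.[$id].count // 0) + 1), last: $now}' \
     "$failureDb" > "$failureDb.tmp" && mv "$failureDb.tmp" "$failureDb"
}

should_skip() {
  # Exit 0 (skip) when the album has hit maxAttempts and the last
  # failure is still inside the backoff window.
  local albumId="$1" now
  now=$(date +%s)
  jq -e --arg id "$albumId" --argjson now "$now" \
     --argjson max "$maxAttempts" --argjson backoff "$backoffSeconds" \
     '(.[$id].count // 0) >= $max and ($now - (.[$id].last // 0)) < $backoff' \
     "$failureDb" > /dev/null
}
```

The processing loop would then call `should_skip "$albumId" && continue` before searching, and `record_failure "$albumId"` after a failed match, so stuck entries stop consuming API calls until the window expires.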
Describe alternatives you've considered
Some things I've tried:
• Manually shuffling the wanted list using shuf
• Filtering out known-failed albums via custom wrappers
• Paging /wanted/missing manually with custom scripts
These approaches work but require external tooling and don’t solve the problem natively within the automation.
Additional context
These changes would:
• Improve long-term efficiency for large-scale libraries
• Prevent unnecessary repeat processing of failed entries
• Increase coverage of missing releases that fall outside the most recent 1000
• Improve behavior in unattended or always-on setups
Thanks again for this incredible script — it’s become a cornerstone of my music automation stack. These enhancements would make it even more powerful and scalable.