mirror of
https://github.com/RandomNinjaAtk/arr-scripts.git
synced 2026-03-02 22:57:35 -05:00
[BUG] - Lidarr - memory overflow? #158
Originally created by @utilitybee on GitHub (Jun 8, 2024).
Originally assigned to: @RandomNinjaAtk on GitHub.
Application: Lidarr
Host platform: Unraid
Script: I assume it is called custom-svc-Audio
Script Version: most current, since I have restarted my Dockers 50 times in the last 2 weeks, haha
Describe the bug
After weeks of trying to track down an OOM error, I was able to spot it in htop.
It will suck up all available RAM in less than a minute. While trying to figure out the OOM error, I installed the swap plugin and gave it a 200 GB swap just so I had time to see what was happening; it used all of that too.
To Reproduce
Steps to reproduce the behavior:
No idea; it is random. Sometimes it happens 6 times in an hour, sometimes every other day.
Logs/Screenshots
Screenshots were taken at 10:37 AM.
Audio-2024_06_07_10_11_AM.txt
@utilitybee commented on GitHub (Jun 8, 2024):
I will also provide system diags:
vulcan-diagnostics-20240608-0657.zip
@RandomNinjaAtk commented on GitHub (Jun 8, 2024):
You can set a memory limit on the container itself; give that a try to see if it helps. I don't think I've done anything in the scripts to specifically cause an issue. I've seen out-of-memory errors on my own server from time to time but never pinned down a cause...
I have no control over the underlying apps/software that are used for certain things. If you have any ideas of what could specifically cause it, I'm all ears...
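For reference, a container-level memory cap can be set when (re)creating the container. This is a minimal sketch assuming a plain Docker CLI setup; the container name, the 4 GB cap, the volume path, and the image name are all illustrative, not taken from the reporter's actual configuration:

```shell
# Recreate the Lidarr container with a hard memory cap (illustrative values).
# --memory sets the RAM ceiling; setting --memory-swap to the same value
# disables extra swap, so a runaway process gets OOM-killed inside the
# container instead of exhausting the host's memory and swap.
docker run -d \
  --name lidarr \
  --memory=4g \
  --memory-swap=4g \
  -p 8686:8686 \
  -v /path/to/config:/config \
  lscr.io/linuxserver/lidarr
```

On Unraid, the same `--memory=4g --memory-swap=4g` flags can be added to the container's "Extra Parameters" field in the Docker template instead of running the command by hand.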
@utilitybee commented on GitHub (Jun 8, 2024):
I did my best to find out as much info as I could before I opened this ticket; I did not want to waste anyone's time. Any info is appreciated. If this is just Lidarr, then my mistake :(. I will try to catch it again and see if I can narrow it down, since I now have a place to look. Spending hours staring at htop, the only thing I noticed is that custom-svc-Audio will spool up something like 30 PIDs of ffmpeg, and I have seen it go as high as 4 GB before it finished. According to the tree view, it is part of the Lidarr Docker.
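A lightweight alternative to staring at htop is polling the ffmpeg worker count from a script. This is a sketch of a hypothetical monitoring helper (the 20-worker threshold is an arbitrary example, not a documented limit):

```shell
#!/bin/sh
# Count ffmpeg processes currently running (hypothetical monitoring helper).
# pgrep -c prints the number of matching processes; it exits non-zero when
# the count is 0, so swallow that with || true to keep the script going.
count=$(pgrep -c ffmpeg || true)
echo "ffmpeg processes: ${count}"

# Warn if the worker count looks runaway (threshold is an arbitrary example).
if [ "${count}" -gt 20 ]; then
    echo "WARNING: ${count} ffmpeg workers running" >&2
fi
```

Run under `watch -n 5` (or inside the container via `docker exec`) this gives a timestamped view of when the spool-up starts, which is easier to correlate with the script logs than a screen recording.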
@utilitybee commented on GitHub (Jun 8, 2024):
So I got a video clip of it happening again. It looks like an API call gone bad? I don't know.
https://youtu.be/qjZ8-4ujTYA
I got to watch it happen again, the same way, but this time I have the logs side by side:
https://youtu.be/shjZ0Pkip8Q
If this has nothing to do with your scripts, I apologize, haha; I don't want to bother anyone. If there is something else I can do to help narrow it down, let me know. Or if this is such a small use case that it is not worth looking into, I also understand, lol. Either way, I love the scripts, and I just want to say thanks.
@RandomNinjaAtk commented on GitHub (Jun 9, 2024):
Thanks, this is helpful. When I get a chance to review, I'll let you know what I find.
Edit: just from a quick review of the second video, I think what you suspect is right: a bad API call. When I have some free cycles later today, I'll see if I can track it down.
What it looks like is happening is that the script is supposed to be gathering album info from the API, and it should only be grabbing a single album at a time, but it appears to possibly be grabbing every album in Lidarr, which would be a lot of data, and I can see why that would be an issue.
Something to track down for sure; I'll see what I can do. In the meantime you may just want to limit the container's overall memory by giving it a cap.
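To illustrate the difference being described: in Lidarr's v1 REST API, requesting a specific album id returns one album, while omitting the id returns the whole library. The sketch below only builds and prints the two request URLs for comparison; the host, API key, and album id are placeholders, and the exact endpoint shape should be checked against the script's actual code:

```shell
#!/bin/sh
# Contrast a single-album fetch with an every-album fetch (placeholder values).
LIDARR_URL="http://localhost:8686"
API_KEY="your-api-key-here"   # hypothetical placeholder
ALBUM_ID=123                  # hypothetical album id

# Intended behavior: fetch one album's info at a time.
SINGLE="${LIDARR_URL}/api/v1/album/${ALBUM_ID}?apikey=${API_KEY}"

# Suspected buggy behavior: with no id, the endpoint returns EVERY album in
# Lidarr, a huge JSON payload on a large library, which the script then
# holds in memory on each loop iteration.
ALL="${LIDARR_URL}/api/v1/album?apikey=${API_KEY}"

echo "single album: ${SINGLE}"
echo "all albums:   ${ALL}"
# e.g. curl -s "${SINGLE}" | jq '.title'
```

If the looping code accidentally calls the second form per album, memory use grows with (library size × loop iterations), which would match the behavior in the videos.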
@utilitybee commented on GitHub (Jun 9, 2024):
I have set the limit and thank you for the update!
@RandomNinjaAtk commented on GitHub (Aug 5, 2024):
@utilitybee Give the latest update a try.... I think it might be fixed...?
@utilitybee commented on GitHub (Aug 5, 2024):
Just updated and removed the memory limit. I will give it a few days then update this thread. <3
@utilitybee commented on GitHub (Aug 7, 2024):
I have not had any OOM errors. I also notice a lot less cached memory, but could that be a coincidence? I don't think so, because I did not change anything, haha. Thank you for taking the time!!!
@RandomNinjaAtk commented on GitHub (Aug 7, 2024):
No problem, thanks for identifying the problem! Teamwork makes the dream work