Handling Stalled Downloads (qBittorrent) #5124

Open
opened 2026-02-19 22:53:31 -05:00 by deekerman · 69 comments

Originally created by @screwyluie on GitHub (Nov 25, 2020).

**Is your feature request related to a problem? Please describe.**
The problem is stalled downloads sitting in the queue forever with no resolution.

**Describe the solution you'd like**
There should be some user settings to deal with skipping stalled downloads.
Something like X amount of time stalled, and 'should this be blacklisted' I think would suffice.
That way if it's a legit reason (internet is down or whatever) and you choose to not blacklist the downloads it'll loop back and try it again eventually. So now you're more likely to get the files you want, and even if there's a legit reason for the issue that gets resolved later they will still continue to queue/search/download.

**Describe alternatives you've considered**
The alternative is regular manual policing of the queue to deal with them. This is made more annoying because if you delete the download from the DL client, Radarr doesn't see it as failed (which would trigger a new search); it just gives up and lists it as missing. So you also have to tell it to search again.

All of this adds up to the antithesis of automation, imo, and any solution would be better than no solution.
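For illustration only, the settings proposed above (a stall timeout plus an optional blocklist toggle) might boil down to a decision rule like this — the setting and function names here are invented for the sketch, not anything Radarr implements:

```python
from dataclasses import dataclass

@dataclass
class StalledDownloadSettings:
    # Hypothetical settings mirroring the request; both names are invented here.
    max_stalled_minutes: int = 60       # "X amount of time stalled"
    blocklist_on_removal: bool = False  # "should this be blacklisted"

def decide(stalled_minutes: int, cfg: StalledDownloadSettings) -> str:
    """Return what a queue handler would do with a download stalled this long."""
    if stalled_minutes < cfg.max_stalled_minutes:
        return "keep"                   # still within the grace period
    if cfg.blocklist_on_removal:
        return "remove-and-blocklist"   # never grab this release again
    return "remove-and-retry"           # re-search; the release may work later
```

With the blocklist toggle off, a stalled release gets removed and re-searched, so it can "loop back and try again eventually" exactly as described above.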

[AB#753](https://dev.azure.com/Servarr/7ab38f4e-5a57-4d70-84f4-94dd9bc5d6df/_workitems/edit/753)

@MikeFalcor commented on GitHub (Dec 26, 2021):

I agree with this wholeheartedly.

I could go on this whole diatribe about how Radarr was initially created to INCREASE automation and REDUCE the monotony of having to touch a large number of downloads, but come on.

On a scale of 1 to 10 - 1 being a feature request and 10 being a simple fact that it should have been implemented on day one? This is a 17.

I write scripts for just about everything that software devs couldn't seem to figure out on their own, so I will figure out a script for handling this as well. Obviously, my time would be wasted waiting for a key automation feature such as automatic error and stall handling to be built into a usenet/torrent automation application.

THIS IS OVER A YEAR OLD!!!!


@austinwbest commented on GitHub (Dec 26, 2021):

> I agree with this wholeheartedly.
>
> I could go on this whole diatribe about how Radarr was initially created to INCREASE automation and REDUCE the monotony of having to touch a large number of downloads, but come on.
>
> On a scale of 1 to 10 - 1 being a feature request and 10 being a simple fact that it should have been implemented on day one? This is a 17.
>
> I write scripts for just about everything that software devs couldn't seem to figure out on their own, so I will figure out a script for handling this as well. Obviously, my time would be wasted waiting for a key automation feature such as automatic error and stall handling to be built into a usenet/torrent automation application.
>
> THIS IS OVER A YEAR OLD!!!!

You should consider a PR and contribute back to the community. Shouldn't be that difficult for someone who obviously writes so many scripts to do things we can't figure out lol

Comparing a time field and calling an existing function to remove it from the client and calling another existing function to blacklist it does sound pretty intensive to do. Oh wait, maybe it is a priority thing and it isn't high enough to the people who freely give their time to build and maintain this so others can use it for free. Nah, couldn't be that.

All the whining and talking shit and all caps doesn't make any difference. If it is that important to you, PR it. Otherwise it may get done, it may not. It is open source free software, that is the nature of it.


@MikeFalcor commented on GitHub (Dec 26, 2021):

> > I agree with this wholeheartedly.
> >
> > I could go on this whole diatribe about how Radarr was initially created to INCREASE automation and REDUCE the monotony of having to touch a large number of downloads, but come on.
> >
> > On a scale of 1 to 10 - 1 being a feature request and 10 being a simple fact that it should have been implemented on day one? This is a 17.
> >
> > I write scripts for just about everything that software devs couldn't seem to figure out on their own, so I will figure out a script for handling this as well. Obviously, my time would be wasted waiting for a key automation feature such as automatic error and stall handling to be built into a usenet/torrent automation application.
> >
> > THIS IS OVER A YEAR OLD!!!!
>
> You should consider a PR and contribute back to the community. Shouldn't be that difficult for someone who obviously writes so many scripts to do things we can't figure out lol
>
> Comparing a time field and calling an existing function to remove it from the client and calling another existing function to blacklist it does sound pretty intensive to do. Oh wait, maybe it is a priority thing and it isn't high enough to the people who freely give their time to build and maintain this so others can use it for free. Nah, couldn't be that.
>
> All the whining and talking shit and all caps doesn't make any difference. If it is that important to you, PR it. Otherwise it may get done, it may not. It is open source free software, that is the nature of it.

"Radarr makes failed downloads a thing of the past. Password protected releases, missing repair blocks or virtually any other reason? no worries. Radarr will automatically blacklist the release and tries another one until it finds one that works."

I'm not the one that typed that out and used it as a way to gain user-base.

Case in point: I wrote an application that automates new user creation for one of my client locations in downtown Seattle. It is used every single time a new user is needed on the network and pre-fills information based on tech input. It doesn't automatically add group membership for a new user. So why the hell would I tell the techs that use it that it does?!

A stalled download over an extended period of time (say a month or two months) is a failed download. And that does not get handled automatically. THAT is a problem that is advertised as being solved and this 'Feature Request' is low priority? C'mon...

I do appreciate the Radarr team and the devs, but this is a huge oversight and the fact that you just. can't. own it. Wow.

Side note: I do love it when devs get called out on something and they immediately jump to the defensive. I've seen it countless times in the 25+ years of my IT career. It will continue, for sure.


@bakerboy448 commented on GitHub (Dec 26, 2021):

Further off-topic comments not relevant to implementing, discussing the details, or actively working on this feature are not helpful.

This feature is extremely low priority and also problematic to detect.

Usenet - as you quoted - has failed download statuses that can easily be identified.

There is no such thing as a failed torrent.

There are also concerns with implementing this around hit and runs.

Ultimately a torrent is typically stalled because either the tracker lied to Radarr about the number of seeds, or the user's (or seeders') systems are configured in a manner that does not allow connections.

For all public trackers it is typically the former and they inflate their seed counts by factors of 10, 50 or 100 if not more.

Not to mention this FR has been open for over a year and has only +7, so it clearly is not a priority to the hundreds of thousands - no exaggeration - of users.


@bakerboy448 commented on GitHub (Dec 26, 2021):

> The alternative is regular manual policing of the queue to manually deal with them. This is made more annoying because if you delete the download from the DL client, radarr doesn't see it as failed (which would trigger a new search) it just gives up and lists it as missing. So you have to also tell it search again.

This is easily worked around simply by removing - and blocklisting - the download in Radarr.

No additional manual effort needed.

In theory it can also be automated via a cron job and a script that monitors Radarr's queue via the API and then, if needed, eventually removes and blocklists the download via the API.

No near term plans for this due to various other priorities and the inherent fact that there is no "failed" torrent status, and each individual's definition of "failed" or "stalled" may vary. There is also the inherent risk of hit and runs and users easily getting themselves banned from their trackers.

Additionally - to quote from @austinwbest -

> Stalled doesn't always mean failed. It means a site claims they have more seeders than they do. It means a low seed count and the few people are not online. Errors mean failed.
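The cron-plus-script workaround mentioned above could be sketched roughly like this. This is an illustration under assumptions, not a supported tool: it assumes Radarr's v3 queue API (`GET /api/v3/queue`, and `DELETE /api/v3/queue/{id}` with `removeFromClient` and `blocklist` parameters); `RADARR_URL`, `API_KEY`, the state file, and the stall heuristic (remaining size unchanged between runs) are all placeholders to adapt:

```python
import json
import urllib.request

RADARR_URL = "http://localhost:7878"        # placeholder: your Radarr instance
API_KEY = "changeme"                        # placeholder: Settings > General > API Key
STATE_FILE = "/tmp/radarr_queue_state.json" # remembers sizeleft between cron runs

def find_stalled(previous, current):
    """Return queue item ids whose remaining size has not shrunk since the last run."""
    stalled = []
    for item in current:
        before = previous.get(str(item["id"]))
        if before is not None and item["sizeleft"] > 0 and item["sizeleft"] >= before:
            stalled.append(item["id"])
    return stalled

def api(path, method="GET"):
    req = urllib.request.Request(f"{RADARR_URL}/api/v3{path}",
                                 headers={"X-Api-Key": API_KEY}, method=method)
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        return json.loads(body) if body else None

def run_once():
    # The queue endpoint returns a paged object with a "records" list.
    records = api("/queue?pageSize=200")["records"]
    try:
        with open(STATE_FILE) as fh:
            previous = json.load(fh)
    except FileNotFoundError:
        previous = {}
    for qid in find_stalled(previous, records):
        # Removing with blocklist=true should make Radarr treat the release as
        # failed, blocklist it, and search for a replacement.
        api(f"/queue/{qid}?removeFromClient=true&blocklist=true", method="DELETE")
    with open(STATE_FILE, "w") as fh:
        json.dump({str(r["id"]): r["sizeleft"] for r in records}, fh)

# Call run_once() from cron (e.g. hourly); a single run only takes a snapshot.
```

Note the hit-and-run caveat above still applies: on private trackers, automatically removing torrents like this can violate seeding rules, so any such script is run at the user's own risk.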


@austinwbest commented on GitHub (Dec 26, 2021):

> > > I agree with this wholeheartedly.
> > >
> > > I could go on this whole diatribe about how Radarr was initially created to INCREASE automation and REDUCE the monotony of having to touch a large number of downloads, but come on.
> > >
> > > On a scale of 1 to 10 - 1 being a feature request and 10 being a simple fact that it should have been implemented on day one? This is a 17.
> > >
> > > I write scripts for just about everything that software devs couldn't seem to figure out on their own, so I will figure out a script for handling this as well. Obviously, my time would be wasted waiting for a key automation feature such as automatic error and stall handling to be built into a usenet/torrent automation application.
> > >
> > > THIS IS OVER A YEAR OLD!!!!
> >
> > You should consider a PR and contribute back to the community. Shouldn't be that difficult for someone who obviously writes so many scripts to do things we can't figure out lol
> >
> > Comparing a time field and calling an existing function to remove it from the client and calling another existing function to blacklist it does sound pretty intensive to do. Oh wait, maybe it is a priority thing and it isn't high enough to the people who freely give their time to build and maintain this so others can use it for free. Nah, couldn't be that.
> >
> > All the whining and talking shit and all caps doesn't make any difference. If it is that important to you, PR it. Otherwise it may get done, it may not. It is open source free software, that is the nature of it.
>
> "Radarr makes failed downloads a thing of the past. Password protected releases, missing repair blocks or virtually any other reason? no worries. Radarr will automatically blacklist the release and tries another one until it finds one that works."
>
> I'm not the one that typed that out and used it as a way to gain user-base.
>
> Case in point: I wrote an application that automates new user creation for one of my client locations in downtown Seattle. It is used every single time a new user is needed on the network and pre-fills information based on tech input. It doesn't automatically add group membership for a new user. So why the hell would I tell the techs that use it that it does?!
>
> A stalled download over an extended period of time (say a month or two months) is a failed download. And that does not get handled automatically. THAT is a problem that is advertised as being solved and this 'Feature Request' is low priority? C'mon...
>
> I do appreciate the Radarr team and the devs, but this is a huge oversight and the fact that you just. can't. own it. Wow.
>
> Side note: I do love it when devs get called out on something and they immediately jump to the defensive. I've seen it countless times in the 25+ years of my IT career. It will continue, for sure.

Stalled doesn't always mean failed. It means a site claims they have more seeders than they do. It means a low seed count and the few people are not online. Errors mean failed. Your incorrect interpretation doesn't make it fact, bud.

Again, your opinion is fine, and if it is that critical to you then PR it. As you didn't even touch on that in your second rant and did what most everyone else does, I won't expect much more than another wall of words.

You said your piece and you got a response, so at this point I think it's fair to stop spamming this if it isn't related to actually doing it.

For the record, I didn't even notice this until I got an email with your response. It isn't a bad idea to implement in some fashion.

Enjoy your day.


@screwyluie commented on GitHub (Dec 26, 2021):

> this is easily worked around simply by removing - and blocklisting - the download in Radarr.
>
> no additional manual effort needed.
>
> in theory it can also be automated via a Cronjob and a script to monitor Radarr's queue via the api and then if needed eventually remove and blocklist the download via the api.

No extra effort should be needed in the first place. You missed the whole point. Just because some other solution works doesn't negate the issue. As a self-contained piece of software, not considering anyone else's software, Radarr is failing its main task: the automation of downloads. Queueing up a movie and then having it NEVER download because the torrent has stalled is the exact opposite of its purpose. Can you not understand that?

Ok it's hard, ok it's not getting done soon, whatever, that's fine... but "we don't need to do that because of X, Y, Z" is BS. The implementation of options for the user to address issues that prevent downloads from finishing is straightforward and obvious. The software/devs don't need to know whether "stalled doesn't always mean failed" - just let the user decide; give them the tools they need to manage their own downloads.

There's no explaining this away.

> No near term plans for this due to various other priorities and the inherent fact that there is no "failed" torrent status and each individual's definition of "failed" or "stalled" may vary. There is also the inherent risk of Hit and Runs and users easily getting themselves banned from their trackers.
>
> additionally - to quote from @austinwbest -

All of this can be addressed with the options I mentioned. These are non-issues; if people get themselves banned from trackers, that's their problem for not understanding what they're doing. Hell, write up a little warning about the settings... how hard is that? The rest of us want our queue to actually download, not sit there until eternity because "after a year, you never know, it might still download eventually"... come on, get real.

Give us options, please.


@bakerboy448 commented on GitHub (Dec 26, 2021):

> Give us options, please.

needless to say this is still open

anyway

The main development team is split across 4 apps: Lidarr, Prowlarr, Radarr, Readarr.
There are only ~3-4 active developers who work on this, and of those 3-4, exactly zero work on this full time. Exactly zero of the members involved (developers / support) get paid to do this. Every one of us has a full-time real job. Every one of us has kids and family. Everyone involved works on more than just this project.

As you can imagine, between the four projects (in addition to the various backends and other modules), the to-do list is long - upwards of a thousand GitHub issues alone - and time short.

The fastest way to get this implemented would be to learn .NET, or find someone who knows .NET and is passionate about this issue, and work with the team to get a PR up and landed.


@screwyluie commented on GitHub (Dec 26, 2021):

> > Give us options, please.
>
> needless to say this is still open
>
> anyway
>
> The main development team is split across 4 apps: Lidarr, Prowlarr, Radarr, Readarr. There are only ~3-4 active developers who work on this, and of those 3-4, exactly zero work on this full time. Exactly zero of the members involved (developers / support) get paid to do this. Every one of us has a full-time real job. Every one of us has kids and family. Everyone involved works on more than just this project.

And this is fine, beggars can't be choosers, and if wait/busy is the response then so be it. 100% ok with that.

I'm fully aware of the dev situation, but that's not going to stop me from making feature requests, because that's how this works.


@austinwbest commented on GitHub (Dec 26, 2021):

this is easily worked around simply by removing - and blocklisting - the download in Radarr.

no additional manual effort needed.

in theory it can also be automated via a cron job and a script that monitors Radarr's queue via the API and then, if needed, eventually removes and blocklists the download via the API.

no extra effort should be needed in the first place. You missed the whole point. Just because some other solution works doesn't negate the issue. As a self-contained piece of software, not considering anyone else's software, Radarr is failing at its main task: the automation of downloads. Queueing up a movie and then having it NEVER download because the torrent has stalled is the exact opposite of its purpose. Can you not understand that?

ok it's hard, ok it's not getting done soon, whatever, that's fine... but "we don't need to do that because of X, Y, Z" is BS. The implementation of options for the user to address issues that prevent downloads from finishing is straightforward and obvious. The software/devs don't need to know that "stalled doesn't always mean failed"; just let the user decide, and give them the tools they need to manage their own downloads.

There's no explaining this away.

No near-term plans for this, due to various other priorities and the inherent fact that there is no "failed" torrent status, and each individual's definition of "failed" or "stalled" may vary. There is also the inherent risk of hit and runs and users easily getting themselves banned from their trackers.

additionally - to quote from @austinwbest -

All of this can be addressed with the options I mentioned. These are non-issues; if people get themselves banned from trackers, that's their problem for not understanding what they're doing. Hell, write up a little warning about the settings... how hard is that? The rest of us want our queue to actually download, not sit there for eternity because "after a year, you never know, it might still download eventually"... come on, get real.

Give us options, please.

My point was that Stalled does not mean error, which is what auto failed handling treats it as.

I am in agreement that if something was added it would be at the user's risk to use, but I'm not sure if others would agree and weigh the H&R risk above it.

I am speaking from the outside looking in at the problem and even said I think it is worth discussing.

That said, every f'n post being rude and smartass surely doesn't make me want to give a damn about addressing it, and clearly I am the only one to even attempt dialog about it.

I clearly said I didn't even see this until today, so how about leaving the bs at the door. If not, that's fine too. It can sit for another year without anyone considering it, as it's a non-issue for me; I get stuck-queue notifications that alert me to the issue.

Personally, I'm done with the back and forth of it all, as it isn't productive at all. Y'all can continue to carry on the same way if you choose to, or can chill while we hash out the details of it and see if it is something that would be added.
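The cron-job-plus-script workaround mentioned above can be sketched roughly as follows. This is a hedged illustration, not an official tool: it assumes Radarr v3's `GET /api/v3/queue` endpoint and the `removeFromClient`/`blocklist` query parameters on `DELETE /api/v3/queue/{id}`, and the substring match on `errorMessage` is a heuristic for spotting stalled items, since there is no dedicated "stalled" status. `RADARR_URL` and `API_KEY` are placeholders.

```python
# Cron-style sketch: remove and blocklist queue items whose error
# message suggests a stall. Heuristic only; adjust to taste.
import json
import urllib.request

RADARR_URL = "http://localhost:7878"   # placeholder
API_KEY = "your-api-key-here"          # placeholder

def api(method: str, path: str):
    """Call the Radarr v3 API and return the decoded JSON body (or None)."""
    req = urllib.request.Request(f"{RADARR_URL}/api/v3{path}", method=method,
                                 headers={"X-Api-Key": API_KEY})
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        return json.loads(body) if body else None

def find_stalled(records: list) -> list:
    """Pick queue records whose error message mentions a stall."""
    return [r for r in records
            if "stalled" in (r.get("errorMessage") or "").lower()]

if __name__ == "__main__":
    queue = api("GET", "/queue?pageSize=1000")
    for item in find_stalled(queue.get("records", [])):
        # Remove from Radarr and the client, and blocklist the release
        # so the next automatic search grabs something else.
        api("DELETE", f"/queue/{item['id']}?removeFromClient=true&blocklist=true")
        print(f"removed stalled item {item['id']}")
```

Run it from cron (e.g. hourly); because removed items are blocklisted, Radarr's next search should pick a different release.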


@screwyluie commented on GitHub (Dec 26, 2021):

My point was that Stalled does not mean error which is what auto failed handling does.

I am in agreement that if something was added it would be at the user's risk to use, but I'm not sure if others would agree and weigh the H&R risk above it.

I am speaking from the outside looking in at the problem and even said I think it is worth discussing.

That said, every f'n post being rude and smartass surely doesn't make me want to give a damn about addressing it, and clearly I am the only one to even attempt dialog about it.

I clearly said I didn't even see this until today so how about leaving the bs at the door. If not, that's fine too. It can sit for another year without anyone considering it as it's a non issue for me, I get stuck queue notifications that alert me of the issue.

Personally, I'm done with the back and forth of it all as it isn't productive at all. Yall can continue to carry on the same way if you choose to or can chill while we hash out the details of it and see if it is something that would be added.

I didn't realize part of his quote would ping you, if that's what happened; none of that was addressed at you. I was frustrated with bakerboy explaining away the issue and trying to gatekeep the feature requests. Nothing is more maddening than asking for help and having someone else tell you that your problem doesn't matter because they don't think so.

My apologies to you or anyone else who thinks any of my frustration is pointed at them. I can't speak for the other guy, but as the OP it just gets hard to deal with anything on GitHub, this one or others, when every time you ask for help or make a suggestion there's always someone like bakerboy there ready to tell you why your problem doesn't matter and how they know better.

You're right, this back and forth is not productive. Apologies.


@dacabdi commented on GitHub (Jan 10, 2022):

Hey @screwyluie, I will take a stab at implementing this. Can you help me with some investigations?

  1. Do all clients support stalled-state reporting explicitly (qBittorrent does; it's what I use), or do we need to follow some heuristics based on time, download throughput, etc.? (The second option might complicate things because we might need to keep more state on our side.)
  2. Would you say it is better to make it a general setting or a per-download-client setting? (It might help to avoid hit & runs by using two or more clients split across private/public trackers; that way you decide which one to apply the stalled-handling policy to.)

@bakerboy448 what's the culture here around PR reviews and gatekeeping? I will read the contributions wiki. Probably will look at it next weekend.


@bakerboy448 commented on GitHub (Jan 10, 2022):

For something like this, and given the sensitivity around removing stalled downloads - e.g. trying to avoid thousands of users saying "Radarr got me banned from my tracker for HnR" - you should likely swing by Discord and talk through and vet the idea/plan first with the team.


@dacabdi commented on GitHub (Jan 10, 2022):

Sounds reasonable @bakerboy448; please see my edits above regarding a couple of broad implementation questions, precisely aimed at avoiding your concern. To put it another way, I want to put it behind a feature flag and allow per-client management.


@dacabdi commented on GitHub (Jan 10, 2022):

Also, the reason I'm piggybacking on this feature request:

  1. I have a TrueNAS box with spinning-disk zpools, but those can't handle the IO of 5 torrent clients writing/reading concurrently. Also, the disks (my bad, lack of experience) are SMR and not suitable for a lot of randomized writing, especially under ZFS, because writes that require relocating entire shingled tracks make pool members appear unavailable. That's not an issue if data is written once and read a lot, as in media archival.
  2. To avoid this, I have a NAS-grade Seagate IronWolf NVMe drive that takes all the torrent IO, and downloads are then automatically moved over to the big spinning-media pool. However, the disk is fairly small (500 GiB) and I have to limit the number of active downloads, which is painful when the few concurrent downloads I allow get stalled (I watch a lot of Cannes-festival kind of cinema; it's hard to find seeds).

So I have a use case for stalled-download management. I would claim it is not that far-fetched a scenario, especially since a lot of people follow this scheme of downloading to a temporary block device and then moving to another.


@screwyluie commented on GitHub (Jan 11, 2022):

  • Do all clients support stalled-state reporting explicitly (qBittorrent does; it's what I use), or do we need to follow some heuristics based on time, download throughput, etc.? (The second option might complicate things because we might need to keep more state on our side.)

I also use qBittorrent. I would be willing to do some testing or googling for other clients. Even if they don't support it, you could do something as simple as an option "Consider download stalled after XXXX", in hours/days. I don't see any harm, like tracker bans or similar, if you were to set it like that. Most people messing with this should have some idea of how long a download will take. I know that for me, even if it's 100 GB it'll be done within a couple of hours, so I would probably set 12 hours before a download is marked as stalled and a new one is tried. You would need options similar to those on the Activity page when you delete a download (delete files, prevent this release from being grabbed again).

  • Would you say it is better to make it a general setting or a per-download-client setting? (It might help to avoid hit & runs by using two or more clients split across private/public trackers; that way you decide which one to apply the stalled-handling policy to.)

NZBs shouldn't stall, so we're really only dealing with torrents. I don't see any reason why it would need to be per-client, but if you can think of one then I don't see why not.

I have a very similar setup to yours in function, except that instead of a local NAS I'm using cloud storage as the long-term archive, with my local drive doing the initial IO. So it's not too far-fetched a setup, agreed.


@bakerboy448 commented on GitHub (Jan 11, 2022):

Per-client - and presumably not needed for Usenet clients - would seem to be the consensus.


@austinwbest commented on GitHub (Jan 11, 2022):

I also use qBittorrent. I would be willing to do some testing or googling for other clients. Even if they don't support it, you could do something as simple as an option "Consider download stalled after XXXX", in hours/days. I don't see any harm, like tracker bans or similar, if you were to set it like that. Most people messing with this should have some idea of how long a download will take. I know that for me, even if it's 100 GB it'll be done within a couple of hours, so I would probably set 12 hours before a download is marked as stalled and a new one is tried. You would need options similar to those on the Activity page when you delete a download (delete files, prevent this release from being grabbed again).

This isn't completely accurate. There can easily be a low seed count such that it takes days or a week to complete things. Seeders can shut their systems down, so a torrent stalls until the next day or a couple of days later but does pick back up. My symmetric gigabit fiber connection can't fix that issue, so knowing how long a download takes based on size isn't relevant to it being removed too soon and becoming an H&R.

It boils down to user responsibility more than anything. The user has to be the one to enable this and then keep an eye on things on the trackers they use that enforce H&R rules. I'm sure they will blame Radarr for deleting it too soon, etc., no matter what, which is kind of why I am not really for this kind of feature in an automated sense. Again though, I would never use such a thing anyway, as I just deal with anything that gets stuck when I get a notification that it is stuck in the queue.

I highly recommend you get this kind of thing OK'd by Q before you spend any time on it. If he comes to the same conclusion as I did, it'll never be merged in.


@screwyluie commented on GitHub (Jan 11, 2022):

It boils down to user responsibility more than anything. The user has to be the one to enable this and then keep an eye on things on the trackers they use that enforce H&R rules.

This is exactly the point I was making. I suppose I didn't get my point across well enough. But the user should know what they need and be able to configure it accordingly.


@dacabdi commented on GitHub (Jan 11, 2022):

@screwyluie I started looking at the code. Tracking time (or any other dimension) requires comparing states across updates. The only level at which state is kept uses a model that projects the downloaded items' information onto a reduced set of properties common to all classes of clients. For example, seeds is a foreign concept to Usenet, and even among torrent clients, some APIs might provide metrics that others lack. Hence, for the sake of simplicity, the filtering and classification must be done in the ingestion layer with no prior state-keeping. Would you say that a configurable seeder count, given a download marked as stalled by qBittorrent, would be enough of a starting criterion? Also, please consider that I understand the devs' reticence to accept the work, so the simpler, smaller, and more easily reviewable the PRs, the better.
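The stateless, ingestion-layer classification described here can be illustrated with a small predicate (Python for readability only; the actual codebase is C#, and every name below is hypothetical):

```python
# Stateless classification sketch: decide, from a single queue snapshot,
# whether a download should be treated as failed. No history is kept;
# everything comes from the client's current report.
from typing import Optional

def should_treat_as_failed(client_reports_stalled: bool,
                           seeders: Optional[int],
                           min_seeders: int,
                           feature_enabled: bool) -> bool:
    """Return True if the item should be removed/blocklisted.

    seeders is None for clients (e.g. Usenet) that have no such concept;
    those items are never classified as stalled here.
    """
    if not feature_enabled or seeders is None:
        return False
    return client_reports_stalled and seeders < min_seeders
```

Keeping `feature_enabled` defaulted to off per download client would match the feature-flag, per-client, opt-in idea discussed above.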


@Krandor1 commented on GitHub (Jan 11, 2022):

The problem is that many users don't. There have been a lot of threads where people say "Radarr deleted my whole library" because they checked something they shouldn't have. Users enabling something dangerous and then blaming Radarr for it is a real issue and concern.



@dacabdi commented on GitHub (Jan 11, 2022):

The problem is that many users don't. There have been a lot of threads where people say "Radarr deleted my whole library" because they checked something they shouldn't have. Users enabling something dangerous and then blaming Radarr for it is a real issue and concern.


Is that something we can mitigate by adding some kind of warning to the checkbox and the fields?


@screwyluie commented on GitHub (Jan 11, 2022):

The problem is that many users don't. There have been a lot of threads where people say "Radarr deleted my whole library" because they checked something they shouldn't have. Users enabling something dangerous and then blaming Radarr for it is a real issue and concern.

ok, but do we develop for the lowest common denominator? I mean, if warning signs aren't enough to keep people safe, then there's probably no preventing it; if you can't be bothered to read, that's on you. I don't think this should hinder a useful idea. I don't know about you guys, but stalled torrents are a daily thing... I have to check the *arrs daily to make sure things aren't just sitting in limbo waiting for a download that will never happen.

Add a warning, a brief description, etc. The worst thing that's gonna happen is that it might take a little longer to download something... I don't think those people would even realize what it was doing, so long as their downloads eventually show up, which they most likely will given that it's moving on to a different torrent.


@dacabdi commented on GitHub (Jan 11, 2022):

Ah, actually, the qBittorrent API reports a last-activity timestamp, so there's no need to track time ourselves :)
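For reference, qBittorrent's Web API (`GET /api/v2/torrents/info`) exposes both a `state` field (`stalledDL` for a stalled download) and a `last_activity` Unix timestamp, so an idle-time check needs no state on the caller's side. A minimal sketch in Python, assuming a local instance with "Bypass authentication for clients on localhost" enabled (login handling and error handling omitted):

```python
# Sketch: flag torrents that qBittorrent reports as stalled and that
# have been idle for longer than a configurable window.
import json
import time
import urllib.request

QBIT_URL = "http://localhost:8080"  # assumption: local instance, auth bypassed

def fetch_torrents(base_url: str = QBIT_URL) -> list:
    """Fetch all torrents from qBittorrent's Web API."""
    with urllib.request.urlopen(f"{base_url}/api/v2/torrents/info") as resp:
        return json.load(resp)

def stalled_too_long(torrent: dict, max_idle_secs: int, now=None) -> bool:
    """True if the client reports the torrent stalled AND it has been idle
    longer than max_idle_secs, judged purely from one API snapshot."""
    now = time.time() if now is None else now
    return (torrent.get("state") == "stalledDL"
            and now - torrent.get("last_activity", now) > max_idle_secs)

if __name__ == "__main__":
    for t in fetch_torrents():
        if stalled_too_long(t, max_idle_secs=12 * 3600):
            print(f"stalled >12h: {t['name']} ({t['hash']})")
```

Clients without an equivalent field would still need the heuristic approach discussed earlier in the thread.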


@screwyluie commented on GitHub (Jan 11, 2022):

Would you say that a configurable seeder count, given a download marked as stalled by qBittorrent, would be enough of a starting criterion? Also, please consider that I understand the devs' reticence to accept the work, so the simpler, smaller, and more easily reviewable the PRs, the better.

You can already set a minimum number of seeders, and even with a decent number of them a torrent can still stall, so I don't think correlating seed count with stalled status would help this situation. Radarr already has the date/time a download was added; that should be all you need to compare against for the most basic setup I mentioned before, where you just have an upper limit on time. If the grab is at least XX hours old and marked stale/stalled, then delete it and search again.

that's how I see it working on a most basic level. Ideally, it would be more intelligent but even something as basic as that would be very helpful to me.


@Krandor1 commented on GitHub (Jan 11, 2022):

Worse what can happen is somebody gets banned for hit and runs because “those people would not realize what it is doing”.


@screwyluie commented on GitHub (Jan 11, 2022):

> Worse what can happen is somebody gets banned for hit and runs because “those people would not realize what it is doing”.

so set a minimum time, this is not that hard to conceptualize. Do you have any idea to help or are you just here to patronize the process?

Set a minimum of an hour, I've not seen a tracker that would ban you that fast... if you know of some then suggest something else as the minimum. I'm all for putting in useful safeguards but just saying the same thing over and over again without any useful ideas to go with it is not helping.


@austinwbest commented on GitHub (Jan 11, 2022):

> > Problem is many users don’t. There have been a lot of threads where people go “radar deleted my whole library” and they checked something they shouldn’t have. It is an issue and concern of people checking something bad and blaming radar for it.
>
> ok but do we develop for the lowest denominator? I mean if warning signs aren't enough to keep people safe then there's prolly no preventing it. If you can't be bothered to read, I don't think this should hinder a useful idea though. I don't know about you guys but stalled torrents are a daily thing... I have to check *arrs daily to make sure things aren't just sitting in limbo waiting for a download that will never happen.
>
> add a warning, brief description, etc. The worst thing that's gonna happen is it might take a little longer to download something... I don't think those people would even realize what it was doing so long as their downloads eventually show up, which they most likely will given that's moving on to a different torrent.

The idea is to develop without adding more ways for radarr to be blamed for things, libraries to be deleted, users to inflict harm on their ratio, etc., where possible.


@bakerboy448 commented on GitHub (Jan 11, 2022):

> so set a minimum time, this is not that hard to conceptualize. Do you have any idea to help or are you just here to patronize the process?
>
> Set a minimum of an hour, I've not seen a tracker that would ban you that fast... if you know of some then suggest something else as the minimum. I'm all for putting in useful safeguards but just saying the same thing over and over again without any useful ideas to go with it is not helping.

This thinking will easily get you banned on almost every private tracker. Time doesn't matter; most of the time, it is just that if you've downloaded X% then you must finish your download and seed per the ratio rules.


@austinwbest commented on GitHub (Jan 11, 2022):

> > Would you say that a configurable seeders count, given a download marked as stale by qBittorrent, would be enough of a starter criteria? Also, please consider that I understand the devs reticence to accept the work, so the simpler, smaller, and more easily reviewable the PRs, the better.
>
> You can already set minimum seeds and even with a decent amount of them you can still stall a torrent. I don't think correlating the number of seeds with stale status would help this situation. It should have the date/time a download was added in radarr, that should be all you need to compare against for the most basic setup I mentioned before where you just have an upper limit on time. If the fetch is at least XXhrs old and marked stale/stalled then delete it and search again.
>
> that's how I see it working on a most basic level. Ideally, it would be more intelligent but even something as basic as that would be very helpful to me.

This needs more than just a time; that is surely not going to be the only factor in determining something being marked as stalled and removed. There are too many outside elements that can affect it, such as download speed, uploader speed, etc.


@Krandor1 commented on GitHub (Jan 11, 2022):

I have no idea how to make it work without hit and run issues. Minimum of an hour: a file has one seeder and he’s fast. In 30 minutes I get 91%. Then he goes offline for the night for 12 hours, so the torrent stalls. Your feature now removes that torrent, but many trackers will still mark it as a hit and run.

I don’t know how you work around that.


@austinwbest commented on GitHub (Jan 11, 2022):

> I have no idea how to make it work without hit and run issues. Minimum of an hour: a file has one seeder and he’s fast. In 30 minutes I get 91%. Then he goes offline for the night for 12 hours, so the torrent stalls. Your feature now removes that torrent, but many trackers will still mark it as a hit and run.
>
> I don’t know how you work around that.


Yes, it's not a straightforward answer, unfortunately. I'll stick to notifications when things are stuck and let y'all hash out ideas :)


@dacabdi commented on GitHub (Jan 11, 2022):

Ok so, I finally had some time to follow the code path. Here is the idea, I'll wait for the devs to vet it and get started.

  1. On the execution side, it seems to suffice to mark the `Status` field of a `DownloadClientItem` as `Failed` to trigger the blocklisting and elimination of the download downstream. I did follow the rabbit hole to the blocklisting service handling the event and persisting on the backend repo. I can implement a policy object, consumed by the client object, that picks whether to set the status to a `Warning` or a `Failed` based on the parametrization of the policy and the fields on the torrent data. The policy object, by having only one point of contact with the client object, can be unit tested to exhaustion in isolation. Please let me know if any of these assumptions is wrong.

  2. On the heuristics side, luckily, qBittorrent's API reports the epoch time of the last chunk up/down on the torrent. This would be the marker for activity recency without having to persist a tracking state on Radarr. Additionally, the date the torrent was added can be used to set a threshold for policy consideration. Now, if point (1) is valid, and the decision can be made so close to the client, then other improvements can be made at the policy level, because we can get more information without leaking the abstraction into the upper layer. For example, seeders count, progress, etc. **To recap**, *"mark a torrent failed and pending for processing IF never seen complete AND t0 time has passed since added AND t1 has passed since last activity AND seeds < N"*. Which is to say: we never completed this download, we have had it for a long time, it hasn't moved in a while, and nobody has it completely (availability can also be a variable, I think). These values can be configured specifically for qBittorrent, maybe expandable in the future if other clients support all the dimensions. You can recommend a better heuristic with the values here: https://github.com/qbittorrent/qBittorrent/wiki/WebUI-API-(qBittorrent-4.1)#get-torrent-list

Also, do you folks have some design doc, block diagrams, etc.? Something to get a more general context?
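The recap rule above could be expressed as a small, isolated policy object along these lines (a hypothetical sketch, not Radarr's actual code; `StallPolicy` is an invented name, and the dict keys mirror fields from qBittorrent's `/torrents/info` payload):

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class StallPolicy:
    """Thresholds for the rule: t0 = min_age_secs, t1 = min_inactive_secs, N = min_seeds."""
    min_age_secs: float       # t0: minimum time since the torrent was added
    min_inactive_secs: float  # t1: minimum time since the last up/down chunk
    min_seeds: int            # N: only fail if fewer seeds than this

    def is_stalled(self, torrent: dict, now: Optional[float] = None) -> bool:
        """torrent mimics a qBittorrent torrents/info entry with
        'completion_on' (<= 0 if never completed), 'added_on',
        'last_activity' (all epoch seconds), and 'num_complete' (seeds)."""
        now = time.time() if now is None else now
        never_completed = torrent["completion_on"] <= 0
        old_enough = now - torrent["added_on"] >= self.min_age_secs
        inactive = now - torrent["last_activity"] >= self.min_inactive_secs
        few_seeds = torrent["num_complete"] < self.min_seeds
        return never_completed and old_enough and inactive and few_seeds
```

A client adapter could then map `is_stalled(...)` to `Failed` and everything else to `Warning`, keeping the policy unit-testable in isolation as described in point (1).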


@dacabdi commented on GitHub (Jan 11, 2022):

> > > Would you say that a configurable seeders count, given a download marked as stale by qBittorrent, would be enough of a starter criteria? Also, please consider that I understand the devs reticence to accept the work, so the simpler, smaller, and more easily reviewable the PRs, the better.
> >
> > You can already set minimum seeds and even with a decent amount of them you can still stall a torrent. I don't think correlating the number of seeds with stale status would help this situation. It should have the date/time a download was added in radarr, that should be all you need to compare against for the most basic setup I mentioned before where you just have an upper limit on time. If the fetch is at least XXhrs old and marked stale/stalled then delete it and search again.
> >
> > that's how I see it working on a most basic level. Ideally, it would be more intelligent but even something as basic as that would be very helpful to me.
>
> This needs more than just a time; that is surely not going to be the only factor in determining something being marked as stalled and removed. There are too many outside elements that can affect it, such as download speed, uploader speed, etc.

By stale, I am immediately assuming no traffic up or down. My previous proposal would handle that, since time since last activity would even help prevent dropping a torrent when somebody else is leeching from us.


@dacabdi commented on GitHub (Jan 11, 2022):

> > so set a minimum time, this is not that hard to conceptualize. Do you have any idea to help or are you just here to patronize the process?
> >
> > Set a minimum of an hour, I've not seen a tracker that would ban you that fast... if you know of some then suggest something else as the minimum. I'm all for putting in useful safeguards but just saying the same thing over and over again without any useful ideas to go with it is not helping.
>
> This thinking will easily get you banned on almost every private tracker. Time doesn't matter, most of the time it is just if you've downloaded X% then you must finish your download and seed per the ratio rules

On this point, let's consider that Radarr nowadays actually does a pretty good job of handing out work to different downloader instances. This being a pretty sophisticated scenario, the user can set up multiple downloaders and apply a more stringent policy on the public ones only.


@screwyluie commented on GitHub (Jan 13, 2022):

> By stale, I am immediately assuming no traffic up or down. My previous proposal would handle that, since time since last activity would even help prevent dropping a torrent when somebody else is leeching from us.

perhaps we all have a different definition of stalled/stale and that's the real issue here. To me a stalled torrent is not doing anything.

I don't know what trackers you guys are so worried about, with such a weird specific rule that if you download X% you have to download the whole thing or it's a hit/run. I think you are the niche case here. Most trackers are either entirely open or have a simple ratio requirement, i.e. you must upload 110% of whatever you download.

but again, the complaints are not actually helpful, are they? Let's come up with ideas to solve the problems... and if I had a dollar for every time someone has said "blaming radar for deleting their library" in this thread... let's be realistic about the actual concerns here, not use some random outside incident as justification. The only real concern is hit and run... there's simply no risk to the library; in fact, this is an improvement to it, since you'll actually get what you're looking for.

To me, it seems that the 2 of you that have ridiculously sensitive trackers should have an option to not include them... maybe run a second client just for those trackers, and since this is looking to be implemented on a per-client basis you could easily exclude them. Or just not use this feature at all. Again, a simple short warning alongside the setting letting people know it could cause hit/run with private trackers, you're warned. If you're working with a tracker that sensitive you're prolly already paranoid about it so that should be more than enough to red flag it for you.

Another idea, is it possible for radarr to abandon a download? By this I mean leave the download in the client but still move on to a new search... that would be a good preventative option to go alongside the two existing ones "delete files; prevent grabbing again" add-in "abandon, but don't remove from client". That way there's no chance of a hit and run, the torrent client is in full control of the download, but radar can get on with things and try to get it from another source.


@bakerboy448 commented on GitHub (Jan 13, 2022):

> such a weird specific rule that if you download X% you have to download the whole thing or it's a hit/run? I think you are the niche case here. Most trackers are either entirely open or have a simple ratio requirement

Need to double check, but 90% sure it's the case for all of these:

  • MaM
  • BHD
  • TorrentLeech
  • BTN
  • DigitalCore

I wasn't aware these are all niche trackers. I think it's the opposite: your trackers that are strictly ratio-based, where it doesn't matter what you download and upload, are the unique niche ones.

Abandoning seems like a better concept to me, honestly.


@dacabdi commented on GitHub (Jan 17, 2022):

Any update from the devs? I have it half baked, including tests (well, actually, in the opposite order).


@Krandor1 commented on GitHub (Jan 17, 2022):

If there was an update it would be in the issue.



@dacabdi commented on GitHub (Jan 17, 2022):

If there was an update it would be in the issue.

Are you one of the devs?


@austinwbest commented on GitHub (Jan 17, 2022):

Nothing more than previously stated, it would appear, bud.


@dacabdi commented on GitHub (Jan 17, 2022):

@austinwbest pushed the PR and it is passing the validation pipeline. Please take a look when you get a chance.


@heisenberg2980 commented on GitHub (Nov 24, 2022):

@dacabdi @austinwbest What is the latest update on this? Is your PR now merged?


@bakerboy448 commented on GitHub (Nov 24, 2022):

If there is an update there will be one posted in the issue; asking for updates or similar typically does not make anything happen faster, often the opposite.

When a PR is merged that resolves this issue then this issue will be closed and have it linked.


@spacecakes commented on GitHub (Jan 8, 2023):

If there is an update there will be one posted in the issue; asking for updates or similar typically does not make anything happen faster, often the opposite.

When a PR is merged that resolves this issue then this issue will be closed and have it linked.

Well it's a 2+ year old PR. Not unreasonable to want updates, even if it means getting closed.


@rafarrw commented on GitHub (Mar 7, 2023):

If there is an update there will be one posted in the issue; asking for updates or similar typically does not make anything happen faster, often the opposite.
When a PR is merged that resolves this issue then this issue will be closed and have it linked.

Well it's a 2+ year old PR. Not unreasonable to want updates, even if it means getting closed.

+1

Honestly, this is my main issue with Radarr (and Sonarr). It's the only thing that is not automated, and I have to manually mark the download as failed to search for another.

It's not that bad, but we are seeing the solution right here, and I don't think we should waste useful code.


@phishyphun commented on GitHub (Jun 4, 2023):

+1 Would love this to be considered for future update!


@Miles-Hider commented on GitHub (Jun 26, 2023):

New to *arr but this all seems like a pretty basic functionality request. If "stale" or "stalled" is hard to define, let the user define it as they wish, to me at least it's easily defined.

To me, using free trackers, a stalled download is:

  1. A download that qBittorrent literally says is "stalled"
  2. A download that is at 0% and has been for several hours
  3. A download that regardless of its percentage is not likely to finish within 12 hours.

H&Rs aren't an issue for me and many others. An automation tool that requires continual manual input (sometimes having to blacklist a download 10x to get a working one) is the antithesis of automation.
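
The three criteria above map cleanly onto the torrent dicts returned by qBittorrent's Web API (`GET /api/v2/torrents/info`). A minimal sketch, as one possible interpretation, not anything Radarr itself implements; the field names (`state`, `progress`, `eta`, `added_on`) follow that API, and the grace-period thresholds are user-chosen assumptions:

```python
# Sketch of the three user-defined "stalled" criteria from the comment above,
# applied to qBittorrent Web API torrent dicts. Thresholds are assumptions.
import time

STALLED_STATE = "stalledDL"      # 1) qBittorrent's own stalled-download state
ZERO_PROGRESS_GRACE = 4 * 3600   # 2) hours allowed at 0% (assumed value)
MAX_ETA = 12 * 3600              # 3) "not likely to finish within 12 hours"
QBIT_ETA_INFINITY = 8640000      # qBittorrent reports this ETA when unknown

def is_stalled(torrent, now=None):
    now = time.time() if now is None else now
    # 1) qBittorrent literally says it is stalled
    if torrent["state"] == STALLED_STATE:
        return True
    # 2) still at 0% after several hours in the client
    if torrent["progress"] == 0 and now - torrent["added_on"] > ZERO_PROGRESS_GRACE:
        return True
    # 3) regardless of percentage, not likely to finish within 12 hours
    eta = torrent["eta"]
    return eta >= QBIT_ETA_INFINITY or eta > MAX_ETA
```

A cron job could fetch `/api/v2/torrents/info`, run each entry through `is_stalled`, and act on the matches.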


@dacabdi commented on GitHub (Jun 26, 2023):

I apologize for the delay (well, hehe, 2+ years, wow), day work hasn't left a lot of room for projects. In light of the amount of people wanting this feature, I will come back to it :). Most likely will work on it this upcoming weekend.


@heisenberg2980 commented on GitHub (Aug 12, 2023):

Hi @dacabdi, do you have any update on this? Looking forward to getting this functionality added to this amazing project (and who knows, maybe the same can later be added to Sonarr as well?)


@bakerboy448 commented on GitHub (Aug 16, 2023):

Heyo everyone! I was having this problem where stalled downloads in Radarr couldn't be removed automatically, and after looking through the entire first page of google, I couldn't find a fix. I went ahead and made a set of scripts that handles that automatically and it's available at https://github.com/connor-eg/radarr-purge-stalled-downloads. I don't intend to maintain this (like at all) but if anyone finds it useful then that's a vibe.

-cOwOnaviwus on Servarr Discord


@dacabdi commented on GitHub (Aug 19, 2023):

day work hasn't left a lot of room for projects. In light of the amount of people wanting this feature, I will come back to it :). Most likely will work on it this upcoming weekend.

I updated the PR but got into a new round of reviews. I don't have a lot of free time outside my working hours, but I will try to get this all the way to completion :).

@dacabdi commented on GitHub (Aug 19, 2023): > 't left a lot of room for projects. In light of the amount of people wanting this feature, I will come back to it :). Most likely will work on it this upcoming weekend. I updated the PR, but got into a new round of reviews, I don't have a lot of free time outside my working hours, but I will try to get this all the way to completion :).
Author
Owner

@ds-sebastian commented on GitHub (Sep 26, 2023):

(reaction GIF: https://tenor.com/view/opentext-mdr-gif-27635816.gif)


@connor-eg commented on GitHub (Oct 2, 2023):

Heyo, I looked at this a while ago and decided to just handle it myself. I had advertised my fix in a few places, but it only just now occurred to me to put it here. It's some Bash and Python that handles deleting stalled downloads in Radarr, and it can be automated via cron.

https://github.com/connor-eg/radarr-purge-stalled-downloads

I should mention that I don't think that my solution is limited to working with qBittorrent (it only interacts with Radarr's API), but I have only tested it with a system running qbit.

Edit: scrolling up a bit and I see @bakerboy448 mentioned my repo from the Discord post I'd made about it. Really nice to see it didn't go unnoticed then :D


@ManiMatter commented on GitHub (Oct 2, 2023):

Hi

I took a stab at it too and built a docker solution.

Maybe you find this handy too:
https://github.com/ManiMatter/decluttarr

P.S. Couldn't refrain from adding the "-arr" after somebody in the discord group said that "people add -arr to their software even if it has nothing to do with the sonarr crew to sound more cool" ;-)


@connor-eg commented on GitHub (Oct 2, 2023):

Hi

I took a stab at it too snd built a docker solution.

Maybe you find this handy too: https://github.com/ManiMatter/decluttarr

P.S. Couldn't refrain from adding the "-arr" after somebody in the discord group said that "people add -arr to their software even if it has nothing to do with the sonarr crew to sound more cool" ;-)

This looks like it beats the hell out of my solution lmao

Seriously though, good work. Keeping track of bad downloads over time so as to not accidentally delete a viable one is a way better way of handling things.


@Andystew94 commented on GitHub (Oct 2, 2023):

Hi

I took a stab at it too snd built a docker solution.

Maybe you find this handy too: https://github.com/ManiMatter/decluttarr

P.S. Couldn't refrain from adding the "-arr" after somebody in the discord group said that "people add -arr to their software even if it has nothing to do with the sonarr crew to sound more cool" ;-)

Question, does it prompt both Sonarr and Radarr to find another once the stalled download is removed?


@ManiMatter commented on GitHub (Oct 2, 2023):

Hi
I took a stab at it too snd built a docker solution.
Maybe you find this handy too: https://github.com/ManiMatter/decluttarr
P.S. Couldn't refrain from adding the "-arr" after somebody in the discord group said that "people add -arr to their software even if it has nothing to do with the sonarr crew to sound more cool" ;-)

This looks like it beats the hell out of my solution lmao

Seriously though, good work. Keeping track of bad downloads over time so as to not accidentally delete a viable one is a way better way of handling things.

Thanks very much, @connor-eg, your kind feedback means a lot to me. It's my first Python and first Git project ever, so it's very encouraging to get such feedback :)


@ManiMatter commented on GitHub (Oct 2, 2023):

Hi
I took a stab at it too snd built a docker solution.
Maybe you find this handy too: https://github.com/ManiMatter/decluttarr
P.S. Couldn't refrain from adding the "-arr" after somebody in the discord group said that "people add -arr to their software even if it has nothing to do with the sonarr crew to sound more cool" ;-)

Question, does it prompt both Sonarr and Radarr to find another once the stalled download is removed?

It will blocklist the download, and Radarr/Sonarr themselves will, I believe, search for another source.
It uses the /queue/delete API (https://sonarr.tv/docs/api/#/Queue/delete_api_v3_queue__id_) with skipRedownload set to false, so yes, I think it downloads another source if there is one.
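
As a rough sketch of that call against Radarr's v3 API: the endpoint and parameter names (`removeFromClient`, `blocklist`, `skipRedownload`) follow the documented `DELETE /api/v3/queue/{id}`, while the base URL, queue id, and API key below are placeholders, not values from this thread:

```python
# Hedged sketch: remove a stalled queue item via Radarr's v3 API so that the
# release is blocklisted and Radarr searches for another source. Parameter
# names follow the documented DELETE /api/v3/queue/{id} endpoint.
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_queue_delete(base_url, queue_id):
    params = {
        "removeFromClient": "true",  # also delete the torrent in the client
        "blocklist": "true",         # block this release so it isn't re-grabbed
        "skipRedownload": "false",   # false => Radarr may search for a new source
    }
    return f"{base_url}/api/v3/queue/{queue_id}?{urlencode(params)}"

def remove_stalled(base_url, api_key, queue_id):
    req = Request(build_queue_delete(base_url, queue_id), method="DELETE",
                  headers={"X-Api-Key": api_key})
    urlopen(req)  # raises HTTPError on a non-2xx response
```

Usage would look like `remove_stalled("http://localhost:7878", "your-api-key", 42)`, where the port and id are illustrative.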


@Andystew94 commented on GitHub (Oct 2, 2023):

Hi
I took a stab at it too snd built a docker solution.
Maybe you find this handy too: https://github.com/ManiMatter/decluttarr
P.S. Couldn't refrain from adding the "-arr" after somebody in the discord group said that "people add -arr to their software even if it has nothing to do with the sonarr crew to sound more cool" ;-)

Question, does it prompt both Sonarr and Radarr to find another once the stalled download is removed?

It will blacklist the download, and radarr/sonarr themselves I believe will search for another source.

Fantastic work! Will give it a go later today.


@ShiloTobin commented on GitHub (May 11, 2024):

Damn, I should have looked for this thread earlier. I ended up making my own, Unstallarr, and I'm guessing it works like decluttarr: it uses both Radarr's and Sonarr's APIs to read the queue in each one and check the current status of each download.

When it finds a stalled download, it adds it to the blocklist, and then, by the natural design of the *arrs, a new copy is automatically searched for.

I found that when I made mine I also had to include a check for downloads with status "Downloading" but an error of "Downloading Metadata", to catch torrents that stall but aren't marked as stalled.

I'm about to try this one, because the one issue I had was: if I removed the stalled download from qBit, it would keep re-downloading the same copy that stalls, even if it's blacklisted in the *arrs, because their blocklist works off torrent hashes, not torrent names, so it just grabs different hashes of the same torrent.

If this app requires you to add your API keys from Sonarr and Radarr, that is also normal; it's the only way to communicate with your *arr server.
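
The "Downloading Metadata" check described above can be sketched as a small predicate over Radarr queue records. The field names (`status`, `errorMessage`) follow Radarr's `/api/v3/queue` response, but the exact error strings matched here are assumptions taken from the comment, not confirmed Radarr messages:

```python
# Hedged sketch of the stalled/metadata check on Radarr queue records.
# The matched error strings are assumptions based on the comment above.
def needs_blocklist(record):
    status = record.get("status", "").lower()
    error = (record.get("errorMessage") or "").lower()
    # Radarr surfaces stalled torrents via an error message on the record
    if "stalled" in error:
        return True
    # Catch torrents stuck fetching metadata but still marked "downloading"
    return status == "downloading" and "downloading metadata" in error
```

A cleanup script would fetch the queue, collect the ids where `needs_blocklist` is true, and remove them via the queue-delete endpoint with blocklisting enabled.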


@Nabstar3 commented on GitHub (Jul 7, 2024):

Hi

I took a stab at it too snd built a docker solution.

Maybe you find this handy too: https://github.com/ManiMatter/decluttarr

P.S. Couldn't refrain from adding the "-arr" after somebody in the discord group said that "people add -arr to their software even if it has nothing to do with the sonarr crew to sound more cool" ;-)

I also found and have been using this: https://github.com/MattDGTL/sonarr-radarr-queue-cleaner

It's worked well so far.


@ManiMatter commented on GitHub (Jul 7, 2024):

Hi
I took a stab at it too snd built a docker solution.
Maybe you find this handy too: https://github.com/ManiMatter/decluttarr
P.S. Couldn't refrain from adding the "-arr" after somebody in the discord group said that "people add -arr to their software even if it has nothing to do with the sonarr crew to sound more cool" ;-)

I also found and have been using this: https://github.com/MattDGTL/sonarr-radarr-queue-cleaner

It's worked well so far.

MattDGTL's script was actually a starting point when building decluttarr.

By now decluttarr has more features (like slowness checks, failed imports, etc.) and also supports Lidarr/Readarr/Whisparr.


@i2sly commented on GitHub (Aug 14, 2024):

Hi
I took a stab at it too snd built a docker solution.
Maybe you find this handy too: https://github.com/ManiMatter/decluttarr
P.S. Couldn't refrain from adding the "-arr" after somebody in the discord group said that "people add -arr to their software even if it has nothing to do with the sonarr crew to sound more cool" ;-)

I also found and have been using this: https://github.com/MattDGTL/sonarr-radarr-queue-cleaner
It's worked well so far.

MattDGTL‘s script was actually a starting point when building decluttarr.

By now decluttarr has more features (like slowness check, failed imports, etc) and supports also lidarr/readarr/whisparr.

Hey man, just found this and went to set it up. Looks great and straightforward except for one thing: you don't have anything in your YAML for configuring where the logs are placed. I am new to Docker, so there may be a simple way to figure that out, but this is the first Docker image I've seen that doesn't have an interface, a config file, or a clearly defined log variable.

Thank you in advance; I will update if I figure it out before you can respond.


@bakerboy448 commented on GitHub (Aug 14, 2024):

Please do not spam this issue with noise about unrelated repos. Issues with https://github.com/MattDGTL/sonarr-radarr-queue-cleaner or https://github.com/ManiMatter/decluttarr, or anything else that is not Radarr, should be confined to those repos, not this Radarr thread.


@i2sly commented on GitHub (Aug 15, 2024):

Please do not spam this issue with noise for unrelated githubs. issues with https://github.com/MattDGTL/sonarr-radarr-queue-cleaner or https://github.com/ManiMatter/decluttarr or anything that is not Radarr shall be confined to that github.....not on this Radarr thread.

Not a problem. The only reason I responded here is because I found it here, and on testing it is the closest solution to a problem everyone has wanted fixed for years. Hope you have a great day.


@Krandor1 commented on GitHub (Aug 15, 2024):

"everyone has wanted" is a bit of an overstatement. I would never use such
an option personally.

Brian Hartsfield



@i2sly commented on GitHub (Aug 15, 2024):

"everyone has wanted" is a bit of an overstatement. I would never use such an option personally. -- Brian Hartsfield

The statement was made because I've seen many years and dozens of threads just like this one. But I'm curious: how do you deal with stalled downloads outside of private torrent sites that actually post honest seed info? Or do you only use private torrent sources? I ask because if you have a better way to run a 100% unattended, automated server, I'm all ears. Mine has been fully automated for 2 years, apart from torrents with 0 seeds getting stuck and it not being able to handle that issue on its own. I don't have that issue with my private sites, where I get new content that I know will seed and keep ratios healthy; but for the Overseerr access I give to family and friends, they constantly download old stuff, so that is pulled to a different qBit instance behind a VPN that sources from public trackers, and it's a very common problem there.


@SheeEttin commented on GitHub (Aug 15, 2024):

This is for discussion of the issue, not a general discussion forum; please use it properly. We don't need email notifications for unrelated discussion.

@markus101 @mynameisbogdan can one of you limit comments please?
