mirror of
https://github.com/qbittorrent/qBittorrent.git
synced 2026-03-02 22:57:32 -05:00
Wasted data client blocking algorithm/options #14891
Originally created by @Blakestandal on GitHub (Aug 18, 2023).
Suggestion
Enable a feature that blocks/limits peers with abnormal wasted data tied to their IP.
Maybe you'd tolerate 10 MB (or a user-specified amount) of wasted data from a single peer. All peers under that limit are allowed to seed to you; otherwise they go on some sort of torrent-specific blacklist.
Better still, if the same peer is flagged on multiple torrents, it could be blocked entirely.
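The suggestion above can be sketched as a small per-peer waste tracker. This is only an illustration, not qBittorrent code; the 10 MB tolerance and the two-torrent escalation threshold are hypothetical, user-configurable values:

```python
from collections import defaultdict

WASTE_LIMIT_BYTES = 10 * 1024 * 1024   # hypothetical per-peer tolerance (10 MB)
GLOBAL_FLAG_THRESHOLD = 2              # hypothetical: blacklisted on N torrents -> ban everywhere

class WasteTracker:
    """Running tally of wasted (hash-failed) bytes per peer, per torrent."""

    def __init__(self, limit=WASTE_LIMIT_BYTES, global_threshold=GLOBAL_FLAG_THRESHOLD):
        self.limit = limit
        self.global_threshold = global_threshold
        self.wasted = defaultdict(int)             # (torrent_id, peer_ip) -> bytes wasted
        self.torrent_blacklist = defaultdict(set)  # torrent_id -> {peer_ip}
        self.global_blacklist = set()

    def record_waste(self, torrent_id, peer_ip, n_bytes):
        """Call whenever a piece downloaded from peer_ip fails its hash check."""
        self.wasted[(torrent_id, peer_ip)] += n_bytes
        if self.wasted[(torrent_id, peer_ip)] > self.limit:
            self.torrent_blacklist[torrent_id].add(peer_ip)
            # Escalate: a peer flagged on several torrents gets banned client-wide.
            flagged_on = sum(1 for peers in self.torrent_blacklist.values()
                             if peer_ip in peers)
            if flagged_on >= self.global_threshold:
                self.global_blacklist.add(peer_ip)

    def is_blocked(self, torrent_id, peer_ip):
        return (peer_ip in self.global_blacklist
                or peer_ip in self.torrent_blacklist[torrent_id])
```

The point of the sketch is that the per-torrent and client-wide bans fall out of one running counter, which is why several commenters below consider it a modest amount of work.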
Use case
It's getting ridiculous. I'm wasting almost 50% of the data on many torrents (see screenshot). I have a high-quality VPN running on a mid-tier TrueNAS SCALE server, with hardwired connections at every point and 800/20 Mbps speeds. Nothing is bottlenecked, but something needs to be done here.
Extra info/examples/attachments
@Blakestandal commented on GitHub (Aug 27, 2023):
Still a problem. Just to bump this issue
@DarkVoyage commented on GitHub (Sep 2, 2023):
Do you use ipfilter.dat? Bad actors on public torrents are, 99% of the time, corporate fakers. They should be filtered well by a list of known parasites.
Check this: https://www.ipfilter.app/
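For context, a minimal reader for that kind of filter list might look like the following. It assumes the common eMule-style ipfilter.dat layout (`start - end , level , description`, where an access level below 128 means the range is blocked); verify the format details against the actual file before relying on this:

```python
def _to_int(dotted):
    """Convert a dotted-quad IPv4 string (possibly zero-padded) to an int."""
    octets = [int(o) for o in dotted.split(".")]
    if len(octets) != 4 or any(not 0 <= o <= 255 for o in octets):
        raise ValueError(dotted)
    return (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

def parse_ipfilter(lines):
    """Collect blocked (start, end) ranges from eMule-style ipfilter.dat lines."""
    blocked = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split(",")
        if len(parts) < 2:
            continue
        try:
            start_s, end_s = (p.strip() for p in parts[0].split("-"))
            level = int(parts[1].strip())
            start, end = _to_int(start_s), _to_int(end_s)
        except ValueError:
            continue  # skip malformed lines
        if level < 128:  # assumed convention: low access level = blocked
            blocked.append((start, end))
    return blocked

def is_filtered(ip, blocked_ranges):
    n = _to_int(ip)
    return any(lo <= n <= hi for lo, hi in blocked_ranges)
```

Note that, as the later comments point out, a static list like this can only ever block known fixed pools; it cannot react to a peer that starts misbehaving mid-session.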
@Blakestandal commented on GitHub (Sep 2, 2023):
Is there a way this can be built into the app? I use TrueNAS SCALE, so my qBittorrent is in a container. Routing through another container isn't impossible, but it would be much better if it were integrated. Cheers.
@Only1Shadow commented on GitHub (Sep 11, 2023):
I'll second this feature request - the bad actors are able to change IP with ease so the blacklists are constantly out of date and eventually end up blocking legit peers. Not only are they sending intentionally corrupted pieces, but they've also begun to initiate a transfer at a good rate then slow their data to a crawl, sometimes leaving a legit piece, sometimes not, but tying up the transfer for hours.
Bad-actor mitigation options are (to me) more useful than a blocklist because they work in real time and can be adjusted to be far more accurate. My suggestion would be something like:
Greylist a host for xxx hours/minutes if:
- more than xxx percent wasted data, or xxx MB wasted data, or xxx bad pieces
- more than xxx percent reduced transfer rate from the peak or initial rate
- less than xxx kB/s transfer rate
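The criteria above could be sketched as a time-limited greylist roughly like this. All thresholds and the stat field names (`wasted`, `bad_pieces`, `peak_rate`, and so on) are hypothetical stand-ins, not qBittorrent internals:

```python
import time

class PeerGreylist:
    """Greylist a peer for a limited time when it trips any waste/rate threshold."""

    def __init__(self, duration_s=3600, max_waste_pct=25.0,
                 max_waste_bytes=50 * 2**20, max_bad_pieces=10,
                 max_rate_drop_pct=90.0, min_rate_bps=8 * 1024):
        self.duration_s = duration_s          # how long a greylisting lasts
        self.max_waste_pct = max_waste_pct
        self.max_waste_bytes = max_waste_bytes
        self.max_bad_pieces = max_bad_pieces
        self.max_rate_drop_pct = max_rate_drop_pct
        self.min_rate_bps = min_rate_bps
        self.expiry = {}                      # peer_ip -> unix time when it expires

    def evaluate(self, peer_ip, stats, now=None):
        """stats: downloaded/wasted in bytes, bad_pieces count, rates in bytes/s."""
        now = time.time() if now is None else now
        waste_pct = 100.0 * stats["wasted"] / max(stats["downloaded"], 1)
        rate_drop_pct = 100.0 * (1 - stats["current_rate"] / max(stats["peak_rate"], 1))
        if (waste_pct > self.max_waste_pct
                or stats["wasted"] > self.max_waste_bytes
                or stats["bad_pieces"] > self.max_bad_pieces
                or rate_drop_pct > self.max_rate_drop_pct
                or stats["current_rate"] < self.min_rate_bps):
            self.expiry[peer_ip] = now + self.duration_s

    def is_greylisted(self, peer_ip, now=None):
        now = time.time() if now is None else now
        return self.expiry.get(peer_ip, 0) > now
```

Because the ban expires on its own, a host that was merely having a bad day gets another chance, which addresses the stale-blocklist concern raised above.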
@DarkVoyage commented on GitHub (Sep 12, 2023):
I doubt that such an advanced system will be realized soon, or at all. Currently you can ban any peer manually. ipfilter doesn't contain private IPs; it filters only fixed corporate pools, and of course it is updated and outdated entries are removed.
@Only1Shadow commented on GitHub (Sep 12, 2023):
I'd love to contribute this feature to the code base, but my programming days go back to when Fortran and COBOL were mainstream. I can make the lights flash on an Arduino, but not much more.
@Blakestandal commented on GitHub (Oct 3, 2023):
I'm not a programmer, but I have to think a running tally of each peer's wasted data is a pretty easy metric to track, and then just allow the user to set a limit on how much waste is tolerated per peer.
That seems like the shortest and easiest way to help with this issue.
Because I now have a few torrents with OVER 100% wasted data (5 GB file, 7 GB wasted), and it just seems like we should have the ability to remedy that. Thanks again <3
@DarkVoyage commented on GitHub (Oct 3, 2023):
Are you sure you understand what "wasted" means in this case? It is not disk space; it is simply faulty downloaded data that doesn't match the hash, so it gets rejected and that's all. In the end you lose some traffic and download time, but if you have an unlimited connection and you eventually get the full download, it is not a great problem after all.
In qBittorrent you can see which peer gave you what amount of data, and if you get a lot of waste, it can't be every peer: you can easily find a bad actor who uploaded a lot with no progress to show for it afterwards. You can ban them manually.
But as I said above, this is generally the work of corporate entities fighting for their crappy copyright. There's simply no need for an ordinary person to show up on a random public torrent and try to interfere with the normal process, without much success and with a lot of wasted upload. The torrent will be assembled from other peers anyway, and they gain nothing from slowing it.
@LazyPajen commented on GitHub (Oct 4, 2023):
I like this idea from Only1Shadow.
I think this could be useful for those who have a capped account.
It could also be restricted to "per session".
The waste can also happen when the other side has a mishap on their end.
@Only1Shadow commented on GitHub (Oct 4, 2023):
@DarkVoyage wasted means wasted. I do not have an unlimited connection, and as with many rural Americans my connection speed is also low (10 Mbit and 1.5 TB/mo), so being able to auto-ban bad actors, or hosts having issues on their end, would be of great benefit.
@Blakestandal as a former programmer and database admin I can say this isn't a trivial bug fix, but neither should it be more than several hours' worth of coding.
There's a bit more to it of course, but this is not a major programming job for a very useful feature, IMHO.
I wish my coding skills were up to date enough to contribute this.
@Blakestandal commented on GitHub (Nov 2, 2023):
Precisely put, thank you. And yes, even though I pay extra for unlimited data via Comcast, I'd much appreciate downloading a 10 GB file in about 10 GB's worth of traffic, not 24 GB (yes, that literally happens sometimes). I run qBittorrent on TrueNAS, so manually checking for bad peers every time is out of the question. As stated above, there needs to be a per-peer threshold for how much waste is allowed. Maybe it just flags them per client for a day or a week or something. But yeah, it's ridiculous how much traffic I waste (pretty much 50% on average).
@Blakestandal commented on GitHub (Nov 9, 2023):
Literally have another prime example right now: 15 GB file, 17 GB wasted, 32 GB total downloaded. Meanwhile I've uploaded 24 GB.
(this is within about 8 hours)
Clearly there's a flaw in this design 😂
@Blakestandal commented on GitHub (May 1, 2024):
Any chance this subject has been revisited? I'm still having the issue, and it happens on different devices with different VPNs and everything. Nothing in common other than my hardware being old-ish, and it still doesn't seem like a hardware issue. Would love more feedback. Thanks!
@xavier2k6 commented on GitHub (May 25, 2025):
ANNOUNCEMENT!
For anybody coming across this feature request who would like/love to see a potential implementation in the future, here are some options available to you:
DO:
- Select/click the 👍 and/or ❤ reactions on the original/opening post of this ticket.
- Feel free (if you have the skillset) to create a pull request implementing what's being requested in this ticket. (New/existing contributors/developers are always welcome.)
DO NOT:
- Post comments that don't provide anything constructive. (These will be disregarded/hidden as "spam/abuse/off-topic" etc.)