Mirror of https://github.com/qbittorrent/qBittorrent.git (synced 2026-03-02 22:57:32 -05:00)
Ignore slow-speed peers for torrents with big sized parts #6183
Originally created by @TiraelSedai on GitHub (Oct 18, 2017).
So here is an example (sorry for the Russian; the columns are: Client > Progress > Download speed > Upload speed > Downloaded)
https://i.imgur.com/VmPZuBu.png
I don't mind 25 kbps and 35 kbps peers as much, but the ones that have a speed below 2 kbps are simply taking up space in my parts queue.
That torrent in the example was about 200 MB, and parts were 2MB (or 4MB? I'm not quite sure).
And if we just ignore low-speed peers, we will actually decrease download time by a fair bit.
@Download commented on GitHub (Apr 10, 2019):
#4490
@TiraelSedai commented on GitHub (Apr 10, 2019):
At least I know now that I can manually ban the peer
@tonn333 commented on GitHub (Oct 28, 2019):
I want to always ignore slow peers. Why is it not a feature yet?
@WoodpeckerBaby commented on GitHub (Mar 17, 2020):
The thing is, the fact that they are not uploading to you does not mean that they are not uploading to someone else. If you disconnect from that peer, or refuse to seed to that peer, that will compromise the peering dynamics for everyone. Almost none of us are aggressively pushing the peering policy to the limit using game theory and whatnot. We could do that, of course, using a greedy algorithm on top of a round-robin choker, but should we? I think not.
To fix your problem, you should either significantly increase the per torrent connection limit and/or the global limit, or just don't limit anything other than the global connection limit based on the processing power of your switches, APs and routers. I personally set this number to 500, and it works great.
I don't limit simultaneous active download or upload, because some half-dead torrent can be revived if one peer with just the right blocks joins back. I wouldn't want to miss that. I also don't limit the upload bandwidth. For me, it is unlikely going to saturate my upload bandwidth in most cases. I choose the "peer-proportional - throttle TCP" option just in case it does saturate.
My client is run on a separate fibre connection that I don't use normally. Only unregistered guest WIFI and low-security servers, such as Tor, are on that network.
@TiraelSedai commented on GitHub (Mar 17, 2020):
Sorry, but that was just a stream of consciousness and nothing useful.
How does anything you wrote change the fact that you can catch a peer seeding at 1-2 Kbps while the torrent block size is 4-16 MB? Do the calculation yourself of how long that would occupy your unsaturated upload.
@Seeker2 commented on GitHub (Mar 17, 2020):
A piece cannot be shared with others until it is fully downloaded and passes its hash check.
A 16 MB piece at 1 KB/sec will take over 4.5 hours.
Uploading to lots of peers at once at 1 KB/sec each will still take a long time before any of those peers complete 1 piece unless they have other sources.
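The arithmetic behind that figure can be checked with a two-line sketch (piece size and rate are just the numbers from this comment):

```python
def piece_hours(piece_bytes: int, rate_bytes_per_s: float) -> float:
    """Hours for a single peer to deliver one full piece at a steady rate."""
    return piece_bytes / rate_bytes_per_s / 3600

# A 16 MiB piece from a 1 KiB/s peer:
print(f"{piece_hours(16 * 1024 * 1024, 1024):.2f} hours")  # 4.55 hours
```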
@WoodpeckerBaby commented on GitHub (Mar 17, 2020):
You can use round-robin under advanced settings. If that fixes anything.
If you do not have very good internet or can't afford to keep a high-performance server/station on 24/7, you can look into buying a seedbox as an alternative.
You should set your RAM cache to be really large so that it does not wear down your hard drive too quickly. I have mine set to 12 GB for R/W cache, for example. RSS the magnet link to the server or drop the .torrent file in a cloud folder; they get pulled by the qB server, which then injects a list of trackers. How long a particular file takes usually doesn't concern me; as long as, in aggregate, they are downloading pretty fast, and the "last seen complete" parameter is not "never" for more than 7 days, it will not report an error to me.
If this is unacceptable for you, and you are not frugal with your upload bandwidth, you should consider joining a private tracker, which will have rules for maintaining a good seeding ratio.
@WoodpeckerBaby commented on GitHub (Mar 17, 2020):
Find a better torrent? You can research the tools and sources, which you should not discuss here.
@Seeker2 commented on GitHub (Mar 17, 2020):
My last post wasn't about "bad" torrents; it's about how BitTorrent works when poor (or worse, default) settings are used in many BitTorrent clients.
@WoodpeckerBaby commented on GitHub (Mar 17, 2020):
Again, join private trackers.
@Seeker2 commented on GitHub (Mar 17, 2020):
No, that's not my point or issue.
Someone could end up doing this because they try to share everything with everyone.
@WoodpeckerBaby commented on GitHub (Mar 18, 2020):
No, if it is consistent, it's deliberate.
@Seeker2 commented on GitHub (Mar 18, 2020):
There are multiple reasons that can cause low upload speed from a peer or seed, some consistent...some not.
And many of them are somewhat accidental -- people cannot be expected to baby-sit their BitTorrent client 24/7 to prevent it sometimes falling into one of these failure modes.
@FranciscoPombal commented on GitHub (Mar 18, 2020):
When downloading using either the fixed_slots or the new rate_based choker (as of writing it is still in the works on the libtorrent side: https://github.com/arvidn/libtorrent/pull/4417), you should automatically eventually converge on connecting to the fastest peers, due to the optimistic unchoke mechanism (this also assumes your own upload speed is not set too low). If there aren't a lot of peers, it makes sense that even the really slow ones get unchoked - why is this a problem? Every bit of bandwidth helps.
One thing to keep in mind is that you might hurt your performance if you set the limit of unchoke slots too high. Then you will connect to too many peers at once, leading to many random I/O requests, which will make your storage medium the bottleneck (especially if you use an HDD).
When using the fixed_slots choker, 20 global unchoke slots and 4 per torrent is a good default to start with. Experiment with slightly higher values to find what works best for you.
If you think there is a bug in the choke/unchoke mechanism, submit an issue to libtorrent.
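The convergence claim above can be illustrated with a toy model (a sketch only, not libtorrent's actual choker; the peer rates, slot count, and round-robin optimistic pick are all made up for the example):

```python
from itertools import count

def converge_on_fast_peers(peer_rates, slots, rounds):
    """Toy model of the unchoke loop: each round keeps the `slots` fastest
    of the currently unchoked peers plus one optimistically unchoked peer,
    cycled round-robin so every peer periodically gets a chance to prove
    itself. A simplification, not libtorrent's real choker."""
    unchoked = set(range(slots))      # start with arbitrary peers
    optimistic = count()              # deterministic round-robin pick
    for _ in range(rounds):
        unchoked.add(next(optimistic) % len(peer_rates))
        unchoked = set(sorted(unchoked, key=lambda p: peer_rates[p],
                              reverse=True)[:slots])
    return unchoked

rates = [2, 5, 1, 80, 40, 3, 60, 7]   # observed KiB/s per peer (made up)
print(sorted(converge_on_fast_peers(rates, slots=3, rounds=len(rates))))  # [3, 4, 6]
```

After one pass over all peers, the unchoke set has settled on the three fastest (indices 3, 4, 6), without ever banning the slow ones.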
@Seeker2 commented on GitHub (Mar 18, 2020):
There doesn't have to be a bug in the choke/unchoke mechanism to cause slow peers or seeds.
Someone who doesn't bother to stop torrents could easily end up running 20+ at once...and that might not even be a problem so long as demand on those torrents is relatively low.
But a flash crowd might appear and suddenly the "good" seed becomes a terrible one.
@FranciscoPombal commented on GitHub (Mar 18, 2020):
And if you were downloading from that seed and it slows down, they will be replaced by a faster one eventually, or you will keep downloading from them if there are no others available.
@TiraelSedai commented on GitHub (Mar 19, 2020):
@FranciscoPombal are you absolutely sure about that?
In my experience, it's not the case. It won't replace a peer that is uploading one block. Maybe if, as you said, it's the very last block and there are plenty of peers with higher upload speed.
However, this is absolutely devastating when you are using sequential download, as this mechanism won't kick in and you can basically forget about streaming a video.
@FranciscoPombal commented on GitHub (Mar 19, 2020):
I'm pretty sure yes, but let's ping @arvidn just in case.
1. Can data corresponding to a piece be received from multiple peers at once?
2. Can the choker replace peers who are in the process of sending a piece with faster ones, or can it only do so when a piece finishes? (irrelevant)
@arvidn commented on GitHub (Mar 19, 2020):
Yes
The choker runs every 15-20 seconds, or so. It doesn't care which piece is being uploaded or downloaded from the peers. There is no choking of the download direction though. The choker decides which peers to upload to (or, reciprocate).
@TiraelSedai commented on GitHub (Mar 19, 2020):
So if I'm understanding correctly the question should be
Which sounds counter-intuitive because this would hurt overall download speed.
If that never happens, you are stuck downloading a specific piece from a very slow peer, which is my point exactly.
@FranciscoPombal commented on GitHub (Mar 19, 2020):
@arvidn Thanks for the answer.
Never mind my question about the choker's behavior, it is indeed irrelevant for this case.
@TiraelSedai
yes, my bad. I edited the post above accordingly.
Now think about it. If you could only receive data corresponding to one piece from one peer at a time, you would have the problem you describe. If, for example, you had a 4 KiB/s peer start transmitting a 4 MiB piece to you, you would be stuck with that peer for the duration of the piece download (about 17 minutes). This would indeed impact sequential downloading for the purpose of watching movies, for example, since no matter how fast the other peers are transmitting the other pieces, playback would still have to wait on the slowest one.
But this is not the case, as confirmed above. Which means that if you see only one peer transmitting a piece at 4 KiB/s, it is because that is the only peer available to do so at the moment. As soon as a faster peer is able to speed the process up, that will happen. There is no advantage to banning the slower peers, as they are not taking the place of faster ones.
@TiraelSedai commented on GitHub (Mar 19, 2020):
My only issue with that is that I do not see where it is confirmed above. I mean if you are absolutely sure that's fine, since you've already closed the issue.
My understanding of how it works: we have 10 peers, peer 1 is slow, and I'm downloading piece 1 from him.
I've downloaded pieces 2 through 10 from peers 2 through 10, and now I'm requesting pieces 11 through 19 from those peers.
What you are saying is that instead we will request piece 1 from one of the other peers, thus crippling our total download speed just so that we can have piece 1? That sounds like exactly what I want, but I'm pretty unsure whether it actually ever happens.
@FranciscoPombal commented on GitHub (Mar 19, 2020):
https://github.com/qbittorrent/qBittorrent/issues/7614#issuecomment-601133671
If you have sequential mode enabled, then the pieces that come before will be prioritized. If not, anything may happen as qBittorrent attempts to saturate your connection.
Also, how does this cripple your total download speed? It's just that the same bandwidth that one of the peers would have otherwise used for the other pieces is first used for that particular piece. So the speed is the same.
@arvidn commented on GitHub (Mar 19, 2020):
To clarify: under normal circumstances libtorrent prefers to download a single piece from as many peers as possible in parallel, because any partially downloaded piece represents inefficiency in propagating data (we can only upload data from pieces we have fully downloaded and verified against the piece hash).
There are cases where peers may be put on their own piece, e.g.:
If we suspect a peer is sending corrupt data, but we don't know for sure because we also received part of the piece from another peer, the suspect peer is put in parole mode, which means we'll try to download a whole piece from it, to catch it sending corrupt data for sure.
A peer whose transfer rate to us is above a threshold will have all blocks of the piece we pick for it requested from it. The default threshold is that the whole piece can be downloaded in 30 seconds or less.
Depending on the download rate from the peer, we'll have multiple outstanding requests to the same peer. These requests have an affinity to be made sequentially within a piece, which means there may not be that many blocks left over for other peers to pick from.
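The whole-piece threshold described above reduces to a one-line rate check (a sketch; the 30-second default is the one stated here, while the function name and example piece size are illustrative):

```python
def requests_whole_piece(rate_bytes_per_s: float, piece_bytes: int,
                         threshold_s: float = 30.0) -> bool:
    """A peer fast enough to deliver the whole piece within the threshold
    gets every block of its picked piece requested from it."""
    return piece_bytes / rate_bytes_per_s <= threshold_s

piece = 4 * 1024 * 1024                           # 4 MiB piece
print(requests_whole_piece(1024 * 1024, piece))   # 1 MiB/s -> 4 s    -> True
print(requests_whole_piece(4 * 1024, piece))      # 4 KiB/s -> 1024 s -> False
```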
There's a relatively new feature called piece_extent_affinity, which creates an affinity to larger ranges of sequential requests, to improve disk I/O performance.
@FranciscoPombal commented on GitHub (Mar 19, 2020):
@arvidn but if sequential mode is in effect, libtorrent will prioritize pieces that get "left behind", correct?
i.e. if peer 1 is slowly transmitting piece 42 and the other, faster peers have already finished sending us pieces 43 through 50, will libtorrent request piece 42 from the faster peers before requesting pieces 51 through 60 from them?
@arvidn commented on GitHub (Mar 19, 2020):
yes and no. the sequential download feature is just affecting the piece picker. It will prefer to request from the lowest available piece. So, it depends on what the reason is that the slow peer is downloading piece 42, and whether all blocks in the piece were requested from the peer.
@FranciscoPombal commented on GitHub (Mar 19, 2020):
@arvidn apologies if I did not make myself clear, in the example I meant the peers are transmitting (uploading) to us, not downloading from us. What about in that case?
@TiraelSedai commented on GitHub (Mar 19, 2020):
@arvidn thanks for spending time and explaining how libtorrent works!
@Seeker2 commented on GitHub (Mar 19, 2020):
I had a similar confusion/mistaken belief that qBittorrent/libtorrent typically downloaded each piece from one peer or seed:
https://github.com/qbittorrent/qBittorrent/issues/182#issuecomment-312121825
Some hilariously wrong logic resulted from that... however my conclusion is still relevant, once arvidn corrected all my mistakes.
@arvidn commented on GitHub (Mar 19, 2020):
I was talking about us downloading from peers.
@FranciscoPombal commented on GitHub (Mar 19, 2020):
@arvidn
So if sequential mode is active, if a slower peer is making the download of piece 42 slow, will the piece picker prioritize asking other peers to help with that one or continue with the next ones?
@arvidn commented on GitHub (Mar 19, 2020):
Most likely, yes. By default libtorrent prioritizes picking blocks from pieces that have already been started. But there are cases where it might not. Like if the slow peer is in parole mode for instance.
However, peers in parole mode are supposed to pick the lowest priority piece, precisely to avoid blocking higher priority ones. But maybe the slow peer used to be fast, so fast we requested every block in piece 42 from it, but then it slowed down.
@FranciscoPombal commented on GitHub (Mar 19, 2020):
@arvidn
Perhaps when sequential mode is set, is it possible to re-request the same piece from other, faster peers, to avoid that piece "lagging behind" due to the fast peer that became slow? Does libtorrent already do that? I'm explicitly thinking about the use case of streaming a media file as it is downloaded. If a piece gets stuck with such a "fast-then-slow" peer, playback will stop completely until that piece is complete.
Outside of sequential mode, this is irrelevant though.
@arvidn commented on GitHub (Mar 19, 2020):
There's some related documentation here: https://blog.libtorrent.org/2011/11/block-request-time-outs/
That isn't what sequential mode is supposed to do. It's just a simple piece picker. If you want to stream, use set_piece_deadline() (and report issues :) )
@FranciscoPombal commented on GitHub (Mar 19, 2020):
I see.
@TiraelSedai
Are you satisfied with the response?
@arvidn commented on GitHub (Mar 19, 2020):
@TiraelSedai I just read the ticket. I'm having a hard time understanding what you're suggesting or asking for. It sounds like end-game mode isn't working for you, or it's insufficient.
@TiraelSedai commented on GitHub (Mar 20, 2020):
I was under the impression that end-game mode was working; however, during sequential download we could have a scenario where we have to wait for a slower peer for minutes or hours. You, however, say that it is extremely unlikely that we'd wait more than 30 seconds for a piece (given we have enough bandwidth with other peers), even if it's not the last one, so I'm fine with that.
@arvidn commented on GitHub (Mar 20, 2020):
I'm only talking about the intention of the logic in libtorrent. If you actually experience libtorrent not re-requesting a stalled block from some other peer, in end-game mode, there's a bug. I was under the impression that this ticket is reporting an issue, not asking a question about a hypothetical scenario.
@TiraelSedai commented on GitHub (Mar 20, 2020):
It is not. However, I want to highlight once again that we are not talking about end-game mode, but rather about sequential download when it's not in end-game mode, with some piece (let's say #10) stuck on a slow peer.
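As a footnote to the streaming discussion: set_piece_deadline() lets the caller attach a deadline to each piece, and the picker can then order requests by urgency rather than strictly sequentially. A toy sketch of that ordering (a hypothetical helper for illustration, not libtorrent code):

```python
import heapq

def next_pieces(deadlines, missing, n):
    """Toy deadline-driven piece order: among the still-missing pieces that
    have a deadline (ms from now), pick the n most urgent ones first."""
    heap = [(deadlines[p], p) for p in missing if p in deadlines]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(min(n, len(heap)))]

# Pieces needed for playback; earlier playback position = earlier deadline.
deadlines = {10: 0, 11: 1000, 12: 2000, 13: 3000}
print(next_pieces(deadlines, missing={10, 11, 13}, n=2))  # [10, 11]
```

A stalled urgent piece stays at the front of this ordering, so it keeps being offered to whichever fast peers are available instead of silently lagging behind.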