Ignore slow-speed peers for torrents with big sized parts #6183

Closed
opened 2026-02-21 18:16:08 -05:00 by deekerman · 39 comments
Owner

Originally created by @TiraelSedai on GitHub (Oct 18, 2017).

So here is an example (sorry for the Russian; the columns are: Client > Progress > Download speed > Upload speed > Downloaded)
https://i.imgur.com/VmPZuBu.png

I don't mind the 25 kbps and 35 kbps peers as much, but the ones with a speed below 2 kbps are simply taking up space in my parts queue.
The torrent in the example was about 200 MB, and the parts were 2 MB (or 4 MB? I'm not quite sure).
If we just ignored low-speed peers, we would actually decrease download time by a fair bit.


@Download commented on GitHub (Apr 10, 2019):

#4490


@TiraelSedai commented on GitHub (Apr 10, 2019):

At least I know now that I can manually ban the peer


@tonn333 commented on GitHub (Oct 28, 2019):

I want to always ignore slow peers. Why is it not a feature yet?


@WoodpeckerBaby commented on GitHub (Mar 17, 2020):

The thing is, the fact that they are not uploading to you does not mean that they are not uploading to someone else. If you disconnect from that peer, or refuse to seed to it, you compromise the peering dynamics for everyone. Almost none of us aggressively push the peering policy to the limit using game theory and whatnot. We could of course do that, using a greedy algorithm on top of the round-robin choker, but should we? I think not.

To fix your problem, you should either significantly increase the per-torrent connection limit and/or the global limit, or just not limit anything other than the global connection limit, sized to the processing power of your switches, APs and routers. I personally set this number to 500, and it works great.

I don't limit simultaneous active downloads or uploads, because some half-dead torrent can be revived if a peer with just the right blocks rejoins. I wouldn't want to miss that. I also don't limit upload bandwidth: it is unlikely to be saturated in most cases, and I choose the "peer-proportional - throttle TCP" option just in case it is.

My client runs on a separate fibre connection that I don't use normally. Only the unregistered guest Wi-Fi and low-security servers, such as Tor, are on that network.


@TiraelSedai commented on GitHub (Mar 17, 2020):

Sorry, but that was just stream of consciousness and nothing useful.

How can any of the bull you wrote change the fact that you can catch a peer seeding at 1-2 kbps while the torrent piece size is 4-16 MB? Do the calculations yourself of how long that would keep your unsaturated upload busy.


@Seeker2 commented on GitHub (Mar 17, 2020):

A piece cannot be shared with others until it is fully downloaded and passes its hash check.
A 16 MB piece at 1 KB/sec will take over 4.5 hours.
Uploading to lots of peers at once at 1 KB/sec each will still take a long time before any of those peers complete 1 piece unless they have other sources.
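
The arithmetic above is easy to check (the 16 MB piece and 1 KB/s rate are taken from the comment, interpreted as binary units):

```python
# Time for a peer to complete one piece at a given receive rate.
piece_bytes = 16 * 1024 * 1024   # a 16 MiB piece
rate_bytes_per_s = 1024          # a 1 KiB/s peer

seconds = piece_bytes / rate_bytes_per_s
hours = seconds / 3600
print(f"{hours:.2f} hours")      # ~4.55 hours, matching "over 4.5 hours"
```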


@WoodpeckerBaby commented on GitHub (Mar 17, 2020):

> Sorry but that was just stream of conscious and nothing useful.
>
> How any of the bull you wrote can change the fact that you can catch peer seeding at 1-2 Kbps while torrent block size would be 4-16 MB? Do the calculations of how long that would entertain your unsaturated upload yourself.

You can use round-robin under advanced settings, if that fixes anything.

If you do not have very good internet, or can't afford to keep a high-performance server/station on 24/7, you can look into buying a seedbox as an alternative.

You should set your RAM cache to be really large so that it does not wear down your hard drive too quickly; I have mine set to 12 GB for the R/W cache, for example. RSS the magnet link to the server, or drop the .torrent file in a cloud folder; it gets pulled by the qB server, which then injects a list of trackers. How long a particular file takes usually doesn't concern me: as long as, in aggregate, they are downloading pretty fast, and the "last seen complete" field is not "never" for more than 7 days, it will not report an error to me.

If this is unacceptable to you, and you are not frugal with your upload bandwidth, you should consider joining a private tracker, which will have rules for maintaining a good seeding ratio.


@WoodpeckerBaby commented on GitHub (Mar 17, 2020):

> A piece cannot be shared with others until it is fully downloaded and passes its hash check.
> A 16 MB piece at 1 KB/sec will take over 4.5 hours.
> Uploading to lots of peers at once at 1 KB/sec each will still take a long time before any of those peers complete 1 piece unless they have other sources.

Find a better torrent? You can research the tools and sources yourself, which you should not discuss here.


@Seeker2 commented on GitHub (Mar 17, 2020):

My last post wasn't about "bad" torrents; it's about how BitTorrent works when poor (or worse, default) settings are used in many BitTorrent clients.


@WoodpeckerBaby commented on GitHub (Mar 17, 2020):

[Screenshot: https://user-images.githubusercontent.com/14303318/76895874-cd493c00-68cb-11ea-8688-14cc1c803eab.png]

It happens to everyone. A lot of people hit and run. Many more are concerned with DMCA etc. when uploading, because they don't understand the mechanics. They think the less you upload, the less likely you are to upload to a rights-holder. But that's not how it works...

Again, join private trackers.


@Seeker2 commented on GitHub (Mar 17, 2020):

No, that's not my point or issue.
Someone could end up doing this because they try to share everything with everyone.


@WoodpeckerBaby commented on GitHub (Mar 18, 2020):

> No, that's not my point or issue.
> Someone could end up doing this because they try to share everything with everyone.

No, if it is consistent, it's deliberate.


@Seeker2 commented on GitHub (Mar 18, 2020):

There are multiple reasons that can cause low upload speed from a peer or seed, some consistent...some not.
And many of them are somewhat accidental -- people cannot be expected to baby-sit their BitTorrent client 24/7 to prevent it sometimes falling into one of these failure modes.


@FranciscoPombal commented on GitHub (Mar 18, 2020):

When downloading using either the `fixed_slots` or the new `rate_based` choker (as of writing, the latter is still in the works on the libtorrent side: https://github.com/arvidn/libtorrent/pull/4417), you should automatically, eventually, converge on connecting to the fastest peers, due to the optimistic unchoke mechanism (this also assumes your own upload speed is not set too low). If there aren't a lot of peers, it makes sense that even the really slow ones get unchoked - why is this a problem? Every bit of bandwidth helps.

One thing to keep in mind is that you might hurt your performance if you set the limit of unchoke slots too high. Then you will connect to too many peers at once, leading to many random I/O requests, which will make your storage medium the bottleneck (especially if you use an HDD).

When using the `fixed_slots` choker, 20 global unchoke slots and 4 per torrent is a good default to start with. Experiment with slightly higher values to find what works best for you.

If you think there is a bug in the choke/unchoke mechanism, submit an issue to libtorrent.
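
For reference, the suggestion above maps roughly onto libtorrent settings like this. This is only a sketch: the key names follow libtorrent's `settings_pack`, but the enum value for the fixed-slots choker is an assumption and should be verified against your libtorrent version; the per-torrent limit of 4 is set separately (in qBittorrent, as the per-torrent upload slots option).

```python
# Sketch of the settings discussed above, expressed as a
# settings_pack-style dict (key names from libtorrent's settings_pack;
# 0 is assumed here to select the fixed_slots choking algorithm).
choker_settings = {
    "choking_algorithm": 0,     # fixed_slots choker (assumed enum value)
    "unchoke_slots_limit": 20,  # 20 global unchoke slots
}
```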


@Seeker2 commented on GitHub (Mar 18, 2020):

There doesn't have to be a bug in the choke/unchoke mechanism to cause slow peers or seeds.

Someone who doesn't bother to stop torrents could easily end up running 20+ at once...and that might not even be a problem so long as demand on those torrents is relatively low.
But a flash crowd might appear and suddenly the "good" seed becomes a terrible one.


@FranciscoPombal commented on GitHub (Mar 18, 2020):

> But a flash crowd might appear and suddenly the "good" seed becomes a terrible one.

And if you were downloading from that seed and it slows down, it will be replaced by a faster one eventually, or you will keep downloading from it if there are no others available.


@TiraelSedai commented on GitHub (Mar 19, 2020):

@FranciscoPombal are you absolutely sure that

> they will be replaced by a faster one eventually

?

In my experience, it's not the case. It won't replace a peer that is uploading one block. Maybe if, as you said, it's the very last block and there are plenty of peers with higher upload.

However, this is absolutely devastating when you are using sequential download, as this mechanism won't kick in and you can basically forget about streaming a video.


@FranciscoPombal commented on GitHub (Mar 19, 2020):

> @FranciscoPombal are you absolutelly sure that
>
> > they will be replaced by a faster one eventually
>
> ?
>
> In my experience, it's not the case. It won't replace peer uploading one block. Maybe if it's as you said the very last block and there are plenty peers with higher upload.
>
> However this is absolutely devastating when you are using sequential download, as this mechanism won't kick in and you can basically forget about streaming a video.

I'm pretty sure yes, but let's ping @arvidn just in case.

Can data corresponding to a piece be received ~~by~~ from multiple peers at once?
~~Can the choker replace peers who are in the process of sending a piece with faster ones, or can it only do so when a piece finishes?~~ irrelevant


@arvidn commented on GitHub (Mar 19, 2020):

> Can data corresponding to a piece be received by multiple peers at once?

Yes

> Can the choker replace peers who are in the process of sending a piece with faster ones, or can it only do so when a piece finishes?

The choker runs every 15-20 seconds or so. It doesn't care which piece is being uploaded to or downloaded from the peers. There is no choking of the *download* direction, though. The choker decides which peers to *upload* to (or, reciprocate).
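
The periodic round described above can be sketched in a few lines. This is a hypothetical simplification, not libtorrent's actual code: each pass, unchoke the fastest peers, reserving one slot for a random "optimistic" unchoke so slow-looking peers still get a chance to prove themselves.

```python
import random

def choke_round(peers, slots, rng=random):
    """One choker pass (hypothetical sketch, not libtorrent's code).

    peers: dict of peer_id -> rate of reciprocation (bytes/s).
    Returns the set of peers to unchoke: the (slots - 1) fastest,
    plus one randomly chosen 'optimistic' unchoke from the rest.
    """
    by_rate = sorted(peers, key=peers.get, reverse=True)
    unchoked = set(by_rate[: max(slots - 1, 0)])
    rest = [p for p in by_rate if p not in unchoked]
    if rest and slots > 0:
        unchoked.add(rng.choice(rest))  # optimistic unchoke
    return unchoked

# Example: two fast peers always keep their slots; the third slot
# rotates randomly among the slow ones between choker runs.
rates = {"a": 500_000, "b": 90_000, "c": 2_000, "d": 1_000}
active = choke_round(rates, slots=3)
```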


@TiraelSedai commented on GitHub (Mar 19, 2020):

So if I'm understanding correctly, the question should be

> Can data corresponding to a piece be received **from** multiple peers at once?

Which sounds counter-intuitive, because this would hurt overall download speed.
And if that never happens, you are stuck downloading a specific piece from a very slow peer, which is my point exactly.


@FranciscoPombal commented on GitHub (Mar 19, 2020):

@arvidn Thanks for the answer.

Never mind my question about the choker's behavior, it is indeed irrelevant for this case.

@TiraelSedai

> So if I'm understanding correctly the question should be
>
> > Can data corresponding to a piece be received **from** multiple peers at once?

Yes, my bad. I edited the post above accordingly.

Now think about it. If you could only receive data corresponding to one piece from one peer at a time, you would have the problem you describe. For example, if a 4 KiB/s peer started transmitting a 4 MiB piece to you, you would be stuck with that peer for the duration of the piece download (about 17 minutes). This would indeed impact sequential downloading for the purpose of watching movies, for example, since no matter how fast the other peers are transmitting the other pieces, playback would still have to wait on the slowest one.

But this is not the case, as confirmed above. Which means that if you see only one peer transmitting a piece at 4 KiB/s, it is because that is the only peer available to do so at the moment. As soon as a faster peer is able to speed the process up, that will happen. There is no advantage to banning the slower peers, as they are not taking the place of faster ones.
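
The duration in the example above is simple to verify:

```python
# Worst case if a single slow peer were solely responsible for one piece.
piece = 4 * 1024 * 1024      # 4 MiB piece, in bytes
rate = 4 * 1024              # 4 KiB/s peer

minutes = piece / rate / 60
print(f"{minutes:.1f} min")  # ~17.1 minutes if no other peer helps
```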


@TiraelSedai commented on GitHub (Mar 19, 2020):

My only issue with that is that I do not see where it is confirmed above. I mean, if you are absolutely sure, that's fine, since you've already closed the issue.

My understanding of how it works: we have 10 peers, peer 1 is slow, and I'm downloading piece 1 from him.
I've downloaded pieces 2 through 10 from peers 2 through 10, and now I'm requesting pieces 10 through 19 from those peers.

What you are saying is that instead we will request piece 1 from one of the other peers, thus crippling our total download speed just so that we can have piece 1? That sounds like exactly what I want, but I'm pretty unsure of whether it actually ever happens.


@FranciscoPombal commented on GitHub (Mar 19, 2020):

> My only issue with that is that I do not see where it is confirmed above. I mean if you are absolutely sure that's fine, since you've already closed the issue.

https://github.com/qbittorrent/qBittorrent/issues/7614#issuecomment-601133671

> > Can data corresponding to a piece be received from multiple peers at once?
>
> Yes

> My understanding of what it works like, we have 10 peers and peer 1 is slow and I'm downloading piece 1 from him.
> I've downloaded pieces 2 through 10 from peers 2 through 10 and now I'm requesting pieces 10 through 19 from those peers.
>
> What you are saying is that instead we will request piece 1 from one of the other peers thus crippling our total download speed just so that we can have piece 1? That sounds like exactly what I want, but I'm pretty unsure of whether it actually ever happens.

If you have sequential mode enabled, then the pieces that come before will be prioritized. If not, anything may happen as qBittorrent attempts to saturate your connection.

Also,

> we will request piece 1 from one of the other peers thus crippling our total download speed just so that we can have piece 1?

How does this cripple your total download speed? It's just that the same bandwidth that one of the peers would have otherwise used for the other pieces is first used for that particular piece. So the speed is the same.


@arvidn commented on GitHub (Mar 19, 2020):

To clarify: under normal circumstances libtorrent prefers to download a single piece from as many peers as possible in parallel, because any partially downloaded piece represents inefficiency in propagating data (we can only upload data from pieces we have fully downloaded and verified against the piece hash).

There are cases where peers may be put on their own piece, e.g.:

  1. If we suspect a peer is sending corrupt data, but we don't know for sure because we also received part of the piece from another peer, the suspect peer is put in parole mode, which means we'll try to download a whole piece from it, to catch it sending corrupt data for sure.

  2. A peer whose transfer rate to us is above a threshold will be asked for all blocks in the piece we pick for it. The default threshold is that the whole piece can be downloaded in 30 seconds or less.

  3. Depending on the download rate from the peer, we'll have multiple outstanding requests to the same peer. These requests have an affinity to be made sequentially within a piece, which means there may not be that many blocks left over for other peers to pick from.

  4. There's a relatively new feature called `piece_extent_affinity` which creates an affinity to larger ranges of sequential requests, to improve disk I/O performance.
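
Point 2 implies a rate threshold proportional to the piece size. With the default 30-second rule, it works out as follows (a sketch; the 4 MiB piece size is an assumed example, not a libtorrent default):

```python
# Rate above which a peer would be asked for all blocks of a piece,
# under the default rule "whole piece within 30 seconds".
piece_bytes = 4 * 1024 * 1024   # assumed 4 MiB piece
window_s = 30

threshold = piece_bytes / window_s      # bytes/s
print(f"{threshold / 1024:.0f} KiB/s")  # ~137 KiB/s for a 4 MiB piece
```

So a 2 KiB/s peer is far below this threshold and would never be handed a whole piece on its own under this rule.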


@FranciscoPombal commented on GitHub (Mar 19, 2020):

@arvidn but if sequential mode is in effect, libtorrent will prioritize pieces that get "left behind", correct?

i.e. if peer 1 is slowly transmitting piece 42 and the other, faster peers have already finished sending us pieces 43 through 50, will libtorrent request piece 42 from the faster peers before requesting pieces 51 through 60 from them?

@FranciscoPombal commented on GitHub (Mar 19, 2020): @arvidn but if sequential mode is in effect, libtorrent will prioritize pieces that get "left behind", correct? i.e. if peer 1 is slowly transmitting piece 42 and the other faster peers have already finished sending us pieces 43 through 50, will libtorrent request piece 42 to the faster peers, before requesting them pieces 51 through 60?
Author
Owner

@arvidn commented on GitHub (Mar 19, 2020):

Yes and no. The sequential download feature just affects the piece picker: it will prefer to request from the lowest available piece. So it depends on what the reason is that the slow peer is downloading piece 42, and whether all blocks in the piece were requested from that peer.


@FranciscoPombal commented on GitHub (Mar 19, 2020):

@arvidn apologies if I did not make myself clear, in the example I meant the peers are transmitting (uploading) to us, not downloading from us. What about in that case?


@TiraelSedai commented on GitHub (Mar 19, 2020):

@arvidn thanks for spending time and explaining how libtorrent works!


@Seeker2 commented on GitHub (Mar 19, 2020):

I had a similar confusion/mistaken belief that qBittorrent/libtorrent typically downloaded each piece from just 1 peer or seed:
https://github.com/qbittorrent/qBittorrent/issues/182#issuecomment-312121825

Some hilariously wrong logic resulted because of that... however my conclusion is still relevant, once arvidn corrected all my mistakes.

@arvidn commented on GitHub (Mar 19, 2020):

apologies if I did not make myself clear, in the example I meant the peers are transmitting (uploading) to us, not downloading from us. What about in that case?

I was talking about us downloading from peers.

@FranciscoPombal commented on GitHub (Mar 19, 2020):

@arvidn

yes and no. the sequential download feature is just affecting the piece picker. It will prefer to request from the lowest available piece. So, it depends on what the reason is that the slow peer is downloading piece 42, and whether all blocks in the piece were requested from the peer.

So if sequential mode is active, if a slower peer is making the download of piece 42 slow, will the piece picker prioritize asking other peers to help with that one or continue with the next ones?

@arvidn commented on GitHub (Mar 19, 2020):

So if sequential mode is active, if a slower peer is making the download of piece 42 slow, will the piece picker prioritize asking other peers to help with that one or continue with the next ones?

Most likely, yes. By default libtorrent prioritizes picking blocks from pieces that have already been started. But there are cases where it might not. Like if the slow peer is in parole mode for instance.

However, peers in parole mode are supposed to pick the lowest priority piece, precisely to avoid blocking higher priority ones. But maybe the slow peer used to be fast, so fast we requested every block in piece 42 from it, but then it slowed down.
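The parole behaviour mentioned above can be sketched as follows. This is a hypothetical illustration, not libtorrent's implementation: a peer on parole downloads whole pieces by itself (so a hash failure can be attributed to it), and it is steered toward the lowest-priority piece so it cannot block important ones. All parameter names here are assumptions for the sketch:

```python
def pick_for_parole_peer(peer_have, downloaded, requested_elsewhere, priority):
    """Pick a whole piece for a peer on parole: one that no other peer
    is helping with, preferring the lowest-priority piece so a slow or
    untrusted peer can't hold up pieces we want soon."""
    candidates = [p for p in peer_have
                  if p not in downloaded and p not in requested_elsewhere]
    if not candidates:
        return None
    # Lowest priority first; ties broken arbitrarily by min().
    return min(candidates, key=lambda p: priority.get(p, 0))

# Piece 42 is high priority (7), piece 99 is low (1): the parole peer
# gets piece 99, leaving piece 42 for trusted, faster peers.
print(pick_for_parole_peer({42, 99}, set(), set(), {42: 7, 99: 1}))  # → 99
```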

@FranciscoPombal commented on GitHub (Mar 19, 2020):

@arvidn

But maybe the slow peer used to be fast, so fast we requested every block in piece 42 from it, but then it slowed down.

Perhaps when sequential mode is set, is it possible to re-request the same piece from other faster peers, to avoid that piece "lagging behind" due to the fast peer that became slow? Does libtorrent already do that? I'm thinking specifically about the use case of streaming a media file as it is downloaded. If a piece gets stuck with such a "fast-then-slow" peer, playback will stop completely until that piece is complete.

Outside of sequential mode, this is irrelevant though.

@arvidn commented on GitHub (Mar 19, 2020):

There's some related documentation here: https://blog.libtorrent.org/2011/11/block-request-time-outs/

Outside of sequential mode, this is irrelevant though.

That isn't what sequential mode is supposed to do. It's just a simple piece picker. If you want to stream, use set_piece_deadline() (and report issues :) )
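`set_piece_deadline()` is a real libtorrent call (on `torrent_handle`, taking a piece index and a deadline in milliseconds); everything around it in the sketch below is a hypothetical illustration of how a streaming client might use it: keep a sliding window of deadlines ahead of the playback position, earliest deadline for the piece needed next. `PIECE_MS` and `WINDOW` are made-up tuning values:

```python
PIECE_MS = 2000   # assumed playback time per piece (illustrative)
WINDOW = 8        # how many pieces ahead of playback to schedule

def deadlines_for(playback_piece, num_pieces):
    """Map each piece in the window ahead of playback to a millisecond
    deadline, earliest for the piece needed next."""
    window = range(playback_piece, min(playback_piece + WINDOW, num_pieces))
    return {p: (p - playback_piece + 1) * PIECE_MS for p in window}

# With a real libtorrent torrent_handle `h`, a client would then call,
# on each tick as playback advances:
#   for piece, ms in deadlines_for(current_piece, num_pieces).items():
#       h.set_piece_deadline(piece, ms)
print(deadlines_for(98, 100))  # → {98: 2000, 99: 4000}
```

The point of the deadline API, as opposed to plain sequential mode, is that libtorrent will actively re-request overdue blocks for deadline pieces from other peers.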

@FranciscoPombal commented on GitHub (Mar 19, 2020):

I see.

@TiraelSedai
Are you satisfied with the response?

@arvidn commented on GitHub (Mar 19, 2020):

@TiraelSedai I just read the ticket. I'm having a hard time understanding what you're suggesting or asking for. It sounds like end-game mode isn't working for you, or it's insufficient.

@TiraelSedai commented on GitHub (Mar 20, 2020):

I was under the impression that end-game was working; however, during sequential download we could have a scenario where we have to wait for a slower peer for minutes or hours. You say, though, that it is extremely unlikely we'd wait more than 30 seconds for a piece (given we have enough bandwidth with other peers), even if it's not the last one, so I'm fine with that.

@arvidn commented on GitHub (Mar 20, 2020):

I'm only talking about the intention of the logic in libtorrent. If you actually experience libtorrent not re-requesting a stalled block from some other peer, in end-game mode, there's a bug. I was under the impression that this ticket is reporting an issue, not asking a question about a hypothetical scenario.
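The blog post linked above describes deriving a per-block timeout from the peer's recent download rate, with a floor so slow links aren't snubbed instantly. A hypothetical sketch of that rule (function and parameter names are assumptions, not libtorrent's; the floor of 5 seconds is illustrative):

```python
def block_timed_out(bytes_outstanding, peer_rate_bps, now, last_receive,
                    min_timeout=5.0):
    """Return True if the peer's outstanding block requests are overdue:
    expect the outstanding bytes to arrive at the peer's recent download
    rate, never sooner than min_timeout seconds."""
    if peer_rate_bps <= 0:
        expected = min_timeout
    else:
        expected = max(min_timeout, bytes_outstanding / peer_rate_bps)
    return (now - last_receive) > expected

# Fast peer (1 MB/s) silent for 10 s with a 16 KiB block outstanding:
# overdue. Slow peer (1 KB/s): 10 s is still within its expected time.
print(block_timed_out(16384, 1_000_000, now=10.0, last_receive=0.0))  # → True
print(block_timed_out(16384, 1000, now=10.0, last_receive=0.0))       # → False
```

Once a block is considered timed out, the idea is to mark the peer as snubbed and let other peers re-request its outstanding blocks, which is what should prevent a single stalled peer from blocking completion.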

@TiraelSedai commented on GitHub (Mar 20, 2020):

It is not. However, I want to highlight once again that we are not talking about end-game mode, but rather about what happens when it's **not** in end-game mode: sequential download, some piece (let's say #10).
