Skip torrent checking after relaunch #16317

Open
opened 2026-02-22 03:00:48 -05:00 by deekerman · 54 comments
Owner

Originally created by @TheYMI on GitHub (Nov 4, 2024).

Originally assigned to: @glassez on GitHub.

Suggestion

When launching qBittorrent, any torrents that weren't fully checked before it was closed require a recheck, which consumes a lot of time and resources. It should be possible to skip this process.

Use case

After installing v5.0.1, I had to relaunch qBittorrent after changing some configurations that had been reset by the version change.

Since I have over 4k torrents, not all of them were updated before I closed the client.
After relaunching, it started checking all the torrents that weren't updated. That's thousands of torrents - several TBs' worth of data - some of which are on a NAS. This will take days, if not over a week, without me having changed any of the files, making the recheck completely unnecessary.

During this time my network is completely clogged, my NAS is unusable, and those torrents are not seeding.
THIS NEEDS TO BE FIXED NOW!!!
(a workaround is also okay)

Extra info/examples/attachments

Closing [21766](https://github.com/qbittorrent/qBittorrent/issues/21766) as a duplicate of something that's been ignored for over a decade isn't a solution. I will keep reopening this issue until someone actually takes it seriously instead of sweeping it under the rug.


@HanabishiRecca commented on GitHub (Nov 4, 2024):

> I will keep reopening this issue until someone actually takes it seriously instead of sweeping it under the rug.

Then you simply will be banned.

> THIS NEEDS TO BE FIXED NOW!!!
> I need a solution.

This is **your** problem. You are not in a position to demand anything.

qBittorrent is free and open source software developed by volunteers in their free time. You don't pay for it; you use a product of other people's free will. No one will rush to solve your problems or implement your wishes, especially if you ask for it disgracefully.

Maybe you want to do it yourself? PRs are welcome.


@TheYMI commented on GitHub (Nov 4, 2024):

> > I will keep reopening this issue until someone actually takes it seriously instead of sweeping it under the rug.
>
> Then you simply will be banned.

Users on GitHub are free to make.

> > THIS NEEDS TO BE FIXED NOW!!!
> > I need a solution.
>
> This is **your** problem. You are not in a position to demand anything.

Nope. There are complaints about this issue since 2012. I'm the last one, not the only one.

> qBittorrent is free and open source software developed by volunteers in their free time. You don't pay for it; you use a product of other people's free will. No one will rush to solve your problems or implement your wishes, especially if you ask for it disgracefully.

People have been asking nicely for YEARS. No solution or workaround has ever been offered. Closing my issue as a duplicate of a 12-year-old unresolved issue while shrugging it off is also disgraceful.
The issue is the result of problematic behavior that's been known for years and that no one ever cared enough to solve. There's probably a list somewhere that could be cleared with a single button click, if you just know the piece of code that checks it.
There might even be a file that could be edited as a workaround, but even that was never offered.

So yes, I was very annoyed when I opened this issue, because my search for a solution before coming to GitHub turned up nothing other than years' worth of frustrated people complaining about this behavior, with developers disregarding them.
And while this is a free product, if it's causing me problems (e.g. high usage of my network and overworking my NAS) when I used it as intended due to negligence, then yes, I feel like I'm owed a response from the developers.


@HanabishiRecca commented on GitHub (Nov 4, 2024):

> Users on GitHub are free to make.

And repo owners are free to ban anyone from it.

> Nope. There are complaints about this issue since 2012. I'm the last one, not the only one.

You are the one demanding a solution right now.

> Closing my issue as a duplicate of a 12-year-old unresolved issue while shrugging it off is also disgraceful.

But it is, objectively, a duplicate. Believe me, keeping around 1000 issues open for the same problem would not help fix it faster.

> People have been asking nicely for YEARS. No solution or workaround has ever been offered.
> The issue is the result of problematic behavior that's been known for years and that no one ever cared enough to solve. There's probably a list somewhere that could be cleared with a single button click, if you just know the piece of code that checks it.

Exactly. Years of asking instead of proposing a fix. If you think it's so easy, why don't you just go and fix it?

But I doubt it, as you don't even understand that the problem's roots are not really in qBittorrent's code in the first place, but grow deep in [libtorrent](https://github.com/arvidn/libtorrent) behavior.

If you think about it for one second, you might realize that if there were an easy fix, it would have been fixed already, and there wouldn't have been pages of discussion around it.

> if it's causing me problems (e.g. high usage of my network and overworking my NAS) when I used it as intended due to negligence, then yes, I feel like I'm owed a response from the developers.

No, you don't. Read the license.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

@glassez commented on GitHub (Nov 4, 2024):

> Since I have over 4k torrents, not all of them were updated before I closed the client.

What do you mean by "updated"?

> Nope. There are complaints about this issue since 2012. I'm the last one, not the only one.

About what issue exactly?

Some explanations (that I have already made repeatedly in other similar topics)

If you've started "rechecking" a torrent, then you can't just cancel it to get back to the previous state. "Recheck" literally means "forget the current progress and start the torrent from scratch."
(Of course, you can stop a torrent being checked and then start it again, just like any other.)

Due to the peculiarities of libtorrent's behavior, qBittorrent really used to sin by unexpectedly starting a "recheck" in various situations, which caused a lot of inconvenience for users, and everyone agreed with that. I am someone who has been struggling with these issues for a long time (as reports appear of the circumstances in which it behaves this way), and I believe that I have fixed them all. (At the very least, it is incorrect to refer to those old issues in relation to the current qBittorrent.)
Now qBittorrent does not start rechecking by itself under almost any circumstances (there is one, but it hardly relates to your problem: moving a torrent to a new location where there are already matching files). Since then, no one has provided confirmed data on any other circumstances in which qBittorrent can spontaneously start "rechecking" torrents. So the only known way to start "rechecking" at the moment (apart from the one mentioned above) is for the user to do it himself.


@TheYMI commented on GitHub (Nov 4, 2024):

I started the client. All torrents start as "Checking resume data".
I closed the client while most checks haven't finished.
I relaunched the client, and everything whose status was "Checking resume data" before I closed the client, now requires checking.


@HanabishiRecca commented on GitHub (Nov 4, 2024):

Yeah, we actually struggle to reproduce that. https://github.com/qbittorrent/qBittorrent/issues/13556#issuecomment-2367526735


@glassez commented on GitHub (Nov 4, 2024):

> I relaunched the client, and everything whose status was "Checking resume data" before I closed the client, now requires checking.

What do you mean by "requires checking"?
To avoid confusion over the terms used, it is better to accompany them with screenshots.
It would also be extremely useful to take a look at the latest logs.


@TheYMI commented on GitHub (Nov 4, 2024):

![image](https://github.com/user-attachments/assets/5c74893e-bd5d-4772-a3b7-4803a745c8b5)
Things that were checking resume data (blue) when the client was closed require checking (red) after relaunch.
Seeding torrents (black) do not require a recheck.

I did notice that not everything that was checking resume data (blue) when the client was closed requires a recheck (red) afterwards.
This seems to be true for files that were recently rechecked (red) and no longer require a check (black).
From an investigation I conducted, it seems like the decision is related to the contents of the `.fastresume` files in the BT_backup directory.
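For context on that investigation: `.fastresume` files are bencoded dictionaries (libtorrent's resume-data format). As a sketch of how one could start inspecting them, here is a minimal stdlib-only bencode decoder that lists a file's top-level keys. The `inspect_fastresume` helper is hypothetical, and which keys any particular file contains is an assumption, not a description of TheYMI's script.

```python
# Minimal bencode decoder, enough to inspect a .fastresume file.
# (A sketch only; real files may contain fields not handled specially here.)

def bdecode(data: bytes, i: int = 0):
    """Decode one bencoded value starting at offset i; return (value, next_offset)."""
    c = data[i:i + 1]
    if c == b"i":  # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c == b"l":  # list: l<items>e
        i += 1
        items = []
        while data[i:i + 1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":  # dict: d<key><value>...e
        i += 1
        d = {}
        while data[i:i + 1] != b"e":
            key, i = bdecode(data, i)
            val, i = bdecode(data, i)
            d[key] = val
        return d, i + 1
    # byte string: <length>:<bytes>
    colon = data.index(b":", i)
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

def inspect_fastresume(path):
    """Print the top-level keys of a .fastresume file (hypothetical helper)."""
    with open(path, "rb") as f:
        resume, _ = bdecode(f.read())
    for key in sorted(resume):
        print(key.decode("ascii", "replace"))
```

In qBittorrent's files you would typically also see `qBt-`-prefixed keys alongside libtorrent's own fields, but treat any specific key name as something to verify against your own files.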


@TheYMI commented on GitHub (Nov 4, 2024):

I created a small [Python script](https://gist.github.com/TheYMI/070ccf9197307eb618e3279c86730a2a) (for Windows) that works as a workaround for this issue.


@TheYMI commented on GitHub (Nov 4, 2024):

> Exactly. Years of asking instead of proposing a fix. If you think it's so easy, why don't you just go and fix it?

Done (although it's a workaround and not exactly a fix).

> But I doubt it, as you don't even understand that the problem's roots are not really in qBittorrent's code in the first place, but grow deep in [libtorrent](https://github.com/arvidn/libtorrent) behavior.
>
> If you think about it for one second, you might realize that if there were an easy fix, it would have been fixed already, and there wouldn't have been pages of discussion around it.

It was pretty easy once I looked at the code for 30 minutes to understand where the decision comes from. Then another 30-ish minutes to compare a `.fastresume` file before and after the check, about 10 minutes to write a few lines of code that simulate the change and check whether it fixes the issue, and another hour to clean it up into a decent-looking script that does the same to all files, plus a test run to make sure I didn't break anything.

Imagine the wonders I could do if I knew the codebase well enough to integrate this into the code and add a GUI element that invokes it.


@TheYMI commented on GitHub (Nov 4, 2024):

In all seriousness, I don't feel confident enough to convert it into C++ and add it properly. Feel free to take my code and use it to add this as a feature (the code is simple enough to understand, but I'd be happy to answer any questions).

I would add it as a button, as suggested in the title of [13556](https://github.com/qbittorrent/qBittorrent/issues/13556), and then add a confirmation prompt that requires another approval with a disclaimer.
Additionally, I completely closed qBittorrent while running the script to avoid editing files that might be in use. If this is implemented within qBittorrent, I would make sure to stop all activity and release all file handles before changing the files' contents, then relaunch qBittorrent so that the startup process reloads all the torrents from scratch.
My script keeps a backup of the files it changes, but the disclaimer in the confirmation window should probably advise the user to copy the whole BT_backup directory as a backup, while qBittorrent is closed, before proceeding.
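The backup advice above can be sketched in a few lines. The `backup_bt_backup` helper and the Windows profile path in the comment are assumptions for illustration (adjust for a portable profile or another OS), and qBittorrent must be fully closed first:

```python
# Sketch: back up the whole BT_backup directory before touching any
# resume files. Timestamped destination so repeated runs never collide.
import shutil
import time
from pathlib import Path

def backup_bt_backup(profile_dir: Path) -> Path:
    src = profile_dir / "BT_backup"
    dst = profile_dir / f"BT_backup.bak.{time.strftime('%Y%m%d-%H%M%S')}"
    # copytree refuses to overwrite an existing directory, which is the
    # behavior we want for a backup: never clobber an earlier copy.
    shutil.copytree(src, dst)
    return dst

# Example (assumed Windows default profile location; client closed):
# backup_bt_backup(Path.home() / "AppData/Local/qBittorrent")
```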


@TheYMI commented on GitHub (Nov 4, 2024):

> Yeah, we actually struggle to reproduce that. [#13556 (comment)](https://github.com/qbittorrent/qBittorrent/issues/13556#issuecomment-2367526735)

I was able to reproduce it (on Windows) by closing qBittorrent, creating a `lockfile` in `AppData\Local\qBittorrent`, relaunching qBittorrent and then closing it again.
It wasn't as bad as what I had earlier, but I did get ~900 torrents to require an unnecessary recheck.

My script managed to fix it again.

EDIT:
Launching qBittorrent, changing a configuration, and immediately closing and relaunching it had a similar effect, with fewer torrents (~40).
My guess would be that something interferes with writing the updated `.fastresume` files, and the files that don't get updated require another recheck. I also noticed that closing qBittorrent is much faster when the issue reproduces, further reinforcing my suspicion that file writing is skipped for some reason.
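For context on that suspicion: the standard defensive pattern against stale or torn files from an interrupted shutdown is to write to a temporary file and atomically rename it over the old one, so a reader only ever sees the old complete file or the new complete file. Whether libtorrent/qBittorrent already do this for resume data is not established in this thread; the sketch below only illustrates the general technique.

```python
# Write-to-temp-then-atomic-rename: prevents torn files when a process
# is killed mid-write. (An illustration of the general technique, not a
# description of what qBittorrent actually does internally.)
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    dir_name = os.path.dirname(path) or "."
    # The temp file must live in the same directory so os.replace stays
    # on one filesystem, which is what makes the rename atomic.
    fd, tmp = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes reach the disk
        os.replace(tmp, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```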


@HanabishiRecca commented on GitHub (Nov 4, 2024):

> Done (although it's a workaround and not exactly a fix).
> If this is implemented within qBittorrent, I would make sure to stop all activity and release all file handles before changing the files' contents.

Unfortunately, you didn't discover anything new here. Of course we could simply reset the state of torrents.
And we don't need to edit the files in such a dirty way. We can just change the state inside the client.

Although, as you already pointed out, this is not a fix, this is a workaround. I'll quote myself from https://github.com/qbittorrent/qBittorrent/issues/13556#issuecomment-2366894832:

> I think a "trust me" button would only mask the symptoms, and it still requires significant manual user intervention, which is not good. It would be better to fix the root problem instead.

I.e. the "trust me" button is a slippery slope. If this situation happened, it means something went wrong. And if something went wrong, the data could be corrupted.
Giving users access to such a button could lead to abuse with catastrophic consequences. E.g. "oh, my torrent is not at 100%, I guess I'll just hit that button".

**A proper fix should prevent this situation from happening.**

> My guess would be that something interferes with writing the updated `.fastresume` files, and the files that don't get updated require another recheck.

It is likely. Again, quoting myself from https://github.com/qbittorrent/qBittorrent/issues/13556#issuecomment-2366910223:

> I guess there is some race condition happening when the client wraps up and shuts down at the same time.

I highly recommend you read the whole conversation. There is a lot of insight from qBittorrent and libtorrent devs.


P.S. Make a regular backup of your `BT_backup` folder. Things could go wrong; files could be corrupted beyond recovery or deleted.

Switching `Resume data storage type` to `SQLite database` is also an option. It doesn't seem to help with this issue specifically, but it should be more resilient in theory.
You would have a single `torrents.db` (and a bunch of WAL files while the client is running) instead of `.fastresume` files.
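For anyone curious what is inside the SQLite store, a read-only connection can list its tables and row counts without assuming any particular schema (the thread doesn't document one). The `list_tables` helper is hypothetical, and the client should be fully closed before opening the file:

```python
# Schema-agnostic peek at a SQLite resume store such as torrents.db:
# enumerate tables and their row counts. (Table names and columns are
# deliberately not assumed here.)
import sqlite3

def list_tables(db_path: str) -> dict:
    # mode=ro opens the database read-only: it never creates the file
    # and never takes write locks on it.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")]
        return {t: conn.execute(f'SELECT COUNT(*) FROM "{t}"').fetchone()[0]
                for t in tables}
    finally:
        conn.close()
```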


@HanabishiRecca commented on GitHub (Nov 4, 2024):

> I was able to reproduce it (on Windows) by closing qBittorrent, creating a `lockfile` in `AppData\Local\qBittorrent`, relaunching qBittorrent and then closing it again.
> It wasn't as bad as what I had earlier, but I did get ~900 torrents to require an unnecessary recheck.

This experiment is actually interesting. I wonder if that happens when a user somehow manages to launch multiple copies of the client on the same profile.

Could you try switching `Resume data storage type` to `SQLite database` and check whether you are able to reproduce the problem? Make backups before experimenting, of course.


@glassez commented on GitHub (Nov 5, 2024):

> From an investigation I conducted, it seems like the decision is related to the contents of the `.fastresume` files in the BT_backup directory.

Could you reproduce the issue so that a previously completed torrent becomes "unchecked" and provide me with its `.fastresume` file (ideally two variants of it: one copied before the experiment and one after the torrent becomes "unchecked") for investigation?


@glassez commented on GitHub (Nov 5, 2024):

> Although, as you already pointed out, this is not a fix, this is a workaround. I'll quote myself from [#13556 (comment)](https://github.com/qbittorrent/qBittorrent/issues/13556#issuecomment-2366894832):
>
> > I think a "trust me" button would only mask the symptoms, and it still requires significant manual user intervention, which is not good. It would be better to fix the root problem instead.
> >
> > I.e. the "trust me" button is a slippery slope. If this situation happened, it means something went wrong. And if something went wrong, the data could be corrupted. Giving users access to such a button could lead to abuse with catastrophic consequences. E.g. "oh, my torrent is not at 100%, I guess I'll just hit that button".

👍


@glassez commented on GitHub (Nov 5, 2024):

> I was able to reproduce it (on Windows) by closing qBittorrent, creating a `lockfile` in `AppData\Local\qBittorrent`, relaunching qBittorrent and then closing it again.

What `lockfile` do you mean? Just a regular file named `lockfile`?


@glassez commented on GitHub (Nov 5, 2024):

> I created a small [python script](https://gist.github.com/TheYMI/070ccf9197307eb618e3279c86730a2a) (for Windows) that works as a workaround to this issue.

It looks doubtful... What do you think it's supposed to do?
IIRC, it can only mark all the pieces as "checked"; it cannot restore previous progress if the torrent was (partially) downloaded earlier.
So I'll repeat it again: the only correct solution is to find the reason for the loss of progress and fix it, rather than invent dubious workarounds.


@TheYMI commented on GitHub (Nov 5, 2024):

I managed to reproduce the issue (on Windows). Here are the steps:

  1. I copied the whole BT_backup directory while qBittorrent was running
  2. I closed qBittorrent and waited for it to completely shut down
  3. I created a regular file named `lockfile` at `AppData\Local\qBittorrent` with no content and no extension (similar to the one qBittorrent creates once it starts running) using `touch` from Git Bash
  4. I launched qBittorrent and immediately closed it
  5. I launched it again, and after startup I had ~750 torrents that required a recheck
  6. I found one of the torrents and used its `Info Hash v1` to identify the `.fastresume` file
  7. I copied the original file from the copy I created (`_orig`) and the same file from the active BT_backup directory (`_after`)
  8. I uploaded them both here (I added `.txt` to the names so GitHub would let me upload them, but I didn't open them or change their content)

Something that could be of interest:
I failed to reproduce the issue multiple times when I tried to do so while a torrent was downloading. Once it finished, I managed to reproduce it on my first try.

Files:
[f6e77c00ba3d8bc4dc1f8089333ba1da1a13e3c9_after.fastresume.txt](https://github.com/user-attachments/files/17628254/f6e77c00ba3d8bc4dc1f8089333ba1da1a13e3c9_after.fastresume.txt)
[f6e77c00ba3d8bc4dc1f8089333ba1da1a13e3c9_orig.fastresume.txt](https://github.com/user-attachments/files/17628255/f6e77c00ba3d8bc4dc1f8089333ba1da1a13e3c9_orig.fastresume.txt)


@glassez commented on GitHub (Nov 5, 2024):

> 4. I launched qBittorrent and immediately closed it

Could you still provide a log of this run?


@glassez commented on GitHub (Nov 5, 2024):

> 3. I created a regular file named `lockfile` at `AppData\Local\qBittorrent` with no content and no extension (similar to the one qBittorrent creates once it starts running)

Are you sure? IIRC, qBittorrent creates `lockfile` at `AppData\Roaming\qBittorrent` (at least on my system).


@TheYMI commented on GitHub (Nov 5, 2024):

> > 3. I created a regular file named `lockfile` at `AppData\Local\qBittorrent` with no content and no extension (similar to the one qBittorrent creates once it starts running)
>
> Are you sure? IIRC, qBittorrent creates `lockfile` at `AppData\Roaming\qBittorrent` (at least in my system).

Sorry, you are correct. I created it in the correct location (Roaming), but copy-pasted the wrong path into my reply.

> > 4. I launched qBittorrent and immediately closed it
>
> Could you still provide a log of this run?

I'll have to check if I can find it. I restarted the client multiple times and the log rotates each time, so I'll have to see if I can find the correct one.


@glassez commented on GitHub (Nov 5, 2024):

Well, I managed to reproduce it by slightly modifying the libtorrent code to add a delay during resume data checking. Working on it...


@glassez commented on GitHub (Nov 6, 2024):

> Well, I managed to reproduce it by slightly modifying the libtorrent code by adding a delay during resume data checking. Working on it...

#21784


@as-muncher commented on GitHub (Nov 7, 2024):

Oh my gosh, finally. I didn't have all the data that I could post for this issue, but this problem has been happening on my system, very annoyingly. Thank you @TheYMI and @glassez. You'd think that qBittorrent would not delete all the data about a torrent before it started its check, and there were a lot of unnecessary checks anyway. This finally looks promising.


@zent1n0 commented on GitHub (Dec 19, 2024):

I noticed a weird issue before, in the first half of this year.
When my torrents on a local drive got corrupted in specific situations, the torrent could still pass a force recheck, while its zip files couldn't be decompressed due to errors. Only removing the files and redownloading fixed the torrent. It made me doubt the reliability of the recheck function.
I'll open an issue when it reproduces.


@ligix commented on GitHub (Feb 3, 2025):

> > I think "trust me" button would only mask the symptoms. And still requires significant manual user intervention, which is not good. It would be better to fix the root problem instead.
>
> I.e. "trust me" button is a slippery slope. If this situation happened, it means something went wrong. And if something went wrong, the data could be corrupted.

I found this issue because, you'll never believe it, I have a similar gripe with qBittorrent's rechecking.

While what you said is true, there will always be people who click buttons just because the buttons can be clicked. Software that doesn't give the user the freedom to do what they want is extremely annoying (at least to me), especially when you know the software is being "wrong".
In my opinion, a better way to do it would be to bury something deep in the settings to enable an "advanced user mode" or something like that.

As an example, uBlock Origin does this and (imho) it works really well. It can be used both by people who don't know how it works, because it doesn't require them to change any setting, and by people who want to deeply customize its behavior, giving them tools powerful enough to break browsing altogether.

Of course, this wouldn't prevent someone who doesn't know what they're doing, yet thinks that they do, from breaking qBittorrent. But if there were enough warnings to inform the user that the "advanced" settings could cause irreparable damage, I don't see why the (relatively) few people who would ignore them couldn't simply be denied support.


@ProximaNova commented on GitHub (Mar 21, 2025):

qBittorrent should have an option to skip (re)checking -- meaning rehashing -- of torrents already in the transfer list, maybe like what @ligix said where you bury it in the advanced options (perhaps an advanced option that adds it to the context menu). That way you could get back to 100% complete without doing any hashing (especially if using ZFS, where you can be more confident that no bit flips or bitrot will go unnoticed). This was posted above as a workaround fix for Windows: https://gist.github.com/TheYMI/070ccf9197307eb618e3279c86730a2a (.py). Here's a workaround for GNU/Linux that uses Bash and vim; it's similar to the code in the Windows+Python fix. It works one torrent at a time by entering a v1 infohash, but could probably be modified to edit many files automatically:

  1. Only edit .fastresume files while qBittorrent is not running.
  2. Make a backup of the .fastresume file and open a working copy in vim: `read -p "infohash: " h; cd ~/.local/share/data/qBittorrent/BT_backup; cp --update=none $h.fastresume $h.fastresume.bak; cp --update=none $h.fastresume $h.fastresume.fix; vim $h.fastresume.fix; cp $h.fastresume.fix $h.fastresume; echo $h`
  3. In vim, run this substitution so that the .fastresume file indicates that you have all of the torrent pieces and they are all verified: `:%s/14:piece_priority\(\d\+\):\(\D\+\)6:pieces\d*:\D*12/6:pieces\1:\212/g`
  4. Save the file and exit: `:wq`
  5. Go to step 2.
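The vim substitution in step 3 above can also be run non-interactively. Here is a hedged Python sketch of that idea, in the spirit of the Python gist linked earlier: the regex is translated verbatim from the vim command (in vim, the replacement `\212` means backreference `\2` followed by the literal "12"), and `mark_fastresume_checked` is a made-up helper name, not an official qBittorrent tool. Back up your BT_backup directory before trying anything like this, and only run it while qBittorrent is closed.

```python
# Hedged sketch, not an official qBittorrent tool: a non-interactive Python
# version of the vim substitution above, applied to raw .fastresume bytes.
import re
from pathlib import Path

# vim: :%s/14:piece_priority\(\d\+\):\(\D\+\)6:pieces\d*:\D*12/6:pieces\1:\212/g
PATTERN = re.compile(rb"14:piece_priority(\d+):(\D+)6:pieces\d*:\D*12")
REPLACEMENT = rb"6:pieces\1:\g<2>12"  # \g<2> then literal "12", as in vim's \212

def mark_fastresume_checked(path: Path) -> None:
    """Rewrite one .fastresume file in place, keeping a .bak copy beside it."""
    data = path.read_bytes()
    path.with_name(path.name + ".bak").write_bytes(data)  # backup first
    path.write_bytes(PATTERN.sub(REPLACEMENT, data))
```

As with the vim and sed variants, this only makes sense if you are certain the data on disk is intact; it marks pieces as present without verifying anything.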

With that being "solved" then the next problem is that on startup qBittorrent does "Checking resume data" on all torrents all at once. This is a real problem for my crappy/slow/limited tech and leads to OOM crashes of qBittorrent. There should be an option to do "Checking resume data" one at a time (so only one torrent ever has a status of "Checking resume data" until it finishes and moves on to the next one). Said feature is also important for scalability and using various applications (or working better on more systems).


@TheYMI commented on GitHub (Mar 21, 2025):

You could probably run `sed` on the file instead of using vim (you're doing a regex replacement, after all).
Regarding your last comment: from what I've seen, even though multiple torrents' status is set to "checking", only one actually gets checked at a time. The rest are just queued to be checked sequentially.


@ProximaNova commented on GitHub (Mar 21, 2025):

@TheYMI Totally, could use sed -i or perl to do that regular expression substitution (vim can also run in non-interactive mode).

> Regarding your last comment, from what I've seen, even though multiple files' status is set to "checking", only one file gets actually checked at a time. The rest are just queued to be checked sequentially.

That's true, but it wasn't what I was referring to. There's the status of "Checking" which verifies sha1/sha2 hashes of the torrent pieces, and there's the status of "Checking resume data" which AFAIK verifies that all the file paths of a torrent point to actual files of the right name/size. "Checking resume data" happens first before the other, and it does check all the torrents simultaneously. This can lead to qBittorrent running out of memory and crashing.

"Checking resume data" simultaneously on everything is a problem if verifying more than 100,000 or more than 500,000 file paths (like millions) and not directly using an OS filesystem. For example, if the save paths are FUSE-mounted IPFS paths in a HDD. In that case (even if the IPFS data is raw blocks), it's significantly slower to verify all of the paths; and worse than that, the memory needed to do those concurrent tasks is like gigabyte(s) more than what I have. "Checking resume data" cannot be paused, and it assumes that paths can be verified quickly and without using much memory. This probably isn't true with other higher latency systems like tape drives. The assumption is maybe also wrong with compression-based filesystems like dwarFS or something ( https://github.com/mhx/dwarfs ); in that case it's possibly slower+memory-intensive due to having to decompress lots of data.

Overall, the "Checking resume data" system doesn't work well with terabytes of torrents pointing to million(s) of files in HDD(s) + crappy computer + uncommon setup (like FUSE-mounted IPFS paths). (In fact, months ago I made it so the sum of Interplanetary Filesystem paths in qBittorrent only point to a max of roughly 50,000 or 90,000 files as opposed to ~1,000,000+ before.)
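To illustrate the distinction described above, here is a minimal sketch of what a resume-data-style check does compared to full hashing: it only stats paths and compares sizes, never reading file contents. The function name and the `(path, size)` data shape are hypothetical for illustration, not libtorrent's actual API.

```python
# Minimal sketch of the path/size validation described above; real
# libtorrent resume-data checking is more involved than this.
import os

def resume_data_plausible(files):
    """files: iterable of (path, expected_size) pairs recorded in resume data.

    Returns True if every recorded file still exists with the expected size,
    i.e. fast-resume looks trustworthy; False means a full recheck is needed.
    """
    for path, expected_size in files:
        if not os.path.isfile(path) or os.path.getsize(path) != expected_size:
            return False
        # Note: contents are never read -- a same-size corrupted file still
        # passes, which is exactly why this is cheap compared to hashing.
    return True
```

On a FUSE mount even the `stat` calls here can be slow, which matches the scalability complaint above: the cost of this check is one metadata lookup per file, multiplied across every torrent at startup.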


@xavier2k6 commented on GitHub (May 25, 2025):

ANNOUNCEMENT!

For anybody coming across this "Feature Request" who would like to see a potential implementation in the future, here are some options available to you:

  1. Please select/click the 👍 and/or ❤ reactions on the original/opening post of this ticket.

  2. Please feel free (if you have the skill set) to create a "Pull Request" implementing what's being requested in this ticket.
    (new/existing contributors/developers are always welcome)


DO:

  • Provide constructive feedback.
  • Show how other projects implemented the same/similar feature.

DO NOT:

  • Add "Bump", "me too", "2nd/3rd", or "criticizing" comment(s).
    (These will be disregarded/hidden as "spam/abuse/off-topic" etc. as they don't provide anything constructive.)
Author
Owner

@github-account1111 commented on GitHub (Dec 4, 2025):

I use a laptop and store most downloads across a few large external hard drives. It's very annoying that if I ever launch qBT without the drives connected and mapped to the exact drive letters qBT expects them to be mapped to, I then have to wait for hours for the torrents to be rechecked.

This has pretty much made me avoid launching qBT unless I'm at home, at my desk, with all the drives connected. Which sounds so unnecessarily restrictive and fragile to me. Like I literally can't download a torrent unless all these conditions are met. Or I have to use a separate torrent client for when away from home or something. There has to be a better way. Unsure if I'm reading it correctly, but it seems this feature would solve this problem.

Implementation suggestion-wise, I would think a simple popup on launch in the event qBT thinks something needs to be rechecked with a Yes and No button should mostly suffice. A list of torrent paths qBT intends to recheck would also be very useful as it would make it easier for the user to determine if the rechecking is warranted. Finally, a checkmark next to each of those torrents to select which ones to proceed with for rechecking would be an even bigger QOL improvement.


@TheYMI commented on GitHub (Dec 5, 2025):

> Unsure if I'm reading it correctly, but it seems this feature would solve this problem

What you're describing is a completely different issue.
The original issue was that all content existed and was available, but some internal data about the recheck process was missing in an edge case where files were marked for recheck, but the application quit before the process finished properly.

In your case, files literally "disappear" or "move", because they're not in the path they were expected to be. It's not a bug, it's a feature.
If you just skip the check at this point, when qBittorrent would try to access the files (e.g. when uploading to a peer), it would try to access the path they were at, but they might not be there, causing an error.

Imagine it like this:
You manage a storage space that contains boxes. Each box has a label with an ID. You also have a list with the ID of each box, which shelf the box is on and what it contains.
Whenever a box is added, you check its content and add it to the list - ID, location, content.
Every morning you go into the room and do an inventory check to make sure each box is where it's supposed to be. If there's a new box, you check its content and add it to the list.

The original issue was that you get to work in the morning and go into the room to start the inventory check.
If you leave the room for any reason before you're done, you delete all the boxes you haven't checked from the list.
The next morning you find a bunch of "new" boxes, and you have to go and check them one by one and add them to the list.
The fix was to tell qBittorrent that if you left the room before you're done, you should assume anything you haven't checked is the same as the last time you checked it.

Your issue, on the other hand, is this:
You have your list and you do your regular check.
Some mornings you find that some boxes moved to a different shelf. Sometimes shelves are completely missing. Sometimes shelves and boxes appear from nowhere. An unknown entity just comes into your storage room at night and moves stuff around, takes some stuff out and puts stuff in without recording any of those changes.
If this is the case, you can never assume everything is fine. If a shelf is missing, you can't just assume it's here somewhere. Even if you find a box with the correct ID on a different shelf, you can't assume the box was moved but the content is the same.
In your case qBittorrent is being responsible, and updates the list to match the current state of the room every time it changes.


@github-account1111 commented on GitHub (Dec 5, 2025):

> qBittorrent is being responsible

I'd rather be responsible myself than have qBittorrent take on my responsibility, wasting a ton of my time and causing a lot of frustration along the way.

> If you just skip the check at this point, when qBittorrent would try to access the files (e.g. when uploading to a peer), it would try to access the path they were at, but they might not be there, causing an error.

I'm more than okay with that. I'd much rather deal with the error (which I assume really just involves force-rechecking the errored torrents) and save myself hours', potentially days' worth of rechecking. Honestly, who wouldn't? The "feature" doesn't seem very rational unless you have all the time in the world, which none of us do.

Anyway, I think #13556 is what I actually want. Apologies for misunderstanding the issue description.


@GuyFran commented on GitHub (Dec 20, 2025):

To this day, it seems we are not able to stop the torrents from being in the recheck queue.

There should seriously be a way to cancel that queue and put the torrents in either "Error" or "Stopped" status. This is insane when you have several thousand files on removable storage.


@as-muncher commented on GitHub (Dec 21, 2025):

> To this day, it seems we are not able to stop the torrents from being in the recheck queue.
>
> There should seriously be a way to cancel that queue and have the torrents put in either "Error" or "Stop" status. This is insane when you have several thousands of files attached to removable storage.

I also have removable storage. What BiglyBT does is periodically check whether the torrent is still in an errored state, so if the removable drive is attached, after a few minutes BiglyBT will recognize that the torrent can start again without having to check the whole torrent. As far as I know, qBittorrent doesn't have that ability yet, which means having to check multiple gigabytes of data needlessly, if I'm understanding your gripe correctly.
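The BiglyBT behaviour described here could be approximated by a periodic poll along these lines. This is a hypothetical sketch, not BiglyBT's or qBittorrent's actual code; the function name and the `(name, save_path)` shape are illustrative assumptions:

```python
import os

def torrents_ready_to_resume(errored):
    """Given (name, save_path) pairs for torrents in a "missing files"
    state, return those whose save path is reachable again. A client
    would call this every few minutes and resume the returned torrents
    from their saved progress instead of forcing a full recheck."""
    return [(name, path) for name, path in errored if os.path.exists(path)]
```

A scheduler running this on a timer would clear the error state only for torrents whose drive has actually come back, leaving the rest untouched.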


@glassez commented on GitHub (Dec 22, 2025):

> What BiglyBT does is it periodically checks to see if the torrent is still in an errored state, and so if the removeable drive is attached, after a few minutes, BiglyBT will recognize that the torrent can start again, without having to check the whole torrent, and I think qbittorrent doesn't have that ability yet, as far as I know,

Yes, qBittorrent doesn't periodically check whether such torrents can be started. But that DOESN'T mean you have "to check multiple gigabytes of data needlessly". And this has already been said many times. qBittorrent is supposed to simply mark such torrents as "Missing files" at startup, so if you do not manually recheck them, but simply restart qBittorrent as soon as the removable drive is available again, it will continue the torrents as if nothing had happened. Of course, unless you're using some previous version that has a bug.


@GuyFran commented on GitHub (Dec 22, 2025):

> > What BiglyBT does is it periodically checks to see if the torrent is still in an errored state, and so if the removeable drive is attached, after a few minutes, BiglyBT will recognize that the torrent can start again, without having to check the whole torrent, and I think qbittorrent doesn't have that ability yet, as far as I know,
>
> Yes, qBittorrent doesn't periodically check for such torrents to be started. But it DOESN'T mean that you have "to check multiple gigabytes of data needlessly". And this has already been said many times. It is supposed that qBittorrent simply marks such torrents as "Missing files" at startup, so if you do not manually recheck them, but simply restart qBittorrent as soon as the removable drive is available again, it will continue torrents as if nothing had happened. Of course, unless you're using some previous version that has a bug.

Exactly.

Allow us to just accept them as missing, and put them in a stopped status.

Then, if and when those torrents are *manually* restarted, that is the time the recheck should happen.

But also, right now, it is literally impossible to cancel the recheck queue. Just cancelling it is not possible, and I am not sure why.
Surely this logic is not tied to libtorrent constraints.


@glassez commented on GitHub (Dec 22, 2025):

> Exactly.
>
> Allow us to just accept them as missing, and put them in a stop status.

I can't understand what you're talking about. What exactly should it allow you to do?
What I described above is how qBittorrent behaves. You can accept it or not, but it's true.

Here is an example of a torrent located on a removable disk:

Image

Now I run qBittorrent when the disk is not attached:

Image

Then I closed qBittorrent, attached the disk, and ran qBittorrent again:

Image

@glassez commented on GitHub (Dec 22, 2025):

> But also, right now, it is literally impossible to cancel the recheck queue. Just that is not impossible. And I am not sure why.
> For sure this logic is not linked to libtorrent constraints.

This is also unclear. What do you mean by "cancel the recheck queue"?
In fact, you can stop a torrent that is performing a hash check, just like a torrent that is downloading data. Of course, you cannot cancel/skip a previously started hash check.

> For sure this logic is not linked to libtorrent constraints.

You must know libtorrent well enough to say that. Then could you explain to others what you mean exactly and how we could make qBittorrent work more correctly?


@GuyFran commented on GitHub (Dec 22, 2025):

![](https://i.ibb.co/Y48Lj88V/2025-12-22-21h16-59.png)

I just started it. My files are on removable drives that are not connected right now, intentionally.

3k+ torrents in the check queue.

  1. I have to wait 15-30 min for the check queue to fail/complete to be able to do anything

  2. I can't just cancel those whole 3k items from the check queue and just have them sit stopped (even though they are showing as such here).

Are you saying that this is mandatory?

Also, you are demonstrating with a sample of one. Maybe you can try this with thousands and thousands of files; then the issue may become very obvious.


@glassez commented on GitHub (Dec 23, 2025):

> I just started it. My files are on removable drives that are not connected, right now, intentionally.
>
> 3k+ torrents in the check queue.

Could you provide a screenshot of what status is displayed for these torrents in the torrent list (as I provided it above)?

> I have to wait 15->30 min for the check queue to fail/complete to be able to do anything

"Do anything" what? Has your qBittorrent been unresponsive all this time?

> Maybe you can try this with thousands and thousands of files, and that may become very obvious to experience the issue.

Maybe. But I can't rush to buy a removable disk and download thousands of torrents just for the sake of this experiment. I can only try to figure this out based on the feedback provided by users like you (who use qBittorrent in a similar way).


@falcon4fun commented on GitHub (Jan 1, 2026):

Same situation for me.
I have quite a lot of torrents. Ordinarily it's around 2-5k seeding; the torrent repo is around 7TB.
Every qBittorrent crash, or a reboot/close after launch without completing "Checking", leads to ~1-2h of checking with literally low I/O activity, around 100-300 KB/s (according to SystemInformer), at ~100% disk activity.

Image Image

I really don't know what qBittorrent is doing with 1 async I/O thread on a fully defragmented drive. Why? What's happening? It's like it tries to check files using random 4K blocks instead of reading sequentially.

Even a normal launch leads to "Checking resume data" and a checking status for 15-40 minutes. Why? Why does it again check via random I/O blocks? Why is disk activity again near 100%? 1 torrent at a time, 1 I/O thread at a time.

Currently I have to clean out my torrent pool from time to time, because checking 2k+ torrents can take an eternity.
On my side, I've tried playing with most of the settings and have not found any solution. For example, I tried both fastresume and the SQLite db. Same problem.

Something is messed up with hashing on HDDs.

So, I'm asking for something to be done about hashing: either at least fix fastresume and crashes, or give us an advanced option.
While writing this, qB is still checking:

Image

On the other hand, if I manually press "Recheck" after the initial "Checking resume data", I get a normal read speed of 100MB/s:

Image

@TheYMI commented on GitHub (Jan 2, 2026):

To the best of my knowledge, if a torrent was set to be checked and it didn't finish before qBittorrent is shut down, it will be checked again on the next startup.

Have you tried changing these settings?

Image Image

@glassez commented on GitHub (Jan 2, 2026):

> Same situation to me.

@falcon4fun

  1. Could you provide information about your qBittorrent setup (OS, app and libraries versions etc.)?
  2. Could you provide screenshots directly through GitHub, rather than through dubious external services that are blocked by some ISPs?

@glassez commented on GitHub (Jan 2, 2026):

> if a torrent was set to be checked and it didn't finish before qBittorrent is shut down, it will be checked again on the next startup.

This behaves exactly the same as downloading. The torrent will continue the unfinished job (downloading or hash checking) the next time qBittorrent starts, resuming from the previously saved position.


@falcon4fun commented on GitHub (Jan 2, 2026):

> To the best of my knowledge, if a torrent was set to be checked and it didn't finish before qBittorrent is shut down, it will be checked again on the next startup.
>
> Have you tried changing these settings?

Tried. It mostly leads to a worse or the same situation, because it's not an SSD or RAID setup (especially w/ hw BBU cache), and a single HDD prefers sequential I/O.
Moreover, if qB is opened for the first time (after a graceful/ungraceful shutdown, doesn't matter), closed gracefully, then opened again, "Checking resume data" completes in seconds. I suppose (99.9999% probability) because some files are still mapped to RAM.

> Same situation to me.
>
> @falcon4fun
>
> 1. Could you provide information about your qBittorrent setup (OS, app and libraries versions etc.)?
> 2. Could you provide screenshots directly through GitHub, rather than through dubious external services that are blocked by some ISPs?
  2. Sorry, I keep forgetting the world is not perfect and there are some politics and other issues around us. :D

Setup:

  • W10.
  • qB v5.2.0beta1 (Installed yesterday for tests. Previously was latest stable build)
    • Qt: 6.10.1
    • Libtorrent: 1.2.20.0
    • Boost: 1.86.0
    • OpenSSL: 3.6.0
    • zlib: 1.3.1

HW:

  • 13700k
  • 128GB DDR4
  • torrent repo disk: WD101EDBZ-11B1DA0
  • repo FS: ReFS. 4k block

qB config:

  • doesn't matter. Tried with my current settings, and with fully stock settings around 3 months ago on the same repo: same behavior.

My analysis:

  • Worth mentioning: Resource Monitor doesn't even show any file access activity, only physical disk activity near 100%.
  • ProcMon (from Sysinternals) shows qB doing a QueryOpen on every torrent file recursively:
    • Every folder and every file is checked.
    • Only then is the "Checking resume data" state for that torrent cleared.
  • QueryOpen speed is around 10-20k files per minute (it varies), i.e. around 150-300 files/sec (checked over 4-5 one-minute intervals from the log).
  • The current torrent repo has around ~485k files across 809 torrents.
    • 485'000 files / 33 minutes / 60 seconds = ~240-250 files/sec average QueryOpen speed while "Checking resume data". This average correlates with the measured QueryOpen speed.
    • So starting qB with around 2 million files will require more than 2h (2'000'000 / 250 / 60 = 133 min), with no ability to do anything on the torrent repo's physical disk, because it will stall until qB is killed or the check completes.
Image Image
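The arithmetic above can be reproduced with a tiny helper; the ~250 files/sec rate and the file counts are the measurements quoted in this comment, and the function name is just an illustration:

```python
def startup_check_minutes(file_count: int, files_per_sec: float = 250) -> float:
    """Estimate how long the startup "Checking resume data" phase takes
    when files are opened (QueryOpen) at the observed per-second rate."""
    return file_count / files_per_sec / 60

# ~485k files at ~250 files/sec: roughly the observed 32-33 minutes
print(round(startup_check_minutes(485_000)))    # 32
# ~2 million files: over two hours
print(round(startup_check_minutes(2_000_000)))  # 133
```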

ProcMon filters:

  • Process Name is qBittorrent.exe
  • Path contains F:\torrents (Repo path)

Test bench:

  • Limit Download/Upload to 100 KB/s inside qB
  • Clean all RAM working, modified, and standby sets via RamMap. Assume a fresh start after an OS reboot, without any files mapped to RAM.
    • This is required because a graceful shutdown and instant relaunch uses data still mapped in RAM, and "Checking resume data" then takes 1-5 seconds
  • AV disabled
  • No other disk activity to torrent storage disk
  • Most software that can access the torrent storage disk was terminated
  • Scope: 809 torrents

Case 1: Initial start after graceful shutdown (I/O and checking threads = 1)

  • RamMap Clean all Caches
  • Launch qB
  • Time consumed: 32 minutes

Case 2: Initial start after graceful shutdown (I/O and checking threads = 4)

  • RamMap Clean all Caches
  • Launch qB
  • Time consumed: 37 minutes

Case 3: Initial start after not graceful shutdown (I/O and checking threads = 1)

  • RamMap Clean all Caches
  • Launch qB
  • Time consumed: N/A (not checked)

Case 4: Initial start after not graceful shutdown (I/O and checking threads = 4)

  • RamMap Clean all Caches
  • Launch qB
  • Time consumed: N/A (not checked)

Case 5: Fully default settings (taken from the stock initial profile) after graceful shutdown (I/O and checking threads = 1)

  • RamMap Clean all Caches
  • Launch qB
  • Time consumed: 33 minutes

Case 6: Fully default settings (taken from the stock initial profile) after not graceful shutdown (I/O and checking threads = 1)

  • RamMap Clean all Caches
  • Launch qB
  • Kill qB after start
  • RamMap Clean all Caches
  • Launch qB
  • Time consumed: 33 minutes

To sum up:

I would suggest better logic handling.

  • As the OP mentioned: a checkbox with "Assume my data is correct"
  • Or some other logic: "if qB terminated gracefully, all data is ok, skip QueryOpen on all files"
    • and "if qB terminated poorly, then check only files of torrents still with $Torrent.Downloading eq $True, $Torrent.Completed eq $False and $FastResume.isSupposedToBeCorrupted"
  • Or don't check torrents with many files inside
  • Or any other possible way.
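As a rough sketch only: the skip logic proposed here might look like the following. Every name (`Torrent`, `graceful_shutdown`, `trust_data`) is a hypothetical illustration, not qBittorrent's actual internals:

```python
from dataclasses import dataclass

@dataclass
class Torrent:
    downloading: bool          # was actively downloading at shutdown
    completed: bool            # all pieces finished, fastresume saved
    resume_data_suspect: bool  # fastresume possibly stale/corrupted

def needs_existence_check(t: Torrent, graceful_shutdown: bool,
                          trust_data: bool = False) -> bool:
    """Decide whether to QueryOpen this torrent's files at startup."""
    if trust_data:           # the proposed "Assume my data is correct" checkbox
        return False
    if graceful_shutdown:    # clean exit: trust the saved fastresume data
        return False
    # unclean exit: only re-verify torrents that could have partial writes
    return (t.downloading and not t.completed) or t.resume_data_suspect
```

Under this policy a completed seed after a clean exit would skip the per-file QueryOpen pass entirely.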

The default fastresume save interval is 60 min. If a torrent was downloaded and fastresume was successfully written with the completed torrent state, why do we need to check those files again and again? We can still assume they are correct and untouched.

Besides, the SQLite fastresume option exists, and we can write fastresume data more often: every 1-5-15 minutes.

Moreover, if files have any errors, particular torrent will already be set to "Error: <error event>".

I still have free space to seed a larger amount of data, but I can't, because every reboot, or graceful qB termination followed by a longer break (when mapped files are flushed from RAM), or incorrect qB termination causes the disk to stall for a long period of time.


@xavier2k6 commented on GitHub (Jan 2, 2026):

@falcon4fun Can you also test with a libtorrent 2.0.11-based build? You mention ReFS; what version is in use here?


@falcon4fun commented on GitHub (Jan 2, 2026):

> @falcon4fun Can you also test with libtorrent 2.0.11 based build?, you mention (ReFS) what version is in-use here?

I've already started testing with a libtorrent 2.0 build with stock settings (including the stock async I/O parameter = 10). Same situation. Maybe a 2-3 minute difference (30 instead of 33). 😄
ReFS version: 3.4

Currently trying to move part of the torrent repo to an NTFS-based disk with a standard 4K cluster size to check. But I still don't think I'll see any difference.


@glassez commented on GitHub (Jan 3, 2026):

@falcon4fun
Well, thank you so much for your detailed report.
As far as I can understand, it's still not about checking the contents of files (i.e. hash checking), but just about checking the existence of files ("checking resume data" in qBittorrent/libtorrent terms). In fact, a big problem with similar "checking"-related reports is that users confuse what exactly they're talking about and also omit important details from their reports/screenshots that could help developers figure it out.
As for checking the existence of files, this is a long-standing story. Without going into too much detail: libtorrent always checks the existence of files when initializing a torrent. For some time now it has had (at my suggestion) the option to avoid this and blindly trust the "resume data". But I was never able to implement its use in qBittorrent at the time. The relevant PR (https://github.com/qbittorrent/qBittorrent/pull/16581) was stuck in conflicting opinions, and I didn't have time to deal with all this. I have now reopened it to try to sort out the opinions and find a compromise solution.
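What "checking resume data" boils down to, per the description above, can be sketched in a few lines of stdlib Python. This is a hypothetical illustration (the real qBittorrent/libtorrent code path is far more involved); the function and file names are made up:

```python
# Hypothetical sketch of "checking resume data": on startup, stat() every
# file of every torrent to confirm it exists (and has the size recorded in
# the resume data) before trusting that data. Skipping the walk and blindly
# trusting the resume data is the proposed fast path.
import os
import tempfile

def check_resume_data(files, trust_resume_data=False):
    """Return True if the torrent can start without a full recheck.
    `files` maps path -> expected size recorded in the resume data."""
    if trust_resume_data:               # the "skip checking" fast path
        return True
    for path, expected_size in files.items():
        try:
            if os.stat(path).st_size != expected_size:
                return False            # size mismatch -> recheck needed
        except FileNotFoundError:
            return False                # missing file -> recheck needed
    return True

# Demo with two temp files standing in for payload files.
results = []
with tempfile.TemporaryDirectory() as d:
    a = os.path.join(d, "a.bin")
    b = os.path.join(d, "b.bin")
    with open(a, "wb") as f: f.write(b"x" * 10)
    with open(b, "wb") as f: f.write(b"y" * 20)
    files = {a: 10, b: 20}
    results.append(check_resume_data(files))
    os.remove(b)
    results.append(check_resume_data(files))
    results.append(check_resume_data(files, trust_resume_data=True))
print(results)  # [True, False, True]
```

The third call shows the trade-off of blind trust: it reports "ready" even though a file is gone, which is why the feature has been contentious.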


@falcon4fun commented on GitHub (Jan 3, 2026):

@glassez Additionally, I will attach some more info here when I finish my tests.
Preliminary findings show some suspicion regarding "ReFS" and "large amount of small files", but the tests take too much time, as they include moving 1-4 TB from one disk to another and vice versa.

It's not the first time I've seen problems with ReFS. For example, at my current workplace our ReFS-based Veeam Repository likes to eat an enormous amount of memory for holding the ReFS metafile (easily draining 64-96 GB of RAM for a 120 TB repo).


@xavier2k6 commented on GitHub (Jan 3, 2026):

ReFS has been problematic.

  • #16560 ("Frequent PAGE_FAULT_IN_NONPAGED_AREA BSOD happens on Windows 11 + ReFS v3.7"): https://github.com/qbittorrent/qBittorrent/issues/16560


@falcon4fun commented on GitHub (Jan 3, 2026):

> ReFS has been problematic.

Yeah, I've seen that while trying to figure out any issue related to ReFS. But the mentioned issue has even more complicated configs: ReFS + Storage Spaces (at least 2 of the participants had this config; the others are unknown: while WS is mentioned, it can be anything from software RAIDs to HW RAIDs). I still hate any SDS (Software Defined Storage) setups 😄

I have not seen BSODs related to qB or ReFS on my current setup or on previous setups with ReFS v3.2-v3.4.
Unfortunately, 0x00000050 is a pretty common thing IRL.

ReFS was even worse before. The Veeam forum has a large horror story: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629.html


@falcon4fun commented on GitHub (Jan 4, 2026):

Meh:

So, to conclude my situation with the 30-40 minutes of "Checking resume data":
ReFS - goodbye. NTFS - welcome back again.
Thank you, copy-on-write and other ReFS metadata enhancements, but it's slow as sh** with a large amount of files. I had noticed it before with a simple defrag, when 2 mln files take 20-30 minutes just to analyze the disk. But I thought that was working as intended with such an amount of files.

According to ProcMon data, QueryOpen has very long durations on ReFS. Forgot to save some dumps, but it's way higher than on NTFS. Some of them took 0.05-0.005s, versus NTFS peaks around 0.000x with an ordinary 0.00000x.
Don't know why. It was quite a long journey, and many tests were performed.
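The QueryOpen observation above can be cross-checked without ProcMon using a crude metadata micro-benchmark. A sketch (hypothetical helper name; point it at a directory on each filesystem to compare):

```python
# Crude cross-check for the ProcMon QueryOpen observation: time metadata
# lookups (os.stat) over a tree of small files. Run against a directory on
# each filesystem (e.g. a ReFS volume vs an NTFS volume) and compare the
# per-call latency.
import os
import tempfile
import time

def stat_latency(root):
    """Return (file_count, mean seconds per os.stat call) under `root`."""
    paths = [os.path.join(dirpath, name)
             for dirpath, _, names in os.walk(root) for name in names]
    t0 = time.perf_counter()
    for p in paths:
        os.stat(p)
    elapsed = time.perf_counter() - t0
    return len(paths), elapsed / max(len(paths), 1)

# Demo on a throwaway tree; real use would be e.g. stat_latency(r"D:\repo").
with tempfile.TemporaryDirectory() as root:
    for i in range(200):
        open(os.path.join(root, f"f{i}.bin"), "wb").close()
    n, per_call = stat_latency(root)
    print(n, per_call)   # 200 files; per-call latency in seconds
```

Note this only measures cold/warm metadata lookups as the OS cache allows; for a fair ReFS-vs-NTFS comparison, run it after a reboot on each volume.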

NTFS vs ReFS

The same repo on the same physical disk

  • ReFS v3.4 4k cluster: 32-37 minutes
  • NTFS 4k cluster: 1m30s

Whaaaat a hell?!

Some logs after test:

Repo size without additional files: around 500k files

LibTorrent 2.0. Stock settings (10 I/O threads)
1m39s

LibTorrent 1.2. Stock settings (10 I/O threads)
1m40s

LibTorrent 1.2. Stock settings (1 I/O threads)
3m20s

LibTorrent 1.2. Stock settings (4 I/O threads)
2m12s

LibTorrent 1.2. Stock settings (4 I/O threads. 512 Checking RAM)
2m12s

LibTorrent 1.2. Stock settings (8 I/O threads)
1m45s

LibTorrent 1.2. Stock settings (32 I/O threads)
55s

LibTorrent 1.2. Stock settings (256 I/O threads)
2m20s

LibTorrent 2.0. Stock settings (10 I/O threads)
49s

LibTorrent 2.0. Stock settings (20 I/O threads, 4 hashing threads)
1m30s

LibTorrent 2.0. Stock settings (10 I/O threads, 4 hashing threads)
1m49s

LibTorrent 2.0. Stock settings (10 I/O threads, 4 hashing threads)
1m49s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads)
1m35s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, 8192 MB, Normal priority)
1m50s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, 8192 MB, Normal priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100)
1m59s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, 8192 MB, Normal priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=pread/pwrite)
2m10s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, 8192 MB, Normal priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=posix compliant)
1m20s
1m13s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 8192 MB, Normal priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=memory mapped files)
1m46s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 512 MB, BelowNormal priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=memory mapped files)
58s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 512 MB, Medium priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=memory mapped files)
50s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 512 MB, VeryLow priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=memory mapped files)
49s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 8192 MB, VeryLow priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=memory mapped files)
1m38s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 2048 MB, VeryLow priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=memory mapped files)
48s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 4096 MB, VeryLow priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=memory mapped files)
53s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 6144 MB, VeryLow priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=memory mapped files)
50s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 6144 MB, Normal priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=memory mapped files)
56s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 6144 MB, Normal priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=Default)
Some data moved back to ReFS Prod volume. ~350k files inside one folder.
11m35s
Additional 1.5 mln files.
7m
All files moved back to ReFS Prod volume. With additional 1.5 mln files.
13m 

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 6144 MB, Normal priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=Default)
Most of files on NTFS Test volume including additional 1.5mln files
4m0s

LibTorrent 2.0. Stock settings (10 I/O threads, 1 hashing threads, WS 6144 MB, Normal priority, Sqlite, Disk Queue size=65535, File Pool=5000, SocketBackLog=100, OutgoingPerSec=100, IOType=Default)
Most of files on NTFS Test volume
1m40s
All files on NTFS Test volume
1m9s

NTFS on Prod volume. Partial files (350k).
1m9s

NTFS on Prod volume. All files
1m22s

NTFS on Prod volume. All files. Production (settings from backup before any tests) + WS 6144 + Normal Priority
1m28s

NTFS on Prod volume. All files +1.5mln files. Production (settings from backup before any tests) + WS 6144 + BelowNormal Priority
3m10s

NTFS on Prod volume. All files +1.5mln files. Production (settings from backup before any tests) + WS 6144 + BelowNormal Priority + only 1 I/O thread
2m50s

To sum up

  • NTFS 1 I/O thread (with additional 1.5 mln files; total: 2 mln) vs ReFS (500k files) with any number of I/O threads: 2m50s vs 33-37m
  • NTFS 10 I/O threads (with additional 1.5 mln files) vs ReFS (500k files) with any number of I/O threads: 3m10s vs 33-37m
  • NTFS 10 I/O threads (total: 450k files) vs ReFS (500k files) with any number of I/O threads: 1m30s vs 33-37m
  • Posix Compliant sometimes produced faster "Checking resume data" than Memory Mapped Files, but I can't reproduce that now
  • It would be good if somebody could cross-verify performance on ReFS vs NTFS
  • For now I don't recommend using ReFS (at least v3.4, the latest for W10), at least for a torrent repo. Can't check on 3.14 as I still don't have plans to move from W10
  • Additionally found this case: https://github.com/qbittorrent/qBittorrent/issues/23704
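For anyone collating figures like the ones above, a tiny helper (hypothetical, not part of any tool) to normalize the "XmYs"-style durations into seconds for numeric comparison:

```python
# Normalize durations like "1m30s", "49s" or "13m" from the logs above
# into seconds, so the configurations can be compared numerically.
import re

def to_seconds(s):
    m = re.fullmatch(r"(?:(\d+)m)?(?:(\d+)s)?", s.strip())
    if not m or not any(m.groups()):
        raise ValueError(f"unparseable duration: {s!r}")
    minutes, seconds = (int(g) if g else 0 for g in m.groups())
    return minutes * 60 + seconds

print(to_seconds("1m30s"))  # 90
print(to_seconds("49s"))    # 49
print(to_seconds("13m"))    # 780
# e.g. ReFS at 32 min vs NTFS at 1m30s is roughly a 21x slowdown:
print(to_seconds("32m") / to_seconds("1m30s"))
```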