Mirror of https://github.com/qbittorrent/qBittorrent.git, synced 2026-03-02 22:57:32 -05:00
Skip torrent checking after relaunch #16317
Originally created by @TheYMI on GitHub (Nov 4, 2024).
Originally assigned to: @glassez on GitHub.
Suggestion
When launching qBittorrent, any torrents that weren't fully checked before it was closed require a recheck, which consumes a lot of time and resources. It should be possible to skip this process.
Use case
After installing v5.0.1, changing some configurations that were reset after the version change required a relaunch of qBittorrent.
Since I have over 4k torrents, not all of them were updated before I closed the client.
After launching, it started checking all the torrents that weren't updated. That's thousands of torrents - several TBs worth of data - some of which are on a NAS. This will take days, if not over a week, without me changing any of the files and the recheck being completely unnecessary.
During this time my network is completely clogged, my NAS is unusable and those torrents are not seeding.
THIS NEEDS TO BE FIXED NOW!!!
(a workaround is also okay)
Extra info/examples/attachments
Closing 21766 as a duplicate of something that's been ignored for over a decade isn't a solution. I will keep reopening this issue until someone actually takes it seriously instead of sweeping it under the rug.
@HanabishiRecca commented on GitHub (Nov 4, 2024):
Then you will simply be banned.
This is your problem. You are not in a position to demand anything.
qBittorrent is free and open source software developed by volunteers in their free time. You don't pay for it; you use a product of other people's free will. No one will rush to solve your problems and implement your wishes, especially if you ask for it disgracefully.
Maybe you want to do it yourself? PRs are welcome.
@TheYMI commented on GitHub (Nov 4, 2024):
Users on GitHub are free to make.
Nope. There have been complaints about this issue since 2012. I'm the last one, not the only one.
People have been asking nicely for YEARS. No solution or workaround has ever been offered. Closing my issue as a duplicate of a 12-year-old unresolved issue while shrugging it off is also disgraceful.
The issue is the result of a problematic behavior that's been known for years, and no one ever cared enough to solve. There's probably a list somewhere that could be cleared with a single button click, if you just know the piece of code that checks it.
Might even be a file that could be edited as a workaround, but even that was never offered.
So yes, I was very annoyed when I opened this issue, because my search for a solution before coming to GitHub gave me nothing other than years-worth of frustrated people complaining about this behavior, with developers disregarding them.
And while this is a free product, if it's causing me problems (e.g. high usage of my network and overworking my NAS) due to negligence when I used it as intended, then yes, I feel like I'm owed a response from the developers.
@HanabishiRecca commented on GitHub (Nov 4, 2024):
And repo owners are free to ban anyone from it.
You are the one demanding a solution right now.
But it is, objectively, a duplicate. Believe me, keeping around 1000 issues open for the same problem would not help fix it faster.
Exactly. Years of asking instead of proposing a fix. If you think it's so easy, why would you not just go and fix it?
But I doubt it, as you don't even understand that the problem's roots are not really in qBittorrent's code in the first place, but grow deep in libtorrent's behavior.
If you think about it for 1 second, you might realize that if there was an easy fix, it would have been fixed already. And there wouldn't be pages of discussion around it.
No, you don't. Read the license.
@glassez commented on GitHub (Nov 4, 2024):
What do you mean by "updated"?
About what issue exactly?
Some explanations (that I have already made repeatedly in other similar topics)
If you've started "rechecking" torrent, then you can't just cancel it to get back to the previous state. "Recheck" literally means "forget the current progress and start a torrent from scratch."
(Of course, you can stop the torrent being checked and then start it again, just like any other.)
Due to the peculiarities of libtorrent's behavior, qBittorrent used to really sin by unexpectedly starting a "recheck" in various situations, which caused a lot of inconvenience for users, and everyone agreed with that. I am someone who has been struggling with these issues for a long time (as reports appear of similar circumstances in which it behaves this way). And I believe that I have fixed them all. (At least, it is incorrect to refer to those old issues in relation to the current qBittorrent.)
Now qBittorrent does not start rechecking by itself under almost any circumstances (there is one, but it hardly relates to your problem, this is when moving the torrent to a new location where there are already matching files). Since then, no one has provided confirmed data on any other circumstances in which qBittorrent can spontaneously start "rechecking" torrents. So the only known way to start "rechecking" at the moment (apart from the one mentioned above) is if the user does it himself.
@TheYMI commented on GitHub (Nov 4, 2024):
I started the client. All torrents start as "Checking resume data".
I closed the client while most checks haven't finished.
I relaunched the client, and everything whose status was "Checking resume data" before I closed the client, now requires checking.
@HanabishiRecca commented on GitHub (Nov 4, 2024):
Yeah, we actually struggle to reproduce that. https://github.com/qbittorrent/qBittorrent/issues/13556#issuecomment-2367526735
@glassez commented on GitHub (Nov 4, 2024):
What do you mean by "requires checking"?
To avoid confusion with the terms used, it is better to accompany them with screenshots.
And it would be extremely useful to take a look at the latest logs.
@TheYMI commented on GitHub (Nov 4, 2024):
Things that are checking resume data (blue) when closed, require checking (red) after relaunch.
Seeding torrents (black) do not require recheck.
I did notice that not everything requires a recheck (red) if it was checking resume data (blue) when the client was closed.
This seems to be true for files that were recently rechecked (red) and no longer require a check (black).
From an investigation I conducted, it seems like the decision is related to the contents of the `.fastresume` files in the BT_backup directory.
@TheYMI commented on GitHub (Nov 4, 2024):
I created a small python script (for Windows) that works as a workaround to this issue.
@TheYMI commented on GitHub (Nov 4, 2024):
Done (although it's a workaround and not exactly a fix).
Was pretty easy once I looked at the code for 30 minutes to understand where the decision comes from. Then another 30-ish minutes to compare a `.fastresume` file before and after the check. About 10 minutes to write a few lines of code that simulate the change and check if it fixes the issue, and another hour to clean it up and write a decent-looking script that does the same to all files, as well as running a test to make sure I don't break anything.
Imagine the wonders I could do if I knew the codebase well enough to integrate it into the code and add a GUI element that invokes it.
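The script itself is only linked further down the thread, so the exact approach here is an assumption: treating a `.fastresume` file as a plain bencoded dictionary whose `pieces` entry stores one byte per piece, a minimal sketch of that kind of fix could look like the following. The `mark_all_pieces_have` helper and the all-`\x01` "piece present" encoding are illustrative guesses, not the actual script:

```python
# Minimal bencode codec plus a sketch of the workaround: mark every piece
# as already verified so qBittorrent doesn't schedule a full recheck.
# Assumption: a .fastresume file is a bencoded dict with a b"pieces" byte
# string (one byte per piece) and an optional b"piece_priority" entry.

def bdecode(data, i=0):
    """Decode one bencoded value starting at offset i; return (value, next_offset)."""
    c = data[i:i + 1]
    if c == b"i":                       # integer: i<digits>e
        j = data.index(b"e", i)
        return int(data[i + 1:j]), j + 1
    if c == b"l":                       # list: l<items>e
        out, i = [], i + 1
        while data[i:i + 1] != b"e":
            v, i = bdecode(data, i)
            out.append(v)
        return out, i + 1
    if c == b"d":                       # dict: d<key><value>...e
        out, i = {}, i + 1
        while data[i:i + 1] != b"e":
            k, i = bdecode(data, i)
            out[k], i = bdecode(data, i)
        return out, i + 1
    j = data.index(b":", i)             # byte string: <length>:<bytes>
    n = int(data[i:j])
    return data[j + 1:j + 1 + n], j + 1 + n

def bencode(v):
    """Encode ints, byte strings, lists and dicts back to bencoding."""
    if isinstance(v, int):
        return b"i%de" % v
    if isinstance(v, bytes):
        return b"%d:%s" % (len(v), v)
    if isinstance(v, list):
        return b"l" + b"".join(bencode(x) for x in v) + b"e"
    if isinstance(v, dict):
        return b"d" + b"".join(bencode(k) + bencode(x) for k, x in sorted(v.items())) + b"e"
    raise TypeError(f"unsupported type: {type(v)!r}")

def mark_all_pieces_have(raw):
    """Rewrite a fastresume blob so every piece counts as present/verified."""
    d, _ = bdecode(raw)
    d[b"pieces"] = b"\x01" * len(d.get(b"pieces", b""))
    d.pop(b"piece_priority", None)      # the vim workaround later in the thread drops this key too
    return bencode(d)
```

Any real run over `BT_backup` would need qBittorrent fully closed and a backup of the directory first, exactly as the comments below stress.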
@TheYMI commented on GitHub (Nov 4, 2024):
In all seriousness, I don't feel confident enough to convert it into C++ and add it properly. Feel free to take my code and use it to add this as a feature (the code is simple enough to understand, but I'd be happy to answer any questions).
I would add it as a button as suggested in the title of 13556, and then add a confirmation prompt that requires another approval with a disclaimer.
Additionally, I completely closed qBittorrent while running the script to avoid editing files that might be in use. If this is implemented within qBittorrent, I would make sure to stop all activity and release all file handles before changing the files' contents. Then I would rerun the startup process to reload all the torrents from scratch.
My script keeps a backup of the files it's changing, but the disclaimer in the confirmation window should probably advise the user to copy the whole BT_backup directory as a backup, while qBittorrent is closed, before proceeding.
@TheYMI commented on GitHub (Nov 4, 2024):
I was able to reproduce it (on Windows) by closing qBittorrent, creating a `lockfile` in `AppData\Local\qBittorrent`, relaunching qBittorrent and then closing it again.
It wasn't as bad as I had it earlier, but I did get ~900 torrents to require an unnecessary recheck.
My script managed to fix it again.
EDIT:
Launching qBittorrent, changing a configuration and immediately closing and launching it again had a similar effect, with fewer torrents (~40).
My guess would be that something interferes with the writing of the `.fastresume` files, and the files that don't get updated require another recheck. I also noticed that closing qBittorrent is way faster when the issue reproduces, further reinforcing my suspicion that file writing is skipped for some reason.
@HanabishiRecca commented on GitHub (Nov 4, 2024):
Unfortunately, you didn't discover anything new here. Of course we could simply reset the state of torrents.
And we don't need to edit the files in such a dirty way. We can just change the state inside the client.
Although, as you already pointed out, this is not a fix, this is a workaround. I'll quote myself from https://github.com/qbittorrent/qBittorrent/issues/13556#issuecomment-2366894832:
I.e. a "trust me" button is a slippery slope. If this situation happened, it means something went wrong. And if something went wrong, the data could be corrupted.
Giving users access to such button could lead to abuse with catastrophic consequences. E.g. "oh, my torrent is not 100%, I guess I just hit that button".
A proper fix should prevent this situation from happening.
It is likely. Again, quoting myself from https://github.com/qbittorrent/qBittorrent/issues/13556#issuecomment-2366910223:
I highly recommend you read the whole conversation. There is a lot of insight from qBittorrent and libtorrent devs.
P.S. Make a regular backup of your `BT_backup` folder. Things could go wrong; files could be corrupted to the point of no return, or deleted.
Switching `Resume data storage type` to `SQLite database` is also an option. It doesn't seem to help with this issue specifically, but should be more resilient in theory. You would have a single `torrents.db` (and a bunch of WAL files while the client is running) instead of `.fastresume` files.
@HanabishiRecca commented on GitHub (Nov 4, 2024):
This experiment is actually interesting. I wonder if that happens when a user somehow manages to launch multiple client copies on the same profile.
Could you try to switch `Resume data storage type` to `SQLite database` and check if you are able to reproduce the problem? Make backups before experimenting, of course.
@glassez commented on GitHub (Nov 5, 2024):
Could you reproduce the issue so that a previously completed torrent becomes "unchecked", and provide its .fastresume file (ideally two variants of it, one copied before the experiment and one after the torrent becomes "unchecked") for investigating?
@glassez commented on GitHub (Nov 5, 2024):
👍
@glassez commented on GitHub (Nov 5, 2024):
What `lockfile` do you mean? Just a regular file with the name `lockfile`?
@glassez commented on GitHub (Nov 5, 2024):
It looks doubtful... What do you think it's supposed to do?
IIRC, it can only mark all the pieces as "checked", but it cannot restore previous progress if the torrent was (partially) downloaded earlier.
So I'll repeat it again. The only correct solution would be to try to find the reason for the loss of progress and fix it, rather than invent dubious workarounds.
@TheYMI commented on GitHub (Nov 5, 2024):
I managed to reproduce the issue (on Windows). Here are the steps:
- Created a `lockfile` at `AppData\Local\qBittorrent` with no content and no extension (similar to the one qBittorrent creates once it starts running) using `touch` from Git Bash
- Used the `Info Hash v1` to identify the `.fastresume` file
- Compared a copy of the file (`_orig`) and the same file from the active BT_backup directory (`_after`)
- Added `.txt` to the name so GitHub will let me upload them (but I didn't open them or change their content)

Something that could be of interest:
I failed to reproduce the issue multiple times when I tried to do so while a torrent was downloading. Once it finished, I managed to reproduce on my first try.
Files:
f6e77c00ba3d8bc4dc1f8089333ba1da1a13e3c9_after.fastresume.txt
f6e77c00ba3d8bc4dc1f8089333ba1da1a13e3c9_orig.fastresume.txt
@glassez commented on GitHub (Nov 5, 2024):
Could you still provide a log of this run?
@glassez commented on GitHub (Nov 5, 2024):
Are you sure? IIRC, qBittorrent creates `lockfile` at `AppData\Roaming\qBittorrent` (at least on my system).
@TheYMI commented on GitHub (Nov 5, 2024):
Sorry, you are correct. I created it in the correct location (Roaming), but copy-pasted the wrong path into my reply.
I'll have to check if I can find it. I restarted the client multiple times and the log rotates each time, so I'll have to see if I can find the correct one.
@glassez commented on GitHub (Nov 5, 2024):
Well, I managed to reproduce it by slightly modifying the libtorrent code by adding a delay during resume data checking. Working on it...
@glassez commented on GitHub (Nov 6, 2024):
#21784
@as-muncher commented on GitHub (Nov 7, 2024):
Oh my gosh, finally. I didn't have all the data that I could post for this issue, but this problem has been happening on my system, very annoyingly. Thank you @TheYMI and @glassez. You'd think that qbittorrent would not delete all the data about the torrent before it started its check, and there were a lot of unnecessary checks anyways. This finally looks promising.
@zent1n0 commented on GitHub (Dec 19, 2024):
Noticed a weird issue before… in the first half of this year.
When my torrents on a local drive got corrupted in specific situations, the torrent could pass a force recheck while the zip files couldn't be decompressed due to errors. Only removing the file and redownloading fixed the torrent. It left me doubting the reliability of the recheck function.
I'll open an issue when the issue reproduces.
@ligix commented on GitHub (Feb 3, 2025):
I've found this issue because, you'll never believe it, I have a similar gripe with qbittorrent rechecking.
While what you said is true, there are going to be people who click buttons just because the buttons can be clicked; using software that doesn't give the user the freedom to do whatever they want to do is extremely annoying (at least to me), especially when you know the software is being "wrong".
In my opinion, a better way to do it would be to bury something deep in the settings to enable an "advanced user mode" or something like that.
As an example, uBlock Origin does this and (imho) it works really well, enabling usage both by people who don't know how it works, because it doesn't require them to change any settings, and by people who want to deeply customize its behavior, giving them tools powerful enough to break browsing altogether.
Of course this wouldn't prevent someone who doesn't know what they're doing, yet thinks that they do, from breaking qBittorrent, but if there were enough warnings to inform the user that the "advanced" settings could create irreparable damage, I don't see why the (relatively) few people who would ignore them could not simply be denied support.
@ProximaNova commented on GitHub (Mar 21, 2025):
qBittorrent should have an option to skip (re)checking -- meaning rehashing -- of torrents already in the transfer list, maybe like what @ligix said where you bury it in the advanced options (perhaps an advanced option to add that option to the context menu). That way you could get back to 100% complete without doing any hashing (especially if using ZFS where you can be more confident that no bit flips or bitrot will go unnoticed). This was posted above as a workaround fix for Windows: https://gist.github.com/TheYMI/070ccf9197307eb618e3279c86730a2a (.py). Here's a workaround solution for GNU/Linux which uses Bash and vim; it's similar to the code in the Windows+Python fix. It works one-by-one by entering a v1 infohash, but could probably be modified to edit many files automatically:
```shell
$ read -p "infohash: " h
$ cp --update=none ~/.local/share/data/qBittorrent/BT_backup/$h.fastresume ~/.local/share/data/qBittorrent/BT_backup/$h.fastresume.bak
$ cp --update=none ~/.local/share/data/qBittorrent/BT_backup/$h.fastresume ~/.local/share/data/qBittorrent/BT_backup/$h.fastresume.fix
$ vim ~/.local/share/data/qBittorrent/BT_backup/$h.fastresume.fix
$ cp ~/.local/share/data/qBittorrent/BT_backup/$h.fastresume.fix ~/.local/share/data/qBittorrent/BT_backup/$h.fastresume
$ echo $h
```

Inside vim, the substitution to run is:

```
:%s/14:piece_priority\(\d\+\):\(\D\+\)6:pieces\d*:\D*12/6:pieces\1:\212/g
:wq
```

With that being "solved", the next problem is that on startup qBittorrent does "Checking resume data" on all torrents all at once. This is a real problem for my crappy/slow/limited tech and leads to OOM crashes of qBittorrent. There should be an option to do "Checking resume data" one at a time (so only one torrent ever has a status of "Checking resume data" until it finishes and moves on to the next one). Said feature is also important for scalability and for using various applications (or working better on more systems).
@TheYMI commented on GitHub (Mar 21, 2025):
You could probably run `sed` on the file instead of using vim (you're using a regex replacement, after all).
Regarding your last comment: from what I've seen, even though multiple files' status is set to "checking", only one file actually gets checked at a time. The rest are just queued to be checked sequentially.
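As an aside, and purely as an illustration rather than code from the thread: a non-interactive equivalent of the vim substitution above can be sketched in Python by treating the file as raw bytes (`patch_fastresume_bytes` is a hypothetical name). It mirrors the same fragile byte-level regex rather than properly parsing the bencoding, so it inherits the same caveats:

```python
import re

# Byte-for-byte translation of the vim command
#   :%s/14:piece_priority\(\d\+\):\(\D\+\)6:pieces\d*:\D*12/6:pieces\1:\212/g
# It removes the piece_priority key and reuses its payload as the pieces bitfield.
PATTERN = re.compile(rb"14:piece_priority(\d+):(\D+)6:pieces\d*:\D*12")

def patch_fastresume_bytes(raw: bytes) -> bytes:
    """Apply the substitution over the whole .fastresume file contents."""
    return PATTERN.sub(rb"6:pieces\1:\g<2>12", raw)
```

The same pattern could be fed to `sed -E` or perl as suggested; either way, back up `BT_backup` and keep qBittorrent closed while editing.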
@ProximaNova commented on GitHub (Mar 21, 2025):
@TheYMI Totally, could use `sed -i` or perl to do that regular expression substitution (vim can also run in non-interactive mode).
That's true, but it wasn't what I was referring to. There's the status of "Checking", which verifies sha1/sha2 hashes of the torrent pieces, and there's the status of "Checking resume data", which AFAIK verifies that all the file paths of a torrent point to actual files of the right name/size. "Checking resume data" happens first, before the other, and it does check all the torrents simultaneously. This can lead to qBittorrent running out of memory and crashing.
"Checking resume data" simultaneously on everything is a problem if verifying more than 100,000 or more than 500,000 file paths (like millions) and not directly using an OS filesystem. For example, if the save paths are FUSE-mounted IPFS paths on an HDD. In that case (even if the IPFS data is raw blocks), it's significantly slower to verify all of the paths; and worse than that, the memory needed to do those concurrent tasks is like gigabyte(s) more than what I have. "Checking resume data" cannot be paused, and it assumes that paths can be verified quickly and without using much memory. This probably isn't true with higher-latency systems like tape drives. The assumption is maybe also wrong with compression-based filesystems like DwarFS ( https://github.com/mhx/dwarfs ); in that case it's possibly slower and more memory-intensive due to having to decompress lots of data.
Overall, the "Checking resume data" system doesn't work well with terabytes of torrents pointing to million(s) of files in HDD(s) + crappy computer + uncommon setup (like FUSE-mounted IPFS paths). (In fact, months ago I made it so the sum of Interplanetary Filesystem paths in qBittorrent only point to a max of roughly 50,000 or 90,000 files as opposed to ~1,000,000+ before.)
@xavier2k6 commented on GitHub (May 25, 2025):
ANNOUNCEMENT!
For anybody coming across this "Feature Request" who would like/love to see a potential implementation in the future!
Here are some options available to you:
Please select/click the 👍 and/or ❤ reactions in the original/opening post of this ticket.
Please feel free (if you have the skillset) to create a "Pull Request" implementing what's being requested in this ticket.
(new/existing contributors/developers are always welcome)
DO:
DO NOT:
(These will be disregarded/hidden as "spam/abuse/off-topic" etc. as they don't provide anything constructive.)
@github-account1111 commented on GitHub (Dec 4, 2025):
I use a laptop and store most downloads across a few large external hard drives. It's very annoying that if I ever launch qBT without the drives connected and mapped to the exact drive letters qBT expects them to be mapped to, I then have to wait for hours for the torrents to be rechecked.
This has pretty much made me avoid launching qBT unless I'm at home, at my desk, with all the drives connected. Which sounds so unnecessarily restrictive and fragile to me. Like I literally can't download a torrent unless all these conditions are met. Or I have to use a separate torrent client for when away from home or something. There has to be a better way. Unsure if I'm reading it correctly, but it seems this feature would solve this problem.
Implementation suggestion-wise, I would think a simple popup on launch in the event qBT thinks something needs to be rechecked with a Yes and No button should mostly suffice. A list of torrent paths qBT intends to recheck would also be very useful as it would make it easier for the user to determine if the rechecking is warranted. Finally, a checkmark next to each of those torrents to select which ones to proceed with for rechecking would be an even bigger QOL improvement.
@TheYMI commented on GitHub (Dec 5, 2025):
What you're describing is a completely different issue.
The original issue was that all content existed and was available, but some internal data about the recheck process was missing in an edge case where files were marked for recheck, but the application quit before the process finished properly.
In your case, files literally "disappear" or "move", because they're not in the path they were expected to be. It's not a bug, it's a feature.
If you just skip the check at this point, when qBittorrent would try to access the files (e.g. when uploading to a peer), it would try to access the path they were at, but they might not be there, causing an error.
Imagine it like this:
You manage a storage space that contains boxes. Each box has a label with an ID. You also have a list with the ID of each box, which shelf the box is on and what it contains.
Whenever a box is added, you check its content and add it to the list - ID, location, content.
Every morning you go into the room and make an inventory check to make sure each box is where it's supposed to be. If there's a new box, you check its content and add it to the list.
The original issue was that you get to work in the morning and go into the room to start the inventory check.
If you leave the room for any reason before you're done, you delete all the boxes you haven't checked from the list.
The next morning you find a bunch of "new" boxes, and you have to go and check them one by one and add them to the list.
The fix was to tell qBittorrent that if you left the room before you're done, you should assume anything you haven't checked is the same as the last time you checked it.
Your issue, on the other hand, is this:
You have your list and you do your regular check.
Some mornings you find that some boxes moved to a different shelf. Sometimes shelves are completely missing. Sometimes shelves and boxes appear from nowhere. An unknown entity just comes into your storage room at night and moves stuff around, takes some stuff out and puts stuff in without recording any of those changes.
If this is the case, you can never assume everything is fine. If a shelf is missing, you can't just assume it's here somewhere. Even if you find a box with the correct ID on a different shelf, you can't assume the box was moved but the content is the same.
In your case qBittorrent is being responsible, and updates the list to match the current state of the room every time it changes.
@github-account1111 commented on GitHub (Dec 5, 2025):
I'd rather myself be responsible than qBittorrent try and take on my responsibility wasting a ton of my time and causing a lot of frustration along the way.
I'm more than okay with that. I'd much rather have to deal with the error (which I assume really just involves force-rechecking the errored torrents) and save myself hours', potentially days' worth of rechecking. Honestly who wouldn't? The "feature" doesn't seem very rational unless you have all the time in the world which none of us do.
Anyway, I think #13556 is what I actually want. Apologies for misunderstanding the issue description.
@GuyFran commented on GitHub (Dec 20, 2025):
To this day, it seems we are not able to stop the torrents from being in the recheck queue.
There should seriously be a way to cancel that queue and have the torrents put in either "Error" or "Stop" status. This is insane when you have several thousand files attached to removable storage.
@as-muncher commented on GitHub (Dec 21, 2025):
I also have removable storage. What BiglyBT does is periodically check whether the torrent is still in an errored state; so if the removable drive is attached, after a few minutes BiglyBT will recognize that the torrent can start again, without having to check the whole torrent. I think qBittorrent doesn't have that ability yet, as far as I know, so it means having to check multiple gigabytes of data needlessly, if I'm understanding your gripe correctly.
@glassez commented on GitHub (Dec 22, 2025):
Yes, qBittorrent doesn't periodically check for such torrents to be started. But it DOESN'T mean that you have "to check multiple gigabytes of data needlessly". And this has already been said many times. It is supposed that qBittorrent simply marks such torrents as "Missing files" at startup, so if you do not manually recheck them, but simply restart qBittorrent as soon as the removable drive is available again, it will continue the torrents as if nothing had happened. Of course, unless you're using some previous version that has a bug.
@GuyFran commented on GitHub (Dec 22, 2025):
Exactly.
Allow us to just accept them as missing, and put them in a stop status.
Then, if and when those torrents are manually restarted, that is the time when the recheck should happen.
But also, right now, it is literally impossible to cancel the recheck queue. Just that is not possible. And I am not sure why.
For sure this logic is not linked to libtorrent constraints.
@glassez commented on GitHub (Dec 22, 2025):
I can't understand what you're talking about. What should it allow you?
What I described above is how qBittorrent behaves. You can accept it or not, but it's true.
Here is an example of a torrent located on a removable disk:
Now I run qBittorrent when the disk is not attached:
Then I closed qBittorrent, attached the disk, and ran qBittorrent again:
@glassez commented on GitHub (Dec 22, 2025):
This is also unclear. What do you mean by "cancel the recheck queue"?
In fact, you can stop a torrent that is performing hashes check, just like a torrent downloading data. Of course, you cannot cancel/skip a previously started hashes check.
You must know libtorrent well enough to say that. Then could you explain to others what you mean exactly and how we could make qBittorrent work more correctly?
@GuyFran commented on GitHub (Dec 22, 2025):
I just started it. My files are on removable drives that are not connected right now, intentionally.
3k+ torrent in the check queue.
I have to wait 15-30 min for the check queue to fail/complete to be able to do anything.
I can't just cancel those whole 3k items from the check queue and just have them sit in stop (even though they are showing as such here).
Are you saying that this is mandatory?
Also you are exhibiting with a sample of one. Maybe you can try this with thousands and thousands of files, and that may become very obvious to experience the issue.
@glassez commented on GitHub (Dec 23, 2025):
Could you provide a screenshot of what status is displayed for these torrents in the torrent list (as I provided it above)?
"Do anything" what? Has your qBittorrent been unresponsive all this time?
Maybe. But I can't rush to buy a removable disk and download thousands of torrents just for the sake of this experiment. I can only try to figure this out based on the feedback provided by users like you (who use qBittorrent in a similar way).
@falcon4fun commented on GitHub (Jan 1, 2026):
Same situation for me.
I have quite a lot of torrents. Ordinarily it's around 2-5k seeding. The torrent repo size is around 7TB.
Every qBittorrent crash, OR a reboot/close after launch without completing "Checking", leads to ~1-2h of checking with literally low I/O activity, around 100-300 KB/s (according to SystemInformer), with ~100% disk activity.
I really don't know what qBittorrent is doing with 1 async I/O thread. On a fully defragmented drive. Why? What's happening? It's like it tries to check files using 4K random blocks instead of doing it sequentially.
Even a normal launch causes "Checking resume data" and a checking status for 15-40 minutes. Why? Why, again, does it check via random I/O blocks? Why, again, is disk activity near 100%? 1 torrent at a time. 1 I/O thread at a time.
Currently I have to clean my torrent file pool from time to time, because checking 2k+ torrent files can take an eternity.
For my part, I've tried playing with most of the settings and have not found any solution. For example, I tried both fastresume and the sqlite db. Same problem.
Something is messed up with hashing on HDDs.
So, I'm asking to do something with hashing: either at least fix fastresume and the crashes, or give us an advanced option.
While writing this, qB is still checking:
On the other hand, if I manually press "Recheck" after the initial "Checking resume data", I get a normal read speed of 100MB/s:
@TheYMI commented on GitHub (Jan 2, 2026):
To the best of my knowledge, if a torrent was set to be checked and it didn't finish before qBittorrent is shut down, it will be checked again on the next startup.
Have you tried changing these settings?
@glassez commented on GitHub (Jan 2, 2026):
@falcon4fun
@glassez commented on GitHub (Jan 2, 2026):
This behaves exactly the same as downloading. The torrent will continue its unfinished job (downloading or hash checking) the next time qBittorrent starts, resuming from a previously saved position.
@falcon4fun commented on GitHub (Jan 2, 2026):
Tried. It mostly leads to a worse or the same situation, because it's not an SSD or a RAID setup (especially with a HW BBU cache), and a single HDD prefers sequential I/O.
Moreover, if qB is opened for the first time (after a graceful/ungraceful shutdown, doesn't matter) and closed gracefully, then on the next open "Checking resume data" will be completed in seconds. I suppose (99.9999% probability) because some files are mapped to RAM.
Setup:
HW:
qB config:
My analysis:
ProcMon filters:
Test bench:
Case 1: Initial start after graceful shutdown (I/O and checking threads = 1)
Case 2: Initial start after graceful shutdown (I/O and checking threads = 4)
Case 3: Initial start after not graceful shutdown (I/O and checking threads = 1)
Case 4: Initial start after not graceful shutdown (I/O and checking threads = 4)
Case 5: Fully default settings (took from stock initial profile) after graceful shutdown (I/O and checking threads = 1)
Case 6: Fully default settings (took from stock initial profile) after not graceful shutdown (I/O and checking threads = 1)
To sum up:
I would suggest better logic handling.
The default fastresume save interval is 60 minutes. If a torrent was downloaded and the fastresume data was successfully written with a completed torrent state, why do we need to check those files again and again? We can still assume they are correct and untouched.
Besides, the SQLite fastresume option exists and we can write fastresume data more often: every 1, 5, or 15 minutes.
Moreover, if the files have any errors, that particular torrent will already be set to "Error: <error event>".
I still have free space to seed a larger amount of data, but I can't, because every reboot, every longer qB shutdown (once cached files are flushed from RAM), and every incorrect qB termination stalls the disk for a long period of time.
@xavier2k6 commented on GitHub (Jan 2, 2026):
@falcon4fun Can you also test with a libtorrent 2.0.11-based build? You mention ReFS; what version is in use here?
@falcon4fun commented on GitHub (Jan 2, 2026):
I've already started testing libtorrent 2.0 with stock settings (including the stock async I/O parameter = 10). Same situation. Maybe a 2-3 minute difference (30 instead of 33). 😄
ReFS version: 3.4
I'm currently moving part of the torrent repo to an NTFS-based disk with the standard 4K cluster size to check. But I still don't think I'll see any difference.
@glassez commented on GitHub (Jan 3, 2026):
@falcon4fun
Well, thank you so much for your detailed report.
As far as I can understand, it's still not about checking the contents of files (i.e. hash checking), but just about checking the existence of files ("checking resume data" in qBittorrent/libtorrent terms). In fact, a big problem with similar "checking"-related reports is that users confuse what exactly they're talking about and also leave important details out of their reports/screenshots that could help developers figure it out.
As for checking the existence of files, this is a long-standing story. Without going into too much detail: libtorrent always checks the existence of files when initializing a torrent. For some time now it has had (at my suggestion) an option to avoid this and blindly trust the "resume data". But I was never able to implement its use in qBittorrent at the time. The relevant PR got stuck in conflicting opinions, and I didn't have time to deal with all of it. I have now reopened it to try to sort those opinions out and find a compromise solution.
@falcon4fun commented on GitHub (Jan 3, 2026):
@glassez Additionally, I will attach some more info here, when I finish my tests.
Preliminary findings raise some suspicion regarding ReFS with a large number of small files, but the tests take too much time, since they involve moving 1-4 TB from one disk to another and back.
It's not the first time I've seen problems with ReFS. For example, at my current workplace our ReFS-based Veeam repository likes to eat an enormous amount of memory holding the ReFS metafile (easily draining 64-96 GB of RAM for a 120 TB repo).
@xavier2k6 commented on GitHub (Jan 3, 2026):
ReFS has been problematic.
@falcon4fun commented on GitHub (Jan 3, 2026):
Yep, I saw that while trying to find any issue related to ReFS. But the mentioned issue has even more complicated configs: ReFS + Storage Spaces (at least 2 participants had this config; the others are unknown, and since WS is mentioned, it could be anything from software RAID to HW RAID). I still hate any SDS (Software Defined Storage) setups 😄
I have not seen BSODs related to qB or ReFS on my current setup and previous setups with ReFS v3.2-v3.4.
Unfortunately, 0x00000050 is a pretty common thing IRL.
ReFS was even worse before. The Veeam forum has a large horror story: https://forums.veeam.com/veeam-backup-replication-f2/refs-4k-horror-story-t40629.html
@falcon4fun commented on GitHub (Jan 4, 2026):
Meh:
So, to conclude my situation with the 30-40 minute "Checking resume data":
ReFS - good bye. NTFS - welcome back again.
Thank you, copy-on-write and the other ReFS metadata enhancements, but it's slow as sh** with a large number of files. I had noticed this before with a simple defrag, when 2 million files took 20-30 minutes just to analyze the disk, but I thought that was working as intended for that number of files.
According to the ProcMon data, QueryOpen operations have a very long duration on ReFS. I forgot to save some dumps, but it's way higher than on NTFS. Some of them took 0.005-0.05 s, whereas NTFS peaks at around 0.000x s with a typical 0.00000x s.
I don't know why. It was quite a long journey, and many tests were performed.
NTFS vs ReFS
The same repo on the same physical disk
Whaaaat the hell?!
Some logs after test:
To sum up