Intensive IO read when seeding #12057

Open
opened 2026-02-21 22:16:46 -05:00 by deekerman · 30 comments
Owner

Originally created by @maisun on GitHub (Apr 25, 2021).

Hi,
I switched from rTorrent to qBittorrent because rTorrent reads the HDD intensively during seeding - in my setup, rTorrent constantly read 50 MB/s while uploading only 2-3 MB/s.
Among many other good things, I found qBittorrent is better in that regard, but the issue still exists. Netdata shows that qBittorrent reads 5-15 MB/s while seeding at 1-2 MB/s. I'd like to get to the bottom of the problem, and hopefully something can be tuned in the settings.
For reference, I'm on qBittorrent v4.3.3.1 with the linuxserver/qbittorrent docker image. I have a 1 Gbps up/down fiber connection. Below are the settings I use:
Screenshot 2021-04-25 at 09 56 47
Screenshot 2021-04-25 at 09 57 06
Screenshot 2021-04-25 at 09 56 34

Any suggestion is appreciated!


@The5kull commented on GitHub (Apr 25, 2021):

My guess is it has something to do with both caches being set rather high.
Try with "Outstanding memory.." at 32 MiB and "Disk cache" at 512 MiB.


@maisun commented on GitHub (Apr 25, 2021):

> My guess it has something to do with both caches set rather high.
> Try with "Outstanding memory.." on 32MiB and "Disk cache" on 512MiB.

Thanks, tried that but same problem :-(


@The5kull commented on GitHub (Apr 25, 2021):

I forgot to ask, what kind of storage unit are we talking about? Your information is missing that.


@maisun commented on GitHub (Apr 26, 2021):

> I forgot to ask, what kind of storage unit are we talking about? Your information is missing that.

Sorry, not sure what you mean by storage unit?


@ArcticGems commented on GitHub (Apr 26, 2021):

> > I forgot to ask, what kind of storage unit are we talking about? Your information is missing that.
>
> Sorry not sure what do you mean by storage unit?

Whether it's internal/external, and mechanical SATA/SATA SSD/M.2 NVMe, etc.


@maisun commented on GitHub (Apr 26, 2021):

The docker container runs on a SATA SSD, which is the system volume of my NAS. Downloaded files are on an HDD (Toshiba MG08 16TB). The high read I/O is on the Toshiba HDD.


@The5kull commented on GitHub (Apr 26, 2021):

> The docker container runs on SATA SSD which is the system volume of my NAS. Downloaded files are on HDD Toshiba MG08 16TB. High Read io is on my Toshiba HDD.

The only reason for the intensive IO read would be fragmentation if you use such a big capacity drive. Also make sure you have checked "Pre-allocate disk space for all files" if you use a mechanical drive.


@maisun commented on GitHub (Apr 26, 2021):

> > The docker container runs on SATA SSD which is the system volume of my NAS. Downloaded files are on HDD Toshiba MG08 16TB. High Read io is on my Toshiba HDD.
>
> The only reason for the intensive IO read would be fragmentation if you use such a big capacity drive. Also make sure you have checked "Pre-allocate disk space for all files" if you use a mechanical drive.

Ok, I have enabled the setting, and all the files I'm seeding were copied from another disk, so I don't think it has anything to do with disk fragmentation. Anyway, the read I/O is constant; it's not like it reads for x minutes and then stops for y minutes.


@FranciscoPombal commented on GitHub (Apr 30, 2021):

@arvidn ideas?


@arvidn commented on GitHub (Apr 30, 2021):

is this using libtorrent 1.2.x?
If so, the [read_cache_line_size](http://libtorrent.org/reference-Settings.html#read_cache_line_size) setting could be lowered.

If this is using libtorrent 2.0.x, the read-ahead is a kernel setting.
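For anyone who wants to experiment with the kernel read-ahead that libtorrent 2.0.x relies on, it can be inspected and lowered per block device. A minimal sketch, assuming a Linux host where `sdX` is a placeholder for your actual data disk; whether a lower value helps will depend on your kernel and filesystem:

```shell
# Show the current read-ahead window for the data disk (value in KiB).
# Replace sdX with your actual device, e.g. sda.
cat /sys/block/sdX/queue/read_ahead_kb

# Temporarily lower it to 128 KiB (requires root; resets on reboot).
echo 128 | sudo tee /sys/block/sdX/queue/read_ahead_kb
```

To make the change persistent you would normally put it in a udev rule or a boot script rather than rely on the manual `tee`.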


@maisun commented on GitHub (Apr 30, 2021):

> is this using libtorrent 1.2.x?
> If so, the read_cache_line_size setting could be lowered.
>
> If this is using libtorrent 2.0.x, the read-ahead is a kernel setting.

I believe it’s Libtorrent V1.2.13.0.
Do you know how to change that setting in qbittorrent?


@stickz commented on GitHub (May 2, 2021):

> > is this using libtorrent 1.2.x?
> > If so, the read_cache_line_size setting could be lowered.
> > If this is using libtorrent 2.0.x, the read-ahead is a kernel setting.
>
> I believe it’s Libtorrent V1.2.13.0.
> Do you know how to change that setting in qbittorrent?

You can't change that without building a new custom docker image. rTorrent is actually better for reducing random disk reads than qBittorrent. You most likely didn't configure the `.rtorrent.rc` file properly.

These settings will greatly reduce the amount of random disk reads/writes on rTorrent.

```
network.send_buffer.size.set = 32M
network.receive_buffer.size.set = 32M
```

@maisun commented on GitHub (May 2, 2021):

> > > is this using libtorrent 1.2.x?
> > > If so, the read_cache_line_size setting could be lowered.
> > > If this is using libtorrent 2.0.x, the read-ahead is a kernel setting.
> >
> > I believe it’s Libtorrent V1.2.13.0.
> > Do you know how to change that setting in qbittorrent?
>
> You can't change that without building a new custom docker image. rTorrent is actually better for reducing random disk reads than qBittorrent. You most likely didn't configure the `.rtorrent.rc` file properly.
>
> These settings will greatly reduce the amount of random disk reads/writes on rTorrent.
>
> ```
> network.send_buffer.size.set = 32M
> network.receive_buffer.size.set = 32M
> ```

I actually tried the send/receive buffer settings with rTorrent but they didn’t help. With qBittorrent the read amplification is about 3-5x, but with rTorrent it was 10-20x.


@T2JOESl4m2ZpNC commented on GitHub (Mar 3, 2022):

> > > The docker container runs on SATA SSD which is the system volume of my NAS. Downloaded files are on HDD Toshiba MG08 16TB. High Read io is on my Toshiba HDD.
> >
> > The only reason for the intensive IO read would be fragmentation if you use such a big capacity drive. Also make sure you have checked "Pre-allocate disk space for all files" if you use a mechanical drive.
>
> Ok, I have enabled the settings, and all files that I’m seeding were copied from another disk, so I don’t think it has something to do with disk fragmentation. Anyways the read io is constant, it’s not like reading for x minute then stop for y minutes.

I also believe I'm having the same issue here (qBittorrent v4.4.1, Arch Linux). Maybe interestingly, this also happened when I moved a lot of files (500 GB) from a 1 TB 2.5" HDD (SATA) to a new 1 TB 2.5" HDD connected via a SATA-to-USB 3.0 external case.
I moved the files using qBittorrent's Set Location feature.
I now see 30-40 MB/s constantly being read while only seeding around 200 kB/s, and an overall decrease in upload speed.
This has also affected my download speed, as I now have to pause everything that is seeding to download at full speed.


@terrellgf commented on GitHub (May 9, 2022):

+1
I had the same problem in 4.4.x(4.4.0/4.4.1/4.4.2)

image


@DaMatrix commented on GitHub (Oct 20, 2022):

can confirm, this is definitely still an issue. i regularly see 10-30x read amplification when seeding.
image

this is on qBittorrent v4.4.3.1, with the data stored on an 8-drive btrfs raid1. all the files have been properly defragmented, so fragmentation is not the issue.

i checked the OS page cache using `vmtouch` on one of the larger torrent files i'm seeding, and got the following:
img

this was done 2 minutes after manually dropping the entire page cache (with `echo 1 | sudo tee /proc/sys/vm/drop_caches`). i watched the web UI the entire time: in that window, the torrent was actively seeding for only about 45 seconds, during which barely 30 megabytes were uploaded to one client. why on earth is it necessary to access 1 gigabyte of (unique!) data for such a small uploaded amount?
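The amplification factor people are quoting in this thread can be computed directly from `/proc/<pid>/io` (whose `read_bytes` field counts reads that actually hit the storage layer, not page-cache hits) versus the client's reported upload. A minimal sketch; the function name and interface are made up for illustration:

```python
def read_amplification(proc_io_text: str, uploaded_bytes: int) -> float:
    """Ratio of bytes read from storage to bytes uploaded.

    proc_io_text is the contents of /proc/<pid>/io for the client process.
    """
    for line in proc_io_text.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "read_bytes":
            return int(value) / uploaded_bytes
    raise ValueError("read_bytes not found in /proc/<pid>/io output")

# Example with numbers like the ones above: ~1 GB touched for ~30 MB uploaded.
sample = "rchar: 2000000000\nread_bytes: 1000000000\nwrite_bytes: 0"
print(round(read_amplification(sample, 30_000_000), 1))  # ~33.3x
```

A ratio near 1.0 would mean the client reads roughly what it uploads; the reports above correspond to ratios of 10-30.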


@nosirrahdrof commented on GitHub (Jan 23, 2023):

Anyone have any further information or discoveries from this problem? I seem to be suffering with this same issue.


@nosirrahdrof commented on GitHub (Feb 13, 2023):

I've actually had to download a bandwidth limiter to limit the bandwidth of the NIC I'm using. I was consistently seeing 50 MB/s and higher of read activity on my NAS while seeding at around 1 MB/s.

The network limiting software seems to work fine and seeding is proceeding as normal with no more read amplification. I'm not sure whether this is going to cause problems long term, but I don't understand why qBittorrent needed to read TBs per day while only uploading a tiny fraction of that.

If anyone has any discoveries as to why this is happening, I would love to hear about it.


@agneevX commented on GitHub (Apr 17, 2023):

This is unfortunately still an issue with the latest version of `linuxserver/qbittorrent`, as of this post.

image

~160GB of read for 1GB upload.


image image

@alexander-yakushev commented on GitHub (Jun 7, 2023):

Chiming in. I started observing 6-8x read amplification when seeding with linuxserver/qBittorrent 4.5.2 when I switched my hard drives to BTRFS. Previously, I used it with ext4, and the disk I/O strictly matched the upload speed. I can provide any extra data if requested.


@lukefor commented on GitHub (Jun 7, 2023):

I'm also seeing up to around 10x read amplification since moving from Transmission. I'm using XFS via NFS, and have seen the same behaviour with XFS directly. Kernel/NFS readahead settings are all much lower than the torrent piece size. Versions of qB/libtorrent are as shipped in Debian 12.


@flisk commented on GitHub (Jun 23, 2023):

i was seeing this on qbittorrent v4.5.2 after upgrading to debian bookworm, and it got a lot better after migrating my torrents disk from btrfs to xfs. read amplification is down from a factor of up to 15x to about 2-4x. it seems like btrfs doesn't play nice with high volumes of small random reads.


@shssoichiro commented on GitHub (Jun 11, 2024):

I'm still seeing this issue as well on 4.6.5 + libtorrent 2.0.10. I'm on Linux with btrfs + mdraid-5 on top of rotational disks, but that seems irrelevant to what I'm witnessing. Even though my upload speed in qBittorrent tops out at 1 MB/s and averages 500 KB/s, my server shows qBittorrent consistently reading 10-25 MB/s. And even though my client reports I've uploaded less than 7 GB this session, bottom reports that qBittorrent has read 217 GB from disk in the same session (I have not downloaded anything or done file verification). This is so severe that I'm concerned the disk reading is impacting my upload speed, and at times it has caused other disk IO on my system to stall for minutes. This did not happen when I was on rTorrent, either.


@unregd commented on GitHub (Dec 7, 2024):

5.0.2
win10pro NTFS

Qt: 6.7.3
Libtorrent: 1.2.19.0
Boost: 1.86.0
OpenSSL: 3.4.0
zlib: 1.3.1

I have the same performance issue: when the upload speed is 10 MB/s, HDD load is 98-100%, reading 35-40 MB/s in Task Manager.
The HDD has nothing else on it, just a few seeded torrents; nothing else uses it, just qBittorrent, and it's not fragmented at all.
I've tried setting the cache smaller and larger (up to 32 GiB), disabling/enabling the OS cache, changing send buffer settings up and down, coalesce reads and writes is on, file pool size 50-5000, etc. Nothing helped.


@theolaa commented on GitHub (Feb 12, 2025):

I'm also experiencing something similar. My files are all stored on my local NAS, and qBittorrent runs on a separate server. My seeding limit is ~3.5 Mbps (my internet is only 30/5), but my NAS reports up to 200 Mbps of sent data, which immediately drops to basically nothing when I close qBittorrent.


@Smok07 commented on GitHub (May 26, 2025):

Same here.
qBittorrent in an LXC container absolutely crashes the container with massive disk reads and IO.
It seems like a force recheck causes the same issue: continuous reads that eventually crash the container.

Image

The torrent file is stored on a RAM tmpfs

CPU usage 35.06% of 2 CPU(s)
Memory usage 3.16% (64.66 MiB of 2.00 GiB)
SWAP usage: 100% (512.00 MiB of 512.00 MiB)
Bootdisk size 10.38% (826.38 MiB of 7.78 GiB)

qBittorrent v5.1.0 WebUI
Linux qbittorrent 6.8.12-10-pve #1 SMP PREEMPT_DYNAMIC PMX 6.8.12-10 (2025-04-18T07:39Z) x86_64 GNU/Linux

Qt: 6.9.0
Libtorrent: 2.0.11.0
Boost: 1.88.0
OpenSSL: 3.5.0
zlib: 1.3.1.zlib-ng


@xavier2k6 commented on GitHub (May 26, 2025):

@Smok07 Please provide qBittorrent/libtorrent etc. info.


@Smok07 commented on GitHub (May 26, 2025):

> @Smok07 Please provide qBittorrent/libtorrent etc. info.

updated previous post


@Smok07 commented on GitHub (May 26, 2025):

looks like a swap issue (leaking?)

Image

ok. looks like qBittorrent requires at least 5 GB (4613.5 MB in this case) of RAM for caching and processing

Image

or not. all RAM is exhausted

Image

SSD: seeding one file
RAM tmpfs (mounted from host): checking one file
Image

Disk IO Reads 520-540 MB/S - this is my SSD but why!?
Image

max IO on SSD
Image


@HanabishiRecca commented on GitHub (May 26, 2025):

@Smok07

> Libtorrent: 2.0.11.0

Try `Settings` > `Advanced` > `Disk IO type` set to `Simple pread/pwrite`; don't forget to restart the client.

Or even better, use the official docker image; it ships with libtorrent 1.2.x.

> RAM tmpfs (mounted from host): checking one file
> Disk IO Reads 520-540 MB/S - this is my SSD but why!?

Tmpfs is also subject to swapping.
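One quick way to sanity-check the tmpfs/swap theory is to watch how much swap is actually in use while the recheck runs. A small sketch parsing `/proc/meminfo` contents; the helper name is made up for illustration:

```python
def swap_used_kib(meminfo_text: str) -> int:
    """Return swap in use (KiB), parsed from /proc/meminfo contents."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            fields[key.strip()] = int(rest.split()[0])  # values are in kB
    return fields["SwapTotal"] - fields["SwapFree"]

# With numbers like the ones reported above (512 MiB swap, fully used):
sample = "SwapTotal: 524288 kB\nSwapFree: 0 kB"
print(swap_used_kib(sample))  # 524288
```

If this number climbs toward SwapTotal during a recheck, tmpfs pages being swapped out would explain the SSD reads observed above.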
