Mirror of https://github.com/qbittorrent/qBittorrent.git, synced 2026-03-02 22:57:32 -05:00
Cache DNS queries #7379
Labels
No labels
Originally created by @tomose on GitHub (Jun 3, 2018).
qbittorrent-nox on Linux
Seems like the client doesn't do any sort of caching of DNS queries at all.
Even if the caching was only for 30s at a time, it'd make for less noisy network activity.
@AGlezB commented on GitHub (Mar 18, 2020):
Also on Windows please.
Just look at NextDNS analytics for the past 7 days while seeding some 140 small torrents.
[screenshot: NextDNS analytics, last 7 days]
This is for the last 30 days.
[screenshot: NextDNS analytics, last 30 days]
And that is just the top 6.
I'm guessing qBittorrent alone ate twice over the 300K queries NextDNS intends to give for free.
@FranciscoPombal commented on GitHub (Mar 18, 2020):
@arvidn what is libtorrent's behaviour regarding caching of DNS queries?
@arvidn commented on GitHub (Mar 18, 2020):
by default it caches DNS lookups for 1200 seconds.
https://github.com/arvidn/libtorrent/blob/RC_1_2/src/resolver.cpp#L118
I would expect operating systems would cache for at least the domain TTL as well though.
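A minimal sketch of what such a TTL-based host-name cache looks like. The names and structure here are illustrative, not libtorrent's actual resolver API; the 1200-second default is the only detail taken from the comment above.

```cpp
#include <chrono>
#include <map>
#include <string>
#include <vector>

using clock_type = std::chrono::steady_clock;

struct cache_entry
{
    std::vector<std::string> addresses; // resolved IPs
    clock_type::time_point last_seen;   // when the lookup completed
};

// Hypothetical TTL cache in the spirit of libtorrent's resolver,
// which caches successful lookups for 1200 s by default.
class dns_cache
{
public:
    explicit dns_cache(std::chrono::seconds timeout) : m_timeout(timeout) {}

    // store a completed lookup
    void insert(std::string host, std::vector<std::string> addrs
        , clock_type::time_point now)
    {
        m_cache[std::move(host)] = cache_entry{std::move(addrs), now};
    }

    // return cached addresses if the entry is still fresh, else nullptr
    std::vector<std::string> const* lookup(std::string const& host
        , clock_type::time_point now) const
    {
        auto const it = m_cache.find(host);
        if (it == m_cache.end()) return nullptr;
        if (now - it->second.last_seen > m_timeout) return nullptr; // stale
        return &it->second.addresses;
    }

private:
    std::chrono::seconds m_timeout;
    std::map<std::string, cache_entry> m_cache;
};
```

With a 1200 s timeout, a lookup repeated within 20 minutes is answered from memory; after that the entry is treated as stale and a fresh query goes out.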
@arvidn commented on GitHub (Mar 18, 2020):
@AGlezB are those counting the actual DNS request messages sent to the server? Or are they counting calls to the OS function to request a lookup?
In either case, what are the TTLs for those names? Perhaps they are configured to be re-requested frequently.
@AGlezB commented on GitHub (Mar 18, 2020):
@arvidn I honestly don't know.
I have the NextDNS Official App installed and the screenshots come from the Analytics tab in my.nextdns.io
@AGlezB commented on GitHub (Mar 18, 2020):
Most of those are not resolving properly, which makes a lot of sense.
@arvidn does libtorrent have any logic to prevent repeating failed queries, or maybe a delay between repeats?
@arvidn commented on GitHub (Mar 18, 2020):
I recently added this
@tz1 commented on GitHub (May 9, 2020):
I too use NextDNS and saw a similar problem, but it is more subtle.
CACHING WILL NOT FIX THIS
NextDNS does NOT distinguish failed DNS requests from actually resolved ones.
The problem is the excess queries are a few seconds apart and happen when the lookup FAILS.
It seems to be instantly retried, often dozens of times without any backoff. There are several in my list, and when I manually check with nslookup, I don't get the "blocked", I get "doesn't exist", and from many DNS servers.
So if there is a failure looking up a tracker by DNS, it should mark it as not working and/or try maybe after a few seconds, then after a longer period (moving cables, router reboot), then timeout for another hour or so.
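The retry schedule described above can be sketched as a simple exponential backoff. The constants (5 s initial delay, one-hour cap) are hypothetical illustrations of the suggestion, not anything qBittorrent or libtorrent currently implements.

```cpp
#include <chrono>

using std::chrono::seconds;

// Hypothetical retry schedule for failed tracker lookups: retry after a
// few seconds, then double the delay on each consecutive failure, capped
// at roughly an hour.
seconds next_retry_delay(int const failures)
{
    seconds const initial{5};    // first retry: a few seconds
    seconds const ceiling{3600}; // cap at about an hour
    seconds delay = initial;
    for (int i = 1; i < failures; ++i)
    {
        delay *= 2; // double on each consecutive failure
        if (delay >= ceiling) return ceiling;
    }
    return delay;
}
```

The first failure retries after 5 s, the second after 10 s, and so on, so a permanently dead tracker settles into one query per hour instead of dozens per minute.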
It also appears if there are several active torrents with the same unresolvable tracker, they will ALL try the DNS entry, so caching would help this as well since I often see multiple queries for the SAME tracker less than a second apart, and the TTLs can't be that small on the entry.
I was looking in the code, but didn't find where it is looking up the tracker DNS entry, and there is a lot of code.
@arvidn commented on GitHub (May 9, 2020):
The host name lookups happen via a class, libtorrent/resolver.hpp, which does some caching. The underlying primitive for performing lookups goes through boost.asio's resolver though, which has platform-specific calls to perform lookups.
Is it really the case that operating systems don't cache DNS requests? It would seem quite natural for them to do so; they even have the TTL.
@AGlezB commented on GitHub (May 9, 2020):
Windows has a DNS Cache service but maybe libtorrent is bypassing it.
@tz1 commented on GitHub (May 9, 2020):
Operating systems generally do NOT cache requests unless the DNS "server" is running on the OS. Normally they simply forward requests upstream to the server specified in the DHCP entry for DNS, typically the router. The router itself may or may not cache, and a problem is that most routers that forward to NextDNS (the program) do NOT cache replies, and the NextDNS app doesn't cache.
The other problem is nothing I know of caches the "doesn't exist" response, only the resolved IP address. So the large blob of tries for unresolvable domains will always result in attempts.
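Negative caching, i.e. remembering for a short period that a name failed to resolve, could look roughly like this. It is a sketch of the missing behaviour, not code from libtorrent or any of the other projects mentioned.

```cpp
#include <chrono>
#include <map>
#include <string>

using clock_type = std::chrono::steady_clock;

// Hypothetical negative cache: remember that a name recently returned
// "doesn't exist" (NXDOMAIN) so repeated lookups are suppressed until
// the negative TTL expires.
class negative_cache
{
public:
    explicit negative_cache(std::chrono::seconds ttl) : m_ttl(ttl) {}

    void record_failure(std::string host, clock_type::time_point now)
    {
        m_failed[std::move(host)] = now;
    }

    // true if we should skip this lookup because it failed recently
    bool recently_failed(std::string const& host
        , clock_type::time_point now) const
    {
        auto const it = m_failed.find(host);
        return it != m_failed.end() && now - it->second <= m_ttl;
    }

private:
    std::chrono::seconds m_ttl;
    std::map<std::string, clock_type::time_point> m_failed;
};
```

Even a short negative TTL (say, a minute) would collapse the "large blob of tries" for a dead tracker into one query per interval.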
Maybe I need to look into libtorrent since it isn't in the tree.
@tz1 commented on GitHub (May 9, 2020):
To clarify, "gethostbyname()" doesn't cache. A local DNS server that forwards requests generally does when it gets a valid response (and for the TTL duration).
@farid-fari commented on GitHub (May 11, 2020):
@tz1 I don't think the issue about failing requests is the main issue at hand, it's probably a separate issue you can open. In my case, every tracker is resolved correctly, yet qBittorrent makes a DNS request per torrent. It makes them all nearly simultaneously, meaning that it's not waiting to cache results.
So I do think that caching would solve this, you just need to be clever about the way you do it -- maybe ping a tracker once and only once you've cached the DNS result make further requests to it.
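One way to be clever about it, as suggested above, is to coalesce in-flight lookups so that N torrents sharing a tracker trigger one DNS query instead of N near-simultaneous ones. This is an illustrative sketch, not qBittorrent's or libtorrent's actual code; the class and callback names are made up.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical coalescer: when several torrents ask for the same tracker
// host at once, only the first caller issues a real DNS query; the rest
// just queue their callbacks until the result arrives.
class lookup_coalescer
{
public:
    using callback = std::function<void(std::string const& addr)>;

    // returns true if the caller must issue the real DNS query
    bool request(std::string const& host, callback cb)
    {
        auto& waiters = m_pending[host];
        waiters.push_back(std::move(cb));
        return waiters.size() == 1; // first request owns the query
    }

    // deliver the result to everyone who asked while it was in flight
    void complete(std::string const& host, std::string const& addr)
    {
        auto const it = m_pending.find(host);
        if (it == m_pending.end()) return;
        for (auto& cb : it->second) cb(addr);
        m_pending.erase(it);
    }

private:
    std::map<std::string, std::vector<callback>> m_pending;
};
```

Combined with a TTL cache, this removes both the simultaneous duplicate queries and the repeated ones.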
This is a necessary feature though; every major piece of software has to deal with this once it's making this many DNS requests. It's incredibly wasteful as it is right now.
Edit: my temporary fix is resolving these and putting them in /etc/hosts, but clearly not a long-term solution.
@crackwitz commented on GitHub (Nov 4, 2020):
I'm running v4.3.0.1 on Windows and I notice some CPU load from the "Dnscache" service of Windows (svchost) that goes away as soon as I close qB. It is not changed by enabling or disabling "Advanced > Resolve peer host names", so my suspicion is that it's related to trackers. I have ~200 "resumed" torrents, some of which surely have trackers that aren't there anymore.
I'd really like to get this CPU load taken care of. This wasn't an issue on qB v3.3 and whatever libtorrent version was used back then.
@nasbdh9 commented on GitHub (Apr 7, 2023):
DNS returns an unavailable CDN IP, so it takes 20 minutes to refresh again?
What needs to be set to disable these caches?
@arvidn commented on GitHub (Apr 8, 2023):
I'm experimenting with this: https://github.com/arvidn/libtorrent/pull/7373
@luzpaz commented on GitHub (Mar 15, 2025):
See also #22439
@xavier2k6 commented on GitHub (May 23, 2025):
ANNOUNCEMENT!
For anybody coming across this "Feature Request" who would like/love to see a potential implementation in the future!
Here are some options available to you:
Please select/click the 👍 &/or ❤ reactions in the original/opening post of this ticket.
Please feel free (if you have the "skillset") to create a "Pull Request" implementing what's being requested in this ticket.
(New/existing contributors/developers are always welcome.)
DO:
DO NOT:
(These will be disregarded/hidden as "spam/abuse/off-topic" etc. as they don't provide anything constructive.)