mirror of
https://github.com/AdguardTeam/AdGuardHome.git
synced 2026-03-04 00:01:12 -05:00
Cache not working as expected #4161
Originally created by @agent-purple on GitHub (Jan 13, 2023).
Prerequisites
I have checked the Wiki and Discussions and found no answer
I have searched other issues and found no duplicates
I want to report a bug and not ask a question
Operating system type
Linux, Other (please mention the version in the description)
CPU architecture
AMD64
Installation
GitHub releases or script from README
Setup
On one machine
AdGuard Home version
v0.107.21
Description
What did you do?
I am running two instances of AGH (as Proxmox containers), one for my servers and another for my regular clients.
I usually keep a regular eye on all servers, but I have neglected this a bit in recent weeks.
Now I see that the cache hit rate in AGH is not as good as expected.
In the DNS request log overview of the AGH app (iOS in my case) you can see which requests are served from the cache.
The average response processing time was <10ms on both servers for a long time.
In recent weeks I installed every AGH update without checking.
With the latest version, the average response times on my AGH instances are ~30ms and ~50ms, which indicates that many requests that could be cached are instead forwarded to upstream servers.
This can also be seen in the request log: only very few requests are marked as cached.
I cannot remember the last version in which caching worked fine.
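One way to quantify this is to compute the cache hit rate and average response time directly from AdGuard Home's query log instead of eyeballing the app. A minimal sketch, assuming the JSON-lines querylog format with "Cached" (bool) and "Elapsed" (nanoseconds) fields; check the field names against your own querylog.json, as they may differ between versions:

```python
import json

def cache_stats(log_lines):
    """Compute (cache hit rate, average elapsed time in ms) from
    AdGuard Home query-log JSON lines.

    Assumes each line is a JSON object with a boolean "Cached" field
    and an "Elapsed" field in nanoseconds; adjust for your version.
    """
    hits = total = 0
    elapsed_ns = 0
    for line in log_lines:
        entry = json.loads(line)
        total += 1
        if entry.get("Cached"):
            hits += 1
        elapsed_ns += entry.get("Elapsed", 0)
    if total == 0:
        return 0.0, 0.0
    return hits / total, elapsed_ns / total / 1e6

# Synthetic example entries (field values are illustrative):
sample = [
    '{"QH": "example.org", "Cached": true,  "Elapsed": 500000}',
    '{"QH": "example.net", "Cached": false, "Elapsed": 45000000}',
]
hit_rate, avg_ms = cache_stats(sample)  # 0.5 hit rate, 22.75 ms average
```

Running this over a day's worth of log lines makes a regression between versions much easier to demonstrate than averaged dashboard numbers.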
My settings (screenshot not preserved in this mirror)
I can find these issues related to caching:
https://github.com/AdguardTeam/AdGuardHome/issues/4942
https://github.com/AdguardTeam/AdGuardHome/issues/5241
Expected result
Way better cache hit rate
Actual result
Poor cache hit rate
Screenshots (if applicable)
Additional information
@agent-purple commented on GitHub (Feb 9, 2023):
Any update on this issue?
The cache seems to work only sporadically.
@tuanalumi commented on GitHub (Jun 14, 2023):
Mine has an average of 50ms (it was 3ms before).
The strange thing is that requests have significantly different response times even though they are all served from cache.
Screenshots from the query log (screenshots not preserved in this mirror)
@JuanPabloPearce commented on GitHub (Feb 29, 2024):
I have a similar issue. Whichever DNS server I use, AdGuard Home (and, in my tests, a custom AdGuard DNS dashboard and the AdGuard client's DNS protection) seems to more than double the latency.
I get 7ms cached response times using OpenDNS without DNS filtering enabled in the AdGuard client. If I enable DNS protection on the client (with or without any rules active), it immediately jumps to 30+ms response times for cached results. This occurs with any DNS server I enter manually, and even with the system DNS.
AdGuard Home and the custom AdGuard DNS setup show exactly the same symptoms and broken numbers as the client version's DNS protection feature (with and without rules active).
It is as if it routes the original request through whatever filtering is active and then repeats it however many times it decides it needs to. It used to work just fine, and AdGuard was actually one of the fastest DNS services I could use in my area.
@ghost commented on GitHub (Mar 27, 2024):
Hi, is this still an issue?
@agent-purple commented on GitHub (Mar 27, 2024):
Yes and no.
#1. You can see that nowadays the majority of DNS records have a TTL of 60 to 300 seconds (1-5 minutes); TTLs have been shortened over time. This makes DNS caching rather ineffective, which is a pity because most DNS names/IPs are effectively static.
This seems to be one of the reasons why the cache is missed (the optimistic caching option is not active).
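Short TTLs can be worked around with AdGuard Home's cache settings. A sketch of the relevant AdGuardHome.yaml excerpt, assuming the cache keys found under the `dns` section in recent versions (`cache_optimistic`, `cache_ttl_min`, `cache_ttl_max`); the values shown are illustrative, not recommendations:

```yaml
# AdGuardHome.yaml (excerpt) -- illustrative values, not a full config
dns:
  cache_size: 4194304      # cache size in bytes
  cache_ttl_min: 300       # treat TTLs shorter than 5 min as 5 min
  cache_ttl_max: 86400     # cap TTLs at one day
  cache_optimistic: true   # serve expired entries immediately, refresh in background
```

Note that raising `cache_ttl_min` trades freshness for hit rate: records that legitimately change within the overridden window will be served stale.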
#2. I have seen very strange behavior related to upstream DNS requests, caching, and logging.
In the screenshot you can see a lot of requests failing (with SERVFAIL and NXDOMAIN) that are logged with odd response times. These requests were sent to different upstream DNS servers, and the log implies that none of them were available -> huh?
I can see the same behavior with internal DNS requests sent to [/domain.tld/]internal IP upstreams.
Whatever is happening here completely skews the average response time statistics of the AGH instance.
I have three AGH instances running (for regular clients, servers, and VPN clients), and the behavior is similar on all of them.
@andylau004 commented on GitHub (Apr 9, 2024):
That may be related to this:
when a client's DoH request sets the CD (Checking Disabled) flag to 1,
the DoH server disables its cache for that query.