DNS Monitor only checks first CAA record when multiple records exist #4545

Closed
opened 2026-02-28 04:06:50 -05:00 by deekerman · 2 comments
Owner

Originally created by @am17torres on GitHub (Dec 23, 2025).

📑 I have found these related issues/pull requests

This issue relates specifically to CAA records. A [search for CAA issues](https://github.com/louislam/uptime-kuma/issues?q=is%3Aissue%20state%3Aclosed%20CAA) yields few results.

The only PR I can see which may have addressed this issue, #3919, was closed.

🛡️ Security Policy

  • I have read and agree to Uptime Kuma's [Security Policy](https://github.com/louislam/uptime-kuma/security/policy).

📝 Description

When monitoring DNS records where multiple CAA records exist, the DNS monitor appears to only check the first record returned rather than all records.

👟 Reproduction steps

  1. Create multiple CAA records
  2. Configure DNS monitor with expected value matching one specific record
  3. Observe that test results are inconsistent across multiple checks

👀 Expected behavior

When multiple CAA records exist, the monitor should check all returned records against the expected value, not just the first one.

😓 Actual Behavior

  • Check passes intermittently depending on which record DNS returns first
  • Test fails when a different record is returned first
  • DNS servers can return records in any order, causing inconsistent results

🐻 Uptime-Kuma Version

2.0.2

💻 Operating System and Arch

macOS Tahoe 26.1 (25B78)

🌐 Browser

Version 143.0.7499.147 (Official Build) (arm64)

🖥️ Deployment Environment

Docker on macOS

📝 Relevant log output

Debugging

I have a root domain with 3 CAA records defined.

```javascript
[
  { critical: 0, issuewild: 'letsencrypt.org' },
  { critical: 0, issue: 'letsencrypt.org' },
  { critical: 0, issue: 'REDACTED' }
]
```

It appears the relevant source code can be found here, where it explicitly pulls the first record and assumes the key `issue`:

https://github.com/louislam/uptime-kuma/blob/d23ff8c4860b09cdd8c57ba9ab6ced5321dd3782/server/monitor-types/dns.js#L53-L54

This causes my health check to fail intermittently due to the non-deterministic response ordering.
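A minimal sketch of the failure mode, using the record shapes above (the redacted CA is replaced by a placeholder, and the check function is a stand-in for the current behavior, not Uptime Kuma's actual code): Node's `dns.resolveCaa()` gives no ordering guarantee, and an `issuewild` record has no `issue` key, so reading only `dnsRes[0].issue` is order-dependent.

```javascript
// Record shapes as returned by dns.resolveCaa(); the third CA is a placeholder.
const dnsRes = [
    { critical: 0, issuewild: "letsencrypt.org" },
    { critical: 0, issue: "letsencrypt.org" },
    { critical: 0, issue: "example-ca.invalid" },
];

// Stand-in mirroring the current behavior: only the first record is inspected.
function firstRecordCheck(records, expected) {
    const value = records[0].issue;
    return value.indexOf(expected) !== -1; // throws if records[0] has no `issue` key
}

// If an `issue` record happens to come back first, the check passes:
console.log(firstRecordCheck(dnsRes.slice(1), "letsencrypt.org")); // true

// If the `issuewild` record comes back first, `issue` is undefined and
// calling indexOf on it throws — the intermittent error reported above:
try {
    firstRecordCheck(dnsRes, "letsencrypt.org");
} catch (e) {
    console.log(e instanceof TypeError); // true
}
```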

I had hoped to use the "Record contains" condition option, but it throws an `indexOf` error.

<img width="941" height="388" alt="Image" src="https://github.com/user-attachments/assets/fb1af104-8137-4ed4-81bd-97fcc5fae8fe" />

I'm not sure how best to implement this in a backward compatible way.

I see that other record types use `some`.

```diff
# Could we do something like this?
- conditionsResult = handleConditions({ record: dnsRes[0].issue });
+ conditionsResult = dnsRes.some(record => handleConditions({ record: record.issue }));
```

An issue I see with that approach is the assumption of the key `issue`.

According to the [spec](https://datatracker.ietf.org/doc/html/rfc6844#section-7.2) there is `issue`, `issueWild`, and `iodef`.
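One possible shape for that, sketched below with a stub in place of Uptime Kuma's real `handleConditions`: iterate every record and every CAA property type, so the result no longer depends on resolver ordering. Note that Node's `dns.resolveCaa()` returns the lowercase key `issuewild`, not the spec's `issueWild`. This is only an illustration of the approach, not a proposed patch.

```javascript
// All CAA property keys as Node's dns.resolveCaa() spells them.
const CAA_KEYS = ["issue", "issuewild", "iodef"];

// Pass when ANY record's ANY defined CAA property satisfies the conditions.
function checkCaaRecords(dnsRes, handleConditions) {
    return dnsRes.some((record) =>
        CAA_KEYS.some(
            (key) =>
                record[key] !== undefined &&
                handleConditions({ record: record[key] })
        )
    );
}

// Example with a simple "record contains" condition as the stub:
const contains = (expected) => ({ record }) => record.includes(expected);
const records = [
    { critical: 0, issuewild: "letsencrypt.org" },
    { critical: 0, issue: "letsencrypt.org" },
];
console.log(checkCaaRecords(records, contains("letsencrypt.org"))); // true
console.log(checkCaaRecords(records, contains("other-ca.example"))); // false
```

Skipping undefined keys also avoids the `indexOf`-on-undefined error, since the condition evaluator only ever sees string values.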

Looking for suggestions on how to proceed!

Thanks!

deekerman closed this issue and added the bug label (2026-02-28 04:06:50 -05:00).

@CommanderStorm commented on GitHub (Dec 23, 2025):

The diff you showed would definitely be an improvement.

The more permanent fix is to implement subtypes for our conditions.
I.e. keep the `record` field globally, but offer `issue`, `issueWild`, and `iodef` only if `dns_resolve_type` is CAA.

In v3.0 we can replace `record` for this `dns_resolve_type` with something different.
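The subtype idea could look something like the sketch below: expose `record` for every resolve type, plus CAA-specific condition variables only when the resolve type is CAA. All names here are illustrative, not Uptime Kuma's actual condition API.

```javascript
// Hypothetical sketch: which condition variables a monitor would offer,
// keyed by DNS resolve type. `record` stays available everywhere for
// backward compatibility; CAA gains its spec-defined property types.
function conditionVariables(dnsResolveType) {
    const base = ["record"];
    if (dnsResolveType === "CAA") {
        return base.concat(["issue", "issueWild", "iodef"]);
    }
    return base;
}

console.log(conditionVariables("A"));   // ["record"]
console.log(conditionVariables("CAA")); // ["record", "issue", "issueWild", "iodef"]
```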


@am17torres commented on GitHub (Dec 23, 2025):

> The diff you showed would definitely be an improvement.

I've implemented that fix and submitted https://github.com/louislam/uptime-kuma/pull/6520.

Local testing shows this to be an effective resolution for my current issue.
