mirror of
https://github.com/louislam/uptime-kuma.git
synced 2026-03-02 22:57:00 -05:00
Allow to send notifications when UK database is down #4534
Originally created by @jcjveraa on GitHub (Dec 17, 2025).
📑 I have found these related issues/pull requests
Somewhat related as a topic to #6046
🛡️ Security Policy
📝 Description
Tested with an external MariaDB; the same issue may occur with other databases.
When Uptime Kuma's own database is not available, UK stops functioning completely. It does not send a notification to inform users that UK itself is having issues.
👟 Reproduction steps
👀 Expected behavior
UK will send a notification to all notification channels that UK itself is having issues/is down.
😓 Actual Behavior
No notifications were sent.
🐻 Uptime-Kuma Version
2.0.2
💻 Operating System and Arch
Debian Trixie (13.2), x86_64
Kernel Debian 6.12.57-1 (2025-11-05) x86_64
🌐 Browser
not relevant
🖥️ Deployment Environment
- 26.1.5+dfsg1, build a72d7cd (default Debian Trixie package)
- 2.26.1-4 (default Debian Trixie package)
- 11.8.3-MariaDB (default Debian Trixie package)
- 4
📝 Relevant log output
(Server IP redacted to 192.168.x.x.)
@jcjveraa commented on GitHub (Dec 17, 2025):
Meant to have this here as a draft issue, but that's not an option - will complete with full environment details within the hour. (done)
@CommanderStorm commented on GitHub (Dec 17, 2025):
That would make sense indeed.
@jcjveraa commented on GitHub (Dec 17, 2025):
The solution is conceptually simple but perhaps tricky implementation-wise. I haven't looked at the source code yet, but if the auth tokens/URLs/etc. for sending notifications (I'll just call them "notification config") are stored in the database, this might pose a challenge. A solution could then be to cache all notification config in memory; it should not be a massive amount of data, a few kB per channel at most.
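The caching idea above could be sketched roughly as follows. This is a hypothetical illustration, not Uptime Kuma's actual code: the class name, the `loadFromDb` callback, and the shape of the cached records are all invented for the example. The key point is that a failed refresh keeps the last good copy, so notifications could still be dispatched while the database is down.

```javascript
// Hypothetical sketch: keep the last successfully loaded notification
// config in memory so it survives a database outage.
class NotificationCache {
    // loadFromDb: async () => array of notification config records
    constructor(loadFromDb) {
        this.loadFromDb = loadFromDb;
        this.cache = [];
    }

    // Returns true when the DB read succeeded; on failure the previous
    // (stale) cache is retained instead of being cleared.
    async refresh() {
        try {
            this.cache = await this.loadFromDb();
            return true;
        } catch (err) {
            // DB is down: keep serving the stale cached config.
            return false;
        }
    }

    getAll() {
        return this.cache;
    }
}
```

A refresh could run on a timer or on every successful DB access; the stale copy is only used when the refresh itself fails.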
Otherwise it might make sense to add this as an (invisible?) regular "monitor", with the special property that any time a notification channel is added, that channel gets added to this monitor as well. The logic being that any channel that wants to get notified of some monitored app being down probably also wants a notification if the monitoring itself is down.
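The "invisible self-monitor" idea above could look something like this sketch. All names here (`ChannelRegistry`, `selfMonitor`, `notifyList`) are illustrative assumptions, not Uptime Kuma internals:

```javascript
// Hypothetical sketch: a registry that mirrors every notification channel
// onto a built-in monitor representing Uptime Kuma itself.
class ChannelRegistry {
    constructor() {
        this.channels = [];
        // Invisible "monitor" for Uptime Kuma's own health.
        this.selfMonitor = { name: "uptime-kuma-self", notifyList: [] };
    }

    addChannel(channel) {
        this.channels.push(channel);
        // Any channel that wants "app down" alerts probably also wants
        // "the monitoring itself is down" alerts, so attach it automatically.
        this.selfMonitor.notifyList.push(channel);
    }
}
```

In this design the self-monitor never needs to be configured per-channel; it simply tracks the full channel list.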
@CommanderStorm commented on GitHub (Dec 17, 2025):
You are right, we likely should just crash loop. The db being down when we try to load which notifications exist is not recoverable.
I would currently expect it to either crash loop or at least fail its health check.
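The crash-loop / health-check expectation could be sketched like this: a health probe that turns a failed database ping into a non-OK status, so an external watchdog (e.g. a container orchestrator or a second monitoring instance) can alert or restart the process. `pingDb` is an assumed async probe, not a real Uptime Kuma function:

```javascript
// Hypothetical sketch: surface DB unavailability through the health check
// instead of silently failing.
async function healthCheck(pingDb) {
    try {
        await pingDb();
        return { status: 200, body: "ok" };
    } catch (err) {
        // A 503 here lets an orchestrator restart the process (crash loop)
        // or an external monitor raise the alert that UK itself cannot send.
        return { status: 503, body: "database unreachable" };
    }
}
```

This shifts the "notify when UK's own DB is down" responsibility to whatever watches the health endpoint, which avoids needing the DB to send the alert.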
@CommanderStorm commented on GitHub (Jan 18, 2026):
Looking at https://github.com/louislam/uptime-kuma/pull/6596, it seems this can be hacked in, but it does not seem to be cleanly possible.
We need a bit of architectural rethinking before this is possible.
I am going to leave this issue open if someone wants to look into what our options are to solve this and which way we need to refactor.
Be warned: this likely requires a bit of a larger refactoring.