Mirror of https://github.com/louislam/uptime-kuma.git (synced 2026-03-02 22:57:00 -05:00)
When uptime-kuma goes offline and comes back online, the status report leads to wrong assertions about service status #4044
Originally created by @Fade78 on GitHub (Mar 19, 2025).
📑 I have found these related issues/pull requests
It's very hard to formulate as a search, but I didn't find any existing issues about "offline" or "take into account".
🛡️ Security Policy
Description
I have a host with many VMs running services, and the uptime-kuma instance also runs on it. Sometimes I power the entire host off. When it's back online, the uptime-kuma instance reports one little red bar and all the others are green. At a glance, I could say there was a short downtime, while in fact it lasted for hours.
What is displayed by uptime-kuma.
Looking at this bar, you could say that nothing happened, maybe a little interruption. But in fact the host was off for an entire night. The actual truth should be the following:
What should be displayed by uptime-kuma (yes, I made this image myself; that's how much I care about this bug and this software).
👟 Reproduction steps
Shut down an instance of uptime-kuma for a few hours, then bring it back online.
👀 Expected behavior
There should be a special color, maybe gray, to indicate that no check could be performed, and the bar should be proportional to time, not to probe activations.
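One way the reporter's expectation could be met is by detecting "no data" periods from heartbeat spacing: if two consecutive heartbeats are much farther apart than the configured check interval, the monitor itself was not running in between. This is a hedged sketch, not Uptime Kuma's actual code; `findGaps` and its parameters are hypothetical names.

```javascript
// Hypothetical sketch: flag spans where no check could have been performed,
// by comparing consecutive heartbeat timestamps to the check interval.
// A gap longer than `tolerance` intervals is treated as "monitor was down".
function findGaps(heartbeats, intervalMs, tolerance = 2) {
    const gaps = [];
    for (let i = 1; i < heartbeats.length; i++) {
        const elapsed = heartbeats[i].time - heartbeats[i - 1].time;
        if (elapsed > tolerance * intervalMs) {
            gaps.push({ from: heartbeats[i - 1].time, to: heartbeats[i].time });
        }
    }
    return gaps;
}

// Example: checks every 60 s, host powered off for ~8 hours overnight.
const heartbeats = [
    { time: 0, up: true },
    { time: 60_000, up: true },
    { time: 120_000, up: true },
    { time: 120_000 + 8 * 3_600_000, up: true }, // first check after power-on
    { time: 180_000 + 8 * 3_600_000, up: true },
];
console.log(findGaps(heartbeats, 60_000)); // one gap covering the night
```

A renderer could then paint the spans returned by `findGaps` in gray, distinct from both "up" and "down".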

😓 Actual Behavior
Lies about uptimes of monitored services:

What is displayed by uptime-kuma: everything is fine, nothing to see, just a little glitch.
🐻 Uptime-Kuma Version
1.23.16
💻 Operating System and Arch
Ubuntu server 24.04 docker host, using official uptime-kuma container
🌐 Browser
Firefox
🖥️ Deployment Environment
📝 Relevant log output
@louislam commented on GitHub (Mar 19, 2025):
Unfortunately, Uptime Kuma was not designed like that; this is normal behavior under the current design.
@Fade78 commented on GitHub (Mar 20, 2025):
But this is a purely functional limitation, right? The application already has the information, since it displays it in the main dashboard:
So it would be possible to recreate the status page from this information. Maybe as a configurable option?
@CommanderStorm commented on GitHub (Mar 24, 2025):
The current implementation of the heartbeat bar just looks at the last n heartbeats. If a heartbeat is missing, the gap is therefore not visible.
The chart below is a much newer implementation using the aggregator.
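The difference between the two rendering approaches described above can be sketched as follows. This is an illustrative model, not Uptime Kuma's actual implementation; `lastNBar` and `timeBucketBar` are hypothetical names.

```javascript
// Each heartbeat: { time: epoch ms, up: boolean }

// "Last n heartbeats" rendering: one segment per heartbeat. A multi-hour
// gap between two heartbeats still costs zero segments, so it is invisible.
function lastNBar(heartbeats, n) {
    return heartbeats.slice(-n).map(hb => (hb.up ? "up" : "down"));
}

// Time-bucketed rendering: split the window into equal time buckets;
// buckets that received no heartbeat (e.g. the monitoring host was powered
// off) stay "unknown", making the gap proportional to its real duration.
function timeBucketBar(heartbeats, fromMs, toMs, buckets) {
    const width = (toMs - fromMs) / buckets;
    const bar = new Array(buckets).fill("unknown");
    for (const hb of heartbeats) {
        const i = Math.floor((hb.time - fromMs) / width);
        if (i >= 0 && i < buckets && bar[i] !== "down") {
            bar[i] = hb.up ? "up" : "down";
        }
    }
    return bar;
}

// Three successful checks, with a long gap between the second and third:
const hbs = [
    { time: 0.5, up: true },
    { time: 1.5, up: true },
    { time: 9.5, up: true },
];
console.log(lastNBar(hbs, 3));            // all green, gap invisible
console.log(timeBucketBar(hbs, 0, 10, 10)); // 7 of 10 buckets "unknown"
```

With the last-n approach the bar is entirely green; with time buckets, most of the window is rendered as "unknown", which is what the issue asks for.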
@CommanderStorm commented on GitHub (Mar 24, 2025):
(talking about v2)
@Fade78 commented on GitHub (Mar 25, 2025):
If it can boost the priority of this feature for v2, please consider this: if uptime-kuma goes down and then recovers because of a global outage, a client may sue the service provider for giving false information about uptime. Behind this there may be penalties to pay under the SLA, and the service provider could be accused of concealing the actual downtime.
@CommanderStorm commented on GitHub (Mar 25, 2025):
First of all, your monitoring should be external, NOT internal, so that it does not experience the same faults.
The way SLA rebates usually work is that the client has to notice a fault and then report it via a ticket to get a rebate. I don't see how this is affected.
If you are operating at big-tech scale, you likely have your own system for this.
For SLAs you also want custom accounting: how are you handling maintenance/pausing/retries/...?
Adding different accounting modes is not planned.