mirror of
https://github.com/louislam/uptime-kuma.git
synced 2026-03-02 22:57:00 -05:00
Understanding Push Monitor Type Timing #2486
Originally created by @bernarddt on GitHub (Aug 18, 2023).
📝 Describe your problem
I'm using the Push (Passive) monitor type. After thinking about the concept for a while, I've realised that while it sounds simple, it must have taken considerable work to build a process that derives "up" and "down" status from incoming push notifications, unlike active monitors, where you make a request yourself and alert when it fails.
One concept I'm unclear on, and couldn't find any wiki page for (please write one!), is how timing works for the Push monitor type.
I have a backup process that runs every 24 hours; when it completes, it sends a "heartbeat" to Uptime Kuma to report that it was successful. As you can imagine, a backup run can take anywhere from, say, one minute to several hours.
My first instinct was to create a Push monitor with an interval of 24 hours (86,400 seconds).
But depending on how the duration is measured, or what "smart" logic was implemented in the code, this could either work or (falsely) fail regularly.
The backup runs on a 24-hour schedule; say it kicks off at midnight each night. It could take around 6 hours to complete, so the notification only reaches Uptime Kuma at 6am. That wouldn't be a problem if the backup took exactly 6 hours every day, since the next notification would also arrive at 6am each day.
But suppose one backup runs faster and the notification arrives at 4am. That's fine, since it's within 24 hours of the previous notification. If the next backup then takes 6 hours again, its 6am notification will arrive 24 hours plus 2 hours after the last one. Will that register as a problem in Uptime Kuma?
One important point is that you don't set an "expected" time in Uptime Kuma. I can't tell it: if you haven't heard from the push service by 8am, with a period of 24 hours (so 8am each day), consider it a failure. Uptime Kuma therefore has to determine by itself the window in which it "expects" the notification, and report a "down" status if none arrives.
I don't know how Uptime Kuma does this, but looking at the input data, it only has a duration. So I would guess that it takes the first notification and adds the duration to get a "due" date and time. Once it receives another notification (which could be sooner), it computes the next due time by adding the duration to that latest notification's time.
So in layman's terms, it "allows" a gap between notifications equal to the duration? Is that actually the case, or does it determine the expected window differently?
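To make the guessed behaviour concrete, here is a toy model of it (a sketch of the mechanism as inferred in the question, not Uptime Kuma's actual code): each heartbeat sets the next deadline to its arrival time plus the configured interval, and the monitor is "down" whenever the current time is past that deadline. The class and method names are made up for illustration.

```python
class PushMonitorModel:
    """Toy model of the inferred push-monitor timing: every heartbeat
    resets the deadline to arrival_time + interval."""

    def __init__(self, interval_seconds: int):
        self.interval = interval_seconds
        self.deadline = None  # no heartbeat received yet

    def heartbeat(self, now: float) -> None:
        # The allowed window restarts from the moment the push arrives.
        self.deadline = now + self.interval

    def status(self, now: float) -> str:
        if self.deadline is None:
            return "pending"
        return "up" if now <= self.deadline else "down"


# The scenario from the question, with times in seconds since day 1, 00:00
# and a 24-hour interval:
m = PushMonitorModel(interval_seconds=24 * 3600)
m.heartbeat(now=6 * 3600)             # day 1: backup finishes at 06:00
print(m.status(now=(24 + 4) * 3600))  # day 2, 04:00 -> "up" (22 h gap)
m.heartbeat(now=(24 + 4) * 3600)      # day 2: faster backup, done at 04:00
print(m.status(now=(48 + 6) * 3600))  # day 3, 06:00 -> "down" (26 h gap)
```

Under this model the 26-hour gap in the example would indeed trip the monitor, which is exactly the concern raised above.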
Someone else filed a feature request for a "grace period" in #3590, but if my understanding is correct, you could effectively give yourself a grace period by adding it to your duration. Alternatively, you could use the "retries" setting, with the grace period as the retry interval and a count of 1: the monitor would go overdue after 24 hours, "check" again 8 hours later, and only then report a failure. Or I could use a retry interval of 1 hour with a retry count of 8. The problem with that last approach is that it shows potential failures on the graphs even when the behaviour is "expected", so adding the grace period to the duration is probably the better option.
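The arithmetic behind the two workarounds above can be sketched as follows (the 24-hour period is from the question; the 8-hour grace is the example figure, not a recommended value):

```python
# Folding a grace period into a Push monitor's interval vs. using retries.
BACKUP_PERIOD = 24 * 3600  # backup runs every 24 h
GRACE_PERIOD = 8 * 3600    # slack for slow backup runs (example value)

# Option 1: bake the grace into the monitor interval.
interval = BACKUP_PERIOD + GRACE_PERIOD
print(interval)  # 115200 s -> alert only after 32 h of silence

# Option 2: keep the interval at 24 h and use retries as the grace window:
# one retry, spaced a full grace period after the first missed heartbeat.
retry_interval = GRACE_PERIOD
retries = 1
effective_deadline = BACKUP_PERIOD + retries * retry_interval
print(effective_deadline)  # 115200 s -> same 32 h before a reported failure
```

Both options reach the same effective deadline; they differ only in whether the intermediate overdue state is visible on the graphs.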
📝 Error Message(s) or Log
n/a for "help"
🐻 Uptime-Kuma Version
1.22.1
💻 Operating System and Arch
Windows Server 2019
🌐 Browser
Chrome
🐋 Docker Version
n/a
🟩 NodeJS Version
n/a
@chakflying commented on GitHub (Aug 18, 2023):
Simply put: the Push monitor will send you a notification if the defined URL has not been visited for "monitor interval" seconds.
@bernarddt commented on GitHub (Aug 18, 2023):
But from when does it measure?
@chakflying commented on GitHub (Aug 18, 2023):
Since server restart or the last time the defined url was visited.
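In other words (a toy sketch of the behaviour as described in that comment, with made-up names, not the actual implementation): the countdown starts from "now" at server start, and every visit to the push URL restarts it.

```python
class PushTimer:
    """Toy model: the clock starts at server (re)start and restarts on
    every visit to the push URL."""

    def __init__(self, interval_seconds: int, start: float):
        self.interval = interval_seconds
        # After a restart, timing begins from startup time, not from
        # the last heartbeat received before the restart.
        self.last_seen = start

    def visit(self, now: float) -> None:
        self.last_seen = now  # each visit to the push URL resets the timer

    def overdue(self, now: float) -> bool:
        return now - self.last_seen > self.interval


t = PushTimer(interval_seconds=60, start=0)
t.visit(now=30)
print(t.overdue(now=80))   # False: only 50 s since the last visit
print(t.overdue(now=100))  # True: 70 s since the last visit
```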
@chakflying commented on GitHub (Aug 18, 2023):
If you want to understand more about the implementation, PR #1428 contains a well-written description of the current implementation and its considerations.
@bernarddt commented on GitHub (Aug 18, 2023):
Thanks!
@twiesing commented on GitHub (Dec 30, 2024):
Question: Is the timer reset after restarting Uptime Kuma?
If my 1-day backup, which I monitor, fails, I get no message from Kuma.