Understanding Push Monitor Type Timing #2486

Closed
opened 2026-02-28 02:56:22 -05:00 by deekerman · 6 comments

Originally created by @bernarddt on GitHub (Aug 18, 2023).

⚠️ Please verify that this bug has NOT been raised before.

  • I checked and didn't find similar issue

🛡️ Security Policy

  • I agree to have read this project's Security Policy

📝 Describe your problem

I'm using the Push (Passive) Monitor Type. After thinking about the concept for a while, I've realised that it sounds simple, but it must have taken quite some time to build a process that derives "up" and "down" status from incoming push notifications, unlike the active monitors, where you make a request and alert when it fails.

One concept that I'm unclear on, and couldn't find any wiki page for (please write one for this!), is how the timing works on Push monitor types.

So I've got a backup process that runs every 24 hours; when it completes, it sends a "heartbeat" to Uptime Kuma to report that it was successful. As you can imagine, running a backup can take anywhere from, say, 1 minute to several hours.
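For context, a heartbeat like this is just a request to the monitor's push URL. The sketch below builds such a URL; the host and token are placeholders (Uptime Kuma generates the real `/api/push/<token>` URL when you create the monitor), and the `status`/`msg`/`ping` query parameters follow the push URL format shown in the Uptime Kuma UI:

```javascript
// Build a push-monitor heartbeat URL. Host and token are placeholders;
// Uptime Kuma shows the real push URL when you create the monitor.
function buildPushUrl(baseUrl, token, { status = "up", msg = "OK", ping = "" } = {}) {
  const url = new URL(`/api/push/${token}`, baseUrl);
  url.searchParams.set("status", status);
  url.searchParams.set("msg", msg);
  url.searchParams.set("ping", String(ping));
  return url.toString();
}

// Last step of the backup script, after a successful run:
const pushUrl = buildPushUrl("https://uptime.example.com", "YOUR_TOKEN", {
  msg: "backup finished",
});
// fetch(pushUrl);  // any GET request to the push URL counts as a heartbeat
```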

My first thought was to create a Push Monitor with an interval of 24 hours (86400 seconds).

But depending on how the duration is measured, or what "smart" logic was implemented in the code, this could work or (falsely) fail regularly.

The backup is on a schedule of every 24 hours. Say it starts at 1am each night but takes about 6 hours to complete; the notification to Uptime Kuma will only be received at 6am. That would not be a problem if the backup ran for exactly 6 hours each day, as the next notification would also arrive at 6am every day.

But say one backup runs faster and the notification arrives at, say, 4am. No problem, as this is within 24 hours of the last notification. But if the next backup takes 6 hours again, then 6am will be 24 hours + 2 hours after it. Will this register as a failure on Uptime Kuma?

One important fact is that you don't set an "expected" time in Uptime Kuma. I can't tell Uptime Kuma: if you don't hear from the push service by 8am, with a period of 24 hours (so 8am each day), consider it a failure. So Uptime Kuma has to determine by itself the window in which it "expects" the notification, and if nothing arrives it reports a "down" status.

I don't know how Uptime Kuma does this, but looking at the input data, it only has an interval. So I would guess that it takes the first notification and adds the interval, and that becomes its "due" date and time. Once it receives another notification (which could be sooner), it determines the next "due" date and time by adding the interval to that (latest) notification time.
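That guessed bookkeeping can be sketched in a few lines (this models my reading of the question, not Uptime Kuma's actual code): each heartbeat moves the due time forward by one interval, and the monitor is late once "now" passes it.

```javascript
// Hypothetical model of push-monitor timing, not the real implementation:
// the "due" time is always lastHeartbeat + interval.
function nextDue(lastHeartbeatMs, intervalSeconds) {
  return lastHeartbeatMs + intervalSeconds * 1000;
}

function isDown(lastHeartbeatMs, intervalSeconds, nowMs) {
  return nowMs > nextDue(lastHeartbeatMs, intervalSeconds);
}

// The scenario from the question: a heartbeat at 4am with a 24h interval,
// then the next backup not finishing until 6am the following day.
const h4am = Date.UTC(2023, 7, 18, 4, 0, 0);        // fast backup pings at 4am
const h6amNextDay = Date.UTC(2023, 7, 19, 6, 0, 0); // slow backup pings at 6am
console.log(isDown(h4am, 86400, h6amNextDay)); // true: 26h gap > 24h window
```

Under this model the 4am-then-6am sequence would indeed be flagged, which is exactly the worry raised above.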

So in layman's terms, it "allows" a period between notifications equal to the interval? Is that actually the case, or does it determine the window for the next notification in a different way?

Someone else filed a feature request for a "grace period" in #3590, but if my understanding is correct, you could effectively give yourself a grace period by adding it to your interval. Alternatively, use the "retries" setting: make the retry interval the grace period with a count of 1, so the monitor fails after 24 hours but "checks" again after 8 hours and only then reports a failure. Or I could use a retry interval of 1 hour and a retry count of 8. The problem with this last approach is that it shows potential failures on the graphs even when it is "expected" behaviour. So adding the grace period to the interval is probably the better option.
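The arithmetic behind the "add the grace period to the interval" workaround is simple (a sketch of the reasoning above, not a built-in Uptime Kuma feature; the 8-hour figure is the example grace period from this thread):

```javascript
// Workaround sketch: bake the grace period into the monitor interval.
// With a 24h schedule and up to 8h of tolerated lateness, configure the
// push monitor's interval as schedule + grace.
const scheduleSeconds = 24 * 60 * 60; // backup runs every 24h
const graceSeconds = 8 * 60 * 60;     // tolerate up to 8h of extra runtime
const monitorInterval = scheduleSeconds + graceSeconds;
console.log(monitorInterval); // 115200 seconds (32 hours)
```

The trade-off is that a genuinely failed backup is also only reported after the full 32 hours, rather than at the 24-hour mark.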

📝 Error Message(s) or Log

n/a for "help"

🐻 Uptime-Kuma Version

1.22.1

💻 Operating System and Arch

Windows Server 2019

🌐 Browser

Chrome

🐋 Docker Version

n/a

🟩 NodeJS Version

n/a

deekerman 2026-02-28 02:56:22 -05:00
  • closed this issue
  • added the
    help
    label

@chakflying commented on GitHub (Aug 18, 2023):

Simply put - the Push monitor will send you a notification if the defined URL has not been visited for `monitor interval` seconds.
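That behaviour can be modelled in a few lines (illustrative only, not the real implementation; see the PR referenced later in this thread for that): the server remembers when the push URL was last visited, and a periodic check reports down once more than the interval has elapsed.

```javascript
// Illustrative model of a push monitor's server-side check (not the
// actual Uptime Kuma code). Visiting the push URL records a timestamp;
// the status check compares that timestamp against the interval.
class PushMonitorModel {
  constructor(intervalSeconds, startedAtMs) {
    this.intervalMs = intervalSeconds * 1000;
    this.lastSeenMs = startedAtMs; // on server (re)start, the timer starts fresh
  }
  heartbeat(nowMs) {
    this.lastSeenMs = nowMs; // visiting the push URL resets the timer
  }
  status(nowMs) {
    return nowMs - this.lastSeenMs > this.intervalMs ? "down" : "up";
  }
}

const start = Date.UTC(2023, 7, 18, 0, 0, 0);
const m = new PushMonitorModel(86400, start);          // 24h interval
m.heartbeat(Date.UTC(2023, 7, 18, 6, 0, 0));           // backup pings at 6am
console.log(m.status(Date.UTC(2023, 7, 19, 5, 0, 0))); // "up"   (23h since ping)
console.log(m.status(Date.UTC(2023, 7, 19, 7, 0, 0))); // "down" (25h since ping)
```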


@bernarddt commented on GitHub (Aug 18, 2023):

> [...] has not been visited for `monitor interval` seconds.

but from when does it measure?


@chakflying commented on GitHub (Aug 18, 2023):

Since server restart or the last time the defined url was visited.


@chakflying commented on GitHub (Aug 18, 2023):

If you want to understand more about the implementation, PR #1428 contains a well-written description of the current implementation and its considerations.


@bernarddt commented on GitHub (Aug 18, 2023):

Thanks!


@twiesing commented on GitHub (Dec 30, 2024):

Question: is the timer reset after restarting Uptime Kuma?

If my daily backup, which I monitor, fails, there is no message from Kuma.

[Screenshot attached: "Bildschirmfoto 2024-12-30 um 20 54 22" (https://github.com/user-attachments/assets/612b1c9b-9d26-4fc8-91da-f8dfb85b2c84)]