Implementing the retries enhancement #46

Closed
opened 2026-02-28 01:32:54 -05:00 by deekerman · 1 comment

Originally created by @Spiritreader on GitHub (Jul 15, 2021).

I've been looking at the code responsible for sending notifications when a service goes down mentioned in #21.
It looks like this would be quite reasonable to implement, so I am writing a draft and am curious to see if it ends up being any good.

Since I started using Uptime Kuma, I've often had services get marked as down and trigger a notification when the cause was just a transient ping issue.

I would actually try to implement this myself if that is welcome; however, I am currently quite busy and don't know if I can follow up in a timely enough manner before someone else picks it up.
But making a draft is potentially helpful for the developer who ends up implementing it.

The steps would pretty much include the following (if I'm not mistaken; I only had 30 minutes or so to look at the codebase):

  • [BACKEND] Add a field to the monitor called "maxRetries" which holds the maximum number of retries allowed before a service goes "down" and triggers notifications
  • [BACKEND] Create a new monitor.status value (for example 2, indicating that the service is currently "disrupted")
  • [BACKEND] Keep track of the threshold by storing the number of failed retries per monitor ID in an array or KV data structure (easy) or an in-memory database (more sophisticated), and also update the status to the new value (instead of 0) if the service has not yet reached maxRetries. I believe this should go in the catch section of the polling try block: https://github.com/louislam/uptime-kuma/blob/b3bff8d7357d75d3871aa68bc71db35dd79506a9/server/model/monitor.js#L112
  • [BACKEND] Make the modifications there and in similar places below https://github.com/louislam/uptime-kuma/blob/b3bff8d7357d75d3871aa68bc71db35dd79506a9/server/model/monitor.js#L121 so that notifications trigger only when the threshold has been reached
  • [FRONTEND] Add the retries field to the monitor edit page
  • [FRONTEND] Optional: indicate that the service is potentially disrupted by displaying the retry count. This could add bloat to the UI, so I'm not sure it makes sense.
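To make the backend steps above concrete, here is a minimal sketch of the retry tracking and status logic, assuming an in-memory KV structure keyed by monitor ID. The names (`RetryTracker`, the status constants) are hypothetical placeholders, not the actual Uptime Kuma code; the real implementation would live in `server/model/monitor.js`.

```javascript
// Hypothetical sketch of the proposed retry-threshold logic.
// Status codes follow the proposal: 0 = down, 1 = up, 2 = disrupted (new).
const STATUS_DOWN = 0;
const STATUS_UP = 1;
const STATUS_DISRUPTED = 2;

class RetryTracker {
    constructor() {
        // KV structure: monitor ID -> consecutive failure count
        this.failures = new Map();
    }

    // Called from the catch block of the polling loop when a beat fails.
    // Returns the status to record, so "down" (and thus notifications)
    // only happens once the configured maxRetries threshold is crossed.
    recordFailure(monitorId, maxRetries) {
        const count = (this.failures.get(monitorId) || 0) + 1;
        this.failures.set(monitorId, count);
        return count >= maxRetries ? STATUS_DOWN : STATUS_DISRUPTED;
    }

    // Called when a beat succeeds: reset the counter for that monitor.
    recordSuccess(monitorId) {
        this.failures.delete(monitorId);
        return STATUS_UP;
    }
}
```

With `maxRetries = 3`, a monitor would report the "disrupted" status on the first two failed polls and only flip to "down" (triggering notifications) on the third consecutive failure.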

I also never worked with redbean, so a manual data migration might be necessary if a new field is added to the database. The documentation suggests that redbean does not have migration support (https://redbean-node.whatsticker.online/Migration).
Does anyone know how @louislam is solving the migration task?
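In the absence of built-in migrations, one manual fallback would be an idempotent schema check at startup. This is only a sketch under that assumption, not how Uptime Kuma actually handles it; `listColumns` and `runSql` are hypothetical stand-ins for whatever raw-SQL access the database layer provides.

```javascript
// Hypothetical manual migration: add the new max_retries column only when
// it is missing, so running this on every startup is safe (idempotent).
// In SQLite, the existing columns could be read via PRAGMA table_info.
function migrateMaxRetries({ listColumns, runSql }) {
    if (!listColumns("monitor").includes("max_retries")) {
        runSql("ALTER TABLE monitor ADD COLUMN max_retries INTEGER NOT NULL DEFAULT 0");
    }
}
```

Because the check inspects the live schema rather than a version number, the same function can run against both fresh installs and existing databases.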

I think this describes the minimum viable product. Reusing the configured monitor ping interval for retries keeps the changes small and requires little modification of the codebase.

What do you think?

Also, I apologize that this issue is such a mess. I accidentally hit enter instead of backspace when editing the title, so I essentially had to write the draft while the issue already existed 😢

@Spiritreader commented on GitHub (Jul 21, 2021):

This feature is now tracked in PR #86.
