mirror of
https://github.com/louislam/uptime-kuma.git
synced 2026-03-02 22:57:00 -05:00
Master Monitor over other Monitors #1856
Originally created by @maxwai on GitHub (Feb 6, 2023).
⚠️ Please verify that this feature request has NOT been suggested before.
🏷️ Feature Request Type
Other
🔖 Feature description
Allow a monitor to be designated as a kind of "master" monitor. When this master monitor (for example, the server on which the services run) goes down, every monitor that depends on it goes down as well.
This would avoid sending multiple notifications when the master monitor is down; currently you get a notification for every service that goes down, plus one for the server itself.
This can be quite useful for people who want to check individual services (for example, to see when a single Docker container goes down) while still being able to quickly see that the server itself has a problem (it has crashed or no longer has an internet connection).
Here is an example of how this could look (ignore the services in maintenance):
Here, all the services (labelled "Dienste", German for "services") live on the server, so if the first monitor goes down, all services should immediately be marked as down too, and only one notification should be sent instead of six.
✔️ Solution
Add a way to link multiple monitors to a single monitor that acts as a "master switch".
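The requested behaviour can be sketched as a small dependency check. This is a hypothetical illustration only, not Uptime Kuma's actual code: the monitor shape (a `parent` field, `UP`/`DOWN` status values) and the function names are assumptions made for the example.

```javascript
// Hypothetical sketch of the "master monitor" idea: each monitor may
// reference a parent; when the parent is DOWN, dependent monitors are
// considered down too and their notifications are suppressed, so only
// one alert fires instead of one per service.
const UP = 1;
const DOWN = 0;

// Monitors keyed by id; `parent` is null for top-level (master) monitors.
const monitors = new Map([
  ["server", { parent: null, status: DOWN }],
  ["web", { parent: "server", status: UP }],
  ["db", { parent: "server", status: UP }],
]);

function isParentDown(id) {
  const parentId = monitors.get(id).parent;
  return parentId !== null && monitors.get(parentId).status === DOWN;
}

// Given the ids of monitors that just failed, keep only those whose
// failure is not already explained by a failed parent.
function notificationsFor(failedIds) {
  return failedIds.filter((id) => !isParentDown(id));
}

// All three monitors fail, but only the master produces an alert.
console.log(notificationsFor(["server", "web", "db"])); // → ["server"]
```

A real implementation would also have to mark the suppressed children as down in the UI (as the screenshot in the description suggests), not just skip their notifications.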
❓ Alternatives
No response
📝 Additional Context
No response
@drewlsvern commented on GitHub (Mar 15, 2023):
I am also looking for a feature similar to this. I would be willing to contribute towards this feature. I would love to gather some thoughts from others on this before starting though.
I have used Nagios before so my mind immediately kind of jumped to how I used it. You have a host and then you have a bunch of services that belong to that host.
I'm wondering if it might be easiest to automatically set up a master host to ping, and then any sub-monitors could be anything else.
In terms of the UI, my thought is to have an accordion and then any sub monitors would be visible by expanding the accordion.
@uptimejeff commented on GitHub (May 8, 2023):
If the internet is down for the Uptime Kuma host, I'd like it to pause all notifications until the internet is restored.
Example: ping a known host that ALL checks depend on (an upstream router, or 8.8.8.8), or two hosts (1.1.1.1/8.8.8.8); if they are unreachable, pause all checks/notifications until service is restored.
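The gating described above can be sketched as a pre-notification connectivity check. This is a hypothetical sketch, not Uptime Kuma code: the probe callbacks and function names are assumptions, and the stubbed probes stand in for a real ping or TCP connect to 1.1.1.1/8.8.8.8.

```javascript
// Hypothetical sketch: before sending any notification, probe one or
// more well-known upstream hosts; if none respond, assume the Uptime
// Kuma host itself has lost connectivity and hold the notification.

// Each probe is an async function resolving to true if the host answered.
async function anyGatewayReachable(probes) {
  const results = await Promise.all(probes.map((probe) => probe()));
  return results.some(Boolean);
}

// Returns true if the message was sent, false if it was suppressed.
async function maybeNotify(probes, send, message) {
  if (await anyGatewayReachable(probes)) {
    send(message);
    return true;
  }
  // The internet looks down from here: suppress until connectivity returns.
  return false;
}

// Example with stubbed probes (a real probe would ping or open a socket).
const sent = [];
maybeNotify(
  [async () => false, async () => true], // one of two gateways answers
  (msg) => sent.push(msg),
  "db is down",
).then((ok) => console.log(ok, sent)); // → true [ 'db is down' ]
```

Using two independent probe targets, as suggested above, avoids pausing everything just because a single reference host is having a bad day.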
@feofan69 commented on GitHub (May 9, 2023):
How can I pause all monitors?
@CommanderStorm commented on GitHub (Jul 26, 2023):
@maxwai
I think this is a duplicate of https://github.com/louislam/uptime-kuma/issues/2487 or https://github.com/louislam/uptime-kuma/pull/1236
If you agree, could you please close this Issue, as duplicates only create immortal zombies and are really hard to issue-manage?
If not, what makes this issue unique enough to require an additional issue? (Could this be integrated into the issue linked above?) ^^
@maxwai commented on GitHub (Jul 27, 2023):
@CommanderStorm
I looked at the issues, and you are right: this is a duplicate of #1236. Sorry, I didn't see that one.