Mirror of https://github.com/louislam/uptime-kuma.git (synced 2026-03-02 22:57:00 -05:00)
[groups] Conditional monitoring #1699
Originally created by @LukasL28 on GitHub (Dec 27, 2022).
⚠️ Please verify that this feature request has NOT been suggested before.
🏷️ Feature Request Type
New Monitor
🔖 Feature description
A Monitor that uses other Monitors as an and/or condition.
✔️ Solution
Add another Monitor type, where you can select other Monitors as conditions.
Example:
I have an application with a Database and a Webserver, but I want users to see only the application as a whole on their Status Page. So why not add a Monitor where I can select the Database Monitor (Docker Monitor) and the Webserver Monitor (Ping) and combine them into one called “Application”? I would then set the condition, for example “or”, so that if either the Database or the Webserver is down, the whole Application is displayed as being down.
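The combination described above can be sketched as a small status-combining function. This is purely illustrative, not actual Uptime Kuma code; the function name `groupStatus` and the `"all"`/`"any"` mode names are hypothetical. `"all"` matches the requester's example (the group is down as soon as any member is down); `"any"` is the redundant-cluster variant mentioned later in the thread.

```javascript
// Illustrative sketch: derive a group monitor's status from its members.
// "all"  -> group is up only when every member is up (any member down takes
//           the whole group down, the "Application" example above).
// "any"  -> group is up as long as at least one member is up (redundancy).
function groupStatus(memberStatuses, mode) {
  const upCount = memberStatuses.filter((s) => s === "up").length;
  if (mode === "all") return upCount === memberStatuses.length ? "up" : "down";
  if (mode === "any") return upCount > 0 ? "up" : "down";
  throw new Error(`unknown mode: ${mode}`);
}

console.log(groupStatus(["up", "down"], "all")); // "down": DB down -> app down
console.log(groupStatus(["up", "down"], "any")); // "up": one node suffices
```

The point of keeping the combinator this small is that the status-page entry needs no probing of its own; it only folds over statuses the existing monitors already produce.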
❓ Alternatives
No response
📝 Additional Context
No response
@michield commented on GitHub (Dec 28, 2022):
Isn't it sufficient to just monitor the Website? If the DB is down, the Website would be down as well. Unless I misunderstand the setup.
@skaempfe commented on GitHub (Dec 30, 2022):
In general I like the idea. In detail it might get quite complex quite fast.
Being able to add more than one URL to the HTTP check and define how many have to be UP to count as "healthy" sounds interesting. But the complexity of several stacked and/or checks might be a real pain...
There are other feature requests already suggesting that one can define dependent monitors (#1236), so if monitor A is down, then monitor B will also not work and should not be checked (or alerted on) separately.
One possible solution, in a direction as @michield suggested (outside uptime-kuma):
Implement a small "health check" as part of your Webserver which checks all dependent services on demand.
That's what we are doing in many of our applications, and that's why I suggested a new monitor type (see #2501 ).
Every application checks its direct dependencies (database, backend systems and so on) and reports their status within its own healthcheck.
For some more details and examples please take a look at the eclipse documentation:
protocol-and-wireformat
@danmed commented on GitHub (Jan 5, 2023):
My use case may be a little more simplified than what you all would use this for but for example.. .i'm monitoring my home internet connection along side services within my LAN.. if my internet connection is down then i don't need notifications for everything inside my LAN.. so for me i'd only need a 2 tiered option..
@wortkrieg commented on GitHub (Jan 6, 2023):
+1 on this request. One use case would be multiple servers running in a cluster: if one of them fails, I can display an outage via a single monitor on a status page. Or only send a notification if all of the monitors in one condition fail (these could be multiple redundant switches in my core network).
Configuration could be done quite easily, a bit like adding tags but choosing monitors instead of tags, and saying globally whether the monitor should act as an "and" or an "or" conditional.
No need to set up the monitors twice; just use the status information of the already existing monitors.
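The cluster case above generalises the and/or idea to a threshold, "at least k of n members must be up". A minimal sketch, with the name `quorumStatus` and its parameters invented for illustration:

```javascript
// Sketch: "k of n" group status, e.g. minUp = 2 for a cluster that can
// tolerate one node (or one redundant switch) being down.
function quorumStatus(memberStatuses, minUp) {
  const upCount = memberStatuses.filter((s) => s === "up").length;
  return upCount >= minUp ? "up" : "down";
}

// "and" and "or" fall out as special cases:
//   minUp = memberStatuses.length  -> all must be up
//   minUp = 1                      -> any one up is enough
```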
@maxpivo commented on GitHub (Jan 11, 2023):
Also +1. The ideal feature for me would allow association with a parent monitor, e.g. a ping. If the ping monitor is down, none of the service-specific monitors below it send notifications; you should just be able to select a parent when creating a new monitor. That way, for the basic use case of "is the host on the network?", if the answer is no, then none of the HTTP/TCP service monitors notify, which makes incident response more actionable (the host is offline, so don't worry about the services until the host issue is resolved).
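The suppression logic described above can be sketched in a few lines. The shapes here are assumptions made up for the example: each monitor record carries a hypothetical `parent` id (or `null`), and `statusById` maps ids to the latest known state.

```javascript
// Sketch: before notifying for a monitor, walk up its parent chain and
// stay silent if any ancestor is already down.
function shouldNotify(monitor, statusById) {
  for (let p = monitor.parent; p != null; p = statusById[p].parent) {
    if (statusById[p].status === "down") return false; // ancestor down: suppress
  }
  return true; // no down ancestor, alert normally
}
```

With this, a dead ping to the host produces exactly one alert, and the HTTP/TCP monitors behind it go quiet until the host is reachable again.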
@Bodge-IT commented on GitHub (Apr 7, 2023):
+1 on this. While I can see the argument for creating monitors that are simple and highlight any subordinate issues, it does sometimes help to have structural dependencies. E.g. I had one monitor for an HTTP call (to track internet availability) failing; all subsequent fault-finding showed a DNS issue on the Uptime Kuma system. I spent 30 minutes tracking down DNS issues when I suddenly realised the Kuma system gets external DNS through a site-to-site VPN. The VPN was down...
Would be a nice to have to link monitors showing clear relationships.
@ACiDGRiM commented on GitHub (Jun 4, 2023):
I will share that this is valuable beyond the original post's use case. For example, I want to test connectivity to the internet and only send alerts if "internet" is up. Furthermore, I want to test whether the edge router for my remote network is up before sending alerts for services behind it.
@CommanderStorm commented on GitHub (Jun 4, 2023):
@Bodge-IT
I think #2693 (to be merged in https://github.com/louislam/uptime-kuma/milestone/31) will solve this issue.
From a UX perspective, I think Conditional Monitors could easily get very confusing.
Ideas to get around this?
@Bodge-IT commented on GitHub (Jun 5, 2023):
A suggestion for the UI would simply be a depends-on button for a node (showing the status of depended-on nodes, i.e. down/red if any depended-on node is affected). The button could maybe trigger a popup (I know... terrible UI, but quick) that displays links to one or more depended-on nodes, including their current status.
Your point about it getting confusing is valid, but by definition these things are hierarchical in nature and therefore threadable rather than flat. You could simply display them as a folder structure, with down-the-line dependents shown inset under the node they depend on.
@CommanderStorm commented on GitHub (Jun 6, 2023):
I don't think the popup would solve the problem of having
I don't fully get what you mean by the folder structure.
Could you provide a mockup (ms-paint/ drawn on paper and photographed)?
@ACiDGRiM commented on GitHub (Jun 6, 2023):
The dialogue I'm suggesting could look like this mockup:
WebApp dependents
☑️ dependent 1
✅️ API health
☑️ dependent 3
Then, when opening the dependent settings for any other monitor, WebApp could not be selected as a dependent. However, a monitor could be a dependent of multiple master services.
API Health dependents
☑️ dependent 1
☑️ dependent 3
This would allow any number of apps to depend on one master monitor, and prevent a master monitor from being dependent on any other, including loop scenarios.
This has some limitations where a chain of legitimate dependencies exists (i.e. monitoring a remote service which depends on the internet, or a monitor which depends on the local router); supporting that would require testing the whole relationship chain for loops. This achieves the goal of minimizing Down notifications while potentially minimizing complexity.
Just my 0.03$ adjusted for inflation
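The loop check mentioned above is cheap if each monitor has at most one parent: before saving a new "child depends on parent" edge, walk the existing parent chain and reject it if the chain leads back to the child. The `parents` map and function name below are illustrative only.

```javascript
// Sketch: would adding the edge child -> parent create a cycle?
// `parents` is a Map of monitorId -> parentId for already-saved edges.
function wouldCreateCycle(childId, parentId, parents) {
  for (let cur = parentId; cur != null; cur = parents.get(cur)) {
    if (cur === childId) return true; // chain leads back to the child
  }
  return false;
}
```

Because each node has a single parent, the walk is linear in the chain length, so validating on save avoids ever having to detect loops at check time.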
@zimbres commented on GitHub (Jul 5, 2023):
Not sure if it is similar to this, but in systems like WhatsUp Gold, PRTG, and Zabbix there is an option for device dependencies: I can set a parent monitor, and in case it is down or paused, its children's statuses will be ignored.
@Bodge-IT commented on GitHub (Jul 5, 2023):
Sort of. I used to use WhatsUp Gold in my DevOps days. My idea sort of combines that relationship dependency with a kind of "if this and that, then this must be at fault, so activate this notification" monitoring.
I'd be happy with the parent-dependency thing too; it would still improve the product.
@sevmonster commented on GitHub (Dec 16, 2023):
Would it simply be enough to allow a service to be assigned to multiple groups? I think all the use cases presented here would be covered, except the advanced conditional-processing part, but I am not sure what real benefit that would provide over multi-grouping.
@fboaventura commented on GitHub (Jul 16, 2024):
Hi!
I'll share an image to help explain my use case for conditional monitoring, or dependency, for groups and single assets.
This is my home lab, from where I monitor not only assets in my local network but also external servers and services:
As it is now, if srv01 (uptime-kuma) fails to reach the firewall, I'll get a push notification for each asset above it as being down. If it fails to reach the wifi-rtr02, I'll get a push notification for everything outside srv01 being down.
Ideally, in both cases above, I would only receive a notification stating that wifi-rtr02 is down and that everything above it is unreachable, because srv01 can't tell whether they are down or not. The notification for each state (up, down, and unreachable) would be configured separately: I could opt not to receive an unreachable notification, or only receive the first alert, while still getting a down notification every X minutes.
Again, ideally, dependencies could be established on all levels: services or a group of services to servers, servers or a group of servers to network assets, network assets to links, etc.
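The three-state model described above (up, down, unreachable) can be sketched as a status-resolution pass over the dependency chain. The data shapes are invented for the example: `raw` holds each monitor's own probe result, and `parents` is a hypothetical map of monitor to upstream hop.

```javascript
// Sketch: a monitor is "down" only when it is the first failing hop;
// anything behind a down hop is reported as "unreachable" instead, so
// only one real alert fires per outage.
function effectiveStatus(id, raw, parents) {
  for (let p = parents.get(id); p != null; p = parents.get(p)) {
    if (raw.get(p) === "down") return "unreachable"; // an upstream hop failed
  }
  return raw.get(id); // no failing ancestor: report the probe result itself
}
```

Notification rules could then key off the effective state, e.g. alert on every down but only on the first unreachable, as suggested above.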
@tomiko23lol commented on GitHub (Sep 9, 2025):
Conditional monitoring would be super useful. It even looks like there was an effort to add it here: https://github.com/louislam/uptime-kuma/pull/5791, but that is stuck now and there have been no changes there for a long time. Could someone revive this effort and add conditional monitoring to Kuma?
@PintjesB commented on GitHub (Nov 14, 2025):
See my comment in #5791