mirror of
https://github.com/louislam/uptime-kuma.git
synced 2026-03-02 22:57:00 -05:00
Separation of web portal and monitoring node(s) (for distributed/regional monitoring) #1272
Originally created by @kristiandg on GitHub (Jul 17, 2022).
⚠️ Please verify that this feature request has NOT been suggested before.
🏷️ Feature Request Type
New Monitor, UI Feature, Other
🔖 Feature description
This would allow the installation of monitoring nodes in various cloud regions, with a centralized web front end. When building a monitor, you could select which monitoring nodes should take part in the health check; the check would then be executed from all selected nodes and reported on.
A single monitor could thus run health checks from multiple regions, which could reveal regional internet issues (latency, outages, etc.). It would also keep the front end centralized while isolating the worker processes that perform the health checks from being inadvertently impacted by front-end web usage.
✔️ Solution
Once nodes are enrolled on the main server (i.e., installed in multiple cloud regions and then linked/associated with the main web front-end server), a user creating a new Monitor would select all the nodes that should take part in the health check. Statistics would display the results from each location, notifications would fire accordingly, and the monitor could distinguish a complete outage from a regional one (listed as "partially down" or "down regionally").
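The aggregation step described above could be sketched roughly as follows. This is a hypothetical illustration, not Uptime Kuma code: the region names and the `classify()` helper are invented for the example.

```python
# Hypothetical sketch of the proposed per-region status aggregation.
# Each key is a monitoring node/region; each value is that node's
# latest health-check result for the monitor.

def classify(results: dict[str, bool]) -> str:
    """Map per-node health-check results to an overall monitor status."""
    up = sum(results.values())  # True counts as 1
    if up == len(results):
        return "up"
    if up == 0:
        return "down"
    return "down regionally"

# Example: the check passes in two regions but fails in one.
print(classify({"us-east": True, "eu-west": True, "ap-south": False}))
# -> down regionally
```

A real implementation would also need per-node latency history and notification rules, but the status logic reduces to this kind of three-way split.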
❓ Alternatives
None that is self-hosted; several hosted service providers offer similar capabilities.
📝 Additional Context
none
@rocket357 commented on GitHub (Jul 25, 2022):
This would be awesome. I'm brand new to Uptime Kuma, and my particular use case is trivial compared to a full multi-cloud monitoring requirement, but I would still love to see this. The problem I'm having is monitoring "public" services from within my network, since my modem does not support hairpin NAT. It would be amazing to have an external probe that checks my "public" services and reports back to Uptime Kuma.
@officiallymarky commented on GitHub (Sep 6, 2022):
Please please!
@jaydrogers commented on GitHub (Apr 19, 2023):
I'm interested in something like this too, but I understand this could be a major refactor and put a ton of effort on the authors.
Instead of taking on that amount of effort, having the ability to manage many Uptime Kuma instances with something like Ansible would be great.
I'm new to Uptime Kuma. Is anyone aware of managing settings via a configuration file (like a YML file) where you can set what sites to ping and which channels you want to alert?
Although this would be a different feature than what is proposed, this should help others manage a pool of Uptime Kuma servers so they can track uptime across the globe.
Would be interested in hearing any thoughts.
Thanks for all the effort on this beautiful project 🙌
@CommanderStorm commented on GitHub (Dec 6, 2023):
First of all: this can already be achieved using the push monitor.
We are consolidating duplicate issues, and I think we should track this in #84 instead.
=> closing as a duplicate
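For reference, the push-monitor workaround mentioned above works by having the remote node call the push URL that Uptime Kuma shows when you create a push monitor (`/api/push/<token>?status=...&msg=...&ping=...`). The sketch below builds that URL; the base URL and token are placeholders you would take from your own instance.

```python
# Minimal sketch of a remote probe reporting via an Uptime Kuma push monitor.
# BASE_URL and TOKEN are placeholders; the endpoint format matches the push
# URL displayed when a push monitor is created.
from urllib.parse import urlencode
from urllib.request import urlopen  # used by the remote node, see comment below
from typing import Optional


def push_url(base: str, token: str, status: str = "up",
             msg: str = "OK", ping_ms: Optional[float] = None) -> str:
    """Build the push-monitor heartbeat URL for a given instance and token."""
    params = {"status": status, "msg": msg}
    if ping_ms is not None:
        params["ping"] = str(ping_ms)
    return f"{base}/api/push/{token}?{urlencode(params)}"


# On the remote node (e.g. from cron), after checking the local service:
# urlopen(push_url("https://kuma.example.com", "YOUR_TOKEN", ping_ms=42))
```

This gives an "external probe" in the sense @rocket357 asked for: the node only needs outbound HTTPS to the central instance, and a missed heartbeat marks the monitor down.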