Separation of web portal and monitoring node(s) (for distributed/regional monitoring) #1272

Closed
opened 2026-02-28 02:15:46 -05:00 by deekerman · 4 comments

Originally created by @kristiandg on GitHub (Jul 17, 2022).

⚠️ Please verify that this feature request has NOT been suggested before.

  • I checked and didn't find a similar feature request

🏷️ Feature Request Type

New Monitor, UI Feature, Other

🔖 Feature description

This would allow the installation of monitoring nodes in various cloud regions, with a centralized web front end. When building a monitor, you could select which monitoring nodes take part in the health check; the health check would then be executed from all selected nodes and reported on.

A single monitor could then execute health checks from multiple regions, which could reveal regional internet issues (latency, outages, etc.). It would also separate the worker processes that run the health checks from the centralized front end, so they are not inadvertently impacted by front-end web usage.

✔️ Solution

Once nodes are enrolled on the main server (that is, installed in multiple cloud regions and then linked/associated to the main web front-end server), a user creating a new Monitor would select all the nodes they want to take part in the health check. Statistics would display the results from each location, and notifications would indicate whether a service is completely down or only down regionally (listed as "partially down" or "down regionally").
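As an illustration of the "partially down" idea, aggregating per-node results into an overall monitor status could be as simple as the following sketch (hypothetical code, not part of Uptime Kuma's codebase; names are invented for the example):

```javascript
// Hypothetical sketch: combine the per-node results of one health
// check into an overall monitor status.
function classifyStatus(nodeResults) {
  // nodeResults: e.g. [{ node: "us-east-1", up: true }, { node: "eu-west-1", up: false }]
  const upCount = nodeResults.filter((r) => r.up).length;
  if (upCount === nodeResults.length) return "up";
  if (upCount === 0) return "down";
  return "partially down"; // reachable from some regions, unreachable from others
}
```

A real implementation would also need per-node history and notification rules, but the classification itself stays this simple.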

❓ Alternatives

None that is self-hosted. Several commercial service providers offer such capabilities.

📝 Additional Context

none

deekerman 2026-02-28 02:15:46 -05:00

@rocket357 commented on GitHub (Jul 25, 2022):

This would be awesome. I'm brand new to Uptime Kuma, and my particular use case is trivial compared to a full multi-cloud monitoring requirement, but I would still love to see this. The problem I'm having is monitoring "public" services from within my network, since my modem does not support hairpin NAT. It would be amazing to have an external probe that checks my "public" services and reports the results back to Uptime Kuma.


@officiallymarky commented on GitHub (Sep 6, 2022):

Please please!


@jaydrogers commented on GitHub (Apr 19, 2023):

I'm interested in something like this too, but I understand this could be a major refactor and put a ton of effort on the authors.

Instead of taking on that amount of effort, having the ability to manage many Uptime Kuma instances with something like Ansible would be great.

I'm new to Uptime Kuma. Is anyone aware of a way to manage settings via a configuration file (e.g. a YAML file) that defines which sites to ping and which channels to alert?

Although this would be a different feature than the one proposed, it would help others manage a pool of Uptime Kuma servers and track uptime across the globe.
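As a sketch of the configuration-file idea above (purely hypothetical; Uptime Kuma does not currently read such a file, and all field names here are invented for illustration):

```yaml
# Hypothetical monitor configuration -- field names are invented.
# A tool like Ansible could template and distribute one of these
# per Uptime Kuma instance.
monitors:
  - name: example.com
    type: http
    url: https://example.com
    interval: 60
    notify:
      - slack-ops
  - name: api health
    type: http
    url: https://api.example.com/health
    interval: 60
    notify:
      - email-oncall
```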

Would be interested in hearing any thoughts.

Thanks for all the effort on this beautiful project 🙌


@CommanderStorm commented on GitHub (Dec 6, 2023):

First of all: this can already be achieved using the `push` monitor.

We are consolidating duplicate issues and I think we should track this issue in #84 instead.
=> closing as a duplicate
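As a sketch of that `push`-monitor workaround: an external probe checks a public service and reports the result to the central instance. The host and push token below are placeholders; the `/api/push/<token>` URL shape follows Uptime Kuma's Push monitor, but verify the exact query parameters against your instance's docs.

```javascript
// Hypothetical external probe reporting to an Uptime Kuma Push monitor.
const KUMA_BASE = "https://kuma.example.com"; // placeholder host
const PUSH_TOKEN = "your-push-token";         // placeholder token

// Build the push URL; Push monitors accept status/msg as query parameters.
function buildPushUrl(base, token, status, msg) {
  return `${base}/api/push/${token}?status=${status}&msg=${encodeURIComponent(msg)}`;
}

async function probe(target) {
  let status = "up";
  let msg = "OK";
  try {
    // Requires Node 18+ for global fetch and AbortSignal.timeout.
    const res = await fetch(target, { signal: AbortSignal.timeout(10_000) });
    if (!res.ok) {
      status = "down";
      msg = `HTTP ${res.status}`;
    }
  } catch (err) {
    status = "down";
    msg = String(err);
  }
  await fetch(buildPushUrl(KUMA_BASE, PUSH_TOKEN, status, msg));
}
```

Run `probe("https://your-public-service.example.com")` from cron on any machine outside your network.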
