Remote Executors #52

Open
opened 2026-02-28 01:33:07 -05:00 by deekerman · 39 comments

Originally created by @proffalken on GitHub (Jul 19, 2021).

OK, so this is quite possibly taking the project way past what it was ever intended to be, but stay with me on this one...

It would be amazing if I could launch "remote executors" and install them on various devices/virtual instances but have them report back to a central Uptime-Kuma instance.

This is a feature that some of the more advanced site-monitoring tools have, and would allow me to spin up instances across the globe to see what the response time was from each region etc.

As a rough outline, I'd probably be looking for the agents to either post their results back to the "primary" setup via HTTPS, or just load the results onto an MQTT Queue and have the "primary" read those results and log them.

Having it as a lightweight app would also mean that I could deploy onto a Raspberry Pi or similar if I didn't want to use virtual machines in a cloud-provider.

@gaby commented on GitHub (Oct 9, 2021):

@louislam With push notifications merged, I think this can be closed?

@proffalken commented on GitHub (Oct 10, 2021):

@gaby This isn't about push notifications, it's about being able to deploy multiple instances of Uptime-Kuma that all feed back to a central instance.

The idea is that I could deploy copies of Uptime Kuma into AWS EU-West-1, US-East-2, and APAC-West-1, but have them all report the latency back into an instance in EU-West-2 that shows the actual data, rather than just being able to push notifications from anywhere on the planet back to my browser/mobile device?

I'd rather keep this open if people are happy to do so?

@louislam commented on GitHub (Oct 11, 2021):

Yes, should keep this open. Don't forget to give a 👍 to your post.

@deefdragon commented on GitHub (Oct 15, 2021):

Very much bike-shedding, but I couldn't resist. Given Kuma = bear, I propose that we call remote executors (at least in the many-to-one case) "Cub" instances, with the primary being the "Mother" instance.

@codeagencybe commented on GitHub (Nov 25, 2021):

I like this idea a lot, coming from the updown.io service.
They also have this option to run checks from places all over the globe and report the latency and Apdex score each location sees.

But this kind of setup does come with some complex caveats, which I also run into from time to time with updown.io and others:

  • it can result in false negative warnings and trigger alerts for no reason.
    Then the question is: what do you do with that information, and how should Uptime Kuma handle/output the metrics?
    Let's say you have 7 locations and 1 or 2 report down while the others report up.
    Are you going to show down on the status page, or up? Are you going to show the metrics publicly for all locations so users can interpret them themselves?
    Or do we need a Raft-style consensus that requires a certain percentage of checks to agree before it changes status down/up.

I do like the idea a lot, as I like it in updown.io, but I think it's a challenge to get something like this developed properly, let alone deployed properly.
Really curious where Uptime Kuma wants to go with features like this.
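The quorum idea raised above can be sketched in a few lines. A hypothetical helper, assuming each location reports a simple up/down boolean; the 50% threshold and the report shape are illustrative, not anything Uptime Kuma implements:

```python
# Hypothetical sketch: decide an aggregate status from per-location check
# results using a simple quorum threshold, one way to damp false alarms
# when a single probe location has its own network problems.

def aggregate_status(reports, down_quorum=0.5):
    """Return "down" only if at least `down_quorum` of reporting
    locations saw a failure; otherwise "up".

    `reports` maps a location name to True (up) or False (down).
    """
    if not reports:
        raise ValueError("no reports to aggregate")
    downs = sum(1 for up in reports.values() if not up)
    return "down" if downs / len(reports) >= down_quorum else "up"

# Example: 2 of 7 probes report down -- below the 50% quorum, so the
# aggregate stays "up" and no alert would fire.
example = {f"loc{i}": i >= 2 for i in range(7)}
```

A percentage threshold like this is far simpler than full Paxos/Raft consensus, at the cost of assuming all reports reach one aggregation point.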

@trogper commented on GitHub (Dec 29, 2021):

I agree it would be nice to have, but I think kuma is supposed to be light-weight, not enterprise-ready. Not many people would use it.

@proffalken commented on GitHub (Dec 31, 2021):

> I agree it would be nice to have, but I think kuma is supposed to be light-weight, not enterprise-ready. Not many people would use it.

I'm talking about something that's optional here, not required - Uptime Kuma would continue to run as a single instance by default, but adding some kind of "scaling" ability would be a configuration/settings flag.

At the moment, I'm just deploying multiple Uptime Kuma installations, but until #898 is in a better state it's very difficult to determine which datacentre/cloud region the metrics and alerts are coming from in Prometheus, and it requires custom configuration of each installation.

@Peppershade commented on GitHub (Jan 12, 2022):

It would be nice to have a possibility to have a second docker container on a remote location to handle the same checks to prevent false positives, which could be scalable and report back with a voting system. I'd love to replace Uptime Robot at our office, but just one server to handle the checks is not enough for our infrastructure monitoring.

I do use Uptime Kuma for personal servers and for a large Dutch Minecraft server with a complex server infrastructure. So far I am very happy with this platform, and it has a lot of potential.

@proffalken commented on GitHub (Feb 21, 2022):

> I agree it would be nice to have, but I think kuma is supposed to be light-weight, not enterprise-ready. Not many people would use it.

I'm coming back to this comment because there's soon going to be incident management and reporting added to UptimeKuma, which to me moves it firmly into the Enterprise space as far as functionality is concerned - I know very few people who run incident management on their home labs or similar!

With the above in mind, to me the concept of "remote nodes" becomes even more important - the last thing you want to do is declare an incident that your entire infrastructure is down just because one instance of Uptime Kuma lost its connectivity.

I realise that most will be using additional tooling, but this is just a small thing that would appear to make it a lot easier to monitor disparate systems and consolidate the results in one place.

@temamagic commented on GitHub (Mar 3, 2022):

> it can result in false negatives warnings and triggering alerts for no reason.
> Then the question is: what do you with that information and how should Uptime Kuma handle/output the metrics?
> Let's say you have 7 locations and 1 or 2 report down while others report up.

It should use something like the Paxos distributed consensus algorithm.

It would be cool to have this option.

For example, I check the operation of a service from different servers, but one of them has a network problem. In that case, it will report that "there was a problem." If the check ran from 2 or more places, and only 1 server reported the problem, then I would conclude that the problem is on the checking server, not on the service being checked.

@officiallymarky commented on GitHub (Aug 10, 2022):

Any progress on this?

@jmiaje commented on GitHub (Sep 9, 2022):

This is a badly needed feature that I was looking for :(

@davidak commented on GitHub (Sep 29, 2022):

@hanisirfan @jmiaje @officiallymarky add your 👍 to the initial issue if you want this feature and please don't spam everyone that is subscribed to this issue with unhelpful comments. That makes it take longer. This applies to GitHub and other places in general!

@wokawoka commented on GitHub (Mar 22, 2023):

+1

@tigattack commented on GitHub (Mar 22, 2023):

@wokawoka

> add your 👍 to the initial issue if you want this feature and please don't spam everyone that is subscribed to this issue with unhelpful comments. That makes it take longer. This applies to GitHub and other places in general!

@Sid-Sun commented on GitHub (Oct 1, 2023):

Not exactly remote executors, but I've created a small script on top of Uptime Kuma to do "replication" at the database layer through restic: https://github.com/Sid-Sun/replicator-kuma. There are a few quirks though.

@MikeBishop commented on GitHub (Dec 6, 2023):

It seems like there are two different "versions" of this that can be envisioned, one much more complex than the other.

The more complex one is monitoring one endpoint from many perspectives, and then consolidating the perspectives into an up/degraded/down output. Great to have, but there's a simpler version.

If you just want to monitor certain services from certain viewpoints, that's already possible, just clunky. From each peripheral instance, expose a status page. The central instance adds a monitor which queries /api/status-page/heartbeat/ on the remote and checks that everything / a particular service is status=1.

I think a step in this direction would be the ability to add a monitor type which simply mirrors the state of a monitor from another UptimeKuma instance.
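MikeBishop's workaround could be scripted roughly as follows. The response shape assumed here (a `heartbeatList` keyed by monitor id, whose last entry is the newest beat and where `status == 1` means up) is inferred from the comment, not a documented API contract, and the host and `default` slug are placeholders:

```python
import json
from urllib.request import urlopen

# Placeholder host and status-page slug; substitute your peripheral instance.
HEARTBEAT_URL = "https://remote.example/api/status-page/heartbeat/default"

def all_monitors_up(page: dict) -> bool:
    """True if the newest heartbeat of every monitor on the page is up
    (status == 1). An empty page counts as up."""
    beats = page.get("heartbeatList", {})
    return all(hb[-1]["status"] == 1 for hb in beats.values() if hb)

def check_remote(url: str = HEARTBEAT_URL) -> bool:
    # Network call; the central instance (or a cron job feeding a push
    # monitor) would run this on its check interval.
    with urlopen(url) as r:
        return all_monitors_up(json.load(r))
```

The central instance then only needs one HTTP monitor (or push monitor fed by this script) per peripheral instance, rather than one per remote service.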

@officiallymarky commented on GitHub (Dec 6, 2023):

I think one of the cleaner ways to do this would have an option to install as a slave which links with a master install.

@modem7 commented on GitHub (Dec 10, 2023):

One of the major benefits of something like this (which is how I came across this issue), could be multiple hosts with different setups.

E.g.

Primary/master uptime-kuma instance on a VPS somewhere outside the main infrastructure (especially for self hosting).
Secondary uptime-kuma instance with Docker containers and docker endpoint exposed so that it can monitor said containers.
Tertiary uptime-kuma instance with databases which aren't exposed to the public.

Primary is then able to consolidate the information without exposing the services publicly, including the consolidation of maintenance windows utilising a single node.

This would go hand in hand with @MikeBishop's suggestion of mirroring a monitor from a slave instance.

@maxshcherbina commented on GitHub (Dec 21, 2023):

I could do this now with something like Home Assistant, but I would prefer to have this option!

@jrbgit commented on GitHub (Dec 25, 2023):

Thought I should add to this convo that I am looking for something similar since people are still commenting on this post.

@Sid-Sun commented on GitHub (Jan 13, 2024):

I have added another method to replicator-kuma and documented the different methods at https://github.com/Sid-Sun/replicator-kuma. I haven't yet tried upgrades, but I've been using it in snapshot mode for a few months without issues so far.
P.S. If you plan to use S3, you should go with Cloudflare R2; I quickly reached the free-tier limits on AWS S3. R2 has worked great (and without additional cost).
Edit: Just upgraded and it went as expected.

@JaneX8 commented on GitHub (May 6, 2024):

As mentioned here: https://github.com/louislam/uptime-kuma/issues/1259#issuecomment-2094916395.

> I would love to see this feature. It would be great if multiple nodes of Uptime Kuma can be linked. And that for each check you add there is an option to select which nodes this check should run on. And also use it as a fail condition. As in "report if all fail", "report if N fail". A syncing of tasks would be better, because this way each node can keep running in standalone mode if another is down. Which makes it kind of a distributed network of individual instances that can work standalone as well as cooperate, rather than for example workers that still depend on a master to be online.
>
> This way I would add Uptime Kuma on many of my geographically separated servers and simply make sure my checks work on all of them, without having to configure many different individual instances.

@NHendriks01 commented on GitHub (Jun 3, 2024):

Any news on this? I'm not into enterprise or anything but would love to run multiple instances. My view on this might be a bit different but also a bit simpler. Multiple nodes that all show the same data but also have a monitor to each other. More or less like a CDN. Now there is no redundancy except building it by hand, which is possible but a bit clunky.

@officiallymarky commented on GitHub (Jun 3, 2024):

I don't think it is going to happen.

@codeagencybe commented on GitHub (Jun 3, 2024):

I've been asking and waiting since 2021, and I also don't think it's ever going to happen.
As much as I love Uptime Kuma and have been sponsoring this project, I gave up on this feature and am currently looking into OpenStatus (https://www.openstatus.dev/), which supports multi-zone/region monitoring out of the box.
Just not entirely sure if it is as open source as Uptime Kuma, but it does tick several boxes for features that UK has been missing for many years, and there doesn't seem to be any movement or change coming anytime soon, which is a pity.

@CommanderStorm commented on GitHub (Jun 3, 2024):

I think the basic use case (monitoring one thing from multiple sides) behind this feature can also be resolved via RIPE Atlas.
I know that this is somewhat limited for (very good) ethical+security reasons (e.g. you cannot make HTTP requests to illegal sites from somebody else's device). I think that the limitations are likely fine for most users.

  • #4031

I plan on implementing a monitor for this when I have time again.

About the redundancy aspect:
We are not a distributed system. A distributed system requires an entirely different kind of effort to maintain and extend.
This is especially true in terms of

  • the consistency/leader election/heartbeat scheduling side (yes, Raft is possible, but it defeats the entire point of being simple)
  • security/privacy

Given the engineering resources we currently have, I would currently classify this as out of scope.
Using a second instance (or a commercial status page like UptimeRobot) to monitor whether Uptime Kuma is up is simple enough and does not require this level of complexity.

@ramphex commented on GitHub (Jun 17, 2024):

Also interested in this. I monitor several networks and it would be nice to have a VM with a kuma instance watching all of the local devices there and reporting back to my main mothership. That would help avoid manually visiting several different kuma instances to check up on every network.

@kirsysuv commented on GitHub (Jul 25, 2024):

I have four Uptime Kuma instances: one in my homelab, one in my office, and others on my VPS. None of them are publicly accessible. I have notifications on Telegram and they work fine.
It would be nice if I could see the status from all Uptime Kumas on a single web page.
Maybe Uptime Kuma could support pushing services' status to a webhook endpoint periodically.
Current API: Prometheus calls the /metrics API to get data from Uptime Kuma.
What I want: Uptime Kuma calls the webhook to send data periodically (the same data Prometheus would receive).
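Absent built-in support, the push model described here could be approximated with a tiny sidecar next to each instance; a rough sketch, where both URLs, the ingest endpoint, and the 60-second interval are placeholders rather than anything Uptime Kuma provides:

```python
# Hypothetical sidecar: scrape the local instance's Prometheus /metrics
# output and POST it to a central webhook on a timer. Uptime Kuma has no
# such pusher built in; this just relays what Prometheus would scrape.
import time
from urllib.request import Request, urlopen

METRICS_URL = "http://localhost:3001/metrics"   # local Uptime Kuma
WEBHOOK_URL = "https://central.example/ingest"  # hypothetical receiver

def build_push(metrics_text: bytes) -> Request:
    """Wrap one /metrics scrape as a POST to the central webhook."""
    return Request(WEBHOOK_URL, data=metrics_text,
                   headers={"Content-Type": "text/plain"}, method="POST")

def run(interval: int = 60) -> None:
    """Scrape and push forever, logging (not crashing on) network errors."""
    while True:
        try:
            with urlopen(METRICS_URL) as r:
                with urlopen(build_push(r.read())) as resp:
                    resp.read()
        except OSError as e:
            print("push failed:", e)
        time.sleep(interval)
```

The central side would still need something to receive and display these payloads, which is the part the feature request is really asking for.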

@ClashTheBunny commented on GitHub (Sep 3, 2024):

If this is out of scope, could you add a list of workarounds in a wiki and close this?

I've seen:

  • RIPE Atlas: #4031
  • Running multiple Uptime Kuma instances and scraping them into Prometheus: #4704
  • json_query another Uptime Kuma (or prober) to scrape into a single other Uptime Kuma?

Are there any other ideas or workarounds that I missed?

@zwimer commented on GitHub (Oct 31, 2024):

Among others, there are benefits to doing this for remotely monitoring services on a locked-down system, and for distinguishing between services that are "unreachable from this specific monitor" and services that are "down". That is rather important to me, as the internet connection between my Uptime Kuma proxy instance and the remote server it monitors is shaky, meaning my monitor is sometimes unavailable. I'd rather move the monitor onto the frontend server, but then it couldn't reach the services it tests because of firewalls, so the solution for me here would be a remote executor that sends data to a frontend. Hope this issue :)

@iRazvan2745 commented on GitHub (Nov 8, 2024):

I think this is the most requested feature :))

@iRazvan2745 commented on GitHub (Nov 8, 2024):

I found this repo, https://github.com/CodeSpaceCZ/Uptime-Status, but it doesn't work.

@codeagencybe commented on GitHub (Nov 8, 2024):

> i found this repo https://github.com/CodeSpaceCZ/Uptime-Status but it doesnt work

I switched to the OpenStatus project (also open source), as they have this feature natively.
If you really need multiple checks from multiple regions, I can recommend this project.

https://www.openstatus.dev/

For more simple/single check use cases, Uptime Kuma is still a perfect solution.

@iRazvan2745 commented on GitHub (Nov 8, 2024):

> > i found this repo https://github.com/CodeSpaceCZ/Uptime-Status but it doesnt work
>
> I changed to openstatus project (also open source) as they have this feature native. If you really need multiple checks from multi regions, I can recommend this project.
>
> https://www.openstatus.dev/
>
> For more simple/single check use cases, Uptime Kuma is still a perfect solution.

Oh yeah, openstatus is cool, but I can't manage to self-host it. I even offered to pay the owner to set it up for me, but I just got ghosted.


@derekoharrow commented on GitHub (Dec 4, 2024):

I was just thinking about this, as I'd like to see something similar. If it helps move it forward, here's my idea for how this could be achieved.

Add a new type of monitor: Remote Uptime Kuma. When adding it, you would first register a remote Uptime Kuma instance (much in the same way you add a new Docker host today), and then select the monitor you want to track. That monitor could be one of the monitors on the remote instance, or it could be a "group" monitor, which means it could even be a top-level group monitor.

That way, this preserves the main architecture of Uptime Kuma and just uses a new monitor type to implement the remote monitoring capability.

Thoughts?
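
A rough sketch of how such a monitor type might poll a remote instance. The endpoint path and JSON response shape below are hypothetical, not Uptime Kuma's real internal API; the key point is keeping "remote instance unreachable" distinct from "monitored service down", as raised earlier in this thread:

```javascript
// Sketch only: the `/api/monitor/:id/status` path and the `{ up, ping }`
// response shape are made-up placeholders, not Uptime Kuma's actual API.

// Translate a poll result into a tri-state status, keeping
// "remote Kuma unreachable" distinct from "monitored service down".
function interpretPoll(remoteReachable, body) {
  if (!remoteReachable) return 'remote-unreachable';
  return body.up ? 'up' : 'down';
}

// Poll one monitor on a remote instance (uses Node 18+ global fetch).
async function pollRemoteMonitor(remoteBaseUrl, monitorId) {
  try {
    const res = await fetch(`${remoteBaseUrl}/api/monitor/${monitorId}/status`);
    if (!res.ok) return interpretPoll(false, null);
    return interpretPoll(true, await res.json());
  } catch {
    // Network error: the remote Kuma itself could not be reached.
    return interpretPoll(false, null);
  }
}
```

The main instance would run `pollRemoteMonitor` on the usual check interval, so polling doubles as uptime monitoring of the remote Kuma itself.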


@9k001 commented on GitHub (Jan 23, 2025):

I also have similar needs. Currently, I have about 5 locations that need Uptime Kuma monitoring. I could theoretically connect them through a VPN, but that consumes VPN traffic and also requires opening up the k8s cluster network.
I hope there will be such agent-like functionality, or forwarding to a remote server.


@Husky110 commented on GitHub (Feb 5, 2025):

Adding to @derekoharrow's comment - I was actually thinking more along the lines of how Portainer does things. In my scenario I have multiple servers running multiple Docker containers, and I need various kinds of monitoring (containers are up, pings and latencies to endpoints). Having a central instance that collects data from its clients or remote Kumas would be great. Portainer does its management by having the portainer-agent running, which can only be connected to one instance; something similar would be great here. Plus, if the main Kuma does this via polling, you automatically get general uptime monitoring of the agent-Kuma systems out of the box.


@bsdev90 commented on GitHub (Oct 2, 2025):

Hello everyone, I am also in favor of this feature!
My main Uptime Kuma is on a dedicated online server and does not have access to my containers on my local server, so I have a second Uptime Kuma locally.
I hope this isn't inappropriate (in which case I'll delete this part of the message), but while waiting for an official solution, I created a Node.js app that automatically retrieves the services monitored by a local Uptime Kuma and generates a URL for each service, to be entered into the main Uptime Kuma: if the service is UP, the URL returns HTTP 200; if the service is DOWN, the URL returns HTTP 503. As a result, the main Uptime Kuma marks the service as DOWN too.
If you're interested, the script is here, and there's even a Docker container: https://github.com/bsdev90/subtime-kuma
