Add a "location" field to the configuration for organisations running multiple monitors #117

Closed
opened 2026-02-28 01:35:27 -05:00 by deekerman · 10 comments

Originally created by @proffalken on GitHub (Aug 10, 2021).

Describe the solution you'd like
Given that #84 is a massive task to add, is there any chance of getting a "location" field added to the global settings?

This would allow us to deploy Uptime-Kuma to multiple geographic locations checking the same URLs, and then filter/compare the response times based on those locations (as you know I'm all about the Prometheus, so this would then become a common label in the prom code).
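As a rough illustration, here is a minimal sketch of how a global "location" setting could become a common label, assuming an exporter built on prom-client (the library behind Uptime Kuma's `/metrics` endpoint); the `LOCATION` environment variable and the metric name are illustrative assumptions, not Uptime Kuma's actual schema:

```ts
// Minimal sketch, assuming a prom-client based exporter. The LOCATION
// environment variable and the metric name are illustrative only.
import { Gauge, register } from "prom-client";

// A single global "location" setting becomes a default label that is
// stamped onto every series this instance exports.
register.setDefaultLabels({ location: process.env.LOCATION ?? "unknown" });

const responseTime = new Gauge({
    name: "monitor_response_time",
    help: "Monitor response time in ms",
    labelNames: ["monitor_name"],
});

responseTime.set({ monitor_name: "example" }, 123);

// Each deployment (eu-west-2, us-east-1, ...) then exports the same
// series, distinguished only by its location label, so response times
// can be compared across locations in PromQL:
//   avg by (location) (monitor_response_time{monitor_name="example"})
```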


@TychoWerner commented on GitHub (Aug 23, 2021):

Custom fields like this would have my preference; I wish to add a Venue & Discipline.


@proffalken commented on GitHub (Aug 24, 2021):

> Custom fields like this would have my preference; I wish to add a Venue & Discipline.

I'm intrigued; are you able to share what venue and discipline relate to?

The idea of adding custom metadata to a check is definitely interesting, and potentially a really neat way of implementing this.

My original plan was to deploy Uptime Kuma into multiple regions of AWS/GCP and use the "location" field to track where it is, but doing it through metadata and infinitely customisable "tags" could be an even better approach!


@TychoWerner commented on GitHub (Aug 24, 2021):

> > Custom fields like this would have my preference; I wish to add a Venue & Discipline.
>
> I'm intrigued; are you able to share what venue and discipline relate to?
>
> The idea of adding custom metadata to a check is definitely interesting, and potentially a really neat way of implementing this.
>
> My original plan was to deploy Uptime Kuma into multiple regions of AWS/GCP and use the "location" field to track where it is, but doing it through metadata and infinitely customisable "tags" could be an even better approach!

I will elaborate on my idea. I have devices I wish to monitor.
Those devices all exist in the same building but belong to different venues inside it. So let's say we have a 'Pop' venue with 2 computers: computer 1 is for audio and computer 2 is for light.

Those computers would have the venue 'Pop' and, in our case, the disciplines Audio and Light respectively.
We have certain people who work for the whole venue 'Pop', and others who work on Audio in general across all venues.

Notification options by custom metadata would be great; let's say (a rough sketch follows at the end of this comment):

  • something out of the venue 'Pop' goes offline > send a message to a Telegram group for people involved with that venue
  • something with the discipline 'Audio' goes offline > send a message to a Telegram group for people involved with Audio

Edit: You could also see disciplines as servers/networking/phones/workstations, each of which will have different people responding to it.

I hope you will consider these custom metadata options.
If you have any questions about my idea, do not hesitate to ask them; I am more than happy to try and answer them 😄
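A minimal sketch of this kind of tag-based routing, with hypothetical types and a hand-written routing table (none of this is an existing Uptime Kuma API):

```ts
// Hypothetical sketch of routing "down" alerts by custom metadata.
// The Monitor shape, routing table, and chat IDs are invented purely
// to illustrate the venue/discipline idea above.
interface Monitor {
    name: string;
    tags: Record<string, string>; // e.g. { venue: "Pop", discipline: "Audio" }
}

// Map tag key/value pairs to Telegram chat IDs (placeholder values).
const telegramGroups: Record<string, Record<string, string>> = {
    venue: { Pop: "<pop-venue-chat-id>" },
    discipline: { Audio: "<audio-team-chat-id>" },
};

function chatIdsFor(monitor: Monitor): string[] {
    const ids: string[] = [];
    for (const [key, value] of Object.entries(monitor.tags)) {
        const id = telegramGroups[key]?.[value];
        if (id !== undefined) ids.push(id);
    }
    return ids;
}

// A monitor tagged venue=Pop and discipline=Audio notifies both groups:
chatIdsFor({ name: "pop-audio-pc", tags: { venue: "Pop", discipline: "Audio" } });
```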


@github-actions[bot] commented on GitHub (Feb 20, 2022):

We are clearing up our old issues and your ticket has been open for 6 months with no activity. Remove stale label or comment or this will be closed in 7 days.


@proffalken commented on GitHub (Feb 21, 2022):

Still trying to get traction on this via #680 - would be great to get this done and merged.


@CommanderStorm commented on GitHub (Jul 17, 2023):

@proffalken
I think this is a duplicate of https://github.com/louislam/uptime-kuma/issues/680
If you agree, could you please close this Issue, as duplicates only create immortal zombies and are really hard to issue-manage?
If not, what makes this issue unique enough to require an additional issue? (Could this be integrated into the issue linked above?) ^^


@proffalken commented on GitHub (Jul 27, 2023):

@CommanderStorm apologies, completely missed this ping!

This is slightly different to #680.

This ticket is to get generic custom fields applied to checks within Uptime Kuma.

#680 is to ensure that those custom fields are then propagated through to the Prometheus metrics.

Does that make sense?


@CommanderStorm commented on GitHub (Jul 27, 2023):

I am unsure what you mean by

> to get generic custom fields applied to checks within Uptime Kuma

Where would those be exposed other than the metrics?

I am confused about what you mean, as https://github.com/louislam/uptime-kuma/issues/202#issuecomment-904465239 is not a clear description of what is needed (or I am not smart enough to understand ^^).
It seems like this is referring to being able to configure different notifications for different monitors, which is already supported.


@proffalken commented on GitHub (Jul 27, 2023):

@CommanderStorm - I've just checked; the "custom tags" didn't exist as an option when I originally raised this issue, and they now meet the requirements.

I'll close this off; hopefully we'll get #680 merged so that if you create a new alert with a tag of `aws_region=eu-west-2`, it is then exposed as a label in the Prometheus metrics.
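For what that could look like, here is a minimal sketch, again assuming a prom-client based exporter; exposing the tag as an extra label is an assumption about how #680 might behave, not its actual implementation:

```ts
import { Gauge } from "prom-client";

// Sketch only: expose a custom tag as an extra Prometheus label.
// The metric name and label set are illustrative, not #680's code.
const responseTime = new Gauge({
    name: "monitor_response_time",
    help: "Monitor response time in ms",
    labelNames: ["monitor_name", "aws_region"],
});

// A monitor tagged aws_region=eu-west-2 reports under that label:
responseTime.set({ monitor_name: "api", aws_region: "eu-west-2" }, 87);

// Per-region response times for the same check can then be compared:
//   avg by (aws_region) (monitor_response_time{monitor_name="api"})
```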


@CommanderStorm commented on GitHub (Jul 27, 2023):

Ah nice
