Discussion: Defining a Shared Security Model for Uptime Kuma #4598

Open
opened 2026-02-28 04:08:43 -05:00 by deekerman · 6 comments
Owner

Originally created by @CommanderStorm on GitHub (Jan 11, 2026).

Context

Uptime Kuma has grown in popularity and complexity, and security considerations increasingly affect feature design (e.g. multi-user support, permissions, monitor capabilities).
Maintainers already handle security topics informally, but we currently lack a shared structure or threat model.
As a result, when a report comes in, it is often unclear whether it actually constitutes a security issue.

Examples of questions that have been raised:

  • Can attackers insert USB devices into the host system?
  • Can attackers read from or write to the file system?
  • Do attackers have admin-level access to the host?
  • Do attackers have admin-level access in Uptime Kuma?
  • Is authenticated server-side request forgery considered in-scope for us?
  • Are these open metadata endpoints on Google Cloud an exploit in us or in Google Cloud?

A clearer shared model would help answer these questions more consistently.

Proposal

I would like to introduce two security-related groups with clearly separated responsibilities:

  • public Uptime Kuma Security Working Group

    Purpose

    This work is expected to unblock future features (e.g. multi-user support, safer monitor designs) by making it clear what "being secure" means for us, and to improve our processes.

    • Threat modeling and security design discussions
    • Improve overall security posture (scorecard, dependency management, ...)

    Membership

    • Open to anyone
    • No security background required

    Typical expectations

    • Willingness to learn and discuss security topics
    • Constructive participation
    • Availability for occasional online meetings
  • private Uptime Kuma Security Triage Team

    Purpose

    • Handle infrequent private vulnerability reports
    • Coordinate fixes, releases, and advisories
    • Handle embargoed information

    Membership

    • By invitation only

    Typical requirements

    • Maintainer or long-term contributor (this or another project)
    • Demonstrated responsible behavior and good judgment
    • Ability to handle sensitive / embargoed information
    • Demonstrated project involvement (issues, PRs, reviews, etc.)
    • Availability during coordinated disclosures

    Requirements are guidelines, not strict rules. Trust matters most.

Call to Action

I’d like feedback from other maintainers and the community on this proposal:

  • Does this structure make sense for Uptime Kuma?
  • Are the scopes and responsibilities clearly defined?
  • Are there important security assumptions or threat boundaries we should explicitly document?
  • If you’re interested in participating in the Security Working Group, please comment on this issue.
  • If you’re a maintainer or long-term contributor and think you might be a good fit for the Security Triage Team, feel free to reach out privately.

The goal here is not to add bureaucracy, but to make security discussions clearer, more consistent, and easier to handle as the project continues to grow.

@CommanderStorm commented on GitHub (Jan 18, 2026):

Proposal: Baseline Security Model (Draft)

I’d like to propose a baseline security model for Uptime Kuma to clarify assumptions and guide future security discussions:

  • Network-external attacker only
  • Trusted admins and underlying system
  • Out of scope: insider threats and compromised infrastructure

Currently, we’ve been acting as if malicious admins and compromised infrastructure were in scope, which is hard or impossible to uphold.
Reasonable hardening (credentials, monitor configs, accidental missteps, ...) remains in scope.
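
If adopted, this baseline could be captured as a short scope section, for instance in SECURITY.md. A hypothetical sketch (wording illustrative only, restating the bullets above):

```markdown
## Security scope (draft)

**In scope**
- Attacks by network-external (unauthenticated) parties
- Reasonable hardening: credential handling, monitor configuration, accidental missteps

**Out of scope**
- Malicious Uptime Kuma administrators (insider threats)
- Compromised or misconfigured host infrastructure
```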

Questions for feedback

  • Does this attacker/assumption model make sense for Uptime Kuma?
  • Are there risks we’re overlooking by excluding insider threats?
  • Are there hardening practices you think should be explicitly in scope?

This is intended as a starting point, not a final decision.
I’d love input from other contributors before we formalize it.

@ChlorideCull commented on GitHub (Jan 21, 2026):

> Does this attacker/assumption model make sense for Uptime Kuma?

Mostly.

If we talk in CVSS terms, I think "Network" and "Adjacent" would make sense for being in scope. In Uptime Kuma's case, that would be over the Internet, and on the local subnet.

"Local" (essentially terminal access, physical or over SSH) should also be in scope, assuming "Privileges Required" would be "None" or "Low" - this brings things like overly permissive default file permissions into scope.

OS misconfiguration and such are beyond the control of Uptime Kuma, and thus explicitly out of scope.

In other words: Trust people with root shell access, trust people with physical access to the hardware, trust that the OS is configured securely. Don't trust anything else.

As for how to handle admin users in Uptime Kuma, that's up for debate, I suppose. You definitely don't want an admin user to be able to use their admin status in Uptime Kuma to gain privileges on the host, because they might not even have access to the host (think of a vendor hosting Uptime Kuma for multiple companies, for example). But when it comes to doing things to Uptime Kuma, it's reasonable to take the same approach as Microsoft takes with Windows: Microsoft generally doesn't consider a process running as Administrator being able to do malicious things a vulnerability, because Administrators aren't limited by design.

> Are there risks we’re overlooking by excluding insider threats?

Insider threats should be in scope IMO, because on an internet-accessible service that will also cover any attacker who manages to crack an account. This is inherently a much more subjective area, though, where you'll just have to make decisions on what restrictions should apply in the name of reducing the scope of damage that could be done.

A common hard boundary for insider threats is making sure no one can actively mess up the host or other services on the host, so that damage would be limited to the Uptime Kuma instance itself.

This would of course be limited by the scope mentioned above as well. Uptime Kuma can't really do anything against `rm -rf /*`, after all.

> Are there hardening practices you think should be explicitly in scope?

Explicitly keep track of and document what data is trusted and untrusted; in other words, make sure the code has clear trust boundaries. Having clear trust boundaries prevents so many vulnerabilities.
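
As a minimal sketch of what a trust boundary in code can look like (names such as `validateMonitorUrl` and `TRUSTED_PROTOCOLS` are hypothetical, not Uptime Kuma APIs): everything arriving from a client is untrusted until it passes one validation choke point.

```javascript
// Hypothetical sketch of a single validation choke point for untrusted input.
const TRUSTED_PROTOCOLS = new Set(["http:", "https:"]);

// Everything arriving from a client socket is untrusted until it passes here.
function validateMonitorUrl(rawInput) {
    let url;
    try {
        url = new URL(String(rawInput)); // throws on malformed input
    } catch {
        return { ok: false, reason: "malformed URL" };
    }
    if (!TRUSTED_PROTOCOLS.has(url.protocol)) {
        return { ok: false, reason: `protocol ${url.protocol} not allowed` };
    }
    // Past this point the value is "trusted" only in the sense that it has a
    // known shape; whether the *destination* is acceptable remains a
    // deployment-level question, as discussed in this thread.
    return { ok: true, url };
}
```

The point is not the specific checks but that the boundary is explicit and documented, so reviewers know which values downstream code may assume are well-formed.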

@CommanderStorm commented on GitHub (Jan 23, 2026):

> how to handle admin users in Uptime Kuma, that's up for debate

So which way would you argue on https://github.com/louislam/uptime-kuma/security/advisories/GHSA-qjxc-h5jf-c7rj?

@ChlorideCull commented on GitHub (Jan 23, 2026):

> So which way would you argue on GHSA-qjxc-h5jf-c7rj?

I would've rejected/objected by virtue of it not being a vulnerability in Uptime Kuma. There's no SSRF, because that inherently requires the actual request to not be intended. An administrator using the intended functionality of "send an HTTP request" to send an HTTP request is not a vulnerability. Uptime Kuma can never have an exhaustive list of what endpoints it shouldn't have access to, so that's up to people to block themselves.

In general, if a behavior doesn't make you go "that shouldn't happen" or "that's unexpected", it's not a security issue. (Exceptions stemming from unsafe design do occur here, though.)

Not to mention this would also be rejected by all cloud vendors, because the fact that the VM/deployed resource is a trust boundary is documented at all of them, and it's up to the customer to configure their VM/deployed resource so that unauthorized requests to the metadata service cannot be performed.

There's a larger discussion that could be had about the quality of security reports you can expect, how much they need to be scrutinized, and how scary numbers like CVSS are fairly subjective, but the gist is that you'll need to tell people they are wrong sometimes.

Looking at the CVE history, there are several I would definitely re-score, as they claim that only low privileges are required, but as Uptime Kuma doesn't have non-admin users, it would've been high privileges.
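
For illustration of how much that single metric moves a score (a generic CVSS v3.1 vector, not tied to any specific Uptime Kuma CVE), changing only Privileges Required from Low to High:

```text
CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H  ->  8.8 (High)
CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H  ->  7.2 (High)
```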

@drkim-dev commented on GitHub (Jan 23, 2026):

Hello @louislam, @CommanderStorm, @ChlorideCull, and the community,

Thank you for opening this discussion. I am the researcher who recently reported the "Visual SSRF via Browser Engine" (which was marked as a duplicate of a previous IMDS report). I’ve reviewed the shared security model arguments, and I’d like to provide a perspective on why the current "intended feature" logic poses a critical risk.

  1. The "Authenticated User" Fallacy The current model assumes that since a user is authenticated, they should have full access to the host's internal network. However, this ignores Shared Instances and Demo Sites. My demonstration on the official demo.kuma.pet proved that any user (even on a demo platform) can exfiltrate AWS IMDS data. If a tool can be easily weaponized to compromise the underlying infrastructure, it is a design flaw, not just a configuration issue.

  2. Visual SSRF vs. Standard SSRF: We must distinguish between a monitor "checking a port" and a "Headless Browser capturing a screenshot." The latter is a Visual Exfiltration Channel. By allowing the Browser Engine to render 169.254.169.254 or internal admin panels, Uptime Kuma provides an automated way to bypass firewalls and visually capture sensitive credentials (IAM roles, internal PII) that were never meant to leave the internal network.

  3. Proposed Security Boundary ("Secure by Default"): Uptime Kuma should prioritize protection for the majority of users who may not be security experts. I propose:

  • Default Deny for Sensitive IPs: Block access to 169.254.169.254 and local loopback ranges by default.
  • Explicit Opt-in for Internal Monitoring: If a user truly needs to monitor the internal metadata endpoint, they should have to explicitly enable a "Dangerous: Allow Internal Network SSRF" toggle in the settings.
  • Network Sandboxing: The Browser Engine should be isolated from the host's management network by default.

Conclusion: A monitoring tool should be a "watchman," not a "bridge" for attackers. Relying on users to secure their environment is not enough when the tool itself provides a convenient way to exfiltrate data. I hope my PoC on the demo site serves as a clear example of why these boundaries are necessary.

Best regards, @drkim-dev

@ChlorideCull commented on GitHub (Jan 23, 2026):

@drkim-dev,

To begin with, as mentioned before, none of this is SSRF, since the "forgery" part requires it to be unintentional. It's operating as designed. Continuing,

> The current model assumes that since a user is authenticated, they should have full access to the host's internal network.

The current model assumes that since a user is authenticated, they should have the ability to make Uptime Kuma invoke arbitrary requests. This is an important distinction, because there are other layers at which you can block those requests, like a container orchestrator or a machine firewall. This isn't that weird either: it is already the case for any other service capable of making outgoing requests on the whims of a user.

> My demonstration on the official demo.kuma.pet proved that any user (even on a demo platform) can exfiltrate AWS IMDS data.

They can do so in a non-production environment where no outgoing firewall is present, unlike most production environments. You might have guessed that the temporary demo instance is not a production environment.

> We must distinguish between a monitor "checking a port" and a "Headless Browser capturing a screenshot."

We really don't. All you get is more information per request - you can just make more requests to compensate. For example, SQLMap can dump an entire SQL database based on timing alone, as long as it can make that timing dependent on the data.

> Uptime Kuma should prioritize protection for the majority of users who may not be security experts.

I would argue that setting up a firewall is not in the realm of "security experts" but rather a very basic skill in server administration.

> Default Deny for Sensitive IPs: Block access to 169.254.169.254 and local loopback ranges by default.

Generally, a half-assed partial "fix" that brings a false sense of security is worse than no change at all. Uptime Kuma can never know what IPs are sensitive on the network, hence it can never block them. The only ones capable of properly blocking sensitive IPs are the people setting it up, and they have plenty of tools to do so available outside Uptime Kuma.

There are also cloud services using IPs other than 169.254.169.254 for metadata endpoints, like Akamai and Alibaba Cloud. Anything running CloudStack doesn't even have a fixed IP for the metadata endpoint.

> If a user truly needs to monitor the internal metadata endpoint, they should have to explicitly enable a "Dangerous: Allow Internal Network SSRF" toggle in the settings.

This implies that it's somehow less dangerous to have those endpoints blocked, because the ability to make arbitrary requests is fine otherwise, or what? Uptime Kuma makes arbitrary requests (which is dangerous) by design, and it's reasonable to expect that someone installing it is aware of that.

Your argument at the moment is kinda like saying that a web control panel has "authenticated remote code execution" because it can run shell commands, and that it will be safer if you block rm because that means you can't delete files.

Also, if someone has the ability to add a new monitor, they have the ability to enable that toggle...

> Network Sandboxing: The Browser Engine should be isolated from the host's management network by default.

That's a great idea, and should probably apply to all of Uptime Kuma. Uptime Kuma can't be aware of what's a "management network" though, so it's almost like that's something the user should be configuring their firewall for when deploying it, and not an issue with Uptime Kuma itself.


> Relying on users to secure their environment is not enough when the tool itself provides a convenient way to exfiltrate data.

`curl` is also convenient, so is any web browser, and as long as Uptime Kuma is capable of making any external requests, it will be a convenient way to exfiltrate data.
