Mirror of https://github.com/louislam/uptime-kuma.git (synced 2026-03-02 22:57:00 -05:00)
Discussion: Defining a Shared Security Model for Uptime Kuma #4598
Originally created by @CommanderStorm on GitHub (Jan 11, 2026).
Context
Uptime Kuma has grown in popularity and complexity, and security considerations increasingly affect feature design (e.g. multi-user support, permissions, monitor capabilities).
Maintainers already handle security topics informally, but we currently lack a shared structure or threat model.
As a result, when reports come in, it is often unclear whether something actually qualifies as a security issue.
Several such questions have been raised, and a clearer shared model would help answer them more consistently.
Proposal
I would like to introduce two security-related groups with clearly separated responsibilities:
public Uptime Kuma Security Working Group
Purpose
This work is expected to unblock future features (e.g. multi-user support, safer monitor designs) by making it clear what "being secure" means for us, and to improve our processes.
Membership
Typical expectations
private Uptime Kuma Security Triage Team
Purpose
Membership
Typical requirements
Requirements are guidelines, not strict rules. Trust matters most.
Call to Action
I’d like feedback from other maintainers and the community on this proposal:
The goal here is not to add bureaucracy, but to make security discussions clearer, more consistent, and easier to handle as the project continues to grow.
@CommanderStorm commented on GitHub (Jan 18, 2026):
Proposal: Baseline Security Model (Draft)
I’d like to propose a baseline security model for Uptime Kuma to clarify assumptions and guide future security discussions:
So far, we have been acting as if malicious admins and compromised infrastructure were in scope, a stance that is hard (if not impossible) to uphold.
Reasonable hardening (credentials, monitor configs, accidental missteps, ...) remains in scope.
Questions for feedback
This is intended as a starting point, not a final decision.
I’d love input from other contributors before we formalize it.
@ChlorideCull commented on GitHub (Jan 21, 2026):
Mostly.
If we talk in CVSS terms, I think "Network" and "Adjacent" would make sense for being in scope. In Uptime Kuma's case, that would be over the Internet, and on the local subnet.
"Local" (essentially terminal access, physical or over SSH) should also be in scope, assuming "Privileges Required" would be "None" or "Low" - this brings things like overly permissive default file permissions into scope.
OS misconfiguration and such are beyond the control of Uptime Kuma, and thus explicitly out of scope.
In other words: Trust people with root shell access, trust people with physical access to the hardware, trust that the OS is configured securely. Don't trust anything else.
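The scoping rule above can be written down as a small predicate. This is a minimal sketch (the function name and shape are illustrative, not project code), using the CVSS v3.1 metric abbreviations for Attack Vector (N/A/L/P) and Privileges Required (N/L/H):

```javascript
// Sketch of the proposed scope policy as a predicate over CVSS v3.1
// Attack Vector (AV) and Privileges Required (PR) metric values.
function inScope(attackVector, privilegesRequired) {
    // "Network" and "Adjacent" attack vectors are always in scope.
    if (attackVector === "N" || attackVector === "A") {
        return true;
    }
    // "Local" is in scope only with PR "None" or "Low", which covers
    // things like overly permissive default file permissions.
    if (attackVector === "L") {
        return privilegesRequired === "N" || privilegesRequired === "L";
    }
    // Physical access, root shells, OS misconfiguration: out of scope.
    return false;
}
```

Writing the boundary down like this also makes re-scoring incoming reports mechanical rather than ad hoc.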
As for how to handle admin users in Uptime Kuma, that's up for debate, I suppose. You definitely don't want an admin user to be able to use their admin status in Uptime Kuma to gain privileges on the host, because they might not even have access to the host (think of a vendor hosting Uptime Kuma for multiple companies). When it comes to doing things to Uptime Kuma itself, though, it's reasonable to take the same approach Microsoft takes with Windows: Microsoft generally doesn't consider a process running as Administrator being able to do malicious things a vulnerability, because Administrators aren't limited by design.
Insider threats should be in scope IMO, because on an internet accessible service that will also cover any attacker who manages to crack an account. This is inherently much more of a subjective area though, where you'll just have to make decisions on what restrictions should apply in the name of reducing the scope of damage that could be done.
A common hard boundary for insider threats is making sure no one can actively mess up the host or other services on the host, so that damage would be limited to the Uptime Kuma instance itself.
This would of course be limited by the scope mentioned above as well. Uptime Kuma can't really do anything against `rm -rf /*`, after all.
Explicitly keep track of, and document, what data is trusted and untrusted; in other words, make sure the code has clear trust boundaries. Having clear trust boundaries prevents so many vulnerabilities.
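One way to make such a trust boundary explicit in code is to wrap externally supplied data so it cannot be used until it passes a named validator. A minimal sketch (the class name and API are invented here, not Uptime Kuma code):

```javascript
// Illustrative only: untrusted input is wrapped, and the only way to
// get the raw value out is through an explicit validation step, which
// makes the trust boundary visible at every call site.
class Untrusted {
    #value;

    constructor(value) {
        this.#value = value;
    }

    // Returns the raw value only if the supplied check passes;
    // otherwise the data never escapes the wrapper.
    validate(check) {
        if (!check(this.#value)) {
            throw new Error("validation failed at trust boundary");
        }
        return this.#value;
    }
}

// Example: a monitor name arriving from an HTTP request body.
const name = new Untrusted("my-monitor")
    .validate((s) => typeof s === "string" && s.length <= 150);
```

The point is not this particular pattern, but that crossing from untrusted to trusted data is a deliberate, greppable step rather than an implicit assumption.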
@CommanderStorm commented on GitHub (Jan 23, 2026):
So which way would you argue on https://github.com/louislam/uptime-kuma/security/advisories/GHSA-qjxc-h5jf-c7rj ?
@ChlorideCull commented on GitHub (Jan 23, 2026):
I would've rejected/objected by virtue of it not being a vulnerability in Uptime Kuma. There's no SSRF, because that inherently requires the actual request to not be intended. An administrator using the intended functionality of "send an HTTP request" to send an HTTP request is not a vulnerability. Uptime Kuma can never have an exhaustive list of what endpoints it shouldn't have access to, so that's up to people to block themselves.
In general, if a behavior doesn't make you go "that shouldn't happen" or "that's unexpected", it's not a security issue. (Exceptions stemming from unsafe design do occur here, though.)
Not to mention this would also be rejected at all cloud vendors, because the fact that the VM/deployed resource is a trust boundary is documented on all of them, and it's up to the customer to configure their VM/deployed resource so unauthorized requests to the metadata service cannot be performed.
There's a larger discussion that could be had about the quality of security reports you can expect, how much they need to be scrutinized, and how scary numbers like CVSS are fairly subjective, but the gist is that you'll need to tell people they are wrong sometimes.
Looking at the CVE history, there are several I would definitely re-score, as they claim that only low privileges are required, but as Uptime Kuma doesn't have non-admin users, it would've been high privileges.
@drkim-dev commented on GitHub (Jan 23, 2026):
Hello @louislam, @CommanderStorm, @ChlorideCull, and the community,
Thank you for opening this discussion. I am the researcher who recently reported the "Visual SSRF via Browser Engine" (which was marked as a duplicate of a previous IMDS report). I’ve reviewed the shared security model arguments, and I’d like to provide a perspective on why the current "intended feature" logic poses a critical risk.
The "Authenticated User" Fallacy The current model assumes that since a user is authenticated, they should have full access to the host's internal network. However, this ignores Shared Instances and Demo Sites. My demonstration on the official demo.kuma.pet proved that any user (even on a demo platform) can exfiltrate AWS IMDS data. If a tool can be easily weaponized to compromise the underlying infrastructure, it is a design flaw, not just a configuration issue.
Visual SSRF vs. Standard SSRF
We must distinguish between a monitor "checking a port" and a "Headless Browser capturing a screenshot." The latter is a Visual Exfiltration Channel. By allowing the Browser Engine to render 169.254.169.254 or internal admin panels, Uptime Kuma provides an automated way to bypass firewalls and visually capture sensitive credentials (IAM roles, internal PII) that were never meant to leave the internal network.
Proposed Security Boundary: "Secure by Default"
Uptime Kuma should prioritize protection for the majority of users who may not be security experts. I propose:
Default Deny for Sensitive IPs: Block access to 169.254.169.254 and local loopback ranges by default.
Explicit Opt-in for Internal Monitoring: If a user truly needs to monitor the internal metadata endpoint, they should have to explicitly enable a "Dangerous: Allow Internal Network SSRF" toggle in the settings.
Network Sandboxing: The Browser Engine should be isolated from the host's management network by default.
Conclusion: A monitoring tool should be a "watchman," not a "bridge" for attackers. Relying on users to secure their environment is not enough when the tool itself provides a convenient way to exfiltrate data. I hope my PoC on the demo site serves as a clear example of why these boundaries are necessary.
Best regards, @drkim-dev
@ChlorideCull commented on GitHub (Jan 23, 2026):
@drkim-dev,
To begin with, as mentioned before, none of this is SSRF, since the "forgery" part requires it to be unintentional. It's operating as designed. Continuing,
The current model assumes that since a user is authenticated, they should have the ability to make Uptime Kuma invoke arbitrary requests. This is an important distinction, because there are other layers at which you can block those requests, such as a container orchestrator or a machine firewall. This isn't that unusual either: it is already the case for any other service capable of making outgoing requests on the whims of a user.
They can do so in a non-production environment where no outgoing firewall is present, unlike most production environments. You might have guessed that the temporary demo instance is not a production environment.
We really don't. All you get is more information per request - you can just make more requests to compensate. For example, SQLMap can dump an entire SQL database based on timing alone, as long as it can make that timing dependent on the data.
I would argue that setting up a firewall is not in the realm of "security experts" but rather a very basic skill in server administration.
Generally, a half-assed partial "fix" that brings a false sense of security is worse than no change at all. Uptime Kuma can never know what IPs are sensitive on the network, hence it can never block them. The only ones capable of properly blocking sensitive IPs are the people setting it up, and they have plenty of tools to do so available outside Uptime Kuma.
There are also cloud services using IPs other than 169.254.169.254 for metadata endpoints, like Akamai and AliBaba Cloud. Anything running CloudStack doesn't even have a fixed IP for the metadata endpoint.
This implies that it's somehow less dangerous to have those endpoints blocked, because the ability to make arbitrary requests is fine otherwise, or what? Uptime Kuma makes arbitrary requests (which is dangerous) by design, and it's reasonable to expect that someone installing it is aware of that.
Your argument at the moment is kinda like saying that a web control panel has "authenticated remote code execution" because it can run shell commands, and that it would be safer if you blocked `rm` because that means you can't delete files.
Also, if someone has the ability to add a new monitor, they have the ability to enable that toggle...
That's a great idea, and should probably apply to all of Uptime Kuma. Uptime Kuma can't be aware of what's a "management network" though, so it's almost like that's something the user should be configuring their firewall for when deploying it, and not an issue with Uptime Kuma itself.
`curl` is also convenient, and so is any web browser. As long as Uptime Kuma is capable of making any external requests, it will be a convenient way to exfiltrate data.