mirror of
https://github.com/louislam/uptime-kuma.git
synced 2026-03-02 22:57:00 -05:00
uptime kuma 100% usage of cpu #2839
Originally created by @syamsullivan on GitHub (Nov 24, 2023).
⚠️ Please verify that this bug has NOT been raised before.
🛡️ Security Policy
📝 Describe your problem
CONTAINER ID   NAME               CPU %     MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O      PIDS
70b8a69b1ae8   uptime-kuma-saas   103.76%   133.9MiB / 7.637GiB   1.71%   991kB / 8.08MB   41kB / 129MB   12
I have an issue with Uptime Kuma: it only uses a single core, which affects dashboard performance.
Any suggestions?
I'm using Docker:
Docker version 24.0.5, build ced0996
CentOS 7
8 cores, 8 GB RAM
📝 Error Message(s) or Log
No response
🐻 Uptime-Kuma Version
1.22.1
💻 Operating System and Arch
CentOS Linux release 7.9.2009
🌐 Browser
Version 117.0.5938.88
🐋 Docker Version
Docker version 24.0.5
🟩 NodeJS Version
No response
@chakflying commented on GitHub (Nov 24, 2023):
You can post the container logs, the output of htop when run inside the container, and the number and types of monitors you are running to help with troubleshooting.
@CommanderStorm commented on GitHub (Nov 24, 2023):
*also include the retention time you configured
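The diagnostics requested above can be collected along these lines (a sketch: the container name uptime-kuma-saas is taken from the docker stats output earlier in this issue, and a one-shot `top -b` is used as a non-interactive stand-in for htop, which may not be installed in the image):

```shell
# Recent container logs
docker logs --tail 200 uptime-kuma-saas

# One-shot per-process CPU/memory snapshot inside the container
docker exec uptime-kuma-saas top -b -n 1

# Resource summary of the container itself
docker stats --no-stream uptime-kuma-saas
```

If `top` is unavailable in the image, open a shell with `docker exec -it uptime-kuma-saas bash` and install htop there before capturing the output.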
@syamsullivan commented on GitHub (Nov 27, 2023):
Does retention cost CPU?
Also, I use Docker as the main platform to deploy Kuma.
It always uses 100% CPU; should I increase the CPU limit of the container?
@CommanderStorm commented on GitHub (Nov 27, 2023):
@syamsullivan please give us the information we asked for.
See https://github.com/louislam/uptime-kuma/wiki/Troubleshooting if you need help getting this information.
Retention is not a likely culprit. Please report it anyway.
That depends on what you set your limits to. One CPU is the maximum Node.js should use.
Note that CPU limits were originally designed to curb power consumption in large datacenters. Use this feature of your runtime with caution.
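If you do choose to set a limit, it can be applied to an existing container or at creation time; a sketch (the container name and the value 2 are example placeholders, not taken from this issue):

```shell
# Cap a running container at the equivalent of 2 CPUs
docker update --cpus 2 uptime-kuma

# Or set the limit when creating the container
docker run -d --name uptime-kuma --cpus 2 \
  -p 3001:3001 -v uptime-kuma:/app/data \
  --restart unless-stopped louislam/uptime-kuma:1
```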
@bmdbz commented on GitHub (Jan 11, 2024):
I have the same problem.
The problem is exacerbated when I log into the web UI.
Generally, after I restart the Docker container I can log in to the web UI and see the monitoring items normally. After a while (maybe 15 minutes or less), when I open the web UI again the interface no longer displays any monitoring items (but the monitoring tasks are actually still running).
I have 500+ monitoring items.
I mainly chose uptime-kuma because it is easier than tools like Zabbix, but the 100% CPU utilisation keeps me from adopting it.
@CommanderStorm commented on GitHub (Jan 11, 2024):
@bmdbz
Could you report the values for:
htop-outputNote that the first beta of v2.0 is still a few weeks out, but said release will come with a lot of performance improvements.
In v1, 500+ (depending what "+" means) is likely pushing it.
@bmdbz commented on GitHub (Jan 11, 2024):
Thank you for your reply.
The htop output will be provided tomorrow, after querying the environment.
In v1, 500+ means more than 500.
@bmdbz commented on GitHub (Jan 11, 2024):
The above is the output screenshot of htop, thank you!
@CommanderStorm commented on GitHub (Feb 29, 2024):
I missed this response.
The htop output you reported is sorted by memory; could you sort by CPU utilisation instead?
In the screenshot, this is not 100% but rather ~30%.
@cayenne17 commented on GitHub (Mar 16, 2024):
I just noticed the same problem. When I don't have the uptime-kuma web interface open, CPU stays around 5%:
[screenshot not mirrored]
When I have a tab open in the background with no actions on it, CPU varies between 30% and 70%:
Uptime Kuma is installed with Docker version 25.0.4 (build 1a576c5) on a Debian 12.5 VM.
Uptime Kuma
Version: 1.23.11
Version frontend: 1.23.11
Average VM CPU graph from Proxmox VE:
[screenshot not mirrored]
@sunlewuyou commented on GitHub (Apr 30, 2024):
Non-Docker
[screenshot not mirrored]
@github-actions[bot] commented on GitHub (Jun 29, 2024):
We are clearing up our old help-issues, and your issue has been open for 60 days with no activity.
If no comment is made and the stale label is not removed, this issue will be closed in 7 days.
@cayenne17 commented on GitHub (Jun 30, 2024):
The problem still exists
@CommanderStorm commented on GitHub (Jul 1, 2024):
This is likely resolved by the performance improvements in #4500, more specifically https://github.com/louislam/uptime-kuma/pull/3515
Testing PRs can be done via https://github.com/louislam/uptime-kuma/wiki/Test-Pull-Requests, but I don't expect that here, as you would need to create 500 monitors without good import/export functionality.
I have changed this to a feature request to avoid the stale bot closing it.
What I need from the others in this issue (@sunlewuyou @cayenne17) is the metadata about your setups: how many monitors you have configured, what their type is, and what your retention is.
@cayenne17 commented on GitHub (Jul 1, 2024):
@CommanderStorm
how many monitors do you have configured?
74 online, 2 offline, and 5 paused.
what is their type?
Mostly ICMP probes and a few HTTPS probes.
what is your retention?
30 days.
@rezzorix commented on GitHub (Jul 16, 2024):
Since Proxmox is used; Just some questions on terminology: you are using a VM - not an LXC, correct?
In any case, CPU usage by default in Proxmox is not exclusive.
Lets say CPU 1 is assigned to your VM/LXC, and the host computer decides to use it for some reason, the usage % of the process on the VM/LXC would automatically look very high.
You can assign CPU resources exclusively / reserved for a VM/LXC then you will not have this issues.
To mitigate this and ensure more predictable CPU usage, you can:
Set CPU Affinity (Exclusive CPU Allocation):
Limit CPU Usage:
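The two options above can be sketched with Proxmox's qm CLI (a hedged example: the VM ID 100 and the core/limit values are placeholders, and the --affinity option requires Proxmox VE 7.3 or later):

```shell
# Pin VM 100 to host cores 2-5 so other guests do not compete for them
qm set 100 --affinity 2-5

# Cap VM 100 at the equivalent of 2 full cores
qm set 100 --cpulimit 2
```

Both settings can also be adjusted from the Proxmox web UI in the VM's processor options.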