uptime kuma 100% usage of cpu #2839

Open
opened 2026-02-28 03:08:57 -05:00 by deekerman · 16 comments

Originally created by @syamsullivan on GitHub (Nov 24, 2023).

⚠️ Please verify that this bug has NOT been raised before.

  • I checked and didn't find similar issue

🛡️ Security Policy

  • I agree to have read this project's Security Policy (https://github.com/louislam/uptime-kuma/security/policy)
📝 Describe your problem

CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
70b8a69b1ae8 uptime-kuma-saas 103.76% 133.9MiB / 7.637GiB 1.71% 991kB / 8.08MB 41kB / 129MB 12
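One thing worth noting about the `docker stats` output above: the CPU % column is measured relative to a single core, so on an 8-core host a value just above 100% means roughly one core is saturated, not the whole machine. A minimal way to reproduce such a snapshot (the container name is taken from the output above; substitute your own):

```shell
# One-shot snapshot instead of the live streaming view; CPU % is per-core,
# so values above 100% mean more than one core's worth of CPU time is in use.
docker stats --no-stream uptime-kuma-saas
```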

I have an issue with Uptime Kuma using only a single core, which affects dashboard performance.

Any suggestions?

I'm using Docker:
Docker version 24.0.5, build ced0996
CentOS 7
with 8 cores and 8 GB of RAM

📝 Error Message(s) or Log

No response

🐻 Uptime-Kuma Version

1.22.1

💻 Operating System and Arch

CentOS Linux release 7.9.2009

🌐 Browser

Version 117.0.5938.88

🐋 Docker Version

Docker version 24.0.5

🟩 NodeJS Version

No response


@chakflying commented on GitHub (Nov 24, 2023):

You can post the container logs, the output of htop when run inside the container, and the number and types of monitors you are running to help with troubleshooting.
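For reference, the requested information can be gathered with standard Docker commands. A sketch, assuming the container is named uptime-kuma (substitute your own container name):

```shell
# Recent container logs (container name is an assumption; adjust as needed)
docker logs --tail 200 uptime-kuma

# Process view inside the container; the image may not ship htop,
# so plain top is a safe fallback
docker exec -it uptime-kuma top
```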


@CommanderStorm commented on GitHub (Nov 24, 2023):

*also include the retention time you configured


@syamsullivan commented on GitHub (Nov 27, 2023):

Will retention cost CPU?

Also, I use Docker as the main platform to deploy Kuma, and it always sits at 100% CPU. Should I increase the container's CPU limit?


@CommanderStorm commented on GitHub (Nov 27, 2023):

@syamsullivan please give us the information we asked for.
See https://github.com/louislam/uptime-kuma/wiki/Troubleshooting if you need help getting this information.

is retention will cost the CPU ?

Retention is not a likely culprit. Please report it anyway.

should i increase the CPU limit of container ?

It depends on what you set your limits to. One CPU is the maximum Node.js should use.
Note that CPU limits were originally designed to curb power consumption in large datacenters. Use this feature of your runtime with caution.
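As an illustration of the limits discussed above, here is a hedged sketch of starting the container with an explicit CPU cap via --cpus (the port mapping and volume follow the common Uptime Kuma Docker setup; adjust to your deployment):

```shell
# Cap the container at 2 CPUs; since a single Node.js process maxes out
# around one core, this leaves headroom for child processes such as ping.
docker run -d --restart=always --name uptime-kuma --cpus="2" \
  -p 3001:3001 -v uptime-kuma:/app/data louislam/uptime-kuma:1
```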


@bmdbz commented on GitHub (Jan 11, 2024):

I have the same problem.
The problem is exacerbated when I log into the web UI.
Generally, after I restart the Docker container, I can log in to the web UI and see the monitoring items normally. After a period of time (maybe 15 minutes or less), if I open the web UI again, the interface does not display any monitoring items. (But the monitoring tasks are actually still running.)
I have 500+ monitoring items.
I mainly chose uptime-kuma because it is easier than tools like Zabbix, but the 100% CPU utilization means I cannot use it.


@CommanderStorm commented on GitHub (Jan 11, 2024):

@bmdbz
Could you report the values for:

  • htop-output
  • retention time (not likely a problem, still worth reporting)
  • estimated average heartbeat check-time per monitor
  • monitor-type distribution
  • "Do you expect a lot of traffic on your status pages"?

Note that the first beta of v2.0 is still a few weeks out, but said release will come with a lot of performance improvements.
In v1, 500+ (depending what "+" means) is likely pushing it.


@bmdbz commented on GitHub (Jan 11, 2024):

Thank you for your reply.

  • htop output: to be provided tomorrow, once I have access to the environment
  • 7-day retention time
  • Heartbeat check time for each monitor: 10-60 seconds
  • Monitor type is ping
  • "Do you expect a lot of traffic on your status pages?" I don't understand this question very well.

In v1, 500+ means more than 500.


@bmdbz commented on GitHub (Jan 11, 2024):

[screenshot: htop output]
The above is the htop output screenshot, thank you!


@CommanderStorm commented on GitHub (Feb 29, 2024):

I missed this response.
The htop output you reported is sorted by memory; could you sort by CPU utilisation instead?
In the screenshot, CPU is not at 100%, but rather around 30%.
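For anyone gathering this information: inside htop, Shift+P sorts by CPU% and Shift+M sorts by memory. A non-interactive alternative for a quick snapshot:

```shell
# Top 10 processes sorted by CPU usage (procps ps on Linux)
ps aux --sort=-%cpu | head -n 10
```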


@cayenne17 commented on GitHub (Mar 16, 2024):

I just noticed the same problem. When I don't have the uptime-kuma web interface open, I'm in the ~5% CPU range:
[screenshot: CPU usage with the web interface closed]

When I have a tab open in the background with no actions on it, CPU varies between 30% and 70%:
[screenshot: CPU usage with a background tab open]

Uptime Kuma is installed with Docker version 25.0.4, build 1a576c5, on a Debian 12.5 VM.

root@UptimeKuma:~# docker -v
Docker version 25.0.4, build 1a576c5

root@UptimeKuma:~# cat /etc/debian_version 
12.5

Uptime Kuma
Version: 1.23.11
Version frontend: 1.23.11

AVG VM CPU graph from Proxmox VE:
[screenshot: average VM CPU graph from Proxmox VE]


@sunlewuyou commented on GitHub (Apr 30, 2024):

Non-Docker
[screenshot: CPU usage on a non-Docker install]


@github-actions[bot] commented on GitHub (Jun 29, 2024):

We are clearing up our old help-issues and your issue has been open for 60 days with no activity.
If no comment is made and the stale label is not removed, this issue will be closed in 7 days.


@cayenne17 commented on GitHub (Jun 30, 2024):

The problem still exists.


@CommanderStorm commented on GitHub (Jul 1, 2024):

This is likely resolved by the performance improvements in #4500, more specifically https://github.com/louislam/uptime-kuma/pull/3515

You can test PRs via https://github.com/louislam/uptime-kuma/wiki/Test-Pull-Requests, but I don't expect you to, since you would need to recreate 500 monitors without good import/export functionality.

I have changed this to a feature request to keep the stale bot from closing it.

What I need from the others in this issue (@sunlewuyou @cayenne17) is the following metadata:

  • how many monitors you have configured
  • what their type is
  • what your retention is

@cayenne17 commented on GitHub (Jul 1, 2024):

What I need from the others in this issue (@sunlewuyou @cayenne17) is the metadata about

  • how many monitors do you have configured
  • what is their type
  • what is your retention

@CommanderStorm

how many monitors do you have configured?
74 online, 2 offline, and 5 paused

what is their type?
Mostly ICMP probes and a few HTTPS probes

what is your retention?
30 days


@rezzorix commented on GitHub (Jul 16, 2024):

Since Proxmox is used, just a question on terminology: you are using a VM, not an LXC, correct?

In any case, CPU usage in Proxmox is not exclusive by default.
Let's say CPU 1 is assigned to your VM/LXC and the host decides to use it for some reason; the usage % of the process on the VM/LXC would then automatically look very high.

You can assign CPU resources exclusively/reserved to a VM/LXC, and then you will not have this issue.

To mitigate this and ensure more predictable CPU usage, you can:

Set CPU affinity (exclusive CPU allocation):

  • For LXC containers, set lxc.cgroup.cpuset.cpus in the container configuration file.
  • For Docker containers, use --cpuset-cpus when running the container.

Limit CPU usage:

  • Use CPU limits to control how much CPU time the VM/LXC can use.
  • For LXC: set lxc.cgroup.cpu.shares.
  • For Docker: use --cpus or --cpu-shares.
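The options above can be sketched as follows (core numbers and the container name are placeholders; the lxc.cgroup.* keys apply to cgroup v1 hosts, with lxc.cgroup2.* equivalents on cgroup v2):

```shell
# LXC container config (e.g. /etc/pve/lxc/<id>.conf on Proxmox):
#   lxc.cgroup.cpuset.cpus = 2-3     # pin to host cores 2 and 3
#   lxc.cgroup.cpu.shares = 512      # relative CPU weight vs. other containers

# Docker: pin to specific cores and cap total CPU time
docker run -d --name uptime-kuma --cpuset-cpus="2,3" --cpus="1.5" \
  -p 3001:3001 -v uptime-kuma:/app/data louislam/uptime-kuma:1
```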