mirror of
https://github.com/louislam/uptime-kuma.git
synced 2026-03-02 22:57:00 -05:00
Uptime Kuma 2.0 Upgrade - Percentage Indicator broken & Disk Usage not changing almost at all / No MariaDB Files Saved ? #4349
Originally created by @luckylinux on GitHub (Oct 21, 2025).
📑 I have found these related issues/pull requests
Unable to find any Issues related to `percentage`, `indicator` or `upgrade` which seemed relevant.
🛡️ Security Policy
📝 Description
I changed the Image from Uptime Kuma 1.x Branch to Uptime Kuma 2.x, as indicated in the Upgrade Notes.
The Progress Indicator always stays at 0%, even though the Monitor Counter is currently at 43/87. Disk Space for the `data` Bind-Mount also seems to be unaffected. I would expect either the Disk Space to double (if cleanup happens later) or at least some other new Files for MariaDB (embedded) to be created, but neither of these is happening.
It's been approx. 10 Hours.
👟 Reproduction steps
Just change the Docker Image from Uptime Kuma 1.x Branch to Uptime Kuma 2.x, as indicated in the Upgrade Notes.
In my Case, I changed the Image in the Podman Quadlet `uptime-kuma-server.container` from the 1.x Tag to the 2.x Tag.
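For illustration, the change amounts to bumping the image tag in the Quadlet unit. A minimal sketch with hypothetical paths, since the actual unit file contents were not included above:

```ini
# uptime-kuma-server.container -- illustrative sketch only; the issue's
# actual unit file is not shown. The upgrade is a one-line image bump.
[Container]
# Before: Image=docker.io/louislam/uptime-kuma:1
Image=docker.io/louislam/uptime-kuma:2
# Hypothetical bind-mount path; the issue only says a `data` bind-mount exists.
Volume=/srv/uptime-kuma/data:/app/data

[Service]
Restart=always

[Install]
WantedBy=default.target
```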
👀 Expected behavior
Percentage Indicator should increase.
Furthermore, I am not sure where Uptime Kuma is saving the Data for the new embedded MariaDB (I assume it's still somewhere in the `data` Folder), but I would expect the Size on Disk to increase. If I check my Bind-Mounts, there does NOT seem to be any Change happening at all:
I'm also extremely confused by the Statement at the end of the Migration Guide:
If the Migration of Data is not done automatically, then why is it taking 10+ Hours for the Migration to take place?
😓 Actual Behavior
🐻 Uptime-Kuma Version
2.0.1
💻 Operating System and Arch
Fedora 42 AMD64 on KVM [Proxmox VE] (Linux podmanserver 6.16.12-200.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Sun Oct 12 16:31:16 UTC 2025 x86_64 GNU/Linux)
🌐 Browser
Librewolf 144.0-1 (64-Bit)
🖥️ Deployment Environment
- Runtime: Podman 5.6.2 (Git Commit 9dd5e1ed33830612bc200d7a13db00af6ab865a4)
- Database: MariaDB 15.1 (LTS: Yes/No - Unknown), embedded in the louislam/uptime-kuma:2 Docker Image - mariadb Ver 15.1 Distrib 10.11.14-MariaDB, for debian-linux-gnu (x86_64) using EditLine wrapper
- Node.js: 20.19.5 (LTS: Yes/No - Unknown)
- Number of Monitors: 87
📝 Relevant log output
@CommanderStorm commented on GitHub (Oct 21, 2025):
We are migrating from storing every heartbeat to storing them in an Aggregated form.
We essentially need to read every row of your heartbeat and aggregate them.
This is somewhat expensive.
We will not migrate the DB backend for you.
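The migration described here (reading every heartbeat row and collapsing it into aggregate rows) can be sketched roughly as follows. The data shapes and field names are illustrative, not Uptime Kuma's actual schema:

```javascript
// Illustrative sketch of heartbeat aggregation (hypothetical shapes,
// not Uptime Kuma's actual schema): collapse per-ping heartbeats into
// one stat entry per monitor per day.
function aggregateHeartbeats(heartbeats) {
  const daily = new Map(); // key: "monitorId|YYYY-MM-DD"
  for (const hb of heartbeats) {
    const day = hb.time.slice(0, 10); // ISO timestamp -> date part
    const key = `${hb.monitorId}|${day}`;
    const stat = daily.get(key) ?? { up: 0, down: 0, pingSum: 0, count: 0 };
    if (hb.status === 1) stat.up++; else stat.down++;
    stat.pingSum += hb.ping;
    stat.count++;
    daily.set(key, stat);
  }
  return daily;
}

const daily = aggregateHeartbeats([
  { monitorId: 1, time: "2025-10-20T08:00:00Z", status: 1, ping: 20 },
  { monitorId: 1, time: "2025-10-20T09:00:00Z", status: 0, ping: 0 },
  { monitorId: 1, time: "2025-10-21T08:00:00Z", status: 1, ping: 30 },
]);
// daily.size === 2; the 2025-10-20 entry has up: 1, down: 1.
```

Doing this for every row of a two-year heartbeat table is why the migration is, as noted, somewhat expensive.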
@luckylinux commented on GitHub (Oct 21, 2025):
Do you mean that it's actually reading and re-writing an aggregated Form of the existing Data, but keeping it in the existing SQLite Database Form?
@CommanderStorm commented on GitHub (Oct 21, 2025):
Yes. And that takes a while which is to be expected.
Could likely be optimised, but hey
@luckylinux commented on GitHub (Oct 21, 2025):
So after 10 Hours sitting still at 0% (even though half the Monitors were processed), that's normal to you?
Unfortunately I had to restart from Scratch 😞. I tried to disable Automatic Reboots after Fedora Updates (since sometimes DNF Triggers it), but then I restarted one Service which actually restarted the entire System.
So there we go again 🤣 ... 2 Hours in now and sitting at:
@klafbang commented on GitHub (Oct 21, 2025):
Just did an upgrade as well, and my percentages were wonky as well (got to 100% at around monitor 55 out of 65). It still finishes just fine.
My upgrade took around an hour for 65 monitors and 100 days of history – running from source without Docker (just under half being daily heartbeats with way fewer data points, say 4000 monitor-days of data), so 10+ hours for 87 monitors going back almost 2 years (~55000 monitor-days, or ~14 times mine) doesn't seem unreasonable. Mine also ran faster per day (1-3 days/second vs 1-3 seconds/day for yours - possibly you have more data points per day).
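The scaling estimate above works out as rough arithmetic (all figures approximate, taken from that comment):

```javascript
// Back-of-the-envelope check of the estimate above (approximate figures
// from the comment: ~4000 monitor-days migrated in ~1 hour, vs. 87
// monitors going back almost 2 years).
const mineMonitorDays = 4000;          // ~65 monitors * ~100 days, discounted
const theirsMonitorDays = 87 * 630;    // ≈ 54810, i.e. the "~55000" above
const ratio = theirsMonitorDays / mineMonitorDays;
console.log(Math.round(ratio));        // prints 14, so ~1 hour * 14 ≈ 14 hours
```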
@louislam commented on GitHub (Oct 21, 2025):
In case you don't need the history for all monitors, you can start Uptime Kuma 1.23.x using your backup, delete the heartbeat data for some monitors, and upgrade to v2 again.
@github-actions[bot] commented on GitHub (Dec 20, 2025):
We are clearing up our old `help` issues and your issue has been open for 60 days with no activity. If no comment is made and the stale label is not removed, this issue will be closed in 7 days.
@Harry-Chen commented on GitHub (Dec 22, 2025):
The `0.00%` is actually a rounding issue in the progress calculation: github.com/louislam/uptime-kuma@eb0b6cdb09/server/database.js (L828-L876). The logic can be simplified as:
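The simplified snippet was not captured in this mirror; a hypothetical reconstruction from the surrounding description (not the actual database.js code) would be:

```javascript
// Hypothetical reconstruction of the progress logging described above
// (not the actual database.js code): the percentage is rounded, so a
// tiny per-step fraction collapses to 0.
function progressPercent(part, total) {
  return Math.round(part / total * 100);
}

// With ~2 years of history across 87 monitors there are tens of
// thousands of monitor-day steps, so each step is a tiny fraction:
console.log(progressPercent(1, 55000)); // prints 0 -> the log shows 0% forever
console.log(progressPercent(50, 100));  // prints 50
```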
The problem is, `part / dates.length * 100` can be really small if you have long `monitors`/`dates`. So `Math.round` just returns 0.
@CommanderStorm commented on GitHub (Dec 22, 2025):
yea, showing the absolute values in the log is likely good for long `monitors`/`dates`. Would you like to do such a PR?
@Harry-Chen commented on GitHub (Dec 22, 2025):
@CommanderStorm Please see #6516.
@CommanderStorm commented on GitHub (Dec 22, 2025):
thanks