Probes do not preserve that last state? #4165

Closed
opened 2026-02-28 03:53:27 -05:00 by deekerman · 5 comments
Owner

Originally created by @Demellion on GitHub (Jun 12, 2025).

📑 I have found these related issues/pull requests

Not related

🛡️ Security Policy

  • [x] I have read and agree to Uptime Kuma's Security Policy.

📝 Description

After migrating from 1.13.XX to the latest release, we noticed that probes no longer preserve the UP state: they keep sending a UI notification and writing a history entry that the probe is now UP again on every heartbeat, even if it was already up before.

👟 Reproduction steps

  • Upgrade from an older version.
  • Log in to the UI.
  • Watch any healthy probe keep reporting that it is UP again and again.

👀 Expected behavior

Once a probe is already in a healthy state, it should no longer report that it is UP again, nor write that to history, on every heartbeat that returns successfully.

😓 Actual Behavior

Every probe keeps sending a UI notification, as well as writing a history entry that it is UP, on every heartbeat, even if it was already up before.
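For anyone debugging this, one way to confirm what the server is actually recording is to inspect the embedded SQLite database directly. The sketch below assumes Uptime Kuma's SQLite file (`kuma.db` in the data volume) and a `heartbeat` table with `monitor_id`, `status` (1 = UP), and `important` (state-change flag) columns; treat those schema details as assumptions and verify them against your own database before relying on the numbers.

```python
import sqlite3


def count_repeated_up_events(db_path="data/kuma.db"):
    """Count heartbeats flagged as 'important' (state changes) per monitor.

    On a healthy install, a monitor that stays UP should accumulate very few
    'important' rows. If nearly every heartbeat of an UP monitor is marked
    important, the server is re-recording the UP transition on each check,
    which matches the behavior reported in this issue.

    NOTE: table and column names are assumptions based on Uptime Kuma's
    SQLite schema; check them with `.schema heartbeat` in the sqlite3 CLI.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            """
            SELECT monitor_id,
                   COUNT(*) AS beats,
                   SUM(CASE WHEN important = 1 AND status = 1
                            THEN 1 ELSE 0 END) AS important_up
            FROM heartbeat
            GROUP BY monitor_id
            ORDER BY important_up DESC
            """
        ).fetchall()
    finally:
        conn.close()
```

Run it against a copy of the database file (not the live one) and compare `important_up` to `beats` per monitor: a ratio near 1 for a monitor that never went down would confirm the duplicate-UP recording.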

🐻 Uptime-Kuma Version

1.23.16

💻 Operating System and Arch

Debian 12 (bookworm): 6.1.0-37-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.140-1 (2025-05-22) x86_64 GNU/Linux

🌐 Browser

Any current browser

🖥️ Deployment Environment

  • Runtime Environment:
    • Docker: Version 28.2.2 (Build: Fri May 30 12:07:26 2025)
    • Docker Compose: Version 2.36.2
    • Portainer (BE/CE): Version 2.27.6 (LTS: Yes)
    • MariaDB: None
    • Node.js: None
    • Kubernetes (K3S/K8S): None
  • Database: Built-in (Docker volume)
  • Database Storage: Built-in (Docker volume)
    • Filesystem:
      • Linux: ext4
    • Storage Medium: AHV Guest filesystem
  • Uptime Kuma Setup:
    • Number of monitors: 86

📝 Relevant log output


deekerman 2026-02-28 03:53:27 -05:00
  • closed this issue
  • added the
    help
    label
Author
Owner

@CommanderStorm commented on GitHub (Jun 12, 2025):

which monitor do you use?

Author
Owner

@Demellion commented on GitHub (Jun 13, 2025):

which monitor do you use?

It doesn't seem to matter; I'm observing the same behavior on ping, HTTPS, and TCP.

Author
Owner

@CommanderStorm commented on GitHub (Jun 14, 2025):

Which settings are they in?
Can you share an example?

Is it reproducible on uptime.kuma.pet?

is this maybe a duplicate/cousin of:

  • #5911
Author
Owner

@Demellion commented on GitHub (Jun 16, 2025):

This was something related to migrating Docker volumes between 1.13 and a freshly set up 1.26 Docker image. Unfortunately, I had no time to investigate further, as it kept growing the database and broke the UI (UP notifications were spamming it endlessly), so I simply set this up again from scratch using Kuma's own minimal backup feature instead of moving the Docker volumes as a whole.

My guess at this point is that there was either some fatal database mismatch between 1.13 and 1.26, and/or some incompatibility in the Kuma files themselves.

Author
Owner

@CommanderStorm commented on GitHub (Jun 16, 2025):

as it kept growing the database and broke the UI

The performance issues are fixed in v2.0.
Consider upgrading to the beta.
See #4500 for further context.
