Cloning a port monitor to a new group adds the new monitor in status paused by default (and can't resume with the "resume" button) #4500

Open
opened 2026-02-28 04:05:14 -05:00 by deekerman · 2 comments
Originally created by @apio-sys on GitHub (Dec 3, 2025).

📑 Related issues/pull requests

I haven't found related issues.

🛡️ Security Policy

I have read and agree to Uptime Kuma's Security Policy (https://github.com/louislam/uptime-kuma/security/policy).

📝 Description

This is a very minor issue, but I thought I should report it. If you clone a running port monitor into a different (new) group, the new port monitor comes up paused (greyed out), yet you can't click Resume. You have to edit and save it to make it active. I can't imagine this is by design.

👟 Reproduction steps

  • Go to group "A", where you have a running port monitor (e.g. on port 443). Click "Clone".
  • Update the "Friendly name" and the "Hostname" to reflect what is needed for this new monitor.
  • Under "Monitor Group", create a new group "B" > Confirm > Save.
  • The new monitor group "B" is created, and underneath it you will see the new port monitor greyed out, yet it is still pinging in the background. At the top, the "Resume" button appears instead of "Pause", but it is not clickable.

👀 Expected behavior

One would expect the new monitor in group "B" to come up in a running state (with the option to pause available). If setting a cloned monitor to paused is intended behavior, then the Resume button should at least work.

😓 Actual Behavior

As explained in the reproduction steps above. As a workaround, you can click Edit and then Save, and the monitor returns to a working state; after that, pause/resume also behaves as expected.

🐻 Uptime-Kuma Version

2.0.1

💻 Operating System and Arch

Ubuntu 24.04.3 LTS (x86_64 GNU/Linux)

🌐 Browser

Firefox 145.0.2 (64-bit)

🖥️ Deployment Environment

  • Runtime Environment:
    • Docker: N/A
    • Docker Compose: N/A
    • Portainer (BE/CE): N/A
    • MariaDB: Version 10.11.13 (LTS: Yes)
    • Node.js: Version v20.19.6 (LTS: Yes)
    • Kubernetes (K3S/K8S): N/A
  • Database:
    • SQLite: N/A
    • MariaDB: Embedded
  • Database Storage:
    • Filesystem:
      • Linux: ext4
    • Storage Medium: SSD
  • Uptime Kuma Setup:
    • Number of monitors: ~150

📝 Relevant log output

N/A
@iotux commented on GitHub (Dec 18, 2025):

@apio-sys,
I am not able to reproduce the behavior you describe.
Could it possibly have been fixed as a side effect of a recent, unrelated PR?

@claytonlin1110 commented on GitHub (Jan 19, 2026):

I could reproduce the issue and have just created a PR.
