Mirror of https://github.com/louislam/uptime-kuma.git, synced 2026-03-02 22:57:00 -05:00
Docker Socket connection is lost during live restore #4446
Originally created by @thariq-shanavas on GitHub (Nov 16, 2025).
📑 I have found these related issues/pull requests
NA
🛡️ Security Policy
📝 Description
Uptime Kuma marks all containers monitored via a Docker socket as down with the message `connect ECONNREFUSED /var/run/docker.sock` when the Docker daemon is updated with the 'Live Restore' option enabled.

👟 Reproduction steps
Live restore is enabled in `/etc/docker/daemon.json`. The Docker engine is restarted when the `docker-ce` package is updated on the host. To reproduce, you may restart the docker socket with `sudo systemctl restart docker.socket`. If live restore is enabled, all containers remain running. However, the containers monitored via the Docker socket at `/var/run/docker.sock` are marked as down by Uptime Kuma until Uptime Kuma is restarted.

👀 Expected behavior
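For context, live restore is the standard Docker daemon setting referenced above; it is enabled with a `daemon.json` entry like the following (shown here for completeness, not quoted from the original report):

```json
{
  "live-restore": true
}
```

With this set, containers keep running while `dockerd` itself is stopped or upgraded, which is exactly the window in which the socket gets recreated.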
Uptime Kuma reconnects to the docker socket
😓 Actual Behavior
Uptime Kuma needs to be restarted after a docker engine update.
🐻 Uptime-Kuma Version
1.23.17
💻 Operating System and Arch
Debian 13.2
🌐 Browser
Firefox
🖥️ Deployment Environment
📝 Notes
@louislam commented on GitHub (Nov 16, 2025):
I don't usually restart my Docker daemon. However, as I remember, for nginx, `systemctl reload` is the command to restart with zero downtime; `systemctl restart` is not. Not sure if the same applies to the Docker daemon.

If it is not related to `systemctl reload`, could you try to map the folder to verify the old file-descriptor issue?

@thariq-shanavas commented on GitHub (Nov 16, 2025):
Thank you for the quick response. Mapping the `/var/run/` folder as you suggested has resolved the issue - can't believe I didn't think of that. It gives uptime-kuma read access to several other sockets on the host, but it's a compromise I can live with.

Do not map `- /var/run:/var/run:ro`. This is because the container will have some sockets of its own in its `/var/run` folder that will conflict with the host's, so the host's `/var/run` folder must be mapped to a container path that is not a standard Linux directory.

If you would like, I am happy to make a PR adding a note to the documentation. Otherwise, please feel free to close this issue.
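The mapping described above might look like the following compose fragment (a hypothetical sketch; the container-side path `/host/var/run` and the service layout are assumptions, not taken from the thread):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      # Map the host's /var/run to a NON-standard path inside the container,
      # so the container's own /var/run sockets are not shadowed.
      - /var/run:/host/var/run:ro
      - uptime-kuma-data:/app/data
volumes:
  uptime-kuma-data:
```

In the Uptime Kuma Docker-host dialog you would then point at the socket under the mapped path (e.g. `/host/var/run/docker.sock`) rather than `/var/run/docker.sock`.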
P.S. The real issue was always the docker socket being recreated when the Docker engine was updated on the host. `systemctl restart` was just a quick and dirty way to reproduce the effect, and not something that would be done manually in practice.

@louislam commented on GitHub (Nov 16, 2025):
Interesting. Mapping `/var/run` may be too powerful though; the Uptime Kuma container can access all sock files on the host. Not sure if it is possible to make a hard link like `/var/run/docker/docker.sock` and map only the `/var/run/docker` folder.

Feel free to update the wiki and improve the description on the frontend.
Changed to `feature-request`, because it is more like Docker mapping behavior.

https://github.com/louislam/uptime-kuma-wiki/blob/master/How-to-Monitor-Docker-Containers.md
github.com/louislam/uptime-kuma@f9751bfd81/src/components/DockerHostDialog.vue (L32)

@thariq-shanavas commented on GitHub (Nov 16, 2025):
A hard link will not work, since `/var/run/docker.sock` does not actually refer to a location in the filesystem. In fact, trying to hard-link the socket to another location leads to the following error:

`ln: failed to create hard link 'docker.sock' => '/var/run/docker.sock': Invalid cross-device link`

A symlink will not solve the problem of losing the file descriptor either - I just checked.
You can, however, change the location of the docker socket to a folder other than `/var/run` by editing the systemd unit file of the docker socket and specifying a custom ListenStream variable, i.e., `sudo systemctl edit docker.socket` and setting `ListenStream=/var/run/docker-socket/docker.sock`.

Then you would need to edit the docker systemd service and ask it to use the new socket, with `sudo systemctl edit docker.service` and setting `ExecStart=/usr/bin/dockerd -H unix:///var/run/docker-socket/docker.sock`.

Finally, you would edit the bind location in the compose file accordingly.
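The compose-side change for the relocated socket could be sketched like this (untested, matching the hypothesis in the comment; the directory `/var/run/docker-socket` is the example path chosen above):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    volumes:
      # Bind only the dedicated socket directory, not all of /var/run,
      # so the container sees the recreated socket without gaining
      # access to every other host socket.
      - /var/run/docker-socket:/var/run/docker-socket:ro
      - uptime-kuma-data:/app/data
volumes:
  uptime-kuma-data:
```

Inside Uptime Kuma, the Docker host would then be configured against `/var/run/docker-socket/docker.sock`.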
I have not tested whether this would work; it is merely my hypothesis. I run uptime-kuma behind a firewall and use the `ro` flag, so I am okay giving it access to the entire `/var/run/` folder. For a production system, something like the above may be important. I'll edit the wiki when I have some extra time at hand.