Docker Socket connection is lost during live restore #4446

Open
opened 2026-02-28 04:02:59 -05:00 by deekerman · 4 comments

Originally created by @thariq-shanavas on GitHub (Nov 16, 2025).

📑 I have found these related issues/pull requests

NA

🛡️ Security Policy

[x] I have read and agree to Uptime Kuma's Security Policy

📝 Description

Uptime Kuma marks all containers monitored via a docker socket as down with the message `connect ECONNREFUSED /var/run/docker.sock` when the docker daemon is updated with the 'Live Restore' option enabled.

👟 Reproduction steps

  1. Bind the docker socket to uptime kuma by adding the following to the docker compose file.

     volumes:
       - /var/run/docker.sock:/var/run/docker.sock:ro

  2. Enable live restore. Live restore improves the uptime of your services by preventing docker containers from being restarted during a docker engine update. Following the official documentation (https://docs.docker.com/engine/daemon/live-restore/), enable Live Restore by adding the following to `/etc/docker/daemon.json`:

     {
       "live-restore": true
     }

  3. Restart the docker daemon.

The docker engine is restarted when the `docker-ce` package is updated on the host. To reproduce, you may restart the docker socket with `sudo systemctl restart docker.socket`. If live restore is enabled, all containers remain running. However, the containers monitored via the docker socket at `/var/run/docker.sock` are marked as down by uptime kuma until uptime kuma is restarted.

👀 Expected behavior

Uptime Kuma reconnects to the docker socket

😓 Actual Behavior

Uptime Kuma needs to be restarted after a docker engine update.

🐻 Uptime-Kuma Version

1.23.17

💻 Operating System and Arch

Debian 13.2

🌐 Browser

Firefox

🖥️ Deployment Environment

  • Runtime Environment:
    • Docker version 29.0.1
  • Filesystem:
    • Linux: ext4
  • Uptime Kuma Setup:
    • Number of monitors: 31

📝 Notes

  • It is possible to work around this issue by creating a TLS connection to the docker socket, but this creates unnecessary security issues.
  • I suspect that this happens because uptime kuma holds on to the old file descriptor when the docker socket is recreated, in which case I'm not sure if it can be fixed from within the container. Hopefully, I'm wrong and there may be a clever solution somewhere.
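The stale-descriptor theory can be sketched with plain UNIX sockets (a minimal illustration with hypothetical paths, not Uptime Kuma's actual code): a client that keeps its existing connection after the listening socket is torn down and recreated at the same path fails, while a fresh connect() to that path succeeds — which is why reconnecting (or bind-mounting the parent directory instead of the socket file) resolves it.

```python
import os
import socket
import tempfile

# Simulate the daemon's unix socket being torn down and recreated at
# the same path, as happens when docker.socket is restarted.
path = os.path.join(tempfile.mkdtemp(), "docker.sock")

old_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
old_srv.bind(path)
old_srv.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)  # completes via the listen backlog

# "systemctl restart docker.socket": old socket closed, file replaced
old_srv.close()
os.unlink(path)
new_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
new_srv.bind(path)
new_srv.listen(1)

# The client's cached connection now points at the dead socket:
# writes fail with EPIPE / ECONNRESET.
stale_alive = True
try:
    for _ in range(3):
        client.sendall(b"ping")
except OSError:
    stale_alive = False

# A fresh connect() to the same path reaches the new socket.
fresh = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
fresh.connect(path)

print("stale connection usable:", stale_alive)
print("fresh connection usable: True")
```

This only demonstrates the kernel-level behavior; whether Uptime Kuma's docker client can be made to reconnect from inside the container is the open question above.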

@louislam commented on GitHub (Nov 16, 2025):

> To reproduce, you may restart the docker socket with `sudo systemctl restart docker.socket`.

I don't usually restart my Docker daemon. However, as I remember, for nginx, `systemctl reload` is the command to restart with zero downtime; `systemctl restart` is not. Not sure if that also applies to the Docker daemon.

> I suspect that this happens because uptime kuma holds on to the old file descriptor

If it is not related to `systemctl reload`, could you try mapping the folder to verify the old file descriptor issue? Like:

    volumes:
      - /var/run/:/var/run/host/:ro

@thariq-shanavas commented on GitHub (Nov 16, 2025):

Thank you for the quick response. Mapping the `/var/run/` folder as you suggested has resolved the issue - can't believe I didn't think of that. It gives uptime-kuma read access to several other sockets on the host, but it's a compromise I can live with.

    volumes:
      - /var/run:/host/var/run:ro

Do not map `- /var/run:/var/run:ro`. The container has some sockets in its own `/var/run` folder that would conflict with the host's, so the host's `/var/run` folder must be mapped to a path that is not a standard Linux directory.

If you would like, I am happy to make a PR adding a note to the documentation. Otherwise, please feel free to close this issue.

P.S. The real issue was always the docker socket being recreated when docker engine was updated on the host. systemctl restart was just a quick and dirty way to reproduce the effect, and not something that will be done manually in practice.


@louislam commented on GitHub (Nov 16, 2025):

Interesting.

`/var/run` may be too powerful though; the Uptime Kuma container can access all sock files on the host. Not sure if it is possible to make a hard link like `/var/run/docker/docker.sock` and map the `/var/run/docker` folder instead.

Feel free to update the wiki and improve the description on frontend.

Changing to `feature-request`, because this is more about Docker mapping behavior.

https://github.com/louislam/uptime-kuma-wiki/blob/master/How-to-Monitor-Docker-Containers.md

https://github.com/louislam/uptime-kuma/blob/f9751bfd81fee622a3cc6ef9eba11894fc245978/src/components/DockerHostDialog.vue#L32


@thariq-shanavas commented on GitHub (Nov 16, 2025):

A hard link will not work, since `/var/run/docker.sock` does not actually refer to a location in the filesystem. In fact, trying to hard link a socket to another location leads to the following error:

    ln: failed to create hard link 'docker.sock' => '/var/run/docker.sock': Invalid cross-device link

A symlink will not solve the problem of losing the file descriptor either - I just checked.

You can, however, change the location of the docker socket to a folder other than `/var/run` by editing the systemd unit file of the docker socket and specifying a custom `ListenStream` value: i.e., `sudo systemctl edit docker.socket` and setting `ListenStream=/var/run/docker-socket/docker.sock`.

Then you would need to edit the docker systemd service to use the new socket: `sudo systemctl edit docker.service`, setting `ExecStart=/usr/bin/dockerd -H unix:///var/run/docker-socket/docker.sock`.

Finally, you would edit the bind location in the compose file as:

    volumes:
      - /var/run/docker-socket:/var/run:ro

I have not tested whether this would work; this is merely my hypothesis. I run uptime kuma behind a firewall and use the `ro` flag, so I am okay giving it access to the entire `/var/run/` folder. For a production system, something like the above may be important. I'll edit the wiki when I have some extra time at hand.
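For reference, the `systemctl edit` steps above would produce drop-in files along these lines (an untested sketch mirroring the hypothesis above; note that for list-type systemd options like `ListenStream` and `ExecStart`, an empty assignment is required first to clear the inherited value):

    # drop-in created by `sudo systemctl edit docker.socket`
    # (/etc/systemd/system/docker.socket.d/override.conf)
    [Socket]
    ListenStream=
    ListenStream=/var/run/docker-socket/docker.sock

    # drop-in created by `sudo systemctl edit docker.service`
    # (/etc/systemd/system/docker.service.d/override.conf)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker-socket/docker.sock

followed by `sudo systemctl daemon-reload` and a restart of both units.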
