Mirror of https://github.com/louislam/dockge.git, synced 2026-03-03 02:06:55 -05:00
Docker Stack Shows "Exited" Status Due to One-Time Execution Container #211
Originally created by @Tealk on GitHub (May 24, 2025).
Description
There was a similar issue before.
After deploying the Docker stack, its status is displayed as "Exited" because one of the containers is designed to run only once during startup (e.g., a migration job) and is therefore no longer running.
In this case, the affine_migration_job container completes its task and exits as expected, but this causes the entire stack to appear to be in an "Exited" state, which can be misleading.
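For illustration, a minimal compose file reproducing this pattern might look like the following sketch (service names and images are stand-ins, not the reporter's actual AFFiNE stack):

```yaml
# Hypothetical sketch: one long-running service plus a one-shot job.
# Once "migration" finishes with exit code 0, the stack is still healthy,
# but Dockge reports it as "Exited".
services:
  app:
    image: nginx:alpine            # stand-in for the long-running service
    restart: unless-stopped
  migration:
    image: alpine:3
    command: ["sh", "-c", "echo running migrations"]  # exits 0 when done
    restart: "no"                  # one-time execution, never restarted
```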

👟 Reproduction steps
👀 Expected behavior
The stack should show a "Healthy" or "Running" status if all other containers are running as expected, even if a one-time execution container has exited.
😓 Actual Behavior
The stack shows an "Exited" status, which might suggest an issue with the deployment, even though the behavior of the one-time execution container is correct.
Dockge Version
1.5.0
💻 Operating System and Arch
Debian GNU/Linux 12 (bookworm)
🌐 Browser
LibreWolf 138.0.4-1
🐋 Docker Version
Docker version 24.0.2
🟩 NodeJS Version
No response
📝 Relevant log output
@tclayson commented on GitHub (Jun 6, 2025):
I'm also having this issue. I have a container that waits for my network drive to become available and then exits to allow other containers to boot up (that rely on the network drives).
Very confusing to see in the interface that this stack has exited when it's working perfectly.
I think the challenge exists in this commit:
github.com/louislam/dockge@c8770a9605

When there are multiple statuses for a stack (e.g. `exited(1), running(1)`), Dockge first checks whether any one of the statuses is `exited(1)`, which marks the stack as `exited` even if it's just an init script or something.

A way to fix this would be to instead check first whether any of the statuses is `running(1)`, and then mark the stack as active. This appears to be how `docker compose ls` works: if any of the statuses is `running(1)`, then the stack status is `running(1)`. Marking this stack as `active` would replicate the same output as `docker compose ls`.

Alternatively, perhaps there's an option to check the exit code of the containers and decide whether `exited(1)` is relevant?

Running `docker ps -a --filter "status=exited" --format json` gives an output of exited containers. In there, we can see the exit code `(0)` indicates this container exited successfully (I think 😅). Could we use this as an indicator that the status should still be `active` and not `exited`, maybe?

Another command you could use is `docker inspect wait_for_nfs_duplicati --format='{{.State.ExitCode}}'`, which will output either `0` for a successful exit or any other number for a failure. But this would require iterating through all containers in a stack.

Thanks!
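The approaches suggested above (prefer `running` over `exited`, and treat exit code 0 as benign) could be combined roughly as in this minimal sketch. This is not Dockge's actual code; the types, names, and status labels are hypothetical:

```typescript
// Hypothetical data shape: state as docker reports it, plus the exit code
// one would get from `docker inspect --format='{{.State.ExitCode}}'`.
interface ContainerInfo {
    name: string;
    state: "running" | "exited";
    exitCode: number;
}

function stackStatus(containers: ContainerInfo[]): "active" | "exited" | "degraded" {
    // Prefer "running" over "exited", mirroring what `docker compose ls` shows.
    if (containers.some((c) => c.state === "running")) {
        // Only flag the stack if a stopped container actually failed (non-zero exit).
        const failed = containers.some((c) => c.state === "exited" && c.exitCode !== 0);
        return failed ? "degraded" : "active";
    }
    return "exited";
}

// A one-shot migration job that exited cleanly no longer marks the stack "exited":
const status = stackStatus([
    { name: "affine_server", state: "running", exitCode: 0 },        // illustrative names
    { name: "affine_migration_job", state: "exited", exitCode: 0 },
]);
console.log(status); // "active"
```

The `degraded` label is one way to keep the exit-code signal visible without hiding genuine failures behind an `active` status.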
@spuder commented on GitHub (Aug 21, 2025):
I see there was a PR merged to address this on June 27th, but I don't think it fixed it, as I still observe this behavior in version 1.5.0.

Looking at Docker Hub, I see that louislam/dockge hasn't been pushed since about March 2025.