mirror of https://github.com/louislam/dockge.git synced 2026-03-03 02:06:55 -05:00

stack marked as "exited" when containers are running #40

Open
opened 2026-02-20 13:09:50 -05:00 by deekerman · 26 comments

Originally created by @zx900930 on GitHub (Nov 24, 2023).

⚠️ Please verify that this bug has NOT been reported before.

  • I checked and didn't find a similar issue

🛡️ Security Policy

  • I agree to have read this project's Security Policy (https://github.com/louislam/dockge/security/policy)

Description

I'm trying to deploy this project with Dockge:
https://github.com/makeplane/plane
but it is marked as exited even though its containers are running.

👟 Reproduction steps

  1. cd /opt/stacks
  2. git clone https://github.com/makeplane/plane.git
  3. hit the deploy button on dockge

👀 Expected behavior

The stack will be marked as "active" when its containers start running.

😓 Actual Behavior

The stack is marked as "exited" when its containers start running.

Dockge Version

1.1.1

💻 Operating System and Arch

Debian GNU/Linux 12 (bookworm) x86_64

🌐 Browser

Google Chrome 119.0.6045.160

🐋 Docker Version

Docker CE 24.0.7

🟩 NodeJS Version

No response

📝 Relevant log output

No response


@louislam commented on GitHub (Nov 25, 2023):

I tried this stack and saw that the minio container is not started. I think that is the reason: the stack is shown as active only if all of its containers are up.


@zx900930 commented on GitHub (Nov 25, 2023):

I tried this stack, I saw the minio container is not started, I think it is the reason, because the stack is active only if all containers are up.

That createbuckets minio container is an init task, like a Job in k8s: once the needed bucket is created, it exits.

Can we add a filter (using labels, for example - "dockge.container.status.enable=false") to exclude containers from being checked by Dockge?

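The label-based filter proposed above might look like this in a compose file. Note that the dockge.container.status.enable label is hypothetical — Dockge does not currently support it:

```yaml
services:
  createbuckets:
    image: minio/mc
    labels:
      # Hypothetical label from the proposal above; Dockge would skip
      # this container when computing the stack status.
      - "dockge.container.status.enable=false"
```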

@Yann-J commented on GitHub (Nov 29, 2023):

I have a similar issue with a Plex compose file, which contains an init container that installs/updates some plugins, then exits. There is a depends_on: {plex_plugins: {condition: service_completed_successfully}} condition on the main plex container.


@Yann-J commented on GitHub (Nov 29, 2023):

Looking a bit into the code, I fear this might be tricky to implement: right now the status is computed (in backend/stack.ts, https://github.com/louislam/dockge/blob/master/backend/stack.ts#L255) by parsing the response from docker compose ls, which returns something like exited(1), running(2) without further details...

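To make that constraint concrete, here is a minimal sketch of parsing such a STATUS column. parseStatus is an assumed helper name, not Dockge's actual code — the point is that the output only yields counts per state, with no information about which container is in which state:

```typescript
// Sketch (not Dockge's implementation): parse a `docker compose ls`
// STATUS value such as "exited(1), running(2)" into per-state counts.
function parseStatus(status: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const match of status.matchAll(/([a-z]+)\((\d+)\)/g)) {
    counts[match[1]] = Number(match[2]);
  }
  return counts;
}

// A stack with an exited init container still has running containers:
const counts = parseStatus("exited(1), running(2)");
console.log(counts); // { exited: 1, running: 2 }
```

Nothing here says whether the exited container is an init task that was supposed to exit, which is why a label or per-service setting keeps coming up in this thread.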

@thefrana commented on GitHub (Dec 1, 2023):

I also encountered this issue: docker compose ls reports the status as running, yet the UI still shows exited.


@nzprog commented on GitHub (Dec 6, 2023):

I'm also having this problem.


@queeup commented on GitHub (Dec 22, 2023):

Same here. I am using a bash container to do some jobs before the services start, like an init task.

version: "3"
services:
  bash:
    image: bash:latest
    container_name: bash
    network_mode: none
    user: 1000:100
    volumes:
      - /container-data:/container-data
    restart: "no" # quoted so YAML does not parse it as boolean false
    command: -c 'mkdir -p
      /container-data/{sonarr/config,radarr,prowlarr/config,bazarr/config,transmission/{config,custom-cont-init.d}}'
  transmission:
    image: ghcr.io/linuxserver/transmission:latest
    container_name: transmission
    hostname: transmission
    networks:
      - servarr
    depends_on:
      - bash
    environment:
      - PUID=1000
      - PGID=100
      - TZ=Europe/Istanbul
    volumes:
      - /container-data/transmission/config:/config
      - /container-data/transmission/custom-cont-init.d:/custom-cont-init.d:ro
      - /data/downloads:/data/downloads:shared
    ports:
      - 9091:9091
      - 51413:51413
      - 51413:51413/udp
    restart: unless-stopped

@golgor commented on GitHub (Jan 2, 2024):

I also bumped into this problem. I haven't looked into the code at all, but isn't it possible to somehow specify which services to include in the status? I.e. no real changes in how everything is executed/managed and no changes needed in the docker-compose file, just an update to the GUI, like a setting "Ignore these services when tracking status".

Sorry if that is a stupid idea... I'm fairly new to Docker as a whole, and especially Dockge.


@carelinus commented on GitHub (Jan 4, 2024):

Same issue here. docker compose ls shows correct status, dockge shows stack as exited.


@akshara-tg commented on GitHub (Jan 10, 2024):

I also have the same issue. I have 50 containers in total. Around 40 are showing as running; the remaining 10 are shown as inactive.

Below is one of the stacks showing as inactive while it is actually up & running. docker compose ls also shows it as running.



@tippfehlr commented on GitHub (Jan 10, 2024):

I would propose just showing how many of the containers are running, like the output of docker compose ls, e.g. "4/5 running".

If one of the containers crashes/exits before the others, there is currently no indicator that some containers might still be running.


@arminus commented on GitHub (Jan 24, 2024):

Here's another perfectly valid example where an init container is stopped by default:


Appreciate the work on this regardless!


@ChrisB85 commented on GitHub (Jan 24, 2024):

Same issue here with just one container.


@Triskae commented on GitHub (Mar 6, 2024):

Me too. Is the stack missing a config, or something like that?

Great job on Dockge, it saves me a lot of time!



@bwcummings1 commented on GitHub (Jun 3, 2024):

Has anyone found a resolution to this bug yet? I have a project that is running in the browser port, but showing as exited in the UI.


@x1ao4 commented on GitHub (Jun 9, 2024):

I tried this stack, I saw the minio container is not started, I think it is the reason, because the stack is active only if all containers are up.

You're right, when a stack has one or more containers that have exited but still has other running containers, the stack is shown as "exited." To me, this seems like a bug because if there are still running containers, the stack's status should be "running" rather than "exited." In Docker Desktop or Orbstack, such a state would be shown as "running," which better meets user expectations. I hope the logic can be modified so that in this situation, the status is displayed as "running." Only when all containers have exited should it show as "exited."


@Shponzo commented on GitHub (Jul 16, 2024):

I'm experiencing the same problem. Is there any update on this bug?


@Preclowski commented on GitHub (Aug 31, 2024):

cd /opt/stacks/yourstack && docker compose up -d --remove-orphans should help most of you guys :)


@Handrail9 commented on GitHub (Sep 2, 2024):

cd /opt/stacks/yourstack && docker compose up -d --remove-orphans should help most of you guys :)

Perhaps another solution to this bug could be for Dockge to automatically run this before marking a stack as exited.


@chaun14 commented on GitHub (Oct 30, 2024):

Same weird issue here: all containers are working, but the status is shown as exited, which doesn't really make sense.
I tried the above suggestion of clearing orphans, but it had no real effect.

version: "3"
services:
  db:
    image: mariadb:10
    command:
      - mysqld
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_unicode_ci
    volumes:
      - ./db:/var/lib/mysql
    env_file:
      - .env
    restart: always
  redis:
    image: redis:4.0-alpine
    restart: always
  addy:
    image: anonaddy/anonaddy:latest
    depends_on:
      - db
      - redis
    ports:
      - 25:25
      - 127.0.0.1:8100:8000
      - 127.0.0.1:11334:11334
    volumes:
      - ./data:/data
    env_file:
      - .env
    restart: always
networks: {}



@GilDev commented on GitHub (Dec 12, 2024):

Same issue with default Paperless-ngx and InvenTree stacks:



@possiblyanowl commented on GitHub (Jan 12, 2025):

I also am running into this issue. I use a container to do some init scripts, and the other containers in my compose file use

depends_on:
  init:
    condition: service_completed_successfully

I would love to have a solution to exempt a container from the stack's overall "status".

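For reference, the init-container pattern described above looks like this in full (service names and images here are illustrative, not from the commenter's actual file):

```yaml
services:
  init:
    image: busybox
    # Runs its setup work once, then exits with code 0
    command: sh -c "echo 'running init scripts'"
    restart: "no" # quoted so YAML does not parse it as boolean false
  app:
    image: busybox
    command: sleep infinity
    depends_on:
      init:
        # Start `app` only after `init` has exited successfully
        condition: service_completed_successfully
    restart: unless-stopped
```

Once up, `init` is always "exited" by design, which is exactly the state Dockge currently interprets as the whole stack being exited.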

@justin13888 commented on GitHub (Mar 22, 2025):

Wanted to add that any docker compose setup with some sort of init task that is supposed to exit early (e.g. all the Zitadel docker compose examples, in my case) shows this symptom.

Perhaps some sort of Dockge-specific flag to indicate that this is intended would be a potential solution.


@blackshroud commented on GitHub (Jul 22, 2025):

I also am running into this issue. I use a container to do some init scripts, and the other containers in my compose file use

depends_on:
  init:
    condition: service_completed_successfully

I would love to have a solution to exempt a container from the stack's overall "status".

This is a great idea. A simple check box on each item to exclude from the overall status or something similar?


@GabeDuarteM commented on GitHub (Dec 31, 2025):

@louislam Is anyone working on this? If not, I'd like to tackle it!

From the discussions here, I see a few options:

  1. Label-based exclusion: Add a label like dockge.container.status.enable=false to exclude containers from status checks
  2. GUI setting: Add a UI setting somewhere to "Ignore these services for status"
  3. Show ratio: Display "2/3 running" instead of just "running/exited"
  4. Change logic: Only show "exited" when all containers have exited

Personally, I think something like option 3 would work pretty nicely: it provides better status visibility without requiring users to add labels or config, and it naturally handles init containers that are expected to exit.

Do you have a preference, or another approach in mind?

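A rough sketch of what option 3 could look like. stackSummary is a hypothetical helper, not existing Dockge code; it builds a "running/total" summary from per-state counts of the kind docker compose ls reports:

```typescript
// Sketch of option 3 (hypothetical, not Dockge's code): summarize a stack
// from per-state counts, e.g. { exited: 1, running: 2 } -> "2/3 running".
function stackSummary(counts: Record<string, number>): string {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  const running = counts["running"] ?? 0;
  if (total === 0) {
    return "inactive"; // no containers at all, e.g. stack never deployed
  }
  return `${running}/${total} running`;
}

console.log(stackSummary({ exited: 1, running: 2 })); // "2/3 running"
console.log(stackSummary({})); // "inactive"
```

A display like "2/3 running" sidesteps the binary active/exited decision entirely, so expected-to-exit init containers no longer flip the whole stack to "exited".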

@major-mayer commented on GitHub (Jan 21, 2026):

@GabeDuarteM I think option 3 is a good way to tackle this issue.
In my case, one stack is constantly showing as inactive because I use docker profiles to prevent containers from running on default stack startup: https://docs.docker.com/compose/how-tos/profiles/

  vaultwarden-backup-everything:
    image: bruceforce/vaultwarden-backup
    restart: on-failure
    init: true
    depends_on:
      - vaultwarden
    ....
    command: manual
    profiles:
      - backup 

So this stack would currently never show as active, even though the relevant containers are running.

docker compose ls shows the status correctly:

root@truenas:~# docker compose ls
NAME        STATUS       CONFIG FILES
bitwarden   running(1)   /mnt/data/apps/dockge/stacks/bitwarden/compose.yaml

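As a footnote on the profiles case above: Compose only starts profiled services when the profile is explicitly activated, so the backup container is expected to be absent from a normal startup (standard documented Compose usage, shown here with this stack's "backup" profile):

```shell
# Normal startup skips any service that declares `profiles:`
docker compose up -d

# Activate the "backup" profile to also run vaultwarden-backup-everything
docker compose --profile backup up -d
```

This is why any status logic that requires every declared service to be up will never mark such a stack as active.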