The db-wal file is too large #1322

Closed
opened 2026-02-28 02:17:20 -05:00 by deekerman · 9 comments
Owner

Originally created by @AnnAngela on GitHub (Aug 8, 2022).

⚠️ Please verify that this bug has NOT been raised before.

  • I checked and didn't find similar issue

🛡️ Security Policy

  • I agree to have read this project's Security Policy (https://github.com/louislam/uptime-kuma/security/policy)

📝 Describe your problem

I run an uptime-kuma instance in Docker using the louislam/uptime-kuma:1 image.

After running the instance for a long time, I found out that the db-wal file is too large:

![image](https://user-images.githubusercontent.com/9762652/183390134-d295cf20-8b24-4682-85c3-6412abb9b63b.png)

Is there any way to suppress it?

🐻 Uptime-Kuma Version

1.17.1

💻 Operating System and Arch

Ubuntu 20.04.4 LTS x64

🌐 Browser

Microsoft Edge 103.0.1264.77 x64

🐋 Docker Version

Docker version 20.10.17, build 100c701

🟩 NodeJS Version

v16.15.0

deekerman 2026-02-28 02:17:20 -05:00
  • closed this issue
  • added the
    help
    label

@louislam commented on GitHub (Aug 8, 2022):

How many monitors do you have?

You can keep less history to save disk space.

![image](https://user-images.githubusercontent.com/1336778/183405389-5d5001c9-0507-4afa-8bb9-9e3bd250a7a9.png)


@AnnAngela commented on GitHub (Aug 8, 2022):

@louislam I have 8 monitors with a 180-day retention period. I'm just curious: can SQLite control the size of its WAL log?


@louislam commented on GitHub (Aug 8, 2022):

If your database was created before 1.10.0, you may need to click "Shrink Database" in order to shrink the WAL size.


@AnnAngela commented on GitHub (Aug 13, 2022):

@louislam Sorry for the late response; I tried it, but it did not help. I learned from Stack Overflow that the WAL file should not grow large: once the log hits its page limit, SQLite should checkpoint the entries back into the main file and start writing again from the head of the WAL. I don't know if there is a real problem or just a misunderstanding on my part.

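For anyone landing here later: the checkpoint behaviour described in that comment can be reproduced with Python's built-in sqlite3 module. This is a standalone sketch (the file and table names are made up, not Uptime Kuma's) showing that a manual TRUNCATE checkpoint shrinks a grown WAL back to zero bytes:

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway database -- not Uptime Kuma's kuma.db.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit: every INSERT commits

con.execute("PRAGMA journal_mode=WAL")       # write-ahead logging, as Uptime Kuma uses
con.execute("PRAGMA wal_autocheckpoint=0")   # disable auto-checkpoint so growth is visible
con.execute("CREATE TABLE heartbeat (id INTEGER PRIMARY KEY, msg TEXT)")
for _ in range(5000):
    con.execute("INSERT INTO heartbeat (msg) VALUES ('up')")

wal_path = path + "-wal"
size_before = os.path.getsize(wal_path)      # the WAL has grown with every commit

# Write all WAL frames back into the main db file and truncate the WAL to 0 bytes.
con.execute("PRAGMA wal_checkpoint(TRUNCATE)")
size_after = os.path.getsize(wal_path)

print(size_before, size_after)
con.close()
```

With auto-checkpointing left at its default (1000 pages), SQLite normally does this on its own, which is why a multi-gigabyte WAL usually points to something holding a long-lived read transaction open and blocking the checkpoint.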

@louislam commented on GitHub (Aug 13, 2022):

Not sure what is happening. Could you try to shut down the container gracefully, and after it has stopped, see if the WAL file merges back into the main database file?

For your reference, my instance with 17 monitors is running for over a year. The wal file is small.

![image](https://user-images.githubusercontent.com/1336778/184476768-df6f3f37-d1aa-406d-8e13-8e0ca31da50b.png)

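The graceful-shutdown point can also be checked in isolation: when the last connection to a WAL-mode SQLite database closes cleanly, SQLite checkpoints the log into the main file and deletes the -wal file entirely. A small sketch with Python's sqlite3 module (throwaway file names, not Uptime Kuma's):

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway database -- not Uptime Kuma's kuma.db.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path, isolation_level=None)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE heartbeat (id INTEGER PRIMARY KEY, msg TEXT)")
con.execute("INSERT INTO heartbeat (msg) VALUES ('up')")

wal_path = path + "-wal"
wal_exists_while_open = os.path.exists(wal_path)   # True: the WAL lives beside the db

# A clean close is the file-level equivalent of a graceful container shutdown:
# the last connection checkpoints and removes the -wal (and -shm) files.
con.close()
wal_exists_after_close = os.path.exists(wal_path)

print(wal_exists_while_open, wal_exists_after_close)
```

This is why a `docker kill` (or an OOM kill) leaves a large -wal behind while a graceful stop does not: only a clean close of the last connection triggers the final checkpoint and cleanup.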

@AnnAngela commented on GitHub (Aug 13, 2022):

After a graceful restart, the problem disappeared. Really strange; sorry for wasting your time. I will close this issue, keep an eye on the size, and open another issue if it grows large again.


@strarsis commented on GitHub (Mar 21, 2023):

I have the same issue: a very large .db-wal file.
kuma.db is large (1.2 GB, which is expected), but kuma.db-wal is also around 1.2 GB!
Can I have uptime-kuma regenerate the kuma.db-wal file to shrink it back down?
I have also already restarted uptime-kuma multiple times.


@fredskis commented on GitHub (Apr 9, 2023):

I'm troubleshooting my instance, running the latest version (1.21.2), changed my monitor history down to 30 days from 180 and tried the Shrink Database button.
I was alerted to this because my docker host had run out of storage space, then I found the Uptime Kuma volume was using >20GB!
I was able to get it back down to 9.6GB with the Shrink Database button, now /app/data/kuma.db is only 27MB but there are still other large files:
/app/data/kuma.db-wal @ 2.1GB
and 3 other .bak files at similar sizes.

How do I gracefully restart the container?

I've tried docker stop uptime-kuma followed by docker compose up -d but this doesn't help. Is there a command I can send from inside the container?

Edit: okay, the kuma.db-wal file is now only 1.3M, but the .bak files are still at crazy sizes; I'm going to assume they can safely be deleted. #1671
Is this a past bug that is now fixed?


@louislam commented on GitHub (Apr 10, 2023):

> I'm troubleshooting my instance, running the latest version (1.21.2), changed my monitor history down to 30 days from 180 and tried the Shrink Database button. I was alerted to this because my docker host had run out of storage space, then I found the Uptime Kuma volume was using >20GB! I was able to get it back down to 9.6GB with the Shrink Database button, now /app/data/kuma.db is only 27MB but there are still other large files: /app/data/kuma.db-wal @ 2.1GB and 3 other .bak files at similar sizes.
>
> How do I gracefully restart the container?
>
> I've tried docker stop uptime-kuma followed by docker compose up -d but this doesn't help. Is there a command I can send from inside the container?
>
> Edit: okay, the kuma.db-wal file is now only 1.3M but the .bak files are still crazy sizes, I'm going to assume they can safely be deleted. #1671 Is this a past bug that is now fixed?

Yes, .bak files can be deleted.
