Query often without taking up tons of database space #2280

Closed
opened 2026-02-28 02:49:06 -05:00 by deekerman · 3 comments
Owner

Originally created by @charlespick on GitHub (Jun 17, 2023).

⚠️ Please verify that this feature request has NOT been suggested before.

  • I checked and didn't find similar feature request

🏷️ Feature Request Type

Other

🔖 Feature description

I've been having some tricky-to-find networking issues: connectivity is lost for a few seconds per day. I ran a continuous ping on my computer overnight and caught the problem, but Uptime Kuma did not catch it. I have the monitor interval set to 20 seconds. I recognize that going any lower would have serious implications for lower-powered devices, but my Uptime Kuma is running on a full server. Additionally, I think it would be fine to save the data in some summarized form.

✔️ Solution

For example, Uptime Kuma could send 10 pings per second and, every 20 seconds, save a single entry to the database if all of them received a reply. If some were lost, it could either save all the pings for that time frame or simply report the service as down for the entire period.
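The proposed aggregation could be sketched roughly as below. This is a hypothetical illustration, not Uptime Kuma's actual code; the `PingResult` type and `summarize_window` function are invented names. The idea: collapse a 20-second window of high-frequency pings into one database row when everything was up, and only fall back to storing individual samples when pings were lost.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PingResult:
    """One ping sample: when it was sent, whether a reply came back."""
    timestamp: float
    ok: bool
    latency_ms: Optional[float] = None


def summarize_window(results: List[PingResult]) -> List[dict]:
    """Collapse a window of high-frequency pings into database rows.

    If every ping succeeded, emit a single aggregate row (huge space
    saving); otherwise keep each individual sample so the outage is
    fully recorded at full resolution.
    """
    if results and all(r.ok for r in results):
        latencies = [r.latency_ms for r in results if r.latency_ms is not None]
        return [{
            "type": "summary",
            "count": len(results),
            "up": True,
            "avg_latency_ms": sum(latencies) / len(latencies) if latencies else None,
        }]
    # Some pings were lost: store every sample for this window
    # (a stricter variant could mark the whole window as down instead).
    return [{"type": "sample", "ts": r.timestamp, "up": r.ok} for r in results]
```

With 10 pings/second and a 20-second window, a fully-up window of 200 samples shrinks to a single row, while any window containing a loss keeps all 200 samples for diagnosis.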

❓ Alternatives

No response

📝 Additional Context

No response

deekerman 2026-02-28 02:49:06 -05:00
Author
Owner

@CommanderStorm commented on GitHub (Jun 17, 2023):

What do you mean (the issue body talks about something completely different) by

Query often without taking up tons of database space

The issue body is a duplicate of https://github.com/louislam/uptime-kuma/pull/1740 and the associated issue.
If this is a duplicate, please close this issue. Duplicates only hurt issue management and feature planning.

Author
Owner

@charlespick commented on GitHub (Jun 19, 2023):

What do you mean (the issue body talks about something completely different) by

Query often without taking up tons of database space

The issue body is a duplicate of #1740 and the associated issue. If this is a duplicate, please close this issue. Duplicates only hurt issue management and feature planning.

Issue #1740 only lowers the limit to 1 second, and I think that would take up a lot of database space very quickly, right? I didn't see any mitigations in that pull request for the much larger space requirement of keeping 86,400+ records per day.
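The 86,400 figure above is just seconds per day, i.e. one record per second. A quick back-of-the-envelope calculation shows why aggregation matters: summarizing into one row per 20-second window (as proposed in this issue) cuts that by a factor of 20.

```python
SECONDS_PER_DAY = 86_400


def records_per_day(interval_s: float) -> int:
    """Rows stored per monitor per day when saving one row per interval."""
    return int(SECONDS_PER_DAY / interval_s)


# One row per second: 86,400 rows/day per monitor.
# One summary row per 20-second window: only 4,320 rows/day.
print(records_per_day(1))   # raw 1-second polling
print(records_per_day(20))  # 20-second aggregation windows
```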

Author
Owner

@CommanderStorm commented on GitHub (Dec 3, 2023):

This issue was resolved by https://github.com/louislam/uptime-kuma/pull/2750 (merged into the v2.0-release train, https://github.com/louislam/uptime-kuma/milestone/24).
The lowering of the limit is tracked in:

  • https://github.com/louislam/uptime-kuma/issues/1645
  • https://github.com/louislam/uptime-kuma/issues/1015

=> closing as resolved
