Uptime Kuma keeps on giving the following error #1166

Closed
opened 2026-02-28 02:12:13 -05:00 by deekerman · 10 comments

Originally created by @iamdempa on GitHub (Jun 13, 2022).

⚠️ Please verify that this bug has NOT been raised before.

  • I checked and didn't find similar issue

🛡️ Security Policy

  • I agree to have read this project Security Policy (https://github.com/louislam/uptime-kuma/security/policy)

📝 Describe your problem

This is the error I am getting

at Timeout.safeBeat [as _onTimeout] (/app/server/model/monitor.js:532:25)
2022-06-13T13:02:25.881Z [MONITOR] ERROR: Please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:305:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.findOne (/app/node_modules/redbean-node/dist/redbean-node.js:515:19)
at async Function.sendCertInfo (/app/server/model/monitor.js:676:23)
at async Function.sendStats (/app/server/model/monitor.js:641:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:1696:13)
at process.emit (node:events:390:28)
at emit (node:internal/process/promises:136:22)
at processPromiseRejections (node:internal/process/promises:242:25)
at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:305:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.findOne (/app/node_modules/redbean-node/dist/redbean-node.js:515:19)
at async Function.sendCertInfo (/app/server/model/monitor.js:676:23)
at async Function.sendStats (/app/server/model/monitor.js:641:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:1696:13)
at process.emit (node:events:390:28)
at emit (node:internal/process/promises:136:22)
at processPromiseRejections (node:internal/process/promises:242:25)
at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:305:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.findOne (/app/node_modules/redbean-node/dist/redbean-node.js:515:19)
at async Function.sendCertInfo (/app/server/model/monitor.js:676:23)
at async Function.sendStats (/app/server/model/monitor.js:641:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:1696:13)
at process.emit (node:events:390:28)
at emit (node:internal/process/promises:136:22)
at processPromiseRejections (node:internal/process/promises:242:25)
at processTicksAndRejections (node:internal/process/task_queues:97:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
2022-06-13T13:02:36.496Z [MANAGE] INFO: Resume Monitor: 42 User ID: 1
2022-06-13T13:02:36.501Z [MONITOR] INFO: Added Monitor: undefined User ID: 1
2022-06-13T13:02:36.990Z [MONITOR] INFO: Monitor #42 'Test': Successful Response: 472 ms | Interval: 60 seconds | Type: http
2022-06-13T13:02:37.366Z [MONITOR] INFO: Monitor #27 'TF Serve Bert Head': Successful Response: 31 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.156Z [MONITOR] INFO: Monitor #28 'Triton Server GPU': Successful Response: 24 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.200Z [MONITOR] INFO: Monitor #30 'Alexanndria Green': Successful Response: 52 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.251Z [MONITOR] INFO: Monitor #32 'Autocomplete Green': Successful Response: 31 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.289Z [MONITOR] INFO: Monitor #34 'Bot Brain Green': Successful Response: 62 ms | Interval: 20 seconds | Type: http
2022-06-13T13:02:38.367Z [MONITOR] INFO: Monitor #36 'Bot Model de Green': Successful Response: 50 ms | Interval: 20 seconds | Type: http

🐻 Uptime-Kuma Version

1.16.1

💻 Operating System and Arch

Kubernetes

🌐 Browser

Chrome

🐋 Docker Version

No response

🟩 NodeJS Version

No response

Can someone help me please? I can't log in and it is freezing. @louislam
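For context, the `KnexTimeoutError` in the log fires when a query waits longer than Knex's connection-acquisition timeout because every pooled connection is busy. A minimal sketch of the standard Knex settings involved, assuming a generic setup rather than Uptime Kuma's actual configuration:

```js
// Generic Knex setup showing the pool settings implicated in the error.
// The DB path and values here are illustrative, not Uptime Kuma's config.
const knex = require("knex")({
  client: "sqlite3",
  connection: { filename: "/app/data/kuma.db" }, // hypothetical path
  useNullAsDefault: true,
  pool: { min: 1, max: 10 },        // pool bounds (tarn.js)
  acquireConnectionTimeout: 60000,  // ms to wait for a free connection
});
// If all `max` connections stay busy (e.g. because of slow disk I/O)
// for longer than acquireConnectionTimeout, pending queries reject with
// "KnexTimeoutError: Knex: Timeout acquiring a connection..."
```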

deekerman 2026-02-28 02:12:13 -05:00
  • closed this issue
  • added the Stale and help labels

@louislam commented on GitHub (Jun 13, 2022):

Usually due to read/write issue. What is your disk type?
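One way to sanity-check the read/write path is to time a synced write from Node on the same volume that holds the database. A rough sketch (the probe path is hypothetical; pick somewhere on the EBS-backed volume):

```js
// Rough fsync write-latency probe; run it on the volume holding the DB.
const fs = require("fs");
const file = "/app/data/.disk-probe"; // hypothetical path on the volume
const fd = fs.openSync(file, "w");
const t0 = process.hrtime.bigint();
fs.writeSync(fd, Buffer.alloc(4096)); // 4 KiB write
fs.fsyncSync(fd);                     // force it to hit the disk
const ms = Number(process.hrtime.bigint() - t0) / 1e6;
fs.closeSync(fd);
fs.unlinkSync(file);
console.log(`4 KiB synced write took ${ms.toFixed(1)} ms`);
```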


@iamdempa commented on GitHub (Jun 13, 2022):

I am using an AWS EBS volume. This is running as a pod in Kubernetes. It was working fine for a week.


@iamdempa commented on GitHub (Jun 13, 2022):

Can you help me fix this please?


@louislam commented on GitHub (Jun 13, 2022):

EBS volume should be working.

Please make sure the load average of your system is low using the `top` command.
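The same load-average figures `top` reports can also be read from Node directly; a small sketch:

```js
// Print the 1/5/15-minute load averages and core count, like `top`.
const os = require("os");
const [one, five, fifteen] = os.loadavg();
console.log(
  `load average: ${one.toFixed(2)}, ${five.toFixed(2)}, ${fifteen.toFixed(2)}` +
  ` across ${os.cpus().length} cores`
);
```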


@iamdempa commented on GitHub (Jun 13, 2022):

Hi @louislam, this is the current pod utilization (from the `kubectl top pods` command). Can you analyse this and let me know if this is normal consumption?

![image](https://user-images.githubusercontent.com/32482769/173364055-17312a35-08f9-4820-a1f8-d6a6a6c1aec1.png)


@iamdempa commented on GitHub (Jun 13, 2022):

Can this happen when I add more monitors? I have around 25
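More monitors do mean more concurrent database queries competing for the same connection pool, which is consistent with the error. A hypothetical repro of the symptom (a deliberately tiny pool held by a long transaction; not Uptime Kuma's actual code):

```js
// Hypothetical repro: one long transaction holds the only pooled
// connection, so a concurrent query times out with KnexTimeoutError.
const knex = require("knex")({
  client: "sqlite3",
  connection: { filename: ":memory:" },
  useNullAsDefault: true,
  pool: { min: 1, max: 1 },       // deliberately tiny pool
  acquireConnectionTimeout: 1000, // fail fast for the demo
});

knex.transaction(async (trx) => {
  await trx.raw("SELECT 1");
  await new Promise((r) => setTimeout(r, 3000)); // hold the connection
}).then(() => knex.destroy());

// Issued while the pool is saturated: rejects with the same
// "Timeout acquiring a connection" error seen in the log above.
knex.raw("SELECT 1").catch((err) => console.error(err.message));
```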


@daeho-ro commented on GitHub (Jun 13, 2022):

The spec seems too small. 24m ~ 0.024 cpu and 62Mi ~ 62MB memory, right?
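For the record, converting those `kubectl top pods` units (Mi is mebibytes, so 62 Mi is a bit more than 62 MB):

```js
// Converting the kubectl readings quoted above.
const cpuMillicores = 24; // "24m" = 24/1000 of a core
const memMebibytes = 62;  // "62Mi" = 62 * 2^20 bytes
console.log(cpuMillicores / 1000);           // 0.024 cores
console.log((memMebibytes * 2 ** 20) / 1e6); // ≈ 65.01 MB
```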


@iamdempa commented on GitHub (Jun 13, 2022):

Yes, you are right. That is the amount of memory the pod is consuming, not the specs I have configured. It is the real-time CPU and memory usage of Uptime Kuma.


@github-actions[bot] commented on GitHub (Sep 11, 2022):

We are clearing up our old issues and your ticket has been open for 3 months with no activity. Remove stale label or comment or this will be closed in 7 days.


@github-actions[bot] commented on GitHub (Sep 18, 2022):

This issue was closed because it has been stalled for 7 days with no activity.
