Unable to log into uptime-kuma. Get strange error. #128

Closed
opened 2026-02-28 01:35:50 -05:00 by deekerman · 31 comments

Originally created by @jim361tx on GitHub (Aug 16, 2021).

I am running the container in Docker on an Unraid server. It has been running for 6 days and is still monitoring hosts and sending alerts; however, I am unable to log in to the web portal. The logs repeat this error over and over:

(node:18) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 592)
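For context, this warning is Node's generic notice that a promise rejected with no handler attached; the actual cause surfaces later in the thread. A minimal sketch (not Uptime Kuma's code) of the pattern that triggers the warning, and the `.catch()` that prevents it:

```javascript
// Minimal sketch (not Uptime Kuma's code): an async function that rejects.
async function queryDatabase() {
  throw new Error("db timeout"); // throwing inside async rejects the returned promise
}

// Without a handler, Node prints UnhandledPromiseRejectionWarning.
// Attaching .catch() handles the rejection and silences the warning:
queryDatabase().catch((err) => {
  console.error("query failed:", err.message);
});
```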

deekerman closed this issue and added the help label (2026-02-28 01:35:50 -05:00).

@louislam commented on GitHub (Aug 16, 2021):

Are there any more logs?

I suggest restarting the container first.


@jim361tx commented on GitHub (Aug 16, 2021):

I have restarted the container several times, as well as the host server. Hopefully it is just a problem on my end and I can delete the container and add it back. I get this over and over in the log:
(node:18) UnhandledPromiseRejectionWarning: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:348:26)
at runNextTicks (internal/process/task_queues.js:60:5)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:548:22)
at async Function.sendUptime (/app/server/model/monitor.js:348:29)
(node:18) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 6266)


@louislam commented on GitHub (Aug 16, 2021):

How many services do you have currently?

I believe it is related to a query timeout problem. If a SQL query takes longer than 60 seconds, it throws an exception.

https://github.com/knex/knex/blob/1744c8c2655a8362260ba987d706ec3473681a78/lib/client.js#L204
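The 60-second limit referenced above is knex's `acquireConnectionTimeout`. A hypothetical configuration sketch showing where that default lives (the filename and pool sizes here are illustrative, not uptime-kuma's actual settings):

```javascript
// Hypothetical knex configuration sketch; filename and pool values are
// illustrative, not uptime-kuma's actual settings.
const knexConfig = {
  client: "sqlite3",
  connection: { filename: "./data/kuma.db" },
  useNullAsDefault: true,
  pool: { min: 1, max: 1 },
  // knex throws KnexTimeoutError if it cannot acquire a connection from
  // the pool within this many milliseconds; the default is 60000 (60 s).
  acquireConnectionTimeout: 60000,
};
```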


@GlassedSilver commented on GitHub (Aug 22, 2021):

I am experiencing the same issue. I have 32 services monitored, if I remember correctly.

I can open the website, and when I click Login after filling in my details the submit button activates, but nothing happens after that.

Same errors in the uptime-kuma log.

I can show a little more of it; mind you, my watched items are still being checked.

at process.<anonymous> (/app/server/server.js:836:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Monitor #27 '<REDACTED>': Successful Response: 5 ms | Interval: 30 seconds | Type: http
Monitor #28 '<REDACTED>': Successful Response: 8 ms | Interval: 30 seconds | Type: http
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:348:26)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:548:22)
at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:534:22)
at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:569:19)
at async Function.sendAvgPing (/app/server/model/monitor.js:326:32)
at async Function.sendStats (/app/server/model/monitor.js:313:9) {
sql: '\n' +
' SELECT AVG(ping)\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND ping IS NOT NULL\n' +
' AND monitor_id = ? limit ?',
bindings: [ -24, 33, 1 ]
}
at process.<anonymous> (/app/server/server.js:836:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Monitor #29 '<REDACTED>': Successful Response: 2 ms | Interval: 30 seconds | Type: http
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:348:26)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:149:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:106:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:265:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:836:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Monitor #35 '! 1) <REDACTED>@<REDACTED> [not redacting the special chars incase this is useful for debugging]': Successful Response: 13 ms | Interval: 60 seconds | Type: keyword
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:348:26)
at runNextTicks (internal/process/task_queues.js:60:5)
at processTimers (internal/timers.js:497:9)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:149:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:106:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:265:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:836:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
at runNextTicks (internal/process/task_queues.js:64:3)
at processTimers (internal/timers.js:497:9)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Monitor #34 '<REDACTED>': Successful Response: 1 ms | Interval: 30 seconds | Type: port
Monitor #32 '<REDACTED>': Successful Response: 9 ms | Interval: 30 seconds | Type: http
Monitor #30 '<REDACTED>': Successful Response: 18 ms | Interval: 30 seconds | Type: http
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:348:26)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:149:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:106:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:265:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:836:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Monitor #31 '<REDACTED>': Successful Response: 15 ms | Interval: 30 seconds | Type: http
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:348:26)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:149:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:106:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:265:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:836:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:348:26)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:149:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:106:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:265:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:836:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:348:26)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:149:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:106:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:265:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:836:13)
at process.emit (events.js:400:28)

All of this is just a short excerpt. The log keeps going, so the watched services are still working, which is nice of course.

Hope this proves helpful.

I started experiencing this about a week ago, after applying an update to uptime-kuma. I couldn't tell you which commit, but it has been about a week.


@ErikZandboer commented on GitHub (Aug 22, 2021):

I am seeing the same thing. The UI pops up the username/password prompt; when I enter my details, it just sits there. It is still monitoring; I can see from the logs that everything is reported up. But no UI... If I refresh the page after login, I simply get the login page again. The logs are spitting out the same error as described above:

Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?

Edit: I am running 1.3.2 from a container. The container runs on a Synology using the Synology-native docker tooling.


@louislam commented on GitHub (Aug 22, 2021):

@ErikZandboer @GlassedSilver Did you encounter the same error in 1.2.0?


@GlassedSilver commented on GitHub (Aug 22, 2021):

> Did you encounter the same error in 1.2.0?

Is it safe to go back to that version database-file-wise? If so, I'd quickly try it out!


@louislam commented on GitHub (Aug 22, 2021):

> > Did you encounter the same error in 1.2.0?
>
> Is it safe to go back to that version database-file-wise? If so, I'd quickly try it out!

Should be safe, since the database structure is the same in these two versions.


@GlassedSilver commented on GitHub (Aug 22, 2021):

Nope, same problem. Should I try 1.1.0?


@louislam commented on GitHub (Aug 22, 2021):

> Nope, same problem. Should I try 1.1.0?

No, you should not try 1.1.0.

Thank you for testing. My guess is that some slow queries are blocking everything else. I will try to investigate.
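One way to test the slow-query theory would be to time each query. A hypothetical helper (not part of uptime-kuma) that warns when a query exceeds a threshold, to identify which queries hold the pool's connection:

```javascript
// Hypothetical helper (not part of uptime-kuma): wrap a query function and
// warn when it runs longer than a threshold, so slow queries can be found.
async function timedQuery(label, queryFn, thresholdMs = 1000) {
  const start = Date.now();
  try {
    return await queryFn();
  } finally {
    const elapsed = Date.now() - start;
    if (elapsed > thresholdMs) {
      console.warn(`[slow query] ${label} took ${elapsed} ms`);
    }
  }
}
```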


@GlassedSilver commented on GitHub (Aug 22, 2021):

Thank you so much for prioritizing this issue. Really fell in love with Uptime Kuma and its downtime (I just realized how ironic that is, no hard feelings :P) is giving me withdrawal symptoms. :D


@ErikZandboer commented on GitHub (Aug 25, 2021):

> Did you encounter the same error in 1.2.0?

Sorry for the late reply. In 1.2.0 it worked, although I did sometimes see very long delays between login and the dashboard showing. That might have been the same issue, but I cannot be sure. Next to try is the nightly :)


@ErikZandboer commented on GitHub (Aug 25, 2021):

So I just tried 1.2.0. There are also quite a number of errors in the log. Besides the previously mentioned Knex timeout / pool-is-probably-full error (same behavior: not able to log in after the user/pass prompt), I now also see this one:

(node:25) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1580)
(node:25) UnhandledPromiseRejectionWarning: TypeError: msg.includes is not a function
    at RedBeanNode.checkError (/app/node_modules/redbean-node/dist/redbean-node.js:394:26)
    at RedBeanNode.checkAllowedError (/app/node_modules/redbean-node/dist/redbean-node.js:402:14)
    at RedBeanNode.findOne (/app/node_modules/redbean-node/dist/redbean-node.js:478:18)
    at async Function.sendCertInfo (/app/server/model/monitor.js:339:24)
(node:25) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1583)
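The `msg.includes is not a function` TypeError suggests a non-string value reached a string check inside redbean-node. A hedged sketch of a defensive fix (hypothetical, not the actual patch; `isAllowedError` and its parameters are illustrative names):

```javascript
// Hypothetical defensive version of an error check like redbean-node's
// checkError: coerce the message to a string before calling string methods,
// so a non-string value (an object, a number) cannot throw a TypeError.
function isAllowedError(error, allowedFragments) {
  const msg = String(error && error.message !== undefined ? error.message : error);
  return allowedFragments.some((fragment) => msg.includes(fragment));
}
```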


@ErikZandboer commented on GitHub (Aug 25, 2021):

By the way, the nightly build I just downloaded would not even show me a web page; I'm not sure of its status right now.


@louislam commented on GitHub (Aug 25, 2021):

> Btw the nightly build I just downloaded would not even show me a web page, not sure of its status now.

Strange; maybe the new sqlite3 library doesn't help either.

I need more info. After starting Uptime Kuma, could you show me the load average, CPU usage, and free memory?

Command:

top

@ErikZandboer commented on GitHub (Aug 26, 2021):

Version 1.2.0 shows quite some CPU load, around 33%. I have to say that I run this on a Synology DS412+, which has a tiny Intel Atom D2700 CPU. It seems that 1.3.2 also produces comparable CPU load:

![image](https://user-images.githubusercontent.com/16650934/130927036-9425c06c-3a8f-47d9-9c67-eab63e87a3fb.png)


@ErikZandboer commented on GitHub (Aug 26, 2021):

Not sure what happened, but I just re-downloaded version 1.3.2 and it starts like lightning, lets me log in, and I am looking at the dashboard. No errors in the log, just successful responses. CPU load is also down to just a few percent (I see 0-2% CPU now at rest, 22% during a scan).


@louislam commented on GitHub (Aug 26, 2021):

A load average around 2.00 is pretty high in my opinion.

I think it may be related to your NAS doing a lot of I/O jobs, which also blocks Uptime Kuma's I/O. Another reason may be that Atom CPUs don't handle Node.js applications well.

FYI, here is my free VPS on Oracle with 1/8 CPU (yes, only 0.125 cores). There are about 10 monitors. It is smooth.

![image](https://user-images.githubusercontent.com/1336778/130938478-cc51e365-2941-4542-893f-7841268675f3.png)


@ErikZandboer commented on GitHub (Aug 26, 2021):

The NAS is stationary, it is my backup nas that really only acts at a dumb target during the night. Weirdly after redownloading and reinstalling Uptime-Kuma 1.3.2 it started very fast and allowed me to login without any errors. Nothing changed, I just created a new container like before from scratch only pointing to the existing /app/data folder to pickup on the Dbase. CPU loads now show a "normal" 0-2% during sleep and bounce up to 22% at peak (while scanning).

Until I run into the errors described above again, I think I am disqualified from testing on this issue; it works now and I cannot reproduce. Sorry!


@ErikZandboer commented on GitHub (Aug 26, 2021):

Looking at the HTTP response time I got from UK, you can see that the web server was really slow to respond while the error was occurring. The sensor was paused ("off") at first, then started with the error. After redownloading and reconstructing the container, response times went down to "sane" values again:
![image](https://user-images.githubusercontent.com/16650934/130962884-181c4533-b033-4715-b31c-32842683ed5c.png)


@raspberrycoulis commented on GitHub (Aug 31, 2021):

I've just pulled the `nightly` image and it now loads on my Synology DS718+ NAS.


@GlassedSilver commented on GitHub (Aug 31, 2021):

For me the nightly still doesn't even load the main page. (as in: maybe login is fixed, but I can't get to the point of sending my credentials)


@louislam commented on GitHub (Aug 31, 2021):

> For me the nightly still doesn't even load the main page. (as in: maybe login is fixed, but I can't get to the point of sending my credentials)

I need the hardware spec, and I want to see the CPU usage, memory usage, and load average.
Command: `top`


@GlassedSilver commented on GitHub (Sep 1, 2021):

Good news: switched to the `:1` tag, and it instantly worked flawlessly again!

Edit:
It worked for exactly one load of the site. Even opening the page in incognito mode, to make sure no cookies or caches are messing around, doesn't help.

**Errors from the log (very long excerpt, hence the collapsing)**
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at runNextTicks (internal/process/task_queues.js:60:5)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22)
at async RedBeanNode.getRow (/app/node_modules/redbean-node/dist/redbean-node.js:552:22)
at async RedBeanNode.getCell (/app/node_modules/redbean-node/dist/redbean-node.js:587:19)
at async Function.sendAvgPing (/app/server/model/monitor.js:375:32) {
sql: '\n' +
' SELECT AVG(ping)\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND ping IS NOT NULL\n' +
' AND monitor_id = ? limit ?',
bindings: [ -24, 6, 1 ]
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
at runNextTicks (internal/process/task_queues.js:64:3)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Monitor #20 '! 2) DNS @Pi-Hole': Successful Response: 1 ms | Interval: 30 seconds | Type: port
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22)
at async Function.sendUptime (/app/server/model/monitor.js:410:29)
at async Function.sendStats (/app/server/model/monitor.js:360:13) {
sql: '\n' +
' SELECT duration, time, status\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND monitor_id = ? ',
bindings: [ -24, 12 ]
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Monitor #19 '!! LanguageTool': Failing: connect ECONNREFUSED 192.168.2.107:8010 | Type: port
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22)
at async Function.sendUptime (/app/server/model/monitor.js:410:29)
at async Function.sendStats (/app/server/model/monitor.js:360:13) {
sql: '\n' +
' SELECT duration, time, status\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND monitor_id = ? ',
bindings: [ -24, 13 ]
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22)
at async Function.sendUptime (/app/server/model/monitor.js:410:29)
at async Function.sendStats (/app/server/model/monitor.js:361:13) {
sql: '\n' +
' SELECT duration, time, status\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND monitor_id = ? ',
bindings: [ -720, 30 ]
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Username from JWT: admin
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22)
at async Function.sendUptime (/app/server/model/monitor.js:410:29)
at async Function.sendStats (/app/server/model/monitor.js:361:13) {
sql: '\n' +
' SELECT duration, time, status\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND monitor_id = ? ',
bindings: [ -720, 35 ]
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22)
at async Function.sendUptime (/app/server/model/monitor.js:410:29)
at async Function.sendStats (/app/server/model/monitor.js:360:13) {
sql: '\n' +
' SELECT duration, time, status\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND monitor_id = ? ',
bindings: [ -24, 15 ]
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22)
at async Function.sendUptime (/app/server/model/monitor.js:410:29)
at async Function.sendStats (/app/server/model/monitor.js:360:13) {
sql: '\n' +
' SELECT duration, time, status\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND monitor_id = ? ',
bindings: [ -24, 17 ]
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at runNextTicks (internal/process/task_queues.js:60:5)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
at runNextTicks (internal/process/task_queues.js:64:3)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at runNextTicks (internal/process/task_queues.js:60:5)
at processTimers (internal/timers.js:497:9)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
at runNextTicks (internal/process/task_queues.js:64:3)
at processTimers (internal/timers.js:497:9)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22)
at async Function.sendUptime (/app/server/model/monitor.js:410:29)
at async Function.sendStats (/app/server/model/monitor.js:361:13) {
sql: '\n' +
' SELECT duration, time, status\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND monitor_id = ? ',
bindings: [ -720, 6 ]
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Login
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22)
at async Function.sendUptime (/app/server/model/monitor.js:410:29)
at async Function.sendStats (/app/server/model/monitor.js:360:13) {
sql: '\n' +
' SELECT duration, time, status\n' +
' FROM heartbeat\n' +
" WHERE time > DATETIME('now', ? || ' hours')\n" +
' AND monitor_id = ? ',
bindings: [ -24, 19 ]
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26)
at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28)
at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19)
at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26)
at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20)
at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) {
sql: undefined,
bindings: undefined
}
at process.<anonymous> (/app/server/server.js:846:13)
at process.emit (events.js:400:28)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)

As for my hardware, it's an HPE DL380e Gen8 with 48 GB of ECC memory. The environment is unRAID 6.9.2 with Docker 20.10.5.
App data as well as the image reside on a BTRFS pool of two 1 TB Samsung SSDs.

Everything else on my setup is working flawlessly at the moment, and overall the system has been very stable in the past. ('ts a good girl, yes my server is a girl :P)

The results from the `top` command (run inside the Docker container itself, which I presume is what you wished for):

Mem: 48593808K used, 818240K free, 1951944K shrd, 168K buff, 14896880K cached
CPU:  14% usr   5% sys   0% nic  80% idle   0% io   0% irq   0% sirq
Load average: 7.18 7.59 8.22 8/4460 563
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
    1     0 root     S     319m   1%   2   2% node server/server.js
  556     0 root     S     1652   0%   1   0% /bin/sh
  563   556 root     R     1588   0%   3   0% top
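For anyone else hitting the `KnexTimeoutError` above: the message comes from Knex's connection pool, which rejects an acquisition that waits longer than its timeout while every pooled connection is still checked out (for instance because SQLite queries are stalled on slow disk I/O). A stripped-down toy pool (illustrative only, not Knex's actual implementation) reproduces the mechanism:

```javascript
// Toy fixed-size connection pool (illustrative only, not Knex's code).
// When all connections are checked out and none is released before the
// deadline, acquire() rejects -- the same shape of failure as the
// KnexTimeoutError in the logs above.
class TinyPool {
  constructor(size, acquireTimeoutMs) {
    this.free = size;
    this.waiters = [];
    this.timeout = acquireTimeoutMs;
  }

  acquire() {
    if (this.free > 0) {
      this.free--;
      return Promise.resolve("conn");
    }
    // Pool exhausted: wait for a release, but only up to the timeout.
    return new Promise((resolve, reject) => {
      const t = setTimeout(() => {
        this.waiters = this.waiters.filter((w) => w !== entry);
        reject(new Error("Timeout acquiring a connection. The pool is probably full."));
      }, this.timeout);
      const entry = { resolve, t };
      this.waiters.push(entry);
    });
  }

  release() {
    const next = this.waiters.shift();
    if (next) {
      clearTimeout(next.t);
      next.resolve("conn");
    } else {
      this.free++;
    }
  }
}

async function demo() {
  const pool = new TinyPool(1, 50); // one connection, 50 ms acquire timeout
  await pool.acquire();             // a slow query holds the only connection...
  try {
    await pool.acquire();           // ...so the next query cannot get one in time
    return "acquired";
  } catch (e) {
    return e.message;
  }
}

demo().then((msg) => console.log(msg));
```

Knex does expose `pool: { min, max }` and `acquireConnectionTimeout` settings, but note that in this thread the root cause looked like queries stalling on slow storage rather than a pool that was simply sized too small, so raising those limits alone may not help.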
', bindings: [ -720, 35 ] } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32) If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26) at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20) at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) { sql: undefined, bindings: undefined } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32) If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? 
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22) at async Function.sendUptime (/app/server/model/monitor.js:410:29) at async Function.sendStats (/app/server/model/monitor.js:360:13) { sql: '\n' + ' SELECT duration, time, status\n' + ' FROM heartbeat\n' + " WHERE time > DATETIME('now', ? || ' hours')\n" + ' AND monitor_id = ? ', bindings: [ -24, 15 ] } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32) If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? 
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26) at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20) at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) { sql: undefined, bindings: undefined } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32) If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22) at async Function.sendUptime (/app/server/model/monitor.js:410:29) at async Function.sendStats (/app/server/model/monitor.js:360:13) { sql: '\n' + ' SELECT duration, time, status\n' + ' FROM heartbeat\n' + " WHERE time > DATETIME('now', ? || ' hours')\n" + ' AND monitor_id = ? 
', bindings: [ -24, 17 ] } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32) If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26) at runNextTicks (internal/process/task_queues.js:60:5) at listOnTimeout (internal/timers.js:526:9) at processTimers (internal/timers.js:500:7) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26) at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20) at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) { sql: undefined, bindings: undefined } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32) at runNextTicks (internal/process/task_queues.js:64:3) at listOnTimeout (internal/timers.js:526:9) at processTimers (internal/timers.js:500:7) If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? 
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26) at runNextTicks (internal/process/task_queues.js:60:5) at processTimers (internal/timers.js:497:9) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26) at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20) at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) { sql: undefined, bindings: undefined } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32) at runNextTicks (internal/process/task_queues.js:64:3) at processTimers (internal/timers.js:497:9) If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22) at async Function.sendUptime (/app/server/model/monitor.js:410:29) at async Function.sendStats (/app/server/model/monitor.js:361:13) { sql: '\n' + ' SELECT duration, time, status\n' + ' FROM heartbeat\n' + " WHERE time > DATETIME('now', ? || ' hours')\n" + ' AND monitor_id = ? 
', bindings: [ -720, 6 ] } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32) If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Login Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:566:22) at async Function.sendUptime (/app/server/model/monitor.js:410:29) at async Function.sendStats (/app/server/model/monitor.js:360:13) { sql: '\n' + ' SELECT duration, time, status\n' + ' FROM heartbeat\n' + " WHERE time > DATETIME('now', ? || ' hours')\n" + ' AND monitor_id = ? ', bindings: [ -24, 19 ] } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32) If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? 
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:295:26) at async Runner.ensureConnection (/app/node_modules/knex/lib/execution/runner.js:259:28) at async Runner.run (/app/node_modules/knex/lib/execution/runner.js:30:19) at async RedBeanNode.storeCore (/app/node_modules/redbean-node/dist/redbean-node.js:166:26) at async RedBeanNode.store (/app/node_modules/redbean-node/dist/redbean-node.js:126:20) at async Timeout.beat [as _onTimeout] (/app/server/model/monitor.js:307:13) { sql: undefined, bindings: undefined } at process.<anonymous> (/app/server/server.js:846:13) at process.emit (events.js:400:28) at processPromiseRejections (internal/process/promises.js:245:33) at processTicksAndRejections (internal/process/task_queues.js:96:32)

As for my hardware, it's an HPE DL380e Gen8 with 48 GB of ECC memory. The environment is unRAID 6.9.2 with Docker 20.10.5.
App data as well as the image reside on a pool of BTRFS 1 TB Samsung SSDs (2 drives).

Everything else on my setup is working flawlessly at the moment, and overall the system has been very stable in the past. ('ts a good girl, yes my server is a girl :P)

The results from the top command (run inside the container itself, which I presume is what you wanted):

```
Mem: 48593808K used, 818240K free, 1951944K shrd, 168K buff, 14896880K cached
CPU:  14% usr   5% sys   0% nic  80% idle   0% io   0% irq   0% sirq
Load average: 7.18 7.59 8.22 8/4460 563
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
    1     0 root     S     319m   1%   2   2% node server/server.js
  556     0 root     S     1652   0%   1   0% /bin/sh
  563   556 root     R     1588   0%   3   0% top
```
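For readers puzzling over the log format: the repeated `UnhandledPromiseRejectionWarning` / `Trace:` lines are what happens when a promise rejects with no `.catch()` and a process-level handler prints it. A minimal standalone sketch of that pattern (plain Node, not Uptime Kuma's actual code; the error message below is just illustrative):

```javascript
// A promise rejects with no .catch(); the process-level handler catches it
// and prints "Trace: <message>" plus a stack, similar to what the
// server.js:846 frames in the log above suggest is happening.
process.on('unhandledRejection', (reason) => {
  console.trace(reason && reason.message ? reason.message : reason);
});

// Simulate a rejection like the pool-timeout error seen in the log.
Promise.reject(new Error('Knex: Timeout acquiring a connection. The pool is probably full.'));
```

With SQLite behind knex, a single slow or stuck write can hold the one usable connection long enough that every queued heartbeat query times out the same way, which matches the log repeating the identical `KnexTimeoutError` for many different monitors.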
Author
Owner

@louislam commented on GitHub (Sep 1, 2021):

> Good news, switched to the `:1` tag, and it instantly worked flawlessly again!
>
> **Edit:**
> Worked exactly for one load of the site. Even opening the page in incognito mode to make sure no cookies or caches mess around doesn't help.
>
> **Errors from the log (very long excerpt, hence the collapsing)**
> As for my hardware, it's an HPE DL380e Gen8 with 48 GB of ECC memory. The environment is unRAID 6.9.2 with Docker 20.10.5.
> App data as well as the image reside on a pool of BTRFS 1 TB Samsung SSDs (2 drives).
>
> Everything else on my setup is working flawlessly at the moment and overall the system has been very stable in the past. ('ts a good girl, yes my server is a girl :P)
>
> The results from the top command (run inside the docker itself I presume you wished for):
>
> ```
> Mem: 48593808K used, 818240K free, 1951944K shrd, 168K buff, 14896880K cached
> CPU:  14% usr   5% sys   0% nic  80% idle   0% io   0% irq   0% sirq
> Load average: 7.18 7.59 8.22 8/4460 563
>   PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
>     1     0 root     S     319m   1%   2   2% node server/server.js
>   556     0 root     S     1652   0%   1   0% /bin/sh
>   563   556 root     R     1588   0%   3   0% top
> ```

A load average of 7.18 is higher than expected for such a powerful machine; that is strange.

Author
Owner

@ErikZandboer commented on GitHub (Sep 1, 2021):

I got 1.3.2 running after a lot of trial and error. Now upgraded to 1.5.2 and it worked perfectly the first time. This version seems to have fixed the issue (at least on my side).

Thanks for all of your work, this is becoming (and already IS) a great, straight-forward tool!

Author
Owner

@GlassedSilver commented on GitHub (Sep 1, 2021):

> > Good news, switched to the `:1` tag, and it instantly worked flawlessly again!
> >
> > **Edit:**
> > Worked exactly for one load of the site. Even opening the page in incognito mode to make sure no cookies or caches mess around doesn't help.
> >
> > **Errors from the log (very long excerpt, hence the collapsing)**
> > As for my hardware, it's an HPE DL380e Gen8 with 48 GB of ECC memory. The environment is unRAID 6.9.2 with Docker 20.10.5.
> > App data as well as the image reside on a pool of BTRFS 1 TB Samsung SSDs (2 drives).
> >
> > Everything else on my setup is working flawlessly at the moment and overall the system has been very stable in the past. ('ts a good girl, yes my server is a girl :P)
> >
> > The results from the top command (run inside the docker itself I presume you wished for):
> >
> > ```
> > Mem: 48593808K used, 818240K free, 1951944K shrd, 168K buff, 14896880K cached
> > CPU:  14% usr   5% sys   0% nic  80% idle   0% io   0% irq   0% sirq
> > Load average: 7.18 7.59 8.22 8/4460 563
> >   PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
> >     1     0 root     S     319m   1%   2   2% node server/server.js
> >   556     0 root     S     1652   0%   1   0% /bin/sh
> >   563   556 root     R     1588   0%   3   0% top
> > ```
>
> Load average: 7.18 is higher than expected with such powerful machine, that is strange.

That's a MongoDB instance doing that for the most part. Overall my server has a lot of stuff to do, so whilst 7.18 is more than usual, the server is not starved for tasks. Don't worry about it.

Apparently it works again now after going to the 1.5.2 tag.
A bit weirded out that `:latest` doesn't pull 1.5.2; maybe my Docker still has the older 1.3.2 cached for that tag.

Also a bit weirded out that the db files grow VERY large in size and that the watches need a bit of time to load up. (I have 32 watches)

I'm not gonna poke the bear any further and let the docker sit there and live with it for now. :D

Author
Owner

@chakflying commented on GitHub (Sep 1, 2021):

@GlassedSilver that sounds a bit strange. I did a rough calculation and it should only use about ~100 KB per monitor per day, so about 50 MB if you have 32 monitors running for a couple of weeks. How big is your `kuma.db` file?
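The rough calculation above can be sketched as follows (the 100 KB per monitor per day figure is chakflying's estimate, not a measured value):

```javascript
// Back-of-the-envelope storage estimate for the heartbeat data.
const kbPerMonitorPerDay = 100; // rough estimate from the comment above
const monitors = 32;
const days = 14;                // "a couple of weeks"

const totalKB = kbPerMonitorPerDay * monitors * days;
console.log((totalKB / 1024).toFixed(2) + ' MB'); // prints "43.75 MB", i.e. roughly 50 MB
```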

Author
Owner

@GlassedSilver commented on GitHub (Sep 2, 2021):

@chakflying Take a look :)

![image](https://user-images.githubusercontent.com/1912133/131795173-46fd2191-e88b-4c8f-bc41-056dda1f7f0f.png)

Author
Owner

@louislam commented on GitHub (Sep 2, 2021):

> @chakflying Take a look :)
>
> ![image](https://user-images.githubusercontent.com/1912133/131795173-46fd2191-e88b-4c8f-bc41-056dda1f7f0f.png)

As I cannot reproduce the problem on my side, it is hard to address.

If you don't mind, you can send me the db files (kuma.db, kuma.db-shm, kuma.db-wal) for me to investigate. But first you have to delete all sensitive data inside the db file, such as the hashed password, notification info, and the JWT secret:

  1. Make a copy of `kuma.db`, `kuma.db-shm`, `kuma.db-wal`.
  2. Use any SQLite client to open `kuma.db` (my favourite one is SQLite Expert Personal: http://www.sqliteexpert.com/download.html).
  3. Delete the `notification`, `setting`, and `user` tables.
     ![image](https://user-images.githubusercontent.com/1336778/131797897-3d1ec1d0-0fd6-4d89-b230-592d762563d7.png)
  4. If you don't want me to know the monitor details, go to the `monitor` table and mask the data with `[MASKED]`.
  5. Close the SQLite client and reopen it to double-check.
  6. Zip them and send to uptime@kuma.pet.
Author
Owner

@louislam commented on GitHub (Sep 20, 2021):

I have re-written some queries in 1.6.x. Hopefully this problem is solved.
