2.1.0 taking more CPU resources than 2.0 #4682

Open
opened 2026-02-28 04:11:34 -05:00 by deekerman · 19 comments
Owner

Originally created by @milandzuris on GitHub (Feb 8, 2026).

📑 I have found these related issues/pull requests

.

🛡️ Security Policy

- [x] I have read and agree to Uptime Kuma's [Security Policy](https://github.com/louislam/uptime-kuma/security/policy).

📝 Description

Hi, after updating to 2.1.0 my Uptime Kuma went from about 20% CPU/RAM usage to 100% within a few minutes. Before the update I did not have any issues with the same settings.

<img width="356" height="330" alt="Image" src="https://github.com/user-attachments/assets/7f6a6f3d-9df7-4c2a-b474-96d2cd3a11cc" />
<img width="353" height="327" alt="Image" src="https://github.com/user-attachments/assets/2c69f3ba-0eb8-4f14-9062-b50a890c3ddf" />

👟 Reproduction steps

reboot :D

👀 Expected behavior

.

😓 Actual Behavior

.

🐻 Uptime-Kuma Version

2.1.0

💻 Operating System and Arch

Debian

🌐 Browser

.

🖥️ Deployment Environment

.

📝 Relevant log output

journalctl -u uptime-kuma -n 200 --no-pager
-- Journal begins at Thu 2026-01-29 15:29:32 CET, ends at Sun 2026-02-08 14:19:56 CET. --
Feb 08 14:02:23 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:02:43 UptimeKuma npm[169]: 2026-02-08T14:02:43+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_CERT_COMMON_NAME_INVALID at https://dzuris.dev/
Feb 08 14:02:43 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:02:43 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:02:43 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:02:45 UptimeKuma npm[169]: 2026-02-08T14:02:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:02:54 UptimeKuma npm[169]: 2026-02-08T14:02:54+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0
Feb 08 14:02:58 UptimeKuma npm[169]: 2026-02-08T14:02:58+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:03:03 UptimeKuma npm[169]: 2026-02-08T14:03:03+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/
Feb 08 14:03:03 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:03:03 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:03:03 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:03:06 UptimeKuma npm[169]: 2026-02-08T14:03:06+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:03:15 UptimeKuma npm[169]: 2026-02-08T14:03:15+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:03:23 UptimeKuma npm[169]: 2026-02-08T14:03:23+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_CERT_COMMON_NAME_INVALID at https://dzuris.dev/
Feb 08 14:03:23 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:03:23 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:03:23 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:03:43 UptimeKuma npm[169]: 2026-02-08T14:03:43+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/
Feb 08 14:03:43 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:03:43 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:03:43 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:03:45 UptimeKuma npm[169]: 2026-02-08T14:03:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:03:54 UptimeKuma npm[169]: 2026-02-08T14:03:54+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0
Feb 08 14:03:58 UptimeKuma npm[169]: 2026-02-08T14:03:58+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:04:03 UptimeKuma npm[169]: 2026-02-08T14:04:03+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_CERT_COMMON_NAME_INVALID at https://dzuris.dev/
Feb 08 14:04:03 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:04:03 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:04:03 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:04:06 UptimeKuma npm[169]: 2026-02-08T14:04:06+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:04:15 UptimeKuma npm[169]: 2026-02-08T14:04:15+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:04:23 UptimeKuma npm[169]: 2026-02-08T14:04:23+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_CERT_COMMON_NAME_INVALID at https://dzuris.dev/
Feb 08 14:04:23 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:04:23 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:04:23 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:04:43 UptimeKuma npm[169]: 2026-02-08T14:04:43+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/
Feb 08 14:04:43 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:04:43 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:04:43 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:04:45 UptimeKuma npm[169]: 2026-02-08T14:04:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:04:54 UptimeKuma npm[169]: 2026-02-08T14:04:54+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0
Feb 08 14:04:58 UptimeKuma npm[169]: 2026-02-08T14:04:58+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:05:03 UptimeKuma npm[169]: 2026-02-08T14:05:03+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_CERT_COMMON_NAME_INVALID at https://dzuris.dev/
Feb 08 14:05:03 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:05:03 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:05:03 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:05:06 UptimeKuma npm[169]: 2026-02-08T14:05:06+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:05:15 UptimeKuma npm[169]: 2026-02-08T14:05:15+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:05:24 UptimeKuma npm[169]: 2026-02-08T14:05:24+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/
Feb 08 14:05:24 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:05:24 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:05:24 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:05:45 UptimeKuma npm[169]: 2026-02-08T14:05:45+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/
Feb 08 14:05:45 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:05:45 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:05:45 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:05:45 UptimeKuma npm[169]: 2026-02-08T14:05:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:05:54 UptimeKuma npm[169]: 2026-02-08T14:05:54+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0
Feb 08 14:05:58 UptimeKuma npm[169]: 2026-02-08T14:05:58+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:06:07 UptimeKuma npm[169]: 2026-02-08T14:06:07+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:06:07 UptimeKuma npm[169]: 2026-02-08T14:06:07+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/
Feb 08 14:06:07 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:06:07 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:06:07 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:06:15 UptimeKuma npm[169]: 2026-02-08T14:06:15+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:06:28 UptimeKuma npm[169]: 2026-02-08T14:06:28+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/
Feb 08 14:06:28 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:06:28 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:06:28 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:06:45 UptimeKuma npm[169]: 2026-02-08T14:06:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:06:50 UptimeKuma npm[169]: 2026-02-08T14:06:50+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_CERT_COMMON_NAME_INVALID at https://dzuris.dev/
Feb 08 14:06:50 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:06:50 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:06:50 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:06:55 UptimeKuma npm[169]: 2026-02-08T14:06:55+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0
Feb 08 14:06:58 UptimeKuma npm[169]: 2026-02-08T14:06:58+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:07:07 UptimeKuma npm[169]: 2026-02-08T14:07:07+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:07:19 UptimeKuma npm[169]: 2026-02-08T14:07:19+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:07:35 UptimeKuma npm[169]: 2026-02-08T14:07:35+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/
Feb 08 14:07:35 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:07:35 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:07:35 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:08:35 UptimeKuma npm[169]: 2026-02-08T14:08:29+01:00 [MONITOR] WARN: Monitor #31 'Music Assistant': Pending: Connection failed | Max retries: 10 | Retry: 1 | Retry Interval: 20 seconds | Type: port
Feb 08 14:11:44 UptimeKuma npm[169]: 2026-02-08T14:11:36+01:00 [MONITOR] WARN: Monitor #6 'MQTT Broker': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: mqtt
Feb 08 14:14:01 UptimeKuma npm[169]: Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
Feb 08 14:14:01 UptimeKuma npm[169]:     at Client_SQLite3.acquireConnection (/opt/uptime-kuma/node_modules/knex/lib/client.js:332:26)
Feb 08 14:14:01 UptimeKuma npm[169]:     at runNextTicks (node:internal/process/task_queues:65:5)
Feb 08 14:14:01 UptimeKuma npm[169]:     at listOnTimeout (node:internal/timers:549:9)
Feb 08 14:14:01 UptimeKuma npm[169]:     at process.processTimers (node:internal/timers:523:7)
Feb 08 14:14:01 UptimeKuma npm[169]:     at async Runner.ensureConnection (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:305:28)
Feb 08 14:14:01 UptimeKuma npm[169]:     at async Runner.run (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:30:19)
Feb 08 14:14:01 UptimeKuma npm[169]:     at async RedBeanNode.normalizeRaw (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:572:22)
Feb 08 14:14:01 UptimeKuma npm[169]:     at async RedBeanNode.getRow (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:558:22)
Feb 08 14:14:01 UptimeKuma npm[169]:     at async RedBeanNode.getCell (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:593:19)
Feb 08 14:14:01 UptimeKuma npm[169]:     at async Settings.get (/opt/uptime-kuma/server/settings.js:49:21) {
Feb 08 14:14:01 UptimeKuma npm[169]:   sql: 'SELECT `value` FROM setting WHERE `key` = ?  limit ?',
Feb 08 14:14:01 UptimeKuma npm[169]:   bindings: [ 'disableAuth', 1 ]
Feb 08 14:14:01 UptimeKuma npm[169]: }
Feb 08 14:14:01 UptimeKuma npm[169]:     at process.unexpectedErrorHandler (/opt/uptime-kuma/server/server.js:1982:13)
Feb 08 14:14:01 UptimeKuma npm[169]:     at process.emit (node:events:519:28)
Feb 08 14:14:01 UptimeKuma npm[169]:     at emitUnhandledRejection (node:internal/process/promises:252:13)
Feb 08 14:14:01 UptimeKuma npm[169]:     at throwUnhandledRejectionsMode (node:internal/process/promises:388:19)
Feb 08 14:14:01 UptimeKuma npm[169]:     at processPromiseRejections (node:internal/process/promises:475:17)
Feb 08 14:14:01 UptimeKuma npm[169]:     at processTicksAndRejections (node:internal/process/task_queues:106:32)
Feb 08 14:14:01 UptimeKuma npm[169]:     at runNextTicks (node:internal/process/task_queues:69:3)
Feb 08 14:14:01 UptimeKuma npm[169]:     at listOnTimeout (node:internal/timers:549:9)
Feb 08 14:14:01 UptimeKuma npm[169]:     at process.processTimers (node:internal/timers:523:7)
Feb 08 14:14:26 UptimeKuma npm[169]: If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
Feb 08 14:15:12 UptimeKuma npm[169]: 2026-02-08T14:15:03+01:00 [MONITOR] WARN: Monitor #8 'Driveway Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
Feb 08 14:15:59 UptimeKuma npm[169]: 2026-02-08T14:15:43+01:00 [MONITOR] WARN: Monitor #9 'Front Yard Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 60 seconds | Type: ping
Feb 08 14:17:01 UptimeKuma npm[169]: 2026-02-08T14:16:42+01:00 [MONITOR] WARN: Monitor #2 'Pi-Hole': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 20 seconds | Type: dns
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #5 'Dzuriš Home': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 10 | Retry: 1 | Retry Interval: 20 seconds | Type: port
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #10 'Side Yard Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #11 'Backyard Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 60 seconds | Type: ping
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #14 'Switch 1': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #17 'Node-RED': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: port
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #25 'MySpeed': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: port
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #22 'Doorbell Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
Feb 08 14:17:06 UptimeKuma npm[169]: Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
Feb 08 14:17:06 UptimeKuma npm[169]:     at Client_SQLite3.acquireConnection (/opt/uptime-kuma/node_modules/knex/lib/client.js:332:26)
Feb 08 14:17:06 UptimeKuma npm[169]:     at runNextTicks (node:internal/process/task_queues:65:5)
Feb 08 14:17:06 UptimeKuma npm[169]:     at listOnTimeout (node:internal/timers:549:9)
Feb 08 14:17:06 UptimeKuma npm[169]:     at process.processTimers (node:internal/timers:523:7)
Feb 08 14:17:06 UptimeKuma npm[169]:     at async Runner.ensureConnection (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:305:28)
Feb 08 14:17:06 UptimeKuma npm[169]:     at async Runner.run (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:30:19)
Feb 08 14:17:06 UptimeKuma npm[169]:     at async RedBeanNode.storeCore (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:141:17)
Feb 08 14:17:06 UptimeKuma npm[169]:     at async RedBeanNode.store (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:110:20)
Feb 08 14:17:06 UptimeKuma npm[169]:     at async UptimeCalculator.update (/opt/uptime-kuma/server/uptime-calculator.js:315:9)
Feb 08 14:17:06 UptimeKuma npm[169]:     at async beat (/opt/uptime-kuma/server/model/monitor.js:1093:32) {
Feb 08 14:17:06 UptimeKuma npm[169]:   sql: undefined,
Feb 08 14:17:06 UptimeKuma npm[169]:   bindings: undefined
Feb 08 14:17:06 UptimeKuma npm[169]: }
Feb 08 14:17:06 UptimeKuma npm[169]:     at Timeout.safeBeat [as _onTimeout] (/opt/uptime-kuma/server/model/monitor.js:1134:25)
Feb 08 14:17:06 UptimeKuma npm[169]:     at runNextTicks (node:internal/process/task_queues:65:5)
Feb 08 14:17:06 UptimeKuma npm[169]:     at listOnTimeout (node:internal/timers:549:9)
Feb 08 14:17:06 UptimeKuma npm[169]:     at process.processTimers (node:internal/timers:523:7)
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] ERROR: Please report to https://github.com/louislam/uptime-kuma/issues
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] INFO: Try to restart the monitor
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #26 'go2rtc': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: port
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #28 'Matterbridge': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: port
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #32 'AP 1': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #33 'AP 2': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] WARN: Monitor #34 'Dzuriš Network': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] WARN: Monitor #36 'Proton VPN USA 🇺🇸': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] WARN: Monitor #38 'Dzuriš Network Guest VPN': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping
Feb 08 14:17:07 UptimeKuma npm[169]: Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
Feb 08 14:17:07 UptimeKuma npm[169]:     at Client_SQLite3.acquireConnection (/opt/uptime-kuma/node_modules/knex/lib/client.js:332:26)
Feb 08 14:17:07 UptimeKuma npm[169]:     at runNextTicks (node:internal/process/task_queues:65:5)
Feb 08 14:17:07 UptimeKuma npm[169]:     at listOnTimeout (node:internal/timers:549:9)
Feb 08 14:17:07 UptimeKuma npm[169]:     at process.processTimers (node:internal/timers:523:7)
Feb 08 14:17:07 UptimeKuma npm[169]:     at async Runner.ensureConnection (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:305:28)
Feb 08 14:17:07 UptimeKuma npm[169]:     at async Runner.run (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:30:19)
Feb 08 14:17:07 UptimeKuma npm[169]:     at async RedBeanNode.storeCore (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:141:17)
Feb 08 14:17:07 UptimeKuma npm[169]:     at async RedBeanNode.store (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:110:20)
Feb 08 14:17:07 UptimeKuma npm[169]:     at async UptimeCalculator.update (/opt/uptime-kuma/server/uptime-calculator.js:315:9)
Feb 08 14:17:07 UptimeKuma npm[169]:     at async beat (/opt/uptime-kuma/server/model/monitor.js:1093:32) {
Feb 08 14:17:07 UptimeKuma npm[169]:   sql: undefined,
Feb 08 14:17:07 UptimeKuma npm[169]:   bindings: undefined
Feb 08 14:17:07 UptimeKuma npm[169]: }
Feb 08 14:17:07 UptimeKuma npm[169]:     at Timeout.safeBeat [as _onTimeout] (/opt/uptime-kuma/server/model/monitor.js:1134:25)
Feb 08 14:17:07 UptimeKuma npm[169]:     at runNextTicks (node:internal/process/task_queues:65:5)
Feb 08 14:17:07 UptimeKuma npm[169]:     at listOnTimeout (node:internal/timers:549:9)
Feb 08 14:17:07 UptimeKuma npm[169]:     at process.processTimers (node:internal/timers:523:7)
Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] ERROR: Please report to https://github.com/louislam/uptime-kuma/issues
Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] INFO: Try to restart the monitor
Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] WARN: Monitor #20 'Cloudflare DNS': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: dns
Feb 08 14:17:42 UptimeKuma npm[169]: 2026-02-08T14:17:42+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:43 UptimeKuma npm[169]: 2026-02-08T14:17:43+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:44 UptimeKuma npm[169]: 2026-02-08T14:17:44+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:45 UptimeKuma npm[169]: 2026-02-08T14:17:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:45 UptimeKuma npm[169]: 2026-02-08T14:17:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:46 UptimeKuma npm[169]: 2026-02-08T14:17:46+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:46 UptimeKuma npm[169]: 2026-02-08T14:17:46+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/
Feb 08 14:17:46 UptimeKuma npm[169]: =========================== logs ===========================
Feb 08 14:17:46 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle"
Feb 08 14:17:46 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0
Feb 08 14:17:47 UptimeKuma npm[169]: 2026-02-08T14:17:46+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:47 UptimeKuma npm[169]: 2026-02-08T14:17:47+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:48 UptimeKuma npm[169]: 2026-02-08T14:17:48+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:48 UptimeKuma npm[169]: 2026-02-08T14:17:48+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:49 UptimeKuma npm[169]: 2026-02-08T14:17:49+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:17:50 UptimeKuma npm[169]: 2026-02-08T14:17:49+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:18:10 UptimeKuma npm[169]: 2026-02-08T14:18:09+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:18:43 UptimeKuma npm[169]: 2026-02-08T14:18:43+01:00 [RATE-LIMIT] INFO: remaining requests: 60
Feb 08 14:19:00 UptimeKuma npm[169]: 2026-02-08T14:19:00+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:19:07 UptimeKuma npm[169]: 2026-02-08T14:19:07+01:00 [MONITOR] ERROR: Cannot send notification to Telegram
Feb 08 14:19:08 UptimeKuma npm[169]: 2026-02-08T14:19:07+01:00 [MONITOR] ERROR: Error: AggregateError (code=ETIMEDOUT) - caused by: connect ETIMEDOUT 149.154.166.110:443 (code=ETIMEDOUT); connect ENETUNREACH 2001:67c:4e8:f004::9:443 - Local (:::0) (code=ENETUNREACH)
Feb 08 14:19:08 UptimeKuma npm[169]:     at Telegram.throwGeneralAxiosError (/opt/uptime-kuma/server/notification-providers/notification-provider.js:165:15)
Feb 08 14:19:08 UptimeKuma npm[169]:     at Telegram.send (/opt/uptime-kuma/server/notification-providers/telegram.js:107:18)
Feb 08 14:19:08 UptimeKuma npm[169]:     at processTicksAndRejections (node:internal/process/task_queues:105:5)
Feb 08 14:19:08 UptimeKuma npm[169]:     at runNextTicks (node:internal/process/task_queues:69:3)
Feb 08 14:19:08 UptimeKuma npm[169]:     at process.processTimers (node:internal/timers:520:9)
Feb 08 14:19:08 UptimeKuma npm[169]:     at async Monitor.sendNotification (/opt/uptime-kuma/server/model/monitor.js:1543:21)
Feb 08 14:19:08 UptimeKuma npm[169]:     at async beat (/opt/uptime-kuma/server/model/monitor.js:1009:21)
Feb 08 14:19:08 UptimeKuma npm[169]:     at async Timeout.safeBeat [as _onTimeout] (/opt/uptime-kuma/server/model/monitor.js:1132:17)
Feb 08 14:19:09 UptimeKuma npm[169]: 2026-02-08T14:19:09+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0
Feb 08 14:19:16 UptimeKuma npm[169]: 2026-02-08T14:19:16+01:00 [MONITOR] WARN: Monitor #37 'Dzuriš Network VPN': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 60 seconds | Type: ping | Down Count: 0 | Resend Interval: 0
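
The journal output above is dominated by repeated `KnexTimeoutError` ("Timeout acquiring a connection. The pool is probably full.") entries. As a quick triage aid (not part of the original report), the log can be tallied to see how often the pool timeout appears and which monitors it is attributed to; `count_pool_timeouts` is a hypothetical helper name:

```python
# Hypothetical helper for triaging the journal excerpt above: count lines
# that mention the Knex connection-pool timeout, keyed by monitor number.
import re
from collections import Counter

def count_pool_timeouts(journal_text):
    """Tally 'Timeout acquiring a connection' lines per 'Monitor #N' tag."""
    counts = Counter()
    for line in journal_text.splitlines():
        if "Timeout acquiring a connection" not in line:
            continue
        m = re.search(r"Monitor #(\d+)", line)
        # Trace/stack lines carry no monitor number; bucket them separately.
        counts[m.group(1) if m else "unattributed"] += 1
    return counts

# Example with two abbreviated lines from the log above:
sample = (
    "Feb 08 14:17:06 UptimeKuma npm[169]: [MONITOR] WARN: Monitor #18 "
    "'Zigbee2MQTT': Failing: Knex: Timeout acquiring a connection. "
    "The pool is probably full.\n"
    "Feb 08 14:17:06 UptimeKuma npm[169]: Trace: KnexTimeoutError: Knex: "
    "Timeout acquiring a connection. The pool is probably full."
)
print(count_pool_timeouts(sample))  # Counter({'18': 1, 'unattributed': 1})
```

Run against the full journal, this makes it easy to see whether the timeouts cluster on a few monitors or hit the whole set at once, which matters when comparing 2.0 and 2.1.0 behavior.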
=========================== logs =========================== Feb 08 14:04:03 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle" Feb 08 14:04:03 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:04:06 UptimeKuma npm[169]: 2026-02-08T14:04:06+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:04:15 UptimeKuma npm[169]: 2026-02-08T14:04:15+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:04:23 UptimeKuma npm[169]: 2026-02-08T14:04:23+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_CERT_COMMON_NAME_INVALID at https://dzuris.dev/ Feb 08 14:04:23 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 14:04:23 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle" Feb 08 14:04:23 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:04:43 UptimeKuma npm[169]: 2026-02-08T14:04:43+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/ Feb 08 14:04:43 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 14:04:43 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle" Feb 08 14:04:43 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:04:45 UptimeKuma npm[169]: 2026-02-08T14:04:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:04:54 UptimeKuma npm[169]: 2026-02-08T14:04:54+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 
48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0 Feb 08 14:04:58 UptimeKuma npm[169]: 2026-02-08T14:04:58+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:05:03 UptimeKuma npm[169]: 2026-02-08T14:05:03+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_CERT_COMMON_NAME_INVALID at https://dzuris.dev/ Feb 08 14:05:03 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 14:05:03 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle" Feb 08 14:05:03 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:05:06 UptimeKuma npm[169]: 2026-02-08T14:05:06+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:05:15 UptimeKuma npm[169]: 2026-02-08T14:05:15+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:05:24 UptimeKuma npm[169]: 2026-02-08T14:05:24+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/ Feb 08 14:05:24 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 14:05:24 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle" Feb 08 14:05:24 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:05:45 UptimeKuma npm[169]: 2026-02-08T14:05:45+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/ Feb 08 14:05:45 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 
14:05:45 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle" Feb 08 14:05:45 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:05:45 UptimeKuma npm[169]: 2026-02-08T14:05:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:05:54 UptimeKuma npm[169]: 2026-02-08T14:05:54+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0 Feb 08 14:05:58 UptimeKuma npm[169]: 2026-02-08T14:05:58+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:06:07 UptimeKuma npm[169]: 2026-02-08T14:06:07+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:06:07 UptimeKuma npm[169]: 2026-02-08T14:06:07+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/ Feb 08 14:06:07 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 14:06:07 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle" Feb 08 14:06:07 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:06:15 UptimeKuma npm[169]: 2026-02-08T14:06:15+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:06:28 UptimeKuma npm[169]: 2026-02-08T14:06:28+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/ Feb 08 14:06:28 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 14:06:28 UptimeKuma npm[169]: navigating to 
"https://dzuris.dev/", waiting until "networkidle" Feb 08 14:06:28 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:06:45 UptimeKuma npm[169]: 2026-02-08T14:06:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:06:50 UptimeKuma npm[169]: 2026-02-08T14:06:50+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_CERT_COMMON_NAME_INVALID at https://dzuris.dev/ Feb 08 14:06:50 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 14:06:50 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle" Feb 08 14:06:50 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:06:55 UptimeKuma npm[169]: 2026-02-08T14:06:55+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0 Feb 08 14:06:58 UptimeKuma npm[169]: 2026-02-08T14:06:58+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:07:07 UptimeKuma npm[169]: 2026-02-08T14:07:07+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:07:19 UptimeKuma npm[169]: 2026-02-08T14:07:19+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:07:35 UptimeKuma npm[169]: 2026-02-08T14:07:35+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/ Feb 08 14:07:35 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 14:07:35 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until 
"networkidle" Feb 08 14:07:35 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:08:35 UptimeKuma npm[169]: 2026-02-08T14:08:29+01:00 [MONITOR] WARN: Monitor #31 'Music Assistant': Pending: Connection failed | Max retries: 10 | Retry: 1 | Retry Interval: 20 seconds | Type: port Feb 08 14:11:44 UptimeKuma npm[169]: 2026-02-08T14:11:36+01:00 [MONITOR] WARN: Monitor #6 'MQTT Broker': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: mqtt Feb 08 14:14:01 UptimeKuma npm[169]: Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? Feb 08 14:14:01 UptimeKuma npm[169]: at Client_SQLite3.acquireConnection (/opt/uptime-kuma/node_modules/knex/lib/client.js:332:26) Feb 08 14:14:01 UptimeKuma npm[169]: at runNextTicks (node:internal/process/task_queues:65:5) Feb 08 14:14:01 UptimeKuma npm[169]: at listOnTimeout (node:internal/timers:549:9) Feb 08 14:14:01 UptimeKuma npm[169]: at process.processTimers (node:internal/timers:523:7) Feb 08 14:14:01 UptimeKuma npm[169]: at async Runner.ensureConnection (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:305:28) Feb 08 14:14:01 UptimeKuma npm[169]: at async Runner.run (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:30:19) Feb 08 14:14:01 UptimeKuma npm[169]: at async RedBeanNode.normalizeRaw (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:572:22) Feb 08 14:14:01 UptimeKuma npm[169]: at async RedBeanNode.getRow (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:558:22) Feb 08 14:14:01 UptimeKuma npm[169]: at async RedBeanNode.getCell (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:593:19) Feb 08 14:14:01 UptimeKuma npm[169]: at async 
Settings.get (/opt/uptime-kuma/server/settings.js:49:21) { Feb 08 14:14:01 UptimeKuma npm[169]: sql: 'SELECT `value` FROM setting WHERE `key` = ? limit ?', Feb 08 14:14:01 UptimeKuma npm[169]: bindings: [ 'disableAuth', 1 ] Feb 08 14:14:01 UptimeKuma npm[169]: } Feb 08 14:14:01 UptimeKuma npm[169]: at process.unexpectedErrorHandler (/opt/uptime-kuma/server/server.js:1982:13) Feb 08 14:14:01 UptimeKuma npm[169]: at process.emit (node:events:519:28) Feb 08 14:14:01 UptimeKuma npm[169]: at emitUnhandledRejection (node:internal/process/promises:252:13) Feb 08 14:14:01 UptimeKuma npm[169]: at throwUnhandledRejectionsMode (node:internal/process/promises:388:19) Feb 08 14:14:01 UptimeKuma npm[169]: at processPromiseRejections (node:internal/process/promises:475:17) Feb 08 14:14:01 UptimeKuma npm[169]: at processTicksAndRejections (node:internal/process/task_queues:106:32) Feb 08 14:14:01 UptimeKuma npm[169]: at runNextTicks (node:internal/process/task_queues:69:3) Feb 08 14:14:01 UptimeKuma npm[169]: at listOnTimeout (node:internal/timers:549:9) Feb 08 14:14:01 UptimeKuma npm[169]: at process.processTimers (node:internal/timers:523:7) Feb 08 14:14:26 UptimeKuma npm[169]: If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues Feb 08 14:15:12 UptimeKuma npm[169]: 2026-02-08T14:15:03+01:00 [MONITOR] WARN: Monitor #8 'Driveway Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping Feb 08 14:15:59 UptimeKuma npm[169]: 2026-02-08T14:15:43+01:00 [MONITOR] WARN: Monitor #9 'Front Yard Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? 
| Max retries: 1 | Retry: 1 | Retry Interval: 60 seconds | Type: ping Feb 08 14:17:01 UptimeKuma npm[169]: 2026-02-08T14:16:42+01:00 [MONITOR] WARN: Monitor #2 'Pi-Hole': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 3 | Retry: 1 | Retry Interval: 20 seconds | Type: dns Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #5 'Dzuriš Home': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 10 | Retry: 1 | Retry Interval: 20 seconds | Type: port Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #10 'Side Yard Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #11 'Backyard Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 60 seconds | Type: ping Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #14 'Switch 1': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #17 'Node-RED': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: port Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #25 'MySpeed': Pending: Knex: Timeout acquiring a connection. The pool is probably full. 
Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: port Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #22 'Doorbell Camera': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping Feb 08 14:17:06 UptimeKuma npm[169]: Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? Feb 08 14:17:06 UptimeKuma npm[169]: at Client_SQLite3.acquireConnection (/opt/uptime-kuma/node_modules/knex/lib/client.js:332:26) Feb 08 14:17:06 UptimeKuma npm[169]: at runNextTicks (node:internal/process/task_queues:65:5) Feb 08 14:17:06 UptimeKuma npm[169]: at listOnTimeout (node:internal/timers:549:9) Feb 08 14:17:06 UptimeKuma npm[169]: at process.processTimers (node:internal/timers:523:7) Feb 08 14:17:06 UptimeKuma npm[169]: at async Runner.ensureConnection (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:305:28) Feb 08 14:17:06 UptimeKuma npm[169]: at async Runner.run (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:30:19) Feb 08 14:17:06 UptimeKuma npm[169]: at async RedBeanNode.storeCore (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:141:17) Feb 08 14:17:06 UptimeKuma npm[169]: at async RedBeanNode.store (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:110:20) Feb 08 14:17:06 UptimeKuma npm[169]: at async UptimeCalculator.update (/opt/uptime-kuma/server/uptime-calculator.js:315:9) Feb 08 14:17:06 UptimeKuma npm[169]: at async beat 
(/opt/uptime-kuma/server/model/monitor.js:1093:32) { Feb 08 14:17:06 UptimeKuma npm[169]: sql: undefined, Feb 08 14:17:06 UptimeKuma npm[169]: bindings: undefined Feb 08 14:17:06 UptimeKuma npm[169]: } Feb 08 14:17:06 UptimeKuma npm[169]: at Timeout.safeBeat [as _onTimeout] (/opt/uptime-kuma/server/model/monitor.js:1134:25) Feb 08 14:17:06 UptimeKuma npm[169]: at runNextTicks (node:internal/process/task_queues:65:5) Feb 08 14:17:06 UptimeKuma npm[169]: at listOnTimeout (node:internal/timers:549:9) Feb 08 14:17:06 UptimeKuma npm[169]: at process.processTimers (node:internal/timers:523:7) Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] ERROR: Please report to https://github.com/louislam/uptime-kuma/issues Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] INFO: Try to restart the monitor Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #26 'go2rtc': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: port Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #35 'VPN.Dzuris.Dev': Failing: timeout of 48000ms exceeded | Interval: 60 seconds | Type: http | Down Count: 0 | Resend Interval: 0 Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #28 'Matterbridge': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? 
| Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: port Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #32 'AP 1': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping Feb 08 14:17:06 UptimeKuma npm[169]: 2026-02-08T14:17:06+01:00 [MONITOR] WARN: Monitor #33 'AP 2': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] WARN: Monitor #34 'Dzuriš Network': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] WARN: Monitor #36 'Proton VPN USA 🇺🇸': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] WARN: Monitor #38 'Dzuriš Network Guest VPN': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: ping Feb 08 14:17:07 UptimeKuma npm[169]: Trace: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? 
Feb 08 14:17:07 UptimeKuma npm[169]: at Client_SQLite3.acquireConnection (/opt/uptime-kuma/node_modules/knex/lib/client.js:332:26) Feb 08 14:17:07 UptimeKuma npm[169]: at runNextTicks (node:internal/process/task_queues:65:5) Feb 08 14:17:07 UptimeKuma npm[169]: at listOnTimeout (node:internal/timers:549:9) Feb 08 14:17:07 UptimeKuma npm[169]: at process.processTimers (node:internal/timers:523:7) Feb 08 14:17:07 UptimeKuma npm[169]: at async Runner.ensureConnection (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:305:28) Feb 08 14:17:07 UptimeKuma npm[169]: at async Runner.run (/opt/uptime-kuma/node_modules/knex/lib/execution/runner.js:30:19) Feb 08 14:17:07 UptimeKuma npm[169]: at async RedBeanNode.storeCore (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:141:17) Feb 08 14:17:07 UptimeKuma npm[169]: at async RedBeanNode.store (/opt/uptime-kuma/node_modules/redbean-node/dist/redbean-node.js:110:20) Feb 08 14:17:07 UptimeKuma npm[169]: at async UptimeCalculator.update (/opt/uptime-kuma/server/uptime-calculator.js:315:9) Feb 08 14:17:07 UptimeKuma npm[169]: at async beat (/opt/uptime-kuma/server/model/monitor.js:1093:32) { Feb 08 14:17:07 UptimeKuma npm[169]: sql: undefined, Feb 08 14:17:07 UptimeKuma npm[169]: bindings: undefined Feb 08 14:17:07 UptimeKuma npm[169]: } Feb 08 14:17:07 UptimeKuma npm[169]: at Timeout.safeBeat [as _onTimeout] (/opt/uptime-kuma/server/model/monitor.js:1134:25) Feb 08 14:17:07 UptimeKuma npm[169]: at runNextTicks (node:internal/process/task_queues:65:5) Feb 08 14:17:07 UptimeKuma npm[169]: at listOnTimeout (node:internal/timers:549:9) Feb 08 14:17:07 UptimeKuma npm[169]: at process.processTimers (node:internal/timers:523:7) Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] ERROR: Please report to https://github.com/louislam/uptime-kuma/issues Feb 08 14:17:07 UptimeKuma npm[169]: 2026-02-08T14:17:07+01:00 [MONITOR] INFO: Try to restart the monitor Feb 08 14:17:07 UptimeKuma npm[169]: 
2026-02-08T14:17:07+01:00 [MONITOR] WARN: Monitor #20 'Cloudflare DNS': Pending: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Max retries: 1 | Retry: 1 | Retry Interval: 20 seconds | Type: dns Feb 08 14:17:42 UptimeKuma npm[169]: 2026-02-08T14:17:42+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:43 UptimeKuma npm[169]: 2026-02-08T14:17:43+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:44 UptimeKuma npm[169]: 2026-02-08T14:17:44+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:45 UptimeKuma npm[169]: 2026-02-08T14:17:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:45 UptimeKuma npm[169]: 2026-02-08T14:17:45+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:46 UptimeKuma npm[169]: 2026-02-08T14:17:46+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:46 UptimeKuma npm[169]: 2026-02-08T14:17:46+01:00 [MONITOR] WARN: Monitor #30 'Dzuriš.Dev': Failing: page.goto: net::ERR_SSL_PROTOCOL_ERROR at https://dzuris.dev/ Feb 08 14:17:46 UptimeKuma npm[169]: =========================== logs =========================== Feb 08 14:17:46 UptimeKuma npm[169]: navigating to "https://dzuris.dev/", waiting until "networkidle" Feb 08 14:17:46 UptimeKuma npm[169]: ============================================================ | Interval: 20 seconds | Type: real-browser | Down Count: 0 | Resend Interval: 0 Feb 08 14:17:47 UptimeKuma npm[169]: 2026-02-08T14:17:46+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:47 UptimeKuma npm[169]: 2026-02-08T14:17:47+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:48 UptimeKuma npm[169]: 2026-02-08T14:17:48+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:48 UptimeKuma npm[169]: 2026-02-08T14:17:48+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:49 UptimeKuma npm[169]: 2026-02-08T14:17:49+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:17:50 UptimeKuma 
npm[169]: 2026-02-08T14:17:49+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:18:10 UptimeKuma npm[169]: 2026-02-08T14:18:09+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:18:43 UptimeKuma npm[169]: 2026-02-08T14:18:43+01:00 [RATE-LIMIT] INFO: remaining requests: 60 Feb 08 14:19:00 UptimeKuma npm[169]: 2026-02-08T14:19:00+01:00 [MONITOR] WARN: Monitor #18 'Zigbee2MQTT': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 0 Feb 08 14:19:07 UptimeKuma npm[169]: 2026-02-08T14:19:07+01:00 [MONITOR] ERROR: Cannot send notification to Telegram Feb 08 14:19:08 UptimeKuma npm[169]: 2026-02-08T14:19:07+01:00 [MONITOR] ERROR: Error: AggregateError (code=ETIMEDOUT) - caused by: connect ETIMEDOUT 149.154.166.110:443 (code=ETIMEDOUT); connect ENETUNREACH 2001:67c:4e8:f004::9:443 - Local (:::0) (code=ENETUNREACH) Feb 08 14:19:08 UptimeKuma npm[169]: at Telegram.throwGeneralAxiosError (/opt/uptime-kuma/server/notification-providers/notification-provider.js:165:15) Feb 08 14:19:08 UptimeKuma npm[169]: at Telegram.send (/opt/uptime-kuma/server/notification-providers/telegram.js:107:18) Feb 08 14:19:08 UptimeKuma npm[169]: at processTicksAndRejections (node:internal/process/task_queues:105:5) Feb 08 14:19:08 UptimeKuma npm[169]: at runNextTicks (node:internal/process/task_queues:69:3) Feb 08 14:19:08 UptimeKuma npm[169]: at process.processTimers (node:internal/timers:520:9) Feb 08 14:19:08 UptimeKuma npm[169]: at async Monitor.sendNotification (/opt/uptime-kuma/server/model/monitor.js:1543:21) Feb 08 14:19:08 UptimeKuma npm[169]: at async beat (/opt/uptime-kuma/server/model/monitor.js:1009:21) Feb 08 14:19:08 UptimeKuma npm[169]: at async Timeout.safeBeat [as _onTimeout] (/opt/uptime-kuma/server/model/monitor.js:1132:17) Feb 08 14:19:09 UptimeKuma npm[169]: 2026-02-08T14:19:09+01:00 [MONITOR] WARN: Monitor #27 'Frigate': Failing: Connection failed | Interval: 60 seconds | Type: port | Down Count: 0 | Resend Interval: 
0 Feb 08 14:19:16 UptimeKuma npm[169]: 2026-02-08T14:19:16+01:00 [MONITOR] WARN: Monitor #37 'Dzuriš Network VPN': Failing: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call? | Interval: 60 seconds | Type: ping | Down Count: 0 | Resend Interval: 0 ```
Author
Owner

@CommanderStorm commented on GitHub (Feb 8, 2026):

Can you build a reproduction showing how to trigger this?

The logs you have shared don't give much context.


@louislam commented on GitHub (Feb 8, 2026):

In case you have no idea how to reproduce:

We had 4 beta versions between 2.0 and 2.1. If possible, please downgrade to the previous version 2.1.0-beta.3 and see if the problem still persists; if not, try another version. That would help us identify which version introduced this performance issue.

https://github.com/louislam/uptime-kuma/releases/tag/2.1.0-beta.0
https://github.com/louislam/uptime-kuma/releases/tag/2.1.0-beta.1
https://github.com/louislam/uptime-kuma/releases/tag/2.1.0-beta.2
https://github.com/louislam/uptime-kuma/releases/tag/2.1.0-beta.3

Personally, my instance seems fine.

Image
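For a git-based bare-metal install, stepping through the tags could look roughly like the sketch below. It is a dry run that only prints the commands; the install path `/opt/uptime-kuma`, the service name `uptime-kuma`, and the dependency-reinstall step are assumptions about a typical setup, not documented instructions — check each release's notes for the exact update steps.

```shell
#!/bin/sh
# Dry-run sketch: prints each step instead of executing it, so nothing
# is touched until you run the commands yourself. APP_DIR, the service
# name and the npm step are assumptions about a typical git install.
APP_DIR="${APP_DIR:-/opt/uptime-kuma}"
TAG="${TAG:-2.1.0-beta.3}"
PLAN=""

plan() {
    # Record and echo one step of the plan.
    PLAN="$PLAN$*
"
    echo "+ $*"
}

plan sudo systemctl stop uptime-kuma
plan git -C "$APP_DIR" fetch --all --tags
plan git -C "$APP_DIR" checkout "$TAG" --force
plan "cd $APP_DIR && npm ci --omit=dev"   # reinstall deps for the checked-out tag
plan sudo systemctl start uptime-kuma
```

Repeat with the next tag once the instance has run long enough to tell whether CPU climbs again.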

@milandzuris commented on GitHub (Feb 8, 2026):

> In case you have no idea how to reproduce:
>
> We had 4 beta versions between 2.0 and 2.1. If possible, please downgrade to the previous version 2.1.0-beta.3 and see if the problem still persists; if not, try another version. That would help us identify which version introduced this performance issue.
>
> https://github.com/louislam/uptime-kuma/releases/tag/2.1.0-beta.0 https://github.com/louislam/uptime-kuma/releases/tag/2.1.0-beta.1 https://github.com/louislam/uptime-kuma/releases/tag/2.1.0-beta.2 https://github.com/louislam/uptime-kuma/releases/tag/2.1.0-beta.3

Before version 2.1.0 I was using version 2.0.2. How can I downgrade on Linux?


@CommanderStorm commented on GitHub (Feb 8, 2026):

We don't support downgrading per se. It might be possible (our migrations all have up + down), but it is not tested.

So the simplest way is to start with a backup and upgrade one beta after another.
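Since the data on a bare-metal install lives in a single SQLite file, a manual backup before switching versions can be sketched like this. Again a dry run that only prints the commands; the `/opt/uptime-kuma/data/kuma.db` path and the presence of the `sqlite3` CLI are assumptions — adjust to wherever your data directory actually is.

```shell
#!/bin/sh
# Dry-run sketch: echoes the backup steps rather than executing them.
# DB and DEST are example paths, not documented locations.
DB="${DB:-/opt/uptime-kuma/data/kuma.db}"
DEST="${DEST:-$HOME/kuma-backup-$(date +%Y%m%d).db}"
PLAN=""

plan() {
    # Record and echo one step of the plan.
    PLAN="$PLAN$*
"
    echo "+ $*"
}

# Stop the server first so nothing writes to the database mid-copy.
plan sudo systemctl stop uptime-kuma
# sqlite3's .backup command produces a consistent copy of the database.
plan sqlite3 "$DB" ".backup '$DEST'"
plan sudo systemctl start uptime-kuma
```

Restoring is the reverse: stop the service, copy the file back into place, start it again.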


@milandzuris commented on GitHub (Feb 8, 2026):

So strange: now I don't have a problem with CPU, but with RAM and swap.

Image

@milandzuris commented on GitHub (Feb 8, 2026):

> So strange: now I don't have a problem with CPU, but with RAM and swap.
>
> Image

Very unstable

Image

@milandzuris commented on GitHub (Feb 8, 2026):

> We don't support downgrading per se. It might be possible (our migrations all have up + down), but it is not tested.
>
> So the simplest way is to start with a backup and upgrade one beta after another.

I don't see a backup button anywhere in the settings.


@CommanderStorm commented on GitHub (Feb 8, 2026):

We don't currently have a backup button. That feature was removed in v2.0 since it was very broken and unmaintained.


@smhawkes commented on GitHub (Feb 9, 2026):

I have the same issue. I went to beta.0 and it seemed ok; now I'm running beta.1 and will let it run tonight and check it in the morning.


@hmnd commented on GitHub (Feb 13, 2026):

Also seeing this.

Image

When editing a status page I also see this, paired with very slow client-side performance, which is possibly related?
Screen recording


@CommanderStorm commented on GitHub (Feb 14, 2026):

Since everyone in this thread is sharing the same design of screenshots, might this be related to that?

If not, what is going on inside of the container or on the system itself?

Without being able to reproduce it, there is nothing that I can do.


@hmnd commented on GitHub (Feb 14, 2026):

Mine aren't from the same hosting provider as the others. I'm on PikaPods, which runs on Docker. I'll do my best to find a way to repro...


@louislam commented on GitHub (Feb 14, 2026):

Because we have too many changes between 2.0.2 and 2.1.0, it is honestly hard to identify the root cause.

It would be helpful if you could try the older 2.1.0 beta versions and identify which one introduced this issue:

- `2.1.0-beta.3`
- `2.1.0-beta.2`
- `2.1.0-beta.1`
- `2.1.0-beta.0`

Don't forget to back up your data folder before downgrading.

See: https://github.com/louislam/uptime-kuma/issues/6888#issuecomment-3867372272
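For a Docker deployment, walking those beta tags one at a time can be sketched as the loop below. It is shown dry-run: each `docker` command is only `echo`ed, so nothing is pulled or started. Drop the `echo`s to actually run it; the image name and `/app/data` volume path match the official Docker image, but the volume name, port mapping, and the exact beta tags are assumptions here — verify the tags exist before pulling, and back up the data volume first as advised above.

```shell
#!/bin/sh
# Hypothetical bisection over the 2.1.0 beta tags (dry-run).
# For each tag: start the container, watch CPU for a while, stop it,
# and note the first tag where usage climbs.
for tag in 2.1.0-beta.0 2.1.0-beta.1 2.1.0-beta.2 2.1.0-beta.3; do
  echo docker pull "louislam/uptime-kuma:$tag"
  echo docker run -d --rm --name kuma-bisect \
    -v uptime-kuma:/app/data -p 3001:3001 "louislam/uptime-kuma:$tag"
  # ...observe CPU with `docker stats kuma-bisect` for a few minutes...
  echo docker stop kuma-bisect
done
```

While each tag is running, `docker stats kuma-bisect` (or plain `top` on the host) shows whether the high CPU reappears, which pins the regression to the first misbehaving beta.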


@hmnd commented on GitHub (Feb 14, 2026):

@louislam I haven't been able to repro the server-side CPU-hogging issue locally yet, but the client-side perf degradation when editing status pages is reproducible when a large number of monitors are added to a page (i.e. 50+; 72 in my case). Shall I open a separate issue for that while I test against each of the beta versions?


@Eckart0 commented on GitHub (Feb 16, 2026):

I have the same problem with version 2.1.0. I use Uptime Kuma as an app/add-on in Home Assistant. Recently it has been using a lot of CPU resources: 28-30% just for Uptime Kuma. When I stop the Uptime Kuma app, overall CPU load returns to 5%.

Image


@CommanderStorm commented on GitHub (Feb 16, 2026):

What monitors are you running?


@Eckart0 commented on GitHub (Feb 16, 2026):

Approximately 20 ping monitors


@CommanderStorm commented on GitHub (Feb 16, 2026):

Not how many; which kind, and with which options?


@Eckart0 commented on GitHub (Feb 16, 2026):

Image
I have another suspicion: I updated to 2.1.0 with HA, but the monitors are still displayed as version 2.0.1 in the integration. Could there have been a mix-up during the update? This would fit with the fact that HA showed for some time that an Uptime Kuma update was available but could not be installed.
