mirror of
https://github.com/louislam/uptime-kuma.git
synced 2026-03-02 22:57:00 -05:00
Unable to log into uptime-kuma. Get strange error. #128
Originally created by @jim361tx on GitHub (Aug 16, 2021).
I am running the container in Docker on an Unraid server. It has been running for 6 days and is still monitoring hosts and sending alerts; however, I am unable to log in to the web portal. The logs show this error over and over:
(node:18) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag
--unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 592)
@louislam commented on GitHub (Aug 16, 2021):
Are there any more logs?
I suggest restarting the container first.
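For readers hitting the same generic warning: the rejection reason can be surfaced explicitly with a process-level handler. This is a minimal sketch, not Uptime Kuma's actual code; `process.on('unhandledRejection', ...)` is a standard Node.js API, everything else here is illustrative.

```javascript
// Minimal sketch, NOT Uptime Kuma's actual code: capture unhandled
// promise rejections and log the underlying reason, so the failing
// operation surfaces instead of only the generic Node.js warning.
let lastRejection = null;

process.on('unhandledRejection', (reason) => {
    lastRejection = reason;
    console.error('Unhandled rejection:', reason && reason.message);
});

// Demonstration: a rejected promise with no .catch() handler attached.
Promise.reject(new Error('Timeout acquiring a connection'));
```

Running Node with `--unhandled-rejections=strict`, as the warning itself suggests, would instead crash the process at the first such rejection, which tends to make the root cause easier to spot in container logs.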
@jim361tx commented on GitHub (Aug 16, 2021):
I have restarted the container several times as well as the host server. Hopefully it is just a me problem and I can delete the container and add it back. I get this over and over in the log:
(node:18) UnhandledPromiseRejectionWarning: KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_SQLite3.acquireConnection (/app/node_modules/knex/lib/client.js:348:26)
at runNextTicks (internal/process/task_queues.js:60:5)
at listOnTimeout (internal/timers.js:526:9)
at processTimers (internal/timers.js:500:7)
at async RedBeanNode.normalizeRaw (/app/node_modules/redbean-node/dist/redbean-node.js:548:22)
at async Function.sendUptime (/app/server/model/monitor.js:348:29)
(node:18) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag
--unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 6266)
@louislam commented on GitHub (Aug 16, 2021):
How many services do you have currently?
I believe it is related to a query timeout problem. If an SQL query takes longer than 60 seconds, it will throw an exception.
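The timeout louislam describes maps to Knex's `acquireConnectionTimeout` option, which defaults to 60000 ms. Below is a hedged sketch of the relevant configuration; the values and file path are illustrative assumptions, not Uptime Kuma's actual config.

```javascript
// Illustrative only -- not Uptime Kuma's actual configuration.
// Two settings govern the KnexTimeoutError above: the connection pool
// size and acquireConnectionTimeout (Knex's default is 60000 ms, which
// matches the roughly 60-second stalls reported in this thread).
const knexConfig = {
    client: 'sqlite3',
    connection: { filename: '/app/data/kuma.db' }, // assumed path
    useNullAsDefault: true,
    pool: { min: 1, max: 1 },        // SQLite allows only one writer at a time
    acquireConnectionTimeout: 60000, // ms before "Timeout acquiring a connection"
};

console.log(knexConfig.acquireConnectionTimeout); // 60000
```

If every pooled connection is held by a long-running query, any new query (such as the login lookup) waits the full timeout and then rejects, which would match the symptom of monitors still running while logins hang.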
github.com/knex/knex@1744c8c265/lib/client.js (L204)@GlassedSilver commented on GitHub (Aug 22, 2021):
I am experiencing the same issue. I have 32 services watched if I remember correctly.
I can open the website, and when I click login after filling in my details the submit button shows as active, but nothing happens after that.
Same errors in the uptime-kuma log.
I can show a little more of it, mind you my watched items are being checked still.
All of this is just a short excerpt. The log does keep going, so the watched items are still working, which is nice of course.
Hope this proves helpful.
Started experiencing this about a week ago after applying an update to uptime-kuma. Couldn't tell you which commit, but it's been about a week yes.
@ErikZandboer commented on GitHub (Aug 22, 2021):
I am seeing the same thing. UI pops up the username password question, when entered it just sits there. It is still monitoring, I can see from the logs that everything is reported up. But no UI.... If I refresh the page after login I simply get the login request page again. Logs are spitting the same error as described above:
Edit: I am running 1.3.2 from a container. The container runs on a Synology using the Synology-native docker tooling.
@louislam commented on GitHub (Aug 22, 2021):
@ErikZandboer @GlassedSilver Did you encounter the same error in 1.2.0?
@GlassedSilver commented on GitHub (Aug 22, 2021):
Is it safe to just go back to that version database-file-wise? If so I'd quickly try it out!
@louislam commented on GitHub (Aug 22, 2021):
It should be safe, since the database structure is the same in these two versions.
@GlassedSilver commented on GitHub (Aug 22, 2021):
Nope, same problem. Should I try 1.1.0?
@louislam commented on GitHub (Aug 22, 2021):
There is no need to try 1.1.0.
Thank you for testing. My guess is that some slow queries are blocking everything else. I will try to investigate.
@GlassedSilver commented on GitHub (Aug 22, 2021):
Thank you so much for prioritizing this issue. Really fell in love with Uptime Kuma and its downtime (I just realized how ironic that is, no hard feelings :P) is giving me withdrawal symptoms. :D
@ErikZandboer commented on GitHub (Aug 25, 2021):
Sorry for the late reply. In 1.2.0 it worked, though I did sometimes see very long times between login and the dashboard showing. This might have been the same issue, but I cannot be sure. Next to try is the nightly :)
@ErikZandboer commented on GitHub (Aug 25, 2021):
So I just tried 1.2.0. Also quite a number of errors in the log. Next to the previously mentioned error around the Knex timeout / "pool is probably full" (same behavior: not able to log in after the user/pass prompt), I now also see this one:
(node:25) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1580)
(node:25) UnhandledPromiseRejectionWarning: TypeError: msg.includes is not a function
at RedBeanNode.checkError (/app/node_modules/redbean-node/dist/redbean-node.js:394:26)
at RedBeanNode.checkAllowedError (/app/node_modules/redbean-node/dist/redbean-node.js:402:14)
at RedBeanNode.findOne (/app/node_modules/redbean-node/dist/redbean-node.js:478:18)
at async Function.sendCertInfo (/app/server/model/monitor.js:339:24)
(node:25) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1583)
@ErikZandboer commented on GitHub (Aug 25, 2021):
Btw the nightly build I just downloaded would not even show me a web page, not sure of its status now.
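The `TypeError: msg.includes is not a function` in ErikZandboer's log happens when a string method is called on a non-string error value. Here is a hypothetical sketch of the failure mode and a defensive fix; the function name and the SQLITE_BUSY pattern are illustrative, not the actual redbean-node code.

```javascript
// Hypothetical illustration -- not the actual redbean-node code.
// If `msg` is an Error object or anything else non-string, calling
// msg.includes(...) throws exactly the TypeError seen in the log.
// Coercing to a string first makes the check safe.
function isAllowedError(msg) {
    const text = typeof msg === 'string' ? msg : String(msg);
    return text.includes('SQLITE_BUSY'); // illustrative pattern
}

console.log(isAllowedError({ code: 5 }));                       // false
console.log(isAllowedError('SQLITE_BUSY: database is locked')); // true
```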
@louislam commented on GitHub (Aug 25, 2021):
Strange; maybe the new sqlite3 library doesn't help either.
I need more info. After starting Uptime Kuma, could you show me the load average, CPU usage and free memory?
Command:
@ErikZandboer commented on GitHub (Aug 26, 2021):
Version 1.2.0 shows quite some CPU load, around 33%. I have to say that I run this on a Synology DS412+, which has a tiny Intel Atom D2700 CPU. It seems that 1.3.2 also delivers comparable CPU loads:
@ErikZandboer commented on GitHub (Aug 26, 2021):
Not sure what happened, but I just redownloaded the 1.3.2 version and it starts like lightning, lets me log in, and I am looking at the dashboard. No errors in the log, just successful responses. CPU load is also down to just a few percent (I see 0-2% CPU now at rest, 22% on a scan).
@louislam commented on GitHub (Aug 26, 2021):
A load average around 2.00 is pretty high in my opinion.
It may be related to your NAS doing a lot of I/O jobs, which also blocks Uptime Kuma's I/O. Another possible reason is that the Atom CPU doesn't like Node.js applications.
FYI, here is my free VPS on Oracle with 1/8 CPU (yes, 0.125 cores only). There are about 10 monitors. It is smooth.
@ErikZandboer commented on GitHub (Aug 26, 2021):
The NAS is mostly idle; it is my backup NAS that really only acts as a dumb target during the night. Weirdly, after redownloading and reinstalling Uptime Kuma 1.3.2 it started very fast and allowed me to log in without any errors. Nothing changed; I just created a new container from scratch like before, only pointing it to the existing /app/data folder to pick up the existing database. CPU loads now show a "normal" 0-2% during sleep and bounce up to 22% at peak (while scanning).
Until I run into the errors described above again, I think I am disqualified from testing on this issue; it works now and I cannot reproduce. Sorry!
@ErikZandboer commented on GitHub (Aug 26, 2021):
Looking at the HTTP response time I got from Uptime Kuma, you can see that the web server was really slow to respond while the error was occurring. The sensor was paused ("off") at first, then started with the error. After redownloading and rebuilding the container, response times went down to "sane" values again:

@raspberrycoulis commented on GitHub (Aug 31, 2021):
I've just pulled the nightly image and it now loads on my Synology DS718+ NAS.
@GlassedSilver commented on GitHub (Aug 31, 2021):
For me the nightly still doesn't even load the main page. (as in: maybe login is fixed, but I can't get to the point of sending my credentials)
@louislam commented on GitHub (Aug 31, 2021):
I need your hardware specs.
And I want to see the CPU usage, memory usage and load average.
Command:
top
@GlassedSilver commented on GitHub (Sep 1, 2021):
Good news: switched to the :1 tag, and it instantly worked flawlessly again!
Edit:
Worked exactly for one load of the site. Even opening the page in incognito mode to make sure no cookies or caches mess around doesn't help.
**Errors from the log (very long excerpt, hence the collapsing)**
As for my hardware, it's an HPE DL380e Gen8 with 48 GB of ECC memory. The environment is unRAID 6.9.2 with Docker 20.10.5.
App data as well as the image reside on a pool of BTRFS 1 TB Samsung SSDs (2 drives).
Everything else on my setup is working flawlessly at the moment and overall the system has been very stable in the past. ('ts a good girl, yes my server is a girl :P)
The results from the top command (run inside the Docker container itself, which I presume is what you wanted):
@louislam commented on GitHub (Sep 1, 2021):
A load average of 7.18 is higher than expected for such a powerful machine; that is strange.
@ErikZandboer commented on GitHub (Sep 1, 2021):
I got 1.3.2 running after a lot of trial and error. Now upgraded to 1.5.2 and it worked perfectly the first time. This version seems to have fixed the issue (at least on my side).
Thanks for all of your work, this is becoming (and already IS) a great, straight-forward tool!
@GlassedSilver commented on GitHub (Sep 1, 2021):
That's a MongoDB instance doing that for the most part. Overall my server has a lot of stuff to do, so whilst 7.18 is more than usual, the server is not starved for tasks. Don't worry about it.
Apparently it works now again after going to the 1.5.2 tag.
Bit weirded out that :latest doesn't pull 1.5.2, maybe that's my Docker having cached the older one still for that tag. (1.3.2)
Bit weirded out by the fact that the db files grow VERY large in size and that the watches need a bit of time to load up. (I have 32 watches)
I'm not gonna poke the bear any further and let the docker sit there and live with it for now. :D
@chakflying commented on GitHub (Sep 1, 2021):
@GlassedSilver that sounds a bit strange. I did a rough calculation and it should only use about ~100KB per monitor per day, so about 50MB if you have 32 monitors running for a couple of weeks. How big is your kuma.db file?
@GlassedSilver commented on GitHub (Sep 2, 2021):
@chakflying Take a look :)
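For reference, chakflying's ~50 MB figure works out as follows; the 100 KB/monitor/day estimate and the roughly two-week window are his assumptions, taken at face value.

```javascript
// Back-of-envelope reproduction of chakflying's database size estimate.
const kbPerMonitorPerDay = 100; // chakflying's rough per-monitor figure
const monitors = 32;            // GlassedSilver's monitor count
const days = 16;                // "a couple of weeks", roughly
const totalMB = (kbPerMonitorPerDay * monitors * days) / 1024;

console.log(Math.round(totalMB)); // 50
```

A kuma.db dramatically larger than this could suggest that heartbeat rows are not being pruned, or that the -wal file has grown without a checkpoint.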
@louislam commented on GitHub (Sep 2, 2021):
As I cannot reproduce the problem on my side, it is hard to address.
If you don't mind, you could send me the db files (kuma.db, kuma.db-shm, kuma.db-wal) for me to investigate. But first you have to delete all sensitive data, such as the hashed password, notification info, and the JWT secret, inside the db file:
- Copy kuma.db, kuma.db-shm and kuma.db-wal.
- Open kuma.db with an SQLite editor (my favourite one is SQLite Expert Personal: http://www.sqliteexpert.com/download.html).
- Empty the notification, setting and user tables.
- Go through the monitor table and mask the data with [MASKED].
@louislam commented on GitHub (Sep 20, 2021):
I have re-written some queries in 1.6.x. Hopefully this problem is solved.