[K8s] Server fails to start in 1.9.0 (Solution in comment) #436

Closed
opened 2026-02-28 01:46:28 -05:00 by deekerman · 9 comments
Owner

Originally created by @ikaruswill on GitHub (Oct 18, 2021).

Is it a duplicate question?
Please search in Issues without filters: https://github.com/louislam/uptime-kuma/issues?q=

Describe the bug
First off, thanks for your hard work. I'm really happy that there's someone in the community who understands the gap in today's open source landscape: the lack of a tool that reliably tracks uptime. I came from statping, and that was the closest I got to a good uptime monitoring tool, barring performance issues.

Anyway, on to the issue. I upgraded to 1.9.0 and realized that the pod no longer started. The log below shows the error it threw.

To Reproduce
Steps to reproduce the behavior:

  1. Be running 1.8.0 before
  2. Pull and run 1.9.0 with the same settings as before.

Expected behavior
Server starts

Info
Uptime Kuma Version:
Using Docker?: No but using Kubernetes
Docker version: NA
Node.js Version (Without Docker only):
OS: Linux
Browser: NA

Screenshots
If applicable, add screenshots to help explain your problem.

Error Log
It is easier for us to find out the problem.

==> Performing startup jobs and maintenance tasks
==> Starting application with user 0 group 0
Welcome to Uptime Kuma
Node Env: production
Importing Node libraries
Importing 3rd-party libraries
Importing this project modules
Prepare Notification Providers
Version: 1.9.0
Creating express and socket.io instance
Server Type: HTTP
Data Dir: ./data/
Connecting to Database
SQLite config:
[ { journal_mode: 'wal' } ]
[ { cache_size: -12000 } ]
SQLite Version: 3.36.0
Connected
Your database version: 10
Latest database version: 10
Database no need to patch
Database Patch 2.0 Process
Load JWT secret from database.
No user, need setup
Adding route
Adding socket handler
Init the server
Trace: RangeError [ERR_SOCKET_BAD_PORT]: options.port should be >= 0 and < 65536. Received NaN.
    at new NodeError (internal/errors.js:322:7)
    at validatePort (internal/validators.js:216:11)
    at Server.listen (net.js:1457:5)
    at /app/server/server.js:1250:12 {
  code: 'ERR_SOCKET_BAD_PORT'
}
    at process.<anonymous> (/app/server/server.js:1456:13)
    at process.emit (events.js:400:28)
    at processPromiseRejections (internal/process/promises.js:245:33)
    at processTicksAndRejections (internal/process/task_queues.js:96:32)
If you keep encountering errors, please report to https://github.com/louislam/uptime-kuma/issues
deekerman 2026-02-28 01:46:28 -05:00
  • closed this issue
  • added the
    wontfix
    label

@gaby commented on GitHub (Oct 18, 2021):

Probably related to this commit changing the logic for getting the default port: https://github.com/louislam/uptime-kuma/commit/f75c9e4f0ca4f7417ea0c7f2184be4dd1332be05#diff-8cf1ffe3788af127768748703e2f15dcfb3a05ffd20b4c2375223637e3260938


@ikaruswill commented on GitHub (Oct 18, 2021):

Interesting. @gaby thanks for the highlight. You may be right, I think I found the issue. It's because of Kubernetes service links.

For Kubernetes users who have their service named uptime-kuma, the Kubernetes service links feature (enabled by default) generates a set of environment variables, one of which is UPTIME_KUMA_PORT. (Docs: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables)
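To make the failure mode concrete, here is an illustrative sketch (not the actual server.js code): with a Service named uptime-kuma, the injected UPTIME_KUMA_PORT is a link URL rather than a number, and the IP below is made up. Coercing that string to a number yields NaN, which is exactly what server.listen() rejects with ERR_SOCKET_BAD_PORT.

```javascript
// Kubernetes service links set UPTIME_KUMA_PORT to a Docker-links-style
// URL, e.g. "tcp://10.43.105.44:3001" (IP illustrative), not a port number.
const serviceLinkValue = "tcp://10.43.105.44:3001";

// Parsing it as a number fails, because "t" is not a digit.
const port = parseInt(serviceLinkValue, 10);

console.log(port); // NaN — hence "options.port should be >= 0 and < 65536. Received NaN."
```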

If we would like to avoid this issue, we should probably validate the value of the environment variable before using it.

For people facing this issue on Kubernetes, either rename the service to something else like uptime-kuma-web or disable service links for uptime-kuma by setting enableServiceLinks: false in the Pod spec of the Deployment object.
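A minimal sketch of the enableServiceLinks workaround, assuming a standard Deployment (metadata names and labels here are illustrative, not from any official manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
spec:
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      # Stop Kubernetes from injecting UPTIME_KUMA_* service-link variables
      enableServiceLinks: false
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          ports:
            - containerPort: 3001
```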


@gaby commented on GitHub (Oct 18, 2021):

Great find! It doesn't help that the project owners are making changes directly to master without code reviews.


@louislam commented on GitHub (Oct 18, 2021):

As I stated in k8s readme:

"⚠ Warning: K8s deployment is provided by contributors. I have no experience with K8s and I can't fix error in the future. I only test Docker and Node.js. Use at your own risk."

https://github.com/louislam/uptime-kuma/tree/master/kubernetes/README.md


@ikaruswill commented on GitHub (Oct 18, 2021):

@gaby Nah, he can't be blamed. The project belongs to @louislam, and he has the freedom to do whatever he wants with the codebase, even if the git branching strategy is not exactly best practice. Support here is already on his own free time, and I appreciate that.

@louislam I understand that it's written in the docs that k8s support is offered by the community only. Nonetheless, I did not follow the deployment instructions from there and wrote my own, so the problem does not lie with the Kubernetes instructions, and I'm not asking for a fix there.

My intention is to put it up here for anyone who faces the same issue.

I'd also like to suggest stronger validation of the accepted values for the port environment variable, and a safer fallback: instead of raising a fatal error when the port value fails validation, the program could fall back to the default value of 3001.
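The suggested validation-with-fallback could be sketched like this (a hypothetical helper, not the project's actual code; the function name and default are illustrative):

```javascript
// Default port Uptime Kuma listens on when nothing valid is configured.
const DEFAULT_PORT = 3001;

// Coerce an env-var value to a usable TCP port, falling back to the
// default instead of crashing on bad input.
function resolvePort(raw) {
    const port = Number(raw);
    // Rejects non-numeric values (e.g. "tcp://10.43.105.44:3001" injected
    // by Kubernetes service links) and out-of-range numbers.
    if (!Number.isInteger(port) || port < 0 || port >= 65536) {
        return DEFAULT_PORT;
    }
    return port;
}

console.log(resolvePort("8080"));                    // 8080
console.log(resolvePort("tcp://10.43.105.44:3001")); // 3001 (fallback)
console.log(resolvePort(undefined));                 // 3001 (fallback)
```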


@louislam commented on GitHub (Oct 18, 2021):

@ikaruswill Thank you for your report. I changed the title, so it is easier to find your solution for other k8s users.


@jasonwitty commented on GitHub (Oct 29, 2021):

I ran into this issue with my Uptime Kuma install running on k3s, and adding the environment variable below solved it.


...

      image: louislam/uptime-kuma:1
      env:
        - name: UPTIME_KUMA_PORT
          value: "3001"

...

Just a note for the author: I see the "warning" you mentioned above, but I was not actually using the manifest file in your Git repo. I don't even remember seeing it there originally; I created my own manifest to host this some weeks ago. Kubernetes is a common way people may choose to run this container. I am not sure why any issue would be unique to Kubernetes installations; I would assume Docker users would also have issues with the latest version if a required env variable is not defined.

I think the big difference for a Kubernetes user is that the container can often be moved to a new location, and if that location doesn't have a cached image, it is going to pull it from Docker Hub again. A simple thing you could do to better accommodate this would be to tag each release with a unique tag. I was using the :1 tag and assumed it would not change, but on review I found that :1 and :latest are the same. It would be good to have a tag I could use for the current version that I know is working; if Kubernetes needs to move the pod, then no problem, it's always the same version. If I later decide to upgrade to the latest version, I can update my tag, and if new env variables are required to run that version, I will be actively involved in the upgrade and can patch it at that time.

thank you for your support. :)


@gaby commented on GitHub (Oct 29, 2021):

@jasonwitty If you don't want your image tag to change, you should be using:

louislam/uptime-kuma:1.9.2-debian
or
louislam/uptime-kuma:1.9.2-alpine

louislam/uptime-kuma:1 is a major-release tag, which means it will change with every minor/patch release.


@ikaruswill commented on GitHub (Oct 29, 2021):

@jasonwitty The reason the issue is unique to Kubernetes is already explained in my comment above, https://github.com/louislam/uptime-kuma/issues/741#issuecomment-945854426. Not sure why you didn't see that. I'd advise following that approach as opposed to overriding the environment variable. Docker users won't face this issue, and Kubernetes is not as common as you think (and not as common as I'd like it to be).

The image tag issue is perhaps off topic, but I'd like to chime in that what the author is doing has been the industry standard since the early days of Docker. If you've run any official Docker image, you'd already know this. Do read up on semver.
