Mirror of https://github.com/louislam/uptime-kuma.git (synced 2026-03-02 22:57:00 -05:00)
[K8s] Server fails to start in 1.9.0 (Solution in comment) #436
Originally created by @ikaruswill on GitHub (Oct 18, 2021).
Is it a duplicate question?
Please search in Issues without filters: https://github.com/louislam/uptime-kuma/issues?q=
Describe the bug
First off, thanks for your hard work. I'm really happy that there's someone in the community who understands the gap in today's open source: the lack of a tool that reliably tracks uptime. I came from statping, which was the closest I got to a good uptime monitoring tool, barring performance issues.
Anyway, on to the issue. I upgraded to 1.9.0 and realized that the pod no longer started. The logs below show the error it threw.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Server starts
Info
Uptime Kuma Version:
Using Docker?: No but using Kubernetes
Docker version: NA
Node.js Version (Without Docker only):
OS: Linux
Browser: NA
Screenshots
If applicable, add screenshots to help explain your problem.
Error Log
It is easier for us to find out the problem.
@gaby commented on GitHub (Oct 18, 2021):
Probably related to this commit changing the logic for getting the default port:
github.com/louislam/uptime-kuma@f75c9e4f0c (diff-8cf1ffe378)

@ikaruswill commented on GitHub (Oct 18, 2021):
Interesting. @gaby, thanks for the pointer. You may be right; I think I found the issue. It's caused by Kubernetes service links.
For Kubernetes users who have their Service named uptime-kuma, the service links feature (enabled by default) generates several environment variables in the Pod, one of which is UPTIME_KUMA_PORT. (Docs: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables) If we would like to avoid this issue, we should probably validate the value of the environment variable before using it.
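To illustrate (the addresses and ports here are made up), the service links feature injects docker-link style variables for a Service named uptime-kuma such as:

```text
UPTIME_KUMA_SERVICE_HOST=10.43.0.12
UPTIME_KUMA_SERVICE_PORT=3001
UPTIME_KUMA_PORT=tcp://10.43.0.12:3001
```

The last one collides with the UPTIME_KUMA_PORT variable the server reads for its listen port, and a value like tcp://10.43.0.12:3001 is not a valid port number.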
For people facing this issue on Kubernetes, either rename the Service to something else, like uptime-kuma-web, or disable service links for uptime-kuma by setting enableServiceLinks: false in the Pod spec of the Deployment object.

@gaby commented on GitHub (Oct 18, 2021):
Great find! It doesn't help that the project owners are making changes directly to master without code reviews.
@louislam commented on GitHub (Oct 18, 2021):
As I stated in k8s readme:
"⚠ Warning: K8s deployment is provided by contributors. I have no experience with K8s and I can't fix error in the future. I only test Docker and Node.js. Use at your own risk."
https://github.com/louislam/uptime-kuma/tree/master/kubernetes/README.md
@ikaruswill commented on GitHub (Oct 18, 2021):
@gaby Nah, he can't be blamed. The project belongs to @louislam, and he has the freedom to do whatever he wants with the codebase, even if the git branching strategy is not exactly best practice. Support here is already given in his own free time, and I appreciate that.
@louislam I understand that the docs state that k8s support is offered by the community only. Nonetheless, I did not follow the deployment instructions from there; I wrote my own, so the problem does not lie with the Kubernetes instructions, and I'm not asking for a fix there.
My intention is to put it up here for anyone who faces the same issue.
I'd also like to suggest stronger validation of the accepted values for the port environment variable, and a safer fallback: instead of raising a fatal error when the port value fails validation, the program could fall back to the default value of 3001.
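A minimal sketch of that suggestion (a hypothetical helper, not the project's actual code): validate the environment value and fall back to 3001 when it is missing or not a usable port.

```javascript
// Hypothetical sketch: resolve the listen port from an environment value,
// falling back to 3001 when the value is missing or not a valid port.
// Kubernetes service links can set UPTIME_KUMA_PORT to something like
// "tcp://10.43.0.12:3001", which Number() cannot parse.
function resolvePort(raw, fallback = 3001) {
    const port = Number(raw);
    if (Number.isInteger(port) && port > 0 && port <= 65535) {
        return port;
    }
    return fallback;
}

console.log(resolvePort("3001"));                  // 3001
console.log(resolvePort("tcp://10.43.0.12:3001")); // falls back to 3001
console.log(resolvePort(undefined));               // 3001
```

With this shape, a service-link value would log a warning-worthy mismatch instead of crashing the server on startup.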
@louislam commented on GitHub (Oct 18, 2021):
@ikaruswill Thank you for your report. I changed the title, so it is easier to find your solution for other k8s users.
@jasonwitty commented on GitHub (Oct 29, 2021):
I ran into this issue with my Uptime Kuma install running on k3s, and adding the environment variable below solved my issue.
...
...
Just a note for the author: I saw the warning you mentioned above, but I was not actually using the manifest file in your Git repo. I don't even remember seeing it there originally; I created my own manifest to host this some weeks ago. Kubernetes is a common way people may choose to run this container, and I'm not sure why any issue would be unique to Kubernetes installations. I would assume Docker users should also have issues with the latest version if a required env variable is not defined.
I think the big difference for a Kubernetes user is that the container can often be moved to a new node, and if that node doesn't have a cached image, it's going to pull it from Docker Hub again. A simple thing you could do to better accommodate this would be to tag each release with a unique tag. I was using the :1 tag and assumed it would not change, but on review I found that :1 and :latest are the same. It would be good to have a tag I could use for the current version that I know is working; if Kubernetes needs to move the pod, then no problem, it's always the same version. If I later decide to upgrade to the latest version, I can update my tag, and if new env variables are required for that version, I will be actively involved in that upgrade and can patch it at that time.
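For example, pinning an exact release in the Deployment's container spec (a sketch; the tag and names here are illustrative) would look like:

```yaml
# Pin an exact release instead of a floating tag like :1 or :latest,
# so rescheduled pods always pull the same image.
containers:
  - name: uptime-kuma
    image: louislam/uptime-kuma:1.9.2-debian
```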
thank you for your support. :)
@gaby commented on GitHub (Oct 29, 2021):
@jasonwitty If you don't want your image tag to change, you should be using:
louislam/uptime-kuma:1.9.2-debian or
louislam/uptime-kuma:1.9.2-alpine
The louislam/uptime-kuma:1 tag is a major release tag, which means it will change for every minor/patch release.

@ikaruswill commented on GitHub (Oct 29, 2021):
@jasonwitty The reason the issue is unique to Kubernetes is already explained in my comment above, https://github.com/louislam/uptime-kuma/issues/741#issuecomment-945854426. Not sure why you didn't see that. I'd advise following that rather than overriding the environment variable. Docker users won't face this issue, and Kubernetes is not as common as you think (and not as common as I'd like it to be).
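For reference, the enableServiceLinks workaround from that comment looks like this in a Deployment's Pod spec (a minimal sketch; the surrounding fields and image tag are illustrative):

```yaml
# Disable Kubernetes service links so no UPTIME_KUMA_PORT variable is
# injected into the pod; alternatively, rename the Service itself.
spec:
  template:
    spec:
      enableServiceLinks: false
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1.9.2-debian
```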
The image tag issue is perhaps off-topic, but I'd like to chime in that what the author is doing has been industry standard since the early days of Docker. If you've run any official Docker image, you'll have seen this. Do read up on semver.