Add import/export (backup/restore) for settings (non-settings data not required) #4250

Open
opened 2026-02-28 03:56:01 -05:00 by deekerman · 12 comments
Owner

Originally created by @zwimer on GitHub (Aug 8, 2025).

📑 I have found these related issues/pull requests

This might be a lot easier if the following issue was also addressed:

  • https://github.com/louislam/uptime-kuma/issues/2496

Related but quite different:

  • https://github.com/louislam/uptime-kuma/issues/5440

🏷️ Feature Request Type

Settings

🔖 Feature description

It'd be nice if it were possible to import/export (backup/restore) uptime kuma settings. I'm not talking about the entire DB and all the data uptime has collected, just the settings; i.e. the monitors, the status pages, and the settings page.

Ideally this would be in a human-readable format, as it would make mass-editing of monitors as simple as export to json (or whatever file format is used), edit, then import. One distinct advantage of it being human-readable would be that it'd be a lot easier to merge instances, or split instances, should such a need ever arise.

✔️ Solution

Provide a user-friendly method of importing/exporting settings.

There are multiple ways of doing this, though I think the best would be implementing #2496 (splitting settings out of the main database so that they are distinct from the rest of the data). Doing so would give users the most basic form of import/export: backing up the split-out settings file (ideally the resulting settings file would be human readable, like JSON, but even a distinct database would be better than the current situation).

❓ Alternatives

Provide a backup/restore UI option that imports/exports only the settings, not the data generated by the monitors.

📝 Additional Context

Benefits:

  1. *Backup/Restore*: Right now the best backup/restore option available is periodically copying all the data to another directory. This is functional, but there are reasons so many programs provide a backup/restore method independent of having the user back up from / restore to the underlying file system. Providing this method would enable backup/restore of settings, which is the important bit for me at least.
  2. *Version Control for Settings*: Users could version control (`git`) their uptime kuma settings. Right now, since everything is stored together, `git` would constantly report changes as uptime kuma collects data and stores it in the database. This basically nullifies any benefit of using `git` over periodically copying all the files to a date-labeled folder.
  3. *Mass Editing of Settings*: If the exported settings are human readable, it could be very useful if, for example, a user needs to do a find-and-replace across all monitors. Say some IP address changed and multiple monitors need that update. With this feature, updating everything could be as simple as: export, `sed -i 's/old_ip/new_ip/g' settings.json`, then import.
  4. *Merging / splitting instances*: On multiple occasions I've needed to either split a single uptime kuma instance into two (such as moving some services to a new machine and needing to set up a new uptime kuma instance with all the relevant monitors migrated from the existing one) or merge multiple instances (such as when consolidating servers and wanting to consolidate the uptime kuma instances / monitors into one). If the exported settings were easily editable, this would be a trivial action.
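The mass-edit workflow in benefit 3 can be sketched end to end. Note that the `settings.json` structure below is purely hypothetical (the export format does not exist yet); it only stands in for whatever the real export would produce:

```shell
# Hypothetical exported settings file -- the actual export format is an
# assumption, since the feature doesn't exist yet.
cat > settings.json <<'EOF'
{"monitors": [
  {"name": "web", "url": "http://10.0.0.5:8080"},
  {"name": "api", "url": "http://10.0.0.5:9090"}
]}
EOF

# Benefit 3 in practice: one find-and-replace updates every monitor at once.
sed -i 's/10\.0\.0\.5/10.0.0.9/g' settings.json

# Both URLs now point at 10.0.0.9; re-importing the file would finish the job.
cat settings.json
```

The same file, being plain JSON, could also be split or merged by hand to cover benefit 4.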

@pscriptos commented on GitHub (Aug 8, 2025):

That's funny, because I just opened my Uptime Kuma installation and thought, man, this installation is so slow thanks to the sqlite database. I wish there was a way to export all the settings so I could set up a new instance with MariaDB, and then I find your post about it, which is only 3 hours old :D

Thank you!
I totally agree. That would be really great!


@seitzbg commented on GitHub (Oct 24, 2025):

+1, I just upgraded and cannot restore my JSON backup :(


@V0LDY commented on GitHub (Dec 10, 2025):

Big up! I feel like something like this would only take a few KBs of data, while right now AFAIK the only way to easily back up Uptime Kuma is snapshotting the entire thing, which grows fast with the database...


@guice commented on GitHub (Dec 16, 2025):

I traced through a series of linked issues looking for exactly this. AI answers said "there's a backup option", but I later found it was removed in 2.x. :/

Add my voice for a backup option.


@MrNickIE commented on GitHub (Jan 5, 2026):

I just got to this post because I was thinking the same thing. In my case I am testing, and I would have loved to just grab all the hosts I have set up on one instance to run on my test instance!


@danielgoepp commented on GitHub (Feb 7, 2026):

I know this is not really the answer to this issue. However, I will share it here since it works for me in the meantime, until a better native solution is provided. I do love the related issue's idea of splitting this up and putting the data in a proper time series DB and the config in something other than MariaDB. I would vote for Victoria Metrics support in that too.

Interestingly, I just tested this process this morning because I attempted to change from the beta version to the newly released 2.1, and I accidentally used the "latest" tag, which wiped me out completely and put me back on v1.x. I updated to 2.1 with a clean DB and restored from backup. All good and super fast. This is what I did... it might not be right, but it worked.

  • I mount a backup directory to my pod.
  • I have an AWX job that performs a nightly backup: it connects to the pod via shell and runs a DB backup.

If anyone is interested, it is here: https://raw.githubusercontent.com/danielgoepp/ansible/refs/heads/main/playbooks/k3s/backup-uptime-kuma.yaml

But the basic idea is just this (`--ignore-table-data=kuma.heartbeat`):

```
mariadb-dump kuma -S /app/data/run/mariadb.sock --ignore-table-data=kuma.heartbeat > /app/backups/uptime-kuma/{{ backup_filename }}
```

To restore, I just connected manually via shell and executed this:

```
mariadb kuma -S /app/data/run/mariadb.sock < /app/backups/uptime-kuma/kuma_backup_2026-02-07.sql
```

I restarted, and everything was back. No history, just config, and very fast.
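Outside of AWX, the nightly dump above can be sketched as a plain shell script. The paths and the date-stamped filename mirror the ones in this comment, but the `command -v` and directory guards are additions of mine so the sketch degrades gracefully; treat it as an illustration of the idea, not a drop-in script:

```shell
#!/bin/sh
# Sketch of the nightly config-only backup, minus AWX.
# Paths are the ones used in the comment above; adjust for your own mounts.
backup_dir="/app/backups/uptime-kuma"
socket="/app/data/run/mariadb.sock"

# Date-stamped filename matching the kuma_backup_YYYY-MM-DD.sql names above.
backup_filename="kuma_backup_$(date +%Y-%m-%d).sql"
echo "would write: $backup_dir/$backup_filename"

# Skip heartbeat data so config is kept but history is not. Guarded so the
# sketch is a no-op on machines without mariadb-dump or the backup mount.
if command -v mariadb-dump >/dev/null 2>&1 && [ -d "$backup_dir" ]; then
    mariadb-dump kuma -S "$socket" \
        --ignore-table-data=kuma.heartbeat \
        > "$backup_dir/$backup_filename"
fi
```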


@CommanderStorm commented on GitHub (Feb 7, 2026):

do note that in v2 we have more aggregation tables than just the initial `heartbeat` staging ground:

  • stat_minutely
  • stat_hourly
  • stat_daily

@danielgoepp commented on GitHub (Feb 7, 2026):

Nice! Good to know @CommanderStorm. I will update my backups to exclude those as well. Appreciate it.


@danielgoepp commented on GitHub (Feb 7, 2026):

Excellent, that dropped my size down quite a bit! I didn't even know about those. Much better!

```
-rw-r--r--  1 dang  staff   422K Feb  7 05:20 kuma_backup_2026-02-07.sql
-rw-r--r--  1 dang  staff    10M Feb  6 01:00 kuma_backup_2026-02-06.sql
```

Revised my code and pushed it. Now:

```
mariadb-dump kuma -S /app/data/run/mariadb.sock \
            --ignore-table-data=kuma.heartbeat \
            --ignore-table-data=kuma.stat_minutely \
            --ignore-table-data=kuma.stat_hourly \
            --ignore-table-data=kuma.stat_daily \
            > /app/backups/uptime-kuma/{{ backup_filename }}
```

@CommanderStorm commented on GitHub (Feb 7, 2026):

Also, something like this (mariadb dump / sqlite dump), with the option to not include historical data, would be nice to have in the frontend. Maybe even including the other assets (how to connect to the db, images for status pages).

Applying that backup does not need to be convenient, since it is meant for the catastrophic all-disks-fail case.


@danielgoepp commented on GitHub (Feb 7, 2026):

Also if anyone wants a quick script to just dump tests:

https://github.com/danielgoepp/utility-scripts/blob/main/uptime-kuma/uptime-kuma-export.py

I have used this method for import / export before.


@danielgoepp commented on GitHub (Feb 7, 2026):

> Also, something like this (mariadb dump / sqlite dump), with the option to not include historical data, would be nice to have in the frontend. Maybe even including the other assets (how to connect to the db, images for status pages).
>
> Applying that backup does not need to be convenient, since it is meant for the catastrophic all-disks-fail case.

Agreed!
