Allow bulk upload monitors using a CSV file or JSON file. #2818

Closed
opened 2026-02-28 03:08:18 -05:00 by deekerman · 7 comments
Owner

Originally created by @qburst-rupesh on GitHub (Nov 20, 2023).

⚠️ Please verify that this feature request has NOT been suggested before.

  • I checked and didn't find similar feature request

🏷️ Feature Request Type

New Monitor

🔖 Feature description

We would like to propose new functionality: the ability to upload a CSV or JSON file for bulk creation of monitors.

For example, I would like to upload two monitors, one for https://hackernoon.com/ and a second for https://www.diffchecker.com/.
There should be a facility for uploading a file from the VueJS frontend, with file validation (CSV or JSON, whichever is easier and more compatible with our system). After the file is successfully submitted, two new monitors should be created, one for hackernoon and a second for diffchecker, and the rest of the flow should behave as it does today.
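To make the request concrete, here is a minimal sketch of what such an upload format and its validation could look like. The column names (`name`, `type`, `url`) are an assumption for illustration only; Uptime Kuma does not define a bulk-upload format today.

```python
import csv
import io

# Hypothetical CSV layout for a bulk-monitor upload. The columns are an
# assumption for illustration; no such format exists in Uptime Kuma yet.
CSV_TEXT = """\
name,type,url
hackernoon,http,https://hackernoon.com/
diffchecker,http,https://www.diffchecker.com/
"""

def parse_monitor_csv(text):
    """Parse the CSV into a list of monitor dicts, rejecting incomplete rows."""
    reader = csv.DictReader(io.StringIO(text))
    required = {"name", "type", "url"}
    monitors = []
    for row in reader:
        # Treat empty cells the same as missing columns.
        missing = required - {k for k, v in row.items() if v}
        if missing:
            raise ValueError(f"row missing columns: {sorted(missing)}")
        monitors.append({"name": row["name"], "type": row["type"], "url": row["url"]})
    return monitors

monitors = parse_monitor_csv(CSV_TEXT)
print([m["name"] for m in monitors])  # → ['hackernoon', 'diffchecker']
```

A real implementation would also have to handle per-monitor-type fields (hostname/port for TCP, topic for MQTT, and so on), which is exactly the complexity discussed later in this thread.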

@louislam and all other contributors

✔️ Solution

The bulk upload option would be available on the right side when you click Add Monitor. The user uploads the file from there, and it is sent to the backend (Node.js) over a socket.

We need to read and save the file, create the monitors from the given keys/parameters, and then unlink (delete) the file once the operation completes, to avoid leaving junk behind.
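The save–create–unlink flow described above can be sketched as follows. This is an illustrative Python sketch, not the project's actual Node.js backend; `create_monitor` is a hypothetical stand-in for the real server-side "add monitor" call.

```python
import json
import os
import tempfile

def create_monitor(params):
    # Hypothetical stand-in for the real server-side "add monitor" call.
    return {"id": 1, **params}

def bulk_import(uploaded_bytes):
    """Save the upload to a temp file, create one monitor per entry,
    then delete the file so no junk is left behind."""
    fd, path = tempfile.mkstemp(suffix=".json")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(uploaded_bytes)
        with open(path, "r", encoding="utf-8") as f:
            entries = json.load(f)
        return [create_monitor(e) for e in entries]
    finally:
        os.unlink(path)  # always clean up, even if parsing fails

payload = json.dumps([{"name": "hackernoon", "url": "https://hackernoon.com/"}]).encode()
created = bulk_import(payload)
```

The `try/finally` mirrors the "unlink after the operation completes" requirement: the temporary file is removed whether or not monitor creation succeeds.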

Alternatives

NA

📝 Additional Context

NA

deekerman 2026-02-28 03:08:18 -05:00

@CommanderStorm commented on GitHub (Nov 20, 2023):

I think this is a duplicate of https://github.com/louislam/uptime-kuma/issues/1190 https://github.com/louislam/uptime-kuma/issues/2297
If you agree, could you please close this Issue, as duplicates only create immortal zombies and are really hard to issue-manage?
If not, what makes this issue unique enough to require an additional issue? (Could this be integrated into the issue linked above?) ^^


@qburst-rupesh commented on GitHub (Nov 21, 2023):

@CommanderStorm No front-end code or socket code with file type inspection and file unlinking to prevent trash can be found. There are various types of monitors, such as TCP, MQTT, and https, thus it's possible that some information about which parameter to send when choosing a monitor type is lacking. Thus, it appears that the precise top-to-bottom flow is unable to identify and fully satisfy the criteria that are thought of as generic requirements.


@CommanderStorm commented on GitHub (Nov 21, 2023):

something was lost in translation (what you wrote does not make sense to me) ^^

No front-end code or socket code [...] can be found

That is to be expected; as noted above, I think this issue is a duplicate of other issues and merging them would be beneficial

There are various types of monitors [...] thus it's possible that some information about which [monitor] parameter[s] [..] is lacking

It is unclear why this requires another issue.
A solution should be general enough not to require hardcoded columns; see https://github.com/louislam/uptime-kuma/pull/3967#issuecomment-1786493484

the precise top-to-bottom flow is unable to identify and fully satisfy the criteria that are thought of as generic requirements

It is unclear what you mean by this statement. What makes this issue unique enough to require an additional issue?


@CommanderStorm commented on GitHub (Nov 21, 2023):

note that our contribution guide is here: https://github.com/louislam/uptime-kuma/blob/1550a5f79270f39007086d8d23c180ba9f63096b/CONTRIBUTING.md


@CommanderStorm commented on GitHub (Dec 13, 2023):

Duplicate of https://github.com/louislam/uptime-kuma/issues/1190


@Vaskata84 commented on GitHub (Aug 30, 2024):

Hi, I'm using the new version of Kuma (1.23.13) and I can't import a big JSON file with about 1500 hosts. I made the file match a downloaded export one-to-one and added the new hosts. I do this with the script below because there are so many hosts.

import json

data = {
    "version": "1.23.13",
    "notificationList": [],
    "monitorList": [
        {
            "id": 1,
            "name": "admin",
            "description": None,
            "pathName": "admin",
            "parent": None,
            "childrenIDs": [],
            "url": "https://",
            "method": "GET",
            "hostname": "10.0.1.23",
            "port": None,
            "maxretries": 1,
            "weight": 2000,
            "active": True,
            "forceInactive": False,
            "type": "ping",
            "timeout": 48,
            "interval": 60,
            "retryInterval": 60,
            "resendInterval": 0,
            "keyword": None,
            "invertKeyword": False,
            "expiryNotification": False,
            "ignoreTls": False,
            "upsideDown": False,
            "packetSize": 56,
            "maxredirects": 10,
            "accepted_statuscodes": [
                "200-299"
            ],
            "dns_resolve_type": "A",
            "dns_resolve_server": "1.1.1.1",
            "dns_last_result": None,
            "docker_container": "",
            "docker_host": None,
            "proxyId": None,
            "notificationIDList": {},
            "tags": [],
            "maintenance": False,
            "mqttTopic": "",
            "mqttSuccessMessage": "",
            "databaseQuery": None,
            "authMethod": None,
            "grpcUrl": None,
            "grpcProtobuf": None,
            "grpcMethod": None,
            "grpcServiceName": None,
            "grpcEnableTls": False,
            "radiusCalledStationId": None,
            "radiusCallingStationId": None,
            "game": None,
            "gamedigGivenPortOnly": True,
            "httpBodyEncoding": None,
            "jsonPath": None,
            "expectedValue": None,
            "kafkaProducerTopic": None,
            "kafkaProducerBrokers": [],
            "kafkaProducerSsl": False,
            "kafkaProducerAllowAutoTopicCreation": False,
            "kafkaProducerMessage": None,
            "screenshot": None,
            "headers": None,
            "body": None,
            "grpcBody": None,
            "grpcMetadata": None,
            "basic_auth_user": None,
            "basic_auth_pass": None,
            "oauth_client_id": None,
            "oauth_client_secret": None,
            "oauth_token_url": None,
            "oauth_scopes": None,
            "oauth_auth_method": "client_secret_basic",
            "pushToken": None,
            "databaseConnectionString": None,
            "radiusUsername": None,
            "radiusPassword": None,
            "radiusSecret": None,
            "mqttUsername": "",
            "mqttPassword": "",
            "authWorkstation": None,
            "authDomain": None,
            "tlsCa": None,
            "tlsCert": None,
            "tlsKey": None,
            "kafkaProducerSaslOptions": {
                "mechanism": "None"
            },
            "includeSensitiveData": True
        }
    ]
}

def load_data_from_file(file_path):
    with open(file_path, 'r') as file:
        lines = file.readlines()
    cleaned_data = []
    for line in lines:
        stripped_line = line.strip()
        if stripped_line:  
            parts = stripped_line.split("|")
            if len(parts) >= 2: 
                description = parts[1].strip()  
                ip = parts[0].strip() 
                cleaned_data.append((description, ip))
            else:
                print(f"Invalid line format: {line}")
    return cleaned_data

def update_json_from_file(json_data, file_path):
    data_from_file = load_data_from_file(file_path)
    
    if len(data_from_file) != 1472:
        print(f"Expected 1472 lines, found: {len(data_from_file)}")
    
    current_id = max(monitor["id"] for monitor in json_data["monitorList"]) + 1
    
    for description, ip in data_from_file:
        # Deep-copy the template monitor via a JSON round-trip (avoids an
        # extra import) and override only the per-host fields; every other
        # key keeps the template's value.
        new_monitor = json.loads(json.dumps(json_data["monitorList"][0]))
        new_monitor.update({
            "id": current_id,
            "name": description,
            "pathName": description,
            "hostname": ip,
        })
        json_data["monitorList"].append(new_monitor)
        current_id += 1
    
    return json_data

file_path = "monitors.txt"

updated_data = update_json_from_file(data, file_path)

print(f"Number of devices in the JSON: {len(updated_data['monitorList'])}")

with open('updated_data.json', 'w') as json_file:
    json.dump(updated_data, json_file, indent=4)

print("The JSON structure was successfully updated and saved to 'updated_data.json'.")
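For reference, `load_data_from_file` in the script above expects `monitors.txt` to contain one `ip|description` pair per line. This small sketch reproduces that parsing on an in-memory sample (the sample hosts are made up for illustration):

```python
# monitors.txt format assumed by the script: "ip|description", one per line.
sample = """\
10.0.1.23|admin
10.0.1.24|switch-core
"""

pairs = []
for line in sample.splitlines():
    line = line.strip()
    if not line:
        continue  # skip blank lines
    parts = line.split("|")
    if len(parts) >= 2:
        # The script stores (description, ip), i.e. column 2 then column 1.
        pairs.append((parts[1].strip(), parts[0].strip()))

print(pairs)  # → [('admin', '10.0.1.23'), ('switch-core', '10.0.1.24')]
```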

@CommanderStorm commented on GitHub (Aug 30, 2024):

@Vaskata84 this issue is not implemented. No import functionality currently exists. Please see #1190
