Placeholder text has many disadvantages #327

Open
opened 2026-02-28 04:41:46 -05:00 by deekerman · 7 comments

Originally created by @dlaliberte on GitHub (Jan 8, 2024).

Having experienced first-hand the confusion of thinking that placeholder text was actual text, I found an article that highlights several additional reasons placeholder text should probably not be used: [Don’t Use The Placeholder Attribute](https://www.smashingmagazine.com/2018/06/placeholder-attribute/).

In my case, the placeholder text happened to be the default value, which was exactly what I would have entered, so I thought it was already entered. (Since no text was actually entered, yet the submit proceeded without complaint, I was very confused by the subsequent erroneous consequences.) Making it easy to enter that same text would be a good alternative: for example, when the empty field receives focus, display the default text in a dropdown that the user can select.

If you agree, I volunteer to create a PR to implement this.


@timothycarambat commented on GitHub (Jan 9, 2024):

For as long as I have been an SWE in various roles and projects, I've heard this argument both ways. Currently I take the opposite stance to that article, the likes of which I have read before! At a certain point the arguments can become pedantic, but at our current stage of development I'm strongly opposed to removing placeholders.

As an example, take the LocalAI input field.
[Screenshot: LocalAI base URL input field]

We have validations on the backend to ensure two things:

  • the URL does **not** end in `/` (e.g. `https://localhost:1223/v1/` is rejected)
  • the URL **does** end in `/v1` (e.g. `https://localhost:1223/v1`)

If the placeholder were removed, there would be zero indication that the input requires any specific format at all. The user would likely enter just the hostname, `https://localhost:1223`, get an error saying it needs to end in `/v1`, and have to submit twice.
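For illustration, the two backend rules above could be sketched roughly like this; `validateBasePath` is a hypothetical name, not the project's actual validator:

```javascript
// Hypothetical sketch (not the actual AnythingLLM source) of the two
// backend rules for the LocalAI base URL: no trailing "/", and the
// path must end in "/v1".
function validateBasePath(url) {
  if (url.endsWith("/")) {
    return { valid: false, reason: "URL must not end in /" };
  }
  if (!url.endsWith("/v1")) {
    return { valid: false, reason: "URL must end in /v1" };
  }
  return { valid: true };
}
```

Without a placeholder (or some other hint), nothing in the UI tells the user about either rule until the server rejects the form.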

Additionally, should any input be left empty because of placeholder-is-input confusion, we use HTML's native form controls for required fields, so we need not rely on ping-ponging validations between the server and client.
[Screenshot: native required-field validation message]

Lastly, the color of the placeholder is a "softer" shade than the real input text, which is pretty normal in web design. If there is agreement that it's too close to white, we can make it darker, but then we may face contrast issues.

I think the UX where an input shows as an option during focus for the default is counterintuitive so we should not pursue that. Inputs should be inputs, dropdowns should be dropdowns, etc etc.

Possible resolutions

  • We can supplement each input label with an additional `info` symbol that the user can hover to get a tooltip describing the required input format or the function of the input

  • We can then revise the placeholders so they are not the exact inputs one would expect to enter anyway (e.g. the token context window input!), making them more descriptive or noting assumed defaults
    [Screenshot: token context window input with descriptive placeholder]

  • We could move the placeholder under the input as a hint, so there is no longer any confusion about what is in the input. Placeholders could then be dropped entirely, since the input instruction is still conveyed.

I'm open to arguments either way, but I do stand by placeholders at our current stage of UI/UX around inputs. That's just where I'm at mentally on this. Appreciate you bringing this up so we can all find the best way forward 👍


@dlaliberte commented on GitHub (Jan 17, 2024):

Thanks for your extensive response; it affirms that you care about the UI and ease of use. By the way, I think the AnythingLLM UI is mostly great, but this particular issue tripped me up, and I thought reporting it might help others.

I was able to submit the form without touching the field whose placeholder value was identical to what I needed to enter. So perhaps there is a bug that allows an empty value for a required field; a passive indication that the field is required wouldn't have helped.

There was no other form text to compare with that might have suggested it was not the actual value, so lightening the text won't necessarily help either.

Showing the placeholder separate from the field is probably the best choice, if you don't want to show it in a drop-down that the user can quickly select.


@dlaliberte commented on GitHub (Jan 17, 2024):

Here is what I experienced regarding the default settings, which I had posted to the Discord.

I installed (cloned and built) the AnythingLLM repo for development. I am on a Windows 11 machine, using a WSL Ubuntu shell. I want to use LMStudio with one of the Hugging Face LLMs I downloaded, and I want to use the default AnythingLLM embedding. In `server/.env.development` I uncommented the three LMStudio lines:

```
LLM_PROVIDER='lmstudio'
LMSTUDIO_BASE_PATH='http://localhost:1234/v1'
LMSTUDIO_MODEL_TOKEN_LIMIT=4096
```

and left everything else as is. In the localhost:3000 web page, I set up a workspace, and clicked on the little wrench tool to configure the preferences. In the LLM Preferences, I selected LMStudio, and it seemed to have the right default base URL and token context window, so I just Saved my selection and tried to use it. This failed in a mysterious way, which I could see in the AnythingLLM server log:

```
Error: INVALID LM STUDIO SETUP. No embedding engine has been set. Go to instance settings and set up an embedding interface to use LMStudio as your LLM.
    at new LMStudioLLM (/mnt/c/Users/danie/anything-llm/server/utils/AiProviders/lmStudio/index.js:24:13)
    at getLLMProvider (/mnt/c/Users/danie/anything-llm/server/utils/helpers/index.js:42:14)
    at streamChatWithWorkspace (/mnt/c/Users/danie/anything-llm/server/utils/chats/stream.js:32:24)
    at /mnt/c/Users/danie/anything-llm/server/endpoints/chat.js:76:15
```

Why was it complaining about "No embedding engine"? I also changed the model I was loading in LMStudio from "orca 2 ..." to "mistral instruct ..." and the error when trying to use this changed to:

```
Error: No local Llama model was set.
    at new NativeLLM (/mnt/c/Users/danie/anything-llm/server/utils/AiProviders/native/index.js:15:13)
    at getLLMProvider (/mnt/c/Users/danie/anything-llm/server/utils/helpers/index.js:51:14)
    at streamChatWithWorkspace (/mnt/c/Users/danie/anything-llm/server/utils/chats/stream.js:32:24)
    at /mnt/c/Users/danie/anything-llm/server/endpoints/chat.js:76:15
```

I subsequently figured it out, so no need to help me with that. I'm just trying to make clear why I was confused.
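For illustration, the kind of guard that produces the first error above might look like the following; `getLLMProvider` appears in the stack trace, but this settings shape and logic are a hypothetical sketch, not the actual AnythingLLM source:

```javascript
// Hypothetical sketch: LMStudio only serves chat completions, so it
// must be paired with a separately configured embedding engine, and
// provider setup fails fast when none is set.
function getLLMProvider(settings) {
  if (settings.provider === "lmstudio" && !settings.embeddingEngine) {
    throw new Error(
      "INVALID LM STUDIO SETUP. No embedding engine has been set."
    );
  }
  return { provider: settings.provider, embedder: settings.embeddingEngine };
}
```

A guard like this explains the confusion: the error surfaces at chat time, far from the settings screen where the missing value was (apparently) filled in by a placeholder.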


@timothycarambat commented on GitHub (Jan 17, 2024):

@dlaliberte Ah, then I think we have a path forward on this. I can completely understand how/why that confusion and the resulting ambiguous error would occur. We will outline some tickets in the internal tasks board to address this issue and also prevent the UX that ultimately leads to the error because of the input ambiguity.

Great write-up and explanation by the way. Makes addressing issues like this much easier!


@dlaliberte commented on GitHub (Jan 22, 2024):

Great. BTW, I continue to be fooled by placeholders that look like real text.

Another example: I went to re-import the AnythingLLM GitHub repo, which I had done previously, because I wanted a fresh copy. (Auto-updating this and other document sources would be nice; I expect you are planning to support that eventually.) The text in the URL field looked like it was filled out as it was previously, but there was actually nothing there. The access token looked more obviously bogus, so I had to go create a new one, since it was not actually stored locally. Oh well, I digress.

When I went to copy the URL for real from GitHub, I pasted in the text, and it included the `.git` suffix, which is not the expected format. But I couldn't tell, since the suffix is not even visible in the truncated placeholder text.
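One low-friction way to handle the `.git` case would be to normalize the pasted URL rather than reject it; a minimal sketch, where `normalizeRepoUrl` is a hypothetical helper rather than anything in the codebase:

```javascript
// Hypothetical sketch: strip a trailing ".git" from a pasted GitHub
// repo URL so the common copy-paste format is accepted as-is.
function normalizeRepoUrl(url) {
  return url.endsWith(".git") ? url.slice(0, -".git".length) : url;
}
```

Normalizing on input avoids relying on the user noticing a format requirement that a truncated placeholder can't even display.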


@MarwanNakhaleh commented on GitHub (Mar 28, 2024):

Related to this, entering `http://localhost:11434` will not allow AnythingLLM to find local models, but `http://127.0.0.1:11434` works perfectly fine. Please allow "localhost" in addition to "127.0.0.1"?


@shatfield4 commented on GitHub (Mar 28, 2024):

> Related to this, entering `http://localhost:11434` will not allow AnythingLLM to find local models, but `http://127.0.0.1:11434` works perfectly fine. Please allow "localhost" in addition to "127.0.0.1"?

This is not an AnythingLLM issue; it is because Ollama does not listen on "localhost". That is just one of the quirks of running local models with Ollama. Please keep using "127.0.0.1", as this will just point to your local device.
