mirror of
https://github.com/Mintplex-Labs/anything-llm.git
synced 2026-03-02 22:57:05 -05:00
[FEAT]: Allow users to add a transcription of a meeting in the Meeting Assistant instead of being forced to use an audio source #3128
Originally created by @GabrieleGrezzana on GitHub (Jan 29, 2026).
What would you like to see?
This could be handy for those situations where meetings are in languages not yet supported by the Transcription Model and the user is forced to process speech-to-text outside AnythingLLM.
The current workaround is to create a new chat, paste the transcription, and have the LLM create minutes/summaries and interact with the text.
Proposing this feature since, as of v1.10.0, we now have this amazing Meeting Assistant feature.
@timothycarambat commented on GitHub (Jan 29, 2026):
Do you happen to have an example of how these text transcriptions are formatted? Is it just one big chunk of text, or is it JSON? Out of curiosity, where are you getting the transcripts from, and ultimately what would you like done with them?
I presume skipping the transcript, but doing the summary, agent items, and such?
@GabrieleGrezzana commented on GitHub (Jan 30, 2026):
It is pure text with punctuation, coming from Vibe, which supports Chinese and Japanese. There is also the option to have speaker identification, just like in AnythingLLM via Parakeet 0.6B v3.
With the current version of AnythingLLM, a summary can be produced only if the Meeting Assistant successfully transcribes the audio. Letting users provide a transcription rather than audio only would allow them to create summaries, use agents, and chat.
Can I summarize the meeting directly via a regular chat using the transcript I have? Sure
Would it be nice to have it through the Meeting Agent? Yes ;)
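For reference, the regular-chat workaround discussed above can also be scripted against AnythingLLM's developer API. This is a hedged sketch, not an official recipe: the `/api/v1/workspace/<slug>/chat` endpoint and the `{"message", "mode"}` payload shape are assumptions based on the developer API docs shipped with AnythingLLM, so verify them against your own instance's `/api/docs` page before relying on this.

```python
import json


def build_chat_payload(transcript: str,
                       instruction: str = "Summarize this meeting and list action items.") -> dict:
    """Build the JSON body for a workspace chat request carrying a pasted
    transcript. The {"message", "mode"} shape is an assumption — check your
    instance's /api/docs for the exact schema."""
    return {
        "message": f"{instruction}\n\n---\n{transcript}",
        "mode": "chat",
    }


# Example transcript in the plain-text, speaker-labeled style described above
# (the speaker names and content here are invented for illustration).
transcript = (
    "Speaker 1: Let's review the release plan.\n"
    "Speaker 2: Agreed, the transcription feature is the priority."
)
payload = build_chat_payload(transcript)
print(json.dumps(payload, indent=2))

# Usage (hypothetical): POST this payload to
#   http://localhost:3001/api/v1/workspace/<your-workspace-slug>/chat
# with an "Authorization: Bearer <api-key>" header, e.g. via requests.post().
```

The point is simply that the transcript travels as part of the chat message, which is exactly what the Meeting Assistant could accept directly if this feature lands.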