Earlier today I found myself thinking about the opposite of CAPTCHA. Instead of proving something isn't a bot, how do you create a non-repudiable mechanism that proves something is a bot? We’ve mostly solved the "human verification" side, but this direction feels much harder.
I understand (and largely agree with) the intent behind this policy as written in the Jellyfin LLM guidance: it's trying to protect contributor and maintainer time by preventing low-effort, unverified, "looks plausible" LLM output from being dumped into issues, PRs, and support channels.
That said, I don’t think a blanket "never post LLM-written text" rule is the right boundary, because it conflates two very different behaviours:
1. Posting unreviewed LLM output as if it were real investigation or understanding (bad, and I agree this should be discouraged or prohibited), versus
2. A human doing the work, validating the result, and using an LLM as a tool to produce a clear, structured summary (good, and often beneficial).
Both humans and LLMs need context to understand a problem and move it forward. For bug investigation specifically, it is increasingly effective to use an LLM as part of the workflow: reasoning through logs, reproduction steps, and likely root causes, then producing a concise update that captures the outcome of the investigation.
I worked on an open source "AI friendly" project this morning and did exactly this.
I suspect the reporter filed the issue using an LLM, but I read it as a human and then worked with an LLM to investigate. The comment I posted is brief, technical, and adds useful context for the next person to continue the work. Most importantly, I stand behind it as accurate.
Is it really worth anyone’s time for me to rewrite that comment purely to make it sound more human?
So I do agree with Jellyfin's goal (no AI spam, no unverifiable content, no extra burden on maintainers). I just don’t think "LLM involvement" is the right line to draw. The right line is accountability and verification.
That has nothing to do with the UI framework. The X11 dependency comes in via the clipboard integration (which I'd argue should be optional or even removed). Still, I wouldn't call it modern if Wayland isn't supported at all.
I hesitated a bit before bringing in this feature. On the one hand, I really like having clipboard support; on the other hand, I don't like that it forces a switch from static to dynamic linking (and pulls in the X11 dependency).
Maybe I could write an install.sh script that detects the OS and fetches the correct version/tarball from the GitHub release; a rough sketch is below.
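Something like this, where the repo path and asset naming are just placeholders (the real goreleaser asset names may differ):

```sh
#!/usr/bin/env sh
# Hypothetical install.sh: detect OS/arch and download the matching release
# tarball. REPO and the asset naming scheme are placeholders, not the real ones.
set -eu

REPO="OWNER/whosthere"            # placeholder; substitute the actual repository
VERSION="${1:-latest}"            # optional tag argument, defaults to the latest release

OS=$(uname -s | tr '[:upper:]' '[:lower:]')   # linux, darwin, ...
ARCH=$(uname -m)
case "$ARCH" in
  x86_64)         ARCH=amd64 ;;
  aarch64|arm64)  ARCH=arm64 ;;
esac

ASSET="whosthere_${OS}_${ARCH}.tar.gz"
if [ "$VERSION" = "latest" ]; then
  URL="https://github.com/${REPO}/releases/latest/download/${ASSET}"
else
  URL="https://github.com/${REPO}/releases/download/${VERSION}/${ASSET}"
fi

# Download and unpack into the current directory; the user decides where it goes.
curl -fsSL "$URL" | tar -xz
echo "Extracted whosthere into $(pwd); move the binary somewhere on your PATH."
```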
Thanks a lot for your contribution, this is something I will look into in the coming days. I totally agree that CGO isn't ideal; I also had to make the build/release process a lot more complicated purely for that clipboard requirement (see the GHAs and the different goreleaser files).
On the other hand, I also don't want whosthere to depend on a fork that isn't maintained anymore. I will think about this trade-off, but I am also interested in how others see this problem.
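To make the trade-off concrete, here's a rough sketch; the noclipboard build tag is purely hypothetical and only illustrates how a pure-Go default build could sit alongside the cgo clipboard build:

```sh
# Sketch of the trade-off, assuming clipboard support were gated behind a
# hypothetical "noclipboard" build tag (the tag is illustrative, not real).

# Clipboard build: cgo pulls in the X11 client libraries, so the binary is
# dynamically linked and needs the X dev headers at compile time.
go build -o whosthere .
ldd ./whosthere            # lists libX11.so and friends

# Clipboard-free build: pure Go, CGO disabled, fully static binary.
CGO_ENABLED=0 go build -tags noclipboard -o whosthere .
```

Something along those lines would keep the default release static while still letting people who want clipboard support build it in.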
Yikes, so it's a "TUI" app... that still requires a display server? So I can't run this TUI over SSH or in a virtual terminal. What's the point of a TUI that still requires a GUI environment to run?
Sorry, I was unhelpfully flippant. You totally can, and I don't want to distract from the great app that has been shared. This bug was just a compile-time issue: the X libs are only needed to bake in clipboard support, which is optional at runtime.
I'm willing to bet they were the first user to try to add example.com to their Outlook account, and MS then just assigned it to them without verifying that they owned the domain.
API usage would cost me over $1000 per month for personal/hobby projects. I really like Anthropic models, but I don't want to pay per call, and I prefer opencode to CC.
I have no moral issues with using the client of my choice, despite it being against their TOS.