I’ve used this library on a couple of projects with great results: one a drag-and-drop IaC builder, the other a GitHub Actions-like task execution graph viewer.
There’s a pattern to emoji use in docs, especially when combined with one or more other common LLM-generated documentation patterns, that makes it plainly obvious that you’re about to read slop.
Even when I create the first draft of a project’s README with an LLM, part of the final pass is removing those slop-associated patterns to clarify to the reader that they’re not reading unfiltered LLM output.
For larger tasks that I know are parallelizable, I just tell Claude to figure out which steps can be parallelized and then have it go nuts with sub-agents. I’ve had pretty good success with that.
I need to try this because I've never deliberately told it to, but I've had it do it on its own before. Now I'm wondering if that project had instructions somewhere about that, which could explain why it happened.
It sometimes does it on its own, but to get it to do so consistently, it needs to be told. Doubly so if you want it to split off more than one sub-agent.
This works great for refactors that touch a large number of files. A refactor that might otherwise take 30 minutes, a persistent checklist, and possibly multiple conversations can be one-shot in two minutes with a single prompt.
More like they can better react to user input within their context window. With older models, the value of that additional user input would have been much more limited.
Is this a parody of the Dropbox comment or is this sincere? I don’t think iPads have built-in SSH… and even if they do, this is a far cry from an app. It assumes you have a Linux machine on your local network and are willing and able to set up SSH to connect to it, as well as learn command-line tooling for making calculations.
When this was first posted a couple of weeks ago by the spec's author, I took it as an opportunity to see how quickly I could spin up an IntelliJ language plugin since the last time I worked on a language plugin was pre-GPT (Klotho Annotations - basically TOML inside of @annotations inside comments or string literals in a variety of host languages). Back then, it took a week for me to figure out the ins and outs of basic syntax highlighting with GrammarKit.
This time around, I worked with Claude Code and we basically filled in each other's knowledge gaps to finish implementing every feature I was looking for in about 3 days of work:
Day 1:
- Plugin initialization
- Syntax highlighting
- JSON Schema integration
- Error inspections
Day 2:
- Code formatter (the code style settings page probably took longer to get right than the formatter)
- Test suite for existing features
Day 3:
- Intentions, QuickFix actions, etc. to help quickly reformat or fix issues detected in the file
- More graceful parsing error recovery and reporting
- Contextual completions (e.g., relevant keys/values from a JSON schema, existing keys from elsewhere in the file, etc.)
- Color picker gutter icon from string values that represent colors (in various formats)
I'm sure there are a few other features that I'm forgetting, but at the end of the day, roughly 80-85% of the code was generated from the command line by conversing with Claude Code (Sonnet 4.5) to plan, implement, test, and revise individual features.
For IntelliJ plugins, the SDK docs tend to cover the bare minimum to get common functionality working, and beyond that, the way to learn is by reading the source of existing OSS plugins. Claude was shockingly good at finding extension points for features I'd never implemented before and figuring out how to wire them up (though not always 100% successfully). It turns out that Claude can be quite an accelerator for building plugins for the JetBrains ecosystem.
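For context, "wiring up" a feature in an IntelliJ plugin mostly means registering an implementation class against a platform extension point in `plugin.xml`. A minimal sketch (the plugin ID, language name, and class names here are illustrative, not from the actual plugin):

```xml
<!-- Minimal plugin.xml sketch; IDs, language name, and classes are hypothetical examples -->
<idea-plugin>
  <id>com.example.myformat</id>
  <name>MyFormat Support</name>
  <depends>com.intellij.modules.platform</depends>

  <extensions defaultExtensionNs="com.intellij">
    <!-- Each feature (highlighting, completion, formatting, ...) is its own extension point -->
    <lang.syntaxHighlighterFactory language="MyFormat"
        implementationClass="com.example.myformat.MyFormatSyntaxHighlighterFactory"/>
    <completion.contributor language="MyFormat"
        implementationClass="com.example.myformat.MyFormatCompletionContributor"/>
  </extensions>
</idea-plugin>
```

The hard part is rarely the XML itself but discovering which of the hundreds of extension points maps to the feature you want, which is exactly where reading existing OSS plugins (or asking Claude) helps.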
Bottom line, if you're sitting on an idea for a plugin because you thought it might take too long to bootstrap and figure out all the IDE integration parts, there's never been a better time to just go for it.
Loading that gist works for me on both Firefox and Chrome.
You can submit a bug report on GitHub with more environment details, screenshots, and console logs (if available) and I might be able to take a closer look.
My favorite is when you ask Claude to implement two requirements and it implements the first, gets confused by the second, removes the implementation for the first to “focus” on the second, and then finishes by having implemented nothing.
It's a pretty big lift. Python was somewhat easy with pyodide, but I couldn't get Java to work locally. There's a company called CheerpJ that can do it over an API though.
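For anyone curious what the Pyodide route looks like, the browser side is a small amount of glue: load the runtime and hand it Python source (the CDN version pinned here is illustrative; check the Pyodide docs for the current release):

```html
<!-- Minimal sketch of running Python in the browser via Pyodide; version is an example -->
<script src="https://cdn.jsdelivr.net/pyodide/v0.26.1/full/pyodide.js"></script>
<script>
  async function main() {
    const pyodide = await loadPyodide();        // downloads and boots the CPython wasm runtime
    const result = pyodide.runPython("1 + 2");  // executes Python, returns the result to JS
    console.log(result);
  }
  main();
</script>
```

Java has no equivalent that's this turnkey, which is presumably why a hosted service ends up being the practical path there.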
I think if I were going to look into code execution in Tachi Code, it’d probably be as part of a transformation into some sort of remote development experience rather than pursuing wasm and all its complexities.
Charging per minute for self-hosted runners seems absolutely bananas!