Essentially this is manual context management, and it’s still better for straightforward tasks that don’t require the AI to run commands (e.g. running unit tests).
I had the Gemini CLI trying to do a straightforward refactor today, but when I copy-pasted the relevant code into the Gemini web app, it came up with the solution instantly.
Yes, I've seen this multiple times personally. It's often better to copy/paste and give detailed prompts in the standalone apps than to use the coding agents in your codebase; the quality is higher.
The models don't know what portion of the entire context is relevant to your most recent query. The reason it works better is because in the standalone app, your query is the entire context, whereas otherwise it's query + x irrelevant tokens.
But if the result is truly better (as in the actual content it produces), then the copy/paste overhead is not the most important thing. I used Claude the other day by just copying and pasting, and that worked just fine.
It cannot be better because Cursor looks across files, whereas with grok you'd be giving it a single one. Grok won't have any context about the rest of your repo, which makes it only useful for toy examples.
What's stopping you from pasting more than a single file? I use the workflow Elon suggests (although I've never used it with Grok) predominantly; it's well over 30% of my use of LLMs. I have a small piece of python called "crawlxml" that filters + dumps files into <file> tags. And of course the LLM doesn't need your entire repo in its context to do its job.
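For anyone curious what that kind of helper looks like: the actual "crawlxml" implementation isn't shown anywhere in this thread, so this is only a guess at the general shape. It walks a source tree, keeps files matching a filter, and wraps each one in <file> tags so the whole bundle can be pasted into a chat prompt.

```python
# Hypothetical sketch of a "crawlxml"-style helper (the real tool's code
# isn't public here, so names and behavior are assumptions): walk a
# directory, keep files with matching extensions, and emit each one
# wrapped in a <file path="..."> tag for pasting into an LLM prompt.
from pathlib import Path

def crawl(root: str, extensions: tuple[str, ...] = (".py",)) -> str:
    """Collect matching files under `root`, wrapped in <file> tags."""
    chunks = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            text = path.read_text(encoding="utf-8", errors="replace")
            chunks.append(f'<file path="{path}">\n{text}\n</file>')
    return "\n".join(chunks)
```

You'd run something like `print(crawl("src", (".py", ".toml")))` and pipe the output to your clipboard. The point is that curating context yourself, rather than letting an agent grep around, is a few lines of script.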
I'm invested in the JetBrains ecosystem though. I tried Junie but it crashed so I'm putting that on pause for now. Maybe there is a Claude plugin that looks across files, not sure.
Any experiences from HNers using JetBrains IDEs like IntelliJ, PyCharm, WebStorm, CLion etc.?
Can you explain why? I like how I can select chunks of code for context and hit cmd-L (or K) to immediately trigger a change. And the tab autocomplete is amazing.
You just have to use Claude Code for a few days and it will be obvious. As far as I'm concerned, Cursor may as well go out of business, and I really loved it a few weeks ago.
Once you figure out the workflow, Claude Code is just insane.
You're ignoring the fact that Cursor does all sorts of context management (actually, reduction) and prompt engineering to try to get good results for cheaper. The fact that you're saying the only three explanations are
1. Musk didn't test Cursor
2. Yesmen
3. Lying
shows much more about your biases than anything related to Grok 4 usage.
The very first thing I said was that he was touting a feature already available in all other AIs. That was the whole point: Musk described something that is a feature of literally every other AI. Grok's features are independent of my parent comment. I only assumed his lack of knowledge was due to the usual suspects, all of which have real-life evidence of happening.
Prove Musk doesn't have a circle of yesmen, prove he tested Cursor (that's a hard one, given the context), and prove he doesn't have a long history of lying.
Shows much more about your eagerness to put someone down who's even a little critical of Musk.
My whole first comment is independent of his billionaire-scale social-media-driven tantrums, his election influence to give himself tax cuts, his ads for his cars from the White House lawn, and his nazi salutes. But you know, that stuff is just public knowledge, and due public criticism doesn't come out of thin air.
I don't understand what's so amazing in that screenshot demonstrating the detected errors in the vim plugin. Each item looks like it could be caught by some stricter linting rules.
> This is what everyone @xAI does. Works better than Cursor.
This makes no sense to me whatsoever.
https://xcancel.com/elonmusk/status/1943178423947661609