Hacker News

> You can cut & paste your entire source code file into the query entry box on grok.com and @Grok 4 will fix it for you!

> This is what everyone @xAI does. Works better than Cursor.

This makes no sense to me whatsoever.

https://xcancel.com/elonmusk/status/1943178423947661609



Essentially this is manual context management, and it’s still better for straightforward tasks that don’t require the AI to run commands (e.g. running unit tests).

I had Gemini cli running trying to do a straightforward refactor today, but when I copy-pasted the relevant code into the Gemini web app, it came up with the solution instantly.


Yes, I've seen this multiple times personally, it's often better to copy/paste and give detailed prompts in the standalone apps for higher quality than in the coding agents in your codebase.


The models don't know what portion of the entire context is relevant to your most recent query. The reason it works better is because in the standalone app, your query is the entire context, whereas otherwise it's query + x irrelevant tokens.


I've seen this too! Any idea why, or what's going on?


Cursor is in a different league because it writes to your filesystem and acts as an AI agent in front of other AIs.

Musk obviously didn't test Cursor, and either got this from his yesmen, or he's just lying unchecked as usual.


But if it's truly better (as in the content and the result being better), then copying and pasting is not the most important thing. I used Claude the other day by just copying and pasting and that worked just fine.


It cannot be better because Cursor looks across files, whereas with grok you'd be giving it a single one. Grok won't have any context about the rest of your repo, which makes it only useful for toy examples.


What's stopping you at pasting only a single file? I use the workflow Elon suggests (although I've never used it with Grok) predominantly; it's well over 30% of my use of LLMs. I have a small piece of python called "crawlxml" that filters + dumps into <file> tags. And of course the LLM doesn't need your entire codebase in its context to do its job.
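The commenter's "crawlxml" script isn't shown, so here is a minimal hypothetical sketch of that kind of helper: walk a directory, filter by extension, and wrap each file's contents in `<file>` tags ready to paste into an LLM prompt. The function name, tag attributes, and filtering logic are all assumptions, not the commenter's actual code.

```python
# Hypothetical sketch of a "crawlxml"-style helper (assumed design,
# not the commenter's actual script): gather matching files under a
# root directory and wrap each one in a <file path="..."> tag block.
from pathlib import Path


def dump_files(root: str, extensions: tuple[str, ...] = (".py",)) -> str:
    """Return matching files under `root` as concatenated <file> blocks."""
    chunks = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in extensions:
            text = path.read_text(encoding="utf-8", errors="replace")
            chunks.append(f'<file path="{path}">\n{text}\n</file>')
    return "\n\n".join(chunks)


if __name__ == "__main__":
    # Dump the current repo's Python and Markdown files to stdout,
    # e.g. `python crawlxml.py | pbcopy` to paste into a chat box.
    print(dump_files(".", (".py", ".md")))
```

The `<file path="...">` wrapper just gives the model an unambiguous boundary and filename for each file; any consistent delimiter would work.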


There's no way I'm going to go through my repo dependency tree and paste twenty files into grok one by one.


well, your loss then. clearly your work steps aren't big enough to benefit from a state-of-the-art LLM


My work steps are too big to sit around pasting my repo into a text box every time I have a task. This is why integrated IDEs are taking off.


I'm invested in the JetBrains ecosystem though. I tried Junie but it crashed so I'm putting that on pause for now. Maybe there is a Claude plugin that looks across files, not sure.

Any experiences from HN'ers using JetBrains IDE's like IntelliJ, PyCharm, WebStorm, CLion etc?


Update: Tried Claude using AI Assistant now in JetBrains and it works great


Claude Code is much better than Cursor + Sonnet in my opinion, even without the good IDE integration


Can you explain why? I like how I can select chunks of code for context and hit cmd-L (or K) to immediately trigger a change. And the tab autocomplete is amazing.


You just have to use Claude Code for a few days and it will be obvious. Cursor may as well go out of business to me and I really loved it a few weeks ago.

Once you figure out the work flow, Claude Code is just insane.


Its ability to understand tasks and execute them in a way that works, without having to make it try again over and over 10x.


You're ignoring the fact that Cursor does all sorts of context management (actually, reduction) and prompt engineering to try and get good results for cheaper. The fact that you're saying the only 3 explanations are

1. Musk didn't test Cursor

2. Yesmen

3. Lying

Shows much more about your biases than anything related to Grok 4 usage


The very first thing I said was that he was touting a feature already available in all other AIs. That was the whole point: Musk described something that is a feature of literally every other AI. Grok's features are independent of my parent comment. I only assumed his lack of knowledge came from the usual suspects, all of which have real-life evidence of happening.

Prove Musk doesn't have a circle of yesmen, prove he tested Cursor (that's a hard one, given the context), and prove he doesn't have a long history of lying.

Shows much more about your eagerness to put someone down who's even a little critical of Musk.

My whole first comment is independent of his billionaire-scale social media driven tantrums, election influence to give himself tax cuts and ads for his cars from the white house lawn, and nazi salutes. But you know, that stuff is just public knowledge and due public criticism doesn't just come out of thin air.


He speaks in movie terms, exactly what I'd say when watching a movie about programming.


is sending your whole codebase to xAI a good idea?


A later post clarifies there’s some issue with cursor integration that will get fixed.


I don't understand what's so amazing in that screenshot demonstrating the detected errors in the vim plugin. Each item looks like it could be caught by some stricter linting rules.



