
An article mentioned that it will be an iOS chip


The M5 Pro/Max models are likely going to arrive in March (but maybe earlier)


Oh, the available M5s max out at 32GB of RAM, even in the MBP. That’s a nonstarter for me in a pro machine.


Java is maturing into a syntactically nice language, albeit slowly, and it's the backbone of many medium and large companies.

You might have trouble finding small companies using anything but JS/Ruby/Python. These companies align more with velocity and cost of engineering, and not so much with performance. That's probably why the volume of interpreted languages is greater than that of "enterprisey" or "performance" languages.


Aren't a lot of those Java employers stuck on an old version of the language, one that's lacking most of those nice features?


A lot are, but there are just as many on modern versions. It’s a big landscape of employers. I’ve already begun moving my team to Java 25 from 21.


Fewer and fewer, with the new release cycles.

What you get is either really old (Java 8 stuck on something nasty like WebLogic), or companies running either cutting edge or the latest LTS.


> Java is maturing into a syntactically nice language, albeit slowly, and it's the backbone of many medium and large companies.

I've heard about Java initiatives to improve it, but can you point to examples of how Java "is maturing into a syntactically nice language"?

I'm tempted to learn it, but wonder whether it would really become nice enough to become a 'go-to' language (over TS in my case)


I've always felt it was verbose, and the need for classes for everything was overkill in 90% of circumstances (we're even seeing a pushback against OOP these days).

Here are some actual improvements:

- Record classes

    public record Point(int x, int y) { }

- Record patterns

    record Person(String name, int age) { }

    if (obj instanceof Person(String name, int age)) {
        System.out.println(name + " is " + age);
    }

- No longer needing to import base Java types

- Automatic casting

    if (obj instanceof String s) {
        // use s directly
    }

Don't get me wrong, I still find some aspects of the language frustrating:

- all pointers are nullable, with annotation support to lessen the pain

- the use of builder class functions (instead of named parameters like in other languages)

- having to define a type for everything (probably the best part of TS is inlining type declarations!)

But these are minor gripes
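To illustrate the builder gripe: with no named or default parameters in Java, APIs typically reach for a builder to keep call sites readable. A minimal sketch (the `Config` type and all names here are made up for illustration):

```java
public class ConfigExample {
    // Hypothetical config type: a record for the data, a builder for
    // readable call sites, since Java lacks named/default parameters.
    record Config(String host, int port, boolean tls) { }

    static class ConfigBuilder {
        private String host = "localhost"; // defaults live in the builder
        private int port = 80;
        private boolean tls = false;

        ConfigBuilder host(String h) { this.host = h; return this; }
        ConfigBuilder port(int p) { this.port = p; return this; }
        ConfigBuilder tls(boolean t) { this.tls = t; return this; }
        Config build() { return new Config(host, port, tls); }
    }

    public static void main(String[] args) {
        // Reads almost like named parameters, at the cost of boilerplate.
        Config c = new ConfigBuilder().host("example.com").tls(true).build();
        System.out.println(c); // Config[host=example.com, port=80, tls=true]
    }
}
```

In Kotlin or Python the same call would just be `Config(host = "example.com", tls = true)`; the builder is pure ceremony to recover that readability.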


It has virtual threads, which under most circumstances let you get away from the async model. It has records: data-first immutable classes that can be defined in a single line, with sane equals, hashCode, and toString. It has sealed classes as well, the latter two giving you product and sum types, with proper pattern matching.
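The records + sealed classes + pattern matching combination looks like this in practice (Java 21+; the `Shape` hierarchy is an illustrative example, not from the comment above):

```java
public class Shapes {
    // Sealed interface + records: a sum type of two product types.
    sealed interface Shape permits Circle, Rect { }
    record Circle(double radius) implements Shape { }
    record Rect(double w, double h) implements Shape { }

    // Exhaustive switch with record patterns: no default branch needed,
    // because the compiler knows Circle and Rect are the only cases.
    static double area(Shape s) {
        return switch (s) {
            case Circle(double r) -> Math.PI * r * r;
            case Rect(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(2, 3)));   // 6.0
        System.out.println(area(new Circle(1)));    // ~3.14159
    }
}
```

Adding a third `Shape` variant turns every non-exhaustive switch into a compile error, which is exactly the sum-type behavior the comment describes.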

Also, a very wide-reaching standard library, good enough type system, and possibly the most advanced runtime with very good tooling.


There are some interesting talks and slide decks that I have to search around to find. Here is one: https://speakerdeck.com/bazlur_rahman/breaking-java-stereoty...

Check jbang.dev, and then talks by its author Max Rydahl Andersen. That could be a starting point.



This is not unique to the age of LLMs. PR reviews are often shallow because the reviewer is not giving the contribution the amount of attention and understanding it deserves.

With LLMs, the volume of code has only gotten larger but those same LLMs can help review the code being written. The current code review agents are surprisingly good at catching errors. Better than most reviewers.

We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.


> We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.

The real breakthrough would be finding a way to not even do things that don’t need to be done in the first place.

90% of what management thinks it wants gets discarded/completely upended a few days/weeks/months later anyway, so we should have AI agents that just say “nah, actually you won’t need that” to 90% of our requests.


> We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.

This seems silly to me. In most cases, the least amount of work you can possibly do is logically describe the process you want and the boundaries, and run that logic over the input data. In other words, coding.

The idea that we should, to avoid coding or reading code, come up with a whole new process to keep generated code on track - would almost certainly take more effort than just getting the logical incantations correct the first time.


One thing to take into account is that PR reviews aren't there just for catching errors in the code. They also ensure that the business logic is correct. For example, you can have code that passes all tests and looks good, but doesn't align with the business logic.


“With LLMs, the volume of code has only gotten larger”

It’s even worse with offshore devs. They produce a ton of code you have to review every morning.


I wonder if the paradigm shift is the adoption of a higher-level language, akin to what Python did by black-boxing C libraries.


I'm not a programmer but I always had the impression that different languages were appropriate for different tasks. My question is, "For what type of programming tasks is English the correct level of abstraction?"


Can you define what an "error" is?


Logic error, for instance


Well, it depends on the logic error doesn't it? And it depends on how the system is intended to behave. A method that does 2+2=5 is a logic error, but it could be a load-bearing method in the system that blows up when changed to be correct.

Something like blowing up the stack or going out of bounds is more obviously a bug, but detecting those will often require inferences from how the code behaves at runtime. LLMs might work for detecting the most basic cases because those appear most often in their data set, but whenever I see people suggest that they're good at reviewing, I think it comes from people who don't deeply review code.


Why have stair railing when the real problem are buildings with multiple floors?


I love Zed and I'm glad you now have native support for Claude. I previously ran it using the instructions in this post: https://benswift.me/blog/2025/07/23/running-claude-code-with...

One thing that still suffers is AI autocomplete. While I tried Zed's own solution and supermaven (now part of Cursor), I still find Cursor's AI autocomplete and predictions much more accurate (even pulling up a file via search is more accurate in Cursor).

I am glad to hear that Zed got a round of funding. https://zed.dev/blog/sequoia-backs-zed This will go a long way to creating real competition to Cursor in the form of a quality IDE not built on VSCode


I was somewhat surprised to find that Zed still doesn't have a way to add your own local autocomplete AI using something like Ollama. Something like Qwen 2.5 coder at a tiny 1.5b parameters will work just fine for the stuff that I want. It runs fast and works when I'm between internet connections too.

I'd also like to see a company like Zed allow me to buy a license of their autocomplete AI model to run locally rather than renting and running it on their servers.

I'd also pay for something in the 10-15b parameter range that used more limited training data focused almost entirely on programming documentation and books along with professional business writing. Something with the coding knowledge of Qwen Coder combined with the professionalism and predictability of IBM Granite 3. I'd pay quite a lot for such an agent (especially if it got updates every couple of months that worked in new documentation, bugfixes, github threads, etc to keep the answers up-to-date).


You don't have to buy a license; the autocomplete model is open source https://huggingface.co/zed-industries/zeta

It is indeed a fine-tuned Qwen2.5-Coder-7B.


> I'd also pay for something in the 10-15b parameter range that used more limited training data focused almost entirely on programming documentation and books along with professional business writing.

Unfortunately, pretraining on a lot of data (~everything they can get their hands on) is needed to give current LLMs their "intelligence" (for whatever definition of intelligence). Using less training data doesn't work as well for now. There is definitely not enough programming and business writing to train a good model on that alone.


If the LLM isn’t getting its data about coding projects from those projects and their surrounding documentation and tutorials, what is it going to train with?

Maybe it also needs some amount of other training data for basic speech patterns, but I’d again show IBM Granite as an example that professional and to-the-point LLMs are possible.


There's an active PR providing inline edit completions via Ollama: https://github.com/zed-industries/zed/pull/33616


You can use a local model! It's in Settings in a Thread and you can select Ollama.


But that doesn't work for inline edit predictions, right?


Ditto, that was one of the dealbreakers for me using Zed, the Copilot integration is miles behind Cursor's


> Ollama

You mean a locally run OpenAI-API-compatible server?


That's why I created nanocoder, a 0.5B fine-tune for autocomplete, in a couple of days. I'm going to release a much better v2 version.

https://huggingface.co/srisree/nano_coder


I'll third this. AI autocomplete is THE most efficient and helpful feature of Cursor, not the agents.


I use Cursor solely for the agent mode and do all my editing in a proper IDE, meaning JetBrains products.

I genuinely don't understand why one would want AI autocomplete. Deterministic autocomplete is amazing, but AI autocomplete completely breaks my flow. Even just the few seconds of lag absolutely drives me nuts, and then it is often close to what I wanted but not exactly what I wanted. Either I am in control or the generative AI is, but mixing both feels so wrong.

I am happy people find use for the autocomplete but ugh I really don't get how they can stomach it. Maybe it is for people that are not good at typing or something.


Same sentiment for me. I barely use the agent, but love their autocomplete. Though I sometimes hear people say that GH Copilot has largely caught up on this front. Can anyone speak to that? I haven’t compared them recently.

If performance were equal, I’d strongly consider going back to GH Copilot just because I don’t love my main IDE being a fork. I occasionally encounter IDE-level bugs in Cursor that are unrelated to the AI features. Perhaps they’re in the upstream as well, but I always wonder if a. there will be a delay in merging fixes or b. whether the fork is introducing new bugs. Just an inherent tradeoff I guess of forking a complex codebase.


They haven’t. They had time to catch up, but they didn’t. They recently switched their autocomplete model from 4o-mini to 4.1-mini. It’s not smarter at predicting what you are trying to do. Nothing magical like last year’s experience on Cursor (I haven’t tested lately, so it might be even better now).

I heard Windsurf is quite good and the closest to Cursor magic, available on Windsurf free plan (unlimited autocomplete). I should give that a try.


They plan to allow defining your own endpoint for autocomplete soon, and when they do, switching to a better model like Sonnet or a fine-tune should beat Cursor.


I don't know, I think it's a tie. I can have the agent do some busy work or refactoring while I'm writing code with the autocomplete. I can tell it how I want a file split up or how I want stuff changed, and tell it that I'll be making other changes and where. It's smart enough to ignore me and my work while it keeps itself busy with another task. Sort of the best of both worlds. Right now I have it replacing DraftJS with another library while I'm working on some feature requests.


I feel like this is the big divide, some people have no use for agents and swear by autocomplete. Others find the autocomplete a little annoying/not that useful and swear by agents.

For me my aha moment came with Claude Code and Sonnet 4. Before that AI coding was more of a novelty than actually useful.


I have recently been using Zed much more than Cursor. However, the autocomplete is literally the only thing missing, and when dealing with refactors or code with tons of boilerplate, it's just unbeatable. Eagerly awaiting a better autocomplete model so I can finally ditch Cursor.


Out of curiosity, why not just stick to Cursor instead?


For me, the editor is still the most important component of my tooling. The AI features are secondary to my needs/wants when it comes to an editor.

Zed is hitting all the checkboxes when it comes to performance and user experience (yeah, I care about that in my editor).

I'm not a hardcore user of AI, but I do make use of Zed's inline suggestions and occasional use of Opus 4.1 through my Zed subscription.


This is it, in terms of pure text editing zed is the best GUI land editor I've used.

Not quite there with emacs/vim but it's a much more accessible environment and more convenient for typical workloads.


I agree. I used to use vscode, then switched to Zed and used it for over a year (without AI). In February of this year, I started using Cursor to try out the AI features and I realised I really hated vscode now. Once Zed shipped agent mode, I switched back, and haven’t looked back. I very strongly never want to use vscode again.


I'm in the same boat but a neovim/cursor user. I desperately wish there was a package I could use in nvim that matched the multiline, file-aware autocomplete feature of Cursor. Of course I've tried supermaven, copilot etc, but I've only ever gotten those to work as in-line completions. They can do multiline but only from where my cursor is. What I love about Cursor is that I can spam tab and make a quick change across a whole file. Plus its suggestions are far faster and far better than the alternative.

That said, vscode's UX sucks ass to me. I believe it's the best UX for people that want a "good enough and just works" editor, but I'm an emacs/vim (yes both) guy and I don't like taking my hands off the keyboard ever. Vscode just doesn't have a good keyboard only workflow with vim bindings like emacs and nvim do.


What Zed lacks in code generation quality it makes up for in not-being-an-Electron-app


Every single new HN thread should come with an automod post badmouthing Electron to save everyone time.


It's a bad anti-pattern that trades performance, UX, etc. for developer convenience. It's fair to hate on it.

With the advent of coding agents, I really hope we see devs move away - back to the traditional approach of using native frameworks/languages as now, you can write for 1 platform and easily task AI to handle other platforms.


This will never happen and it's a bizarre, legacy fantasy, borne of a fixed imaginary ideal of what computing should be. Programming will continue to move in the direction of ease-of-use and every time I see an out-of-topic reference to Electron in this forum I feel insane, like I'm fighting upstream. You will not see this - you will see more Electron apps, because that is the modern way of building cross-platform apps, and if you genuinely don't understand why that is I don't know how to explain it to you. You won't see another version of those because nobody is going to waste their time building cross-platform native apps at a native layer to performatively impress posters on HN. You, and seemingly everybody else on HN, can continue to pretend that devex doesn't matter, but that's the difference I guess between caring about devex and shipping products.


It's not off topic. We're discussing the Zed editor. Their whole marketing ploy is "we are not electron", "we are rust", "we are native", "we are not slow" alternative to VScode.

This is literally their whole distinguishing feature and people are switching because of it and just it.


It is! If the thing runs like shit, say it runs like shit. Say it's native or not, like every topic title and comment on HN until we weep of boredom. I know it's rust! everything is rust here! Is there any other reason I should care? Are we a forum for discussing interesting technology or are we a forum for discussing alternatives to VSCode? And again, who is switching? People shipping products or HN posters with their dumb metrics?


People who install zed are switching. I don't understand what you're trying to "get" at. You're complaining about people talking about Zed in a topic about Zed.

Zed seems to have been hugely successful recently, and its only real distinguishing feature is "fast from the ground up". It has fewer features than VSCode and worse AI features than Cursor, but people seem to love it nonetheless.

Turns out there is a market for people fed up with VScode-derivatives.


Cross platform application development is cool but the guys who made Zed are the guys who made Atom are the guys who made Electron, and they pointed out that long term the devex sucks and that Electron simply isn't a good platform for native applications that need any kind of memory control or similar features: https://zed.dev/blog/we-have-to-start-over

> My experience in Atom always felt like bending over backwards to try to achieve something that in principle should have been simple. Lay out some lines and read the position of the cursor at this spot in between these two characters. That seems fundamentally doable and yet it always felt like the tools were not at our disposal. They were very far away from what we wanted to do.

> Nathan: It was a nightmare. I mean, the ironic thing is that we created Electron to create Atom, but I can't imagine a worse application for Electron than a code editor, I don't know. For something simpler, it's probably fine, the memory footprint sucks, but it's fine. But for a code editor you just don't have the level of control I think you need to do these things in a straightforward way at the very least. It's always some... backflip.


Thinking Javascript was a language meant for desktop applications is what is insane - even more so than the convenience of using it, which is comparatively less insane.


Absolutely nothing new has been said about Electron since like 2015, it's boring as hell to downvote and scroll past.


Fucking Amen.


I find Zed has some really frustrating UX choices. I’ll run an operation and it will either fail quietly, or be running in the background for a while with no indication that it is doing so.


and then loses by not having plugin support


It does have extensions, but they are much more limited. In particular they can't define UI elements inside buffers, so you can't replicate something with rich UI like the Git integration in an extension.


Does it really? At the end of the day I need it to do my job. Ideal values don't help me do my job. So I choose the editor best suited to the features I need. And that's not Zed at the moment.


There's an analogue here with programming language iteration— Python, Ruby and friends showed what the semantics were that were needed, and then a decade or two later, Go and Rust took those semantics and put them in compiled, performance-oriented languages.

Electron has been a powerful tool for quickly iterating UIs and plugin architectures in VSCode, Brackets, Atom, etc.; now the window is open for a modern editor to deliver that experience without the massive memory footprint and UI stalls.


I agree with the main point but I am on battery often and the difference between native vs. one or multiple Electron apps in "doing my job" is easily several hours lost to battery life or interruptions for charging. Not a huge deal, but it's not my ideals that make me frown at charge cycles occurring twice as often.


This is simply not true… that’s the problem. As much as I like Zed, using it for the sake of not being an electron app doesn’t make any sense when Cursor’s edit prediction adds so much value. I’m not starved of resources and can run Cursor just fine – as far as Electron apps go VS Code is great, performant enough. I value productivity. I’ll very happily drop Cursor for Zed the second edit prediction is comparable. I’m eagerly waiting.


Zed includes node.js runtime and 100s of megabytes of javascript. It is essentially Electron.


I'm gonna need you to back that claim up dawg, because the only thing I see the node runtime used for, is for when the user loads a js project.

then it's basically just a proxy for node/npm afaik.



~/.local/share/zed/


No one's debating your first sentence, they're debating your second.


I wonder if Augment [1] are working on a Zed plugin.

I've been using Augment for more than a year in Jetbrains IDEs, and been very impressed by it, both the autocomplete and the Cursor-style agent. I've looked at Cursor and couldn't figure out why anyone needed to use a dedicated IDE when Augment exists as a plugin. Colleagues who have used Cursor have switched to Augment and say it's better.

Seems to me like Augment is an AI tool flying under most people's radar; not sure why it's not all over Hacker News.

[1] https://www.augmentcode.com/


I now plain hate Cursor's autocomplete; it's so aggressive I cannot write any code anymore. It seems to have hijacked CMD too, not just Tab.


This is why I’m not a fan of auto complete in my editor. Much rather pair program with an agent.

Give the agent as much context as possible and let it go, review and correct the implementation, let it go again, finish it off…

I just find the autocomplete a little annoying in my workflow, especially with the local self-hosted models I need to use at work.

Claude Code on corporate approved AWS Bedrock account.


I liked autocomplete when it was a bit slower and only acted on Tab.

Right now it's borderline impossible to write code, the autocompletion results are loaded ultra fast and Cursor maps different buttons to autocompleting functionality.

It's no longer usable for me.

I'm fine getting autocompletes, but I want to decide when to trigger them, ideally after reading them. Like this, I can't even type.


I'd also like to second this and probably will in every Zed post. This is the primary reason I'm not ready to switch to Zed just yet.


>One thing that still suffers is AI autocomplete. While I tried Zed's own solution and supermaven (now part of Cursor), I still find Cursor's AI autocomplete and predictions much more accurate (even pulling up a file via search is more accurate in Cursor).

It's not only the autocomplete. I've never had any issue with Cursor while Zed panicked, crashed and behaved inconsistently often (the login indicator would flicker between states while you were logged in and vice versa, clicking some menus would crash it and similar annoyances). Another strange thing I've observed is the reminder in the UI that rating an AI prompt would send your _entire chat history_ to Zed, which might be a major red flag for many people. One could accidentally rate it without being aware of that and then Zed has access to large and potentially sensitive parts of your company's code - I can't imagine any company being happy with that.

>I am glad to hear that Zed got a round of funding. https://zed.dev/blog/sequoia-backs-zed

There are plenty of great VCs out there; going with Sequoia will definitely come with some unpleasant consequences later.

>This will go a long way to creating real competition to Cursor in the form of a quality IDE not built on VSCode

There are many "real competitors" to Cursor, like Windsurf, (Neo-)Vim, Helix, Emacs, Jetbrains. It's also worth being aware that not everybody is too excited about letting AI slop be the dominant part of their work. Some people prefer sprinkling a little AI here and there, instead of letting it do pretty much everything.


> I've never had any issue with Cursor

Glad it's working for you but I think you might be the only one!


Glad it was helpful :)

I’ll keep an eye on this ‘proper’ Zed support for sure, although the current setup is working just fine so I might wait for v0.2.


"even pulling up a file via search is more accurate in Cursor"

Huh? It sometimes takes 40s to find some file with the fuzzy search for me. In that time I'm going to the terminal and running a "find" command with lots of * before I get a result in Cursor.


I consider it a blessing to have a choice in how I go, rather than leaving it up to nature and the medical system.


Pay $180 for our new, slightly better, but still not accurate service.


For starters, never commit to a timeline without doing your due diligence. We're not selling carpets. Anyone who gives a time estimate on the spot is setting themselves up for failure.

Second, always pad your estimates. If you have been in the industry longer than 6 months, you'll already know how "off" your estimates can be. Take the actual delivery time, divide it by the estimated time, and that's your multiplier.


The reason people are holding out is that the current generation of models is still pretty poor in many areas. You can have one craft an email, or review your email, but I wouldn't trust an LLM with anything mission-critical. The accuracy of the generated output is too low to be trusted in most practical applications.


Any email you trust an LLM to write is one you probably don't need to send.


Glib, but the reality is that there are lots of cases where you can use an AI in writing without entrusting it with the whole job blindly.

I mostly use AIs in writing as a glorified grammar checker that sometimes suggests alternate phrasing. I do the initial writing and send it to an AI for review. If I like the suggestions I may incorporate some. Others I ignore.

The only time I use it to write is when I have something like a status report and I'm having a hard time phrasing things. Then I may write a series of bullet points and send that through an AI to flesh it out. Again, that is just the first stage, and I take that and do editing to get what I want.

It’s just a tool, not a creator.


>> have something like a status report and I’m having a hard time phrasing things

I believe the above suggested that this type of email likely doesn't need to be sent. Is anyone really reading the status report? If they read it, what concrete decisions do they make based on it? We all get in this trap of doing what people ask of us, but it often isn't what shareholders and customers really care about.


Considering that I do get questions and comments about the projects: yes, people are reading them.


Google (even now) wasn't absolutely accurate either. That didn't stop it from becoming many billions worth.

> You can have it craft an email, or to review your email, but I wouldn't trust an LLM with anything mission-critical

My point is that an entire world lies between these two extremes.


Google became a billion-dollar company by creating the best search and indexing service at the time and putting ads around the results (that and YouTube). They didn't own the answer to the question.


I would say that anything you write can come back to you in the future, so don’t blindly sign your name on anything you didn’t review yourself.


Why don't you give actual concrete testable examples back with evidence where this is the case? Put your skin in the game.


A support ticket is a good middle ground. This is probably the area of most robust enterprise deployment. Synthesizing knowledge to produce a draft reply with some logic either to automatically send it or have human review. There are both shitty and ok systems that save real money with case deflection and even improved satisfaction rates. Partly this works because human responses can also suck, so you are raising a low bar. But it is a real use case with real money and reputation on the line.


Keyword is "draft". You still need a person to review the response with knowledge of the context of the issue. It's the same as my email example.

