Java is maturing into a syntactically nice language, albeit slowly, and it's the backbone of many medium and large companies.
You might have trouble finding small companies using anything but JS/Ruby/Python. Those companies optimize for velocity and engineering cost, not so much for performance. That's probably why the volume of interpreted languages is greater than that of "enterprisey" or "performance" languages.
A lot are, but an equal number are on modern versions. It's a big landscape of employers. I've already begun moving my team from Java 21 to 25.
I've always felt it was verbose, and needing classes for everything was overkill in 90% of circumstances (we're even seeing pushback against OOP these days).
Here are some actual improvements:
- Record classes
public record Point(int x, int y) { }
- Record patterns
record Person(String name, int age) { }
if (obj instanceof Person(String name, int age)) {
    System.out.println(name + " is " + age);
}
- No longer needing to import base Java types (see the module-import sketch after this list)
- Automatic casting
if (obj instanceof String s) {
    // use s directly
}
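On the imports point: presumably this refers to module import declarations, finalized in Java 25 (JEP 511). A minimal sketch, assuming JDK 25 (ImportsDemo is a made-up name):

import module java.base;  // one declaration instead of per-type imports

class ImportsDemo {
    public static void main(String[] args) {
        // List and Map come from java.base; no java.util imports needed.
        List<String> names = List.of("Ada", "Linus");
        Map<String, Integer> ages = Map.of("Ada", 36);
        System.out.println(names + " " + ages);
    }
}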
Don't get me wrong, I still find some aspects of the language frustrating:
- all references are nullable, with only annotation-based support to lessen the pain
- the use of builder classes instead of named parameters like in other languages (see the sketch after this list)
- having to define a type for everything (probably the best part of TS is inlining type declarations!)
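To illustrate the builder complaint, a minimal sketch of the boilerplate involved (HttpClientConfig and its fields are hypothetical; in Kotlin or Python these would just be two named parameters with defaults):

// The builder exists purely to emulate named, optional parameters,
// which Java lacks.
public final class HttpClientConfig {
    private final int timeoutMillis;
    private final int maxRetries;

    private HttpClientConfig(Builder b) {
        this.timeoutMillis = b.timeoutMillis;
        this.maxRetries = b.maxRetries;
    }

    public static Builder builder() { return new Builder(); }

    public static final class Builder {
        private int timeoutMillis = 5_000;  // defaults
        private int maxRetries = 3;

        public Builder timeoutMillis(int v) { timeoutMillis = v; return this; }
        public Builder maxRetries(int v) { maxRetries = v; return this; }
        public HttpClientConfig build() { return new HttpClientConfig(this); }
    }
}

// Usage: HttpClientConfig.builder().timeoutMillis(10_000).build()
// versus, say, Kotlin's HttpClientConfig(timeoutMillis = 10_000).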
It has virtual threads, which under most circumstances let you get away from the async model. It has records: data-first immutable classes that can be defined in a single line, with sane equals, hashCode, and toString. It has sealed classes as well; the latter two give you product and sum types, with proper pattern matching.
Also: a very wide-reaching standard library, a good-enough type system, and possibly the most advanced runtime, with very good tooling.
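A minimal sketch of the records/sealed/virtual-threads claims (Shape, Circle, Rectangle, and ShapesDemo are made-up names; needs Java 21+):

import java.util.concurrent.Executors;

// Sum type: a Shape is exactly one of a closed set of variants.
sealed interface Shape permits Circle, Rectangle { }
record Circle(double radius) implements Shape { }  // records as product types
record Rectangle(double width, double height) implements Shape { }

class ShapesDemo {
    static double area(Shape s) {
        // Exhaustive switch with record patterns: the compiler rejects a
        // missing variant, so no default branch is needed.
        return switch (s) {
            case Circle(double r) -> Math.PI * r * r;
            case Rectangle(double w, double h) -> w * h;
        };
    }

    public static void main(String[] args) {
        // Virtual threads: plain blocking style instead of the async model.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            executor.submit(() -> System.out.println(area(new Circle(1.0))));
        }
    }
}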
This is not unique to the age of LLMs. PR reviews are often shallow because the reviewer is not giving the contribution the amount of attention and understanding it deserves.
With LLMs, the volume of code has only gotten larger but those same LLMs can help review the code being written. The current code review agents are surprisingly good at catching errors. Better than most reviewers.
We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.
> We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.
The real breakthrough would be finding a way to not even do things that don’t need to be done in the first place.
90% of what management thinks it wants gets discarded/completely upended a few days/weeks/months later anyway, so we should have AI agents that just say “nah, actually you won’t need that” to 90% of our requests.
> We'll soon get to a point where it's no longer necessary to review code, either by the LLM prompter or by a second reviewer (the volume of generated code will be too great). Instead, we'll need to create new tools and guardrails to ensure that whatever is written is done in a sustainable way.
This seems silly to me. In most cases, the least amount of work you can possibly do is to logically describe the process you want and its boundaries, and run that logic over the input data. In other words, coding.
The idea that, to avoid coding or reading code, we should come up with a whole new process to keep generated code on track would almost certainly take more effort than just getting the logical incantations correct the first time.
One thing to take into account is that PR reviews aren't there just to catch errors in the code. They also ensure that the business logic is correct. For example, you can have code that passes all tests and looks good, but doesn't align with the business logic.
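A contrived sketch of that gap (Pricing and the discount rule are made up): the code compiles and its test stays green, yet the business may have wanted something else entirely.

class Pricing {
    // Applies a flat 10% discount to every order (amounts in cents).
    static long discountedTotal(long cents) {
        return cents * 90 / 100;
    }
}

// A test like this passes, so a purely code-level review finds nothing wrong,
// even if the intended rule was "10% off only for orders over $100":
// assert Pricing.discountedTotal(5_000) == 4_500;  // spec actually wanted 5_000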
I'm not a programmer but I always had the impression that different languages were appropriate for different tasks. My question is, "For what type of programming tasks is English the correct level of abstraction?"
Well, it depends on the logic error, doesn't it? And it depends on how the system is intended to behave. A method that computes 2+2=5 is a logic error, but it could be a load-bearing method in the system that blows up when changed to be correct.
Something like blowing up the stack or going out of bounds is more obviously a bug, but detecting those will often require inferring how the code behaves at runtime. LLMs might work for detecting the most basic cases, because those appear most often in their training data, but whenever I see people suggest they're good at reviewing, I suspect it comes from people who don't deeply review code.
One thing that still suffers is AI autocomplete. While I've tried Zed's own solution and Supermaven (now part of Cursor), I still find Cursor's AI autocomplete and predictions much more accurate (even pulling up a file via search is more accurate in Cursor).
I am glad to hear that Zed got a round of funding (https://zed.dev/blog/sequoia-backs-zed). This will go a long way toward creating real competition for Cursor in the form of a quality IDE not built on VSCode.
I was somewhat surprised to find that Zed still doesn't have a way to add your own local autocomplete AI using something like Ollama. Something like Qwen 2.5 coder at a tiny 1.5b parameters will work just fine for the stuff that I want. It runs fast and works when I'm between internet connections too.
I'd also like to see a company like Zed allow me to buy a license of their autocomplete AI model to run locally rather than renting and running it on their servers.
I'd also pay for something in the 10-15b parameter range that used more limited training data focused almost entirely on programming documentation and books along with professional business writing. Something with the coding knowledge of Qwen Coder combined with the professionalism and predictability of IBM Granite 3. I'd pay quite a lot for such an agent (especially if it got updates every couple of months that worked in new documentation, bugfixes, github threads, etc to keep the answers up-to-date).
> I'd also pay for something in the 10-15b parameter range that used more limited training data focused almost entirely on programming documentation and books along with professional business writing.
Unfortunately, pretraining on a lot of data (~everything they can get their hands on) is what gives current LLMs their "intelligence" (for whatever definition of intelligence). Using less training data doesn't work as well, for now. There's definitely not enough programming and business writing to train a good model on that alone.
If the LLM isn’t getting its data about coding projects from those projects and their surrounding documentation and tutorials, what is it going to train with?
Maybe it also needs some amount of other training data for basic speech patterns, but I'd again point to IBM Granite as an example that professional, to-the-point LLMs are possible.
I use Cursor solely for the agent mode and do all my editing in a proper IDE, meaning JetBrains products.
I genuinely don't understand why one would want AI autocomplete. Deterministic autocomplete is amazing, but AI autocomplete completely breaks my flow. Even just the few seconds of lag drives me nuts, and then it's often close to what I wanted but not exactly what I wanted. Either I am in control or the generative AI is; mixing both feels so wrong.
I am happy people find use for the autocomplete but ugh I really don't get how they can stomach it. Maybe it is for people that are not good at typing or something.
Same sentiment for me. I barely use the agent, but love their autocomplete. Though I sometimes hear people say that GH Copilot has largely caught up on this front. Can anyone speak to that? I haven’t compared them recently.
If performance were equal, I’d strongly consider going back to GH Copilot just because I don’t love my main IDE being a fork. I occasionally encounter IDE-level bugs in Cursor that are unrelated to the AI features. Perhaps they’re in the upstream as well, but I always wonder if a. there will be a delay in merging fixes or b. whether the fork is introducing new bugs. Just an inherent tradeoff I guess of forking a complex codebase.
They haven't. They had time to catch up, but they didn't. They recently switched their autocomplete model from 4o-mini to 4.1-mini. It's not smarter at predicting what you're trying to do. Nothing magical like last year's experience on Cursor (I haven't tested lately, so it might be even better now).
I heard Windsurf is quite good and the closest to Cursor's magic, with unlimited autocomplete available on Windsurf's free plan. I should give that a try.
They plan to allow defining your own endpoint for autocomplete soon, and when they do, switching to a better model like Sonnet or a fine-tune should beat Cursor.
I don't know, I think it's a tie. I can have the agent do some busy work or refactoring while I'm writing code with the autocomplete. I can tell it how I want a file split up or how I want stuff changed, and tell it that I'll be making other changes and where. It's smart enough to ignore me and my work while it keeps itself busy with another task. Sort of the best of both worlds. Right now I have it replacing DraftJS with another library while I'm working on some feature requests.
I feel like this is the big divide, some people have no use for agents and swear by autocomplete. Others find the autocomplete a little annoying/not that useful and swear by agents.
For me my aha moment came with Claude Code and Sonnet 4. Before that AI coding was more of a novelty than actually useful.
I have recently been using Zed much more than Cursor. However, the autocomplete is literally the only thing missing, and when dealing with refactors or code with tons of boilerplate, it's just unbeatable. I'm eagerly awaiting a better autocomplete model so I can finally ditch Cursor.
I agree. I used to use vscode, then switched to Zed and used it for over a year (without AI). In February of this year, I started using Cursor to try out the AI features and I realised I really hated vscode now. Once Zed shipped agent mode, I switched back, and haven’t looked back. I very strongly never want to use vscode again.
I'm in the same boat but a neovim/cursor user. I desperately wish there was a package I could use in nvim that matched the multiline, file-aware autocomplete feature of Cursor. Of course I've tried supermaven, copilot etc, but I've only ever gotten those to work as in-line completions. They can do multiline but only from where my cursor is. What I love about Cursor is that I can spam tab and make a quick change across a whole file. Plus its suggestions are far faster and far better than the alternative.
That said, vscode's UX sucks ass to me. I believe it's the best UX for people that want a "good enough and just works" editor, but I'm an emacs/vim (yes both) guy and I don't like taking my hands off the keyboard ever. Vscode just doesn't have a good keyboard only workflow with vim bindings like emacs and nvim do.
It's a bad anti-pattern that trades away performance, UX, etc. for developer convenience. It's fair to hate on it.
With the advent of coding agents, I really hope we see devs move back to the traditional approach of using native frameworks/languages, since now you can write for one platform and easily task AI with handling the others.
This will never happen, and it's a bizarre, legacy fantasy, borne of a fixed imaginary ideal of what computing should be. Programming will continue to move in the direction of ease-of-use, and every time I see an off-topic reference to Electron in this forum I feel insane, like I'm fighting upstream. You will not see this; you will see more Electron apps, because that is the modern way of building cross-platform apps, and if you genuinely don't understand why, I don't know how to explain it to you. Nobody is going to waste their time building cross-platform apps at the native layer to performatively impress posters on HN. You, and seemingly everybody else on HN, can continue to pretend that devex doesn't matter, but that's the difference, I guess, between caring about devex and shipping products.
It's not off-topic. We're discussing the Zed editor. Their whole marketing ploy is being the "we are not Electron", "we are Rust", "we are native", "we are not slow" alternative to VSCode.
This is literally their whole distinguishing feature and people are switching because of it and just it.
It is! If the thing runs like shit, say it runs like shit. Say it's native or not, like every topic title and comment on HN, until we weep of boredom. I know it's Rust! Everything is Rust here! Is there any other reason I should care? Are we a forum for discussing interesting technology, or a forum for discussing alternatives to VSCode? And again, who is switching? People shipping products, or HN posters with their dumb metrics?
People who install zed are switching. I don't understand what you're trying to "get" at. You're complaining about people talking about Zed in a topic about Zed.
Zed seems to have been hugely successful recently, and their only real distinguishing feature is "fast from the ground up". It has fewer features than VSCode and worse AI features than Cursor, but people seem to love it nonetheless.
Turns out there is a market for people fed up with VScode-derivatives.
Cross platform application development is cool but the guys who made Zed are the guys who made Atom are the guys who made Electron, and they pointed out that long term the devex sucks and that Electron simply isn't a good platform for native applications that need any kind of memory control or similar features: https://zed.dev/blog/we-have-to-start-over
> My experience in Atom always felt like bending over backwards to try to achieve something that in principle should have been simple. Lay out some lines and read the position of the cursor at this spot in between these two characters. That seems fundamentally doable and yet it always felt like the tools were not at our disposal. They were very far away from what we wanted to do.
> Nathan: It was a nightmare. I mean, the ironic thing is that we created Electron to create Atom, but I can't imagine a worse application for Electron than a code editor, I don't know. For something simpler, it's probably fine, the memory footprint sucks, but it's fine. But for a code editor you just don't have the level of control I think you need to do these things in a straightforward way at the very least. It's always some... backflip.
Thinking JavaScript was a language meant for desktop applications is what is insane, even more so than the convenience of using it, which is comparatively less insane.
I find Zed has some really frustrating UX choices. I’ll run an operation and it will either fail quietly, or be running in the background for a while with no indication that it is doing so.
It does have extensions, but they are much more limited. In particular they can't define UI elements inside buffers, so you can't replicate something with rich UI like the Git integration in an extension.
Does it really? At the end of the day I need it to do my job. Ideals don't help me do my job. So I choose the editor best suited to the features I need. And that's not Zed at the moment.
There's an analogue here with programming language iteration— Python, Ruby and friends showed what the semantics were that were needed, and then a decade or two later, Go and Rust took those semantics and put them in compiled, performance-oriented languages.
Electron has been a powerful tool for quickly iterating on UIs and plugin architectures in VSCode, Brackets, Atom, etc. Now the window is open for a modern editor to deliver that experience without the massive memory footprint and UI stalls.
I agree with the main point but I am on battery often and the difference between native vs. one or multiple Electron apps in "doing my job" is easily several hours lost to battery life or interruptions for charging. Not a huge deal, but it's not my ideals that make me frown at charge cycles occurring twice as often.
This is simply not true… that’s the problem. As much as I like Zed, using it for the sake of not being an electron app doesn’t make any sense when Cursor’s edit prediction adds so much value. I’m not starved of resources and can run Cursor just fine – as far as Electron apps go VS Code is great, performant enough. I value productivity. I’ll very happily drop Cursor for Zed the second edit prediction is comparable. I’m eagerly waiting.
I wonder if Augment [1] are working on a Zed plugin.
I've been using Augment for more than a year in Jetbrains IDEs, and been very impressed by it, both the autocomplete and the Cursor-style agent. I've looked at Cursor and couldn't figure out why anyone needed to use a dedicated IDE when Augment exists as a plugin. Colleagues who have used Cursor have switched to Augment and say it's better.
Seems to me like Augment is an AI tool flying under most people's radar; not sure why it's not all over Hacker News.
I liked autocomplete when it was a bit slower and only acted on Tab.
Right now it's borderline impossible to write code: the autocompletion results load ultra fast, and Cursor maps different buttons to autocomplete functionality.
It's no longer usable for me.
I'm fine with getting autocompletions, but I want to decide when to trigger them, ideally after reading the suggestion. As it is, I can't even type.
>One thing that still suffers is AI autocomplete. While I tried Zed's own solution and supermaven (now part of Cursor), I still find Cursor's AI autocomplete and predictions much more accurate (even pulling up a file via search is more accurate in Cursor).
It's not only the autocomplete. I've never had any issue with Cursor, while Zed often panicked, crashed, and behaved inconsistently (the login indicator would flicker between states while you were logged in and vice versa, clicking some menus would crash it, and similar annoyances). Another strange thing I've observed is the reminder in the UI that rating an AI prompt sends your _entire chat history_ to Zed, which might be a major red flag for many people. One could accidentally rate it without being aware of that, and then Zed has access to large and potentially sensitive parts of your company's code; I can't imagine any company being happy with that.
There are plenty of great VCs out there; going with Sequoia will definitely come with some unpleasant consequences later.
>This will go a long way toward creating real competition for Cursor in the form of a quality IDE not built on VSCode
There are many "real competitors" to Cursor, like Windsurf, (Neo-)Vim, Helix, Emacs, Jetbrains. It's also worth being aware that not everybody is too excited about letting AI slop be the dominant part of their work. Some people prefer sprinkling a little AI here and there, instead of letting it do pretty much everything.
"even pulling up a file via search is more accurate in Cursor"
Huh? For me it sometimes takes like 40 seconds to find a file with the fuzzy search. In that time I can go to the terminal and run a find command with lots of wildcards before Cursor gives me a result.
For starters, never commit to a timeline without doing your due diligence. We're not selling carpets. Anyone who gives a time estimate on the spot is setting themselves up for failure.
Second, always pad your estimates. If you've been in the industry longer than six months, you already know how far off your estimates can be. Take how long delivery actually took, divide it by how long you estimated, and that's your multiplier. For example, if you estimated two weeks and delivery took three, multiply future estimates by 1.5.
The reason people are holding out is that the current generation of models is still pretty poor in many areas. You can have one craft an email, or review your email, but I wouldn't trust an LLM with anything mission-critical. The accuracy of the generated output is too low to be trusted in most practical applications.
Glib, but the reality is that there are lots of cases where you can use an AI in writing without entrusting it with the whole job blindly.
I mostly use AIs in writing as a glorified grammar checker that sometimes suggests alternate phrasing. I do the initial writing and send it to an AI for review. If I like the suggestions I may incorporate some. Others I ignore.
The only times I use it to write is when I have something like a status report and I’m having a hard time phrasing things. Then I may write a series of bullet points and send that through an AI to flesh it out. Again, that is just the first stage and I take that and do editing to get what I want.
>> have something like a status report and I’m having a hard time phrasing things
I believe the above suggested that this type of email likely doesn't need to be sent. Is anyone really reading the status report? If they read it, what concrete decisions do they make based on it? We all fall into the trap of doing what people ask of us, but it often isn't what shareholders and customers really care about.
Google became a billion-dollar company by creating the best search and indexing service at the time and putting ads around the results (that, and YouTube). They didn't own the answer to the question.
A support ticket is a good middle ground; this is probably the area of most robust enterprise deployment: synthesizing knowledge to produce a draft reply, with some logic to either send it automatically or route it for human review. There are both shitty and OK systems that save real money through case deflection and even improve satisfaction rates. Partly this works because human responses can also suck, so you're raising a low bar. But it is a real use case with real money and reputation on the line.