>First responders/doctors/CPS investigators see the worst but they also have days where they make a difference. Save a life or multiple lives. I'm sure it's a huge part of what makes the job bearable, and to some meaningful.
You think miners don't make a difference or save lives?
> You think miners don't make a difference or save lives?
Do you think miners mining is saving lives in the same way that doctors saving lives is saving lives?
To continue the parent's point, do you think miners derive a deep or powerful satisfaction from some of their mining work which might offset some of the heavy cost it has on them physically and emotionally?
I think miners save more lives (through the supply of gas, energy, battery materials, pesticides, fertilizers, solar panel minerals, and ultimately electricity, computing materials, etc) than doctors do.
And I think what prevents miners from deriving "a deep or powerful satisfaction from some of their mining work which might offset some of the heavy cost it has on them physically and emotionally" is people thinking the way you do: that only direct impact should be prestigious and satisfying, not the thankless background work that keeps the lights on.
What exactly is phenomenal and novel about Zed? I've tried it a couple of times for a week or so, didn't see the point, and moved on every time.
And I'm not a luddite swearing by vi or something: I use VSCode and IDEA, used Sublime for many years, Xcode on and off for some Obj-C/Swift dev, Eclipse for 5-6 years in the 2000s, and vim for everything CLI/lightweight since forever.
Is the GUI tech what's supposed to be novel? I couldn't care less about that backend in my everyday editor use as long as the editor is fast enough. Which on modern hardware, even Idea is.
Currently on this machine: using 900MB of RAM, including all language servers, with nine open projects - that is pretty phenomenal. VSCode could barely keep one open with the same memory.
The perception of 'fast' is very subjective. To me having a smooth, jitter-free UI, low input latency, and instant startup, all matter a lot.
It's amazing that a gig of ram is considered lightweight for having 8 project dirs open in an editor, which normally means 8 tree views and a few open file tabs per project :)
Even more amazing that 10GB for the same purpose is considered acceptable. ~100MB per project for the window, project files, LSP servers, ASTs, etc. is something very few editors can achieve; I'm pretty sure Zed beats both Emacs and Neovim in memory consumption.
I understand wanting your software to be well optimized, but at no point in my years of using VSCode have I ever actually had to care about how much RAM it's using. I have 32GB, I'm going to use it.
I made the mistake of buying an 8 GB MacBook Air M3 a while ago, thinking it would be enough. I wasn't accounting for Docker or VSCode. It REALLY lags: the vim mode plugin will regularly lag on nearly every keystroke until I kill everything and restart.
On the topic of vim, the built-in vim mode in zed is really good. The helix mode is great too!!
I, too, would like to use my RAM. And I would like to be able to use it on the things I deem important, not to subsidize the laziness of devs who reach for Electron.
VS Code also offers significantly more functionality than Zed at the moment. If you want to sell RAM usage as a phenomenal benefit, then you should compare it with similar editors, like Sublime or (Neo)Vim.
That's a side effect of Electron crap; long before Zed, many editors and IDEs on Atari, Amiga, Windows, OS/2, BeOS, Mac OS, and NeXTSTEP were written in fully native code.
I heard that Zed has very impressive collaboration features. I tried them a little and they really looked good (like Discord, but directly in the editor). But this was a very superficial look.
What?! Really?! Link? I'm not a Zed user. That comment was based on a few minutes of research, plus a small dose of hopium from a VSCode user who understands what a shit show the extensions setup is and wants someone to do better.
Yep, it pulls stuff from at least npm, it’s not a secret - check the source code.
Actually, it pulls the latest versions (checking the registry, then installing that exact version; not sure why they sidestep the normal resolution algorithms) no matter what .npmrc says, so min-release-age breaks almost everywhere it integrates with the JS/TS ecosystem (most visibly, Copilot). I probably should've filed an issue.
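To make the distinction concrete, here's a minimal sketch (hypothetical code, not Zed's actual implementation) of "take whatever the registry says is latest" versus a resolver that honors a min-release-age setting. The registry data here is invented; real metadata would come from the npm registry's per-package "time" field:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical registry metadata: version -> publish timestamp,
# roughly the shape of the npm registry's "time" field.
now = datetime.now(timezone.utc)
registry = {
    "1.2.0": now - timedelta(days=30),
    "1.3.0": now - timedelta(days=10),
    "1.3.1": now - timedelta(hours=6),  # freshly published
}

def latest(versions):
    """What "check registry, then install that exact version" does:
    take the newest version unconditionally."""
    return max(versions, key=lambda v: versions[v])

def latest_respecting_min_release_age(versions, min_age):
    """What a resolver honoring min-release-age would do:
    skip any version younger than the configured age."""
    eligible = {v: t for v, t in versions.items() if now - t >= min_age}
    return max(eligible, key=lambda v: eligible[v])

print(latest(registry))                                              # 1.3.1
print(latest_respecting_min_release_age(registry, timedelta(days=2)))  # 1.3.0
```

The point of min-release-age is exactly the gap between those two answers: a freshly published (possibly compromised) version never gets picked up until it has aged.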
It also installs Go packages but I haven’t looked into that.
Recent example I looked at: https://github.com/nilskch/zed-jj-lsp, which downloads jj-lsp if it's not found on the system. I have seen other extensions doing something similar for convenience, but I can't remember the names to give concrete links.
> TL;DR: Mix of language tooling, unsigned proprietary blobs, corrupted and/or GLIBC-dependent files, redundant copies of already-installed executables. The Node packages especially are able to run scripts on install. Personal preference aside, might also create issues with security laws, certifications. All without user consent.
> Issues opened in January and June 2024. They've been rejected, closed, and opened a couple times since then. No changes directly improving this yet as of April 2026.
So... If you want broad language support via LSP servers, then you're going to have to bring in other ecosystems, and Node/Typescript is a big one that doesn't always have alternatives. [0] That's not a Zed-specific problem.
IMO the real issue with Zed is the "runs them by default without asking" part. Plus the questionable practices with binary blobs and the cavalier attitude in the discussions, when I can just use an editor that... Doesn't do any of that.
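For concreteness on the "bringing in other ecosystems" point: whatever language a server is written in (a Node package, a Go binary, anything), the editor just spawns it as a subprocess and speaks JSON-RPC over stdio with Content-Length framing, per the LSP base protocol. A minimal sketch of that framing (the wire format is from the LSP spec; nothing here is Zed-specific):

```python
import json

def frame(message: dict) -> bytes:
    """Encode a JSON-RPC message with the LSP base-protocol header:
    a Content-Length line, a blank line, then the JSON body."""
    body = json.dumps(message).encode("utf-8")
    return b"Content-Length: " + str(len(body)).encode("ascii") + b"\r\n\r\n" + body

# The first request an editor sends to any language server,
# regardless of which ecosystem the server came from.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": None, "capabilities": {}},
}

wire = frame(initialize)
assert wire.startswith(b"Content-Length: ")
```

That protocol-level uniformity is why an editor "needs" foreign runtimes at all: the server itself is an opaque executable, and for many languages the maintained one happens to be a Node package.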
>Aren't you forgetting the part that says "solely: (a) to perform its obligations set forth in the Terms, including its Support obligations as applicable; (b) to derive and generate Telemetry (see Section 4.4); and (c) as necessary to comply with applicable Laws
None of the above I like, and (a) is so vague as to be useless, including if you read the obligations.
>Except as required by applicable Laws, Zed will not provide Customer Data to any person or entity other than Customer’s designees (including pursuant to Section 7) or service providers."
Companies still do it all the time despite "applicable laws". And when the company is sold, all bets are off.
I'd rather they don't get, or keep, any to begin with.
The telemetry section of the TOS explicitly clarifies that it does not restrict their ability to use the data that is sent to them.
> Customer may configure the Software to opt out of the collection of certain Telemetry Processed locally by the Software itself, but Zed may still collect, generate, and Process Telemetry on Zed’s servers.
Note that they have (or did have, I haven't used their editor in a while) an AI tab completion feature... it's safe to assume that all of the code you edit is sent to them, at least when that is enabled.
>The telemetry section of the TOS explicitly clarifies that that does not restrict their ability to use the data that is sent to them.
Hopefully it does restrict the data being sent to them in the first place.
I also found there are a couple of "Chromium" style builds.
>Note that they have (or did have, I haven't used their editor in awhile) an AI tab completion feature... it's safe to assume that all of the code you edit is sent to them at least when that is enabled.
There's also an option to turn ai features off. At which point of course, nvim is just as good :)
What I understand from reading this is that if you use their online services (AI agents, LLM-based tab completion, auto-updates, etc.), you send data to their servers, and they run analytics on that. Frankly, this is what I would expect anyway: if I disable telemetry locally, it affects what I do locally, i.e. no data about how I use my software locally leaves the machine. But if I send data to some server, I would not expect people not to run analytics on their servers.
> AI tab completion feature... it's safe to assume that all of the code you edit is sent to them
Yes, this is quite obvious; how else could they provide AI tab completion? I hope everybody understands this before using something like this. They do specify that "[...] telemetry expressly does not include Customer Data" though.
> They do specify that "[...] telemetry expressly does not include Customer Data" though.
Yes and no. They first grant themselves a license "to derive and generate Telemetry" from the user's copyrighted material, something they would only need if they're deriving it from the actual creative works the customer updates, and not just the metadata about them.
And they define telemetry extremely broadly, effectively "anything useful for lawful business purposes except customer data".
So this agreement would seem to cover things like "an update to an AI model trained off of your code" or even "an AI summary of what you're working on and any relevant business information contained therein". As long as they process it to something new, it's not "customer data" (a term defined narrowly in the agreement). I don't expect that they are doing that, but I think they've given themselves permission to. The agreement is far too broad.
I agree; I expect that they are deriving metadata, and would expect that regardless of this agreement, but this agreement doesn't seem necessary for that.
I was willing to give it another go. Now I read on this thread that it installs tons of node packages (so much for Rust native code) and even Go packages, and gets many extra processes running along with it.
TL;DR: Mix of language tooling, unsigned proprietary blobs, corrupted and/or GLIBC-dependent files, redundant copies of already-installed executables. The Node packages especially are able to run scripts on install. Personal preference aside, might also create issues with security laws, certifications. All without user consent.
Issues opened in January and June 2024. They've been rejected, closed, and opened a couple times since then. No changes directly improving this yet as of April 2026.
Personally, I think even if they eventually fix this, given the attitude shown towards their users' machines, I should probably just use an editor where I don't have to worry about it.
Compared to Lisp? Ok, fine. Syntax doesn't get simpler than Lisp's. But compared to JavaScript? C++? C#? Haskell is top tier when it comes to syntactic and conceptual elegance. The biggest problem is the tooling, I would say.
I could not agree less. People used to call Python “executable pseudocode” - in that spirit, Haskell is executable pseudo-math. If you’ve done enough higher math that a professor’s whiteboard notation feels natural to you, then Haskell might feel like a reasonable approximation of that style. Otherwise: it’s line noise.
Haskell is very elegant and pretty. It's hard to describe what pretty is when it comes to programming languages, but imo golang is ugly, rust is good, and Haskell the best.
To share more details about what happened: tokens on Railway can actually be scoped pretty narrowly, all the way down to a single environment. In the post that went viral, the token was account-scoped, which was way more access than the task needed. We'll improve the UX so it's obvious which token to create.
Also, volume deletions are now scheduled with a 48-hour grace period, in both the dashboard and the API, so you can always undo. Overall we want more guardrails in place, since more and more people will interact with Railway via agents.
And what’s the variance & accuracy of their responses? Isn’t comparing the models’ variance to baseline human variance what matters here? It seems like they didn’t do that, and I agree with parent’s call for that kind of baseline.
Having counted calories for years, I don’t think I could reliably estimate the calories or carbs in the example picture of a cheese sandwich. I can make assumptions about the bread and the cheese, but I might easily be off by 2-3x. Calorie counting apps that use text descriptions also have huge variance for the same thing. The problem might be the belief that a picture or description is enough, regardless of who or what is guessing…?
Edit: Ah, I see from sibling thread you meant commercial services are LLMs, I thought you meant there were human-backed services to compare to. Anyway, I totally agree there’s a problem if people rely on AI for safety, but I’m not sure LLMs are the core issue here, it seems like using vague information and guessing is the core issue.
> Isn’t comparing the models’ variance to baseline human variance what matters here?
You seem to be missing the context that this isn't just about diet apps - this is about apps claiming to be able to track carbs sufficiently accurately to be used in a medical context to dose insulin (a substance which can be lethal if incorrectly dosed)
No I understand apps are making dubious claims and implications; obviously claiming LLMs can accurately estimate carbs from a photo is just wrong. But that doesn’t necessarily change my question. Should people use photos to estimate carbs? Can people looking at photos do any better?
The presence of variance in the LLM output doesn’t actually prove anything, in fact I would expect and hope for variance when confidence is less than 1.0. I’m more curious about accuracy of the mean of guesses for different models, for example.
But should any diabetic expect photos to be reliable, regardless of whether it’s an app or an LLM or a human? I know some diabetics, and the people I know do not rely on photos for their safety. They don’t even rely on food labels either (which are far more accurate than photos), they measure their insulin.
It’s probably useful to raise awareness, and useful to scare app makers away from making bogus medical claims - products and scams that make bogus medical claims is of course a practice as old as history. But we can still hold the studies and PR around this up to high standards, right? Even assuming this article & the paper behind it are right, there are reasonable questions here about how to demonstrate the problem and what the baselines are.
It’s worth keeping in mind that trying to prove the bogus apps wrong with a flawed methodology or questionable reasoning or just an overly heavy handed style can cause backlash and do damage to the cause. We’re already seeing that effect play out with respect to vaccinations.
But I don't see them using those commercial services in this study; instead, they're using frontier model companies? Is Gemini advertising that you get a realistic calorie count from a picture? Maybe so, in which case I'd take it back!
The commercial services likely also have frontier model dependencies...
The opening to the actual paper is quite explicit that (i) other studies have already tested commercial apps, with unimpressive results, and (ii) a popular open source app for carb counting directly relies on API calls to these frontier models, and this research batch-tested the images using the exact same models and prompts as that popular open source app.
A carb counting app might use API calls to these frontier models and then do some kind of analysis. It could see if different models agree or not, or multiple calls, and with how much variance.
So it would be more accurate to test the apps rather than the APIs, unless the goal is to warn people who just open ChatGPT and ask there.
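A rough sketch of the agreement/variance check described above (the model names and carb guesses are invented for illustration; a real app would collect them from repeated API calls):

```python
from statistics import mean, stdev

# Hypothetical carb estimates (grams) for one meal photo,
# from repeated calls to two models.
guesses = {
    "model-a": [42, 55, 38, 61, 47],
    "model-b": [30, 72, 44, 90, 51],
}

def summarize(samples):
    """Mean and coefficient of variation (stdev / mean) for one
    model's repeated guesses on the same photo."""
    m = mean(samples)
    return m, stdev(samples) / m

for name, samples in guesses.items():
    m, cv = summarize(samples)
    # A high coefficient of variation means the estimate can't be
    # trusted for dosing; the app should refuse to show a number.
    status = "UNRELIABLE" if cv > 0.25 else "ok"
    print(f"{name}: mean={m:.0f}g, cv={cv:.2f}, {status}")
```

The 0.25 threshold is arbitrary here; the point is that an app sitting in front of the API can detect disagreement and decline to answer, which is exactly what raw API testing doesn't capture.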
The open source app could in theory do that, but the paper's authors would be able to determine whether it did or not by reading its code, which they evidently did to replicate the API calls it made with their own script.
(And of course it would also be far more tedious to submit each picture 500 times manually using an app and manually log the response than using a script which is designed to collect the data automatically as fast as API rate limits permit)
Great point, and I'd love a study to address that. If the study pointed out that specific services sit squarely within the analysis found here, I think that would be a fantastic study, both enlightening and useful to show.
The app the study is based on is open-source, so you yourself can verify that it does indeed just call a frontier model with the same prompts used in the study
That's not really the same thing as what I'm saying - which is to investigate the applications specifically advertising AI calorie counting capabilities
They investigated an open source application specifically advertising carb counting capabilities, replicated its prompts and API calls in a way optimised to collect data from 26000 queries (which is a lot to do using a GUI!). They also note other people have already done [necessarily] smaller scale studies of the commercial AI carb counting apps and been similarly unimpressed by the responses.
This is all in the first few paragraphs of a preprint paper describing the research in considerably more detail which is linked at the bottom of TFA
Meta: I'm enjoying that nearly half this HN thread is people arguing that surely anyone who cares what's in their food shouldn't ask ChatGPT instead of looking it up properly, while most of the rest is people who apparently care what's in a research paper asking HN for comment instead of looking it up :)
Not really. My point is people don't even think they're buying a general purpose computer because they don't know what that is. They are buying a cloud terminal and it works as a cloud terminal. You might think of your smartphone as a computer, but you're wrong. So it's a chair and anyone can sit on it, it's just not a table. It never was.