Hacker News | otabdeveloper4's comments

> ...and it'll be night and day.

That's just, like, your opinion, man.

> You really can't compare a model that's got trillions of parameters to a 27B one.

Parameter count doesn't matter much when coding. You don't need in-depth general knowledge or multilingual support in a coding model.


I often do need in-depth general knowledge in my coding model so that I don't have to explain domain specific logic to it every time and so that it can have some sense of good UX.

> For coding often quality at the margin is crucial even at a premium.

For coding, quality is not measurable and is based entirely on feels (er, sorry, "vibes").

Employers paying for SOTA models is nothing but a lifestyle status perk for employees, like ping-pong tables or fancy lunch snacks.


I’m building my own company, and I consider model choice crucial to my marginal ability to produce a higher-quality product I don’t regret having built. Every higher-end dev shop I’ve worked at over the last few years sees it the same way.

There are measurable outcomes from software built well versus software built poorly, even if the code itself isn’t easily measurable. I would rather pay a few thousand more per year for a better overall outcome with less developer struggle against bad model decisions than end up with an inferior end product and have expensive developers spinning their wheels containing a dumb-as-a-brick model.

But everyone’s career experiences are different, and I’d feel sad to work at a place where SOTA is a lifestyle status perk rather than a rational engineering and business choice.

"based entirely on feels"

Now there's a word I haven't heard in a long, long time.


API rates are the real rates. Subscription costs are the "first hit is free" subsidized pricing.

They’re not the “real rates”; they’re the rates being charged for API use, and API use reportedly has a profit margin.

You also neglect that products like Cursor run on two margins, their own plus the API provider’s. That’s always going to come at a premium.


There is -- you can expose a UNIX socket for serving credentials and allow access to it only from a whitelist of systemd services.

They would still exist in plaintext, just the permissions would make it a little harder to access.

No, UNIX sockets work over SSL too.

You can, theoretically, take a dump of system memory and try to mine the credentials out of the credential server's heap, but that exploit is exponentially more difficult than a simple `cat /proc/1234/environ`.
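The socket approach described above can be sketched in a few lines of Python. This is a minimal, Linux-only illustration: `SO_PEERCRED` asks the kernel for the peer's PID/UID/GID, so the client cannot forge its identity. A real deployment would key the whitelist to the per-service UIDs that systemd assigns (e.g. via `DynamicUser=`); here the whitelist is just the current UID so the example is self-contained.

```python
import os
import socket
import struct
import tempfile
import threading
import time

UCRED_FMT = "3i"  # struct ucred on Linux: pid_t, uid_t, gid_t


def serve_secret_once(sock_path, secret, allowed_uids):
    """Serve `secret` over a UNIX socket to one client with a whitelisted UID."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
    conn, _ = srv.accept()
    try:
        # The kernel reports who is on the other end; the client cannot lie.
        creds = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                                struct.calcsize(UCRED_FMT))
        pid, uid, gid = struct.unpack(UCRED_FMT, creds)
        conn.sendall(secret if uid in allowed_uids else b"denied")
    finally:
        conn.close()
        srv.close()


# Demo: whitelist only our own UID, then fetch the secret as a client.
path = os.path.join(tempfile.mkdtemp(), "creds.sock")
server = threading.Thread(target=serve_secret_once,
                          args=(path, b"hunter2", {os.getuid()}))
server.start()
for _ in range(100):  # retry until the server has bound and is listening
    try:
        cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        cli.connect(path)
        break
    except (FileNotFoundError, ConnectionRefusedError):
        cli.close()
        time.sleep(0.01)
reply = cli.recv(64)
cli.close()
server.join()
print(reply)  # b'hunter2'
```

The key point is that the secret never touches the client's environment or command line, so it never appears in that client's `/proc/<pid>/environ`.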


That works on a single persistent box, but unfortunately, that means giving up on autoscaling, which is not so nice for cloud applications.

You can proxy the UNIX socket to a network server if you want to. You can even use SSL encryption at all times too.
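A sketch of that proxying idea, in plain Python with no TLS (in production you would wrap the TCP listener with `ssl.SSLContext.wrap_socket` and require client certificates). The echo backend here is a stand-in for the credential service behind the UNIX socket; all names and paths are illustrative.

```python
import os
import socket
import tempfile
import threading
import time


def unix_backend(path):
    """Stand-in for the credential server listening on a UNIX socket."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(b"secret:" + conn.recv(64))  # echo the request back, tagged
    conn.close()
    srv.close()


def proxy_once(lsock, unix_path):
    """Relay one TCP connection to the UNIX socket: one request, one reply."""
    conn, _ = lsock.accept()
    usock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    usock.connect(unix_path)
    usock.sendall(conn.recv(64))   # request:  TCP  -> UNIX
    conn.sendall(usock.recv(64))   # response: UNIX -> TCP
    for s in (usock, conn, lsock):
        s.close()


upath = os.path.join(tempfile.mkdtemp(), "creds.sock")
threading.Thread(target=unix_backend, args=(upath,)).start()
while not os.path.exists(upath):  # wait for the backend to bind
    time.sleep(0.01)

lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
lsock.bind(("127.0.0.1", 0))  # any free port
lsock.listen(1)
port = lsock.getsockname()[1]
threading.Thread(target=proxy_once, args=(lsock, upath)).start()

cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"token-request")
reply = cli.recv(64)
cli.close()
print(reply)
```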

Once it's networked you lose the "whitelist of systemd services" and it's then no different from any networked secret store.

No, this is a solved problem: https://spiffe.io/

You can do service attestation securely, even for networked services.


Env vars are not secure. Anything that has root access can see all env vars of all applications via /proc.

(And modern Linux is unusable without root access, thanks to Docker and other fast-and-loose approaches.)
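The `/proc` exposure is easy to demonstrate. A small Linux-only sketch: the same-UID case shown here needs no root at all, and root can do the same to any PID on the box. (`SECRET_TOKEN` is a made-up variable name for illustration.)

```python
import subprocess
import time


def read_environ(pid):
    """Parse /proc/<pid>/environ: NUL-separated KEY=VALUE pairs."""
    with open(f"/proc/{pid}/environ", "rb") as f:
        raw = f.read().decode(errors="replace")
    return dict(item.split("=", 1) for item in raw.split("\0") if "=" in item)


# Spawn a child with a "secret" in its environment, then read it back out
# of /proc. /proc/<pid>/environ reflects the environment as of exec(), so
# give the child a moment to finish exec'ing.
child = subprocess.Popen(["sleep", "30"],
                         env={"SECRET_TOKEN": "hunter2",
                              "PATH": "/usr/bin:/bin"})
try:
    time.sleep(0.3)
    env = read_environ(child.pid)
    print(env["SECRET_TOKEN"])  # hunter2
finally:
    child.kill()
```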


How often do you log in as root, or use sudo to become root, when you're working with Docker containers?

Because I never do, unless I'm down in the depths of /var/lib/docker doing stuff I shouldn't.


That just means you outsourced the `sudo` invocations to some other person. (Which is even worse.)

No, it means I understand how Unix permissions work.

Glib response, but in reality you basically cannot do anything in a modern Linux system without root except read and write files in your home directory.

> It's not anywhere close

Close to what, and how are you measuring?

> nobody in the USA would be spending 7 figures on infrastructure for it

Au contraire, if AI had a moat it would pay for itself. They're funneling capital into infrastructure because they know it can't.


You need the infrastructure to train and run it regardless though. Kimi is great, but I'm not getting the same performance running it on my MacBook or a 3090 as it gets on an H100 or a Grace Hopper supercomputer. Pretend you did have said moat. Why wouldn't you also build infrastructure to run it on?

> Why wouldn't you also build infrastructure to run it on?

No, you wouldn't be using venture capital to overprovision your AI a hundredfold if selling AI was the end goal.


What?

> ...because LLMs are really good at writing tests.

No, they're absolutely shit at writing tests. Writing tests is mostly about risk and threat analysis, which LLMs can't do.

(This is why LLMs write "tests" that check if inputs are equal to outputs or flip `==` to `!=`, etc.)
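The failure mode described above looks like this in practice. `parse_port` is a hypothetical function under test; the contrast is between a test that merely restates the implementation and one driven by actual risk analysis:

```python
def parse_port(s):
    """Hypothetical function under test: parse a TCP port number."""
    port = int(s)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port


# The kind of "test" described above: it restates the implementation,
# so it can never fail independently of it.
def test_parse_port_vacuous():
    assert parse_port("8080") == int("8080")


# A risk-driven test probes the inputs that will actually hurt: empty
# strings, junk, injection attempts, out-of-range values.
def test_parse_port_rejects_bad_input():
    for bad in ["", "eighty", "8080; rm -rf /", "70000", "-1"]:
        try:
            parse_port(bad)
            raise AssertionError(f"accepted {bad!r}")
        except ValueError:
            pass


test_parse_port_vacuous()
test_parse_port_rejects_bad_input()
print("ok")
```

The first test is green for any implementation that calls `int`; only the second one encodes a judgment about what can go wrong.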


People think these LLMs are anthropomorphic magic boxes.

It will take years until the understanding sets in that they're just calculators for text: you're not praying to a magic oracle, you're putting tokens into a context window to bias the statistical weights.


> This seems like a common-sense addition.

Mm, yes. Let's add mitigation for every possible psychological disorder under the sun to my Python coding context. Very common-sense.


It's what you get when you create sycophant-as-a-service. It will, by design, feed all of your worst fears and desires.

LLMs aren't AGI, and I'd go further and say they aren't AI, but admitting it is snake oil doesn't sell subscriptions.


This. It misses the compile-time evaluation boat completely, even though the proverbial "sufficiently smart compiler" is built on exactly that idea.
