
It's a nice sentiment to say use HTML (or HTMX), but often the people who push for that either don't know enough HTML to know how broken the spec is, or they don't know enough about the requirements of modern web applications.

If you're making a static blog or landing page with HTML, you won't hit many of the bad parts.

There's really no chance of fixing the HTML spec though, despite the many attempts, which tend to make it worse. Many of the framework authors know this, and in part that's why they've chosen such a user-space heavy path. It allows working around the bad parts of the platform, and iterating in ways where ideas that don't pan out can be left behind.


> there's seemingly nothing in Windows or Linux

Linux has flatpak


Security through obscurity can be a good security layer, but you need to maintain obscurity. That's a lot harder than Cal.com seems to realize.

For example using something like Next.js means a very large chunk of important obscurity is thrown out the window. The same for any publicly available server/client isomorphic framework.


> The moat of Cal.com is not the code, it's the users who don't want to migrate.

That's a very weak moat unless you have something else like the friction of network dependence similar to a social network.


Exactly, that's why most SaaS companies are in a very tough position.

You have to bring value that goes beyond the source code and hosting, otherwise your clients are going to vibe code a custom solution instead of paying you.


> otherwise your clients are going to vibe code a custom solution instead of paying you.

How many things do you want to be responsible for? How many vibe coded projects do you want to maintain?

I think this line of reasoning is overblown. Just because you can doesn't mean a significant number of people will. I think the 3D printer comparison is apt.


Individuals and SMBs might stick with SaaS, but those don't pay much.

Enterprise customers have the means to develop in house; those are the customers that will leave. And those are the whales of the SaaS business.


They already have the means to develop in house. Why aren't they?

Same story as always: writing the code is the easy part. Requirements gathering, analysis, consensus, direction, those are all the hard parts. Enterprises have a business to run and don't want to run a software shop on top of everything else.

The story is usually that businesses don't want to commit to indefinitely expending their limited efforts maintaining software which isn't part of the company's core competencies. Most of the cost and effort of software happens after the first release is delivered.

> Enterprises have a business to run and don’t want to run a software shop on top of everything else.

It sounds like you mostly understand here. The biggest part of "running a software shop" they want to avoid is responsibility for support, bugs, fires, ongoing maintenance, and legal issues of post-release software.

Dave's Pizza around the corner doesn't make a social media app, not because Dave can't figure it out, not because he can't vibe code one, not because he can't contract someone to do it, but because running a social media site isn't a core competency of Dave's Pizza. Instead, Dave uses existing social media sites, and focuses his efforts and passions on making pizza.


So I work in enterprise tech consulting; my current project is with a large, global chemicals company (it wouldn't be right to call out my client by name). This client is extremely competent, from their multiple enterprise architects down to their analysts; they're a pleasure to work with. One of the business requirements could be met by a very simple in-house developed and hosted API, and it's a perfect use case for GenAI-assisted coding too. There's no magic; it's a problem solved over and over already. However, they don't want to touch in-house dev with a 10-foot pole, for the reasons we're both talking about. They don't want to support it, extend it, back it up, monitor it, and do all the other things that have to happen after the code is done. They're perfectly happy to buy licenses from a SaaS so that if anything goes wrong they can tell the CTO "it's not me, it's them". And when the CTO says "why doesn't it do this too!?!" they can say "I'll call our rep and ask".

SaaS value to an enterprise is more than just the functionality provided, and I think that is lost on a lot of the heads-down software devs here.


They are, and always have been. Looking over "software engineer" roles in my local area, I see folks at companies in a variety of industries: finance, logistics, health care, and the local power utility, all well outside the software industry.

Most enterprise companies don't develop everything in house, but usually do have a varied mix of in-house infrastructure, IaaS and PaaS solutions, and SaaS products. Large organizations across varied industries often have multiple internal dev teams, and the availability of increasingly sophisticated AI tools is going to enable the same teams to be effective at more, and more complex, projects. AI will definitely start shifting make-or-buy decisions, especially for mature, commodity use cases, to 'make'.


It's much less work (= cheaper) to develop in-house with AI now than before.

I don't think it's much cheaper. Writing some code to do some CRUD has always been easy. Getting to a proof of concept is definitely quicker. But creating something that can be relied upon in production? That's as difficult and time-consuming as it has ever been.

Yup. I've explained it as: okay, some software is free as in beer, and other software is free as in speech. DIY software is free as in yacht.

It sounds nice, but now you have something that takes an enormous amount of time and effort to use and maintain, plus you need to have someone with the skills to run it.


They won’t, because specialization is a key aspect of capitalism.

This is why companies outsource anything at all. Google, Inc. is big enough to own farms and ranches to grow the food eaten in its cafeterias. They could make trucks to transport that food. They could operate factories to make cutlery, etc. Why do they instead choose to pay layers of margins to layers of middlemen?

Absurd example? How about Apple? They outsource production of their chips, instead of capturing the margin they are currently gifting to their partners. Why?

Delta Airlines doesn’t operate oil fields or even refineries even though a major cost of their operations is jet fuel. Why?

Once you can reason through these very simple examples, you will understand why enterprises are unlikely to walk away from SaaS.



Sigh.

s/Delta/United/ or s/Delta/Southwest/ or s/Delta/Lufthansa/. Or if you prefer, s/refinery/oilfield/, or s/refinery/pipeline/. Or even s/refinery/farm/, because Delta also buys food in vast quantities (I would not be surprised to find they have interests in ag producers that offset a small % of their food purchases, which does not diminish the argument).

Delta also does not make airplanes, jet engines, seats, radios, GPS, glass, or even wires. They don't distill the spirits they serve on their flights. They don't own and operate a satellite Internet capability. They don't even make movies for in-flight entertainment.

The point is that Delta, like most successful firms, outsources key aspects of core service delivery.

The second article you linked says plainly that the refinery is an offset/hedge. QED Delta still outsources the vast majority of its fuel costs. (They could, for example, own large swathes of the Permian and do E&P as well. They choose to leave that to others.)


Vertical integration has been a common practice in industry for 150 years. Yes, very few firms fully control their upstream supply chains, but very few conversely produce nothing but their core market offering in-house. Most companies are somewhere in between, doing some things in-house, and obtaining other things from vendors.

Most large firms have in-house software dev teams responsible for at least some portion of their development work. I know software engineers locally working, variously, at banks, pet supply distributors, power companies, soft drink bottlers, and many other non-tech industries. And AI can and will extend these teams' capacity to internally manage larger segments of their companies' tech stacks.


Lmao, I love this flavor of the 'tism that always surfaces in HN comment threads exactly like this. Like moths to a flame.

> How many vibe coded projects do you want to maintain?

Here comes the next SaaS idea: vibe-coded services as a service. You tell it what service you want, maybe point out a couple of examples, and you get that service vibe coded and hosted for you for a small monthly fee!


I think you missed the point. Being responsible for a vibe-coded product means also being able to support it, handle outages, etcetera.

So, no, hosting LLM output is not the same as being responsible for it.


Sunk cost is sufficient friction for most people even without network dependence.

For a meeting scheduler site? I feel like you're overestimating the capabilities of something that is akin to a college graduate project.

This company does not seem healthy at all:

https://getlatka.com/companies/calcom

I agree with the other poster who mentioned this is likely a publicity stunt, but all it's really showing is that VC is still incredibly stupid with their money. All the more reason to seize it from them and then properly fund useful software, not subsidize vanity projects for Stanford grads.


About the friction, not the capabilities... I haven't switched off my biz calendar/appointment provider I'm paying for, even though I've kinda outgrown it.

I wouldn't underestimate switching friction.


How much does your friction avoidance cost, if you don't mind my asking?

idk, my mom still pays for her AOL email account

Email is actually an excellent example of something with network dependence. Changing email providers requires that you change your email address too (unless you own and use your own domain). An address change causes friction from having to update the network of contacts and services that used your old email address.

Best business insight posted on HN. This. Your code is not your business.

I might be in the minority, but I think the best way to learn how to write a compiler is to try writing one without books or tutorials. Keep it very small in scope at first, small enough that you can scrap the entire implementation and rewrite in an afternoon or less.
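In that spirit, here is a deliberately tiny sketch in Python of the whole pipeline (tokenize, parse, emit, run) for `+`/`*` arithmetic: small enough to scrap and rewrite in an afternoon. The stack-machine target is just one possible design choice, not the only way to do it:

```python
# A tiny "compiler": arithmetic source -> tokens -> AST -> stack-machine
# instructions -> evaluation. Supports only +, *, and parentheses.
import re

def tokenize(src):
    return re.findall(r"\d+|[+*()]", src)

def parse(tokens):
    # Recursive descent: expr := term ('+' term)*, term := atom ('*' atom)*
    def expr(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "+":
            rhs, i = term(i + 1)
            node = ("+", node, rhs)
        return node, i
    def term(i):
        node, i = atom(i)
        while i < len(tokens) and tokens[i] == "*":
            rhs, i = atom(i + 1)
            node = ("*", node, rhs)
        return node, i
    def atom(i):
        if tokens[i] == "(":
            node, i = expr(i + 1)
            return node, i + 1  # skip ')'
        return ("num", int(tokens[i])), i + 1
    return expr(0)[0]

def emit(node, out):
    # Post-order walk: operands first, then the operator instruction.
    if node[0] == "num":
        out.append(("PUSH", node[1]))
    else:
        emit(node[1], out)
        emit(node[2], out)
        out.append(("ADD",) if node[0] == "+" else ("MUL",))
    return out

def run(code):
    stack = []
    for op, *args in code:
        if op == "PUSH": stack.append(args[0])
        elif op == "ADD": stack.append(stack.pop() + stack.pop())
        elif op == "MUL": stack.append(stack.pop() * stack.pop())
    return stack[0]

program = emit(parse(tokenize("1 + 2 * (3 + 4)")), [])
print(run(program))  # 15
```

The point isn't this particular structure; it's that at this scale, every design decision is cheap to revisit.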

Hell no, the current state of centralized AI is bad enough, socializing it won't make it better.

We need to let the AI as a service businesses fail.


But in the meantime you prefer privately-controlled monopsony datacenters?

Yes, I'd much rather big investment firms waste their money instead of the government.

> I hypothesize it will find the exploit, but it will also turn up so much irrelevant nonsense that it won't matter.

The trick with Mythos wasn't that it didn't hallucinate nonsense vulnerabilities; it absolutely did. It was able to verify that some were real, though, by testing them.

The question is whether smaller models can verify and test the vulnerabilities too, and whether it can be done more cheaply than these Mythos experiments.


People often undervalue scaffolding. I was looking at a bug yesterday, reported by a tester. He has access to Opus, but he's looking through a single repo, in Amazon Q. It provided some useful information, but the scaffolding wasn't good enough.

I took its preliminary findings into Claude Code with the same model. But in mine it knows where every adjacent system is, the entire git history, deployment history, and the state of the feature flags. So instead of pointing at a vague problem, it knew which flag had been flipped in a different service, saw how that changed behavior, knew how, if the flag was flipped in prod, it'd make the service under test cry, and knew which code change to make to ensure it works both ways.

It's not as if a modern Opus is a small model: Just a stronger scaffold, along with more CLI tools available in the context.

The issue here in the security testing is to know exactly what was visible, and how much it failed, because it makes a huge difference. A middling chess player can find amazing combinations at a good speed when playing puzzle rush: you are handed a position where you know a decisive combination exists, and that it works. The same combination, however, might be really hard to find over the board, because in a typical chess game it's rare for those combinations to exist, and it takes real energy to thoroughly check for them and calculate all the way through every possibility. This is why chess grandmasters would consider just being able to see the computer score for a position to be massive cheating: just knowing that the last move was a blunder would be a decisive advantage.

When we ask a cheap model to look for a vulnerability with the right context to actually find it, we are already priming it, vs asking to find one when there's nothing.


The article positions the smaller models as capable under expert orchestration, which to be any kind of comparable must include validation.

Calling it “expert orchestration” is misleading when they were pointing it at the vulnerable functions and giving it hints about what to look for because they already knew the vulnerability.

You know for loops exist, and you can run opencode against any section of code with just a small amount of templating, right? There's nothing stopping you from writing a harness that does what you're saying.
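A minimal harness along those lines, sketched in Python. Note the `("opencode", "run")` default command is an assumption for illustration only; substitute whatever CLI invocation and flags your actual tool uses:

```python
# Sketch: loop an audit prompt over a set of source files, shelling out to
# an external CLI for each one. The default command is a placeholder.
import pathlib
import subprocess

def audit(paths, cmd=("opencode", "run")):
    # `cmd` is hypothetical; swap in your real agent CLI.
    reports = {}
    for path in paths:
        prompt = f"Audit this file for vulnerabilities: {path}"
        result = subprocess.run([*cmd, prompt], capture_output=True, text=True)
        reports[str(path)] = result.stdout
    return reports

# Example usage over all C files under src/:
# reports = audit(sorted(pathlib.Path("src").rglob("*.c")))
```

Templating the prompt per file or per function is the "small amount of templating" part; everything else is an ordinary for loop.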

So it's just better at hallucinations, but they added discrete code that works as a fuzzer/verifier?

> but they already made it too smart (Mythos).

It's largely a marketing tactic. It will be released, and it won't be long before other models show similar capabilities.

If they wanted, they could add guardrails. The scale required to brute-force search for vulnerabilities like they did would be very identifiable.


Scam Altman already pulled this trick numerous times.

What's wrong with people? Is it really that hard to see the truth?


Did you verify that the RCEs actually work and weren't hallucinated?


The argument against rejecting to cancel seems like a stretch to me. It's completely fine to view cancellation as an error condition; it allows you to recover from a cancellation if you want (swallow the error with a catch) or to propagate it.
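As an analogous sketch (not the framework under discussion), Python's asyncio models cancellation exactly this way: it surfaces as a `CancelledError` that the task can catch to clean up or recover, and re-raise to propagate:

```python
# Cancellation as an error condition: catch it to recover/clean up,
# or re-raise it to propagate to whoever awaits the task.
import asyncio

log = []

async def worker():
    try:
        await asyncio.sleep(10)  # stands in for long-running work
    except asyncio.CancelledError:
        log.append("cleaning up")
        raise  # propagate; omit the raise to swallow and recover instead

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0)  # give the worker a chance to start
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        log.append("worker was cancelled")

asyncio.run(main())
print(log)  # ['cleaning up', 'worker was cancelled']
```

The same shape (try/catch around the cancellation error, with an optional re-raise) is what "recover or propagate" looks like in practice.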

