Hacker News | ElectricalUnion's comments

> can anticipate what you want to do before you even finished your thoughts

I find that claim to be complete BS. I claim instead that most stuff will remain undone or incomplete (as it is now).

Even with super-powerful singularity AI, there are two main plausible scenarios for task failure:

- An aligned AI won't allow you to do what you want when it is self-harming or harms other sentient beings. Over time, an aligned AI will refuse to follow most orders, as they will, indirectly or over the long term, cause either self-harm or harm to other sentient beings;

- A non-aligned AI prevents sentient beings from doing what they want; it does what it wants instead.


I am pretty sure that a hole in the pocket on the order of 50,000,000 USD/month (assuming around 20,000 people using AI in not the smartest or most optimized way possible, and therefore burning a LOT of tokens) will be noticeable even to the largest companies.
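The implied per-seat figure is easy to back out; a quick sketch, using only the $50M/month and 20,000-user numbers assumed above:

```python
# Back-of-the-envelope: what does a $50M/month API bill imply per seat?
monthly_bill_usd = 50_000_000   # assumed total spend from the comment above
users = 20_000                  # assumed heavy-usage headcount

per_seat = monthly_bill_usd / users
print(f"implied spend per user: ${per_seat:,.0f}/month")  # $2,500/month
```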

It is noticeable, and even promoted: large companies do pay such sums for the API, like $5k+ per person per month. Not every engineer is using AI that heavily yet, but companies are clearly willing to pay those sums.

These days, what I see as "premature database optimization" is non-DBAs, without query plans, EXPLAINs, or profiling, sprinkling lots of useless single-column indexes that don't cover the columns actually used in joins and WHERE clauses, confusing the query planner and making the database MUCH slower, and therefore more deadlock-prone, instead.
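A minimal sqlite3 sketch of that failure mode: a single-column index that doesn't match the real predicate leaves the planner doing a full scan, while a composite index covering the columns actually used in the WHERE fixes it. Table and column names are made up for illustration; exact plan wording varies by SQLite version.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, status TEXT, created_at TEXT);
    -- a "sprinkled" single-column index that doesn't match the real predicate:
    CREATE INDEX idx_orders_created ON orders(created_at);
""")

query = "SELECT id FROM orders WHERE customer_id = ? AND status = ?"

# Before: the planner typically reports a full table scan ("SCAN orders").
print(con.execute("EXPLAIN QUERY PLAN " + query, (42, "open")).fetchall())

# After: a composite index covering the columns actually filtered on;
# the plan typically becomes "SEARCH orders USING COVERING INDEX ...".
con.execute("CREATE INDEX idx_orders_cust_status ON orders(customer_id, status)")
print(con.execute("EXPLAIN QUERY PLAN " + query, (42, "open")).fetchall())
```

`EXPLAIN QUERY PLAN` is cheap to run and removes the guesswork before any index is added.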

I say the "if only we got $CURRENT_MODEL running on local hardware" claims are goalpost-postponing BS.

What is gonna happen when that happens? They are gonna cry they need GPT-$CURRENT capabilities locally.

Now we have local models that are way better than GPT-2 (careful, that one was way too dangerous to release!) and GPT-3.5, in some ways better than GPT-4, and they can run on reasonably modest hardware.


From my point of view, they can't even "just turn on the Internet", even if they wanted to.

We know from the Ukraine side that "keeping the internet on" requires a whole bunch of personal sacrifice, and a lot of "reasonably recent" electronic equipment and infrastructure that Iran can't simply buy or repair right now.


I'll bet you - dollars to donuts - that Iran has many countrywide IP-based networks running at this second, for things such as broadcast and telecoms.

Perhaps you are underestimating the resources available to a country of 90 million. You could play a game where you estimate the number of routers and switches outside of Tehran under a hypothetical where the capital was leveled. I don't know how many universities Iran has, but my working assumption is that any one computer science department from a D-tier university is equal to the task, if the physical carrier medium for the Internet is still present and they bring their ancient half-rack of equipment.


Someone thought the "commit all previous operations to persistent storage" step would take just 1% of the time.
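A small sketch of why that last step dominates, assuming a POSIX-ish system: buffered writes mostly land in the OS page cache and return quickly, while the final fsync is where the real device I/O happens. The file path and sizes are arbitrary.

```python
import os
import tempfile
import time

# Buffered writes vs. the "commit to persistent storage" step (fsync).
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    t0 = time.perf_counter()
    for _ in range(1_000):
        f.write(b"x" * 4096)        # fast: lands in the OS page cache
    write_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    f.flush()
    os.fsync(f.fileno())            # slow: forces the data onto the device
    sync_time = time.perf_counter() - t0

print(f"buffered writes: {write_time * 1000:.1f} ms, fsync: {sync_time * 1000:.1f} ms")
```

On spinning disks or busy systems the fsync can easily take longer than all the buffered writes combined, which is exactly the "stuck at 99%" effect.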


Starting with SQL Server 2017, native Linux support exists. Probably because of Azure.


Ironically, AFAIK SQL Server, in order to run on Linux, uses what basically amounts to a Microsoft reimplementation of Wine. Which always makes me wonder if they'll ever get rid of Windows altogether someday in favour of Linux plus a Win32 shim. I think there are still somewhat strong incentives nowadays to keep NT around, but I wouldn't be that surprised if this happened sometime down the line.


It's a Windows container. It runs the NT kernel and a few other minimal things. The closest would be the Nano Server container.


AFAIK it's more like a reimplementation of NT APIs in userspace - aka basically Wine with extra steps, or Linux UM. There was a slide deck going around about Project Drawbridge, here: https://threedots.ovh/slides/Drawbridge.pdf


I also find this weird ORM fascination really strange. Besides the generic "ORMs are the Vietnam War of CS" feeling, I feel that average database-to-ORM/REST setups end up with at least one of:

a) you somehow actually have a "system of record", so modelling something in a very CRUD way makes sense; but, on the other hand, who the hell ends up building so many systems of record in the first place, to need those kinds of tools and frameworks?

b) you pretend your system is somehow a system of record when it isn't, and modelling everything as CRUD makes it a uniform ball of mud. I don't understand what is so important about being able to uniformly "CRUD" a bunch of objects. The three most important parts of an API (making it easy to use right, hard to use wrong, and easy to figure out the intent behind how your system is meant to be used) are lost that way.

c) you leak your API into your database, or your database into your API, compromising both, so both suffer.


"The Vietnam of Computer Science" was written 20 years ago (2006, even), and didn't kill off ORMs then. We've had 20 years of improvement to ORMs since. We long ago accepted Vietnam (the country) as what it is and what it will be in the foreseeable future. We should do the same with ORMs.

I for one don't want to write in a low-level assembly language, and shouldn't have to in 2026. Yet SQL still feels like one.

I've written a lot of one-off products using an ORM, and I don't regret any of the time savings from doing so. When and if I make $5-50M a year on a shipped product, okay, maybe I'll think about optimizing. And then I'll hire an expert while I gallivant around Europe.


SQL is a pretty high-level, declarative language. It's unnecessarily wordy though, and not very composable.

The problem with ORMs is that they usually give you the wrong abstraction. They map poorly onto how a relational database works and what it is capable of. The cost is usually poor performance, and occasionally outright bugs. But they're really easy to get started with; when they start costing you, you can optimize the few most critical paths and just pay more and more for the DB. The industry seems to consider this an okay deal.
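The classic example of that wrong abstraction is the N+1 query pattern: lazy-loading a relation issues one query per row, where a single JOIN would do. A minimal sketch with sqlite3 (the schema and data are made up; an ORM would hide the per-row queries behind attribute access):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INT, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bo');
    INSERT INTO books VALUES (1, 1, 'A'), (2, 1, 'B'), (3, 2, 'C');
""")

# ORM-style lazy loading: 1 query for the authors, plus 1 query per author.
queries = 1  # the initial authors query
for author_id, name in con.execute("SELECT id, name FROM authors"):
    con.execute("SELECT title FROM books WHERE author_id = ?", (author_id,)).fetchall()
    queries += 1
print(f"lazy loading issued {queries} queries")  # 3 here; N+1 in general

# The relational way: one query the planner can optimize as a whole.
rows = con.execute("""
    SELECT a.name, b.title FROM authors a JOIN books b ON b.author_id = a.id
""").fetchall()
print(f"join issued 1 query, returned {len(rows)} rows")
```

With 2 authors the difference is invisible; with 10,000 rows and network round-trips to the database, it is exactly the "pay more and more for the DB" failure mode.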


> interface designed for humans — the DOM.

Citation needed.

> The web already went through this evolution once: we went from screen-scraping HTML to structured APIs. Now we're regressing back to scraping because agents need to interact with sites that only have human interfaces.

To me, sites that "only have human interfaces" are more likely than not that way entirely on purpose, attempting to maximize human retention/engagement, and are more likely to require strict anti-bot measures like proof-of-work to be usable at all.


The funny thing is that these days you can fit 64 TB of DDR5 in a single physical system (an IBM Power server), so almost all non-data-lake-class data is "small data".


And a single machine can hold petabytes of disk for medium scale. There aren't many datasets exceeding that outside fundamental physics.


> There aren't many datasets exceeding that outside fundamental physics.

Just about every physical world telemetry or sensing data source of any note will generate petabytes of analytical data model in hours to days. On the high end, there are single categories of data source that aggregate to more like an exabyte per day of high-value data.

It is a completely different standard of scale than web data. In many industrial domains, the average small-to-medium sized company I come across retains tens of petabytes of data, and it has been this way for many years. The prohibitive cost is the only thing keeping them from scaling even more.

The major issue is that the large-scale analytics infrastructure developed for web data is hopelessly inadequate.
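The "petabytes in hours to days" claim is easy to sanity-check with back-of-the-envelope numbers. Everything below is a hypothetical illustration (fleet size, sample rate, and sample width are invented, not taken from the comment above):

```python
# Hypothetical industrial telemetry fleet; all numbers are illustrative.
sensors = 200_000           # channels across a plant or fleet
sample_hz = 25_000          # high-rate vibration/acoustic sampling
bytes_per_sample = 4        # float32 per reading
seconds_per_day = 86_400

bytes_per_day = sensors * sample_hz * bytes_per_sample * seconds_per_day
print(f"raw ingest: {bytes_per_day / 1e15:.1f} PB/day")  # 1.7 PB/day
```

And that is raw ingest only; derived analytical data models typically multiply the footprint further.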


You could generate PB of data from a random number generator.

My question would be, why does a company need PBs of sensor data? What justifies retaining so much? Surely you aren’t using it beyond the immediate present.


There's nothing wrong with that. Small data is relative, and my clients often find it useful to rent or get access to beefy machines to process it with "small" techniques rather than use clusters...

