> People tend to behave if they know they are being watched

At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus...


> The privacy absolutists will tell you that license plate cameras are “Orwellian.” But here’s what I know: unsolved crime means more innocent people get hurt and maimed and killed. Flock has audit trails. There’s accountability. The people who benefit from keeping murders unsolved aren’t victims—they’re criminals.

jesus christ. assuming he's not going to start syndicating this, who is this even pandering to?


  The only question is whether your city has the courage to use it.

  Take Action

  Share this with your city officials—demand they adopt Flock Safety

Unless I missed it, they don't even bother with the pretense of disclosing his financial self-interest in promoting Flock anywhere on the site.

Yes, and you can get fresh tomatoes any time of year for cheap and they're so firm they won't get damaged in transit and with a blast of ethylene they're a perfect shade of red when you buy them.

All things unquestionably better than the past. What's there to complain about?


> All things unquestionably better than the past. What's there to complain about?

Taste and nutritional value are worse than in the past. Arguably the two most important things when it comes to food.


> Yes, and you can get fresh tomatoes any time of year for cheap and they're so firm they won't get damaged in transit and with a blast of ethylene they're a perfect shade of red when you buy them

My family calls those "water balloons".


> but monocular depth estimation was spectacularly good by 2021

It's still rather weak, and true monocular depth estimation really wasn't spectacularly anything in 2021. It's fundamentally ill-posed, and any priors you use to get around that will come back to bite you in the long tail of things some driver will encounter on the road.

The way it got good is by using camera overlap in space and over time while in motion to figure out metric depth over the entire image. Which is, humorously enough, sensor fusion.
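
To put the "metric" part in concrete terms, here's a toy sketch (hypothetical numbers, assuming a calibrated pair of views with a known baseline) of why overlap between views pins down absolute scale in a way a single frame can't:

  # Toy Python illustration, not from any particular paper: with two views
  # and a known baseline, depth falls out of similar triangles, so the
  # scale is metric. A single image gives disparity-like cues but no
  # baseline, hence the scale ambiguity. All numbers are made up.
  focal_px = 1000.0     # focal length in pixels (assumed calibration)
  baseline_m = 0.5      # distance between the two camera positions, metres
  disparity_px = 8.0    # shift of the same point between the two views

  depth_m = focal_px * baseline_m / disparity_px
  print(depth_m)        # 62.5 m: metric, because the baseline is known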


It was spectacularly good before 2021; 2021 is just when I noticed that it had become spectacularly good. 7.5 billion miles later, this appears to have been the correct call.

What techniques (and the corresponding papers) do you consider to have been spectacularly good for depth estimation before 2021, monocular or not?

I do some work tangential to this field for applications in robotics, and I would consider (metric) depth estimation (and 3D reconstruction) to have only started being solved by 2025, thanks to a few select labs.

Car vision has some domain specificity (high-similarity images from adjacent timestamps, relatively simpler priors, etc.) that helps, indeed.


> They've said this directly and analysts agree [1]

chasing down a few sources in that article leads to articles like this one at the root of the claims[1], which is entirely based on information "according to a person with knowledge of the company's financials", which doesn't exactly fill me with confidence.

[1] https://www.theinformation.com/articles/openai-getting-effic...


"according to a person with knowledge of the company’s financials" is how professional journalists tell you that someone who they judge to be credible has leaked information to them.

I wrote a guide to deciphering that kind of language a couple of years ago: https://simonwillison.net/2023/Nov/22/deciphering-clues/


Unfortunately, tech journalists' judgement of source credibility doesn't have a very good track record.

But there are companies which are only serving open-weight models via APIs (i.e. they are not doing any training), so they must be profitable? Here's one list of providers from OpenRouter serving Llama 3.3 70B: https://openrouter.ai/meta-llama/llama-3.3-70b-instruct/prov...
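
For context, "serving" here just means exposing a hosted inference endpoint; a minimal sketch of what calling one of those Llama 3.3 70B providers might look like, assuming OpenRouter's OpenAI-compatible chat completions API and a placeholder API key:

  # Minimal sketch, assuming OpenRouter's OpenAI-compatible endpoint;
  # the model slug matches the provider list linked above.
  import requests

  resp = requests.post(
      "https://openrouter.ai/api/v1/chat/completions",
      headers={"Authorization": "Bearer <YOUR_OPENROUTER_KEY>"},  # placeholder key
      json={
          "model": "meta-llama/llama-3.3-70b-instruct",
          "messages": [{"role": "user", "content": "Say hello in one sentence."}],
      },
  )
  print(resp.json()["choices"][0]["message"]["content"])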

> The Z-80 microprocessor could address 64kb (which was 65,536 bytes) on its 16-bit address bus.

I was going to say that what it could address and what they called what it could address is an important distinction, but found this fun ad from 1976[1].

"16K Bytes of RAM Memory, expandable to 60K Bytes", "4K Bytes of ROM/RAM Monitor software", seems pretty unambiguous that you're correct.

Interestingly, Wikipedia at least implies the IBM System/360 popularized the base-2 prefixes[2], citing their 1964 documentation, but I can't find any use of it in there for the main core storage docs they cite[3]. Amusingly, the only use of "kb" I can find in the PDF is for data rate off magnetic tape, which is explicitly defined as "kb = thousands of bytes per second", and the only reference to "kilo-" is for "kilobaud", which would have again been base-10. If we give them the benefit of the doubt on this, presumably it was from later System/360 publications where they would have had enough storage to need prefixes to describe it.

[1] https://commons.wikimedia.org/wiki/File:Zilog_Z-80_Microproc...

[2] https://en.wikipedia.org/wiki/Byte#Units_based_on_powers_of_...

[3] http://www.bitsavers.org/pdf/ibm/360/systemSummary/A22-6810-...
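
For anyone who wants to sanity-check the arithmetic being argued about, a quick sketch of the 16-bit address space against the two readings of "64K" (illustrative only):

  # The Z-80's 16-bit address space vs. the two readings of the "kilo" prefix.
  address_space = 2 ** 16     # 65,536 addressable bytes
  base2_64k  = 64 * 1024      # 65,536 -> matches the address space
  base10_64k = 64 * 1000      # 64,000 -> the SI (base-10) reading
  print(address_space, base2_64k, base10_64k)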


> I will say there's a MASSIVE cost to getting power infrastructure, land, legal stuff done on terra firma; all that just sort of .. goes away when you're deploying to space, at least if you're deploying to space early and fast.

You need both power infrastructure and structures to build within for deploying in space too. And you have to build them and then put it all into space.

Cost per square foot of land is not high enough, basically anywhere you could build a datacentre, to offset that.


Well, some stuff you either don't need or just can't have, so you do something different. For instance, transformers to convert grid power: no grid, no transformers. Those have something like a 36-month wait list in the US right now. And solar is something like 2x as efficient in space.

I agree those don't immediately seem like huge wins to me; not dealing with local politics might be a big one, though, depending on location. There's a lot of red tape in the world.


They don’t need the grid if they’re deploying their own solar. I find it exceedingly unlikely that there is nowhere in the U.S., much less the world, that they couldn’t use some of Tesla’s battery experience to deploy a boatload of solar panels and batteries for less than the launch costs, and then have something which can be serviced or upgraded in place.

$/sq foot or meter belies the cost of dealing with every regulatory agency that has claim on that area. There's no environmental commission you've got to pay off if your satellite starts leaking noxious chemicals all over the place, the same way you'd have to if you spilled something at NUMMI in Fremont, California.

> It's not just a converter, it's a gui with the tools needed to facilitate a quick manual conversion.

is this like a meta-joke?

> I have a prime example of this were my company was able to save $250/usr/mo for 3 users by having Claude build a custom tool for updating ancient (80's era) proprietary manufacturing files to modern ones.

The funny thing about examples like this is that they mostly show how dumb and inefficient the market is with many things. This has been possible for a long time with, you know, people, just a little more expensive than a Claude subscription, but it would have paid for itself many times over through the years.


It's not just a joke, it's a meta-joke! To address the substance of your comment, it's probably an opportunity cost thing. Programmers on staff were likely engaged in what was at least perceived as higher value work, and replacing the $250/mo subscription didn't clear the bar for cost/benefit.

Now with Claude, it's easy to make a quick and dirty tool to do this without derailing other efforts, so it gets done.


> Programmers on staff were likely engaged in what was at least perceived as higher value work, and replacing the $250/mo subscription didn't clear the bar for cost/benefit.

Agreed absolutely, but that's also what I'm talking about. It's very clear it was a bad tradeoff. Not only $250/month x three seats, but also the opportunity cost of personnel tied up doing "2-3 files a day" when they could have been doing "2-3 files an hour".

Even if we take at face value that there are no "programmers" at this company (with an employee commenting on hacker news, someone using Claude to iterate on a GUI frontend for this converter, and apparently enough confidence in Claude's output to move their production system to it), there are a million people you could have hired over the last decade to throw together a file conversion utility.

And this happens all the time in companies where they don't realize which side of https://xkcd.com/1205/ they're on.

It's great if, like personal projects people never get started on, AI shoves them over the edge and gets them to do it, but we can also be honest that they were being pretty dumb for continually spending that money in the first place.


We have no programmers on staff; we are not a tech company.

I know we are in a bubble here, but AI has definitely made its way out of silicon valley.


The problem with this reasoning is it requires assuming that companies do things for no reason.

However possible it was to do this work in the past, it is now much easier to do it. When something is easier it happens more often.

No one is arguing it was impossible to do before. There's a lot of complexity and management attention and testing and programmer costs involved in building something in-house, such that you need a very obvious ROI before you attempt it, especially since in-house efforts can fail.


> There's a lot of complexity and management attention and testing and programmer costs involved in building something in-house, such that you need a very obvious ROI before you attempt it, especially since in-house efforts can fail.

I wonder how much of the benefit of AI is just companies permitting it to bypass their process overhead. (And how many will soon be discovering why that process overhead was there.)


Sure, there's a lot of process that is entirely justified, but there's also a whole lot of process that exists for reasons that are no longer relevant or simply because there are a lot more people whose job it is to make process than whose job it is to stop people from making too much process.

> The problem with this reasoning is it requires assuming that companies do things for no reason

Experience shows that that's the case at least 50% of the time.


> No one is arguing it was impossible to do before. There's a lot of complexity and management attention and testing and programmer costs involved in building something in-house, such that you need a very obvious ROI before you attempt it, especially since in-house efforts can fail.

I mean, I'm absolutely familiar with how company decision-making and inertia can lead to these things happening (it happens constantly, and the best time to plant a tree is today and all that), but the ex post facto rationalizations ring pretty hollow when the solution was apparently vibecoded with no programmers at the company, immediately saved them $750 a month, and improved their throughput by 8x.

Clearly it was a very bad call not to have someone spend a couple of days looking into the feasibility of this 10 years ago.


> Even if device manufacturers want to support devices forever it won’t matter if the actual SoC platform drops support.

Yeah, so that's not a why, that's a how (and it's not necessary or sufficient anymore, see the Samsung and Pixel reference).

The why seems very much what the article covers.


> No more discussions about interesting problems and creative solutions that people come up with. It's all just AI, agentic, vibe code.

And then you give in and ask what they're building with AI, that activation energy finally available to build the side project they wouldn't have built otherwise.

"Oh, I'm building a custom agentic harness!"

...

