Hacker News | makeworld's comments

Is that not insane?


Both Karpathy and Musk have explained it many times. The additional sensors add more noise than signal in the end. You also then have to decide which sensor system is correct any time they disagree. Also, the entire road system is designed for vision: lidar cannot read signs, see colors, etc. Humans can drive with two eyes; it's not insane to think computers can do it with 7 or 8 cameras.

As someone who has used Tesla FSD iterations for 4 years, their current system is quite incredible, and improving rapidly. It drives for me 95% of the time already.


And that last 5% is the toughest nut to crack. There is a reason Waymo is way ahead, even if they cannot scale. Cameras are passive devices with relatively poor dynamic range and low-light behavior. They are nowhere near a match or replacement for the human eye. Just try to photograph a 5-year-old at dusk or indoors: what you see will not be what you get.


Agreed that the last few percentage points are exponentially more difficult each step of the way. What's your metric for saying Waymo is ahead, in terms of tech? They are strictly geofenced, limited to specific road types, and often get stuck or confused. Also, their system is very expensive and not scalable to millions of cars. Your point about cameras seems odd: cameras have much better low-light performance than human eyes. And cars have headlights.


Waymo already has a driverless taxi service in a major US city and is expanding. Tesla is still in the process, and again, that is only if they cover the last 5%. Scalability arguments won't matter if they cannot launch such a service at all. And no, CMOS cameras are close but are not better than the human eye in low light, unless you have an IR camera and can flood everywhere with active IR lights; they are certainly inferior in dynamic range. I have been doing vision for more than two decades and I would not be comfortable in a camera-only robotaxi at high speed, certainly not at night or under adverse weather conditions. But this is all speculation, of course. Considering that fully autonomous driving at scale has been a major unrealised promise for the past 10 years, I stand by my assessment until I see a major advancement in camera technology or affordable active sensors.



Can't wait for someone to buy boolean.exposed and teach me about some esoteric representation of booleans in memory that I'd never considered (either that or it's a very simple page).


...authored by [Julia Evans](https://jvns.ca) of tech-explainer fame :)


Get SMILE instead.


Yes, for anyone considering eye surgery, at least research SMILE and other alternative surgeries and upcoming surgical techniques.


What was the preferred way of doing FFT at that time?


Hasn't the preferred way been Cooley-Tukey consistently since 1965?

https://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algor...
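For readers who haven't seen it, here is a minimal recursive radix-2 Cooley-Tukey sketch in Python (illustrative only; real libraries use iterative, mixed-radix implementations):

```python
import cmath

def fft(x):
    # Recursive radix-2 Cooley-Tukey: split the input into even- and
    # odd-indexed halves, recurse, then combine with twiddle factors.
    # The input length must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    twiddled = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)]
            + [even[k] - twiddled[k] for k in range(n // 2)])

# A constant signal puts all its energy in the DC bin: ~[4, 0, 0, 0]
print(fft([1, 1, 1, 1]))
```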


Bingo. We certainly learned about Cooley-Tukey in undergrad back then. That power station was 100% Hitachi Heavy Industries turnkey. The control rooms had Hitachi mainframe and some minicomputers running proprietary real time OS (I guess). These were the days when the video controller for a colour industrial process control raster display CRT was a waist-high cabinet. So you'd transduce the flicker and then transmit it via analogue current loop to a rack in the control room annex, convert back to voltage, A/D it... and crunch the FFT on one of the control room computers. Something like that. Cheap distributed compute just wasn't a thing at the time.


this is so cool, thanks for sharing :)


Notably, Anthropic does not do this with Claude.

https://docs.anthropic.com/en/docs/claude-code/data-usage


Glad I contributed to this in some small way.


Same, it's nice to see my username on the leaderboards.

Even though all I did was set up the Docker container one day and forget about it.



It’s hard to call that standard; it’s just the latest HN Rust-craze idolisation.


Sure, but it’s demonstrably better than Poetry, which was the best until uv.

If uv isn’t a standard, it’s because not enough people have tried it. It is obscenely good at its job.


uv is young, unstable, and still rough in the details. It ships multiple updates per month; nearly every week there are significant enhancements and bug fixes. It's not mature enough to be a standard yet, even though what it already offers is excellent. But let it grow; change needs time.


False dichotomy. Been using pipenv for 8 years. At first it was a bit too slow, but at this moment it gets the job done.


   standard
You keep using that word, I don't think it means what you think it means.


"Prepare to die."


uv is an excellent piece of software regardless of the language used to write it. Really, if you do python, it's worth giving it a try, especially script mode.
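To illustrate "script mode" for anyone who hasn't tried it: uv understands PEP 723 inline metadata at the top of a single-file script and runs it in a throwaway environment with the declared dependencies installed. A minimal sketch (the `requests` dependency is there only to show the syntax; the body sticks to the stdlib, so the file also runs under plain `python`):

```python
# /// script
# requires-python = ">=3.9"
# dependencies = ["requests"]
# ///
# Invoked as `uv run hello.py`, uv parses the comment block above
# (PEP 723 inline metadata), installs the listed dependencies into an
# ephemeral environment, and runs the script there -- no manual venv
# or pip step.
import sys

print(f"hello from Python {sys.version_info.major}.{sys.version_info.minor}")
```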


Sure, but its quality, or your/my opinion, doesn’t make it ‘standard’, even if it will be some day in the future.


Eh, I’m not so sure.

We didn’t see adoption nearly this fast for poetry, pipenv, or conda (or hatch or PDM, but I never saw those as even reaching critical mass in the first place).

Those tools got pretty popular, but it took a long time and most folks found them to have a lot of tradeoffs (miles better than Python’s first party tooling, but still).

I’m not seeing that with “uv”. Other than concerns about Astral’s stewardship model (which could be valid!), I’m not seeing widespread “it works but is hard to use” dissatisfaction with it the way I do with, say, Poetry.

Couple that with uv durably solving the need for pyenv/asdf/mise by removing the pain of local interpreter compilation entirely, and I do think that adds up to uv being fundamentally different in popularity or approach compared to prior tools. Is that “different” the same as “better”? Time will tell.

As to being written in Rust? Shrug. A ton of shops for whom uv has been transformative don’t even know or care what language it’s written in. Being Rust provides, in my opinion, two benefits: a) avoiding chicken-and-egg problems by writing the tool for managing a programming language environment in a different language that is b) not bash.


> avoiding chicken-and-egg problems by writing the tool for managing a programming language environment in a different language

I've heard this a lot, but I don't really understand the use case. It seems that people want to do development in Python, want to install and manage third-party Python packages, and know how to use command-line tools, but somehow they don't already have Python installed and would find it challenging to install directly? Building from source on Linux is a standard "get dev packages from the system package manager; configure, make and install" procedure that I've done many times (yes, putting it beside the system Python could break things, but you can trivially set an alternate install prefix, and anyway the system Python will usually be a version that meets the basic needs of most developers). Installing on Windows is a standard Windows installer experience.

Aside from that, people seem to imagine "chicken-and-egg" scenarios with pip making itself available in the environment. But this is a thoroughly (if inefficiently) solved problem. First off, for almost three years now pip has been able to install cross-environment (albeit with an ugly hack; I detail some of this in https://zahlman.github.io/posts/2025/01/07/python-packaging-...). Second, the standard library `venv` defaults to bootstrapping pip into new environments — taking advantage of the fact that pre-built Python packages are zip archives, and that Python has a protocol for running code from zip archives, which the pip package implements.
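The `venv` bootstrap mentioned above is easy to see from the stdlib itself — a sketch, assuming an interpreter that ships `ensurepip` (some Linux distros split it into a separate package):

```python
import os
import subprocess
import tempfile
import venv

# with_pip=True triggers the stdlib's ensurepip bootstrap, which
# installs pip into the new environment from a wheel bundled with the
# interpreter -- no network access involved.
env_dir = tempfile.mkdtemp()
venv.create(env_dir, with_pip=True)

bindir = "Scripts" if os.name == "nt" else "bin"
result = subprocess.run(
    [os.path.join(env_dir, bindir, "python"), "-m", "pip", "--version"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "pip 24.0 from /tmp/.../site-packages/pip"
```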

The real bootstrapping issue I've heard about is https://github.com/pypa/packaging-problems/issues/342 , but this affects very few people — basically, Linux distro maintainers who want to "build" an entire Python toolchain "from source" even though it's all Python code that the runtime could bytecode-compile on demand anyway.


Whether or not you like it, and whether or not it is better, it is not in the standard lib, and is not the way the vast majority of people install python libraries.


You forgot to update your HN craze list. Zig is chic, Rust is out.


No, it’s not. Everywhere I see, uv is adopted.


We're like a year into the uv hype cycle. It needs enough time to make sure it solves the issues of its predecessors.

So what if uv is everywhere you look? So were Poetry, pipenv, and so on. Give it time.


Tried all of them, dropped all of them. Stuck with uv, I’ll take my chances.


I'm using uv but in corporate places I'm seeing just Conda.


What is Astral's business model?


Yes. Try uv and never look back.


You still need pip-tools in a uv environment.


What for (honest question)? Doesn't uv handle locking?


uv for project management and pipx for user-/system-wide tool installation.


uv handles that too with "uv tool".


But does it create completely isolated, updatable tools possessing all of pipx functionality?


Yep!


In practice, the "ticket" provided by dumbpipe contains your machine's IP and port information. So I believe two machines could connect without any need for discovery infra, in situations that use tickets. (And have UPnP enabled or something.)

See also https://www.iroh.computer/docs/concepts/discovery


OK so given

    $ ./dumbpipe listen
    ...
    To connect use: ./dumbpipe connect nodeecsxraxj...
that `nodeecsxraxj...` is a serialized form of some data type that includes the IP address(es) that the client needs to connect to?

forgive me for what is maybe a dumb question, but if this is the case, then what is the value proposition here? is it just the smushing together of some IPs with a public key in a single identifier?


The value proposition of the ticket is that it is just a single string that is easy to copy and paste into chats and the like, and that it has a stable text encoding which we aim to stay compatible with for some time.

We have a tool https://ticket.iroh.computer/ that allows you to see exactly what's in a ticket.


a URL is also a single string that's easy to copy and paste, the question I have is how these strings get resolved to something that I can connect to

if you need to go thru a relay to do resolution, and relays are specified in terms of DNS names, then that's not much different than just a plain URL

if the string embeds direct IPs then that's great, but IPs are ephemeral, so the string isn't gonna be stable (for users) over time, and therefore isn't really useful as an identifier for end users

if the string represents some value that resolves to different IPs over time (like a DNS entry) but can be resolved via different channels (like thru a relay, or via a blockchain, or over mdns, or whatever) then that string only has meaning in the context of how (and when) it was resolved -- if you share "abcd" with alice and bob, but alice resolves it according to one relay system, and bob resolves it according to mdns, they will get totally different results. so then what purpose does that string serve?


The value prop is that dumbpipe handles encryption, reconnection, UPnP, hole punching, relays, etc. It's not something I could easily replicate with netcat, for example.


ngrok and tailscale and lots of other services offer all of these capabilities, the only unique thing of this one seems to be the opaque string identifiers + some notion of "decentralization" which is what I'm trying to understand, particularly in the realm of how discovery works


You could pipe to bash?


Ah right, but this does not support bidirectional streaming so I won't be able to get the remote stdout on the client, I guess.


Couldn’t you just pipe the stdout to another dumbpipe?


Not a very friendly API


This works:

Remote:

  $ socat TCP-LISTEN:4321,reuseaddr,fork EXEC:"bash -li",pty,stderr,setsid,sigint,rawer&
  $ dumbpipe listen-tcp --host 127.0.0.1:4321
  using secret key fe82...7efd
  Forwarding incoming requests to '127.0.0.1:4321'.
  To connect, use e.g.:
  dumbpipe connect-tcp nodeabj...wkqay

Local:

  $ dumbpipe connect-tcp --addr 127.0.0.1:4321 nodeabj...wkqay&
  using secret key fe82...7efd
  $ nc 127.0.0.1 4321
  root@localhost:~#


No need to muck around with SQL, just use Kiwix.

