
> But they need to FEEL the weather

You mean they need to smell it!


The key limitation (at the moment) is that it only supports a single connection. We're planning to lift that limitation though.

This is what I'm most interested in. I have an application with a smaller, trimmed-down client version that shares a lot of code with the larger full version. Part of that code is query logic, and it's very dependent on multiple connections; even the simplest transactions will deadlock without them. Right now, if one wants to use the Postgres option, Postgres needs to be manually installed and connected, which is a mess. It would be the dream to have a way to easily ship Postgres in a small-to-medium-sized app in an enterprise-Windows-sysadmin-friendly way and be able to use the same Postgres queries.

Was going to ask exactly about that. Thanks for sharing. Looking forward to it!

This is such awesome work! We *are* going to get this integrated with the ongoing work for "libpglite".

You can use http://electric-sql.com to sync into PGlite in the browser from postgres. There are docs here: https://pglite.dev/docs/sync

There are a few people using it in prod for customer facing web apps.

Extensions are also available - we have a list here: https://pglite.dev/extensions/. We would love to extend the availability of more, some are more complex than others though. We are getting close to getting PostGIS to work, there is an open PR that anyone is welcome to pick up and hack on.


We have a long-running research project with the intention of creating a "libpglite" with a C FFI, compiled as a dynamic library for native embedding. We're making steady progress towards it.

It's now used by a huge number of developers for running local dev environments and emulating server products (Google Firebase and Prisma both embed it in their CLI). Unit testing Postgres-backed apps is also made significantly easier with it.

Hey everyone, I work on PGlite. Excited to see this on HN again.

If you have any questions I'll be sure to answer them.

We recently crossed a massive usage milestone with over 3M weekly downloads (we're nearly at 4M!) - see https://www.npmjs.com/package/@electric-sql/pglite

While we originally built this for embedding into web apps, we have seen enormous growth in devtools and developer environments - both Google Firebase and Prisma have embedded PGlite into their CLIs to emulate their server products.


This looks really interesting...but why WASM-only? Naively it seems like WASM-ification would be a 2nd step, after lib-ification.

Obviously missing something...


If I understand correctly, what this project does is take the actual postgresql sources, which are written in C, compile them to wasm and provide typescript wrappers. So you need the wasm to be able to use the C code from js/ts.

Yes. I would like to use the code as a library from something other than js/ts.

You can use it in Rust if you like. I've used pglite through wasmer before. Also [pglite-oxide](https://lib.rs/crates/pglite-oxide) is pretty usable.

Sounds like you only need to create the APIs for calling into WASM, so as long as your language of choice can do that, you're good to go.
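A minimal sketch of that idea in Node-flavoured JavaScript: once C code has been compiled to a wasm module, calling into it only requires your runtime's wasm API. The hand-assembled module below (a trivial `add` export) is a hypothetical stand-in for a real build like PGlite's, which would of course also need its Emscripten-provided imports wired up.

```javascript
// A tiny hand-assembled wasm module exporting add(a, b) -> a + b.
// Stand-in for a real C-compiled module; the calling pattern is the point.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,       // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b // body: local.get 0; local.get 1; i32.add
]);

const mod = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(mod);
console.log(instance.exports.add(2, 40)); // → 42
```

The same module bytes can be loaded from Rust (wasmer/wasmtime), Go, Python, etc., which is why the wasm build isn't strictly JS-only.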

That adds extra unnecessary complexity. The code is written in C. There are C compilers for all CPUs. So just call the C code from <other language that's not JS>.

Well, a project has scope.

Looking at the repo, it started as postgres-in-the-browser. An abstract interface with C and wasm as targets is just more scope.

But it looks like the hard part of patching postgres to librar-ify it is already done agnostically in C.

So you just need to ctrl-f for "#if defined(__EMSCRIPTEN__)" to impl those else branches and port the emmake file to make.


So compile it and use it?

WASM means you only need to develop for one target run time. That's my guess as to why.

Yeah... I was super excited by this project when it was first announced--and would even use it from Wasm--but since it ONLY works in Wasm, that seemed way too niche.

Hi there, would you like to share the progress of converting PGlite into a native system library? I can see there's a repo for that, but it hasn't been updated in 5 months.

We are actively looking into it. But as you can see from the comments here, there are quite a lot of other features that users want and we have limited bandwidth. We will do it!

I see you guys are working on supporting the postgis extension. This would be HUGE!!! The gis community would be all over this.

If anyone who has compiled the PostGIS extension and is familiar with WASM wants to help out, you can do so here: https://github.com/electric-sql/pglite/pull/807


This is awesome, thanks for your work! Could this work with the File System Access API in the browser to write to the user's disk instead of IndexedDB? I'm interested in easy ways of syncing for local-first single-user stuff <3 thanks again

That's a very nice idea, we will look into it!

Thanks for your work!

Is the project interested in supporting http-vfs readonly usecases? I'm thinking of tools like DuckDB or sql.js-httpvfs that support reading blocks from a remote url via range requests.

Curious because we build stuff like this https://news.ycombinator.com/item?id=45774571 at my lab, and the current ecosystem for http-vfs is very slim — a lot of proofs of concept, not many widely used and optimized libraries.

I have no idea if this makes sense for postgres — are the disk access patterns better or worse for http-vfs in postgres than they are in sqlite?
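For what it's worth, the core http-vfs access pattern is small: read fixed-size pages of a remote database file with HTTP Range requests instead of downloading the whole file. A minimal sketch in Node-flavoured JavaScript; `fetchBlock` and the 8 KB block size are illustrative, not any library's API:

```javascript
// Sketch of the block-level access pattern used by tools like
// sql.js-httpvfs: fetch one fixed-size page of a remote file per request.
const BLOCK_SIZE = 8192; // Postgres' default page size (SQLite defaults to 4096)

async function fetchBlock(url, blockNo, blockSize = BLOCK_SIZE) {
  const start = blockNo * blockSize;
  const end = start + blockSize - 1; // Range header bounds are inclusive
  const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
  if (res.status !== 206) {
    throw new Error(`server did not honor the Range request: ${res.status}`);
  }
  return new Uint8Array(await res.arrayBuffer());
}
```

Whether Postgres' on-disk access patterns suit this as well as SQLite's is exactly the open question here; every extra page touched is another round trip.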


This looks REALLY awesome. Could you name a few use cases where I would want to use this? Is the goal to be an SQLite/DuckDB alternative?

Any chance for a Flutter library?

I'm interested to use Pglite for local unit-testing, but I'm using timescaledb in prod, do you think you will have this extension pre-built for Pglite?

We have a walk-through on porting extensions to PGlite: https://pglite.dev/extensions/development#building-postgres-...

I'm not aware of anything trying to compile Timescale for it. Some extensions are easier than others; if there is limited (or ideally no) network IO and it's written in C (Timescale is!) with minimal dependencies, then it's a little easier to get them working.

I’ve had incredible success with testcontainers for local unit-testing

Does pglite in memory outperform “normal” postgres?

If so then supporting the network protocol so it could be run in CI for non-JS languages could be really cool


Look into libeatmydata with LD_PRELOAD. It disables fsync and other durability syscalls; fabulous for CI. Materialize.com uses it for their CI, which is where I learned about it.

For CI you can already use PostgreSQL with the "eat-my-data" library. I don't know if there's a more official image, but in my company we're using https://github.com/allan-simon/postgres-eatmydata

You can just set fsync=off if you don't want to flush to disk and are OK with corruption in case of an OS/hw-level crash.
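For reference, the fsync route is just a few config lines. A minimal CI-oriented sketch (these are real postgresql.conf parameters; only sensible for throwaway databases, since any OS/hardware crash can corrupt the data):

```
# postgresql.conf (or pass via -c flags): trade crash safety for CI speed
fsync = off                 # skip flushing WAL/data files to stable storage
synchronous_commit = off    # don't wait for WAL flush at commit time
full_page_writes = off      # safe to skip once fsync is off anyway
```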

Huh, i always just mounted the data directory as tmpfs/ramdisk. Worked nicely too

Yupp, this has big potential for local-first!

Small world! We spoke about this at the QCon dinner.

Amazing work! It makes setting up CI so much easier.

Huh. Could you tell us how you use it in CI?

I'm using it for a service that has DB dependencies. Instead of using SQLite in tests and PG in production, or spinning up a Postgres container, you use Postgres via pglite.

In my case, the focus is on DX, i.e. faster tests. I load a shared database from `pglite-schema.tgz` (~1040ms) instead of running migrations from a fresh DB, and then use transaction rollback isolation (~10ms per test).

This is a lot faster and more convenient than spinning up a container. Test runs are 5x faster.

I'm hoping to get this working on a python service soon as well (with py-pglite).
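The transaction-rollback isolation pattern can be sketched as a tiny helper. This assumes only a client with an `exec()` method (PGlite instances expose one); `withRollback` is a hypothetical name, not part of any API:

```javascript
// Run a test inside a transaction that is always rolled back, so every
// test sees the shared schema in a pristine state without re-migrating.
async function withRollback(db, testFn) {
  await db.exec('BEGIN');
  try {
    return await testFn(db); // the test sees its own writes...
  } finally {
    await db.exec('ROLLBACK'); // ...but they never touch the shared database
  }
}
```

The `finally` block is what makes this safe: the rollback runs even when the test body throws, so one failing test can't leak state into the next.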


Thank you for the details. This makes a lot of sense!

Well, downloads don't equal usage, do they?

How do you know how many deployments you actually have in the wild?


True, downloads don't equal usage, but there's a correlation. I also doubt deployment equals usage; I can deploy to some env and not make any requests.

Additionally, how can you get data on how many deployments there are without telemetry? The only telemetry I'm interested in is for my own uses, and I don't really care about sending deployment-count data to a third party. So the download count becomes a "good enough" metric.


I'm really intrigued by the use of differential dataflow in a static site toolkit, but there isn't much written about it. If anyone from the team is here, I would love it if you could explain how it's being used. Does this enable fast incremental builds, only changing the parts that change in the input? If so, how do you model that? Are you using multisets as a message format inside the engine?

For context, I work on TanStack DB which is a differential dataflow / DBSP inspired reactive client datastore. Super interested in your use case.


Excellent question. We're not using differential dataflow (DD), but are rolling our own differential runtime. It's basically functions stitched together with operators, heavily inspired by DD and RxJS, and is optimized for performance and ease of use. The decision to go from scratch allows us to provide something that, IMHO, is much simpler to work with than DD and Rx, as our goal is to allow an ecosystem to evolve, as MkDocs did as well. For this, the API needs to be as simple as possible, while ensuring DD semantics.

Right now, builds are not fast, since Python Markdown is our bottleneck. We decided to go this route to offer the best compatibility possible with the Material for MkDocs ecosystem, so users can switch easily. In the next 12 months, we'll be working on moving the rest of the code base gradually to Rust and entirely detaching from Python, which will make builds significantly faster. Rebuilds are already fast, due to the (still preliminary) caching we employ.

The differential runtime is called ZRX[1], which forms the foundation of Zensical.

[1]: https://github.com/zensical/zrx


I think it's really interesting how it's often suggested that Waymo is at a disadvantage to Tesla due to its reliance on LIDAR and the costs associated with it. But the reality is that LIDAR has enabled Waymo to move faster and gain significantly more operational experience than Tesla, and that's far more important than front-loading cost reductions in a service business.

Tesla have been operating as a product business, and cost reduction of that product was key to scale and profitability. I completely understand why they have focused on optical sensors for autopilot, lidar was always going to be impossibly expensive for a consumer product.

Waymo on the other hand have always been aiming to build a service business and that changes the model significantly, they need to get to market and gain operational experience. Doing that with more expensive equipment to move faster is exactly what was needed. They can worry about cost of building their cars later, much later.


> lidar was always going to be impossibly expensive for a consumer product.

Everything is expensive without scale. But lidar will be very cost effective when scaled to millions upon millions of cars annually.

And with scale, there are reasons to optimise, reduce cost, etc. Large volumes of sales draws more research. Research to reduce cost.

Self driving is a long game. Decades.


It's already cost effective. Lidar prices have been divided by something like 10 in a bit more than a decade. I read a Wall Street story about an SV company that wanted to enter the Lidar market for cars, only to pivot to a weird scam and a SPAC when they realized, as prices fell, that Lidar would never be very profitable.

Lidar production costs have already scaled. Now it needs more miniaturization (which will help with production costs even more) and something against diffraction.


Oh, yeh, don't get me wrong. I meant impossibly expensive to drop into a mid range car with no scaling up from a higher value lower volume product range first.


It's also quite ugly, not sure you can have it on a convertible etc. - all probably solvable, but not ideal. I can see why Tesla tried the camera only approach but doesn't look like it's working out


> It's also quite ugly

To each their own, but it's possible that sensor bumps on your car become a status symbol indicating you can afford a private driver.

> not sure you can have it on a convertible etc.

Radically different car shapes are possible when human driving never happens or is very rare. Maybe a small van (like a private lounge on wheels, or a train observation car) with a huge panoramic sunroof becomes en vogue.


> due to its reliance on LIDAR

Due to its choice to use LIDAR. Waymo has tested a working system using cameras only, but they chose to use LIDAR because it is safer and does not significantly change the cost.

> Waymo on the other hand have always been aiming to build a service business

Waymo's roadmap (from https://www.youtube.com/watch?v=WXLgzP3gv2k):

1) Ride hailing

2) Local delivery

3) Long haul trucking

4) Personally owned cars

Is step 4 still considered a service? It will almost certainly require a subscription.


> Waymo on the other hand have always been aiming to build a service business and that changes the model significantly

This isn't the main factor. The main factor is Waymo only does this one thing. Tesla has been: building electric cars and forcing all car makers to do the same; building charging networks; funding and releasing free of charge battery tech research improvements; and doing self driving, and all of it while trying to make a profit to keep running. Waymo is funded by Google, which has infinitely deep ads-spying-on-you pockets. Which is just much, much easier.


> Which is just much, much easier.

Building a self-driving car is more difficult as evidenced by no one having delivered one except for Google. Many companies (including astronomically rich ones like Apple) have tried.


The cost for Waymo is the whole car, so Waymo is in it at ~$100k per operational taxi. They are beholden to hardware manufacturers for their product.

Tesla is trying to get in at ~$20k per operational taxi, with everything made in house.

Assuming Tesla can figure out its FSD (and convince people it's safe), they could dramatically undercut Waymo on price while still being profitable. If a Waymo to the airport is $20 and a Robotaxi is $5, Tesla will win, regardless of anything else (assuming equal safety).


This is just Google doing what they've done for years now: start with the software, partner with hardware OEMs, then build their own hardware. Android to Nexus to Pixel line is one example, Google Now on Tap to their own smart speakers is another (though they may not have hit the third step there yet), and Waymo / Google Maps to self driving is following the same path.


The amount of "if"s in your comment is astounding.


The cost discussion on LIDAR has always confused a layman like me. How much more expensive is it that it seemed like such a splurge? LIDAR seems like the only thing that could make sense to me. The fact that Tesla does it with only cameras (please correct my understanding if I'm wrong) never made sense to me. The benefits of LIDAR seem huge, and I'd assume they'd just become more cost effective over time if the tech became more in demand.

I'm _way_ out of my depth though.


> How much more expensive is it that it seemed like such a splurge?

LiDARs at the time Tesla decided against them were $75k per unit. Currently they are $9,300 per car with some promising innovations around solid state LiDAR which could push per-unit down to hundreds of dollars.

Tesla went consumer first so at the time, a car would've likely cost $200k+ so it makes sense why they didn't integrate it. I believe their idea was to kick off a flywheel effect on training data.


Holy - okay never mind, I didn't realize just how expensive LiDAR was...


Lidar will continue to get cheaper, but it has fundamental features that limit how cheap it can get that passive vision does not.

You’re sending your own illumination energy into the environment. This has to be large enough that you can detect the small fraction of it that is reflected back at your sensor, while not being hazardous to anything it hits, notably eyeballs, but also other lidar sensors and cameras around you. To see far down the road, you have to put out quite a lot of energy.

Also, lidar data is not magic: it has its own issues and techniques to master. Since you need vision as well, you have at least two long range sensor technologies to get your head around. Plus the very real issue of how to handle their apparent disagreements.

The evidence from human drivers is that you don’t absolutely need an active illumination sensor to be as good as a human.

The decision to skip LiDAR is based on managing complexity as well as cost, both of which could reduce risk getting to market.

That’s the argument. I don’t know who is right. Waymo has fielded taxis, while Tesla is driving more but easier autonomous miles.

The acid test: I don’t use the partial autonomy in my Tesla today.


Does the "sensor fusion" argument that Tesla made against LiDAR make as much sense now that everyone is basically just plugging all the sensor data into a large NN model?


It's still a problem conceptually, but in practice now that it's end to end ML, plug'n'pray, I guess it's an empirical question. Which gives one the willies a bit.

It'll always be a challenge to get ground truth training data from the real world, since you can't know for sure what was really out there causing the disagreeing sensor readings. Synthetic data addresses this, but requires good error models for both modalities.

On the latter, an interesting approach that has been explored a little is to SOAK your synthetic sensor training data in noise so that the details you get wrong in your sensor model are washed out by the grunge you impose, and only the deep regularities shine through. Avoids overfitting to the sim. This is Jakobi's 'Radical Envelope of Noise Hypothesis' [1], a lovely idea since it means you might be able to write a cheap and cheerful sim that does better than a 'good' one. Always enjoyed that.

[1] https://www.sussex.ac.uk/informatics/cogslib/reports/csrp/cs...


> now that it's end to end ML, plug'n'pray, I guess it's an empirical question

Aren't human drivers the same empirical question?

That paper is really interesting, thanks!


Tesla's strategy is bad. Build something that works first, then optimize.

They optimized for cost first and may never get it to work.


Exactly, and the Waymo sensors are practically a superset of those of Tesla, so with all the acquired data, they can build models that slowly phase out the need for Lidars.


It seems to me Tesla's mistake was over-optimism about AI. Musk always seemed to believe they'd have it cracked the following year, but it seems to be taking its time.


If I recall correctly, at the last Tesla AI Day, he said ~"FSD basically requires AGI."

There is a lot to unwrap there, but that's what he said. I believe at that moment he was in ML talent recruitment mode, and yet he admitted the true scale of this issue that Tesla faces given the vision-only direction.


That's an interesting take. So, classic premature optimization for scale?


Yes.

A normal approach is:

1) Make it work

2) Make it right(/safe)

3) Make it fast/cheap


The surprising thing to me is when you look at Starlink, there was a very expensive blocker there: consumer phased array. Prior to Starlink, I think the cheapest consumer unit was around $50k. That did not stop Musk from charging ahead.

Is there some technological thing about LiDAR that would prevent similar cost reductions? Or, is it just the philosophical difference over pre-mapping, and not doing so?


LIDAR has seen many cost reductions as capabilities continue to increase. I don't know the area well enough to speculate how much optimization might be left.

> Or, is it just the philosophical difference over pre-mapping, and not doing so?

It seems to be a "burn the ships" style bet that the Tesla engineers will get to camera-only self driving first, without ever having relied on LIDAR. It's equally likely (or more so) that Waymo could get there first with better ground truth data from the LIDAR.


> A LiDAR unit, for instance, used to cost 30,000 yuan (about $4,100), but now it costs only around 1,000 yuan (about $138) — a dramatic decrease, said Li.

https://cleantechnica.com/2025/03/20/lidars-wicked-cost-drop...

At $138 I'm not sure there's much need to go cheaper.


I think it's that with Tesla he had hardware to sell (and maybe already sold?) to existing customers with the contractual promise that they'd get self-driving as soon as TESLA cracked it. Retrofitting LIDAR into all those already sold cars would have been pretty expensive at the time, and the more he doubles down the more monstrously expensive it'll get.

With Starlink, there was no baseline consumer product to sell before getting it working.


Yeah, that makes perfectly rational sense to me. But, still disappointing as Musk is one of the few CEOs in a position to admit miscalculations, and pivot. The only thing I am left with is uncharitable, and it involves online ego.


Have they already solved the issue of LIDAR destroying CMOS cameras with its laser?


> lidar was always going to be impossibly expensive for a consumer product.

I just don't buy this at all

>"The new iPad Pro adds ... a breakthrough LiDAR Scanner that delivers cutting-edge depth-sensing capabilities, opening up more pro workflows and supporting pro photo and video apps." [1]

Yes, of course the specs of LiDAR on a car are higher, but if Apple are putting it on iPads I just don't buy the theory that an affordable car-spec LiDAR is totally out of the realm of the possible. One of the things I seem to recall Elon Musk saying is that one of the reasons they got rid of the LiDAR is the problem of sensor fusion: what do you do when the LiDAR says one thing and the vision says something different?

[1] https://www.apple.com/uk/newsroom/2020/03/apple-unveils-new-...


Tesla got rid of radar because of sensor fusion, and particularly for reasons that wouldn't apply to high resolution radar. Sensor fusion with a high resolution source like LiDAR isn't particularly tricky.


The iPad lidar has a range of a handful of meters indoors and is not safety critical.

Higher specs can make all the difference. A model rocket engine vs Space Shuttle main engine, for an extreme example. Or a pistol round vs an anti-armor tank round. The cost of the former says nothing at all about the latter.


OK, how about this? Volvo EX90, a consumer SUV on sale now in the UK. Fitted with Lidar.

https://www.volvocars.com/uk/support/car/ex90/article/47d2c9...


They are getting there. But that link has big caveats. Not sure how cool it is to endanger other people’s cameras.

From your linked page:

> Important: Use responsibly
>
> The lidar and features that can rely on it are supplements to safe driving practices. They do not reduce or replace the need for the driver to stay attentive and focused on driving safely.
>
> Safe for the eyes
>
> The lidar is not harmful to the eyes.
>
> Lidar light waves can damage external cameras
>
> Do not point a camera directly at the lidar. The lidar, being a laser based system, uses infrared light waves that may cause damage to certain camera devices. This can include smartphones or phones equipped with a camera.


How many cameras don't use IR filters? At one time, at least, they were quite common.

I suppose one example might include fixed security cameras with IR-based night vision capability.


And that Lidar is not yet assisting drivers, and previously sold US EX90s now need a computer upgrade to get it to work.


That's what I've been saying for years.

Waymo can just add the cameras exactly the way Tesla has, and train based only on that information.

Now it has tons and tons of data, they could gradually remove the Lidar on cities that they've driven over and over again. IF driving without Lidar is worth it... maybe it isn't even worth it and we should pursue using Lidars in order to further reduce accidents.

Meanwhile, people use Tesla's system sporadically in a few spots they consider safe, so it will always be collecting data that isn't very useful, as it can already drive in those spots.

--

Another thing, we can definitely afford to have Lidars on every car, if that would make our cars safer.

Imagine if China builds a huge supply chain for Lidars; I bet the cost would be very tiny. And this is supposing there aren't any more automation and productivity gains in the future, which is very unlikely.

Lidar production just doesn't have that big scale, because it's a very tiny market as of now. With scale, those prices would fall like batteries and other hardware have fallen with the years.

TLDR: Tesla lost badly

