TanStack DB - a new client-side store for web apps, with transactions, optimistic state, and live queries spanning multiple collections.
It's designed for sync, so rather than fetching, you can hook it up to a sync engine (any!) to keep your front end in sync with your backend. It's built on TanStack Query, which makes the sync engine optional and gives you a great path for incremental adoption.
The query engine uses a TypeScript implementation of differential dataflow to incrementally compute the live queries, so they are very fast to update. This gives you sub-millisecond, fine-grained reactivity for complex queries (think SQL-like joins, group by, etc.).

Having a lot of fun building it!

https://tanstack.com/db/latest https://github.com/TanStack/db
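To give a flavour, here's a minimal sketch of a Query-backed collection with a live query on top. It's written from memory, so treat the exact option and helper names as assumptions and check the docs above for the current API:

```tsx
import { QueryClient } from "@tanstack/react-query"
import { createCollection, useLiveQuery } from "@tanstack/react-db"
import { queryCollectionOptions } from "@tanstack/query-db-collection"
import { eq } from "@tanstack/db"

type Todo = { id: string; text: string; completed: boolean }

const queryClient = new QueryClient()

// A collection backed by TanStack Query - no sync engine needed yet.
const todoCollection = createCollection(
  queryCollectionOptions({
    queryKey: ["todos"],
    queryFn: async (): Promise<Todo[]> => (await fetch("/api/todos")).json(),
    queryClient,
    getKey: (todo: Todo) => todo.id,
  })
)

// A live query: incrementally maintained by the differential dataflow
// engine, so it updates without re-running the whole query.
function PendingTodos() {
  const { data } = useLiveQuery((q) =>
    q
      .from({ todo: todoCollection })
      .where(({ todo }) => eq(todo.completed, false))
  )
  return (
    <ul>
      {data.map((t) => (
        <li key={t.id}>{t.text}</li>
      ))}
    </ul>
  )
}
```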
Interesting, I'll have to look at this in the near future. Definitely like what I see at a glance. One problem I've had with some other client sync/db options is that they don't support the use case of public, shared, and private tables/collections. A lot of real-world apps may have items that are available to all users (read-only or not), to some users (by group or management chain), or private (but reassignable by managers), in order to support real-world workflows and potentially confrontational work (think avoiding stealing other workers' contacts/commissions).
It's great that the Rust community is finding ways to improve the performance of decoding strings from WASM to JS; it's one of the major performance holes you hit when using WASM.
The issue comes down to the fact that even if your WASM code can return a UTF-16 buffer, to use it as a string in JS the engine needs to make a copy at some point. The TextDecoder API does a good job of making this efficient, ensuring there is just a single copy, but it's still overhead.
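For illustration, this is roughly what that best case looks like today, assuming a compiled WebAssembly.Instance called `instance` that exports its linear memory and has written UTF-16 text into it:

```ts
// A zero-copy view over WASM linear memory, then one unavoidable copy
// when TextDecoder materialises the immutable JS string.
const memory = instance.exports.memory as WebAssembly.Memory // assumed export

function readUtf16String(ptr: number, byteLength: number): string {
  const view = new Uint8Array(memory.buffer, ptr, byteLength) // no copy
  return new TextDecoder("utf-16le").decode(view) // the single copy
}
```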
Ideally there would be a way to wrap an array buffer with a "string view", offloading the responsibility of ensuring it's valid UTF-16 to the WASM code, with no copy made. But that brings a ton of complexity: strings need to be immutable in JS, yet the underlying buffer could still be changed.
Personally I feel this is backwards - I don't want access to JS literals and objects from WASM, I just want a way to wrap an arbitrary array buffer that contains a UTF-16 string as a JS string.
It keeps WASM simple and provides a thin layer as an optimisation.
At the cost of complicating JS string implementations, probably to the point of undoing the benefits.
Currently JS strings are immutable objects, allowing for all kinds of optimization tricks (interning, ropes, etc.). Having one string represented by a mutable ArrayBuffer messes with that.
There are probably also security concerns with allowing mutable access to string internals on the JS engine side.
So the simple-appearing solution you suggested would be rejected by all the major browser vendors who back the various WASM and JS engines.
Access to constant JS strings, without any form of mutability, is the only realistic option for reading them from WASM. And creating new constant strings is the only realistic option for sending them back.
Along with the others mentioned, it's worth highlighting Yjs. It's an incredible CRDT toolkit that enables many of the realtime and async collaborative editing experiences you want from local-first software.
I’ve built several apps on Yjs and highly recommend it. My only complaint is that storing user data as a CRDT isn’t great for being able to inspect or query the user data server-side (or outside the application). You have to load all the user’s data into memory via the Yjs library before you can work with any part of it. There are major benefits to CRDTs, but I don’t think this trade-off is worth it for all projects.
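To make the trade-off concrete, here's a minimal sketch; the `persistedUpdate` bytes are assumed to come from wherever you store the document:

```ts
import * as Y from "yjs"

// persistedUpdate: the CRDT update bytes (Uint8Array) loaded from your DB.
function inspectTitle(persistedUpdate: Uint8Array): unknown {
  // The whole document has to be materialised in memory first...
  const doc = new Y.Doc()
  Y.applyUpdate(doc, persistedUpdate)
  // ...before any single field becomes readable.
  return doc.getMap("meta").get("title")
}
```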
> This is trying to solve a business problem (I can't trust cloud-providers) with a technical trade-off (avoid centralized architecture).
I don't think that's quite correct. I think the authors fully acknowledge that the business case for local-first is not completely solved and is a closely related problem. These issues need both a business and a technical solution, and the paper proposes a set of characteristics of what a solution could look like.
It's also incorrect to suggest that local-first is an argument for decentralisation - Martin Kleppmann has explicitly stated that he doesn't think decentralised tech solves these issues in a way that could become mass market. He is a proponent of centralised standardised sync engines that enable the ideals of local-first. See his talk from Local-first conf last year: https://youtu.be/NMq0vncHJvU?si=ilsQqIAncq0sBW95
I'm sure I'm missing a lot, but the paper is proposing CRDTs (Conflict-free Replicated Data Types) as the way to get all seven checkmarks. That is fundamentally a distributed solution, not a centralized one (since you don't need CRDTs if you have a central server).
And while they spend a lot of time on CRDTs as a technical solution, I didn't see any suggestions for business model solutions.
In fact, if we had a business model solution - particularly one where your data is not tied to a specific cloud vendor - then decentralization would not be needed.
I get that they are trying to solve multiple problems with CRDTs (such as latency and offline support), but in my experience (we did this with Groove in the early 2000s) the trade-offs are too big for average users.
Tech has improved since then, of course, so maybe it will work this time.
There is now a great annual Local-first Software conference in Berlin (https://www.localfirstconf.com/) organised by Ink & Switch, and it's spawned a spin-out, Sync Conf, this November in SF (https://syncconf.dev/).
There was a great panel discussion this year with a number of the co-authors of the paper linked, discussing what local-first software is in the context of dev tools, and what they have learnt since the original paper. It's very much worth watching: https://youtu.be/86NmEerklTs?si=Kodd7kD39337CTbf
The community is very much settling on "sync" being a component of local-first, but one that's applicable much more widely. Likewise, local-first is coming to be seen as a characteristic of end-user software, with dev tools - such as sync engines - being enablers but not "local-first" in themselves.
It's an exciting time for the local-first / sync engine community. We've been working on tools that enable realtime and async collaborative experiences, and now with the onset of AI the market for this is exploding. Every AI app is inherently multi-user collaborative, with agents as actors within the system. That requires exactly the tech the sync engine community has been working on.
I think a lot of the problems come from the fact that testing stored procedures ventures into e2e-testing land. You have to stand up infra in order to test them. There's not really been a simple way to unit test stored procedures as part of your application code's testing framework.
(Aside: I think this is something PGlite helps with if you're in Postgres land.)
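A minimal sketch of what that enables, assuming PGlite's standard query/exec API and a made-up `apply_discount` function as the unit under test:

```ts
import { PGlite } from "@electric-sql/pglite"

// In-process, in-memory Postgres: nothing to stand up, so a stored
// function can be exercised like any other unit under test.
const db = new PGlite()

await db.exec(`
  CREATE FUNCTION apply_discount(total int, pct int)
  RETURNS int LANGUAGE sql AS $$ SELECT total - total * pct / 100 $$;
`)

const { rows } = await db.query("SELECT apply_discount(200, 10) AS price")
console.log(rows[0]) // { price: 180 }
```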
PostgreSQL server is a single process that starts in under 100ms on a developer's laptop.
In the company I work for we use real PostgreSQL in unit tests — it's cheap to start one at the beginning of a suite, load the schema and go, and then shut it down and discard its file store.
I keep thinking of moving that file store to tmpfs when run on Linux, but it's nowhere near the top of the performance improvements for the test suite.
So: no more mocks or substitute databases with their tiny inconsistencies.
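In outline, the per-suite lifecycle looks something like this (a Node-flavoured sketch; the initdb/pg_ctl invocations are standard Postgres, but the port and temp paths are illustrative):

```ts
import { execSync } from "node:child_process"
import { mkdtempSync, rmSync } from "node:fs"
import { tmpdir } from "node:os"
import { join } from "node:path"

// Fresh throwaway cluster for the suite; fsync is off since the data
// is disposable anyway.
const dataDir = mkdtempSync(join(tmpdir(), "pg-test-"))
execSync(`initdb -D ${dataDir} --no-sync --auth=trust`)
execSync(`pg_ctl -D ${dataDir} -o "-p 54321 -F" -w start`)

// ...load the schema, run the tests against port 54321...

execSync(`pg_ctl -D ${dataDir} -w -m immediate stop`)
rmSync(dataDir, { recursive: true, force: true })
```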
My team has found a lot of success using testcontainers for testing Go and Java applications integrating with Postgres. They feel more unit-testy than e2e-testy.
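For illustration, the Node flavour of the same pattern looks like this (we use the Go and Java equivalents; this sketch assumes the @testcontainers/postgresql package and the pg client):

```ts
import { PostgreSqlContainer } from "@testcontainers/postgresql"
import { Client } from "pg"

// One real Postgres in Docker per suite; tests run against actual
// Postgres semantics rather than a mock's approximation of them.
const container = await new PostgreSqlContainer().start()
const client = new Client({ connectionString: container.getConnectionUri() })
await client.connect()

await client.query("CREATE TABLE users (id serial PRIMARY KEY, name text)")
// ...run assertions against the real database...

await client.end()
await container.stop()
```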
Admittedly I’m only talking about selects, inserts, updates, views, etc., not stored procedures. But having worked in codebases with far too many stored procedures in the past, I think there might be a marginal improvement.
For what it’s worth, I fully agree that the main problems with using stored procedures are testability and debuggability.
I spent an afternoon vibe coding a game with them (aged 10 and 6). We took it in turns describing what to build, and I did my best to explain that the AI was interpreting our instructions and writing the code. They could see the code changing and sort of understood the concept from that.
The key thing I tried to emphasise was that this whole process was new, and that before November last year it wasn't possible.
They really got it; my younger son is very excited about the idea of building games that follow the stories he comes up with (he's recently been spending time writing stories on an iPad, inspired by his novelist mother). We're going to spend more time experimenting together over the summer holidays.
Kids are curious sponges. I don't think you need to spell out to them exactly how it works - just show them, and their curiosity takes over.
I know at school the teachers have been using image GenAI with my older daughter's class, in English lessons about descriptive language. They have had the kids experiment with describing things and getting images back. I was quite impressed to hear they were doing that; it's a great way to introduce the concepts in the context of a topic the kids are already covering.
On the general topic of tech, we have always (from as soon as they could hold a device) let the kids play computer games and experiment with tablets themselves. But we've kept the internet locked down and not let them have things like YouTube Kids - it feels too close to social media - and we've explained the dangers of that to them. So: very pro exposing children to tech, but no social media at all. I think in time we will try to explain the dangers and downsides of AI, but it's all so new that there's not much to cover yet, particularly as we are still developing our own opinions.
This would make sense if there was any indication that AR is going to happen. I would argue that there isn't even the faintest signal that it will.
People do not want invasive glasses, even if they make them as small as normal glasses. I just don't see it becoming anything other than a niche product.
It's like all the moves to voice/audio interfaces powered by AI. They simply won't take off, as audio is inherently low-bandwidth and low-definition. Our eyes are able to see so much more, even in our peripheral vision, at a much higher bandwidth.
Some would argue that's an indication that AR will happen, but it's still so low-def, and incredibly intrusive, as much as I love the demos and the vision (pun not intended) behind it.
As far as I can see, the only motivation for the visual overhaul is that they need something to fill the gap until they have some real AI innovations to show. This is a "tick" in the traditional "tick" -> "tock" development and release cycle - a facelift while they work on some difficult re-engineering underneath. But that's not AR, it's AI.
I think the appeal and the value equation of AR would be completely different if it didn’t feel like you were donning a heavy headset to step into the matrix. It’s very likely that there will be innovation in translucent displays and input methods that make AR ubiquitous at some point in the future. I just don’t know if that will be in 5 years or 15 years.
> People do not want invasive glasses, even if they make them as small as normal glasses. I just don't see it becoming anything other than a niche product.
Wait, are you arguing that consumers will reject something that puts, say, a social media feed in front of their face 24hrs a day? That will allow them to just gaze at an internet site constantly without even having to think about it? That will allow them to have videos in their peripheral vision while they “concentrate” on something else?
AR headsets will not replace computers, they’ll replace phones.
I actually have a homebrew Linux AR setup that I use heavily, and I absolutely think it will be the future (although, as with the smartphone, it will take a form-factor shift and a paradigm shift arriving together, which people tend to see as one and the same).
Good AR glasses are already available, and combined with modern LLMs they can have normal people thinking about computers the way we do. They will feel less invasive than smartphones currently do while being able to do much more.
I'm absolutely certain Apple will not survive the transition though.
I genuinely expect that in a few years, Apple will release something that is effectively identical to Google Glass, and that will historically be seen as the real start of widespread usage of AR.
Anything less than lightweight glasses is a non-starter outside of gaming and other enthusiasts. The Vision Pro is just too bulky for it to sell serious numbers.
VR / AR will definitely replace desktop / stationary computers, but they need to be as lightweight as headphones. Steve Jobs said it best (also his opinion of the current Vision Pro at the very end): https://www.youtube.com/watch?v=bQECSInWVPY
This is something I've explored as part of my work on PGlite. It's possible but needs quite a bit of work, and would come with some limitations until Postgres upstream make some changes.
You would need to use an unrolled main loop similar to what we have in PGlite, using "single user mode" (you likely don't want to fork subprocesses like a normal Postgres). The problems come when you then try to run multiple instances in a single process: Postgres makes heavy use of global variables for state (it can, because it forks a process for each session), and these would clash between instances. There is work happening to possibly make Postgres multi-threaded, which would solve that problem.
The long-term ambition of the PGlite project is to create libpglite, a low-level embedded Postgres with a C API, which would enable all this. We're quite far off though - happy to have people join the project to help make it happen!