The Gaggiuino is quite complex; a Shades of Coffee PID kit is simpler if you want better shot-to-shot consistency and don't care much about customizing to the nth degree.
Single boiler is fine if you're making coffee for one or two - any more than that and the overhead of switching boiler temperatures makes the process painful.
I've had mine for about 3 years; I did a routine descale with solution about a year ago, and I've had no issues. My water is around 120ppm.
The full-on Gaggiuino mod is way better for one primary reason: power steam, or whatever they call the thing they do with the steamer. It takes the sputtery, slow, weak steam wand and makes it actually functional.
Also, the "adaptive" settings basically mean you never have to "dial in" a shot. Throw any random beans on there (still decent quality, paired with a good grinder) and you get a shot that's very drinkable. For lattes and other drinks where you're hiding the coffee in milk anyway, this is still better than 90% of what you can get from a coffee shop with $20k big iron and zero effort. And when you do have that special bag that wants the extra attention and straight espresso, you've got all the controls, presets, and shot tracking.
If you're gonna try home espresso, you already have an involved hobby; if you're gonna mod your machine, you're already pretty hardcore about that hobby... you might as well go all in at that point.
I have used 1Blocker for years and it has worked great. There are many others all using the same principle. It also allows me to have a custom rule to disable JS entirely on some sites.
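For context, 1Blocker is built on Safari's content blocker mechanism, which is driven by declarative JSON rules. A minimal sketch of a rule that blocks all script loads on one site (the domain here is just a placeholder, not an actual 1Blocker-shipped rule) might look like:

```json
[
  {
    "trigger": {
      "url-filter": ".*",
      "if-domain": ["*example.com"],
      "resource-type": ["script"]
    },
    "action": {
      "type": "block"
    }
  }
]
```

The leading `*` in `if-domain` matches the domain and its subdomains, and `"resource-type": ["script"]` limits the block to JS loads, which is roughly how a per-site "disable JS" rule works.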
p0 handles the entire process in an app that you can just set up and run with - generating a comprehensive spec, preparing your multi-repo codebase, executing, testing, sharing: https://news.ycombinator.com/item?id=47247672
When you build features with p0, it suggests keeping the standards updated. We plan to tie this more into git hooks so that we can do it on code merges and not rely on the client side.
The standards are synced across the team, but you need to use p0 to make full use of them, or at least re-import them into a custom harness.
Slightly disagree on the orchestration. It's not unusual for AI-native solo devs to have some self-made harness, but most teams don't have that, and don't have the time to make one. Claude Code etc. only ship the primitives. With p0 you get one out of the box that we have been tweaking and keep tweaking.
Thank you, we appreciate it. We're here for this kind of feedback!
Our thinking is that you wouldn't use p0 if you're vibe coding a side project; our focus is on folks who need to ship meaty features in existing codebases, where the value we generate far outweighs the $100/month.
We debated offering a free tier, but that would have meant offering it with limited functionality, and that would take away from the experience in too fundamental a way. We want people to have the whole thing.
You can try it for free for 14 days, and we are not locking anything in. Everything lives on your machine and you could move it into your own harness or workflow.
I hope so too, maybe with iPad Pros first: a new hybrid binary for apps that lets you seamlessly switch between macOS mode when connected to peripherals and iOS mode when not. Apps just render in a different place but maintain state.
I'd be totally happy with both modes being completely independent. Cloud-synced files could take care of shared state for a lot of use cases. I mainly just want this for the portability aspect with the phone.
There are some practical issues with making sure you have screen and keyboard access (in practice, the all-in-one of a laptop is pretty handy, though I guess you could still have this form factor in a much lighter shell minus the compute). But for a lot of cases, like home <-> office, this would be the dream: just carry your computer in your pocket.
Something worth thinking about, even if you're also carrying a laptop-style shell: this means only buying RAM for one device rather than two. The laptop shell doesn't need RAM of its own.
The RAM shortage is only starting to hit, but I think this could start to look more appealing if it lasts too long and gets too bad.
Pretty heavily, yes. The Anthropic primitives are the best starting point, we think, and focusing allowed us to fine-tune the workflow and harness for this use case. But support for other models/providers is on the roadmap.
What would you like to see? Other subs like Codex? Self-hosted?
Yes, it works with subscription or API key. We use it with Max 20x. But in full disclosure, I do not know what plans the Anthropic team has, and they've been sending mixed messages. We'll start adding support for other providers/models as well.
A single markdown file will definitely reach its limits very quickly. We also try our best to provide templates for the standards, which the agent uses in the initial code review and in an interview with you, so that they cover all the basics. Obviously this isn't proprietary to us; it just works really well in our opinion.