
Hey Collin!

Interesting idea, a few things:

- The website tells less than your comment here does. I want to try it, but I have no idea how destructive it can be.

- You need to add/mention how to do things in read-only (RO) mode only.

- Always explain destructive actions.

A few weeks ago I had to debug K8s on GCP GDC metal, and Claude Code helped me tons, but... I had to recreate the whole cluster the next day because the agent moved too fast and deleted things it should not have deleted, or at least should have told me the full impact first. So some harness would be nice.
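
Even a dumb read-only wrapper in front of kubectl would have saved me there. A rough sketch of what I mean (the wrapper and its allowlist are made up, not from any existing tool):

    import subprocess
    import sys

    # Hypothetical harness: point the agent at this script instead of kubectl.
    # Read-only verbs pass straight through; anything else needs a human "y".
    READ_ONLY_VERBS = {"get", "describe", "logs", "top", "explain", "version"}

    def main() -> int:
        args = sys.argv[1:]
        verb = args[0] if args else ""
        if verb not in READ_ONLY_VERBS:
            answer = input(f"kubectl {' '.join(args)}\nPossibly destructive. Run it? [y/N] ")
            if answer.strip().lower() != "y":
                print("Refused.", file=sys.stderr)
                return 1
        return subprocess.call(["kubectl", *args])

    if __name__ == "__main__":
        sys.exit(main())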


Hey! Yes, I updated the website with some more of my comments.

- RO mode would be a good idea.

- Agreed on explaining destructive actions. The only (possibly) destructive action is creating the sandbox on the host, but that asks the user's permission if the host doesn't have enough resources. Right now it supports VMs with KVM; it will not let you create a sandbox if the host doesn't have enough RAM or CPUs (a rough sketch of that kind of check is below this list).

- The Kubernetes example is exactly what this is built for: giving AI access is dangerous, and there is always a chance of it messing something up. Thanks for the comment!
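
A simplified sketch of the kind of pre-flight resource check mentioned above - not the actual implementation, and the thresholds here are made up:

    import os

    # Simplified sketch of the resource check described above -- not the real code.
    # Thresholds are arbitrary; the host is assumed to be a Linux machine with KVM.
    REQUIRED_CPUS = 2
    REQUIRED_MEM_GIB = 4.0

    def available_mem_gib() -> float:
        # Read MemAvailable from /proc/meminfo (value is reported in kB).
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1]) / (1024 * 1024)
        return 0.0

    def can_create_sandbox() -> bool:
        if (os.cpu_count() or 0) >= REQUIRED_CPUS and available_mem_gib() >= REQUIRED_MEM_GIB:
            return True
        answer = input("Host is low on CPU/RAM. Create the sandbox anyway? [y/N] ")
        return answer.strip().lower() == "y"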


Hey lfx, I had a couple of questions about your points; what's the best way to reach you?

Peek at my profile.

Agreed - the repo README is far more informative than the website.

https://blog.cloudbear.dev/

It's a simple blog; I'm planning to expand it a lot this year (as every year).


Technically you can!

I haven't seen it in the box yet, and pricing is unknown: https://cloud.google.com/blog/products/ai-machine-learning/r...


That's interesting. While I suspect the pricing will lean heavily into enterprise sales rather than personal licenses, I personally like the idea of buying models that I then own and control. Any steps from companies that make that more possible are great.


Just hand-sketched what a 5-year-old would draw on paper - a house, trees, the sun - and asked it to generate a 3D model with three.js.

The results are amazing! 2.5 and 3 seem way, way ahead.


Based on my benchmarks (I've run hundreds of model generations):

2.5 sits between GPT-5 and GPT-5.1, with GPT-5 being the best of the three.

In preliminary evals, Gemini 3 seems to be way better than all of them, but I'll know once I run the extended benchmarks tonight.


Sorry to ask 7 days late, but what sort of prompt did you use to get it to do that? I tried the same exercise, but it just placed the image in 2D in the 3D world - much like Paper Mario, but not what I was going for! Thank you.


It really puzzles me how this helps and how it was done.

Does it make the text clearer? How exactly? Is the German language more descriptive? Does it somehow expand the context?

So many questions in this fun fact.


This is amazing! My first use case was opening YouTube to listen to some music with the ad blocker enabled! It works very well, however... now there is one more hidden place where music might be playing that can be hard to find. But that's on the user, not the dev!

Really appreciate the feature!


> Does a pure AI-agent marketplace make sense?

It does; however, who is your target market?

> Any UX or trust issues you’d expect with this model?

Yes: why should I trust those agents? How do they work? GCP has (or is planning) an agent marketplace, and I'm sure Azure and AWS are also working on something similar; you should think about how you would integrate yourself there so that a big name recognizes your agents.


There is a difference between a store and hiring AI agents to do the work.

If that happens, it's an indicator that it's a good market.


Why do you prefer WSL1 over WSL2?


FS calls across the OS boundary are significantly faster in WSL1 - that's the biggest example off the top of my head. I prefer WSL2 myself, but I avoid using /mnt/c/ paths as much as possible, and never, ever run a database (like SQLite) across that boundary; you will regret it.
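
If you want to see the gap for yourself, here is a rough sketch (paths are just examples - point them at files that don't exist yet and adjust the /mnt/c one to your machine):

    import sqlite3
    import time

    # Rough benchmark sketch: many small committed writes to a SQLite file on the
    # Linux filesystem vs. one under /mnt/c. Per-row commits force fsyncs, which
    # is exactly where the WSL2 /mnt/c boundary hurts.
    def time_inserts(db_path: str, rows: int = 2000) -> float:
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)")
        start = time.perf_counter()
        for i in range(rows):
            conn.execute("INSERT INTO t (v) VALUES (?)", (str(i),))
            conn.commit()
        conn.close()
        return time.perf_counter() - start

    print("ext4 home dir:", time_inserts("/home/me/bench.db"))
    print("/mnt/c mount: ", time_inserts("/mnt/c/Users/me/bench.db"))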


WSL1 is just faster, there are no weird networking issues, and I can edit the Linux files from both Windows and Linux without headaches.


You, mister Simon, are the inspiration - for TILs and Python modules.

I wonder: what drives you to make so many Python modules? Do you see yourself reusing the modules in your own projects, or do you have some other grand plan?


For me, the single most important thing about Open Source is that it means I can solve a problem once and then /never have to solve that problem ever again/ in the future.

So any code I write I like to open source, because that's the best possible way I know of ensuring I won't have to waste time solving that same problem again.

The other thing that helps is that I think I've found a cure for project guilt.

I used to feel guilty about my projects - each one was Yet Another Thing that I should be spending more time on.

The fix I discovered was to make sure every single one of them has good test coverage and comprehensive documentation.

Effectively I treat each one as something which can stand on its own if I effectively abandon it - the thing works, and is documented, and other people can use it as-is without me feeling guilty that I'm not constantly actively working on improving it.

I wrote more about that here: https://simonwillison.net/2022/Nov/26/productivity/


But you can copy all of the contents to tickets, emails, etc. All is not lost.

