Hacker News | new | past | comments | ask | show | jobs | submit | greggh's comments | login

Use a devcontainer. Claude Code's repo has one built specifically for it:

https://github.com/anthropics/claude-code/tree/main/.devcont...


The Claude Code devcontainer works really well, especially the firewalling script! I had to do a bit of GitHub Actions spelunking to figure out how to build binary images (with my own devtools preinstalled), which I wrote up here: https://anil.recoil.org/notes/ocaml-claude-dev

With this I have a nice loop where I get Claude to analyse its own sessions via a cronjob and rewrite my devcontainer Dockerfile to include any packages I've started using during the interactive sessions. This rebuilds via GitHub Actions, and the next day my fresh image has an updated Claude and dev environment in a sandbox.
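The log-harvesting half of that loop could be sketched like this. The session-log path and its "apt-get install <pkg>" line format are my assumptions, not Claude Code's actual on-disk format:

```shell
#!/bin/sh
# Sketch: pull packages installed during interactive sessions into the
# devcontainer Dockerfile. SESSION_LOG format is a hypothetical example.
SESSION_LOG="${SESSION_LOG:-$HOME/.claude/session.log}"
DOCKERFILE="${DOCKERFILE:-.devcontainer/Dockerfile}"

sync_packages() {
  # Collect unique package names from "apt-get install <pkg>" lines.
  pkgs=$(grep -o 'apt-get install [a-z0-9.+-]*' "$SESSION_LOG" 2>/dev/null |
         awk '{print $3}' | sort -u)
  for p in $pkgs; do
    # Append a RUN line only if the Dockerfile doesn't already have it.
    grep -q "install -y $p\$" "$DOCKERFILE" 2>/dev/null ||
      printf 'RUN apt-get update && apt-get install -y %s\n' "$p" >>"$DOCKERFILE"
  done
}
```

A nightly crontab entry can then run this before pushing the Dockerfile so the GitHub Actions rebuild picks it up.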


Following that story as it happened: it was all done on a phone with the phone keyboard, and he somehow made multiple good Neovim plugins, including that very popular one (which I use in multiple configs).


neovim is probably the only sane way you could code like this on a small screen. everything works pretty much the same way it does on a desktop terminal; the only things you have to get used to are having so many lines wrapped, and not having quick access to some characters like $ or ^, but those can just be added to the toolbar in termux
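The toolbar row lives in Termux's `~/.termux/termux.properties`; a sketch (the key layout here is just my preference, not a default):

```
# ~/.termux/termux.properties — add $ and ^ to the extra-keys toolbar;
# run `termux-reload-settings` afterwards to apply without restarting.
extra-keys = [['ESC','$','^','TAB','CTRL','ALT','LEFT','DOWN','UP','RIGHT']]
```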


Right, but Eva found an RCE and only got $5,000.


An RCE in what? Nobody's buying your Discord RCE.


Development seems pretty rapid; how often do breaking changes force workflow modifications to keep up with the latest versions?


We keep all workflows running on Sim cloud always backwards compatible. The idea is that you build, deploy, and never have to make modifications again unless you want to.

If we release a breaking change that requires a migration for existing local workflows, we give notice at least a few weeks ahead of time and bake it into the DB migrations.

In case significant changes are made, everything is versioned so you opt in to upgrading.


Thanks, and that sounds great. On the backend what are you using for the DAG stuff to make it durable? Temporal?


We actually wrote our own serializer and execution engine; for long-running jobs, we defer to trigger.dev


It was given today's front page to riff on. That's why it not only reads like a HN front page, but also has near duplicates from today's front page.


This is my new favorite response.


(Travels back to the 90s)

Pretty good for Emacs*

Long live VI.


People still use WezTerm when we have Kitty and Ghostty? Can you explain why? I'm actually interested to know what would make someone make that choice.


Wezterm is actually programmable. I am looking to drop Kitty as it intentionally offers minimal tmux support and the text rendering options that made it superior for me are being deprecated.

Until Ghostty offers the scriptability found in wezterm and kitty (e.g., hit a keybind, spawn a new terminal and execute a font picker script), I am trying out wezterm, which is pretty great, but renders fonts too thin by default. I stare at this thing eight hours a day so text rendering is super important.
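The kind of keybind described above looks roughly like this in a WezTerm config (a sketch; `font-picker` stands in for whatever picker script you use, it is not a wezterm builtin):

```lua
-- ~/.wezterm.lua
local wezterm = require 'wezterm'
local config = wezterm.config_builder()

config.keys = {
  {
    -- CTRL+SHIFT+F: spawn a new terminal window running the picker script
    key = 'f',
    mods = 'CTRL|SHIFT',
    action = wezterm.action.SpawnCommandInNewWindow {
      args = { 'sh', '-c', 'font-picker' },
    },
  },
}

return config
```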


I had some issues with WezTerm fonts, but I was able to configure them away: https://github.com/bbkane/dotfiles/tree/master/wezterm#fixed...


> People still use WezTerm when we have Kitty and Ghostty?

Very customizable and extensible using Lua. Extensive documentation, native ssh support and built-in multiplexing.


I have them all installed, but I use WezTerm most often because it is the fastest to give me a window when I hit the assigned shortcut key. Ghostty is a hair slower. Kitty takes 2-3 seconds. I launch terminals pretty frequently, so this matters to me a lot. The only other feature it must have is truecolor.


I prefer WezTerm over Ghostty and Kitty.

I prefer its UI and level of customization.

Ghostty animations run like crap for me on Linux (not sure why).


Folks have responded about WezTerm's programmability being the reason they like it, but if you don't mind I'd like to flip the question around: why do you prefer Kitty or Ghostty to WezTerm?


Deepseek, Qwen, GLM (quite good). All being open and available for local use definitely puts them ahead in that space, which means a lot of the tinkerers and younger people learning to do things like train and fine-tune are getting good with Chinese models. I do think getting in early like that is a great way to gain mindshare in a space. Look at how Apple and Microsoft did everything they could to get their machines and software into schools as early as possible.


If you really need a lot of VRAM cheap, ROCm still supports the AMD MI50, and you can get 32 GB versions of the MI50 on Alibaba/AliExpress for around $150-$250 each. A few people on r/LocalLLaMA have shown setups with multiple MI50s running 128 GB of VRAM and doing a decent job with large models. Obviously it won't run as fast as any brand-new GPU because of memory bandwidth and a few other things, but it's more than fast enough to be usable.

This can end up getting you 128 GB of VRAM for under $1,000.
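A quick back-of-envelope check of those numbers (the four-card count and using the upper end of the quoted price range are my assumptions):

```shell
# Four 32 GB MI50s at the top of the $150-$250 range.
CARDS=4
VRAM_PER_CARD=32    # GB per MI50
PRICE_PER_CARD=250  # USD, upper end of the quoted range
TOTAL_VRAM=$((CARDS * VRAM_PER_CARD))
TOTAL_COST=$((CARDS * PRICE_PER_CARD))
echo "$TOTAL_VRAM GB of VRAM for about \$$TOTAL_COST in cards"
```

At the $150 end of the range the same four cards come to $600, well under the $1,000 figure even before a chassis and PSU.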

