Hacker News | pgt's comments

This reminds me of the META [^1] II paper [^2]:

> META II is a domain-specific programming language for writing compilers. It was created in 1963–1964 by Dewey Val Schorre at University of California, Los Angeles (UCLA). META II uses what Schorre called syntax equations.

The interesting part of META II is that it can be defined in itself on one page (see page 8 of the paper).

[^1]: Wikipedia: https://en.wikipedia.org/wiki/META_II

[^2]: Paper: https://dl.acm.org/doi/epdf/10.1145/800257.808896


Datomic was the reason I switched [^1] to Clojure as my primary language in 2014. It was a gamble, but it paid off in the end.

I maintain that Clojure is the best AI-first language due to the lightning-fast iteration via the nREPL and Clojure's token efficiency.

[^1]: https://petrustheron.com/posts/why-clojure.html


Your device is the ultimate edge. The next frontier would be running models on your wetware.


Not just running it on your wetware, but charging you for it.

Can't wait until AI companies go from mimicking human thought to figuring out how to license those thoughts. ;)


Man, I can't wait for AI in my brain. And then intelligence will be pay-to-win.


You need to curate your algorithm. Took me 10 years before I started blocking aggressively and now my feed is amazing with 90% bangers. Twitter is by far the best product in this space. Every other platform is 2+ weeks behind. Twitter is where the news breaks.


I had a well curated feed too (even used word filters) and yet I felt compelled to pack up and walk away. It was simply not enough.

The negative effect the various drivel had on me was nonlinear. Even if 99% of posts were fine, if that 1% was seriously upsetting, it just ruined the whole thing.


The inversion is really cool, e.g.

    > f = λa λb concat ["Hello ", a, " ", b, "!"]
    > f "Jane" "Doe"
    Hello Jane Doe!

then,

    > g = f "Admiral"
    > invert g "Hello Admiral Alice!"
    Alice


@dang, pleaaase can we get proper markdown formatting on HN? I tried adding two spaces after each line, but I don't want paragraphs between code


4-space indent:

The inversion is really cool, e.g.

    > f = λa λb concat ["Hello ", a, " ", b, "!"] 
    > f "Jane" "Doe" 
    Hello Jane Doe!
then,

    > g = f "Admiral" 
    > invert g "Hello Admiral Alice!" 
    Alice


thx, that looks much better, but I'll forget that syntax in the next 3 months. Nowadays I'm just a casual commenter on HN, spending points on contrarian views that I know will be downvoted and that no one else will say.


Fellow software engineers, what are we doing here? Why are we letting the EU / UK define the future of software?


1. The UK and EU are rather large markets that they don’t want to miss out on.

2. There are software engineers in the UK and EU.

3. This specific implementation by Apple is not actually required by any UK or EU law, to my knowledge.

4. This specifically is or will be required by the laws of some US states and other countries.


1. Since when is Linux about marketing? And who is "they"?

2. Devs for companies can start working with proprietary OSes for the businesses they sell their soul to.

3. Who cares what Apple is doing.

4. And systemd should not be liable for upholding any of them.


Maybe carefully read TFA: the age verification came from a California law.


"Apolitical" technology


Recently rewatched Demolition Man (1993) where criminals are frozen in cryostasis and then reanimated – a very prescient film. All I could think of was Demolition Pig


I am getting disproportionately good results with the models by following a process: spec -> plan -> critique -> improve plan -> implement plan.


If I may "yes, and" this: spec → plan → critique → improve plan → implement plan → code review

It may sound absurd to review an implementation with the same model you used to write it, but it works extremely well. You can optionally crank the "effort" knob (if your model has one) to "max" for the code review.
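Neither commenter names a specific tool, so here is a minimal, hypothetical Python sketch of the combined loop (spec → plan → critique → improve → implement → code review). The `call_model` stub and its `effort` knob are assumptions standing in for whatever LLM client and reasoning-effort setting you actually use:

```python
def call_model(prompt: str, effort: str = "medium") -> str:
    """Stub: swap in a real LLM API call here. `effort` mimics an
    optional reasoning-effort knob, cranked up for the review step."""
    return f"[{effort}] response to: {prompt.splitlines()[0]}"

def build(spec: str, critique_rounds: int = 2) -> tuple[str, str]:
    # spec -> plan
    plan = call_model(f"Write an implementation plan for:\n{spec}")
    # critique -> improve loop
    for _ in range(critique_rounds):
        critique = call_model(f"Critique this plan:\n{plan}")
        plan = call_model(f"Improve the plan using the critique:\n{plan}\n{critique}")
    # implement, then review with effort maxed out
    code = call_model(f"Implement this plan:\n{plan}")
    review = call_model(f"Code-review this implementation:\n{code}", effort="max")
    return code, review
```

The structure is the point, not the stub: each step feeds the previous step's output back in as fresh prompt material rather than relying on one long conversation.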


A blanket follow-up of "Are you sure this is the best way to do it?"

Frequently returns, "Oh, you are absolutely correct, let me redo this part better."


You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.

At the end of the day it’s an autocomplete. So if you ask “are you sure?” then “oh, actually” is a statistically likely completion.
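A hypothetical sketch of the difference, modelling sessions as plain message lists (the exact API shape is an assumption; adapt it to your client):

```python
# The session that produced the change, with its full history.
implementation_session = [
    {"role": "user", "content": "Implement the plan: add retry logic to the fetcher."},
    {"role": "assistant", "content": "def fetch(...): ...  # implementation"},
]

diff = "def fetch(...): ...  # the resulting change"

# Reviewing inside the same session carries along everything that
# biases the model toward defending its own work:
dirty_review = implementation_session + [
    {"role": "user", "content": f"Review this change:\n{diff}"},
]

# A clean session sees only the artifact, not the conversation that
# produced it:
clean_review = [
    {"role": "user", "content": f"Review this change:\n{diff}"},
]
```

Only the final artifact crosses the boundary into the clean session, which is what "not polluted with the work on implementation" amounts to in practice.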


> You should start a new session for the code review to make sure the context window is not polluted with the work on implementation itself.

I'm just a sample size of one, but FWIW I didn't find that this noticeably improved my results.

Not having to completely recreate all the LLM context necessary to understand the literal context and the spectrum of possible solutions (which the LLM still "knows" before you clear the session) saves lots of time and tokens.


Interesting, I definitely see better results on a clean session. On a “dirty” session it’s more likely to go with “this is what we implemented, it’s good, we could improve it this way”, whereas on a clean session it’s a lot more likely to find actual issues or things that were overlooked in the implementation session.


Can you give a little more detail how you execute these steps? Is there a specific tool you use, or is it simply different kinds of prompts?


I wrote it down here: https://x.com/BraaiEngineer/status/2016887552163119225

However, I have since condensed this into 2 prompts:

1. Write plan in Plan Mode

2. (Exit Plan Mode) Critique -> Improve loop -> Implement.


I follow a very similar workflow, with manual human review of plans and continuous feedback loops with the plan iterations

See me in action here. It's a quick demo: https://youtu.be/a_AT7cEN_9I


similar approach


No one left ChatGPT over that deal: they decided to try Anthropic's Claude because the Department of War gave them free marketing.


I was paying both $200+/mo and I went down to only paying Anthropic $200/mo.

My experience has, for a few months, been that OpenAI's models are consistently quite noticeably better for me, and so my Codex CLI usage had been probably 5x as much as my Claude Code usage. So it's a major bummer to have cancelled, but I don't have it in me to keep giving them money.

I'd love to get off Anthropic too. Despite the admirable stance they took, the whole deal made me extra uncomfortable that they were ever a defense contractor (war contractor?) to begin with.


I left the OpenAI platform long before this, because I expected things like this. A few called me alarmist, but they are now also jumping ship because of this. OpenAI has zero moral or ethical substance, and people _do_ care about that. I'm extreme enough that joining OpenAI after a specific date works against you and your CV, not for you, while leaving at a specific date speaks volumes in your favour. People are the sum of their actions, not their words, and siding with or continuing to use OpenAI speaks volumes about who you are.


The DoW or the CEO of Anthropic and his telenovela?


modelless


This thread reminds me of how Java's own GUI toolkit, written in Java itself, was called "lightweight" when in fact it did not feel lightweight at all on the hardware of the time.

