
As a professional Rust dev, I will say this: I don't notice. Generally that's because I am mostly doing partial builds, and when I am not, the M1 Max I have is fast enough to compile the project.

Really, there are much larger issues I have to deal with that build time does not affect.


But also unfathomably large, because the speed at which you could consume it was measured in kbps...


Project Binky (YouTube), ep. 39 or 40, did this with their CNC mill.


Respectfully, I disagree. An LLM, in my mind, is a new kind of compiler; it just takes natural language and produces code.


It feels like we're talking about different technologies sometimes.

I find it's a slightly improved Google for vague questions, or a doxygen writer.

That's all the use I've found for any AI model since I first started playing with the GitHub Copilot beta.

I've been trying the newer models as they arrive, and found they're getting more verbose, more prone to hallucinating functions that don't exist, and more prone to praising me as a god when I ask about basic assumptions. ("You're cutting to the heart of the matter")

What kind of code do you write where it's somehow replacing coding itself? I spent 30 minutes trying to get Mistral to write a basic Bash script yesterday.


I am playing with open-weights models at home and yeah, they are like that ... I use Claude 3.7 at work and yeah, it is a lot better ... Sometimes it will flub things, but it can also write large amounts of code ... mostly how I want it (the Pareto principle comes into play for the parts I don't, though).

So for me, the future will tend towards this ... Currently the tech is in its early days; we have no way to steer its thinking, no way to align it to our thought processes... But eventually we will get to "I want X, pls make" and it will be able to do it well.


So the issue with genetic algorithms / genetic programming is that you need a good way to handle the path the population takes. It is closer to reinforcement learning than to the y = f(x) setup of deep learning, where f() is what the NN is computing and x and y are the training data.

Finding a good scoring algorithm is hard, as it is so easy for a GA to cheat...
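
A toy sketch of what I mean, in Rust (the task and test set are made up for illustration): the "evolved program" here is just a lookup table, and a candidate that memorises the test set scores perfectly without learning anything.

    use std::collections::HashMap;

    // The score is accuracy on a fixed test set. Nothing stops evolution
    // from memorising the tests instead of learning the underlying function.
    fn score(candidate: &HashMap<i32, i32>, tests: &[(i32, i32)]) -> usize {
        tests
            .iter()
            .filter(|(x, y)| candidate.get(x) == Some(y))
            .count()
    }

    fn main() {
        // Intended task: learn y = x * x.
        let tests = [(1, 1), (2, 4), (3, 9)];

        // A "cheater" that hard-codes exactly the test set...
        let cheater: HashMap<i32, i32> = tests.iter().copied().collect();

        // ...gets a perfect score while knowing nothing about squaring.
        println!("cheater: {}/{}", score(&cheater, &tests), tests.len());
    }

Any scoring function that only ever looks at a fixed test set invites exactly this.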

Source: experience


Are you talking about teaching in the context window, or fine-tuning?

If it is the context window, then you are limited to the size of said window, and everything is lost on the next run.

Learning is memory; what you are describing is an LLM being the main character in the movie Memento, i.e. no long-term memories beyond what was learned in the last training run.
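
A rough sketch of the distinction in Rust (hypothetical types, not any real LLM API): the weights persist across sessions, the context does not.

    struct Model {
        weights: Vec<f32>, // fixed at the last training run
    }

    struct Session<'a> {
        model: &'a Model,
        context: Vec<String>, // bounded short-term memory
        max_messages: usize,
    }

    impl Session<'_> {
        fn push(&mut self, msg: &str) {
            self.context.push(msg.to_string());
            // Once the window is full, the oldest messages fall off:
            while self.context.len() > self.max_messages {
                self.context.remove(0);
            }
        }
    }

    fn main() {
        let model = Model { weights: vec![0.0; 4] };
        {
            let mut s = Session { model: &model, context: vec![], max_messages: 2 };
            s.push("my name is Leonard");
            s.push("fact A");
            s.push("fact B"); // "my name is Leonard" has already fallen out
            assert!(!s.context.iter().any(|m| m.contains("Leonard")));
        } // session ends: the whole context is dropped
        // The next session starts empty; only model.weights carries over.
        assert_eq!(model.weights.len(), 4);
    }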


There's really no defensible way to call one "learning" and the other not. You can carry a half-full context window (aka the prompt) with you at all times. Maybe you can't learn many things at once this way (though you might be surprised what knowledge can be densely stored in 1M tokens), but it definitely fits the GP's definition of (1) real-time and (2) based on a few examples.
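
For instance, a sketch of "carrying the context with you" (the file name and prompt format here are invented, and the actual model call is elided):

    use std::fs;

    fn main() -> std::io::Result<()> {
        // Whatever you decided was worth keeping from previous sessions:
        let memory = fs::read_to_string("memory.txt").unwrap_or_default();

        // Prepend it to every prompt so each run starts with the same "memories".
        let question = "What did I ask you to remember?";
        let prompt = format!("{memory}\nUser: {question}\nAssistant:");
        let _ = prompt; // ...send to whatever model you use...

        // Append new notes; the next session picks the file up again.
        fs::write("memory.txt", format!("{memory}\nNote: user prefers Rust."))?;
        Ok(())
    }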


Yes: one is committing knowledge to neurons, the other is committing knowledge to short-term memory.

Put another way: if you took an LLM with random weights, would you expect to be able to rely on context alone?


IIRC (and it was 20 years ago now that I learnt this) the brain uses 20% of the body's resting energy usage. Most of that goes to keeping neurons polarised to the outside (ion pumps need ATP!!!).

The body uses 25 W resting, and thus the brain is about 5 W.

Source: biology degree, but like I said, please give this the same weight as a hallucinating LLM.


GPT says: Unless you're a hamster hooked up to a Fitbit, it's more like 60–70 W for a normal adult human. So the brain's real power draw is more like 15–20 W, not 5 W.

Resting energy usage in humans is ~1200–1500 kcal/day, or about 60–70 watts, depending on the person. The logic holds; the estimate is just low.
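
Sanity-checking the unit conversion in Rust (the numbers are from the comments above):

    // kcal/day -> watts: 1 kcal = 4184 J, 1 day = 86,400 s.
    fn kcal_per_day_to_watts(kcal: f64) -> f64 {
        kcal * 4184.0 / 86_400.0
    }

    fn main() {
        println!("{:.0} W", kcal_per_day_to_watts(1200.0)); // ~58 W
        println!("{:.0} W", kcal_per_day_to_watts(1500.0)); // ~73 W
        // 20% of ~65 W is ~13 W, in the same ballpark as the 15-20 W figure.
    }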


Lol, thanks for the correction! Like I said... it had been 20 years. I misremembered the amounts :P

