
"Safe" and "living" don't match either.


We created computing to be fast, not safe. We could have made it safe, but we didn't; it was not a priority.

We can't say the same about living, because we did not create life.

Your comment makes zero sense.


Why, you want to buy a new printer?

If you want inkjets, buy those with ink tanks. More expensive up front, but the operating cost is very low. And no more "you have to replace a whole cartridge just because Magenta is low"; if Magenta is low, buy a bottle of Magenta and refill.

For laser printers, buy models whose toner cartridges are separate from the drum, can be reset mechanically, and can be refilled.

My go-to brand for printers is Brother, btw.


Type hints are 100% optional, though.

And to be honest, once you start using them, even just for simple things such as function signatures, a proper IDE helps you catch mistakes.
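A minimal sketch of what I mean (the function and values here are made up for illustration):

  def send_greeting(name: str, times: int) -> str:
      # The annotations document intent and let tools check callers.
      return f"Hello, {name}! " * times

  # An IDE or a checker like mypy flags this before the code ever runs:
  # send_greeting("Alice", "3")  # error: expected "int", got "str"
  send_greeting("Alice", 3)  # OK

Nothing breaks if you omit the hints; the checking just gets stronger as you add them.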


So. Another regex problem?


I am one of the maintainers of aiosmtpd [1], and the largest PR I ever made was migrating the library's tests from nosetest to pytest (sketched below). Before doing that, though, I asked the other maintainers whether such a migration would be welcome. After getting their support, I made the changes with gusto. It took weeks, even months, to complete, and the PR is massive [2].

But still, the crux of the matter is: Massive changes require buy-in from other maintainers BEFORE the changes even start.

[1] https://github.com/aio-libs/aiosmtpd [2] https://github.com/aio-libs/aiosmtpd/pull/202
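To give a flavor of the mechanical change involved, here is one test before and after (greet() is a made-up stand-in, not actual aiosmtpd code):

  # Before: nose/unittest style
  import unittest

  def greet(name: str) -> str:
      return f"Hello, {name}"

  class TestGreeting(unittest.TestCase):
      def setUp(self):
          self.name = "world"

      def test_greeting(self):
          self.assertEqual(greet(self.name), "Hello, world")

  # After: idiomatic pytest -- plain functions, fixtures, bare asserts
  import pytest

  @pytest.fixture
  def name():
      return "world"

  def test_greeting_pytest(name):
      assert greet(name) == "Hello, world"

Multiply that by hundreds of tests and you get an idea of the size of the PR.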


An LLM will guiltlessly produce a hallucinated 'review', because LLMs do NOT 'understand' what they are writing.

LLMs will merely regurgitate a chain of words -- tokens -- that best match the probability distributions they learned during training. It's all just a probabilistic game, with zero actual understanding.

LLMs are even known to hide or fake Unit Test results: claiming success when tests fail, or omitting the failing results entirely. Why? Because, based on the patterns it has seen, the most likely words to follow "the results of the tests" are "all successful". The model tries to reproduce the PRs it has seen, PRs whose authors actually ran the tests on their own systems first, iterating until the tests passed, so the PRs the public sees almost invariably declare that "all tests pass".

I'm quite certain the LLM never actually tried to compile the code, much less run Test Cases against it, simply because its back-end provides no such ability.

All an LLM can do is "generate the most probabilistically plausible text". In essence, it is a Glorified AutoComplete.
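A toy sketch of that autocomplete loop (the vocabulary and probabilities are invented for illustration):

  import random

  # Toy next-token table; a real LLM learns billions of such weights.
  MODEL = {
      "the results of the tests": {"are": 0.9, "were": 0.1},
      "the results of the tests are": {"all": 0.8, "mixed": 0.2},
      "the results of the tests are all": {"successful": 0.95, "failing": 0.05},
  }

  def generate(prompt: str, steps: int = 3) -> str:
      text = prompt
      for _ in range(steps):
          dist = MODEL.get(text)
          if dist is None:
              break
          # Pick the next token from the distribution -- no compiling,
          # no running tests, just probability.
          tokens, weights = zip(*dist.items())
          text += " " + random.choices(tokens, weights=weights)[0]
      return text

  print(generate("the results of the tests"))
  # Most likely output: "the results of the tests are all successful"

It will happily print "all successful" whether or not any test was ever run.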

I personally won't touch code generated wholly by an AutoComplete with a 10-foot pole.


The statement preceding your quote is more telling:

> as long as the code generation doesn’t use too much energy or cause unforeseen problems.

Badly-written code can be a time bomb, just waiting for the right situation to explode.

And using an LLM to generate that garbage consumes a great deal of energy, too.


If the submitter is sloppy with things that are not complicated, how can one be sure of things that ARE complicated?


The funny thing is that it works; have a look at the MR. It says:

  All existing tests pass. Additional DWARF tests verify:

  DWARF structure (DW_TAG_compile_unit, DW_TAG_subprogram).
  Breakpoints by function and line in both GDB and LLDB.
  Type information and variable visibility.
  Correct multi-object linking.
  Platform-specific relocation handling.
So the burden of proof is obviously no longer on the MR submitter's side, but on the other side.


"AI has a deep understanding" is very oxymoronic, especially if the "AI" being used was an LLM.


Or maybe just don't use LLM.

An LLM is just one tool in the A.I. world. There are lots of other A.I. tools, such as Neural Networks, Fuzzy Logic, Genetic Programming, and so on.

