
Recently I asked an AI the following question:

  const MyClass& getMyClass(){....}
  auto obj = getMyClass();

  this makes a copy right?
And it was very confident that no copy is made. It thinks auto will deduce the type as a const reference and avoid the copy, which is wrong: you need auto& or const auto& for that. I asked it if it was sure, and it became even more confident.
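
To spell out the deduction (a small sketch, not the exact godbolt code; MyClass is just a stand-in type):

  struct MyClass {};
  const MyClass& getMyClass();  // defined elsewhere

  auto        a = getMyClass();  // auto = MyClass: makes a copy
  auto&       b = getMyClass();  // auto = const MyClass: binds to the ref, no copy
  const auto& c = getMyClass();  // const MyClass&: no copy

Plain auto strips references and top-level const, so the copy is exactly what the language rules say should happen.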

Here is the godbolt output: https://godbolt.org/z/Mz8x74vxe . You can see "copy" being printed, and you can also see that non-const methods can be called on the copied object, which shows it was deduced as a non-const type.
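
For anyone who doesn't want to click through, the godbolt example boils down to something like this (the member names are my own invention, but the behavior is the same):

  #include <iostream>

  struct MyClass {
      MyClass() = default;
      MyClass(const MyClass&) { std::cout << "copy\n"; }  // announce copies
      void mutate() {}  // non-const method
  };

  const MyClass& getMyClass() {
      static MyClass instance;
      return instance;
  }

  int main() {
      auto obj = getMyClass();  // auto deduces MyClass: prints "copy"
      obj.mutate();             // compiles, because obj is a non-const copy

      const auto& ref = getMyClass();  // binds to the original, prints nothing
      // ref.mutate();                 // would not compile: ref is const
  }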

I asked the very same question to phind and it gave the same answer https://www.phind.com/search?cache=k3l4g010kuichh9rp4dl9ikb

How come two different AIs, one of them supposedly specialized in coding, fail in such a confident way?



You prove the point that these are just token-generation machines whose output is pseudo-intelligent. It's probably not yet at the point where it can be blindly trusted.


More to the point: I wouldn't blindly trust 99% of humans, let alone a machine.

Though to be fair, we will hopefully soon reach a point where a machine can be trusted far more than a human being, which will be fun. Don't cry about it; it's our collective fault for proving that meat bags can develop ulterior motives.


One of the oldest tricks to make LLMs perform better is to ask them to "think step by step". I asked your question to Claude with that added:

    ```
    const MyClass& getMyClass(){....}
    auto obj = getMyClass();
    ```

    Does this make a copy? Let's think step by step.
This might help if you're trying to get these models to give you correct answers more often.



