I would argue that today most people do not understand that and actually take LLM output at face value.
Unless maybe you mean people = software engineers who at least dabble in some AI research/learning on the side.