You're more polite than I am, but that's essentially the same response I have to people citing ChatGPT. I just say, "ChatGPT told me that's wrong."
If somebody thinks that unverified LLM output is relevant to a conversation, I don't want to have to defend why it shouldn't be part of the conversation; I want to put the responsibility for justifying it back onto them.
Asking for the receipts so you can figure out where they put their thumb on the scale is more illuminating.