This is basically the "Rhinos are just fat unicorns" approach. Totally fine if you want to go that route, but a bit goofy. You can get SOTA models to generate a five-legged dog simply by being more specific about the placement of the fifth leg.
Haha, fair point: you can get the expected results with the right prompt, but I think it still reveals a general lack of true reasoning ability (or something).
Or it just shows that the model tends to overcorrect the prompt, which is generally a good idea in most cases, where the prompter is not intentionally asking for something weird.
This happens all the time with humans. Imagine you work at a call center and get all sorts of weird descriptions of problems with a product: you're expected not to assume the caller is an expert, and to infer what they actually mean despite the odd wording they use.
> This is also why I believe that language is a bottleneck for thought. Most of what you remember is nothing like an approximate copy of the things you experienced in real life—even in the specific case of text, memory is not even remotely like a paraphrase of previously read words. Many of our thoughts happen in a highly abstracted and distilled form, interacting and connecting with each other as a network that simply cannot be faithfully converted into a sequence of words, however long.
Perhaps the most interesting quote in an interesting article.
Gemini responds:
Conceptualizing the "Millipup"
https://gemini.google.com/share/b6b8c11bd32f
Draw the five legs of a dog as if the body is a pentagon
https://gemini.google.com/share/d74d9f5b4fa4
And animal legs are quite standardized:
https://en.wikipedia.org/wiki/List_of_animals_by_number_of_l...
It's all about the prompt. Example:
Can you imagine a dog with five legs?
https://gemini.google.com/share/2dab67661d0e
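
For what it's worth, the point is easy to test from code too. Here's a minimal sketch using the google-genai Python SDK's Imagen endpoint to run a vague prompt and a pinned-down one side by side; the model id, config fields, and the image_bytes attribute are assumptions from one SDK version, not gospel:

  # Sketch: vague vs. specific wording for the same five-legged-dog request.
  # Assumes the google-genai SDK (pip install google-genai); model id and
  # response fields vary across SDK versions, so treat them as placeholders.
  from google import genai
  from google.genai import types

  client = genai.Client()  # picks up GEMINI_API_KEY from the environment

  prompts = {
      "vague": "a dog with five legs",
      "specific": (
          "a dog with five legs: four legs in the usual positions, plus a "
          "clearly visible fifth leg attached to the center of its chest"
      ),
  }

  for name, prompt in prompts.items():
      response = client.models.generate_images(
          model="imagen-3.0-generate-002",  # version-dependent model id
          prompt=prompt,
          config=types.GenerateImagesConfig(number_of_images=1),
      )
      with open(f"dog_{name}.png", "wb") as f:
          f.write(response.generated_images[0].image.image_bytes)

If the overcorrection hypothesis above is right, the vague prompt should get "corrected" to four legs far more often than the specific one.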
And generally, the issue sits between the computer and the chair.
;-)