Isn't there a risk that you're engaging with an inaccurate summary? At some point, inaccurate information is worse than no information.
Perhaps in low-stakes situations it could at least guarantee some entertainment value, though I worry that folks will get into high-stakes situations without the tools to distinguish facts from smoothly worded slop.
I can probably process anything short and high-level by myself in a reasonable time, and if I can't, I'll know it, while the LLM will always simulate perfect understanding.
There is, but there's an equal risk in engaging on any topic with any teacher you know. Everyone has a bias, and as long as you don't base your worldview and decisions entirely on one output, you'll be fine.
Experimenting with LLMs, I've seen one offer the Cantor set (a totally disconnected topological space) as an example of a continuum immediately after giving the (correct) definition of a continuum as a non-empty, compact, connected (Hausdorff) topological space. This is immediately obvious as nonsense if you understand the topic, but if you were trying to learn from it, it could be very confusing and misleading. No human teacher would do this.
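For readers who don't know the example, the standard definitions (restated here from memory, not quoted from any LLM transcript) make the contradiction immediate:

    % Continuum, as defined in the (correct) part of the answer:
    X \neq \varnothing, \quad X \text{ compact}, \quad X \text{ connected}, \quad X \text{ Hausdorff}.
    % Cantor set: the middle-thirds construction starting from [0, 1]:
    C = \bigcap_{n \ge 0} C_n, \qquad C_0 = [0, 1], \quad
    C_{n+1} = \tfrac{1}{3} C_n \cup \left( \tfrac{2}{3} + \tfrac{1}{3} C_n \right).
    % C is non-empty, compact, and Hausdorff, but totally disconnected
    % (its connected components are single points), so it fails
    % connectedness and cannot be a continuum.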
But I'm not trying to become an expert in these subjects. If I were, this wouldn't be the tool I'd use in isolation (and I don't use it in isolation for these cases anyway).
Part of reading, questioning, interpreting, and thinking about these things is (a) defining concepts I don't understand and (b) digging into the levels beneath the ones I might understand.
It doesn't have to be 100% correct for me to grasp the shape and implications of a given study. And I don't leave any of these interactions thinking, "ah, now I am an expert!"
Even if it were perfectly correct, neither my memory nor my understanding is. That's fine. If I continue to engage with the topic, I'll make connections and notice inconsistencies. Or I won't! Which is also fine. It's right enough to be net (incredibly) useful compared to what I had before.
I've been doing this a fair amount recently, and the way I manage it is: first, give the LLM the PDF and ask it to summarize the paper and provide high-level reading points. Then read the paper with that context to verify details, and while doing so, ask the LLM follow-up questions (very helpful for topics I'm less familiar with). Typically, everything is either directly in the original paper or verifiable on the internet, so if something feels off, I'll dig into it. Over the course of ~20 papers, I've run into one or two erroneous statements made by the LLM.
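A minimal sketch of that kind of workflow, assuming the `openai` and `pypdf` packages; the model name, prompts, and function names are illustrative placeholders, not a description of anyone's actual setup:

    # Summarize-then-read workflow: extract the paper text, ask for a
    # summary plus high-level reading points, then read the PDF with that
    # context and verify details against the original.
    from openai import OpenAI
    from pypdf import PdfReader

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def load_pdf_text(pdf_path: str) -> str:
        """Extract plain text from the paper so it can be sent to the model."""
        reader = PdfReader(pdf_path)
        return "\n".join(page.extract_text() or "" for page in reader.pages)

    def summarize(paper_text: str) -> str:
        """Ask for a summary and high-level reading points before reading."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Summarize this paper and list high-level points "
                            "to keep in mind while reading it."},
                {"role": "user", "content": paper_text},
            ],
        )
        return response.choices[0].message.content

    # Follow-up questions would reuse the same message history so the paper
    # stays in context; answers get checked against the PDF while reading.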
To your point, it would be easy to accidentally accept things as true (especially the more subjective "why" claims), but the hit rate is good enough that I'm still getting tons of value from this approach. As for mistakes, it's honestly not that different from learning something wrong from a friend or a teacher, which, frankly, happens all the time. So it pretty much comes down to the individual's skepticism and desire for deep understanding, which will usually reveal such falsehoods.
There is, but just ask it to cite the foundational material. A huge issue with reading papers on topics you don't know is that you lack the prerequisite knowledge, and without a professor in that field it can be difficult to build it. ChatGPT is a huge productivity boost here. Just ask it to cite references and read those.
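Purely as an illustration (my wording, not a quote from any particular tool), a prompt along these lines tends to do it:

    For each claim in your summary, point to the section of the paper or the
    reference it relies on, and list the foundational papers or textbooks I
    should read first to build the missing background.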