When explaining what LLMs are (and aren't), I've been using this analogy: AI isn't thinking, just like the horsepower in your car isn't horses—but it can still get you places.
An engine doesn't have horses inside. But we needed to relate it to something folks understood at the time. Weirdly, now that we ride in cars more than on horses, the metaphor has lost its explanatory power, yet we keep using it. Likewise, we'll probably be talking about "thinking" in AI for a while yet, likely long past any literal usefulness.
I appreciate this bit in Co-Intelligence: Living and Working with AI by Ethan Mollick: "I'm about to commit a sin. And not just once, but many, many times. For the rest of this book, I am going to anthropomorphize AI." He's right to frame it as a 'sin'—and also right to do it anyway.
Engaging with an LLM isn't truly a conversation. But when we use conversational patterns, we can effectively mine the model for responses that are often useful.
If you play-act with the machine, it offers back thinking-like responses that aren't actually thinking, but they can still produce interesting connections and notions that resemble real information. With care from both AI developers and users, those responses might even match correct information that would otherwise be hard to find.
But you still need to keep your wits about you and remember you're not riding a horse. If you pass out on a horse, she still knows the way home. If you pass out in a car, you end up in a ditch. A horse knows to avoid the cliff; a car just follows your lead.