If you didn't have a question, don't subject yourself to an endless stream of answers
An outstanding post by Sangeet Paul Choudary on the importance of questions in a world of infinite, cheaply available answers.
When LLMs first became widely available, it seemed to me that widespread usage might actually help us learn how to ask better questions and make fewer assumptions in our interactions with other people. After all, healthy skepticism is the only safe posture for an LLM user, and well-considered questions are the only way to elicit answers that stay on your line of inquiry.
Two years later, that's probably not a bet I would make now.
It's quite easy to zero in on specifics with an LLM—even if those specifics are not factually accurate. It's a lot harder to zoom out. Pulling back to see the big picture, or seeking the pros, cons, and white space in and around my path of inquiry, is really all on me as the human.
Acting on this understanding requires constant vigilance. Ultimately, I've found that when using LLMs I tend to write more in order to frame my questions. I even keep a running feed of long-form framed questions in sidecar applications like Drafts, where I can spend time considering and revising my writing before moving back to an LLM.
But this level of consideration is probably not a behavior we can expect from everyone, and certainly I find myself slipping more often than I want to admit.
I don't have a prescription for how to remain vigilant, but a couple of simple habits can help:
- Your "conversation" with an LLM is usually presented as a chat, but that's not what it really is. It's of course fine to liberally scrap chats and start over with a better question. The bot won't get its feelings hurt.
- Sometimes you might not want to give up the context of the current thread by starting a new chat. In that case, instead of correcting the LLM after a poor question has generated a poor response, go back and edit your previous question. That way, you remove the bad context from the thread altogether.
In other words, asking a bad question and getting a mediocre response can help you home in on the right level and direction of inquiry... if you keep your radar on continually.
From Choudary's article:
> The irony of our current moment is that the more knowledge we accumulate, the less certain we seem to become.
>
> When plausible answers come faster than we can formulate questions, the challenge lies in structuring the inquiry, and knowing where to look next.
>
> Which paths of exploration still hide useful ambiguity.
>
> In the midst of structural uncertainty i.e. uncertainty not just in outcomes but in the structure of the systems which deliver those outcomes, advantage lies not in what you know, but in your ability to navigate what you don’t.
>
> When answers aren’t static anymore, the ability to keep asking the right questions is the only thing that matters.
To me, this describes not only a dynamic in our interactions with LLMs, but also goes pretty far in explaining some of our deepening societal ills. Non-stop news channels increasingly seem to me like poorly prompted, echo-chambered LLM-generated content: a never-ending stream of plausible answers in a world of uncertainty.
It may have been taken down by now, but whenever I encounter a 24-hour news channel out in the world, I think of this project shared on Hacker News back in 2022: "An AI generated, never-ending discussion between Werner Herzog and Slavoj Žižek".
Do yourself a favor: if you didn't have a question, don't subject yourself to an endless stream of answers. Turn that TV off; turn that social feed off. Ask good questions.