Impromptu
A guide to thinking on your feet and making better decisions when you don't have time to prepare.
Much of what we do as modern people—at work and beyond—is to process information and generate action. GPT-4 will massively speed your ability to do these things, and with greater breadth and scope. Within a few years, this copilot will fall somewhere between useful and essential to most professionals and many other sorts of workers. Without GPT-4, they’ll be slower, less comprehensive, and working at a great disadvantage.
Whatever your skill level at a given task, GPT-4 can potentially amplify your abilities and productivity, so it’s equally useful to beginners, experts, and everyone in between. Given a request for any sort of information that you might ask a human assistant for, GPT-4 can come back instantly with an answer that is likely between good and excellent quality (though also with a non-zero chance of completely missing the mark, as we’ll see).
It’s all just math and programming. LLMs don’t learn (or at least haven’t yet learned) facts or principles that let them engage in commonsense reasoning or make new inferences about how the world works. When you ask an LLM a question, it has no awareness of or insight into your communicative intent. As it generates a reply, it’s not making factual assessments or ethical distinctions about the text it is producing; it’s simply making algorithmic guesses at what to compose in response to the sequence of words in your prompt.
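To make that idea concrete, here is a deliberately tiny sketch of "algorithmic guessing at the next word." The lookup table and its probabilities are invented for illustration; a real LLM learns billions of parameters rather than a hand-written table, but the core loop is the same: condition on the words so far, pick a statistically likely continuation, repeat.

```python
# Toy illustration (not a real LLM): autoregressive next-word guessing.
# The "model" here is just a hypothetical lookup table of which words
# tend to follow which -- no facts, no intent, no understanding.
follow_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(prompt_words, steps=3):
    """Repeatedly append the most probable next word, conditioned only
    on the sequence so far -- a purely statistical guess."""
    words = list(prompt_words)
    for _ in range(steps):
        options = follow_probs.get(words[-1])
        if not options:
            break  # nothing statistically likely follows; stop
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate(["the"]))  # "the cat sat down"
```

The point of the sketch is what is *absent*: nowhere does the loop check whether "the cat sat down" is true, kind, or what you meant to ask about. It only asks which continuation is probable.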
Much as Google devalued the steel-trap memory, electronic calculators sped up complex calculations, Wikipedia displaced the printed encyclopedia, and online databases diminished the importance of a vast physical library, so, too, will platforms like ChatGPT profoundly alter which skills are most prized.
Mintz agreed with Chamorro-Premuzic that humans could thrive alongside AI by: (1) specializing in asking the best questions, (2) learning insights or skills that are not available in the “training data” used by the deep learning networks, and (3) turning insights into actions.
In the end, I assume what most people would choose is an LLM that functions in a reliably factual way in some contexts, in a more imaginative way in others, and is clear about which of these modes it is currently in. (Which, of course, is how we’d like the other humans we engage with to act as well, but which they don’t always do.)