Are Large Language Models Stupid? Or Are We?
Why Both Questions Are Worth Asking

Are Large Language Models (OpenAI's ChatGPT, Anthropic's Claude, Google's Gemini, and the others) relentlessly getting better?
Not for me. Not so far, anyway. The problems I had with LLMs six months ago are still with me today: They make stuff up, they get facts wrong, they don't fix mistakes when told about them, and their writing is often meh.
Still, I use them every day. Even with their flaws, LLMs help me get more done in less time. With an LLM-powered search, I find out about ideas, people, and facts more quickly. Also quicker: checks on my understanding of technical prose ("does this mean what I think it does?"). Another useful thing: sometimes I'll get an LLM (GPT-4 or Claude) to produce some paragraphs on a topic, as a guide to what "everybody knows" already.

