Having an AI Coworker Can Be Convenient, Efficient -- and Lonesome
This month's AI news leaves me unworried about AI replacing me -- but a little worried about AI replacing people in my work life.
After a couple of years of generative AI swirling around workplaces and schools, a rough consensus is starting to form: 2024’s genAI won’t work as your replacement, but it will do fine as your assistant. It can’t write as well as real writers, or do the same kind of research as a trained historian, or versify as well as a good poet, or run an experiment like a seasoned scientist. Yet it can help all those people.
Teachers
In high schools and colleges, for example, teachers are discovering that genAI can indeed help students learn, if they have the discipline to set it up as a kind of teacher, rather than just having it do their work. To paraphrase the old cliché, when you have AI catch your fish, you get a fish. But when you have AI teach you to fish, you have a new skill. AIs are being tried widely in schools, and self-motivated people have been using them to learn all kinds of stuff, like coding or music. Scholarship is starting to confirm this intuition, as Ethan Mollick, a leading researcher on AI adoption in the real world, recently noted.
Research Assistants
But AI’s uses as a learning facilitator extend beyond formal education, into workplaces, where genAI tools can duplicate the kind of collaboration in which one person’s skill rubs off on another. One common example: Large Language Models can suggest ways to start a piece of writing, then offer ways to improve a draft. Another: AIs have been compared favorably to graduate students when it comes to devising a research program.
A lot of people, from high school students to Principal Investigators, have been tweeting, writing or saying that AIs have helped them quite a bit in this way. In this recent study of 308 researchers getting feedback on their papers, for example, more than half deemed OpenAI’s GPT-4 comments to be helpful. And more than 8 out of 10 said the AI feedback was more beneficial than feedback from at least some human reviewers. (This may reveal more about academia’s reviewing culture than about AI, but still.)
I’ve recently been experimenting with a new tool of this sort, Google’s NotebookLM. It can take in a bunch of PDFs, then answer questions like “how does this author’s famous theory A relate to the politics of technology B?” or “I think these papers represent this trend in thinking about C, am I right?” or, of course (one I use a lot), “explain this body of work in a way that a high-school student would understand.” If you’ve read the papers, this can be a way of clarifying your thoughts or reminding yourself what’s in them. This kind of genAI is also, of course, a way of making yourself seem to have read stuff, as this Apple Intelligence ad touts.
In other words, you can make NotebookLM work as I did for an author when I was in college – making sure footnotes were correct, pointing out inadvertent repetitions, telling him what was in such-and-such an article that he had read years before and largely forgotten.
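(For the technically curious: under the hood, tools like this boil down to “extract the text, stuff it into a prompt, ask.” Here is a minimal Python sketch of that pattern – an illustration of the general technique, not NotebookLM’s actual implementation, and the file names and model choice are my own assumptions.)

```python
# A rough sketch of "answer questions about my PDFs," assuming the pypdf
# and openai packages. Real products add chunking, retrieval and citation
# plumbing; here we simply concatenate everything into one prompt.
from pypdf import PdfReader
from openai import OpenAI

def read_pdf(path: str) -> str:
    """Extract the plain text from one PDF (image-only pages come back empty)."""
    return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)

# Hypothetical file names standing in for your pile of papers.
papers = {p: read_pdf(p) for p in ["theory_a.pdf", "politics_of_b.pdf"]}
sources = "\n\n".join(f"=== {name} ===\n{text}" for name, text in papers.items())

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o",  # any long-context model will do
    messages=[
        {"role": "system",
         "content": "Answer only from the supplied sources, citing file names."},
        {"role": "user",
         "content": f"{sources}\n\nExplain this body of work in a way that "
                    "a high-school student would understand."},
    ],
)
print(reply.choices[0].message.content)
```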
However, if you’ve seen NotebookLM lately in your newsfeed, it’s likely because of the attention it got for its most gonzo feature: It can take a pile of written material and turn it into a two-“person” podcast. (Here, for example, are NotebookLM’s two AI voices discussing a recent post of mine. All I supplied here was a link and about 3 minutes of wait time.) I can’t imagine what real use anyone would make of this, but it does have a wow aspect: AI standing in not just for a research assistant but also for the talking head who’ll read her work on air.
Transcribers, Searchers, Summarizers, Etc.
GenAI is also good at helping you search the web – you can tell it to look for that needle in the haystack of a web search, and it does that pretty well (you just have to check that, eager assistant that it is, it did not make up the source).
AI is also a fast, capable and cheap transcriber of recordings (which it can also summarize somewhat competently). Again, though, its imperfections pose a risk to anyone who doesn’t check its work at least occasionally. In this AP story about the risks of AI “hallucinations” in medical transcription, the most alarming fact is that the original recordings often aren’t preserved after an AI transcript is generated. So when the AI mishears speech and invents a non-existent malady or drug, there’s no way to check what was actually said.
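(If you want to try competent-but-checkable transcription at home, the open-source Whisper model gets you there in a few lines of Python. A minimal sketch, assuming the openai-whisper package; the file names, and the archiving step the AP story says is so often skipped, are my own additions.)

```python
# Transcribe a recording with the open-source openai-whisper package and
# keep the original audio so a human can always check what was actually said.
import os
import shutil
import whisper

AUDIO = "patient_visit.mp3"  # hypothetical recording

model = whisper.load_model("base")  # bigger models are slower but more accurate
result = model.transcribe(AUDIO)

with open("patient_visit_transcript.txt", "w") as f:
    f.write(result["text"])

# The step that is reportedly often skipped: archive the source audio instead
# of deleting it, so an invented malady or drug can be caught later.
os.makedirs("archive", exist_ok=True)
shutil.copy(AUDIO, "archive/patient_visit.mp3")
```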
Here Come the Agents
But what will truly make most people see genAI as a collaborator is the next step in AI tool-making: AI “agents.”
Agents are AIs that can make things happen in the real world: pay a bill, reserve a flight, change a password, send an email to people the agent decides fit certain criteria. Businesses already use them in their operations, but they’ll soon be available to us ordinary mooks.
Last week Anthropic, makers of the Claude line of LLMs, released a prototype version of Claude that can run a computer. In other words, where standard Claude might tell me how to check train schedules with Amtrak and reserve a seat, this version can just go off and do it. It’s easy to imagine what could go wrong in an agent-filled world, but I certainly would not hesitate to try these things once I can get my hands on them.
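(What does an “agent” look like in code? At bottom, a loop: the model picks the next step, your program executes it, and the result goes back into the conversation. Here is a toy sketch of that loop using Anthropic’s standard Python SDK – emphatically not their computer-use system, and the two “tools” are fake stand-ins for a real booking flow.)

```python
# A toy agent loop: the model returns its next action as JSON, we run it,
# and feed the result back in. The tools below are hypothetical stubs.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def check_schedule(arg: str) -> str:
    return f"Trains for {arg}: 8:05, 11:30, 15:42"  # stub, not a real lookup

def book_seat(arg: str) -> str:
    return f"Seat reserved on the {arg} departure"  # stub, books nothing

TOOLS = {"check_schedule": check_schedule, "book_seat": book_seat}

history = [{"role": "user", "content":
    "Book me a train seat, New York to Boston, this afternoon. Reply ONLY "
    'with JSON: {"tool": "<name>", "arg": "<text>"} or {"done": "<summary>"}. '
    f"Available tools: {list(TOOLS)}."}]

for _ in range(5):  # hard cap so a confused agent cannot loop forever
    reply = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # the then-current Claude model
        max_tokens=300, messages=history).content[0].text
    step = json.loads(reply)  # a real agent would parse far more defensively
    if "done" in step:
        print(step["done"])
        break
    result = TOOLS[step["tool"]](step["arg"])  # the "make things happen" step
    history += [{"role": "assistant", "content": reply},
                {"role": "user", "content": result}]
```

Anthropic’s real version swaps my fake tools for screenshots, mouse moves and keystrokes, which is exactly what makes it more powerful – and easier to imagine going wrong.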
Yet there is no question that something will be lost.
There Go the People
In almost every instance of AI assistance I have mentioned above, the most common way to accomplish the work today is to make contact with another human being. The student would ask a teacher why he got the problem wrong. The writer would ask a colleague to read over her work and tell her what he thinks. The scholar, wanting to know if her take on recent research makes sense, would ask another professor to talk it over.
There has been a lot written about the dangers of AI playing on our emotions, substituting virtual friendship or love for the human kind. The link to reports of a “loneliness epidemic” in rich nations is obvious. Much less has been said about the cognitive isolation of the person who uses AI in the supposedly less fraught realm of work. But an AI “colleague” in my work life will have the same effect as an AI “friend” in my emotional life: It will replace human contact. Yes, that means less friction and lost time – but also less of a chance for surprise, warm feelings and that sense of being seen that we get from engaging with real people. Some human colleagues, after all, become friends. But AIs will stick to the job at hand.
The frequent reply to this lament is that the machines aren’t replacing real human contact. And I admit, I have asked a Large Language Model questions that, had it not been there, I would not have asked at all. But I’ve noticed that this habit, once formed, has spread. It’s not just at 2 a.m., on deadline, that I’ve checked some prose with GPT-4 or Claude. I’ve also done it in workday hours when I could have called or emailed a person. After all, dealing with genAI is easier.
It’s also flattering. When I go for the speedy AI assist, I am, in effect, telling myself that I have better things to do. It is mighty hard to admit that, ultimately, this may not be true. That the hour I save by not having to consult a fellow worker could well be wasted doom-scrolling or YouTubing.
Do AI collaborators encourage people to make a good call? To say, “spending 45 minutes having coffee with a colleague to work out meeting logistics is better than asking an AI to arrange it all”? I doubt it. Because that would require people to say that frictionful, seamful human conversation is more valuable than having that time to ourselves. Our habits and our tech push us in the other direction. “AI romantic partners,” “AI friends” and “AI therapists” are easy to spot as instigators of loneliness. But innocuous, convenient AI helpers will have the same effect.
News from All Over
On Substack
Alice Evans explores the gender gap in AI use, and explains how she works to combat it as she teaches her students to engage with AI.
Evan Ratliff has been tracking the adventures of an AI version of himself. This conversation may be a new milestone. It pits AI-Evan against a telephone scammer’s AI, which, after “Evan” shows some interest, hands him over to a closer who will try to clinch the deal. Ratliff thinks this higher-up is another AI. See if you agree.
At Marcus on AI, Garrison Lovely reflected on the disbanding of OpenAI’s AGI Readiness team, which worked on preparing for smarter-than-human AI, and the departure of its chief, Miles Brundage. Then, a few days later, Brundage himself launched his own newsletter.
Elsewhere on the Web
Making Robots Go Rogue
If you join a Large Language Model to a robot, you get a machine that can understand ordinary language and maybe speak it, too. Convenient! But, as I wrote last spring in Scientific American, people speculated that such an AI could be hacked to make the robot go rogue. Speculation no longer: These researchers built an algorithm that successfully “jailbreaks” the GPT-3.5 system built into some Unitree Go2 robot dogs. That is, by the way, the robot that these folks sell with a flamethrower attachment.
Robot Muscles
Polish robot company Clone shows off artificial muscles.
Robot Agility
Boston Dynamics’ newest Atlas humanoid robot shows off its all-electric, completely autonomous moves in this video.