More on the Need for Inhuman AI
Why Yoshua Bengio, the machine-learning pioneer, wants to make AI less like us
It’s easy to assume, without quite noticing it, that artificial intelligence must try to seem as human as possible. That it must, for instance, refer to itself as “I,” answer your questions with sentences a human could plausibly have written, and take action the way a person does, like filling out a form for you.
That, after all, has been the goal in training the Large Language Models that are most people’s experience of AI: Make the LLM able to interact with you so that the vibe is the same as talking to a person. (A pleasing, unobjectionable, maybe even sycophantic person, since a lot of the text in the training data has that tone, and because that’s the behavior that trainers reinforce when they tune it.)
Maybe this tendency began as the path of least resistance for making genAI that wouldn’t freak out us ordinary mooks. But now, the public creates incentives to make AI act ever-more human: People are turning to the models fo…