Why AI for Science Might Be a Better Bet Than AI for Words and Images
Some recent breakthroughs in human biology, plus a robot made of candy and some other things that caught my eye this week
Apologies for a rather late and sparse post this week. I have been busy with other things and gearing up to take in one of the most interesting robot-related annual meetings -- the ACM/IEEE International Conference on Human Robot Interaction (HRI). I hope to bring you dispatches from this meeting next week (to the extent I can -- I am attending virtually).
Is AI for science safer than AI for chatting, images and video?
As a person preoccupied with moving words around, I've been sensitive to the dangers of AI "hallucination": The accepted (if anthropomorphic) term for "making stuff up." I've seen GPT-4 invent quotations, and books that never existed, and even people who never did either. In the realm where these AIs are most famous -- images and words -- there is a reasonable fear of their inextinguishable capacity for bullshit. You have to check everything they tell you!
But in the natural sciences, there is an automatic check on AI fabrications.
Here is the contrast: If a Large Language Model tells you that David Berreby wrote a book about mollusks before his death in 1998, you have no way of knowing that this is false until you consult the same system of code and symbols that delivered the AI's claim to you.
You will search on the Web, where I hope you'll quickly find evidence that my book was about people, not clams, and that I am still breathing. If you don't find that information, or don’t care, or are really mad that I never paid back that $5 I borrowed in 1993, you may spread the untruth on the Web, creating further confusion. Point is, true sentences and false ones look exactly alike, and move around the Web and our brains in the same way.
But if an AI working in human biology makes up a gene or a protein that doesn't exist, it's game over for that assertion. Because nothing can be done with it. You can relate words to other words regardless of their truth. But you can't make a drug that targets a gene that isn't really there.
While we're (rightly!) concerned with the risks of human-facing AI, other AIs, built to face the real world, might offer fewer risks and greater rewards.
Case in point: Last Friday, this paper in Nature Biotechnology announced that an AI called PandaOmics had helped researchers find a gene that contributes to scarring of tissue in the lungs and kidneys. The AI also helped find a molecule that suppresses the activity of this gene. This has led to the development of a drug that has already passed "Phase I" trials (showing it to be safe) and is now in "Phase II" trials (testing on patients to see how well it performs against their illness).
Finding a disease-related gene, understanding its contribution to disease, figuring out what chemical compound can disrupt that contribution, and turning that into an actual medication is a research project that typically takes ten years or more, the authors write. Their "AI-driven methodology" did it all in 18 months.
That statistic is dwarfed by one Carl Zimmer presents in his piece today on AI in biology. Biologists discovered many years ago that the body makes more red blood cells when oxygen is less abundant (as it is on high mountains), Zimmer writes. It then took 134 years to find the specialized cells in the kidneys that spur the making of more of those cells. But an AI system at Stanford, fed raw data on human biology, took much less time to deduce that such cells must exist: six weeks.
In the project described in the Nature paper, the PandaOmics AI found relationships in research papers, databases and experiments which suggested that the gene, TNIK, is implicated in fibrosis -- scarring within organs -- in both lung and kidney ailments.
Both diseases involve tissue that scars and stiffens over time. The lung ailment, idiopathic pulmonary fibrosis, makes it harder over time for sufferers to breathe. Most die within two to three years of being diagnosed. Meanwhile, people in the final stages of Chronic Kidney Disease also suffer from this kind of scarring, which prevents their kidneys from functioning correctly. There are no medications for this "renal fibrosis." There are drugs for "pulmonary fibrosis," but still, sufferers usually live only two to three years after diagnosis. So the AI-driven acceleration, if it yields an effective drug, will prolong lives.
I don't know if PandaOmics ever hallucinates like ChatGPT. I suppose it must. But it does so in a context where its illusions won't do much harm. This has left me wondering: While we're fretting about ChatGPT and Sora and Gemini, perhaps it's AI for science that will have the more profound impact.
If You Happen to Have an Old Game Controller and Some Lollipops
This may be the oldest link I have ever posted. It's from 2012, but it's still great. This is Thomas Tilley's schematic for a simple robot you build out of an old game controller, a few other parts, and some lollipops. Total cost, less than $10. (In 2012. Might be more now, but still -- a working robot that a kid can use to learn, for a pittance!)
Trust in AI is Falling. Also: Sky is Blue.
I wasn't surprised by this report that trust in AI companies, according to a global survey, is way down from five years ago. But why is it down much more in rich nations than in the developing world? Is it just that well-off people have more exposure to AI? Or more exposure to media about AI gone wrong? Or more leisure time to worry?
Cynical Use of ChatGPT in a Cynical Game
Have I asked a Large Language Model how my resume might be improved to pass muster with other AIs? Yes, I sure have. Did it work? I have reason to believe it did. According to this survey, AI-guided resumes also help people get higher salaries. Biggest surprise to me: More than 80 percent of job-seekers do not use AI.
Literary Note:
"[T]hough people these days read fewer things labeled 'Literature,' they consume more literature than ever. Text weaves through the online digital world, stitching it together. It bonds and bounds whatever is meant by artificial intelligence, in the way an encyclopedia or a library holds the sum total of human knowledge."
—Dennis Yi Tenen, Literary Theory for Robots