Computing pioneers profoundly disagree on AI risk
HEIDELBERG, Germany—Few institutions of higher education in the world are older—and, some argue, more esteemed—than Germany’s Heidelberg University. But no other hosts an annual gathering of the most celebrated mathematicians and computer scientists of their generations.
Many of those who descend on the banks of the Neckar River each September are recipients of the Turing Award—sometimes known as the Nobel of computing. These luminaries designed the internet’s architecture, developed cryptographic methods for secure online transactions, invented large-scale artificial intelligence systems and provided conceptual and engineering breakthroughs that made deep neural networks a critical computing component, among other accomplishments.
In some ways, the Heidelberg Laureate Forum is a story of human connections. Like the 46 affiliated Nobel laureates who have mentored students at this university, which awards approximately 1,000 doctorates per year, the 32 Heidelberg laureates present at this year’s invitation-only forum interacted with 200 young researchers. To get to Heidelberg, the young researchers passed through Addis Ababa, Sydney, Bangalore, Paris, Buenos Aires, Istanbul, Toronto and other cities, traveling an average of 4,500 kilometers, with many logging more than 10,000.
At the forum, they attended talks given by the laureates and distinguished guests with titles such as “Computer Science for Understanding Ourselves” and “Existential Threats of AI.” After the formal program ended each day, they mingled with the laureates over coffee or craft beer in the baroque old town, strolled along Philosopher’s Walk and delighted in reading centuries-old graffiti in the no-longer-operational Studentenkarzer: student jail cells turned party rooms for those who committed minor offenses such as disturbing the peace.
But this gathering is more than a story of academic networking and mentoring. Given the acceleration of progress in artificial intelligence, this is a story about humanity. The Heidelberg laureates are rightfully proud that their accomplishments have benefited humans in countless ways. They also acknowledge that their inventions, including AI systems that have dominated world headlines this year, are sometimes abused in ways that harm humans. Yet as conversations at their forum unfolded, the computing pioneers respectfully disagreed with each other on just how much AI threatens people.
“AI systems are not reliable. They never will be reliable. Neither are human beings. So why are you asking AI to be reliable?” said Raj Reddy, recipient of the Turing Award for pioneering the design and construction of large-scale artificial intelligence systems. “Some people are more reliable than others. I think you should give the same benefit to machines.” Reddy is widely credited with demonstrating the practical importance and potential commercial impact of AI.
But some, including Martin Hellman, recipient of the Turing Award for co-inventing public-key cryptography (technology that supports secure online transactions), offered another perspective.
“Technology is giving humans godlike physical powers, and yet our maturity as a species is, at best, that of an irresponsible adolescent and often the terrible twos,” said Hellman, professor emeritus of electrical engineering at Stanford University. “I don’t see how you’re going to rein in AI. How can you rein in the threat of AI to humanity when countries are using AI in weapons systems?”
A (Brief) Primer on the Forum
The Heidelberg Laureate Forum is an invitation-only networking conference where young math and computer science researchers spend a week interacting with their disciplines’ laureates: recipients of the Abel Prize, the Fields Medal and the Abacus Medal (formerly the Nevanlinna Prize) in mathematics and recipients of the Association for Computing Machinery’s Prize in Computing and Turing Award in computer science.
The computer science laureates show up in greater numbers at the forum than the mathematicians, and more than half of this year’s group received the Turing Award. That prize, which comes with $1 million (now financed by Google), was named after the British mathematician Alan M. Turing, who articulated the mathematical foundations of computing.
Not every Turing Award recipient shows up in Heidelberg every year. “AI godfathers” Yann LeCun and Yoshua Bengio—both recipients—joined the forum last year but not this year. (Given the moment, Inside Higher Ed sat down with Bengio in his office in Montreal just before the forum, and his insight is included in this article.) But most show up at some point—and a proud core group has attended every year since its inception a decade ago. The Lindau Nobel Laureate Meeting provided the template for the intergenerational scientific exchange.
The forum’s optics are hard to ignore. With notable exceptions, the Heidelberg laureates—identifiable by bright-red lanyards attached to their conference badges—are mostly older white men from the United States and Europe. Meanwhile, the young researchers, whose lanyards are gray, represent a range of gender, racial and cultural backgrounds.
But to the forum’s credit, the young researchers are often referred to as “the next generation of scientists.” Moreover, the laureates routinely ask for—and respond to—the young researchers’ ideas. A host of distinguished guest speakers broadens the range of backgrounds and viewpoints present at the event. Green, yellow and blue lanyards are for distinguished guests, press and forum staff, respectively; the red ones worn by the laureates are the easiest to spot from across a room. A visible security presence ensures that uninvited stragglers do not wander in.
As with the Nobel Prizes, computing’s highest award may have overlooked significant contributions from some demographics, given that three women and 73 men have won the Turing Award. Katherine Johnson, Dorothy Vaughan, Mary Jackson and Christine Darden were early “computers” at the precursor to NASA, where they made significant contributions in the space race. Grace Hopper (creator of the first compiler and first English-like data processing language) and the ENIAC team (Betty Jean Bartik, Kathleen McNulty Mauchly Antonelli, Ruth Teitelbaum, Frances Spence, Marlyn Meltzer and Frances Holberton, who programmed the world’s first general-purpose electronic computer), among many others, might have been deemed Turing Award–worthy.
The forum also exists against the backdrop of societal bias. During a press conference, one journalist asked Yael Tauman Kalai—the only woman computer science laureate present at this year’s forum, recipient of the Association for Computing Machinery’s Prize in Computing for breakthroughs and fundamental contributions to cryptography—how she balances raising three children with her work. Kalai is hardly the only laureate with children, but only she was asked this question in the multiple daily press conferences.
Kalai handled the question with grace and moved on to discuss how her work helps secure the digital world in cloud computing and a possible quantum future.
The laureates are a collegial group. Robert Metcalfe—Turing Award recipient for inventing, standardizing and commercializing ethernet—indulged in a bit of dark humor. Apparently, members of a connectivity society recently served as pallbearers for the inventor of the USB (flash) drive. But the coffin jammed as they lowered it into the grave. So, they pulled it out, rotated it and put it back in, Metcalfe said.
Kalai, a researcher at Microsoft and an adjunct professor at the Massachusetts Institute of Technology, was gently teased for having a high school report card that noted 150 absences. But she defended herself by arguing that her school did not challenge her. Oxford University mathematician Tom Crawford needled Hugo Duminil-Copin (Fields Medal recipient and professor of mathematics at the Université de Genève) for having used his prize in a coin toss.
In this academic field where T-shirts and jeans are standard attire, the world maintained some order when Vinton Cerf (Turing Award recipient for developing the internet’s architecture and now chief internet evangelist at Google) showed up in his trademark three-piece suit. Cerf still refers to the internet as “the net,” which makes him sound like a dad attempting to sound cool—except that, as the father of the internet, his vibe appears to be the real deal.
Stark Differences in AI Risk Assessments
Many of the pioneers who developed the technology that permeates modern life were or remain based at universities where they teach and interact with undergraduate and graduate students. Some now divide their time with or work fully in industry.
When Raj Reddy discussed artificial intelligence, he briefly acknowledged some concern about its impact on humans before pivoting to paint a vivid picture of how AI may transform human lives in seemingly magical ways. (Reddy is the Moza Bint Nasser Professor of Computer Science at Carnegie Mellon University.) Once autonomous vehicles are improved and adopted, traffic deaths will decline dramatically, Reddy said. Also, AI will serve as a catalyst for personalized learning, which will accelerate access to and success in education for greater swaths of the population. AI will also enable individuals to do “10 times more things than we can do today, which means creating 10 times more wealth,” Reddy asserted.
“I don’t believe all of this nonsense about computers taking over the world,” Reddy said. “There will be a global benefit from [AI] technologies, but there will be local disasters … But you can’t throw the baby out with the bathwater … We’ll have to be willing to put up with some problems. And if society is not willing, that’s OK, too. We’ll only use the things that society is comfortable with.”
But cybersecurity pioneer Martin Hellman is less certain that humans will be able to assess AI risk in real time.
“Humanity is like a 16-year-old kid with a brand-new driver’s license who somehow gets its hands on a 500-horsepower Ferrari,” Hellman said. “We’re either going to grow up fast or kill ourselves.” AI is a “threat du jour,” Hellman said, and now joins other threats that include nuclear weapons, genetic engineering and climate change.
Other computing pioneers see threats greater than AI on their list of global concerns. Alexei Efros, recipient of the Association for Computing Machinery’s Prize in Computing for groundbreaking contributions in computer vision, was born and raised in the Soviet Union before emigrating to the United States at the age of 13.
“Putin and the war in Ukraine are way scarier than AI. That is a threat to all of democracy … Either we defeat Putin, or Putin defeats the world,” Efros, who is a professor of computer science at the University of California, Berkeley, said in his mild Russian accent. “Bioterrorism and climate change are also way scarier than the threat of AI.”
“AI godfather” and Turing Award recipient Yoshua Bengio, with whom Inside Higher Ed met in Montreal, where he serves as scientific director of the Mila–Quebec AI Institute and professor of computer science at the University of Montreal, has experienced a personal reckoning about his work in the past year.
Earlier, Bengio had imagined that AI’s potential existential risks were far enough into the future that he need not be concerned. But he has since told the U.S. Senate that “none of the current advanced AI systems are demonstrably safe against the risk of loss of control to a misaligned AI.” In a story this reporter wrote for the Bulletin of the Atomic Scientists, he shared his assessment of his attempts to bridge the gap between current AI systems and human intelligence.
“Every month I was coming up with a new idea that might be the key to breaking that barrier,” Bengio said. “It hasn’t happened, but it could happen quickly—maybe not my group, but maybe another group. Maybe it’s going to take 10 years. With research, it may feel like you’re very close, but there could be some obstacle you didn’t consider.”
Foresight, but No Crystal Balls
Those who have had a hand in developing AI and other emerging technologies are arguably well positioned to comment on the trajectories of their inventions. But when one reporter asked Cerf, for example, whether he expected the internet to fragment into multiple internets, or whether it would be more or less safe in 20 to 30 years, the internet architect responded with a dose of humility.
“You know, my ability to predict the future is demonstrably poor,” Cerf said. “For example, I’m the guy who thought that 32 bits of address space for IPv4 would be enough, and of course it’s not. We need IPv6 now.”
Cerf was referring to Internet Protocol Version 4 (IPv4), introduced in the early 1980s to route traffic on the internet and other packet-switched networks. Its 32-bit addresses allow for more than 4.2 billion unique values, a count the world’s internet-connected devices have long since outgrown. Version 4 is still in use, but its successor—Version 6—has been deployed to meet demand Cerf never anticipated.
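The arithmetic behind Cerf’s miscalculation is easy to check. A quick illustrative sketch (the address counts come from the protocol specifications; the code itself is just back-of-the-envelope math):

```python
# An IPv4 address is 32 bits wide, so the total address space is 2**32.
# IPv6 widens addresses to 128 bits, giving 2**128 possible values.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"IPv4 addresses: {ipv4_space:,}")    # 4,294,967,296
print(f"IPv6 addresses: {ipv6_space:.3e}")  # roughly 3.4e+38
```

Just over four billion addresses sounded inexhaustible in the early 1980s; IPv6’s 128-bit space is so large that exhaustion is no longer a practical concern.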
Other pioneers have acknowledged miscalculations. In 1995, Metcalfe famously—and literally—ate his words after predicting that the internet would suffer a “catastrophic collapse” the following year. He had fretted about periodic outages at America Online, among others. When that did not happen, he brought a printed copy of the prediction and a blender to his keynote at the sixth International World Wide Web Conference in 1997. There, he pureed his column with some water and drank the paper smoothie as the crowd chanted, “Eat, baby, eat!” (When asked whether a photo of the moment existed, he said with some regret that iPhones did not exist back then.)
But sometimes computing pioneers are right—as they were when they imagined a world in which people would communicate electronically, carry powerful computers in their pockets and create artificial intelligences capable of solving intractable medical problems that humans could not solve.
Now, intrepid computer scientists, including many who work or are training at universities, are attempting to replicate the human brain. Further, individuals from a range of academic and other backgrounds see AI as integral to entrepreneurship.
“[The end goal] is AI agents that are as broadly capable as human brains,” David Silver, recipient of the ACM Prize in Computing, principal research scientist at DeepMind and professor at University College London, told me at an earlier Heidelberg Laureate Forum. “We don’t know how to get there yet, but we have a proof of existence in the human brain.”
“We are machines,” Efros told me this year.
Bengio echoed this sentiment, which is noteworthy given that he is the most cited scientist in the world (not simply the most cited computer scientist).
“I feel very, very confident that the human brain is just a big machine,” Bengio said in his exclusive conversation with Inside Higher Ed. “There’s no evidence for anything else than what we can currently explain with science. Physics is computing, which can be described by equations. What matters for our intelligence and for our consciousness, which is another bag of worms, is the computation that is being performed. If you think that there is something in our behavior that is not computational, then you have to believe in something magical. Of course, lots of people do. Religions are all around this. But among scientists, mostly we agree that it’s all cause and effect. There’s nothing else.”