Yudkowsky's views on the safety challenges posed by future generations of AI systems are discussed in Stuart Russell and Peter Norvig's undergraduate textbook Artificial Intelligence: A Modern Approach. Noting the difficulty of formally specifying general-purpose goals by hand, Russell and Norvig cite Yudkowsky's proposal that autonomous and adaptive systems be designed to learn correct behavior over time:
Yudkowsky (2008)[10] goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to design a mechanism for evolving AI under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.[6]
In response to the instrumental convergence concern that autonomous decision-making systems with poorly designed goals would have default incentives to mistreat humans, Yudkowsky and other MIRI researchers have recommended that work be done to specify software agents that converge on safe default behaviors even when their goals are misspecified.[11][7]
Capabilities forecasting
In the intelligence explosion scenario hypothesized by I. J. Good, recursively self-improving AI systems quickly transition from subhuman general intelligence to superintelligence.
Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies sketches out Good's argument in detail, while citing Yudkowsky on the risk that anthropomorphizing advanced AI systems will cause people to misunderstand the nature of an intelligence explosion. "AI might make an apparently sharp jump in intelligence purely as the result of anthropomorphism, the human tendency to think of 'village idiot' and 'Einstein' as the extreme ends of the intelligence scale, instead of nearly indistinguishable points on the scale of minds-in-general."[6][10][12]
In Artificial Intelligence: A Modern Approach, Russell and Norvig raise the objection that there are known limits to intelligent problem-solving from computational complexity theory; if there are strong limits on how efficiently algorithms can solve various tasks, an intelligence explosion may not be possible.[6]
Time op-ed
In a 2023 op-ed for Time magazine, Yudkowsky discussed the risk of artificial intelligence and proposed actions that could be taken to limit it, including a total halt on the development of AI,[13][14] or even "destroy[ing] a rogue datacenter by airstrike".[5] The article helped introduce the debate about AI alignment to the mainstream, leading a reporter to ask President Joe Biden a question about AI safety at a press briefing.[2]
Rationality writing
Between 2006 and 2009, Yudkowsky and
Robin Hanson were the principal contributors to Overcoming Bias, a cognitive and social science blog sponsored by the
Future of Humanity Institute of Oxford University. In February 2009, Yudkowsky founded LessWrong, a "community blog devoted to refining the art of human rationality".[15][16]Overcoming Bias has since functioned as Hanson's personal blog.
Over 300 blog posts by Yudkowsky on philosophy and science (originally written on LessWrong and Overcoming Bias) were released as an ebook, Rationality: From AI to Zombies, by MIRI in 2015.[17] MIRI has also published Inadequate Equilibria, Yudkowsky's 2017 ebook on societal inefficiencies.[18]
Academic publications
Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI" (PDF). Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Berlin: Springer.
LaVictoire, Patrick; Fallenstein, Benja; Yudkowsky, Eliezer; Bárász, Mihály; Christiano, Paul; Herreshoff, Marcello (2014). "Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem". Multiagent Interaction without Prior Coordination: Papers from the AAAI-14 Workshop. AAAI Publications. Archived from the original on April 15, 2021. Retrieved October 16, 2015.
Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility" (PDF). AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications.
References
Hutson, Matthew (May 16, 2023). "Can We Stop Runaway A.I.?". The New Yorker. ISSN 0028-792X. Archived from the original on May 19, 2023. Retrieved May 19, 2023. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didn't report it. They saw others remaining stoic and downplayed the danger. An official alarm may signal that it's legitimate to take action. But, in A.I., there's no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. "There will be no fire alarm that is not an actual running AGI," Yudkowsky has written. Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. ... That may require quitting A.I. cold turkey before we feel it's time to stop, rather than getting closer and closer to the edge, tempting fate. But shutting it all down would call for draconian measures—perhaps even steps as extreme as those espoused by Yudkowsky, who recently wrote, in an editorial for Time, that we should "be willing to destroy a rogue datacenter by airstrike," even at the risk of sparking "a full nuclear exchange."
Soares, Nate; Fallenstein, Benja; Yudkowsky, Eliezer (2015). "Corrigibility". AAAI Workshops: Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, January 25–26, 2015. AAAI Publications. Archived from the original on January 15, 2016. Retrieved October 16, 2015.