If Anyone Builds It, Everyone Dies

Why Superhuman AI Would Kill Us All

By: Eliezer Yudkowsky, Nate Soares



About this listen

An urgent warning from two artificial intelligence insiders on the reckless scramble to build superhuman AI—and how it will end humanity unless we change course.

In 2023, hundreds of machine-learning scientists signed an open letter warning about our risk of extinction from smarter-than-human AI. Yet today, the race to develop superhuman AI is only accelerating, as many tech CEOs throw caution to the wind, aggressively scaling up systems they don’t understand—and won’t be able to restrain. There is a good chance that they will succeed in building an artificial superintelligence on a timescale of years or decades. And no one is prepared for what will happen next.

For over 20 years, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have been studying the potential of AI and warning about its consequences. As Yudkowsky and Soares argue, sufficiently intelligent AIs will develop persistent goals of their own: bleak goals that are only tangentially related to what the AI was trained for; lifeless goals that are at odds with our own survival. Worse yet, in the case of a near-inevitable conflict between humans and AI, superintelligences will be able to trivially crush us, as easily as modern algorithms crush the world’s best humans at chess, without allowing the conflict to be close or even especially interesting.

How could an AI kill every human alive, when it’s just a disembodied intelligence trapped in a computer? Yudkowsky and Soares walk through both the argument and vivid extinction scenarios, leaving no doubt that humanity is not ready to face this challenge—ultimately showing that, on our current path, If Anyone Builds It, Everyone Dies.

©2025 Eliezer Yudkowsky and Nate Soares (P)2025 Little, Brown & Company
Categories: Computer Science, History & Culture, Politics & Government, Public Policy, Science & Technology, Technology & Society
