
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
    All Watched Over By Machines Of Loving Grace (Richard Brautigan)
I understand Artificial Intelligence even less well than the subjects of most of the books we’ve discussed in these pages, so I’m not even going to try to engage with Anthropic boss Dario Amodei’s long essay. But in reading it, the essay does seem more measured and detailed than many of the Utopian claims made by AI enthusiasts and salesmen. And I want an entry where we can start pinning interesting stories, essays, podcasts, etc. on the topic, so here are two large chunks of the essay, which is available in full online and is worth reading in its entirety.
To make this whole essay more precise and grounded, it’s helpful to specify clearly what we mean by powerful AI (i.e. the threshold at which the 5-10 year clock starts counting), as well as to lay out a framework for thinking about the effects of such AI once it’s present.

What powerful AI (I dislike the term AGI) will look like, and when (or if) it will arrive, is a huge topic in itself. It’s one I’ve discussed publicly and could write a completely separate essay on (I probably will at some point). Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all. I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside, assume it will come reasonably soon, and focus on what happens in the 5-10 years after that. I also want to assume a definition of what such a system will look like, what its capabilities are and how it interacts, even though there is room for disagreement on this.

By powerful AI, I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:

In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc. In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access.

It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.

It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.

It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.

The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may, however, be limited by the response time of the physical world or of software it interacts with.

Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.

We could summarize this as a “country of geniuses in a datacenter”.
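
To make those numbers concrete, here is a back-of-envelope sketch of what “millions of instances at 10x-100x human speed” implies; the arithmetic is ours, not the essay’s, and the one-million copy count is an illustrative assumption:

    # Back-of-envelope throughput of a "country of geniuses in a datacenter".
    # The speed range comes from the properties quoted above; the copy count
    # is an illustrative assumption, not a figure from the essay.
    copies = 1_000_000                   # instances run on training-scale compute
    speedup_low, speedup_high = 10, 100  # absorb/act speed relative to one human

    # Human-researcher-equivalents of cognitive work per unit of wall-clock time
    print(f"{copies * speedup_low:,} to {copies * speedup_high:,} researcher-equivalents")
    # -> 10,000,000 to 100,000,000 researcher-equivalents

Even the low end dwarfs any national research workforce, which is what gives the “country of geniuses” summary its force; the essay’s own caveat is that all this throughput is still gated by the response time of the physical world.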

Clearly such an entity would be capable of solving very difficult problems, very fast, but it is not trivial to figure out how fast. Two “extreme” positions both seem false to me. First, you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.

Second, and conversely, you might believe that technological progress is saturated or rate-limited by real world data or by social factors, and that better-than-human intelligence will add very little. This seems equally implausible to me—I can think of hundreds of scientific or even social problems where a large group of really smart people would drastically speed up progress, especially if they aren’t limited to analysis and can make things happen in the real world (which our postulated country of geniuses can, including by directing or assisting teams of humans).

I think the truth is likely to be some messy admixture of these two extreme pictures, something that varies by task and field and is very subtle in its details. I believe we need new frameworks to think about these details in a productive way.

[...]
To get more specific about where I think acceleration is likely to come from: a surprisingly large fraction of the progress in biology has come from a truly tiny number of discoveries, often related to broad measurement tools or techniques that allow precise but generalized or programmable intervention in biological systems. There’s perhaps ~1 of these major discoveries per year, and collectively they arguably drive >50% of progress in biology. These discoveries are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control over biological processes. A few discoveries per decade have enabled the bulk of our basic scientific understanding of biology and have driven many of the most powerful medical treatments.

Some examples include:

CRISPR: a technique that allows live editing of any gene in living organisms (replacement of any arbitrary gene sequence with any other arbitrary sequence). Since the original technique was developed, there have been constant improvements to target specific cell types, increase accuracy, and reduce edits of the wrong gene—all of which are needed for safe use in humans.

Various kinds of microscopy for watching what is going on at a precise level: advanced light microscopes (with various kinds of fluorescent techniques, special optics, etc), electron microscopes, atomic force microscopes, etc.

Genome sequencing and synthesis, which has dropped in cost by several orders of magnitude in the last couple decades.

Optogenetic techniques that allow you to get a neuron to fire by shining a light on it.

mRNA vaccines that, in principle, allow us to design a vaccine against anything and then quickly adapt it (mRNA vaccines of course became famous during COVID).

Cell therapies such as CAR-T that allow immune cells to be taken out of the body and “reprogrammed” to attack, in principle, anything.

Conceptual insights like the germ theory of disease or the realization of a link between the immune system and cancer.

I’m going to the trouble of listing all these technologies because I want to make a crucial claim about them: I think their rate of discovery could be increased by 10x or more if there were a lot more talented, creative researchers. Or, put another way, I think the returns to intelligence are high for these discoveries, and that everything else in biology and medicine mostly follows from them.

Why do I think this? Because of the answers to some questions that we should get in the habit of asking when we’re trying to determine “returns to intelligence”. First, these discoveries are generally made by a tiny number of researchers, often the same people repeatedly, suggesting skill and not random search (the latter might suggest lengthy experiments are the limiting factor). Second, they often “could have been made” years earlier than they were: for example, CRISPR was a naturally occurring component of the immune system in bacteria that has been known since the ’80s, but it took another 25 years for people to realize it could be repurposed for general gene editing. They are also often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity.

Finally, although some of these discoveries have “serial dependence” (you need to make discovery A first in order to have the tools or knowledge to make discovery B)—which again might create experimental delays—many, perhaps most, are independent, meaning many at once can be worked on in parallel. Both these facts, and my general experience as a biologist, strongly suggest to me that there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example). The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward.
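
The serial-dependence point is, at bottom, a scheduling observation: with enough parallel workers, wall-clock time collapses to the longest chain of dependent discoveries rather than to their total number. A minimal sketch, with an invented dependency graph (the discovery names and links below are illustrative, not the essay’s):

    from functools import lru_cache

    # Hypothetical map from each discovery to the discoveries it requires.
    deps = {
        "sequencing": [],
        "crispr": ["sequencing"],        # serially dependent on sequencing
        "base_editing": ["crispr"],      # ...and transitively on both
        "optogenetics": [],              # independent: parallelizes freely
        "mrna_vaccines": ["sequencing"],
    }

    @lru_cache(maxsize=None)
    def finish_time(d):
        """Earliest finish (in unit-length steps) with unlimited parallel workers."""
        return 1 + max((finish_time(p) for p in deps[d]), default=0)

    one_at_a_time = len(deps)                           # a single team: 5 steps
    fully_parallel = max(finish_time(d) for d in deps)  # critical path: 3 steps
    print(one_at_a_time, fully_parallel)

The independent discoveries are where a 10x increase in talented researchers pays off directly; the critical path of serially dependent ones is the part that added intelligence cannot compress below its experimental latency.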

Thus, it’s my guess that powerful AI could at least 10x the rate of these discoveries, giving us the next 50-100 years of biological progress in 5-10 years. Why not 100x? Perhaps it is possible, but here both serial dependence and experiment times become important: getting 100 years of progress in 1 year requires a lot of things to go right the first time, including animal experiments and things like designing microscopes or expensive lab facilities. I’m actually open to the (perhaps absurd-sounding) idea that we could get 1000 years of progress in 5-10 years, but very skeptical that we can get 100 years in 1 year. Another way to put it: I think there’s an unavoidable constant delay. Experiments and hardware design have a certain “latency” and need to be iterated upon a certain “irreducible” number of times in order to learn things that can’t be deduced logically. But massive parallelism may be possible on top of that.
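
That “irreducible latency” argument can be written down compactly (this formulation is ours, not the essay’s): if a line of work requires at least k sequential experiment iterations, each with latency ℓ, then with total work W spread over P parallel instances,

    T_{\mathrm{wall}} \approx \max\left(k\,\ell,\ \frac{W}{P}\right) \ge k\,\ell

Adding copies drives the W/P term toward zero but leaves the k·ℓ floor untouched, which is exactly why a 10x overall acceleration (and perhaps enormous parallel breadth) is plausible while 100 years of progress in 1 year is not.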

What about clinical trials? Although there is a lot of bureaucracy and slowdown associated with them, the truth is that a lot (though by no means all!) of their slowness ultimately derives from the need to rigorously evaluate drugs that barely work or ambiguously work. This is sadly true of most therapies today: the average cancer drug increases survival by a few months while having significant side effects that need to be carefully measured (there’s a similar story for Alzheimer’s drugs). This leads to huge studies (in order to achieve statistical power) and difficult tradeoffs which regulatory agencies generally aren’t great at making, again because of bureaucracy and the complexity of competing interests.
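
The link between marginal effect sizes and huge studies is quantitative. In the standard two-arm power calculation (a textbook formula, not something from the essay), the enrollment needed per arm is roughly

    n \approx \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\delta^{2}}

where δ is the true treatment effect, σ the standard deviation of the outcome, and the z terms are set by the chosen significance level and power (about 1.96 and 1.28 for α = 0.05 and 90% power, giving n ≈ 21·(σ/δ)²). Because n scales as 1/δ², halving the effect size quadruples the required trial; conversely, a drug with a large, unambiguous effect can be demonstrated in a small, fast study, which is the mechanism behind the accelerated approvals described next.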

When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years. Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.

Finally, on the topic of clinical trials and societal barriers, it is worth pointing out explicitly that in some ways biomedical innovations have an unusually strong track record of being successfully deployed, in contrast to some other technologies. As mentioned in the introduction, many technologies are hampered by societal factors despite working well technically. This might suggest a pessimistic perspective on what AI can accomplish. But biomedicine is unique in that although the process of developing drugs is overly cumbersome, once developed they generally are successfully deployed and used.

To summarize the above, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.



Grade: (B+)


Websites:

See also:

Essays
Science
Dario Amodei Links:

    -AUTHOR SITE: darioamodei.com
    -WIKIPEDIA: Dario Amodei
    -LINKED IN: Dario Amodei
    -TWITTER: @DarioAmodei
    -FELLOWSHIP: Dario Amodei, PhD 2007 Hertz Fellow (The Hertz Foundation)
    -ENTRY: Dario Amodei (Forbes)
-ESSAY: Machines of Loving Grace: How AI Could Transform the World for the Better (Dario Amodei, October 2024, darioamodei.com)
    -THESIS: Network-Scale Electrophysiology: Measuring and Understanding the Collective Behavior of Neural Circuits (Dario Amodei, 2011, Princeton University)
    -PODCAST: Anthropic's Dario Amodei on AI Competition (China Talk, 2/05/25)
    -LECTURE: CEO Speaker Series With Dario Amodei of Anthropic (Council on Foreign Relations, March 10, 2025)
-PROFILE: If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born: The brother goes on vision quests. The sister is a former English major. Together, they defected from OpenAI, started Anthropic, and built (they say) AI’s most upstanding citizen, Claude. (Steven Levy, Mar 28, 2025, Wired)
    -PROFILE: Anthropic CEO Admits We Have No Idea How AI Works: "This lack of understanding is essentially unprecedented in the history of technology." (Noor Al-Sibai, 5/05/25, Forbes)
    -PROFILE: Anthropic Is Trying to Win the AI Race Without Losing Its Soul (Shirin Ghaffary, May. 19th, 2025, Bloomberg)

Book-related and General Links:

   
-ESSAY: Hype, Anthropic’s Dario Amodei, the podcasters who love him — and how the New York Times’ commentary on AI has degenerated into industry cheerleading: Real journalists do due diligence (Gary Marcus, Mar 14, 2025, Marcus on AI)
    -ESSAY: Life, Liberty, and Superintelligence (Dean Ball, 3/03/25, Arena)
    -ESSAY: How to Avoid the Eugenics Wars: Principles for Enhancement Alignment (Kyle Munkittrick, 3/18/25, 3Quarks)
    -ESSAY: Anthropic CEO Dario Amodei pens a smart look at our AI future: Neither a doomer or a profiteer, Amodei talks in reasoned scenarios, not abstractions. (Mark Sullivan, 10/17/24, Fast Company)
    -ESSAY: Dario Amodei’s Essay on AI, ‘Machines of Loving Grace,’ Is Like a Breath of Fresh Air (Ralph Losey, October 31, 2024, EDRM Blog)
    -ESSAY: Thoughts on Dario Amodei's essay Machines of Loving Grace: The "Compressed 21st Century" beckons… (Ben Reid, Oct 12, 2024, Memia)
    -ESSAY: Is this how AI will transform the world over the next decade?: Anthropic's CEO Dario Amodei has just published a radical vision of an AI-accelerated future. It's audacious, compelling, and a must-read for anyone working at the intersection of AI and society. (Andrew Maynard, Oct 13, 2024, The Future of Being Human)
    -ESSAY: Response to “Machines of Loving Grace” by Dario Amodei (Springtail AI, 10/16/24)
    -PODCAST: Machines of Loving Grace?: Joel Jacob, Andrew Noble, and Austin Gravely discuss the valuable insights that Amodei offers, as well as the shortcomings. (WWJT, 11/07/24)
    -ESSAY: The Promise and Perils of AI: A Discussion on Dario Amodei’s "Machines of Loving Grace" (Horay AI Team, Oct 23, 2024, Hooray AI)
    -PODCAST: Machines Of Loving Grace - Dario Amodei (Marvin's Memos)
    -ESSAY: Machines of Loving Grace: How AI Could Transform the World for the Better – Dario Amodei’s Optimistic Vision (Ully, Oct 12, 2024, Medium)
    -PODCAST: Machines of Loving Grace - Essay review of Dario Amodei's essay (Venture Europe, Feb 23, 2025)
    -ESSAY: Reading Notes: Dario Amodei essay on the future of AI: Machines of Loving Grace (DOMINIQUE C LAHAIX, Nov 13, 2024, WisdomWare.AI)
    -REVIEW: of The AI Con by Emily M Bender and Alex Hanna review – debunking myths of the AI revolution: Will new technology help to make the world a better place, or is AI just another tech bubble that will benefit the few? (Steven Poole, 19 May 2025, The Guardian)
-ESSAY: Signs Grow That AI Is Starting to Seriously Bite Into the Job Market: It's not looking good for college graduates. (Victor Tangermann, 5/01/25, Futurism)
    -ESSAY: Why Even Try if You Have A.I.?: Now that machines can think for us, we have to choose whether to be the passengers or pilots of our lives. (Joshua Rothman, 5/01/25, The New Yorker)
    -VIDEO INTERVIEW: “Godfather of AI” Geoffrey Hinton shares predictions for future of AI (Open Culture, 5/02/25)
    -PODCAST: The Past and Future of AI (with Dwarkesh Patel): Dwarkesh Patel interviewed the most influential thinkers and leaders in the world of AI and chronicled the history of AI up to now in his book, The Scaling Era. Listen as he talks to EconTalk's Russ Roberts about the book, the dangers and potential of AI, and the role scale plays in AI progress. The conversation concludes with a discussion of the art of podcasting. (EconTalk podcast, 4/28/25)
-ESSAY: A.I. could become too independent for us to control, ex OpenAI exec who raised $450 million for a new company warns (Alexei Oreskovic, July 10, 2023, Fortune)
    -ESSAY: As Anthropic seeks billions to take on OpenAI, ‘industrial capture’ is nigh. Or is it? (Sharon Goldman, April 7, 2023, Venture Beat)
    -ESSAY: The messy, secretive reality behind OpenAI’s bid to save the world: The AI moonshot was founded in the spirit of transparency. This is the inside story of how competitive pressure eroded that idealism. (Karen Hao, February 17, 2020, MIT Technology Review)
    -ESSAY: Inside the White-Hot Center of A.I. Doomerism (Kevin Roose, Jul. 11th, 2023, NY Times)
    -ESSAY: A.I. Will Break the Creator Economy As We Know It —And a New One Will Rise: From democratized virality to commoditized authenticity, creators face an existential pivot as artificial intelligence dismantles old boundaries. (Nic Young, 03/19/25, NY Observer)
    -ESSAY: OpenAI’s Deep Research Agent Is Coming for White-Collar Work (Will Knight, Mar. 19th, 2025, Wired)
    -ESSAY: AI Anxiety: Can writing at Harvard coexist with new technologies? (Serena Jampel, March-April 2025, Harvard)
    -ESSAY: The prophet of Silicon Valley doom: 'We must stop, we are not ready for AI': Eliezer Yudkowsky, a pioneer in AI research, called for halting AI development before it's too late; according to him, humanity lacks the ability to control technology that will eventually surpass human intelligence; 'It can evolve beyond our control' (Amir Bogen, 03.17.25, YNet News)
    -VIDEO LECTURE: Accelerating Scientific Discovery with AI - lecture by Sir Demis Hassabis (CambridgeComputerLab, Mar 24, 2025)
    -ESSAY: A Superforecaster’s View on AGI (Malcolm Murray, 3/31/25, 3Quarks)
    -ESSAY: Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End: "The vast investments in scaling... always seemed to me to be misplaced." (Frank Landymore, 3/18/25, Futurism)
    -PODCAST: The Age of AI: Today? Tomorrow? When?: Slowly, slowly, slowly, bits of economic evidence are emerging … (James Pethokoukis, Mar 20, 2025, Faster! Please!)
    -ESSAY: The Perilous Race to Superintelligent A.I.: Progress or Pandora’s Box?: Unchecked A.I. growth could either unlock boundless potential or lead to irreversible ruin—how do we keep control? (Chetan Dube, 03/21/25, NY Observer)
    -ESSAY: Evidence shows AI systems are already too much like humans. Will that be a problem? (Sandra Peter, Jevin West, Kai Riemer, 5/23/25, The Conversation)
-REVIEW: of The Technological Republic: The Word Is Not Enough: Palantir and the Language of Creation: A Review of “The Technological Republic” (Kristin de Montfort, IM1776)