Week 7: Animal Advocacy in the Age of Transformative AI

The emergence of Transformative Artificial Intelligence (TAI) may fundamentally change the world as we know it – perhaps even sooner than you think. Imagine 100 years of scientific and technological progress, coupled with explosive economic growth – all in the span of 10 years. While it’s hard to predict what exactly this means for animals, one thing’s for certain: unless advocates adapt, our current business-as-usual strategies risk becoming obsolete. This session explores how best to future-proof our movement by shifting priorities toward interventions that remain robust across a wildly different technological and political landscape.

🧩 Central questions

  1. TAI timelines: What evidence supports the claim that AI progress is accelerating and unpredictable, making TAI a near-term possibility?
  2. Not in Kansas: In what ways might the post-TAI world fundamentally differ from the world we know today?
  3. Strategic obsolescence: Which existing advocacy strategies (e.g. consumer boycotts, legislation) are least likely to be effective across different post-TAI scenarios?
  4. Futureproofing: Which advocacy strategies might be robustly positive across different post-TAI scenarios?

🧭 Learning objectives

  1. Understand: Define Transformative AI (TAI) and explain the evidence for rapid, unpredictable technological development.
  2. Assess: Identify key institutional, economic, and other structural assumptions of current advocacy strategies, and evaluate their plausibility across different TAI scenarios, while noting areas of uncertainty.
  3. Reason: Apply your reasoning to advocacy theories of change as well as to your own impact trajectory.
  4. Next steps: Adapt your own impact trajectory (e.g. focus, career plan, research agenda) accordingly.
💡

Use the table of contents on the right to quickly navigate this page.

Resources


Required readings

Please review all of these resources prior to your session.

Several readings this week are excerpts.

  • While you are welcome to explore further, you are only required to read the sections indicated with §.
  • Click ▸ (View excerpt) to view the assigned sections only.
  • Access the original link if you prefer to annotate your own copy (e.g. a PDF).
Estimated time: 1h45m

We encourage you to spend more time focusing on the readings that most interest you.

Playback audio and video resources at faster speeds (e.g. 1.25×) to save time.


AGI vs. ASI vs. TAI: Terminology

Experts have developed a variety of terms to describe powerful AI systems. The first two, artificial general intelligence (AGI) and artificial superintelligence (ASI), define systems in terms of their capabilities – what they are capable of doing and how well they can do it (e.g. play chess, write code, pass the bar exam):


AGI (Artificial General Intelligence)

An AI whose capabilities match or surpass human performance across all domains. It can learn and apply knowledge to solve problems across a broad range of domains with human-level competence.


ASI (Artificial Superintelligence)

An AI which outperforms even the best humans by a wide margin across all domains. It can learn and apply knowledge to solve problems across a vast range of domains with superhuman competence.

By contrast, transformative AI (TAI) defines systems in terms of their impact on humanity:


TAI (Transformative Artificial Intelligence)

An AI system which changes the world to an extent comparable to (or exceeding) the agricultural or industrial revolution.

Having noted these definitions, we recommend bearing the following points in mind:

  1. Different experts may have different definitions of these terms.
  2. Radical transformations might be brought about not by one AI system but through the complex interactions of multiple AI systems (whether coordinated or not).
  3. Most AGI scenarios are probably transformative, but not all TAI scenarios involve AGI. For example, a narrow AI that is highly effective at engineering diseases or generating convincing misinformation at low cost may qualify as TAI despite lacking general intelligence.
  4. The map is not the territory. The value of these classifications lies not in their taxonomic precision, but in their usefulness for guiding strategy.
I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.

– Dario Amodei, CEO of Anthropic (creators of Claude)


The case for TAI-aware advocacy

Here’s one way of making the case for TAI-aware advocacy:

  1. Speed and uncertainty: AI development is fast and hard to predict.
  2. Near-term TAI: If (i), then TAI may be coming soon.
  3. Weird futures: The world after TAI could be radically, fundamentally different from today’s world in terms of power structures, institutions, markets, and more.
  4. Strategic obsolescence: If (iii), then current advocacy strategies may not work in a post-TAI world, insofar as they assume business-as-usual.
  5. Futureproof now: If (iv), then advocates should adapt their strategies to accommodate potentially near-term TAI.

∴ Conclusion: Advocates should adapt their strategies to accommodate potentially near-term TAI.

The following sections provide support for these premises:

(1-2): TAI may be coming soon

(3-5): The post-TAI world could be very weird – and advocates should prepare now


TAI may be coming soon

AI development is fast and hard to predict. If this is right, then TAI may be coming soon.

AI timelines

AI experts often talk about timelines: their predictions of when AGI or TAI will be created. Today, an increasing number of experts believe that TAI will be created within 5 years or less – they have so-called “short” timelines. Others believe that TAI will be created within the century. And some believe that TAI will never be created (e.g. due to unsolvable conceptual or technological barriers).

  • <10 years: “short” timelines
  • 10+ years: “long” timelines
  • Never: a minority of sceptics doubt that AGI or TAI will ever be achieved

Much like with the definitions and classifications above (AGI, ASI, TAI), timelines are only valuable insofar as they guide strategic coordination. The emerging picture seems to be as follows:

  1. Timelines have been getting shorter and shorter. What was “short” before (e.g. 20 years) is now considered by many to be “long”.
  2. Even so-called “long” timelines (e.g. AGI by 2061) may not be enough time for governments to adequately prepare.
  3. Timeline predictions are subject to significant uncertainty. Rather than strive for precision, it is probably best to focus on areas of agreement.

The Case for Prioritising AI Risks

Benjamin Todd (2025) | 5 min read (§1-3 only)

This overview of AI risk as a high-impact career area by 80,000 Hours summarises three key considerations:
  1. World-changing AI systems could come sooner than expected
  2. Their societal impact could be enormous
  3. Advanced AI also poses significant risks
‣
(View excerpt)

The following text is excerpted from the reading.

Within 5 years, there’s a real chance that AI systems will be created that cause explosive technological and economic change. This would increase the risk of disasters like war between the US and China, concentration of power in a small minority, or even total loss of human control over the future.

Many people — with a diverse range of skills and experience — are urgently needed to help mitigate these risks.

I think you should consider making this the focus of your career.

This article explains why.

1. World-changing AI systems could come much sooner than people expect

In an earlier article I explained why there’s a significant chance that AI could contribute to scientific research or automate many jobs by 2030. Current systems can already do a lot, and there are clear ways to continue improving them in the coming years. Forecasters and experts widely agree that the probability of widespread disruption is much higher than it was even just a couple of years ago.

AI systems are rapidly becoming more autonomous, as measured by the METR time horizon benchmark. The most recent models, such as o3, seem to be on an even faster trend that started in 2024.

2. The impact on society could be explosive

People say AI will be transformative, but few really get just how wild it could be. Here are three types of explosive impact we might see, which are now all supported by credible theoretical and empirical research:

  • The intelligence explosion: it might only take a few years from developing advanced AI to having billions of AI remote workers, making cognitive labour available for pennies.
  • The technological explosion: empirically informed estimates suggest that with sufficiently advanced AI 100 years of technological progress in 10 is plausible. That means we could have advanced biotech, robotics, novel political philosophies, and more arrive much sooner than commonly imagined.
  • The industrial explosion: if AI and robotics automate industrial production, that would create a positive feedback loop, meaning production could plausibly end up doubling each year. Within a decade of reaching that growth rate, humanity would harvest all available solar energy on Earth and start to expand into space (see the sketch below).
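
To see why a sustained yearly doubling runs into planetary limits so quickly, here is a minimal back-of-the-envelope sketch in Python. The two energy figures are rough outside assumptions (not numbers from the article), but the conclusion barely depends on them: an exponential closes four orders of magnitude in roughly thirteen doublings.

```python
import math

# Rough assumed figures (order-of-magnitude only, not from the article):
world_energy_use_w = 2e13        # current human energy use, roughly 20 TW
solar_flux_on_earth_w = 1.7e17   # total sunlight intercepted by Earth

# Doublings needed to go from today's energy use to the full solar budget:
doublings = math.log2(solar_flux_on_earth_w / world_energy_use_w)
print(f"~{doublings:.1f} doublings needed")                       # ~13.1
print(f"At one doubling per year: ~{math.ceil(doublings)} years")
```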

Along the way, we could see rapid progress on many key technological challenges — like curing cancer and developing green energy. But…

The number of AI models is growing extremely fast. If they can start to substitute for scientific researchers, then the effective size of the scientific community would grow at that rate, leading to faster scientific progress. Preparing for the intelligence explosion by Forethought Research.

3. Advanced AI could bring enormous dangers

We’ve written before about how it might be hard to keep control of billions of AI systems thinking 10x faster than ourselves. But that’s only the first hurdle. The developments above could:

  • Destabilise the world order (e.g. leading to conflict over Taiwan)
  • Enable the development of new weapons of mass destruction, like man-made viruses
  • Empower governments (or even individual companies) to entrench their power
  • Force us to face civilisation-defining questions about how to treat AI systems, how to share the benefits of AI, and how to govern an expansion into space.

Why Do People Disagree About When Powerful AI Will Arrive?

Sarah Hastings-Woodhouse (2025) | 12 min read (audio version available on Substack app)

This blog post compiles 5 arguments for short timelines and 4 arguments for long timelines:

Short timelines:
  1. Across many domains, AI capabilities (measured by benchmarks) continue to advance faster than expected
  2. AIs are able to complete longer and longer tasks
  3. Automating AI research itself may be the only prerequisite needed to achieve AGI
  4. We could train much larger models before 2030
  5. Experts' timelines continue to shorten

Long timelines:
  1. Some essential capabilities may not be easy to measure in benchmarks
  2. Some tasks are easy for humans but hard for AIs (and vice versa)
  3. We aren't certain whether an intelligence explosion really is possible
  4. There may be more to discovery than just “raw intelligence”

Speed of AI Development (excerpt from AI Safety, Ethics, and Society)

Dan Hendrycks (2024) | 8 min read (7 min audio available on Spotify)

This excerpt from the textbook AI Safety, Ethics, and Society covers important factors influencing the speed of AI development.

See also the eponymous course created by the Center for AI Safety.

‣
(View excerpt)

The following text is excerpted from the reading.

§ Introduction

It is comfortable to believe that we are nowhere close to creating AI systems that match or surpass human performance on a wide range of cognitive tasks. However, given the wide range of opinions among experts and current trends in compute and algorithmic efficiency, we do not have strong reasons to rule out the possibility that such AI systems will exist in the near future. Even if development in this direction is slower than the more optimistic projections, the development of AI systems with powerful capabilities on a narrower set of tasks is already happening and is likely to introduce novel risks that will be challenging to manage.

HLAI (human-level AI) is a helpful but flawed milestone for AI development. When discussing the speed of developments in AI capabilities, it is important to clarify what reference points we are using. Concepts such as HLAI, AGI or transformative AI, introduced earlier in this chapter, are under-specified and ambiguous in some ways, so it is often more helpful to focus on specific capabilities or types of economic impact. Despite this, there has been intense debate over when AI systems on this level might be achieved, and insight into this question could be valuable for better managing the risks posed by increasingly capable AI systems. In this section, we discuss when we might see general AI systems that can match average human skill across all or nearly all cognitive tasks. This is equivalent to some ways of operationalizing the concept of AGI.

§ Potential for Rapid Development of HLAI

HLAI systems are possible. The human brain is widely regarded by scientists as a physical object – a fundamentally complex biological machine – that is nonetheless able to give rise to a form of general intelligence. This suggests that there is no reason another physical object could not be built with at least the same level of cognitive functioning. While some would argue that an intelligence based on silicon or other materials will be unable to match one built on biological cells, we see no compelling reason to believe that particular materials are required. Such statements seem uncomfortably similar to the claims of vitalists, who argued that living beings are fundamentally different from non-living entities due to containing some non-physical components or having other special properties. Another objection is that copying a biological brain in silicon will be a huge scientific challenge. However, there is no need for researchers looking to create HLAI to create an exact copy or "whole brain emulation". Airplanes are able to fly but do not flap their wings like birds – nonetheless, they function because their creators have understood some key underlying principles. Similarly, we might hope to create AI systems that can perform as well as humans through looser forms of imitation rather than exact copying.

High uncertainty for HLAI timelines. Opinions on "timelines" – how difficult it will be to create human-level AI – vary widely among experts. A 2023 survey of over 2,700 AI experts found a wide range of estimates of when HLAI was likely to appear. The combined responses estimated a 10% probability of this happening by 2027, and a 50% probability by 2047. A salient point is that more recent surveys generally indicate shorter timelines, suggesting that many AI researchers have been surprised by the pace of advances in AI capabilities. For example, a similar survey conducted in 2022 yielded a 50% probability of HLAI by 2059. In other words, over a period of just one year, experts brought forward their estimates of when HLAI had a 50% chance of appearing by 12 years. Nonetheless, it is also worth being cautious about experts interpreting evidence of rapid growth over a short period too narrowly. In the 1950s and 1960s, many top AI scientists were overly optimistic about what was achievable in the short term, and disappointed expectations contributed to the subsequent "AI Winter."

Intense incentives and investment for AGI. Vast sums of money are being dedicated to building AGI, with leaders in the field having secured billions of dollars. The cost of training GPT-3 has been estimated at around $5 million, while the cost for training GPT-4 was reported to be over $100 million. As of 2024, AI developers are spending billions of dollars on GPUs for training the next generation of AI systems.

Increasing investment has translated to growing amounts spent on compute; between 2009 and 2024, the cost of compute used to train notable ML models has roughly tripled each year. Moreover, although scaling compute may seem like a relatively simple approach, it has so far proven remarkably effective at improving capabilities over many orders of magnitude of scale. For example, looking at the task of next-token prediction, not only has the loss in performance reduced with increasing training compute, but the trend has also remained consistent as compute has spanned over a dozen orders of magnitude. These developments have defied the expectations of some skeptics who believed that the approach of scaling would quickly reach its limits and saturate. Additionally, since compute costs are falling, the amount being used has increased more than spending on it; although spending has been tripling each year, the amount of training compute for notable models has been quadrupling.
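
As a quick sanity check on what those compound rates imply, here is a short sketch using the excerpt's approximate multipliers (the tripling and quadrupling figures are rough trend estimates, not precise data):

```python
import math

years = 2024 - 2009                 # the 15-year window cited above

spend_growth = 3 ** years           # spending roughly tripling each year
compute_growth = 4 ** years         # training compute roughly quadrupling

print(f"Spending grew ~{math.log10(spend_growth):.0f} orders of magnitude")   # ~7
print(f"Compute grew ~{math.log10(compute_growth):.0f} orders of magnitude")  # ~9
# Compute outpaces spending because the price of compute keeps falling.
```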

Improvements in drivers, software and other elements are also contributing to the training of ever-larger AI models. For example, FlashAttention made the training of transformers more efficient by minimizing redundant operations and efficiently utilizing hardware resources during training.

Besides increasing compute, another indicator of the growth of AI research is the number of papers published in the field. This metric has also risen rapidly in the past few years, more than doubling from around 128,000 papers in 2017 to around 282,000 in 2022. This suggests that increasing investment is not solely going towards funding ever-larger models, but is also associated with a large increase in the amount of research going into improving AI systems.

§ Obstacles to HLAI

More conceptual breakthroughs may be needed to achieve HLAI. Although simply scaling compute has yielded improvements so far, we cannot necessarily rely on this trend to continue indefinitely. Achieving HLAI may require qualitative changes, rather than merely quantitative ones. For example, there may be conceptual breakthroughs required of which we are so far unaware. This possibility adds more uncertainty to projected timelines; whereas we can extrapolate previous patterns to predict how training compute will increase, we do not know what conceptual breakthroughs might be needed, let alone when they might be made.

High-quality data for training might run out. The computational operations performed in the training of ML models require data to work with. The more compute used in training, the more data can be processed, and the better the model's capabilities will be. However, as compute being used for training continues to rise, we may reach a point where there is not enough high-quality data to fuel the process. But there are strong incentives for AI developers to find ways to work around this. In the short term, they will find ways to access new sources of training data, for example by paying owners of relevant private datasets. Beyond this, they may try a variety of approaches to reduce the reliance on human-generated data. For example, they may use AI systems to create synthetic or augmented data. Alternatively, AI systems may be able to improve further by competing against themselves through self-play, in a similar way to how AlphaGo learned to play Go at superhuman level.

Investment in AI may drop if financial returns are disappointing. Although substantial resources are currently being invested in scaling ML models, we do not know how much scaling is required to reach HLAI (even if scaling alone were enough). As companies increase their spending on compute, we do not know whether their revenue from the technology they monetise will increase at the same rate. If the costs of improving the ML models grow more quickly than financial returns, then companies may turn out not to be economically viable, and investment may slow down.

Conclusion

There is high uncertainty around when HLAI might be achieved. There are strong economic incentives for AI developers to pursue this goal, and advances in deep learning have surprised many researchers in recent years. We should not be confident in ruling out the possibility that HLAI could also appear in coming years.

AI can be dangerous long before HLAI is achieved. Although discussions of possible timelines for HLAI are pertinent to understanding when the associated risks might appear, it can be misleading to focus too much on HLAI. This technology does not need to achieve the same level of general intelligence as a human in order to pose a threat. Indeed, systems that are highly proficient in just one area have the potential to cause great harm.

Many AI capabilities follow clear scaling laws: performance (y-axis) generally increases with the amount of computing power used for training (FLOPs; x-axis). However, many abilities emerge suddenly at specific, high thresholds, meaning progress is often nonlinear and remains fundamentally hard to predict. Graphs from Wei et al (2022).
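
A toy numerical illustration of the caption's point, with invented constants: even when an underlying metric like loss improves smoothly with compute, a pass/fail capability can appear abruptly once the metric crosses a threshold.

```python
import math

# Invented constants, purely illustrative of threshold effects:
for log_flops in range(18, 25):               # 10^18 .. 10^24 training FLOPs
    loss = 10 / log_flops                     # smooth, gradual improvement
    # A pass/fail task that is only solved once loss drops below ~0.46:
    accuracy = 1 / (1 + math.exp(-40 * (0.46 - loss)))
    print(f"10^{log_flops} FLOPs: loss={loss:.3f}, task accuracy={accuracy:.2f}")
```

The loss declines steadily across the whole range, yet task accuracy jumps from a few percent to near ceiling within a few orders of magnitude of compute – the nonlinearity the figure describes.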

Takeoff: when AI starts to research AI

Today, AI research is done by humans – with increasing assistance from AI tools. What happens when AI research is conducted by AI?

The first AI researcher probably won’t be on par with our best human experts. It may be no better than an average human researcher. Even so, it could still kickstart exponential progress in AI research – soon leading to TAI:

There are only two times you can react to an exponential: Too early, or too late.

– Connor Leahy (2023)

  1. An AI researcher can be trained much faster than a human.
  2. An AI researcher can process lots of information (e.g. large, multimodal datasets), and it can do this much faster than a human.
  3. An AI researcher does not need to take breaks, eat, or sleep.
  4. An AI researcher can be copied (see below).
  5. An AI researcher can be deployed at low cost (much less than a researcher salary).
  6. An AI researcher can apply its own insights to create better, smarter AI researchers (a compounding loop sketched below).
What would happen if the number of scientific researchers grew by 25× within a year? Graph from MacAskill and Moorhouse (2025).
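
A deliberately crude toy model of that feedback loop follows. Every constant is invented for illustration; this is a sketch of the compounding dynamic, not a forecast from the readings.

```python
# All numbers below are invented for illustration; this is not a forecast.
human_researchers = 100_000       # assumed fixed human research workforce
ai_researchers = 1_000            # assumed initial AI researcher count
quality = 0.5                     # assumed output relative to an average human

for year in range(1, 6):
    ai_researchers *= 5                   # copying scales the population (items 4-5)
    quality = min(2.0, quality * 1.5)     # self-improvement raises quality (item 6)
    effective = human_researchers + ai_researchers * quality
    print(f"Year {year}: ~{effective:,.0f} effective researchers")
```

Under these made-up parameters the effective research workforce grows more than tenfold within four years – the kind of discontinuity the graph above asks about.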

We're Not Ready for Superintelligence (below)

Aric Floyd (2025) | 34 min documentary

This documentary produced by 80,000 Hours summarizes major developments from AI 2027, an expert-authored scenario planning exercise which forecasts a rapid escalation of AI capabilities – including the automation of AI research itself, an “intelligence explosion”, and the emergence of superhuman AI by 2027.

In addition to exploring how TAI might radically change our world, the video also addresses criticisms of short TAI timelines.


The post-TAI world could be very weird – and advocates should prepare now

How exactly would TAI “transform” the world? While it’s hard to predict what fundamentally transformed worlds might look like, here are a few things that could happen in post-TAI worlds:

  • Vastly accelerated scientific research, with breakthroughs across many domains (e.g. medicine)
  • Vastly accelerated technological development
  • Vastly accelerated economic growth
  • Shifts in power balance – or lock-in
  • Creation of artificial sentient beings (“digital minds”)
  • Space colonization

The political, economic, and societal landscape after transformative AI may be entirely unrecognizable from the world we know today. How, then, should advocates prepare for the possibility of near-term TAI?

Animal Advocates Should Respond to Transformative AI Maybe Arriving Soon

Jamie Harris (2025) | 10 min read (17 min audio available)

Field leader Jamie Harris (formerly of Sentience Institute and Macroscopic Ventures, now at the Centre for Effective Altruism) compares 6 different ways advocates could respond to near-term TAI:
  1. Focus on short-term wins, rather than projects that take more than 5 years to pay off
  2. Anticipate how AI will change things and try to steer AI development
  3. Increase concern for animals among AIs and the parties in control of AI
  4. Build the field to prepare for TAI
  5. Shift to working on digital minds advocacy (promoting the welfare of potentially conscious or sentient advanced AIs)
  6. Shift to AI safety (designing guardrails and systems to minimise risks caused by misaligned AI)

A Shallow Review of What Transformative AI Means for Animal Welfare

Ben West and Lizka Vaintrob (2025) | 25 min read (38 min audio available)

Researchers from Forethought and Model Evaluation and Threat Research (METR) make the case for a cautious approach to animal welfare, given tremendous uncertainty about how transformative AI might change the world, while calling for more exploratory research into cause prioritization and capacity-building that is reasonably robust across different post-TAI scenarios.

Further readings (optional)


Key organizations

The lay of the land

AISafety.com
(above)

This website collects resources to supercharge your career in AI safety. It also includes the AI Existential Safety Map, an interactive map of major organizations in the world of AI safety organized into different “territories” corresponding to focus or approach.

See especially the Career Castle to the south!

AI Governance Map (below)

This second interactive map displays major organizations in the world of AI governance, separated into 5 broad areas:
  1. Policy
  2. Research
  3. Advocacy
  4. Forecasting
  5. Industry standards and regulations

Careers

BlueDot Impact

A leading nonprofit providing high-quality courses on AI safety, governance, and other topics, such as:
  • AGI Strategy
  • Technical AI Safety
  • AI Governance

80,000 Hours (below)

Preparing society for AGI is among the top priorities of this nonprofit dedicated to supporting career development.

Research, policy, and funding

Foresight Institute

A nonprofit which funds and supports high-impact research across such areas as nanotechnology, safe AI, longevity, and space. Their Existential Hope program supports optimistic futurism.

Forethought

A research nonprofit dedicated to carefully navigating the transition to a world with superintelligent AI systems.

Future of Life Institute

A nonprofit whose mission is to steer transformative technologies away from catastrophic risks and towards benefiting life.

Check out their list of recommended references on the benefits and risks of AI.

Epoch AI

A nonprofit focused on forecasting the trajectory and societal impact of AI with a view to informing decision-making and strategy. Their work – which some policymakers have referenced – aims to bring more scientific rigor to debates about when transformative AI might arrive, how fast AI capabilities will grow, and what economic and governance implications arise from those trends.

Model Evaluation and Threat Research (METR; formerly known as ARC Evals)

A nonprofit research organisation that assesses potentially dangerous capabilities in state-of-the-art AI models. Their work includes building evaluation suites to measure general autonomous capabilities, developing red-line threat tests, and releasing regular research updates on AI capabilities.

Coefficient Giving (formerly Open Philanthropy)

A major philanthropic funder that supports high-impact work across a broad spectrum of cause areas – including transformative AI.

TAI timelines: How soon could transformative AI arrive?

AISafety.info

A community-maintained FAQ covering common questions about AI risk, safety research, alignment, governance, and practical ways to contribute. It aims to provide clear, non-technical explanations of key concepts in advanced AI safety.

Expert forecasts

AI Timelines: What Do Experts in Artificial Intelligence Expect for the Future?

Max Roser (2023)

A visual and data-driven summary of expert forecasts for AI timelines, including survey results on when researchers expect transformative AI systems to arrive. This report places different timeline estimates in context and highlights areas of uncertainty.

“Long” Timelines to Advanced AI Have Gotten Crazy Short

Helen Toner (2024) | 8 min read

Helen Toner, a former board member of OpenAI, reflects on how predictions of when transformative AI will be created have grown shorter and shorter over time – though there is still much debate and uncertainty.

Shrinking AGI timelines: A Review of Expert Forecasts

Benjamin Todd (2025) | 6 min read

A summary of recent expert surveys estimating when AGI may be developed, with three main takeaways:
  1. Timeline estimates have shortened over recent years.
  2. AGI before 2030 falls within the range of expert opinion, though there is disagreement.
  3. Forecasts are subject to significant uncertainty: it is difficult to rule out or rule in AGI arriving soon.

The Case for AGI by 2030

Benjamin Todd (2023) | 60 min audio available

An accessible guide to AGI timelines that uses charts, graphics, and scenario walkthroughs to explain why forecasts differ. It introduces key concepts in forecasting AI progress and compares short-, medium-, and long-timeline views.

Biases

Unaware and Unaccepting: Human Biases and the Advent of Artificial Intelligence

This journal article examines cognitive biases (e.g. normalcy bias, motivated reasoning) that lead people (e.g. policymakers, the general public) to underestimate, dismiss, or misunderstand the implications of rapidly advancing AI systems.

Why AI Moonshots Miss

Jeffrey Funk and Gary Smith (2021)

This article places current expectations about AGI in the historical context of past overconfidence, highlighting recurring technical, institutional, and conceptual barriers to breakthrough progress.

AI Timelines and Human Psychology (below)

Sarah Hastings-Woodhouse (2025)

Sarah Hastings-Woodhouse (previously of BlueDot, Pivotal, and the Future of Life Institute, now at the AI Security Institute) turns a social-psychological lens on AGI timelines, raising concerns about confounders like bandwagon effects.

TAI-aware advocacy

How Should We Adapt Animal Advocacy to Near-Term AGI?

Max Taylor (2025) | 9 min read (16 min audio available)

Near-term AGI could precipitate radical social and technological changes – with significant implications for advocacy and movement strategy.

AGI×Animals Wargame

Sentient Futures (2025) | 7 min read (15 min audio available)

This report summarizes findings from a strategic scenario planning exercise exploring how geopolitical actors, frontier AI labs, and animal advocacy organisations might react to breakneck AI development. The aim of this wargame is to identify advocacy strategies that are robust across divergent scenarios (e.g. technological breakthroughs, shifting coalitions, economic transitions).

Materials for the wargame are freely available – we encourage you to run your own version of the wargame!

Transformative AI and Animals: Animal Advocacy Under a Post-Work Society

Kevin Xia (2025) | 10 min read (20 min audio available)

What does farmed animal advocacy look like in a world where human labour is largely automated? In this piece, Kevin Xia of Hive outlines opportunities and challenges for improving animal welfare under post-work economic conditions, while also exploring which interventions may become more or less impactful.

Transformative AI and Wild Animals: An Exploration

Mal Graham (2025) | 30 min read (52 min audio available)

Mal Graham, Executive Director of Wild Animal Initiative, explores how transformative AI could affect wild animal welfare across ecological management, habitat modification, biotechnology, environmental policy, and digital simulation. The piece outlines both promising opportunities and major risks, aiming to clarify what responsible stewardship might look like in a TAI-enabled future.

Pre-session exercises

Please spend 20-30 minutes completing these two exercises.

  • You can write your responses in bullet point format if that’s easier.
  • Submit your responses in the weekly Slack thread created by your facilitator in your channel at least 24 hours before your regularly scheduled meeting.
  • Leave at least one comment on somebody else’s response.

Is TAI really around the corner?

[150 words] The advent of transformative artificial intelligence may fundamentally and irreversibly change the strategic landscape for animal advocacy.

First, review the basic argument for strategic adaptation in the face of potentially near-term TAI:

  1. Speed and uncertainty: AI development is fast and hard to predict.
  2. Near-term TAI: If (i), then TAI may be coming soon.
  3. Weird futures: The world after TAI could be radically, fundamentally different from today’s world in terms of power structures, institutions, markets, and more.
  4. Strategic obsolescence: If (iii), then current advocacy strategies may not work in a post-TAI world, insofar as they assume business-as-usual.
  5. Futureproof now: If (iv), then advocates should adapt their strategies to accommodate potentially near-term TAI.

∴ Conclusion: Advocates should adapt their strategies to accommodate potentially near-term TAI.

Your task is to analyze this argument:

  1. Which specific premise(s) (i-v) do you find the most compelling or well-supported?
  2. Which do you find the most questionable or dubious?
  3. Which crucial considerations, if any, might be missing?

Explain your reasoning, making reference to key concepts where relevant.

Futureproofing advocacy?

[150 words] Jamie Harris proposes 6 strategic responses for animal advocates to consider when operating in an era of looming, transformative change:

  1. Optimise harder for immediate results
  2. Predict how AI will change things, and try to make that go well for animals
  3. Try to increase the concern that AIs or their controllers show for animals
  4. Focus on building capacity to prepare for TAI
  5. Shift to AI welfare, to protect potential sentient AIs from suffering
  6. Shift towards all-inclusive AI safety

Choose one strategy and describe how you would implement it in either:

  • Your own advocacy work (or desired type of advocacy work), or
  • The specific activities of an existing advocacy organization

Be as specific as possible and justify your reasoning.

Return to the AI×Animals home page
Continue to week 8
Back to week 6