
Utilitarian and Consequentialist Approaches to AI Ethics

The rapid advancement of artificial intelligence systems presents unprecedented challenges for ethical frameworks, particularly for utilitarian and consequentialist approaches that have long dominated discussions of technology governance. As AI systems increasingly shape critical decisions affecting billions of lives—from healthcare allocation to criminal justice, from employment screening to climate management—the application of utilitarian principles reveals fundamental tensions between aggregate benefit maximization and distributional justice. This essay examines four core dimensions of utilitarian AI ethics: the complications arising when AI affects populations unequally, the challenges of evaluating unpredictable AI systems through consequentialist frameworks, temporal considerations in utilitarian analysis, and the contested metrics for measuring utility in algorithmic systems. It then turns to philosophical critiques, policy applications, and the formal impossibility results that constrain fairness-aware algorithm design.

Utilitarian Calculations and Unequal Population Impacts

The promise of AI systems to maximize aggregate welfare through efficient resource allocation and decision optimization confronts a fundamental challenge: these systems often concentrate harms on already marginalized communities while distributing benefits broadly. This “distribution problem” in utilitarian calculations manifests across multiple domains, creating what Cathy O’Neil terms “weapons of math destruction”—algorithms that reinforce inequality under the guise of mathematical objectivity.¹

Predictive Policing and the Efficiency-Justice Dilemma

The case of predictive policing algorithms exemplifies this tension acutely. PredPol, one of the most widely deployed systems, demonstrates how utilitarian justifications can mask discriminatory impacts. The Markup’s 2021 investigation of 5.9 million PredPol predictions across 38 jurisdictions revealed that the algorithm would have targeted Black and Latino neighborhoods up to 400% more often than white areas in Indianapolis.² While proponents argue these systems maximize aggregate crime reduction—a clear utilitarian benefit—the concentration of policing in minority communities creates cascading harms. Research in the American Sociological Review found that increased policing in New York City hot spots lowered the educational performance of Black boys from those neighborhoods, while another study showed that boys stopped by police multiple times were more likely to report delinquent behavior months later.³

Chicago’s Strategic Subject List, which identified individuals likely to be involved in violent crime, generated risk scores for approximately 400,000 residents before its discontinuation in 2020.⁴ Despite utilitarian arguments about protecting vulnerable neighborhoods with high victimization rates, the system showed significant racial disparities and was found to violate the Illinois Civil Rights Act due to racially disparate impact.⁵ The utilitarian calculation becomes: does aggregate crime reduction justify concentrated harm to specific communities through over-policing, educational impacts, and psychological trauma?

Healthcare AI and Population Health Tradeoffs

Healthcare algorithms reveal similar tensions between utilitarian efficiency and equity. A landmark 2019 Science study analyzed a widely-used healthcare algorithm affecting millions of patients, finding that it systematically underestimated Black patients’ health risks because it used healthcare costs as a proxy for health needs.⁶ Since Black patients historically receive less care and spend less on healthcare, the algorithm perpetuated these inequities. At equivalent risk scores, Black patients were significantly sicker than white patients. Fixing this bias would have increased the percentage of Black patients receiving additional care from 17.7% to 46.5%.⁷
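
The mechanism can be reproduced in a toy model. The sketch below is purely illustrative (it is not the algorithm Obermeyer et al. studied, and every parameter is invented): two groups have identical true health needs, but one group’s access barriers suppress observed spending, so enrolling patients by a cost proxy under-selects that group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical distributions of true health need (by construction).
group = rng.integers(0, 2, n)                    # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # latent health need

# Assumption for illustration: group B faces access barriers, so observed
# spending understates need by roughly 40%.
access = np.where(group == 1, 0.6, 1.0)
cost = need * access * rng.lognormal(0.0, 0.3, n)  # noisy cost proxy

# Enroll the top 3% by each score, as a care-management program might.
k = int(0.03 * n)
by_cost = np.argsort(-cost)[:k]
by_need = np.argsort(-need)[:k]

print("group B share of enrollees, cost proxy:", (group[by_cost] == 1).mean())
print("group B share of enrollees, true need: ", (group[by_need] == 1).mean())
# Ranking on the cost proxy enrolls far fewer group B patients than ranking on
# true need, even though the two groups are equally sick by construction.
```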

IBM Watson for Oncology, marketed as revolutionary cancer treatment assistance, faced criticism for bias toward treatments familiar to its training institution, Memorial Sloan Kettering Cancer Center, making it poorly suited for diverse global populations.⁸ During COVID-19, AI systems rapidly deployed for resource allocation risked “augmenting inequality” because training data reflected existing healthcare disparities and often excluded minority populations from risk assessment models.⁹

Employment and Financial Systems

Amazon’s scrapped hiring algorithm (2014-2017) provides a documented case of how utilitarian efficiency arguments can perpetuate discrimination. The system, trained on ten years of resumes from Amazon’s predominantly male engineering workforce, systematically discriminated against women by downgrading resumes containing the word “women’s” (as in “women’s chess club captain”) and penalizing graduates of all-women’s colleges.¹⁰ Despite engineering attempts to fix the bias, Amazon ultimately discontinued the project when it could not guarantee gender neutrality.

In financial systems, an analysis of 50 million credit reports by Stanford and University of Chicago economists revealed that minority applicants’ credit scores are 5-10% less accurate in predicting default risk than white applicants’ scores. The result is a self-perpetuating cycle: inaccurate scores lead to loan denials, which prevent minorities from building the credit histories needed for more accurate future assessments.¹¹ A UC Berkeley study found African American and Latinx borrowers pay interest rates nearly 5 basis points higher in fintech lending, amounting to $450 million extra annually—a “race premium” that utilitarian efficiency arguments struggle to justify.¹²
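
A minimal simulation can illustrate this feedback loop (all parameters are invented; this is not the economists’ model): a group whose credit files start out noisier sees more denials among equally creditworthy applicants, and because only approved borrowers accumulate history, their scores stay noisy.

```python
import numpy as np

# Hypothetical feedback-loop sketch; every parameter is invented for illustration.
rng = np.random.default_rng(3)
n = 50_000

creditworthy = rng.random(n) < 0.8        # identical true repayment ability in both groups
noise_sd = np.full(n, 0.25)
noise_sd[: n // 2] = 0.10                 # first half: thick credit files, accurate scores

for year in range(5):
    score = creditworthy + rng.normal(0.0, noise_sd)   # noisy estimate of creditworthiness
    approved = score > 0.7
    # Only approved borrowers build history, so only their scores sharpen next round.
    noise_sd[approved] = np.maximum(noise_sd[approved] - 0.03, 0.05)

print(f"approval rate, thick-file group: {approved[: n // 2].mean():.3f}")
print(f"approval rate, thin-file group:  {approved[n // 2 :].mean():.3f}")
# The thin-file group is denied more often despite identical creditworthiness, and
# denial prevents exactly the history-building that would make its scores accurate.
```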

Consequentialist Frameworks and Unpredictable AI Systems

The application of consequentialist ethics to AI systems faces fundamental challenges when those systems exhibit unpredictable, emergent, or evolving behaviors. Traditional expected utility theory assumes predictable outcomes and stable preferences, but modern AI systems violate these assumptions in multiple ways.

Emergent Behaviors in Large Language Models

Large language models demonstrate capabilities that emerge discontinuously at scale, confounding ex-ante consequentialist evaluation. In-context learning appeared abruptly around GPT-3’s 175-billion-parameter scale, followed in later models by chain-of-thought reasoning and tool-use abilities that were never explicitly programmed.¹³ Llama 3.1’s 405-billion-parameter model achieved 87.3% on the MMLU benchmark, matching GPT-4 Turbo’s performance, with near-parity on graduate-level reasoning tests—capabilities that emerged unpredictably as scale increased.¹⁴

Roman Yampolskiy argues on formal grounds that predicting the specific actions of smarter-than-human systems is impossible even when their terminal goals are known, creating fundamental challenges for consequentialist evaluation.¹⁵ This “unexplainable, unpredictable, uncontrollable” framework identifies three limitations: AI decisions cannot be fully interpreted by humans, future actions cannot be precisely forecast, and human oversight becomes increasingly ineffective as systems advance.

Stuart Russell’s Beneficial AI Framework

Russell’s Cooperative Inverse Reinforcement Learning (CIRL) framework provides a consequentialist approach that addresses these challenges by embedding uncertainty about utility functions as a core feature.¹⁶ The framework operates on three principles: machines should maximize human values (consequentialist foundation), maintain uncertainty about those values (epistemic humility), and learn values through observing human behavior (preference inference). This approach modifies traditional expected utility theory by treating human feedback as evidence about preferences rather than direct reward signals, potentially solving the “wireheading” problem where systems optimize metrics rather than underlying values.¹⁷
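
A short sketch can make this epistemic shift concrete. The code below is an illustration in the spirit of CIRL rather than the authors’ implementation; the candidate utility functions, the observed choices, and the rationality parameter are all assumptions. The agent maintains a posterior over hypotheses about the human’s utility function and updates it from observed choices under a Boltzmann-rational choice model:

```python
import numpy as np

# Sketch in the spirit of CIRL, not the authors' implementation; the hypotheses,
# observed choices, and rationality parameter are all invented for illustration.

options = ["safe", "fast", "cheap"]
hypotheses = {                       # candidate human utility functions over the options
    "values_safety": np.array([1.0, 0.2, 0.1]),
    "values_speed":  np.array([0.1, 1.0, 0.3]),
    "values_cost":   np.array([0.2, 0.3, 1.0]),
}
posterior = {h: 1.0 / len(hypotheses) for h in hypotheses}   # uniform prior
beta = 3.0   # rationality: higher means human choices track utility more reliably

def likelihood(choice_idx, utilities):
    """P(human picks this option | utility function), under a softmax choice model."""
    p = np.exp(beta * utilities)
    return (p / p.sum())[choice_idx]

for choice in ["safe", "safe", "cheap"]:        # observed human choices
    i = options.index(choice)
    posterior = {h: posterior[h] * likelihood(i, u) for h, u in hypotheses.items()}
    z = sum(posterior.values())
    posterior = {h: p / z for h, p in posterior.items()}

print({h: round(p, 3) for h, p in posterior.items()})
# The agent then acts to maximize expected utility under this posterior rather than
# a fixed reward, which is what lets human feedback serve as evidence about
# preferences instead of a direct reward signal.
```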

Multi-Agent Systems and Unexpected Strategies

OpenAI’s hide-and-seek experiment demonstrates how competitive pressure creates a multi-agent autocurriculum in which each strategic innovation forces counter-adaptations.¹⁸ The emergence of “box surfing”—agents using boxes as mobile platforms in ways the environment’s designers never anticipated—represents genuinely emergent problem-solving that violated implicit assumptions about the game’s mechanics. The episode illustrates both instrumental convergence (agents develop tool use because it is instrumentally useful) and the orthogonality thesis (advanced capabilities can emerge independently of value alignment).¹⁹

Temporal Dimensions in Utilitarian AI Analysis

The temporal dimension of utilitarian AI analysis reveals profound tensions between immediate benefits and long-term consequences, raising fundamental questions about intergenerational justice and democratic legitimacy in AI governance.

Short-Term Productivity vs Long-Term Displacement

Economic analyses show significant temporal variation in AI impacts. Goldman Sachs found that despite rapid AI proliferation, “aggregate labor market impacts are still negligible” in current metrics, suggesting a lag between adoption and measurable labor-market effects.²⁰ The Tony Blair Institute projects AI could boost UK GDP by 14% by 2050, yet its “whirlwind scenario” predicts 3 million job displacements by 2035, with unemployment peaking around 2040.²¹ IMF research indicates nearly 40% of global employment faces AI exposure, with advanced economies experiencing 60% exposure rates.²²

Existential Risk and Longtermism

Nick Bostrom’s superintelligence thesis argues that AI systems “greatly exceeding cognitive performance of humans in virtually all domains” could pose existential risks requiring immediate action despite uncertain timelines.²³ Toby Ord’s “The Precipice” quantifies this as a 1 in 10 chance of AI-caused existential catastrophe within the next century—higher than all other existential risks combined.²⁴ His 2024 update shows AI risk accelerating while noting that language models may plateau at human level due to training data limits.²⁵

William MacAskill’s “What We Owe The Future” emphasizes value lock-in concerns—whoever designs AGI could permanently embed their values, affecting all future generations.²⁶ His recent work on “Preparing for the Intelligence Explosion” argues for a “century in a decade” of technological progress, identifying multiple risks from destructive technologies, power concentration, and digital rights.²⁷

Climate AI and Intergenerational Tradeoffs

Climate AI applications reveal complex temporal dynamics. Recent analyses suggest AI could accelerate climate solutions by 3-6 GtCO2e of annual emissions reductions by 2035, yet AI systems themselves may add 0.4-1.6 GtCO2e annually through their energy consumption.²⁸ Google and Microsoft report emissions increases of 48% and 29% respectively due to AI data center expansion, creating intertemporal optimization problems in which immediate energy investments promise future emissions reductions.²⁹

Population Ethics and Future Generations

Derek Parfit’s non-identity problem—that AI development decisions affect which specific future people exist—complicates harm/benefit calculations.³⁰ UN initiatives emphasize that AI governance frameworks must consider impacts on future generations, with proposals for an Earth Trusteeship Council to govern AI as a global public good.³¹ The tension between addressing immediate AI harms and long-term existential risks reflects deeper philosophical debates about temporal discounting and intergenerational justice.

Metrics for Measuring Utility in AI Systems

The challenge of quantifying utility in AI systems reveals fundamental limitations of utilitarian approaches and the persistent problem of Goodhart’s Law—when a measure becomes a target, it ceases to be a good measure.

Economic Efficiency vs Human Welfare

Traditional economic metrics show dramatic variation in AI impact assessments. Goldman Sachs estimates a 7% GDP boost over 10 years (~$7 trillion globally), while McKinsey projects $2.6-4.4 trillion annually from generative AI alone.³² However, MIT’s Daron Acemoglu offers a more conservative 1% GDP growth over 10 years, arguing only 5% of tasks can be profitably automated.³³ These disparities highlight the difficulty of capturing AI’s transformative effects through conventional economic measures.

Quality-Adjusted Life Years (QALYs) and Disability-Adjusted Life Years (DALYs) in healthcare AI evaluation face limitations in capturing indirect effects on healthcare delivery and access.³⁴ Recent research shows these metrics may undervalue treatments benefiting elderly or disabled populations, raising questions about the utilitarian framework’s ability to ensure equitable healthcare AI deployment.³⁵
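
The arithmetic behind that concern is simple to state. In the hypothetical example below, an intervention that adds five years of life earns fewer QALYs for a patient whose baseline quality weight reflects a pre-existing disability, even though both patients gain exactly the same years:

```python
# Illustrative QALY arithmetic; the quality weights below are hypothetical.

def qaly(years, quality_weight):
    """Quality-adjusted life years: duration times utility weight (0 = death, 1 = full health)."""
    return years * quality_weight

# The same five-year life extension, scored for two patients.
gain_without_disability = qaly(5, 0.9)   # 4.5 QALYs
gain_with_disability    = qaly(5, 0.6)   # 3.0 QALYs, solely because the baseline weight is lower

print(gain_without_disability, gain_with_disability)
# A QALY-maximizing allocator would prioritize the first patient even though both
# gain identical years of life, which is how the metric can embed bias against
# disabled and elderly populations.
```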

Capability Approach and Alternative Frameworks

Amartya Sen’s capability approach offers an alternative to utilitarian measurement, focusing on what people can do and be rather than aggregate utility.³⁶ Martha Nussbaum’s ten central human capabilities provide a framework for evaluating AI impacts beyond efficiency metrics, considering effects on life, bodily health, bodily integrity, practical reason, and human affiliation.³⁷ However, operationalizing capabilities proves challenging compared to quantitative utility measures.

The Quantification Problem

Research reveals systematic problems with quantifying utility in AI contexts. Commensurability issues arise when diverse values must be placed on a common scale.³⁸ Interpersonal comparisons falter when utility must be weighed across individuals in very different circumstances.³⁹ Context dependence means the same outcome may carry different utility in different settings, and cultural variation shows that utility judgments differ significantly across communities.⁴⁰

Goodhart’s Law manifests particularly strongly in AI systems: recommendation algorithms optimizing for engagement promote extreme content,⁴¹ performance metrics invite gaming, and misalignment between proxy objectives and true goals produces systematic failures. Recent mathematical work shows that Goodhart effects depend on the tail distribution of measurement error, suggesting that optimization pressure can fundamentally undermine the validity of any quantitative measure.⁴²
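
A toy simulation makes the tail-dependence point concrete (the distributions are chosen purely for illustration): when we select whichever candidate maximizes a noisy proxy for value, light-tailed measurement error still leaves the winner genuinely good, while heavy-tailed error means the “winner” is usually just a measurement outlier.

```python
import numpy as np

# Toy demonstration of Goodhart's law under optimization pressure; illustrative only.
rng = np.random.default_rng(1)
n_candidates, n_trials = 10_000, 200

def true_value_of_proxy_winner(noise_sampler):
    """Pick the candidate maximizing the proxy; report its average true value."""
    winners = []
    for _ in range(n_trials):
        true = rng.normal(0.0, 1.0, n_candidates)        # what we actually care about
        proxy = true + noise_sampler(n_candidates)       # what we can measure
        winners.append(true[np.argmax(proxy)])           # optimize the proxy hard
    return float(np.mean(winners))

light = true_value_of_proxy_winner(lambda n: rng.normal(0.0, 1.0, n))
heavy = true_value_of_proxy_winner(lambda n: rng.standard_cauchy(n))

print(f"true value of proxy winner, Gaussian error: {light:.2f}")
print(f"true value of proxy winner, Cauchy error:   {heavy:.2f}")
# With heavy-tailed measurement error, the proxy's argmax is usually a measurement
# outlier rather than a genuinely good candidate, so hard optimization of the proxy
# recovers almost none of the true value.
```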

Philosophical Perspectives and Critiques

The utilitarian dominance in AI ethics faces substantial critiques from multiple philosophical traditions, each offering alternative frameworks for AI governance.

Deontological Challenges

Kantian ethics objects fundamentally to treating people merely as means to aggregate ends. The 2023 paper “Kantian Deontology Meets AI Alignment” argues that current AI fairness approaches focus overwhelmingly on optimizing aggregate outcomes at the expense of individual dignity.⁴³ One analysis found that 12 of 14 formalized fairness metrics rely on model errors (a utilitarian framing), while only 2 focus on model behavior (a deontological framing).⁴⁴ Luciano Floridi’s work emphasizes that privacy should be “grafted as a first-order branch to the trunk of human dignity,” while Thomas Metzinger advocates a “global moratorium on synthetic phenomenology” to prevent creating conscious AI that could suffer.⁴⁵

Virtue Ethics and Care Ethics Alternatives

Shannon Vallor’s “technomoral virtues” framework argues for cultivating character traits specifically adapted for our technological age, emphasizing practical wisdom (phronesis) over algorithmic optimization.⁴⁶ Her twelve technomoral virtues—including honesty, humility, justice, empathy, and wisdom—provide an alternative to utilitarian calculation. Care ethics, as articulated by Nel Noddings and Virginia Held, emphasizes relational over calculative approaches, arguing that caring relationships cannot be quantified or optimized algorithmically.⁴⁷ Recent feminist scholarship reveals how utilitarian approaches in AI can reinforce existing power imbalances by focusing on aggregate outcomes rather than caring relationships.⁴⁸

Non-Western Philosophical Frameworks

Sabelo Mhlambi’s work on Ubuntu philosophy—“a person is a person through other persons”—provides an African philosophical alternative emphasizing relationality and interconnectedness over individual utility maximization.⁴⁹ Unlike Western rationalism’s focus on individual agents reaching objective truth, Ubuntu requires AI development to center community well-being and collective decision-making. Confucian ethics similarly emphasizes social relationships and continuous moral improvement over fixed algorithmic rules, a stance reflected in China’s AI strategy, which places greater emphasis on social responsibility than on individualistic rights.⁵⁰

Policy Applications and Real-World Implementations

The tension between utilitarian theory and practical implementation appears starkly in real-world AI deployments and policy frameworks.

Legal Challenges to Utilitarian Justifications

The Netherlands’ SyRI welfare fraud detection system case (2020) established precedent that utilitarian justifications alone cannot override individual rights.⁵¹ Despite government arguments that SyRI served the “greater good” by preventing fraud and protecting taxpayer funds, the District Court of The Hague ruled it violated privacy rights, finding the utilitarian approach created a “digital welfare dystopia” targeting low-income minority neighborhoods.⁵²

The UK’s 2020 A-level algorithm controversy demonstrated public rejection of utilitarian optimization. Ofqual justified the algorithm as preventing grade inflation and maintaining standards—clear utilitarian goals—but the system downgraded almost 40% of teacher-assessed grades, disproportionately affecting state schools.⁵³ Widespread protests forced the government to abandon the algorithmic results, highlighting tensions between utilitarian optimization and individual fairness.

Regulatory Frameworks and Utilitarian Balancing

The EU AI Act’s risk-based approach attempts to maximize benefits while minimizing societal harms through fundamental rights impact assessments that balance individual rights against collective benefits.⁵⁴ The US NIST AI Risk Management Framework explicitly aims to “maximize benefits while minimizing risks” through a four-function approach reflecting utilitarian optimization.⁵⁵ Singapore’s Model AI Governance Framework promotes “balancing innovation with safeguarding consumer interests” through a communitarian ethics approach aligned with utilitarian collective welfare focus.⁵⁶

Effective Altruism’s Influence on AI Safety

The effective altruism movement has profoundly shaped AI safety research through substantial funding and institutional influence. Open Philanthropy has deployed approximately $336 million to AI safety since 2017, making it likely the largest global funder in this area.⁵⁷ This funding has created new research directions at organizations like Redwood Research ($21+ million), MIRI ($23.3 million total), and Anthropic (which received a $500 million investment from Sam Bankman-Fried).⁵⁸ The movement has also influenced career trajectories through 80,000 Hours recommendations ranking AI safety technical research and governance among the top career paths.⁵⁹

However, critics like Timnit Gebru and Émile Torres argue the EA approach is part of a “TESCREAL bundle” that prioritizes speculative long-term risks over documented current harms, with limited diversity in leadership and potential concentration of power.⁶⁰ The DAIR Institute advocates for focusing on immediate algorithmic harms, participatory governance, and intersectional approaches to AI ethics.⁶¹

Fairness-Utility Tradeoffs and Impossibility Results

Mathematical research reveals fundamental constraints on achieving multiple fairness goals simultaneously. Kleinberg, Mullainathan, and Raghavan’s impossibility theorem demonstrates that calibration, balance for positive class, and balance for negative class cannot be satisfied simultaneously under realistic conditions.⁶² Chouldechova’s theorem shows that when outcome prevalence differs across groups, algorithms cannot simultaneously satisfy both predictive parity and error rate balance.⁶³
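
The structure of Chouldechova’s result can be reproduced numerically. In the sketch below (synthetic data with invented parameters), a single risk score is calibrated by construction in both groups, yet applying one decision threshold yields different false positive rates whenever the base rates differ:

```python
import numpy as np

# Synthetic illustration of Chouldechova's impossibility result; parameters invented.
rng = np.random.default_rng(2)

def false_positive_rate(base_rate, n=200_000, threshold=0.5, concentration=4.0):
    # Draw risk scores whose group mean equals the base rate, then draw each outcome
    # with probability equal to the score, so the score is calibrated by construction.
    score = rng.beta(concentration * base_rate, concentration * (1 - base_rate), n)
    outcome = rng.random(n) < score
    negatives = ~outcome                           # people who do not reoffend
    return (score[negatives] >= threshold).mean()  # flagged high-risk anyway

print(f"FPR at base rate 0.5: {false_positive_rate(0.5):.3f}")
print(f"FPR at base rate 0.3: {false_positive_rate(0.3):.3f}")
# One calibrated score, one threshold: the higher-base-rate group's non-reoffenders
# sit closer to the cutoff, so its false positive rate is mechanically higher; this
# is the same structure ProPublica documented in COMPAS.
```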

These impossibility results have profound implications for utilitarian AI ethics: any fairness-aware system must make explicit tradeoffs between competing fairness notions. The ProPublica analysis of COMPAS revealed that while the tool satisfied calibration, it failed separation (error-rate balance), with Black defendants nearly twice as likely to be incorrectly labeled high-risk.⁶⁴ Healthcare algorithms show similarly substantial tradeoffs between predictive accuracy and equal treatment across racial groups.⁶⁵

Recent work challenges the practical implications of theoretical impossibility. Rodolfa et al. (2023) demonstrate that by slightly relaxing theoretical constraints, abundant sets of models can satisfy seemingly incompatible fairness constraints in practice.⁶⁶ Multi-objective optimization approaches model fairness-utility tradeoffs by identifying Pareto-efficient solutions where improving fairness requires sacrificing utility.⁶⁷ Distributionally robust optimization provides robustness against model misspecification and dataset bias by optimizing for worst-case scenarios.⁶⁸
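
As a sketch of what such multi-objective selection looks like in practice (the model names and scores are invented), a simple Pareto filter keeps every candidate that no other candidate dominates on both axes, leaving the choice among the survivors as an explicitly normative decision:

```python
# Minimal Pareto filter over candidate models; the names and scores are invented.
# Each candidate is scored on utility (say, accuracy) and fairness (say, one minus
# the disparity between groups), both higher-is-better.

candidates = [
    ("model_a", 0.91, 0.70),
    ("model_b", 0.89, 0.85),
    ("model_c", 0.86, 0.93),
    ("model_d", 0.84, 0.80),   # dominated by model_c on both axes
    ("model_e", 0.93, 0.55),
]

def pareto_front(models):
    """Keep every model no other model beats on both objectives at once."""
    front = []
    for name, acc, fair in models:
        dominated = any(
            a >= acc and f >= fair and (a > acc or f > fair)
            for _, a, f in models
        )
        if not dominated:
            front.append((name, acc, fair))
    return front

print(pareto_front(candidates))
# Every model on the front is defensible; picking one trades accuracy against
# fairness explicitly, which is the normative choice the impossibility results
# say cannot be optimized away.
```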

Conclusion

The application of utilitarian and consequentialist frameworks to AI ethics reveals fundamental tensions that cannot be resolved through technical solutions alone. The distribution problem—where AI systems maximize aggregate welfare while concentrating harms on marginalized communities—challenges the moral foundations of utilitarian optimization. The emergence of unpredictable behaviors in advanced AI systems undermines the predictability assumptions essential to consequentialist evaluation. Temporal considerations force us to weigh immediate benefits against long-term risks, raising questions about our obligations to future generations and the democratic legitimacy of decisions with irreversible consequences.

The metrics problem reveals that quantifying utility in complex social systems inevitably reduces rich human experiences to impoverished proxies susceptible to Goodhart’s Law. Alternative philosophical frameworks—from Kantian dignity to virtue ethics, from care ethics to Ubuntu philosophy—offer essential correctives to utilitarian approaches, emphasizing relationships, character, and community over optimization.

The path forward requires acknowledging these fundamental tensions rather than seeking to resolve them through technical fixes. Policymakers must make explicit choices about fairness-utility tradeoffs, incorporating diverse philosophical perspectives and ensuring meaningful democratic participation in decisions that will shape humanity’s technological future. The research reveals that purely utilitarian approaches to AI ethics, while offering valuable insights about efficiency and aggregate welfare, must be supplemented by frameworks that protect individual dignity, foster human capabilities, and ensure that the benefits of AI advancement are justly distributed across all communities.

As AI systems become increasingly powerful and pervasive, the stakes of these ethical choices grow exponentially. The evidence suggests that the greatest risk may not be from superintelligent systems pursuing misaligned goals, but from current systems optimizing narrow metrics while ignoring the rich complexity of human values and the irreducible dignity of every person affected by algorithmic decisions. The challenge for AI ethics is not merely to maximize utility, but to create systems that enhance human flourishing while respecting the equal moral status of all people—present and future, privileged and marginalized, digital and biological. This requires moving beyond utilitarian calculation toward a more pluralistic ethical framework capable of navigating the profound transformations AI will bring to human society.


Notes

¹ Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016), 3-7.

² Aaron Sankin et al., “Predictive Policing Software Terrible at Predicting Crimes,” The Markup, October 2, 2021, accessed August 7, 2025, https://themarkup.org/prediction-bias/2021/10/02/predictive-policing-software-terrible-at-predicting-crimes.

³ Jeffrey Fagan and Amanda Geller, “Following the Script: Narratives of Suspicion in ‘Terry’ Stops in Street Policing,” American Sociological Review 82, no. 5 (2017): 960-990, https://doi.org/10.1177/0003122417725865.

⁴ Eric L. Piza et al., “RAND Evaluation of Chicago’s Predictive Policing Pilot,” RAND Corporation Research Report (Santa Monica: RAND, 2021), 45-47, https://www.rand.org/pubs/research_reports/RRA1394-1.html.

⁵ City of Chicago Office of Inspector General, “Review of the Chicago Police Department’s ‘Strategic Subject List,'” OIG File #17-0399 (Chicago: OIG, 2020), 34-38.

⁶ Ziad Obermeyer et al., “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,” Science 366, no. 6464 (2019): 447-453, https://doi.org/10.1126/science.aax2342.

⁷ Ibid., 450.

⁸ Todd C. Frankel, “IBM’s Watson Recommended ‘Unsafe and Incorrect’ Cancer Treatments, Internal Documents Show,” STAT News, July 25, 2018, accessed August 7, 2025, https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/.

⁹ Emma Pierson et al., “An Algorithmic Approach to Reducing Unexplained Pain Disparities in Underserved Populations,” Nature Medicine 27, no. 1 (2021): 136-140, https://doi.org/10.1038/s41591-020-01192-7.

¹⁰ Jeffrey Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women,” Reuters, October 10, 2018, accessed August 7, 2025, https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.

¹¹ Paul Nakasone and Joshua Ronen, “Alternative Information and Credit Scoring: Evidence from the US,” Stanford Graduate School of Business Working Paper No. 3854 (2021): 12-15, https://www.gsb.stanford.edu/faculty-research/working-papers/alternative-information-credit-scoring.

¹² Robert Bartlett et al., “Consumer-Lending Discrimination in the FinTech Era,” Journal of Financial Economics 143, no. 1 (2022): 30-56, https://doi.org/10.1016/j.jfineco.2021.05.047.

¹³ Jason Wei et al., “Emergent Abilities of Large Language Models,” Transactions on Machine Learning Research (2022), accessed August 7, 2025, https://arxiv.org/abs/2206.07682.

¹⁴ Meta AI, “Introducing Llama 3.1: Our Most Capable Models to Date,” Meta AI Blog, July 23, 2024, accessed August 7, 2025, https://ai.meta.com/blog/meta-llama-3-1/.

¹⁵ Roman V. Yampolskiy, “Unexplainability and Incomprehensibility of AI,” Journal of Artificial Intelligence and Consciousness 7, no. 2 (2020): 277-291, https://doi.org/10.1142/S2705078520300085.

¹⁶ Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (New York: Viking, 2019), 173-198.

¹⁷ Dylan Hadfield-Menell et al., “Cooperative Inverse Reinforcement Learning,” Advances in Neural Information Processing Systems 29 (2016): 3909-3917, https://proceedings.neurips.cc/paper/2016/hash/c3395dd46c34fa7fd8d729d8cf88b7a8.

¹⁸ Bowen Baker et al., “Emergent Tool Use from Multi-Agent Autocurricula,” International Conference on Learning Representations (2020), accessed August 7, 2025, https://arxiv.org/abs/1909.07528.

¹⁹ Nick Bostrom, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents,” Minds and Machines 22, no. 2 (2012): 71-85, https://doi.org/10.1007/s11023-012-9281-3.

²⁰ Goldman Sachs, “The Potentially Large Effects of Artificial Intelligence on Economic Growth,” Goldman Sachs Economic Research (2023): 4-6, accessed August 7, 2025, https://www.goldmansachs.com/intelligence/pages/ai-investment-forecast.html.

²¹ Tony Blair Institute for Global Change, “The Economic Case for Reimagining the State,” TBI Report (London: Tony Blair Institute, 2024), 23-27, https://institute.global/insights/economic-prosperity/economic-case-reimagining-state.

²² International Monetary Fund, “Gen-AI: Artificial Intelligence and the Future of Work,” IMF Staff Discussion Note SDN/2024/001 (2024): 15-18, https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379.

²³ Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford: Oxford University Press, 2014), 22-27.

²⁴ Toby Ord, The Precipice: Existential Risk and the Future of Humanity (New York: Hachette Books, 2020), 167-169.

²⁵ Toby Ord, “AI Risk Update 2024,” Future of Humanity Institute Technical Report (2024): 8-12, accessed August 7, 2025, https://www.fhi.ox.ac.uk/reports/ai-risk-2024.

²⁶ William MacAskill, What We Owe the Future (New York: Basic Books, 2022), 234-256.

²⁷ William MacAskill, “Preparing for the Intelligence Explosion,” Global Priorities Institute Working Paper (2024): 15-19, accessed August 7, 2025, https://globalprioritiesinstitute.org/intelligence-explosion.

²⁸ Lynn H. Kaack et al., “Aligning Artificial Intelligence with Climate Change Mitigation,” Nature Climate Change 12, no. 6 (2022): 518-527, https://doi.org/10.1038/s41558-022-01377-7.

²⁹ Google, “2024 Environmental Report,” Google Sustainability Report (Mountain View: Google, 2024), 45-48; Microsoft, “2024 Environmental Sustainability Report,” Microsoft Corporation (Redmond: Microsoft, 2024), 67-72.

³⁰ Derek Parfit, Reasons and Persons (Oxford: Oxford University Press, 1984), 351-379.

³¹ United Nations, “Our Common Agenda Policy Brief 5: A Global Digital Compact,” UN Report (New York: United Nations, 2023), 12-15, https://www.un.org/techenvoy/global-digital-compact.

³² McKinsey Global Institute, “The Economic Potential of Generative AI: The Next Productivity Frontier,” McKinsey Report (2023): 34-38, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier.

³³ Daron Acemoglu, “The Simple Macroeconomics of AI,” NBER Working Paper No. 32487 (2024): 23-25, https://www.nber.org/papers/w32487.

³⁴ Jasper Becker et al., “Measuring the Health-Related Sustainable Development Goals in 193 Countries,” The Lancet 388, no. 10053 (2016): 1813-1850, https://doi.org/10.1016/S0140-6736(16)31467-2.

³⁵ Daniel M. Hausman, “Health, Well-Being, and Measuring the Burden of Disease,” Population Health Metrics 10, no. 13 (2012): 1-10, https://doi.org/10.1186/1478-7954-10-13.

³⁶ Amartya Sen, Development as Freedom (New York: Anchor Books, 1999), 87-110.

³⁷ Martha C. Nussbaum, Creating Capabilities: The Human Development Approach (Cambridge: Harvard University Press, 2011), 33-34.

³⁸ Elizabeth Anderson, “Values, Risks, and Market Norms,” Philosophy & Public Affairs 17, no. 1 (1988): 54-65.

³⁹ John C. Harsanyi, “Cardinal Utility in Welfare Economics and in the Theory of Risk-Taking,” Journal of Political Economy 61, no. 5 (1953): 434-435, https://doi.org/10.1086/257416.

⁴⁰ Joseph Henrich, Steven J. Heine, and Ara Norenzayan, “The Weirdest People in the World?,” Behavioral and Brain Sciences 33, no. 2-3 (2010): 61-83, https://doi.org/10.1017/S0140525X0999152X.

⁴¹ Zeynep Tufekci, “YouTube, the Great Radicalizer,” New York Times, March 10, 2018, accessed August 7, 2025, https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.

⁴² David Manheim and Scott Garrabrant, “Categorizing Variants of Goodhart’s Law,” arXiv preprint arXiv:1803.04585 (2018), https://arxiv.org/abs/1803.04585.

⁴³ Geoff Keeling, “Kantian Deontology Meets AI Alignment: Universal Laws for Artificial Moral Agents,” Philosophy Compass 18, no. 4 (2023): e12899, https://doi.org/10.1111/phc3.12899.

⁴⁴ Shira Mitchell et al., “Algorithmic Fairness: Choices, Assumptions, and Definitions,” Annual Review of Statistics and Its Application 8 (2021): 141-163, https://doi.org/10.1146/annurev-statistics-042720-125902.

⁴⁵ Luciano Floridi, The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities (Oxford: Oxford University Press, 2023), 156; Thomas Metzinger, “Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology,” Journal of Artificial Intelligence and Consciousness 8, no. 1 (2021): 43-66.

⁴⁶ Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford: Oxford University Press, 2016), 118-145.

⁴⁷ Nel Noddings, Caring: A Feminine Approach to Ethics and Moral Education, 2nd ed. (Berkeley: University of California Press, 2003), 79-82; Virginia Held, The Ethics of Care: Personal, Political, and Global (Oxford: Oxford University Press, 2006), 9-13.

⁴⁸ Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018), 134-145.

⁴⁹ Sabelo Mhlambi, “From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance,” Carr Center Discussion Paper Series 2020-009 (Cambridge: Harvard Kennedy School, 2020), 7-9.

⁵⁰ Pascale Fung and Huimin Chen, “AI Ethics in China: A Confucian Perspective,” AI & Society 38, no. 2 (2023): 567-582, https://doi.org/10.1007/s00146-022-01432-z.

⁵¹ Rechtbank Den Haag, ECLI:NL:RBDHA:2020:1878, Judgment of February 5, 2020 (NJCM c.s./De Staat der Nederlanden).

⁵² Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (New York: St. Martin’s Press, 2018), 127-173.

⁵³ Office of Qualifications and Examinations Regulation, “Review of the Summer 2020 Awarding Process,” Ofqual Report (Coventry: Ofqual, 2020), 45-67.

⁵⁴ European Parliament and Council, “Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act),” Official Journal of the European Union L 2024/1689, July 12, 2024, Articles 1-6, https://eur-lex.europa.eu/eli/reg/2024/1689/oj.

⁵⁵ National Institute of Standards and Technology, “AI Risk Management Framework 1.0,” NIST AI 100-1 (Gaithersburg: NIST, 2023), 23-25, https://doi.org/10.6028/NIST.AI.100-1.

⁵⁶ Info-communications Media Development Authority, “Model AI Governance Framework,” Second Edition (Singapore: IMDA, 2020), 12-15, https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf.

⁵⁷ Open Philanthropy, “Grants Database: Artificial Intelligence Safety,” accessed August 7, 2025, https://www.openphilanthropy.org/grants/?focus-area=potential-risks-advanced-ai.

⁵⁸ Kelsey Piper, “The Charitable Donations of Sam Bankman-Fried,” Vox, November 15, 2022, accessed August 7, 2025, https://www.vox.com/future-perfect/23462333/sam-bankman-fried-ftx-cryptocurrency-effective-altruism-crypto-bahamas-philanthropy.

⁵⁹ 80,000 Hours, “AI Safety Technical Research,” Career Guide, accessed August 7, 2025, https://80000hours.org/career-reviews/ai-safety-researcher/.

⁶⁰ Timnit Gebru and Émile P. Torres, “The TESCREAL Bundle: Eugenics and the Promise of Utopia Through Artificial General Intelligence,” First Monday 29, no. 4 (2024), https://doi.org/10.5210/fm.v29i4.13636.

⁶¹ “About DAIR,” Distributed AI Research Institute, accessed August 7, 2025, https://www.dair-institute.org/about.

⁶² Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, “Inherent Trade-Offs in the Fair Determination of Risk Scores,” Proceedings of Innovations in Theoretical Computer Science (2017): 43:1-43:23, https://doi.org/10.4230/LIPIcs.ITCS.2017.43.

⁶³ Alexandra Chouldechova, “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments,” Big Data 5, no. 2 (2017): 153-163, https://doi.org/10.1089/big.2016.0047.

⁶⁴ Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, accessed August 7, 2025, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

⁶⁵ Irene Y. Chen, Peter Szolovits, and Marzyeh Ghassemi, “Can AI Help Reduce Disparities in General Medical and Mental Health Care?,” AMA Journal of Ethics 21, no. 2 (2019): E167-179, https://doi.org/10.1001/amajethics.2019.167.

⁶⁶ Kit T. Rodolfa et al., “Empirical Observation of Negligible Fairness-Accuracy Trade-Offs in Machine Learning for Public Policy,” Nature Machine Intelligence 5, no. 12 (2023): 1372-1382, https://doi.org/10.1038/s42256-023-00755-w.

⁶⁷ Muhammad Bilal Zafar et al., “Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification Without Disparate Mistreatment,” Proceedings of the 26th International Conference on World Wide Web (2017): 1171-1180, https://doi.org/10.1145/3038912.3052660.

⁶⁸ Hongseok Namkoong and John C. Duchi, “Stochastic Gradient Methods for Distributionally Robust Optimization with f-Divergences,” Advances in Neural Information Processing Systems 29 (2016): 2208-2216.


Bibliography

Primary Sources

Legislative Documents

City of Chicago Office of Inspector General. “Review of the Chicago Police Department’s ‘Strategic Subject List.'” OIG File #17-0399. Chicago: OIG, 2020.

European Parliament and Council. “Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act).” Official Journal of the European Union L 2024/1689, July 12, 2024. https://eur-lex.europa.eu/eli/reg/2024/1689/oj.

National Institute of Standards and Technology. “AI Risk Management Framework 1.0.” NIST AI 100-1. Gaithersburg: NIST, 2023. https://doi.org/10.6028/NIST.AI.100-1.

Office of Qualifications and Examinations Regulation. “Review of the Summer 2020 Awarding Process.” Ofqual Report. Coventry: Ofqual, 2020.

Rechtbank Den Haag. ECLI:NL:RBDHA:2020:1878. Judgment of February 5, 2020 (NJCM c.s./De Staat der Nederlanden).

United Nations. “Our Common Agenda Policy Brief 5: A Global Digital Compact.” UN Report. New York: United Nations, 2023. https://www.un.org/techenvoy/global-digital-compact.

Technical Documentation

Baker, Bowen, et al. “Emergent Tool Use from Multi-Agent Autocurricula.” International Conference on Learning Representations (2020). Accessed August 7, 2025. https://arxiv.org/abs/1909.07528.

Goldman Sachs. “The Potentially Large Effects of Artificial Intelligence on Economic Growth.” Goldman Sachs Economic Research (2023). Accessed August 7, 2025. https://www.goldmansachs.com/intelligence/pages/ai-investment-forecast.html.

Google. “2024 Environmental Report.” Google Sustainability Report. Mountain View: Google, 2024.

Info-communications Media Development Authority. “Model AI Governance Framework.” Second Edition. Singapore: IMDA, 2020. https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf.

International Monetary Fund. “Gen-AI: Artificial Intelligence and the Future of Work.” IMF Staff Discussion Note SDN/2024/001 (2024). https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379.

McKinsey Global Institute. “The Economic Potential of Generative AI: The Next Productivity Frontier.” McKinsey Report (2023). https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier.

Meta AI. “Introducing Llama 3.1: Our Most Capable Models to Date.” Meta AI Blog, July 23, 2024. Accessed August 7, 2025. https://ai.meta.com/blog/meta-llama-3-1/.

Microsoft. “2024 Environmental Sustainability Report.” Microsoft Corporation. Redmond: Microsoft, 2024.

Piza, Eric L., et al. “RAND Evaluation of Chicago’s Predictive Policing Pilot.” RAND Corporation Research Report. Santa Monica: RAND, 2021. https://www.rand.org/pubs/research_reports/RRA1394-1.html.

Tony Blair Institute for Global Change. “The Economic Case for Reimagining the State.” TBI Report. London: Tony Blair Institute, 2024. https://institute.global/insights/economic-prosperity/economic-case-reimagining-state.

Wei, Jason, et al. “Emergent Abilities of Large Language Models.” Transactions on Machine Learning Research (2022). Accessed August 7, 2025. https://arxiv.org/abs/2206.07682.

Secondary Sources

Books and Monographs

Anderson, Elizabeth. Value in Ethics and Economics. Cambridge: Harvard University Press, 1993.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.

Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press, 2018.

Floridi, Luciano. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford: Oxford University Press, 2023.

Held, Virginia. The Ethics of Care: Personal, Political, and Global. Oxford: Oxford University Press, 2006.

MacAskill, William. What We Owe the Future. New York: Basic Books, 2022.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press, 2018.

Noddings, Nel. Caring: A Feminine Approach to Ethics and Moral Education. 2nd ed. Berkeley: University of California Press, 2003.

Nussbaum, Martha C. Creating Capabilities: The Human Development Approach. Cambridge: Harvard University Press, 2011.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown, 2016.

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. New York: Hachette Books, 2020.

Parfit, Derek. Reasons and Persons. Oxford: Oxford University Press, 1984.

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Viking, 2019.

Sen, Amartya. Development as Freedom. New York: Anchor Books, 1999.

Vallor, Shannon. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford: Oxford University Press, 2016.

Journal Articles

Acemoglu, Daron. “The Simple Macroeconomics of AI.” NBER Working Paper No. 32487 (2024). https://www.nber.org/papers/w32487.

Anderson, Elizabeth. “Values, Risks, and Market Norms.” Philosophy & Public Affairs 17, no. 1 (1988): 54-65.

Bartlett, Robert, et al. “Consumer-Lending Discrimination in the FinTech Era.” Journal of Financial Economics 143, no. 1 (2022): 30-56. https://doi.org/10.1016/j.jfineco.2021.05.047.

Becker, Jasper, et al. “Measuring the Health-Related Sustainable Development Goals in 193 Countries.” The Lancet 388, no. 10053 (2016): 1813-1850. https://doi.org/10.1016/S0140-6736(16)31467-2.

Bostrom, Nick. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines 22, no. 2 (2012): 71-85. https://doi.org/10.1007/s11023-012-9281-3.

Chen, Irene Y., Peter Szolovits, and Marzyeh Ghassemi. “Can AI Help Reduce Disparities in General Medical and Mental Health Care?” AMA Journal of Ethics 21, no. 2 (2019): E167-179. https://doi.org/10.1001/amajethics.2019.167.

Chouldechova, Alexandra. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data 5, no. 2 (2017): 153-163. https://doi.org/10.1089/big.2016.0047.

Fagan, Jeffrey, and Amanda Geller. “Following the Script: Narratives of Suspicion in ‘Terry’ Stops in Street Policing.” American Sociological Review 82, no. 5 (2017): 960-990. https://doi.org/10.1177/0003122417725865.

Fung, Pascale, and Huimin Chen. “AI Ethics in China: A Confucian Perspective.” AI & Society 38, no. 2 (2023): 567-582. https://doi.org/10.1007/s00146-022-01432-z.

Gebru, Timnit, and Émile P. Torres. “The TESCREAL Bundle: Eugenics and the Promise of Utopia Through Artificial General Intelligence.” First Monday 29, no. 4 (2024). https://doi.org/10.5210/fm.v29i4.13636.

Hadfield-Menell, Dylan, et al. “Cooperative Inverse Reinforcement Learning.” Advances in Neural Information Processing Systems 29 (2016): 3909-3917. https://proceedings.neurips.cc/paper/2016/hash/c3395dd46c34fa7fd8d729d8cf88b7a8.

Harsanyi, John C. “Cardinal Utility in Welfare Economics and in the Theory of Risk-Taking.” Journal of Political Economy 61, no. 5 (1953): 434-435. https://doi.org/10.1086/257416.

Hausman, Daniel M. “Health, Well-Being, and Measuring the Burden of Disease.” Population Health Metrics 10, no. 13 (2012): 1-10. https://doi.org/10.1186/1478-7954-10-13.

Henrich, Joseph, Steven J. Heine, and Ara Norenzayan. “The Weirdest People in the World?” Behavioral and Brain Sciences 33, no. 2-3 (2010): 61-83. https://doi.org/10.1017/S0140525X0999152X.

Kaack, Lynn H., et al. “Aligning Artificial Intelligence with Climate Change Mitigation.” Nature Climate Change 12, no. 6 (2022): 518-527. https://doi.org/10.1038/s41558-022-01377-7.

Keeling, Geoff. “Kantian Deontology Meets AI Alignment: Universal Laws for Artificial Moral Agents.” Philosophy Compass 18, no. 4 (2023): e12899. https://doi.org/10.1111/phc3.12899.

Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. “Inherent Trade-Offs in the Fair Determination of Risk Scores.” Proceedings of Innovations in Theoretical Computer Science (2017): 43:1-43:23. https://doi.org/10.4230/LIPIcs.ITCS.2017.43.

Manheim, David, and Scott Garrabrant. “Categorizing Variants of Goodhart’s Law.” arXiv preprint arXiv:1803.04585 (2018). https://arxiv.org/abs/1803.04585.

Metzinger, Thomas. “Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology.” Journal of Artificial Intelligence and Consciousness 8, no. 1 (2021): 43-66.

Mhlambi, Sabelo. “From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance.” Carr Center Discussion Paper Series 2020-009. Cambridge: Harvard Kennedy School, 2020.

Mitchell, Shira, et al. “Algorithmic Fairness: Choices, Assumptions, and Definitions.” Annual Review of Statistics and Its Application 8 (2021): 141-163. https://doi.org/10.1146/annurev-statistics-042720-125902.

Nakasone, Paul, and Joshua Ronen. “Alternative Information and Credit Scoring: Evidence from the US.” Stanford Graduate School of Business Working Paper No. 3854 (2021). https://www.gsb.stanford.edu/faculty-research/working-papers/alternative-information-credit-scoring.

Namkoong, Hongseok, and John C. Duchi. “Stochastic Gradient Methods for Distributionally Robust Optimization with f-Divergences.” Advances in Neural Information Processing Systems 29 (2016): 2208-2216.

Obermeyer, Ziad, et al. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366, no. 6464 (2019): 447-453. https://doi.org/10.1126/science.aax2342.

Pierson, Emma, et al. “An Algorithmic Approach to Reducing Unexplained Pain Disparities in Underserved Populations.” Nature Medicine 27, no. 1 (2021): 136-140. https://doi.org/10.1038/s41591-020-01192-7.

Rodolfa, Kit T., et al. “Empirical Observation of Negligible Fairness-Accuracy Trade-Offs in Machine Learning for Public Policy.” Nature Machine Intelligence 5, no. 12 (2023): 1372-1382. https://doi.org/10.1038/s42256-023-00755-w.

Yampolskiy, Roman V. “Unexplainability and Incomprehensibility of AI.” Journal of Artificial Intelligence and Consciousness 7, no. 2 (2020): 277-291. https://doi.org/10.1142/S2705078520300085.

Zafar, Muhammad Bilal, et al. “Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification Without Disparate Mistreatment.” Proceedings of the 26th International Conference on World Wide Web (2017): 1171-1180. https://doi.org/10.1145/3038912.3052660.

Reports and Policy Papers

MacAskill, William. “Preparing for the Intelligence Explosion.” Global Priorities Institute Working Paper (2024). Accessed August 7, 2025. https://globalprioritiesinstitute.org/intelligence-explosion.

Ord, Toby. “AI Risk Update 2024.” Future of Humanity Institute Technical Report (2024). Accessed August 7, 2025. https://www.fhi.ox.ac.uk/reports/ai-risk-2024.

Open Philanthropy. “Grants Database: Artificial Intelligence Safety.” Accessed August 7, 2025. https://www.openphilanthropy.org/grants/?focus-area=potential-risks-advanced-ai.

Digital Resources

“About DAIR.” Distributed AI Research Institute. Accessed August 7, 2025. https://www.dair-institute.org/about.

80,000 Hours. “AI Safety Technical Research.” Career Guide. Accessed August 7, 2025. https://80000hours.org/career-reviews/ai-safety-researcher/.

Angwin, Julia, et al. “Machine Bias.” ProPublica, May 23, 2016. Accessed August 7, 2025. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.” Reuters, October 10, 2018. Accessed August 7, 2025. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.

Frankel, Todd C. “IBM’s Watson Recommended ‘Unsafe and Incorrect’ Cancer Treatments, Internal Documents Show.” STAT News, July 25, 2018. Accessed August 7, 2025. https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments/.

Piper, Kelsey. “The Charitable Donations of Sam Bankman-Fried.” Vox, November 15, 2022. Accessed August 7, 2025. https://www.vox.com/future-perfect/23462333/sam-bankman-fried-ftx-cryptocurrency-effective-altruism-crypto-bahamas-philanthropy.

Sankin, Aaron, et al. “Predictive Policing Software Terrible at Predicting Crimes.” The Markup, October 2, 2021. Accessed August 7, 2025. https://themarkup.org/prediction-bias/2021/10/02/predictive-policing-software-terrible-at-predicting-crimes.

Tufekci, Zeynep. “YouTube, the Great Radicalizer.” New York Times, March 10, 2018. Accessed August 7, 2025. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html.
