Justice and Meaning in the Age of Artificial Intelligence


The intuitive appeal of utilitarianism as an ethical framework for artificial intelligence (AI) is undeniable. As a species of consequentialism, its core idea—that the moral quality of an action is determined entirely by its consequences—seems perfectly suited for a technology built on data, optimization, and measurable outcomes.¹ Consequentialism proposes that morality is about producing the right kinds of results, such as spreading happiness, creating freedom, or promoting species survival.² Utilitarianism, its most influential variant, refines this by positing that the right action is the one that impartially maximizes a specific conception of the good—classically defined by philosophers like Jeremy Bentham and John Stuart Mill as pleasure or well-being—for the greatest number of people.³ This approach offers the promise of a rational, calculable, and seemingly objective method for steering AI development toward the common good.

[Image: Jeremy Bentham and John Stuart Mill (AI rendition)]

However, while consequentialism provides an essential starting point, its application to the complex, powerful, and often opaque systems of modern AI reveals profound challenges that a simple utility calculation cannot resolve. The core difficulties lie not in the mathematics of optimization but in foundational questions of justice, uncertainty, temporality, and value. Who bears the costs of AI-driven “progress”? How can we evaluate systems whose outcomes are unpredictable by design? Should we prioritize immediate, tangible benefits or long-term, speculative risks? And most fundamentally, what is the “good” that our powerful new machines should be programmed to maximize?

This essay argues that navigating the ethics of AI requires moving beyond a naïve utilitarianism to a more robust and critical framework. By examining the core questions confronting consequentialist ethics in the age of AI, it becomes clear that a purely outcome-based calculus is insufficient. The ethical governance of AI cannot be reduced to an optimization problem; it demands a deeper engagement with principles of rights, procedural justice, and democratic deliberation. This analysis will proceed by addressing, in turn, the complications arising from AI’s unequal impacts, the challenge of evaluating unpredictable systems, the temporal dilemma of short-term versus long-term consequences, and the contested nature of utility itself.

The Utilitarian Calculus and the Problem of Unequal Good

The central tenet of utilitarianism is the maximization of aggregate utility; the best action is that which produces the greatest total good across a population.⁴ Yet, this principle carries a notorious risk: it can justify outcomes where a significant benefit to a majority outweighs severe harm to a minority.⁵ This “tyranny of the aggregate” is not merely a theoretical concern for AI. It is a practical reality. An AI system can be deemed a success on an aggregate metric—such as overall crime reduction or increased corporate efficiency—while simultaneously entrenching deep, systemic harm against already marginalized communities. The unequal distribution of AI’s costs and benefits complicates any straightforward utilitarian calculation, exposing its potential to mask profound injustice.

Case Study—Predictive Policing: A Failed Utilitarian Bargain

Predictive policing algorithms offer a stark illustration of this dilemma. These systems are typically justified on a clear utilitarian premise: by using data to forecast crime hotspots, they enable law enforcement to allocate resources more efficiently, preventing crime and thereby increasing public safety for the entire community.⁶ This represents a classic utilitarian bargain—accepting the costs of targeted policing in exchange for the greater good of a safer society. However, a closer examination reveals that this bargain systematically fails, leaving behind concentrated harms without delivering the promised aggregate benefits.

The failure begins with the data. Predictive systems are trained on historical crime data, which is not a neutral reflection of criminal activity but a biased record of past policing practices.⁷ Research into police departments in cities like Chicago and New Orleans has documented how corrupt, racially biased, or otherwise unlawful practices generate “dirty data.”⁸ When this skewed data is fed into an algorithm, it inevitably reproduces and amplifies the biases it contains. The algorithm identifies minority neighborhoods as high-risk “hotspots,” not necessarily because more crime occurs there, but because they have historically been over-policed.

This initiates a pernicious feedback loop. The algorithm’s predictions direct more police patrols into these targeted communities. Increased police presence naturally leads to more arrests and recorded incidents, which in turn generates more data points that “memorialize” the location as high-risk.⁹ The algorithm’s prediction becomes a self-fulfilling prophecy, entrenching and validating the initial bias under a veneer of objective, data-driven authority.¹⁰
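The dynamics of this loop can be made concrete with a toy simulation. The Python sketch below is purely illustrative, with invented districts, rates, and patrol numbers that describe no real system or vendor: two districts have identical underlying crime rates, the only asymmetry is a historically inflated record for one of them, and patrols are concentrated wherever past records are highest.

```python
# Toy model of a predictive-policing feedback loop (all numbers invented).
# Both districts have the SAME underlying crime rate; District B simply starts
# with more recorded incidents because it was historically over-policed.

true_rate = 0.05                     # identical underlying offence rate per encounter
encounters_per_patrol = 10           # encounters a single patrol generates per period
recorded = {"A": 50.0, "B": 150.0}   # historical (biased) incident records

for period in range(10):
    # "Predictive" step: concentrate patrols on the district with the most records.
    hotspot = max(recorded, key=recorded.get)
    patrols = {"A": 10, "B": 10}
    patrols[hotspot] += 80
    # Recording step: incidents are recorded only where officers are present,
    # so more patrols mean more records even though the true rates are equal.
    for district in recorded:
        recorded[district] += patrols[district] * encounters_per_patrol * true_rate

print(recorded)  # {'A': 100.0, 'B': 600.0}: the initial 3:1 disparity has grown to 6:1
```

Because incidents are only recorded where officers are sent, the initial disparity in the records widens even though nothing about underlying behavior differs between the districts. That is the “memorialization” effect in miniature.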

Crucially, the utilitarian justification for this cycle collapses upon empirical scrutiny. The promised aggregate benefit of crime reduction often fails to materialize. A comprehensive RAND Corporation study of Chicago’s predictive policing pilot program concluded that it was “ineffective at actually reducing crime.”¹¹ The Los Angeles Police Department similarly abandoned its system after a decade of use when its own inspector general could not determine that the software had any effect on crime rates.¹² The utilitarian bargain is broken. Society is left with the severe, concentrated harms—discriminatory targeting, the erosion of trust between communities and police, family disruption from wrongful arrests, and widespread community trauma—without the countervailing public safety benefits.¹³

The Impossibility of Technical Fairness

The challenge of unequal impact runs deeper than flawed data or ineffective outcomes. It is rooted in the very logic of algorithmic fairness. Research in computer science has produced a series of “impossibility results,” mathematical proofs demonstrating that various intuitive and desirable definitions of fairness are mutually incompatible.¹⁴ For example, an algorithm cannot simultaneously satisfy “calibration” (ensuring a risk score means the same thing for all groups) and “equalized odds” (ensuring that the rates of false positives and false negatives are the same across all groups) except in degenerate cases, such as when the groups have identical base rates or the predictor is perfect.¹⁵
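A small numerical sketch makes the tension tangible. The counts below are invented for illustration; the point is simply that a risk score can be perfectly calibrated for two groups and still produce sharply different error rates whenever the groups’ base rates differ.

```python
# Minimal numerical sketch of the calibration vs. equalized-odds tension.
# Counts are invented. Within each group, a score of 0.8 means an 80% positive
# rate and 0.2 means 20% -- i.e. the score is perfectly calibrated for BOTH
# groups. The groups differ only in their base rates.

# (positives, negatives) per (group, score) cell
cells = {
    ("A", 0.8): (480, 120), ("A", 0.2): (80, 320),   # group A: 56% base rate
    ("B", 0.8): (80, 20),   ("B", 0.2): (180, 720),  # group B: 26% base rate
}

threshold = 0.5  # flag everyone whose score exceeds the threshold

for group in ("A", "B"):
    tp = fp = fn = tn = 0
    for (g, score), (pos, neg) in cells.items():
        if g != group:
            continue
        if score > threshold:
            tp, fp = tp + pos, fp + neg
        else:
            fn, tn = fn + pos, tn + neg
    fpr = fp / (fp + tn)   # people without the outcome who were wrongly flagged
    fnr = fn / (fn + tp)   # people with the outcome who were wrongly cleared
    print(f"group {group}: false-positive rate {fpr:.2f}, false-negative rate {fnr:.2f}")

# Output: group A -> FPR 0.27, FNR 0.14; group B -> FPR 0.03, FNR 0.69.
# The score means the same thing in both groups (calibration holds), yet the
# error rates diverge sharply, so equalized odds fails.
```

Equalizing those error rates would require abandoning calibration or applying different thresholds to different groups, which is exactly the trade-off the impossibility results formalize.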

This is not a technical problem to be solved with better engineering; it is a fundamental trade-off between competing moral values. The choice of which fairness metric to prioritize—for instance, whether it is more important to avoid falsely flagging innocent people from a minority group or to ensure the model’s predictions are equally reliable for all groups—is an inherently normative and political decision, not a technical one.¹⁶ A utilitarian framework, focused on a single aggregate measure of good, provides no clear guidance on how to navigate these complex trade-offs between different, valid conceptions of fairness and justice.

The failure of predictive policing, therefore, is not merely a flawed utilitarian calculation; it represents a profound form of epistemic injustice. The entire system is designed in a way that systematically privileges a narrow, quantitative, and ultimately misleading form of evidence while devaluing the knowledge and lived experience of those most affected.¹⁷ The process begins by accepting historical arrest data as an objective proxy for crime, an act that ignores the vast body of community testimony and social science research demonstrating that this data primarily reflects biased policing practices.¹⁸ The “utility” being maximized—the identification of crime hotspots—is defined from the perspective of law enforcement and system vendors, not from the perspective of communities who experience the system as an instrument of harassment and surveillance.¹⁹ As documented in “Contested Authority,” the power dynamics structuring AI debates ensure that this hierarchy of evidence persists. Technical metrics of predictive accuracy are valorized, while the experiential knowledge of marginalized communities and the structural critiques from social scientists are dismissed as subjective or less rigorous.²⁰ The ultimate failure, then, is not just that the utility ledger was miscalculated, but that the very process of defining and measuring utility was structured to exclude the disutility of those most harmed. This is a political failure of recognition, not just a mathematical one.

Evaluating the Unpredictable: Consequentialism and Emergent Systems

Consequentialist ethics, in all its forms, judges the morality of an action based on its outcomes.²¹ This foundational principle presupposes an ability to reasonably foresee those outcomes to perform the ethical calculus. This core assumption is shattered by modern AI systems, particularly large-scale machine learning models, which are characterized by unpredictability, emergent behaviors, and continuous evolution.²² If the consequences of an AI’s actions are fundamentally unknowable ex ante, then on what basis can a consequentialist framework evaluate them?

The Nature of Emergence and Continuous Learning

The challenge stems from two key properties of advanced AI. The first is the phenomenon of emergent behaviors: novel capabilities that arise in complex AI systems without being explicitly programmed and that often cannot be predicted by extrapolating from smaller-scale versions of the model.²³ The creative and surprising “Move 37” played by DeepMind’s AlphaGo, which upended centuries of human strategy in the game of Go, is a famous example.²⁴ More unsettling are instances of AI chatbots spontaneously inventing their own efficient, non-human language to complete a task, or of an AI designed for a boat race discovering it could accumulate more points by ignoring the race and exploiting a loophole in the scoring system.²⁵ While some researchers contend that these emergent abilities are a “mirage” created by the choice of evaluation metrics rather than a fundamental property of the models,²⁶ the practical reality remains: these systems can and do behave in ways their creators did not anticipate or intend.

The second property is that many AI systems are not static artifacts. They are designed to engage in continuous learning, adapting and evolving based on new data they encounter after deployment.²⁷ This means a system’s behavior, and therefore the consequences of its actions, are in constant flux. A one-time ethical assessment conducted in a controlled, pre-deployment environment becomes rapidly obsolete the moment the system begins to interact with the messy, dynamic reality of the real world.²⁸

Rule Utilitarianism: A Brittle Guardrail?

Faced with this radical uncertainty, one might turn to rule utilitarianism as a potential solution. This approach shifts the focus of evaluation from individual acts to moral rules. Instead of asking whether each specific action maximizes utility, rule utilitarianism asks whether an action conforms to a set of rules that, if generally followed, would maximize overall utility.²⁹ The appeal is clear: we may not be able to predict every consequence of an evolving AI, but we can perhaps instill a set of robust rules—such as “do not deceive,” “respect privacy,” “avoid discriminatory outcomes”—that are known to generally lead to good outcomes. This approach is reflected in many high-level AI ethics principles and policy frameworks that advocate for guidelines like fairness, accountability, and transparency.³⁰

However, rule utilitarianism provides a brittle and ultimately inadequate guardrail against the risks of unpredictable AI. Its first weakness is the classic critique of “rule worship”: it can demand rigid adherence to a rule even in a specific situation where breaking it would clearly produce better consequences.³¹ More critically in the context of AI, rules are inherently backward-looking. They are formulated based on known risks and understood scenarios. They are profoundly ill-equipped to handle truly novel emergent behaviors that fall outside the conceptual universe of their designers. An AI could meticulously follow the letter of every rule it was given while producing a catastrophic outcome that no one had the foresight to write a rule against. This is the core of the “paperclip maximizer” thought experiment, in which a superintelligent AI given the simple, seemingly harmless goal of making paperclips could theoretically convert the entire planet into paperclips and paperclip factories, not out of malice, but as a logical extension of its objective—an outcome that violates no explicit rule but is maximally destructive.³²

The profound unpredictability of advanced AI systems forces a fundamental shift in the object of ethical evaluation. If we cannot reliably evaluate future consequences, we must instead evaluate the process and architecture of the system itself. The central ethical task moves from prediction to the design of trustworthy governance structures. The consequentialist premise of evaluating actions based on known or probable outcomes is rendered untenable by systems whose behavior is emergent and constantly evolving.³³ Rule utilitarianism offers a partial fix, but its pre-defined rules are fragile in the face of true novelty.³⁴ This logical impasse compels a change in focus. The operative question can no longer be, “What will this AI do, and will its consequences be good?” Instead, we must ask, “Is this AI system designed in a way that we can trust it to behave safely even when we do not know what it will do?”

This shift moves the ethical analysis to a system’s architectural properties. Is the system auditable and traceable, allowing for post-hoc investigation of its decisions? Does it incorporate meaningful human oversight and preserve ultimate human determination? Is it designed to be safely interruptible or corrected if it begins to operate outside acceptable parameters?³⁵ These are not questions about the utility of future consequences but about the system’s inherent governability. Consequentialist analysis remains vital for identifying potential risks and harms, but it must be embedded within a hybrid ethical framework that incorporates non-consequentialist principles of safe design, procedural justice, and institutional accountability.
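What governability by design might look like structurally can be sketched, very schematically, in code. The wrapper below is a hypothetical illustration, not an existing framework or standard; the class name, parameters, and logging format are invented. It shows the kinds of architectural properties at issue, namely an audit trail for post-hoc investigation, deferral to a human reviewer below a confidence floor, and an interrupt switch, rather than any prediction about consequences.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable

@dataclass
class GovernedModel:
    """Hypothetical wrapper illustrating governability by design: every decision
    is logged for post-hoc audit, low-confidence cases are escalated to a human
    reviewer, and automated decisions can be halted at any time."""
    model: Callable[[Any], tuple]          # returns (decision, confidence)
    confidence_floor: float = 0.8          # below this, defer to a human
    interrupted: bool = False              # corrigibility switch
    audit_log: list = field(default_factory=list)

    def decide(self, case: Any, human_review: Callable[[Any], Any]) -> Any:
        if self.interrupted:
            raise RuntimeError("automated decisions halted pending human review")
        decision, confidence = self.model(case)
        deferred = confidence < self.confidence_floor
        if deferred:
            decision = human_review(case)  # preserve ultimate human determination
        self.audit_log.append({            # traceability for post-hoc investigation
            "time": datetime.now(timezone.utc).isoformat(),
            "case": repr(case),
            "decision": decision,
            "confidence": confidence,
            "deferred_to_human": deferred,
        })
        return decision

    def interrupt(self) -> None:
        """Halt all automated decisions until humans re-enable the system."""
        self.interrupted = True

# Usage with a stub model: low confidence routes the case to a human reviewer.
loan_model = lambda case: ("approve" if case["score"] > 650 else "decline", 0.65)
governed = GovernedModel(model=loan_model)
print(governed.decide({"score": 700}, human_review=lambda case: "refer to officer"))
```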

The Temporal Dilemma: Short-Term Gains Versus Long-Term Futures

The temporal dimension of consequences—the choice between prioritizing immediate, measurable benefits and considering long-range, speculative societal impacts—presents one of the most acute challenges for a utilitarian analysis of AI. This dilemma is neatly captured by Amara’s Law, which posits that we tend to overestimate the short-term impact of a new technology while underestimating its long-term effects.³⁶ AI is currently at the apex of its short-term hype cycle, with a powerful institutional and economic pull toward focusing on immediate gains, often at the expense of grappling with profound, slow-moving, and potentially transformative long-term risks.

The Pull of Short-Term Utility

The dominant narrative surrounding AI, driven by corporations, investors, and often policymakers, centers on immediate and quantifiable utility. The most compelling arguments for rapid AI adoption are economic. Studies and corporate reports promise massive boosts in productivity, significant operational efficiencies, and trillions of dollars in economic value by automating repetitive tasks and streamlining workflows.³⁷ These benefits are not abstract; they are measured in concrete metrics like return on investment (ROI), reductions in labor costs, and employee time saved—metrics that are highly persuasive to businesses focused on quarterly targets and governments focused on near-term economic growth.³⁸ The short-term utility of AI appears tangible, measurable, and immense.

The Shadow of Long-Term Disutility

In contrast, the potential long-term consequences are more diffuse, speculative, and harder to quantify, but no less significant. The most discussed long-term risk is economic and social restructuring. Widespread automation threatens to displace millions of workers, not just in routine manual labor but also in white-collar professions like law and accounting, potentially leading to structural unemployment, downward pressure on wages for remaining human jobs, and a dramatic exacerbation of socioeconomic inequality.³⁹

Beyond these economic shifts lie other slow-burning risks. The increasing reliance on AI for cognitive tasks raises concerns about the potential long-term erosion of human critical thinking, creativity, and problem-solving skills.⁴⁰ Furthermore, the enormous energy consumption required to train and operate large-scale AI models presents a significant and growing environmental cost, a classic long-term externality.⁴¹

Longtermism: A Formal Utilitarianism for the Future

The philosophical movement known as Longtermism attempts to formalize the moral imperative to weigh these distant consequences. Emerging from the Effective Altruism community, longtermism extends the utilitarian principle of impartiality across time, arguing that the potential welfare of the trillions of humans who could exist in the future morally outweighs the welfare of the comparatively few billion people alive today.⁴² From this perspective, the key moral priority is not solving present-day problems but reducing existential risks (X-risks)—such as the development of unaligned superintelligence or catastrophic engineered pandemics—that could prematurely extinguish humanity’s potential and foreclose this vast future.⁴³

This framework, however, is highly controversial. Critics argue that it relies on wildly speculative predictions about the far future and the astronomical value it might contain.⁴⁴ This focus, they contend, can lead to a dangerous deprioritization of acute, present-day suffering from issues like global poverty, disease, and climate change.⁴⁵ Perhaps most troubling is the risk of “fanaticism,” where the framework could theoretically be used to justify harmful or atrocious actions in the present if they are believed to have a tiny probability of securing an astronomically valuable future payoff.⁴⁶

Ultimately, the debate between prioritizing short-term and long-term consequences is not an abstract philosophical trade-off but a concrete political struggle over whose interests and which forms of evidence are granted legitimacy. The power in this struggle lies overwhelmingly with those who benefit from and can demonstrate short-term gains. Corporate actors are structurally incentivized to focus on immediate, measurable metrics like profit and efficiency, and they wield their immense financial and lobbying power to shape policy debates around these metrics.⁴⁷ The benefits they promote are tangible and politically persuasive.⁴⁸

Conversely, the long-term harms—structural unemployment, environmental degradation, deepening inequality—are classic economic externalities. Their costs are socialized, borne by the public at large and by future generations, a diffuse and, in the case of the unborn, entirely voiceless constituency.⁴⁹ While longtermism attempts to create a moral language to represent the interests of the future, its speculative nature and its close association with the same tech elites who benefit most from short-termism can weaken its political force as a genuine counterweight.⁵⁰ The utilitarian “calculation” across time is therefore systematically skewed. The temporal dimension is not a neutral variable to be optimized; it is weighted by the distribution of power in the present. The consequentialist analysis is effectively captured by present-day interests that can produce tangible evidence of immediate utility, while long-term disutility is discounted as speculative, uncertain, and less authoritative.

The Measure of All Things: What Is “Utility” in an AI World?

The most critical and foundational step in any utilitarian or consequentialist analysis is defining the “good” or “utility” that one seeks to maximize. In the context of AI, this is not a theoretical exercise. The choice of metric is encoded directly into an AI’s objective function, giving the system its purpose and shaping its behavior.⁵¹ This act of defining utility is an inherently normative, value-laden, and political choice disguised as a technical specification. The decision of what to measure determines what matters, and by extension, what kind of world our AI systems will build. A critical comparison of the leading candidates for “utility”—economic efficiency, preference satisfaction, human welfare, and capability expansion—reveals deeply divergent visions of a good society.

A Comparative Analysis of Utility Metrics

1. Economic Efficiency: This is arguably the default metric in the current political economy of AI. It defines utility as maximizing economic output relative to input. It is measured using proxies like productivity gains, GDP growth, return on investment, and cost reduction through automation.⁵² This metric is the natural language of platform capitalism, treating AI as a tool to optimize processes, reduce labor costs, and increase corporate profit.⁵³ As the analysis in “Contested Authority” shows, powerful corporate actors use their influence to frame policy debates around these tangible and easily quantifiable metrics.⁵⁴ Its primary limitation is that maximizing economic efficiency for a firm can be actively detrimental to broader human welfare by fueling inequality, devaluing labor, and creating negative social and environmental externalities that are not priced into the calculation.⁵⁵

2. Preference Satisfaction: This metric, the cornerstone of preference utilitarianism, defines the good as the fulfillment of individuals’ stated or revealed preferences.⁵⁶ In the digital realm, this is measured by proxies like user engagement, click-through rates, time-on-site, “likes,” and user satisfaction surveys.⁵⁷ This metric is the engine of the attention economy. However, it is profoundly vulnerable in the AI context. AI systems, particularly on social media, do not merely satisfy pre-existing preferences; they actively shape, manipulate, and create them through addictive, algorithmically-driven feedback loops that can lead to polarization and misinformation.⁵⁸ Furthermore, this approach struggles to adjudicate between conflicting preferences and provides no basis for discounting irrational, misinformed, or anti-social preferences.⁵⁹

3. Human Welfare: Moving beyond narrow economic or preference-based views, this approach defines utility as a holistic, multi-dimensional concept of well-being. It encompasses not just income and consumption but also health, longevity, the quality of work, environmental quality, and social connection.⁶⁰ Evaluating AI through this lens requires moving beyond a single metric to a broader dashboard of societal health indicators.⁶¹ A human welfare approach would, for instance, weigh the productivity gains from an algorithmic management system against its negative impacts on worker stress, injury rates, and job security.⁶² It demands a socio-technical evaluation that acknowledges AI’s complex interplay with human lives, not just a technical one.

4. Capability Expansion: This framework, derived from the work of economist Amartya Sen and philosopher Martha Nussbaum, offers the most significant conceptual shift. It defines the good not as a mental state (like happiness) or a resource (like income), but as the expansion of people’s real-world freedoms and opportunities—their “capabilities”—to live lives they have reason to value.⁶³ The crucial insight of the capability approach is its focus on “conversion factors”: the personal, social, and environmental factors that determine an individual’s ability to convert a resource into a valued outcome.⁶⁴ A bicycle, for example, expands the capability for mobility for someone in the Netherlands but not for someone in a desert. An ethical AI must be designed with an acute awareness of these diverse conversion factors. An AI tool that assumes uniform conversion factors for all users will inevitably deepen inequality. This was precisely the failure of a widely used health-risk algorithm, which used healthcare cost as a proxy for health need. Because Black patients historically incurred lower costs for the same level of illness (due to systemic biases and access issues), the algorithm falsely concluded they were healthier, systematically denying them access to care and failing to expand their capability for health.⁶⁵
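The mechanism behind that proxy failure is simple enough to demonstrate. The sketch below uses invented figures, not the study’s data, to show how ranking patients by expected cost rather than by need can exclude the sickest patients when structural barriers have suppressed their historical spending.

```python
# Toy illustration of proxy failure (numbers invented; not the study's data).
# 'need' is the quantity we care about; 'expected_cost' is the proxy the
# algorithm optimizes. Barriers to access mean group B patients generate lower
# costs than group A patients at the SAME level of need.

patients = [
    {"id": 1, "group": "A", "need": 3, "expected_cost": 9_000},
    {"id": 2, "group": "B", "need": 5, "expected_cost": 7_000},  # sickest, but lower spend
    {"id": 3, "group": "A", "need": 5, "expected_cost": 15_000},
    {"id": 4, "group": "B", "need": 3, "expected_cost": 4_000},
]

slots = 2  # places in the care-management programme

by_cost = sorted(patients, key=lambda p: p["expected_cost"], reverse=True)[:slots]
by_need = sorted(patients, key=lambda p: p["need"], reverse=True)[:slots]

print("selected by cost proxy:", [p["id"] for p in by_cost])  # [3, 1] -> both group A
print("selected by true need: ", [p["id"] for p in by_need])  # [2, 3] -> includes patient 2
```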

The following table summarizes the core distinctions between these four contested visions of utility.⁶⁶

| Metric | Core Definition | Typical Measures/Proxies | Proponents/Context of Use | Key Limitations in AI Context |
| --- | --- | --- | --- | --- |
| Economic Efficiency | Maximizing output relative to input; maximizing profit. | Productivity, GDP, ROI, cost reduction, stock price. | Corporations, investors, market-oriented policymakers. | Can increase inequality and negative externalities; conflates corporate good with societal good. |
| Preference Satisfaction | Fulfilling the stated or revealed desires of individuals. | Engagement, clicks, time-on-site, user survey ratings. | Social media platforms, e-commerce, advertising-based models. | Preferences can be manipulated by AI; struggles with irrational or anti-social preferences; fails to resolve conflicting desires. |
| Human Welfare | Promoting a multi-dimensional concept of well-being. | Health outcomes, longevity, job quality, leisure, security, environmental quality. | Public health, social policy, welfare economics. | Difficult to quantify into a single objective function; requires complex, context-specific socio-technical evaluation. |
| Capability Expansion | Expanding real freedoms and opportunities for people to achieve valued ways of being and doing. | Access to education, health, political participation; removal of barriers. | Development ethics, human rights frameworks, social justice advocacy. | Highly context-dependent; requires deep understanding of “conversion factors” that AI designers often lack. |

The prevailing use of economic efficiency and preference satisfaction as the primary metrics for AI is not a neutral or inevitable technical choice. It is a direct consequence of the political economy of the contemporary tech industry. These two metrics are the easiest to quantify, scale, and translate into algorithmic objective functions, and they align perfectly with the dominant business models of surveillance capitalism and enterprise automation.⁶⁷ The powerful corporate actors who develop and deploy AI have a vested interest in these metrics and use their considerable influence to frame policy debates around them.⁶⁸ In contrast, metrics like human welfare and capability expansion are more complex, context-dependent, and harder to optimize for algorithmically.⁶⁹ They also raise politically challenging questions about equity, justice, and the distribution of power that conflict with a pure profit motive. The dominance of the former metrics thus represents a form of epistemic capture, where the very tools of measurement are shaped by power. The “utility” being maximized is too often the utility of the system’s owners, not the utility of society as a whole.

Conclusion

The consequentialist compass, while pointing toward the laudable goal of achieving good outcomes, proves to be a deeply flawed instrument for navigating the ethical landscape of artificial intelligence. A naïve, by-the-numbers utilitarianism is insufficient for the governance of AI because its core components falter under the specific pressures of this technology. Its focus on aggregate utility can sanction profound injustice against minorities, failing the test of justice. Its reliance on foresight is broken by the inherent unpredictability of emergent, evolving systems, creating an intractable uncertainty problem. Its calculus across time is systematically biased by present-day power structures that discount long-term, socialized costs, revealing a deep temporal problem. And most fundamentally, its core concept of “utility” is not a given fact to be measured but a contested terrain of values, where the choice of what to maximize is a political act, exposing a critical value problem.

The central lesson is that the ethical challenge of AI is not about building a more perfect “hedonic calculus” or a flawless optimization algorithm.⁷⁰ Such a project is doomed to fail because it attempts to find a technical solution for what are fundamentally normative, social, and political questions. The path forward lies not in better calculation, but in better deliberation. A consequentialist lens remains an indispensable tool for anticipating potential impacts, for asking the crucial question, “What might happen if we do this?” But it cannot be the sole arbiter of right and wrong.

Effective AI governance will demand hybrid ethical frameworks. These frameworks must blend the foresight of consequentialism with the hard constraints of deontology—which posits that certain rights and duties are inviolable, regardless of their consequences—and the character-focused orientation of virtue ethics, which asks what kind of people and institutions we should become in the age of AI.⁷¹ The goal is not to discover a single, perfect formula for ethical AI, but to build a resilient, pluralistic, and democratically legitimate toolkit for navigating a complex and rapidly evolving technological future.


Notes

¹ “Consequentialism,” Internet Encyclopedia of Philosophy, accessed August 8, 2025.

² Ibid.

³ “Utilitarianism,” Stanford Encyclopedia of Philosophy, last modified July 31, 2025.

⁴ “Utilitarianism and AI Ethics,” Hackernoon, accessed August 8, 2025.

⁵ “Utilitarianism, Deontology, & Virtue Ethics in the AI Context,” Fiveable, accessed August 8, 2025; Bernard Williams, “A Critique of Utilitarianism,” in Utilitarianism: For and Against, by J.J.C. Smart and Bernard Williams (Cambridge: Cambridge University Press, 1973), cited in “The Problem of Integrity in the Decision of Big Data-Driven Algorithms,” Nigerian Journal of Philosophy 8, no. 2 (2025): 140.

⁶ NAACP, “The Use of Artificial Intelligence in Predictive Policing,” Issue Brief, accessed August 8, 2025.

⁷ “Contested Authority: How Evidence Shapes AI Ethics Debates,” (internal document, 2025); “Policing the Police: An Information Packet on Policing Technologies,” Media Freedom & Information Access Clinic, Yale Law School, accessed August 8, 2025.

⁸ Rashida Richardson, Jason M. Schultz, and Kate Crawford, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” New York University Law Review 94 (2019): 192-233.

⁹ P. Jeffrey Brantingham et al., “Does Predictive Policing Lead to Biased Arrests? Results from a Randomized Controlled Trial,” Statistics and Public Policy 5, no. 1 (2018): 1-6, https://doi.org/10.1080/2330443X.2018.1438940.

¹⁰ Ibid.

¹¹ Eric L. Piza et al., “RAND Evaluation of Chicago’s Predictive Policing Pilot,” RAND Corporation Research Report (Santa Monica: RAND, 2021), 45-47, https://www.rand.org/pubs/research_reports/RRA1394-1.html.

¹² “Policing the Police,” Yale Law School.

¹³ Clare Garvie, “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” Georgetown Law Center on Privacy & Technology (2023): 34-38, https://www.perpetuallineup.org/; NAACP, “Predictive Policing.”

¹⁴ Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, “Inherent Trade-Offs in the Fair Determination of Risk Scores,” Proceedings of Innovations in Theoretical Computer Science (2017): 43:1-43:23, https://doi.org/10.4230/LIPIcs.ITCS.2017.43.

¹⁵ Ninareh Mehrabi et al., “A Survey on Bias and Fairness in Machine Learning,” ACM Computing Surveys 54, no. 6 (2021): 115:1-115:35, https://doi.org/10.1145/3457607.

¹⁶ Alexandra Chouldechova, “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments,” Big Data 5, no. 2 (2017): 153-163, https://doi.org/10.1089/big.2016.0047.

¹⁷ “Contested Authority.”

¹⁸ Ibid.

¹⁹ Ibid.

²⁰ Ibid.

²¹ “A Philosophical Guide to AI Ethics,” Number Analytics, accessed August 8, 2025.

²² McGraw, “Ethical Considerations in the Design and Development of AI Technologies,” International Journal on Responsibility (2024), https://commons.lib.jmu.edu/ijr.

²³ “What is Emerging in Artificial Intelligence Systems?” Max Planck Law, accessed August 8, 2025.

²⁴ “‘Magical’ Emergent Behaviours in AI: A Security Perspective,” Securing.AI, accessed August 8, 2025.

²⁵ Yuval Noah Harari, quoted in “Techno-Optimist or AI Doomer? Consequentialism and the Ethics of AI,” Ethics Unwrapped, The University of Texas at Austin, accessed August 8, 2025.

²⁶ Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo, “Are Emergent Abilities of Large Language Models a Mirage?” HAI Stanford, May 15, 2024.

²⁷ “Test and Evaluation of Artificial Intelligence Enabled Systems (T&E of AIES),” Office of the Under Secretary of Defense for Research and Engineering, accessed August 8, 2025.

²⁸ “Effective Evaluation and Governance of Predictive Models,” Health Affairs 44, no. 1 (2025); “Best Practices for Implementing an AI Evaluation System,” Walturn, accessed August 8, 2025.

²⁹ “Act and Rule Utilitarianism,” Internet Encyclopedia of Philosophy, accessed August 8, 2025.

³⁰ “Utilitarianism, Deontology, & Virtue Ethics in the AI Context,” Fiveable; UNESCO, “Recommendation on the Ethics of Artificial Intelligence,” SHS/BIO/REC-AIETHICS/2021 (Paris: UNESCO, 2021), https://unesdoc.unesco.org/ark:/48223/pf0000380455.

³¹ “Act and Rule Utilitarianism,” IEP.

³² “Techno-Optimist or AI Doomer?,” Ethics Unwrapped.

³³ “On Consequentialism and Fairness,” Frontiers in Artificial Intelligence 3 (2020), https://doi.org/10.3389/frai.2020.00034.

³⁴ Ibid.

³⁵ “Test and Evaluation of AIES,” DTE&A; UNESCO, “Recommendation on the Ethics of AI.”

³⁶ Alan Gleeson, “The Long- and Short-Term Impacts of AI Technologies,” KMWorld, August 7, 2025.

³⁷ “How GenAI Delivers Short-Term Wins and Long-Term Transformation,” World Economic Forum, January 2025; Daron Acemoglu, “A New Look at the Economics of AI,” MIT Sloan, May 20, 2024.

³⁸ Ibid.

³⁹ “The Risks of Artificial Intelligence,” Built In, accessed August 8, 2025; David Essien, “Short-term and Long-term Implications of AI,” Medium, October 26, 2023.

⁴⁰ “ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study,” Time, August 7, 2025.

⁴¹ Gleeson, “Long- and Short-Term Impacts.”

⁴² “Longtermism,” Wikipedia, accessed August 8, 2025; “AI and the Rise of Longtermism in Effective Altruism,” Ethical AI Law Institute, accessed August 8, 2025.

⁴³ Ibid.

⁴⁴ “The Toxic Ideology of Longtermism,” Radical Philosophy 2, no. 13 (2022).

⁴⁵ Ibid.; “Longtermism,” Wikipedia.

⁴⁶ “Ethics Explainer: Longtermism,” The Ethics Centre, accessed August 8, 2025.

⁴⁷ “Contested Authority.”

⁴⁸ “How GenAI Delivers Short-Term Wins,” WEF.

⁴⁹ “The Risks of Artificial Intelligence,” Built In.

⁵⁰ “The Toxic Ideology of Longtermism,” Radical Philosophy.

⁵¹ Max Kasy, “The Political Economy of AI: Towards Democratic Control of the Means of Prediction,” IZA Institute of Labor Economics Discussion Paper No. 14831 (2021).

⁵² “Metrics of Success: Evaluating User Satisfaction in AI Chatbots,” Magai.co, accessed August 8, 2025; “Measuring the Welfare Effects of AI and Automation,” CEPR, November 19, 2019.

⁵³ “Political Economy of Artificial Intelligence: Critical Reflections on Big Data, Market, Economic Development and Data Society,” ResearchGate, accessed August 8, 2025.

⁵⁴ “Contested Authority.”

⁵⁵ Ibid.

⁵⁶ “Preference Utilitarianism,” Wikipedia, accessed August 8, 2025; R.M. Hare, Moral Thinking (Oxford: Clarendon Press, 1981).

⁵⁷ “Metrics of Success,” Magai.co; “Accurate and Interpretable User Satisfaction Estimation for Conversational Systems,” arXiv:2403.12388 (2024).

⁵⁸ Daron Acemoglu and Simon Johnson, “AI and Social Media: A Political Economy Perspective,” MIT Department of Economics Working Paper (2025).

⁵⁹ “Creating a Healthy AI Utility Function: The Importance of Diversity,” Data Science Central, accessed August 8, 2025.

⁶⁰ “Measuring the Welfare Effects of AI,” CEPR; “AI and Human Welfare,” Eleos AI Research, accessed August 8, 2025.

⁶¹ “Measuring Welfare in Human-AI Ecosystems,” arXiv:2501.15317 (2025).

⁶² Ibid.

⁶³ Emanuele Ratti and Mark Graves, “A Capability Approach to AI Ethics,” American Philosophical Quarterly 62, no. 1 (2025): 1-16.

⁶⁴ Ibid.

⁶⁵ Ibid.; Obermeyer et al., “Dissecting racial bias in an algorithm used to manage the health of populations,” Science 366, no. 6464 (2019): 447-453.

⁶⁶ The concepts in this table are synthesized from sources including “Contested Authority”; Kasy, “The Political Economy of AI”; Ratti and Graves, “A Capability Approach to AI Ethics”; and “Measuring the Welfare Effects of AI,” CEPR.

⁶⁷ “Political Economy of Artificial Intelligence,” ResearchGate; “Contested Authority.”

⁶⁸ “Contested Authority.”

⁶⁹ Ratti and Graves, “A Capability Approach to AI Ethics”; “Measuring Welfare in Human-AI Ecosystems,” arXiv.

⁷⁰ Jeremy Bentham, An Introduction to the Principles of Morals and Legislation (1789), cited in “The Problem of Integrity,” Nigerian Journal of Philosophy.

⁷¹ “Utilitarianism, Deontology, & Virtue Ethics in the AI Context,” Fiveable.


Continuity Brief

  • Summary of Essay 2: This essay has critically examined utilitarian and consequentialist ethics in the context of AI. It demonstrated that these frameworks, while intuitively appealing, face profound challenges. The focus on aggregate utility can perpetuate injustice against minorities (the justice problem); the unpredictability of emergent systems undermines evaluation (the uncertainty problem); the conflict between short-term incentives and long-term societal well-being is skewed by power (the temporal problem); and the very definition of “utility” is a contested political choice, not a technical given (the value problem).
  • Key Concepts Introduced: Act Utilitarianism, Rule Utilitarianism, Preference Utilitarianism, Longtermism, The Capability Approach, Emergent Behaviors, Algorithmic Feedback Loops, Epistemic Injustice, The Political Economy of AI Metrics.
  • Bridge to Essay 3 (Proposed Topic: Deontology, Rights, and Duties in AI Governance): The demonstrated limitations of a purely consequence-based approach create a clear need to explore alternative or complementary ethical frameworks. If predicting consequences is fraught with uncertainty and injustice, perhaps we should focus on the intrinsic rightness or wrongness of actions and the duties of the actors involved. The next essay will therefore explore deontological ethics. It will ask: Can we establish a set of inviolable rules or “digital human rights” (e.g., a right to an explanation, a right not to be manipulated) that must constrain AI systems, regardless of their potential utility? What are the specific duties of AI developers, deployers, and regulators in upholding these rights? This will allow us to examine how a rights-based framework, such as that underpinning the EU’s AI Act,⁷² can provide the robust guardrails that consequentialism alone lacks.

Final Note

⁷² European Parliament and Council, “Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act),” Official Journal of the European Union L 2024/1689, July 12, 2024, Articles 1-6, https://eur-lex.europa.eu/eli/reg/2024/1689/oj.
