
The Four Pillars of Truth: Forging Authoritative Evidence in AI Ethics

Introduction: The Crisis of Authority in an Age of Intelligent Machines

The rapid ascent of artificial intelligence has precipitated a crisis of authority in public and private governance. As AI systems become deeply embedded in the critical infrastructure of society—from healthcare and criminal justice to finance and national security—the stakes of ethical deliberation have become monumental. Yet, the discourse is dangerously fragmented. Technologists point to performance benchmarks as proof of capability, social scientists present empirical studies of real-world impact, philosophers invoke timeless ethical principles, and affected communities offer powerful testimony of lived experience. Each speaks a different language of evidence, creating a cacophony of competing claims that resist easy reconciliation. This fragmentation paralyzes effective governance, leaving a vacuum where market imperatives and geopolitical competition often dictate progress, unmoored from a coherent understanding of the public good.¹

This essay argues that authoritative evidence in AI ethics is not found in any single pillar—technical, empirical, philosophical, or experiential—but emerges from their structured and critical integration. True authority is a composite, forged through a process of triangulation where each form of evidence challenges, validates, and enriches the others. An AI system whose claims to safety are validated only by a technical benchmark, without empirical evidence of its real-world behavior or the consent of those it affects, lacks true authority. Likewise, a philosophical argument for a “fair” system is hollow without a technical understanding of its feasibility and empirical data on its disparate impacts. This integrated epistemology, which balances the objective with the subjective, the quantitative with the qualitative, and the abstract with the contextual, is essential for navigating the complex ethical landscape of AI. This analysis will proceed by examining each of the four pillars in turn, exploring their unique strengths, their inherent limitations, and the critical role they play in a holistic evidentiary framework.

| Evidentiary Pillar | Core Question Answered | Primary Methodologies | Key Strengths | Inherent Limitations |
| --- | --- | --- | --- | --- |
| Technical Capabilities | What can the system do? | Benchmarking, model auditing, technical specification analysis | Objectivity; measurability; establishes feasibility and safety baselines | Lacks real-world context; prone to “gaming”; value-neutral |
| Empirical Impact Studies | What does the system do in the world? | RCTs, longitudinal studies, ethnography, case studies | Real-world grounding; measures actual harms and benefits; causal inference | Lags behind the technology; context-dependent; can miss systemic issues |
| Philosophical Arguments | What should the system do? | Application of ethical frameworks (deontology, utilitarianism, etc.) | Normative guidance; defines core values (fairness, justice); handles novelty | Abstract; can be impractical; competing frameworks conflict |
| Lived Experiences | How is the system felt, and by whom? | Participatory design, community testimony, co-design, storytelling | Surfaces hidden biases; ensures legitimacy and justice; provides rich context | Can be anecdotal; difficult to scale; risk of tokenism |

I. The Foundation and its Fault Lines: The Authority of Technical Specification

Technical evidence forms the non-negotiable foundation for any credible claim in AI ethics. Before one can debate whether an AI system is fair, just, or beneficial, one must first understand what it is and what it can do. This pillar of evidence, rooted in computer science and engineering, provides the objective, measurable language of system architecture, training data, and performance metrics. Its authority lies in its ability to establish a baseline of fact, but this authority is dangerously incomplete when relied upon in isolation.

The Power of the Measurable

The primary strength of technical evidence is its capacity for quantification and standardization. Technical specifications, which detail a model’s architecture, its parameters, and the composition of its training data, are the first step toward accountability.² As corporate principles from firms like IBM rightly assert, stakeholders deserve to know “who trains their AI systems, what data was used… and what went into their algorithms’ recommendations.”³ This basic transparency is a prerequisite for any subsequent ethical analysis.

Beyond specifications, technical benchmarks provide a standardized method for comparing different models and tracking progress over time.⁴ Benchmarks like MMLU (Massive Multitask Language Understanding) or HumanEval for code generation create a common ground for evaluating performance on specific tasks, from quantitative reasoning to programming.⁵ This function is critical not only for corporate labs competing to achieve state-of-the-art results but also for regulators seeking to establish minimum performance thresholds for high-risk applications.⁶ The European Union’s AI Act, for instance, relies on the ability to benchmark systems for accuracy and robustness to enforce its risk-based framework.⁷ Similarly, core ethical principles like “explainability”—the ability to describe a model’s mechanics—and “transparency” are central tenets in frameworks from global bodies like the World Health Organization (WHO) and UNESCO.⁸
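
To make concrete what a single benchmark number summarizes, here is a minimal sketch of multiple-choice benchmark scoring in the style of MMLU; the tiny item set and the `ask_model` callable are hypothetical stand-ins, not the official evaluation harness.

```python
# Minimal sketch of multiple-choice benchmark scoring (hypothetical harness,
# not the official MMLU evaluation code). Each item has a question, options,
# and a gold answer; the reported score is simple accuracy over the item set.

from typing import Callable

# Hypothetical benchmark items; real benchmarks contain thousands of items
# spanning many subjects.
ITEMS = [
    {"question": "2 + 2 = ?", "options": ["3", "4", "5", "6"], "answer": "4"},
    {"question": "H2O is commonly called?", "options": ["salt", "water", "air", "sand"], "answer": "water"},
]

def benchmark_accuracy(ask_model: Callable[[str, list[str]], str]) -> float:
    """Score a model callable that returns one of the options for each item."""
    correct = 0
    for item in ITEMS:
        prediction = ask_model(item["question"], item["options"])
        correct += int(prediction == item["answer"])
    return correct / len(ITEMS)

# Example: a trivial "model" that always picks the second option.
if __name__ == "__main__":
    print(benchmark_accuracy(lambda q, opts: opts[1]))  # 1.0 on this toy set
```

The same aggregate-accuracy logic underlies the limitations discussed below: a single number summarizes performance while saying nothing about which items, or which people, the errors fall on.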

The “Benchmark Trap” and the Limits of Quantification

Despite its foundational importance, the authority of technical evidence is sharply circumscribed. An over-reliance on quantitative metrics leads to what has been termed the “benchmark trap”: a situation where the metrics themselves, rather than genuine progress, become the goal.⁹ Benchmarks are not passive instruments of measurement; they are “deeply political, performative and generative,” actively shaping the technology they purport to evaluate.¹⁰ This dynamic creates perverse incentives, encouraging developers to “game” the metrics or “overfit” their models to a narrow test, achieving high scores that do not translate into real-world competence.¹¹

This limitation becomes ethically salient when technical metrics obscure profound moral failures. A classifier with 95% accuracy may seem technically sound, but that number is ethically meaningless if the 5% of errors are concentrated on critical cases, such as failing to detect fraud, or disproportionately affect marginalized communities.¹² The landmark “Gender Shades” study, for example, revealed that commercial facial analysis systems performed with near-perfect accuracy on light-skinned males but had error rates of up to 34% for dark-skinned females—a catastrophic failure of fairness completely hidden by a single, aggregate accuracy score.¹³
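
The point is easy to see in code. The sketch below uses invented counts (loosely echoing the Gender Shades pattern, not the study’s actual data) to show how an aggregate accuracy near 95% can coexist with a 34% error rate for one subgroup.

```python
# Sketch: aggregate accuracy vs. per-subgroup error rates.
# (subgroup, n_examples, n_errors) -- invented counts for illustration.
RESULTS = [
    ("lighter-skinned men",   1500, 8),    # ~0.5% error
    ("lighter-skinned women", 1000, 30),   # 3% error
    ("darker-skinned men",     700, 42),   # 6% error
    ("darker-skinned women",   300, 102),  # 34% error
]

total_n = sum(n for _, n, _ in RESULTS)
total_errors = sum(e for _, _, e in RESULTS)
print(f"aggregate accuracy: {1 - total_errors / total_n:.1%}")  # ~94.8%

for group, n, errors in RESULTS:
    print(f"{group:>22s}: error rate {errors / n:.1%}")
```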

This exposes a fundamental flaw: technical specifications are value-neutral. They can describe a system’s accuracy but not its fairness. They can measure its computational efficiency but not its environmental cost in energy consumption. They can detail its capabilities but not its potential for misuse or its impact on human dignity.¹⁴

This gap between technical measurement and ethical reality can lead to a dangerous codification of flawed metrics into governance frameworks. The process begins with academic and corporate research, where the “fast logic of academic publication” and commercial competition incentivize the creation of and high performance on quantifiable benchmarks.¹⁵ These benchmarks, despite known weaknesses like poor construct validity and a failure to account for sociocultural context, become the de facto standards of “capability.”¹⁶ Regulators, seeking concrete and seemingly objective standards, then incorporate these very benchmarks into legal frameworks to define requirements for accuracy and robustness, as seen in the EU AI Act.¹⁷ This creates a feedback loop where legal compliance requires optimizing for flawed metrics, potentially leading to AI systems that are legally compliant but brittle, biased, or unsafe in the real world. The benchmark trap thus becomes a regulatory trap.

Furthermore, the push for technical “transparency” can be co-opted to create an illusion of ethical accountability while obscuring deeper moral questions. Industry and ethical guidelines alike call for “explainability.”¹⁸ This is often interpreted in purely technical terms: the ability to describe the “mechanics, rules and algorithms, and training data” of a system.¹⁹ However, as some research demonstrates, this form of explanation can become an “obstacle to the production of ethical AI.”²⁰ An explanation can be technically precise yet possess low “denunciatory power”—it explains the mechanism without exposing the underlying injustice. For example, a predictive policing algorithm might explain a high-risk score for a neighborhood by stating that “historical arrest data was high in this area.” This explanation is technically transparent but neatly sidesteps the crucial ethical question of whether that historical data is itself the product of racially biased policing.²¹ In this way, a narrow, technical definition of transparency can serve to mask, rather than reveal, the system’s most profound ethical failures.
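
The gap between mechanical transparency and denunciatory power can be illustrated with a toy feature-attribution explanation; the model, features, and weights below are hypothetical.

```python
# Sketch: a "transparent" explanation of a risk score from a hypothetical
# linear model. The explanation is technically accurate, but it merely reports
# that historical arrest counts drive the score; it says nothing about whether
# those counts reflect biased policing in the first place.

FEATURES = {"historical_arrests_in_area": 42, "calls_for_service": 17, "vacant_lots": 3}
WEIGHTS  = {"historical_arrests_in_area": 0.8, "calls_for_service": 0.3, "vacant_lots": 0.1}

contributions = {name: WEIGHTS[name] * value for name, value in FEATURES.items()}
risk_score = sum(contributions.values())

print(f"risk score: {risk_score:.1f}")
for name, contrib in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: +{contrib:.1f}")
# Most of the score is attributed to historical arrests -- a mechanically
# correct explanation with low "denunciatory power."
```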


II. From Potential to Practice: The Grounding Power of Empirical Impact

If technical specifications describe what an AI system can do in a controlled environment, empirical impact studies reveal what it actually does in the complex and unpredictable real world. This pillar of evidence is essential for grounding abstract ethical debates in concrete harms and benefits, providing a necessary check on both the optimistic claims of developers and the speculative fears of critics. Its authority comes from its connection to reality.

The Spectrum of Empirical Methodologies

The strength of empirical evidence lies in its methodological diversity, with each approach offering a unique window into an AI system’s impact.²²

  • Randomized Controlled Trials (RCTs) are the gold standard for establishing causality. They can yield counterintuitive findings that challenge prevailing assumptions. A striking example is a 2025 study by the research organization Metr, which conducted an RCT with experienced open-source software developers. Contrary to widespread developer belief and impressive benchmark scores, the study found that using advanced AI coding assistants actually slowed down developers by 19% on complex, high-quality tasks.²³ This demonstrates the power of rigorous empirical testing to pierce through hype and anecdote.
  • Longitudinal Studies track effects over time, revealing slower-moving cognitive and social shifts that single-point assessments would miss. Research has employed this method to find a negative correlation between frequent AI tool usage and critical thinking skills, mediated by increased “cognitive offloading.”²⁴ Another five-week longitudinal study found that sustained interaction with conversational AI led to significant increases in users’ perceived emotional attachment to the agents.²⁵
  • Systematic Reviews and Case Studies synthesize existing evidence to identify broad patterns, best practices, and critical gaps in knowledge. A comprehensive review of predictive policing literature, for instance, found a stark discrepancy between the claimed crime-reduction benefits and the lack of robust empirical support, concluding that the technology’s adoption was driven more by “convincing arguments and anecdotal evidence” than by systematic research.²⁶ Formal AI Impact Assessments (AI-IAs) are an emerging methodology designed to structure this kind of analysis prospectively, identifying potential harms before deployment.²⁷
  • Ethnographic Studies provide rich, qualitative insight into how AI is integrated into real-world work practices. By embedding researchers as “digital anthropologists” within organizations, this method can uncover subtle, unintended consequences on collaboration, professional skill, and organizational culture that quantitative metrics entirely miss.²⁸

The Limits of Empirical Evidence

Despite its grounding power, empirical evidence has significant limitations. The “pacing problem” is chief among them: empirical research, which is often slow and resource-intensive, struggles to keep up with the blistering pace of AI development.²⁹ By the time a rigorous study on one model is published, a more powerful successor is already widely deployed.

Furthermore, the “generalizability problem” means that findings are often highly context-dependent. The Metr RCT, for example, was careful to specify that its findings applied to experienced developers working on high-quality open-source projects and might not hold true for junior developers or different types of coding tasks.³⁰ This makes it difficult to draw universal conclusions from specific studies.

Perhaps most fundamentally, empirical studies are constrained by the “what to measure” problem. They can effectively quantify efficiency gains, error rates, or user satisfaction, but they may fail to capture less tangible but ethically crucial impacts, such as the erosion of social trust, the chilling of democratic speech, or the subtle degradation of human dignity.³¹ This is compounded by the fact that different empirical methods can produce contradictory results. The tension between the RCT showing a productivity slowdown and widespread anecdotal reports of productivity gains highlights this challenge, suggesting that different methods are often measuring different tasks or contexts.³²

This points to a critical role for empirical evidence: not merely to measure impact, but to actively interrogate and correct the powerful, and often misleading, narratives that shape policy and investment. A significant and persistent gap often exists between the perceived impact of AI and its measured impact. In the Metr study, developers believed AI sped them up by 20% even as the data showed a 19% slowdown.³³ This perception-reality gap, fueled by impressive benchmark scores and anecdotal success stories, has profound governance implications. If policymakers and business leaders make decisions based on perception, they risk investing in and deploying technologies whose real-world effects are unproven or even negative. The authority of empirical evidence, therefore, lies in its capacity to serve as a crucial reality check.
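
As a rough illustration of how such an effect is estimated, the sketch below compares completion times between randomized AI-allowed and AI-disallowed tasks using invented numbers and a simple ratio-of-means estimator; METR’s actual analysis is considerably more careful.

```python
# Sketch: estimating a speedup/slowdown from a randomized trial by comparing
# completion times on AI-allowed vs. AI-disallowed tasks. The times are
# invented and the ratio-of-means estimator is a simplification.

from statistics import mean

ai_allowed_minutes    = [95, 120, 80, 150, 110]  # hypothetical task times
ai_disallowed_minutes = [85, 100, 70, 120, 95]

ratio = mean(ai_allowed_minutes) / mean(ai_disallowed_minutes)
print(f"tasks with AI took {ratio - 1:+.0%} longer on average")
# A ratio above 1 means a slowdown, even if participants believe the tool
# sped them up -- the perception-reality gap discussed above.
```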

At the same time, empirical evidence reveals its own limits when confronted with normative questions like fairness. Computer scientists have developed more than twenty distinct mathematical definitions of fairness, such as statistical parity or equalized odds.³⁴ These definitions are often mutually incompatible; optimizing a system for one can make it less fair according to another.³⁵ Empirical studies of real-world systems, like those used in predictive policing, demonstrate this tension. A system may be empirically “unbiased” in that its predictions accurately reflect reported crime rates, yet still be profoundly “unfair” by imposing an unequal burden of police contact on innocent members of a particular community.³⁶ This reveals that empirical evidence cannot, on its own, resolve fairness debates. It can measure outcomes against a chosen definition of fairness, but it cannot determine which definition is ethically correct. It is at this boundary that empirical evidence points beyond itself, demonstrating the need for philosophical inquiry.
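
The incompatibility is concrete. The sketch below computes two common group-fairness metrics from invented per-group confusion counts: the example system satisfies statistical parity (equal positive rates) while violating equalized odds (unequal true- and false-positive rates).

```python
# Sketch: two group-fairness metrics computed from per-group confusion counts.
# The counts are invented; the point is that a system can satisfy one
# definition of fairness while violating another.

GROUPS = {
    # group: confusion-matrix counts for a binary "high risk" label
    "group_a": {"tp": 40, "fp": 60, "fn": 10, "tn": 190},
    "group_b": {"tp": 80, "fp": 20, "fn": 40, "tn": 160},
}

for name, c in GROUPS.items():
    n = sum(c.values())
    positive_rate = (c["tp"] + c["fp"]) / n      # statistical parity compares this
    tpr = c["tp"] / (c["tp"] + c["fn"])          # equalized odds compares TPR...
    fpr = c["fp"] / (c["fp"] + c["tn"])          # ...and FPR across groups
    print(f"{name}: positive rate {positive_rate:.2f}, TPR {tpr:.2f}, FPR {fpr:.2f}")
# Both groups are flagged at the same rate, yet the error rates behind that
# rate differ sharply between them.
```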


III. The Moral Compass: The Indispensable Guidance of Philosophical Inquiry

While technical and empirical evidence can tell us what a system does and how it does it, only philosophy can provide a coherent framework for asking what it should do. This pillar of evidence provides the normative language to define our values, adjudicate conflicts between them, and establish moral red lines. In a field defined by novelty and uncertainty, the authority of philosophical argument is its unique capacity to guide us when precedent and data fall short.

Applying Ethical Frameworks to AI

The discipline of ethics offers several robust frameworks that provide structure to otherwise intractable debates about AI.³⁷

  • Utilitarianism focuses on consequences, aiming to produce the greatest good for the greatest number.³⁸ This framework is often implicitly invoked in arguments for AI systems that promise to boost economic productivity, accelerate scientific discovery, or improve public health outcomes.³⁹ Its primary challenge lies in the difficulty of measuring and comparing different forms of “utility” and its potential to justify actions that harm a minority for the benefit of the majority—for example, defending mass surveillance for a marginal increase in public safety.⁴⁰
  • Deontology emphasizes inviolable moral duties and rights, arguing that certain actions are inherently right or wrong regardless of their consequences.⁴¹ This approach is crucial for establishing “red lines” in AI development, such as outright bans on certain applications like social scoring.⁴² From a deontological perspective, a facial recognition system that violates a fundamental right to privacy is ethically impermissible, even if it offers utilitarian benefits.⁴³ This framework grounds principles like transparency and accountability not as useful features, but as moral duties owed to those affected by the system.⁴⁴
  • Virtue Ethics shifts the focus from actions or rules to the moral character of the agents involved—in this case, the developers, deployers, and institutions behind the AI.⁴⁵ It asks what virtues, such as justice, fairness, honesty, and temperance, an AI developer or organization should embody and how those virtues can be embedded in a system’s design.⁴⁶ This provides a more flexible, context-sensitive approach than rigid rules, encouraging developers to cultivate practical wisdom (phronesis) in navigating novel ethical dilemmas.⁴⁷
  • Care Ethics and Feminist Perspectives offer a powerful critique of abstract, universalizing principles. Drawing from feminist theory, these approaches prioritize relational values, context, and the needs of the most vulnerable populations.⁴⁸ They ask how AI systems can be designed to foster caring relationships and to challenge, rather than reinforce, existing power structures and societal biases.⁴⁹ Feminist critiques, in particular, highlight how a lack of diversity in development teams leads directly to systems that perpetuate patriarchal and discriminatory norms.⁵⁰

The Limits of Philosophical Arguments

The authority of philosophy is not absolute. Its principles can be highly abstract and difficult to translate into concrete engineering requirements.⁵¹ The injunction to “embed justice” in an algorithm is a powerful moral goal, but it offers little specific guidance to a software engineer. Furthermore, different ethical frameworks often lead to contradictory conclusions. A utilitarian might approve of a predictive policing system if it demonstrably reduces overall crime, while a deontologist would reject it for systematically violating the rights of individuals in over-policed communities.⁵² Finally, without empirical grounding, philosophical arguments risk becoming detached from reality, unable to account for the actual, often surprising, ways that systems behave and affect people in the real world.

Despite these limits, philosophy’s role is indispensable, particularly in its capacity to serve as an adjudicator when other forms of evidence conflict. Consider a hiring algorithm for which technical evidence shows 98% accuracy, but empirical evidence reveals it systematically discriminates against female candidates.⁵³ This conflict cannot be resolved using only technical or empirical terms; both data points can be factually correct. The resolution requires an appeal to a higher-order normative principle. A deontological framework based on the right to non-discrimination, or a utilitarian framework calculating the immense societal harm of systemic bias, provides the authoritative basis to declare that the empirical evidence of harm outweighs the technical evidence of accuracy. In this sense, the authority of philosophy is not just in generating its own claims, but in providing the rules of engagement for all other forms of evidence. It tells us which facts matter more when our core values are at stake.

This adjudicating role is vividly illustrated in the current legal and ethical turmoil surrounding generative AI and copyright. Legal evidence shows that U.S. copyright law requires a human author and creative control over a work’s expression.⁵⁴ Technical analysis reveals that with generative AI, the user’s creative input is often limited to the prompt (an idea), while the AI model performs the vast majority of the expressive work.⁵⁵ This creates a legal and technical impasse. If the user is the author, copyright would protect the prompt rather than the output, inverting the law’s fundamental idea-expression dichotomy. If the AI is the author, the work is uncopyrightable under current law.⁵⁶ Resolving this requires a philosophical inquiry into what we mean by “authorship” and “creativity” in an age of intelligent machines. The answer lies not in the code or the case law, but in a normative, philosophical choice about what kind of human creativity society wishes to incentivize and protect.


IV. The Voice of the Governed: The Irreducible Authority of Lived Experience

The final, and arguably most vital, pillar of authority is the lived experience of the people and communities directly affected by AI systems. This form of evidence is uniquely capable of revealing harms, contexts, and injustices that are invisible to technical audits, large-scale empirical surveys, and abstract philosophical principles. Its authority is not merely informational; it is grounded in the democratic principles of justice and legitimacy, encapsulated by the mantra “nothing about us without us.”⁵⁷

The Epistemic Power of the Affected

Lived experience provides the rich, contextual “ground truth” that quantitative data lacks. It moves beyond statistical aggregates to the human stories of impact—the feeling of “your voice tremble in a benefit appeals meeting” after an automated rejection, or the specific knowledge of which local services are trustworthy.⁵⁸ This qualitative, embodied knowledge is essential for uncovering the “unknown unknowns” and hidden biases that formal testing procedures often miss.⁵⁹ A technical audit might find no bias in an algorithm, but members of a specific culture may immediately recognize a harmful stereotype embedded in its outputs.⁶⁰

The debate over predictive policing provides a stark case study. Technical reports may focus on algorithmic accuracy, and empirical studies may measure changes in crime statistics.⁶¹ But community testimony reveals the harm of over-policing not as a statistical anomaly, but as a lived reality of eroded trust, violated dignity, and the perpetuation of historical injustice.⁶² Residents of targeted neighborhoods describe the feeling of being seen as inherently criminal and the psychological burden of being “wrongly and constantly being watched.”⁶³ This is a profound dignitary harm that crime rates and accuracy scores cannot capture. It is evidence of a system failing in its most basic duty to serve and protect all citizens equally.

Methods for Eliciting Lived Experience

Harnessing the authority of lived experience requires moving beyond collecting anecdotes to employing structured, systematic methods of engagement.

  • Participatory AI and Co-Design are methodologies that involve affected communities in the design, development, deployment, and auditing of AI systems from the very beginning.⁶⁴ This approach, which draws from traditions in community-based research and deliberative democracy, seeks to shift power to end-users and ensure technology is responsive to their actual needs.⁶⁵ Case studies from local authorities like the London Boroughs of Camden and Barking & Dagenham demonstrate the value of co-creating data-use charters with residents and using customer community panels to test and monitor AI systems like chatbots, ensuring they meet the needs of vulnerable users.⁶⁶
  • Storytelling and Testimony can be used as formal methods to convey the nuances of human impact, ensuring that authenticity and empathy are not lost in data-driven processes.⁶⁷ This requires creating safe spaces where insights are valued, not tokenized, and where the goal is to drive change, not simply to collect stories of trauma.⁶⁸
  • Community-Led Governance initiatives empower communities to set the terms for how technology is used in their lives. This can range from grassroots advocacy to formal partnerships with developers and policymakers to ensure that AI serves the public interest.⁶⁹

Challenges and Critiques

Integrating lived experience is not without challenges. A primary concern is scalability: how can the deeply contextual insights from a small community be applied to govern a global platform used by billions?⁷⁰ There is also the significant risk of tokenism, where organizations engage in performative listening without granting communities any real power to alter the system’s design or deployment.⁷¹ Finally, questions of representativeness arise: whose “lived experience” is considered authoritative, especially when different affected groups may have conflicting needs and interests?

Despite these difficulties, the authority of lived experience is indispensable because it serves as the ultimate arbiter of justice. A system might be technically sound, empirically effective, and philosophically defensible on some grounds, but if the community subject to it experiences it as oppressive, discriminatory, and dehumanizing, the system is unjust. This is because lived experience provides a unique form of evidence that can override the conclusions of the other three pillars. Its authority is rooted in a theory of political legitimacy: a system imposed upon a population without its consent, which erodes its trust and violates its dignity, cannot be considered ethically authoritative.

This perspective reveals a direct causal link between the composition of development teams and the ethical failures of their products. AI development is overwhelmingly dominated by a narrow demographic, primarily Western and male.⁷² This homogeneity of lived experience creates systemic “blind spots,” leading developers to unconsciously embed their own biases and assumptions into the systems they build and the data they choose.⁷³ This, in turn, results in technically flawed and empirically discriminatory outcomes, such as recruiting tools biased against women or clinical diagnostic tools that perform worse for people of color.⁷⁴ The causal chain is clear: a lack of diverse lived experience in the design process leads directly to biased technical systems, which produce discriminatory real-world impacts. Integrating lived experience is therefore not merely a matter of social good; it is a fundamental requirement for building robust, effective, and trustworthy AI. It is a form of risk mitigation.


Conclusion: Towards an Integrated Epistemology for AI Governance

The search for a single, ultimate source of authority in AI ethics is a futile one. Each of the four pillars of evidence—technical, empirical, philosophical, and experiential—provides an essential but incomplete picture. Relying on any one in isolation leads to predictable and dangerous failures. Technical specifications are blind to context and deaf to values. Empirical studies are too slow to keep pace with innovation and cannot tell us what is right, only what is. Philosophical arguments are too abstract without the grounding of real-world data and lived reality. And lived experience, while essential for justice, can be difficult to scale and systematize.

The only path toward sound governance is through an integrated epistemology—a structured process of epistemic triangulation. A claim about an AI system can only be considered authoritative if it is supported by converging evidence from all four pillars, with each serving as a check on the others.

  • A system must be technically robust: its capabilities, limitations, and failure modes must be well-understood and meet rigorous safety standards.
  • Its real-world effects must be empirically validated: its claimed benefits must be demonstrated and its potential harms measured and mitigated through ongoing, real-world assessment.
  • Its purpose and constraints must be philosophically sound: it must be aligned with defensible moral principles that can justify its existence and adjudicate trade-offs between competing values like efficiency, fairness, and autonomy.
  • Its implementation must be experientially just: it must be perceived as legitimate, fair, and respectful of dignity by the communities it affects, whose consent is a precondition for its ethical deployment.

When these four streams of evidence diverge, it signals a critical failure that must be addressed. When a technically “accurate” system is shown to be empirically biased, philosophically unjust, and experientially harmful, it lacks authority, no matter how high its benchmark scores. The ultimate challenge of AI ethics, therefore, is not to find the one true source of evidence. It is to build the institutional capacity—within corporations, regulatory agencies, research labs, and civil society—to foster a deliberative process where these four kinds of knowledge can be brought into productive, critical dialogue. This integrated approach is the only path to developing AI that is not just intelligent, but wise.



This essay was produced, with human oversight and input, by Gemini Pro 2.5. Whilst it seems strangely paradoxical, AI commentating on AI, the sources and arguments, at this stage of AI’s offerings, seem to be a fair assessment and reliable. For comparison I posed the exact same inquiry to another AI model, Claude Opus 4.1, which took a different approach, as you can see in Contested Authority: How Evidence Shapes AI Ethics Debates. Please check to your own satisfaction before reproducing or quoting sources, and check out the terms and conditions and disclaimer. This site and its contents represent my enquiries into matters that intrigue me in my semi-retired urban monastic phase of life, using the tools on hand, so an online journal that may miss the mark completely or possibly make a fair point or two along the way! Kevin Parker – Site Publisher

Footnotes

¹ For a discussion of the broad range of ethical stakes, see “Ethics of artificial intelligence,” Wikipedia, last modified August 1, 2025, https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence. For an overview of how market and geopolitical pressures can undermine ethical safeguards, see “Five Big Challenges in AI Governance,” Cubettech, accessed August 8, 2025, https://cubettech.com/resources/blog/can-ai-governance-overcome-its-biggest-challenges-as-ai-evolves/.

² High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI (Brussels: European Commission, 2019), 12, https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf; “AI Ethics,” IBM, accessed August 8, 2025, https://www.ibm.com/think/topics/ai-ethics.

³ “AI Ethics,” IBM.

⁴ Adnan Masood, “Measures That Matter: Correlation of Technical AI Metrics with Business Outcomes,” Medium, May 1, 2025, https://medium.com/@adnanmasood/measures-that-matter-correlation-of-technical-ai-metrics-with-business-outcomes-b4a3b4a595ca; G. Grill, C. H. P. Oei, and J. Reijers, “The Politics of AI Benchmarks,” arXiv:2502.06559v1, February 21, 2025, https://arxiv.org/html/2502.06559v1.

⁵ Masood, “Measures That Matter.”

⁶ For a discussion of how benchmarks can inform international security and governance, see Matthew E. Rosen, “Benchmarking a Path to International AI Governance,” Center for Strategic and International Studies, February 21, 2024, https://www.csis.org/analysis/benchmarking-path-international-ai-governance.

⁷ Grill, Oei, and Reijers, “The Politics of AI Benchmarks.”

⁸ World Health Organization, Ethics and Governance of Artificial Intelligence for Health: WHO Guidance (Geneva: World Health Organization, 2021), https://www.who.int/publications/i/item/9789240029200; UNESCO, “Recommendation on the Ethics of Artificial Intelligence,” November 23, 2021, https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.

⁹ Sharon Fisher, “The Benchmark Trap: Why AI’s Favorite Metrics Might Be Misleading Us,” VKTR, April 24, 2025, https://www.vktr.com/ai-market/the-benchmark-trap-why-ais-favorite-metrics-might-be-misleading-us/.

¹⁰ Grill, Oei, and Reijers, “The Politics of AI Benchmarks.”

¹¹ Fisher, “The Benchmark Trap.”

¹² Masood, “Measures That Matter.”

¹³ Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 77–91, https://proceedings.mlr.press/v81/buolamwini18a.html.

¹⁴ Fisher, “The Benchmark Trap”; “AI Ethics,” Coursera, last updated October 27, 2023, https://www.coursera.org/articles/ai-ethics.

¹⁵ Fisher, “The Benchmark Trap.”

¹⁶ Ibid.

¹⁷ Grill, Oei, and Reijers, “The Politics of AI Benchmarks.”

¹⁸ High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI; “What is AI Ethics?,” SAP, accessed August 8, 2025, https://www.sap.com/resources/what-is-ai-ethics; World Health Organization, “WHO calls for safe and ethical AI for health,” May 16, 2023, https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health.

¹⁹ “AI Ethics,” IBM.

²⁰ Mathieu d’Aquin et al., “On the Denunciatory Power of Explainable AI,” arXiv:2109.09586 [cs.AI], September 20, 2021, https://arxiv.org/pdf/2109.09586.

²¹ For a discussion of biased data in predictive policing, see Rashida Richardson, Jason Schultz, and Kate Crawford, “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice,” New York University Law Review 94 (2019): 192–233; see also “Algorithmic Justice or Bias: Legal Implications of Predictive Policing Algorithms in Criminal Justice,” Johns Hopkins University Law Review, January 1, 2025, https://jhulr.org/2025/01/01/algorithmic-justice-or-bias-legal-implications-of-predictive-policing-algorithms-in-criminal-justice/.

²² For examples of different methodologies, see Metr, “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity,” Metr Blog, July 10, 2025, https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/ (accessed August 8, 2025); and Julia Stoyanovich, Bill Howe, and H. V. Jagadish, “Responsible AI: A Human-Centered Approach,” XRDS: Crossroads, The ACM Magazine for Students 27, no. 3 (Spring 2021): 14–19, https://doi.org/10.1145/3447423.

²³ Metr, “Measuring the Impact.”

²⁴ Muhammad Shoaib et al., “The Cognitive Consequences of Artificial Intelligence: A Longitudinal Study on the Relationship between AI Tool Usage and Critical Thinking,” Social Sciences & Humanities Open 15, no. 1 (2025): 100987, https://www.mdpi.com/2075-4698/15/1/6.

²⁵ Jessica S. T. Lin et al., “Longitudinal Effects of Interacting with General-Purpose Conversational AI on Users’ Perceptions and Use,” arXiv:2504.14112v1 [cs.HC], April 22, 2025, https://arxiv.org/html/2504.14112v1.

²⁶ Quinten Meijs and Jelle van der Ham, “Predictive Policing: Review of Benefits and Drawbacks,” International Journal of Public Administration 42, no. 12 (2019): 1031–1039, https://doi.org/10.1080/01900692.2019.1575664.

²⁷ For an overview of AI Impact Assessments, see Bernd Carsten Stahl et al., “A Systematic Review of AI Impact Assessments,” Artificial Intelligence Review 58 (2024): 1–38, https://pmc.ncbi.nlm.nih.gov/articles/PMC10037374/; and “AI Impact Analysis: Ethical and Societal Considerations,” Schellman, November 29, 2023, https://www.schellman.com/blog/ai-services/ethical-and-societal-considerations-of-ai-impact-analysis.

²⁸ “AI@Work,” KIN Center for Digital Innovation, Vrije Universiteit Amsterdam, accessed August 8, 2025, https://vu.nl/en/about-vu/research-institutes/kin-center-for-digital-innovation/departments/aiwork; “Integrating AI in Ethnographic Research,” Ethos, accessed August 8, 2025, https://ethosapp.com/blog/integrating-ai-in-ethnographic-research/.

²⁹ “AI Ethics,” IBM.

³⁰ Metr, “Measuring the Impact.”

³¹ Richardson, Schultz, and Crawford, “Dirty Data, Bad Predictions”; European Parliament, Directorate-General for Parliamentary Research Services, The ethics of artificial intelligence: Issues and initiatives, (Publications Office, 2020), https://data.europa.eu/doi/10.2861/6644.

³² Metr, “Measuring the Impact.”

³³ Ibid.

³⁴ Ninareh Mehrabi et al., “Inherent Limitations of AI Fairness: A Causal-Algorithmic Perspective,” Communications of the ACM 67, no. 7 (July 2024): 20–24, https://cacm.acm.org/research/inherent-limitations-of-ai-fairness/.

³⁵ Ibid.

³⁶ Richardson, Schultz, and Crawford, “Dirty Data, Bad Predictions.”

³⁷ For an overview of philosophical approaches, see “Philosophy of artificial intelligence,” Wikipedia, last modified July 26, 2025, https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence; and “The Ethics of Artificial Intelligence,” Internet Encyclopedia of Philosophy, accessed August 8, 2025, https://iep.utm.edu/ethics-of-artificial-intelligence/.

³⁸ “Utilitarianism, Deontology, and Virtue Ethics in the AI Context,” Fiveable, accessed August 8, 2025, https://library.fiveable.me/artificial-intelligence-and-ethics/unit-2/utilitarianism-deontology-virtue-ethics-ai-context/study-guide/uk9lJyQbhFMjCYkC.

³⁹ Ibid.; High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy AI.

⁴⁰ “Utilitarianism, Deontology, and Virtue Ethics,” Fiveable; Ryan Abbott, “The Dubious Utilitarian Argument for Granting Copyright in AI-Generated Works,” Kluwer Copyright Blog, October 14, 2021, https://legalblogs.wolterskluwer.com/copyright-blog/the-dubious-utilitarian-argument-for-granting-copyright-in-ai-generated-works/.

⁴¹ For an overview of deontology in AI, see “Deontology in Robotics: An Ethics Guide,” Number Analytics, accessed August 8, 2025, https://www.numberanalytics.com/blog/deontology-robotics-ethics-guide; and Jean-Gabriel Ganascia, “Integrating Kantian Ethics in the Alignment of Artificial Intelligence,” arXiv:2311.05227v2 [cs.AI], November 10, 2023, https://arxiv.org/html/2311.05227v2.

⁴² “AI Ethics,” IBM.

⁴³ Leon Furze, “Some technologies are created with values, others have values thrust upon them,” Leon Furze, April 12, 2024, https://leonfurze.com/2024/04/12/some-technologies-are-created-with-values-others-have-values-thrust-upon-them/.

⁴⁴ Ganascia, “Integrating Kantian Ethics”; World Health Organization, “WHO calls for safe and ethical AI.”

⁴⁵ For an overview of virtue ethics in AI, see Adnan Masood, “Virtuous AI: Insights from Aristotle and Modern Ethics,” Medium, May 15, 2024, https://medium.com/@adnanmasood/virtuous-ai-insights-from-aristotle-and-modern-ethics-6bf287037f84; and John T. Haltigan, “Eudaimonia, Virtue Ethics, and Artificial Intelligence,” Christian Perspectives on Science and Technology 3 (2021), https://journal.iscast.org/cposat-volume-3/eudaimonia-virtue-ethics-and-artificial-intelligence.

⁴⁶ Masood, “Virtuous AI”; Haltigan, “Eudaimonia, Virtue Ethics, and Artificial Intelligence.”

⁴⁷ Matthew J. Brown and Bridgid O’Connell, “Living Well with Artificial Intelligence: A Virtue Ethics Approach,” Philosophy & Technology 34 (2021): 1647–69, https://doi.org/10.1007/s13347-021-00485-9.

⁴⁸ For an overview of care ethics and feminist perspectives, see David Weinberger, “The Rise of Particulars: AI and the Ethics of Care,” MDPI, January 26, 2024, https://www.mdpi.com/2409-9287/9/1/26; and Corinna Hertweck, “Feminist AI: Disrupting Dominant Power Structures in the Tech Industry,” Friedrich-Ebert-Stiftung, March 2025, https://library.fes.de/pdf-files/bueros/bruessel/21888-20250304.pdf.

⁴⁹ Hertweck, “Feminist AI”; Anna-Kaisa Kaila and Minna Ruckenstein, “Care as a Tactic for an Ethical AI,” Big Data & Society 8, no. 2 (2021), https://doi.org/10.1177/20539517211045233.

⁵⁰ Hertweck, “Feminist AI”; Giada Pistilli, “Feminist AI: transforming and challenging the current Artificial Intelligence (AI) industry,” TU Delft, accessed August 8, 2025, https://www.tudelft.nl/en/stories/articles/feminist-ai-transforming-and-challenging-the-current-artificial-intelligence-ai-industry.

⁵¹ Brown and O’Connell, “Living Well with Artificial Intelligence”; Haltigan, “Eudaimonia, Virtue Ethics, and Artificial Intelligence.”

⁵² “Utilitarianism, Deontology, and Virtue Ethics,” Fiveable; Furze, “Some technologies are created with values.”

⁵³ “AI Ethics,” Coursera.

⁵⁴ Stephen C. Carlisle, “How Generative AI Turns Copyright Upside Down,” Congressional Research Service, LSB10922, updated July 18, 2025, https://www.congress.gov/crs-product/LSB10922; “The Legal Challenges of Generative AI, Part 1,” Colorado Lawyer, January/February 2024, https://cl.cobar.org/features/the-legal-challenges-of-generative-ai-part-1/.

⁵⁵ Mark A. Lemley, “How Generative AI Turns Copyright Upside Down,” Columbia Science & Technology Law Review 25 (2024): 20-35.

⁵⁶ Carlisle, “How Generative AI Turns Copyright Upside Down.”

⁵⁷ Emma Carmel et al., “The Tomorrow Party: How Lived Experience Can Inform Policy Design,” Policy Design and Practice 7, no. 1 (2024): 106-121, https://doi.org/10.1080/25741292.2024.2308311.

⁵⁸ Frank Spillers, “AI Can’t Have Lived Experience — Why That’s a Problem,” Frank Spillers, accessed August 8, 2025, https://frankspillers.com/ai-cant-have-lived-experience-why-thats-a-problem/.

⁵⁹ Mehrabi et al., “Inherent Limitations of AI Fairness”; Spillers, “AI Can’t Have Lived Experience.”

⁶⁰ Pistilli, “Feminist AI”; “AI Ethics,” Coursera.

⁶¹ For technical analysis, see “Predictive Policing: Navigating the Challenges,” Thomson Reuters, October 26, 2023, https://legal.thomsonreuters.com/blog/predictive-policing-navigating-the-challenges/. For empirical analysis, see Meijs and van der Ham, “Predictive Policing: Review of Benefits and Drawbacks.”

⁶² Richardson, Schultz, and Crawford, “Dirty Data, Bad Predictions.”

⁶³ “Ethics in Predictive Policing,” The Anselmian Hub, October 26, 2022, https://www.anselm.edu/about/anselmian-hub/news/ethics-predictive-policing.

⁶⁴ “Participatory AI,” Nesta, accessed August 8, 2025, https://www.nesta.org.uk/project/participatory-ai/.

⁶⁵ Carmel et al., “The Tomorrow Party.”

⁶⁶ Equality and Human Rights Commission, “Artificial intelligence: Case studies of good practice in local authorities,” last updated July 12, 2023, https://www.equalityhumanrights.com/guidance/artificial-intelligence-case-studies-good-practice-local-authorities.

⁶⁷ Spillers, “AI Can’t Have Lived Experience”; “An AI Code of Ethics for Nonprofit Storytelling and Marketing,” Storyraise, accessed August 8, 2025, https://wp.storyraise.com/ai-code-of-ethics-for-nonprofit-storytelling-and-marketing/.

⁶⁸ “Lived Experience,” Child Welfare Information Gateway, accessed August 8, 2025, https://www.childwelfare.gov/topics/casework-practice/lived-experience/.

⁶⁹ Caribou Digital, “Responsible AI for Development: Learning from the Community of Practice,” September 2024, https://www.cariboudigital.net/wp-content/uploads/2024/09/cases_responsibleAI4D_web.pdf.

⁷⁰ “Participatory AI,” Nesta.

⁷¹ “Lived Experience,” Child Welfare Information Gateway.

⁷² Pistilli, “Feminist AI”; Hertweck, “Feminist AI.”

⁷³ Mehrabi et al., “Inherent Limitations of AI Fairness”; Spillers, “AI Can’t Have Lived Experience.”

⁷⁴ “AI Ethics,” Coursera; Pistilli, “Feminist AI.”

Bibliography

Primary Sources

Legislative & Governmental Documents

Carlisle, Stephen C. “How Generative AI Turns Copyright Upside Down.” Congressional Research Service, LSB10922. Updated July 18, 2025. https://www.congress.gov/crs-product/LSB10922.

Equality and Human Rights Commission. “Artificial intelligence: Case studies of good practice in local authorities.” Last updated July 12, 2023. https://www.equalityhumanrights.com/guidance/artificial-intelligence-case-studies-good-practice-local-authorities.

European Parliament, Directorate-General for Parliamentary Research Services. The ethics of artificial intelligence: Issues and initiatives. Publications Office, 2020. https://data.europa.eu/doi/10.2861/6644.

High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. Brussels: European Commission, 2019. https://www.europarl.europa.eu/cmsdata/196377/AI%20HLEG_Ethics%20Guidelines%20for%20Trustworthy%20AI.pdf.

UNESCO. “Recommendation on the Ethics of Artificial Intelligence.” November 23, 2021. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.

World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. Geneva: World Health Organization, 2021. https://www.who.int/publications/i/item/9789240029200.

World Health Organization. “WHO calls for safe and ethical AI for health.” May 16, 2023. https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health.

Technical Documentation & Corporate Reports

“AI Ethics.” IBM. Accessed August 8, 2025. https://www.ibm.com/think/topics/ai-ethics.

Metr. “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.” Metr Blog, July 10, 2025. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/.

Secondary Sources

Journal Articles

Brown, Matthew J., and Bridgid O’Connell. “Living Well with Artificial Intelligence: A Virtue Ethics Approach.” Philosophy & Technology 34 (2021): 1647–69. https://doi.org/10.1007/s13347-021-00485-9.

Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html.

Carmel, Emma, et al. “The Tomorrow Party: How Lived Experience Can Inform Policy Design.” Policy Design and Practice 7, no. 1 (2024): 106-121. https://doi.org/10.1080/25741292.2024.2308311.

d’Aquin, Mathieu, et al. “On the Denunciatory Power of Explainable AI.” arXiv:2109.09586 [cs.AI]. September 20, 2021. https://arxiv.org/pdf/2109.09586.

Ganascia, Jean-Gabriel. “Integrating Kantian Ethics in the Alignment of Artificial Intelligence.” arXiv:2311.05227v2 [cs.AI]. November 10, 2023. https://arxiv.org/html/2311.05227v2.

Grill, G., C. H. P. Oei, and J. Reijers. “The Politics of AI Benchmarks.” arXiv:2502.06559v1. February 21, 2025. https://arxiv.org/html/2502.06559v1.

Haltigan, John T. “Eudaimonia, Virtue Ethics, and Artificial Intelligence.” Christian Perspectives on Science and Technology 3 (2021). https://journal.iscast.org/cposat-volume-3/eudaimonia-virtue-ethics-and-artificial-intelligence.

Kaila, Anna-Kaisa, and Minna Ruckenstein. “Care as a Tactic for an Ethical AI.” Big Data & Society 8, no. 2 (2021). https://doi.org/10.1177/20539517211045233.

Lemley, Mark A. “How Generative AI Turns Copyright Upside Down.” Columbia Science & Technology Law Review 25 (2024): 20-35.

Lin, Jessica S. T., et al. “Longitudinal Effects of Interacting with General-Purpose Conversational AI on Users’ Perceptions and Use.” arXiv:2504.14112v1 [cs.HC]. April 22, 2025. https://arxiv.org/html/2504.14112v1.

Mehrabi, Ninareh, et al. “Inherent Limitations of AI Fairness: A Causal-Algorithmic Perspective.” Communications of the ACM 67, no. 7 (July 2024): 20–24. https://cacm.acm.org/research/inherent-limitations-of-ai-fairness/.

Meijs, Quinten, and Jelle van der Ham. “Predictive Policing: Review of Benefits and Drawbacks.” International Journal of Public Administration 42, no. 12 (2019): 1031–1039. https://doi.org/10.1080/01900692.2019.1575664.

Richardson, Rashida, Jason Schultz, and Kate Crawford. “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice.” New York University Law Review 94 (2019): 192–233.

Shoaib, Muhammad, et al. “The Cognitive Consequences of Artificial Intelligence: A Longitudinal Study on the Relationship between AI Tool Usage and Critical Thinking.” Social Sciences & Humanities Open 15, no. 1 (2025): 100987. https://www.mdpi.com/2075-4698/15/1/6.

Stahl, Bernd Carsten, et al. “A Systematic Review of AI Impact Assessments.” Artificial Intelligence Review 58 (2024): 1–38. https://pmc.ncbi.nlm.nih.gov/articles/PMC10037374/.

Stoyanovich, Julia, Bill Howe, and H. V. Jagadish. “Responsible AI: A Human-Centered Approach.” XRDS: Crossroads, The ACM Magazine for Students 27, no. 3 (Spring 2021): 14–19. https://doi.org/10.1145/3447423.

Weinberger, David. “The Rise of Particulars: AI and the Ethics of Care.” MDPI, January 26, 2024. https://www.mdpi.com/2409-9287/9/1/26.

Reports and Policy Papers

Caribou Digital. “Responsible AI for Development: Learning from the Community of Practice.” September 2024. https://www.cariboudigital.net/wp-content/uploads/2024/09/cases_responsibleAI4D_web.pdf.

Hertweck, Corinna. “Feminist AI: Disrupting Dominant Power Structures in the Tech Industry.” Friedrich-Ebert-Stiftung, March 2025. https://library.fes.de/pdf-files/bueros/bruessel/21888-20250304.pdf.

Rosen, Matthew E. “Benchmarking a Path to International AI Governance.” Center for Strategic and International Studies, February 21, 2024. https://www.csis.org/analysis/benchmarking-path-international-ai-governance.

Digital Resources

“AI@Work.” KIN Center for Digital Innovation, Vrije Universiteit Amsterdam. Accessed August 8, 2025. https://vu.nl/en/about-vu/research-institutes/kin-center-for-digital-innovation/departments/aiwork.

“AI Ethics.” Coursera. Last updated October 27, 2023. https://www.coursera.org/articles/ai-ethics.

“AI Impact Analysis: Ethical and Societal Considerations.” Schellman. November 29, 2023. https://www.schellman.com/blog/ai-services/ethical-and-societal-considerations-of-ai-impact-analysis.

“An AI Code of Ethics for Nonprofit Storytelling and Marketing.” Storyraise. Accessed August 8, 2025. https://wp.storyraise.com/ai-code-of-ethics-for-nonprofit-storytelling-and-marketing/.

“Algorithmic Justice or Bias: Legal Implications of Predictive Policing Algorithms in Criminal Justice.” Johns Hopkins University Law Review, January 1, 2025. https://jhulr.org/2025/01/01/algorithmic-justice-or-bias-legal-implications-of-predictive-policing-algorithms-in-criminal-justice/.

“Deontology in Robotics: An Ethics Guide.” Number Analytics. Accessed August 8, 2025. https://www.numberanalytics.com/blog/deontology-robotics-ethics-guide.

“Ethics in Predictive Policing.” The Anselmian Hub. October 26, 2022. https://www.anselm.edu/about/anselmian-hub/news/ethics-predictive-policing.

“Ethics of artificial intelligence.” Wikipedia. Last modified August 1, 2025. https://en.wikipedia.org/wiki/Ethics_of_artificial_intelligence.

“Five Big Challenges in AI Governance.” Cubettech. Accessed August 8, 2025. https://cubettech.com/resources/blog/can-ai-governance-overcome-its-biggest-challenges-as-ai-evolves/.

Fisher, Sharon. “The Benchmark Trap: Why AI’s Favorite Metrics Might Be Misleading Us.” VKTR, April 24, 2025. https://www.vktr.com/ai-market/the-benchmark-trap-why-ais-favorite-metrics-might-be-misleading-us/.

Furze, Leon. “Some technologies are created with values, others have values thrust upon them.” Leon Furze, April 12, 2024. https://leonfurze.com/2024/04/12/some-technologies-are-created-with-values-others-have-values-thrust-upon-them/.

“Integrating AI in Ethnographic Research.” Ethos. Accessed August 8, 2025. https://ethosapp.com/blog/integrating-ai-in-ethnographic-research/.

“Lived Experience.” Child Welfare Information Gateway. Accessed August 8, 2025. https://www.childwelfare.gov/topics/casework-practice/lived-experience/.

Masood, Adnan. “Measures That Matter: Correlation of Technical AI Metrics with Business Outcomes.” Medium, May 1, 2025. https://medium.com/@adnanmasood/measures-that-matter-correlation-of-technical-ai-metrics-with-business-outcomes-b4a3b4a595ca.

Masood, Adnan. “Virtuous AI: Insights from Aristotle and Modern Ethics.” Medium, May 15, 2024. https://medium.com/@adnanmasood/virtuous-ai-insights-from-aristotle-and-modern-ethics-6bf287037f84.

“Participatory AI.” Nesta. Accessed August 8, 2025. https://www.nesta.org.uk/project/participatory-ai/.

“Philosophy of artificial intelligence.” Wikipedia. Last modified July 26, 2025. https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence.

Pistilli, Giada. “Feminist AI: transforming and challenging the current Artificial Intelligence (AI) industry.” TU Delft. Accessed August 8, 2025. https://www.tudelft.nl/en/stories/articles/feminist-ai-transforming-and-challenging-the-current-artificial-intelligence-ai-industry.

“Predictive Policing: Navigating the Challenges.” Thomson Reuters. October 26, 2023. https://legal.thomsonreuters.com/blog/predictive-policing-navigating-the-challenges/.

Spillers, Frank. “AI Can’t Have Lived Experience — Why That’s a Problem.” Frank Spillers. Accessed August 8, 2025. https://frankspillers.com/ai-cant-have-lived-experience-why-thats-a-problem/.

“The Ethics of Artificial Intelligence.” Internet Encyclopedia of Philosophy. Accessed August 8, 2025. https://iep.utm.edu/ethics-of-artificial-intelligence/.

“The Legal Challenges of Generative AI, Part 1.” Colorado Lawyer, January/February 2024. https://cl.cobar.org/features/the-legal-challenges-of-generative-ai-part-1/.

“Utilitarianism, Deontology, and Virtue Ethics in the AI Context.” Fiveable. Accessed August 8, 2025. https://library.fiveable.me/artificial-intelligence-and-ethics/unit-2/utilitarianism-deontology-virtue-ethics-ai-context/study-guide/uk9lJyQbhFMjCYkC.

“What is AI Ethics?.” SAP. Accessed August 8, 2025. https://www.sap.com/resources/what-is-ai-ethics.
