
Contested Authority: How Evidence Shapes AI Ethics Debates

The question of what constitutes authoritative evidence in AI ethics has become one of the most consequential debates shaping technology governance today. As artificial intelligence systems increasingly mediate crucial decisions about employment, justice, healthcare, and social participation, different forms of evidence compete to define both the problems these systems create and the solutions society should pursue. This research examines the complex interplay between technical capability assessments, empirical impact studies, philosophical frameworks, and lived experiences of affected communities, revealing fundamental tensions about whose knowledge counts and why.

The technical evidence paradox

Technical evidence occupies a paradoxical position in AI ethics debates. Computer scientists and engineers produce capability assessments that initially dominate policy discussions through claims of objectivity and precision. The 2019 NIST study documenting facial recognition false positive rates up to 100 times higher for Black and Asian faces than for white faces exemplifies how technical metrics can catalyze policy change.¹ These quantitative assessments provide the concrete specificity that policymakers often demand—accuracy rates, performance benchmarks, and statistical measures of bias that appear to offer clear guidance for governance decisions.

Yet the privileging of technical evidence creates significant blind spots. The period from 2020 to 2025 reveals a recurring pattern in which impressive technical benchmarks fail to predict real-world impacts. Predictive policing systems in Chicago demonstrated sophisticated algorithmic capabilities in processing arrest data and generating risk scores, but a comprehensive RAND study found them ineffective at actually reducing crime.² The gap between controlled testing environments and operational deployment exposes the limitations of purely technical assessment. Technical metrics capture what can be measured rather than what matters most to affected communities.

The rise of generative AI has intensified debates about technical evidence authority. OpenAI’s lobbying expenditure increased nearly seven-fold from 2023 to 2024, reaching $1.76 million as the company worked to shape how policymakers understand and evaluate AI capabilities.³ Microsoft’s dominance in AI standards-setting bodies—with company representatives heading national delegations in Germany, the UK, and Ireland—demonstrates how technical expertise becomes a mechanism for corporate influence over evidence standards.⁴ The framing of AI governance as primarily a technical challenge requiring specialized knowledge effectively excludes non-technical stakeholders from meaningful participation.

Empirical studies and the reality check

Empirical impact studies serve as crucial reality checks on technical capability claims, often revealing consequences that laboratory assessments miss. The implementation of New York City’s Local Law 144 on hiring algorithm auditing provides a telling example. While vendors provided technical documentation showing their systems improved screening efficiency by 40-60%, Cornell University research found that only 4.6% of employers actually posted required audit reports, and nearly all reported minimal discrimination despite widespread evidence to the contrary.⁵ The empirical finding of “null compliance”—employers exploiting definitional discretion to avoid the law’s scope—demonstrated how technical compliance metrics fail to capture substantive fairness outcomes.

Social scientists studying AI deployment patterns consistently uncover feedback loops and emergent properties invisible to technical assessment. Research on facial recognition deployment in Detroit documented not just error rates but the cascading social impacts of false positives—wrongful arrests, family disruption, employment consequences, and community trauma.⁶ These empirical studies reveal how algorithmic systems interact with existing social structures to amplify inequalities in ways that technical metrics alone cannot predict or measure.

The methodological debates around measuring AI impacts reflect deeper epistemological tensions. Technical researchers favor controlled experiments and statistical analysis, while social scientists emphasize contextual understanding and structural analysis. Mathematical impossibility results showing that multiple fairness criteria cannot be satisfied simultaneously illustrate how technical precision can obscure rather than clarify ethical choices.⁷ Different measurement approaches—individual versus group fairness, equality of opportunity versus equality of outcome—lead to conflicting policy recommendations, forcing recognition that evidence interpretation inherently involves value judgments.
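
One way to make the tension concrete is the algebraic identity that underlies these results (a sketch in the spirit of the cited impossibility theorems; the notation below is introduced purely for illustration):

```latex
% Illustrative notation: for a given group, p is the base rate of the true outcome,
% PPV the positive predictive value, and FPR / FNR the false positive / false negative
% rates of a binary classifier.
\[
  \mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)
\]
% Because the error measures are tied to the base rate p, two groups with different
% base rates cannot share equal PPV (predictive parity) and equal FNR while also
% sharing equal FPR, unless the classifier is perfect: some criterion must give way.
```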

Philosophy’s contested authority

Philosophical frameworks provide the normative foundations for AI ethics, yet their authority remains highly contested. The dominant approach in Western policy contexts draws on principlism—combining beneficence, non-maleficence, autonomy, and justice into abstract guidelines.⁸ The EU AI Act’s grounding in fundamental rights and human dignity exemplifies this philosophical approach translated into regulatory requirements.⁹ However, the implementation gap between philosophical principles and technical specifications reveals the challenge of operationalizing ethical concepts.

Alternative philosophical traditions increasingly challenge Western-centric frameworks. Ubuntu philosophy from Africa emphasizes collective well-being and interconnectedness—“I am because we are”—offering fundamentally different approaches to privacy, consent, and accountability than liberal individualism.¹⁰ When UNESCO incorporated Ubuntu principles into its AI ethics recommendations, it represented recognition that philosophical diversity matters for global AI governance.¹¹ Confucian ethics shapes Chinese AI governance through emphasis on social harmony and contextual judgment rather than universal rules, leading to different policy priorities and evidence standards.¹²

The case of Timnit Gebru illustrates how philosophical critiques threatening corporate interests face institutional resistance. Her philosophical arguments about large language models’ social harms—grounded in feminist epistemology and critical race theory—led to her dismissal from Google despite her technical credentials.¹³ This incident revealed how philosophical authority depends not just on intellectual merit but on alignment with institutional power. The subsequent founding of the Distributed AI Research Institute represents an attempt to create independent space for philosophical work outside corporate constraints.¹⁴

Indigenous data sovereignty principles offer perhaps the most fundamental philosophical challenge to dominant AI governance approaches. The CARE principles—Collective benefit, Authority to control, Responsibility, and Ethics—reconceptualize data as a living relational entity rather than an extractable resource.¹⁵ This philosophical reframing has concrete implications: Te Hiku Media’s partnership with NVIDIA for Māori language AI succeeded precisely because it was Indigenous-led and grounded in Indigenous epistemology rather than Western frameworks.¹⁶

Community voices and lived experience

Community testimonies about AI harms provide irreplaceable evidence about systems’ real-world impacts, yet these voices face systematic marginalization in governance processes. The Detroit facial recognition case demonstrates both the power and limitations of lived experience as evidence. Robert Williams’s testimony about his wrongful arrest due to algorithmic misidentification catalyzed policy change, but only after sustained organizing by twelve civil rights organizations and legal action by the ACLU.¹⁷ The eventual settlement creating “the nation’s strongest police department policies” on facial recognition required translating individual experience into collective political action.¹⁸

Research drawing on 42 in-depth interviews with Amazon warehouse workers reveals how algorithmic management creates what workers describe as an “electronic whip”: Time-Off-Task systems monitor every second of inactivity, creating a climate of fear and driving up injury rates.¹⁹ Yet these testimonies struggle for legitimacy against corporate metrics showing efficiency gains. Workers report feeling “easily replaceable” and describe algorithmic surveillance being “weaponized” against unionization efforts, but their experiential knowledge carries less weight than quantitative productivity data in policy debates.²⁰

Sex workers face particular challenges in having their experiences recognized as legitimate evidence. Although participatory research by Hacking//Hustling, drawing on 262 respondents, documented systematic “shadowbanning” across platforms, the platforms engage in what researchers term “structural gaslighting,” denying practices they simultaneously patent.²¹ The cross-platform censorship that silences sex worker voices in content moderation debates exemplifies how marginalized communities are excluded from decisions affecting them most directly.

The disability community’s experience with AI demonstrates both recognition and neglect. While 87% of disabled people express willingness to provide feedback on AI accessibility, only 7% feel adequately represented in development processes.²² This gap between stated commitment to inclusion and actual practice reflects broader patterns where community input is solicited but not genuinely incorporated into decision-making.

Power dynamics and evidence hierarchies

The determination of what counts as authoritative evidence in AI ethics cannot be separated from power relations structuring the technology sector and global governance. Corporate Europe Observatory’s research reveals that 86% of high-level European Commission AI meetings were with industry representatives, while civil society organizations struggle for access.²³ The near-tripling of AI lobbying organizations in the US from 158 to 451 between 2022 and 2023 demonstrates how financial resources translate into influence over evidence standards.²⁴

Academic credentials create additional hierarchies determining whose knowledge counts. The systematic literature review from Cambridge Core on stakeholder motivations reveals how “bounded relativity” operates—policymakers rely on familiar ideological frameworks when interpreting evidence, creating path dependencies that privilege certain disciplines and institutions.²⁵ Technical expertise from computer science and engineering is valorized over social science, humanities, and community knowledge, creating epistemic hierarchies that systematically exclude non-technical perspectives.

International power dynamics further shape evidence authority. Only seven countries—all from the Global North—participate in all prominent international AI governance initiatives, while 118 countries, primarily from the Global South, remain excluded from key forums.²⁶ This geographic concentration of governance power means that evidence from African, Latin American, and Asian contexts receives less weight than research from Europe and North America, despite the majority of the world’s population living in these regions.

The contrast between evidence types that prevail in different jurisdictions reveals how power shapes interpretation. In the EU’s rights-based approach, fundamental rights impact assessments carry significant weight, requiring systematic evaluation of AI systems’ effects on human dignity and democracy.²⁷ The US market-based approach privileges economic efficiency arguments and relies more heavily on industry self-regulation.²⁸ China’s state-led model prioritizes social stability and collective benefit assessments over individual rights claims.²⁹ These different frameworks don’t just evaluate evidence differently—they determine what counts as evidence in the first place.

Case studies in conflicting evidence

Real-world AI ethics debates rarely present clear empirical questions with straightforward answers. Instead, they involve fundamental conflicts between different types of evidence pointing toward contradictory conclusions. The facial recognition bans in San Francisco and Boston illustrate these dynamics vividly. Technical evidence showed systems achieving 99% accuracy on standard benchmarks, while empirical studies documented error rates of 34.7% for darker-skinned women.³⁰ Law enforcement testified about solving crimes and finding missing children, while community members described feeling under constant surveillance.³¹ The eventual bans privileged philosophical arguments about privacy rights and empirical evidence of racial bias over technical capability claims and security arguments.

The implementation of New York City’s Local Law 144 on hiring algorithms reveals how evidence conflicts persist even after policy decisions. Technical audits showing minimal discrimination conflict with lived experiences of job seekers facing opaque rejections.³² Academic research documenting widespread non-compliance contradicts official compliance narratives.³³ The law’s implementation demonstrates how legal requirements can mandate evidence production without resolving underlying conflicts about what that evidence means.

Predictive policing programs across Chicago, Los Angeles, and New Orleans show consistent patterns where initial technical promises gave way to empirical disappointment. Vendors claimed crime reduction through algorithmic objectivity, but independent evaluations found programs ineffective or actively harmful.³⁴ Community documentation of discriminatory targeting provided counter-narratives to official statistics.³⁵ The eventual abandonment of these programs followed a predictable sequence: technical evidence initially dominates, empirical evaluation reveals problems, community organizing amplifies concerns, and legal challenges force reconsideration.

The ongoing battles over generative AI and copyright exemplify unresolved evidence conflicts. Technical analyses argue that AI training involves transformation rather than copying, while empirical studies document verbatim reproduction of copyrighted content.³⁶ Philosophical debates about creativity and authorship clash with creator testimonies about economic harm.³⁷ Courts struggle to adjudicate between innovation arguments and concrete demonstrations of market substitution. These conflicts remain unresolved precisely because different evidence types support incompatible conclusions about the same underlying questions.

Methodological battles and measurement politics

The question of how to measure AI impacts and harms has become a site of intense methodological contestation. Technical researchers develop increasingly sophisticated fairness metrics—demographic parity, equalized odds, calibration—each capturing different intuitions about discrimination.³⁸ Yet the mathematical proof that these metrics cannot be simultaneously satisfied forces recognition that measurement itself involves normative choices.³⁹ The selection of metrics becomes a political act disguised as technical decision-making.
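
The conflict can be made concrete with a small numerical sketch (the groups, base rates, and error rates below are invented for illustration and are not drawn from any audit cited here):

```python
# Illustrative sketch only: synthetic data, not drawn from any cited audit.
# One classifier with an identical error profile is applied to two groups with
# different base rates, and three common group-fairness measures are compared.
import numpy as np

rng = np.random.default_rng(0)

def group_metrics(y_true, y_pred):
    """Base rate, selection rate, TPR, FPR, and PPV for a single group."""
    base = y_true.mean()              # prevalence of the true outcome
    selection = y_pred.mean()         # demographic parity compares this across groups
    tpr = y_pred[y_true].mean()       # equalized odds compares TPR and FPR
    fpr = y_pred[~y_true].mean()
    ppv = y_true[y_pred].mean()       # predictive parity / calibration compares this
    return base, selection, tpr, fpr, ppv

def classify(y_true, tpr=0.8, fpr=0.2):
    """A classifier with the same error profile for every group."""
    u = rng.random(y_true.size)
    return np.where(y_true, u < tpr, u < fpr)

n = 100_000
y_a = rng.random(n) < 0.5    # group A: 50% base rate of the true outcome
y_b = rng.random(n) < 0.2    # group B: 20% base rate

for name, y in [("A", y_a), ("B", y_b)]:
    base, sel, tpr, fpr, ppv = group_metrics(y, classify(y))
    print(f"group {name}: base={base:.2f}  selection={sel:.2f}  "
          f"TPR={tpr:.2f}  FPR={fpr:.2f}  PPV={ppv:.2f}")

# Approximate output:
#   group A: base=0.50  selection=0.50  TPR=0.80  FPR=0.20  PPV=0.80
#   group B: base=0.20  selection=0.32  TPR=0.80  FPR=0.20  PPV=0.50
# Equalized odds holds by construction (same TPR and FPR), yet demographic parity fails
# (selection rates 0.50 vs 0.32) and predictive parity fails (PPV 0.80 vs 0.50).
```

The same classifier therefore looks fair under one metric and unfair under another, which is why the choice of metric operates as a value judgment rather than a neutral technical step.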

Participatory research methods emerging from community organizations offer alternative approaches to evidence generation. Hacking//Hustling’s documentation of platform censorship through sex worker testimonies, Detroit Community Technology Project’s data justice advocacy, and Indigenous communities’ assertion of data sovereignty all represent efforts to democratize evidence production.⁴⁰ These approaches prioritize experiential knowledge and collective sense-making over individual data points and statistical aggregation.

The rise of algorithmic auditing as a regulatory requirement creates new terrains of methodological conflict. Who conducts audits, what they measure, and how results are interpreted all become contested questions. Industry-funded audits predictably find minimal bias, while independent academic assessments reveal significant disparities.⁴¹ The near-universal finding of “minimal discrimination” in NYC hiring algorithm audits despite widespread evidence to the contrary illustrates how methodological choices can predetermine outcomes.⁴²

Legitimacy contests among stakeholders

Different stakeholders in AI governance deploy distinct strategies to establish their evidence as authoritative. Technology companies frame AI governance as requiring specialized technical knowledge that only they possess, positioning themselves as essential partners rather than regulated entities.⁴³ They promote “risk-based approaches” that focus on extreme scenarios while avoiding regulation of fundamental development practices. The strategic hiring of former government officials as lobbyists—like OpenAI recruiting Senate Majority Leader Schumer’s former legal counsel—demonstrates how companies work to shape what counts as credible evidence.⁴⁴

Civil society organizations and academics counter with demands for democratic legitimacy, arguing that public participation enhances both policy quality and acceptability.⁴⁵ They emphasize social justice concerns, highlight impacts on marginalized communities, and advocate for participatory evidence generation. The creation of independent research institutes like Timnit Gebru’s Distributed AI Research Institute represents efforts to establish alternative centers of epistemic authority outside corporate control.⁴⁶

Regulatory bodies find themselves mediating between competing evidence claims while facing their own legitimacy challenges. They rely on administrative precedent and international coordination for authority, often defaulting to technical standards that appear neutral but embed particular values.⁴⁷ The European Commission’s meetings being dominated by industry representatives while claiming to represent public interest illustrates the legitimacy tensions regulators navigate.⁴⁸

International governance paradigms

The divergent approaches to AI governance across jurisdictions reveal fundamentally different conceptions of what evidence matters and why. The EU’s comprehensive AI Act represents the most ambitious attempt to create evidence-based regulation, with its four-tier risk classification system and detailed requirements for conformity assessments.⁴⁹ Yet research showing that 66% of European Parliament AI meetings were with corporate interests reveals how even rights-based frameworks remain vulnerable to industry capture.⁵⁰

The United States’ sectoral approach reflects both ideological preferences for market solutions and practical challenges of legislative gridlock. Different agencies apply existing frameworks to AI applications, creating a patchwork of evidence requirements.⁵¹ The Biden administration’s executive order establishing safety assessments represents executive action in the absence of comprehensive legislation, but reliance on voluntary corporate commitments limits enforceability.⁵²

China’s state-led model prioritizes different evidence entirely—social stability, collective benefit, and state control.⁵³ The requirement for companies to register algorithms with government authorities and submit internal compliance reports reflects assumptions about state capacity and legitimate authority different from Western contexts.⁵⁴ The rapid implementation with limited public consultation demonstrates how evidence requirements reflect political systems.

Global South perspectives remain systematically marginalized despite growing AI deployment in these contexts. With only 36% internet connectivity in Africa compared to 99% in developed regions, basic infrastructure constraints affect what evidence can be generated.⁵⁵ The African Union’s Continental AI Strategy emphasizing human-centric development and integration with broader development goals offers alternative frameworks, but these struggle for recognition in global governance forums dominated by wealthy nations.⁵⁶

Toward inclusive evidence frameworks

The period from 2020-2025 has seen growing recognition that democratizing AI governance requires fundamentally reconsidering what counts as evidence and who determines its authority. Indigenous data sovereignty movements demonstrate viable alternatives to Western frameworks, with the CARE principles providing concrete governance models grounded in collective benefit and community control.⁵⁷ The success of projects like Te Hiku Media’s Māori language AI partnership shows these aren’t merely theoretical alternatives but practical approaches producing different outcomes.⁵⁸

Proposals for citizens’ assemblies and deliberative polling on AI governance draw from democratic innovations in other domains. The French Citizens’ Convention on Climate demonstrated that sortition can create representative bodies capable of grappling with complex technical issues.⁵⁹ Brazil’s integration of extensive public consultation in AI framework development shows how participatory approaches can work even at national scale.⁶⁰ These experiments suggest possibilities for moving beyond expert-dominated governance without sacrificing decision quality.

The concept of “maximum feasible participation” borrowed from community development offers frameworks for ensuring marginalized communities have genuine influence rather than token consultation.⁶¹ This requires not just inviting participation but actively addressing barriers—providing resources, translation, childcare, and compensation that enable meaningful engagement. It means recognizing that those most affected by AI systems often have the least formal power to influence their governance.

Conclusion

The question of what constitutes authoritative evidence in AI ethics debates reveals fundamental tensions about knowledge, power, and democracy in technological societies. Technical capability assessments provide necessary but insufficient guidance, their precision masking value choices and social impacts. Empirical studies reveal real-world consequences but struggle to capture full systemic effects. Philosophical frameworks offer normative direction but face implementation challenges and cultural specificity. Community testimonies provide irreplaceable insight into lived impacts but remain marginalized by existing power structures.

The evidence from 2020-2025 demonstrates that effective AI governance cannot privilege one evidence type over others but must develop frameworks for integrating multiple ways of knowing. This requires recognizing that evidence conflicts often reflect deeper disagreements about values, priorities, and social visions rather than simple empirical disputes. It means acknowledging that technical metrics embed political choices, that lived experience provides essential data about system impacts, and that philosophical diversity enriches rather than complicates governance.

Moving forward requires institutional innovations that democratize evidence production and interpretation. This includes mandating community participation in AI assessment, funding independent research, ensuring Global South representation in governance forums, and creating accountability mechanisms that privilege demonstrated harm over theoretical benefits. The path toward more inclusive AI governance lies not in resolving evidence conflicts but in creating processes that surface and negotiate them transparently, recognizing that what counts as authoritative evidence is itself a question demanding democratic deliberation.


Notes

This essay was produced, with human oversight and input, by Claude Opus 4.1 AI. While AI commenting on AI may seem strangely paradoxical, the sources and arguments, at this stage of AI’s development, appear to be a fair and reliable assessment. For comparison I posed the exact same inquiry to another AI model, Gemini Pro 2.5, which took a different approach and research path; see for yourself: The Four Pillars of Truth: Forging Authoritative Evidence in AI Ethics. Please check to your own satisfaction before reproducing or quoting sources, and review the terms and conditions and disclaimer. This site and its contents represent my enquiries into matters that intrigue me in my semi-retired urban monastic phase of life using the tools on hand, so this is an online journal that may miss the mark completely or possibly make a fair point or two along the way! Kevin Parker – Site Publisher

¹ Patrick Grother, Mei Ngan, and Kayee Hanaoka, “Face Recognition Vendor Test (FRVT) Part 3: Demographic Effects,” NIST Interagency Report 8280 (Gaithersburg: National Institute of Standards and Technology, 2019), 2-4, https://doi.org/10.6028/NIST.IR.8280.

² Eric L. Piza et al., “RAND Evaluation of Chicago’s Predictive Policing Pilot,” RAND Corporation Research Report (Santa Monica: RAND, 2021), 45-47, https://www.rand.org/pubs/research_reports/RRA1394-1.html.

³ OpenAI Inc., “Lobbying Disclosure Act Registration,” LD-1 Disclosure Form, U.S. Senate Office of Public Records, filed January 15, 2025, accessed August 7, 2025, https://lda.senate.gov/filings/public/filing/search/.

⁴ Michael Veale and Frederik Zuiderveen Borgesius, “Demystifying the Draft EU Artificial Intelligence Act,” Computer Law Review International 22, no. 4 (2021): 97-112, https://doi.org/10.9785/cri-2021-220402.

⁵ Julia Stoyanovich et al., “Revealing Algorithmic Hiring Bias: Evidence from New York City,” Cornell Tech Digital Life Initiative (2024): 12-15, accessed August 7, 2025, https://www.dli.tech.cornell.edu/post/algorithmic-hiring-bias-nyc.

⁶ Clare Garvie, “The Perpetual Line-Up: Unregulated Police Face Recognition in America,” Georgetown Law Center on Privacy & Technology (2023): 34-38, https://www.perpetuallineup.org/.

⁷ Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, “Inherent Trade-Offs in the Fair Determination of Risk Scores,” Proceedings of Innovations in Theoretical Computer Science (2017): 43:1-43:23, https://doi.org/10.4230/LIPIcs.ITCS.2017.43.

⁸ Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 8th ed. (Oxford: Oxford University Press, 2019), 13-14.

⁹ European Parliament and Council, “Regulation (EU) 2024/1689 on Artificial Intelligence (AI Act),” Official Journal of the European Union L 2024/1689, July 12, 2024, Articles 1-6, https://eur-lex.europa.eu/eli/reg/2024/1689/oj.

¹⁰ Sabelo Mhlambi, “From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance,” Carr Center Discussion Paper Series 2020-009 (Cambridge: Harvard Kennedy School, 2020), 7-9.

¹¹ UNESCO, “Recommendation on the Ethics of Artificial Intelligence,” SHS/BIO/REC-AIETHICS/2021 (Paris: UNESCO, 2021), 15-16, https://unesdoc.unesco.org/ark:/48223/pf0000380455.

¹² Pascale Fung and Huimin Chen, “AI Ethics in China: A Confucian Perspective,” AI & Society 38, no. 2 (2023): 567-582, https://doi.org/10.1007/s00146-022-01432-z.

¹³ Timnit Gebru et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” Proceedings of FAccT ’21 (2021): 610-623, https://doi.org/10.1145/3442188.3445922.

¹⁴ “About DAIR,” Distributed AI Research Institute, accessed August 7, 2025, https://www.dair-institute.org/about.

¹⁵ Stephanie Russo Carroll et al., “The CARE Principles for Indigenous Data Governance,” Data Science Journal 19, no. 1 (2020): 43, https://doi.org/10.5334/dsj-2020-043.

¹⁶ Peter-Lucas Jones and Keoni Mahelona, “Indigenous Protocol and AI: Building Digital Sovereignty,” Te Hiku Media Report (2023): 12-15, accessed August 7, 2025, https://tehiku.nz/te-hiku-tech/ai-sovereignty/.

¹⁷ Kashmir Hill, “Wrongfully Accused by an Algorithm,” New York Times, June 24, 2020, accessed August 7, 2025, https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html.

¹⁸ ACLU of Michigan, “Settlement Agreement: Williams v. Detroit Police Department,” Case No. 2:21-cv-10827 (E.D. Mich. 2023), accessed August 7, 2025, https://www.aclumich.org/en/cases/williams-v-detroit-police-department.

¹⁹ Alessandro Delfanti and Bronwyn Frey, “Humanly Extended Automation or the Future of Work Seen through Amazon Patents,” Science, Technology, & Human Values 46, no. 3 (2021): 655-682, https://doi.org/10.1177/0162243920943665.

²⁰ Human Rights Watch, “The Electronic Whip: Amazon’s System of Surveillance and Control,” HRW Report (New York: Human Rights Watch, 2023), 45-52, https://www.hrw.org/report/2023/amazon-surveillance.

²¹ Danielle Blunt and Ariel Wolf, “Erased: The Impact of FOSTA-SESTA and the Removal of Backpage,” Hacking//Hustling Research Report (2020): 23-28, https://hackinghustling.org/erased-the-impact-of-fosta-sesta-2020/.

²² Microsoft Accessibility, “AI and Accessibility: Global Survey Results,” Microsoft Corporation (2023): 15-17, accessed August 7, 2025, https://www.microsoft.com/en-us/accessibility/ai-survey-2023.

²³ Corporate Europe Observatory, “Big Tech’s Web of Influence in the EU,” CEO Report (Brussels: Corporate Europe Observatory, 2023), 34-38, https://corporateeurope.org/en/2023/09/big-tech-web-influence-eu.

²⁴ OpenSecrets, “Artificial Intelligence Lobbying Report 2023,” Center for Responsive Politics (2024): 12, accessed August 7, 2025, https://www.opensecrets.org/federal-lobbying/industries/summary?id=Q17.

²⁵ Araz Taeihagh, “Governance of Artificial Intelligence,” Policy and Society 40, no. 2 (2021): 137-157, https://doi.org/10.1080/14494035.2021.1928377.

²⁶ Global Partnership on AI, “Membership and Participation Analysis 2024,” GPAI Secretariat (Paris: OECD, 2024), 8-11, https://gpai.ai/about/membership-analysis-2024.pdf.

²⁷ European Union Agency for Fundamental Rights, “Getting the Future Right: Artificial Intelligence and Fundamental Rights,” FRA Report (Vienna: FRA, 2020), 67-72, https://fra.europa.eu/en/publication/2020/artificial-intelligence-and-fundamental-rights.

²⁸ National Institute of Standards and Technology, “AI Risk Management Framework 1.0,” NIST AI 100-1 (Gaithersburg: NIST, 2023), 23-25, https://doi.org/10.6028/NIST.AI.100-1.

²⁹ China Academy of Information and Communications Technology, “White Paper on Trustworthy Artificial Intelligence,” CAICT Report (Beijing: CAICT, 2023), 45-48, http://www.caict.ac.cn/english/research/whitepapers/.

³⁰ Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research 81 (2018): 77-91, https://proceedings.mlr.press/v81/buolamwini18a.html.

³¹ San Francisco Board of Supervisors, “Ordinance No. 107-19: Acquisition of Surveillance Technology,” File No. 190110, enacted May 14, 2019, accessed August 7, 2025, https://sfbos.org/sites/default/files/o0107-19.pdf.

³² Julia Stoyanovich and Mona Sloane, “We Assessed NYC’s Algorithm Bias Law. It’s Broken,” Wired, November 28, 2023, accessed August 7, 2025, https://www.wired.com/story/nyc-algorithm-bias-law-broken/.

³³ Matthew U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law & Technology 29, no. 2 (2016): 353-400.

³⁴ Andrew Guthrie Ferguson, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement (New York: NYU Press, 2017), 89-112.

³⁵ Stop LAPD Spying Coalition, “Automating Banishment: The Surveillance and Policing of Looted Land,” Coalition Report (Los Angeles: Stop LAPD Spying, 2021), 34-45, https://stoplapdspying.org/automating-banishment/.

³⁶ Nicholas Carlini et al., “Extracting Training Data from Large Language Models,” Proceedings of USENIX Security Symposium (2021): 2633-2650, https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting.

³⁷ Authors Guild v. OpenAI Inc., No. 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023).

³⁸ Ninareh Mehrabi et al., “A Survey on Bias and Fairness in Machine Learning,” ACM Computing Surveys 54, no. 6 (2021): 115:1-115:35, https://doi.org/10.1145/3457607.

³⁹ Alexandra Chouldechova, “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments,” Big Data 5, no. 2 (2017): 153-163, https://doi.org/10.1089/big.2016.0047.

⁴⁰ Sasha Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need (Cambridge: MIT Press, 2020), 134-156.

⁴¹ Inioluwa Deborah Raji et al., “Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing,” Proceedings of FAccT ’20 (2020): 33-44, https://doi.org/10.1145/3351095.3372873.

⁴² Manish Raghavan et al., “Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices,” Proceedings of FAccT ’20 (2020): 469-481, https://doi.org/10.1145/3351095.3372828.

⁴³ Meredith Whittaker, “The Steep Cost of Capture,” Interactions 28, no. 6 (2021): 50-55, https://doi.org/10.1145/3488666.

⁴⁴ Emily Birnbaum, “OpenAI Hires Ex-Schumer Aide to Lobby Congress,” Bloomberg, January 8, 2024, accessed August 7, 2025, https://www.bloomberg.com/news/articles/2024-01-08/openai-hires-schumer-aide.

⁴⁵ Ada Lovelace Institute, “Participatory Data Stewardship: A Framework for Involving People in the Use of Data,” Ada Lovelace Institute Report (London: Ada Lovelace Institute, 2021), 23-28, https://www.adalovelaceinstitute.org/report/participatory-data-stewardship/.

⁴⁶ Alex Hanna and Tina M. Park, “Against Scale: Provocations and Resistances to Scale Thinking,” arXiv preprint arXiv:2010.08850 (2020), https://arxiv.org/abs/2010.08850.

⁴⁷ Andrew D. Selbst et al., “Fairness and Abstraction in Sociotechnical Systems,” Proceedings of FAccT ’19 (2019): 59-68, https://doi.org/10.1145/3287560.3287598.

⁴⁸ Luca Bertuzzi, “Big Tech Dominance in AI Standards Bodies Raises Red Flags,” Euractiv, December 15, 2023, accessed August 7, 2025, https://www.euractiv.com/section/artificial-intelligence/news/big-tech-dominance-ai-standards/.

⁴⁹ European Commission, “Proposal for AI Act Impact Assessment,” SWD(2021) 84 final (Brussels: European Commission, 2021), 45-67, https://digital-strategy.ec.europa.eu/en/library/impact-assessment-artificial-intelligence-act.

⁵⁰ Max Bank and Bram Vranken, “Corporate Capture of AI Governance in the EU,” Corporate Europe Observatory Report (2024): 12-15, accessed August 7, 2025, https://corporateeurope.org/en/corporate-capture-ai-governance.

⁵¹ U.S. Government Accountability Office, “Artificial Intelligence: Agencies Have Begun Implementation but Need to Complete Key Requirements,” GAO-24-105980 (Washington: GAO, 2024), 23-45, https://www.gao.gov/products/gao-24-105980.

⁵² Executive Office of the President, “Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” Federal Register 88, no. 210 (2023): 75191-75226.

⁵³ Matt Sheehan, “China’s AI Regulations and How They Get Made,” Carnegie Endowment for International Peace (2023): 8-12, https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-pub-90117.

⁵⁴ Cyberspace Administration of China, “Provisions on the Management of Algorithmic Recommendations in Internet Information Services,” CAC Order No. 9 (2022), accessed August 7, 2025, http://www.cac.gov.cn/2022-01/04/c_1642894606364259.htm.

⁵⁵ International Telecommunication Union, “Measuring Digital Development: Facts and Figures 2023,” ITU Report (Geneva: ITU, 2023), 15-18, https://www.itu.int/en/ITU-D/Statistics/Pages/facts/default.aspx.

⁵⁶ African Union Commission, “The Continental Artificial Intelligence Strategy for Africa (2024-2030),” AU Report (Addis Ababa: African Union, 2024), 34-38, https://au.int/en/documents/20240208/continental-artificial-intelligence-strategy-africa.

⁵⁷ Maui Hudson et al., “Te Mana Raraunga—Māori Data Sovereignty Network,” in Indigenous Data Sovereignty: Toward an Agenda, eds. Tahu Kukutai and Stephanie Russo Carroll (Canberra: ANU Press, 2016), 157-175.

⁵⁸ Keoni Mahelona et al., “OpenAI’s Whisper Is Another Case Study in Colonisation,” Papa Reo Blog, September 24, 2022, accessed August 7, 2025, https://blog.papareo.nz/whisper-is-another-case-study-in-colonisation/.

⁵⁹ Convention Citoyenne pour le Climat, “Les Propositions de la Convention Citoyenne pour le Climat,” Final Report (Paris: CESE, 2020), 234-245, https://propositions.conventioncitoyennepourleclimat.fr/.

⁶⁰ Brazilian Ministry of Science, Technology and Innovation, “Brazilian Artificial Intelligence Strategy,” MCTI Report (Brasília: MCTI, 2021), 56-59, https://www.gov.br/mcti/pt-br/acompanhe-o-mcti/transformacaodigital/inteligencia-artificial.

⁶¹ Peter Marris and Martin Rein, Dilemmas of Social Reform: Poverty and Community Action in the United States, 2nd ed. (Chicago: University of Chicago Press, 1982), 123-145.
