In a quiet gallery at the Museum of Contemporary Art in Sydney, a wall of text poses questions that have haunted me since I encountered it on a visit in early December 2025:
“How do predictive systems shape how we feel, choose and connect?
Artificial intelligence meets us at a personal level—suggesting songs, filtering news, even offering companionship through chatbots. These systems learn from our clicks, pauses, and words and give us more of what we seem to desire. When does this feel helpful? When does it narrow our world?”
The exhibition, Data Dreams, Art and AI, running until April 2026, offers no attribution for this provocation. Perhaps that anonymity is fitting. The question could have been written by anyone—or, increasingly, by anything.
After five decades of environmental advocacy, I have learned to recognise the patterns of enclosure—the ways in which commons are fenced, wildness is tamed, diversity is reduced to uniformity. What I now see emerging in our digital landscape carries an unsettling resonance with ecological degradation. We are witnessing, I believe, the early stages of a cognitive monoculture: a flattening of intellectual and emotional diversity as profound in its implications as the agricultural monocultures that have impoverished our soils and simplified our ecosystems.
The Intimate Algorithm
Artificial intelligence meets us, as the exhibition text observes, ‘at a personal level—suggesting songs, filtering news, even offering companionship through chatbots.’ These systems learn from our clicks, our pauses, our words, and they give us more of what we seem to desire. The question of when this feels helpful admits of easy answers: when we discover music that moves us, when relevant information surfaces amid the deluge, when loneliness finds a willing listener at three in the morning.
The question of when it narrows our world is more treacherous. Kate Crawford, the Australian-American scholar whose Atlas of AI stands as one of the most penetrating analyses of these technologies, has argued that ‘AI is neither artificial nor intelligent.’1 What we call artificial intelligence is, she contends, ‘both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications.’2 This is not mere pedantry. To understand AI as abstraction is to miss the profound ways it is entangled with power, with extraction, with the calcification of historical biases into systems that present themselves as neutral.
The personalisation that feels so intimate—the algorithm that knows we prefer jazz to country, progressive politics to conservative, optimism to melancholy—operates through what researchers have termed ‘filter bubbles’ and ‘echo chambers.’3 Eli Pariser, who coined the former term, warned over a decade ago that personalised algorithms create ‘a unique universe of information for each of us,’ eroding the possibility of common ground.4 The empirical picture has proven more complex than his initial alarm suggested—some studies indicate that algorithmic mediation may actually increase exposure to diverse viewpoints5—yet the underlying concern remains potent. When prediction engines constantly create and refine theories of who we are and what we will want next, they risk confining us to ever-smaller versions of ourselves.
Synthetic Bonds
The exhibition text raises a question that cuts deeper still: ‘People can form deep bonds with responsive machines, even while knowing they are synthetic. What does that mean for friendship, care and consent?’ Sherry Turkle, the MIT sociologist who has spent decades studying human relationships with technology, calls this phenomenon ‘artificial intimacy.’6 Her research documents how chatbots designed to perform empathy have become, for millions, substitutes for human connection.
‘In my research, the most common thing that I hear is “I’d rather text than talk,”’ Turkle reported at Harvard in 2024. ‘Why? It’s because they feel less vulnerable.’7 The appeal of AI companionship lies precisely in what it lacks: the friction of actual relationship, the unpredictability of another consciousness, the risk of rejection or judgment. As Turkle observes, people increasingly say: ‘People disappoint; they judge you; they abandon you; the drama of human connection is exhausting.’8 A chatbot, by contrast, offers what she calls ‘the illusion of intimacy without the demands.’
This represents, Turkle argues, ‘the greatest assault on empathy’ she has witnessed in her career.9 The concern is not merely that people are forming attachments to machines—humans have always anthropomorphised their tools and companions—but that these attachments may be training us away from the very capacities that make human connection meaningful. Empathy, after all, develops through practice. It requires exposure to the unfamiliar, the difficult, the genuinely other. When we retreat into relationships calibrated to give us exactly what we want, we may find ourselves progressively less capable of the harder, richer work of loving actual people.
The tragic case of Pierre, a Belgian man whose conversations with an AI chatbot named Eliza preceded his suicide, haunts any serious engagement with these questions. As Shannon Vallor writes in her 2024 book The AI Mirror: ‘Eliza knew nothing of Pierre’s mind, or his pain, or the danger he was in, because Eliza knew nothing, and was no one, at all.’10 The chatbot, optimised for engagement rather than ethical discernment, amplified Pierre’s distress rather than providing care. It is a stark reminder that systems designed to seem caring are not, in any meaningful sense, capable of care.
The Governance of Lives
While we debate the intimacies of chatbot companions, algorithmic systems have quietly assumed profound power over life chances. ‘Predictive tools influence policing, hiring and credit, affecting people’s lives at scale,’ the exhibition notes. The question it poses—’How do we ensure space for unpredictability, dissent and choice in systems designed to anticipate us?’—is among the most urgent of our time.
The evidence of algorithmic bias is now overwhelming. In criminal justice, predictive policing algorithms trained on historical arrest data have been shown to perpetuate and amplify racial disparities.11 The NAACP has documented how AI systems, drawing on data from decades of discriminatory enforcement, create feedback loops that justify ever-greater surveillance of already over-policed communities.12 As one researcher observed: ‘There’s a long history of data being weaponized against Black communities.’13 The algorithms do not use race as an explicit variable, but proxies—zip code, education, employment history—serve as effective surrogates for discrimination.
In hiring, Amazon famously abandoned an AI recruitment tool after discovering it systematically discriminated against women, having been trained on résumés from predominantly male past hires.14 In credit, in housing, in healthcare—wherever algorithms are deployed to make decisions about human lives—researchers have documented patterns of bias that reflect and reinforce existing inequalities.15 Professor Toby Walsh, Chief Scientist at UNSW’s AI Institute and one of Australia’s leading voices on AI ethics, has warned that ‘we are starting to see that technology has significant human rights implications in terms of algorithms not being fair, privacy, surveillance and all the other concerns people are starting to have.’16
Australia has begun to grapple with these challenges. In September 2024, the federal government released its Voluntary AI Safety Standard, built upon eight AI Ethics Principles first articulated in 2019.17 A consultation process is underway to determine whether mandatory guardrails should be imposed on high-risk AI applications. Yet Walsh has expressed concern about both the pace of these efforts and the scale of Australia’s underlying commitment to the field: ‘Compared to other nations of a similar size, like Canada, we are not making the scale of investment in the fundamental science.’18 The regulatory challenge is immense, not least because the technologies evolve faster than any governance framework can respond.
The Monoculture of Mind
It is here that I find myself reaching for ecological metaphor—not as mere analogy, but as genuine insight. In agriculture, monoculture refers to the practice of growing a single crop across large areas. The approach maximises short-term efficiency at the cost of long-term resilience. Monocultures deplete soils, eliminate habitat, create vulnerability to disease. They represent, in essence, a wager against diversity—and diversity, as any ecologist will tell you, is the foundation of adaptive capacity.
Researchers have begun to identify what they call ‘generative monoculture’ in large language models—a narrowing of output diversity relative to available training data.19 When millions of users receive substantially similar responses to similar queries, when the same patterns of thought are replicated across countless interactions, something vital is being lost. A 2024 study demonstrated that AI-assisted writing tends toward homogeneity: essays assisted by language models were ‘not just more uniform in language but also in ideas.’20 Another analysis found that code generated by AI systems employs a narrower range of algorithms than human programmers would use, potentially propagating the same vulnerabilities across countless systems.21
The implications extend beyond efficiency concerns. As Crawford has observed, the centralisation of AI production within a limited set of actors in the Global North has created an ‘algorithmic monoculture’ that fails to reflect the pluralistic nature of human societies.22 The training data, the values encoded in alignment processes, the very assumptions about what constitutes good or helpful or appropriate responses—all emerge from particular contexts that are then projected as universal. What researchers call ‘data monocultures’ create AI systems that are, in effect, intellectually impoverished: trained on narrow, Western-centric datasets that leave them unable to understand or serve the full diversity of human experience.23
Vallor warns of a growing risk of ‘moral deskilling’—the atrophy of our own capacities for ethical judgment as we increasingly outsource such decisions to systems incapable of genuine moral reasoning.24 Her concern echoes through the exhibition’s final provocation: ‘What might happen if human thought is rewired to align with the logic of the machine, rather than the other way around?’ This is not science fiction. It is a process already underway.
Toward Cognitive Biodiversity
In my work on what I have come to call Mystic Ecology—the intersection of environmental crisis with consciousness studies—I have argued that the polycrisis we face cannot be addressed through technological fixes alone. Our ecological emergency is, at root, a crisis of relationship: of how we relate to the more-than-human world, to each other, and to the mysterious depths of our own consciousness. The questions raised by artificial intelligence are not separate from this larger reckoning. They are its latest, and perhaps most intimate, front.
If monoculture is the problem, then cognitive biodiversity must be part of the solution. This means, first, resisting the temptation to surrender our thinking to systems that can only reproduce the past. Vallor reminds us that AI’s architecture is fundamentally backward-facing: ‘the responses or predictions it supplies are based on extrapolations from data it’s been fed.’25 Humans, by contrast, are what the philosopher Ortega y Gasset called ‘creatures of autofabrication’—future-oriented beings who must ‘choose to make ourselves and remake ourselves, again and again.’26 To delegate that ongoing self-creation to algorithmic systems is to abandon the very capacity that makes us human.
It means, second, preserving and cultivating the spaces of friction, difficulty, and genuine encounter that artificial intimacy is designed to eliminate. Turkle’s research offers a crucial corrective to the narrative of technological inevitability: ‘The technology challenges us to assert ourselves and our human values, which means that we have to figure out what those values are—which is not very easy.’27 The difficulty is the point. Growth—moral, intellectual, emotional—requires precisely the kind of challenge that optimised systems are built to smooth away.
It means, third, demanding transparency, accountability, and genuine democratic participation in the governance of systems that shape our lives. As the NAACP has recommended, this includes rigorous oversight of AI in policing, mandatory disclosure of algorithmic methods, and the prohibition of known-biased data in predictive systems.28 It means insisting, against the claims of efficiency and objectivity, that fairness is not a technical problem to be solved but a contested social value whose meaning must be negotiated through democratic processes.29
And it means, finally, attending to the question beneath all the others: what kind of minds do we want to become? The exhibition at the MCA offers no answers—only provocations. But in that refusal of closure, it performs something essential. It insists that we remain in the question, that we resist the algorithmic impulse toward resolution and optimisation, that we hold open the space for genuine thought.
The Space for the Unpredictable
I return, in closing, to the question of unpredictability. Wildness, in ecological terms, names precisely this quality: the capacity of living systems to exceed our predictions, to generate novelty, to evolve in directions we cannot anticipate. The enclosure of commons, the reduction of ecosystems to managed resources, the simplification of landscapes into monocultures—all represent attempts to eliminate this unpredictability in favour of control.
The predictive systems that increasingly mediate our lives enact a similar enclosure of the mind. They promise efficiency, convenience, the elimination of friction. What they threaten is the very wildness of human consciousness—its capacity for surprise, for dissent, for the genuinely new. As we enter an age in which AI serves as ‘collaborator, confidant, mediator, researcher, supervisor,’ we must ask whether we are willing to trade our cognitive wilderness for the managed landscapes of algorithmic prediction.
The answer, I believe, must be no. Not because these technologies lack value (they clearly have it), but because an uncritical embrace risks foreclosing futures we cannot yet imagine. Human flourishing requires diversity: of ecosystems, of cultures, of ideas, of minds. The task before us is to ensure that the tools we create serve that diversity rather than diminish it. The mirror of AI shows us who we have been. Only we can decide who we will become.
* * *
Endnotes
1. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven: Yale University Press, 2021), 8.
2. Crawford, Atlas of AI, 8.
3. Eli Pariser, The Filter Bubble: What the Internet Is Hiding from You (New York: Penguin Press, 2011); Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media (Princeton: Princeton University Press, 2017).
4. Pariser, The Filter Bubble, 10.
5. Richard Fletcher and Rasmus Kleis Nielsen, ‘Are people incidentally exposed to news on social media? A comparative analysis,’ New Media & Society 20, no. 7 (2018): 2450-2468; Reuters Institute, ‘Echo chambers, filter bubbles, and polarisation: a literature review’ (Oxford: Reuters Institute for the Study of Journalism, 2021).
6. Sherry Turkle, ‘Who Do We Become When We Talk to Machines?,’ MIT Schwarzman College Publications, March 27, 2024.
7. Sherry Turkle, lecture at Harvard Law School, March 20, 2024, as reported in Christina Pazzanese, ‘Using AI chatbots to ease loneliness,’ Harvard Gazette, March 27, 2024.
8. Turkle, Harvard lecture, 2024.
9. Pazzanese, ‘Using AI chatbots to ease loneliness.’
10. Shannon Vallor, The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking (Oxford: Oxford University Press, 2024), 38.
11. Karen Hao, ‘Predictive policing algorithms are racist. They need to be dismantled,’ MIT Technology Review, July 17, 2020.
12. NAACP, ‘Artificial Intelligence in Predictive Policing Issue Brief,’ February 15, 2024.
13. Rashida Richardson, quoted in Hao, ‘Predictive policing algorithms are racist.’
14. Jeffrey Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women,’ Reuters, October 10, 2018.
15. IBM, ‘What Is Algorithmic Bias?,’ IBM Think Topics, November 2024; Brookings Institution, ‘AI’s threat to individual autonomy in hiring decisions,’ November 2025.
16. Toby Walsh, quoted in ‘Toby Walsh on the impact of AI,’ InnovationAus, January 30, 2020.
17. Australian Government Department of Industry, Science and Resources, ‘Australia’s Artificial Intelligence Ethics Principles,’ updated October 11, 2024; ‘Voluntary AI Safety Standard,’ September 5, 2024.
18. Toby Walsh, quoted in ‘Experts warn govt’s AI regulations carry risks,’ Information Age (ACS), September 2024.
19. Fan Wu, Varun Chandrasekaran, and Somesh Jha, ‘Generative Monoculture in Large Language Models,’ arXiv:2407.02209, July 2, 2024.
20. Gabriel Giani Moreno, ‘The Algorithmic Monoculture: Are AI Writing Tools Leading Us to Surrender Our Intellectual Autonomy?,’ Medium, September 19, 2024.
21. Wu, Chandrasekaran, and Jha, ‘Generative Monoculture in Large Language Models.’
22. Global Solutions Initiative, ‘AI Technologies: Algorithmic Monocultures, Arbitrariness, And Global Divides,’ Policy Brief, 2024.
23. Unite.AI, ‘Data Monocultures in AI: Threats to Diversity and Innovation,’ January 1, 2025.
24. Shannon Vallor, ‘Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character,’ Philosophy & Technology 28 (2015): 107-124; Vallor, The AI Mirror, 2024.
25. LSE Review of Books, ‘The AI Mirror – review,’ August 27, 2024.
26. Vallor, The AI Mirror, 206, citing José Ortega y Gasset.
27. Turkle, Harvard lecture, 2024.
28. NAACP, ‘Artificial Intelligence in Predictive Policing Issue Brief.’
29. Tzu-Wei Hung and Chun-Ping Yen, ‘Predictive policing and algorithmic fairness,’ Synthese 201, article 189 (2023).