Illusions of AI Sentience: The Hidden Human Workforce Behind the Machine

Article inspired by a visit to the exhibition “Data Dreams: Art and AI” at Sydney’s Museum of Contemporary Art, December 2025. Kevin Parker, Site Publisher.

An investigation into the global workforce that makes AI possible

On a white gallery wall in Sydney’s Museum of Contemporary Art, a simple question hangs in the air: When a system feels like it’s intelligent, what are we actually sensing? The exhibition, “Data Dreams: Art and AI” (21 November 2025 – 27 April 2026; I visited on 6 December 2025 and recommend it), invites visitors to contemplate the gap between the apparent autonomy of artificial intelligence and the hidden human labour that undergirds it. The text panel’s title, “Illusions of Sentience: Humans Behind the Machine,” could serve as an epitaph for our age of algorithmic enchantment.

We have grown accustomed to speaking of AI as though it were a kind of weather: something that arrives, changes things, passes on. ChatGPT “writes” our emails. Algorithms “decide” what we see. Machine learning models “understand” our preferences. This meteorological framing absolves us of examining how these systems are actually produced—and by whom. The exhibition text draws a line from Wolfgang von Kempelen’s eighteenth-century chess-playing automaton to the sophisticated interfaces of today, suggesting that the fundamental deception has merely been updated for the digital age. The illusion endures; only the architecture has changed.

The Original Turk

In 1770, Wolfgang von Kempelen unveiled his creation before the Habsburg court of Empress Maria Theresa: a life-sized wooden figure dressed in Ottoman robes and turban, seated behind an ornate cabinet surmounted by a chessboard. The automaton appeared capable of playing chess with remarkable skill, its mechanical arm reaching across the board with deliberate grace. For the next eighty-four years, the Mechanical Turk toured Europe and America, defeating opponents including Napoleon Bonaparte and Benjamin Franklin, baffling audiences who struggled to comprehend how gears and springs might replicate the distinctly human art of strategic thinking.

The secret, of course, was that they could not. A human chess master crouched within the cabinet, tracking the game on a miniature board and operating the Turk’s movements through an ingenious system of levers and magnets. Von Kempelen had created not artificial intelligence but artificial unintelligence—a system for concealing human cognition behind mechanical theatre. As cultural critic Jason Farago observed in Frieze, the Turk was “ahead of its time: why bother with machines when human labour is so much cheaper?”

The question reverberates through the centuries. When Amazon launched its crowdsourcing platform in 2005, it chose the name “Mechanical Turk” with apparent irony—a winking acknowledgment that the tasks routed through its system would be completed not by algorithms but by human workers, invisibly distributed across the globe. What seemed like clever homage has become uncomfortable prophecy.

The New Turks

“The great paradox of automation,” write anthropologist Mary L. Gray and computer scientist Siddharth Suri in their 2019 study Ghost Work, “is that the desire to eliminate human labor always generates new tasks for humans.” Their research reveals an archipelago of invisible labour scattered across the global economy: the content moderators who review millions of posts daily, the data labellers who annotate images so that machine vision can “see,” the quality assurance workers who correct the outputs of automated systems before they reach consumers.

Gray estimates that eight percent of Americans have worked at least once in this “ghost economy”—a figure that continues to climb. These workers make the internet seem smart, performing what she calls “high-tech piecework”: flagging violent content, transcribing audio, identifying objects in photographs, rating the quality of search results. They earn, on average, less than minimum wage. They have no benefits, no job security, no recourse when employers reject their work without payment. There are no labour laws governing what they do.

“Work isn’t disappearing in the age of AI,” observes Virginia Eubanks, author of Automating Inequality. “It is being hidden.” The hiding serves multiple purposes. It protects corporate valuations built on narratives of autonomous machine intelligence. It insulates affluent consumers from confronting the conditions under which their conveniences are produced. And it renders workers so dispersed and interchangeable that collective organisation becomes nearly impossible.

According to Lilly Irani, associate professor at the University of California San Diego, the computer science industry is deeply invested in producing what she calls “the image of technological magic.” Through her work developing Turkopticon—a platform allowing Amazon Mechanical Turk workers to share information and rate employers—Irani has documented how this investment shapes labour relations. “The visibility of workers could otherwise obstruct this favorable perception,” she explains, noting that investors are “significantly more likely to back businesses built on scalable technology, rather than unwieldy workforces demanding office space and minimum wages.”

The Conditions of Invisible Labour

In January 2023, TIME journalist Billy Perrigo published an investigation that would fundamentally alter public understanding of how large language models like ChatGPT are produced. His reporting revealed that OpenAI had contracted with Sama, a San Francisco-based outsourcing firm, to employ workers in Kenya who would review tens of thousands of text passages describing child sexual abuse, bestiality, murder, torture, and incest. Their task was to label this content so that the model could learn to filter toxicity from its outputs.

The workers were paid between $1.32 and $2.00 per hour, depending on seniority. One worker, speaking anonymously to TIME, described the experience as “torture”: “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” Another recounted suffering recurring visions after reading graphic descriptions of sexual violence. All four workers interviewed described being “mentally scarred” by the work.

The mathematics of this arrangement deserve scrutiny. OpenAI, valued at the time of Perrigo’s reporting at $29 billion, signed contracts worth a total of $200,000 with Sama for this work. The company’s mission statement declares its intention to “ensure artificial general intelligence benefits all of humanity.” The Kenyan workers making ChatGPT safe enough for public consumption represent, presumably, some portion of that humanity.

Perrigo’s investigation was not the first to expose conditions in Sama’s Kenyan operations. A year earlier, he had reported on content moderators employed by the same firm to filter graphic content for Facebook—images and videos of executions, rape, and child abuse—for $1.50 per hour. That investigation became a finalist for the Orwell Prize and catalysed an ongoing lawsuit against Meta in Kenyan courts.

Kate Crawford, author of Atlas of AI, situates this labour within a broader framework of extraction. Just as the AI industry extracts rare earth minerals from the planet’s crust and personal data from billions of users, it extracts human cognition from workers in the Global South. “AI is a technology of extraction,” Crawford writes, describing how systems marketed as intelligent depend fundamentally upon “the atmosphere, the oceans, the earth’s crust, the deep time of the planet, and the brutal impacts on disadvantaged populations around the world.”

The myth that AI would make human labour “frictionless,” Crawford argues, merely conceals where the friction has been displaced. “The hardship is pushed away, rendered invisible, hidden behind systems that are claimed to be working automatically.” Reinforcement Learning from Human Feedback (RLHF)—the technique that enables models like ChatGPT to produce responses aligned with human preferences—represents “an enormous amount of workers, generally in the Global South, often being paid well below poverty levels.”

The Philosophy of the Mask

Why do we prefer the illusion? The question animates much of the current scholarship on AI and labour, but it reaches beyond economics into something approaching collective psychology. There is a certain pleasure in believing that the machine thinks—a frisson of the uncanny that enhances rather than diminishes our experience. The Mechanical Turk’s audiences in 1770 wanted to believe; their twenty-first-century descendants, scrolling through AI-generated content, seem no different.

The corporate incentives for maintaining this illusion are obvious. A company valued on the premise of autonomous machine intelligence cannot readily acknowledge how much its products depend on human labour without threatening its narrative—and its stock price. But the preference extends beyond shareholders. Users who employ AI assistants to write their emails, plan their schedules, or answer their questions seem largely uninterested in knowing that these systems were shaped by workers earning poverty wages to review traumatic content.

“Human computation currently relies on worker invisibility,” Irani and Silberman wrote in their foundational 2013 paper introducing Turkopticon. The architecture of platforms like Amazon Mechanical Turk is designed to route tasks to distributed workers through APIs that shield requesters from any encounter with the humans completing their work. The worker becomes an abstraction, a line item in a database, a component in a pipeline—never a person with a name, a family, a stake in the quality of their conditions.

This architectural invisibility serves an ideological function. It permits us to imagine that we live in a world of autonomous systems and intelligent machines, rather than one in which the drudgery of cognitive labour has merely been redistributed along familiar lines of global inequality. The iPhone in your pocket, Crawford reminds us, contains cobalt mined by child labourers in the Democratic Republic of Congo. The chatbot on your screen was trained by workers in Nairobi reviewing descriptions of child sexual abuse for less than two dollars an hour. The material conditions of our technological conveniences remain carefully obscured.

When the Ghosts Become Visible

On Labour Day 2023, more than 150 content moderators working on Facebook, TikTok, and ChatGPT gathered in Nairobi to vote on establishing the first African Content Moderators Union. The meeting represented a watershed moment: workers from competing platforms recognising their common conditions and common interests. With support from the Communications Workers’ Union of Kenya (COWU), they pledged to fight for better wages, improved mental health support, and protection from arbitrary dismissal.

“For too long we, the workers powering the AI revolution, were treated as different and less than moderators,” declared Richard Mathenge, a former ChatGPT content moderator who has since been named one of TIME’s 100 most influential people in AI. “Our work is just as important and it is also dangerous. We took an historic step today. The way is long but we are determined to fight on so that people are not abused the way we were.”

The union’s emergence followed a wave of legal action sparked by Perrigo’s journalism. Kenyan courts have now ruled that Meta can be sued in Kenya despite having no official presence there—a decision with potentially far-reaching implications for the global outsourcing model. The court ordered Meta to provide “proper medical, psychiatric and psychological care” for content moderators, and prevented the company from switching contractors in ways that would circumvent the lawsuit.

“This ruling shows that despite Meta’s eye-watering resources, it can be beaten,” observed Martha Dark, co-executive director of Foxglove, the nonprofit supporting the moderators’ legal challenge. “Meta can no longer hide behind outsourcers to excuse the exploitation and abuse of its content moderators.”

The implications extend well beyond Kenya. The case could establish that technology companies bear responsibility for working conditions throughout their supply chains—a principle that would fundamentally reshape how AI labour is sourced and compensated globally.

The Beginnings of Governance

Governments have begun, haltingly, to respond. The European Union’s AI Act, which entered into force in August 2024, represents the world’s first comprehensive horizontal legal framework for regulating artificial intelligence. The legislation classifies AI systems by risk level and imposes corresponding requirements on developers and deployers. Significantly, it permits member states to “maintain or introduce laws that are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers.”

The EU’s Platform Workers Directive, confirmed in March 2024, specifically addresses algorithmic management in the gig economy—making the use of algorithms in human resources management more transparent and ensuring workers have the right to contest automated decisions. It is the first piece of EU legislation to regulate algorithmic management in the workplace and set minimum standards for platform workers.

Australia’s National AI Plan, released in December 2025, takes a different but complementary approach. The plan explicitly acknowledges that “workers and unions will play an important role in shaping the uptake and adoption of AI” and commits to ensuring that adoption is “transparent, safe and responsibly managed.” The Australian Council of Trade Unions welcomed the plan as containing “clear recognition of the need to put working people’s rights, wages, conditions, and jobs at the centre of the future roll-out of AI technology.”

“Despite the many promises by large multinational tech companies,” the ACTU observed, “so far AI has done little to improve the quality of working conditions for most Australians. Instead, too many big businesses have seized on AI technology to try and replace workers and to intrusively monitor their employees by placing them under Orwellian levels of surveillance.”

The Australian plan requires that AI adoption be “consultative, transparent, and fair—meaning workers and unions should be involved early in decisions about AI use.” Whether these principles will survive contact with the economic pressures driving AI deployment remains to be seen.

What Changes When We Acknowledge?

The Sydney exhibition poses its questions to visitors as though they were open: What changes when we acknowledge the human workers? How might our ideas of ‘automation’ and ‘artificial intelligence’ shift if we see the people and decisions behind the screen? But the answers, such as they are, have already begun to emerge from the factories and courtrooms and union halls where this acknowledgment is being forced into being.

First, acknowledgment changes responsibility. When we recognise that AI systems are produced by human labour, we can no longer treat the conditions of that labour as someone else’s problem. Meta cannot claim that Kenyan content moderators are Sama’s responsibility, not theirs. OpenAI cannot outsource accountability along with its data labelling. The fiction of autonomous systems carrying out autonomous work dissolves.

Second, acknowledgment changes compensation. Gray and Suri propose what they call “portable benefits”—additional payments beyond contract wages, pooled from workers, governments, and corporations, to provide social security, healthcare, and retraining for the informal workforce. Such proposals remain speculative, but the underlying principle is clear: if these workers are essential to the AI economy, that economy must bear the costs of their welfare.

Third, acknowledgment changes design. Irani’s work on Turkopticon demonstrates that systems can be built to make labour visible rather than to conceal it. Platforms could be required to disclose how tasks are distributed, what workers are paid, what recourse they have when disputes arise. The architecture of invisibility is a choice, not a necessity.

Finally, acknowledgment changes narrative. We can no longer speak of AI as a kind of weather—something that arrives, changes things, passes on. These systems are built by specific corporations, funded by specific investors, and produced by specific workers whose labour carries specific costs. The technology is not neutral. The magic is not real. The Turk, as it turns out, has always been us.

Crawford concludes her Atlas of AI with a warning and an exhortation. The planet has reached a limit; the extractive logics of digital capitalism cannot continue indefinitely. “At every step of the way,” she writes, “we have to ask, whose civic space is being defended? Whose rights are being recognized? What forms of discrimination are being calcified into technical systems?”

These questions are not rhetorical. They are political, economic, and ultimately moral. The gallery wall in Sydney offers a starting point for reflection, but reflection alone will not suffice. The workers in Nairobi and Manila and Caracas who make our AI possible are organising, litigating, demanding recognition. The question is not whether we will see them, but whether we will see them in time.

The ghosts, it seems, are becoming visible. What we do when we see them will define the AI age to come.

Article created with the help of Claude AI. Please check that all information is accurate to your own satisfaction before reproduction.

Notes

1. Mary L. Gray and Siddharth Suri, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass (Boston: Houghton Mifflin Harcourt, 2019), 8.

2. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven: Yale University Press, 2021), 64.

3. “Wolfgang von Kempelen,” Wikipedia, accessed December 2025.

4. “The Mechanical Turk: AI Marvel or Parlor Trick?” Britannica, accessed December 2025.

5. Jason Farago, “Mechanical Turk,” Frieze, accessed December 2025.

6. Gray and Suri, Ghost Work, 12.

7. Virginia Eubanks, endorsement in Gray and Suri, Ghost Work, front matter.

8. Billy Perrigo, “Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic,” TIME, January 18, 2023.

9. Ibid.

10. Perrigo, “Exclusive: OpenAI Used Kenyan Workers.”

11. Billy Perrigo, “Inside Facebook’s African Sweatshop,” TIME, February 2022.

12. Crawford, Atlas of AI, 28.

13. Crawford, Atlas of AI, 64.

14. Kate Crawford, quoted in Robert F. Kennedy Human Rights, “Atlas of AI: Examining the Human and Environmental Costs of Artificial Intelligence,” November 2024.

15. Lilly C. Irani and M. Six Silberman, “Turkopticon: Interrupting Worker Invisibility in Amazon Mechanical Turk,” Proceedings of CHI 2013, ACM, 2013.

16. “Ghost Work,” Wikipedia, citing Irani’s research, accessed December 2025.

17. “150 African Workers for AI Companies Vote to Unionize,” TIME, May 1, 2023.

18. “Facebook, TikTok and ChatGPT content moderators in Kenya finally unionise,” Citizen Digital, May 1, 2023.

19. Richard Mathenge, quoted in TIME, May 1, 2023.

20. Martha Dark, quoted in Computer Weekly, 2024.

21. European Union, “AI Act,” Regulation (EU) 2024/1689, Official Journal, July 12, 2024.

22. DLA Piper, “EU AI Act to Enter into Force: Implications for Employers,” 2024.

23. European Council, “Platform Workers Directive,” press release, March 11, 2024.

24. Australian Government, Department of Industry, Science and Resources, “National AI Plan,” December 2, 2025.

25. Australian Council of Trade Unions, “AI Plan Puts Workers at the Centre,” media release, December 2025.

26. MinterEllison, “Australia Introduces a National AI Plan: Four Things Leaders Need to Know,” December 2025.

27. Gray and Suri, Ghost Work, 215.

28. Crawford, Atlas of AI, 230.

29. “Illusions of Sentience: Humans Behind the Machine,” exhibition text, “Data Dreams: Art and AI,” Museum of Contemporary Art, Sydney, 2025.
