As of November 2025, the world is at a critical inflection point in the attention economy: billions of users remain tethered to deliberately addictive social feeds, while governments—from Beijing to Brussels to Canberra—are rolling out serious regulations to curb harms and protect minors. At the same time, the next wave of users in Africa, Asia and Latin America is coming online fast, and generative AI is already reshaping what we see and how we engage. What once seemed like an innocent scroll is now a global battleground for our time, attention and mental health — and the fight for a healthier digital future is just beginning. By the way, Sean Parker, the former Facebook president quoted below, is no relation, to the best of my knowledge! Kevin Parker, Site Publisher
How Much of Your Time and Conscious Attention Can We Consume?
In a closed-door talk a few years after Facebook’s hyper-growth phase, the company’s first president, Sean Parker, described the core design question that drove early product decisions: How do we consume as much of your time and conscious attention as possible? Likes, tags, notifications—the “social-validation feedback loop”—were engineered to keep you hooked, he said. “God only knows what it’s doing to our children’s brains.” Axios
That candid admission frames the central paradox of our era: social media connects billions, mobilizes movements, and builds livelihoods—while also feeding an addictive compulsion we’ve named doomscrolling. We flick through infinite, personalized feeds—especially bad news and outrage—long past the point of benefit. It feels like vigilance; it’s often just anxiety, by design. WIRED
This article looks under the hood of the attention economy—how the algorithms work, who profits, which rules are coming, and what a saner decade might look like as the next billion users come online across Africa, Asia, and Latin America. It closes with concrete steps that individuals, platforms, and regulators can take now.
The habit that hunts you
“Doomscrolling” crystallized during the pandemic, but the mechanism pre-dates COVID-19: a bottomless feed tuned to our negativity bias, variable rewards (“pull to refresh”), and push alerts calibrated to nudge. The result isn’t trivial: repeated exposure to alarming content is associated with elevated stress, anxiety, and a distorted “mean world” perception—feeling the world is more dangerous than it is. WIRED
Public-health bodies increasingly treat youth social media as a risk environment requiring guardrails. The American Psychological Association’s 2023 advisory is blunt: social media can be beneficial, but safe use must be scaffolded with literacy, maturity, and design protections; robust independent safety analyses are still lacking. The U.S. Surgeon General has since urged warning labels and design curbs (e.g., limiting autoplay and push notifications) while Congress debates child-safety bills. American Psychological Association
The industry knows how sticky the feeds are. Parker’s remarks weren’t outliers. Former Facebook executive Chamath Palihapitiya has lamented “dopamine-driven feedback loops” that “destroy” civil discourse—another insider acknowledging that maximizing engagement can reward the worst of human impulses. And the long-standing rumor that tech elites keep their own kids away from screens is only partly myth: Steve Jobs himself told The New York Times reporter Nick Bilton, “We limit how much technology our kids use at home,” while Bill Gates has said his children didn’t get phones before 14 and had strict time limits. (Recent reporting cautions against caricature—many tech parents aim for moderation rather than bans—but the point stands: those closest to the machine handle it with tongs.) World Economic Forum; The Independent
The new monopolists of attention
More than two-thirds of humanity uses social media each month; seven platforms now claim a billion-plus users. The top tier—Meta (Facebook, Instagram, WhatsApp), YouTube, TikTok, and WeChat—dominate the global attention market and the advertising flows that fund it. That concentration delivers not only profits but outsized political influence. In Washington alone, Big Tech poured at least $17.5 million into lobbying in Q1 2025; Meta routinely tops the quarterly spend. At year scale, the industry’s lobbying runs into the tens of millions as it shapes debates over youth safety, privacy, AI, and Section 230. Issue One
The underlying incentive remains unchanged: the longer you scroll, the more ads can be sold. Engagement is the north star. When algorithms learn that anger, sensationalism, or tribal conflict keep us on-platform, they tend to amplify more of the same unless checked by design or policy. The result isn’t merely wasted hours; it’s a reweighted information diet where extreme, emotionally arousing content crowds out the rest—after which doomscrolling becomes a reflex, not a choice. WIRED
The South logs on
For a decade the growth story has shifted south and east. Social adoption in the Global North is mature; the next wave of users comes from Africa, South Asia, Southeast Asia, and Latin America—young, mobile-first, and hungry for opportunity. In South Africa, Kenya, and Nigeria, average time spent on social media rivals or exceeds Western benchmarks: recent DataReportal-based summaries put Kenya at ~3h43m/day, South Africa ~3h37m, and Nigeria ~3h23m–3h49m depending on the dataset. TikTok’s penetration is rising fast in South Africa, while WhatsApp and Facebook remain pervasive. Bizcommunity; Statbase; Intelpoint
The upside here is enormous: low-friction entrepreneurship on Instagram or WhatsApp; creator economies and influencer marketing; community organizing where traditional media is constrained. But the risks are intensified by gaps in local-language moderation, low digital literacy, and weak privacy enforcement. When your first internet experience is a For You feed tuned for maximum engagement, the slope toward doomscrolling is as steep in Lagos or Lahore as it is in Los Angeles—sometimes steeper. DataReportal – Global Digital Insights
Regulation is finally catching up—unevenly
After a long laissez-faire period—“move fast and break things”—governments are asserting themselves, though with very different models.
European Union (transparency and accountability). The Digital Services Act (DSA) is the most comprehensive regime yet. It bans targeted ads to minors and the use of sensitive data for targeting, requires “very large” platforms to assess and mitigate systemic risks (including harms to children), mandates algorithmic transparency, and obliges platforms to offer non-profiling, chronological feeds. In July 2025 the Commission published specific guidance on protecting minors under the DSA—nudging design away from dark patterns toward age-appropriate defaults. Fines can reach 6% of global revenue. digital-strategy.ec.europa.eu
United States (fragmented, litigious). Federal action remains stalled; state efforts dominate and are colliding with the First Amendment. In April 2025 a federal judge permanently struck down Arkansas’s pioneering age-verification law as unconstitutional. Utah’s parental-consent/curfew rules for under-18s remain a bellwether; more states are testing the limits, and industry groups like NetChoice continue to sue. The Surgeon General has called for cigarette-style warning labels and curbs on autoplay and notifications—signaling a public-health frame even as Congress dithers. Arkansas Advocate; Bloomberg Law
Australia (hard age lines). From 10 December 2025, selected platforms—among them Facebook, Instagram, TikTok, YouTube, Snapchat, X, Reddit, and Kick—must ban accounts for under-16s or face fines up to roughly A$50 million. Regulators say companies already have enough data to detect likely under-age users without invasive “age-proofing” for everyone; critics warn of privacy risks, evasion via VPNs, and harms to vulnerable teens who rely on online communities. Either way, it will be the world’s strongest test of an under-16 social ban at platform scale. eSafety Commissioner; AP News
China (curfews and clamps). Beijing has pioneered the strictest approach to youth screen time. Since 2021, online gaming for minors has been capped at three hours per week (one hour on Fridays, weekends, and public holidays); in 2023 the Cyberspace Administration proposed smartphone “youth mode” limits—two hours/day for 16–17-year-olds, less for younger children—and overnight curfews. Douyin, the domestic TikTok, already blocks under-14s after 40 minutes/day. The policy mix blends public-health paternalism with content control. ABC
No single model will “solve” doomscrolling. But the DSA’s choice and transparency, the U.S.’s constitutional guardrails, Australia’s age-line experiment, and China’s time caps are now pressuring platforms from multiple angles—changing defaults, revealing black-box logic, and redefining duty of care for minors. digital-strategy.ec.europa.eu; Arkansas Advocate
AI: the accelerant—and the potential brake
Recommendation engines are already machine-learning systems tuned for engagement. Generative AI adds fresh volatility: synthetic video and text at trivial cost, bot swarms that mimic human accounts, and hyper-personalized persuasion. Expect misinformation at scale—and counter-AI from platforms to detect it—an arms race likely to dominate the next five years.
But AI can also help undo the harms it helped create. Concretely:
- Algorithmic choice and “nutrition labels.” Let users choose feed styles (chronological; friends-first; diverse-viewpoint) and see why a post was recommended (“because you watched…”)—both required or encouraged under emerging rules. digital-strategy.ec.europa.eu
- Well-being optimization. Instead of optimizing for time-on-site, give users toggles to optimize for “fewer anxiety triggers,” “less sensational content,” or “more civic/educational posts”—an explicit objective the recommender must honor.
- Personalized safety. On-device models could learn a user’s sensitivities (e.g., self-harm, eating-disorder triggers) and blur or downrank accordingly—private by design, adjustable by the user, audited by independent researchers under DSA data-access provisions. digital-strategy.ec.europa.eu
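The "well-being optimization" idea above amounts to letting the user pick the objective function the recommender ranks against. A minimal sketch, assuming hypothetical post attributes and weights (no platform exposes this API today):

```python
# Hypothetical sketch: a feed re-ranker that honors a user-chosen
# well-being objective instead of raw engagement. All field names and
# weights here are illustrative assumptions, not any platform's real API.

from dataclasses import dataclass

@dataclass
class Post:
    id: str
    engagement_score: float   # platform's predicted time-on-post
    sensationalism: float     # 0.0 (calm) .. 1.0 (outrage bait)
    educational: float        # 0.0 .. 1.0

# User-selectable objectives: each maps a post to the score the
# recommender must honor, as described in the bullets above.
OBJECTIVES = {
    "default":          lambda p: p.engagement_score,
    "less_sensational": lambda p: p.engagement_score * (1.0 - p.sensationalism),
    "more_educational": lambda p: 0.3 * p.engagement_score + 0.7 * p.educational,
}

def rank_feed(posts, objective="default"):
    """Return posts ordered by the user's chosen objective, best first."""
    score = OBJECTIVES[objective]
    return sorted(posts, key=score, reverse=True)

feed = [
    Post("calm-explainer", engagement_score=0.5, sensationalism=0.1, educational=0.9),
    Post("outrage-bait",   engagement_score=0.9, sensationalism=0.9, educational=0.1),
]

print([p.id for p in rank_feed(feed)])                      # engagement wins
print([p.id for p in rank_feed(feed, "less_sensational")])  # calmer post wins
```

The point of the sketch is that the switch is technically trivial: the ranking machinery stays the same, and only the objective changes hands from the platform to the user.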
Whether this happens depends less on technical feasibility than on incentives. Right now, the ad model rewards maximal scrolling. Which brings us back to power and money.
Follow the money (and the lobbying)
If you want to understand why design hasn’t changed faster, look at the ledger and the legislature. Time spent equals impressions equals revenue; anything that reduces doomscrolling reduces income unless the business model changes. The major platforms accordingly invest heavily in shaping rules: Meta, Alphabet, Amazon and peers reliably top K Street spend; in 2025 Meta again led the pack. Policy portfolios span AI, privacy, content moderation, digital taxes, and Section 230; child-safety bills such as KOSA become bargaining chips in broader negotiations. Axios
That influence isn’t unlimited—Europe has already shown it can impose teeth—but it will slow and shape reforms, especially in the U.S. Where policymakers can’t or won’t act, users and markets can: subscriptions, smaller community platforms, and advertisers demanding brand-safe environments all alter incentives. If media buyers reward platforms that demonstrably reduce harm—even at the expense of raw impressions—design will follow the money.
What the numbers say now
- Scale: Social users now outnumber non-users globally; billions log in monthly, concentrated on the seven platforms that each claim a billion-plus users. (DataReportal, 2025.)
- Teens: Nearly half of U.S. teens report being online “almost constantly,” with YouTube still dominant; the Surgeon General urges warning labels and design constraints. AP News
- Time spent: Emerging markets lead daily usage: Kenya (~3h43m–3h56m, depending on the dataset), South Africa (~3h37m), Nigeria (~3h23m–3h49m). Global averages hover around 2–2.5 hours/day. Statbase; Intelpoint
- Policy heat: DSA enforcement ramping; U.S. state age-check laws face First Amendment headwinds; Australia’s under-16 ban begins 10 Dec 2025. digital-strategy.ec.europa.eu; Arkansas Advocate
Cutting down: what actually works
There’s no silver bullet, but practical “speed bumps” help:
- Kill the hooks. Disable non-essential notifications; turn off autoplay; use Do Not Disturb after a set hour.
- Friction by design. Remove the most compulsive apps from your home screen (or phone). Use website versions with no infinite scroll.
- Bedtime boundaries. Keep phones out of bedrooms; charge elsewhere; set night-mode or grayscale in the evening.
- Feed curation. Unfollow chronic outrage merchants; follow accounts that leave you calmer or better informed.
- Screen-time caps. Use iOS/Android tools to set daily limits per app; treat override prompts as a mindfulness check, not a failure.
These are small acts of resistance, but they add up. As WIRED puts it, reclaiming even modest chunks of time measurably improves mood and focus.
What platforms should do next (and prove, not promise)
- Make “off-ramps” the default. After N minutes of continuous scrolling, show a “You’re all caught up/Take a break” interstitial. Let users set their own thresholds.
- Offer algorithmic choice. Prominent, one-tap switches for chronological or friends-first feeds; publish audits of impacts on well-being and polarization under the DSA’s transparency rules. digital-strategy.ec.europa.eu
- Shift KPIs. Reward teams on meaningful interactions and session quality, not raw time. Report these metrics publicly, the way privacy reports became standard post-GDPR.
- Safety by default for teens. Private accounts by default; no targeted ads; night-time notifications off; recommendation limits around sensitive topics; easy-to-use parent/guardian tools. (Much of this is now mandated or strongly guided in the EU and Australia.) digital-strategy.ec.europa.eu
- Researcher access. Expand and standardize API/data access for bona fide researchers to study systemic risks (again, mandated in the EU), with privacy protections. digital-strategy.ec.europa.eu
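The "off-ramp" default in the first bullet is simple to model: track continuous scrolling against a user-set threshold and surface an interstitial when it's crossed. A minimal sketch with made-up thresholds, not any platform's actual implementation:

```python
# Hypothetical sketch of an "off-ramp" default: after a user-set stretch
# of continuous scrolling, show a take-a-break interstitial. The
# threshold and pause-reset values are illustrative assumptions.

class ScrollSession:
    def __init__(self, threshold_minutes=20, break_resets_after=5):
        self.threshold = threshold_minutes            # user-adjustable, per the text
        self.break_resets_after = break_resets_after  # a pause this long resets the clock
        self.continuous_minutes = 0.0

    def record_scroll(self, minutes):
        """Accumulate scrolling time; return True when an off-ramp is due."""
        self.continuous_minutes += minutes
        return self.continuous_minutes >= self.threshold

    def record_pause(self, minutes):
        """A long enough pause counts as a natural break and resets the clock."""
        if minutes >= self.break_resets_after:
            self.continuous_minutes = 0.0

session = ScrollSession(threshold_minutes=20)
session.record_scroll(15)                 # 15 minutes in: no interstitial yet
show_offramp = session.record_scroll(6)   # 21 minutes: time for "Take a break?"
```

The design question is not the timer but the default: platforms could ship this today, and letting users set their own threshold is what distinguishes an off-ramp from a nag.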
What regulators should do (what’s working, what isn’t)
- Codify choice and transparency. Generalize DSA-style requirements: non-profiling feeds; explanation labels (“why am I seeing this?”); independent audits; data access for researchers. digital-strategy.ec.europa.eu
- Age-appropriate design over blunt bans. Australia’s under-16 prohibition will be an important test. If it drives evasive behavior or harms vulnerable youth, pivot toward design codes (privacy-by-default, nudges off after dark) and targeted enforcement against under-age onboarding rather than universal ID checks that raise privacy risks. AP News
- Align incentives. Explore ad-market levers (procurement standards, advertiser codes) that reward harm-reduction and high-quality contexts; consider competition remedies for recommender “gatekeeping.”
- Interoperability and portability. Make it as easy to leave as to join. If users can take their social graph/content elsewhere, platforms have to compete on experience, not lock-in.
- AI accountability. Require disclosures for synthetic media; baseline bot detection; sunset periods for models that repeatedly fail safety thresholds.
The next decade: two plausible futures
Trajectory A (Course-corrected): Platforms, under pressure from law and the market, swap engagement-maximization for user-defined optimization. Default feeds are gentler; off-ramps are common; minors get strict guardrails; researchers can audit real impacts. Ad models evolve toward quality contexts. Generative AI is harnessed to filter harm and boost serendipity rather than exploit compulsion. “Doomscrolling” becomes a relic term, like “surfing the web.”
Trajectory B (Runaway): Generative content floods feeds; bot herds and deepfakes outstrip platform defenses; engagement-at-all-costs persists as the sole metric; regulation is either captured or toothless; attention fractures further; doomscrolling becomes the default state of being online.
Which future we land in is less a question of technology than of politics and will. The tools exist to build humane feeds; what’s missing is the alignment of incentives and accountability to deploy them at scale.
The enduring irony—and a way forward
Even at the height of the smartphone boom, some tech leaders put tight limits on their own children’s screens. Jobs’ line—“We limit how much technology our kids use at home”—has become lore, sometimes exaggerated but rooted in fact. The deeper point is not hypocrisy; it’s that those who built the machinery understand its power. The rest of us need protections and choices by default, not just willpower. World Economic Forum
We can keep the best of social media—connection, creativity, mobilization—without the worst of it. That means: individuals installing speed bumps; platforms proving (with data) that their design choices improve well-being; and regulators locking in transparency, choice, and youth protections that travel with users regardless of passport.
The attention industry won’t reform itself out of altruism. But it will respond to pressure—from users who churn, advertisers who demand better, and lawmakers who finally insist that the most powerful media systems in history treat our time and minds as something more than an extractive resource.
If we do this right, the next generation won’t need a word like doomscrolling at all.
Sources (selected)
- Doomscrolling & mental health: Angela Watercutter, “Doomscrolling Is Slowly Eroding Your Mental Health,” WIRED (June 25, 2020).
- Insider admissions: Mike Allen, “Sean Parker unloads on Facebook,” Axios (Dec. 2017).
- Tech parents & screens: World Economic Forum re: Jobs’ NYT comment; The Independent on Gates’ policies; Le Monde myth-busting the “no screens” trope.
- Global scale: DataReportal, Digital 2025: State of Social (2025).
- Lobbying: Issue One (Apr. 22, 2025); Axios Q3 2025 lobbying tallies.
- EU regulation: European Commission, DSA minors-protection guidance (July 14, 2025); Taylor Wessing explainer.
- U.S. teens online: AP News on Pew Research Center survey (Dec. 2024); Surgeon General warning-label calls (2024).
- U.S. state laws: Arkansas Advocate and Bloomberg Law on the age-verification law struck down (Apr. 2025).
- Australia under-16 ban: eSafety Commissioner (updated Nov. 6, 2025); AP and Guardian coverage of the expanded platform list; policy cautions on verification.
- China youth limits: ABC News (Aug. 2, 2023) overview of proposed phone limits; Douyin youth-mode context.
- Emerging markets: DataReportal country snapshots; South Africa landscape reports (World Wide Worx/Ornico).
- Time spent (Africa): Statbase (Kenya, South Africa, Nigeria daily time).