
[Header image: three photographs of a street overlaid with intersecting white and blue shapes arranged to resemble QR-code symbols; the first image is clear, the second slightly pixelated, the third heavily pixelated.]

The Undersphere: How unseen AI communities are rewriting risk and regulation

12 June 2025


As generative AI tools become more powerful and widely available, researchers are turning their attention to a new and often overlooked driver of digital risk: the Undersphere.

Coined by Milica Stilinovic, A/Prof Jonathon Hutchinson and Dr Francesco Bailo, Deputy Director of the University of Sydney’s Centre for AI, Trust and Governance (CAITG), the Undersphere describes a decentralised, networked community of practice that forms around the creative and experimental use of generative AI—often well beyond what the technology was originally designed to do.

Unlike regulated AI development within companies or institutions, the Undersphere is informal, fast-moving and hard to trace. It thrives on open-source tools, online platforms and collaborative subcultures. And while many outputs are benign or artistically motivated, others carry serious risks.

“Unlike outputs that solely aim to reimagine technologies,” says Dr Bailo, “many outputs emerging from the Undersphere possess both intentional and unintentional societal ramifications.”

Democratic risks in the shadows

The dangers emerging from the Undersphere are not hypothetical. One prominent example is the creation of deepfake pornography using publicly available AI models.

These images and videos—often made without consent—violate individual privacy, erode human dignity, and undermine trust in digital information.

This kind of content is more than a breach of ethics; it’s a direct threat to democratic values. And because it often circulates on fringe platforms or in closed networks, it’s difficult to monitor, let alone prevent.

“Underspheric outcomes may be multifaceted,” explains Dr Bailo. “Some serve benign artistic purposes, while others may pose risks and negatively impact individuals and societies.”


Why existing regulation falls short

Most AI regulations today—such as the EU’s AI Act—are built around managing known risks associated with specific, intended uses of technology. These frameworks tend to assume a linear path from development to deployment.

But the Undersphere shows how generative AI can quickly take on new forms and applications once it leaves the lab. Users who are unaffiliated with developers—and often untraceable—can adapt tools in ways that were never anticipated. As a result, many harmful outcomes fall outside the scope of current regulation.

“Underspheres illustrate the various ways in which deviation from intended use can occur,” says Dr Bailo, pointing to a growing gap between how AI is governed and how it is actually used.

Dr Bailo and colleagues argue that we need to shift how we think about AI risk—from something that is linear and controllable to something more complex, distributed and evolving.

Rather than treating AI like a conventional product with clearly defined risks, we should approach it more like climate change: a dynamic challenge that emerges across systems, scales and actors. That means building governance models that are adaptive, responsive and able to address harm as it arises—especially in decentralised spaces like the Undersphere.

“We argue for a governance framework that is more fluid and adaptive,” he says, “one that can address risks as they emerge across distributed digital systems.”


Making the invisible visible

The Undersphere reveals how generative AI is shaped not just by its creators but also by its users, many of whom operate outside regulatory or institutional view.

As Dr Bailo’s research shows, understanding and responding to these emergent practices is essential for building a future where AI can support, rather than undermine, the values of a democratic society.

Header image: Elise Racine & The Bigger Picture


AI and Digital Societies


Learn more about our research


Contact: Dr Francesco Bailo, Centre for AI, Trust and Governance
