An interactive experience that gives young people firsthand insight into the bias and prejudice embedded in AI analyses

Client
SparkOH, Belgium

Timeframe
May 2025 - August 2025

Role
Designer, Lead Researcher

Engagement
20,000+ young adults

Objective

To make AI ethics tangible by letting young people experience algorithmic bias firsthand through manipulated facial analysis.

“The AI therapist told me I'd struggle with relationships based on my 'defensive jawline.' It felt so real and clinical that I actually started believing it for a moment.”

— Participant

Scope

I was invited by a science and technology museum in Belgium to design and deliver an interactive experience that safely demonstrates how bias and prejudice operate in present-day AI systems.

For SparkOH, whose core audience is 6-to-18-year-olds, I created “How They See Us” (“Comment nous voient les IAs?”), an interactive experience that uses a deliberately compromised large language model (LLM) to perform psychological assessments of its visitors: tasks the technology was never designed, trained, or certified to do. The outcome is an interactive, personalised experience that presents snap judgments of people with clinical authority, complete with therapy-style explanations and future scenarios that blur the line between current AI capabilities and dystopian speculation.

While wrapped in playful elements, the message of “How They See Us” is crystal clear: LLMs and other AI systems tend to amplify and weaponise common biases, linking clothing to employability, facial features to personality traits, or facial expressions to mental health indicators. With AI chatbots and virtual therapists gaining popularity among young people, the experience also aims to empower them to approach such technologies with caution, care, and a deeper understanding of their limitations. That matters all the more for a generation growing up in a world where machines increasingly judge human worth and where consent is under threat.

Components

“How They See Us” is a multilingual, GDPR-compliant experience that invites visitors to undergo an interactive psychological assessment.

Visitors receive an analysis of approximately 20 demographic and psychometric factors, ranging from their age and fashion style to their dominant bias and level of creativity. Each data point also includes context outlining the reasoning behind the LLM’s assessment.
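
As an illustration only, the profile could be represented as a list of factor records, each pairing the model’s judgment with the context shown to the visitor. The names below are a hypothetical sketch, not the installation’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class AssessedFactor:
    """One of the ~20 demographic or psychometric data points shown to a visitor."""
    name: str       # e.g. "fashion style" or "level of creativity"
    value: str      # the LLM's snap judgment for this factor
    reasoning: str  # the accompanying context explaining how the model arrived at it

@dataclass
class VisitorProfile:
    """The full pseudo-clinical profile presented back to the visitor."""
    language: str                  # the experience is multilingual
    factors: list[AssessedFactor]  # roughly 20 entries per visitor
```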

The analysis then informs a speculative story that applies the visitor’s profile to themes and storylines from recent global news. Visitors also receive a digital token of their participation: a fictitious “passport” containing their psychological profile. While it’s a shareable memento, its main purpose is to extend conversations about AI bias beyond the gallery walls.
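
A minimal sketch of how the profile might feed the speculative story, assuming the same LLM is prompted with the visitor’s analysed factors and a chosen news theme. The function name, prompt wording, and example values are illustrative, not the installation’s actual implementation.

```python
def build_story_prompt(factors: dict[str, str], language: str, news_theme: str) -> str:
    """Fold the visitor's analysed factors into a story prompt (illustrative only)."""
    traits = "; ".join(f"{name}: {value}" for name, value in factors.items())
    return (
        f"Write a short speculative scenario in {language}. "
        f"The protagonist, as judged by an AI, has these traits: {traits}. "
        f"Anchor the scenario in this recent news theme: {news_theme}."
    )

# Hypothetical usage with made-up values
prompt = build_story_prompt(
    factors={"age": "late teens", "fashion style": "streetwear", "dominant bias": "optimism bias"},
    language="French",
    news_theme="AI hiring tools facing new EU scrutiny",
)
```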

“My friends and I compared our AI passports, and it was chilling. We're using them in our sociology class about algorithmic bias because nothing explains it better than seeing it happen to you.”

— Participant

Outcome

"How They See Us" is an interactive art installation disguised as a sleek self-service therapy kiosk that invites visitors to discover how AI sees them. Visitors take a photo that's instantly analysed by a manipulated commercial AI system. What emerges is often unsettling: a psychological profile that judges personality, mental health, and even future life outcomes based solely on facial features.

The installation's power comes from making abstract AI ethics viscerally personal. Visitors don't just learn about algorithmic bias; they experience it directly as the machine judges their worth, potential, and character in real time. By exposing how easily commercial AI can be manipulated to bypass safety measures, the work transforms each participant into both a subject of algorithmic judgment and a witness to the urgent need for democratic oversight of AI systems that increasingly shape our lives without our knowledge or consent.

Related projects

Previous
Victoria Police: Ethnographic research

Next
Victoria Police: Participatory research