AI’s Regimes of Representation: When Technology Forgets Who We Are
Right to Fair Representation
The case study explores how text-to-image AI models often fail to capture South Asian cultures accurately, showing that these systems reflect global power imbalances more than lived reality. It argues that understanding bias requires more than numbers; it needs the voices of the communities affected.
For me, cultural representation means being understood, not just seen. Growing up Lebanese, I’ve watched how Arab identity gets flattened into stereotypes — violent, poor, or exotic — and I can see how those same biases are now coded into AI systems. When the data comes from the same internet that misrepresents us, the output naturally repeats those mistakes.
What I liked about this study is that it valued people’s feelings, not just technical accuracy. Asking participants how they interpreted AI images gave depth to the research. Numbers can’t capture when something “feels wrong,” and that emotional context matters. I think smaller, community-based evaluations like this make AI more ethical because they reconnect technology with the people it’s supposed to serve.
Making AI inclusive starts with who gets to build it. Adding more data isn't enough; the people shaping these systems have to come from diverse backgrounds too. Developers should collaborate with cultural experts and local communities to make sure images reflect lived realities, not just Western defaults.
I also don’t think culture can ever be fully encoded. It changes too often for that. But models could be designed to update with new, locally informed data so they stay open to growth. The goal shouldn’t be to define culture but to stay responsive to it.
History shows that every major technology has carried the biases of its creators. The difference now is that AI can either repeat those mistakes faster or learn from them. Building fairer models starts with remembering that representation isn't neutral: it's power.
My Question:
What would AI look like if the people it’s misrepresented most — Arabs, South Asians, Africans, Indigenous communities — were the ones designing its datasets and deciding what counts as accurate?
This case reminded me that fairness in AI isn’t just about fixing bias in code. It’s about respect and agency. I want to see technology that doesn’t just show us, but understands us — or at least tries to.
