Reimagining AI Through Justice

Reflections from a Queer Feminist on the Una Europa PhD Summer School

This reflection explores my experience at the 2025 Una Europa Summer School on Data Science and AI for Social Welfare, held in July 2025 in San Lorenzo de El Escorial (Madrid). Drawing from queer, feminist, and decolonial perspectives, I discuss how AI research must grapple with power, harm, and historical injustice. I reflect on the challenges of interdisciplinary work, the limits of fairness metrics, and the necessity of grounding technical innovation in political and ethical commitments. This is not just a summary; it is a call to centre care, complexity, and accountability in the future of AI.

 

What Are PhD Summer Schools?

 

PhD Summer Schools are immersive, short-term academic programmes designed for doctoral students and early-career researchers. They aim to cultivate deeper intellectual engagement, build cross-border networks, and offer space for experimentation across disciplines.

Unlike conventional academic conferences, summer schools offer slower, more reflexive learning. They prioritise critical thinking, mentorship, and collaborative exchange over finished results. They ask not only what you know, but how you learn, and with whom.

 

Una Europa: Transnational Learning for the Future

 

Una Europa is a strategic alliance of eleven leading European universities. Its vision is to foster transnational, interdisciplinary educational models that respond to shared global challenges.

The 2025 Summer School, titled Data Science and AI for Social Welfare, was hosted by Universidad Complutense de Madrid in the historically charged town of San Lorenzo de El Escorial. Participants came from across Europe, representing diverse disciplines including informatics, law, public health, philosophy, machine learning, sociology, and beyond.

The stated ambition of the programme was to explore how AI could be used “for social good.” Yet this theme, while compelling, prompted deeper inquiry:

  • Who defines social good?
  • Which lives get counted as worth protecting, or predicting?
  • And how do patterns of harm persist even in the name of fairness?

 

Showing Up as a Queer, Trans, Feminist Scholar

 

My participation in this summer school was shaped by my positionality as a queer, trans scholar with deep commitments to decolonial, feminist, and critical theory traditions.

I did not arrive as a neutral observer. I came with questions that refused easy answers:

  • How does AI replicate systems of oppression?
  • What knowledges are excluded or dismissed in data science discourse?
  • How can we build technologies that care, rather than control?

These perspectives extended beyond disciplinary boundaries. My role was not simply to represent a field, but to ask different kinds of questions: about power, repair, complicity, and accountability.

Summer schools like this offer a glimpse of what academia could become: rigorous, critical, and socially engaged. But this potential must be built, not assumed.

 

AI Is Not Neutral: Power in Pattern Recognition

 

Engaging deeply with data science, algorithmic fairness, and computational modelling took me outside my scholarly comfort zone. But it also brought clarity:

AI is not objective. It is pattern recognition—but patterns are political.

 

Every dataset is shaped by historical decisions: what to collect, who to count, and how to classify. Every model reflects choices about who matters and what outcomes are desirable. The tools we use carry the weight of the societies that created them.

We asked:

  • Whose lives are rendered visible or invisible by AI systems?
  • What gets treated as “noise,” and what gets coded as “risk”?
  • Can bias be “removed,” or does fairness require fundamentally different logics?

The more I learned about fairness metrics, interpretability, and transparency, the more I saw their limits. Many so-called solutions failed to account for the structural injustices baked into the systems themselves.

 

Structure, Themes, and Mentorship

 

The summer school offered a thoughtfully curated pedagogical structure that blended conceptual grounding with hands-on application. Learning unfolded through a combination of keynote sessions, thematic lectures, team-based projects, and ongoing ethical reflection.

The programme opened with a series of lectures designed to establish foundational understanding and provoke critical inquiry, including:

  • Introduction to AI – exploring core concepts and technical architectures
  • The Role of Ethics in AI – situating ethics not as an add-on, but as integral to system design
  • Ethical Tools and Resources – introducing practical frameworks such as checklists, audit tools, and bias mitigation techniques
  • Ethics and AI: From Speculation to Human Impact – examining real-world consequences and affective dimensions of algorithmic systems
  • The ALFIE Project – a presentation on the Assessment of Learning Frameworks for Intelligent and Ethical AI, which raised crucial questions around education, pedagogy, and technological accountability

Participants were then divided into interdisciplinary teams of 5–6 researchers and challenged to develop responses to real-world AI problems over the course of two days. Each team was supported by a mentor and worked in coordination with a body akin to an ethics advisory board: the Youth Design Assembly (YDA). This student-led group ensured that ethical reflection remained embedded throughout each phase of the design process.

The structure emphasised not only what AI can do, but what it should do, and for whom. Ethics was not treated as a checklist, but as a dynamic, negotiated and situated practice.

 

Our Project: Predictive Injustice and the Case of COMPAS

 

Our interdisciplinary team (Banafshee, Yuan, Marie, Samuel, Kostas, and me) chose to explore algorithmic bias in criminal justice systems. We focused on COMPAS, a commercial risk assessment tool used in U.S. courts to predict “recidivism.”

Investigations such as ProPublica’s 2016 analysis have shown that COMPAS disproportionately overestimates the risk posed by Black defendants, who were nearly twice as likely as white defendants to be falsely flagged as high risk.

 

Our team brought together perspectives from law, computer science, AI, physics, and social justice. We asked:

  • Should predictive tools like COMPAS exist at all?
  • What does it mean to calculate “risk” in systems already structured by racial and economic injustice?
  • Can we ethically build on foundations that are themselves compromised?

Using AIF360, an open-source toolkit developed by IBM for detecting and mitigating algorithmic bias, we tested several fairness metrics and bias mitigation strategies on publicly available data. We also developed an accompanying ethical governance framework to guide implementation and accountability.
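For readers curious what this looked like in practice, here is a minimal sketch, not our actual project code, of the kind of check AIF360 makes possible: loading the public COMPAS data, measuring a group fairness metric, and applying one standard pre-processing mitigation (reweighing). It assumes the raw COMPAS CSV has already been downloaded into AIF360’s data directory, as the toolkit requires on first use.

```python
# Minimal sketch (not our project code): measuring and mitigating group disparity
# with IBM's AIF360 on the public COMPAS data. The CompasDataset loader expects the
# raw ProPublica CSV to be present in aif360's data folder and will explain how to
# fetch it if it is missing.
from aif360.datasets import CompasDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# In AIF360's default encoding of this dataset, race = 1 (Caucasian) is treated
# as the privileged group.
privileged = [{"race": 1}]
unprivileged = [{"race": 0}]

dataset = CompasDataset()

# Group fairness before mitigation: gap in favourable-outcome rates between groups.
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference (before):", before.statistical_parity_difference())
print("Disparate impact (before):", before.disparate_impact())

# One pre-processing mitigation: reweigh instances so that group membership and
# the favourable label are statistically independent in the training data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighted = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    reweighted, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Statistical parity difference (after):", after.statistical_parity_difference())
```

Reweighing typically pushes the statistical parity difference towards zero, which is exactly the kind of “improvement” whose limits I reflect on below.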

But even as we adjusted models and improved fairness scores, sometimes by removing race as an input variable, a deeper truth became clear:

Removing race from the model does not remove racism from the system.

 

Despite technical improvements, the underlying structural injustices remained untouched. This underscored a key limitation: statistical fairness cannot substitute for historical or systemic redress. Technical tools can only go so far when the broader institutional logics—carceral, racialised, and exclusionary—remain intact.
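One narrow, technical way to see why: even when race is removed as an input, correlated features can act as proxies, letting a model reconstruct the same disparity. The toy sketch below uses synthetic data and hypothetical variable names (it is not COMPAS data and not our project’s code), but it makes the mechanism visible.

```python
# Toy illustration with synthetic data (hypothetical variables, not COMPAS):
# dropping the protected attribute does not remove group disparity when a
# correlated proxy, shaped by the same history, stays in the feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # protected attribute (0 or 1)
proxy = group + rng.normal(0, 0.5, n)      # feature strongly correlated with group
other = rng.normal(0, 1, n)                # an unrelated feature

# Historically biased labels: one group was recorded as "high risk" far more often.
label = (0.9 * group + 0.3 * other + rng.normal(0, 0.5, n) > 0.8).astype(int)

# Train WITHOUT the protected attribute, using only the proxy and the other feature.
X = np.column_stack([proxy, other])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

print(f"Predicted 'high risk' rate, group 0: {pred[group == 0].mean():.2f}")
print(f"Predicted 'high risk' rate, group 1: {pred[group == 1].mean():.2f}")
```

The predicted rates still diverge sharply between the two groups, because the proxy carries the historical signal that removing the protected attribute was supposed to hide.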

 

From Prototype to Politics

 

In under 48 hours, we:

  • Built a functional prototype
  • Created a governance and audit framework
  • Delivered a critically grounded presentation that foregrounded complexity, not closure

Our team won the challenge, not for resolving a problem, but for resisting reductionism. We argued that tools built within oppressive systems cannot be “neutralised” by tweaks; they must be interrogated at the level of their assumptions.

Code is never just code. It is policy, politics, and potential harm made executable.

 

Interdisciplinarity in Action

 

This project taught me how to:

  • Understand model interpretability and fairness trade-offs
  • Work with bias mitigation strategies
  • Translate between technical and ethical registers

But the most enduring lesson was this:

Interdisciplinary work is not just about sharing space; it’s about sharing stakes.

 

It is slow. It is messy. It resists closure. But it is necessary, especially when working toward justice.

Resisting technosolutionism does not mean rejecting all tools. It means situating tools within the histories and power structures they emerge from, and using them with care, critique, and humility.

 

San Lorenzo de El Escorial: Reflecting in Imperial Shadows

 

The summer school took place in San Lorenzo de El Escorial, the town built around El Escorial, a Spanish imperial complex dating from the 16th century. Majestic and austere, the site once served as a royal palace, monastery, and burial site for kings.

Its architecture is a reminder of empire – of conquest, religious dominance, and colonial violence.

To work on AI and algorithmic justice in such a place was a powerful provocation. Like data, place holds memory. And that memory shaped how I understood our project: as an effort to interrogate the afterlives of empire encoded into technical systems.

 

Towards Queer, Feminist, Decolonial AI Futures

 

The future of ethical AI cannot be built on better tools alone; it requires different politics.

A queer, feminist, decolonial lens demands we ask:

  • Who is harmed by “fair” systems?
  • What stories do our datasets erase?
  • What futures are foreclosed by prediction itself?

We need systems grounded not in control, but in care, accountability, and relational ethics.

We are not just contributors to the AI conversation. We are its critics, re-shapers, and re-imaginers.

 

I left the summer school with more questions than answers, but also with deeper solidarity, sharper tools, and a renewed commitment to justice.

 

Final Reflections

 

This experience reminded me:

  • Interdisciplinarity is not a luxury but a necessity.
  • Ethical AI requires political courage, not just technical skill.
  • The future of AI must be collectively built, not individually optimised.

We need spaces where governance meets resistance, care meets code, and justice becomes an organising principle—not a metric.

AI for social justice is possible, if we’re willing to centre different knowledges, ask different questions, and build technologies that don’t just work, but care.

 

Want to collaborate on Justice and AI?

If you’re working on ethical AI, data governance, or tech and justice, and looking for fresh, cross-cutting perspectives, I’d love to connect.

At the Una Europa Summer School, I didn’t just contribute ideas. I helped lead team conversations, facilitate ethical reflection, and connect dots across disciplines: law, data science, political theory, and social justice. This kind of work isn’t always easy, but it’s where real insight and accountability can emerge.

I work at the intersections of governance, ethics, and justice, supporting organisations, institutions, and projects to think critically about technology, power, and equity. That includes ethical strategy, research design, facilitation, and building values-driven frameworks that work in the real world.

If you’re a policymaker, researcher, technologist, funder, or organiser navigating these questions, let’s talk.

Reach out for collaboration, consulting, or conversation. I’m open to short-term projects, strategic advice, or co-creating longer-term initiatives.

Justice work is collective!

_______________

Thank you to the Una Europa consortium, our mentors, the Youth Design Assembly, my team, and every person who made this space possible. The project, the people, the process: together they reminded me that the work ahead is urgent, collective, and unfinished.

Want to see more? View our team’s presentation: Reimagining Justice through Ethical AI – Group 3 Presentation.