Words We Inherit, Words We Generate: Uncovering and Contextualizing Bias
Priberam Machine Learning Lunch Seminar

By Priberam Labs

Date and time

Tue, 3 Jun 2025 13:00 - 14:00 WEST

Location

Instituto Superior Técnico, Anfiteatro VA5

Avenida Rovisco Pais 1, 1049-001 Lisboa, Portugal

About this event

  • Event lasts 1 hour

Abstract:

As language shapes our understanding of the world, it also reflects the social norms, stereotypes, and biases embedded within it. In this talk, I present two complementary lines of research that aim to uncover and contextualize bias in both inherited (human-authored) and generated (machine-produced) language.

The first part focuses on harmful language in cultural heritage metadata—descriptions and labels that, while historically grounded, may carry derogatory or exclusionary terms. Rather than censoring or erasing these instances, our approach promotes informed engagement by detecting such language and providing contextual explanations. Through a curated vocabulary and an AI-powered tool leveraging both traditional NLP techniques and large language models (LLMs), we aim to support more inclusive access to cultural collections.

The second part examines gender bias in machine translation (MT) systems. We propose a novel evaluation framework that captures systematic gender asymmetries in the translation of gender-ambiguous occupational terms from English into languages with grammatical gender (e.g., Greek, French, or Spanish). While an individual translation in such cases is not inherently biased (rendering 'the doctor' in a masculine form, 'el doctor', or a feminine form, 'la doctora', is neither biased nor unbiased in isolation), aggregated outputs reveal consistent patterns that reinforce gender stereotypes. Our methodology and accompanying dataset (GAMBIT-MT) provide tools to quantify and study these biases, offering insight into how human stereotypes and biases become embedded in machine translation systems.

Together, these works highlight the importance of making bias visible—whether inherited from the past or produced by modern AI—so that we can move toward more inclusive and fair language technologies and a better understanding of the world they shape.


Bio:

Orfeas Menis Mastromichalakis is a Ph.D. candidate and researcher at the Artificial Intelligence and Learning Systems Laboratory (AILS) of the National Technical University of Athens (NTUA), where his work focuses on explainable AI, algorithmic bias, and AI ethics. He holds an integrated master's degree from the School of Electrical and Computer Engineering at NTUA and a master's in Science, Technology, and Society from the National and Kapodistrian University of Athens. He has contributed to several EU-funded research projects and has worked with EU agencies on topics related to AI. Orfeas has been a member of the Council of Europe's Experts Group on AI Literacy, and he is a co-founder and head of AI at the startup Nerion.

www.priberam.com
