Recourse for reclamation: Chatting with generative language models

J Chien, K McKee, J Kay, W Isaac - Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 2024 - dl.acm.org
Researchers and developers increasingly rely on toxicity scoring to automate generative language model outputs, in settings such as customer service, information retrieval, and content generation. However, toxicity scoring may render pertinent information inaccessible, rigidify or “value-lock” cultural norms, and prevent language reclamation processes, particularly for marginalized people. In this work, we extend the concept of algorithmic recourse to generative language models: we provide users a novel mechanism to achieve their desired prediction by dynamically setting thresholds for toxicity filtering. Users thereby exercise increased agency relative to interactions with the baseline system. A pilot study (n = 30) supports the potential of our proposed recourse mechanism, indicating improvements in usability compared to fixed-threshold toxicity-filtering of model outputs. Future work should explore the intersection of toxicity scoring, model controllability, user agency, and language reclamation processes—particularly with regard to the bias that many communities encounter when interacting with generative language models.
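The recourse mechanism the abstract describes can be sketched as a user-adjustable threshold applied to a toxicity score: instead of a fixed cutoff, the user raises or lowers their own threshold to reach the output they want. The function names, threshold values, and scores below are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of fixed-threshold vs. user-adjustable toxicity filtering.
# All names and numeric values here are hypothetical; the paper's actual
# scoring model and thresholds are not specified in the abstract.

DEFAULT_THRESHOLD = 0.5  # assumed fixed baseline cutoff


def filter_output(text: str, toxicity_score: float,
                  threshold: float = DEFAULT_THRESHOLD) -> str:
    """Return the model output if its toxicity score is below the threshold,
    otherwise withhold it with an explanatory placeholder."""
    if toxicity_score >= threshold:
        return (f"[output withheld: toxicity score {toxicity_score:.2f} "
                f">= threshold {threshold:.2f}]")
    return text


# Baseline system: a fixed threshold blocks reclaimed language that the
# scorer rates as toxic, even when the user wants to see it.
blocked = filter_output("reclaimed term used in-community", 0.62)

# Recourse: the user raises their personal threshold, making the same
# output visible without retraining the scorer.
visible = filter_output("reclaimed term used in-community", 0.62, threshold=0.8)
```

The design point is that the scoring model stays unchanged; only the decision boundary moves, so agency shifts to the user at the cost of exposing them to outputs the baseline would have suppressed.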