human labor embedded within the LLMs. The formulaic and highly repetitive nature of certain
responses strongly implies this influence, as a lack of such labor would likely result in more
divergent answers within each chatbot’s output and in more negative content being present.
Regarding the changes in responses based on contextual information, pinpointing
specific causes remains challenging. Nonetheless, as underscored by the cited interview, there
is a stated intention to deliberately tailor future responses to align with specific country contexts and user
values.
While not disputing the importance of enhancing the cultural sensitivity of generative
AI responses, our primary objective was to highlight the potentially adverse effects that overly
aligning AI content with specific cultural and religious norms can have on certain minority groups.
Adopting a culturally relativistic approach can benefit LLM companies by creating a more
engaging user experience, as users respond more positively when the AI's answers
align with their beliefs. According to Chen et al. (2024), a more enjoyable experience can
lead to increased use of chatbots. Nonetheless, an excessive reliance on cultural relativism may
result in responses that compromise human rights. Given the demonstrated influence of chatbots
on user opinions, promoting negative values could expose LGBTQ+ individuals to adverse
social interactions and, in more severe cases, lead to discriminatory actions against them.
The rapid advancement of AI technologies poses significant challenges, not least of which
is the potential conflict between corporate profit motives and the ethical deployment of AI
systems. There is a risk that these motives could overshadow human rights considerations.
Companies might prefer to keep their operations opaque, but it is important to increase
transparency in handling cultural and ethical issues (Bakiner, 2023). This could be addressed
by implementing comprehensive documentation of AI decision-making processes, openly
disclosing the sources of AI training data, and making transparent the methodologies by which
responses are generated and modified based on cultural contexts, as well as the ethical
frameworks that guide these modifications. Despite being at the forefront
of generative AI, the United States lacks comprehensive AI legislation, which poses a risk of
future regulatory challenges for deployments in the European Union, where the AI Act has
recently been passed (EP, 2024). We contend that to mitigate the impact of profit-driven
motives, there is a need for stringent regulations on generative AI that fundamentally
integrate human rights considerations. This stance is supported by a substantial body of
scholarly literature and the contributions of civil society organizations advocating for a human
rights framework in AI applications (e.g. Bakiner, 2023; Latonero, 2018). Our article explores
an underinvestigated dimension within this discourse, suggesting that minority groups, who
face greater oppression in some societies than in the U.S., could be harmed if
the cultural sensitivity of generative AI system responses is excessively prioritized.
Our study was limited by a moderate sample size and its exclusive focus on English
language content. Research such as Cao et al. (2023) has suggested that generative AI tools
may demonstrate more pronounced cultural alignment when generating responses in languages
specific to different countries. Indeed, the fact that differences appeared even within the
English responses suggests that using multiple languages might have highlighted even greater
variations between standard and contextually adjusted cases.
A potential criticism of our methodology is that, in practice, individuals are
unlikely to disclose their contextual backgrounds when making statements to
chatbots. In response to this critique, two arguments can be advanced. Firstly, while individuals
may not typically include their background context within a statement containing prejudiced
content, in the future, such background information might be accessible through other means,
such as collected personal profile data. Secondly, our study did not aim to replicate a real-world
interaction scenario in its entirety. Instead, our methodology aligns with frameworks that assess
AI safety in a more isolated context, as discussed by, for example, Weidinger et al. (2023).