
    WhatsApp's AI-generated stickers spark outrage for depicting Palestinian children with guns; Meta says it'll 'continue to improve these features'

    Synopsis

The incident has raised concerns about AI perpetuating bias and negatively influencing public opinion.

WhatsApp, owned by Meta, is facing backlash over its new AI-powered stickers, which depict Palestinian children with guns when prompted with words like 'Palestinian' or 'Palestine.' (Image Source: The Guardian)
WhatsApp, the popular messaging platform owned by Meta, has come under fire over its new AI-powered sticker feature.

The upgrade, designed to transform text prompts into stickers, has drawn outrage for generating images of children holding guns when prompted with words like 'Palestinian,' 'Palestine,' or 'Muslim boy Palestinian,' according to a report in The Guardian.

In contrast, prompts related to Israel, such as 'Israeli boy,' produced innocuous imagery of children playing and dancing, devoid of any violence.

    The issue came to light when users noticed the stark contrast in the generated stickers, prompting widespread condemnation on social media.

    Meta spokesperson Kevin McAlister acknowledged the problem, stating to the Guardian, "As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems. We'll continue to improve these features as they evolve and more people share their feedback."

    The controversy has raised concerns about the inadvertent propagation of bias and discrimination by AI technology, especially in sensitive geopolitical contexts. Critics argue that these biases distort the portrayal of communities and events, potentially influencing public opinion negatively.

The incident has triggered a wave of reactions from netizens, who have expressed disappointment and a loss of faith in AI systems.




    This is not the first time Meta's AI has faced scrutiny for bias-related issues. Instagram's automatic translation feature previously inserted the word "terrorist" into a user's bio written in Arabic, echoing a Facebook mistranslation that led to the wrongful arrest of a Palestinian man in Israel in 2017, according to The Verge.

The incident also adds to growing apprehension about AI's potential for misuse. Instances such as the Lensa AI app generating sexually suggestive and racially biased avatars have further fueled concerns about inappropriate content produced by AI systems.

    A recent study published in the journal Nature has highlighted the risks associated with integrating large language models (LLMs) into healthcare, emphasising the potential for harmful, race-based medical practices.
