We and AI

Civic and Social Organizations

Enabling and encouraging critical thinking about AI - to increase the diversity of those able to make decisions about AI

About us

We and AI is a collaborative non-profit organisation set up and staffed by volunteers. We seek and welcome participation from people of all ages and backgrounds. Our mission is to raise awareness of the risks and rewards of Artificial Intelligence.

Website
https://weandai.org/
Industry
Civic and Social Organizations
Company size
2-10 employees
Headquarters
London
Type
Nonprofit
Founded
2020

Locations

Employees at We and AI

Updates

  • Post from We and AI (2,347 followers)

    If you work with non-profits and haven't yet read the research report "Grassroots and non-profit perspectives on generative AI: why and how do non-profit and grassroots organisations engage with generative AI tools, and the broader AI debate?" (https://lnkd.in/emnNySva), don't worry: you can catch up on the key findings at a webinar JRF are hosting, where we will discuss the research We and AI undertook to inform it.

    This event launches the findings of the Joseph Rowntree Foundation (JRF)'s report, which engaged a broad range of social and environmental organisations through a survey, discussion groups, and interviews. Join us to hear an overview of the research, the main findings, and key themes for wider discussion. There will also be opportunities to put questions to the research team of Gulsen G., Lizzie Remfry, Ismael Kherroubi García, FRSA, Tania Duarte, and Nicholas Barrow, and to Yasmin Ibison, who commissioned the research.

    Themes for discussion will include balancing short- and long-term objectives, the impact on the non-profit sector as a whole (especially in relation to different sizes of organisation), what role grassroots organisations have to play in advocating for AI governance, and how they deal with ethical dilemmas.

    We hope to see you there: https://lnkd.in/emaK2y3D

    #AIforpublicgood #grassroots #generativeAI

    • Webinar: Grassroots and non-profit perspectives on generative AI. Date: 12th August 2024, 1pm - 1.45pm. #AIforpublicgood JRF
  • We and AI reposted this

    Post from Better Images of AI (1,718 followers)

    🚨🔔 Announcement: Open call for image-makers for the Archival Images of AI project!

    Brought to you by AIxDESIGN with support from Better Images of AI, and funded by the Netherlands Institute for Sound and Vision, this is a chance to contribute to the creation of a playbook for making better images of AI based on images from digital heritage collections. AIxDESIGN will commission 3 image-makers, who may have worked with collage techniques before but do not need to be experts in AI. Some of these images may later be published as stock imagery in the Better Images of AI gallery (https://lnkd.in/ejqsFBH6), available for free public use under the Creative Commons BY 4.0 license.

    👉🏽 Application deadline: Monday 29th July at midnight (CET)
    📌 For more information and to apply, please visit: https://lnkd.in/dNUr2Whq

    Filter on image provided by Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0 (https://lnkd.in/dUHS3W5i)

    • A poster announcing an open call for image-makers. The background features a pixelated image of an artwork depicting a man and a woman in historical attire. The text reads: "Open Call: Image-Makers. AIxDESIGN will commission 3 image-makers to test the playbook for creating better images of AI from remixed digital heritage collections. Some of these images may later be published as stock imagery on the Better Images of AI gallery under CC by 4.0 license. Apply by Monday, July 29th midnight CET." Logos for AI x DESIGN, Sound & Vision, and Better Images of AI are displayed at the bottom.
  • Post from We and AI

    [Framing Deepfakes] Meet our panelist Medina Bakayeva - cybersecurity, digitalisation, and communications consultant

    We're looking forward to our first online public event, "Framing Deepfakes: How can we better understand and address the complex societal challenges posed by deepfakes and synthetic media?"

    Date: Monday, July 8 · 4:30 pm - 6:30 pm BST (London)
    Cost: Free
    Booking link: https://lnkd.in/eYxa7p6J

    In preparation for the event, we want to introduce you to another of our panelists, Medina Bakayeva, consultant at Digital Hub on the topics of cybersecurity, digitalisation, and communications. Let's hear from Medina:

    We and I: What brings you to the topic of deepfake frames?

    Medina: I was pursuing a master's in Global Governance and Ethics at UCL. During the programme, I started volunteering for We and AI, as I was interested in the global governance of AI. I took a module on the role of technology in security and conflict, where I learned about the nexus between technological advancement, cybersecurity, and conflict. I was interested in the impact of AI developments on information ecosystems, democracy, and society at large, which led me to research policy discourse on deepfakes, particularly as half the world was heading into elections in 2024.

    We and I: What are your top key takeaways for the audience?

    Medina: The key ideas and objectives in policy debates (the "policy frames") in the UK, US, and EU highlight that deepfakes pose threats to democracy and contribute to cybercrime and gender-based violence and harassment. The solutions proposed by policies in these jurisdictions include technological measures, public education, and more robust regulation.

    On Monday, we'll hear more about Medina's perspective on deepfakes. Click on the link below and join us to explore how the way we talk about audiovisual misinformation and disinformation shapes the way we respond to hazards.

    #GenAI #AI #Deepfakes #EthicalAI #InclusiveAI #TrustworthyAI

    IMAGE CREDITS: Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0; Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0

    https://lnkd.in/eYxa7p6J

    Framing Deepfakes

    eventbrite.co.uk

  • Post from We and AI

    [Framing Deepfakes] Meet our panelist Patricia Gestoso - technologist and inclusion strategist

    We're looking forward to our first online public event, "Framing Deepfakes: How can we better understand and address the complex societal challenges posed by deepfakes and synthetic media?"

    Date: Monday, July 8 · 4:30 pm - 6:30 pm BST (London)
    Cost: Free
    Booking link: https://lnkd.in/eYxa7p6J

    In preparation for the event, we want to introduce you to another of our panelists, Patricia Gestoso-Souto ◆ Inclusive AI Innovation, a technologist and inclusion strategist.

    We and I: What brings you to the topic of deepfake frames?

    Patricia: I was researching an article on how AI is weaponised against women when I found out that women bear the brunt of deepfakes, specifically so-called "deepfake porn", where a woman's face is used in nude videos without her consent with the help of artificial intelligence. I then began to dig deeper into the topic, looking at how deepfake pornography affects victims, the lack of legislation, the financial web behind deepfake porn, and the moral stance of deepfake porn consumers.

    We and I: What are your top key takeaways for the audience?

    Patricia: When talking about deepfake porn, the focus has rightly been on victims and legislation. I argue that we must also target whoever is profiting from it. We need to hold accountable not only deepfake porn marketplaces and platforms but all the infrastructure supporting them. For example, Big Tech provides deepfake technology, cloud services, storage, and even promotion through paid ads and app stores. There are also payment networks, such as credit cards, that make the financial transactions between sellers and purchasers possible. We should also challenge the moral stance of consumers who tell themselves that deepfake porn is harmless because it is not real footage of the person.

    On Monday, we'll hear more about Patricia's perspective on deepfakes. Click on the link below and join us to explore how the way we talk about audiovisual misinformation and disinformation shapes the way we respond to hazards.

    #GenAI #AI #Deepfakes #EthicalAI #InclusiveAI

    IMAGE CREDITS: Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0; Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0

    https://lnkd.in/eYxa7p6J

    Framing Deepfakes

    eventbrite.co.uk

  • We and AI reposted this

    Post from We and AI

    [Framing Deepfakes] Meet our panelist Adrija Bose - Senior Editor at BOOM

    We're excited about our first online public event, "Framing Deepfakes: How can we better understand and address the complex societal challenges posed by deepfakes and synthetic media?"

    Date: Monday, July 8 · 4:30 pm - 6:30 pm BST (London)
    Cost: Free
    Booking link: https://lnkd.in/eYxa7p6J

    In preparation for the event, we want to introduce you to one of our panelists, Adrija Bose, Senior Editor at BOOM.

    We and I: What brings you to the topic of deepfake frames?

    Adrija: While conversations around deepfakes started a while back, India began witnessing a plethora of deepfakes in 2023. The conversation on Indian social media started when media organisations reported on deepfakes of Indian actresses. It wasn't just one or two: several deepfakes that sexualised Indian actresses were all over X, and X wasn't taking any action. But deepfakes are not limited to celebrities. With easy and cheap tools, anyone can create anyone's deepfake. Through my reporting, I realised that the cost of deepfakes falls particularly heavily on women. The patriarchal nature of the internet, combined with the lack of policies and laws to protect women, makes the digital world a scary place for women.

    We and I: What are your top key takeaways for the audience?

    Adrija: Consent is key. A 2019 study showed that, globally, 96% of deepfakes are of a non-consensual sexual nature, and of those, 99% depict women. But there aren't enough studies to show how deeply dangerous deepfakes can be, especially for marginalised communities. Deepfakes impact men and women differently: their disproportionate impact means that women are almost always sexualised, which can force them out of public spaces. Because of the lack of studies and reporting on deepfakes and their gendered impact, policies are not made with this in mind.

    We cannot wait to hear more of Adrija's research and perspective on deepfakes. Click on the link below and join us to explore how the way we talk about audiovisual misinformation and disinformation shapes the way we respond to hazards.

    #GenAI #AI #Deepfakes #EthicalAI #InclusiveAI

    IMAGE CREDITS: Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0; Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0

    https://lnkd.in/eYxa7p6J

    Framing Deepfakes

    eventbrite.co.uk

  • Post from We and AI

    Join us to explore the various frames through which deepfakes are perceived, and how those frames shape the discourse and response. In a year of elections coinciding with the increasing use of generative AI, deepfakes have been big news in 2024. But does the way we talk about audiovisual misinformation and disinformation shape the way we respond to hazards, hazards which impact a wider sphere than is typically addressed? And who gets forgotten in policy framings of AI?

    First, a panel will discuss the multifaceted impact of language on shaping public perceptions, policies, and economic dynamics surrounding deepfakes. The panelists are journalist Adrija Bose, who has investigated how deepfakes impact women in India; Medina Bakayeva, who has conducted academic research on different policy framings, with a focus on the cybersecurity perspective; and Dr Patricia Gestoso-Souto ◆ Inclusive AI Innovation, who has analysed case studies from a feminist perspective. The panel will be led by author and technologist Robert Elliott Smith.

    Next will be a fireside chat between Jacobo Castellanos and Tania Duarte, discussing the work of WITNESS, an international human rights organization helping people use video and technology to protect and defend their rights.

    India Avalon, from The University of Nottingham, will present the problematic origins and usage of the term "deepfake" and explore potential language solutions.

    The final segment will hear from We and AI's young volunteers, who have been delivering workshops with schoolchildren and parents. It will include interactive elements and a chance for a group discussion, based on what we can learn from young people's responses to different types of deepfakes.

    IMAGE CREDITS: Clarote & AI4Media / Better Images of AI / User/Chimera / CC-BY 4.0; Adrien Limousin / Better Images of AI / Non-image / CC-BY 4.0

  • We and AI reposted this

    Post from Patricia Gestoso-Souto ◆ Inclusive AI Innovation

    Director Scientific Services and Operations SaaS | Ethical and Inclusive Digital Transformation | Award-winning Inclusion Strategist | Trustee | International Keynote Speaker | Certified WorkLife Coach | Cultural Broker

    [You're invited!] I'm a panelist at "Framing Deepfakes" - a free online event

    I'm delighted to share that I'll be a panelist at the virtual We and AI event "Framing Deepfakes: How can we better understand and address the complex societal challenges posed by deepfakes and synthetic media?"

    In a year of elections coinciding with the increasing use of generative AI, deepfakes have been big news in 2024. But how does the way we talk about audiovisual misinformation and disinformation shape our response to hazards?

    Date: Monday, July 8 · 4:30 pm - 6:30 pm BST (GMT+1)

    Join We and AI to explore the various frames through which deepfakes are perceived and how they shape the discourse and response. The event will be chaired by Tania Duarte, We and AI founder.

    First, a panel will discuss the multifaceted impact of language on shaping public perceptions, policies, and economic dynamics surrounding deepfakes. Panelists:
    • Adrija Bose, who has investigated how deepfakes impact women in India
    • Medina Bakayeva, who has conducted academic research on different policy framings, with a focus on the cybersecurity perspective
    • Patricia Gestoso-Souto ◆ Inclusive AI Innovation, who has analysed case studies from a feminist perspective
    The panel will be led by author Robert Elliott Smith.

    Next will be a fireside chat with Jacobo Castellanos from WITNESS, an international human rights organisation that helps people use video and technology to protect and defend their rights.

    India Avalon, from The University of Nottingham, will present the problematic origins and usage of the term "deepfake" and explore potential language solutions.

    The final segment will hear from We and AI's young volunteers, who have been delivering workshops to schoolchildren and parents: "Teaching young people to navigate deepfakes and synthetic media. What can we all learn?"

    Register on the link below and join us for a thought-provoking session!

    #GenAI #AI #Deepfakes #EthicalAI #InclusiveAI #Trust #Sexism #Patriarchy

    https://lnkd.in/e4Rj6fRA

    Framing Deepfakes

    eventbrite.co.uk

  • We and AI reposted this

    Post from Better Images of AI

    Better Images of AI is delighted to be working with the Cambridge University AI Ethics Society to create a community of Student Stewards ❤️

    The Stewards are working to empower people to use more representative images of AI and to celebrate those who lead by example. They have also formed a valuable community that helps us connect with artists and develop our image library 🙌🏽

    👉🏽 Read more about our collaboration and the work of our Student Stewards in our latest blog post, and find out how you could get involved ✨ https://lnkd.in/eugNCzxz

    Hannah Claus Harry Weir-McAndrew Valena Reich Hannan Ali DL Luna Wang Kia Wan Marissa Ellis Kimberly Wright Ismael Kherroubi García, FRSA Angeline Corvaglia Sahaj Vaidya Lizzie Remfry Gulsen G. Amina Memon Beckett L. Flo Forster Hollie Bayliss Keerti R. ingrid karikari Lakshmi Chockalingam Abha Thakor CMgr, FChartPR, FCIPR Silky Vaidya Medina Bakayeva Patricia Gestoso-Souto ◆ Inclusive AI Innovation Ricardo Santos Coelho Zoya

    Better Images of AI's Student Stewards

    https://blog.betterimagesofai.org

Similar pages

Browse jobs