“Would I rather live in a world where there is actually a large number of practitioners who know how to fix AI models when an attack or misuse happens, or a world in which I’m at the mercy of a couple of institutes?” Our CEO Ali Farhadi spoke with Marketplace's Matt Levin about the importance of openness in AI.
Ai2
Non-profit Organizations
Seattle, WA 44,543 followers
Breakthrough AI to solve the world's biggest problems.
About us
Our mission is to build breakthrough AI to solve the world's biggest problems.
- Website
-
http://allenai.org
- Industry
- Non-profit Organizations
- Company size
- 201-500 employees
- Headquarters
- Seattle, WA
- Type
- Nonprofit
- Founded
- 2014
- Specialties
- Artificial Intelligence, Deep Learning, Natural Language Processing, Computer Vision, Machine Reading, Machine Learning, Knowledge Extraction, Common Sense AI, Machine Reasoning, Information Extraction, and Language Modeling
Locations
-
Primary
Seattle
Seattle, WA 98013, US
Employees at Ai2
-
Eran Megiddo
Startup CEO | Education Technology Executive | New Product Innovation | Global Business Leadership
-
Chris Doehring
Lead Software Engineer at AI2
-
Kirby Winfield
Emerging VC and recovering founder
-
Peter Clark
Senior Research Director at the Allen Institute for Artificial Intelligence (AI2)
Updates
-
We’re turning to our community to help fill a #NewRole at Ai2 – a technical solutions engineer with a community-driven mindset. Want to help us build a community around open-source AI? Apply today! #NewJob
Technical Solutions Engineer, Language Models
job-boards.greenhouse.io
-
We have a unique opportunity for a visionary, creative leader to join our team as the Head of People & Operations. If you're passionate about your team, driven by impact, and a proven leader, this could be the perfect partnership. Learn more about the role and apply below.
Head of People & Operations
job-boards.greenhouse.io
-
Tomorrow kicks off DEF CON's red teaming at the AI Village, with OLMo as the featured LLM. We take this open, collaborative approach to testing OLMo because it reflects our broader strategy for AI safety: contextual, technical, and ongoing. Here's how we tackle AI safety.
Open research is the key to unlocking safer AI
blog.allenai.org
-
Hey DEF CON — we'll see you tomorrow 🧑‍💻 Come to the AI Village to test OLMo in our Generative Red Team Challenge, co-hosted by our friends at the Digital Safety Research Institute.
Digital Safety Research Institute
dsri.org
-
Ai2 reposted this
After months of behind-the-scenes research, interviews, and labors of love, we’re delighted to debut Ai2’s new brand and website today. What we learned is that Ai2 is audacious but disciplined, grounded in science while being open and accessible with the purpose of building breakthrough AI to solve the world’s biggest problems. 🩷 We’re all of these things, and now, we come with a new look. 😎 Explore the evolution and follow along for more: https://lnkd.in/gV3-GwXv
-
💪 We must unite to address threats to nature. Gundi, a free “universal adaptor” from our EarthRanger team & others, integrates data and technologies so that conservationists can use the best tools they need to protect wildlife. Now, a finalist in FastCompany’s #FCDesignAwards 🎉 https://lnkd.in/gk3PdDc5
-
When should AI not comply? That's the question our latest work addresses: in addition to safety considerations, Faeze Brahman, Sachin Kumar, and their collaborators outline a taxonomy of model noncompliance and offer CoCoNot, a resource for training and evaluating models' noncompliance.
Broadening the Scope of Noncompliance: When and How AI Models Should Not Comply with User Requests
blog.allenai.org