Developing safe, secure, and trustworthy AI
How we build generative AI systems responsibly
Govern
Putting responsible AI into practice begins with our Responsible AI Standard, which details how to integrate responsible AI into engineering teams, the AI development lifecycle, and tooling.
Map
Mapping risks is the first step toward measuring and managing the risks associated with AI systems. Mapping informs decisions about planning, safeguards, and the appropriate use of a generative AI system.
Measure
We’ve implemented procedures to measure AI risk and related impacts to inform how we manage these considerations when developing and using generative AI systems.
Manage
We manage identified risks at the platform and application levels. We also work to safeguard against previously unknown risks through monitoring, feedback, and incident response systems.
How we make decisions
Deployment safety for generative AI applications
Safely deploying Copilot Studio
Copilot Studio harnesses generative AI to enable customers without programming or AI skills to build copilots. As with all generative AI systems, the Copilot Studio engineering team mapped, measured, and managed risks according to our governance framework to ensure safety prior to deployment.
Safely deploying GitHub Copilot
GitHub Copilot is an AI-powered tool designed to increase developer productivity. In developing the features for GitHub Copilot, the team worked with their Responsible AI Champions to map, measure, and manage risks associated with using generative AI in the context of coding.
Sensitive Uses program
How we support our customers
AI Customer Commitments
In June 2023, we announced our AI Customer Commitments, outlining steps to support our customers on their responsible AI journey.
Tools to support responsible development
We’ve released 30 responsible AI tools that include more than 100 features to support customers’ responsible AI development. These tools help map and measure AI risks and manage identified risks through novel mitigations, real-time detection and filtering, and ongoing monitoring.
Transparency to support responsible development and use
We provide documentation to our customers about our AI applications’ capabilities, limitations, intended uses, and more.
How we learn, evolve, and grow
Governance of responsible AI
At Microsoft, no single team or organization can be solely responsible for adopting and enforcing responsible AI practices.
External partnerships
We partner with governments, civil society organizations, academics, and others to advance responsible AI.
Supporting AI research
Academic research and development can help realize the potential of AI. We’ve committed support to various programs and regularly publish research to advance the state of the art in responsible AI.
Tuning in to global perspectives
In 2023, we worked with more than 50 internal and external groups to better understand how AI innovation may impact regulators and individuals in developing countries.