Sign the Responsible AI Commitments
Artificial intelligence offers enormous promise and carries great risk. At this pivotal moment, as new companies are funded and built, it is critical that startups incorporate responsible AI practices into product development from the outset.
The Responsible AI Commitments outline a responsible AI program, distilling best practices and frameworks designed for early-stage companies and their investors. You can download a PDF of the Commitments here.
To help implement the Commitments, we created the Responsible AI Protocol in an unprecedented collaboration among operators, investors, civil society, and the public sector. The Protocol includes clear action items, resources, and frameworks. Download it here.
| The Responsible AI Commitments
Startup: My organization makes these voluntary commitments (described below).
Venture Capital Firm: My organization will make reasonable efforts to (1) encourage portfolio companies to make these voluntary commitments (described below), (2) consult these voluntary commitments when conducting diligence on potential investments in AI startups, and (3) foster responsible AI practices among portfolio companies.
LP: I encourage the venture capital firms in which I invest to make these voluntary Responsible AI commitments (described below).
Supporter: I encourage startups and venture firms to make these voluntary Responsible AI commitments (described below).
Signatories make the following voluntary Responsible AI commitments:
- Secure organizational buy-in: We understand the importance of building AI safely and are committed to incorporating responsible AI practices into building and/or adopting AI within our organization. We will implement internal governance processes, including a forum for diverse stakeholders to provide feedback on the impact of new products and identify risk mitigation strategies. We will build systems that put security first, protect privacy, and invest in cybersecurity best practices to prevent misuse.
- Foster trust through transparency: We will document decisions about how and why AI systems are built and/or adopted (including the use of third-party AI products). In ways that account for other important organizational priorities, such as user privacy, safety, or security, we will disclose appropriate information to internal and/or external audiences, including the safety evaluations conducted, limitations of AI/model use, the model's impact on societal risks (e.g., bias and discrimination), and the results of adversarial testing to evaluate fitness for deployment.
- Forecast AI risks and benefits: We will take reasonable steps to identify the foreseeable risks and benefits of our technologies, including limitations of a new system and its potential harms, and use those assessments to inform product development and risk mitigation efforts.
- Audit and test to ensure product safety: Based on forecasted risks and benefits, we will undertake regular testing to ensure our systems are aligned with responsible AI principles. Testing will include auditing our data and our systems' outputs, including for harmful bias and discrimination, and documenting those results. We will use adversarial testing, such as red teaming, to identify potential risks.
- Make regular and ongoing improvements: We will implement appropriate mitigations while monitoring their efficacy, as well as adopt feedback mechanisms to maintain a responsible AI strategy that keeps pace with evolving norms, technologies, and business objectives. We will monitor the robustness of our AI systems.
| Next: Action the Commitments
We created the Responsible AI Protocol as a guide to implementing the Commitments. It provides clear action items, resources, and frameworks.