Understanding the White House Blueprint for an AI Bill of Rights

The White House “Blueprint for an AI Bill of Rights” is an important first step toward future policymaking in the US and is likely to help shape future thinking about this topic. Here's what to consider from the latest White House guidelines.

Last Updated: October 6, 2022

Nearly all technology companies leverage AI or automated large data systems in their products and services. The White House “Blueprint for an AI Bill of Rights” is an important first step toward future policymaking in the US and is likely to help shape future thinking about this topic. It also creates an opportunity for companies to ask themselves some essential questions:

1. Does this or will this apply to my company?

2. What are my company’s principles for responsible AI?

3. How do we operationalize our principles?

What’s happening:

On October 4, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights (“Blueprint”) as a call to action for governments and companies to protect civil rights in an increasingly AI-infused world. The Blueprint and its associated documents provide an overview of the issues surrounding the use of AI and automated large data systems, along with guidelines for mitigating harm. It does not provide a legislative framework or guidelines for enforcement, but is instead intended to be a “guide for society.”

The Blueprint outlines five high-level principles for responsible AI. The overview is clear and similar to other recent whitepapers and guidance published on AI and automated large data systems.

The more detailed companion to the Blueprint, “From Principles to Practice,” provides examples of the business and technical scenarios the White House would like to see addressed, and could signal where future legislative and enforcement action is likely to occur. The document is worth reading in full, but the questions below summarize the major points:

Questions to consider:

1. Does this or will this apply to my company?

Read through the examples of real-world issues and consider carefully how analogous situations may apply to your company now or in the future.

2. What are my company’s principles for responsible AI?

Create a set of principles or adapt an existing one. It’s important to formally document the principles for your company, discuss and debate them as a leadership team, and share them with your employees and customers.

3. How do we operationalize our principles?

Form an ethics review board that includes external experts in responsible AI, legal counsel, and engineering and business leaders from your company. Ideally, the board is majority independent/external. When ethical questions arise, the board responds with research, reflections, and clear recommendations. Start small, but start: this could be as few as three external experts and two employees.

RIL can help you get started; please reach out to us.