The Intelligence Community (IC) has rolled out the Principles of Artificial Intelligence (AI) Ethics for the IC and the accompanying AI Ethics Framework. These principles and framework, recently approved by the Director of National Intelligence (DNI), will guide the IC’s ethical development and use of AI.
“The IC leads in developing and using technology crucial to our national security mission. We cannot do so without recognizing and acting on its ethical implications,” said DNI John Ratcliffe in a press release. “These principles and their accompanying framework will help guide our mission leads and data scientists as they implement technology to solve intelligence problems.”
The documents include input from IC data scientists as well as privacy and civil liberties officers, and provide guidance on how agency personnel should develop and use AI and machine learning as part of their intelligence-gathering responsibilities.
The Principles of AI Ethics for the IC
The principles are intended to guide personnel on when and how to develop and use AI and machine learning to further the IC’s mission. They are:
- Respect the Law and Act with Integrity
- Transparent and Accountable
- Objective and Equitable
- Human-Centered Development and Use
- Secure and Resilient
- Informed by Science and Technology
These principles articulate the general norms the IC should follow in applying its authorities and legal requirements. To assist with their implementation, the IC has also created an AI Ethics Framework that provides further guidance.
The AI Ethics Framework for the IC
AI can enhance the intelligence mission, but like any new tool, the IC must understand how to use this rapidly evolving technology in a way that aligns with its principles and prevents unethical outcomes. The framework is a guide for the IC on how to procure, design, build, use, protect, consume, and manage AI and AI-related data.
The use of AI, as detailed in the framework, must match the IC’s unique mission purposes, authorities, and responsibilities for collecting and using data and AI outputs.
According to the framework, AI should be used:
- When it is an appropriate means to achieve a defined purpose after evaluating the potential risks;
- In a manner that respects the rights and liberties of affected individuals, using data obtained lawfully and consistent with legal obligations and policy requirements;
- With the incorporation of human judgment and accountability at appropriate stages to address risks across the lifecycle of the AI and inform decisions appropriately.
“We must ensure that our intelligence activities produce objective intelligence while protecting privacy and civil liberties,” said Ben Huebner, ODNI Civil Liberties Protection Officer, in a press release. “The use of AI provides new opportunities, but we must decide how to best use it to advance our mission. The Principles and Framework will provide a consistent approach.”