The Defense Department has released a Responsible Artificial Intelligence Toolkit that aims to guide best practices for AI among DOD users and industry contractors.
The RAI Toolkit, coming out of the Chief Digital and Artificial Intelligence Office, is a critical part of the RAI Strategy and Implementation Pathway that Deputy Defense Secretary Kathleen Hicks signed in June 2022.
“To ensure that our citizens, warfighters and leaders can trust the outputs of DOD AI capabilities, DOD must demonstrate that our military's steadfast commitment to lawful and ethical behavior apply when designing, developing, testing, procuring, deploying and using AI,” Hicks wrote. “The Responsible AI (RAI) Strategy and Implementation (S&I) Pathway illuminates our path forward by defining and communicating our framework for harnessing AI.”
The goal of the strategy, Hicks wrote, is to “eliminate uncertainty and hesitancy” among DOD users, industry and U.S. allies.
“Integrating ethics from the start also empowers the DOD to maintain the trust of our allies and coalition partners as we work alongside them to promote democratic norms and international standards,” she wrote.
A key part of this strategy and implementation plan was the development of an AI-related test and evaluation toolkit that would “draw upon best practices and innovative research from industry and the academic community, as well as commercially available technology where appropriate,” DOD said in a press release yesterday.
The toolkit was released to DOD users yesterday. It is built on the Responsible AI Guidelines and Worksheets developed by the Defense Innovation Unit, the NIST AI Risk Management Framework and Toolkit, and the IEEE 7000 Standard Model Process for Addressing Ethical Concerns During System Design.
“Responsible AI is foundational for anything that the DoD builds and ships,” Chief Digital and AI Officer Craig Martell said in a statement.

“So, I am thrilled about the release of the RAI Toolkit,” he continued. “This release demonstrates our commitment to ethics, risk assessment, internal governance, and external collaboration. We promised to establish processes to design and employ human fail-safes in AI development and deployment, and we're excited to provide this applied toolkit for our end users.”
DOD also noted that the toolkit guides users through “tailorable and modular assessments, tools and artifacts throughout the AI Product lifecycle.”
The department says the toolkit is a living document and will be continuously updated.