AI experts' top security concerns are data poisoning and adversaries exploiting existing vulnerabilities

By Georgina DiNardo / April 9, 2024 at 1:38 PM
Leading experts from the Energy Department's artificial intelligence research center believe the most pressing AI risk areas today are possible data poisoning of large language models and the possibility of adversaries exploiting existing vulnerabilities in systems. In September, DOE's Oak Ridge National Laboratory established the Center for AI Security Research (CAISER) to address threats AI poses to government and industry. Edmon Begoli, the center's founding director, said one of its goals is finding...
