Artificial intelligence could be used more reliably for cybersecurity applications if the machines were better able to explain themselves, according to a representative of a key government agency funding research and development in emerging technology. "We recognize that machine-learning-enabled AI is inherently brittle and can be easily spoofed, either intentionally or unintentionally," said Valerie Browning, director of the Defense Advanced Research Projects Agency's Defense Sciences Office. Browning spoke to Inside Cybersecurity after giving the keynote address at NextGov's emerging...