AFRL generative AI program already has 80,000 users

By Shelley K. Mesch  / September 12, 2024

Tens of thousands of Defense Department servicemembers and civilians are now using an Air Force-designed generative AI system plugged into the military's intranet, paving the way for the tool's future expansion.

Launched in June, NIPRGPT -- named after the Non-classified Internet Protocol Router Network used in DOD and generative pretrained transformer, the type of AI it uses -- now has about 80,000 users, the Air Force Research Laboratory said. The ballooning use of the GenAI capability has proven the demand for the tool, AFRL CIO Alexis Bonnell said in a late July interview.

“A lot of [use cases] have to do with toil reduction, getting a minute back on mission, being able to focus on lots of things like curiosity, content-creation or summation,” she said, adding computer coding can also be assisted by the system.

Not all the users are in the Air Force or Space Force either, she said, as other services and DOD organizations have signed up as well.

“This initiative will enable us to better gauge policy, infrastructure and potentially budgeting and acquisition insights to drive value-based investments in emerging technology,” AFRL said in an emailed statement. “We are learning that this is a journey of ‘right tool, for the right time, for the right use, at the right value.’”

Insights from current NIPRGPT use will also be used for future efforts to build upon the system and create fit-for-purpose GenAI capabilities, AFRL said.

With a secure system, users are able to experiment with how to leverage GenAI, the Air Force said at the time of NIPRGPT’s launch, as “adequate safeguards” are in place.

NIPRGPT is just the start, Bonnell said, of integrating GenAI into modern defense and warfare.

“We’re going to see a huge amount of new models come out,” Bonnell said. “We’re going to see value in curated information sets for those models, and we’re going to see models that . . . work on the edge and are smaller and less compute-intensive.”

One impetus for creating NIPRGPT as a tool for DOD was the amount of useful information the department holds that isn’t available publicly.

A GenAI model is only going to be as effective as the information it is given, Bonnell said, and that need for information ran up against the heavily siloed data regime within the U.S. government, and DOD in particular.

“Part of the goal and part of what I think our leadership are really trying to do is understand ‘how do we become more knowledge-rich and knowledge-accessible,’” she said.

Like any new technology, people will need to be trained on it, including how to ask the right questions of the AI system.

Submitting a question to a system like NIPRGPT is more complicated than a Google search, said Bonnell, who used to work for Google. Users can ask multiple questions at one time, ask for information to be displayed in a certain way or put parameters on the information that could be used in formulating an answer.

Curiosity is key to effectively using GenAI, she said.

“What we have tended to find -- and our research is still really early in this -- is that, to various degrees, people have either had the curiosity in them encouraged or beaten out of them,” Bonnell said. “So, particularly when we introduce a tool like NIPRGPT, a huge part of that was to give people a safe space to practice their curiosity in a new and different way. And if you think about it, it’s kind of like building a muscle.”

DOD leaders shouldn’t fear GenAI, she said, as it’s “just the next thing.” Much like how Google became an everyday part of life, AI is going to become ubiquitous too. What’s important is understanding how it can augment and drive efficiency in current operations.

Not using GenAI may be “actively disadvantaging and, quite frankly, creating toil where maybe that isn’t the best place for our most brilliant minds to spend” time, she said.

Bonnell also wants people to stop referring to this technology as “the AI,” she said, because it makes it seem like the machine has agency and is making decisions, when really, it’s a machine enacting what humans tell it to do.

“We never want to absolve ourselves of the consequences of what we choose to do with technology,” Bonnell said.