Pentagon’s ‘maturity model’ for generative artificial intelligence will be released in June


DISA AI Hearing at HASC

DoD Chief Information Officer John Sherman, Dr. Craig Martell (DoD chief digital and AI officer), and Air Force Lt. Gen. Robert J. Skinner (director of the Defense Information Systems Agency) testify before a House Armed Services subcommittee on March 22, 2024. (DoD photo by EJ Hersom)

WASHINGTON — To get a gimlet-eyed assessment of the actual capabilities of much-hyped generative artificial intelligences like ChatGPT, officials from the Pentagon’s Chief Digital and AI Office said they will publish a “maturity model” in June.

“We’ve worked really hard to determine where and when generative artificial intelligence can be useful, and when and where it’s going to be dangerous,” Craig Martell, the outgoing chief digital and AI officer, told the Cyber, Innovative Technologies, and Information Systems subcommittee of the House Armed Services Committee this morning. “We have an enormous gap between the marketing and the science. [CDAO, through its] Task Force Lima is attempting to rationalize this gap. We’re developing what we call a maturity model, very similar in concept to the autonomous driving maturity model.”

That widely used framework rates carmakers’ claims on a scale from zero — a purely manual vehicle, like a Ford Model T — to five, a truly self-driving vehicle that needs no human intervention in any circumstances, a criterion that no real product has yet met.

RELATED: Artificial Stupidity, Failing The Handoff From AI to Human Control

Martell continued: “For generative AI, that’s a useful model, because people have claimed to be at level five. But objectively speaking we’re actually at level three, and a few folks are doing some level four things.”

The problem with large language models to date is that they produce plausible, even authoritative-sounding text that is nevertheless riddled with errors, known as hallucinations, which only an expert in the field can detect. That makes LLMs deceptively easy to use, but extremely difficult to use well.

“It is extremely difficult; it takes a lot of cognitive effort to validate the output,” Martell said. “[Using AI] to replace experts, to allow novices to replace experts — that’s where I think it’s dangerous. I think the most effective use is to help experts become better experts, to help someone who knows their work well do it better.”

“I don’t know, Dr. Martell,” responded a skeptical Rep. Matt Gaetz, one of the Republican members of the subcommittee. “I find many novices able to show their expertise by using these language models.”

“If I can, sir,” Martell interjected, “it is very difficult to validate the results. … I’m totally on board, as long as there’s a way to easily check the output of the model, because hallucination hasn’t gone away yet. There’s a lot of hope that hallucinations can be eliminated. Some research says it won’t ever go away. This is a question that I think we should continue to investigate.”

“If it’s difficult to validate output, then… I’m very uncomfortable with this,” Martell said.

Two Hands on the Wheel: The Maturity Model

The day before Martell’s testimony on the Hill, his chief technology officer, Bill Streilein, gave details on the development and timeline of the upcoming maturity model at the Potomac Officers Club’s annual conference on AI.

Since CDAO’s Task Force Lima launched in August, Streilein said, it has been assessing more than 200 potential “use case” submissions for generative AI from organizations across the Defense Department. The most promising use cases so far, he said, are in the back office, where many forms need to be filled out and many documents need to be summarized.

RELATED: Beyond ChatGPT: Experts say generative AI should write — but not execute — battle plans

“Another important use case is that of the analyst,” he said, because intelligence analysts already have a high level of expertise in assessing incomplete or unreliable data, and double-checking and verifying are built into their standard procedure.

As part of this process, CDAO asked industry for help in assessing generative AIs, something the private sector has a strong incentive to get right. “We issued an RFI [Request For Information],” Streilein said at the Potomac Officers Club conference, “and in the fall we received over 35 industry proposals on ways to implement this maturity model. As part of our symposium, which took place in February, we held a full-day workshop to discuss this maturity scale.”

“We will be releasing our first version, version 1.0 of the maturity model… at the end of June,” he continued. But it won’t end there: “We do anticipate iteration… It’s version 1.0 and we expect it will keep moving as the technology improves and also the Department becomes more familiar with generative AI.”

Previewing the framework Martell described to lawmakers, Streilein said version 1.0 “will consist of a simple rubric with five levels which articulates how much the LLM takes care of accuracy, completeness and reliability autonomously,” along with a set of datasets against which models can be compared and a method by which someone could integrate a model of a given maturity level into their workflow.
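Streilein did not spell out the rubric itself. Purely as a hypothetical sketch of the kind of structure he describes — five levels, benchmark datasets to score models against, and a gate on how much human review a model needs before entering a workflow — it might look something like the following. All level definitions, dataset names, and thresholds below are invented for illustration and are not CDAO’s.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical five-level rubric; the real CDAO levels are not public.
class LLMMaturity(IntEnum):
    LEVEL_1 = 1  # every output must be fully verified by an expert
    LEVEL_2 = 2  # useful drafts, but accuracy/completeness checks stay manual
    LEVEL_3 = 3  # model catches some of its own errors; routine human review
    LEVEL_4 = 4  # handles accuracy, completeness, reliability in narrow domains
    LEVEL_5 = 5  # fully autonomous handling of accuracy, completeness, reliability

@dataclass
class BenchmarkResult:
    dataset: str        # e.g. a back-office summarization test set (hypothetical)
    accuracy: float     # fraction of outputs judged correct
    completeness: float
    reliability: float

def assess_maturity(results: list[BenchmarkResult]) -> LLMMaturity:
    """Toy scoring rule: the weakest metric across all datasets drives the level."""
    worst = min(min(r.accuracy, r.completeness, r.reliability) for r in results)
    if worst >= 0.99:
        return LLMMaturity.LEVEL_5
    if worst >= 0.95:
        return LLMMaturity.LEVEL_4
    if worst >= 0.85:
        return LLMMaturity.LEVEL_3
    if worst >= 0.70:
        return LLMMaturity.LEVEL_2
    return LLMMaturity.LEVEL_1

# A workflow could then gate how much expert review is required:
level = assess_maturity([BenchmarkResult("form-summarization", 0.91, 0.88, 0.90)])
needs_expert_review = level < LLMMaturity.LEVEL_4
```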

RELATED: Former official: 3 ways intel analysts use artificial intelligence today

Why is CDAO drawing inspiration from the maturity model for so-called self-driving cars? To stress that humans cannot take a hands-off, faith-based attitude toward this technology.

“As a driver who is familiar with the basics of driving a vehicle, you are still responsible for other aspects, [like] staying in your lane and avoiding obstacles,” Streilein explained. “That’s sort of the inspiration for what we want in the LLM maturity model… to show people the LLM is not an oracle; its answers always have to be verified.”

Streilein expressed his excitement about generative AI and its potential, but he wants users to proceed carefully, with full awareness of the limits of LLMs.

“I think they are amazing. I also think they are dangerous, because they provide a very human-like AI interface,” he said. “Not everyone understands that they are really just an algorithm that predicts words based on the context.”
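To make that last point concrete, here is a toy sketch of “predicting words based on the context.” It is not how production LLMs are built; they use neural networks over tokens rather than word counts, but the basic objective is the same: propose the most likely continuation of what came before.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny corpus,
# then predict the most frequent continuation of a given word.
corpus = "the model predicts the next word the model predicts text".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # -> "model" (most frequent continuation in this corpus)
print(predict_next("word"))  # -> "the"
```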


