Microsoft has reportedly unveiled a generative AI model based on GPT-4 tailored specifically for US intelligence agencies, designed to operate without any Internet connection.
This is supposedly the first time Microsoft has deployed a large language model in a fully isolated environment. It was built to give spy agencies a secure chatbot, similar to Microsoft Copilot and ChatGPT, for analyzing top-secret data without the risks that come with an Internet connection. However, because AI language models have intrinsic design limitations, improper use could still mislead officials.
OpenAI's GPT-4 is a large language model (LLM) that predicts the most likely next tokens—fragments of encoded text—in a sequence. Its applications include analyzing information and writing computer code.
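Next-token prediction can be sketched in a few lines. The toy probability distribution and greedy-decoding loop below are illustrative assumptions, not GPT-4's actual mechanics; a real model computes these probabilities with a neural network over a vocabulary of roughly 100,000 tokens.

```python
# Minimal sketch of next-token prediction, the core operation of an LLM.
# The probabilities here are made-up toy values for illustration only.

def next_token_probs(context):
    """Return a hypothetical probability distribution over next tokens."""
    # Toy distribution for the context "The analyst reviewed the"
    return {"report": 0.55, "data": 0.30, "code": 0.10, "banana": 0.05}

def greedy_decode(context, steps=1):
    """Repeatedly append the single most likely next token (greedy decoding)."""
    tokens = context.split()
    for _ in range(steps):
        probs = next_token_probs(" ".join(tokens))
        best = max(probs, key=probs.get)  # highest-probability token
        tokens.append(best)
    return " ".join(tokens)

print(greedy_decode("The analyst reviewed the"))
# → "The analyst reviewed the report"
```

In practice, chatbots usually sample from this distribution rather than always taking the top token, which is one reason outputs vary between runs.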
When set up as a chatbot (as in ChatGPT), GPT-4 can power AI assistants that converse like humans. Under its agreement with OpenAI, Microsoft may use the technology in exchange for its investments in the company.
The report claims that the new, as-yet-unnamed AI service addresses intelligence agencies' growing desire to use generative AI on sensitive material while reducing the risk of data breaches or hacking attempts.
ChatGPT normally runs on Microsoft cloud servers, which raises the risk of data leaks and interception. Along similar lines, the CIA said last year that it intended to build a ChatGPT-like service, though this Microsoft effort is reportedly unrelated.
According to the report, Microsoft's chief technology officer for strategic missions and technologies, William Chappell, said the new system took 18 months to build and involved modifying an AI supercomputer in Iowa. The modified GPT-4 variant cannot access the public Internet, but it can read files that users upload to it. "This is the first time we've ever had an isolated version—when isolated means it's not connected to the Internet—and it's on a special network that's only accessible by the US government," Chappell stated.
The newly launched service went live on Thursday; about 10,000 members of the intelligence community can currently access it, and it is ready for further testing by the relevant agencies. For now, Chappell says, it's "answering questions."
A significant drawback of using GPT-4 to analyze critical data is its tendency to confabulate—to make up inaccurate summaries, draw incorrect inferences, or give users misleading information.
Trained AI neural networks are not databases; they operate on statistical probabilities, which makes them unreliable factual resources unless they are supplemented with external data from another source through a technique like retrieval-augmented generation (RAG).
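The RAG idea can be sketched simply: fetch the most relevant document for a query, then put it in the prompt so the model answers from supplied facts rather than from its statistical weights alone. The corpus, the word-overlap scoring, and the prompt format below are all illustrative assumptions, not any particular product's implementation.

```python
# Hedged sketch of retrieval-augmented generation (RAG).
# A toy corpus stands in for an agency's document store; real systems
# typically use vector embeddings rather than word overlap for retrieval.

DOCS = [
    "The isolated GPT-4 service went live on Thursday.",
    "About 10,000 intelligence community members can access the service.",
    "The system runs on a modified AI supercomputer in Iowa.",
]

def retrieve(query, docs):
    """Score documents by word overlap with the query; return the best match."""
    q_words = set(query.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(query, docs):
    """Ground the model's answer in retrieved context instead of its memory."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

print(build_prompt("When did the service go live?", DOCS))
```

The resulting prompt pairs the retrieved fact with the question, so a model's answer can be checked against the supplied context rather than trusted on the strength of its training data alone.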