Chinese researchers reuse Meta’s Llama model for military intelligence applications
- Chinese researchers adapt Meta’s Llama model for military intelligence use
- ChatBIT shows the risks of open-source AI technology
- Meta distances itself from unauthorized military applications of Llama
Meta’s Llama AI model is open source and available for free use, but the company’s licensing terms clearly state that the model is intended for non-military applications only.
However, there are long-standing concerns about how open-source technology can be monitored to ensure it is not put to the wrong purposes, and the latest reports appear to confirm them: Chinese researchers with ties to the People’s Liberation Army (PLA) have reportedly built a military-focused AI model called ChatBIT using Llama.
The rise of ChatBIT highlights the potential and challenges of open source technology in a world where access to advanced AI is increasingly seen as a national security issue.
A Chinese AI model for military intelligence
A recent study by six Chinese researchers from three institutions, including two affiliated with the Academy of Military Sciences of the People’s Liberation Army (AMS), describes the development of ChatBIT, created using an early version of Meta’s Llama model.
By incorporating their own parameters into the Llama 2 13B large language model, the researchers aimed to produce a military-oriented AI tool. Subsequent academic follow-up articles outline how ChatBIT has been adapted to handle military-specific dialogues and support operational decisions, with the goal of performing at approximately 90% of GPT-4’s capacity. However, it remains unclear how these performance figures were arrived at, as no detailed testing procedures or field applications have been disclosed.
Analysts familiar with Chinese AI and military research have reportedly reviewed these documents and backed the claims about ChatBIT’s development and functionality. They say ChatBIT’s reported performance is consistent with that of experimental AI applications, but note that the lack of clear benchmarking methods or accessible datasets makes the claims difficult to verify.
Furthermore, an investigation by Reuters provides an additional layer of support, citing sources and analysts who have reviewed materials linking PLA-affiliated researchers to ChatBIT’s development. The investigation finds that these documents and interviews reveal efforts by the Chinese military to repurpose Meta’s open-source model for intelligence and strategy tasks, making this the first published instance of a national military adapting Llama for defense purposes.
The use of open-source AI for military purposes has reignited debate about the security risks of publicly available technology. Meta, like other tech companies, has licensed Llama with explicit restrictions against military use. However, as with many open-source projects, enforcing such restrictions is practically impossible: once the model weights and code are available, they can be modified and reused, allowing foreign governments to tailor the technology to their own needs. ChatBIT is a clear example of this challenge, with Meta’s intentions circumvented by actors with very different priorities.
This has led to renewed calls within the US for stricter export controls and further restrictions on Chinese access to open-source and open-standard technologies such as RISC-V. These steps are intended to prevent US technologies from supporting potentially hostile military advancements. Lawmakers are also exploring ways to limit US investment in China’s AI, semiconductor, and quantum computing sectors to curb the flow of expertise and resources that could fuel the growth of China’s technology industry.
Despite the concerns surrounding ChatBIT, some experts question its effectiveness given the relatively limited data used in its development. The model was reportedly trained on 100,000 military dialogue records, a small corpus compared to the massive datasets used to train state-of-the-art language models in the West. Analysts suggest this could limit ChatBIT’s ability to perform complex military tasks, especially when leading large language models are trained on trillions of tokens.
Meta also responded to these reports, pointing out that the Llama 2 13B model used to build ChatBIT is now outdated, with the company already working on Llama 4. Meta also distanced itself from the PLA, saying any such use of Llama is unauthorized. Molly Montgomery, Meta’s director of public policy, said: “Any use of our models by the People’s Liberation Army is unauthorized and violates our acceptable use policy.”
Via Tom’s Hardware