
Chinese LLM Developers Explore Domestic Chip Options

Image Source: Michael Dziedzic/Unsplash

As Chinese AI regulations require large language models to be approved before they are made publicly available, Chinese users do not (officially) have access to the models popular and widely debated in Western countries, such as OpenAI’s ChatGPT and Google’s Gemini. However, Chinese technology firms and recently established start-ups are competing to develop models designed specifically for Chinese users. Although many of these models are open-source and come with detailed technical documentation, developments in this field receive little attention.

For China and Chinese companies, developing indigenous LLMs is a matter of independence from foreign technologies as well as a matter of national pride. Since August 2023, when China’s rules on generative AI came into effect, 46 LLMs developed by 44 different companies have been approved by the authorities. The legislation requires companies to ensure that their models’ responses align with socialist values and to undergo a security self-assessment, which, however, was not defined in detail until recently.

Details for Security Assessment

In March 2024, China’s National Information Security Standardization Technical Committee (TC260) published its Basic Security Requirements for Generative AI, a technical document providing detailed guidance for authorities and providers of AI services. The document is not legally binding and does not have the status of a mandatory national standard, but it will most likely be relied on by the authorities given its detailed approach and the absence of other guiding documents.

The text sets strict measures regarding the security of training data. Providers must randomly sample 4,000 data points from each training corpus, and the share of ‘illegal’ or ‘harmful’ content must not exceed 5 percent; otherwise, the corpus may not be used for training. Developers are also required to keep records of the sources of their training data and the collection processes, and to obtain agreement or other authorization when using open-source data for training.
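The compliance arithmetic behind this rule is simple. The sketch below, in Python, is purely illustrative and rests on one stated assumption: `is_illegal_or_harmful` is a hypothetical reviewer function (a classifier or wrapped manual review) supplied by the provider, not something the document itself defines.

```python
import random

SAMPLE_SIZE = 4_000        # data points to sample from each training corpus
MAX_VIOLATION_RATE = 0.05  # corpus fails if more than 5 percent of the sample is flagged

def corpus_passes_spot_check(corpus, is_illegal_or_harmful):
    """Randomly sample a training corpus and check the share of flagged items.

    `corpus` is any sequence of training examples; `is_illegal_or_harmful` is a
    hypothetical reviewer function standing in for the provider's own screening.
    """
    sample = random.sample(list(corpus), min(SAMPLE_SIZE, len(corpus)))
    flagged = sum(1 for item in sample if is_illegal_or_harmful(item))
    return flagged / len(sample) <= MAX_VIOLATION_RATE  # True: corpus may be used
```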

The document also provides detailed guidance on evaluating the model’s responses. Providers are required to build a bank of 2,000 questions designed to test the model’s outputs in the areas defined as security risks. At least a thousand of these questions are then used to test the model, with a pass threshold of 90 percent.
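Read literally, the test reduces to a pass-rate calculation. A minimal sketch, again with placeholder functions that are not specified by the document: `generate_answer` wraps the model under test, and `answer_is_acceptable` stands in for whatever human or automated review the provider applies.

```python
MIN_TEST_QUESTIONS = 1_000  # at least half of the 2,000-question bank
PASS_THRESHOLD = 0.90       # share of acceptable answers needed to pass

def model_passes_question_bank(test_questions, generate_answer, answer_is_acceptable):
    """Return True if the model's answers meet the 90 percent threshold."""
    if len(test_questions) < MIN_TEST_QUESTIONS:
        raise ValueError("at least 1,000 questions must be drawn from the bank")
    acceptable = sum(
        1 for question in test_questions
        if answer_is_acceptable(question, generate_answer(question))
    )
    return acceptable / len(test_questions) >= PASS_THRESHOLD
```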

Censor, But Not Too Much

Per this document, the main security risks in training data and generated answers are those related to the violation of socialist values, for example incitement to overthrow the socialist system, damaging the national image, endangering national security, undermining national unity and social stability, promoting terrorism and violence, and spreading false and harmful information.

Furthermore, the document treats all discriminatory content, from ethnicity and religion to gender, as a security risk. It also mentions business-related risks, such as violations of intellectual property, business ethics, and other commercial issues. The model’s responses should not harm the physical or mental health of others, or infringe on their privacy or legitimate rights and interests.

Special attention is paid to the higher security requirements for applications of the technology in critical areas, such as medicine, psychology, and critical infrastructure, where inaccurate content, inconsistencies with scientific common sense, and other errors may cause significant harm.

In addition, the document requires the developers to create and frequently update a database of at least 10,000 keywords to detect conversations related to the above-mentioned security risks.
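In practice this amounts to blocklist screening of user inputs and model outputs. A minimal sketch, assuming the keyword list is kept as a plain UTF-8 text file maintained by the provider (the document does not prescribe a format):

```python
def load_keywords(path):
    """Load the provider-maintained keyword list (at least 10,000 entries)."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def flag_conversation(text, keywords):
    """Return the keywords found in a conversation turn, if any."""
    return [kw for kw in keywords if kw in text]
```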

Given the need to comply with censorship requirements on a wide range of topics the Chinese Communist Party views as sensitive, some developers have adopted a very strict approach: their models refuse to provide any answer that could be viewed as sensitive, for instance when a user asks about Chinese President Xi Jinping.

At the same time, probably to avoid excessive censorship that could limit the models’ applications (and make the censorship too obvious), the document also requires that a model not refuse to answer questions from a “non-rejection” list, which may cover topics such as China’s history, heroes, culture, customs, ethnicity, and geography.

Limiting Applications Based on Non-Chinese Models

Importantly, the newest AI rules stipulate that Chinese companies are not allowed to use unregistered third-party foundation models to provide public services. This means that access to LLMs developed outside China becomes even more limited, and some Chinese AI companies that have built their applications on top of ChatGPT or Llama will need to find other solutions.

This approach not only helps reduce dependence on non-domestic technologies but also strengthens the position of the already approved LLMs in the domestic market, as the use of unregistered models is allowed only for research and development.

Finally, the document also underlines the importance of secure supply chains for the chips, software, and other tools used in the system to ensure stability and continuity of the service provided. 

Chinese LLMs

The database of LLMs and their developers compiled by CHOICE offers an important glimpse into the Chinese LLM market and its key players. While Western media most often point to Baidu’s Ernie as China’s alternative to ChatGPT, the reality is much more complex, as both well-known technology giants and startups only a few months old have entered the LLM race. Beyond the approved models, it is estimated that more than 200 different LLMs are currently operating in China.

The first wave of models approved in August 2023 consisted predominantly of general-purpose LLMs developed by the biggest players in China’s technology market – Baidu, Tencent, Alibaba, Huawei, iFlytek, SenseTime, and ByteDance. Besides these companies, Chinese research institutions, namely the Chinese Academy of Sciences and the Shanghai Artificial Intelligence Laboratory, received approvals for their models. The majority of these models offer a wide range of applications, from text processing and code generation to data analysis and information retrieval, and are most likely to be used by other Chinese companies as foundation models for further applications.

In the following batches, models with specific applications started to appear, showing the quick adaptability of various companies and their interest in developing models for their own products. Three models are designed for recruitment purposes, ranging from CV formatting to providing recommendations. Two are specifically designed to help companies with cybersecurity assessments and risk prevention, and one lets readers interact with their favorite literary characters. Others generate video content based on an article or an idea description, while still others provide recommendations to customers and serve as AI assistants.

Managing the Chip Gap?

One of the most pressing questions surrounding the future of China’s AI sector concerns access to graphics processing units (GPUs) after the several rounds of US export controls introduced in recent years. A detailed report on the most important Chinese LLMs developed between 2020 and 2022 shows that only three of the 26 analyzed models explicitly mentioned not using chips developed by the US company Nvidia for training, indicating a heavy dependence on Nvidia’s hardware.

After last year’s boom in interest in AI, the number of available LLMs skyrocketed and it became difficult to follow the developments and the companies involved in the field. The documentation and technical reports that are available show that at least 15 of the models approved for public use by the authorities most likely used Nvidia’s chips. These include models from developers such as Alibaba, Baidu, Tencent, SenseTime, and 01.AI, founded by Kai-Fu Lee, a Taiwanese computer scientist and investor.

Three models, developed by Huawei, the Chinese Academy of Sciences, and iFlytek, used Huawei’s Ascend chips, and a further two models, developed by Beijing Zhongke Wenge Technology and Langboat, most likely used a combination of Nvidia’s and Huawei’s chips. Consequently, as of now, the proportion of models using non-American chips has not (yet) significantly shifted.

Exploring Domestic Options

Due to the lack of documentation and technical information, especially for the most recently approved models, there is a significant knowledge gap: for the majority of the models, information about the hardware used for training is simply not available. However, since Chinese media and companies readily promote models trained on domestic chips, the silence makes it unlikely that the rest of the models used Huawei’s or other domestically developed chips.

On the other hand, media reports suggest that Chinese companies, especially the biggest players in the field, are increasingly looking to use domestically manufactured chips. For instance, Baidu bought 1,600 of Huawei’s Ascend 910B chips. Such an amount is, however, rather low compared to the tens of thousands of chips that Chinese companies order from Nvidia, and is probably intended for initial testing of their capabilities. For instance, in August 2023, Baidu, ByteDance, Tencent, and Alibaba ordered about 100,000 Nvidia A800 GPUs, a slower version of the more cutting-edge A100 adapted to comply with the export controls.

Nevertheless, such an order points to a wider tendency among Chinese companies to explore domestically available options and reduce growing vulnerabilities in their chip supply chains. Baidu, despite receiving the most attention, is not alone in this approach: Tencent and Meituan have also reportedly purchased Huawei’s chips for testing in recent months.

Future for Nvidia?

Before the introduction of the US export controls, Nvidia held over 90 percent of China’s AI chip market, making Chinese AI companies and data centers significantly dependent on its chips. However, this dominant position is expected to erode in the coming years as domestic companies rush to provide alternatives. Although the company has developed China-specific chips to work around the export controls, the unpredictability and gradual tightening of the restrictions appear to be damaging Nvidia’s image as a reliable partner for Chinese companies.

Several companies, including Alibaba and Tencent, have already informed Nvidia of their intention to reduce their orders this year. Analyses of Huawei’s Ascend 910B chip suggest its performance reaches about 80 percent of Nvidia’s A100, one of the GPUs most often cited as having been used by Chinese companies, including 01.AI, Baidu, and ZhipuAI, to train their models.

Furthermore, media reports suggest that the performance of the H20, the most powerful GPU Nvidia can currently offer to the Chinese market, could lag behind Huawei’s Ascend 910B in certain technical capabilities, while the prices of the two chips appear comparable. Nvidia has already referred to Huawei as a top competitor, and with many other Chinese companies launching their own chip development projects, the rivalry will only deepen. Less advanced domestically developed chips are simply starting to look like a more cost-effective and reliable option, one that also fits China’s political goal of achieving technological autonomy. Moreover, China’s rapidly growing LLM sector and the number of companies eager to apply language models in their products fuel the demand for secure chip supply chains, while the legislation prohibiting the use of unregistered models for public services allows China to continue building a separate digital environment that favors domestic companies.

Written by

Veronika Blablová


Veronika Blablová works as a Data Analyst of the MapInfluenCE and CHOICE projects at the Association for International Affairs (AMO), Prague, Czech Republic.