TL;DR
- Domestic Training: Zhipu AI became the first Chinese company to train a major AI model entirely on Huawei’s domestic chips.
- Hardware Independence: The GLM-Image system was developed using Huawei’s Ascend processors without any US semiconductor technology.
- Open Source Strategy: Zhipu released the model as open-source software to build a developer ecosystem and compete against better-funded rivals.
- Export Control Impact: The achievement demonstrates that US chip restrictions have not prevented China from developing competitive AI systems.
Zhipu AI announced this week that it has become the first Chinese company to train a major AI model entirely on domestic chips, using Huawei’s Ascend processors to develop its GLM-Image model without US semiconductor technology.
The achievement validates that China’s multi-billion dollar investment in domestic semiconductor infrastructure can power competitive AI systems despite American technological containment strategies.
Breaking US Semiconductor Dependence
According to Zhipu, the entire training pipeline for GLM-Image ran on Huawei’s Ascend Atlas 800T A2 server, built around the company’s in-house Ascend AI processors and MindSpore machine learning framework. DeepSeek’s well-documented difficulties training models on Huawei hardware make Zhipu’s achievement particularly notable, demonstrating that fully domestic training is technically feasible despite earlier high-profile setbacks.
GLM-Image’s release as open-source software provides Chinese developers with a reference implementation demonstrating domestic chip viability for computationally intensive AI tasks.
Inside GLM-Image’s Architecture
The GLM-Image model employs a hybrid autoregressive-diffusion architecture: an autoregressive encoder processes the text prompt, and its representations condition a diffusion decoder that generates images through iterative denoising.
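To make the two-stage idea concrete, here is a minimal NumPy sketch of an autoregressive-encoder-into-diffusion-decoder pipeline. This is purely illustrative and not Zhipu’s implementation: the running-mean “encoder,” the outer-product “image,” and every constant below are assumptions standing in for the real components.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_prompt(token_ids, dim=16):
    """Toy 'autoregressive encoder': each token updates a running
    context state; the final state conditions the decoder."""
    table = np.random.default_rng(42).normal(size=(1000, dim))  # toy embedding table
    ctx = np.zeros(dim)
    for i, t in enumerate(token_ids, start=1):
        ctx = ctx + (table[t] - ctx) / i  # running mean over token embeddings
    return ctx

def diffusion_decode(cond, shape=(8, 8), steps=20):
    """Toy 'diffusion decoder': start from Gaussian noise and iteratively
    denoise toward a conditioning-dependent target."""
    target = np.outer(cond[:shape[0]], cond[:shape[1]])  # stand-in "image"
    x = rng.normal(size=shape)                           # initial noise
    for _ in range(steps):
        x = x + 0.3 * (target - x)  # each step removes part of the residual noise
    return x, target

cond = encode_prompt([3, 14, 159])
img, target = diffusion_decode(cond)
print("final mean error:", np.abs(img - target).mean())
```

Each denoising step shrinks the residual geometrically (factor 0.7 per step here), which is the same broad shape as real diffusion samplers: a noisy start refined over a fixed number of steps under text conditioning.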
The Atlas 800T A2 server used for training also houses four of Huawei’s Kunpeng 920 processors.
Huawei claims the Ascend 910C can reach around 800 TFLOPS at FP16 precision, roughly 80% of the Nvidia H100’s throughput. That is sufficient for current-generation models like GLM-Image, but the gap compounds when training frontier models that require thousands of accelerators running for months. Rather than pursuing the brute-force scaling that characterizes Western AI development, Chinese firms optimize for architectural efficiency out of necessity.
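A back-of-envelope calculation shows what that per-chip gap means in wall-clock terms. The ~1,000 TFLOPS H100 figure is inferred from the article’s “800 TFLOPS ≈ 80%” claim, and the 30-day run length is a hypothetical, not a reported number.

```python
# Wall-clock impact of the per-chip throughput gap, for a fixed FLOP
# budget and equal chip counts. All figures beyond the article's
# 800 TFLOPS / ~80% claim are illustrative assumptions.
H100_TFLOPS = 1000.0    # implied by the article's "80% of H100" claim
ASCEND_TFLOPS = 800.0   # Huawei's claimed FP16 figure for the 910C

slowdown = H100_TFLOPS / ASCEND_TFLOPS  # per-chip slowdown factor
h100_days = 30.0                        # hypothetical frontier training run
ascend_days = h100_days * slowdown
print(f"{slowdown:.2f}x slower -> {ascend_days:.1f} days vs {h100_days:.0f} days")
```

A 25% per-chip penalty looks modest in isolation, but over months-long runs with thousands of accelerators it translates into weeks of extra wall-clock time and proportionally higher energy and failure costs, which is why the gap “compounds” at frontier scale.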
GLM-Image consumes fewer computational resources than comparable Western models, making it more economical for deployment across China’s vast enterprise market.
Circumventing US Export Controls
China’s chip independence push intensified after the Biden administration expanded semiconductor export controls in 2022. Regulations restricted sales of advanced AI accelerators from Nvidia and AMD to Chinese customers. The controls targeted chips exceeding certain performance thresholds, effectively cutting off access to the hardware that powers advanced AI systems in Western markets.
Huawei responded by accelerating Ascend chip development, positioning the 910 series as China’s primary alternative to banned Nvidia products. Industry analysis suggests Huawei shipped several thousand Ascend 910B processors to Chinese AI companies in 2024, though total volumes remain far below Nvidia’s pre-restriction shipments to China.
GLM-Image’s emergence challenges the effectiveness of US semiconductor restrictions. The achievement demonstrates that cutting off advanced chips has not prevented China from developing competitive AI systems for certain model categories.
Zhipu’s Open-Source Journey
This hardware independence strategy aligns with Zhipu’s broader approach to competing against better-resourced rivals. Zhipu AI launched the open-source GLM-4.5 language model in July 2025, positioning the company as China’s answer to OpenAI.
The Tsinghua University spin-off adopted aggressive open licensing under MIT terms, allowing commercial use without the restrictions common in Western models’ licenses.
CEO Zhang Peng previously articulated a strategic philosophy diverging sharply from OpenAI’s approach to model access:
“We share this view with OpenAI, but differ in approach. Unlike OpenAI’s closed system, we adopt an open strategy to advance science and technology, fostering industry-academia collaboration while focusing on continuously enhancing the capabilities of our strongest foundational model.”
Zhang Peng, CEO of Zhipu AI (via PR Newswire)
By July 2025, Chinese companies had released more than 1,500 large language models, according to Xinhua News Agency, a figure that has likely grown substantially since.
This created a development ecosystem that contrasts with the West’s concentration around proprietary systems. For Zhipu, competing against better-funded rivals like Alibaba and Baidu through proprietary models would require capital reserves the company lacks.
Open licensing turns this disadvantage into an advantage by building an ecosystem of developers whose contributions enhance GLM’s core architecture. The proliferation of Chinese open-source models reflects both strategic calculation and market reality.
By releasing its GLM models under permissive licensing, Zhipu trades direct revenue for ecosystem control, positioning GLM as infrastructure that thousands of Chinese developers improve through contributions.
From Tsinghua Spinoff to Global Player
Zhang Peng founded Zhipu in 2019 with backing from Tsinghua University’s prestigious computer science department, initially focusing on natural language processing research. ChatGPT’s launch in late 2022 validated Zhipu’s technical direction, triggering a wave of Chinese investment in generative AI that benefited the company.
Zhipu expanded internationally in 2024, opening offices in the United States, United Kingdom, and France. As of July 2025, Zhipu had raised over $400 million from investors including Alibaba Cloud, Tencent, Ant Group, and Saudi Arabia’s Prosperity7 Ventures, pushing the company’s valuation above $20 billion.
Navigating US Restrictions
Yet this international success story confronts growing geopolitical headwinds. US policymakers added multiple Chinese AI companies to restricted trade lists in 2024, limiting their access to American technology. Zhipu’s inclusion constrains partnerships with US cloud providers and complicates international expansion plans.
OpenAI specifically flagged Zhipu’s growing government contract portfolio as a competitive concern in June 2025. Open-source licensing partially circumvents these barriers by allowing developers worldwide to download and deploy GLM models without direct Zhipu involvement.
While Zhipu cannot sell AI services directly to Western customers or partner with US cloud providers, GLM models flow freely across borders as downloadable software packages.
Broader Impact on Global AI Competition
GLM-Image demonstrates that chip export restrictions have not prevented China from developing competitive AI infrastructure for specific model categories. This validates China’s long-term strategy of building domestic semiconductor capabilities.
Industry analyst Bi Qi, Chief Scientist at China Telecom and Nokia Bell Labs Academician, provided perspective on China’s competitive position in a statement to Global Times last year:
“While most US AI firms adopt closed-source models, China’s open-source approach as a challenger has the potential to exert certain commercial impact on the leaders. Despite US technological advantages, China also has considerable advantages in terms of engineering and market scale.”
Bi Qi, Chief Scientist at China Telecom and Nokia Bell Labs Academician (via PR Newswire)
China’s 1.4 billion population creates deployment advantages that compound over time through data accumulation and iteration cycles. While Western firms may train more powerful base models using superior hardware, Chinese companies can rapidly refine models through deployment across hundreds of millions of users.
For Chinese AI developers and enterprises, GLM-Image’s success forces a strategic decision. Companies that spent the past two years scrambling for Nvidia chips through gray market channels now face a choice: invest in adapting workflows to Huawei’s ecosystem or remain dependent on increasingly restricted Western hardware.
As more Chinese AI firms validate domestic chip viability through production deployments, the competitive pressure to follow suit intensifies. Western policymakers face the mirror image of this dilemma: their semiconductor restrictions have accelerated rather than delayed China’s push toward self-sufficiency, potentially creating a bifurcated global AI ecosystem where Chinese and Western models develop along incompatible technological paths.

