ADD TIME:2025-01-21
The booming development and widespread application of AI technology have created unprecedented demand for high-performance computing chips, which in turn has raised overall server performance. ChatGPT has demonstrated human-level performance on a range of professional and academic benchmarks and surpassed 100 million users within two months of its release. In China, Baidu's ERNIE Bot, Alibaba's Tongyi Qianwen, and a series of other large language models have also been launched. In an AI architecture, the chip layer supplies the computing power that underpins the whole stack: every training run and every inference pass of a large model draws on chip-provided compute. GPT's parameter count has grown exponentially across successive generations, and as AI develops further, demand for computing power will keep expanding, continuing to drive the market for high-performance computing chips. EO Think Tank forecasts that China's AI chip market will reach 178 billion yuan by 2025, a CAGR of 42.9% over 2019-2025. Meanwhile, server leader Intel has begun shipping server products for HPC and AI workloads, claiming up to a 30x performance improvement in AI and moving to industry-leading memory and interface standards such as DDR5 and PCIe 5.0.
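As a sanity check on the forecast above, the 2025 target and the stated CAGR together imply a 2019 base-year market size. A minimal sketch (the 6-year window is taken from the stated 2019-2025 period; the function name is ours):

```python
def implied_base(final_value, cagr, years):
    """Back out the starting value implied by a final value and a CAGR."""
    return final_value / (1 + cagr) ** years

# EO Think Tank forecast: 178 billion yuan in 2025 at a 42.9% CAGR over 2019-2025.
base_2019 = implied_base(178, 0.429, 6)
print(f"implied 2019 market size: {base_2019:.1f} billion yuan")  # ≈ 20.9
```

That is, the forecast is internally consistent with a Chinese AI chip market of roughly 21 billion yuan in 2019.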
Meeting this massive compute demand depends on AI servers and the high-performance GPU chips they carry. Because GPUs support both training and inference and can deliver 10-100x the throughput of CPUs, they are better suited to AI model training and inference; GPUs accounted for 91.9% of China's AI chip market in 2021. Huajing Industry Research Institute predicts that the global GPU market will maintain high-speed growth, reaching 185.31 billion US dollars by 2027 at a compound annual growth rate of 32.82%. Nvidia is currently the leader in GPU chips: a server equipped with eight of its latest H100 SXM GPUs offers up to 50 times the computing power of a server equipped with its previous-generation A100. AI servers are generally fitted with 4-8 GPGPU cards forming a GPU module, which raises the GPU board count relative to general-purpose servers; these boards are typically 4- or 6-layer PCBs.
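The GPU forecast above does not state its base year; assuming the 32.82% CAGR runs over the six years 2021-2027 (our assumption, not stated in the source), the implied starting market size can be checked the same way:

```python
def cagr(start, end, years):
    """Compound annual growth rate between a start and an end value."""
    return (end / start) ** (1 / years) - 1

# Huajing forecast: $185.31B in 2027 at a 32.82% CAGR.
# Assumption: the forecast window is 2021-2027 (6 years).
base_2021 = 185.31 / (1 + 0.3282) ** 6
print(f"implied 2021 market size: ${base_2021:.1f}B")  # ≈ 33.8

# Round trip: the rate recovered from base and target matches the stated CAGR.
print(f"recovered CAGR: {cagr(base_2021, 185.31, 6):.4f}")  # ≈ 0.3282
```

Under that assumption, the forecast implies a global GPU market of roughly $34 billion in 2021.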