
Recently, IDC and Inspur Information jointly released the "2022-2023 China Artificial Intelligence Computing Power Development Assessment Report" (hereinafter "the Report"). The Report predicts that AI-related spending in China will reach $13.03 billion in 2022 and $26.69 billion in 2026, a compound annual growth rate (CAGR) of 19.6% over 2022-2026.
AI servers remain the main driver of AI market growth. IDC data shows that the global AI server market grew 39.1% year over year in 2021, outpacing the overall global AI market's growth rate of 20.9% and serving as the driving force behind it.
In China, the accelerated adoption of AI applications is largely responsible for the rapid growth of the AI server market: it reached $5.92 billion in 2021, up 68.2% from 2020, and is expected to reach $12.34 billion by 2026.
At the same time, China's computing power, especially intelligent (AI) computing power, is also growing rapidly. According to the Report, China's general-purpose computing power reached 47.7 EFLOPS in 2021 (one EFLOPS is 10^18 floating-point operations per second) and is expected to reach 111.3 EFLOPS by 2026.
China's intelligent computing power, meanwhile, reached 155.2 EFLOPS in 2021, is projected to reach 268 EFLOPS in 2022, and is expected to enter the ZFLOPS range (10^21 floating-point operations per second) by 2026, reaching 1,271.4 EFLOPS.
This means that over 2021-2026, China's intelligent computing power will grow at a CAGR of 52.3%, while general-purpose computing power grows at a CAGR of 18.5% over the same period.
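As a quick sanity check, the growth rates above can be reproduced from the endpoint figures the Report quotes. A minimal Python sketch (the input numbers are the Report's; the script itself is illustrative only):

```python
# Verify the report's CAGR figures from its own endpoint values.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values `years` apart."""
    return (end / start) ** (1 / years) - 1

# Intelligent computing power: 155.2 EFLOPS (2021) -> 1271.4 EFLOPS (2026)
print(f"intelligent: {cagr(155.2, 1271.4, 5):.1%}")   # ~52.3%

# General-purpose computing power: 47.7 EFLOPS (2021) -> 111.3 EFLOPS (2026)
print(f"general:     {cagr(47.7, 111.3, 5):.1%}")     # ~18.5%

# AI market spending: $13.03B (2022) -> $26.69B (2026)
print(f"spending:    {cagr(13.03, 26.69, 4):.1%}")    # ~19.6%
```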
The large models that have gained popularity in the industry in recent years are the most prominent innovation driven by intelligent computing power. According to the Report, thanks to strong generalization, low dependence on long-tail data, and improved efficiency of downstream model use, large models are considered an early form of "general intelligence" and have become one of the important paths the industry is exploring toward inclusive artificial intelligence.
The technical foundations of large models are the Transformer architecture, transfer learning, and self-supervised learning. The Transformer made its breakthrough in NLP and has since proven effective on vision tasks as well. From a computing-power perspective, the capacity of language and vision models, and the compute they demand, are both expanding rapidly; the development of large models rests on enormous computing power.
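For readers unfamiliar with the Transformer, its core operation is scaled dot-product attention. Below is a minimal NumPy sketch; the shapes and inputs are toy values chosen purely for illustration, not anything from the Report:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V  (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)         # pairwise token similarities
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)              # row-wise softmax
    return weights @ V                                     # weighted sum of value vectors

# Toy example: a sequence of 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```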
The "computing-power equivalent" (PetaFLOPS/s-day, PD), i.e., the total compute consumed by a machine running at one petaFLOPS (10^15 floating-point operations per second) for a full day, can be used to measure the total compute an AI task requires. Measured in PD, training models such as AlphaFold2 in AI for Science, autonomous driving systems, and GPT-3 requires hundreds or even thousands of PD; training GPT-3 alone required 3,640 PD.
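The unit conversion is straightforward. A short sketch of what 1 PD, and the Report's 3,640 PD figure for GPT-3, amount to in raw floating-point operations (the 3,640 PD figure is the Report's; everything else follows from the unit definition):

```python
# Convert "PetaFLOPS/s-day" (PD) into raw floating-point operations.

PFLOPS = 1e15            # one petaFLOPS: 10^15 operations per second
SECONDS_PER_DAY = 86_400

one_pd_in_flops = PFLOPS * SECONDS_PER_DAY    # 8.64e19 operations per PD
gpt3_flops = 3640 * one_pd_in_flops           # ~3.1e23 operations total

print(f"1 PD  = {one_pd_in_flops:.3e} FLOPs")
print(f"GPT-3 = {gpt3_flops:.3e} FLOPs")
```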
With the capabilities of large models, AIGC applications such as text-to-image generation and virtual digital humans are rapidly entering commercialization and bringing sweeping changes to metaverse content production. According to the Report, large models are taking AI from "able to listen and see" five years ago to "able to think and create" today, and are expected to achieve "able to reason and decide" in the future.
However, the development of large models also poses huge challenges for computing power. According to the Report, the heavy compute and storage overhead of large-model training places high demands on accelerated computing systems and the AI software stack; training models with hundreds of billions to trillions of parameters often requires thousands of accelerator cards, which greatly complicates the promotion and generalization of large models.
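A rough back-of-envelope estimate shows why card counts climb so high. The sketch below uses the widely cited approximation of roughly 6 x parameters x training tokens FLOPs for transformer training; the corpus size, per-card throughput, and time budget are assumed values for illustration, not figures from the Report:

```python
# Back-of-envelope: accelerator count for training a 100B-parameter model.
# Assumptions (not from the report): ~6 * params * tokens training FLOPs,
# 100 teraFLOPS sustained per card, and a 30-day training budget.

params = 100e9           # 100B-parameter model
tokens = 300e9           # assumed training-corpus size in tokens
train_flops = 6 * params * tokens              # ~1.8e23 FLOPs

per_card_flops = 100e12  # assumed sustained throughput per card (FLOPs/s)
budget_seconds = 30 * 86_400                   # 30 days

cards_needed = train_flops / (per_card_flops * budget_seconds)
print(f"{cards_needed:.0f} cards")             # on the order of ~700
```

Scaling the same arithmetic to a trillion parameters, or halving the effective per-card throughput, pushes the requirement into the thousands of cards the Report describes.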
At the same time, because of diminishing marginal returns, further gains in model complexity and accuracy will demand a disproportionately larger share of computing resources, and concerns about computational efficiency will limit the continued expansion of large-model parameter counts.
Therefore, although the parameter counts of today's large models have not yet reached the number of synapses in the human brain, the market's perception of large models is becoming more rational. The industry increasingly recognizes that large-model development should focus on green, low-carbon operation, on pushing model capabilities down to a broader base of users, and on proving out business models, all of which will pave the way for large models to land at scale across industries.
The Report points out that, overall, AI adoption across industries continues to deepen and application scenarios keep broadening. AI has become an important capability for enterprises seeking new growth, better user experience, and sustained core competitiveness.
Meanwhile, in the 2022 China AI city ranking, Beijing, Hangzhou, and Shenzhen retain the top three spots, Shanghai and Guangzhou rank fourth and fifth, and Tianjin enters the top 10. Beyond the top-10 cities, cities such as Hefei, Wuhan, and Changsha have made notable progress in AI applications, driven by their own industrial advantages and other factors.

