TIANJIN, CHINA—In a cavernous room just off the marble-floored lobby of China’s National Supercomputer Center in Tianjin stand more than 100 wardrobe-size black and gray metal cabinets, arranged in ranks like a marching army. They contain the Tianhe-1A supercomputer, which 8 years ago became the first Chinese machine to reign, briefly, as the world’s fastest computer, running at 2.57 petaflops (quadrillion floating-point operations per second). But just upstairs from Tianhe-1A—and off-limits to visitors—is a small prototype machine that, if successfully scaled up, could push China to the top of the rankings again. The goal is a supercomputer capable of 1 exaflop—1000 petaflops, five times faster than the current champion, the Summit supercomputer at Oak Ridge National Laboratory in Tennessee.
China is vying with the United States, Europe, and Japan to plant its flag in this rarefied realm, which will boost climate and weather modeling, human genetics studies, drug development, artificial intelligence, and other scientific uses. But its strategy is unique. Three teams are competing to build China’s machine; the Tianjin prototype has rivals at the National Supercomputing Center in Jinan and at Dawning Information Industry Co., a supercomputer manufacturer in Beijing. The Ministry of Science and Technology (MOST) will probably select two for expansion to exascale by the end of the year. The approach is a chance to spur innovation, says Bob Sorensen, a high-performance computing analyst at Hyperion Research in St. Paul, Minnesota. It “encourages vendors to experiment with a wide range of designs to distinguish themselves from their competitors,” he says.
China may not be first to reach this computing milestone. Japan’s Post-K exascale computer could be running in 2020. The United States is aiming to deploy its first exascale system at Argonne National Laboratory in Lemont, Illinois, in 2021. The European Union is ramping up its own program. China is aiming for 2020, but the date may slip.
Being first is not China’s only goal, however. Having three competing teams will ensure broad-based technological advancement in computer chips, operating software, networking, and data storage technologies, says Meng Xiangfei, a physicist leading exascale application R&D for the center here. Building domestic capacity is particularly important for central processing units (CPUs) and specialized chips called accelerators, which boost a computer’s performance. China relied on U.S.-made Intel CPUs for several generations of supercomputers, says Jack Dongarra, a computer scientist at the University of Tennessee in Knoxville, but in 2015, the U.S. government barred the export of certain chips for security reasons. That move “provoked the Chinese government to make a heavy investment” in processors, he says. All three exascale prototypes use chips made in China.
The three-team strategy also allowed MOST to share costs with regional governments, which hope that a leading-edge supercomputer will spur technological development and lure institutes and businesses. Qian Depei, a computer scientist at Beihang University in Beijing who serves on MOST’s exascale evaluation team, says the prototypes cost about $9 million each; MOST put up half and the rest came from local sources.
The prototypes have faced a battery of tests for speed, stability, and energy consumption plus trial runs of software from different application areas, but the results are “very secret,” Meng says. The final budget is also unclear, though at the outset a governmental advisory committee estimated one exascale computer would cost 2 billion to 3 billion yuan ($288 million to $432 million).
Even after the two winners are announced, Qian says the third team will probably remain involved so that its hard-won expertise is not wasted. Scaling up the prototypes, which operate in the range of 3 petaflops, will mean interconnecting enough CPUs and accelerators to reach an exaflop, refining the liquid cooling systems needed to remove heat and improve efficiency, and perfecting the operating software needed for the massively parallel arrangement of processors to work together.
China once lagged in developing application software needed to do interesting science with supercomputers, but it has been catching up, Meng says. For the past 2 years, Chinese groups have won the Gordon Bell Prize presented annually by the Association for Computing Machinery for innovations in applying high-performance computing to science, engineering, and large-scale data analytics. Chinese scientists are now working on new applications, says Yang Meihong, director of the Jinan center. For example, going to exascale will allow a dramatic improvement in the spatial resolution of global atmospheric models, “which will be greatly significant for a deeper understanding of the mechanisms of climate change,” she says.
The United States still dominates among the truly powerful supercomputers used for research, with 21 systems in the top 50 to China’s two. But scientists play down the ranking’s importance. “Having the top 500 No. 1 supercomputer—that’s pretty good, but that’s not the goal,” Qian says. “The real measure should be what kind of new science we have as a result of these computers,” Dongarra says.