Electronic circuit

History of supercomputing

The long history of supercomputers, from the Cold War to the present day, told through the dizzying numbers of FLOPS.

by Stefano Bevacqua
20 February 2020
6 min read

It might be an extreme comparison, and even throw up a paradox, but it’s instructive for our purposes here. My first utility car was handed down to me by my parents in 1972, after I got my licence. It had a top speed of 95 km (59 miles) an hour. The most recent model of the same car, in its basic version, can do 160 km (99 miles) an hour. Exactly 35 years, and a 68% improvement in speed, separate the two cars. Of course, it’s not high speeds that the engineers in Turin have been putting all their efforts into over the years, but it does give you an idea of the progress they’ve made.

Now, let’s take a computer made 35 years ago, in 1985, like the one plonked down in front of me that very year at the publishing company where I worked. It did incredible things, things I couldn’t have dreamed of doing on a Lettera 32. It wrote on the screen instead of on paper. It let me delete, cut and paste and, best of all, send files off to the press, instead of a piece of paper whose contents then had to be typed up on a phototypesetting machine (and before that, a linotype). Obviously, it was a PC, not a calculation centre, but the analogy works. That machine ran at the extraordinary speed of 8 million operations per second. The computer I’m using now, to type this story of the massive leaps in capacity made by the latest computing systems, has a 2.5 GHz clock, meaning it can do 2.5 billion operations a second. Multiply that by eight, the number of cores in its processor, and you’re looking at 20 billion operations a second. In 35 years, the computers used every day in publishing have seen their calculation power multiply by 2,500, and that’s without taking into account the processing ability of their screens. If you will allow me a foray into the absurd, that’s like the next Fiat 500 coming out with a top speed of more than 66 km (41 miles), not per hour but per second.
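For the curious, here is the back-of-the-envelope arithmetic behind that claim, written out as a small Python sketch. It uses only the figures quoted above, plus the same deliberately simplified one-operation-per-clock-cycle assumption made in the text, so treat it as an illustration rather than a benchmark.

# Back-of-envelope arithmetic behind the comparison above,
# using only the figures quoted in the text.
ops_1985 = 8e6                 # the 1985 office PC: ~8 million operations/s
clock_hz = 2.5e9               # today's machine: 2.5 GHz clock
cores = 8                      # eight cores in its processor
ops_2020 = clock_hz * cores    # ~20 billion operations/s (one operation per
                               # cycle per core, the simplification used above)

speedup = ops_2020 / ops_1985
print(f"Speed-up over 35 years: about {speedup:,.0f}x")  # ~2,500x

# The "absurd" car analogy: a 95 km/h runabout scaled by the same factor.
car_kmh_1972 = 95
scaled_km_per_second = car_kmh_1972 * speedup / 3600
print(f"Equivalent top speed: about {scaled_km_per_second:.0f} km per second")  # ~66 km/s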

Supercomputer cables

The birth of the supercomputer

But let’s park our “super” utility car for now and try to understand what’s happened in the world of computers – not the personal variety that we’ve looked at so far to get an idea of the changes, but supercomputers, those big machines designed for incredibly complex calculations. Contrary to the impression you might get from the papers, supercomputers did not appear only in the last few decades. They were invented well before PCs, which were really just a scaled-down version of the machines built at the big American and Soviet calculation centres of the 1950s and ‘60s. What essentially pushed them on to ever greater calculating power was the Cold War and, above all, its most visible symptom, the Space Race: landing on the Moon would require machines ever more adept at calculation. The first supercomputer worthy of the name was built for defence purposes in 1954. Its name was NORC and it was installed by IBM on behalf of the United States Naval Proving Ground in Dahlgren, Virginia. Its calculating speed was 67 kOPS, or 67,000 operations a second. The rest is history: supercomputers’ power grew so much that the unit of measure had to change, from OPS to FLOPS (floating-point operations per second). Just seven years after NORC, IBM followed up with the 7030, also known as Stretch. Another 13 years on, in 1974, Control Data Corporation’s (CDC) STAR-100 appeared on the scene, with 100 million FLOPS. A further 11 years passed before the Cray X-MP4 managed 1 billion FLOPS. Progress seemed boundless. In 1991, even Italy joined the party, when its National Institute for Nuclear Physics built APE100, with 100 billion FLOPS.
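To get a feel for how steep that curve is, here is a rough, purely illustrative Python calculation of the average annual growth implied by the milestones above. It takes the article’s figures at face value and glosses over the fact that the earliest machines were rated in plain operations rather than floating-point operations per second.

# Average annual growth implied by the milestones quoted above
# (treating OPS and FLOPS as comparable for the sake of illustration).
norc_1954 = 67e3       # IBM NORC: ~67,000 operations/s
ape100_1991 = 100e9    # APE100: ~100 billion FLOPS

years = 1991 - 1954
growth = ape100_1991 / norc_1954
annual = growth ** (1 / years) - 1

print(f"About {growth:,.0f}x in {years} years")   # ~1,500,000x
print(f"Roughly {annual:.0%} growth per year")    # ~47% per year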

HPC5

The new era of multicore and PFLOPS

For a few years after that, it looked like the increases were slowing down, and there was a reason for it. Beyond a certain limit, a single processor simply cannot be pushed any further. So, two paths were taken simultaneously. One was multiplying the number of cores, that is, the main calculating units within the processor, which led to dual cores and then a full-on “core race”. The other was multiplying the number of processors connected in parallel, to increase calculating power and thereby speed, something that had already started in the 1990s. The latter changed the architecture of processors and of the large calculating systems that used them. GFLOPS (billions of FLOPS) gave way to TFLOPS (thousands of billions of FLOPS) in 1997, with Intel’s ASCI Red/9152. A decade later, IBM’s Roadrunner turned up with over a million billion FLOPS, forcing TFLOPS to give way to PFLOPS.
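Since the unit of measure changes several times in this story, here is a small, purely illustrative Python helper that turns a raw FLOPS count into the prefixed form used above. The function and the example values are mine (ASCI Red’s roughly 1.1 TFLOPS is not quoted in the article), so take it as a reading aid, not an official record.

# Format a raw FLOPS count with the metric prefixes used in this article.
PREFIXES = [
    (1e18, "EFLOPS"),   # exa:  a billion billion
    (1e15, "PFLOPS"),   # peta: a million billion
    (1e12, "TFLOPS"),   # tera: a thousand billion
    (1e9,  "GFLOPS"),   # giga: a billion
    (1e6,  "MFLOPS"),   # mega: a million
    (1e3,  "kFLOPS"),   # kilo: a thousand
]

def human_flops(value: float) -> str:
    """Return a human-readable string such as '2.5 PFLOPS'."""
    for scale, unit in PREFIXES:
        if value >= scale:
            return f"{value / scale:.1f} {unit}"
    return f"{value:.0f} FLOPS"

print(human_flops(1.1e12))   # ASCI Red/9152 (1997): just over a TFLOPS
print(human_flops(1.0e15))   # Roadrunner (2008): over a million billion FLOPS
print(human_flops(2.5e15))   # Tianhe-1A (2010): 2.5 PFLOPS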
There were only a few kings in the world of supercomputers in the 1990s, all of them American, with the exception of a few European efforts, mainly from Germany, Italy and France. But the new millennium heralded the arrival of the Japanese, and the competition heated up. In 2010, the most powerful supercomputer in the world was Chinese: with 2.5 PFLOPS, Tianhe-1A was a massive game changer. Today, however, the world’s most powerful supercomputer is once again from across the pond and is called Summit. It was turned on in June 2018 at the Oak Ridge National Laboratory in Tennessee and can do 145 PFLOPS, and even 200 PFLOPS for brief periods. Almost 2.5 million cores churn within its bowels, drawing around 10,000 kW of power.

Supercomputers in Italy

So, the race is far from over. Hot on Summit’s heels are another American supercomputer, two Chinese machines, yet another American one and then a mixture of Japanese, Chinese and European models. If Eni’s new HPC5 were included in the current world rankings, its capacity of 52 PFLOPS would earn it sixth place. It works alongside its predecessor HPC4, capable of 18 PFLOPS on its own, and uses an integrated dual architecture that combines CPUs (traditional processors) with GPUs (graphics processors). The machine relies on 1,820 nodes, each with two 24-core processors and four GPU accelerators.
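As a rough sanity check on those numbers, here is a purely illustrative Python estimate of HPC5’s peak performance from the node count above. The per-GPU and per-core throughputs are my own assumptions, roughly typical of hardware of that generation rather than figures from Eni, so the result is only meant to show that the quoted 52 PFLOPS is in the right ballpark.

# Back-of-envelope estimate of HPC5's peak performance from the node counts
# given in the article. The per-device throughputs below are assumptions,
# roughly typical of 2019-era hardware, not official Eni or vendor figures.
nodes = 1_820             # nodes quoted in the article
gpus_per_node = 4         # GPU accelerators per node (from the article)
cpus_per_node = 2         # 24-core processors per node (from the article)
cores_per_cpu = 24

flops_per_gpu = 7.0e12    # ~7 TFLOPS double precision per data-centre GPU (assumption)
flops_per_core = 50e9     # ~50 GFLOPS double precision per CPU core (assumption)

gpu_peak = nodes * gpus_per_node * flops_per_gpu
cpu_peak = nodes * cpus_per_node * cores_per_cpu * flops_per_core

total_pflops = (gpu_peak + cpu_peak) / 1e15
print(f"Estimated peak: about {total_pflops:.0f} PFLOPS")  # ~55 PFLOPS, near the quoted 52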

HPC5 Eni

The European Union has jumped on the bandwagon with its EuroHPC project, which will link eight different calculation centres around the continent, including Cineca in Bologna: three of them with 150 PFLOPS and five with 40 PFLOPS. The aim is to break the 1,000 PFLOPS ceiling, that is, a billion billion floating-point operations a second, within a decade. Indeed, it looks as though the race will never end. If you are wondering at this point what these super-powerful machines are actually useful for, Eni’s HPC5 produces medium-term weather forecasts with unimaginable precision, interprets geophysical data, studies renewable energy sources, manages health crises, runs complex systems with many nodes (road, railway and electricity networks), studies climate change and tackles the great challenge of Artificial Intelligence.