(photo credit: Argonne National Laboratory)
Peter Kogge headed up a DARPA study on the feasibility of creating a 1 exaflop (10^18 flops) supercomputer by 2015. The study group's findings were less than encouraging: even taking into account new technologies such as nanotubes and reduced operating voltages, the power such a machine would require would simply be too great. What's more, many of the cores would sit idle much of the time, drawing power without doing any useful work.
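The power problem comes down to simple arithmetic: sustained power is the operation rate multiplied by the energy spent per operation. A minimal sketch of that calculation follows; the energy-per-flop figures are illustrative assumptions chosen for round numbers, not values from the DARPA study.

```python
# Back-of-envelope power estimate for an exascale machine.
# Sustained power (watts) = operations per second * joules per operation.

def system_power_watts(flops: float, joules_per_flop: float) -> float:
    """Return the sustained power draw implied by a given compute rate
    and per-operation energy cost."""
    return flops * joules_per_flop

EXAFLOP = 1e18  # one exaflop: 10^18 floating-point operations per second

# At an assumed 100 picojoules per flop, an exaflop machine draws 100 MW.
print(system_power_watts(EXAFLOP, 100e-12) / 1e6, "MW")

# Even at an aggressive assumed 20 pJ/flop, the machine still draws 20 MW.
print(system_power_watts(EXAFLOP, 20e-12) / 1e6, "MW")
```

The point of the arithmetic is that no single trick closes the gap: cutting the per-operation energy by a factor of five still leaves a power bill measured in tens of megawatts.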
Supercomputers are the crowning achievement of the digital age. Yes, it’s true that yesterday’s supercomputer is today’s game console, as far as performance goes. But there is no doubt that during the past half-century these machines have driven some fascinating if esoteric pursuits: breaking codes, predicting the weather, modeling automobile crashes, simulating nuclear explosions, and designing new drugs—to name just a few. And in recent years, supercomputers have shaped our daily lives more directly. We now rely on them every time we do a Google search or try to find an old high school chum on Facebook, for example. And you can scarcely watch a big-budget movie without seeing supercomputer-generated special effects.
So with these machines more ingrained than ever into our institutions and even our social fabric, it’s an excellent time to wonder about the future. Will the next decade see the same kind of spectacular progress as the last two did?
So are exaflop computers forever out of reach? I don't think so. Meeting DARPA's ambitious goals, however, will require more than the few short years we have left before 2015. Success in assembling such a machine will demand a coordinated cross-disciplinary effort carried out over a decade or more, during which time device engineers and computer designers will have to work together to find the right combination of processing circuitry, memory structures, and communications conduits, one that can beat the normally voracious power requirements of such a machine down to manageable levels.