Engineering Dean: Traditional Supercomputers Will Disappear In A Decade

November 19, 1997

Traditionally designed supercomputers will vanish within a decade. That's a prediction from John Hennessy, professor of computer science and dean of Stanford's School of Engineering. "The last bastion of non-microprocessor computing, the vector-based supercomputers, will vanish in the next five or 10 years," Hennessy predicted in a talk on the future of supercomputing on Nov. 19 at the SC'97 conference in San Jose.

For decades, supercomputers have relied on specialized "vector" architectures to give them the fastest processing speeds in the world.

Today, however, the microprocessor, that humble computer-on-a-chip, is nipping at the supercomputer's heels. Micros have already rendered obsolete the minicomputers that were heavily used for research and the mainframe computers that were once the mainstay of the corporate world. It's only a matter of time for the traditional supercomputer.

Technical and economic factors are combining to make the specialized architectures that have characterized supercomputers for the last 40 years less and less cost effective, he said.

As microprocessor-based computers get faster, they can handle a growing fraction of computing jobs. As that market grows, the software improves, allowing these computers to take over more of the jobs that previously required supercomputers. As a result, demand for special-purpose supercomputers is decreasing.

"Most importantly, the price-performance gap between the two types of machines is closing at a compounding rate," Hennessy said.

In the 1970s, the price-performance ratio of the Cray-1 supercomputer was about the same as that of a VAX minicomputer. Today, users get more than twice the processing power per dollar from a desktop scientific workstation as they do from a supercomputer, and it won't be long before they are getting five times the bang for their buck, Hennessy said.
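
As a purely hypothetical illustration of that price-performance comparison, the sketch below computes MFLOPS per dollar for an invented workstation and an invented supercomputer; none of the figures come from Hennessy's talk.

```python
# Hypothetical illustration of the price-performance comparison described above.
# None of these figures come from the article; they are placeholders chosen only
# to show how the ratio is computed.

def price_performance(mflops: float, price_dollars: float) -> float:
    """Return delivered MFLOPS per dollar."""
    return mflops / price_dollars

# Placeholder numbers for a late-1990s workstation and a vector supercomputer.
workstation = price_performance(mflops=500.0, price_dollars=25_000.0)
supercomputer = price_performance(mflops=10_000.0, price_dollars=1_000_000.0)

print(f"workstation:   {workstation:.4f} MFLOPS per dollar")    # 0.0200
print(f"supercomputer: {supercomputer:.4f} MFLOPS per dollar")  # 0.0100
print(f"workstation advantage: {workstation / supercomputer:.1f}x")  # 2.0x
```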

During the 1980s, three different types of supercomputer architecture developed in parallel: the traditional vector machines, bus-based shared-memory multiprocessors, and massively parallel machines built from large numbers of microprocessors.

Another reason that the future of the vector-based machines is limited, Hennessy said, is that supercomputers can no longer afford to run on totally customized software. "The days when supercomputer users all wrote their own programs are gone. Today, everyone relies on vendor codes, and what vendors care about the most, how they make the most money, is writing for the largest market," he said. As cycles have become cheaper, programmers have become increasingly expensive, a trend that is likely to continue, so dependence on commercial software is likely to become even greater.

On the other hand, there are two basic reasons why bus-based machines are increasing in number so rapidly.

First, they support the most common programming paradigm, an approach called shared memory. In shared-memory architectures, all processors can access all of a system's memory. Since these machines are simpler to program and more common, independent software vendors have found them an attractive base for developing programs. In recent years, the independents have been responsible for an increasing fraction of the software being installed.

Second, more and more desktop computers are running two to four microprocessors. With its Pentium series, Intel Corp. added special hardware capabilities that made it much easier to run several processors at a time.
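
A minimal sketch of the shared-memory paradigm described in the first reason above, using Python threads as a stand-in for a multiprocessor's CPUs: every worker reads and writes the same array directly, with no message passing.

```python
# Minimal sketch of the shared-memory programming model: all workers see the
# same memory, so they cooperate by reading and writing a common array directly.
import threading

data = [0] * 8          # one array, visible to every thread
lock = threading.Lock() # guards concurrent updates to shared state

def worker(start: int, end: int) -> None:
    for i in range(start, end):
        with lock:
            data[i] += i  # update shared memory in place; no messages are sent

threads = [threading.Thread(target=worker, args=(i, i + 4)) for i in (0, 4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(data)  # [0, 1, 2, 3, 4, 5, 6, 7]
```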

At the same time, research architectures like Stanford's experimental Directory Architecture for Shared Memory (DASH) computer have shown that shared-memory systems can be scaled up to the size of the massively parallel machines. This design, called distributed shared memory, allows an entire network of desktop computers to act like a single large computer when given problems that require large amounts of computing power to solve.
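
The sketch below is a toy illustration of the directory idea behind distributed shared memory, not Stanford's actual DASH protocol: a per-block directory records which nodes hold a cached copy, so a write can invalidate stale copies and keep the whole machine looking like a single coherent memory.

```python
# Toy sketch of directory-based cache coherence, the idea behind distributed
# shared memory. Illustration only, not the actual DASH protocol.

class Directory:
    def __init__(self, num_blocks: int):
        # For each memory block, track which nodes currently cache a copy.
        self.sharers = {block: set() for block in range(num_blocks)}
        self.memory = {block: 0 for block in range(num_blocks)}

    def read(self, node: int, block: int) -> int:
        # A reader is recorded as a sharer; it may now cache the value locally.
        self.sharers[block].add(node)
        return self.memory[block]

    def write(self, node: int, block: int, value: int) -> None:
        # Before the write, every other cached copy must be invalidated so that
        # all nodes continue to see a single coherent memory image.
        for other in self.sharers[block] - {node}:
            print(f"invalidate block {block} in node {other}'s cache")
        self.sharers[block] = {node}
        self.memory[block] = value

directory = Directory(num_blocks=4)
directory.read(node=0, block=2)
directory.read(node=1, block=2)
directory.write(node=1, block=2, value=42)   # invalidates node 0's copy
print(directory.read(node=0, block=2))       # node 0 re-reads the new value: 42
```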

Cluster architecture

Meanwhile, massively parallel machines are looking increasingly like networks of personal computers packaged in a single box, an approach called cluster architecture. Such machines can incorporate much faster networking than is available externally. As the performance of standard networking technology improves, however, the gap between this approach and the distributed shared memory model will shrink, Hennessy said.
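
By contrast with shared memory, a cluster node owns its memory privately and cooperates by exchanging messages over the interconnect. The sketch below uses Python processes and pipes purely as stand-ins for cluster nodes and their network.

```python
# Sketch of the message-passing style used on clusters: each process owns its
# memory, and results travel over an explicit communication channel, here a
# pipe standing in for the cluster interconnect.
from multiprocessing import Process, Pipe

def node(rank: int, conn) -> None:
    # Each "node" computes on its own private data ...
    partial_sum = sum(range(rank * 100, (rank + 1) * 100))
    # ... and communicates the result explicitly, not through shared memory.
    conn.send((rank, partial_sum))
    conn.close()

if __name__ == "__main__":
    parent_ends, children = [], []
    for rank in range(4):
        parent_conn, child_conn = Pipe()
        p = Process(target=node, args=(rank, child_conn))
        p.start()
        parent_ends.append(parent_conn)
        children.append(p)

    total = sum(conn.recv()[1] for conn in parent_ends)
    for p in children:
        p.join()
    print(total)  # sum of 0..399 = 79800
```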

The supercomputer market will probably move to a combination of cluster-based architecture, which IBM, the University of California-Berkeley and Sandia National Laboratories (as a user) are pursuing, and distributed shared memory architecture, which is being developed by Stanford, the Massachusetts Institute of Technology, Silicon Graphics, HP Convex and Los Alamos National Laboratory (as a user), Hennessy said.

"The big question for the future of supercomputing is the trade-off between centralized or distributed architectures," Hennessy said. People like the idea of buying little pieces and then aggregating them, but doing so with today's off-the-shelf hardware comes with a price, a high level of processing overhead that makes such an aggregate slower than a centralized computer with the same power.

One factor that will tip the balance toward the distributed model is the increased importance being given to communications. For many years, communications was considered a peripheral ability, so it received little attention. That has changed dramatically. Today, networking has risen to a first-rank function, and experts are lavishing a lot of time on networking and communication issues. Dramatic improvements are likely, Hennessy said.

In the long run, however, the best solution will be to create a computer architecture that can operate with equal efficiency in both the centralized and distributed modes. "We need to develop the right primitives, the right building blocks, so that we can build systems that work well running alone or in concert with an array of other machines," Hennessy said.

Stanford University
