Supercomputers - Extending the Limits of Computing

Monday, September 3rd, 2012

Technically speaking, a supercomputer is a computer that performs at the highest operational rate currently possible. It combines multiple processors linked together and working on the same set of data. To put it simply, it is the fastest computer around, able to carry out heavy mathematical calculations in very little time. Supercomputers are designed for specialized tasks; they are applied to weather forecasting, animated graphics, aerospace, nuclear energy research, fluid dynamics calculations and petroleum exploration.

While a regular computer spreads its processor power across many functions, a supercomputer devotes all of its power to only a few tasks at the fastest possible pace. This is why supercomputers are reserved for exceptional workloads.

Although the very first supercomputers came into view in 1954 with the IBM Naval Ordnance Research Calculator, used at Columbia University, the first official machine was designed in 1964 at Control Data Corporation. It was designed by Seymour Cray, a name still closely associated with the earliest supercomputers. In 1972, Cray formed his own company and released an 80 MHz supercomputer in 1976, the most successful machine of its kind at the time. Cray launched his second supercomputer in 1985; it had eight processors cooled by liquid Fluorinert.

The very first official supercomputer, the CDC 6600, was the size of a small cabinet. It ran at 40 MHz and had one CPU with 10 parallel functional units, supported by 10 peripheral processors. The entire system was cooled with Freon.

Installed at Los Alamos National Laboratory, the second machine, known as the Cray-1, was a 64-bit system built from integrated circuits, able to perform at 136 megaflops, far faster than the CDC 6600. Its architecture comprised 1,662 printed circuits and 144 ICs, and the system was likewise cooled with liquid Freon. Over six years, some 80 Cray-1 machines were sold at a price of $5-8 million each.

As the machines evolved, vector processing techniques came into use, which sped up data processing and also benefited mainframe computers. The earliest supercomputers used only a few processors, but by the 1990s supercomputers had thousands of them. After two decades of U.S. lead, Japan also developed some of the best supercomputers.

Fujitsu's Numerical Wind Tunnel, designed in 1994, had 166 vector processors, each able to operate at 1.7 gigaflops. Hitachi's SR2201, designed in 1996, had 2048 processors and operated at 600 gigaflops.
Finally, Intel, the leader in microprocessors, came into view in 1996 with its Paragon supercomputer, which had 4000 processors. Although Paragon was not a big success, it paved the way for ASCI Red, the first supercomputer built from Pentium processors. ASCI Red had 6000 200 MHz processors and was the first supercomputer to exceed 1 teraflops; a later upgrade operated at 3.1 teraflops. ASCI Red used more than one megawatt of power.

In 2004, IBM released the Blue Gene/L, a series of supercomputers that remained unbeaten for the next four years. Blue Gene/L's peak performance reached 600 teraflops using more than 100,000 processors. The series was unmatched for two main reasons: it did not consume as much power as other supercomputers, and its nodes were integrated as systems-on-a-chip.

Later, in 2008, IBM revealed Roadrunner, another supercomputer, able to perform 1,000 trillion operations per second (one petaflops). This was twice as fast as Blue Gene and six times faster than any other supercomputer in the world.

As technology evolved, supercomputers transformed into groups of closely packed microprocessors. Supercomputers in 2010 operated at 2.5 petaflops, and last year, in 2011, the highest operating speed was marked at 8.2 petaflops. The upcoming target is the exaflops, which is equal to 1000 petaflops. It is expected that between 2018 and 2020 a supercomputer performing at 100 petaflops will be designed. The current focus is on reducing both power consumption and the space these machines occupy. The fastest supercomputer around today is 3.3 billion times faster than the first one.
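To put those units in perspective, here is a minimal Python sketch of the FLOPS scale the article moves through. The milestone figures are the ones quoted above; treat them as illustrative, not benchmark-grade numbers.

```python
# Prefix scale for floating-point operations per second (FLOPS).
UNITS = {
    "megaflops": 1e6,
    "gigaflops": 1e9,
    "teraflops": 1e12,
    "petaflops": 1e15,
    "exaflops": 1e18,
}

def flops(value, unit):
    """Convert a (value, unit) pair into raw operations per second."""
    return value * UNITS[unit]

# Milestones as quoted in the article above.
cray_1   = flops(136, "megaflops")   # Cray-1, 1976
asci_red = flops(1, "teraflops")     # ASCI Red, first past 1 teraflops
top_2011 = flops(8.2, "petaflops")   # fastest machine cited for 2011
exaflop  = flops(1000, "petaflops")  # the "upcoming target"

# 1000 petaflops is, by definition, one exaflops.
assert exaflop == flops(1, "exaflops")

# Ratio of the 2011 leader to the Cray-1.
print(f"2011 leader vs Cray-1: {top_2011 / cray_1:.1e}x faster")
```

Each named tier is a factor of 1,000 over the previous one, which is why the jump from the Cray-1's megaflops to 2011's petaflops spans many orders of magnitude.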

It may sound absurd, but some scientists claim that by 2045 supercomputers will be developed that have self-awareness.

Maybe one day every household will have a supercomputer, given consumers' increasing inclination toward the virtual platform. The thousands of websites registered every day show how real the virtual world is becoming.
