Moore’s Law and Computer Processing Power

Long before today’s super-powered cell phones, Gordon Moore predicted that computer processing power would march forward inexorably. In 1965, the same year that Allen Ginsberg helped spark the flower power movement with the publication of his essay “How to Make a March/Spectacle,” the future co-founder of Intel took a close look at the state of technological development and posited that the number of transistors that could be manufactured on a computer chip would approximately double every year. (In 1975, he revised that prediction to every two years.)
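Expressed as a simple growth model (a back-of-the-envelope sketch, not Moore’s own formulation), the revised two-year rule looks like this; the starting figure of roughly 2,300 transistors for the 1971 Intel 4004 is an outside, illustrative number rather than one taken from this article:

```python
# A rough sketch of Moore's revised (1975) prediction:
# transistor counts double roughly every two years.
def projected_transistors(initial_count: float, years_elapsed: float,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward assuming a fixed doubling period."""
    return initial_count * 2 ** (years_elapsed / doubling_period_years)

# Illustrative: start from ~2,300 transistors (roughly the 1971 Intel 4004)
# and project 30 years ahead at a two-year doubling period.
print(f"{projected_transistors(2_300, 30):,.0f}")  # ~75 million transistors
```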

Just as flower power sent shock waves through American culture, Moore’s Law, as his proclamation came to be known, transformed the computing industry. It began as a prediction born of Moore’s observations of the chip industry, but it also became a rallying cry and a moving target that chip makers strove to hit with manufacturing improvements. Increased transistor density, and the vast increases in processing power that came with it, helped drive the information economy of the late 20th and early 21st centuries.

Although Moore’s prediction referred to transistor density, it has been popularly widened to refer to processing power. Almost since its inception, industry watchers have debated how long such rapid doubling could continue. For decades, processing power followed the expected path, with clock speeds ramping up from 740 kHz at the start of the 1970s to 8 MHz (8,000 kHz) by the end of that decade.

A similar look at the 2000s suggests a different story. Between 2000 and 2009, clock speeds barely doubled, from 1.3 GHz to 2.8 GHz. But in the mid-2000s, computer manufacturers introduced multi-core CPUs, which run operations on several processing cores simultaneously. When those cores are taken into account, the number of transistors per chip (the true measure of Moore’s Law) increased from 37.5 million in 2000 to 904 million in 2009.
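As a rough sanity check (using only the figures quoted above, taken at face value), one can back out the effective doubling period that each set of numbers implies:

```python
import math

def doubling_period(start: float, end: float, years: float) -> float:
    """Years per doubling implied by growth from `start` to `end` over `years`."""
    return years / math.log2(end / start)

# 1970s clock speeds: 740 kHz to 8 MHz over roughly a decade
print(round(doubling_period(740e3, 8e6, 10), 1))    # ~2.9 years per doubling

# 2000s clock speeds: 1.3 GHz to 2.8 GHz between 2000 and 2009
print(round(doubling_period(1.3e9, 2.8e9, 9), 1))   # ~8.1 years per doubling

# 2000s transistor counts: 37.5 million to 904 million over the same period
print(round(doubling_period(37.5e6, 904e6, 9), 1))  # ~2.0 years per doubling
```

In other words, clock speed stopped tracking Moore’s Law in the 2000s, but transistor counts held to roughly the two-year doubling that Moore predicted.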

Moore’s Law does have an ultimate limit. Once transistors are miniaturized to the scale of individual atoms, it will be physically impossible to increase their density any further. That looming physical reality has manufacturers scrambling to develop new chip architectures that can allow computing power to continue its unchecked growth.

For example, new chemical processes are making it possible to form patterns of ultra-thin wires onto a semiconductor wafer. These novel structures could be used to create new types of computer chips that could then be improved in accordance with Moore’s Law. Other advances include a form of the metal tin that is just one atom thick. These structures can conduct electricity with 100 percent efficiency at room temperature, a property that could allow the construction of materials that conduct electricity along their edges but are insulating in the interior. Such a material could be combined with existing chip-making techniques to increase speed and reduce power consumption.

It would seem that advancing technology will allow Moore’s Law to continue applying to the growth of processing speed, if not strictly to the increasing number of transistors on a chip that Gordon Moore originally envisioned.

Data storage, while also improving over time, hasn’t kept pace with Moore’s Law. The result is a gap between the performance of CPUs and that of hard disk drives, and the input/output rate is often a bottleneck that prevents full use of improved processor speeds. That problem has prompted manufacturers to turn to other technologies such as flash memory, which, though more expensive, is thousands of times faster than hard disk drives. Traditionally, flash couldn’t be readily shared or scaled, but the arrival of cloud-enabled data centers has addressed those problems.

So for now, data storage can hope to keep up with processing speeds, but the arms race between processing speeds and storage speeds is unlikely to abate, at least as long as Moore’s Law holds true.