Moore's Law

Ramesh Kumar¹, Priye Ranjan², Mr. Dipesh Das³
¹B.Tech (CSE) scholar, IIMT Institute of Technology, Gr. Noida (UP)
²B.Tech (CSE) scholar, IIMT Institute of Technology, Gr. Noida (UP)
³Department of Computer Science and Engineering, IIMT Institute of Technology, Gr. Noida (UP)
Abstract - Moore suggested an exponential growth of the number of transistors in integrated
electronic circuits. In this paper, Moore's law is derived from a preferential growth model of
successive production technology generations. The theory suggests that products manufactured with
a new production technology, generating lower costs per unit, have a competitive advantage on the
market. Previous technology generations are therefore replaced according to a Fisher-Pry law. Also
discussed is the case in which a production technology is governed by a cost-relevant characteristic.
If this characteristic is bounded by a technological or physical limit, the presented evolutionary
model predicts an asymptotic approach to that limit. The model is applied to the evolution of wafer
size and to the long-term evolution of Moore's law under a physical boundary of the lithographic
production technology. It predicts that the miniaturization of electronic devices will slow
down considerably in the next two decades.
It has been forty years since Gordon Moore first posited what would one day come to be
known as Moore's Law. Gordon's ideas were more than a forecast of an industry's ability to improve;
they were a statement of the ability for semiconductor technology to contribute to economic growth
and even the improvement of mankind in general. More importantly, Moore's Law set forth a vision
of the future that harnessed the imaginations of scientists and engineers to make it all possible.
Keywords: DRAM - Dynamic Random Access Memory, MOS - Metal Oxide Semiconductor
I. INTRODUCTION
"Moore's law" is the observation that, over the history of computing hardware, the number of
transistors in a dense integrated circuit has doubled approximately every two years.
The observation was made in 1965 by Gordon Moore, co-founder of Intel, who predicted that this
trend would continue for the foreseeable future. In subsequent years the pace slowed somewhat, but
data density has doubled approximately every 18 months, and this is the current definition of
Moore's Law, which Moore himself has endorsed. Most experts, including Moore himself, expect
Moore's Law to hold for at least another two decades.
Stated more precisely, the law holds that the number of transistors on an affordable CPU doubles
every two years (essentially a restatement of the above); "more transistors" is the accurate
formulation, rather than raw speed.
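The two-year doubling can be sketched numerically. The 1971 baseline of 2,300 transistors (roughly the Intel 4004) is an illustrative assumption, not a figure taken from this paper:

```python
# Projected transistor count under a strict two-year doubling.
# The 1971 baseline of 2,300 transistors (roughly the Intel 4004)
# is an illustrative assumption, not a figure from this paper.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project the transistor count for a given year."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1981, 1991, 2001):
    print(y, round(transistors(y)))
```

Thirty years of doubling every two years multiplies the count by 2^15, i.e. more than 32,000-fold, which conveys how steep the exponential really is.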
II. HOW MOORE’S LAW WORKS
The discovery of semiconductors, the invention of transistors and the creation of the
integrated circuit are what make Moore's Law -- and by extension modern electronics -- possible.
Before the invention of the transistor, the most widely-used element in electronics was the vacuum
tube. Electrical engineers used vacuum tubes to amplify electrical signals. But vacuum tubes had a
tendency to break down and they generated a lot of heat, too. The most noticeable effects of Moore's
Law are smaller, cheaper, more energy efficient, and, of course, faster computers. In fact, each
@IJMTER-2016, All rights Reserved
508
International Journal of Modern Trends in Engineering and Research (IJMTER)
Volume 03, Issue 04, [April– 2016]
ISSN (Online):2349–9745 ; ISSN (Print):2393-8161
doubling of transistor density results in an effective quadrupling of computational power, for reasons
that will be explained shortly.
But first, we should discuss why Moore's Law holds up as well as it does. It is, after
all, just a prediction; there is no law of physics shrinking the transistors in your
computer, merely human engineers finding creative new ways to refine their product.
One reason is purely economic. Moore's Law has held up for a long time in a very
competitive industry; corporations that fail to steadily produce faster chips run the grave risk of
being outflanked by a more aggressive or paranoid firm. And so, each company attempts to keep
pace with the Law and cut a little ahead of it, which results in a leapfrogging progression of chip
designs lining up very nicely with Moore's prediction.
But there must be scientific reasons why engineers are continuously able to shrink their
transistors. Indeed, there are. As small as transistors are -- and they are already the most
microscopically tiny devices ever assembled by the human race -- they have plenty of room to
become smaller. Theoretical experiments have shown that computation can be performed within
single atoms or even smaller particles, while today's transistors are still composed of millions of
atoms. We cannot jump straight to molecular computing with the tools we have now, but faster
computers and specialized devices produced by a given phase of engineering consistently enable
engineers to design the next phase, in a spiral of ever-shrinking designs.
Lest we glorify chip engineers too much, it should be mentioned that the natural consequences
of shrinking transistor size do most of the work. Why? Because as transistors shrink, so does the
time it takes them to perform their switching operations. Smaller transistors, as a rule, are faster and
more energy efficient, and can therefore do more work per given unit of time and energy. Shrinking
the transistors by 50% on a chip of a given size yields approximately quadruple the computing
power, because both the number and speed of the transistors have been increased. (And don't forget
the power savings per calculation!)
Here is an analogy that works because the mathematical calculations are identical to those used
to approximate the performance of a simple, hypothetical computer chip:
Suppose you have an ice-cube tray, and your goal is to produce as many ice cubes as possible
without changing the size of the tray -- only the size of the boxes where the cubes will form. Here is
your word problem for the day:
You start with boxes small enough that you can pack them in 10 high and 10 wide. This 10 x 10
grid gives you 100 ice cubes. How many squares high and wide would you need to squeeze in if
you hoped to produce at least 200 ice cubes?
If 20 x 20 came to mind... bzzzt! The correct answer, of course, is 15 x 15, which will produce 225
squares. Hence, shrinking the size of each box by a third will give you more than double the number
of ice cubes. If you managed to shrink them by 50%, as would be the case in a 20 x 20 grid, you could
make 400 ice cubes -- quadruple the number.
For the fun of it, we can take this analogy to three dimensions, to simulate what might happen if or
when engineers might be able to construct chips using a fully three dimensional process -- as
opposed to the layered two dimensional "wafering" done now. As you will see, the effects of
transistor size reduction are even more pronounced:
A 10 x 10 x 10 ice machine -- 10 x 10 trays stacked 10 high -- gives you 1,000 ice cubes. But
shrinking each cube by 33% to make a 15 x 15 x 15 machine would more than triple the
number of cubes produced: 3,375. A 50% reduction in cube size yields 8,000 cubes -- eight times
the original number!
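The ice-cube arithmetic above can be checked in a few lines; the tray (the chip) stays fixed, and only the boxes (the transistors) shrink:

```python
# The tray (chip) is fixed in size; only the boxes (transistors) shrink,
# so the count grows with the square of the boxes per side in 2D and
# with the cube of the boxes per side in 3D.
def cubes_2d(per_side):
    return per_side ** 2

def cubes_3d(per_side):
    return per_side ** 3

print([cubes_2d(n) for n in (10, 15, 20)])  # [100, 225, 400]
print([cubes_3d(n) for n in (10, 15, 20)])  # [1000, 3375, 8000]
```

Halving the edge length quadruples the 2D count and multiplies the 3D count by eight, exactly the figures worked out in the analogy.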
III. IMPLICATION OF MOORE’S LAW
The implications of Moore's Law are quite obvious and profound. It is increasingly referred to as a
ruler, gauge, benchmark, barometer, or some other form of definitive measurement of
innovation and progress within the semiconductor industry. As one industry watcher recently put
it:
"Moore's Law is important because it is the only stable ruler we have today. It's a sort of
technological barometer. It very clearly tells you that if you take the information processing power
you have today and multiply by two, that will be what your competition will be doing 18 months
from now. And that is where you too will have to be." (Malone 1996)
Since semiconductor cost is a function of size and complexity, unit cost is directly related to size:
as circuit size has been reduced, so has cost. As a result, virtually all electronics used today
incorporate semiconductors. These devices perform a wide range of functions in a variety of end-use
products -- everything from children's toys, to antilock brakes in automobiles, to satellite and
weapon systems, to a variety of sophisticated computer applications. The fact that all these products
(and many, many more) are now so accessible to so many users is due in large part to continually
declining costs of the core microelectronics made possible by the innovation of the semiconductor.
IV. EXPECTATIONS MATTER
Yet another dimension, involving non-technical or non-physical variables such as user expectations,
contributes to the dynamics of fulfilling this law. In this view, Moore's Law is not based on the physics
and chemical properties of semiconductors and their respective production processes, but on other
non-technical factors. One hypothesis is that a more complete explanation of Moore's Law has to do
with the confluence and aggregation of individuals' expectations manifested in organizational and
social systems which serve to self-reinforce the fulfillment of Moore's prediction.
A brief examination of the interplay among only three components of the personal computer (PC)
(i.e., microprocessor chip, semiconductor memory, and system software) helps reveal this point. A
very common scenario using the IBM-compatible PC equipped with an Intel microprocessor and
running Microsoft's Windows software goes something like this. As the Intel microprocessor has
evolved from the 8086/88 chip in 1979 to the 286 in 1982, to the 386 in 1985, to the 486 in 1989, to
the Pentium in 1993, and the Pentium Pro in 1996, each incremental product has been markedly
faster, more powerful, and less costly as a direct result of Moore's Law. At the same time, dynamic
random access memory (DRAM) and derivative forms of semiconductor memory have followed a
more regular Moore's Law pattern to the present where a new PC comes standard with 8Meg
(million bits) to 16Meg of memory as compared to the 480k (thousand bits) standard of a decade
ago. Both of these cases reflect the physical or technical aspects of Moore's Law.
However, system software, the third piece of this puzzle, begins to reveal the non-technical
dimension of Moore's Law. In the early days of computing when internal memory was costly and
scarce, system software practices had to fit this limitation -- limited memory meant efficient use of it
or "tight" code. With the advent of semiconductor memory -- especially with metal oxide
semiconductor (MOS) technology -- internal memory now obeyed Moore's Law and average PC
memory sizes grew at an exponential rate. Thus, system software was no longer constrained to "tight
spaces," and the proliferation of thousands, then many thousands, and now millions of "lines of code"
has become the norm for complex system software.
Nathan Myhrvold, Director of Microsoft's Advanced Technology Group, conducted a study of a
variety of Microsoft products by counting the lines of code for successive releases of the same
software package (Brand 1995). Basic had 4,000 lines of code in 1975 -- 20 years later it had
roughly half a million. Microsoft Word consisted of 27,000 lines of code in its first version in
1982 -- over the past 20 years it has grown to about 2 million. Myhrvold draws a parallel with Moore's
Law:
"So we have increased the size and complexity of software even faster than Moore's Law. In fact,
this is why there is a market for faster processors -- software people have always consumed new
capability as fast or faster than the chip people could make it available."
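From the figures Myhrvold quotes, one can back out an implied doubling time for code size. The assumption of steady exponential growth between the two data points is ours, not Myhrvold's:

```python
import math

# Implied doubling time for code size, assuming steady exponential
# growth between the two data points quoted above.
def doubling_time(initial, final, years):
    """Years per doubling given start and end sizes over a span of years."""
    return years / math.log2(final / initial)

print(round(doubling_time(4_000, 500_000, 20), 2))     # Basic: ~2.87 years
print(round(doubling_time(27_000, 2_000_000, 20), 2))  # Word: ~3.22 years
```

Both packages come out with a doubling time of roughly three years, the same order of magnitude as the transistor doubling itself.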
As the marginal cost of additional semiconductor processing power and memory literally
approaches zero, system software has exponentially evolved to a much larger part of the "system."
More complex software requires yet even more memory and more processing capacity, and
presumably software designers and programmers have come to expect that this will indeed be the
case. Within this scenario a kind of reinforcement multiplier effect is at work.
V. CONSEQUENCES AND LIMITATIONS
The ensuing speed of technological change
Technological change is a combination of more and of better technology. A recent study in the
journal Science shows that the peak of the rate of change of the world's capacity to compute
information was in the year 1998, when the world's technological capacity to compute information
on general-purpose computers grew at 88% per year.
Transistor count versus computing performance
The exponential processor transistor growth predicted by Moore does not always translate into
exponentially greater practical CPU performance. Let us consider the case of a single-threaded
system. According to Moore's law, transistor dimensions are scaled by 30% (0.7x) every technology
generation, thus reducing their area by 50%. This reduces the delay (0.7x) and therefore increases
operating frequency by about 40% (1.4x). Finally, to keep electric field constant, voltage is reduced
by 30%, reducing energy by 65% and power (at 1.4x frequency) by 50%, since active power = CV²f.
Therefore, in every technology generation transistor density doubles, the circuit becomes 40% faster,
while power consumption (with twice the number of transistors) stays the same.
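The classical scaling arithmetic in the paragraph above can be written out directly; the 0.7x linear shrink is the only input:

```python
# One generation of classical (Dennard) scaling, as described above.
# Linear dimensions, capacitance, and voltage all scale by 0.7x.
scale = 0.7
area = scale ** 2                          # ~0.49: density roughly doubles
frequency = 1 / scale                      # ~1.43x: circuit ~40% faster
energy = scale * scale ** 2                # C * V^2 -> ~0.34: energy down ~65%
power_per_transistor = energy * frequency  # ~0.49: power down ~50%
chip_power = power_per_transistor / area   # ~1.0: twice the transistors,
                                           # same total chip power
print(round(area, 2), round(frequency, 2), round(energy, 2), round(chip_power, 2))
```

The last line is the punchline of the paragraph: per-transistor power halves just as the transistor count doubles, so total chip power stays constant.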
Another source of improved performance is due to microarchitecture techniques exploiting the
growth of the available transistor count. These increases are empirically described by Pollack's rule,
which states that performance gains from microarchitecture techniques are proportional to the square
root of the number of transistors or the area of a processor. In multi-core CPUs, the higher transistor
density does not greatly increase speed on many consumer applications that are not parallelized. There
are cases where a roughly 45% increase in processor transistors has translated to roughly a 10–20%
increase in processing power. Viewed even more broadly, the speed of a system is often limited
by factors other than processor speed, such as internal bandwidth and storage speed, and one can
judge a system's overall performance based on factors other than speed, like cost efficiency or
electrical efficiency.
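Pollack's rule as stated above gives a quick sanity check on the 45% figure. This is a sketch; the rule is an empirical approximation, not an exact law:

```python
import math

# Pollack's rule: single-core performance grows roughly with the
# square root of the transistor (area) budget.
def pollack_speedup(transistor_ratio):
    return math.sqrt(transistor_ratio)

# A ~45% transistor increase predicts only ~20% more performance,
# consistent with the 10-20% range cited above.
print(round(pollack_speedup(1.45), 2))  # ~1.2
```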
Importance of non-CPU bottlenecks
As CPU speeds and memory capacities have increased, other aspects of performance like memory
and disk access speeds have failed to keep up. As a result, those access latencies are more and more
often a bottleneck in system performance, and high-performance hardware and software have to be
designed to reduce their impact. In processor design, out-of-order execution and on-chip caching and
prefetching reduce the impact of memory latency at the cost of using more transistors and increasing
processor complexity. In software, operating systems and databases have their own finely tuned
caching and prefetching systems to minimize the number of disk seeks, including systems like
ReadyBoost that use low-latency flash memory. Some databases can compress indexes and data,
reducing the amount of data read from disk at the cost of using CPU time for compression and
decompression. The increasing relative cost of disk seeks also makes the high access speeds
provided by solid-state disks more attractive for some applications.
Parallelism and Moore's law
Parallel computation has recently become necessary to take full advantage of the gains allowed by
Moore's law. For years, processor makers consistently delivered increases in clock rates and
instruction-level parallelism, so that single-threaded code executed faster on newer processors with
no modification. Now, to manage CPU power dissipation, processor makers favor multi-core chip
designs, and software has to be written in a multi-threaded or multi-process manner to take full
advantage of the hardware. Many multi-threaded development paradigms introduce overhead and
will not see a linear increase in speed versus the number of processors. This is particularly true while
accessing shared or dependent resources, due to lock contention. This effect becomes more
noticeable as the number of processors increases. Recently, IBM has been exploring ways to
distribute computing power more efficiently by mimicking the distributional properties of the human
brain.
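The sub-linear scaling described above is commonly modeled with Amdahl's law, a standard result not derived in this paper: if a fraction p of a program parallelizes perfectly, n processors give a speedup of 1/((1-p) + p/n).

```python
# Amdahl's law: speedup on n processors when a fraction p of the work
# parallelizes perfectly and the remainder stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallel, returns diminish quickly and the
# speedup is capped at 1/(1-p) = 20x no matter how many cores are added.
for n in (2, 8, 128):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

The serial fraction, not the core count, quickly becomes the binding constraint, which is why lock contention and shared resources matter so much in practice.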
Obsolescence
A negative implication of Moore's Law is obsolescence: as technologies continue to rapidly
"improve", these improvements can be significant enough to quickly render predecessor technologies
obsolete. In situations in which security and survivability of hardware and/or data are paramount, or
in which resources are limited, rapid obsolescence can pose obstacles to smooth or continued
operations.
VI. CONCLUSION
Since the costs per unit of a good are governed by the production technology, the presented
evolutionary model suggests that manufacturers have a competitive advantage when they apply new
generations of the production technology. If the main competition is confined to neighbouring
generations, the unit sales market shares of sold products are expected to evolve according to a
Fisher-Pry law. Also derived is the case in which a process technology is governed by a cost-relevant
characteristic that is constrained by a technological or physical boundary. The model
suggests that in this case the limit is approached asymptotically in time. In order to test the model,
two characteristics of the DRAM semiconductor production technology are studied. The wafer size
is a cost relevant characteristic of the production technology because the costs per unit decrease with
an increasing wafer size. The model suggests that different wafer sizes replace each other according
to a Fisher-Pry law. Empirical data confirm this replacement process of successive generations of the
wafer size. Note that similar replacement processes are known from other technologies. Another cost
relevant characteristic of the DRAM production technology is related to the minimum feature size of
electronic elements on a chip. It determines the number of transistors per chip and therefore governs
Moore’s law. Applying the lithographic method, the minimum feature size is bounded by the
minimum wavelength that can be applied. This limit restricts the density of transistors. While
Moore’s law suggests an exponential increase of the number of transistors per chip, the model agrees
with this statement far from the technological limit but suggests a deviation from Moore's law over
time. It predicts that the miniaturization process will slow down considerably in the next two
decades.
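A minimal sketch of the Fisher-Pry substitution dynamic invoked in this conclusion: the unit-sales share f of the new technology generation follows a logistic curve, so the odds f/(1-f) grow exponentially in time. The rate b and midpoint t0 below are illustrative, not fitted to any data in this paper.

```python
import math

# Fisher-Pry substitution: the new generation's market share f(t)
# follows a logistic, so f/(1-f) = exp(b*(t - t0)).
# b (takeover rate) and t0 (50% point) are illustrative assumptions.
def fisher_pry_share(t, t0=5.0, b=0.8):
    return 1.0 / (1.0 + math.exp(-b * (t - t0)))

for t in (0, 5, 10):
    print(t, round(fisher_pry_share(t), 3))
```

The S-shaped curve captures the replacement process described above: the new generation starts negligible, takes over around t0, and asymptotically displaces its predecessor.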
REFERENCES
1. Moore, Gordon E. (1965-04-19). "Cramming more components onto integrated circuits" (PDF). Electronics. Retrieved 2011-08-22.
2. The trend begins with the invention of the integrated circuit in 1958. See the graph on the bottom of page 3 of Moore's original presentation of the idea.[1]
3. Moore, Gordon E. (1965). "Cramming more components onto integrated circuits" (PDF). Electronics Magazine. p. 4. Retrieved 2006-11-11.
4. Moore, Gordon. "Progress In Digital Integrated Electronics" (PDF). Retrieved July 15, 2015.
5. Krzanich, Brian (July 15, 2015). "Edited Transcript of INTC earnings conference call". Retrieved July 16, 2015. Just last quarter, we celebrated the 50th anniversary of Moore's Law. In 1965 when Gordon's paper was first published, he predicted a doubling of transistor density every year for at least the next 10 years. His prediction proved to be right and in fact, in 1975, looking ahead to the next 10 years, he updated his estimate to a doubling every 24 months.
6. Takahashi, Dean (April 18, 2005). "Forty years of Moore's law". Seattle Times (San Jose, CA). Retrieved April 7, 2015. A decade later, he revised what had become known as Moore's Law: The number of transistors on a chip would double every two years.
7. Moore, Gordon (2006). "Chapter 7: Moore's law at 40". In Brock, David. Understanding Moore's Law: Four Decades of Innovation (PDF). Chemical Heritage Foundation. pp. 67–84. ISBN 0-941901-41-6. Retrieved March 15, 2015.
8. "Over 6 Decades of Continued Transistor Shrinkage, Innovation" (Press release). Santa Clara, California: Intel Corporation. 2011-05-01. Retrieved 2015-03-15. 1965: Moore's Law is born when Gordon Moore predicts that the number of transistors on a chip will double roughly every year (a decade later, revised to every 2 years).
9. Disco, Cornelius; van der Meulen, Barend (1998). Getting New Technologies Together. New York: Walter de Gruyter. pp. 206–207. ISBN 3-11-015630-X. OCLC 39391108. Retrieved August 23, 2008.
10. Byrne, David M.; Oliner, Stephen D.; Sichel, Daniel E. (March 2013). Is the Information Technology Revolution Over? (PDF). Finance and Economics Discussion Series (FEDS). Washington, D.C.: Federal Reserve Board. Archived (PDF) from the original on 2014-06-09. Technical progress in the semiconductor industry has continued to proceed at a rapid pace ... Advances in semiconductor technology have driven down the constant-quality prices of MPUs and other chips at a rapid rate over the past several decades.
11. Myhrvold, Nathan (June 7, 2006). "Moore's Law Corollary: Pixel Power". New York Times. Retrieved 2011-11-27.
12. Rauch, Jonathan (January 2001). "The New Old Economy: Oil, Computers, and the Reinvention of the Earth". The Atlantic Monthly. Retrieved November 28, 2008.
13. Keyes, Robert W. (September 2006). "The Impact of Moore's Law". Solid State Circuits Newsletter. Retrieved November 28, 2008.
14. Liddle, David E. (September 2006). "The Wider Impact of Moore's Law". Solid State Circuits Newsletter. Retrieved November 28, 2008.
15. Kendrick, John W. (1961). Productivity Trends in the United States. Princeton University Press for NBER. p. 3.
16. Jorgenson, Dale W.; Ho, Mun S.; Samuels, Jon D. (2014). "Long-term Estimates of U.S. Productivity and Growth" (PDF). World KLEMS Conference. Retrieved 2014-05-27.
17. "Moore's Law to roll on for another decade". Retrieved 2011-11-27. Moore also affirmed he never said transistor count would double every 18 months, as is commonly said. Initially, he said transistors on a chip would double every year. He then recalibrated it to every two years in 1975. David House, an Intel executive at the time, noted that the changes would cause computer performance to double every 18 months.
18. "Overall Technology Roadmap Characteristics". International Technology Roadmap for Semiconductors. 2010. Retrieved 2013-08-08.
19. Moore, Gordon (March 30, 2015). "Gordon Moore: The Man Whose Name Means Progress. The visionary engineer reflects on 50 years of Moore's Law". IEEE Spectrum. Interview with Rachel Courtland. Special Report: 50 Years of Moore's Law. We won't have the rate of progress that we've had over the last few decades. I think that's inevitable with any technology; it eventually saturates out. I guess I see Moore's law dying here in the next decade or so, but that's not surprising.
20. Clark, Don (July 15, 2015). "Intel Rechisels the Tablet on Moore's Law". Wall Street Journal Digits Tech News and Analysis. Retrieved 2015-07-16. The last two technology transitions have signaled that our cadence today is closer to two and a half years than two.