Tuesday, 5 June 2018


Moore's law is the observation that the number of transistors in a dense integrated circuit doubles about every two years. The observation is named after Gordon Moore, co-founder of Fairchild Semiconductor and Intel, whose 1965 paper described a doubling every year in the number of components per integrated circuit and projected this rate of growth to continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to a doubling every two years. The period is often quoted as 18 months because of a prediction by Intel executive David House that chip performance would double every 18 months (a combination of having more transistors and the transistors being faster).
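The compounding described above is easy to make concrete. The following sketch (the function name and starting figures are illustrative, using the 65,000-component projection quoted later in this article) shows how a fixed doubling period translates into a projected component count:

```python
def projected_count(initial_count, years, doubling_period_years):
    """Project a component count forward assuming a fixed doubling period."""
    return initial_count * 2 ** (years / doubling_period_years)

# Moore's 1975 revision: doubling every two years.
# Starting from ~65,000 components, ten years is five doublings:
count_after_decade = projected_count(65_000, years=10, doubling_period_years=2)
print(f"{count_after_decade:,.0f}")  # 2,080,000 (= 65,000 * 2**5)
```

The same function with an 18-month period gives 2^(10/1.5), roughly a hundredfold growth per decade, which is why the choice of doubling period matters so much in long-range forecasts.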

Moore's prediction proved accurate for several decades, and has been used in the semiconductor industry to guide long-term planning and to set targets for research and development. Advances in digital electronics are strongly linked to Moore's law: quality-adjusted microprocessor prices, memory capacity, sensors, and even the number and size of pixels in digital cameras. Digital electronics have contributed to world economic growth in the late twentieth and early twenty-first centuries. Moore's law describes a driving force of technological and social change, productivity, and economic growth.

Moore's law is an observation and projection of a historical trend, not a physical or natural law. Although the rate held steady from 1975 until around 2012, it was faster during the first decade. In general, it is not sound to extrapolate a historical growth rate indefinitely into the future. For example, the 2010 update to the International Technology Roadmap for Semiconductors predicted that growth would slow around 2013, and in 2015 Gordon Moore foresaw that the rate of progress would reach saturation: "I see Moore's law dying here in the next decade or so."

Intel stated in 2015 that the pace of advancement had slowed, starting at the 22 nm feature width around 2012 and continuing at 14 nm. Brian Krzanich, then CEO of Intel, announced, "Our cadence today is closer to two and a half years than two." Intel expected to reach the 10 nm node in 2018, a three-year cadence. Intel also stated in 2017 that hyperscaling would be able to continue the trend of Moore's law and offset the increased cadence by aggressively scaling beyond the typical doubling of transistors. Krzanich cited Moore's 1975 revision as a precedent for the current deceleration, which results from technical challenges and is "a natural part of the history of Moore's law".





History

In 1959, Douglas Engelbart discussed the projected downscaling of integrated circuit size in the article "Microelectronics, and the Art of Similitude". Engelbart presented his ideas at the 1960 International Solid-State Circuits Conference, where Moore was present in the audience.

For the thirty-fifth anniversary issue of Electronics magazine, published on 19 April 1965, Gordon E. Moore, who was working as director of research and development at Fairchild Semiconductor at the time, was asked to predict what would happen in the semiconductor components industry over the next ten years. His response was a brief article entitled "Cramming more components onto integrated circuits". Within his editorial, he speculated that by 1975 it would be possible to contain as many as 65,000 components on a single quarter-inch semiconductor.

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.

His reasoning was a log-linear relationship between device complexity (higher circuit density at reduced cost) and time.

At the 1975 IEEE International Electron Devices Meeting, Moore revised the forecast rate. Semiconductor complexity would continue to double annually until about 1980, after which it would slow to a doubling every two years. He outlined several factors contributing to this exponential behavior:

  • die sizes increasing at an exponential rate; as defect densities decreased, chipmakers could work with larger areas without losing yield;
  • simultaneous evolution to finer minimum dimensions;
  • and what Moore calls "circuit and device ingenuity".

Shortly after 1975, Caltech professor Carver Mead popularized the term "Moore's Law".

Despite a popular misconception, Moore is adamant that he did not predict a doubling "every 18 months". Rather, David House, an Intel colleague, had factored in the increasing performance of transistors to conclude that integrated circuits would double in performance every 18 months.

In April 2005, Intel offered US$10,000 to purchase a copy of the original Electronics issue in which Moore's article appeared. An engineer living in the United Kingdom was the first to find a copy and offer it to Intel.




As a growing target for industry

Moore's law came to be widely accepted as a goal for the industry, and it was cited by competitive semiconductor manufacturers as they strove to increase processing power. Moore viewed his eponymous law as surprising and optimistic: "Moore's law is a violation of Murphy's law. Everything gets better and better." The observation was even seen as a self-fulfilling prophecy. However, the rate of improvement in physical dimensions known as Dennard scaling has slowed in recent years, and the formal revision of the International Technology Roadmap for Semiconductors was discontinued in 2016.

Moore's second law

As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's law follows the opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. Rising manufacturing cost is an important consideration for the sustaining of Moore's law. This has led to the formulation of Moore's second law, also called Rock's law, which is that the capital cost of a semiconductor fab also increases exponentially over time.



Major enabling factors

Numerous innovations by scientists and engineers have sustained Moore's law since the beginning of the integrated circuit (IC) era. Some of the key innovations are listed below, as examples of breakthroughs that have advanced integrated circuit technology by more than seven orders of magnitude in less than five decades:

  • The foremost contribution, which is the raison d'être for Moore's law, is the invention of the integrated circuit, credited contemporaneously to Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor.
  • The invention of the complementary metal-oxide-semiconductor (CMOS) process by Frank Wanlass in 1963, and a number of advances in CMOS technology by many workers in the semiconductor field since Wanlass's work, have enabled the extremely dense and high-performance ICs that the industry makes today.
  • The invention of dynamic random-access memory (DRAM) technology by Robert Dennard at IBM in 1967 made it possible to fabricate single-transistor memory cells, and the invention of flash memory by Fujio Masuoka at Toshiba in the 1980s led to low-cost, high-capacity memory in diverse electronic products.
  • The invention of chemically amplified photoresists by Hiroshi Ito, C. Grant Willson, and J. M. J. Fréchet at IBM c. 1980, which were 5-10 times more sensitive to ultraviolet light. IBM introduced chemically amplified photoresists for DRAM production in the mid-1980s.
  • The invention of deep-UV excimer laser photolithography by Kanti Jain at IBM c. 1980 has allowed the smallest features in ICs to shrink from 800 nanometers in 1990 to as low as 10 nanometers in 2016. Prior to this, excimer lasers had mainly been used as research tools since their development in the 1970s. From a broader scientific perspective, the invention of excimer laser lithography has been highlighted as one of the major milestones in the 50-year history of the laser.
  • The interconnect innovations of the late 1990s, including chemical-mechanical polishing (CMP, also called chemical-mechanical planarization), trench isolation, and copper interconnects, although not directly a factor in creating smaller transistors, have enabled improved wafer yield, additional layers of metal wiring, closer device spacing, and lower electrical resistance.

Computer industry technology road maps predicted in 2001 that Moore's law would continue for several generations of semiconductor chips. Depending on the doubling time used in the calculations, this could mean up to a hundredfold increase in transistor count per chip within a decade. The semiconductor industry technology roadmap used a three-year doubling time for microprocessors, leading to a tenfold increase in the next decade. Intel was reported in 2005 as stating that the downsizing of silicon chips with good economics could continue during the following decade, and in 2008 as predicting the trend through 2029.
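The "hundredfold versus tenfold" spread in the roadmap paragraph above follows directly from the assumed doubling time. A small sketch (the function name is illustrative) makes the arithmetic explicit:

```python
def growth_factor_per_decade(doubling_time_years):
    """Multiplicative growth over ten years for a given doubling time."""
    return 2 ** (10 / doubling_time_years)

# An 18-month doubling time gives roughly a hundredfold increase per decade:
print(round(growth_factor_per_decade(1.5)))  # 102
# A three-year doubling time gives roughly tenfold:
print(round(growth_factor_per_decade(3)))    # 10
```

The classic two-year doubling sits between the two, at 2^5 = 32x per decade.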

Future trends

One of the key challenges in engineering future nanoscale transistors is the design of the gate. As device dimensions shrink, controlling current flow in the thin channel becomes more difficult. Compared to FinFETs, which have gate dielectric on three sides of the channel, gate-all-around structures offer even better gate control.

  • In 2010, researchers at the Tyndall National Institute in Cork, Ireland announced a junctionless transistor. A control gate wrapped around a silicon nanowire can control the passage of electrons without the use of junctions or doping. They claim these may be produced at the 10-nanometer scale using existing fabrication techniques.
  • In 2011, researchers at the University of Pittsburgh announced the development of a single-electron transistor, 1.5 nanometers in diameter, made of oxide-based materials. Three "wires" converge on a central "island" that can hold one or two electrons. Electrons tunnel from one wire to another through the island. Conditions on the third wire produce distinct conductive properties, including the ability of the transistor to act as solid-state memory. Nanowire transistors could spur the creation of microscopic computers.
  • In 2012, a team of researchers at the University of New South Wales announced the development of the first working transistor consisting of a single atom placed precisely in a silicon crystal (not merely picked from a large sample of random transistors). Moore's law predicted this milestone would be reached for ICs in the lab by 2020.
  • In 2015, IBM demonstrated 7 nm node chips with silicon-germanium transistors produced using EUVL. The company believes this transistor density would be four times that of then-current 14 nm chips.

Revolutionary technological advances may help sustain Moore's law through improved performance with or without reduced feature size.

  • In 2008, researchers at HP Labs announced a working memristor, a fourth basic passive circuit element whose existence had previously only been theorized. The memristor's unique properties permit the creation of smaller and better-performing electronic devices.
  • In 2014, bioengineers at Stanford University developed a circuit modeled on the human brain. Sixteen "Neurocore" chips simulate one million neurons and billions of synaptic connections, claimed to be 9,000 times faster and more energy efficient than a typical PC.
  • In 2015, Intel and Micron announced 3D XPoint, a non-volatile memory claimed to be significantly faster with similar density compared to NAND. Production, scheduled to begin in 2016, was delayed until the second half of 2017.

Alternative materials research

The vast majority of current transistors on ICs are composed principally of doped silicon and its alloys. As silicon is fabricated into single-nanometer transistors, short-channel effects adversely change the desired material properties of silicon as a functional transistor. Below are several non-silicon substitutes for the fabrication of small-nanometer transistors.

One proposed material is indium gallium arsenide, or InGaAs. Compared with their silicon and germanium counterparts, InGaAs transistors are more promising for future high-speed, low-power logic applications. Because of the intrinsic characteristics of III-V compound semiconductors, quantum well and tunnel effect transistors based on InGaAs have been proposed as alternatives to more traditional MOSFET designs.

  • In 2009, Intel announced the development of 80-nanometer InGaAs quantum well transistors. Quantum well devices contain a material sandwiched between two layers of material with a wider band gap. Despite being double the size of leading pure-silicon transistors at the time, the company reported that they performed equally well while consuming less power.
  • In 2011, researchers at Intel demonstrated 3-D tri-gate InGaAs transistors with improved leakage characteristics compared to traditional planar designs. The company claims that their design achieved the best electrostatics of any III-V compound semiconductor transistor. At the 2015 International Solid-State Circuits Conference, Intel mentioned the use of III-V compounds based on such an architecture for future process nodes.
  • In 2011, researchers at the University of Texas at Austin developed InGaAs tunneling field-effect transistors capable of higher operating currents than previous designs. The first III-V TFET designs were demonstrated in 2009 by a joint team from Cornell University and Pennsylvania State University.
  • In 2012, a team in MIT's Microsystems Technology Laboratories developed a 22 nm transistor based on InGaAs which, at the time, was the smallest non-silicon transistor ever built. The team used techniques currently employed in silicon device fabrication, and aims for better electrical performance and a reduction to the 10-nanometer scale.
  • Research has also shown how biological micro-cells are capable of impressive computational power while being energy efficient.

Alternatively, carbon-based compounds such as graphene have been proposed. Although first identified in the nineteenth century, easy methods of producing graphene were not available until 2004. As a special form of carbon, graphene usually exists in the stable form of graphite, a material widely used in many applications, the "lead" of a mechanical pencil being one example. When a single monolayer of carbon atoms is extracted from nonconductive bulk graphite, electrical properties are observed that contribute to semiconductor behavior, making it a viable substitute for silicon. However, more research needs to be conducted on sub-50 nm graphene layers, as the resistivity value increases and thus electron mobility decreases.

Graphene nanoribbon transistors have shown great promise since their appearance in publications in 2008. Bulk graphene has a band gap of zero and thus cannot be used in transistors because of its constant conductivity, an inability to turn off. The zigzag edges of the nanoribbons introduce localized energy states in the conduction and valence bands and thus a band gap that enables switching when fabricated as a transistor. As an example, a typical GNR of width 10 nm has a desirable band gap energy of 0.4 eV.

Short-term limit

Most of the semiconductor industry forecasters, including Gordon Moore, expect Moore's law to end around 2025.

In April 2005, Gordon Moore stated in an interview that the projection cannot be sustained indefinitely: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens." He also noted that transistors would eventually reach the limits of miniaturization at atomic levels:

In terms of size [of transistors] you can see that we're approaching the size of atoms, which is a fundamental barrier, but it'll be two or three generations before we get that far, and that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they'll be able to make bigger chips and have transistor budgets in the billions.




Consequences and limitations

Technological change is a combination of more and better technology. A 2011 study in the journal Science showed that the peak rate of change of the world's capacity to compute information was in 1998, when the world's technological capacity to compute information on general-purpose computers grew at 88% per year. Since then, technological change has clearly slowed. In recent years, every new year has allowed humans to carry out roughly 60% more computation than could have been executed by all existing general-purpose computers the year before. This is still exponential, but it shows the varying nature of technological change.

The primary driving force of economic growth is the growth of productivity, and Moore's law factors into productivity. Moore (1995) expected that "the rate of technological progress is going to be controlled from financial realities". The reverse could and did occur around the late 1990s, however, with economists reporting that "productivity growth is the key economic indicator of innovation."

An acceleration in the rate of semiconductor progress contributed to a surge in US productivity growth, which reached 3.4% per year in 1997-2004, outpacing the 1.6% per year during both 1972-1996 and 2005-2013. As economist Richard G. Anderson noted, "Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products)."

While physical limits to transistor scaling such as source-to-drain leakage, limited gate metals, and limited options for channel material have been reached, new avenues for continued scaling are open. The most promising of these approaches rely on using the spin state of electrons (spintronics), tunnel junctions, and advanced confinement of channel materials via nanowire geometry. The range of available device options suggests that a wide selection of devices is open for continuing Moore's law into the next few decades. Spin-based logic and memory options are being actively developed in industrial as well as academic laboratories.

Another source of improved performance is microarchitecture techniques exploiting the growth in available transistor count. Out-of-order execution and on-chip caching and prefetching reduce the memory latency bottleneck at the expense of using more transistors and increasing processor complexity. These increases are described empirically by Pollack's Rule, which states that performance increases due to microarchitecture techniques approximate the square root of the increase in the number of transistors or the area of a processor.
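Pollack's Rule, as stated above, has a simple quantitative form. The following sketch (function name illustrative) shows why transistor growth alone yields diminishing single-core returns:

```python
import math

def pollack_speedup(transistor_ratio):
    """Pollack's Rule: single-core performance from microarchitecture
    grows roughly as the square root of the transistor (or area) increase."""
    return math.sqrt(transistor_ratio)

# Doubling a core's transistor budget yields only ~41% more performance:
print(f"{pollack_speedup(2.0):.2f}")   # 1.41
# A 45% transistor increase yields roughly 20%:
print(f"{pollack_speedup(1.45):.2f}")  # 1.20
```

Note that the second figure is consistent with the 45%-transistors-to-10-20%-performance observation made later in this section.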

For years, processor makers delivered increases in clock rates and instruction-level parallelism, so that single-threaded code executed faster on newer processors with no modification. Now, to manage CPU power dissipation, processor makers favor multi-core chip designs, and software has to be written in a multi-threaded manner to take full advantage of the hardware. Many multi-threaded development paradigms introduce overhead, and will not see a linear increase in speed versus the number of processors. This is particularly true while accessing shared or dependent resources, due to lock contention. This effect becomes more noticeable as the number of processors increases. There are cases where a roughly 45% increase in processor transistors has translated to roughly a 10-20% increase in processing power.
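The sub-linear multi-core scaling described above is commonly modeled by Amdahl's law, which is not named in the text but captures the same point: the serial (shared or lock-contended) fraction of a program caps the speedup. A minimal sketch, with an assumed 95% parallel fraction purely for illustration:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: overall speedup when only part of a program
    can be spread across n cores; the rest runs serially."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even with 95% of the work parallelizable, 16 cores give well under 16x:
print(f"{amdahl_speedup(0.95, 16):.1f}")  # 9.1
# And the speedup can never exceed 1/serial, i.e. 20x here, on any core count.
```

This is one reason the switch to multi-core designs did not restore the performance trajectory that frequency scaling had delivered.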

On the other hand, processor manufacturers are taking advantage of the "extra space" that transistor shrinkage provides to add specialized processing units that handle features such as graphics, video, and cryptography. For one example, Intel's Parallel JavaScript extension not only adds support for multiple cores, but also for the other non-general processing features of their chips, as part of the migration in client-side scripting toward HTML5.

A negative implication of Moore's law is obsolescence: as technologies continue to rapidly "improve", these improvements may be significant enough to render predecessor technologies obsolete quickly. In situations in which the security and survivability of hardware or data are paramount, or in which resources are limited, rapid obsolescence may pose obstacles to smooth or continued operations.

Because of the toxic materials used in the production of modern computers, obsolescence, if not properly managed, may lead to harmful environmental impacts. On the other hand, obsolescence may sometimes be desirable to a company that can profit from the regular purchase of often expensive new equipment instead of retaining one device for a longer period. Those in the industry are well aware of this, and may utilize planned obsolescence as a method of increasing profits.

Moore's law has significantly affected the performance of other technologies: Michael S. Malone wrote of a Moore's War following the apparent success of the early days of the Iraq War. Progress in the development of guided weapons depends on electronic technology. Improvements in circuit density and low-power operation associated with Moore's law have also contributed to the development of technologies including mobile telephones and 3-D printing.



Other formulas and similar observations

Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density, and speed of components. Moore wrote only about the density of components, "a component being a transistor, resistor, diode or capacitor", at minimum cost.

Transistors per integrated circuit - The most popular formulation is of the doubling of the number of transistors on integrated circuits every two years. By the end of the 1970s, Moore's law had become known as the limit for the number of transistors on the most complex chips. The graph at the top shows this trend holds true today.

  • As of 2016, the commercially available processor with the highest number of transistors is the 24-core Xeon Broadwell-WS, with over 5.7 billion transistors.

Density at minimum cost per transistor - This is the formulation given in Moore's 1965 paper. It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is lowest. As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized, and observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".

Dennard scaling - This suggests that power requirements are proportional to area (both voltage and current being proportional to length) for transistors. Combined with Moore's law, performance per watt would grow at roughly the same rate as transistor density, doubling every 1-2 years. According to Dennard scaling, transistor dimensions are scaled by 30% (0.7x) every technology generation, thus reducing their area by 50%. This reduces the delay by 30% (0.7x) and therefore increases operating frequency by about 40% (1.4x). Finally, to keep the electric field constant, voltage is reduced by 30%, reducing energy by 65% and power (at 1.4x frequency) by 50%. Therefore, in every technology generation transistor density doubles, the circuit becomes 40% faster, while power consumption (with twice the number of transistors) stays the same.
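The chain of percentages in the Dennard scaling paragraph above can be verified with a few lines of arithmetic. This sketch simply encodes the constant-field scaling rules stated in the text (variable names are illustrative):

```python
k = 0.7  # linear scaling factor per technology generation

area        = k * k          # 0.49 -> area shrinks by ~50%
frequency   = 1 / k          # ~1.43 -> ~40% faster (delay drops by 30%)
capacitance = k              # gate capacitance scales with dimensions
voltage     = k              # constant-field scaling: voltage drops 30%
energy      = capacitance * voltage ** 2  # ~0.343 -> ~65% less per switch
power       = energy * frequency          # ~0.49 -> ~50% less per transistor

# With twice the transistors fitting in the same area,
# total chip power stays roughly flat:
total_power = power * 2
print(round(total_power, 2))  # 0.98
```

Every figure quoted in the paragraph (50% area, 40% frequency, 65% energy, 50% power) falls out of the single scaling factor k = 0.7.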

The exponential growth in processor transistors predicted by Moore does not always translate into exponentially greater practical CPU performance. Since around 2005-2007, Dennard scaling appears to have broken down, so even though Moore's law continued for several years after that, it has not yielded dividends in improved performance. The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges and also causes the chip to heat up, which creates a threat of thermal runaway and therefore further increases energy costs.

The breakdown of Dennard scaling prompted a switch among some chip manufacturers to a greater focus on multicore processors, but the gains offered by switching to more cores are lower than the gains that would have been achieved had Dennard scaling continued. In another departure from Dennard scaling, Intel microprocessors adopted a non-planar tri-gate FinFET at 22 nm in 2012 that is faster and consumes less power than a conventional planar transistor.

Quality-adjusted price of IT equipment - The price of information technology (IT), computers and peripheral equipment, adjusted for quality and inflation, declined 16% per year on average over the five decades from 1959 to 2009. The pace accelerated, however, to 23% per year in 1995-1999, triggered by faster IT innovation, and later slowed to 2% per year in 2010-2013.

The rate of quality-adjusted microprocessor price improvement likewise varies, and is not linear on a log scale. Microprocessor price improvement accelerated during the late 1990s, reaching 60% per year (halving every nine months) versus the typical 30% improvement rate (halving every two years) during the years earlier and later. Laptop microprocessors in particular improved 25-35% per year in 2004-2010, and slowed to 15-25% per year in 2010-2013.
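The annual-rate and halving-time figures quoted above are two views of the same exponential decline. A small sketch (function name illustrative) shows how to convert between them:

```python
import math

def halving_time_years(annual_decline):
    """Years for a quality-adjusted price to halve, given a
    fractional decline per year (e.g. 0.30 for 30% per year)."""
    return math.log(0.5) / math.log(1.0 - annual_decline)

# A 60%/year decline halves prices roughly every nine months:
print(f"{halving_time_years(0.60) * 12:.0f} months")  # 9 months
# A 30%/year decline halves them roughly every two years:
print(f"{halving_time_years(0.30):.1f} years")        # 1.9 years
```

The formula is just solving (1 - r)^t = 1/2 for t, so it applies equally to the nine-month halving quoted for Butters' law later in this section.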

The number of transistors per chip cannot fully explain quality-adjusted microprocessor prices. Moore's 1995 paper does not limit Moore's law to strict linearity or to transistor count: "The definition of 'Moore's Law' has come to refer to almost anything related to the semiconductor industry that when plotted on semi-log paper approximates a straight line. I hesitate to review its origins and by doing so restrict its definition."

Hard disk drive areal density - A similar observation (sometimes called Kryder's law) was made in 2005 for hard disk drive areal density. Several decades of rapid progress resulted from the use of error-correcting codes, the magnetoresistive effect, and the giant magnetoresistive effect. The rate of areal density advancement slowed significantly around 2010, because of noise related to the smaller grain size of the disk media, thermal stability, and writability using available magnetic fields.

Optical fiber capacity - The number of bits per second that can be sent down an optical fiber increases exponentially, faster than Moore's law. This has been called Keck's law, in honor of Donald Keck.

Network capacity - According to Gerald Butters, former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics, a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber doubles every nine months. Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability of wavelength-division multiplexing (sometimes called WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking and dense wavelength-division multiplexing (DWDM) rapidly brought down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in the dot-com bubble. Nielsen's law says that the bandwidth available to users increases by 50% annually.

Pixels per dollar - Similarly, Barry Hendy of Kodak Australia has plotted pixels per dollar as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera prices, LCD and LED screens, and resolution.

The great Moore's law compensator (TGMLC), also known as Wirth's law - generally referred to as software bloat - is the principle that successive generations of computer software increase in size and complexity, thereby offsetting the performance gains predicted by Moore's law. In a 2008 article in InfoWorld, Randall C. Kennedy, formerly of Intel, introduced this term using successive versions of Microsoft Office between the years 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer.

Library expansion - was calculated in 1945 by Fremont Rider to double in capacity every 16 years, if sufficient space were made available. He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on demand for library patrons or other institutions. He did not foresee the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media. Automated, potentially lossless digital technologies allowed vast increases in the rapidity of information growth in an era that is now sometimes called the Information Age.

The Carlson curve - is a term coined by The Economist to describe the biotechnological equivalent of Moore's law, and is named after author Rob Carlson. Carlson accurately predicted that the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law. Carlson curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis, and a range of physical and computational tools used in protein expression and in determining protein structures.

Eroom's law - is a pharmaceutical drug development observation that was deliberately written as Moore's law spelled backwards, to contrast it with the exponential advancement of other forms of technology (such as transistors) over time. It states that the cost of developing a new drug roughly doubles every nine years.

The experience curve effect says that each doubling of the cumulative production of virtually any product or service is accompanied by an approximately constant percentage reduction in the unit cost. The first known documented qualitative description of this dates from 1885. A power curve was used to describe this phenomenon in a 1936 discussion of the cost of airplanes.



See also




Note




References




Further reading

  • Moore's Law: The Life of Gordon Moore, Silicon Valley's Quiet Revolutionary. Arnold Thackray, David C. Brock, and Rachel Jones. New York: Basic Books, May 2015.
  • Understanding Moore's Law: Four Decades of Innovation. Edited by David C. Brock. Philadelphia: Chemical Heritage Foundation, 2006. ISBN 0-941901-41-6. OCLC 66463488.



External links

  • Intel press kit - released for the 40th anniversary of Moore's Law, with a 1965 sketch by Moore
  • The Life and Death of Moore's Law - by Ilkka Tuomi; detailed study of Moore's Law and its historical evolution and its criticism by Kurzweil
  • No more disturbing Technology... Microchip growth slide show
  • Intel (IA-32) CPU speeds from 1994 to 2005 - the speed increases of recent years appear to be slowing in terms of percentage increase per year (available in PDF or PNG format)
  • International Technology Roadmap for Semiconductors (ITRS)
  • Gordon Moore's Law and Integrated Circuit, Dream 2047 October 2006
  • A C|net FAQ about Moore's law, at Archive.is (archived 2013-01-02)
  • ASML's 'Our Stories', Gordon Moore on Moore's Law, ASML Holding

Source of the article : Wikipedia
