AMD: approaching the "end" of Moore's Law?

Started by Turrican3, 4 April 2013, 15:38:56


Turrican3

Quote: SAN FRANCISCO: CHIP DESIGNER AMD claims that the delay in transitioning from 28nm to 20nm highlights the beginning of the end for Moore's Law.

AMD was one of the first consumer semiconductor vendors to make use of TSMC's 28nm process node with its Radeon HD 7000 series graphics cards, but like every chip vendor it is looking to future process nodes to help it increase performance. The firm told The INQUIRER the time taken to transition to 20nm signals the beginning of the end for Moore's Law.

Famed Intel co-founder and electronics engineer Gordon Moore predicted that the total number of transistors would double every two years. He also predicted that the 'law' would not continue to apply for as long as it has. It was professor Carver Mead at Caltech who coined the term Moore's Law, and now one of Mead's students, John Gustafson, chief graphics product architect at AMD, has said that Moore's Law is ending because it actually refers to a doubling of transistors that are economically viable to produce.

Gustafson said, "You can see how Moore's Law is slowing down. The original statement of Moore's Law is the number of transistors that is more economical to produce will double every two years. It has become warped into all these other forms but that is what he originally said."


According to Gustafson, the transistor density afforded by a process node defines the chip's economic viability. He said, "We [AMD] want to also look for the sweet spot, because if you print too few transistors your chip will cost too much per transistor and if you put too many it will cost too much per transistor. We've been waiting for that transition from 28nm to 20nm to happen and it's taking longer than Moore's Law would have predicted."

Gustafson was pretty clear in his view of transistor density, saying, "I'm saying you are seeing the beginning of the end of Moore's law."

AMD isn't the only chip vendor looking to move to smaller process nodes and has to wait on TSMC and Globalfoundries before it can make the move. Even Intel, with its three-year process node advantage over the industry, is having problems justifying the cost of its manufacturing business to investors, so it could be the economics rather than the engineering that puts an end to Moore's Law.

http://www.theinquirer.net/inquirer/news/2258444/amd-claims-20nm-transition-signals-the-end-of-moores-law
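
A side note on Gustafson's "sweet spot" remark above: one purely illustrative way to picture it is to treat cost per transistor as a fixed per-chip overhead spread across the transistor count, plus a term that grows as you pack more in (yield loss, more expensive lithography). The constants in this little C++ sketch are invented; only the U-shaped curve matters.

// Toy model of cost per transistor vs. transistors per chip.
// All constants are made up for illustration; only the U-shape matters.
#include <cstdio>

int main() {
    const double fixed_cost_per_chip = 50.0;   // masks, packaging, test, amortised (hypothetical)
    const double density_penalty     = 2e-18;  // yield/litho cost growing with count (hypothetical)
    for (double n = 1e8; n <= 1e11; n *= 10) { // transistors per chip
        double cost = fixed_cost_per_chip / n + density_penalty * n;
        std::printf("%.0e transistors -> %.3g $/transistor\n", n, cost);
    }
}

With these made-up numbers the cheapest sampled point sits around 1e10 transistors per chip: print too few and the fixed overhead dominates, print too many and the penalty term does, which is the "too much per transistor" at both ends that Gustafson describes.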

Hmmm... :look: :mmmm:

Turrican3

Quote: Welcome to the Jungle

In the twilight of Moore's Law, the transitions to multicore processors, GPU computing, and HaaS cloud computing are not separate trends, but aspects of a single trend – mainstream computers from desktops to 'smartphones' are being permanently transformed into heterogeneous supercomputer clusters. Henceforth, a single compute-intensive application will need to harness different kinds of cores, in immense numbers, to get its job done.

The free lunch is over. Now welcome to the hardware jungle.


From 1975 to 2005, our industry accomplished a phenomenal mission: In 30 years, we put a personal computer on every desk, in every home, and in every pocket.

In 2005, however, mainstream computing hit a wall. In "The Free Lunch Is Over" (December 2004), I described the reasons for the then-upcoming industry transition from single-core to multi-core CPUs in mainstream machines, why it would require changes throughout the software stack from operating systems to languages to tools, and why it would permanently affect the way we as software developers have to write our code if we want our applications to continue exploiting Moore's transistor dividend.

In 2005, our industry undertook a new mission: to put a personal parallel supercomputer on every desk, in every home, and in every pocket. 2011 was special: it's the year that we completed the transition to parallel computing in all mainstream form factors, with the arrival of multicore tablets (e.g., iPad 2, Playbook, Kindle Fire, Nook Tablet) and smartphones (e.g., Galaxy S II, Droid X2, iPhone 4S). 2012 will see us continue to build out multicore with mainstream quad- and eight-core tablets (as Windows 8 brings a modern tablet experience to x86 as well as ARM), and the last single-core gaming console holdout will go multicore (as Nintendo's Wii U replaces Wii).

This time it took us just six years to deliver mainstream parallel computing in all popular form factors. And we know the transition to multicore is permanent, because multicore delivers compute performance that single-core cannot and there will always be mainstream applications that run better on a multi-core machine. There's no going back.

For the first time in the history of computing, mainstream hardware is no longer a single-processor von Neumann machine, and never will be again.

That was the first act.

[...]

Exit Moore, Pursued by a Dark Silicon Bear

Finally, let's return one more time to the end of Moore's Law to see what awaits us in our near future, and why we will likely pass through three distinct stages as we navigate Moore's End.

Eventually, our tired miners will reach the point where it's no longer economically feasible to operate the mine. There's still gold left, but it's no longer commercially exploitable. Recall that Moore's Law has been interesting only because we have been able to transform its raw resource of "more transistors" into one of two useful forms:

Exploit #1: Greater throughput. Moore's Law lets us deliver more transistors, and therefore more complex chips, at the same cost. That's what will let us continue to deliver more computational performance per chip – as long as we can find ways to harness the extra transistors for computation.
Exploit #2: Lower cost/power/size. Alternatively, Moore's Law lets us deliver the same number of transistors at a lower cost, including in a smaller area and at lower power. That's what will let us continue to deliver powerful experiences in increasingly compact and mobile and embedded form factors.

The key thing to note is that we can expect these two ways of exploiting Moore's Law to end, not at the same time, but one after the other and in that order.

Why? Because Exploit #2 only relies on the basic Moore's Law effect, whereas the first relies on Moore's Law and the ability to use all the transistors at the same time.

Which brings us to one last problem down in our mine...

The Power Problem: Dark Silicon

Sometimes you can be hard at work in a mine, still productive, when a small disaster happens: a cave-in, or striking water. Besides hurting miners, such disasters can render entire sections of the mine unreachable. We are now starting to hit exactly those kinds of problems.

One particular problem we have just begun to encounter is known as "dark silicon." Although Moore's Law is still delivering more transistors, we are losing the ability to power them all at the same time. For more details, see Jem Davies' talk "Compute Power With Energy-Efficiency" and the ISCA'11 paper "Dark Silicon and the End of Multicore Scaling" (alternate link).

This "dark silicon" effect is like a Shakespearian bear chasing our doomed character offstage. Even though we can continue to pack more cores on a chip, if we cannot use them at the same time we have failed to exploit Moore's Law to deliver more computational throughput (Exploit #1). When we enter the phase where Moore's Law continues to give us more transistors per die area, but we are no longer able to power them all, we will find ourselves in a transitional period where Exploit #1 has ended while Exploit #2 continues and outlives it for a time.

This means that we will likely see the following major phases in the "scale-in" growth of mainstream machines. (Note that these apply to individual machines only, such as your personal notebook and smartphone or an individual compute node; they do not apply to a compute cloud, which we saw belongs to a different "scale-out" mine.)

Exploit #1 + Exploit #2: Increasing performance (compute throughput) in all form factors (1975 – mid-2010s?). For a few years yet, we will see continuing increases in mainstream computer performance in all form factors from desktop to smartphone. As today, the bigger form factors will still have more parallelism, just as today's desktop CPUs and GPUs are routinely more capable than those in tablets and smartphones – as long as Exploit #1 lives, and then...

Exploit #2 only: Flat performance (compute throughput) at the top end, and mid and lower segments catching up (late 2010s – early 2020s?). Next, if problems like dark silicon are not solved, we will enter a period where mainstream computer performance levels out, starting at the top end with desktops and game consoles and working its way down through tablets and smartphones. During this period we will continue to use Moore's Law to lower cost, power, and/or size – delivering the same complexity and performance already available in bigger form factors also in smaller devices. Assuming Moore's Law continues long enough beyond the end of Exploit #1, we can estimate how long it will take for Exploit #2 to equalize personal devices by observing the difference in transistor counts between current mainstream desktop machines and smartphones; it's roughly a factor of 20, which will take Moore's Law about eight years to cover.
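
(A quick sanity check of the "factor of 20, about eight years" estimate above, assuming the classic doubling every two years: covering a 20x gap takes log2(20), roughly 4.3 doublings, i.e. about eight to nine years.)

// How long does a doubling every two years take to cover a 20x gap?
#include <cmath>
#include <cstdio>

int main() {
    const double gap = 20.0;             // desktop vs. smartphone transistor count, per the essay
    const double years_per_doubling = 2.0;
    double doublings = std::log2(gap);   // about 4.32
    std::printf("%.2f doublings -> about %.1f years\n",
                doublings, doublings * years_per_doubling);
}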

Democratization (early 2020s? – onward). Finally, this democratization will reach the point where a desktop computer and smartphone have roughly the same computational performance. In that case, why buy a desktop ever again? Just dock your tablet or smartphone. You might think that there are still two important differences between the desktop and the mobile device: power, because the desktop is plugged in, and peripherals, because the desktop has easier access to a bigger screen and a real keyboard/mouse – but once you dock the smaller device, it has the same access to power and peripherals and even those differences go away.

[...]

To continue enjoying the free lunch of shipping an application that runs well on today's hardware and will just naturally run faster or better on tomorrow's hardware, you need to write an app with lots of juicy latent parallelism expressed in a form that can be spread across a machine with a variable number of cores of different kinds – local and distributed cores, and big/small/specialized cores. The filet mignon of throughput gains is still on the menu, but now it costs extra – extra development effort, extra code complexity, and extra testing effort. The good news is that for many classes of applications the extra effort will be worthwhile, because concurrency will let them fully exploit the exponential gains in compute throughput that will continue to grow strong and fast long after Moore's Law has gone into its sunny retirement, as we continue to mine the cloud for the rest of our careers.

http://herbsutter.com/welcome-to-the-jungle/
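
As a tiny concrete illustration of the "latent parallelism" Sutter is asking for (my example, not his): if the work is written as a standard algorithm over a range rather than a hand-rolled loop, the runtime is free to spread it across however many cores the machine happens to have. A minimal C++17 sketch:

// Minimal C++17 sketch: express the work as a parallel algorithm and let the
// implementation decide how many cores to use. Illustrative only.
#include <cstdio>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> samples(50'000'000, 1.0);
    // The same call can run on 2, 8 or 64 cores without any change to the code.
    double total = std::reduce(std::execution::par,
                               samples.begin(), samples.end(), 0.0);
    std::printf("sum = %.0f\n", total);
}

How much it actually gains on a given machine is, of course, still bounded by how parallel the work really is and by memory bandwidth; the point is only that the parallelism is expressed in a form the hardware can exploit.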

In my opinion an extremely interesting essay (preceded, about ten years earlier, by this one on multicore), from which I've extracted what I consider the most significant parts. No point quoting it in full: it's too long and in any case of medium-to-high complexity, so much so that some passages were only barely graspable even for me, and I work in the field.

The gist is that, due to a host of technological problems (for example, and this one I genuinely didn't know about, we are losing the ability to power all of a chip's transistors at the same time), the author believes that within no more than a decade we will reach the point where desktop and mobile components have roughly the same computational capability.

As you can easily see, what Mr. Sutter calls democratization would bring with it monstrous changes in the videogame industry too, as well as in consumer computing/electronics in general.

[the essay actually also talks about cloud computing, and that route too could have non-trivial repercussions on the market; personally, though, I'm a bit sceptical at the moment about it becoming practically accessible to the masses any time soon in the kind of numbers consoles reach, i.e. tens and tens of millions of units]

=====================

A note on the author, straight from the site: Herb Sutter is a leading authority on software development. He is the best selling author of several books including Exceptional C++ and C++ Coding Standards, as well as hundreds of technical papers and articles, including the essay "The Free Lunch Is Over" which coined the term "concurrency revolution" and its recent sequel "Welcome to the Jungle" on the end of Moore's Law and the turn to mainstream heterogeneous supercomputing from the cloud to 'smartphones.' Herb has served for a decade as chair of the ISO C++ standards committee, and is a software architect at Microsoft where he has led the language extensions design of C++/CLI, C++/CX, C++ AMP, and other technologies.

Turrican3

Quote: Shrinking silicon transistors to keep Moore's Law alive has made successive generations of chips both more powerful and less power-hungry. But the two new technologies [Turry's note: he's referring to tunneling transistors and spintronics] can't work on data as fast as silicon transistors. "The best pure technology improvements we can make will bring improvements in power consumption but will reduce speed," said Holt.

That suggests that Moore's Law as we've known it may come to an end. But Holt claimed that continued gains in energy efficiency, not raw computing power, are most important for the things asked of computers today.

"Particularly as we look at the Internet of things, the focus will move from speed improvements to dramatic reductions in power," Holt said.

https://www.technologyreview.com/s/600716/intel-chips-will-have-to-sacrifice-speed-gains-for-energy-savings/

William Holt, head of Intel's Technology and Manufacturing Group, on the need to push on energy efficiency rather than on raw computing power.

Bluforce

So, in a nutshell, buying a decent CPU today is a good investment :D

Well aware that gaming is a niche within the CPU market, it's clear that wearables, and battery-powered devices in general, need lower power consumption more than they need outsized horsepower. It's no coincidence that a few years ago Intel dived into ARM products, even managing to carve out a slice of the market.

Turrican3

Yeah, it really looks that way.

I have to say, though, I didn't know Intel had licensed technology from ARM. :o

Bluforce

The mistake is mine, for using the term ARM incorrectly.
I just meant that with the Atoms (which I believe are now called X3, X5, X7), Intel moved "in ARM's direction", i.e. offering small, capable CPUs that don't use much power.

Something AMD, for instance, has not done, and which it will most likely pay for very dearly in the near future (far worse than not having offered "cutting-edge" desktop CPUs over the last five years).

Turrican3

Aaaaaah ok. :D

Funnily enough, AMD really is an ARM licensee; I know it second-hand because there's often been speculation about it in connection with NX. I don't know the details of that licence, though.

Turrican3



A phenomenal 2016 video on the evolution of microprocessors with Sophie Wilson, "mother" of the first ARM chips and of various other goodies, which I recommend to everyone without reservation.

Why phenomenal?

Well, because it explains very clearly, on the whole even for NON-specialists, a great many things about the past and above all the future of these little beasts. Even I understood practically everything, and I've never had a good relationship with electronics! :sweat:

Among the many things covered:
- Wilson pokes fun at how Moore's Law has been, shall we say, corrected on the fly over the years :hihi:
- in modern processors a lot of silicon stays "switched off"
- except for particular workloads, the core count is half a scam, mainly because there's a law showing that performance flattens out beyond a certain threshold (and is in any case "held back" by how parallelizable the algorithm is; see the quick sketch right after this list)
- the most aggressive manufacturing processes are extremely expensive
- the technological leaps of the past won't be repeated (back in 200x Intel was prophesying 30 GHz CPUs within 10 years :bua:)
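
The law mentioned in the core-count point above is almost certainly Amdahl's Law: if only a fraction p of a program can run in parallel, the speedup on n cores is capped at 1 / ((1 - p) + p / n), and the curve flattens out quickly. A quick sketch, with p = 0.9 picked purely as an example:

// Amdahl's Law: speedup on n cores when only a fraction p of the work is parallel.
// The value p = 0.9 is an arbitrary example.
#include <cstdio>
#include <initializer_list>

double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double p = 0.9;                // 90% of the program parallelises (hypothetical)
    for (int cores : {1, 2, 4, 8, 16, 64, 1024}) {
        std::printf("%4d cores -> %.2fx speedup (hard limit: %.0fx)\n",
                    cores, amdahl(p, cores), 1.0 / (1.0 - p));
    }
}

With 90% parallel code, 16 cores already give only about a 6.4x speedup and no core count can ever exceed 10x, which is why piling on cores pays off less than the spec sheets suggest.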

PS: YouTube's automatic subtitles correctly pick up a good 95% of the text, so go ahead without worry even if spoken English scares you.

Turrican3





I've extracted a few key frames from the presentation above.

Bluforce

The video is quite a hefty one, so your summary is very welcome :bua:

Turrican3

I realise that. :sweat:
I genuinely enjoyed watching it (as I said, yours truly and electronics came to blows in adolescence, for various reasons :hihi:) but I understand it can still be a bit of a brick. :bua:

Joe

"watch later" *click*

Thanks for the heads-up!

Turrican3

https://mobile.twitter.com/digitalfoundry/status/1151488169449336832

Costs rising sharply for the new manufacturing processes.

Pretty much in line, all in all, with what was said in one of the Sophie Wilson slides discussed earlier.

Bluforce

It's no coincidence that Intel and Nvidia have stayed, and still remain, anchored to 14nm.
AMD has played a different card, and who knows, in time it might pay off.

Of course, we'll have to see, once Intel and Nvidia get down to 7nm or below, what kind of performance gap they manage to offer (if there even is a gap).
:hmm:

Turrican3

On that note, I had a nice private chat on Twitter with a veteran videogame developer (I'll stay vague for confidentiality reasons) and it seems that, with all my limitations, I had correctly intuited many of the issues we've discussed over the years, both here and in other threads.

Others, alas, a bit less so, such as the "magic" performance boost that arrives in step with new, more powerful CPUs/GPUs hitting the market... which has little or nothing magical about it :hihi: :bua: and which unfortunately can be a source of non-trivial headaches for developers.

In short, as often happens, I'm afraid we tend to oversimplify everything that goes on behind the development of something as staggeringly complex as a modern videogame.