TDT4260: Computer Architecture

# Curriculum 2014

Computer Architecture - A Quantitative Approach, 5th Edition, Hennessy and Patterson:

- Chapter 1.1-1.6, 1.8-1.9, 1.11-1.12
- Appendix B.1-B.4, pages B.49-50, B.6-B.7 (partly overlaps with Chapter 2)
- Chapter 2.1-2.2, 2.4-2.5, 2.7-2.8
- Chapter 3.1-3.2, 3.7, 3.9 (pages 202-208, speculation), 3.10, 3.12 (not pages 230-232), 3.15
- Chapter 4.1-4.3 (not pages 286-288), 4.5, 4.8-4.9
- Chapter 5.1-5.5, 5.9-5.10
- Chapter 6.1-6.5
- Appendix C.1-C.2
- Appendix F.1-F.6, made available under It's learning

Papers:

- The Future of Multiprocessors, Borkar et al., Communications of the ACM, 2011
- Exploring the Design Space of Future CMPs, by Jaehyuk Huh, Stephen W. Keckler and Doug Burger, PACT 2001
- Vilje - The New Supercomputer at NTNU, by Jørn Amundsen, Meta, Issue 4, 2011
- The Manchester Prototype Dataflow Computer, by J.R. Gurd, C.C. Kirkham and I. Watson, CACM January 1985

# Chapter 1 - Introduction

Moore's Law, explosive growth in computer power, yada yada yada. Power dissipation problems have been a hard limit since around 2002, so now multiple cores are the shit.

## Defining Computer Architecture

Originally, computer architecture design meant only instruction set design, but today a lot more is needed to design a computer. MIPS is used as the example ISA in the book.

Besides the ISA, a computer's implementation has two components, _organization_ and _hardware_. The organization covers the high-level aspects of a computer's design, for example the memory system, the interconnect and the design of the internal CPU. The hardware refers to the specifics of the machine, such as the detailed logic design and packaging. The book defines computer architecture as all three aspects together: ISA, organization _and_ hardware.

To design a computer, you need to meet functional requirements as well as price, power, performance and availability goals. Often you will also have to define what the functional requirements are in the first place.

## Performance

"X is _n_ times faster than Y":

$$ n=\frac{Execution\:time_{Y}}{Execution\:time_{X}}=\frac{\frac{1}{Performance_{Y}}}{\frac{1}{Performance_{X}}}=\frac{Performance_{X}}{Performance_{Y}}$$

For example, if Y takes 15 seconds on a task and X takes 10 seconds, X is 15/10 = 1.5 times faster than Y.

According to the book, the only consistent and reliable measure of performance is execution time. Time, however, is not unambiguous. The most straightforward definition is _wall-clock time_ (also called _response time_ or _elapsed time_), which is the time you would get by timing a task with a stopwatch. _CPU time_ only counts the time the CPU spends working on the task itself, not waiting for I/O or running other programs.

## Quantitative Principles of Computer Design

### Take advantage of Parallelism

Really important, at every level.

#### System level

Multiple processors and disks improve throughput. The ability to expand memory and the number of CPUs is called _scalability_.

#### Processor level

Exploit parallelism among instructions. The easiest way is through pipelining, i.e. overlapping instruction execution to reduce the total time to complete an instruction sequence. This works because not every instruction depends on its immediate predecessor.

### Principle of Locality

Programs tend to reuse data and instructions they have used recently. An implication of locality is that we can predict with reasonable accuracy which instructions and data a program will use in the near future, based on its recent accesses.

### Focus on the Common Case

If you can make something better, make it the most common thing, as that is what occurs most often. This is kind of intuitive, dumbass. _Amdahl's Law_ can be used to quantify this, as the sketch below shows.
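To make this concrete, here is a minimal Python sketch. All the numbers (a fast path hit 95% of the time, a slow path 5% of the time, and the exact costs) are made up for illustration; the point is only that halving the time of the common case beats halving the rare case.

```python
def avg_time(fast_fraction, t_fast, t_slow):
    """Average time per operation: weighted mix of fast and slow paths."""
    return fast_fraction * t_fast + (1 - fast_fraction) * t_slow

base = avg_time(0.95, 1.0, 5.0)           # 1.2 time units per operation

halve_common = avg_time(0.95, 0.5, 5.0)   # common (fast) case made 2x faster
halve_rare = avg_time(0.95, 1.0, 2.5)     # rare (slow) case made 2x faster

print(base / halve_common)  # ~1.66x overall speedup
print(base / halve_rare)    # ~1.12x overall speedup
```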
### Amdahl's Law

Amdahl's Law defines the _speedup_ that can be gained by using a particular enhancement, i.e. a feature that improves performance when it is used.

$$Speedup=\frac{Performance\:for\:entire\:task\:using\:the\:enhancement\:when\:possible}{Performance\:for\:entire\:task\:without\:using\:the\:enhancement}$$

Alternatively,

$$Speedup=\frac{Execution\:time\:for\:entire\:task\:without\:using\:the\:enhancement}{Execution\:time\:for\:entire\:task\:using\:the\:enhancement\:when\:possible}$$
Speedup tells us how much faster a task will run on the computer with the enhancement. If the enhancement only applies to a fraction of the execution time:

$$Execution\:time_{new} = Execution\:time_{old}\times \left((1-Fraction_{enhanced})+\frac{Fraction_{enhanced}}{Speedup_{enhanced}}\right)$$

The overall speedup is then

$$Speedup_{overall}=\frac{Execution\:time_{old}}{Execution\:time_{new}}=\frac{1}{(1-Fraction_{enhanced})+\frac{Fraction_{enhanced}}{Speedup_{enhanced}}}$$
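As a minimal sketch, the overall-speedup formula as a Python function (the fraction and speedup values in the example calls are made up for illustration):

```python
def amdahl_speedup(fraction_enhanced, speedup_enhanced):
    """Overall speedup per Amdahl's Law.

    fraction_enhanced: fraction of the original execution time that
    can use the enhancement.
    speedup_enhanced: how much faster that part runs when enhanced.
    """
    return 1.0 / ((1.0 - fraction_enhanced)
                  + fraction_enhanced / speedup_enhanced)

# An enhancement that is 10x faster, but only usable 40% of the time:
print(amdahl_speedup(0.4, 10))   # ~1.56x
# Even a near-infinite speedup on that 40% caps out at 1/(1-0.4):
print(amdahl_speedup(0.4, 1e9))  # ~1.67x
```

The second call shows the key takeaway: the unenhanced fraction bounds the overall speedup, which is exactly why focusing on the common case matters.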