Parallel Computing
Year 13 Lesson

AllenHeard
November 23, 2016

Transcript

1. Parallel computing
▪ Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time.
▪ Ideally, parallel processing makes programs run faster because there are more engines (CPUs or cores) running them.
▪ In computer software, a parallel programming model is a model for writing parallel programs which can be compiled and executed.
2. Serial computing
▪ Traditionally, software has been written for serial computation:
– A problem is broken into a discrete series of instructions
– Instructions are executed sequentially, one after another
– Executed on a single processor
– Only one instruction may execute at any moment in time
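A minimal Python sketch of the serial model (the process function and its simulated 100 ms workload are hypothetical stand-ins for real work):

    import time

    def process(item):
        # One discrete instruction/unit of work, simulated with a delay.
        time.sleep(0.1)
        return item * item

    def main():
        items = range(8)
        start = time.perf_counter()
        # Executed sequentially, one item after another, on a single processor.
        results = [process(i) for i in items]
        print(results, f"took {time.perf_counter() - start:.2f}s")  # roughly 0.8 s

    if __name__ == "__main__":
        main()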
3. Parallel computing
▪ In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
▪ A problem is broken into discrete parts that can be solved concurrently.
▪ Each part is further broken down into a series of instructions.
▪ Instructions from each part execute simultaneously on different processors.
▪ An overall control/coordination mechanism is employed.
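A minimal parallel counterpart of the sketch above, assuming Python's standard multiprocessing.Pool as the overall control/coordination mechanism (the workload is the same hypothetical stand-in):

    import time
    from multiprocessing import Pool

    def process(item):
        # The same discrete unit of work as in the serial sketch.
        time.sleep(0.1)
        return item * item

    def main():
        items = range(8)
        start = time.perf_counter()
        # Four workers solve the discrete parts concurrently;
        # the Pool coordinates the parts and collects the results.
        with Pool(processes=4) as pool:
            results = pool.map(process, items)
        print(results, f"took {time.perf_counter() - start:.2f}s")  # roughly 0.2 s on 4 cores

    if __name__ == "__main__":
        main()

On a 4-core machine the eight 0.1 s tasks finish in roughly a quarter of the serial time, which is the "more engines running them" idea from slide 1.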
4. Parallel computing
▪ The computational problem should be able to:
– Be broken apart into discrete pieces of work that can be solved simultaneously;
– Execute multiple program instructions at any moment in time;
– Be solved in less time with multiple compute resources than with a single compute resource.
▪ The compute resources are typically:
– A single computer with multiple processors/cores
– An arbitrary number of such computers connected by a network
5. Supercomputers
▪ The schematic on the slide shows a typical parallel computer cluster.
▪ Each compute node is a multi-processor parallel computer in itself.
▪ Multiple compute nodes are networked together.
6. Why use parallel computing?
▪ In the natural world, many complex, interrelated events are happening at the same time, yet within a temporal sequence.
▪ Compared to serial computing, parallel computing is much better suited for modeling, simulating and understanding complex, real-world phenomena.
▪ For example, imagine modeling these serially: [the examples are pictured on the slide]
7. Main reasons
▪ SAVE TIME AND/OR MONEY:
– In theory, throwing more resources at a task will shorten its time to completion, with potential cost savings.
– Parallel computers can be built from cheap, commodity components.
▪ SOLVE LARGER / MORE COMPLEX PROBLEMS:
– Many problems are so large and/or complex that it is impractical or impossible to solve them on a single computer, especially given limited computer memory.
– Example: "Grand Challenge Problems" (en.wikipedia.org/wiki/Grand_Challenge) requiring petaFLOPS and petabytes of computing resources.
– Example: web search engines/databases processing millions of transactions every second.
8. Main reasons
▪ PROVIDE CONCURRENCY:
– A single compute resource can only do one thing at a time; multiple compute resources can do many things simultaneously.
– Example: collaborative networks provide a global venue where people from around the world can meet and conduct work "virtually".
▪ TAKE ADVANTAGE OF NON-LOCAL RESOURCES:
– Use compute resources on a wide area network, or even the Internet, when local compute resources are scarce or insufficient.
– Example: SETI@home (setiathome.berkeley.edu) has over 1.5 million users in nearly every country in the world. Source: www.boincsynergy.com/stats/ (June 2015).
9. Who is using parallel computing?
▪ Historically, parallel computing has been considered to be "the high end of computing", and has been used to model difficult problems in many areas of science and engineering:
• Atmosphere, Earth, Environment
• Physics - applied, nuclear, particle, condensed matter, high pressure, fusion, photonics
• Bioscience, Biotechnology, Genetics
• Chemistry, Molecular Sciences
• Geology, Seismology
• Mechanical Engineering - from prosthetics to spacecraft
• Electrical Engineering, Circuit Design, Microelectronics
• Computer Science, Mathematics
• Defense, Weapons
10. Who is using parallel computing?
▪ Today, commercial applications provide an equal or greater driving force in the development of faster computers. These applications require the processing of large amounts of data in sophisticated ways. For example:
• "Big Data", databases, data mining
• Oil exploration
• Web search engines, web-based business services
• Medical imaging and diagnosis
• Pharmaceutical design
• Financial and economic modeling
• Management of national and multinational corporations
• Advanced graphics and virtual reality, particularly in the entertainment industry
• Networked video and multimedia technologies
11. Parallel programming
▪ Programs can be separated into two portions:
– A sequential portion that must be run in order
– Parts that can run concurrently
▪ The parts that can run concurrently can then be run in parallel (see the sketch below).
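One way to sketch that separation in Python (the chunking scheme and worker function are illustrative choices, not prescribed by the slide):

    from multiprocessing import Pool

    def transform(chunk):
        # Concurrent part: each chunk is independent of the others.
        return sum(x * x for x in chunk)

    def main():
        # Sequential portion: set-up must happen in order, before any parallel work.
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]

        # Concurrent parts, run in parallel across four processes.
        with Pool(processes=4) as pool:
            partials = pool.map(transform, chunks)

        # Sequential portion again: combining must wait for all the parts.
        print(sum(partials))

    if __name__ == "__main__":
        main()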
12. Parallel processing example
[The original slide is a chart; the figures it shows:]
– Parallel portion of task: 40
– Parallel set-up time (sequential): 10
– Sequential portion of task: 20
– Total: 60
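Reading those figures as time units (an assumption: the 60-unit total is the 20-unit sequential portion plus the 40-unit parallel portion, and the 10-unit set-up cost is paid only when running in parallel), the elapsed time on N processors can be sketched as:

    SEQUENTIAL = 20   # cannot be parallelised
    SETUP = 10        # parallel set-up time, itself sequential
    PARALLEL = 40     # divisible among processors

    def elapsed(n_processors):
        return SEQUENTIAL + SETUP + PARALLEL / n_processors

    for n in (1, 2, 4, 8):
        # processors, elapsed time, speedup versus the 60-unit serial run
        print(n, elapsed(n), 60 / elapsed(n))

Note that with 1 processor the "parallel" run takes 70 units, slower than the 60-unit serial run: the set-up cost only pays off once enough processors share the parallel portion.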
13. Amdahl's Law
▪ In computer architecture, Amdahl's law gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved.
▪ It is named after computer scientist Gene Amdahl.

Speedup = 1 / ((1 − P) + P/N)

where P = parallel portion of the program (as a fraction of execution time) and N = number of processors/cores.
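A direct rendering of the formula in Python (the function name is illustrative):

    def amdahl_speedup(p, n):
        # Theoretical speedup for parallel fraction p (0..1) on n processors.
        return 1.0 / ((1.0 - p) + p / n)

    print(amdahl_speedup(0.90, 8))  # about 4.71: even 90% parallel code gains well under 8x on 8 cores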
14. Amdahl's Law
▪ For a single value of P and a varying number of processors N, we always end up with this type of curve, because we are getting diminishing returns.
▪ Basically, there comes a point where throwing more processors at our problem becomes pointless, due to the sequential portion that cannot be run in parallel.
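Tabulating the formula for a fixed P makes the diminishing return visible: the speedup approaches, but never exceeds, the ceiling 1 / (1 − P). Reusing the amdahl_speedup sketch from above:

    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    P = 0.90
    for n in (1, 2, 4, 8, 16, 64, 256, 1024):
        print(f"{n:5d} processors -> speedup {amdahl_speedup(P, n):5.2f}")
    # 1 -> 1.00, 8 -> 4.71, 64 -> 8.77, 1024 -> 9.91: creeping toward the
    # ceiling of 1 / (1 - 0.90) = 10, so extra processors soon add almost nothing.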
15. Example
▪ If 30% of the execution time may be the subject of a speedup, P will be 0.3; if the improvement makes the affected part twice as fast, N will be 2. Amdahl's law then gives an overall speedup of:

Speedup = 1 / ((1 − 0.3) + 0.3/2) = 1 / 0.85 ≈ 1.18
16. Other limiting factors of parallel processing
▪ Besides the intrinsic sequentiality of parts of an algorithm, other factors also limit the achievable speedup (a toy illustration follows):
• communication cost
• load balancing across the processors
• costs of creating and scheduling processes
• I/O operations (mostly sequential in nature)
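A toy cost model (every constant is hypothetical, chosen only to illustrate the trade-off) showing how communication and process-creation overheads can outgrow the gains from extra processors:

    def toy_runtime(n, work=100.0, comm_per_proc=0.5, spawn_per_proc=0.2):
        # Ideal compute time shrinks with n, but the overheads grow with n.
        return work / n + (comm_per_proc + spawn_per_proc) * n

    for n in (1, 4, 12, 32, 64):
        print(f"{n:3d} processes -> runtime {toy_runtime(n):6.2f}")
    # Runtime falls from 100.70 to a minimum near n = 12, then rises again:
    # past that point the overheads dominate, and more processes make it slower.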