
=High Performance Computing (HPC)= toc

What is __parallel computing__? What are parallelization strategies, and what types of parallelization exist? What problem types are there, and can your problem be parallelized? How can you make an existing program parallel? Is it worth it to make a program parallel?

=Resources=

Introductory Modules
Programming

=Applications=

- Weather
- Matlab: engineering calculations such as characteristics of an airfoil
- Optimization
- Swarm Robotics
- Swarm Robotics for kids in game form (physical or graphical)
- Materials Simulations
- Graph Analyses
- Submarine Tracking

=What is High Performance Computing?=

- Supercomputing: Wikipedia Definition
- Sample Module: Understanding High Performance Computing
- Technology@Intel: What Is HPC?
- Intel: Parallel Programming
- TED Talks: Computers
- [|Should programming supercomputers be hard?]
- [|Parallel Computing Concepts]
- [|The Tao of Parallelism in Algorithms]
- [|The Supercomputing Conference Series]
- [|The 26th Annual International Supercomputing Conference]
- [|Supercomputing Challenge]
- Supercomputing Online
- [|RDMA Heterogeneous Parallel Computing]
- [|HPC Wire- From Mobile Phones to Supercomputers]
- Supercomputing Education in Russia
- [|Top 500 Supercomputing]
- [|What is HPCC]
- [|HPCC Glossary Terms]

=Learning Resources=

[|University of Utah HPC- Hello World] In this tutorial you will learn how to compile a basic MPI program on the CHPC clusters, as well as basic batch submission and user environment setup. To complete this tutorial, you will need an account with CHPC. If you don't have an account, see the [|Getting Started at the CHPC] guide.

[|Top 500 Supercomputer Sites]

MPI Hello World Tutorial: This tutorial covers the basics of connecting to Saguaro and writing, compiling, and running programs.
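Since the page only links to the tutorial rather than reproducing it, here is a minimal MPI "hello world" sketch in C for orientation. The MPI calls are standard; the compile and run commands in the comments assume a generic `mpicc`/`mpirun` toolchain and are not specific to Saguaro or any particular cluster.

 /* hello_mpi.c - minimal MPI "hello world"
  *
  * Compile (assuming an MPI wrapper compiler is available):
  *   mpicc hello_mpi.c -o hello_mpi
  * Run on 4 processes (launcher name varies by site and scheduler):
  *   mpirun -np 4 ./hello_mpi
  */
 #include <stdio.h>
 #include <mpi.h>
 
 int main(int argc, char *argv[])
 {
     int rank, size;
 
     MPI_Init(&argc, &argv);                 /* start the MPI runtime      */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id          */
     MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes  */
 
     printf("Hello from rank %d of %d\n", rank, size);
 
     MPI_Finalize();                         /* shut down cleanly          */
     return 0;
 }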

MPI Finding Maximums Tutorial: This tutorial covers the basics of parallel programming using MPI, walking step by step through the source code of a parallel program that finds the maximum of an array.
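The tutorial itself is external, so the sketch below shows one common way to find the maximum of an array in parallel: each rank computes a local maximum over its slice, and MPI_Reduce with MPI_MAX combines the results. The array size and contents here are made up for illustration and are not taken from the tutorial's source code.

 /* max_mpi.c - each rank finds a local maximum, MPI_Reduce combines them */
 #include <stdio.h>
 #include <mpi.h>
 
 #define N_PER_RANK 1000   /* slice size per rank (illustrative) */
 
 int main(int argc, char *argv[])
 {
     int rank, size;
     double local[N_PER_RANK];
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
 
     /* Fill this rank's slice with synthetic data. */
     for (int i = 0; i < N_PER_RANK; i++)
         local[i] = (double)(rank * N_PER_RANK + i) * 0.5;
 
     /* Local maximum over this rank's portion of the array. */
     double local_max = local[0];
     for (int i = 1; i < N_PER_RANK; i++)
         if (local[i] > local_max)
             local_max = local[i];
 
     /* Combine the local maxima into a global maximum on rank 0. */
     double global_max;
     MPI_Reduce(&local_max, &global_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
 
     if (rank == 0)
         printf("Global maximum = %f\n", global_max);
 
     MPI_Finalize();
     return 0;
 }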

=Supercomputing Game=

**Target**
- High school and up
- Science/engineering focused people
- People with problems to parallelize

**Goals**
- Emphasize data flow and parallelism
- Don't focus on the specifics of C/MPI programming

**Platform**
- Web-based (HTML5) would be easy to deploy, but possibly more difficult to debug and work with. (gist.github.com/768272)
- Other platforms depend on a specific environment and are not as easy to distribute and use for most people.

**Game Mechanics**
- Different game types are possible (RPG, strategy, shooter, arcade, platformer, etc.)
- Need a method of showing data flow/parallelism techniques.

=Parallelization Strategies=

- Granularity: number and size of tasks
- Divide and conquer
- Data decomposition (see the sketch below)
- Recursive decomposition
- Exploratory decomposition
- LU factorization
- Block-cyclic distribution
- Sparse matrices
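To make the data-decomposition item above concrete, here is a hedged sketch of a simple block decomposition with MPI: rank 0 scatters equal-sized chunks of an array, each rank works only on its chunk, and the partial results come back with MPI_Reduce. The chunk size and the sum operation are illustrative choices, not a prescription.

 /* decomp_mpi.c - block data decomposition with MPI_Scatter and MPI_Reduce */
 #include <stdio.h>
 #include <stdlib.h>
 #include <mpi.h>
 
 #define CHUNK 4   /* elements per rank (made-up size for the sketch) */
 
 int main(int argc, char *argv[])
 {
     int rank, size;
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
 
     double *global = NULL;
     if (rank == 0) {                        /* rank 0 owns the full array  */
         global = malloc((size_t)size * CHUNK * sizeof(double));
         for (int i = 0; i < size * CHUNK; i++)
             global[i] = (double)i;          /* synthetic data              */
     }
 
     /* Distribute one contiguous block of the array to every rank. */
     double local[CHUNK];
     MPI_Scatter(global, CHUNK, MPI_DOUBLE,
                 local,  CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);
 
     /* Each rank works only on its own block. */
     double partial = 0.0;
     for (int i = 0; i < CHUNK; i++)
         partial += local[i];
 
     /* Combine the partial results back on rank 0. */
     double total;
     MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
 
     if (rank == 0) {
         printf("Sum of %d elements = %f\n", size * CHUNK, total);
         free(global);
     }
 
     MPI_Finalize();
     return 0;
 }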

Overview:
- Maximize data locality
- Minimize data exchange volume
- Minimize frequency of interactions
- Minimize contention at hot spots
- Replicate data/computation
- Overlap computations with interactions (see the sketch below)
- Use optimized collective interaction operations
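As one illustration of the "overlap computations with interactions" point, the hedged fragment below posts a nonblocking exchange with MPI_Isend/MPI_Irecv, does local work that does not depend on the incoming data, and only then waits. The ring-neighbor pattern and buffer sizes are assumptions made for the sketch.

 /* overlap_mpi.c - overlap local computation with a nonblocking ring exchange */
 #include <stdio.h>
 #include <mpi.h>
 
 #define N 1000   /* message length (illustrative) */
 
 int main(int argc, char *argv[])
 {
     int rank, size;
     double sendbuf[N], recvbuf[N], work[N];
     MPI_Request reqs[2];
 
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
 
     int right = (rank + 1) % size;          /* ring neighbors (assumed layout) */
     int left  = (rank - 1 + size) % size;
 
     for (int i = 0; i < N; i++) {
         sendbuf[i] = rank;                  /* synthetic data to exchange      */
         work[i]    = i * 0.001;             /* data for independent local work */
     }
 
     /* Start the exchange, but do not wait for it yet. */
     MPI_Irecv(recvbuf, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
     MPI_Isend(sendbuf, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);
 
     /* Computation that does not need recvbuf proceeds while messages move. */
     double local_sum = 0.0;
     for (int i = 0; i < N; i++)
         local_sum += work[i] * work[i];
 
     /* Wait for the communication before touching recvbuf. */
     MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
 
     printf("rank %d: local_sum=%f, first value from left neighbor=%f\n",
            rank, local_sum, recvbuf[0]);
 
     MPI_Finalize();
     return 0;
 }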

General methods:
- Convert the serial program to operate on vectors
- Substitute a vector iterative parallel algorithm for a direct serial algorithm
- Exploit recursive doubling when possible (see the sketch below)
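The recursive-doubling item can be illustrated with a global-sum sketch: in log2(P) steps, each rank exchanges its partial value with the partner whose rank differs in one bit, so every rank ends up with the full sum. This sketch assumes the number of ranks is a power of two; a real code would normally just call MPI_Allreduce, which typically implements such schemes internally.

 /* rdouble_mpi.c - recursive-doubling global sum (power-of-two ranks assumed) */
 #include <stdio.h>
 #include <mpi.h>
 
 int main(int argc, char *argv[])
 {
     int rank, size;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &size);
 
     double value = rank + 1.0;     /* each rank contributes one number */
 
     /* At steps with mask 1, 2, 4, ... exchange with the partner whose
        rank differs in that bit and accumulate its partial sum.        */
     for (int mask = 1; mask < size; mask <<= 1) {
         int partner = rank ^ mask;
         double incoming;
         MPI_Sendrecv(&value,    1, MPI_DOUBLE, partner, 0,
                      &incoming, 1, MPI_DOUBLE, partner, 0,
                      MPI_COMM_WORLD, MPI_STATUS_IGNORE);
         value += incoming;
     }
 
     /* Every rank now holds the same total: 1 + 2 + ... + size. */
     printf("rank %d: global sum = %f\n", rank, value);
 
     MPI_Finalize();
     return 0;
 }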

=News and Developments=

Tech Times: __degrees__ in supercomputing

[|SETI@Home]

Linux Journal: Parallel Computing Using Linux

=Similar Projects and Programs=

- [|Building your own supercomputer]
- [|Texas State Technical College: HPC associates degree]
- San Diego Supercomputer Center: @http://education.sdsc.edu/
- TeraGrid/XSEDE: @https://www.xsede.org/education-and-outreach
- @https://www.xsede.org/web/guest/curriculum-and-educator-programs
- @https://www.xsede.org/web/guest/engagement

=Gil's Powerpoints and Course Materials=

CSE 494: Intro to High Performance Computing - Projects:
- MPI Project Part 1
- MPI Project Part 2

CSE 494: Intro to High Performance Computing - Lecture Slides:
Lecture 2: Saguaro User Environment and Platform Topologies  

Lecture 3: OpenMP

Lecture 4: Parallelization Strategies

Lecture 5: Parallelization continued, Compile & Debug, and Precision

Lecture 6: MPP Abstractions and Patterns

Lecture 7: Intro to MPI

Lecture 8: MPI Continued

Lecture 9: MPI Examples and Problem Survey

Lecture 10: Models for Parallel Applications

Lecture 11: Matrix Operations

Lecture 12: FFT, Libraries and MPI utilities

Lecture 13: MPI-IO

Lecture 16: Quantum Computing and H/W overview

Lecture 17: Concurrent Programming

Example Programs:

=Links=

[|History of Computer Animation - P1] [|TED: A brain in a supercomputer]

=Collaborators=

- Logan Van Engelhoven