Graphics Processing Units (GPUs) were originally designed to handle graphics-intensive applications, and their architecture is radically different from that of the Central Processing Units (CPUs) that power computers. Whereas a modern CPU has a handful of cores, allowing small-scale parallelisation, a GPU can execute thousands of data-parallel threads at once (with, in some cases, millions in flight), and most of its chip area is devoted to floating-point arithmetic units. This architecture gives GPUs far higher raw floating-point throughput than CPUs.
GPUs have in recent years been shown to have applicability well beyond their original purpose. With CPU clock rates staying flat, GPUs are increasingly attracting attention in astronomy (as in other sciences) as a means of processing the vast quantities of data being made available. In the early days, programmers had to disguise their codes as video applications to run them on GPUs. Now there are frameworks that let scientific applications run on GPUs directly; the best known of these is NVIDIA's Compute Unified Device Architecture (CUDA). Algorithms must still be rewritten to make them data parallel, that is, so that the data are distributed across many independent threads, so there remains a steep learning curve in using GPUs.
In a fascinating recent paper, Barsdell, Barnes and Fluke (2010) have analyzed astronomy algorithms to understand which can best be engineered to run on GPUs; how the engineering is done is critical to getting the best performance out of GPUs. Barsdell, Barnes and Fluke used algorithm analysis to understand how well algorithms can be optimized to run in parallel in a GPU environment (as opposed to optimizing a particular implementation).
The list below summarizes their conclusions; to run efficiently on a GPU, an algorithm should:
- Parallelize into many fine-grained elements.
- Ensure neighboring threads access nearby memory locations (coalesced access).
- Minimize branch divergence, where neighboring threads execute different instructions.
- Have high arithmetic intensity (many arithmetic operations per memory access).
- Avoid host-device memory transfers.