GPUs in Astronomy: Critical Decisions for Early Adopters

Following on from last week’s post about Graphics Processing Units (GPUs), I have been reading another paper by the same team from the Centre for Astrophysics and Supercomputing, Swinburne University of Technology in Australia. This paper, Fluke et al. (2011), gives a frank and sobering account of the pitfalls and risks that adopters must navigate to realize the benefits of GPUs.

GPUs have been recognized as a technology that astronomers must consider in processing the vast quantities of data that will be produced by new projects. The architecture is simple enough: a GPU is a highly parallel coprocessor with high memory bandwidth, and a number of astronomers have been able to use them to speed up their code by factors of 10 to 1,000 over a traditional CPU.
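To make that parallelism concrete, here is a minimal CUDA sketch of my own (illustrative only, not code from Fluke et al.): a SAXPY kernel that assigns one GPU thread to each array element, rather than looping over the elements on a single CPU core.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element: y[i] = a*x[i] + y[i].
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                         // guard threads past the end of the array
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;             // ~10^6 elements
    const size_t bytes = n * sizeof(float);

    // Host arrays.
    float *hx = new float[n], *hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device arrays, plus copies host -> device.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // One thread per element, in blocks of 256 threads.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);      // expect 4.0

    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}
```

The launch configuration is the whole trick: a million elements become a million near-simultaneous threads, which is where the large speedups over a single CPU core come from.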

The potential is there, but Fluke et al. describe the price that will have to be paid to master GPUs. Mature programming frameworks that abstract the details of the GPU away from the end user are not yet hardware-neutral. The most mature framework, CUDA, is proprietary to NVIDIA and operates only on that vendor’s chips. An open-source framework, the Open Computing Language (OpenCL), is gaining traction among practitioners. It has the potential to support hardware-agnostic coding, and its performance does not appear appreciably different from that of CUDA. So there is a distinct possibility that astronomers will only have to develop one set of code for all platforms.

Adoption of hardware-neutral frameworks would offer obvious benefits to astronomers. But there are plenty of other issues to grapple with. Regardless of framework, GPUs may well demand a transformation in programming methods to get the best out of them. Fluke et al. suggest that a good starting point might well be the least sophisticated: brute force, rather than algorithmic elegance. They cite the example of “ray shooting” (tracing light rays backwards from an observer) to model gravitational lenses, which can be brute-forced because each light ray is independent of the others. With this method, Thompson et al. (2010) performed billions of ray-shooting calculations at roughly 1 teraflop/s, with runtimes of a few days.
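The brute-force pattern is easy to see in code. Below is a hypothetical CUDA sketch of inverse ray shooting for point-mass microlenses (my own illustration of the technique, not Thompson et al.’s code): every thread traces one ray, sums the deflection from every lens, and bins the result into a magnification map.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical brute-force inverse ray shooting for point-mass microlenses.
// Each thread traces one ray from the image plane back to the source plane
// and bins it into a magnification map.  Rays never interact, so the only
// coordination between threads is the atomic histogram update at the end.

struct Lens { float x, y, mass; };

__global__ void shootRays(int raysPerSide, float halfWidth,
                          const Lens *lenses, int nLenses,
                          unsigned int *map, int mapSide, float mapHalfWidth)
{
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix >= raysPerSide || iy >= raysPerSide) return;

    // This thread's ray position on the image plane.
    float x = -halfWidth + 2.0f * halfWidth * ix / (raysPerSide - 1);
    float y = -halfWidth + 2.0f * halfWidth * iy / (raysPerSide - 1);

    // Brute force: sum the deflection from every lens for every ray,
    // O(N_rays * N_lenses), with no clever tree or grid structure.
    float ax = 0.0f, ay = 0.0f;
    for (int k = 0; k < nLenses; ++k) {
        float dx = x - lenses[k].x;
        float dy = y - lenses[k].y;
        float r2 = dx * dx + dy * dy + 1e-12f;  // soften to avoid division by zero
        ax += lenses[k].mass * dx / r2;
        ay += lenses[k].mass * dy / r2;
    }

    // Lens equation: source position = image position minus deflection.
    float sx = x - ax;
    float sy = y - ay;

    // Bin the ray into the source-plane magnification map.
    int px = (int)((sx + mapHalfWidth) / (2.0f * mapHalfWidth) * mapSide);
    int py = (int)((sy + mapHalfWidth) / (2.0f * mapHalfWidth) * mapSide);
    if (px >= 0 && px < mapSide && py >= 0 && py < mapSide)
        atomicAdd(&map[py * mapSide + px], 1u);
}

int main()
{
    const int raysPerSide = 2048, mapSide = 256, nLenses = 100;

    // A random star field on the host.
    Lens *hLenses = new Lens[nLenses];
    for (int k = 0; k < nLenses; ++k)
        hLenses[k] = { -10.0f + 20.0f * rand() / RAND_MAX,
                       -10.0f + 20.0f * rand() / RAND_MAX, 1.0f };

    Lens *dLenses;
    unsigned int *dMap;
    cudaMalloc(&dLenses, nLenses * sizeof(Lens));
    cudaMalloc(&dMap, mapSide * mapSide * sizeof(unsigned int));
    cudaMemcpy(dLenses, hLenses, nLenses * sizeof(Lens), cudaMemcpyHostToDevice);
    cudaMemset(dMap, 0, mapSide * mapSide * sizeof(unsigned int));

    dim3 threads(16, 16);
    dim3 blocks((raysPerSide + 15) / 16, (raysPerSide + 15) / 16);
    shootRays<<<blocks, threads>>>(raysPerSide, 15.0f, dLenses, nLenses,
                                   dMap, mapSide, 10.0f);

    unsigned int *hMap = new unsigned int[mapSide * mapSide];
    cudaMemcpy(hMap, dMap, mapSide * mapSide * sizeof(unsigned int),
               cudaMemcpyDeviceToHost);

    long long total = 0;
    for (int i = 0; i < mapSide * mapSide; ++i) total += hMap[i];
    printf("rays landing on the map: %lld of %lld\n",
           total, (long long)raysPerSide * raysPerSide);

    cudaFree(dLenses); cudaFree(dMap);
    delete[] hLenses; delete[] hMap;
    return 0;
}
```

Because each ray is independent, the kernel scales to however many threads the hardware offers; the single atomicAdd is the only serialization point, which is exactly why the brute-force approach maps so well onto a GPU.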

There are other considerations too. GPUs are optimized for graphics calculations, and science processing may only achieve 1/4 to 1/2 of a unit’s peak performance. Moreover, astronomy applications often need double precision, but many GPUs have only single-precision hardware, and those that do support double precision typically run it at a fraction of single-precision speed. And code profiling, not yet generally practiced, may prove valuable in optimizing performance.
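Both points are easy to check on your own hardware. Here is a minimal sketch (again my own, using only the standard CUDA event API) that times the same arithmetic loop in single and double precision; on most consumer GPUs the double-precision version is several times slower, and this event-timing pattern is also the simplest form of the profiling mentioned above.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Run identical arithmetic in single and double precision and time each
// kernel with CUDA events.  The gap between the two timings shows the
// double-precision penalty; the events themselves are a lightweight way
// to start profiling a kernel.

template <typename T>
__global__ void fma_loop(T *out, int iters)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    T x = (T)i;
    for (int k = 0; k < iters; ++k)      // arithmetic-bound inner loop
        x = x * (T)1.0000001 + (T)0.5;
    out[i] = x;                          // store so the loop is not optimized away
}

template <typename T>
float timeKernel(int n, int iters)
{
    T *out;
    cudaMalloc(&out, n * sizeof(T));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    fma_loop<T><<<(n + 255) / 256, 256>>>(out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(out);
    return ms;
}

int main()
{
    const int n = 1 << 20, iters = 1000;
    printf("float:  %.2f ms\n", timeKernel<float>(n, iters));
    printf("double: %.2f ms\n", timeKernel<double>(n, iters));
    return 0;
}
```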

Altogether, astronomers should perform a cost-benefit analysis and some initial development to investigate whether their code can benefit from running on a GPU. The table in last week’s post indicates the types of application that can benefit from GPUs. While there are landmines to navigate, Fluke et al. cite an impressive number of applications that do in fact benefit from GPU processing. Used in the right way and on the right applications, GPUs will be a powerful tool for astronomers processing huge volumes of data.
