GPUs vs. CPUs: Apples vs. Oranges?

This is the last of three posts on processing with GPUs. Jan Zverina and Peter Varhol wrote a pair of interesting articles on the roles of GPUs and CPUs in scientific data processing, and they made a convincing case that the two are suited to quite different kinds of work. A quote from Axel Kohlmeyer, associate director of the Institute for Computational Molecular Science at Temple University, summarizes the position well: “CPUs are designed to handle complexity well, while GPUs are designed to handle concurrency well.”

CPUs are much better at handling serial tasks, and they are needed to feed GPUs with data from disk. The speed-up from a GPU can nevertheless be enormous for those willing to put in the effort to port their code. Zverina cites one application that benefited from a GPU: a protein-folding simulation of Trp-cage (an artificially designed protein) run on a single Intel Xeon E5462 2.80 GHz CPU versus an NVIDIA Tesla C1060 GPU. It is not astronomy, but the accompanying video is impressive.

So what is an astronomer to do? Astronomers need to understand where GPUs can help performance, and fortunately there is a growing number of studies explaining just that. Open, platform-agnostic frameworks will encourage the development of portable code, and tools that help astronomers decide when to run their code on a GPU may prove just as valuable. Varhol describes one such tool, called Jacket:

“The lack of engineering applications that run on the GPU is a problem that isn’t going away soon. Still, there may be an easier way of getting code to run on GPUs.

A startup company called Accelereyes is working to ease the burden of moving code over to GPUs using a product called Jacket. It has started doing so with MATLAB, the special-purpose language from The MathWorks used by scientists and engineers.

Here’s how it works: Engineers examine their code, and tag data structures that might execute more quickly on a GPU. Jacket takes those tags and automatically compiles those data structures into GPU-executable code. When data and functions use those data structures, it compiles the functions to GPU code, and fetches the data into GPU memory space. When the computation is complete, the data is returned to the CPU space.

Because most engineering groups own their own MATLAB source code, this can be a relatively straightforward approach to using GPUs.”

Visit Jacket online at http://www.accelereyes.com/products/jacket.
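
The quoted description boils down to a tag-compute-gather pattern: mark the arrays you want on the GPU, let overloaded functions execute on them there, and copy the results back when you are done. The post does not show Jacket’s own syntax, so here is a minimal sketch of that same pattern using MATLAB’s built-in gpuArray and gather functions from the Parallel Computing Toolbox (mentioned in the comments below); treat it as an illustration of the workflow, not as Jacket code.

    % Sketch of the tag-compute-gather workflow using the Parallel
    % Computing Toolbox (not Jacket); requires a CUDA-capable GPU.
    N = 4096;
    A = rand(N, 'single');      % ordinary array in CPU memory
    G = gpuArray(A);            % "tag" the data: copy it into GPU memory

    % Overloaded functions called on gpuArray inputs execute on the GPU.
    B = fft2(G);                % 2-D FFT runs on the GPU
    C = real(B .* conj(B));     % power spectrum, still resident on the GPU

    P = gather(C);              % copy the result back to CPU memory

The pattern only pays off when the arrays are large enough that the GPU computation outweighs the cost of copying data across the PCIe bus, which is why the serial, disk-facing parts of a pipeline stay on the CPU.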


4 Responses to GPUs vs. CPUs: Apples vs. Oranges?

  1. Daniel says:

    I use Jacket for particle-based simulations (20M particles to juggle) and I get 20 times faster Matlab code. Definitely worth it for me.

  2. astrocompute says:

    Interesting. Do you have papers or posts where I can learn about your work?

  3. If you use IDL, check out GPULib. It is similar to Jacket, but has an IDL interface. Also, note that MathWorks itself now includes GPU support in its Parallel Computing Toolbox.

  4. Pingback: Cosmological Calculations with GPUs | Astronomy Computing Today
