The International Science Grid This Week posted a fascinating article on how high performance computing is being used to analyze time-series data sets from the Kepler mission. Kepler’s primary goal is to determine the frequency of Earth-sized planets around other stars.
In May 2009, it began a photometric transit survey of 170,000 stars in a 105-square-degree field in Cygnus, with a nominal mission lifetime of 3.5 years. As of this writing, the Kepler mission has released 1,547,900 light curves, gathered over 880 days of the mission (see http://exoplanetarchive.ipac.caltech.edu/docs/intro.html for more details). I have already written about how high performance computing is being used to produce an Atlas of the periodicities present in the light curves, a starting point for identifying new planets.
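To give a flavor of what "finding periodicities in a light curve" means in practice, here is a minimal sketch: detecting the dominant period in an evenly sampled, synthetic light curve from the peak of its power spectrum. The function name and the synthetic data are illustrative assumptions, not the mission's actual pipeline (which handles unevenly sampled data and far subtler signals).

```python
import numpy as np

def dominant_period(times, flux):
    """Estimate the dominant period in an evenly sampled light curve
    from the peak of its FFT power spectrum (DC term excluded)."""
    flux = flux - flux.mean()                # remove the constant brightness level
    power = np.abs(np.fft.rfft(flux)) ** 2   # power in each frequency bin
    freqs = np.fft.rfftfreq(len(flux), d=times[1] - times[0])
    peak = np.argmax(power[1:]) + 1          # skip the zero-frequency bin
    return 1.0 / freqs[peak]

# Synthetic light curve: a small brightness oscillation with a 3.2-day
# period, sampled every 0.02 days, plus photometric noise.
rng = np.random.default_rng(0)
t = np.arange(0, 64, 0.02)
f = 1.0 + 0.01 * np.sin(2 * np.pi * t / 3.2) + 0.001 * rng.standard_normal(t.size)
print(round(dominant_period(t, f), 1))  # → 3.2
```

Real survey data are unevenly sampled, so methods like the Lomb-Scargle periodogram are used instead of a plain FFT; the idea of scanning for a peak in a power spectrum is the same.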
The article in ISGTW describes work led by Travis Metcalfe (Space Science Institute, Boulder, CO, USA). His team has set up an Asteroseismic Modeling Portal (AMP), a Science Gateway to high performance computing. A Science Gateway is simply a customized, pre-configured set of tools, applications, and data integrated into a single portal and accessible through a common web browser via a Graphical User Interface (GUI). The AMP is “coupled with a low-level artificial intelligence algorithm that allows users, such as those managing the Kepler program, to quickly attain much-needed stellar data.”
The team is using the AMP to model the stars measured by Kepler, estimating properties such as their radii, masses, bulk compositions, and ages. This type of analysis was used last year to establish that Kepler-22b, the first planet found in the “Goldilocks Zone” (where water may exist in liquid form), was too large to be considered Earth-like: it’s 2.4 times the size of the Earth.
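The article describes the fitting machinery only as a “low-level artificial intelligence algorithm.” One common choice for this kind of model-matching is a genetic algorithm: evolve a population of candidate stellar parameters toward those whose predicted observables best match the measurements. The sketch below is an illustration of that idea under made-up assumptions; `forward_model` is a toy stand-in (the real AMP evaluates full stellar-evolution models), and none of these names are AMP's actual API.

```python
import numpy as np

def forward_model(mass, age):
    """Toy stand-in mapping stellar parameters (mass in Msun, age in Gyr)
    to a single mock observable; purely illustrative."""
    return 3000.0 * mass**-1.5 * (1.0 + 0.05 * age)

def fit_genetic(observed, generations=60, pop_size=40, seed=1):
    """Minimal genetic-algorithm search for (mass, age) matching `observed`."""
    rng = np.random.default_rng(seed)
    # Random initial population: mass in [0.8, 1.5], age in [1, 10].
    pop = np.column_stack([rng.uniform(0.8, 1.5, pop_size),
                           rng.uniform(1.0, 10.0, pop_size)])
    for _ in range(generations):
        misfit = np.abs(forward_model(pop[:, 0], pop[:, 1]) - observed)
        elite = pop[np.argsort(misfit)[: pop_size // 4]]  # keep the best quarter
        # Breed: perturb randomly chosen elite parents (mutation),
        # and carry the elite forward unchanged (elitism).
        parents = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        children = parents + rng.normal(0.0, 0.02, parents.shape)
        pop = np.vstack([elite, children])
        pop[:, 0] = pop[:, 0].clip(0.8, 1.5)
        pop[:, 1] = pop[:, 1].clip(1.0, 10.0)
    misfit = np.abs(forward_model(pop[:, 0], pop[:, 1]) - observed)
    return pop[np.argmin(misfit)]  # best-fit (mass, age)

target = forward_model(1.1, 4.6)  # pretend these are the true values
mass, age = fit_genetic(target)
```

With a single observable the fit is degenerate (different mass/age pairs can match equally well), which is why asteroseismology fits many oscillation frequencies at once; the search strategy, however, is the same.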
So far, 100 or so stars have been analyzed. The AMP uses the Kraken computer to perform its processing. Kraken is a Cray XT5 high-performance computer, capable of petaflop-scale computing (a thousand trillion calculations per second), managed by the University of Tennessee’s National Institute for Computational Sciences (NICS) for the National Science Foundation (NSF). Individual stars require only 512 of Kraken’s 100,000 cores, but the power of Kraken will eventually allow many stars to be processed at once.
This work is, I think, a foretaste of the future of astronomical computing, which will increasingly involve processing massive data sets on distributed high-performance platforms.
This post is based on material originally appearing in the International Science Grid This Week.