The Death Knell For The Grid?

Well, the title comes not from me but from International Science Grid This Week, in response to Amazon’s announcement that it will provide high-performance computing services. Readers of my earlier posts will recall that one of Amazon’s weaknesses for science applications has been that high-performance clusters offer superior performance for I/O-bound applications. It will be important for science applications to see how well Amazon’s HPC services perform on the Montage image mosaic engine, the subject of earlier comparative studies, and to see what the costs are. In the meantime, read Craig Lee’s excellent article on the subject at iSGTW.

This entry was posted in Cloud computing, cyberinfrastructure, High performance computing, Uncategorized. Bookmark the permalink.

2 Responses to The Death Knell For The Grid?

  1. Steve B says:

    As evidenced by the recent TeraGrid 2010 conference in Pittsburgh, federation is definitely a bottleneck in the utilization of grid resources, and there are projects underway to address this issue; whether they will be fruitful and multiply remains to be seen.

    Another point to make is that while grid computing has been a rage for many years, it was initially driven by the need for very large simulation environments, such as numerical relativity or global climate simulations, where a hundred thousand cores is not enough. To a large extent, cloud computing’s strength, provisioning, addresses the need for a different simulation architecture, wherein a PI might need to run suites of, say, thousand-core jobs in a hierarchical or Monte Carlo fashion, to perform their research. Current ‘grid’ technologies can in principle handle this, but the overriding assignment of queue priority in large clusters favors the former case. While I can’t imagine the grid being ‘dead’, it has instead been supplemented and complemented by cloud computing.

  2. astrocompute says:

    Thank you for your comment. I quite agree that the Grid remains the technology of choice for computing at very large scales. For many applications in processing astronomical data (this is the crowd I run with), hundreds or thousands of cores are what is needed, and so I am inclined to think that the cloud may be a better choice. For example, we used Amazon EC2 to compute periodograms of all 200,000 public Kepler light curves. We provisioned a few hundred cores and the job took a couple of hours. Until now, my reservation regarding Amazon had been that its performance on I/O-bound applications was poor because it did not offer high-performance networks or parallel file systems.
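    The Kepler job above is a good example of an embarrassingly parallel workload: each light curve is independent, so the periodograms fan out one task per core with no communication between them. A minimal sketch of that pattern is below, using SciPy’s Lomb-Scargle routine and a local process pool in place of EC2 instances; the light curves are synthetic, and all function names here are illustrative, not the pipeline we actually ran.

    ```python
    import numpy as np
    from multiprocessing import Pool
    from scipy.signal import lombscargle

    # Grid of angular frequencies to scan (illustrative range).
    FREQS = np.linspace(0.1, 10.0, 2000)

    def make_light_curve(period, n=300, seed=0):
        """Synthetic, unevenly sampled light curve with a known period."""
        rng = np.random.default_rng(seed)
        t = np.sort(rng.uniform(0, 50, n))          # irregular time stamps
        flux = np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(n)
        return t, flux

    def best_frequency(curve):
        """Angular frequency of the strongest Lomb-Scargle peak."""
        t, flux = curve
        power = lombscargle(t, flux - flux.mean(), FREQS)
        return FREQS[np.argmax(power)]

    if __name__ == "__main__":
        periods = [2.0, 3.5, 5.0]
        curves = [make_light_curve(p, seed=i) for i, p in enumerate(periods)]
        # One independent task per light curve -- the same structure
        # scales out to hundreds of cloud cores with no code changes.
        with Pool() as pool:
            peaks = pool.map(best_frequency, curves)
        for p, w in zip(periods, peaks):
            print(f"true period {p:.1f} -> recovered {2 * np.pi / w:.2f}")
    ```

    The point of the sketch is the shape of the computation, not the periodogram itself: because no task talks to any other, provisioning is the only hard part, which is exactly where the cloud is strong.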
