This is the title of a presentation I gave today at an internal symposium at IPAC, and I thought I would post it here. The title refers to the fact that a lot of scientific software developed on desktops will not prove useful for analyzing massive data sets that will soon be coming our way. Many scientists simply do not have training in writing scalable and portable software. I make some suggestions for what the community can do about this. In summary:
- Massive data sets are driving a new business model for scientific computing, where analysis will have to be done near the data.
- The computationally self-taught scientist working at a desktop will be at a big disadvantage in this new world.
- Portable, scalable software components will play a much bigger role in the future. Software will also need to be shareable, so that user communities can build on it to develop new applications.
- I think we need more formal computing education for scientists, and a cultural change that rewards computational skills.