“Software Use in Astronomy: An Informal Survey” by Momcheva and Tollerud

This is the title of a paper by Ivelina Momcheva and Erik Tollerud that was recently posted on astro-ph at http://arxiv.org/abs/1507.03989. Between December 2014 and February 2015, they carried out an informal survey about software use in astronomy, advertised mainly through social media. It is not presented as representative of the worldwide astronomy community, and no attempt is made to correct for selection biases: the results are presented as collected. I recommend this article to everyone writing software in astronomy. Figures 1 and 2 alone make it worth reading (no spoilers!). You can also interact with some of the visualizations I reproduce below.

Participants were asked the following questions:

  1. Do you use software in your research?
  2. Have you had formal training in software development?
  3. Which is more common in your work: community software or self-written software?
  4. Select which of the community tools you use regularly for your research.

Three questions requested basic demographic information:

1. What is your field of research?
2. What is your career stage?
3. What is the location of your institution?

The survey received 1,142 responses, spanning all career stages. Fully 100% of respondents use software in their research, yet only 8% report having received substantial training in software development. Another 49% received “little” training, and the remaining 43% have received none at all. This pattern holds across all career stages:

[Figure: level of formal training, broken down by career stage]

The same is true for the 90% of participants who write their own code:

[Figure: training levels among the respondents who write their own code]

Astronomers appear to use quite a narrow set of tools, with only ten tools used by more than 10% of respondents, again with little demographic variation:

[Figure: the community tools most commonly used by respondents]

That Python tops the list of tools should surprise no-one, given the high cost of licensing its closest rival, IDL, and the outstanding free Python distributions available to astronomers. We often talk of IDL having lost much of its market share to Python (and the comments at the end of the paper do back this up), but it’s not that simple, as many astronomers do use both:

[Figure: overlap between respondents who use Python and those who use IDL]

Some of the most revealing information was in the comments at the end.

  • Many respondents were learning Python, and its displacement of IDL as the programming language of choice will almost certainly continue.
  • Many decried the lack of formal training in software engineering, and some thought it should be a required part of their graduate programs.
  • There were suggestions that greater credit and career opportunities should be afforded to those developing community software.



The 8th Extremely Large Databases Conference and Workshop

These annual workshops discuss “…the real-world challenges, practical considerations, and nuts-and-bolts solutions in the realm of managing and analyzing extreme scale data sets.” The attendees include Big Data users from industry and science, developers, researchers, and providers. This year’s topics were:

  • Integrating statistical tools with databases and clouds
  • Big Data centers
  • Current practices, and unsolved challenges in Big Data
  • Urban science.

PDFs and videos of all the talks are online at http://www-conf.slac.stanford.edu/xldb2015/ProgramC.asp, and there is a YouTube channel at https://www.youtube.com/playlist?list=PLE1UFlsTj5AGd364NBD2R1_RkESSgn8rD

If you are interested in Big Data and its challenges, these talks are well worth a look.

Two of my favorites are “R in the World: Interfaces between Languages,” by John Chambers, and “Critical Technologies Necessary for Big Data Exploitation,” by Stephen Brobst. You can watch them on the YouTube channel (the embed codes appear to be incorrect, so I can’t post them here).


Toward a Framework for Evaluating Software Success

Many of us in the astronomical software business have been debating the best way to evaluate the quality of software and its success within its user community. Here is one proposal submitted by a group of us to the Computational Science & Engineering Software Sustainability and Productivity Challenges (CSESSP) Workshop, October 15-16, 2015, Washington, DC, USA.

Briefly, we are proposing the creation of a software “peer-review group,” comprised of grant recipients funded to develop sustainable software, who would meet periodically to evaluate each others’ software, developing and refining success metrics along the way. What do others in the field think of this approach?

Toward a Framework for Evaluating Software Success: A Proposed First Step

Stan Ahalt (ahalt@renci.org), Bruce Berriman, Maxine Brown, Jeffrey Carver, Neil Chue Hong, Allison Fish, Ray Idaszak, Greg Newman, Dhabaleswar Panda, Abani Patra, Elbridge Gerry Puckett, Chris Roland, Douglas Thain, Selcuk Uluagac, Bo Zhang.

Software is a particularly critical technology in many computational science and engineering (CSE) sectors. Consequently, software is increasingly becoming an important component in the evaluation of competitive grants and the execution of research projects. As a result, software can be viewed as a scholarly contribution and has been proposed as a new factor to consider in tenure and promotion processes. However, existing metrics for evaluating the capability, use, reusability, or success of software are sorely lacking. This lack of software metrics permits the development of software based on poor development practices, which in turn allows poorly written software to “fly under the radar” in the scientific community and persist undetected. The absence of evaluation by knowledgeable peers often leads to the establishment and adoption of tools based on aggressive promotion by developers, ease-of-use, and other peripheral factors, hindering the sustainability, usefulness, and uptake of software and even leading to unreliable scientific findings. All of these factors mean that addressing the current lack of software evaluation metrics and methods is not just a question of increasing scientific productivity, but also a matter of preventing poor science.

As a first step toward creating a methodology and framework for developing and evolving software success metrics for the CSE community, we propose the creation of a software “peer-review group.” This group, comprised of grant recipients funded to develop sustainable software, would meet periodically to evaluate their own and each others’ software, developing and refining success metrics along the way. We envision the group as a pilot test for a potential larger-scale effort to establish a more formal framework for software success metrics and evaluation.

Framing Success Metrics

Our perspective on framing software success metrics arose from a breakout session held at a recent NSF-funded workshop attended by more than 75 Software Infrastructure for Sustained Innovation (SI2) principal investigators.  The breakout team identified the need to create a methodology and framework for academic software success metrics, brainstormed factors to consider in developing such a framework, and outlined the actionable steps needed to advance this effort. The idea of a software review group was introduced in these discussions, and possible outcomes—presented briefly here—were discussed. We believe further discussion by Computational Science and Engineering Software Sustainability and Productivity Challenges (CSESSP) workshop attendees will help to further develop these ideas and emphasize the importance of framing software success metrics as an integral part of developing a sustainable software ecosystem.

The Need to Evaluate Software Success

On the whole, the development of research software in academia, government, and national labs trails the rigor of industry-developed software. Incentives and measurements of what constitutes successful software differ among and within these sectors, yet all are ultimately part of the same software ecosystem. Generally speaking, successful software must be reliable, sustainable, have value to the target user community and beyond, and provide outcomes that are meaningful to societal stakeholders. Sound software development and engineering practices lead to sustainable software. Stakeholder adoption, use, and reuse of software create feedback loops that further enhance software success. To improve the productivity and sustainability of research software and the research communities it supports, we should be able to objectively measure what makes software successful—or not.

Factors to Consider

There are multiple dimensions to consider in developing an effective methodology and framework for evaluating software success. One dimension relates to the factors that contribute to software success, such as criticality, usability, performance, functionality, availability, and scientific impact. These terms may have different meanings in different fields; for example, usability may mean something different for networking software than it does for security software. Another dimension relates to the types of outcomes we might want to measure, such as the value of the scientific contributions of a grant or project, the value of the products of a grant or project (i.e., the value of the software), or the nature of the team’s “community conduct” (e.g., its value to the software ecosystem). Another relates to defining needs: for example, what is it that funders, researchers, or the broader community need to know in order to inform better decisions and improve sustainability? Finally, we must develop robust metrics to address these dimensions, inform project goals, and empower software creators, researchers, funders, and others to effectively evaluate software.

Next Steps

To begin to develop and evolve a software evaluation framework, we propose establishing a peer-review group: an organization of representative stakeholders who will self-review software works created by their respective communities. This group would effectively constitute a pilot program to inform the feasibility, scope, and approach of a future, larger effort to establish and refine a framework for sustainable software metrics. At a minimum, this group would give its members an opportunity for regular review and enhance their own self-improvement processes. If successful more broadly, the group would help to characterize key challenges in software evaluation, define and refine evaluation criteria, and lead to a more informed approach to software development and evaluation for the CSE community as a whole.

We believe further discussion of this idea at the CSESSP workshop would refine and inform our approach and help to generate momentum toward achieving better software evaluation approaches. Examples of questions that warrant further exploration include:

  • How should we determine who should be included in the review group?
      ◦ What attributes make someone an expert software reviewer?
  • How should we manage the process for submitting software for evaluation?
  • Should we require all group members to regularly submit their own software?
  • How can others opt in to have their software reviewed?
  • How will the process provide adequate protections against conflicts of interest, address reviewers’ knowledge limitations, and address the possibility that some software creators may be competing with each other or with reviewers?
  • How should this activity be structured to continually advance the ultimate aim of establishing an objective set of review criteria that can be applied to different types of software?
  • What evaluation criteria or mechanisms are needed to ensure the group works effectively toward its goals?
  • What types of documentation or outcomes would be useful toward developing a larger-scale metrics framework?

Submitted to: Computational Science & Engineering Software Sustainability and Productivity Challenges (CSESSP) Workshop, October 15-16, 2015, Washington, DC, USA

Report from the National Science Foundation-funded workshop for Software Infrastructure for Sustained Innovation (SI2) Principal Investigators, held February 17-18, 2015, at the Westin Arlington Gateway in Arlington, Virginia: http://dl.acm.org/citation.cfm?id=2764957.


Machine Learning with Scikit-Learn (I) – PyCon 2015

An excellent introduction to machine learning, by Jake VanderPlas at PyCon 2015. Long, but full of useful information.
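
The tutorial is built around scikit-learn’s uniform estimator interface: load data, split it into training and test sets, fit a model, and evaluate its predictions. As a flavor of that workflow, here is a minimal sketch using the library’s bundled iris dataset; the dataset and classifier are illustrative choices of mine, not necessarily those used in the talk.

    # Minimal scikit-learn workflow: load data, split, fit, evaluate.
    # The dataset and classifier are illustrative, not necessarily those
    # used in the PyCon tutorial.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.25, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)             # train on the training split
    y_pred = clf.predict(X_test)          # predict labels for the held-out split
    print("accuracy:", accuracy_score(y_test, y_pred))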


What’s the Difference Between Cluster, Grid and Cloud Computing?

After my last post on introductory videos on cloud computing, I was asked if there were videos that explain the difference between cluster, grid, and cloud computing. Here is a very good one by Prof. Ajit Pal, Department of Computer Science and Engineering, IIT Kharagpur. He explains the architectural differences between these platforms, as well as the implications for maintenance, deployment, and cost. Although long at 55 minutes, the video is worthwhile for its technical approach, both for computer professionals and for scientists wishing to exploit these approaches to computing.
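
The distinction the lecture draws can be made concrete in code: on a cluster or grid you typically describe a job and hand it to a shared scheduler, whereas on a cloud you provision (and pay for) the machines themselves on demand through an API. Below is a minimal sketch of that contrast, assuming a SLURM cluster and an AWS account with boto3 configured; the job script name, machine image ID, and instance type are placeholders of my own, not taken from the video.

    # Illustrative contrast, not taken from the lecture: batch submission
    # on a cluster versus on-demand provisioning in a cloud. The job
    # script, image ID, and instance type below are placeholders.
    import subprocess

    import boto3

    # Cluster/grid model: describe the work and hand it to a shared
    # scheduler, which decides when and where it runs.
    subprocess.run(["sbatch", "process_images.sh"], check=True)

    # Cloud model: provision the machine itself on demand, pay by the
    # hour, and release it when the work is done.
    ec2 = boto3.resource("ec2")
    instances = ec2.create_instances(
        ImageId="ami-xxxxxxxx",   # placeholder machine image
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("launched cloud instance:", instances[0].id)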


Videos on Getting Started With Cloud Computing

A number of people have asked if I would post links to videos that provide useful introductions to Cloud Computing. Believe it or not, there are many, many videos on this topic on YouTube, and here are the ones I think are most useful if you are getting started with cloud computing.

Cloud Computing Explained

What is the Cloud? (now with pictures!)

A bit more technical: The Three Ways to Cloud Compute

A longer but very good introduction:  Introduction to Cloud Computing

This last video is by Eli the Computer Guy. His channel at https://www.youtube.com/user/EliComputerGuyLive has a lot of interesting computer videos – well worth a look.


The Palomar Transient Factory: High Quality Realtime Data Processing in a Cost-Constrained Environment

This is the title of a paper by Surace et al. (2015), currently available on astro-ph and presented at ADASS XXIV in October 2014. The Palomar Transient Factory (PTF) is an example of the kind of cost-constrained project that is now common in astronomy: it produces a high volume of data that needs near real-time processing for maximum science return, yet must achieve all of this on a shoestring budget. In this post I will focus on how the cost constraints were managed, rather than give a technical description of the project and its infrastructure. The decisions made exploited many years of expertise at IPAC in managing science operations centers for NASA missions.

The PTF itself is a generic term for several projects, with various observing cadences aimed at discovering supernovae, gamma-ray bursters and other transient objects. The original PTF was succeeded by the “intermediate” Palomar Transient Factory (iPTF), which concentrates on specific, focused science campaigns rotated on a quarterly basis. The iPTF will in turn be succeeded by the Zwicky Transient Facility, which will operate a new camera, composed of inexpensive “wafer-scale” CCDs, with a field of view of nearly 50 square degrees.

Cost constraints were managed across all parts of the project, from the hardware on the telescope all the way through to the archive system. First of all, the project took advantage of existing hardware in the data acquisition system:

  • It re-used the CFHT 12K Mosaic Camera, and replaced the liquid nitrogen dewar with a mechanical cryo-cooler.
  • The system primarily surveyed the sky in one filter, the R-band, which maximizes survey volume.
  • It took advantage of the 1.2-m Oschin-Schmidt telescope, rather than build a new one.
  • Telescopic operations are largely robotic.
  • Transients discovered by PTF can be  followed up in near real-time by other telescopes at Palomar.

All data acquired at the telescope are required for science analysis, and are transmitted to IPAC via a microwave link through the San Diego Supercomputer Center. At IPAC, the data are processed on twenty-four dual-CPU compute drones. The processing itself is embarrassingly parallel, with the data for each CCD processed on a single drone. Mass storage is managed with a ZFS file system, with data compression. The long-term storage is dual-homed, connected to both the operations system and the archive system: disk is too expensive to maintain separate operations and archive copies, at the cost of extra complexity in controlling file ownership between operations and the archive. See the figure below for a schematic of the processing flow:

[Figure: schematic of the PTF processing flow]
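
To make the “embarrassingly parallel” point concrete, here is a toy sketch of the per-CCD pattern described in the paper, with each CCD’s data handed to an independent worker in the spirit of the compute drones; the reduce_ccd function and the file names are hypothetical, not PTF code.

    # Toy illustration of the embarrassingly parallel pattern described
    # in the paper: each CCD image is reduced independently, so the work
    # can be farmed out with no communication between tasks. The
    # reduce_ccd function and file names are hypothetical, not PTF code.
    from multiprocessing import Pool

    def reduce_ccd(ccd_file):
        # Placeholder for the per-CCD calibration, astrometry, and
        # photometry that a real drone would perform.
        return "processed " + ccd_file

    if __name__ == "__main__":
        ccd_files = ["exposure_ccd%02d.fits" % i for i in range(12)]
        with Pool(processes=4) as pool:   # each worker plays the role of a drone
            results = pool.map(reduce_ccd, ccd_files)
        print("\n".join(results))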

The data processing system was developed under an agile management process, with only a handful of core staff and with heavy involvement of scientists throughout. This is a key feature of IPAC’s institutional strategy and ensures alignment of software development with science goals. The system draws heavily on existing community software, with individual modules in various languages carrying out specific tasks. Utility was valued over elegance.

The archive is managed within the Infrared Science Archive (IRSA) at IPAC. The PTF archive interface is essentially a thin layer built atop a reusable and portable science information system that has supported the archives of many missions and projects at IPAC for the past decade and a half.

Finally, a critical component of the PTF is its set of “science marshals,” which are organized around particular topics, organize and present results for those topics, and, among other things, allow scientists to interact with the results and form collaborations.
