Wheeler’s Informal OSS Evaluation Guidelines

On August 5, 2010, in Research, by Emily Brock

David Wheeler, in “How to Evaluate Open Source Software / Free Software (OSS/FS) Programs,” provides a thorough outline of how to “compile the short list” of top candidates and perform comparative assessments of the software. Notably, Wheeler’s article includes a summary of many recent OSS resources, including papers, websites, research projects, and articles.

Wheeler addresses the issue of comparing OSS and proprietary software, explaining that the methods set out in his paper can be used to evaluate OSS, and that the results can then be set alongside traditional proprietary software evaluations. The very aspects of OSS that make it difficult to compare with proprietary software are exactly the characteristics that might sway an implementer toward one or the other. In other words, the comparison is apples to oranges: one might not even want to evaluate OSS/FS and proprietary software with the same evaluation methods and score weightings.

Wheeler explains that OSS/FS programs make much more information publicly available than proprietary software does, such as source code and forums for discussion between developers and users.

Like most of the other resources examined in this review, Wheeler’s guidelines are aimed primarily at evaluation for selection, as opposed to evaluation as an ongoing best practice in OSS project management. Wheeler does note that implementers “need to be able to evaluate OSS/FS programs, because you will always need to know how well a given program meets your needs, and there are often competing OSS/FS programs.”

Many of the attributes Wheeler suggests examining are helpful to consider when developing any software assessment model: functionality, cost, market share, support, maintenance, reliability, performance, scalability, usability, security, flexibility/customizability, interoperability, and legal/license issues. Some of these attributes differ greatly between OSS/FS and proprietary software. For example, in assessing functionality, one might consider the opportunity to alter OSS/FS directly by editing or adding to the source code. Wheeler examines each of these aspects in depth and analyzes how an OSS/FS evaluation model might use each criterion.
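To make the idea of weighted criteria concrete, here is a minimal sketch of how an assessment model might combine per-attribute ratings into a single score. It is not taken from Wheeler’s paper; the weights and the ratings for the two candidate programs are hypothetical placeholders chosen only for illustration.

```python
# Minimal weighted-scoring sketch. The attribute names follow Wheeler's list;
# the weights and ratings below are hypothetical placeholders.

WEIGHTS = {
    "functionality": 0.25,
    "cost": 0.15,
    "support": 0.15,
    "reliability": 0.15,
    "security": 0.15,
    "flexibility/customizability": 0.15,
}

def score(ratings, weights=WEIGHTS):
    """Weighted average of per-attribute ratings (each on a 0-10 scale)."""
    total_weight = sum(weights.values())
    return sum(weights[attr] * ratings.get(attr, 0.0) for attr in weights) / total_weight

# Hypothetical ratings for two competing programs.
candidate_a = {"functionality": 8, "cost": 9, "support": 6,
               "reliability": 7, "security": 7, "flexibility/customizability": 9}
candidate_b = {"functionality": 9, "cost": 5, "support": 8,
               "reliability": 8, "security": 8, "flexibility/customizability": 4}

for name, ratings in [("Candidate A", candidate_a), ("Candidate B", candidate_b)]:
    print(f"{name}: {score(ratings):.2f}")
```

The weightings are where the apples-to-oranges problem surfaces: an implementer who values the ability to edit the source code directly would weight flexibility/customizability heavily, which can flip the ranking, so the same weighting scheme may not serve for both OSS/FS and proprietary candidates.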


OSS Eval Methods: Lit Review

On August 3, 2010, in Research, by Chris Prom

As I noted a few weeks ago, Emily Brock and I are reviewing formal evaluation methods for Open Source Software (OSS). We’re doing this because I would like to get a handle on what worked or didn’t work with the Archon project. Having an objective understanding of that project’s strengths and weaknesses will be critical as the ArchivesSpace project moves forward. The article that Emily and I hope to write will complement Sibyl Schaefer’s excellent Code4Lib piece.

The evaluation tools and methods that we found help users select software. While they may facilitate project improvement or self-criticism, that is not their primary purpose. Therefore, Emily Brock and I will be putting together a new method for OSS project evaluation and self-criticism, then testing whether it works.

All this is just to say, by way of introduction, that over the next few days, Emily and I will be releasing some posts based on the initial literature review we completed. Before getting to that, note that Wikipedia contains a helpful overview and comparison of existing open source software assessment methodologies. It lists the Open Source Maturity Model (OSMM) from Navica, the Qualification and Selection of Open Source software (QSOS), and the Open Business Readiness Rating (OpenBRR). Look out soon for more in-depth reviews of these and other methods!