Having identified the root needs the product will fulfill, engaged people while selecting criteria, and generated a scorecard to evaluate each contender against, we're ready to get started, right?
Step 3: Evaluation
In order to evaluate and score all of the contending products, we first need some contending products.
At this point it's highly likely we will have one or two products that have been suggested. However, those aren't the only solutions on the market, and armed with our enhanced and comprehensive list of requirements, we ought to be able to find a few more options to evaluate. Even if we have a clear winner at this point, looking at additional options will ensure there wasn't anything critical we overlooked and that we can show alignment with the group's interests.
While options are good, we want to be wary of the internal development solution. Because that solution is what we make of it and will undoubtedly change over the course of implementation, it is the option that simultaneously has the fewest and most features, the lowest and highest cost, and the worst and best support. My recommendation is to include two or three versions, ranging from small to large feature sets, and to have each of them quoted separately. We also need to ensure that criteria for implementation time and internal support are evaluated, as a custom development project will have a significant long-term impact on both.
Play with Products
With the exception of the internal development product, it's now time to sit down and play with some products. Depending on the size and criticality of the purchase, this could range from something as light as reviewing published specs to getting full installations or samples and having real users test drive the products. The scorecard is the starting point for these trials, but we want to make sure we also capture additional features and any detrimental factors we run into.
Extra columns or space on the scorecard should be used to record concrete figures: price tags, annual support costs, average support response times, and other similar values. Some of these, like cost, will likely already be strongly represented in our criteria (i.e., don't spend more than $X), while others may need to be factored in or rated against one another. Don't forget to track bonus items, features no one was expecting but that might help break a tie.
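To make the scoring mechanics concrete, here is a minimal sketch of a weighted scorecard. The criteria names, weights, and scores below are hypothetical placeholders, not values from this series; the idea is simply that each criterion carries a weight reflecting its importance, and each product's total is the sum of score times weight.

```python
# Minimal weighted-scorecard sketch. All names and numbers are
# illustrative placeholders, not real evaluation data.

# Weight each criterion by its importance to the group (higher = more important).
weights = {"meets_core_need": 5, "cost": 4, "support": 3, "implementation_time": 2}

# Raw 1-5 scores per product, gathered during the hands-on trials.
scores = {
    "Product A": {"meets_core_need": 4, "cost": 3, "support": 5, "implementation_time": 4},
    "Product B": {"meets_core_need": 5, "cost": 2, "support": 3, "implementation_time": 3},
    "Internal build": {"meets_core_need": 5, "cost": 1, "support": 2, "implementation_time": 1},
}

def weighted_total(product_scores):
    """Sum of score x weight across all criteria."""
    return sum(weights[criterion] * score for criterion, score in product_scores.items())

# Rank the contenders. Remember: the highest score is a starting point
# for discussion, not the final answer.
ranking = sorted(scores, key=lambda p: weighted_total(scores[p]), reverse=True)
for product in ranking:
    print(f"{product}: {weighted_total(scores[product])}")
```

Extra columns for raw figures (price, support cost, response time) can sit alongside these weighted scores on the same sheet without being folded into the total until the group agrees on how to rate them.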
Review and Select
Highest score wins, right? Not necessarily.
After all of the evaluations have been completed, we should have a clear set of leaders. With some hands-on time, there have likely been some pros and cons discovered that weren't on the initial list. These could sway the group to re-evaluate some of their scores. The important part is that the process has given us the information we need to make an informed decision, whereas without it we're closer to guessing.
Either individually or as a group, we need to determine where the new criteria fall on the scorecard, determine whether anything has changed since we started (budget cuts, changing needs, etc.), and so on.
Document, Document, Document
Document all the factors in the decision: promises from vendors, pros and cons of products, changes to the scoring, the final selection. Six months from now, we need to be able to understand what details led to the decision we made and whether a product has failed to fulfill its promises.
The final deliverable of the product selection process is a decision that clearly incorporates the input of the necessary parties and is directly based on our needs and problems.
Once the product has been selected, it's time to implement. This should include at least a minimal implementation plan: identifying an owner for the implementation, purchasing the software, and so on. This can be a two-paragraph note with a bullet list or a fully defined project plan following your methodology of choice (critical path, CCPM, scrum, etc.).
Implementing the product is outside the scope of this product selection series, but there is one step beyond implementation that is not. Every process should end in a review: what worked, what didn't, did we achieve long-term success, and so on. In short, the process is never perfect, some things didn't go so well, and there is always too much work. So we'll skip past the implementation and head straight to the review.