18 Dec 2017 by Dennis Breen
The Trouble with Rigor: Evaluation Methods for Selecting a CMS

If you’re responsible for a complex website or intranet, you’ve probably faced the daunting task of figuring out what CMS would best power your site. If you’re a business analyst, or have one on your team, you may have followed a process like this:

  1. Gather a list of required features
  2. Prioritize the requirements using a method like MoSCoW
  3. Develop a weighted scoring method for each of the prioritization levels
  4. For each candidate CMS, assign a suitability score for how well it meets each requirement
  5. Select the CMS with the highest score

This detailed, comprehensive approach is deeply rooted in the Business Analysis Body of Knowledge, or BABOK Guide, which lays out techniques for eliciting, analyzing, and assessing requirements. The approach can offer insights into your organizational needs, but there are some flaws to consider. Before I get to those, let’s look at the process in more detail.

Traditional Evaluation Process

1. Gather Requirements

This is easily the most difficult and time-consuming step. I won’t go into elicitation methods in detail, but you’ll spend time looking at existing processes, and interviewing, surveying, observing, and workshopping with key stakeholders. In the end, you’ll have a structured list of requirements that the CMS must meet. Categories of requirements might include product maturity, ease of use, permissions & workflow, content editing, versioning, reporting, application integration, or security.

2. Prioritize Requirements

Not all requirements are equally important. Some are musts for the system to work at all, while others are more like ‘nice to have’ ideas. The MoSCoW ranking method allows you to assign a priority to each requirement.

  • Must Have: critical for success in current phase
  • Should Have: important but not necessary in current phase
  • Could Have: desirable but not necessary
  • Won’t Have: least-critical, lowest-payback items – won’t be included at this time

Depending on your situation, you may do this prioritization alone, with a small core team, or via wide stakeholder consultation.

3. Develop a Weighted Scoring Method

This is simply assigning points to each of the prioritization levels. For example:

  • Must Have = 20 points
  • Should Have = 5 points
  • Could Have = 2 points
  • Won’t Have = 1 point

4. Assign a Suitability Score

Each CMS will receive a percentage of the possible points, depending on how well it meets each requirement.

  • Excellent (E) = 100%
  • Good (G) = 75%
  • Fair (F) = 50%
  • Unacceptable (U) = 0%

5. Select the CMS with the Highest Score

Of course, all the above ends up in a giant spreadsheet that calculates point totals. Selecting a winner is a simple matter of picking the system with the most points.
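
To make the arithmetic concrete, here’s a minimal sketch of the calculation such a spreadsheet performs. Everything in it (the requirements, the point weights, the ratings, and the CMS names) is a hypothetical placeholder, not data from any real evaluation.

  # Minimal sketch of the spreadsheet arithmetic described above. The
  # requirements, weights, ratings, and CMS names are hypothetical
  # placeholders, not data from any real evaluation.

  WEIGHTS = {"Must": 20, "Should": 5, "Could": 2, "Wont": 1}
  SUITABILITY = {"E": 1.00, "G": 0.75, "F": 0.50, "U": 0.00}

  # Each requirement carries a MoSCoW priority.
  requirements = [
      ("Granular permissions", "Must"),
      ("In-context editing", "Should"),
      ("Scheduled publishing", "Could"),
  ]

  # Each candidate CMS gets a suitability rating against every requirement.
  ratings = {
      "CMS X": {"Granular permissions": "E", "In-context editing": "G", "Scheduled publishing": "F"},
      "CMS Y": {"Granular permissions": "G", "In-context editing": "E", "Scheduled publishing": "E"},
  }

  def total_score(cms_ratings):
      # Sum, over all requirements, priority weight times suitability percentage.
      return sum(WEIGHTS[priority] * SUITABILITY[cms_ratings[name]]
                 for name, priority in requirements)

  scores = {cms: total_score(r) for cms, r in ratings.items()}
  print(scores)                       # {'CMS X': 24.75, 'CMS Y': 22.0}
  print(max(scores, key=scores.get))  # the 'winner' by total points

A real spreadsheet adds category subtotals and formatting, but the underlying arithmetic is just this weight-times-suitability sum.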

Looks great! What’s the problem?

This is an intensive process, and the results are filled with percentages, calculations and numbers. It appears to be both rigorous and impartial. But looks can be deceiving.

Masking Subjective Judgement

The method feels impartial and objective because it gives you a single numerical result. But it hides the fact that the numbers were derived from a series of subjective judgements. What’s the difference between a Must Have and a Should Have? That’s a judgement call. What’s the difference between an Excellent rating and a Good rating? Also a judgement call. In fact, all the numbers in the scoring system are derived from judgement calls. This means that, while the results may be useful, they’re not necessarily definitive.

Another problem is that it’s tricky to get the weighted scoring right. How much more should a ‘Must Have’ be worth, compared to a ‘Should Have’? And if you’re thinking about a site evolution that will eventually include all your requirements, how do you express that in the scoring?

The result is a process that appears to be unassailably objective, but which is actually quite easy to game. Changing just a couple of suitability scores from Excellent to Good (which are difficult judgement calls anyway) could give you a different winner. This seems like a shaky foundation for such a big decision.
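
To see how little it takes, here’s a continuation of the same hypothetical scheme: two requirements are rescored from Excellent to Good, and the ranking reverses. Again, all numbers are made up for illustration.

  # Hypothetical illustration of how sensitive the totals are to individual
  # ratings: downgrading two "Must Have" ratings from Excellent to Good
  # is enough to flip the winner.

  WEIGHTS = {"Must": 20, "Should": 5}
  SUIT = {"E": 1.00, "G": 0.75}

  def total(ratings, priorities):
      return sum(WEIGHTS[p] * SUIT[r] for p, r in zip(priorities, ratings))

  priorities = ["Must", "Must", "Should", "Should"]
  cms_a = ["E", "E", "G", "G"]
  cms_b = ["G", "G", "E", "E"]
  print(total(cms_a, priorities), total(cms_b, priorities))  # 47.5 vs 40.0 -> A wins

  cms_a_rescored = ["G", "G", "G", "G"]  # two close judgement calls go the other way
  print(total(cms_a_rescored, priorities), total(cms_b, priorities))  # 37.5 vs 40.0 -> B wins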

Avoiding Uncertainty

Psychology has a concept called Uncertainty Avoidance, which describes our level of tolerance for ambiguity and the extent to which we cope with anxiety by minimizing uncertainty. Now, if there’s any environment that has uncertainty-based anxiety and intolerance for ambiguity, it’s corporate IT. We want clear, definitive answers and confirmation that we’re making the right decision. Picking our CMS based on an objective score appears to give us just that. But perhaps it’s too much appearance and not enough reality.

Solution: Expose Strengths and Weaknesses

If you’re using a method like this, the first step is to recognize that it is neither impartial nor definitive. It might give you valuable information, but it can’t make your choices for you. The final score doesn’t tell the entire story.

One thing the process may be good at is exposing the relative strengths and weaknesses of different systems. If you create requirement categories like those above, you can see at a glance what a system is good at and where it’s weakest. This can help you build lists of pros and cons and better understand the tradeoffs between systems, which in turn exposes where your strongest priorities lie. For example, you may discover that:

  • X is the best .NET option
  • Y is the best developer-focused option
  • Z is the best option for business users

Which of those drivers does your organization care most about? What is the downside to your choice? Do your developers need to learn new tools? Do you need to plan for additional content editor training? Are you willing to use a less robust tool to ensure ease of use for non-technical staff? Exposing and answering questions like these can make a huge difference.
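
If the scoring data already exists, a small variation on the earlier sketch can produce that category-level view instead of a single total. The categories, systems, and ratings below are hypothetical.

  # Rough sketch: reuse the ratings to surface per-category strengths and
  # weaknesses rather than one overall score. All data here is hypothetical.
  from collections import defaultdict
  from statistics import mean

  SUIT = {"E": 1.00, "G": 0.75, "F": 0.50, "U": 0.00}

  # (category, requirement, {CMS: rating})
  rows = [
      ("Permissions & workflow", "Granular roles",      {"X": "E", "Y": "F"}),
      ("Permissions & workflow", "Approval steps",      {"X": "G", "Y": "F"}),
      ("Content editing",        "In-context editing",  {"X": "F", "Y": "E"}),
      ("Content editing",        "Reusable components", {"X": "G", "Y": "E"}),
  ]

  by_category = defaultdict(lambda: defaultdict(list))
  for category, _req, cms_ratings in rows:
      for cms, rating in cms_ratings.items():
          by_category[category][cms].append(SUIT[rating])

  for category, per_cms in by_category.items():
      averages = {cms: round(mean(vals), 2) for cms, vals in per_cms.items()}
      print(category, averages)
  # Permissions & workflow {'X': 0.88, 'Y': 0.5}  -> X is stronger here
  # Content editing {'X': 0.62, 'Y': 1.0}         -> Y is stronger here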

Postscript: Selecting Technology

Soon after finishing this post, Rosenfeld Media released a new book: “The Right Way to Select Technology” by Tony Byrne and Jarrod Gingras. It includes some of the same critiques of the weighted-requirements method that I’ve made here. They complain that this approach “…assumes that you can capture all your requirements up front in one big, abstract, analytical effort and then make a decision based on mapping vendor features to your list” (Introduction, p. xvii). They further argue that the approach results in:

  • Inadequate testing and adaptation
  • Inability to course-correct based on learning
  • Overanalysis and underexperimentation
  • Less control over schedules and outcomes
  • Emphasis on “big bang” decision-making

If, instead of tweaking a traditional process, you’d like to try something completely new, this book offers a great blueprint. It covers everything from business case to negotiating with vendors. Highly recommended.
