Wednesday, February 16, 2011

Decision Support: Multi-Variable Ranking Systems

In the most recent issue of The New Yorker, Malcolm Gladwell takes on US News & World Report’s annual “Best Colleges” guide in The Order of Things: What college rankings really tell us (http://www.newyorker.com/reporting/2011/02/14/110214fa_fact_gladwell) (sorry, subscription required for the full-text article).  He dissects the US News college ranking system alongside other examples of ranking systems: Car and Driver’s ranking of three sports cars, a ranking of suicide rates by country, a ranking of law schools, and a ranking of geographical regions by level of civilization.  He makes the point that these systems are highly arbitrary and are best understood by inspecting their categories and weights against the needs of the various audiences that will make decisions based on the information provided.  Most ranking systems reflect the prejudices of the people doing the ranking and only rarely serve the needs of the people seeking advice.
I have also used multi-variable ranking systems to assist decision making in the past.  These systems helped sort out alternative architectures, product portfolio investment decisions, or alternative corporate strategies, and as such often stimulated intense political positioning and lobbying before, during, and after decisions were made. My experience has been that driving good decisions requires a number of best practices that have their origin in the politics of shared responsibility in corporate organizations.  Here they are, in no particular order:
  1. Score criteria that are naturally quantitative quantitatively; for other criteria, use a qualitative ranking. For example, project cost or profitability should be scored in dollars, while strategic alignment can be High, Medium, or Low.
  2. Get political adversaries to participate in structuring the criteria and their weights; be sure to include the top two or three things each of them cares most about in the system.
  3. Be prepared to explain everything: the structure of the system and the reasoning behind the scores.  To the extent that a score is controversial, it should have its origin and ownership in the organizations that bear the greatest responsibility for execution after the decision is made.
  4. Find several different points of view to cross-check the results of the ranking system.  If you choose certain proxies for cost, quality, flexibility, etc., cross-check with other proxies.
  5. Run sensitivities on the number of variables and the weights chosen to see how dependent the outcome is on the particular variable set and weights (see the sketch after this list).  When a sensitivity changes the outcome, study what you would have to believe to justify the variables and weights that drive the different outcome.
  6. Recognize that it is impossible to put every decision criterion into the system and that ultimately an executive will make a decision based on the system and other factors outside it.  Also recognize that an important function of the system is to get the participants to develop a common language and common models, not to drive a particular result.
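To make the sensitivity check in item 5 concrete, here is a minimal sketch of a weighted scoring model that shifts weight between criteria and reports whenever the top-ranked alternative changes. The alternatives, criteria, scores, and weights are invented for illustration, not taken from any real evaluation.

```python
# Hypothetical weighted scoring model with a simple weight-sensitivity check.
# All alternatives, criteria, scores, and weights are invented for illustration.

alternatives = {
    "Architecture A": {"cost": 8, "flexibility": 5, "strategic_alignment": 9},
    "Architecture B": {"cost": 6, "flexibility": 9, "strategic_alignment": 7},
    "Architecture C": {"cost": 9, "flexibility": 4, "strategic_alignment": 5},
}

base_weights = {"cost": 0.5, "flexibility": 0.3, "strategic_alignment": 0.2}

def rank(weights):
    """Return (alternative, weighted score) pairs, best first."""
    totals = {
        name: sum(weights[c] * scores[c] for c in weights)
        for name, scores in alternatives.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

baseline = rank(base_weights)
print("Baseline ranking:", [name for name, _ in baseline])

# Sensitivity: move 0.1 of weight from one criterion to another and
# report whenever the top-ranked alternative changes.
for source in base_weights:
    for target in base_weights:
        if source == target:
            continue
        shifted = dict(base_weights)
        shifted[source] -= 0.1
        shifted[target] += 0.1
        winner = rank(shifted)[0][0]
        if winner != baseline[0][0]:
            print(f"Top choice flips to {winner} "
                  f"when weight shifts from {source} to {target}")
```

If a small shift in weights flips the winner, that is the signal to ask what you would have to believe about the criteria for the alternative outcome to be the right one.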
What are your experiences with multi-variable ranking systems in support of decision making?  How do you react to Gladwell’s discussion of the limitations of these systems for making difficult apples-to-oranges comparisons?
