Confusion Analysis

Background

 

Multi-class classifiers often compute scores that indicate how likely each sample is to belong to each class. This information is valuable for understanding the behavior of such classifiers, but it is hard to depict in a confusion matrix. We propose an alternative visual metaphor to analyze these data. Find out more in our paper (PDF) presented at IEEE VAST 2014 (slides - PPT).
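
To make this concrete, the following minimal sketch (assuming Python with scikit-learn; its small bundled digits dataset merely stands in for any multi-class problem) contrasts the per-class probability scores a classifier produces with the confusion matrix that is usually used to summarize them:

    # Minimal sketch, assuming Python with scikit-learn.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import confusion_matrix

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = GaussianNB().fit(X_train, y_train)

    # One probability per class for every test sample: rich information ...
    proba = clf.predict_proba(X_test)            # shape (n_samples, 10)

    # ... that the confusion matrix collapses into a single count per cell.
    cm = confusion_matrix(y_test, proba.argmax(axis=1))
    print(cm)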

 

The Visual Metaphor of Confusion Wheel

Fig. 1. Visualizing the classification results of 10,992 handwritten digits (a) using a confusion matrix augmented with histograms of sample probabilities in the respective rows and columns, and (b) using the confusion wheel: sectors represent the digits, with chords showing the classification confusion between them; histograms represent the probabilities of the samples in each class according to the color legend.
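
The per-class histograms in Fig. 1 can be reconstructed from any classifier's probability output. Here is a small sketch (assuming NumPy; the arrays proba, y_true, and y_pred are placeholders for a classifier's probability matrix, the true labels, and the predicted labels):

    # Sketch, assuming NumPy: bin the probability each sample received for
    # its true class, split by whether the sample was classified correctly.
    import numpy as np

    def class_probability_histograms(proba, y_true, y_pred, cls, bins=10):
        # Samples whose true label is `cls`.
        mask = y_true == cls
        p = proba[mask, cls]                     # probability assigned to cls
        correct = y_pred[mask] == cls
        edges = np.linspace(0.0, 1.0, bins + 1)
        hist_correct, _ = np.histogram(p[correct], bins=edges)
        hist_wrong, _ = np.histogram(p[~correct], bins=edges)
        return hist_correct, hist_wrong

    # Tiny synthetic usage: 3 classes, 100 samples with random scores.
    rng = np.random.default_rng(0)
    proba = rng.dirichlet(np.ones(3), size=100)
    y_true = rng.integers(0, 3, size=100)
    y_pred = proba.argmax(axis=1)
    print(class_probability_histograms(proba, y_true, y_pred, cls=0))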

 

Videos

 
The following preview video illustrates multi-class classification:

The following 5-minute video illustrates the proposed visual metaphors and views, and how they can be used together to analyze probabilistic classification data. It also highlights some of the supported interactions that allow selecting certain samples based on their classification results:

Screenshots

Here you will find some examples based on publicly available datasets:

  • Pen-Based Handwritten Digits: the UCI Dataset on Pen-Based Handwritten Digits encompasses 10,992 handwritten digits classified into 10 classes representing the Arabic numerals.
    • Classification results using naive Bayesian classifier: this figure shows the classification results of the dataset using a naive Bayesian classifier.
    • Classification results using k-NN: this figure shows the classification results of the dataset using a k-NN classifier with k = 5.
    • Comparison between three classifiers: the following figures show histograms of the samples' probabilities of representing the digit "5", computed using different classifiers and colored by their classification results (a minimal code sketch of such a comparison appears after this list).

      Figure: Comparison between three classifiers

  • Patent Images: this dataset was made available during the CLEF-IP 2011 classification evaluation campaign.
    The data comprise 1000 patent images classified into 9 classes (character, chemical structures, drawings, flow charts, gene sequence, graphs, math, program listing, and table).
    Ten teams participated in the challenge, submitting the following runs: XEROX-SAS, XEROX-SAS_mean_all, RUNORH, RUNORH_ROTRAN, ALPHACENTAURI, ARCTURUS, BETELGEUSE, CANOPUS, RIGEL, SIRIUS, VEGA, and PROCYON.
    Each team submitted probabilistic classifications of the patent images using its own classifiers. More details about the challenge can be found in this article.
    In the following, we show the classification results of two of the participating teams, visualized using the confusion wheel.
  • Chess moves: this KEEL dataset collects data about 28057 chess endgames.
    Each endgame is classified into one of 17 classes representing the number of turns required for White to win the game, or a draw if winning takes more than sixteen turns.
  • Latin Letters: this UCI dataset encompasses 20,000 letter images created by randomly distorting images of the letters in 20 different fonts.
    Each letter is classified into one of 26 classes representing the Latin alphabet.
  • Abalone data set: this UCI dataset collects physical measurements of 4177 abalone.
    Each abalone is classified into one of 28 classes representing the number of rings seen through a microscope.
    As stated in the UCI dataset description, "the number of rings is the value to predict: either as a continuous value or as a classification problem".
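
As referenced above for the Pen-Based digits, a comparison between classifiers can be set up by training each one on the same data and extracting the per-class probability histograms. A minimal sketch follows (assuming Python with scikit-learn; its bundled digits dataset stands in here for the UCI Pen-Based dataset, which must be downloaded separately):

    # Sketch, assuming Python with scikit-learn: compare the probabilities two
    # classifiers assign to the digit "5" on samples that truly are a "5".
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    classifiers = {
        "naive Bayes": GaussianNB(),
        "k-NN (k = 5)": KNeighborsClassifier(n_neighbors=5),
    }

    digit = 5
    edges = np.linspace(0.0, 1.0, 11)
    for name, clf in classifiers.items():
        proba = clf.fit(X_train, y_train).predict_proba(X_test)
        p5 = proba[y_test == digit, digit]       # P(class = 5) for true "5"s
        hist, _ = np.histogram(p5, bins=edges)
        print(name, hist)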

Questions or Comments?

If you have questions or comments, or would like to visualize your own classification data, I would be glad to hear from you. You can reach me through my Gmail id: "bilalsal".