What is a good classification accuracy in machine learning?

By Data Tricks, 1 June 2020

One of the most common questions I’m asked when it comes to classification problems in machine learning is: what is a good classification accuracy?

And the answer is, unfortunately, in the form of another question: what are you trying to measure? A “good” classification accuracy will largely depend on what you’re trying to predict and what those predictions are going to be used for. Indeed, accuracy might not even be the best statistic to use at all.

How to measure classifier performance

First, it is common to create a confusion matrix, which looks like the following:

                     Predicted positive    Predicted negative
Actual positive      True positive         False negative
Actual negative      False positive        True negative
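
If you’re working in Python, scikit-learn can build this matrix for you. Here’s a minimal sketch, assuming binary labels where 1 means positive and 0 means negative (the data below is purely illustrative):

```python
from sklearn.metrics import confusion_matrix

# Illustrative labels only: 1 = positive, 0 = negative
y_actual    = [1, 1, 1, 0, 0, 0, 1, 0]
y_predicted = [1, 0, 1, 0, 1, 0, 1, 0]

# labels=[1, 0] orders the matrix as in the table above:
# rows = actual (positive, negative), columns = predicted (positive, negative)
cm = confusion_matrix(y_actual, y_predicted, labels=[1, 0])
tp, fn, fp, tn = cm.ravel()
print(cm)                                    # [[3 1]
                                             #  [1 3]]
print(f"TP={tp}, FN={fn}, FP={fp}, TN={tn}")
```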

Using this confusion matrix you can calculate a range of measures (scroll to the bottom of this article for a tool to calculate these):

Accuracy

The overall proportion of correct classifications.

Precision

Proportion of predicted positives that were correct.

Sensitivity

Proportion of actual positives that were predicted correctly (sometimes called recall).

Specificity

Proportion of actual negatives that were predicted correctly.

F-score

Sometimes called the F1 score, this is the harmonic mean of precision and sensitivity, providing a single balanced measure of the two.
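
To make these definitions concrete, here’s a small sketch of the formulas behind each measure, computed directly from the four confusion-matrix counts (the counts in the example are purely illustrative, and in real code you may want to guard against zero denominators):

```python
def classification_measures(tp, fn, fp, tn):
    """Compute the measures above from raw confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fn + fp + tn)
    precision   = tp / (tp + fp)    # predicted positives that were correct
    sensitivity = tp / (tp + fn)    # actual positives predicted correctly (recall)
    specificity = tn / (tn + fp)    # actual negatives predicted correctly
    f_score     = 2 * precision * sensitivity / (precision + sensitivity)  # harmonic mean
    return accuracy, precision, sensitivity, specificity, f_score

# Illustrative counts: 80 TP, 20 FN, 10 FP, 90 TN
print(classification_measures(80, 20, 10, 90))
```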

Examples

Now let’s consider two scenarios:

Scenario A: you’re training a machine learning algorithm to be used for facial recognition on a social media platform.

Scenario B: you’re training a machine learning algorithm to determine the immediate risk posed to vulnerable people.

Let’s say you achieved a classification accuracy of 80% in both scenarios. In Scenario A your algorithm tagged lots of photos correctly but misclassified 1 in 5 photos, leading to a minor inconvenience for some users. In Scenario B, however, if you misclassified 1 in 5 vulnerable people as not at risk, then each of those people may be in imminent danger yet go ignored – the stakes are much higher.

In Scenario B it might be better to maximise sensitivity rather than accuracy. Put another way, you might want to get the number of false negatives (people who you predicted were not at risk, but actually were) as close to zero as possible. Of course, this will likely come at the expense of overall accuracy, but you can probably live with your model producing more false positives, or ‘false alarms’, rather than false negatives.
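
One common way to make that trade-off is to lower the decision threshold of a probabilistic classifier, flagging anyone above even a modest predicted risk. A rough sketch, assuming a scikit-learn style model with a predict_proba method (the 0.2 threshold is purely illustrative and would need tuning):

```python
def predict_at_threshold(model, X, threshold=0.2):
    """Flag a case as positive whenever its predicted risk exceeds the threshold.

    Lowering the threshold below the usual 0.5 catches more actual positives
    (fewer false negatives), at the cost of more false alarms.
    """
    risk = model.predict_proba(X)[:, 1]   # probability of the positive class
    return (risk >= threshold).astype(int)
```

You can then recompute sensitivity and accuracy at several thresholds and pick the point your application can tolerate.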

Classifier performance calculator

Calculate accuracy, precision, sensitivity, specificity and F-score
