The area under the ROC curve (AUC) is equal to the probability that a
classifier will rank a randomly chosen positive instance higher than a
randomly chosen negative instance. It measures the classifier's skill in
ranking a set of patterns according to the degree to which they belong
to the positive class, but without actually assigning patterns to
classes.
Overall accuracy also depends on the ability of the classifier to
rank patterns, but additionally on its ability to select a threshold in
the ranking: patterns above the threshold are assigned to the positive
class, and those below to the negative class.
Thus the classifier with the higher AUROC statistic (all things being
equal) is likely to also have a higher overall accuracy, as good ranking
of patterns (which AUROC measures) benefits both AUROC and
overall accuracy. However, if a classifier ranks patterns well but
selects the threshold badly, it can have a high AUROC and yet a poor
overall accuracy.
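A small illustration of that last point, using made-up scores: the classifier below ranks perfectly (AUC = 1.0), but because all of its scores sit above a naive 0.5 cut-off, thresholding at 0.5 predicts everything positive and accuracy drops to chance. A better-chosen threshold recovers perfect accuracy:

```python
import numpy as np

# Hypothetical scores: every positive outscores every negative,
# but all scores lie above the naive 0.5 threshold.
y_true = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.55, 0.60, 0.65, 0.80, 0.85, 0.90])

# Perfect ranking: every (positive, negative) pair is ordered correctly
pos, neg = scores[y_true == 1], scores[y_true == 0]
auc = (pos[:, None] > neg[None, :]).mean()                 # 1.0

# Accuracy at threshold 0.5: everything predicted positive
acc_bad = ((scores > 0.5).astype(int) == y_true).mean()    # 0.5

# Accuracy at a well-chosen threshold of 0.7: all correct
acc_good = ((scores > 0.7).astype(int) == y_true).mean()   # 1.0
print(auc, acc_bad, acc_good)
```

Note that AUC is unchanged by any monotonic transformation of the scores, which is exactly why it says nothing about where a good threshold lies.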