.. _evaluation:

Metrics and scoring: quantifying the performance of a quantifier
================================================================

.. currentmodule:: mlquantify.evaluation

The measures implemented in this module are described in the book *Learning to Quantify* and can be listed as follows:

.. list-table::
   :header-rows: 1

   * - Measure
     - Abbreviation
     - First proposed for quantification
     - Return type
   * - Absolute Error
     - ae
     - --
     - array-like
   * - Mean Absolute Error
     - mae
     - Saerens et al. (2002)
     - float
   * - Normalized Absolute Error
     - nae
     - Esuli and Sebastiani (2014)
     - float
   * - Relative Absolute Error
     - rae
     - González-Castro et al. (2010)
     - float
   * - Normalized Relative Absolute Error
     - nrae
     - Esuli and Sebastiani (2014)
     - float
   * - Squared Error
     - se
     - Bella et al. (2011)
     - float
   * - Mean Squared Error
     - mse
     - --
     - float
   * - Kullback-Leibler Divergence
     - kld
     - Forman (2005)
     - array-like
   * - Normalized Kullback-Leibler Divergence
     - nkld
     - Esuli and Sebastiani (2014)
     - float

.. note::

   When passing a measure to the ``scoring`` parameter of the
   :class:`~mlquantify.model_selection.GridSearchQ` or
   :class:`~mlquantify.evaluation.protocol.APP` classes, use its
   abbreviation, for example ``"mae"`` for
   :func:`~mlquantify.evaluation.measures.mean_absolute_error`.
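All of these measures are simple functions of two class-prevalence vectors: the true prevalences and those estimated by the quantifier. As an illustration of the distinction between the array-like and float return types in the table above, here is a minimal NumPy sketch of per-class absolute error and its mean; this is an assumption-laden re-implementation of the formulas, not the library's own code.

```python
import numpy as np

def absolute_error(prev_true, prev_pred):
    """Per-class absolute error |p_hat(y) - p(y)| (array-like)."""
    return np.abs(np.asarray(prev_pred, dtype=float) -
                  np.asarray(prev_true, dtype=float))

def mean_absolute_error(prev_true, prev_pred):
    """Mean of the per-class absolute errors (a single float)."""
    return float(absolute_error(prev_true, prev_pred).mean())

# True vs. estimated prevalences for a 3-class problem
p_true = [0.7, 0.2, 0.1]
p_pred = [0.6, 0.3, 0.1]
print(absolute_error(p_true, p_pred))       # one error per class
print(mean_absolute_error(p_true, p_pred))  # averaged over classes
```

The same pattern applies to the other measures: ``kld`` yields one term per class, while ``mae``, ``mse``, and the normalized variants reduce the per-class values to a single float.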