4. Model Selection and Evaluation

  • 4.1. Protocols for Quantification
    • 4.1.1. Artificial-Prevalence Protocol (APP)
    • 4.1.2. Natural-Prevalence Protocol (NPP)
    • 4.1.3. Uniform Prevalence Protocol (UPP)
    • 4.1.4. Personalized Prevalence Protocol (PPP)
  • 4.2. Tuning Hyperparameters
  • 4.3. References
  • 4.4. Evaluation Metrics
    • 4.4.1. Single Label Quantification (SLQ) Metrics
    • 4.4.2. Regression-Based Quantification (RQ) Metrics
    • 4.4.3. Ordinal Quantification (OQ) Metrics
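To give a feel for what the Artificial-Prevalence Protocol (4.1.1) evaluates, the sketch below resamples a test set at a grid of controlled class prevalences and measures the absolute error of a simple Classify & Count estimate at each point. This is a conceptual illustration only, not mlquantify's API; the helper `sample_at_prevalence` and all parameter choices are assumptions made for the example.

```python
# Conceptual sketch of the Artificial-Prevalence Protocol (APP).
# NOTE: this does not use mlquantify's API; the quantifier here is a
# hand-rolled Classify & Count, and sample_at_prevalence is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Binary data: train a plain classifier once on the training split.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

def sample_at_prevalence(X, y, prev, size, rng):
    """Draw a sample whose positive-class prevalence is approximately `prev`."""
    n_pos = int(round(prev * size))
    pos = rng.choice(np.where(y == 1)[0], n_pos, replace=True)
    neg = rng.choice(np.where(y == 0)[0], size - n_pos, replace=True)
    idx = np.concatenate([pos, neg])
    return X[idx], y[idx]

errors = []
for prev in np.linspace(0.0, 1.0, 21):           # APP grid of target prevalences
    X_s, y_s = sample_at_prevalence(X_te, y_te, prev, size=500, rng=rng)
    estimate = clf.predict(X_s).mean()           # Classify & Count estimate
    errors.append(abs(estimate - y_s.mean()))    # absolute error on this sample

print(f"Mean Absolute Error over the APP grid: {np.mean(errors):.4f}")
```

The same loop structure underlies the other protocols in 4.1; they differ mainly in how the test prevalences are drawn (natural, uniform, or user-specified) rather than in how the per-sample error is computed.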
