APP#

class mlquantify.model_selection.APP(batch_size, n_prevalences, repeats=1, random_state=None, min_prev=0.0, max_prev=1.0)[source]#

Artificial Prevalence Protocol (APP) for exhaustive prevalence-based batch evaluation.

Generates evaluation batches with artificially imposed class prevalences, covering all prevalence combinations on a grid within the specified bounds. This allows comprehensive evaluation over a wide range of prevalence scenarios.
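The sketch below illustrates the underlying idea for a binary problem: to impose an artificial prevalence on a batch, indices are drawn per class so that the positive class makes up the desired fraction. This is only a conceptual illustration, not mlquantify's actual sampling code; the random generator, target prevalence, and batch size are arbitrary choices for the example.

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> y = rng.integers(0, 2, size=1000)            # toy binary labels
>>> target_prev, batch = 0.3, 100                # desired positive prevalence and batch size
>>> n_pos = int(round(target_prev * batch))
>>> pos = rng.choice(np.where(y == 1)[0], n_pos, replace=True)
>>> neg = rng.choice(np.where(y == 0)[0], batch - n_pos, replace=True)
>>> idx = np.concatenate([pos, neg])
>>> float(y[idx].mean())                         # the batch prevalence matches the target
0.3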

Parameters:
batch_size : int or list of int

Size(s) of the evaluation batches.

n_prevalences : int

Number of artificial prevalence levels to sample per class dimension.

repeats : int, optional (default=1)

Number of repetitions for each prevalence sampling.

random_state : int, optional

Random seed for reproducibility.

min_prev : float, optional (default=0.0)

Minimum possible prevalence for any class.

max_prev : float, optional (default=1.0)

Maximum possible prevalence for any class.

Notes

For multiclass problems, this protocol may have high computational complexity due to combinatorial explosion in prevalence combinations.
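As a rough illustration of this growth (not mlquantify's exact enumeration), counting the points of an evenly spaced prevalence grid that lie on the probability simplex shows how quickly the number of valid combinations expands with the number of classes:

>>> from itertools import product
>>> import numpy as np
>>> def count_valid(n_prevalences, n_classes):
...     grid = np.linspace(0.0, 1.0, n_prevalences)
...     return sum(1 for combo in product(grid, repeat=n_classes)
...                if abs(sum(combo) - 1.0) < 1e-9)
>>> count_valid(5, 2), count_valid(5, 3), count_valid(5, 5)
(5, 15, 70)

Each repetition multiplies this count again, so the total number of evaluation batches can become very large for fine grids and many classes.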

Examples

>>> import numpy as np
>>> X, y = np.random.rand(1000, 4), np.random.randint(0, 2, 1000)
>>> protocol = APP(batch_size=[100, 200], n_prevalences=5, repeats=3, random_state=42)
>>> for idx in protocol.split(X, y):
...     # Train and evaluate on the batch selected by idx
...     pass
get_metadata_routing()[source]#

Get metadata routing of this object.

Please check the User Guide on how the routing mechanism works.

Returns:
routing : MetadataRequest

A MetadataRequest encapsulating routing information.

get_n_combinations()[source]#

Get the number of combinations for the current protocol.
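A short usage sketch, assuming the returned count reflects the configured grid and repetitions (the exact value depends on the protocol's internal enumeration, so it is not shown here):

>>> protocol = APP(batch_size=100, n_prevalences=5, repeats=3, random_state=42)
>>> n_batches = protocol.get_n_combinations()   # inspect the evaluation size before calling split()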

get_params(deep=True)[source]#

Get parameters for this estimator.

Parameters:
deep : bool, default=True

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : dict

Parameter names mapped to their values.
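A minimal usage sketch, assuming the scikit-learn convention that constructor arguments are returned unchanged:

>>> protocol = APP(batch_size=100, n_prevalences=5)
>>> protocol.get_params()['n_prevalences']
5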

save_quantifier(path: str | None = None) None[source]#

Save the quantifier instance to a file.
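A hedged usage sketch, assuming path accepts a file-system path and that the object has a quantifier to persist; the file name below is hypothetical:

>>> protocol = APP(batch_size=100, n_prevalences=5)
>>> protocol.save_quantifier("app_protocol.pkl")  # hypothetical path; returns None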

set_params(**params)[source]#

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Parameters:
**params : dict

Estimator parameters.

Returns:
self : estimator instance

Estimator instance.
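A short sketch following scikit-learn's estimator conventions, where set_params returns the updated instance:

>>> protocol = APP(batch_size=100, n_prevalences=5)
>>> protocol = protocol.set_params(n_prevalences=11, repeats=2)
>>> protocol.get_params()['n_prevalences']
11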

split(X: ndarray, y: ndarray)[source]#

Split the data into samples for evaluation.

Parameters:
X : np.ndarray

The input features.

y : np.ndarray

The target labels.

Yields:
indices : np.ndarray

The indices of the samples in each generated evaluation batch.
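A usage sketch with synthetic data, showing how the yielded indices select an evaluation batch (the data and sizes below are arbitrary):

>>> import numpy as np
>>> rng = np.random.default_rng(42)
>>> X = rng.normal(size=(500, 4))
>>> y = rng.integers(0, 2, size=500)
>>> protocol = APP(batch_size=100, n_prevalences=5, random_state=42)
>>> for idx in protocol.split(X, y):
...     X_batch, y_batch = X[idx], y[idx]    # evaluate a fitted quantifier on this batch
...     break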