DySsyn#
- class mlquantify.methods.aggregative.DySsyn(learner: BaseEstimator | None = None, measure: str = 'topsoe', merge_factor: ndarray | None = None, bins_size: ndarray | None = None, alpha_train: float = 0.5, n: int | None = None)[source]#
Synthetic Distribution y-Similarity (DySsyn).
This method works similarly to the DyS method, but instead of using the train scores, it generates them via MoSS (Model for Synthetic Scores). MoSS creates a spectrum of score distributions ranging from highly separated to fully mixed scores.
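The idea behind MoSS can be illustrated with a minimal sketch. This is not mlquantify's implementation: the exact parametric form it uses may differ, and the function name `moss_scores` is hypothetical. The sketch assumes scores are drawn as powers of uniform variates, with the merge factor m controlling separation (m close to 0 gives well-separated classes, m = 1 gives fully mixed, uniform scores):

```python
import numpy as np

def moss_scores(n, alpha, m, rng=None):
    """Hypothetical MoSS-style synthetic score generator.

    n     : total number of synthetic scores
    alpha : assumed positive-class prevalence
    m     : merge factor; small m -> separated, m = 1 -> fully mixed
    """
    rng = np.random.default_rng() if rng is None else rng
    n_pos = int(round(n * alpha))
    # u**m is pushed toward 1 when m < 1, so positives score high...
    pos = rng.uniform(size=n_pos) ** m
    # ...and the mirrored transform pushes negatives toward 0.
    neg = 1.0 - rng.uniform(size=n - n_pos) ** m
    return pos, neg
```

Sweeping m over a grid (the role of merge_factor) yields the spectrum of distributions, from nearly disjoint score histograms to completely overlapping ones.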
- Parameters:
- learner : BaseEstimator
A probabilistic classifier implementing the predict_proba method.
- measure : str, optional
The metric used to compare distributions. Options are "hellinger", "topsoe", and "probsymm". Default is "topsoe".
- merge_factor : np.ndarray, optional
Array controlling the mixing level of synthetic distributions. Default is np.linspace(0.1, 0.4, 10).
- bins_size : np.ndarray, optional
Array of bin sizes for histogram computation. Default is np.append(np.linspace(2, 20, 10), 30).
- alpha_train : float, optional
Initial estimate of the training prevalence. Default is 0.5.
- n : int, optional
Number of synthetic samples generated. Default is None.
- Attributes:
- bins_size : np.ndarray
Bin sizes used for histogram calculations.
- merge_factor : np.ndarray
Mixing factors for generating synthetic score distributions.
- alpha_train : float
True training prevalence.
- n : int
Number of samples generated during synthetic distribution creation.
- measure : str
Selected distance metric.
- m : None or float
Best mixing factor determined during computation.
References
MALETZKE, André et al. Accurately quantifying under score variability. In: 2021 IEEE International Conference on Data Mining (ICDM). IEEE, 2021. p. 1228-1233. Available at https://ieeexplore.ieee.org/abstract/document/9679104
Examples
>>> from mlquantify.methods.mixture_models import DySsyn
>>> from mlquantify.utils.general import get_real_prev
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.model_selection import train_test_split
>>>
>>> features, target = load_breast_cancer(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.2, random_state=42)
>>>
>>> dyssyn = DySsyn(RandomForestClassifier())
>>> dyssyn.fit(X_train, y_train)
>>> prevalence = dyssyn.predict(X_test)
>>> prevalence
{0: 0.3606413872681201, 1: 0.6393586127318799}
>>> get_real_prev(y_test)
{0: 0.37719298245614036, 1: 0.6228070175438597}
- GetMinDistancesDySsyn(test_scores: ndarray) list [source]#
Calculates the minimum distances between test scores and synthetic distributions of MoSS across various bin sizes and merge factors.
- Parameters:
- test_scores : np.ndarray
Array of predicted probabilities for the test data.
- Returns:
- values : dict
Dictionary mapping each merge factor (m) to a tuple containing the minimum distance value and the corresponding prevalence estimate.
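The inner search performed for each merge factor can be sketched as follows. This is a simplified illustration, not the library's code: the helper names are hypothetical, a plain grid search stands in for whatever optimizer mlquantify uses, and a euclidean distance stands in for the configurable measure. The core idea is to find the prevalence α whose mixture of class histograms best matches the test histogram:

```python
import numpy as np

def mixture_distance(alpha, hist_pos, hist_neg, hist_test):
    # Distance between the alpha-weighted mixture of the class score
    # histograms and the observed test histogram (euclidean here,
    # purely for illustration).
    mix = alpha * hist_pos + (1.0 - alpha) * hist_neg
    return np.linalg.norm(mix - hist_test)

def best_alpha(hist_pos, hist_neg, hist_test, grid=np.linspace(0, 1, 101)):
    # Grid search over candidate prevalences; returns the minimizer
    # and the minimum distance, mirroring the (distance, prevalence)
    # pairs this method collects per merge factor.
    distances = [mixture_distance(a, hist_pos, hist_neg, hist_test) for a in grid]
    i = int(np.argmin(distances))
    return grid[i], distances[i]
```

Repeating this for every (bin size, merge factor) combination and keeping the minimum per merge factor produces the dictionary described above.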
- best_distance(X_test)[source]#
Computes the minimum distance between test scores and synthetic distributions of MoSS.
- Parameters:
- X_test : array-like of shape (n_samples, n_features)
Test data.
- Returns:
- distance : float
Minimum distance value for the test data.
- delayed_fit(class_, X, y)[source]#
Delayed fit method for one-vs-all strategy, with parallel execution.
- Parameters:
- class_ : Any
The class for which the model is being fitted.
- X : array-like
Training features.
- y : array-like
Training labels.
- Returns:
- self : object
Fitted binary quantifier for the given class.
- delayed_predict(class_, X)[source]#
Delayed predict method for one-vs-all strategy, with parallel execution.
- Parameters:
- class_ : Any
The class for which the model is making predictions.
- X : array-like
Test features.
- Returns:
- float
Predicted prevalence for the given class.
- fit(X, y, learner_fitted=False, cv_folds: int = 10, n_jobs: int = 1)[source]#
Fit the quantifier model.
- Parameters:
- X : array-like
Training features.
- y : array-like
Training labels.
- learner_fitted : bool, default=False
Whether the learner is already fitted.
- cv_folds : int, default=10
Number of cross-validation folds.
- n_jobs : int, default=1
Number of parallel jobs to run.
- Returns:
- self : object
The fitted quantifier instance.
Notes
The model dynamically determines whether to perform one-vs-all classification or to fit the data directly, based on the type of the problem:
- If the data is binary or inherently multiclass, the model fits directly using _fit_method without creating binary quantifiers.
- For other cases, the model creates one binary quantifier per class using the one-vs-all approach, allowing for dynamic prediction based on the provided dataset.
- fit_learner(X, y)[source]#
Fit the learner to the training data.
- Parameters:
- X : array-like
Training features.
- y : array-like
Training labels.
- get_distance(dist_train, dist_test, measure: str) float [source]#
Computes the distance between training and test distributions using a specified metric.
- Parameters:
- dist_train : np.ndarray
Distribution of scores for the training data.
- dist_test : np.ndarray
Distribution of scores for the test data.
- measure : str
The metric to use for distance calculation. Supported values are ‘topsoe’, ‘probsymm’, ‘hellinger’, and ‘euclidean’.
- Returns:
- float
The computed distance between the two distributions.
- Raises:
- ValueError
If the input distributions have mismatched sizes or are zero vectors.
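For orientation, the supported measures correspond to standard histogram distances. The definitions below are the common textbook forms; mlquantify's implementation may differ in constant scaling factors or in how it guards against empty bins (the `eps` term here is an illustrative safeguard, not necessarily the library's):

```python
import numpy as np

def topsoe(p, q, eps=1e-20):
    # Topsoe distance: a symmetrized Kullback-Leibler-style divergence.
    return np.sum(p * np.log((2 * p + eps) / (p + q + eps))
                  + q * np.log((2 * q + eps) / (p + q + eps)))

def hellinger(p, q):
    # Hellinger distance (up to a constant scaling factor).
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def probsymm(p, q, eps=1e-20):
    # Probabilistic symmetric chi-square distance.
    return 2.0 * np.sum((p - q) ** 2 / (p + q + eps))
```

All three vanish when the two distributions coincide and grow as the histograms diverge, which is what the mixture search exploits.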
- get_metadata_routing()[source]#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
- routing : MetadataRequest
A MetadataRequest encapsulating routing information.
- get_params(deep=True)[source]#
Get parameters for this estimator.
- Parameters:
- deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
- params : dict
Parameter names mapped to their values.
- property is_multiclass: bool[source]#
Indicates whether the model supports multiclass classification.
- Returns:
- bool
Always returns False, as MixtureModel supports only binary classification.
- property is_probabilistic: bool[source]#
Check if the learner is probabilistic or not.
- Returns:
- bool
True if the learner is probabilistic, False otherwise.
- property learner[source]#
Returns the learner_ object.
- Returns:
- learner_ : object
The learner_ object stored in the class instance.
- predict(X) dict [source]#
Predict class prevalences for the given data.
- Parameters:
- X : array-like
Test features.
- Returns:
- dict
A dictionary where keys are class labels and values are their predicted prevalences.
Notes
The prediction approach is dynamically chosen based on the data type:
- For binary or inherently multiclass data, the model uses _predict_method to directly estimate class prevalences.
- For other cases, the model performs one-vs-all prediction, where each binary quantifier estimates the prevalence of its respective class. The results are then normalized to ensure they form valid proportions.
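The normalization step in the one-vs-all case can be sketched with hypothetical raw estimates. Each binary quantifier runs independently, so the per-class values need not sum to 1; dividing by their total restores valid proportions:

```python
# Hypothetical raw one-vs-all estimates for three classes.
raw = {0: 0.30, 1: 0.45, 2: 0.45}

# Normalize so the estimates form a valid prevalence distribution.
total = sum(raw.values())
prevalences = {c: v / total for c, v in raw.items()}
```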
- predict_learner(X)[source]#
Predict the class labels or probabilities for the given data.
- Parameters:
- X : array-like
Test features.
- Returns:
- array-like
The predicted class labels or probabilities.
- set_fit_request(*, cv_folds: bool | None | str = '$UNCHANGED$', learner_fitted: bool | None | str = '$UNCHANGED$', n_jobs: bool | None | str = '$UNCHANGED$') DySsyn [source]#
Request metadata passed to the fit method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to fit if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to fit.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
Added in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
- cv_folds : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the cv_folds parameter in fit.
- learner_fitted : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the learner_fitted parameter in fit.
- n_jobs : str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED
Metadata routing for the n_jobs parameter in fit.
- Returns:
- self : object
The updated object.
- set_params(**params)[source]#
Set the parameters of this estimator.
The method allows setting parameters for both the model and the learner. Parameters that match the model's attributes are set directly on the model; parameters prefixed with 'learner__' are set on the learner if it exists.
- Parameters:
- **params : dict
Dictionary of parameters to set. Keys can be model attribute names or 'learner__'-prefixed names for learner parameters.
- Returns:
- self : Quantifier
The quantifier instance with updated parameters.