evaluation package
Submodules
evaluation.eval module
- evaluation.eval.evaluate_model(model: BaseModel, dataset: Dataset, metrics=('normalized_rmse', 'mape'))
Evaluate the performance of a model using specified metrics.
- Parameters:
model – The trained model to be evaluated.
dataset – The dataset to evaluate the model on.
metrics – List of metrics to calculate.
- Returns:
A dictionary containing the calculated metrics.
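A minimal sketch of what `evaluate_model` plausibly does: run the model's predictions over the dataset, then score them. The `ConstantModel` class and the explicit `X`/`y_true` arrays are hypothetical stand-ins for the package's `BaseModel` and `Dataset` types, and scoring only MAPE here is a simplification.

```python
import numpy as np

# Hypothetical stand-in for a trained BaseModel.
class ConstantModel:
    def predict(self, X):
        # Always predicts 2.0, regardless of input.
        return np.full(len(X), 2.0)

def evaluate_model(model, X, y_true):
    # Sketch: get predictions, then compute one metric (MAPE) on them.
    y_pred = model.predict(X)
    mape = float(np.mean(np.abs((y_true - y_pred) / y_true)))
    return {"mape": mape}

X = np.zeros((4, 1))
y = np.array([1.0, 2.0, 2.0, 4.0])
scores = evaluate_model(ConstantModel(), X, y)
# per-sample errors: 1.0, 0.0, 0.0, 0.5 -> mean 0.375
```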
- evaluation.eval.evaluate_predictions(y_true: ndarray, y_pred: ndarray, metrics=('normalized_rmse', 'mape'))
Evaluate predictions using specified metrics.
- Parameters:
y_true (numpy.ndarray) – True labels for evaluation.
y_pred (numpy.ndarray) – Predicted values.
metrics – List of metrics to calculate.
- Returns:
A dictionary containing the calculated metrics.
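A sketch of the `evaluate_predictions` contract under stated assumptions: each name in `metrics` is looked up in a registry of metric functions and the results are collected into a dictionary. The `_METRICS` registry and the metric implementations below are illustrative, not the package's actual code.

```python
import numpy as np

def _mape(y_true, y_pred):
    # Fraction, not percentage (see the mape entry below).
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

def _normalized_rmse(y_true, y_pred):
    # RMSE scaled by the range of y_true; range normalization is an assumption.
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(rmse / (y_true.max() - y_true.min()) * 100.0)

# Hypothetical name-to-function registry.
_METRICS = {"mape": _mape, "normalized_rmse": _normalized_rmse}

def evaluate_predictions(y_true, y_pred, metrics=("normalized_rmse", "mape")):
    # Look up each requested metric by name; return {name: value}.
    return {name: _METRICS[name](y_true, y_pred) for name in metrics}

result = evaluate_predictions(np.array([1.0, 2.0, 4.0]),
                              np.array([1.0, 2.0, 3.0]))
```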
- evaluation.eval.get_default_metrics()
- evaluation.eval.mape(y_true: ndarray, y_pred: ndarray)
Calculate the Mean Absolute Percentage Error (MAPE). Note that, following the scikit-learn implementation it relies on, the result is not multiplied by 100: it is returned as a fraction rather than a percentage.
- Parameters:
y_true (numpy.ndarray) – True values.
y_pred (numpy.ndarray) – Predicted values.
- Returns:
Mean Absolute Percentage Error.
- Return type:
float
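A short sketch illustrating the fraction-versus-percentage point: this mirrors scikit-learn's `mean_absolute_percentage_error` behavior using plain NumPy, so a 10% average error comes back as `0.10`, not `10.0`.

```python
import numpy as np

def mape(y_true, y_pred):
    # Mirrors sklearn.metrics.mean_absolute_percentage_error:
    # no multiplication by 100, so the result is a fraction.
    return float(np.mean(np.abs((y_true - y_pred) / y_true)))

val = mape(np.array([100.0, 200.0]), np.array([110.0, 180.0]))
# |-10|/100 = 0.10 and |20|/200 = 0.10, so the mean is 0.10
```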
- evaluation.eval.metric(func)
Decorator to mark functions as metrics.
- evaluation.eval.normalized_rmse(y_true: ndarray, y_pred: ndarray)
Calculate the normalized Root Mean Squared Error (RMSE) between true and predicted values.
- Parameters:
y_true (numpy.ndarray) – True values.
y_pred (numpy.ndarray) – Predicted values.
- Returns:
Normalized RMSE value as a percentage.
- Return type:
float
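A minimal sketch of a normalized RMSE that returns a percentage, as the entry above describes. The choice of normalizing by the range of `y_true` is an assumption; dividing by the mean of `y_true` is an equally common convention.

```python
import numpy as np

def normalized_rmse(y_true, y_pred):
    # RMSE divided by the range of the true values, times 100
    # (assumed range normalization), so the result is a percentage.
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return float(rmse / (np.max(y_true) - np.min(y_true)) * 100.0)

nrmse = normalized_rmse(np.array([0.0, 10.0]), np.array([1.0, 9.0]))
# rmse = 1.0 and the range is 10.0, giving 10.0 (percent)
```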