Importance Methods¶
This module contains the feature importance calculation methods.
ConditionalPermutationImportance¶
ConditionalPermutationImportance(model, metric='mse', strategy='auto', partitioner=None, n_repeats=5, n_jobs=-1, random_state=None)
Bases: MetricBasedExplainer
Conditional Permutation Feature Importance calculator.
This implements conditional permutation importance, where feature values are shuffled only within defined subgroups, preserving the correlation structure between features.
Supports two strategies:

- `'auto'`: uses tree-based cs-PFI to automatically learn subgroups
- `'manual'`: uses pre-defined groups provided by the user
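To illustrate what shuffling "within defined subgroups" means, here is a minimal NumPy sketch. The `conditional_permute` helper is hypothetical, written only for this illustration; it is not part of the library's API.

```python
import numpy as np

def conditional_permute(x, groups, rng):
    """Shuffle a feature's values within each subgroup only.

    Hypothetical helper for illustration; not the library's API.
    """
    permuted = x.copy()
    for g in np.unique(groups):
        idx = np.flatnonzero(groups == g)
        permuted[idx] = rng.permutation(x[idx])
    return permuted

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0, 10.0, 20.0, 30.0])
groups = np.array([0, 0, 0, 1, 1, 1])
shuffled = conditional_permute(x, groups, rng)
# Small values stay in group 0 and large values in group 1,
# so realistic feature combinations are preserved.
```

Unlike a global shuffle, values never cross group boundaries, which is what keeps correlated features from producing unrealistic rows.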
Example:

```python
explainer = ConditionalPermutationImportance(model, metric='mse')
result = explainer.explain(X, y, features=['lag_1', 'lag_2'])
print(result.to_dataframe())
```
Initialize the conditional permutation importance calculator.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `ModelProtocol` | A model with a `predict` method. | *required* |
| `metric` | `MetricFunction \| str` | Scoring metric (`'mse'`, `'mae'`, `'rmse'`, `'r2'`) or callable. | `'mse'` |
| `strategy` | `str` | Grouping strategy (`'auto'` for tree-based, `'manual'` for user-defined). | `'auto'` |
| `partitioner` | `BasePartitioner \| None` | Custom partitioner instance. If `None`, uses `TreePartitioner` for `'auto'`. | `None` |
| `n_repeats` | `int` | Number of times to repeat the permutation for each feature. | `5` |
| `n_jobs` | `int` | Number of parallel jobs (`-1` for all cores). | `-1` |
| `random_state` | `int \| None` | Random seed for reproducibility. | `None` |
Source code in src/xeries/importance/permutation.py
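With `strategy='auto'`, subgroups are learned by a tree-based partitioner. Conceptually, this can be sketched by fitting a shallow decision tree that predicts the feature of interest from the remaining features and using its leaves as subgroups. The snippet below illustrates that idea with scikit-learn; it is an assumption-laden sketch, not the actual `TreePartitioner` implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                    # conditioning feature
x0 = x1 + rng.normal(scale=0.2, size=n)    # feature of interest, correlated with x1
others = x1.reshape(-1, 1)

# Sketch only: fit a shallow tree predicting x0 from the other features.
# Each leaf collects samples whose other-feature values are similar, so
# shuffling x0 inside a leaf keeps the x0-x1 correlation roughly intact.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(others, x0)
groups = tree.apply(others)                # leaf index per sample = subgroup label
```

Limiting tree depth trades off fidelity (more, purer subgroups) against subgroup size (enough samples per leaf for a meaningful shuffle).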
Planned Methods¶
The following importance methods are planned for future releases and are not yet available:
- Conditional SHAP
- SHAP-IQ
- Feature Dropping
- Causal Feature Importance