Iteration¶
Iteration methods allow a researcher or practitioner to derive average model accuracy.
Functions
- causeinfer.evaluation.iterate_model(model, X_train, y_train, w_train, X_test, y_test, w_test, tau_test=None, n=10, pred_type='predict', eval_type=None, normalize_eval=False, verbose=True)[source]¶
Trains and makes predictions with a model multiple times to derive average predictions and their variance.
- Parameters:
- model : object
A model over which iterations will be done.
- X_train : numpy.ndarray : (num_train_units, num_features) : int, float
Matrix of covariates.
- y_train : numpy.ndarray : (num_train_units,) : int, float
Vector of unit responses.
- w_train : numpy.ndarray : (num_train_units,) : int, float
Vector of original treatment allocations across units.
- X_test : numpy.ndarray : (num_test_units, num_features) : int, float
A matrix of covariates.
- y_test : numpy.ndarray : (num_test_units,) : int, float
A vector of unit responses.
- w_test : numpy.ndarray : (num_test_units,) : int, float
A vector of original treatment allocations across units.
- tau_test : numpy.ndarray : (num_test_units,) : int, float (default=None)
A vector of the actual treatment effects given simulated data.
- n : int (default=10)
The number of train and prediction iterations to run.
- pred_type : str (default=predict)
predict or predict_proba: the type of prediction the iterations will make.
- eval_type : str (default=None)
qini or auuc: the type of evaluation to be done on the predictions.
Note: if None, model predictions will be averaged without their variance being calculated.
- normalize_eval : bool, optional (default=False)
Whether to normalize the evaluation metric.
- verbose : bool (default=True)
Whether to show a tqdm progress bar for the iterations.
- Returns:
- avg_preds_probas : numpy.ndarray (num_units, 2) : float
Averaged per-unit predictions.
- all_preds_probas : dict
A dictionary of all predictions produced during iterations.
- avg_eval : float
The average of the iterated model evaluations.
- eval_variance : float
The variance of all prediction evaluations.
- all_evals : dict
A dictionary of all evaluations produced during iterations.
- causeinfer.evaluation.eval_table(eval_dict, variances=False, annotate_vars=False)[source]¶
Displays the evaluation of models given a dictionary of their evaluations over datasets.
- Parameters:
- eval_dict : dict
A dictionary of model evaluations over datasets.
- variances : bool (default=False)
Whether to annotate the evaluations with their variances.
- annotate_vars : bool (default=False)
Whether to annotate the evaluation variances with stars given their standard deviations.
- Returns:
- eval_table : pandas.DataFrame (num_datasets, num_models)
A dataframe of dataset to model evaluation comparisons.
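A rough sketch of the kind of table eval_table produces, assuming a nested dataset-to-model dictionary layout; the dataset and model names below are hypothetical, and the exact eval_dict structure expected by causeinfer may differ.

```python
import pandas as pd

# Assumed layout: outer keys are datasets, inner keys are models,
# values are evaluation scores (e.g. Qini or AUUC).
eval_dict = {
    "dataset_a": {"two_model": 0.12, "interaction_term": 0.15},
    "dataset_b": {"two_model": 0.08, "interaction_term": 0.11},
}

# Transpose so rows are datasets and columns are models, matching the
# documented (num_datasets, num_models) return shape.
eval_table = pd.DataFrame(eval_dict).T
print(eval_table)
```

When variances=True, the real function additionally annotates each cell with its variance, so scores from different models can be compared dataset by dataset.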