Run your first evaluation

Create prediction function
Refer to the Prediction section for how to create a prediction function and add it to a version of a model.
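As a rough illustration only (the authoritative API is in the Prediction section), a prediction function is typically a plain callable that maps test inputs to model outputs. The names predict and model.predict below are assumptions for the sketch, not SuperAlign's API:

def predict(model, x_test):
    # Hypothetical sketch: return the model's predictions for the test data.
    # The exact signature and registration mechanism are described in the
    # Prediction section of the docs.
    return model.predict(x_test)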
Create validation dataset
SuperAlign expects the dataset version used for validation to be registered in a specific format:
from pureml.decorators import dataset

@dataset("<dataset_name>")
def create_validation_dataset():
    x_test = ...  # Data for testing
    y_test = ...  # Labels for testing
    A_test = ...  # Sensitive features (optional)
    return {"x_test": x_test, "y_test": y_test, "sensitive_features": A_test}
Any dataset intended for model validation must be returned as a dictionary with the mandatory keys “x_test” (testing data) and “y_test” (labels). The “sensitive_features” key is optional.
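As a concrete, hedged example, here is a minimal sketch using scikit-learn's breast-cancer data as a stand-in for your own data. The dataset name is illustrative, and the final call assumes, following the decorator usage above, that invoking the decorated function registers the dataset version:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from pureml.decorators import dataset

@dataset("Credit Loan Dataset")  # illustrative name; use your own dataset name
def create_validation_dataset():
    # Load a toy dataset purely for demonstration
    X, y = load_breast_cancer(return_X_y=True)
    # Hold out 20% of the data for validation
    _, x_test, _, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    return {"x_test": x_test, "y_test": y_test}

create_validation_dataset()  # calling the decorated function registers the dataset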
Run the evaluator
from pureml_evaluate.policy import policy_eval

# Reference the registered model and dataset by their '<name>:<version>' labels
results = policy_eval.eval(label_model='Credit Underwriting:v1',
                           label_dataset='Credit Loan Dataset:v1')
SuperAlign supports two task types for evaluation: “classification” and “regression”.
After the evaluator runs, the computed results are sent to the SuperAlign backend, so you can apply any policy directly from the Dashboard.
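If you want a quick local look at the computed metrics before opening the Dashboard, you can print the returned object; its exact structure is defined by the backend and is not documented here:

# Inspect the evaluation results locally; the structure of `results`
# is backend-defined, so this is just a convenience for debugging.
print(results)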