Model Explainability

This page helps you understand how your trained machine learning model makes decisions by visualizing feature importance and generating explanations. This is especially useful for building trust in your models and for identifying potential biases or unexpected patterns. Upload a trained model along with the dataset used to train it, and the system will apply techniques such as SHAP to show which features influenced the predictions the most. You can then request a local explanation for any individual observation in your data.
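To illustrate the idea behind global feature importance, here is a minimal, self-contained sketch using permutation importance, a simpler model-agnostic technique than SHAP (which attributes predictions via Shapley values, not shown here). The `predict` function and the dataset below are hypothetical stand-ins for your uploaded model and training data:

```python
import random

# Toy "trained model": the prediction depends strongly on feature 0,
# weakly on feature 1, and not at all on feature 2. This stands in for
# any fitted model's predict function.
def predict(row):
    return 3.0 * row[0] + 0.5 * row[1]

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean increase in squared error when one feature column is shuffled.

    Features whose shuffling degrades accuracy the most are the most
    influential -- the same intuition behind a SHAP global summary.
    """
    rng = random.Random(seed)

    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)

    baseline = mse(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the link between feature j and the target
            shuffled = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(mse(shuffled) - baseline)
        importances.append(sum(drops) / n_repeats)
    return importances

# Synthetic dataset consistent with the toy model above.
rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [predict(row) for row in X]

imp = permutation_importance(predict, X, y)
# Feature 0 should rank highest; feature 2, which the model ignores,
# should have an importance of (essentially) zero.
```

In practice the page computes importances with SHAP rather than this sketch, but the output is read the same way: the larger a feature's score, the more it drove the model's predictions.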