EXPLAINABLE AI IN CREDIT SCORING: BALANCING ACCURACY AND INTERPRETABILITY
Keywords:
credit scoring, explainable AI, model interpretability, regulatory compliance, machine learning

Abstract
Credit scoring today requires not only high accuracy but also transparency, both to meet regulatory standards and to build trust in automated decisions. This study evaluates three approaches to balancing these goals: traditional interpretable models (logistic regression, decision trees), black-box models enhanced with explainability tools (XGBoost with SHAP), and hybrid models (generalized additive models with boosting). All models were trained on the German Credit dataset and compared on AUC-ROC, F1-score, and the clarity of their decision-making process.
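A minimal sketch of this comparison is shown below, assuming the OpenML copy of the German Credit data ("credit-g"), a 70/30 stratified split, and default hyperparameters; the paper's exact preprocessing and tuning are not reproduced here.

import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

# Load the German Credit dataset and one-hot encode categorical features.
data = fetch_openml("credit-g", version=1, as_frame=True)
X = pd.get_dummies(data.data, dtype=float)
y = (data.target == "good").astype(int)  # 1 = good credit risk

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

models = {
    "logistic regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
    "xgboost": XGBClassifier(eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]
    print(f"{name}: AUC-ROC={roc_auc_score(y_test, proba):.3f}, "
          f"F1={f1_score(y_test, model.predict(X_test)):.3f}")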
XGBoost combined with SHAP explanations achieved the highest predictive performance (87% AUC-ROC) while still offering a level of transparency suitable for regulatory review. Fully interpretable models were easier to understand but gave up 6–8% in accuracy. Hybrid GAMs struck a promising balance, reaching 85% accuracy with explainability built in. These results suggest that financial institutions can adopt powerful AI models for credit scoring without compromising interpretability, provided appropriate explainability techniques are integrated.
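As a sketch of the explanation step, the snippet below applies SHAP's TreeExplainer to the fitted XGBoost model from the previous example, producing both a global feature-importance summary and a per-applicant attribution of the kind a regulatory reviewer or adverse-action notice might draw on. Variable names carry over from the earlier sketch and are illustrative assumptions, not the paper's code.

import shap

# TreeExplainer computes exact per-feature attributions for tree ensembles.
explainer = shap.TreeExplainer(models["xgboost"])
shap_values = explainer.shap_values(X_test)

# Global view: mean |SHAP value| per feature across the test set.
shap.summary_plot(shap_values, X_test, plot_type="bar")

# Local view: why the model scored one applicant the way it did.
shap.force_plot(
    explainer.expected_value, shap_values[0], X_test.iloc[0], matplotlib=True
)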
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.