Interpretability in deep learning for finance: A case study for the Heston model

01.21.26 | KeAi Communications Co., Ltd.


Deep learning has become a powerful tool in quantitative finance, with applications ranging from option pricing to model calibration. However, despite its accuracy and speed, one major concern remains: neural networks often behave like "black boxes," making it difficult to understand how they reach their conclusions. This opacity undermines validation, accountability, and risk management in financial decision-making.

In a new study published in Risk Sciences, a team of researchers from Italy and the UK investigates how interpretable deep learning models can be made in a financial setting. Their goal was to understand whether interpretability tools can genuinely explain what a neural network has learned, rather than merely producing visually appealing but potentially misleading explanations.

The researchers focused on the calibration of the Heston model, one of the most widely used stochastic volatility models in option pricing, whose mathematical and financial properties are well understood. This makes it an ideal benchmark for testing whether interpretability methods provide explanations that align with established financial intuition.
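For context, the Heston model describes an asset price and its stochastic variance through a pair of coupled stochastic differential equations. The standard textbook form, included here for reference rather than taken from the study, is:

```latex
\begin{aligned}
dS_t &= \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{S}, \\
dv_t &= \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{v}, \\
d\langle W^{S}, W^{v}\rangle_t &= \rho\,dt.
\end{aligned}
```

Calibration means recovering the five parameters — the mean-reversion speed \(\kappa\), long-run variance \(\theta\), volatility of volatility \(\sigma\), spot-volatility correlation \(\rho\), and initial variance \(v_0\) — from observed option prices.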

“We trained neural networks to learn the relationship between volatility smiles and the underlying parameters of the Heston model, using synthetic data generated from the model itself,” shares lead author Damiano Brigo, a professor of mathematical finance at Imperial College London. “We then applied a range of interpretability techniques to explain how the networks mapped inputs to outputs.”
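The synthetic-data setup Brigo describes can be sketched as follows. The parameter ranges and the `heston_smile` stand-in below are illustrative assumptions only: a real pipeline would price options via the Heston characteristic function and invert Black-Scholes, whereas this placeholder merely produces smile-shaped vectors so the supervised-learning structure is visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sampling ranges for the five Heston parameters;
# the paper's exact ranges are not reproduced here.
n_samples = 10_000
kappa = rng.uniform(0.5, 5.0, n_samples)   # mean-reversion speed
theta = rng.uniform(0.01, 0.2, n_samples)  # long-run variance
sigma = rng.uniform(0.1, 1.0, n_samples)   # volatility of volatility
rho   = rng.uniform(-0.9, 0.0, n_samples)  # spot/vol correlation
v0    = rng.uniform(0.01, 0.2, n_samples)  # initial variance

params = np.column_stack([kappa, theta, sigma, rho, v0])

def heston_smile(p, strikes=np.linspace(0.8, 1.2, 9), maturity=1.0):
    """Stand-in for a Heston implied-volatility smile.  A real
    implementation would price options and invert Black-Scholes;
    this placeholder only mimics the qualitative shape."""
    kappa, theta, sigma, rho, v0 = p
    atm = np.sqrt(theta + (v0 - theta) * np.exp(-kappa * maturity))
    skew = rho * sigma * (strikes - 1.0)         # correlation tilts the smile
    curv = 0.5 * sigma**2 * (strikes - 1.0)**2   # vol-of-vol bends it
    return atm + skew + curv

smiles = np.array([heston_smile(p) for p in params])

# Supervised pairs: the network learns the inverse map, smiles -> parameters
X_train, y_train = smiles, params
print(X_train.shape, y_train.shape)  # (10000, 9) (10000, 5)
```

The network is then trained on `(X_train, y_train)`, after which interpretability methods are applied to the fitted map.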

These techniques included local methods—such as LIME, DeepLIFT, and Layer-wise Relevance Propagation—as well as global methods based on Shapley values, originally developed in cooperative game theory.
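In cooperative game theory, the Shapley value of a player averages that player's marginal contribution over all coalitions of the other players. A minimal exact implementation on a toy three-player game — an illustration of the concept, not the paper's feature-attribution code — looks like this:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values for a characteristic function v,
    which maps a frozenset of players to a real-valued payoff."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                # Probability that coalition S precedes player i
                # in a uniformly random ordering of all players.
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy game: any coalition containing both 'a' and 'b' earns 1.
def v(S):
    return 1.0 if {'a', 'b'} <= S else 0.0

phi = shapley_values(['a', 'b', 'c'], v)
print(phi)  # a and b each get 0.5; the dummy player c gets 0
```

In the feature-attribution setting, "players" are input features (here, points on the volatility smile) and the payoff is the model's prediction; since exact enumeration grows exponentially, practical tools approximate these sums by sampling.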

The results showed a clear distinction between local and global interpretability approaches. “Local methods, which explain individual predictions by approximating the model locally, often produced unstable or financially unintuitive explanations,” says Brigo. “In contrast, global methods based on Shapley values consistently highlighted input features—such as option maturities and strikes—in ways that aligned with the known behavior of the Heston model.”

The team's analysis also revealed that Shapley values can be used as a practical diagnostic tool for model design. By comparing different neural network architectures, the researchers found that fully connected neural networks outperformed convolutional neural networks for this calibration task, both in accuracy and interpretability—contrary to what is commonly observed in image recognition.

“Shapley values not only help explain model predictions, but also help us choose better neural network architectures that reflect the true financial structure of the problem,” explains co-author Xiaoshan Huang, a quantitative analyst at Barclays.

By demonstrating that global interpretability methods can meaningfully reduce the black-box nature of deep learning in finance, the study provides a pathway toward more transparent, trustworthy, and robust machine-learning tools for financial modeling.

###

Contact the author:

Damiano Brigo

Department of Mathematics

Imperial College London, United Kingdom

Email: damiano.brigo@imperial.ac.uk

The publisher KeAi was established by Elsevier and China Science Publishing & Media Ltd to unfold quality research globally. In 2013, our focus shifted to open access publishing. We now proudly publish more than 200 world-class, open access, English language journals, spanning all scientific disciplines. Many of these are titles we publish in partnership with prestigious societies and academic institutions, such as the National Natural Science Foundation of China (NSFC).

Journal: Risk Sciences

DOI: 10.1016/j.risk.2025.100030

Method of Research: Data/statistical analysis

Subject of Research: Not applicable

Article Title: Interpretability in deep learning for finance: A case study for the Heston model

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Disclaimer: This paper reflects the personal views of Mr. Andrea Pallavicini and does not represent the official position of his employer, Banca IMI.


Contact Information

Ye He
KeAi Communications Co., Ltd.
cassie.he@keaipublishing.com

How to Cite This Article

APA:
KeAi Communications Co., Ltd. (2026, January 21). Interpretability in deep learning for finance: A case study for the Heston model. Brightsurf News. https://www.brightsurf.com/news/LVDE3X3L/interpretability-in-deep-learning-for-finance-a-case-study-for-the-heston-model.html
MLA:
"Interpretability in deep learning for finance: A case study for the Heston model." Brightsurf News, Jan. 21 2026, https://www.brightsurf.com/news/LVDE3X3L/interpretability-in-deep-learning-for-finance-a-case-study-for-the-heston-model.html.