Transparent Breast Cancer Diagnosis through Causality, Explainability and Visualization

Authors

  • Yi-Jui Huang, Department of Computer Science and Information Engineering, National Taitung University, Taitung, Taiwan
  • Cheng-Yu Wen

Keywords:

Breast Cancer, Causal Inference, SHAP Explanations, Feature Visualization

Abstract

Breast cancer diagnosis is crucial for improving patient survival rates, yet the explainability of machine learning models remains a significant challenge in clinical applications. This study focuses on feature importance analysis and model explainability in breast cancer diagnosis, highlighting the importance of transparency in medical feature interpretation. By combining FreeViz visualization, SHAP analysis, and LiNGAM causal inference, this research explores the key features influencing tumor classification and enhances the interpretability of the decision-making process. The results show high consistency across methods, confirming that tumor size, shape irregularity, and boundary morphology are essential in distinguishing malignant from benign tumors. Furthermore, integrating causal inference provides insight into feature interactions and clinical relevance. These findings underscore the value of explainable AI in medical diagnostics, enhancing clinical trust, supporting early detection, and enabling personalized treatment planning. The study contributes evidence supporting the deployment of interpretable machine learning models in critical healthcare domains.
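The feature-importance analysis described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the widely used scikit-learn Wisconsin breast cancer dataset (30 tumor features covering size, shape, and boundary morphology, e.g. radius, concavity, texture) and uses permutation importance as an accessible, model-agnostic stand-in for the SHAP and LiNGAM analyses the study actually performs.

```python
# Hedged sketch: feature-importance analysis on the scikit-learn Wisconsin
# breast cancer dataset. The dataset choice, model, and hyperparameters are
# assumptions for illustration; the paper itself uses FreeViz, SHAP, and
# LiNGAM rather than permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0, stratify=data.target
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)

# Permutation importance: shuffle one feature at a time and measure the drop
# in held-out accuracy; larger drops mark more influential features.
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked[:5]:
    print(f"{name}: {score:.4f}")
```

On this dataset, size- and concavity-related features (e.g. worst radius, worst concave points) typically rank near the top, which is broadly consistent with the abstract's finding that tumor size, shape irregularity, and boundary morphology drive the malignant/benign distinction.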

Published

2025-06-28

Issue

Vol. 3 No. 2 (2025)

Section

Articles

How to Cite

Huang, Y.-J., & Wen, C.-Y. (2025). Transparent Breast Cancer Diagnosis through Causality, Explainability and Visualization. Journal of Information and Computing, 3(2), 1-13. https://itip-submit.com/index.php/JIC/article/view/151