Transparent Breast Cancer Diagnosis through Causality, Explainability and Visualization
Keywords:
Breast Cancer, Causal Inference, SHAP Explanations, Feature Visualization

Abstract
Breast cancer diagnosis is crucial for improving patient survival rates, yet the explainability of machine learning models remains a significant challenge in clinical applications. This study focuses on feature importance analysis and model explainability in breast cancer diagnosis, emphasizing the need for transparent interpretation of medical features. By combining FreeViz visualization, SHAP analysis, and LiNGAM causal inference, the research identifies the key features influencing tumor classification and improves the interpretability of the decision-making process. The results show high consistency across the three methods, confirming that tumor size, shape irregularity, and boundary morphology are essential for distinguishing malignant from benign tumors. Integrating causal inference further provides insight into feature interactions and their clinical relevance. These findings underscore the value of explainable AI in medical diagnostics: it strengthens clinical trust, supports early detection, and enables personalized treatment planning. The study adds to the evidence supporting the deployment of interpretable machine learning models in critical healthcare domains.
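The abstract does not include implementation details, but the described combination of SHAP-based feature attribution followed by LiNGAM causal discovery can be illustrated with a minimal sketch. The code below is an assumption-laden stand-in, not the authors' pipeline: it uses the scikit-learn Wisconsin Diagnostic Breast Cancer dataset in place of the study's data, a random forest classifier as an arbitrary model choice, and the open-source `shap` and `lingam` packages; the FreeViz projection step (available in Orange3) is omitted.

```python
# Minimal sketch, NOT the authors' code: assumes the scikit-learn Wisconsin
# Diagnostic Breast Cancer dataset as a stand-in for the study's data and the
# open-source `shap` and `lingam` packages. FreeViz (Orange3) is omitted here.
import numpy as np
import pandas as pd
import shap
import lingam
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load 30 morphological features (radius, concavity, texture, ...);
# target: 0 = malignant, 1 = benign.
data = load_breast_cancer()
X = pd.DataFrame(data.data, columns=data.feature_names)
y = data.target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) Fit a classifier and attribute its predictions with SHAP.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, tree explainers return either a list
# (one array per class) or a single 3-D array (samples, features, classes).
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
ranking = pd.Series(np.abs(sv).mean(axis=0), index=X.columns).sort_values(ascending=False)
print("Top features by mean |SHAP|:")
print(ranking.head(10))

# 2) Run DirectLiNGAM causal discovery over the top-ranked features to
#    estimate directed linear non-Gaussian relations among them.
top_features = ranking.head(5).index.tolist()
causal_model = lingam.DirectLiNGAM()
causal_model.fit(X_train[top_features])
print("Estimated adjacency matrix (rows <- columns):")
print(pd.DataFrame(causal_model.adjacency_matrix_, index=top_features, columns=top_features))
```

In this sketch, SHAP supplies a model-based ranking of feature importance, and LiNGAM is applied afterwards to the highest-ranked features to suggest directed relations among them; whether the paper filters features before causal discovery in this way is an assumption of the example.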
License
Copyright (c) 2025 Journal of Information and Computing

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.