This study evaluates physician trust in, and adoption of, AI-Rule hybrid diagnostic explanations for clinical decision-making, comparing pure AI-based explanations with hybrid systems that combine neural network predictions with explicit clinical rules. It employs a randomized controlled trial with practicing physicians to assess three explanation modalities: feature importance/saliency maps, counterfactual explanations, and AI-Rule hybrid explanations. Primary outcomes include appropriate reliance, self-reported trust scales, and behavioral adoption metrics.
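Appropriate reliance is commonly operationalized as following the AI when its recommendation is correct and overriding it when it is wrong. The sketch below shows one such per-case scoring, assuming a simple trial log; the field names and formula are illustrative assumptions, not the study's exact operationalization.

```python
from dataclasses import dataclass

@dataclass
class TrialCase:
    ai_correct: bool          # whether the AI recommendation was correct
    physician_followed: bool  # whether the physician accepted the AI recommendation

def appropriate_reliance(cases: list[TrialCase]) -> float:
    """Fraction of cases where reliance was appropriate:
    followed the AI when correct, overrode it when wrong."""
    if not cases:
        return 0.0
    appropriate = sum(1 for c in cases if c.physician_followed == c.ai_correct)
    return appropriate / len(cases)

# Example: 3 of 4 decisions show appropriate reliance -> 0.75
cases = [
    TrialCase(ai_correct=True,  physician_followed=True),
    TrialCase(ai_correct=True,  physician_followed=False),
    TrialCase(ai_correct=False, physician_followed=False),
    TrialCase(ai_correct=False, physician_followed=False),
]
print(appropriate_reliance(cases))  # 0.75
```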
Key findings
AI-Rule hybrid systems combine neural networks with explicit clinical rules to improve precision and interpretability (see the sketch after this list).
The study hypothesizes that hybrid explanations will achieve higher appropriate reliance and better-calibrated trust than pure AI explanations.
Physicians' intention to adopt AI systems is expected to be higher with hybrid explanations.
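As a minimal sketch of what a hybrid explanation might look like, assuming a neural network that outputs a risk score and a small set of explicit rules evaluated on the same inputs: the explanation surfaces both the score and the rules it triggers. All names, rules, and thresholds below are illustrative assumptions, not the study's implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical clinical rule: a human-readable condition over patient features.
@dataclass
class ClinicalRule:
    name: str
    condition: Callable[[dict], bool]

# Illustrative rules only; real rules would come from clinical guidelines.
RULES = [
    ClinicalRule("Fever >= 38.5C with elevated WBC",
                 lambda p: p["temp_c"] >= 38.5 and p["wbc"] > 11.0),
    ClinicalRule("Age >= 65 with low SpO2",
                 lambda p: p["age"] >= 65 and p["spo2"] < 92),
]

def hybrid_explanation(patient: dict, nn_risk_score: float) -> dict:
    """Pair a neural-network risk score with the explicit rules it triggers,
    so the physician sees both the prediction and an inspectable rationale."""
    fired = [r.name for r in RULES if r.condition(patient)]
    return {
        "risk_score": nn_risk_score,   # from the neural network
        "triggered_rules": fired,      # explicit clinical rationale
        "recommendation": "flag for review" if nn_risk_score > 0.5 or fired else "routine",
    }

# Example use with a made-up patient and model score.
patient = {"temp_c": 38.9, "wbc": 13.2, "age": 71, "spo2": 95}
print(hybrid_explanation(patient, nn_risk_score=0.62))
```

The design choice this illustrates is that the rule layer does not replace the network; it annotates the prediction with conditions a clinician can verify directly, which is the property the hypotheses about calibrated trust rest on.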
Limitations & open questions
The study's findings may be influenced by physician specialty and AI experience level.
The impact of explanation types on trust may vary across different medical specialties.