AI has the potential to unlock the full value of digital pathology. Research studies suggest added value, yet clinical integration is lagging. The pace of innovation is confronted with risks of bias, privacy concerns, operational challenges, ethical responsibilities, and limited explainability. Deferring risk and safety assessments to existing regulations is imperfect, and the question emerges: how can we ensure that regulatory guidance documents accurately reflect the risk and safety assessment of novel digital pathology and ML/AI tools?