3/14 PIcc Alliance “Pi” Annual Members Meeting
We had a great online meeting with a total of 9 breakout sessions, of which 7 provided materials and summaries (see below).
We thank everyone who participated.
Introduction
Joe Lennerz provided an overview of recent events and outlined the meeting.
Video overview
00:00 welcome
00:50 website
01:09 recognition as a collaborative community; membership numbers
02:22 update on the federal notice (*see also selected material)
05:30 outline of the meeting
09:38 breakdown of interests
Moved to breakout sessions
Workgroup Breakout Sessions
Summary & Discussion
Video overview
00:00 breakout session
01:13 pre-analytics
13:35 slide scanning
18:12 truthing
31:23 ML/AI
45:32 payor
52:01 trainees
58:01 discussion
62:00 next steps
PRE-ANALYTICS (AMANDA LOWE)
discussed goals and revised executive summary
need for additional contributors
deliverables revised
focus on pre-analytics deliverables for AI
make concrete recommendations on the factors device manufacturers should consider
Discussion:
how to translate pre-analytical considerations
modeling the different contributions (relevance for algorithm developers)
what is needed for scanner calibration?
prioritization: stain vs. scanner? (a crude stain-variability check is sketched below)
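To make the stain-vs.-scanner question concrete, here is a minimal sketch of a crude pre-analytic QC check: comparing per-channel intensity statistics of slide thumbnails to flag staining outliers. The file names and the 2-standard-deviation cutoff are illustrative assumptions, not workgroup recommendations.

```python
# Minimal sketch: flag slides whose stain statistics are cohort outliers.
# Assumes PIL and numpy; thumbnail file names are hypothetical.
import numpy as np
from PIL import Image

def channel_stats(path: str) -> np.ndarray:
    """Return per-channel [mean, std] for an RGB thumbnail."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    return np.array([rgb.mean(axis=(0, 1)), rgb.std(axis=(0, 1))])

paths = ["slide_01.png", "slide_02.png", "slide_03.png"]  # hypothetical
stats = np.stack([channel_stats(p) for p in paths])       # (n_slides, 2, 3)

# Flag slides whose mean intensity deviates > 2 SD from the cohort mean
cohort_mean = stats[:, 0].mean(axis=0)
cohort_std = stats[:, 0].std(axis=0) + 1e-6
for path, s in zip(paths, stats):
    z = np.abs((s[0] - cohort_mean) / cohort_std)
    if (z > 2).any():
        print(f"{path}: stain statistics are outliers (z = {z.round(1)})")
```

Proper stain normalization (e.g., Macenko-style methods) goes much further; a check like this only flags candidates for review.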
SLIDE SCANNING (SCOTT BLAKELY)
artifacts from histology (=pre-analytics) vs. artifacts from scanning
no substantial changes from previous goals
added classifying scanner artifacts (a simple focus check is sketched after this list)
capture features at the scanning level without yet moving this item forward
Discussion:
writing out definitions?
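One common scanner artifact is out-of-focus regions; the variance of the Laplacian is a standard focus measure (low variance suggests blur). A minimal sketch using OpenCV; the tile path and threshold are illustrative assumptions:

```python
# Minimal sketch: flag blurred tiles via the variance-of-Laplacian focus measure.
import cv2

def is_out_of_focus(tile_path: str, threshold: float = 100.0) -> bool:
    """Flag a tile as blurred when its Laplacian variance is low."""
    gray = cv2.imread(tile_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(tile_path)
    focus_score = cv2.Laplacian(gray, cv2.CV_64F).var()
    return focus_score < threshold  # cutoff is scanner/dataset-specific

print(is_out_of_focus("tile_0042.png"))  # hypothetical tile from a scanned slide
```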
STANDARDS (MIKE ISAACS)
Click HERE to watch the session recording.
minor changes to the summary
synergies regarding interoperable output from various scanners
need to standardize file output (see the DICOM sketch after this list)
Discussion:
upcoming DPA/PIcc/DICOM webinar
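To illustrate why standardized file output matters: DICOM exposes the same tags regardless of which vendor produced the file. A minimal sketch assuming pydicom is installed and a hypothetical slide.dcm exists:

```python
# Minimal sketch: vendor-neutral metadata access via DICOM tags.
import pydicom

ds = pydicom.dcmread("slide.dcm")  # hypothetical DICOM whole-slide image
print("Modality:    ", ds.Modality)      # 'SM' = slide microscopy
print("Manufacturer:", ds.Manufacturer)  # vendor-specific value, standard tag
print("Frame size:  ", ds.Rows, "x", ds.Columns)
```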
TRUTHING (BRANDON GALLAS & KATE ELFER)
proposal to re-name the group to include “performance evaluation”
agreement across pathologists vs. other assays (a kappa example is sketched after this list)
focus on statistical methods
group focused on re-writing the executive summary
Discussion:
algorithmic performance evaluation
image system assessment
concrete need for evaluation
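As one example of the statistical methods in scope, Cohen’s kappa measures chance-corrected agreement between two readers. A minimal sketch using scikit-learn and toy labels (illustrative only, not meeting data):

```python
# Minimal sketch: inter-pathologist agreement via Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-case calls from two pathologists (1 = positive)
reader_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reader_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(reader_a, reader_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance level
```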
ML/AI (MATTHEW HANNA)
Click HERE to watch the session recording.
focus on models and how to summarize the intended use (and what’s not in scope)
blueprint (akin to a package insert) with clearly defined in- and exclusion criteria
model confidence or probability of output
robustness across datasets
rejection of samples (=when a sample falls outside the bounds of the model; see the sketch after this list)
towards “best practices” of a model
Discussion:
descriptive information
assumptions of the model
transparency to the end user (example: only 7 out of 10 thyroid cancers accounted for)
task of the model has to be clear
can the algorithm be used for creating real-world evidence?
one approval with a change protocol
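To illustrate sample rejection, a minimal sketch in which the model abstains when its top-class probability falls below a confidence bound; the function and threshold are hypothetical, not a PIcc recommendation:

```python
# Minimal sketch: abstain when model confidence is below a blueprint-defined bound.
from typing import Optional

def predict_or_reject(probs: list[float], min_confidence: float = 0.9) -> Optional[int]:
    """Return the predicted class index, or None if outside the model's bounds."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < min_confidence:
        return None  # reject: route the case to a pathologist instead
    return best

print(predict_or_reject([0.55, 0.45]))  # None -> too uncertain, rejected
print(predict_or_reject([0.97, 0.03]))  # 0 -> confident prediction
```

Routing rejected cases to a pathologist keeps the algorithm inside its clearly defined intended use.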
PAYOR (LAURA LASITER)
Click HERE to watch the session recording.
insufficient evidence for digital pathology to approach payors?
approach via individual use cases or technology as a whole?
short-term and longer-term endpoints
Discussion:
What evidence are local payors looking for?
Value creation for digital pathology
TRAINEES (JULIA THIERAUF & SARAH DUDGEON)
Click HERE to watch the session recording.
target audience (residents, fellows, graduate and postgraduate trainees)
expand target audience to include early career scientists (industry and academia)
what we want to provide: seminars, working courses, 15-minute brainstorming sessions, feedback
building a network
Twitter and social media (#PIcctraineegroup)
Discussion:
next workgroup meeting
recognize the potential of including trainees
need for clear communication to distribute across membership
Post-Meeting “Happy Hour”
Relevant link from the post-meeting discussion: