Background materials:
• Letter to the Editor
Includes specifics on the initial study, which was updated to address significant issues with the underlying data comparison.
• Original Study
This spring, Genetics in Medicine published a paper titled “Comparison of literature mining tools for variant classification: Through the lens of 50 RYR1 variants.” In it, a team at the NIH reviewed two public NIH resources alongside two commercial resources, one of which was Genomenon’s Mastermind. The study concluded that no single tool was sufficient on its own for clinical diagnostic workflows and that multiple tools were required for effective variant classification.
However, this wasn’t the real story. A closer examination revealed significant flaws in the study’s data that undermined its conclusions: Mastermind content was systematically underrepresented, producing a misleading portrayal of its performance. The corrected data showed Mastermind to be a sufficient stand-alone tool for clinical variant interpretation and highlighted deficiencies in the other tools. Although the study’s authors acknowledged the corrections, they did not revise their conclusions.
This situation underscores both the importance of rigorous data collection and analysis and the self-correcting nature of science, particularly when patient diagnoses are at stake.
In this webinar, we will cover:
- The contents and conclusions of the original comparison study
- The errors identified in the study
- The approach taken to correct the data
- The significance of the corrected results and how they affect the study’s conclusions
