Negative Predictions
ICGM Phoenix, March 2014
Dr Chris Barber, Director of Science
[email protected]

OUTLINE
• Impact of changes driven by M7
• Negative predictions in Derek for mutagenicity
  The science
  Performance
  Using it in practice…
• What further information / development would you like to see?
  Focus of the first workshop

In silico predictions for M7
• Use of models that predict Ames outcomes
• Two complementary methods should be applied
  One expert rule-based + one statistical-based
  Models should follow the OECD Principles for QSAR
  The absence of alerts from both is sufficient to conclude that the impurity is of no concern
  It seems unlikely that 'out of domain' will be considered a prediction!
• Expert review is needed to provide additional evidence for any prediction
  …and to explain conflicting results (a schematic sketch of this combined decision rule follows below)
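To make the combined decision rule above concrete, here is a minimal sketch in Python. It is an illustration only: the enum and function names are hypothetical, and it does not describe how any particular (Q)SAR system or regulator implements M7; expert review can still add evidence to any prediction.

```python
from enum import Enum


class Call(Enum):
    """Possible outcomes from a single (Q)SAR system for bacterial mutagenicity."""
    NEGATIVE = "negative"
    POSITIVE = "positive"
    OUT_OF_DOMAIN = "out of domain"  # unlikely to be accepted as a prediction


def m7_initial_assessment(expert_rule_call: Call, statistical_call: Call) -> str:
    """Combine one expert rule-based and one statistical call, as outlined above."""
    if expert_rule_call is Call.NEGATIVE and statistical_call is Call.NEGATIVE:
        # Absence of alerts from both complementary methods is sufficient
        # to conclude that the impurity is of no mutagenic concern.
        return "no concern: both systems predict non-mutagenic"
    if Call.OUT_OF_DOMAIN in (expert_rule_call, statistical_call):
        return "expert review required: at least one system gave no valid prediction"
    if expert_rule_call is not statistical_call:
        return "expert review required: conflicting predictions must be explained"
    return "potential mutagenicity flagged: expert review and/or testing required"


print(m7_initial_assessment(Call.NEGATIVE, Call.NEGATIVE))
# -> no concern: both systems predict non-mutagenic
```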
Lhasa is hosting a webinar on April 16…
• 2014 vICGM: members of the FDA present on ICH M7
  Naomi Kruhlak and Mark Powley of the Food and Drug Administration will be presenting on the proposed ICH M7 guidelines
  Naomi - "FDA/CDER Current Practices for (Q)SAR Analysis under ICH M7"
    o An overview of (Q)SAR models and methods used at FDA/CDER for the prediction of bacterial mutagenicity, including the specific expert analysis steps applied
  Mark - "Reconciling Conflicting (Q)SAR Predictions in Impurity Evaluations"
    o Will address the potential regulatory implications of discordant (Q)SAR predictions and describe strategies used to reconcile such results

Derek Nexus – an expert knowledge base
• Built using public & confidential data
• Comprises >110 alerts for mutagenicity
  Can be further customised by members with private knowledge
• Derek is the preferred system for mutagenicity predictions
  In silico methods combined with expert knowledge rule out mutagenic potential of pharmaceutical impurities: an industry survey. Regulatory Toxicology and Pharmacology, 2012, 62, 449–455 (Pfizer, Novartis, GSK, AZ, Lilly, Hoffmann-La Roche, Covance, Merck, J&J)
  Use of in silico systems and expert knowledge for structure-based assessment of potentially mutagenic impurities. Regulatory Toxicology and Pharmacology, 2013, 67, 39 (Bayer, Sanofi, AZ, Hoffmann-La Roche, Computational Toxicology Services LLC, BMS, Pfizer, Servier, Novartis, J&J, Abbott, Merck, Boehringer, NCSP)

Data sharing remains critical for Derek's performance
  [Chart: knowledge-base sensitivity (66–86% axis) and the number of new / modified alerts since the 2012 KB (0–30 axis, split into public and proprietary contributions), shown for releases from D2012KB through Feb-2013, Apr-2013, Jul-2013 and Aug-2013 to D2014KB]
• SOT poster: "Can public data improve mutagenicity predictions for proprietary compounds?", Richard V Williams and Chris Barber

Enhancing Derek Nexus for mutagenicity
• Designed to support expert analysis for M7
  Provides additional supporting information
  Recommends where the expert should focus their analysis
• If no alerts for mutagenicity were found, Derek Nexus would return 'Nothing to report'
  With your support, we have developed a robust way to extend this and provide further information
  The next release of Derek Nexus will make an explicit prediction of inactivity for mutagenicity

Supporting expert analysis
• In the absence of a positive alert, experts ask:
  "Is there any reason to be concerned with this prediction?"
  "Are there any unusual features in my molecule?"
  "Are there features associated with false negative predictions?"
  "Do I have additional confidential information?"
  (a feature = a property derived from structure)

Negative predictions for mutagenicity
• Lhasa experts have developed two lists
  Features known to the model: present in the Lhasa Ames Test Reference Set, encoded within structural alerts, or present in Derek examples
    o Features not in this list are Unclassified
  Features found in non-alerting mutagens: features present in mutagens that Derek predicts non-mutagenic (these may be coincidental or contributory)
    o Features in this list are Misclassified
  The Lhasa Ames Test Reference Set contains Vitic, Hansen, FDA, ISSSTY, CGX, Marketed Pharmaceuticals…

Current Derek Nexus
• Absence of an alert returns 'Nothing to report'
  [Flow: query → does it match an alert or example?
    No → Nothing to report
    Yes → prediction with supporting details: prediction, likelihood, highlighted substructure, Markush, expert comments, validation metrics, references, examples]

Next Release of Derek Nexus
• Assesses the query using the two expert-derived lists… (the resulting decision flow is sketched in code below)
  [Flow: query → does it match an alert or example?
    Yes → prediction with supporting details, as before
    No → does the query contain unclassified features? does it contain misclassified features?
      neither → Inactive
      misclassified only → Inactive with misclassified features
      unclassified only → Inactive with unclassified features
      both → Inactive with unclassified & misclassified features]
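The flow above can be summarised as a short decision function. The sketch below is an illustrative reconstruction only, assuming hypothetical callables (`matches_alert_or_example`, `unclassified_features`, `misclassified_features`) standing in for the Derek Nexus knowledge base and the two expert-derived feature lists; it is not Lhasa's implementation.

```python
def classify_query(query,
                   matches_alert_or_example,
                   unclassified_features,
                   misclassified_features) -> str:
    """Illustrative reconstruction of the negative-prediction decision flow."""
    if matches_alert_or_example(query):
        # Positive predictions are unchanged: the alert fires and the usual
        # supporting details (likelihood, substructure, examples…) are shown.
        return "prediction with supporting details"

    has_unclassified = bool(unclassified_features(query))    # features unknown to the model
    has_misclassified = bool(misclassified_features(query))  # features seen in false negatives

    if has_unclassified and has_misclassified:
        return "inactive with unclassified & misclassified features"
    if has_misclassified:
        return "inactive with misclassified features"
    if has_unclassified:
        return "inactive with unclassified features"
    return "inactive"


# Hypothetical usage: a query that matches no alert and contains no flagged features.
print(classify_query(
    "CCO",  # placeholder query structure
    matches_alert_or_example=lambda q: False,
    unclassified_features=lambda q: [],
    misclassified_features=lambda q: [],
))
# -> inactive
```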
How are compounds classified?
• Positive predictions are unchanged
• Compounds that previously returned 'Nothing to report' now fall into one of four categories (the table columns below, left to right): Inactive; Inactive with misclassified features; Inactive with unclassified features; Inactive with unclassified & misclassified features

  Dataset                    Inactive      +misclassified   +unclassified   +both
  Partition of public data   1524 (89%)    176 (10%)        12 (1%)         1 (-)
  Vitic intermediates        464 (94%)     20 (4%)          9 (2%)          0 (-)
  Private member data 1      280 (86%)     29 (9%)          15 (5%)         1 (-)
  Private member data 2      372 (89%)     31 (7%)          13 (3%)         0 (-)

How accurate are these classifications? (same columns as above)

  Dataset                    Inactive                 +misclassified        +unclassified         +both
  Partition of public data   FN 132 / TN 1392 (91%)   FN 165 / TN 11 (6%)   FN 1 / TN 11 (92%)    FN 1 / TN 0 (-)
  Vitic intermediates        FN 65 / TN 399 (86%)     FN 4 / TN 16 (80%)    FN 3 / TN 6 (67%)     FN 0 / TN 0 (-)
  Private member data 1      FN 16 / TN 264 (94%)     FN 4 / TN 25 (86%)    FN 0 / TN 15 (100%)   FN 0 / TN 1 (-)
  Private member data 2      FN 47 / TN 325 (87%)     FN 2 / TN 11 (86%)    FN 2 / TN 29 (94%)    FN 0 / TN 1 (-)

  FN = false negatives; TN = true negatives; accuracy % = TN / (TN + FN) (a short worked check is given at the end of this document)

Example with an unclassified feature (prediction: inactive)
• No alerts contain this system; there are no examples in the Lhasa Ames Test Reference Set
• Highlights where to focus to increase confidence
  A database search or proprietary data could alleviate concerns

Examples 1 and 2 with a misclassified feature
• Review the examples where the feature was seen in false negative predictions
  In both cases there is no reason for an expert to over-rule the prediction of Derek
• Proceed with confidence in a negative prediction

Derek Nexus – negative predictions: recap
• Only for mutagenicity (Ames) alerts
• Provides additional direction, focusing attention on which features are associated with uncertainty
• It is unusual for features to be highlighted: less than 10% of the time across a number of datasets
• Accuracy remains high; we are confident in making a negative prediction

Future development
• Will be driven by your feedback
  Features
  Supporting data
  Performance
• Negative prediction lists are closely coupled to a particular version of the Knowledge Base and Vitic
  Unclassified and Misclassified features will change with each release…

[Diagram: how a Derek Nexus prediction (structure, prediction, likelihood, references, key examples) maps onto the five OECD (Q)SAR principles: 1. Defined endpoint, 2. Unambiguous algorithm, 3. Applicability domain, 4. Performance, 5. Mechanism]
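As a closing, worked check of the performance figures: the accuracy quoted in the "How accurate are these classifications?" table above is the negative predictivity of each category, TN / (TN + FN). The snippet below recomputes the public-data row from the FN/TN counts in that table; the counts are copied from the table, while the code itself is illustrative only.

```python
def negative_predictivity(tn: int, fn: int) -> float:
    """Fraction of compounds predicted negative that are true negatives."""
    return tn / (tn + fn)


# TN / FN counts for the partition of public data, copied from the accuracy table above.
public_data = {
    "inactive": (1392, 132),
    "inactive with misclassified features": (11, 165),
    "inactive with unclassified features": (11, 1),
}

for category, (tn, fn) in public_data.items():
    print(f"{category}: {negative_predictivity(tn, fn):.0%}")
# -> inactive: 91%
# -> inactive with misclassified features: 6%
# -> inactive with unclassified features: 92%
```

On the public data, only 6% of the 'inactive with misclassified features' calls are true negatives, which illustrates why compounds carrying such features are flagged for expert attention.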