Quantitative Medicine is participating in the Drug Safety Executive Council’s Technical Evaluation Committee, organized by Cambridge Healthtech. Studies Quantitative Medicine conducted with several members, representing three of the largest pharma companies, yielded impressive demonstrations of the capabilities and value of the Computational Research Engine™ (CoRE™).
Case One: Reduced Experimentation Cost to Develop Accurate Predictive Models
In September 2013, two pharma companies asked Quantitative Medicine to compare the cost and accuracy of predictive models developed using CoRE’s active machine learning methods with those developed using standard industry methods. The EPA’s ToxCast dataset was used as a simulated “experimental space” for the test.
- A predictive model of the experimental space was developed using current industry-standard analytic methods. Using two common machine learning approaches, Random Forest and LASSO regression, it was necessary to explore 80% of the experimental space to reach maximum predictive accuracy when compounds were chosen for experimentation based on their chemical diversity.
- By comparison, Quantitative Medicine’s Computational Research Engine™ needed only 10% of the experimental space to reach this level of predictive accuracy.
Given the EPA’s estimate that $6M was spent on the experiments behind ToxCast, in dollar terms the current industry methods would have cost $4.8M to reach the level of accuracy that CoRE™ would have achieved for $600K: an 87% savings. Further experimentation directed by CoRE™ produced accuracy better than standard machine learning methods regardless of the experiment selection method. Based on these results, both pharma companies have asked for additional studies using their proprietary data.
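The kind of active learning loop being compared here can be sketched in a few lines. The sketch below is a generic illustration, not CoRE’s proprietary method: it uses a bootstrap ensemble on synthetic data (the problem size, descriptors, budget, and model choice are all our own assumptions) and repeatedly “runs” the experiment whose outcome the ensemble is least certain about, instead of picking experiments by diversity or at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "experimental space": 500 compounds, 5 descriptors each, and a
# noisy assay response. Entirely synthetic; sizes are illustrative only.
X = rng.normal(size=(500, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=500)

def fit_ensemble(Xl, yl, n_models=10):
    """Bootstrap ensemble of least-squares models; disagreement ~ uncertainty."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(Xl), len(Xl))
        w, *_ = np.linalg.lstsq(Xl[idx], yl[idx], rcond=None)
        models.append(w)
    return np.array(models)

def active_learning(budget=50, seed_size=10):
    labeled = list(range(seed_size))       # start from a small seed set
    pool = set(range(seed_size, len(X)))
    while len(labeled) < budget:
        W = fit_ensemble(X[labeled], y[labeled])
        preds = X @ W.T                    # each column: one model's predictions
        disagreement = preds.std(axis=1)
        # query the pool compound whose predicted outcome is most uncertain
        next_i = max(pool, key=lambda i: disagreement[i])
        pool.remove(next_i)
        labeled.append(next_i)             # "run the experiment": reveal y
    # final model accuracy over the whole simulated space
    w, *_ = np.linalg.lstsq(X[labeled], y[labeled], rcond=None)
    rmse = np.sqrt(np.mean((X @ w - y) ** 2))
    return len(labeled), rmse

n_used, rmse = active_learning()
print(n_used, round(rmse, 3))
```

The savings in the study come from the same mechanism: each queried experiment is chosen to be maximally informative, so the model reaches its accuracy plateau after exploring a small fraction of the space.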
Case Two: Reduced Experimentation by Leveraging Historical Experimental Results
40% Less Experimentation Needed Using Client’s Data
In September 2013, a third large pharma company asked Quantitative Medicine to compare the efficiency of CoRE’s active machine learning methods with standard industry methods for predicting hepatotoxicity. The company’s high-content screening (HCS) data from a recently published study was used.
- A predictive model of the experimental space was developed using current industry-standard analytic methods. About 50% of the experiments executed in the study were needed to create the most accurate predictive model.
- By comparison, Quantitative Medicine’s Computational Research Engine™ needed only 30% of the experimental space to reach this same level of accuracy in predicting hepatotoxicity.
While CoRE™ required 40% less experimentation, we cannot estimate the dollar savings because experiment costs were not made available to us.
No new experimentation needed using CoRE™ with Quantitative Medicine’s Large Curated Database – A 100% Savings!
More interestingly, the collaborators then suggested we test methods for predicting toxicity without using any “new” experimental results from HCS screens, as if the models were being developed entirely in silico, without novel experimentation. To work only from our extensive database of prior research on this problem, we designed a new method that handles extremely sparse data sets. Using this method with that database and no current experimental results, CoRE™ developed a model with higher accuracy than any of the methods previously tested. This shows that the knowledge gathered in the new HCS experiments was already present in our database; it had simply been gathered in different experiments, testing different compounds. The active learning methods used by CoRE™ enabled us to capture that knowledge effectively.
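The sparse-data method itself is proprietary, but one generic way to learn from an extremely sparse matrix of historical assay results is low-rank matrix completion: unobserved compound-by-assay entries are predicted from the observed ones, with no new experiments. The sketch below is a stand-in illustration of that setting only (synthetic data; the dimensions, rank, observed fraction, and alternating-least-squares solver are all our own assumptions, not CoRE’s algorithm).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sparse historical matrix: 200 compounds x 30 assays, 75% of the
# entries missing, generated from a low-rank structure. Purely illustrative.
k = 3
U_true = rng.normal(size=(200, k))
V_true = rng.normal(size=(30, k))
M = U_true @ V_true.T
mask = rng.random(M.shape) < 0.25          # only 25% of entries observed

def als_complete(M, mask, k=3, n_iter=30, reg=0.1):
    """Alternating least squares matrix completion using observed entries only."""
    n, m = M.shape
    U = rng.normal(scale=0.1, size=(n, k))
    V = rng.normal(scale=0.1, size=(m, k))
    for _ in range(n_iter):
        for i in range(n):                 # update each compound's factor
            obs = mask[i]
            if obs.any():
                A = V[obs].T @ V[obs] + reg * np.eye(k)
                U[i] = np.linalg.solve(A, V[obs].T @ M[i, obs])
        for j in range(m):                 # update each assay's factor
            obs = mask[:, j]
            if obs.any():
                A = U[obs].T @ U[obs] + reg * np.eye(k)
                V[j] = np.linalg.solve(A, U[obs].T @ M[obs, j])
    return U @ V.T

M_hat = als_complete(M, mask)
held_out = ~mask                           # entries never "experimented on"
rmse = np.sqrt(np.mean((M_hat[held_out] - M[held_out]) ** 2))
print(round(rmse, 3))
```

The point of the sketch is the shape of the problem: when the historical database shares latent structure with the new endpoint, the unmeasured entries can be predicted from old experiments alone, which is why no new experimentation was needed in this case.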
Case Three: Reduced Compound Synthesis Required to Discover Promising Drug Leads
30-50% Reduction in Synthesized Compounds
A smaller pharma company specializing in CNS drug development asked Quantitative Medicine to assess how well CoRE™ would have performed on a completed drug discovery campaign had it been used to direct experimentation. The company had conducted the campaign and identified a lead to advance after synthesizing a large number of compounds. In our simulations, CoRE™ used their historical data to replay the campaign as if it were directing compound synthesis: all of the data was hidden from CoRE™ and revealed only when CoRE™ recommended that a batch of compounds be “synthesized.” Random selection required, on average, 42 compounds to be synthesized to identify the ideal compound. The industry-standard approach required, on average, 25 compounds to produce an optimal lead. CoRE™ required an average of only 18 compounds to produce the optimal lead to advance. This represents a 30-50% reduction in the number of synthesized compounds.
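The hide-and-reveal protocol used in this replay can be mimicked with a small simulation. The sketch below is purely illustrative (a made-up linear potency landscape, a naive exploit-only selector, and an arbitrary batch size; none of this is CoRE’s actual algorithm): potencies stay hidden until a batch is “synthesized,” and the campaign stops once the true best compound has been made.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical campaign: 60 candidate compounds, 4 descriptors, hidden potency.
# All values are invented; this only illustrates the batch-reveal protocol.
X = rng.normal(size=(60, 4))
potency = X @ np.array([1.0, -0.5, 0.3, 0.8]) + 0.05 * rng.normal(size=60)
best = int(np.argmax(potency))

def run_campaign(select, batch=3):
    """Reveal potencies batch by batch until the true best compound is made."""
    made = []                                # indices of "synthesized" compounds
    remaining = list(range(len(X)))
    while best not in made:
        picks = select(made, remaining, batch)
        for i in picks:
            remaining.remove(i)
        made.extend(picks)                   # potency[i] is revealed here
    return len(made)

def random_select(made, remaining, batch):
    return list(rng.choice(remaining, size=min(batch, len(remaining)),
                           replace=False))

def guided_select(made, remaining, batch):
    if len(made) < batch:                    # cold start: random first batch
        return random_select(made, remaining, batch)
    # fit a model to the revealed potencies, then exploit its predictions
    w, *_ = np.linalg.lstsq(X[made], potency[made], rcond=None)
    scores = X[remaining] @ w
    order = np.argsort(scores)[::-1][:batch]
    return [remaining[i] for i in order]

n_random = run_campaign(random_select)
n_guided = run_campaign(guided_select)
print(n_random, n_guided)
```

In the actual study the same bookkeeping was applied to the company’s real campaign data, with the selection strategy (random, industry standard, or CoRE™) determining how many compounds had to be synthesized before the lead emerged.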
In all of these cases, and in many others Quantitative Medicine has undertaken, CoRE™ has demonstrated that it can reach the same goal with a substantial reduction in the experimentation required.