In an insightful article about computer-based, in silico toxicity testing methods, Jim Kling argues that "where there is sufficient data that is properly analyzed, in silico methods can likely reduce and replace animal testing. And even when the data is sparse, it can at least help guide the way." Kling must be commended for updating us on the latest developments in QSAR modelling and organ-on-a-chip technology, but perhaps more importantly, for going beyond the technological promises of in silico testing and showing us empirically, instead, what in silico testing actually achieves in terms of prediction. As the research conducted in INNOX shows (several papers on QSAR, PBPK and other modelling techniques are forthcoming), in silico testing is assembled with other information and knowledge. It does not replace experiment; rather, it mostly helps frame further experiments and exploit their results as much as possible.
One comment, though. The view that regulatory agencies are "slow to adopt these approaches" and need to be further "convinced to trust them" misrepresents the reality of innovation in toxicity testing. It is indeed a common view: regulatory agencies are said to be reluctant to take on board new kinds of data and studies, preferring to stick to the conventional methods established in laws and guidelines. They are conservative, the argument goes, and make decisions only on the basis of what animal experiments, still the gold standard, show. But this is only part of the actual history of regulatory science, at least as far as the development of computational toxicology methods goes. It is difficult to overestimate the role of the Office of Toxic Substances of the Environmental Protection Agency (EPA) in the initial realization, at the end of the 1970s, that structure-based predictions could help review chemicals at a fast rate, in the development of a large database of ecotoxicological data on 600 chemicals to produce validated statistical models, and in the patient creation of dedicated software to help chemical firms replicate structure-activity methods. Similarly, while ToxCast is a program of the EPA's Office of Research and Development, the initial impulse of the head of the pesticides and toxics office, who recognized the need for faster chemical screening methods, was instrumental in its launch.
Regulatory science, as its name indicates, is an intriguing mix of ideas and technologies emerging from academia, industry and regulatory agencies. In this ecosystem, regulators play an essential part: pointing to potential developments, setting the criteria of validity for new methods, and funding technological development. In silico toxicology would not be where it is now without them.