At the end of last year, Bilel Benbouzid published a new article drawn from his research on predictive policing in the United States, in the special issue of Réseaux on predictive machines (see below). In it, he shows that "predictive machines are moral technologies of government. They serve not only to predict where and when crimes are likely to occur, but also to regulate the work of the police. They compute relations of equivalence, distributing security across the territory according to multiple criteria of cost and social justice. By tracing the origins of predictive policing back to the Compstat system, one can observe the shift from machines for exploring intuitions (where the police officer keeps control over the machine) to applications that erase the reflexive dimension of proactivity, turning prediction into the support for metrics that 'dose' the quantity of police work. Under the pressure of a critical movement denouncing the discriminatory biases of predictive machines, developers are now devising techniques for auditing training databases and calculating the reasonable amount of police control over the population."
In April 2019, the special issue on foreknowledge in public policy, developed by Stefan Aykut, Bilel Benbouzid and David Demortain in the framework of INNOX, will be published by Science & Technology Studies. In the meantime, the first paper to appear in this special issue has been published online. In "Reassembling Energy Policy", Stefan Aykut shows that visions of policy futures emerge from what he calls predictive assemblages. The term designates the fact that in a policy environment such as the energy policy sector, coalitions of actors are equipped with their own models and forecasts, which cohere, in turn, with a normative discourse about future developments in energy systems. Actors, models and discourses form the assemblage. This original perspective is particularly helpful in revealing the politics behind modeling and anticipation for policy: there are competing assemblages at any given time and in any given country. Stefan compares the changing predictive policy assemblages in France and Germany from the 1960s to the present. At the end of the day, Stefan teaches us how and to what extent models and predictions enable policy change, but also shows how to go beyond conventional accounts of the performativity of models in policy. As he says, "further research should not only focus on the effects of foreknowledge on expectations and beliefs (discursive performativity), but also take into account how new models equip political, administrative and market actors (material performativity), and how forecasting practices recompose and shape wider policy worlds (social performativity)." The paper may be downloaded below.
In an insightful article about computer-based, in silico toxicity testing methods, Jim Kling argues that "where there is sufficient data that is properly analyzed, in silico methods can likely reduce and replace animal testing. And even when the data is sparse, it can at least help guide the way." Kling must be commended for updating us on the latest developments in QSAR modelling or organ-on-a-chip technology, but perhaps more importantly, for going beyond the technological promises of in silico testing and showing us empirically, instead, what in silico testing actually achieves in terms of prediction. As the research conducted in INNOX shows — several papers are forthcoming about QSAR, PBPK and other modelling techniques — in silico testing assembles with other information and knowledge. It does not replace experiment, but is mostly helpful for framing further experiments and exploiting their results as much as possible.
One comment, though. The view that regulatory agencies are "slow to adopt these approaches" and need to be further "convinced to trust them" misrepresents the reality of innovation in toxicity testing. This is indeed a common view: regulatory agencies are said to be reluctant to take on board new kinds of data and studies, preferring to stick to the conventional methods established in laws and guidelines. They are conservative, and make decisions only based on what animal experiments, still the gold standard, show. But this is only part of the actual history of regulatory science, at least as far as the development of computational toxicology methods goes. It is difficult to overestimate the role of the Office of Toxic Substances of the Environmental Protection Agency (EPA) in the initial realization, back at the end of the 1970s, that structure-based predictions could help in reviewing chemicals at a fast rate, and its responsibility in the development of a large database of ecotoxicological data on 600 chemicals to produce validated statistical models, or in the patient creation of dedicated software to help chemical firms replicate structure-activity methods. Similarly, while ToxCast is a program of the Office of Research and Development of the EPA, the initial impulse of the head of the pesticides and toxics office, who realized the need for faster chemical screening methods, was instrumental in its launch. Regulatory science, as its name indicates, is an intriguing mix of ideas and technologies emerging from academia, industry and regulatory agencies. In this ecosystem, regulators play an essential part, pointing to potential developments, asserting the criteria of validity of new methods, and funding technological developments. In silico toxicology would not be where it is now without them.

Machine learning, deep learning, neural networks... these predictive computing technologies are nothing new, but the forms this kind of calculation takes today give it an unprecedented character. This is what Bilel Benbouzid and Dominique Cardon set out to demonstrate through a selection of articles published in the journal Réseaux at the end of last year, devoted to what they call "predictive machines": "computational devices that rationalize the future by making it available to preventive forms of action". One of the things that explains the renewal of artificial intelligence is the existence of controversies over its past forms, controversies that have led algorithm designers to rethink their usefulness and, from there, the type of predictions produced. As Benbouzid and Cardon sum it up, what distinguishes this artificial intelligence is its embeddedness in social and organized worlds. In the current regime of anticipation, "the result of a calculation is satisfactory if it makes it possible to run useful machines, geared more towards action than towards the explanation of phenomena". Each of the articles in this rich special issue illustrates this. Online here: https://www.cairn.info/revue-reseaux-2018-5.htm