“To design an AI security agency, we must think about, and above all predict, its impact on human behavior”

As European regulation of digital services and markets is being put in place, the European Parliament this summer focused on the regulation of artificial intelligence (AI), based on a scale of risks for each algorithm or system. These projects are part of an international context in which states, organizations and institutions are trying to develop analytical frameworks to support such regulation.

At the same time, many actors, mainly academic or associative, are calling for the creation of national agencies responsible for algorithmic security. Most of these proposals are modeled on an equivalent of the French Medicines Agency or the Food and Drug Administration in the United States, which would assess the dangerousness of an algorithm or AI system in advance and authorize its placing on the market.

Unfortunately, such proposals are mistaken about their actual impact. Although any prior study of risks and foreseeable impacts is welcome, in practice it will be of little use unless continuous monitoring and control are established. The error these analysts and experts make is to think of the algorithm as a computational or mathematical object from the exact sciences, one that would always produce identical effects, hence the analogy with the drug: while its effects may fluctuate from one individual to another, the biological action of aspirin has not varied since its discovery.


But the social body is itself always evolving. We must therefore interpret the algorithm as a behavioral phenomenon, destined to generate, as it spreads and is appropriated by users, interactions and therefore risks that may seem a priori unpredictable. Who, indeed, foresaw, when Facebook was created as an alumni network, that this type of platform could facilitate political phenomena as diverse as the "Arab Spring" and the color revolutions in the former USSR, the appearance of the Tea Party, or the emergence of Donald Trump and QAnon?

New recipe for old objective

To design an algorithm and AI security agency, we must therefore think about, and above all predict, its impact on human behavior, the perceptions people form of it, and the effects of dynamic and network interactions. The algorithm is in fact just a new recipe for a very old objective: the very principle of all public policy is to influence the behavior of individuals. In this sense, history is littered with our failures.


