In the current tense geopolitical context, a technological war rages. "Those who master artificial intelligence will be the masters of the future," declared Russian President Vladimir Putin before an audience of students in Yaroslavl in 2017. The race for ever more capable systems is also a driving force for accelerating capabilities and gaining power over information.
The challenge of proliferating fake news came to light with the deployment of generative artificial intelligence (AI) systems such as DALL-E and Stable Diffusion for images and videos, and ChatGPT, OpenAI's conversational robot built on its language models. ChatGPT can certainly produce useful essays and summaries, but it is impossible to know their sources or to measure the influence of the dominant English language in these models. ChatGPT has therefore sparked both excitement and fear among the general public, raising questions such as: "Should it be banned in schools?"; "Will it take my job?"; or even "Will I be able to do without it?"
For these enormous models, Europe intends to propose shared responsibility among the manufacturers who put them on the market or publish them with open access, those who deploy them, and those who use them. This is not the view of the American digital giants, who would rather place responsibility on users alone. Silicon Valley technologists such as Elon Musk, Sam Altman (co-founder of OpenAI) and many others are also ideologues with a global political vision. The United States, like China, is calling for standards in this race to dominate the technological landscape, while stimulating innovation, a guarantee of power.
Three levels of risk
The European Union has much to gain by developing a framework for the use of AI that is associated with trust and respect for human rights and the rule of law. The AI Act (Artificial Intelligence Act) focuses on identifying applications considered to present a risk and therefore to require regulation.
Three levels of risk (unacceptable, high, moderate) call for different actions. Applications presenting unacceptable risk are prohibited, for example manipulative "subliminal techniques" that exploit specific vulnerable groups or are used by public authorities for social-scoring purposes, as in China. High-risk applications will be closely regulated. It is desirable that this be done with a high level of specificity in order to preserve useful applications. For example, facial recognition is a danger when used to monitor the population, but it proves a very useful tool for monitoring pathologies in patients. The AI Act therefore proposes safeguards: in particular, it imposes "obligations on certain AI systems due to the specific risks they present," notably algorithms that interact with humans.