The Paris Peace Forum, which is being held this week, is an opportunity to discuss immediate threats to global stability, but also to plan for those of tomorrow. Artificial intelligence (AI) will be one of the themes covered. Without going as far as the dystopian futures imagined by Hollywood, the list of potential threats is long: disinformation destabilizing democracies, disruption of financial markets, large-scale hacking, and so on.
This is not the first time humanity has faced a major technological leap. Lessons must be learned from history: from the industrial revolution, which lifted humanity out of poverty while generating the greatest ecological catastrophe of all time, or from nuclear power, at once a formidable source of energy and the deadliest weapon of the 20th century. To avoid such disaster scenarios, we must organize the governance of AI. Unfortunately, current national governance mechanisms are both rigid and difficult to apply. Regulators react to damage already observed ("ex post"), and by the time they regulate, the problem has already moved elsewhere. A change of method is needed.
First, governance frameworks must take a holistic approach that integrates all interconnected technologies: quantum computing, augmented and virtual reality, blockchain, and many others. They must also make it possible to anticipate risks ("ex ante") and to decide which ones society is willing to accept. Since consensus is needed at the societal level, a multi-stakeholder approach is essential, bringing together governments, civil society, and technical experts, but also academics and donors.
The essential role of governments
Currently, IT expertise and resources are largely concentrated within a few private-sector companies (notably the GAFAM). Since the start of the year, Apple has led the way with the acquisition of thirty-two companies in the AI sector. Note that, unlike open AI, illustrated by the open-source model of Mistral AI or that of Dall-E, AI systems operating as a "black box", such as Google Bard or ChatGPT, are more complex to understand. These are virtual machines to which you submit an input (a request, question, or task) and which return an output (a text, an image, an action, a result, etc.). Between the two, the algorithmic logic is not explicit.
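To make this input-output asymmetry concrete, here is a minimal sketch in Python of the two access patterns just described. The function name `query_hosted_model` and the toy weights dictionary are hypothetical illustrations, not any vendor's actual API.

```python
# Minimal sketch of the "black box" pattern described above.
# `query_hosted_model` is a hypothetical stand-in for a proprietary service:
# the user controls only the input and observes only the output.

def query_hosted_model(prompt: str) -> str:
    """Hypothetical proprietary endpoint: internals are not observable."""
    # In a real service this would be a network call; the weights,
    # architecture, and intermediate computations stay on the provider's side.
    return "<output produced by an opaque model>"

# By contrast, an open-source model ships its parameters, so third parties
# (auditors, regulators, researchers) can inspect what sits "between the two".
open_weights = {
    "layer_0.attention.w_q": [[0.12, -0.03], [0.08, 0.41]],  # toy values
}

answer = query_hosted_model("Summarize the AI Act in one sentence.")
print(answer)                     # the only artifact a black-box user sees
print(list(open_weights.keys()))  # open weights can be audited directly
```

The asymmetry matters for governance: an auditor can examine the second object parameter by parameter, but can only probe the first from the outside.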
It is imperative that all relevant stakeholders be involved in AI governance. Governments have a central role to play: they must ensure the consistency and effectiveness of standards. At present, however, the EU and the United States have very distinct regulatory frameworks, the former proposing to regulate through the AI Act, the latter adopting a largely non-interventionist approach. Auditing, compliance monitoring, and therefore technological and commercial exchanges between the two regional blocs risk being slowed down.