06/15/2021 by Arnaud Guerin
This article is a translation of a piece originally published in French in La Tribune, available here
Artificial intelligence (AI), a considerable technological advance, carries the potential of a new industrial revolution: it can be one of the major levers of development and progress, radically transforming the daily lives of millions of people in all their aspects, from health and transport to education and agriculture. This is already partly a reality, explains Arnaud Guerin, co-founder and President of Preligens.
For countries that fail to fully embrace AI, the risk of falling behind is real. Rather than fearing the technology, we should work to control its uses and effects and put in place the conditions for its virtuous deployment, because, like any powerful tool, it also carries the possibility of misuse and harm. This is what Europe has understood, taking the lead with a vast project to develop global standards in this area. And for anyone attentive to these issues, finding in the Commission's draft regulation published on April 21, 2021 explicit attention to "the traceability of algorithms", "the robustness of computer systems" and "the understanding of AI by the user" is rather reassuring for the future.
As is often the case in history, it was in the military field that this new ethical challenge, posed to human thought and action by a technical advance, first emerged. And it is arguably from Defense and Intelligence actors that we can draw some of today's most concrete ways of preventing and limiting the dangers inherent in poorly controlled AI. Why? Because wars today are fought and won in large part with the help of algorithms designed to assist humans in the eminently strategic tasks of collecting, sorting and analyzing information from an exponentially growing number of sources. The technical capacity to automatically process huge volumes of data is the key to a real tactical advantage.
Who, and what, should AI be used for? What rules should govern its commercialization? And how can we ensure that it will not be misused? These questions are important and urgent enough that the actors of new Defense technologies must take their responsibilities here and now, even if that means anticipating the regulations of major international institutions. Our sector is closely following the European debates and awaits with interest the first fruits of the NATO discussions on this subject, which began yesterday. But we need a code of conduct and a framework for action now.
Over the past ten years, four concrete lines of action have emerged. The first is awareness and training. Employees of companies working on AI for Defense must fully understand the tools they help develop and their effects on the real world: their potential, their power and their limits.
The second is the use case. The choice of technology does not depend only on the nature and volume of the data to be processed. In the design and development phase, it is essential to take into account the user's problem and the questions they raise, because tying an algorithm's predictions to the precise objective it is meant to serve guarantees that its functionality cannot be diverted to an ancillary, potentially harmful purpose.
The third concrete action concerns commercial strategy. Because these technologies serve particularly sensitive security and protection objectives, strict sales rules are needed. National, European and multilateral policies have established precise restrictions to guarantee international security. Beyond the obvious respect of these rules, and in a spirit of constant reactivity and adaptation, a company must also maintain its own sources and evaluation tools so that it can impose its own exclusions where necessary.
Finally, because risk is often born of ignorance, those who develop a technology have a responsibility to pass on to those who use it the minimum knowledge required for informed use. The "black box" effect of AI must be limited as much as possible by explaining the paths it takes and the basis of its performance.
The actors of AI applied to Defense have the means to concretely advance ethics in this field, to guarantee national security and sovereignty, and to contribute to the advent of a safer world. War, and therefore peace, will increasingly be won on the field of automated systems. Beyond Defense, these principles could also inspire reflection on the countless applications of AI in the civilian world, because the challenge is common to all of us: to ensure that the digital world of tomorrow remains shaped by robust and trustworthy democracies and companies, and thus to protect ourselves against attempts at domination by autocratic powers hostile to our rights and freedoms.