Artificial intelligence

Trusted AI: towards reliable, explainable and responsible artificial intelligence

Published on 23/01/2025
In a global context marked by a growing quest for security, transparency and accountability, the concept of trusted AI is emerging as a priority. But how can we ensure that artificial intelligence remains safe, reliable and ethical? What tools, frameworks and standards should be developed to prevent risks without hindering the development of these technologies? Find out in this issue how work in the digital sciences is helping to develop safer AI.


The rapid development of AI is opening up new prospects while bringing to the fore risks that must not be ignored: manipulation of information, cybersecurity threats, possible abuses, and more. These impacts, which are difficult to anticipate fully, highlight the urgent need to establish a structuring framework and to take concrete action in favour of trusted AI.

Against this backdrop, a growing number of international initiatives, such as the Artificial Intelligence Action Summit to be held on 10 and 11 February, are enabling players in the field to discuss open technical solutions accessible to all and common international standards recognised by all. The aim is to avoid a fragmentation of approaches, norms and standards, and to encourage convergence at all levels. Inria is fully committed to this dynamic, particularly through the research carried out by its scientists.

Find out more about our research in this field

Societal biases and discrimination

Algorithms to make AI fairer

How can we reduce the societal biases inherent in machine learning algorithms? Researchers from the Magnet project team (Inria Centre at the University of Lille) have come up with a solution: an open-source software package that corrects inequities by adjusting the opportunities offered to each individual, in other words the ‘resource allocations’.
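To make this idea of correcting inequities through resource allocation more concrete, here is a minimal, hypothetical sketch (it is not the Magnet team's actual package, and every name in it is illustrative): model scores are post-processed so that each demographic group is granted the same selection rate, a simple form of demographic-parity correction.

```python
import numpy as np

def equalise_selection_rates(scores, groups, budget):
    """Post-process scores so every group gets the same selection rate.

    scores : 1-D array of model scores (higher = stronger candidate)
    groups : 1-D array of group labels, aligned with scores
    budget : total number of individuals who receive the resource
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    n = len(scores)
    selected = np.zeros(n, dtype=bool)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        # Equal selection rate: each group's quota is proportional
        # to its share of the population.
        quota = int(round(budget * len(idx) / n))
        # Within the group, still favour the highest-scoring individuals.
        top = idx[np.argsort(scores[idx])[::-1][:quota]]
        selected[top] = True
    return selected

# Toy example: scores are biased in favour of group A.
rng = np.random.default_rng(seed=0)
groups = np.array(["A"] * 50 + ["B"] * 50)
scores = rng.normal(loc=np.where(groups == "A", 0.6, 0.4), scale=0.1)
chosen = equalise_selection_rates(scores, groups, budget=20)
for g in ("A", "B"):
    print(f"group {g}: selection rate = {chosen[groups == g].mean():.2f}")
```

The design choice illustrated here is that fairness is enforced at allocation time, on the decisions themselves, rather than by retraining the underlying model.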


Towards an ethics of natural language processing: Karën Fort's perspective

As fundamental building blocks of artificial intelligence, natural language processing (NLP) tools raise both expectations and concerns. By studying how they reproduce and amplify stereotypical biases, Karën Fort questions the place of ethics in their design, development and use.

Confiance.ai unveils its tool-based methodology, a ‘digital commons’ serving the development of industrial and responsible AI

This ‘digital commons’ is now open to the scientific and industrial communities. It consists of an end-to-end methodology built on numerous open source technological components.

Trusted AI

Building trustworthy AI in Europe

Since the European Commission's high-level expert group published its Ethics Guidelines for Trustworthy AI in 2019, a great deal of documentation on the subject has been produced and shared by all the players in this field.

TAILOR: a European network for reliable AI

Supported by the European Horizon 2020 programme, TAILOR brings together a network of 55 scientific and industrial partners from 21 countries. Their objective: to develop new solutions in the field of trusted artificial intelligence by promoting a hybrid approach that combines symbolic reasoning, optimisation and machine learning.
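As a loose, hypothetical illustration of what such a hybrid approach can look like in code (a toy sketch, not TAILOR's actual methodology, with all names invented for the example): a learned model scores candidates, symbolic rules prune those that violate hard domain constraints, and an optimisation step selects the best admissible candidate.

```python
# Toy sketch of a hybrid pipeline: a learned scorer, symbolic rules,
# and an optimisation step over candidates. Purely illustrative.

def learned_score(candidate):
    # Stand-in for a trained model (e.g. a neural network's output).
    return 0.8 * candidate["relevance"] + 0.2 * candidate["novelty"]

def satisfies_rules(candidate):
    # Symbolic layer: hard domain constraints expressed as explicit logic.
    return candidate["age"] >= 18 and candidate["certified"]

def select_best(candidates):
    # Optimisation step: maximise the learned score over the
    # admissible set defined by the symbolic rules.
    admissible = [c for c in candidates if satisfies_rules(c)]
    return max(admissible, key=learned_score, default=None)

candidates = [
    {"relevance": 0.90, "novelty": 0.5, "age": 17, "certified": True},
    {"relevance": 0.70, "novelty": 0.8, "age": 30, "certified": True},
    {"relevance": 0.95, "novelty": 0.1, "age": 40, "certified": False},
]
print(select_best(candidates))  # only the second candidate is admissible
```

The point of such a split is that the learned component can be retrained freely while the symbolic constraints remain auditable and enforced by construction.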

FAIRPLAY

How can we prevent discrimination while protecting user data?

Created as part of a research partnership between Criteo and Inria, the FAIRPLAY project team has been given the mission of studying the impact of AI on the design of transparent and fair marketplaces.