More efficient, streamlined, and useful AI: at the heart of CRI’s research
AI has continued to evolve over the decades. Initially based on expert systems built from hand-coded rules, it underwent its first revolution with machine learning, which requires large amounts of data and parallel computing architectures. The rise of deep learning, followed by large language models (LLMs) and neural networks that now count hundreds of billions of parameters, has profoundly changed the scale of AI.
This evolution highlights a central tension: the more powerful the models, the more memory, computing power, and energy they require.
However, these resources are neither infinite nor neutral. The energy cost of AI has become a major scientific, industrial, and environmental issue. It is precisely in this area that CRI’s research is focused.

Contrary to popular belief, innovating in AI does not always mean creating new models. Much of CRI’s work consists of optimizing existing models to make them lighter, faster, and less energy-hungry.
Advances in AI are feeding into a wide range of scientific fields.
A prime example is the work carried out in collaboration with CERN (European Organization for Nuclear Research) as part of high-energy irradiation experiments. These facilities test the resistance of materials and electronic components to radiation, a key issue for space, nuclear, and particle physics.
CRI researchers are using neural networks capable of learning a compact and relevant representation of complex data.
In concrete terms, this work has a direct impact: experiments that are more reliable, quicker to analyze, and better documented, with the models integrated into CERN’s operational tools.
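To make this idea more tangible, here is a minimal sketch of compact representation learning using a small autoencoder in PyTorch. The architecture, the sizes, and the random stand-in data are illustrative assumptions, not CRI’s actual models.

```python
# Minimal autoencoder sketch: learn a compact representation of high-dimensional
# measurements. Every size and hyperparameter below is an illustrative assumption.
import torch
import torch.nn as nn

class CompactEncoder(nn.Module):
    def __init__(self, n_features: int = 256, n_latent: int = 8):
        super().__init__()
        # The encoder compresses each measurement vector into a few latent values.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, n_latent),
        )
        # The decoder reconstructs the original vector from that compact code.
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, x):
        z = self.encoder(x)              # compact representation of the input
        return self.decoder(z), z

model = CompactEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy training loop on random stand-in data (real measurements would go here).
data = torch.randn(1024, 256)
for _ in range(10):
    reconstruction, _ = model(data)
    loss = loss_fn(reconstruction, data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Once trained, the low-dimensional code z can be stored, compared, or monitored far more cheaply than the raw measurements.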

Another major challenge in modern AI is choosing the right model. With a multitude of algorithms and parameters to choose from, this step is often costly, empirical, and reserved for experts.
At CRI, researchers are developing meta-learning approaches, i.e., methods capable of automatically recommending which model, and which parameter settings, are best suited to a given problem.
The result is considerable time savings for researchers and engineers, and AI that becomes accessible even to non-specialists.
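To give a flavour of the meta-learning idea, the toy sketch below describes each previously seen dataset with a few cheap statistics, remembers which model performed best on it, and recommends a model for a new dataset by nearest-neighbour lookup. Everything in it (datasets, statistics, model names) is a made-up assumption, not CRI’s actual recommender.

```python
# Toy meta-learning recommender: map dataset "meta-features" to the model that
# historically performed best, then recommend a model for a new dataset.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def meta_features(X: np.ndarray) -> np.ndarray:
    """A few cheap statistics describing a dataset (illustrative choices)."""
    return np.array([X.shape[0], X.shape[1], X.mean(), X.std()])

# Pretend knowledge base: past datasets and the model that won on each of them.
past_datasets = [np.random.rand(n, d) for n, d in [(100, 5), (5000, 50), (200, 300)]]
best_models = ["decision_tree", "gradient_boosting", "linear_svm"]

recommender = KNeighborsClassifier(n_neighbors=1)
recommender.fit(np.array([meta_features(X) for X in past_datasets]), best_models)

# Recommendation for a new, unseen dataset: no exhaustive trial-and-error needed.
new_X = np.random.rand(4000, 40)
print(recommender.predict([meta_features(new_X)])[0])
```

Real systems use richer meta-features and past performance measurements, but the principle is the same: learn from previous experiments which model to try first.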
Large AI models, particularly LLMs, rely on a key mechanism: attention. This is what allows the model to determine which information in a text is relevant for producing a response. But this mechanism becomes extremely costly as texts grow long, because every token has to be compared with every other one.
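For readers curious about the mechanism itself, here is a minimal NumPy sketch of standard scaled dot-product attention. It shows where the cost comes from: the score matrix has one entry per pair of tokens, so time and memory grow with the square of the text length. The sizes below are arbitrary.

```python
# Scaled dot-product attention: every token attends to every other token,
# so the score matrix has shape (n, n) and the cost is quadratic in n.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # (n, n) pairwise scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V, weights

n, d = 1024, 64                      # 1,024 tokens -> a 1,024 x 1,024 score matrix
Q, K, V = (np.random.randn(n, d) for _ in range(3))
output, W = attention(Q, K, V)
print(W.shape)                       # (1024, 1024): doubling n quadruples this matrix
```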
CRI researchers exploit a key property of these models: in practice, attention matrices are very sparse, meaning that most of their values contribute little or nothing to the final result. By exploiting this sparsity, they have developed sparse computation methods for attention that sharply reduce the time and memory needed to process very long texts.
This research has a concrete impact: faster models, capable of processing massive documents, that can be deployed on a wider variety of infrastructures.
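The sketch below illustrates the sparsity idea in its simplest form: keep only the k largest scores per token and ignore the rest, so the useful work scales with n × k instead of n × n. It is a toy illustration rather than CRI’s actual kernels, which would avoid materializing the dense score matrix in the first place.

```python
# Toy sparse attention: for each token, keep only its k most relevant partners.
import numpy as np

def sparse_attention(Q, K, V, k=32):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # dense here for clarity; a real sparse
                                         # kernel never builds this full matrix
    topk = np.argpartition(scores, -k, axis=-1)[:, -k:]   # kept positions per token
    out = np.empty((Q.shape[0], V.shape[1]))
    for i in range(Q.shape[0]):
        s = scores[i, topk[i]]           # only k scores survive per token
        w = np.exp(s - s.max())
        w /= w.sum()                     # softmax restricted to the kept scores
        out[i] = w @ V[topk[i]]          # weighted sum over just k value rows
    return out

n, d = 1024, 64
Q, K, V = (np.random.randn(n, d) for _ in range(3))
print(sparse_attention(Q, K, V, k=32).shape)   # (1024, 64), using 32 scores per token
```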
One of the most concrete results concerns reducing the energy consumption of computing infrastructures. By carefully analyzing how the target parallel architectures operate, CRI researchers have shown that energy efficiency can be significantly improved through appropriate scheduling of computations at every level, from individual instructions and operations to the tasks executed on accelerators.
By developing such scheduling strategies, they have managed to reduce the overall energy consumption of a computing cluster by more than 10% without slowing down scientific production. A decisive step towards more sustainable AI!
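As a highly simplified illustration of energy-aware scheduling (CRI’s schedulers operate at several levels, from instructions to accelerator tasks, and are far more sophisticated), the toy scheduler below assigns each task to the least energy-hungry device whose queue still meets a deadline. Devices, execution times, and energy figures are invented for the example.

```python
# Toy energy-aware scheduler: pick, for each task, the lowest-energy execution
# option that still fits within a global deadline on the chosen device.
from dataclasses import dataclass

@dataclass
class Option:
    device: str        # where the task could run (e.g. a CPU or an accelerator)
    time_s: float      # execution time on that device, in seconds
    energy_j: float    # energy consumed on that device, in joules

def schedule(tasks, deadline_s):
    load = {}                              # accumulated busy time per device
    plan, total_energy = {}, 0.0
    for i, options in enumerate(tasks):
        # Try options from cheapest to most expensive in energy.
        for opt in sorted(options, key=lambda o: o.energy_j):
            if load.get(opt.device, 0.0) + opt.time_s <= deadline_s:
                load[opt.device] = load.get(opt.device, 0.0) + opt.time_s
                plan[i] = opt.device
                total_energy += opt.energy_j
                break
        else:
            raise ValueError(f"task {i} cannot meet the deadline")
    return plan, total_energy

tasks = [
    [Option("gpu0", 1.0, 50.0), Option("cpu", 4.0, 20.0)],
    [Option("gpu0", 2.0, 80.0), Option("cpu", 3.0, 35.0)],
    [Option("gpu0", 0.5, 30.0), Option("cpu", 5.0, 25.0)],
]
plan, energy = schedule(tasks, deadline_s=8.0)
print(plan, energy)    # tasks mapped to devices, plus total energy in joules
```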
This research is being extended strategically through the Priority Research Program and Equipment (PEPR) on AI components, co-led by the CNRS (https://www.cnrs.fr/fr), the CEA, and Inria, which aims to accelerate the development of artificial intelligence in France. Funded by France 2030 and steered by the Agence nationale de la recherche (ANR), the PEPR CAMELIA (AI Components) project aims to design a complete hardware and software environment for running AI applications efficiently on hardware targets developed within the project, providing a sovereign alternative to the mostly foreign solutions currently available.
The CRI plays a key role in this PEPR by co-leading the work package dedicated to the design and development of the software components essential for exploiting the targeted architectures while guaranteeing performance, portability, and energy efficiency.
This is a scientific, industrial, and technological sovereignty issue.

This work was highlighted during the AI Workshop held in December 2025 at Mines Paris – PSL. Designed as an opportunity for internal exchange, the event allowed faculty, doctoral students, and engineers to present their projects, tools, and platforms through oral presentations and posters.
Beyond the diversity of topics, the workshop highlighted a common dynamic: building AI that is grounded in reality, capable of interacting with humans and integrating into complex systems.
Within the Center, AI is not only more powerful: it is also the subject of in-depth understanding, methodical optimization, and thoughtful integration into major scientific, industrial, and societal challenges. Through this research, the Computer Science Research Center (CRI) at Mines Paris – PSL affirms a clear vision: AI that computes “less, but better,” and whose impact extends beyond the laboratory, contributing both to research advances and to concrete, high-performance, and more frugal applications.
Understanding how cells work, accelerating the discovery of new treatments, and tailoring medicine to each patient are among the major contemporary challenges.