Education: Using AI to Personalise Learning Methods

Changed on 05/03/2025

Using AI to create learning and assessment tools that adapt to each student’s level and objectives: this is the aim of Jill-Jênn Vie, a researcher and member of the Soda project team at Inria. This expert in the field with a passion for education nevertheless believes that teachers remain indispensable.
Portrait of Jill-Jênn Vie at the JSI 2023
© Inria / Photo B. Fourrier

At what point in your career did you become interested in personalised educational methods using AI?

Jill-Jênn Vie: It's a long-standing vocation. My first career plan was to become a teacher in preparatory classes, and I passed the “agrégation” [competitive examination for high-level teaching] in mathematics. At the same time, however, I had started a PhD in computer science devoted precisely to “adaptive” tests, whose future content is continually adjusted in light of the student's previous answers. This field fascinated me and I have never moved away from it.

I even contributed as a developer to the “Pix” public project for certifying digital competencies, which has an open source, adaptive engine. As part of a state-owned startup with a staff of seven, we created a service that now has six million active users. Similarly, since 2023, I've been working with another government startup: Culture Pass. We're optimising the recommendation system to help 15-20 year-olds discover new cultural practices. 

In short, my research subject is inexhaustible and I'm passionate about it: I've got enough work for the next hundred years!

Jill-Jênn Vie: brief bio

  • 2014: “agrégation” in Mathematics, Computer Science option
  • 2016: PhD in Computer Science from Paris-Saclay University (France)
  • 2017-2019: postdoctoral research at the RIKEN AIP Laboratory (Tokyo)
  • Since 2019: researcher at Inria, first in the Scool project team, then Soda; also the OPALE and RED associate teams with Kyoto University
  • Since 2022: lecturer and ICPC coach at École polytechnique
  • Since 2022: jury member for the “agrégation” in Computer Science
  • Since 2023: member of the Conseil scientifique de l'Éducation nationale.

Online training tools such as MOOCs claim to adapt to the students’ level. How do they differ from the ones you are developing?

MOOCs are not really adaptive. It is the learners who adapt, consulting the content they want at their own pace. But the learning scenarios are linear and written in advance.

Conversely, AI-based adaptivity has no predefined sequence. 

The tool regularly tests the student and, according to their results, decides in real time on the content and questions that will follow. Learning and assessment are intertwined and interact continuously. 


Our algorithms are very similar to those of online recommendation services, which suggest books or films based on your personal interests.
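As an illustration only (this is not the Pix engine or the team's actual algorithms), the adaptive loop described above can be sketched with a Rasch (1PL) model: repeatedly pick the unanswered item whose difficulty is closest to the current ability estimate, then nudge the estimate according to the answer. All item names and parameter values here are made up.

```python
import math

def p_correct(theta, difficulty):
    # Rasch (1PL) model: probability of a correct answer
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def pick_item(theta, bank, asked):
    # Most informative 1PL item: difficulty closest to current ability
    return min((i for i in bank if i not in asked),
               key=lambda i: abs(bank[i] - theta))

def update_theta(theta, difficulty, correct, lr=0.5):
    # Gradient step on the log-likelihood of the observed answer
    return theta + lr * ((1.0 if correct else 0.0) - p_correct(theta, difficulty))

bank = {"q1": -1.0, "q2": 0.0, "q3": 1.0, "q4": 2.0}  # item -> difficulty
theta, asked = 0.0, set()
for answer in [True, True, False]:  # simulated student responses
    item = pick_item(theta, bank, asked)
    asked.add(item)
    theta = update_theta(theta, bank[item], answer)
print(round(theta, 2))
```

In a real system the item bank is far larger and the ability update uses a proper maximum-likelihood or Bayesian estimator, but the test/observe/update cycle is the same.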

To what extent can AI power this capacity to adapt?

The Holy Grail would be the ability to use AI to generate all training content. For example, a student might tell the tool that they wanted to learn the 2,136 Japanese logograms in everyday use (jōyō kanji) and the system would create made-to-measure content and exercises for them, according to their initial level, their objectives and the time they have set themselves to accomplish this task. 

Today, our algorithms draw on well-stocked libraries of pre-existing resources. Customisation is less advanced, but it is simpler to develop and evaluate.

What are the main difficulties in designing these pathways?

We need to strike a balance so that learning is neither too easy, which bores the student, nor too difficult, which discourages them. We are also struggling to gain access to “real” students for full-scale experiments, because there is an ethical issue involved in carrying out randomised controlled trials: are students who use our tools more likely to get good marks or pass their exams?

As with any optimisation problem, the learner's objective must also be clearly defined: for example, “cramming” for a specific competitive examination, or identifying gaps in a minimum number of questions in order to expand their knowledge.

Finally, knowledge is a latent, or hidden, variable. Even if a student passes a test, we cannot be sure of their level of proficiency in a subject.
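One standard way to handle this latent variable (shown here purely as an illustration, not as the team's method) is Bayesian Knowledge Tracing: maintain a probability that the skill is mastered, and update it after each observed answer, allowing for slips, lucky guesses, and learning from the attempt itself. The parameter values below are arbitrary.

```python
def bkt_update(p_mastery, correct, slip=0.1, guess=0.2, learn=0.3):
    # Posterior probability of mastery given the observed answer (Bayes' rule)
    if correct:
        num = p_mastery * (1 - slip)
        den = num + (1 - p_mastery) * guess
    else:
        num = p_mastery * slip
        den = num + (1 - p_mastery) * (1 - guess)
    posterior = num / den
    # The student may also learn something from the attempt itself
    return posterior + (1 - posterior) * learn

p = 0.3  # prior belief that the skill is mastered
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

Even after several correct answers, the mastery estimate never reaches 1: the slip and guess parameters encode exactly the uncertainty described above.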

How do you focus your research in order to improve these tools?

To design a learning tool, we first need to formalise the knowledge to be acquired and its prerequisites. The pioneers in this field created “knowledge graphs” in which the different concepts are represented in nodes linked by edges that symbolise prerequisite links, for example between addition and multiplication in mathematics. We had to develop such a graph for each new subject, and then define how the tool would assess the student's knowledge.
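A toy version of such a prerequisite graph (illustrative concept names, not an actual curriculum) can be encoded as a mapping from each concept to its prerequisites; a topological sort then yields a teaching order in which every prerequisite is presented before the concept that needs it:

```python
from graphlib import TopologicalSorter

# Toy prerequisite graph: each concept maps to the concepts it requires
prerequisites = {
    "counting": set(),
    "addition": {"counting"},
    "subtraction": {"counting"},
    "multiplication": {"addition"},
    "division": {"multiplication", "subtraction"},
}

# A valid teaching order presents prerequisites before the concepts built on them
order = list(TopologicalSorter(prerequisites).static_order())
print(order)
```

Building and maintaining such a graph by hand for every subject is precisely the laborious step the generic models below aim to avoid.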

We want to replace this laborious framework with generic models that can be applied to several subjects. At the NeurIPS 2024 workshop, we described approaches that use knowledge representation to determine which document to present to a learner in order to optimise their knowledge, without using an explicit knowledge graph.

We mainly design learner simulators, based on the results of real students in different exercises, and teacher simulators that identify the most effective strategies for personalising content.

With these AI-based tools of the future, what role will teachers play in years to come?

The COVID-19 period showed that they still have an essential role to play in training and guiding pupils, in motivating them, instilling class dynamics, reducing inequalities and so on. Personally, I see AI as a tool that will save them time and help them perfect their art.

With large language models (LLMs), we need to rethink knowledge assessment. If grades are the only thing that counts, students will use LLMs to boost them.


However, if their reasoning is evaluated and they are given suggestions for improvement, they will use LLMs to improve their understanding. That said, digital and IT education for all will be essential if we are to reduce inequalities in the use of these new tools.

Can your learning tools be rolled out in all subjects?

They are particularly suitable for maths, computer programming, MCQs in general, numerical skills, basic foreign languages, and so on. But not (yet) for essays and philosophy.

What do you gain from belonging to an institute like Inria, which you joined as a researcher in 2019?

My favourite teachers were either high-level teachers [holders of the “agrégation”] or Inria researchers! I particularly enjoy the Inria Science Days: you discover that everyone is working on different subjects in a climate of excellence and goodwill. There is also an entire human environment to help us develop our projects: as a researcher, it's hard to imagine anything better.

France has very few scientists in your field. How are you coping with this situation?

Being one of the few in your field can be both pleasant and depressing! I don't understand why most innovations are American or, more recently, Chinese. I get the impression that interdisciplinary subjects are not very popular in France. And students interested in these subjects are rare, so I try to look after them.

Fortunately, since 2023 I have been a member of the Conseil scientifique de l'éducation nationale (CSEN – the French Department of Education’s Scientific Council), which includes 30 researchers in various fields, all striving to improve student learning. With Séverine Erhel, we are co-hosting a working group on digital technology, AI and education, which gives me so many opportunities to talk to colleagues and find fresh sources of inspiration! 