After the second phase of the Agilis project was successfully validated a few months ago, two Inria project teams, WILLOW (Inria Paris Centre) and CAMIN (the Montpellier antenna of the Inria Centre at Université Côte d'Azur), decided to join forces in an attempt to help people with tetraplegia become more autonomous.
Recreating everyday objects to help people with tetraplegia grip them
This involves using tools developed by the WILLOW project team, including computer vision tools, to recreate objects from video footage captured with a conventional camera. From this footage, the aim is to recover not only the position and shape of objects, but also the real-time position and configuration of the person's hand. Coupled with a biomechanical model of the person's arm and hand, this information can then be used to control the electrostimulation system implanted in their arm, shaping the hand so that it can grip objects, opening and closing the fingers as efficiently as possible.
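The pipeline described above can be sketched as follows. This is a minimal illustration only: every function, the placeholder biomechanical model, and the joint-angle representation are hypothetical, not the project's actual code.

```python
import numpy as np

def estimate_object_pose(frame):
    """Hypothetical stand-in: recover an object's 3D position and shape from a video frame."""
    # In practice this step would use learned computer-vision models.
    return {"position": np.zeros(3), "shape": "bottle"}

def estimate_hand_pose(frame):
    """Hypothetical stand-in: recover the hand's joint configuration from the same frame."""
    return np.zeros(20)  # e.g. 20 joint angles, all at rest

def stimulation_command(hand_pose, target_grasp, biomech_model):
    """Map the gap between current and target hand configuration to stimulation intensities."""
    error = target_grasp - hand_pose
    return biomech_model @ error  # deliberately simplistic linear mapping

# One step of the closed loop, for a single frame:
biomech_model = np.eye(20) * 0.1      # placeholder for a real biomechanical model
frame = None                          # stands in for a camera frame
obj = estimate_object_pose(frame)
target_grasp = np.ones(20) * 0.5      # grasp configuration chosen for this object
command = stimulation_command(estimate_hand_pose(frame), target_grasp, biomech_model)
```

The point of the sketch is the data flow: vision output and a biomechanical model jointly determine the stimulation command, rather than either one alone.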
What is computer vision?
Computer vision is a branch of computer science with links to a range of different disciplines (including mathematics, cognitive science, computer graphics and machine learning), the aim of which is to understand and automate tasks which the human visual system is capable of performing.
The researcher behind this project is Etienne Moullet, recently taken on as a postdoctoral researcher by CAMIN and currently working with WILLOW. “The final application of my project will be Agilis, a project aimed at restoring a level of mobility to the upper limbs of people with tetraplegia using an implanted functional electrical stimulation system”, he explains.
The project will initially focus on typical, everyday tasks such as opening doors, picking up a bottle of water, and so on.
The aim is for patients to be able to move their hand towards objects placed in front of them, which may be surrounded by other, differently shaped objects, and to pick them up: “We need to develop a way of scanning and recreating objects in order to trigger the activation sequence that will enable objects to be picked up”, explains Etienne Moullet.
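One way to picture the “activation sequence” step Etienne Moullet describes is as a lookup from recognised object shape to grasp type to an ordered series of stimulation phases. This is a hedged sketch: the grasp names, phase names, and mappings are illustrative assumptions, not Agilis internals.

```python
# Map a recognised object shape to a grasp type, then to an ordered
# sequence of muscle-stimulation phases. All names are illustrative.
GRASP_FOR_SHAPE = {
    "bottle": "palmar",       # whole-hand wrap
    "key": "lateral",         # thumb pressed against the side of the index finger
    "door_handle": "palmar",
}

ACTIVATION_SEQUENCE = {
    "palmar": ["open_hand", "extend_fingers", "close_fingers", "hold"],
    "lateral": ["open_hand", "flex_fingers", "adduct_thumb", "hold"],
}

def activation_sequence(object_shape):
    """Return the ordered stimulation phases for the grasp suited to the object."""
    grasp = GRASP_FOR_SHAPE[object_shape]
    return ACTIVATION_SEQUENCE[grasp]

print(activation_sequence("bottle"))
# -> ['open_hand', 'extend_fingers', 'close_fingers', 'hold']
```

The vision system's job, in this picture, is to supply the object shape reliably enough for the right sequence to be triggered.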
Working with the human body
But therein lies the complexity of this project. When controlling a robot, it is often simply a case of finding the optimal trajectory for its actuators to follow; the operating variables of the human body, by contrast, are extremely hard to predict.
“When working with the human body, you can’t reason the way you would with a motorised system. It’s one thing being able to determine the optimum configuration of a hand for gripping an object, but it’s another to determine the sequence of stimulation signals that need to be sent to the muscle in order to recreate that configuration on the human hand. We aren't able to fully control the level of force produced by the electrical stimulation system within muscles. Residual functional capacities and the impact of stimulation also vary considerably from one patient to another. That’s a real issue.”
Computer vision itself is also only a starting point for such a project: “Current computer vision tools are at their best when the shape of the object is known in advance”, explains Etienne Moullet. “There's still a lot of work to be done, but it's a constantly evolving field, and research has shown that these constraints can be overcome.”