A foundation of algorithms to control robots
How did ViSP (Visual Servoing Platform) come into being? ‘When I joined Inria in 1994, we had a problem with what was known as disposable code, developed by researchers and PhD students to control robots using vision. This code was neither shared nor built upon,' recalls Fabien Spindler, one of the three main authors of this platform, which has been developed and distributed as free software for the past twenty years at the Inria Centre at the University of Rennes. ‘That is the first reason why we created the platform. It is a modular library that brings together a solid base of algorithms for prototyping and implementing applications based on image processing, tracking and visual servoing techniques.' These technologies for tracking 2D or 3D objects and for visual servoing use vision sensors to control the movements of a robotic system.
What are these applications for? Detecting and locating objects from images and 2D or 3D models, for example. Specialising in interactive robotics and sensor-based robotics, the Rainbow project team is working on solutions for locating a satellite in orbit (with Trasys), or a refuelling boom approaching an aircraft fuselage (with Airbus). It has also developed vision-based solutions to facilitate the robotic assembly of a door on a car chassis (with ABB Barcelona), or to spot and locate diseases and problems to be treated as a priority in cultivated fields (with Dilepix).
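To give a concrete idea of what locating an object from a 3D model can look like in code, here is a minimal sketch built on ViSP's generic model-based tracker. The image file, the CAD model name, the camera parameters and the initial pose are hypothetical placeholders, not taken from the projects mentioned above.

#include <iostream>
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/io/vpImageIo.h>
#include <visp3/mbt/vpMbGenericTracker.h>

int main()
{
  // Grey-level image in which the object is to be located (hypothetical frame)
  vpImage<unsigned char> I;
  vpImageIo::read(I, "frame-0001.png");

  // Intrinsic camera parameters (placeholder values: px, py, u0, v0)
  vpCameraParameters cam(840.0, 840.0, 320.0, 240.0);

  // Model-based tracker relying on the edges of a 3D CAD model of the object
  vpMbGenericTracker tracker;
  tracker.setTrackerType(vpMbGenericTracker::EDGE_TRACKER);
  tracker.setCameraParameters(cam);
  tracker.loadModel("object.cao");                                   // hypothetical 3D model
  tracker.initFromPose(I, vpHomogeneousMatrix(0, 0, 0.5, 0, 0, 0));  // rough initial guess

  // Track the object in the current frame and recover its pose in the camera frame
  vpHomogeneousMatrix cMo;
  tracker.track(I);
  tracker.getPose(cMo);
  std::cout << "Estimated object pose:\n" << cMo << std::endl;

  return 0;
}

In a real application, the same two calls, track() and getPose(), would simply be repeated on every new camera frame to keep the pose up to date.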
A collective dynamic in robotics for the benefit of all
Logically, the 50 or so members of Rainbow are the first to adopt ViSP: they use it to build prototype applications that work with the growing number of devices on the Robstar robotics platform: manipulator arms, indoor drones, electric wheelchairs, industrial cameras, etc. And over the last twenty years, more than fifty doctoral students have used ViSP code to complete their theses at IRISA, the research laboratory shared by nine major institutions, including Inria.
But now many other laboratories and companies are also adopting this open source platform. Their aim is to improve the way they control robotic systems.
‘Sharing this library, written in C++, a compiled language that runs close to the hardware and is therefore very efficient, as open source is a guarantee of quality. Indeed, the large community of users helps us to identify needs and areas for improvement, so that we can regularly refine the existing functions.'
Efficient interaction between robots and objects
Today, many innovations are made possible by ViSP, most often to drive robots that manipulate rigid objects such as boxes or car parts. Mandela Ouafo Fonkoua, a doctoral student in the Rainbow project team, is interested in robotic systems for manipulating flexible or deformable objects, such as foams, animals or organs. He has developed a technique that makes it possible to deform a foam using a robotic manipulator.
His research, carried out alongside two Rainbow researchers, has just been presented in one of the leading scientific journals on robotics, IEEE Robotics and Automation Letters.
‘Among the possible applications of this type of algorithm, we are already thinking of systems that would allow robots to fillet fish,' he explains. Another researcher in the team has been working on tools that could deform suspended cables.
AI will soon make robots even more interactive
In terms of software, ViSP complements other Inria software, such as SOFA (for physical modelling of rigid and deformable objects) and Pinocchio (for predictive control). In total, ViSP comprises 18 software modules. In particular, they help researchers to use the images captured to model, simulate and precisely locate robots and objects. To make these robots and objects interact even more effectively, the researchers include control laws in their equations: ‘These tell the robot how it should change its trajectory to arrive at the expected position, taking into account data already measured (the effect of a given action on a similar object, etc.) and other parameters analysed in real time (the speed of the object, its physical characteristics, etc.),’ explains Mandela Ouafo Fonkoua.
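As an illustration only, and not the controller developed by the team, here is a minimal sketch of what such a control law looks like when written with ViSP's visual-servoing classes; the feature coordinates and the gain are hypothetical values chosen for the example.

#include <iostream>
#include <visp3/core/vpColVector.h>
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>

int main()
{
  // Current and desired visual features: one image point (x, y) with its depth Z.
  // The numerical values below are placeholders for the sake of the example.
  vpFeaturePoint p, pd;
  p.buildFrom(0.10, 0.05, 0.8);   // where the point is currently observed
  pd.buildFrom(0.0, 0.0, 0.5);    // where we want it to end up

  // Visual servoing task with the camera mounted on the robot's end effector
  vpServo task;
  task.setServo(vpServo::EYEINHANDCAMERA);
  task.setInteractionMatrixType(vpServo::CURRENT);
  task.setLambda(0.5);            // control gain
  task.addFeature(p, pd);         // regulate the error between current and desired features

  // The control law converts the visual error into a velocity to send to the robot
  vpColVector v = task.computeControlLaw();
  std::cout << "Camera velocity to apply (vx, vy, vz, wx, wy, wz):\n" << v.t() << std::endl;

  return 0;
}

In a running system, the current feature p would be updated from tracking at every iteration and the computed velocity sent to the robot until the error between the current and desired positions vanishes.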
Today, a number of challenges remain. ‘One of the team's engineers is currently working on a new version of our 3D object location and tracking tool, which is still rather frustrating at the moment,' Fabien Spindler points out. ‘This new version should allow us to handle objects that are much more complex than at present, such as aircraft or simply everyday objects.'
Another challenge is to harness artificial intelligence, which is increasingly present in the world of robotics. ‘In the same way that artificial intelligence algorithms already exist to detect faces, we want to use AI to improve our ability to detect and locate objects in 3D from images, so that a robotic system can manipulate them more effectively,' explains Fabien Spindler. This suggests that we may soon be seeing robots that are even more interactive and fully in tune with their environment.