You've once again been awarded the prestigious ERC Advanced Grant.
How will you use the €2.5 million in funding over five years for your new project, NERPHYS?
Beyond the actual funding amount, it's important to emphasize the benefits of an ERC grant. With the allocated funds, you can hire and invest whenever required. You can carry out your work freely, with no deliverables, which means you can take risks. It's the most flexible funding instrument that exists for researchers in Europe. As far as I know, there aren't many similar programs elsewhere.
With the €2.5 million, I plan to hire four or five PhD students and three or four postdocs, as well as engineers, for the duration of the project. We will also purchase a very powerful compute node so that we can easily use deep learning methods. This node will be accessible to other teams, but we will have priority access.
What image synthesis techniques will you combine?
The first, which I've been studying for years, starts with photos of an object or scene captured from different angles, with enough overlap that the same points appear in several images. These photos are used to build a 3D point cloud, which is then “densified” to represent the scene in a form that can be displayed quickly and with high visual quality. But this method doesn't allow for physics-based calculations.
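To make this first technique concrete, here is a minimal sketch in Python (with NumPy) of its most basic building block: recovering one 3D point from the same feature seen in two photos. The camera matrices and coordinates are purely hypothetical illustrative values; this is not the project's actual reconstruction pipeline, which assembles full point clouds from many images.

    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation of one 3D point from two pixel observations."""
        u1, v1 = uv1
        u2, v2 = uv2
        A = np.array([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)          # the null space of A gives the point
        X = Vt[-1]
        return X[:3] / X[3]                  # homogeneous -> Euclidean coordinates

    # Two toy pinhole cameras observing the same 3D point (illustrative values only).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at the origin
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # camera shifted along x
    X_true = np.array([0.2, 0.1, 5.0, 1.0])
    uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
    uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
    print(triangulate(P1, P2, uv1, uv2))     # ~ [0.2, 0.1, 5.0]

Repeating this kind of computation for millions of matched pixels across many photos is what produces the point cloud mentioned above.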
The second approach is used in video games and special effects, but also in architecture. It relies on physics to simulate lighting, and requires tedious modeling, typically done by artists, to define geometry, materials and lighting. This is followed by expensive computations to simulate light transport. However, these methods are difficult to adapt to creating 3D scenes from photos.
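For readers who want the formal statement, the light transport mentioned here is classically described by the rendering equation, a standard formulation in computer graphics rather than anything specific to NERPHYS: the light leaving a surface point is the light it emits plus all incoming light reflected toward the viewer,

    L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i

where f_r is the material's reflectance (BRDF) and n the surface normal. Estimating the hemisphere integral, typically by Monte Carlo sampling, is what makes these calculations so expensive.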
The third, and more recent, technique is based on generative AI, such as ChatGPT. A scene is described in text in a system like Midjourney or DALL-E, and a computer-generated image is produced rapidly and automatically. The problem is that this type of solution is neither very precise nor realistic, and it has no notion of the underlying physics. As a result, you have no control over the properties of what is displayed.
So your idea is to combine these different approaches effectively?
Exactly. We want to use the power of deep learning models in generative AI to create and display extremely realistic 3D scenes, but importantly we will add the control capabilities of physics-based algorithms.
It's very ambitious. And for some aspects, I still have only vague notions of how we'll go about solving the problems! But ERCs are designed to explore such ambitious research directions and aim for breakthrough innovations.
My previous ERC Advanced Grant, FUNGRAPH, did not yet exploit generative AI, nor the link with physics-based methods. In 2023, it enabled the development of an algorithm for creating and displaying 3D scenes that is faster and more accurate than the best tools from Google and NVIDIA. This algorithm, 3D Gaussian Splatting, has seen widespread adoption in the academic world and among major industrial players in the digital sector.
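As a rough intuition only, and emphatically not the published 3D Gaussian Splatting implementation, the representation can be sketched as a set of 3D Gaussians, each with a position, extent, color and opacity, projected to the image and alpha-blended from front to back; the values below are made-up toy numbers.

    import numpy as np

    # Hypothetical toy scene: three Gaussians (center, isotropic extent, RGB color, opacity).
    gaussians = [
        dict(mu=np.array([0.0, 0.0, 4.0]), sigma=0.3, color=np.array([1.0, 0.2, 0.2]), alpha=0.8),
        dict(mu=np.array([0.1, 0.0, 6.0]), sigma=0.5, color=np.array([0.2, 1.0, 0.2]), alpha=0.6),
        dict(mu=np.array([-0.1, 0.1, 9.0]), sigma=0.7, color=np.array([0.2, 0.2, 1.0]), alpha=0.9),
    ]

    def render_pixel(pixel_xy, focal=1.0):
        """Front-to-back alpha compositing of the Gaussians seen through one pixel."""
        color, transmittance = np.zeros(3), 1.0
        for g in sorted(gaussians, key=lambda g: g["mu"][2]):   # nearest Gaussian first
            proj = focal * g["mu"][:2] / g["mu"][2]             # center projected to the image plane
            footprint = focal * g["sigma"] / g["mu"][2]         # apparent size shrinks with depth
            weight = np.exp(-0.5 * np.sum((pixel_xy - proj) ** 2) / footprint ** 2)
            a = g["alpha"] * weight                             # this Gaussian's contribution here
            color += transmittance * a * g["color"]
            transmittance *= 1.0 - a
        return color

    print(render_pixel(np.array([0.0, 0.0])))   # blended color at the image center

The real method additionally optimizes the Gaussians' parameters directly from the input photos, uses anisotropic covariances, and renders them with a fast GPU rasterizer, which is where the speed comes from.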
What applications could result from your research?
It's hard to give you a precise answer. When a 3D scene creation tool performs well, it can have applications far beyond what you might have imagined. Less than a year after its publication, our 3D Gaussian Splatting approach is already being used in e-commerce, video games and cinema special effects, as well as in real estate, the inspection of engineering structures and damaged industrial sites...
With the solutions that NERPHYS will propose, we can imagine much more advanced industrial uses in the same fields, and in entirely new ones, thanks to the physics-based control offered by our new algorithms.
Do you have all the necessary skills within your team?
We're well equipped in terms of photo-based rendering techniques, but we'll need to draw on some outside expertise. For deep learning and physics-based synthesis, we'll be relying on other universities and research institutes in France, Europe and beyond, such as our long-time collaborator Professor Frédo Durand at MIT (USA).
To learn more
- Lecture by George Drettakis, "Computer Graphics and Machine Learning" (video), Académie des Sciences, 29/10/2019.
- A revolutionary technique for creating 3D scenes (paid access), Le Monde, 8/5/2024.
- Adrien Bousseau streamlines 3D design, Inria, 17/1/2024.
- 3D Gaussian Splatting for Real-Time Radiance Field Rendering (video in English), SIGGRAPH 2023, Inria GraphDeco, 2/5/2023.
Contact
George Drettakis
Director of research - Team GraphDeco
Centre Inria d'Université Côte d'Azur - 2004, route des Lucioles
06560 Valbonne Sophia Antipolis