Éloi Zablocki

Senior research scientist at valeo.ai


I am a senior research scientist at valeo.ai, working on:

  • autonomous driving, scene understanding and forecasting, world-models, motion planning
  • vision and language, explainability, foundation models

Before joining Valeo, I completed a Ph.D. at Sorbonne Université in 2019 on multi-modal machine learning with language and vision, supervised by Patrick Gallinari, Laure Soulier, and Benjamin Piwowarski.

Prior to this, I earned an Engineering Degree from École Polytechnique (X2012) and an MSc from ENS Paris-Saclay (“Master MVA”).

News

Oct 2025 New preprint: RAP. Lightweight 3D rasterized views can augment imitation learning data for end-to-end driving.
Oct 2025 I was named an outstanding reviewer at ICCV 2025.
Sep 2025 I am attending CoRL 2025 in Seoul, Korea, where I will present VaViM & VaVAM and PPT at the workshops.
Sep 2025 VaViM & VaVAM and PPT are accepted at CoRL workshops. VaViM is a 1.2B-parameter video generative model trained on 1,800+ hours of raw YouTube videos; its derived video-action model, VaVAM, achieves state-of-the-art results on the NeuroNCAP driving benchmark. PPT shows that pseudo-labeled trajectories can be used to pre-train trajectory prediction models, improving performance, efficiency, and generalization.
Jun 2025 GaussRender is accepted at ICCV 2025. A plug-and-play 3D-to-2D reprojection loss using Gaussian splatting enhances 3D semantic occupancy learning from multiple cameras.
Feb 2025 New release from the team: VaViM & VaVAM. VaViM is a 1.2B-parameter video generative model trained on 1,800+ hours of raw YouTube videos. Its derived video-action model, VaVAM, achieves state-of-the-art results on the NeuroNCAP driving benchmark. We open-source the code, model weights, training recipes, and scaling laws.
Jan 2025 Annealed Winner-Takes-All for Motion Forecasting is accepted at ICRA 2025. We show that an annealed loss improves the training stability and performance of state-of-the-art trajectory prediction models.
Jan 2025 LLM-wrapper is accepted at ICLR 2025. It shows that LLMs can learn to adapt black-box VLMs to new tasks and domains by wrapping and reasoning over the vision models’ outputs.
Dec 2024 New preprint: PPT. Pseudo-labeled trajectories can be used to pre-train trajectory prediction models, improving performance, efficiency, and generalization.
Nov 2024 New preprint 🎁: GIFT. A framework for generating global, interpretable textual explanations of vision classifiers, combining counterfactual visual explanations with VLMs and LLMs.
Sep 2024 I am attending ECCV 2024 in Milan, where I will present UniTraj, LLM-wrapper, ReGentS, and Valeo4Cast.
Aug 2024 Three papers accepted at ECCV workshops: LLM-wrapper, ReGentS, and Valeo4Cast.

Current Students

Alumni

Scientific Service

Reviewer:
Conferences: CVPR 2021–2025, ECCV/ICCV 2021–2025, ICLR 2025, ICML 2025, WACV 2024, IROS 2022, AAAI 2021
Journals: IJCV 2022 & 2025, TPAMI 2021–2022, T-ITS 2023
Workshops: ROAD++ (@ICCV 2023)

PhD Jury Member:
Florent Bartoccioni (Invited Member, 2023)