Éloi Zablocki

Senior research scientist at valeo.ai


I am a senior research scientist at valeo.ai, working on:

  • autonomous driving, scene understanding and forecasting, world models, motion planning
  • vision and language, explainability, foundation models

Before joining Valeo, I completed a Ph.D. at Sorbonne Université in 2019 on multi-modal machine learning with language and vision, supervised by Patrick Gallinari, Laure Soulier, and Benjamin Piwowarski.

Prior to this, I earned an Engineering Degree from École Polytechnique (X2012) and an MSc from ENS Paris-Saclay (“Master MVA”).

News

Feb 2026 NAF is accepted at CVPR 2026. Image-guided neighborhood attention provides a lightweight and fast feature upsampler that generalizes zero-shot across vision foundation models while surpassing specialized architectures.
Feb 2026 MAD is accepted at CVPR 2026. Decoupling motion learning from appearance synthesis enables efficient adaptation of general video diffusion models into controllable, state-of-the-art driving world models with minimal supervision.
Feb 2026 DrivoR is accepted at CVPR 2026. Compressing multi-camera ViT features into a few register tokens enables a simple pure-transformer planner to achieve state-of-the-art end-to-end driving in both open- and closed-loop settings.
Feb 2026 Loick Chambon defends his PhD entitled “Efficient Representations for Autonomous Driving”. Jury: Alexandre Alahi, Vincent Lepetit, Fatma Güney, Catherine Achard, Matthieu Cord, and myself. Congrats Loick!
Feb 2026 GIFT is accepted at TMLR 2026, with a Featured Certification. A framework for generating global, interpretable textual explanations of vision classifiers, combining counterfactual visual explanations with VLMs and LLMs.
Jan 2026 PPT is accepted at ICRA 2026. Pseudo-labeled trajectories can be used to pre-train trajectory prediction models, yielding improved performance, efficiency, and generalization.
Jan 2026 RAP is accepted at ICLR 2026. It shows that lightweight 3D rasterized views can augment imitation learning data for end-to-end driving.
Oct 2025 I am an outstanding reviewer at ICCV 2025.
Sep 2025 I am participating in CoRL 2025 in Seoul, Korea. I will present VaViM, VaVAM, and PPT in workshops.
Sep 2025 VaViM, VaVAM, and PPT are accepted at CoRL 2025 workshops. VaViM is a 1.2B-parameter video generative model trained on 1,800+ hours of raw YouTube videos; its derived video-action model, VaVAM, achieves state-of-the-art results on the NeuroNCAP driving benchmark. PPT shows that pseudo-labeled trajectories can be used to pre-train trajectory prediction models, improving performance, efficiency, and generalization.
Jun 2025 GaussRender is accepted at ICCV 2025. A plug-and-play 3D-to-2D reprojection loss using Gaussian splatting enhances 3D semantic occupancy learning from multiple cameras.
Feb 2025 New release from the team: VaViM & VaVAM. VaViM is a 1.2B-parameter video generative model trained on 1,800+ hours of raw YouTube videos. Its derived video-action model, VaVAM, achieves state-of-the-art results on the NeuroNCAP driving benchmark. We open-source the code, model weights, training recipes, and scaling laws.

Current Students

Alumni

Scientific Service

Reviewer:
Conferences: CVPR 2021–2026, ECCV/ICCV 2021–2025, ACL 2026, ICLR 2025, ICML 2025, WACV 2024, IROS 2022, AAAI 2021
Journals: IJCV 2022 & 2025, TPAMI 2021–2022, T-ITS 2023
Workshops: ROAD++ (@ICCV 2023)

PhD Jury Member:
Loick Chambon (Advisor, 2026)
Florent Bartoccioni (Invited Member, 2023)