
VISTA

Visual Worlds: Temporal Analysis, Animation and Authoring
VISTA - Research team in Computer Graphics & Vision at LIX, Ecole Polytechnique/CNRS, Institut Polytechnique de Paris
Objectives: Analyse & Generate Animated Visual Scenes and Interactive 3D Virtual Worlds
Scientific Approach: Generative AI, Reinforcement Learning, Expressive Modeling & Authoring, Real-Time Simulation, Geometric Constraints, Field-Based Representation.
Applications: Entertainment, Design, Natural Sciences.

Research Axes

Our team develops new methods for the creation of Visual and Virtual Worlds, with a specific focus on Storytelling for Animated Content. Our methods span from the fully automatic understanding of videos to the interactive creation of populated 3D virtual worlds. To this end, we propose methods improving (i) the Analysis of visual content, (ii) Shape and Motion representation, and (iii) the Creation of Visual Worlds.

We first propose fully Automatic AI-based Analysis of 2D Videos and 3D Animated Content that leverages deep-learning techniques, with a specific focus on time and multimodal input data. We are specifically developing methods for automatic human recognition, pose estimation, and behavior understanding. We also propose lightweight learning based on statistical approaches to extract spatial relations between shapes from a single input.

Second, we develop Interactive Models to efficiently represent Shape and Motion. We specialize in integrating spatio-temporal constraints into real-time, reactive virtual models for game-like applications, either using explicit procedural models or discovering them via Reinforcement Learning. We also propose alternative, volume-based representations for shape modeling relying on implicit surfaces. These models are suited for complex shape synthesis and advanced interactive behaviors (precise collision, deformation). Finally, we develop layered and coupled models of different spatial and temporal natures, adapted to efficiently simulate large, multi-scale natural scenes.
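As an illustration of the implicit-surface (signed distance field) representations mentioned above, here is a minimal, hypothetical sketch in Python; it is not the team's actual code, and all function names and parameters (e.g., the blending factor k) are illustrative assumptions. It shows two spherical primitives blended with a smooth union and a point-based collision query, the kind of operation these volume-based models make cheap.

```python
# Minimal illustrative sketch of an implicit (signed-distance) shape representation.
# Hypothetical example: names and parameters are assumptions, not VISTA's actual code.
import numpy as np

def sdf_sphere(p, center, radius):
    """Signed distance from point p to a sphere: negative inside, positive outside."""
    return np.linalg.norm(p - center) - radius

def smooth_union(d1, d2, k=0.3):
    """Polynomial smooth-minimum blend of two distance fields, giving organic transitions."""
    h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
    return d2 * (1.0 - h) + d1 * h - k * h * (1.0 - h)

def scene_sdf(p):
    """A toy scene made of two blended spheres."""
    d1 = sdf_sphere(p, np.array([0.0, 0.0, 0.0]), 1.0)
    d2 = sdf_sphere(p, np.array([1.2, 0.0, 0.0]), 0.8)
    return smooth_union(d1, d2)

# Collision query: a point is inside the blended shape when its signed distance is <= 0.
p = np.array([0.6, 0.1, 0.0])
print("inside" if scene_sdf(p) <= 0.0 else "outside", scene_sdf(p))
```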

Third, our models and analyses are aimed at the Creation and Authoring of Visual and Virtual Worlds. To this end, we propose Expressive Creation Methodologies relying on Sketching or Sculpting Gestures, as well as Sound and Multimodal Systems. These steps are supported by scene analysis, which allows us to provide suggestion systems and even to help with the narrative design of the scene. We further propose transfer methodologies between geometry, animation, and style, in complement to generative models, in order to create lively, populated worlds with sufficient variety, or to explore the impact of parameters on a simulated world.

  • 1. Analysis and Understanding of Visual Content
    Deep CNN, Human-centric video learning
    Automatic & multimodal understanding
    Lightweight learning, spatial representation
  • 2. Interactive Models for Shape and Motion
    Alternative representations (Field-based, Implicit surfaces, ...)
    Spatio-temporal constraints
    Visual simulation, Layered models
    Behavioral simulation, Reinforcement learning
  • 3. Creating and Authoring Visual Worlds.
    Expressive creation: Sketching or Sculpting gestures, Sound, Multimodal system
    A-priori/learned knowledge constraints
    Narrative design, suggestion system
    Generation and style transfer, Visual transformers
Keywords - Computer Graphics, Computer Vision with Deep Learning, Generative AI, Animated Content, Shape and Motion, Interactive Creation, Visual Simulation, AI for Visual Computing.
Applications - Movies, Video Games, Animation Cinema, Natural Science, Medical Imaging, Archeology, Art & Sciences, Design, Fashion, CAD.
Specialized Keywords
- Graphics: Sketch-based Modeling, Virtual Sculpting, Character Animation, Natural Phenomena, Real-Time, Implicit Surfaces, Hybrid and Procedural Models.
- Vision: Human-centric Video understanding, Cinematography analysis, Visual Transformers, NeRF, Interior Scenes.
- Learning: Multi-Modal Learning, Generative Models, GANs, Diffusion Models, Reinforcement Learning, Lightweight Learning.


Team Expertise

The specific aspect of our team methodology is to propose a global Visual Computing approach coupling Automatic Vision and Interactive Graphics methodologies. This allows us to tackle complex open scientific problems mixing the analysis of 2D content with the synthesis of 3D content. For instance, we develop generative approaches ranging from automatic learning from data (GANs, diffusion models, etc.) and reinforcement learning, to alternative lightweight and efficient models relying on a-priori knowledge and user-centric design.

We are researchers with mixed expertise and backgrounds in Computer Graphics and Computer Vision. We jointly develop AI-based approaches and efficient representations to improve 2D video analysis and 3D animated virtual world generation.

At the LIX level, our specialties include:
- Video Analysis and Understanding
- Human Representation and Virtual Character Animation
- Interactive Creation
- Interactive Simulation of Multi-Scale Natural Scenes.

VISTA Recent Events

2025/06/01
Event: Xi Wang, new Faculty member at VISTA
Xi Wang, a Generative AI expert, is a new Tenure-Track Assistant Professor at VISTA. Xi obtained his PhD at the University of Rennes in 2022 and has been a PostDoc on Computational Cinematography at VISTA since 2023. He is now joining us in a permanent Faculty position to develop his research on generative models, 3D vision, and computational cinematography.
We are thrilled to welcome Xi!
2025/05/22
Event: Vicky Kalogeiton, new VISTA team leader
Vicky Kalogeiton is now the new team leader of VISTA
2025/05/20
Event: Vicky Kalogeiton - CVPR 2027 Program Chair
We are proud to announce that Vicky Kalogeiton will be Program Chair of the CVPR 2027 conference, the top-tier conference in Computer Vision and among the world's largest in AI, attracting more than 15,000 paper submissions every year.
2025/05/18
Award: Best Paper Honorable Mention at Eurographics
Congratulations to Théo Cheynel for receiving a Best Paper Honorable Mention at the Eurographics conference for the work "ReConForM: Real-time Contact-aware Motion Retargeting for more Diverse Character Morphologies", developed in collaboration with Kinetix (Thomas Rossi and Baptiste Bello-Gurlet), Damien Rohmer and Marie-Paule Cani.
2025/05/17
Award: Gold Medal Eurographics awarded to Marie-Paule Cani
In recognition of her outstanding research contributions in the Eurographics domains, Marie-Paule Cani was awarded the Eurographics Gold Medal at Eurographics 2025.

VISTA Seminars

2025/07/03 (11am)
Latent Representations for Better Generative Image Modeling,
Spyros Gidaris (Valeo.ai)
This talk explores how latent representations shape modern generative models. While latent spaces (like those in VQ-VAE and VQ-GAN) are central to today’s generative architectures—from diffusion models to autoregressive approaches—their structure and properties are often overlooked. I will present three works that refine or leverage latent representations for better generative modeling. First, EQ-VAE addresses a key limitation in existing autoencoders used in latent-based generative models: their latent spaces lack equivariance to simple semantic-preserving transformations like rotation or scaling, making generation harder. We introduce a simple regularization method that enforces equivariance, reducing its complexity without degrading reconstruction quality. This improves multiple state-of-the-art models (DiT, SiT, MaskGIT) and speeds up training. Next, ReDi integrates pretrained semantic features into latent diffusion models. Instead of just generating low-level image latents, we jointly model them with high-level semantic features (e.g., from DINOv2). This unified approach boosts image quality and training efficiency while enabling "Representation Guidance", a simple way to steer generation using learned semantics. Finally, DINO-Foresight tackles video prediction. We predict future frames in the semantic feature space of pretrained vision foundation models (e.g., from DINOv2), avoiding pixel-level inefficiencies. This makes forecasting simpler, faster, and more robust, enabling flexible adaptation to downstream tasks. Together, these works highlight how better latent representations can simplify, accelerate, and improve generative modeling.
2025/07/03 (10am)
Expressive representations for digital art & computer-aided manufacturing
Emilie Yu (UCSB)
Digital representations of 3D objects allow people to create both digital artworks destined to be viewed through a screen, as well as physical manufactured objects through computer-aided design and manufacturing. Designing well-suited digital representations is thus central to let humans extend the range of what they can create through computer software and machines. In this talk, I will present four case studies, in which leveraging specific digital representations and associated algorithms allowed us to design software that supports complex authoring workflows: by decomposing animation authoring into 2D and 3D components, we support the insertion of animated doodles into captured footage; by introducing a new primitive in VR painting, we can achieve more fine-grained color editing; by parameterizing patterns for crochet granny square garments, we enable crocheters to re-use material across garments; and by devising new primitives to represent machine motion, we allow for fine-grained control over fabrication machines. Throughout the presentation, I will emphasize high-level design decisions and practical research methods that guided us in developing adequate digital representations.
2025/05/28 (2pm)
Seeing Beyond What You Have: Integrated Intelligence Through Multisensor Systems
Zongwei Wu (PostDoc, University of Wurzburg)
In this talk, I will present our recent work on multisensor perception systems. I will begin by discussing individual sensors, such as depth and event-based sensors. Then, I will move on to our efforts in developing a unified approach with a particular focus on emergent alignment and robustness to missing modalities. Finally, I will highlight the potential of such a system and outline future directions.
Bio: Zongwei Wu is a PostDoc Researcher and junior research group leader at the Computer Vision Lab, University of Wurzburg, Germany. He received his diplôme d'ingénieur from the University of Technology of Compiègne in 2019 and earned a Ph.D. from Vibot EMR CNRS 6000, University of Burgundy, France, in 2022. He was also a visiting scholar at CVL, ETH Zurich. His research focuses on multimodal models and multi-task reasoning for machine vision. He is a main organizer of the NTIRE workshop at CVPR 2024-2025 and was acknowledged as an outstanding Associate Editor for IEEE RA-L.
2025/04/03 (5pm)
Hummingbird: High Fidelity Image Generation via Multimodal Context Alignment
Minh-Quan Le (PhD, Stony Brook University)
While diffusion models are powerful in generating high-quality, diverse synthetic data for object-centric tasks, existing methods struggle with scene-aware tasks such as Visual Question Answering (VQA) and Human-Object Interaction (HOI) Reasoning, where it is critical to preserve scene attributes in generated images consistent with a multimodal context, i.e. a reference image with accompanying text guidance query. To address this, we introduce Hummingbird, the first diffusion-based image generator which, given a multimodal context, generates highly diverse images w.r.t. the reference image while ensuring high fidelity by accurately preserving scene attributes, such as object interactions and spatial relationships from the text guidance. Hummingbird employs a novel Multimodal Context Evaluator that simultaneously optimizes our formulated Global Semantic and Fine-grained Consistency Rewards to ensure generated images preserve the scene attributes of reference images in relation to the text guidance while maintaining diversity. As the first model to address the task of maintaining both diversity and fidelity given a multimodal context, we introduce a new benchmark formulation incorporating MME Perception and Bongard HOI datasets. Benchmark experiments show Hummingbird outperforms all existing methods by achieving superior fidelity while maintaining diversity, validating Hummingbird's potential as a robust multimodal context-aligned image generator in complex visual tasks.
2025/03/31 (11am)
Discussion on Generative AI
Alexei/Alyosha Efros (UC Berkeley)

Application domain

We have been developing our recent contributions in the following typical domains:
- Human recognition in Videos and 3D virtual Character Animation
- Cloth and Garment analysis and synthesis
- Natural environment simulation (terrain, volcano, flora and fauna)
- Medical Imaging Analysis and biological shape design
Our research is highly application-driven: we aim at providing scientific support to enhance creativity, with applications in entertainment (movies and games), design, as well as art in general. Our development of interactive visual representations can also find applications in general-public experiences to help understand time-related phenomena (e.g. terrain evolution, impact of climate change), or for expert audiences via serious games. Finally, we further provide dedicated analysis, interactive models, and visualization for other scientific disciplines such as medical imaging, biology, or archeology, where our models can help analysis or serve as a virtual test bench.
- Improve Creative and Entertainment Industries
video games/animation, Movies, VFX, creative arts, design
- Interactive Representation and Experience for the general Public or Experts
Museography, Archeology, Serious games
- Efficient Virtual Test Bench for Natural Sciences
Medical, Biology, Climatology, Natural Environment


We have ongoing (or recent) research collaborations with several companies in these application domains.

Research environment

We are located on the campus of Institut Polytechnique de Paris in the Alan Turing building [Contact].
We are co-located and work in close collaboration with the GeomeriX team at LIX regarding Geometry Analysis and Processing.
At the LIX level, we are part of the Modeling, Simulation and Learning Pole.
At the IP Paris level, we are part of GeoVISTA, which brings together the Graphics and Vision teams on the Plateau de Saclay.

Links

Publications
Software & Code
Job Offers
Funded Projects