Color-Encoded Illumination for High-Speed Volumetric Scene Reconstruction

CVPR 2026 (Highlight)

David Novikov, Eilon Vaknin Laufer, Narek Tumanyan, Mark Sheinin

Weizmann Institute of Science

[Teaser figure. Left: conventional video capture — a scene under constant illumination imaged by a traditional camera at 60 FPS. Right: our color-encoded imaging and volumetric scene reconstruction — the scene under color-encoded illumination is captured by multiple cameras at 60 FPS, and our volumetric reconstruction and rendering method recovers the volumetric dynamics and novel-view renderings at 600 FPS.]
TL;DR: We encode high-speed scene dynamics with sequential color strobing, enabling 600 FPS dynamic volumetric reconstruction from multiple low-frame-rate 60 FPS cameras. Left: conventional camera capture. Right: our color-encoded capture and volumetric scene reconstruction.

Selected results

In the 3D camera-motion plots below (middle column), the training camera positions are marked in red and the novel-view camera position in green.

The recovered results (right column) show only novel viewpoints. The camera-position plot is synchronized with the GIF motion.

[Figures: camera layout visualization; strobed images from two views; space-movement and time-movement novel-view renderings]

Explosion scene (simulated)

Experimental details: 10 colored strobes, 50 cameras, and nine cubes moving along different trajectories.

[Figures: imaged object; strobed image; space-movement and time-movement novel-view renderings]

Spinning disk experiment

Experimental details: 10 colored strobes, eight cameras, and one sticker with "CVPR" text.

[Figures: imaged object; strobed image; space-movement and time-movement novel-view renderings]

Flying chess pieces (a.k.a. 4D chess)

Experimental details: 10 colored strobes, eight cameras, and three chess pieces falling downward.

Method

Our pipeline has two main stages. First, we perform color-encoded capture, strobing a sequence of colored lights onto the scene at high speed. This light temporally encodes the scene's dynamics into the captured images. We synchronize the cameras to the light strobes so that every camera captures the same information. Second, we perform reconstruction, using a dynamic Gaussian Splatting pipeline to decode the captured images and recover the scene's high-speed volumetric dynamics.

Part 1: Capture

Step 1: Flashing Colored Lights

To make our system highly adaptable, we use a standard, widely available RGB LED light and flash it at high speed to create arbitrary colored strobes. The trick is to use pulse-width modulation (PWM) to create intermediate colors between the three base channels. Since the scene motion is slow relative to the strobe rate, objects are approximately static during each strobe. We observe no colored motion-blur artifacts in the captured images, and each strobe is perceived as a solid color, as can be seen in the captured images in the results sections.

Step 1: Flashing colored lights
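As a rough illustration of how a strobe palette and its PWM duty cycles might be generated (a hypothetical host-side sketch, not the paper's actual firmware — the specific hue spacing and 8-bit PWM resolution are our assumptions):

```python
import numpy as np

def strobe_colors(n_strobes: int) -> np.ndarray:
    """Pick n_strobes colors evenly spaced around the hue circle (full
    saturation and value), expressed as R/G/B duty cycles in [0, 1]."""
    hues = np.linspace(0.0, 1.0, n_strobes, endpoint=False)
    colors = np.empty((n_strobes, 3))
    for i, h in enumerate(hues):
        # Standard HSV -> RGB conversion at S = V = 1.
        k = (np.array([5.0, 3.0, 1.0]) + h * 6.0) % 6.0
        colors[i] = 1.0 - np.maximum(np.minimum(np.minimum(k, 4.0 - k), 1.0), 0.0)
    return colors

def duty_to_pwm(duty: float, resolution: int = 255) -> int:
    """Convert a [0, 1] duty cycle to an 8-bit PWM compare value, as one
    would pass to a microcontroller's analog-write call."""
    return int(round(float(np.clip(duty, 0.0, 1.0)) * resolution))
```

For example, `strobe_colors(10)` yields ten hues whose intermediate colors (orange, cyan, magenta, etc.) are realized purely by mixing the R, G, and B channels' duty cycles.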

Step 2: Synchronizing Light to Cameras

Here we show our setup with eight cameras and our light. To strobe the light at high speed, we use an Arduino Due microcontroller to drive the light's PWM and synchronize it to the cameras' exposures. We show our circuit schematic and a photo of the physical setup. We use a MOBL-300x150-RGBW light and IDS UI-3240CP cameras.

Step 2: Synchronizing light to cameras
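The timing arithmetic behind the synchronization can be sketched as follows (a minimal illustration, assuming N strobes fired back-to-back within each exposure; the real system runs this on the Arduino Due):

```python
def strobe_schedule(fps: float = 60.0, n_strobes: int = 10):
    """Return (start_time_seconds, strobe_index) pairs for one camera
    exposure. With 60 FPS exposures split into 10 strobes, each strobe
    lasts 1/600 s -- the effective temporal resolution of the system."""
    exposure = 1.0 / fps
    strobe_len = exposure / n_strobes
    return [(i * strobe_len, i) for i in range(n_strobes)]
```

Because every camera is triggered at the start of the same exposure window, each captured frame integrates the same N strobes, which is what lets all cameras encode the same 600 FPS dynamics.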

Part 2: Training and reconstruction

Step 3: Modeling the motion

To model the high-speed volumetric scene, we first perform a geometric initialization step, extracting the initial point cloud and camera poses with COLMAP. We also extract the colors of the N strobes from the captured images, which gives us a mapping from strobe time to strobe color.
We then fit our single-color-channel dynamic Gaussian model (adapted from Gaussian Flow) to the captured color-encoded images. During the forward pass, for each camera, we render the motion at N timestamps, producing N images of the rendered motion. To optimize over this motion, we take the rendered high-speed images for each camera, multiply each by the strobe color corresponding to its timestamp, and sum the results into a single image, over which we compute a loss against that camera's captured image.
In other words, the forward pass simulates the color-encoding process of our capture, and the result is compared to the actual captured image. To improve performance, we also add a total-variation loss on the inverse-depth images to encourage smooth scene geometry.

Step 3: Forward pass
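The forward pass and loss described above can be sketched in NumPy (an illustrative simplification, not the actual training code: the function name, the uniform 1/N exposure weighting, the L1 photometric loss, and the TV weight are our assumptions):

```python
import numpy as np

def encoded_render_loss(rendered, strobe_colors, captured, inv_depth,
                        tv_weight=0.01):
    """Sketch of one camera's training loss.

    rendered:      (N, H, W) single-channel renders, one per strobe time
    strobe_colors: (N, 3) RGB color of each strobe, extracted from captures
    captured:      (H, W, 3) the real color-encoded image for this camera
    inv_depth:     (H, W) rendered inverse depth, for the smoothness term
    """
    # Simulate the color encoding: tint each timestamp's render by its
    # strobe color, then integrate over the exposure window.
    simulated = np.einsum('nhw,nc->hwc', rendered, strobe_colors) / len(rendered)
    photo_loss = np.abs(simulated - captured).mean()
    # Total-variation loss on inverse depth encourages smooth geometry.
    tv = np.abs(np.diff(inv_depth, axis=0)).mean() + \
         np.abs(np.diff(inv_depth, axis=1)).mean()
    return photo_loss + tv_weight * tv
```

The key point is that only the composited image `simulated` is ever compared to data; the N per-timestamp renders are supervised indirectly through the known strobe colors.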

Special cases and additional results

[Figures: imaged object; strobed image; space-movement and time-movement novel-view renderings]

Non-white reflectance

Experimental details: 10 colored strobes, eight cameras, and one yellow sticker with "CVPR" text. We encourage you to compare this result to the experiment with the white paper and "CVPR" text.

[Figures: strobed image; time-movement novel-view rendering]

Single camera view (simulated)

Experimental details: 10 colored strobes, one camera, and a simulated sticker with "CVPR" text.

[Figures: imaged object; strobed images; space-movement (frame 4) and time-movement novel-view renderings]

Non-negligible background (simulated)

Experimental details: 10 colored strobes, 20 cameras, and a monkey head moving in an arc with a non-negligible background.

[Figures: imaged object; strobed image; space-movement and time-movement novel-view renderings]

Nerf dart

Experimental details: 10 colored strobes, eight cameras, and a Nerf dart with a rapid change in motion.

[Figures: imaged object; strobed image; space-movement and time-movement novel-view renderings]

Particles

Experimental details: 10 colored strobes, eight cameras, and four hexagonal particles.

BibTeX

@article{novikov2026colorencoded,
  title   = {Color-Encoded Illumination for High-Speed Volumetric Scene Reconstruction},
  author  = {Novikov, David and Vaknin Laufer, Eilon and Tumanyan, Narek and Sheinin, Mark},
  journal = {arXiv preprint arXiv:XXXX.XXXXX},
  year    = {2026}
}