Dartmouth researchers have created a computational framework for simulating an imaging technique called Optical Heterodyne Detection (OHD), which is integral to a wide variety of applications that call for precise measurement of object or flow speeds.
Motion-sensing cameras commonly use the Doppler effect to "see" motion. This phenomenon, typically exemplified by the changing pitch of a moving ambulance siren, also applies to light waves. Moving objects and particles, from a passing car to blood cells flowing through capillaries to aerosols suspended in air, slightly shift the frequency of laser light shone on them, in proportion to their speed.
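To get a feel for the scale involved, here is a back-of-the-envelope sketch (not from the paper) using the standard round-trip Doppler formula f_d = 2v/λ for laser light reflected off a moving target; the 1550 nm wavelength and 20 m/s speed are assumed illustrative values:

```python
# Back-of-the-envelope lidar Doppler shift (illustrative values, not from the paper).
C = 3.0e8             # speed of light, m/s
WAVELENGTH = 1.55e-6  # an assumed lidar wavelength (1550 nm)

def doppler_shift_hz(radial_speed_mps: float) -> float:
    """Round-trip Doppler shift f_d = 2v / lambda for light bounced off a mover."""
    return 2.0 * radial_speed_mps / WAVELENGTH

carrier_hz = C / WAVELENGTH        # ~1.9e14 Hz optical carrier
shift_hz = doppler_shift_hz(20.0)  # a car moving at 20 m/s (~45 mph)
print(f"carrier: {carrier_hz:.2e} Hz, Doppler shift: {shift_hz:.2e} Hz")
# The ~26 MHz shift is roughly one part in ten million of the carrier,
# far too small to resolve by measuring the optical frequency directly.
```

The shift is a minuscule fraction of the light's own frequency, which is why it cannot simply be read off and why the heterodyne trick described next is needed.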
OHD mixes the scattered light with a reference light wave to produce a stronger signal, making it possible to measure these tiny frequency shifts and calculate the velocity of the moving object or fluid. Applications that use OHD include lidar for autonomous vehicles, biomedical imaging, and atmospheric sensing.
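The mixing step can be sketched in a few lines of NumPy. The snippet below is a toy model, not the paper's simulator, with frequencies scaled far below optical values so the waves can be sampled directly; a detector measuring the intensity of the combined waves produces a beat at the difference frequency, boosted by the strong reference wave:

```python
import numpy as np

# Toy heterodyne mixing (a sketch, not the paper's simulator). Frequencies
# are scaled hugely below optical values so the waves can be sampled directly.
fs = 1.0e6                       # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of samples
f_ref = 2.00e5                   # reference ("local oscillator") frequency
f_sig = 2.03e5                   # scattered light, Doppler-shifted by 3 kHz

reference = np.cos(2 * np.pi * f_ref * t)         # strong reference wave
scattered = 1e-3 * np.cos(2 * np.pi * f_sig * t)  # weak Doppler-shifted return

# A photodetector responds to intensity |reference + scattered|^2; the cross
# term oscillates at the beat frequency f_sig - f_ref and is amplified by
# the reference amplitude, which makes the weak return measurable.
intensity = (reference + scattered) ** 2

spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
low = freqs < 1e4                # the beat sits well below the carriers
beat_hz = freqs[low][np.argmax(spectrum[low])]
print(f"recovered beat frequency: {beat_hz:.0f} Hz")  # ~3000 Hz
```

Dividing the recovered beat frequency into the Doppler formula above then yields the target's speed.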
Computer simulations offer researchers a way to build and test digital prototypes for scientific applications, instead of creating costly hardware prototypes, says computer science PhD student Juhyeon Kim, Guarini, who led the project.
"Existing OHD simulators are highly specialized; they are tailored to specific domains with simplified assumptions, so they are limited in their applicability," says Kim.
Working with his PhD advisor Adithya Pediredla, assistant professor of computer science; Wojciech Jarosz, associate professor of computer science; and research collaborators from Aurora Innovation, Kim created and validated a novel framework that enables the same rendering engines used for movie or game graphics to also simulate advanced optical detection systems like OHD.
Their framework leverages Monte Carlo path tracing, a family of algorithms used extensively in computer graphics and animated movies to simulate how light travels and interacts with objects in a scene. By tracing many random paths that photons take from a light source to the camera (or a viewer's eye) and averaging the illumination they deliver to each pixel of the frame, the method recreates a realistic image.
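The core of that idea fits in a few lines. The sketch below is invented for illustration, with a made-up "scene" function standing in for real ray tracing; it shows how a path tracer averages many noisy per-path samples to estimate a pixel's value, with the error shrinking roughly as 1/sqrt(N). In the paper's setting, each path sample would presumably also account for the Doppler shifts accumulated along the way:

```python
import random

# Minimal sketch of the Monte Carlo averaging at the heart of path tracing.
# A real renderer would trace rays against scene geometry and accumulate
# physically based radiance; here a noisy stand-in function plays that role.

def trace_path(x: float, y: float, rng: random.Random) -> float:
    """Stand-in for tracing one random light path through pixel (x, y):
    a smooth hypothetical 'scene' value plus per-path noise."""
    signal = 0.5 + 0.5 * ((x + y) % 1.0)
    return signal + rng.gauss(0.0, 0.3)

def render_pixel(x: float, y: float, samples: int, seed: int = 0) -> float:
    """Average many random path samples; the estimate converges to the
    true pixel value with error shrinking roughly as 1/sqrt(samples)."""
    rng = random.Random(seed)
    return sum(trace_path(x, y, rng) for _ in range(samples)) / samples

for n in (4, 64, 1024):
    print(f"{n:5d} samples -> {render_pixel(0.3, 0.7, n):.4f}")
```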
These computational imaging developments have fueled the fast-paced evolution of cameras in modern smartphones, which today combine not just a traditional digital camera but a variety of sensors and algorithms to improve everything from the images we capture to how securely our phones authenticate our identity when we unlock them.
While currently of interest to specific application domains like self-driving cars, this work could help fuel the next generation of developments in cameras and sensors across a wide range of disciplines.
"Our simulator serves as a “digital twin” for OHD experiments and can be scaled to realistic and complex scenarios while preserving physical accuracy," says Kim. The research, which was presented at SIGGRAPH 2025 in July, was selected as a Best Paper Honorable Mention at the conference.
Pediredla, an expert in building and simulating computational imaging systems, led the development of the digital twin framework. Jarosz's expertise in rendering ensured that the simulations faithfully captured complex light transport effects.
In the paper, the researchers compare their simulations with real-world captured data for three different OHD applications—lidar that measures distance and velocity of objects, devices that measure blood flow, and wind Doppler lidar.
Direct validation of the simulated data against physical measurements was made possible by custom-built lidar hardware, along with deep system expertise, provided by Aurora Innovation, whose self-driving trucks rely on such lidar sensors to transport freight.
The researchers demonstrated that their rendered data matches real hardware data much more accurately than simpler models. In fact, Pediredla says, the simulator was able to faithfully reproduce artifacts caused by multiple light bounces—effects that current sensing modalities still struggle with.
"We can use the digital twin to design next-generation cameras that eliminate these artifacts entirely on a computer, without requiring any physical imaging hardware," says Pediredla. "With the help of simulated data, we can also train data-hungry AI models for these imaging systems, which are not yet widely available."
The authors have made the simulator and all data publicly available.