
Real-time Image Intensity Transformation for Surgical Displays


Fluorescence-guided surgical intervention is an emerging technology in which a molecular contrast agent makes the target tissue glow (i.e., fluoresce). This helps a surgeon accurately identify anatomical features to resect, such as tumors, or to preserve, such as vital tissues like nerves and blood vessels. The glow is very dim because of the low amount of contrast agent, while an operating room is very bright and the surgeon needs to see the surgical field clearly. There is a need to present the dim and bright images on a digital display such that information about anatomical features can be perceived and interpreted easily.


The goal of this project is to develop multiple approaches to fusing dim fluorescence and vivid bright-field images so that surgery-relevant anatomical features are easily perceived. Our approach is based on transforming image intensities in fluorescence surgical microscopy images and blending them with bright-field microscopy images in real time. The objective is to render fused images, for example, on the surgical displays of Da Vinci robotic surgical systems, so that they can be easily interpreted by a surgeon.
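As a minimal sketch of the intensity-transform-and-blend idea described above, the following Python/NumPy snippet gamma-boosts a dim fluorescence channel and alpha-blends it into a bright-field image as a pseudocolor overlay. The function name, the green pseudocolor choice, and the specific parameter values are illustrative assumptions, not the project's actual method.

```python
import numpy as np

def fuse(bright_field, fluorescence, gamma=0.4, alpha=0.6):
    """Fuse a bright-field RGB image with a dim fluorescence channel.

    bright_field: (H, W, 3) float array in [0, 1]
    fluorescence: (H, W) float array in [0, 1], typically dim
    gamma: exponent < 1 expands dim intensities toward the visible range
    alpha: maximum blend weight of the fluorescence overlay
    """
    # Unsupervised intensity transformation: gamma correction
    # stretches the dim fluorescence signal so it becomes perceptible.
    boosted = np.clip(fluorescence, 0.0, 1.0) ** gamma

    # Render the boosted signal as a green pseudocolor overlay
    # (an assumed choice for visual contrast against pink/red tissue).
    overlay = np.zeros_like(bright_field)
    overlay[..., 1] = boosted

    # Per-pixel alpha blend weighted by the fluorescence strength, so
    # regions without signal keep the original bright-field appearance.
    weight = alpha * boosted[..., None]
    return (1.0 - weight) * bright_field + weight * overlay
```

In this sketch, pixels with zero fluorescence pass the bright-field image through unchanged, while fluorescing regions acquire a green tint whose strength tracks the boosted signal.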

The challenges lie in designing unsupervised and supervised intensity transformations that

  1. incorporate a priori knowledge about (a) photometric properties of viewed human anatomical parts (e.g., luminance as perceived by surgeons) and (b) light radiometric properties (e.g., distribution of the radiation’s power in surgery rooms),
  2. minimize the level of effort for surgeons to provide feedback about an optimal fusion of dim and bright images while perceiving and interpreting anatomical features,
  3. utilize cutting-edge artificial intelligence (AI) models for optimal fusion based on a limited amount of training data, and
  4. accelerate the image fusion to achieve real-time video streaming to digital displays in surgery rooms.
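For the acceleration challenge in item 4, one common technique is to precompute the intensity transformation as a lookup table (LUT), so each 8-bit video frame costs only an indexed gather rather than a per-pixel power operation. This is a hedged sketch of that idea under assumed parameters (the gamma value and function name are illustrative), not the project's actual real-time pipeline.

```python
import numpy as np

# Precompute a 256-entry lookup table for the gamma transform once,
# so per-frame work reduces to a fast table lookup.
GAMMA = 0.4
LUT = ((np.arange(256) / 255.0) ** GAMMA * 255.0).astype(np.uint8)

def transform_frame(frame_u8):
    """Apply the precomputed intensity transformation to an 8-bit frame.

    frame_u8: uint8 array of any shape; returns a uint8 array of the
    same shape with the gamma curve applied elementwise via the LUT.
    """
    return LUT[frame_u8]
```

Because the table is built once, the per-frame cost is independent of the transform's complexity, which is what makes real-time streaming to a surgical display feasible even for more elaborate learned intensity mappings.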
Created May 28, 2021, Updated June 11, 2021