Fluorescence-guided surgical intervention is an emerging technology in which a molecular contrast agent makes the target tissue glow (i.e., fluoresce). This helps a surgeon accurately identify anatomical features to resect, such as tumors, or to preserve, such as vital structures like nerves and blood vessels. The glow is very dim because of the small amount of contrast agent, while the operating room is brightly lit and the surgeon needs a clear view of the surgical field. There is therefore a need to present the dim and bright images together on a digital display such that information about anatomical features can be perceived and interpreted easily.
The goal of the project is to develop multiple approaches to fusing dim fluorescent and vivid bright-field images such that surgery-relevant anatomical features are easily perceived. Our approach is based on transforming image intensities in fluorescent surgical microscopy images and blending them with bright-field microscopy images in real time. The objective is to render fused images, for example on the surgical displays of Da Vinci robotic surgical systems, so that they can be easily interpreted by a surgeon.
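As a minimal sketch of what such an intensity transformation and blend might look like, the following Python example applies a gamma transform to a dim fluorescence frame and alpha-blends it as a pseudocolor overlay onto the bright-field frame. The function name, the gamma and alpha parameters, and the green pseudocolor are illustrative assumptions, not the specific transformations developed in this project.

```python
import numpy as np

def fuse_fluorescence(bright_field: np.ndarray,
                      fluorescence: np.ndarray,
                      gamma: float = 0.5,
                      alpha: float = 0.6) -> np.ndarray:
    """Blend a dim single-channel fluorescence frame into an RGB bright-field frame.

    bright_field: HxWx3 uint8 RGB bright-field image.
    fluorescence: HxW fluorescence intensity image.
    gamma:        exponent < 1 brightens the dim fluorescence signal.
    alpha:        maximum opacity of the fluorescence overlay.
    """
    # Normalize the fluorescence signal to [0, 1] and apply a gamma transform
    # so faint signal becomes perceptible without saturating bright regions.
    f = fluorescence.astype(np.float32)
    f = (f - f.min()) / (f.max() - f.min() + 1e-6)
    f = f ** gamma

    # Render the transformed signal as a green overlay (a common pseudocolor
    # choice for fluorescence; any colormap could be substituted here).
    overlay = np.zeros_like(bright_field, dtype=np.float32)
    overlay[..., 1] = 255.0 * f

    # Per-pixel alpha blending: opacity follows fluorescence intensity, so
    # regions with no signal show the unmodified bright-field image.
    w = (alpha * f)[..., None]
    fused = (1.0 - w) * bright_field.astype(np.float32) + w * overlay
    return fused.clip(0, 255).astype(np.uint8)
```

In a real-time setting this per-frame operation would run on each incoming pair of camera frames before display; the unsupervised and supervised transformations studied in the project would replace the fixed gamma curve used above.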
The challenges lie in designing unsupervised and supervised intensity transformations that make the dim fluorescence signal perceptible against the bright surgical field without obscuring bright-field anatomical detail, and that can be computed fast enough for real-time display.