A NIST team is at work on what promises to be one of the world's most accurate methods for generating a specified number of photons, and has devised a method to precisely estimate the uncertainty when counting such numbers.
Tallying individual units of light may seem like an exotic capability with little real-world applicability. "In everyday life, we don't encounter the granularity of light directly," says Alan Migdall, Leader of NIST's Quantum Optics Group. "Analogously, if I shoot a garden hose at you, you won't have any experience of the individual water molecules hitting you. But when you start dealing with light at the single-particle level – which is inherently the most sensitive, fundamental measurement you can make – weird quantum stuff starts to pop up, and suddenly there is the potential for new applications that can generate the weird stuff on demand."
In fact, there are abundant uses for generating and measuring specific numbers of photons. For example, determining the amount of a highly dilute substance (such as a pollutant in a large volume of air) typically requires shining light through the gas and tracking how many photons are absorbed by molecules of the contaminant. Knowing the exact number of photons sent into and out of the system pins down the contaminant concentration far better than an approximation can.
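The logic of such an absorption measurement can be sketched with the Beer–Lambert law, which ties the ratio of transmitted to incident photons directly to the absorber's concentration. The cross-section and path length below are illustrative assumptions, not values from the article:

```python
import math

# Beer-Lambert law: photons_out / photons_in = exp(-sigma * n * L),
# where sigma is the absorption cross-section, n the number density of
# the absorber, and L the path length. Counting the photons going in
# and coming out therefore determines n directly.

def number_density(photons_in, photons_out, sigma_cm2, path_cm):
    """Infer absorber number density (cm^-3) from photon counts.
    All parameter values used here are illustrative assumptions."""
    return -math.log(photons_out / photons_in) / (sigma_cm2 * path_cm)

# Example: 1,000,000 photons in, 990,050 out, over a 100 cm path with a
# 1e-19 cm^2 cross-section -> roughly 1e15 molecules per cm^3.
print(number_density(1_000_000, 990_050, 1e-19, 100.0))
```

The tighter the uncertainty on the two photon counts, the tighter the bound on the inferred concentration, which is why an exact photon-number source and counter matter.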
The same is true for detecting faint light from astronomical objects, for super-accurate distance determinations, for developing materials to use in high-power lasers, for surface-to-orbit communications with satellites, for characterizing small variations in light sources, and for testing the limits of sensitivity in human vision. And, of course, there are numerous applications in quantum metrology, lithography, and imaging.
But producing exactly the desired number of photons, or even exactly knowing what number has been produced, is extremely difficult because the processes involved in generating photons are inherently random. The NIST team's method gets around that problem by exploiting a quantum-mechanical phenomenon called "spontaneous parametric down-conversion" (SPDC). Despite the forbidding name, its operation is straightforward: When a single photon enters a "nonlinear" optical crystal, it has a small chance of being converted into two "daughter" photons. Their combined energy and momentum equals the energy and momentum of the parent photon. The daughter photons exit the crystal at different specific angles depending on their properties.
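Energy conservation fixes the relationship among the three wavelengths: since a photon's energy is hc/λ, the condition E_parent = E_daughter1 + E_daughter2 becomes 1/λ_pump = 1/λ_signal + 1/λ_idler. A minimal sketch (the wavelengths used are common illustrative values, not from the article):

```python
# Energy conservation in SPDC: the parent (pump) photon splits into two
# daughters whose photon energies sum to the pump's. With E = hc/lambda,
# conservation reads:  1/lambda_pump = 1/lambda_signal + 1/lambda_idler

def idler_wavelength_nm(pump_nm, signal_nm):
    """Given pump and signal wavelengths, return the idler wavelength
    required by energy conservation."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# Illustrative example: a 405 nm pump photon splitting symmetrically
# yields two 810 nm daughters, each carrying half the pump energy.
print(idler_wavelength_nm(405.0, 810.0))  # -> 810.0
```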
The NIST researchers designed a system (see video above) that makes use of both photons. A pulsed laser fires periodic bursts at a nonlinear crystal located at the center of a cavity with a mirror at each end. When the SPDC process generates two daughter photons with the desired properties, one of them is trapped in the cavity and "stored," bouncing back and forth between the mirrors and re-encountering the crystal on each pass.
The other half of the pair flies off at a different angle and travels to a detector, which records its arrival while a counter tallies the total number of photons received. No matter how many laser pulses pass through the crystal before another photon pair with the right properties is created, the number recorded by the detector always equals the number of photons in the cavity. When the detector count reaches the desired number (N), the system's optics are switched and the trapped photons are released as a single N-photon bunch.
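The accumulate-and-release logic can be sketched as a toy simulation. Everything here is idealized: the pair-creation probability per pulse is an assumed parameter, and cavity loss and detector inefficiency are ignored:

```python
import random

def accumulate_photons(target_n, pair_prob=0.02, seed=0):
    """Toy model of the heralded accumulation scheme. Returns
    (pulses_fired, photons_released). Idealized: lossless cavity,
    perfect herald detector; pair_prob is an assumed parameter."""
    rng = random.Random(seed)
    cavity = 0   # photons stored between the mirrors
    tally = 0    # running count at the herald detector
    pulses = 0
    while tally < target_n:
        pulses += 1                   # one pump pulse hits the crystal
        if rng.random() < pair_prob:  # an SPDC pair is created
            cavity += 1               # one daughter is trapped in the cavity
            tally += 1                # its twin heralds it at the detector
    # The herald tally always matches the stored photon number, so the
    # released bunch contains exactly target_n photons.
    return pulses, cavity

pulses, released = accumulate_photons(5)
print(released)  # always exactly 5, however many pulses it took
```

The key property the article describes falls out of the model: however random the wait, the herald count and the stored count rise in lockstep, so the release is triggered only when exactly N photons are in the cavity.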
The team tested its design in a computer simulation and found that it could reduce the uncertainty in the emitted photon number by a factor of two compared with classical limits. They are now building a working prototype.
"There have been somewhat similar ideas floating around before," says Boris Glebov, first author on the recent paper in which the team reports its simulation results, "but the detectors and electronics required to construct such a device are only now starting to become available. For example, in order for the system to work, it has to make decisions very quickly about whether to let the stored photons out or wait for another pair. That requires an ultra-fast detector capable of resolving photon numbers. Sae Woo Nam's team in the Quantum Electronics and Photonics Division has devised such sensors, which we're in the process of testing.
"The pump laser would operate at a pulse rate on the order of 100 MHz – that is, a pulse every 10 ns. When we were thinking about the timing constraints, we gave ourselves the goal of using every third pump pulse, so a single cycle would take about 30 ns. In those 30 ns, we would need to detect photons, count them, and perform some decision logic. Since the operations are simple addition and comparison, we believe this is something that a small, purpose-built electronic circuit can handle."
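The timing budget Glebov describes, and the increment-and-compare logic that must fit inside it, can be sketched as follows (the numbers follow his description; the `decide` function is a hypothetical illustration of the logic, not the team's circuit):

```python
PUMP_RATE_HZ = 100e6                            # 100 MHz pump laser
PULSE_PERIOD_NS = 1e9 / PUMP_RATE_HZ            # -> 10 ns between pulses
PULSES_PER_CYCLE = 3                            # use every third pump pulse
CYCLE_NS = PULSES_PER_CYCLE * PULSE_PERIOD_NS   # -> 30 ns decision budget

def decide(tally, photons_this_cycle, target_n):
    """All that must happen within the ~30 ns budget: add the newly
    detected photons to the running tally and compare against the
    target. Returns (new_tally, release_now)."""
    tally += photons_this_cycle
    return tally, tally >= target_n

print(CYCLE_NS)         # 30.0
print(decide(4, 1, 5))  # (5, True) -> release the stored photons
```

Because the per-cycle work really is just an addition and a comparison, it is plausible that dedicated logic can keep up where general-purpose processing could not.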
Meanwhile, Zachary Levine of NIST's Sensor Science Division and colleagues in Gaithersburg, MD and Boulder, CO, have devised a method to minimize the uncertainty of photon counts made by an increasingly popular device called a transition-edge sensor (TES). TESs have been shown to resolve the number of photons in a bunch quite accurately at numbers up to about 30. Scientists want to extend that range as far as possible, but both the source and the detector exhibit quantum fluctuations ("noise") that have made it impossible to see the peaks in the data representing individual photons as the number of photons gets large. Prior work by Thomas Gerrits and others in NIST's Quantum Electronics and Photonics Division indicated that the data-analysis scheme could yield reduced noise up to around 1,000 photons.
Levine, Glebov, Gerrits, and colleagues found that by applying sophisticated statistical analysis to the TES output, they could estimate the detector noise well enough that measurements of up to 100 photons will have an uncertainty of no more than 1 photon. In a paper forthcoming in the Journal of the Optical Society of America, Part B, they also confirm Gerrits' hypothesis that the scheme can determine photon number, with less noise than a conventional detector, for photon numbers as high as 1,000.
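A schematic way to see why low detector noise lets individual photon numbers be resolved: if each pulse height clusters around an integer multiple of one photon's energy, and the spread of each cluster stays well below one photon-height, then nearest-integer assignment rarely misses. This toy model is a stand-in, not the team's actual statistical analysis, and the noise level is an assumed parameter:

```python
import random

def estimate_photon_number(pulse_height):
    """Assign a detector pulse height (in units of one photon's energy)
    to the nearest integer photon number."""
    return round(pulse_height)

# Simulate noisy TES-like readings: true number n plus Gaussian noise.
rng = random.Random(1)
sigma = 0.3          # assumed detector noise, in photon-height units
trials = 10_000
errors = 0
for _ in range(trials):
    n_true = rng.randrange(0, 101)             # up to 100 photons
    reading = n_true + rng.gauss(0.0, sigma)   # noisy pulse height
    if abs(estimate_photon_number(reading) - n_true) > 1:
        errors += 1
# With noise well below one photon-height, essentially every estimate
# lands within 1 photon of the true number.
print(errors, "of", trials, "estimates off by more than 1 photon")
```

In a real TES the peaks broaden as the photon number grows, which is what limits the usable range; the team's contribution is characterizing that noise well enough to push the range higher.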
"Generating a known number of photons, and precisely counting the number that survive an encounter with a test sample, will provide the ultimate accuracy in whatever property is being tested," Migdall says. "That can be a very important tool in any researcher's toolbox."