Laser sight: NYU's real-life tricorder
A laser-driven device can read an object’s reflected light to decipher its substance.
Courtesy of the University of Chicago
It’s a staple of science fiction, made famous by the tricorders on “Star Trek”: a hand-held device that reveals detailed information about some unknown substance or object in front of you. Sometimes you’d even get a real-time picture of each molecule.
Laboratory devices can identify such unknown substances, but the specialized equipment is often bigger than a refrigerator and many times more expensive. Not exactly tricorder technology.
But David Grier at New York University believes that he’s closer than ever before – and his design uses parts available from a local electronics store.
The setup is simple: a laser, a microscope, a digital video camera, and a PC. Take the laser and fire it through the microscope “backward” – from behind the object, through the lens. The light that reaches the camera forms a pattern of concentric rings, like ripples in a pond. With a little computing power, Dr. Grier can read that ring pattern and create a real-time image that teases out the defining characteristics of the object.
With an ordinary microscope, you can see only a two-dimensional image. But the ring pattern made by the laser lets the user measure how far the object is from the lens. And because different materials refract light in different ways, the pattern also reveals exactly what the target is made of.
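The ring pattern is easy to reproduce in a toy simulation. The sketch below is a simplification, not Grier’s actual model, and the wavelength and particle height are assumed values; it treats the hologram as interference between the unscattered laser beam and a spherical wave from the particle, with the particle’s height above the focal plane setting the ring spacing:

```python
import numpy as np

# Toy model of the ring pattern (a sketch, not Grier's actual code):
# the camera records interference between the unscattered laser beam
# and the spherical wave scattered by the particle.  Wavelength and
# particle height below are assumed values.
wavelength = 0.532e-6            # green laser, metres
k = 2 * np.pi / wavelength       # wavenumber
z = 20e-6                        # particle height above the focal plane

# camera pixel grid, 50 micrometres on a side
x = np.linspace(-25e-6, 25e-6, 512)
xx, yy = np.meshgrid(x, x)
rho = np.hypot(xx, yy)

# the path difference between scattered and reference waves sets the
# ring spacing -- and it depends on z, which is how depth is recovered
path = np.sqrt(rho**2 + z**2) - z
hologram = 1.0 + 0.5 * np.cos(k * path)   # normalised intensity
```

Plotting `hologram` shows the bull’s-eye of rings the article describes; moving `z` changes how tightly they are spaced.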
The analysis works on liquids, goos, and dusts – anything translucent enough to let laser light pass through – as well as on solid objects. This “ripple” effect is just barely visible in ordinary light: it’s what creates the “fuzzing” at the edge of shadows. It’s also visible at sunset – brilliant red sunsets occur because particles in the air scatter away the shorter wavelengths and let the longer red ones through, and from that color one could estimate the average size of the particles. Grier’s technique allows for observing smaller samples in a more controlled way, and it stays accurate with remarkably small samples: down to sizes measured in micrometers, or millionths of a meter. For comparison, a coat of paint is typically 100 micrometers thick.
“The one place we did go more all-out and used professional equipment was the microscope,” Grier says. “The lenses are higher quality.” The rest of the equipment, he said, isn’t any different from what’s available off-the-shelf.
In the lab at NYU, Grier’s partner in research, Fook Chiong Cheong, shows the setup, which takes up three feet of a lab table.
“In the next generation of the device, we can make it smaller,” Mr. Cheong says. He points to a spare video camera lying next to the apparatus. “That one is color,” he says. “The old one is monochrome.”
A color camera enables the user to see exactly how varying wavelengths scatter, giving even more information about whatever it is on the slide. The new camera, he adds, is also cheaper.
Unlike other methods, this one doesn’t damage the object in question. It also removes the need to “tag” substances with radioactive markers or fluorescent compounds, a process that is expensive and time-consuming.
Grier said the next generation of his invention should be able to fit into a soda can. And by adding some fiber-optic cables – also available at the local Radio Shack – he can use lasers of multiple wavelengths, increasing the sensitivity of the device.
Why didn’t anyone think of this before? The basis for imaging this way is called Lorenz-Mie theory. It’s all based on a single equation, which has been well known for a century. But doing the calculations to reconstruct the position, velocity, size, and refractive index of the object is complicated at best. Grier notes that on older machines, the calculations can take a long time.
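In simplified form, the equation in question models the recorded hologram as interference between the illuminating beam and the wave scattered by the particle (the notation below follows the standard holographic-microscopy literature and is not quoted from anyone in this article):

```latex
I(\mathbf{r}) \propto \left| \hat{\mathbf{x}}\, e^{ikz}
  + \alpha\, \mathbf{f}_s\!\bigl(k(\mathbf{r} - \mathbf{r}_p)\bigr) \right|^2
```

Here $\mathbf{f}_s$ is the Lorenz-Mie scattering function, which depends on the particle’s size and refractive index, $\mathbf{r}_p$ is its position, and $\alpha$ scales the scattered field; fitting these to the measured rings yields the quantities the article describes.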
“Before, you’d have been old before the calculation was done,” he jokes. That all changed with the advent of more powerful PCs and digital imaging technology.
“People had been satisfied for years with ‘back of the envelope’ calculations,” he says. Those older attempts mostly produced snapshots, not precise images.
Developing the new software was the tough part. Grier says the problem is that there were too many moving parts. The laser produces a scatter pattern that changes with time, and you have to track the target’s position, refractive index, and size from the constantly changing image. That’s a total of five parameters – position alone accounts for three, one for each dimension.
At first, Grier’s team tried holding two or three of those numbers fixed, in the hopes that the others would then come more easily. That didn’t work. “More or less out of resignation we just let all of them vary,” he says. “That worked.”
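A toy version of that fit looks something like the sketch below. It is not the team’s software: it swaps the full Lorenz-Mie model for a simple concentric-ring stand-in, uses an exaggerated wavelength so the fit stays well-behaved, and every number in it is assumed. The point it illustrates is the one Grier makes: generate a pattern, then let the optimizer vary all the parameters at once.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the full Lorenz-Mie fit (not the team's software).
# Lengths are in micrometres; the wavelength is exaggerated so this
# toy fit stays well-behaved.  All values are assumed for illustration.
wavelength = 2.0                      # toy wavelength, micrometres
k = 2 * np.pi / wavelength

x = np.linspace(-25.0, 25.0, 128)     # camera pixel grid, micrometres
xx, yy = np.meshgrid(x, x)

def rings(params):
    """Concentric-ring intensity for a particle at (x0, y0), height z."""
    x0, y0, z, amp = params
    rho = np.hypot(xx - x0, yy - y0)
    return 1.0 + amp * np.cos(k * (np.sqrt(rho**2 + z**2) - z))

true_params = np.array([2.0, -1.0, 20.0, 0.5])
data = rings(true_params)             # synthetic "measured" hologram

# start the fit deliberately off, then let every parameter vary at once
guess = np.array([1.5, -0.5, 19.0, 0.4])
fit = least_squares(lambda p: (rings(p) - data).ravel(), guess)
```

With noiseless synthetic data and a reasonable starting guess, the optimizer pulls all the parameters toward their true values simultaneously; the real Lorenz-Mie version of this fit is what older PCs choked on.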
Joseph Katz, professor of mechanical engineering at Johns Hopkins University in Baltimore, says the big difference in Grier’s technique is the ability to study dynamic systems, instead of having to deal with the narrow depth of field an ordinary microscope offers. (It was one of Katz’s original papers that inspired Grier’s work.)
Because you can tell where small particles are moving, you can track how they diffuse through other substances. One recent experiment – designed by a student – involved eggs.
“The administrator looked at us a little funny when we said we needed to buy eggs,” Cheong says.
The team used organic eggs, commercial eggs, and even ostrich eggs (available at the local Whole Foods supermarket). The result? Certain kinds of particles diffuse less well through organic eggs. They haven’t figured out why that is yet.
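The diffusion measurement behind the egg experiment can be sketched in a few lines. This mimics the analysis rather than reproducing the team’s code, and all the numbers are assumed: given a particle’s frame-by-frame positions, the mean squared displacement between frames yields the diffusion coefficient, since in two dimensions MSD = 4·D·dt.

```python
import numpy as np

# Sketch of the diffusion analysis (not the team's code; numbers are
# assumed).  Given a particle's position frame by frame, the mean
# squared displacement gives the diffusion coefficient D: in two
# dimensions, MSD = 4 * D * dt for one frame of lag.
rng = np.random.default_rng(0)

D = 0.4            # true diffusion coefficient, micrometres^2/s (assumed)
dt = 0.1           # time between video frames, s
n = 100_000        # number of frames

# simulated Brownian track: independent Gaussian steps on each axis
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n, 2))
track = np.cumsum(steps, axis=0)

# average squared displacement between consecutive frames
msd = np.mean(np.sum(np.diff(track, axis=0) ** 2, axis=1))
D_est = msd / (4 * dt)   # recovered diffusion coefficient
```

Run on real trajectories from the imager, the same arithmetic would show particles diffusing more slowly through one egg white than another.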
Another application is dentistry. Cavities happen because bacteria get on your teeth and live in a film that sticks on the surface. If the film could be analyzed and disrupted, the bacteria would die off.
Cheong notes that because the imager can “see” very small particles, it can show exactly how nutrients affect the bacteria.
“It turns out the bacteria feed on both sugar and starch,” he says. “So it’s worse to eat cake than it is to eat candy.” The dream? Use results from this device to create a chewing gum that would eliminate the need for toothbrushes.
One of the reasons Grier started the work was frustration with conventional imaging. Much of the work in his lab deals with finding out how very small particles interact, as well as how to manipulate them. He wanted, he says, to “see what he was doing.”
A similar laser setup can be used to make “light traps,” which use tightly focused laser light – rather than an ordinary collimated beam – to hold individual particles in place. The technique is well known, but Grier’s imaging technology lets experimenters see the results as they happen.
With a setup not unlike a video game, Cheong shows how to move tiny glass beads into place, using just a mouse and looking at the image on a screen. So far, there haven’t been any “serious” applications. But for Cheong, that isn’t a problem.
“We try to encourage play here,” he says, adding that it is where good ideas sometimes come from.