Scientists from the University of Glasgow have developed a new technique that uses artificial intelligence (AI) to take temporal information from photons and create 3D images.
The process works similarly to lidar: a simple, inexpensive single-point detector records the time photons produced by a pulse of laser light take to bounce off objects and reach the sensor. The further away an object is, the longer each reflected photon takes to arrive.
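The time-of-flight principle described here can be sketched in a few lines: distance follows from the round-trip time and the speed of light, halved because the light travels out and back. The function name and the example timing are illustrative, not part of the team's system.

```python
# Speed of light in a vacuum, metres per second.
C = 299_792_458.0

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to a reflecting object given a photon's round-trip time.

    Divide by 2 because the measured time covers the outbound trip
    to the object plus the return trip to the detector.
    """
    return C * t_seconds / 2.0

# A photon returning after roughly 20 nanoseconds bounced off an
# object about 3 metres away.
d = distance_from_round_trip(20e-9)
```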
This method differs from traditional photography, which measures the colour and intensity of light across a multitude of digital sensor pixels and builds up a 3D image by combining views from multiple cameras at different angles.
Dr. Alex Turpin from the University of Glasgow’s School of Computing Science, who led the university’s research team, said: “Cameras in our cell-phones form an image by using millions of pixels. Creating images with a single pixel alone is impossible if we only consider spatial information, as a single-point detector has none.
“However, such a detector can still provide valuable information about time. What we’ve managed to do is find a new way to turn one-dimensional data – a simple measurement of time – into a moving image which represents the three dimensions of space in any given scene.”
Working with researchers from Italy and the Netherlands, the team collected the timings of each photon reflected in the scene – what the researchers call the temporal data – in a simple graph.
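The "simple graph" of temporal data amounts to a histogram of photon counts against arrival time. A minimal sketch of such a graph, using simulated photon timings (the two-surface scene, bin count, and timescale are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated photon round-trip times, in seconds: two reflecting
# surfaces in the scene produce two clusters of arrival times.
near_surface = rng.normal(loc=10e-9, scale=0.5e-9, size=6_000)
far_surface = rng.normal(loc=25e-9, scale=0.5e-9, size=4_000)
arrival_times = np.concatenate([near_surface, far_surface])

# The temporal data as a simple graph: photon counts per time bin
# over a 50-nanosecond measurement window.
counts, bin_edges = np.histogram(arrival_times, bins=100, range=(0.0, 50e-9))
```

Each peak in `counts` corresponds to a surface at a different distance; it is this one-dimensional signal, not a conventional image, that the detector supplies.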
The researchers trained a sophisticated neural network algorithm by showing it thousands of different conventional photos of the team moving and carrying objects around the lab, alongside temporal data captured by the single-point detector at the same time.
Once the network had learned enough about how the temporal data corresponded with the photos, it could create highly accurate 3D images from the temporal data alone.
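The training setup, pairing temporal histograms with reference images so a model learns the mapping, can be illustrated with synthetic data. This is a crude stand-in: a linear least-squares fit replaces the team's neural network, and every array size and name below is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative synthetic training set: 500 temporal histograms
# (100 time bins each) paired with tiny 8x8 images flattened to 64 values.
n_samples, n_bins, n_pixels = 500, 100, 64
histograms = rng.poisson(lam=5.0, size=(n_samples, n_bins)).astype(float)
true_map = rng.normal(size=(n_bins, n_pixels))
images = histograms @ true_map  # stand-in for the conventional reference photos

# "Training": fit a linear map from histogram to image by least squares,
# a toy substitute for the team's neural network.
learned_map, *_ = np.linalg.lstsq(histograms, images, rcond=None)

# "Inference": reconstruct an image from a new histogram alone.
test_hist = rng.poisson(lam=5.0, size=(1, n_bins)).astype(float)
reconstruction = (test_hist @ learned_map).reshape(8, 8)
```

The real system learns a far richer, nonlinear mapping, but the structure is the same: paired (temporal data, image) examples at training time, temporal data alone at reconstruction time.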
In proof-of-principle experiments, the team produced moving images at about 10 frames per second from the temporal data. The researchers also noted that the hardware and algorithm have the potential to produce thousands of images per second.
“We’re confident that the method can be adapted to any system which is capable of probing a scene with short pulses and precisely measuring the return ‘echo’. This is really just the start of a whole new way of visualising the world using time instead of light,” said Dr. Turpin.
The neural network is currently limited to reconstructing the kinds of scenes it has been trained on. Further training and more advanced algorithms will be needed to teach it to visualise a greater range of scenes, widening its potential applications in real-world situations.
The scientists believe that their new technique could help cars, mobile devices and health monitors develop 360-degree awareness.
Dr. Turpin added: “The single-point detectors which collect the temporal data are small, light and inexpensive, which means they could be easily added to existing systems like the cameras in autonomous vehicles to increase the accuracy and speed of their pathfinding.
“Alternatively, they could augment existing sensors in mobile devices like the Google Pixel 4, which already has a simple gesture-recognition system based on radar technology. Future generations of our technology might even be used to monitor the rise and fall of a patient’s chest in hospital to alert staff to changes in their breathing, or to keep track of their movements to ensure their safety in a data-compliant way.”