Moving at the speed of lidar

[Image: Michael Olsen]

Lidar (light detection and ranging) is a technology that creates high-resolution, three-dimensional (3-D) representations of unparalleled accuracy and detail. It has become an invaluable and versatile tool for gathering information in dozens of fields, including forestry, archaeology, agriculture, and even crime scene investigation.

When Michael Olsen, associate professor of geomatics, aims and fires his lidar scanner, the tripod-mounted device collects a million points of data every second. From those data it creates a 3-D image of the targeted landscape, vegetation, and built environment. An average scan lasts about three minutes and yields an astonishing amount of raw, unstructured data.

Geomatics engineering involves taking geospatial measurements like these — on, above, and below the earth’s surface — to support engineering analyses. Lidar has made collecting the data easy, but the greater challenge is to develop tools that extract the information contained in the raw data.

Olsen’s work involves teasing out meaningful information that engineers can use to build roads, inspect buildings for defects, evaluate coastal erosion or landslide hazards, and tackle scores of other applications. For example, civil engineers routinely use terrestrial (ground-based) lidar to design and inspect highways, bridges, public transportation, and other infrastructure.

“With terrestrial lidar, we can sample terrain and structures down to the level of millimeters,” said Olsen. “Its accuracy is remarkable compared to what we used to get.”

Firing up to a million nanosecond laser pulses every second, lidar measures the time it takes for each pulse to bounce back from whatever it hits, then calculates the distance. Terrestrial lidar can be stationary (moved manually from place to place) or mobile (mounted to a vehicle). Airborne lidar has become commonplace and is the primary tool used to make topographic maps. Other lidar systems operate in space or underwater.
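The distance calculation described above is simple time-of-flight arithmetic: the pulse travels to the target and back, so the range is half the round trip at the speed of light. A minimal sketch (illustrative Python, not Olsen's software; the function name is invented):

```python
# Time-of-flight ranging: a pulse travels to the target and back,
# so distance is half the round trip at the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_return_time(round_trip_seconds: float) -> float:
    """Distance to the target, given the pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return arriving 200 nanoseconds after the pulse fired
# puts the target roughly 30 meters away.
print(range_from_return_time(200e-9))  # ~29.98
```

At a million pulses per second, this arithmetic runs a million times a second, which is why a three-minute scan yields so much raw data.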

The elemental output of every lidar scan is the point cloud, a visually stunning and sometimes ghostly image. Every scan, as represented by the point cloud, comprises tens of millions of data points, each of which corresponds to a point on an object in the physical world. “It doesn’t just give you a bunch of numbers on a spreadsheet or a bunch of code,” Olsen explained. “You can visually see the data. That helps people understand the results in an intuitive sense.”

On its own, though, a point cloud has no inherent meaning. “If we scan the room we’re sitting in, the system will generate a 3-D point cloud, but it won’t know that this is a door, that’s a wall, that’s a chair, that’s a rug,” explained Olsen.
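Olsen's point is visible in the data structure itself: a raw scan is nothing but coordinates, with no labels attached. A hypothetical three-point excerpt (a real scan holds tens of millions of such rows):

```python
# A raw point cloud is just rows of (x, y, z) coordinates.
# Hypothetical tiny excerpt -- a real scan has tens of millions of rows.
cloud = [
    (0.12, 1.05, 0.00),   # could be the floor...
    (0.13, 1.05, 0.45),   # ...or a chair seat -- nothing in the data says which
    (2.00, 0.00, 1.10),
]
# No classes or object boundaries come with the points; segmentation
# and classification must add that meaning afterward.
print(len(cloud), "points, no semantics attached")
```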

Currently, rigorous geometric techniques are needed to segment and classify point clouds, enabling users to distinguish, categorize, and identify objects in a scan. Sometimes it takes days of computer processing, and sometimes it has to be done manually, the same way a photographer might manipulate an image in Photoshop. “It’s tedious work,” said Olsen.

Olsen has developed a solution that simplifies and accelerates the data processing component and produces more accurate spatial and structural representations from lidar images. Two-dimensional (2-D) image files are one-tenth the size of 3-D point cloud files, so Olsen and Hamid Mahmoudabadi (’16 Ph.D. Civil Engineering) segment and classify a 2-D panoramic image created by the lidar system — an image that usually gets ignored and then lost in subsequent processing.

“We take that initial data structure and create 2-D maps and apply proven software algorithms to extract information from them,” said Olsen. “Then we kick it back to a 3-D image where all of the objects have been divided out, categorized, and identified. The processing time with our approach is much shorter and more efficient, and the results are more accurate when compared to a commonly used technique for point cloud segmentation.”
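The round trip Olsen describes (3-D points to a 2-D panorama for segmentation, then labels carried back to 3-D) can be sketched as follows. This is a hand-rolled illustration of the general idea, not the published algorithm; the grid size, projection, and function names are all invented for the example:

```python
import math

def to_panorama_cell(x, y, z, h=64, w=256):
    """Map one 3-D point to its (row, col) cell in a 2-D
    azimuth/elevation panorama, the image-like structure a
    stationary scanner records as it sweeps the scene."""
    az = math.atan2(y, x)                     # horizontal angle
    el = math.atan2(z, math.hypot(x, y))      # vertical angle
    col = int((az + math.pi) / (2 * math.pi) * (w - 1))
    row = int((el + math.pi / 2) / math.pi * (h - 1))
    return row, col

def labels_back_to_3d(points, labels_2d):
    """After any 2-D segmentation has labeled the panorama pixels,
    carry each pixel's segment label back to the point that made it."""
    return [labels_2d[r][c] for r, c in
            (to_panorama_cell(x, y, z) for x, y, z in points)]
```

The payoff suggested by the article is that the segmentation step in the middle runs on a compact 2-D image with mature, proven algorithms, rather than on the full 3-D point cloud.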

In one particularly robust validation of his segmentation method, Olsen applied both his new system and conventional segmentation to a lidar scan of the trophy case in Kearney Hall on the Oregon State campus. It was a challenging proposition because of the distorting effects from the glass front panel, but Olsen’s technique handled it with ease.

“When we started with the 2-D panorama and applied our algorithm, it pulled out and identified each of the plaques and trophies as distinct objects,” said Olsen. “The older approach applied to a 3-D point cloud didn’t work as well. With our new technique, we were very satisfied with the accuracy and the dramatically faster processing time.”

Using Olsen’s technique, lidar interpretation processes that normally take hours to compute can now be done in a matter of minutes with improved results and fewer errors.

Olsen’s next step is to adapt his streamlined segmentation technique to mobile lidar, which scans large swaths of territory in a very short time and is far more efficient than the painstaking process of taking individual scans, picking up the equipment, setting it up at another location, and repeating the process. Mobile lidar is also safer, because surveyors and engineers don’t have to stand near dangerous roadways to conduct stationary scans. “Right now, we’re applying our algorithm to a single scan at a time,” Olsen said, “but I think we’ll be able to figure out a way to stitch together the multiple images that mobile lidar creates along highways. If we succeed, I think it will be the first time that’s been done.”

Sept. 8, 2021