3D Terminology Put in Context


3D scanning terminology across industries
Pop quiz: what’s the difference between 3D digitizing and 3D scanning? Any smart alecks in the room are right: absolutely nothing. And yet, the two terms appear in different places, with 3D digitizing being the lesser used by far. Why is that? 3D scanning is a cutting-edge technology, and that means the language surrounding its adoption into different spheres of life and different fields of business is in constant evolution.
For a definition of 3D scanning, check out this previous article on our blog. But let’s go a bit deeper into how people use 3D scanning terminology within different contexts.

3D Digitizing / Probe Scanning
Although the term 3D digitizing can be used interchangeably with 3D scanning, it’s most often used in the context of digital shape sampling with a physical probe that makes contact with an object. Digitizing probes are often sold as accessories to milling machines and routers. Fitting the probe onto the milling machine turns it into a ‘digitizing machine’ capable of creating point cloud files compatible with the milling machine’s software. The process is very similar to that of a 3D laser scanner: points are collected using the probe, and meshing the resulting point cloud gives you an appropriate file for the milling machine or router to recreate, or CNC (computerized numerical control), the part.
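
To make that ‘meshing’ step concrete, here’s a minimal sketch of turning probe-collected points into a millable surface. It assumes the open-source Open3D library and a hypothetical probe_points.xyz file of sampled coordinates; a real digitizing package does something equivalent behind the scenes.

```python
# Sketch: mesh a probe-sampled point cloud so a CAM tool can use it.
# Assumes Open3D (pip install open3d) and a placeholder XYZ file.
import numpy as np
import open3d as o3d

points = np.loadtxt("probe_points.xyz")            # one "x y z" triple per line
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.estimate_normals()                             # Poisson meshing needs normals

# Reconstruct a watertight surface from the sparse probe samples.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
o3d.io.write_triangle_mesh("part.stl", mesh)       # STL is a common CNC/CAM input
```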

Probe digitizing smaller objects can take a long time to create complete 3D geometry (scanning a quarter can take up to 3.5 hours), but this isn’t always a problem. This method of digitizing offers very high resolution, sometimes even reaching a metrology-level accuracy of <0.002”. It’s a more precise tool for measurement than other kinds of scanning, and it can determine a specific thickness or dimension very quickly by documenting only a few selective points and leaving the rest of the object uncaptured.

Laser Imaging (LIDAR)
Laser Imaging can refer to the first two words in the LIDAR acronym – Laser Imaging Detection and Ranging (other versions of the acronym include LiDAR – Light Detection and Ranging – or LADAR – LAser Detection and Ranging; the process they all refer to is the same). Here’s how George Shaw, a laser systems lead at NASA, explains LIDAR’s ability to calculate distances based on measurements of time:
“When the pulse leaves the instrument, there is a detector that starts an electronic clock. When the laser pulse hits the surface and is reflected back—it’s scattered, but some amount of light is reflected back—it is directed into a photo detector and that stops the clock.”
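
The arithmetic behind that clock is simple enough to sketch in a few lines of Python. This shows the general time-of-flight principle rather than any particular instrument’s firmware, and the pulse timing in the example is made up.

```python
# Time-of-flight ranging: a pulse travels out and back, so the one-way
# distance is the round-trip time multiplied by the speed of light, halved.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_round_trip(seconds: float) -> float:
    """Distance to the reflecting surface, in meters."""
    return C * seconds / 2.0

# Example: a pulse detected ~667 nanoseconds after it left the instrument
print(range_from_round_trip(667e-9))  # roughly 100 meters
```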

Up in space, LIDAR technology has led to surprising discoveries – in 2008 a LIDAR system picked up streaks of brightness where laser light shone through the atmosphere above the Phoenix Mars Lander. This led to the first indication that there might be water (in the form of ice crystals) on Mars.
LIDAR has been a part of space missions dating back to 1971, but it’s also used extensively for earth-bound projects, most often for geographic scanning that covers large areas. Transitioning the technology from earth to space isn’t always easy – there can be issues adapting the size, weight, and power of the equipment to suit different environments. LIDAR systems can be bulky and enormously heavy, but to work in space they sometimes have to shrink to the size of a bread box. The pressure to advance LIDAR for space missions pays dividends when the systems are sold to commercial clients, however. It all shakes out.

3D Computed Tomography (CT)
Laser technology is used for much more than 3D scanning – it’s all over the place in the medical world in particular. Did you know there’s a type of x-ray machine that uses lasers, the Xaser? Pretty cool, but it doesn’t render any images in 3D.

One of the most important processes in the medical world is obtaining graphics-based information about a patient’s body. X-ray or ultrasound machines display a cross section of the body (or any solid object) using a technique called tomography. The first clinical computed tomography (CT) scanner was developed by Godfrey Hounsfield in 1972, work that later won him a Nobel Prize in medicine. CT scanning uses x-rays to produce 3D images of external and internal views of an object. 2D radiographic imaging techniques are still prevalent in the medical world, mostly due to cost considerations and the high resolutions of these methods. There are some particularly sensitive cases, like imaging a woman’s reproductive organs, where 3D is beneficial. As with most other 3D scanning methods, multiple MRI or CT scans can be combined using computer software to create 3D volumes.
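
That ‘combining scans into a 3D volume’ step is easy to illustrate. Here’s a toy sketch using NumPy with synthetic data standing in for real scan slices; actual medical software layers registration, filtering, and rendering on top of the same basic idea.

```python
# Each tomographic slice is a 2D array of intensities; stacking the
# slices along a new axis yields a 3D volume that can be resliced or
# rendered from any angle. The slice data here is synthetic.
import numpy as np

slices = [np.random.rand(512, 512) for _ in range(200)]  # 200 fake axial slices
volume = np.stack(slices, axis=0)                        # shape: (200, 512, 512)

# Any cross section can now be pulled out of the block, e.g. a
# sagittal cut down the middle of the scanned object:
sagittal = volume[:, :, 256]
print(volume.shape, sagittal.shape)                      # (200, 512, 512) (200, 512)
```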

In dentistry there’s a more aggressive form of 3D scanning called cone beam computerized tomography (CBCT), in which a cone-shaped x-ray beam moves around the scan subject to generate a large number of images. Used sparingly due to the technique’s higher radiation dose, it is capable of capturing 3D information about teeth, gums, muscle, nerve paths and bone in a single scan.

3D Photography
3D photography has traditionally used stereoscopic techniques to produce the effect of 3D. It’s accomplished by taking two pictures of the same object or scene from slightly different positions to mimic the human eyes. Specialty 3D cameras often have two image sensors that can capture simultaneous pairs of images, but the additional hardware and niche interest have made them expensive. Both Panasonic and Fujifilm produce this kind of camera. Other companies like Nikon and Sony have taken a different approach, equipping a regular single-lens camera with the UI and functionality to align the subject of two different photographs, taken one after the other with a slight shift in camera position by the photographer.

You’d have to wear anaglyphic or polarized viewing glasses, though, in order to view these images as 3D. Either that, or the camera itself needs a lenticular lens monitor, in which an array of microscopic lenses produces binocular disparity, allowing you to look at 3D image files with the naked eye. An example is Fujifilm’s Finepix REAL 3D W3 camera. Although equipped with two sensors and two optical zoom lenses, the Finepix also offers the two-step method, in which a user takes a first picture and then a second, with the first appearing on the screen as a translucent overlay to help them position the camera correctly. It also has a feature to take successive shots automatically, perfect for shooting from a moving vehicle.
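
For a sense of how the anaglyphic route works, here’s a minimal sketch that composes a red-cyan anaglyph from a stereo pair using the Pillow imaging library; the file names are placeholders for any left/right pair shot as described above.

```python
# Red-cyan anaglyph: the left view supplies the red channel and the
# right view supplies green and blue, so tinted glasses deliver a
# different image to each eye and the brain fuses them into depth.
from PIL import Image

left = Image.open("left.jpg").convert("RGB")
right = Image.open("right.jpg").convert("RGB")

r, _, _ = left.split()    # red channel from the left view
_, g, b = right.split()   # green and blue channels from the right view

anaglyph = Image.merge("RGB", (r, g, b))
anaglyph.save("anaglyph.jpg")
```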

Photogrammetry
We’ve touched on the particulars of photogrammetry in the past (here, in fact!), but it would be impossible to omit it from a list on 3D capture terminology.

This scan of a 2,500-year-old olive tree was created using 1,618 photos and the RealityCapture photogrammetry software.
Although photogrammetry involves the computational crunching of graphical information more than any physical ‘scanning’ in the typical sense, the resulting 3D model is very similar to one produced by laser scanning methods. Photogrammetry is a more complex form of stereo vision, which creates 3D depth by combining two images of the same object and environment taken from two slightly different angles. Photogrammetry takes things much further, reconstructing full 3D geometry from a whole bunch of photos taken from many different angles. The photos can be taken all at once by multiple cameras set up on a rigid frame or rig surrounding the object like a cage, or by one camera being moved around a scene or object in increments.
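
The stereo-vision building block that photogrammetry extends can be demonstrated in a few lines with OpenCV. This is a sketch of basic two-view depth, not a full photogrammetry pipeline; the file names are placeholders for a rectified stereo pair.

```python
# Two-view stereo: a block matcher finds, for each pixel in the left
# image, how far its match has shifted in the right image. That shift
# (disparity) is inversely proportional to depth: Z = f * B / disparity,
# where f is the focal length and B is the distance between the cameras.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Scale the raw disparity into a viewable 8-bit image.
disp_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disp_vis)
```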

Laser Scanning
A lot of different terminology exists within the sphere of laser scanning alone. We’ve covered different methods of laser scanning, and therefore the language used to describe them, in the same blog post linked above for photogrammetry.

One thing remains constant: laser scanning always combines laser data and graphical data to create a point cloud of an object’s surface. It’s how the laser is used that varies. A laser light can be shone as a single stripe across an object from left to right in a sequence or ‘pass’, or a pattern of striped laser light can be shone over the object all at once. Both these techniques fall under the term ‘structured light’ laser scanning; the geometry behind the single-stripe pass is sketched below.
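
Here’s a minimal sketch of the triangulation behind that single-stripe pass: the laser emitter and the camera sit a known distance apart, each sees the lit point at a measurable angle, and the triangle they form gives the depth. All names and values are illustrative.

```python
# Laser-stripe triangulation: with baseline b between emitter and camera,
# laser angle a and camera angle c (both measured from the baseline), the
# lit surface point closes a triangle. The law of sines then gives its
# perpendicular distance from the baseline: depth = b*sin(a)*sin(c)/sin(a+c).
import math

def depth_from_stripe(baseline_m: float, laser_angle_rad: float,
                      camera_angle_rad: float) -> float:
    """Perpendicular distance from the baseline to the lit point, in meters."""
    return (baseline_m * math.sin(laser_angle_rad) * math.sin(camera_angle_rad)
            / math.sin(laser_angle_rad + camera_angle_rad))

# Example: 10 cm baseline, laser at 60 degrees, camera ray at 70 degrees
print(depth_from_stripe(0.10, math.radians(60), math.radians(70)))  # ~0.106 m
```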

In time-of-flight laser scanning the laser is used as in the description of LIDAR above, to ‘time’ distances between the scanning device and different points on an object, using the known speed of light to calculate how long it takes the laser light to bounce off the object’s surface and return to the scanning device’s sensor. Time-of-flight technology is highly sophisticated in terms of quality, but it can be an expensive option usually reserved for scanning large environments and buildings.

