Capturing accurate color board scans for value recovery with area cameras

Guest Post: Terry Arden, CEO, LMI Technologies

The use of 3D scanning technology in the wood industry has evolved significantly since it was first introduced into saw and planer mills. Initially, 3D scanners were used to measure the shape of logs and boards in order to extract the greatest amount of lumber from the wood (i.e., volume recovery).

Today, complementary technologies are used to extract the highest quality from the wood (i.e., value recovery). Color imaging systems are essential in generating the high-resolution images required for surface defect detection, which leads to grade-based cutting decisions.

Building a Color Scanning System

Building a color scanning system requires a color camera, a lens, and lighting. The color camera can use either linear or area scan camera technology. The choice of which of these two technologies to use affects the overall design of the color system. Understanding linear vs. area camera operation is the key to picking a lighting solution that offers long lifetimes.

Generating a 2D Color Image of the Board Surface with Linear Cameras

A linear camera chip consists of one row of pixels. A lens is chosen to map this row of pixels to a suitable resolution across the board length––for example 0.5mm/pixel. To scan a board moving on a conveyor along the board width (transverse scanner), an encoder is used to track motion and trigger the camera one row at a time, at a suitable resolution across the board width––for example 0.5mm (Fig. 1). In this way, a 2D color image of the board surface is created at 0.5mm x 0.5mm pixel resolution.

Fig. 1 Pixels are mapped from the board surface to a linear camera, with transverse board motion.
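The encoder-triggered capture above reduces to simple arithmetic. A minimal sketch, assuming an illustrative encoder resolution of 0.1mm per count (not stated in the article):

```python
# Hypothetical numbers: 0.5 mm/pixel resolution as in the example above,
# and an assumed encoder resolution of 0.1 mm per count.
PIXEL_RES_MM = 0.5          # target resolution across the board width
ENCODER_MM_PER_COUNT = 0.1  # assumed encoder resolution (illustrative)

# Trigger one line capture every time the board advances 0.5 mm.
counts_per_trigger = PIXEL_RES_MM / ENCODER_MM_PER_COUNT  # 5.0 counts

def rows_for_board(width_mm: float) -> int:
    """Number of line triggers needed to cover the full board width."""
    return int(width_mm / PIXEL_RES_MM)

print(counts_per_trigger)     # 5.0
print(rows_for_board(150.0))  # 300 rows for a 150 mm wide board
```

Each trigger contributes one image row, so board width divided by row resolution gives the total row count of the stitched 2D image.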

To produce color, an RGB mask is applied over the pixels to deliver a repeating sequence of red, green, and blue color pixels (Fig. 2).

Fig 2. A single row of pixels encoded by a color mask.

In some cases, a linear camera may offer 3 rows of pixels––one for each color (Fig. 3). This is called a trilinear camera.

Fig. 3 Trilinear cameras offer three pixel rows––one for each R, G, B color.

For trilinear cameras, you will need 3x the encoder trigger rate to achieve the same color density as a linear camera. Aligning the encoder triggers so that each color samples the same board surface location is difficult and often leads to color artifacts. For a linear camera, you will need 3x the number of pixels in a row to get the same color density as a trilinear camera. With today’s linear cameras, high pixel density along a row is easily achieved.

Continuous Lighting and the Duty Cycle

Both linear and trilinear cameras require a continuous source of white light to illuminate the board surface. Since linear cameras are always capturing the next row of data while the previous row is being read out, the lighting system must always be on (Fig. 4).

Fig. 4 The light source must always be on when using linear or trilinear cameras.

The ratio of the time the light is on to the period of the camera frame rate (period = 1 / frame rate) is called the duty cycle (duty cycle = exposure time / period). The duty cycle largely determines the lifetime of a light source. Due to heat, a high duty cycle requires more frequent light replacement than a lower one.

Linear camera-based scanning designs have a high duty cycle, which results in shorter light lifetimes. In these designs the light source is inefficient, with high power loss due to heat, and heat shortens component lifetime. This is why it is common to see LED light bars with large heat sinks to dissipate heat.

In addition, the light source in linear scanning designs must be very close to the board surface for maximum illumination brightness, creating a mounting strategy where the cameras are high up (say 1-2m), and the light is relatively close to the oncoming board. This is not a desirable configuration.

Area Cameras and Longer Lighting Lifetime

The alternative to a linear camera is an area camera. An area camera is composed of a 2D array of pixels that are mapped by a lens onto an area of the board surface. Area cameras use a color mask to encode pixels into R, G, and B elements in a pattern known as a Bayer filter. This Bayer pattern is decoded later by software to produce color for every pixel on the 2D array.
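To make the Bayer decoding step concrete, here is a minimal sketch that collapses each 2x2 RGGB cell into one full-color pixel (simple binning). Production demosaicing interpolates neighboring cells to keep full resolution; this is only illustrative, and the function name is my own:

```python
def demosaic_rggb(mosaic):
    """Decode a raw RGGB Bayer mosaic (2D list of sensor values) by
    2x2 binning: one (R, G, B) output pixel per 2x2 cell, averaging
    the two green samples."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        out.append(row)
    return out

raw = [[10, 20],
       [30, 40]]
print(demosaic_rggb(raw))  # [[(10, 25.0, 40)]]
```

The trade-off of binning is halved resolution in each axis, which is why real decoders interpolate instead.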

For the purposes of this discussion, assume we use a 2D array with 10 rows (note: a 2D array is just a linear array with more rows). If we want 0.5mm resolution for each row, as in the linear example, then the encoder triggers the capture of an area every time the board moves 5mm (0.5mm/row x 10 rows = 5mm) (Fig. 5). Now we are reading a small “patch” of pixels––not just a row. Each patch is then stitched into a 2D color image based on encoder stamps that identify the exact start location of each pixel patch.

Fig. 5 Area cameras read a “patch” of pixels rather than a line.
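The stitching step can be sketched as a mapping from each patch's encoder stamp to its first row in the final image. The data structures below are assumptions for illustration:

```python
# Sketch: stitch 10-row patches into one image keyed by encoder stamps.
ROWS_PER_PATCH = 10
ROW_RES_MM = 0.5  # 0.5 mm of board travel per row, as in the example

def patch_start_row(encoder_stamp_mm: float) -> int:
    """Map a patch's encoder stamp (mm of board travel) to its first
    row index in the stitched image."""
    return round(encoder_stamp_mm / ROW_RES_MM)

# Two patches captured 5 mm apart; row data is stand-in strings.
patches = [(0.0, ["row%d" % i for i in range(10)]),
           (5.0, ["row%d" % i for i in range(10, 20)])]

image = {}
for stamp_mm, rows in patches:
    start = patch_start_row(stamp_mm)
    for i, row in enumerate(rows):
        image[start + i] = row

print(len(image))  # 20 contiguous stitched rows
```

Because each patch carries its own encoder stamp, patches land at exact row positions even if capture timing jitters.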

Now, let’s consider what happens with the lighting in this type of system design. We still need white illumination to produce color images, but the duty cycle is very different. The ON time of the light spans the duration it takes to expose one row (since we want to “stop” motion for 0.5mm––just like in a linear camera). The rest of the time (9 rows), the light is OFF while we wait for the 5mm of board motion to complete (Fig. 6).

Fig. 6 With an area camera, the light is only ON during the time it takes to expose one row.

During the time the light is ON, all 10 rows are exposing. The duty cycle is therefore very low (1/10, or 10%), not the 100% duty cycle of a linear system. This means the light can be strobed for a very short period––ON for one row, and OFF for nine rows. Strobing an LED can deliver very intense light output as long as the duty cycle stays very low: the LED never heats up, which results in a much longer lifetime (e.g., 10 years vs. 1 year).
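The 10% figure falls out of the geometry: the strobe ON time is the time to travel one row (0.5mm), and the period is the time to travel one patch (5mm). The board speed below is an assumed value for illustration:

```python
# Assumed transport speed; illustrative only.
SPEED_MM_S = 1000.0   # board speed: 1 m/s
ROW_MM = 0.5          # motion "frozen" during one row of exposure
PATCH_MM = 5.0        # board travel between area captures (10 rows)

on_time_s = ROW_MM / SPEED_MM_S    # 0.0005 s strobe ON time
period_s = PATCH_MM / SPEED_MM_S   # 0.005 s between captures
print(on_time_s / period_s)        # 0.1 -> 10% duty cycle
```

Note that the duty cycle is independent of the transport speed: it depends only on the ratio of row resolution to patch length.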

With strobed LED lighting, LEDs can be overdriven at a much higher current to produce 5x more intensity. This allows lighting to be conveniently mounted and wired close to the cameras, and kept physically out of the way of board movement (Fig. 7).

Fig. 7 Low duty cycles and strobed LED lighting allow light placement close to the camera.

Once we have generated a 2D color image, the data is further white balanced for accurate color representation, and scaled based on the height variation of the board using profile data taken from a 3D scan of the same region (Fig. 8). A color pixel has a different physical size on the board surface at one height than at another height. If color pixels are not corrected for height, then the dimension of defects (e.g., knots) will be incorrect.

Fig. 8 2D data is white balanced and scaled based on 3D height variation.
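The height correction follows from similar triangles in a simple pinhole model: a pixel's footprint on the board shrinks as the surface gets closer to the camera. A minimal sketch, with assumed camera height and an illustrative reference pixel size:

```python
def pixel_size_mm(ref_size_mm: float,
                  camera_height_mm: float,
                  surface_height_mm: float) -> float:
    """Scale the pixel footprint measured at the reference plane (height 0)
    by the similar-triangle ratio of distances in a pinhole model."""
    distance_to_surface = camera_height_mm - surface_height_mm
    return ref_size_mm * distance_to_surface / camera_height_mm

# Assumed setup: camera 1500 mm above the reference plane, board
# surface 50 mm high, 0.5 mm pixel at the reference plane.
print(pixel_size_mm(0.5, 1500.0, 50.0))  # ~0.483 mm
```

Dividing measured defect dimensions by this height-dependent pixel size is what keeps a knot the same physical size regardless of board thickness.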

Modular, Area Camera System Design with Easy Bolt-On Lighting

At LMI, our Gocator 200 series of modular scanner systems is designed around the area camera principle. An LED light bar generates white illumination and is strobed for a short ON time at high intensity to scan even the darkest board surfaces. The LED timing is precisely synchronized to the area camera exposures. The resulting color patches are stitched into a single seamless image, white balanced, and then scaled based on board height variation.

The 3D profiling scan data from a Gocator 210, 230, or 250 scanner is aligned to the color scan plane of a bolt-on Gocator 205, so 3D data can be used to scale the color image data. All of the software needed to capture 2D color with 3D profile and tracheid is provided to customers in an open source SDK. The SDK shows how to manage the many sensors in an optimizer in order to build high definition data models. These models are processed by machine vision algorithms (supplied by the OEM) in order to extract wane and defects, and compute optimal cutting patterns. Gocator makes it easy to mix 3D with 2D color in order to build custom solutions for a variety of machine centres in saw and planer mills.

Source and images credit: LMI
