# Processing Measurement Results

This topic tells you how to process the measurement results of Basler blaze-101 cameras.

## Overview

The blaze-101 camera measures the distance the light travels for each sensor pixel.

Using these distances, the camera calculates x,y,z coordinates in a right-handed coordinate system for each sensor pixel. The coordinate system's origin is at the camera's optical center, which is located inside the camera housing. The y axis is pointing down, and the z axis is pointing away from the camera.

The camera provides 3D information either as a depth map or as a point cloud, depending on the pixel format selected. In a depth map, z coordinates are encoded as 16-bit gray values. As described below, all 3D coordinate values can be calculated from these gray values. A point cloud contains the x,y,z coordinates for each sensor pixel as floating-point numbers. The unit is mm.

If there is no valid depth information for a sensor pixel (e.g., due to outlier removal or insufficient light, i.e., light that is not strong enough to pass the confidence threshold), the corresponding values in a depth map or a point cloud are set to the value defined by the `Scan3dInvalidDataValue` parameter (default setting is 0).

## Processing Depth Maps

Depth maps consist of 16-bit gray values. For each sensor pixel, the camera converts the z coordinate value to a gray value and stores it in the depth map.

In combination with the camera's calibration data provided by the `Scan3dCoordinateScale`, `Scan3dPrincipalPointU`, `Scan3dPrincipalPointV`, and `Scan3dFocalLength` parameters, complete 3D information can be retrieved from a depth map.

Info

The `Scan3dCoordinateScale` parameter value varies depending on the pixel format selected. When you're working with the camera in the blaze Viewer, the pixel format is always set to the `Coord3D_ABC32f` pixel format. You can't change this setting. The depth maps provided by the blaze Viewer are created from the point clouds, and the `Scan3dCoordinateScale` parameter value is 1 in this case. When you're working with the blaze camera outside the blaze Viewer and set the pixel format to `Coord3D_C16`, the `Scan3dCoordinateScale` parameter value is different. The `Scan3dCoordinateScale` parameter values for the different pixel formats are listed in the following table.

| Pixel Format | Scan3dCoordinateScale[C] Parameter Value |
|---|---|
| Coord3D_ABC32f | 1 |
| Coord3D_C16 | 0.152588 |
| Mono16 | 0.152588 |

Refer to the **GrabDepthMap** C++ sample for how to configure a camera to send depth maps and how to access the depth map data.

### Calculating 3D Coordinates from the 2D Depth Map

To convert a depth map's 16-bit gray values to z coordinates in mm, use the following formula:

```
z [mm] = gray2mm * g
```

where:

`g` = gray value from the depth map

`gray2mm` = value of the `Scan3dCoordinateScale` parameter

For calculating the x and y coordinates, use the following formulas:

```
x [mm] = (u-cx) * z / f
y [mm] = (v-cy) * z / f
```

where:

`(u,v)` = column and row in the depth map

`f` = value of the `Scan3dFocalLength` parameter, i.e., the focal length of the camera's lens

`(cx,cy)` = values of the `Scan3dPrincipalPointU` and `Scan3dPrincipalPointV` parameters, i.e., the principal point
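The formulas above can be checked with a small, self-contained sketch. The parameter values used below (`gray2mm`, `cx`, `cy`, `f`) are illustrative assumptions, not actual calibration values of any device; in practice, read them from the camera as shown in the sample code that follows.

```cpp
#include <cmath>
#include <cstdint>

struct Point3d { double x, y, z; };

// Convert a depth map gray value at pixel (u,v) to x,y,z coordinates in mm,
// using the camera's calibration data.
Point3d grayToXyz(std::uint16_t g, int u, int v,
                  double gray2mm, double cx, double cy, double f)
{
    const double z = gray2mm * g;       // z [mm] = gray2mm * g
    const double x = (u - cx) * z / f;  // x [mm] = (u-cx) * z / f
    const double y = (v - cy) * z / f;  // y [mm] = (v-cy) * z / f
    return { x, y, z };
}
```

For example, with the `Coord3D_C16` scale factor 0.152588, a gray value of 6554 corresponds to a z coordinate of roughly 1000 mm, and a pixel at the principal point maps to x = y = 0.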

#### C++ Sample Code

```
// Enable depth maps by enabling the Range component and setting the appropriate pixel format.
camera.ComponentSelector.SetValue(ComponentSelector_Range);
camera.ComponentEnable.SetValue(true);
camera.PixelFormat.SetValue(PixelFormat_Coord3D_C16);
// Query the conversion factor required to convert gray values to distances:
// Choose the z axis first...
camera.Scan3dCoordinateSelector.SetValue(Scan3dCoordinateSelector_CoordinateC);
// ... then retrieve the conversion factor.
const auto gray2mm = camera.Scan3dCoordinateScale.GetValue();
// Configure the gray value used for indicating missing depth data.
// Note: Before setting the value, the Scan3dCoordinateSelector parameter must be
// set to the axis the value is to be configured for, in this case the z axis,
// i.e., "CoordinateC". This has already been done a few lines above.
// missingDepth is an application-defined gray value marking invalid pixels.
camera.Scan3dInvalidDataValue.SetValue((double)missingDepth);
// Retrieve calibration data from the camera.
const auto cx = camera.Scan3dPrincipalPointU.GetValue();
const auto cy = camera.Scan3dPrincipalPointV.GetValue();
const auto f = camera.Scan3dFocalLength.GetValue();
// ....
// Access the data.
const auto container = ptrGrabResult->GetDataContainer();
const auto rangeComponent = container.GetDataComponent(0);
const auto width = rangeComponent.GetWidth();
const auto height = rangeComponent.GetHeight();
// Calculate coordinates for pixel (u,v).
const uint16_t g = ((uint16_t*)rangeComponent.GetData())[u + v * width];
const double z = g * gray2mm;
const double x = (u - cx) * z / f;
const double y = (v - cy) * z / f;
```

#### Dealing with Saved Depth Maps

For depth maps acquired with the blaze Legacy SDK, the pylon SDK, or the blaze ROS driver, you have to use the following formula:

```
Distance Measured [mm] = Pixel_Value * Scan3dCoordinateScale[C]
```

For depth maps saved with the blaze Viewer, use the following formula:

```
Distance Measured [mm] = DepthMin + (Pixel_Value * (DepthMax - DepthMin)) / 65535
```
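The blaze Viewer formula can be sketched as a small helper. The function name is hypothetical; `depthMin` and `depthMax` are the values of the `DepthMin` and `DepthMax` parameters (in mm) that were active when the depth map was saved.

```cpp
#include <cstdint>

// Convert a 16-bit pixel from a depth map saved with the blaze Viewer
// back to a distance in mm.
double viewerPixelToMm(std::uint16_t pixel, double depthMin, double depthMax)
{
    return depthMin + (pixel * (depthMax - depthMin)) / 65535.0;
}
```

A pixel value of 0 maps to `depthMin` and a pixel value of 65535 maps to `depthMax`, with all other values interpolated linearly in between.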

## Processing Point Clouds

No further processing is required to extract 3D information from a point cloud since the point cloud consists of x,y,z coordinate triples within the camera's coordinate system.

Refer to the **FirstSample** C++ sample for how to configure a camera to send a point cloud and how to access the data.

If you need a depth map in addition to a point cloud, refer to the **ConvertPointCloud2DepthMap** C++ sample, which illustrates how to compute grayscale and RGB depth maps from a point cloud.
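As a minimal sketch of working with point cloud data, the function below counts the valid points in a buffer. It assumes the cloud is a flat array of float triples (x,y,z) in mm, as delivered with the `Coord3D_ABC32f` pixel format, and that invalid pixels carry the `Scan3dInvalidDataValue` (0 by default) in all three channels; the function name is illustrative.

```cpp
#include <cstddef>
#include <vector>

// Count the points in a point cloud that carry valid depth information.
std::size_t countValidPoints(const std::vector<float>& cloud, float invalid = 0.0f)
{
    std::size_t n = 0;
    for (std::size_t i = 0; i + 2 < cloud.size(); i += 3)
    {
        // A point is invalid if all three coordinates equal the invalid-data value.
        const bool isInvalid = cloud[i] == invalid
                            && cloud[i + 1] == invalid
                            && cloud[i + 2] == invalid;
        if (!isInvalid)
            ++n;
    }
    return n;
}
```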

## Shifting the Origin of the Coordinate System to the Front of the Camera Housing

The origin of the camera's coordinate system is located at the camera's optical center, which is inside the camera housing. If you prefer coordinates in a coordinate system whose origin is located at the front of the camera housing, i.e., one that is translated along the z axis, a constant, device-specific offset has to be subtracted from the z coordinates. The required offset can be retrieved from the camera by getting the value of the `ZOffsetOriginToCameraFront` parameter:

```
const double offset = camera.ZOffsetOriginToCameraFront.GetValue();
```

If (x,y,z) are the coordinates of a point in the camera's coordinate system, the corresponding coordinates (x',y',z') in a coordinate system that is shifted along the z axis to the front of the camera's housing can be determined using the following formulas:

```
x' = x
y' = y
z' = z - offset
```

## Calculating Distances

Given a point's coordinates `(x,y,z)` in mm, the distance of that point to the camera's optical center can be calculated using the following formula:

```
d = sqrt( x*x + y*y + z*z )
```

The distance `d'` to the front of the camera's housing can be calculated as follows:

```
z' = z - offset
d' = sqrt( x*x + y*y + z'*z')
```
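Both distance formulas can be wrapped in small helpers. This is a minimal sketch; the function names are illustrative, and `offset` stands for the value read from the `ZOffsetOriginToCameraFront` parameter.

```cpp
#include <cmath>

// Distance (in mm) from a point (x,y,z) to the camera's optical center.
double distanceToOpticalCenter(double x, double y, double z)
{
    return std::sqrt(x * x + y * y + z * z);
}

// Distance (in mm) from a point (x,y,z) to the front of the camera housing.
// offset is the value of the ZOffsetOriginToCameraFront parameter.
double distanceToCameraFront(double x, double y, double z, double offset)
{
    const double zShifted = z - offset;  // z' = z - offset
    return std::sqrt(x * x + y * y + zShifted * zShifted);
}
```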

## Visualizing Depth Information as RGB Images

You can calculate RGB values from z coordinates or distance values using the following scheme for a rainbow color mapping. This can be useful for improved visualization of the data.

First, a depth value from the `[minDepth..maxDepth]` value range is converted into a 10-bit value. This 10-bit depth value is mapped to 4 color ranges, each with a resolution of 8 bits.

`minDepth` and `maxDepth` = values of the `DepthMin` and `DepthMax` parameters, i.e., the camera's current depth ROI

| Depth Value | Mapped to Color Range |
|---|---|
| 0..255 | Red to yellow (255,0,0) -> (255,255,0) |
| 256..511 | Yellow to green (255,255,0) -> (0,255,0) |
| 512..767 | Green to aqua (0,255,0) -> (0,255,255) |
| 768..1023 | Aqua to blue (0,255,255) -> (0,0,255) |

In the following code snippet, `depth` is either a z value or a distance value in mm.

```
const int minDepth = (int)m_camera.DepthMin.GetValue();
const int maxDepth = (int)m_camera.DepthMax.GetValue();
const double scale = 65535.0 / (maxDepth - minDepth);
for each pixel {
    // Set depth either to the corresponding z value or
    // a distance value calculated from the z value.
    // Clip depth to the [minDepth..maxDepth] range.
    if (depth < minDepth)
        depth = minDepth;
    else if (depth > maxDepth)
        depth = maxDepth;
    // Convert the clipped depth value to a 16-bit gray value.
    const uint16_t gray = (uint16_t)((depth - minDepth) * scale);
    // Split the gray value into an 8-bit position within a color range (val)
    // and a 2-bit color range selector (sel).
    const uint16_t val = gray >> 6 & 0xff;
    const uint16_t sel = gray >> 14;
    // Start in the first color range: red at full intensity, green ramping up.
    uint32_t res = val << 8 | 0xff;
    if (sel & 0x01)
    {
        // The second and fourth ranges reverse the ramp direction.
        res = (~res) >> 8 & 0xffff;
    }
    if (sel & 0x02)
    {
        // The upper two ranges shift from the red/green to the green/blue channels.
        res = res << 8;
    }
    const uint8_t r = res & 0xff;
    res = res >> 8;
    const uint8_t g = res & 0xff;
    res = res >> 8;
    const uint8_t b = res & 0xff;
}
```