Using the Blob Analysis for Classification and ROI Selection
The example shows the capability of VisualApplets to use the Blob Analysis results for classification inside the applet, which significantly extends the image processing possibilities inside the frame grabber. Figure 1 shows a screenshot of the design. The main parts are the binarization, the Blob Analysis, the classification, and the cut-out of the ROI. These parts are explained in detail in the following.
The binarization used in this example is based on simple thresholding. The Blob Analysis assumes every white pixel to be an object pixel. Figure 2 shows that a foreground value range is selected using two threshold values. This allows the use of images where the objects are formed by dark pixels as well as images where the objects are brighter than their background. After the binarization, the image is buffered. From the buffer, the images are passed to the Blob Analysis and are also transferred to the host PC for monitoring the binarized images.
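VisualApplets designs are graphical, so the following Python sketch only illustrates the logic of this step, not the actual implementation; the threshold values and pixel data are illustrative assumptions.

```python
# Conceptual sketch (not VisualApplets code): binarization with a
# foreground value range selected by two thresholds, so either dark
# or bright pixels can be marked as white object pixels.

def binarize(image, lower, upper):
    """Return 1 for pixels inside [lower, upper] (object), else 0."""
    return [[1 if lower <= p <= upper else 0 for p in row] for row in image]

# Bright objects on a dark background: select the upper value range.
frame = [
    [10, 12, 200],
    [11, 220, 210],
]
binary = binarize(frame, lower=128, upper=255)
print(binary)  # [[0, 0, 1], [0, 1, 1]]
```

Selecting a low value range instead (e.g. `lower=0, upper=100`) would mark dark objects on a bright background.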
The Blob Analysis is performed after the binarization. Here, only the object features center of gravity in x- and y-direction as well as the area are used for further processing. All other features are not used in this example.
The applet continues with the classification of the Blob Analysis results (Figure 3). First, the center of gravity values are normalized by dividing them by the area. This is simply done using the DIV operator. In parallel, the object with the largest area is detected using the FrameMax operator and registers. The idea of the classification algorithm is to suppress every object whose area is smaller than that of the objects which have been inspected so far. In detail, the Blob Analysis outputs a stream of objects. The FrameMax operator detects whether the current object is larger than all objects inspected so far and outputs logic one at its output IsMax. This value is used at the Capture input of the registers. If the current object is larger than the previous objects, its center of gravity coordinates are latched to the output. Hence, the last output of the frame, i.e. of the blob stream, represents the center of gravity coordinates of the object which has the maximum area. Figure 3 illustrates this behavior.
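The normalize-and-latch scheme above can be sketched in Python; the record layout `(area, sum_x, sum_y)` is an illustrative assumption, not the actual Blob Analysis output format.

```python
# Conceptual sketch (not VisualApplets code) of the classification step:
# the Blob Analysis emits one record per object; the center of gravity
# is normalized by dividing the coordinate sums by the area (DIV), and
# a running maximum over the area (FrameMax) drives a capture of the
# coordinates (the registers).

def largest_blob_centroid(blob_stream):
    """Latch the centroid of the largest-area blob seen so far.

    After the whole frame has been processed, the latched value is the
    centroid of the object with the maximum area.
    """
    max_area = -1
    latched = None
    for area, sum_x, sum_y in blob_stream:
        cog = (sum_x / area, sum_y / area)   # DIV: normalize by area
        if area > max_area:                  # FrameMax: IsMax = 1
            max_area = area                  # update running maximum
            latched = cog                    # registers: capture on IsMax
    return latched

# Stream of (area, sum_x, sum_y) records for one frame.
blobs = [(4, 40, 8), (9, 180, 90), (2, 10, 2)]
print(largest_blob_centroid(blobs))  # (20.0, 10.0)
```

Note that smaller objects arriving after the maximum never trigger a capture, so the final latched value is exactly the last update, matching the "last value of the frame" behavior described above.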
The classification module now outputs the center of gravity coordinates of the largest object found in the image. Next, the coordinates have to be transformed into the four ROI coordinates, i.e. the offsets and lengths in x- and y-direction. Figure 4 shows the design of this transformation. The lengths, such as YLength, are set to constants, whereas the offsets are obtained by reducing the center of gravity coordinates by half of the ROI size.
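The transformation can be sketched as follows; the ROI dimensions and the clamping at the image border are illustrative assumptions.

```python
# Conceptual sketch (not VisualApplets code): derive the four ROI
# coordinates from the latched center of gravity. The lengths are
# constants (the ROI size); the offsets are the centroid reduced by
# half of the ROI size. ROI_WIDTH and ROI_HEIGHT are assumed values.

ROI_WIDTH, ROI_HEIGHT = 64, 48  # constant x and y lengths

def roi_from_centroid(cog_x, cog_y):
    """Return (x_offset, y_offset, x_length, y_length) centered on the CoG."""
    # Clamping at 0 is an added safeguard for objects near the border.
    x_offset = max(0, cog_x - ROI_WIDTH // 2)
    y_offset = max(0, cog_y - ROI_HEIGHT // 2)
    return (x_offset, y_offset, ROI_WIDTH, ROI_HEIGHT)

print(roi_from_centroid(100, 60))  # (68, 36, 64, 48)
```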
The four ROI values are now used to cut the ROI out of the original image using the DynamicROI operator. As explained previously, the classification module outputs the object with the maximum area as the last value of the frame. This is favorable, as the DynamicROI operator only considers the last ROI coordinates received. Hence, the correct coordinates are used and the largest object is cut out of the original image. The design file of this VisualApplets project can be found in folder Examples\Processing\BlobAnalysis\Blob2D_ROI_select in the VisualApplets installation path. The applet can be run in microDisplay and the results can be seen directly. Moreover, a small SDK project is added to the example. To use the SDK project, it is required to adapt the DMA size of the ROI to the size set by the ROI size modules, e.g. ROIsizeY. Use the display settings in microDisplay or the corresponding parameters, e.g. height1, in the SDK project to perform this adaptation.
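The cut-out step described above can be sketched in Python; this only illustrates the last-coordinates-win behavior of the cut-out, and the function name and data layout are illustrative assumptions.

```python
# Conceptual sketch (not VisualApplets code): like the DynamicROI
# operator, keep only the *last* ROI coordinates received for a frame
# and use them to cut the ROI out of the original image. Since the
# classification emits the largest object's ROI last, the last
# coordinates are the correct ones.

def cut_out(image, roi_stream):
    """Apply the last (x_off, y_off, x_len, y_len) in roi_stream."""
    x_off, y_off, x_len, y_len = roi_stream[-1]  # only the last ROI counts
    return [row[x_off:x_off + x_len] for row in image[y_off:y_off + y_len]]

# A 5x6 test frame whose pixel value encodes its coordinates (10*y + x).
frame = [[10 * y + x for x in range(6)] for y in range(5)]
# Earlier ROIs in the stream are superseded by the last one.
rois = [(0, 0, 2, 2), (3, 1, 2, 3)]
print(cut_out(frame, rois))  # [[13, 14], [23, 24], [33, 34]]
```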