
Template Matching vTool#

The Template Matching vTool allows you to detect objects in images. Detection is based on the pixels' gray value information using a taught template.

It is best suited for detecting objects with blurry edges or poorly defined borders.

The Template Matching vTool accepts images via the Image input pin and outputs information about the matches found via its output pins.

You can use the transformation data supplied by the Calibration vTool to output the positions of the matches in world coordinates in meters (the Positions_m output pin becomes available). To do so, use the Calibration vTool before matching and connect the Transformation output/input pins of both vTools.

Template Matching Basic vTool

How It Works#

By defining a template and applying it to your input image, you can gather the following information about the matches:

  • Matching scores
  • Positions
  • Orientations
  • Regions with visualizations of the matches

Template Definition#

The first step of template matching is the teaching of a model based on a representative image of the object to be detected by your application.

In this teaching image, mark the desired region and teach it. Your application uses this model to detect this object in your production environment.

The Template Matching Basic vTool only detects objects that are completely visible in the image. That means the region marked in the template image must lie completely in the search image at the matching position. Therefore, make sure to be very precise when marking the desired object and use the smallest pen possible.

Objects must be exactly the same size as, and lie at the same distance to the camera as, the object used to teach the model.

Template matching is based directly on the gray value information of the model and the search image. Template matching uses a normalized cross-correlation algorithm.
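The scoring principle of normalized cross-correlation can be sketched in a few lines of NumPy. This is only an illustration of the idea, not the vTool's actual (and far more optimized) implementation; all function names here are made up for the example.

```python
import numpy as np

def ncc_score(template: np.ndarray, window: np.ndarray) -> float:
    """Normalized cross-correlation between a template and an equally
    sized image window (gray values). Returns a value in [-1, 1];
    1 means a perfect match up to brightness and contrast."""
    t = template.astype(float) - template.mean()
    w = window.astype(float) - window.mean()
    denom = np.sqrt((t * t).sum() * (w * w).sum())
    if denom == 0:
        return 0.0  # flat region, no correlation defined
    return float((t * w).sum() / denom)

def match_template(image: np.ndarray, template: np.ndarray):
    """Exhaustive search: slide the template over the image and
    return the best score and its top-left position (row, col)."""
    th, tw = template.shape
    best = (-2.0, (0, 0))
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc_score(template, image[r:r + th, c:c + tw])
            if s > best[0]:
                best = (s, (r, c))
    return best
```

Because the score is normalized, it is robust against global brightness and contrast changes between the teaching image and the search image.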

Info

  • Choose a representative image as a template.
  • Mark the region of the desired object in the template image. Choose regions that are characteristic of the object and that are generally present in the search images.

Using the Reference Point#

You can decide to display a reference point in the region you've marked. By default, the reference point is placed at the center of gravity of your region. For most applications, this works well.

If required, however, you can move the reference point to a different position. An example would be a robotic picking application. In such a scenario, the vision system has to determine the gripping position for the robot, taking into account the robot's gripper and the shape of the objects to pick. Here, you could place the reference point so that it accommodates both these aspects.

To display the reference point, you must mark a template region in the teaching image first.
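For a picking scenario like the one above, a downstream step would typically rotate the taught reference-point offset by the match orientation and add it to the match position. The following sketch assumes the offset is given in pixels relative to the match position and that positive angles mean counterclockwise rotation; names and conventions are illustrative, not part of the vTool's interface.

```python
import math

def reference_point_at_match(match_pos, match_angle_rad, ref_offset):
    """Return the reference point for one match: rotate the taught
    offset by the match orientation, then translate to the match
    position. Assumed convention: counterclockwise-positive angles."""
    dx, dy = ref_offset
    c, s = math.cos(match_angle_rad), math.sin(match_angle_rad)
    x, y = match_pos
    return (x + c * dx - s * dy, y + s * dx + c * dy)
```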

Execution Settings#

  • Matches: If the application is such that only a limited number of objects can occur, you may want to restrict the possible number of matches to exactly this number. This will increase processing speed and robustness.
  • Score: A score value is calculated for every match. This score is the correlation coefficient between the region in the model and the region matched in the search image.
    To find the optimum Score setting, start with a moderate value and run some test images. Observe the output values of the Scores pin. Allow for some tolerance and choose a score value between 50–80 % of the lowest score output value determined on the test images. If the score is set too low, unwanted matches may be detected as well. In that case, readjust the score value until only the intended templates are found in the test images.
  • Timeout: For time-critical applications you can specify a timeout. A timeout helps in cases where the processing time varies or can't be foreseen, for example, if the image doesn't contain the object taught. By specifying a timeout, the detection process stops after the specified timeout and moves on to the next image.
    Basler recommends setting the timeout a bit lower than the desired time limit. This is because the vTool doesn't stop the detection process immediately. Therefore, set the timeout to 70–80 % of the desired time limit.
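The two tuning rules above (Score at 50–80 % of the lowest observed score, timeout at 70–80 % of the time budget) can be written down as small helpers. This is a sketch of the recommended procedure, not part of any API; the function names and defaults are made up for the example.

```python
def recommended_score(lowest_test_scores, fraction=0.65):
    """Pick a Score threshold from the lowest per-image scores observed
    on test images. The guideline is 50-80 % of the lowest observed
    score; 0.65 is an arbitrary middle-of-the-road default."""
    if not 0.5 <= fraction <= 0.8:
        raise ValueError("fraction should stay within 0.5-0.8")
    return min(lowest_test_scores) * fraction

def recommended_timeout(desired_limit_ms, fraction=0.75):
    """Set the vTool timeout below the hard time budget, because the
    detection process doesn't stop instantly; 70-80 % is suggested."""
    return desired_limit_ms * fraction
```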

Using Calibration#

The resulting positions of the matches can also be output in world coordinates in meters. To achieve this, connect the Transformation output pin of the Calibration vTool to the Template Matching vTool's Transformation input pin.

You must use the same calibration configuration for setting up the Template Matching vTool as for the actual processing.
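The format of the Transformation data exchanged between the vTools is opaque, but the underlying idea of a planar calibration can be illustrated with a 3×3 homography that maps pixel coordinates to metric coordinates on the calibrated plane. This is a generic sketch of the projective mapping, not the vTool's actual data or API.

```python
import numpy as np

def pixel_to_world(H: np.ndarray, col: float, row: float):
    """Map a pixel position to plane coordinates in meters using a
    3x3 homography H, as produced by a planar calibration. The point
    is lifted to homogeneous coordinates, transformed, and
    dehomogenized."""
    p = H @ np.array([col, row, 1.0])
    return float(p[0] / p[2]), float(p[1] / p[2])
```

This mapping is only valid for points on the calibrated plane, which is why the calibration plate must be placed in the plane of the objects during setup.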

Info

  • The surfaces of the objects in three-dimensional world space that produce the two-dimensional images must lie, at least approximately, on the same plane in space. If this is the case, the effect of perspective distortion due to different positions of the object in the world and the edges in the image is negligible.
  • Wide-angle lenses introduce significant perspective distortion due to the optical setup. For these lenses, the objects of inspection should be low in height relative to the object-to-camera distance. Alternatively, if the object height can't be neglected, choose a region in the template image at the same height level.
  • Register the plane of the object by placing the calibration plate's surface in this plane during calibration setup.

Common Use Cases#

  • Counting objects: In this case, only the number of elements at the Scores output pin is relevant.
  • Locating objects in image coordinates: Use the Positions_px and Angles_rad output pins for subsequent processing.
  • Locating objects in world coordinates: Use the Positions_m and Angles_rad output pins to transmit the positions of objects, e.g., to a robot to grasp the objects.
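For the locating use cases, the output pins deliver parallel arrays: entry i of Scores, Positions_px (or Positions_m), and Angles_rad all belong to match i. A downstream consumer, e.g. a robot interface, might zip them into per-match records like this sketch; the pin names mirror the doc, but the data layout and record format are assumptions.

```python
import math

def summarize_matches(scores, positions, angles_rad):
    """Combine the parallel per-match output arrays into one record
    per match. len(scores) is also the object count for counting
    applications."""
    return [
        {"score": s, "position": p, "angle_deg": math.degrees(a)}
        for s, p, a in zip(scores, positions, angles_rad)
    ]
```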

Configuring the vTool#

To configure the Template Matching Basic vTool:

Template Matching Basic vTool Settings

  1. In the Recipe Management pane in the vTool Settings area, click Open Settings or double-click the vTool.
    The Template Matching Basic dialog opens.
  2. Capture or open a teaching image.
    You can either use the Single Shot button to grab a live image or click the Open Image button to open an existing image.
  3. Use the pens to mark the desired region in the image. To correct the drawing, use the eraser or delete the drawing completely.
  4. Teach the matching model by clicking the Teach button in the toolbar.
  5. If required, display the reference point by clicking Show Reference Point.
    The reference point is displayed at the center of the template region and the Set Reference Point Manually button becomes available. You can now drag the reference point to the desired position. If you click Set Reference Point Manually again, the reference point snaps back to its default position.
  6. If desired, adjust the Matches setting in the Execution Settings area to limit the number of matches to a maximum.
  7. If necessary, adjust the Score setting so that only the target objects are found.
  8. If you want to specify a timeout, clear the No timeout check box and enter the desired timeout in the input field.

You can view the result of the template matching in a pin data view. Here, you can select which outputs to display.

Inputs#

Image#

Accepts images directly from a Camera vTool or from a vTool that outputs images, e.g., the Image Format Converter vTool.

  • Data type: Image
  • Image format: 8-bit to 16-bit mono or color images. Color images are converted internally to mono images.

Transformation#

Accepts transformation data from the Calibration vTool.

  • Data type: Transformation Data

Outputs#

Scores#

Returns the scores of the matches.

  • Data type: Float Array

Positions_px#

Returns the positions of the matches in image coordinates (row/column).

  • Data type: PointF Array

Positions_m#

Returns the positions of the matches in world coordinates in meters (x/y).

  • Data type: PointF Array

Angles_rad#

Returns the orientations of the matches in radians.

  • Data type: Float Array

Regions#

Returns a region for every match found.

  • Data type: Region Array

Typical Predecessors#