# Classification vTool
The Classification vTool can classify objects in images it receives via the Image input pin. It outputs the results via the Classes and Scores output pins.
If you use the ROI Creator vTool to specify a region of interest before classification, the Classification vTool accepts the ROI data via its Roi input pin. In that case, classification is only performed on the region of interest. Because less image data has to be analyzed, this also speeds up processing.
## How It Works
Assume that you want to classify images to determine whether they contain a husky or a flamingo.
This is your input image showing a flamingo:
You configure the vTool by loading a model that has been trained to classify images into various categories, including Husky and Flamingo.
Next, you can select the classes you want to detect and classify. Some models may be tailored exactly for your use case. In that case, all desired classes are already selected.
After configuring the vTool, you run the recipe, which initiates the inference process. The vTool uses the model to analyze the image and identify the objects present. When the analysis has finished, the classification result is returned.
This is the resulting image in the pin data view:
Detailed output:

- Class: Flamingo
- Score: 0.687027

The result indicates that the model is 68.7% confident that the object in the image is a flamingo.
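Confidence scores like this are typically obtained by passing the model's raw outputs (logits) through a softmax, so the scores of all classes sum to 1. The following sketch is illustrative only; the logit values are made up (chosen so that the flamingo score lands near 0.687) and are not produced by any actual model:

```python
import math

def softmax(logits):
    """Convert raw model outputs (logits) into confidence scores that sum to 1."""
    m = max(logits)  # subtract the maximum for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the classes ["Husky", "Flamingo"]
scores = softmax([1.2, 1.985])
print(round(scores[1], 3))  # Flamingo confidence, approximately 0.687
```

The class with the highest score is reported as the classification result; the score itself is the model's confidence in that class.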
However, if the uploaded image contains a flamingo but the Flamingo class wasn't selected, the model won't return any results related to the flamingo. The vTool can only classify and return results for the classes that have been selected.
This functionality allows you to focus the classification on specific categories of interest, ensuring that only relevant results are displayed.
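The effect of class selection can be mimicked in a few lines: predictions whose class wasn't selected are simply dropped. The class names and scores below are illustrative, not actual vTool output:

```python
def filter_predictions(predictions, selected_classes):
    """Keep only predictions whose class has been selected."""
    return [(cls, score) for cls, score in predictions if cls in selected_classes]

# Hypothetical raw predictions from a model
predictions = [("Flamingo", 0.687), ("Husky", 0.313)]

# With Flamingo selected, the flamingo result is returned ...
print(filter_predictions(predictions, {"Flamingo", "Husky"}))
# ... without it, nothing related to the flamingo comes back.
print(filter_predictions(predictions, {"Husky"}))
```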
## Configuring the vTool
To configure the Classification vTool:
1. In the Recipe Management pane in the vTool Settings area, click Open Settings or double-click the vTool.
   The Classification dialog opens.
2. In the Server Selection area, select the server you want to connect to. The server must be running for you to be able to connect to it.
   - Option 1: Enter the IP address and port of your Triton Inference Server manually.
   - Option 2: Select the IP address / host name and port of your server in the drop-down list.
3. Click Connect to establish a connection.
   The vTool then loads the available models.
4. In the Model Selection area, select the bundle that contains the desired model in the drop-down list.
   After you have selected a bundle, the information below the drop-down list is populated with the bundle details, allowing you to check that you have selected the correct bundle.
   If the desired bundle isn't listed, click Deploy New Model in the drop-down list. This opens the pylon AI Agent, which allows you to deploy more models. After you have deployed a new model, click the Refresh button.
5. In the Detection Settings area, click Select Classes to select the classes you want to classify.
   A dialog with the available classes opens. By default, all classes are selected. You can select as many classes as you like.
   Info: The parameters available in this area depend on the model selected. Some parameters are only available after selecting a certain model. Default values are provided by the model.
6. Click OK.
You can view the result of the classification in a pin data view. Here, you can select which outputs to display.
## Inputs

### Image
Accepts images directly from a Camera vTool or from a vTool that outputs images, e.g., the Image Format Converter vTool.
- Data type: Image
- Image format: 8-bit to 16-bit color images
### Roi
Accepts a region of interest from the ROI Creator vTool or any other vTool that outputs rectangles. If multiple rectangles are input, the first one is used as a single region of interest.
- Data type: RectangleF, RectangleF Array
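To illustrate how a rectangular ROI narrows the data the classifier has to process, here is a rough sketch in plain Python. Nested lists stand in for image rows, and each rectangle is assumed to be an `(x, y, width, height)` tuple; the actual RectangleF layout in pylon may differ:

```python
def crop_to_roi(image, rois):
    """Crop an image (list of pixel rows) to the first ROI rectangle.

    If several rectangles arrive, only the first one is used,
    mirroring the behavior of the Roi input pin.
    """
    x, y, w, h = rois[0]
    return [row[x:x + w] for row in image[y:y + h]]

# A 4x4 "image" and two candidate ROIs; only the first is applied.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
roi_image = crop_to_roi(image, [(1, 1, 2, 2), (0, 0, 4, 4)])
print(roi_image)  # [[5, 6], [9, 10]]
```

Classification then runs on the cropped region only, which is why supplying an ROI reduces processing time.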
## Outputs

### Classes
Returns an n-by-1 vector of predicted class labels, where n is the number of classes detected in the image.
- Data type: String Array
### Scores
Returns the confidence scores for each predicted class.
- Data type: Float Array
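Since Classes and Scores are parallel arrays (the i-th score belongs to the i-th class label), downstream code typically zips them together. A sketch with made-up values:

```python
def top_results(classes, scores):
    """Pair each class label with its confidence score, highest score first."""
    return sorted(zip(classes, scores), key=lambda pair: pair[1], reverse=True)

# Hypothetical pin data: parallel Classes and Scores arrays
classes = ["Husky", "Flamingo"]
scores = [0.313, 0.687]
print(top_results(classes, scores))  # [('Flamingo', 0.687), ('Husky', 0.313)]
```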
### Payload
Returns additional information regarding the performance of the Inference Server for evaluation. This output pin only becomes available if you enable it in the Features - All pane in the Debug Settings parameter group.
- Data type: String