pylon AI Agent#

The pylon AI Agent is a standalone application that helps you manage and deploy your deep learning (DL) models to the pylon AI vTools in your application.

DL models are packaged in bundles, together with hardware configurations, etc., and released via the pylon AI Platform. After a bundle has been released, use the pylon AI Agent to deploy the models it contains for pylon AI vTools on the Triton Inference Server.

The Triton Inference Server handles the inference process. In the context of deep learning, inference means that a trained model is applied to new, unseen data and makes predictions about the content.
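The pylon AI vTools create and send these inference requests for you, so no manual coding is required. Purely to illustrate what such a request looks like, here is a minimal sketch using NVIDIA's tritonclient Python package; the server address, model name, and the input/output tensor names, shapes, and datatypes are assumptions and depend on the model you deploy.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton Inference Server (assumed to listen on localhost:8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a single input tensor. The model name, tensor names, shape, and
# datatype below are placeholders; use the values of your deployed model.
image = np.zeros((1, 3, 224, 224), dtype=np.float32)
infer_input = httpclient.InferInput("INPUT__0", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

# Send the inference request and read the prediction back as a NumPy array.
response = client.infer(
    model_name="my_classification_model",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("OUTPUT__0")],
)
scores = response.as_numpy("OUTPUT__0")
print(scores.shape)
```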

Overview of the pylon AI Agent#

There are two ways to start the AI Agent:

  • From the Workbench menu of the pylon Viewer
  • From a vTool's settings dialog

Both methods open the AI Agent on the Deployed Models tab. Here, you can check which models have already been deployed.

pylon AI Agent - Deployed Models Tab

To deploy models, you can choose between deployment via the pylon AI Platform or from storage, e.g., your hard drive or a flash drive.

Deploying via the pylon AI Platform#

If you want to deploy models via the pylon AI Platform, you have to be aware of the following:

  • If you have opened the AI Agent from the Workbench menu, you see all available models for all vision tasks on the Deploy via pylon AI Platform tab.
    pylon AI Agent Opened from Workbench Menu - Deploy via Platform

  • If you have opened the AI Agent from the settings dialog of a vTool, you only see models that are suitable for the vTool you are currently configuring. For the following screenshot, the AI Agent has been opened from the settings dialog of the Classification vTool.
    pylon AI Agent Opened from vTool Settings Dialog - Deploy via Platform

Deploying from Storage#

If you want to deploy models from storage, you first have to download the bundles that contain the desired models from the pylon AI Platform to a storage medium of your choice.

pylon AI Agent - Deploy from Storage Tab

Deploying Models#

  1. Start the Triton Inference Server.
    For instructions, see the Install document included in the pylon Supplementary Package for pylon AI.
  2. Start the pylon AI Agent by one of the following methods:
    • From the Workbench menu of the pylon Viewer
    • From a vTool's settings dialog
      The pylon AI Agent opens on the Deployed Models tab. Here, you can check which models have already been deployed.
  3. Decide whether you want to deploy new models from storage or via the pylon AI Platform.

    • Deploy from Storage

      1. Download the desired bundle from the pylon AI Platform if you haven't done so already.
      2. Go to the Deploy from Storage tab.
      3. Either drag and drop the bundle onto the drop area or click the folder icon to browse to the desired bundle file.
        This can be a local or remote storage location.
      4. Click Deploy.
      5. In the Select Server dialog, select the Triton Inference Server on which you want to deploy the models.
      6. Click OK.
        The deployment starts immediately. A message appears when the deployment has completed.
    • Deploy via pylon AI Platform

      1. Go to the Deploy via pylon AI Platform tab.
      2. Log in to the pylon AI Platform by clicking the Log in button.
        You're taken to the AI Platform, where you can sign in. Afterwards, the web page should close and you should be returned to the AI Agent automatically. If that doesn't happen, switch back to the AI Agent manually to continue.
      3. Select one or more of the bundles listed in the table.
      4. Click Deploy.
      5. In the Select Server dialog, select the Triton Inference Server on which you want to deploy the models.
      6. Click OK.
        The deployment starts immediately. A message appears when the deployment has completed. If you want to verify the deployment from a script, see the sketch after these instructions.
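After the deployment has completed, you can optionally double-check the result outside the AI Agent. The following sketch, again using the tritonclient Python package, assumes the Triton Inference Server listens on localhost:8000; the model name is a placeholder.

```python
import tritonclient.http as httpclient

# Connect to the Triton Inference Server (assumed to listen on localhost:8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# The server must be live and ready before it accepts inference requests.
print("Server live: ", client.is_server_live())
print("Server ready:", client.is_server_ready())

# List all models in the server's model repository together with their state.
for model in client.get_model_repository_index():
    print(model.get("name"), model.get("state"))

# Check whether a specific model (placeholder name) is ready for inference.
print(client.is_model_ready("my_classification_model"))
```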

General Considerations#

  • Model naming: You can't deploy multiple models with the same name on the Triton Inference Server. Each model must have a unique name within the server's model repository because Triton uses the model name as the identifier to manage and route inference requests to the appropriate model; with two identically named models, the server couldn't tell them apart.
  • Network latency and bandwidth: Ensure sufficient bandwidth is available. Large models can be time-consuming to download, especially over a network with limited bandwidth.
  • Sufficient local storage: The Triton Inference Server needs sufficient local storage to cache or temporarily store the downloaded models. If local storage is limited, you may run into issues when deploying large models or multiple models at the same time.
  • Running server and vTools remotely: If you're running the Triton Inference Server on a remote machine and are also running the AI vTools from another location, be aware of the following important restrictions and considerations:
    • The time it takes for data to travel between the vTools making the inference request and the Triton Inference Server can impact the overall response time.
    • High latency can slow down the inference process, especially for real-time applications. The sketch after this list shows a simple way to measure the round-trip time.
    • Limited network bandwidth can lead to slow or unreliable communication, particularly when transferring large amounts of data.
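If the server and the vTools run on different machines, it can be useful to get a rough feel for the round-trip time before going into production. The sketch below simply times a handful of requests with the tritonclient Python package; the host name, model name, and tensor details are placeholders that you would adjust to your setup.

```python
import time
import numpy as np
import tritonclient.http as httpclient

# Placeholder connection and model details; adjust them to your setup.
client = httpclient.InferenceServerClient(url="remote-host:8000")
image = np.zeros((1, 3, 224, 224), dtype=np.float32)
infer_input = httpclient.InferInput("INPUT__0", list(image.shape), "FP32")
infer_input.set_data_from_numpy(image)

# Time a few round trips; each result includes network transfer and inference.
timings = []
for _ in range(10):
    start = time.perf_counter()
    client.infer(model_name="my_classification_model", inputs=[infer_input])
    timings.append(time.perf_counter() - start)

print(f"mean round trip: {1000 * sum(timings) / len(timings):.1f} ms")
```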