AzureComputerVisionConnector

Accesses the Azure Computer Vision Service to detect objects in images.


Typical Uses

Submitting images to the Azure Computer Vision service to

  • detect individual objects
  • describe the general contents

How does it work?

The AzureComputerVisionConnector uses your Azure Cognitive Services account credentials (either via a previously defined FME web connection, or by setting up a new FME web connection right from the transformer) to access the service.

It submits images to the service and returns features with attributes that describe the contents of each image. The supported services are object detection, text detection, and face detection.

  • For object detection, if the service is able to identify the exact location of an object in the image, a bounding box geometry will also be returned.
  • Text detection will always return bounding boxes around the detected text.
  • For face detection, if the service is able to identify the exact location of a face in the image, a bounding box geometry will also be returned. There is also the option to detect and locate facial landmarks.
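To make the bounding-box behavior concrete, here is a minimal sketch of turning an object-detection response into bounding-box records, similar in spirit to what the connector attaches to output features. The JSON shape shown (an "objects" list with pixel-space rectangles) follows the Azure Computer Vision Analyze Image response, but treat it as an assumption rather than a guarantee for every API version; the sample values are hypothetical.

```python
def objects_to_bboxes(response):
    """Extract (label, confidence, bbox) tuples from a detection response.

    bbox is (xmin, ymin, xmax, ymax) in pixel coordinates, derived from the
    service's x/y/w/h rectangle.
    """
    boxes = []
    for obj in response.get("objects", []):
        rect = obj["rectangle"]
        xmin, ymin = rect["x"], rect["y"]
        xmax, ymax = xmin + rect["w"], ymin + rect["h"]
        boxes.append((obj["object"], obj["confidence"],
                      (xmin, ymin, xmax, ymax)))
    return boxes

# Hypothetical response fragment:
sample = {
    "objects": [
        {"rectangle": {"x": 25, "y": 43, "w": 210, "h": 286},
         "object": "dog", "confidence": 0.91},
    ]
}

print(objects_to_bboxes(sample))
# -> [('dog', 0.91, (25, 43, 235, 329))]
```

When the service cannot locate an object (or for purely descriptive results), no rectangle is returned, which is why the connector only attaches bounding-box geometry when the exact location is identified.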

Usage Notes

  • For better performance, requests to the Computer Vision service are made in parallel, and are returned as soon as they complete. Consequently, detection results will not be returned in the same order as their associated requests.
  • While powerful, the use of AI has important legal and ethical implications. Consult your local AI legislation and ethical guidelines before applying the AzureComputerVisionConnector in a production environment. For information about privacy and compliance with respect to Azure Cognitive Services, please see https://azure.microsoft.com/en-ca/support/legal/cognitive-services-compliance-and-privacy.
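The out-of-order completion noted in the first usage note can be illustrated with a small sketch. Each call below stands in for a Computer Vision request; slower requests finish later, so results stream back in completion order rather than submission order. The function names and delays are illustrative only.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def fake_request(image_id, delay):
    """Stand-in for a detection request with simulated network latency."""
    time.sleep(delay)
    return image_id

with ThreadPoolExecutor(max_workers=3) as pool:
    # Image 0 is given the longest "latency", image 2 the shortest.
    futures = [pool.submit(fake_request, i, delay)
               for i, delay in enumerate([0.3, 0.2, 0.1])]
    # as_completed yields futures as they finish, not as they were submitted.
    results = [f.result() for f in as_completed(futures)]

# Every result arrives, but not necessarily as [0, 1, 2]:
print(sorted(results))
```

If downstream processing depends on the original order, sort or re-join the results on a request identifier after the connector.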

Configuration

Input Ports

Output Ports

Parameters

The available parameters depend on the value of the Request > Detection Type parameter. Parameters for each Detection Type are detailed below.

Editing Transformer Parameters

Using a set of menu options, transformer parameters can be assigned by referencing other elements in the workspace. More advanced functions, such as an advanced editor and an arithmetic editor, are also available in some transformers. To access a menu of these options, click beside the applicable parameter. For more information, see Transformer Parameter Menu Options.

Defining Values

There are several ways to define a value for use in a transformer. The simplest is to type in a value or string, which can include functions of various types, such as attribute references, math and string functions, and workspace parameters. A number of tools and shortcuts can assist in constructing values, generally available from the drop-down context menu adjacent to the value field.

Dialog Options - Tables

Transformers with table-style parameters have additional tools for populating and manipulating values.

Reference

Processing Behavior: Feature-Based
Feature Holding: No
Dependencies: Azure Cognitive Services Account
Aliases: (none)
History: Released in FME 2019.2

FME Community

The FME Community is the place for demos, how-tos, articles, FAQs, and more. Get answers to your questions, learn from other users, and suggest, vote, and comment on new features.

Search for all results about the AzureComputerVisionConnector on the FME Community.


Examples may contain information licensed under the Open Government Licence – Vancouver and/or the Open Government Licence – Canada.