RasterObjectDetector
Accepts a raster input and outputs rectangular geometries outlining the detected object(s). The transformer uses OpenCV’s Cascade Classifier for object detection and allows for selection of various object types and detection models or classifiers. Each classifier is trained to detect a specific object, for instance: human bodies, faces, and eyes. Multiple classifiers can be used in the same transformer on the same source raster(s) to produce different sets of results, grouped by detection model.
Detection models use a detection kernel window that is moved across the entire raster. If the pixel pattern in a specific area of the raster matches the kernel "sufficiently", that area is treated as a detected object. For the purposes of matching, the kernel and the source raster are rescaled so that objects both smaller and larger than the kernel's native size can be detected.
A rough bounding box of each detected object is attached to its own feature and output via the Detected port. The detection parameters (scaling factor, minimum number of neighbors, and detection object sizes) work together to balance the number of objects detected, processing speed, and detection accuracy. See the Parameters section for more details.
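For readers familiar with OpenCV, the following minimal Python sketch (not FME code; the cascade file, image path, and parameter values are illustrative only) shows how this cascade-based detection and its tuning parameters map onto OpenCV's API.

```python
# Minimal sketch of the underlying OpenCV workflow (not FME code).
import cv2

# Load a pre-trained cascade classifier shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")                 # source raster (illustrative path)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # detection runs on grayscale

# scaleFactor, minNeighbors, and minSize correspond roughly to the
# Scale Factor Percent, Minimum Number of Neighbors, and detection size
# parameters described below.
boxes = cascade.detectMultiScale(
    gray,
    scaleFactor=1.03,   # 3% scaling step
    minNeighbors=2,     # default neighbor count
    minSize=(30, 30))   # ignore very small candidates

for (x, y, w, h) in boxes:  # each box is a rough bounding rectangle
    print(x, y, w, h)
```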
Input Ports
The transformer accepts input features with raster geometries. All features with non-raster geometry or invalid raster geometry will be rejected. Input raster geometries may be images or other types of data, but must have between 1 and 4 bands, all either 8-bit or 16-bit, with one of the allowed interpretations. The input raster is consumed in the process. Accepted raster interpretations are Gray8, Gray16, GrayAlpha16, GrayAlpha32, RGB24, RGB48, RGBA32, and RGBA64.
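These band requirements reflect the fact that OpenCV's cascade detection ultimately operates on single-channel 8-bit imagery. The sketch below is illustrative only (the file name and conversion strategy are assumptions, not the transformer's internal logic) and shows one way such rasters could be reduced before detection.

```python
import cv2
import numpy as np

raster = cv2.imread("input.tif", cv2.IMREAD_UNCHANGED)  # hypothetical input

# OpenCV's cascade classifier detects on single-channel 8-bit imagery,
# so multi-band or 16-bit rasters must first be reduced.
if raster.dtype == np.uint16:
    raster = (raster / 256).astype(np.uint8)     # 16-bit -> 8-bit
if raster.ndim == 3 and raster.shape[2] == 4:    # RGBA32 / RGBA64
    raster = cv2.cvtColor(raster, cv2.COLOR_BGRA2GRAY)
elif raster.ndim == 3 and raster.shape[2] == 3:  # RGB24 / RGB48
    raster = cv2.cvtColor(raster, cv2.COLOR_BGR2GRAY)
# GrayAlpha16/GrayAlpha32 would similarly drop the alpha band; Gray8 is used as-is.
```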
Output Ports
For each detected object in the input raster geometry, a feature will be produced containing a rectangular geometry that represents a bounding box of the detected object. If the matched portion of the input raster is desired, consider using a Clipper transformer after the RasterObjectDetector and routing the detected boxes into the input Clipper port and the input raster into the Clippee port.
Output detected features are tagged with an attribute (named _detected_object_type by default) whose value is the name of the detection model that produced that particular feature (for example, ‘LBP - Frontal Face’). The attribute name can be changed using the Detected Attribute Name parameter.
The fme_basename attribute may also be useful in determining the source raster for the output detected geometries.
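Outside of FME, the Clipper-style extraction of the matched raster portion corresponds to simply cropping each bounding box from the image. A minimal OpenCV sketch (cascade and image paths are illustrative):

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for i, (x, y, w, h) in enumerate(cascade.detectMultiScale(gray)):
    clip = image[y:y + h, x:x + w]          # raster portion inside the box
    cv2.imwrite(f"detected_{i}.png", clip)  # one output per detected object
```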
Non-raster features or features with invalid raster geometries are output through the <Rejected> port.
Rejected Feature Handling: can be set to either terminate the translation or continue running when it encounters a rejected feature. This setting is available both as a default FME option and as a workspace parameter.
Parameters
General
This parameter determines the group of objects to be detected:
- Face
- Body
- Animal
- Object
- Custom
For each detection type, multiple detection models can be selected using the Detection Model parameter.
Specifies whether the detected object’s output features should retain attributes from the input raster feature. The default is to preserve input attributes.
This parameter determines the name of the attribute that will be used to tag each detected object’s feature with the detection model’s name that produced the object. By default this attribute will be named _detected_object_type.
Detection Model
These parameters allow the user to choose multiple detection models under a single detection type.
The transformer offers two broad approaches to object detection: Haar feature-based cascade classifiers and Local Binary Patterns (LBP).
Haar feature-based cascade classification is an object detection method in which a cascade function is trained from a large sample of positive and negative images, from which features describing the image are extracted. In this context, the word “cascade” indicates that the classifier consists of a number of chained simpler classifiers. A very large set of defining features is required to classify or detect an object, so this method is generally slightly slower than LBP.
https://en.wikipedia.org/wiki/Haar-like_feature
https://docs.opencv.org/3.4/d5/d54/group__objdetect.html
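To illustrate what a single Haar-like feature is, the sketch below computes a simplified two-rectangle feature (the difference between adjacent rectangular pixel sums) using an integral image. It is an illustration only, not OpenCV's implementation.

```python
import numpy as np

def haar_two_rect_feature(gray, x, y, w, h):
    """Value of a simple two-rectangle Haar-like feature: the difference between
    the pixel sums of the left and right halves of a w x h window at (x, y)."""
    # Integral image: cumulative sums over rows and columns.
    ii = np.cumsum(np.cumsum(gray.astype(np.int64), axis=0), axis=1)

    def rect_sum(x0, y0, x1, y1):
        # Sum of pixels in the half-open rectangle [x0, x1) x [y0, y1).
        total = ii[y1 - 1, x1 - 1]
        if x0 > 0:
            total -= ii[y1 - 1, x0 - 1]
        if y0 > 0:
            total -= ii[y0 - 1, x1 - 1]
        if x0 > 0 and y0 > 0:
            total += ii[y0 - 1, x0 - 1]
        return total

    half = w // 2
    return rect_sum(x, y, x + half, y + h) - rect_sum(x + half, y, x + w, y + h)
```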
Local Binary Patterns use the differences between a particular cell and its surrounding neighbors, at a specified window size. For each cell, all the neighbors around the center cell are analyzed (first 1 cell away, then 2, and so on) and their difference from the center is calculated. The results are accumulated into a histogram of how frequently each neighborhood pattern occurs.
https://en.wikipedia.org/wiki/Local_binary_patterns
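The basic 3x3 LBP operator described above can be sketched as follows. This is a simplified illustration; the classifier itself uses a trained cascade over such features rather than this raw histogram.

```python
import numpy as np

def lbp_code(window):
    """Compute the basic 8-neighbor LBP code for a 3x3 window's center pixel."""
    center = window[1, 1]
    # Neighbors in clockwise order starting from the top-left corner.
    neighbors = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                 window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    # Each neighbor contributes one bit: 1 if it is >= the center value.
    bits = [(1 if n >= center else 0) for n in neighbors]
    return sum(bit << i for i, bit in enumerate(bits))

def lbp_histogram(gray):
    """Histogram of LBP codes over a single-channel 8-bit image."""
    h, w = gray.shape
    hist = np.zeros(256, dtype=np.int64)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            hist[lbp_code(gray[r - 1:r + 2, c - 1:c + 2])] += 1
    return hist
```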
Categorized list of built-in detection models:
Face detection models:

Detection Model | Size WxH (px) | Description
---|---|---
Haar - Eye | 20x20 | Stump-based frontal eye detector.
Haar - Eye Tree Eyeglasses | 20x20 | Tree-based frontal eye detector with better handling of eyeglasses.
Haar - Frontal Face Alt | 20x20 | Stump-based frontal face detector with gentle Adaptive Boosting.
Haar - Frontal Face Alt Tree | 20x20 | Stump-based frontal face detector with gentle Adaptive Boosting. The detector uses a tree of stage classifiers instead of a cascade.
Haar - Frontal Face Alt 2 | 20x20 | Stump-based discrete frontal face detector with Adaptive Boosting.
Haar - Frontal Face Default | 24x24 |
Haar - Profile Face | 20x20 | Profile face detector.
Haar - Left Eye 2Splits | 20x20 | Tree-based eye detector.
Haar - Right Eye 2Splits | 20x20 | Tree-based eye detector.
Haar - Smile | 18x36 | Smile detector. Improved results can be achieved by first detecting a face and supplying that region to the smile detector (see the sketch after this table).
LBP - Frontal Face | 24x24 | 24x24 frontal face detector.
LBP - Frontal Face Improved | 45x45 | 45x45 frontal face detector.
LBP - Profile Face | 20x34 | 20x34 profile face detector using LBP features. Only detects faces rotated to the right; flip the image horizontally to detect left-facing profiles.
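As noted for the smile detector, results improve when the search is restricted to a previously detected face. A minimal OpenCV sketch of that two-stage approach (the cascades are OpenCV's stock models; the image path and parameter values are illustrative):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

image = cv2.imread("portrait.jpg")  # hypothetical input
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
    face = gray[y:y + h, x:x + w]                    # restrict the search to the face
    smiles = smile_cascade.detectMultiScale(face, 1.1, 20)
    print(f"face at ({x},{y}): {len(smiles)} smile(s) found")
```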
Body detection models:

Detection Model | Size WxH (px) | Description
---|---|---
Haar - Fullbody | 22x18 | Full body detector. Only supports frontal and back views, not side views. The outline also includes a small amount of background to ensure proper silhouette representation.
Haar - Lowerbody | 19x23 | Lower body detector. Shares the same limitations as the Full Body detector.
Haar - Upper Body | 18x22 | Upper body detector. Shares the same limitations as the Full Body detector. One of the better performing detectors.
Animal detection models:

Detection Model | Size WxH (px) | Description
---|---|---
Haar - Frontal Cat Face | 24x24 | Frontal cat face detector using the full set of Haar features (horizontal, vertical, and diagonal).
Haar - Frontal Cat Face Extended | 24x24 | An upright subject is assumed. In situations where the cat's face might be sideways or upside down (for example, the cat is rolling over), try various rotations of the input image (see the sketch after this table).
LBP - Frontal Cat Face | 24x24 |
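As suggested for the extended cat face model, rotated subjects can be handled by rerunning detection on rotated copies of the raster. A minimal OpenCV sketch (cascade and image names are illustrative):

```python
import cv2

cat_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalcatface_extended.xml")
gray = cv2.cvtColor(cv2.imread("cat.jpg"), cv2.COLOR_BGR2GRAY)  # hypothetical input

rotations = [None, cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_180,
             cv2.ROTATE_90_COUNTERCLOCKWISE]
for rot in rotations:
    candidate = gray if rot is None else cv2.rotate(gray, rot)
    boxes = cat_cascade.detectMultiScale(candidate)
    if len(boxes) > 0:
        print(f"rotation {rot}: {len(boxes)} cat face(s)")
```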
Object detection models:

Detection Model | Size WxH (px) | Description
---|---|---
Haar - 16 Stage Russian License Plate | 64x16 | Russian license plate number detector.
Haar - Russian Plate Number | 20x60 |
LBP - Silverware | 12x80 | 12x80 silverware detector (forks, spoons, knives) using LBP features. Only detects vertically oriented silverware.
This parameter allows you to supply a custom object detection model. You can read further about training your own model in the official OpenCV Documentation.
Advanced
The native detection window of a detection model is often small; therefore, the input raster is scaled down in an attempt to detect larger objects. The scale factor determines, as a percentage, how much the raster is scaled down at each step, ranging from 1% to 300%, inclusive. Object detection is performed at each scale of the raster, but not in between scales. In other words, if the scaling factor is 100%, detection happens on the original raster, followed by the raster scaled down 2x, 4x, and so on.
Scaling Factor Percent | Actual Scaling Value Used
---|---
3% | 1.03
15% | 1.15
100% | 2.00
150% | 2.50
The table above shows some of the values used to scale down the raster. The default is 3%.
If the scaling factor is small, there is a higher chance of finding an object. However, since objects are being looked for at a more granular scale, the transformer might take longer to process the raster. With higher granularity also comes the potential for more noise or false-positive detections, which other transformer parameters can help reduce. The opposite is true for larger scaling factors.
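The relationship between the percentage and the scales visited can be sketched as follows (illustrative arithmetic only; the function and variable names are hypothetical):

```python
# How a Scale Factor Percent maps onto the sequence of raster scales
# that detection steps through.
def detection_scales(raster_width, raster_height, window_w, window_h, percent):
    factor = 1.0 + percent / 100.0   # e.g. 3% -> 1.03, 100% -> 2.00
    scale = 1.0
    scales = []
    # Keep shrinking the raster until it is smaller than the detection window.
    while raster_width / scale >= window_w and raster_height / scale >= window_h:
        scales.append(round(scale, 3))
        scale *= factor
    return scales

# A 24x24 face model on a 1920x1080 raster: the 3% default visits many scales,
# while 100% only visits the original raster and its 2x, 4x, ... reductions.
print(len(detection_scales(1920, 1080, 24, 24, 3)))   # fine-grained pyramid
print(detection_scales(1920, 1080, 24, 24, 100))       # [1.0, 2.0, 4.0, ...]
```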
When a detection (kernel) window is being moved across the raster, an object might be detected multiple times in the same area. These similar-area detections are called neighbors. Minimum Number of Neighbors specifies how many neighbors each candidate detected object requires before it is accepted as a valid detected object. The default is 2 neighbors.
When the Minimum Number of Neighbors is 0, all candidate detections are retained, including overlapping duplicates and false positives, so the confidence in each match will be low.
When the Minimum Number of Neighbors is greater than 0, the algorithm will retain a detected object only if it has at least the specified number of neighbors, thus increasing confidence in each object that is output.
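In OpenCV terms, this corresponds to the minNeighbors argument of detectMultiScale. The sketch below (stock OpenCV cascade, illustrative image path) shows how raising it thins out the raw candidates:

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(cv2.imread("group_photo.jpg"), cv2.COLOR_BGR2GRAY)  # hypothetical

# minNeighbors=0 keeps every raw candidate, including overlapping duplicates
# and false positives; raising it keeps only well-supported detections.
for n in (0, 2, 6):
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.03, minNeighbors=n)
    print(f"minNeighbors={n}: {len(boxes)} detections")
```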
This parameter determines how the minimum and maximum object detection size parameters are interpreted:
- Percent: Object size parameters will be treated as percentages relative to the size of the source raster. This is the default.
- Pixels: The object size parameters will be treated as exact pixel values.
Minimum and maximum object detection sizes specify the size limits for detected objects; objects smaller than the minimum or larger than the maximum will be ignored. If no values are provided, detection happens at all scales defined by the Scale Factor Percent parameter. If the minimum and maximum sizes are specified and are the same, detection happens only at that size.
Specifying the minimum size can greatly improve detection performance. The maximum size is often unnecessary but can also affect performance. By default, the maximum size is unset and the minimum width and height are 4%.
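A small sketch of how percent-based limits translate into the pixel sizes a detector ultimately works with (the helper and its names are hypothetical, for illustration only):

```python
def size_limits(raster_w, raster_h, min_pct=4.0, max_pct=None):
    """Convert percent-based size limits to pixel (width, height) tuples."""
    min_size = (int(raster_w * min_pct / 100), int(raster_h * min_pct / 100))
    max_size = (None if max_pct is None
                else (int(raster_w * max_pct / 100), int(raster_h * max_pct / 100)))
    return min_size, max_size

# On a 4000x3000 raster, the default 4% minimum means objects smaller than
# 160x120 pixels are ignored; the maximum is unset by default.
print(size_limits(4000, 3000))          # ((160, 120), None)
print(size_limits(4000, 3000, 10, 50))  # ((400, 300), (2000, 1500))
```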
Editing Transformer Parameters
Using a set of menu options, transformer parameters can be assigned by referencing other elements in the workspace. More advanced functions, such as an advanced editor and an arithmetic editor, are also available in some transformers. To access a menu of these options, click beside the applicable parameter. For more information, see Transformer Parameter Menu Options.
Defining Values
There are several ways to define a value for use in a transformer. The simplest is to type in a value or string, which can include functions of various types such as attribute references, math and string functions, and workspace parameters. There are a number of tools and shortcuts that can assist in constructing values, generally available from the drop-down context menu adjacent to the value field.
Using the Text Editor
The Text Editor provides a convenient way to construct text strings (including regular expressions) from various data sources, such as attributes, parameters, and constants, where the result is used directly inside a parameter.
Using the Arithmetic Editor
The Arithmetic Editor provides a convenient way to construct math expressions from various data sources, such as attributes, parameters, and feature functions, where the result is used directly inside a parameter.
Conditional Values
Set values depending on one or more test conditions that either pass or fail.
Parameter Condition Definition Dialog
Content
Expressions and strings can include a number of functions, characters, parameters, and more.
When setting values - whether entered directly in a parameter or constructed using one of the editors - strings and expressions containing String, Math, Date/Time or FME Feature Functions will have those functions evaluated. Therefore, the names of these functions (in the form @<function_name>) should not be used as literal string values.
String Functions | These functions manipulate and format strings.
Special Characters | A set of control characters is available in the Text Editor.
Math Functions | Math functions are available in both editors.
Date/Time Functions | Date and time functions are available in the Text Editor.
Math Operators | These operators are available in the Arithmetic Editor.
FME Feature Functions | These return primarily feature-specific values.
FME and Workspace Parameters | FME and workspace-specific parameters may be used.
Creating and Modifying User Parameters | Create your own editable parameters.
Dialog Options - Tables
Transformers with table-style parameters have additional tools for populating and manipulating values.
Row Reordering | Enabled once you have clicked on a row item.
Cut, Copy, and Paste | Enabled once you have clicked on a row item. Cut, copy, and paste may be used within a transformer, or between transformers.
Filter | Start typing a string, and the matrix will only display rows matching those characters. Searches all columns. This only affects the display of attributes within the transformer - it does not alter which attributes are output.
Import | Import populates the table with a set of new attributes read from a dataset. Specific application varies between transformers.
Reset/Refresh | Generally resets the table to its initial state, and may provide additional options to remove invalid entries. Behavior varies between transformers.
Note: Not all tools are available in all transformers.
FME Community
The FME Community is the place for demos, how-tos, articles, FAQs, and more. Get answers to your questions, learn from other users, and suggest, vote, and comment on new features.
Search for samples and information about this transformer on the FME Community.