Trains a custom raster object detection model based on the positive and negative samples.
Trains a custom raster object detection model based on the positive and negative datasets. The resulting model file can be used to detect the desired object with RasterObjectDetector. For convenience, the input datasets may be obtained from RasterObjectDetectorSamplePreparer, which prepares multiple positive samples and a number of negative samples. Alternatively, artificial samples may be generated with RasterObjectDetectorSampleGenerator (note that artificially generated samples usually perform worse than hand-picked ones).
Note that the transformer calls an external process (opencv_traincascade) to perform the model training. Currently, if the translation is suspended or stopped, the opencv_traincascade process will remain running and must be killed manually.
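For reference, the external call roughly corresponds to a command line like the following. This is an illustrative sketch only: the paths and numbers are placeholders, not the exact arguments FME passes.

```
opencv_traincascade -data model_dir \
    -vec positive_samples.vec -bg negatives.txt \
    -numPos 900 -numNeg 500 -numStages 20 \
    -featureType HAAR \
    -precalcValBufSize 1024 -precalcIdxBufSize 1024 \
    -acceptanceRatioBreakValue 10e-5
```

The flags correspond to the transformer parameters described below: sample counts, number of stages, feature type, the two precalculation buffer sizes, and the acceptance ratio break value.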
The transformer accepts any feature, but its contents are largely ignored. A typical input is the feature output from RasterObjectDetectorSamplePreparer, which contains a number of input parameters required for translation.
This port returns the original feature and the _out_detection_model_name attribute, containing the path to the produced custom detection model.
This port returns a feature with all its original contents, plus an fme_rejection_message attribute containing the error message if an error occurs.
Rejected Feature Handling: can be set to either terminate the translation or continue running when it encounters a rejected feature. This setting is available both as a default FME option and as a workspace parameter.
Number of positive samples contained in the input Positive Samples File.
Number of negative samples contained in the input Background Description File.
- HAAR: Trains the detection model using Haar-like features: rectangular features whose value is the difference of the sums of pixels within adjacent areas. HAAR models generally take longer to train and tend to have slower detection times (although speed depends heavily on model complexity rather than being inherent to the feature type). In return, HAAR models are usually more accurate than LBP models.
- LBP: The model will use Local Binary Patterns for object detection. The opposite of HAAR holds regarding performance: LBP models are considerably faster to train but may be less accurate in some applications.
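To make the HAAR option concrete, the sketch below shows how a Haar-like feature value can be computed. This is an illustrative, pure-Python assumption about the technique, not FME's or OpenCV's implementation: an integral image (summed-area table) makes the pixel sum of any rectangle an O(1) lookup, and a two-rectangle feature is the difference of two adjacent rectangle sums.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Pixel sum inside the rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_haar(ii, x, y, w, h):
    """Two-rectangle (left minus right) Haar-like feature value."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)
```

For example, on a 4x4 image whose left half is bright (10) and right half dark (0), the left-minus-right feature over the whole image evaluates to 80, a strong response to that edge.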
Size of the buffer for feature values, in MB (megabytes). The more memory given, the faster training will be. Ensure that the combined memory specified for the Value and Index buffer sizes does not exceed the total system memory.
Size of the buffer for feature indices, in MB (megabytes). The more memory given, the faster training will be. Ensure that the combined memory specified for the Value and Index buffer sizes does not exceed the total system memory.
Number of cascade stages to be trained.
Determines the precision to which the model will be trained. A good guideline is not to train the model past 10e-5 (0.0001), to ensure the model is not overtrained on the data. A value of -1 disables this feature.
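The guideline above can be read as a stopping rule, sketched below under the assumption that this parameter corresponds to opencv_traincascade's acceptanceRatioBreakValue: each trained stage multiplies down the ratio of negative windows the cascade still accepts, and training halts once that ratio falls below the threshold. The per-stage rates here are illustrative numbers, not FME defaults.

```python
def should_stop(stage_false_alarm_rates, break_value):
    """Stop once the cascade's combined false-alarm ratio drops below break_value."""
    if break_value < 0:          # -1 disables the check
        return False
    ratio = 1.0
    for rate in stage_false_alarm_rates:
        ratio *= rate            # each stage multiplies the acceptance ratio
    return ratio < break_value
```

With an illustrative per-stage false-alarm rate of 0.5, the combined ratio crosses 10e-5 after roughly 14 stages; beyond that point, further stages risk fitting noise in the training samples rather than the object.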
NO PARALLELISM | MINIMAL | MODERATE | AGGRESSIVE | EXTREME. This parameter determines the number of threads used for processing. NO PARALLELISM means single-threaded processing. On a machine with 16 virtual cores, the remaining options map to 8, 16, 24, and 32 threads, respectively.
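The example above implies the levels scale with the machine's virtual core count. The sketch below is an assumption based on that single example (0.5x, 1x, 1.5x, and 2x the virtual cores), not FME internals:

```python
import os

# Assumed multipliers inferred from the 16-core example in the text.
LEVEL_FACTORS = {
    "NO PARALLELISM": None,  # always single-threaded
    "MINIMAL": 0.5,
    "MODERATE": 1.0,
    "AGGRESSIVE": 1.5,
    "EXTREME": 2.0,
}

def thread_count(level, virtual_cores=None):
    """Number of worker threads for a given parallelism level."""
    if virtual_cores is None:
        virtual_cores = os.cpu_count() or 1
    factor = LEVEL_FACTORS[level]
    if factor is None:
        return 1
    return max(1, int(virtual_cores * factor))
```

On the 16-virtual-core machine from the text, this reproduces the documented mapping of 1, 8, 16, 24, and 32 threads.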
- Working directory where the intermediate steps and parameters for detection model training will be placed.
- Training can sometimes take days or even weeks. If interrupted, the training may be continued with the same parameters from the last known "good" training stage, which is saved in this directory.
- The training parameters must match exactly; otherwise, training restarts from the beginning. Additionally, to retrain with the same parameters but changed samples, delete the outputs in the intermediate directory first.
Path to the file which will contain the trained detection model.
Boosted Classifier Parameters
From OpenCV: A common machine learning task is supervised learning. In supervised learning, the goal is to learn the functional relationship F: y = F(x) between the input x and the output y. Predicting the qualitative output is called classification, while predicting the quantitative output is called regression.
Editing Transformer Parameters
Using a set of menu options, transformer parameters can be assigned by referencing other elements in the workspace. More advanced functions, such as an advanced editor and an arithmetic editor, are also available in some transformers. To access a menu of these options, click beside the applicable parameter. For more information, see Transformer Parameter Menu Options.
There are several ways to define a value for use in a Transformer. The simplest is to type in a value or string, which can include functions of various types such as attribute references, math and string functions, and workspace parameters. There are a number of tools and shortcuts that can assist in constructing values, generally available from the drop-down context menu adjacent to the value field.
Using the Text Editor
The Text Editor provides a convenient way to construct text strings (including regular expressions) from various data sources, such as attributes, parameters, and constants, where the result is used directly inside a parameter.
Using the Arithmetic Editor
The Arithmetic Editor provides a convenient way to construct math expressions from various data sources, such as attributes, parameters, and feature functions, where the result is used directly inside a parameter.
Set values depending on one or more test conditions that either pass or fail.
Expressions and strings can include a number of functions, characters, parameters, and more.
When setting values - whether entered directly in a parameter or constructed using one of the editors - strings and expressions containing String, Math, Date/Time or FME Feature Functions will have those functions evaluated. Therefore, the names of these functions (in the form @<function_name>) should not be used as literal string values.
|String Functions|These functions manipulate and format strings.|
|Special Characters|A set of control characters is available in the Text Editor.|
|Math Functions|Math functions are available in both editors.|
|Date/Time Functions|Date and time functions are available in the Text Editor.|
|Math Operators|These operators are available in the Arithmetic Editor.|
|FME Feature Functions|These return primarily feature-specific values.|
|FME and Workspace-Specific Parameters|FME and workspace-specific parameters may be used.|
|Creating and Modifying User Parameters|Create your own editable parameters.|
Dialog Options - Tables
Transformers with table-style parameters have additional tools for populating and manipulating values.
Enabled once you have clicked on a row item. Choices include:
Cut, Copy, and Paste
Enabled once you have clicked on a row item. Choices include:
Cut, copy, and paste may be used within a transformer, or between transformers.
|Start typing a string, and the matrix will only display rows matching those characters. Searches all columns. This only affects the display of attributes within the transformer - it does not alter which attributes are output.|
|Import populates the table with a set of new attributes read from a dataset. Specific application varies between transformers.|
Generally resets the table to its initial state, and may provide additional options to remove invalid entries. Behavior varies between transformers.
Note: Not all tools are available in all transformers.
FME Licensing Level
FME Professional edition and above
The FME Community is the place for demos, how-tos, articles, FAQs, and more. Get answers to your questions, learn from other users, and suggest, vote, and comment on new features.
Search for samples and information about this transformer on the FME Community.