RasterObjectDetectorSamplePreparer

Prepares user-supplied positive and negative samples for use by the RasterObjectDetectionModelTrainer.

Typical Uses

A hand-picked selection of positive samples is used in the preparation step of creating a custom detection model.

This is the suggested way of preparing samples to supply to the RasterObjectDetectionModelTrainer for custom detection model creation. Hand-picking good-quality positive samples yields a more accurate detection model than artificially generated samples do.

Input Ports

Output Ports

Parameters

For both positive and negative sample directories, we recommend collecting your images into a single directory (one for positives and another for negatives). If you have a "working directory" where you want your project files to reside, a sub-directory there containing your samples is a suggested location (but it can be anywhere).

Negative Samples

Training a detection model requires a set of negative or background images that do not contain the object you are trying to detect. Depending on the application, random images may suffice. However, if your objects appear against a very specific background, you may want to take the positive samples and crop out the object regions to produce samples that do not contain your object.
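As a rough illustration (not part of FME), the following Python sketch uses OpenCV to derive background samples from annotated positive images by cropping patches that do not overlap any annotated bounding box. The annotation path, output directory, patch size, and file names are assumptions, and image paths are assumed to resolve from the working directory.

import os
import cv2

ANNOTATIONS = "positives/annotations.txt"   # assumed opencv_annotation output
NEGATIVE_DIR = "negatives"                  # assumed output directory
PATCH = 200                                 # assumed background patch size (pixels)

os.makedirs(NEGATIVE_DIR, exist_ok=True)

def overlaps(x, y, boxes, size):
    # True if a size-by-size patch at (x, y) intersects any annotated box.
    for bx, by, bw, bh in boxes:
        if x < bx + bw and x + size > bx and y < by + bh and y + size > by:
            return True
    return False

count = 0
with open(ANNOTATIONS) as f:
    for line in f:
        parts = line.split()
        if len(parts) < 2:
            continue
        path, n = parts[0], int(parts[1])
        coords = list(map(int, parts[2:2 + 4 * n]))
        boxes = [tuple(coords[i:i + 4]) for i in range(0, len(coords), 4)]
        image = cv2.imread(path)
        if image is None:
            continue
        h, w = image.shape[:2]
        # Slide a coarse grid over the image and keep patches clear of all boxes.
        for y in range(0, h - PATCH, PATCH):
            for x in range(0, w - PATCH, PATCH):
                if not overlaps(x, y, boxes, PATCH):
                    out = os.path.join(NEGATIVE_DIR, "neg_%05d.png" % count)
                    cv2.imwrite(out, image[y:y + PATCH, x:x + PATCH])
                    count += 1

The resulting patches can then be collected in the negative samples directory described above.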

Positive Samples

A very good positive sample set is key to a high-quality detection model. Collecting one can be time consuming, but it is best to find your object in its natural setting (where you will be detecting it). For instance, if you are planning to detect cars at a certain intersection, images of that intersection will work best. You will also need to annotate the images, either manually or with the opencv_annotation tool, which can be found in [FME_HOME]/plugins/opencv/. You can read more about how to use the tool in the OpenCV documentation. In short, call opencv_annotation --annotations={path/to/output/annotations/file.txt} --images={/path/to/positive/samples/dir}. There are also two optional parameters: --maxWindowHeight (maximum image height; taller images are resized for display) and --resizeFactor (the factor to resize the image by if it is too tall).
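For example, a hypothetical invocation (the paths and values shown are illustrative, not defaults) might look like:

  opencv_annotation --annotations=C:\training\annotations.txt --images=C:\training\positives --maxWindowHeight=800 --resizeFactor=4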

The produced .txt file can be supplied to the Annotation File parameter. The annotation file should contain one line per annotated image, in the format: [path to image] [number of annotations] [x_1] [y_1] [width_1] [height_1] ... [x_n] [y_n] [width_n] [height_n]
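For example, a hypothetical annotation file describing two images (paths, counts, and coordinates are illustrative) might look like:

  images/intersection_001.png 2 140 58 64 64 310 122 80 72
  images/intersection_002.png 1 95 210 72 60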

Output

Editing Transformer Parameters

Using a set of menu options, transformer parameters can be assigned by referencing other elements in the workspace. More advanced functions, such as an advanced editor and an arithmetic editor, are also available in some transformers. To access a menu of these options, click the drop-down menu beside the applicable parameter. For more information, see Transformer Parameter Menu Options.

Defining Values

There are several ways to define a value for use in a Transformer. The simplest is to type in a value or string, which can include functions of various types such as attribute references, math and string functions, and workspace parameters. There are a number of tools and shortcuts that can assist in constructing values, generally available from the drop-down context menu adjacent to the value field.

Dialog Options - Tables

Transformers with table-style parameters have additional tools for populating and manipulating values.

FME Community

The FME Community is the place for demos, how-tos, articles, FAQs, and more. Get answers to your questions, learn from other users, and suggest, vote, and comment on new features.

Search for samples and information about this transformer on the FME Community.