A few fundamental techniques can make cleaning faster and more effective. The following sections identify and briefly describe key issues to consider when cleaning data.
Know your data
The first step in any cleaning exercise is to become familiar with your source data. Information on data quality (such as 1m vs. 100m accuracy), data currency, and intended use is important in determining which cleaning modules and tolerances to use. If such information is not available, a visual inspection of the design file(s) should provide insight into average line-work gap sizes, line weeding requirements, and other issues that may exist.
Start small
When setting cleaning tolerances, it is always best to start small. With smaller tolerances, the software uses a smaller search radius, which reduces the number of potential element intersections to consider and increases processing speed. Also, if the bulk of the line-work errors can be corrected using a small tolerance, more detail can be maintained in the dataset. One or more cleaning processes can always be repeated with larger tolerances to increase the number of errors corrected automatically.
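The "start small, then repeat with larger tolerances" workflow above can be sketched as a simple loop. This is a minimal illustration, not FME API code: run_cleaning_pass is a hypothetical stand-in for whatever cleaning module you invoke, assumed to return the number of errors still remaining after a pass at the given tolerance.

```python
def iterative_clean(run_cleaning_pass, tolerances):
    """Apply cleaning passes from the smallest tolerance upward.

    run_cleaning_pass: hypothetical callable taking a tolerance and
        returning the count of errors still present afterward.
    tolerances: candidate tolerances; processed smallest-first so that
        as much detail as possible is preserved in the dataset.
    """
    remaining = None
    for tol in sorted(tolerances):          # always start small
        errors = run_cleaning_pass(tol)
        if remaining is not None and errors >= remaining:
            break                           # a larger tolerance no longer helps
        remaining = errors
    return remaining
```

Stopping as soon as a larger tolerance yields no further improvement avoids over-snapping line work that a small tolerance has already corrected.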
Mix it up
Depending on the source dataset and its intended use, you may achieve better results by running the individual modules with different tolerances.
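One way to organize per-module tolerances is a simple mapping from module to tolerance, applied in sequence. This is a hedged sketch with hypothetical module names and a made-up clean() driver; the actual modules and values depend entirely on your source data.

```python
# Hypothetical per-module tolerances (units of the source dataset):
# tight for near-exact duplicates, looser for closing line-work gaps.
module_tolerances = {
    "remove_duplicates": 0.001,
    "dissolve_nodes":    0.1,
    "snap_endpoints":    0.5,
}

def clean(dataset, modules, tolerances):
    """Run each cleaning module with its own tolerance, in order.

    modules: mapping of module name -> callable(dataset, tolerance)
        returning the cleaned dataset (hypothetical interface).
    """
    for name, module in modules.items():
        dataset = module(dataset, tolerances[name])
    return dataset
```

Keeping the tolerances in one mapping also makes it easy to rerun a single module with a larger value without touching the rest of the pipeline.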
The FME Community is the place for demos, how-tos, articles, FAQs, and more. Get answers to your questions, learn from other users, and suggest, vote, and comment on new features.
Search for samples and information about this transformer on the FME Community.