FME Transformers: 2026.1
OpenAIConnector
Connects to OpenAI or OpenAI-compatible services to access OpenAI or provider-specific models.
Typical Uses
- Generate text responses, summaries, classifications, or reasoning output from text prompts
- Generate images from text prompts
- Generate vector embeddings from text for similarity search, clustering, or downstream analytics
How does it work?
The OpenAIConnector uses an endpoint URL and, optionally, an API key to perform various tasks:
| Action | Task |
|---|---|
| Create Response | Generate model responses given a text prompt. The model can also use tools such as Web Search. |
| Create Image | Generate new images given a text prompt. |
| Edit Image | Generate an edited or extended image given a text prompt and one or more source images. |
| Create Embedding | Generate an embedding vector from text input. |
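Under the hood, each action amounts to an authenticated HTTP request against the configured endpoint. The sketch below shows the general shape of such a request, assuming an OpenAI-compatible REST endpoint; the URL, API key, and payload fields are placeholders, not the transformer's actual internals:

```python
import json
import urllib.request

def make_request(endpoint_url, api_key, payload):
    """Build an authenticated JSON request for an OpenAI-compatible endpoint."""
    return urllib.request.Request(
        endpoint_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # OpenAI-style bearer authentication; some providers omit the key
            "Authorization": f"Bearer {api_key}",
        },
    )

# Sending the request (urllib.request.urlopen(req)) is omitted here.
req = make_request("https://api.openai.com/v1/responses", "sk-...",
                   {"model": "gpt-5.4", "input": "Hello"})
```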
Optional Input Port
This transformer has two modes, depending on whether a connection is made to the Input port:
- Input-driven: When input features are connected, the transformer runs once for each feature it receives on the Input port.
- Run Once: When no input features are connected, the transformer runs one time.
When the Input port is in use, the Initiator output port is also enabled.
Usage Notes
- To continue a Create Response workflow across multiple requests, use Conversation State > Conversation or Previous Response.
Configuration
Input Ports
This transformer accepts any feature. Coordinate systems are not supported.
Output Ports
Features with added attributes, as specified in parameters and according to Action:
| Action | Output - Input-Driven | Output - Run Once |
|---|---|---|
| Create Response | Input feature with the generated response. | New feature with the generated response. |
| Create Image | Input feature with the generated image. | New feature with the generated image. |
| Edit Image | Input feature with the revised image. | New feature with the revised image. |
| Create Embedding | Input feature with the generated embedding vector. | New feature with the generated embedding vector. |
When the optional Input port is used, input features are also output unmodified through the Initiator port, in addition to any other output ports (Output or <Rejected>).

Features that cause the operation to fail are output through the <Rejected> port. An fme_rejection_code attribute describing the category of the error is added, along with a more descriptive fme_rejection_message attribute containing specific details about the failure. If an input feature already has a value for fme_rejection_code, that value is removed.

Rejected Feature Handling: can be set to either terminate the translation or continue running when it encounters a rejected feature. This setting is available both as a default FME option and as a workspace parameter.
Parameters
| Parameter | Description |
|---|---|
| Account | Select or create a Web Connection connecting to an OpenAI Web Service. The Web Connection includes authentication details based on your selected service provider. OpenAI requires an API key, while OpenAI-compatible providers such as Azure OpenAI require a resource endpoint and API key. Some providers, such as locally hosted models via Ollama, may not require an API key. |
| Action | Select an operation to perform: Create Response, Create Image, Edit Image, or Create Embedding. |
Input Options

| Parameter | Description |
|---|---|
| Model | Provide or select a model to use. Some OpenAI models are included as choices, such as gpt-5.4. When using Azure OpenAI, this value must match your Azure deployment name. Model names may also be entered directly or provided as attribute values or user parameters. |
| Instructions | (Optional) Provide a message to insert into the model's context. Note that when Conversation State is Previous Response, Instructions from previous responses are not carried over. |
| Prompt | Provide text input used to generate a response. |
| Conversation State | Select an option for carrying forward conversation context, such as Conversation or Previous Response. |
| Conversation ID | When Conversation State is Conversation, provide the ID of an existing conversation to continue, typically obtained from the provider's Conversations API. |
| Previous Response ID | When Conversation State is Previous Response, provide the ID of a previous response to continue from, typically the _response_id attribute value from an earlier Create Response action. |
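Chaining requests through Previous Response can be pictured with a short sketch. The payload field name previous_response_id follows the OpenAI Responses API; this illustrates how each request carries the prior response's ID forward, not the transformer's internal implementation:

```python
# Sketch: each follow-up request references the previous response so the
# model retains conversation context across requests.
def build_request(model, prompt, previous_response_id=None):
    payload = {"model": model, "input": prompt}
    if previous_response_id is not None:
        payload["previous_response_id"] = previous_response_id
    return payload

first = build_request("gpt-5.4", "Summarize this parcel dataset.")
# After the first response arrives, its _response_id feeds the next request
# (the ID value here is a placeholder):
followup = build_request("gpt-5.4", "Now list the three largest parcels.",
                         previous_response_id="resp_abc123")
```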
Response Options

| Parameter | Description |
|---|---|
| Verbosity | (Optional) Select a verbosity constraint to apply to the response. |
| Structured Output | When enabled, constrains the model's response to conform to a given JSON Schema. The transformer sets strict to true on the request where possible, to ensure the model's output conforms exactly to the provided schema. As part of this, additionalProperties is set to false on any object schema in the JSON Schema that does not already include it, as required by the API when strict mode is enabled. Strict mode is disabled in certain cases. |
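The additionalProperties adjustment described above can be sketched in a few lines of Python. This is a minimal illustration of the technique, not the transformer's actual code:

```python
# Sketch: recursively set additionalProperties to false on every object
# schema that does not already declare it, as strict mode requires.
def prepare_for_strict(schema):
    if isinstance(schema, dict):
        if schema.get("type") == "object" and "additionalProperties" not in schema:
            schema["additionalProperties"] = False
        for value in schema.values():
            prepare_for_strict(value)  # descend into nested schemas
    elif isinstance(schema, list):
        for item in schema:
            prepare_for_strict(item)
    return schema

schema = prepare_for_strict({
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "address": {"type": "object",
                    "properties": {"city": {"type": "string"}}},
    },
})
# Both the root object and the nested address object now have
# additionalProperties: false.
```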
Tools

| Parameter | Description |
|---|---|
| Web Search | When enabled, the model may use the built-in web search tool to generate a response. |

Advanced

| Parameter | Description |
|---|---|
| Temperature | (Optional) Specify the sampling temperature to use, between 0 and 2 (inclusive). Higher values make the output more random, while lower values make it more focused and deterministic. |
| Top-P | (Optional) Specify the nucleus sampling value, between 0 and 1 (inclusive). Top-P is an alternative to sampling with Temperature: the model considers only the tokens comprising the top_p probability mass. For example, 0.2 means that only the tokens comprising the top 20% probability mass are considered. |
| Reasoning Effort | (Optional) Select a level of reasoning effort. This should only be set for models that support reasoning. |
| Timeout(s) | Specify the read timeout in seconds. Default is 120. |
| Include Response JSON | Select whether to add the complete JSON response to output features (Yes or No). |
Added Attributes

Output features will receive these attributes:

| Attribute | Description |
|---|---|
| _response_id | Unique identifier for this response. |
| _model | Language model used to generate the response, for example gpt-5.4. |
| _completed_at | ISO 8601 date-time string indicating when the response was completed. Only present when _status is completed. |
| _status | Status of the response generation. One of completed or incomplete. |
| _incomplete_reason | Details on why the response is incomplete: max_output_tokens (the maximum number of tokens allowed in the request, i.e. the output length limit, was reached) or content_filter (the response was blocked or removed by the provider's safety filters). |
| _input_tokens | The number of input tokens used. |
| _output_tokens | The number of output tokens used. |
| _output{} | List attribute containing JSON objects representing the content items generated by the model. |
| _output_text | Aggregated text output from all output_text items in the _output{} attribute. |
| _json_response | Added when Include Response JSON is Yes. Contains the complete JSON response. |
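The aggregation of _output{} items into _output_text can be pictured with a short sketch. The item shape used here ("type" and "text" keys) is illustrative, not the exact schema of the provider's content items:

```python
import json

# Sketch: collect the text of all output_text items, skipping other item
# types (the first item here stands in for a non-text content item).
output_items = [
    json.dumps({"type": "reasoning"}),
    json.dumps({"type": "output_text", "text": "The dataset contains "}),
    json.dumps({"type": "output_text", "text": "42 features."}),
]

output_text = "".join(
    item["text"]
    for item in map(json.loads, output_items)
    if item.get("type") == "output_text"
)
# output_text == "The dataset contains 42 features."
```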
Create Image Options

| Parameter | Description |
|---|---|
| Model | Provide or select a model to use. Some OpenAI models are included as choices, such as chatgpt-image-latest. Model names may also be entered directly or provided as attribute values or user parameters. |
| Prompt | Provide a text description of the desired image. |

Image Output Options

| Parameter | Description |
|---|---|
| Size | Select a size for the output image. Custom values may be entered if supported by the API. |
| Moderation | Select a content-moderation level for images generated by the GPT image models. |

Advanced

| Parameter | Description |
|---|---|
| Timeout(s) | Specify the read timeout in seconds. Default is 120. |
| Include Response JSON | Select whether to add the complete JSON response to output features (Yes or No). |
Added Attributes

Output features will receive these attributes:

| Attribute | Description |
|---|---|
| _created_at | ISO 8601 date-time string indicating when the image was created. |
| _background | The background parameter used for the image generation (e.g. transparent, opaque). |
| _output_format | The output format of the generated image (e.g. png, webp, jpeg). |
| _quality | The quality of the generated image (e.g. low, medium, high). |
| _size | The size of the generated image (e.g. 1024x1024, 1024x1536, 1536x1024). |
| _image | The generated image, encoded in base64. |
| _json_response | Added when Include Response JSON is Yes. Contains the complete JSON response. |
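Because _image holds base64 text rather than raw bytes, it must be decoded before being written to disk, for example in a downstream PythonCaller. A minimal sketch, using a placeholder value rather than real image data:

```python
import base64

def decode_image(image_b64: str) -> bytes:
    """Decode the base64 _image attribute value into raw image bytes."""
    return base64.b64decode(image_b64)

# Placeholder standing in for the _image attribute value (PNG signature only).
sample = base64.b64encode(b"\x89PNG\r\n\x1a\n").decode("ascii")
png_bytes = decode_image(sample)
# The decoded bytes could then be written out, e.g.:
#   with open("generated_image.png", "wb") as f:
#       f.write(png_bytes)
```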
Edit Image Options

| Parameter | Description |
|---|---|
| Model | Provide or select a model to use. Some OpenAI models are included as choices, such as chatgpt-image-latest. Model names may also be entered directly or provided as attribute values or user parameters. |
| Input Image(s) | Specify the path and filenames of 1 to 16 source images to edit. Supported formats are PNG, JPG, JPEG, and WEBP. |
| Prompt | Provide a text description of the desired edits to apply to the image. |

Image Output Options

| Parameter | Description |
|---|---|
| Size | Select a size for the output image. Custom values may be entered if supported by the API. |
| Moderation | Select a content-moderation level for images generated by the GPT image models. |

Advanced

| Parameter | Description |
|---|---|
| Timeout(s) | Specify the read timeout in seconds. Default is 120. |
| Include Response JSON | Select whether to add the complete JSON response to output features (Yes or No). |
Added Attributes

Output features will receive these attributes:

| Attribute | Description |
|---|---|
| _created_at | ISO 8601 date-time string indicating when the image was created. |
| _background | The background parameter used for the image generation (e.g. transparent, opaque). |
| _output_format | The output format of the generated image (e.g. png, webp, jpeg). |
| _quality | The quality of the generated image (e.g. low, medium, high). |
| _size | The size of the generated image (e.g. 1024x1024, 1024x1536, 1536x1024). |
| _image | The generated image, encoded in base64. |
| _json_response | Added when Include Response JSON is Yes. Contains the complete JSON response. |
Create Embedding Options

| Parameter | Description |
|---|---|
| Model | Provide or select a model to use. Some OpenAI models are included as choices, such as text-embedding-3-small. Model names may also be entered directly or provided as attribute values or user parameters. |
| Input | Provide text to convert into an embedding. If the resolved value is a list, it must contain only integers. |
| Dimensions | (Optional) Specify the number of dimensions for the generated embedding. |

Advanced

| Parameter | Description |
|---|---|
| Timeout(s) | Specify the read timeout in seconds. Default is 120. |
| Include Response JSON | Select whether to add the complete JSON response to output features (Yes or No). |
Added Attributes

Output features will receive these attributes:

| Attribute | Description |
|---|---|
| _model | Embedding model used to generate the vector, for example text-embedding-3-small. |
| _embedding{} | List attribute containing the embedding vector as floating-point values. |
| _json_response | Added when Include Response JSON is Yes. Contains the complete JSON response. |
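Embedding vectors such as _embedding{} are typically compared with cosine similarity for similarity search or clustering. A minimal sketch, using made-up short vectors for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors; real embeddings have hundreds of dimensions.
v1 = [0.1, 0.3, 0.5]
v2 = [0.1, 0.3, 0.5]
v3 = [-0.5, 0.2, -0.1]

cosine_similarity(v1, v2)  # identical vectors -> 1.0
cosine_similarity(v1, v3)  # dissimilar vectors -> much lower value
```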
Editing Transformer Parameters
Transformer parameters can be set by directly entering values, using expressions, or referencing other elements in the workspace such as attribute values or user parameters. Various editors and context menus are available to assist. To see what is available, click the drop-down arrow beside the applicable parameter.
Defining Values
There are several ways to define a value for use in a Transformer. The simplest is to type in a value or string, which can include functions of various types such as attribute references, math and string functions, and workspace parameters.
Using the Text Editor
The Text Editor provides a convenient way to construct text strings (including regular expressions) from various data sources, such as attributes, parameters, and constants, where the result is used directly inside a parameter.
Using the Arithmetic Editor
The Arithmetic Editor provides a convenient way to construct math expressions from various data sources, such as attributes, parameters, and feature functions, where the result is used directly inside a parameter.
Conditional Values
Set values depending on one or more test conditions that either pass or fail.
Parameter Condition Definition Dialog
Content
Expressions and strings can include a number of functions, characters, parameters, and more.
When setting values - whether entered directly in a parameter or constructed using one of the editors - strings and expressions containing String, Math, Date/Time or FME Feature Functions will have those functions evaluated. Therefore, the names of these functions (in the form @<function_name>) should not be used as literal string values.
| Content | Description |
|---|---|
| String Functions | These functions manipulate and format strings. |
| Special Characters | A set of control characters is available in the Text Editor. |
| Math Functions | Math functions are available in both editors. |
| Date/Time Functions | Date and time functions are available in the Text Editor. |
| Operators | These operators are available in the Arithmetic Editor. |
| FME Feature Functions | These return primarily feature-specific values. |
| FME and Workspace Parameters | FME and workspace-specific parameters may be used. |
| Creating and Modifying User Parameters | Create your own editable parameters. |
Table Tools
Transformers with table-style parameters have additional tools for populating and manipulating values.
| Tool | Description |
|---|---|
| Row Reordering | Enabled once you have clicked on a row item. |
| Cut, Copy, and Paste | Enabled once you have clicked on a row item. Cut, copy, and paste may be used within a transformer, or between transformers. |
| Filter | Start typing a string, and the matrix will only display rows matching those characters. Searches all columns. This only affects the display of attributes within the transformer - it does not alter which attributes are output. |
| Import | Import populates the table with a set of new attributes read from a dataset. Specific application varies between transformers. |
| Reset/Refresh | Generally resets the table to its initial state, and may provide additional options to remove invalid entries. Behavior varies between transformers. |
Note: Not all tools are available in all transformers.
For more information, see Transformer Parameter Menu Options.
Reference
| Processing Behavior | |
|---|---|
| Feature Holding | No |
| Dependencies | |
| Aliases | |
| History | |
FME Online Resources
The FME Community and Support Center Knowledge Base have a wealth of information, including active forums with 35,000+ members and thousands of articles.
Search for all results about the OpenAIConnector on the FME Community.
Examples may contain information licensed under the Open Government Licence – Vancouver, Open Government Licence - British Columbia, and/or Open Government Licence – Canada.