StandardOpenAILLMService

Description

A Controller Service that provides integration with OpenAI’s Chat Completion API. Supports configurable parameters including model selection, temperature, top_p, max tokens, and retry behavior. Handles API authentication, request retries with exponential backoff, and error handling.
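The retry behavior described above can be sketched as follows. This is a hypothetical illustration, not the service's actual implementation: the exact backoff formula is not documented, but a common exponential scheme consistent with the "Backoff Base Delay (ms)" and "Max Retries" properties is to double the delay after each failed attempt.

```python
import time

def call_with_retries(request_fn, max_retries=3, base_delay_ms=1000):
    """Retry a failing API call with exponential backoff.

    request_fn is any zero-argument callable (hypothetical stand-in for the
    actual HTTP request). Raises the last error once max_retries is exhausted.
    """
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except Exception:
            if attempt == max_retries:
                raise
            # Delay doubles with each retry: base, 2*base, 4*base, ...
            time.sleep((base_delay_ms * (2 ** attempt)) / 1000.0)
```

With the defaults shown in the Properties table (base delay 1000 ms, 3 retries), a persistently failing call would wait roughly 1, 2, and 4 seconds between attempts before giving up.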

Tags

ai, chat completion, chatgpt, large language model, llm, openai, openflow

Properties

In the table below, required properties are shown with an asterisk (*); all other properties are considered optional. The table also indicates any default values.

Display Name | API Name | Default Value | Description
Backoff Base Delay (ms) * | Backoff Base Delay (ms) | 1000 | The base delay in milliseconds for exponential backoff between retries.
Max Response Tokens | Max Response Tokens | | The maximum number of tokens to generate in the response.
Max Retries * | Max Retries | 3 | The maximum number of retry attempts for API calls.
Model Name * | Model Name | gpt-4o-mini | The name of the OpenAI model.
OpenAI API Key * | OpenAI API Key | | The API Key for authenticating to OpenAI.
Seed | Seed | | The seed to use for generating the response.
Temperature | Temperature | | The temperature to use for generating the response.
Top P | Top P | | The top_p value for nucleus sampling, which controls the diversity of the generated responses.
User | User | | Your end user, sent to OpenAI for monitoring and detection of abuse.
Web Client Service * | Web Client Service | | The Web Client Service to use for communicating with the LLM provider.
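To show how these properties relate to an actual request, the sketch below assembles a Chat Completions request body from them. This is an illustrative assumption about how the service maps properties to API fields, not its actual code; optional properties are omitted when unset so the OpenAI API's own defaults apply.

```python
def build_chat_request(prompt, model="gpt-4o-mini", temperature=None,
                       top_p=None, max_tokens=None, seed=None, user=None):
    """Map the service's configurable properties onto a Chat Completions
    request body (hypothetical helper). Unset optional values are dropped."""
    body = {"model": model,
            "messages": [{"role": "user", "content": prompt}]}
    optional = {"temperature": temperature, "top_p": top_p,
                "max_tokens": max_tokens, "seed": seed, "user": user}
    body.update({k: v for k, v in optional.items() if v is not None})
    return body
```

For example, configuring only Temperature and Max Response Tokens yields a body containing `model`, `messages`, `temperature`, and `max_tokens`, with no `top_p`, `seed`, or `user` keys.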

State management

This component does not store state.

Restricted

This component is not restricted.

System Resource Considerations

This component does not specify system resource considerations.