LLM Agent¶
The LLMAgent class is a specialized component for configuring and interacting with Large Language Models (LLMs). It provides a structured way to define LLM settings such as the provider, model, temperature, and system prompt. Once configured, an LLMAgent instance can be used to generate chat responses or be saved for later use.
The primary way to create an LLMAgent is through its Builder class, which offers a fluent interface for setting up the configuration.
LLMAgent Builder¶
The LLMAgent.Builder class provides a fluent interface for constructing an LLMAgent instance. You can create a builder by calling LLMAgent.Builder().
Builder Methods¶
- Builder.provider(provider='openai')¶
Sets the LLM provider for the configuration.
Parameters:
provider (str | None, optional): The name of the LLM provider. Currently, only "openai" is supported. Defaults to "openai".
Returns:
LLMAgent.Builder: The builder instance for method chaining.
Raises:
ValueError: If an unsupported provider is specified.
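Conceptually, the provider check amounts to a membership test against the set of supported providers. The sketch below is illustrative only; SUPPORTED_PROVIDERS and validate_provider are hypothetical names, not part of the berrydb library:

```python
# Illustrative sketch of the builder's provider validation.
# SUPPORTED_PROVIDERS and validate_provider are hypothetical names.
SUPPORTED_PROVIDERS = {"openai"}

def validate_provider(provider=None):
    provider = provider or "openai"  # fall back to the documented default
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"Unsupported provider: {provider!r}")
    return provider
```

Under these assumptions, validate_provider() returns "openai", and any other value raises ValueError as documented above.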
- Builder.model(model=None)¶
Sets the specific LLM model to use.
Parameters:
model (str | None, optional): The model name (e.g., "gpt-4o-mini"). Defaults to the provider's default model.
Returns:
LLMAgent.Builder: The builder instance for method chaining.
- Builder.temperature(temperature=None)¶
Sets the temperature for the LLM’s responses.
Temperature controls the randomness of the output. Higher values (e.g., 0.8) make the output more random, while lower values (e.g., 0.2) make it more focused and deterministic.
Parameters:
temperature (float | int | None, optional): A value between 0.0 and 1.0. Defaults to 0.5.
Returns:
LLMAgent.Builder: The builder instance for method chaining.
Raises:
ValueError: If the temperature is outside the valid range of 0.0 to 1.0.
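The range check described above can be sketched as follows. This is an illustrative stand-in, not the builder's actual implementation; validate_temperature is a hypothetical helper:

```python
# Illustrative sketch of the temperature validation described above.
# validate_temperature is a hypothetical helper, not a berrydb API.
def validate_temperature(temperature=None):
    if temperature is None:
        return 0.5  # documented default
    if isinstance(temperature, bool) or not isinstance(temperature, (int, float)):
        raise ValueError("temperature must be a number")
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0.0 and 1.0")
    return float(temperature)
```

With these assumptions, in-range values pass through unchanged, omitted values fall back to 0.5, and out-of-range values raise ValueError.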
- Builder.system_prompt(system_prompt=None)¶
Sets the system prompt for the LLM.
The system prompt is used to give the LLM context and instructions on how to behave. It can contain variables in the format {{variable_name}}, which can be replaced at runtime using the chat method.
Parameters:
system_prompt (str | None, optional): The system prompt string. Defaults to an empty string.
Returns:
LLMAgent.Builder: The builder instance for method chaining.
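The runtime substitution of {{variable_name}} placeholders can be sketched in pure Python. This is illustrative only; render_prompt is a hypothetical helper, not the library's actual implementation:

```python
import re

# Illustrative sketch of {{variable}} substitution; render_prompt is
# a hypothetical helper, not part of the berrydb library.
def render_prompt(template, prompt_args):
    # Replace each {{name}} with its value from prompt_args;
    # unknown variables are left untouched.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(prompt_args.get(m.group(1), m.group(0))),
        template,
    )

rendered = render_prompt("Hello {{user}}! You are a helpful assistant.", {"user": "John"})
```

Here rendered becomes "Hello John! You are a helpful assistant.", matching the behavior documented for the chat method below.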
- Builder.build()¶
Constructs the final LLMAgent instance from the builder's configuration.
Returns:
LLMAgent: A new, immutable LLMAgent instance.
Example:
from berrydb import LLMAgent

agent = (
    LLMAgent.Builder()
    .provider("openai")
    .model("gpt-4o-mini")
    .temperature(0.7)
    .system_prompt("You are a helpful assistant.")
    .build()
)
LLMAgent Methods¶
- LLMAgent.chat(llm_api_key, prompt_args=None)¶
Sends a chat message to the BERRY_GPT service using this prompt configuration.
Parameters:
llm_api_key (str): The LLM API key (e.g., OpenAI API key) for authentication.
prompt_args (dict[str, str] | None, optional): Dictionary of key-value pairs used to replace variables in the system_prompt. Variables are written as {{variable}} in the system_prompt. For example, {"user": "John"} replaces {{user}} with "John".
Returns:
dict: The response from the BERRY_GPT service.
Raises:
ValueError: If the LLM API key is invalid.
Exception: If the API call fails.
Example:
from berrydb import LLMAgent

# Create a prompt configuration
prompt = (
    LLMAgent.Builder()
    .provider("openai")
    .model("gpt-4o-mini")
    .temperature(0.7)
    .system_prompt("Hello {{user}}! You are a helpful assistant.")
    .build()
)

# Send a chat message with variable replacement
response = prompt.chat("sk-your-openai-api-key", {"user": "John"})
# Result: system_prompt becomes "Hello John! You are a helpful assistant."
print(response)
- LLMAgent.save(berrydb_api_key, prompt_name)¶
Saves the current LLMAgent configuration to BerryDB under a specified name.
This allows you to store and reuse prompt configurations across your application.
Parameters:
berrydb_api_key (str): The BerryDB API key for authentication.
prompt_name (str): The unique name to save this prompt configuration as.
Returns:
dict: The API response confirming the save operation.
Raises:
ValueError: If the API key or prompt name is invalid.
Exception: If the API call fails.
Example:
from berrydb import LLMAgent

# Create a prompt configuration
prompt = LLMAgent.Builder().system_prompt("You are a helpful assistant.").build()

# Save the prompt to BerryDB
prompt.save("YOUR_BERRYDB_API_KEY", "helpful-assistant-prompt")
- static LLMAgent.get(berrydb_api_key, prompt_name)¶
Retrieves a saved LLMAgent configuration from BerryDB by its name.
Parameters:
berrydb_api_key (str): The BerryDB API key for authentication.
prompt_name (str): The name of the prompt configuration to retrieve.
Returns:
LLMAgent: An instance of the LLMAgent with the retrieved configuration.
Raises:
ValueError: If the API key or prompt name is invalid.
Exception: If the API call fails or the prompt is not found.
Example:
from berrydb import LLMAgent

# Retrieve a saved prompt from BerryDB
retrieved_prompt = LLMAgent.get("YOUR_BERRYDB_API_KEY", "helpful-assistant-prompt")
print(retrieved_prompt.system_prompt)