camel.agents package#
Submodules#
camel.agents.chat_agent module#
- class camel.agents.chat_agent.ChatAgent(system_message: BaseMessage, model: ModelType | None = None, model_config: BaseConfig | None = None, message_window_size: int | None = None, output_language: str | None = None, function_list: List[OpenAIFunction] | None = None)[source]#
Bases:
BaseAgent
Class for managing conversations of CAMEL Chat Agents.
- Parameters:
system_message (BaseMessage) – The system message for the chat agent.
model (ModelType, optional) – The LLM model to use for generating responses. (default: ModelType.GPT_3_5_TURBO)
model_config (BaseConfig, optional) – Configuration options for the LLM model. (default: None)
message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)
output_language (str, optional) – The language to be output by the agent. (default: None)
function_list (List[OpenAIFunction], optional) – List of available OpenAIFunction objects. (default: None)
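A minimal usage sketch based only on the signatures on this page; the import paths and the BaseMessage.make_assistant_message / make_user_message helper constructors are assumptions that may differ between CAMEL versions:

```python
# Sketch only: import paths and the BaseMessage helper constructors
# are assumptions and may differ between CAMEL versions.
from camel.agents import ChatAgent
from camel.messages import BaseMessage

# The system message fixes the agent's persona.
sys_msg = BaseMessage.make_assistant_message(
    role_name="Assistant",
    content="You are a helpful assistant.",
)
agent = ChatAgent(system_message=sys_msg, output_language="English")

user_msg = BaseMessage.make_user_message(
    role_name="User",
    content="Summarize the CAMEL framework in one sentence.",
)
response = agent.step(user_msg)  # returns a ChatAgentResponse
if not response.terminated:
    print(response.msg.content)
```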
- get_info(id: str | None, usage: Dict[str, int] | None, termination_reasons: List[str], num_tokens: int, called_funcs: List[FunctionCallingRecord]) Dict[str, Any] [source]#
Returns a dictionary containing information about the chat session.
- Parameters:
id (str, optional) – The ID of the chat session.
usage (Dict[str, int], optional) – Information about the usage of the LLM model.
termination_reasons (List[str]) – The reasons for the termination of the chat session.
num_tokens (int) – The number of tokens used in the chat session.
called_funcs (List[FunctionCallingRecord]) – The list of function calling records, containing the information of called functions.
- Returns:
The chat session information.
- Return type:
Dict[str, Any]
- get_usage_dict(output_messages: List[BaseMessage], prompt_tokens: int) Dict[str, int] [source]#
Get usage dictionary when using the stream mode.
- Parameters:
output_messages (list) – List of output messages.
prompt_tokens (int) – Number of input prompt tokens.
- Returns:
Usage dictionary.
- Return type:
dict
- handle_batch_response(response: Dict[str, Any]) Tuple[List[BaseMessage], List[str], Dict[str, int], str] [source]#
- Parameters:
response (dict) – Model response.
- Returns:
A tuple of the list of output messages, the list of finish reasons, the usage dictionary, and the response ID.
- Return type:
tuple
- handle_stream_response(response: Any, prompt_tokens: int) Tuple[List[BaseMessage], List[str], Dict[str, int], str] [source]#
- Parameters:
response (Any) – Model response.
prompt_tokens (int) – Number of input prompt tokens.
- Returns:
A tuple of the list of output messages, the list of finish reasons, the usage dictionary, and the response ID.
- Return type:
tuple
- init_messages() None [source]#
Initializes the stored messages list with the initial system message.
- is_function_calling_enabled() bool [source]#
Whether OpenAI function calling is enabled for this agent.
- Returns:
Whether OpenAI function calling is enabled for this agent, i.e., whether the dictionary of registered functions is non-empty.
- Return type:
bool
- preprocess_messages(messages: List[ChatRecord]) Tuple[List[Dict[str, str]], int] [source]#
Truncate the list of messages if message window is defined and the current length of message list is beyond the window size. Then convert the list of messages to OpenAI’s input format and calculate the number of tokens.
- Parameters:
messages (List[ChatRecord]) – The list of structs containing information about previous chat messages.
- Returns:
A tuple containing the truncated list of messages in OpenAI’s input format and the number of tokens.
- Return type:
tuple
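The truncation rule can be pictured with a short sketch. This is illustrative only, not the library’s implementation; the real method also converts the records to OpenAI’s input format and counts tokens:

```python
from typing import List, Optional

# Illustrative sketch of the windowing rule described above, not the
# library's implementation; the real method also converts the records
# to OpenAI's input format and counts the tokens.
def apply_window(records: List, window_size: Optional[int]) -> List:
    if window_size is not None and len(records) > window_size:
        # Keep only the most recent `window_size` records.
        records = records[-window_size:]
    return records

print(apply_window(list(range(10)), 4))  # -> [6, 7, 8, 9]
```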
- reset()[source]#
Resets the ChatAgent to its initial state and returns the stored messages.
- Returns:
The stored messages.
- Return type:
List[BaseMessage]
- set_output_language(output_language: str) BaseMessage [source]#
Sets the output language for the system message. This method updates the output language for the system message. The output language determines the language in which the output text should be generated.
- Parameters:
output_language (str) – The desired output language.
- Returns:
The updated system message object.
- Return type:
BaseMessage
- step(input_message: BaseMessage) ChatAgentResponse [source]#
Performs a single step in the chat session by generating a response to the input message.
- Parameters:
input_message (BaseMessage) – The input message to the agent. Its role field specifies the role at the backend and may be either user or assistant, but it will be set to user anyway, since for the self agent any incoming message is external.
- Returns:
A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.
- Return type:
ChatAgentResponse
- step_function_call(response: Dict[str, Any]) Tuple[FunctionCallingMessage, FunctionCallingMessage, FunctionCallingRecord] [source]#
Execute the function with arguments following the model’s response.
- Parameters:
response (Dict[str, Any]) – the response obtained by calling the model.
- Returns:
A tuple consisting of two FunctionCallingMessage objects, one containing the arguments and the other containing the execution result, and a struct for logging information about this function call.
- Return type:
tuple
- step_token_exceed(num_tokens: int, called_funcs: List[FunctionCallingRecord]) ChatAgentResponse [source]#
Returns a trivial response containing the number of tokens and information about the called functions when the token limit is exceeded.
- Parameters:
num_tokens (int) – Number of tokens in the messages.
called_funcs (List[FunctionCallingRecord]) – List of information objects of functions called in the current step.
- Returns:
The struct containing trivial outputs and information about the token number and called functions.
- Return type:
ChatAgentResponse
- submit_message(message: BaseMessage) None [source]#
Submits an externally provided message as if it were a response from the backend chat LLM. Currently, the critic’s choice is submitted with this method.
- Parameters:
message (BaseMessage) – An external message to be added as an assistant response.
- property system_message: BaseMessage#
The getter method for the property system_message.
- Returns:
The system message of this agent.
- Return type:
BaseMessage
- update_messages(role: str, message: BaseMessage) List[ChatRecord] [source]#
Updates the stored messages list with a new message.
- Parameters:
role (str) – Role of the message at the backend, which may be system, user, or assistant.
message (BaseMessage) – The new message to add to the stored messages.
- Returns:
The updated stored messages.
- Return type:
List[ChatRecord]
- class camel.agents.chat_agent.ChatAgentResponse(msgs: List[BaseMessage], terminated: bool, info: Dict[str, Any])[source]#
Bases:
object
Response of a ChatAgent.
- msgs#
A list of zero, one, or several messages. If the list is empty, an error occurred during message generation. One message corresponds to normal mode; several messages correspond to critic mode.
- Type:
List[BaseMessage]
- terminated#
A boolean indicating whether the agent decided to terminate the chat session.
- Type:
bool
- info#
Extra information about the chat message.
- Type:
Dict[str, Any]
- info: Dict[str, Any]#
- property msg#
- msgs: List[BaseMessage]#
- terminated: bool#
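A sketch of how the three attributes are typically inspected, continuing the ChatAgent example above; the exact behavior of the msg property when the list is empty or has several entries is an assumption:

```python
# Sketch, continuing the ChatAgent example above.
response = agent.step(user_msg)

if response.terminated:
    print("Terminated:", response.info.get("termination_reasons"))
elif len(response.msgs) == 1:
    print(response.msg.content)      # normal mode: a single message
else:
    for candidate in response.msgs:  # critic mode: several messages
        print(candidate.content)
```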
- class camel.agents.chat_agent.ChatRecord(role_at_backend: str, message: BaseMessage)[source]#
Bases:
object
Historical record of which role produced which message.
- role_at_backend#
Role of the message, mirroring the OpenAI message role, which may be system, user, or assistant.
- Type:
str
- message#
Message payload.
- Type:
BaseMessage
- message: BaseMessage#
- role_at_backend: str#
- class camel.agents.chat_agent.FunctionCallingRecord(func_name: str, args: Dict[str, Any], result: Any)[source]#
Bases:
object
Historical records of functions called in the conversation.
- func_name#
The name of the function being called.
- Type:
str
- args#
The dictionary of arguments passed to the function.
- Type:
Dict[str, Any]
- result#
The execution result of calling this function.
- Type:
Any
- args: Dict[str, Any]#
- func_name: str#
- result: Any#
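Since the class is a plain record of a call, constructing one is direct. A sketch with a hypothetical add function:

```python
from camel.agents.chat_agent import FunctionCallingRecord

# Sketch: a record of a hypothetical `add` function call.
record = FunctionCallingRecord(
    func_name="add",
    args={"a": 2, "b": 3},
    result=5,
)
print(record.func_name, record.args, record.result)
```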
camel.agents.role_playing module#
camel.agents.task_agent module#
- class camel.agents.task_agent.TaskCreationAgent(role_name: str, objective: str | TextPrompt, model: ModelType | None = None, model_config: Any | None = None, output_language: str | None = None, message_window_size: int | None = None, max_task_num: int | None = 3)[source]#
Bases:
ChatAgent
An agent that helps create new tasks based on the objective and last completed task. Compared to TaskPlannerAgent, it is still a task planner, but it has more context information, such as the last task and the incomplete task list. Modified from BabyAGI.
- task_creation_prompt#
A prompt for the agent to create new tasks.
- Type:
TextPrompt
- Parameters:
role_name (str) – The role name of the Agent to create the task.
objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.
model (ModelType, optional) – The type of model to use for the agent. (default: ModelType.GPT_3_5_TURBO)
model_config (Any, optional) – The configuration for the model. (default: None)
output_language (str, optional) – The language to be output by the agent. (default: None)
message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)
max_task_num (int, optional) – The maximum number of planned tasks in one round. (default: 3)
- run(task_list: List[str]) List[str] [source]#
Generate subtasks based on the previous task results and incomplete task list.
- Parameters:
task_list (List[str]) – The completed or in-progress tasks, which should not overlap with newly created tasks.
- Returns:
The new task list generated by the Agent.
- Return type:
List[str]
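A usage sketch based on the constructor and run signatures above; the role name, objective, and task strings are illustrative:

```python
from camel.agents.task_agent import TaskCreationAgent

# Sketch: role name, objective and task strings are illustrative.
agent = TaskCreationAgent(
    role_name="Project Manager",
    objective="Write a survey on multi-agent LLM systems",
    max_task_num=3,
)
new_tasks = agent.run(task_list=["Collect key papers on LLM agents"])
print(new_tasks)  # newly proposed subtasks, disjoint from task_list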
- class camel.agents.task_agent.TaskPlannerAgent(model: ModelType | None = None, model_config: Any | None = None, output_language: str | None = None)[source]#
Bases:
ChatAgent
An agent that helps divide a task into subtasks based on the input task prompt.
- task_planner_prompt#
A prompt for the agent to divide the task into subtasks.
- Type:
TextPrompt
- Parameters:
model (ModelType, optional) – The type of model to use for the agent. (default: ModelType.GPT_3_5_TURBO)
model_config (Any, optional) – The configuration for the model. (default: None)
output_language (str, optional) – The language to be output by the agent. (default: None)
- run(task_prompt: str | TextPrompt) TextPrompt [source]#
Generate subtasks based on the input task prompt.
- Parameters:
task_prompt (Union[str, TextPrompt]) – The prompt for the task to be divided into subtasks.
- Returns:
A prompt for the subtasks generated by the agent.
- Return type:
TextPrompt
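A usage sketch based on the run signature above; the task prompt is illustrative:

```python
from camel.agents.task_agent import TaskPlannerAgent

# Sketch: the task prompt is illustrative.
planner = TaskPlannerAgent()
subtasks = planner.run(task_prompt="Develop a trading bot for the stock market")
print(subtasks)  # a TextPrompt listing the generated subtasks
```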
- class camel.agents.task_agent.TaskPrioritizationAgent(objective: str | TextPrompt, model: ModelType | None = None, model_config: Any | None = None, output_language: str | None = None, message_window_size: int | None = None)[source]#
Bases:
ChatAgent
An agent that helps re-prioritize the task list and returns a numbered, prioritized list. Modified from BabyAGI.
- task_prioritization_prompt#
A prompt for the agent to prioritize tasks.
- Type:
TextPrompt
- Parameters:
objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.
model (ModelType, optional) – The type of model to use for the agent. (default: ModelType.GPT_3_5_TURBO)
model_config (Any, optional) – The configuration for the model. (default: None)
output_language (str, optional) – The language to be output by the agent. (default: None)
message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)
- class camel.agents.task_agent.TaskSpecifyAgent(model: ModelType | None = None, task_type: TaskType = TaskType.AI_SOCIETY, model_config: Any | None = None, task_specify_prompt: str | TextPrompt | None = None, word_limit: int = 50, output_language: str | None = None)[source]#
Bases:
ChatAgent
An agent that specifies a given task prompt by prompting the user to provide more details.
- DEFAULT_WORD_LIMIT#
The default word limit for the task prompt.
- Type:
int
- task_specify_prompt#
The prompt for specifying the task.
- Type:
Union[str, TextPrompt]
- Parameters:
model (ModelType, optional) – The type of model to use for the agent. (default: ModelType.GPT_3_5_TURBO)
task_type (TaskType, optional) – The type of task for which to generate a prompt. (default: TaskType.AI_SOCIETY)
model_config (Any, optional) – The configuration for the model. (default: None)
task_specify_prompt (Union[str, TextPrompt], optional) – The prompt for specifying the task. (default: None)
word_limit (int, optional) – The word limit for the task prompt. (default: 50)
output_language (str, optional) – The language to be output by the agent. (default: None)
- DEFAULT_WORD_LIMIT = 50#
- run(task_prompt: str | TextPrompt, meta_dict: Dict[str, Any] | None = None) TextPrompt [source]#
Specify the given task prompt by providing more details.
- Parameters:
task_prompt (Union[str, TextPrompt]) – The original task prompt.
meta_dict (Dict[str, Any], optional) – A dictionary containing additional information to include in the prompt. (default:
None
)
- Returns:
The specified task prompt.
- Return type:
TextPrompt
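A usage sketch based on the signatures above; the TaskType import path and the meta_dict keys are assumptions tied to the AI_SOCIETY prompt template:

```python
from camel.agents.task_agent import TaskSpecifyAgent
from camel.types import TaskType  # assumed import path

specifier = TaskSpecifyAgent(task_type=TaskType.AI_SOCIETY, word_limit=50)
specified = specifier.run(
    task_prompt="Improving stage presence and performance skills",
    # meta_dict keys are assumptions tied to the AI_SOCIETY template.
    meta_dict={"assistant_role": "Musician", "user_role": "Student"},
)
print(specified)  # a more detailed TextPrompt
```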
Module contents#
- class camel.agents.BaseToolAgent(name: str, description: str)[source]#
Bases:
BaseAgent
- Creates a BaseToolAgent object with the specified name and description.
- Parameters:
name (str) – The name of the tool agent.
description (str) – The description of the tool agent.
- class camel.agents.ChatAgent(system_message: BaseMessage, model: ModelType | None = None, model_config: BaseConfig | None = None, message_window_size: int | None = None, output_language: str | None = None, function_list: List[OpenAIFunction] | None = None)[source]#
Bases:
BaseAgent
Class for managing conversations of CAMEL Chat Agents.
- Parameters:
system_message (BaseMessage) – The system message for the chat agent.
model (ModelType, optional) – The LLM model to use for generating responses. (default: ModelType.GPT_3_5_TURBO)
model_config (BaseConfig, optional) – Configuration options for the LLM model. (default: None)
message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)
output_language (str, optional) – The language to be output by the agent. (default: None)
function_list (List[OpenAIFunction], optional) – List of available OpenAIFunction objects. (default: None)
- get_info(id: str | None, usage: Dict[str, int] | None, termination_reasons: List[str], num_tokens: int, called_funcs: List[FunctionCallingRecord]) Dict[str, Any] [source]#
Returns a dictionary containing information about the chat session.
- Parameters:
id (str, optional) – The ID of the chat session.
usage (Dict[str, int], optional) – Information about the usage of the LLM model.
termination_reasons (List[str]) – The reasons for the termination of the chat session.
num_tokens (int) – The number of tokens used in the chat session.
called_funcs (List[FunctionCallingRecord]) – The list of function calling records, containing the information of called functions.
- Returns:
The chat session information.
- Return type:
Dict[str, Any]
- get_usage_dict(output_messages: List[BaseMessage], prompt_tokens: int) Dict[str, int] [source]#
Get usage dictionary when using the stream mode.
- Parameters:
output_messages (list) – List of output messages.
prompt_tokens (int) – Number of input prompt tokens.
- Returns:
Usage dictionary.
- Return type:
dict
- handle_batch_response(response: Dict[str, Any]) Tuple[List[BaseMessage], List[str], Dict[str, int], str] [source]#
- Parameters:
response (dict) – Model response.
- Returns:
A tuple of the list of output messages, the list of finish reasons, the usage dictionary, and the response ID.
- Return type:
tuple
- handle_stream_response(response: Any, prompt_tokens: int) Tuple[List[BaseMessage], List[str], Dict[str, int], str] [source]#
- Parameters:
response (Any) – Model response.
prompt_tokens (int) – Number of input prompt tokens.
- Returns:
A tuple of the list of output messages, the list of finish reasons, the usage dictionary, and the response ID.
- Return type:
tuple
- init_messages() None [source]#
Initializes the stored messages list with the initial system message.
- is_function_calling_enabled() bool [source]#
Whether OpenAI function calling is enabled for this agent.
- Returns:
Whether OpenAI function calling is enabled for this agent, i.e., whether the dictionary of registered functions is non-empty.
- Return type:
bool
- preprocess_messages(messages: List[ChatRecord]) Tuple[List[Dict[str, str]], int] [source]#
Truncate the list of messages if message window is defined and the current length of message list is beyond the window size. Then convert the list of messages to OpenAI’s input format and calculate the number of tokens.
- Parameters:
messages (List[ChatRecord]) – The list of structs containing information about previous chat messages.
- Returns:
A tuple containing the truncated list of messages in OpenAI’s input format and the number of tokens.
- Return type:
tuple
- reset()[source]#
Resets the ChatAgent to its initial state and returns the stored messages.
- Returns:
The stored messages.
- Return type:
List[BaseMessage]
- set_output_language(output_language: str) BaseMessage [source]#
Sets the output language for the system message. This method updates the output language for the system message. The output language determines the language in which the output text should be generated.
- Parameters:
output_language (str) – The desired output language.
- Returns:
The updated system message object.
- Return type:
BaseMessage
- step(input_message: BaseMessage) ChatAgentResponse [source]#
Performs a single step in the chat session by generating a response to the input message.
- Parameters:
input_message (BaseMessage) – The input message to the agent. Its role field specifies the role at the backend and may be either user or assistant, but it will be set to user anyway, since for the self agent any incoming message is external.
- Returns:
A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.
- Return type:
ChatAgentResponse
- step_function_call(response: Dict[str, Any]) Tuple[FunctionCallingMessage, FunctionCallingMessage, FunctionCallingRecord] [source]#
Execute the function with arguments following the model’s response.
- Parameters:
response (Dict[str, Any]) – the response obtained by calling the model.
- Returns:
A tuple consisting of two FunctionCallingMessage objects, one containing the arguments and the other containing the execution result, and a struct for logging information about this function call.
- Return type:
tuple
- step_token_exceed(num_tokens: int, called_funcs: List[FunctionCallingRecord]) ChatAgentResponse [source]#
Returns a trivial response containing the number of tokens and information about the called functions when the token limit is exceeded.
- Parameters:
num_tokens (int) – Number of tokens in the messages.
called_funcs (List[FunctionCallingRecord]) – List of information objects of functions called in the current step.
- Returns:
The struct containing trivial outputs and information about the token number and called functions.
- Return type:
ChatAgentResponse
- submit_message(message: BaseMessage) None [source]#
Submits an externally provided message as if it were a response from the backend chat LLM. Currently, the critic’s choice is submitted with this method.
- Parameters:
message (BaseMessage) – An external message to be added as an assistant response.
- property system_message: BaseMessage#
The getter method for the property system_message.
- Returns:
The system message of this agent.
- Return type:
BaseMessage
- update_messages(role: str, message: BaseMessage) List[ChatRecord] [source]#
Updates the stored messages list with a new message.
- Parameters:
role (str) – Role of the message at the backend, which may be system, user, or assistant.
message (BaseMessage) – The new message to add to the stored messages.
- Returns:
The updated stored messages.
- Return type:
List[ChatRecord]
- class camel.agents.ChatAgentResponse(msgs: List[BaseMessage], terminated: bool, info: Dict[str, Any])[source]#
Bases:
object
Response of a ChatAgent.
- msgs#
A list of zero, one, or several messages. If the list is empty, an error occurred during message generation. One message corresponds to normal mode; several messages correspond to critic mode.
- Type:
List[BaseMessage]
- terminated#
A boolean indicating whether the agent decided to terminate the chat session.
- Type:
bool
- info#
Extra information about the chat message.
- Type:
Dict[str, Any]
- info: Dict[str, Any]#
- property msg#
- msgs: List[BaseMessage]#
- terminated: bool#
- class camel.agents.CriticAgent(system_message: BaseMessage, model: ModelType = ModelType.GPT_3_5_TURBO, model_config: Any | None = None, message_window_size: int = 6, retry_attempts: int = 2, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#
Bases:
ChatAgent
A class for the critic agent that assists in selecting an option.
- Parameters:
system_message (BaseMessage) – The system message for the critic agent.
model (ModelType, optional) – The LLM model to use for generating responses. (default: ModelType.GPT_3_5_TURBO)
model_config (Any, optional) – Configuration options for the LLM model. (default: None)
message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: 6)
retry_attempts (int, optional) – The number of retry attempts if the critic fails to return a valid option. (default: 2)
verbose (bool, optional) – Whether to print the critic’s messages. (default: False)
logger_color (Any) – The color of the menu options displayed to the user. (default: Fore.MAGENTA)
- flatten_options(messages: Sequence[BaseMessage]) str [source]#
Flattens the options to the critic.
- Parameters:
messages (Sequence[BaseMessage]) – A list of BaseMessage objects.
- Returns:
A string containing the flattened options to the critic.
- Return type:
str
- get_option(input_message: BaseMessage) str [source]#
Gets the option selected by the critic.
- Parameters:
input_message (BaseMessage) – A BaseMessage object representing the input message.
- Returns:
The option selected by the critic.
- Return type:
str
- parse_critic(critic_msg: BaseMessage) str | None [source]#
Parses the critic’s message and extracts the choice.
- Parameters:
critic_msg (BaseMessage) – A BaseMessage object representing the critic’s response.
- Returns:
The critic’s choice as a string, or None if the message could not be parsed.
- Return type:
Optional[str]
- reduce_step(input_messages: Sequence[BaseMessage]) ChatAgentResponse [source]#
Performs one step of the conversation by flattening options to the critic, getting the option, and parsing the choice.
- Parameters:
input_messages (Sequence[BaseMessage]) – A list of BaseMessage objects.
- Returns:
A ChatAgentResponse object that includes the critic’s choice.
- Return type:
ChatAgentResponse
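A usage sketch based on the signatures above; the BaseMessage helper is an assumption, and the options would normally come from another agent’s ChatAgentResponse.msgs:

```python
from camel.agents import CriticAgent
from camel.messages import BaseMessage  # helper below is an assumption

critic = CriticAgent(
    system_message=BaseMessage.make_assistant_message(
        role_name="Critic",
        content="You pick the best option among those proposed.",
    ),
    verbose=True,
)
# Normally these would be another agent's ChatAgentResponse.msgs.
options = [
    BaseMessage.make_assistant_message(role_name="Assistant",
                                       content=f"Option {i}")
    for i in (1, 2)
]
choice = critic.reduce_step(options)
print(choice.msg.content)  # the selected option
```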
- class camel.agents.EmbodiedAgent(system_message: BaseMessage, model: ModelType = ModelType.GPT_4, model_config: Any | None = None, message_window_size: int | None = None, action_space: List[BaseToolAgent] | None = None, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#
Bases:
ChatAgent
Class for managing conversations of CAMEL Embodied Agents.
- Parameters:
system_message (BaseMessage) – The system message for the chat agent.
model (ModelType, optional) – The LLM model to use for generating responses. (default: ModelType.GPT_4)
model_config (Any, optional) – Configuration options for the LLM model. (default: None)
message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)
action_space (List[BaseToolAgent], optional) – The action space for the embodied agent. (default: None)
verbose (bool, optional) – Whether to print the critic’s messages. (default: False)
logger_color (Any) – The color of the logger displayed to the user. (default: Fore.MAGENTA)
- get_action_space_prompt() str [source]#
Returns the action space prompt.
- Returns:
The action space prompt.
- Return type:
str
- step(input_message: BaseMessage) ChatAgentResponse [source]#
Performs a step in the conversation.
- Parameters:
input_message (BaseMessage) – The input message.
- Returns:
A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.
- Return type:
ChatAgentResponse
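A usage sketch based on the signatures above; the BaseMessage helpers and the tool agent name are assumptions:

```python
from camel.agents import EmbodiedAgent, HuggingFaceToolAgent
from camel.messages import BaseMessage  # helpers below are assumptions

agent = EmbodiedAgent(
    system_message=BaseMessage.make_assistant_message(
        role_name="Artist",
        content="You create drawings using the available tools.",
    ),
    action_space=[HuggingFaceToolAgent("hugging_face_tool_agent")],
    verbose=True,
)
response = agent.step(
    BaseMessage.make_user_message(
        role_name="User",
        content="Draw a picture of a camel.",
    )
)
print(response.msg.content)
```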
- class camel.agents.HuggingFaceToolAgent(name: str, *args: Any, remote: bool = True, **kwargs: Any)[source]#
Bases:
BaseToolAgent
- Tool agent for calling HuggingFace models. This agent is a wrapper around agents from the transformers library. For more information about the available models, please see the transformers documentation at https://huggingface.co/docs/transformers/transformers_agents.
- Parameters:
name (str) – The name of the agent.
*args (Any) – Additional positional arguments to pass to the underlying Agent class.
remote (bool, optional) – Flag indicating whether to run the agent remotely. (default: True)
**kwargs (Any) – Additional keyword arguments to pass to the underlying Agent class.
- chat(*args: Any, remote: bool | None = None, **kwargs: Any) Any [source]#
Runs the agent in a chat conversation mode.
- Parameters:
*args (Any) – Positional arguments to pass to the agent.
remote (bool, optional) – Flag indicating whether to run the agent remotely. Overrides the default setting. (default: None)
**kwargs (Any) – Keyword arguments to pass to the agent.
- Returns:
The response from the agent.
- Return type:
Any
- step(*args: Any, remote: bool | None = None, **kwargs: Any) Any [source]#
Runs the agent in single execution mode.
- Parameters:
*args (Any) – Positional arguments to pass to the agent.
remote (bool, optional) – Flag indicating whether to run the agent remotely. Overrides the default setting. (default: None)
**kwargs (Any) – Keyword arguments to pass to the agent.
- Returns:
The response from the agent.
- Return type:
Any
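A usage sketch based on the signatures above; it assumes the transformers agents dependencies are installed, and the prompts are illustrative:

```python
from camel.agents import HuggingFaceToolAgent

# Sketch: assumes the `transformers` agents dependencies are installed.
tool_agent = HuggingFaceToolAgent("hugging_face_tool_agent", remote=True)

# Single execution mode: each call is independent.
image = tool_agent.step("Generate an image of a river with lakes.")

# Chat mode: conversational state is kept between calls.
tool_agent.chat("Now add a boat to that image.")
```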
- class camel.agents.RoleAssignmentAgent(model: ModelType = ModelType.GPT_3_5_TURBO, model_config: Any | None = None)[source]#
Bases:
ChatAgent
An agent that generates role names based on the task prompt.
- role_assignment_prompt#
A prompt for the agent to generate role names.
- Type:
TextPrompt
- Parameters:
model (ModelType, optional) – The type of model to use for the agent. (default: ModelType.GPT_3_5_TURBO)
model_config (Any, optional) – The configuration for the model. (default: None)
- run(task_prompt: str | TextPrompt, num_roles: int = 2) Dict[str, str] [source]#
Generate role names based on the input task prompt.
- Parameters:
task_prompt (Union[str, TextPrompt]) – The prompt for the task based on which the roles are to be generated.
num_roles (int, optional) – The number of roles to generate. (default: 2)
- Returns:
A dictionary mapping role names to their descriptions.
- Return type:
Dict[str, str]
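A usage sketch based on the run signature above; the task prompt is illustrative:

```python
from camel.agents import RoleAssignmentAgent

role_agent = RoleAssignmentAgent()
role_descriptions = role_agent.run(
    task_prompt="Develop a trading bot for the stock market",
    num_roles=3,
)
for name, description in role_descriptions.items():
    print(f"{name}: {description}")
```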
- class camel.agents.TaskCreationAgent(role_name: str, objective: str | TextPrompt, model: ModelType | None = None, model_config: Any | None = None, output_language: str | None = None, message_window_size: int | None = None, max_task_num: int | None = 3)[source]#
Bases:
ChatAgent
An agent that helps create new tasks based on the objective and last completed task. Compared to TaskPlannerAgent, it is still a task planner, but it has more context information, such as the last task and the incomplete task list. Modified from BabyAGI.
- task_creation_prompt#
A prompt for the agent to create new tasks.
- Type:
TextPrompt
- Parameters:
role_name (str) – The role name of the Agent to create the task.
objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.
model (ModelType, optional) – The type of model to use for the agent. (default: ModelType.GPT_3_5_TURBO)
model_config (Any, optional) – The configuration for the model. (default: None)
output_language (str, optional) – The language to be output by the agent. (default: None)
message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)
max_task_num (int, optional) – The maximum number of planned tasks in one round. (default: 3)
- run(task_list: List[str]) List[str] [source]#
Generate subtasks based on the previous task results and incomplete task list.
- Parameters:
task_list (List[str]) – The completed or in-progress tasks, which should not overlap with newly created tasks.
- Returns:
The new task list generated by the Agent.
- Return type:
List[str]
- class camel.agents.TaskPlannerAgent(model: ModelType | None = None, model_config: Any | None = None, output_language: str | None = None)[source]#
Bases:
ChatAgent
An agent that helps divide a task into subtasks based on the input task prompt.
- task_planner_prompt#
A prompt for the agent to divide the task into subtasks.
- Type:
TextPrompt
- Parameters:
model (ModelType, optional) – The type of model to use for the agent. (default: ModelType.GPT_3_5_TURBO)
model_config (Any, optional) – The configuration for the model. (default: None)
output_language (str, optional) – The language to be output by the agent. (default: None)
- run(task_prompt: str | TextPrompt) TextPrompt [source]#
Generate subtasks based on the input task prompt.
- Parameters:
task_prompt (Union[str, TextPrompt]) – The prompt for the task to be divided into subtasks.
- Returns:
A prompt for the subtasks generated by the agent.
- Return type:
TextPrompt
- class camel.agents.TaskPrioritizationAgent(objective: str | TextPrompt, model: ModelType | None = None, model_config: Any | None = None, output_language: str | None = None, message_window_size: int | None = None)[source]#
Bases:
ChatAgent
An agent that helps re-prioritize the task list and returns a numbered, prioritized list. Modified from BabyAGI.
- task_prioritization_prompt#
A prompt for the agent to prioritize tasks.
- Type:
TextPrompt
- Parameters:
objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.
model (ModelType, optional) – The type of model to use for the agent. (default: ModelType.GPT_3_5_TURBO)
model_config (Any, optional) – The configuration for the model. (default: None)
output_language (str, optional) – The language to be output by the agent. (default: None)
message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)
- class camel.agents.TaskSpecifyAgent(model: ModelType | None = None, task_type: TaskType = TaskType.AI_SOCIETY, model_config: Any | None = None, task_specify_prompt: str | TextPrompt | None = None, word_limit: int = 50, output_language: str | None = None)[source]#
Bases:
ChatAgent
An agent that specifies a given task prompt by prompting the user to provide more details.
- DEFAULT_WORD_LIMIT#
The default word limit for the task prompt.
- Type:
int
- task_specify_prompt#
The prompt for specifying the task.
- Type:
Union[str, TextPrompt]
- Parameters:
model (ModelType, optional) – The type of model to use for the agent. (default: ModelType.GPT_3_5_TURBO)
task_type (TaskType, optional) – The type of task for which to generate a prompt. (default: TaskType.AI_SOCIETY)
model_config (Any, optional) – The configuration for the model. (default: None)
task_specify_prompt (Union[str, TextPrompt], optional) – The prompt for specifying the task. (default: None)
word_limit (int, optional) – The word limit for the task prompt. (default: 50)
output_language (str, optional) – The language to be output by the agent. (default: None)
- DEFAULT_WORD_LIMIT = 50#
- func_dict: Dict[str, Callable]#
- message_window_size: int | None#
- model_backend: BaseModelBackend#
- model_token_limit: int#
- orig_sys_message: BaseMessage#
- output_language: str | None#
- role_name: str#
- run(task_prompt: str | TextPrompt, meta_dict: Dict[str, Any] | None = None) TextPrompt [source]#
Specify the given task prompt by providing more details.
- Parameters:
task_prompt (Union[str, TextPrompt]) – The original task prompt.
meta_dict (Dict[str, Any], optional) – A dictionary containing additional information to include in the prompt. (default:
None
)
- Returns:
The specified task prompt.
- Return type:
TextPrompt
- stored_messages: List[ChatRecord]#
- task_specify_prompt: str | TextPrompt#
- terminated: bool#