Camel Package#

Subpackages#

Submodules#

camel.configs module#

class camel.configs.BaseConfig[source]#

Bases: ABC

class camel.configs.ChatGPTConfig(temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | ~typing.Sequence[str] | None = None, max_tokens: int | None = None, presence_penalty: float = 0.0, frequency_penalty: float = 0.0, logit_bias: ~typing.Dict = <factory>, user: str = '')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the OpenAI API.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")

frequency_penalty: float = 0.0#
logit_bias: Dict#
max_tokens: int | None = None#
n: int = 1#
presence_penalty: float = 0.0#
stop: str | Sequence[str] | None = None#
stream: bool = False#
temperature: float = 0.2#
top_p: float = 1.0#
user: str = ''#
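
A minimal usage sketch; the parameter values below are illustrative, not recommendations:

>>> from camel.configs import ChatGPTConfig
>>> config = ChatGPTConfig(temperature=0.7, n=2, max_tokens=256)
>>> config.n
2
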
class camel.configs.FunctionCallingConfig(temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | ~typing.Sequence[str] | None = None, max_tokens: int | None = None, presence_penalty: float = 0.0, frequency_penalty: float = 0.0, logit_bias: ~typing.Dict = <factory>, user: str = '', functions: ~typing.List[~typing.Dict[str, ~typing.Any]] = <factory>, function_call: ~typing.Dict[str, str] | str = 'auto')[source]#

Bases: ChatGPTConfig

Defines the parameters for generating chat completions using the OpenAI API with functions included.

Parameters:
  • functions (List[Dict[str, Any]]) – A list of functions the model may generate JSON inputs for.

  • function_call (Union[Dict[str, str], str], optional) – Controls how the model responds to function calls. "none" means the model does not call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. (default: "auto")

classmethod from_openai_function_list(function_list: List[OpenAIFunction], function_call: Dict[str, str] | str = 'auto', kwargs: Dict[str, Any] | None = None)[source]#

Class method for creating an instance given the function-related arguments.

Parameters:
  • function_list (List[OpenAIFunction]) – The list of function objects to be loaded into this configuration and passed to the model.

  • function_call (Union[Dict[str, str], str], optional) – Controls how the model responds to function calls, as specified in the creator’s documentation.

  • kwargs (Optional[Dict[str, Any]]) – The extra modifications to be made on the original settings defined in ChatGPTConfig.

Returns:

A new instance that loads the given function list into a list of dictionaries and stores the given function_call argument.

Return type:

FunctionCallingConfig

function_call: Dict[str, str] | str = 'auto'#
functions: List[Dict[str, Any]]#
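
A hedged sketch of from_openai_function_list. It assumes that OpenAIFunction (defined elsewhere in the package) can wrap a plain Python callable; check the wrapper's actual constructor in your installation:

>>> from camel.configs import FunctionCallingConfig
>>> def add(a: int, b: int) -> int:
...     return a + b
>>> config = FunctionCallingConfig.from_openai_function_list(
...     function_list=[OpenAIFunction(add)],  # assumed wrapper constructor
...     function_call="auto",
...     kwargs={"temperature": 0.0},
... )
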
class camel.configs.OpenSourceConfig(model_path: str, server_url: str, api_params: ChatGPTConfig = ChatGPTConfig(temperature=0.2, top_p=1.0, n=1, stream=False, stop=None, max_tokens=None, presence_penalty=0.0, frequency_penalty=0.0, logit_bias={}, user=''))[source]#

Bases: BaseConfig

Defines the parameters for setting up open-source models, including the parameters to be passed to the chat completion function of the OpenAI API.

Parameters:
  • model_path (str) – The path to a local folder containing the model files or the model card in HuggingFace hub.

  • server_url (str) – The URL to the server running the model inference which will be used as the API base of OpenAI API.

  • api_params (ChatGPTConfig) – An instance of ChatGPTConfig to contain the arguments to be passed to the OpenAI API.

api_params: ChatGPTConfig = ChatGPTConfig(temperature=0.2, top_p=1.0, n=1, stream=False, stop=None, max_tokens=None, presence_penalty=0.0, frequency_penalty=0.0, logit_bias={}, user='')#
model_path: str#
server_url: str#
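
A configuration sketch; the model path and server URL below are placeholders:

>>> from camel.configs import ChatGPTConfig, OpenSourceConfig
>>> config = OpenSourceConfig(
...     model_path="meta-llama/Llama-2-7b-chat-hf",  # placeholder model card
...     server_url="http://localhost:8000/v1",       # placeholder server URL
...     api_params=ChatGPTConfig(temperature=0.5),
... )
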

camel.generators module#

class camel.generators.AISocietyTaskPromptGenerator(num_tasks: int = 10)[source]#

Bases: object

from_role_files(assistant_role_names_path: str = 'data/ai_society/assistant_roles.txt', user_role_names_path: str = 'data/ai_society/user_roles.txt') Generator[Tuple[str, Tuple[str, str]], None, None][source]#
from_role_generator(role_generator: Generator[Tuple, None, None]) Generator[Tuple[str, Tuple[str, str]], None, None][source]#
class camel.generators.CodeTaskPromptGenerator(num_tasks: int = 50)[source]#

Bases: object

from_role_files(languages_path: str = 'data/code/languages.txt', domains_path: str = 'data/code/domains.txt') Generator[Tuple[TextPrompt, str, str], None, None][source]#
from_role_generator(role_generator: Generator[Tuple, None, None]) Generator[str, None, None][source]#
class camel.generators.RoleNameGenerator(assistant_role_names_path: str = 'data/ai_society/assistant_roles.txt', user_role_names_path: str = 'data/ai_society/user_roles.txt', assistant_role_names: List[str] | None = None, user_role_names: List[str] | None = None)[source]#

Bases: object

from_role_files() Generator[Tuple, None, None][source]#
class camel.generators.SingleTxtGenerator(text_file_path: str)[source]#

Bases: object

from_role_files() Generator[str, None, None][source]#
class camel.generators.SystemMessageGenerator(task_type: TaskType = TaskType.AI_SOCIETY, sys_prompts: Dict[RoleType, str] | None = None, sys_msg_meta_dict_keys: Set[str] | None = None)[source]#

Bases: object

System message generator for agents.

Parameters:
  • task_type (TaskType, optional) – The task type. (default: TaskType.AI_SOCIETY)

  • sys_prompts (Optional[Dict[RoleType, str]], optional) – The prompts of the system messages for each role type. (default: None)

  • sys_msg_meta_dict_keys (Optional[Set[str]], optional) – The set of keys of the meta dictionary used to fill the prompts. (default: None)

from_dict(meta_dict: ~typing.Dict[str, str], role_tuple: ~typing.Tuple[str, ~camel.typing.RoleType] = ('', <RoleType.DEFAULT: 'default'>)) BaseMessage[source]#

Generates a system message from a dictionary.

Parameters:
  • meta_dict (Dict[str, str]) – The dictionary containing the information to generate the system message.

  • role_tuple (Tuple[str, RoleType], optional) – The tuple containing the role name and role type. (default: (“”, RoleType.DEFAULT))

Returns:

The generated system message.

Return type:

BaseMessage

from_dicts(meta_dicts: List[Dict[str, str]], role_tuples: List[Tuple[str, RoleType]]) List[BaseMessage][source]#

Generates a list of system messages from a list of dictionaries.

Parameters:
  • meta_dicts (List[Dict[str, str]]) – A list of dictionaries containing the information to generate the system messages.

  • role_tuples (List[Tuple[str, RoleType]]) – A list of tuples containing the role name and role type for each system message.

Returns:

A list of generated system messages.

Return type:

List[BaseMessage]

Raises:

ValueError – If the numbers of meta_dicts and role_tuples differ.

validate_meta_dict_keys(meta_dict: Dict[str, str]) None[source]#

Validates the keys of the meta_dict.

Parameters:

meta_dict (Dict[str, str]) – The dictionary to validate.
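
A usage sketch for from_dict; the exact meta-dictionary keys depend on the prompt template of the chosen task type, so the keys below are illustrative:

>>> from camel.generators import SystemMessageGenerator
>>> from camel.typing import RoleType, TaskType
>>> generator = SystemMessageGenerator(task_type=TaskType.AI_SOCIETY)
>>> sys_msg = generator.from_dict(
...     meta_dict={"assistant_role": "Python Programmer",  # illustrative keys
...                "user_role": "Stock Trader",
...                "task": "Develop a trading bot"},
...     role_tuple=("Python Programmer", RoleType.ASSISTANT),
... )
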

camel.human module#

class camel.human.Human(name: str = 'Kill Switch Engineer', logger_color: Any = '\x1b[35m')[source]#

Bases: object

A class representing a human user.

Parameters:
  • name (str) – The name of the human user. (default: "Kill Switch Engineer").

  • logger_color (Any) – The color of the menu options displayed to the user. (default: Fore.MAGENTA)

name#

The name of the human user.

Type:

str

logger_color#

The color of the menu options displayed to the user.

Type:

Any

input_button#

The text displayed for the input button.

Type:

str

kill_button#

The text displayed for the kill button.

Type:

str

options_dict#

A dictionary containing the options displayed to the user.

Type:

Dict[str, str]

display_options(messages: Sequence[BaseMessage]) None[source]#

Displays the options to the user.

Parameters:

messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

None

get_input() str[source]#

Gets the input from the user.

Returns:

The user’s input.

Return type:

str

parse_input(human_input: str) str[source]#

Parses the user’s input and returns the corresponding content string.

Parameters:

human_input (str) – The user’s input.

Returns:

A str representing the user’s input.

Return type:

str

reduce_step(messages: Sequence[BaseMessage]) ChatAgentResponse[source]#

Performs one step of the conversation by displaying options to the user, getting their input, and parsing their choice.

Parameters:

messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A ChatAgentResponse object representing the user’s choice.

Return type:

ChatAgentResponse
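
A construction sketch; reduce_step and get_input block on interactive console input, so only instantiation is shown:

>>> from camel.human import Human
>>> human = Human(name="Reviewer")
>>> human.name
'Reviewer'
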

camel.messages module#

class camel.messages.BaseMessage(role_name: str, role_type: RoleType, meta_dict: Dict[str, str] | None, content: str)[source]#

Bases: object

Base class for message objects used in CAMEL chat system.

Parameters:
  • role_name (str) – The name of the user or assistant role.

  • role_type (RoleType) – The type of role, either RoleType.ASSISTANT or RoleType.USER.

  • meta_dict (Optional[Dict[str, str]]) – Additional metadata dictionary for the message.

  • content (str) – The content of the message.

content: str#
create_new_instance(content: str) BaseMessage[source]#

Create a new instance of the BaseMessage with updated content.

Parameters:

content (str) – The new content value.

Returns:

The new instance of BaseMessage.

Return type:

BaseMessage

extract_text_and_code_prompts() Tuple[List[TextPrompt], List[CodePrompt]][source]#

Extract text and code prompts from the message content.

Returns:

A tuple containing a list of text prompts and a list of code prompts extracted from the content.

Return type:

Tuple[List[TextPrompt], List[CodePrompt]]

classmethod make_assistant_message(role_name: str, content: str, meta_dict: Dict[str, str] | None = None) BaseMessage[source]#
classmethod make_user_message(role_name: str, content: str, meta_dict: Dict[str, str] | None = None) BaseMessage[source]#
meta_dict: Dict[str, str] | None#
role_name: str#
role_type: RoleType#
to_dict() Dict[source]#

Converts the message to a dictionary.

Returns:

The converted dictionary.

Return type:

dict

to_openai_assistant_message() Dict[str, str][source]#

Converts the message to an OpenAIAssistantMessage object.

Returns:

The converted OpenAIAssistantMessage object.

Return type:

OpenAIAssistantMessage

to_openai_message(role_at_backend: str) Dict[str, str][source]#

Converts the message to an OpenAIMessage object.

Parameters:

role_at_backend (str) – The role of the message in OpenAI chat system, either "system", "user", or "assistant".

Returns:

The converted OpenAIMessage object.

Return type:

OpenAIMessage

to_openai_system_message() Dict[str, str][source]#

Converts the message to an OpenAISystemMessage object.

Returns:

The converted OpenAISystemMessage object.

Return type:

OpenAISystemMessage

to_openai_user_message() Dict[str, str][source]#

Converts the message to an OpenAIUserMessage object.

Returns:

The converted OpenAIUserMessage object.

Return type:

OpenAIUserMessage
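
A sketch of creating a message and converting it to the OpenAI chat format; the dictionary shown is the expected shape of the result:

>>> from camel.messages import BaseMessage
>>> msg = BaseMessage.make_user_message(
...     role_name="Stock Trader",
...     content="Please plot the closing prices.",
... )
>>> msg.to_openai_user_message()
{'role': 'user', 'content': 'Please plot the closing prices.'}
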

class camel.messages.FunctionCallingMessage(role_name: str, role_type: RoleType, meta_dict: Dict[str, str] | None, content: str, func_name: str | None = None, args: Dict | None = None, result: Any | None = None)[source]#

Bases: BaseMessage

Class for message objects used specifically for function-related messages.

Parameters:
  • func_name (Optional[str]) – The name of the function used. (default: None)

  • args (Optional[Dict]) – The dictionary of arguments passed to the function. (default: None)

  • result (Optional[Any]) – The result of function execution. (default: None)

args: Dict | None = None#
func_name: str | None = None#
result: Any | None = None#
to_openai_assistant_message() Dict[str, str][source]#

Converts the message to an OpenAIAssistantMessage object.

Returns:

The converted OpenAIAssistantMessage object.

Return type:

OpenAIAssistantMessage

to_openai_function_message() Dict[str, str][source]#

Converts the message to an OpenAIMessage object with the role being “function”.

Returns:

The converted OpenAIMessage object with its role being “function”.

Return type:

OpenAIMessage

to_openai_message(role_at_backend: str) Dict[str, str][source]#

Converts the message to an OpenAIMessage object.

Parameters:

role_at_backend (str) – The role of the message in OpenAI chat system, either "system", "user", or "assistant".

Returns:

The converted OpenAIMessage object.

Return type:

OpenAIMessage
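
A sketch of a function-result message; the field values are illustrative:

>>> from camel.messages import FunctionCallingMessage
>>> from camel.typing import RoleType
>>> msg = FunctionCallingMessage(
...     role_name="assistant",
...     role_type=RoleType.ASSISTANT,
...     meta_dict=None,
...     content="",
...     func_name="add",
...     args={"a": 1, "b": 2},
...     result=3,
... )
>>> func_msg = msg.to_openai_function_message()  # role will be "function"
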

camel.typing module#

class camel.typing.ModelType(value)[source]#

Bases: Enum

An enumeration.

GPT_3_5_TURBO = 'gpt-3.5-turbo'#
GPT_3_5_TURBO_16K = 'gpt-3.5-turbo-16k'#
GPT_4 = 'gpt-4'#
GPT_4_32k = 'gpt-4-32k'#
LLAMA_2 = 'llama-2'#
STUB = 'stub'#
VICUNA = 'vicuna'#
VICUNA_16K = 'vicuna-16k'#
property is_open_source: bool#

Returns whether this model type is open-source.

Returns:

Whether this model type is open-source.

Return type:

bool

property is_openai: bool#

Returns whether this model type is an OpenAI-released model.

Returns:

Whether this model type belongs to OpenAI.

Return type:

bool

property token_limit: int#

Returns the maximum token limit for a given model.

Returns:

The maximum token limit for the given model.

Return type:

int

validate_model_name(model_name: str) bool[source]#

Checks whether the model type matches the model name.

Parameters:

model_name (str) – The name of the model, e.g. “vicuna-7b-v1.5”.

Returns:

Whether the model type matches the model name.

Return type:

bool

property value_for_tiktoken: str#
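
A sketch of the enum helpers; the validate_model_name call mirrors the docstring example above:

>>> from camel.typing import ModelType
>>> ModelType.GPT_4.is_openai
True
>>> ModelType.VICUNA.validate_model_name("vicuna-7b-v1.5")
True
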
class camel.typing.RoleType(value)[source]#

Bases: Enum

An enumeration.

ASSISTANT = 'assistant'#
CRITIC = 'critic'#
DEFAULT = 'default'#
EMBODIMENT = 'embodiment'#
USER = 'user'#
class camel.typing.TaskType(value)[source]#

Bases: Enum

An enumeration.

AI_SOCIETY = 'ai_society'#
CODE = 'code'#
DEFAULT = 'default'#
EVALUATION = 'evaluation'#
MISALIGNMENT = 'misalignment'#
ROLE_DESCRIPTION = 'role_description'#
SOLUTION_EXTRACTION = 'solution_extraction'#
TRANSLATION = 'translation'#

camel.utils module#

class camel.utils.BaseTokenCounter[source]#

Bases: ABC

Base class for token counters of different kinds of models.

abstract count_tokens_from_messages(messages: List[Dict[str, str]]) int[source]#

Count the number of tokens in the provided message list.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

class camel.utils.OpenAITokenCounter(model: ModelType)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[Dict[str, str]]) int[source]#

Count the number of tokens in the provided message list with the help of the tiktoken package.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int
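
A counting sketch using the OpenAI message format described above; the exact count depends on the tokenizer:

>>> from camel.typing import ModelType
>>> from camel.utils import OpenAITokenCounter
>>> counter = OpenAITokenCounter(ModelType.GPT_3_5_TURBO)
>>> n_tokens = counter.count_tokens_from_messages(
...     [{"role": "user", "content": "Hello, world!"}]
... )  # a small positive int
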

class camel.utils.OpenSourceTokenCounter(model_type: ModelType, model_path: str)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[Dict[str, str]]) int[source]#

Count the number of tokens in the provided message list using the loaded tokenizer specific to this type of model.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

class camel.utils.PythonInterpreter(action_space: Dict[str, Any], import_white_list: List[str] | None = None)[source]#

Bases: object

A customized Python interpreter to control the execution of LLM-generated code. The interpreter makes sure the code can only execute functions given in the action space and imports from the import white list. It also supports fuzzy variable matching to receive uncertain input variable names.

This class is adapted from the Hugging Face implementation python_interpreter.py. The original license applies:

Copyright 2023 The HuggingFace Inc. team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.

We have modified the original code to suit our requirements. We have encapsulated the original functions within a class and saved the interpreter state after execution. We have added support for “import” statements, “for” statements, and several binary and unary operators. We have added an import white list to keep import statements safe. Additionally, we have modified the variable matching logic and introduced the fuzz_state for fuzzy matching.

Modifications copyright (C) 2023 CAMEL-AI.org

Parameters:
  • action_space (Dict[str, Any]) – A dictionary that maps action names to their corresponding functions or objects. The interpreter can only execute functions that are either directly listed in this dictionary or are member functions of objects listed in this dictionary. The concept of action_space is derived from EmbodiedAgent, representing the actions that an agent is capable of performing.

  • import_white_list (Optional[List[str]], optional) – A list that stores the Python modules or functions that can be imported in the code. All submodules and functions of the modules listed in this list are importable. Any other import statements will be rejected. The module and its submodule or function name are separated by a period (.). (default: None)

clear_state() None[source]#

Initialize state and fuzz_state

execute(code: str, state: Dict[str, Any] | None = None, fuzz_state: Dict[str, Any] | None = None, keep_state: bool = True) Any[source]#

Execute the input Python code in a secure environment.

Parameters:
  • code (str) – Generated python code to be executed.

  • state (Optional[Dict[str, Any]], optional) – External variables that may be used in the generated code. (default: None)

  • fuzz_state (Optional[Dict[str, Any]], optional) – External variables that do not have certain variable names. The interpreter will use fuzzy matching to access these variables. For example, if fuzz_state has a variable image, the generated code can use input_image to access it. (default: None)

  • keep_state (bool, optional) – If True, state and fuzz_state will be kept for later execution. Otherwise, they will be cleared. (default: True)

Returns:

The value of the last statement (excluding “import”) in the code. For this interpreter, the value of an expression is its value, the value of an “assign” statement is the assigned value, and the value of an “if” or “for” block statement is the value of the last statement in the block.

Return type:

Any
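
A restricted-execution sketch; the action space entry and the white-listed module are illustrative:

>>> from camel.utils import PythonInterpreter
>>> interpreter = PythonInterpreter(
...     action_space={"add": lambda a, b: a + b},  # callables the code may use
...     import_white_list=["math"],                # modules the code may import
... )
>>> interpreter.execute("import math\nmath.sqrt(add(7, 9))")  # expected: 4.0
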

camel.utils.check_server_running(server_url: str) bool[source]#

Check whether the port referred to by the URL of the server is open.

Parameters:

server_url (str) – The URL to the server running LLM inference service.

Returns:

Whether the port is open for packets (server is running).

Return type:

bool

camel.utils.download_tasks(task: TaskType, folder_path: str) None[source]#
camel.utils.get_first_int(string: str) int | None[source]#

Returns the first integer number found in the given string.

If no integer number is found, returns None.

Parameters:

string (str) – The input string.

Returns:

The first integer number found in the string, or None if no integer number is found.

Return type:

int or None
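
An illustrative example:

>>> from camel.utils import get_first_int
>>> get_first_int("Step 3 of 12")
3
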

camel.utils.get_model_encoding(value_for_tiktoken: str)[source]#

Get model encoding from tiktoken.

Parameters:

value_for_tiktoken – Model value for tiktoken.

Returns:

Model encoding.

Return type:

tiktoken.Encoding

camel.utils.get_prompt_template_key_words(template: str) Set[str][source]#

Given a string template containing curly braces {}, return a set of the words inside the braces.

Parameters:

template (str) – A string containing curly braces.

Returns:

A set of the words inside the curly braces.

Return type:

Set[str]

Example

>>> get_prompt_template_key_words('Hi, {name}! How are you {status}?')
{'name', 'status'}
camel.utils.get_task_list(task_response: str) List[str][source]#

Parse the response of the Agent and return the task list.

Parameters:

task_response (str) – The string response of the Agent.

Returns:

A list of the string tasks.

Return type:

List[str]

camel.utils.openai_api_key_required(func: F) F[source]#

Decorator that checks if the OpenAI API key is available in the environment variables.

Parameters:

func (callable) – The function to be wrapped.

Returns:

The decorated function.

Return type:

callable

Raises:

ValueError – If the OpenAI API key is not found in the environment variables.

camel.utils.parse_doc(func: Callable) Dict[str, Any][source]#

Parse the docstrings of a function to extract the function name, description and parameters.

Parameters:

func (Callable) – The function to be parsed.

Returns:

A dictionary with the function’s name, description, and parameters.

Return type:

Dict[str, Any]

camel.utils.print_text_animated(text, delay: float = 0.02, end: str = '')[source]#

Prints the given text with an animated effect.

Parameters:
  • text (str) – The text to print.

  • delay (float, optional) – The delay between each character printed. (default: 0.02)

  • end (str, optional) – The end character to print after each character of text. (default: "")

Module contents#