Camel Package#

Subpackages#

Submodules#

camel.configs module#

class camel.configs.AnthropicConfig(max_tokens: int = 256, stop_sequences: list[str] | NotGiven = NOT_GIVEN, temperature: float = 1, top_p: float | NotGiven = NOT_GIVEN, top_k: int | NotGiven = NOT_GIVEN, metadata: NotGiven = NOT_GIVEN, stream: bool = False)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Anthropic API.

See: https://docs.anthropic.com/claude/reference/complete_post

Parameters:
  • max_tokens (int, optional) – The maximum number of tokens to generate before stopping. Note that Anthropic models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate. (default: 256)
  • stop_sequences (List[str], optional) – Sequences that will cause the model to stop generating completion text. Anthropic models stop on "\n\nHuman:", and may include additional built-in stop sequences in the future. By providing the stop_sequences parameter, you may include additional strings that will cause the model to stop generating.

  • temperature (float, optional) – Amount of randomness injected into the response. Defaults to 1. Ranges from 0 to 1. Use temp closer to 0 for analytical / multiple choice, and closer to 1 for creative and generative tasks. (default: 1)

  • top_p (float, optional) – Use nucleus sampling. In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both. (default: 0.7)

  • top_k (int, optional) – Only sample from the top K options for each subsequent token. Used to remove “long tail” low probability responses. (default: 5)

  • metadata – An object describing metadata about the request.

  • stream (bool, optional) – Whether to incrementally stream the response using server-sent events. (default: False)

max_tokens: int = 256#
metadata: NotGiven = NOT_GIVEN#
stop_sequences: list[str] | NotGiven = NOT_GIVEN#
stream: bool = False#
temperature: float = 1#
top_k: int | NotGiven = NOT_GIVEN#
top_p: float | NotGiven = NOT_GIVEN#
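
Example (a minimal construction sketch; wiring the config into a CAMEL model backend is not shown, and the parameter values below are illustrative):

>>> from camel.configs import AnthropicConfig
>>> config = AnthropicConfig(max_tokens=512, temperature=0.3)
>>> config.max_tokens
512
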
class camel.configs.BaseConfig[source]#

Bases: ABC

class camel.configs.ChatGPTConfig(temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, frequency_penalty: float = 0.0, logit_bias: dict = <factory>, user: str = '')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the OpenAI API.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")

frequency_penalty: float = 0.0#
logit_bias: dict#
max_tokens: int | NotGiven = NOT_GIVEN#
n: int = 1#
presence_penalty: float = 0.0#
stop: str | Sequence[str] | NotGiven = NOT_GIVEN#
stream: bool = False#
temperature: float = 0.2#
top_p: float = 1.0#
user: str = ''#
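
Example (a minimal construction sketch; the stop sequence and other values below are illustrative, and the resulting config is typically handed to a CAMEL model backend, which is not shown):

>>> from camel.configs import ChatGPTConfig
>>> config = ChatGPTConfig(temperature=0.0, max_tokens=512, stop=["Observation:"])
>>> config.temperature
0.0
>>> config.stop
['Observation:']
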
class camel.configs.FunctionCallingConfig(temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, frequency_penalty: float = 0.0, logit_bias: dict = <factory>, user: str = '', functions: list[dict[str, Any]] = <factory>, function_call: dict[str, str] | str = 'auto')[source]#

Bases: ChatGPTConfig

Defines the parameters for generating chat completions using the OpenAI API with functions included.

Parameters:
  • functions (List[Dict[str, Any]]) – A list of functions the model may generate JSON inputs for.

  • function_call (Union[Dict[str, str], str], optional) – Controls how the model responds to function calls. "none" means the model does not call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. (default: "auto")

classmethod from_openai_function_list(function_list: list[OpenAIFunction], function_call: dict[str, str] | str = 'auto', kwargs: dict[str, Any] | None = None)[source]#

Class method for creating an instance given the function-related arguments.

Parameters:
  • function_list (List[OpenAIFunction]) – The list of function objects to be loaded into this configuration and passed to the model.

  • function_call (Union[Dict[str, str], str], optional) – Controls how the model responds to function calls, as specified in the creator’s documentation.

  • kwargs (Optional[Dict[str, Any]]) – The extra modifications to be made on the original settings defined in ChatGPTConfig.

Returns:

A new instance which loads the given function list into a list of dictionaries and the input function_call argument.

Return type:

FunctionCallingConfig

function_call: dict[str, str] | str = 'auto'#
functions: list[dict[str, Any]]#
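
Example (a minimal sketch that supplies a single JSON-schema function description directly via the functions field; the get_weather schema is illustrative and not part of CAMEL. Wrapping plain Python callables is done instead through from_openai_function_list with OpenAIFunction objects):

>>> from camel.configs import FunctionCallingConfig
>>> weather_schema = {
...     "name": "get_weather",
...     "description": "Look up the current weather for a city.",
...     "parameters": {
...         "type": "object",
...         "properties": {"city": {"type": "string"}},
...         "required": ["city"],
...     },
... }
>>> config = FunctionCallingConfig(functions=[weather_schema], function_call="auto")
>>> config.function_call
'auto'
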
class camel.configs.OpenSourceConfig(model_path: str, server_url: str, api_params: ChatGPTConfig = <factory>)[source]#

Bases: BaseConfig

Defines parameters for setting up open-source models and includes parameters to be passed to the chat completion function of the OpenAI API.

Parameters:
  • model_path (str) – The path to a local folder containing the model files or the model card in HuggingFace hub.

  • server_url (str) – The URL of the server running the model inference, which will be used as the API base of the OpenAI API.

  • api_params (ChatGPTConfig) – An instance of ChatGPTConfig containing the arguments to be passed to the OpenAI API.

api_params: ChatGPTConfig#
model_path: str#
server_url: str#
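
Example (a minimal construction sketch; the model card and server URL below are placeholders for whatever inference server you run):

>>> from camel.configs import ChatGPTConfig, OpenSourceConfig
>>> config = OpenSourceConfig(
...     model_path="meta-llama/Llama-2-7b-chat-hf",
...     server_url="http://localhost:8000/v1",
...     api_params=ChatGPTConfig(temperature=0.5),
... )
>>> config.server_url
'http://localhost:8000/v1'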

camel.generators module#

class camel.generators.AISocietyTaskPromptGenerator(num_tasks: int = 10)[source]#

Bases: object

from_role_files(assistant_role_names_path: str = 'data/ai_society/assistant_roles.txt', user_role_names_path: str = 'data/ai_society/user_roles.txt') Generator[Tuple[str, Tuple[str, str]], None, None][source]#
from_role_generator(role_generator: Generator[Tuple, None, None]) Generator[Tuple[str, Tuple[str, str]], None, None][source]#
class camel.generators.CodeTaskPromptGenerator(num_tasks: int = 50)[source]#

Bases: object

from_role_files(languages_path: str = 'data/code/languages.txt', domains_path: str = 'data/code/domains.txt') Generator[Tuple[TextPrompt, str, str], None, None][source]#
from_role_generator(role_generator: Generator[Tuple, None, None]) Generator[str, None, None][source]#
class camel.generators.RoleNameGenerator(assistant_role_names_path: str = 'data/ai_society/assistant_roles.txt', user_role_names_path: str = 'data/ai_society/user_roles.txt', assistant_role_names: List[str] | None = None, user_role_names: List[str] | None = None)[source]#

Bases: object

from_role_files() Generator[Tuple, None, None][source]#
class camel.generators.SingleTxtGenerator(text_file_path: str)[source]#

Bases: object

from_role_files() Generator[str, None, None][source]#
class camel.generators.SystemMessageGenerator(task_type: TaskType = TaskType.AI_SOCIETY, sys_prompts: Dict[RoleType, str] | None = None, sys_msg_meta_dict_keys: Set[str] | None = None)[source]#

Bases: object

System message generator for agents.

Parameters:
  • task_type (TaskType, optional) – The task type. (default: TaskType.AI_SOCIETY)

  • sys_prompts (Optional[Dict[RoleType, str]], optional) – The prompts of the system messages for each role type. (default: None)

  • sys_msg_meta_dict_keys (Optional[Set[str]], optional) – The set of keys of the meta dictionary used to fill the prompts. (default: None)

from_dict(meta_dict: Dict[str, str], role_tuple: Tuple[str, RoleType] = ('', <RoleType.DEFAULT: 'default'>)) BaseMessage[source]#

Generates a system message from a dictionary.

Parameters:
  • meta_dict (Dict[str, str]) – The dictionary containing the information to generate the system message.

  • role_tuple (Tuple[str, RoleType], optional) – The tuple containing the role name and role type. (default: (“”, RoleType.DEFAULT))

Returns:

The generated system message.

Return type:

BaseMessage

from_dicts(meta_dicts: List[Dict[str, str]], role_tuples: List[Tuple[str, RoleType]]) List[BaseMessage][source]#

Generates a list of system messages from a list of dictionaries.

Parameters:
  • meta_dicts (List[Dict[str, str]]) – A list of dictionaries containing the information to generate the system messages.

  • role_tuples (List[Tuple[str, RoleType]]) – A list of tuples containing the role name and role type for each system message.

Returns:

A list of generated system messages.

Return type:

List[BaseMessage]

Raises:

ValueError – If the number of meta_dicts and role_tuples are different.

validate_meta_dict_keys(meta_dict: Dict[str, str]) None[source]#

Validates the keys of the meta_dict.

Parameters:

meta_dict (Dict[str, str]) – The dictionary to validate.
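
Example (a minimal sketch; the role names, task text, and meta dictionary keys below are illustrative and assume the AI Society prompt template expects assistant_role, user_role, and task):

>>> from camel.generators import SystemMessageGenerator
>>> from camel.types import RoleType, TaskType
>>> gen = SystemMessageGenerator(task_type=TaskType.AI_SOCIETY)
>>> msg = gen.from_dict(
...     meta_dict={
...         "assistant_role": "Python Programmer",
...         "user_role": "Stock Trader",
...         "task": "Develop a trading bot",
...     },
...     role_tuple=("Python Programmer", RoleType.ASSISTANT),
... )
>>> msg.role_type
<RoleType.ASSISTANT: 'assistant'>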

camel.human module#

class camel.human.Human(name: str = 'Kill Switch Engineer', logger_color: Any = '\x1b[35m')[source]#

Bases: object

A class representing a human user.

Parameters:
  • name (str) – The name of the human user. (default: "Kill Switch Engineer").

  • logger_color (Any) – The color of the menu options displayed to the user. (default: Fore.MAGENTA)

name#

The name of the human user.

Type:

str

logger_color#

The color of the menu options displayed to the user.

Type:

Any

input_button#

The text displayed for the input button.

Type:

str

kill_button#

The text displayed for the kill button.

Type:

str

options_dict#

A dictionary containing the options displayed to the user.

Type:

Dict[str, str]

display_options(messages: Sequence[BaseMessage]) None[source]#

Displays the options to the user.

Parameters:

messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

None

get_input() str[source]#

Gets the input from the user.

Returns:

The user’s input.

Return type:

str

parse_input(human_input: str) str[source]#

Parses the user’s input and returns the extracted content as a string.

Parameters:

human_input (str) – The user’s input.

Returns:

A str object representing the user’s input.

Return type:

str

reduce_step(messages: Sequence[BaseMessage]) ChatAgentResponse[source]#

Performs one step of the conversation by displaying options to the user, getting their input, and parsing their choice.

Parameters:

messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A ChatAgentResponse object representing the user’s choice.

Return type:

ChatAgentResponse
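
Example (a minimal sketch; reduce_step and get_input read from standard input, so only construction and option display are shown, and the message content is illustrative):

>>> from camel.human import Human
>>> from camel.messages import BaseMessage
>>> from camel.types import RoleType
>>> human = Human(name="Reviewer")
>>> options = [
...     BaseMessage(role_name="assistant", role_type=RoleType.ASSISTANT,
...                 meta_dict=None, content="Proposed plan A"),
... ]
>>> human.display_options(options)  # prints a numbered menu plus the input and kill options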

camel.messages module#

class camel.messages.BaseMessage(role_name: str, role_type: RoleType, meta_dict: Dict[str, str] | None, content: str, image: Image | None = None, image_detail: Literal['auto', 'low', 'high'] = 'auto')[source]#

Bases: object

Base class for message objects used in CAMEL chat system.

Parameters:
  • role_name (str) – The name of the user or assistant role.

  • role_type (RoleType) – The type of role, either RoleType.ASSISTANT or RoleType.USER.

  • meta_dict (Optional[Dict[str, str]]) – Additional metadata dictionary for the message.

  • content (str) – The content of the message.

content: str#
create_new_instance(content: str) BaseMessage[source]#

Create a new instance of the BaseMessage with updated content.

Parameters:

content (str) – The new content value.

Returns:

The new instance of BaseMessage.

Return type:

BaseMessage

extract_text_and_code_prompts() Tuple[List[TextPrompt], List[CodePrompt]][source]#

Extract text and code prompts from the message content.

Returns:

A tuple containing a list of text prompts and a list of code prompts extracted from the content.

Return type:

Tuple[List[TextPrompt], List[CodePrompt]]

image: Image | None = None#
image_detail: Literal['auto', 'low', 'high'] = 'auto'#
classmethod make_assistant_message(role_name: str, content: str, meta_dict: Dict[str, str] | None = None, image: Image | None = None, image_detail: OpenAIImageDetailType | str = 'auto') BaseMessage[source]#
classmethod make_user_message(role_name: str, content: str, meta_dict: Dict[str, str] | None = None, image: Image | None = None, image_detail: OpenAIImageDetailType | str = 'auto') BaseMessage[source]#
meta_dict: Dict[str, str] | None#
role_name: str#
role_type: RoleType#
to_dict() Dict[source]#

Converts the message to a dictionary.

Returns:

The converted dictionary.

Return type:

dict

to_openai_assistant_message() ChatCompletionAssistantMessageParam[source]#

Converts the message to an OpenAIAssistantMessage object.

Returns:

The converted OpenAIAssistantMessage object.

Return type:

OpenAIAssistantMessage

to_openai_message(role_at_backend: OpenAIBackendRole) ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam[source]#

Converts the message to an OpenAIMessage object.

Parameters:

role_at_backend (OpenAIBackendRole) – The role of the message in OpenAI chat system.

Returns:

The converted OpenAIMessage object.

Return type:

OpenAIMessage

to_openai_system_message() ChatCompletionSystemMessageParam[source]#

Converts the message to an OpenAISystemMessage object.

Returns:

The converted OpenAISystemMessage object.

Return type:

OpenAISystemMessage

to_openai_user_message() ChatCompletionUserMessageParam[source]#

Converts the message to an OpenAIUserMessage object.

Returns:

The converted OpenAIUserMessage object.

Return type:

OpenAIUserMessage
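
Example (a minimal sketch; the role name and content are illustrative):

>>> from camel.messages import BaseMessage
>>> from camel.types import OpenAIBackendRole
>>> msg = BaseMessage.make_user_message(role_name="Stock Trader",
...                                      content="Summarize today's AAPL news.")
>>> openai_msg = msg.to_openai_message(role_at_backend=OpenAIBackendRole.USER)
>>> openai_msg["role"]
'user'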

class camel.messages.FunctionCallingMessage(role_name: str, role_type: RoleType, meta_dict: Dict[str, str] | None, content: str, image: Image | None = None, image_detail: Literal['auto', 'low', 'high'] = 'auto', func_name: str | None = None, args: Dict | None = None, result: Any | None = None)[source]#

Bases: BaseMessage

Class for message objects used specifically for function-related messages.

Parameters:
  • func_name (Optional[str]) – The name of the function used. (default: None)

  • args (Optional[Dict]) – The dictionary of arguments passed to the function. (default: None)

  • result (Optional[Any]) – The result of function execution. (default: None)

args: Dict | None = None#
func_name: str | None = None#
result: Any | None = None#
to_openai_assistant_message() ChatCompletionAssistantMessageParam[source]#

Converts the message to an OpenAIAssistantMessage object.

Returns:

The converted OpenAIAssistantMessage object.

Return type:

OpenAIAssistantMessage

to_openai_function_message() ChatCompletionFunctionMessageParam[source]#

Converts the message to an OpenAIMessage object with the role being “function”.

Returns:

The converted OpenAIMessage object with its role being “function”.

Return type:

OpenAIMessage

to_openai_message(role_at_backend: OpenAIBackendRole) ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam[source]#

Converts the message to an OpenAIMessage object.

Parameters:

role_at_backend (OpenAIBackendRole) – The role of the message in OpenAI chat system.

Returns:

The converted OpenAIMessage object.

Return type:

OpenAIMessage
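
Example (a minimal sketch; the function name, arguments, and result are illustrative):

>>> from camel.messages import FunctionCallingMessage
>>> from camel.types import RoleType
>>> msg = FunctionCallingMessage(
...     role_name="assistant", role_type=RoleType.ASSISTANT, meta_dict=None,
...     content="", func_name="add", args={"a": 1, "b": 2}, result=3)
>>> func_msg = msg.to_openai_function_message()
>>> func_msg["role"]
'function'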

camel.messages.OpenAIAssistantMessage#

alias of ChatCompletionAssistantMessageParam

camel.messages.OpenAISystemMessage#

alias of ChatCompletionSystemMessageParam

camel.messages.OpenAIUserMessage#

alias of ChatCompletionUserMessageParam

camel.types module#

class camel.types.ChatCompletion(**data: Any)[source]#

Bases: BaseModel

choices: List[Choice]#

A list of chat completion choices.

Can be more than one if n is greater than 1.

created: int#

The Unix timestamp (in seconds) of when the chat completion was created.

id: str#

A unique identifier for the chat completion.

model: str#

The model used for the chat completion.

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'defer_build': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'choices': FieldInfo(annotation=List[openai.types.chat.chat_completion.Choice], required=True), 'created': FieldInfo(annotation=int, required=True), 'id': FieldInfo(annotation=str, required=True), 'model': FieldInfo(annotation=str, required=True), 'object': FieldInfo(annotation=Literal['chat.completion'], required=True), 'system_fingerprint': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'usage': FieldInfo(annotation=Union[CompletionUsage, NoneType], required=False, default=None)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

object: typing_extensions.Literal[chat.completion]#

The object type, which is always chat.completion.

system_fingerprint: str | None#

This fingerprint represents the backend configuration that the model runs with.

Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

usage: CompletionUsage | None#

Usage statistics for the completion request.

class camel.types.ChatCompletionAssistantMessageParam[source]#

Bases: TypedDict

content: str | None#

The contents of the assistant message.

Required unless tool_calls or function_call is specified.

function_call: FunctionCall#

Deprecated and replaced by tool_calls.

The name and arguments of a function that should be called, as generated by the model.

name: str#

An optional name for the participant.

Provides the model information to differentiate between participants of the same role.

role: typing_extensions.Required[typing_extensions.Literal[assistant]]#

The role of the messages author, in this case assistant.

tool_calls: Iterable[ChatCompletionMessageToolCallParam]#

The tool calls generated by the model, such as function calls.

class camel.types.ChatCompletionChunk(**data: Any)[source]#

Bases: BaseModel

choices: List[Choice]#

A list of chat completion choices.

Can contain more than one element if n is greater than 1. Can also be empty for the last chunk if you set stream_options: {“include_usage”: true}.

created: int#

The Unix timestamp (in seconds) of when the chat completion was created.

Each chunk has the same timestamp.

id: str#

A unique identifier for the chat completion. Each chunk has the same ID.

model: str#

The model to generate the completion.

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'defer_build': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'choices': FieldInfo(annotation=List[openai.types.chat.chat_completion_chunk.Choice], required=True), 'created': FieldInfo(annotation=int, required=True), 'id': FieldInfo(annotation=str, required=True), 'model': FieldInfo(annotation=str, required=True), 'object': FieldInfo(annotation=Literal['chat.completion.chunk'], required=True), 'system_fingerprint': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'usage': FieldInfo(annotation=Union[CompletionUsage, NoneType], required=False, default=None)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

object: typing_extensions.Literal[chat.completion.chunk]#

The object type, which is always chat.completion.chunk.

system_fingerprint: str | None#

This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

usage: CompletionUsage | None#

An optional field that will only be present when you set stream_options: {“include_usage”: true} in your request. When present, it contains a null value except for the last chunk which contains the token usage statistics for the entire request.

class camel.types.ChatCompletionFunctionMessageParam[source]#

Bases: TypedDict

content: typing_extensions.Required[str | None]#

The contents of the function message.

name: typing_extensions.Required[str]#

The name of the function to call.

role: typing_extensions.Required[typing_extensions.Literal[function]]#

The role of the messages author, in this case function.

class camel.types.ChatCompletionMessage(**data: Any)[source]#

Bases: BaseModel

content: str | None#

The contents of the message.

function_call: FunctionCall | None#

Deprecated and replaced by tool_calls.

The name and arguments of a function that should be called, as generated by the model.

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'defer_build': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'content': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'function_call': FieldInfo(annotation=Union[FunctionCall, NoneType], required=False, default=None), 'role': FieldInfo(annotation=Literal['assistant'], required=True), 'tool_calls': FieldInfo(annotation=Union[List[openai.types.chat.chat_completion_message_tool_call.ChatCompletionMessageToolCall], NoneType], required=False, default=None)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

role: typing_extensions.Literal[assistant]#

The role of the author of this message.

tool_calls: List[ChatCompletionMessageToolCall] | None#

The tool calls generated by the model, such as function calls.

class camel.types.ChatCompletionSystemMessageParam[source]#

Bases: TypedDict

content: typing_extensions.Required[str]#

The contents of the system message.

name: str#

An optional name for the participant.

Provides the model information to differentiate between participants of the same role.

role: typing_extensions.Required[typing_extensions.Literal[system]]#

The role of the messages author, in this case system.

class camel.types.ChatCompletionUserMessageParam[source]#

Bases: TypedDict

content: typing_extensions.Required[str | Iterable[ChatCompletionContentPartTextParam | ChatCompletionContentPartImageParam]]#

The contents of the user message.

name: str#

An optional name for the participant.

Provides the model information to differentiate between participants of the same role.

role: typing_extensions.Required[typing_extensions.Literal[user]]#

The role of the messages author, in this case user.

class camel.types.Choice(**data: Any)[source]#

Bases: BaseModel

finish_reason: typing_extensions.Literal[stop, length, tool_calls, content_filter, function_call]#

The reason the model stopped generating tokens.

This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.

index: int#

The index of the choice in the list of choices.

logprobs: ChoiceLogprobs | None#

Log probability information for the choice.

message: ChatCompletionMessage#

A chat completion message generated by the model.

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'defer_build': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'finish_reason': FieldInfo(annotation=Literal['stop', 'length', 'tool_calls', 'content_filter', 'function_call'], required=True), 'index': FieldInfo(annotation=int, required=True), 'logprobs': FieldInfo(annotation=Union[ChoiceLogprobs, NoneType], required=False, default=None), 'message': FieldInfo(annotation=ChatCompletionMessage, required=True)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

class camel.types.CompletionUsage(**data: Any)[source]#

Bases: BaseModel

completion_tokens: int#

Number of tokens in the generated completion.

model_computed_fields: ClassVar[dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'defer_build': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[dict[str, FieldInfo]] = {'completion_tokens': FieldInfo(annotation=int, required=True), 'prompt_tokens': FieldInfo(annotation=int, required=True), 'total_tokens': FieldInfo(annotation=int, required=True)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo].

This replaces Model.__fields__ from Pydantic V1.

prompt_tokens: int#

Number of tokens in the prompt.

total_tokens: int#

Total number of tokens used in the request (prompt + completion).

class camel.types.EmbeddingModelType(value)[source]#

Bases: Enum

An enumeration.

ADA_1 = 'text-embedding-ada-001'#
ADA_2 = 'text-embedding-ada-002'#
BABBAGE_1 = 'text-embedding-babbage-001'#
CURIE_1 = 'text-embedding-curie-001'#
DAVINCI_1 = 'text-embedding-davinci-001'#
property is_openai: bool#

Returns whether this model type is an OpenAI-released model.

property output_dim: int#
class camel.types.ModelType(value)[source]#

Bases: Enum

An enumeration.

CLAUDE_2_0 = 'claude-2.0'#
CLAUDE_2_1 = 'claude-2.1'#
CLAUDE_3_HAIKU = 'claude-3-haiku-20240307'#
CLAUDE_3_OPUS = 'claude-3-opus-20240229'#
CLAUDE_3_SONNET = 'claude-3-sonnet-20240229'#
CLAUDE_INSTANT_1_2 = 'claude-instant-1.2'#
GPT_3_5_TURBO = 'gpt-3.5-turbo'#
GPT_4 = 'gpt-4'#
GPT_4O = 'gpt-4o'#
GPT_4_32K = 'gpt-4-32k'#
GPT_4_TURBO = 'gpt-4-turbo'#
LLAMA_2 = 'llama-2'#
STUB = 'stub'#
VICUNA = 'vicuna'#
VICUNA_16K = 'vicuna-16k'#
property is_anthropic: bool#

Returns whether this model type is an Anthropic-released model.

Returns:

Whether this model type is an Anthropic model.

Return type:

bool

property is_open_source: bool#

Returns whether this model type is open-source.

property is_openai: bool#

Returns whether this model type is an OpenAI-released model.

property token_limit: int#

Returns the maximum token limit for a given model.

Returns:

The maximum token limit for the given model.

Return type:

int

validate_model_name(model_name: str) bool[source]#

Checks whether the model type and the model name match.

Parameters:

model_name (str) – The name of the model, e.g. “vicuna-7b-v1.5”.

Returns:

Whether the model type matches the model name.

Return type:

bool

property value_for_tiktoken: str#
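
Example (a minimal sketch of querying the enum properties):

>>> from camel.types import ModelType
>>> ModelType.GPT_4.is_openai
True
>>> ModelType.CLAUDE_3_OPUS.is_anthropic
True
>>> isinstance(ModelType.GPT_4.token_limit, int)
True
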
class camel.types.OpenAIBackendRole(value)[source]#

Bases: Enum

An enumeration.

ASSISTANT = 'assistant'#
FUNCTION = 'function'#
SYSTEM = 'system'#
USER = 'user'#
class camel.types.OpenAIImageDetailType(value)[source]#

Bases: Enum

An enumeration.

AUTO = 'auto'#
HIGH = 'high'#
LOW = 'low'#
class camel.types.OpenAIImageType(value)[source]#

Bases: Enum

Image types supported by OpenAI vision model.

GIF = 'gif'#
JPEG = 'jpeg'#
JPG = 'jpg'#
PNG = 'png'#
WEBP = 'webp'#
class camel.types.OpenAPIName(value)[source]#

Bases: Enum

An enumeration.

COURSERA = 'coursera'#
KLARNA = 'klarna'#
SPEAK = 'speak'#
class camel.types.RoleType(value)[source]#

Bases: Enum

An enumeration.

ASSISTANT = 'assistant'#
CRITIC = 'critic'#
DEFAULT = 'default'#
EMBODIMENT = 'embodiment'#
USER = 'user'#
class camel.types.StorageType(value)[source]#

Bases: Enum

An enumeration.

MILVUS = 'milvus'#
QDRANT = 'qdrant'#
class camel.types.TaskType(value)[source]#

Bases: Enum

An enumeration.

AI_SOCIETY = 'ai_society'#
CODE = 'code'#
DEFAULT = 'default'#
EVALUATION = 'evaluation'#
MISALIGNMENT = 'misalignment'#
OBJECT_RECOGNITION = 'object_recognition'#
ROLE_DESCRIPTION = 'role_description'#
SOLUTION_EXTRACTION = 'solution_extraction'#
TRANSLATION = 'translation'#
class camel.types.TerminationMode(value)[source]#

Bases: Enum

An enumeration.

ALL = 'all'#
ANY = 'any'#
class camel.types.VectorDistance(value)[source]#

Bases: Enum

Distance metrics used in a vector database.

COSINE = 'cosine'#

Cosine similarity. https://en.wikipedia.org/wiki/Cosine_similarity

DOT = 'dot'#

Dot product. https://en.wikipedia.org/wiki/Dot_product

EUCLIDEAN = 'euclidean'#

Euclidean distance. https://en.wikipedia.org/wiki/Euclidean_distance

camel.utils module#

class camel.utils.AnthropicTokenCounter(model_type: ModelType)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count the number of tokens in the provided message list using the loaded tokenizer specific to this type of model.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

class camel.utils.BaseTokenCounter[source]#

Bases: ABC

Base class for token counters of different kinds of models.

abstract count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

class camel.utils.OpenAITokenCounter(model: ModelType)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count the number of tokens in the provided message list with the help of the tiktoken package.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int
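
Example (a minimal sketch; the message is written as a plain dict in the OpenAI chat format, which is what OpenAIMessage covers, and tiktoken must be installed):

>>> from camel.types import ModelType
>>> from camel.utils import OpenAITokenCounter
>>> counter = OpenAITokenCounter(ModelType.GPT_3_5_TURBO)
>>> n_tokens = counter.count_tokens_from_messages(
...     [{"role": "user", "content": "Hello, world!"}])
>>> n_tokens > 0
True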

class camel.utils.OpenSourceTokenCounter(model_type: ModelType, model_path: str)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count the number of tokens in the provided message list using the loaded tokenizer specific to this type of model.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

camel.utils.api_key_required(func: F) F[source]#

Decorator that checks if the OpenAI API key is available in the environment variables.

Parameters:

func (callable) – The function to be wrapped.

Returns:

The decorated function.

Return type:

callable

Raises:

ValueError – If the OpenAI API key is not found in the environment variables.

camel.utils.check_server_running(server_url: str) bool[source]#

Check whether the port referred to by the server URL is open.

Parameters:

server_url (str) – The URL to the server running LLM inference service.

Returns:

Whether the port is open for packets (server is running).

Return type:

bool

camel.utils.download_tasks(task: TaskType, folder_path: str) None[source]#

Downloads task-related files from a specified URL and extracts them.

This function downloads a zip file containing tasks based on the specified task type from a predefined URL, saves it to folder_path, and then extracts the contents of the zip file into the same folder. After extraction, the zip file is deleted.

Parameters:
  • task (TaskType) – An enum representing the type of task to download.

  • folder_path (str) – The path of the folder where the zip file will be downloaded and extracted.

camel.utils.get_first_int(string: str) int | None[source]#

Returns the first integer number found in the given string.

If no integer number is found, returns None.

Parameters:

string (str) – The input string.

Returns:

The first integer number found in the string, or None if no integer number is found.

Return type:

int or None
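
Example (illustrative input string):

>>> get_first_int('Task 12: summarize the report')
12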

camel.utils.get_model_encoding(value_for_tiktoken: str)[source]#

Get model encoding from tiktoken.

Parameters:

value_for_tiktoken – Model value for tiktoken.

Returns:

Model encoding.

Return type:

tiktoken.Encoding

camel.utils.get_prompt_template_key_words(template: str) Set[str][source]#

Given a string template containing curly braces {}, return a set of the words inside the braces.

Parameters:

template (str) – A string containing curly braces.

Returns:

A set of the words inside the curly braces.

Return type:

Set[str]

Example

>>> get_prompt_template_key_words('Hi, {name}! How are you {status}?')
{'name', 'status'}
camel.utils.get_system_information()[source]#

Gathers information about the operating system.

Returns:

A dictionary containing various pieces of OS information.

Return type:

dict

camel.utils.get_task_list(task_response: str) List[str][source]#

Parse the response of the Agent and return task list.

Parameters:

task_response (str) – The string response of the Agent.

Returns:

A list of the string tasks.

Return type:

List[str]

camel.utils.print_text_animated(text, delay: float = 0.02, end: str = '')[source]#

Prints the given text with an animated effect.

Parameters:
  • text (str) – The text to print.

  • delay (float, optional) – The delay between each character printed. (default: 0.02)

  • end (str, optional) – The end character to print after each character of text. (default: "")

camel.utils.role_playing_with_function(task_prompt: str = 'Assume now is 2024 in the Gregorian calendar, estimate the current age of University of Oxford and then add 10 more years to this age, and get the current weather of the city where the University is located. And tell me what time zone University of Oxford is in. And use my twitter account infomation to create a tweet. Search basketballcourse from coursera And help me to choose a basketball by klarna.', function_list: List | None = None, model_type=None, chat_turn_limit=10, assistant_role_name: str = 'Searcher', user_role_name: str = 'Professor') None[source]#

Initializes and conducts a RolePlaying session configured with FunctionCallingConfig. The function creates an interactive and dynamic role-play session where the AI Assistant and User engage based on the given task, roles, and available functions. It demonstrates the versatility of AI in handling diverse tasks and user interactions within a structured RolePlaying framework.

Parameters:
  • task_prompt (str) – The initial task or scenario description to start the RolePlaying session. Defaults to a prompt involving the estimation of the University of Oxford’s age and weather information.

  • function_list (list) – A list of functions that the agent can utilize during the session. Defaults to a combination of math, search, and weather functions.

  • model_type (ModelType) – The type of chatbot model used for both the assistant and the user. Defaults to GPT-4 Turbo.

  • chat_turn_limit (int) – The maximum number of turns (exchanges) in the chat session. Defaults to 10.

  • assistant_role_name (str) – The role name assigned to the AI Assistant. Defaults to ‘Searcher’.

  • user_role_name (str) – The role name assigned to the User. Defaults to ‘Professor’.

Returns:

This function does not return any value but prints out the session’s dialogues and outputs.

Return type:

None

camel.utils.to_pascal(snake: str) str[source]#

Convert a snake_case string to PascalCase.

Parameters:

snake (str) – The snake_case string to be converted.

Returns:

The converted PascalCase string.

Return type:

str
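
Example (illustrative input string):

>>> to_pascal('camel_agent')
'CamelAgent'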

Module contents#