mirror of https://github.com/microsoft/autogen.git (synced 2025-11-14 17:13:29 +00:00)

fix some docstring issues affecting rendering (#1739)

* fix some docstring issues affecting rendering
* Update pydoc-markdown.yml
* undo double backtick
* Update compressible_agent.py

This commit is contained in:
parent 2750391f84
commit d8a204a9a3
@@ -84,10 +84,9 @@ Reply "TERMINATE" in the end when everything is done.
     compress_config (dict or True/False): config for compression before oai_reply. Default to False.
         You should contain the following keys:
         - "mode" (Optional, str, default to "TERMINATE"): Choose from ["COMPRESS", "TERMINATE", "CUSTOMIZED"].
-            "TERMINATE": terminate the conversation ONLY when token count exceeds the max limit of current model.
-                `trigger_count` is NOT used in this mode.
-            "COMPRESS": compress the messages when the token count exceeds the limit.
-            "CUSTOMIZED": pass in a customized function to compress the messages.
+            1. `TERMINATE`: terminate the conversation ONLY when token count exceeds the max limit of current model. `trigger_count` is NOT used in this mode.
+            2. `COMPRESS`: compress the messages when the token count exceeds the limit.
+            3. `CUSTOMIZED`: pass in a customized function to compress the messages.
         - "compress_function" (Optional, callable, default to None): Must be provided when mode is "CUSTOMIZED".
             The function should takes a list of messages and returns a tuple of (is_compress_success: bool, compressed_messages: List[Dict]).
         - "trigger_count" (Optional, float, int, default to 0.7): the threshold to trigger compression.
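The `compress_config` contract documented in this hunk can be sketched as a plain dict. `shrink_to_recent` below is a hypothetical `compress_function`, not part of autogen; it only illustrates the documented signature (a list of messages in, a `(is_compress_success, compressed_messages)` tuple out):

```python
from typing import Dict, List, Tuple


def shrink_to_recent(messages: List[Dict]) -> Tuple[bool, List[Dict]]:
    """Hypothetical compress_function: keep only the two most recent messages."""
    if len(messages) <= 2:
        return False, messages  # nothing worth compressing
    return True, messages[-2:]


# A compress_config dict using the "CUSTOMIZED" mode described in the docstring.
compress_config = {
    "mode": "CUSTOMIZED",                   # one of "COMPRESS", "TERMINATE", "CUSTOMIZED"
    "compress_function": shrink_to_recent,  # required when mode is "CUSTOMIZED"
    "trigger_count": 0.7,                   # compression threshold (unused in TERMINATE mode)
}

ok, compressed = compress_config["compress_function"](
    [{"role": "user", "content": str(i)} for i in range(5)]
)
```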
@@ -29,12 +29,12 @@ class QdrantRetrieveUserProxyAgent(RetrieveUserProxyAgent):
     name (str): name of the agent.
     human_input_mode (str): whether to ask for human inputs every time a message is received.
         Possible values are "ALWAYS", "TERMINATE", "NEVER".
-        (1) When "ALWAYS", the agent prompts for human input every time a message is received.
+        1. When "ALWAYS", the agent prompts for human input every time a message is received.
             Under this mode, the conversation stops when the human input is "exit",
            or when is_termination_msg is True and there is no human input.
-        (2) When "TERMINATE", the agent only prompts for human input only when a termination message is received or
+        2. When "TERMINATE", the agent only prompts for human input only when a termination message is received or
            the number of auto reply reaches the max_consecutive_auto_reply.
-        (3) When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops
+        3. When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops
            when the number of auto reply reaches the max_consecutive_auto_reply or when is_termination_msg is True.
     is_termination_msg (function): a function that takes a message in the form of a dictionary
        and returns a boolean value indicating if this received message is a termination message.
@@ -77,17 +77,17 @@ class RetrieveUserProxyAgent(UserProxyAgent):
         retrieve_config: Optional[Dict] = None,  # config for the retrieve agent
         **kwargs,
     ):
-        """
+        r"""
         Args:
             name (str): name of the agent.
             human_input_mode (str): whether to ask for human inputs every time a message is received.
                 Possible values are "ALWAYS", "TERMINATE", "NEVER".
-                (1) When "ALWAYS", the agent prompts for human input every time a message is received.
+                1. When "ALWAYS", the agent prompts for human input every time a message is received.
                     Under this mode, the conversation stops when the human input is "exit",
                    or when is_termination_msg is True and there is no human input.
-                (2) When "TERMINATE", the agent only prompts for human input only when a termination message is received or
+                2. When "TERMINATE", the agent only prompts for human input only when a termination message is received or
                    the number of auto reply reaches the max_consecutive_auto_reply.
-                (3) When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops
+                3. When "NEVER", the agent will never prompt for human input. Under this mode, the conversation stops
                    when the number of auto reply reaches the max_consecutive_auto_reply or when is_termination_msg is True.
             is_termination_msg (function): a function that takes a message in the form of a dictionary
                and returns a boolean value indicating if this received message is a termination message.
@@ -136,10 +136,11 @@ class RetrieveUserProxyAgent(UserProxyAgent):
             - custom_text_types (Optional, List[str]): a list of file types to be processed. Default is `autogen.retrieve_utils.TEXT_FORMATS`.
                 This only applies to files under the directories in `docs_path`. Explicitly included files and urls will be chunked regardless of their types.
             - recursive (Optional, bool): whether to search documents recursively in the docs_path. Default is True.
-        **kwargs (dict): other kwargs in [UserProxyAgent](../user_proxy_agent#__init__).
+        `**kwargs` (dict): other kwargs in [UserProxyAgent](../user_proxy_agent#__init__).

-        Example of overriding retrieve_docs:
-        If you have set up a customized vector db, and it's not compatible with chromadb, you can easily plug in it with below code.
+        Example:
+
+        Example of overriding retrieve_docs - If you have set up a customized vector db, and it's not compatible with chromadb, you can easily plug in it with below code.
         ```python
         class MyRetrieveUserProxyAgent(RetrieveUserProxyAgent):
             def query_vector_db(
@@ -25,7 +25,7 @@ def consolidate_chat_info(chat_info, uniform_sender=None) -> None:


 def gather_usage_summary(agents: List[Agent]) -> Tuple[Dict[str, any], Dict[str, any]]:
-    """Gather usage summary from all agents.
+    r"""Gather usage summary from all agents.

     Args:
         agents: (list): List of agents.
@@ -33,19 +33,24 @@ def gather_usage_summary(agents: List[Agent]) -> Tuple[Dict[str, any], Dict[str,
     Returns:
         tuple: (total_usage_summary, actual_usage_summary)

-    Example return:
+    Example:
+
+    ```python
     total_usage_summary = {
-        'total_cost': 0.0006090000000000001,
-        'gpt-35-turbo':
-            {
-                'cost': 0.0006090000000000001,
-                'prompt_tokens': 242,
-                'completion_tokens': 123,
-                'total_tokens': 365
+        "total_cost": 0.0006090000000000001,
+        "gpt-35-turbo": {
+            "cost": 0.0006090000000000001,
+            "prompt_tokens": 242,
+            "completion_tokens": 123,
+            "total_tokens": 365
         }
     }
+    ```

+    Note:
+
     `actual_usage_summary` follows the same format.
-    If none of the agents incurred any cost (not having a client), then the total_usage_summary and actual_usage_summary will be {'total_cost': 0}.
+    If none of the agents incurred any cost (not having a client), then the total_usage_summary and actual_usage_summary will be `{'total_cost': 0}`.
     """


 def aggregate_summary(usage_summary: Dict[str, any], agent_summary: Dict[str, any]) -> None:
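The summary format documented in this hunk can be accumulated across agents with a small helper. `merge_usage` is a hypothetical sketch of the kind of aggregation `aggregate_summary` performs, not autogen's implementation:

```python
def merge_usage(total, agent_summary):
    """Hypothetical sketch: fold one agent's usage summary into the running total, in place.

    agent_summary may be None (an agent without a client), which contributes nothing.
    """
    if agent_summary is None:
        return
    total["total_cost"] = total.get("total_cost", 0) + agent_summary.get("total_cost", 0)
    for model, usage in agent_summary.items():
        if model == "total_cost":
            continue
        slot = total.setdefault(
            model, {"cost": 0, "prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0}
        )
        for key in slot:
            slot[key] += usage.get(key, 0)


total_usage_summary = {"total_cost": 0}
merge_usage(
    total_usage_summary,
    {
        "total_cost": 0.0006090000000000001,
        "gpt-35-turbo": {
            "cost": 0.0006090000000000001,
            "prompt_tokens": 242,
            "completion_tokens": 123,
            "total_tokens": 365,
        },
    },
)
merge_usage(total_usage_summary, None)  # an agent without a client adds nothing
```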
autogen/cache/cache.py (vendored): 10 changes
@@ -14,16 +14,6 @@ class Cache:
     Attributes:
         config (Dict[str, Any]): A dictionary containing cache configuration.
         cache: The cache instance created based on the provided configuration.
-
-    Methods:
-        redis(cache_seed=42, redis_url="redis://localhost:6379/0"): Static method to create a Redis cache instance.
-        disk(cache_seed=42, cache_path_root=".cache"): Static method to create a Disk cache instance.
-        __init__(self, config): Initializes the Cache with the given configuration.
-        __enter__(self): Context management entry, returning the cache instance.
-        __exit__(self, exc_type, exc_value, traceback): Context management exit.
-        get(self, key, default=None): Retrieves an item from the cache.
-        set(self, key, value): Sets an item in the cache.
-        close(self): Closes the cache.
     """

     ALLOWED_CONFIG_KEYS = ["cache_seed", "redis_url", "cache_path_root"]
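The protocol listed in the removed Methods block (context management plus `get`/`set`/`close`) can be sketched with a dict-backed stand-in. `DictCache` is hypothetical and is not autogen's `Cache`:

```python
class DictCache:
    """Hypothetical in-memory stand-in mirroring the Cache protocol above."""

    def __init__(self, config):
        self.config = config
        self._store = {}

    def __enter__(self):
        return self  # context entry returns the cache instance

    def __exit__(self, exc_type, exc_value, traceback):
        self.close()  # context exit releases the cache

    def get(self, key, default=None):
        return self._store.get(key, default)

    def set(self, key, value):
        self._store[key] = value

    def close(self):
        self._store.clear()


with DictCache({"cache_seed": 42}) as cache:
    cache.set("prompt", "response")
    hit = cache.get("prompt")
```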
autogen/cache/cache_factory.py (vendored): 12 changes
@@ -28,11 +28,17 @@ class CacheFactory:
     and the provided redis_url.

     Examples:
-    Creating a Redis cache
-    > redis_cache = cache_factory("myseed", "redis://localhost:6379/0")

+    Creating a Redis cache
+
+    ```python
+    redis_cache = cache_factory("myseed", "redis://localhost:6379/0")
+    ```
     Creating a Disk cache
-    > disk_cache = cache_factory("myseed", None)
+
+    ```python
+    disk_cache = cache_factory("myseed", None)
+    ```
     """
     if RedisCache is not None and redis_url is not None:
         return RedisCache(seed, redis_url)
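The dispatch visible at the end of this hunk (Redis when a `redis_url` is provided, disk otherwise) can be sketched with placeholder classes. Both stubs and `cache_factory_sketch` are hypothetical, standing in for autogen's `RedisCache`, `DiskCache`, and `cache_factory`:

```python
class RedisCacheStub:
    def __init__(self, seed, redis_url):
        self.seed, self.redis_url = seed, redis_url


class DiskCacheStub:
    def __init__(self, seed):
        self.seed = seed


def cache_factory_sketch(seed, redis_url=None):
    # Mirror of the documented behavior: prefer Redis when a URL is provided,
    # otherwise fall back to a disk cache.
    if redis_url is not None:
        return RedisCacheStub(seed, redis_url)
    return DiskCacheStub(seed)


redis_cache = cache_factory_sketch("myseed", "redis://localhost:6379/0")
disk_cache = cache_factory_sketch("myseed", None)
```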
@@ -225,7 +225,8 @@ def get_function_schema(f: Callable[..., Any], *, name: Optional[str] = None, de
         TypeError: If the function is not annotated

     Examples:
-        ```
+
+        ```python
         def f(a: Annotated[str, "Parameter a"], b: int = 2, c: Annotated[float, "Parameter c"] = 0.1) -> None:
             pass

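The `Annotated` parameter descriptions in the example above can be read back with stdlib introspection. `describe_params` is a hypothetical helper for illustration, not `get_function_schema` itself:

```python
import inspect
from typing import Annotated, get_args, get_origin, get_type_hints


def f(a: Annotated[str, "Parameter a"], b: int = 2, c: Annotated[float, "Parameter c"] = 0.1) -> None:
    pass


def describe_params(func):
    # Map each parameter name to the description embedded in its Annotated
    # hint, or None when the parameter carries a plain type hint.
    hints = get_type_hints(func, include_extras=True)
    out = {}
    for name in inspect.signature(func).parameters:
        hint = hints.get(name)
        if get_origin(hint) is Annotated:
            out[name] = get_args(hint)[1]
        else:
            out[name] = None
    return out


descriptions = describe_params(f)
```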
@@ -103,7 +103,7 @@ def get_config_list(
         list: A list of configs for OepnAI API calls.

     Example:
-    ```
+    ```python
     # Define a list of API keys
     api_keys = ['key1', 'key2', 'key3']

@@ -292,7 +292,7 @@ def config_list_from_models(
         list: A list of configs for OpenAI API calls, each including model information.

     Example:
-    ```
+    ```python
     # Define the path where the API key files are located
     key_file_path = '/path/to/key/files'

@@ -383,7 +383,7 @@ def filter_config(config_list, filter_dict):
         in `filter_dict`.

     Example:
-    ```
+    ```python
     # Example configuration list with various models and API types
     configs = [
         {'model': 'gpt-3.5-turbo'},
@@ -416,7 +416,6 @@ def filter_config(config_list, filter_dict):

     # The resulting `filtered_configs` will be:
     # [{'model': 'gpt-3.5-turbo', 'tags': ['gpt35_turbo', 'gpt-35-turbo']}]
-
     ```

     Note:
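The matching rule the `filter_config` docstring describes (every filter key must be satisfied, with list-valued fields such as `tags` matching on any overlap) can be sketched as a self-contained re-implementation. `filter_config_sketch` is hypothetical, not autogen's function:

```python
def filter_config_sketch(config_list, filter_dict):
    # Keep configs whose value for every filter key is acceptable;
    # list-valued config fields (e.g. tags) match when any element overlaps.
    def matches(config):
        for key, accepted in filter_dict.items():
            value = config.get(key)
            if isinstance(value, list):
                if not set(value) & set(accepted):
                    return False
            elif value not in accepted:
                return False
        return True

    return [c for c in config_list if matches(c)]


configs = [
    {"model": "gpt-3.5-turbo"},
    {"model": "gpt-4"},
    {"model": "gpt-3.5-turbo", "tags": ["gpt35_turbo", "gpt-35-turbo"]},
]
filtered = filter_config_sketch(configs, {"tags": ["gpt35_turbo"]})
```

This reproduces the docstring's own expected result: only the tagged gpt-3.5-turbo entry survives.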
@@ -467,7 +466,7 @@ def config_list_from_json(
         keys representing field names and values being lists or sets of acceptable values for those fields.

     Example:
-    ```
+    ```python
     # Suppose we have an environment variable 'CONFIG_JSON' with the following content:
     # '[{"model": "gpt-3.5-turbo", "api_type": "azure"}, {"model": "gpt-4"}]'

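Reading a config list out of an environment variable, as the example in this hunk describes, needs only stdlib `json`. `load_config_list` is a hypothetical sketch, not autogen's `config_list_from_json`:

```python
import json
import os

# Set up the environment variable from the docstring example.
os.environ["CONFIG_JSON"] = '[{"model": "gpt-3.5-turbo", "api_type": "azure"}, {"model": "gpt-4"}]'


def load_config_list(env_var):
    # Parse a JSON-encoded list of config dicts from the environment,
    # defaulting to an empty list when the variable is unset.
    raw = os.environ.get(env_var, "[]")
    return json.loads(raw)


config_list = load_config_list("CONFIG_JSON")
```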
@@ -511,7 +510,7 @@ def get_config(
     Constructs a configuration dictionary for a single model with the provided API configurations.

     Example:
-    ```
+    ```python
     config = get_config(
         api_key="sk-abcdef1234567890",
         base_url="https://api.openai.com",
@@ -276,8 +276,10 @@ def create_vector_db_from_dir(
         custom_text_types (Optional, List[str]): a list of file types to be processed. Default is TEXT_FORMATS.
         recursive (Optional, bool): whether to search documents recursively in the dir_path. Default is True.
         extra_docs (Optional, bool): whether to add more documents in the collection. Default is False

     Returns:
-        API: the chromadb client.
+
+        The chromadb client.
     """
     if client is None:
         client = chromadb.PersistentClient(path=db_path)
@@ -353,13 +355,17 @@ def query_vector_db(
         functions, you can pass it here, follow the examples in `https://docs.trychroma.com/embeddings`.

     Returns:
-        QueryResult: the query result. The format is:
+
+        The query result. The format is:
+
+        ```python
         class QueryResult(TypedDict):
             ids: List[IDs]
             embeddings: Optional[List[List[Embedding]]]
             documents: Optional[List[List[Document]]]
             metadatas: Optional[List[List[Metadata]]]
             distances: Optional[List[List[float]]]
+        ```
     """
     if client is None:
         client = chromadb.PersistentClient(path=db_path)
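The `QueryResult` shape shown in the new docstring can be written out as a runnable `TypedDict`. The element type aliases below are simplified stand-ins for chromadb's, used only so the block is self-contained:

```python
from typing import List, Optional, TypedDict

# Simplified stand-ins for chromadb's type aliases (hypothetical).
IDs = List[str]
Embedding = float
Document = str
Metadata = dict


class QueryResult(TypedDict):
    ids: List[IDs]
    embeddings: Optional[List[List[Embedding]]]
    documents: Optional[List[List[Document]]]
    metadatas: Optional[List[List[Metadata]]]
    distances: Optional[List[List[float]]]


# One query with two hits; optional fields may be None.
result: QueryResult = {
    "ids": [["doc-1", "doc-2"]],
    "embeddings": None,
    "documents": [["first chunk", "second chunk"]],
    "metadatas": None,
    "distances": [[0.12, 0.34]],
}
```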
@@ -4,8 +4,7 @@ loaders:
 processors:
   - type: filter
     skip_empty_modules: true
-  - type: smart
-  - type: crossref
+  - type: google
 renderer:
   type: docusaurus
   docs_base_path: docs
@@ -13,4 +12,11 @@ renderer:
   relative_sidebar_path: sidebar.json
   sidebar_top_level_label: Reference
   markdown:
-    escape_html_in_docstring: true
+    escape_html_in_docstring: false
+    descriptive_class_title: false
+    header_level_by_type:
+      Module: 1
+      Class: 2
+      Method: 3
+      Function: 3
+      Variable: 4