mirror of https://github.com/microsoft/autogen.git, synced 2025-11-01 02:09:58 +00:00
Custom Model Client docs follow-up (#1545)
* custom model client docs followup
* fix function name in docs
* Update website/docs/Use-Cases/enhanced_inference.md (Co-authored-by: Chi Wang <wang.chi@microsoft.com>)
* Update website/docs/Use-Cases/enhanced_inference.md (Co-authored-by: Chi Wang <wang.chi@microsoft.com>)
* Update website/docs/Use-Cases/enhanced_inference.md (Co-authored-by: Chi Wang <wang.chi@microsoft.com>)
* Update website/docs/Use-Cases/enhanced_inference.md (Co-authored-by: Chi Wang <wang.chi@microsoft.com>)

---------

Co-authored-by: Chi Wang <wang.chi@microsoft.com>
This commit is contained in:
parent d999b452e8, commit b1817ab5f6
@@ -77,6 +77,8 @@ class ModelClient(Protocol):
+            class Message(Protocol):
+                content: Optional[str]
 
             message: Message
 
         choices: List[Choice]
         model: str
@@ -94,7 +94,9 @@
     "    class ModelClientResponseProtocol(Protocol):\n",
     "        class Choice(Protocol):\n",
     "            class Message(Protocol):\n",
-    "                content: str | None\n",
+    "                content: Optional[str]\n",
     "\n",
     "            message: Message\n",
     "\n",
     "        choices: List[Choice]\n",
     "        model: str\n",
@@ -122,7 +122,9 @@ class ModelClient(Protocol):
     class ModelClientResponseProtocol(Protocol):
         class Choice(Protocol):
             class Message(Protocol):
-                content: str | None
+                content: Optional[str]
 
             message: Message
 
         choices: List[Choice]
         model: str
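The three hunks above restate the same response protocol in the blog post, notebook, and docs. As a concrete illustration (not part of this commit), a minimal client conforming to it might look as follows; the method set (`create`, `message_retrieval`, `cost`, `get_usage`) follows the Custom Models blog post, and the response shapes here are deliberately simplified:

```python
from types import SimpleNamespace
from typing import Dict, List

class CustomModelClient:
    """Minimal sketch of a client satisfying autogen's ModelClient protocol."""

    def __init__(self, config: Dict, **kwargs):
        # config is the config_list entry that named this class
        self.model = config["model"]

    def create(self, params: Dict):
        # Must return an object shaped like ModelClientResponseProtocol:
        # response.choices[i].message.content and response.model must exist.
        text = "dummy reply from " + self.model  # stand-in for real inference
        message = SimpleNamespace(content=text)
        choice = SimpleNamespace(message=message)
        return SimpleNamespace(choices=[choice], model=self.model)

    def message_retrieval(self, response) -> List[str]:
        # Extract the text of each choice for the caller.
        return [choice.message.content for choice in response.choices]

    def cost(self, response) -> float:
        return 0.0  # local inference: report zero cost

    def get_usage(self, response) -> Dict:
        # Keys autogen reads for the usage summary; an empty dict skips it.
        return {}
```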
@@ -89,7 +89,10 @@ In version >=1, OpenAI renamed their `api_base` parameter to `base_url`. So for
 
 ### Can I use non-OpenAI models?
 
-Yes. Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.
+Yes. You currently have two options:
+
+- Autogen can work with any API endpoint which complies with OpenAI-compatible RESTful APIs - e.g. serving local LLM via FastChat or LM Studio. Please check https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs for an example.
+- You can supply your own custom model implementation and use it with Autogen. Please check https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models for more information.
 
 ## Handle Rate Limit Error and Timeout Error
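For the first of those two options, a sketch of what the FAQ answer describes: point `OpenAIWrapper` at a local OpenAI-compatible server. The model name, port, and placeholder key below are assumptions, not values from this commit:

```python
from autogen import OpenAIWrapper

# Hypothetical local endpoint served by FastChat or LM Studio.
client = OpenAIWrapper(
    config_list=[
        {
            "model": "chatglm2-6b",                  # placeholder model name
            "base_url": "http://localhost:8000/v1",  # local server address
            "api_key": "NULL",                       # many local servers ignore it
        }
    ]
)
response = client.create(messages=[{"role": "user", "content": "Hi"}])
```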
@@ -107,9 +107,6 @@ The tuned config can be used to perform inference.
 
 ## API unification
 
-<!-- `autogen.Completion.create` is compatible with both `openai.Completion.create` and `openai.ChatCompletion.create`, and both OpenAI API and Azure OpenAI API. So models such as "text-davinci-003", "gpt-3.5-turbo" and "gpt-4" can share a common API.
-When chat models are used and `prompt` is given as the input to `autogen.Completion.create`, the prompt will be automatically converted into `messages` to fit the chat completion API requirement. One advantage is that one can experiment with both chat and non-chat models for the same prompt in a unified API. -->
-
 `autogen.OpenAIWrapper.create()` can be used to create completions for both chat and non-chat models, and both OpenAI API and Azure OpenAI API.
 
 ```python
@@ -133,7 +130,7 @@ print(client.extract_text_or_completion_object(response))
 
 For local LLMs, one can spin up an endpoint using a package like [FastChat](https://github.com/lm-sys/FastChat), and then use the same API to send a request. See [here](/blog/2023/07/14/Local-LLMs) for examples on how to make inference with local LLMs.
 
-<!-- When only working with the chat-based models, `autogen.ChatCompletion` can be used. It also does automatic conversion from prompt to messages, if prompt is provided instead of messages. -->
+For custom model clients, one can register the client with `autogen.OpenAIWrapper.register_model_client` and then use the same API to send a request. See [here](/blog/2024/01/26/Custom-Models) for examples on how to make inference with custom model clients.
 
 ## Usage Summary
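A sketch of the registration flow the new paragraph describes, assuming a `CustomModelClient` class like the one sketched earlier:

```python
from autogen import OpenAIWrapper

client = OpenAIWrapper(
    config_list=[{"model": "microsoft/phi-2", "model_client_cls": "CustomModelClient"}]
)
# The config only names the class; the class object itself is registered at runtime.
client.register_model_client(model_client_cls=CustomModelClient)

response = client.create(messages=[{"role": "user", "content": "2+2="}])
print(client.extract_text_or_completion_object(response))
```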
@@ -166,6 +163,8 @@ Total cost: 0.00027
 * Model 'gpt-3.5-turbo': cost: 0.00027, prompt_tokens: 50, completion_tokens: 100, total_tokens: 150
 ```
 
+Note: if using a custom model client (see [here](/blog/2024/01/26/Custom-Models) for details) and if usage summary is not implemented, then the usage summary will not be available.
+
 ## Caching
 
 API call results are cached locally and reused when the same request is issued.
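The totals quoted above are produced by `OpenAIWrapper.print_usage_summary()`; for a custom client they are fed by its `get_usage()` method, which is why an unimplemented usage summary leaves the report empty. A brief sketch, assuming `config_list` is already defined:

```python
client = OpenAIWrapper(config_list=config_list)
client.create(messages=[{"role": "user", "content": "Tips for learning Python."}])

client.print_usage_summary()  # prints totals like the block quoted above
client.clear_usage_summary()  # resets the counters
```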
@@ -241,13 +240,6 @@ The differences between autogen's `cache_seed` and openai's `seed`:
 
 ### Runtime error
 
-<!-- It is easy to hit error when calling OpenAI APIs, due to connection, rate limit, or timeout. Some of the errors are transient. `autogen.Completion.create` deals with the transient errors and retries automatically. Request timeout, max retry period and retry wait time can be configured via `request_timeout`, `max_retry_period` and `retry_wait_time`.
-
-- `request_timeout` (int): the timeout (in seconds) sent with a single request.
-- `max_retry_period` (int): the total time (in seconds) allowed for retrying failed requests.
-- `retry_wait_time` (int): the time interval to wait (in seconds) before retrying a failed request.
-
-Moreover, -->
 One can pass a list of configurations of different models/endpoints to mitigate the rate limits and other runtime error. For example,
 
 ```python
@@ -268,12 +260,16 @@ client = OpenAIWrapper(
         {
             "model": "llama2-chat-7B",
             "base_url": "http://127.0.0.1:8080",
         },
+        {
+            "model": "microsoft/phi-2",
+            "model_client_cls": "CustomModelClient"
+        }
     ],
 )
 ```
 
-`client.create()` will try querying Azure OpenAI gpt-4, OpenAI gpt-3.5-turbo, and a locally hosted llama2-chat-7B one by one,
+`client.create()` will try querying Azure OpenAI gpt-4, OpenAI gpt-3.5-turbo, a locally hosted llama2-chat-7B, and phi-2 using a custom model client class named `CustomModelClient`, one by one,
 until a valid result is returned. This can speed up the development process where the rate limit is a bottleneck. An error will be raised if the last choice fails. So make sure the last choice in the list has the best availability.
 
 For convenience, we provide a number of utility functions to load config lists.
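As an illustration of those utility functions, `config_list_from_json` loads a config list from an environment variable or a JSON file; the file name below is the conventional one and the filter is optional:

```python
import autogen

# Load from the OAI_CONFIG_LIST environment variable or a file of that name.
config_list = autogen.config_list_from_json(
    env_or_file="OAI_CONFIG_LIST",
    filter_dict={"model": ["gpt-4", "gpt-3.5-turbo"]},  # keep only these models
)
client = autogen.OpenAIWrapper(config_list=config_list)
```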