# Migration Guide
## Migrating to 0.2
openai v1 is a total rewrite of the library with many breaking changes. For example, inference now requires instantiating a client instead of calling a global class method.
Therefore, some changes are required for users of `pyautogen<0.2`.
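For illustration, the openai v1 client pattern that the changes below build on looks like this (a minimal sketch; the model name is a placeholder):
```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "2+2="}],
)
print(response.choices[0].message.content)
```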
- `api_base` -> `base_url` and `request_timeout` -> `timeout` in `llm_config` and `config_list`. `max_retry_period` and `retry_wait_time` are deprecated; instead, `max_retries` can be set for each client.
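A minimal sketch of a 0.2-style configuration under these renames (the model name and API key placeholder are illustrative):
```python
config_list = [
    {
        "model": "gpt-4",  # placeholder model name
        "api_key": "YOUR_API_KEY",
        "base_url": "https://api.openai.com/v1",  # was `api_base` in pyautogen<0.2
        "max_retries": 5,  # per-client retries replace max_retry_period/retry_wait_time
    }
]
llm_config = {"config_list": config_list, "timeout": 120}  # `timeout` was `request_timeout`
```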
- MathChat is unsupported until it is tested in a future release.
- `autogen.Completion` and `autogen.ChatCompletion` are deprecated. Their essential functionality has moved to `autogen.OpenAIWrapper`:
```python
from autogen import OpenAIWrapper

# Assumes `config_list` is defined, e.g., as shown above or loaded via
# `autogen.config_list_from_json`
client = OpenAIWrapper(config_list=config_list)
response = client.create(messages=[{"role": "user", "content": "2+2="}])
print(client.extract_text_or_completion_object(response))
```
- Inference parameter tuning and inference logging features have been updated:
```python
import autogen.runtime_logging

# Start logging; returns a string session id for this logging session
logging_session_id = autogen.runtime_logging.start()

# ... run your agents or inference calls ...

# Stop logging
autogen.runtime_logging.stop()
```
Check out the [Logging documentation](https://microsoft.github.io/autogen/docs/Use-Cases/enhanced_inference#logging) and the [Logging example notebook](https://github.com/microsoft/autogen/blob/main/notebook/agentchat_logging.ipynb) to learn more.
Inference parameter tuning can be done via [`flaml.tune`](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function).
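A minimal `flaml.tune` sketch with a user-defined objective (the search space and scoring below are illustrative, not autogen-specific):
```python
import flaml.tune

def evaluate(config):
    # In practice, run inference with these parameters and score the results.
    score = 1.0 - abs(config["temperature"] - 0.7)  # toy objective
    return {"score": score}

analysis = flaml.tune.run(
    evaluate,
    config={"temperature": flaml.tune.uniform(0, 2)},
    metric="score",
    mode="max",
    num_samples=10,
)
print(analysis.best_config)
```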
- `seed` in autogen is renamed to `cache_seed` to accommodate the newly added `seed` param in the openai chat completion API. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()`; caching is now decided automatically by `cache_seed` (int | None). The difference between autogen's `cache_seed` and openai's `seed` is that:
  - autogen uses a local disk cache to guarantee that exactly the same output is produced for the same input; when the cache is hit, no openai API call is made.
  - openai's `seed` is best-effort deterministic sampling with no guarantee of determinism. When using openai's `seed` with `cache_seed` set to None, an openai API call is made even for the same input, and there is no guarantee of getting exactly the same output.
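A sketch of the two behaviors, assuming `config_list` as defined above (passing `cache_seed` per `create()` call is shown here; it can also be set in `llm_config`):
```python
from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)

# cache_seed set: responses are cached on local disk; a repeated call with the
# same input hits the cache and makes no openai API call.
response = client.create(
    messages=[{"role": "user", "content": "2+2="}],
    cache_seed=41,
)

# cache_seed=None: local caching is disabled; every call reaches the openai API.
# openai's own `seed` gives only best-effort deterministic sampling.
response = client.create(
    messages=[{"role": "user", "content": "2+2="}],
    cache_seed=None,
    seed=41,
)
```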