mirror of
https://github.com/microsoft/autogen.git
synced 2025-07-18 14:32:09 +00:00

# Migration Guide

## Migrating to 0.2
openai v1 is a total rewrite of the library with many breaking changes. For example, inference now requires instantiating a client instead of calling a global class method. Therefore, some changes are required for users of `pyautogen<0.2`.
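The shift from a global class method to a client instance can be sketched abstractly. The `LegacyAPI` and `ClientAPI` classes below are hypothetical stand-ins for illustration only; they are not part of openai or autogen:

```python
# Hypothetical illustration of the API-style change in openai v1:
# v0 exposed module-level class methods; v1 requires a client instance.

class LegacyAPI:
    """v0 style: configuration and calls go through shared class state."""
    api_key = None

    @classmethod
    def create(cls, prompt: str) -> str:
        return f"completion for {prompt!r} using key {cls.api_key}"


class ClientAPI:
    """v1 style: configuration is bound to a client instance."""
    def __init__(self, api_key: str):
        self.api_key = api_key

    def create(self, prompt: str) -> str:
        return f"completion for {prompt!r} using key {self.api_key}"


# v0: mutate global state, then call the class method
LegacyAPI.api_key = "sk-old"
old = LegacyAPI.create("2+2=")

# v1: instantiate a client; two clients can hold different
# configurations (keys, endpoints, timeouts) without interference
client = ClientAPI(api_key="sk-new")
new = client.create("2+2=")
```

One practical consequence of the client-based design is that several differently configured clients can coexist in one process, which the old global-state style made error-prone.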
- `api_base` -> `base_url`, `request_timeout` -> `timeout` in `llm_config` and `config_list`. `max_retry_period` and `retry_wait_time` are deprecated. `max_retries` can be set for each client.
- MathChat is unsupported until it is tested in a future release.
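The key renames above can be applied mechanically to an existing config entry. The `migrate_config` helper below is hypothetical (not part of autogen), and the endpoint value is a placeholder:

```python
# Hypothetical helper applying the pyautogen<0.2 -> 0.2 key renames
# to a single config_list entry. Not part of autogen itself.

RENAMED = {"api_base": "base_url", "request_timeout": "timeout"}
DROPPED = {"max_retry_period", "retry_wait_time"}  # deprecated in 0.2


def migrate_config(entry: dict) -> dict:
    out = {}
    for key, value in entry.items():
        if key in DROPPED:
            continue  # set max_retries on the client instead
        out[RENAMED.get(key, key)] = value
    return out


old_entry = {
    "model": "gpt-4",
    "api_base": "https://example.openai.azure.com",  # placeholder endpoint
    "request_timeout": 60,
    "retry_wait_time": 10,
}
new_entry = migrate_config(old_entry)
# new_entry uses base_url/timeout and drops retry_wait_time
```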
- `autogen.Completion` and `autogen.ChatCompletion` are deprecated. The essential functionalities are moved to `autogen.OpenAIWrapper`:

```python
from autogen import OpenAIWrapper

client = OpenAIWrapper(config_list=config_list)
response = client.create(messages=[{"role": "user", "content": "2+2="}])
print(client.extract_text_or_completion_object(response))
```
- Inference parameter tuning and inference logging features are updated:

```python
import autogen.runtime_logging

# Start logging
autogen.runtime_logging.start()

# Stop logging
autogen.runtime_logging.stop()
```

Check out the Logging documentation and the Logging example notebook to learn more.

Inference parameter tuning can be done via `flaml.tune`.
- `seed` in autogen is renamed to `cache_seed` to accommodate the newly added `seed` param in the openai chat completion api. `use_cache` is removed as a kwarg in `OpenAIWrapper.create()` because it is now decided automatically by `cache_seed`: int | None. The difference between autogen's `cache_seed` and openai's `seed` is:
  - autogen uses a local disk cache to guarantee that exactly the same output is produced for the same input; when the cache is hit, no openai api call is made.
  - openai's `seed` is best-effort deterministic sampling with no guarantee of determinism. When using openai's `seed` with `cache_seed` set to None, an openai api call is made even for the same input, and there is no guarantee of getting exactly the same output.
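The guarantee that `cache_seed` provides can be illustrated with a toy in-memory cache keyed by (seed, input). Everything below is a simplified sketch, not real autogen code; `fake_llm_call` stands in for a nondeterministic openai call:

```python
import random

# Toy illustration of autogen-style caching: responses are cached by
# (cache_seed, input), so a cache hit returns the stored output and
# skips the nondeterministic backend call entirely.

_cache = {}       # stands in for autogen's on-disk cache
calls_made = 0    # counts simulated openai api calls


def fake_llm_call(prompt: str) -> str:
    """Stand-in for an openai api call; nondeterministic, like seed-less sampling."""
    global calls_made
    calls_made += 1
    return f"{prompt} -> {random.random()}"


def create(prompt: str, cache_seed=41) -> str:
    if cache_seed is None:
        return fake_llm_call(prompt)  # cache disabled: always hits the backend
    key = (cache_seed, prompt)
    if key not in _cache:
        _cache[key] = fake_llm_call(prompt)
    return _cache[key]


first = create("2+2=")
second = create("2+2=")            # cache hit: identical output, no backend call
assert first == second and calls_made == 1

create("2+2=", cache_seed=None)    # bypasses the cache: a new call is made
assert calls_made == 2
```

This mirrors the contract described above: with a fixed `cache_seed`, repeated identical inputs return byte-identical outputs without touching the api, while `cache_seed=None` always makes a fresh call whose output may differ.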