cleanup
This commit is contained in: parent e549fc4f80, commit 0dff1237c1
@@ -55,6 +55,8 @@ pip install "pyautogen[blendsearch]"
Find more options in [Installation](https://microsoft.github.io/autogen/docs/Installation).
<!-- Each of the [`notebook examples`](https://github.com/microsoft/autogen/tree/main/notebook) may require a specific option to be installed. -->

For LLM inference configurations, check the [FAQ](https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints).

## Quickstart

* AutoGen enables next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents that integrate LLMs, tools, and humans.
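As a hedged illustration of the two-agent pattern this bullet describes (the hunk below shows the `user_proxy.initiate_chat` call from the README's quickstart), a minimal sketch might look like the following; any LLM endpoint settings are assumed to be configured separately and are not part of this diff:

```python
# Minimal sketch of the two-agent quickstart pattern (illustrative, not part of this diff).
from autogen import AssistantAgent, UserProxyAgent

# An LLM-backed assistant agent; in practice an `llm_config` with your API
# endpoint settings would be passed here (omitted as an assumption).
assistant = AssistantAgent("assistant")

# A proxy agent that acts on behalf of the human user.
user_proxy = UserProxyAgent("user_proxy")

# Start an automated conversation between the two agents on a task.
user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stock price change YTD.")
```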
@@ -70,6 +72,8 @@ user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stoc
The figure below shows an example conversation flow with AutoGen.

![Agent Chat Example](https://github.com/microsoft/autogen/blob/main/website/static/img/chat_example.png)

Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AutoGen-AgentChat) for this feature.

* AutoGen also helps maximize the utility of expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement for `openai.Completion` or `openai.ChatCompletion` with powerful functionalities like tuning, caching, error handling, and templating. For example, you can optimize generations by LLM with your own tuning data, success metrics, and budgets.
```python
# perform tuning
@@ -86,6 +90,8 @@ config, analysis = autogen.Completion.tune(
response = autogen.Completion.create(context=test_instance, **config)
```

Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AutoGen-Inference) for this feature.
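To make the tune-then-infer flow easier to follow (the diff elides the middle of the snippet above), here is a hedged end-to-end sketch; the tuning data, evaluation function, prompt template, and budget values are illustrative assumptions, not content from this diff:

```python
# Hedged sketch of the tuning flow; all concrete values below are placeholders.
import autogen

# Hypothetical tuning instances; each provides the fields used by the prompt template.
tune_data = [{"problem": "1 + 1 = ?", "solution": "2"}]

def eval_func(responses, **data):
    # Hypothetical success metric: does any response contain the reference solution?
    return {"success": int(any(data["solution"] in (r or "") for r in responses))}

# Search for a cost-effective inference configuration under the given budgets
# (mirrors the `autogen.Completion.tune(` line in the hunk above; the
# argument values here are assumptions).
config, analysis = autogen.Completion.tune(
    data=tune_data,
    metric="success",
    mode="max",
    eval_func=eval_func,
    prompt="{problem}",        # assumed prompt template over the instance fields
    inference_budget=0.05,     # assumed per-instance inference budget ($)
    optimization_budget=1,     # assumed total tuning budget ($)
    num_samples=-1,
)

# Apply the tuned configuration to a new instance, as in the diff above.
test_instance = {"problem": "2 + 2 = ?"}
response = autogen.Completion.create(context=test_instance, **config)
```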

## Documentation

You can find detailed documentation about AutoGen [here](https://microsoft.github.io/autogen/).