diff --git a/README.md b/README.md
index 36add8c4c..e8853ffb2 100644
--- a/README.md
+++ b/README.md
@@ -55,6 +55,8 @@ pip install "pyautogen[blendsearch]"
 
 Find more options in [Installation](https://microsoft.github.io/autogen/docs/Installation).
 
+For LLM inference configurations, check the [FAQ](https://microsoft.github.io/autogen/docs/FAQ#set-your-api-endpoints).
+
 ## Quickstart
 
 * Autogen enables the next-gen LLM applications with a generic multi-agent conversation framework. It offers customizable and conversable agents which integrate LLMs, tools and human.
@@ -70,6 +72,8 @@ user_proxy.initiate_chat(assistant, message="Plot a chart of META and TESLA stoc
 The figure below shows an example conversation flow with AutoGen.
 ![Agent Chat Example](https://github.com/microsoft/autogen/blob/main/website/static/img/chat_example.png)
 
+Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AutoGen-AgentChat) for this feature.
+
 * Autogen also helps maximize the utility out of the expensive LLMs such as ChatGPT and GPT-4. It offers a drop-in replacement of `openai.Completion` or `openai.ChatCompletion` with powerful functionalites like tuning, caching, error handling, templating. For example, you can optimize generations by LLM with your own tuning data, success metrics and budgets.
 ```python
 # perform tuning
@@ -86,6 +90,8 @@ config, analysis = autogen.Completion.tune(
 response = autogen.Completion.create(context=test_instance, **config)
 ```
 
+Please find more [code examples](https://microsoft.github.io/autogen/docs/Examples/AutoGen-Inference) for this feature.
+
 ## Documentation
 
 You can find a detailed documentation about AutoGen [here](https://microsoft.github.io/autogen/).
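
For context, the FAQ entry linked in the first hunk ("Set your API endpoints") covers AutoGen's endpoint configuration via a list of model/credential entries. A minimal sketch of such a config list is shown below; the specific model names and the `"sk-..."` key values are placeholders for illustration, and the commented `AssistantAgent` line indicates how an agent would consume it rather than being part of this patch.

```python
# Sketch of the endpoint config list described in the linked FAQ.
# Values here are placeholders; never hard-code real API keys.
config_list = [
    {
        "model": "gpt-4",          # model name to request
        "api_key": "sk-...",       # placeholder credential
    },
    {
        "model": "gpt-3.5-turbo",  # fallback/alternative endpoint entry
        "api_key": "sk-...",
    },
]

# An agent would receive this through its llm_config, e.g.:
# assistant = autogen.AssistantAgent(
#     "assistant", llm_config={"config_list": config_list}
# )
print(len(config_list))  # number of configured endpoint entries
```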