Mirror of https://github.com/microsoft/autogen.git, synced 2025-07-26 18:31:36 +00:00
commit f097c20f86 (parent fa5ccea862)

    version update post release v1.2.2 (#1005)
@@ -23,7 +23,7 @@
     "\n",
     "FLAML requires `Python>=3.7`. To run this notebook example, please install flaml with the [openai,blendsearch] option:\n",
     "```bash\n",
-    "pip install flaml[openai,blendsearch]==1.2.1\n",
+    "pip install flaml[openai,blendsearch]==1.2.2\n",
     "```"
    ]
   },
@@ -40,7 +40,7 @@
    },
    "outputs": [],
    "source": [
-    "# %pip install flaml[openai,blendsearch]==1.2.1 datasets"
+    "# %pip install flaml[openai,blendsearch]==1.2.2 datasets"
    ]
   },
   {
@@ -23,7 +23,7 @@
     "\n",
     "FLAML requires `Python>=3.7`. To run this notebook example, please install flaml with the [autogen,blendsearch] option:\n",
     "```bash\n",
-    "pip install flaml[autogen,blendsearch]==1.2.1\n",
+    "pip install flaml[autogen,blendsearch]==1.2.2\n",
     "```"
    ]
   },
@@ -40,7 +40,7 @@
    },
    "outputs": [],
    "source": [
-    "# %pip install flaml[autogen,blendsearch]==1.2.1 datasets"
+    "# %pip install flaml[autogen,blendsearch]==1.2.2 datasets"
    ]
   },
   {
@@ -21,7 +21,7 @@
     "\n",
     "FLAML requires `Python>=3.7`. To run this notebook example, please install flaml with the [autogen] option:\n",
     "```bash\n",
-    "pip install flaml[autogen]==1.2.1\n",
+    "pip install flaml[autogen]==1.2.2\n",
     "```"
    ]
   },
@@ -38,7 +38,7 @@
    },
    "outputs": [],
    "source": [
-    "# %pip install flaml[autogen]==1.2.1 datasets"
+    "# %pip install flaml[autogen]==1.2.2 datasets"
    ]
   },
   {
@@ -21,7 +21,7 @@
     "\n",
     "FLAML requires `Python>=3.7`. To run this notebook example, please install flaml with the [openai] option:\n",
     "```bash\n",
-    "pip install flaml[openai]==1.2.1\n",
+    "pip install flaml[openai]==1.2.2\n",
     "```"
    ]
   },
@@ -38,7 +38,7 @@
    },
    "outputs": [],
    "source": [
-    "# %pip install flaml[openai]==1.2.1 datasets"
+    "# %pip install flaml[openai]==1.2.2 datasets"
    ]
   },
   {
@@ -5,9 +5,9 @@ In this example, we will tune several hyperparameters for the OpenAI's completio

 ### Prerequisites

-Install the [autogen,blendsearch] option. The OpenAI integration is in preview.
+Install the [autogen,blendsearch] option.
 ```bash
-pip install "flaml[autogen,blendsearch]==1.2.1 datasets"
+pip install "flaml[autogen,blendsearch]==1.2.2 datasets"
 ```

 Setup your OpenAI key:
@@ -64,7 +64,9 @@ Before starting tuning, you need to define the metric for the optimization. For
 from functools import partial
 from flaml.autogen.code_utils import eval_function_completions, generate_assertions

-eval_with_generated_assertions = partial(eval_function_completions, assertions=generate_assertions)
+eval_with_generated_assertions = partial(
+    eval_function_completions, assertions=generate_assertions,
+)
 ```

 This function will first generate assertion statements for each problem. Then, it uses the assertions to select the generated responses.
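The hunk above only reflows the `partial(...)` call across lines; behavior is unchanged. A minimal sketch with stand-in functions (`evaluate` and `is_short` are invented for illustration, so flaml is not required) shows that the one-line and reflowed spellings bind keyword arguments identically:

```python
from functools import partial

# Stand-in for eval_function_completions; the real flaml function differs.
def evaluate(responses, assertions=None):
    # Keep only the responses that the assertion function accepts.
    return [r for r in responses if assertions is None or assertions(r)]

def is_short(response):
    return len(response) <= 5

# One-line form, as before the change:
eval_one_line = partial(evaluate, assertions=is_short)

# Reflowed form, as after the change; the trailing comma is legal Python:
eval_reflowed = partial(
    evaluate, assertions=is_short,
)

print(eval_one_line(["ok", "too long!"]))  # → ['ok']
print(eval_reflowed(["ok", "too long!"]))  # → ['ok']
```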
@@ -1,6 +1,6 @@
 # Auto Generation

-`flaml.autogen` is a subpackage for automating generation tasks. It uses [`flaml.tune`](../reference/tune/tune) to find good hyperparameter configurations under budget constraints.
+`flaml.autogen` is a package for automating generation tasks (in preview). It uses [`flaml.tune`](../reference/tune/tune) to find good hyperparameter configurations under budget constraints.
 Such optimization has several benefits:
 * Maximize the utility out of using expensive foundation models.
 * Reduce the inference cost by using cheaper models or configurations which achieve equal or better performance.
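The doc text above describes searching for cheap configurations under a budget constraint. The idea can be sketched without flaml at all; this toy search (all names, costs, and scores are invented, and this is not `flaml.tune`'s actual algorithm or API) tries configurations in order of cost and stops once one meets a quality target:

```python
# Toy budget-constrained configuration search, cheapest first.
# Costs and scores are made up for illustration.
configs = [
    {"model": "cheap-model", "cost": 1, "score": 0.70},
    {"model": "mid-model", "cost": 5, "score": 0.85},
    {"model": "costly-model", "cost": 20, "score": 0.90},
]

def search(configs, target_score, budget):
    spent = 0
    best = None
    for cfg in sorted(configs, key=lambda c: c["cost"]):
        if spent + cfg["cost"] > budget:
            break  # trying this config would exceed the budget
        spent += cfg["cost"]
        if best is None or cfg["score"] > best["score"]:
            best = cfg
        if best["score"] >= target_score:
            break  # good enough; no need to pay for a pricier config
    return best, spent

best, spent = search(configs, target_score=0.8, budget=10)
print(best["model"], spent)  # → mid-model 6
```

Here the cheap model is tried first but misses the 0.8 target, so the search pays for the mid-tier model and stops before ever touching the expensive one, which is the cost-reduction benefit the bullet list describes.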