Improve messaging in documentation (#1050)

* Improve messaging in documentation

* doc

* improve wording in blogpost

---------

Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
Chi Wang authored on 2023-05-24 14:28:52 -07:00, committed by GitHub
parent 9977a7aae1
commit e9fdbc6e02
4 changed files with 12 additions and 14 deletions

@@ -19,10 +19,10 @@
 ## What is FLAML
 FLAML is a lightweight Python library for efficient automation of machine
-learning, including selection of
+learning and AI operations, including selection of
 models, hyperparameters, and other tunable choices of an application (e.g., inference hyperparameters for foundation models, configurations in MLOps/LMOps workflows, pipelines, mathematical/statistical models, algorithms, computing experiments, software configurations).
-* For foundation models like the GPT series, it automates the experimentation and optimization of their inference performance to maximize the effectiveness for downstream applications and minimize the inference cost.
+* For foundation models like the GPT series and AI agents based on them, it automates the experimentation and optimization of their performance to maximize the effectiveness for applications and minimize the inference cost.
 * For common machine learning tasks like classification and regression, it quickly finds quality models for user-provided data with low computational resources.
 * It is easy to customize or extend. Users can find their desired customizability from a smooth range: minimal customization (computational resource budget), medium customization (e.g., scikit-style learner, search space and metric), or full customization (arbitrary training/inference/evaluation code).
 * It supports fast automatic tuning, capable of handling complex constraints/guidance/early stopping. FLAML is powered by a [cost-effective
@@ -76,7 +76,7 @@ config, analysis = oai.Completion.tune(
 ```
 The automated experimentation and optimization can help you maximize the utility out of these expensive models.
-A suite of utilities such as caching and templating are offered to accelerate the experimentation and application development.
+A suite of utilities is offered to accelerate experimentation and application development, such as a low-level inference API with caching, templating, and filtering, and higher-level components like LLM-based coding and interactive agents.
 * With three lines of code, you can start using this economical and fast
 AutoML engine as a [scikit-learn style estimator](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML).
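For reference, the "three lines of code" mentioned in the context above look roughly like the following — a minimal sketch, assuming scikit-learn is available for a toy dataset; the `time_budget` (in seconds) is an illustrative choice, not a default:

```python
# Minimal sketch of the scikit-learn style AutoML usage (toy data;
# the 60-second time budget is an illustrative choice).
from sklearn.datasets import load_iris
from flaml import AutoML

X_train, y_train = load_iris(return_X_y=True)
automl = AutoML()
automl.fit(X_train, y_train, task="classification", time_budget=60)
print(automl.predict(X_train[:5]))  # use the tuned model like any estimator
```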
@@ -118,11 +118,11 @@ You can find a detailed documentation about FLAML [here](https://microsoft.githu
 In addition, you can find:
-- Research around FLAML [here](https://microsoft.github.io/FLAML/docs/Research).
+- [Research](https://microsoft.github.io/FLAML/docs/Research) and [blogposts](https://microsoft.github.io/FLAML/blog) around FLAML.
-- Discord [here](https://discord.gg/Cppx2vSPVP).
+- [Discord](https://discord.gg/Cppx2vSPVP).
-- Contributing guide [here](https://microsoft.github.io/FLAML/docs/Contribute).
+- [Contributing guide](https://microsoft.github.io/FLAML/docs/Contribute).
 - ML.NET documentation and tutorials for [Model Builder](https://learn.microsoft.com/dotnet/machine-learning/tutorials/predict-prices-with-model-builder), [ML.NET CLI](https://learn.microsoft.com/dotnet/machine-learning/tutorials/sentiment-analysis-cli), and [AutoML API](https://learn.microsoft.com/dotnet/machine-learning/how-to-guides/how-to-use-the-automl-api).

@@ -30,7 +30,7 @@ FLAML is designed for easy extensibility and customization, allowing users to ad
 ## Embracing Large Language Models in FLAML v2
 As large language models continue to reshape the AI ecosystem, FLAML is poised to adapt and grow alongside these advancements. Recognizing the importance of large language models, we have recently incorporated an autogen package into FLAML, and are committed to focusing our collective efforts on addressing the unique challenges that arise in LLMOps (Large Language Model Operations).
-In its current iteration, FLAML offers support for model selection and inference parameter tuning for large language models. We are actively working on the development of new features, such as LLM selection, inference hyperparameter tuning for LLM, and agent-based LLM operations, to further expand FLAML's capabilities.
+In its current iteration, FLAML offers support for model selection and inference parameter tuning for large language models. We are actively developing new features, such as a low-level inference API with caching, templating, and filtering, and higher-level components like LLM-based coding and interactive agents, to enable more effective and economical use of LLMs.
 We are eagerly preparing for the launch of FLAML v2, where we will place special emphasis on incorporating and enhancing features specifically tailored for large language models (LLMs), further expanding FLAML's capabilities.
 We invite contributions from anyone interested in this topic and look forward to collaborating with the community as we shape the future of FLAML and LLMOps together.

@@ -3,12 +3,12 @@
 <!-- ### Welcome to FLAML, a Fast Library for Automated Machine Learning & Tuning! -->
 FLAML is a lightweight Python library for efficient automation of machine
-learning, including selection of
+learning and AI operations, including selection of
 models, hyperparameters, and other tunable choices of an application.
 ### Main Features
-* For foundation models like the GPT series, it automates the experimentation and optimization of their inference performance to maximize the effectiveness for downstream applications and minimize the inference cost.
+* For foundation models like the GPT series and AI agents based on them, it automates the experimentation and optimization of their performance to maximize the effectiveness for applications and minimize the inference cost.
 * For common machine learning tasks like classification and regression, it quickly finds quality models for user-provided data with low computational resources.
 * It is easy to customize or extend. Users can find their desired customizability from a smooth range: minimal customization (computational resource budget), medium customization (e.g., scikit-style learner, search space and metric), or full customization (arbitrary training/inference/evaluation code). Users can customize only when and what they need to, and leave the rest to the library.
 * It supports fast and economical automatic tuning, capable of handling large search space with heterogeneous evaluation cost and complex constraints/guidance/early stopping. FLAML is powered by a [cost-effective
@@ -40,7 +40,7 @@ config, analysis = oai.Completion.tune(
 ```
 The automated experimentation and optimization can help you maximize the utility out of these expensive models.
-A suite of utilities such as caching and templating are offered to accelerate the experimentation and application development.
+A suite of utilities is offered to accelerate experimentation and application development, such as a low-level inference API with caching, templating, and filtering, and higher-level components like LLM-based coding and interactive agents.
 #### [Task-oriented AutoML](Use-Cases/task-oriented-automl)
@@ -113,7 +113,7 @@ Then, you can use it just like you use the original `LGMBClassifier`. Your other
 * Understand the use cases for [Auto Generation](Use-Cases/Auto-Generation), [Task-oriented AutoML](Use-Cases/Task-Oriented-Automl), [Tune user-defined function](Use-Cases/Tune-User-Defined-Function) and [Zero-shot AutoML](Use-Cases/Zero-Shot-AutoML).
 * Find code examples under "Examples": from [AutoGen - OpenAI](Examples/AutoGen-OpenAI) to [Tune - PyTorch](Examples/Tune-PyTorch).
-* Learn about [research](Research) around FLAML.
+* Learn about [research](Research) around FLAML and check [blogposts](/blog).
 * Chat on [Discord](https://discord.gg/Cppx2vSPVP).
 If you like our project, please give it a [star](https://github.com/microsoft/FLAML/stargazers) on GitHub. If you are interested in contributing, please read [Contributor's Guide](Contribute).
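The "medium customization (e.g., scikit-style learner, search space and metric)" option mentioned in the feature list above can be sketched as follows. This is a hedged illustration, not FLAML's default behavior: the subclass name, the narrowed search-space bounds, and the learner name `"my_lgbm"` are all illustrative assumptions.

```python
# Sketch of medium customization: a scikit-style learner with a custom
# search space, registered with AutoML (names and bounds are illustrative).
from sklearn.datasets import load_iris
from flaml import AutoML, tune
from flaml.model import LGBMEstimator

class MyLGBM(LGBMEstimator):
    """LGBM learner with a narrowed, custom search space."""
    @classmethod
    def search_space(cls, data_size, **params):
        # FLAML expects {hyperparameter: {"domain": ..., "init_value": ...}}.
        return {
            "n_estimators": {"domain": tune.lograndint(lower=4, upper=512), "init_value": 4},
            "num_leaves": {"domain": tune.lograndint(lower=4, upper=256), "init_value": 4},
        }

X_train, y_train = load_iris(return_X_y=True)
automl = AutoML()
automl.add_learner(learner_name="my_lgbm", learner_class=MyLGBM)
automl.fit(X_train, y_train, task="classification",
           estimator_list=["my_lgbm"], time_budget=60)
```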

@@ -1,7 +1,7 @@
 # Auto Generation
 `flaml.autogen` is a package for automating generation tasks (in preview), featuring:
-* Leveraging [`flaml.tune`](../reference/tune/tune) to find good hyperparameter configurations under budget constraints, such that:
+* Leveraging [`flaml.tune`](../reference/tune/tune) to adapt LLMs to applications, such that:
  - Maximize the utility out of using expensive foundation models.
  - Reduce the inference cost by using cheaper models or configurations which achieve equal or better performance.
 * An enhanced inference API with utilities like API unification, caching, error handling, multi-config inference, context programming etc.
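To ground the "enhanced inference API" bullet, here is a minimal sketch of how caching and templating compose — assuming an OpenAI API key is configured; the model name, cache seed, and prompt template are illustrative choices based on FLAML's documented inference API, not defaults to rely on:

```python
# Hedged sketch of the enhanced inference API (model, seed, and template
# fields are illustrative assumptions).
from flaml import oai

oai.Completion.set_cache(41)  # cache responses on disk, keyed by this seed

# Templating: fields in `context` fill the placeholders in `prompt`.
response = oai.Completion.create(
    context={"problem": "How many positive integers below 100 are prime?"},
    prompt="Solve the following problem: {problem}",
    model="gpt-3.5-turbo",
)
print(oai.Completion.extract_text(response)[0])
```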
@@ -35,8 +35,6 @@ These interactions and trade-offs make it difficult to manually determine the op
 *Do the choices matter? Check this [blog post](/blog/2023/04/21/LLM-tuning-math) for a case study.*
 ## Tune Hyperparameters
 With `flaml.autogen`, the tuning can be performed with the following information:
 1. Validation data.
 1. Evaluation function.
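Putting those two inputs together, a minimal hedged sketch of the tuning call follows; the data records, evaluation function, and budget values are illustrative, while the `oai.Completion.tune` argument names follow FLAML's documented API:

```python
# Hedged sketch of hyperparameter tuning with flaml.autogen
# (records, metric logic, and budgets below are illustrative).
from flaml import oai

# 1. Validation data: a list of dicts; each record's keys are passed
#    to the evaluation function as keyword arguments.
tune_data = [
    {"problem": "What is 3 * 7?", "solution": "21"},
    {"problem": "What is 12 + 30?", "solution": "42"},
]

# 2. Evaluation function: scores the responses for one data instance
#    and returns a dict of metrics.
def eval_func(responses, **instance):
    success = any(instance["solution"] in r for r in responses)
    return {"success": success}

config, analysis = oai.Completion.tune(
    data=tune_data,
    metric="success",
    mode="max",
    eval_func=eval_func,
    inference_budget=0.05,   # average $ per instance at inference time
    optimization_budget=1,   # total $ allowed for the tuning run
    num_samples=20,          # number of configurations to try
    prompt="{problem}",      # prompt template over the data fields
)
```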