zero-shot AutoML in readme (#474)

* zero-shot AutoML in readme

* use pydoc-markdown 4.5.0 to avoid error in 4.6.0
Chi Wang 2022-03-05 11:49:39 -08:00 committed by GitHub
parent 31ac984c4b
commit f0b0cae682
2 changed files with 16 additions and 6 deletions


@@ -28,7 +28,7 @@ jobs:
       - name: pydoc-markdown install
         run: |
           python -m pip install --upgrade pip
-          pip install pydoc-markdown
+          pip install pydoc-markdown==4.5.0
       - name: pydoc-markdown run
         run: |
           pydoc-markdown
@@ -64,7 +64,7 @@ jobs:
       - name: pydoc-markdown install
         run: |
           python -m pip install --upgrade pip
-          pip install pydoc-markdown
+          pip install pydoc-markdown==4.5.0
       - name: pydoc-markdown run
         run: |
           pydoc-markdown


@@ -33,7 +33,7 @@ FLAML requires **Python version >= 3.6**. It can be installed from pip:
 pip install flaml
 ```
-To run the [`notebook example`](https://github.com/microsoft/FLAML/tree/main/notebook),
+To run the [`notebook examples`](https://github.com/microsoft/FLAML/tree/main/notebook),
 install flaml with the [notebook] option:
 ```bash
@@ -43,7 +43,7 @@ pip install flaml[notebook]
 ## Quickstart
 * With three lines of code, you can start using this economical and fast
-AutoML engine as a scikit-learn style estimator.
+AutoML engine as a [scikit-learn style estimator](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML).
 ```python
 from flaml import AutoML
@@ -52,19 +52,29 @@ automl.fit(X_train, y_train, task="classification")
 ```
 * You can restrict the learners and use FLAML as a fast hyperparameter tuning
-tool for XGBoost, LightGBM, Random Forest etc. or a customized learner.
+tool for XGBoost, LightGBM, Random Forest etc. or a [customized learner](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#estimator-and-search-space).
 ```python
 automl.fit(X_train, y_train, task="classification", estimator_list=["lgbm"])
 ```
-* You can also run generic hyperparameter tuning for a custom function.
+* You can also run generic hyperparameter tuning for a [custom function](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function).
 ```python
 from flaml import tune
 tune.run(evaluation_function, config={…}, low_cost_partial_config={…}, time_budget_s=3600)
 ```
+* [Zero-shot AutoML](https://microsoft.github.io/FLAML/docs/Use-Cases/Zero-Shot-AutoML) allows using the existing training API from lightgbm, xgboost etc. while getting the benefit of AutoML in choosing high-performance hyperparameter configurations per task.
+```python
+from flaml.default import LGBMRegressor
+# Use LGBMRegressor in the same way as you use lightgbm.LGBMRegressor.
+estimator = LGBMRegressor()
+# The hyperparameters are automatically set according to the training data.
+estimator.fit(X_train, y_train)
+```
 ## Documentation
 You can find a detailed documentation about FLAML [here](https://microsoft.github.io/FLAML/) where you can find the API documentation, use cases and examples.