diff --git a/.github/workflows/deploy-website.yml b/.github/workflows/deploy-website.yml
index 4ca9d8f9d..3d816d98f 100644
--- a/.github/workflows/deploy-website.yml
+++ b/.github/workflows/deploy-website.yml
@@ -28,7 +28,7 @@ jobs:
       - name: pydoc-markdown install
         run: |
           python -m pip install --upgrade pip
-          pip install pydoc-markdown
+          pip install pydoc-markdown==4.5.0
       - name: pydoc-markdown run
         run: |
           pydoc-markdown
@@ -64,7 +64,7 @@ jobs:
       - name: pydoc-markdown install
         run: |
           python -m pip install --upgrade pip
-          pip install pydoc-markdown
+          pip install pydoc-markdown==4.5.0
       - name: pydoc-markdown run
         run: |
           pydoc-markdown
diff --git a/README.md b/README.md
index 6408ca2cb..ad2294aa6 100644
--- a/README.md
+++ b/README.md
@@ -33,7 +33,7 @@ FLAML requires **Python version >= 3.6**. It can be installed from pip:
 pip install flaml
 ```
 
-To run the [`notebook example`](https://github.com/microsoft/FLAML/tree/main/notebook),
+To run the [`notebook examples`](https://github.com/microsoft/FLAML/tree/main/notebook),
 install flaml with the [notebook] option:
 
 ```bash
@@ -43,7 +43,7 @@ pip install flaml[notebook]
 ```
 
 ## Quickstart
 * With three lines of code, you can start using this economical and fast
-AutoML engine as a scikit-learn style estimator.
+AutoML engine as a [scikit-learn style estimator](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML).
 ```python
 from flaml import AutoML
@@ -52,19 +52,29 @@ automl.fit(X_train, y_train, task="classification")
 ```
 
 * You can restrict the learners and use FLAML as a fast hyperparameter tuning
-tool for XGBoost, LightGBM, Random Forest etc. or a customized learner.
+tool for XGBoost, LightGBM, Random Forest etc. or a [customized learner](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#estimator-and-search-space).
 
 ```python
 automl.fit(X_train, y_train, task="classification", estimator_list=["lgbm"])
 ```
 
-* You can also run generic hyperparameter tuning for a custom function.
+* You can also run generic hyperparameter tuning for a [custom function](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function).
 
 ```python
 from flaml import tune
 tune.run(evaluation_function, config={…}, low_cost_partial_config={…}, time_budget_s=3600)
 ```
 
+* [Zero-shot AutoML](https://microsoft.github.io/FLAML/docs/Use-Cases/Zero-Shot-AutoML) allows using the existing training API from lightgbm, xgboost etc. while getting the benefit of AutoML in choosing high-performance hyperparameter configurations per task.
+
+```python
+from flaml.default import LGBMRegressor
+# Use LGBMRegressor in the same way as you use lightgbm.LGBMRegressor.
+estimator = LGBMRegressor()
+# The hyperparameters are automatically set according to the training data.
+estimator.fit(X_train, y_train)
+```
+
 ## Documentation
 
 You can find a detailed documentation about FLAML [here](https://microsoft.github.io/FLAML/) where you can find the API documentation, use cases and examples.