use relative url in doc (#620)

* use relative url in doc

* update link
Chi Wang 2022-07-01 13:28:16 -07:00 committed by GitHub
parent beb10273e8
commit 9bf13d66f1
5 changed files with 10 additions and 10 deletions

View File

@@ -23,4 +23,4 @@ ENV DEBIAN_FRONTEND=dialog
RUN pip3 --disable-pip-version-check --no-cache-dir install flaml[test,notebook]
# For docs
RUN npm install --global yarn
-RUN pip install pydoc-markdown
+RUN pip install pydoc-markdown==4.5.0

View File

@@ -623,7 +623,7 @@ class AutoML(BaseEstimator):
seed: int or None, default=None | The random seed for hpo.
n_concurrent_trials: [Experimental] int, default=1 | The number of
concurrent trials. When n_concurrent_trials > 1, flaml performs
-[parallel tuning](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning)
+[parallel tuning](../Use-Cases/Task-Oriented-AutoML#parallel-tuning)
and installation of ray is required: `pip install flaml[ray]`.
keep_search_state: boolean, default=False | Whether to keep data needed
for model search after fit(). By default the state is deleted for
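A minimal sketch of how this argument is used (assuming ray is installed and `X_train`/`y_train` are defined elsewhere; not part of this diff):

```python
from flaml import AutoML

automl = AutoML()
# With n_concurrent_trials > 1, hpo trials are evaluated in parallel via ray.
automl.fit(
    X_train,
    y_train,
    task="classification",
    time_budget=60,
    n_concurrent_trials=4,  # requires `pip install flaml[ray]`
)
```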
@@ -651,7 +651,7 @@ class AutoML(BaseEstimator):
the metrics_to_log dictionary returned by a customized metric function.
The customized metric function shall be provided via the `metric` key word
argument of the fit() function or the automl constructor.
-Find an example in the 4th constraint type in this [doc](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#constraint).
+Find an example in the 4th constraint type in this [doc](../Use-Cases/Task-Oriented-AutoML#constraint).
If `pred_time_limit` is provided as one of keyword arguments to fit() function or
the automl constructor, flaml will automatically (and under the hood)
add it as an additional element in the metric_constraints. Essentially 'pred_time_limit'
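An illustrative sketch of the behavior described above (the budget values are placeholders):

```python
from flaml import AutoML

automl = AutoML()
# pred_time_limit constrains the prediction time per instance (in seconds);
# flaml folds it into metric_constraints under the hood.
automl.fit(X_train, y_train, task="classification", time_budget=60, pred_time_limit=1e-5)
```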
@@ -2203,7 +2203,7 @@ class AutoML(BaseEstimator):
seed: int or None, default=None | The random seed for hpo.
n_concurrent_trials: [Experimental] int, default=1 | The number of
concurrent trials. When n_concurrent_trials > 1, flaml performs
-[parallel tuning](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#parallel-tuning)
+[parallel tuning](../Use-Cases/Task-Oriented-AutoML#parallel-tuning)
and installation of ray is required: `pip install flaml[ray]`.
keep_search_state: boolean, default=False | Whether to keep data needed
for model search after fit(). By default the state is deleted for
@@ -2255,7 +2255,7 @@ class AutoML(BaseEstimator):
fit_kwargs_by_estimator: dict, default=None | The user-specified keyword arguments, grouped by estimator name.
For TransformersEstimator, available fit_kwargs can be found from
-[flaml/nlp/training_args.py:TrainingArgumentsForAuto](https://microsoft.github.io/FLAML/docs/reference/nlp/huggingface/training_args).
+[TrainingArgumentsForAuto](nlp/huggingface/training_args).
e.g.,
```python
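# A hedged, illustrative completion (the original snippet is cut off by the hunk
# boundary): fit keyword arguments grouped by estimator name.
fit_kwargs_by_estimator = {
    "transformer": {
        "output_dir": "test/data/output/",  # placeholder output directory
        "fp16": False,
    }
}
```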

View File

@@ -286,7 +286,7 @@ def run(
resources_per_trial: A dictionary of the hardware resources to allocate
per trial, e.g., `{'cpu': 1}`. It is only valid when using ray backend
(by setting 'use_ray = True'). It shall be used when you need to do
-[parallel tuning](https://microsoft.github.io/FLAML/docs/Use-Cases/Tune-User-Defined-Function#parallel-tuning).
+[parallel tuning](../../Use-Cases/Tune-User-Defined-Function#parallel-tuning).
config_constraints: A list of config constraints to be satisfied.
e.g., ```config_constraints = [(mem_size, '<=', 1024**3)]```
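A rough sketch of how these two arguments fit together in `flaml.tune.run` (assuming ray is installed; `evaluate_config` and `mem_size` are illustrative user-defined functions, not part of this diff):

```python
from flaml import tune

def evaluate_config(config):
    # illustrative objective: returns a dict of metrics for one trial
    return {"score": (config["x"] - 50) ** 2}

def mem_size(config):
    # pretend per-config memory estimate, in bytes
    return config["x"] * 1024**2

analysis = tune.run(
    evaluate_config,
    config={"x": tune.randint(1, 100)},
    metric="score",
    mode="min",
    num_samples=20,
    use_ray=True,                    # resources_per_trial is only valid with the ray backend
    resources_per_trial={"cpu": 1},  # hardware allocated per trial
    config_constraints=[(mem_size, "<=", 1024**3)],
)
```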

View File

@@ -82,7 +82,7 @@ Then:
```console
npm install --global yarn # skip if you use the dev container we provided
-pip install pydoc-markdown # skip if you use the dev container we provided
+pip install pydoc-markdown==4.5.0 # skip if you use the dev container we provided
cd website
yarn install --frozen-lockfile
pydoc-markdown
```

View File

@@ -307,7 +307,7 @@ automl.fit(X_train, y_train, max_iter=100, train_time_limit=1, pred_time_limit=1
4. Constraints on the metrics of the ML model tried in AutoML.
-When users provide a [custom metric function](https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#optimization-metric), which returns a primary optimization metric and a dictionary of additional metrics (typically also about the model) to log, users can also specify constraints on one or more of the metrics in the dictionary of additional metrics.
+When users provide a [custom metric function](#optimization-metric), which returns a primary optimization metric and a dictionary of additional metrics (typically also about the model) to log, users can also specify constraints on one or more of the metrics in the dictionary of additional metrics.
Users need to provide a list of such constraints in the following format:
Each element in this list is a 3-tuple, which shall be expressed
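A hedged example of such a constraint list (the metric name `val_loss` is assumed to be a key in the `metrics_to_log` dictionary returned by the user's custom metric function, and `custom_metric` is assumed to be defined as described above):

```python
# each element is a 3-tuple: (metric name, inequality sign, constraint value)
metric_constraints = [("val_loss", "<=", 0.1)]

automl.fit(
    X_train,
    y_train,
    metric=custom_metric,  # the custom metric function described above
    metric_constraints=metric_constraints,
    time_budget=60,
)
```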
@@ -395,7 +395,7 @@ automl2 = AutoML()
automl2.fit(X_train, y_train, time_budget=7200, starting_points=automl1.best_config_per_estimator)
```
-`starting_points` is a dictionary or a str to specify the starting hyperparameter config. (1) When it is a dictionary, the keys are the estimator names. If you do not need to specify starting points for an estimator, exclude its name from the dictionary. The value for each key can be either a dictionary or a list of dictionaries, corresponding to one hyperparameter configuration or multiple hyperparameter configurations, respectively. (2) When it is a str: if "data", use data-dependent defaults; if "data:path", use data-dependent defaults which are stored at path; if "static", use data-independent defaults. Please find more details about data-dependent defaults in [zero shot AutoML](https://microsoft.github.io/FLAML/docs/Use-Cases/Zero-Shot-AutoML#combine-zero-shot-automl-and-hyperparameter-tuning).
+`starting_points` is a dictionary or a str to specify the starting hyperparameter config. (1) When it is a dictionary, the keys are the estimator names. If you do not need to specify starting points for an estimator, exclude its name from the dictionary. The value for each key can be either a dictionary or a list of dictionaries, corresponding to one hyperparameter configuration or multiple hyperparameter configurations, respectively. (2) When it is a str: if "data", use data-dependent defaults; if "data:path", use data-dependent defaults which are stored at path; if "static", use data-independent defaults. Please find more details about data-dependent defaults in [zero shot AutoML](Zero-Shot-AutoML#combine-zero-shot-automl-and-hyperparameter-tuning).
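For instance, a minimal sketch of the string form (the values follow the description above):

```python
automl = AutoML()
# "data" asks flaml for data-dependent default configurations as the starting points
automl.fit(X_train, y_train, task="classification", time_budget=600, starting_points="data")
```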
### Log the trials
@@ -421,7 +421,7 @@ with mlflow.start_run():
### Extra fit arguments
-Extra fit arguments that are needed by the estimators can be passed to `AutoML.fit()`. For example, if there is a weight associated with each training example, they can be passed via `sample_weight`. For another example, `period` can be passed for time series forecaster. For any extra keyword argument passed to `AutoML.fit()` which has not been explicitly listed in the function signature, it will be passed to the underlying estimators' `fit()` as is. For another example, you can set the number of gpus used by each trial with the `gpu_per_trial` argument, which is only used by TransformersEstimator and XGBoostSklearnEstimator.
+Extra fit arguments that are needed by the estimators can be passed to `AutoML.fit()`. For example, if there is a weight associated with each training example, they can be passed via `sample_weight`. For another example, `period` can be passed for time series forecaster. For any extra keyword argument passed to `AutoML.fit()` which has not been explicitly listed in the function signature, it will be passed to the underlying estimators' `fit()` as is. For another example, you can set the number of gpus used by each trial with the `gpu_per_trial` argument, which is only used by TransformersEstimator and XGBoostSklearnEstimator.
In addition, you can specify the different arguments needed by different estimators using the `fit_kwargs_by_estimator` argument. For example, you can set the custom arguments for a Transformers model: