mirror of
https://github.com/microsoft/autogen.git
synced 2025-08-03 22:32:20 +00:00

If save_best_model_per_estimator is False and retrain_final is True, unfit the model after evaluation in HPO; retrain if using ray. Update ITER_HP in config after a trial is finished. Change Prophet logging level. Example and notebook updates. Allow settings to be passed to the AutoML constructor instead of requiring a derived class (related: "Are you planning to add multi-output-regression capability to FLAML" #192 and "Is multi-tasking allowed?" #277). Remove model_history. Checkpoint bug fix.

* model_history meaning save_best_model_per_estimator
* ITER_HP
* example update
* prophet logging level
* comment update in forecast notebook
* print format improvement
* allow settings to be passed to AutoML constructor
* checkpoint bug fix
* time limit for autohf regression test
* skip slow test on macos
* cleanup before del
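The commit message mentions that settings can now be passed to the AutoML constructor rather than only to fit() via a derived class. A minimal sketch of that pattern, using a hypothetical stand-in class `DemoAutoML` (not the real flaml.AutoML) to show how constructor settings can serve as defaults that per-call settings override:

```python
# Sketch of the "settings in the constructor" idea from the commit above.
# DemoAutoML is a hypothetical stand-in, not flaml.AutoML itself.
class DemoAutoML:
    def __init__(self, **settings):
        # Settings given at construction become defaults for fit().
        self._settings = dict(settings)

    def fit(self, X_train=None, y_train=None, **settings):
        # Per-call settings override the constructor defaults.
        merged = {**self._settings, **settings}
        return merged


automl = DemoAutoML(task="seq-classification", time_budget=5)
result = automl.fit(max_iter=3, time_budget=10)
# Per-call time_budget (10) overrides the constructor value (5).
```

This avoids subclassing just to fix a set of default settings, while keeping fit() arguments authoritative when both are given.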
38 lines
1009 B
Python
def test_classification_head():
    from flaml import AutoML
    from datasets import load_dataset

    # Load a small slice of the "emotion" dataset for a quick smoke test.
    train_dataset = load_dataset("emotion", split="train[:1%]").to_pandas().iloc[0:10]
    dev_dataset = load_dataset("emotion", split="train[1%:2%]").to_pandas().iloc[0:10]

    custom_sent_keys = ["text"]
    label_key = "label"

    X_train = train_dataset[custom_sent_keys]
    y_train = train_dataset[label_key]

    X_val = dev_dataset[custom_sent_keys]
    y_val = dev_dataset[label_key]

    automl = AutoML()

    automl_settings = {
        "gpu_per_trial": 0,
        "max_iter": 3,
        "time_budget": 5,
        "task": "seq-classification",
        "metric": "accuracy",
    }

    automl_settings["custom_hpo_args"] = {
        "model_path": "google/electra-small-discriminator",
        "output_dir": "test/data/output/",
        "ckpt_per_epoch": 5,
        "fp16": False,
    }

    automl.fit(
        X_train=X_train, y_train=y_train, X_val=X_val, y_val=y_val, **automl_settings
    )