* init value type match
* bump version to 1.0.6
* add a note about flaml version in notebook
* add note about mismatched ITER_HP
* catch SSLError when accessing OpenML data
* catch errors in autovw test
Co-authored-by: Qingyun Wu <qingyun.wu@psu.edu>
* fix checkpoint naming and trial id for non-ray mode; fix a bug when running in test mode; delete all checkpoints in non-ray mode
* finish testing checkpoint naming, checkpoint deletion, ray mode, and max_iter = 1
* add bs restore test
* use default metric when not provided
* update documentation
* remove print
* period
* remove bs restore test
* Update website/docs/Use-Cases/Task-Oriented-AutoML.md
Issue I encountered:
#542: running test_restore.py raised _pickle.UnpicklingError: state is not a dictionary
I observed:
1. numpy version
i. With numpy==1.16.*, np.random.RandomState.__getstate__() returns a tuple, not a dict; _pickle.UnpicklingError occurs.
ii. With numpy>=1.17.0rc1, it returns a dict; _pickle.UnpicklingError does not occur.
iii. With numpy>=1.17.0rc1 and flaml using np_random_generator = np.random.Generator, _pickle.UnpicklingError does not occur.
2. class _BackwardsCompatibleNumpyRng
When I remove _BackwardsCompatibleNumpyRng.__getattr__(), _pickle.UnpicklingError does not occur, regardless of whether numpy is 1.16.* or 1.17.*.
To sum up, I think modifying class _BackwardsCompatibleNumpyRng is not a good choice (it came from ray), and we still need to understand pickle's mechanics better. So I upgraded the numpy version that flaml requires in setup.py: "NumPy>=1.17.0rc1"
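The version-dependent behavior in observation 1 can be sketched as follows. This is a minimal illustration, not flaml or ray code; the type printed for the state depends on which numpy release is installed:

```python
import pickle
import numpy as np

rng = np.random.RandomState(42)

# Per the report above: on numpy==1.16.* this is a tuple; on
# numpy>=1.17.0rc1 it is a dict, which is the shape pickle's default
# __setstate__ machinery expects when restoring an object's state.
state = rng.__getstate__()
print(type(state))

# Within a single numpy version, a plain RandomState round-trips
# through pickle fine; the reported error surfaced only when state
# was routed through the wrapper's __getattr__ delegation.
clone = pickle.loads(pickle.dumps(rng))
assert clone.randint(0, 100) == rng.randint(0, 100)
```

This is why pinning "NumPy>=1.17.0rc1" sidesteps the failure: every supported numpy then produces a dict-shaped state.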
* handle non-flaml scheduler in flaml.tune
* revise time budget
* Update website/docs/Use-Cases/Tune-User-Defined-Function.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update website/docs/Use-Cases/Tune-User-Defined-Function.md
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* Update flaml/tune/tune.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* add docstr
* remove random seed
* StopIteration
* StopIteration format
* format
* Update flaml/tune/tune.py
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* revise docstr
Co-authored-by: Chi Wang <wang.chi@microsoft.com>
* refactoring TransformersEstimator to support default and custom_hp
* handling starting_points not in search space
* address the case of more starting points than max_iter
* fixing upper < lower bug