# AutoML for NLP
This directory contains utility functions used by AutoNLP. Currently, we support four NLP tasks: sequence classification, sequence regression, multiple choice, and summarization.
Please refer to this link for examples.
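For orientation, below is a minimal sketch of how one of these tasks might be launched through flaml's `AutoML` interface. The toy data, column names, time budget, and metric are illustrative assumptions, and the exact task strings and fit arguments can differ across flaml versions; consult the linked examples for authoritative usage.

```python
from flaml import AutoML
import pandas as pd

# Toy training data; in practice X_train is a DataFrame whose column(s)
# hold the input text and y_train holds the labels.
train_df = pd.DataFrame(
    {
        "sentence": ["the movie was great", "the plot made no sense"],
        "label": [1, 0],
    }
)
X_train = train_df[["sentence"]]
y_train = train_df["label"]

automl = AutoML()
automl.fit(
    X_train=X_train,
    y_train=y_train,
    task="seq-classification",  # other tasks: "seq-regression",
                                # "multichoice-classification", "summarization"
    time_budget=300,            # tuning budget in seconds (illustrative value)
    metric="accuracy",
)
print(automl.best_config)
```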
## Troubleshooting fine-tuning HPO for pre-trained language models
Frequent updates to the transformers library can cause fluctuations in tuning results. To help users quickly troubleshoot AutoNLP results when a tuning failure occurs (e.g., previously obtained results cannot be reproduced), we provide a Jupyter notebook for troubleshooting.
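Since such fluctuations often trace back to a dependency upgrade, one simple habit (an illustrative suggestion, not a step prescribed by the notebook or the paper) is to log the exact library versions next to each tuning result:

```python
# Record the library versions alongside each tuning run so that a result
# which later fails to reproduce can be traced to a dependency update.
import flaml
import transformers

print("flaml:", flaml.__version__)
print("transformers:", transformers.__version__)
```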
Our findings from troubleshooting the fine-tuning of the Electra and RoBERTa models on the GLUE benchmark are described in the following paper, published at ACL-IJCNLP 2021:
- An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models. Xueqing Liu, Chi Wang. ACL-IJCNLP 2021.
```bibtex
@inproceedings{liu2021hpo,
    title={An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models},
    author={Xueqing Liu and Chi Wang},
    year={2021},
    booktitle={ACL-IJCNLP},
}
```