| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| 呆萌闷油瓶 | 790543131a | chore:add some new api version for azure openai (#5142) | 2024-06-13 16:30:47 +08:00 |
| yanghx | adc948e87c | fix(api/core/model_runtime/model_providers/baichuan,localai): Parse ToolPromptMessage. #4943 (#5138) Co-authored-by: -LAN- <laipz8200@outlook.com> | 2024-06-13 13:08:30 +08:00 |
| orangeclk | 79e8489942 | feat: support siliconflow (#5129) | 2024-06-13 12:59:41 +08:00 |
| xielong | ea69dc2a7e | feat: support hunyuan llm models (#5013) Co-authored-by: takatost <takatost@users.noreply.github.com> Co-authored-by: Bowen Liang <bowenliang@apache.org> | 2024-06-12 17:24:23 +08:00 |
| Pika | ecc7f130b4 | fix(typo): misspelling (#5094) | 2024-06-12 17:01:21 +08:00 |
| sino | 0ce97e6315 | feat: support doubao llm function calling (#5100) | 2024-06-12 15:43:50 +08:00 |
| rerorero | 28997772a5 | fix: remote_url doesn't work for gemini (#5090) | 2024-06-12 13:14:53 +08:00 |
| orangeclk | 2050a8b8f0 | feat: add glm4 new models and zhipu embedding-2 (#5089) | 2024-06-12 08:22:17 +08:00 |
| sino | 5f870ac950 | chore: update maas model provider description (#5056) | 2024-06-11 11:22:22 +08:00 |
| Jaxon Ley | 2573b138bf | fix: update presence_penalty configuration for wenxin AI ernie-4.0-8k and ernie-3.5-8k models (#5039) | 2024-06-09 14:44:11 +08:00 |
| takatost | 3929d289e0 | feat: set default memory messages limit to infinite (#5002) | 2024-06-06 17:39:44 +08:00 |
| Joe | 5cdb95be1f | fix: gemini timeout error (#4955) | 2024-06-06 10:19:03 +08:00 |
| Bowen Liang | f32b440c4a | chore: fix indention violations by applying E111 to E117 ruff rules (#4925) | 2024-06-05 14:05:15 +08:00 |
| takatost | f44d1e62d2 | fix: bedrock get_num_tokens prompt_messages parameter name err (#4932) | 2024-06-05 01:53:05 +08:00 |
| takatost | d1dbbc1e33 | feat: backend model load balancing support (#4927) | 2024-06-05 00:13:04 +08:00 |
| Pan, Wen-Ming | b98a1a3303 | feat: added Anthropic Claude3 models to Google Cloud Vertex AI (#4870) Co-authored-by: pwm <pwm@google.com> | 2024-06-04 02:52:46 +08:00 |
| takatost | 696c5308a9 | chore: optimize nvidia nim credential schema and info (#4898) | 2024-06-04 02:26:26 +08:00 |
| Joshua | 3c8a120e51 | add-nvidia-mim (#4882) | 2024-06-03 21:10:18 +08:00 |
| Pan, Wen-Ming | cdbc260571 | Bugfix: Vertex AI vision model not support image (#4853) | 2024-06-02 11:11:09 +08:00 |
| Yash Parmar | e0da0744b5 | add: ollama keep alive parameter added. issue #4024 (#4655) | 2024-05-31 12:22:02 +08:00 |
| Weaxs | b189faca52 | feat: update ernie model (#4756) | 2024-05-29 14:57:23 +08:00 |
| xielong | e1cd9aef8f | feat: support baichuan3 turbo, baichuan3 turbo 128k, and baichuan4 (#4762) | 2024-05-29 14:46:04 +08:00 |
| crazywoola | 705a6e3a8e | Fix/4742 ollama num gpu option not consistent with allowed values (#4751) | 2024-05-29 13:33:35 +08:00 |
| xielong | 793f0c1dd6 | fix: Corrected schema link in model_runtime's README.md (#4757) | 2024-05-29 13:03:21 +08:00 |
| xielong | 88b4d69278 | fix: Correct context size for banchuan2-53b and banchuan2-turbo (#4721) | 2024-05-28 16:37:44 +08:00 |
| crazywoola | 27dae156db | fix: colon in file mistral.mistral-small-2402-v1:0 (#4673) | 2024-05-27 13:15:20 +08:00 |
| Giovanny Gutiérrez | 2deb23e00e | fix: Show rerank in system for localai (#4652) | 2024-05-27 12:09:51 +08:00 |
| longzhihun | fe9bf5fc4a | [seanguo] add support of amazon titan v2 and modify the price of amazon titan v1 (#4643) Co-authored-by: Chenhe Gu <guchenhe@gmail.com> | 2024-05-26 23:30:22 +08:00 |
| miendinh | f804adbff3 | feat: Support for Vertex AI - load Default Application Configuration (#4641) Co-authored-by: miendinh <miendinh@users.noreply.github.com> Co-authored-by: crazywoola <427733928@qq.com> | 2024-05-25 13:40:25 +08:00 |
| Krasus.Chen | f156014daa | update lite8k/speed8k/128k max_token to newest (#4636) Co-authored-by: Your Name <chen@krasus.red> | 2024-05-24 19:33:42 +08:00 |
| Bowen Liang | 3fda2245a4 | improve: extract method for safe loading yaml file and avoid using PyYaml's FullLoader (#4031) | 2024-05-24 12:08:12 +08:00 |
| Patryk Garstecki | 296887754f | Support for Vertex AI (#4586) | 2024-05-24 12:01:40 +08:00 |
| QuietRocket | 9ae72cdcf4 | feat: Add Gemini Flash (#4616) | 2024-05-24 11:43:06 +08:00 |
| takatost | 11642192d1 | chore: add https://api.openai.com placeholder in OpenAI api base (#4604) | 2024-05-23 12:56:05 +08:00 |
| 呆萌闷油瓶 | e57bdd4e58 | chore:update gpt-3.5-turbo and gpt-4-turbo parameter for azure (#4596) | 2024-05-23 11:51:38 +08:00 |
| somethingwentwell | 461488e9bf | Add Azure OpenAI API version for GPT4o support (#4569) Co-authored-by: wwwc <wwwc@outlook.com> | 2024-05-22 17:43:16 +08:00 |
| Justin Wu | 3ab19be9ea | Fix bedrock claude wrong pricing (#4572) Co-authored-by: Justin Wu <justin.wu@ringcentral.com> | 2024-05-22 14:28:28 +08:00 |
| 呆萌闷油瓶 | d5a33a0323 | feat:add gpt-4o for azure (#4568) | 2024-05-22 11:02:43 +08:00 |
| Bowen Liang | e8e213ad1e | chore: apply and fix flake8-bugbear lint rules (#4496) | 2024-05-20 16:34:13 +08:00 |
| Ever | 4086f5051c | feat:Provide parameter config for mask_sensitive_info of MiniMax mode… (#4294) Co-authored-by: 老潮 <zhangyongsheng@3vjia.com> Co-authored-by: takatost <takatost@users.noreply.github.com> Co-authored-by: takatost <takatost@gmail.com> | 2024-05-20 10:15:27 +08:00 |
| fanghongtai | 1cca100a48 | fix:modify spelling errors: lanuage ->language in schema.md (#4499) Co-authored-by: wxfanghongtai <wxfanghongtai@gf.com.cn> | 2024-05-19 18:31:05 +08:00 |
| Bowen Liang | 04ad46dd31 | chore: skip unnecessary key checks prior to accessing a dictionary (#4497) | 2024-05-19 18:30:45 +08:00 |
| Yeuoly | 091fba74cb | enhance: claude stream tool call (#4469) | 2024-05-17 12:43:58 +08:00 |
| jiaqianjing | 0ac5d621b6 | add llm: ernie-character-8k of wenxin (#4448) | 2024-05-16 18:31:07 +08:00 |
| sino | 6e9066ebf4 | feat: support doubao llm and embeding models (#4431) | 2024-05-16 11:41:24 +08:00 |
| Yash Parmar | 332baca538 | FIX: fix the temperature value of ollama model (#4027) | 2024-05-15 08:05:54 +08:00 |
| Yeuoly | e8311357ff | feat: gpt-4o (#4346) | 2024-05-14 02:52:41 +08:00 |
| orangeclk | ece0f08a2b | add yi models (#4335) Co-authored-by: 陈力坤 <likunchen@caixin.com> | 2024-05-13 17:40:53 +08:00 |
| Weaxs | 8cc492721b | fix: minimax streaming function_call message (#4271) | 2024-05-11 21:07:22 +08:00 |
| Joshua | a80fe20456 | add-some-new-models-hosted-on-nvidia (#4303) | 2024-05-11 21:05:31 +08:00 |