takatost | c4d8bdc3db | fix: hf hosted inference check (#1128) | 2023-09-09 00:29:48 +08:00
takatost | a7cdb745c1 | feat: support spark v2 validate (#1086) | 2023-09-01 20:53:32 +08:00
takatost | 2eba98a465 | feat: optimize anthropic connection pool (#1066) | 2023-08-31 16:18:59 +08:00
takatost | 417c19577a | feat: add LocalAI local embedding model support (#1021) (Co-authored-by: StyleZhang <jasonapring2015@outlook.com>) | 2023-08-29 22:22:02 +08:00
takatost | 9ae91a2ec3 | feat: optimize xinference request max token key and stop reason (#998) | 2023-08-24 18:11:15 +08:00
takatost | 9b247fccd4 | feat: adjust hf max tokens (#979) | 2023-08-23 22:24:50 +08:00
takatost | a76fde3d23 | feat: optimize hf inference endpoint (#975) | 2023-08-23 19:47:50 +08:00
takatost | e0a48c4972 | fix: xinference chat support (#939) | 2023-08-21 20:44:29 +08:00
takatost | 6c832ee328 | fix: remove openllm pypi package because of this package too large (#931) | 2023-08-21 02:12:28 +08:00
takatost | 25264e7852 | feat: add xinference embedding model support (#930) | 2023-08-20 19:35:07 +08:00
takatost | 18dd0d569d | fix: xinference max_tokens alisa error (#929) | 2023-08-20 19:12:52 +08:00
takatost | 3ea8d7a019 | feat: add openllm support (#928) | 2023-08-20 19:04:33 +08:00
takatost | da3f10a55e | feat: server xinference support (#927) | 2023-08-20 17:46:41 +08:00
takatost | 95b179fb39 | fix: replicate text generation model validate (#923) | 2023-08-19 21:40:42 +08:00
takatost | 9adbeadeec | feat: claude paid optimize (#890) | 2023-08-17 16:56:20 +08:00
takatost | f42e7d1a61 | feat: add spark v2 support (#885) | 2023-08-17 15:08:57 +08:00
takatost | cc52cdc2a9 | Feat/add free provider apply (#829) | 2023-08-14 12:44:35 +08:00
takatost | 1bd0a76a20 | feat: optimize error raise (#820) | 2023-08-13 00:59:36 +08:00
takatost | 5fa2161b05 | feat: server multi models support (#799) | 2023-08-12 00:57:00 +08:00