takatost 9ae91a2ec3 feat: optimize xinference request max token key and stop reason (#998) 1 year ago
__init__.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
anthropic_provider.py 9adbeadeec feat: claude paid optimize (#890) 1 year ago
azure_openai_provider.py 1bd0a76a20 feat: optimize error raise (#820) 1 year ago
base.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
chatglm_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
hosted.py 9adbeadeec feat: claude paid optimize (#890) 1 year ago
huggingface_hub_provider.py 9b247fccd4 feat: adjust hf max tokens (#979) 1 year ago
minimax_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
openai_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
openllm_provider.py 6c832ee328 fix: remove openllm pypi package because of this package too large (#931) 1 year ago
replicate_provider.py 95b179fb39 fix: replicate text generation model validate (#923) 1 year ago
spark_provider.py f42e7d1a61 feat: add spark v2 support (#885) 1 year ago
tongyi_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
wenxin_provider.py 5fa2161b05 feat: server multi models support (#799) 1 year ago
xinference_provider.py 9ae91a2ec3 feat: optimize xinference request max token key and stop reason (#998) 1 year ago