takatost 3efaa713da feat: use xinference client instead of xinference (#1339) 2 years ago
__init__.py 5fa2161b05 feat: server multi models support (#799) 2 years ago
anthropic_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
azure_openai_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
baichuan_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
base.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
chatglm_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
hosted.py 827c97f0d3 feat: add zhipuai (#1188) 2 years ago
huggingface_hub_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
localai_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
minimax_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
openai_provider.py 9822f687f7 fix: max tokens of OpenAI gpt-3.5-turbo-instruct to 4097 (#1338) 2 years ago
openllm_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
replicate_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
spark_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
tongyi_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
wenxin_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago
xinference_provider.py 3efaa713da feat: use xinference client instead of xinference (#1339) 2 years ago
zhipuai_provider.py 42a5b3ec17 feat: advanced prompt backend (#1301) 2 years ago