models                    | 2c30d19cbe | feat: add baichuan prompt (#985)                                       | 1 year ago
providers                 | 9ae91a2ec3 | feat: optimize xinference request max token key and stop reason (#998) | 1 year ago
rules                     | 3ea8d7a019 | feat: add openllm support (#928)                                       | 2 years ago
error.py                  | 5fa2161b05 | feat: server multi models support (#799)                               | 2 years ago
model_factory.py          | 1d9cc5ca05 | fix: universal chat when default model invalid (#905)                  | 2 years ago
model_provider_factory.py | 3ea8d7a019 | feat: add openllm support (#928)                                       | 2 years ago
rules.py                  | 5fa2161b05 | feat: server multi models support (#799)                               | 2 years ago