| Name | Last commit | Last commit message | Last updated |
| ---- | ----------- | ------------------- | ------------ |
| models | 0796791de5 | feat: hf inference endpoint stream support (#1028) | 1 year ago |
| providers | 9ae91a2ec3 | feat: optimize xinference request max token key and stop reason (#998) | 1 year ago |
| rules | 3ea8d7a019 | feat: add openllm support (#928) | 1 year ago |
| error.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |
| model_factory.py | 1d9cc5ca05 | fix: universal chat when default model invalid (#905) | 1 year ago |
| model_provider_factory.py | 3ea8d7a019 | feat: add openllm support (#928) | 1 year ago |
| rules.py | 5fa2161b05 | feat: server multi models support (#799) | 1 year ago |