| Name | Commit | Last commit message | Last updated |
| --- | --- | --- | --- |
| __base | c5f7d650b5 | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 11 months ago |
| anthropic | 2681bafb76 | fix: handle document fetching from URL in Anthropic LLM model, solving base64 decoding error (#11858) | 10 months ago |
| azure_ai_studio | 51db59622c | chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) | 11 months ago |
| azure_openai | 463fbe2680 | fix: better gard nan value from numpy for issue #11827 (#11864) | 10 months ago |
| baichuan | daccb10d8c | fix: volcengine_maas and baichuan message error (#11625) | 10 months ago |
| bedrock | bb2f46d7cc | fix: add safe dictionary access for bedrock credentials (#11860) | 10 months ago |
| chatglm | 40fb4d16ef | chore: refurbish Python code by applying refurb linter rules (#8296) | 1 year ago |
| cohere | 463fbe2680 | fix: better gard nan value from numpy for issue #11827 (#11864) | 10 months ago |
| deepseek | 79801f5c30 | fix: deepseek reports an error when using Response Format #11677 (#11678) | 10 months ago |
| fireworks | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| fishaudio | 448a19bf54 | fix: fish audio wrong validate credentials interface (#11019) | 11 months ago |
| gitee_ai | fc8fdbacb4 | feat: add gitee ai vl models (#11697) | 10 months ago |
| google | 366857cd26 | fix: gemini system prompt with variable raise error (#11946) | 10 months ago |
| gpustack | 8aae235a71 | fix: int None will cause error for context size (#11055) | 11 months ago |
| groq | e7a4cfac4d | fix: name of llama-3.3-70b-specdec (#11596) | 10 months ago |
| huggingface_hub | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| huggingface_tei | 6a0ff3686c | fix: fix typo (#12034) | 10 months ago |
| hunyuan | 56434db4f5 | feat:add hunyuan model(hunyuan-role, hunyuan-large, hunyuan-large-rol… (#11766) | 10 months ago |
| jina | 8aae235a71 | fix: int None will cause error for context size (#11055) | 11 months ago |
| leptonai | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 1 year ago |
| localai | 1e829ceaf3 | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| minimax | 32f8439143 | fix: add the missing abab6.5t-chat model of Minimax (#11484) | 10 months ago |
| mistralai | 42d986b96d | [Pixtral] Add new model ; add vision (#11231) | 10 months ago |
| mixedbread | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| moonshot | 643a90c48d | fix: use `removeprefix()` instead of `lstrip()` to remove the `data:` prefix (#11272) | 10 months ago |
| nomic | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| novita | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 1 year ago |
| nvidia | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| nvidia_nim | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 1 year ago |
| oci | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 1 year ago |
| ollama | 7e1184c071 | feat: support json_schema for ollama models (#11449) | 10 months ago |
| openai | af2888d394 | fix: remove json_schema if response format is disabled. (#12014) | 10 months ago |
| openai_api_compatible | 7e154a467b | fix: better error message for stream (#11635) | 10 months ago |
| openllm | 0067b16d1e | fix: refactor all 'or []' and 'or {}' logic to make code more clear (#10883) | 11 months ago |
| openrouter | 4d6b45427c | Support streaming output for OpenAI o1-preview and o1-mini (#10890) | 11 months ago |
| perfxcloud | 8aae235a71 | fix: int None will cause error for context size (#11055) | 11 months ago |
| replicate | d057067543 | fix: remove ruff ignore SIM300 (#11810) | 10 months ago |
| sagemaker | 51db59622c | chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) | 11 months ago |
| siliconflow | 12d45e9114 | fix: silicon change its model fix #11844 (#11847) | 10 months ago |
| spark | d0e0111f88 | fix:Spark's large language model token calculation error #7911 (#8755) | 1 year ago |
| stepfun | 643a90c48d | fix: use `removeprefix()` instead of `lstrip()` to remove the `data:` prefix (#11272) | 10 months ago |
| tencent | 40fb4d16ef | chore: refurbish Python code by applying refurb linter rules (#8296) | 1 year ago |
| togetherai | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 1 year ago |
| tongyi | c9b4029ce7 | chore: the consistency of MultiModalPromptMessageContent (#11721) | 10 months ago |
| triton_inference_server | 1e829ceaf3 | chore: format get_customizable_model_schema return value (#9335) | 1 year ago |
| upstage | 463fbe2680 | fix: better gard nan value from numpy for issue #11827 (#11864) | 10 months ago |
| vertex_ai | 7b03a0316d | fix: better memory usage from 800+ to 500+ (#11796) | 10 months ago |
| vessl_ai | aa895cfa9b | fix: [VESSL-AI] edit some words in vessl_ai.yaml (#10417) | 11 months ago |
| volcengine_maas | 560d375e0f | feat(ark): add doubao-pro-256k and doubao-embedding-large (#11831) | 10 months ago |
| voyage | 8aae235a71 | fix: int None will cause error for context size (#11055) | 11 months ago |
| wenxin | e39e776d03 | fix: better wenxin rerank handler, close #11252 (#11283) | 10 months ago |
| x | cf0ff88120 | feat: add grok-2-1212 and grok-2-vision-1212 (#11672) | 10 months ago |
| xinference | 03ba4bc760 | fix error with xinference tool calling with qwen2-instruct and add timeout retry setttings for xinference (#11012) | 11 months ago |
| yi | e0846792d2 | feat: add yi custom llm intergration (#9482) | 1 year ago |
| zhinao | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 1 year ago |
| zhipuai | 142b4fd699 | feat: add zhipu glm_4v_flash (#11440) | 10 months ago |
| __init__.py | d069c668f8 | Model Runtime (#1858) | 1 year ago |
| _position.yaml | fb49413a41 | feat: add voyage ai as a new model provider (#8747) | 1 year ago |
| model_provider_factory.py | 4e7b6aec3a | feat: support pinning, including, and excluding for model providers and tools (#7419) | 1 year ago |