Name | Last commit | Commit message | Last updated
__base | c5f7d650b5 | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 8 months ago
anthropic | c5f7d650b5 | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 8 months ago
azure_ai_studio | 51db59622c | chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) | 9 months ago
azure_openai | e03ec0032b | fix: Azure OpenAI o1 max_completion_token error (#10593) | 9 months ago
baichuan | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
bedrock | 16c41585e1 | Fixing #11005: Incorrect max_tokens in yaml file for AWS Bedrock US Cross Region Inference version of 3.5 Sonnet v2 and 3.5 Haiku (#11013) | 8 months ago
chatglm | 40fb4d16ef | chore: refurbish Python code by applying refurb linter rules (#8296) | 11 months ago
cohere | 0067b16d1e | fix: refactor all 'or []' and 'or {}' logic to make code more clear (#10883) | 8 months ago
deepseek | aae29e72ae | Fix Deepseek Function/Tool Calling (#11023) | 8 months ago
fireworks | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
fishaudio | 448a19bf54 | fix: fish audio wrong validate credentials interface (#11019) | 8 months ago
gitee_ai | ef8022f715 | Gitee AI Qwen2.5-72B model (#10595) | 9 months ago
google | 08ac36812b | feat: support LLM process document file (#10966) | 8 months ago
gpustack | 76b0328eb1 | feat: add gpustack model provider (#10158) | 9 months ago
groq | b92504bebc | Added Llama 3.2 Vision Models Speech2Text Models for Groq (#9479) | 10 months ago
huggingface_hub | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
huggingface_tei | 096c0ad564 | feat: Add support for TEI API key authentication (#11006) | 8 months ago
hunyuan | 92a3898540 | fix: resolve the incorrect model name of hunyuan-standard-256k (#10052) | 9 months ago
jina | 0c1307b083 | add jina rerank http timout parameter (#10476) | 9 months ago
leptonai | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 11 months ago
localai | 1e829ceaf3 | chore: format get_customizable_model_schema return value (#9335) | 9 months ago
minimax | 5b8f03cd9d | add abab7-chat-preview model (#10654) | 9 months ago
mistralai | 5ddb601e43 | add MixtralAI Model (#8517) | 10 months ago
mixedbread | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
moonshot | 1b5adf40da | fix: moonshot response_format raise error (#9847) | 9 months ago
nomic | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
novita | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 11 months ago
nvidia | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
nvidia_nim | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 11 months ago
oci | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
ollama | fbfc811a44 | feat: support function call for ollama block chat api (#10784) | 8 months ago
openai | c5f7d650b5 | feat: Allow using file variables directly in the LLM node and support more file types. (#10679) | 8 months ago
openai_api_compatible | 1a6b961b5f | Resolve 8475 support rerank model from infinity (#10939) | 8 months ago
openllm | 0067b16d1e | fix: refactor all 'or []' and 'or {}' logic to make code more clear (#10883) | 8 months ago
openrouter | 4d6b45427c | Support streaming output for OpenAI o1-preview and o1-mini (#10890) | 8 months ago
perfxcloud | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
replicate | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
sagemaker | 51db59622c | chore(lint): cleanup repeated cause exception in logging.exception replaced by helpful message (#10425) | 9 months ago
siliconflow | a4fc057a1c | ISSUE=11042: add tts model in siliconflow (#11043) | 8 months ago
spark | d0e0111f88 | fix:Spark's large language model token calculation error #7911 (#8755) | 10 months ago
stepfun | 1e829ceaf3 | chore: format get_customizable_model_schema return value (#9335) | 9 months ago
tencent | 40fb4d16ef | chore: refurbish Python code by applying refurb linter rules (#8296) | 11 months ago
togetherai | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 11 months ago
tongyi | 08ac36812b | feat: support LLM process document file (#10966) | 8 months ago
triton_inference_server | 1e829ceaf3 | chore: format get_customizable_model_schema return value (#9335) | 9 months ago
upstage | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
vertex_ai | 05d43a4074 | Fix: Correct the max tokens of Claude-3.5-Sonnet-20241022 for Bedrock and VertexAI (#10508) | 9 months ago
vessl_ai | aa895cfa9b | fix: [VESSL-AI] edit some words in vessl_ai.yaml (#10417) | 9 months ago
volcengine_maas | 80da0c5830 | fix: default max_chunks set to 1 as other providers (#10937) | 8 months ago
voyage | b90ad587c2 | refactor: move the embedding to the rag module and abstract the rerank runner for extension (#9423) | 10 months ago
wenxin | 4d5546953a | add llm: ernie-4.0-turbo-128k of wenxin (#10135) | 9 months ago
x | bf9349c4dc | feat: add xAI model provider (#10272) | 9 months ago
xinference | 03ba4bc760 | fix error with xinference tool calling with qwen2-instruct and add timeout retry setttings for xinference (#11012) | 8 months ago
yi | e0846792d2 | feat: add yi custom llm intergration (#9482) | 10 months ago
zhinao | 2cf1187b32 | chore(api/core): apply ruff reformatting (#7624) | 11 months ago
zhipuai | 08ac36812b | feat: support LLM process document file (#10966) | 8 months ago
__init__.py | d069c668f8 | Model Runtime (#1858) | 1 year ago
_position.yaml | fb49413a41 | feat: add voyage ai as a new model provider (#8747) | 10 months ago
model_provider_factory.py | 4e7b6aec3a | feat: support pinning, including, and excluding for model providers and tools (#7419) | 11 months ago