| Name | Last commit | Last commit message | Last updated |
|---|---|---|---|
| models | 0796791de5 | feat: hf inference endpoint stream support (#1028) | 2 years ago |
| providers | 9ae91a2ec3 | feat: optimize xinference request max token key and stop reason (#998) | 2 years ago |
| rules | 3ea8d7a019 | feat: add openllm support (#928) | 2 years ago |
| error.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| model_factory.py | 1d9cc5ca05 | fix: universal chat when default model invalid (#905) | 2 years ago |
| model_provider_factory.py | 3ea8d7a019 | feat: add openllm support (#928) | 2 years ago |
| rules.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |