| Name | Commit | Message | Last updated |
| --- | --- | --- | --- |
| models | 0796791de5 | feat: hf inference endpoint stream support (#1028) | 2 years ago |
| providers | 9ae91a2ec3 | feat: optimize xinference request max token key and stop reason (#998) | 2 years ago |
| rules | 3ea8d7a019 | feat: add openllm support (#928) | 2 years ago |
| error.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |
| model_factory.py | 1d9cc5ca05 | fix: universal chat when default model invalid (#905) | 2 years ago |
| model_provider_factory.py | 3ea8d7a019 | feat: add openllm support (#928) | 2 years ago |
| rules.py | 5fa2161b05 | feat: server multi models support (#799) | 2 years ago |