| Topic | Replies | Views | Activity |
|---|---|---|---|
| Anyone managed to use mcp server with ollama model provider? | 1 | 27 | June 30, 2025 |
| Help ::: How to use a custom (local) LLM with vLLM | 2 | 261 | June 10, 2025 |
| Gemini 2.5 Flash Preview and Gemma3:1b/27b, Big Difference in Output for same task definition | 2 | 88 | May 29, 2025 |
| Handling LLM Errors in Hierarchical CrewAI Process with Callbacks | 9 | 227 | April 14, 2025 |
| LLM Response Error: ValueError: Invalid response from LLM call - None or empty | 5 | 481 | April 5, 2025 |
| How to use the qwen2.5-vl-3b-instruct model with the CrewAi? | 3 | 390 | April 6, 2025 |
| Problem with using locally deployed custom llm | 2 | 190 | March 23, 2025 |
| Error: llm.py-llm:426 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable | 2 | 105 | March 19, 2025 |
| Nineteenai doesnt work with the usual llm setup code | 1 | 14 | March 18, 2025 |
| Using Custom LLM | 1 | 224 | February 24, 2025 |
| Using ollama(llama3.2-vision) to extract text from image | 2 | 643 | February 11, 2025 |
| When native integrations with other LLMs? | 2 | 61 | January 9, 2025 |
| Tried to both perform Action and give a Final Answer at the same time, I must do one or the other | 8 | 776 | November 21, 2024 |
| BadRequestError: litellm.BadRequestError: LLM Provider NOT provided | 5 | 1488 | November 20, 2024 |
| ImportError: cannot import name 'LLM' from 'crewai' | 7 | 1663 | November 20, 2024 |