Is anyone else having trouble with Knowledgebases and Agents?
I’m mostly using the DeepSeek R1 Distill Llama 70B model, but changing models definitely didn’t resolve the issues.
Connecting too many knowledge bases to an Agent fills the “reasoning” field with repetitive garbage: chunks of the retrieved context repeated over and over. The <think>{reasoning}</think> block also appears alongside the Agent’s normal response, and the Agent often hits an “out of tokens” error. Detaching all the knowledge bases and reattaching a single knowledge base fixes the issue, but reattaching multiple knowledge bases causes it again.
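It doesn’t fix the token blow-up, but as a stopgap I’ve been considering stripping the leaked reasoning block client-side before showing the response. A minimal sketch (function name and regex are mine, not anything from the platform):

```python
import re

def strip_reasoning(text: str) -> str:
    """Remove any <think>...</think> block that leaks into the agent response."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

reply = "<think>repeated chunks of context...</think>The actual answer."
print(strip_reasoning(reply))  # prints "The actual answer."
```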
.csv imports will not index. Indexing reports “Data sources updated successfully,” but zero tokens are indexed. I actually want to use Multi QA MPNet Base Dot v1, but I tried all three embedding models and they all fail the same way.
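In case it helps anyone narrow this down, a quick local parse check (sketch; the function name is mine) can at least rule out a malformed file before blaming the indexer:

```python
import csv
import io

def count_csv_rows(text: str) -> int:
    """Count rows to confirm the data parses as CSV at all.
    For a real file, read it with encoding="utf-8" first."""
    return sum(1 for _ in csv.reader(io.StringIO(text)))

sample = "question,answer\nwhat is 2+2,4\n"
print(count_csv_rows(sample))  # prints 2 (header + one data row)
```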
Has anyone else run into either of these issues, and if so, have you found a fix?