Chat LangChain sounds like a really good idea but works terribly

I’m not sure where to post feedback about your Chat LangChain product. I love the idea and I think everything should move toward it, but this isn’t the first time I’ve had such a bad experience using it. It makes no sense: you create a framework that I use (and love) for building agents, but then ship such a bad agent yourselves. I asked your Chat LangChain agent one of the simplest possible questions about pricing, and look at what it replied…

-Q: Plus plan offers how many traces per month?

-A: The LangSmith Plus plan provides a base quota of 100,000 traces per month, with additional usage billed on a per-trace basis once that threshold is exceeded.

You can monitor your current usage, set budget caps, and view detailed billing breakdowns directly in the LangSmith workspace settings. If you expect to consistently exceed the included quota, you can also explore upgrading to the Enterprise plan for different volume-based pricing structures.

Relevant docs:

Like, is this for real? I come here relying on it to be the source of truth, but 80% of the time it gives a very wrong answer or solution. What a paradox. I hope this helps you improve the underlying functionality of Chat LangChain, because as I said, it really is a good idea.

Hi @endidev2025

It seems Chat LangChain is a documentation search: it queries the official docs via the Mintlify API, so its corpus is the docs.langchain.com domain, not langchain.com, which is where the pricing actually lives (langchain.com/pricing).
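To illustrate the retrieval gap this creates, here's a toy sketch. The corpus contents, URLs, and keyword-matching logic below are invented for illustration and are not Chat LangChain's actual index or retriever:

```python
# Hypothetical sketch: the indexed corpus only covers docs.langchain.com
# pages, so a pricing question has nothing relevant to retrieve.

INDEXED_CORPUS = {
    "docs.langchain.com/langsmith/tracing": "How to set up tracing for your runs.",
    "docs.langchain.com/langsmith/evaluation": "Evaluating LLM outputs with datasets.",
    # langchain.com/pricing is NOT in the index, so plan quotas are unreachable.
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval: return pages sharing any word with the query."""
    words = set(query.lower().split())
    return [url for url, text in INDEXED_CORPUS.items()
            if words & set(text.lower().split())]
```

With an index scoped like this, a docs question finds pages but a pricing question returns an empty result, and the model is left to improvise from training data.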

When ChatLangChain can’t find an authoritative source for a fact, the underlying LLM may hallucinate a plausible-sounding number (100,000 instead of 10,000 - which is the real quota) rather than saying “I don’t know.”

This is a well-known limitation of RAG systems:

  • Retrieval gap: if the relevant information isn’t in the indexed corpus, the LLM fills the gap with its training data - which may be outdated or simply wrong
  • Confidence without grounding: LLMs present fabricated numbers with the same confidence as factual ones, making it hard for users to distinguish reliable answers from hallucinated ones
  • No explicit “I don’t know” behavior: without strong guardrails that force the model to abstain when retrieval confidence is low, the system will always produce an answer - even when it shouldn’t
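The third point above is the fixable one. Here's a minimal sketch of an "abstain when retrieval confidence is low" guardrail; the function names, the `(text, score)` retriever shape, and the threshold are all hypothetical, not Chat LangChain's actual implementation:

```python
# Minimal sketch of a low-confidence abstain guardrail for a RAG pipeline.
# `retriever` and `llm` are stand-in callables, not a real LangChain API.

def answer_with_guardrail(query, retriever, llm, min_score=0.5):
    """Answer only from sufficiently relevant documents; otherwise abstain."""
    docs = retriever(query)  # assumed to return (text, relevance_score) pairs
    grounded = [text for text, score in docs if score >= min_score]
    if not grounded:
        # Refuse instead of letting the LLM fill the gap from training data.
        return "I don't know - the docs I can search don't cover this."
    context = "\n\n".join(grounded)
    return llm(f"Answer ONLY from this context:\n{context}\n\nQ: {query}")
```

Even a crude score threshold like this converts "confident fabrication" into an explicit refusal, which is usually the better failure mode for a support bot.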

However, it’s worth noting that Chat LangChain is a demonstration project and open-source tool, not a guaranteed “source of truth” - especially for knowledge outside the docs.langchain.com domain, where such guardrails may not be in place.

Hey @endidev2025 !

Randall here from the Technical Support team. As @pawel-twardziak correctly surmised, it is a documentation search: it didn’t have the explicit knowledge and hallucinated in this scenario.

I pushed out a fix for this to go through our primary site for any pricing/plan/etc type of questions so this should hopefully not happen again.

Also wanted to say thanks for bringing this up so we can continue improving on this. If you have any other ideas you’d like to see in chat.langchain, I’d be more than happy to look into that too!
