Join me from 8 AM PST on Thursday, November 13th through 6 PM PST on Friday, November 14th for an AMA. We’ll use this same topic.
Hit reply below to drop questions ahead of time! I’ll start answering your questions on the 13th.
As you know, over the past three years we’ve iterated from LangChain to LangSmith to LangGraph. We are working to figure out what the agents of the future look like, and to build tools to help make them real. Feel free to ask questions about the past three years and how the space (and our offerings) have evolved. Ask me anything about the new features and products we launched in October. Also happy to speculate on where we see the future of agents headed!
We’re excited to host this AMA, answer your questions, and learn more about what you’re seeing and doing.
Hi @hwchase17 — huge thanks for your amazing work!
Would you consider letting init_chat_model support any installed provider package, not just a fixed whitelist? This would allow using many community, private, or in-development integrations (like langchain-chatglm, langchain-litellm, langchain-vllm, and others, not only OSS) directly by package and model name.
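Roughly what I have in mind, as a toy sketch (hypothetical helper names, not the current `init_chat_model` behavior): resolve the integration package purely from the provider name, so any installed `langchain-<provider>` package is usable without a whitelist.

```python
import importlib.util
from typing import Optional

def resolve_provider_package(provider: str) -> Optional[str]:
    """Map a provider name (e.g. 'chatglm') to an installed
    `langchain_<provider>` integration module, if one exists.

    Hypothetical sketch of the proposal above -- NOT how
    init_chat_model resolves providers today.
    """
    module_name = f"langchain_{provider.replace('-', '_')}"
    # find_spec checks installability without importing the package
    if importlib.util.find_spec(module_name) is not None:
        return module_name
    return None

# Any installed langchain-<provider> package becomes discoverable,
# community or private, with no fixed whitelist:
print(resolve_provider_package("chatglm"))  # module name if installed, else None
```

With something like this, `init_chat_model("chatglm:glm-4")` could fall through to the dynamically resolved package when the provider isn’t in the built-in list.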
Hello @hwchase17, really appreciate the work you and the team have been doing. I’m curious about the long-term vision for LangChain and LangGraph. Are there plans to eventually consolidate into a single framework? For example, could we see LangChain being deprecated in favor of LangGraph as the primary framework, or do you see these tools serving distinctly different purposes long-term?
Hi @hwchase17, super stoked to jump in with a question! Your team did a great job making agent development way easier. Quick one—is LangChain planning to dive into UI-related stuff, like UI libraries or maybe JSON streaming?
Which industries or problem types are seeing the fastest ROI from LangChain-based systems, and how should young AI builders align projects to those domains?
In terms of industry - pretty much everyone is adopting agents/genAI. Technology companies are the fastest moving, so we see the most there, with finance/fintech maybe next behind
In terms of problem types - everyone is building a “deep research for X” style agent. I’m very bullish on these types of problems, which is why we’re investing really heavily in DeepAgents. These types of agents are definitely newer/more cutting edge, but IMO they’re a great place for young AI builders to invest
Are there plans to add native cancellation support to LangGraph? I’m thinking of a middleware-style hook that could fire before a node executes, allowing for cancellation.
I know CallbackHandler might offer a way to do this, but I’m curious about your thoughts in general on this.
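To make the idea concrete, here’s a rough sketch of the kind of hook I mean (hypothetical wrapper, not an existing LangGraph API): a flag is checked before each node runs, and execution is aborted cleanly if it’s set.

```python
import threading

class CancelledError(Exception):
    """Raised when a run is cancelled before a node executes."""

def with_cancellation(node_fn, cancel_event: threading.Event):
    """Hypothetical middleware-style wrapper (not a LangGraph API):
    checks a cancellation flag before the wrapped node runs."""
    def wrapped(state):
        if cancel_event.is_set():
            raise CancelledError(f"cancelled before {node_fn.__name__}")
        return node_fn(state)
    return wrapped

# Usage sketch: wrap each node before adding it to the graph.
cancel = threading.Event()

def my_node(state):
    return {**state, "done": True}

guarded = with_cancellation(my_node, cancel)
print(guarded({"done": False}))  # runs normally while the flag is unset

cancel.set()
# guarded({...}) would now raise CancelledError instead of executing
```

The nice part of a middleware-style hook is that cancellation could be declared once and applied to every node, rather than threading checks through each node body.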
If I build a Docker image with the LangGraph CLI and deploy it as a container, do I still need a LangSmith API key or a Pro subscription? I’m still not totally clear on how LangChain/LangGraph self-hosting works.
Similarly, if I use a LangChain agent with a FastAPI server, I don’t need a LangSmith API key for self-hosting, right?
LangChain has mostly been targeted at developers. With the Agent Builder launch, is LangChain trying to expand its product offering to non-developers?
A bit, yes! Our focus is still developers - but we see that with Agent Builder it’s possible to build agents in a no-code way, and we’re excited to explore that. Right now it’s a small part of what we do
Yes! We’re thinking of the best thing to do here - either will invest in integrating into AI SDK or do our own thing. Will try to have some updates shortly
Maybe not a question but a request.
Why do we not have more real-world, production-grade examples, like for DeepAgents and LangGraph? The ones we do have are outdated.
For instance, most deep agent implementations focus on fetching data from a vector database. It would be awesome to see an open-source, production-ready DeepAgents or LangGraph project that demonstrates a robust, practical setup.
Hi Harrison, it’s great to meet here in the forum! I really appreciate the direct interaction between you and developers in this thread. It’s been helpful—I even found a friend here and got his question answered by you!
On behalf of myself and our local devs, I have a question: LangChain v1 and its middleware are fantastic! Will there be an official middleware marketplace where developers can contribute and share their middleware?
Reading and hearing more about context engineering, I feel like there are a lot of opportunities to build frameworks/products to help developers reason about it in their applications. LangChain’s Memory system is a good attempt at addressing context engineering concerns. Curious if LangChain will formalize or productize solutions beyond short/long-term memory (this doc refers to procedural memory, episodic memory, etc.)?
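To illustrate the kind of distinction I mean, here’s a toy sketch (class and field names are hypothetical, not a LangChain API) separating episodic, procedural, and semantic memory:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Toy illustration of memory categories beyond short/long-term;
    a real system would back each with proper storage and retrieval."""
    episodic: list = field(default_factory=list)    # specific past interactions
    procedural: list = field(default_factory=list)  # learned rules / how-tos
    semantic: dict = field(default_factory=dict)    # durable facts about the user/world

    def recall(self, query: str) -> list:
        # Naive keyword recall over episodic memories, for illustration only
        return [e for e in self.episodic if query.lower() in e.lower()]

memory = AgentMemory()
memory.episodic.append("User asked about refund policy on 2024-01-03")
memory.procedural.append("Always confirm the order ID before issuing refunds")
memory.semantic["preferred_language"] = "English"

print(memory.recall("refund"))
```

Formalizing which category a piece of context belongs to (and when each gets injected into the prompt) feels like the part developers currently have to reinvent per application.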