🚀 `langchain` 1.0 – Feedback Wanted!

Hey everyone,

We’re getting ready to release version 1.0 of the langchain package, and we’d love your input before it goes live.

Over the past few months, we’ve been rethinking the structure of the package to make it simpler, more focused, and easier to use—especially as agentic workflows become more common in real-world applications.

:light_bulb: Here’s What We’re Thinking

We want langchain 1.0 to offer a clean, intuitive starting point focused on the most-used patterns and abstractions:

  • Re-export core primitives like messages, tools, and prompts from langchain_core so users don’t need to learn multiple packages up front.

  • Expose common building blocks—agents, chains, mcp (model-context-protocol), retrievers—at the top level to make discovery and usage easier.

  • Simplify onboarding with:

    • Streamlined access to models and embeddings through universal init helpers

    • Prebuilt workflows for common use cases like RAG, summarization, SQL, and more — so you can hit the ground running

We’re also removing deprecated modules to reduce clutter and improve usability.

:wrench: Adding LangGraph as a dependency

The langchain package will depend on:

  • langchain-core – for core abstractions like chat models, messages, tools, and prompts
  • langgraph – for agent workflow support

:books: Docs

We’re actively working on improving the documentation experience across our open-source ecosystem—including langchain, langgraph, and more.

Our goals are to:

  • Consolidate documentation to reduce duplication and minimize the number of separate docs sites
  • Improve global navigation so it’s easier to find relevant content—see the updated layout in langgraph
  • Unify documentation across packages to better reflect real-world, end-to-end workflows that span multiple components

Some of this work is already live, and we’ll continue rolling out improvements across the ecosystem.

:broom: Cleaning House

We’re retiring many legacy or unused modules (e.g., adapters, docstore).

Integrations now live in dedicated third-party packages—such as langchain_openai, langchain_anthropic, langchain_google—or in the community-maintained langchain_community. The main langchain package will no longer proxy imports from them.

To support existing projects that may still rely on deprecated functionality provided by langchain, we’ll publish a langchain-legacy package that retains the current structure and functionality.

:brain: We’d Love Your Feedback

We’re working to make langchain 1.0 both powerful and ergonomic—and your input is key to getting it right.

We’re especially interested in:

  • Any pain points you’ve run into with the current package structure
  • Features or workflows you’d like to see added or improved
  • Deprecated functionality you’re still relying on or unsure how to replace

Feel free to reply here with thoughts, questions, or concerns.

Thanks for building with us!

—The LangChain Team

6 Likes

I like the simple structure of the new docs (Quickstarts and General Concepts). Would be great to see some quick templates to copy and use out of the box. For instance, there could be a quick repo to copy for the "build a basic chatbot" tutorial.

Also, I default to using LangGraph Studio for testing applications and always find myself adding langgraph-cli[inmem] to my requirements file. Perhaps langgraph-api for the langchain package?

@atc thanks for the feedback.

Just to confirm, by the new docs are you referring to: LangGraph

rather than to: Introduction | 🦜️🔗 LangChain

1 Like

Yup I was referring to the new LangGraph docs.

1 Like

Regarding the Docs:
I’m new to LangChain and LangGraph. I’ve been going through the LangChain for LLM Application Development course by DeepLearning.AI, taught by Harrison. However, much of the course content seems deprecated.

I felt a bit frustrated at first because I spent a lot of time trying to find updated solutions, but the docs didn’t make it clear. For example, LLMChain is deprecated and we’re supposed to use LCEL now, but there’s no clear documentation explaining the updated approach.

If possible, I’d be happy to help improve the docs!

Hi @Tik1993, sorry to hear that – things move pretty fast around here, so it’s not always easy to stay on top of the many docs pages. Glad to hear you’re interested in helping out! We’ve put together some information here for contributing to docs.

Let me know if you have any trouble getting set up.

1 Like

Hello @eyurtsev

Delighted to hear changes are coming. A few thoughts as requested.

  • Keep it tight with Context7.
  • As hard as it is to do, wax the old stuff so web searches 404.
  • Treat LangChain Academy as THE source, alongside a single pillar of documentation.
  • Please keep the open source CODE explanations front and center.
  • Low and no coders will find Studio and such; code devs need the example transfusions!

To wit, thanks to Lance for continuing to stoop down to help us little people in his courses.

What progress I have made (long way to go) with your magnificent software has been at the source. More precisely, tutorials by third parties are 'enlightening' and, in my experience, 'misleading'. Not on purpose, of course; simply that they are adding yet another layer of (in this case human) abstraction.

+1 for resolving the multi-generational library and documentation phenomenon as you propose :slight_smile:

Thanx.

2 Likes

Amazing :heart:. Looking forward to the release.

Agree! Huge thanks to Lance for providing such an amazing introduction course — it’s easy to follow and a great starting point for learning.
I spent a weekend watching the videos from LangChain Academy. Just wanted to share a small thought: I initially started with the Project: Building Ambient Agents with LangGraph, but found it difficult to follow without having some foundational knowledge.
After completing Foundation: Introduction to LangGraph, everything started to make more sense.
It would be really helpful if the courses included recommended skill levels or prerequisites in their descriptions.

It would be ideal if init_chat_model() could dynamically pick up any BaseChatModel present in the environment, rather than depending on a hard-coded list. That would let community and third-party back-ends (e.g. DashScope, ZhipuAI, Qianfan, OpenRouter) plug in without touching LangChain’s core—perfectly matching your ā€œintegrations live in separate packagesā€ approach and cutting down on PR overhead.

1 Like

Hello!

First, congratulations on the amazing v1 alpha release!

I have a question about the difference between AgentState and MessagesState. They appear to be very similar. What are the key differences, and in what cases should we use one over the other?

For context, here are the definitions I’m looking at:

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    remaining_steps: NotRequired[RemainingSteps]

# Example Usage
from langchain.agents import AgentState

class State(AgentState):
    my_var: str
    customer_name: str

class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

# Example Usage
from langgraph.graph import MessagesState

class State(MessagesState):
    my_var: str
    customer_name: str

Additionally, it seems that AgentState (with its remaining_steps field) is designed for the new create_agent function. If that’s the case, why doesn’t it inherit from MessagesState? It seems like a natural extension.

Thanks for your help!

I am trying out the example in the v1 Quickstart:

I’ve run into a few issues:

  • Access to "context" in the tool. The quickstart tries to access it via config, but that fails for me. I instead followed the docs at Tools - Docs by LangChain
  • The actual responses are a bit awkward, due to the second call requesting a structured response. The model seems confused by what to put in "conditions" for that one. It might be better to specify this in the system prompt, or to use an enum for conditions? It might also help to show the expected output in the Quickstart.

Here’s what I got from gpt-4o-mini:

WeatherResponse(conditions="It's always sunny in Florida!", punny_response="Looks like Florida's putting its best foot forward—it's a hot day!")
WeatherResponse(conditions="thanks", punny_response="You're welcome! I'm always here to shine some light on your weather queries!")

Here’s what I got from gpt-5, which got tripped up by lack of a specific location:

WeatherResponse(conditions="I can tell you're in Florida, but I need your precise spot—what city or ZIP code should I check? (Examples: Miami, Orlando, Tampa, Jacksonville, Tallahassee.)", punny_response="I don't mean to be a mist-ery, but I need your precip-ic location so I can rain down the right details!")
WeatherResponse(conditions="I still need your exact spot in Florida—what city or ZIP should I check? (e.g., Miami 33101, Orlando 32801, Tampa 33602, Jacksonville 32202, Tallahassee 32301)", punny_response="You're welcome! Now let's make this forecast a shore thing—drop your city or ZIP so I can rain on with the details.")

I imagine the Anthropic model gives different results, but I don’t have an Anthropic account yet, so I tested with Azure/GitHub Models.

You can see the full code that I got working here:

1 Like

I notice that tools is required for create_agent - it might be nice to give that a default of an empty sequence, to make it easier for developers to make prompt-only ā€œagentsā€.

Currently, if you don’t pass it in, you get:

File "/Users/pamelafox/python-ai-agents-demos/examples/langchainv1_basic.py", line 43, in <module>
    agent = create_agent(
            ^^^^^^^^^^^^^
TypeError: create_agent() missing 1 required positional argument: 'tools'

1 Like

Hi Pamela, it should work if you pass an empty list: tools=[]. It's not the most intuitive API for this use case, and we may relax the requirement that tools has to be specified. Appreciate the feedback!
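Until the requirement is relaxed, a tiny wrapper can supply the default; a sketch (the `create_agent` import path is taken from this thread, and the wrapper works with any factory of that shape):

```python
# Workaround sketch: give `tools` a default so prompt-only agents need no
# explicit tools=[]. Works with any factory that takes (model, tools, ...).
def with_default_tools(create_fn):
    def wrapper(model, tools=None, **kwargs):
        return create_fn(model, tools=list(tools or []), **kwargs)
    return wrapper

# Usage (assuming langchain v1 is installed):
# from langchain.agents import create_agent
# agent = with_default_tools(create_agent)(model)
```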

@eyurtsev Yep, that’s what I did! I’m just saying that it might be nice if that wasn’t required. I’m prepping a talk about agents using langchain v1, and that’s a difference that I noticed between langchain v1 and other agent frameworks. But perhaps developers should be using the model classes directly for non-tools scenarios.

1 Like

Are there any examples available somewhere of the recommended approach for multi-agent architectures, like supervisor, round-robin, handoffs, etc.? Would developers be advised to use LangGraph for those scenarios?

We’re currently working on new examples / conceptual docs, and some will likely be released next week.

We do have some existing examples here: Custom implementation, but they may change.

We’ll likely focus more on discussing tools vs. handoffs:

  • tools will be recommended for the supervisor architecture, where a subagent is used as a tool (and the fact that the tool uses an agent is an implementation detail). Here, control is always returned to the supervisor.
  • handoffs result in a change in the active agent. One agent can hand off to another agent to assist with a user request, and the user can continue the conversation with the new agent.
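The two patterns can be contrasted in plain Python (a conceptual sketch with no framework; the function names are illustrative, not langchain APIs):

```python
# Conceptual sketch only; names are illustrative, not langchain APIs.

def research_subagent(query: str) -> str:
    # Stand-in for a full subagent loop.
    return f"findings for {query!r}"

# Supervisor pattern: the subagent is exposed as a tool; after the call,
# control always returns to the supervisor.
def research_tool(query: str) -> str:
    return research_subagent(query)

# Handoff pattern: the *active* agent changes, and subsequent user turns
# go to whichever agent is currently active.
active_agent = research_subagent  # a handoff rebinds this reference
print(research_tool("langchain 1.0"))
```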

We will likely stop discussing topologies (e.g., network, supervisor, supervisor-as-tools; see Overview), since that framing isn’t super helpful in understanding which architecture to use.

1 Like

Hello!

I noticed the retry/fix output parsers that were exposed in langchain have disappeared and are not available in langchain-core. Is that expected, and why?

Otherwise, the documentation looks much clearer and simpler. Looking forward to seeing it get completed!

@eyurtsev Thanks, that’s very helpful to hear how you’re thinking about discussing agentic architectures.
Based on your comment, this is how I attempted a supervisor architecture:
https://github.com/Azure-Samples/python-ai-agent-frameworks-demos/blob/main/examples/langchainv1_supervisor.py#L162
Not sure if there’s a better way to wrap an agent as a tool besides what I did there, or whether there’s something built-in planned?

Generally, is the goal that most/all people should be moving to langchain versus langgraph?

@Louis the RetryOutputParser will be available in langchain-legacy. Could you share a bit more about how / when you use it?