🚀 `langchain` 1.0 – Feedback Wanted!

@pamelafox this looks correct.

We have a pre-built package for this (GitHub: langchain-ai/langgraph-supervisor-py); however, we will likely end up recommending that users do exactly what you did in your example.

The reason is that with the custom @tool it’s really obvious how to do all the context engineering.

@tool
def meal_agent_tool(query: str) -> str:
    """Invoke the recipe planning agent and return its final response as plain text."""
    logger.info("Tool:meal_agent invoked")
    response = meal_agent.invoke({"messages": [HumanMessage(content=query)]})
    final = response["messages"][-1].content
    return final
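
A tool like this is then just passed to the supervisor agent like any other tool. A minimal sketch (the model string is a placeholder):

from langchain.agents import create_agent

supervisor = create_agent(
    model="openai:gpt-4o-mini",  # placeholder model string
    tools=[meal_agent_tool],     # the sub-agent is exposed to the supervisor as a plain tool
)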

This tool can be changed to take multiple inputs (e.g., a budget) and to incorporate state or config information:

@tool
def plan_recipe(
    recipe_request: str,
    # .. can insert memory or config .. or additional input parameters for recipe planning (e.g., a budget)
):
    """Plan a recipe given the user request."""
    logger.info("Tool:plan_recipe invoked")
    # In the extreme case, the agent can even be created dynamically here.
    response = meal_agent.invoke({"messages": [HumanMessage(content=recipe_request)]})
    # The result can be just the final answer, or it can incorporate more of the agent's
    # reasoning (to pass it back to the original supervisor).
    result = response["messages"][-1].content
    # Alternatively, return a LangGraph Command to update short-term memory, etc.:
    # result = Command(...)
    return result

One suggestion is to keep the function names and docstrings LLM-facing rather than developer-facing (i.e., plan_recipe rather than meal_agent_tool), since these are the names and descriptions the LLM sees when deciding which tool to call.


@eyurtsev Thanks! I made the suggested function name changes, that all makes sense.

I’m poking around other parts of the langchain v1 docs now. The Streaming docs have this line:

print(f"content: {data['messages'][-1].content_blocks}")

I got an error with content_blocks however:
AttributeError: 'AIMessage' object has no attribute 'content_blocks'

Shouldn’t that be content? I ended up modifying that print quite a bit anyway, since content is empty on the tool-call message itself.
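
For reference, roughly what I changed it to (a sketch: the loop and stream_mode="values" follow the docs example, `question` is just my input variable, and the tool-call branch is my own tweak):

for chunk in agent.stream({"messages": [HumanMessage(content=question)]}, stream_mode="values"):
    last = chunk["messages"][-1]
    if last.content:
        print(f"content: {last.content}")
    elif getattr(last, "tool_calls", None):
        # On the tool-call message itself, content is empty, so print the call names instead.
        print(f"tool calls: {[tc['name'] for tc in last.tool_calls]}")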

I also played with the new middleware, to try to limit the number of tool calls an agent can make. It’s not clear to me whether I should use before_model with a jump, or modify_model_request and set tools to empty. I ended up doing the latter, but I think it confuses the final model call. Here’s the middleware:

class ToolCallLimitMiddleware(AgentMiddleware):
    def __init__(self, limit: int) -> None:
        super().__init__()
        self.limit = limit

    def modify_model_request(self, request: ModelRequest, state: AgentState) -> ModelRequest:
        # Count how many AI messages in this run have already made tool calls.
        tool_call_count = sum(1 for msg in state["messages"] if isinstance(msg, AIMessage) and msg.tool_calls)
        if tool_call_count >= self.limit:
            logger.info("Tool call limit of %d reached, disabling further tool calls.", self.limit)
            request.tools = []
        return request
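
For reference, this is how I’m wiring it in; a minimal sketch assuming the middleware parameter of create_agent, with a placeholder model string and the renamed tool from above:

from langchain.agents import create_agent

agent = create_agent(
    model="openai:gpt-4o-mini",  # placeholder model string
    tools=[plan_recipe],
    middleware=[ToolCallLimitMiddleware(limit=3)],
)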

Hi, I am new to this space but excited to build up my skills with LangChain JS and turn a few ideas into a product.
I am trying to follow along with the docs below and the hands-on Quickstart.
I appreciate the doc structure, and it feels much better than the current 0.3 version, but I still ran into a few issues, detailed below.

  1. Some nuances on when to use what. For example:
  • initChatModel().invoke() vs createAgent().invoke()
  • in the params: when to use a model string vs passing the model directly (llm: llm)
  • when to use a direct model string like "ollama:llama3.1:8b" vs "llama3.1:8b", { modelProvider: "ollama" }
  • when to provide tools as an array to createAgent vs binding them directly on new ChatOllama() with bindTools()

I understand this could just be flexibility, or optional usage for performance or specific use cases; what I am trying to figure out is whether, under the hood, one option helps with performance or provides more configuration than the other.
2. Ollama - tools - need help here

I understand Ollama supports tools (based on the Chat models docs by LangChain) and that only a limited set of models supports tools. We are trying to use llama3.1:8b with tools but get an error that tool_choice is not supported by Ollama. We are trying the quickstart with Ollama because of the OpenAI cost and the friction of getting an API key.
Recommendation: I would suggest a quickstart with a local setup (Ollama, LM Studio, or any free LLM provider, local or cloud) to save cost and billing while learning, with steps for complete beginners on setting things up locally, since there are a lot of moving parts. It would also make it easier to experiment without a cost burden, especially when, fresh from the quickstart, we still don't know much about limiting tokens.
Please check the code below in case I am missing something; I'd appreciate any help here.

  3. Perplexity not supported.
    I see the Perplexity chat model in LangChain, but it is not working for me. I need help figuring out whether I am doing it right, and whether the docs need an update here.

    Please refer to the code below:

export default class LangAgent {
    constructor() {
        this.systemPrompt = systemPrompt;
        this.checkpointer = new MemorySaver();
        this.tools = [getUserLocation, getWeather];
        this.responseFormat = responseFormat;
        autoBind(this);
    }

    init = async () => {
        this.agent = createAgent({
            model: "ollama:llama3.1:8b",
            // TODO: uncomment to test Perplexity, and comment out the ollama model above
            // llm: initChatModel("sonar", { modelProvider: "perplexity" }),
            prompt: this.systemPrompt,
            // TODO: comment out tools to check it works without tools
            tools: this.tools,
            responseFormat: this.responseFormat,
            checkpointer: this.checkpointer,
        });
    };

    chat = async (userQuery, config = {
        configurable: { thread_id: "1" },
        context: { user_id: "1" },
    }) => {
        return await this.agent.invoke({
            messages: [{ role: "user", content: userQuery }],
        }, config);
    };
}


I also tried with ChatOllama but get the same error: "Tool choice is not supported for ChatOllama."

My question is that I am not passing tool_choice at all; I am just following the docs.

Also, if we remove tools from createAgent, it gives the error below.

Tools not passed

/node_modules/langchain/dist/agents/ReactAgent.cjs:29

	const toolClasses = Array.isArray(options.tools) ? options.tools : options.tools.tools;

	                                                                                ^
TypeError: Cannot read properties of undefined (reading 'tools')
Perplexity model Error

Unsupported { modelProvider: perplexity }. Supported model providers are: openai, anthropic, azure_openai, cohere, google-vertexai, google-vertexai-web, google-genai, ollama, mistralai, groq, cerebras, bedrock, deepseek, xai, fireworks, together
Ollama Tool Choice error

Tool choice is not supported for ChatOllama.
    at ChatOllama.invocationParams node_modules/@langchain/ollama/dist/chat_models.js:342:19
  4. isAIMessage used in the Quickstart guide for v1.0

But isAIMessage shows as deprecated in the package @langchain/core@next:
import { isAIMessage, ToolMessage } from "@langchain/core/messages";

These observations and issues come from someone new to LangChain and the AI world overall, trying to upskill on gen AI fundamentals (the docs section on concepts is helpful) through the docs and quickstart below. PS: it is all based on the JS stack and the langchain@next npm package.

Hi, can someone help here?

Hi Pamela: I hit the same issue today with access to "context" in the tool. The quickstart tries to access it via config, but that fails for me. I instead followed the docs at a link that now shows "Page Not Found", and the link that you pasted above is broken now. How did you overcome this issue?

I think I got the answer looking at your Gist 🙂 Thanks a ton for including the code snippet there.

This is an awesome step up. I love the push_ui_message feature; it simplified my greatest pain, which was the UI communicating tool calls etc. I would really love for middleware to work with state_schema set. That would be awesome for me, as I need those summarization and prompt_cache layers.

There’s a lot of positive feedback, and while the new agent instantiation has cleaned things up quite a bit, it seems to have completely bucked the concept of the “CHAIN”, and basically the overall functionality has been reduced to fit into the groupthink.

The most useful tools that I had were processing chains, taking a document, running it through a few specific LLM passes using cheap targeted prompts, getting structured output and then further acting upon that data.
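
Something like this, heavily simplified (the schema and prompt here are made up for illustration, not my actual code):

from pydantic import BaseModel
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # any chat model with structured output works here


class KeyFacts(BaseModel):
    """Structured output for one cheap, targeted pass over a document."""
    title: str
    key_points: list[str]


prompt = ChatPromptTemplate.from_messages([
    ("system", "Extract the title and the key points from the document."),
    ("human", "{document}"),
])
llm = ChatOpenAI(model="gpt-4o-mini")

# A plain chain: prompt -> model -> structured output, whose result feeds the next pass.
extract_chain = prompt | llm.with_structured_output(KeyFacts)
facts = extract_chain.invoke({"document": "..."})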

You basically have replaced the programming equivalent of basic blocks with a complicated branch statement and are pretending like it’s a positive change.

If this were a rework of or addition to LangGraph, I'd be all for it. At this point, I'm going to have to give up on this project. Y'all have a huge amount of code and some great integrations (speaking of which, what's going to be the future of those?), but you don't have a clear direction on what the libraries are supposed to represent. It still seems like it's just your "utils.py" for LLM glue.