How to read uploaded files in LangGraph

I am following the document Context engineering in agents - Docs by LangChain, which reads uploaded files in a wrap_model_call middleware.


I am testing this agent with LangSmith Studio, but request.state.get("uploaded_files", []) always returns [] when I upload files from Studio. How can I make it work?

Hi,

You’re using the LangSmith Studio Web UI (https://smith.langchain.com/studio/), right? Are you checking the Web Developer Tools console? It might be logging some issues there.

Yes, I am using the LangSmith Studio Web UI. I checked the developer console and the Network tab, but they did not show anything.

I just added print(request.state) for debugging. Here’s the output for an example PDF I uploaded using the “Add Multimodal Content” button:

{'messages': [HumanMessage(content=[{'type': 'text', 'text': ''}, {'type': 'file', 'file': {'file_data': 'data:application/pdf;base64,blablablabla_very_long_string', 'filename': 'invoice_100812809.pdf'}}], additional_kwargs={}, response_metadata={}, id='dac8ad7e-d6c1-41ae-b19b-2f54a881a2b3')]}
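
So the PDF content does reach the graph, just as a file content block inside the HumanMessage rather than under an uploaded_files key. A small helper like this can pull those blocks out for debugging (the helper name and block shapes are my own guesses from the dump above, not anything from the docs):

def extract_file_blocks(messages):
    """Collect {'type': 'file'} content blocks from a list of messages."""
    blocks = []
    for msg in messages:
        content = getattr(msg, "content", None)
        if isinstance(content, list):
            for part in content:
                if isinstance(part, dict) and part.get("type") == "file":
                    # Each block holds 'file_data' (base64 data URL) and 'filename'
                    blocks.append(part["file"])
    return blocks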


Here’s my Studio script that I run using langgraph dev:

from langgraph.graph import START, END, StateGraph, MessagesState

from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from typing import Callable


@wrap_model_call
def inject_file_context(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
    """Inject context about files user has uploaded this session."""
    # Read from State: get uploaded files metadata
    uploaded_files = request.state.get("uploaded_files", [])
    print("in inject func")
    print(request.state)
    if uploaded_files:
        # Build context about available files
        file_descriptions = []
        for file in uploaded_files:
            file_descriptions.append(
                f"- {file['name']} ({file['type']}): {file['summary']}"
            )
        file_context = f"""Files you have access to in this conversation:
{chr(10).join(file_descriptions)}
Reference these files when answering questions."""
        # Append the file context after the existing messages
        messages = [  
            *request.messages,
            {"role": "user", "content": file_context},
        ]
        request = request.override(messages=messages)  
    return handler(request)


agent = create_agent(
    model="ollama:qwen3",
    middleware=[inject_file_context]
)


# Node
def assistant(state: MessagesState):
    print("in assistant")
    result = agent.invoke({"messages": state["messages"]})
    print(result)
    return {"messages": result["messages"]}

# Build graph
builder = StateGraph(MessagesState)
builder.add_node("assistant", assistant)
builder.add_edge(START, "assistant")
builder.add_edge("assistant", END)

# Compile graph
graph = builder.compile()

Thanks for sharing this. Exactly: I can get the base64 file content when uploading an image or a PDF. Previously, I uploaded a markdown file, and it was not included in request.state.

It seems the state will not contain uploaded_files automatically. Do you think we need to manually read the files from request.state and then update the state ourselves? Something like the sketch below is what I have in mind.
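
This is a rough sketch, assuming the file blocks look like the state dump above (the filename handling and block shapes are my assumptions, not from the docs); it builds the file context from the content blocks in the messages instead of a separate uploaded_files key:

from typing import Callable

from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse


@wrap_model_call
def inject_file_context_from_messages(
    request: ModelRequest,
    handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:
    """Build file context from file content blocks found in the messages."""
    filenames = []
    for msg in request.messages:
        content = getattr(msg, "content", None)
        if isinstance(content, list):
            for part in content:
                if isinstance(part, dict) and part.get("type") == "file":
                    filenames.append(part["file"].get("filename", "unnamed file"))
    if filenames:
        file_context = "Files attached in this conversation:\n" + "\n".join(
            f"- {name}" for name in filenames
        )
        # Append the file context as an extra user message, like the docs example
        request = request.override(
            messages=[*request.messages, {"role": "user", "content": file_context}]
        )
    return handler(request)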

In addition, I am wondering why the markdown file was not included.