Hi everyone,
I’m trying to use the LangSmith Studio file upload feature to attach PDFs directly to my prompt. The files appear correctly in the UI, but when the prompt is sent to my LLM the run crashes with the following error:
```
Background run failed. Exception: <class 'openai.BadRequestError'>
(Error code: 400 - {
  'error': {
    'message': "Missing required parameter: 'messages[1].content[1].file.file_id'.",
    'type': 'invalid_request_error',
    'param': 'messages[1].content[1].file.file_id',
    'code': 'missing_required_parameter'
  }
})
```
What I noticed:
If I place the same PDF into my filesystem backend and load it from there instead of using the Studio file upload, everything works correctly.
So the problem seems specific to passing files uploaded through the LangSmith Studio UI to the model.
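For context, here is my understanding of the content-part shape the API expects for file attachments (an assumption on my part, pieced together from the error message; the `file-abc123` ID is hypothetical, standing in for what a prior file upload would return). The 400 above means exactly this nested `file_id` field is arriving empty:

```python
# Hypothetical sketch of the message payload shape the OpenAI Chat Completions
# API expects for a PDF attachment. "file-abc123" is a placeholder ID; the 400
# error indicates this nested 'file_id' field is missing from the request
# that Studio builds.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize the attached PDF."},
            {"type": "file", "file": {"file_id": "file-abc123"}},
        ],
    },
]

# The failing param, messages[1].content[1].file.file_id, resolves to:
print(messages[1]["content"][1]["file"]["file_id"])  # → file-abc123
```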
Questions:
- Is there something special that needs to be configured for file uploads from LangSmith Studio when using Deep Agents?
- Does gpt-4.1 actually support this type of file attachment in messages? If not, which models currently support this workflow out of the box?
- Sometimes I need to upload large PDF files (50+ pages) for analysis. Is there a recommended approach for handling this with Deep Agents, such as subagents, middleware, chunking, or another pattern? Ideally, I don't want to store these files permanently in my application; I would prefer a solution where the files are processed temporarily and deleted once they are no longer needed.
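For the large-PDF point, one pattern I have been experimenting with (a generic sketch, not Deep Agents-specific): extract the text, chunk it in a temporary directory, and let the directory be removed automatically so nothing persists. The chunker below is plain stdlib; the PDF text extraction step is stubbed out and you would swap in your extractor of choice:

```python
import tempfile
from pathlib import Path

def chunk_text(text: str, size: int = 2000, overlap: int = 200) -> list[str]:
    """Split text into overlapping chunks for piecewise analysis."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

# Temporary working area: everything under tmp is deleted when the
# context manager exits, so no uploaded file is stored permanently.
with tempfile.TemporaryDirectory() as tmp:
    work = Path(tmp) / "upload.txt"
    work.write_text("page text " * 1000)  # stand-in for extracted PDF text
    chunks = chunk_text(work.read_text(), size=2000, overlap=200)
    print(len(chunks))  # → 6; each chunk can be fed to the agent in turn

# tmp and its contents no longer exist at this point
```

Each chunk overlaps its predecessor by 200 characters so sentences split at a boundary still appear whole in one of the two chunks.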
Any hints or working examples would be really appreciated.
Thanks!
Studio UI:
My Agent:
```python
from pathlib import Path

from dotenv import load_dotenv
from deepagents import create_deep_agent
from deepagents.backends import StoreBackend
from deepagents.backends.filesystem import FilesystemBackend
from langchain_openai import AzureChatOpenAI
from langgraph.store.memory import InMemoryStore

from jura_ai.prompts import get_prompt

load_dotenv()

PROJECT_ROOT = Path(__file__).parent.parent.parent

model = AzureChatOpenAI(
    azure_deployment="gpt-4.1",
    api_version="2025-01-01-preview",
    temperature=0,
)

agent = create_deep_agent(
    model=model,
    backend=FilesystemBackend(
        root_dir=str(PROJECT_ROOT / "filesystem"),
        virtual_mode=True,
    ),
    skills=[str(PROJECT_ROOT / "skills")],
    tools=[],
    system_prompt=get_prompt("system_prompt.md"),
)
```
