hi @SeccoMaracuja
The root cause is that Skills are documentation-based, not execution-based - the agent reads the skill’s instructions and then generates its own response, which is where the reformatting happens.
Skills in Deep Agents follow a progressive disclosure pattern (source: deepagents/middleware/skills.py, lines 561-600):
- The `SkillsMiddleware` injects skill names and descriptions into the system prompt
- When the agent recognizes a skill applies, it reads the full `SKILL.md` file via the `read_file` tool
- The agent follows the skill's instructions and generates its own response
- There is no post-processing or output interception by the skills system
This means the agent’s LLM is free to rephrase, summarize, or reformat the output - which is exactly the problem you’re experiencing.
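For context, a skill is just documentation on disk: a `SKILL.md` file whose frontmatter supplies the name and description that get injected into the system prompt. A hypothetical minimal example (the `name`/`description` frontmatter keys follow the Agent Skills convention; your actual files may differ):

```markdown
---
name: architecture-analysis
description: Produces a markdown table analyzing a project's architecture.
---

# Architecture Analysis

When asked to analyze an architecture, output a markdown table with the
columns: Section, Description, Tasks. Do not add commentary around it.
```

Nothing in this file is executed; the agent reads it and then writes its own answer, which is why the format can drift.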
Solution 1: response_format with a Pydantic schema
The most reliable approach is to use the response_format parameter on create_deep_agent to enforce a structured output schema. This way, the agent’s final response must conform to your defined structure.
```python
from pydantic import BaseModel, Field
from deepagents import create_deep_agent
from langchain.agents.structured_output import ToolStrategy

# Define schemas for each skill output type
class ArchitectureTable(BaseModel):
    """Structured architecture analysis output."""
    business_objective: str = Field(description="Business objective section with description and tasks")
    success_metrics: str = Field(description="Success metrics section with description and tasks")
    system_impact: str = Field(description="System impact section with description and tasks")
    data_tasks: str = Field(description="Data tasks section with description and tasks")
    risks: str = Field(description="Risks section with description and tasks")
    phased_plan: str = Field(description="Phased plan section with description and tasks")

agent = create_deep_agent(
    skills=["/skills/"],
    response_format=ToolStrategy(schema=ArchitectureTable),
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "Analyze the architecture for project X"}]
})

# Access the validated structured output
structured = result["structured_response"]
print(structured.business_objective)
print(structured.phased_plan)
```
The response_format parameter is passed directly through to the underlying create_agent call (source: deepagents/graph.py, lines 91 and 314).
Docs reference: Deep Agents Customization
Solution 2: single string field to preserve raw output
If you want to preserve the skill’s output exactly as-is (e.g. a markdown table), use a single string field in your schema:
```python
from pydantic import BaseModel, Field
from deepagents import create_deep_agent
from langchain.agents.structured_output import ToolStrategy

class SkillOutput(BaseModel):
    """Preserves the skill output verbatim."""
    skill_name: str = Field(description="Name of the skill that was executed")
    raw_output: str = Field(
        description=(
            "The EXACT output produced by following the skill instructions. "
            "Do NOT modify, summarize, or reformat this content in any way. "
            "Preserve all original formatting including tables, bullet points, "
            "and whitespace exactly as generated."
        )
    )

agent = create_deep_agent(
    skills=["/skills/"],
    response_format=ToolStrategy(schema=SkillOutput),
)

result = agent.invoke({
    "messages": [{"role": "user", "content": "Run the architecture analysis for project X"}]
})

# The raw_output field preserves the original formatting
print(result["structured_response"].raw_output)
```
The key here is the detailed field description - it acts as an instruction to the LLM to preserve the format verbatim.
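Note that structured output travels as a JSON string inside a tool call, and JSON escaping preserves newlines, pipes, and whitespace exactly; any formatting loss comes from the LLM rephrasing, not from serialization. A quick stdlib check (no deepagents required) illustrates the round-trip:

```python
import json

# A markdown table as the model would place it in the raw_output field.
table = "| Section | Tasks |\n|---|---|\n| Risks | review auth |"

# Simulate the tool-call arguments round-trip: serialize, then parse.
args = json.dumps({"skill_name": "architecture-analysis", "raw_output": table})
parsed = json.loads(args)

# Newlines, pipes, and spacing survive byte for byte.
assert parsed["raw_output"] == table
print(parsed["raw_output"].splitlines()[0])  # → | Section | Tasks |
```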
Solution 3: custom system prompt to prevent reformatting
If you don’t need structured output but simply want the agent to stop reformatting, add explicit instructions in the system_prompt:
```python
from deepagents import create_deep_agent

SYSTEM_PROMPT = """
CRITICAL OUTPUT RULE: When you execute a Skill, you MUST return the Skill's
output EXACTLY as produced - character for character. Do NOT:
- Rephrase or summarize the output
- Change the formatting (tables, lists, bullet points)
- Add your own commentary around the output
- Merge or restructure sections

Simply output the Skill result as-is. If the Skill produces a markdown table,
return that exact markdown table. If it produces bullet points, return those
exact bullet points.
"""

agent = create_deep_agent(
    system_prompt=SYSTEM_PROMPT,
    skills=["/skills/"],
)
```
This approach relies on prompt engineering and is less reliable than structured output, but it’s the simplest change.
Solution 4: middleware for dynamic per-skill output handling
For the most flexibility (5 skills with different formats), you can write custom middleware that dynamically selects the response format based on which skill was activated:
```python
from deepagents import create_deep_agent
from langchain.agents.middleware import AgentMiddleware, ModelRequest, ModelResponse
from langchain.agents.structured_output import ToolStrategy
from pydantic import BaseModel, Field

class TableOutput(BaseModel):
    table_markdown: str = Field(description="Complete markdown table output, preserved exactly")

class ListOutput(BaseModel):
    list_content: str = Field(description="Complete structured list output, preserved exactly")

class PlainTextOutput(BaseModel):
    text: str = Field(description="Complete plain text output, preserved exactly")

SKILL_FORMAT_MAP = {
    "architecture-analysis": ToolStrategy(schema=TableOutput),
    "requirements-list": ToolStrategy(schema=ListOutput),
    "summary-writer": ToolStrategy(schema=PlainTextOutput),
}

class SkillOutputMiddleware(AgentMiddleware):
    def wrap_model_call(self, request, handler):
        # Scan messages newest-first for a mention of a known skill
        matched_format = None
        for msg in reversed(request.messages):
            content = getattr(msg, "content", "")
            if isinstance(content, str):
                for skill_name, fmt in SKILL_FORMAT_MAP.items():
                    if skill_name in content:
                        matched_format = fmt
                        break
            if matched_format:
                break
        # Swap in the matched skill's response format before the model call
        if matched_format:
            request = request.override(response_format=matched_format)
        return handler(request)

agent = create_deep_agent(
    skills=["/skills/"],
    middleware=[SkillOutputMiddleware()],
    response_format=ToolStrategy(schema=PlainTextOutput),  # base format; middleware overrides
)
```
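The heart of the middleware is the newest-first skill-name scan. Extracted as a plain function (stdlib only, with dicts standing in for message objects so it runs without deepagents installed), the logic looks like this:

```python
# Stand-in for the middleware's scan: walk messages newest-first and
# return the value mapped to the first skill name found in a message.
def match_skill_format(messages, format_map):
    for msg in reversed(messages):
        content = msg.get("content", "")
        if isinstance(content, str):
            for skill_name, fmt in format_map.items():
                if skill_name in content:
                    return fmt
    return None

format_map = {
    "architecture-analysis": "TableOutput",
    "requirements-list": "ListOutput",
}
messages = [
    {"role": "user", "content": "Analyze project X"},
    {"role": "assistant", "content": "Reading skills/architecture-analysis/SKILL.md"},
]
print(match_skill_format(messages, format_map))  # → TableOutput
print(match_skill_format([{"role": "user", "content": "hi"}], format_map))  # → None
```

Scanning newest-first matters: if the conversation touched several skills, the most recently activated one wins.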
Which one to use?
| Approach | Reliability | Complexity | Best For |
|---|---|---|---|
| Solution 1: Pydantic schema per skill type | High | Medium | When you need typed, validated output |
| Solution 2: Single raw_output string field | High | Low | Preserving exact formatting for all skills |
| Solution 3: System prompt only | Medium | Very Low | Quick fix, prototype stage |
| Solution 4: Custom middleware | Very High | High | Multiple skills with different schemas |
One caveat: `structured_response` is excluded from subagent output (source: deepagents/middleware/subagents.py, lines 121-128). If your skills run inside subagents, the structured response won't propagate to the parent. You'd need to include the data explicitly in the ToolMessage.
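One way to carry the data across the subagent boundary (a sketch of the idea, not a deepagents API) is to serialize the validated schema into the tool message's content as JSON and parse it on the parent side. With a plain dict standing in for the ToolMessage:

```python
import json

# Validated output from the subagent (stand-in for the pydantic instance).
structured = {
    "skill_name": "architecture-analysis",
    "raw_output": "| Section | Tasks |\n|---|---|",
}

# Embed it in the tool message content, since structured_response itself
# is excluded from subagent output.
tool_message = {"role": "tool", "content": json.dumps(structured)}

# Parent side: recover the structured data from the message content.
recovered = json.loads(tool_message["content"])
print(recovered["skill_name"])  # → architecture-analysis
```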
ToolStrategy vs ProviderStrategy: ToolStrategy works with any model that supports tool calling. ProviderStrategy uses the provider’s native structured output API (e.g., OpenAI’s JSON mode). Both are imported from langchain.agents.structured_output.
Skills don’t own the output - they are pure documentation injected into the system prompt. The agent reads them and generates its own response. This is by design (progressive disclosure pattern). The response_format parameter is the official mechanism to control output structure.
Links: