Source code vs. LangGraph first?

Many of my previous projects, and even some current projects and new features, were built directly on vLLM’s OpenAI Completion API or various model providers’ SDKs. I even wrote numerous strategy patterns to integrate different modalities. For example, in a robot’s real-time interaction scenario, the robot needs multi-modal capabilities: after voice interaction, it pushes the input to a semantic model for reasoning and uses different sub-agents to trigger modal tasks per scenario — calling a vision model on vLLM when video understanding is needed, and combining outputs from different modal contexts into a single task output when TTS is required.

I was relatively early in adopting LangChain, and I did use it for development at the time. But I later found that using the OpenAI SDK or Completion API directly was even more streamlined and flexible. When LangGraph came along, I only briefly looked at its logic and felt the learning curve was a bit steep, so I didn’t dive deep into it. My projects already had logic similar to LangGraph, though the level of implementation varied across projects. Looking back now, the code and structure of those projects appear quite messy.

Here’s a simple example: my tasks were previously all triggered through MCP, including structured outputs, tool calling, and even context integration. When I needed to ask the agent about my meeting records from recent days, it would call a time tool, query the database (either vector search or inverted index), then orchestrate and integrate the outputs from different tools with deep-thinking context before responding. When multiple tasks came in simultaneously, I had the concept of sub-agents, but unlike Deep Agents where each sub-agent has its own isolated context, all my agents shared a single context, managed through key-value numbered asynchronous operations. At the same time, to achieve different state effects, I designed an instance-level variable orchestration state machine. The state transitions were often just to escape async callback hell!
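As a minimal sketch of that shared-context idea (hypothetical names in plain asyncio, not the original project code), all sub-agents write into one store whose entries are managed through numbered key-value operations:

```python
import asyncio

# Hypothetical sketch: a single context shared by every sub-agent,
# with entries tracked via auto-numbered key-value operations.

class SharedContext:
    """One store for all sub-agents; keys are numbered per task."""

    def __init__(self):
        self._store = {}
        self._counter = 0
        self._lock = asyncio.Lock()

    async def put(self, value):
        async with self._lock:
            self._counter += 1
            key = f"task-{self._counter}"
            self._store[key] = value
            return key

    async def get(self, key):
        async with self._lock:
            return self._store[key]

async def subagent(ctx, modality, payload):
    # Unlike Deep Agents, where each sub-agent gets an isolated context,
    # every sub-agent here writes into the same shared store.
    key = await ctx.put(f"{modality}: {payload}")
    return await ctx.get(key)

async def main():
    ctx = SharedContext()
    return await asyncio.gather(
        subagent(ctx, "vision", "frame description"),
        subagent(ctx, "tts", "spoken reply"),
    )

print(asyncio.run(main()))
```

The lock keeps concurrent key numbering consistent, but nothing isolates one sub-agent’s entries from another’s — which is exactly where the complexity piles up as tasks multiply.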

As the number of projects grew, many problems surfaced. The code structure I wrote at the time was overly complex — or rather, chaotic — violating the original intention of keeping things simple (even though it was flexible).

Now, with new technologies like skills and AI coding, I’ve tried them myself and found the results quite good. I also noticed Deep Agents and realized it was exactly what I needed — both now and before!

I recently started using Deep Agents in my projects, and the results have been excellent. I no longer need to reuse my messy legacy code. After a brief look at the Deep Agents package source code, I noticed that when you use synchronous calls externally, it internally resolves the blocking issue through a thread pool. The code logic is also very elegant.
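The general “sync outside, async inside” pattern can be sketched like this (a minimal plain-Python illustration with hypothetical names, not the actual Deep Agents source): the synchronous entry point hands the coroutine to a worker thread running its own event loop, so the caller can block on the result without freezing any loop already running in its own thread.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch of a synchronous wrapper around an async core:
# the blocking .result() call happens in the caller's thread, while the
# coroutine runs on a dedicated worker thread with its own event loop.

_executor = ThreadPoolExecutor(max_workers=1)

async def _arun(prompt: str) -> str:
    await asyncio.sleep(0.05)  # stands in for an async model call
    return prompt.upper()

def invoke(prompt: str) -> str:
    # asyncio.run on the worker thread creates and tears down an event
    # loop just for this call.
    return _executor.submit(asyncio.run, _arun(prompt)).result()

print(invoke("hello"))  # → HELLO
```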

Using Deep Agents, I quickly built a streamlined yet flexible AI coding tool combined with vLLM (using ChatOpenAI) — just like Claude Code.

What I want to say is: Deep Agents fulfilled my dream of rapid development, haha :hugs:

Can I dive directly into the Deep Agents source code to learn? Or should I first study LangGraph in depth?

When was Deep Agents released? It’s truly a remarkably “fast” package.


hi @yech

awesome story! Love it :heart_with_arrow:

IMHO your diagnosis is correct: direct-SDK flexibility scales fast at first, then complexity explodes around state, context isolation, and orchestration.
If your goal is to keep shipping while improving architecture, I would not pause for a full deep dive into LangGraph first.

Deep Agents source first, or LangGraph first?

Deep Agents first (pragmatic), then LangGraph depth in parallel - to understand what’s under the hood.

  • Deep Agents is explicitly positioned as a higher-level harness with planning, filesystem context tools, and subagent delegation already wired in
  • it is built on top of LangChain + LangGraph runtime, so you still get LangGraph fundamentals (durability, streaming, interrupts) while shipping
  • in code, create_deep_agent is assembled via LangChain’s create_agent, and returns a compiled LangGraph graph. So you are already learning LangGraph indirectly by using it

My path was like this:

  • no langchain, only raw SDKs/APIs
  • then + langchain
  • then + langgraph
  • then + deepagents

But knowing what I’ve learnt, I would have started in the reverse order :smiley:

Practical learning order:

  1. Ship with Deep Agents now (create_deep_agent) for real tasks
  2. Read LangChain create_agent internals to understand the core tool loop and middleware composition
  3. Then study LangGraph deeply (state graphs, nodes/edges, checkpointers, interrupts) when you need deterministic branching, custom runtime behavior, or latency/control tuning beyond harness defaults

When was Deep Agents released?

From PyPI history, the first published version is 0.0.1 on 2025-07-29 - deepagents · PyPI

BTW, @yech can I ask you for a huge favor?

Many developers have become convinced that frameworks for LLMs are a mistake and certainly aren’t needed right now. I think the complete opposite. Maybe it’s because I went through the whole journey from zero :smiley: but today I can’t imagine delivering AI agents without LangChain :smiley:

Could I write an article with you on my blog about your experience specifically? It would be great for others who have been swayed by the mass of influencers opposed to AI frameworks (hype, or narrow money-driven motives…? :smiley: )

Hi @yech

First of all — really enjoyed reading your journey. It’s a very relatable progression :slightly_smiling_face:

I think you’re thinking about this the right way. Direct SDKs feel fast and flexible early on, but once state, orchestration, and scaling enter the picture, complexity ramps up quickly.

If your goal is to keep shipping while improving structure, I’d suggest:

  • Start building with Deep Agents (higher-level, productive quickly)

  • Learn LangGraph in parallel to understand what’s happening under the hood

  • Go deep into LangGraph when you need tighter control, deterministic branching, or custom runtime behavior

You don’t need to pause everything to master LangGraph first. You can grow into it.

Great reflections — you’re clearly on the right path :rocket:

hi, @pawel-twardziak
My pleasure. If you need my assistance, please feel free to let me know~


hi @Bitcot_Kaushal
thank you for your help!
I think I’ll continue studying the LangChain/LangGraph series.
