Hello there!
I feel like LangChain is kinda bloated. Why do we need so many abstractions for simple interactions with an LLM? Why can't we just write our prompt normally and expect everything to work? What makes LangChain special?
Thanks!
Hi @Sanctious,
Do you have any code examples where LangChain feels bloated? That would make it easier to discuss.
In general, if you’re just calling one model with a static prompt, you don’t need LangChain or LangGraph. They shine when you need reliability, composition, tooling, and production features beyond a single prompt/response.
When direct prompts are enough
- Simple prototype: One model call, no tools, no memory, no retries/streaming/observability. Use the provider SDK directly.
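For example, a one-off call straight through the provider SDK (a minimal sketch, assuming OpenAI's official `openai` Python package, an `OPENAI_API_KEY` in the environment, and an illustrative model name):

```python
from openai import OpenAI

# One model call, no tools, no memory: the SDK alone is plenty here.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize LCEL in one sentence."}],
)
print(response.choices[0].message.content)
```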
What LangChain adds (use only what you need)
- Standardized model interfaces and provider agnosticism: swap models or providers without rewriting app logic; consistent primitives for chat models, retrievers, tools, and vector stores (see the sketch after this list).
- LCEL/Runnables for composition and control: compose steps, parallel map/reduce, retries/timeouts, token streaming, and tool calling without bespoke glue code.
- Structured outputs: ask for JSON/Pydantic-typed outputs and have them validated.
- Caching and rate-limit helpers: reduce cost and latency for repeated calls.
- Observability and evaluation: tracing, datasets, comparisons, and regression checks via LangSmith.
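To make the first few points concrete, here's a minimal sketch (assuming `langchain`, `langchain-openai`, and `pydantic` are installed; the model string, prompt, and `Review` schema are invented for the example):

```python
from pydantic import BaseModel
from langchain.chat_models import init_chat_model
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Provider agnosticism: swap the "provider:model" string (say, to an
# Anthropic model) without rewriting anything downstream.
model = init_chat_model("openai:gpt-4o-mini")

# Caching: repeated identical calls hit the cache instead of the API.
set_llm_cache(InMemoryCache())

# LCEL composition: prompt -> model -> parser, with retries attached
# declaratively rather than via hand-rolled try/except loops.
prompt = ChatPromptTemplate.from_template("Write a one-line slogan for {product}.")
chain = prompt | model.with_retry(stop_after_attempt=3) | StrOutputParser()
print(chain.invoke({"product": "a rubber duck"}))

# Structured outputs: request a Pydantic-typed result and get validation
# for free instead of fishing JSON out of raw text yourself.
class Review(BaseModel):
    sentiment: str
    score: float

structured = model.with_structured_output(Review)
review = structured.invoke("Review this: 'Great duck, squeaks loudly.'")
print(review.sentiment, review.score)
```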
What LangGraph adds (for agentic/stateful workflows)
- Explicit control flow with graphs: nodes do work; edges route what happens next. Easier to reason about agent loops, branches, and multi-actor systems than ad-hoc while/if logic (see the sketch after this list).
- State with reducers/channels: typed shared state updated across nodes with clear merge semantics.
- Durable execution and checkpointing: resume from the last good step after a failure; progress is persisted per node.
- Human-in-the-loop (interrupts): pause at defined points, collect feedback or approval, then continue deterministically.
- Streaming of intermediate steps: surface partial results and reasoning steps to the UI in real time.
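And a rough LangGraph sketch tying those ideas together (assuming `langgraph` is installed; the `State` shape, node, and routing condition are invented for the example):

```python
import operator
from typing import Annotated
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    # Reducer: list updates from nodes are appended, not overwritten.
    steps: Annotated[list[str], operator.add]
    count: int

def work(state: State) -> dict:
    # A node does some work and returns a partial state update.
    return {"steps": [f"step {state['count']}"], "count": state["count"] + 1}

def route(state: State) -> str:
    # Explicit edge logic replaces ad-hoc while/if loops.
    return "work" if state["count"] < 3 else END

builder = StateGraph(State)
builder.add_node("work", work)
builder.add_edge(START, "work")
builder.add_conditional_edges("work", route)

# Checkpointing: progress persists per step, so a failed run can resume;
# adding interrupt_before=["work"] would pause here for human approval.
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "demo-thread"}}
# Streaming of intermediate steps: each node's update arrives as it runs.
for update in graph.stream({"steps": [], "count": 0}, config):
    print(update)
```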