We implemented Anthropic's Skills approach in LangChain v1: feedback

Hi,

The explanation of the project and the code is here: Building Claude-Style Skills in LangChain v1

Feedback I’d like to share:

  • It is confusing that create_agent requires all tools to be defined upfront.
  • I was surprised to see that the State has to be defined in the middleware. Maybe the docs for LangChain v1 could explain why that is the case in greater detail, and how to leverage this.
  • The types and objects we use there (runtime, request, handler) were quite a headache. Sometimes I was wondering where I had to get the state from, etc… My impression is that the concept of State is less central in v1 than in v0.x, yet it still adds a bit of complexity.

This is the first project I've built in plain v1 from scratch, so this may just be the normal learning curve.

I hope this feedback helps,

Have fun!

  • It is confusing that create_agent requires all tools to be defined upfront.

We might relax this requirement in the future. It hasn’t come up in the past as a feature that folks need.

  • I was surprised to see that the State has to be defined in the middleware. Maybe the docs for LangChain v1 could explain why that is the case in greater detail, and how to leverage this.

It doesn’t have to be; you can provide the state as a parameter to create_agent.
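To illustrate, here is a minimal sketch of defining a custom state schema outside the middleware and passing it to create_agent. The `state_schema` parameter name and the `loaded_skills` channel are illustrative assumptions; check the current API reference for the exact signature.

```python
# Sketch: custom agent state defined outside the middleware.
# Assumes create_agent accepts a `state_schema`-style parameter
# (illustrative; verify against the langchain v1 API reference).
import operator
from typing import Annotated, TypedDict


class SkillState(TypedDict, total=False):
    # In a real agent, `messages` would use LangChain's add_messages reducer;
    # operator.add stands in here to keep the sketch dependency-free.
    messages: Annotated[list, operator.add]
    loaded_skills: list[str]  # hypothetical custom channel for loaded skills


# agent = create_agent(model, tools, state_schema=SkillState)  # assumed usage

state: SkillState = {"messages": [], "loaded_skills": ["pdf-report"]}
print(state["loaded_skills"])
```

The point is that the schema is an ordinary typed dict you own; middleware can then read and extend these channels rather than being the only place the state is declared.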

  • The types and objects we use there (runtime, request, handler) were quite a headache. Sometimes I was wondering where I had to get the state from, etc… Somehow I think the concept of State is less important in v1 than in v0.x, but it creates a bit of complexity.

The request and handler were introduced to raise the ceiling of what can be done in the hook (e.g., retries, rate limiting, fallbacks, short-circuiting, caches, running requests in parallel with different models). All of this functionality is enabled by request and handler. So the ceiling in terms of what you can do is very high here!
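As a rough sketch of that wrap pattern (generic Python, not the exact langchain v1 signatures): the hook receives the request plus a `handler` callable that performs the model call, and can retry, fall back, cache, or short-circuit around it.

```python
# Generic sketch of the request/handler ("wrap") middleware pattern.
# `handler` executes the underlying model call; the middleware decides
# how many times (and whether) to invoke it. Names are illustrative.
def retrying_middleware(request, handler, max_attempts=3):
    last_err = None
    for _ in range(max_attempts):
        try:
            return handler(request)  # forward to the model call
        except RuntimeError as err:  # retry on (simulated) transient failures
            last_err = err
    raise last_err


calls = {"n": 0}


def flaky_model(request):
    # Stand-in for a model call that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return f"ok: {request}"


result = retrying_middleware("hello", flaky_model)
print(result)  # succeeds on the third attempt
```

The same shape supports fallbacks (call `handler` with a different model on failure) or short-circuiting (return a cached response without calling `handler` at all).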

What would you suggest we improve or which parts did you not like in this API?


“We might relax this requirement in the future. It hasn’t come up in the past as a feature that folks need.”

Please see my comment on Are dynamic tool lists allowed when using create_agent? - #8 by rhlarora84

Apart from allowing tool declaration/discovery mid-run, I think most of what can be improved relates to the docs and examples. If you can nail those, the API is pretty well designed. I simply wanted to share how it feels (for me) right now - it doesn’t necessarily mean you made incorrect choices. :wink: Thanks for answering.

“We might relax this requirement in the future. It hasn’t come up in the past as a feature that folks need.”

How? Are people really compiling each time a user talks to their agent? This very issue is preventing my company from migrating to LangChain v1 :frowning:

Is there a workaround for this?

The only workaround I can think of is a real Claude Skills approach where you have only one tool (a command line) and that command-line tool calls your other tools. I really don’t advise doing this (mistakes are likely), but until LangChain moves… I don’t know.
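A hedged sketch of that single-dispatcher workaround, with a made-up registry and command format: the agent sees exactly one tool, and that tool routes commands to an internal registry which can change mid-run without recompiling the agent.

```python
# Sketch of the single-dispatcher workaround (names and command format
# are invented for illustration). The agent is given only `run_skill`;
# new skills can be registered mid-run.
SKILL_REGISTRY = {}


def register_skill(name, fn):
    SKILL_REGISTRY[name] = fn


def run_skill(command: str) -> str:
    """The only tool the agent sees; dispatches '<skill> <arg>' commands."""
    name, _, arg = command.partition(" ")
    fn = SKILL_REGISTRY.get(name)
    if fn is None:
        return f"unknown skill: {name}"
    return fn(arg)


register_skill("echo", lambda arg: f"echo:{arg}")
print(run_skill("echo hi"))
```

This keeps the tool list static from create_agent's point of view, at the cost the post warns about: the model must format commands correctly, and routing mistakes become runtime errors instead of schema-validation errors.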

@rhlarora84 any news on this?

Thanks @Batiste for the really interesting write-up.

I’d also like to have a “skill-like” setup with LangChain, but my issue is that with this approach the LLM typically needs many more iterations (bash “ls …”, bash “cat …”, etc.), which drives up cost very quickly since each call typically bills the whole input history.

How do you handle this issue? Do you manually manipulate the history to have reduced input tokens for these small iterative steps? Or do you use caching / Claude sessions?
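For what it's worth, one common mitigation is trimming the history before each call. A minimal sketch, trimming by message count (a real setup would trim by token count, e.g. with a utility like LangChain's trim_messages, and would usually combine this with prompt caching):

```python
# Sketch: cap per-call input by keeping the system prompt plus the last
# few messages. Message tuples and the keep_last cutoff are illustrative;
# production code would count tokens, not messages.
def trim_history(messages, keep_last=4):
    system = [m for m in messages if m[0] == "system"]
    rest = [m for m in messages if m[0] != "system"]
    return system + rest[-keep_last:]


msgs = [("system", "You are helpful")] + [("user", f"step {i}") for i in range(10)]
trimmed = trim_history(msgs)
print(len(trimmed))  # 5: the system message plus the last 4 turns
```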

Hi, I learned a lot from it.
Here is what I think differs between LangChain and Claude:

  1. In the LangChain skills doc demo, skills are organized more like code than like a file system.
    It lacks infrastructure tools for a skills architecture.
    For example, to load skills from the file system, I had to write a tool named load_skills that reads the file through the language’s file API.
    For the same reason, there is a tool for calling local scripts from the command line; how to write the command and its arguments can be described in the skill content.
import { readFileSync } from "node:fs";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// Reads a skill file from disk and returns its contents to the model.
const load_skills = tool(
    async ({ skill_file_path }) => {
        try {
            return readFileSync(skill_file_path, "utf8");
        } catch (error) {
            return `${skill_file_path} not found`;
        }
    },
    {
        name: "load_skills",
        description: "Load a skill definition from the given file path.",
        schema: z.object({
            skill_file_path: z.string().describe("Path to the skill file"),
        }),
    }
);
  2. I can tell skills that when something goes wrong, they should use a throw_error tool.
    But when throw_error uses the interrupt function, I wish the Command’s goto could target throw_error; in createAgent it can’t, so I have to set goto to START, which makes the context hard to manage.