Built a tamper-evident audit log for LangChain agents (early users welcome)

Hi all — I wanted to share a tool I just finished building that may be useful for folks running LangChain agents in production.

AI Action Ledger is a tamper-evident, append-only audit log for AI agent actions.

It lets you:

  • Log LLM calls, tool usage, and agent steps

  • Cryptographically hash-chain events so edits are detectable

  • Store hashes + metadata by default (no raw prompts unless you opt in)

  • Verify action chains later via API or dashboard
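To make the hash-chaining and verification concrete, here's a minimal sketch of the idea in Python. This is an illustration of the general technique, not the project's actual implementation; the function names (`chain_events`, `verify`) and record layout are invented for the example.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link

def chain_events(events, genesis=GENESIS):
    """Hash-chain a list of event dicts: each record's hash covers the
    previous record's hash, so editing any event breaks every later link."""
    chain, prev = [], genesis
    for event in events:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        chain.append({"event": event, "prev": prev, "hash": digest})
        prev = digest
    return chain

def verify(chain, genesis=GENESIS):
    """Recompute every link; return the index of the first broken link,
    or -1 if the whole chain checks out."""
    prev = genesis
    for i, rec in enumerate(chain):
        payload = json.dumps(rec["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return i
        prev = expected
    return -1

# Any edit to a logged event is detectable on re-verification:
chain = chain_events([{"type": "llm_call"}, {"type": "tool_use"}])
assert verify(chain) == -1      # intact
chain[0]["event"]["type"] = "edited"
assert verify(chain) == 0       # first link now fails
```

The point is that tampering is *evident*, not prevented: an attacker who rewrites one event must rewrite every subsequent hash, which an independently stored head hash defeats.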

I’ve added a LangChain callback so integration is essentially drop-in.
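For a feel of what a drop-in callback looks like, here's a sketch of the shape such a handler might take. This is not the project's actual callback: the class name and storage details are invented, and the real integration would subclass `langchain_core.callbacks.BaseCallbackHandler` (omitted here so the sketch stands alone); only the hook names (`on_llm_start`, `on_tool_start`) mirror LangChain's.

```python
import hashlib
import json
import time

class LedgerCallbackSketch:
    """Illustrative callback: records one ledger event per agent step,
    storing only a hash of the payload unless raw storage is opted into."""

    def __init__(self, store_raw=False):
        self.store_raw = store_raw  # opt in to keeping raw prompts/inputs
        self.events = []

    def _log(self, kind, payload):
        record = {"ts": time.time(), "kind": kind}
        if self.store_raw:
            record["payload"] = payload
        else:
            # Default: hash + metadata only, never the raw text.
            body = json.dumps(payload, sort_keys=True)
            record["payload_sha256"] = hashlib.sha256(body.encode()).hexdigest()
        self.events.append(record)

    # Hook names mirror LangChain's BaseCallbackHandler.
    def on_llm_start(self, serialized, prompts, **kwargs):
        self._log("llm_start", {"prompts": prompts})

    def on_tool_start(self, serialized, input_str, **kwargs):
        self._log("tool_start", {"input": input_str})
```

In real use you'd pass the handler via LangChain's `callbacks=[...]` parameter; the recorded events would then feed the hash chain described above.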

Use cases I had in mind:

  • Debugging agent behavior (“what actually happened?”)

  • Incident review when an agent does something unexpected

  • Internal accountability / audit trails

  • Giving customers verifiable logs of AI actions

Important limitations (being explicit):

  • This does not prevent actions — it only records them

  • Completeness isn’t guaranteed: the chain only attests to events that were actually logged

  • No compliance claims (SOC2, etc.)

Repo (self-hosted, Docker Compose):
https://github.com/Jreamr/ai-action-ledger

I’m looking for early users who:

  • Are running LangChain agents

  • Want better visibility / auditability

  • Are open to giving feedback

Happy to answer questions or tailor the integration if useful.