Hi everyone — I’ve been exploring a runtime governance model called PLD (Phase Loop Dynamics), and I wanted to share an integration prototype that may be useful for evaluation, observability, and loop-level stability analysis.
This example attaches PLD to a LangGraph agent in a non-invasive “observer-only” mode, meaning:

- No routing or behavioral control
- No intervention / repair logic
- No modification of LangGraph state or agent decisions

Instead:

- Each turn is observed
- A runtime signal (e.g., `continue`, `drift`, `session_closed`) is emitted
- A PLD-compliant structured event is logged via `RuntimeSignalBridge` + `RuntimeLoggingPipeline`
- Output is stored as JSONL (with OTel export planned)
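As a minimal sketch of the observer-only pattern, here is what a turn observer could look like. All names below are illustrative, not the actual PLD API: real events are produced by `RuntimeSignalBridge.build_event(...)`, and the classification rule here is a stand-in.

```python
import json
from pathlib import Path

# Illustrative names only: in the real prototype, events are built by
# RuntimeSignalBridge.build_event(...). This sketch just mimics the shape
# of an observer that never touches agent state or routing.
def classify_turn(turn: dict) -> str:
    """Hypothetical rule: map one observed turn to a runtime signal."""
    if turn.get("closed"):
        return "session_closed"
    if turn.get("repeated_tool_calls", 0) > 2:
        return "drift"
    return "continue"

def log_event(turn: dict, path: Path) -> str:
    """Observe one turn, emit a signal, append a JSONL record.
    Pure observation: no agent decision is modified."""
    signal = classify_turn(turn)
    path.parent.mkdir(exist_ok=True)
    with path.open("a") as f:
        f.write(json.dumps({"turn_id": turn.get("id"), "signal": signal}) + "\n")
    return signal

log_path = Path("logs/demo.jsonl")
print(log_event({"id": 1}, log_path))                            # continue
print(log_event({"id": 2, "repeated_tool_calls": 3}, log_path))  # drift
```

The point of the sketch is the one-way data flow: the agent's state goes in, a signal and a log line come out, and nothing flows back.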
---
Why?
Multi-turn LLM agents often fail not because of missing knowledge, but because their behavior destabilizes over time:

- repeated tool calls
- reasoning drift
- inconsistent tone/agency
- temporary correction, then relapse
PLD introduces a runtime loop model:
Drift → Repair → Reentry → Continue → Outcome
This prototype logs only the first and last stages:

`Continue | Drift | Outcome`
…to establish a measurable trace before introducing intervention or control.
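The loop model above can be written down as a small phase enum. This is a sketch of how I read the post, not the actual PLD schema, and it makes the prototype's restriction explicit: only the entry and exit phases are logged.

```python
from enum import Enum

class Phase(Enum):
    """The five PLD runtime phases: Drift -> Repair -> Reentry -> Continue -> Outcome.
    (Illustrative encoding; the repo's own schema may differ.)"""
    DRIFT = "drift"
    REPAIR = "repair"
    REENTRY = "reentry"
    CONTINUE = "continue"
    OUTCOME = "outcome"

# The observer-only prototype emits only these phases;
# Repair and Reentry are reserved for a later, interventionist stage.
LOGGED_PHASES = {Phase.CONTINUE, Phase.DRIFT, Phase.OUTCOME}

def is_logged(phase: Phase) -> bool:
    return phase in LOGGED_PHASES
```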
---
Example Structure
A new example has been added here:

[examples/langgraph_assistants](https://github.com/kiyoshisasano/agent-pld-metrics/tree/main/examples/langgraph_assistants)
```
examples/langgraph_assistants/
├── README.md
├── run.py
├── graph.py
├── agent_node.py
├── config.yaml
└── pld_runtime_integration.py
```
Key principle:
All PLD events are generated strictly through the official `RuntimeSignalBridge.build_event(...)` API.
No schemas or event dicts are manually constructed.
---
How to Run
```bash
git clone https://github.com/kiyoshisasano/agent-pld-metrics.git
cd agent-pld-metrics
pip install -r requirements.txt
export OPENAI_API_KEY=your_key_here
python examples/langgraph_assistants/run.py
```
You’ll see:

- normal conversation output
- a JSONL event trace at `logs/langgraph_pld_demo.jsonl`
Example events include:

- `continue_normal`
- `tool_error` (if a simulated failure occurs)
- `session_closed`
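Once the trace exists, a first-pass summary needs only the stdlib. The sketch below assumes each JSONL line carries a `signal` field (the exact schema is defined by the repo, so treat the field names as assumptions) and uses an in-memory stand-in for `logs/langgraph_pld_demo.jsonl`:

```python
import json
from collections import Counter
from io import StringIO

# Stand-in for logs/langgraph_pld_demo.jsonl; field names are assumed,
# not taken from the actual PLD schema.
trace = StringIO(
    '{"signal": "continue_normal", "turn": 1}\n'
    '{"signal": "continue_normal", "turn": 2}\n'
    '{"signal": "tool_error", "turn": 3}\n'
    '{"signal": "session_closed", "turn": 4}\n'
)

# Count how often each runtime signal was emitted across the session.
counts = Counter(json.loads(line)["signal"] for line in trace)
print(counts.most_common())
# -> [('continue_normal', 2), ('tool_error', 1), ('session_closed', 1)]
```

Ratios like `drift / continue` per session are the kind of loop-level stability metric this trace is meant to enable.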
---
Status
This is currently at an:

> Exploratory / Candidate stage

I’m seeking implementation feedback, with no assumptions yet about general adoption; the goal is simply:

> “Can runtime-phase logging help measure and diagnose multi-turn agent behavior?”
---
Looking for Feedback
Specifically:

- Would you use PLD as an external observability layer?
- Should the next step explore:
  - soft repair?
  - model switching?
  - OTel traces?
  - dataset-driven evaluation?
If you’d like to test this, pair it with LangGraph traces, or compare against AgentOps / Rasa / custom telemetry, I’d love to hear your thoughts.
---
Author:
Kiyoshi Sasano
GitHub: [kiyoshisasano/agent-pld-metrics](https://github.com/kiyoshisasano/agent-pld-metrics)