Observer-Mode Integration: PLD Runtime v2.0 × LangGraph + OpenAI Assistants API

Hi everyone — I’ve been exploring a runtime governance model called PLD (Phase Loop Dynamics), and I wanted to share an integration prototype that may be useful for evaluation, observability, and loop-level stability analysis.

This example attaches PLD to a LangGraph agent in a non-invasive “observer-only” mode — meaning:

:cross_mark: No routing or behavioral control

:cross_mark: No intervention / repair logic

:cross_mark: No modification of LangGraph state or agent decisions

Instead:

:white_check_mark: Each turn is observed

:white_check_mark: A runtime signal (e.g., continue, drift, session_closed) is emitted

:white_check_mark: A PLD-compliant structured event is logged via RuntimeSignalBridge + RuntimeLoggingPipeline

:white_check_mark: Output is stored as JSONL (with OTel export planned)
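To make "observer-only" concrete, here is a minimal sketch of the pattern (not the repo's actual code; the decorator and node names are hypothetical): the node callable is wrapped so each turn is recorded on a side channel, while the node's inputs, outputs, and state pass through untouched.

```python
import functools

def observe(log):
    """Wrap a node callable so each call is recorded without
    altering inputs, outputs, or any shared state."""
    def decorator(node_fn):
        @functools.wraps(node_fn)
        def wrapper(state):
            result = node_fn(state)  # the agent runs exactly as before
            log.append({"node": node_fn.__name__,
                        "turn_observed": True})  # side-channel record only
            return result  # passed through unmodified
        return wrapper
    return decorator

# Hypothetical usage with a stand-in LangGraph-style node:
events = []

@observe(events)
def agent_node(state):
    return {**state, "reply": "ok"}

out = agent_node({"msg": "hi"})
```

The point of the sketch is the invariant: removing the wrapper changes nothing about agent behavior, only about what gets logged.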

---

:puzzle_piece: Why?

Multi-turn LLM agents often fail not because of missing knowledge — but because behavior destabilizes over time:

repeated tool calls

reasoning drift

inconsistent tone/agency

temporary correction, then relapse

PLD introduces a runtime loop model:

Drift → Repair → Reentry → Continue → Outcome

This prototype logs only three of these stages:

Continue | Drift | Outcome

…to establish a measurable trace before introducing intervention or control.
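One way to model that logged subset (illustrative only; the repo may use different names or types) is a small enum plus a toy classification rule:

```python
from enum import Enum

class PldSignal(Enum):
    """Runtime-phase signals this prototype logs. The full PLD loop
    (Drift -> Repair -> Reentry -> Continue -> Outcome) is wider;
    only these stages are observed here."""
    CONTINUE = "continue"
    DRIFT = "drift"
    OUTCOME = "session_closed"

def classify_turn(tool_error: bool, closed: bool) -> PldSignal:
    # Toy heuristic for illustration: a closed session is the outcome,
    # a tool error counts as drift, anything else is continue.
    if closed:
        return PldSignal.OUTCOME
    if tool_error:
        return PldSignal.DRIFT
    return PldSignal.CONTINUE
```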

---

:open_file_folder: Example Structure

A new example has been added here:

:right_arrow: https://github.com/kiyoshisasano/agent-pld-metrics/tree/main/examples/langgraph_assistants

```
examples/langgraph_assistants/
├── README.md
├── run.py
├── graph.py
├── agent_node.py
├── config.yaml
└── pld_runtime_integration.py
```

Key principle:

All PLD events are generated strictly through the official RuntimeSignalBridge.build_event(…) API.

No schemas or event dicts are manually constructed.

---

:play_button: How to Run

```
git clone https://github.com/kiyoshisasano/agent-pld-metrics.git
cd agent-pld-metrics
pip install -r requirements.txt
export OPENAI_API_KEY=your_key_here
python examples/langgraph_assistants/run.py
```

You’ll see:

normal conversation output

and a JSONL event trace at:

logs/langgraph_pld_demo.jsonl

Example events include:

continue_normal

tool_error (if simulated failure occurs)

session_closed
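Once the trace exists, even a few lines of analysis become possible. This snippet tallies signal types in a JSONL trace; the `signal` field name is an assumption based on the event names above, and the inline sample stands in for `logs/langgraph_pld_demo.jsonl`:

```python
import json
from collections import Counter

# Hypothetical sample; with a real run you would read
# logs/langgraph_pld_demo.jsonl instead.
sample_jsonl = """\
{"signal": "continue_normal", "turn": 1}
{"signal": "continue_normal", "turn": 2}
{"signal": "tool_error", "turn": 3}
{"signal": "session_closed", "turn": 4}
"""

def summarize(jsonl_text: str) -> Counter:
    """Count occurrences of each signal type in a JSONL trace."""
    return Counter(json.loads(line)["signal"]
                   for line in jsonl_text.splitlines() if line.strip())

counts = summarize(sample_jsonl)
```

A per-signal count like this is the simplest possible "loop-level stability" metric: a rising share of `tool_error` relative to `continue_normal` across sessions is exactly the kind of destabilization signal the prototype is meant to surface.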

---

:compass: Status

This is currently in an:

> Exploratory / Candidate Stage

Seeking implementation feedback

No assumptions yet about general adoption — the goal is simply:

> “Can runtime-phase logging help measure and diagnose multi-turn agent behavior?”

---

:man_raising_hand: Looking for Feedback

Specifically:

Would you use PLD as an external observability layer?

Should the next step explore:

soft repair?

model switching?

OTel traces?

dataset-driven evaluation?

If you’d like to test this, pair it with LangGraph traces, or compare against AgentOps / Rasa / custom telemetry, I’d love to hear your thoughts.

---

Author:

Kiyoshi Sasano

GitHub: https://github.com/kiyoshisasano/agent-pld-metrics