What gets captured
The `AgentDbgLangChainCallbackHandler` hooks into LangChain’s built-in callback system and records two event types:
- LLM calls — triggered by `on_llm_start`/`on_chat_model_start` → `on_llm_end`. Records model name, prompt, response text, and token usage.
- Tool calls — triggered by `on_tool_start` → `on_tool_end`/`on_tool_error`. Records tool name, input arguments, result, and error status.
LLM errors are recorded as `LLM_CALL` events with `status="error"`. Tool errors are recorded as `TOOL_CALL` events with `status="error"` and include the error message.
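To make the shape of these records concrete, here is a plain-Python sketch. `RecordingHandler`, `Event`, and the method signatures below are illustrative stand-ins that mimic the described behavior, not AgentDbg's actual internals:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Event:
    # Stand-in for a recorded event; the fields mirror the doc's description.
    kind: str                 # "LLM_CALL" or "TOOL_CALL"
    status: str = "ok"        # becomes "error" when the call raised
    data: dict = field(default_factory=dict)

class RecordingHandler:
    """Toy callback handler that records LLM and tool calls as events."""
    def __init__(self):
        self.events: list[Event] = []

    def on_llm_start(self, model: str, prompt: str) -> None:
        self._pending = Event("LLM_CALL", data={"model": model, "prompt": prompt})

    def on_llm_end(self, response: str, tokens: int) -> None:
        self._pending.data.update(response=response, tokens=tokens)
        self.events.append(self._pending)

    def on_tool_start(self, tool: str, args: dict) -> None:
        self._pending = Event("TOOL_CALL", data={"tool": tool, "args": args})

    def on_tool_end(self, result: Any) -> None:
        self._pending.data["result"] = result
        self.events.append(self._pending)

    def on_tool_error(self, error: Exception) -> None:
        # Tool errors keep the event but flag it and attach the message.
        self._pending.status = "error"
        self._pending.data["error"] = str(error)
        self.events.append(self._pending)
```

A start callback opens a pending event, and the matching end or error callback finalizes and appends it, which is why paired hooks like `on_tool_start`/`on_tool_end` are listed together above.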
Installation
Install AgentDbg with the LangChain extra, which installs `langchain-core` alongside AgentDbg. If you import the integration without `langchain-core` present, you get a clear `ImportError` with install instructions.
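Such an import-time guard is commonly implemented along these lines; `require_dependency` and its message are a hypothetical helper for illustration, not AgentDbg's actual code:

```python
import importlib

def require_dependency(module: str, hint: str) -> None:
    """Import `module`, or fail with a clear ImportError carrying install instructions."""
    try:
        importlib.import_module(module)
    except ImportError as exc:
        # Chain the original error so the real failure stays visible.
        raise ImportError(f"{module} is required for this integration. {hint}") from exc
```

An integration module would call this once at import time, so the failure happens early and with an actionable message rather than as a bare `ModuleNotFoundError` deep in a chain run.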
Setting up the handler
Wrap your entrypoint with @trace

The handler requires an active AgentDbg run. Use the `@trace` decorator on the function that calls your chain. Alternatively, set `AGENTDBG_IMPLICIT_RUN=1` in your environment to start a run automatically without the decorator.

Full example
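The original code listing did not survive extraction, so the following self-contained sketch shows the overall shape instead. `trace`, `Handler`, and `FakeListLLM` below are stand-ins that mimic, rather than import, the AgentDbg and LangChain APIs:

```python
import functools

_active_run = None  # stand-in for AgentDbg's notion of an active run

def trace(fn):
    """Stand-in for AgentDbg's @trace: opens a run for the duration of the call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        global _active_run
        _active_run = {"events": []}
        try:
            return fn(*args, **kwargs)
        finally:
            _active_run = None
    return wrapper

class Handler:
    """Stand-in for the LangChain callback handler: records LLM events."""
    def __init__(self):
        self.events = []
    def on_llm_start(self, prompt):
        if _active_run is None:
            raise RuntimeError("no active run: wrap your entrypoint with @trace")
        self.events.append(("llm_start", prompt))
    def on_llm_end(self, response):
        self.events.append(("llm_end", response))

class FakeListLLM:
    """Fake LLM that replays canned responses, so no API key or network is needed."""
    def __init__(self, responses):
        self.responses = list(responses)
    def invoke(self, prompt, callbacks=()):
        for cb in callbacks:
            cb.on_llm_start(prompt)
        response = self.responses.pop(0)
        for cb in callbacks:
            cb.on_llm_end(response)
        return response

handler = Handler()

@trace
def main():
    llm = FakeListLLM(["Paris"])
    return llm.invoke("Capital of France?", callbacks=[handler])
```

Calling `main()` records a start/end pair on the handler, while calling the LLM outside any traced function fails fast, matching the "requires an active AgentDbg run" rule above.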
The example uses a fake LLM, so it runs without any API key or network calls.

Guardrails with LangChain
All AgentDbg guardrails work with the callback handler. When a guardrail fires (for example, `stop_on_loop` detecting a repeated pattern), the handler immediately stops the run—bypassing LangChain’s `except Exception` error handling and LangGraph’s graph executor—so no further token-spending calls are made.
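One way an abort can cut through broad `except Exception` handlers is to raise a `BaseException` subclass, which such handlers cannot swallow. The mechanism and every name below are assumptions for illustration, not necessarily how AgentDbg implements it:

```python
class GuardrailStop(BaseException):
    """BaseException subclass: `except Exception` blocks cannot catch it."""

class LoopGuardHandler:
    """Toy guardrail: aborts when the same prompt appears twice in a row."""
    def __init__(self):
        self.last_prompt = None
    def on_llm_start(self, prompt):
        if prompt == self.last_prompt:
            raise GuardrailStop(f"repeated prompt: {prompt!r}")
        self.last_prompt = prompt

def run_agent(handler, prompts):
    """Framework-style loop with a broad error handler around each step."""
    calls = 0
    for p in prompts:
        try:
            handler.on_llm_start(p)  # guardrail checkpoint
            calls += 1               # a real LLM call would spend tokens here
        except Exception:
            continue  # broad handler, yet it does NOT catch GuardrailStop
    return calls
```

When the guardrail fires, `GuardrailStop` propagates straight out of `run_agent` despite the `except Exception` clause, so no later iterations (and no later token-spending calls) happen.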
Reusing the handler across runs
If you call your traced function multiple times with the same handler instance, call `handler.reset()` between runs to clear the abort state:
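The lifecycle that `reset()` clears can be sketched with a stand-in class (hypothetical names throughout; only `reset()` and `abort_exception` come from this doc):

```python
class ReusableHandler:
    """Stand-in showing the abort state that reset() clears between runs."""
    def __init__(self):
        self.abort_exception = None
    def abort(self, exc):
        # A fired guardrail would store its exception here.
        self.abort_exception = exc
    def reset(self):
        # Clear leftover abort state before the next traced call.
        self.abort_exception = None
```

A run that was aborted leaves state on the handler; calling `reset()` returns it to a clean slate, so the next run is not treated as already aborted.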
Checking for aborts after invoke
As a defensive fallback, the handler stores any guardrail exception on `handler.abort_exception`. Use `handler.raise_if_aborted()` to re-raise it if needed after an `invoke()` call returns:
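A stand-in sketch of that fallback (the class and the `record_abort` hook are illustrative; only `abort_exception` and `raise_if_aborted()` come from this doc):

```python
class AbortAwareHandler:
    """Stand-in for the abort bookkeeping behind raise_if_aborted()."""
    def __init__(self):
        self.abort_exception = None
    def record_abort(self, exc):
        # Hypothetical hook a guardrail would call when it fires.
        self.abort_exception = exc
    def raise_if_aborted(self):
        # Re-raise the stored guardrail exception, if any; a no-op otherwise.
        if self.abort_exception is not None:
            raise self.abort_exception
```

After `invoke()` returns, calling `handler.raise_if_aborted()` surfaces an abort that an intermediate layer may have swallowed, and does nothing when the run completed normally.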
The handler requires an active AgentDbg run. Wrap your entrypoint with `@trace`, use `traced_run(...)`, or set `AGENTDBG_IMPLICIT_RUN=1` in your environment.