User story
As a Dify workflow builder, I want Dify to support an OpenAI-style external tool callback protocol, so that Chatflow orchestration can be combined with local tool execution in OpenClaw.
Why this matters:
- Better capability composition: Chatflow can orchestrate model/tool strategy, while OpenClaw executes tools locally.
- Cost efficiency: route requests to different LLM nodes based on task size and complexity, reducing unnecessary token usage.
- Higher flexibility: each LLM node can optionally serve as an OpenClaw-facing model endpoint.
- More reliable tool-calling conversations across multi-turn and interrupted/resumed runs.
What has been implemented in PR #32296:
- Added tool definitions, tool_choice, and tool_results support in chat/advanced_chat APIs.
- Added structured/openclaw_text tool call modes.
- Streamed tool call chunks through SSE events.
- Persisted and reconstructed tool_calls in message memory.
- Added pause/resume workflow handling for pending tool callbacks.
- Fixed handling of orphaned tool_calls in the LLM node and improved tool-message parsing in TokenBufferMemory.
- Added LLM-node-level controls such as external_tool_callback_enabled and max_tool_call_rounds.
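To make the API shape concrete, here is a minimal sketch of what an OpenAI-style tool-calling round trip could look like. The field names below (query, tools, tool_choice, tool_results, tool_call_id) follow the OpenAI convention referenced in the issue; they are illustrative assumptions, not Dify's confirmed schema.

```python
# Hedged sketch: an OpenAI-style chat request carrying tool definitions.
# Field names and endpoint shape are assumptions; Dify's actual schema may differ.

first_turn = {
    "query": "What's the weather in Berlin?",
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

# Follow-up turn: the local executor (e.g. an OpenClaw-style client) posts the
# tool output back, keyed by the tool_call_id from the streamed tool_call chunk,
# so the paused run can resume.
second_turn = {
    "tool_results": [
        {
            "tool_call_id": "call_123",  # hypothetical id from the SSE stream
            "name": "get_weather",
            "content": '{"temp_c": 18, "sky": "cloudy"}',
        }
    ],
}
```

The two-payload split mirrors the pause/resume design in the PR: the first request may end with pending tool calls, and the second request carries their results back in.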
Expected outcome
- Dify can act as a robust orchestrator for OpenAI-compatible external tool callback flows.
- Better interoperability with OpenClaw-like local tool ecosystems.
- More predictable behavior in multi-turn, paused, and resumed tool-calling scenarios.
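From the caller's side, the paused/resumed tool-calling flow above can be sketched as a simple loop: stream a run, execute any requested tools locally, post the results back, and repeat until a final answer arrives. Everything here is hypothetical (the SSE event names, the post and execute_tool helpers); it only illustrates the intended control flow, not Dify's actual API.

```python
import json

def run_tool_callback_loop(post, execute_tool, payload, max_rounds=5):
    """Hedged sketch of a client-side callback loop.

    post(payload) is assumed to yield parsed SSE events as dicts;
    execute_tool(call) runs one tool locally and returns a JSON-serializable
    result. Event names ("tool_call", "message") are illustrative.
    """
    for _ in range(max_rounds):  # mirrors a max_tool_call_rounds-style cap
        pending = []
        final_text = ""
        for event in post(payload):
            if event["event"] == "tool_call":      # accumulate streamed tool_call chunks
                pending.append(event["data"])
            elif event["event"] == "message":      # collect answer text
                final_text += event["data"].get("text", "")
        if not pending:                            # no callbacks requested: run is done
            return final_text
        payload = {                                # resume the paused run with results
            "tool_results": [
                {
                    "tool_call_id": call["id"],
                    "name": call["name"],
                    "content": json.dumps(execute_tool(call)),
                }
                for call in pending
            ],
        }
    raise RuntimeError("exceeded max tool-call rounds")
```

The cap on rounds is the client-side counterpart of the max_tool_call_rounds control mentioned above: without it, a model that keeps requesting tools would loop indefinitely.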
Additional context
PR: "feat: Add Openai response tool call supper" by JAVA-LW (langgenius/dify#32296)
I’m looking for feedback on:
- API shape and compatibility expectations
- workflow pause/resume event design
- real-world use cases from teams combining cloud LLM + local tool execution