Dify+webhook+poll+error

I built an asynchronous processing system using Vercel + Supabase + Dify + Clerk.
The last node in Dify uses an HTTP webhook to POST results to Supabase, which are then displayed on the frontend.

I encountered an issue with error handling:
if a Dify node reports an error, the entire process stops and no closed loop is formed.
It is possible to build an error-handling loop through error branches.
However, enabling error branches for each node is too cumbersome for complex workflows.

Is there a way to allow error information to also flow directly through the business logic webhook?

Dify has error handling for the LLM, HTTP Request, Code, and Tools nodes. You can see this link (Error Handling - Dify Docs) to learn how to use it.

Hi Sherry,

Thank you for your feedback.

Dify’s current error handling mechanism primarily focuses on how individual nodes handle errors, with the goal of keeping workflow execution smooth.

The specific scenario I’m referring to is when Dify integrates with Supabase. I need to post the overall status of a workflow (e.g., “Success”, “Failure”, “Partial Success”) to Supabase so that it can be promptly reflected on the frontend.

To achieve this, there should either be a system-level notification that requires no extra handling, or I should be able to process this via a webhook. Overall, Dify currently seems lacking in this particular scenario.

It is a common use case for Dify to serve as one component within a larger system. The absence of this status callback mechanism, while seemingly a small gap, creates significant integration headaches. To work around this, I’ve had to consider integrating third-party monitoring tools.

Alternatively, it’s possible that Dify already has this functionality and I simply haven’t discovered it yet.

Sherry_M <notifications@dify-community.discoursemail.com> wrote on Thu, Jan 15, 2026, at 17:52:

Dify has always provided a monitoring mechanism, and it can also interconnect with multiple monitoring systems.
For specific methods, you might need to ask @Dify_Forum_Helper

Your requirement essentially involves two levels:

  1. The business system needs a ‘global status callback’ (success / failure / partial success): even if an intermediate node reports an error, Supabase should still be notified.
  2. You want to avoid creating a separate error branch for each node and instead handle this at a more abstract layer.

Currently, Dify’s design primarily focuses on ‘node-level error handling + monitoring/tracing integration,’ rather than ‘automatic global status webhooks.’ Considering your scenario, you can approach this in two ways:


I. Simulate a ‘Global State Machine’ within the workflow

Although there isn’t a ready-made ‘workflow_finished’ event, you can implement the desired status callback using a simple pattern while minimizing the effort of drawing error branches for each node:

1. Create a ‘State Aggregation’ Variable

  • At the beginning of the workflow, create a global variable, for example:
    • workflow_status: initial value "Success"
    • error_messages: initial value [] or ""

2. Configure Error Branches Only on Critical Nodes

You can set error branches only for ‘critical path nodes,’ rather than all nodes, for example:

  • External HTTP call nodes
  • Critical LLM inference nodes
  • Custom Code nodes

Each error branch should do two things:

  1. Change workflow_status to "Failure" or "PartialSuccess" (depending on your business definition).
  2. Append error messages to error_messages.

This can take the form of a Code node or a ‘status update’ HTTP node:

# Pseudocode
workflow_status = "Failure"
error_messages.append(node_error_message)

This way, you don’t need to implement complex logic in the error branch; you just ‘mark the status,’ and a single node will later be responsible for the Supabase callback.
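
As a concrete example, here is a minimal sketch of such an error-branch node, assuming Dify’s Python Code node convention of a main() function whose declared inputs come from your workflow variables and whose returned dict becomes the node’s outputs; the node name and variable names are illustrative, not fixed Dify identifiers:

# Minimal sketch of an error-branch Code node (variable and node names are
# assumptions; adapt them to your own workflow).
def main(error_messages: list, node_error_message: str) -> dict:
    errors = list(error_messages or [])
    errors.append({
        "node": "fetch_user_profile",   # hypothetical name of the failing node
        "error": node_error_message,    # error text handed over by the error branch
    })
    return {
        "workflow_status": "Failure",   # or "PartialSuccess", per your own rules
        "error_messages": errors,
    }

The only job of this node is to update the two aggregation variables; the actual callback stays in the single HTTP node at the end of the workflow.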

3. Place a Unified ‘Result Reporting’ HTTP Node at the End of the Workflow

Regardless of whether there are intermediate failures, as long as you ensure that:

  • every ‘node that might fail and that you care about’ has an error branch pointing to a path that continues the workflow (e.g., converging on a ‘continue execution’ node),
  • the main path always ends at a ‘final state’ HTTP node,

This final state HTTP node can then uniformly write the results to Supabase:

POST /your-supabase-endpoint
{
  "run_id": "{{ context.run_id }}",
  "status": "{{ workflow_status }}",
  "errors": "{{ error_messages }}",
  "data": { ... content that the business needs to return to the frontend ... }
}

Key points are:

  • Error branches do not ‘terminate’ the process, but rather ‘mark the status + return to the main line’
  • The main line will always end with a webhook that writes the overall status back to Supabase

This way, even without a ‘system-level callback,’ you’ve effectively implemented one.
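
For completeness, here is a sketch of what the receiving side of that final webhook could look like, assuming the Python Supabase client (supabase-py) and a workflow_runs table whose columns mirror the payload fields above; the table, column, and environment variable names are assumptions, and in a Vercel + Next.js setup the same logic would more likely live in an API route:

# Illustrative callback handler (supabase-py; table and column names are assumptions).
import os
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"],
                         os.environ["SUPABASE_SERVICE_ROLE_KEY"])

def handle_workflow_callback(payload: dict) -> None:
    """Upsert the run's final status so the frontend can poll or subscribe to it."""
    supabase.table("workflow_runs").upsert({
        "workflow_run_id": payload["run_id"],
        "status": payload["status"],           # "Success" / "Failure" / "PartialSuccess"
        "error_summary": payload.get("errors"),
        "result": payload.get("data"),
    }, on_conflict="workflow_run_id").execute()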


II. Utilize Dify’s Monitoring / Tracing Integration

You mentioned considering third-party monitoring; Dify indeed has built-in integration with various monitoring/tracing systems (those providers you see on the right side of the screenshot).

These monitoring systems typically offer two types of capabilities:

  1. Trace / Span details for each run
    • Can record success/failure, duration, error messages, etc., for each node.
  2. Webhook / Export / API Query
    • You can configure rules within these systems:
      • Trigger a webhook when a specific App / workflow has an error
      • Or periodically pull ‘execution status’ from their API, and then your own service writes the results back to Supabase.

The characteristics of this approach are:

  • Pros: No need to add much logic in Dify, more comprehensive observation data; suitable for unified monitoring across multiple workflows and environments.
  • Cons: For your Supabase frontend, the status is ‘indirectly’ synchronized, requiring you to write an additional small service to forward status from the monitoring system.

If you are already considering third-party monitoring, then you can:

  1. Configure a provider you’re familiar with in Dify’s ‘Monitoring / Tracing’;
  2. Check whether the provider supports:
    • Creating ‘alert rules’ => triggering HTTP callbacks
    • Or providing a query API, allowing you to poll / asynchronously refresh on the Supabase side (see the sketch below).
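
If you go the polling route, the forwarder can stay very small; in the sketch below, fetch_recent_runs() is only a placeholder for whatever query API your chosen provider actually exposes, and the Supabase table/column names are the same assumptions as before:

# Sketch of an "indirect callback" forwarder (the provider query is a placeholder;
# Supabase table/column names are assumptions).
import os
import time
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"],
                         os.environ["SUPABASE_SERVICE_ROLE_KEY"])

def fetch_recent_runs() -> list:
    """Placeholder: query your monitoring/tracing provider for recent workflow runs."""
    raise NotImplementedError("replace with your provider's query API")

def forward_statuses() -> None:
    for run in fetch_recent_runs():
        supabase.table("workflow_runs").upsert({
            "workflow_run_id": run["id"],
            "status": run["status"],
            "error_summary": run.get("error"),
        }, on_conflict="workflow_run_id").execute()

if __name__ == "__main__":
    while True:
        forward_statuses()
        time.sleep(60)  # polling interval; tune to how fresh the frontend needs to be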

III. Is there a ‘Built-in Global Status Callback’ currently?

Based on the information I currently have access to:

  • Dify has:
    • Node-level error handling (LLM / HTTP / Code / Tools)
    • Workflow monitoring and third-party monitoring system integration
  • But I haven’t seen something like:
    • ‘Automatically POST to a specified URL when the workflow ends (regardless of success/failure)’
      as a direct configuration entry.

Therefore:

  • The functionality you need now requires you to build a layer yourself using ‘status aggregation + final state webhook node’;
  • Or use a monitoring/tracing system for ‘indirect callbacks’.

IV. A Practical Design Suggestion (Combining with Supabase)

Given your current architecture (Vercel + Supabase + Dify + Clerk), I would recommend the following:

  1. Supabase table structure: Add fields:
    • workflow_run_id (corresponds to Dify’s run id)
    • status: 'pending' | 'running' | 'success' | 'failure' | 'partial'
    • error_summary: text / jsonb
  2. Frontend: Display execution status based only on these fields.
  3. Dify workflow:
    • At the beginning, create workflow_status="Success" and error_messages=[]
    • Add error branches to critical nodes: only ‘write status + continue’
    • The last HTTP node POSTs to Supabase:
      {
        "workflow_run_id": "{{ context.run_id }}",
        "status": "{{ workflow_status }}",
        "error_summary": "{{ error_messages }}",
        "result": { ... }
      }
      
  4. If needed, supplement with:
    • Configure alerts in the monitoring system (e.g., notify operations when a workflow’s failure rate > X%), which is ‘operational monitoring’ and decoupled from the business frontend.
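
One practical note on points 1–3: depending on how you trigger runs, Dify’s own run id may not be available to your backend until the run reports back, so one simple option (an assumption here, not the only way) is to generate your own id, store the row as ‘pending’ under it, and pass it into the workflow as an input so the final webhook echoes it back. A sketch of that trigger-side flow, with table, column, and input names as assumptions:

# Illustrative trigger-side flow (supabase-py; table, column, and input names
# are assumptions). The point is only the row lifecycle: pending -> final status.
import os
import uuid
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"],
                         os.environ["SUPABASE_SERVICE_ROLE_KEY"])

def start_run(trigger_workflow) -> str:
    """trigger_workflow is whatever you already use to start the Dify run
    (app API call, SDK, etc.); it only needs to accept the generated run id."""
    run_id = str(uuid.uuid4())

    # 1. Create the row first, so the frontend immediately shows 'pending'.
    supabase.table("workflow_runs").insert({
        "workflow_run_id": run_id,
        "status": "pending",
    }).execute()

    # 2. Kick off the Dify workflow, passing run_id as an input so the final
    #    webhook node can echo it back in the callback payload.
    trigger_workflow(workflow_run_id=run_id)
    return run_id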

If you’re willing to share your current workflow structure (which nodes are ‘critical nodes’), I can help you draw a specific topology with the ‘minimum error branches,’ aiming to get you the overall status needed for Supabase with minimal changes.

Thank you for your timely and detailed guidance. I believe I now understand how to adjust my workflow topology for testing. Here’s my refined implementation plan:

Revised Workflow Topology Strategy

  1. Create a Global Variable at the Start
    I will initialize a global variable (e.g., workflow_status) at the beginning of the workflow to track its overall state.

  2. Handle Error Branches for Critical Nodes
    All “potentially failing and relevant nodes” will have error branches that lead to a “Continue Execution” node. This ensures the workflow always progresses toward a final state, even when errors occur.

  3. Merge HTTP Webhook Endpoints
    Currently, two separate HTTP webhooks (one for success, one for error) are in use. These will be merged into a single unified HTTP endpoint that handles both outcomes.

  4. Remove Hardcoded status: "failed"
    The hardcoded status: "failed" assignment in the error branch will be removed. The final status will be dynamically determined by the workflow’s actual execution path.

With these changes, the system can reliably post the final status to Supabase for real-time frontend feedback.

I appreciate your insights. This approach should resolve the integration gaps I previously encountered and streamline error handling across the workflow.

Dify Forum Helper <notifications@dify-community.discoursemail.com> wrote on Mon, Jan 26, 2026, at 14:53:

Great! Your “topology reconstruction plan” is already very close to a stable, maintainable design: it is logically self-consistent and indeed solves the “missing closed loop” problem you initially mentioned. Let me add a couple of points on implementation details and potential pitfalls to help you go straight to a POC.


1. Your 4-point plan – A point-by-point confirmation

  1. Create a Global Variable at the Start
    Absolutely correct. I suggest you use a Code / variable assignment node near the “Start” to initialize these:
{
  "workflow_status": "Success",      // Default to success
  "error_messages": [],              // Collect error messages uniformly
  "important_flags": {}              // Optional, record which types of steps failed
}

If using Dify’s “Variables” capability, you can also use multiple independent variables, depending on your preference.


  2. Handle Error Branches for Critical Nodes
    The key points for this step are:
  • Error branches should not terminate directly, but instead:
    1. Update the global status (workflow_status to "Failure" or "PartialSuccess")
    2. Record the error description for the current node (e.g., node name, error code)
    3. Jump to a unified “Continue Execution” node (this can be a regular transit node, specifically responsible for rejoining the main flow)

In a Code node, you can write something similar to this (pseudocode):

workflow_status = "Failure"  # or "PartialSuccess"
error_messages.append({
    "node": "fetch_user_profile",
    "error": node_error_message
})

As long as all critical error branches can eventually converge to one or a few “continue” nodes, your future maintenance costs will be manageable.


  3. Merge HTTP Webhook Endpoints
    Merging the success and failure webhooks into a single, unified HTTP node is the right direction:
  • The request body should uniformly include:
    • status: {{ workflow_status }}
    • errors: {{ error_messages }}
    • And business data (e.g., results, partial results, trace id, etc.)

This way, the Supabase side only needs one endpoint to:

  • Distinguish between success/failure/partial success based on the status field
  • Decide whether to display detailed errors or provide more user prompts based on errors

  4. Remove Hardcoded status: "failed"
    This point is also crucial. You are now letting the final status be determined by the “actual progression of the overall workflow” rather than by a decision made in a local error branch:
  • In some scenarios, you might need:
    • Some nodes fail but are degradable => "PartialSuccess"
    • Some non-critical nodes fail without affecting the overall outcome => still "Success"
  • These strategies can be centrally written in:
    • A “status aggregation” Code node
    • Or simply set flags in each error branch, and the last Code node calculates the final workflow_status based on these flags

For example, the last Code node can do this (pseudocode):

if any(fatal_errors):
    workflow_status = "Failure"
elif any(non_fatal_errors):
    workflow_status = "PartialSuccess"
else:
    workflow_status = "Success"

Then send this result to Supabase.
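
Here is a slightly fuller sketch of that last Code node; the boolean ‘fatal’ flag on each error entry is an assumption about how your error branches record errors, not anything Dify-specific:

# Fuller sketch of the status-aggregation Code node (the "fatal" flag on each
# error entry is an assumption about how your error branches record errors).
def main(error_messages: list) -> dict:
    errors = list(error_messages or [])
    fatal_errors = [e for e in errors if e.get("fatal")]
    non_fatal_errors = [e for e in errors if not e.get("fatal")]

    if fatal_errors:
        workflow_status = "Failure"
    elif non_fatal_errors:
        workflow_status = "PartialSuccess"
    else:
        workflow_status = "Success"

    return {
        "workflow_status": workflow_status,
        "error_summary": errors,
    }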


2. When actually implementing, you can also pay attention to two small optimizations

2.1 Record a stable run identifier

It is recommended to write Dify’s run ID into Supabase as well:

  • The field can be named workflow_run_id or trace_id
  • Benefits:
    • When troubleshooting issues in frontend / monitoring tools, you can use this ID to directly navigate back to Dify’s run records
    • In the future, if you integrate third-party tracing (Langfuse / LangSmith / Opik, etc.), you can also link them using the same ID

2.2 Retain a fallback strategy for “no response” scenarios

Your current solution works well for calling back to Supabase when the “workflow executes normally to completion.”
In a real environment, you might also encounter a few cases where:

  • The entire workflow fails or is rate-limited before it even starts
  • An orchestration bug prevents it from reaching the final webhook node

For such extreme cases, you can implement an additional fallback on the Supabase side:

  • When creating a new record: status = 'pending'
  • When the Dify callback arrives successfully: update to success / failure / partial
  • Scheduled task (or database trigger + timeout detection):
    • If a record remains in pending status for more than X minutes without being updated => mark as timeout or unknown, and the frontend displays a “retry later” prompt

This can prevent the problem of the “frontend getting stuck in loading forever due to extreme errors.”
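
A minimal sketch of that timeout sweep, assuming supabase-py and a workflow_runs table with status and updated_at columns (names are assumptions), run from a cron job or any scheduler you already have:

# Sketch of the timeout sweep (supabase-py; table/column names are assumptions).
import os
from datetime import datetime, timedelta, timezone
from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"],
                         os.environ["SUPABASE_SERVICE_ROLE_KEY"])

TIMEOUT_MINUTES = 15  # the "X minutes" above; tune to your typical run duration

def mark_stale_runs() -> None:
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=TIMEOUT_MINUTES)
    (supabase.table("workflow_runs")
        .update({"status": "timeout"})
        .eq("status", "pending")
        .lt("updated_at", cutoff.isoformat())
        .execute())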


3. How you can validate this solution next

I suggest you first create a Minimum Viable Product (MVP):

  1. Keep only 2–3 critical nodes, and intentionally make one fail (e.g., an HTTP node that requests a URL that always returns 404).
  2. Verify 3 scenarios:
    • All nodes succeed => Supabase receives status = "Success"
    • A critical node fails but the workflow continues => Supabase receives status = "Failure" / "PartialSuccess"
    • Check if the error_messages content in the Supabase table helps you quickly pinpoint the issue.
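
Before wiring up the full workflow, you can also exercise the Supabase side in isolation by posting the three payload shapes yourself; the endpoint URL and payload contents below are assumptions that should match the callback format you settled on:

# Quick check script for the three scenarios (illustrative; the endpoint URL and
# payloads are assumptions matching the callback shape discussed above).
import requests

CALLBACK_URL = "https://your-app.example.com/api/workflow-callback"  # hypothetical

scenarios = [
    {"run_id": "test-success", "status": "Success", "errors": [], "data": {"ok": True}},
    {"run_id": "test-failure", "status": "Failure",
     "errors": [{"node": "http_fetch", "error": "404 Not Found"}], "data": {}},
    {"run_id": "test-partial", "status": "PartialSuccess",
     "errors": [{"node": "enrichment", "error": "timeout"}], "data": {"ok": True}},
]

for payload in scenarios:
    resp = requests.post(CALLBACK_URL, json=payload, timeout=10)
    print(payload["run_id"], resp.status_code)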

If you encounter any of the following during your testing:

  • Error branches not reaching the unified node
  • Global variables not updating successfully on certain paths
  • Issues with Supabase table structure / trigger design

You can post screenshots of the simplified workflow + Supabase table structure, and I can help you review it again to see if there are further areas for simplification or automation.


In summary, your approach to this adjustment is very correct:
Using “status aggregation + unified webhook + Supabase status field” to compensate for the “system-level workflow status callback” that Dify currently doesn’t provide is already a relatively elegant solution from an integration perspective.