Key information re-confirmation:
- You have confirmed with Postman / curl or similar tools that: for those requests that “did not return”, the server indeed never terminates or returns any HTTP response (it’s not a client timeout).
- In the session logs, every node, including the final “Direct Reply”, shows a green checkmark, with no error messages.
- The Direct Reply is simply `{{ 代码执行 3.result }}`, and the upstream code node outputs “a very long piece of text” (the main body of a contract), not the JSON string you mentioned before.
Combined with the screenshot you just posted, it’s clear that the final Direct Reply is a large block of Chinese contract text: what ultimately gets written back over HTTP is an extremely long string with a wide mix of punctuation and special characters.
In a 1.11.0 scenario like this, the most likely situation is:
Certain specific inputs + a particularly long piece of content, at the layer of “writing the final result back to the HTTP response”, triggered a backend exception (e.g., serialization, encoding, or gateway limits).
The log interface only marks the node as “execution successful”, but the layer that actually writes HTTP failed, so the client never receives a response.
Since I currently don’t have access to your instance’s backend logs, I can only provide a set of troubleshooting and mitigation solutions more targeted at the current phenomenon. You can try them in order of priority:
First verification step: shorten the final output and see whether the problem disappears
Don’t worry about JSON yet; first verify whether the problem is caused by the content’s length or by the content itself.
1. Directly change “Direct Reply” to a fixed short text
Temporarily change it to a fixed short string:

```
调试:OK
```

(调试 means “debug”.)
Keep other nodes (LLM, Code Execution) unchanged. Then:
- Call this workflow dozens of times consecutively;
- Observe if the entire HTTP request still fails to return.
- If this no longer causes “stuck without returning”:
It can basically be confirmed that the problem is strongly related to “that large block of text being returned” (length or characters).
- If it still occasionally gets stuck:
Then it’s not a content issue, but more like a concurrency, connection, or reverse proxy configuration issue. In this case, you should check Nginx / reverse proxy / gunicorn logs.
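To run those dozens of calls mechanically, a small probe script helps separate “hung” requests from ones that fail fast. A minimal sketch (the URL, headers, and payload below are placeholders — substitute whatever you normally send to the workflow API):

```python
import json
import socket
import urllib.error
import urllib.request

def probe(url: str, payload: dict, headers: dict, n: int = 30, timeout: float = 60.0) -> dict:
    """Call the endpoint n times and count completed / hung / errored requests."""
    stats = {"ok": 0, "hung": 0, "error": 0}
    body = json.dumps(payload).encode("utf-8")
    for _ in range(n):
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json", **headers}
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                resp.read()
                stats["ok"] += 1
        except (socket.timeout, TimeoutError):
            stats["hung"] += 1  # no response within the window: the symptom we're chasing
        except urllib.error.URLError as e:
            if isinstance(e.reason, (socket.timeout, TimeoutError)):
                stats["hung"] += 1
            else:
                stats["error"] += 1
    return stats
```

A call might look like `probe("https://your-host/v1/workflows/run", {"inputs": {}, "user": "probe"}, {"Authorization": "Bearer app-..."})` — again, endpoint and auth header are placeholders for your deployment.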
2. If “fixed short text” is stable, then gradually put your content back
2.1 “Compress / Normalize” the result in the code node before replying
Your current 代码执行 3 (Code Execution 3) node likely concatenates several LLM outputs and returns one long string directly.
It’s worth adding a simple wrapper: put the actual content to be output into a named field, and cap its length:
```python
# Dify code nodes return their result from main(); final_text is assumed
# to be your concatenated complete contract text, passed in as a variable.
def main(final_text: str) -> dict:
    MAX_LEN = 8000  # arbitrary cap for now, e.g. 8k characters; adjust to your business

    text = final_text

    # Truncate if too long, and flag it (just for verification)
    truncated = False
    if len(text) > MAX_LEN:
        text = text[:MAX_LEN]
        truncated = True

    return {
        "text": text,
        "length": len(final_text),
        "truncated": truncated,
    }
```
Then have the “Direct Reply” node output only:

```
{{ nodes["代码执行 3"].outputs.text }}
```
Observe two things:
- Whether the HTTP non-return situation disappears after using this “text field with an upper limit”;
- Whether requests where `truncated == True` also consistently stop hanging; if they do, it strongly suggests that “overly long content” is triggering the problem at some layer.
If your business absolutely requires returning the entire long text, then consider splitting it (see next point).
3. Try “segmenting the return” to see if the problem only occurs with extreme lengths
Make another version (if step 2 proves it’s related to length):
In the code node, break the long text into an array:
```python
def main(final_text: str) -> dict:
    CHUNK_SIZE = 2000

    chunks = []
    for i in range(0, len(final_text), CHUNK_SIZE):
        chunks.append(final_text[i:i + CHUNK_SIZE])

    return {
        "chunks": chunks,
        "total_len": len(final_text),
        "chunk_count": len(chunks),
    }
```
Change “Direct Reply” to return a JSON string describing these chunks, instead of stuffing in the entire large text directly:

```
{{ nodes["代码执行 3"].outputs | tojson }}
```
If this also remains stable, it means that “single-field ultra-long text” is what’s truly causing trouble, not the process itself.
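If you go this route, the caller rebuilds the original text from the chunk envelope. A minimal sketch, assuming the JSON shape produced by the chunking code node above:

```python
import json

def reassemble(body: str) -> str:
    """Rebuild the full text from the chunked JSON envelope returned by the workflow."""
    payload = json.loads(body)
    text = "".join(payload["chunks"])
    # Sanity-check against the declared length before trusting the result
    assert len(text) == payload["total_len"], "chunk set is incomplete"
    return text
```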
You can then, in your own business:
- Either only return meta information + storage ID in Dify (e.g., put the full text into your own storage and return a URL / key);
- Or change to multi-turn conversations to return in batches.
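The “meta information + storage ID” variant can be sketched like this (the in-memory dict is a stand-in for your real storage — an object store, cache, or DB table — and all names here are hypothetical):

```python
import hashlib

STORE = {}  # stand-in for real storage (S3, Redis, a DB table, ...)

def store_and_reference(final_text: str) -> dict:
    """Persist the full text; reply with a small, fixed-size envelope only."""
    key = hashlib.sha256(final_text.encode("utf-8")).hexdigest()[:16]
    STORE[key] = final_text
    return {
        "storage_key": key,           # caller fetches the full text with this
        "length": len(final_text),
        "preview": final_text[:200],  # short excerpt for the chat reply itself
    }
```

The HTTP response then stays tiny and uniform regardless of how long the contract is, which sidesteps the suspected write-back problem entirely.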
4. Check gateway / reverse proxy / timeout and body size limits
Because you have already confirmed:
- On the Dify server side, nodes “seem” to finish normally;
- But the HTTP link just doesn’t terminate or return.
Besides application bugs, another common cause for such problems is intermediate layer restrictions:
- Reverse proxies such as Nginx / Traefik:
  - `proxy_read_timeout`;
  - `client_max_body_size`;
  - etc.
- The backend WSGI / ASGI server (gunicorn, uvicorn):
  - worker timeout;
  - per-response body size / buffering limits.
If you installed using the official docker-compose / helm, it is recommended to:
- Check the logs of the gateway container / Nginx container to see if there are any obvious errors during the “no return” instances;
- Increase relevant timeout times and body limits, then test again.
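For Nginx, the relevant directives look roughly like this (the values are placeholders to tune for your traffic, and the exact server/location block depends on how your compose/helm setup fronts the Dify API):

```nginx
# Inside the server/location block that proxies to the Dify API container
client_max_body_size  16m;     # request body limit
proxy_read_timeout    300s;    # how long to wait for the upstream response
proxy_send_timeout    300s;
proxy_buffering       off;     # avoid buffering very large / streamed responses
```

After changing these, reload Nginx and re-run the same failing calls to see whether the behavior shifts.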
5. If convenient, describe the characteristics of one successful and one failed “input”
No need to paste the actual full contract text, just state the visible differences between the two types of calls:
- For example:
- For the times that returned: text length was about 2–3k characters;
- For the times that didn’t return: text was 1–20k characters, containing many special punctuation marks, parentheses, blank lines, etc.;
- Or:
- The problem only occurs when certain LLM branches produce “particularly long paragraphs.”
I can use these characteristics to further determine whether it’s a length issue or whether certain special characters/encodings are more likely to trigger the problem (e.g., certain invisible control characters, ultra-long continuous lines, etc.).
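If you’d rather collect those traits mechanically, a small helper (hypothetical, run locally over a saved copy of the output text, not inside Dify) could look like:

```python
import unicodedata

def characterize(text: str) -> dict:
    """Summarize the traits worth reporting: size, longest line, unusual characters."""
    lines = text.splitlines() or [""]
    control = sum(
        1 for ch in text
        if unicodedata.category(ch) in ("Cc", "Cf") and ch not in "\n\r\t"
    )
    return {
        "chars": len(text),
        "bytes_utf8": len(text.encode("utf-8")),
        "longest_line": max(len(line) for line in lines),
        "blank_lines": sum(1 for line in lines if not line.strip()),
        "control_chars": control,  # invisible Cc/Cf characters besides \n \r \t
        "non_ascii": sum(1 for ch in text if ord(ch) > 127),
    }
```

Running it on one “good” and one “bad” output gives exactly the comparison described above, without pasting any contract text.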
6. What to do if it’s confirmed to be a “content-related Dify 1.11.0 bug”
Once you have verified through the steps above that:
- Shortening, segmenting, `| tojson`, and similar methods all work fine;
- Only when “a particularly long block of text” is directly inserted into “Direct Reply” does it occasionally hang;
Then it can basically be concluded that this is a bug in 1.11.0 in certain content scenarios. Next, it is recommended to:
- Package and organize the “minimum reproducible process” and a redacted sample long text;
- Open an issue on GitHub (langgenius/dify): specify version 1.11.0, the call method (which API), whether streaming was used, and the phenomenon of “all nodes succeed but HTTP never returns”;
- At the same time, deploy solutions like “segmenting/truncating/only returning ID” in production to avoid affecting business.
Summary
From all your current screenshots and descriptions:
- “Workflow internals completely normal” + “HTTP occasionally never returns” + “the last hop is extremely long text” makes me lean towards a response write-back issue triggered by length / specific characters, rather than a misconfiguration of your workflow.
- The fastest verification method is:
First change the final direct reply to extremely short fixed text, to see if the problem immediately disappears; if so, then gradually restore the content to see at what “length / form” it starts to become unstable.
You can first try the “Direct Reply = 调试:OK” (Debug: OK) version, run it multiple times, and see if the “no HTTP return” situation no longer occurs; once you have results, we can then refine the next steps.