I created a chatflow:
In Dify, both the debugger and API calls return the result as a single one-time output rather than a stream. If the agent node were replaced with an LLM node, the output could be streamed and the response would feel much faster. Does an agent node have to wait for its entire process to finish before it starts responding? The thinking process can sometimes be very long.
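For context, Dify's chat API does offer a streaming response mode (`response_mode: "streaming"` on `POST /v1/chat-messages`), delivered as Server-Sent Events. Below is a minimal client-side sketch of accumulating such a stream; the event names (`message` for LLM output, `agent_message` for agent output, `message_end` as the terminator) are assumptions based on Dify's API documentation and may differ by version:

```python
import json

def accumulate_answer(sse_lines):
    """Collect incremental 'answer' chunks from SSE 'data:' lines.

    Assumes Dify-style streaming payloads: each "data:" line carries a
    JSON object whose "answer" field is one incremental chunk of the
    reply. Event names here are assumptions; verify against your
    Dify version's API docs.
    """
    chunks = []
    for line in sse_lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = json.loads(line[len("data:"):].strip())
        if payload.get("event") in ("message", "agent_message"):
            chunks.append(payload.get("answer", ""))
    return "".join(chunks)

# Simulated stream: chunks as an agent node in streaming mode might emit them.
sample = [
    'data: {"event": "agent_message", "answer": "Hel"}',
    'data: {"event": "agent_message", "answer": "lo"}',
    'data: {"event": "message_end"}',
]
print(accumulate_answer(sample))  # prints "Hello"
```

If the stream only starts after the whole agent run finishes, the client still sees one long pause followed by the full answer, which matches the behavior described above.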
