In the current workflow, the LLM outputs multiple parameters, but the next node can only accept the LLM's full text output as a single variable. Calling the model a second time through a Parameter Extractor node is too time-consuming. Apart from a Parameter Extractor, is there a faster way to obtain these multiple output parameters? My temporary solution is to add a Code node after the LLM that parses the output with Python, based on Dify's standard JSON response format. However, writing a separate Python parsing script for every output is too cumbersome.
Using Python in a Code node for parsing is indeed the fastest method…