The Professional plan limits the number of messages to 5,000, but does this count cover both LLM input and output? And if the limit is exceeded, will the system stop responding?
You mean “message credits/month” on the plans page, right?
I’m not an official member of the Dify team but rather a contributor, so I can’t provide an official answer.
However, as far as I understand, message credits are consumed each time an LLM runs, with the amount determined by the specific model used.
Therefore, the number of input or output tokens does not affect credit consumption; it depends only on which model is used and how many times it is executed.
Once all your credits are used up, any attempt to run the LLM will return an error saying you've exceeded the quota.
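For illustration, here's a minimal Python sketch of what that looks like when you call a Dify app through its API. The endpoint and request shape follow the published chat-messages API, but the app key is a placeholder, and the exact status code and error body returned when message credits run out are assumptions, so treat the error handling as a rough guide rather than the exact behaviour.

```python
# Minimal sketch: calling a Dify app and surfacing a quota-exceeded error.
# Assumptions: APP_API_KEY is a placeholder, and the precise status code /
# error payload Dify returns when message credits are exhausted may differ.
import requests

DIFY_API_BASE = "https://api.dify.ai/v1"
APP_API_KEY = "app-xxxxxxxx"  # placeholder: your Dify app's API key


def ask(query: str, user_id: str = "demo-user") -> str:
    resp = requests.post(
        f"{DIFY_API_BASE}/chat-messages",
        headers={
            "Authorization": f"Bearer {APP_API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "inputs": {},
            "query": query,
            "response_mode": "blocking",
            "user": user_id,
        },
        timeout=60,
    )
    if resp.status_code != 200:
        # When the plan's message credits are exhausted, the LLM invocation
        # fails and an error is returned instead of an answer (assumption:
        # the exact shape of that error may vary).
        raise RuntimeError(f"Dify request failed ({resp.status_code}): {resp.text}")
    return resp.json()["answer"]


if __name__ == "__main__":
    print(ask("Hello, what can you do?"))
```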
Hope this helps.
Thank you for your response. I understand that credits are consumed each time an LLM is used.
So, if I register my own API key, I won’t be using the credits provided by Dify, meaning the 5000 credits won’t be consumed and the limit will be removed, right?
That's mostly correct.
To be more precise, once you've registered your own API key, the model provider settings let you choose whether the quota (credits) or your API key takes priority.
If you set it to "API key", the credit limit no longer applies to your usage.
F.Y.I. about the language policy
Thank you for the detailed information. It was very helpful.
