This example workflow demonstrates how to connect multiple LLMs to a single AI Agent/LangChain Node and programmatically select one of them, or, as in this case, loop through them.
The workflow takes in customer complaints and generates a response that is validated before being returned. If the answer is not satisfactory, the response is generated again with a more capable model.
Note that the order in which the LLMs are used is determined by the order in which they were added to the workflow, not by their position on the canvas.
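For illustration, here is a minimal sketch of what the model-selection logic inside a LangChain Code Node could look like. It is not the template's actual code: the `attempt` and `chatInput` field names are illustrative, and it assumes that `getInputConnectionData('ai_languageModel', 0)` returns the connected models as an array in the order they were added.

```javascript
// Sketch: pick one of the connected chat models based on the attempt count.
// Assumption: multiple models connected to the `ai_languageModel` input are
// returned as an array, in the order they were added to the workflow.
const connected = await this.getInputConnectionData('ai_languageModel', 0);
const models = Array.isArray(connected) ? connected : [connected];

const item = this.getInputData()[0].json;
const attempt = item.attempt ?? 0; // illustrative field, incremented on each retry

// Escalate to the next (more capable) model on each retry, capped at the last one.
const llm = models[Math.min(attempt, models.length - 1)];

const response = await llm.invoke(
  `Write a polite, concrete reply to this customer complaint:\n${item.chatInput}`,
);

return [{ json: { reply: response.content, attempt } }];
```

Because the array mirrors the connection order, connecting the cheapest model first and the most capable model last gives the escalation behavior described above.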
After cloning this workflow into your environment, open the chat and send this example message:
I really love waiting two weeks just to get a keyboard that doesn’t even work. Great job. Any chance I could actually use the thing I paid for sometime this month?
Most likely, the first validation will fail, causing the workflow to loop back to the generation node and try again with the next available LLM.
Since LLM responses are non-deterministic, the results and the number of attempts will differ between runs.
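The validation step could be as simple as an LLM-as-judge check whose boolean output drives an IF node. The following is a hypothetical sketch under that assumption, not the template's actual code; the `reply`, `valid`, and `attempt` fields are illustrative.

```javascript
// Hypothetical validation sketch: grade the drafted reply with a judge model.
const connected = await this.getInputConnectionData('ai_languageModel', 0);
const judge = Array.isArray(connected) ? connected[0] : connected;
const item = this.getInputData()[0].json;

const verdict = await judge.invoke(
  'Does the following reply address the customer complaint politely and ' +
  'concretely? Answer only YES or NO.\n\nReply:\n' + item.reply,
);

// A downstream IF node would route on `valid`: return the reply, or loop
// back to the generation node with the attempt counter incremented.
const valid = String(verdict.content).trim().toUpperCase().startsWith('YES');
return [{ json: { ...item, valid, attempt: valid ? item.attempt : item.attempt + 1 } }];
```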
Please note that this workflow can only run on self-hosted n8n instances, since it requires the LangChain Code Node.