This workflow contains community nodes that are only compatible with the self-hosted version of n8n.
Self-hosting custom LLMs is becoming more popular and easier with turnkey inference tools like Ollama. With Ollama, you can host your own proprietary models for customers in a private cloud or on your own hardware.
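For illustration, a locally running Ollama instance exposes a simple HTTP API. The sketch below (assuming Ollama's default port 11434 and a hypothetical model name, not the specific model used in this workflow) shows how a client or an n8n Code node could request a completion from a self-hosted model:

```typescript
// Minimal sketch: call a self-hosted Ollama model over its local HTTP API.
// Assumes Ollama is running on its default port (11434) and that a model
// named "my-custom-model" has already been pulled or created locally.
async function generate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "my-custom-model", // placeholder name for your proprietary model
      prompt,
      stream: false,            // return one JSON response instead of a stream
    }),
  });
  const data = await res.json();
  return data.response;         // the generated completion text
}

generate("Summarize the x402 payment scheme in one sentence.")
  .then(console.log)
  .catch(console.error);
```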
But monetizing custom-trained, proprietary models is still a challenge: it typically requires integrating with payment processors like Stripe, which don't support micropayments for on-demand API consumption.
With this free workflow, you can quickly monetize your proprietary LLMs in n8n using the x402 payment scheme and 1Shot API.
Through x402, users and AI agents can pay per inference, with no overhead lost to centralized payment processors.
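For context, x402 builds on the HTTP 402 Payment Required status code: the server responds with its payment requirements, the client attaches a signed payment payload in an X-PAYMENT header, and the request is retried. The sketch below is a simplified illustration of that client-side retry loop; the signPayment helper and the shape of the requirements object are placeholders, not the actual workflow or 1Shot API implementation.

```typescript
// Simplified sketch of a client paying per-inference via x402.
async function payAndCall(url: string, body: unknown): Promise<unknown> {
  const init = {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  };

  // First attempt: the server answers 402 with its payment requirements.
  let res = await fetch(url, init);
  if (res.status === 402) {
    const requirements = await res.json();           // price, asset, pay-to address, etc.
    const payment = await signPayment(requirements); // produce a signed payment payload
    // Retry with the payment attached in the X-PAYMENT header.
    res = await fetch(url, {
      ...init,
      headers: { ...init.headers, "X-PAYMENT": payment },
    });
  }
  return res.json();
}

// Placeholder: in practice a wallet signs an on-chain payment authorization
// that matches the server's stated requirements.
async function signPayment(requirements: unknown): Promise<string> {
  throw new Error("implement wallet signing for your payment scheme");
}
```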
Check out the YouTube tutorial for this workflow to see the full end-to-end process.