This workflow connects n8n to the Hugging Face Inference API, letting you run powerful open-source AI models for text generation, summarization, sentiment analysis, translation, and image generation — all fully automated, no GPU setup required. Simply POST a request and get AI-powered results back in seconds.
The goal is to give developers, agencies, and businesses a plug-and-play automation for running any Hugging Face model without managing infrastructure, replacing expensive proprietary APIs with open-source alternatives that you control.
Tasks this workflow handles out of the box include text generation, summarization, sentiment analysis, translation, and image generation.
Hugging Face hosts over 400,000 open-source models — many matching or exceeding the quality of paid APIs at a fraction of the cost. This workflow exposes them through a single webhook, in six steps:
Step 1 — Webhook receives the task request with input text and task type
Step 2 — Set node stores your Hugging Face API key and normalizes all inputs
Step 3 — Code node selects the right model and builds the correct API payload for the task
Step 4 — HTTP Request calls the Hugging Face Inference API with the built payload
Step 5 — Code node parses and formats the raw API response into clean structured output
Step 6 — Respond node returns the final result as JSON to the caller
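The model-selection and payload-building logic of Step 3 can be sketched as below. The task names and default models are illustrative assumptions, not the template's actual defaults; the Inference API's `{ inputs, parameters }` request shape is the documented convention for most hosted tasks.

```javascript
// Sketch of the Code node's model selection and payload building (Step 3).
// The default models below are assumptions for illustration; the workflow's
// actual defaults may differ.
const DEFAULT_MODELS = {
  "text-generation": "mistralai/Mistral-7B-Instruct-v0.2",
  "summarization": "facebook/bart-large-cnn",
  "sentiment-analysis": "distilbert/distilbert-base-uncased-finetuned-sst-2-english",
  "translation": "Helsinki-NLP/opus-mt-en-fr",
  "image-generation": "stabilityai/stable-diffusion-xl-base-1.0",
};

function buildRequest(body) {
  const task = body.task || "text-generation";
  // An empty "model" field falls back to the task's default model.
  const model = body.model || DEFAULT_MODELS[task];
  if (!model) {
    throw new Error(`Unsupported task: ${task}`);
  }
  return {
    url: `https://api-inference.huggingface.co/models/${model}`,
    payload: { inputs: body.input, parameters: body.parameters || {} },
  };
}
```

In n8n, this logic lives in a Code node, which returns the `url` and `payload` as item fields that the downstream HTTP Request node consumes.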
Step 1: Import this workflow into your n8n instance
Step 2: Open the Set API Config node and replace YOUR_HF_API_KEY with your token
Step 3: Activate the workflow
Step 4: POST to /webhook/hf-runner with your task payload
Step 5: Swap models anytime by changing the model field in your request
{
  "task": "summarization",
  "input": "Your long text goes here...",
  "model": "",
  "parameters": {}
}
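The payload above can be sent with a plain curl call once the workflow is active. The host below is a placeholder; substitute your own n8n instance's URL.

```shell
# POST the task payload to the activated webhook.
# "your-n8n.example.com" is a placeholder for your n8n instance's host.
PAYLOAD='{
  "task": "summarization",
  "input": "Your long text goes here...",
  "model": "",
  "parameters": {}
}'

curl -X POST "https://your-n8n.example.com/webhook/hf-runner" \
  -H "Content-Type: application/json" \
  --max-time 10 \
  -d "$PAYLOAD"
```

The Respond node returns the formatted result as JSON, so the curl output can be piped straight into another tool or script.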