This workflow is the official backend for the StopSlopIn Chrome extension: it classifies LinkedIn posts as quality or slop using a strict LLM quality gate, and it learns from user votes over time via a Qdrant vector store.
It runs the webhook that powers the StopSlopIn extension on the Chrome Web Store. The extension sends LinkedIn posts for analysis and user votes for training; everything stays on your own n8n instance.
A single webhook exposes two actions, selected via a ?action= query parameter: analyze for classification, vote for training.
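As a minimal sketch, the two actions might be called like this from the extension. The base URL and the payload field names (`posts`, `post`, `rating`) are assumptions; match them to your own Webhook node configuration.

```typescript
// Hypothetical webhook base URL; replace with your own n8n instance.
const WEBHOOK_BASE = "https://your-n8n-instance.example/webhook/stopslopin";

type Rating = "good" | "slop";

// Build the request for the classification action (?action=analyze).
function buildAnalyzeRequest(posts: string[]): { url: string; body: string } {
  return {
    url: `${WEBHOOK_BASE}?action=analyze`,
    body: JSON.stringify({ posts }),
  };
}

// Build the request for the training action (?action=vote).
function buildVoteRequest(post: string, rating: Rating): { url: string; body: string } {
  return {
    url: `${WEBHOOK_BASE}?action=vote`,
    body: JSON.stringify({ post, rating }),
  };
}

// Usage (network call shown for illustration only):
// const { url, body } = buildAnalyzeRequest(["Agree? Thoughts below"]);
// await fetch(url, { method: "POST", headers: { "Content-Type": "application/json" }, body });
```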
A Switch node routes incoming requests based on the action parameter:

- analyze: each post is enriched with similar prior-rated posts pulled from Qdrant (RAG), batched together, and sent to the LLM with a strict quality-gate system prompt. The LLM returns pass/fail results, which are sent back to the caller.
- vote: the post is embedded and stored in Qdrant along with the user's "good" or "slop" rating as metadata.

To customize:

- Swap the OpenAI Chat Model for any LangChain-compatible chat model (Claude, Ollama, etc.).
- Edit the quality-gate system prompt in the Basic LLM Chain node to match your own feed taste.
- Adjust the similarity threshold in the Filter node (default 0.7) to make RAG examples looser or stricter.

Note: post contents sent through this workflow are forwarded to the configured LLM and embeddings provider (OpenAI by default). Swap those nodes for a local or alternative provider if that is a concern.