This workflow is for Product Managers, Indie Hackers, and Customer Success teams who collect feature requests but struggle to notify specific users when those features actually ship. It helps you turn old feedback into customer loyalty and potential upsells.
This workflow creates a "Semantic Memory" of user requests. Instead of relying on exact keyword tags, it uses Vector Embeddings to understand the meaning of a request.
For example, if a user asks for "Night theme," and months later you release "Dark Mode," this workflow understands they are the same thing, finds that user, and drafts a personal email to them.
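To make that concrete, here is a toy query (hypothetical three-dimension vectors standing in for real embeddings, which have 768 dimensions in this template) showing the pgvector cosine-distance operator the search below is built on; it assumes the vector extension from the setup script is already enabled:
-- <=> is pgvector's cosine distance, so 1 - distance is the similarity score.
-- Hypothetical vectors for "Night theme" and "Dark Mode": near-synonyms embed
-- close together, so similarity lands near 1; unrelated requests score far lower.
select 1 - ('[0.92, 0.11, 0.04]'::vector <=> '[0.88, 0.14, 0.07]'::vector) as similarity;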
You need a Supabase project with the vector extension enabled, and you must update the HTTP Request node with your specific Supabase Project URL. Open your Supabase SQL Editor and paste this script to set up the tables and search function:
-- 1. Enable Vector Extension
create extension if not exists vector;
-- 2. Create Request Table (Smart Columns)
create table feature_requests (
  id bigint generated by default as identity primary key,
  content text,
  metadata jsonb,
  embedding vector(768), -- 768 for Nomic, 1536 for OpenAI
  created_at timestamp with time zone default timezone('utc'::text, now()),
  user_email text generated always as (metadata->>'user_email') stored,
  user_name text generated always as (metadata->>'user_name') stored
);
-- 3. Create Search Function
create or replace function match_feature_requests (
  query_embedding vector(768),
  match_threshold float,
  match_count int
)
returns table (
  id bigint,
  user_email text,
  user_name text,
  content text,
  similarity float
)
language plpgsql
as $$
begin
  return query
  select
    feature_requests.id,
    feature_requests.user_email,
    feature_requests.user_name,
    feature_requests.content,
    1 - (feature_requests.embedding <=> query_embedding) as similarity
  from feature_requests
  where 1 - (feature_requests.embedding <=> query_embedding) > match_threshold
  order by feature_requests.embedding <=> query_embedding
  limit match_count;
end;
$$;
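Optional sanity check: after running the script, you can smoke-test the table and function straight from the SQL Editor. Everything below is placeholder data (a constant 768-dimension vector and a made-up user); in the live workflow the Embeddings node and the HTTP Request node supply the real values:
-- Insert one request with a placeholder embedding; the generated
-- user_email / user_name columns are populated from the metadata.
insert into feature_requests (content, metadata, embedding)
values (
  'Please add a Night theme',
  '{"user_email": "jane@example.com", "user_name": "Jane"}'::jsonb,
  array_fill(0.1::real, array[768])::vector
);

-- Query with the same placeholder vector: similarity is 1, so the row matches.
select * from match_feature_requests(
  array_fill(0.1::real, array[768])::vector, -- query embedding
  0.5,                                       -- match_threshold
  5                                          -- match_count
);

-- Remove the test row when you are done.
delete from feature_requests where user_email = 'jane@example.com';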
⚠️ Dimension Warning: This SQL is set up for 768 dimensions (compatible with the local nomic-embed-text model included in the template).
If you decide to switch the Embeddings node to use OpenAI's text-embedding-3-small, you must change all instances of 768 to 1536 in the SQL script above before running it.
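If you already ran the 768-dimension script and only later switch to OpenAI, the simplest path (assuming you don't need to keep the stored requests) is to drop the objects and rerun the script with the new dimension:
-- Remove the 768-dimension objects, then rerun the setup script above with
-- every 768 replaced by 1536.
drop function if exists match_feature_requests(vector, float, int);
drop table if exists feature_requests;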