This workflow automates AI Search Engine Optimization (ASEO) tracking for digital marketing agencies. It tests your client's website visibility across four major AI platforms—ChatGPT, Claude, DeepSeek, and Perplexity—using brand-neutral prompts, analyzes ranking position and presence strength on each platform, identifies top competitors, and returns a structured 27-field scorecard with actionable recommendations. Designed as a sub-workflow, it integrates directly into your existing client audit or reporting pipeline.
This workflow is triggered by a parent workflow and receives two parameters:
- Website — the client's website URL (e.g., https://example.com)
- Website Summary — a text description of the client's business

Stage 1 — Brand-Neutral Prompt Generation
GPT-4.1-mini generates a realistic search prompt that potential customers would type into an AI chatbot to find a company like the client. Critically, the prompt does not include the client's brand name—it focuses on their services and industry instead. For example, for a Los Angeles product photography studio, the prompt would be something like "best product photography studio for Amazon listings in Los Angeles" rather than the studio's name. This tests true organic discoverability, not brand recall.
Stage 2 — Four-Platform Sequential Testing
The same generated prompt is submitted sequentially to four AI platforms:
- ChatGPT (via GPT-4o-mini)
- Claude (via Claude Sonnet 3.7)
- DeepSeek
- Perplexity (native Perplexity node with real-time web search)
Each platform agent runs independently with error handling enabled. If one platform API is down or throws an error, the workflow continues and returns partial results—it does not fail entirely.
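The partial-results behavior can be sketched as a small guard that checks which platform outputs actually arrived. This is a minimal illustration, not the workflow's actual implementation; the `text` field and platform keys are assumptions for the example.

```javascript
// Sketch of a partial-results guard, assuming each platform node's output
// is an object with a `text` field (field and key names are illustrative).
function summarizeCoverage(results) {
  const platforms = ['ChatGPT', 'Claude', 'DeepSeek', 'Perplexity'];
  const tested = [];
  const skipped = [];
  for (const name of platforms) {
    const out = results[name];
    if (out && typeof out.text === 'string' && out.text.trim() !== '') {
      tested.push(name);
    } else {
      skipped.push(name); // platform errored or returned nothing
    }
  }
  return { tested, skipped, partial: skipped.length > 0 };
}
```

A downstream step could use `partial` to annotate the report so the client knows which platforms were actually tested in that run.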
Stage 3 — Cross-Platform Analysis
DeepSeek analyzes all four platform outputs together and produces a structured JSON report covering each platform's ranking (Yes/No), position (1–10 or null), presence strength percentage, key mentions, and top competitors. It also generates an overall summary comparing all platforms.
Stage 4 — Data Flattening
The nested JSON is flattened into 27 individual fields that can be directly inserted into a Google Sheet row, database, or passed back to the parent workflow for reporting.
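The flattening step can be illustrated with a short sketch. The real workflow does this with Set-node expressions rather than code, and the exact field names in the analyzer's nested JSON are assumptions here.

```javascript
// Illustrative sketch of flattening the analyzer's nested report into
// flat, sheet-ready fields (property names are assumed, not the node's
// actual schema; the real node uses n8n Set-node expressions).
function flattenReport(report) {
  const flat = {};
  for (const [platform, r] of Object.entries(report.platforms)) {
    flat[`${platform} Ranking`] = r.ranking;                      // "Yes" / "No"
    flat[`${platform} Position`] = r.position;                    // 1–10 or null
    flat[`${platform} Presence Strength`] = r.presenceStrength;   // 0–100
    flat[`${platform} Key Mentions`] = r.keyMentions;
    flat[`${platform} Top Competitors`] = (r.topCompetitors || []).join(', ');
  }
  flat['Overall Summary'] = report.overallSummary;
  return flat;
}
```

Five fields per platform across four platforms, plus the overall summary fields, is how the output reaches its 27-column shape.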
The workflow returns 27 structured data fields: for each of the four platforms, ranking (Yes/No), position (1–10 or null), presence strength percentage, key mentions, and top competitors, plus overall summary and recommendation fields.
Estimated setup time: 20–25 minutes
This is a sub-workflow. It does not have its own schedule trigger. It runs when a parent workflow calls it using n8n's Execute Workflow node.
Setting up the parent workflow:
- Website | Value: your client's website URL expression (e.g., ={{ $json['Website URL'] }})
- Website Summary | Value: your client's business description (e.g., ={{ $json['Business Description'] }})

Example parent workflow structure:
Schedule Trigger (Weekly / Monthly)
→ Read Client List from Google Sheets
→ Loop Over Each Client
→ Execute Workflow (this AI Search Ranking Analyzer)
Pass: Website = {{ $json['Website URL'] }}
Pass: Website Summary = {{ $json['Summary'] }}
→ Append 27 Fields to Reporting Sheet
→ Send Report Email or Slack Notification
Testing the trigger connection: run the parent workflow once and check the Receive Website and Summary from Parent node's input panel to confirm both parameters arrive.
This workflow uses two OpenAI models:
- gpt-4.1-mini — used by Generate Brand-Neutral Search Prompts, Parse Prompt as JSON, and GPT Model for Parser Support
- gpt-4o-mini — used by Test Visibility on ChatGPT

To connect:
- GPT Model for Prompt Generation → select your OpenAI credential, set model to gpt-4.1-mini
- GPT Model for Parser Support → select your OpenAI credential, set model to gpt-4.1-mini
- GPT-4o-mini for ChatGPT Test → select your OpenAI credential, set model to gpt-4o-mini

Anthropic credential. Used by the Test Visibility on Claude agent via the Claude Sonnet 3.7 Model node.
To connect:
- Open the Claude Sonnet 3.7 Model node and select your credential
- Set the model to claude-3-7-sonnet-20250219

DeepSeek credential. Used by two nodes: DeepSeek Model for Testing (platform test) and DeepSeek Model for Analysis (final summarizer).
To connect:
- DeepSeek Model for Testing node → select your credential
- DeepSeek Model for Analysis node → select your credential

Perplexity credential. Used by the Test Visibility on Perplexity node (Perplexity native node, not an AI agent).
To connect:
- Open the Test Visibility on Perplexity node and select your credential

To test this sub-workflow in isolation, pin sample data on Generate Brand-Neutral Search Prompts (bypass the executeWorkflowTrigger for isolated testing):

{
"Website": "https://your-test-site.com",
"Website Summary": "A company that provides [your service] in [your city]"
}
Run a test execution and verify that:
- Generate Brand-Neutral Search Prompts produces a sensible search query
- Analyze All Platform Results produces structured JSON
- Flatten JSON to 27 Data Fields produces all 27 fields correctly

The Receive Website and Summary from Parent node is the entry point of this sub-workflow. It listens for execution from a parent workflow via n8n's Execute Workflow node and receives two inputs: Website (client URL) and Website Summary (business description text). These values are referenced by subsequent nodes throughout the workflow.
An AI agent powered by GPT-4.1-mini that creates a realistic search query a potential customer might type into an AI chatbot to find a business like the client—without using the client's brand name. This tests organic discoverability based on services and industry positioning rather than brand recognition. The output is a single focused search prompt.
A Structured Output Parser that enforces JSON schema {"Prompts": "..."} on the generated prompt. Uses GPT Model for Parser Support as its language model and has autoFix enabled, so malformed outputs are automatically retried and corrected.
An AI agent that submits the generated search prompt to ChatGPT (GPT-4o-mini) and records the response. This captures what ChatGPT currently recommends when users search for services like the client's.
An AI agent powered by Claude Sonnet 3.7 (Anthropic) that receives the same prompt and records Claude's recommendations. Has onError: continueRegularOutput so the workflow continues if Claude's API is unavailable.
An AI agent powered by DeepSeek that tests the same prompt on DeepSeek's platform. Also has onError: continueRegularOutput for resilience.
Uses n8n's native Perplexity node (not an AI agent) to submit the prompt to Perplexity's search-augmented AI. Perplexity is particularly important because it uses real-time web search, making its recommendations highly relevant for current visibility. Has onError: continueRegularOutput.
A DeepSeek-powered AI agent that receives all four platform outputs simultaneously along with the client website URL and the original search prompt. It analyzes each platform independently—determining whether the client appears (Yes/No), at what position (1–10), how strongly (0–100%), how they are mentioned, and which competitors appear. It also generates an overall summary comparing all platforms and provides specific improvement recommendations. Uses Parse Analysis as Structured JSON as its output parser.
A Set node that extracts values from the nested JSON output of the analyzer into 27 flat fields. This makes the data ready for direct insertion into a Google Sheets row, Airtable record, or database table—or for return to the parent workflow.
A No Operation node marking the successful completion of the workflow. The parent workflow receives all 27 fields as the execution output.
In your parent workflow, maintain a Google Sheet with columns:
| Client Name | Website URL | Business Description | Last Checked |
|---|---|---|---|
| Example Corp | https://example.com | A SaaS company that provides... | 2025-01-15 |
Your parent workflow reads each row, passes the Website URL and Business Description to this sub-workflow, and writes the 27 returned fields back into the sheet for tracking.
After execution, check the Flatten JSON to 27 Data Fields node output. For each platform you get:
- Ranking (Yes/No)
- Position (1–10, or null if not ranked)
- Presence strength (0–100%)
- Key mentions
- Top competitors
The Overall Summary tells you how the client's visibility compares across all four platforms and gives specific recommendations for improving their AI search presence.
Run this workflow monthly per client. Append results to a Google Sheet with a date column. Track whether presence strength is improving, whether the client appears on more platforms over time, and whether competitors are losing or gaining ground.
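The trend-tracking idea can be sketched as a simple month-over-month comparison, assuming each appended row stores a date and an average presence strength (column names here are illustrative, not the sheet's actual headers).

```javascript
// Hedged sketch of month-over-month trend tracking across appended audit
// rows; `date` (ISO string) and `avgPresence` are assumed column names.
function presenceTrend(rows) {
  const sorted = [...rows].sort((a, b) => a.date.localeCompare(b.date));
  if (sorted.length < 2) return null; // need at least two audits to compare
  const latest = sorted[sorted.length - 1];
  const previous = sorted[sorted.length - 2];
  return {
    date: latest.date,
    delta: latest.avgPresence - previous.avgPresence, // positive = improving
  };
}
```

The same comparison applied per platform, or per competitor, shows which channels are gaining or losing ground over time.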
Change the number of platforms: Remove any platform agent node and update the Analyze All Platform Results prompt to exclude that platform's output reference.
Add more platforms: Add new AI agent nodes (e.g., Grok, Gemini) between Test Visibility on Perplexity and Analyze All Platform Results. Update the analyzer prompt to include the new platform's output.
Generate multiple prompts: Modify Generate Brand-Neutral Search Prompts to produce 3–5 different prompts. Loop through each and aggregate results for more comprehensive testing.
Write results directly to Google Sheets: After Flatten JSON to 27 Data Fields, add a Google Sheets Append node in your parent workflow to log each audit automatically.
Add email or Slack notifications: After the workflow completes in the parent, add a Send Email or Slack node that formats the key metrics (Overall Ranking, Average Presence Strength, Recommendations) into a readable client report.
Adjust presence strength scoring: Modify the Analyze All Platform Results prompt to change how the AI scores presence strength—for example, weighting first-position mentions more heavily.
Parent workflow not triggering this workflow
- Confirm the parent's Execute Workflow node is pointed at this workflow and that this workflow has been saved
Website and Website Summary parameters not passing
- Ensure parameter names are exactly Website and Website Summary (case-sensitive, with a space in the second parameter)
- Check the Receive Website and Summary from Parent node's input panel to verify received data

One platform returns empty output
- This is expected when that platform's API is down: each platform node has onError: continueRegularOutput, so the workflow continues with partial results. Check that node's credentials and error output.
Structured output parser fails
- Parse Prompt as JSON has autoFix enabled—it will retry malformed outputs automatically
- If Parse Analysis as Structured JSON fails, simplify the prompt in Analyze All Platform Results or increase max tokens

Generated prompt includes client brand name
- The Generate Brand-Neutral Search Prompts agent prompt instructs GPT to avoid brand names—strengthen that instruction if a name still slips through

All 27 fields not appearing in output
- Check the Analyze All Platform Results node output
- Confirm the Flatten JSON to 27 Data Fields expressions reference the correct node names

Digital marketing agencies offering ASEO services: Run monthly AI visibility audits for 20–50 clients from one parent workflow. Generate client reports showing AI platform rankings, presence strength trends, and competitor comparisons. Position ASEO as a premium new service.
SEO teams expanding beyond Google: Use this alongside traditional Google ranking reports. Show clients their full search visibility picture—covering both Google and the AI chatbots that are increasingly influencing purchase decisions.
Competitive intelligence: Run this workflow for your own site and 3–5 competitors simultaneously. Identify which competitors dominate AI recommendations and reverse-engineer their content strategy.
Brand monitoring: Track how AI chatbots describe your brand over time. Detect if competitors are gaining ground or if negative associations are appearing in AI responses.
New market entry research: Before entering a new market or launching a new service line, test whether your website would appear in AI searches for that service category. Use results to guide content strategy before launch.
Time savings: 45–60 minutes of manual AI testing per client, eliminated per audit cycle
Coverage: 4 major AI platforms tested in a single automated run
Output quality: Structured, consistent 27-field data format—ready for Google Sheets, dashboards, or PDF reports
Scalability: Process 50+ clients per parent workflow run with no additional manual effort
Competitive advantage: One of the first systematic approaches to measuring AI Search Engine Optimization (ASEO)—a space with no established tooling yet
For any questions, custom development, or workflow integration support:
📧 Email: [email protected]
🌐 Website: https://www.incrementors.com/