Teams building health/fitness apps, coaches running check-ins in chat, and anyone who needs quick, structured nutrition insights from food photos—without manual logging.
This workflow accepts a food image (URL or Base64), uses a vision-capable LLM to infer likely ingredients and rough gram amounts, estimates per-ingredient calories, and returns a strict JSON summary with total calories and a short nutrition note. It normalizes different payloads (e.g., Telegram/LINE/Webhook) into a common format, handles transient errors with retries, and avoids hardcoded secrets by using credentials/env vars.
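The payload-normalization step can be sketched as a small function. This is an illustrative sketch, not the workflow's actual code: the Telegram and LINE field names (message.photo, events[0].message) follow those platforms' webhook formats, while imageUrl/imageBase64 on the generic webhook are assumed names for this example.

```javascript
// Sketch: normalize Telegram / LINE / generic webhook payloads into one shape.
// Returns { imageRef, kind } so downstream nodes handle one common format.
function normalizePayload(source, body) {
  if (source === "telegram" && body.message && body.message.photo) {
    // Telegram sends an array of photo sizes; take the largest (last) one.
    const photos = body.message.photo;
    return { imageRef: photos[photos.length - 1].file_id, kind: "fileId" };
  }
  if (source === "line" && body.events && body.events[0] && body.events[0].message) {
    // LINE delivers the image by message ID; the content is fetched separately.
    return { imageRef: body.events[0].message.id, kind: "messageId" };
  }
  // Generic webhook: accept a direct URL or a Base64 string.
  if (body.imageUrl) return { imageRef: body.imageUrl, kind: "url" };
  if (body.imageBase64) return { imageRef: body.imageBase64, kind: "base64" };
  throw new Error("Unsupported payload: no image found");
}
```

Keeping this branching in one place means the LLM, retry, and output nodes never need to know which chat platform the image came from.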
To set up: connect a vision-capable model (e.g., gpt-4o or equivalent), set LLM_MODEL and LLM_TEMPERATURE (e.g., 0.3), then send a test image via imageUrl and confirm the strict JSON output:
{
"dishName": "string",
"ingredients": [{ "name": "string", "amount": 0, "calories": 0 }],
"totalCalories": 0,
"nutritionEvaluation": "string"
}
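Because the model is asked for strict JSON, it helps to validate the reply before returning it. The sketch below assumes the schema above; the 5% tolerance for the calorie cross-check is an illustrative choice, not part of the workflow.

```javascript
// Sketch: parse and validate the model's strict-JSON reply.
function validateNutritionJson(text) {
  const data = JSON.parse(text); // throws if the model returned non-JSON
  if (typeof data.dishName !== "string" || !Array.isArray(data.ingredients)) {
    throw new Error("Missing dishName or ingredients");
  }
  // Cross-check: totalCalories should roughly equal the per-ingredient sum.
  const sum = data.ingredients.reduce((acc, i) => acc + Number(i.calories), 0);
  if (Math.abs(sum - data.totalCalories) > data.totalCalories * 0.05) {
    throw new Error(`totalCalories ${data.totalCalories} != ingredient sum ${sum}`);
  }
  return data;
}
```

A failed validation is a natural place to trigger the workflow's retry path instead of returning a malformed result.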
Rename all nodes clearly, include sticky notes explaining the setup, and never commit real IDs, tokens, or API keys.