
Generate Cinematic Videos from Text Prompts with GPT-4o, Fal.AI Seedance & Audio

Created by: Jaruphat J. (jaruphatj)


Who’s it for?

This workflow is built for:

  • AI storytellers, content creators, YouTubers, and short-form video marketers
  • Anyone who wants to turn text prompts into cinematic AI-generated videos, fully automatically
  • Educators, trainers, or agencies creating story-based visual content at scale

What It Does

This n8n workflow automatically turns a simple text prompt into a multi-scene cinematic video using the Fal.AI Seedance V1.0 model, developed by ByteDance (the creators of TikTok).

It combines the creativity of GPT-4o, the motion synthesis of Seedance, and the automation power of n8n to generate AI videos with ambient sound in a publish-ready format.


How It Works

  1. Accepts a prompt from Google Sheets (configurable fields like duration, aspect ratio, resolution, scene count)
  2. Uses OpenAI GPT-4o to write a vivid cinematic narrative
  3. Splits the story into n separate scenes
  4. For each scene (see the sketches after this list):
    • GPT generates a structured cinematic description (characters, camera, movement, sound)
    • The Seedance V1.0 model (via Fal.AI API) renders a 5s animated video
    • Optional: Adds ambient audio via Fal’s MM-Audio model
  5. Finally:
    • Merges all scene videos using Fal’s FFmpeg API
    • Optionally uploads to YouTube automatically
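
For reference, the structured cinematic description produced in step 4 might have a shape like the one below. This is a minimal sketch: the field names are assumptions based on the attributes listed above (characters, camera, movement, sound), and the actual JSON schema defined in the GPT-4o node may differ.

```typescript
// Minimal sketch of the structured scene description GPT-4o is asked to return
// in step 4. Field names are assumptions based on the attributes listed above;
// adjust them to match the JSON schema defined in your GPT-4o node.
interface SceneDescription {
  scene_number: number;   // position of the scene in the story
  characters: string;     // who appears and what they are doing
  camera: string;         // shot type and framing, e.g. "wide establishing shot"
  movement: string;       // camera or subject motion, e.g. "slow pan left"
  sound: string;          // ambient audio cue passed to the MM-Audio step
  prompt: string;         // final text prompt sent to Seedance for rendering
}

// Example instance (values are illustrative only).
const scene: SceneDescription = {
  scene_number: 1,
  characters: "A small white rabbit riding a giant dragonfly",
  camera: "Wide establishing shot, low angle",
  movement: "Slow upward pan following the dragonfly",
  sound: "Soft wind and distant night-time ambience",
  prompt:
    "Cinematic shot of a white rabbit riding a giant dragonfly toward the moon, low angle, slow upward pan",
};
```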
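
The Seedance render and the wait-polling mentioned below follow Fal.AI's queue pattern: submit a job, poll its status, then fetch the result. The standalone TypeScript sketch below (Node 18+, built-in fetch) illustrates that flow; in the workflow itself it is implemented with HTTP Request and Wait nodes. The model path, input field names, and response shape are assumptions, so check the Fal.AI docs for the exact Seedance endpoint and schema.

```typescript
// Standalone sketch of the submit-and-poll flow the workflow's HTTP Request and
// Wait nodes implement against Fal.AI's queue API. MODEL, the input fields, and
// the response shape are assumptions; verify them against the Fal.AI docs.
const FAL_KEY = process.env.FAL_KEY!; // the Fal.AI Header Token credential
const MODEL = "fal-ai/bytedance/seedance/v1/lite/text-to-video"; // assumed path

async function renderScene(prompt: string): Promise<string> {
  // 1. Submit the render job to the queue.
  const submit = await fetch(`https://queue.fal.run/${MODEL}`, {
    method: "POST",
    headers: { Authorization: `Key ${FAL_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, duration: "5", aspect_ratio: "9:16" }),
  });
  const { request_id } = await submit.json();

  // 2. Wait-poll the status endpoint until the render completes.
  for (let attempt = 0; attempt < 60; attempt++) {
    const res = await fetch(
      `https://queue.fal.run/${MODEL}/requests/${request_id}/status`,
      { headers: { Authorization: `Key ${FAL_KEY}` } },
    );
    const { status } = await res.json();
    if (status === "COMPLETED") {
      // 3. Fetch the final result and return the rendered clip's URL.
      const result = await fetch(
        `https://queue.fal.run/${MODEL}/requests/${request_id}`,
        { headers: { Authorization: `Key ${FAL_KEY}` } },
      );
      const payload = await result.json();
      return payload.video?.url; // assumed response shape: { video: { url } }
    }
    await new Promise((r) => setTimeout(r, 10_000)); // ~10 s between polls
  }
  throw new Error("Seedance render did not finish within the polling window");
}
```

In n8n, the 10-second pause maps to a Wait node and the loop to the workflow's polling branch, so no custom code is required to reproduce this behavior.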

Why This Is Special

  • Fal.AI Seedance V1.0 is a highly advanced motion video model developed by ByteDance, capable of generating expressive, stylized 5–6 second cinematic clips from text.
  • This workflow supports full looping, scene count validation, and wait-polling for long render jobs.
  • The entire story, breakdown, and scene design are AI-generated — no manual effort needed.
  • Output is export-ready: MP4 with sound, ideal for YouTube Shorts, Reels, or TikTok.

Requirements

  • A running n8n instance (cloud or self-hosted)
  • Fal.AI account with an API key (Header Token)
  • OpenAI API key with access to GPT-4o
  • Google account with a prompt Sheet (Google Sheets OAuth2 credential)
  • (Optional) YouTube API OAuth credential for auto-upload

How to Set It Up

  1. Clone the template into your n8n instance
  2. Configure credentials:
    • Fal.AI Header Token
    • OpenAI API Key
    • Google Sheets OAuth2
    • (Optional) YouTube API OAuth
  3. Prepare a Google Sheet with these columns (a sample row is sketched after this list):
    • story (short prompt)
    • number_of_scene
    • duration (per clip)
    • aspect_ratio, resolution, model
  4. Run manually or trigger on Sheet update.
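
A single row of that Sheet, as the workflow reads it, might look like the object below. The column names come from step 3; the values are only illustrative, and the accepted resolution and model identifiers depend on what the Fal.AI Seedance endpoint supports.

```typescript
// Illustrative Google Sheet row as it arrives from the Google Sheets node.
// Column names come from step 3; the values shown are assumptions.
const row = {
  story: "A rabbit flies to the moon on a dragonfly and eats watermelon together",
  number_of_scene: 3,      // how many scenes (clips) to generate
  duration: 5,             // seconds per clip
  aspect_ratio: "9:16",    // vertical format for Shorts / Reels / TikTok
  resolution: "720p",      // assumed value; use what the model accepts
  model: "seedance-v1",    // assumed identifier for the Seedance variant
};
```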

How to Customize

  • Modify the storytelling tone in GPT prompts (e.g., switch to fantasy, horror, sci-fi)
  • Change Seedance model params like style or seed (see the sketch after this list)
  • Add subtitles or branding overlays to final video
  • Integrate LINE, Notion, or Telegram for auto-sharing
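
For the second customization, extending the request body from the earlier Seedance sketch could look like this. Whether the model actually accepts a `seed` or a style field is an assumption; check the Fal.AI model schema before relying on it.

```typescript
// Hypothetical extension of the Seedance request body from the earlier sketch.
// Whether `seed` or `style` are accepted depends on the Fal.AI model schema.
const scenePrompt = "Cinematic shot of a rabbit riding a dragonfly toward the moon";
const body = {
  prompt: scenePrompt,
  duration: "5",
  aspect_ratio: "9:16",
  seed: 42,            // fixed seed for repeatable motion, if supported
  // style: "anime",   // stylized look, if the model exposes a style parameter
};
```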

Example Output

Prompt: “A rabbit flies to the moon on a dragonfly and eats watermelon together”
→ Result: 3 scenes, each 5s, cinematic camera pans, soft ambient audio, auto-uploaded to YouTube