Show HN: Seedance2 – Stop "prompt guessing" and start directing AI video
1 point | 1 hour ago | 0 comments | seedancevideo.app
We’ve all seen the viral AI video clips: stunning, surreal, but ultimately... random. As developers and creators, we noticed a frustrating pattern. Using current AI video tools feels like playing a slot machine. You put in a prompt, pull the lever, and hope the "AI gods" give you what you envisioned. If you need a specific camera movement or a consistent character, you're stuck in a loop of "regenerate and pray."

We built Seedance2 because we believe the future of AI isn't just about generation—it’s about direction.

The Story Behind the Workflow

In traditional filmmaking, a director doesn't just give a vague description; they use storyboards, reference clips, and specific audio cues. We wanted to bring that level of precision to AI. Our goal was to create a "Control Studio" where every input serves a functional purpose in the creative pipeline.

What makes this different?

Instead of relying solely on text, Seedance2 introduces a Multi-Modal Timeline. This allows you to anchor your creative intent using various signals:

Camera Motion Transfer: You can upload a reference clip from sites like vibecreature.com or your own library, and our engine will "extract" the camera's soul—the pans, tilts, and zooms—and apply them to your generated scene.
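
To make "camera motion transfer" concrete, here is a back-of-the-envelope sketch of how a camera trajectory can be estimated from a reference clip, using OpenCV feature tracking plus a per-frame similarity transform. This illustrates the general technique, not our production engine:

    import cv2
    import numpy as np

    def extract_camera_motion(path, max_corners=200):
        """Estimate per-frame pan (dx, dy), zoom, and roll from a clip."""
        cap = cv2.VideoCapture(path)
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        trajectory = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Track sparse corners between consecutive frames.
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                          qualityLevel=0.01, minDistance=20)
            if pts is not None:
                nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                          pts, None)
                good_old = pts[status.flatten() == 1]
                good_new = nxt[status.flatten() == 1]
                if len(good_new) >= 3:
                    # A similarity transform (translation + rotation +
                    # uniform scale) is a decent proxy for pan/roll/zoom.
                    M, _ = cv2.estimateAffinePartial2D(good_old, good_new)
                    if M is not None:
                        dx, dy = M[0, 2], M[1, 2]
                        zoom = float(np.hypot(M[0, 0], M[1, 0]))  # scale
                        roll = float(np.arctan2(M[1, 0], M[0, 0]))  # radians
                        trajectory.append((dx, dy, zoom, roll))
            prev_gray = gray
        cap.release()
        return trajectory  # one (pan_x, pan_y, zoom, roll) per frame pair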

Frame Anchoring: Tired of AI videos that start and end in total chaos? You can lock the first and last frames to ensure narrative continuity, making it actually usable for professional editing.
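
As a shape reference, a frame-anchored generation job might look like the hypothetical spec below. The field names and defaults are illustrative assumptions, not our real schema:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class GenerationJob:
        prompt: str
        first_frame: Optional[str] = None  # asset locked as opening frame
        last_frame: Optional[str] = None   # asset locked as closing frame
        duration_s: float = 5.0

    job = GenerationJob(
        prompt="A slow dolly through a neon market at night",
        first_frame="image1",  # generation is constrained to start here...
        last_frame="image2",   # ...and must resolve to this frame
    )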

@Reference Prompting: This is our favorite feature. In your prompt, you can use @mentions to tell the AI exactly which uploaded asset to use for which role. For example: "A cinematic shot of @image1 moving with the energy of @video_ref." (A toy parsing sketch follows the next item.)

Beat-Synced Logic: By analyzing audio tracks, the engine can align visual transitions with the rhythm, a workflow we’ve been refining at seedvideo.net to help creators ship music-driven content faster.
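
Here is the toy @mention resolver promised above. The asset-URI scheme is an assumption made for the example, not how our pipeline actually stores uploads:

    import re

    def resolve_mentions(prompt, assets):
        """Swap @name tokens for asset URIs; return prompt + assets used."""
        used = []

        def swap(match):
            name = match.group(1)
            if name not in assets:
                raise KeyError(f"No uploaded asset named {name!r}")
            used.append(name)
            return assets[name]

        resolved = re.sub(r"@(\w+)", swap, prompt)
        return resolved, used

    assets = {"image1": "asset://img/123", "video_ref": "asset://vid/456"}
    prompt = "A cinematic shot of @image1 moving with the energy of @video_ref"
    print(resolve_mentions(prompt, assets))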
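
And for beat-synced logic, the core idea can be sketched with librosa's beat tracker. Scheduling a cut on every Nth beat is our simplification for illustration, not the full engine:

    import librosa

    def beat_cut_points(audio_path, every_n_beats=4):
        """Return timestamps (seconds) for a transition on every Nth beat."""
        y, sr = librosa.load(audio_path)
        _, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
        beat_times = librosa.frames_to_time(beat_frames, sr=sr)
        return list(beat_times[::every_n_beats])

    # e.g. cuts = beat_cut_points("track.mp3")  # schedule scene cuts here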

Why we’re sharing this now

The feedback loop in video production is currently too slow. Whether you are building e-commerce ads or pre-visualizing a feature film, the bottleneck is always "control." We’ve optimized our engine for speed and precision, allowing for a 3-step loop: Upload -> Direct (@mention) -> Ship.

We are a small team of engineers and artists obsessed with making AI a tool, not just a toy. We’d love the HN community to stress-test our studio. What’s missing in your AI video workflow? How can we make the "Director" experience more intuitive?

Check it out here: Seedance2 Studio (https://seedancevideo.app)
