
Building a Mii Designer with AI: Why Specs Matter More Than Prompts

Most AI-coding tutorials spend all their time on prompts. I think that's backwards. The prompt matters way less than the spec the prompt points to.


My 13-year-old daughter is obsessed with Nintendo’s Tomodachi Life. One weekend, she asked if we could build it ourselves.

I knew the genre was perfect for a hobby AI-assisted build: games like this are heavy on writing and personality, not physics or cutting-edge 3D graphics. We set expectations up front: we weren’t competing with Nintendo, just building something in the spirit of the game. We called the project Daily Dream.

Over the course of three major iterations—and one massive pivot triggered by her honest feedback—I learned a few fundamental truths about building alongside AI coding agents.

Phase 1: The Art of the Spec

If you’re going to use AI to write code, the highest-leverage activity is writing better specs, not better prompts.

We spent our entire first Saturday without writing a line of code. Instead, we brainstormed visual styles with Claude (evaluating 2D sprites, VRM, pixel art, and SVG) and landed on a hybrid approach: a procedural SVG body paired with an AI-generated face.

More importantly, I wrote a rigorous PRD.md (Product Requirements Document), alongside .agents/rules/project-rules.md to define always-on constraints and .agents/skills/mii-schema.md to pin down the exact data shapes.

When I finally handed the kickoff prompt to Antigravity, it wasn’t a magic paragraph—it was simply a pointer to a meticulously written spec. Writing the PRD took me longer than I expected, but it saved me 10x the time in agent back-and-forth.

Phase 2: The Hybrid Build and the Proxy Pattern

With the spec ready, I handed it to the AI agent and watched it build. It was a rapid time-lapse of implementation:

  1. Setting up the base apps/mii-designer/ structure.
  2. Rendering the first working SVG body.
  3. Hooking into a local proxy to securely manage the Gemini API key without exposing it to the browser.
  4. Calling Nano Banana 2 (Gemini 3.1 Flash Image Preview) for the faces.
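The proxy in step 3 is a standard pattern: the browser only ever talks to your own server, and the server alone holds the secret. Here is a minimal sketch in Node/TypeScript; the endpoint path, environment-variable name, and model path are my assumptions for illustration, not the project’s actual code:

```typescript
import http from "node:http";

const UPSTREAM = "https://generativelanguage.googleapis.com";

// Attach the secret key server-side, so it never ships in the browser bundle.
export function upstreamUrl(path: string, apiKey: string): string {
  return `${UPSTREAM}${path}?key=${encodeURIComponent(apiKey)}`;
}

// The browser POSTs to /api/generate-face with no credentials; the server
// forwards the body upstream with the key read from the environment.
export const proxy = http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/api/generate-face") {
    let body = "";
    req.on("data", (chunk: Buffer) => (body += chunk));
    req.on("end", async () => {
      const upstream = await fetch(
        // "MODEL" is a placeholder, not a real model identifier.
        upstreamUrl("/v1beta/models/MODEL:generateContent", process.env.GEMINI_API_KEY ?? ""),
        { method: "POST", headers: { "content-type": "application/json" }, body },
      );
      res.writeHead(upstream.status, { "content-type": "application/json" });
      res.end(await upstream.text());
    });
  } else {
    res.writeHead(404).end();
  }
});
// In the real app: proxy.listen(8787).
```

The key point is that the key lives in `process.env` on the server; the client-side code never sees it, and it never appears in the shipped JavaScript.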

It resulted in our first two characters: Luna, who was fully SVG, and Koji, who used the new AI-generated face.

The face generation was beautiful. The body rendering was crisp. The API integration was seamless. But when my daughter walked in and saw Luna and Koji side-by-side, her reaction completely changed the project’s direction.

“The faces don’t match the bodies.”

Phase 3: The Pivot to SVG

She was entirely right. Koji’s face looked photographic, while his body was a flat, chibi-style SVG. The characters looked like they belonged in two entirely different games.

As an engineer, I had a whole architecture diagram in my head for why the hybrid approach was clever. She looked at it for ten seconds and saw the fundamental flaw. In a game like this, recognizability-at-a-glance and consistency matter far more than individual visual expressiveness. It’s supposed to be a cast of characters, not a costume contest.

I pasted our pivot prompt into the agent and watched it relentlessly delete files. We stripped out the entire AI image generation pipeline, the API proxy, and the complex backend setup. We replaced it with a massive expansion of the procedural SVG system.

We shipped a new V2 schema with:

  • 10 hair styles
  • 8 eye shapes
  • 6 mouth shapes
  • 4 eyebrow styles
  • Blush toggles, freckles, and free-form color pickers
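For concreteness, here is one way a schema like that could be shaped in TypeScript. The field names and structure are my guesses from the feature list above, not the project’s real types:

```typescript
// Hypothetical V2 face schema; field names are guesses from the feature list.
export interface FaceV2 {
  hairStyle: number;    // index into the 10 hair styles
  eyeShape: number;     // index into the 8 eye shapes
  mouthShape: number;   // index into the 6 mouth shapes
  eyebrowStyle: number; // index into the 4 eyebrow styles
  blush: boolean;
  freckles: boolean;
  hairColor: string;    // free-form color, e.g. "#6b4f2a"
  skinColor: string;
}

// Option counts can drive both the editor UI and a randomizer.
export const FACE_OPTION_COUNTS = {
  hairStyle: 10,
  eyeShape: 8,
  mouthShape: 6,
  eyebrowStyle: 4,
} as const;
```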

To make it seamless, the agent even wrote a migration script that automatically upgraded V1 JSON exports to the new V2 face system with a nice little UI toast notifying the user.
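A migration like that can be tiny, because an AI-generated bitmap can’t be decomposed back into SVG parameters; the only sensible move is to keep the character’s identity and reset the face to defaults. A hypothetical sketch, with all field names invented for illustration:

```typescript
// Hypothetical shapes for illustration; the project's real schema may differ.
interface MiiV1 {
  version: 1;
  name: string;
  faceImageUrl?: string; // the old AI-generated face, dropped in V2
}

interface MiiV2 {
  version: 2;
  name: string;
  face: {
    hairStyle: number;
    eyeShape: number;
    mouthShape: number;
    eyebrowStyle: number;
    blush: boolean;
  };
}

// Keep the character's identity; reset the face to neutral defaults, since
// the old bitmap face can't be converted into SVG parameters.
export function migrateV1toV2(mii: MiiV1): MiiV2 {
  return {
    version: 2,
    name: mii.name,
    face: { hairStyle: 0, eyeShape: 0, mouthShape: 0, eyebrowStyle: 0, blush: false },
  };
}
```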

The Power of the Randomize Button

With the new robust SVG system in place, the agent added a ‘Randomize Face’ button. I watched my daughter hit it repeatedly, generating dozens of unique characters in seconds.
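Under a parametric schema, “Randomize Face” reduces to an independent uniform pick per feature. A minimal sketch, assuming the option counts from the V2 list above (all names illustrative):

```typescript
// Sketch of a "Randomize Face" routine; names and ranges are illustrative.
function randomInt(maxExclusive: number): number {
  return Math.floor(Math.random() * maxExclusive); // 0 .. maxExclusive-1
}

function randomHexColor(): string {
  // Random 24-bit color, zero-padded to six hex digits.
  return "#" + randomInt(0x1000000).toString(16).padStart(6, "0");
}

export function randomizeFace() {
  return {
    hairStyle: randomInt(10),   // 10 hair styles
    eyeShape: randomInt(8),     // 8 eye shapes
    mouthShape: randomInt(6),   // 6 mouth shapes
    eyebrowStyle: randomInt(4), // 4 eyebrow styles
    blush: Math.random() < 0.5,
    freckles: Math.random() < 0.5,
    hairColor: randomHexColor(),
  };
}
```

Because every feature is an independent index, repeated presses explore the full combinatorial space: 10 × 8 × 6 × 4 = 1,920 base combinations before colors and toggles even enter the picture.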

When you looked at the new gallery of 6 fully-SVG Miis, they all spoke the same visual language. The lesson here wasn’t just about deleting code. It was about learning what the real constraints were.

In our next iteration, we’re going to take this cast of characters and actually make them talk to each other. Because designing a character is only half the battle—making them feel alive is what matters.
