Teaching AI to Design What I Mean, Not Just What I Say

My experiments trying to take back control of AI-generated UI

It started with a chat on Friday. One of our designers had been experimenting with Figma Make, and while the AI-generated prototype looked functional, she couldn’t shake the frustration — the font sizes were slightly off, spacing inconsistent, colours not quite right. It was almost her design, but not her design.

That conversation stuck with me. If a designer can’t rely on AI to match what they already know, how can we trust it to design what we don’t know yet?

So I decided to dig in. How much control could I actually wrestle back from the machine?

Before you read on, please know that I haven’t been successful yet; I’m sharing this in case anyone else is on a similar journey.

Two Flavours of Control

That conversation revealed we were really talking about two different use cases.

1. Rapid layout experiments
Sometimes we want to move quickly — to test new structures or ideas in Figma and have them immediately reflected in a working prototype. The closer that prototype feels to the final product, the more useful the feedback becomes.

2. Reconfiguring existing components
Other times, the goal isn’t inventing new layouts — it’s exploring new combinations of existing components already live on the website. In those cases, fidelity matters less than faithfulness: using the same components, tokens, and design system the codebase relies on.

Those two goals — speed and faithfulness — would end up shaping everything that followed.

Experiment 1: Replicating the Designer’s Frustration

Over the weekend, I tried to reproduce the designer’s workflow. And yes — I hit the same wall almost immediately. Fonts and sizes didn’t match. The spacing looked slightly “off.”

I experimented with Figma variables, hoping they might help Figma Make “see” my intent more clearly, but the results didn’t improve. In hindsight, I suspect using proper Figma styles might have given Make more to latch onto — something to test next time.

When I looked under the hood, I started to understand what was going on. Publishing a Figma file for Make generates a CSS file full of custom properties — essentially a set of design tokens. Make then applies those tokens to a set of Shadcn components.

So really, Make isn’t designing with my system; it’s theming Shadcn’s. And that meant I was playing inside someone else’s rules.
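
To make that concrete, here is a purely conceptual sketch (I don’t have Make’s actual output to hand). It assumes the variable names from shadcn/ui’s default theme and uses invented token values, but it shows what theming someone else’s system amounts to in practice.

```typescript
// Conceptual sketch only: the variable names follow shadcn/ui's default theme,
// and the token values are invented stand-ins for what a published Figma file
// might export. This is the shape of the relationship, not Make's real output.
const myTokens = {
  brandBlue: "#0f62fe",
  surface: "#fafafa",
  cornerRadius: "8px",
};

// "Theming Shadcn" means pouring my values into slots that Shadcn defines.
const shadcnTheme: Record<string, string> = {
  "--primary": myTokens.brandBlue,
  "--background": myTokens.surface,
  "--radius": myTokens.cornerRadius,
};

const css = `:root {\n${Object.entries(shadcnTheme)
  .map(([name, value]) => `  ${name}: ${value};`)
  .join("\n")}\n}`;

console.log(css); // roughly the kind of custom-property file Make publishes
```

The point is that the left-hand side of that mapping, the vocabulary itself, belongs to Shadcn.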

Experiment 2: Trying to Bring My Own Code

Next, I wondered if I could inject some of my own logic — to give Make a few of my own rules to play by. That led me to Figma Code Connect, which links live components to Figma designs. On paper, it sounded ideal: I could bring my own code into the loop.

Unfortunately, it turned into a dead end. Code Connect is locked behind Figma’s Organisation plan, which carries a pretty steep annual price tag. I couldn’t even trial it.

I’ve reached out to Figma to ask if I can have temporary access — we’ll see. But that roadblock was a good reminder that sometimes the hardest limit on experimentation isn’t technical — it’s access.

Experiment 3: Mixing Copilot, MCPs, and Storybook

With that avenue closed, I changed direction again.

I’d been using GitHub Copilot and several Model Context Protocol (MCP) servers recently, so I tried connecting them to Storybook as a way to generate components quickly and use AI to assemble them into layouts.

The flow was simple:

  1. Ask Copilot to create a basic React component.
  2. View and refine it in Storybook (a minimal component and story are sketched after this list).
  3. Ask Copilot again to use those components to build the layout from my Figma design. This can be done by prompting Copilot with something as simple as “Build the layout I have selected in Figma” — very cool!
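
As a rough illustration of steps 1 and 2, this is the sort of thing I was asking for. The component and story below are hypothetical stand-ins, not the actual code from my repo, and in a real project they would live in separate files.

```tsx
// Card.tsx — the kind of basic component I asked Copilot to generate (hypothetical names).
import React from "react";
import type { Meta, StoryObj } from "@storybook/react";

type CardProps = {
  title: string;
  children: React.ReactNode;
};

export function Card({ title, children }: CardProps) {
  // Reads token-style custom properties, with hard-coded fallbacks until real tokens are wired in.
  return (
    <section
      style={{
        padding: "var(--spacing-md, 16px)",
        borderRadius: "var(--radius-md, 8px)",
      }}
    >
      <h2>{title}</h2>
      {children}
    </section>
  );
}

// Card.stories.tsx — a minimal story so the component can be viewed and refined in Storybook.
const meta: Meta<typeof Card> = { title: "Components/Card", component: Card };
export default meta;

export const Default: StoryObj<typeof Card> = {
  args: { title: "Hello", children: "Body copy goes here." },
};
```

Once a handful of these existed, step 3 was just a prompt away.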

To my surprise, it worked pretty well — at least for simple compositions. But it still required a lot of manual glue: I was mirroring values between Figma and React by hand.

So it kinda worked… but it definitely wouldn’t scale.

Still, it was the first time I felt I’d actually wrestled back a little control.

View the Storybook I created on GitHub.

Experiment 4: Tokens First — Figma Variables → CSS Custom Properties

After a few detours, I circled back to the foundation. If both Figma and code share the same design tokens, then fidelity should be automatic — AI or not.

The idea:

  1. Define everything in Figma variables (colours, spacing, fonts, radii, etc.).
  2. Export those values.
  3. Convert them to CSS custom properties.

That way, both the AI and the codebase speak the same language.
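
As a sketch of step 3, assuming step 2’s export lands in the W3C design tokens format ($value / $type), the conversion itself is small. The token names and values below are invented for illustration.

```typescript
// tokens-to-css.ts — converting exported tokens (W3C design tokens format)
// into CSS custom properties. Token names and values are invented examples.
type Token = { $value: string | number; $type: string };
type TokenGroup = { [key: string]: Token | TokenGroup };

const tokens: TokenGroup = {
  color: { brand: { $value: "#0f62fe", $type: "color" } },
  spacing: { md: { $value: "16px", $type: "dimension" } },
};

function toCssVariables(group: TokenGroup, path: string[] = []): string[] {
  return Object.entries(group).flatMap(([key, node]) =>
    "$value" in node
      ? [`  --${[...path, key].join("-")}: ${(node as Token).$value};`]
      : toCssVariables(node as TokenGroup, [...path, key])
  );
}

console.log(`:root {\n${toCssVariables(tokens).join("\n")}\n}`);
// :root {
//   --color-brand: #0f62fe;
//   --spacing-md: 16px;
// }
```

Longer term, this hand-rolled step is what a tool like Style Dictionary would do properly, but it makes the shape of the idea clear.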

I like this idea in principle — Figma and code grounded in a shared foundation. But I’m still working through a few speed bumps.

Speed Bump 1 — Units Don’t Translate Cleanly

Figma doesn’t always store units explicitly. A number used in text might represent pixels, but the same number elsewhere might mean something different. When exported, that distinction disappears.

For example:

  • line-height should be unitless in CSS, but Figma struggles to combine variables with line-height, so it ends up in pixels.
  • font-size: you might want rems, but all you get is pixels.

Without that intent, it’s tricky to produce clean tokens automatically.
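
In practice, that means the export step needs a normalisation pass that re-applies the intent Figma dropped. The rules below are my own assumptions about what each raw pixel value should become, not anything Figma provides.

```typescript
// normalise-units.ts — re-applying intent that the export loses.
// The mapping rules are assumptions on my part, not part of any Figma export.
const BASE_FONT_SIZE_PX = 16; // assumed root font size

// Figma hands back a bare pixel number; we decide what it becomes in CSS.
function toRem(px: number): string {
  return `${px / BASE_FONT_SIZE_PX}rem`;
}

function toUnitlessLineHeight(lineHeightPx: number, fontSizePx: number): string {
  // CSS prefers unitless line-height, but Figma only gives us pixels.
  return (lineHeightPx / fontSizePx).toFixed(2);
}

console.log(toRem(20));                    // "1.25rem" for a 20px font size
console.log(toUnitlessLineHeight(28, 20)); // "1.40" for a 28px line-height on 20px text
```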

Speed Bump 2 — Getting the Variables Out

Accessing variables programmatically requires Enterprise-level API access — a tier above Organisation. MCP can reach a bit further, but I’d rather keep the process lightweight enough to run in a CI/CD pipeline.

For now, I’ve settled on a plugin-based approach.

My Current Toolchain (Work-in-Progress)

I like this plugin-based approach because it aligns with open standards and doesn’t require custom scripts that I’d need to maintain later.

If I can nail this mapping, Figma, code, and AI prompts could all share the same vocabulary — colours, spacing, typography — without translation loss.

What I’ve Learned (So Far)

  1. AI respects fences. The clearer your constraints (tokens, components, names), the more faithful its output.
  2. Tokens beat vibes. Figma variables help, but styles + variables + naming conventions work better.
  3. Library constraints leak. Figma Make + Shadcn means you’re theming Shadcn; step outside if you want true control.
  4. Manual glue doesn’t scale. Copying values between Figma and code is fine for experiments, fatal for real workflows.
  5. Access matters. Sometimes the biggest blocker isn’t capability — it’s licensing.

Where This Might Go Next

My goal now is to make this whole flow scriptable — a single pipeline where:

Figma variables → W3C JSON → Style Dictionary → CSS tokens, and those tokens feed Storybook, Copilot prompts, and future Figma Make projects.
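
I haven’t built that pipeline yet, so the following is only a sketch of the Style Dictionary step, assuming a v4-style API and that a plugin has already exported W3C-format JSON into a tokens/ folder; the paths and file names are placeholders.

```typescript
// build-tokens.ts — a sketch of the "Style Dictionary → CSS tokens" step.
// Assumes tokens were already exported from Figma as W3C-format JSON into ./tokens/;
// paths and file names are placeholders, not a working setup.
import StyleDictionary from "style-dictionary";

const sd = new StyleDictionary({
  source: ["tokens/**/*.json"],
  platforms: {
    css: {
      transformGroup: "css",
      buildPath: "build/css/",
      files: [{ destination: "tokens.css", format: "css/variables" }],
    },
  },
});

// In CI this would run on every push, keeping Figma and code in step.
await sd.buildAllPlatforms();
```

The appeal is that nothing in that file is bespoke: it is configuration for an existing open-source tool rather than a script I’d have to maintain.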

If I can get that loop running, I’ll finally have something close to what I set out for: not AI designing for me, but AI designing with me — grounded in a shared language of tokens and intent.