Stitch Prompt Guide

Hi @Thomas_Schroeder ,

I found a way to resolve the inconsistency issues: copy the HTML code into an editor, select the code snippet I want Stitch to use as a reference, and provide the CSS information along with it.

There’s also the option of specifying the ID of the layout you want it to use as a reference, but for finer adjustments, I now prefer to use the HTML reference instructions directly :smiling_face_with_sunglasses:

I would really appreciate it if you could show me the ChatGPT assistant prompt. Thanks!

I encounter the same issue when using AI Studio. I’ve found that adding the phrase, “Please refrain from altering any other functionalities or design elements,” during adjustments yields favorable results. You might consider giving it a try.

Stitch prompt tips summary:

  1. Clear layout structure - Use “left/middle/right” to describe

  2. Ambiance adjectives - Like “modern, simple, similar to Notion”

  3. Change one thing at a time - Avoid mixing multiple features

  4. Refer to mature tools - Like “style reference Linear/Figma”

  5. Include mock data - Make the design more realistic
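
For example, a prompt combining these tips might read: “Create a project dashboard with a left sidebar for navigation, a task list in the middle filled with mock data like ‘Design review, due Friday’, and a detail panel on the right. Keep it modern and simple, similar to Notion, with Linear as a style reference.”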


Hi Ross, would love to learn more about what you shared. I feel it would be helpful to people like me getting into the basics of coding and AI prompting. Thank you for this :slight_smile:

It’s unacceptable that Stitch continues generating content without giving users a stop option or asking for proper consent. This results in unnecessary output and wasted time. Basic controls like stop, cancel, and confirmation should be mandatory in any AI tool. Please fix this.

Hey @mark_kow thanks for sharing this here! This is really helpful for our new users and a great addition to the community. We truly appreciate the effort you put into this!

Hey @Angelo_Ramos you can start with a very simple prompt in Stitch! For example, try asking it to: ‘Create a homepage for a gym app.’ Stitch will generate a creative layout with all the essential elements. From there, you can easily modify the design to fit your needs and continue generating the next screens for your project. We’re excited to see what you build!

Hi @Sadiq, we completely understand the need for a ‘Stop’ button. Please rest assured that this is already on our roadmap and being tracked by our development team. We are committed to making the generation process more flexible for you. Thanks for sticking with us!

Hey Angelo, what’s on your mind? Happy to help. It might sound discouraging, but good prompting comes from prompting. I have 20k+ prompts across various models. This helped me better understand propositional logic for different models, how they parse input, and what they prioritize. There is no golden prompt that works for all models; each needs its own approach, which also changes with every model upgrade. I suggest experimenting with Antigravity, an AI code assistant enabled IDE (it has an outage today), or something similar, e.g. Cursor. Another approach would be to use Google AI Studio or Lovable to see how your prompts translate to visuals. If you manage to get the results you want, you will slowly understand which key prompts led you there. If there is anything else I can help with, feel free to ping me.


Hi, thanks mate for sharing, really helpful for me.

  1. How do I create screen designs based on workflows?
  2. How do I integrate with Antigravity?

Hey @chuluujav_lkhagva I hope this helps.
Introducing the Stitch MCP Server. :electric_plug:

You can now pipe Stitch designs directly into your favorite tools like Antigravity.

  • Generate new screens without leaving your IDE
  • Fetch the code from any design
  • Inject context: Give your agent full visual awareness

Docs and more information :backhand_index_pointing_down:

Hi @RISHABH_CHAUHAN,

First, I want to appreciate the direction Stitch is taking, especially with the MCP Server and IDE integration. As a founder managing multiple startups, the dream of a “Design-to-Code” pipeline is exactly what I need to accelerate my MVPs.

However, after extensively testing the current models (including Pro), I’ve hit a significant roadblock that prevents me from using Stitch for production: The model lacks “Native Mobile” DNA.

I attempted to design a minimalist, futuristic iOS utility (think “Liquid Glass” aesthetics, MeshGradients, and native blur materials). Instead of generating a clean, haptic-focused mobile interface, the model hallucinated a sci-fi “Power Plant” dashboard with generic web-style cards and non-native elements.

The Gap: It feels like Stitch is heavily biased towards Web/CSS paradigms. It struggles to distinguish between a “Web Dashboard” and a “Native Mobile App.” It doesn’t seem to “think” in SwiftUI primitives (Materials, Springs, VStacks, Dynamic Islands).
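
To make the gap concrete, here is a minimal SwiftUI sketch of the vocabulary I would expect a mobile-first model to reach for (this is my own illustration, not Stitch output; the names and values are hypothetical):

```swift
import SwiftUI

// A minimal, hypothetical sketch of "native mobile" primitives (iOS 18+ for MeshGradient):
// system materials instead of hex fills, mesh gradients, spring motion, and 44pt touch targets.
struct GlassCard: View {
    @State private var isTracking = false

    var body: some View {
        ZStack {
            // MeshGradient as the backdrop rather than a solid hex color or PNG asset.
            MeshGradient(
                width: 3, height: 3,
                points: [
                    [0.0, 0.0], [0.5, 0.0], [1.0, 0.0],
                    [0.0, 0.5], [0.5, 0.5], [1.0, 0.5],
                    [0.0, 1.0], [0.5, 1.0], [1.0, 1.0]
                ],
                colors: [
                    .indigo, .purple, .blue,
                    .blue, .cyan, .indigo,
                    .purple, .blue, .cyan
                ]
            )
            .ignoresSafeArea()

            VStack(spacing: 16) {
                // "Energy" framed as wellness/tracking, not kilowatts.
                Text("Daily Energy")
                    .font(.title2.bold())

                Toggle("Track activity", isOn: $isTracking)
                    .frame(minHeight: 44) // HIG-sized touch target for a finger tap.
            }
            .padding(24)
            // Native system material instead of a custom card background.
            .background(.ultraThinMaterial, in: RoundedRectangle(cornerRadius: 24))
            .padding()
            // Spring motion rather than a linear web-style transition.
            .animation(.spring(duration: 0.4), value: isTracking)
        }
    }
}
```

The point is not this particular layout, but that system materials, mesh gradients, spring motion, and HIG-sized touch targets are the primitives a “Native Mobile First” mode would need to speak.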

My Request: For us mobile founders, we need a “Native Mobile First” mode. We need a model that:

  1. Understands Modern HIG: Knows that “Energy” in a mobile context often means “Wellness/Tracking,” not “Kilowatts/Voltage.”

  2. Native Materials: Prioritizes native system materials (UltraThinMaterial, MeshGradient) over solid hex colors or custom PNG assets.

  3. Mobile Ergonomics: Understands reachability, touch targets, and the difference between a mouse click and a finger tap.

I really want to make Stitch my primary prototyping tool, but until it speaks “Native iOS,” I’m forced to stick to manual coding for that high-end feel.

Looking forward to seeing this evolve!

Can images be uploaded and included with the original prompt?

Just one word: wow.
Amazing tool for frontend dev.

It’s a great tool, but I want to use it on a larger scale. Is there an API? I want to deploy Stitch to our own internal workflow, rather than using it on the official website or in an IDE. How can I do that?

Hey @Wayndrous yes, you can definitely input images along with your prompts in Stitch to get the results you’re looking for.

Thanks for the kind words, Akhilesh! It’s been great helping out. Keep experimenting with Stitch and let us know what you think!

This happened to me. I tried to create exactly what you did, but only around 60-70% came out the way I wanted. So I now give a short refining prompt for each generated screen that needs adjusting.