I found a way to resolve the inconsistency issues: copy the HTML code into an editor, select the code snippet you want Stitch to use as a reference, and provide the CSS information along with it.
There’s also the option of specifying the ID of the layout you want it to use as a reference, but for finer adjustments I now prefer to use HTML reference instructions directly.
I encounter the same issue when using AI Studio. I’ve found that adding the phrase, “Please refrain from altering any other functionalities or design elements,” during adjustments yields favorable results. You might consider giving it a try.
Hi Ross, would love to learn more about what you shared. I feel it would be helpful to people like me getting into the basics of coding and AI prompting. Thank you for this
It’s unacceptable that Stitch continues generating content without giving users a stop option or asking for proper consent. This results in unnecessary output and wasted time. Basic controls like stop, cancel, and confirmation should be mandatory in any AI tool. Please fix this.
Hey @mark_kow thanks for sharing this here! This is really helpful for our new users and a great addition to the community. We truly appreciate the effort you put into this!
Hey @Angelo_Ramos you can start with a very simple prompt in Stitch! For example, try asking it to: ‘Create a homepage for a gym app.’ Stitch will generate a creative layout with all the essential elements. From there, you can easily modify the design to fit your needs and continue generating the next screens for your project. We’re excited to see what you build!
Hi @Sadiq, we completely understand the need for a ‘Stop’ button. Please rest assured that this is already on our roadmap and being tracked by our development team. We are committed to making the generation process more flexible for you. Thanks for sticking with us!
Hey Angelo, what’s on your mind? Happy to help. It might sound discouraging, but good prompting comes from prompting. I have 20k+ prompts across various models. This helped me better understand propositional logic for different models, how they parse input, and what they prioritize. There is no golden prompt that works for all models; each needs its own approach, which also changes with every model upgrade. I suggest experimenting with Antigravity, an AI-code-assistant-enabled IDE (it had an outage today), or something similar, e.g. Cursor. Another approach is to use Google AI Studio or Lovable to see how your prompts translate into visuals. If you manage to get the results you want, you will slowly come to understand which key prompts led you there. If there’s anything else I can help with, feel free to ping me.
First, I want to appreciate the direction Stitch is taking, especially with the MCP Server and IDE integration. As a founder managing multiple startups, the dream of a “Design-to-Code” pipeline is exactly what I need to accelerate my MVPs.
However, after extensively testing the current models (including Pro), I’ve hit a significant roadblock that prevents me from using Stitch for production: The model lacks “Native Mobile” DNA.
I attempted to design a minimalist, futuristic iOS utility (think “Liquid Glass” aesthetics, MeshGradients, and native blur materials). Instead of generating a clean, haptic-focused mobile interface, the model hallucinated a sci-fi “Power Plant” dashboard with generic web-style cards and non-native elements.
The Gap: It feels like Stitch is heavily biased towards Web/CSS paradigms. It struggles to distinguish between a “Web Dashboard” and a “Native Mobile App.” It doesn’t seem to “think” in SwiftUI primitives (Materials, Springs, VStacks, Dynamic Islands).
My Request: For us mobile founders, we need a “Native Mobile First” mode. We need a model that:
Understands Modern HIG: Knows that “Energy” in a mobile context often means “Wellness/Tracking,” not “Kilowatts/Voltage.”
Native Materials: Prioritizes native system materials (UltraThinMaterial, MeshGradient) over solid hex colors or custom PNG assets.
Mobile Ergonomics: Understands reachability, touch targets, and the difference between a mouse click and a finger tap.
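To make this concrete, here’s a rough sketch of the kind of SwiftUI a “Native Mobile First” mode should reach for. This is purely illustrative (the view names and copy are my own, not anything Stitch produces); the point is the use of system materials and a MeshGradient instead of solid hex colors and web-style cards:

```swift
import SwiftUI

// Illustrative only: an "Energy" card in the wellness sense, built from
// native system materials rather than a web-style solid-color card.
struct WellnessCard: View {
    var body: some View {
        VStack(alignment: .leading, spacing: 12) {
            Text("Energy")                // wellness/recovery, not kilowatts
                .font(.headline)
            Text("Recovered 84%")
                .font(.largeTitle.bold())
        }
        .padding(20)
        .frame(maxWidth: .infinity, alignment: .leading)
        // Native blur material instead of a hex background or PNG asset.
        .background(.ultraThinMaterial, in: RoundedRectangle(cornerRadius: 24))
    }
}

struct ContentView: View {
    var body: some View {
        ZStack {
            // iOS 18+ MeshGradient as the "Liquid Glass" backdrop.
            MeshGradient(width: 2, height: 2,
                         points: [[0, 0], [1, 0], [0, 1], [1, 1]],
                         colors: [.indigo, .purple, .teal, .mint])
                .ignoresSafeArea()
            WellnessCard()
                .padding()
        }
    }
}
```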
I really want to make Stitch my primary prototyping tool, but until it speaks “Native iOS,” I’m forced to stick to manual coding for that high-end feel.
It’s a great tool, but I want to use it on a larger scale. Is there an API? I want to deploy Stitch to our own internal workflow, rather than using it on the official website or in an IDE. How can I do that?
This happened to me too. I tried to create exactly what you did, but only around 60–70% came out the way I wanted. So I then gave a short refinement prompt for each generated screen that needed it.