Stitch Prompt Guide

Yes, this tripped me up on my first trial use of Stitch. I ended up with many versions of each screen, one for each change, and each version contained only the current modification, i.e. the screen was not built up incrementally.

Is the way to handle this to delete the previous screen on the canvas and replace it with the new screen, or is there a prompt technique that will retain the current screen and add in the new changes?

One other very useful tip I got from ChatGPT is to give it the details of the app I want and tell it to create a Stitch prompt broken into Stitch component blocks so you can reuse them across all screens. This makes for better consistency, reusability, maintainability, and scaling.

ChatGPT will then create a JSON-style structure to feed to Stitch as the prompt 🙂
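For illustration, such a component-block prompt might look something like the sketch below. The block names, fields, and screen descriptions are hypothetical, not an official Stitch schema; the idea is simply to define shared components once and reference them from every screen.

```python
import json

# Hypothetical component-block prompt structure (illustrative field names only).
stitch_prompt = {
    "app": {
        "name": "MyFitnessApp",
        "style": {"theme": "light", "primary_color": "#2E7D32", "font": "Inter"},
    },
    "components": {
        "top_bar": "Fixed top bar with the app logo on the left and a profile avatar on the right.",
        "primary_button": "Rounded, full-width button in the primary color with a bold white label.",
        "card": "Elevated card with 16px padding, rounded corners, a title, and supporting text.",
    },
    "screens": [
        {
            "name": "Onboarding",
            "uses": ["top_bar", "primary_button"],
            "content": "Three-step onboarding with a consistent skip/next/continue button layout.",
        },
        {
            "name": "Dashboard",
            "uses": ["top_bar", "card"],
            "content": "Grid of cards summarizing daily activity.",
        },
    ],
}

# Paste the resulting JSON (plus a short instruction) into Stitch as the prompt.
print(json.dumps(stitch_prompt, indent=2))
```

Because every screen references the same component blocks, the generated pages are more likely to share the same buttons, bars, and cards instead of reinventing them per screen.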

Hey @Akshat_Dadhich, I discussed this with the internal team. I'd like to understand more, specifically which kinds of rules you want applied automatically. Right now we do not have this feature; the only way is to copy and paste those rules into the prompt.
Thanks!

Great tool. I just got to know about it.


This is my prompt pipeline for arpa.chat. There are hidden, public view-only, and public editable prompts that append to the pipeline.

For creative tasks like coding, generative media, etc., there is also a prompt enhancer that calls the Gemini API to take whatever the user is writing, improve it, and tailor it to the underlying AI’s logic. E.g. if you are prompting Veo and ask for “a dog running”, the enhancer will take that and create the JSON structure Veo would expect, filling in the blanks: what kind of dog, where it is running, background details, camera details, and more. This makes it easy to generate high-quality outputs from basic prompts.
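A minimal sketch of that enhancer step, assuming the `google-generativeai` Python client, an API key in a `GEMINI_API_KEY` environment variable, and placeholder model and instruction text (this is not arpa.chat’s actual implementation):

```python
import os
import google.generativeai as genai

# Enhancer sketch: expand a basic user prompt into a richer, structured prompt
# before handing it to the downstream generative model.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model choice

ENHANCER_INSTRUCTION = (
    "Rewrite the user's request as a JSON object for a text-to-video model, "
    "with keys: subject, action, setting, background_details, camera, lighting, style. "
    "Fill in missing details with sensible, concrete choices. Return only JSON."
)

def enhance_prompt(user_prompt: str) -> str:
    """Return a detailed, structured prompt built from a basic one."""
    response = model.generate_content(f"{ENHANCER_INSTRUCTION}\n\nUser request: {user_prompt}")
    return response.text

if __name__ == "__main__":
    # "a dog running" -> structured prompt specifying breed, setting, camera, etc.
    print(enhance_prompt("a dog running"))
```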

In addition, the customization itself can be streamlined in a user-friendly way. Most AI platforms allow for prompt customization, but they expect everyone to know how to prompt like a pro. We all know that is hardly the case, even with experienced users.

For example, you can give users toggle buttons or selection boxes to customize their settings, which would not just change the UI/UX but also append relevant prompts to the pipeline. E.g. if a user wants the AI to give short responses, they only click “short”, and in the backend you append a detailed prompt that teaches the agent to respond concisely.
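One simple way to wire that up is a mapping from UI options to prompt fragments that get appended to the base system prompt. The option names and fragment wording below are purely illustrative:

```python
# Hypothetical mapping from UI toggles to prompt fragments.
STYLE_FRAGMENTS = {
    "short": "Keep answers under three sentences. Omit preamble and caveats unless asked.",
    "detailed": "Explain reasoning step by step and include relevant examples.",
    "formal": "Use a formal, professional tone and avoid slang.",
}

def build_system_prompt(base_prompt: str, selected_options: list[str]) -> str:
    """Append the fragment for each toggled option to the base system prompt."""
    fragments = [STYLE_FRAGMENTS[o] for o in selected_options if o in STYLE_FRAGMENTS]
    return "\n\n".join([base_prompt, *fragments])

# The user only clicks "short"; the backend quietly adds the detailed instruction.
print(build_system_prompt("You are a helpful assistant.", ["short"]))
```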

Feedback 09.2025:

Creating several pages does not produce consistency.
Each page seems to be created independently of every other page.
I got several different “skip/next/continue/…” button combinations when creating onboarding pages. That is unfortunate.
Asking for just one (!) page to change the “skip” button to the design of the (name) page did not work. “Taking” designs from one page to another is not working, even when giving either of them as input to the prompt.

  • It is good at creating pages from scratch as they are described.
  • Changing things works sometimes. Changing text works; changing elements or the design does not really work,
    which is pretty frustrating.

Being able to pick or pinpoint elements of a page and prompt Stitch to use them on a specific page, or on all pages, would help greatly.
Or moving elements via drag and drop into a (new) page to show what I want, and prompting to recreate the page → game changer.

The inconsistency is annoying, and the only way around it is to download or copy the code and “stitch” the pages together manually! There are great ideas in every page created.
Stitch should create a kind of page framework to be used for all subsequent pages of a workflow/user-flow group, then add the elements for each page’s purpose.

Great, thanks! Let us know what you like and what could be better!

Hey @Thomas_Schroeder Thanks so much for taking the time to write out this detailed feedback—we genuinely appreciate it!
Please know we have already filed bugs to address these immediate failures. Moreover, your suggestions for a global framework and visual element pinpointing are brilliant, and we’ve captured them as high-priority feature requests. We agree that system-level thinking is the ultimate fix, and your input will drive that development. We’re on it!


Today I designed two pages, and I am literally amazed by the professional design it gave me.

Hey @adil_balti, happy to hear that! Please let us know what changes you would like to see in Stitch and what your favorite features are so far.

Is there any way to integrate a design system? For example, the Carbon Design System or Material Design?

There’s always some inconsistency when refining each screen. Sometimes the color changes, or the styling of an element changes, and so on. It’s still pretty cool.

But what has helped me is using ChatGPT to generate the prompts that I feed to Google Stitch.

Hello, I’m new to Stitch. I designed my full SaaS web application, but I am finding it difficult to get the source files. Please can you guide me through it?

Hey @Akhigbe_Kelvin, welcome to the forum! We really appreciate your feedback.
To get the best results when referencing an input image, you need to prompt Stitch in depth, mentioning every detail you want. If you need more options or want to change a specific part of the design, you can use the Annotate function to generate multiple screens from the same style. Thanks!

Hey @Amos_Joseph, once all your design screens are generated, look to the left of the interface for the three lines (menu icon). Click that to find the “Download All” option. This will save all your designs and the associated source code to your local machine. Thanks!