I can’t put myself in the shoes of the folks who are way ahead of me and have apps out there that they are using to run a business, but I can guess at how impactful this is.
We are, however, early adopters of a new technology that is clearly empowering, and we are all still finding our way. We don’t know what pressures the team delivering this is under. Sure, more comms would be welcome: it’s better to have some comms, even if it’s not great news, than no comms at all!
I just wanted to report my experience, with some diagnostics that may be of use.
Also, in case it gets lost among the many posts about ongoing issues, here is a reply I gave to someone who felt the Code Assistant seemed to review their entire codebase every time they submitted an instruction:
I’ve refined this with AI to pull out my app-specific info and just give the guidelines I’ve used. I MAY BE COMPLETELY WRONG!
Anyway, may be of use so here we go:
——
After working with the Code Assistant in AI Studio Build for a while, I found that the biggest improvement in output quality came from giving the assistant a small amount of architectural context.
Instead of relying on ad-hoc prompts, I now use two simple elements:

- A system instruction that establishes engineering rules.
- A PROJECT_STANDARDS.md file that describes the architecture and conventions of the app.
The system instruction reminds the assistant how it should behave when generating code. For example:

- follow modular patterns
- check for existing components before creating new ones
- respect the project’s state management approach
- avoid unnecessary re-renders
- consult the project standards when unsure
Example structure:

```
You are an expert React/TypeScript engineer.
Follow these project laws:

1. Modularity
   Break large files into hooks or reusable components.
2. Reuse
   Search the codebase before creating new components.
3. State
   Use the project's global state system instead of deep prop drilling.
4. Performance
   Memoize expensive calculations and stabilise callbacks.
5. Source of truth
   If unsure about architecture or schema, read PROJECT_STANDARDS.md.
```
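In a React component, rule 4 usually means reaching for `useMemo`/`useCallback`. As a language-level sketch of the same memoization idea (all names here are illustrative, not from any real project), something like this captures what "compute once, reuse after" looks like:

```typescript
// Hypothetical memoize helper: caches results per distinct input so an
// expensive calculation only runs once for each argument value.
function memoize<T, R>(fn: (arg: T) => R): (arg: T) => R {
  const cache = new Map<T, R>();
  return (arg: T): R => {
    if (!cache.has(arg)) {
      cache.set(arg, fn(arg)); // compute once per distinct input
    }
    return cache.get(arg)!;
  };
}

let calls = 0;
const slowSquare = memoize((n: number) => {
  calls++; // track how often the underlying work actually runs
  return n * n;
});

console.log(slowSquare(4)); // 16 (computed)
console.log(slowSquare(4)); // 16 (served from cache)
console.log(calls);         // 1
```

`useMemo` applies the same principle per render, keyed on a dependency array instead of a function argument.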
The second piece is the project standards file. This acts as the architectural “source of truth” for the assistant.
Its sections describe the app’s architecture and conventions, which gives the model enough situational awareness to avoid inventing patterns that don’t exist in the project.
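For illustration only, a PROJECT_STANDARDS.md skeleton might look like the following; the section names and details are my own guesses, not a prescribed format:

```markdown
# PROJECT_STANDARDS.md

## Architecture
- React + TypeScript, feature-folder layout under src/features/

## State management
- Global state lives in a single store; avoid prop drilling more than two levels

## Components
- Check src/components/ for an existing component before creating a new one

## Performance
- Memoize expensive calculations; keep callback identities stable
```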
The difference in behaviour has been noticeable. Without context, the assistant tends to invent its own patterns. With context:

- code tends to follow existing conventions
- there are fewer architectural surprises
- outputs are easier to maintain
I’m curious whether others are doing something similar with Build or other AI coding environments.
Specifically:

- Do you maintain an architecture contract for the assistant?
- Do you enforce structured project rules via system instructions?
- Have you found other ways to keep the assistant aligned with your codebase?
Would be interested to hear what approaches are working well for others.
——
Of course the AI added those questions, but I actually would be interested to know.
You can probably post that to the Code Assistant and ask it to implement this for your app.
Note that the System Instructions seem to be a global setting across all your apps, but the PROJECT_STANDARDS.md can be customised for each app.
Hope this helps.
E & OE!
