Looks like some stability is returning

There’s hope!

Been working well for me today using 3.1 Pro Preview across quite a lot of prompts, with faster completion times and only one timeout. Seems more like the usage we got before the big update.

Free quota ran out, so I selected my paid key, but this did not work :frowning:

I’ve not published this app yet though, so can’t comment on that. Also for info, I’m UK based, and I started this app AFTER the big update on the 23rd Feb:

What stability?

demo viewer hanging again, and 99% of my code files are missing again…


I can’t put myself in the shoes of the folks who are way ahead of me and have apps out there that they are using to run a business. I can guess at how impactful this is, though.

We are, however, early adopters of a new technology that is clearly empowering, but we are all finding our way. We don’t know the pressures the team delivering this is under. Sure, more comms would be welcomed - it’s better to have some comms, even if it’s not great news, than no comms at all! :slight_smile:

I just wanted to report my experience, with some diagnostics that may be of use.

Also, in case it gets lost among the many posts about ongoing issues, I put this reply to someone who felt the Code Assistant seemed to review their entire codebase every time they submitted an instruction:


I’ve refined this with AI to pull out my app-specific info and just give the guidelines I’ve used - I MAY BE COMPLETELY WRONG!…

Anyway, may be of use so here we go:

——
After working with the Code Assistant in AI Studio Build for a while, I found that the biggest improvement in output quality came from giving the assistant a small amount of architectural context.

Instead of relying on ad-hoc prompts, I now use two simple elements:

  1. A system instruction that establishes engineering rules.

  2. A PROJECT_STANDARDS.md file that describes the architecture and conventions of the app.

The system instruction reminds the assistant how it should behave when generating code. For example:

  • follow modular patterns

  • check for existing components before creating new ones

  • respect the project’s state management approach

  • avoid unnecessary re-renders

  • consult the project standards when unsure

Example structure:

You are an expert React/TypeScript engineer.

Follow these project laws:

1. Modularity
Break large files into hooks or reusable components.

2. Reuse
Search the codebase before creating new components.

3. State
Use the project's global state system instead of deep prop drilling.

4. Performance
Memoize expensive calculations and stabilise callbacks.

5. Source of truth
If unsure about architecture or schema, read PROJECT_STANDARDS.md.
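To illustrate what law 4 means in practice (this example is mine, not part of the original instruction): in React you would reach for useMemo/useCallback, but the underlying idea is plain caching. A minimal framework-free sketch in TypeScript:

```typescript
// Hypothetical sketch: a tiny single-argument memoiser showing what
// "memoize expensive calculations" means outside React.
// (Inside a React component you would use useMemo/useCallback instead.)
function memoize<A, R>(fn: (arg: A) => R): (arg: A) => R {
  const cache = new Map<A, R>();
  return (arg: A): R => {
    if (cache.has(arg)) return cache.get(arg)!; // serve cached result
    const result = fn(arg);
    cache.set(arg, result);
    return result;
  };
}

let calls = 0; // track how often the underlying work actually runs
const slowSquare = memoize((n: number) => {
  calls += 1;
  return n * n;
});

console.log(slowSquare(4)); // 16 - computed
console.log(slowSquare(4)); // 16 - served from cache
console.log(calls);         // 1  - the expensive work ran only once
```

The same stable-reference idea is what stops unnecessary re-renders: if the value (or callback) doesn’t change identity, dependent components don’t need to update.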

The second piece is the project standards file. This acts as the architectural “source of truth” for the assistant.

Typical sections include:

  • tech stack overview

  • directory structure

  • state management approach

  • database entities

  • authentication flow

  • component conventions

  • AI integration rules

This gives the model enough situational awareness to avoid inventing patterns that don’t exist in the project.
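For concreteness, here’s a hypothetical skeleton of such a file - the headings mirror the list above, but every detail is an invented placeholder, not taken from my actual app:

```markdown
# PROJECT_STANDARDS.md (example skeleton - all details are placeholders)

## Tech stack
React + TypeScript, with a single global state store and Firebase for auth/DB.

## Directory structure
- src/components/ - reusable UI components
- src/hooks/ - shared hooks
- src/state/ - global stores

## State management
One store per domain; no prop drilling deeper than two levels.

## Database entities
User, Project, LogEntry (schemas documented alongside the stores).

## Authentication flow
Firebase Auth; all routes except /login require a signed-in user.

## Component conventions
Function components only; one component per file; props typed with interfaces.

## AI integration rules
All model calls go through a single service module; never call the API directly from components.
```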

The difference in behaviour has been noticeable:

Without context:

  • assistant introduces random patterns

  • inconsistent state management

  • duplicate components

With context:

  • code tends to follow existing conventions

  • fewer architectural surprises

  • easier to maintain outputs

I’m curious whether others are doing something similar with Build or other AI coding environments.

Specifically:

  • Do you maintain an architecture contract for the assistant?

  • Do you enforce structured project rules via system instructions?

  • Have you found other ways to keep the assistant aligned with your codebase?

Would be interested to hear what approaches are working well for others.

——

Of course AI added those questions, but actually I’d be interested to know.

You can probably post that to the Code Assistant and ask it to implement it for your app.

Note the System Instructions seem to be a global setting across all your apps, but the PROJECT_STANDARDS.md can be customised for each app.

Hope this helps.

E & OE!
:slight_smile:

Today I’ve been using the free quota, then was able to continue with a PAID KEY until I hit a limit there too, on project ‘penny’. I switched to my app ‘logit’ and was able to continue working on that with the PAID KEY.

penny - started after the big update; not ready to publish, maybe tomorrow

logit - started before the big update, and I have been able to re-publish it today.

Basic change to footer from 2025 to 2026 as ‘evidence’:

Reporting is now looking accurate:

I’ve cleared cache, not used incognito

The code generation piece actually worked for me today… first time in weeks I was able to get it to work. They also seem to have added native integration with Firebase, which is awesome. I was having to switch back and forth, setting up rules and keys. Now it all just integrates. Mind you I needed to spend an hour getting it to connect to the database I’d already had working, but whatever, it’s finally working again. HOWEVER, I can’t push my updates to GCP. I wish they’d just get a stable version going, then roll all their changes out to a test environment or something like everyone else in the world before constantly breaking features. It’s so frustrating, especially since I only have time to play with this nights / weekends.