How do I report a case of training data being output?

See title. Model is Gemini 2.0 Flash Experimental.

Could you explain further, please?

There seems to be a bug where reusing a certain prompt several times within the same conversation causes the model to start outputting text unprompted. In this specific case, it produced research articles related to the topic of the initial prompt.

The research papers were output verbatim, and I'm not sure whether it only reproduces ones that are open access.