Handling safety checks

I am facing issues while streaming the response using Vertex AI. I have looked through many docs but cannot find a way to handle the safety response. I want to raise an HTTP exception when a chunk fails due to a safety warning. Here is an example of how I am trying to do it:

for response in responses:
    logger.debug("response chunk %s", response)
    if response.get("finish_reason") == "SAFETY":
        logger.warning("Response is empty or blocked by safety filters.")
        raise HTTPException(
            status_code=400,
            detail="The response was blocked due to safety concerns. Please try rephrasing your query.",
        )
    logger.debug("response chunk data %s", response.text)
    yield response.text, self.conversation_id
    generated_response += response.text

Hi @Suraj_Sanwal,

Could you please clarify what specific issues you’re facing while streaming the response from Vertex AI? Are there problems with the response format, safety filtering, or handling the streamed chunks?

Yes, I was facing an issue with streaming the response: once the content failed due to safety, I tried to raise an HTTP exception in that case, but it didn't work for me. So I have now gone with the solution of yielding the response with an added error identifier in the stream itself.
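For other readers: once a streaming HTTP response has started, the status line has already been sent, so raising an `HTTPException` mid-stream cannot change the status code. Below is a minimal sketch of the workaround described above, yielding an in-band error marker instead of raising. The `Chunk` dataclass and `stream_with_error_marker` function are hypothetical stand-ins for illustration; with the real Vertex AI SDK you would inspect the streamed chunk's finish reason instead.

```python
import json
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for a streamed Vertex AI chunk; the real SDK
# object exposes the generated text and a finish reason on its candidates.
@dataclass
class Chunk:
    text: str
    finish_reason: Optional[str] = None

def stream_with_error_marker(responses):
    """Yield text chunks; when a chunk is blocked for safety, yield a
    JSON error marker the client can detect, rather than raising (an
    exception cannot become an HTTP status once streaming has begun)."""
    for response in responses:
        if response.finish_reason == "SAFETY":
            yield json.dumps({
                "error": "SAFETY",
                "detail": "The response was blocked due to safety concerns. "
                          "Please try rephrasing your query.",
            })
            return  # stop streaming after the error marker
        yield response.text

# Example: two normal chunks followed by a safety-blocked one.
chunks = [Chunk("Hello "), Chunk("world"), Chunk("", finish_reason="SAFETY")]
out = list(stream_with_error_marker(chunks))
```

On the client side, each received chunk can be checked for the `"error"` key to distinguish normal text from the safety marker.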