Hi,
I was wondering how safety parameters are used with the OpenAI library? For my use case I need to set them manually, but I don’t see that option anywhere.
Same question. For science and medical topics, we often see content blocked too easily with the default safety settings. learnlm-1.5-pro-experimental seems to have stricter defaults than the other models.
Well, LearnLM looks, to me at least, like it’s targeting school children, so stricter safety settings seem appropriate.
Yes, that’s good for children, but the model is very capable of enabling learning well beyond that audience. We also have the same issue with the ‘exp’ models. Our use case involves science and medicine topics for continuing education for healthcare professionals, and we also offer resources for prospective clinical trial participants, so we need to make the safety settings less strict for these audiences. I understand the model is experimental, but with the default settings we can’t even experiment with it. Meanwhile, for models that can be accessed through the Gemini API, we can easily set safety settings, e.g.:
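A rough sketch of what we do with the google-generativeai Python SDK; the model name, categories, and thresholds here are just illustrative choices from our setup, not a recommendation:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_GEMINI_API_KEY")

# Relax the thresholds for the categories that most often trip up
# legitimate science/medicine content. BLOCK_ONLY_HIGH still blocks
# anything the model rates as high-probability harmful.
safety_settings = [
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
]

model = genai.GenerativeModel("gemini-1.5-pro")  # example model name
response = model.generate_content(
    "Summarize common adverse events reported in phase I oncology trials.",
    safety_settings=safety_settings,
)
print(response.text)
```

There is simply no equivalent knob when we go through the OpenAI-compatible path.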
AFAIK, the ChatCompletionRequest doesn’t provide any properties to set safety settings specifically.
Hoping this is going to be provided by the team in the near future. Then again, this might actually be a limitation of the OpenAI API itself, which doesn’t offer any options for setting safety constraints. Perhaps someone could check their API docs; I couldn’t find anything (yet).
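One thing that might be worth testing: the openai Python client has an extra_body parameter that merges arbitrary extra fields into the request JSON. I have no idea whether Google’s OpenAI-compatible endpoint actually recognizes a safety_settings field passed this way; the field name and shape below are pure guesswork on my part, copied from the native Gemini API format:

```python
from openai import OpenAI

# Gemini's OpenAI-compatible endpoint (documented base URL).
client = OpenAI(
    api_key="YOUR_GEMINI_API_KEY",
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = client.chat.completions.create(
    model="gemini-1.5-pro",  # example model name
    messages=[{"role": "user", "content": "Explain basic pharmacokinetics."}],
    # extra_body injects fields the OpenAI schema doesn't know about.
    # Whether the server honors "safety_settings" here is untested speculation.
    extra_body={
        "safety_settings": [
            {
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "threshold": "BLOCK_ONLY_HIGH",
            }
        ]
    },
)
print(response.choices[0].message.content)
```

If the server just ignores unknown fields, this would fail silently, so you’d want to probe it with a prompt you know gets blocked under the defaults.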
Cheers, JoKi
Yep, roadblock there. Is it crazy to suggest that options unsupported by the OpenAI API could be set at the Gemini API-key level, and then applied server-side at the endpoint unless overridden? Besides handling configs, this could be an efficient way to cache system prompts.