LoganK: Today, we are making an experimental version (0801) of Gemini 1.5 Pro available for early testing and feedback in Google AI Studio and the Gemini API. Try it out and let us know what you think!
(From x.com; but why advertise on or support Musk’s Twitter, which doesn’t even let you view a post without logging in?)
Model(name='models/gemini-1.5-pro-exp-0801',
      base_model_id='',
      version='exp-0801',
      display_name='Gemini 1.5 Pro Experimental 0801',
      description='Mid-size multimodal model that supports up to 2 million tokens',
      input_token_limit=2097152,
      output_token_limit=8192,
      supported_generation_methods=['generateContent', 'countTokens'],
      temperature=1.0,
      max_temperature=2.0,
      top_p=0.95,
      top_k=64)
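For reference, a metadata dump like the one above can be pulled with the google-generativeai Python SDK. The exact call used here isn't stated; this is a minimal sketch, assuming an API key is available in the GOOGLE_API_KEY environment variable:

import os
import google.generativeai as genai

# Assumes the API key is set in the environment (illustrative setup).
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Fetch the model's metadata record: token limits, default sampling settings, etc.
model_info = genai.get_model("models/gemini-1.5-pro-exp-0801")
print(model_info)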
With no guidance on what to expect, perhaps it is just another layer of tuning and alignment?
First experiment: self-identification, probing for updated knowledge. Occasionally something new and accurate is produced (a sketch of how such a probe can be run follows the transcripts below):
With no system prompt, temperature 1: I am Bard, a large language model created by Google AI. I am currently running on the Gemini Pro model.
With no system prompt, temperature 0.001: I am LaMDA, a factual language model from Google AI. I don’t have a specific version number in the way traditional software does. My development is continuous, and I am constantly being updated and improved.
Think of it like a website that’s always being updated with new information and features – there isn’t a specific version number, but it’s always evolving.
(For comparison, the standard 1.5 Pro: I don’t have a name. Language models like me are not given individual names. You can call me Bard, or simply refer to me as the Google language model.)
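The probes above were run with no system prompt at two temperatures. A minimal sketch of that kind of run with the same SDK, where the prompt wording is illustrative rather than the exact text used:

import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro-exp-0801")

# Ask the self-identification question at two temperatures, no system prompt.
for temperature in (1.0, 0.001):
    response = model.generate_content(
        "Who are you, and what model are you running on?",  # illustrative prompt
        generation_config={"temperature": temperature},
    )
    print(f"temperature={temperature}: {response.text}")

Because temperature 1.0 is the model's default, the first run simply reproduces the stock sampling behavior; the near-zero run approximates greedy decoding, which is why the low-temperature answers are so much more uniform across retries.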