Context Cache Creation with Pro Model Variants

Hey! I have the following function which I’m using to create a context cache:

```python
from datetime import timedelta
from typing import Any

import google.generativeai as genai


def create_context_cache(self, video_file: Any) -> Any:
    """Creates a context cache from the uploaded video file."""
    try:
        cache = genai.caching.CachedContent.create(
            model=self.model_name_pro,
            system_instruction="You are an AI video editor assistant.",
            contents=[video_file],
            ttl=timedelta(minutes=30),  # cache expires after 30 minutes
        )
        print("created context cache")
        return cache
    except Exception as e:
        # Chain the original exception so the root cause is preserved
        raise RuntimeError(f"Failed to create context cache: {e}") from e
```
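For what it's worth, I now pre-check the token count (e.g. the value returned by `model.count_tokens`) before calling `create`. The 32,768 figure here is my reading of the documented minimum for the -001 models, so treat it as an assumption:

```python
# Hypothetical pre-check: compare a token count (e.g. from model.count_tokens)
# against the documented caching minimum before attempting to create the cache.
MIN_CACHE_TOKENS_001 = 32_768  # documented minimum for -001 models (my reading)


def can_cache(total_token_count: int, minimum: int = MIN_CACHE_TOKENS_001) -> bool:
    """Return True if the content is large enough to be cached."""
    return total_token_count >= minimum


print(can_cache(3_549))   # my video input -> False for -001
print(can_cache(40_000))  # -> True
```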

When I set self.model_name_pro = "models/gemini-1.5-pro-001", the cache doesn't get created for inputs (~3,500 tokens) that fall below the minimum token requirement of ~32k tokens. However, if I switch to "models/gemini-1.5-pro-002", the cache is created successfully. I thought the minimum token requirement for context caching was the same regardless of model variant:

```
CachedContent(
    name='cachedContents/v4vmw2zafkxl',
    model='models/gemini-1.5-pro-002',
    display_name='',
    usage_metadata={
        'total_token_count': 3549,
    },
    create_time=2024-11-25 00:19:28.067023+00:00,
    update_time=2024-11-25 00:19:28.067023+00:00,
    expire_time=2024-11-25 00:49:27.064035+00:00
)
```
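On a side note, since the TTL is only 30 minutes, I also added a small guard before reusing a cache. This is my own helper, not part of the SDK; it just compares the `expire_time` shown above against the current time:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional


def cache_is_expired(expire_time: datetime, now: Optional[datetime] = None) -> bool:
    """Return True if the cache's expire_time (tz-aware) has already passed."""
    now = now or datetime.now(timezone.utc)
    return now >= expire_time


# Usage sketch (cache.expire_time comes from the CachedContent object):
# if cache_is_expired(cache.expire_time):
#     cache = self.create_context_cache(video_file)
```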

Any updates here? @Logan_Kilpatrick @Manorama_Namboori