The batch mode API requires you to specify a model parameter on the BatchGenerateContent call (https://ai.google.dev/api/batch-api#google.ai.generativelanguage.v1beta.BatchService.BatchGenerateContent), but each request in batch.inputConfig is a GenerateContentRequest, which also accepts its own model field (https://ai.google.dev/api/batch-api#GenerateContentRequest). If the two differ, which model is actually used?
Is it possible to declare a different model per inline request in the batch API?
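To make the ambiguity concrete, here is a sketch of a REST request body for `POST .../v1beta/models/gemini-2.0-flash:batchGenerateContent` (field nesting taken from the linked docs; the model names and prompt are placeholders, and whether the inner model field overrides the one in the URL is exactly what's being asked):

```json
{
  "batch": {
    "displayName": "example-batch",
    "inputConfig": {
      "requests": {
        "requests": [
          {
            "request": {
              "model": "models/gemini-2.5-pro",
              "contents": [{"parts": [{"text": "Hello"}]}]
            },
            "metadata": {"key": "request-1"}
          }
        ]
      }
    }
  }
}
```

Here the URL names gemini-2.0-flash while the inline request names models/gemini-2.5-pro; the docs don't appear to say which one wins, or whether the inner field is simply ignored.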