I have a tiny TFLite model running on a microcontroller for real-time object detection, and I would like to experiment with sending the model's predictions to the Gemini 2.0 model over a C/C++ API for high-level planning/reasoning. Are there any suggestions for pairing the TFLite model's predictions with the Gemini model?
Just to clarify: TFLite is specialized for on-device inference, and it is currently not possible to run Gemini 2.0 Flash on-device. So if you intend to run Gemini itself on the microcontroller, the Gemini 2.0 model isn't available for that use case.
However, if you intend to take the TFLite predictions and feed them into Gemini 2.0 separately, that can be done via the API; there are examples in the cookbook I shared previously.
Hi,
There's always the REST API for integrating Gemini into programming languages that don't have an official SDK, e.g. C# (for which I created an SDK), or C/C++ as required by @ramkumarkoppu.