According to tensorflow/lite/arena_planner.cc on GitHub,
I wanted to check whether my understanding is correct.
My understanding is that input, output, and variable tensors are allocated statically during PlanAllocations(),
while intermediate tensors are allocated dynamically during ExecuteAllocations().
I would like to know how the dynamic memory allocation works here.
Hi @rita19991020 ,
As per my understanding, during the planning phase the input, output, and variable tensors are allocated statically, while the intermediate tensors are allocated dynamically during the execution phase. The sizes of the intermediate tensors are known once the model has been initialized, and ExecuteAllocations() then allocates memory for each intermediate tensor using the AllocateTensor() function. If there is enough free space in the arena, the address of the first byte of the tensor's data buffer is returned; if there is not enough free space, nullptr is returned and the allocation fails with an error. It is therefore important to size the arena appropriately so that execution does not run out of memory.
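To make the idea concrete, here is a minimal sketch of an arena-style allocator in C++. This is not the actual ArenaPlanner/SimpleMemoryArena code; the names TinyArena and Allocate are hypothetical, and it only illustrates the behavior described above: tensor buffers are carved out of one pre-sized block, and nullptr is returned when the block runs out of space.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Simplified arena: a single pre-sized buffer from which tensor buffers are
// carved out sequentially. Hypothetical illustration only, not TF Lite code.
class TinyArena {
 public:
  explicit TinyArena(size_t size_bytes) : buffer_(size_bytes), offset_(0) {}

  // Returns the address of the first byte of the new allocation, or nullptr
  // if the arena does not have enough free space left.
  uint8_t* Allocate(size_t size_bytes, size_t alignment = 16) {
    size_t aligned = (offset_ + alignment - 1) & ~(alignment - 1);
    if (aligned + size_bytes > buffer_.size()) {
      return nullptr;  // Out of arena memory: the caller reports an error.
    }
    offset_ = aligned + size_bytes;
    return buffer_.data() + aligned;
  }

 private:
  std::vector<uint8_t> buffer_;
  size_t offset_;
};

int main() {
  TinyArena arena(1024);                        // arena sized at setup time
  uint8_t* intermediate = arena.Allocate(256);  // e.g. an intermediate tensor
  if (intermediate == nullptr) {
    // Allocation failed: the arena was sized too small for this model.
    return 1;
  }
  return 0;
}
```

In the real interpreter the arena is sized and reused according to the plan computed earlier, but the failure mode is the same: if a requested buffer does not fit, the allocation fails and an error is reported.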
I hope this helps.
Thanks.