Layers have methods get_weights() and set_weights().
These calls work with a list of weight arrays in Numpy format. You can use get_weights() to get the list of weights and pick out the matrix you want (you have to know where it sits in the list). You can then use Numpy's transpose to create a new matrix, and set that as the matrix in the new weights. For Conv2D, the weights list is [4-D kernel array, 1-D bias vector].
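A minimal sketch of that round trip, using a toy square Dense layer so the transposed kernel still fits the shape the layer expects:

```python
import numpy as np
from tensorflow import keras

# Toy model: a single square Dense layer, just to show the round trip.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(4, name="dense"),
])

layer = model.get_layer("dense")
weights = layer.get_weights()          # list of Numpy arrays: [kernel (4, 4), bias (4,)]
weights[0] = np.transpose(weights[0])  # square kernel, so the shape still matches
layer.set_weights(weights)
```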
Now for the … part. The Conv2D weights have 4 dimensions. You have to work out what those dimensions are for the old and new kernels, and then copy each weight across.
You have to allocate a new Numpy array that holds the set of n×n weight matrices.
I need to draw these things out on paper to get all of the dimensions right.
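Here is a sketch of that bookkeeping, assuming (as an example) you are converting a PyTorch-style kernel of shape (out_channels, in_channels, n, n) into the (n, n, in_channels, out_channels) layout that Keras Conv2D stores; the explicit copy loop and a single np.transpose are equivalent:

```python
import numpy as np

# Hypothetical source kernel in PyTorch conv layout: (out_channels, in_channels, n, n).
old = np.random.rand(8, 3, 5, 5).astype(np.float32)
out_c, in_c, n, _ = old.shape

# Allocate the target array in Keras Conv2D layout: (n, n, in_channels, out_channels).
new = np.empty((n, n, in_c, out_c), dtype=old.dtype)
for o in range(out_c):
    for i in range(in_c):
        new[:, :, i, o] = old[o, i]   # copy each n x n kernel across

# Equivalent one-liner: permute the axes instead of looping.
assert np.array_equal(new, np.transpose(old, (2, 3, 1, 0)))
```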
This answer from StackOverflow recommends transposing the images and keeping the “old” matrix multiplies as they are:
The image shape is (N, C, H, W) and we want the output to have shape (N, H, W, C). Therefore we need to apply tf.transpose with a well-chosen permutation perm.
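For NCHW to NHWC that permutation is [0, 2, 3, 1]; a quick sketch:

```python
import tensorflow as tf

# Hypothetical batch of images in channels-first (N, C, H, W) layout.
images_nchw = tf.random.uniform((2, 3, 32, 32))

# Move the channel axis to the end: (N, C, H, W) -> (N, H, W, C).
images_nhwc = tf.transpose(images_nchw, perm=[0, 2, 3, 1])
print(images_nhwc.shape)  # (2, 32, 32, 3)
```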
It may be possible to start transposing the weights in the convolutional layers, but as you say the issue will be when you get to a flatten and dense layer, assuming you’re performing classification here.
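To make the flatten/dense issue concrete: when the flatten order changes from channels-first to channels-last, the rows of the Dense kernel that follows it have to be permuted to match. A sketch with hypothetical shapes:

```python
import numpy as np

C, H, W, units = 64, 7, 7, 10          # hypothetical feature-map shape and Dense width

# Old Dense kernel: row order follows a channels-first flatten (c, h, w).
old_kernel = np.random.rand(C * H * W, units).astype(np.float32)

# arr[c, h, w] is the channels-first flat index of element (c, h, w);
# reordering those indices into (h, w, c) order gives, for each new row,
# the old row it should be copied from.
idx = np.arange(C * H * W).reshape(C, H, W).transpose(1, 2, 0).ravel()

new_kernel = old_kernel[idx]           # rows now match a channels-last flatten
# The bias does not depend on flatten order, so it stays as-is.
```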
You will also need to ensure that the Flatten layer uses the correct format (channels_first vs channels_last, set via its data_format parameter).
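For reference:

```python
from tensorflow import keras

# Tell Flatten the incoming feature map is channels-first (N, C, H, W).
flatten = keras.layers.Flatten(data_format="channels_first")
```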
If this doesn’t work, another approach could be to create a custom transpose layer as the first layer of the network for tflite. That way you don’t need to change any weights throughout the network; this layer simply transposes the input image into the format the rest of the model expects.
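A sketch of that idea, assuming the tflite input arrives as NHWC while the trained layers expect NCHW (the 32×32×3 input shape is just an example):

```python
import tensorflow as tf
from tensorflow import keras

# First layer: accept NHWC input (what tflite expects) and transpose it
# into the NCHW layout the rest of the trained model was built for.
inputs = keras.Input(shape=(32, 32, 3))
nchw = keras.layers.Lambda(lambda x: tf.transpose(x, perm=[0, 3, 1, 2]))(inputs)

# ... the original channels-first layers would follow here, weights untouched.
model = keras.Model(inputs, nchw)
print(model.output_shape)  # (None, 3, 32, 32)
```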
I’m answering on my phone while travelling so apologies for no links and any duplication of answers.