Is it possible to extract the feature maps right after each conv layer as NumPy arrays, do computations on them, and then convert the resulting feature-map arrays back to tensors to feed them to the next layer in the model?
If that is possible, please let me know, because I'm stuck; I have tried to do this many times and failed.
To make it clear, please have a look at the following example:
from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D

inputs = Input(shape=(48, 48, 3))
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1)
#### here I need to get the activation maps of conv1 as NumPy arrays ####
pool1 = MaxPooling2D((2, 2))(conv1)
# shape = (None, 24, 24, 32)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1)
conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2)
pool2 = MaxPooling2D((2, 2))(conv2)
Thanks for your reply,
Actually, I need to convert the tensor generated by a convolution layer to NumPy because I have to find the maximum and minimum pixel values in each feature map. This helps me identify the range of pixel values in each feature map and then do the rest of the computations.
I tried to convert a tensor to NumPy using mytensor.numpy() inside a custom layer; however, this also didn't work.
For TF ops like tf.math.reduce_max, tf.math.reduce_min, tf.math.maximum or tf.math.minimum:
all these TF functions don't return the maximum/minimum value of each feature map as such; their functionality is based on specific axes, which is somehow different from what I need.
It would be very kind of you if you could recommend other functions in TF (if any) that could extract max/min values from each feature map, without the need to do a TF → NumPy conversion.
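For what it's worth, a minimal sketch (assuming the usual channels-last NHWC layout) showing that the reduce ops do give one value per feature map when they are told to reduce over the spatial axes 1 and 2:

```python
import tensorflow as tf

# Dummy batch of activations: 1 image, 4x4 spatial, 3 feature maps (NHWC).
x = tf.reshape(tf.range(48, dtype=tf.float32), (1, 4, 4, 3))

# Reducing over the spatial axes (1, 2) yields one value per feature map.
per_map_max = tf.reduce_max(x, axis=[1, 2])  # shape (1, 3)
per_map_min = tf.reduce_min(x, axis=[1, 2])  # shape (1, 3)
print(per_map_max.numpy())  # [[45. 46. 47.]]
print(per_map_min.numpy())  # [[ 0.  1.  2.]]
```

Each column of the result is the max (or min) of one feature map, for every image in the batch at once, with no loop over channels.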
For TF NumPy ops:
As I am new to TensorFlow and Keras, in fact I only just learned about this part of TF,
and after going through the TF NumPy ops documentation, it seems there are only marginal differences from regular TF. I don't know whether I can use TF NumPy ops to build models instead of regular TF.
VGG_model = Model(inputs=VGG_model.input, outputs=VGG_model.get_layer('block1_conv2').output)  # to take the features of the first VGG block
features_maps = VGG_model.predict(img)  # extracting feature maps from the img
single_fm = features_maps[0, :, :, 0]  # taking a single feature map (one channel) from the generated feature maps
maxval = np.max(single_fm)
minval = np.min(single_fm)
In the above example, I can perform NumPy operations like max/min because the result of VGG_model.predict is a NumPy array by default.
However, if I need to apply this code in between the layers of my model (as stated in the post above), the output of each layer is a KerasTensor. Therefore, I need to convert the KerasTensor to NumPy to do the max/min operations on each individual feature map.
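As a side note on the NumPy half of this (a sketch with a dummy array standing in for the predict output), the per-feature-map loop can be avoided entirely, since np.max/np.min accept a tuple of axes:

```python
import numpy as np

# Dummy stand-in for the output of VGG_model.predict(img): (batch, H, W, channels)
features_maps = np.arange(2 * 4 * 4 * 3, dtype=np.float32).reshape(2, 4, 4, 3)

# One max and one min per feature map, for every image in the batch at once.
maxvals = features_maps.max(axis=(1, 2))  # shape (2, 3)
minvals = features_maps.min(axis=(1, 2))  # shape (2, 3)
print(maxvals.shape, minvals.shape)  # (2, 3) (2, 3)
```

Row i, column c of maxvals is the maximum of feature map c of image i.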
Exactly, this is only one feature map; I should then apply this process to all the other feature maps generated by a particular convolution layer.
I need to extract the min and max values of each feature map so I can identify the range of values each feature map consists of. That information will then let me run an enhancement process on those feature maps.
As a result, I will get new feature maps with new pixel values that I set through other equations, which emphasize foreground pixels and suppress background ones.
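As an illustration only (the actual enhancement equations are the poster's own; per-map min-max rescaling is a stand-in here), this kind of range-based manipulation can be written with vectorized TF ops by computing the per-map min/max with keepdims=True so they broadcast back over the spatial axes:

```python
import tensorflow as tf

def minmax_scale_per_map(x, eps=1e-7):
    """Rescale each feature map of an NHWC tensor to [0, 1] independently."""
    mn = tf.reduce_min(x, axis=[1, 2], keepdims=True)  # shape (N, 1, 1, C)
    mx = tf.reduce_max(x, axis=[1, 2], keepdims=True)  # shape (N, 1, 1, C)
    return (x - mn) / (mx - mn + eps)  # eps avoids division by zero on flat maps

x = tf.random.uniform((1, 8, 8, 32), minval=-5.0, maxval=5.0)
y = minmax_scale_per_map(x)
print(y.shape)  # (1, 8, 8, 32)
```

Any equation built from the per-map min/max (e.g. boosting values near the max, suppressing values near the min) can be plugged in the same way, without ever leaving TF.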
To have an example: can you write a dummy constant-tensor example with the shape that you want, and show me which min and max you want to extract from that tensor as output?
import tensorflow as tf
tensor = tf.constant([[[1,2,3],[3,4,5]],[[6,7,8],[9,10,11]]])
print(tensor.shape)
print(tensor)
Sure, in the following model I need to convert the conv1 layer output to NumPy:
input = Input(shape=(48, 48, 1))
conv1 = Conv2D(32, kernel_size=5, padding='same')(input)
# here i need to convert conv1 to numpy #
conv1 is a KerasTensor of shape (None, 48, 48, 32). I need to convert it to NumPy to iterate over the 32 feature maps and manipulate them individually, then stack them all back together and convert the result to a KerasTensor to be fed to the next layer in the model.
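One way to sidestep the KerasTensor → NumPy conversion entirely (a sketch, not necessarily the only approach) is to wrap the per-feature-map manipulation in a Lambda layer, so it stays a graph op and its output feeds straight into the next layer; the rescaling here is just a placeholder for whatever per-map equation is actually wanted:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Lambda, MaxPooling2D
from tensorflow.keras.models import Model

def per_map_rescale(x):
    # Placeholder manipulation: scale each feature map to [0, 1] independently.
    mn = tf.reduce_min(x, axis=[1, 2], keepdims=True)
    mx = tf.reduce_max(x, axis=[1, 2], keepdims=True)
    return (x - mn) / (mx - mn + 1e-7)

inp = Input(shape=(48, 48, 1))
conv1 = Conv2D(32, kernel_size=5, padding='same')(inp)
conv1 = Lambda(per_map_rescale)(conv1)   # runs inside the graph, no NumPy needed
pool1 = MaxPooling2D((2, 2))(conv1)
model = Model(inp, pool1)
print(model.output_shape)  # (None, 24, 24, 32)
```

Because the manipulation is expressed as vectorized tensor ops, there is no per-channel Python loop and no list-of-arrays round trip.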
Generally, I suggest you start by expressing what you want to achieve with something simpler, like a plain tensor manipulation, without introducing layer inputs etc.
That way you can check whether you can vectorize the operations you need with TensorFlow ops, because manually iterating over a tensor with loops is going to be quite inefficient in general.
I suggest you start with a small, manually filled tensor that is your dummy feature map.
Thank you so much, dear sir, for your kind help and support. You really went to great lengths trying to simplify the issue for me. I will start over as you advise; hopefully I can figure it out with tensor ops only.
Thanks once again.
Same problem: I use model.get_layer('den_shape').output to get the feature map, its type is KerasTensor, and converting the KerasTensor to NumPy raises an error. Have you found a solution? I guess it's related to Eager Execution vs. Graph Execution.
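Not a full answer, but a sketch of the pattern that usually avoids this error (using a small stand-in model, since the original one isn't shown; the layer name 'den_shape' is taken from the post above): model.get_layer(...).output is a symbolic KerasTensor with no values attached, so it can never be converted to NumPy directly. Wrapping it in a Model and running real data through it returns plain NumPy arrays:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Minimal stand-in model; 'den_shape' mimics the layer name from the post above.
inp = Input(shape=(8,))
hidden = Dense(4, name='den_shape')(inp)
out = Dense(1)(hidden)
model = Model(inp, out)

# The symbolic output has no values, so wrap it in a Model and feed real data:
feature_model = Model(model.input, model.get_layer('den_shape').output)
feature_map = feature_model.predict(np.zeros((1, 8), dtype=np.float32))
print(type(feature_map), feature_map.shape)  # a NumPy array of shape (1, 4)
```

The eager-vs-graph guess is on the right track: .numpy() only works on eager tensors carrying actual values, not on the symbolic tensors used to define a model.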