I understand that the brain does not care about the size of an image, at least to some extent. Convolutional layers are similar, because the kernel does not require a specific input size.

However, we define an input size for our neural models, and then we cannot feed in an image of a different size, even though this should not actually matter, should it?

So is the way to do it simply getting rid of the input layer, or is there some way to feed in an image of any size (even if it is less accurate)?

Hi @Mah_Neh, you can use input_shape=(None, None, channels). None in a shape denotes a variable dimension. However, not all layers work with variable dimensions (Flatten, for example). If your model has a Flatten layer, you can replace it with a GlobalAveragePooling layer. Thank you.
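A minimal sketch of what this looks like in practice (the layer sizes and the 10-class output are illustrative choices, not from the thread): a fully convolutional model whose spatial dimensions are left as None, with GlobalAveragePooling2D collapsing the variable-size feature map into a fixed-length vector before the Dense head.

```python
import numpy as np
import tensorflow as tf

# Variable height and width: only the channel count is fixed.
inputs = tf.keras.Input(shape=(None, None, 3))
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(x)
# GlobalAveragePooling2D averages over H and W, so the output size
# no longer depends on the input's spatial dimensions.
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# The same weights now run on images of different sizes:
small = model(np.zeros((1, 60, 60, 3), dtype="float32"))
large = model(np.zeros((1, 256, 256, 3), dtype="float32"))
# both outputs have shape (1, 10)
```

Note that a Flatten layer in place of the pooling would fail here, because its output length would change with the input size and the Dense layer's weight matrix could not be built.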

But how is that trained? I mean, you would have infinitely many possible dimensions, and the weights would be different. I could train with (60, 60, 3) mixed with (256, 256, 3) and (125, 45, 3)…

To make a system or algorithm independent of input size, you need to design it so that its performance or behavior does not vary significantly as the size of the input changes. Here are a few approaches to achieve input size independence:

Algorithmic Complexity: Analyze the algorithmic complexity of your system. Prefer algorithms whose time or space complexity does not grow directly with the input size. For example, using algorithms with constant or logarithmic time complexity (O(1) or O(log n)) keeps the system's performance consistent regardless of input size.
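As a small illustration of this point (the collections and sizes here are made up for the example): a membership test on a Python list scans every element, so it slows down as the input grows, while a set lookup takes roughly constant time at any size.

```python
import timeit

# O(n) structure: a list scan visits elements one by one.
big_list = list(range(100_000))
# O(1) structure: a set uses hashing for lookups.
big_set = set(big_list)

# Time 1000 lookups of a value near the end of the collection.
t_list = timeit.timeit(lambda: 99_999 in big_list, number=1000)
t_set = timeit.timeit(lambda: 99_999 in big_set, number=1000)

# The set lookup stays fast no matter how large the collection is;
# the list scan grows with the input size.
```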

Scaling Techniques: Implement scaling techniques that let your system handle large inputs efficiently. This may involve dividing the input into smaller, manageable chunks, using parallel processing or distributed computing, or optimizing data structures and algorithms to handle large datasets more effectively.
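The chunking idea can be sketched as follows (the function names, chunk size, and the normalization operation are all illustrative, not from the thread): process a large array in fixed-size pieces so that the per-step memory footprint stays constant regardless of how big the input is.

```python
import numpy as np

def normalize(chunk):
    # Example per-chunk operation: scale 0-255 pixel values to [0, 1].
    return chunk / 255.0

def process_in_chunks(data, chunk_size=1024):
    # Allocate the output once, then fill it one fixed-size slice at a time,
    # so only `chunk_size` elements are ever processed in a single step.
    out = np.empty(data.shape, dtype=np.float64)
    for start in range(0, len(data), chunk_size):
        end = start + chunk_size
        out[start:end] = normalize(data[start:end])
    return out

big = np.random.randint(0, 256, size=10_000)
result = process_in_chunks(big)
```

The same loop works unchanged whether `big` holds ten thousand values or ten million; only the number of iterations grows, not the working-set size.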