Prince Canuma
1 min read · Apr 3, 2019


Interesting question. To answer it, I will give two scenarios.

First, there are models such as VGG16, InceptionV3 and others that only support a fixed image size because of their fully connected layers (the classifier), which is the case for most AI algorithms that return a class prediction (like "dog 95%"). This happens because a fully connected layer has a fixed number of learned weights (the network's knowledge) to work with, so varying input sizes would require a varying number of weights — and that's not possible.
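To make this concrete, here is a minimal sketch (with hypothetical helper names) of why the dense classifier pins down the input size: its weight count is a direct function of the flattened feature-map size, which in turn depends on the image's height and width.

```python
def dense_weight_count(h, w, c, units):
    # A dense layer flattens the h x w x c feature map into h*w*c values,
    # so it needs h*w*c * units weights (biases ignored for simplicity).
    return h * w * c * units

# VGG16-style numbers: a 224x224 input shrinks to a 7x7x512 feature map
# before the first 4096-unit dense layer.
weights_224 = dense_weight_count(7, 7, 512, 4096)

# Doubling the input to 448x448 would instead give a 14x14x512 map,
# which demands four times as many dense weights.
weights_448 = dense_weight_count(14, 14, 512, 4096)

print(weights_224)  # 102760448
print(weights_448)  # 411041792
```

Since the weight matrix is learned at one fixed size, feeding a differently sized image would leave it with too few or too many inputs, which is why these models reject other resolutions.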

Second scenario: if your model is fully convolutional, meaning both its input and its output are images, then it can train on different image sizes, because this type of architecture is size-invariant.
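The contrast with the dense case can be sketched the same way (again with hypothetical helper names): a convolutional layer's weight count depends only on the kernel size and the channel counts, never on the image's height or width, so the same learned filters slide over an image of any size.

```python
def conv_weight_count(kernel, in_channels, out_channels):
    # A conv layer learns out_channels filters, each of shape
    # kernel x kernel x in_channels. No term involves the image size.
    return kernel * kernel * in_channels * out_channels

def conv_output_shape(h, w, out_channels):
    # With stride 1 and 'same' padding, the spatial dims pass through
    # unchanged; only the channel depth changes.
    return (h, w, out_channels)

# The same 3x3 conv (3 input channels -> 64 filters) fits any image:
print(conv_weight_count(3, 3, 64))        # 1728 weights either way
print(conv_output_shape(224, 224, 64))    # (224, 224, 64)
print(conv_output_shape(512, 512, 64))    # (512, 512, 64)
```

Because every layer in a fully convolutional network has this property, the whole model accepts variable-sized inputs; only the output's spatial dimensions change along with the input's.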
