Hi Bjarne Thorsted, I hope this response finds you in good health,
I'm so sorry I didn't dive deep into the *why*.
"The aim of digital image processing is to improve the image data (features) by suppressing unwanted distortions and/or enhancement of some important image features so that our AI-Computer Vision models can benefit from this improved data to work on."
I mistakenly believed that the passage above in the article would not need a deeper explanation. But since that wasn't enough, I'm more than happy to give you a better one. It's very simple:
Like any other data we collect, before being fed into any sort of algorithm it needs to be cleaned and formatted in a way better suited to that algorithm.
Let me give you an example:
"Most carnivores eat raw meat; that's the format best suited to them. Humans, on the other hand, need most meat to be washed and cooked (sushi excluded 😁), because that's how our bodies were designed to accept it. It's very hard to handle a different format."
Data preprocessing aims at making the raw data at hand more amenable to neural networks. This includes vectorization, normalization, handling missing values, and feature extraction.
Vectorization
All inputs and targets (samples x and labels y) in a neural network must be tensors of floating-point data or, in some cases, integers. Whatever data you need to process -- sound, images, text -- you must first turn it into tensors, a step called data vectorization.
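As a small sketch of what vectorization looks like in practice (the labels and vocabulary here are made up for illustration), text labels can be mapped to integer indices and then to a float32 one-hot tensor with NumPy:

```python
import numpy as np

# Hypothetical labels and vocabulary, purely for illustration.
labels = ["cat", "dog", "cat"]
vocab = {"cat": 0, "dog": 1}

# Step 1: map each string label to an integer index.
indices = np.array([vocab[label] for label in labels])

# Step 2: turn the integer indices into a float32 one-hot tensor,
# the kind of input/target a neural network expects.
one_hot = np.zeros((len(labels), len(vocab)), dtype="float32")
one_hot[np.arange(len(labels)), indices] = 1.0
```

After this, `one_hot` is a (3, 2) floating-point tensor ready to be used as a network target.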
Value normalization
In image classification, you start from image data encoded as integers in the 0-255 range. Before you feed the data into a neural network, you need to normalize it into the 0-1 range to make training easier and use less computational power.
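A minimal sketch of that normalization step, assuming a single made-up 28x28 grayscale image stored as uint8:

```python
import numpy as np

# Hypothetical 28x28 grayscale image with integer values in the 0-255 range.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(28, 28), dtype=np.uint8)

# Cast to float32 and rescale into the 0-1 range before feeding a network.
normalized = image.astype("float32") / 255.0
```

The cast to float32 matters as much as the division: networks expect floating-point tensors, not raw integer pixel values.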
I hope this can successfully answer your question.
I could go on and on and geek out, but I don't want to make an article of a response 😃.
If you have any more doubts, you can tweet me @CanumaGdt.
Take care.