The idea behind using a neural network for image recognition is that you don’t have to tell it what to look for in an image. You don’t even need to care about what it looks for. With enough training, the neural network should be able to pick out details that allow it to make accurate identifications.
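To make that idea concrete, here is a minimal sketch of such a setup, written in PyTorch purely for illustration (the framework, model shape, and names here are my assumptions, not the researchers' actual system). The network is handed only images and labels; nothing in the code tells it what features to look for, and training alone determines what its filters end up responding to.

```python
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    """Illustrative image classifier: no hand-crafted features anywhere."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolutional filters start out random; training adjusts them
        # so the network discovers on its own which details matter.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Training step sketch: only images and labels are supplied; backpropagation
# tunes the filters until the predictions line up with the labels.
model = TinyImageClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)   # placeholder batch of images
labels = torch.randint(0, 2, (8,))     # placeholder labels (e.g., cat / no cat)

loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```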
For tasks like figuring out whether there's a cat in an image, neural networks don't provide much, if any, advantage over the actual neurons in our visual system. But they can potentially shine in cases where we don't know what to look for: images may carry subtle information that humans don't know how to read, but that a neural network could pick up on with the appropriate training.
Now, researchers have done just that, getting a deep-learning algorithm to identify risk of heart disease from an image of a patient's retina.