Machines can never think like humans, or can they?
Experienced doctors sometimes diagnose an illness within moments of seeing a patient. If you ask them, they might try to reconstruct their train of thought, from the various symptoms they spotted to the diagnosis. But this instant diagnosis is the product of intuition rather than deliberate reasoning. The doctor cannot possibly tell you which of the hundreds of factors flashed through their mind in that split second, or how much weight each one carried in reaching the diagnosis.
Here’s an excerpt from an article titled “Humanising Algorithms” [free account required], published by The Ken:
The discussion wore on. By afternoon, all agreed that we are going to have algorithms, which no one understands. While deep learning may be throwing results, which people are happy with, no one really knows what happens in those 10-11 layers of algorithms.
“For instance, in image recognition, deep learning is doing very well but people probably understand the first layer and maybe the last layer but what happens in the [several] middle layers nobody knows. People are now trying to understand why machine learning gives the [good] results that it gives,” Somenath Biswas of IIT-Kanpur later explained.
Do you see the parallel? Machine learning does the same thing as human intuition: it takes a large number of inputs, analyses them in ways nobody can fully trace, and produces an output.
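The opacity Biswas describes can be seen even in a toy model. Below is a minimal sketch of a feedforward network in plain NumPy (random weights stand in for trained ones, and all names here are hypothetical, not from the article): the input and output are meaningful to us, but the hidden-layer activations in between are just arrays of numbers with no obvious human interpretation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard rectified-linear activation used between layers.
    return np.maximum(x, 0.0)

# A tiny network: 8 inputs -> three hidden layers of 16 units -> 1 output.
# In a real system these weights would come from training; here they are
# random placeholders, which is enough to show the shape of the computation.
layer_sizes = [8, 16, 16, 16, 1]
weights = [rng.standard_normal((m, n)) * 0.3
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    activations = [x]
    for w in weights[:-1]:
        x = relu(x @ w)          # hidden layers: opaque intermediate values
        activations.append(x)
    x = x @ weights[-1]          # final linear layer: the "diagnosis"
    activations.append(x)
    return x, activations

inputs = rng.standard_normal(8)  # the "symptoms"
output, acts = forward(inputs)

for i, a in enumerate(acts):
    print(f"layer {i}: shape {a.shape}")
```

Inspecting `acts[1]` through `acts[3]` gives you 16 numbers per layer, and nothing in those numbers tells you *why* the network produced the output it did. That gap between "it works" and "we can explain it" is exactly the point of the excerpt above.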
Machine Learning = Human Intuition