Is Deep Learning a Dead End?

Peter Harrison - Apr 10 '19 - Dev Community

While Deep Learning has made practical machine learning possible, it requires vast, properly normalized data sets along with vast computing resources to train successfully. Compare this with your dog, who can learn where you hid his treat by seeing you hide it only once.

This is the chasm we have yet to bridge in machine intelligence. While back propagation algorithms have got us off the starting blocks, and large data sets and raw compute resources have allowed us to brute-force training, the result is nowhere close to the learning performance of natural brains.

Learning through experience or observation is not unique to humans. The octopus, whose brain evolved independently of ours, is able to observe the behaviour of other octopuses and emulate it. After seeing another octopus remove a cap to get at a crab, it can replicate the same actions to obtain its own reward. The evolutionary convergence of human and octopus brains gives us a pathway to understanding which aspects of natural neural networks are important for learning.

Neither humans nor octopuses have huge data centers at their disposal to run back propagation algorithms that determine inter-neuron weightings. Training in natural brains occurs through feedback paths. Natural brains operate on discrete voltage spikes which trigger neurotransmitter release across synapses. The weightings are modified based on the timing of input spikes, which in turn modifies the synapse chemistry. Artificial neural networks instead use back propagation to model the probability of a spike and modify the weightings of the intermediate neuron connections.
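
To make the timing mechanism concrete, here is a minimal sketch of a pair-based spike-timing-dependent plasticity (STDP) update in Python. The function name and constants are illustrative assumptions, not a model of any particular biology:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP sketch: strengthen the synapse when the
    presynaptic spike precedes the postsynaptic spike, weaken it
    otherwise. All constants are illustrative assumptions."""
    dt = t_post - t_pre  # spike timing difference in milliseconds
    if dt > 0:
        # pre fired before post: potentiation, decaying with the gap
        dw = a_plus * np.exp(-dt / tau)
    else:
        # post fired before (or with) pre: depression
        dw = -a_minus * np.exp(dt / tau)
    # keep the weight in a bounded range
    return float(np.clip(w + dw, 0.0, 1.0))

# A synapse whose presynaptic neuron fired 5 ms before the postsynaptic one
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))  # slightly strengthened
```

Note that the update depends only on information available at the synapse itself; no global error signal is involved.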

In natural brains the feedback connections used for training are part of the network itself. Artificial Neural Networks, on the other hand, have only feed-forward connections; adding feedback connections would create loops that make the back propagation algorithm intractable. Current Deep Learning designates a set of privileged output neurons. The network is trained in reference to those output neurons, with the back propagation algorithm modifying intermediate connection weights so that the input neurons trigger the right output neuron.
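
To illustrate what training in reference to privileged output neurons means, here is a minimal feed-forward network trained with back propagation: the error is defined purely at the designated output neuron, and every intermediate weight is adjusted in reference to it. The architecture, task, and constants are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward net: 2 inputs -> 4 hidden -> 1 privileged output neuron.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)

# XOR as an arbitrary stand-in task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: activity flows strictly input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # The error exists only at the privileged output neuron...
    delta_out = (out - y) * out * (1 - out)
    # ...and is propagated backwards to assign blame to every weight.
    delta_h = (delta_out @ W2.T) * h * (1 - h)

    W2 -= 0.5 * (h.T @ delta_out)
    b2 -= 0.5 * delta_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ delta_h)
    b1 -= 0.5 * delta_h.sum(axis=0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]
```

Notice that the backward pass runs outside the network proper: it is an external algorithm, not a set of feedback connections within the network itself.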

In natural brains there are no privileged neurons. Weightings are adjusted only through interactions across direct connections, rather than according to an error computed at some remote privileged neuron.

Despite operating at only hundreds of operations a second, while machines operate at billions, natural brains still vastly outperform machine brains in terms of learning performance.

While neural networks are evidently the way forward for machine intelligence, we need to radically change how they are trained if we are to approach the learning efficiency of natural brains.

The rapid progress in machine learning since 2012 seemed to confirm my original estimation of how close we were to true general machine intelligence. With the resources being committed to machine intelligence and the focus major tech companies have placed on it, I believed the pace of development would only increase, leading to general machine intelligence in the short term.

But this apparent progress was deceptive, built on the success of a machine learning algorithm that works, but works very poorly. There is now almost an orthodoxy that machine learning requires vast volumes of structured data and huge computing resources. How many are stopping to ask why comparatively feeble natural brains are far more capable of learning?

To achieve substantial improvement in machine learning systems I expect we will need to follow nature's lead, using feedback and local neural connections to train them rather than the algorithmic approach currently in favour. It is possible that the success of back propagation is leading much of the machine learning community on a wild goose chase. We may even see another AI winter if the vast resources being poured into back propagation based training do not bear fruit.

On the other hand there are already many successful applications which are being exploited to make fortunes off the back of Deep Learning. There is so much at stake now in AI that it is hard to see how tech companies could step away.

Progress continues to be made on approaches closer to natural brains. Neuroscientists are trying to image the working brain at higher resolution to better understand its structure. Computer scientists are experimenting with Spiking Neural Networks, which are closer to how natural brains work.
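
As a taste of that direction, here is a minimal sketch of a single leaky integrate-and-fire neuron, the basic unit in many spiking network experiments; the constants are arbitrary values chosen for illustration:

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire sketch: the membrane potential leaks
    towards rest, integrates input current, and emits a discrete spike
    whenever it crosses threshold. Constants are illustrative."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration of the input current.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:
            spikes.append(t)  # discrete spike event, as in a natural neuron
            v = v_reset       # reset after firing
    return spikes

# A constant drive produces a regular spike train.
print(simulate_lif(np.full(200, 1.2)))
```

Unlike the units in a conventional Deep Learning network, such a neuron communicates through discrete events in time, which is exactly what makes timing-based rules like STDP applicable.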

My bullish estimate of the probability of achieving general intelligence in machines has been moderated by a new understanding of the limitations of the current crop of Deep Learning based systems. There will be new applications that may cause significant social impact. But I believe we will need to radically improve learning performance before we achieve general intelligence, and I'm no longer as certain that this will happen in the short time frame I originally believed.
