Is Deep Learning a Necessary Ingredient for AI?
Deep learning is widely seen as a key ingredient for realizing many artificial intelligence tasks, but new research suggests these tasks can be achieved using simpler, shallow architectures
The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. To solve more complex classification tasks, however, more advanced neural network architectures consisting of numerous feedforward (consecutive) layers were later introduced. Such multilayer architectures are the essential component of current deep-learning algorithms. They improve the performance of analytical and physical tasks without human intervention, and lie behind everyday automation products such as emerging technologies for self-driving cars and autonomous chatbots.
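To make the distinction concrete, here is a minimal sketch of the classic single-layer Perceptron described above. It is an illustrative example only, not the architecture studied in the paper: the training data (AND/XOR truth tables) and all function names are hypothetical choices for this sketch. A single layer learns a linearly separable task such as AND, while a task like XOR is not linearly separable, which is exactly the kind of limitation that historically motivated adding more layers.

```python
import numpy as np

def perceptron_train(X, y, epochs=20, lr=0.1):
    """Classic perceptron learning rule: one layer of weights
    with a hard threshold activation (no hidden layers)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Update weights only when the prediction is wrong.
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

def perceptron_predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# All four input combinations of two binary features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# AND is linearly separable, so a single layer suffices.
y_and = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y_and)
print(perceptron_predict(X, w, b))  # [0 0 0 1]

# XOR is not linearly separable: no single-layer perceptron can
# classify all four points correctly, which motivated multilayer networks.
y_xor = np.array([0, 1, 1, 0])
w2, b2 = perceptron_train(X, y_xor)
print((perceptron_predict(X, w2, b2) == y_xor).mean())  # accuracy below 1.0
```

The open question raised by the new research is whether modern shallow networks, with the right design, can close this historical gap in learning power while using fewer layers than deep architectures.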
The key question driving new research published today in Scientific Reports is whether efficient learning of non-trivial classification tasks can be achieved using brain-inspired shallow feedforward networks, while potentially requiring less computational complexity. “A positive answer questions the need for deep-learning architectures and might direct the development of unique hardware for the efficient and fast implementation of shallow learning,” said Prof. Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. “Additionally, it would demonstrate how brain-inspired shallow learning has advanced computational capability with reduced complexity and energy consumption.”
“We've shown that efficient learning on an artificial shallow architecture can achieve the same classification success rates that previously were achieved by deep-learning architectures consisting of many layers and filters, but with less computational complexity,” said Yarden Tzach, a PhD student and contributor to this work. “However, the efficient realization of shallow architectures requires a shift in the properties of advanced GPU technology, and future dedicated hardware developments,” he added.
Efficient learning on brain-inspired shallow architectures goes hand in hand with efficient dendritic-tree learning, which builds on Prof. Kanter's previous experimental research on sub-dendritic adaptation using neuronal cultures, together with other anisotropic properties of neurons, such as different spike waveforms, refractory periods, and maximal transmission rates (see also a video on dendritic learning: https://vimeo.com/702894966 ).
For years, brain dynamics and machine-learning development were researched independently. Recently, however, brain dynamics has been revealed as a source of new types of efficient artificial intelligence.