By Narayan Srinivasa – Machine learning has emerged as the dominant tool for implementing complex cognitive tasks, producing machines that have demonstrated, in some cases, superhuman performance. However, these machines require training with large amounts of labeled data, and this energy-hungry training process has often been prohibitive without costly supercomputers.
The way animals and humans learn is far more efficient, driven by the evolution of a different kind of processor, the brain, which simultaneously optimizes the energy cost of computation and the efficiency of information processing. The next generation of computers, called neuromorphic processors, will strive to strike this delicate balance between computational capability and the energy that computation requires.
The foundation for the design of neuromorphic processors is rooted in our understanding of how biological computation differs from that of today's digital computers (Figure).
The brain is composed of noisy analog computing elements, including neurons and synapses. Neurons operate as relaxation oscillators: they slowly accumulate charge and then rapidly discharge in a spike. Synapses are implicated in memory formation in the brain, yet each synapse can resolve only about three to four bits of information. It is well known that the brain operates using a plethora of brain rhythms but without any global clock (i.e., it is clock-free), with the dynamics of these elements unfolding asynchronously.
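The two properties above, spiking neurons that behave like relaxation oscillators and synapses with only a few bits of resolution, can be illustrated with a toy simulation. The following sketch is not from the article; it is a minimal leaky integrate-and-fire model (a standard simplification of a relaxation oscillator) with synaptic weights rounded to 3-bit precision, and all names and parameter values are illustrative assumptions.

```python
def quantize_weight(w, bits=3, w_max=1.0):
    """Round a synaptic weight to one of 2**bits - 1 levels, mimicking
    the roughly three-to-four-bit resolution attributed to synapses.
    (Illustrative model, not a claim about any specific hardware.)"""
    levels = 2 ** bits - 1
    w = max(0.0, min(w, w_max))
    return round(w / w_max * levels) / levels * w_max

def simulate_lif(input_spikes, weight, tau=20.0, v_thresh=1.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest, charges with each weighted input event, and resets
    after crossing threshold -- a simple relaxation oscillator.
    input_spikes is a sequence of 0/1 presynaptic events."""
    v = 0.0
    out = []
    for s in input_spikes:
        v += (-v / tau) * dt + weight * s   # leak plus weighted input
        if v >= v_thresh:                   # threshold crossing: spike
            out.append(1)
            v = 0.0                         # rapid discharge (reset)
        else:
            out.append(0)
    return out

w = quantize_weight(0.37, bits=3)           # snapped to one of 8 levels
spikes = simulate_lif([1] * 100, w)         # steady input drive
```

Note that the neuron's state advances only when an input event arrives, which loosely echoes the asynchronous, clock-free dynamics described above; a real neuromorphic processor handles such events in hardware rather than in a sequential loop.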