Explainable neural networks that simulate reasoning

Paul J. Blazek, Milo M. Lin

Research output: Contribution to journal › Article › peer-review

20 Scopus citations


The success of deep neural networks suggests that cognition may emerge from indecipherable patterns of distributed neural activity. Yet these networks are pattern-matching black boxes that cannot simulate higher cognitive functions and lack numerous neurobiological features. Accordingly, they are currently insufficient computational models for understanding neural information processing. Here, we show how neural circuits can directly encode cognitive processes via simple neurobiological principles. To illustrate, we implemented this model in a non-gradient-based machine learning algorithm to train deep neural networks called essence neural networks (ENNs). Neural information processing in ENNs is intrinsically explainable, even on benchmark computer vision tasks. ENNs can also simulate higher cognitive functions such as deliberation, symbolic reasoning and out-of-distribution generalization. ENNs display network properties associated with the brain, such as modularity, distributed and localist firing, and adversarial robustness. ENNs establish a broad computational framework to decipher the neural basis of cognition and pursue artificial general intelligence.

Original language: English (US)
Pages (from-to): 607-618
Number of pages: 12
Journal: Nature Computational Science
Issue number: 9
State: Published - Sep 2021

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Computer Science Applications
  • Computer Science (miscellaneous)

