The Architecture of Neuro-Symbolic Computers

Objectives

Self-Programming Interpreter Neural Networks (Spinners)

Through the introduction of Self-Programming Interpreter Neural Networks, or Spinners, we aim to create neuro-symbolic artificial intelligence systems that can match the sample efficiency, interpretability, and reliability of human learning and reasoning.

Spinners are neural networks that are structurally equivalent to programming language interpreters. We expect that Spinners will improve sample efficiency, interpretability, and task performance.
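To make "structurally equivalent to an interpreter" concrete, here is a minimal sketch (our own illustration, not an actual Spinner architecture) of a tiny expression interpreter. The idea is that a Spinner's network mirrors this recursive structure, with the discrete steps (variable lookup, closure creation, application) realized by neural components; the language and names below are illustrative assumptions.

```python
# Hypothetical illustration: a tiny interpreter whose recursive structure
# a Spinner-style network would mirror. The discrete steps marked below
# are the kind of components a neural realization would replace.

def interpret(expr, env):
    """Evaluate a tiny lambda-calculus-like language.

    expr is one of:
      ("lit", value) | ("var", name) | ("lam", name, body) | ("app", fn, arg)
    """
    tag = expr[0]
    if tag == "lit":
        return expr[1]
    if tag == "var":
        return env[expr[1]]                         # lookup step
    if tag == "lam":
        return ("closure", expr[1], expr[2], env)   # closure creation
    if tag == "app":
        fn = interpret(expr[1], env)                # recursive evaluation
        arg = interpret(expr[2], env)
        _, param, body, closed_env = fn
        return interpret(body, {**closed_env, param: arg})  # application step
    raise ValueError(f"unknown expression tag: {tag}")

# ((lambda x. x) 42) evaluates to 42
identity_app = ("app", ("lam", "x", ("var", "x")), ("lit", 42))
print(interpret(identity_app, {}))  # → 42
```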

At present, we focus on creating Spinner architectures within the logic/relational and functional programming paradigms.
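As a hedged sketch of what the relational paradigm involves, the core operation of miniKanren-style relational programming is unification over logic variables. The implementation below is our own minimal illustration, not code from any Spinner system.

```python
# Minimal unification, the relational core underlying miniKanren-style
# relational programming. Names and representation are illustrative.

class Var:
    """A logic variable (identity-based equality and hashing)."""
    def __init__(self, name):
        self.name = name

def walk(term, subst):
    """Follow variable bindings in the substitution."""
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    """Return an extended substitution making a and b equal, or None."""
    a, b = walk(a, subst), walk(b, subst)
    if isinstance(a, Var) and a is b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

x, y = Var("x"), Var("y")
subst = unify((x, 2), (1, y), {})
print(walk(x, subst), walk(y, subst))  # → 1 2
```

Running a query "in both directions" through such a unifier is what lets relational programs be executed forwards and backwards, a property that pairs naturally with interpreter-structured networks.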

Meta-Reflective Neural Networks

We believe that the sample efficiency of human learning arises in part from a capacity for meta-reflection: catching and learning from errors by reasoning about reasoning itself, at multiple levels. Thus, we seek to create meta-reflective neural networks, taking inspiration from reflective towers of interpreters.
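The tower idea can be sketched as follows (our own toy construction, not a proposed architecture): an evaluator one level up interprets the same language, but additionally records each sub-evaluation as data, so that the system's own reasoning steps become available for inspection and learning.

```python
# Illustrative sketch of one level of reflection: an evaluator for a toy
# arithmetic language that records every step it takes, making its own
# reasoning available as data ("reasoning about reasoning").

def meta_eval(expr, trace):
    """Evaluate trees like ("add", l, r) or ("lit", n), logging each step."""
    tag = expr[0]
    if tag == "lit":
        value = expr[1]
    elif tag == "add":
        value = meta_eval(expr[1], trace) + meta_eval(expr[2], trace)
    else:
        raise ValueError(f"unknown expression tag: {tag}")
    trace.append((expr, value))   # reflection: the step itself becomes data
    return value

trace = []
result = meta_eval(("add", ("lit", 1), ("add", ("lit", 2), ("lit", 3))), trace)
print(result)      # → 6
print(len(trace))  # → 5 recorded reasoning steps
```

In a full reflective tower, the recording level is itself an interpreter that can be interpreted and inspected one level further up, and so on.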

Formal Verification of Safety in Neural Networks

We intend to apply formal software verification techniques to establish safety guarantees about our neural networks.
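As one concrete flavor of such a guarantee, the sketch below applies interval bound propagation, a standard neural-network verification technique (our choice of example; the weights are made up), to prove a sound output range for a tiny ReLU network over a box of inputs.

```python
# Interval bound propagation for a made-up 2-2-1 ReLU network: propagate
# an input box through each layer to get bounds that provably contain
# every reachable output.

def affine_bounds(lo, hi, weights, bias):
    """Propagate the box [lo, hi] through y = Wx + b exactly."""
    out_lo, out_hi = [], []
    for row, b in zip(weights, bias):
        lo_acc, hi_acc = b, b
        for w, l, h in zip(row, lo, hi):
            lo_acc += w * l if w >= 0 else w * h   # pick the minimizing corner
            hi_acc += w * h if w >= 0 else w * l   # pick the maximizing corner
        out_lo.append(lo_acc)
        out_hi.append(hi_acc)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds directly."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]   # hypothetical weights
W2, b2 = [[1.0, 1.0]], [0.0]

lo, hi = affine_bounds([0.0, 0.0], [1.0, 1.0], W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # → [0.0] [1.75]
```

Any input in the unit box is guaranteed to produce an output within the printed bounds; a safety property such as "the output never exceeds 2" follows immediately.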

Current Projects

Relational Programming

Functional Programming

Programming Language of Thought

As programming language theorists and machine learning researchers, we take up the task of formally specifying the semantics of, and implementing, the programming language of thought.

One imagines a hierarchy of “executive” programs which function to analyze macrotasks into microtasks. Such programs may “call” both one another and lower-level problem-solving routines, though the extent of such cross-referencing is limited by the ingenuity of the program and, of course, the overall computational capacity of the machine.

—Jerry Fodor in The Language of Thought (1975)

Collaborators