A Better AI: Deep Learning and Symbolic Reasoning Together

The psychologist Daniel Kahneman has described human thinking as operating in two modes: one fast and automatic, the other slow and deliberative. Metaphorically, deep learning best approximates the first mode, Kahneman's System 1, while symbolic reasoning best approximates the second, System 2.

Deep learning tends to be opaque and at times gives unreliable answers. At its heart, it requires massive amounts of data and relies on a particular kind of machine learning, statistical pattern learning, that is incomplete with regard to complex design, goal-oriented planning, metalevel reasoning, and natural language understanding grounded in shared understandings of the world.

Symbolic reasoning has complementary strengths and weaknesses. It supports goal-oriented reasoning, logical deduction, model-based language understanding, metalevel reasoning, complex design, and knowledge modeling. For example, it is the basis of the Semantic Web and the ontologies in use today.
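
As a minimal sketch of what the symbolic side looks like in code, the toy forward-chaining engine below derives new facts from rules until nothing more can be deduced. The facts, rules, and predicate names are invented for illustration; production systems use far richer logics and ontology languages such as OWL, but the fixed-point deduction loop is the same in spirit.

```python
# A toy forward-chaining rule engine: the essence of symbolic deduction.
# Facts are ("predicate", subject, object) triples; a rule derives new
# facts from existing ones. Predicates and entities are invented here.

facts = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
}

def grandparent_rule(facts):
    """If X is a parent of Y and Y is a parent of Z, X is a grandparent of Z."""
    derived = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == "parent" and p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

def forward_chain(facts, rules):
    """Apply every rule repeatedly until no new facts appear (a fixed point)."""
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts) - facts
        if not new:  # fixed point reached: nothing left to deduce
            return facts
        facts |= new

closed = forward_chain(facts, [grandparent_rule])
print(("grandparent", "alice", "carol") in closed)  # True
```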

Both approaches can be brittle when applied outside the domains for which they were developed, or when common-sense reasoning is required, and both require humans to specify their problems carefully. Eventually, true human-level artificial intelligence will require embodied reasoning to learn common-sense concepts such as ‘up’, ‘down’, ‘in’, ‘out’, or the 83 meanings of ‘over’ that George Lakoff described in his book “Women, Fire, and Dangerous Things.” Further, conceptual metaphors and many of our societal concepts and folk theories require knowledge gained from living in the world, living in a human body, and interacting with fellow humans. For the time being, we can build AIs that combine deep learning and symbolic reasoning, augmented with human knowledge and interaction to fill in these gaps.
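
To make that combination concrete, here is a minimal, hypothetical sketch of one common hybrid pattern: a neural perception module (stubbed out below, where a real system would run a trained network) proposes facts with confidences, and a symbolic layer vets them against hard constraints and deduces further facts. Every name, label, and rule in it is invented for illustration.

```python
# A minimal neuro-symbolic sketch. A (stubbed) neural perceiver proposes
# labels with confidences; a symbolic layer vets them against constraints
# and deduces superclass facts. All names and rules are invented.

def neural_perceiver(image):
    """Stand-in for a trained network; a real system would run a CNN here."""
    return [("cat", 0.92), ("dog", 0.31), ("car", 0.05)]

# Symbolic knowledge: a tiny taxonomy plus a mutual-exclusion constraint.
is_a = {"cat": "animal", "dog": "animal", "car": "vehicle"}
mutually_exclusive = {frozenset({"animal", "vehicle"})}

def symbolic_layer(proposals, threshold=0.5):
    """Keep confident proposals, check consistency, deduce superclasses."""
    accepted = [label for label, conf in proposals if conf >= threshold]
    kinds = {is_a[label] for label in accepted}
    for pair in mutually_exclusive:
        if pair <= kinds:  # both exclusive kinds present: reject the scene
            raise ValueError(f"Inconsistent scene: {sorted(pair)} cannot co-occur")
    return [(label, is_a[label]) for label in accepted]

print(symbolic_layer(neural_perceiver(image=None)))
# [('cat', 'animal')] -- the network's fast guess, vetted by slower rules.
```

The division of labor mirrors the System 1/System 2 metaphor above: the network supplies fast pattern recognition, while the rules supply deliberate, auditable inference.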



William R. Murray, PhD
CEO