Understanding and Common Sense: Two Sides of the Same Coin?

Kristinn R. Thórisson & David Kremelberg
Icelandic Institute for Intelligent Machines, Reykjavik, Iceland
CADIA, School of Computer Science, Reykjavik University, Reykjavik, Iceland

Abstract. The concept of “common sense” (“commonsense”) has had a visible role in the history of artificial intelligence (AI), primarily in the context of reasoning and what has been referred to as “symbolic knowledge representation.” Much of the research on this topic has claimed to target general knowledge of the kind needed to ‘understand’ the world, stories, complex tasks, and so on. The same cannot be said about the concept of “understanding”; although the term does make an appearance in the discourse of various sub-fields (primarily “language understanding” and “image/scene understanding”), no major schools of thought, theories or undertakings can be discerned for understanding in the same way as for common sense. It is no surprise, therefore, that the relation between these two concepts remains unclear. In this review paper we discuss their relationship and examine some of the literature on the topic, as well as the systems built to explore them. We agree with the majority of the authors addressing common sense on its importance for artificial general intelligence. However, we claim that while in principle the phenomena of understanding and common sense manifested in natural intelligence may share a common mechanism, the large majority of efforts to implement common sense in machines have taken an approach orthogonal to understanding proper, with different aims, goals and outcomes from what could be said to be required for an ‘understanding machine.’

Common sense (“commonsense knowledge”, “common sense reasoning”) has been deemed an important topic in AI by many authors since the field’s inception (Lenat et al. 1990, Liu and Singh 2004, McCarthy 1959, 1963, Minsky 2006, Panton et al. 2006). Following its use in everyday language, the term has typically been used broadly in the AI literature, encompassing a large portion of human experience relating to the spatial, physical, social, temporal, and psychological aspects of everyday life (Liu and Singh 2004). Used in this way, the term refers to a vast body of knowledge assumed to be common to most humans. It is also used to refer to modes of reasoning and argumentation, since much of everyday planning involves the use of standard forms of deduction, induction and abduction (e.g. “strong winds may blow rain through an open window, so don’t leave your books on the windowsill”).
