Reasoning About Deceit: 1. The Computational Perspective

[The following is Part I in a two-part guest post by Will Bridewell and Alistair M. C. Isaac. — JS]

We live in an age of post-truth rhetoric, fake news, and misinformation; consequently, questions of how to accurately identify deceptive communication and to appropriately respond to it have become increasingly important. Our work examines these questions from a computational perspective, asking what representational abilities an artificial agent needs if it is to detect and reason about deception. We argue that such agents need more than a mere theory of mind; they also need to explicitly represent norms of behavior and distinguished classes of motive. Perhaps surprisingly, this computational perspective has broader implications for the rational resolution of political disputes.

When we are only passively involved in a conversation, such as listening to a lecture or watching the news, we might care only about the truth-values of the statements we hear. When we actively engage with a (potentially) deceptive agent, however, we need to determine the category of deception occurring before we can respond sensibly. For instance, if a used-car salesman says, “These wheels are in great condition,” he might be (a) lying, which suggests that the wheels are in poor condition; (b) bullshitting, which implies that he has no idea about the quality of the wheels; or (c) paltering, in which case the wheels are fine, but the statement aims to draw attention away from other problems with the car. Responses to the salesman’s statement include deciding which belief to adopt about the wheels, choosing the best question to ask next in the conversation, and even settling whether to purchase the car. Each of these actions is differentially affected by the category of deceptive speech.
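
To see why the category matters computationally, consider a minimal sketch in Python (the type names and belief updates here are our own illustration, not a committed implementation):

```python
from enum import Enum, auto

class SpeechCategory(Enum):
    SINCERE = auto()
    LIE = auto()       # speaker believes the claim is false
    BULLSHIT = auto()  # speaker is indifferent to the claim's truth
    PALTER = auto()    # claim is true but chosen to mislead

def belief_update(claim: str, category: SpeechCategory) -> str:
    """Return the belief a hearer should adopt, given the inferred
    category of the utterance."""
    if category is SpeechCategory.LIE:
        return f"probably not the case that {claim}"
    if category is SpeechCategory.BULLSHIT:
        return f"no evidence either way about {claim}"
    if category is SpeechCategory.PALTER:
        return f"{claim}, but look for what it distracts from"
    return claim  # sincere speech: take at face value

print(belief_update("the wheels are in great condition",
                    SpeechCategory.PALTER))
```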

The example of paltering, or deceiving with misleading truths, illustrates the impossibility of detecting deceptive speech based solely on cues about the truth-value of an utterance. Likewise, statements of false or dubious truth-value may result from misspeaking, false beliefs, plain ignorance, or even phlegmatic sincerity. The first step to disentangling these categories from the various types of deceptive speech is to employ a representational theory of mind. An agent with a theory of mind can represent the beliefs and desires of others as different from its own. In other words, such an agent can represent a deceptive speaker as believing not p while desiring that the agent believe p—one possible motivation for a deceptive utterance of “p.”
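
One minimal way to implement such a representation (a sketch with illustrative type names, not a fixed formalism) is to treat propositional attitudes as nestable data structures:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Believes:
    agent: str
    content: Any  # a proposition or another attitude

@dataclass(frozen=True)
class Desires:
    agent: str
    content: Any

@dataclass(frozen=True)
class Not:
    content: Any

p = "the wheels are in great condition"

# The deceptive motive described above: the speaker believes not-p
# while desiring that the hearer come to believe p.
deceptive_state = [
    Believes("salesman", Not(p)),
    Desires("salesman", Believes("buyer", p)),
]
```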

Because neither people nor computers can read minds, detecting whether a speaker has such a deceptive motive requires making assumptions about their mental state. That is, abductive inference is necessary to connect the perceived effect, the deceptive speech, to an underlying cause, the speaker’s motivational state. Evidence about the truth-value of an utterance may serve as a starting point, but it must be combined with other evidence about the speaker for this inference to be effective.

For instance, suppose that Will asks Alistair whether he is a great singer. Alistair, who is a tried and true friend, says, “Yeah, man, you’re the best!” Taking this to heart, Will auditions in front of the American Idol judges, who laugh so hard they cry. Will could infer either that Alistair cannot discriminate good singing from bad or that he let his friendship dictate a polite response instead of a truthful one. If Alistair otherwise demonstrates musical taste, then a polite lie might be the most coherent explanation.
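
In computational terms, this is abduction over candidate motivational states. A toy version of the inference (the hypotheses and crude coherence scores are our own illustration) might look like this:

```python
# Two candidate explanations for Alistair's "Yeah, man, you're the best!"
# Abduction selects the hypothesis that best coheres with the evidence.

evidence = {
    "statement_was_false": True,         # settled by the judges' reaction
    "alistair_has_musical_taste": True,  # observed on other occasions
}

def coherence(hypothesis: str, ev: dict) -> int:
    """Crude coherence score; a fuller system might use Bayesian inference."""
    if hypothesis == "cannot_discriminate_singing":
        # Explains the false statement but conflicts with his known taste.
        return int(ev["statement_was_false"]) - int(ev["alistair_has_musical_taste"])
    if hypothesis == "polite_lie":
        # Explains the false statement and fits his known taste.
        return int(ev["statement_was_false"]) + int(ev["alistair_has_musical_taste"])
    return 0

hypotheses = ["cannot_discriminate_singing", "polite_lie"]
print(max(hypotheses, key=lambda h: coherence(h, evidence)))  # polite_lie
```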

While this inference requires representing Alistair’s beliefs and desires, it also requires the identification of prioritized motives. When it came to Will’s singing, Alistair’s interest in being a supportive friend overrode his basic honesty. Our analysis of this situation is that normal speech is governed by standing norms (for instance, the Gricean maxims), and at times these may conflict. For instance, the norm to be truthful in its strictest sense may be superseded by norms to be polite (as above), be brief, or be relevant. From a computational perspective, these norms may be represented as a hierarchy of goals, and appropriate speech is the outcome of a constraint satisfaction procedure that satisfies as many of these norms as possible. Deception proper occurs when some other goal, an ulterior motive, intervenes and is ranked higher than the standing norms, as when the duplicitous salesman prioritizes sell the car over be truthful.
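
A minimal sketch of norm-governed utterance selection (lexicographic preference over a ranked list is a simple stand-in for full constraint satisfaction, and the norm names are our own illustration):

```python
# Norms are ranked by priority; candidate utterances record which
# norms they satisfy. Selection prefers satisfying higher-ranked norms.

def choose_utterance(candidates, ranked_norms):
    def key(utterance):
        return tuple(norm in utterance["satisfies"] for norm in ranked_norms)
    return max(candidates, key=key)

candidates = [
    {"text": "Your singing needs work.",
     "satisfies": {"be_truthful", "be_relevant"}},
    {"text": "Yeah, man, you're the best!",
     "satisfies": {"be_polite", "be_relevant"}},
]

# Alistair ranks politeness above truthfulness in this context.
print(choose_utterance(candidates,
                       ["be_polite", "be_truthful", "be_relevant"])["text"])

# Deception proper: an ulterior motive outranks all standing norms.
salesman_ranking = ["sell_the_car", "be_truthful", "be_polite",
                    "be_relevant", "be_brief"]
```

Ranking sell the car above everything else means the procedure will sacrifice truthfulness whenever doing so serves the sale, which is exactly the signature of deception on our account.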

The ulterior motive account of deceptive speech has several appealing features. First, it slices through much of the standard debate about the definition of lying, which has tended to turn on questions such as whether the putative liar must believe her statement to be false, or whether she must “intend to deceive” the hearer. On our account, “intent to deceive” is one possible form of ulterior motive, but not the only one. Likewise, deceptive utterances may be believed false, or not; what makes a statement deceptive is the role of an ulterior motive in generating it.

More importantly from the computational perspective, this account provides a specific delineation of the representational capacities needed for effective identification and classification of deception. In addition to a basic theory of mind, a deception-savvy agent must be able to represent underlying motives and their relationship to standing norms. Without these capacities, it would lack the ability to distinguish between an outright lie, which may prompt a formal challenge, and a false belief, which could lead to education. More generally, if there is no ulterior motive or one cannot be inferred, then an apparent disagreement should be resolvable through requests for clarification, education, or a disentangling of the priority hierarchy over standing norms.
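
Concretely, the abduced cause of a dubious statement dictates the appropriate response; a schematic dispatch (the labels are our own illustration) might be:

```python
def respond(inferred_cause: str) -> str:
    """Map the inferred cause of a dubious statement to a response type."""
    responses = {
        "ulterior_motive": "issue a formal challenge",
        "false_belief": "offer correcting information",
        "norm_priority_mismatch": "renegotiate which norms take precedence",
    }
    return responses.get(inferred_cause, "ask for clarification")
```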

Part 2 will examine some implications of these conclusions for political discourse.


Will Bridewell is a computer scientist at the U.S. Naval Research Laboratory’s Navy Center for Research in Artificial Intelligence.  Alistair M. C. Isaac is a lecturer in the philosophy of mind and cognition at the University of Edinburgh.  Their computational approach to deception is explored in more detail in “White Lies on Silver Tongues: Why Robots Need to Deceive (and How)” in Robot Ethics 2.0, Oxford University Press.
