Logical uncertainty


I recently came across a quote by Alan Turing that I found very illuminating:

The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false.

Alan M. Turing

It is cited by Scott Aaronson in his essay Why Philosophers Should Care About Computational Complexity. It’s a theme I have blogged about and wondered about before.

Today, I came across a MIRI paper, “Questions of Reasoning Under Logical Uncertainty”, which describes this concept.

Consider a black box with one input chute and two output chutes. The box is known to take a ball placed in the input chute and then (via some complex Rube Goldberg machine) deposit it in one of the output chutes. An environmentally uncertain reasoner does not know which Rube Goldberg machine the black box implements.

A logically uncertain reasoner may know which machine the box implements, and may understand how the machine works, but does not (for lack of computational resources) know how the machine behaves.

Standard probability theory is a powerful tool for reasoning under environmental uncertainty, but it assumes logical omniscience: once a probabilistic reasoner has determined precisely which Rube Goldberg machine is in the black box, they are assumed to know which output chute will take the ball. By contrast, realistic reasoners must operate under logical uncertainty: we often know how a machine works, but not precisely what it will do.
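
To make the distinction concrete, here is a minimal Python sketch of my own (it is not from the paper): the function below plays the role of the Rube Goldberg machine. It is deterministic and fully specified, so a logically omniscient reasoner would know its output the moment they read the source; the rest of us can only find out by paying the computational cost of running it.

```python
import hashlib

def rube_goldberg_machine(ball: int, rounds: int = 1_000_000) -> str:
    """A hypothetical 'machine': deterministic and fully specified,
    yet its output is infeasible to predict without running it."""
    state = ball.to_bytes(8, "big")
    for _ in range(rounds):  # a long chain of hash steps stands in for the contraption
        state = hashlib.sha256(state).digest()
    return "left" if state[0] % 2 == 0 else "right"

# Environmental uncertainty: not knowing which function is inside the box.
# Logical uncertainty: holding this exact source code and still being unable
# to say which chute the ball exits without doing the computation.
print(rube_goldberg_machine(ball=42))
```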

I like the term “logical uncertainty”, as it captures the intuition that the concept is, in some sense, a generalization of information.

February 20, 2015


Tags: MIRI, information, philosophy, Turing, complexity
