Probability distributions classically represent ignorance (at least for orthodox Bayesians). I've long wondered whether and how they could be used to represent things which are knowable but hard to compute.
Here's a rough insight toward formalizing this: instead of calculating the exact result of a call to a function, one can return a probability estimate. The error is then lowered by progressively doing more computation. This is a form of lazy evaluation, since computation is deferred, and it lets one get useful results while avoiding long computations.
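One way this idea could look in code is a minimal sketch (my own illustration, not a fixed design): an estimator object that holds a running Monte Carlo estimate of a hard-to-compute value and refines it on demand. The names `LazyEstimate`, `refine`, and `sampler` are hypothetical; the example target, estimating π by sampling the unit square, is just a stand-in for any expensive computation that admits noisy samples.

```python
import math
import random


class LazyEstimate:
    """Anytime estimate of a hard-to-compute value.

    Rather than computing the exact answer, keep a running estimate
    plus an error bar, and shrink the error by spending more cycles.
    (Hypothetical sketch; uses Welford's online variance algorithm.)
    """

    def __init__(self, sampler):
        self.sampler = sampler  # draws one noisy sample of the target value
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def refine(self, cycles=1000):
        """Spend `cycles` more computations to tighten the estimate."""
        for _ in range(cycles):
            x = self.sampler()
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

    def estimate(self):
        """Return (current value estimate, standard error)."""
        if self.n < 2:
            return self.mean, float("inf")
        stderr = math.sqrt(self.m2 / (self.n - 1) / self.n)
        return self.mean, stderr


# Stand-in "hard" value: pi, via random points in the unit square.
random.seed(0)
pi_hat = LazyEstimate(
    lambda: 4.0 if random.random() ** 2 + random.random() ** 2 < 1.0 else 0.0
)

pi_hat.refine(1_000)        # cheap first pass: rough answer, wide error
m1, e1 = pi_hat.estimate()

pi_hat.refine(100_000)      # pay more cycles only if more precision is needed
m2, e2 = pi_hat.estimate()

assert e2 < e1  # more computation, smaller error
```

The point of the structure is that `estimate()` is always callable: a caller who is satisfied with the current error bar can stop paying for cycles, which is exactly the deferred-computation behavior described above.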
The tradeoff is then: how many bits of information about the value of that function can I gain by spending an additional computing cycle?
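One plausible way to make "bits per cycle" concrete (an assumption on my part, not something the note specifies) is to measure information gain as the log-ratio of error bars: halving the uncertainty about the value gains one bit. Under plain Monte Carlo, where the error shrinks like 1/sqrt(n), that immediately tells you each extra bit costs four times the cycles of the last one. The helper name `bits_gained` is hypothetical.

```python
import math


def bits_gained(err_before, err_after):
    """Bits of information gained when refinement shrinks the error bar.

    Convention (assumed here): halving the uncertainty = one bit.
    """
    return math.log2(err_before / err_after)


# Halving the error gains exactly one bit.
assert abs(bits_gained(1.0, 0.5) - 1.0) < 1e-9

# Monte Carlo error scales as 1/sqrt(n), so 4x the cycles buys one bit:
err_at_n = 0.1
err_at_4n = err_at_n / math.sqrt(4)
assert abs(bits_gained(err_at_n, err_at_4n) - 1.0) < 1e-9
```

Under this convention the marginal value of a cycle falls off quickly, which is one candidate answer to the tradeoff question: stop refining once the next bit costs more cycles than it is worth to you.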