Note: Please treat this talk, and all alternative versions, as CC-BY:
This work is licensed under a Creative Commons Attribution 3.0 Unported License.
Text of talk
What’s the Difference Between a Mathematician, an Engineer and a Programmer?
Mathematicians use natural logs.
Engineers use decibels (10 times log base 10).
Programmers use bits (log base 2).
This explanation is for programmers, and uses bits: all logs are base 2.
Some useful functions:
odds(p) = p/(1-p) [For gamblers: "1/3 probability" turns into "odds are 1-to-2"]
logit(p) = log(odds(p))
expit(b) = 2^b/(1+2^b) [The inverse of logit -- check!]
Belief(X) = logit(probability you assign to X)
Fact: Belief(not X)=-Belief(X)
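These definitions can be sketched in a few lines of Python (the function names mirror the talk; all logs base 2):

```python
import math

def odds(p):
    # odds(1/3) == 0.5 -- "1/3 probability" means "odds are 1-to-2"
    return p / (1 - p)

def logit(p):
    # log base 2, so the result is measured in bits
    return math.log2(odds(p))

def expit(b):
    # inverse of logit: expit(logit(p)) == p
    return 2**b / (1 + 2**b)

def belief(p):
    # Belief(X) = logit of the probability you assign to X
    return logit(p)

# Fact: Belief(not X) == -Belief(X)
p = 0.8
assert abs(belief(1 - p) + belief(p)) < 1e-9
```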
Belief is measured in bits!
* Belief(X)=0: probability 0.5, zero knowledge
* Belief(X)=1: probability is 2/3
* Belief(X)=-1: probability is 1/3
* Belief(X)=5: probability about 0.97
* Belief(X)=10: “I’m 99.9% certain about this!”
* Belief(X)=-10: “There’s a 0.001 chance of that!”
* Belief(X)=infinity: probability 1, or “The religious belief”…
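The table above can be checked directly (a small Python sketch; `expit` as defined earlier):

```python
def expit(b):
    # inverse of logit, logs base 2
    return 2**b / (1 + 2**b)

for b in [0, 1, -1, 5, 10, -10]:
    print(f"Belief {b:+} bits -> probability {expit(b):.4f}")
# 0 -> 0.5000, 1 -> 0.6667, -1 -> 0.3333,
# 5 -> 0.9697, 10 -> 0.9990, -10 -> 0.0010
```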
Overconfident: much more than 1-expit(B) of your beliefs of strength > B are wrong (for some B>0)
Underconfident: much less than 1-expit(B) of your beliefs of strength between 0 and B are wrong (for some B>0)
Well-calibrated: Neither overconfident nor underconfident
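A toy simulation shows what calibration means operationally. The setup is mine: we assume a well-calibrated agent, i.e. events actually occur with exactly the probability the agent assigns, and check that the fraction of strong beliefs that come out wrong stays at or below 1-expit(B):

```python
import random

def expit(b):
    # inverse of logit, logs base 2
    return 2**b / (1 + 2**b)

random.seed(0)
B = 3           # strength threshold, in bits
n = 100_000
beliefs = [random.uniform(0, 6) for _ in range(n)]        # belief strengths in bits
# Calibrated world: each event happens with exactly the assigned probability.
outcomes = [random.random() < expit(b) for b in beliefs]

strong = [(b, x) for b, x in zip(beliefs, outcomes) if b > B]
wrong = sum(1 for b, x in strong if not x)
frac_wrong = wrong / len(strong)
print(f"fraction of >{B}-bit beliefs that were wrong: {frac_wrong:.4f}")
# For a calibrated agent this is bounded by 1 - expit(B) = 1/9;
# an overconfident agent would blow well past that bound.
```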
If you see an event E, and you wonder if X is true, when is E helpful? Only when P(E given X) != P(E given not X). But how much?
Likelihood(E given X) = P(E given X)/P(E given not X)
Evidence(E about X) = log(Likelihood(E given X))
Evidence is measured in bits!
Why the “suggestive name”?
Belief(X after seeing E) = Belief(X)+Evidence(E about X)
Bayes’ Theorem: “If you are well-calibrated, and update beliefs according to THE FORMULA, you remain well-calibrated”
Corollary: If you sometimes count evidence twice, or sometimes only weakly, you FALL OUT OF CALIBRATION!
Bayes’ Theorem is math, not a suggestion. If you care about being right, you can’t afford to ignore it!