How do we know whether a coin is fair? The obvious way is to run an experiment: toss it several times and record the outcomes by counting how many tails (T) and heads (H) appear. We can give tails the value 0 and heads the value 1. If the outcomes are fairly balanced (the average is around 0.5), we can be reasonably sure the coin is fair. Simple statistics.
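As a quick sketch, that simple check looks like this in Python (the toss record is made up for illustration):

```python
# Encode tails as 0 and heads as 1, then take the simple average.
tosses = "HTHHTTHTHT"                # hypothetical record of 10 tosses
values = [1 if t == "H" else 0 for t in tosses]
print(sum(values) / len(values))     # ~0.5 suggests a fair coin
```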

Taking a more serious approach, the Bayesian field gives us a broader view. We do not treat the probability as just “a number”, but as “a probability distribution” spread across all possible outcomes, and we can watch the beautiful “movement” of that distribution as evidence arrives. With the help of a little trick called a “conjugate prior”, viewing and updating this belief becomes easy to calculate.
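Concretely, Beta-Binomial conjugacy means the update is just counting: a Beta(α, β) prior combined with h observed heads and t observed tails yields a Beta(α + h, β + t) posterior. A minimal sketch, assuming a uniform Beta(1, 1) starting prior and made-up counts:

```python
from scipy.stats import beta

# Conjugate update: Beta(a, b) prior + h heads, t tails -> Beta(a + h, b + t).
a, b = 1, 1                      # uniform prior: every bias equally likely
h, t = 7, 3                      # hypothetical counts: 7 heads, 3 tails
posterior = beta(a + h, b + t)

print(posterior.mean())          # posterior mean of the head probability
print(posterior.interval(0.95))  # 95% credible interval for that probability
```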


This is shown in Figure 1, which plots an experiment of 40 tosses with the outcome HTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHT. The experiment uses the Beta distribution as the conjugate prior to the binomial likelihood (the coin toss).
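A sketch of how such a figure could be reproduced, again assuming a uniform Beta(1, 1) prior (the original plot’s prior is not stated):

```python
from scipy.stats import beta

a, b = 1, 1                          # uniform Beta(1, 1) prior
tosses = "HT" * 20                   # the 40 alternating tosses

for i, toss in enumerate(tosses, start=1):
    if toss == "H":
        a += 1
    else:
        b += 1
    if i % 10 == 0:                  # snapshot the belief every 10 tosses
        mode = (a - 1) / (a + b - 2)           # peak of the Beta density
        print(f"after {i} tosses: peak={mode:.2f}, mean={beta(a, b).mean():.2f}")
```

With this alternating sequence the peak stays at 0.5 while the curve narrows: the belief that the coin is fair grows firmer with every snapshot.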


Then, for the second experiment (Figure 2), the outcome is TTTTTTTTHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH.

In the early attempts, we form the belief that the coin tends to show the tail (T) side; the graph’s peak is near zero. Then, after many more attempts, once the outcome contains many H’s (80% by a simple average calculation), the peak shifts to 0.8, far higher than before.
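The same update sketch applied to this second sequence (again assuming a Beta(1, 1) prior) shows the peak traveling from near 0 toward 0.8:

```python
a, b = 1, 1                          # uniform Beta(1, 1) prior again
tosses = "T" * 8 + "H" * 32          # 8 tails, then 32 heads

for i, toss in enumerate(tosses, start=1):
    if toss == "H":
        a += 1
    else:
        b += 1
    if i in (8, 20, 40):             # early, middle, and final belief
        mode = (a - 1) / (a + b - 2)
        print(f"after {i} tosses: peak at {mode:.2f}")

# after  8 tosses: peak at 0.00   (all tails so far)
# after 20 tosses: peak at 0.60
# after 40 tosses: peak at 0.80
```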


That is a simple illustration of the Bayesian paradigm.

The logic is intuitively in line with our common sense. For example, suppose you have been told that a product is great, and many people tell you the same. The more people who inform you about the greatness of the product, the stronger your belief in the product becomes. This Bayesian updating of belief is widely used in the literature, but unfortunately it is not taught much in Indonesian computer science lectures.


Key Differences:

  1. When the peak of the probability curve is at 0.5, it matches the result of the simple mean calculation.
  2. But with a simple statistical calculation (the mean), we cannot see a confidence factor. In Bayesian statistics, the more confirmations we get, the more probability mass concentrates around that value.
  3. We can also see dynamic belief adjustment here: the sharper (bolder) the curve becomes (the more data that has been learned), the firmer our belief about the hypothesis, as the sketch after this list shows.
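One way to see that confidence factor numerically: keep the observed head rate fixed at 80% while increasing the amount of evidence, and the posterior’s standard deviation shrinks (a sketch, assuming a Beta(1, 1) prior):

```python
from scipy.stats import beta

# Same observed ratio (80% heads), increasing amounts of evidence.
for n in (10, 40, 160, 640):
    h = int(0.8 * n)                     # heads observed out of n tosses
    post = beta(1 + h, 1 + (n - h))      # conjugate Beta posterior
    print(f"n={n:4d}: mean={post.mean():.3f}, sd={post.std():.3f}")
```

The mean stays near 0.8 while the standard deviation falls, which is exactly the bolder, firmer curve described above.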