interesting paper by Gelman and Shalizi on the philosophy of Bayesian methods:

they argue that the common interpretation of Bayes as updating levels of certainty is flawed, and that it's better seen as Popperian hypothesis generation and testing. what's crucial in practice is that models should be routinely checked against their posterior predictions and replaced when they fail, not just updated.


also, a choice quote: a lil burn on something that always bothered me:

> statistical folklore says that the chi-squared statistic is ultimately a measure of sample size (cf. Lindsay & Liu, 2009)
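that folklore claim is easy to see numerically. a toy illustration of mine (not from Lindsay & Liu): hold the cell proportions of a 2×2 table fixed and scale up n, and the chi-squared statistic grows linearly with sample size while the effect size (Cramér's V) stays put.

```python
# Toy demo (mine): chi-squared tracks sample size when proportions are fixed.

def chi_squared(table):
    """Pearson chi-squared for a 2x2 contingency table, no continuity correction."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    return sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(2) for j in range(2))

base = [[55, 45], [45, 55]]  # a weak association, proportions held fixed
for scale in (1, 10, 100):
    table = [[cell * scale for cell in r] for r in base]
    n = sum(map(sum, table))
    chi2 = chi_squared(table)
    v = (chi2 / n) ** 0.5  # Cramér's V for a 2x2 table: constant across scales
    print(f"n={n:6d}  chi2={chi2:8.2f}  V={v:.3f}")
```

with n = 200, 2000, 20000 this prints chi2 = 2, 20, 200 but V = 0.100 every time: same "effect", ever more "significance".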



@melissaboiko Interesting. My own take has long been that testing a single hypothesis in Bayes is fairly meaningless, and instead, we should compare different hypotheses on the same evidence.
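a minimal sketch of what that comparison can look like (my own toy example, not from the thread): two point hypotheses about a coin, scored on the same data via their likelihood ratio, which for point hypotheses is the Bayes factor.

```python
# Toy example (mine): compare two hypotheses on the same evidence.
from math import comb

def binom_lik(k, n, p):
    """Likelihood of k heads in n flips under heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

k, n = 62, 100            # observed: 62 heads in 100 flips
h_fair, h_biased = 0.5, 0.7
bf = binom_lik(k, n, h_biased) / binom_lik(k, n, h_fair)
print(f"Bayes factor (biased vs fair): {bf:.2f}")
```

neither hypothesis gets an absolute verdict; the Bayes factor just says how much the same data favour one over the other (here mildly favouring the biased coin).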


@ansugeisler I am new to Bayesian practice and so it’s not super clear to me yet how to go about it, but they say they're fans of pushing a model beyond its limits until it cracks, analysing how and why it fails, and incorporating this newly learned information into new models (“You have to want to falsify the model. If you love somebody, set them free.” – Gelman)

"Bayesian Data Analysis" has a chapter on model checking, incl. graphical checking which looks cool, but I'm not there yet.
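for a flavour of what such a check involves, here's a rough sketch of mine (a toy, not from the book): fit a Beta-Binomial to coin flips, then ask whether data replicated from the posterior reproduce a feature of the observed sequence — here, the number of switches between heads and tails.

```python
# Toy posterior predictive check (mine): an iid coin model can match the
# overall heads rate yet completely fail to reproduce the switching pattern.
import random

random.seed(1)
obs = [1, 0] * 10  # perfectly alternating -- iid flips shouldn't look like this

def n_switches(xs):
    return sum(a != b for a, b in zip(xs, xs[1:]))

k, n = sum(obs), len(obs)
t_obs = n_switches(obs)

# Posterior under a Beta(1,1) prior is Beta(1+k, 1+n-k); replicate datasets.
draws, extreme = 2000, 0
for _ in range(draws):
    p = random.betavariate(1 + k, 1 + n - k)
    rep = [1 if random.random() < p else 0 for _ in range(n)]
    if n_switches(rep) >= t_obs:
        extreme += 1
print(f"posterior predictive p-value: {extreme / draws:.3f}")
```

the p-value comes out essentially zero: the replicated data almost never alternate that much, so the model "cracks" — exactly the kind of failure you'd then try to understand and build into the next model.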


@melissaboiko Sounds like a fun book.

OTOH, I can't help thinking there's a fine line between "perfecting a model" and "falsification-proofing a model", and that the latter can amount to just adapting an incorrect model to the shape of the evidence.
