Popis: |
The present work is an introductory summary on the topic of misinformative and fraudulent statistical inferences, in the light of recent attempts to reform the social sciences. The manuscript focuses on the concept of replicability, that is, the likelihood that a scientific result will be reached by two independent sources. Replication studies are often ignored, and most scientific interest goes to papers presenting theoretical novelties. As a result, replicability turns out to be uncorrelated with bibliometric performance, which often reflects the popularity of a theory rather than its validity. These topics are illustrated via two case studies of very popular theories. Statistical errors and bad practices are discussed, with particular attention to the consequences of omitting inconclusive results from a paper, or 'p-hacking'. Among the remedies, the practice of preregistration is presented, along with attempts to reform peer review through it. Finally, multiversal theory and methods are discussed as tools to measure the sensitivity of a scientific theory to misinformation and disinformation.