Abstract
In 1930, Fisher presented his fiducial argument as a solution to the "fundamentally false and devoid of foundation" practice of using Bayes' theorem with uniform priors to represent ignorance about a parameter. His solution resulted in an "objective" posterior distribution on the parameter space, but it became the subject of a long controversy in the statistical community. The theory was never fully accepted by his contemporaries, notably the Neyman-Wald school of thought, and after Fisher's death in 1962 it was largely forgotten and widely regarded as his "biggest blunder". In the past 20 years or so, his idea has received renewed attention from numerous authors, yielding several more modern approaches. The common goal of these approaches is to obtain an objective distribution on the parameter space, summarizing what might reasonably be learned from the data, without invoking Bayes' theorem. Similarly, within the Bayesian paradigm, attempts have been made to construct prior distributions that are in some sense objective, based either on invariance arguments or on entropy arguments, yielding an "objective" posterior distribution given the data. This thesis traces the origins of these two approaches to objective statistical inference, examines the underlying logic, and investigates when they give equal, similar, or vastly different answers given the same data.