Finely tuned models sacrifice explanatory depth. (arXiv:1910.13608v1 [physics.hist-ph])
Feraz Azhar, Abraham Loeb

It is commonly argued that an undesirable feature of a theoretical or
phenomenological model is that salient observables are sensitive to values of
parameters in the model. But in what sense is it undesirable to have such
‘fine-tuning’ of observables (and hence of the underlying model)? In this
paper, we argue that such fine-tuning can be interpreted as a shortcoming of the
explanatory capacity of the model: in particular, it signals a lack of
explanatory depth. In support of this argument, we develop a scheme—for
models that arise broadly in the sciences—that quantitatively relates
fine-tuning of observables described by these models to a lack of depth of
explanations based on these models. A significant aspect of our scheme is that,
broadly speaking, the inclusion of larger numbers of parameters in a model will
decrease the depth of the corresponding explanation. To illustrate our scheme,
we apply it in two settings, in each case comparing the depth of two
competing explanations. The first setting involves
explanations for the Euclidean nature of spatial slices of the universe today:
in particular, we compare an explanation provided by the big-bang model of the
early 1970s with an explanation provided by a general model of cosmic
inflation. The second setting has a more phenomenological character, where the
goal is to infer from a limited sequence of data points, using maximum entropy
techniques, the underlying probability distribution from which these data are
drawn. In both of these settings we find that our analysis favors the model
that intuitively provides the deeper explanation of the observable(s) of
interest. We thus provide an account that unifies two ‘theoretical virtues’ of
models used broadly in the sciences—namely, a lack of fine-tuning and
explanatory depth—to show that, indeed, finely tuned models sacrifice
explanatory depth.
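
The abstract does not spell out the quantitative scheme, so as orientation
only, here is a minimal Python sketch of one standard way to quantify
"sensitivity of an observable to a parameter": the logarithmic sensitivity
Delta = |d ln O / d ln p| familiar from the fine-tuning literature. This is an
assumed illustration, not the measure developed in the paper.

    import numpy as np

    def log_sensitivity(observable, p, eps=1e-6):
        """Estimate Delta = |d ln O / d ln p| by central finite differences.
        A large Delta means the observable is finely tuned in p: a small
        fractional change in the parameter shifts the observable a lot."""
        up = observable(p * (1 + eps))
        down = observable(p * (1 - eps))
        return abs(np.log(up) - np.log(down)) / (2 * eps)

    # Toy observables (hypothetical, for illustration only)
    gentle = lambda p: p ** 0.5                  # Delta = 0.5 everywhere
    tuned = lambda p: np.exp(1.0 / (p - 0.99))   # steep near p = 0.99

    print(log_sensitivity(gentle, 1.0))  # ~0.5: not finely tuned
    print(log_sensitivity(tuned, 1.0))   # ~1e4: finely tuned at p = 1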
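
The first setting turns on the standard flatness problem, and the textbook
arithmetic behind it (background only; not the paper's depth measure) shows
why the early-1970s big-bang model is finely tuned while inflation is not.
The Friedmann equation gives

    \Omega(t) - 1 = \frac{k}{a^2 H^2},

so in a decelerating universe, where aH falls with time, the departure from
flatness grows:

    |\Omega - 1| \propto a^2 \;\text{(radiation era)}, \qquad
    |\Omega - 1| \propto a \;\text{(matter era)}.

Observing \Omega \approx 1 today then demands an extraordinarily tuned initial
condition (commonly quoted as |\Omega - 1| \lesssim 10^{-60} at the Planck
time). During inflation, by contrast, H is nearly constant while
a \propto e^{Ht}, so

    |\Omega - 1| \propto e^{-2Ht} \longrightarrow 0,

and no special initial value is required.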
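
For the second setting the abstract again leaves the details to the paper; as
a hedged sketch of the general technique, the following Python snippet infers
a maximum-entropy distribution on a small discrete support from a limited data
sequence, under the assumed, purely illustrative constraint that only the
sample mean is matched. The paper's actual constraints and setup may differ.

    import numpy as np
    from scipy.optimize import brentq

    def maxent_from_mean(sample_mean, K):
        """Maximum-entropy distribution on {0, ..., K} whose mean matches
        the sample mean. The maxent solution has the Gibbs form
        p_i proportional to exp(-lam * x_i); we solve for lam numerically."""
        x = np.arange(K + 1, dtype=float)

        def mean_of(lam):
            logits = -lam * x
            logits -= logits.max()        # stabilize the exponentials
            p = np.exp(logits)
            p /= p.sum()
            return p @ x

        # mean_of is monotone decreasing in lam, from K down to 0, so a
        # sign change is guaranteed whenever 0 < sample_mean < K.
        lam = brentq(lambda l: mean_of(l) - sample_mean, -50.0, 50.0)
        logits = -lam * x
        logits -= logits.max()
        p = np.exp(logits)
        return p / p.sum(), lam

    # A limited (hypothetical) data sequence drawn on {0, ..., 10}
    data = np.array([2, 3, 1, 4, 2, 5, 3])
    p, lam = maxent_from_mean(data.mean(), K=10)
    print(f"lambda = {lam:.4f}; inferred mean = {p @ np.arange(11):.4f}")

With only the mean constrained, the inferred distribution is the discrete
analogue of an exponential; each additional moment constraint adds a Lagrange
multiplier, which is one concrete sense in which extra parameters enter such
phenomenological models.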
