Philosophia 80/1 | 2020 | pp. 9-39 | CC BY-NC-SA 3.0 | ISSN 0328-9672 (impresa) | ISSN 2313-9528 (en línea)
Controversies on the Empirical Significance of Auxiliary Assumptions*
Controversias en torno a la
significación empírica de los supuestos auxiliares
María CAAMAÑO ALEGRE
Universidad de Valladolid (España)
mariac@fyl.uva.es
Abstract: Theoretical laws need to be conjoined with auxiliary
assumptions in order to be empirically testable, whether in natural or social
science. A particularly heated debate has been developing over the nature and
role of these assumptions in economic theories. The so-called “F(riedman)-Twist” (“the more significant the theory, the more unrealistic the assumptions”, Friedman 1953), as well as some later criticisms by authors like Musgrave, Lawson, Mäki and Cartwright, will be examined. I will explore the apparent conflict between the Popperian desideratum of pursuing the independent testability of auxiliary assumptions and the idealizational theoretical means needed to isolate causal variables.
Keywords: Friedman-Twist, auxiliary assumptions, realism in
economics, idealization.
Resumen: En cualquier ámbito
científico, las leyes teóricas deben combinarse con supuestos auxiliares para
poder contrastarse empíricamente. En economía, se ha venido desarrollando un
debate particularmente acalorado sobre la naturaleza y el papel de estos
supuestos en las teorías económicas. Se examinarán el llamado
"F(riedman)-Twist" ("the more significant the theory, the more
unrealistic the assumptions", Friedman 1953), así como algunas críticas
posteriores de autores como Musgrave, Lawson, Mäki y Cartwright, atendiendo al
aparente conflicto entre el desideratum popperiano de buscar la
contrastabilidad independiente de los supuestos auxiliares y los procedimientos
de idealización necesarios para aislar las variables causales.
Palabras clave: Friedman-Twist, supuestos
auxiliares, realismo en economía, idealización.
1. Introduction
Auxiliary assumptions have been under discussion over the last few decades, particularly in economics, where special attention has been paid to the risk of misusing idealizations. Idealizations often operate at different though interrelated levels: in the very formulation of theoretical laws and in the auxiliary assumptions that usually accompany them. Idealizing assumptions in economics have often been the target of criticism due to their highly unrealistic nature. Yet Popper’s frequently invoked emphasis on specification and refutability is not at odds with idealizations, as is sometimes suggested; it is only at odds with epistemically unjustified idealizations, those not helpful in uncovering any interesting truths. My goal is to explore the apparent conflict between the Popperian desideratum of pursuing the independent testability of auxiliary assumptions and the idealizational
theoretical means needed to isolate causal variables. I will argue that
heuristic assumptions or idealizations must be evaluated by methods other than
the merely derivational ones, combining different resources, like the bidirectional
method of empirical approximation,
[1] and dialectical methods such as
contrast explanation and replacement of assumptions.[2]
I will start
with some clarifications on the debate around auxiliary assumptions (section
2), then I will discuss what is called the “F(riedman)-Twist” in economics,
that is, Milton Friedman’s influential (and controversial) vindication of
unrealistic assumptions (section 3), and comment on Alan Musgrave’s critical response to it (section 4). After that, I will consider some methodological
insights by Tony Lawson, Uskali Mäki and Nancy Cartwright that could be
regarded as challenges affecting both sides of the debate (section 5). Finally,
some possible answers to the challenges will be sketched (section 6).
2. Main features of the debate
The nature of auxiliary assumptions is a subject that was addressed very early in contemporary philosophy of science, already in Pierre Duhem’s discussion of the holistic features of confirmation.[3] He
convincingly argued that it is impossible to test a hypothesis in isolation,
since, in order to derive empirical consequences from a hypothesis, the latter
needs to be conjoined with many other assumptions and hypotheses about the
world, the functioning of measuring instruments, the environmental conditions,
etc. For example, in testing hypotheses from thermodynamics, we need to be able
to empirically determine changes in temperature by correlating changes in
temperature with changes in some other quantity. If we use a mercury thermometer to this end, we need to assume that changes in the length of the mercury column are what allow us to establish changes in temperature, and we need to endorse numerous assumptions about how mercury expands or contracts as the temperature rises or falls.
According to Duhem, this type of measurement depends on the assumption of certain laws of nature, like the law of linear expansion, according to which the change in length is directly proportional to the change in temperature. Also, there are
assumptions on the conditions under which a temperature reading as given by a
mercury thermometer should be disregarded, for example, if the mercury
thermometer is placed in a strong magnetic field.
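In a standard modern formulation (a sketch of the assumption Duhem alludes to, not his notation), the linear expansion law underlying the mercury reading can be written as

$$ L = L_0\,(1 + \alpha\,\Delta T), $$

where $L_0$ is the length of the mercury column at a reference temperature, $\alpha$ the expansion coefficient and $\Delta T$ the change in temperature; only under this auxiliary assumption does a measured change in length license an inference to a change in temperature.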
As is well known, Duhem emphasizes, as an important implication of his view, that confirmation holism precludes the possibility of performing crucial
experiments. He famously denied that there had been a crucial experiment
leading to the rejection of the particle theory of light in favor of the wave
theory of light. According to him:
(…) in fact,
what the experiment declares stained with error is the whole group of
propositions accepted by Newton, and after him by Laplace and Biot, that is,
the whole theory from which we deduce the relation between the index of
refraction and the velocity of light in various media. But in condemning this
system as a whole by declaring it stained with error, the experiment does not
tell us where the error lies. Is it in the fundamental hypothesis that light
consists in projectiles thrown out with great speed by luminous bodies? Is it
in some other assumption concerning the actions experienced by light corpuscles
due to the media in which they move?[4]
Willard Van Orman Quine took
Duhem’s argument a step further and asserted that a theory can always avoid
refutation by changing the auxiliary assumptions conjoined with it.[5] While
accepting the very fact of confirmation holism, Popper rejected the
implications drawn from the so-called Duhem-Quine thesis, in particular, the
idea that, when a false prediction is derived from a hypothesis conjoined with
auxiliary assumptions, it is not possible to identify where the mistake lies.[6]
Against this “holistic dogma”, as he calls it, he claimed that it is always
possible to pinpoint the logical connections between hypotheses or assumptions
and refuted predictions. The way to do that would be similar to the one applied
to prove the independence of axioms in axiomatic systems, which would involve
finding out a model that satisfies all axioms but the independent one. When
some refuting evidence is gathered, such evidence may provide a model that
satisfies several assumptions while not the main hypothesis that happens to be
conjoined with them. If so, even in non-axiomatized systems, we could identify
the source of error by conjoining a different hypothesis to the same
assumptions and check whether the previously refuting evidence is now a model
of the new system sharing the same auxiliary assumptions with the old system.
In that case, if a positive result is obtained, we have good grounds to infer
that the assumptions were not the source of error in the first place, that is,
when conjoined with the old hypothesis. As a consequence, the more analyzed a
theoretical system is, the better for methodological purposes.
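A schematic rendering of the procedure just described may help (the notation is mine, not Popper’s). Let the tested system be $H \wedge A_1 \wedge \dots \wedge A_n$ and let $e$ be the refuting evidence. If

$$ e \models A_1 \wedge \dots \wedge A_n, \qquad e \not\models H, \qquad e \models H' \wedge A_1 \wedge \dots \wedge A_n $$

for some alternative hypothesis $H'$, then the auxiliary assumptions are exonerated and the error is plausibly located in the original hypothesis $H$.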
Earlier, Popper had also objected
to the idea that ad hoc
modifications, replacements or additions of auxiliary assumptions are an acceptable scientific practice.[7] Good
scientific practice would require an effort to uncover mistakes, not the
opposite, and so, according to Popper, auxiliary assumptions should be modified
whenever there is refuting evidence undermining them, but not when the refuting
evidence rather undermines the main hypothesis the assumptions are conjoined
with. A case in point is the ad hoc assumption that phlogiston has negative weight, introduced so that phlogistonians could accommodate the anomaly of the increase in weight of calcined metals, despite the fact that no independent evidence supported the introduction of such an assumption. Some typical, textbook examples of auxiliary assumptions may also help to get a sense of how the Popperian test for the acceptability of auxiliary assumptions could work. To test the hypothesis that puerperal fever was caused by cadaveric contamination, it was assumed that a certain substance used to remove such contamination had
indeed disinfecting power, an assumption that was clearly testable
independently of the hypothesis it was conjoined to. The testing of Copernican
astronomy on the basis of the lack of observable stellar parallax is a less
clear-cut case, since here the independent test of some auxiliary assumptions
has its own difficulties. The lack of observable stellar parallax can only be
acknowledged as evidence refuting Copernican astronomy if it is assumed that
stellar parallax can be observed regardless of the distance between the Earth
and the stars. On the contrary, if the magnitude of the distance between the
Earth and the stars is assumed to rule out the possibility of observing the
stellar parallax, then, obviously, the lack of observable stellar parallax
cannot be considered as evidence refuting Copernican astronomy. Now, here we
have a case where confirmation holism comes together with some limitations in
the independent testability of auxiliary assumptions, ultimately resulting in a
strong (although historically transient) underdetermination of theory by
observation.
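The auxiliary assumption at stake can be stated in a simple modern form (a schematic sketch in current astronomical notation, not the historical formulation): the annual parallax angle $p$ of a star at distance $d$, with $a$ the Earth-Sun distance, satisfies

$$ p \approx \frac{a}{d} \quad (p \text{ in radians}), $$

so that if $d$ is assumed to be large enough, $p$ falls below the resolution of pre-telescopic instruments and the absence of an observed parallax no longer refutes the Earth’s motion.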
Interestingly, while the problem
of auxiliary assumptions has mainly been approached indirectly in general
philosophy of science, most often in discussing confirmation holism,
underdetermination and adhocness,[8] a
particularly heated and detailed debate has developed over the role of
auxiliary assumptions in economic theories. As a consequence, the issue of
auxiliary assumptions has been addressed more thoroughly in the literature in
philosophy of social science.[9] It is certainly very common for philosophical and methodological discussions about different aspects of science to be developed in more detail in the social sciences, where the problematic side of those aspects appears more clearly. The problem of the validity of experiments and the very issue of auxiliary assumptions are cases in point, falling outside the focus of attention of general philosophy of science, traditionally oriented towards the study of natural science, where experiments and assumptions seem less problematic than in social science. Still, the progress or insights that philosophers of social science have made on these subjects can be extended beyond social science to natural science as well.
In the middle of the 20th
century, Milton Friedman made a very influential defense of the use of
idealizations in economics, leading to what has been labeled “the
F(riedman)-Twist”, which can be summarized in his famous statement that “the more significant the theory, the more unrealistic the assumptions”.[10] The F-Twist implied that economics follows the example of physics in applying the Galilean “paradigm”, thus including, in economic theory, assumptions equivalent to those about frictionless planes, perfectly rigid bodies, or mass points in physics. In both fields, the predictive fruitfulness of idealizations would
provide the epistemic justification of these literally false assumptions. In a
paper from 1981, Alan Musgrave tried to untwist the F-Twist by showing that, in
his own words: “[in economic theory] the more unrealistic domain assumptions
are, the less testable and hence less significant is the theory”.[11]
According to him, there has been a systematic misuse of the Galilean “paradigm”
in economic theory, a misuse related to both the neglect of the empirical
nature of auxiliary assumptions and the failure to distinguish the different
purposes of negligibility, domain and heuristic assumptions respectively.
I will first
examine the Friedmanian arguments in favor of “unrealistic assumptions” and will later focus on some criticisms by authors like Musgrave, Uskali Mäki, Tony
Lawson and Nancy Cartwright to conclude with some positive proposals invoking
the role of empirical approximations.
3. The F(riedman)-Twist: the vindication of unrealistic
assumptions
Friedman’s
argument in favor of unrealistic assumptions hinges on the distinction between
descriptive accuracy and analytical relevance. The former would require a
detailed empirical correspondence between theoretical assumptions and the
target domain, while the latter would involve an explanatory and predictive
effectiveness usually dependent, in turn, on the endorsement of unrealistic
assumptions. According to Friedman, descriptive accuracy is not compatible with
analytical relevance, that is, an empirically detailed theory would defeat its
own purpose, namely, explaining and predicting phenomena in a certain domain by identifying a few variables as the main ones responsible for them. The
identification and selection of a limited set of explanatory variables, as well
as the formulation of fundamental conjectures on how they operate, force us to
go beyond realistic descriptions into the domain of unrealistic, idealizing
assumptions. Granting that any genuine explanation would require this move, the
methodological desideratum of
explaining more with less would accentuate it. Friedman’s somewhat sarcastic
rejection of realistic assumptions, endorsed by virtue of their descriptive
accuracy, is clear when he states:
A completely
"realistic" theory of the wheat market would have to include not only
the conditions directly underlying the supply and demand for wheat but also the
kind of coins or credit instruments used to make exchanges; the personal
characteristics of wheat-traders such as the color of each trader's hair and
eyes, his antecedents and education, the number of members of his family, their
characteristics, antecedents, and education, etc.; the kind of soil on which
the wheat was grown, its physical and chemical characteristics, the weather
prevailing during the growing season; the personal characteristics of the
farmers growing the wheat and of the consumers who will ultimately use it; and
so on indefinitely. Any attempt to move very far in achieving this kind of
"'realism" is certain to render a theory utterly useless.[12]
A theory like
the one described in the above quote would be unmanageable in its detail and
hence would lack any focus that could enable us to uncover the (often) hidden causes
determining the phenomena under study. Without a careful choice of a few
variables applicable to theoretically represent a wide range of phenomena, we
are left with no explanatory resources to make causal inferences, and thus
ultimately, with no means to make predictions. In Friedman’s view, the
similarity gap between theoretical variables and empirical phenomena is simply
the natural consequence of what theorizing takes, namely, covering a great
number of heterogeneous, complex phenomena with a few simple concepts providing
a homogeneous representation. Theories, according to Friedman, must then be unrealistic, and their acceptability depends entirely on their predictive success, which includes not only future events but also past events not known to the person making the prediction.[13] Moreover, he forcefully rejected the idea that a theory with highly unrealistic postulates can nonetheless be made indirectly realistic by conjoining descriptively accurate or realistic auxiliary assumptions with it, a view that he considers as harmful as it is widespread in mid-20th-century economics. From his standpoint, not only do the same arguments that hold for the unrealism of theoretical postulates hold for the unrealism of auxiliary assumptions, but the same test of validity (i.e., predictive success) must also be simultaneously applied in both cases. To put it differently, auxiliary assumptions should not be empirically
tested independently of their conjoined theory, but rather together with it,
since their validity is to be evaluated according to the purpose that they are
expected to fulfill, namely, to make the conjoined theory predictively
successful. The mutual dependence between theory and auxiliary assumptions for
them to be empirically tested follows, as a consequence, from Friedman’s
account. He, on the one hand, (at least implicitly) assumes that a theory
holistically depends, for its confirmation, on its auxiliary assumptions, and
on the other hand, points out that auxiliary assumptions depend on their
conjoined theory for testing their validity as auxiliary devices enabling the
confirmation of that very theory. While the first kind of holistic dependence
is usually associated with the widely accepted Duhem-Quine thesis, the second
line of dependence is peculiar to Friedman’s approach, where auxiliary
assumptions are presented as heuristic devices meant to increase the analytical
relevance of a certain theory, rather than as inherited truths about the domain
of application of a theory or the experimental conditions required for its
testing. Descriptive accuracy would certainly be a valuable feature for the
latter and no relativity to the conjoined theory would emerge in that case,
which implies that auxiliary assumptions should be testable independently of
the conjoined theory. By contrast, if such assumptions are mere heuristic
devices intended to maximize the explanatory and predictive capacity of a
conjoined theory, their validation becomes relative to the theory, and descriptive
accuracy, for reasons already explained, need not be acknowledged as a valuable
feature. To put it bluntly, in order to serve the purposes of their unrealistic
conjoined theory, auxiliary assumptions would have to provide new unrealistic
resources to cope with extremely complex, heterogeneous domains. Again, we can
see how Friedman states his view:
To put this point less paradoxically, the relevant question to ask about
the "assumptions" of a theory is not whether they are descriptively
"realistic," for they never are, but whether they are sufficiently
good approximations for the purpose in hand. And this question can be answered
only by seeing whether the theory works, which means whether it yields
sufficiently accurate predictions. The two supposedly independent tests thus
reduce to one test.[14]
Note
that when Friedman talks of “the two supposedly independent tests” he is
referring to the very idea of distinguishing testing a theory from testing its auxiliary
assumptions, a distinction motivated by the purpose of making sure that the latter are “realistic”. In his view, on the contrary, no such distinction makes sense. Both tests are inextricably united by their shared heuristic, idealizational nature.
In Friedman’s approach, assumptions of ideal conditions like “perfect
competition” and “perfect monopoly” underlying neoclassical economic theory are
to be evaluated with regard to their analytical relevance, i.e., by their
contribution to the predictive success of such theory. Predictive success would
play a twofold role: as the purpose of auxiliary assumptions and as the
criterion to evaluate them, ultimately providing also the criterion for
acceptable departures from realism, since assumptions would need to deviate
from realism in order to fulfill their purpose. The question of what to neglect
in studying economic phenomena could only be answered by checking what choice
of neglect proves more helpful in terms of predictive power. According to
Friedman, the difference in the contribution to predictive power that an
auxiliary assumption (or a set of them) can make constitutes all the available
evidence to judge whether the idealized features represented in the auxiliary
assumptions make more difference to the phenomenon under study than the
neglected features. He thereby implicitly acknowledges that the predictive
contribution of auxiliary assumptions plays a key role in guiding causal
inference, which is the cornerstone of scientific theorizing. The core of his
argument is presented in the following quote:
What is the criterion by which to judge whether a particular departure
from realism is or is not acceptable? Why is it more "unrealistic" in
analyzing business behavior to neglect the magnitude of businessmen's costs
than the color of their eyes? The obvious answer is because the first makes
more difference to business behavior than the second; but there is no way of
knowing that this is so simply by observing that businessmen do have costs of
different magnitudes and eyes of different color. Clearly it can only be known
by comparing the effect on the discrepancy between actual and predicted
behavior of taking the one factor or the other into account.[15]
The above quote suggests that auxiliary assumptions prove analytically
relevant insofar as they contribute to identifying the prevalent causal factors
involved in the phenomenon under study, an identification that, in turn, can
only be achieved by comparing different (sets of) assumptions with respect to
their relative contribution to the predictive power of their conjoined theory.
Now, given that the scope of a theory is always restricted in at least two
ways, namely, by the specific problems under study and by the circumstances
under which it holds, the test of prediction for analytical relevance is itself
relative to both restrictions. To put it differently, the analytical relevance
of “unrealistic” or ideal assumptions is always relative to the problem
addressed and the circumstances under consideration. The pursuit of analytical
relevance amounts to the pursuit of a correspondence between the ideal and real
entities in a particular problem and under particular circumstances, and this
implies that the choice of variables used to define such correspondence is strongly
restricted by pragmatic and contextual factors. Certainly, without those
restrictions, auxiliary assumptions could be established in a “realistic” way
and make the same contribution whatever the theory. But, again, without those
restrictions, all theorizing would become pointless, either too trivial or
unmanageably complex. As emphasized by Friedman, the choice of assumptions only
makes sense relative to a problem:
Everything depends on the problem; there is no inconsistency in regarding
the same firm as if it were a perfect competitor for one problem, and a
monopolist for another, just as there is none in regarding the same chalk mark
as a Euclidean line for one problem, a Euclidean surface for a second, and a
Euclidean solid for a third.[16]
Circumstances
of application of a theory are equally important. For instance, the evolution
of retail prices of cigarettes affected by an increase of the federal cigarette
tax during a war period would be very different from their evolution if the tax
increase had occurred before that period. War circumstances may make it more convenient to replace the ideal assumption of perfect competition by the ideal assumption of perfect monopoly, for in such circumstances each firm may prioritize its prestige and the preservation of its market share,[17] thereby aligning its prices with those of other firms and making sure that the quantity produced can satisfy the demand.
Friedman’s
reference to unrealistic assumptions does not seem to fit well with the
examples of auxiliary assumptions mentioned earlier, in connection with Duhem’s
account. Those examples were directly concerned with background knowledge
involved in the use of experimental instruments or in the acknowledgment of
certain conditions for observation. It seems utterly absurd to vindicate the
unrealism of such assumptions, which are empirical in nature. Therefore, even
if, as we will see in the following section, the notion of auxiliary assumption
includes very different kinds of assumptions, it appears plausible that Friedman
is primarily referring to idealization assumptions. This is still a very broad
category, but certainly one that does not overlap with empirical assumptions on
experimental conditions. So we will later narrow down the discussion to the
issue of the justification of ideal assumptions. Friedman himself is
certainly not explicit about these distinctions and so, in order to clarify the
different roles of assumptions, it will be useful to take into account
Musgrave’s taxonomy as well as his objection to what he describes as the
unnoticed change in the status of auxiliary assumptions in economic theory.[18]
Before turning to Musgrave’s criticism of Friedman’s view, I would like
to highlight a few aspects of the latter’s account. First, Friedman is far from
holding an antirealist or merely instrumentalist view of science, his
vindication of false assumptions instead being related to their essential role
in uncovering the truth behind the appearances. Second, according to him, the
only way to check whether false auxiliary assumptions are acceptable is by
deriving successful predictions from the hypothesis the assumptions are
conjoined to. The following sections raise some concerns about the validity of
Friedman’s criterion for the acceptability of auxiliary assumptions, not about
his general idea that false assumptions are necessary to achieve some
theoretical truths. As later criticisms by Lawson, Mäki and Cartwright will
show, the derivational method advocated by Friedman is too limited, in different respects. The “test of prediction”, as Friedman calls it, overlooks not only the importance of the bridge principles providing an empirical interpretation for auxiliary assumptions, but also the tradeoff between the predictive power and the scope of application of a hypothesis, which is also connected with the contrast between ideal conditions in the experimental setting and real conditions in the target domain.
4. Musgrave’s criticism: advocating the independent testability of auxiliary assumptions
In contrast to Friedman, Musgrave
vindicates both the empirical significance of auxiliary assumptions and their
testability independently of their conjoined theory.[19] In connection with the latter claim, most economists (including Friedman) would have failed to distinguish between three kinds of auxiliary assumptions:
- negligibility assumptions, i.e.,
empirically testable assertions regarding the low influence of certain
variables on the phenomena under study;
- domain assumptions, i.e.,
empirically testable assertions expressing restrictions on the domain of
application of a theory; and
- heuristic assumptions, i.e.,
empirically evaluable assertions intended to enable successive approximations
to the phenomena under study.
Musgrave claims that in none of the three cases is it true that the more significant the theory, the more unrealistic the assumptions. On the contrary, in all three cases the assumptions can successfully play their assigned role only if they prove empirically sound. Yet each kind of assumption plays a different role, one that is not compatible with the others. Negligibility assumptions do not restrict the domain of application of a theory, as domain assumptions do precisely by pointing to some factors as not negligible. In neither of these two cases are the assumptions meant as fictions for purposes of approximation, as happens
with heuristic assumptions. But even in this third case, the role of ideal
assumptions is to be judged by their contribution to empirical approximation.
Contrary to what is argued by Friedman, the lack of “realism” of auxiliary
assumptions, closely connected to their lack of independent empirical
evaluation, would hamper progress in economics. The problem increases due to
the unnoticed change in the role of such assumptions in economic theory. To use
Musgrave’s own example, “assume that the budget is balanced” may mean:
1. Whether or not the budget is balanced makes no detectable difference to the phenomena under investigation;
2. If the budget is balanced, then the following applies;
3. Let us temporarily assume that the budget is balanced.
The three meanings are incompatible with one another, and each calls for a different empirical evaluation. Musgrave points
out that heuristic assumptions can be understood as negligibility assumptions
turned into heuristic devices allowing for successive approximation and, thus,
for taking steps towards precise predictions. His view is similar to Ernest
Nagel’s in this respect, for both understand the heuristic role of
idealizations primarily as enabling empirical approximation, and therefore as
leading to more descriptively accurate formulations of a theory.[20]
Nagel’s earlier discussion of Friedman’s account moreover suggests that the
latter conflates three different senses of ‘unreal’ applied to assumptions:
descriptive inaccuracy due to abstraction, descriptive inaccuracy due to
falsity and descriptive inaccuracy due to idealization. As Nagel notes, there
is no genuine debate on the relevance of the first sense as denoting an
essential feature of scientific assumptions. It is also uncontroversial that
falsity is to be avoided unless it successfully serves idealization purposes. The
question, then, is again on what basis ideal assumptions are acknowledged as
valid and whether the validity criterion itself involves a move towards
realism. Friedman would have failed to realize that the application of his test
of prediction for ideal assumptions involves such a move after all, because no
predictive progress is possible unless the idealized conditions stated in the
assumptions are gradually relaxed and different interfering factors omitted in
the ideal assumptions are included in subsequent formulations of auxiliary
assumptions conjoined with a theory, whether they are assumptions on friction
conjoined with Galileo’s law or assumptions on bounded rationality conjoined
with the rational maximization of returns hypothesis.[21]
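To illustrate with the free-fall case (a sketch of my own under the standard linear-drag treatment, not an example drawn from Nagel or Musgrave): the ideal law neglects air resistance, and its relaxation reintroduces the omitted interfering factor as a correction term,

$$ m\,\ddot{x} = m\,g \qquad\longrightarrow\qquad m\,\ddot{x} = m\,g - k\,\dot{x}, $$

so the idealized law is recovered in the limit $k \to 0$, while the corrected formulation yields more descriptively accurate predictions for real falling bodies.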
The dynamical nature of the status
of auxiliary assumptions is another important aspect stressed by Musgrave, who
draws attention to the fact that the development of inquiry often requires
moving from one kind of assumption to another. For instance, to explain the
mechanical features of the Solar System, Newton initially neglected the
inter-planetary gravitational forces. In particular, his initial formulation of
Kepler’s planetary hypothesis includes the negligibility assumption that the
actions of the planets one upon another are so small that they can be
neglected. Later on, once astronomical observations became more refined,
Newton’s negligibility assumptions turned into heuristic ones regarding
inter-planetary gravitational forces and, ultimately, those assumptions were systematically developed in his theory of perturbations.
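The change in the status of the assumption can be displayed schematically (a modern rendering that treats the Sun as fixed, itself a further idealization; this is not Newton’s own notation). Under the negligibility assumption each planet obeys a pure Kepler-type equation, while the later heuristic treatment reintroduces the neglected inter-planetary attractions as perturbing terms:

$$ \ddot{\mathbf r}_i = -\,\frac{G M_\odot}{|\mathbf r_i|^{3}}\,\mathbf r_i \qquad\longrightarrow\qquad \ddot{\mathbf r}_i = -\,\frac{G M_\odot}{|\mathbf r_i|^{3}}\,\mathbf r_i + \sum_{j \neq i} \frac{G m_j}{|\mathbf r_j - \mathbf r_i|^{3}}\,(\mathbf r_j - \mathbf r_i), $$

where $\mathbf r_i$ is the position of planet $i$ relative to the Sun and $m_j$ are the masses of the other planets.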
Friedman and
Musgrave clearly agree that ideal assumptions play an important role in scientific theorizing, but they disagree on how to understand their empirical significance. The method of successive approximation that Musgrave regards as characteristic of heuristic assumptions involves a constant evolution towards
“realism” (or descriptive accuracy) that clashes with the picture emerging from
Friedman’s account, where the lack of realism is a feature preserved by
assumptions repeatedly subject to the test of prediction. In fact, even if both
authors invoke prediction as a key evaluative means for assumptions, Friedman,
as opposed to Musgrave, acknowledges no progression towards descriptive
accuracy as the result of systematically applying the predictive test. Next I
will explore some limitations and challenges affecting both approaches.
5. Challenges to Friedman’s and Musgrave’s views
Let us
examine some relevant contributions to the debate after Musgrave’s paper, many
of them pointing to difficulties shared by Friedman’s and Musgrave’s views. In
particular, Tony Lawson has questioned the gold standard of derivational
methods, whether applied to auxiliary assumptions directly (as suggested by
Musgrave) or indirectly (as advocated by Friedman). On the other hand, the main
issue raised by Uskali Mäki’s account of idealizations as theoretical
isolations conflicts with Friedman’s and Musgrave’s views in different ways,
for it entails a vindication of idealizations even when no predictive test is
applicable. Finally, despite the fact that Cartwright has to some extent
endorsed Friedman’s view, especially when arguing that in order to be
explanatory and predictively fruitful, theories must lie, she has raised some
important objections to the use of ideal assumptions in economics, a use that
would systematically preclude external validity.
5.1 Lawson’s objection to the limitations of the
derivational approach to empirical significance
Let us focus
on the heuristic use of auxiliary assumptions and suppose that economic
theories do pass the predictive test, thus enabling us to derive empirical consequences.
It could then be argued, à la
Friedman, that the empirical significance of auxiliary assumptions and of
idealizations in general can only be assessed by evaluating the overall
explanatory/predictive power of the theory including such idealizations. This
derivational view of empirical significance, often associated with the idea
that simplifying and fictionalizing are the cornerstones of scientific
explanation, has been vigorously criticized by Lawson, who provides the
following illustrative example:
It may be
true that ‘all polar bears are white’. But if this apparent truth is
deductively generated from the assumptions that ‘all polar bears eat snow’ and
‘all snow-eaters are white’, we have added nothing to our understanding of
polar bears, snow or whiteness; and nor have we provided explanatory support
for the proposition that ‘all polar bears are white’. All deductive exercises
that are so based on known absurd fictions, and this inevitably includes almost
all mathematical modelling exercises in modern economics, are just as
pointless.[22]
The use of ideal assumptions in economics would be hampered by the discipline’s peculiar use of mathematics, which, according to Lawson, is too influenced by Hilbert’s
reconsideration of math as concerned with “providing a pool of frameworks for
possible realities”, rather than being regarded as the language of nature. As
shown in the example above, absurd fictions may play a role in inferring true
empirical consequences from a theory, thereby fulfilling Friedman’s requirement
for empirical significance. Yet, their empirical contribution would have more
to do with the triviality of the empirical features they are associated with
than with their correspondence with relevant hidden features of the real events
under study. Ultimately, Lawson’s overall criticism of traditional economics is
related to the mismatch between the method of isolation, atomization and
mathematical modelling, on the one hand, and conditions of application (open
systems marked by internal-relations, process, emergent totalities, meaning,
value) on the other.[23] He
calls for dialectical methods such as contrast explanation, more sensitive to
the ontological complexities of the social domain and conducive to an
evaluation of assumptions based on their contribution to understanding real
events rather than to predicting some trivial facts. In contrast explanations, the goal is to explain unexpected differences in outcomes, i.e., to explain why, in outcomes assumed to share the same causal history (and thus to be the same), we find a surprising difference. This kind of explanation should answer questions of the form “why x rather than y?”, for example, why is unemployment falling everywhere in a region except in one area?[24] A
key advantage of contrast explanations would be that they can be equated to
experiments occurring outside the laboratory, as they enable us to standardize
for all causal factors except one over a particular domain, hence allowing for
causal explanation without artificial simplification.[25]
5.2 Mäki’s vindication of idealizations as
theoretical isolations
A central idea underlying Mäki’s
approach is that the highly complex and intertwined nature of social
interactions requires their theoretical decomposition by means of idealizing
assumptions, whose purpose consists in isolating causal variables, often by
making false simplifying assumptions.[26] Only by endorsing literally
false assumptions regarding some complex domains would we be able to gain
access to (isolate) some simple hidden truths about the causal connections
operating in them. The explanatory requirement of theoretical isolation would
then justify the methodological use of non-transient (pace Musgrave) and non-predictive (pace Friedman) idealizations. However, as suggested by Mäki, false
assumptions are often kept even though they do not contribute to the isolation
of any real causal variable, thereby losing or betraying their purpose. This
inadequate use of idealizing assumptions results in a lack of connection
between them and real systems (or, in Mäki’s terms, in using ‘substitute models’ as if they were ‘surrogate models’), and
even in imposing isolations precluded in real systems.[27] For example, excluding the
role of institutions when representing economic systems could dramatically
limit the explanatory capacity of the corresponding representation. By imposing
isolations precluded in real systems, theoretical models in economics may end
up devoid of empirical and explanatory significance.
While recognizing the three
different roles that, according to Musgrave, auxiliary assumptions may play,[28]
Mäki vindicates the methodological role of false assumptions in the form of
idealizations, and not merely as a transient heuristic device to be discarded
in the future. In order to identify and represent causal connections, we would
need to find a way to isolate those causal links from (usually) a highly
complex, open and uncertain range of interfering variables. Even if no predictive power is gained by employing idealizations, explanatory power would still require them. In Mäki’s view, idealizations are justified,
neither on the basis of predictive effectiveness (contrary to Friedman), nor as
gradual approximations (contrary to Musgrave), but as devices to uncover some
hidden truths about domains whose complexity precludes the chances of
generating predictions. Even if ideal assumptions are often not intended as
empirical approximations,[29] they
manage to uncover real, identifiable tendencies or causal connections existing
beneath the surface of interfering factors.
Let us get a clearer view of how
idealizations should work according to Mäki by considering one of his own
examples.[30] In
vindicating the Galilean kind of idealizations in economics, he compares
Galileo’s idealizations supplementing mechanical laws to idealizations employed
in von Thünen’s model of agricultural land use in the Isolated State. This
model successfully isolates distance (or the associated transportation cost) as
the major causal factor that shapes land use patterns in agriculture, leaving
aside a wide variety of heterogeneous interfering factors, like the proximity of other cities, the size of the city, or geographical features such as mountains or rivers, and assuming uniform fertility and climate, no external trade, and so on. The derivational or predictive approach to the evaluation of ideal assumptions would not be applicable in cases like the above. The inapplicability of such an approach is here not related to the Duhem-Quine problem (for even a complete set of theories and hypotheses would be affected by exceptions and provisos in their application), but to the very nature and role of ideal assumptions, essentially consisting in the isolation of variables in the context of discovery.[31]
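The isolating work of these assumptions can be seen in the model’s central relation, given here in a standard textbook rendering (not drawn from Mäki’s text): once fertility, climate and transport conditions are assumed uniform, the locational rent $R$ of a plot depends on distance alone,

$$ R = Y\,(p - c) - Y\,F\,d, $$

where $Y$ is the yield per unit of land, $p$ the market price and $c$ the production cost per unit of product, $F$ the freight rate per unit of product and distance, and $d$ the distance to the market town; the linear decline of $R$ with $d$ is what generates the concentric pattern of land use.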
Now, what
happens when neither successful predictions, nor successful explanations are
obtained despite the massive use of idealizations? What is the justification
supporting the use of idealizations in those cases and, therefore, on what
grounds can they be kept as valid research devices? Mäki warns against the risk
that mere tractability (or heuristic) assumptions overrule meaningful
idealizations, giving rise to ontologically ungrounded idealizations. The risk
of arriving at ungrounded idealizations is stressed in the following quote:
Just as
biologists will fail in representing a system such as the human organism if
they consistently exclude the brain or the heart from their theory, economists
might fail in representing an economic system for certain explanatory purposes
—such as for explaining the performance of a developing economy— if the
isolations they employ exclude the role of institutions.[32]
In order to reverse the tendency towards ontologically vacuous idealizations, Mäki suggests some replacements of assumptions in economic theory. Some of the replacements
that he advocates entail moving from assuming symmetric information to assuming
asymmetric information, from zero to positive transaction costs, from certainty
to uncertainty in decision making, from unbounded to bounded rationality, from
maximization to satisficing, from asocial and amoral agents to ones with social
and moral awareness; and so on.[33]
5.3 Cartwright’s tradeoff between internal and
external validity
Despite being sympathetic to Friedman’s vindication of ideal assumptions,[34] Cartwright thinks that an inadequate use of idealizations is often made in economics, where false assumptions are kept even if they do not play the important methodological role that Galilean assumptions would play in physics.[35] In particular, they often do not enable interesting experimentation, ‘interesting’ in the sense of allowing for successful causal inferences. On the contrary, unreal assumptions
would become a mere device for purposes of deriving consequences from a theory,
whether or not such consequences can be tested in a way that guarantees the
generalizability of the results to the target domain. Contrary to Galilean
assumptions, these inadequate assumptions would overconstrain the applicability
of the theory and, thus, the experimental conditions needed for its testing.
She mentions several examples of the overconstrained nature of economic models (which would actually compensate for their meager number of general theoretical principles), among them Lucas’s models from his 1973 “Expectations and the Neutrality of Money” and Pissarides’s skill-loss model, which would contain around sixteen assumptions.[36] As in other cases, the problem would be that the theories lack, or hardly have, bridge principles, that is, principles that provide links between the
theoretical concepts and the empirical concepts. In the ideal gas theory, for
example, we would find the bridge principle identifying the theoretical concept
of mean kinetic energy of the molecules with the empirical concept of
temperature. This sort of principle establishes a correspondence between
theoretical constructs and empirical phenomena providing some grounds for
justifying our belief in the correspondence between theoretical constructs and
real entities. What we find in economics, when bridge principles are missing,
is a proliferation of auxiliary assumptions meant to fill the gap between
general theoretical postulates and their concrete applications. This
proliferation is far from serving purposes of theoretical isolation or empirical approximation, both extremely useful in making good experimentation possible, ‘good’, again, in the sense of favoring both the internal and the external validity of the experiment. The overconstrained nature of economic models, on the contrary, only makes it possible, at best, to maximize the internal validity of the experiment, that is, the evidence that the covariation between the
presumed independent and dependent variables results from a causal
relationship. External validity would be systematically precluded by the very
overconstraining nature of assumptions. As a consequence, the predictive power
of a theory may be at odds with its scope of application and, therefore, with
the external validity of experiments testing the theory conjoined with the
assumptions, provided that external validity requires the generalizability of results obtained in a research setting to phenomena outside that setting. According to Cartwright, the problem of the overconstraining nature of assumptions also affects experimentation through randomized controlled trials, whose deductive nature, in combination with the inclusion of overconstraining assumptions, inevitably results in narrowness of scope.[37]
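To fix ideas about what a bridge principle looks like, the ideal-gas case mentioned above can be written in its usual textbook form (a standard statement of the principle, not a quotation from Cartwright):

$$ \langle E_{\mathrm{kin}} \rangle = \tfrac{3}{2}\,k_{B}\,T, $$

which identifies the theoretical quantity (the mean translational kinetic energy per molecule) with the empirically measurable temperature $T$ via Boltzmann’s constant $k_{B}$; it is this kind of link that, on Cartwright’s diagnosis, economic models typically lack.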
6. Facing the methodological challenges
After recalling the main
contributions to the debate on ideal assumptions, a question remains as to how
the empirical significance of heuristic or ideal assumptions can be evaluated
if not merely by derivational methods. Addressing this question implies going
back to the issue of how it is possible to attain independent empirical support
for idealizations and what alternative assumptions should be considered as empirically
more significant than the prevalent ones. As pointed out earlier, Lawson and Mäki have suggested some dialectical methods to deal with the second issue; let us now address the first issue by considering some other relevant contributions.
The vast literature on idealization from the Poznań School of Methodology and, more particularly, Igor Hanzel’s bi-directional method of empirical approximation provide some interesting clues. Leszek Nowak’s (1943-2009) foundational ideas on the idealizational nature of scientific models, further developed by the Poznań School, emphasize the contrast between generalization or abstraction in the Aristotelian sense and idealization, the latter entailing a deletion and/or deformation of properties conducive to the creation of ideal (not real) objects. Following Nowak’s ideas,
Giacomo Borbone and Krzysztof Brzechczyn take the combination of systematic
idealization and concretization to be the main mechanism underlying mature
science, scientific modelling or the very possibility of bridging the gap
between essence and appearance.[38]
The dynamics of mature science would involve three stages: the introduction of
ideal assumptions, the formulation of ideal laws and the gradual concretization
of the laws to the point where completely factual laws, free from ideal assumptions, are obtained.[39]
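In the schematic notation commonly used in this tradition (a sketch based on the standard presentation of Nowak’s idealizational statements, not a quotation from Borbone and Brzechczyn), an idealizational law and its first concretization take the form

$$ T^{k}:\ \text{if } G(x) \wedge p_1(x)=0 \wedge \dots \wedge p_k(x)=0, \text{ then } F(x) = f_k(H(x)), $$
$$ T^{k-1}:\ \text{if } G(x) \wedge p_1(x)=0 \wedge \dots \wedge p_{k-1}(x)=0 \wedge p_k(x) \neq 0, \text{ then } F(x) = f_{k-1}(H(x), p_k(x)), $$

where the conditions $p_i(x)=0$ are idealizing assumptions; the fully factual law $T^{0}$ is reached once all of them have been removed.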
Interestingly,
the idealization-concretization mechanism involves more than the mere derivational evaluation of idealizations, since concretization, or the possibility of de-idealizing assumptions, is a precondition for prediction and can operate in different directions. In a 2016 paper,[40] Igor Hanzel questions the usual reading of Newton’s second law and draws attention to the bi-directionality of the method favored by Newton. According to Hanzel, mass and acceleration are not the main factors (grounds) determining the phenomenal effect to be equated with the force. Rather, force would be the main factor causing the phenomenal effect of acceleration in bodies with a certain mass. In Newton’s bi-directional method, one goes from the effects of forces to forces, and from forces to their effects. Before the formulation of laws makes it possible to go from force (as cause) to some of its effects (change of motion over time), some definitions are established so that force can be determined on the basis of some of its attributable effects (change of state of a body, proportionality between the magnitude of the generated force and that of the generated motion).
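A minimal way to display the two directions at issue (my own schematic, not Hanzel’s notation) is

$$ \text{from effects to force (definitional): } \mathbf F := m\,\mathbf a \qquad\qquad \text{from force to effects (predictive): } \mathbf a = \frac{\mathbf F}{m}, $$

that is, observed changes of motion first serve to determine and define the force, and only then does the law allow an independently specified force to be used to predict further changes of motion.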
Hanzel
emphasizes the relevance of the distinction between two kinds of phenomenal
effects (or conditions of modification of the ground): a) forms of appearance
of the ground (main explanatory cause), which would be made explicit in
definitions; and b) forms of manifestation of the ground, which would be made
explicit in laws.[41]
According to him, the empirical approximation in terms of “forward”
concretization (thus in the direction from laws to applications, or from causes
to conditions where concrete effects can be identified as manifestations of the
causes) should be supplemented by an empirical approximation in terms of
“backward” concretization (in the direction from appearances of the causes to
the definitions of the causes). Note that the appearances of the causes can be either effects other than the manifestations of the causes, or (observable)
causes of such causes. Given that ‘ground’ is here understood as the main factor that plays an especially relevant explanatory role, Hanzel’s argument implies that grounds should be empirically supported by both kinds of effects, which, if determinable in quantitative terms, would provide an immanent (law-dependent) and an external (law-independent) measure respectively. As Hanzel
points out, contrary to Newton’s second law, Marx’s law of value is not only
explicit about the forms of manifestation of the ground but also about the
forms of appearance of the ground, which in this case are not certain effects but the phenomenal causes of the ground: in particular, the amount of time involved in producing a good would be the cause of its value. Value, in turn, would interact with the value of other products, thereby causing the phenomenal effect of price.[42]
Hanzel’s bi-directional method of empirical approximation implicitly amounts to recognizing the importance not only of predictions (derivations from laws, or “forward” concretization) but also of prior evidence (the basis for definitions, or “backward” concretization). The first kind of approximation essentially involves deductive inference leading to predictions; the second, by contrast, operates through abduction, resulting in definitions intended to best explain some salient empirical features of the domain under study. The need to evaluate theoretical concepts (as something different from, and more basic than, the evaluation of theoretical laws) and the related resort to abductive inference have only very recently been acknowledged in economic methodology. These, however, are important aspects of the non-derivational, not merely deductive view of the empirical evaluation of ideal assumptions. As recently argued by James J. Heckman and Burton Singer, the rigid separation of the processes of model generation and model testing, despite its analytical convenience, is artificial and misleading in different ways.[43] In invoking abduction in economics, they reach a conclusion in agreement with the view suggested here, namely, the insufficiency of the predictive test à la Friedman to evaluate assumptions:
This approach addresses the problem of using the same data to formulate
and test hypotheses. Analysts are advised to test provisional models on fresh
data, possibly of a different character than the data used to formulate initial
hypotheses, and to draw new testable implications from hypotheses that survive
an initial stage of scrutiny.[44]
If we apply these ideas to our subject, it becomes clear that the very generation of ideal assumptions needs, indeed, to be justified, and obviously such justification cannot be obtained through predictions inferred from already generated, accepted assumptions. Yet there is no reference to generation requirements in Friedman’s discussion, and no elaboration on the problem of choosing or accepting certain concepts instead of others. On the other hand, the method Mäki advocates for deciding on the replacement of assumptions does include a combined process of de-idealization and re-isolation much in tune with the above-mentioned generative purposes and with the iterative bi-directional method of concretization and idealization put forward by Hanzel.[45]
7. Concluding
remarks
The
different approaches to auxiliary assumptions discussed in the previous
sections have shed light on the different roles of idealization. The main roles
of empirical approximation and theoretical isolation are neither always
simultaneously attainable, nor always evaluable by the same means. Friedman’s test of prediction for auxiliary assumptions, as well as the rejection of both descriptive accuracy and independent testability associated with it, faces serious limitations and leaves the expected correspondence between assumptions and reality unexplained.
There have been important contributions towards overcoming both the failure to distinguish between different kinds of auxiliary assumptions and the limits of the predictive or derivational account of empirical significance. Dialectical methods and bi-directional empirical approximation represent two promising avenues to explore in the future.
References
Borbone, Giacomo & Brzechczyn,
Krzysztof, “The Role of Models in
Science: An Introduction”, in Idealization XIV: Models in Science,
Poznań Studies in the Philosophy of the Sciences and the Humanities, Volume
108, ed. Giacomo Borbone & Krzysztof Brzechczyn (Boston: Brill/Rodopi,
2016), 1-10.
Cartwright, Nancy, “Are RCTs the
Gold Standard?” BioSocieties (Special
Issue: The Construction and Governance of Randomised Controlled Trials) 2/1,
March (2007a): 11-20.
Cartwright, Nancy, “The Vanity of
Rigour in Economics: Theoretical Models and Galilean Experiments”, in Hunting Causes and Using Them: Approaches in
Philosophy and Economics, by: Nancy Cartwright (Cambridge, New York:
Cambridge University Press, 2007b), 217-261.
Duhem, Pierre, The Aim and
Structure of Physical Theory (Princeton (NJ): Princeton University Press,
1906/1991).
Friedman, Milton, "The
Methodology of Positive Economics", in Essays
in Positive Economics, by Milton Friedman (Chicago: University of Chicago
Press, 1953/1966), 3-16, 30-43.
Hanzel, Igor, “The Inherent Type
of Scientific Law, The Idealized Types of Scientific Law”, in Idealization
XIV: Models in Science, Poznań Studies in the Philosophy of the Sciences and
the Humanities, Volume 108, ed. Giacomo Borbone & Krzysztof Brzechczyn
(Boston: Brill/Rodopi, 2016), 43-62.
Heckman,
James J. & Singer, Burton, “Abducting Economics”, American Economic Review: Papers & Proceedings, 107/5 (2017):
298–302.
Lakatos,
Imre, The Methodology of Scientific Research Programmes (Cambridge: Cambridge University
Press, 1978).
Lawson, Tony, “Applied Economics,
Contrast Explanation and Asymmetric Explanation”, Cambridge Journal of
Economics, 33/4 (2009): 405–19.
Lawson, Tony, “Central Fallacies
of Modern Economics”, in Economic Objects
and the Objects of Economics. Virtues and Economics, vol. 3, ed. Peter Róna
& László Zsolnai (Cham: Springer, 2018), 51-68.
Mäki, Uskali & Piimies,
Jukka-Pekka, “Ceteris paribus”, in The Handbook of Economic Methodology,
ed. Davis, John B., Hands, D. Wade & Mäki, Uskali (Edward Elgar,
Cheltenham, 1998), 55-59.
Mäki, Uskali, “Kinds of
Assumptions and Their Truth: Shaking an Untwisted F-Twist”, Kyklos, 53/3
(2000): 303-322.
Mäki, Uskali,
“Ceteris Paribus: Interpretaciones e Implicaciones”, Revista Asturiana de Economía,
28 (2003): 7-32.
Mäki, Uskali, “Realistic Realism
about Unrealistic Models”, in The Oxford Handbook of Philosophy of Economics,
ed. Harold Kincaid & Don Ross (Oxford: Oxford University Press, 2009),
68-98.
Musgrave, Alan, “’Unreal
Assumptions’ in Economic Theory: The F-Twist Untwisted”, Kyklos, 34/3
(1981): 377-87.
Nagel, Ernest, “Assumptions in
Economic Theory”, The American Economic
Review, 53/2, May (1963): 211-219.
Popper, Karl, The Logic of Scientific Discovery (London: Routledge, 1935/2002).
Popper, Karl, Conjectures and
Refutations: The Growth of Scientific Knowledge (London: Routledge, 1963).
Quine,
Willard Van Orman, “Two Dogmas of Empiricism”, in From a Logical Point of
View, by Willard Van Orman Quine (Cambridge, Massachusetts: Harvard
University Press, 1951/1953), 20-46.
The author is Associate Professor (Profesora Titular) of Philosophy of Science at the Universidad de Valladolid (Spain). Her main areas of interest are general philosophy of science, methodology of science, philosophy of language and epistemology. Her research has focused on issues at the boundaries between these fields, such as incommensurability, experimental validity, theory evaluation and intertheoretic relations.
Received: December 15, 2019.
Approved for publication: January 10, 2020.
* I am thankful to Valeriano Iranzo and other members of the Valencia Philosophy
Lab for valuable feedback on an earlier version of this work. This research was
financially supported by the research projects “Laws and Models in Physical,
Chemical, Biological, and Social Sciences” (PICT-2018-03454, ANPCyT,
Argentina), and “Stochastic Representations in the Natural Sciences: Conceptual
Foundations and Applications (STOCREP)” (PGC2018-099423-B-I00, Spanish Ministry
of Science, Innovation and Universities).
[1]) Igor Hanzel, “The Inherent Type of
Scientific Law, The Idealized Types of Scientific Law”, in Idealization XIV:
Models in Science, Poznań Studies in the Philosophy of the Sciences and the
Humanities, Volume 108, ed. Giacomo Borbone & Krzysztof Brzechczyn
(Boston: Brill/Rodopi, 2016), 43-62.
[2]) See, respectively, Tony Lawson,
“Applied Economics, Contrast Explanation and Asymmetric Explanation”, Cambridge
Journal of Economics, 33/4 (2009): 405–19, “Central Fallacies of Modern
Economics”, in Economic Objects and the
Objects of Economics. Virtues and Economics, vol. 3, ed. Peter Róna &
László Zsolnai (Cham: Springer, 2018), 51-68, and Uskali Mäki, “Realistic
Realism about Unrealistic Models”, in The Oxford Handbook of Philosophy of
Economics, ed. Harold Kincaid & Don Ross (Oxford: Oxford University
Press, 2009), 68-98.
[3]) Pierre Duhem, The Aim and
Structure of Physical Theory (Princeton (NJ): Princeton University Press,
1906/1991).
[4]) Ibid., 187.
[5]) Willard Van Orman Quine, “Two
Dogmas of Empiricism”, in From a Logical Point of View, by Willard Van
Orman Quine (Cambridge, Massachusetts: Harvard University Press, 1951/1953),
20-46.
[6]) Karl Popper, Conjectures and
Refutations: The Growth of Scientific Knowledge (London: Routledge, 1963),
322-325.
[7]) Karl Popper, The Logic of Scientific Discovery (London: Routledge, 1935/2002),
19-20, 59-61.
[8]) Duhem, The Aim and Structure of
Physical Theory, Popper, The Logic of Scientific Discovery, Popper, Conjectures and Refutations,
Quine, “Two Dogmas of Empiricism”, Imre Lakatos, The
Methodology of Scientific Research Programmes (Cambridge: Cambridge
University Press, 1978).
[9]) Ernest Nagel, “Assumptions in
Economic Theory”, The American Economic
Review, 53/2, May (1963): 211-219, Alan Musgrave, “’Unreal Assumptions’ in
Economic Theory: The F-Twist Untwisted”, Kyklos, 34/3 (1981): 377-87,
Mäki, Uskali & Piimies, Jukka-Pekka, “Ceteris paribus”, in The Handbook
of Economic Methodology, ed. Davis, John B., Hands, D. Wade & Mäki,
Uskali (Edward Elgar, Cheltenham, 1998), 55-59, Uskali Mäki, “Kinds of
Assumptions and Their Truth: Shaking an Untwisted F-Twist”, Kyklos, 53/3
(2000): 303-322, Uskali Mäki, “Ceteris Paribus: Interpretaciones e
Implicaciones”, Revista Asturiana de Economía, 28 (2003): 7-32, Nancy
Cartwright, “Are RCTs the Gold Standard?”, BioSocieties
(Special Issue: The Construction and Governance of Randomised Controlled
Trials) 2/1, March (2007a): 11-20, Nancy Cartwright, “The Vanity of Rigour in
Economics: Theoretical Models and Galilean Experiments”, in Hunting Causes and Using Them: Approaches in
Philosophy and Economics, by: Nancy Cartwright (Cambridge; New York:
Cambridge University Press, 2007b), 217-261.
[10]) Milton Friedman, "The
Methodology of Positive Economics", in Essays
in Positive Economics, by Milton Friedman (Chicago: University of Chicago
Press, 1953/1966), 3-16, 30-43, 14.
[11]) Musgrave, “’Unreal Assumptions’ in
Economic Theory: The F-Twist Untwisted”, 382.
[12]) Friedman, "The Methodology of
Positive Economics", 32.
[13]) Ibid., 8.
[14]) Ibid., 15.
[15]) Ibid., 32-33.
[16]) Ibid., 36.
[17]) Ibid., 36-37.
[18]) Musgrave, “’Unreal Assumptions’ in
Economic Theory: The F-Twist Untwisted”, 385-6.
[19]) Musgrave, “’Unreal Assumptions’ in
Economic Theory: The F-Twist Untwisted”.
[20]) Nagel, “Assumptions in Economic
Theory”, 215-17.
[21]) Ibid., 217-18.
[22]) Lawson, “Central Fallacies of
Modern Economics”, 62.
[23]) Ibid., 62-63.
[24]) Lawson, “Applied Economics,
Contrast Explanation and Asymmetric Explanation”, 408.
[25]) Ibid., 409.
[26]) Mäki, “Realistic Realism about
Unrealistic Models”, 78.
[27]) Ibid., 85.
[28]) Mäki & Piimies, “Ceteris paribus”,
Mäki, “Kinds of Assumptions and Their Truth: Shaking an Untwisted F-Twist”.
[29]) Mäki, “Ceteris Paribus: Interpretaciones e Implicaciones”, 21.
[30]) Mäki, “Realistic Realism about
Unrealistic Models”, 78-80.
[31]) Mäki, “Ceteris Paribus: Interpretaciones e Implicaciones”,
25-26.
[32]) Mäki, “Realistic Realism about
Unrealistic Models”, 85.
[33]) Ibid., Mäki, “Ceteris Paribus: Interpretaciones e
Implicaciones”.
[34]) Cartwright, “The Vanity of Rigour
in Economics: Theoretical Models and Galilean Experiments”, 217.
[35]) Ibid., 226.
[36]) Ibid., 227-8.
[37]) Cartwright, “Are RCTs the Gold
Standard?”
[38]) Borbone & Brzechczyn, “The
Role of Models in Science: An Introduction”, in Idealization XIV: Models in
Science, Poznań Studies in the Philosophy of the Sciences and the Humanities,
Volume 108, ed. Giacomo Borbone & Krzysztof Brzechczyn (Boston:
Brill/Rodopi, 2016), 1-10, 2.
[39]) Ibid., 4.
[40]) Hanzel, “The Inherent Type of
Scientific Law, The Idealized Types of Scientific Law”.
[41]) Ibid., 49, 56.
[42]) Ibid., 51.
[43]) James J. Heckman & Burton
Singer, “Abducting Economics”, American
Economic Review: Papers & Proceedings, 107/5 (2017): 298–302.
[44]) Ibid., 301.
[45]) Mäki, “Realistic Realism about
Unrealistic Models”.