Laurens Vanderstraeten
In the first decade of the twentieth century Ernst Cassirer wrote Substance and Function, a
book with the bold ambition of philosophically analyzing the complete conceptual status of
theoretical physics. Of course, around that time physics was far less diversified than it is now,
so that a philosopher such as Cassirer could still have a good overview of all important
developments. Moreover, physics itself was much more strongly connected to the philosophical
literature of its time, so that the bridge between the two could be crossed more easily. Nowadays,
these two conditions for a fruitful interplay between philosophy and physics are no longer met.
Physics has become too diverse to survey its structure in the way Cassirer did, and physicists
are too immersed in the problems of physics to relate to the philosophical literature. As a result,
philosophy of physics has become a discipline that works in the margins of both physics and
philosophy, without the hope of appealing to an audience outside its niche.
One could argue that this is the rightful place for the philosophy of physics, because physics
has become a scientific discipline for which all rules of the game have been decided a long time
ago – around the time of Cassirer, I suppose. In fact, there have been many moments when this
seemed to me the only possible conclusion. Yet writing this thesis requires the hypothesis that
something interesting can still be said, and, in the end, I have come to believe this to be true. I believe
that the philosophy of physics can still prove its worth in the future for philosophy and physics
alike, and I believe that the tools of Cassirer can be a great inspiration. I hope that this thesis
might show a glimpse of what is possible in that respect.
I am indebted to Maarten Van Dyck for pushing me in the neo-Kantian direction; any failure
to carry through his original suggestions is my own. I also want to thank him for the
patience he has shown during the five years it took me to write this thesis. I am grateful to
Matthias Bal for reading this thesis and teaching me about renormalization.
Laurens Vanderstraeten
Gent, May 28, 2017
Contents
1 Overview
Bibliography
Chapter 1
Overview
In this thesis we will give a transcendental account of the theory of renormalization, one of the
cornerstones of contemporary theoretical physics. This account is based on the work of Ernst
Cassirer, which is the subject of the first chapter. The second chapter contains the application
to the theory of renormalization. Let us give a short overview.
physics, and therefore constitutes an interesting case for the transcendental approach that cap-
tures the conceptual structure of theoretical physics, in contrast to metaphysical or ontological
approaches.
We explain the historical development of renormalization theory from the early ideas of Landau,
through the seminal insights of Wilson, to the place it has obtained in modern condensed-matter
and high-energy physics. In the next section, we focus on the philosophical literature
concerning renormalization, and we try to indicate where our approach diverges from the exist-
ing frameworks. In particular, we aim to show that existing approaches (i) focus too much
on ontological commitments that cannot be found in the physics itself, and (ii) fail to integrate
the crucial lessons from renormalization in a comprehensive philosophical framework. This is
argued for by going back to the original writings of Anderson and others, which have put the
idea of emergence on the map.
In the last section, we try to give our own account of renormalization in the spirit of Cassirer.
To that end, we briefly revisit Cassirer’s account of the nineteenth-century concept of energy. We
lay bare the constitutive functions of the use of effective degrees of freedom and the concept of a
scale transformation. We then show that computational approaches in theoretical physics, though
seemingly unrelated, can be integrated nicely into our framework; this shows that computational
physics deserves more attention from epistemologists than it commonly receives. Next, we
identify the advent of the ideas of renormalization and emergence as installing a new ideal of
unification, similarly to the way in which Cassirer understood the energy concept. Finally, we
take up the critique of Friedman as we have left it in the previous chapter, and show that
Cassirer’s conception is both richer and more limited than Friedman’s in capturing the case of
renormalization.
Chapter 2

Ernst Cassirer and the philosophy of physics
We start this chapter with a quote of Ernst Cassirer that appears towards the end of Substance
and Function, a quote that nicely summarizes what, for Cassirer, is at stake in the philosophy
of physics:
He who grants science the right to speak of objects and of the causal relations of
objects, has thereby already left the circle of the immanent being and gone over into
the realm “transcendence”. [1, p.295]
Indeed, theoretical physics is taken to yield knowledge about nature that should be, in some
sense, objective and independent of the particularities of the working physicist, of any place or
time, and even of man himself. If we grant physics this claim to objectivity, the philosophical question
opens up as to how this is possible: How does physics arrive at an objective and determinate
picture of the world? How do we leave the circle of immanent being – the chaos of sense
impressions and subjective states of our ego – and transcend towards the scientific picture of
the external world?
In this chapter we investigate Cassirer’s philosophy of physics as it was formulated in his
book Substance and Function. Written in 1910, the book analyzes the structure of theoretical
physics as it stood around the turn of the century, so the first section of
this chapter consists of a brief sketch of nineteenth-century physics and, in particular, the
conceptual difficulties that faced physicists around that time [Sec. 2.1]. We go on by placing the
philosophical roots of Cassirer in neo-Kantianism [Sec. 2.2], and then discuss the argumentation
of Substance and Function in some detail [Sec. 2.3]. In the next section, we briefly go over the later
work of Cassirer on modern physics [Sec. 2.4]. In the following section, we make the jump to
contemporary philosophy of physics and, in particular, the work of Michael Friedman [Sec. 2.5].
The discussion of the work of Friedman will show how Cassirer can still prove to be relevant
today. We conclude the chapter by summarizing what we, based on the work of Cassirer and
Friedman, believe contemporary philosophy of physics should consist of.
Chapter 2. Ernst Cassirer and the philosophy of physics
In this chapter we would like to start from a different view on this period. Although relativity
theory and quantum mechanics unmistakably reshuffled many nineteenth-century beliefs about our
picture of reality, and some fin-de-siècle physicists were overly optimistic about the approaching
completeness of physics [2], it is nonetheless clear that “classical physics” had its own
share of conceptual innovations. Moreover – and it is on this aspect that we wish to focus – from
the epistemological point of view, these innovations spurred lively debates on the status of
physical knowledge and the physicist’s picture of reality. [3]
Before 1800, exact mathematical laws were exclusively used for mechanical phenomena within
the Newtonian framework, whereas heat, electricity, and optics were described rather qualita-
tively. In the nineteenth century this situation changed: On the one hand, the work of e.g.
Laplace and Fourier on heat brought non-mechanical phenomena under the scope of mathemati-
cal analysis, while, on the other, the works of e.g. Fresnel on the wave nature of light and of Joule
on the conversion of mechanical and thermal energy showed how optical and thermal processes
are intimately connected with mechanics. Behind these developments, we can characterize the
goal of nineteenth-century physics as the search for unification of different fields of physics under
the strict validity of mathematical laws. At first, the unifying principles were mostly thought of
in mechanical terms, but later on world-views based on electromagnetism or energetics became
viable options as well. [3]
A paradigmatic example of this unification of physical phenomena guided by mathemati-
cal laws is Maxwell’s formulation of the laws of electrodynamics, which instantly brought the
fields of optics and electromagnetism into one mathematical theory. In order to consistently
interpret the wave equations, Maxwell felt forced to introduce mechanical models underlying
the electromagnetic fields – the fields are identified with the motion or rotation of a mechanical
ether; without this mechanical basis, the mathematical framework would not be intelligible. Still,
Maxwell always refused to interpret the mechanical models as an explanation for the electromag-
netic equations, and instead stressed the hypothetical status of these models, serving more as an
analogy or illustration than as a description of physical reality. Indeed, in later works Maxwell
introduced the field equations in a purely mathematical manner, without a specific mechanical
model, but retained the idea that mathematical physics should still keep dynamical concepts in
mind as these are appropriate to the representation of physical reality. [4]
Maxwell’s struggles with the status of mechanical models reflect the dominant program
of nineteenth-century physics, where all physical phenomena were explained by the structure
and laws of motion of a mechanical system. Yet, the dominance of this mechanical ontology
did not imply that mechanical models were to be interpreted literally as representations of
physical reality, but mostly should serve as hypothetical constructions that elucidate the physical
meaning of the mathematical laws. As with the later work of Maxwell, it could even be enough
to formulate laws within the framework of Lagrangian dynamics, where these laws were still
subsumed under the principles of mechanical phenomena, but speculation of a mechanical nature
was avoided altogether. [3]
environment. In The Science of Mechanics, his famous book on the historical development of
mechanics [6], Mach illustrates in detail how “the task of scientific knowledge now appears as:
the adaptation of ideas to facts and the adaptation of ideas to one another” [7, p.31]. This
implies that there is a continuous transition from man’s pre-scientific coping with his natural
environment to scientific theorizing:
The attitudes and humble everyday skills of the artisan change imperceptibly into
the attitudes and devices of the physicist; and economy of action develops gradually
into the intellectual economy of the scientist, which can also play its part in the
pursuit of purely ideal goals. [7, p.33]
Mach is especially wary of metaphysical speculations in scientific theories; every element of a
scientific theory should in the end be brought back to sensations or perceptions. These sensations
are no simple uninterpreted sense-data, but are determined as the “final link in a chain reaching
from the environment to the central organ of sense” [7, p.39]. In fact, it is one of the goals of
science to show the complex relation between sensations and the sense organs, a relation that is
in no way to be interpreted within a “naïve-realistic view of the world” [7, p.38].
On the other side of the debate we have Planck, who characterizes, almost directly in re-
sponse to the Machian position, the development of physics as a progressive unification of all
physical phenomena “achieved by emancipating the system from its anthropomorphic elements,
in particular from specific sense impressions” [8, p.6]. He gives the example of the second law of
thermodynamics, which has, throughout its different formulations, been stripped of all human
associations1; only in this way can this physical law be given “a firm basis in reality” [8, p.18].
Planck defends the idea of a physical world-picture as reflecting real natural events that take
place in a way that is completely independent from us, devoid of any arbitrary creations of
the human intellect. The goal of science “is not the complete adaptation of our ideas to our
impressions, but the complete liberation of the physical world-picture from the individuality
of the creative mind” [8, p.26]. Interestingly, Planck concedes that the Machian conception is
perfectly coherent, but claims that it misses the essence of natural science as it is conceived by
any working scientist. Indeed, against the sensationalism of Mach, Planck contends that
a constant, unified world-picture is, as I have tried to show, the fixed goal which
true natural science, in all its forms, is perpetually approaching. . . . This constant
element, independent of every human (and indeed of every intellectual) individuality,
is what we call “the Real”. Or is there today a single physicist worthy of serious
consideration who doubts the reality of the energy principle? [8, p.25]
Both views on the goal and structure of physics clashed when it came to the scientific
status of the kinetic theory of gases, according to which the thermodynamic properties of a
gas were understood as probabilistic laws for the large number of atoms out of which the gas
supposedly consists.2 Indeed, whereas Mach, because of his “dislike for hypothetico-fictitious
physics” [7, p.35], refuses to accept the existence of imperceptible atoms, Planck acknowledges
the Boltzmann definition of entropy as the first to give it a “firm basis in reality” [8, p.18].
Another point on which Mach and Planck disagreed was the relation between electrodynamics and
mechanics: whereas Planck suggests that electrodynamics should in the end be subsumed under
the unifying concepts of mechanics, it seems that Mach was far more inclined to drop the
mechanical ontology that dominated nineteenth-century physics. Ironically, around that same
time this particular issue was being resolved by the theory of relativity, and it was precisely the
work of Mach that, amongst others, proved a great inspiration for Einstein [9].

1. In the case of thermodynamics, these anthropomorphic elements are the ability to do work and the idea of
irreversible processes, which are, in Planck’s conception, both dependent on or affiliated with human technical
skills. It is only with Boltzmann’s probabilistic definition that entropy first gets a mathematical meaning
independent of any human association.

2. In the kinetic theory of gases, associated with Maxwell and Boltzmann, the validity of irreversibility in
physical processes is understood to be probabilistic, and thermodynamic entropy is defined in a probabilistic way.
In particular, Boltzmann defined the entropy of a given thermodynamic state as the logarithm of the number
of microscopic configurations that give rise to this state, entailing that a state has a higher entropy the more
likely it is to be realized by the microscopic particles (atoms). The underlying ontology of this theory is again
of a mechanical and/or dynamical nature. In fact, it appears that, initially, Planck himself strongly opposed
this atomistic interpretation of the laws of thermodynamics, questioning the intelligibility of a probabilistic
explanation of entropy; only in later years did he accept the Boltzmann definition. [3]
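The Boltzmann definition invoked here can be stated compactly in the standard modern notation (the formula itself is not written out in the text):

```latex
S = k_{\mathrm{B}} \ln W
```

where W is the number of microscopic configurations compatible with the given thermodynamic state and k_B is Boltzmann’s constant; a state realized by more configurations thus carries a higher entropy, which is precisely the probabilistic reading of irreversibility described above.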
From this brief sketch we get a picture of the epistemological concerns of physicists at the
turn of the century. We can say that nineteenth-century physics is characterized by the math-
ematization and unification of a whole new range of physical phenomena, and that these higher
levels of abstraction confronted physicists with concerns about the status and methodology of
theoretical physics. These concerns played out at the level of both philosophical and physical
theorizing, and were deemed important for philosophy and for the content of the physical theories
themselves. Indeed, our discussion of the Mach-Planck dispute illustrates that this type of
philosophical debate was not the pastime of two retired physics professors who had decided to
dedicate some time to philosophical reflection, but that these discussions were followed by
physicists and philosophers alike, and their conclusions had a great impact on physical theories.3
The Mach-Planck dispute also shows that the debates could be extremely polarized and
uncompromising. On one side of the spectrum, we find an empiricism aiming to avoid any
‘metaphysical speculations’ and reduce every physical concept to simple sense impressions, and
on the other side, there is a realism for which physicists try to find the concepts that describe
nature, independently of any human particularities. Although arguments for both positions
are drawn from the history of physics and the experience of the working physicist, the arguments
lack a unified viewpoint, and a distinctly philosophical perspective seems to be missing. As a result,
Mach’s position seems outdated in the light of the theoretical aspirations of physics at the turn
of the century, while Planck’s focus on anthropomorphisms is too simplistic to capture all
developments in theoretical physics. Moreover, both approaches fail to systematically account
for the driving forces behind nineteenth-century physics, viz. the mathematization and unification
of nature. It would take a systematic philosophical analysis of the situation to clear up these
epistemological issues.
The neo-Kantians inherited the fundamental Kantian insight that the object of knowledge is
not an external reality existing independently from our judgement – a transcendent realm of real
objects in the realist conception, or uninterpreted sense-data in the empiricist tradition – but
that every object is ‘constituted’ as it is conceptualized within a certain a priori logical structure.
This structure is not to be understood in a psychological or physiological way – as Helmholtz
did – but should be pictured as a set of logical ‘faculties’ or functions that allow one to fix sense
impressions within a conceptual space. Without these faculties, sense impressions are devoid of
any objective meaning; the faculties are a priori in the sense that they come before experience. This
approach of thinking about knowledge is called ‘transcendental’, indicating that the conditions
of possibility for objective judgements are up for philosophical analysis.
Famously, Kant explored this approach with respect to Newtonian physics. Indeed, as he
was confronted with a confusion of different metaphysical interpretations of classical mechanics,
he investigated how the Newtonian paradigm of objective knowledge was made possible by the
faculties of sensibility, understanding and reason that are involved in the act of judgement. Kant
arrived at a tripartite structure with (i) the faculties of pure intuition (essentially the intuitions
of space and time), (ii) the faculties of understanding (logical structures or forms of judgement),
and (iii) the faculty of reason (providing regulative principles or ideals). The principles of
physics (the categories) then arise when the pure forms of judgement are given spatio-temporal
content in relation to the pure forms of intuition (through the transcendental schematism of the
understanding), whereas the regulative ideals serve as the non-determinate guiding principles
that drive the progress of science.
Although the neo-Kantians reinvigorated the transcendental approach of Kant, they did not
take over this structure. In particular, they refused to accept a dualism between, on the one
hand, a discursive (conceptual) faculty of understanding, and, on the other, an intuitive (non-
conceptual) faculty of sensibility. Instead, they wanted to understand the structure that makes
objective knowledge first possible, in purely logical or conceptual terms alone.4 It is by applying
logical concepts to experience that experience is first constructed; talk of a reality (or pure
sensibility) existing prior to the logical faculties makes no sense.
In the Marburg school of neo-Kantianism, ‘experience’ was understood exclusively in sci-
entific terms; it was scientific experience for which they wanted to analyze the conditions of
possibility.5 Indeed, whereas Kant seemed to posit something like a persistent self (a tran-
scendental unity of apperception) as the fundament on which judgement was constructed, the
Marburgers saw the body of science, with its rules, methods and procedures, as responsible for
the constitution of experience [10]. The strategy for exposing this function of science is contained
in Cohen’s ‘transcendental method’, which takes the best physical theories of the day as starting
point and seeks to explain the possibility of experience by identifying the a priori laws that are
present in these theories. Put differently, (scientific) experience is given as a task to philosophy,
in the sense that philosophy strives to articulate the principles of mathematical natural science
that generate objects of possible experience. This method is essentially historical, in that
philosophers in different periods of the history of science will be faced with a different science, and will,
consequently, arrive at different conceptions of what constitutes objective experience. [13]
4. The reason for this refusal to allow an intuitive faculty of sensibility is, in part, the discovery of
non-Euclidean geometries. Indeed, for Kant, the structure of space generated by the faculty of
sensibility was a priori Euclidean, and geometry derives from this faculty of intuition. The formulation
of different, yet consistent, geometries suggested that this could no longer be true and led the neo-Kantians to
believe that geometry (and mathematics in general) is due to the logical faculties of understanding alone.
5. It was their claim that this was also the focus of Kant himself: the Critique of Pure Reason was supposed
to expose the a priori structure of classical mechanics. This Kant interpretation has recently been revisited with
great success [12].
To a large extent, Cassirer takes over the methodology of Cohen, and carries it further.
Cassirer’s first major work, Das Erkenntnisproblem in der Philosophie und Wissenschaft der
neueren Zeit, first published in 1906, traces the history of science and philosophy from the
perspective of Marburg Neo-Kantianism. In particular, Cassirer discusses the ‘mathematization
of nature’, or the application of ideal mathematical structures to an empirically given nature, as
the decisive achievement of the scientific revolution. The book also contains a lengthy discussion
of Kant, where, in the line of Cohen, Cassirer contests the separation of the faculties of intuition
and understanding and proposes to replace these faculties by a fundamental creative activity of
thought that progressively generates the object of natural science. Space and time then arise not
as expressions of a separate, non-discursive intuition, but as the first products of this creativity
of thought. Importantly, and in clear disagreement with Kant and the logicist tradition of Frege
and Russell, formal logic is not fundamental but appears as an abstraction from ‘transcendental
logic’, where the latter denotes the unitary process of constructing scientific knowledge. In
Substance and Function this transcendental logic will be worked out more systematically, as
well as the idea of the constitution of the object of science through a progressive determination
of the a priori principles of mathematical physics. [14]
we are used to in science. Indeed, it seems that in this rule of concept formation there is always
a tacit reference to another intellectual criterion. In the system of Aristotle, for example, the
ambiguity of the logical doctrine of abstraction is supplemented with a metaphysical theory,
by which the formation of generic concepts ends in the discovery of the real essences of things.
This form of logic has been transformed and refined7, but the fact remains that it is only through a
fixed thing-like substratum that logical concepts can obtain their application. It is precisely this
fixation on thing-concepts that Cassirer is out to contest and that he wants to replace with a
form of logic based on relation-concepts.
The inspiration for this move comes from the nineteenth-century reshaping of mathematics.
Indeed, in mathematics it is clear that the method of abstraction cannot characterize or justify
the necessary concepts, because in the definitions of pure mathematics another realm of objects
is created that is in no direct way connected to the world of ‘things’.8 Moreover, mathematical
concepts or formulas have the feature that, as they become more general, they become more
determinate, and that the more special cases of a given mathematical formula follow from the
general case. This relation between the universal and the particular stands in contrast to the
relation of abstraction, where the more general concepts are stripped of their sharp determinations.
In mathematics, the most general concepts are also the richest.
This leads Cassirer to his fundamental idea of a logic based on relation-concepts, where (i)
the individual is conceived as a determinate step under the rule of a more general concept, and
(ii) these concepts are serial, in the sense that they generate a series of objects by successive
applications of the same conceptual rule. Just as in mathematics, a logical concept “represents a
universal law, which, by virtue of the successive values which the variable can assume, contains
within itself all the particular cases for which it holds” [1, p.21]. This opens up a new paradigm
for a methodology of logic, where all concept formation is connected with a sequence generated
by functional relations between the members of this sequence – the form and meaning of the
concept are exhausted by this generating relation. In addition, every element falling under a
given concept only has meaning as an element within the series that is generated by the concept;
an object has no independent ‘existence’ – not even in a logical or mathematical sense. As
Cassirer sets out to show in the rest of the book, it is only through this logical methodology
that the determinate character of scientific concepts can be understood.
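This idea of a serial concept can be made concrete with a small sketch (my own illustration, not Cassirer’s): the concept is identified with a generating rule, and its instances are nothing but the successive values this rule produces.

```python
# A small illustration of a "relation-concept": the concept is a
# generating rule F(x), and its instances exist only as members of
# the series that the rule unfolds.

def series(rule, n):
    """Unfold the first n members of the series fixed by a conceptual rule."""
    return [rule(k) for k in range(n)]

# The general law "contains within itself all the particular cases":
even = lambda k: 2 * k      # the concept "even number" as a rule
square = lambda k: k * k    # the concept "square number" as a rule

print(series(even, 5))      # [0, 2, 4, 6, 8]
print(series(square, 5))    # [0, 1, 4, 9, 16]

# An element has meaning only as a position within its series:
print(series(even, 5).index(6))  # 3, i.e. 6 is the fourth member
```

The point of the sketch is the contrast with abstraction: nothing is extracted from pre-given things; the members exist only through the generating relation.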
Thus what is here given is always only a temporally limited and determined reality,
not a state which can be retained in unchanging logical identity. It is the fulfilment
of the demand for this latter, however, which constitutes all the meaning and value
of the pure numerical concepts. [1, p.33]

7. As an example of this metaphysical transformation of the same logical methodology, Cassirer discusses the
psychological epistemology of Berkeley: “While formerly it had been outer things that were compared and out of
which a common element was selected, here the same process is merely transformed to presentations as psychical
correlates of things.” [1, p.9]

8. As we will see, the same is true for theoretical physics, since “these concepts of physics also are not intended
merely to produce copies of perceptions, but to put in place of the sensuous manifold another manifold, which
agrees with certain theoretical conditions” [1, p.14].
The ‘meaning’ and ‘value’ correspond to the universal applicability to every individual case,
as a condition for judgements concerning individuals. As will become clear later on, these
mathematical concepts will indeed serve as conditions of possibility for the arrangement of
individuals into an inclusive whole. In this sense, the logically determinate character as relation-
concepts is a prerequisite for the role these mathematical concepts will play in mathematical
physics.
The challenge of founding arithmetic in relation-concepts was met by the work of Dedekind,
who founded all arithmetic definitions and propositions in the concept of progression. Indeed,
starting from the concept of a series (i.e. a first member and a relation of succession), the
integer and fractional numbers as well as addition and multiplication can be developed, without
ever taking recourse to the relations of concrete measurable objects. The framework can even
be extended to include irrational, imaginary, and transfinite numbers. What is established
by this logical construction is “a system of ideal objects whose whole content is exhausted in
their mutual relations” [1, p.39]. Indeed, the essence of number is exhausted by the conceptual
rule that defines a structured manifold: no number is anything more than a place within this
conceptual whole.
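The construction from a first member and a relation of succession can be sketched in modern terms. The following toy Python model (my own illustration, not anything from the thesis or from Dedekind’s own text) shows how addition and multiplication can be developed from succession alone, each “number” having no content beyond its place in the series:

```python
# A Peano-style sketch of number as pure position in a series:
# every "number" is defined solely by its predecessor relation.

class Nat:
    """A natural number, given only by its place in the succession."""
    def __init__(self, pred=None):
        self.pred = pred  # None marks the first member of the series

ZERO = Nat()

def succ(n):
    """The generating relation: each application yields the next member."""
    return Nat(n)

def add(m, n):
    """Addition, defined recursively from succession alone."""
    if n.pred is None:
        return m
    return succ(add(m, n.pred))

def mul(m, n):
    """Multiplication, defined recursively from addition."""
    if n.pred is None:
        return ZERO
    return add(mul(m, n.pred), m)

def value(n):
    """Read off a number's position in the series (for inspection only)."""
    return 0 if n.pred is None else 1 + value(n.pred)

two = succ(succ(ZERO))
three = succ(two)
print(value(add(two, three)))  # 5
print(value(mul(two, three)))  # 6
```

Nothing in the model refers to concrete measurable objects: the whole content of each `Nat` is exhausted by its relations to the other members, mirroring the “system of ideal objects whose whole content is exhausted in their mutual relations.”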
The rationale behind Cassirer’s founding of arithmetic becomes clear when it is opposed to
the attempt by Frege and Russell to reduce number theory to logic through the use of classes.
Although Cassirer admits that this reduction is a great advance over sensationalistic theories,
it cannot be satisfactory given the function the number concept has to play in the whole of
knowledge. Again, we see that mathematical concepts are supposed to play a constitutive role,
which the determination of number by the equivalence of classes cannot do. Instead, Cassirer
aims at defining numbers from “a purely categorical point of view” [1, p.54], without taking
recourse to thing-like concepts such as classes. Only in this way can the numerical concepts be
applied in the mathematical sciences.
The same motives drive Cassirer’s discussion of geometry, for which the development of a
purely functional conceptualization has been more involved. This development is presented as a
progressive evolution starting with the geometry of the ancient Greeks, through Cartesian and
differential geometry, and resulting in the formulation of projective geometry. In this form, for
the first time, “we start from an original unit from which, by a certain generating relation, the
totality of the members is evolved in fixed order” [1, p.88]. Importantly, Cassirer construes this
historical development of concepts as a process with an inner necessity. Indeed, in this process
the formulation of group theory forms the final “conclusion to a tendency of thought, which we
can trace in its purely logical aspects from the first beginnings of mathematics” [1, p.94].9
In the last few paragraphs of the third chapter, Cassirer briefly discusses how the purely
functional concepts of geometry are to be applied to empirical reality, and, more specifically, how
to decide between different geometries in their application to real space. With this discussion,
however, we have left the realm of the pure functional concepts of mathematics and embarked
on the critical analysis of mathematical physics.
of knowledge and, consequently, the basic meaning of the functional concept; it is only by laying
bare the transcendental meaning of the functional concept that it finds its true import. So
the real challenge that Cassirer takes up is showing that he can make sense of the historical
evolution of the natural sciences – in particular, mathematical physics – within the framework
of the functional concept. And the ambitions towards this effect are quite high: Cassirer wants
to capture the whole structure of physics with only a small number of fundamental concepts;
the most important are the concepts of space and time, substance and energy.
With respect to the physics of space and time this fundamental philosophical question has
been clouded by the metaphysical discussion on the absolute or relative nature of space-time.
In the background, however, another question emerges that is of epistemological importance:
Different attempts have been undertaken to ground the physical meaning of space-time deter-
minations in purely empirical terms. In the system of Mach [6], for example, it is the influence
of the mass distribution of the universe – the fixed stars – that generates the law of inertia for
the earthly bodies. If we look at the meaning and function of the law of inertia in the system of
mechanics, however, we find that no reference is made to these fixed stars. Indeed, we can easily transform to
other frames of reference and lose the connection with the fixed stars, without the law of inertia
losing its intelligibility. So the concept of uniform motion is only related to the “ideal schemata
offered by geometry and arithmetic” [1, p.175] and only functions as such in physical theory.
The grounding of the law of inertia in empirical terms enters the system of physics only through
an external demand inspired by empiricism.10 This demand has inspired other approaches to
grounding the existence of inertial frames in sensuous objects,11 but it always appears that it is
not so much the existence of these objects but rather the assumption of their existence that
validates the use of mechanical concepts. But then it is clear that the meaning of these physical
concepts was already established beforehand in an ideal, mathematical construction. The search
for ‘things’ existing in the sensuous world for grounding e.g. the law of inertia involves a circle,
because inertia and the other principles of mechanics are already tacitly recognized beforehand
as universal mathematical principles.
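The circularity can be made concrete with a standard textbook illustration (in modern notation, not Cassirer's own): the law of inertia is formulated relative to an ideal class of frames and keeps its form under any uniform change of frame, with no reference to the fixed stars.

```latex
% A force-free body satisfies, in an inertial frame S,
\ddot{x} = 0 .
% Under a Galilean transformation to a frame S' in uniform motion with velocity v,
x' = x - vt , \qquad t' = t ,
% the law retains exactly the same form,
\ddot{x}' = \ddot{x} = 0 ,
% so uniform motion is defined by the ideal schema of such frames,
% not by any privileged material body (fixed stars, body alpha, etc.).
```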
This implies that the real philosophical problem with respect to space and time concerns the
form and function these principles exhibit in the conceptual structure of theoretical physics. In
line with Cassirer’s basic logical convictions, the logical character of space and time is that of
10
At this point, it proves worthwhile to further pinpoint Cassirer’s view on the philosophy of Ernst Mach. Indeed,
when describing the “scientific ideal of pure description” Cassirer writes
The goal of this philosophy of physics would be reached, if we resolved every concept, which enters
into physical theory, into a sum of perceptions, and replaced it by this sum [. . . ]. [1, p.114]
The question Cassirer asks is whether this conception of physics is indeed a description of the actual status of
physics or “confused with a general demand that is made of these theories” [1, p.115].
The answer to this question can only be won by following the course of physical investigation itself
and considering the function of the concept that is involved directly in its procedure. [1, p.115]
With this statement, Cassirer explicitly places himself in the debate for which we have taken the Mach-Planck
dispute as an example [Sec. 2.1]. We see that Cassirer takes the philosophy of Mach as a demand on how physics
should be structured, a demand that is out of touch with the way actual physics has evolved.
11
Cassirer discusses the “fundamental body” of Streintz or the “body alpha” of Neumann as attempts to define
inertial frames through the introduction of some special body or object in empirical reality, with respect to which
inertial movements can be defined.
Chapter 2. Ernst Cassirer and the philosophy of physics 12
“systems of relations in the sense that every particular construction in them denotes always an
individual position, that gains its full meaning only through its connections with the totality
of serial members” [1, p.172]. Indeed, a particular position in space only gains meaning with
reference to other positions, or more generally, a spatial manifold, and every moment of time is
determined with reference to an earlier or later contrasted with it. Space and time thus appear
as serial concepts, where individual space-time points only have meaning as elements within the
space-time manifold.
The same goes for matter and ether, the two physical ‘substances’ that are supposed to
capture all physical processes taking place inside the space-time framework. Cassirer describes
a number of historical transformations, where the concept of matter has been stripped of all
sensuous content and has evolved into a purely logical center of possible relations. Indeed, the
idea of a point mass makes it possible for matter to be a subject of physical processes described
by purely mathematical relations (i.e., differential equations). The concept of the ether equally
expresses the connections between different physical processes, and “all that physics teaches of
the “being” of the ether can, in fact, be ultimately reduced to judgements about such connections”
[1, p.163]. So again, the content of the physical concepts of matter and ether is exhausted by
considering their logical place in the universal schemata, in which the relations of empirical
reality can be first represented in a scientifically determinate fashion.12
A last concept with special significance is that of energy. We have seen that an empirical
phenomenon only becomes an object for knowledge when it is ascribed a definite place in the
mathematical manifold of serial concepts, but, for Cassirer, the real task of knowledge consists
of placing the different series within a unified system. This requires a principle that enables us to
connect different series according to an exact numerical scale and a constant numerical relation
governing the transition from one series to the others. This scale is provided by the concept
of energy, which, starting from the famous equivalence of motion and heat, has progressively
included more domains of physics. Energy represents a common series for all physical processes,
making possible an objective correlation according to law in which all physical contents (light,
heat, motion, etc.) stand. It signifies an intellectual point of view, “from which all these
phenomena can be measured, and thus brought into one system in spite of all sensuous diversity”
[1, p.192].
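The exact numerical scale that Cassirer describes can be illustrated, in modern notation rather than his own, by the classic equivalence of work and heat:

```latex
% Joule's mechanical equivalent of heat: the work W required to produce
% a quantity of heat Q stands in a fixed numerical ratio to it,
W = J\,Q , \qquad J \approx 4.19 \ \mathrm{J/cal} ,
% so that mechanical, thermal, electrical, etc. processes are all measured
% on one common scale and enter one and the same balance equation,
E_{\mathrm{mech}} + E_{\mathrm{therm}} + E_{\mathrm{el}} + \dots = \mathrm{const} .
```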
Again, it would be a mistake to think that physics has discovered a new self-existent thing.
Instead, energy simply appears as the expression of an exact numerical relation that pertains
to physical processes, and the meaning of the energy concept is exhausted by that numerical
equivalence. In this respect, energy as a unifying concept seems to have an epistemological
advantage over the attempts at unification within the mechanistic world-view. Indeed, under the
concept of energy, two physical processes “are the “same” not because they share any objective
property, but because they can occur as members of the same causal equation, and thus can be
substituted for each other from the standpoint of pure magnitude” [1, p.199]. Energism shows
that unification is not necessarily connected to analyzing things and processes into their ultimate
intuitive parts, as a mechanical reductionism would do.13
12
In this connection, Cassirer points to the fact that “the exactitude and perfect rational intelligibility of
scientific connections are only purchased with a loss of immediate thing-like reality” and that “it must appear
as a genuine impoverishment of reality that all existential qualities of the object are gradually stripped off.” [1,
p.164] These remarks echo the view of Planck [Sec. 2.1] associated with the banning of anthropomorphous elements
in physical science, but gain a positive philosophical significance with Cassirer.
13
Cassirer explicitly states that he does not favor energism, as “[t]he conflict between the two conceptions can
ultimately only be decided by the history of physics itself; for only history can show which of the two views can
finally be most adequate to the concrete tasks and problem” [1, p.202].
Even before its individual value has been empirically established within each of the
possible comparative series, the fact is recognized, that it necessarily belongs to
some of these series, and an anticipatory schema is therewith produced for its closer
determination. [1, p.150]
Cassirer calls this “a type of transcendence” [1, p.281], where a particular given impression
becomes a mathematical symbol and designates a fixed physical property in a larger theoretical
structure. This shows that the body of physical concepts is constitutive for a scientifically
determinate conception of reality, and that there are no ‘bare facts’ that any scientific theory
can compare to.
This also implies that physical concepts are not tested in isolation, but that their validity is
evaluated by their function in a theoretical complex; it is these theoretical complexes that are
judged on their correctness as they unite the totality of experience into an unbroken unity. The
agreement between the observations and the system of deductions always remains an approxima-
tion, since the mathematical structure of pure thought is always only postulated to correspond
to physical reality.14 Indeed, “[w]e inscribe the data of experience in our constructive schema,
and thus gain a picture of physical reality; but this picture always remains a plan, not a copy,
and is thus always capable of change” [1, p.186].
Crucially, however, the meaning of the mathematical concepts and principles is not dependent
on their application to physical reality. It is precisely because they are exactly determined as
mathematical principles, that physical concepts such as space and time have the fixity and
exactness that is required for them to function in physical theory. Indeed, space and time act
as “pure functions, by means of which an exact knowledge of empirical reality is possible” [1,
p.182]. They are first considered in intellectual abstraction and only then generate a “general
schema for possible changes in general” [1, p.182], and it is in this application to physical reality
that it is first decided whether real movements in fact conform to these determinations. In this
14
Cassirer refers to Poincaré, who has described these mathematical constructions as conventions when they
are introduced to survey physical facts more easily. For Cassirer, the characterization of the ideal conceptual
creations as conventions recognizes that “thought does not proceed merely receptively and imitatively in them,
but develops a characteristic and original spontaneity” [1, p.187]. In the following section [Sec. 2.3.5] we will
explain that the concepts of theoretical physics are more than conventions.
respect, it is essential for scientific knowledge of nature that the empirical realization of the
mathematical concepts can shift, yet their logical meaning and necessity remain intact.15
Finally note that the scientific motive of unification seemingly confronts Cassirer with a
paradox: It appears that every experimental observation will always demand a growing
number of natural laws in order to capture the observation in its peculiarity. Indeed, it will never
be possible to isolate a physical process such that only one law will capture the phenomenon
completely and exactly. It is at this point that the full power of the functional concept shows
itself. In contrast to the generic concept of traditional logic, where every abstraction corresponds
to a stripping of determination, the functional concept becomes more determinate in its content
and application to the particular as it becomes more universal. So in order for the physical
concepts to capture the particularity of experience in growing exactness, a progression towards
more universal concepts is needed. Indeed, every universal relation necessarily contains a growing
number of more particular relations and “has a tendency to connect itself with other relations
to become more and more useful in the mastery of the individual.” [1, p.255]. Put differently,
[t]he advance of experiment goes hand in hand with the advancing universality of
the fundamental law, by which we explain and construct empirical reality. [1, p.258]
because the reason for this transformation is precisely the preservation of these principles; with-
out a fixed logical standard, it would make no sense to transform our scientific body in response
to some observations, because there would be no scientific observation. So, in order to make
sense of the progression of scientific principles, we need to assume that there is “an ultimate
constant standard of measurement of supreme principles of experience in general.” [1, p.268] It
is the task of the critical theory of experience to search for this ‘universal invariant theory of
experience’:
These ultimate logical invariants or “invariants of experience” are called a priori by Cassirer,
because they are contained as necessary premises in every judgement on empirical facts.
We have seen that the objects of physics arise as we transform experience to the demands
of theoretical concepts; through the different conceptualizations science gains different objecti-
fications of physical reality. But these represent different stages in the fulfillment of the same
fundamental demand of objectification. It is through the realization of this demand (i.e. the
identification of the invariants of experience) that the real meaning of the concept of the object
is established. So it is this fundamental demand or search to fix the object of physics in its
full determination that, despite the impossibility of attaining it in principle, drives the progress of
science.
Because Cassirer had always placed himself in the philosophical tradition of Kant, it proved
vital that he could incorporate Einstein’s theories within his project, a challenge that he met in
a monograph on relativity theory.17 In this work, Cassirer argues that the theories of Einstein
do not provide a refutation of the philosophy of Kant, but, instead, are a new confirmation
that only transcendental philosophy can provide the correct explanation of the structure and
meaning of theoretical physics.
Cassirer begins by describing the advent of relativity theory as a critical revaluation of the
system of physics. Indeed, after the experiments that had made a unified conception of physical
phenomena on the basis of the laws of nineteenth-century physics impossible – the Michelson-Morley
experiment being the most famous – there was a need for a critical examination and correction of
e.g. the classical conceptions of space and time, the concept of matter in mechanics, and the
ether concept in electrodynamics. According to Cassirer, this intellectual process was continuous
in the sense that the same demand for constancy and unity in nature had been at work in
developing the old physics and overthrowing it. The result was, as always, a further liberation
from the “presuppositions of the naively sensuous and “substantialistic” view of the world” [1,
p.386] in favor of a unified system of functional space-time determinations, where space and
time themselves have been further stripped of their thing-like meaning.
What came in the place of the classical notions of space and time are the pure forms of
coexistence and succession, which only have a meaning as serial concepts appearing in the
description of physical phenomena. Indeed, in relativity theory physical processes are described
by world-lines in the four-dimensional space-time manifold, a manifold that presupposes the
serial forms of space and time. Although the space and time coordinates are mixed for
different observers, as dictated by the equations of relativity, the two functions of coexistence
and succession remain at work in every space-time description of a physical process. Indeed, the
theory of relativity proposes the epistemological insight that neither pure space nor pure time
has an existence in itself, but only in their unified application under the mathematical
laws of relativity to physical phenomena do space and time retain empirical meaning.
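In modern notation (which the text itself does not use), the mixing of the spatial and temporal determinations, and the structure that all observers nevertheless share, reads:

```latex
% Lorentz transformation between frames in relative motion with velocity v,
% with \gamma = 1/\sqrt{1 - v^2/c^2}:
x' = \gamma\,(x - vt) , \qquad t' = \gamma \left( t - \frac{vx}{c^2} \right) ,
% so neither spatial nor temporal determinations are separately invariant;
% what remains common to all observers is the space-time interval
ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2 .
```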
Then, of course, the problem remains of making sense of the non-Euclidean structure of
the space-time manifold. First of all, Cassirer opposes any empirical grounding of geometry,
because the meaning of geometrical concepts is exhausted by their function in the ideal system
of geometry, and they possess no immediate correlate in the world of existence. Moreover, the
geometrical axioms are never to be regarded as concerning things or relations of things in reality;
instead, they should be evaluated as to the extent to which they, in their totality, constitute the physical
object and make physical knowledge possible. But, Cassirer argues, this is exactly what relativity
theory has realized: Geometry has lost all ontological meaning, and the only
question that remains is which geometrical system should be used for the interpretation of the
phenomena of nature and their dependencies according to law.18 Indeed, the theory of relativity
provides a mathematical framework for space-time determinations, making possible the exact
formulation of certain physical relations such as the laws of gravitation or electromagnetism,
without attaching any existence to the space-time manifold itself.
17
Cassirer published the monograph as Zur Einsteinschen Relativitätstheorie in 1920, which was translated to
English together with Substance and Function in 1923.
18
At this point, Cassirer evokes the philosophy of Kant, and makes clear that pure intuition has no role to play
in the realm of knowledge of the empirical and the physical. Indeed, it is only the rules of understanding that
give the existence of phenomena their synthetic unity. In this regard, it is only a small step beyond Kant to also
take into account non-Euclidean axioms.
shows that the project of Cassirer can still provide an important inspiration for contemporary
authors. As Friedman puts it himself,
. . . I construct a narrative depicting both the development of the modern exact sci-
ences from Newton to Einstein and the parallel development of modern scientific
philosophy from Kant through the early twentieth century. I use this narrative to
support a neo-Kantian philosophical conception of the nature of the sciences in ques-
tion – which, in particular, aims to give an account of the distinctive intersubjective
rationality these sciences can justly claim. [21, p.11]
21
With respect to contemporary physics, the study of Pickering [25] on the historical development of particle
physics is particularly relevant.
constitutive framework, for example, the general relativistic field equations and the classical
Newtonian law of gravitation appear both as alternative empirical possibilities defined within
a common empirical space, whereas in the old framework of classical mechanics the Einstein
equation could not even be formulated. From the Einsteinian perspective both gravitation laws
can be coherently formulated and their empirical meaning determined, such that, by a decisive
experiment, one can be favored over the other. From this retrospective point of view, the transition
from Newton to Einstein seems to be perfectly reasonable.
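This retrospective commensurability can be stated precisely (a standard result, not Friedman's own formulation): in the weak-field, slow-motion limit, Einstein's field equations reduce to the Newtonian law, so that both laws are formulable within the one Einsteinian framework:

```latex
% Einstein's field equations,
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} ,
% reduce, for weak static fields with g_{00} \approx 1 + 2\phi/c^2
% and velocities small compared to c, to the Poisson equation
% of Newtonian gravitation,
\nabla^2 \phi = 4\pi G \rho .
```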
In addition to this retrospective account, Friedman develops a prospective account of inter-
paradigm rationality that explains how there can still be a rational route from the point of view
of the earlier framework leading to the later framework. This implies that new concepts and
principles of a new constitutive framework develop out of, and as a rational continuation of,
the old concepts and principles, and that, despite the incommensurability between frameworks,
practitioners of a new framework can still appeal to those working within the old framework
using conceptual resources that are available to both sides.
Friedman argues for this ambitious reply to conceptual relativism through a detailed exposition
of how Einstein, in writing down his special and general theories of relativity, made connection
with (i) a long intellectual tradition of space-time theories going back to the seventeenth century,
(ii) the debate on the foundations of geometry, (iii) the philosophical debate on the status and
goal of scientific knowledge, (iv) empirical evidence on the detectability of relative motion in elec-
trodynamics and the equivalence of gravitational and inertial mass, etc. Indeed, by embedding
this specific revolution of mathematical physics within a larger intellectual (philosophical, scien-
tific, technological, experimental, etc.) tradition, it can be shown how relativity “could have ever
become a real possibility and thus a genuinely live alternative” [22, p.115], and, consequently,
how the rational nature of the transition is laid bare.
our present scientific community, which has achieved temporary consensus based on
the communicative rationality erected on its present constitutive principles, as an
approximation to a final, ideal community of inquiry (to use an obviously Peircean
figure) that has achieved a universal, trans-historical communicative rationality on
the basis of the fully general and adequate constitutive principles reached in the ideal
limit of scientific progress. [22, p.64]
This regulative ideal is thoroughly Kantian because “we must view our present scientific com-
munity as an approximation to such an ideal community, I suggest, for only so can the required
inter-paradigm notion of communicative rationality be sustained” [22, p.64]. Yet, whereas Cas-
sirer saw in relativity theory the culmination of Kantian philosophy as revised by the Marburg
school, Friedman believes that “we need a more far-reaching revision of Kantian transcendental
philosophy than Cassirer has suggested” [26, p.250]. Indeed, Friedman suggests that it is neces-
sary to “relativize the Kantian a priori to a given scientific theory in a given historical context
and, as a consequence, to historicize the notion of transcendental philosophy itself” [26, p.251].
Let us disentangle these two notions and discuss them separately. Firstly, Friedman seems
to claim that Cassirer did not endorse a fully relativized a priori, but in light of our
detailed discussion of Cassirer’s works this claim appears misguided. Indeed, we have identified one of the
aims in Substance and Function as justifying the progression of scientific theories by a historical
reconstruction of the principles that make scientific experience possible at any stage. This
interpretation is confirmed by Cassirer in a letter to Moritz Schlick, where he states that the
a priori “can assume the most various developments in the progress of knowledge”, and that
the idea of unity in nature “can be specified in particular principles and presuppositions [...]
depending on the progress of scientific experience” [27, p.50-51]. This supports our claim that
Cassirer indeed elaborates a relativization of a priori principles connected with the progress
of scientific knowledge and that Friedman underrates the extent to which Cassirer revised the
epistemology of Kant in order to arrive at a relativized a priori. [28]
The notion of historicizing transcendental philosophy seems to be more to the point. Friedman explains that, in Cassirer’s conception,22
[w]e have no way of anticipating a priori the specific constitutive principles of future
theories, and so all we can do, it appears, is wait for the historical process to show
us what emerges a posteriori as a matter of fact. How, then, can we develop a philo-
sophical understanding of the evolution of modern science that is at once genuinely
historical and properly transcendental? [29, p.696]
We have seen that Friedman proposes to embed the development of natural science within
a larger intellectual (philosophical, scientific, technological, etc.) tradition, showing how the
replacement of constitutive principles can be made intelligible. In particular, he shows that
transcendental philosophy exhibits its own historical transformation and provides the basis for
the constitutive principles of the natural sciences. The prime example of Friedman again serves
as an excellent illustration: it is by tracing the development of transcendental philosophy through
Kant, Helmholtz, Poincaré, and, ultimately, Einstein that the relativity revolution can be made
intelligible. [29, 30]
Of course, Friedman’s idea of a historicized transcendental philosophy is a thoroughly post-
Kuhnian philosophy of science. Indeed, the rationale behind this move is precisely to be able
to give not only a retrospective account of scientific progress, but also to provide a prospective
one. By adding other intellectual dimensions, Friedman tells us a historical narrative that fixes
the rationality of science across scientific revolutions. Cassirer, as he was not faced with the
Kuhnian paradigm dynamics, did not share this concern, so that his account lacks Friedman’s
‘broader intellectual perspective’.23 Indeed, Cassirer rests content with a retrospective account
of the conceptual development of mathematical physics, showing the internal rationality of its
historical progression.
22
In this quote, Friedman discusses the approaches of both Cassirer and Husserl.
23
One could interpret Cassirer’s philosophy of symbolic forms as a widening of his perspective in this sense. [28]
[a]s a matter of fact, Cassirer (and the Marburg school more generally) does not
defend a relativized conception of a priori principles. Rather, what is absolutely a
priori are simply those principles that remain throughout the ideal limiting process.
In this sense, [. . . ], Cassirer’s conception of the a priori is purely regulative, with no
remaining constitutive elements. [22]
Since this will prove an important issue in the next chapter, let us discuss this conclusion
in a bit more detail. Friedman clearly wants to reconsider Cassirer’s rejection of the faculty of
sensibility, and would like to “preserve some kind of independence for a faculty of sensibility
conceived along broadly Kantian lines” [31, p.48]. This was attempted in the Dynamics of
Reason by identifying, within the structure of physical theories, the level of coordinating principles
whose role was to relate mathematical concepts to empirical phenomena. In a later publication,
Friedman elaborates his idea of coordinating principles as thoroughly constitutive in the Kantian
sense. His attempt at “a more structured reinterpretation of the Kantian faculty
of sensibility [...] involves replacing the Kantian faculty of sensibility with what we now call
physical frames of references” [31, p.48]. The idea is that
[l]aboratory frames attached to the surface of the earth, for example, can be described,
at least to a very high degree of approximation, by Euclidean geometry and Newtonian
physics. So they are faithful, in this respect, to the independent a priori structure of
our faculty of sensibility according to Kant. In particular, any abstract theoretical
structure we might then introduce (such as that of Minkowski space-time) must still
be related to this prior perceptually given structure in order to have the empirical
meaning that it does. [31, p.48]
24
Friedman discusses this modification of the Kantian a priori in the context of the discussion between Cassirer
and Schlick on the philosophic interpretation of relativity theory. Although the logical empiricism of Schlick
carries a lot of elements of a neo-Kantian approach, it is on this issue that the two philosophers clearly diverge.
For logical empiricism (Carnap) a purely logical analysis of science provides a fully determinate constitution of
science, whereas for Cassirer it is “transcendental logic” that characterizes philosophy of science, where only a
generic, at all times indeterminate, conception of the constitution of science is appropriate. A similar divergence
between Cassirer and Reichenbach is discussed by Ryckman [20], where Reichenbach in the end also takes recourse
to a logical analysis of physics with an unproblematic account of the empirical objects, whereas for Cassirer the
constitution of the physical object is precisely the problem for critical philosophy.
So Friedman characterizes the relation between abstract mathematical theories and observa-
tional phenomena in a way that is surprisingly close to Kant’s original contentions: there are
mathematical structures both at the observational level, which is a prior perceptually given
structure, and at the theoretical level, which is designed in abstract physical theories, and the two
levels are “coordinated with one another by a complex developmental interaction in which each
informs the other” [31, p.49]. The first level is structured by Euclidean geometry and Newtonian
physics, and can be coordinated to Minkowski spacetime by limiting procedures. In particular,
“the familiar laboratory frames of classical physics play an essential role in relating the mathe-
matical structure of Minkowski space-time to our actual perceptual experience of nature” [31,
p.48]. According to Friedman, we find that empirical phenomena can only be generated in
relativity theory by virtue of coordinating them to a perceptual space with a Euclidean structure.25
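The limiting procedures invoked here are, concretely, of the following kind (sketched in standard notation, not Friedman's own formulas): for velocities small compared to the speed of light, the Lorentz transformation passes over into the Galilean transformation of classical kinematics.

```latex
% In the limit v/c \to 0 (formally c \to \infty), \gamma \to 1, and
x' = \gamma\,(x - vt) \;\longrightarrow\; x - vt , \qquad
t' = \gamma \left( t - \frac{vx}{c^2} \right) \;\longrightarrow\; t ,
% i.e. the classical (Galilean) structure of the laboratory frame
% appears as a limiting case of the Minkowski structure.
```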
In this respect, Friedman is correct in differentiating Cassirer’s conception of the constitutive
function of a priori principles from his own. For Cassirer, there is only the space of physical
theories as shaped by mathematical concepts on one side, and the unstructured chaos of sense
perceptions on the other. Only the former space carries definite scientific meaning, and therefore
contains empirical phenomena in the scientific sense, whereas nothing definite can be said about
the latter. This is, indeed, the consequence of the Marburg school denying a faculty of pure
sensibility and only keeping the faculty of understanding in the game.
Yet, we believe that it is misleading to strip Cassirer’s a priori principles of their constitutive
dimension. As Cassirer repeatedly claims, it is the proper task of critical philosophy to unravel
the different constitutive principles that make it possible for science to represent experience
as a determinate whole. Because he does not acknowledge a separate faculty of sensibility,
Cassirer can no longer characterize the distinction between constitutive and regulative principles
in the way Kant did, but this does not imply that Cassirer transformed the constitutive a
priori into a purely regulative one. [28] Similarly, the fact that Cassirer does not identify a
distinct level of coordinating principles does not mean that physical principles have lost their
constitutive function. Instead, it just means that constitutivity does not necessarily follow the
specific meaning that Friedman attaches to it. In fact, we believe that Friedman’s insistence on
a separate faculty of pure sensibility leads to a problematic characterization of the function of
physical principles: theoretical physics does not need any bridges to a realm of pure sensibility
in order to give physical meaning to empirical phenomena. In the case of relativity theory, there
is only the four-dimensional curved space-time manifold. In particular, the physical meaning of
the equivalence principle or reference frames has nothing to do with bridging the gap between
abstract theories and empirical phenomena – their meaning is exhausted by the function of these
principles in the theory of general relativity, and it is the theory as a whole that gives physical
meaning to the movements of planets in the curved spacetime. Similarly, we do not need to
couple back to classical conceptions of the world in order to give empirical meaning to quantum
mechanics.
In conclusion: Just as an internal history of the development of theoretical physics is enough
for laying bare its internal structure and evolution, we believe that the meaning of specifically
physical concepts is exhausted by their function in physical theories. In particular, the problematic
way of relating abstract physical theories to some space of pure sensibility (put forward
by Friedman) can be avoided by realizing that there is no such relation, at least as far as theoretical
physics is concerned. Therefore, Cassirer's conception of the constitutive a priori is better suited
to account for the conceptual structure of theoretical physics as far as the physical meaning and
function of its concepts are concerned. In particular, it does not require us to have recourse to
an a priori space with a different (read: classical) structure from the one we find in the most
advanced theories of physics.

25 In the final chapter of Dynamics of Reason we find a preliminary attempt at applying the same idea of coordination to the case of quantum mechanics. Here Friedman assigns a central place to the correspondence principle, according to which a quantum system behaves according to the laws of classical mechanics in the limit of large quantum numbers; it is supposed to explain why only classical behavior is observed in macroscopic physical systems. As Friedman explains, “it performed this essential coordinating function by relating experimental phenomena to limited applications of classical concepts within the new evolving theory of atomic structure” [22, p.122]. Just as in the case of inertial frames, we find here that coordinating principles are supposed to bridge between a classically structured perceptual space (pure sensibility in Kantian terminology) and a theoretical space structured by abstract mathematics. In the case of quantum mechanics this feels like a very suspicious move, not least because a principle is recovered that has not figured in physical theories for the last fifty years.

Chapter 2. Ernst Cassirer and the philosophy of physics 24
Moreover, we find that Cassirer’s conception allows for a more fruitful interplay between the
constitutive and the regulative dimensions of the a priori principles. Indeed, with Cassirer the
same concept can both have a constitutive function in a contemporary physical theory, and point
towards the ‘invariants of experience’ that would function in an ideal stage of physical theorizing.
Because the concepts and principles of physics develop in a historical process, this interplay is
necessarily dynamic. In a domain of physics that is constantly evolving, the identification of a
strict level of coordinating principles in the sense of Friedman would therefore underestimate
these dynamics.26 We will show in the following chapter that a philosophical account of the
theory of renormalization is better carried out without introducing the level of coordinating
principles.
Still, if we take Friedman’s concerns seriously, we must ask whether Cassirer’s account misses
something in relating physical theory to empirical observations. In a more recent publication
[31] we find that Friedman starts focusing on the praxis of scientific observation and empirical
testing. In the work of Cassirer we do not find an explicit account of how a physical theory is
actually tested in practice. Friedman seems to suggest that this is, in the end, how Cassirer’s
framework falls short:
And so Kant’s reliance on the a priori structure of the faculty of sensibility necessarily
common to all human beings is replaced by the demand of the experimental (and
therefore technological) community for universally communicable (replicable) results.
[...] The extremely abstract mathematical structure of general relativity, for example,
thereby acquires a connection (via a reconfigured version of a schema connecting
the understanding to sensibility) with our actual perceptual experience of the world
around us—now essentially including technologically enhanced perceptual experience
in engineered experimental contexts. Abstract (purely intellectual) mathematical
reasoning acquires a necessary and very productive relationship with the concrete
technological practice of experimenters and engineers. [31, p.50]
Thus Friedman opens up a dimension of constitutive principles that appeals to the experimental
technologies in which these principles are tested. This move is complementary to Friedman’s
notion of historicizing the a priori, which we discussed in the previous section, and has the
potential of adding extra dimensions to the meaning and rationality of physical principles that
go beyond the purely internal dimension. If understood in this way, we believe that Friedman’s
ideas can prove very fruitful for connecting physical concepts to a larger intellectual and
technological context. In the following chapter we will briefly hint at how this might go in the
case of the theory of renormalization.
26 The reason why it does seem to work for the theory of relativity can then be explained by the fact that it concerns a hundred-year-old theory that, without input from other theories such as quantum field theory, is rather inert in describing real physical phenomena.
• It traces the historical motives behind the development of principles in theoretical physics.
The ultimate goal of this historical approach is finding the ‘ultimate invariants of expe-
rience’ in the sense of Cassirer. The work of Friedman has shown that this goal is not
rendered futile by the historiography of science in the aftermath of the work of Kuhn.
• It looks for the constitutive function of physical concepts and principles. With Cassirer we
should investigate how the concepts of mathematical physics first make possible a scientific,
determinate and fixed experience of physical reality. The work of Friedman has shown that
this goal is not rendered futile in the light of Quinean holism.
• Both of the above functions of the concepts of physics – the regulative and the constitutive
– exhibit an interplay in the work of Cassirer; this interplay should be laid bare.
• The concepts of theoretical physics are functional (serial, structural) concepts. The history
of physics should be understood as a progressive elimination of thing-like concepts. In that
respect, interpreting the concepts of physics in an ontological way is always misguided, and
only a thoroughly epistemological interpretation of theoretical physics by transcendental
logic is warranted.
Chapter 3

Renormalization in contemporary physics: a transcendental perspective
of the theoretical tools of high-energy physics have originated in condensed-matter theory, and
vice versa, so that it makes no sense to treat either of the two domains in isolation.
A possible reason for this misleading focus of philosophers could be the limited interest of
professional physicists in philosophical questions. In the previous chapter we could clearly see
a line from physicists such as Mach and Planck to the philosophical concerns of Cassirer, but
no such lines can be discerned between contemporary physics and philosophy. Physicists are
typically not familiar with philosophical literature3 , whereas philosophical accounts seem out of
touch with real-life physics.
In this chapter we will discuss the theory of renormalization, which has been a central
theme in theoretical physics for the last fifty years across condensed-matter, high-energy and
statistical physics. Because it appears in so many subdisciplines, a philosophical account of
renormalization is less likely to focus on ontological and more on epistemological issues, making
it the ideal subject for the transcendental approach that we have laid out in the previous chapter.
We will start by situating this subject within the landscape of contemporary theoretical physics
[Sec. 3.1], and then give a historical introduction to the theory of renormalization [Sec. 3.2]. The
philosophical part starts with a review of the literature on renormalization [Sec. 3.3], so that we
can set the stage for our transcendental account following Cassirer [Sec. 3.4].
Solving this eigenvalue equation for the electron distribution ψ is, however, practically impossible
even for a simple one-atom system, because the complexity of this mathematical problem scales
exponentially with the number of electrons. This implies that, even though we know exactly
how its constituents behave and interact, we cannot derive a molecule’s properties. Yet, despite
3 Feyerabend would put it as follows: “The younger generation of physicists, the Feynmans, the Schwingers, etc., may be very bright; they may be more intelligent than their predecessors, than Bohr, Einstein, Schrödinger, Boltzmann, Mach and so on. But they are uncivilised savages, they lack in philosophical depth [...].” [32, p.386]
the impossibility of finding an exact solution, a variety of techniques has been proposed to find
approximate solutions that capture the correct physical properties of the molecule. Typically,
these approximate methods assume that the electrons can be treated as independent particles –
Hartree-Fock theory and density-functional theory are the most famous examples – and neglect
the collective phenomena (correlations) of the particles. Corrections to these mean-field meth-
ods4 are typically treated in perturbation theory, a procedure that assumes that the correlations
do not drastically change the properties of the system.
This example illustrates the general strategy for solving a generic many-body problem.
Whereas taking into account the correlated behavior of all constituents in the many-body sys-
tem leads to a far too complex mathematical problem, approximate methods are devised such
that the constituents are somehow treated as if they behave independently. Although these
methods have led to great successes in the past, there have always been many-body systems
for which they do not predict the physical behavior correctly. These problems typically involve
collective behavior of the system’s constituents that goes beyond the mean-field description. For
example, in quantum chemistry there are many molecules for which density-functional theory
gives wrong predictions. Other examples can be found in condensed-matter physics, where ma-
terials have been discovered that do not exhibit the typical conductor/semi-conductor/insulator
behavior that is expected from band theory5 . These new phases of matter are characterized by
collective behavior of the material’s constituents, which cannot be captured by treating them as
independent particles because the correlations between the particles are too strong. It can even
happen that the macroscopic properties of these materials are completely disconnected from the
microscopic constituents, in the sense that we cannot directly understand the collective behavior
as somehow arising from the constituents. In that case, the macroscopic properties are said to
emerge from their microscopic basis.
When this happens, the many-body problem takes on a new dimension: it involves the ques-
tion of how the emergent properties of a many-body system can originate from its microscopic
constituents. Again, the structure of the many-body problem entails that we know the micro-
scopic laws exactly, so we have, in principle, a complete description of the system. Yet, this
description does not lead to an understanding of the collective behavior of the system, because
qualitatively new macroscopic phenomena are seen to emerge in the system. We will discuss
the issue of emergence in more detail later in this chapter, but, in order to make things more
concrete, we will first discuss three examples in a bit more detail.
Figure 3.1: Cartoon of superconductivity. On the left we see an uncorrelated state of electrons (repre-
sented as black figures) while on the right we have a system where all electrons are bound into Cooper
pairs, which are ’condensed’ into a correlated quantum state (the collective nature of the state is repre-
sented by the correlated dance). The correlated motion of the electrons is said to be an emergent effect
in this many-electron system. Figure taken from [33].
Let us look at one particular issue in a bit more detail, viz. the determination of hadron
masses such as those of the proton and the neutron. This problem again has the threefold structure of a
many-body problem: (i) we are faced with a system of a large number of microscopic constituents
(the quarks described by a continuous field), (ii) we know the fundamental interactions for the
field (they follow directly from the QCD Lagrangian), and (iii) we want to understand how
the proton forms as the lowest energy state of two up quarks and one down quark. It appears
that the quantum correlations and fluctuations inside the proton are extremely strong, making
perturbative calculations worthless. In fact, it is very hard to see how the proton can be thought
to emerge from the microscopic quarks and gluons as described by the standard model.
Here the sum runs over all nearest-neighbor pairs {ij} and gives a negative energy contribution
if the spins are aligned. The laws of statistical mechanics teach us that all properties of the
system are determined by the partition function
\[ Z = \sum_{\{s_i\}} e^{-\beta H(\{s_i\})}, \qquad \beta = \frac{1}{k_B T}, \]
i.e. the sum over all spin configurations weighted by the Boltzmann factors, where T is the
temperature of the system.
We are for the third time confronted with a many-body problem: (i) we have a large number
of microscopic constituents (spins on a lattice), (ii) we have a full description of these constituents
and their interactions (everything follows from the partition function), and (iii) we are interested
in the collective behavior of the system. Here the quantity of interest is the system’s magnetization,
which is given by the average direction of the spins. In the limit of infinite temperature, all spins
will be uncorrelated and the average spin will be zero; in the zero-temperature limit all spins will
settle into the same direction, as is energetically favored, and the average spin will be one. It
appears now that in between these two limits there is a sharp phase transition (see Fig. 3.2), and,
more interestingly, that the spin correlations become stronger around the transition point and
cannot be understood from mean-field theory or from perturbative corrections. Again, the strong
correlations at the phase transition originate from collective effects that seem to emerge from
the microscopic constituents, and cannot be understood starting from the microscopics directly.
Figure 3.2: The two-dimensional Ising model. (Left) A number of spins are arranged on a two-dimensional
square lattice, where every spin interacts with its four nearest neighbors. (Right) As the temperature
is increased, we go from a system where all spins point in the same direction (average magnetization is
one) to a system where the spins are uncorrelated (average magnetization is zero). In between, there is
a sharp phase transition.
the fields of particle physics and solid-state physics7 were involved with very different physical
phenomena, and not much overlap was found between the two. This has changed dramatically
since the fifties, when methods of quantum field theory – originally the theory for describing
elementary particles – were imported to describe condensed-matter systems as well. In the
opposite direction the concepts of symmetry breaking, originally developed by condensed-matter
physicists, were applied to high-energy physics. Later ideas from the theory of phase transitions
on the one hand, and divergences in quantum field theory on the other, were combined in the
development of the renormalization group. Nowadays, many of the phenomena of high-energy
physics reappear in condensed-matter systems, and the same concepts can be applied
to understand these physical phenomena. Famous examples include the observation of the Higgs
mode in superconductors [34], and the emergence of gauge bosons and fermions in spin systems
[35]. The defining difference between the two fields from a conceptual point of view is that in
the case of condensed-matter physics the microscopic constituents and laws are known and the
observed phenomena, exotic as they may be, are supposed to arise from a more ‘fundamental’
level, whereas an underlying level is not known in the case of high-energy physics.
As will become clear in the rest of this chapter, we believe that the divisions between different
subfields of physics, and a strong focus on one of these fields in particular, are unwarranted
from a philosophical point of view, at least if we want to map out the conceptual structure of
contemporary theoretical physics. The focus on high-energy physics from the side of philosophy
is rooted in the motive of grounding a metaphysical or ontological picture of the world, and is
unwarranted from the epistemological point of view. Instead, it proves to be more rewarding to
investigate concepts or motives that run across the different subdisciplines of physics. The many-
body problem is one of these motives, and the theory of renormalization provides an extremely
elegant way of getting a grip on it. At the end of this chapter, we will see that the theory of
renormalization gives us a way of understanding one of the key conceptual features that any
physical theory shares, irrespective of the ontology the theory supposedly describes.
7 Solid-state physics is, more or less, the old term for condensed-matter physics. Nowadays, the former also refers to the ‘old way’ of doing condensed-matter physics, where crystal structures and band theory were among the main subjects.
systems at criticality. In the end Wilson came up with a procedure for treating the connections
between scales, explaining why Landau’s theory can make sense in some cases and why it fails
in others.
remains the same. This implies that the following relation should hold:
\[ e^{-\beta H_{\mathrm{eff}}(\{s^>\})} = \sum_{\{s^<\}} e^{-\beta H(\{s^<\} \cup \{s^>\})}, \]
which gives us, in principle, a prescription for transforming the original model to a new effective
model, which acts only on a part of the spins of the lattice. Since only half of the spins are
retained in the effective model, the spacing between the spins has doubled and the model can
be said to be defined on a larger scale. If one now rescales the spacings between the effective
spins to the original spacings, one ends up with the same basic constituents as in the original
model, but now with a different energy functional Heff({s>}). Thus, we have designed a map
between models, which can be written down in full generality as
\[ H^{(l)}(\{s\}) = R\big( H^{(l-1)}(\{s\}) \big). \]
In Fig. 3.3 we have summarized the renormalization procedure. This procedure can now be re-
peated many times, which leads to a renormalization-group flow14 of effective models. Supposing
that we start from a spin system with only nearest-neighbor coupling, the renormalization-group
flow will introduce couplings of longer range than just the one between nearest neighbors.
Even worse, not only two-spin but also four-spin, six-spin, etc. interactions will be generated.
Consequently, the renormalization-group flow can be pictured as a flow through the space of
14 These ideas of renormalization are commonly grouped under the term ‘renormalization group’. In order to avoid confusion, we will avoid the use of this term, and rather refer to ‘renormalization theory’ or the ‘theory of renormalization’, keeping in mind that this does not correspond to a physical theory in the strict sense, but rather to a set of interrelated ideas, concepts and procedures. We will use the term ‘renormalization-group flow’, however, since this term has a stricter meaning in the physics literature.
Figure 3.3: Schematic representation of a scale transformation. We start out with a set of spins {s}
interacting through a certain Hamiltonian H({s}). We select half of the spins {s<} (yellow) and average
over them, while the other half {s>} are kept as degrees of freedom. The yellow spins move away
from the picture, and, by imposing that the partition function remains the same, we arrive at a new Hamiltonian
Heff({s>}) for half of the spins. Finally, we rescale the spacings between these effective spins (and rotate
the lattice) such that we arrive at the same lattice structure, but with a different Hamiltonian. This
relation defines the map R(·) between Hamiltonians.
all possible spin couplings, where the flow dictates how these effective couplings change as the
scale is tuned. The transformation that generates a renormalization-group flow – the above map
R(·) gives an explicit realization of such a transformation – is called a scale transformation as
it maps a given model to another model living on a larger scale.
In most situations these renormalization-group flows terminate at a fixed-point model H∗ ,
characterized by the fixed-point relation
\[ H^{*}(\{s\}) = R\big( H^{*}(\{s\}) \big). \]
Typically, one has a small set of fixed points, and different microscopic models can flow to the
same fixed-point models. Indeed, if a small interaction term is added to a certain microscopic
model, it is typically expected that the model will still flow to the same fixed point; the perturbation
is called irrelevant in that case. Given a certain model, one also has relevant perturbations,
which have the effect that the model flows to a different fixed point. Some of these fixed points
correspond to trivial models, for which e.g. all the spins are frozen to point in the same direc-
tion, or do not interact at all. Other fixed points, however, are more interesting and represent
critical states. In fact, every one of these non-trivial fixed points corresponds to a universal-
ity class: every model that flows to the same critical (non-trivial) fixed point belongs to the
same class. A critical fixed point has a defining set of properties such as scaling behavior and
critical exponents, which can be obtained by linearizing the above flow equations. So with the
concept of renormalization-group flows Wilson had found a natural explanation of universality:
systems of different physical character may, nevertheless, flow to the same critical fixed point.
The different incoming trajectories onto this same fixed point correspond to distinct irrelevant
interactions that are present in the microscopic models, but which are washed out by the scale
transformation.
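A minimal worked example of such a flow (a standard textbook case, added here for concreteness): for the one-dimensional Ising chain the decimation of every other spin can be performed exactly, yielding the recursion K′ = (1/2) ln cosh(2K), equivalently tanh K′ = tanh²K, for the dimensionless coupling K = J/kBT. Iterating this map exhibits the flow toward a trivial fixed point:

```python
import math

def decimate(K):
    # exact decimation step for the 1D Ising chain: summing out every
    # other spin renormalizes the dimensionless coupling K = J/(kB*T)
    # according to K' = (1/2) ln cosh(2K), i.e. tanh(K') = tanh(K)**2
    return 0.5 * math.log(math.cosh(2.0 * K))

K, flow = 1.0, [1.0]
for _ in range(8):
    K = decimate(K)
    flow.append(K)

# the coupling shrinks at every step: every K > 0 flows to the trivial
# fixed point K* = 0, while the fixed point at K = infinity is unstable
```

Every positive starting coupling shrinks under iteration, so in one dimension all finite couplings are washed out and there is no finite-temperature critical point; in two dimensions the analogous (approximate) recursions acquire a non-trivial fixed point at finite K, corresponding to the critical behavior of Fig. 3.2.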
Now the above renormalization procedure of averaging over degrees of freedom is particularly
simple in the case of spins, and is expected to become more difficult in the case of atoms,
electrons, field theories, etc. Also, the effective degrees of freedom that arise in the course
of a renormalization-group flow can become rather different from the ones on the microscopic
level. Therefore, the significance of Wilson’s work consists rather in the conceptual ideas of scale
transformations, renormalized couplings, effective degrees of freedom, and renormalization-group
flows. Fisher puts it as follows:
Indeed, the design of effective RG transformations turns out to be an art more than
a science: there is no standard recipe! Nevertheless, there are guidelines: the general
Figure 3.4: Schematic representation of a renormalization-group flow. We can see that different micro-
scopic models (l = 0) determine different itineraries through the space of effective models and lead to a
small set of fixed points. In this case the possibilities are simple: a microscopic model flows to one of the
two trivial fixed points, unless it is at the critical point. In the latter case the model flows to the critical
fixed point (indicated with an asterisk). Figure taken from Ref. [36].
In the next section, we will give different examples of how this ‘general philosophy enunciated
by Wilson’ is realized in different physical contexts or theories.
Let us explain this ‘most exciting aspect’ in a bit more detail.15 The Kondo model describes a
magnetic impurity coupled to the conduction band of a nonmagnetic metal; the crucial question,
unsolvable by perturbation theory, is the low-temperature behavior of this impurity spin. Wilson’s
solution to the problem is to (i) discretize the conduction band into energy levels
with a logarithmic spacing, (ii) transform the system to a half-infinite spin chain with the first
spin representing the impurity, and (iii) solve this spin-chain system iteratively. Starting from
the impurity spin, in every iteration a new site is added to the system and, in order to keep
the size of the Hilbert space tractable, the number of states is truncated by keeping only the
lowest-energy states of the Hamiltonian for the current part of the chain.
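The truncation step can be made concrete with a toy version of this procedure. The sketch below is not Wilson’s Kondo calculation: it applies the same ‘keep only the m lowest-energy states’ recipe to a spin-1/2 Heisenberg chain with all couplings set to one (a simplifying assumption), growing the chain one site per iteration:

```python
import numpy as np

# spin-1/2 operators
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # raising operator S+

def grow(H, Sz, Sp, m):
    """Add one site, coupled to the chain's edge spin by a Heisenberg
    interaction, then truncate to the m lowest-energy states."""
    d = H.shape[0]
    # S.S = Sz*Sz + (S+ S- + S- S+)/2 between edge spin and the new site
    H_new = (np.kron(H, np.eye(2))
             + np.kron(Sz, sz)
             + 0.5 * (np.kron(Sp, sp.T) + np.kron(Sp.T, sp)))
    # the edge operators of the enlarged chain act on the new site
    Sz_new = np.kron(np.eye(d), sz)
    Sp_new = np.kron(np.eye(d), sp)
    # diagonalize and keep only the m lowest-energy eigenstates
    E, U = np.linalg.eigh(H_new)
    keep = U[:, :min(m, H_new.shape[0])]
    project = lambda O: keep.T @ O @ keep
    return project(H_new), project(Sz_new), project(Sp_new)

# grow a 21-site chain from a single spin, truncating to m = 8 states
H, Sz, Sp = np.zeros((2, 2)), sz, sp
for _ in range(20):
    H, Sz, Sp = grow(H, Sz, Sp, m=8)
print("estimated ground-state energy per bond:", np.linalg.eigvalsh(H)[0] / 20)
```

For short chains no truncation occurs yet and the procedure is exact; for longer chains the kept states play the role of effective degrees of freedom at the current length scale. For lattice models like this one the plain lowest-energy criterion is known to converge rather poorly (a limitation that later motivated refinements of the method), but it displays the structure of Wilson’s iterative scheme.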
So the general procedure of integrating out non-important degrees of freedom is motivated
from a very practical point of view. Because a computer can store only a finite number of
15 Note that the details of Wilson’s numerical solution of the Kondo problem are not important for understanding the rest of this chapter.
states, Wilson needed a procedure to select only the important states in every iteration of the
renormalization procedure. In this case, it appears that selecting only the lowest-energy states
leads to a numerical algorithm that predicts the impurity’s physical properties. But this must
mean that these states represent the effective degrees of freedom that capture the physics of
the Kondo model at a given energy scale. Therefore, from the very start of the theory of
renormalization, the numerical simulation and physical understanding of a system are two faces
of the same coin, in the sense that both require accounting for the effective degrees of freedom
that determine the physics of a many-body system at a certain scale. Indeed,
the solution of the Kondo problem is the first example where the full renormaliza-
tion program (as the author conceives it) has been realized: the formal and scaling
aspects of the fixed points, eigenoperators, and scaling laws will be blended with
the practical aspect of numerical approximate calculations of effective interactions
to give a quantitative solution (the present accuracy is a few percent) to a problem
that previously had seemed hopeless. [37, p.805]
many-body systems.
Q: Doesn’t all this mean that quantum field theory, for all its successes, is an
approximation that may have little to do with the underlying theory? And isn’t
renormalization a bad thing, since it implies that we can only probe the high energy
theory through a small number of parameters?
A: Nobody ever promised you a rose garden. [41, p.10-11]
From the previous paragraphs, one can very well imagine that the standard model of elemen-
tary particle physics is an effective theory for other degrees of freedom living at a smaller scale.
In fact, more and more physicists are actively working out the idea that the phenomenology
of high-energy physics arises from an underlying microscopic theory, just as in the case of a
condensed-matter system. This ‘condensed-matter point of view’ can be stated as follows:
17 The concept of a renormalization-group flow goes back to the work on the running of coupling constants in quantum electrodynamics by Gell-Mann and Low [42], which served as an important inspiration for Wilson.
As we probe nature at shorter and shorter distance scales, we will either find in-
creasing simplicity, as predicted by the reductionist particle physics paradigm, or
increasing complexity, as suggested by the condensed-matter point of view. We
will either establish that photons and electrons are elementary particles, or we will
discover that they are emergent phenomena—collective excitations of some deeper
structure that we mistake for empty space. [35, p.879]
This quote finally confirms that the ideas of renormalization and emergence are at the founda-
tions of high-energy physics as well, and that the conceptual import of renormalization theory is
similar to that in condensed-matter physics. Therefore, a philosophical account of the theory of renormalization
is needed that can capture its conceptual structure across the different subdisciplines
of physics.
Most notably, we found that the recent developments support a pluralism in theo-
retical ontology, an antifoundationalism in epistemology and an antireductionism in
methodology. These implications are in sharp contrast with the neo-Platonism im-
plicit in the traditional pursuit of quantum field theorists, which took mathematical
entities as the ontological foundation of physical theories and which assumed that,
through rational (mainly mathematical) human activities, one could arrive at an ul-
timate stable theory of everything. Also, contrary to the previous image of scientific
theories that was implicit in the mathematical structure of QFT, the new image
fostered by the EFT approach is that scientific theories are not to be conceived as
necessary products of scientific rationality, but rather should be seen as contingent
descriptions of nature, revisable in the course of changing circumstances. [43, p.69]
These drastic conclusions are drawn from a detailed historical analysis of the theory of renormalization
in high-energy physics and, to a lesser extent, statistical physics. We can already note
that these conclusions seem at odds with the approach we take in this thesis, since for us the
scientific rationality of physical theories is the starting point, the feature that a transcendental
analysis should, in some sense, explain. Indeed, the idea that physical theories are contingent
descriptions of nature directly contradicts our goal of showing how the object of theoretical
physics is ideally determined by a set of ‘ultimate invariants of experience’.
In the following four subsections we will discuss some of the issues that are raised in Cao and
Schweber’s paper, and try to get a feel for contemporary philosophical accounts of renormalization
theory in physics. Since we aim at developing a transcendental account in the spirit of
Cassirer in the following section, we will try to bring home the two important conclusions that (i)
the specifically philosophical conclusions drawn by contemporary philosophers with regards to
epistemology and/or ontology do not necessarily follow from the physics alone19 , and (ii) these
18 In this quote, EFT stands for effective field theory; the EFT program denotes the new approach in high-energy physics, where every field theory is thought to describe the effective low-energy physics of another field theory that lives at a higher energy scale. As we have discussed in Sec. 3.2.3, this is a consequence of the modern view on renormalization in high-energy physics. Crucially, in the EFT program even the most accurate field theories describing the standard model of particle physics are themselves only effective field theories of a lower-lying level to which we have not had any experimental access so far.
19 It is often suggested that, once one understands the physics thoroughly – a vantage point that supposedly only a small number of philosophers attain – these conclusions are the only viable option.
3.3.1 Empiricism
On multiple occasions, Cao and Schweber note that recent developments in renormalization
theory support philosophical empiricism. The reason is that, according to the EFT program,
an understanding of physical phenomena on a given scale needs to be supported by empirical
data on that scale. This implies the fundamental importance of phenomenological approaches
in physics. From that perspective, physical theories cannot be more than “effective instruments
for organizing the data by imposing local order and coherence, and they conceive and express
local causal regularities” [43, p.76]. This, in turn, supports a localist view of theory, which
“characterizes physical (or more generally, scientific) theories as historically situated and context-
dependent” [43, p.74].
Their position is further characterized by contrasting it with the idea that “the development
of fundamental physics will end with the discovery of an ultimate, definitive, and conclusive
mathematical formalism” [43, p.77]. Indeed,
From our perspective this argument cannot carry any weight. In the previous section we have
shown that the physical motivation for the EFT program is not restricted to high-energy physics,
but that the same commitment to ‘relevant degrees of freedom’ is present in e.g. condensed-
matter physics. Moreover, condensed-matter physics is typically not driven by the dream of a
final theory describing some fundamental nature of the physical world. From that perspective,
contrasting the rationale behind the EFT program with the dream of the string theorist20 does
not serve as an argument for the claim that renormalization theory implies empiricism.
Moreover, we don’t agree with the claim that the inevitability of phenomenology implies an
empiricism in the philosophy of physics. Indeed, Cao and Schweber observe from the develop-
ments of renormalization theory that physics can only establish local regularities, but use this
observation as follows:
The limited nature of our experience in producing knowledge of the world undermines
the universal claim of physical laws: it only allows ascertaining family resemblance
(regularities) in local region of space and time. From local regularities we cannot
construct physical theories that are unique and necessary. On the contrary, all
theories are context-dependent, culturally relative, and historically changeable. [43,
p.75]
Again, the argument seems to be that, with the dream of a grand unified theory crushed by
renormalization theory, the only remaining option is that a physical theory is e.g. context-
dependent. But the fact that the physical behavior of nature depends on the scale on which it
20
String theory is currently the best option for developing an ‘ultimate’ theory of everything, see the footnote
in Sec. 3.4.4 for more on string theory.
is probed, does not imply that our understanding of nature on that scale is culturally relative.
Instead, renormalization theory has taught us that a physical description of nature necessarily
involves a setting of the scale on which it is probed, but this stipulation of scale is an exact and
well-defined part of physical theory, and does not point to the “socially constructive nature of
physical theories” [43, p.77].
Ultimately, it seems that Cao and Schweber’s ontological considerations are responsible for
their empiricist conclusions. In their view the EFT program implies a representation of the
physical world as “layered into quasi-autonomous domains, each layer having its own ontology
and associated ‘fundamental’ laws”, giving rise to a so-called “hierarchical pluralism in theoreti-
cal ontology” [43, p.72]. Since every quasi-autonomous domain demands an empirical input that
is historically contingent, the conclusion is unavoidable that the ontologies that are discovered
are contingent as well. Then it is only a small step to the claim that “scientific theories are
not to be conceived as necessary products of scientific rationality, but rather should be seen as
contingent descriptions of nature” [43, p.69].
From the transcendental perspective, the conclusion that ontologies are historically contin-
gent does not need to worry us that much, and, crucially, does not imply that scientific theories
are contingent descriptions of nature. The reason is that Cao and Schweber assume that the
ontological commitments of scientific theories are inseparable from their scientific content, an
assumption that we, following Cassirer, want to avoid at all costs. Instead, we would like to
focus on the conceptual structure of theoretical physics, and its evolution, where scientific ratio-
nality is taken as a postulate and cannot be disproven by the content of physical theories. In
particular, we will try to show that renormalization theory has redefined the “universal claim of
physical laws” (see quote above), rather than undermined it.21
to the conclusion that connecting a physical theory with its ontological commitments rather
obscures the conceptual features of renormalization theory.
Let us therefore look at a philosophical analysis of physical understanding that is closer to
our goals. In an interesting paper, Hartmann also discusses Cao and Schweber’s paper and comes
to the conclusion that, “[l]eaving metaphysical questions aside, it seems to be philosophically
more interesting to examine the formal relations between the theories, models and EFTs we
have already” [46, p.298]. From our perspective, this seems to be more to the point, indeed.
Although Hartmann’s paper is mainly about differentiating the functions of theories, models
and effective field theories in high-energy physics, we would like to focus on his idea of global
and local understanding. Whereas a local understanding is obtained by capturing a physical
phenomenon in terms of the degrees of freedom that are relevant at the energy scale under
consideration, a global understanding consists of showing how the phenomenon follows from the
microscopic equations governing the system.22 This difference in the theoretical understanding
of physical phenomena is illustrated with a case study of quantum chromodynamics. As we have
seen in Sec. 3.1, the major difficulty with this theory is the fact that it is extremely difficult to
actually compute, starting from the fundamental Lagrangian, observable consequences such as
the observed masses of the proton and neutron. In fact, this has only been realized recently with
the development of highly advanced computational methods and the use of huge computational
resources. According to Hartmann, this has produced a global understanding of the mass of
the proton, since it is incorporated as a consequence of the fundamental equations of quantum
chromodynamics, but fails to produce a local understanding, since this computational method
acts like a black box. On the other hand, we have effective models such as asymptotic freedom,
confinement, or dynamical chiral symmetry breaking, which provide a local understanding of
how the proton arises as a particle, but these explanations lack a global understanding because
no general principles are directly involved.
Although the differentiation of global and local understanding sheds light on the different
functions playing a role in explaining physical phenomena, there seems to be a tension between
the two. Both types of understanding are needed to fully understand a
physical phenomenon, but it is rather unclear how they relate to each other. The dichotomy
can be analyzed further by reiterating the example of quantum chromodynamics. In the case
of the proton mass, for example, we believe that Hartmann misses the point of lattice gauge
theory as he downplays this computational method as providing just a black box. Instead, the
efforts of lattice gauge theory and other computational approaches in determining the mass
of the proton from the fundamental Lagrangian should be understood as providing a physical
understanding of how the proton arises as an effective description of quantum chromodynamics
at low energies. Put more generally, physical understanding requires an understanding of how
the effective degrees of freedom at a certain scale – the energy scale at which the proton is probed
– arise through the interactions of the degrees of freedom that live at other scales – the energy
scale at which quark dynamics are important. In that respect, it is unclear what the added value
of the idea of global understanding could be, other than the crucial requirement that the relevant
degrees of freedom and processes can be shown to arise from a more global standpoint. This
is, of course, the place where renormalization theory comes in, so one of the challenges for the
next section will be to show how the concept of renormalization integrates the different levels
at which we can understand a physical phenomenon.
22
According to Hartmann, whereas local understanding is produced by causal/mechanistic explanations of a
physical phenomenon, a global understanding is rather produced by an explanation that fits the phenomenon
in a general framework. It is by combining these different accounts that one obtains scientific understanding:
“science studies a given phenomenon from various theoretical perspectives, all of which reveal some explanatory
information about the phenomenon in question” [46, p.300].
Taking the decoupling theorem and EFT seriously would entail considering the re-
ductionist (and a fortiori the constructivist) program an illusion, and would lead to
its rejection and to a point of view that accepts emergence, hence to a pluralistic
view of possible theoretical ontologies. [43, p.71]
of philosophy. In particular, in contrast to the ontological pluralism of Cao and Schweber, this
‘physics-first’ approach does not make any claims about the ontological commitments to which
the EFT program supposedly adheres.
Once we have realized this, we can see that this ontological dimension of emergence is un-
called for in the case of superconductivity. The fact that we can use the same Nambu-Goldstone
field theory for describing the phenomenon of superconductivity in a variety of different ma-
terials, irrespective of the microscopic properties, does not imply that we have found a new
ontology on that scale. The only conclusion is that, in order to understand the phenomenon of
superconductivity, we need to introduce new degrees of freedom (i.e. a bosonic field theory) that
live on the energy scale at which superconductivity is observed. The fact that different materials
exhibit the same effective degrees of freedom is a very non-trivial physical fact, and has found
its explanation in renormalization theory as different microscopic systems can flow to the same
fixed point under a renormalization-group flow. But this physical fact does not come with any
ontological commitments: it is just a part of physical understanding that different degrees of
freedom become important at different scales!
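The flow of different microscopic systems to the same fixed point can be made concrete with a toy calculation. The following Python sketch (purely illustrative; no computation appears in the text above) iterates the textbook real-space decimation renormalization of the one-dimensional Ising chain, where summing out every other spin maps the dimensionless coupling K to K′ = ½ ln cosh 2K:

```python
import math

def decimate(K):
    # One exact real-space RG step for the 1D Ising chain: tracing out
    # every other spin renormalizes the dimensionless coupling K = J/kT
    # according to K' = (1/2) ln cosh(2K).
    return 0.5 * math.log(math.cosh(2.0 * K))

def flow(K, steps=30):
    # Iterate the RG map and return the trajectory of couplings.
    trajectory = [K]
    for _ in range(steps):
        K = decimate(K)
        trajectory.append(K)
    return trajectory

# 'Different microscopic systems' correspond to different initial
# couplings; all of them flow to the same fixed point K* = 0.
for K0 in (0.3, 1.0, 2.5):
    print(K0, "->", flow(K0)[-1])
```

This toy model has only a trivial (high-temperature) fixed point, so it illustrates the flow itself rather than the non-trivial universality classes of higher-dimensional systems; the conceptual point, that distinct microscopic couplings end up at the same fixed point, is the same.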
Let us therefore discuss Crowther’s account of emergence in more detail. She sees a tension
in the relation between an effective field theory and the underlying microscopic theory: on the
one hand, it should be impossible to derive the effective field theory from the more fundamental
theory, but, on the other hand, there should be, in principle, a way to derive a low-energy theory
from the high-energy physics. This is essentially the same tension that we identified earlier in the
paper of Hartmann [46], and can again be illustrated with the example of QCD. There it should,
in principle, be possible to derive the low-energy properties of the quarks (the hadrons) from
the QCD Lagrangian, and physicists are actively pushing the numerical methods for making
this happen. Nevertheless, it seems impossible to derive a theory describing the low-energy
behavior from QCD only, and an external input is required for developing EFTs such as chiral
perturbation theory.
At this point, Crowther notes that, although it is in principle possible to obtain quantitative
low-energy predictions from a high-energy theory, the EFT framework is often necessary in a
more subtle sense:
An effective, low-energy theory is the only means of properly describing the low-
energy behavior of a system. EFTs are formulated in terms of the appropriate
degrees of freedom for the energy being studied, and are necessary for imparting an
understanding of the low-energy physics. Because the low-energy degrees of freedom
do not exist at higher energy, the high-energy theory is unable to present the relevant
low-energy physics. [47, p.428]
This crucial insight makes it obvious that hinging emergence on the notion of derivability
or reduction misses the point. Instead, Crowther proposes to focus on two positive
aspects of emergence, i.e. on the fact that the low-energy physics is novel and autonomous with
respect to the high-energy theory. Novelty means that new features appear in the low-energy
regime that are not features of the high-energy theory, and autonomy captures the fact that a
low-energy theory is impervious to changes in the high-energy system. This positive definition
has the advantage that it is “naturally suggested by the physics”, whereas taking emergence as
a failure of reduction “distracts from the lessons of the actual physics”: “It means developing
an account true to the science rather than seeking to carry-over prior intuitions and concepts
from other branches of philosophy” [47, p.430].
Still, just as in Hartmann’s case, we believe that the tension is not resolved. Crowther
rightly problematizes the differentiation between ontological and epistemological reduction from
the perspective of physical practice, and rightly emphasizes the fact that identifying the correct
degrees of freedom is necessary for understanding the physics at low energies. In that
sense, the low-energy physics can be said to emerge from the high-energy system, and this idea
of emergence is indeed one of the conceptual innovations of modern physics. But, what is missed
here, is the fact that a real physical understanding of these low-energy degrees of freedom is only
attained if it is understood how these arise from the high-energy system: What is the physical
mechanism that, starting from the high-energy physics, gives rise to the low-energy description?
Indeed, whereas Crowther focuses on the novelty and the autonomy of the emergent physics,
the question of how the emergent physics arises from the microscopic degrees of freedom is
not properly taken into account. The challenge here is, again, to show how both sides of the
tension can be relieved in a positive way, i.e. how we can take the novelty and autonomy of
emergent physical behavior seriously while, at the same time, not losing the physicist’s goal of
understanding how these effective degrees of freedom emerge from the underlying microscopics.
In the next section we will show that it is precisely the idea of the renormalization group
that gives us the conceptual framework for tackling this question. In our analysis, a crucial
role will be played by computational approaches: it is in the efforts of obtaining quantitative
predictions on low-energy behavior starting from the microscopic theory, that a physicist gains
understanding of the low-energy physics. Crowther downplays this aspect of theoretical physics:
Thus, we can distinguish between an EFT’s role in enabling quantitative predictions
in the low-energy regime—a role which, in principle, could be fulfilled by the high-
energy theory—and its role in appropriately describing the behavior of a system at
low energy, and thereby facilitating an understanding of the low-energy physics—a
role which could not be fulfilled by the high-energy theory. [47, p.428]
We will show that enabling quantitative predictions in the low-energy regime by numerical
simulations takes a crucial place in the structure of contemporary physics, and provides one of
the keys for resolving the tension between reduction and emergence.
this constructionism breaks down when confronted with difficulties that are related to scale and
complexity, because it turns out that the behavior of large systems cannot be understood by a
simple extrapolation from the properties of a few particles. Instead, at every level of scale and
complexity “entirely new laws, concepts, and generalizations are necessary, requiring inspiration
and creativity to just as great a degree” [49, p.393]. This breakdown of the constructionist
hypothesis is illustrated by Anderson mainly through the concept of symmetry breaking, which
provides a physical mechanism that explains how, at a certain scale, behavior can be observed
that is entirely new with respect to the underlying fundamental laws.
So, Anderson’s paper should, in the first instance, be read as a polemic against the high-
energy physicist’s monopoly on the notion of fundamentality. “[A]t each level of complexity
entirely new properties appear, and the understanding of the new behaviors requires research
which I think is as fundamental in its nature as any other” [49, p.393]. As Anderson explains,
the research that is done for understanding the property of a system with a broken symmetry
is “as fundamental as many things one might so label”, but “it needed no new knowledge of
fundamental laws and would have been extremely difficult to derive synthetically from those
laws” [49, p.395].23
Secondly, Anderson does not oppose reductionism, but rather argues for “the breakdown of
the constructionist converse of reductionism” [49, p.393]. What Anderson is claiming is that, in
order to understand a physical phenomenon on a given energy scale, entirely new laws, concepts,
and generalizations are necessary. Indeed, “the behaviour of large and complex aggregates of
elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of
the properties of a few particles” [49, p.393]. As Anderson accepts reductionism, he does not deny
that low-energy behavior can be reduced to the more fundamental laws, but he emphasizes that
this is (i) often extremely difficult or all but impossible, and (ii) not essential for understanding
what is going on at the low-energy scale. We see the same tension appearing as the one we have
identified earlier in the papers by Hartmann and Crowther, but taking on a more pragmatic
form here. Anderson’s discussion of superconductivity is illuminating:
Thirdly, we note that no commitment to any form of ontological emergence is found. Unless
we should interpret Anderson’s use of the word ‘fundamental’ as implying some kind of ‘ontology’
appearing on different scales, there is no need to see any argument for ontological pluralism or
ontological emergence in his paper.
In a more recent paper [51], Laughlin and Pines have reiterated Anderson’s statements on
the status of reductionism in theoretical physics, but in a seemingly stronger sense. Parallel to
Anderson’s distinction between reductionism and constructionism, they state that “[w]e have
succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything
only to discover that it has revealed exactly nothing about many things of great importance”
[51, p.28]. They seem to go beyond Anderson in identifying ‘higher organizing principles’ that
work at a certain energy scale independently from an underlying microscopic theory. These
principles are “transcendent”, and “would continue to be true and to lead to exact results even
if the Theory of Everything were changed” [51, p.28].
23
The historical context [50] clearly shows that this reading of Anderson’s paper is the correct one.
The fact that emergent physical phenomena are regulated by higher organizing principles
implies that these phenomena are insensitive to microscopics; they are determined by higher
organizing principles and nothing else. Examples include the quasiparticles of a Fermi liquid and
superconductivity; such phenomena are dubbed ‘protectorates’ as they are protected against changes in the
microscopics. This is relevant to “the broad question of what is knowable in the deepest sense
of the term”, because “the nature of the underlying theory is unknowable until one raises
the energy scale sufficiently to escape protection” [51, p.29].
Let us again take the example of the superconductor, where the higher organizing principle
would be the breaking of a local gauge symmetry. Indeed, the field theory that is invoked
for explaining superconductivity cannot be reduced to the underlying equations governing the
electrons in the material: the field describes the collective, low-energy degrees of freedom, which
are entirely different from the microscopic constituents (i.e. the electrons). Moreover, they are
insensitive to the microscopic degrees of freedom as different materials can exhibit exactly the
same symmetry breaking pattern.
Drawing philosophical conclusions with respect to ontological emergence, however, again
requires something more.24 In line with Anderson, Laughlin and Pines do not rule out the
possibility of an explanation of how the field theory arises from the microscopic details; they just
state that this is, in general, extremely difficult – as the unsuccessful attempts at explaining high-
Tc superconductivity from microscopic details show – and not necessarily essential or interesting.
Indeed, the message of the paper is again polemic in trying to convince people that high-energy
physics is not more interesting than condensed-matter physics, and that the “deductive path
from the ultimate equations to the experiment without cheating” [51, p.30] is not necessarily
the path that theoretical physics should take.
We conclude, again, that the argument for ontological pluralism or emergence in physics –
and any conclusions with respect to ontology – is not taken from the physics itself, but rather
is inspired by the metaphysical aspirations of philosophers. Ontological commitments are not
made by physicists, but are ascribed to physical theories by philosophers. In the next section, we
show that this is not necessary, and that we can make perfect sense of emergence (as physicists
understand it) without invoking any ingredients from metaphysics or ontology. As Cassirer
would put it: “Science at least knows nothing of such a transformation into substance, and
cannot understand it” [1, p.192].
development in physics within a transcendental framework. We have seen that a realist phi-
losophy of physics with a focus on ontological considerations leads to unacceptable conclusions,
at least from the transcendentalist’s perspective. In particular, when taking the ontological
commitments of a certain physical theory seriously, there always looms the tension between
the emergence of new ontologies on a given energy scale, and the realization that these ontolo-
gies should be a consequence of the underlying microscopic degrees of freedom, albeit only in
principle.
Energism shows that this form of numerical order is not necessarily connected with-
out analyzing the things and processes into their ultimate intuitive parts, and recom-
pounding them from the latter. The general problem of mathematical determination
can be worked out without any necessity for this sort of concrete composition of a
whole out of its parts. [1, p.201]
So the demand of unification does not require that all of physics should be reduced to an inclusive
unitary picture for which every physical phenomenon is interpreted as an expression of the same
ultimate substance. Instead, “the demands of the theory of knowledge are rather satisfied when
a way is shown for [...] producing a complex of coordinations, in which each individual process
has its definite place” [1, p.203].
the scale at which the theory or description is supposed to hold, and it is generally impossible
to give a theory that describes a given system at all energy scales in a unified way.
The example of superconductivity illustrates why scale and effective degrees of freedom
necessarily enter into physical understanding. Suppose we had a way of actually solving all
the equations that describe the electrons on the microscopic level, but this solution acts as a
black box. Could we say that we have reached an understanding of why a certain material is
superconducting? The answer is obviously no, since we cannot see how the collective motion
of the electrons gives rise to this low-energy behavior. We need the field theory describing
the effective degrees of freedom at this energy scale, in order to formulate the mechanism of
symmetry breaking that gives rise to superconductivity. This implies that the need for an
effective description does not arise because of our limited intellectual or computational resources
in deriving the observable consequences of the microscopic theory, but that the use of effective
degrees of freedom is a necessary part of any physical theory.
How do we fit this feature of physical theories in a philosophical framework à la Cassirer?
We begin by noting that for Cassirer a physical theory describes an observable phenomenon
in the sense that the phenomenon undergoes a transition from what is offered in experience
to the form in which it appears in a physical statement. This transition or transformation is
mediated by the physical concepts, and only after a phenomenon has been given a place in the
conceptual structure does it gain a scientifically determinate meaning. In this sense the concepts
are constitutive for a scientific knowledge about the physical world. Importantly, these concepts
have a strict mathematical meaning independently from their application to physical reality; it
is only because they have this fixed meaning beforehand, that they can constitute the exactness
that is required for a scientific picture of the otherwise chaotic sensuous world.
The concepts of a field theory and symmetry breaking provide examples of such constitutive
concepts. They have a strict mathematical meaning before they are supposed to explain any-
thing: a quantum field is a well-defined mathematical object, the Lagrangian for the field can
have gauge symmetries, which the field can break to settle in a less symmetric configuration;
this spontaneous symmetry breaking leads to massless excitations known as Goldstone bosons, etc. All
these concepts and their consequences are derived within a strictly mathematical setting, and it
is only by applying this field theory and its symmetry breaking to the degrees of freedom in an
actual physical system that we aim to gain knowledge about the physical world.25 Because all
these physical concepts now take the role of effective degrees of freedom, it is all the more clear
that they are not just abstracted from empirical observation. Indeed, one does not ‘observe’ a
gauge field when a system becomes superconducting, but one applies the concept of a gauge field
to a many-electron system in order to explain the observed superconducting properties. This
is an operation that supposes, from the epistemological point of view, an active function from
the side of theory. Furthermore, in the scheme of Cassirer there is no reference to ontology:
understanding a many-body system by an effective field theory does not carry any ontological
commitments. This does not imply that we do not take the philosophical ramifications of physics
seriously, but, instead, our epistemological framework captures exactly in what way the physicist
understands emergence.
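That symmetry breaking has a strict mathematical meaning prior to any physical application can be shown in a minimal numerical sketch (the quartic potential and its parameter values below are illustrative assumptions, not drawn from the text): for a ‘Mexican hat’ potential, the symmetric configuration φ = 0 is a stationary point but not the energy minimum, and the field settles in a less symmetric configuration φ = ±φ*.

```python
# Illustrative quartic potential V(phi) = -mu2*phi**2 + lam*phi**4;
# the parameter values are arbitrary and only exhibit the mathematics.
mu2, lam = 1.0, 0.25

def V(phi):
    return -mu2 * phi**2 + lam * phi**4

# The field 'settles' at |phi| = sqrt(mu2 / (2*lam)), which has lower
# energy than the symmetric configuration phi = 0.
phi_star = (mu2 / (2.0 * lam)) ** 0.5

assert V(phi_star) < V(0.0)                    # broken state lies lower
assert abs(V(phi_star) - V(-phi_star)) < 1e-12 # two degenerate minima
```

In a superconductor the analogous statement involves a complex field and a gauge symmetry, but the logic is the same: a mathematically well-defined potential whose minima are less symmetric than the potential itself.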
So the procedure of theoretical physics is the following. A physical phenomenon can be un-
25
This ‘capturing the degrees of freedom in a mathematical framework’ can take on different forms: one can
write down a quantum field theory, where the fluctuations of the field correspond to the low-energy fluctuations
of the system; or one writes down the Feynman diagrams for a quasiparticle propagator in a many-electron
system; or one can think of quantum states in an effective Hilbert space, where an effective Hamiltonian captures
the interactions between the low-energy degrees of freedom; or one writes down a path integral that acts as a
generating functional for computing the low-energy dynamical correlations; or one comes up with a variational
wave function for the many-body system, with the variational manifold encapsulating the low-energy subspace of
the system; etc.
derstood if a way is found of identifying the degrees of freedom that live on the energy scale at
which the phenomenon takes place, and formulating a mathematical theory that, (i) describes
the behavior of these degrees of freedom, and (ii) has the observed phenomenon as a mathe-
matical consequence. The identification of effective degrees of freedom is an active operation
by the theoretical physicist, an operation that is physically motivated from the concept of scale
transformations and an associated renormalization-group flow in the space of effective models.
In fact, renormalization teaches us that this operation is a necessary step in understanding a
physical phenomenon: without specifying the scale at which a physical system is probed, it
makes no sense to refer to a certain description of the system.26
This is the first function of renormalization theory: it teaches us that effective degrees
of freedom necessarily enter in a physical description of a physical system at a certain
scale, and are only valid on that scale.
Thus renormalization theory teaches us that a physical theory is necessarily only valid at
a given energy scale, and cannot be naively extrapolated to give an accurate description of a
physical system across different scales. However, following Cassirer, this can only be the first
step in our physical understanding of the world: our physical picture of the world should be more
than the sum of successful theories for physical phenomena. Indeed, just as the energy concept
was understood by Cassirer as a principle for connecting the different physical phenomena, “in
which we have first arranged the content of the given, among themselves by a unitary law” [1,
p.190], we need here a principle or mechanism of connecting the different energy scales, which
explains why entirely different concepts are needed if the scale is changed, but also shows how
we can integrate these different conceptualizations into a unified whole.
It is, of course, the machinery of renormalization theory that accomplishes this demand.
Indeed, renormalization-group flows give us a physical mechanism that explains why a physical
theory cannot be extrapolated across different energy scales, and explains how effective degrees
of freedom can arise that are qualitatively different from the underlying microscopic theory.
This function of renormalization theory is important in two ways. The first is that it gives us a
physical mechanism in principle that explains the disconnectedness of different scales. Indeed,
as we have read in the papers by Anderson and Laughlin & Pines, the fully detailed mechanism
that gives rise to emergent physical phenomena is often not particularly interesting, and a
physicist can rest content with a theory on a certain scale without having reduced it to its
underlying microscopics: as long as the physics on that scale is properly described by a certain
theory, the phenomenology can be said to be understood in a satisfactory way. No deeper
insights are necessary here. Still, this procedure would be entirely unintelligible if the concept of
renormalization were absent: one still needs an understanding of how effective descriptions
can arise in general. Therefore, renormalization theory is crucial in the conceptual structure of
26
This epistemological reconstruction of what it means to understand superconductivity for a given material
explains how we should understand universality from a philosophical point of view. Indeed, the fact that it is
possible to apply the same mathematical formalism to different physical systems and at different energy scales does
not require us to draw deep philosophical conclusions about the ontology of the physical world. The mathematical
formalism does not carry any ontological commitments: it only carries a constitutive function of giving physical
phenomena such as superconductivity a place within a conceptual structure, and, as such, yielding a theoretical
understanding of what is happening if a material becomes superconducting. The observation that entirely different
physical systems can be understood with the same concepts is a feature of our physical picture of the world –
universality is rightly understood as a deep physical insight! – but does not present us with any epistemological
difficulties.
Chapter 3. Renormalization in contemporary physics: a transcendental perspective 53
theoretical physics, as it makes the approach of effective descriptions on a certain energy scale
intelligible.
Secondly, renormalization theory guarantees to the physicist that there is a mechanism or
explanation behind the emergent physics, and thus motivates her to look for it. But renormalization
theory also shows what this explanation should consist of: the physicist should come up with a
physical mechanism for how effective degrees of freedom can arise from the microscopic
constituents of the theory. In the case of superconductivity this is exactly
what happened: the mechanism of Cooper pairing explains how an effective bosonic field theory
can arise in a system of electrons. In the case of topological systems it is the entanglement
degrees of freedom that explain how anyonic quasiparticles can emerge from an electronic or
bosonic system. In fact, the rationale behind the whole field of strongly-correlated quantum
many-body physics is determined by this second function of renormalization theory.
This creates the situation that, in order to actually compute something, the physicist needs
an idea of which information about the system is relevant and should be stored in the computer:
she needs to find a way of performing numerical operations only on the degrees of freedom
that are important for simulating a given phenomenon. It is a priori not clear that this is always
possible, but, as we have seen, it is precisely the theory of renormalization that guarantees
that effective degrees of freedom can be identified at a certain scale that determine a system’s
behavior. This, in turn, opens up the possibility of simulating a system at that scale, as long as
a way is found of implementing these degrees of freedom on a computer.
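As a toy illustration of what such a reduction of degrees of freedom can look like, consider Kadanoff-style block-spin coarse graining of an Ising configuration. This is a minimal sketch under our own conventions (the function name and the majority rule are illustrative choices, not taken from the text), and a full renormalization-group step would also have to renormalize the couplings:

```python
import numpy as np

def block_spin(config, b=2, rng=None):
    """Coarse-grain a 2D Ising configuration (spins +1/-1) by majority rule:
    every b-by-b block of microscopic spins is replaced by a single effective
    spin. Illustrative only: a full RG step would also renormalize the
    couplings; here we show only the reduction of the degrees of freedom."""
    if rng is None:
        rng = np.random.default_rng()
    L = config.shape[0]
    assert L % b == 0, "linear size must be divisible by the block size"
    blocks = config.reshape(L // b, b, L // b, b)
    coarse = np.sign(blocks.sum(axis=(1, 3)))
    ties = coarse == 0
    coarse[ties] = rng.choice([-1, 1], size=int(ties.sum()))  # break ties randomly
    return coarse

rng = np.random.default_rng(0)
micro = rng.choice([-1, 1], size=(8, 8))     # 64 microscopic spins
effective = block_spin(micro, b=2, rng=rng)  # 16 effective spins on a 4x4 lattice
```

Each application of `block_spin` replaces the microscopic configuration by a smaller set of effective spins, which is precisely the kind of scale-by-scale reduction that the text argues makes simulation possible.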
Through the formalism of renormalization, it appears that understanding a physical phe-
nomenon and simulating it on a computer follow a conceptually similar path. This is not a
coincidence, as this relation between understanding the low-energy behavior of a many-body
system and simulating it is exactly what motivated Wilson in his formulation of the renormal-
ization group. As we discussed in Sec. 3.2.2, it was the challenge of numerically simulating the
Kondo problem that made Wilson realize that one has to find a way of determining what the effec-
tive degrees of freedom are at a given energy scale. This shows that the renormalization group
provided, from the beginning, both a conceptual tool for understanding how theories change
under scale transformations, and a numerical procedure for simulating the effective degrees of
freedom on a certain scale.
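The idea of determining the effective low-energy degrees of freedom scale by scale can be caricatured in code. The following single-particle toy version of Wilson-style iterative diagonalization grows a hopping chain one site at a time and keeps only the m lowest-energy states after each step. It is emphatically not Wilson's actual Kondo calculation (which required a logarithmic discretization of the conduction band); it only illustrates the truncation step:

```python
import numpy as np

def grow_and_truncate(n_sites, m, t=1.0):
    """Grow a single-particle hopping chain one site at a time; after each
    growth step, diagonalize and keep only the m lowest-energy states.
    A cartoon of Wilson-style iterative diagonalization: the kept states
    play the role of the effective degrees of freedom at the current scale."""
    H = np.zeros((1, 1))      # Hamiltonian of the chain built so far (kept basis)
    last = np.ones(1)         # newest site's wavefunction in the kept basis
    for _ in range(n_sites - 1):
        d = H.shape[0]
        H_new = np.zeros((d + 1, d + 1))
        H_new[:d, :d] = H
        H_new[:d, d] = t * last   # hopping between the newest and the added site
        H_new[d, :d] = t * last
        vals, vecs = np.linalg.eigh(H_new)
        keep = vecs[:, :m]        # truncation: keep the m lowest-energy states
        H = np.diag(vals[:m])
        last = keep[d, :]         # added site's components in the new kept basis
    return np.diag(H)             # approximate low-energy spectrum

spectrum = grow_and_truncate(n_sites=20, m=8)
```

When m is at least the number of sites, no truncation occurs and the exact spectrum is recovered; the point of the sketch is that a drastically truncated state space can still track the low-energy physics.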
In the forty years that followed, simulating many-body physics has required computational
physicists to come up with smart ways of capturing physical phenomena with limited numerical
resources. This has led to lattice formulations of field theories, which can then be simulated with,
e.g., Monte-Carlo techniques; variational parametrizations that capture the essential features of
a many-body system at a given energy scale; mean-field approaches leading to self-consistency
equations; etc. All these examples of computational approaches to the many-body problem
show that physical understanding through numerical simulations has become essential in the
way physicists work, and it would be a philosophical mistake to reduce numerics to a black
box that does not lead to understanding of the physical phenomenon that is being simulated.
Instead, in the context of computational physics the two functions of renormalization theory
flow naturally out of pragmatic concerns: (i) because of the limits on computational resources,
the numerical simulation of a physical phenomenon necessarily requires an identification of the
relevant degrees of freedom, and (ii) the theory of renormalization (in the broadest sense) points
the way towards an efficient simulation of the many-body problem. Therefore, incorporating
computational physics within our philosophical analysis confirms our picture of renormalization
theory.
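For concreteness, here is a minimal sketch of the first technique mentioned above: a Metropolis Monte-Carlo update for the two-dimensional Ising model on a periodic lattice. This is a standard textbook example, not a reconstruction of any particular computation discussed in the text:

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D nearest-neighbour Ising model with
    periodic boundaries: propose L*L single-spin flips, accepting each with
    probability min(1, exp(-beta * dE)). Updates `spins` in place."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Energy cost of flipping spin (i, j): dE = 2 * s_ij * (sum of neighbours)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(200):
    metropolis_sweep(spins, beta=1.0, rng=rng)  # beta well above beta_c ≈ 0.44
magnetization = abs(spins.mean())  # typically close to 1 deep in the ordered phase
```

The sampling never touches most of the exponentially many microstates: the update rule concentrates the numerical effort on the configurations that matter at the chosen temperature, which is the pragmatic point (i) made above.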
This picture of the physical world is different from the one theoretical physicists have long
thought they were working towards. Indeed, a different idea of unification has traditionally been the
motor behind ‘fundamental’ physics, viz. the idea that we, in the end, want to understand all
physical phenomena starting from a fundamental theory of the elementary constituents of the
physical world. This was the drive behind the mechanistic program in the nineteenth century,
or the hope in the early days of quantum mechanics, and remains, to this day, the goal that
string theorists set themselves. In its place, another idea of unification has taken root in the
structure of theoretical physics. We believe that this is, from the philosophical point of view,
the most interesting way of understanding the discussion concerning reduction and emergence.
We can understand the writings of Anderson and others [see Sec. 3.3.4] as a renunciation of this
old idea of unification, and as an articulation of the new idea of what a unified physics consists
of.29
It is interesting to note that this view of physics is much in line with Cassirer’s
characterization of physical concepts as relational and non-substantialistic. In the beginning of this
section, we have already reiterated the views of Cassirer on the energy concept as preferable
from an epistemological point of view, because it provides a way of relating qualitatively different
phenomena without reducing them to some common substantial basis. Renormalization
theory shows that there is no fundamental theory that explains all physical phenomena in one
stretch, but that physics always needs effective descriptions valid on certain scales. Yet, all
these effective descriptions are connected through renormalization-group flows, which determine
a strict mathematical relation between these descriptions. Therefore, it would be a fatal mistake
to interpret the effective degrees of freedom at a given energy scale in a substantialistic way –
as the emergence of a new ontology – since they appear as elements in a renormalization flow.
In the spirit of Cassirer, the development of renormalization theory can be interpreted as yet
another step in the evolution towards less and less substantialistic conceptions of the physical world, and
therefore confirms the progression in the historical development of theoretical physics.
We have tried to make clear that an appeal to ontology in the philosophy of physics is uncalled for. The
reason why philosophers take up this notion time and again should be viewed in the light of
the realism/anti-realism debate, and the fact that the ontological commitments of a certain
physical theory – the fundamental nature of the world that it lays bare – are important from a
philosophical point of view.

After its modern construction by Wilson and others, the renormalization group has appeared in
thousands of papers devoted to the development of the understanding of physical, social, biological
and financial systems. However, renormalization is substantially more than a technical tool. It is
primarily a method for connecting the behavior at one scale to the phenomena at a very different
scale. It serves, for example, to connect the physics at the scale of an atom with the observed
macroscopic properties of materials. One might argue, and I believe that argument, that the
connection among “laws of nature” at different scales of energy, length, or aggregation is the
root subject of physics. One would then argue that Wilson has provided us with the single most
relevant tool for understanding physics. [52, p.2]

29 In this chapter we have largely ignored all the efforts that theoretical physicists are investing in string theory
as the best option for a unified theory encompassing both quantum field theory and gravity. These efforts can be
read as an articulation of the ‘old’ idea of unification, and could show that this idea has not at all disappeared
from contemporary physics. We should note, however, that the endeavors of string theory are not necessarily in
contradiction with the ‘new’ ideal that we have put forward. Indeed, in a paper by David Gross we read:

First this theory, used simply as an example of a unified theory at a very high energy scale, provides
us with a vindication of the modern philosophy of the renormalization group and the effective
Lagrangian that I discussed previously. [...] String theory could explain the emergence of quantum
field theory in the low energy limit, much as quantum mechanics explains classical mechanics, whose
equations can be understood as determining the saddlepoints of the quantum path integral in the
limit of small ℏ. [53, p.66]

How to incorporate string theory into our philosophical framework, however, we leave for further study.

As a consequence, it is on the level of ontology that the scientific
rationality of physical theories is to be maintained. In the framework of Cassirer, however,
we have found other resources for grounding the rationality of physics: it is by showing how
physical concepts succeed in capturing more and more of the sensuous world in a mathematically
structured whole, and, as such, giving physical phenomena an exact theoretical meaning. By
fitting renormalization theory within this framework, we have shown that the rationality of
contemporary physics can be maintained without recourse to a realist and/or ontological
account of physics. Moreover, the work of Friedman has shown that this approach is not rendered
futile in the light of Quinean holism and Kuhnian paradigm dynamics, and our analysis can be
read as a confirmation of this project.
However, we have seen that Friedman’s account aims at opening up other dimensions in
which physical principles acquire meaning. In his notion of the historicized a priori, Friedman
sets out to fix the rationality of scientific principles by placing them in a larger intellectual and
technological development. In this chapter, we have focused on the internal dimension of the
principles of renormalization theory, for which we found the framework of Cassirer to be ideally
suited. Yet, we believe that the theory of renormalization can provide an interesting case for
exploring these larger dimensions that Friedman is aiming at. A few interesting directions could
be:
• the technological context: The advent of computational physics is inseparable from the
introduction and rapid development of computer technology.
• the intellectual context: The development of renormalization ideas takes place in a physics
community that focuses on widening its scope and solving problems in ever more diverse
fields of physics, rather than on redefining the ‘fundamental’ concepts.
• the political context: It should be noted that this new way of doing physics falls within the
aftermath of World War II and, in particular, the Manhattan Project, which reshaped
the scope and funding of theoretical physics.
As an illustration of the intertwining of these three dimensions, we note that one of the first
applications of computers in physical research was during the Manhattan Project, where
computers (and physicists) were used to determine how much energy is released in an atomic
explosion [54]. It remains a subject for further study to what extent these three dimensions can
be worked out, and whether they could lead to a thoroughly historicized account of the
development of postwar theoretical physics.
Bibliography
[1] E. Cassirer, Substance and Function, and Einstein’s Theory of Relativity (The Open Court
Publishing Company, 1923).
[2] H. Kragh, Quantum generations: a history of physics in the twentieth century (Princeton
University Press, 2002).
[3] P. M. Harman, Energy, Force and Matter: the Conceptual Development of Nineteenth-
Century Physics (Cambridge University Press, 1982).
[4] P. M. Harman, The Natural Philosophy of James Clerk Maxwell (Cambridge University
Press, 1998).
[6] E. Mach, The Science of Mechanics. A critical and historical exposition of its principles
(Open Court Publishing Co., 1893).
[7] E. Mach, “The Guiding Principles of My Scientific Theory of Knowledge and Its Reception
by My Contemporaries,” in Physical Reality: Philosophical Essays on Twentieth-century
Physics (Harper & Row, 1910).
[8] M. Planck, “The Unity of the Physical World-Picture,” in Physical Reality: Philosophical
Essays on Twentieth-century Physics (Harper & Row, 1909).
[9] G. Holton, “Mach, Einstein, and the Search for Reality,” Daedalus 97, 636 (1968).
[11] S. Luft and F. Capeillères, “Neo-Kantianism in Germany and France,” in The History of
Continental Philosophy (Acumen Publishing, 2010) pp. 47–85.
[12] M. Friedman, Kant and the Exact Sciences (Harvard University Press, 1992).
[14] M. Friedman, A Parting of the Ways: Carnap, Cassirer and Heidegger (Open Court, 2000).
[15] P. Duhem, La théorie physique: son objet, sa structure (Libraire Philosophique J. Vrin,
1906).
[16] H. Hertz, The Principles of Mechanics Presented in a New Form (Cosimo Classics, 1899).
[18] E. Cassirer, Determinism and Indeterminism in Modern Physics (Yale University Press,
1956).
[20] T. A. Ryckman, The Reign of Relativity: Philosophy in Physics 1915-1925 (Oxford Univer-
sity Press, 2007).
[22] M. Friedman, Dynamics of Reason (Stanford Kant Lectures) (CSLI Publications, 2001).
[23] W. V. Quine, “Main Trends in Recent Philosophy: Two Dogmas of Empiricism,” The
Philosophical Review 60, 20 (1951).
[24] T. S. Kuhn, The Structure of Scientific Revolutions (University of Chicago Press, 1962).
[26] M. Friedman, “Ernst Cassirer and Thomas Kuhn: the Neo-Kantian Tradition in History
and Philosophy of Science,” Philosophical Forum 39, 239 (2008).
[28] M. Ferrari, “Between Cassirer and Kuhn. Some remarks on Friedman’s relativized a priori,”
Studies in History and Philosophy of Science Part A 43, 18 (2012).
[30] M. Friedman, “Einstein, Kant, and the a priori,” in EPSA Philosophical Issues in the
Sciences: Launch of the European Philosophy of Science Association (Springer Netherlands,
2010) pp. 65–73.
[31] M. Friedman, “Reconsidering the dynamics of reason: Response to Ferrari, Mormann, Nord-
mann, and Uebel,” Studies in History and Philosophy of Science Part A 43, 47 (2012).
[32] P. K. Feyerabend, I. Lakatos, and M. Motterlini, For and Against Method (The University
of Chicago Press, 1999).
[34] D. Pekker and C. M. Varma, “Amplitude / Higgs Modes in Condensed Matter Physics,”
Annual Review of Condensed Matter Physics 6, 269 (2015).
[35] M. Levin and X.-G. Wen, “Colloquium: Photons and electrons as emergent phenomena,”
Reviews of Modern Physics 77, 871 (2005).
[36] M. E. Fisher, “Renormalization group theory: Its basis and formulation in statistical
physics,” Reviews of Modern Physics 70, 653 (1998).
[37] K. G. Wilson, “The renormalization group: Critical phenomena and the Kondo problem,”
Reviews of Modern Physics 47, 773 (1975).
[41] J. Polchinski, “Effective Field Theory and the Fermi Surface,” arXiv:hep-th/9210046
(1992).
[42] M. Gell-Mann and F. Low, “Quantum Electrodynamics at Small Distances,” Physical Re-
view 95, 1300 (1954).
[43] T. Y. Cao and S. S. Schweber, “The conceptual foundations and the philosophical aspects
of renormalization theory,” Synthese 97, 33 (1993).
[44] S. Weinberg, “Why the Renormalization Group is a Good Thing,” in Asymptotic Realms
of Physics: Essays in Honor of Francis E. Low, edited by A. H. Guth, K. Huang, and R. L.
Jaffe (MIT Press, 1983) pp. 1–19.
[45] J. Ladyman, D. Ross, D. Spurrett, and J. Collier, Everything must go: metaphysics natu-
ralized (Oxford University Press, 2007).
[46] S. Hartmann, “Effective Field Theories, Reductionism and Scientific Explanation,” Studies
in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern
Physics 32, 267 (2001).
[47] K. Crowther, “Decoupling emergence and reduction in physics,” European Journal for Phi-
losophy of Science 5, 419 (2015).
[48] M. Morrison, “Emergent Physics and Micro-Ontology,” Philosophy of Science 79, 141
(2012).
[50] E. Castellani, “Reductionism, emergence, and effective field theories,” Studies in History
and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics
33, 251 (2002).
[51] R. B. Laughlin and D. Pines, “The theory of everything,” Proceedings of the National
Academy of Sciences of the United States of America 97, 28 (2000).
[53] D. J. Gross, “The triumph and limitations of quantum field theory,” in Conceptual Foun-
dations of Quantum Field Theory, edited by T. Y. Cao (Cambridge University Press, 1999)
pp. 56–67.
[54] R. P. Feynman, “Surely You’re Joking, Mr. Feynman!”: Adventures of a Curious Character
(Vintage, 1985).