
Ernst Cassirer and a transcendental approach

towards contemporary physics

Laurens Vanderstraeten

Dissertation submitted in fulfillment of the requirements


for the degree of Master of Arts: Philosophy

Supervisor: Maarten Van Dyck

Academic year: 2016–2017


Foreword

In the first decade of the twentieth century Ernst Cassirer wrote Substance and Function, a
book with the bold ambition of philosophically analyzing the complete conceptual status of
theoretical physics. Of course, around that time physics was a lot less diversified than it is now,
so that a philosopher such as Cassirer could still have a pretty good overview of all important
developments. Also, physics itself was much more strongly connected to the philosophical literature
of its time, such that the bridge between the two could be crossed more easily. Nowadays,
these two conditions for a fruitful interplay between philosophy and physics are no longer met.
Physics has become too diverse to survey its structure in the way Cassirer did, and physicists
are too immersed in the problems of physics to relate to the philosophical literature. As a result,
philosophy of physics has become a discipline that works in the margins of both physics and
philosophy, without the hope of appealing to an audience outside its niche.
One could argue that this is the rightful place for the philosophy of physics, because physics
has become a scientific discipline for which all rules of the game have been decided a long time
ago – around the time of Cassirer, I suppose. In fact, there have been many moments when this
seemed to be my only conclusion. Yet, writing this thesis requires the hypothesis that
something interesting can still be said, and, in the end, I do believe this to be true. I believe
that the philosophy of physics can still prove its worth in the future for philosophy and physics
alike, and I believe that the tools of Cassirer can be a great inspiration. I hope that this thesis
might show a glimpse of what is possible in that respect.
I am indebted to Maarten Van Dyck for pushing me in the neo-Kantian direction; any failure
to carry through his original suggestions is on my account. Also, I want to thank him for the
patience he has shown during the five years it took me to write this thesis. I am grateful to
Matthias Bal for reading this thesis and teaching me about renormalization.

Laurens Vanderstraeten
Gent, May 28, 2017
Contents

1 Overview 1

2 Ernst Cassirer and the philosophy of physics 3


2.1 Theoretical physics in the 19th century . . . . . . . . . . . . . . . . . . . . . . . . 3
2.2 Ernst Cassirer and the Marburg school . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Substance and function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.1 A new logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.2 The serial forms in mathematics . . . . . . . . . . . . . . . . . . . . . . . 9
2.3.3 The concepts of natural science . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.4 Physical theory and experience . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.5 The progress of scientific knowledge . . . . . . . . . . . . . . . . . . . . . 14
2.4 Cassirer on modern physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.1 The theory of relativity . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4.2 Quantum mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5 Michael Friedman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5.1 The dynamics of reason . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.5.2 The historicized a priori . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5.3 Constitutive or regulative? . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Coda. The transcendental philosophy of contemporary physics . . . . . . . . . . . . . 25

3 Renormalization in contemporary physics: a transcendental perspective 27


3.1 Theoretical physics in the 21st century . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2 The theory of renormalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2.1 The prehistory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2.2 Wilson’s intervention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3.2.3 Renormalization and the many-body problem . . . . . . . . . . . . . . . . 38
3.3 Review of philosophical literature . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.3.1 Empiricism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.3.2 Physical understanding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.3.3 The question of emergence . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3.4 More is different . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.4 Renormalization as a functional concept . . . . . . . . . . . . . . . . . . . . . . . 49
3.4.1 Energy in the work of Cassirer . . . . . . . . . . . . . . . . . . . . . . . . 50
3.4.2 The necessity of scale and effective degrees of freedom . . . . . . . . . . . 50
3.4.3 The importance of the computational approach . . . . . . . . . . . . . . . 53
3.4.4 The object of physics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.4.5 Historicizing renormalization . . . . . . . . . . . . . . . . . . . . . . . . . 56

Bibliography 59
Chapter 1

Overview

In this thesis we will give a transcendental account of the theory of renormalization, one of the
cornerstones of contemporary theoretical physics. This account is based on the work of Ernst
Cassirer, which is the subject of the first chapter. The second chapter contains the application
to the theory of renormalization. Let us give a short overview.

Ernst Cassirer and the philosophy of physics


This chapter discusses Cassirer’s philosophy of physics as he wrote it down in his book Substance
and Function. For a good understanding of the goals of Cassirer’s writing, we first take a quick
look at the situation of theoretical physics around the turn of the century. In particular, we
discuss an exchange between Ernst Mach and Max Planck, two physicists, which clearly indicates
the philosophical problems that surrounded nineteenth-century physics. Next, we trace the
philosophical roots of Cassirer back to the neo-Kantian revival of transcendental philosophy
and, in particular, the so-called Marburg school as articulated by Cohen.
After these introductory sections, we dive into Substance and Function and show in detail how
Cassirer analyzes the conceptual structure of theoretical physics. In particular, he explains how
the use of relational concepts allows physics to give determinate meaning to physical phenomena
and to integrate different phenomena in an inclusive whole. In line with the Marburg school,
Cassirer’s approach is historical in the sense that these concepts of physics are analyzed within
their historical development, and point to an ideal end-point of physics where the object of
physics is to be fully determined. After Substance and Function, this focus on the historical
progression of physics forced Cassirer to apply his philosophical framework to the theory of
relativity and quantum mechanics as well, and we briefly discuss his monographs on these
subjects.
We conclude the chapter by bringing Cassirer to the 21st century through the work of
Michael Friedman. We will explain how Friedman believes that a neo-Kantian approach to the
philosophy of physics is still possible today through his notion of the relativized a priori. We
are particularly interested in his claim that he goes beyond the work of Cassirer by historicizing
the a priori and by safeguarding the constitutive function of a priori principles, because both
notions will come back at the end of the next chapter.

Renormalization in contemporary physics: a transcendental perspective


In the second chapter we apply the ideas of the previous one to the theory of renormalization.
In order to set the stage, we start by exploring some areas of contemporary theoretical physics and,
in particular, the many-body problem. We show that this problem reappears in many areas of

physics, and therefore constitutes an interesting case for the transcendental approach that cap-
tures the conceptual structure of theoretical physics, in contrast to metaphysical or ontological
approaches.
We explain the historical development of renormalization theory from the early ideas of Landau,
through the seminal insights of Wilson, to the place it has obtained in modern condensed-
matter and high-energy physics. In the next section, we focus on the philosophical literature
concerning renormalization, and we try to indicate where our approach diverges from the exist-
ing frameworks. In particular, we aim to show that existing approaches (i) focus too much
on ontological commitments that cannot be found in the physics itself, and (ii) fail to integrate
the crucial lessons from renormalization in a comprehensive philosophical framework. This is
argued for by going back to the original writings of Anderson and others, which have put the
idea of emergence on the map.
In the last section, we try to give our own account of renormalization in the spirit of Cassirer.
To that end, we briefly reiterate Cassirer’s account of the nineteenth-century concept of energy. We
lay bare the constitutive functions of the use of effective degrees of freedom and the concept of a
scale transformation. We show that computational approaches in theoretical physics, though
seemingly unrelated, can be nicely integrated into our framework, suggesting that computational
physics deserves more attention from epistemologists than it commonly receives. Next, we
identify the advent of the ideas of renormalization and emergence as installing a new ideal of
unification, similarly to the way in which Cassirer understood the energy concept. Finally, we
take up the critique of Friedman as we have left it in the previous chapter, and show that
Cassirer’s conception is both richer and more limited than Friedman’s in capturing the case of
renormalization.
Chapter 2

Ernst Cassirer and the philosophy of physics

We start this chapter with a quote from Ernst Cassirer that appears towards the end of Substance
and Function, a quote that nicely summarizes what, for Cassirer, is at stake in the philosophy
of physics:
He who grants science the right to speak of objects and of the causal relations of
objects, has thereby already left the circle of the immanent being and gone over into
the realm “transcendence”. [1, p.295]
Indeed, theoretical physics is taken to yield knowledge about nature that should be, in some
sense, objective and independent of the particularities of the working physicist, any place or time,
and even of man himself. If we grant physics this claim to objectivity, the philosophical question
opens up as to how this is possible: How does physics arrive at an objective and determinate
picture of the world? How do we leave the circle of immanent being – the chaos of sense
impressions and subjective states of our ego – and transcend towards the scientific picture of
the external world?
In this chapter we investigate Cassirer’s philosophy of physics as it was formulated in his
book Substance and Function. Written in 1910, the book analyzes the structure of theoretical
physics as it looked around the turn of the century, so the first section of
this chapter consists of a brief sketch of nineteenth-century physics and, in particular, the
conceptual difficulties that faced physicists around that time [Sec. 2.1]. We go on by placing the
philosophical roots of Cassirer in neo-Kantianism [Sec. 2.2], and then discuss the argumentation
of Substance and Function in some detail [Sec. 2.3]. In the next section, we briefly go over the later
work of Cassirer on modern physics [Sec. 2.4]. In the following section, we make the jump to
contemporary philosophy of physics and, in particular, the work of Michael Friedman [Sec. 2.5].
The discussion of the work of Friedman will show how Cassirer can still prove to be relevant
today. We conclude the chapter by summarizing what we, based on the work of Cassirer and
Friedman, believe contemporary philosophy of physics should consist of.

2.1 Theoretical physics in the 19th century


Nineteenth-century physics is often described as a rather dull episode in the development of
physics, a century dominated by a mechanistic world-view as originally designed by Newton.
The conceptual structure is deemed monolithic, in stark contrast with the ground-breaking
conceptual revolutions that have shaken theoretical physics to its foundations at the beginning
of the twentieth century.


In this chapter we would like to start from a different view on this period. Although relativity
theory and quantum mechanics unmistakably reshuffled a lot of nineteenth-century beliefs on our
picture of reality, and some fin-de-siècle physicists were overly optimistic about the approaching
completeness of physics [2], it is nonetheless clear that “classical physics” had its own share
of conceptual innovations. Moreover, and this is the aspect on which we would like to focus, from
the epistemological point of view these innovations spurred lively debates on the status of
physical knowledge and the physicist’s picture of reality. [3]
Before 1800, exact mathematical laws were exclusively used for mechanical phenomena within
the Newtonian framework, whereas heat, electricity, and optics were described rather qualita-
tively. In the nineteenth century this situation changed: On the one hand, the work of e.g.
Laplace and Fourier on heat brought non-mechanical phenomena within the scope of mathemati-
cal analysis, while, on the other, the work of e.g. Fresnel on the wave nature of light and of Joule
on the conversion of mechanical and thermal energy showed how optical and thermal processes
are intimately connected with mechanics. Behind these developments, we can characterize the
goal of nineteenth-century physics as the search for unification of different fields of physics under
the strict validity of mathematical laws. At first, the unifying principles were mostly thought of
in mechanical terms, but later on world-views based on electromagnetism or energetics became
viable options as well. [3]
A paradigmatic example of this unification of physical phenomena guided by mathemati-
cal laws is Maxwell’s formulation of the laws of electrodynamics, which instantly brought the
fields of optics and electromagnetism into one mathematical theory. In order to consistently
interpret the wave equations, Maxwell felt forced to introduce mechanical models underlying
the electromagnetic fields – the fields are identified with the motion or rotation of a mechanical
ether; without this mechanical basis, the mathematical framework would not be intelligible. Still,
Maxwell always refused to interpret the mechanical models as an explanation for the electromag-
netic equations, and instead stressed the hypothetical status of these models, serving more as an
analogy or illustration than as a description of physical reality. Indeed, in later works Maxwell
introduced the field equations in a purely mathematical manner, without a specific mechanical
model, but retained the idea that mathematical physics should still keep dynamical concepts in
mind as these are appropriate to the representation of physical reality. [4]
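To make this concrete – the following formulas are not in Cassirer’s text, and the modern vector notation is due to Heaviside rather than to Maxwell himself, but they are the standard statement of the unification – Maxwell’s equations in vacuum,
\[
\nabla \cdot \mathbf{E} = 0, \quad \nabla \cdot \mathbf{B} = 0, \quad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \quad
\nabla \times \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
\]
imply a wave equation for both fields,
\[
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
\]
whose propagation speed $c = 1/\sqrt{\mu_0 \varepsilon_0}$ coincides with the measured speed of light. It is in this precise mathematical sense that optics became a chapter of electromagnetism, independently of any particular mechanical model of the ether.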
Maxwell’s struggles with the status of mechanical models reflect the dominant program
of nineteenth-century physics, where all physical phenomena were explained by the structure
and laws of motion of a mechanical system. Yet, the dominance of this mechanical ontology
did not imply that mechanical models were to be interpreted literally as representations of
physical reality, but mostly should serve as hypothetical constructions that elucidate the physical
meaning of the mathematical laws. As with the later work of Maxwell, it could even be enough
to formulate laws within the framework of Lagrangian dynamics, where these laws were still
subsumed under the principles of mechanical phenomena, but speculation of a mechanical nature
was avoided altogether. [3]

As an excellent illustration of these epistemological debates we consider an interesting exchange
between Ernst Mach and Max Planck taking place in the years 1910–1911. The exchange
is about the status and goal of theoretical physics in an age of radically new conceptions and
theories, and is clearly taking place against a post-Kantian background where any naive realism
is out of the question. [5]
Let us start with Mach, the scientist-philosopher who was one of the most influential figures
in the devaluation of the mechanical ontology through a historical and critical analysis of physical
theories. For Mach, science can only be understood as a product of human evolution, where
abstract scientific theories are viewed as ever more complex ways for man to cope with his natural

environment. In The Science of Mechanics, his famous book on the historical development of
mechanics [6], Mach illustrates in detail how “the task of scientific knowledge now appears as:
the adaptation of ideas to facts and the adaptation of ideas to one another” [7, p.31]. This
implies that there is a continuous transition from man’s pre-scientific coping with his natural
environment to scientific theorizing:
The attitudes and humble everyday skills of the artisan change imperceptibly into
the attitudes and devices of the physicist; and economy of action develops gradually
into the intellectual economy of the scientist, which can also play its part in the
pursuit of purely ideal goals. [7, p.33]
Mach is especially wary of metaphysical speculations in scientific theories; every element of a
scientific theory should in the end be brought back to sensations or perceptions. These sensations
are not simple uninterpreted sense-data, but are determined as the “final link in a chain reaching
from the environment to the central organ of sense” [7, p.39]. In fact, it is one of the goals of
science to show the complex relation between sensations and the sense organs, a relation that is
in no way interpreted within a “naïve-realistic view of the world” [7, p.38].
On the other side of the debate we have Planck, who characterizes, almost directly in re-
sponse to the Machian position, the development of physics as a progressive unification of all
physical phenomena “achieved by emancipating the system from its anthropomorphic elements,
in particular from specific sense impressions” [8, p.6]. He gives the example of the second law of
thermodynamics, which has, throughout its different formulations, been stripped of all human
associations1 ; only in this way can this physical law be given “a firm basis in reality” [8, p.18].
Planck defends the idea of a physical world-picture as reflecting real natural events that take
place in a way that is completely independent from us, devoid of any arbitrary creations of
the human intellect. The goal of science “is not the complete adaptation of our ideas to our
impressions, but the complete liberation of the physical world-picture from the individuality
of the creative mind” [8, p.26]. Interestingly, Planck concedes that the Machian conception is
perfectly coherent, but claims that it misses the essence of natural science as it is conceived by
any working scientist. Indeed, in contrast to the sensationalism of Mach, Planck contends that
a constant, unified world-picture is, as I have tried to show, the fixed goal which
true natural science, in all its forms, is perpetually approaching. . . . This constant
element, independent of every human (and indeed of every intellectual) individuality,
is what we call “the Real”. Or is there today a single physicist worthy of serious
consideration who doubts the reality of the energy principle? [8, p.25]
Both views on the goal and structure of physics clashed when it came to the scientific
status of the kinetic theory of gases, according to which the thermodynamic properties of a
gas were understood as probabilistic laws for the large number of atoms out of which the gas
supposedly consists.2 Indeed, whereas Mach, because of his “dislike for hypothetico-fictitious
1
In the case of thermodynamics, these anthropomorphic elements are the ability to do work or the idea of
irreversible processes, which are, in Planck’s conception, both dependent on or affiliated to human technical skills.
It is by Boltzmann’s probabilistic definition that entropy first gets a mathematical meaning independent of any
human association.
2
In the kinetic theory of gases, associated with Maxwell and Boltzmann, the validity of irreversibility in
physical processes is understood to be probabilistic and thermodynamic entropy is defined in a probabilistic way.
In particular, Boltzmann defined the entropy of a certain thermodynamic state as the logarithm of the number
of microscopic configurations that give rise to this state, entailing that a state has a higher entropy as it is
more likely to be realized by the microscopic particles (atoms). The underlying ontology of this theory is again
of a mechanical and/or dynamical nature. In fact, it appears that, initially, Planck himself was strongly opposed
to this atomistic interpretation of the laws of thermodynamics, questioning the intelligibility of the probabilistic
explanation of entropy; it was only in later years that he accepted the Boltzmann definition. [3]

physics” [7, p.35], refuses to accept the existence of imperceptible atoms, Planck acknowledges
the Boltzmann definition of entropy as the first to give it a “firm basis in reality” [8, p.18].
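To make the point of contention concrete (the formula is not quoted in the exchange itself, but it is the standard modern statement of Boltzmann’s definition), the entropy of a thermodynamic state is
\[
S = k_{\mathrm{B}} \ln W,
\]
where $W$ counts the microscopic configurations of the atoms that realize that state and $k_{\mathrm{B}}$ is Boltzmann’s constant. Nothing in this definition refers to sensations or to human capacities, which is exactly why Planck takes it to secure a “firm basis in reality”; for Mach, conversely, the imperceptible atoms behind $W$ are just the kind of hypothetical entities that physics should do without.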
Another point on which Mach and Planck disagreed was the relation between electrodynamics and
mechanics: whereas Planck suggests that electrodynamics should in the end be subsumed under
the unifying concepts of mechanics, it seems that Mach was a lot more inclined to drop the
mechanical ontology that dominated nineteenth-century physics. Ironically, around that same
time this particular issue was being resolved by the theory of relativity, and it was precisely the
work of Mach that, amongst others, proved a great inspiration for Einstein [9].

From this brief sketch we get a picture of the epistemological concerns of physicists at the
turn of the century. We can say that nineteenth-century physics is characterized by the math-
ematization and unification of a whole new range of physical phenomena, and that these higher
levels of abstraction confronted physicists with concerns about the status and methodology of
theoretical physics. These concerns played out at the level of philosophical and physical theoriz-
ing, and were deemed important both for philosophy and the content of the physical theories
themselves. Indeed, our discussion of the Mach-Planck dispute illustrates that this type of philo-
sophical debate was not the pastime of two retired physics professors who had decided to
dedicate some time to philosophical reflection, but that these discussions were followed by physi-
cists and philosophers alike, and their conclusions had a great impact on physical theories.3
The Mach-Planck dispute also shows that the debates could be extremely polarized and
uncompromising. On one side of the spectrum, we find an empiricism aiming to avoid any
‘metaphysical speculations’ and reduce every physical concept to simple sense impressions, and
on the other side, there is a realism for which physicists try to find the concepts that describe
nature, independently from any human particularities. Although arguments for both positions
are drawn from the history of physics and the experience of the working physicist, the arguments
lack a unified viewpoint and a properly philosophical perspective seems to be missing. As a result,
Mach’s position seems outdated in the light of the theoretical aspirations of physics at the turn
of the century, but Planck’s focus on anthropomorphisms is too simplistic to capture all
developments in theoretical physics. Moreover, both approaches fail to systematically account
for the driving forces behind nineteenth-century physics, viz. the mathematization and unification
of nature. It would take a systematic philosophical analysis of the situation to clear up these
epistemological issues.

2.2 Ernst Cassirer and the Marburg school


These philosophical concerns and debates taking place within natural science exercised a great
influence on the development of academic philosophy as well. At a time when philosophy of
science was dominated by speculative ‘Naturphilosophie’, some scientists returned to the
original work of Kant in search of a philosophical articulation of science. A central figure in this
development was Hermann von Helmholtz, who investigated to what extent our perception of
external reality is the result of a process whereby neural stimulations are made intelligible to the
human mind. Helmholtz framed these investigations in Kantian terms, where transcendental
philosophy was transformed into physiology. This scientific return to Kant was welcomed by
neo-Kantian philosophers, but they proposed a much more systematic reappraisal of the work
of Kant. [10, 11]
3
The strategy of this section has been to illustrate this importance by highlighting one specific philosophical
exchange between well-known physicists. A complete argument would have to include a discussion of physicists
such as Helmholtz, Hertz, Boltzmann, Poincaré, Duhem, etc., who wrote important, essentially philosophical,
works on similar issues.

The neo-Kantians inherited the fundamental Kantian insight that the object of knowledge is
not an external reality existing independently from our judgement – a transcendent realm of real
objects in the realist conception, or uninterpreted sense-data in the empiricist tradition – but
that every object is ‘constituted’ as it is conceptualized within a certain a priori logical structure.
This structure is not to be understood in a psychological or physiological way – as Helmholtz
did – but should be pictured as a set of logical ‘faculties’ or functions that allow sense
impressions to be fixed within a conceptual space. Without these faculties, sense impressions are devoid of
any objective meaning; the faculties are a priori, in the sense that they come before experience. This
approach of thinking about knowledge is called ‘transcendental’, indicating that the conditions
of possibility for objective judgements are up for philosophical analysis.
Famously, Kant explored this approach with respect to Newtonian physics. Indeed, as he
was confronted with a confusion of different metaphysical interpretations of classical mechanics,
he investigated how the Newtonian paradigm of objective knowledge was made possible by the
faculties of sensibility, understanding and reason that are involved in the act of judgement. Kant
arrived at a tripartite structure with (i) the faculties of pure intuition (essentially the intuitions
of space and time), (ii) the faculties of understanding (logical structures or forms of judgement),
and (iii) the faculty of reason (providing regulative principles or ideals). The principles of
physics (the categories) then arise when the pure forms of judgement are given spatio-temporal
content in relation to the pure forms of intuition (through the transcendental schematism of the
understanding), whereas the regulative ideals serve as the non-determinate guiding principles
that drive the progress of science.
Although the neo-Kantians reinvigorated the transcendental approach of Kant, they did not
take over this structure. In particular, they refused to accept a dualism between, on the one
hand, a discursive (conceptual) faculty of understanding, and, on the other, an intuitive (non-
conceptual) faculty of sensibility. Instead, they wanted to understand the structure that makes
objective knowledge first possible, in purely logical or conceptual terms alone.4 It is by applying
logical concepts to experience that experience is first constructed; talk about a reality (or pure
sensibility) existing before the logical faculties makes no sense.
In the Marburg school of neo-Kantianism, ‘experience’ was understood exclusively in sci-
entific terms; it was scientific experience for which they wanted to analyze the conditions of
possibility.5 Indeed, whereas Kant seemed to posit something like a persistent self (a tran-
scendental unity of apperception) as the foundation on which judgement was constructed, the
Marburgers saw the body of science, with its rules, methods and procedures, as responsible for
the constitution of experience [10]. The strategy for exposing this function of science is contained
in Cohen’s ‘transcendental method’, which takes the best physical theories of the day as its starting
point and seeks to explain the possibility of experience by identifying the a priori laws that are
present in these theories. Put differently, (scientific) experience is given as a task to philosophy,
in the sense that philosophy strives to articulate the principles of mathematical natural science
that generate objects of possible experience. This method is essentially historical, in that philoso-
phers in different periods of the history of science will be faced with a different science, and will,
consequently, arrive at different conceptions of what constitutes objective experience. [13]
4
This refusal to allow an intuitive faculty of sensibility is due, in part, to the discovery of
non-Euclidean geometries. Indeed, for Kant, the structure of space generated by the faculty of
sensibility was a priori Euclidean, and geometry derives from this faculty of intuition. The formulation
of different, yet consistent, geometries suggested that this could no longer be true and led the neo-Kantians to
believe that geometry (and mathematics in general) is due to the logical faculties of understanding alone.
5
It was their claim that this was also the focus of Kant himself: the Critique of Pure Reason was supposed
to expose the a priori structure of classical mechanics. This Kant interpretation has recently been revisited with
great success [12].

To a large extent, Cassirer takes over the methodology of Cohen, and carries it further.
Cassirer’s first major work, Das Erkenntnisproblem in der Philosophie und Wissenschaft der
neueren Zeit, first published in 1906, traces the history of science and philosophy from the
perspective of Marburg Neo-Kantianism. In particular, Cassirer discusses the ‘mathematization
of nature’, or the application of ideal mathematical structures to an empirically given nature, as
the decisive achievement of the scientific revolution. The book also contains a lengthy discussion
of Kant, where, in line with Cohen, Cassirer contests the separation of the faculties of intuition
and understanding and proposes to replace these faculties with a fundamental creative activity of
thought that progressively generates the object of natural science. Space and time then arise not
as expressions of a separate, non-discursive intuition, but as the first products of this creativity
of thought. Importantly, and in clear disagreement with Kant and the logicist tradition of Frege
and Russell, formal logic is not fundamental but appears as an abstraction from ‘transcendental
logic’, where the latter denotes the unitary process of constructing scientific knowledge. In
Substance and Function this transcendental logic will be worked out more systematically, as
well as the idea of the constitution of the object of science through a progressive determination
of the a priori principles of mathematical physics. [14]

2.3 Substance and function


A systematic philosophical study of theoretical physics by Cassirer appeared in 1910 in his work
Substance and Function.6 From the two previous sections we can deduce a number of problems
or concerns that Cassirer is aiming to address in this work. Firstly, there are the epistemo-
logical difficulties that physicists faced in the light of the mathematization and unification of
theoretical physics in the nineteenth century. As an answer to the rather negative sensational-
ism of Mach, the challenge is to give a positive account of this development that goes beyond
Planck’s account of the elimination of anthropomorphic elements. Secondly, in line with neo-
Kantianism, Cassirer sets out to provide a thoroughly systematic philosophy of science, where
he ultimately wants to show how mathematical physics makes it possible to give a fixed, determinate and
unified meaning to objective knowledge. This should stand in stark contrast to metaphysical
or speculative accounts of science, but also to abstractionism, empiricism or logicism. Thirdly,
this philosophy of science has to be historical, so that it can capture the rational progress of
science throughout its historical development. Ideally, the progress of physics would confirm the
primacy of transcendental philosophy in the spirit of Kant.

2.3.1 A new logic


We have seen that Cassirer, in line with Marburg neo-Kantianism, wants to give a purely logical
characterization of the a priori structure of objective knowledge, but it appears that traditional
logic does not provide the tools to make this happen. Therefore, Substance and Function begins
by advocating an alternative methodology of logic.
He starts with a discussion of the basic structure of traditional logic, where different things
or objects are collected in classes in virtue of some common feature, giving rise to a generic
concept which comprehends all the determinations in which things in the same class agree. It
is through the process of abstraction that these concepts can rise up from the multiplicity of
individual things. At this point, however, Cassirer expresses his doubts whether this procedure
of forming concepts through abstraction can lead to the sharp and unambiguous determinations
6
The book appeared in 1910 under the title Substanzbegriff und Funktionsbegriff: Untersuchungen über die
Grundfragen der Erkenntniskritik, and was translated to English in 1923 together with a monograph on the
theory of relativity (see Sec. 2.4).

we are used to in science. Indeed, it seems that in this rule of concept formation there is always
a tacit reference to another intellectual criterion. In the system of Aristotle, for example, the
ambiguity of the logical doctrine of abstraction is supplemented with a metaphysical theory,
by which the formation of generic concepts ends in the discovery of the real essences of things.
This form of logic has been transformed and refined7, but it remains the case that it is only through a
fixed thing-like substratum that logical concepts can obtain their application. It is precisely this
fixation on thing-concepts that Cassirer sets out to contest and that he wants to replace with a
form of logic based on relation-concepts.
The inspiration for this move comes from the nineteenth-century reshaping of mathematics.
Indeed, in mathematics it is clear that the method of abstraction cannot characterize or justify
the necessary concepts, because in the definitions of pure mathematics another realm of objects
is created that is in no direct way connected to the world of ‘things’.8 Moreover, mathematical
concepts or formulas have the feature that, as they become more general, they become more
determinate, and that the more special cases of a given mathematical formula follow from the
general case. This relation between the universal and the particular stands in contrast to the
relation of abstraction, where the more general concepts are stripped of sharp determinations.
In mathematics, the most general concepts are also the richest.
This leads Cassirer to his fundamental idea of a logic based on relation-concepts, where (i)
the individual is conceived as a determinate step under the rule of a more general concept, and
(ii) these concepts are serial, in the sense that they generate a series of objects by successive
applications of the same conceptual rule. Just as in mathematics, a logical concept “represents a
universal law, which, by virtue of the successive values which the variable can assume, contains
within itself all the particular cases for which it holds” [1, p.21]. This opens up a new paradigm
for a methodology of logic, where all concept formation is connected with a sequence generated
by functional relations between the members of this sequence – the form and meaning of the
concept are exhausted by this generating relation. In addition, every element falling under a
given concept only has meaning as an element within the series that is generated by the concept;
an object has no independent ‘existence’ – not even in a logical or mathematical sense. As
Cassirer sets out to show in the rest of the book, it is only through this logical methodology
that the determinate character of scientific concepts can be understood.
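A simple illustration (not Cassirer’s own example, but of the kind he has in mind): the single rule $F(x) = x^{2}$ generates the series $1, 4, 9, 16, \ldots$; each member has its determinate meaning only as the value of $F$ at a particular place in the series, and the more general the rule, the more particular cases it contains within itself.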

2.3.2 The serial forms in mathematics


Before embarking on the analysis of the natural sciences, Cassirer first considers the conceptual
structure of pure mathematics. In a parallel fashion he shows how the concepts of arithmetic
and geometry have developed into forms that confirm the relation-based methodology.
In both cases it is not immediately clear that a founding in logic is actually needed for the
mathematical concepts. Indeed, both numbers and geometrical shapes could be taken to be
arrived at by abstraction from experience. The response from Cassirer to this sensationalism
provides us with a clear statement of what is at stake in his analysis:

Thus what is here given is always only a temporally limited and determined reality,
not a state which can be retained in unchanging logical identity. It is the fulfilment
7
As an example of this metaphysical transformation of the same logical methodology, Cassirer discusses the
psychological epistemology of Berkeley: “While formerly it had been outer things that were compared and out of
which a common element was selected, here the same process is merely transformed to presentations as psychical
correlates of things.” [1, p.9]
8
As we will see, the same is true for theoretical physics, since “these concepts of physics also are not intended
merely to produce copies of perceptions, but to put in place of the sensuous manifold another manifold, which
agrees with certain theoretical conditions” [1, p.14]

of the demand for this latter, however, which constitutes all the meaning and value
of the pure numerical concepts. [1, p.33]

The ‘meaning’ and ‘value’ correspond to the universal applicability to every individual case,
as a condition for judgements concerning individuals. As will become clear later on, these
mathematical concepts will indeed serve as conditions of possibility for the arrangement of
individuals into an inclusive whole. In this sense, the logically determinate character as relation-
concepts is a prerequisite for the role these mathematical concepts will play in mathematical
physics.
The challenge of founding arithmetic in relation-concepts was met by the work of Dedekind,
who founded all arithmetic definitions and propositions in the concept of progression. Indeed,
starting from the concept of a series (i.e. a first member and a relation of succession) the
integer and fractional numbers as well as addition and multiplication can be developed, without
ever appealing to the relations of concrete measurable objects. The framework can even
be extended to include irrational, imaginary, and transfinite numbers. What is established
by this logical construction is “a system of ideal objects whose whole content is exhausted in
their mutual relations” [1, p.39]. Indeed, the essence of number is exhausted by the conceptual
rule that defines a structured manifold: no number is anything more than a place within this
conceptual whole.
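A minimal illustration of what such a purely relational construction looks like (a modern, Peano-style sketch rather than Dedekind’s own presentation): given only a first element $0$ and a successor operation $s$, addition and multiplication are completely fixed by the recursive rules
\[
a + 0 = a, \qquad a + s(b) = s(a + b), \qquad
a \cdot 0 = 0, \qquad a \cdot s(b) = a \cdot b + a,
\]
so that every number, and every operation on numbers, is determined solely by its place in the progression and never by reference to collections of concrete things.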
The rationale behind Cassirer’s founding of arithmetic becomes clear when it is opposed to
the attempt by Frege and Russell to reduce number theory to logic through the use of classes.
Although Cassirer admits that this reduction is a great advance over sensationalistic theories,
it cannot be satisfactory given the function the number concept has to play in the whole of
knowledge. Again, we see that mathematical concepts are supposed to play a constitutive role,
which the determination of number by the equivalence of classes cannot do. Instead, Cassirer
aims at defining numbers from “a purely categorical point of view” [1, p.54], without taking
recourse to thing-like concepts such as classes. Only in this way can the numerical concepts be
applied in the mathematical sciences.
The same motives drive Cassirer’s discussion of geometry, for which the development of a
purely functional conceptualization has been more involved. This development is presented as a
progressive evolution starting with the geometry of the ancient Greeks, through Cartesian and
differential geometry, and resulting in the formulation of projective geometry. In this form, for
the first time, “we start from an original unit from which, by a certain generating relation, the
totality of the members is evolved in fixed order” [1, p.88]. Importantly, Cassirer construes this
historical development of concepts as a process with an inner necessity. Indeed, in this process
the formulation of group theory forms the final “conclusion to a tendency of thought, which we
can trace in its purely logical aspects from the first beginnings of mathematics” [1, p.94].9
In the last few paragraphs of the third chapter, Cassirer briefly discusses how the purely
functional concepts of geometry are to be applied to empirical reality, and, more specifically, how
to decide between different geometries in their application to real space. With this discussion,
however, we have left the realm of the pure functional concepts of mathematics and embarked
on the critical analysis of mathematical physics.

2.3.3 The concepts of natural science


The exposition of the conceptual structure of mathematics shows the clearest and most perfect
example of the “logical nature of the pure functional concept” [1, p.112]. Yet, it is only through
a critical analysis of the natural sciences that we can find a definitive formulation of the problem
9
This idea of interpreting the historical development of geometry as a progression towards an ideal end-point
will be worked out further in Sec. 2.3.5.

of knowledge and, consequently, the basic meaning of the functional concept; it is only by laying
bare the transcendental meaning of the functional concept that it finds its true import. So
the real challenge that Cassirer takes up is showing that he can make sense of the historical
evolution of the natural sciences – in particular, mathematical physics – within the framework
of the functional concept. And the ambitions in this respect are quite high: Cassirer wants
to capture the whole structure of physics with only a small number of fundamental concepts;
the most important are the concepts of space and time, substance and energy.
With respect to the physics of space and time this fundamental philosophical question has
been clouded by the metaphysical discussion on the absolute or relative nature of space-time.
In the background, however, another question emerges that is of epistemological importance:

The question continually arises, whether in the foundation of mechanics we have to
assume only such concepts as are directly borrowed from the empirical bodies and
their perceptible relations, or whether we must transcend the sphere of empirical
existence in any direction in order to conceive the laws of this existence as a perfect,
closed continuity. [1, p.172]

Different attempts have been undertaken to ground the physical meaning of space-time deter-
minations in purely empirical terms. In the system of Mach [6], for example, it is the influence
of the mass distribution of the universe – the fixed stars – that generates the law of inertia for
the earthly bodies. Looking at the meaning and function of the law of inertia in the system of
mechanics, however, no reference is made to these fixed stars. Indeed, we can easily transform to
other frames of reference and lose the connection with the fixed stars, without the law of inertia
losing its intelligibility. So the concept of uniform motion is only related to the “ideal schemata
offered by geometry and arithmetic” [1, p.175] and only functions as such in physical theory.
The grounding of the law of inertia in empirical terms enters the system of physics only through
an external demand inspired by empiricism.10 This demand has inspired other approaches for
founding the existence of inertial frames in sensuous objects11, but it always appears that it is
not so much the existence of these objects but rather the assumption of their existence that
validates the use of mechanical concepts. But then it is clear that the meaning of these physical
concepts was already established beforehand in an ideal, mathematical construction. The search
for ‘things’ existing in the sensuous world for grounding e.g. the law of inertia involves a circle,
because inertia and the other principles of mechanics are already tacitly recognized beforehand
as universal mathematical principles.
This implies that the real philosophical problem with respect to space and time concerns the
form and function these principles exhibit in the conceptual structure of theoretical physics. In
line with Cassirer’s basic logical convictions, the logical character of space and time is that of
10
At this point, it proves worthwhile to further pinpoint Cassirer’s view on the philosophy of Ernst Mach. Indeed,
when describing the “scientific ideal of pure description” Cassirer writes
The goal of this philosophy of physics would be reached, if we resolved every concept, which enters
into physical theory, into a sum of perceptions, and replaced it by this sum [. . . ]. [1, p.114]
The question Cassirer asks is whether this conception of physics is indeed a description of the actual status of
physics or “confused with a general demand that is made of these theories” [1, p.115].
The answer to this question can only be won by following the course of physical investigation itself
and considering the function of the concept that is involved directly in its procedure. [1, p.115]
With this statement, Cassirer explicitly places himself in the debate for which we have taken the Mach-Planck
dispute as an example [Sec. 2.1]. We see that Cassirer takes the philosophy of Mach as a demand on how physics
should be structured, a demand that is out of touch with the way actual physics has evolved.
11
Cassirer discusses the “fundamental body” of Streintz or the “body alpha” of Neumann as attempts to define
inertial frames through the introduction of some special body or object in empirical reality, with respect to which
inertial movements can be defined.

“systems of relations in the sense that every particular construction in them denotes always an
individual position, that gains its full meaning only through its connections with the totality
of serial members” [1, p.172]. Indeed, a particular position in space only gains meaning with
reference to other positions, or more generally, a spatial manifold, and every moment of time is
determined with reference to an earlier or later contrasted with it. Space and time thus appear
as serial concepts, where individual space-time points only have meaning as elements within the
space-time manifold.
The same goes for matter and ether, the two physical ‘substances’ that are supposed to
capture all physical processes taking place inside the space-time framework. Cassirer describes
a number of historical transformations, where the concept of matter has been stripped of all
sensuous content and has evolved into a purely logical center of possible relations. Indeed, the
idea of a point mass makes it possible for matter to be a subject of physical processes described
by purely mathematical relations (i.e., differential equations). The concept of the ether equally
expresses the connections between different physical processes, and “all that physics teaches of
the “being” of the ether can, in fact, be ultimately reduced to judgements about such connections”
[1, p.163]. So again, the content of the physical concepts of matter and ether is exhausted by
considering their logical place in the universal schemata, in which the relations of empirical
reality can be first represented in a scientifically determinate fashion.12
A last concept with special significance is that of energy. We have seen that an empirical
phenomenon only becomes an object for knowledge when it is ascribed a definite place in the
mathematical manifold of serial concepts, but, for Cassirer, the real task of knowledge consists
of placing the different series within a unified system. This requires a principle which makes it
possible to connect different series according to an exact numerical scale and a constant numerical relation
governing the transition from one series to the others. This scale is provided by the concept
of energy, which, starting from the famous equivalence of motion and heat, has progressively
included more domains of physics. Energy represents a common series for all physical processes,
making possible an objective correlation according to law in which all physical contents (light,
heat, motion, etc.) stand. It signifies an intellectual point of view, “from which all these
phenomena can be measured, and thus brought into one system in spite of all sensuous diversity”
[1, p.192].
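The “famous equivalence of motion and heat” provides the simplest example of such a constant numerical relation (the numerical value is the modern one and is added here only for illustration): Joule’s experiments established that a given amount of mechanical work always corresponds to the same quantity of heat,
\[
W = J\,Q, \qquad J \approx 4.19~\mathrm{J/cal},
\]
so that mechanical and thermal processes, however different they appear to the senses, can be placed on one and the same numerical scale.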
Again, it would be a mistake to think that physics has discovered a new self-existent thing.
Instead, energy simply appears as the expression of an exact numerical relation that pertains
to physical processes, and the meaning of the energy concept is exhausted by that numerical
equivalence. In this respect, energy as a unifying concept seems to have an epistemological
advantage over the attempts at unification within the mechanistic world-view. Indeed, under the
concept of energy, two physical processes “are the “same” not because they share any objective
property, but because they can occur as members of the same causal equation, and thus can be
substituted for each other from the standpoint of pure magnitude” [1, p.199]. Energism shows
that unification is not necessarily connected to analyzing things and processes into their ultimate
intuitive parts, as a mechanical reductionism would do.13
12
In this connection, Cassirer points to the fact that “the exactitude and perfect rational intelligibility of
scientific connections are only purchased with a loss of immediate thing-like reality” and that “it must appear
as a genuine impoverishment of reality that all existential qualities of the object are gradually stripped off.” [1,
p.164] These remarks echo the view of Planck [Sec. 2.1] associated with the banning of anthropomorphic elements
in physical science, but gain a positive philosophical significance with Cassirer.
13
Cassirer explicitly states that he does not favor energism, as “[t]he conflict between the two conceptions can
ultimately only be decided by the history of physics itself; for only history can show which of the two views can
finally be most adequate to the concrete tasks and problem” [1, p.202].

2.3.4 Physical theory and experience


In contrast to the purely mathematical concepts, the concepts of natural science should be in
some way connected to experience or experiment. How does Cassirer characterize this relation?
At this point, the work of Duhem [15] is taken into account. Duhem shows how a scientific
experiment is mediated by a number of intellectual moves and that its interpretation depends on
a number of fundamental theoretical assumptions. In fact, for the practical instrument
the physicist substitutes an ideal instrument from which accidental defects are excluded. It
follows that measurement is never a purely empirical procedure, but the result of conceptual
operations. Cassirer explains that it is the function of the scientific concept that makes possible
“the transition from what is directly offered in the perception of the individual element, to the
form, which the elements gain finally in the physical statement.” [1, p.143] More specifically, the
sensuous qualities of things are brought under the serial concepts of mathematics: to measure
a physical phenomenon is to transform it into a serial, numerical determination.
In this regard, physical concepts are conceived as apperceptive concepts that are necessary
for empirical knowledge in general. Indeed, a theoretical structure provides a scheme into which
specific observations can be fitted and, by this procedure, gain a fixed form and assume clearly
defined physical properties.

Even before its individual value has been empirically established within each of the
possible comparative series, the fact is recognized, that it necessarily belongs to
some of these series, and an anticipatory schema is therewith produced for its closer
determination. [1, p.150]

Cassirer calls this “a type of transcendence” [1, p.281], where a particular given impression
becomes a mathematical symbol and designates a fixed physical property in a larger theoretical
structure. This shows that the body of physical concepts is constitutive for a scientifically
determinate conception of reality, and that there are no ‘bare facts’ against which any scientific
theory could simply be compared.
This also implies that physical concepts are not tested in isolation, but that their validity is
evaluated by their function in a theoretical complex; it is these theoretical complexes that are
judged on their correctness as they unite the totality of experience into an unbroken unity. The
agreement between the observations and the system of deductions always remains an approxima-
tion, since the mathematical structure of pure thought is always only postulated to correspond
to physical reality.14 Indeed, “[w]e inscribe the data of experience in our constructive schema,
and thus gain a picture of physical reality; but this picture always remains a plan, not a copy,
and is thus always capable of change” [1, p.186].
Crucially, however, the meaning of the mathematical concepts and principles is not dependent
on their application to physical reality. It is precisely because they are exactly determined as
mathematical principles, that physical concepts such as space and time have the fixity and
exactness that is required for them to function in physical theory. Indeed, space and time act
as “pure functions, by means of which an exact knowledge of empirical reality is possible” [1,
p.182]. They are first considered in intellectual abstraction and only then generate a “general
schema for possible changes in general” [1, p.182], and it is in this application to physical reality
that it is first decided whether real movements in fact conform to these determinations. In this
14
Cassirer refers to Poincaré, who has described these mathematical constructions as conventions when they
are introduced to survey physical facts more easily. For Cassirer, the characterization of the ideal conceptual
creations as conventions recognizes that “thought does not proceed merely receptively and imitatively in them,
but develops a characteristic and original spontaneity” [1, p.187]. In the following section [Sec. 2.3.5] we will
explain that the concepts of theoretical physics are more than conventions.

respect, it is essential for scientific knowledge of nature that the empirical realization of the
mathematical concepts can shift, yet their logical meaning and necessity remain intact.15
Finally, note that the scientific motive of unification seemingly confronts Cassirer with a
paradox: it appears that every experimental observation will always demand a growing
number of natural laws in order to capture the observation in its peculiarity. Indeed, it will never
be possible to isolate a physical process such that only one law will capture the phenomenon
completely and exactly. It is at this point that the full power of the functional concept shows
itself. In contrast to the generic concept of traditional logic, where every abstraction corresponds
to a stripping of determination, the functional concept becomes more determinate in its content
and application to the particular as it becomes more universal. So in order for the physical
concepts to capture the particularity of experience in growing exactness, a progression towards
more universal concepts is needed. Indeed, every universal relation necessarily contains a growing
number of more particular relations and “has a tendency to connect itself with other relations
to become more and more useful in the mastery of the individual.” [1, p.255]. Put differently,
[t]he advance of experiment goes hand in hand with the advancing universality of
the fundamental law, by which we explain and construct empirical reality. [1, p.258]

2.3.5 The progress of scientific knowledge


Cassirer presents this progression towards more universal concepts in an explicitly historical way.
Yet, if this historical process is to have an internal rationality behind it, it appears essential that
relations that are progressively established be compatible with each other. According to Cassirer,
“this compatibility is assured in principle by the fact, that the determination of the particular
case takes place on the basis of the determination of the general case, and tacitly assumes the
validity of the latter.” [1, p.255] But, and this seems absolutely crucial, this process of growing
universality can never be completed, and the fundamental laws of science, which at a certain
point in time seem to represent the final form of all empirical processes, will at a later stage
only serve as the material for further consideration.
All scientific thought is dominated by the demand for unchanging elements, while
on the other hand, the empirically given constantly renders this demand fruitless.
We grasp permanent being only to lose it again. From this standpoint, what we call
science appears not as approximation to any “abiding and permanent” reality, but
only as continually renewed illusion, as a phantasmagoria, in which each new picture
displaces all the earlier ones, only itself to disappear and be annihilated by another.
[1, p.266]
The sceptic might argue that this is the road to an epistemological relativity: every scientific
picture of the world, which makes an objective conception of this world first possible, will always
be replaced by another picture that makes the previous conception worthless and arbitrary. This
argument is refuted by Cassirer as he points out that this succession of scientific conceptions
must “proceed according to a definite principle of methodic advance” [1, p.267]. Whenever an
observation is found not to agree with the body of scientific determinations, the scientist will
first look for variations of the less universal laws. When this seems impossible, more fundamen-
tal laws will have to be modified. But, “this transition never means that the fundamental form
absolutely disappears, and another absolutely new form arises in its place.” Indeed, this new
form “must contain the answer to questions, proposed with the older form.” By this feature, a
logical connection is established and a “common forum of judgement” is opened, “to which both
are subjected.” [1, p.268] In every transformation, a certain set of principles is always preserved,
15 Cassirer refers to the work of Hertz [16] as the clearest expression of this relation of theory and experience.

because the reason for this transformation is precisely the preservation of these principles; with-
out a fixed logical standard, it would make no sense to transform our scientific body in response
to some observations, because there would be no scientific observation. So, in order to make
sense of the progression of scientific principles, we need to assume that there is “an ultimate
constant standard of measurement of supreme principles of experience in general.” [1, p.268] It
is the task of the critical theory of experience to search for this ‘universal invariant theory of
experience’:

The goal of critical analysis would be reached if we succeeded in conceptually defining


those moments, which persist in the advance from theory to theory because they are
the conditions of any theory. At no given stage of knowledge can this goal be perfectly
achieved; nevertheless it remains as a demand, and prescribes a fixed direction to
the continuous unfolding and evolution of the systems of experience. [1, p.269]

These ultimate logical invariants or “invariants of experience” are called a priori by Cassirer,
because they are contained as necessary premises in every judgement on empirical facts.
We have seen that the objects of physics arise as we transform experience according to the demands
of theoretical concepts; through the different conceptualizations science gains different objecti-
fications of physical reality. But these represent different stages in the fulfillment of the same
fundamental demand of objectification. It is through the realization of this demand (i.e. the
identification of the invariants of experience) that the real meaning of the concept of the object
is established. So it is this fundamental demand or search to fix the object of physics in its
full determination that, despite the impossibility of attaining it in principle, drives the progress of
science.

2.4 Cassirer on modern physics


We have seen that Cassirer was very ambitious in Substance and Function as he wanted to lay
bare the conceptual structure of all of theoretical physics in a unified way. The developments
in physics, however, were soon to demand even more of his neo-Kantian approach. It speaks in
favor of Cassirer’s framework – tailored originally to classical nineteenth-century physics – that it
can be applied to the revolutionary developments of twentieth-century physics without changing
much of its basic convictions. Indeed, the revolutionary principles of relativity and quantum
mechanics are seen as confirmations of the central features of his epistemology, viz. (i) the
functional, non-substantialistic (relational) meaning of physical concepts, (ii) the constitutive
function of these concepts for a scientific experience of nature, and (iii) the continuity in the
progress of physics by the regulative ideals of unity under functional laws.16

2.4.1 The theory of relativity


In the beginning of the twentieth century the world of physics was revolutionized by Einstein’s
special and general theories of relativity. Not only did these theories change the physicist’s
conception of space and time drastically, but they also had great implications on the philosophy
of science. In particular, the authority of Kant on the foundations of space-time physics was
severely shaken by the general theory of relativity – it appeared that Euclidean geometry did
not provide a correct description of empirical space, a fact that Kant took to be true a priori.
16 We will discuss two of Cassirer’s works that are explicitly focussed on modern physics. This exhausts all of
Cassirer’s later writing on physics, because other later works follow the ideas of Substance and Function rather
closely where the philosophy of physics is concerned. [17]

Because Cassirer had always placed himself in the philosophical tradition of Kant, it proved
vital that he could incorporate Einstein’s theories within his project, a challenge that he met in
a monograph on relativity theory.17 In this work, Cassirer argues that the theories of Einstein
do not provide a refutation of the philosophy of Kant, but, instead, are a new confirmation
that only transcendental philosophy can provide the correct explanation of the structure and
meaning of theoretical physics.
Cassirer begins by describing the advent of relativity theory as a critical revaluation of the
system of physics. Indeed, after the experiments that had made a unified conception of physical
phenomena impossible using the laws of nineteenth-century physics – the Michelson-Morley
experiment is the most famous – there was a need for a critical examination and correction of
e.g. the classical conceptions of space and time, the concept of matter in mechanics, and the
ether concept in electrodynamics. According to Cassirer, this intellectual process was continuous
in the sense that the same demand for constancy and unity in nature had been at work in
developing the old physics and overthrowing it. The result was, as always, a further liberation
from the “presuppositions of the naively sensuous and “substantialistic” view of the world” [1,
p.386] in favor of a unified system of functional space-time determinations, where space and
time themselves have been further stripped from their thing-like meaning.
What came in the place of the classical notions of space and time are the pure forms of
coexistence and succession, which only have a meaning as serial concepts appearing in the
description of physical phenomena. Indeed, in relativity theory physical processes are described
by world-lines in the four-dimensional space-time manifold, a manifold that presupposes the
serial forms of space and time. Although the space and time coordinates are mixed up for
different observers, as dictated by the equations of relativity, the two functions of coexistence
and succession remain at work in every space-time description of a physical process. Indeed, the
theory of relativity proposes the epistemological insight that neither pure space nor pure time
have an existence in themselves, but only in their unified application under the mathematical
laws of relativity to physical phenomena do space and time retain empirical meaning.
Then, of course, the problem remains of making sense of the non-Euclidean structure of
the space-time manifold. First of all, Cassirer opposes any empirical grounding of geometry,
because the meaning of geometrical concepts is exhausted by their function in the ideal system
of geometry, and they possess no immediate correlate in the world of existence. Moreover, the
geometrical axioms are never to be regarded as concerning things or relations of things in reality;
instead, they should be evaluated to what extent they, in their totality, constitute the physical
object and make physical knowledge possible. But, Cassirer argues, this is exactly what relativity
theory has realized: Geometry has lost all ontological meaning, and the only
question that remains is which geometrical system should be used for the interpretation of the
phenomena of nature and their dependencies according to law.18 Indeed, the theory of relativity
provides a mathematical framework for space-time determinations, making possible the exact
formulation of certain physical relations such as the laws of gravitation or electromagnetism,
without attaching any existence to the space-time manifold itself.
17 Cassirer published the monograph as Zur Einsteinschen Relativitätstheorie in 1920, which was translated to
English together with Substance and Function in 1923.
18 At this point, Cassirer evokes the philosophy of Kant, and makes clear that pure intuition has no role to play
in the realm of knowledge of the empirical and the physical. Indeed, it is only the rules of understanding that
give the existence of phenomena their synthetic unity. In this regard, it is only a small step beyond Kant to also
take into account non-Euclidean axioms.

2.4.2 Quantum mechanics


The advent of quantum mechanics revolutionized the physical way of thinking in a possibly
even more drastic way, and, in contrast to relativity theory, it continues to confront physicists
and philosophers with interpretational problems. In his Determinism and Indeterminism in
Modern Physics [18]19 Cassirer takes up the challenge of incorporating quantum theory within
his philosophical framework.
Cassirer first makes the important point that the idea that quantum theory implies a drastic
departure from the classical idea of causality is essentially misguided. He traces back the concept
of causality as it functioned in classical theory, characterizing it as a regulative principle of a
general conformity to law. The Laplacean ideal of causal determination, often thought of as the
canonical formulation of classical causality, is identified as a metaphysical fiction by Cassirer. If
understood properly, the principle of causality functions both in classical physics and quantum
theory. [19]
The real departure from classical physics is situated on the level of the concept of a physical
state. Classically, the “state of a thing in a given moment is completely determined in every way
and with respect to all possible predicates”, a conception that was thought of as “a definition
of what we are to understand by the “reality” of a thing” [18, p.189]. In particular, it is
the spatiotemporal determination of an empirical object that is considered as the true criterion
of its existence. This classical notion is drastically transformed in quantum theory by the
superposition principle and the uncertainty relations, where e.g. an electron no longer has
a determinate location in space or a definite amount of energy; the classical (substantialistic)
notion of “thing” loses meaning. In this respect, the formalism of quantum mechanics is a new
step in the progressive functionalization of physical concepts: the quantum formalism “was not
created for the description of things and states but refers to the representation of the behaviour
of physical systems” [18, p.192].
Just like relativity theory, quantum theory gives a physical realization of an essentially
epistemological insight: By abandoning the notion of absolute determination of the classical
thing, quantum theory formulates mathematically strict conditions for physical knowledge of
nature. Of course, this is not a skeptical conclusion, in the sense of having only limited access
to an external reality, but leads to the realization that quantum theory “prescribes limits to
the being which we can ascribe to natural things, and not the reverse” [18, p.194]. Thus,
quantum mechanics makes the explicitly transcendental conclusion that there are conditions of
accessibility necessarily bounding the object of experience. [19]

2.5 Michael Friedman


Around the time that Cassirer wrote his monograph on the theory of relativity, a few more
radical philosophers such as Schlick and Reichenbach wrote down their own philosophical con-
clusions based on Einstein’s revolutionary theory. Although these works were quite close to
neo-Kantianism originally, the authors quickly diverged from the Kantian project and formed,
under the influence of Russell and Wittgenstein, a new school of thought that would go under
the name of logical positivism. Mainly because of the dominance of logical positivism, the neo-
Kantian philosophy of science of Cassirer has not received a lot of attention, although there have
been a few exceptions [20].
Recently, however, the philosophy of Cassirer was revived in the work of Michael Friedman.
Although the philosophical concerns and challenges have shifted considerably, Friedman’s work
19 The book was first published in German as Determinismus und Indeterminismus in der modernen Physik in
1936, but was translated into English only in 1956.

shows that the project of Cassirer can still provide an important inspiration for contemporary
authors. As Friedman puts it himself,

. . . I construct a narrative depicting both the development of the modern exact sci-
ences from Newton to Einstein and the parallel development of modern scientific
philosophy from Kant through the early twentieth century. I use this narrative to
support a neo-Kantian philosophical conception of the nature of the sciences in ques-
tion – which, in particular, aims to give an account of the distinctive intersubjective
rationality these sciences can justly claim. [21, p.11]

2.5.1 The dynamics of reason


In this section, we will discuss how Friedman has incorporated Cassirer’s ideas in his Dynamics
of Reason [22], and assess to what extent Cassirer’s work can still provide viable insights in
contemporary philosophy of physics.

Scientific philosophy after Kuhn and Quine


The first challenge to a contemporary neo-Kantian approach to the philosophy of science is
the epistemological holism as formulated by Quine in his Two Dogmas of Empiricism [23]. In
this view, our knowledge should be described as a vast web of interconnected beliefs, which
impinges on experience only along the edge, or “like a field of force whose boundary conditions
are experience” [23, p.39]. The body of scientific knowledge stands before the ‘tribunal of
experience’ as a whole. In this web, some beliefs are closer to the periphery of experience than
others, in the sense that they are more likely to be chosen for revision in the light of recalcitrant
experience. Simple statements about physical objects are of this kind, because they can be
easily revised in the light of experience without shaking up the whole system of beliefs. But,
importantly, this does not imply that they are of another kind than the more entrenched beliefs
about e.g. arithmetic; it is only through pragmatic inclinations20 that these beliefs are held to
be more fundamental. On an epistemological level, the difference between analytic and synthetic
statements is no longer meaningful: all beliefs are equally empirical.
The second challenge consists of the conceptual relativism that has gained momentum in
the aftermath of Kuhn’s The Structure of Scientific Revolutions [24]. Based on the historical
studies of Kuhn21 – showing the absence of rational rules governing the revolutionary transitions
between scientific paradigms or conceptual frameworks – it is argued that the only viable notion
of scientific rationality is a local or contextual one, where non-rational factors such as persuasion
and commitment within some particular social community determine the acceptance of a certain
body of scientific knowledge.
Friedman’s work can be read – and is presented as such in his Dynamics of Reason – as
a direct response to these two challenges of contemporary philosophy of science. He proposes
an approach that, on the one hand, doesn’t flatten out the conceptual structure of scientific
knowledge, and, on the other, retains the inherent rationality of scientific progress. Let us see
how.
20 The motivation of Quine for this view is much in the spirit of Mach:
As an empiricist I continue to think of the conceptual scheme of science as a tool, ultimately, for
predicting future experience in the light of past experience. [23, p.41]

21 With respect to contemporary physics, the study of Pickering [25] on the historical development of particle
physics is particularly relevant.

The relativized a priori


In a direct response to Quinean holism, Friedman makes the case that the different parts of
physical theories cannot be viewed as symmetrical elements of a larger conjunction, which can
then equally face the tribunal of experience. Instead, he works out the notion of relativized
a priori principles, which cannot be tested directly in experience, but rather define the space
of empirical possibilities for a certain theory. This notion is illustrated for the case of three
space-time theories, i.e. Newtonian mechanics, special relativity and general relativity. Within
these theories, three asymmetrically functioning parts can be distinguished. The first is the
part of the mathematical theories, representations or structures, describing the spatio-temporal
framework in question (Euclidean space, Minkowski space-time and Riemannian space-time
manifolds, resp.). The physical or empirical part (universal gravitation, Maxwell’s equations
and Einstein’s equations, resp.) uses these structures in formulating precise physical laws for
empirical phenomena. But, in order for these mathematical laws to acquire a precise empirical
meaning a third part (the Newtonian laws of motion, the speed-of-light principle, the equivalence
principle, resp.) is needed to set up a general correspondence or coordination between the
mathematical and empirical part. This part consists of relativized, yet a priori principles of
coordination.
Only under the form of this tripartite structure can the body of physical knowledge be related
to sensory experience. Indeed, with the constitutive a priori framework in place, the physical
laws, expressed in the language of pure mathematics, obtain an empirical content and can be
confronted with empirical observation. The physicist can compare calculated values of various
physical magnitudes – think of the advance of the perihelion of Mercury, in the case of general
relativity – with the values obtained through measurement, and make a quantitative assessment
of the correspondence between theory and experiment. But, crucially, such a correspondence is
only made possible by the constitutive framework that fixes the empirical content of the theory;
the framework defines a set of empirical possibilities, and sensory experience decides which of
these is actually realized.
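To make this kind of quantitative comparison concrete, consider a standard illustration (ours, not Friedman's): for the perihelion of Mercury, general relativity predicts an anomalous advance per orbital revolution of

\[
\Delta\varphi \simeq \frac{6\pi G M_{\odot}}{c^{2}\, a\, (1 - e^{2})},
\]

with $M_{\odot}$ the solar mass, $a$ the semi-major axis and $e$ the eccentricity of Mercury's orbit. Inserting the measured orbital parameters yields roughly 43 arcseconds per century, a number that can be compared with the precession left unexplained by Newtonian perturbation calculations. The point of the constitutive framework is that this comparison only makes sense once the framework is in place: outside of general relativity the calculated number simply does not exist as an empirical possibility.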
This implies that the constitutive a priori principles cannot be tested directly by experience,
like the properly empirical laws. Take the Michelson-Morley experiment, which, in retrospect,
provided a very good reason to accept the speed-of-light principle of special relativity. Yet, the
Lorentz-Fitzgerald theory of electrodynamics equally provided an explanation of the experiment
within the classical space-time structure. Einstein, however, elevated the result of the experiment
to a new constitutive principle, whereas Lorentz and Fitzgerald retained absolute simultaneity
and presented the experimental discovery rather as an empirical fact. At this point, the “decision”
or “convention” between the two theories is essentially non-empirical. This decision is, of course,
often motivated by empirical facts, but, from an epistemological point of view, is not decided
before the Quinean tribunal of experience.

The progress of scientific knowledge


This brings us to the Kuhnian challenge of making sense of scientific revolutions, without giving
up the rationality of scientific progress. For, even though the previous section showed how
scientific knowledge is structured through constitutive principles, it remains unclear how the
transition from one framework to the other can take place in a rational way.
Part of the answer to this question is provided by Friedman’s observation that successive
frameworks in physics are not independent, but rather provide ever greater expansions of the
space of empirical possibilities, such that a new framework contains the earlier one as a special
and/or approximate case. Typically, principles that count as constitutive at one stage shift to
the status of merely empirical laws at a later stage. From the point of view of the Einsteinian

constitutive framework, for example, the general relativistic field equations and the classical
Newtonian law of gravitation appear both as alternative empirical possibilities defined within
a common empirical space, whereas in the old framework of classical mechanics the Einstein
equation could not even be formulated. From the Einsteinian perspective both gravitation laws
can be coherently formulated and their empirical meaning specified, such that, under a decisive
experiment, one can be favored over the other. In this retrospective point of view, the transition
from Newton to Einstein seems to be perfectly reasonable.
In addition to this retrospective account, Friedman develops a prospective account of inter-
paradigm rationality that explains how there can still be a rational route from the point of view
of the earlier framework leading to the later framework. This implies that new concepts and
principles of a new constitutive framework develop out of, and as a rational continuation of,
the old concepts and principles, and that, despite the incommensurability between frameworks,
practitioners of a new framework can still appeal to the persons working within the old framework
using conceptual resources that are available to both sides.
This ambitious reply to conceptual relativism is argued for by Friedman’s detailed exposition
of how Einstein, in writing down his special and general theory of relativity, made connection
with (i) a long intellectual tradition of space-time theories going back to the seventeenth century,
(ii) the debate on the foundations of geometry, (iii) the philosophical debate on the status and
goal of scientific knowledge, (iv) empirical evidence on the detectability of relative motion in elec-
trodynamics and the equivalence of gravitational and inertial mass, etc. Indeed, by embedding
this specific revolution of mathematical physics within a larger intellectual (philosophical, scien-
tific, technological, experimental, etc.) tradition, it can be shown how relativity “could have ever
become a real possibility and thus a genuinely live alternative” [22, p.115], and, consequently,
how the rational nature of the transition is laid bare.

2.5.2 The historicized a priori


At this point, it should already be clear that the neo-Kantian philosophy of Cassirer has been
of great influence for Friedman’s work. Let us therefore consider the correspondence between
their two conceptions of scientific philosophy in more detail.
We have seen that Friedman characterizes the development of mathematical physics as a
progression of ever greater intellectual possibilities, where new constitutive a priori frameworks
follow out of older ones and enlarge the space of empirical meaning, taking place against the back-
ground of a common tradition of cultural change. Friedman finds himself now in “a position
to add, from a philosophical point of view, that we can thus view the evolution of succeeding
paradigms or frameworks as a convergent series, as it were, in which we successively refine our
constitutive principles in the direction of ever greater generality and adequacy” [22, p.63]. This
idea of convergence is understood as a regulative ideal in the Kantian sense, as an ideal state of
completion never to be attained but always to be pursued. Our present constitutive principles
are thus taken to represent one stage in a convergent process, as an approximation to more gen-
eral principles that will only be articulated at a later stage. At this point, Cassirer is brought
into the story as an early defender of the same idea, as

our present scientific community, which has achieved temporary consensus based on
the communicative rationality erected on its present constitutive principles, as an
approximation to a final, ideal community of inquiry (to use an obviously Peircean
figure) that has achieved a universal, trans-historical communicative rationality on
the basis of the fully general and adequate constitutive principles reached in the ideal
limit of scientific progress. [22, p.64]

This regulative ideal is thoroughly Kantian because “we must view our present scientific com-
munity as an approximation to such an ideal community, I suggest, for only so can the required
inter-paradigm notion of communicative rationality be sustained” [22, p.64]. Yet, whereas Cas-
sirer saw in relativity theory the culmination of Kantian philosophy as revised by the Marburg
school, Friedman believes that “we need a more far-reaching revision of Kantian transcendental
philosophy than Cassirer has suggested” [26, p.250]. Indeed, Friedman suggests that it is neces-
sary to “relativize the Kantian a priori to a given scientific theory in a given historical context
and, as a consequence, to historicize the notion of transcendental philosophy itself” [26, p.251].
Let us disentangle these two notions and discuss them separately. Firstly, Friedman seems
to claim that Cassirer did not endorse a fully relativized a priori, but this claim appears, from
our detailed discussion of Cassirer’s works, misguided. Indeed, we have identified one of the
aims in Substance and Function as justifying the progression of scientific theories by a historical
reconstruction of the principles that make scientific experience possible at any stage. This
interpretation is confirmed by Cassirer in a letter to Moritz Schlick, where he states that the
a priori “can assume the most various developments in the progress of knowledge”, and that
the idea of unity in nature “can be specified in particular principles and presuppositions [...]
depending on the progress of scientific experience” [27, p.50-51]. This supports our claim that
Cassirer indeed elaborates a relativization of a priori principles connected with the progress
of scientific knowledge and that Friedman underrates the extent to which Cassirer revised the
epistemology of Kant in order to arrive at a relativized a priori. [28]
The notion of historicizing transcendental philosophy seems to be more to the point. Fried-
man explains that, in Cassirer’s conception22 ,

[w]e have no way of anticipating a priori the specific constitutive principles of future
theories, and so all we can do, it appears, is wait for the historical process to show
us what emerges a posteriori as a matter of fact. How, then, can we develop a philo-
sophical understanding of the evolution of modern science that is at once genuinely
historical and properly transcendental? [29, p.696]

We have seen that Friedman proposes to embed the development of natural science within
a larger intellectual (philosophical, scientific, technological, etc.) tradition, showing how the
replacement of constitutive principles can be made intelligible. In particular, he shows that
transcendental philosophy exhibits its own historical transformation and provides the basis for
the constitutive principles of the natural sciences. The prime example of Friedman again serves
as an excellent illustration: it is by tracing the development of transcendental philosophy through
Kant, Helmholtz, Poincaré, and, ultimately, Einstein that the relativity revolution can be made
intelligible. [29, 30]
Of course, Friedman’s idea of a historicized transcendental philosophy is a thoroughly post-
Kuhnian philosophy of science. Indeed, the rationale behind this move is precisely to be able
to give not only a retrospective account of scientific progress, but also to provide a prospective
one. By adding other intellectual dimensions, Friedman tells us a historical narrative that fixes
the rationality of science across scientific revolutions. Cassirer, as he was not faced with the
Kuhnian paradigm dynamics, did not share this concern, so that his account lacks Friedman’s
‘broader intellectual perspective’.23 Indeed, Cassirer rests content with a retrospective account
of the conceptual development of mathematical physics, showing the internal rationality of its
historical progression.
22 In this quote, Friedman discusses the approaches of both Cassirer and Husserl.
23 One could interpret Cassirer’s philosophy of symbolic forms as a widening of his perspective in this sense. [28]

2.5.3 Constitutive or regulative?


The core distinction, however, between Cassirer and Friedman lies in the constitutive function
of a priori principles. Friedman explains how Kant consciously made a difference between con-
stitutive principles (necessary conditions for the comprehensibility of the phenomenal world of
sensible experience) and regulative principles (providing the ideal ends or goals for seeking the
complete science of nature). The former are due to the faculties of understanding and sensibility,
and can be fully determined a priori, whereas the latter are due to the use of reason and judge-
ment and have to remain indeterminate at any given stage of science [14]. Friedman believes
that “the Marburg tendency to minimize or downplay the role of the Kantian faculty of pure
intuition or pure sensibility on behalf of the faculty of pure understanding represents a profound
interpretive mistake” [26, p.247], and, Cassirer did no longer acknowledge the constitutive di-
mension of the a priori, as it was precisely the function of the constitutive principles of Kant
to bridge between the faculties of sensibility and understanding. Therefore, still according to
Friedman, the a priori only retains its regulative function with Cassirer: the a priori obtains its
full specification as the “ultimate logical invariants” that stand at the ideal completion of the
process of science.24 Friedman concludes that

[a]s a matter of fact, Cassirer (and the Marburg school more generally) does not
defend a relativized conception of a priori principles. Rather, what is absolutely a
priori are simply those principles that remain throughout the ideal limiting process.
In this sense, [. . . ], Cassirer’s conception of the a priori is purely regulative, with no
remaining constitutive elements. [22]

Since this will prove an important issue in the next chapter, let us discuss this conclusion
in a bit more detail. Clearly Friedman wants to reconsider Cassirer’s rejection of the faculty of
sensibility, and would like to “preserve some kind of independence for a faculty of sensibility
conceived along broadly Kantian lines” [31, p.48]. This was attempted in the Dynamics of Rea-
son by identifying, within the structure of physical theories, the level of coordinating principles
whose role was to relate mathematical concepts to empirical phenomena. In a later publication,
Friedman elaborates his idea of coordinating principles as thoroughly constitutive in the Kantian
sense. His attempt at “a more structured reinterpretation of the Kantian faculty
of sensibility [...] involves replacing the Kantian faculty of sensibility with what we now call
physical frames of references” [31, p.48]. The idea is that

[l]aboratory frames attached to the surface of the earth, for example, can be described,
at least to a very high degree of approximation, by Euclidean geometry and Newtonian
physics. So they are faithful, in this respect, to the independent a priori structure of
our faculty of sensibility according to Kant. In particular, any abstract theoretical
structure we might then introduce (such as that of Minkowski space-time) must still
be related to this prior perceptually given structure in order to have the empirical
meaning that it does. [31, p.48]
24 Friedman discusses this modification of the Kantian a priori in the context of the discussion between Cassirer
and Schlick on the philosophic interpretation of relativity theory. Although the logical empiricism of Schlick
carries a lot of elements of a neo-Kantian approach, it is on this issue that the two philosophers clearly diverge.
For logical empiricism (Carnap) a purely logical analysis of science provides a fully determinate constitution of
science, whereas for Cassirer it is “transcendental logic” that characterizes philosophy of science, where only a
generic, at all times indeterminate, conception of the constitution of science is appropriate. A similar divergence
between Cassirer and Reichenbach is discussed by Ryckman [20], where Reichenbach in the end also takes recourse
to a logical analysis of physics with an unproblematic account of the empirical objects, whereas for Cassirer the
constitution of the physical object is precisely the problem for critical philosophy.

So Friedman characterizes the relation between abstract mathematical theories and observa-
tional phenomena in a way that is surprisingly close to Kant’s original contentions: there are
mathematical structures both at the observational level, which is a prior perceptually given
structure, and the theoretical level, which is designed in abstract physical theories, and the two
levels are “coordinated with one another by a complex developmental interaction in which each
informs the other” [31, p.49]. The first level is structured by Euclidean geometry and Newtonian
physics, and can be coordinated to Minkowski space-time by limiting procedures. In particular,
“the familiar laboratory frames of classical physics play an essential role in relating the mathe-
matical structure of Minkowski space-time to our actual perceptual experience of nature” [31,
p.48]. According to Friedman, we find that empirical phenomena can only be generated in
relativity theory by virtue of coordinating the theory to a perceptual space with a Euclidean structure.25
In this respect, Friedman is correct in differentiating Cassirer’s conception of the constitutive
function of a priori principles from his own. For Cassirer, there is only the space of physical
theories as shaped by mathematical concepts on one side, and the unstructured chaos of sense
perceptions on the other. Only the former space carries definite scientific meaning, and therefore
contains empirical phenomena in the scientific sense, whereas nothing definite can be said about
the latter. This is, indeed, the consequence of the Marburg school denying a faculty of pure
sensibility and only keeping the faculty of understanding in the game.
Yet, we believe that it is misleading to strip Cassirer’s a priori principles of their constitutive
dimension. As Cassirer repeatedly claims, it is the proper task of critical philosophy to unravel
the different constitutive principles that make it possible for science to represent experience
as a determinate whole. Because he does not acknowledge a separate faculty of sensibility,
Cassirer can no longer characterize the distinction between constitutive and regulative principles
in the way Kant did, but this does not imply that Cassirer transformed the constitutive a
priori into a purely regulative one. [28] Similarly, the fact that Cassirer does not identify a
distinct level of coordinating principles does not mean that physical principles have lost their
constitutive function. Instead, it just means that constitutivity does not necessarily follow the
specific meaning that Friedman attaches to it. In fact, we believe that Friedman’s insistence on
a separate faculty of pure sensibility leads to a problematic characterization of the function of
physical principles: theoretical physics does not need any bridges to a realm of pure sensibility
in order to give physical meaning to empirical phenomena. In the case of relativity theory, there
is only the four-dimensional curved space-time manifold. In particular, the physical meaning of
the equivalence principle or reference frames has nothing to do with bridging the gap between
abstract theories and empirical phenomena – their meaning is exhausted by the function of these
principles in the theory of general relativity, and it is the theory as a whole that gives physical
meaning to the movements of planets in the curved spacetime. Similarly, we don’t need to
couple back to classical conceptions of the world in order to give empirical meaning to quantum
mechanics.
In conclusion: Just as an internal history of the development of theoretical physics is enough
for laying bare its internal structure and evolution, we believe that the meaning of specifically
physical concepts is exhausted by their function in physical theories. In particular, the prob-
25 In the final chapter of Dynamics of Reason we find a preliminary attempt at applying the same idea of
coordination to the case of quantum mechanics. Here Friedman assigns a central place to the correspondence
principle, according to which a quantum system behaves according to the laws of classical mechanics in the limit
of large quantum numbers; it is supposed to explain why only classical behavior is observed in macroscopic phys-
ical systems. As Friedman explains, “it performed this essential coordinating function by relating experimental
phenomena to limited applications of classical concepts within the new evolving theory of atomic structure” [22,
p.122]. Just as in the case of inertial frames, we find here that coordinating principles are supposed to bridge
between a classically structured perceptual space (pure sensibility in Kantian terminology) and a theoretical space
structured by abstract mathematics. In the case of quantum mechanics this feels like a very suspicious move, not
least because a principle is recovered that has not been found in physical theories for the last fifty years.

lematic way of relating abstract physical theories to some space of pure sensibility (put forward
by Friedman) can be avoided by realizing that there is no such relation, at least as theoretical
physics is concerned. Therefore, Cassirer’s conception of the constitutive a priori is more tailored
to account for the conceptual structure of theoretical physics as far as the physical meaning and
function of its concepts is concerned. In particular, it does not require us to take recourse to
an a priori space with a different (read: classical) structure from the one we find in the most
advanced theories of physics.
Moreover, we find that Cassirer’s conception allows for a more fruitful interplay between the
constitutive and the regulative dimensions of the a priori principles. Indeed, with Cassirer the
same concept can both have a constitutive function in a contemporary physical theory, and point
towards the ‘invariants of experience’ that would function in an ideal stage of physical theorizing.
Because the concepts and principles of physics develop in a historical progress, this interplay is
necessarily dynamic. In a domain of physics that is constantly evolving, the identification of a
strict level of coordinating principles in the sense of Friedman would therefore underestimate
these dynamics.26 We will show in the following chapter that a philosophical account of the
theory of renormalization is better carried out without introducing the level of coordinating
principles.
Still, if we take Friedman’s concerns seriously, we must ask whether Cassirer’s account misses
something in relating physical theory to empirical observations. In a more recent publication
[31] we find that Friedman starts focusing on the praxis of scientific observation and empirical
testing. In the work of Cassirer we do not find an explicit account of how a physical theory is
actually tested in practice. Friedman seems to suggest that this is, in the end, how Cassirer’s
framework falls short:

And so Kant’s reliance on the a priori structure of the faculty of sensibility necessarily
common to all human beings is replaced by the demand of the experimental (and
therefore technological) community for universally communicable (replicable) results.
[...] The extremely abstract mathematical structure of general relativity, for example,
thereby acquires a connection (via a reconfigured version of a schema connecting
the understanding to sensibility) with our actual perceptual experience of the world
around us—now essentially including technologically enhanced perceptual experience
in engineered experimental contexts. Abstract (purely intellectual) mathematical
reasoning acquires a necessary and very productive relationship with the concrete
technological practice of experimenters and engineers. [31, p.50]

Thus Friedman opens up a dimension of constitutive principles that appeals to the experimental
technologies in which these principles are tested. This move is complementary to Friedman’s
notion of historicizing the a priori, which we discussed in the previous section, and has the
potential of adding extra dimensions to the meaning and rationality of physical principles that
go beyond the purely internal dimension. If understood in this way, we believe that Friedman’s
ideas can prove very fruitful in order to connect physical concepts to a larger intellectual and
technological context. In the following chapter we will briefly hint at how this might go in the
case of the theory of renormalization.
26 The reason why it does seem to work for the theory of relativity can then be explained by the fact that it
concerns a hundred-year-old theory that, without the input from other theories such as quantum field theory, is
rather inert in describing real physical phenomena.

Coda. The transcendental philosophy of theoretical physics


Before making the transition to contemporary physics, let us first recapitulate what we be-
lieve are the important aspects of a transcendental approach towards theoretical physics in the
tradition of Cassirer, and as partly reinvigorated by Friedman.

• It traces the historical motives behind the development of principles in theoretical physics.
The ultimate goal of this historical approach is finding the ‘ultimate invariants of expe-
rience’ in the sense of Cassirer. The work of Friedman has shown that this goal is not
rendered futile by the historiography of science in the aftermath of the work of Kuhn.

• It looks for the constitutive function of physical concepts and principles. With Cassirer we
should investigate how the concepts of mathematical physics make first possible a scientific,
determinate and fixed experience of physical reality. The work of Friedman has shown that
this goal is not rendered futile in the light of Quinean holism.

• Both of the above functions of the concepts of physics – the regulative and the constitutive
– exhibit an interplay in the work of Cassirer; this interplay should be laid bare.

• The concepts of theoretical physics are functional (serial, structural) concepts. The history
of physics should be understood as a progressive elimination of thing-like concepts. In that
respect, interpreting the concepts of physics in an ontological way is always misguided, and
only a thoroughly epistemological interpretation of theoretical physics by transcendental
logic is warranted.

• We want to understand the functional concept as an attempt at mathematization and
unification of the scientific experience; the goal of theoretical physics is to bring all of
scientific experience into a whole structured by mathematical (serial) relations.
Chapter 3

Renormalization in contemporary
physics: a transcendental perspective

Contemporary philosophy of physics is dominated by the realist/anti-realist debate. Indeed, in
the aftermath of Kuhn’s work on the history of science, the challenge is to show that there
is cognitive progress in science and that a realist conception of science can be maintained. In
order to make this happen in the context of theoretical physics, a philosophical articulation of
the fundamental ontology of physics is needed. In addition, philosophy of physics typically
takes place within the broader context of physicalism: a thorough analysis of fundamental
physics is the first step to a naturalized metaphysics. In both cases, a reductive connotation is
implicit: the challenge is to lay bare the exact metaphysical nature of the fundamental ontology
of physics. Within that perspective, it is straightforward to turn the philosopher’s attention to
elementary particle physics, with relativistic quantum field theory as its underlying theoretical
structure, and the structure of space and time, for which the theory of relativity provides the
best source. Indeed, it is the standard model of particle physics and Einstein’s theory of gravity
that provide us with the best theories for describing the basic ontology of nature.
Yet, this primacy of high-energy physics1 is not found in theoretical physics itself. Certainly,
high-energy physics retains a special place in the sense that it describes the physics at the
highest-energy scale that experiments can probe today. But this doesn’t mean necessarily that
it also contains the most interesting or the most fundamental physics, or that it stands at the
bottom of a strict hierarchy. In a sense that will become clear throughout this chapter, the field
of condensed-matter physics2 is equally interesting and fundamental, and requires no conceptual
input from high-energy physics.
If this picture is correct, the philosophical quest of understanding the epistemological struc-
ture of theoretical physics should not focus on high-energy physics exclusively. Indeed, the
principles and concepts that are used in e.g. condensed-matter physics should form the object
of philosophical analysis just as much as the concepts of particle physics. Once this is realized, a
first observation should already make clear that the field of condensed-matter physics contains
just as many concepts and principles that prove worthy of philosophical analysis. In fact, many
1 We will use the terms ‘high-energy physics’ or ‘elementary-particle physics’ to denote the discipline of theoret-
ical physics involved with the relativistic quantum field theories describing the elementary particles. All research
that is specifically concerned with the general theory of relativity will not be taken into account in this thesis,
whereas we will discuss string theory only in passing.
2 Condensed-matter physics is the branch of physics that seeks to understand the behavior of condensed phases
of matter. The most familiar of these condensed phases are solids and liquids, but the field also includes superconductors,
(anti)-ferromagnetic ordered phases, Bose-Einstein condensates, and many more. As it involves such a diversity
of different subjects, methods, and concepts, it is hard to define condensed-matter physics without enumerating
all different subfields, but its import will become clear throughout this chapter.


of the theoretical tools of high-energy physics have originated in condensed-matter theory, and
vice versa, so that it makes no sense to treat either of the two domains in isolation.
A possible reason for this misleading focus of philosophers could be the small interest from
professional physicists in philosophical questions. In the previous chapter we could clearly see
a line from physicists such as Mach and Planck to the philosophical concerns of Cassirer, but
no such lines can be discerned between contemporary physics and philosophy. Physicists are
typically not familiar with philosophical literature3 , whereas philosophical accounts seem out of
touch with real-life physics.
In this chapter we will discuss the theory of renormalization, which has been a central
theme in theoretical physics for the last fifty years across condensed-matter, high-energy and
statistical physics. Because it appears in so many subdisciplines, a philosophical account of
renormalization is less likely to focus on ontological and more on epistemological issues, making
it the ideal subject for the transcendental approach that we have laid out in the previous chapter.
We will start by situating this subject within the landscape of contemporary theoretical physics
[Sec. 3.1], and then give a historical introduction to the theory of renormalization [Sec. 3.2]. The
philosophical part starts with a review of the literature on renormalization [Sec. 3.3], so that we
can set the stage for our transcendental account following Cassirer [Sec. 3.4].

3.1 Theoretical physics in the 21st century


Whereas theoretical physics in the nineteenth century could be characterized by a few themes
or motives, and the structure could be, more or less, captured in a small number of fundamental
concepts, the discipline of theoretical physics seems to have become too diverse to discern an
overarching conceptual structure. This diversity makes it impossible to analyze the whole of
theoretical physics within one conceptual framework in the way that Cassirer did for nineteenth-
century physics.
Yet we can still identify a few general problems or motives which pop up in different forms
throughout different sub-disciplines of physics. In this chapter we will focus on the many-
body problem, the solution of which might be one of the most persistent challenges running
through the history of twentieth-century physics, one that still confronts contemporary physicists with
insurmountable difficulties. The structure of a many-body problem always boils down to the
following elements: (i) we are faced with a physical system with a large number of constituents,
(ii) for which we know how the individual constituents behave and interact with each other,
but (iii) we are challenged to predict or understand the collective behavior of the system. One
particular example of this threefold structure is the problem of computing the chemical properties
of large molecules starting from the laws of quantum mechanics. Indeed, we know that the
chemical properties of a molecule can be deduced from the distribution of the electrons around
the nuclei, which, in turn, can be computed from the static Schrödinger equation
 
∑ ℏ 2 ∑ e 2 ∑ Z k e2 ( ) ( )
− ∇2i + −  ψ {⃗xi } = E ψ {⃗xi } . (3.1)
2me |⃗xi − ⃗xj | |⃗xi − ⃗xk |
i j k

Solving this eigenvalue equation for the electron distribution ψ is, however, practically impossible
even for a simple one-atom system, because the complexity of this mathematical problem scales
exponentially with the number of electrons. This implies that, even though we know exactly
how its constituents behave and interact, we cannot derive a molecule’s properties. Yet, despite
3 Feyerabend would put it as follows: “The younger generation of physicists, the Feynmans, the Schwingers,
etc., may be very bright; they may be more intelligent than their predecessors, than Bohr, Einstein, Schrödinger,
Boltzmann, Mach and so on. But they are uncivilised savages, they lack in philosophical depth [...].” [32, p.386]

the impossibility of finding an exact solution, a variety of techniques has been proposed to find
approximate solutions that capture the correct physical properties of the molecule. Typically,
these approximate methods assume that the electrons can be treated as independent particles –
Hartree-Fock theory and density-functional theory are the most famous examples – and neglect
the collective phenomena (correlations) of the particles. Corrections to these mean-field meth-
ods4 are typically treated in perturbation theory, a procedure that assumes that the correlations
do not drastically change the properties of the system.
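To give a schematic idea of what it means to treat the electrons as independent particles, consider the simplest mean-field ansatz (a minimal sketch, added here for concreteness): the fully correlated wavefunction is replaced by a product of single-particle orbitals,

\[
\psi\big(\{\vec{x}_i\}\big) \;\approx\; \varphi_{1}(\vec{x}_{1})\, \varphi_{2}(\vec{x}_{2}) \cdots \varphi_{N}(\vec{x}_{N}),
\]

so that each electron effectively moves in an averaged potential generated by all the others; Hartree-Fock theory antisymmetrizes this product into a Slater determinant in order to respect the fermionic statistics of the electrons. The contrast with the full problem is instructive: the product ansatz is specified by $N$ single-particle functions, whereas the exact wavefunction depends on all $3N$ coordinates at once – discretizing each coordinate on a modest grid of $M$ points already requires $M^{3N}$ coefficients – which is the exponential scaling mentioned above.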
This example illustrates the general strategy for solving a generic many-body problem.
Whereas taking into account the correlated behavior of all constituents in the many-body sys-
tem leads to a far too complex mathematical problem, approximate methods are devised such
that the constituents are somehow treated as if they behave independently. Although these
methods have led to great successes in the past, there have always been many-body systems
for which they do not predict the physical behavior correctly. These problems typically involve
collective behavior of the system’s constituents that goes beyond the mean-field description. For
example, in quantum chemistry there are many molecules for which density-functional theory
gives wrong predictions. Other examples can be found in condensed-matter physics, where ma-
terials have been discovered that do not exhibit the typical conductor/semi-conductor/insulator
behavior that is expected from band theory5 . These new phases of matter are characterized by
collective behavior of the material’s constituents, which cannot be captured by treating them as
independent particles because the correlations between the particles are too strong. It can even
happen that the macroscopic properties of these materials are completely disconnected from the
microscopic constituents, in the sense that we cannot directly understand the collective behavior
as somehow arising from the constituents. In that case, the macroscopic properties are said to
emerge from their microscopic basis.
When this happens, the many-body problem takes on a new dimension: it involves the ques-
tion of how the emergent properties of a many-body system can originate from its microscopic
constituents. Again, the structure of the many-body problem entails that we know the micro-
scopic laws exactly, so we have, in principle, a complete description of the system. Yet, this
description does not lead to an understanding of the collective behavior of the system, because
qualitatively new macroscopic phenomena are seen to emerge in the system. We will discuss
the issue of emergence in more detail later in this chapter, but, in order to make things more
concrete, we will first discuss three examples in a bit more detail.

Example 1: The superconductor


The paradigmatic example of a condensed-matter system is the superconductor, which will
serve as an instructive example throughout this chapter. It is the phenomenon where a material
exhibits a number of exotic properties such as zero resistivity or the Meissner effect when it is
cooled below a certain critical temperature. Superconductivity is a typical example of a many-
body phenomenon since it involves a large number of electrons moving inside a lattice6 ; we know
how these electrons interact with each other, yet they display a strong collective effect
4
In mean-field approximations the individual electrons are thought to move in an average potential that is
generated by all the other particles. This is a drastic approximation because all correlations between the individual
particles are neglected.
5
Band theory is a mean-field theory for the electrons that move through the crystal of a given material.
6
A superconducting material is typically a solid material with a certain lattice structure, where the atoms are
arranged in a periodic structure. Most of the electrons are closely bound to the atoms, but the electrons in the
atom’s outer shells are delocalized and can move through the lattice. The lattice itself is not completely inert,
because it can vibrate. The interaction between these lattice vibrations (phonons) and the electrons is crucial
for explaining superconductivity, because these phonons mediate the attractive interactions between the electrons
and allow for the formation of Cooper pairs (see further).

Figure 3.1: Cartoon of superconductivity. On the left we see an uncorrelated state of electrons (repre-
sented as black figures) while on the right we have a system where all electrons are bound into Cooper
pairs, which are ’condensed’ into a correlated quantum state (the collective nature of the state is repre-
sented by the correlated dance). The correlated motion of the electrons is said to be an emergent effect
in this many-electron system. Figure taken from [33].

once the material is cooled.


In the so-called conventional superconductors, the microscopic mechanism responsible for
these properties is more or less explained by the concept of Cooper pairing and the BCS theory.
The idea is that the electrons in a superconductor are bound in pairs due to an attraction me-
diated by lattice vibrations, and these Cooper pairs undergo Bose-Einstein condensation. This
Bose-Einstein condensation is a quantum-mechanical effect, and implies that all Cooper pairs
‘condense’ in a strongly-correlated quantum state. As a result, the dynamics of the supercon-
ductor cannot be understood on the level of the individual electrons, but rather on
the level of the macroscopic quantum state in which the electrons settle. For that reason, the
phenomenon of superconductivity is said to be an emergent property of a many-electron system
(see Fig. 3.1). Note that a second group of superconductors (the non-conventional ones) cannot
be explained in this way, and the physical mechanism giving rise to these high-temperature
superconductors is still not understood. The reason why this problem is so difficult is that
it involves even stronger quantum correlations between the electrons and, consequently, more
exotic collective (quantum) effects.

Example 2: Quantum chromodynamics


Another notoriously difficult problem of strongly-correlated many-body physics defying a sat-
isfactory analytical or numerical treatment is provided by quantum chromodynamics. This is
the fundamental theory describing the interactions between quarks, and is a crucial part of the
standard model of elementary particle physics. It is a quantum field theory that is specified by
the QCD Lagrangian
\[
\mathcal{L}_{\mathrm{QCD}} = \bar{\psi}_i \left( i (\gamma^\mu D_\mu)_{ij} - m\,\delta_{ij} \right) \psi_j - \frac{1}{4} G^a_{\mu\nu} G_a^{\mu\nu},
\]
which, in principle, determines all the physical properties of quarks. When it comes to high-
energy scattering experiments, perturbation theory gives us the tools to predict the experimental
results starting from the fundamental Lagrangian. When computing quark-gluon properties at
lower energies, however, perturbation theory fails because strong quantum correlations become
important. In this regime, physicists have struggled for decades to deduce the strong-
correlation effects from the fundamental theory, but, in many cases, have failed to do so.

Let us look at one particular issue in a bit more detail, viz. the determination of the hadron
masses, such as those of the proton and neutron. This problem again has the threefold structure of a
many-body problem: (i) we are faced with a system of a large number of microscopic constituents
(the quarks described by a continuous field), (ii) we know the fundamental interactions for the
field (they follow directly from the QCD Lagrangian), and (iii) we want to understand how
the proton forms as the lowest energy state of two up quarks and one down quark. It appears
that the quantum correlations and fluctuations inside the proton are extremely strong, making
perturbative calculations worthless. In fact, it is very hard to see how the proton can be thought
to emerge from the microscopic quarks and gluons as described by the standard model.

Example 3: The Ising model


As a third example we introduce the Ising model, which serves as the paradigm of a system
of statistical mechanics exhibiting a non-trivial phase transition. The Ising model can be best
thought of as a toy model consisting of a lattice of spins si = ±1 arranged on a two-dimensional
square lattice, but other lattices are also possible (and can give rise to different properties).
The model is further specified by a Hamiltonian, which dictates how the energy depends on the
configuration of the different spins. In the ferromagnetic Ising model, the only terms in the
Hamiltonian are nearest-neighbor interactions,

\[
H(\{s_i\}) = -\sum_{\langle ij \rangle} s_i s_j .
\]

Here the sum runs over all nearest-neighbor pairs ⟨ij⟩ and gives a negative energy contribution
if the spins are aligned. The laws of statistical mechanics teach us that all properties of the
system are determined by the partition function
\[
Z = \sum_{\{s_i\}} e^{-\beta H(\{s_i\})}, \qquad \beta = \frac{1}{k_B T},
\]

i.e. the sum over all spin configurations weighted by the Boltzmann factors, where T is the
temperature of the system.
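To make the computational structure of this problem tangible, the following minimal sketch evaluates the partition function of a tiny Ising lattice by brute-force enumeration; the parameter values (a 3 × 3 lattice, coupling J = 1, open boundaries, an arbitrary inverse temperature) are illustrative choices and not tied to anything in the text above.

```python
import itertools
import numpy as np

def ising_partition_function(L=3, beta=0.5):
    """Brute-force partition function of an L x L ferromagnetic Ising model
    with nearest-neighbor coupling J = 1 and open boundary conditions.
    The sum runs over all 2**(L*L) spin configurations, which is exactly
    why this direct approach only works for very small lattices."""
    N = L * L
    Z = 0.0
    for config in itertools.product([-1, +1], repeat=N):
        s = np.array(config).reshape(L, L)
        # nearest-neighbor energy: horizontal and vertical bonds
        E = -np.sum(s[:, :-1] * s[:, 1:]) - np.sum(s[:-1, :] * s[1:, :])
        Z += np.exp(-beta * E)
    return Z

print(ising_partition_function())  # feasible for L = 3, hopeless for L = 100
```

Already for a 10 × 10 lattice the sum contains 2^100 terms, which is the exponential wall that makes a direct attack on the many-body problem impossible.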
We are for the third time confronted with a many-body problem: (i) we have a large number
of microscopic constituents (spins on a lattice), (ii) we have a full description of these constituents
and their interactions (everything follows from the partition function), and (iii) we are interested
in the collective behavior of the system. Here we are interested in the system’s magnetization,
which is given by the average direction of the spins. In the limit of infinite temperature, all spins
will be uncorrelated and the average spin will be zero; in the zero-temperature limit all spins will
settle down in the same direction as energetically favored and the average spin will be one. It
appears now that in between these two limits there is a sharp phase transition (see Fig. 3.2), and,
more interestingly, that the spin correlations become stronger around the transition point and
cannot be understood from mean-field theory or perturbative corrections. Again, the strong
correlations at the phase transition originate from collective effects that seem to emerge from
the microscopic constituents, and cannot be understood starting from the microscopics directly.

The relation between condensed-matter and high-energy physics


These three examples are taken from condensed-matter physics and high-energy physics alike,
and illustrate the conceptual similarities between these two fields of physics. Yet originally


Figure 3.2: The two-dimensional Ising model. (Left) A number of spins are arranged on a two-dimensional
square lattice, where every spin interacts with its four nearest neighbors. (Right) As the temperature
is increased, we go from a system where all spins point in the same direction (average magnetization is
one) to a system where the spins are uncorrelated (average magnetization is zero). In between, there is
a sharp phase transition.

the fields of particle physics and solid-state physics7 were involved with very different physical
phenomena, and not much overlap was found between the two. This has changed dramatically
since the fifties, when methods of quantum field theory – originally the theory for describing
elementary particles – were imported to describe condensed-matter systems as well. In the
opposite direction the concepts of symmetry breaking, originally developed by condensed-matter
physicists, were applied to high-energy physics. Later ideas from the theory of phase transitions
on the one hand, and divergences in quantum field theory on the other, were combined in the
development of the renormalization group. Nowadays, many of the phenomena of high-energy
physics can also be found in condensed-matter systems, and the same concepts can be applied
to understand these physical phenomena. Famous examples include the observation of the Higgs
mode in superconductors [34], and the emergence of gauge bosons and fermions in spin systems
[35]. The defining difference between the two fields from a conceptual point of view is that in
the case of condensed-matter physics the microscopic constituents and laws are known and the
observed phenomena, exotic as they may be, are supposed to arise from a more ‘fundamental’
level, whereas an underlying level is not known in the case of high-energy physics.
As will become clear in the rest of this chapter, we believe that the divisions between different
subfields of physics, and a strong focus on one of these fields in particular, are unwarranted
from a philosophical point of view, at least if we want to map out the conceptual structure of
contemporary theoretical physics. The focus on high-energy physics from the side of philosophy
is rooted in the motive of grounding a metaphysical or ontological picture of the world, and is
unwarranted from the epistemological point of view. Instead, it proves to be more rewarding to
investigate concepts or motives that run across the different subdisciplines of physics. The many-
body problem is one of these motives, and the theory of renormalization provides an extremely
elegant way of getting a grip on it. At the end of this chapter, we will see that the theory of
renormalization gives us a way of understanding one of the key conceptual features that any
physical theory shares, irrespective of the ontology the theory supposedly describes.
7
Solid-state physics is, more or less, the old term for condensed-matter physics. Nowadays, the former also
refers to the ‘old way’ of doing condensed-matter physics, where crystal structures and band theory were among
the main subjects.

3.2 The theory of renormalization


Renormalization is not really a theory in the traditional sense of the word – the sense in which
relativity or quantum mechanics are theories – but rather denotes a set of interrelated ideas,
concepts, methods, and procedures. Therefore, the best option for describing renormalization is
by a historical introduction, explaining the different aspects as they appeared in their historical
development.
The history of renormalization theory can be told in many ways, but two historical narratives
seem to present themselves. The first starts in the early days of quantum field theory, where the
problem of infinities in perturbative calculations was progressively solved with renormalization
procedures. The second narrative takes off in the context of statistical physics, and, in particular,
critical phenomena. We will follow the latter, because this is the road commonly taken in
condensed-matter physics, and we will return to quantum field theory at the end of this
section.8

3.2.1 The prehistory


In our narrative9 , the advent of renormalization theory has its roots in the theory of clas-
sical phase transitions. Traditionally, a phase transition is understood in terms of statistical
mechanics, which directly characterizes the link between the microscopic world of atoms, spins,
and molecules, and the macroscopic world that can be directly observed; a phase transition is
understood as a macroscopic system suddenly settling into a drastically different equilibrium
configuration under the influence of an external parameter such as temperature or pressure.
It was the genius of Landau and his introduction of the order parameter that led to another
perspective on phase transitions. He realized that, because a phase transition is a collective
effect of a many-body system, it cannot be characterized on the microscopic or macroscopic
levels alone, but is rather taking place across different scales inside the material10 . The idea
of introducing an order parameter can be stated as providing a mesoscopic object that lives on
different length scales at the same time, capturing the system’s fluctuations on these different
scales as it approaches a phase transition. Indeed, the fluctuations and dynamics of the order
parameter describe the collective long-wavelength fluctuations, which eventually diverge exactly
at a critical point.11
Landau’s theory provided the first systematic understanding of phase transitions, but the
theory was soon proven wrong by the exact solution of the two-dimensional Ising model as
provided by Onsager. The problem was that Onsager’s computation of the partition function
and thermodynamic properties of the Ising model did not agree with Landau’s general theory of
phase transitions. Indeed, as we have seen, the Ising model exhibits a sharp critical point, where
the expectation value of the order parameter vanishes according to a non-trivial power law as
a function of temperature (see Fig. 3.2). Soon it was realized that this violation of Landau’s
theory is a generic property of phase transitions, and a whole field of physics emerged trying to
8
Our main source for writing this section was Ref. [36].
9
This is a rather technical section which aims at reviewing all important concepts leading up to Wilson’s
formulation of the renormalization group. The reader who is not familiar with these concepts is not required to
fully understand this section.
10
In this section, the concept of scale refers to the typical length of the fluctuations that we are talking about.
We will have a lot more to say about scales later in this section.
11
For later reference, we note here that Landau’s characterization of a continuous phase transition in terms of
an order parameter as a dynamic fluctuating object provides the first example of an effective field theory. Indeed,
the fluctuations of the order parameter capture the dynamics of a macroscopic system as it goes through a phase
transition, without directly coupling back to the microscopic constituents: It captures the effective degrees of
freedom that determine the system’s behavior across different scales.

understand the critical properties of many-body systems undergoing a phase transition.


The assumption of the ‘classical’ theory of Landau was that all functions entering the the-
ory should have a smooth (analytic) character, even at the phase transition. Physically, this
amounts to a neglect of the interactions between fluctuations across different length scales, so
that the small-scale fluctuations can be safely incorporated into effective parameters that be-
have smoothly on the larger scales. In the non-classical cases of criticality – the generic case
for interacting systems – this assumption proves to be wrong, and it is only with the advent of
renormalization theory that the effect of fluctuations across scales has been incorporated in a
correct way.
On the road to the renormalization group, the concepts of critical exponents, scaling behavior,
and universality have proven to be crucial. Both experimentally and theoretically it was realized
that, although the power laws that are observed around critical points are not those of the
classical theory of Landau, the same exponents systematically pop up in entirely different many-
body systems. This points to the fact that phase transitions can be classified in a small number
of universality classes, characterized by a set of critical exponents. Moreover, the different
exponents within a certain class were shown to obey certain scaling relations.
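As an illustration (these are the standard textbook relations, quoted here for concreteness rather than taken from a specific example above): if $\alpha$, $\beta$, $\gamma$ and $\delta$ denote the exponents governing the specific heat, the order parameter, the susceptibility and the critical isotherm respectively, they obey relations such as
\[
\alpha + 2\beta + \gamma = 2, \qquad \gamma = \beta(\delta - 1),
\]
so that only a small number of exponents are actually independent within a given universality class.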
In the sixties the existence of these phenomena related to phase transitions and criticality
were firmly established experimentally and theoretically, but a real understanding that could
explain all the different aspects was still lacking. Moreover, one would like to firmly relate
the phenomenology of criticality to statistical mechanics, which is supposed to describe the
microscopic constituents of the systems. This is a requirement that can be traced back to the
work of Landau, where the order parameter was introduced without providing a direct link to
the underlying microscopic spins or atoms out of which a system is made. The crucial question
is: how do the order parameter and its fluctuations arise from the microscopics, and how is
it possible that an effective field theory can accurately describe the behavior of a many-body
system as it approaches a phase transition?

3.2.2 Wilson’s intervention


This is the point where the work of Wilson comes in.12 We will explain Wilson’s original
formulation of the renormalization group in some detail, based on his seminal review paper of
1975 [37].
The first important point is that the renormalization group is about scales in many-body
systems, entailing that dynamics, fluctuations, and correlations can be said to occur within a
certain window of energy, length, or momentum.13 The basic assumption of Wilson is that these
scales are locally coupled:
The basic physical idea underlying the renormalization group approach is that the
many length or energy scales are locally coupled. For example, the behavior of
fluctuations in a magnet with wavelengths from 1000 to 2000 Å are assumed to be
primarily affected by fluctuations with nearby wavelengths, e.g., 500-1000 Å or 2000-
4000 Å. Fluctuations with wavelengths much less or much greater than 1000 Å are
less important. [37, p.775]
Remember that the rationale behind Landau’s theory was that we don’t need the microscopics
of a system in order to understand the behavior at a much larger scale, but that this fails for
12
The work of other people such as Widom, Kadanoff and Fisher are also important in this story, but we focus
here on Wilson for pedagogical reasons.
13
Here the notion of scale is left rather vague, because it can refer to energy, length or other dimensional
quantities. These different definitions of scale are typically related, in the sense that probing certain length scales
requires probes with a certain characteristic energy. For example, resolving features on a certain length scale with
light probes requires the photons to have a certain energy or wavelength.

systems at criticality. In the end Wilson came up with a procedure of treating the connections
between scales, explaining why Landau’s theory can make sense in some cases and why it fails
in others.

Renormalization group flows


Instead of postulating the existence of an effective theory at a certain scale, Wilson went back
to ask the question of how such an effective description arises from the microscopics as they are
described by statistical mechanics. Suppose, to that end, that we have a system of a large number
of spins {s} arranged on a regular lattice (the Ising model is an example of such a system). As
each spin can take on two states, the partition function of this system consists of a sum over all
possible configurations,
\[
Z = \sum_{\{s\} = \pm 1} e^{-\beta H(\{s\})},
\]

where H({s}) is the energy of a particular configuration. One possibility of systematically


computing this huge sum – the number of configurations scales exponentially in the number of
spins – is by first dividing the spins into two groups. The first group {s< } we will sum over so
that they drop out of the problem, while the second group {s> } we will keep in our analysis.
After we have taken the partial sum, we will be left with an effective energy function Heff ({s> })
involving only the retained spins {s> }, but, if we do not want to change the physics
of the system, we should require that the new partition function

\[
\tilde{Z} = \sum_{\{s^>\}} e^{-\beta H_{\mathrm{eff}}(\{s^>\})}
\]

remains the same. This implies that the following relation should hold:

\[
e^{-\beta H_{\mathrm{eff}}(\{s^>\})} = \sum_{\{s^<\}} e^{-\beta H(\{s^<\} \cup \{s^>\})},
\]

which gives us, in principle, a prescription for transforming the original model to a new effective
model, which acts only on a part of the spins of the lattice. Since only half of the spins are
retained in the effective model, the spacings between the spins has doubled and the model can
be said to be defined on a larger scale. If one now rescales the spacings between the effective
spins to the original spacings, one ends up with the same basic constituents as in the original
model, but now with a different energy functional Heff ({s> }). Thus, we have designed a map
between models, which can be written down in full generality as
\[
H^{(l)}\big(\{s\}\big) = R\Big( H^{(l-1)}\big(\{s\}\big) \Big).
\]

In Fig. 3.3 we have summarized the renormalization procedure. This procedure can now be re-
peated many times, which leads to a renormalization-group flow14 of effective models. Supposing
that we start from a spin system with only nearest-neighbor coupling, the renormalization-group
flow will introduce couplings that range beyond nearest neighbors.
Even worse, not only two-spin but also four-spin, six-spin, etc. interactions will be generated.
Consequently, the renormalization-group flow can be pictured as a flow through the space of
14
These ideas of renormalization are commonly grouped under the term ‘renormalization group’. In order to
avoid confusion, we will avoid the use of this term, and rather refer to ‘renormalization theory’ or ‘theory of
renormalization’, keeping in mind that this does not correspond to a physical theory in the strict sense, but
rather to a set of interrelated ideas, concepts and procedures. We will use the term ‘renormalization-group flow’,
however, since this term has a stricter meaning in the physics literature.

Figure 3.3: Schematic representation of a scale transformation. We start out with a set of spins {s}
interacting through a certain Hamiltonian H({s}). We select half of the spins {s< } (yellow) and average
over them, while the other half {s> } are kept as degrees of freedom. The yellow spins move away
from the picture, and, by imposing that the partition function remain the same, we arrive at a new Hamiltonian
Heff ({s> }) for half of the spins. Finally, we rescale the spacings between these effective spins (and rotate
the lattice) such that we arrive at the same lattice structure, but with a different Hamiltonian. This
relation defines the map R(·) between Hamiltonians.

all possible spin couplings, where the flow dictates how these effective couplings change as the
scale is tuned. The transformation that generates a renormalization-group flow – the above map
R(·) gives an explicit realization of such a transformation – is called a scale transformation as
it maps a given model to another model living on a larger scale.
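A minimal sketch may help to fix the idea of such a scale transformation; the following majority-rule block transformation (in the spirit of Kadanoff blocking) acts on spin configurations rather than directly on the Hamiltonian as in the derivation above, and the block size and lattice size are illustrative choices.

```python
import numpy as np

def block_spin(spins, b=3):
    """Coarse-grain an Ising spin configuration by a linear factor b: every
    b x b block of spins is replaced by a single effective spin given by the
    sign of the block magnetization (majority rule; for odd b the block sum
    is never zero). After this step the lattice spacing has effectively
    grown by a factor b."""
    L = spins.shape[0]
    Lb = L // b
    coarse = np.empty((Lb, Lb), dtype=int)
    for i in range(Lb):
        for j in range(Lb):
            block = spins[b * i:b * (i + 1), b * j:b * (j + 1)]
            coarse[i, j] = 1 if block.sum() > 0 else -1
    return coarse

# usage: coarse-grain a random 27 x 27 configuration twice (27 -> 9 -> 3)
spins = np.random.choice([-1, 1], size=(27, 27))
print(block_spin(block_spin(spins)))
```

Asking how the effective couplings between the remaining block spins change from one such step to the next is precisely the question that the renormalization-group flow answers.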
In most situations these renormalization-group flows terminate at a fixed-point model H∗ ,
characterized by the fixed-point relation
\[
H^{*}\big(\{s\}\big) = R\Big( H^{*}\big(\{s\}\big) \Big).
\]

Typically, one has a small set of fixed points, and different microscopic models can flow to the
same fixed-point models. Indeed, if a small interaction term is added to a certain microscopic
model, it is typically expected that the model will still flow to the same fixed point; the perturba-
tion is called irrelevant in that case. Given a certain model, one also has relevant perturbations,
which have the effect that the fixed point of the model is changed. Some of these fixed points
correspond to trivial models, for which e.g. all the spins are frozen to point in the same direc-
tion, or do not interact at all. Other fixed points, however, are more interesting and represent
critical states. In fact, every one of these non-trivial fixed points corresponds to a universal-
ity class: every model that flows to the same critical (non-trivial) fixed point belongs to the
same class. A critical fixed point has a defining set of properties such as scaling behavior and
critical exponents, which can be obtained by linearizing the above flow equations. So with the
concept of renormalization-group flows Wilson had found a natural explanation of universality:
systems of different physical character may, nevertheless, flow to the same critical fixed point.
The different incoming trajectories onto this same fixed point correspond to distinct irrelevant
interactions that are present in the microscopic models, but which are washed out by the scale
transformation.
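The linearization mentioned above can be sketched in its standard textbook form (a schematic supplement, not a derivation taken from Ref. [37]). Writing a model close to the fixed point as $H^* + \delta H$, the scale transformation acts as
\[
R\big(H^{*} + \delta H\big) \;\approx\; H^{*} + \mathcal{T}\,\delta H ,
\]
with $\mathcal{T}$ the linearized transformation. Decomposing $\delta H$ into eigenoperators of $\mathcal{T}$ with eigenvalues $\lambda_\alpha = b^{y_\alpha}$ (where $b$ is the rescaling factor), a component with $y_\alpha > 0$ grows under repeated transformations and is relevant, whereas a component with $y_\alpha < 0$ dies out and is irrelevant; the critical exponents of the universality class are simple combinations of the relevant exponents $y_\alpha$.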
Now the above renormalization procedure of averaging over degrees of freedom is particularly
simple in the case of spins, and is expected to become more difficult in the case of atoms,
electrons, field theories, etc. Also, the effective degrees of freedom that arise in the course
of a renormalization-group flow can become rather different from the ones on the microscopic
level. Therefore, the significance of Wilson’s work consists rather in the conceptual ideas of scale
transformations, renormalized couplings, effective degrees of freedom, and renormalization-group
flows. Fisher puts it as follows:
Indeed, the design of effective RG transformations turns out to be an art more than
a science: there is no standard recipe! Nevertheless, there are guidelines: the general

Figure 3.4: Schematic representation of a renormalization-group flow. We can see that different micro-
scopic models (l = 0) determine different itineraries through the space of effective models and lead to a
small set of fixed points. In this case the possibilities are simple: a microscopic model flows to one of the
two trivial fixed points, unless it is at the critical point. In the latter case the model flows to the critical
fixed point (indicated with an asterisk). Figure taken from Ref. [36].

philosophy enunciated by Wilson [...] is to attempt to eliminate first those micro-


scopic variables or degrees of freedom of “least direct importance” to the macroscopic
phenomenon under study, while retaining those of most importance. [36, p.672]

In the next section, we will give different examples of how this ‘general philosophy enunciated
by Wilson’ is realized in different physical contexts or theories.

The numerical renormalization group


Still, there is one aspect of Wilson’s achievements that we have not discussed yet, one that is com-
monly forgotten in historical overviews but, from our perspective, is crucial for understanding
the significance of renormalization theory in physics.

The fourth aspect of renormalization group theory is the construction of nondiagram-


matic renormalization group transformations, which are then solved numerically,
usually using a digital computer. This is the most exciting aspect of the renormal-
ization group, the part of the theory that makes it possible to solve problems which
are unreachable by Feynman diagrams. The Kondo problem has been solved by a
nondiagrammatic computer method. [37, p.776]

Let us explain this ‘most exciting aspect’ in a bit more detail.15 The Kondo model describes a
magnetic impurity coupled to the conduction band of a nonmagnetic metal; the crucial question,
unsolvable by perturbation theory, is the low-temperature behavior of this impurity spin. Wil-
son’s solution to the problem is to (i) discretize the conduction band into energy levels
with a logarithmic spacing, (ii) transform the system to a half-infinite spin chain with the first
spin representing the impurity, and (iii) solve this spin-chain system iteratively. Starting from
the impurity spin, in every iteration a new site is added to the system and, in order to keep
the size of the Hilbert space tractable, the number of states is truncated by only keeping the
lowest-energy states of the Hamiltonian for the current part of the chain.
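The iterative structure of this procedure can be conveyed with a small sketch; the following is emphatically not Wilson's actual Kondo calculation, but a toy add-a-site, diagonalize, and truncate loop, with made-up single-site operators and a coupling that decays as $\Lambda^{-n/2}$ to mimic the logarithmic discretization.

```python
import numpy as np

def iterative_truncation(n_sites=8, n_keep=16, Lambda=2.0):
    """Toy version of Wilson-style iterative diagonalization: at every step
    a new two-level site is coupled to the block with a coupling that decays
    as Lambda**(-n/2), the enlarged Hamiltonian is diagonalized, and only the
    n_keep lowest-energy states are retained. This illustrates the truncation
    idea only; it is not the Kondo NRG."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    H = np.zeros((1, 1))           # trivial starting block (the 'impurity')
    coupling_op = np.ones((1, 1))  # block operator coupling to the new site
    spectra = []
    for n in range(n_sites):
        t = Lambda ** (-n / 2.0)
        dim = H.shape[0]
        # enlarge the Hilbert space (block x new site) and add the coupling
        H_big = np.kron(H, np.eye(2)) + t * np.kron(coupling_op, sx)
        eigval, eigvec = np.linalg.eigh(H_big)
        keep = min(n_keep, len(eigval))
        P = eigvec[:, :keep]                    # lowest-energy states
        H = np.diag(eigval[:keep])              # truncated block Hamiltonian
        coupling_op = P.T @ np.kron(np.eye(dim), sz) @ P  # newest-site operator
        spectra.append(eigval[:keep] - eigval[0])
    return spectra

print(np.round(iterative_truncation()[-1][:5], 3))  # low-lying spectrum
```

The essential point is visible in the two lines that diagonalize and then keep only the lowest-energy states: the Hilbert space never grows beyond a fixed size, and the retained states are the candidates for the effective degrees of freedom at the current energy scale.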
So the general procedure of integrating out unimportant degrees of freedom is motivated
from a very practical point of view. Because a computer can store only a finite number of
15
Note that the details of Wilson’s numerical solution of the Kondo problem are not important to understand
the rest of this chapter.

states, Wilson needed a procedure to select only the important states in every iteration of the
renormalization procedure. In this case, it appears that selecting only the lowest-energy states
leads to a numerical algorithm that predicts the impurity’s physical properties. But this must
mean that these states represent the effective degrees of freedom that capture the physics of
the Kondo model at a given energy scale. Therefore, from the very start of the theory of
renormalization, the numerical simulation and physical understanding of a system are two faces
of the same coin, in the sense that both require accounting for the effective degrees of freedom
that determine the physics of a many-body system at a certain scale. Indeed,

the solution of the Kondo problem is the first example where the full renormaliza-
tion program (as the author conceives it) has been realized: the formal and scaling
aspects of the fixed points, eigenoperators, and scaling laws will be blended with
the practical-aspect of numerical approximate calculations of effective interactions
to give a quantitative solution (the present accuracy is a few percent) to a problem
that previously had seemed hopeless. [37, p.805]

3.2.3 Renormalization and the many-body problem


In the previous section we have discussed three examples of many-body systems exhibiting
emergent behavior which, although the constituents and their interactions are known exactly,
cannot be understood starting from the microscopic description alone. From the insights of
Wilson we can now understand how such emergent behavior can come about, and how it can
disconnect from the microscopic description of the system.
One important concept is that of effective degrees of freedom determining a certain physical
phenomenon. In full generality, we can define the degrees of freedom for a certain system as
determining the possible states or configurations that the system can take. In the case of the
Ising model, the degrees of freedom are spins on a lattice, whereas we have quark and gluon fields
for quantum chromodynamics, and electrons moving through a crystal for the superconductor.
Importantly, Wilson has shown that we can define scale transformations that average over a set
of given degrees of freedom, giving rise to effective models with a new set of degrees of freedom.
In the case of the simple scale transformation in Fig. 3.3, these new degrees of freedom are again
spins, but scale transformations can also give rise to qualitatively different ones. For example,
in the case of quantum chromodynamics we have seen that the effective degrees of freedom are
not the original quarks and gluons, but rather the protons and neutrons that emerge at low
energies. Similarly, superconductivity is best described by identifying the symmetry breaking of
effective gauge fields below a certain temperature.
This picture thus suggests that many-body systems are described by different effective de-
grees of freedom depending on the scale at which the system is being probed. The renormali-
zation-group flows determine how these effective degrees of freedom on a certain scale can be
entirely disconnected from the degrees of freedom at a smaller scale. These effective degrees
of freedom, and the physical phenomena they describe, can then be said to emerge from the
microscopic basis.
These insights and developments have led to a specific way of looking at a many-body system.
Although the microscopic constituents and the elementary interactions are often well-known in
a typical system, the crucial question for understanding a certain phenomenon is: What are
the effective degrees of freedom that determine the system’s behavior at this scale? The most
exciting research typically involves many-body physics for which these effective descriptions
involve exotic physical concepts such as collective excitations, gauge fields, effective particles
with anyonic statistics, etc. Let us discuss some examples in the fields of condensed-matter and
high-energy physics.

Renormalization in condensed-matter physics


The field of condensed-matter theory has a few paradigmatic procedures of obtaining such effec-
tive descriptions. We have already encountered the use of order parameters for characterizing e.g.
the phase transitions in the Ising model, where the fluctuations of this order parameter occur on
length scales that are much larger than the scale at which we see the individual spins; the fluctu-
ations of the order parameter are the degrees of freedom determining the low-energy features of
the Ising model at criticality. The understanding of superconductivity follows a similar path, in
that the low-energy behavior of the superconductor can be described by quantum fluctuations of
the order parameter. In the specific case of superconductivity, the effective field theory for
these fluctuations has an electromagnetic gauge invariance, which is spontaneously broken. This
breaking of a gauge symmetry explains all characteristic superconducting behavior [38], so that
the gauge field indeed describes the effective degrees of freedom on the energy scale at which
the phenomenon of superconductivity is exhibited, whereas the microscopic degrees of freedom
(the electrons and the lattice) do not allow for such a description.
Another example is Fermi liquid theory, a theory that has been extremely successful in
describing the electronic properties of conductors. It was again Landau [39] who first came up
with the idea that the low-energy excitations of a gas of interacting electrons can be pictured as
‘quasiparticles’. These are defined in the non-interacting limit as the free electrons, and acquire
an effective mass and lifetime as the interactions are turned on. Most physical properties of a
Fermi liquid at low temperature can be derived from the quasiparticle distribution function and
their scattering cross sections. Only quite recently has this approach been rigorously understood
from the viewpoint of renormalization theory: it has been shown that Fermi liquid theory is
obtained if one integrates out high-energy degrees of freedom, much in the spirit of Wilson.
[40, 41] The conclusion from this renormalization analysis is that the quasiparticles are indeed
the effective low-energy degrees of freedom, capturing the phenomenology of the electrons in a
metal at low energy or low temperature.
Together with the concept of an order parameter, the idea of quasiparticles remains to this
day one of the standard ways of understanding the low-energy behavior of a strongly-correlated
electron system. Both paradigms of condensed-matter physics can be rightly viewed as effective
field theories, and are both firmly rooted in renormalization theory16 . In the case of these
two examples, this renormalization perspective can be made rigorous, but this does not always
have to be the case. Indeed, the use of a certain effective field theory is often motivated from
physical intuition, experimental results, perturbation theory, mean-field approaches, numerical
simulations, etc. It is the power of an effective description that a direct link with the microscopic
constituents is not needed to understand a system’s behavior at a certain energy scale.
Also, we should stress that there is a wide variety of renormalization approaches available
in the condensed-matter literature. Already in the paper of Wilson, we can find different scale
transformations: in real space, where e.g. groups of spins are averaged over; in momentum space,
which is typically the approach in quantum field theories; or in energy levels, which leads to
the solution of the Kondo problem. Recently, it has been realized that one can also renormalize
in entanglement degrees of freedom, which has led to effective parametrizations of quantum
many-body states. One can also perform scale transformations on different objects: in the path
integral of a field theory, in a partition function of statistical mechanics, in a quantum many-
body wave function, on the level of correlation functions, etc. Finally, as we have illustrated with
Wilson’s solution of the Kondo problem, the ideas of renormalization can be implemented in a
conceptual or analytic way, but also lead to computational methods for numerically simulating
16
Another resource for motivating these approaches is the use of symmetries and symmetry breaking. Of course,
the concepts of renormalization and symmetry are strongly connected, but we will solely focus on renormalization
in this text.

many-body systems.

Renormalization in high-energy physics


Above we have written the history of renormalization from the perspective of condensed-matter
physics, but it would be strongly misguided to claim that renormalization theory was invented
in the context of critical phenomena, and only subsequently applied to relativistic quantum field
theory. Indeed, preliminary notions of renormalization go back as early as the late 1940s with
the successful renormalization of the divergences in quantum electrodynamics.17 Yet it proved
difficult to generalize these successes to other field theories, and the whole project of quantum
field theories came under attack because of the non-renormalizability of e.g. the weak interaction.
Although this particular issue was in the end solved around 1970, and the status of quantum field
theory as the theoretical framework for describing elementary particles and their interactions
was restored, it remained a matter of debate what ‘renormalizability’ was really supposed to
mean and whether it should serve as a guiding principle or not. Arguably, renormalization in
quantum field theory was fully understood only after Wilson’s insights on scale transformations
and renormalization flows. [43]
The ‘modern’ view on renormalization in relativistic quantum field theory is very similar to
the one that is at work in condensed-matter physics. As Weinberg has put it, the method in its
most general form can be understood as “a way to arrange in various theories that the degrees of
freedom that you’re talking about are the relevant degrees of freedom for the problem at hand”
[44, p.15]. This has led to the formulation of effective field theories that are designed to describe
the low-energy physics of another quantum field theory that is valid at higher energies. For a
given field theory, this might be implemented by deleting the ‘heavy fields’ from the theory, since
these only have observable effects at high energies, and only keeping the low-energy fields with
suitably redefined masses and couplings. The prime example is again quantum chromodynamics,
where at high energy the relevant particles (fields) are quarks and gluons, whereas, at low
energies, the physics can be understood with e.g. massless pions or, at even lower energies,
protons and neutrons. [44]
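A generic textbook illustration of this 'deleting of heavy fields' (included here for concreteness; it is not a construction taken from Ref. [44]) is a light fermion $\psi$ coupled to a heavy scalar $\Phi$ of mass $M$:
\[
\mathcal{L} = \bar{\psi}\,(i\gamma^\mu\partial_\mu - m)\,\psi + \tfrac{1}{2}\,\partial_\mu\Phi\,\partial^\mu\Phi - \tfrac{1}{2}M^2\Phi^2 + g\,\Phi\,\bar{\psi}\psi .
\]
At energies far below $M$ the heavy field cannot be excited; solving its equation of motion (approximately $\Phi \simeq g\,\bar{\psi}\psi/M^2$) and substituting back yields
\[
\mathcal{L}_{\mathrm{eff}} \simeq \bar{\psi}\,(i\gamma^\mu\partial_\mu - m)\,\psi + \frac{g^2}{2M^2}\,(\bar{\psi}\psi)^2 + \dots ,
\]
an effective contact interaction among the light fields, suppressed by the heavy scale – much in the spirit of Fermi's theory of the weak interaction, which arises from integrating out the heavy W bosons.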
The logical conclusion of these developments is that the field theories for the elementary
particles in the standard model are themselves effective field theories of another unknown field
theory, and that they are not fundamental in the traditional sense of the word. Also, because
effective degrees of freedom can be entirely different from those of the underlying theory, it seems to be
impossible in principle to deduce any properties of this more fundamental theory unless one can
do experiments at sufficiently high energies. The situation is nicely summarized by Polchinski
in the following Q&A:

Q: Doesn’t all this mean that quantum field theory, for all its successes, is an
approximation that may have little to do with the underlying theory? And isn’t
renormalization a bad thing, since it implies that we can only probe the high energy
theory through a small number of parameters?
A: Nobody ever promised you a rose garden. [41, p.10-11]

From the previous paragraphs, one can very well imagine that the standard model of elemen-
tary particle physics is an effective theory for other degrees of freedom living at a smaller scale.
In fact, more and more physicists are actively working out the idea that the phenomenology
of high-energy physics arises from an underlying microscopic theory, just as in the case of a
condensed-matter system. This ‘condensed-matter point of view’ can be stated as follows:
17
The concept of a renormalization-group flow goes back to the work on the running of coupling constants in
quantum electrodynamics by Gell-Mann and Low [42], which served as an important inspiration for Wilson.

As we probe nature at shorter and shorter distance scales, we will either find in-
creasing simplicity, as predicted by the reductionist particle physics paradigm, or
increasing complexity, as suggested by the condensed-matter point of view. We
will either establish that photons and electrons are elementary particles, or we will
discover that they are emergent phenomena—collective excitations of some deeper
structure that we mistake for empty space. [35, p.879]

This quote finally confirms that the ideas of renormalization and emergence are at the founda-
tions of high-energy physics as well, and that the conceptual import of renormalization theory is
similar to that in condensed-matter physics. Therefore, a philosophical account of the theory of renor-
malization is needed that can capture its conceptual structure across the different subdisciplines
of physics.

3.3 Review of philosophical literature


In a seminal paper Cao and Schweber summarize the philosophical ramifications of renormaliza-
tion theory as follows18 :

Most notably, we found that the recent developments support a pluralism in theo-
retical ontology, an antifoundationalism in epistemology and an antireductionism in
methodology. These implications are in sharp contrast with the neo-Platonism im-
plicit in the traditional pursuit of quantum field theorists, which took mathematical
entities as the ontological foundation of physical theories and which assumed that,
through rational (mainly mathematical) human activities, one could arrive at an ul-
timate stable theory of everything. Also, contrary to the previous image of scientific
theories that was implicit in the mathematical structure of QFT, the new image
fostered by the EFT approach is that scientific theories are not to be conceived as
necessary products of scientific rationality, but rather should be seen as contingent
descriptions of nature, revisable in the course of changing circumstances. [43, p.69]

These drastic conclusions are drawn from a detailed historical analysis of the theory of renormal-
ization in high-energy physics and, to a lesser extent, statistical physics. We can already note
that these conclusions seem at odds with the approach we take in this thesis, since for us the
scientific rationality of physical theories is the starting point, the feature that a transcendental
analysis should, in some sense, explain. Indeed, the idea that physical theories are contingent
descriptions of nature directly contradicts our goal of showing how the object of theoretical
physics is ideally determined by a set of ‘ultimate invariants of experience’.
In the following four subsections we will discuss some of the issues that are raised in Cao and
Schweber’s paper, and try to get a feeling for contemporary philosophical accounts of renormal-
ization theory in physics. Since we aim at developing a transcendental account in the spirit of
Cassirer in the following section, we will try to bring home the two important conclusions that (i)
the specifically philosophical conclusions drawn by contemporary philosophers with regards to
epistemology and/or ontology do not necessarily follow from the physics alone19 , and (ii) these
18
In this quote, EFT stands for effective field theory; the EFT program denotes the new approach in high-energy
physics, where every field theory is thought to describe the effective low-energy physics of another field theory that
lives at a higher energy scale. As we have discussed in Sec. 3.2.3, this is the consequence of the modern view on
renormalization in high-energy physics. Crucially, in the EFT program the most accurate field theories describing
the standard model of particle physics are themselves only effective field theories of a lower-lying level to which
we haven’t had any experimental access so far.
19
It is often suggested that, once one understands the physics thoroughly – a vantage point that supposedly only
a small number of philosophers attain – these conclusions are the only viable option.

accounts lack a comprehensive view on the epistemological function of renormalization theory.


Of course, the burden of proof lies with our own account of renormalization, rather than with
explicitly disproving all other approaches. Therefore, this section should also be read as a setting of
the stage for the following one, and as an indication of the challenges for a philosophical analysis
of renormalization.

3.3.1 Empiricism
On multiple occasions, Cao and Schweber note that recent developments in renormalization
theory support philosophical empiricism. The reason is that, according to the EFT program,
an understanding of physical phenomena on a given scale needs to be supported by empirical
data on that scale. This implies the fundamental importance of phenomenological approaches
in physics. From that perspective, physical theories cannot be more than “effective instruments
for organizing the data by imposing local order and coherence, and they conceive and express
local causal regularities” [43, p.76]. This, in turn, supports a localist view of theory, which
“characterizes physical (or more generally, scientific) theories as historically situated and context-
dependent” [43, p.74].
Their position is further characterized by contrasting it with the idea that “the development
of fundamental physics will end with the discovery of an ultimate, definitive, and conclusive
mathematical formalism” [43, p.77]. Indeed,

the empiricist position in epistemology that is supported by the recent developments


in renormalization theory is characterized by its antiessentialism and its antifounda-
tionalism, its rejection of a fixed underlying natural ontology expressed by mathemat-
ical entities, and its denial of universal, purely mathematical truths in the physical
world. [43, p.77]

From our perspective this argument cannot carry any weight. In the previous section we have
shown that the physical motivation for the EFT program is not restricted to high-energy physics,
but that the same commitment to ‘relevant degrees of freedom’ is present in e.g. condensed-
matter physics. Moreover, condensed-matter physics is typically not driven by the dream of a
final theory describing some fundamental nature of the physical world. From that perspective,
contrasting the rationale behind the EFT program with the dream of the string theorist20 does
not serve as an argument for the claim that renormalization theory implies empiricism.
Moreover, we don’t agree with the claim that the inevitability of phenomenology implies an
empiricism in the philosophy of physics. Indeed, Cao and Schweber observe from the develop-
ments of renormalization theory that physics can only establish local regularities, but use this
observation as follows:

The limited nature of our experience in producing knowledge of the world undermines
the universal claim of physical laws: it only allows ascertaining family resemblance
(regularities) in local region of space and time. From local regularities we cannot
construct physical theories that are unique and necessary. On the contrary, all
theories are context-dependent, culturally relative, and historically changeable. [43,
p.75]

Again, the argument seems to be that, with the dream of a grand unified theory crushed by
renormalization theory, the only remaining option is that a physical theory is e.g. context-
dependent. But the fact that the physical behavior of nature depends on the scale on which it
20
String theory is currently the best option for developing an ‘ultimate’ theory of everything; see the footnote
in Sec. 3.4.4 for more on string theory.

is probed does not imply that our understanding of nature on that scale is culturally relative.
Instead, renormalization theory has taught us that a physical description of nature necessarily
involves a setting of the scale on which it is probed, but this stipulation of scale is an exact and
well-defined part of physical theory, and does not point to the “socially constructive nature of
physical theories” [43, p.77].
Ultimately, it seems that Cao and Schweber’s ontological considerations are responsible for
their empiricist conclusions. In their view the EFT program implies a representation of the
physical world as “layered into quasi-autonomous domains, each layer having its own ontology
and associated ‘fundamental’ laws”, giving rise to a so-called “hierarchical pluralism in theoreti-
cal ontology” [43, p.72]. Since every quasi-autonomous domain demands an empirical input that
is historically contingent, the conclusion is unavoidable that the ontologies that are discovered
are contingent as well. Then it is only a small step to the claim that “scientific theories are
not to be conceived as necessary products of scientific rationality, but rather should be seen as
contingent descriptions of nature” [43, p.69].
From the transcendental perspective, the conclusion that ontologies are historically contin-
gent does not need to worry us that much, and, crucially, does not imply that scientific theories
are contingent descriptions of nature. The reason is that Cao and Schweber assume that the
ontological commitments of scientific theories are inseparable from their scientific content, an
assumption that we, following Cassirer, want to avoid at all costs. Instead, we would like to
focus on the conceptual structure of theoretical physics, and its evolution, where scientific ratio-
nality is taken as a postulate and cannot be disproven by the content of physical theories. In
particular, we will try to show that renormalization theory has redefined the “universal claim of
physical laws” (see quote above), rather than undermined it.21

3.3.2 Physical understanding


Let us therefore focus on the epistemological ramifications of renormalization theory, where Cao
and Schweber first lay their focus on the many fruitful interactions between quantum field theory
and statistical physics. Indeed, although a historical analysis shows the many intersections
between the two fields in a coherent way (see Sec. 3.2), this does not answer the question why it
is, in fact, possible that physical insights from one domain are applicable to an entirely different
domain – why is the formalism of critical phenomena applicable to field theory, and vice versa?
In the framework of Cao and Schweber, which is determined by the ontological commitments
that a certain theory contains, the apparent universality of physical concepts is a philosophi-
cal problem: “Why are the physical insights obtained from one phenomenological domain (e.g.,
spins in crystal lattices) relevant, translatable, and applicable to another entirely different do-
main (e.g., continuous fields)?” [43, p.73]. According to Cao and Schweber, we can understand
this in terms of physical and mathematical analogies, as “different physical interpretations of
the mathematical formalisms in different domains of phenomena are connected by metaphorical
transformations of concepts involved in the formalisms” [43, p.76]. In Sec. 3.2 we have made
clear, however, that the universality of physical concepts involves more than metaphorical trans-
formations, and rather points to a deep physical insight related to scale transformations and
renormalization. Indeed, the fact that the same field theory can be applied to different physical
systems is explained through the idea of renormalization-group flows, and shows that different
systems can exhibit the same physics at low energies or large length scales. Again, we come
21
Note that another resource for Cao and Schweber’s empiricist conclusions is the realism/anti-realism debate,
where the historical contingency of ontologies in physics is seen as a problem. One response is structural realism
[45], for which ontological commitments in physics are not made with respect to the objects in physical theories,
but rather with respect to the structures in these theories. Note that our approach does not commit to this line
of thinking either, as we are trying to avoid the realist/anti-realist debate altogether.

to the conclusion that connecting a physical theory with its ontological commitments rather
obscures the conceptual features of renormalization theory.
Let us therefore look at a philosophical analysis of physical understanding that is closer to
our goals. In an interesting paper, Hartmann also discusses Cao and Schweber’s paper and comes
to the conclusion that, “[l]eaving metaphysical questions aside, it seems to be philosophically
more interesting to examine the formal relations between the theories, models and EFTs we
have already” [46, p.298]. From our perspective, this seems to be more to the point, indeed.
Although Hartmann’s paper is mainly about differentiating the functions of theories, models
and effective field theories in high-energy physics, we would like to focus on his idea of global
and local understanding. Whereas a local understanding is obtained by capturing a physical
phenomenon in terms of the degrees of freedom that are relevant at the energy scale under
consideration, a global understanding consists of showing how the phenomenon follows from the
microscopic equations governing the system.22 This difference in the theoretical understanding
of physical phenomena is illustrated with a case study of quantum chromodynamics. As we have
seen in Sec. 3.1, the major difficulty with this theory is the fact that it is extremely difficult to
actually compute, starting from the fundamental Lagrangian, observable consequences such as
the observed masses of the proton and neutron. In fact, this has only been realized recently with
the development of highly advanced computational methods and the use of huge computational
resources. According to Hartmann, this has produced a global understanding of the mass of
the proton, since it is incorporated as a consequence of the fundamental equations of quantum
chromodynamics, but fails to produce a local understanding, since this computational method
acts like a black box. On the other hand, we have effective models such as asymptotic freedom,
confinement, or dynamical chiral symmetry breaking, which provide a local understanding of
how the proton arises as a particle, but these explanations lack a global understanding because
no general principles are directly involved.
Although the differentiation of global and local understanding sheds light on the different
functions playing a role in explaining physical phenomena, there seems to be a tension between
the two types of understanding: both are needed to fully understand a physical phenomenon, yet
it remains unclear how they relate to each other. The dichotomy can be analyzed further by
returning to the example of quantum chromodynamics. In the case
of the proton mass, for example, we believe that Hartmann misses the point of lattice gauge
theory as he downplays this computational method as providing just a black box. Instead, the
efforts of lattice gauge theory and other computational approaches in determining the mass
of the proton from the fundamental Lagrangian should be understood as providing a physical
understanding of how the proton arises as an effective description of quantum chromodynamics
at low energies. Put more generally, physical understanding requires an understanding of how
the effective degrees of freedom at a certain scale – the energy scale at which the proton is probed
– arise through the interactions of the degrees of freedom that live at other scales – the energy
scale at which quark dynamics are important. In that respect, it is unclear what the added value
of the idea of global understanding could be, if not for the crucial requirement that the relevant
degrees of freedom and processes can be shown to arise from a more global standpoint. This
is, of course, the place where renormalization theory comes in, so one of the challenges for the
next section will be to show how the concept of renormalization integrates the different levels
at which we can understand a physical phenomenon.
22
According to Hartmann, whereas local understanding is produced by causal/mechanistic explanations of a
physical phenomenon, a global understanding is rather produced by an explanation that fits the phenomenon
in a general framework. It is by combining these different accounts that one obtains scientific understanding:
“science studies a given phenomenon from various theoretical perspectives, all of which reveal some explanatory
information about the phenomenon in question” [46, p.300].
3.3.3 The question of emergence


Another case made by Cao and Schweber that has proven to be influential is the one for emergence:

Taking the decoupling theorem and EFT seriously would entail considering the re-
ductionist (and a fortiori the constructivist) program an illusion, and would lead to
its rejection and to a point of view that accepts emergence, hence to a pluralistic
view of possible theoretical ontologies. [43, p.71]

This notion of ontological pluralism or emergence is typically seen in relation to reduction:
“The claim that some phenomenon is emergent is usually understood as the claim that the
phenomenon is in some sense not reducible to (i.e. deducible from) its base” [47, p.429]. This
leads to the core distinction between ontological emergence (the failure of reduction in principle)
and epistemological emergence (the failure of reduction in practice). The question of emergence
then becomes one of articulating which of these two failures of reduction is exemplified in
a certain physical theory.
This differentiation of epistemological and ontological emergence takes a central place in a
contribution by Morrison [48], where the focus is on emergent low-energy behavior in condensed-
matter systems such as superconductors. Her argument starts from the demand that emergence
should be distinguished from low-energy behavior that is merely ‘resultant’, because this rather
corresponds to the epistemological independence of a certain macroscopic description with re-
spect to the microscopic details, and is a fairly common feature in physical explanation. In order
to have ontological emergence, something more is needed: there should be a complete autonomy
of the low-energy physics from the microscopic theory, without an ontological link between the
two.
This stronger sense of ontological emergence is exemplified in systems that exhibit universal
behavior, where the prototypical example is superconductivity. The standard explanation for
this phenomenon is the BCS theory (see Sec. 3.1), where (i) pairs of electrons are formed through
Cooper pairing, (ii) the low-energy behavior of this collection of Cooper pairs is described
by introducing a bosonic field theory, and (iii) this boson theory exhibits symmetry breaking
below a certain critical temperature. As Morrison explains, however, one can “derive the exact
(emergent) properties of superconductors simply from the assumption of broken electromagnetic
gauge invariance without relying on the microphysics of Cooper pairing” [48, p.153]. This implies
that we “do not need a microscopic story about electron pairing and the approximations that
go with it to derive the exact consequences that define a superconductor” [48, p.155]. Thus,
it is symmetry breaking that “provides the dynamical explanation of emergent phenomena”,
whereas “the specific microphysical details are irrelevant; how the symmetry is broken is not
part of the account” [48, p.156]. The mechanism of symmetry breaking is universal, as different
physical systems can exhibit exactly the same symmetry breaking pattern and give rise to the
same physical phenomena at low energies. Because universal phenomena “originate from vastly
different micro properties, there is no obvious ontological or explanatory link between the micro
and macro levels” [48, p.162], and, therefore, we have a clear example of ontological emergence.
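To indicate the physics behind this claim, the following is a compressed sketch in the spirit of the standard London-type reasoning that Morrison alludes to (it is not her own presentation): the only inputs are a phase stiffness ρ_s and a condensate charge q, both phenomenological parameters, and no microscopic information about Cooper pairing enters.

% Minimal sketch: assuming only that the electromagnetic U(1) symmetry is
% spontaneously broken, so that a rigid phase \theta(x) exists, the defining
% electromagnetic response of a superconductor follows.
\begin{align*}
  F[\theta,\mathbf{A}] &= \int \mathrm{d}^3x \,
      \frac{\rho_s}{2}\Big(\nabla\theta - \frac{q}{\hbar}\mathbf{A}\Big)^2 + \dots, \\
  \mathbf{j}_s &= -\frac{\delta F}{\delta \mathbf{A}}
      \;\xrightarrow[\;\nabla\theta\,=\,0\;]{}\;
      -\frac{q^2\rho_s}{\hbar^2}\,\mathbf{A}
      \qquad \text{(London equation)}, \\
  \nabla^2\mathbf{B} &= \frac{\mathbf{B}}{\lambda^2},
      \qquad \lambda^2 = \frac{\hbar^2}{\mu_0\, q^2 \rho_s}
      \qquad \text{(Meissner effect).}
\end{align*}

Whatever microscopic mechanism fixes the values of ρ_s and q (for the BCS superconductor, Cooper pairing and q = 2e), the Meissner effect and the rigidity of the current response follow from the broken symmetry alone; this is the sense in which the derivation bypasses the microphysics.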
The analysis of Morrison is guided by the goal of establishing some kind of ontological emer-
gence, in contrast to a merely epistemological one. We believe, however, that this distinction
is nowhere to be found in theoretical physics itself, and can only be imported due to philo-
sophical concerns. This conclusion can also be read in a paper by Crowther, where she shows
that understanding emergence in terms of reduction is “tied by the metaphysicians’ fixation on
ontological emergence, the need to sharply distinguish it from “merely” epistemological emer-
gence” [47, p.430]. Our transcendental view on the philosophy of physics is highly sympathetic
with Crowther’s claims that we should view emergence as it actually appears in physics, and
not base it on ontological and metaphysical conceptions that are imported from other branches
of philosophy. In particular, in contrast to the ontological pluralism of Cao and Schweber, this
‘physics-first’ approach does not make any claims about the ontological commitments to which the
EFT program supposedly adheres.
Once we have realized this, we can see that this ontological dimension of emergence is un-
called for in the case of superconductivity. The fact that we can use the same Nambu-Goldstone
field theory for describing the phenomenon of superconductivity in a variety of different ma-
terials, irrespective of the microscopic properties, does not imply that we have found a new
ontology on that scale. The only conclusion is that, in order to understand the phenomenon of
superconductivity, we need to introduce new degrees of freedom (i.e. a bosonic field theory) that
live on the energy scale at which superconductivity is observed. The fact that different materials
exhibit the same effective degrees of freedom is a very non-trivial physical fact, and has found
its explanation in renormalization theory as different microscopic systems can flow to the same
fixed point under a renormalization-group flow. But this physical fact does not come with any
ontological commitments: it is just a part of physical understanding that different degrees of
freedom become important at different scales!

Let us therefore discuss Crowther’s account of emergence in more detail. She sees a tension
in the relation between an effective field theory and the underlying microscopic theory: on the
one hand, the effective field theory should not be reducible to the more fundamental
theory, but, on the other hand, there should be, in principle, a way to derive a low-energy theory
from the high-energy physics. This is essentially the same tension that we identified earlier in the
paper of Hartmann [46], and can again be illustrated with the example of QCD. There it should,
in principle, be possible to derive the low-energy bound states of the quarks (the hadrons) from
the QCD Lagrangian, and physicists are actively pushing the numerical methods for making
this happen. Nevertheless, it seems impossible to derive a theory describing the low-energy
behavior from QCD only, and an external input is required for developing EFTs such as chiral
perturbation theory.
At this point, Crowther notes that, although it is in principle possible to obtain quantitative
low-energy predictions from a high-energy theory, the EFT framework is often necessary in a
more subtle sense:

An effective, low-energy theory is the only means of properly describing the low-
energy behavior of a system. EFTs are formulated in terms of the appropriate
degrees of freedom for the energy being studied, and are necessary for imparting an
understanding of the low-energy physics. Because the low-energy degrees of freedom
do not exist at higher energy, the high-energy theory is unable to present the relevant
low-energy physics. [47, p.428]

Based on this crucial insight, it becomes clear that hinging emergence on the notion of deriv-
ability or reduction misses the point. Instead, Crowther proposes to focus on two positive
aspects of emergence, i.e. on the fact that the low-energy physics is novel and autonomous with
respect to the high-energy theory. Novelty means that new features appear in the low-energy
regime that are not features of the high-energy theory, and autonomy captures the fact that a
low-energy theory is impervious to changes in the high-energy system. This positive definition
has the advantage that it is “naturally suggested by the physics”, whereas taking emergence as
a failure of reduction “distracts from the lessons of the actual physics”: “It means developing
an account true to the science rather than seeking to carry-over prior intuitions and concepts
from other branches of philosophy” [47, p.430].
Still, just as in Hartmann’s case, we believe that the tension is not resolved. Crowther
rightly problematizes the differentiation between ontological and epistemological reduction from
the perspective of physical practice, and rightly emphasizes the fact that identifying the correct
degrees of freedom is necessary for understanding the physics at low energies. In that
sense, the low-energy physics can be said to emerge from the high-energy system, and this idea
of emergence is indeed one of the conceptual innovations of modern physics. But what is missed
here is the fact that a real physical understanding of these low-energy degrees of freedom is only
attained if it is understood how these arise from the high-energy system: What is the physical
mechanism that, starting from the high-energy physics, gives rise to the low-energy description?
Indeed, whereas Crowther focuses on the novelty and the autonomy of the emergent physics,
the question of how the emergent physics arises from the microscopic degrees of freedom is
not properly taken into account. The challenge here is, again, to show how both sides of the
tension can be relieved in a positive way, i.e. how we can take the novelty and autonomy of
emergent physical behavior seriously, while, at the same time, not losing the physicist’s goal of
understanding how these effective degrees of freedom emerge from the underlying microscopics.
In the next section we will show that it is precisely the idea of the renormalization group
that gives us the conceptual framework for tackling this question. In our analysis, a crucial
role will be played by computational approaches: it is in the efforts of obtaining quantitative
predictions on low-energy behavior starting from the microscopic theory, that a physicist gains
understanding of the low-energy physics. Crowther downplays this aspect of theoretical physics:
Thus, we can distinguish between an EFT’s role in enabling quantitative predictions
in the low-energy regime—a role which, in principle, could be fulfilled by the high-
energy theory—and its role in appropriately describing the behavior of a system at
low energy, and thereby facilitating an understanding of the low-energy physics—a
role which could not be fulfilled by the high-energy theory. [47, p.428]
We will show that enabling quantitative predictions in the low-energy regime by numerical
simulations takes a crucial place in the structure of contemporary physics, and provides one of
the keys for resolving the tension between reduction and emergence.

3.3.4 More is different


In Sec. 3.1 we have already explained how theoretical physics has developed into a diverse field
of science, where the search for a fundamental theory of physical reality is no longer the only
goal, nor the most interesting one. The debate on emergence is a direct consequence of this
development, as it has become clear that interesting physics can emerge in a physical system on
a certain scale, without a direct connection to underlying microscopic degrees of freedom that
make up the system. This realization was first made by condensed-matter physicists, as they
wanted to make clear that their work is at least as fundamental and exciting as the
work of the high-energy physicist. In this last subsection, we will investigate this side of the
story a bit more, specifically in order to argue for our claims above that a lot of the philosophical
debates on ontological emergence do not have their basis in the works of physicists themselves.
The discussion on emergence in physics originates to a large extent from the seminal paper
More Is Different by Anderson from 1972 [49]. In this paper, Anderson opposes the common
view that “the only scientists who are studying anything really fundamental” [49, p.393] are the
high-energy physicists working on the fundamental laws of elementary particles and cosmology.
This seems to be an obvious corollary of reductionism, where “the workings of our minds and
bodies, and of all the animate and inanimate matter of which we have any detailed knowledge,
are assumed to be controlled by the same set of fundamental laws” [49, p.393]. But, as Anderson
is out to show, this reasoning is a fallacy, because reductionism does not imply the constructionist
hypothesis: “The ability to reduce everything to simple fundamental laws does not imply the
ability to start from those laws and reconstruct the universe” [49, p.393]. In many-body physics,
this constructionism breaks down when confronted with difficulties that are related to scale and
complexity, because it turns out that the behavior of large systems cannot be understood by a
simple extrapolation from the properties of a few particles. Instead, at every level of scale and
complexity “entirely new laws, concepts, and generalizations are necessary, requiring inspiration
and creativity to just as great a degree” [49, p.393]. This breakdown of the constructionist
hypothesis is illustrated by Anderson mainly through the concept of symmetry breaking, which
provides a physical mechanism that explains how, at a certain scale, behavior can be observed
that is entirely new with respect to the underlying fundamental laws.
So, Anderson’s paper should, in the first instance, be read as a polemic against the high-
energy physicist’s monopoly on the notion of fundamentality. “[A]t each level of complexity
entirely new properties appear, and the understanding of the new behaviors requires research
which I think is as fundamental in its nature as any other” [49, p.393]. As Anderson explains,
the research that is done for understanding the property of a system with a broken symmetry
is “as fundamental as many things one might so label”, but “it needed no new knowledge of
fundamental laws and would have been extremely difficult to derive synthetically from those
laws” [49, p.395].23
Secondly, Anderson does not oppose reductionism, but rather argues for “the breakdown of
the constructionist converse of reductionism” [49, p.393]. What Anderson is claiming, is that, in
order to understand a physical phenomenon on a given energy scale, entirely new laws, concepts,
and generalizations are necessary. Indeed, “the behaviour of large and complex aggregates of
elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of
the properties of a few particles” [49, p.393]. As Anderson accepts reductionism, he does not deny
that low-energy behavior can be reduced to the more fundamental laws, but he emphasizes that
this is (i) often extremely difficult or all but impossible, and (ii) not essential for understanding
what is going on at the low-energy scale. We see the same tension appearing as the one we have
identified earlier in the papers by Hartmann and Crowther, but taking on a more pragmatic
form here. Anderson’s discussion of superconductivity is illuminating:

But sometimes, as in the case of superconductivity, the new symmetry—now called
broken symmetry because the original symmetry is no longer evident—may be of an
entirely unexpected kind and extremely difficult to visualize. In the case of supercon-
ductivity, 30 years elapsed between the time when physicists were in possession of
every fundamental law necessary for explaining it and the time when it was actually
done. [49, p.395]

Thirdly, we note that no commitment to any form of ontological emergence is found. Unless
we should interpret Anderson’s use of the word ‘fundamental’ as implying some kind of ‘ontology’
appearing on different scales, there is no need to see any argument for ontological pluralism or
ontological emergence in his paper.

In a more recent paper [51], Laughlin and Pines have reiterated Anderson’s statements on
the status of reductionism in theoretical physics, but in a seemingly stronger sense. Parallel to
Anderson’s distinction between reductionism and constructionism, they state that “[w]e have
succeeded in reducing all of ordinary physical behavior to a simple, correct Theory of Everything
only to discover that it has revealed exactly nothing about many things of great importance”
[51, p.28]. They seem to go beyond Anderson in identifying ‘higher organizing principles’ that
work at a certain energy scale independently from an underlying microscopic theory. These
principles are “transcendent”, and “would continue to be true and to lead to exact results even
if the Theory of Everything were changed” [51, p.28].
23
The historical context [50] clearly shows that this reading of Anderson’s paper is the correct one.
The fact that emergent physical phenomena are regulated by higher organizing principles
implies that these phenomena are insensitive to microscopics; they are determined by higher
organizing principles and nothing else. Examples include the quasiparticles of a Fermi liquid and
superconductivity; such phenomena are dubbed ‘protectorates’ as they are protected against changes in the
microscopics. This is relevant to “the broad question of what is knowable in the deepest sense
of the term”, because “the nature of the underlying theory is unknowable until one raises
the energy scale sufficiently to escape protection” [51, p.29].
Let us again take the example of the superconductor, where the higher organizing principle
would be the breaking of a local gauge symmetry. Indeed, the field theory that is invoked
for explaining superconductivity cannot be reduced to the underlying equations governing the
electrons in the material: the field describes the collective, low-energy degrees of freedom, which
are entirely different from the microscopic constituents (i.e. the electrons). Moreover, they are
insensitive to the microscopic degrees of freedom as different materials can exhibit exactly the
same symmetry breaking pattern.
Drawing philosophical conclusions with respect to ontological emergence, however, again
requires something more.24 In line with Anderson, Laughlin and Pines do not rule out the
possibility of an explanation of how the field theory arises from the microscopic details; they just
state that this is, in general, extremely difficult – as the unsuccessful attempts at explaining high-
Tc superconductivity from microscopic details show – and not necessarily essential or interesting.
Indeed, the message of the paper is again polemic in trying to convince people that high-energy
physics is not more interesting than condensed-matter physics, and that the “deductive path
from the ultimate equations to the experiment without cheating” [51, p.30] is not necessarily
the path that theoretical physics should take.
We conclude, again, that the argument for ontological pluralism or emergence in physics –
and any conclusions with respect to ontology – is not taken from the physics itself, but rather
is inspired by the metaphysical aspirations of philosophers. Ontological commitments are not
made by physicists, but are ascribed to physical theories by philosophers. In the next section, we
show that this is not necessary, and that we can make perfect sense of emergence (as physicists
understand it) without invoking any ingredients from metaphysics or ontology. As Cassirer
would put it: “Science at least knows nothing of such a transformation into substance, and
cannot understand it” [1, p.192].

3.4 Renormalization as a functional concept


In the previous sections, we have tried to make clear that a physical description of nature de-
pends crucially on the scale at which it is being probed. This is true in high-energy physics,
where the program of effective field theories is an explicit realization of this insight, and in
condensed-matter physics, where a physical description of a certain material always requires
the identification of the relevant degrees of freedom at a certain scale. It is the development of
the renormalization group that has provided a mathematical formalism and physical mechanism
for making this idea concrete and workable: renormalization-group flows show how parame-
ters change under scale transformations, which of these parameters are relevant, what effective
models are obtained after a renormalization-group flow, etc.
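To fix ideas, these statements can be written schematically as flow equations for the couplings g_i of an effective model; the form below is generic and not tied to any particular theory (the beta functions β_i and the coarse-graining parameter b are the standard textbook ingredients):

% Schematic renormalization-group flow: the couplings change with the
% coarse-graining scale, and linearization around a fixed point sorts
% them into relevant and irrelevant ones.
\begin{align*}
  \frac{\mathrm{d} g_i}{\mathrm{d}\ln b} &= \beta_i(g_1,\dots,g_n),
      \qquad \beta_i(g^*) = 0 \quad \text{(fixed point)}, \\
  \frac{\mathrm{d}\,\delta g_i}{\mathrm{d}\ln b} &\approx \sum_j M_{ij}\,\delta g_j,
      \qquad M_{ij} = \frac{\partial \beta_i}{\partial g_j}\bigg|_{g^*},
      \qquad \delta g_i = g_i - g_i^*.
\end{align*}

Here b > 1 parametrizes the coarse-graining towards larger lengths (lower energies); eigendirections of M with positive eigenvalues are the relevant couplings that must be retained in the effective model, while those with negative eigenvalues are irrelevant and die out along the flow, which is why microscopically different systems can end up at the same fixed point and exhibit the same low-energy physics.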
The question that we take up in this section is how to make philosophical sense of this
24
The philosopher Morrison acknowledges that something more is needed since “we need to differentiate ex-
planatory from ontological claims since emergence is not simply about different organization principles being
important at different scales or laws not requiring specific micro details” [48, p.150]. For the physicists Laughlin
and Pines, however, this is exactly what emergence is, and, as we will see in the next section, this is exactly what
a transcendental analysis needs.
development in physics within a transcendental framework. We have seen that a realist phi-
losophy of physics with a focus on ontological considerations leads to unacceptable conclusions,
at least from the transcendentalist’s perspective. In particular, when taking the ontological
commitments of a certain physical theory seriously, there always looms the tension between
the emergence of new ontologies on a given energy scale, and the realization that these ontolo-
gies should be a consequence of the underlying microscopic degrees of freedom, albeit only in
principle.

3.4.1 Energy in the work of Cassirer


Before treating renormalization theory head-on, let us briefly recall how Cassirer incorpo-
rated the concept of energy in his philosophical analysis of nineteenth-century physics. We
have seen that in Cassirer’s time the goal of unification was an important motive in theoretical
physics, and that the mechanical program was one of the prime options for establishing this –
think about Maxwell’s efforts at reducing electromagnetism to a mechanical phenomenon.
According to Cassirer, this goal of unification is crucial in understanding the epistemological
structure of physics. Whereas the first step in obtaining physical knowledge is “the insertion
of the sensuous manifold into series of purely mathematical structure”, this must remain “in-
adequate as long as these series are separated from each other”. Indeed, the object of physical
knowledge means “more than the mere sum of properties; it means the unity of the properties,
and thus their reciprocal dependency”. This postulate finds its expression in physics if a princi-
ple is found, which “enables us to connect the different series, in which we have first arranged
the content of the given, among themselves by a unitary law” [1, p.190]. According to Cassirer,
such a principle is found in the nineteenth-century concept of energy, which allows us to connect
different physical phenomena (heat, motion, electricity, light, chemical reaction, etc) into an
inclusive system.
Cassirer suggests that the concept of energy is, from an epistemological point of view, prefer-
able to mechanical reduction as a way of unifying physics. Indeed, energism, in contrast to the
mechanical program, permits us to “relate two qualitatively different fields of natural phenomena,
without having previously reduced them to the processes of movement, and thus having divested
them of their specific character” [1, p.199].

Energism shows that this form of numerical order is not necessarily connected with-
out analyzing the things and processes into their ultimate intuitive parts, and recom-
pounding them from the latter. The general problem of mathematical determination
can be worked out without any necessity for this sort of concrete composition of a
whole out of its parts. [1, p.201]

So the demand of unification does not require that all of physics should be reduced to an inclusive
unitary picture in which every physical phenomenon is interpreted as an expression of the same
ultimate substance. Instead, “the demands of the theory of knowledge are rather satisfied when
a way is shown for [...] producing a complex of coordinations, in which each individual process
has its definite place” [1, p.203].

3.4.2 The necessity of scale and effective degrees of freedom


The first important lesson from renormalization theory is that the physical understanding of a
certain phenomenon takes recourse to degrees of freedom, physical concepts and mechanisms
that are only applicable at the scale at which the phenomenon takes place. Consequently, a
physical theory or description that conveys this understanding necessarily contains reference to
the scale at which the theory or description is supposed to hold, and it is generally impossible
to give a theory that describes a given system at all energy scales in a unified way.
The example of superconductivity illustrates why scale and effective degrees of freedom
necessarily enter into physical understanding. Suppose we had a way of actually solving all
the equations that describe the electrons on the microscopic level, but this solution acts as a
black box. Could we say that we have reached an understanding of why a certain material is
superconducting? The answer is obviously no, since we cannot see how the collective motion
of the electrons gives rise to this low-energy behavior. We need the field theory describing
the effective degrees of freedom at this energy scale, in order to formulate the mechanism of
symmetry breaking that gives rise to superconductivity. This implies that the need for an
effective description does not arise because of our limited intellectual or computational resources
in deriving the observable consequences of the microscopic theory, but that the use of effective
degrees of freedom is a necessary part of any physical theory.
How do we fit this feature of physical theories in a philosophical framework à la Cassirer?
We begin by noting that for Cassirer a physical theory describes an observable phenomenon
in the sense that the phenomenon undergoes a transition from what is offered in experience
to the form in which it appears in a physical statement. This transition or transformation is
mediated by the physical concepts, and only after a phenomenon has been given a place in the
conceptual structure does it gain a scientifically determinate meaning. In this sense the concepts
are constitutive for a scientific knowledge about the physical world. Importantly, these concepts
have a strict mathematical meaning independently from their application to physical reality; it
is only because they have this fixed meaning beforehand, that they can constitute the exactness
that is required for a scientific picture of the otherwise chaotic sensuous world.
The concepts of a field theory and symmetry breaking provide examples of such constitutive
concepts. They have a strict mathematical meaning before they are supposed to explain any-
thing: a quantum field is a well-defined mathematical object, the Lagrangian for the field can
have global or gauge symmetries, which the field can break by settling in a less symmetric configuration,
the breaking of a continuous global symmetry leads to massless excitations known as Goldstone
bosons (or, for a broken gauge symmetry, to a massive gauge field), etc. All
these concepts and their consequences are derived within a strictly mathematical setting, and it
is only by applying this field theory and its symmetry breaking to the degrees of freedom in an
actual physical system that we aim to gain knowledge about the physical world.25 Because all
these physical concepts now take the role of effective degrees of freedom, it is all the more clear
that they are not just abstracted from empirical observation. Indeed, one does not ‘observe’ a
gauge field when a system becomes superconducting, but one applies the concept of a gauge field
to a many-electron system in order to explain the observed superconducting properties. This
is an operation that supposes, from the epistemological point of view, an active function from
the side of theory. Furthermore, in the scheme of Cassirer there is no reference to ontology:
understanding a many-body system by an effective field theory does not carry any ontological
commitments. This does not imply that we do not take the philosophical ramifications of physics
seriously, but, instead, our epistemological framework captures exactly in what way the physicist
understands emergence.
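As an illustration of how such concepts are fixed mathematically before any application, consider the simplest textbook example, a complex scalar field with a global U(1) symmetry (a deliberately stripped-down stand-in for the field theories mentioned above; μ and λ are simply the couplings of the model):

% A complex field \phi with a U(1)-symmetric Lagrangian: the potential is
% minimized at |\phi| = v, the field settles in a less symmetric configuration,
% and the phase \theta becomes a massless (Goldstone) mode.
\begin{align*}
  \mathcal{L} &= \partial_\nu\phi^*\,\partial^\nu\phi
      + \mu^2|\phi|^2 - \lambda|\phi|^4,
      \qquad \phi \to e^{i\alpha}\phi \;\;\text{(symmetry of } \mathcal{L}\text{)}, \\
  |\phi|_{\min} &= v = \sqrt{\mu^2/2\lambda},
      \qquad \phi(x) = \big(v + h(x)\big)\,e^{i\theta(x)}, \\
  \mathcal{L} &\supset (v+h)^2\,\partial_\nu\theta\,\partial^\nu\theta
      \qquad \text{(no mass term for } \theta\text{: the Goldstone mode).}
\end{align*}

Every step in this chain is a purely mathematical consequence of the definitions; it is only when the field φ is taken to stand for the effective degrees of freedom of an actual material that the scheme acquires empirical content.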
So the procedure of theoretical physics is the following. A physical phenomenon can be un-
25
This ‘capturing the degrees of freedom in a mathematical framework’ can take on different forms: One can
write down a quantum field theory, where the fluctuations of the field correspond to the low-energy fluctuations
of the system; Or one writes down the Feynman diagrams for a quasiparticle propagator in a many-electron
system; Or one can think of quantum states in an effective Hilbert space, where an effective Hamiltonian captures
the interactions between the low-energy degrees of freedom; Or one writes down a path integral that acts as a
generating functional for computing the low-energy dynamical correlations; Or one comes up with a variational
wave function for the many-body system with the variational manifold encapsulating the low-energy subspace of
the system; etc.
derstood if a way is found of identifying the degrees of freedom that live on the energy scale at
which the phenomenon takes place, and formulating a mathematical theory that (i) describes
the behavior of these degrees of freedom, and (ii) has the observed phenomenon as a mathe-
matical consequence. The identification of effective degrees of freedom is an active operation
by the theoretical physicist, an operation that is physically motivated by the concept of scale
transformations and an associated renormalization-group flow in the space of effective models.
In fact, renormalization teaches us that this operation is a necessary step in understanding a
physical phenomenon: without specifying the scale at which a physical system is probed, it
makes no sense to refer to a certain description of the system.26

This is the first function of renormalization theory: it teaches us that effective degrees
of freedom necessarily enter into the physical description of a physical system at a certain
scale, and are only valid on that scale.

Thus renormalization theory teaches us that a physical theory is necessarily only valid at
a given energy scale, and cannot be naively extrapolated to give an accurate description of a
physical system across different scales. However, following Cassirer, this can only be the first
step in our physical understanding of the world: our physical picture of the world should be more
than the sum of successful theories for physical phenomena. Indeed, just as the energy concept
was understood by Cassirer as a principle for connecting the different physical phenomena, “in
which we have first arranged the content of the given, among themselves by a unitary law” [1,
p.190], we need here a principle or mechanism of connecting the different energy scales, which
explains why entirely different concepts are needed if the scale is changed, but also shows how
we can integrate these different conceptualizations into a unified whole.
It is, of course, the machinery of renormalization theory that meets this demand.
Indeed, renormalization-group flows give us a physical mechanism that explains why a physical
theory cannot be extrapolated across different energy scales, and explains how effective degrees
of freedom can arise that are qualitatively different from the underlying microscopic theory.

This determines the second function of renormalization theory: it teaches us how
effective degrees of freedom at lower-energy scales arise from the underlying microscopic physics.

This function of renormalization theory is important in two ways. The first is that it gives us, at
least in principle, a physical mechanism that explains the disconnectedness of different scales. Indeed,
as we have read in the papers by Anderson and Laughlin & Pines, the fully detailed mechanism
that gives rise to emergent physical phenomena is often not particularly interesting, and a
physicist can rest content with a theory on a certain scale without having reduced it to its
underlying microscopics: as long as the physics on that scale is properly described by a certain
theory, the phenomenology can be said to be understood in a satisfactory way. No deeper
insights are necessary here. Still, this procedure would be entirely unintelligible if the concept of
renormalization were absent: one still needs an understanding of how effective descriptions
can arise in general. Therefore, renormalization theory is crucial in the conceptual structure of
26
This epistemological reconstruction of what it means to understand superconductivity for a given material
explains how we should understand universality from a philosophical point of view. Indeed, the fact that it is
possible to apply the same mathematical formalism to different physical systems and at different energy scales does
not require us to draw deep philosophical conclusions about the ontology of the physical world. The mathematical
formalism does not carry any ontological commitments: it only carries a constitutive function of giving physical
phenomena such as superconductivity a place within a conceptual structure, and, as such, yielding a theoretical
understanding of what is happening if a material becomes superconducting. The observation that entirely different
physical systems can be understood with the same concepts is a feature of our physical picture of the world –
universality is rightly understood as a deep physical insight! – but does not present us with any epistemological
difficulties.
theoretical physics, as it makes the approach of effective descriptions on a certain energy scale
intelligible.
Secondly, renormalization theory always guarantees the physicist that there should be a
mechanism or explanation for the emergent physics, and motivates physicists to look for these
mechanisms. But renormalization theory also shows what this explanation should consist of: the
physicist should come up with a physical mechanism of how effective degrees of freedom can arise
from the microscopic constituents of the theory. In the case of superconductivity this is exactly
what happened: the mechanism of Cooper pairing explains how an effective bosonic field theory
can arise in a system of electrons. In the case of topological systems it is the entanglement
degrees of freedom that explain how anyonic quasiparticles can emerge from an electronic or
bosonic system. In fact, the rationale behind the whole field of strongly-correlated quantum
many-body physics is determined by this second function of renormalization theory.

3.4.3 The importance of the computational approach


One particular aspect of many-body physics that is typically not taken seriously in a philosoph-
ical analysis is the importance of numerics in understanding physical phenomena. Yet, there
are hardly any papers published in theoretical physics that do not contain a numerical part.
The reason why philosophy does not take numerics into account is that it does not seem
to provide any real insight into the physical mechanisms that are at work in nature. Indeed,
the idea is that a numerical computation can produce quantitative predictions from a physical
theory, but this does not add any understanding that was not already contained in the theory:
the computer only acts as a black box that spits out numbers at the end of a computation. If
the numbers match the experiment, this counts as a confirmation that the theory is correct,
but does nothing more than that. The prototypical example of this relation between a physical
theory and numerics is the determination of the hadron masses by lattice gauge theory, where
the fundamental Lagrangian provides the real physics of the system and lattice gauge theory
acts as a black box yielding the numerical value of the mass of the proton.27
Taking numerics seriously in physics starts by realizing why it is so hard to actually simulate
a many-body system. Suppose one has a microscopic theory for the elementary particles out of
which the many-body system consists, and one has all the fundamental equations one needs to
deduce, in principle, the microscopic behavior of the system – think of a gas of electrons inside
a metal, moving in the static potential generated by the lattice and mutually interacting via
Coulomb’s law. In principle, any quantum-mechanical problem should reduce to diagonalizing
matrices, for which there are very efficient algorithms available in any software package. The
problem is, however, that the size of these matrices scales exponentially with the number of
particles out of which the system is built! This implies that it is, as a matter of principle,
impossible to simulate a quantum-mechanical system that contains a large number of degrees
of freedom, just because the space of possible configurations (and the associated operators)
explodes.
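A back-of-the-envelope estimate makes this explosion concrete; the sketch below assumes spin-1/2 (two-state) degrees of freedom, but any local dimension leads to the same exponential wall:

# Back-of-the-envelope estimate of why brute-force diagonalization fails:
# the Hilbert-space dimension of N two-level systems is 2**N, and a dense
# Hamiltonian matrix of that size needs (2**N)**2 complex numbers.

BYTES_PER_COMPLEX = 16  # one double-precision complex number

for n_sites in (10, 20, 30, 40, 50):
    dim = 2 ** n_sites                          # Hilbert-space dimension
    matrix_bytes = dim ** 2 * BYTES_PER_COMPLEX
    print(f"N = {n_sites:3d}: dimension = {dim:.3e}, "
          f"dense Hamiltonian = {matrix_bytes / 1e9:.3e} GB")

# Already at N = 30 the matrix would occupy roughly 10^10 GB; a macroscopic
# system has N of order 10^23, so storing the full problem, let alone
# diagonalizing it, is hopeless.

Already for a few tens of particles the full Hamiltonian matrix could not be stored on any conceivable computer, which is why brute-force diagonalization is not an option for macroscopic systems.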
27
In a paper by Hartmann, which we have discussed in Sec. 3.3.2, the situation is perceived as follows:
In the case of the strong interactions, QCD does indeed specify the overall dynamics of the system;
there are quarks and gluons, and these entities interact in a very complicated way with each other
according to the Lagrangian density of QCD. But not much more can be said: the rest has to be
done numerically with the help of high-powered computers [...]. And computers function like a black
box. All possible Feynman diagrams are summarised, although, perhaps, only a few of them (or a
certain subclass of them) produce almost the whole effect under investigation. A knowledge of these
actually relevant processes would produce insight and understanding. Lattice gauge theory does not
produce this insight, and QCD is, therefore, effectively a black-box theory. [46, p.289]
This creates the situation that, in order to actually compute something, the physicist needs
to have an idea of which information about the system is relevant enough to store in the
computer: she needs to find a way of only doing numerical operations on the degrees of freedom
that are important for simulating a given phenomenon. It is a priori not clear that this is always
possible, but, as we have seen, it is precisely the theory of renormalization that guarantees
that effective degrees of freedom, which determine a system’s behavior at a certain scale, can be
identified. This, in turn, opens up the possibility of simulating a system at that scale, as long as
a way is found of implementing these degrees of freedom on a computer.
Through the formalism of renormalization, it appears that understanding a physical phe-
nomenon and simulating it on a computer follow a conceptually similar path. This is not a
coincidence, as this relation between understanding the low-energy behavior of a many-body
system and simulating it is exactly what motivated Wilson in his formulation of the renormal-
ization group. As we discussed in Sec. 3.2.2, it was the challenge of numerically simulating the
Kondo problem that made Wilson realize that one has to find a way of determining what the effec-
tive degrees of freedom are on a given energy scale. This shows that the renormalization group
provided, from the beginning, both a conceptual tool of understanding how theories change
under scale transformations, and a numerical procedure for simulating the effective degrees of
freedom on a certain scale.
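The following sketch is not Wilson’s actual numerical renormalization group, but a strongly simplified toy (a spin-1/2 XX chain, grown site by site) that illustrates the shared logic of such methods: enlarge the system, diagonalize, and keep only a limited number of low-energy states as the effective degrees of freedom for the next step.

# Toy illustration of Wilson-style iterative truncation (not the actual NRG):
# grow a spin-1/2 XX chain one site at a time, diagonalize, and keep only the
# m lowest-energy states as the effective degrees of freedom for the next step.
import numpy as np

sx = np.array([[0., 1.], [1., 0.]]) / 2
sy = np.array([[0., -1j], [1j, 0.]]) / 2

def grow_and_truncate(n_sites=20, m=16, coupling=1.0):
    h_block = np.zeros((1, 1))          # Hamiltonian of the current block
    edge_sx, edge_sy = None, None       # edge-spin operators in the kept basis
    for site in range(n_sites):
        d = h_block.shape[0]
        # enlarge: block (x) new site, coupling the edge spin to the new spin
        h = np.kron(h_block, np.eye(2))
        if edge_sx is not None:
            h += coupling * (np.kron(edge_sx, sx) + np.kron(edge_sy, sy)).real
        edge_sx = np.kron(np.eye(d), sx)
        edge_sy = np.kron(np.eye(d), sy)
        # diagonalize and keep the m lowest-energy states (the truncation)
        energies, states = np.linalg.eigh(h)
        keep = states[:, :min(m, h.shape[0])]
        h_block = keep.T @ h @ keep
        edge_sx = keep.T @ edge_sx @ keep
        edge_sy = keep.T @ edge_sy @ keep
    return np.linalg.eigvalsh(h_block)[0]   # ground-state energy estimate

print(grow_and_truncate())

This naive real-space blocking is known to be quantitatively poor, which is precisely what motivated Wilson’s logarithmic discretization for the Kondo problem and, later, the density-matrix renormalization group; but the structure of the procedure, truncation to the degrees of freedom that matter at the scale under consideration, is the same.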
In the forty years that followed, simulating many-body physics has required computational
physicists to come up with smart ways of capturing physical phenomena with limited numerical
resources. This has led to lattice formulations for field theories, which can then be simulated with
e.g. Monte-Carlo techniques, variational parametrizations that capture the essential features of
a many-body system at a given energy scale, mean-field approaches leading to self-consistency
equations, etc. All these examples of computational approaches to the many-body problem
show that physical understanding through numerical simulations has become essential in the
way physicists work, and it would be a philosophical mistake to reduce numerics to a black
box that does not lead to understanding of the physical phenomenon that is being simulated.
Instead, in the context of computational physics the two functions of renormalization theory
flow naturally out of pragmatic concerns: (i) because of the limits on computational resources,
the numerical simulation of a physical phenomenon necessarily requires an identification of the
relevant degrees of freedom, and (ii) the theory of renormalization (in the broadest sense) points
the way towards an efficient simulation of the many-body problem. Therefore, incorporating
computational physics within our philosophical analysis confirms our picture of renormalization
theory.

3.4.4 The object of physics


The two previous sections have shown that renormalization theory is constitutive for our phys-
ical picture of the world. We have shown (i) that understanding a physical phenomenon requires
the physicist to write mathematical theories about effective degrees of freedom that are only rel-
evant at a certain scale, (ii) that renormalization theory specifies in a mathematically exact way
how all these local understandings can be incorporated into an inclusive whole, and (iii) how both
analytical and computational approaches can contribute to understanding physical phenomena.
Therefore, renormalization theory provides us with the conceptual structure in which we can
further build and order a unified understanding of the physical world: We have theories about
physical phenomena that are only valid at a certain energy or length scale, and these different theories
are connected through scale transformations as they appear in the renormalization group.28
28
As an illustration of this view, the following quote by Leo Kadanoff (one of the co-founders of the theory of
renormalization) is remarkable:
This picture of the physical world is different from the one theoretical physicists have long
thought they were working towards. Indeed, another idea of unification has traditionally been the
motor behind ‘fundamental’ physics, viz. the idea that we, in the end, want to understand all
physical phenomena starting from a fundamental theory of the elementary constituents of the
physical world. This was the drive behind the mechanistic program in the nineteenth century,
or the hope in the early days of quantum mechanics, and remains, to this day, the goal that
string theorists set themselves. In its place, another idea of unification has taken root in the
structure of theoretical physics. We believe that this is, from the philosophical point of view,
the most interesting way of understanding the discussion concerning reduction and emergence.
We can understand the writings of Anderson and others [see Sec. 3.3.4] as a renunciation of this
old idea of unification, and as an articulation of the new idea of what a unified physics consists
of.29
It is interesting to note that this view on physics is much in line with Cassirer’s charac-
terization of physical concepts as relational and non-substantialistic. In the beginning of this
section, we have already reiterated the views of Cassirer on the energy concept as preferable
from an epistemological point of view, because it provides a way of relating qualitatively different
phenomena without reducing them to some common substantial basis. Renormalization
theory shows that there is no fundamental theory that explains all physical phenomena in one
stretch, but that physics always needs effective descriptions valid on certain scales. Yet, all
these effective descriptions are connected through renormalization-group flows, which determine
a strict mathematical relation between these descriptions. Therefore it would be a fatal mistake
to interpret the effective degrees of freedom on a given energy scale in a substantialistic way –
as the emergence of a new ontology – since they appear as elements in a renormalization flow.
In the spirit of Cassirer, the development of renormalization theory is interpreted as yet another
step in the evolution towards less and less substantialistic conceptions of the physical world, and
therefore confirms the progression in the historical development of theoretical physics.
We have tried to make clear that ontology in philosophy of physics is uncalled for. The
reason why philosophers take up this notion time and again should be viewed in the light of
the realism/anti-realism debate, and the fact that the ontological commitments of a certain
physical theory – the fundamental nature of the world that it lays bare – are important from a
After its modern construction by Wilson and others, the renormalization group has appeared in
thousands of papers devoted to the development of the understanding of physical, social, biological
and financial systems. However, renormalization is substantially more than a technical tool. It is
primarily a method for connecting the behavior at one scale to the phenomena at a very different
scale. It serves for example, to connect the physics at the scale of an atom with the observed macro-
scopic properties of materials. One might argue, and I believe that argument, that the connection
among “laws of nature” at different scales of energy, length, or aggregation is the root subject of
physics. One would then argue that Wilson has provided us with the single most relevant tool for
understanding physics. [52, p.2]

29
In this chapter we have largely ignored all the efforts that theoretical physicists are investing in string theory
as the best option for a unified theory encompassing both quantum field theory and gravity. These efforts can be
read as an articulation of the ‘old’ idea of unification, and could show that this idea has not at all disappeared
from contemporary physics. We should note, however, that the endeavors of string theory are not necessarily in
contradiction with the ‘new’ ideal that we have put forward. Indeed, in a paper by David Gross we read:
First this theory, used simply as an example of a unified theory at a very high energy scale, provides
us with a vindication of the modern philosophy of the renormalization group and the effective
Lagrangian that I discussed previously. [...] String theory could explain the emergence of quantum
field theory in the low energy limit, much as quantum mechanics explains classical mechanics, whose
equations can be understood as determining the saddlepoints of the quantum path integral in the
limit of small ℏ. [53, p.66]
How to incorporate string theory in our philosophical framework, however, we leave for further study.
philosophical point of view. As a consequence, it is on the level of ontology that the scientific
rationality of physical theories is to be maintained. In the framework of Cassirer, however,
we have found other resources for grounding the rationality of physics: this rationality is secured by showing how
physical concepts succeed in capturing more and more of the sensuous world in a mathematically
structured whole, and, as such, in giving physical phenomena an exact theoretical meaning. By
fitting renormalization theory within this framework, we have shown that the rationality of
contemporary physics can be maintained without taking recourse to a realistic and/or ontological
account of physics. Moreover, the work of Friedman has shown that this approach is not rendered
futile in the light of Quinean holism and Kuhnian paradigm dynamics, and our analysis can be
read as a confirmation of this project.

3.4.5 Historicizing renormalization


In Sec. 2.5.3 we have discussed a crucial distinction between the frameworks of Cassirer and
Friedman with respect to the constitutive function of a priori principles. Whereas for Cassirer
the conceptual structure of theoretical physics generates a scientifically determinate meaning
for empirical phenomena – there are no meaningful physical phenomena before the concepts
of physics – for Friedman there is a faculty of pure sensibility that generates a space of direct
perceptions, where coordinative principles are supposed to bridge between pure sensibility and
abstract (mathematical) physical theories. Let us see how the insights from this chapter can
shed more light on this distinction between Cassirer and Friedman.
In the previous subsections we have shown that the approach of Cassirer is very
well suited to integrate the theory of renormalization into the conceptual structure of contem-
porary theoretical physics. In particular, we have shown that the theory of renormalization,
and the associated ideas of scale and effective degrees of freedom, are necessary to yield definite
descriptions of physical phenomena. In that sense, the theory of renormalization yields a set
of principles that is constitutive for giving physical meaning to empirical phenomena. On the
other hand, we have identified the framework of renormalization as yielding a new blueprint of
what a unified physics should consist of. Physicists no longer aim for one comprehensive theory
that allows us to understand the physical world in one stretch, but rather look for a description of
the effective degrees of freedom that determine a given physical phenomenon on a certain scale.
This points to the regulative function of renormalization theory, because it teaches us what
the conditions of any physical theory are and in which direction physical theories are
evolving. Therefore, we believe that the theory of renormalization serves as an illustration of the
interplay between the constitutive and regulative dimensions of a priori concepts in theoretical
physics, an interplay that is characteristic of a priori principles in the approach of Cassirer. Also,
the history of the theory of renormalization clearly shows that this interplay is a dynamic one.
So what about the level of coordinating principles that Friedman takes to be essential for
attaching empirical meaning to the abstract theories of mathematical physics? In the first
instance, one is tempted to assign a coordinative function to the ideas of scale and effective
degrees of freedom. Indeed, whereas mathematical concepts are entirely abstract and lack any
empirical meaning, their empirical content is gained by specifying the scale at which they are
supposed to apply and the effective degrees of freedom they are supposed to capture. Yet, one
quickly realizes this is not what is happening: effective degrees of freedom do not live in this
perceptual space of pure sensibility, because they are elements of the mathematical framework
and do not have any meaning outside of it. Similarly, the concept of scale is a physical concept
that only makes sense in the context of the renormalization group and scale transformations. In
general we can say that it makes no sense to try to fit the principles of renormalization theory in
a schematism that tries to bridge somehow between the space of abstract mathematical theories
on the one side, and a faculty of pure sensibility on the other.
However, we have seen that Friedman’s account aims at opening up other dimensions in
which physical principles acquire meaning. In his notion of the historicized a priori, Friedman
sets out to secure the rationality of scientific principles by placing them in a larger intellectual and
technological development. In this chapter, we have focused on the internal dimension of the
principles of renormalization theory, for which we found the framework of Cassirer to be ideally
suited. Yet, we believe that the theory of renormalization can provide an interesting case for
exploring these larger dimensions that Friedman is aiming at. A few interesting directions could
be:

• the technological context: The advent of computational physics presupposes the development
of computer technology.

• the intellectual context: The development of renormalization ideas takes place in a physics
community that focuses on widening the scope and solving problems in more diverse
fields of physics, rather than on redefining the ‘fundamental’ concepts.

• the political context: It should be noted that this new way of doing physics falls within the
aftermath of World War II and, in particular, the Manhattan Project, which reshaped
the scope and funding of theoretical physics.

As an illustration of the intertwining of these three dimensions, we note that one of the first
applications of computers in physical research was during the Manhattan Project, where comput-
ers (and physicists) were used to determine how much energy is released in an atomic explosion
[54]. It remains a subject of further study to what extent these three dimensions can be worked
out, and whether they could lead to a thoroughly historicized account of the development of postwar
theoretical physics.
Bibliography

[1] E. Cassirer, Substance and Function, and Einstein’s Theory of Relativity (The Open Court
Publishing Company, 1923).

[2] H. Kragh, Quantum generations: a history of physics in the twentieth century (Princeton
University Press, 2002).

[3] P. M. Harman, Energy, Force and Matter: the Conceptual Development of Nineteenth-
Century Physics (Cambridge University Press, 1982).

[4] P. M. Harman, The Natural Philosophy of James Clerk Maxwell (Cambridge University
Press, 1998).

[5] S. Toulmin, “Introduction,” in Physical Reality: Philosophical Essays on Twentieth-century
Physics (Harper & Row, 1970).

[6] E. Mach, The Science of Mechanics. A critical and historical exposition of its principles
(Open Court Publishing Co., 1893).

[7] E. Mach, “The Guiding Principles of My Scientific Theory of Knowledge and Its Reception
by My Contemporaries,” in Physical Reality: Philosophical Essays on Twentieth-century
Physics (Harper & Row, 1910).

[8] M. Planck, “The Unity of the Physical World-Picture,” in Physical Reality: Philosophical
Essays on Twentieth-century Physics (Harper & Row, 1909).

[9] G. Holton, “Mach, Einstein, and the Search for Reality,” Daedalus 97, 636 (1968).

[10] A. Jensen, “Neo-Kantianism,” in Internet Encyclopedia of Philosophy (2013).

[11] S. Luft and F. Capeillères, “Neo-Kantianism in Germany and France,” in The History of
Continental Philosophy (Acumen Publishing, 2010) pp. 47–85.

[12] M. Friedman, Kant and the Exact Sciences (Harvard University Press, 1992).

[13] S. Edgar, “Hermann Cohen,” in The Stanford Encyclopedia of Philosophy (2015).

[14] M. Friedman, A Parting of the Ways: Carnap, Cassirer and Heidegger (Open Court, 2000).

[15] P. Duhem, La théorie physique: son objet, sa structure (Libraire Philosophique J. Vrin,
1906).

[16] H. Hertz, The Principles of Mechanics Presented in a New Form (Cosimo Classics, 1899).

[17] T. Mormann, “From Mathematics to Quantum Mechanics - On the Conceptual Unity of
Cassirer’s Philosophy of Science,” in The Philosophy of Ernst Cassirer: A Novel Assessment,
edited by J. T. Friedman and S. Luft (De Gruyter, 2015) pp. 31–64.

[18] E. Cassirer, Determinism and Indeterminism in Modern Physics (Yale University Press,
1956).

[19] T. A. Ryckman, “A Retrospective View of Determinism and Indeterminism in Modern
Physics,” in The Philosophy of Ernst Cassirer: A Novel Assessment, edited by J. T. Friedman
and S. Luft (De Gruyter, 2015).

[20] T. A. Ryckman, The Reign of Relativity: Philosophy in Physics 1915-1925 (Oxford Univer-
sity Press, 2007).

[21] M. Friedman, The Spinoza Lectures (University of Amsterdam) (2012).

[22] M. Friedman, Dynamics of Reason (Stanford Kant Lectures) (CSLI Publications, 2001).

[23] W. V. Quine, “Main Trends in Recent Philosophy: Two Dogmas of Empiricism,” The
Philosophical Review 60, 20 (1951).

[24] T. S. Kuhn, The Structure of Scientific Revolutions (University of Chicago Press, 1962).

[25] A. Pickering, Constructing quarks (University of Chicago Press, 1984).

[26] M. Friedman, “Ernst Cassirer and Thomas Kuhn: the Neo-Kantian Tradition in History
and Philosophy of Science,” Philosophical Forum 39, 239 (2008).

[27] E. Cassirer, in Nachgelassene Manuskripte und Texte, edited by J. M. Krois (2009).

[28] M. Ferrari, “Between Cassirer and Kuhn. Some remarks on Friedman’s relativized a priori,”
Studies in History and Philosophy of Science Part A 43, 18 (2012).

[29] M. Friedman, “Synthetic History Reconsidered,” in Discourse on a New Method: Reinvigorating
the Marriage of History and Philosophy of Science, edited by M. Domski and
M. Dickson (Open Court, 2010) pp. 571–813.

[30] M. Friedman, “Einstein, Kant, and the a priori,” in EPSA Philosophical Issues in the
Sciences: Launch of the European Philosophy of Science Association (Springer Netherlands,
2010) pp. 65–73.

[31] M. Friedman, “Reconsidering the dynamics of reason: Response to Ferrari, Mormann, Nord-
mann, and Uebel,” Studies in History and Philosophy of Science Part A 43, 47 (2012).

[32] P. K. Feyerabend, I. Lakatos, and M. Motterlini, For and Against Method (The University
of Chicago Press, 1999).

[33] “Choreographing the dance of electrons,” https://phys.org/news/2015-12-choreographing-electrons.html.

[34] D. Pekker and C. M. Varma, “Amplitude / Higgs Modes in Condensed Matter Physics,”
Annual Review of Condensed Matter Physics 6, 269 (2015).

[35] M. Levin and X.-G. Wen, “Colloquium : Photons and electrons as emergent phenomena,”
Reviews of Modern Physics 77, 871 (2005).

[36] M. E. Fisher, “Renormalization group theory: Its basis and formulation in statistical
physics,” Reviews of Modern Physics 70, 653 (1998).

[37] K. G. Wilson, “The renormalization group: Critical phenomena and the Kondo problem,”
Reviews of Modern Physics 47, 773 (1975).

[38] S. Weinberg, “Superconductivity for Particular Theorists,” Progress of Theoretical Physics
Supplement 86, 43 (1986).

[39] L. D. Landau, “The theory of a Fermi liquid,” JETP 3, 920 (1957).

[40] R. Shankar, “Renormalization-group approach to interacting fermions,” Reviews of Modern
Physics 66, 129 (1994).

[41] J. Polchinski, “Effective Field Theory and the Fermi Surface,” arXiv:hep-th/9210046
(1992).

[42] M. Gell-Mann and F. Low, “Quantum Electrodynamics at Small Distances,” Physical Re-
view 95, 1300 (1954).

[43] T. Y. Cao and S. S. Schweber, “The conceptual foundations and the philosophical aspects
of renormalization theory,” Synthese 97, 33 (1993).

[44] S. Weinberg, “Why the Renormalization Group is a Good Thing,” in Asymptotic Realms
of Physics: Essays in Honor of Francis E. Low, edited by A. H. Guth, K. Huang, and R. L.
Jaffee (MIT Press, 1983) pp. 1–19.

[45] J. Ladyman, D. Ross, D. Spurrett, and J. Collier, Everything must go: metaphysics natu-
ralized (Oxford University Press, 2007).

[46] S. Hartmann, “Effective Field Theories, Reductionism and Scientific Explanation,” Studies
in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern
Physics 32, 267 (2001).

[47] K. Crowther, “Decoupling emergence and reduction in physics,” European Journal for Phi-
losophy of Science 5, 419 (2015).

[48] M. Morrison, “Emergent Physics and Micro-Ontology*,” Philosophy of Science 79, 141
(2012).

[49] P. W. Anderson, “More is different,” Science 177, 393 (1972).

[50] E. Castellani, “Reductionism, emergence, and effective field theories,” Studies in History
and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics
33, 251 (2002).

[51] R. B. Laughlin and D. Pines, “The theory of everything,” Proceedings of the National
Academy of Sciences of the United States of America 97, 28 (2000).

[52] L. P. Kadanoff, “Kenneth Geddes Wilson, 1936-2013, an appreciation,” Journal of Statistical
Mechanics: Theory and Experiment 2013, P10016 (2013).

[53] D. J. Gross, “The triumph and limitations of quantum field theory,” in Conceptual Foun-
dations of Quantum Field Theory, edited by T. Y. Cao (Cambridge University Press, 1999)
pp. 56–67.

[54] R. P. Feynman, “Surely you’re joking, Mr. Feynman!”: adventures of a curious character
(Vintage, 1985).
