Explanation and Causation in Cognitive Neuroscience
My primary area of research concerns explanation and causation in cognitive neuroscience. My research on this topic involves several interrelated projects, the majority of which build on my dissertation and focus on the phenomenon of robustness in neural systems—that is, the brain’s capacity to maintain functions despite substantial variation among the component parts and processes that perform those functions. These forms of functional stability can be found at all levels within the brain, from individual neurons to small neural circuits to brain-wide networks. Robustness has profound consequences for our understanding of the mind-brain relation, posing challenges to causal discovery, causal hypothesis testing, and basic aspects of model construction. But I argue these consequences cannot be easily accommodated or even clearly articulated within existing philosophical frameworks based on multiple realization and a dichotomy between reduction and autonomy.
In “Multiple Realization and Robustness” (Biological Robustness: Springer 2018), I offer a novel characterization of multiple realization in terms of causal explanatory frameworks, where I redefine the concept as causal stability at a higher level despite relevant causal heterogeneity at a lower level. I show that robustness in neural systems exemplifies this form of multiple realization and illuminates a novel set of challenges to mind-brain reduction. This work provides an important update to the concept of multiple realization and connects it to recently discovered and highly significant empirical phenomena that are found across many levels in neuroscience—from cellular to systems to cognitive. In addition, this work reorients the epistemic significance of multiple realization away from broad theses about the autonomy of psychology from neuroscience and instead toward a rich set of underexplored challenges to understanding causation, causal inference, and complex inter-level relationships in biological and cognitive systems.
In “Robustness and Modularity” (forthcoming in the British Journal for the Philosophy of Science), I argue that robustness is compatible with the condition of modularity in interventionist accounts of causal explanation and their attendant methods of causal inference. A system is modular when intervening on a causal relationship does not change other (nonconnected) causal relationships within that system. This condition is crucial to ensuring the reliability of causal inferences based on experimental interventions. There is tension between robustness and modularity because robustness is often achieved via compensatory changes to other causal relationships within a system. I show how this tension can be resolved by understanding the nature of feedback control within robust systems, and I further explore the challenges feedback control itself poses to understanding causation and causal inference. This paper provides clarity on key concepts within the interventionist account of causation, resolves an existing debate about the tension between robustness and interventionism, and lays groundwork for future research incorporating timescale sensitivity into the interventionist framework.
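The core idea, that feedback compensation can mimic a modularity violation, can be sketched in a toy simulation. Everything here (the two-component setup, the setpoint, the gain) is hypothetical and purely illustrative:

```python
def simulate(clamp_a=None, steps=200):
    """Two components, a and b, jointly produce an output that an
    integral-style feedback controller drives toward a setpoint.
    clamp_a: if set, hold component a fixed (an experimental intervention)."""
    setpoint, gain = 10.0, 0.1      # hypothetical values
    a = b = 5.0
    for _ in range(steps):
        if clamp_a is not None:
            a = clamp_a             # intervention: clamp component a
        error = setpoint - (a + b)  # feedback signal
        if clamp_a is None:
            a += gain * error / 2   # both components correct the error...
        b += gain * error / 2       # ...unless one is clamped
    return a, b, a + b

# Unperturbed, the output sits at the setpoint with a = b = 5. Clamping a
# at 2 triggers feedback that raises b until the output returns to the
# setpoint: intervening on a has changed b, which looks like a modularity
# violation but is really compensation by the controller.
```

The sketch shows why the appearance of modularity failure dissolves once the feedback loop is made explicit: the a–b "dependence" runs through the controller, not through a direct causal link between the components.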
I also have several projects targeting broader topics in causation and explanation in the mind-brain sciences. In “The Cognitive Neuroscience Revolution” (Synthese 2016, coauthored with Gualtiero Piccinini), we challenge classical views in philosophy of science and metaphysics of mind about the autonomy of psychology from neuroscience. In place of the traditional, fragmented, multidiscipline view of cognitive science, we advocate a framework that stresses integration across multiple mechanistic levels wherein disciplinary boundaries are frequently blurred. Our alternative framework descriptively captures current practice in cognitive neuroscience and also provides normative guidance for how different research programs within cognitive neuroscience should be integrated.
In a separate paper, “Mechanistic Abstraction” (Philosophy of Science 2016, also coauthored with Gualtiero Piccinini), we argue that mechanistic explanations frequently abstract away from idiosyncratic details of individual cases and events to provide explanations with varying degrees of scope or generality. Our argument runs counter to recent interpretations that maintain mechanistic explanations are always enhanced by incorporating more detail about target systems. We show that mechanistic explanations frequently serve as type-level explanations with scope beyond particular instances. Once this generality is acknowledged, it becomes clear that greater detail will often undermine explanatory value, particularly by reducing scope. This paper clarifies the relationship between, on one hand, detail and abstraction in the explanans and, on the other, the scope of the explanandum in mechanistic explanation. The clarity offered by our account helps to resolve contentious issues in recent debates over the more general framework of mechanistic explanation.
I have also been developing a project in collaboration with Felipe De Brigard on the nature of human brain networks. That project recently received support through the National Science Foundation’s Science and Technology Studies program (Award #2218556). Network neuroscience has grown exponentially in the past decade, particularly in the context of neuroimaging. This rapid development has induced many deep disagreements among researchers in the field. For instance, neuroimagers disagree regarding fundamental questions about what network models correspond to—whether they are mere summaries of data or instead correspond to important functional units within the brain—and whether they should be characterized cognitively (e.g., “salience” network) or strictly neuroanatomically (e.g., “medial fronto-parietal” network). Such issues cannot be settled simply by looking to data: they reflect deeper conceptual issues regarding how, for instance, abstract and idealized mathematical models provide useful information about target systems and what that means for the correspondence between the model and the actual system in the world. Our project seeks to bring clarity to these issues. We plan to produce several interdisciplinary articles that will, among other things, provide an analysis of the sorts of evidence required to attribute functions to brain networks and argue that different epistemic aims mandate different degrees of correspondence between network models and the world. This research project is timely, addresses a growing crisis in network neuroimaging, and promises to have significant impact both within and beyond cognitive neuroscience and philosophy of neuroscience.
Philosophy of Perception
The broad theme of my work in philosophy of perception lies in the idea that many of our intuitions about conscious experience and perceptual content are unreliable and that this unreliability presents potential pitfalls to both philosophers and perceptual psychologists working on these topics. My approach to this topic is heavily informed by perceptual psychology and neurophysiology, and it is guided by the idea that phenomenal experience is richer and more diverse than our workaday abilities to describe that experience with language.
In “Operationalizing Consciousness: Subjective Report and Task Performance” (Philosophy of Science 2013), I argue that standard forced-choice protocols in psychophysics, which require subjects to indicate whether they saw a stimulus, often rely on problematic assumptions about consciousness and representational content. I argue that the use of such tasks is grounded in a binary view of consciousness, which assumes that subjects must either be conscious of stimuli or must have no awareness of them at all. Such a binary view neglects a range of plausible alternatives between these extreme ends of the spectrum, and so the use of dichotomous tasks in the study of unconscious perception is dubious and question-begging. This paper provides a substantive, philosophically motivated critique of a prominent methodology in the study of perceptual experience.
In a similar vein, in “Range Content, Attention, and the Precision of Representation” (Philosophical Psychology 2020), I question whether a notion of phenomenal consciousness that is totally separate from representational content is coherent. Many philosophers have argued that attention can cause changes in phenomenal experience without any corresponding change in representational content, thus objecting to representationalist accounts of phenomenal experience. I show that these arguments fail to consider that attention can increase the precision of representational content. I then provide evidence from perceptual psychology that suggests that visual attention does in fact increase precision. My argument in this paper provides a potent set of tools that enable representationalists to handle challenges raised by the phenomenal effects of attention. However, it also paves the way for future work challenging a thesis commonly defended in tandem with representationalism—the transparency thesis—on grounds that transparency may be at odds with variations in the precision of representational content.
I am currently collaborating with two perceptual psychologists, Jenn Lee at NYU and Lara Krisst at UC Davis, and another philosopher, Jerry Viera at Sheffield, on a project investigating dissociative theories of consciousness—that is, theories that maintain a distinction between phenomenal consciousness and access consciousness. We are designing several variants of the classic Sperling experiment, which has been a notable but contentious source of evidence in debates about dissociative theories. This project is ongoing and recently received support from the Templeton World Charity Foundation through the 2020/2021 Summer Seminar in Neuroscience and Philosophy (SSNAP) at Duke.
Brains exhibit remarkable capacities to maintain functions despite substantial variation in the component parts and processes that support those functions. This robustness of neural functions can be found at all levels of organization within the brain. For example, individual neurons show stable electrophysiological properties despite variation in the ion channels that determine those properties. Neural circuits produce stable outputs despite variation in the synaptic strengths between, and the intrinsic activity of, the cells that make up those circuits. And neuroplasticity can enable recovery of function from macroscale damage to entire cortical areas. These different forms of neural robustness are eminently relevant to anyone interested in understanding the mind-brain relation, explanation in neuroscience, and the relationships between different levels of organization in complex systems.
Philosophical debates about the mind-brain relation have, however, failed to make substantial contact with this phenomenon of functional robustness. This is particularly puzzling given that the concept of multiple realization has been central to these debates since the 1970s. In broad terms, multiple realization is the claim that higher-level properties correspond to a number of distinct lower-level properties. And it is typically cited as a crucial premise in arguments against reductionism and in arguments looking to secure the autonomy of the so-called special sciences from the physical sciences. Functional robustness, at least on its face, would seem to be of patent relevance to multiple realization, as it demonstrates a clear case in which there is stability at the level of the function performed, despite variation in the causal structures that support performance of that function.
Philosophical accounts of multiple realization have, however, had a blind spot for the types of cases functional robustness presents. Particularly in the context of the mind-brain sciences, these accounts have tended to focus on the possibility of the same mental state arising in different organisms (e.g. mammalian pain vs. octopus pain) or in silico (i.e. the possibility of artificial intelligence), whereas functional robustness points toward a sort of causal heterogeneity underlying stable functions within a particular species or even within a particular organism.
Some reasons for the myopia of traditional accounts of multiple realization trace to the state of the science at the time early debates about multiple realization were taking place. For instance, advances in computer science suggested that artificial intelligence bearing similarities to human intelligence might soon be developed. And little was understood about the complexity underlying functions within particular neural systems, which supported the assumption that a mental state, like pain, is not multiply realized within a particular species (let alone within a particular organism). This meant looking to computers or other organisms for potential sources of multiple realization, rather than looking at how stable functions are performed within particular organisms.
In this dissertation, I provide a novel account of multiple realization. My account reframes the concept in terms of causal theories of explanation, in contrast to the original framing in terms of the deductive-nomological theory of explanation. I align my account of multiple realization with the phenomenon of functional robustness, particularly by examining a number of cases of robustness in neural systems. I then explore the epistemic consequences of functional robustness. In particular, I argue that systems that exhibit robustness will tend to violate causal faithfulness, thus posing challenges to causal hypothesis testing and causal discovery. I then consider the proposal that robustness undermines modularity – i.e. the ability of causal relationships within a system to be disrupted independently. I argue that it does not, and that robustness instead is often due to feedback control driving systems toward particular outcomes. As a result, robustness will be accompanied by failures of acyclicity, not failures of modularity. I conclude by contrasting these epistemic consequences of functional robustness with those traditionally associated with multiple realization.
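The faithfulness point can be illustrated with a toy linear model (the variables and coefficients are hypothetical, chosen only to exhibit the structure): X affects Y directly and via a compensatory pathway Z, the two influences cancel, and so X and Y come out statistically independent despite being causally connected. This is exactly the pattern that defeats independence-based causal discovery:

```python
# Toy faithfulness violation (hypothetical coefficients): X causes Y directly
# (coefficient +1) and via Z (coefficient +1, then -1), so the two causal
# paths exactly cancel in expectation.
import random

def sample(n=20_000, seed=0):
    rng = random.Random(seed)
    xs, ys, zs = [], [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        z = x + rng.gauss(0, 0.1)        # compensatory pathway tracks x
        y = x - z + rng.gauss(0, 0.1)    # direct effect cancelled by z
        xs.append(x); ys.append(y); zs.append(z)
    return xs, ys, zs

def corr(u, v):
    """Pearson correlation, computed from scratch to keep the sketch self-contained."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
    su = (sum((a - mu) ** 2 for a in u) / n) ** 0.5
    sv = (sum((b - mv) ** 2 for b in v) / n) ** 0.5
    return cov / (su * sv)

xs, ys, zs = sample()
# X and Z are strongly correlated (the pathway is real), yet X and Y are
# near-uncorrelated: an independence test wrongly suggests X does not cause Y.
```

The compensatory pathway is precisely what robust systems implement, which is why robustness tends to produce this kind of unfaithful statistical profile.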