Editorial: Predictive Processing and Consciousness

This Special Issue features contributions by leading researchers working on the topic of predictive processing (PP) and consciousness. By bringing together philosophers central to the development of the framework as well as experts on methodological approaches to the scientific study of consciousness, we hope to foster an ongoing debate that clears up conceptual confusions and moves toward a unified approach to investigating consciousness within the PP framework.

Jakob Hohwy extends his account of predictive processing to the case of consciousness. Self-evidencing describes the purported predictive processing of all self-organizing systems, whether conscious or not; self-evidencing in itself is therefore not sufficient for consciousness. Different systems may, however, be capable of self-evidencing in specific and distinct ways. Some of these ways of self-evidencing can be matched up with several properties of consciousness and can explain them. This carves out a distinction in nature between those systems that are conscious, as described by these properties, and those that are not. The approach sheds new light on phenomenology and suggests that certain kinds of self-evidencing may be characteristic of consciousness.
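
For readers less familiar with the formalism, self-evidencing is standardly glossed (our gloss, not a quotation from Hohwy's contribution) as a system maximizing the evidence for its own generative model m, or equivalently minimizing a variational free energy F that bounds the negative log evidence of its sensory observations o, where q(s) is an approximate posterior over hidden states s:

F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s \mid m)\big] \;\geq\; -\ln p(o \mid m).

Minimizing F over time makes the system's states evidence for the model it embodies, whether or not the system is conscious; Hohwy's question is which further, more specific ways of doing this line up with the properties of consciousness.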

Maxwell Ramstead and colleagues offer a way of applying a computational approach to the phenomenal aspect of consciousness (Ramstead et al. 2022). Their paper presents a version of neurophenomenology based on generative modelling techniques developed in computational neuroscience and biology, an approach they call computational phenomenology because it applies methods originally developed for computational modelling to phenomenology. The contribution includes an in-depth discussion of how this application of generative modelling differs from previous attempts to use it to explain consciousness. In short, generative modelling allows one to construct a computational model of the inferential or interpretive process that best explains this or that kind of lived experience.
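
The core move can be stated compactly (a schematic gloss in our notation, not the authors' own model): a generative model specifies a joint density over hidden causes s and observable data o, and 'inverting' it yields a posterior over the causes of a given observation,

p(s \mid o) = \frac{p(o \mid s)\, p(s)}{p(o)}, \qquad p(o) = \int p(o \mid s)\, p(s)\, ds.

In computational phenomenology, the 'data' to be explained are structured descriptions of lived experience rather than neural recordings, and the generative model is a formal hypothesis about the inferential or interpretive process that would best generate them.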

Martina Vilas, Ryszard Auksztulewicz and Lucia Melloni (2022) likewise extend a computational approach to consciousness, highlighting the importance of active inference. The mechanistic framework of active inference has recently been put forward as a principled foundation for an overarching theory of consciousness, one that would help address conceptual disparities in the field (Wiese 2018; Hohwy and Seth 2020). For that promise to bear out, the authors argue, current proposals resting on the active inference scheme need refinement to become a process theory of consciousness. One way of improving a theory in mechanistic terms is to use formalisms, such as computational models, that implement, attune and validate the conceptual notions put forward. In this contribution, they examine how computational modelling has been used to sharpen the theoretical proposals linking active inference and consciousness, focusing on the extent to which, and how successfully, such models have been developed to accommodate different facets of consciousness and experimental paradigms, and on how simulations and empirical data have been used to test and improve them. While current attempts using this approach have shown promising results, the authors argue that they remain preliminary. To establish their predictive and structural validity, these models need to be tested against empirical data, that is, new and previously unobserved neural data. A remaining challenge for active inference to become a theory of consciousness is to generalize the models to accommodate the broad range of consciousness explananda, and in particular to account for the phenomenological aspects of experience. Notwithstanding these gaps, the approach has proven a valuable avenue for theory advancement and holds great potential for future research.
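
To give a sense of what such a process theory involves (this is the standard discrete state-space formulation widely used in the active inference literature, not a summary of the specific models the authors review), the generative model is typically a partially observed Markov decision process,

p(o_{1:T}, s_{1:T}, \pi) = p(\pi)\, p(s_1)\, \prod_{\tau=1}^{T} p(o_\tau \mid s_\tau)\, \prod_{\tau=1}^{T-1} p(s_{\tau+1} \mid s_\tau, \pi),

with the likelihood and policy-dependent transitions usually parameterized by matrices (often labelled A and B), prior preferences over outcomes by a vector C, and initial-state priors by D. Simulations of belief updating under such a model can then be compared with behavioural and neural data to test how well the scheme captures the targeted aspects of conscious processing.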

Niia Nikolova, Peter Waade, Karl Friston and Micah Allen (2022) turn from active to interoceptive inference within the predictive processing framework. The mainstream science of consciousness offers a few predominant views of how the brain gives rise to awareness, chief among them Higher-Order Thought Theory, Global Neuronal Workspace Theory, Integrated Information Theory, and hybrids thereof. In parallel, rapid developments in predictive processing approaches have begun to outline concrete mechanisms by which interoceptive inference shapes selfhood, affect, and exteroceptive perception. The authors consider these new approaches in terms of what they might offer our empirical, phenomenological, and philosophical understanding of consciousness and its neurobiological roots.

Geoffrey Lee and Nico Orlandi (2022) address the issue of probabilistic representations as a core element in predictive processing accounts. As mentioned above, PP construes perceptual processing as probabilistic and posits probabilistic representations. Lee and Orlandi consider three models of sensory activity from perceptual neuroscience, namely signal detection theory (SDT), probabilistic population codes (PPC), and sampling, and then reflect on the sense in which the probabilistic states introduced in these models are probabilistic representations. Comparing and contrasting these probabilistic states with credences as they are understood in epistemology, they suggest that probabilistic representation, in an appropriately robust sense, can be understood as a form of analog representation. Finally, they apply this to the question of whether conscious experience represents uncertainty, interpreting it as the claim that there are phenomenal features of experience that serve as analog probabilistic representations.
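
As a concrete illustration of one of these models (a minimal sketch under the textbook assumptions of independent Poisson neurons with Gaussian tuning curves; the parameters and code are ours, not the authors'), a probabilistic population code lets a downstream decoder recover a full posterior distribution over the stimulus from a single vector of spike counts, so that the population activity carries uncertainty as well as a point estimate:

```python
import numpy as np

# Minimal probabilistic population code (PPC) sketch: independent Poisson
# neurons with Gaussian tuning curves encode a stimulus; a flat-prior decoder
# recovers a full posterior over stimulus values from one spike-count vector.
rng = np.random.default_rng(0)

prefs = np.linspace(-10, 10, 41)        # preferred stimulus of each neuron
gain, width = 20.0, 2.0                 # tuning amplitude and width (illustrative)
grid = np.linspace(-10, 10, 201)        # hypothesis grid for decoding

def rates(s):
    """Mean firing rates of the whole population for stimulus value(s) s."""
    return gain * np.exp(-0.5 * ((np.atleast_2d(s).T - prefs) / width) ** 2)

# Encoding: one noisy population response to a true stimulus.
s_true = 2.5
r = rng.poisson(rates(s_true)[0])

# Decoding: with Poisson noise the log posterior is linear in the counts r,
#   log p(s | r) = sum_i [ r_i * log f_i(s) - f_i(s) ] + const   (flat prior),
# so a single response vector yields a distribution, not just an estimate.
f_grid = rates(grid)                               # (201 stimuli x 41 neurons)
log_post = r @ np.log(f_grid.T + 1e-12) - f_grid.sum(axis=1)
post = np.exp(log_post - log_post.max())
post /= post.sum()

s_map = grid[post.argmax()]                        # point estimate
s_sd = np.sqrt(post @ (grid - post @ grid) ** 2)   # width encodes uncertainty
print(f"MAP estimate: {s_map:.2f}, posterior s.d.: {s_sd:.2f}")
```

It is this graded, distribution-carrying profile of activity, rather than a single point estimate, that candidate notions of probabilistic representation, including the analog-representation proposal, are meant to capture.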

Robert Rupert (2022) turns to the topic of the self within a broadly predictive processing framework of cognitive systems and consciousness. His essay presents the conditional probability of co-contribution account of the individuation of cognitive systems (CPC) and argues that CPC provides an attractive basis for a theory of the cognitive self. The argument proceeds largely indirectly, by emphasizing empirical challenges faced by approaches that rely entirely on PP mechanisms to ground a theory of the cognitive self. Given these challenges, Rupert argues, we should prefer a theory of the cognitive self of the sort CPC offers, one that accommodates variety in the kinds of mechanism that, when integrated, constitute a cognitive system (and thus the cognitive self), over a theory on which the cognitive self is composed of essentially one kind of thing, for instance prediction-error minimization mechanisms. The final section focuses on one of the central functions of the cognitive self: engaging in conscious reasoning. Rupert argues that conscious, deliberate reasoning poses an apparently insoluble problem for a PP-based view, one that seems to rest on a structural limitation of predictive-processing models: conscious reasoning is a single-stream phenomenon, but for PP to apply, two streams of activity must be involved, a prediction stream and an input stream. Thus, with regard to the nature of the self, PP-based views must yield to an alternative approach, regardless of whether proponents of predictive processing as a comprehensive theory of cognition can handle the various empirical challenges canvassed earlier in the paper.

Two contributions address the topic from the perspective of embodied or enactive accounts of cognition. Julian Kiverstein, Michael Kirchhoff and Michael Thacker (2022) start by focusing on pain experience. Their paper aims to provide an account of the subjective character of pain experience in terms of what they call 'embodied predictive processing' (EPP). They argue that the predictive machinery that constitutes pain experience is not brain-bound but is distributed across the whole body. The prediction error minimising system that generates pain experience comprises the immune system, the endocrine system, and the autonomic system in continuous causal interaction with pathways spread across the whole neural axis. These systems, they argue, function in a coordinated and coherent manner as a single complex adaptive system, which they refer to as the neural-endocrine-immune (NEI) system, and which maintains homeostasis through the process of prediction error minimisation. They propose a view of the NEI ensemble as a multiscale nesting of Markov blankets, integrating scales from the single cell up to the embodied person in pain. In this way, they show how the EPP theory can make sense of how pain experience is neurobiologically constituted, and how a PP theory of pain can meet the constraint of accounting for the highly complex phenomenology of pain experience.
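
The Markov blanket construction they rely on can be stated briefly (a standard definition, not the paper's specific partition of the NEI system): a set of blanket states b renders internal states \mu and external states \eta conditionally independent,

p(\mu, \eta \mid b) = p(\mu \mid b)\, p(\eta \mid b).

In the multiscale picture, the internal states at one scale (say, a cell) are themselves composed of blanketed subsystems, while in turn forming part of a larger blanketed system (a tissue, an organ, the embodied person), so that prediction error minimisation can be described consistently at every level of the nesting.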

Shaun Gallagher, Daniel Hutto and Inês Hipólito (2022) argue that a number of perceptual (exteroceptive and proprioceptive) illusions present problems for predictive processing accounts. They review explanations of the Müller-Lyer Illusion (MLI), the Rubber Hand Illusion (RHI) and the Alien Hand Illusion (AHI) based on the idea of Prediction Error Minimization (PEM), and show why those explanations fail. In spite of the relatively open communicative processes which, on many accounts, are posited between hierarchical levels of the cognitive system in order to facilitate the minimization of prediction errors, perceptual illusions seemingly allow prediction errors to rule. Even if, at the top, humans have reliable and secure knowledge that the lines in the MLI are equal, or that the rubber hand in the RHI is not one's own hand, the system seems unable to correct for the sensory errors that form the illusion. The authors argue that the standard PEM explanation, based on a short-circuiting principle, does not work. This is the idea that where there are general statistical regularities in the environment a kind of short-circuiting occurs, such that the relevant priors are relegated to lower-level processing and information from higher levels is not exchanged (Ogilvie & Carruthers 2016), or is not as precise as it should be (Hohwy 2013). Such solutions, they contend, violate the idea of open communication and/or over-discount the reliable and secure knowledge that is in the system, without convincing explanation. Finally, they propose an alternative, 4E (embodied, embedded, extended, enactive) solution, arguing that PEM fails to take into account the 'structural resistance' introduced by material and cultural factors in the broader cognitive system.

Kathryn Nave, George Deane, Mark Miller and Andy Clark (2022) focus attention on recent developments that highlight expected future free energy. They ask under what conditions the minimization of this quantity might underpin or help explain conscious experience. Their speculative suggestion is that expected free energy is relevant only insofar as it delivers what Ward, Roberts & Clark (2011) have previously described as a sense of our own poise over an action space. Perceptual experience, they argue, is nothing other than the process that puts current actions in contact with goals and intentions, enabling some creatures to know the space of options that their current situation makes available. This proposal links the minimization of expected free energy to work suggesting a deep connection between conscious contents and contents computed at an 'intermediate' level of processing, apt for controlling action.
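
For reference (this is the standard decomposition from the active inference literature, not a formula introduced in their paper), the expected free energy of a policy \pi at a future time \tau is often written as

G(\pi, \tau) = D_{\mathrm{KL}}\big[q(o_\tau \mid \pi) \,\|\, p(o_\tau)\big] + \mathbb{E}_{q(s_\tau \mid \pi)}\big[H[p(o_\tau \mid s_\tau)]\big],

that is, risk (the divergence of predicted from preferred outcomes) plus ambiguity (the expected uncertainty of outcomes given states). Minimizing G over policies therefore favours actions that both realise preferred outcomes and reduce uncertainty about the states behind them; it is this quantity whose relevance to conscious experience the authors interrogate.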
