The Content Hypothesis
Sketching a representational route to explaining phenomenal consciousness
Note to readers: This post presents an overview of a representationalist approach to explaining phenomenal consciousness, one worked out in more detail in academic papers here and here, which have supporting citations. Your comments are welcome.
Abstract: When we’re awake and conscious, the world appears to us in terms of experienced sensory qualities delivered, somehow, by perceptual contact with what’s outside the head. The question is: how do we account for that appearance in terms of what we scientifically represent the world to be, namely a physical, mind-independent reality in which we don’t find consciousness itself as a public object? The content hypothesis suggests that if we construe sensory qualities as a species of representational content, this can explain their two main features: their subjective privacy and their qualitativeness. As a general rule of representation, we won’t find the terms of representational content - concepts, numbers, propositions, or sensory qualities - in the world they represent. Hence phenomenal content is available only to, private to, the system that instantiates the representational vehicles; it isn’t something we can observe about the system from the outside. And since self-maintaining representational systems at our level need basic, non-decomposable epistemic primitives that are not themselves represented, taking qualities to be the content of such primitives can account for their ineffability, apparent intrinsicality, unmediated presence, and specific, immediately recognizable character.
The Appearance of a World
As suggested by philosophers and cognitive scientists such as Thomas Metzinger, Anil Seth, Karl Friston, Antti Revonsuo and others, conscious experience can be construed as a qualitatively rendered self-in-the-world reality model. When we’re conscious and out and about, the world seems immediately present to us since we’re not generally aware of our experience as an epistemic intermediary. But the world, including our own bodies, only appears to us as delivered by perception and interoception.1 If in being conscious we consist of a behavior-guiding reality model, this naturally suggests that representation might be key to understanding consciousness, and indeed representationalism is a mainstream view with many variants explored by philosophers such as Fred Dretske, Michael Tye, and William Lycan. Most of the major theoretical contenders, e.g., global neuronal workspace, predictive processing, information integration, recurrent processing, and higher order thought, are consistent with consciousness as a representational phenomenon. If perceptual experience reliably covaries with our sensory engagement with the world, a possible hypothesis is that the basic phenomenal particulars of that experience - pain, red, sweet, the usual suspects - are representational content carried by the neural vehicles involved in representational functions such as vision and touch.
The vehicle-content distinction is illustrated by your reading this sentence: you don’t see concepts or propositions sitting on this page of text, only the black and white letter forms and words that constitute the vehicles of this sentence’s conceptual and propositional content. But that content, as delivered by your reading this, is perfectly real as part of your behavior-guiding world model: you understand what it’s about. Likewise, according to what I will call the content hypothesis, you won’t see phenomenal (that is, qualitative) content when peering into the brain, only the neural vehicles that carry it. But such content is unequivocally real for the person whose brain it is. There is nothing more concretely, immediately, non-conceptually present for you than a stabbing pain, despite its invisibility and unavailability to the rest of us (such pain can occur without any visible behavioral manifestation).
The Need for a Reality Model
The content hypothesis can be motivated as follows. To meet the behavioral challenges posed by living in uncertain and changing environments, creatures like us need an updateable, behavior-guiding best guess as to what’s in our immediate vicinity. That guess, as predictive processing accounts of cognition have it, must be available online in advance to guide behavior, since action often takes place under time pressure. We generally aren’t afforded the leisure to build a representation of the environment and then act on it, so we perforce act on what our current predictive model says is out there. If the prediction is wrong but not fatal, the model gets revised accordingly and we live another day. Since in the heat of the moment we don’t have time to second-guess that prediction, we’ve been naturally selected to be reasonably good predictors.
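The predict-act-revise loop described above can be sketched as a toy update rule in the spirit of predictive coding. This is a minimal illustration only; the function names, scalar setup, and learning rate are assumptions for the sake of the sketch, not anything drawn from the predictive processing literature.

```python
# Toy predictive update: the agent acts on its current best guess,
# then revises that guess by a fraction of the prediction error.
def revise(prediction: float, observation: float, learning_rate: float = 0.3) -> float:
    """Move the model's estimate toward what was actually observed."""
    error = observation - prediction        # prediction error
    return prediction + learning_rate * error

# The agent must act on `estimate` *before* seeing each outcome.
estimate = 0.0
for observed in [1.0, 1.0, 1.0]:            # a stable environmental signal
    estimate = revise(estimate, observed)

# After a few rounds the estimate approaches the signal (here ~0.657),
# without ever being second-guessed mid-action.
```

The point of the sketch is only the ordering of events: the guess guides behavior first, and revision happens afterward, when there is leisure for it.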
The sensory qualities of phenomenal consciousness, the basic constituents of our reality model, are exactly what we can’t second-guess: they are immediately and untranscendably present as the constituents of our perceptual experience. As V.S. Ramachandran and William Hirstein put it in “Three Laws of Qualia,” your sensation of red is irrevocably red: you can’t choose to experience it otherwise, a good thing for the reliable detection of ripe fruit and banded coral snakes. Red, and all the other phenomenal qualities that populate your consciousness, usually combined in phenomenal gestalts corresponding to objects and situations, function as dependable, albeit not infallible, subjective indicators of what’s out there in the world.
They are also reliable indicators of what’s in here, in the body. Emotions (fear, joy, anger), internal sensations (lust, hunger, thirst), and more diffuse aspects of phenomenal consciousness such as the sense of being a self, present themselves as immediately and untranscendably here and now, each with its own immutable phenomenal character. You can’t distance yourself from or change the qualitative way it is to be a system tasked with maintaining itself in good working order – the survival mandate. Variations along the pain-pleasure and desire-disgust continuums are the subjective indicators of what that mandate requires of you.
Subjective indicators are only available to the conscious system. The objective representational situation involves functions such as multi-modal perception and interoception that are completely specifiable in physical, flow-chartable terms of neural assemblies and their activation patterns, terms that make no mention of conscious experience. Neuroscience doesn’t “see” phenomenal content, only neural goings-on. Hence arises the hard problem of consciousness: why should there be a subjective, conscious reality model constituted by phenomenal content if the causal work of guiding behavior is handled by the neurophysiology of perceptual cognition hooked up to effectors like arms and legs? And how might phenomenality be entailed as a function of such cognition?
Blocking the Representational Regress
When addressing these questions, it’s important to note that we’re not going to see (observe, measure, detect) sensory qualities like pain, red, or sweetness generated as a further physical effect of the perceptual functions they’re associated with – all we’re ever going to see is neural goings-on. Consciousness is not produced by the brain as a distinct observable public object, a point made by David Papineau, Susan Blackmore, and me way back in 1995 in the Journal of Consciousness Studies. Moreover, each sensory quality’s unique, immediately recognizable subjective character can’t be specified in terms of objective, quantifiable metrics, the usual scientific requirement for calling something physical. But even if qualities resist physicalization in these respects, their nature as representational contents might be naturalistically explicable, perhaps as an entailment of our being systems with necessarily limited cognitive resources.
As suggested by Thomas Metzinger in his tour de force Being No One: The Self-Model Theory of Subjectivity, to avoid a metabolically costly and time-consuming representational regress, self-maintaining systems like us must have bottom-level representations that are not themselves further represented, but simply function as discrete, non-decomposable elements out of which more complex states of affairs are represented. To function effectively in real time, we need neurally instantiated behavior-guiding epistemic primitives that reliably co-vary with the state of the environment and body. This lines up with the subjective fact that our sensory experience of the world is built out of monadic, atomic qualities - the basic elements of our phenomenal reality model that can’t be second-guessed. The characteristic feels of redness, sweetness, and bitterness, for example, don’t admit of decomposition into more basic qualitative components. The content hypothesis has it that the epistemic primitives – neural invariants in different sensory modalities operated on by higher-level representational functions that map objects and scenes – constitute the physical, observable representational vehicles that carry this basic, irrevocable representational content – what we can call phenomenal primitives.
Features of Phenomenal Content
Because such content is bottom-level and not further represented, the hypothesis can explain why red and other basic sensory qualities - the phenomenal primitives - have the phenomenal properties usually attributed to them. Most notably, they each have a specific, immediately recognizable, non-conceptual and unanalyzable (atomic) qualitative character. Such characters, being atomic, are thus ineffable, since there are no further terms in which they can be described (about this see “Plain vanilla”). They present themselves as subjectively intrinsic for the same reason: their character seems sui generis, not a function of any discernible conscious relation. And they are directly, irrevocably present in experience, not conceptual or propositional products of conscious representational inference. If such content exists for a system – content having a specific character which is ineffable, subjectively intrinsic, and unmediatedly present - there’s no sense in which that system’s representational processing can be conceived as going on “in the dark,” so to speak. The “inner light” of consciousness is nothing over and above these enumerated features of phenomenal content, whether or not that content involves the visual sensory modality.2
Phenomenal contents aren’t themselves objects of descriptive knowledge but rather the basic, not further represented terms in which knowledge about the world, and interoceptive knowledge about the body, are non-conceptually couched in experience. When combined in innumerable configurations and intensities representing objects and situations, they make up the familiar, ordinary appearance of reality, your reality model. That model usually works well enough to keep you out of trouble, so we can judge it as being more or less accurate or correct. But curiously enough, basic sensations are neither right nor wrong, correct nor incorrect, but simply reliable indicators of states of affairs outside the head. Your red is a reliable indicator of ripe strawberries in your vicinity. And, being yours alone, it’s incomparable: there’s no public standard of red to check it against, nor any way to compare it to someone else’s given the privacy of experience (see next section and again, Plain vanilla). The same goes for any basic phenomenal quality of your sensory experience. Since there are no objective standards by which to ascertain the phenomenal characters of your sensations, the question of why they have just these characters becomes inapposite. The content hypothesis thus relieves a theory of consciousness from the obligation to explain why our respective reds or our respective pains have the particular characters they do.
Privacy and the Representational Relation
Besides accounting for phenomenal properties, the content hypothesis also comports with experiential privacy. We don’t find concepts, propositions, numbers, or – the crucial point here – experienced qualities among the denizens of the intersubjectively available material world. As cognitive systems we necessarily deploy epistemic primitives as bottom-level, neurally instantiated co-variants when representing the world and our bodies, but their content – the phenomenal primitives – aren’t objects we can point to and observe in that world as we represent it. This can explain why phenomenal consciousness, construed as basic representational content, isn’t an observable: it only exists for the system instantiating the representational vehicles. Since content isn’t something we ever see in spacetime (recall the text example above), we’re not going to find red looking in the brain. Consciousness as phenomenal content, therefore, can be construed as a private representational reality while the world as it appears to us via experience can be construed as a represented reality, the one in which we find the neural vehicles but not the content they carry. This is an instance of what might be a general rule of representation: we don’t and won’t find the terms that constitute our representational reality (concepts, numbers, propositions, or sensory qualities) in the world - our represented reality - as represented in those terms. See section 5 of “Locating consciousness” (Journal of Consciousness Studies, 2019) for an elaboration of this point.
Theoretical Compatibility
That phenomenality consists of bottom-level, neurally carried content comports with neurobiologically oriented representationalist approaches to consciousness such as global neuronal workspace (GNW) theory and predictive processing. The content hypothesis can motivate research to find neural assemblies or activation patterns that play the role of epistemic primitives, for example as informational invariants amplified and broadcast by the GNW.
On the predictive processing account, such invariants could correspond to what Andy Clark, Karl Friston, and Sam Wilkinson describe as
...mid-level re-codings of impinging energies that are estimated as highly certain, in ways that leave room for the same mid-level encodings to be paired with different higher-level pictures, including ones in which nothing in the world corresponds to the properties and features at all (as we might judge in the lucid dreaming case).
The irrevocability of basic qualities, on the content hypothesis, would be the high estimated certainty of mid-level encodings: we aren’t in a representational position to second-guess the redness of red as it appears in our prediction-based phenomenal world models.3
On what’s known as quality space theory, epistemic primitives and their associated qualitative content would be stable nodes or attractors in multidimensional quality spaces corresponding to sensory modalities (vision, touch, etc.), each of which has its dedicated neural basis in the brain. Such nodes, for example those in color vision corresponding to subjectively experienced basic hues such as red, yellow, green and blue, would be the invariant end points of dimensions along which qualitative variation is experienced, e.g., that orange is more similar to red than to blue, and green closer to blue than to red.
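The structural claim here - that experienced similarity orderings fall out of distances in a quality space - can be sketched in a few lines. The hue coordinates below are illustrative assumptions only (hue treated as an angle on a circle), not measured values from any quality space model.

```python
# Toy quality space for hue: each basic hue is a point on a circle,
# and experienced similarity is modeled as angular distance.
# The specific angles are hypothetical, chosen only for illustration.
hues = {"red": 0, "orange": 30, "yellow": 60, "green": 120, "blue": 220}

def similarity_distance(a: str, b: str) -> float:
    """Angular distance between two hues; smaller means more similar."""
    d = abs(hues[a] - hues[b]) % 360
    return min(d, 360 - d)

# The similarity orderings mentioned in the text fall out as distances:
assert similarity_distance("orange", "red") < similarity_distance("orange", "blue")
assert similarity_distance("green", "blue") < similarity_distance("green", "red")
```

On this picture, the nodes (the dictionary keys) play the role of invariant end points, while the distance function captures the dimensions along which qualitative variation is experienced.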
On the interoceptive side, we would expect to find valence or hedonic primitives representing threats or enhancements to the integrity of the system, the contents of which would be experienced as variations of pain and pleasure, the basic subjective indicators of well-being. Even more basically, the ascending reticular activating system (ARAS) has been proposed as the carrier of the phenomenal content of bare awareness – what you experience just upon awakening in the morning. As hypothesized by Thomas Metzinger, it’s the untranscendable subjective indicator of being a cognitive system poised to deploy its world model.
Whether the content hypothesis gets traction is a matter of research in cognitive neuroscience to flesh out such possibilities, as well as conceptual development in understanding mental representation. It will gain empirical support should, as suggested above, neural invariants be found that correlate with experienced qualities. The selective deactivation or disruption of such invariants in psychophysical experiments involving perceptual processing should alter the qualitative features of a subject’s conscious experience in predictable ways.
Obviating Mental Causation
In addition to being compatible with empirically oriented representationalist theories, the content hypothesis can dissolve the vexed problem of mental causation: how does phenomenality contribute to behavior control? Construing it as content suggests that it doesn’t. Instead, the neural representational vehicles help carry out the objective causal work of guiding behavior, while experience – phenomenal content - is the subjective indicator that such work is going on, not a causal contributor. True, conscious qualitative content is often what we can’t help but experience as causal: feeling pain is what, from my subjective standpoint, seems to make me wince. But as the saying goes, correlation ain’t necessarily causation. On the content hypothesis, pain is a reliable indicator of nociception and the learning of avoidance behavior, but does no causal work itself. It’s a bit of phenomenal content that arises in parallel with the neural and muscular processes associated with it, which are doing the work. Evolution selected for the behaviorally advantageous, physically instantiated cognitive capacities with which phenomenal consciousness is associated, but not for phenomenality itself, at least not as a causal contributor to action.4 By avoiding the need for phenomenal causation, the content hypothesis respects the causal closure of the physical while giving experience its own proprietary and essential – for everyday purposes – content-level explanatory space. That I report eating chocolate because it tastes so good is convenient phenomenal explanatory shorthand for the complex neuro-muscular story going on under the hood. That the taste per se isn’t doing causal work needn’t worry us, since the neural processing is already handling the job perfectly well. About the phenomenal and physical as two explanatory spaces running in parallel, see here.
Objections
A host of objections will crop up against the content hypothesis; I’ll mention just a few here, and feel free to state yours in the comments. It may not appeal to physicalists, since it posits that phenomenal content - thus phenomenal consciousness - may not be a straightforwardly physical phenomenon, but rather essentially representational. I’ll just observe here that physicalism as a metaphysical thesis is itself a representational conclusion about reality, a high-level conceptual view, not arrived at independently of our representational capacities. It may be true, but shouldn’t be assumed to be. Consciousness research may show that representation, not physicality, comes first. Or it may not - epistemic humility is in order on both sides of this question, it seems to me; methodologically, we should remain metaphysically agnostic when seeking to explain consciousness.
Some physicalists such as David Papineau will claim that sensory qualities just are a certain category of neurally realized functional states, but the physicalist identity claim runs into difficulties, some of which are explored here. Illusionists, a subset of physicalists, will say there’s nothing phenomenal about conscious experience to begin with, so the hard problem doesn’t get off the ground. I address illusionism in section 3 of Locating consciousness (JCS 2019), in an exchange with Keith Frankish here, in a critique of his reactivity schema theory here, and in an analysis of Dennett’s view of mental content here.
Open Questions
The status of content and the role of representation in behavior (if any; enactivists think representation is overrated) are a major focus of cognitive science, so whether experience can be construed as representation involving phenomenal content may well hang on further conceptual and empirical developments. In her book Deflating Mental Representation, Frances Egan argues that attributing representational content to neural states is a useful pragmatic explanatory gloss on their possible functional role, but that distal content, content about external states of affairs, isn’t a determinate, objective feature of neural states. Regarding phenomenal content, she observes that
No philosophical theory of perceptual experience—not representationalism, sense data theory, naïve realism, nor the qualia version of adverbialism—explains phenomenal character. None of these theories explain why looking at a red tomato produces an experience with its distinctive phenomenal character rather than, say, one with the phenomenal character characteristic of a visual experience of green grass, or a mental state with no phenomenal character at all. (126)
Whether this state of affairs will continue indefinitely or whether some theory of phenomenal consciousness will close the explanatory gap described by this quote is an open question. There’s a great deal of intellectual and empirical capital now devoted to the hard problem and the (not so) “easy” problems of explaining features of conscious experience in terms of neuroscience (what collectively Anil Seth calls the “real” problem of consciousness). The theoretical landscape is overcrowded and due for culling as research methods improve and the AI revolution prompts reconsideration of what sorts of systems can be conscious. Whether the content hypothesis or something like it makes the cut and becomes a full-fledged, viable theory will depend on its explanatory virtues and empirical testability, both yet to be determined.
1. If you’ve had a lucid dream, you know that the sleeping brain, deprived of all sensory input, is all that’s needed to generate the appearance of a world in vivid qualitative detail.
2. Characterizing consciousness as “the lights being on” inside and its absence as things “going on in the dark” are of course visual metaphors. They can lose their grip if we consider what consciousness is like for systems without visual processing, or for individuals blind from birth.
3. Note that “mid-level” refers to an intermediate layer of predictive processing in which epistemic primitives that carry bottom-level, monadic phenomenal content might be realized.
4. As Frances Egan puts it in her book Deflating Mental Representation: “Recall that characterizing something as a representation presupposes a distinction between a vehicle and the content that it carries, where the former has causal powers.” (144, emphasis added)

I’m certainly aligned with you regarding the truth of phenomenal content, Tom. But I think it’s the still-primitive state of neurological and cognitive science that’s forcing you to make some ultimately spooky concessions here. And I do suspect that you aren’t exactly happy with an ultimate privacy and such for phenomenal content. You just aren’t aware of a reasonable way around such things, and so cling to the hope that there’s actually nothing spooky here. Consider an alternative, however, which would invalidate all such spooky notions.
We know that when light enters your eyes, associated neural information is sent to your brain, which is to say a vast non-conscious computer that algorithmically processes input information for associated output function. It’s also known that the best neural correlate for consciousness found so far is synchronous neural firing. So it could be that processed light information causes neurons to fire in your head with a synchrony that creates an electromagnetic field which itself resides as the you that sees what you do. If the brain’s EMF happens to be the substrate of consciousness (as proposed by Johnjoe McFadden in 2000) then phenomenal existence would no longer need to be considered “inherently private” or any other standard but ultimately spooky attribute. And why has science been so slow to empirically explore such a natural possibility? Perhaps because many today fail to grasp that the most popular supposedly natural proposal (computational functionalism) is ultimately supernatural. I discuss such magic in my post #3, as well as EMF consciousness in my post #4. This probably isn’t the sort of criticism that you’re currently looking for, but perhaps an interesting possibility to consider as well?