See Also: Book Notes, (me), Notes on Consciousness, Seth: Being You, Dehaene: Consciousness and the Brain, Haidt: Happiness Hypothesis, Barrett: 7.5 Brain Lessons, Barrett: How Emotions Are Made, Sterling: Allostasis, Human: Makes Us Unique, Consciousness: Confessions, Blank Slate, Neuroscience of Human Relationships, Thinking, Fast and Slow

The Brain's Representational Power

The Brain's Representational Power: On Consciousness and the Integration of Modalities
Cyriel M.A. Pennartz

See Also: Neurorepresentationalism, Cyriel M.A. Pennartz, What is neurorepresentationalism? From neural activity and predictive processing to multi-level representations and consciousness, Behavioural Brain Research, Volume 432, 2022, 113969, ISSN 0166-4328, https://doi.org/10.1016/j.bbr.2022.113969

Key idea #1: hierarchy from single neurons, to ensembles (cortical columns), to unimodal networks, to multimodal meta-networks
Key idea #2: consciousness is distributed through the brain

image PennartzFig9_6.jpg

If you have read a handful of consciousness books, I recommend you read this one by Cyriel Pennartz. It lays the groundwork for a theory of consciousness he calls Neurorepresentationalism: essentially, how we represent and model the world in our heads. Pennartz recently debated Stanislas Dehaene, the primary advocate for the Global Neuronal Workspace Theory. Unfortunately, I never saw a scorecard published.

As part of his model for how we represent the world with our neurons, he takes a deep dive into all the sensory systems and how they come together. There is much more on olfaction than I have ever read before. He diagrams a hierarchy on page 284.
"Concretely, the modified account ranges from the single-neuron level to functional ensembles (for which empirical evidence is emerging) and, hence, to unimodal meta-networks that are integrated into the higher level of multimodal meta-networks."

I love this plain statement that consciousness is distributed through the brain on page 259:
"... conscious representation is not located within any specific node of the entire system. We might find neural correlates of consciousness at multiple anatomic locations, but a more interesting quest is to identify the components of representation those locations mediate."

If you like this quote on page 132, then you will like the book.
"The task of rapidly constructing a multimodal, situational representation capturing at least roughly what is going on in the world is what I call the brain's representational problem. The brain's solution to it is what we call consciousness or conscious representation. This concept is not entirely new. For instance, Koch (2004) proposed that visual awareness presents us with the "gist" of a visual scene a succinct summary of "what is in front of me". Earlier on, contemporary philosophers already emphasized the singular, unified nature of conscious experience and its characteristic first-person perspective. Also Marr's (1982) model on the brain's construction of progressive "sketches" for vision (primal, 2.5-D, and 3-D) incorporated the notion of a spatial representation for conscious vision."

Pennartz has some cool research! p.262 "...firing rate is used to code feature specificity and that spike phases are important for coding relational aspects, providing a basis for meaning. Therefore, attractors at higher sensory and cognitive levels are predicted to be robustly structured in the phase domain as well."


p.222 " The current framework acknowledges that conscious systems represent events and objects beyond the system's own events and states. Because the term "intentionality" is often used in a linguistic or cognitivist context and carries connotations both to (motor) intentions and higher-order, cognitive belief systems I prefer to avoid, I will refer here to the constructive and projectional capacity of conscious systems. That is, our brains generate conscious representations having perceptual content, but inherent to this content is that perceived objects and indeed whole situations or scenes- are projected to, or situated at, different locations than the physical position of our brains. Integral to healthy, full-blown representations is that they are spatially ordered, including the external locations at which objects are situated."

p.142 " The anatomical top-down feedback referred to here pertains to recurrent feedback across the cortex in general, taking effect as soon as any "higher" area is activated by feedforward input. For example, area V2 feeds its L2-3 activity back to VI as soon as its superficial neurons start firing. No feedback from higher cortical areas is presumed necessary for this low-level feedback to occur."
...
"Tactile information, coming in via the ventral posterior thalamic nuclei, is processed sequentially in subregions of SI (areas 3a and 3b to areas 1 and 2), which send their outputs to secondary somatosensory cortex (SIl) and higher cortical regions, especially posterior parietal area 5. Is the local wiring diagram of SI similar to that of visual cortex (see figure 6.2) And is a hierarchy also present in the somatosensory system?"

p.820 " A second, related way of looking at modality identification is to ask to what extent information gathered in a given modality X has predictive value for the information gathered via a separate sensing system Y. The more correlated the two inputs in X and Y are, the higher the predictive validity. This view is akin to the theoretical framework of 'predictive coding,' developed in the field of vision, holding that what we perceive is not directly our environment but rather the inferred causes of the sensory inputs we receive. Under this view, the brain constructs a model that is continuously predicting what goes on in the external world to explain the sensory changes reaching our brain, and it uses newly arriving sensory inputs to compute error signals between predicted and actual world states, updating the model."

Structures

p.136 " In chapter 2 we saw how the cortex is composed of six layers, stacked on top of each other like pancakes. Cortical areas are classified as having a layer 4 (L4) or as lacking this layer. By and large, areas harboring a L4 are sensory in nature, whereas those without one are located mostly in the frontal lobe but also include some temporal lobe areas. Because L4 has a granular (grainy) appearance due to its numerous stellate (*star-like") neurons, subsets of frontal areas are also referred to as "agranular" or "dysgranular.""

Wiring Up

p.103 " Charles Bell (1869/1811) and Johannes Müller (1838) adopted a more general perspective on the coding of sensory modalities, taking Young's theory into account and extending it to the brain and afferent nerves. Bell and Müller examined whether the nature of a sensation is determined by a specific pattern of activity emitted by sensory transducers on the body surface or by the distinct properties of that receptor. The first option, referred to as 'pattern coding,' is exemplified by a hypothetical photoreceptor sensitive to both red and blue light. It would transmit wavelength information by different activity patterns (e.g., a burst of action potentials in response to red light vs. a slow, regular train to blue light). The second option, referred to as "labeled-line coding," is the hypothesis adopted by Bell and Müller, and it received widespread support from subsequent research, most notoriously by Hermann von Helmholtz (1863, 1866) on the visual and auditory system. It postulates that it is the nature of the distinct sensory receptors, and the nervous pathways that relay their activation patterns to the brain, that determines the modality a sensation belongs to."

Interesting Tidbits

p.33 "From the viewpoint of consciousness and searching for overarching, central representations, the brain area at the apex may be extremely interesting. What is this highly integrative "summit" region, where all inputs come together? It turns out to be the hippocampus, placed in a privileged position where it can absorb preprocessed information from all main sensory modalities: vision, audition, touch and pain, proprioception, smell, taste, and the vestibular senses. This might tempt us to think of this structure as an all-seeing eye, a "seat of the soul." Yet, this structure does not appear to account for consciousness but forges multimodal inputs into a neural code for spatiotemporal context for multimodal configurations of stimuli in time and space and has a main function in memory, not conscious perception per se."

p.40 "So what makes neural activity in the primary visual cortex VI "visual"? Is it enough to say that the neurons respond to visual input? The problem here is that neither these neurons, nor their fellow circuits in higher areas, have any intrinsic knowledge specifying the kind of information they are processing. Their sensory modality is not a built-in property of the cortical area they are located in. Will it have to be afferent input that determines the content or meaning of what their firing activity represents? In primates, area V1 is in the unique position that it is virtually the only cortical area receiving direct and strong input from the visual thalamic nucleus"

p.234 " When scrutinizing neural mechanisms potentially accounting for the impression of perspectival unity, it appears unlikely that different low-level maps in multiple sensory modalities would converge onto a single, higher-order map. Apart from the issue of whether a single, overarching map exists, multimodal convergence subserving spatial referencing likely takes place in, at least, a set of parietal and frontal (particularly premotor) areas. Specifically, convergence of visual, auditory, and somatosensory information in the dorsal, "spatial" stream is thought to occur in the inferior parietal lobule, which includes parietal area 7A and the LIP, and in ventral premotor cortex. Multimodal information also converges onto the dorsolateral prefrontal cortex, but lesions in this area cause primarily working memory and related executive deficits, not loss of consciousness."

p.242 " There is more to feature binding than attention or memory-based mechanisms. Psychophysical experiments suggest that feature grouping and scene segmentation can also proceed in a "bottom-up" fashion, without attentional or mnemonic guidance. These processes may be directly driven by early analysis of sensory input structure, occurring in parallel across a visual or auditory scene and even before focal attention or memory guidance can take effect. This bottom-up processing is also referred to as pre-attentive Gestalt group-ing, as it performs grouping or binding based on early analysis of Gestalt features such as common motion, colinearity, and similarity. If a bird suddenly flies right past your head, its commonly moving features are so salient and intrusive that no top-down guidance will be required." ======= taste & olfaction

p.78 "Similarly, taste signaling depends on a long chain of lower sensory stations originating at the taste buds in the tongue, but taste perception is predominantly associated with the gustatory cortex, anatomically corresponding to portions of the insular cortex (figure 2.10; Small, 2010). Loss of taste perception ageusia is caused by lesions of the gustatory cortex but may also occur when the corresponding relay nucleus of the thalamus is damaged (Kim et al., 2007). Whereas the perceptual intermingling of odor and taste during a good meal is universally recognized, little is known about how this integration comes about at a thalamocortical level"

P.101 " Smell, we now know, relates to the chemical structure of odorants that determines their specific binding to about 1,000-1,300 different kinds of genetically encoded receptors in the nasal epithelium. We do not know how it is that the spike patterns traveling from receptors to the brain give rise to a sensation of smell, but at least we can say that the specificity of a detected odor corresponds to the specificity of the odorant's chemical structure and its binding to a multitude of olfactory receptors."

p.147 " In the olfactory bulb, axons from thousands of sensory receptor cells converge on a few tens of relay cells, organized in anatomically discrete units called "glomeruli." Neurons in these glomeruli project to the piriform cortex (Franks & Isaacson, 2006). Both across the bulb and piriform cortex, a spatially varied pattern of activity arises, a topographic "signature" of a specific odor. Whereas the bulb may be dedicated to low-level processes such as feature extraction and selection of information, the piriform cortex may mediate higher olfactory functions involving associative memory, pattern separation, and the synthesis and distinction of odor "objects" against a background.

Here we stumble on some strong deviations from canonical cortex. First, anatomists have traditionally distinguished the piriform cortex as a main component of the "paleo-cortex" (phylogenetically "old" cortex), as it consists of only three layers instead of the usual six found in neocortex. Secondly, the mediodorsal thalamic nucleus might act as a relay for olfactory perception, but [studies] showed that piriform-prefrontal connectivity is mostly direct; axons running indirectly via the mediodorsal nucleus constitute a minority. Although this matter is not yet settled, it seems reasonable to propose that direct piriform-prefrontal connections, and not the thalamus, play a central role in conscious smell perception."

p.146 " There are only a few areas of brain research that underwent such a strong transformation in knowledge as olfactory research over the past decades. Originally, seven basic types of odor receptors were envisioned, and specific combinations of receptor activity patterns were proposed to account for the large diversity in odors we can discriminate. Currently, no less than about 1,000 molecularly distinct odorant receptors have been identified in rodents (380 in humans), originating from a large family of genes Buck"

p.150 " Although several neuroimaging and EEG studies have linked prefrontal activity to consciousness, the nature of this evidence is correlative and the activity may arise from related processes such as verbal reporting, working memory, and cognitive control. When lesion evidence is added to the electrophysiological evidence, we can conclude that the current evidence pleads against a major, indispensable role of frontal cortex in consciousness except for the significance of orbitofrontal and anterior cingulate cortex for olfaction and affective pain."

p.214 " As we already touched upon Kant's notion of time as a necessary attribute of consciousness, it is time to address his concept of space. Could there be experiences completely or largely devoid of spatial aspects? Smell, for instance, is not exactly our most spatially accurate modality. With our eyes closed, a scent of perfume might come from anywhere around us. But leaving the problem of source localization aside, where is it that we smell relative to our body map? We associate odors with the position of our nose we know we do not smell at the tip of our index finger, for instance. Humans are microsmatic, having low olfactory capacities relative to dogs, but even we have an ability to perceive whether an odor reaches the olfactory epithelium via the nose (orthonasally; associated with an external source) or via the mouth-larynx opening (retronasally; associated with food flavors). Spatiality is not alien to modalities different from vision or smell, ranging from taste, touch, pain, and thermoception to less localized body sensations such as visceral sensations, or, more popularly, "gut feeling." Even in a Ganzfeld in which subjects are facing an evenly lit and contourless visual field people having both eyes open retain a notion of space, a realization they are looking at a spatially divisible environment and not at a point or 3-D object."

p.223 " ...memory-based interpretation of sensory inputs has been elaborated by explaining perception as a constructive process depending on the interaction between sensory input and stored knowledge. A general problem in establishing an interpretation is how particular sensory inputs come to be put in place as evidence: which inputs are worth considering, and which others should be ignored? A first subprocess in this selection is attention, which focuses us on the most relevant subspaces of all sensory dimensions, either in a bottom-up (attention driven by input saliency) or top-down fashion (driven by pre-established knowledge or memory. Attentional processes should be segregated from conscious processing per se, because we can be conscious of inputs without attending to them. The two processes also appear to be neurally dissociable. Altogether, psychophysics suggests regarding attention as a function to steer and amplify conscious processing and facilitate relevant inputs making the transition from nonconscious (or preconscious) to conscious processing. Secondly, the selection of inputs as evidence for perceptual constructs relies not only on single-source evidence but also on the degree to which different sensory sources match or mismatch when converging at higher multimodal levels in cortex."

p.227 " The bottom line, then, is to incorporate interpretation into a set of fundamental requirements for consciousness. Interpretive processes proceed at a "higher," more cognitive level, as illustrated by visual agnosia, or occur at lower levels, closer to the entry site of sensory input into thalamocortical systems. ... we are no longer maintaining a sharp boundary between "perception" and "cognition": both are fundamentally a form of gnosis.

Interpretation assumes grades of complexity, depending on the position of the processing node in the sensory cognitive hierarchy. This is not to say that sensory input can be dispensed with as soon as a system achieves interpretation. The brain continuously samples sensory inputs, performing reality checks on ongoing changes. When you approach your car parked on a street, you have a rough idea of what the side of the car that you cannot see will look like. In theory, your interpretive system could retrieve these features from memory and "fill them in" to construct a complete, three-dimensional percept, yet you do not see the car's other side. When we think of the brain as a world-modeling device, it is crucial to maintain a distinction between actual sensory input and what the world is predicted to be like. Consciousness is chiefly concerned with the actual situation. The lack of three-dimensional "filling in" is functional, because a direct visual input is lacking and there might be unexpected objects at the other side, such as a hole in the street or scratch on the door."

p.230 " "Consciousness" means, in the first place, perceptual consciousness and has to meet more requirements than those needed to maintain cell and body function. Self-awareness comes into play at a more advanced stage than consciousness, as a form of metacognition. The realization of an "I" follows the emergence of conscious representations in time, ontogenetically but often also phenomenologically. As applies to consciousness in general, self-awareness depends on the seamless cooperation between different sensory mapping systems. Some of these systems are structured as a body map, but equally important are the frameworks used for exteroceptive senses, vision and audition, because the comparison between body and exteroceptive maps is prime to distinguishing the "self" and the "nonself," the outer world."

p.244 " Altogether, binding and grouping of features remain an essential component of low-level integration and thus an important building block in constructing conscious representations. Higher-order feature detectors are likely to contribute their own share to feature integration, which is limited, however. Important resources for feature binding lie in early, pre-attentive grouping processes and in top-down influences from attentional and memory systems."

p.253 " In advance of cognitive interpretation, lower level visual information will be propagated feedforward to higher brain areas involved in the semantic classification of inputs. The collective activity of feature detectors in V1, V2 (etc.) representing the visual inputs of figure 4.5 will, by itself, not suffice to "see the input pattern as a duck or a rabbit. To see a duck, this activity pattern needs to activate cell populations higher up in the visual-mnemonic hierarchy, which will generally fire when a duck is perceived: these cells code the category "duck" whenever an input sufficiently representing a duck reaches them. This activation might be achieved by the visual input as rendered in figure 4.5 but also by other visual inputs such as body movement patterns or "quacking" sounds of a duck. Once the "duck" category cells are activated in this bottom-up manner, their own activity will be propagated recurrently to the lower-level assemblies conveying the input.

Is this theoretical scenario physiologically realistic? Current evidence attributes category-coding functions to neural assemblies in the prefrontal cortex, inferior and medial temporal lobe. In chapter 3 we encountered the cognitive disorder semantic dementia, marked not by a loss of concrete daily-life memory but by the inability to retrieve and use general, decontextualized concepts to describe facts. Such concepts are exactly the categorical information we are looking for: the materials used to cognitively interpret sensory inputs. Specific semantic deficits are intimately associated with damage to the human medial temporal lobe, more specifically the anteromedial and inferolateral temporal cortex."

image PennartzFig9_10B.jpg

Figure 9.10 B

" The input at VI is now broadcast to higher-order visual areas such as the middle temporal area (MT), V4, and intraparietal cortex (IP) and henceforth to yet higher areas such as the inferotemporal cortex (IT) and rhinal cortices (RC; including entorhinal and perirhinal areas, projecting upstream toward the hippocampus, not shown). Not all intermediate areas have been shown as part of these routes (see figure 6.3, plate 3, for a more complete mapping). The reciprocal, echoing interactions are symbolized by colliding arrows (not shown for all map-to-map connections. The input activates subsets of neurons in MI, IP, and V4 and even sparser subsets in IT and RC. Recognition of complex object and situational information in IT, RC, and hippocampus results in feedback to higher visual areas where, for example, motion-and color-coding patterns of activity are modified as part of the recognition process. Conscious representation cannot be localized to any particular node of the system as each node makes its own perceptual-cognitive contribution. The scheme leaves out many relevant visual and memory-related areas as well as other sensory modalities and multisensory areas such as posterior parietal cortex. "

p.260 " We think of the consecutive right-to-left and left-to-right projections as an iterative process continuously updating representations in all modules. A stream-of-consciousness situation will be marked by continuous updating, but a relatively constant visual scene may drive the system to converge to a (temporarily) stable state. In case of emerging percepts, the dynamic, iterative interactions are likely to evolve sparsely and globally asynchronously, in agreement with the desynchronized nature of the wakeful EEG and sparingly distributed ensemble firing. Nonetheless, this firing may be organized in locally effective gamma cycles."

p.259 " The scheme of multiple systems continuously echoing module-specific information back to lower stages and to each other does not treat feedback from higher "visual" areas such as MT fundamentally differently than feedback from still higher "categorical" areas such as the anterior temporal lobe. Following chapter 8, we do not regard sensory interpretation as being radically different from cognitive interpretation, despite graded differences in complexity and content."

p.261 " When we first zoom in on a unimodal map, such as for visual shape (see figure 9.6), an attractor state at this level can be conceived as a "bump" of neural activity that can move across the map. Within that hillock, at a more fine-grained level, neural activity is chiefly made up of activity increments in the neuronal subpopulation tuned to a feature of the object or situation that is currently presented (see figure 9.6, inset). Given constant input, the hillock will be stationary, both in terms of map location and feature detectors activated at that location. A robust change in input will shift the bump to another location on the map or activate different feature detectors at the same location. This behavior is characteristic of a continuous attractor model."

p.262 " This switching is regulated by top-down control from frontal cortical areas and requires energy because of the resistance of "semantic" coding states to leave their basin of attraction. ... A similar explanation may hold for alternating perceptual states in binocular rivalry and some visual illusions, noting that the predominance of top-down versus bottom-up control may differ per case. The robustness and resilience of representations is most salient at the level of cognitive interpretation ... stability is an important principle as well in intermediate-level areas such as V3, V4- V4a, and MT. As eye blinks and saccades cause momentary changes in VI activity, and firing activity is not maintained here when visual input is interrupted, attractor properties at this low level may be either absent, or present but not needed to support perceptual stability."

image BassettChangeEnergy.jpg
*** Dani Bassett: energy needed to change a state vs. to keep it stable.

Earlier on, I proposed that firing rate is used to code feature specificity and that spike phases are important for coding relational aspects, providing a basis for meaning. Therefore, attractors at higher sensory and cognitive levels are predicted to be robustly structured in the phase domain as well. Simulations support the biophysical feasibility of such structuring. A switch in the categorization of an object will not only be accompanied by a shift to a different attractor state in the corresponding semantic modules but should, by virtue of principle (ii), also tip over connected attractors in other areas to assume a different state, specifically in the phase domain. ...
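The rate-plus-phase idea can be illustrated with a toy readout: firing rate (not modeled here) would signal which feature is present, while a shared spike phase relative to a common oscillation signals which features are bound into one object. The cycle length, spike times, and tolerance below are invented for illustration.

```python
# Toy phase-binding readout: features whose spikes land at the same phase
# of a shared gamma-like cycle are grouped into one object.

CYCLE = 25.0  # ms, one gamma-like oscillation cycle

def phase(spike_time_ms):
    return (spike_time_ms % CYCLE) / CYCLE  # phase in [0, 1)

def bound_together(spikes_a, spikes_b, tolerance=0.1):
    pa = sum(map(phase, spikes_a)) / len(spikes_a)
    pb = sum(map(phase, spikes_b)) / len(spikes_b)
    return abs(pa - pb) < tolerance

red_neuron    = [5, 30, 55, 80]   # "red" feature, phase ~0.20
square_neuron = [6, 31, 56, 81]   # "square" feature, phase ~0.24
circle_neuron = [15, 40, 65, 90]  # "circle" feature, phase ~0.60

print(bound_together(red_neuron, square_neuron))  # True: one object
print(bound_together(red_neuron, circle_neuron))  # False: separate objects
```

A category switch, in these terms, would re-phase the spikes so that a feature detector joins a different phase group, which is one way to read the "tip over connected attractors ... in the phase domain" prediction.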

DynAssocNet - p263

p.277 " Because the current account assumes that higher levels of network organization have no independent information on how to interpret low-level events, this interpretation must be essentially arbitrary. But how then are we to avoid that low-level patterns become randomly mapped to any type of percept? That there is no objective, preordained way for the brain to decode low-level patterns into percepts fits well with the notion of subjectivity, but this should not be taken to imply that our percepts of 400- versus 405-m wavelengths of light should be wildly different, given the same color context. Let us reflect on our options and first consider the argument that there actually is a strict relationship between an external input, a neural activation pattern, and its meaning. This is guaranteed by the specificity of sensory receptors transducing external physical energy into specific spike outputs and"

p.265 " Reviewing our attempts so far to plow through the minefield of consciousness research, a skeptic may still wonder why all that firing-rate and phase-coding business would actually lead to anything like the phenomena we see and feel consciously. After all, haven't we just been discussing spike trains, neural ensembles, and brain areas that send electrical signals back and forth? Where does a phenomenal sensation like the taste of an orange emerge? In short, we are still facing "the explanatory gap" (Levine, 1983)."

p.277 " ... from the top down (a supra-assembly network trying to make sense of low-level events). Which cues would the meta-network have to interpret an ensemble code in a certain way, for example, as representing the color blue? Without an explicit or implicit "key" for decipherment, the most reasonable way to go about this is to propose the meta-network interpret an ensemble code however it likes, although within certain boundaries. This is basically how subjectivity may arise: the brain's initial interpretation of low-level inputs will be arbitrary, but once the brain has settled for (or "chosen") an interpretation, the interpretation of other inputs is constrained by this choice. These constraints will be explored below."

p.284 " Earlier on, we encountered empirical arguments against gamma synchronicity as a neural correlate of consciousness. However, the point here is that Chalmers's argument could be applied to reject any potential neural correlate C suggested by neural recordings because C is described in physicochemical terms and thereby is unfit per definition to provide a conclusive explanation. The counterargument was already laid out: let us not overrate the power of our imagination, incapable as it is in jumping across multiple representational levels simultaneously. No matter which neural mechanism we come up with, and no matter how complex or powerful it is, it is impossible to show directly how phenomenal experience corresponds to patterns of neural activity. What we can do is to situate the suspected neural correlate at the right level of processing and flesh out its relationships with phenomen defined at higher levels by doing a great deal of laborious research."

p.281 " Could we then maintain that consciousness manifested at the highest levels of representation has a separate function over and above neural phenomena? Earlier on, I argued its function is to provide a multimodal situation sketch of what goes on in an organism's immediate environment and body (see chapter 6). At present, it may seem that consciousness is reduced to an epiphenomenon and the real hard work is done by all those little neurons. However, if we accept that the lower, intermediate and higher levels exert no causal effect on each other- that the type of relationship they have is not of the form "A influences B" then the distinction between epiphenomenon and "real work" vanishes. Multimodal representation is a process realized simultaneously at all four levels. What distinguishes the various levels can be best described as representational or experiential complexity, and it is an open question for future research to flesh out this description using empirical data. At the level of multimodal meta-networks the system has the full representational power to smell and see a rose. At the unimodal meta-network level experiential complexity is lower, leaving only one modality to provide a dimensionally reduced sketch of one's surroundings and body state. At the ensemble level, the distinction between submodalities is lost, leaving us with within-area activity bumps and attractors. Further dimensional reduction takes place when descending to single neurons. The function of conscious representation is exerted at all levels, and no level is more epiphenomenal or superfluous than another although the experience can only be "felt" in full force when the highest level functions properly."

p.284 " In summary, I have argued in this chapter how Marr's multilevel notion of mind-brain organization can be modified to accommodate the functional and representational demands applying to conscious brain systems proposed in earlier chapters. Concretely, the modified account ranges from the single-neuron level to functional ensembles (for which empirical evidence is emerging) and, hence, to unimodal meta-networks that are integrated into the higher level of multimodal meta-networks. No saltatory transitions in consciousness or neural-to-mental activity exist between levels, as meaning and experience are built up gradually when moving up to higher levels."


What is Neurorepresentationalism?

What is neurorepresentationalism? From neural activity and predictive processing to multi-level representations and consciousness,
Cyriel M.A. Pennartz,
Behavioural Brain Research, Volume 432, 2022, 113969, ISSN 0166-4328, https://doi.org/10.1016/j.bbr.2022.113969

To the RIGHT: Fig. 3. Functional organization of different levels of representation in the construction of conscious experience.

5. Multi-level representations and the hard problem

"More concretely, multi-level representations underlying conscious experience have been proposed to be constructed bottom-up by the level of:
(i) single neurons, having the capacity to respond to single features;
(ii) ensembles of neurons, forming small, within-area local networks capable of pattern coding within a single submodality (e.g. shape);
(iii) unimodal metanetworks, which combine the hypotheses from lower-order ensembles into representations of objects considered within a single modality (e.g. all visual features making up a visual object) and
(iv) multimodal metanetworks, integrating the information coded by unimodal metanetworks into multisensory object representations"
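The four levels above describe a bottom-up aggregation: features feed ensembles, ensembles feed unimodal metanetworks, and those feed the multimodal level. A minimal Python sketch of that containment structure (the class names and the rose example's features are my own illustration, not Pennartz's notation) might look like:

```python
# Hypothetical sketch of the four-level representational hierarchy.
# Each level aggregates the outputs of the level below it.
from dataclasses import dataclass

@dataclass
class SingleNeuron:            # level (i): responds to a single feature
    feature: str

@dataclass
class Ensemble:                # level (ii): pattern coding within one submodality
    submodality: str           # e.g. "shape"
    neurons: list

@dataclass
class UnimodalMetanetwork:     # level (iii): object representation in one modality
    modality: str              # e.g. "vision"
    ensembles: list

@dataclass
class MultimodalMetanetwork:   # level (iv): multisensory object representation
    unimodal: list

    def features(self):
        """Collect every bottom-level feature contributing to the representation."""
        return [n.feature
                for um in self.unimodal
                for ens in um.ensembles
                for n in ens.neurons]

# Example: seeing and smelling a rose (the book's own example object)
vision = UnimodalMetanetwork("vision", [
    Ensemble("shape", [SingleNeuron("petal curve")]),
    Ensemble("color", [SingleNeuron("red")]),
])
smell = UnimodalMetanetwork("olfaction", [
    Ensemble("odor", [SingleNeuron("rose scent")]),
])
rose = MultimodalMetanetwork([vision, smell])
print(rose.features())  # ['petal curve', 'red', 'rose scent']
```

The point of the sketch is only the containment relation between levels; in the theory the levels are not separate boxes passing messages but simultaneous descriptions of one process.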

Page 8:
"Neurorepresentationalism deploys its multi-level concept such that both low (e.g., V1) and high (e.g., inferotemporal) cortical areas can contribute to visual consciousness. For instance, inferotemporal areas of the non-human primate brain, containing neurons with very wide receptive fields, likely contribute position-invariant shape information to a perceptual representation, whereas V1 neurons, having small receptive fields, may contribute local visual details at a high resolution."

Table 1: Neurorepresentationalism considers five properties of phenomenal experience in healthy individuals to constitute inalienable hallmarks of consciousness.

Multimodal richness: Conscious experience is qualitative in nature, i.e., it is characterized by sensations in multiple distinct modalities (vision, audition, somatosensation, smell, taste, vestibular sense). These main modalities can be partitioned into submodalities (e.g., for vision: color, texture, shape, motion, etc.).
Situatedness and immersion: In a conscious state we find ourselves situated in a space that is usually characterized by certain objects in the foreground and other stimuli in the background. Our body is experienced as immersed in the situation, occupying a central position relative to the surroundings.
Unity and integration: Consciousness is not made up of different elemental experiences, but is unified or integrated in that we have only one single experience at any given time. Our senses work together to enable the construction of an undivided, multimodal, spatially encompassing representation.
Dynamics and stability: Conscious experience is continuously updated following changes in the external environment and our body. Despite this dynamic aspect and ubiquitous movement of the head, eyes and other body parts, stationary objects in the environment are experienced as stable.
Intentionality: The property that a carrier substrate of consciousness can generate signals that are interpreted as, and refer to, something other than itself ('aboutness'). The brain's ability to interpret its own neural activity patterns not only pertains to ambiguous stimuli, illusions or hallucinations, but is considered a general and fundamental hallmark of consciousness.

 

Definition of consciousness, page 5: "a definition of conscious experience ...: it is the multimodally rich, dynamic survey of the subject's current situation, including his own body, and functionally earmarked for planned behavioral and cognitive actions in the future."

Page 9:
"Phenomenal experience just ‘is’."

Stuff from the book to follow up on:
    
p.199 Gilbert Ryle
p.197 mask/sweep - PC

p.204 beta is prediction and gamma is error!!
p.205 Alpha waves frame perception?
2023-04-23 YON <> jch.com/notes/PennartzBrainRep.html