Article 1 – What Your Brain Decides Before You Do 

This article is part of “The Brain Behind the Behaviour”, a three-part series. It began with a lecture by Nancy Kanwisher at MIT, freely available as part of her full course on MIT OpenCourseWare, which triggered a question I could not leave alone: how does the brain decide what we perceive, before we are even aware of perceiving it? That question led into a deep dive at the intersection of neuroscience, neurodiversity, and the frameworks we use to understand human difference. Drawing on earlier work about music and memory, motivation and detection errors, and the gap between internal reality and public perception, this series moves from the science of brain architecture, through the implications for how we understand individual difference, to the research questions we are not yet asking. Each article stands alone. Together they build an argument.

A word before we begin. I am not a neuroscientist. I am a writer and strategist with a lifelong habit of following questions wherever they lead, including, in this case, deep into territory I had no formal training for. When Kanwisher's lecture stopped me in my tracks, I did what I always do: I kept reading, kept connecting, kept asking what this means beyond the lab. The hardware and software comparisons I use throughout this series are deliberate simplifications. They are scaffolding, not blueprints; tools for building understanding, not claims to technical precision. A neuroscientist would use different language. I am an explorer who found something worth mapping, and this is my attempt at a map.

Meet Bob. 

Bob is, by any reasonable measure, a capable and functional person. He holds down a job, maintains relationships, handles complexity. But ask him to find his car in a parking garage he visited twenty minutes ago, and something unexpected happens. He cannot do it. Not as a matter of preference or distraction. He genuinely, structurally cannot orient himself in space. He is lost in a way that has nothing to do with intelligence, attention, or effort. 

Bob is not a fictional character. He is a real person. Nancy Kanwisher describes him to open a conversation about a rather disorienting discovery in modern neuroscience: that the human brain contains regions of extraordinary specificity, dedicated to single functions with such precision that their loss causes a failure nothing else can compensate for. Bob’s navigational blindness is almost certainly traceable to a small region in the parahippocampal cortex (the parahippocampal place area, or PPA), discovered by Kanwisher and her colleagues in the late 1990s. This region responds specifically to the spatial layout of environments. Not to objects, not to faces, not to abstract space, but to the physical configuration of rooms, streets, and landscapes.

Think of such a region as the brain’s equivalent of a BIOS chip: a dedicated, low-level system that runs before the operating system even loads, handling one specific task so that everything built on top of it can function. Remove it, or compromise it, and one specific function fails while everything else continues to run normally.

That is Bob.

The Specificity Problem 

Kanwisher’s broader question, one that has driven decades of fMRI research, is deceptively simple: is the human brain a collection of specialised components, each dedicated to a specific cognitive task, or is it a general-purpose system in which every region participates in everything?

The answer, it turns out, is neither. And both. 

What the imaging evidence shows is that the brain contains genuine islands of specificity, with regions that are remarkably, almost stubbornly dedicated to single high-level functions. The fusiform face area processes faces. The PPA processes spatial layout. A region in the auditory cortex responds selectively to song, distinct from speech and other sounds. Another region activates specifically when we think about what other people are thinking. These are not vague tendencies. They are consistent, reproducible, and anatomically localised across individuals. 

But these islands exist within a much larger ocean of distributed processing. General intelligence does not live in any single region. Neither does personality, creativity, social cognition, or motivation. These emerge from how networks interact; from the quality and character of the connections between regions, not from the regions themselves. Damage a hub in that network and the effects are diffuse. Damage a dedicated island and the effect is surgical: one function, gone, while everything around it continues normally. 

This distinction between what the brain does in specific dedicated regions and what it does through broad distributed networks is the axis around which everything else in this series of articles turns.

Before You Were Aware of It 

Here is the bit that tends to produce a moment of genuine discomfort in people who hear it for the first time: 

By the time any perception reaches your awareness, the brain has already made most of the important decisions about it.

Consider color. When light enters your eye, the cone cells in your retina do not send a faithful representation of the wavelength spectrum to your brain. These photoreceptors send compressed difference signals, already pre-processed, already filtered through the specific sensitivities of the three cone types that evolution settled on. The signal that leaves the retina has already been interpreted. By the time it reaches the lateral geniculate nucleus (LGN), the relay station between retina and visual cortex, it has been further reformatted into opponent-color pairs: red against green, blue against yellow. By the time it reaches the visual cortex, dedicated regions are computing color constancy, the brain’s mechanism for ensuring that an apple looks red in sunlight and in shadow, even though the actual wavelengths reaching your eye are completely different in the two conditions.
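Since I lean on computing analogies throughout this series, here is a toy sketch of that compression step. The weights and numbers are invented for illustration; the real physiology is far messier. But it shows the core point: three raw cone responses go in, and what comes out is already difference signals, not the original spectrum.

```python
def opponent_channels(l, m, s):
    """Toy model: compress three cone responses (long-, medium-, and
    short-wavelength) into opponent difference signals. The weights
    here are illustrative, not physiological."""
    red_green = l - m               # red vs. green opponent channel
    blue_yellow = s - (l + m) / 2   # blue vs. yellow opponent channel
    luminance = l + m               # overall brightness signal
    return red_green, blue_yellow, luminance

# A "reddish" stimulus: strong long-wavelength cone response.
rg, by, lum = opponent_channels(l=0.8, m=0.3, s=0.1)
# rg comes out positive (toward red), by negative (toward yellow):
# the wavelength spectrum itself is already gone from the signal.
```

Notice that you cannot recover the original three cone values from any single channel; the interpretation has already happened before anything leaves the retina.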

At each stage, the signal has been transformed. What you experience as the color red is not red. It is the brain’s construction of red, built from a cascade of interpretive steps that began before the signal ever reached the cortex. 

This is not a minor technical point. It is the foundation of something important: there is no such thing as raw perceptual data in human experience. There is physical reality: photons, air pressure waves, chemical gradients. And then there is what the brain makes of those signals, which is always and already an interpretation, shaped by the architecture of the system doing the interpreting. 

Neuroscientist Karl Friston, building on earlier work in this direction by Helmholtz and others, developed what is now called predictive processing into its most mathematically rigorous form. The brain, in this framework, is not a passive receiver of sensory information. It is an active prediction machine, continuously generating models of what it expects to perceive and comparing those predictions against the incoming signal. What reaches conscious awareness is largely the mismatch (the prediction error), not the raw signal itself. Perception is the brain’s best guess, continuously revised.

Which means that two people looking at the same object are not, in any straightforward sense, seeing the same thing. They are each constructing an experience from a physical signal, using a prediction model built from their own history, their own architecture, their own prior encounters with the world. The physical signal may be shared, but the construction certainly is not.
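For readers who, like me, find the software framing helpful, the prediction loop can be sketched in a few lines. This is a cartoon, not Friston's actual mathematics: a system that carries a running prediction, updates it by a fraction of each mismatch, and "notices" only the error.

```python
def perceive(signals, learning_rate=0.3):
    """Toy predictive-processing loop. The system holds a prediction,
    compares it against each incoming signal, and revises the prediction
    by a fraction of the mismatch. What it 'reports' at each step is the
    prediction error, not the raw signal. Purely illustrative."""
    prediction = 0.0
    errors = []
    for signal in signals:
        error = signal - prediction          # mismatch: what stands out
        prediction += learning_rate * error  # revise the internal model
        errors.append(error)
    return errors

# A steady signal quickly stops producing error: it becomes "expected"
# and fades. A sudden change produces a large error and grabs attention.
errors = perceive([1.0] * 10 + [5.0])
```

Two such systems with different starting predictions, or different learning rates, would report different errors from the identical signal stream, which is the whole point of the paragraph above.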

There is an older piece of evidence for the same idea, from an unexpected direction. In the 1940s, the psychologist Edward Tolman ran a series of experiments on rats navigating mazes. He trained them on a particular route to find food, then blocked that route and presented an entirely new set of paths. If the rats had simply memorised a sequence of stimulus-response movements (‘turn left here, turn right there’) they would have been helpless. Instead, they immediately took the shortest available path toward where the food had been, even though they had never used that path before.

What they had learned was not a procedure but something closer to a map: an abstract internal model of the space, flexible enough to support novel routes. Tolman called these cognitive maps. What they demonstrated is that even in rats, the brain does not just record experience; rather, it constructs a model from it. The framework that Karl Friston formalised is, in some sense, the direct descendant of Tolman’s maze. The brain builds an internal model, and it uses that model to navigate. The map is not the territory. But the map is what you are using, every time you think you are simply looking at the territory.
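The difference between a memorised route and a map is easy to make concrete in code. A route is a fixed sequence; a map is a graph you can search. Below is a small sketch (the maze layout is invented) where blocking the trained corridor defeats a route-memoriser but barely inconveniences a map-holder:

```python
from collections import deque

def shortest_path(maze, start, goal, blocked=frozenset()):
    """Breadth-first search over a map of the maze. Because the maze is
    stored as a graph of places and connections, not as a memorised
    sequence of turns, blocking a corridor simply re-routes the search.
    Maze layout below is invented for illustration."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        here = path[-1]
        if here == goal:
            return path
        for there in maze[here]:
            if there not in seen and frozenset({here, there}) not in blocked:
                seen.add(there)
                queue.append(path + [there])
    return None  # no route exists at all

# A small maze: the trained route A-B-C-FOOD, plus an untried side path.
maze = {
    "A": ["B", "D"], "B": ["A", "C"], "C": ["B", "FOOD"],
    "D": ["A", "E"], "E": ["D", "FOOD"], "FOOD": ["C", "E"],
}
trained = shortest_path(maze, "A", "FOOD")
# Block the trained corridor B-C. A stored sequence of turns is now
# useless, but the map immediately yields the novel route via D and E.
rerouted = shortest_path(maze, "A", "FOOD", blocked={frozenset({"B", "C"})})
```

Tolman's rats behaved like the second case, which is why his result pointed toward internal models decades before anyone could formalise them.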

The Spectrophotometer Problem 

Color science has a useful tool called a spectrophotometer. It measures the precise spectral composition of a color: the exact wavelengths, the exact reflectance curve, the exact color value. It is as close to objective color measurement as we can get. 

And yet, spectrophotometers have what the field calls inter-device variation. Two instruments measuring the same color will produce slightly different readings. Not because either is broken, but because no measuring instrument is perfectly calibrated to an absolute standard. Each has its own tolerance range, its own systematic offset, its own characteristic deviation from the theoretical ideal. And sometimes, inter-device variation is greater than the difference being measured, which means the instrument cannot reliably detect the very thing it was built to see.
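To see how a systematic offset can swamp a real difference, here is a toy model of two instruments. The offsets and noise values are made up, but the arithmetic is the point: each device is internally consistent, yet comparing readings across devices reverses a genuine 0.10-unit difference.

```python
import random

def make_instrument(offset, noise=0.02):
    """Model a measuring device with a fixed calibration offset plus a
    little random noise. Offset and noise values are invented."""
    def measure(true_value):
        return true_value + offset + random.gauss(0, noise)
    return measure

random.seed(0)  # fixed seed so the example is repeatable
device_a = make_instrument(offset=+0.20)
device_b = make_instrument(offset=-0.15)

# Two samples that genuinely differ by 0.10 units:
sample_1, sample_2 = 10.00, 10.10

# Each device ranks the two samples correctly on its own. But compare
# device A's reading of the SMALLER sample against device B's reading
# of the LARGER one, and the 0.35 offset gap drowns the real 0.10
# difference: A's number comes out higher even though its sample is not.
a_reads_1 = device_a(sample_1)
b_reads_2 = device_b(sample_2)
```

Neither instrument is broken; each is simply calibrated differently, which is exactly the situation the next paragraph maps onto human perception.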

The human visual system is the same. Every eye has a slightly different distribution of cone cells. Every brain has a slightly different V4 (“visual area 4”) region for color processing. Every person has accumulated a slightly different history of color experience that has shaped their predictive model. And every linguistic community has carved color space into slightly different categories, so that the boundary between what counts as blue and what counts as green is not the same across languages; that linguistic boundary genuinely affects where perception draws the line.

If you have ever felt that your experience of color, or sound, or space does not quite match what others describe, this is part of the reason. Your device is calibrated differently. At every layer of the processing stack, from the hardware of your cone cells, through the operating system of your predictive model, to the application layer of your cultural vocabulary, there are calibration offsets. Not errors. Offsets. 

This is not just a technical detail. For some people, those offsets are more pronounced. Their measuring instrument is calibrated further from the assumed standard. We will return to this in the second article in this series, because it turns out to have significant implications for how we understand neurodivergent experience. But even in the general case, the notion of a “standard observer” — the theoretical average human visual system that colorimetry uses as its reference point — describes almost no actual observer with precision. It is a useful fiction, a necessary abstraction for engineering and design. It should not be mistaken for a description of how human perception actually works.

Why Music Survives When Memory Does Not 

The same layered architecture that produces color experience produces musical experience, but in a way that reveals something additional — and deeply strange — about how the brain organises function. 

Clive Wearing was a British musicologist and conductor, by any measure one of the most musically knowledgeable people of his generation. In 1985, a herpes encephalitis infection destroyed his hippocampus and surrounding structures, leaving him with one of the most severe cases of amnesia ever documented. He could not form new memories. He could not recall events from minutes before. He lived in a perpetual present, unable to hold on to the thread of his own experience. Every time his wife walked into the room, he greeted her as if she had just returned from years abroad, because, for him, every moment was the first moment after an eternity of nothing. 

And yet, when Clive sat down at a piano, he could play. Conducting a choir, he was himself again; present, precise, and musically intact. Twenty years after his infection, he described the experience of music as the only time he felt fully alive. His episodic memory was gone. His semantic memory was largely destroyed. But his musical memory — his ability to recognise, perform, and emotionally inhabit music — was preserved to a degree that astonished his neurologists. 

The reason is architectural. Musical memory for deeply familiar pieces appears to be distributed across a more primitive, more resilient set of brain structures than episodic or semantic memory, anchored partly in the supplementary motor area and the cerebellum. These are the regions that Alzheimer’s pathology and hippocampal damage reach later, or not at all. Musical memory is embedded closer to the procedural layer, to the part of the brain that knows how to ride a bicycle without being able to explain it. Which means that familiar music is stored, at least in part, at the level of the BIOS. Deeply embedded, running before the higher applications load, and the last thing to go when those applications fail. 

This is not only clinically important. It is philosophically striking. It suggests that some of what we experience as most essentially human (the ability to be moved by music, to inhabit a melody, to find oneself again in a song), may not live in the cognitive upper floors of the brain at all. It may live much further down in the architecture than previously assumed.

Hardware, Software, and What Lives Between

The analogy that makes all of this easier to grasp, without reducing it to something simpler than it is, is one most of us already carry from working with technology. 

Hardware is the physical architecture. It exists before any programme runs. In the brain, this is the structure of dedicated regions, the configuration of cone cells, the prenatal differentiation that determines fundamental architecture before a single experience has occurred. Bob’s spatial processing region is hardware. Clive Wearing’s destroyed hippocampus was hardware damage. The color-processing pathway from retina to V4 is hardware. These structures do what they do regardless of what we learn or believe or experience. They are the BIOS of the system, running before we are even aware there is a system to run. 

Software is what runs on that hardware. In the brain, this is the predictive model: the accumulated set of expectations, calibrations, and connection strengths that the brain has built through experience. It is shaped by the hardware it runs on, but it is not determined by it. Experience matters, culture matters, and the linguistic categories your community uses to carve up color space matter. But software cannot run on architecture it was not built for, and it cannot override what the hardware was designed to do.

Between them is the operating system: the connectivity patterns that mediate between dedicated hardware functions and the applications of conscious experience. This is where much of what makes us individually different actually lives. Not in the hardware, which is largely shared across the species, and not in the application layer of culture and language, but in the specific configuration of how regions connect, communicate, and inhibit each other. 

The question this analogy raises — and the question this series will pursue — is what happens when the hardware, or the operating system, is configured differently from what the frameworks around us were built to expect. 

That is not a question about malfunction. It is a question about calibration. 

And it turns out we have been measuring with an instrument that was never quite standardised in the first place.


The next article in this series, “There Is No Standard Observer”, looks at what the brain architecture described here means for how we understand neurodivergent difference, and why the frameworks we use to detect and describe human cognition may be systematically missing what they most need to see.

