Article 3 – The Questions We Haven’t Asked
This is the third and final article in “The Brain Behind the Behaviour” series. The first article described how the brain constructs perception through a layered architecture of dedicated hardware regions and broad distributed networks. The second argued that neurodivergent brains are not deviations from a correct standard but different calibrations of a system that was never standardised, and that our frameworks for detecting, describing, and assessing human cognition are systematically blind to what those calibrations produce. This article asks what follows from that argument: not answers, but the questions we have not yet been able to ask, because we have been looking through the wrong instrument. These are not abstract questions. They are the keys to unlocking potential we have systematically overlooked.
There is a man in Susan Schaller’s book “A Man Without Words” who spent the first twenty-seven years of his life without language. Ildefonso was deaf, had never been taught sign language, and had navigated the world entirely through observation, pattern recognition, gesture, and what Schaller describes as a rich, coherent, fully inhabited inner life. He had relationships. He had memory. He had a sense of self. He had, in every meaningful sense, an identity, built entirely without the application layer that most cognitive frameworks assume is the foundation of human experience.
When Schaller finally taught him sign language, Ildefonso wept. Not because he had finally become a person. Because he had finally found the words for what he had always been.
This story is the place this series arrives at, and where the questions it wants to ask begin. It is a story about what the brain can do before the frameworks we use to detect it arrive: an inner life that existed entirely outside the instrument’s range, coherent, complex, and real, but invisible to any standard-observer measurement.
From Argument to Agenda
The first two articles in this series built a case. The brain has layers: hardware regions of specific dedicated function, an operating system of connectivity and predictive processing, an application layer of language and culture and conscious experience. Different neurotypes are differently configured at different layers of that stack: some at the hardware level, some at the OS level, some in the connectivity between regions. And those configurations produce both the characteristic difficulties and the characteristic strengths associated with each neurotype. The standard observer is a useful fiction. The frameworks built around it are incomplete. The cost is not just missed individuals; it is missed understanding.
That case, if it holds, implies a research agenda. Not a set of answers, but a set of orientations. It implies a reframing of what we are trying to find out, and of why the questions we have been asking have produced an incomplete picture.
The questions that follow cannot be asked from inside the standard-observer framework. That is precisely why they have not been asked. What they have in common is a shift in starting point: not what does this configuration fail to do, but what does this configuration make possible?

Question One: What Does Each Configuration Actually Produce?
The dominant research paradigm in neurodivergent neuroscience has been deficit-oriented. We have studied what autistic brains struggle to do in standard-observer environments. What ADHD brains fail to sustain. What dyslexic brains cannot decode. What synesthetic brains conflate. The research has been rigorous, the findings real, and the clinical applications genuinely useful. But the instrument was pointed in one direction.
If the calibration argument is correct — if different neurotypes represent different configurations of the processing stack, each with characteristic offsets that produce both difficulties and strengths — then the research agenda needs a complementary direction. Not instead of deficit research, but in addition to it.
The question is: for each neurotype-specific configuration, what does that configuration produce when it is working with its architecture rather than against it?
Imagine a study that maps autistic pattern recognition strengths directly to specific connectivity profiles in the visual cortex. Not asking what autistic individuals struggle to see, but asking which visual tasks their local over-connectivity makes them better at, and whether those tasks cluster predictably around the specific connectivity signature. If they do, we would have not just a description of a strength, but a neural explanation for it, and a basis for predicting which domains a given configuration would excel in before any assessment has been done.
We have partial answers already. Autistic individuals show enhanced performance on tasks requiring local detail processing, pattern detection within a domain, and resistance to certain visual illusions that rely on top-down contextual prediction. Dyslexic individuals frequently show enhanced performance on tasks requiring three-dimensional spatial reasoning and holistic pattern recognition. ADHD individuals show advantages in divergent thinking and performance under novel or high-stimulus conditions. Synesthetic individuals show enhanced memory for certain types of information, particularly when the synesthetic association provides an additional encoding dimension.
These findings are real but scattered. They have not been systematically mapped against the underlying neural architecture in a way that would allow us to say: this specific configuration tends to produce these specific strengths in these specific domains, and here is the neural reason why. The regional/connectivity axis that runs through this series has not yet been used as the organising framework for that mapping.
That mapping does not yet exist, as far as I know, but it should.
Question Two: Where Does Perception End and Interpretation Begin, and Does It Differ Between Neurotypes?
Ildefonso had a perceptual world. He saw color, processed the social cues of the people around him with remarkable accuracy, and inhabited space and time with full coherence. What he lacked was not perception or interpretation. What he lacked was the application layer of linguistic and cultural encoding that most people use to name, categorise, and communicate what they perceive.
Which raises a question that the architecture of Article 1 brought into focus: if perception is constructed at every layer of the stack, and if the balance between layers differs between neurotypes, does the experience of perception itself differ, not just in what is noticed, but in how raw or processed the experience feels?
The predictive processing research points toward yes. Autistic perception appears to weight bottom-up sensory input more heavily and top-down prediction less heavily than neurotypical perception. The world comes in less filtered, less pre-processed by expectation. This is not a malfunction of the predictive system. It is a different balance point between what the signal says and what the prior model predicts.

Viktor Kossakovsky’s documentary “Svyato” makes this visible. The film consists almost entirely of close observation of a very young child confronted with himself: Kossakovsky’s two-year-old son had never seen a mirror, and the film captures, up close, his reaction to his own reflection for the first time in his life. Every surface demands investigation. Every change in light is worth attention. What we are watching is something close to pure bottom-up perception: the world arriving with high fidelity, before the operating system has learned to manage the signal load by suppressing what it has already categorised.
Some neurodivergent perception may be closer to this, not because the OS has not developed, but because the balance point is set differently. More raw signal. More fidelity to what is actually there, rather than to what the prior model expects.
And if that is the case, the question becomes: what does that mean for how differently-calibrated brains experience color, sound, spatial layout, and social information? And can we design environments, interfaces, and instruments that work with that balance point rather than assuming the standard-observer ratio?
In the color perception stack alone, we already have the complete architecture for asking this question. Every layer, from hardware variation in the cone cells through OS-level differences in predictive model calibration to application-layer differences in linguistic color categories, is known to vary between individuals. What is not yet known is whether those variations cluster systematically around neurotype configurations. That question is researchable. It has simply not been asked in that form.
Question Three: What Can Music Tell Us That Language Cannot?
Clive Wearing, whose story appeared in the first article of this series, could not tell you what he had for breakfast. He could not tell you his wife’s name, or what year it was, or what had happened to him. But he could play the piano. He could inhabit a piece of music with full presence and full precision, and in those moments he was, by every observable measure, entirely himself.
What Clive’s case demonstrates is that musical encoding operates at a different, more primitive, more resilient layer of the brain’s architecture than episodic or semantic memory. It is stored closer to the procedural layer, in regions that neurodegeneration reaches later or not at all. Music is not just a therapeutic tool for cognitive impairment. It is a research instrument for understanding how differently-calibrated brains encode experience at layers that standard cognitive testing cannot reach.
Now recall the synesthetic artist from the previous article; the one who sees music as color, who paints the emotional tone of a piece in ways that other people may not see but certainly feel. Her experience is not metaphorical. When she hears a minor chord resolve, a specific color field appears in her visual field, involuntarily and consistently. She has learned to use that cross-modal signal as a compositional tool, letting the color guide the brushstroke, letting the music shape the canvas. The result is work that carries emotional information through a channel most viewers cannot name but can feel.
What her brain is doing is routing auditory processing through visual processing via connections that most brains suppress. The hardware regions (auditory cortex, visual cortex) are both fully functional. What differs is the OS-level connectivity between them: the inhibitory boundaries are weaker, the cross-activation is stronger, and the result is an experience that is richer in some dimensions and potentially overwhelming in others.

This raises a research question with practical consequences: what if we designed music-based interventions for ADHD that specifically leverage the connectivity between auditory and reward regions in ADHD brains, rather than attempting to suppress hyperactivity? The dopaminergic system in ADHD rewards novelty and exploration. Music, as a structured form of organised novelty, may be a particularly effective intervention pathway for ADHD specifically. Not because music is generally calming, but because the auditory-reward connectivity in ADHD brains may make music-based attention modulation more effective than standard attentional training.
The Eindhoven-based company Alphabeats is already working at this intersection, using music and biofeedback to modulate attention and stress responses. Whether the effectiveness of that approach varies systematically between neurotype configurations is, as far as I know, a question that has not yet been studied.
Erik Scherder’s research on music and the brain adds a dimension relevant to all neurotypes: genuine novelty, encountering something the brain has not yet categorised, activates regions well beyond those obviously involved in the task, engaging the system broadly in a way that targeted, domain-specific training does not produce.
For ADHD brains in particular, whose dopaminergic reward structure is tuned to novelty, this suggests that music designed to remain genuinely novel, varied, unpredictable, structurally rich, may activate the attention system more effectively than interventions that rely on repetition and routine.
Whether that hypothesis holds, and whether the effect size differs between neurotypes, is exactly the kind of question that requires the calibration framework rather than the deficit one.
Question Four: What Does Neuroplasticity Look Like When You Stop Trying to Normalise?
The brain’s capacity to reorganise itself (neuroplasticity) works within the existing architecture. It does not overwrite the chipset. The blind individual whose visual cortex is recruited for touch processing, the stroke patient who relearns movement through an entirely different neural pathway: in each case documented in Norman Doidge’s work, the brain reorganises around its own constraints and affordances. The outcome depends heavily on which architecture is doing the reorganising.
This has a direct implication for neurodivergent individuals that research has not yet fully pursued. Current interventions are almost entirely oriented toward approximating standard-observer function. Applied Behaviour Analysis attempts to shape autistic social behaviour toward neurotypical norms. Phonics-based reading programmes attempt to build phonological decoding capacity in dyslexic brains whose hardware in that region is configured differently. Attention training attempts to extend sustained focus in ADHD brains whose dopaminergic system rewards novelty over persistence. These interventions are not without value. For many individuals they represent real quality-of-life improvements, and dismissing them entirely would be as incomplete as the deficit framing they sometimes carry.
But they are all pointed in the same direction: toward the standard observer, away from the existing configuration. And neuroplasticity, as the research consistently shows, works best when it builds on what the architecture already does well, and not when it tries to override it.
The question that has not been asked is: what does neuroplasticity look like when you point it the other way? When the goal is not to make a local over-connectivity brain function like a standard-observer brain, but to understand what that local over-connectivity brain can do that a standard-observer brain cannot, and to build the conditions for it to do that more fully?
We have almost no research on this. Not because it is technically impossible, but because the research agenda has been oriented around correction. Calibration — understanding and developing what is actually there — has not been the goal. Making it the goal would require not just new studies, but a new starting question: given this specific neural configuration, what are its natural affordances, and what would it look like to develop those affordances to their fullest extent?
Question Five: What Are We Missing in the Domains We Think We Understand?
Consider the inner monologue. Some people have a continuous verbal inner voice. Others think in images, in spatial configurations, or in something that has no verbal dimension at all. This variation is real, documented, and deeply consequential for how people process information, plan, remember, and communicate. It is also almost entirely absent from standard cognitive frameworks, which implicitly assume verbal inner processing as the default. Think of Ildefonso’s story: he had a full inner life before he had a word for any of it.
Consider rhythmic ability. Some people feel pulse and metre as something close to physical; rhythm is not heard so much as inhabited. Others process music primarily melodically or harmonically, with rhythm as a secondary layer. Some people have almost no natural rhythmic sense. This variation maps imperfectly onto neurotype profiles, but the mapping has not been systematically studied.
Consider the three-part structure of the brain (instinct, emotion, cognition) and the question of how differently-calibrated brains balance or integrate those three levels. If the OS layer mediates between the primitive instinctive layer and the higher cognitive one, does a different OS configuration produce a different working relationship between instinct, emotion, and thought? Does the autistic experience of sensory overwhelm represent, in part, a different threshold between the instinctive and the cognitive? Is it a reduced buffering capacity at the OS level?

These questions are named here deliberately, not developed. They are examples of the territory that opens up once you stop treating the standard observer as a ground truth and start treating it as one configuration among many. The point is not that they are unanswerable. The point is that they have not yet been asked in the right form, mainly because the instrument was not designed to detect what they are asking about.
From Correction to Calibration
Ildefonso did not need to be fixed. He needed a language. Not so that he could become a standard observer, but so that the rich, coherent inner world he had always inhabited could finally be communicated to the people around him.
The design imperative from the previous article (“If the chipset was set before we arrived, the question is not how do we fix it, but how do we build systems that work with it?”) is not a soft claim about inclusion or acceptance. It is a hard claim about epistemology and design.
We do not run CPU-optimised software on a GPU and call the GPU broken. We write different software, or we ask different questions about what the hardware can do. Because the GPU was not designed to fail; it was designed to do something specific, something that a CPU does less well. And if we only ever evaluate it on CPU tasks, we will never find out what it is capable of.
The same is true of every configuration of the human processing stack that differs from the standard observer. We have spent decades evaluating those configurations on tasks designed for a different architecture. We have produced a detailed and rigorous map of what those configurations cannot do in standard-observer environments. We have produced almost no map of what they can do in environments designed for them.
That map does not yet seem to exist. Building it would not be an act of accommodation. It would be an act of scientific curiosity. It’s the same curiosity that found a dedicated navigation region in the parahippocampal cortex, that traced musical memory to the procedural layer, that discovered the seeking system running beneath every act of exploration.
The brain, it turns out, is more specific, more layered, and more various than the standard observer ever suggested.
The actual observers are where the interesting questions live. The idiom in which to ask them does not yet exist. And that is not a reason to stop; it is the precise reason to start.
A closing statement: Everything in this series was built from genuine curiosity, not from expertise. I followed questions I could not leave alone, through territory I had no formal training for, using simplifications that a neuroscientist would not use. The hardware and software comparisons were scaffolding, not blueprints. The map I drew is a first attempt, not a final one. I remain, after all of this, not a neuroscientist. I remain an explorer.
