How the brain adapts to shifting context
At a farmer’s market, you pick up a carrot. In that moment, your brain faces a quiet dilemma: do you group it with lettuce and the other vegetables? Or with tangerines and other orange-colored foods? Or both? The carrot itself doesn’t change; what changes are the mental rules you apply to sort it into categories based on its properties.
This everyday flexibility is the subject of a fascinating study published in Nature Communications by Margaret Henderson (Carnegie Mellon University), John Serences (University of California, San Diego), and Nuttida Rungratsameetaweemana (The Salk Institute and Columbia University). Their research asks: how does the brain adapt when the same object suddenly belongs to a different category?
For decades, neuroscientists treated the early visual cortex, the brain’s first stop for processing what we see, like a passive camera. It was thought to capture light and shapes and then hand off the heavy-duty thinking (like decision-making) to higher brain regions such as the prefrontal cortex. But growing evidence challenges that neat division of labor. In animals, even basic sensory areas can “tune in” to expectations, choices, and tasks. Henderson and colleagues wanted to know: Does the same happen in humans when categories are dynamic and changing?
Putting flexibility to the test
To test this, the researchers invented a “shape universe.” Using mathematical formulas called radial frequency contours, they generated dozens of odd, abstract silhouettes that could be placed on a two-dimensional grid. Imagine a board of alien puzzle pieces: shapes with bulges, dips, and curves that morph gradually from one into another (see the figure on page 3 of the paper, which shows the full grid of shapes arranged like a quirky checkerboard).
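A radial frequency contour is a closed curve whose radius oscillates sinusoidally as you move around a circle. The sketch below illustrates the general idea in Python; the amplitude, frequency, and phase values are illustrative choices, not the parameters used in the study.

```python
import math

def radial_frequency_contour(r0=1.0, amplitude=0.3, frequency=5, phase=0.0, n=360):
    """Return (x, y) points of a closed radial frequency contour.

    The radius oscillates sinusoidally around a base circle:
        r(theta) = r0 * (1 + amplitude * sin(frequency * theta + phase))
    Varying amplitude, frequency, and phase morphs one silhouette
    smoothly into another, which is how a continuous 2D "shape space"
    can be built. Parameter values here are illustrative only.
    """
    points = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        r = r0 * (1 + amplitude * math.sin(frequency * theta + phase))
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

contour = radial_frequency_contour()
```

With an integer frequency the curve closes on itself, and the radius always stays between r0 × (1 − amplitude) and r0 × (1 + amplitude), so every shape in the family is a bounded, blob-like silhouette.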
Ten volunteers lay in an fMRI scanner while performing categorization tasks. Sometimes they had to divide the shapes along a vertical boundary; sometimes along a horizontal one. And sometimes the rules got trickier, requiring a nonlinear boundary that grouped non-adjacent quadrants together. Because the exact same shape might fall into one category in one task and a different category in the next, participants had to adapt on the fly. Unsurprisingly, the nonlinear task was the hardest, with slower reaction times and more mistakes. Still, participants stayed above 80% accuracy overall, showing just how flexible our minds can be.
The fMRI scans provided a window into how the visual cortex handled this cognitive juggling act. Using advanced machine learning analyses, the team could see how “separable” the brain’s shape representations were under different rules.
The findings reveal how the brain handles shifting categories.

First, context matters most near the boundary. When shapes sat close to the dividing line, the ambiguous zone where errors were most likely, the brain’s visual areas “stretched” their representations: shapes were encoded in a way that made them easier to tell apart under the current task rule. Farther from the boundary, the effect faded.

Second, early visual areas do the heavy lifting. Surprisingly, the strongest task-related changes appeared in early regions like V1 and V2, areas traditionally thought to simply relay raw visual input. Even at these early stages, the brain actively reshapes perception to fit the task at hand.

Third, performance and brain patterns went hand in hand. On difficult trials, correct answers were accompanied by sharper category separation in brain activity, while incorrect answers were linked to muddier representations (see the classifier confidence plots on page 10 of the paper). In other words, success was mirrored by the brain’s ability to draw clearer lines.

Finally, linear boundaries were easier to “code” than nonlinear ones. The brain adapted more robustly to simple linear rules than to the quirky nonlinear grouping. This makes sense: enhancing a single axis (up-down or left-right) may be something visual cortex is especially good at, while groupings of non-adjacent quadrants are harder to encode cleanly.
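The intuition that non-adjacent quadrant groupings are harder to separate can be sketched with a toy example. Below, a simple perceptron (a linear classifier; this is an illustrative stand-in, not the authors’ analysis pipeline) learns a vertical-boundary rule on the four quadrant centers of a 2D shape space perfectly, but can never fully learn an XOR-like rule that pairs diagonal quadrants, because no single straight line separates those groups.

```python
# Toy illustration (not the study's analysis): a linear classifier can
# perfectly encode a vertical-boundary rule but not an XOR-like rule
# that groups non-adjacent quadrants of a 2D shape space.

def perceptron_accuracy(points, labels, epochs=20):
    """Train a simple perceptron and return its final accuracy."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != y:  # mistake-driven weight update
                w[0] += y * x1
                w[1] += y * x2
                b += y
    correct = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else -1) == y
        for (x1, x2), y in zip(points, labels)
    )
    return correct / len(points)

# The four quadrant centers of the shape grid.
quadrants = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

# Linear rule: category depends only on the horizontal axis.
linear_labels = [1 if x1 > 0 else -1 for x1, _ in quadrants]

# Nonlinear rule: diagonal (non-adjacent) quadrants share a category.
xor_labels = [1 if x1 * x2 > 0 else -1 for x1, x2 in quadrants]

print("linear rule accuracy:", perceptron_accuracy(quadrants, linear_labels))
print("nonlinear rule accuracy:", perceptron_accuracy(quadrants, xor_labels))
```

The linear rule is learned with 100% accuracy, while the XOR-like rule is capped below it no matter how long training runs, mirroring (in a very loose, schematic way) why a visual code built on single axes would handle linear boundaries more robustly.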
This study isn’t just about abstract shapes in a lab. It gets at the heart of how humans can adapt so fluidly to new contexts. From driving in a foreign city, to learning a new sport, to navigating social norms in different cultures, we constantly shift the “rules” of categorization. The findings suggest that our brains don’t simply passively see the world and then let higher cognition sort it out. Instead, perception itself is shaped by the rules of the moment. The carrot at the farmer’s market is visually represented differently depending on whether you’re thinking about vegetables or colors.
If you want to learn more, read the original article, “Dynamic categorization rules alter representations in human visual cortex,” in Nature Communications: http://dx.doi.org/10.1038/s41467-025-58707-4.