
3.3 Do sensory systems sometimes operate independently?

  • Writer: Dylan Smith
  • Nov 16, 2025
  • 4 min read

Updated: Nov 24, 2025

[The first in a series of three excerpts on sensory integration from my book, Ready to Learn.]


Objectives:

A. Distinguish two examples of independent sensory function in animals.

B. Appreciate the sophistication of multisensory interactions in humans, but also understand that independent sensory function is sometimes observed, most often in children.

C. Define “sensory dominance.”


Independent sensory function is often observed in animals. In the spirit of developmental psychobiology, we will consider two examples. Birch and Lefford (1963) described one stark example, a remarkable account of frog behaviour first documented more than a century earlier. A frog will repeatedly strike at a living fly impaled on a rod and surrounded by sharp skewers even when each strike inflicts a laceration on its tongue. With apologies to squeamish readers, a frog in such circumstances will continue to strike at the fly even if its tongue has been torn to shreds. In other words, the frog fails to make a connection between its vision-triggered strike response and the sensation of pain. The tragic display indicates that the two sensory systems involved, vision and touch, are not equipped with the neural connections needed to make an association and block the response. On the other hand, the authors add, a frog can learn in a few trials to avoid striking a bitter-tasting caterpillar.

The second class of independent sensory function in animals occurs when sensory systems work in sequence. Turkewitz referred to such sequencing as “serial switching” and cited the prey-finding methods of pit vipers, a family of venomous snakes that includes the copperhead and rattlesnake. Pit vipers use chemical cues to track prey before using infrared thermal cues to target their mark just prior to the actual strike (Turkewitz, 1994).

In contrast to these illustrations of sensory independence in animals, human sensory systems appear to operate in a thoroughly multisensory fashion. Consider the unassuming sense of taste, typically thought to do its work more or less independently with an occasional assist from the sense of smell. In truth, when food is chosen and taken into the mouth for chewing, the visual, taste, smell, and touch sensory systems are all activated (de Araujo, 2009; Rolls, 2004b). Two principal taste regions of the brain’s cortex are involved: the primary gustatory area (G1) and the secondary gustatory area (G2). G1 is situated alongside the face area of the primary touch cortex. It is home to a variety of unisensory and multisensory neurons that allow the taste, texture, smell, and temperature of foods to be integrated. Motor movements of the mouth, tongue, and jaws during chewing also contribute to processing in G1. They generate sensations of texture and viscosity, and retronasal smells (smells arriving at the nose through the back of the mouth), that help identify the presence of desirable fats or dangerous toxins (de Araujo, 2009). In addition, chewing sounds can be heard through the ears as usual or conducted directly to the middle and inner ear via the jawbone (Verhagen & Engelen, 2006).

Scientists usually refer to G2 as the orbitofrontal cortex. It is a small region at the very front of the brain, and it sits some distance from G1. This area also serves as secondary smell cortex, and so it encodes information related to both smells and tastes, including their learned “pleasantness” or reward value (Rolls, 2004a). More than half of the neurons in G2 doing this work are sensitive to unisensory taste, smell, and visual inputs, but smell-visual, taste-visual, and taste-smell multisensory neurons also populate the region (Rolls, 2004b). G2 also receives relayed visual and touch inputs to help assess the reward value of particular nutrients or toxins.

And what about that familiar feeling we get that says, “I’ve eaten enough of that food”? Remarkably, G2 coordinates ongoing, sensory-specific calculations in four sensory systems (vision, taste, touch, and smell) to track accumulating feelings of “I’ve had enough” for each of the foods we are eating (Rolls, 2004b).

The brief outline above only begins to describe the full range of sensory interactions involved in the everyday activity of tasting food. It would be far more accurate to say we have a broad multisensory system for the safe ingestion of food than to claim we have a standalone sensory system for taste.

Gerald Turkewitz and his contemporaries understood that humans are multisensory beings. However, as a researcher who knew that sensory systems in animals sometimes function independently, he remained open to the possibility that human sensory systems also did so in some situations. In his opinion, sensory independence in humans was most likely to be seen in young children. Infants, for example, always explore a truly novel object with a visual inspection before placing it in the mouth for continued exploration (Ruff et al., 1992). Turkewitz viewed this well-studied sequential approach as reflecting a measure of isolation or even dominance between vision and touch. He compared it to the serial switching observed in pit vipers.

Other studies have reported sensory independence in older children. Nardini and colleagues (2008) investigated the extent to which self-motion (vestibular) cues versus visual location cues help 4- to 5-year-olds, 7- to 8-year-olds, and adults find their way around a darkened room. Participants moved around the perimeter of a darkened room to collect illuminated objects from three locations, indicated by three illuminated landmarks at the room’s centre. Then the landmarks were turned off. Following a brief delay in total darkness, participants were required to return the first object to its original location under varying conditions. In one condition, the landmarks at the room’s centre remained unilluminated, requiring participants to rely only on vestibular cues to estimate the perimeter location. In another condition, participants were turned around until disoriented and the landmarks were re-illuminated, requiring them to rely only on those visual cues. In a third condition, participants were not disoriented, and the landmarks were re-illuminated.

Each participant’s accuracy in relocating the first object was compared with mathematical predictions for two possible scenarios: the self-motion and visual cues being integrated, or being used in alternation. The results revealed clear effects. First, accuracy in returning an object to its original location improved with age. Second, adults improved the accuracy of their estimates by integrating the vestibular and visual cues, whereas the children’s performance was consistent with cues used in alternation.
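For readers curious about those mathematical predictions, the sketch below outlines the standard maximum-likelihood model of cue integration that studies of this kind typically test against. The notation is mine, not Nardini and colleagues’: it treats each cue’s location estimate as carrying its own level of noise (variance).

```latex
% A minimal sketch of the standard cue-integration prediction
% (maximum-likelihood model). The symbols are illustrative:
% \hat{x}_V is the location estimate from visual landmarks,
% \hat{x}_S the estimate from self-motion (vestibular) cues,
% with variances \sigma_V^2 and \sigma_S^2.

% Optimal integration weights each cue by its reliability:
\[
  \hat{x}_{VS} = w_V \hat{x}_V + w_S \hat{x}_S ,
  \qquad
  w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_S^2} ,
  \qquad
  w_S = 1 - w_V .
\]

% The integrated estimate is then more reliable than either cue alone:
\[
  \sigma_{VS}^2 = \frac{\sigma_V^2 \, \sigma_S^2}{\sigma_V^2 + \sigma_S^2}
  \;<\; \min\!\left(\sigma_V^2 ,\, \sigma_S^2\right) .
\]

% Alternation, by contrast, mixes the two single-cue estimates and so
% cannot produce a variance below that of the better single cue.
```

On this model, accuracy that exceeds the better single cue is the signature of integration, while accuracy capped at the better single cue is consistent with alternation; that is the contrast that separated the adults from the children in the study.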

