Music & the brain

Last updated: January 15, 2018
Approx. reading time: 13 minutes


A blog with quotes from various online articles about music and its relationship to the functioning of the brain. This blog does not reproduce the whole articles, only their most essential passages. Below every excerpt you can find a link to the website the article was published on.



In our everyday lives, language and instrumental music are obviously different things. Neuroscientist and musician Ani Patel is the author of a recent, elegantly argued offering from Oxford University Press, “Music, Language and the Brain.” Oliver Sacks calls Patel a “pioneer in the use of new concepts and technology to investigate the neural correlates of music.” In Patel’s presentation, he discusses some of the hidden connections between language and instrumental music that are being uncovered by empirical scientific studies.

The Music and the Brain Lecture Series is a cycle of lectures and special presentations that highlight an explosion of new research in the rapidly expanding field of “neuromusic.” Programming is sponsored by the Library’s Music Division and its Science, Technology and Business Division, in cooperation with the Dana Foundation.

Aniruddh Patel is the Esther J. Burnham Senior Fellow in Theoretical Neurobiology at the Neurosciences Institute.


(TEDx speech by Anita Collins, animation by Sharon Colman Graham.)


Date: December 6, 2011 
Source: Suomen Akatemia (Academy of Finland)

Finnish researchers have developed a groundbreaking new method that allows them to study how the brain processes different aspects of music, such as rhythm, tonality and timbre (sound color), in a realistic listening situation. The study is pioneering in that it reveals for the first time how wide networks in the brain, including areas responsible for motor actions, emotions, and creativity, are activated during music listening. The new method helps us better understand the complex dynamics of brain networks and the way music affects us.

The researchers found that music listening recruits not only the auditory areas of the brain, but also employs large-scale neural networks. For instance, they discovered that the processing of musical pulse recruits motor areas in the brain, supporting the idea that music and movement are closely intertwined. Limbic areas of the brain, known to be associated with emotions, were found to be involved in rhythm and tonality processing. Processing of timbre was associated with activations in the so-called default mode network, which is assumed to be associated with mind-wandering and creativity.
“Our results show for the first time how different musical features activate emotional, motor and creative areas of the brain,” says Prof. Petri Toiviainen from the University of Jyväskylä. “We believe that our method provides more reliable knowledge about music processing in the brain than the more conventional methods.”

The study was published in the journal NeuroImage.

Vinoo Alluri, Petri Toiviainen, Iiro P. Jääskeläinen, Enrico Glerean, Mikko Sams, Elvira Brattico.
Large-scale brain networks emerge from dynamic processing of musical timbre, key and rhythm. NeuroImage, 2011; DOI:



(By William J. Cromie, Gazette Staff)

“All humans come into the world with an innate capability for music,” agrees Kay Shelemay, professor of music at Harvard. “At a very early age, this capability is shaped by the music system of the culture in which a child is raised. That culture affects the construction of instruments, the way people sound when they sing, and even the way they hear sound. By combining research on what goes on in the brain with a cultural understanding of music, I expect we’ll learn a lot more than we would by either approach alone.”

Looking for a music center

A human brain is divided into two hemispheres, and the right hemisphere has been traditionally identified as the seat of music appreciation. However, no one has found a “music center” there, or anywhere else. Studies of musical understanding in people who have damage to either hemisphere, as well as brain scans of people taken while listening to tunes, reveal that music perception emerges from the interplay of activity in both sides of the brain.

Some brain circuits respond specifically to music; but, as you would expect, parts of these circuits participate in other forms of sound processing. For example, the region of the brain dedicated to perfect pitch is also involved in speech perception.

Music and other sounds entering the ears go to the auditory cortex, assemblages of cells just above both ears. The right side of the cortex is crucial for perceiving pitch as well as certain aspects of melody, harmony, timbre, and rhythm. (All the people tested were right-handed, so brain preferences may differ in lefties.)

The left side of the brain in most people excels at processing rapid changes in frequency and intensity, both in music and words. Such rapid changes occur, for example, when someone plucks a violin string rather than running a bow across it.

Both left and right sides are necessary for complete perception of rhythm. For example, both hemispheres need to be working to tell the difference between three-quarter and four-quarter time.

The front part of your brain (frontal cortex), where working memories are stored, also plays a role in rhythm and melody perception.

Researchers have found activity in brain regions that control movement even when people just listen to music without moving any parts of their bodies.



Montreal researchers find that music lessons before age seven create stronger connections in the brain.

A study published last month in the Journal of Neuroscience suggests that musical training before the age of seven has a significant effect on the development of the brain, showing that those who began early had stronger connections between motor regions – the parts of the brain that help you plan and carry out movements.

This research was carried out by students in the laboratory of Concordia University psychology professor Virginia Penhune, and in collaboration with Robert J. Zatorre, a researcher at the Montreal Neurological Institute and Hospital at McGill University.

The study provides strong evidence that the years between ages six and eight are a “sensitive period” when musical training interacts with normal brain development to produce long-lasting changes in motor abilities and brain structure. “Learning to play an instrument requires coordination between hands and with visual or auditory stimuli,” says Penhune. “Practicing an instrument before age seven likely boosts the normal maturation of connections between motor and sensory regions of the brain, creating a framework upon which ongoing training can build.”

With the help of study co-authors, PhD candidates Christopher J. Steele and Jennifer A. Bailey, Penhune and Zatorre tested 36 adult musicians on a movement task, and scanned their brains. Half of these musicians began musical training before age seven, while the other half began at a later age, but the two groups had the same number of years of musical training and experience. These two groups were also compared with individuals who had received little or no formal musical training.

When comparing a motor skill between the two groups, musicians who began before age seven showed more accurate timing, even after two days of practice. When comparing brain structure, musicians who started early showed enhanced white matter in the corpus callosum, a bundle of nerve fibres that connects the left and right motor regions of the brain. Importantly, the researchers found that the younger a musician started, the greater the connectivity.

Interestingly, the brain scans showed no difference between the non-musicians and the musicians who began their training later in life; this suggests that the brain developments under consideration happen early or not at all. Because the study tested musicians on a non-musical motor skill task, it also suggests that the benefits of early music training extend beyond the ability to play an instrument.

“This study is significant in showing that training is more effective at early ages because certain aspects of brain anatomy are more sensitive to changes at those time points,” says co-author Dr. Zatorre, researcher at the Montreal Neurological Institute and co-director of the International Laboratory for Brain Music and Sound Research.

But, says Penhune, who is also a member of Concordia’s Centre for Research in Human Development, “it’s important to remember that what we are showing is that early starters have some specific skills and differences in the brain that go along with that. But, these things don’t necessarily make them better musicians. Musical performance is about skill, but it is also about communication, enthusiasm, style, and many other things that we don’t measure. So, while starting early may help you express your genius, it probably won’t make you a genius.”



Music research indicates that music education not only has the benefits of self-expression and enjoyment, but is linked to improved cognitive function (Schellenberg), increased language development from an early age (Legg), and positive social interaction (Netherwood). Music listening and performance impact the brain as a whole, stimulating both halves – the analytical brain and the subjective-artistic brain – affecting a child’s overall cognitive development and possibly increasing a child’s overall intellectual capacity more than any other activity affecting the brain’s bilateralism (Yoon).

How does music stimulate the right and left hemispheres?
We often hear about an analytical person, like an accountant, being left-brained while a more “free spirit”, like an artist or poet, is considered “right-brained”. Yet music research indicates that the average professional musician or composer, despite incorrect personality stereotypes, encompasses both the analytical traits of the left brain and the more creative aspects of the right brain.

Music Listening vs. Music Performance/Activity
Music research indicates that both music listening and music performance have significant benefits. Several years ago popular culture was abuzz with the Mozart Effect, the incorrect notion that simply listening to Mozart for several minutes a day permanently increased a child’s IQ. While subsequent music research indicates the Mozart Effect does not exist, several studies indicate that listening to music does have significant physiological benefits.

The act of listening to music has several noted benefits (Yoon):

  • Stress relief and emotional release 
  • Increased creativity and abstract thinking
  • Positive influences on the body’s overall energy levels and heart rhythm

Music research on music education suggests that musical activities like dancing, playing an instrument, and singing yield long-term benefits in memory, language development, concentration, and physical agility (Netherwood, Schellenberg). Added memory and language skills help the average musician gain a better understanding of human language than those who do not engage in musical activities (Moreno). Long-term cognitive and language skills increased for student musicians who maintained long-term commitments to music by studying an instrument or engaging in vocal performance.

Key Points
Music research shows that music education benefits students, notably through its positive effects on brain function.

  • Music research indicates that music education benefits students by increasing self-expression, cognitive abilities, language development, and agility.
  • Music is unique in its ability to engage more than a single brain hemisphere, incorporating both the right and left sides of the brain.
  • While music listening has marked benefits regarding the physiological effects of stress, playing an instrument or taking vocal lessons offers a marked increase in the benefits of music education, especially with regard to memory, language, and cognitive development.




According to the traditional theory of nerves, two nerve impulses sent from opposite ends of a nerve annihilate when they collide. New research from the Niels Bohr Institute now shows that two colliding nerve impulses simply pass through each other and continue unaffected. This supports the theory that nerves function as sound pulses. The results are published in the scientific journal Physical Review X.

Nerve signals control the communication between the billions of cells in an organism and enable them to work together in neural networks. But how do nerve signals work?

In 1952, Hodgkin and Huxley introduced a model in which nerve signals were described as an electric current along the nerve produced by the flow of ions. The mechanism is produced by layers of electrically charged particles (ions of sodium and potassium) on either side of the nerve membrane that change places when stimulated. This change in charge creates an electric current.

This model has enjoyed general acceptance. For more than 60 years, all medical and biology textbooks have said that nerve function is due to an electric current along the nerve pathway. However, this model cannot explain a number of phenomena that are known about nerve function.
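The textbook ionic picture can be made concrete with a small sketch (my own illustration, not from the article): the membrane current is the sum of sodium, potassium, and leak currents, each driven toward its own reversal potential. The conductances are the classic squid-axon constants; the gating values plugged in below are illustrative placeholders, not computed from the full Hodgkin–Huxley equations.

```python
def ionic_current(v, m, h, n):
    """Total ionic membrane current (uA/cm^2) at potential v (mV).

    m, h, n are the sodium-activation, sodium-inactivation and
    potassium-activation gating variables (each between 0 and 1).
    """
    g_na, e_na = 120.0, 50.0   # sodium conductance (mS/cm^2), reversal (mV)
    g_k,  e_k  = 36.0, -77.0   # potassium
    g_l,  e_l  = 0.3,  -54.4   # leak

    i_na = g_na * m**3 * h * (v - e_na)   # fast inward sodium current
    i_k  = g_k  * n**4 * (v - e_k)        # delayed outward potassium current
    i_l  = g_l  * (v - e_l)               # passive leak
    return i_na + i_k + i_l

# Near the resting potential the currents roughly cancel:
rest = ionic_current(-65.0, 0.05, 0.6, 0.32)
```

At rest the three currents nearly balance, which is why the membrane sits at a stable potential until a stimulus tips it over threshold.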

Researchers at the Niels Bohr Institute at the University of Copenhagen have now conducted experiments that raise doubts about this well-established model of electrical impulses along the nerve pathway.

“According to the theory of this ion mechanism, the electrical signal leaves an inactive region in its wake, and the nerve can only support new signals after a short recovery period of inactivity. Therefore, two electrical impulses sent from opposite ends of the nerve should be stopped after colliding and running into these inactive regions,” explains Thomas Heimburg, Professor and head of the Membrane Biophysics Group at the Niels Bohr Institute at the University of Copenhagen.

Thomas Heimburg and his research group conducted experiments in the laboratory using nerves from earthworms and lobsters. The nerves were removed and used in an experiment that allowed the researchers to stimulate the nerve fibres with electrodes at both ends. Then they measured the signals en route.

“Our study showed that the signals passed through each other completely unhindered and unaltered. That’s how sound waves work. A sound wave doesn’t stop when it meets another sound wave. Both waves continue on unimpeded. The nerve impulse can therefore be explained by the fact that the pulse is a mechanical wave in the form of a sound pulse, a soliton, that moves along the nerve membrane,” explains Thomas Heimburg.

When the sound pulse moves through the nerve pathway, the membrane changes locally from a liquid to a more solid form. The membrane is compressed slightly, and this change leads to an electrical pulse as a consequence of the piezoelectric effect. “The electrical signal is thus not based on an electric current but is caused by a mechanical force,” points out Thomas Heimburg.

Thomas Heimburg, along with Professor Andrew Jackson, first proposed the theory that nerves function by sound pulses in 2005. Their research has since provided support for this theory, and the new experiments offer additional confirmation for the theory that nerve signals are sound pulses.



(Anne Trafton | MIT News Office)

In Western styles of music, from classical to pop, some combinations of notes are generally considered more pleasant than others. To most of our ears, a chord of C and G, for example, sounds much more agreeable than the grating combination of C and F# (which has historically been known as the “devil in music”).

For decades, neuroscientists have pondered whether this preference is somehow hardwired into our brains. A new study from MIT and Brandeis University suggests that the answer is no.

In a study of more than 100 people belonging to a remote Amazonian tribe with little or no exposure to Western music, the researchers found that dissonant chords such as the combination of C and F# were rated just as likeable as “consonant” chords, which feature simple integer ratios between the acoustical frequencies of the two notes.
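The "simple integer ratio" point can be checked with a little arithmetic (my own illustration, not part of the study): in equal temperament each semitone multiplies frequency by 2^(1/12), so the C–G fifth (7 semitones) lands very near 3:2, while the C–F# tritone (6 semitones) is the square root of 2, which no small-integer ratio approximates well.

```python
import math

def interval_ratio(semitones):
    """Frequency ratio of an equal-tempered interval of the given size."""
    return 2 ** (semitones / 12)

fifth = interval_ratio(7)    # ~1.4983, close to the simple ratio 3/2
tritone = interval_ratio(6)  # exactly sqrt(2) ~1.4142
```

The nearness of the fifth to 3:2 (and the tritone's irrational ratio) is the acoustical fact behind the consonant/dissonant labels the study tested.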

“This study suggests that preferences for consonance over dissonance depend on exposure to Western musical culture, and that the preference is not innate,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT.

The findings suggest that it is likely culture, and not a biological factor, that determines the common preference for consonant musical chords, says Brian Moore, a professor of psychology at Cambridge University, who was not involved in the study.

“Overall, the results of this exciting and well-designed study clearly suggest that the preference for certain musical intervals of those familiar with Western music depends on exposure to that music and not on an innate preference for certain frequency ratios,” Moore says.



(McGovern Institute for Brain Research at MIT)

Research on the overall organization and functional properties of the auditory cortex in the human brain. The goal of this study was to get a broad view of how the auditory cortex might be organized, using an MRI scanner and 10 test subjects. The research team played 160 different sounds to the test subjects to measure their responses. The results of this research seem to suggest that there are distinct cortical pathways for music and speech: neural “machinery” that is specialized to some extent for music perception.

Source: McGovern Institute for Brain Research at MIT


Western music improvisers learn to realize chord symbols in multiple ways according to functional classifications, and practice making substitutions of these realizations accordingly. In contrast, Western classical musicians read music that specifies particular realizations so that they rarely make such functional substitutions. We advance a theory that experienced improvisers more readily perceive musical structures with similar functions as sounding similar by virtue of this categorization, and that this categorization partly enables the ability to improvise by allowing performers to make substitutions. We tested this with an oddball task while recording electroencephalography. In the task, a repeating standard chord progression was randomly interspersed with two kinds of deviants: one in which one of the chords was substituted with a chord from the same functional class (“exemplar deviant”), and one in which the substitution was outside the functional class (“function deviant”). For function compared to exemplar deviants, participants with more improvisation experience responded more quickly and accurately and had more discriminable N2c and P3b ERP components. Further, N2c and P3b signal discriminability predicted participants’ behavioral ability to discriminate the stimuli. Our research contributes to the cognitive science of creativity through identifying differences in knowledge organization as a trait that facilitates creative ability.
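The oddball design described above can be sketched roughly as follows (purely illustrative; the trial labels and probabilities are my assumptions, not the study's actual stimuli or proportions): a repeating standard progression is randomly interspersed with two rarer deviant types.

```python
import random

def make_trials(n, p_deviant=0.2, seed=0):
    """Generate an oddball trial sequence: mostly 'standard' trials,
    with rare exemplar and function deviants (split evenly)."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        r = rng.random()
        if r < p_deviant / 2:
            trials.append("exemplar_deviant")   # substitute chord, same functional class
        elif r < p_deviant:
            trials.append("function_deviant")   # substitute chord, different functional class
        else:
            trials.append("standard")           # unaltered progression
    return trials

trials = make_trials(200)
```

Because deviants are rare and unpredictable, each one elicits the N2c/P3b ERP components whose discriminability the study relates to improvisation experience.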


Creative Commons License