- Tracking two tones at once, at 3 and 7 months
- Internalized timing of isochronous sounds is represented in neuromagnetic beta oscillations
- Learning about musical rhythm and timing
- Use of prosody and information structure in high functioning adults with Autism in relation to language ability
- Becoming musical listeners Part 1: Was that a wrong note?
- Becoming musical listeners Part 2: Learning about which notes belong in a musical key
- Becoming musical listeners Part 3: What makes "nice sounds" sound nice to infants?
- Hearing pitch: Timing codes in 4- and 8-month-old infants
- Monkeying around: Infants' ability to tell apart Human voices and Monkey voices
- Can 2-month-olds tell where a sound is coming from?
- Active music classes in infancy enhance musical, communicative and social development
- The benefits of music lessons
- The music babies hear changes their brain responses
- Infants can organize the sounds in their world
- The influence of musical movement on infants' social behaviour
The purpose of this study was to examine how polyphonic music and simultaneous sounds are encoded in the brain during development. Polyphonic music contains multiple melodic lines (referred to as "voices") which are often separated in pitch range and are equally important to the music. Because each voice is important, it is crucial for individuals to be able to separate and simultaneously analyze the individual melodies. In adults, we used electroencephalography (EEG) to show that the brain is able to process the voices of a polyphonic melody in separate memory traces. Interestingly, we also found that adult brains process the higher-pitched voice better than the lower-pitched voice. We wanted to know how 7-month-old infants process simultaneous sounds. We presented two sounds as a repeating standard and occasionally modified the pitch of one tone. We found that at 7 months, infants showed very similar responses to adults. They are able to process each tone separately, and they already show better processing of the high voice than the low voice. We also found an association between the amount of music listening at home and the quality of simultaneous sound processing: the more infants listened to music at home, the better their brains were able to process this polyphonic music. We are now extending this study to 3-month-old infants who have had much less music exposure. We hope to learn (1) when the ability to process each tone separately appears, and (2) if humans have an inherent preference for the high voice, or whether this is something learned through exposure to Western music.
Moving in synchrony with an auditory rhythm requires predictive action based on neurodynamic representation of temporal information. Although it is known that a regular auditory rhythm can facilitate rhythmic movement, the neural mechanisms underlying this phenomenon remain poorly understood. In this human magnetoencephalography experiment, 12 young healthy adults listened passively to an isochronous auditory rhythm without producing rhythmic movement. We hypothesized that the dynamics of neuromagnetic beta-band oscillations (20 Hz), which are known to reflect changes in the active status of sensorimotor functions, would show modulations in both power and phase-coherence related to the rate of the auditory rhythm across both auditory and motor systems. Despite the absence of an intention to move, modulation of beta amplitude as well as changes in cortico-cortical coherence followed the tempo of sound stimulation in auditory cortices and motor-related areas including the sensorimotor cortex, inferior frontal gyrus, supplementary motor area, and the cerebellum. The time course of the beta decrease after stimulus onset was consistent regardless of the rate or regularity of the stimulus, but the time course of the following beta rebound depended on the stimulus rate only in the regular stimulus conditions, such that the beta amplitude reached its maximum just before the occurrence of the next sound. Our results suggest that the time course of beta modulation provides a mechanism for maintaining predictive timing, that beta oscillations reflect functional coordination between auditory and motor systems, and that coherence in beta oscillations dynamically configures the sensorimotor networks for auditory-motor coupling.
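Beta-band amplitude modulations like those described above are commonly quantified by band-pass filtering the recorded signal around 20 Hz and extracting its amplitude envelope from the analytic signal. The following is a minimal illustrative sketch, not the authors' MEG analysis pipeline: the filter band (15–25 Hz), filter order, and the toy signal with a transient amplitude dip are all assumptions made for demonstration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_envelope(x, fs, low=15.0, high=25.0, order=4):
    """Band-pass filter around the beta band and return the amplitude
    envelope of the filtered signal (magnitude of the analytic signal)."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

# Toy signal: a 20 Hz oscillation whose amplitude dips between 0.5 s and
# 1.0 s, mimicking stimulus-induced beta suppression followed by rebound
np.random.seed(0)
fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
amp = np.where((t > 0.5) & (t < 1.0), 0.3, 1.0)
sig = amp * np.sin(2 * np.pi * 20 * t) + 0.1 * np.random.randn(t.size)
env = beta_envelope(sig, fs)
```

Tracking where this envelope reaches its minimum and where it rebounds, relative to each sound onset, is the kind of time-course measure the abstract refers to.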
When we listen to music, speech, and other auditory information, we use the timing of events to help organize what we hear. For example, speech is organized into groups of syllables that form words, and strings of words that make sentences. Music uses timing information too, in the form of beat and rhythm. The beat of music is the steady pulse that we can tap our feet to. Although almost every kind of music has an underlying beat, the music of different cultures groups musical beats together in different ways. North American music tends to group beats into repeated groups of two, like a march, or three, like a waltz. However, in other cultures, music is often grouped into fives, sevens, or other more complex combinations of beats. These complex groupings are easy to understand for people who grow up in a musical culture that uses them, but difficult for those of us who didn't. However, we don't know at what age this becomes true, or why it would be helpful for us to specialize our listening abilities this way. In this study, we wanted to find out if 5-year-old children show the same kind of specialization that adults do. We also wanted to know if these rhythmic skills are related to other abilities that also use timing (like language and reading), or abilities involved in listening to sequences and copying them back (memory and motor skills). Early results suggest that 5-year-olds are, in fact, better at listening to and drumming back music that uses culturally familiar groupings, compared to more unfamiliar groupings. Additionally, children who have bigger vocabularies and better pre-reading skills often perform better on the musical rhythm tasks. Since both of these tasks require children to pay attention to timing information, it is possible that children who are more sensitive to timing information in both speech and music have an advantage in these areas.
Abnormal prosody is a striking feature of the speech of those with autism spectrum disorder (ASD), but previous reports suggest large variability among those with ASD. We showed that part of this heterogeneity can be explained by level of language functioning. We recorded semi-spontaneous but controlled conversations in adults with and without ASD and measured features related to pitch and duration to determine (1) general use of prosodic features, (2) prosodic use in relation to marking information structure, specifically, the emphasis of new information in a sentence (focus) as opposed to information already given in the conversational context (topic), and (3) the relation between prosodic use and level of language functioning. We found that, compared to typical adults, those with ASD with high language functioning generally used a larger pitch range than controls but did not mark information structure, whereas those with moderate language functioning generally used a smaller pitch range than controls but marked information structure appropriately to a large extent. Both impaired general prosodic use and impaired marking of information structure would be expected to seriously impact social communication and thereby lead to increased difficulty in personal domains, such as making and keeping friendships, and in professional domains, such as competing for employment opportunities.
Adults who have never taken a music lesson can still typically tell which notes are wrong and which are right, even in a song that they have never heard before! This is because we gather a lot of knowledge about the music of our culture simply from listening to music – even without any formal musical training. Two of the more sophisticated musical abilities that we acquire during childhood are sensitivity to key membership (knowing which notes and chords belong in a key and which do not) and harmony perception (knowing which notes and chords should follow others). While we know quite a lot about how children learn the rules of their native language, we don't know much about how they learn the rules of the music of their culture. In our study, we examined whether 4- and 5-year-olds could demonstrate any knowledge of key membership and harmony. We tested this question in two ways. In the first, we showed children videos of two puppets playing the piano: one played a song that followed the rules, and one played a song that contained either wrong notes or chords that went outside the key, or a note or chord that stayed in the key but was unexpected at that point. We asked children to make judgments about which puppet should get a prize for playing the best song. With this method, we found that 5-year-olds could demonstrate some understanding of key membership but not harmony, but 4-year-olds seemed to think every song sounded great, even the ones with wrong notes! Even though 4-year-olds couldn't show any explicit knowledge of key membership or harmony, we wondered whether their brains could register that there had been a wrong or unexpected note or chord even though they couldn't actually tell us that this had happened. To answer this question, we recorded EEG while children listened to the songs playing in the background and watched a silent movie to keep them entertained. 
We found that the pattern of brain activity was actually different in response to chords that were correct compared to out-of-key or unexpected chords. This means that even though 4-year-olds may not be consciously aware that a wrong note or chord has occurred, their brains still register the mistake. We are currently using the same method to see if 3-year-olds can show any knowledge of key membership and harmony. In addition, we are designing "easy" versions of these tasks, with many changes and more wrong notes. We want to know if children as young as 2 years of age can detect wrong notes if we give them a very simple and fun game to play (see next article).
When adults and older children hear an unfamiliar song, they can immediately point out when someone plays a "wrong note" or sings "out of key". How can they do this when they haven't heard the song before? How do they even know that the note is wrong? People learn about the structure of the language and musical systems in their environment just through everyday exposure. With more exposure, they become more sensitive to rules about which words can go together in sentences, and which notes go together in a musical key. This year, we tested 11- and 12-month-old infants' sensitivity to musical key structure. We created two versions of a piano Sonatina by Thomas Atwood. One version presented the music in the key of G major (tonal version) and the other version alternated every beat between G major and G-flat major (atonal version). To an adult, the atonal version sounds wrong or out-of-key, because it has no tonal centre and all 12 chromatic notes are present. However, every chord in this version is still consonant (for more information on consonance and dissonance, see the "Part 3" article next). Infants controlled how long they listened to the tonal and atonal versions. If they spent most of their time listening to the tonal version, we interpreted this as a preference for that version. Interestingly, 12-month-old infants who participate in interactive music classes with their parents already show a preference for the tonal version. However, 12-month-olds who do not participate in such classes do not show this preference until later. We believe that the increased exposure to Western music in the music classes leads to earlier sensitivity to this aspect of musical structure. We are just beginning an exciting new study to find out when the majority of children will start to prefer the tonal over atonal version of a musical piece. In this study, 2-year-olds will play a game on coloured foam mats, controlling which version of the song plays based on where they move on the mats.
We combine notes in different ways to make music. Some combinations of notes (intervals) are called consonant: adult listeners (even those without any musical training) say that consonant intervals sound pleasant, good, smooth, in-tune, and correct. Other intervals are called dissonant, and listeners describe these intervals as unpleasant, rough, tense, out-of-tune, and incorrect. Six-month-old infants can detect an occasional dissonant interval in a set of consonant intervals, and infants as young as 2 months of age prefer to listen to consonant over dissonant intervals. In this study, we want to find out which aspects of musical sound contribute to this preference. In adults, researchers recently found that the aspect called harmonicity (how the different overtones or harmonics of a complex sound line up in frequency space) is closely related to perception of consonance. In the first study, we looked at harmonicity perception in 5- and 6-month-old infants. Infants sat on their parent's lap and controlled how long they listened to harmonic (perfectly lined up) or inharmonic (not-quite lined up) sounds over 20 trials by where they looked. The infants in this study looked much longer in order to hear the harmonic sounds, which we interpret as a preference for these sounds. These results show us that even at 6 months of age, infants are sensitive to the harmonicity aspect of consonance. In a second study, we are looking at another aspect of dissonance called beating. When we hear two tones that are very close together in frequency, we experience a rough or "beating" sensation. We can greatly reduce that sensation by taking these two tones and presenting one to the left ear and the other to the right ear. In order to figure out if infants prefer tones without beating, we are using headphones so that we can present a different sound to each ear. Infants control which sounds they hear based on where they look on a computer monitor.
Overall, so far we have discovered that infants show sensitivity to differences in harmonicity, which tells us that this aspect of musical sound is important for the perception of consonance and dissonance.
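The "beating" described above has a simple quantitative basis: two tones at nearby frequencies f1 and f2 sum to a tone at their mean frequency whose loudness waxes and wanes |f1 − f2| times per second. A small sketch of this identity (the specific frequencies and sample rate are illustrative, not the study's stimuli):

```python
import numpy as np

fs = 8000                       # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)   # one second of time samples
f1, f2 = 440.0, 444.0           # two tones 4 Hz apart

# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2): the sum is a 442 Hz
# carrier whose amplitude is modulated at the beat rate |f1 - f2| = 4 Hz
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
beat_rate = abs(f2 - f1)        # 4 beats per second
envelope = np.abs(2 * np.cos(np.pi * (f2 - f1) * t))
```

Presenting f1 to one ear and f2 to the other through headphones removes this monaural amplitude modulation at the eardrum, which is why the dichotic presentation described above greatly reduces the perceived roughness.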
Pitch perception is central to hearing music. Pitch is also important in other areas, such as attributing sounds to their appropriate sources, identifying a familiar voice, and extracting meaning from speech such as understanding whether a sentence was meant as a statement or a question. Past studies from our lab have shown that, even as infants, we have sophisticated auditory skills that allow us to make very accurate pitch judgments. Pitch is encoded in our brain in two codes, a temporal (timing) code and a spectral (frequency) code. Past research indicates that infants can use the frequency code. Here we investigated the development of the timing code. In order to do this, we needed to create sounds that contained timing but not frequency cues. Such sounds are referred to as iterated rippled noise (IRN) stimuli. First, we found out that 8-month-old infants could detect a change in the pitch of this type of sound using our conditioned head-turn method. We then moved to the EEG lab to examine the brain responses to a change in the pitch of IRN sounds. Here we found that 8-month-old infants showed a characteristic brain response to occasional pitch changes using IRN sounds. However, infants did not find these tasks easy! Unlike with sounds containing frequency information, when the infants were first exposed to IRN sounds, they could not tell if there was a change in pitch. Only after some experience and training did they show brain responses to pitch changes. We are currently examining whether or not 4-month-old listeners can also detect pitch changes in IRN sounds. In sum, these results suggest that the timing code for processing pitch in our brains is present but not well developed in young infants.
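Iterated rippled noise is conventionally generated by a delay-and-add loop: a copy of a noise burst is delayed by d seconds and added back to the original, and the process is repeated several times, which imposes temporal regularity heard as a pitch at 1/d Hz. The sketch below shows that standard construction; the delay, iteration count, and gain are illustrative choices, not the lab's actual stimulus parameters.

```python
import numpy as np

def iterated_rippled_noise(duration_s, fs, delay_s, n_iter, gain=1.0, seed=0):
    """Delay-and-add IRN: each pass adds a copy of the signal delayed by
    delay_s seconds, imposing temporal regularity that is heard as a
    pitch at 1/delay_s Hz while the waveform remains noise-like."""
    rng = np.random.default_rng(seed)
    out = rng.standard_normal(int(duration_s * fs))
    d = int(round(delay_s * fs))          # delay in samples
    for _ in range(n_iter):
        delayed = np.zeros_like(out)
        delayed[d:] = out[:-d]
        out = out + gain * delayed
    return out / np.max(np.abs(out))      # normalize to +/-1

# 0.5 s of IRN with a 10 ms delay -> pitch heard at 100 Hz
irn = iterated_rippled_noise(0.5, 8000, delay_s=0.01, n_iter=8)
```

The pitch strength grows with the number of iterations, which is one way IRN lets researchers manipulate how salient the timing cue is.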
The ability to tell the difference between two individuals is very important for everyday social interaction. When someone cannot be seen, as over a telephone, a useful way for a listener to identify them is by the sound of their voice. Adults are specialized for human voices. They are much better at telling apart two voices from their own (human) species than they are at telling apart two voices from a foreign (monkey) species. Here we investigated whether people learn to be so good at human voices through a lot of experience with human voices, or whether we are simply born better at processing human voices. The purpose of Study 1 was to test whether specialization for human voices develops during infancy. Six- and twelve-month-old infants came to the lab, and were seated on their parent's lap facing the researcher. The infants listened to the voice of one human or monkey female and were taught to turn their head to the side every time the voice of a second female of the same species was presented. Correct head turns were rewarded with a dancing toy! Using this method we found that infants of both ages turned their heads more often when there was a change in speaker than when there was no change, indicating that they were able to tell the difference between the two speakers for both monkey and human voices. However, 12-month-olds showed better discrimination for human compared to monkey voices, while 6-month-olds were equally good at both species. This shows that between 6 and 12 months of age infants become increasingly specialized for discriminating voices in their own species. This specialization for human voices likely occurs because infants experience many human voices in their daily environment, but no monkey voices. In Study 2, we tested whether experience is driving the effects we saw in the first study. So we gave 12-month-old infants experience with monkey voices, and then repeated the discrimination tests of Study 1. 
Specifically, we gave parents a book and CD in the form of a narrated storybook called "Beach Day for the Monkey Family" in which infants heard the monkey voices of a "father", "mother", "sister" and "brother" monkey. We found that these infants were better at telling apart new monkey voices than the infants in Study 1. This result indicates that older infants lose the ability to tell apart monkey voices because they are not exposed to these voices in their environment, and that by providing this exposure, their ability to discriminate monkey voices can be recovered. Discrimination of people by their voices is important for social interaction, recognizing familiar people and forming friendships. Together, these two studies show us that development of good discrimination for human voices depends on experience hearing human voices.
Sound localization refers to the auditory system's ability to use differences between the two ears in the loudness and timing of a sound in order to determine where in space the sound is coming from. For example, a sound in front of you will reach the two ears at the same time, and be equally loud in both ears, but a sound coming from the right will reach the right ear before the left ear and sound louder in the right ear than in the left ear. This ability provides listeners with useful spatial information, helping to direct attention to important sounding objects, such as someone talking, or oncoming traffic. Adults are very skilled at figuring out the location of a sound. Previously we have used the event-related potential (ERP) technique with EEG data to measure brain activity from 5-, 8-, and 12-month-old infants in response to changes in sound location. These data revealed that infants 5 to 12 months of age respond to the change in location with both an immature brain response (not seen in adults) and an adult-like mature brain response. In the current study, our results show that 2-month-olds respond with the immature brain response but do not show the mature response. This indicates that 2-month-olds can process sound location to some extent, but that adult-like processing of sound location does not begin until after 2 months of age. Future studies will look at infants between 2 and 5 months of age and use very small changes in sound location to determine whether the emergence of the adult-like response is associated with better sound localization abilities.
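The ear-to-ear timing difference described above can be approximated with a standard spherical-head model (the Woodworth formula). This is a textbook illustration, not a calculation from the study; the head radius and speed of sound below are typical assumed values.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference for a distant source, for azimuths from 0 deg (straight
    ahead) to 90 deg (directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))
```

A source straight ahead gives a 0 µs difference, while a source directly to the side gives roughly 650 µs for an adult-sized head, which is the range of timing cues the infant auditory system must learn to resolve.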
Previous studies suggest that musical training in children can positively affect various aspects of development. However, it remains unknown how early in development musical experience can have an effect, the nature of any such effects, and whether different types of music experience affect development differently. We found that random assignment to 6 months of active participatory musical experience beginning at 6 months of age accelerates acquisition of culture-specific knowledge of Western tonality in comparison to a similar amount of passive exposure to music. Furthermore, infants assigned to the active musical experience showed superior development of prelinguistic communicative gestures and social behaviour compared to infants assigned to the passive musical experience. These results indicate that (1) infants can engage in meaningful musical training when appropriate pedagogical approaches are used, (2) active musical participation in infancy enhances culture-specific musical acquisition, and (3) active musical participation in infancy impacts social and communication development.
Music lessons can be a fun and engaging way to stimulate the minds of children and help them learn a skill that they can enjoy throughout their lives. But can participating in music training also benefit children in other areas of their education and development? Media reports have certainly been full of claims that music lessons can improve children's math and language skills, and make them smarter in general. However, science has yet to conclusively demonstrate whether these claims are justified. Past research suggests that music training may affect a wide variety of different skills, including reasoning abilities, language development, and academic competence. If extensive music training is associated with such widespread benefits, we reasoned that its greatest effects might be observed for more basic skills like memory, attention, and reading ability. These types of skills could have a broad impact on general cognitive skills and school performance. To test this question, we studied a group of 6- to 9-year-old music students who had taken music lessons for varying lengths of time. We examined whether the longer children had been in music lessons, the better they performed on a variety of cognitive tests. Our results suggested that music training was associated with two specific skills, memory and reading comprehension, rather than cognitive skills in general. These findings are exciting because improving memory and reading may form a foundation for improving many other important skills, such as acquiring general knowledge about the world, following directions or instructions, and reasoning through problems. Interestingly, we found that a particular aspect of attention that allows us to focus on one thing while ignoring distractions (e.g., saying that the centre fish is pointing left while ignoring the other fish pointing right, above) was not associated with music training. 
This result suggests that music lessons do not have immediate benefits for all cognitive skills. At this point we don't know whether it takes more training to see effects of music on attention, or whether musical training is simply not related to attention. Further research is needed for us to answer this question. But we can conclude that musical training does have benefits for memory and reading comprehension. This important research from last year was recently published in the journal "Music Perception".
We know from other research (some of it from our lab!) that experience changes the brain. But how much experience do you need in young infancy to see changes in the brain? In this study, two groups of 4-month-old infants listened to a CD of children's songs played on musical instruments for 20 minutes every day for one week. For one group, all songs were played in guitar timbre, and for the other group, all songs were played in marimba timbre. Timbre is the word we use to describe the difference in sound quality between different instruments, people, and objects. Timbre perception is extremely important in everyday listening; for example, to recognize individual voices and musical instruments. After a week of listening at home, we recorded each infant's brain activity in the lab using EEG as they listened to small pitch changes in both guitar and marimba timbre. As it turns out, the babies who heard guitar music showed larger brain responses to guitar tones, and the babies who heard marimba music showed larger brain responses to marimba tones! This difference is even more impressive because the specific musical notes they heard in the lab were different than the notes on the CD. This means that their learning of the timbre generalized to new notes they had not heard before in that timbre. The pattern of results shows us that even a short amount of exposure lasting a few minutes a day for one week can have measurable effects on infant brain responses. More and more, we are now learning about how brain representations can be shaped by particular experiences in infancy and childhood. This exciting research was recently published in the neuroscience journal "Brain Topography".
In most natural environments there are many sounds occurring at the same time. The sound waves combine in the air and reach the ear as one complex waveform. The brain needs to figure out how many sounds are present and which parts of the complex waveform belong to each sound source in the environment. Adults are very good at doing this. For example, if adults are at a busy party with loud music and many people talking, they have little problem separating the different voices from each other and from the music. We wanted to understand whether infants also have this ability to organize sounds occurring at the same time into different auditory objects. Over the past 3 years we have run a number of studies testing how well infants can separate sounds in their environment. In study 1 we tested 6-month-old infants using our conditioned head-turn setup. We used one complex tone with multiple harmonics at integer multiples of the fundamental frequency. Infants heard this tone repeating and every once in a while we mistuned one of the harmonics. For adults, the in-tune complex tone is heard as one sound, but mistuning one harmonic causes that harmonic to be heard as a separate, second sound in addition to the complex tone. At 6 months, infants turned their head (in order to see the toys light up) on occasional presentations of the sound with the mistuned harmonic, indicating that they can hear two tones at once. This important study will soon be published in the Journal of the Acoustical Society of America. In study 2 we used EEG to study whether 2-, 4-, 6-, 8-, 10-, and 12-month-olds can perceive two auditory objects at once. Thank you to all those babies who wore our EEG cap and listened to our 1-tone and 2-tone sounds! We have tested over 140 infants between 2 and 12 months of age to determine if the infant brain can tell the difference between when it is hearing one sound compared to two.
In adults and older children we know that there is a characteristic brain response to hearing two sounds. We used EEG to study whether or not infants also show this brain response. At 2 months of age we did not see different brain responses between one versus two sounds. However, we found that as early as 4 months of age, infants show these different responses. This tells us that by 4 months they can separate two sounds that are happening at the same time. In study 3 we used EEG to study 4-year-olds. Although the 4- to 12-month-olds showed different brain responses to one versus two sounds, these responses were different from those of adults. We are now testing 4-year-olds to determine when the brain begins to show mature adult-like responses to hearing two sounds at once. In Study 4 we are studying how infants integrate sound and vision to perceive objects. In the everyday world, we get auditory and visual information about objects, and we integrate this information together. For example, the speech we hear from a person is integrated with the mouth movements we see. We are testing this integration now. Specifically, we want to know whether 4-month-old infants prefer to look at one bouncing ball when they hear the sound of one ball and two bouncing balls when they hear two balls. This study will help us understand if infants know that one sound should be coming from one object and two sounds from two objects.
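The one-sound/two-sound stimuli in Study 1 rest on a simple construction: a complex tone whose partials sit at integer multiples of a fundamental, with one partial optionally shifted away from its harmonic frequency. The sketch below illustrates that construction; the fundamental, number of harmonics, and 8% mistuning are example values, not the study's actual stimulus parameters.

```python
import numpy as np

def complex_tone(f0, n_harmonics, duration_s, fs, mistuned=None, mistune_pct=0.0):
    """Sum equal-amplitude sine partials at integer multiples of f0.
    If `mistuned` names a harmonic number, that partial is shifted by
    `mistune_pct` percent, which adults hear as a separate second sound."""
    t = np.arange(int(duration_s * fs)) / fs
    tone = np.zeros_like(t)
    for h in range(1, n_harmonics + 1):
        f = h * f0
        if h == mistuned:
            f *= 1.0 + mistune_pct / 100.0
        tone += np.sin(2 * np.pi * f * t)
    return tone / n_harmonics

# In-tune standard vs. a version with the 3rd harmonic mistuned by 8%
standard = complex_tone(200.0, 6, 1.0, 8000)
shifted = complex_tone(200.0, 6, 1.0, 8000, mistuned=3, mistune_pct=8.0)
```

In the mistuned version the partial that belonged at 600 Hz appears at 648 Hz instead, breaking the harmonic pattern that lets the auditory system fuse the partials into a single object.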
Musical behaviours like dancing, singing, and playing musical instruments encourage high levels of interpersonal coordination and have been associated with increased group cohesion and social bonding between group members. Specifically, individuals who walk, sing, or tap together are then more helpful, compliant or cooperative in later interactions with one another. The purpose of our study was to determine when in childhood this effect starts to influence social behaviour. Our first experiment revealed that 14-month-old infants were more likely to help out an adult stranger after having been bounced to music in synchrony (as opposed to out-of-synchrony) with that person's movements. We are now investigating whether 10- and 12-month-old infants also develop a social preference for someone after moving with that individual in synchrony to music.