
This is a list of projects available in my laboratory for MSc students. Please note that
if you work with me, you will also be expected to contribute to lab life, including
attending the weekly lab meetings -- a great opportunity to learn about the other
projects going on in the lab and to present your own work.
Overview
Much of the current research in the lab concentrates on the imagistic (sensory, motor
and affective) information provided by speakers in face-to-face communication, its
role in language learning and comprehension, and its neural underpinnings. This
imagistic information includes the gestures produced (especially iconic gestures),
intonation (prosody) and the use of words, like onomatopoeias, that evoke properties
of objects (e.g., animal sounds).
This information is always present in face-to-face communication. However,
traditionally it has been considered “non-verbal” and largely ignored. But then why
would people make iconic gestures, use evocative intonation and produce onomatopoeias
when speaking? Our proposal is that this information is critical in learning and that it
also supports online comprehension (see Perniss & Vigliocco, 2014, for an overview).
The projects listed below address the following questions: (1) Does this imagistic,
non-verbal information support learning new words and, critically, new concepts in
childhood and adulthood? (2) To what extent does it support language comprehension?
Project 1: Learning new words and concepts in childhood
For children, learning new concepts and their labels has been characterised as a
problem of establishing reference between a spoken label and an object. This has been
considered a hard problem because the label (the object’s name) is argued to be
only arbitrarily linked to the object, and because there are usually multiple objects in
the visual scene. The problem is, clearly, even harder when the object is not in view.
Previous work has identified a number of cues that can help children solve this
problem. In our work, we have put forward the novel hypothesis that the language
used by caregivers may not be so arbitrary after all: caregivers use iconic gestures
when speaking, and use other cues, such as prosody, to imagistically bring features of
referents to the mind’s eye. In our previous work we tested this hypothesis in
British Sign Language (BSL); we are now extending the work to spoken English. The
basic prediction we will test is that caregivers adjust their gestures and intonation
depending upon whether or not the child can interact with the objects being labelled,
and whether or not the object and the label are new to the child. Working on the
project, you will contribute to designing the study, collecting recordings of
caregiver-child interactions, annotating the data and then carrying out the analyses.
Project 2: Learning new concepts and words in adulthood
We learn new concepts and their labels throughout our lives. In the ideal scenario,
when you learn about a new object you are given the opportunity to interact with it
(e.g., suppose you learn about a “stringil” during a visit to the British Museum:
there, you see the object, you watch a video of how it was used and you learn its
name). However, this is often not the case, as we learn about a large number of
concepts just by reading, or by being taught about them. The question we address here
is whether speakers adjust their communication when talking about new objects vs.
known objects, for example by making gestures that illustrate how the object is used,
and by using their intonation to highlight any special property of the object (e.g.,
saying something like “it has a looooong stick” to emphasise the actual dimension). We
will record dyads of adults (a “teacher” and a “learner”) talking about sets of objects,
some new and some known, in order to assess how communication reflects the
imagistic properties of the objects, especially when the objects are novel and
especially when the objects are not present in the immediate context. Working on the
project, you will be in charge of selecting the materials, designing the study,
collecting recordings of face-to-face interactions, annotating the data and then
carrying out the analyses.
Project 3: Using gestures to predict speech in spoken comprehension
Gestures and speech are a clear example of highly interconnected modalities that are
part of human communication. There is evidence that, in comprehension, speech and
gestures are automatically integrated and that gestures facilitate comprehension, at
least when the meanings expressed by the words and the gestures coincide. It is also
known that, during online processing, the onset of gestures precedes the onset of
speech by about 200 ms. Here we ask whether listeners use gestures to predict what is
about to be said.
To investigate this question, we will first develop quantitative measures of how
unpredictable a given word is in a sentence (surprisal) based on text alone (thus
without considering gestures). We already know that these measures predict reading
times and neural responses (N400, EEG) to text and spoken language (Frank, 2013). But
what happens when we include gestures? If listeners use the informative gestures they
see to predict upcoming words in speech (in addition to the linguistic information
captured by our measures of surprisal), our text-based measures of surprisal should be
much better predictors of comprehension when no gestures are present than when
gestures are present.
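
As a purely illustrative (and deliberately simplified) sketch of what a text-based
surprisal measure looks like, the Python snippet below computes the surprisal of a
word from bigram counts over a toy corpus. The corpus, the bigram model and the
add-one smoothing are placeholder assumptions made only for illustration; in the
project itself the probabilities would come from a properly trained language model,
as in Frank (2013).

# Illustration only: surprisal of a word given its context, defined as
# -log2 P(word | context). Here the context is just the previous word
# (a bigram model with add-one smoothing) and the corpus is a toy placeholder;
# the real project would use a trained language model (cf. Frank, 2013).
import math
from collections import Counter

corpus = "the boy kicked the ball the girl threw the ball".split()

# Count unigrams and bigrams in the toy corpus.
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigrams)

def surprisal(prev_word, word):
    """Surprisal (in bits) of `word` given the immediately preceding word."""
    count_bigram = bigrams[(prev_word, word)]
    count_prev = unigrams[prev_word]
    # Add-one (Laplace) smoothing so unseen continuations get non-zero probability.
    prob = (count_bigram + 1) / (count_prev + vocab_size)
    return -math.log2(prob)

# Higher values mean the target word is less predictable from its context.
print(surprisal("the", "ball"))    # a frequent continuation in the toy corpus
print(surprisal("the", "hammer"))  # an unseen continuation: higher surprisal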
Given a set of sentences annotated with surprisal scores for a target word, we want to
collect and analyse reaction times for these target words in two experimental
conditions: speech only, and speech and gesture together. In the project, you will
learn how to use computational tools to derive probabilistic measures of
predictability; prepare audio-visual materials engaging professional actors (from
RADA); programme and run the behavioural experiment; analyse the data; and write up
the report.
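
To make the logic of the two-condition comparison concrete, here is a hedged sketch,
in Python, of one way the key prediction could be tested: regress reaction times on
surprisal, condition and their interaction. Everything in it (the simulated data, the
effect sizes, the use of ordinary least squares rather than the mixed-effects models
the real analysis would more plausibly require) is an assumption made for
illustration, not the lab's actual analysis pipeline.

# Sketch of the analysis logic: if gestures help listeners predict upcoming
# words, text-based surprisal should predict reaction times more strongly in
# the speech-only condition than in the speech+gesture condition, i.e. a
# surprisal-by-condition interaction. Data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
surprisal = rng.uniform(2, 12, size=n)                    # bits, per target word
condition = rng.choice(["speech_only", "speech_gesture"], size=n)

# Simulate a stronger surprisal effect on RTs in the speech-only condition.
slope = np.where(condition == "speech_only", 25.0, 10.0)  # ms per bit of surprisal
rt = 500 + slope * surprisal + rng.normal(0, 50, size=n)  # reaction times in ms

df = pd.DataFrame({"rt": rt, "surprisal": surprisal, "condition": condition})

# The term of interest is the surprisal:condition interaction.
model = smf.ols("rt ~ surprisal * condition", data=df).fit()
print(model.summary())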
Reference:
Frank, S. L. (2013). Uncertainty reduction as a measure of cognitive load in sentence
comprehension. Topics in Cognitive Science, 5(3), 475–494.
Project 4: Speech-gesture integration in aphasia and apraxia
Recent studies have established that co-speech gestures are integrated with speech
online during language comprehension, as evidenced by the fact that people are less
accurate in recognising words when these words are accompanied by incongruent
gestures (e.g., seeing a speaker saying “ball” while making a hammering-like gesture).
It has been argued that gestures can aid language processing by enhancing activation
of the meaning of the word. If this is the case, then gestures may be especially useful
for individuals whose language processing is impaired, such as aphasic speakers.
There are some studies in the literature addressing this question, but they provide
mixed results. There may be many reasons for this, including the specific stimuli
presented to the patients. In addition, claims have been made that different neural
networks underlie the processing of co-speech gestures from those underlying our
ability to comprehend actions. In this project we investigate patients who have
aphasia and/or apraxia after stroke, in order to better understand the relationship
between speech, gesture and action. In the first part, we will compare the patients to
controls on their behavioural profiles. In the second part, we will carry out
Lesion-Symptom Mapping analyses to better understand the networks involved. This
project is carried out in collaboration with scientists in the US (Moss Rehabilitation
Research Institute), and some of the testing will take place in Philadelphia.
Project 5: The neural representation of onomatopoeias
Onomatopoeias are an interesting set of words that seem to straddle the boundary
between environmental sounds and other, arbitrary words. There is evidence that these
words are among the first words to be learnt and that they may be more resistant to
brain damage after stroke. This raises the question of how they are processed in the
brain: to what extent does their processing overlap with the processing of
environmental sounds or with the processing of non-iconic words; to what extent is the
right hemisphere engaged in processing them, in addition to the left; and what is the
time course of their processing?
In this project we will investigate differences in the neural processing of
onomatopoeias and other words using EEG. Starting from previous work by Jyrki
Tuomainen investigating sound-symbolic words in Japanese, we ask here whether
onomatopoeias will show early ERP components more typical of environmental
sounds. In this project you will be responsible for designing the task to be used,
running the EEG study and analysing the data.