WORKSHOP
Individual differences in language processing
across the adult life span
10 - 11 December 2015
Organizers:
Esther Janse
Thordis Neger
Xaver Koch
Sponsored by:
Netherlands Organization for Scientific Research: VIDI grant
awarded to Esther Janse
Max Planck Institute Nijmegen
Radboud University Nijmegen
Contents

1  Program
2  Abstracts oral presentations
3  Abstracts poster presentations
4  General information
1  Program

THURSDAY, December 10

8.50 - 9.00h    Esther Janse: Opening
9.00 - 9.40h    Ardi Roelofs: Attentional sources of individual differences in language production: Implications for models
9.40 - 10.20h   Matt Goldrick: Mechanisms of language control: Insights from individual differences
Coffee break
10.50 - 11.30h  Valerie Hazan & Outi Tuomainen: Clear speech adaptations in younger and older adults
11.30 - 12.10h  Megan McAuliffe: The perceptual challenge of distorted speech associated with neurologic disease or injury
Lunch break
13.10 - 13.50h  Thordis Neger & Cornelia Moers: How sensitivity to co-occurrence frequencies aids language processing across the adult life span
13.50 - 14.30h  Anna Woollams: Meaning in reading: Convergent evidence of systematic individual differences
Coffee break
15.00 - 15.40h  Jerker Rönnberg: The Ease of Language Understanding model: Some recent data
16.00 - 18.00h  Poster session
FRIDAY, December 11

9.00 - 9.40h    Mirjam Ernestus: Individual differences in speech reduction in informal conversations
9.40 - 10.20h   Patti Adank: Eye gaze during recognition of and adaptation to audiovisual distorted speech
Coffee break
10.50 - 11.30h  James McQueen: Individual variation in speech perception: Qualitative differences in learning, not in recognition
11.30 - 12.10h  Florian Jaeger: The role of experience in understanding individual differences
Lunch break
13.10 - 13.50h  Deniz Başkent & Terrin Tamati: Individual differences in CI speech perception outcomes and related cognitive mechanisms
13.50 - 14.30h  Arthur Wingfield: Cognitive Supports, Cognitive Constraints, and Effects of Effortful Listening in Hearing-Impaired Older Adults
14.30 - 15.10h  Antje Heinrich: Investigating how hearing ability and level of education modulate the contribution of cognition to speech intelligibility
Coffee break
15.40 - 16.20h  Xaver Koch & Esther Janse: Age, hearing loss and memory representations
16.20 - 17.00h  Falk Huettig: The effect of learning to read on the neural systems for vision and language: A longitudinal approach with illiterate participants
17.00 - 17.15h  Esther Janse: Closing remarks
2  Abstracts oral presentations
Thursday December 10th, 9:00 - 9:40h
Attentional sources of individual differences in language production:
Implications for models
Ardi Roelofs
Donders Institute for Brain, Cognition and Behaviour, Centre for Cognition, Radboud
University, Nijmegen, The Netherlands
If nonlinguistic abilities like attention are intrinsically involved in language
function, models of production that only address language processes are
incomplete. In my talk, I will present work from my lab that has examined
whether individual differences in attention are reflected in language
production by young adult speakers. Attention is an umbrella term covering
a number of abilities, such as alerting, orienting, and executive control, with
the latter including updating, inhibiting, and shifting. I will present
behavioral and electrophysiological evidence that individual differences in
these attentional abilities are reflected in language production performance.
These findings imply that models of production should include an account of
the role of attention.
Thursday December 10th, 9:40 - 10:20h
Mechanisms of language control: Insights from individual differences
Matt Goldrick
Northwestern University
(joint work with Tamar Gollan, University of California San Diego)
What allows bilingual speakers to so easily switch languages? Theories of
the regulation and control of production processes have appealed to two
types of mechanisms: language-specific mechanisms (such as grammatical
encoding) and domain-general cognitive control mechanisms (such as
response inhibition). Manipulation of the speaker's environment—for
example, varying the syntactic properties of sentences they are asked to
produce—allows us to probe the role of language-specific mechanisms. To
gain insight to the role of domain-general mechanisms, we can capitalize on
naturally occurring "manipulations"—variation in executive function abilities
across individuals. I will discuss work done in collaboration with Tamar
Gollan where we examine the contribution of grammatical properties and
executive function abilities to language control (using both continuous
variation within a population of young bilinguals and a between-group
comparison of younger and older bilinguals). Our results demonstrate
robust effects of grammatical encoding on language selection, and imply a
significant but limited role for executive control in bilingual language
production.
Thursday December 10th, 10:50 - 11:30h
Clear speech adaptations in younger and older adults
Valerie Hazan & Outi Tuomainen
Speech Hearing and Phonetic Sciences, UCL
In order to be effective communicators, talkers need to adapt their speaking
style to a range of communicative conditions. This skilled aspect of speech
production may be affected by reduced motor or cognitive control and
greater perception difficulties in older talkers. To examine the acoustic-phonetic adaptations made by children, young and older adult talkers in
adverse communicative conditions, three corpora (kidLUCID, LUCID and
elderLUCID) have been collected. These include recordings made while pairs
of talkers complete a problem-solving task (diapix), either when
communication is easy or when a communication barrier is placed on one or
both of the talkers.
In this talk, we focus on two sets of acoustic-phonetic analyses resulting
from these corpora. As regards within-group variability in speech
adaptations, we present analyses of individual differences in clear speech
strategies from the young-adult corpus (LUCID). Overall, the degree to which
talkers made adaptations to their speech was a stronger predictor of
communication efficiency than the acoustic-phonetic profile of their clear
speech; this suggests that a talker’s flexibility in adapting to different
communicative conditions may be a strong determinant of successful
communication. As regards between-group variability, we present
preliminary findings from the elderLUCID corpus. Although communication
in adverse conditions seems more effortful for older talkers, they tend to
make fewer global adaptations to their speech than younger talkers. For
example, a decrease in speaking rate is used as a key clear speech strategy by
children and young adults, but not by older adults as a group.
Thursday December 10th, 11:30 - 12:10h
The perceptual challenge of distorted speech associated with
neurologic disease or injury
Megan McAuliffe
University of Canterbury
Reduced speech intelligibility associated with dysarthria, a neurologically
based speech disorder, can have devastating effects on quality of life.
Traditionally, research in this area has focused on the nature of the speech
production deficit and approaches to rehabilitation. However, dysarthric
speech has recently been the focus of perceptual studies, including those
with the potential to inform theoretical models of speech perception.
Dysarthric speech presents a considerable perceptual challenge to the
listener, one that limits their ability to successfully employ typical processing
strategies (Mattys & Liss, 2008). The naturally degraded signal can be highly
variable, and represents a step away from tightly controlled laboratory
experimentation. Using distorted speech associated with Parkinson’s
disease, our lab recently undertook a series of experiments that investigated
younger and older listeners’ processing of dysarthric speech. Results
highlighted the role of vocabulary knowledge in speech processing for both
groups, with older listeners’ processing further mediated by hearing
thresholds. Lexical segmentation data showed that older listeners were less
reliant on stress cues to inform segmentation than the younger group. The
presence of larger vocabularies and reduced hearing acuity in the older
group may have led them to prioritize lexical cues to segmentation
(McAuliffe et al., 2013). This talk will present a brief overview of the
characteristics of naturally occurring degraded speech and discuss how the
use of such speech in experimentation may inform models of perception—
focusing on recent results from our lab to highlight the role of individual
differences in speech processing.
Thursday December 10th, 13:10 - 13:50h
How sensitivity to co-occurrence frequencies aids language processing
across the adult life span
Thordis Neger1 & Cornelia Moers2
1 Centre for Language Studies, Radboud University Nijmegen
2 Max Planck Institute for Psycholinguistics, Nijmegen
Picking up on the frequencies with which sensory events co-occur has been
considered one of the core mechanisms of the brain to discover regularities
in the environment and, consequently, to form predictions about it.
Language users' sensitivity to complex patterns of sequentially presented
units such as phonemes and syllables has been shown to play a key role in
language processing.
In this talk we will present two lines of research addressing the role of co-occurrence frequencies in language processing across the adult life span. In
the first part, we focus on the concept of statistical learning which describes
the ability to implicitly pick up on underlying regularities in an input. We
show that younger adults' statistical learning ability in a visual, nonlinguistic
learning task can be used to predict how well they adapt to unfamiliar
speech input. This suggests that adaptation to novel speech conditions and
statistical learning share mechanisms of implicit regularity detection which
are neither modality-specific nor specific for language processing. Results of
a follow-up experiment indicate that older adults' ability to detect temporal
regularities is impaired in the visual but preserved in the auditory modality.
In the second part, we present a study in which we investigated how the
distributional statistics of specific words and word combinations (noun-verb)
influenced younger and older readers’ eye movements during sentence
reading. Both word frequency and transitional probability (TP) affected
the length of fixations and skipping/regression patterns in younger and older
adults. Furthermore, word frequency effects increased with age, but TP
effects were equal in size for the two age groups. Hence, the use of local
dependencies between neighboring words to increase reading efficiency
may plateau in later adulthood.
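For illustration only (not part of the original abstract): both statistics discussed above reduce to simple corpus counts. The minimal Python sketch below uses an invented toy corpus and assumes forward transitional probability defined as P(w2 | w1) = count(w1 w2) / count(w1); the study's actual corpus and estimation details may differ.

    # Hedged sketch: word frequency and forward transitional probability (TP)
    # from raw unigram/bigram counts; the toy corpus below is invented.
    from collections import Counter

    corpus = "the key opens the door and the key fits the lock".split()
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))

    def transitional_probability(w1, w2):
        """P(w2 | w1) = count(w1 w2) / count(w1)."""
        return bigrams[(w1, w2)] / unigrams[w1]

    print(unigrams["key"])                         # word frequency: 2
    print(transitional_probability("the", "key"))  # 2 of 4 'the' bigrams: 0.5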
Thursday December 10th, 13:50 - 14:30h
Meaning in reading:
Convergent evidence of systematic individual differences
Anna Woollams
Neuroscience and Aphasia Research Unit (NARU), University of Manchester
Although understanding word meaning is not necessary for reading aloud,
there have been reports of semantic influences on skilled reading. These
effects have, however, been subtle and unreliable, as would be expected if
there were individual differences along this dimension. In this talk, I will
present a theoretical framework derived from connectionist
neuropsychology that provides predictions concerning normal individual
differences in degree of semantic reliance when reading aloud. Experimental
psycholinguistic data are used to validate an empirical measure of individual
differences in degree of semantic reliance. Data from structural and
functional neuroimaging and neurostimulation are then used to capture
individual differences along this dimension in skilled readers. This work has
implications for computational models of normal and disordered reading.
Thursday December 10th, 15:00 - 15:40h
The Ease of Language Understanding model: Some recent data
Jerker Rönnberg
Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping
University, Sweden
Department of Behavioural Sciences and Learning, Linköping University, Sweden
The Ease of Language Understanding model (ELU, Rönnberg, 2003;
Rönnberg et al., 2008; Rönnberg et al., 2010; Rönnberg et al., 2013) predicts
that speech understanding in adverse, mismatching noise conditions is
dependent on explicit processing resources such as working memory
capacity (WMC). This presentation will focus on some new features and data
related to the new ELU model (Rönnberg et al., 2013). The model is now
conceptualized as a meaning prediction system that depends on
phonological and semantic interactions in rapid implicit and slower explicit
processing mechanisms that both depend on WMC, albeit in different ways.
Here I focus on findings that address (a) the relationship between WMC and
early attention processes in listening to speech, (b) the relations between
types of signal processing and WMC, (c) some factors that modulate the
relationship between WMC and speech understanding in noise, and (d) the
importance of WMC for episodic long-term memory.
Friday December 11th, 9:00 - 9:40h
Individual differences in speech reduction in informal conversations
Mirjam Ernestus
Radboud University Nijmegen & Max Planck Institute for Psycholinguistics
In spontaneous speech, native speakers pronounce many words with
weakened segments or with fewer segments compared to the words'
citation forms. In casual American English, for instance, the word yesterday
may sound like yeshay. I will present two studies showing evidence for
individual differences in how speakers from the same social-economic group
reduce words. Both studies are based on a corpus in which ten pairs of
highly educated, male, native speakers of Dutch from the Western part of
the Netherlands casually conversed with each other for 90 minutes. The
first study is a detailed acoustic study of 159 tokens of the word eigenlijk
/ɛɪxələk/ 'actually'. We show that the speakers differ in the number of
syllables (one, two or three) with which they pronounce the word.
Moreover, speakers clearly differ in how they pronounce the /xәl/ sequence.
The second study is based on an automatically generated phonetic
transcription of the corpus. We investigated, using the Balanced Winnow
Classifier, whether identification of speakers improves if we take into
account not only which words and word combinations the speakers produce
but also how they reduce these words. A ten-fold cross-validation shows that
this is indeed the case (an improvement of 4.5 percentage points). Speakers differ
especially in their pronunciation of highly frequent semantically weak
words, in their reduction of full vowels to schwa and in their frequencies of
consonant deletion. In conclusion, our studies show that even speakers from
a socially homogeneous group may differ in their degree of speech
reduction.
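As an aside (not from the abstract): the evaluation logic can be sketched in a few lines. scikit-learn ships no Balanced Winnow, so the error-driven Perceptron stands in below, and the corpus is replaced by invented toy data in which each speaker prefers different reduced variants; only the ten-fold cross-validation setup mirrors the abstract.

    # Hedged sketch of speaker identification with ten-fold cross-validation;
    # Perceptron is a stand-in for Balanced Winnow, and all data are invented.
    import random
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import Perceptron
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    random.seed(1)
    # Each (hypothetical) speaker prefers different variants of 'eigenlijk'.
    VARIANTS = {"sp01": ["eigenlijk", "eik"],
                "sp02": ["eigelek", "eik"],
                "sp03": ["eigenlijk", "eigelek"]}
    docs, labels = [], []
    for speaker, variants in VARIANTS.items():
        for _ in range(20):  # twenty short "conversation chunks" per speaker
            docs.append(" ".join(random.choice(variants) for _ in range(30)))
            labels.append(speaker)

    model = make_pipeline(CountVectorizer(), Perceptron())
    print(cross_val_score(model, docs, labels, cv=10).mean())  # ten-fold CV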
Friday December 11th, 9:40 - 10:20h
Eye gaze during recognition of and adaptation to
audiovisual distorted speech
Patti Adank
University College London
Listeners use visual speech cues to improve speech recognition under
adverse listening conditions. However, it is not known how listeners use
visual cues and we examined this issue in three studies. In study 1, we
tested whether perceptual adaptation to accented speech is facilitated if
listeners can see a speaker’s facial and mouth movements. Participants
listened to sentences in a novel accent and underwent a period of training
with audiovisual or audio-only speech, in quiet or in noise. A control group
underwent training with visual-only cues. No differences between groups
were observed. In Study 2, we addressed questions arising from Study 1 by
using a different accent, speaker, and design. Here, participants listened to
sentences in a non-native accent with audiovisual or audio-only cues,
without off-line training, while their eye gaze was recorded. Recognition
accuracy was better for audiovisual than for audio-only stimuli; however, no
difference in adaptation was observed between the two modalities. In study
3, we examined when and how listeners directed their eye gaze towards a
speaker’s mouth, to determine when listeners gain and use visual speech
cues during 1) recognition of individual noise-vocoded sentences, and 2)
adaptation to noise-vocoded speech. We additionally investigated whether
measurements of eye gaze towards a speaker’s mouth are related to
successful recognition of noise-vocoded speech. Longer fixations on the
mouth related to successful recognition of distorted speech. Also, listeners’
use of visual speech cues varied over time, according to their needs; but
such changes could also reflect variation in cognitive effort.
Friday December 11th, 10:50 - 11:30h
Individual variation in speech perception:
Qualitative differences in learning, not in recognition
James McQueen
Radboud University Nijmegen
There are quantitative differences among adult listeners in their perceptual
abilities (e.g. in discrimination between speech sounds, in speed of lexical
access). There are also quantitative differences among adults in their ability
to learn about speech (e.g. in acquisition of second-language sounds). A
theoretically interesting question is whether there are also qualitative
differences in these processes. Data will be presented from studies on
adults learning about non-native speech which suggest that quantitative
differences in perceptual ability can lead to qualitative differences in
learning. Changes in the nature of the acquisition process may be driven by
the quantitative perceptual differences. But there are no grounds to
suppose that there are qualitative differences in recognition. In spite of the
quantitative differences among listeners, the computational problems in
speech recognition remain the same for all of them.
Friday December 11th, 11:30 - 12:10h
The role of experience in understanding individual differences
Florian Jaeger
Brain and Cognitive Sciences, Computer Science, and Linguistics, University of
Rochester
The occasional stroke of genius aside, theory building tends to proceed from
simple to more complex models, to the extent that the latter are required to
explain the data. For the cognitive sciences, perhaps the most pervasive
effects on processing, language processing included, originate in prior
experience: implicit expectations and predictions based on the statistics of
previously experienced input seem to strongly shape how subsequent input
is processed (for a recent review, among many, see Kuperberg & Jaeger, in
press). Any reasonable theory of language processing should be able to
account for these effects and the need for novel theories should be assessed
(also) by asking whether the null model (based on effects of previous
experience) can already account for the effects this novel theory seeks to
account for. However, this endeavor is complicated by the fact that we still
know relatively little about precisely how previous experience incrementally
shapes processing. I give an overview of recent studies from my lab that
aim to contribute to this question. Specifically, I report on work that seeks to
understand a) the role of experience, compared to other factors, in
individual differences, b) how previous experience is represented and how
these representations can change.
Friday December 11th, 13:10 - 13:50h
Individual differences in CI speech perception outcomes and related
cognitive mechanisms
Deniz Başkent & Terrin Tamati
University of Groningen, University Medical Center Groningen, Department of
Otorhinolaryngology/Head and Neck Surgery
Cochlear implants (CIs) are auditory prosthetic devices for deaf people.
Although CIs have been very successful as a medical treatment for profound
deafness, the speech signal transmitted by a CI is less detailed in spectro-temporal information, due to device- and physiology-related factors, as well
as the nature of electric stimulation. While many CI users achieve a good
level of speech understanding in favorable listening conditions, performance
across individual CI users is still highly variable. Additionally, understanding
speech further degraded by other factors, for example, due to interfering
sounds or speech variability, remains a challenge. To approach these issues,
we have looked beyond the initial sensory input from the device and have
explored what CI users are able to do with this degraded sensory
information. We will discuss top-down restoration of speech, where
perception of degraded speech is enhanced using acoustic speech cues,
syntactic and semantic constraints, as well as linguistic knowledge, and also
the perception of difficult real-life forms of speech. Both situations, which
tap into effective use of cognitive and linguistic mechanisms, capture the
high variability and the resulting individual differences in speech perception
compensation in CI users. Together, our experiments demonstrate that what
the CI users are able to do with the degraded information received through
a CI is important for speech and language outcomes. Understanding the
sources of individual differences is crucial for identifying CI candidates,
assessing outcomes in CI users, and developing new device features or
training programs to improve speech perception.
Friday December 11th, 13:50 - 14:30h
Cognitive Supports, Cognitive Constraints, and Effects of Effortful Listening
in Hearing-Impaired Older Adults
Art Wingfield
Volen National Center for Complex Systems
Brandeis University, Waltham, MA, USA
The biological changes associated with adult aging bring with them cognitive
and sensory changes that are increasingly well understood, many of which
would be expected to impact the comprehension of spoken language. To
counter these effects is the general maintenance of linguistic knowledge and
the procedural rules for its implementation. For this reason, the ability to
comprehend spoken language typically reflects relative stability, or at most a
“graceful decline,” rather than an abrupt or catastrophic failure in adult
aging. I will present data illustrative of the two sides of the cognitive coin as
they relate to comprehension and recall of spoken materials. On the one
hand, I will describe research showing that while older adults with mild-to-moderate hearing loss require more of a word onset or a more favorable
signal-to-noise ratio to identify words in isolation, this difference is
ameliorated when recognition can be supported by linguistic context, or in
the case of rapid speech, by increasing the amount of available processing
time. The other side of the cognitive coin is represented by declines in
working memory, processing speed, and inhibitory efficiency, which can
interfere with speech recognition and create special difficulty for
comprehending sentences whose syntax places heavy demands on working
memory resources, as well as by the negative effects of perceptual effort on
comprehension and recall of speech materials. I will end my presentation by
suggesting the importance of individual differences in self-efficacy and
control beliefs on listeners’ approach to language processing.
Friday December 11th, 14:30 - 15:10h
Investigating how hearing ability and level of education modulate the
contribution of cognition to speech intelligibility
Antje Heinrich
MRC Institute of Hearing Research, Nottingham, UK
This talk discusses language processing in the context of speech-in-noise
(SiN) perception. Understanding (and predicting) SiN perception on an
individual basis is notoriously difficult, most likely because performance
draws on a wide range of hearing-related and cognitive abilities. What these
abilities are is still hotly debated, with the only consensus being that both
hearing sensitivity and working memory play a role. The question is further
complicated by the fact that the contributions of particular functions
probably differ with the listening situation. This project investigated in a
large group (N=50) of older listeners (age: 60-86, mean: 70 years) how
various hearing and cognitive functions contributed to the perception of
single words and semantically low- and high-predictable sentences
presented in speech-modulated noise at two SNRs. The talk will specifically
address the question of how various measures of working memory, inhibition
and general linguistic ability relate to speech perception in these situations
and how this general pattern is modified by a listener’s hearing sensitivity
and level of education. In many cases, poorer hearing sensitivity was
associated with a heavier reliance on cognitive functions while years of
education, possibly a measure of cognitive reserve, showed a more
inconsistent pattern. Sometimes these modulations were specific to a
particular listening situation, with the type of target speech (words, LP/HP
sentences) playing a more important role than SNR. Any model not taking
these complex relationships into account will fail to fully explain language
processing.
This research was supported by BBSRC grant BB/K021508/1. Special
acknowledgment: Sarah Knight.
Friday December 11th, 15:40 - 16:20h
Age, hearing loss and memory representations
Xaver Koch & Esther Janse
Centre for Language Studies, Radboud University Nijmegen
Aging and the hearing loss that often comes with aging have been found to
make speech processing more demanding. This presentation focuses on
possible consequences of age-related hearing loss for representations
stored in memory. We will first present the results of a study on the effect of
age-related hearing loss on sibilant acoustics (i.e., on the sibilant contrast in
'sue' vs. 'shoe'). Younger, middle-aged and older adults read aloud words
starting with the sibilants [s] or [ʃ]. Acoustic analysis showed no general age
effect on sibilant realisation. However, even relatively mild forms of hearing
loss affected the realisation of [s]. This suggests that hearing loss, but not
age, affects the memory representation of sounds. Secondly, we will present
a visual nonword recall study, set up to test the hypothesis that age-related
hearing loss affects the quality or accessibility of sublexical representations
in long-term memory. Forty-four older adults, varying in degree of high-frequency hearing loss, saw multisyllabic nonwords varying in phonotactic
frequency (i.e., the phoneme-co-occurrence statistics of the language). They
saw the nonwords for 5 seconds and were prompted to produce them from
memory after another 3 seconds. As expected, response accuracy was
influenced by phonotactic frequency of the nonword. Crucially, response
accuracy was also higher if the participant had better hearing, supporting
the claim that hearing loss degrades or weakens sublexical representations
in long-term memory. These results emphasize the broad consequences
age-related hearing loss has on language use beyond its immediate effect on
speech audibility.
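For illustration (an assumption, not the authors' stated metric): "phoneme co-occurrence statistics" of this kind are often operationalized as average biphone probability, which a few lines of Python can compute over a toy lexicon.

    # Hedged sketch: average biphone probability as a phonotactic-frequency
    # measure; the five-word lexicon and the nonwords are invented.
    from collections import Counter

    lexicon = ["kat", "pet", "tak", "pak", "kap"]  # toy phoneme strings
    biphones = Counter(b for w in lexicon for b in zip(w, w[1:]))
    total = sum(biphones.values())

    def avg_biphone_prob(nonword):
        """Mean lexicon probability of the nonword's biphones."""
        pairs = list(zip(nonword, nonword[1:]))
        return sum(biphones[p] / total for p in pairs) / len(pairs)

    print(avg_biphone_prob("kata"))  # frequent phonotactics -> higher score
    print(avg_biphone_prob("tpak"))  # unattested 'tp' onset -> lower score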
Friday December 11th, 16:20 - 17:00h
The effect of learning to read on the neural systems for vision and
language: A longitudinal approach with illiterate participants
Falk Huettig
Max Planck Institute for Psycholinguistics
How do human cultural inventions such as reading result in neural reorganization? In this first longitudinal study with young, completely illiterate
adult participants, we measured brain responses to speech, text, and other
categories of visual stimuli with fMRI before and after a group of illiterate
participants in India completed a literacy training program in which they
learned to read and write Devanagari script. A literate and an illiterate no-training control group were matched to the training group in terms of
socioeconomic background and were recruited from the same societal
community in two villages near Lucknow, India. This design permitted
investigating effects of literacy cross-sectionally across groups before
training (N=86) as well as longitudinally (training group N=25). The two
analysis approaches yielded converging results: Literacy was associated with
enhanced left-lateralized responses to written text along the ventral stream,
dorsal stream, (pre-) motor systems and thalamus. These effects
corroborate and extend previous findings from cross-sectional studies.
However, effects of literacy were specific to written text and to false fonts.
Contrary to previous research, we found no direct evidence of literacy
affecting the processing of other types of visual stimuli such as faces, tools,
houses, and checkerboards (cf. Dehaene et al., 2010, Science). Furthermore,
we did not find any evidence for effects of literacy on responses in the
auditory cortex in our Hindi-speaking participants. The latter result in
particular raises questions about the extent to which phonological
representations in the auditory cortex are altered by literacy acquisition or
recruited online during reading.
3  Abstracts poster presentations
Poster overview

01  Amaia Carrion-Castillo: Evaluating the genetic risk for dyslexia in multi-generation families
02  Audrey Bürki: Individual differences in reading times modulate sensitivity to distractors in the picture-word interference paradigm
03  Carolien van den Hazelkamp: Differences between individuals in relative dominance of semantic and syntactic processing streams
04  Eline van Knijff: The role of language, hearing and cognition in speech in noise perception in elderly adults
05  Elke Huysmans: The influence of congenital hearing impairment on language production and language reception abilities in adults
06  Evelien Heyselaar: Syntactic operations rely on implicit memory: Evidence from patients with amnesia
07  Jana Reifegerste: Illusory licensing effects in young vs. older adults
08  Jorrig Vogels: Cognitive load and individual differences in multitasking abilities
09  Katja Münster: Delayed integration of emotional cues into situated language processing in older age
10  Louise Schubotz: The cocktail party effect revisited: Aging, co-speech gestures, and speech in noise
11  Meghan Clayards: Individual differences in cue weights are correlated across contrasts
12  Nathalie Giroud: Comparing the impact of hearing aid algorithms for neural auditory learning
13  Nicola Savill: An individual differences approach to semantic and phonological effects in reading, repetition and verbal short-term memory
14  Nicole Ayasse: Eye Movements as a Measure of On-Line Spoken Sentence Comprehension: Are older adults truly slower?
15  Nivja de Jong: Examining the relation between articulatory skills and speaking fluency
16  Rebecca Carroll: The role of the mental lexicon for speech recognition in different acoustic settings
17  Rory Turnbull: Phonetic reduction in production and perception: Insights from variation in theory of mind
18  Saskia Lensink: Parts or wholes: On the individual differences in the production of multi-word units in Dutch
19  Yingying Tan: Verbal WM Capacities in Sentence Comprehension: Evidence from Aphasia
Poster 01
Evaluating the genetic risk for dyslexia in multi-generation families
Amaia Carrion-Castillo, Clyde Francks, Simon E. Fisher
Max Planck Institute for Psycholinguistics, Nijmegen
Dyslexia or reading disability is a neurodevelopmental condition with a
relatively high prevalence in the population (5-10% depending on diagnostic
criteria). Typically, reading assessment and diagnosis is focused on children,
who are categorized as dyslexic if they have reading difficulties that cannot
be explained by other factors such as low IQ or other neurological disorders.
Although dyslexia can become milder in adulthood, people often retain
lifelong difficulties with reading that may affect training, employment and
life choices.
Dyslexia usually has a complex and multifactorial background that includes
genetic contributions. Some unusual families may have relatively rare forms
of the disorder that are caused by single genetic mutations with strong
effects on reading ability.
Here we have focused on extended families with multiple affected
members, which may have these kinds of genetic subforms of the disorder.
Word and nonword reading fluency measures have been taken for all family
members available. We have evaluated these continuous traits across the
generations in order to best discriminate affected from unaffected
members. We also propose a genetic strategy focused on sequencing the
genomes of key members in order to identify possible risk variants. Genes
that are found through this approach may be particularly crucial for the
development of normal reading and language skills.
Poster 02
Individual differences in reading times modulate sensitivity to distractors
in the picture-word interference paradigm
Audrey Bürki
University of Geneva
Numerous experiments in the language production literature make use of
the picture-word interference paradigm; participants are asked to name
pictures while ignoring a written distractor superimposed on these pictures.
When the target and distractor word share phonemic content, response
latencies (time lag between the presentation of the picture and the onset of
the vocal response) are shorter than when there is no phonological overlap
between the two words. Crucially, the phonological facilitation effect
depends on the time interval between the presentation of the picture and
that of the distractor (e.g., Meyer & Schriefers, 1991). In this study, I tested
the hypothesis that the phonological facilitation effect depends on how fast
the participants process the distractor word. I presented participants with
pictures and superimposed written words. Pictures and distractors appeared
on the screen simultaneously. The target and distractor words either shared
the first syllable, the second syllable, or were unrelated. Participants were
asked to name each picture while ignoring the distractor word and to read
each written word aloud while ignoring the picture. Response latencies were
collected in both tasks. Results revealed that response latencies in the
reading task modulated the phonological facilitation effect in the naming
task. These results show that individual differences are crucial in
understanding the outcome of psycholinguistic experiments and suggest
that previous findings in picture-word interference studies could partly be
accounted for by individual differences in the processing of written words.
Poster 03
Differences between individuals in relative dominance of semantic and
syntactic processing streams
Carolien van den Hazelkamp
Utrecht University
Individual differences in the process of understanding language have been
reported repeatedly, but most models of language processing do not
account for them. We take as a starting point multi-stream models of
language processing that distinguish streams operating on the basis of
information from semantic memory or based on combinatorial (morphosyntactic) features. We hypothesize that the relative dominance of these
streams differs between individuals, leading to substantial individual
differences. We predict that individuals with a dominant combinatorial
stream show faster response times and/or higher accuracy in grammatical
judgments and are less affected by semantic-thematic incoherence than
people with a dominant semantic-memory based stream. In a self-paced
reading experiment we disrupted semantic memory-based processing, using
stimuli without meaningful content (syntactic prose), and the results show
that some subjects slowed down when they were confronted with syntactic
prose, but other subjects were actually faster when processing syntactic
prose than normal prose. These data are in line with the hypothesis, as they
show that disrupting one processing stream does not affect all participants
in the same way. The results showed a systematic relation with performance
on grammatical violation detection and a speed-accuracy grammaticality
judgment experiment: the participants who slowed down for syntactic prose
were faster in the violation detection task, but less accurate in
grammaticality judgments in a speed-accuracy tradeoff paradigm. This
pattern only partly fits our predictions, which is why further research is now
being carried out.
Poster 04
The role of language, hearing and cognition in speech in noise perception
in elderly adults
Eline van Knijff, Martine Coene & Paul Govaerts
Vrije Universiteit Amsterdam & The Eargroup, Antwerp
Background. Previous work indicates that speech perception in elderly
adults is negatively influenced by hearing loss, a decline in cognitive abilities
such as inhibitory control, the presence of background noise, and syntactic
complexity of speech stimuli. However, it remains unclear if and how these
factors interact.
Objective and Hypothesis. We intend to investigate the interacting influence
of the above mentioned factors on speech perception in elderly adults. We
hypothesize that in elderly listeners speech perception accuracy is
negatively affected by reduced inhibitory control and by syntactically
complex stimuli. Both factors combined further compromise speech
perception in elderly listeners with hearing loss.
Study Design. The participants consisted of elderly adults with (N=15) and
without (N=11) hearing loss and a control group (N=24) of (younger) hearing
adults. Inhibitory control was measured through a Simon task. Speech-in-noise perception accuracy was measured through a word- and sentence
repetition task, in stationary and fluctuating speech noise.
Results. Mixed ANOVAs revealed main effects of group, inhibitory control,
noise type and syntactic complexity of sentences. Reduced inhibitory control
impaired speech perception only in elderly adults with hearing loss, while
high syntactic complexity worsened perception for all elderly listeners. For
stimuli in simple sentences, perception improved in fluctuating noise.
Conclusions. Linguistic factors such as the syntactic complexity of the speech
stimulus and cognitive factors such as inhibitory control both prove to be
essential in speech perception in elderly listeners. Further research needs to
clarify the relationship between syntactic complexity and perceptual
advantages of noise fluctuations, and cognitive training.
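A hedged sketch of the analysis type named above (a mixed ANOVA with a between-subjects group factor and a within-subjects noise-type factor), run on simulated data via the pingouin package; the actual study had additional factors, and all numbers below besides the group sizes are invented.

    # Illustrative mixed ANOVA on simulated accuracy data; group sizes follow
    # the abstract (15/11/24), everything else is invented.
    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(2)
    rows = []
    for i in range(50):
        group = "HL" if i < 15 else ("NH-old" if i < 26 else "young")
        for noise in ("stationary", "fluctuating"):
            base = {"HL": 0.55, "NH-old": 0.70, "young": 0.85}[group]
            bonus = 0.05 if noise == "fluctuating" else 0.0
            rows.append({"id": i, "group": group, "noise": noise,
                         "accuracy": base + bonus + rng.normal(0, 0.05)})
    df = pd.DataFrame(rows)

    # Between-subjects factor: group; within-subjects factor: noise type.
    print(pg.mixed_anova(data=df, dv="accuracy", within="noise",
                         subject="id", between="group"))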
Poster 05
The influence of congenital hearing impairment on language production
and language reception abilities in adults
Elke Huysmans, S. Theo Goverts
Dept. of Otolaryngology‐Head and Neck Surgery (section Ear & Hearing), EMGO+
Institute for Health and Care Research, and Language and Hearing Center
Amsterdam; VU University Medical Center, Amsterdam, the Netherlands
People with congenital hearing impairment (CHI) acquire spoken language
with limitations in auditory perception, even when using hearing aids or a
cochlear implant. We tested the hypothesis that moderate to severe CHI
affects language acquisition in such a way that weaknesses in language
production persist into adulthood. A second hypothesis was that CHI‐
induced linguistic weaknesses also impede the use of linguistic knowledge in
language reception (top‐down), in addition to the bottom‐up impediment
resulting from current hearing limitations. A top‐down, linguistic
impediment of CHI on language reception would explain part of the language
processing differences between hearing-impaired people with similar
audiograms.
In two successive studies, we examined the long‐term effects of moderate
to severe CHI on linguistic abilities. Language production and reception were
assessed in the visual and auditory modality to identify modality
independent and dependent consequences of CHI. Language production was
studied by analyzing morphosyntactic correctness of spoken and written
language samples. Language reception was assessed with tasks for sentence
recognition, either while reading or listening. To examine the use of specific
morphosyntactic knowledge in language processing, the sensitivity for
morphosyntactic distortions in sentences was additionally assessed. For all
tasks, we compared performance of normal hearing adults (NH), adults with
congenital hearing impairment (CHI), and adults with acquired hearing
impairment (AHI). This latter group was included to disentangle the
consequences of current hearing limitations on auditory task performance
from the consequences of hearing limitations during the era of language
acquisition. The poster presents the research method and group findings of
the study.
Poster 06
Syntactic operations rely on implicit memory: Evidence from patients with
amnesia.
Evelien Heyselaar1, Katrien Segaert2, Arie J. Wester†3, Roy P. C. Kessels3,4,5, Peter Hagoort1,4
1 Neurobiology of Language Department, Max Planck Institute for Psycholinguistics
2 School of Psychology, University of Birmingham
3 Vincent van Gogh Institute for Psychiatry, Centre of Excellence for Korsakoff and Alcohol-Related Cognitive Disorders, Venray
4 Donders Institute for Brain, Cognition and Behaviour
5 Department of Medical Psychology, Radboud University Medical Center
† Deceased July 9th, 2015.
Syntactic priming, the phenomenon in which participants adopt the
linguistic behaviour of their partner, is widely used in psycholinguistics to
investigate syntactic operations. Although the phenomenon of syntactic
priming is well documented, the memory system that supports the retention
of this syntactic information long enough to influence future utterances is
not as widely investigated. We aimed to shed light on this issue by testing
patients with Korsakoff’s amnesia on an active-passive syntactic priming
task and comparing their performance to that of controls matched in age,
education and premorbid intelligence. Patients with Korsakoff’s syndrome
display deficits in all subdomains of explicit memory, yet their implicit
memory remains intact, making them an ideal patient group to determine
which memory system supports syntactic priming. In line with the
hypothesis that syntactic priming relies on procedural memory, the patient
group showed strong priming tendencies (12.6%). Unexpectedly, our
healthy control group did not show a priming tendency (1.7%), although this
task has been used successfully with the undergraduate student population.
We discuss the results in relation to age and compensatory mechanisms.
Poster 07
Illusory licensing effects in young vs. older adults
Reifegerste, J., & Felser, C.
Potsdam Research Institute for Multilingualism, University of Potsdam, Germany
During both sentence comprehension and production, the presence of an
'intrusive' licenser such as the plural-marked noun cabinets in (i) below
often gives rise to false-positive acceptability judgements or ungrammatical
production errors (Phillips et al., 2011).
(i) The key to the cabinets was/*were rusty.
To further test claims to the effect that grammatical processing mechanisms
remain largely unaffected by ageing (Shafto & Tyler, 2014), we investigated
older (OA, MAge = 59) and younger (YA, MAge = 24) German-speaking adults'
susceptibility to different kinds of illusory licenser using a speeded
acceptability judgement task.
Examining subject-verb agreement computation and pronoun resolution, we
found that OAs took longer than YAs to judge both types of stimulus items.
OAs also provided more false-positive judgements for incorrect agreement
dependencies (ii) compared to YAs. Furthermore, we found that OAs (but
not YAs) were slowed down by an illicit but gender-matching antecedent for
a pronoun, like the masculine noun Opa in (iii), if there was no licenser
available.
(ii) *Die Fahrt in die Berge machten viel Spaß.
     'The trip to the mountains were a lot of fun.'
(iii) Susi hofft, dass Opa ihn besucht.
     'Susi hopes that Grandpa will visit him.'
This suggests that OAs have more difficulty than YAs blocking illusory
licensers from interfering with the computation of syntactically mediated
dependencies, which might be accounted for by OAs' reduced executive-control abilities (Goral et al., 2011).
Poster 08
Cognitive load and individual differences in multitasking abilities
Jorrig Vogels, Vera Demberg, Jutta Kray
Saarland University
Performing multiple tasks simultaneously, such as listening to someone
while driving, uses more cognitive resources than performing these tasks
separately. In the case of language comprehension, individuals differ in the
capacity of the channel through which incoming information is processed
(Rabbitt, 1968). In general, high-density information (e.g., an unexpected
word) will place a larger burden on the channel than low-density
information. The risk of the channel being overloaded is especially high
when its capacity is reduced by performing a secondary task or in individuals
with lower cognitive capacity. Previous work on multitasking abilities in
coordinating language comprehension and driving applied the Index of
Cognitive Activity (ICA), which measures rapid increases in pupil size, as an
indicator of cognitive load (e.g., Demberg et al., 2013). The ICA was found to
increase when information density in the linguistic input was higher and
when driving was more difficult. However, contrary to expectation, the
overall ICA level decreased in a driving and language comprehension dual
task compared to single-task driving. To replicate this finding and to further
investigate how the ICA reacts to cognitive load, we systematically test
additional dual-task combinations, combining both language comprehension
and driving with a memory task. In addition, we will assess individual
differences in working memory to explore whether the increase in the ICA in
various dual-task situations is restricted to individuals with lower working
memory capacities. We present data from an experiment with young adults,
which will later be supplemented by data from elderly adults.
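For orientation only: the ICA itself is a proprietary wavelet-based measure, so the toy detector below is merely a crude stand-in for its core idea, counting rapid increases in pupil size per second on simulated data; none of the thresholds or signals come from the abstract.

    # Crude stand-in for the idea behind the ICA; NOT the actual algorithm.
    import numpy as np

    def rapid_dilation_rate(pupil_mm, fs=60.0, thresh=0.02):
        """Upward pupil-size jumps above `thresh` (mm/sample), per second."""
        events = np.sum(np.diff(pupil_mm) > thresh)
        return events * fs / len(pupil_mm)

    rng = np.random.default_rng(3)
    trace = 3.0 + np.cumsum(rng.normal(0, 0.01, size=600))  # 10 s at 60 Hz
    print(f"{rapid_dilation_rate(trace):.2f} dilation events/s")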
Poster 09
Delayed integration of emotional cues into situated
language processing in older age
Katja Münster
University of Bielefeld
Two visual-world eye-tracking studies investigated the integration of natural
emotional facial expressions and depicted actions into real‐time sentence
processing in younger (18-30 years) and older (60‐90 years) adults.
Participants were primed with a positive emotional facial expression (vs.
negative expression). Following this they inspected a target scene depicting
two potential agents either performing or not performing an action towards
a patient. This scene was accompanied by a positively‐valenced German
Object‐Verb‐Adverb‐Subject sentence describing the scene. Anticipatory
eye-movements to the agent of the action, i.e., the sentential subject in NP2
position (vs. distractor agent) were measured in order to investigate if
positive emotional facial expressions and depicted actions can facilitate
thematic role assignment in older and younger adults. Results in both age
groups showed that depicted actions (vs. no actions) can reliably be used to
anticipate the target agent. However, older adults do so to a lesser extent.
Moreover, even though both age groups can also integrate the positive
facial expression, older adults seem to be able to do so only after the target
agent (NP2 of the sentence) has already been mentioned, whereas effects of
the positive prime emerged already during the adverb region for the
younger adults. This on‐line data is supported by accuracy results. Both age
groups answered comprehension questions for who-is-doing-what-to-whom
more accurately when an action was depicted (vs. wasn’t). However, only
younger adults also made use of the emotional cue when answering the
comprehension questions.
Poster 10
The cocktail party effect revisited:
Aging, co-speech gestures, and speech in noise
Louise Schubotz1, Linda Drijvers2, Judith Holler1, Asli Özyürek2
1 Max Planck Institute for Psycholinguistics
2 Radboud University Nijmegen
Understanding speech in noisy surroundings is notoriously difficult,
especially for older adults. Previous research suggests that visual
information such as articulatory lip movements can improve comprehension
in these situations. Here, we investigated whether older and younger adults’
comprehension of speech presented in multi-talker babble additionally
benefits from the presence of iconic co-speech gestures, hand movements
that convey semantic information related to the speech content. Older and
younger adults saw videos of an actress uttering an action verb and
subsequently were asked to select the target from a choice of four verbs in a
cued recall task. Videos were presented in three visual conditions (mouth
blocked/audio only, visible lip movements, visible lip movements + co-speech gesture) and four audio conditions (clear speech, SNR -18, SNR -24,
no audio).
Response accuracies showed no age-related differences in trials
where either only auditory or only visual information was presented.
However, older adults performed significantly worse than younger adults in
trials with combined visual and auditory input. Yet, both age groups
benefitted equally from the presence of gestures in addition to visible lip
movements as compared to visible lip movements without gestures; this
benefit was significantly larger at the worst noise level. These results
suggest an overall age-related deficit in comprehending multi-modal
language in noisy surroundings. In spite of that, iconic co-speech gestures
provide reliable semantic cues beyond articulatory lip movements alone,
which both older and younger adults can make use of when disambiguating
speech in noise, particularly as the level of background noise increases.
Poster 11
Individual differences in cue weights are correlated across contrasts
Meghan Clayards, Sarah Colby
McGill University
Cue weights are thought to underlie second language learning
(Chandrasekaran et al., 2010) and cochlear implant use success (Moberly et
al., 2014). VOT and f0 as cues to initial voicing in English are positively
correlated across individuals (Shultz et al., 2012). We ask whether this is
specific to VOT and f0 or a more general property of cue weighting across
individuals. Secondly, across contrasts do the same listeners have stronger
cue weights and if so, does this relate to general speech perception abilities,
such as hearing in noise?
27 listeners performed a cue-weighting task (2AFC) for two sets of stimuli
that varied orthogonally in two dimensions (‘sock’‐‘shock’: sibilant spectral
quality vs. vowel formant transitions, ‘bet’‐‘bat’: vowel formant frequency
vs. duration) and provided AQ subset and Hearing in Noise (HINT) scores.
Cue weights for each participant were highly positively correlated within
continua (sock-shock: R2 = 0.82, p < 0.001; bet-bat: R2 = 0.70, p < 0.001). Cue
weights across continua were also positively correlated for the stronger cue
for each continuum (sibilant spectral quality and vowel formant frequency,
R2 = 0.41, p = 0.036) but not for the weaker cues (vowel formant transition
and vowel duration, R2 = 0.20, p = 0.31). There were trends for a weak positive
correlation between vowel duration and HINT scores (R2 = 0.38, p = 0.08),
and between vowel formant frequency and AQ scores (R2 = -.42, p = 0.06).
We find that cue weights are positively correlated within individuals both
within and across contrasts. Some individuals are better able to use the
acoustic-phonetic information in the signal than others. This was only
weakly correlated with hearing in noise, suggesting that hearing in noise
versus in quiet taps different abilities.
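For illustration (the abstract does not state the authors' exact procedure): one common way to derive per-listener cue weights from 2AFC responses is a logistic regression of the response on the two standardized stimulus dimensions, taking coefficient magnitudes as weights. The sketch below simulates one listener; the stimulus grid, effect sizes, and response rule are all invented.

    # Hedged sketch: cue weights as standardized logistic-regression
    # coefficients; the listener and stimulus grid are simulated.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.integers(1, 8, size=(200, 2)).astype(float)  # two 7-step continua
    # Simulated listener weighting cue 1 (e.g. sibilant spectrum) most heavily.
    p = 1 / (1 + np.exp(-(1.5 * (X[:, 0] - 4) + 0.3 * (X[:, 1] - 4))))
    y = rng.random(200) < p  # 2AFC responses ('shock' = True)

    model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)
    w1, w2 = np.abs(model.coef_[0])
    print(f"cue weights: {w1:.2f} vs {w2:.2f} (share: {w1 / (w1 + w2):.2f})")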
Poster 12
Comparing the impact of hearing aid algorithms
for neural auditory learning
Giroud N.1,2,3, Lemke U.4, Meyer M.1,2,3,5
1 Neuroplasticity and Learning in the Healthy Aging Brain, University of Zurich
2 International Normal Aging and Plasticity Imaging Center, University of Zurich
3 University Research Priority Program "Dynamics of Healthy Aging"
4 Science & Technology, Phonak AG, Stäfa
5 Cognitive Neuroscience, University of Klagenfurt
Non-linear frequency compression (NLFC) is an alternative sound processing
strategy in hearing aids that aims to improve the audibility of high-frequency
sounds. NLFC transfers speech-relevant acoustic information from the high
frequency spectrum (> 5 kHz), typically lost in individuals with hearing loss,
to the lower frequency range. So far, little is known about the individual
neural effects of such signal-processing strategies in older adults with
hearing loss. In the present study we investigated behavioral and neural
differences between older adults using hearing aids with or without
activated NLFC during auditory learning. In a longitudinal EEG setting (three
times at an interval of two weeks), we set out to examine the dynamic
interplay of bottom-up and top-down processes during an auditory oddball
paradigm. The stimuli comprised logatomes consisting of natural high-pitched fricatives (/sh/ or /s/), set in the problematic high-frequency
spectrum, embedded between two /a/ sounds. Three participant groups
with an age range (60-77 years) typical for hearing aid users were tested:
Group 1 was moderately hearing impaired and fitted with hearing aids using
non-linear frequency compression (NLFC), while group 2 used the same type
of hearing aids with NLFC switched off. Group 3 represented normal-for-age hearing individuals as controls. Hearing aid users with NLFC
demonstrated a more efficient top-down related neural processing
compared to users of hearing aids without NLFC. Auditory learning was in
general more related to bottom-up processes, rather than driven by top-down processes, as was the case for younger adults (Giroud et al., in
prep.). Moreover, the two signal processing algorithms differ in that they
support auditory learning at different neural stages of speech perception.
Poster 13
An individual differences approach to semantic and phonological effects in
reading, repetition and verbal short-term memory
Nicola Savill & Beth Jefferies
University of York
According to Primary Systems accounts of language processing (e.g.,
Patterson & Lambon Ralph, 1999; Jefferies, Sage & Lambon Ralph, 2007),
linguistic abilities such as word reading and repetition can be understood in
terms of the integrity and function of underlying visual, phonological and
semantic systems that are important for both language-based and non-language-based tasks. This view has developed out of work with patients but
lends itself to investigation of cross-task comparisons in healthy individuals.
We took an individual differences approach, employing a sample of healthy
adults to investigate the extent to which sensitivities to linguistic
manipulations in short-term memory might relate to effects in single word
reading and repetition, as well as performance in a range of broader
linguistic and semantic measures. We present preliminary analyses
demonstrating that individuals showing greater effects of a semantic
variable, imageability, in a synonym judgement task also showed larger
effects of this variable in speeded single word repetition and in rates of
phonological errors for high imageability words in immediate serial recall.
These findings support and constrain language-based views of verbal short-term memory.
Poster 14
Eye Movements as a Measure of On-Line Spoken Sentence
Comprehension: Are older adults truly slower?
Nicole D. Ayasse, Amanda Lash, Arthur Wingfield
Brandeis University
Aging is associated with overall slower processing and oftentimes hearing
loss. Therefore, the very fast rate at which speech is ordinarily delivered
poses everyday difficulties for older adults. In the literature, spoken
language processing in older adults has traditionally been studied using off-line techniques, such as after-the-fact recall or comprehension testing of
sentences or narratives. This study’s goal is to determine the on-line time
course of sentence comprehension by examining older adults with good and
poor hearing acuity and comparing their performance to younger adults.
Participants heard recorded sentences that raised the contextual probability
of the sentence-final word (target word) to varying degrees. This target
word was always the name of a picturable object. Participants were
presented with four pictures of objects on a screen, one of which
corresponded to the target word. Participants were instructed to use a
computer mouse to move a cursor over the target picture as quickly as
possible as the sentence unfolded and click in order to select this picture.
Target picture selections were slower for older adults than for younger
adults (p < .01). However, no effect of age was found in the timing of eye
fixations to the correct target picture. This relative absence of an age
difference for eye fixations indicates that the difference observed in target
selections is due to age-related slowing in the motor and decision-making
systems, and not due to slowed comprehension. (Work supported in part by
NIH grant R01 AG19714.)
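By way of illustration, the dissociation logic of this design can be sketched as follows; the group sizes and millisecond values below are invented for the example and are not taken from the study:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Invented data: click responses slow with age, fixation times do not
    young_click = rng.normal(1100, 150, 30)   # ms, younger adults
    older_click = rng.normal(1300, 150, 30)   # ms, older adults
    young_fix = rng.normal(600, 100, 30)
    older_fix = rng.normal(610, 100, 30)

    for label, young, older in [("target selection (click)", young_click, older_click),
                                ("first fixation", young_fix, older_fix)]:
        t, p = stats.ttest_ind(young, older)
        print(f"{label}: t = {t:.2f}, p = {p:.3f}")

An age effect on the click measure combined with no age effect on the fixation measure is the pattern that points to motor and decision-making slowing rather than slowed comprehension.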
Poster 15
Examining the relation between articulatory skills and speaking fluency
Nivja de Jong1, Natalia Fullana2, Joan-Carles Mora2
1 Utrecht University
2 University of Barcelona
Individual differences with respect to fluency (e.g. pausing, speed) exist
when speaking in the native language (L1) as well as the second language
(L2). This study investigates to what extent L1 and L2 fluency in spontaneous
speech can be explained by individual differences in articulatory skills, such
as the speed with which an individual can achieve the articulatory
targets in the production of sound sequences.
A group of EFL learners (n=45) performed three semi-spontaneous speaking
tasks in their L1, Spanish, and three similar speaking tasks in their L2,
English. In addition, all participants performed two articulatory skill tasks.
The first of these tasks, the diadochokinetic (DDK)-task, measured
participants’ speed at moving their articulators. The second task, a delayed
picture naming task, carried out in both the L1 and the L2, measured the
speed at which participants could articulate their speech plans.
The results show that, replicating earlier studies (Derwing et al., 2009; De
Jong et al., 2015), fluency in the L2 can largely be predicted by the same
fluency measures in the L1. This finding confirms the hypothesis that
measures of fluency are, for a substantial part, measures of personal
speaking style, independent of language proficiency. The analyses also show
that speakers’ individual articulatory skills are only weakly related to
measures of fluency. The results therefore suggest that individual
differences in fluency do not originate from individual differences in
articulatory skills and hence must originate elsewhere in the speech
production process.
Poster 16
The role of the mental lexicon for speech recognition in
different acoustic settings
Rebecca Carroll1,2, Anna Warzybok1,3, Esther Ruigendijk1,2
1 Cluster of Excellence ‘Hearing4all’, University of Oldenburg
2 Institute of Dutch Studies, University of Oldenburg
3 Department of Medical Physics & Acoustics, University of Oldenburg
Older adults often underperform in speech recognition tests compared to
younger adults, especially when speech is presented in background noise. For
listeners with normal hearing, we recently showed that recognition scores
for a German speech-in-noise test were predicted by age group (younger vs.
older), signal-to-noise ratio (SNR), vocabulary size, and lexical access time
(Carroll et al., submitted). For both younger and older listeners, a
combination of vocabulary size and lexical access time predicted speech
recognition scores at low SNRs.
Here, we investigate whether the same predictors explain speech
recognition scores of the same group of normal-hearing listeners in other
acoustic settings. We
tested a group of 22 younger adults (18-35 yrs.) and a group of older adults
(60-78 yrs.), all native listeners of German. Speech recognition scores were
tested using the Göttingen Sentence Test in four more acoustic conditions:
reverberated speech with reverberation times of 3.24 and 2.03 s, a
combination of reverberation (3.25 s) and noise (7 dB SNR), and
interrupted speech with an interruption rate of 2.50 Hz. Individual
differences were captured with respect to averaged pure tone hearing
levels, working memory, vocabulary size, and lexical access time. Speech
recognition scores and lexical access times were worse for the older
compared to the younger group. Yet, the role of individual differences was
not as clear as previously found for speech in noise (Carroll et al.,
submitted). Our results can inform psycholinguistic models of speech
recognition about the roles of age and individual differences, but also about
the type of acoustic setting.
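By way of illustration, the two degraded-speech manipulations above that do not involve reverberation can be sketched as follows. This is a minimal numpy sketch, not the authors' stimulus-preparation code; the function names and the 50% duty cycle are assumptions:

    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        # Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
        return speech + scale * noise

    def interrupt(speech, rate_hz, fs, duty=0.5):
        # Gate the signal on and off with a square wave at rate_hz
        t = np.arange(len(speech)) / fs
        gate = ((t * rate_hz) % 1.0) < duty
        return speech * gate

For instance, interrupt(signal, 2.5, 44100) would gate a 44.1 kHz recording at the 2.50 Hz interruption rate used above, and mix_at_snr(signal, noise, 7.0) would add noise at 7 dB SNR.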
Poster 17
Phonetic reduction in production and perception:
Insights from variation in theory of mind
Rory Turnbull
Laboratoire de Sciences Cognitives et Psycholinguistique, École Normale Supérieure
Theory of Mind (ToM) is the ability to impute mental states to others, and
manifests itself in a diverse array of skills, including anticipating the thoughts
and emotions of others, understanding intentions, and distinguishing mental
and physical causation. While all neurotypical adults possess some degree of
ToM, studies have shown that adults vary in how well developed their ToM
is, and the extent to which they are able to successfully apply their ToM in
social situations. Some of this variation may have linguistic consequences,
especially in light of “listener-oriented” models of speech production which
require the talker to model, on some level, the knowledge state of their
interlocutor. This poster presents an overview of recent work on
interactions between variation in theory of mind in neurotypical adults and
phonetic reduction, in both perception and production. As a ToM deficit has
been argued to be one of several components of the autism phenotype,
research focusing on linguistic consequences of variation in autistic traits is
also discussed. In a production study, repetition (or “second mention”)
reduction was not found to be influenced by variation in talker ToM.
Reduction due to contextual semantic predictability, however, was found to
vary in magnitude as a function of talker ToM. In a series of perception
studies, ToM was not observed to influence reduction perception, although
one experiment suggested that autistic traits may impede the processing of
reduced speech. Taken together, these data call for greater complexity in
our theories of language production and perception.
Poster 18
Parts or wholes: On the individual differences in the production of
multi-word units in Dutch
Saskia Lensink
Leiden University
There is a growing body of psycho- and neurolinguistic research suggesting
that we store frequent sequences of multiple words as wholes (e.g. Arnon &
Snider, 2010; Tremblay & Baayen, 2010; Tremblay & Tucker, 2011),
corroborating predictions of usage-based models of language (Dąbrowska,
2014; Goldberg, 2003). However, these same usage-based models also
predict that there is a significant amount of individual variation in the way
individual speakers represent and process language.
So far, the cognitive reality of multi-word units has been investigated only
in English. Moreover, it is not clear to what extent individuals differ in when
and how much they make use of multi-word units during language processing.
Therefore, we set out to see whether we could find effects of units larger
than the word in the production of Dutch multi-word units. In a series of
experiments, participants read out loud highly frequent trigrams. In order to
gauge individual differences in the extent to which individuals chunk up
their language, data on working memory span and fluid intelligence
(Unsworth & Engle, 2005) was also collected.
In our analysis we made use of generalized additive mixed modelling
(GAMM; Wood, 2006), a type of mixed-effects regression model that can
capture non-linear relationships between predictors and outcomes. The
poster will focus on which units of storage are the best predictors of both
onset latencies and production durations, and to what extent individual
differences exist and are attributable to differences in working memory
capacity and fluid intelligence.
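GAMMs are typically fitted with the mgcv package in R (Wood, 2006). As a loose illustration of the modelling idea only, the sketch below fits a plain GAM to simulated naming data with Python's pygam, with a factor term crudely standing in for by-participant random intercepts; all variable names and values are invented:

    import numpy as np
    from pygam import LinearGAM, s, f

    rng = np.random.default_rng(0)
    n = 500
    log_freq = rng.uniform(2, 8, n)      # simulated log trigram frequency
    wm_span = rng.normal(0, 1, n)        # simulated working memory span
    subj = rng.integers(0, 20, n)        # 20 simulated participants

    # Simulated onset latencies: frequent trigrams are named faster,
    # with a crude per-participant offset
    latency = 600 - 20 * log_freq + 5 * subj + rng.normal(0, 30, n)

    X = np.column_stack([log_freq, wm_span, subj])
    gam = LinearGAM(s(0) + s(1) + f(2)).fit(X, latency)
    gam.summary()

Smooth terms s(0) and s(1) let the effects of frequency and working memory be non-linear, which is the key advantage of the GAMM approach over ordinary linear mixed models.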
Poster 19
Verbal WM Capacities in Sentence Comprehension:
Evidence from Aphasia
Yingying Tan1, Randi Martin2, & Julie Van Dyke3
1 Max Planck Institute for Psycholinguistics
2 Rice University
3 Haskins Laboratories
Recent studies suggest that during sentence comprehension, readers link
non-adjacent constituents through an associative, direct-access mechanism,
while working memory may not play an important role in this operation.
However, a study with healthy young subjects found evidence for a role for
semantic short-term memory (STM) in resolving interference from
semantically related nouns, and a role for attentional control in resolving
syntactic interference (Tan et al., 2011). These results were interpreted as
suggesting separable STM capacities for semantic and syntactic processing,
and a role of attentional control in syntactic processing. The current study
examines interference resolution in aphasic patients with dramatic variation
in their STM and attentional control capacities. Ten aphasic patients were
assessed on self-paced sentence reading times and on the time and accuracy
with which they answered comprehension questions. For log-transformed reading times,
patients performed within the range of age-matched controls. For
comprehension accuracy, we observed evidence convergent with that
obtained from the healthy young subjects: patients with relatively better
semantic STM showed less difficulty in semantic interference resolution (r =
-.77, p = .045, after controlling for verbal knowledge), while patients with
better attentional control showed less difficulty in syntactic interference
resolution (r = -.93, p < .001). In both studies, phonological STM did not
relate to either type of interference. These results suggest that individuals
vary in the decay rate for different types of verbal information: poor
maintenance of semantic information leads to difficulties in resolving
semantic interference, and poor attentional control leads to difficulties in
resolving syntactic interference.
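The correlations reported above are partial correlations controlling for verbal knowledge. A minimal sketch of that computation on invented scores (regress the covariate out of both variables, then correlate the residuals):

    import numpy as np
    from scipy import stats

    def partial_corr(x, y, covar):
        # Correlate x and y after removing the linear effect of covar from each
        def resid(v):
            slope, intercept = np.polyfit(covar, v, 1)
            return v - (slope * covar + intercept)
        return stats.pearsonr(resid(x), resid(y))

    rng = np.random.default_rng(0)      # invented scores for ten patients
    semantic_stm = rng.normal(0, 1, 10)
    verbal_knowledge = rng.normal(0, 1, 10)
    interference_cost = -0.8 * semantic_stm + rng.normal(0, 0.5, 10)

    r, p = partial_corr(semantic_stm, interference_cost, verbal_knowledge)
    print(f"partial r = {r:.2f}, p = {p:.3f}")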
4
General information
Map
[Campus map showing the MPI, the Grotius building (Het Gerecht), and the Erasmus building (De Refter)]
Lunch
Max Planck Institute - canteen:
Soup, sandwiches, salad
Grotius building - Het Gerecht (≈ 2 min walk):
Wok or pasta meals, pancakes (Fridays only), sandwiches, salad
(payments can only be made by debit card)
Erasmus building - De Refter (≈ 10 min walk):
Restaurant with daily changing hot meals (always one vegetarian option), soup,
sandwiches, salad bar, desserts, snacks